Cyberattacks are accelerating with AI's help

Cybercriminals are supercharging their attacks with the help of large language models such as ChatGPT, and security experts warn that they’ve only scratched the surface of artificial intelligence’s threat-acceleration potential.

At last month’s RSA Conference, cybersecurity expert Mikko Hyppönen sounded the alarm that AI tools, long used to help bolster corporate security defenses, are now capable of doing real harm. “We are now actually starting to see attacks using large language models,” he said.

In an interview with Information Security Media Group, Hyppönen recounted an email he received from a malware writer boasting that he’d created a “completely new virus” using OpenAI’s GPT that can create computer code from instructions written in English.

“It’s the first malware we’ve ever seen which uses ChatGPT to rewrite its code,” Hyppönen told Information Security’s Matthew Schwartz. “It calls GPT to write the code for it, which means every time it’s different, and it will be trivial to modify to write it in any other language.”

Although OpenAI can block malicious users from ChatGPT, Hyppönen said developers might eventually create malware with their own LLMs built in.

Though the resulting code is less sophisticated than malware created by human beings, LLMs have expanded the pool of would-be cybercriminals by enabling even novices to develop malicious software, some experts say.

Sergey Shykevich, lead ChatGPT researcher at cybersecurity company Check Point, said he’s seen hackers bragging in dark web forums about creating malware and ransomware using ChatGPT.

“What’s important is that ChatGPT allows everyone, even those with zero experience in coding, to develop that skill. Maybe in six months or something, it will be able to also create completely sophisticated malware that we’ve never before seen,” he said.

“Now, it mostly allows people who are not software developers to create malware. That makes the threat higher because at the end of the day there will be more malware criminals in the wild and more malware criminals will try and attack corporations.”

LLMs can also streamline phishing attacks by composing convincing emails impersonating trusted institutions such as a bank or the Internal Revenue Service.

“Take, for example, Russian cybercriminals, because most of them don’t speak English or their English levels are extremely low. Up until now they used what are called external vendors. They went to graduates of English literature at Russian colleges who wrote them phishing emails in English,” Shykevich said.

“Now ChatGPT allows them to make their operations much more cost-effective. They just write in ChatGPT, or other similar tools, ‘please write an email that looks like it comes from Chase Bank or any other bank’ and asks the recipient to provide their bank account information or Social Security number. The chatbot can then generate the email in perfect English,” he said.

Scott Giordano, general counsel of data loss protection company Spirion, said AI’s greatest strength is its ability to quickly pore over mountains of code to find and exploit weaknesses in a company’s network before defenders can fix them.

“That’s the sharpest point that AI has right now. If you’re a bad guy and you’re not using ChatGPT or other LLMs to go and find vulnerabilities, you’re probably not doing your job as a bad guy,” Giordano said.

“And the converse is true for the good guys. This is an arms race and both sides need to understand how to use this technology for their side,” he said.

The bad guys, Giordano said, currently have the advantage. “They only have to be right once. They only have to trick you once to start exploiting your network,” he said.

But companies can combat this by adding AI to their directory of enterprise risks, moving to a zero-trust architecture and training their employees to flag phishing scams or anything else that looks suspicious, Giordano said.

“The best cybersecurity measure ever invented is an alert employee,” he said.