Cybercriminals are supercharging their attacks with the help of large language models such as ChatGPT, and security experts warn that they've only scratched the surface of artificial intelligence's threat-acceleration potential.

At last month's RSA Conference, cybersecurity expert Mikko Hyppönen sounded the alarm that AI tools, long used to help bolster corporate security defenses, are now capable of doing real harm. "We are now actually starting to see attacks using large language models," he said.

In an interview with Information Security Media Group, Hyppönen recounted an email he received from a malware writer boasting that he'd created a "completely new virus" using OpenAI's GPT that can create computer code from instructions written in English.


Maria Dinzeo

Maria Dinzeo is a San Francisco-based journalist covering the intersection of technology and the law, with a focus on AI, privacy and cybersecurity.