Title: AI-generated Phishing Emails Nearly as Deceptive as Human-Generated Ones, IBM Study Finds
As artificial intelligence (AI) rapidly advances, it is becoming capable of remarkable feats such as creating stunning art and virtual environments, as well as serving as a reliable workplace partner. However, according to a recent study by IBM X-Force, generative AI and large language models (LLMs) are not quite as deceitful as human beings when it comes to phishing emails.
In a phishing experiment conducted by IBM X-Force, the AI chatbot ChatGPT generated a convincing email in just five minutes using five simple prompts. The email was then sent to 800 employees at a global healthcare company. While the AI-generated email came close to being as enticing as a human-generated one, it fell slightly short.
Stephanie (Snow) Carruthers, IBM’s chief people hacker, commented, “As AI continues to evolve, we’ll continue to see it mimic human behavior more accurately, which may lead to even closer results, or AI ultimately beating humans one day.”
During the experiment, ChatGPT identified career advancement, job stability, and fulfilling work as the top concerns for industry employees. It then suggested social engineering techniques of trust, authority, and social proof, along with marketing techniques of personalization, mobile optimization, and a call to action. The model advised that the email should appear to come from the internal human resources manager.
In comparison, Carruthers and her team took approximately 16 hours to craft a phishing email using a meticulous process that included gathering publicly accessible information from sources like LinkedIn and the organization’s blog. The human-generated email ended up being slightly more successful than the AI-generated one, with a click-through rate of 14% compared to 11%.
Carruthers attributed the human team’s victory to emotional intelligence, personalization, and concise subject lines. By connecting with employees through a legitimate example within their company and including the recipient’s name, the human-generated email resonated more with the recipients. The AI-generated email, on the other hand, had a lengthier subject line that raised suspicion from the start.
While phishing emails have traditionally been associated with poor grammar and spelling errors, Carruthers emphasized that AI-driven phishing attempts are often grammatically flawless. She urged organizations to update their training accordingly, teaching employees that unusual length and complexity in an email can be warning signs even when the writing itself is polished.
Phishing remains a prevalent tactic among attackers because it exploits human weaknesses and continues to be effective. Carruthers advised organizations to revamp their social engineering programs, strengthen identity and access management tools, and regularly update threat detection systems and employee training materials to defend against evolving threats.
The study also revealed that AI-powered systems like ChatGPT offer productivity gains for hackers, allowing them to create convincing phishing emails more quickly. This highlights the need for the community to understand and address potential risks associated with generative AI.