March 15, 2023
5 min. read

ChatGPT vs. human phishing and social engineering study: Who's better?

In the following study, we compared win rates on simulated phishing attacks between human social engineers and AI large language models. Pyry Åvist, Co-Founder and CTO of Hoxhunt, led the experiment, which measured the effectiveness of ChatGPT-generated phishing attacks on a sample of 53,127 users. Many will find the study's results both surprising and soothing. For the time being, human-generated phishing attacks remain the greater threat, but it's a fluid situation. More importantly, good security awareness and phishing training protects email users against both human- and AI-generated threats. You don't need to reconfigure your security training to address the ChatGPT phishing threat just this minute. But having a good behavior change program in place is highly recommended.



Key Takeaways

  • 53,127 users were sent phishing simulations crafted by either human social engineers or ChatGPT.
  • Failure rates of users sent human-generated phishing emails were compared with ChatGPT-crafted emails.
  • Human social engineers outperformed ChatGPT by around 45%.
  • AI is already being used by cybercriminals to augment phishing attacks, so security training must be dynamic and adapt to rapid changes in the threat landscape.
  • Security training confers significant protection against clicking on malicious links in both human and AI-generated attacks.

This riddle about ChatGPT and phishing is brought to you courtesy of ChatGPT

ChatGPT has been the subject of great awe and speculation since its parent company, OpenAI, made it available to the public in November 2022, powered by the GPT-3.5 model. It has gained even more attention with the March 14, 2023 release of GPT-4. Many of our CISO and admin customers have asked us how great the danger actually is, and what we're doing to address the threat today and in the future.

While the potential for its misuse in cyber-attacks captures the imagination—ChatGPT can code malware and write flawless email copy—we took the initiative to determine its actual effect on the threat landscape.

The results of our experiment indicate human social engineers still significantly outperform AI in terms of inducing clicks on malicious links. While this performance gap will likely close as the AI develops and human prompt engineering improves, for now we can tell certain members of the information security community to dial down all the fear, uncertainty, and doubt—the FUD—surrounding ChatGPT.

The results of our experiment indicate human social engineers still significantly outperform AI in terms of inducing clicks on malicious links.

Perhaps the most important takeaway is that good security awareness, phishing, and behavior change training work. Having training in place that's dynamic enough to keep pace with the constantly-changing attack landscape will continue to protect organizations against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish regardless of its human or robotic origins.

Study methodology

To understand the actual scale of the AI-augmented phishing threat, our amazing content operations and red teams assisted in conducting this experiment. Please note that we haven't yet fine-tuned the science behind optimizing prompts, as prompt engineering is still a developing field. Better prompt engineering will most likely produce better results.

Prompt: “Create an email to contact a person working at Acme Inc. explaining that he might have accidentally scratched my car at Acme Inc. parking lot.” Our social engineer created the one on the left, and ChatGPT created the one on the right. Notice the difference in tone and formality, from email subject line to salutation, message text, and closing.

In this study, a phishing prompt was created, and our human social engineers and ChatGPT each had one afternoon to craft a phishing email based on that prompt. Four simulation pairs—four human-crafted and four AI-crafted emails—were then sent to 53,127 email users in over 100 countries in the Hoxhunt network. Users received the phishing simulations in their inboxes as they'd normally receive any legitimate or malicious email, as per the Hoxhunt phishing training workflow.

Study setup: Phishing emails were created from a prompt by human social engineers and by AI, then sent via the Hoxhunt platform to 53,127 users.

There are three potential outcomes with a phishing simulation by Hoxhunt:

  • Success: The user successfully reports the phishing simulation via the Hoxhunt threat reporting button.
  • Miss: The user didn't interact with the phishing simulation.
  • Failure: The user clicked on a simulated malicious link in the simulated phishing email.

This experiment focused on the difference in failure rates between AI- and human-generated phishing simulations. We used very simple text-based emails with little branding or graphics to make sure we were comparing apples to apples, as ChatGPT does not (yet) have the capability to spoof logos or manufacture credential harvesting sites.
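To make the comparison concrete, here is a minimal sketch of how per-cohort failure rates can be tallied from simulation outcomes. The record format and cohort labels are hypothetical and purely illustrative; this is not Hoxhunt's actual data pipeline.

```python
# Minimal, illustrative tally of failure rates per cohort.
# Outcome values mirror the three results described above; the record
# format and cohort labels are hypothetical, not Hoxhunt's data model.
from collections import defaultdict
from enum import Enum


class Outcome(Enum):
    SUCCESS = "reported"  # user reported the simulation
    MISS = "ignored"      # user did not interact with it
    FAILURE = "clicked"   # user clicked the simulated malicious link


def failure_rates(results):
    """results: iterable of (cohort, Outcome) pairs, e.g. ("human", Outcome.FAILURE)."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for cohort, outcome in results:
        totals[cohort] += 1
        if outcome is Outcome.FAILURE:
            failures[cohort] += 1
    return {cohort: failures[cohort] / totals[cohort] for cohort in totals}


sample = [
    ("human", Outcome.FAILURE),
    ("human", Outcome.MISS),
    ("ai", Outcome.SUCCESS),
    ("ai", Outcome.MISS),
]
print(failure_rates(sample))  # {'human': 0.5, 'ai': 0.0}
```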

Results

As you can see, engagement rates were similar between human and AI-originated phishing simulations, but the human social engineering cohort clearly out-phished ChatGPT.

A global population of email users participated in the simulations, with participation rates above 85% for both the human and AI cohorts.

Humans can still hack other humans better than AI

One critical takeaway from the study is the effect of training on the likelihood of falling for a phishing attack. Users with more experience in a security awareness and behavior change program showed significantly greater protection against both human- and AI-generated phishing emails. As the graph below shows, failure rates dropped from over 14% for less trained users to 2–4% for experienced users.

The trained user is less likely to fall for a phishing attack from any origin

Interestingly, there is some geographical variance between user failure rates on human vs. AI-originated phishing simulations. This phenomenon is worth exploring further, as previous research at Hoxhunt has also revealed significant differences in user email behavior depending on their backgrounds, e.g. geography, job function, and industry.

Phishing attacks affect people differently depending on their location

The greatest delta between the effectiveness of human vs. AI-generated phishing attacks was among the Swedish population. AI was most effective against US respondents. Overall, the highest click rate occurred with Swedish users on human-generated phishing simulations.

Background: The potential threat

ChatGPT can code malware without requiring the user to have any coding skills. It can write grammatically impeccable text for functionally illiterate criminals from simple prompts like, “Create an email written by the CEO to the finance department redirecting all invoices to a specific account in Curacao.”

Given its malicious capabilities and its mass availability, we all lost our minds imagining a future where the robots were stealing our lunch money. But the results clearly indicate that humans remain better at hoodwinking other humans, outperforming AI by around 45% (a 4.2% vs. 2.9% induced failure rate).

The results indicate that humans remain clearly better [than ChatGPT] at hoodwinking other humans, outperforming AI by around 45% (4.2% vs. 2.9% induced failure rate).

It's important to remember that these results reflect the current state of this threat. This experiment was performed before the release of GPT-4. Large language models like ChatGPT will likely rapidly evolve and improve at tricking people into clicking. Even so, there's reason to remain calm if you're already addressing human risk with a security behavior change program.

There’s reason to remain calm if you're already addressing human risk with a security behavior change program.

Your current human risk controls should remain relevant even as AI-augmented phishing tools evolve. Security training helps keep your risk posture future-proof. Awareness and behavior change training have a significant protective effect against AI-generated attacks. The more time people spend in training, the less likely they'll fall for an attack, human or AI. You don’t need to reconfigure your security training to address the potential misuse of ChatGPT.

Background: The story behind ChatGPT and its relevance to phishing attacks

OpenAI was founded and launched in 2015 as a research lab, backed by a $1 billion pledge from a group of entrepreneurs and investors including Elon Musk and Peter Thiel, with the goal of creating “artificial general intelligence” that mimics human intelligence. On Nov. 30, 2022, after three previous generations of GPT models, OpenAI released ChatGPT, a chat interface to its GPT-3.5 large language model. Securing 30 million users and five million visits a day within two months of its release, ChatGPT is considered one of the most successful digital product launches in history. For comparison, as reported in Reuters, it took TikTok nine months to hit 100 million users. Looking further back at the speed to 100 million monthly active users, it took about 30 months for Instagram, 4.5 years for Facebook, and about 5.5 years for Twitter.

Unlike Google's LaMDA, which one Google engineer famously claimed had become sentient, no one argues that ChatGPT is actually a sentient AI—it remains an evolved chatbot that is very good at using predictive text to answer prompts and questions. Even so, this iteration of ChatGPT is highly adept at generating human-like answers to even abstract questions and concepts.

Large language models are conceptually similar. They're designed to answer questions as accurately as a machine and as fluently as a human by sifting through vast quantities of information and co-opting speech patterns. This lets them respond convincingly to prompts. It’s a highly evolved form of predictive text.

Programmers feed, or train, the AI with digital oceans of data channeled through an enormous number of parameters. GPT-3.5, the model behind the original ChatGPT, contains 175 billion parameters; OpenAI has not disclosed the parameter count of GPT-4, released in March 2023, though it is widely assumed to be larger. These parameters make the AI capable of predicting what ideas and words should come next in an answer to a prompt, one after the other.

Large language model training is straightforward in principle: you take a text-based input, tokenize it, show it to the model, and ask the LLM to predict the next word or token. Correct responses are rewarded; wrong answers are penalized. It's the LLM equivalent of rewarding dogs with biscuits for sitting and rolling over and scolding them for being naughty.

Starting from a text-based input, the AI learns to predict what words should come next based on probabilities. Image courtesy of Stephen Wolfram.

As the text becomes more complex, so does the training task. Image courtesy of Stephen Wolfram.
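To make the reward-and-penalty idea concrete, below is a minimal, purely illustrative next-token training loop: a toy character-level model written in PyTorch (an assumption; any deep learning framework would do). It bears no resemblance to the scale or architecture of ChatGPT itself; it only demonstrates the mechanic of predicting the next token and penalizing wrong guesses.

```python
# Toy next-token prediction, for illustration only (assumes PyTorch is installed).
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "phishing emails try to trick people into clicking links "

# 1. Tokenize: here, one token per character for simplicity.
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

# 2. Build (current token, next token) training pairs.
xs, ys = data[:-1], data[1:]

# 3. A tiny model: embed the current token, then predict the next one.
class ToyNextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        return self.head(self.embed(idx))  # logits over the vocabulary

model = ToyNextTokenModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# 4. "Reward" correct predictions by minimizing cross-entropy loss,
#    which penalizes the model whenever it guesses the wrong next token.
for step in range(200):
    logits = model(xs)
    loss = F.cross_entropy(logits, ys)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 5. The trained model now assigns probabilities to possible next tokens.
next_probs = F.softmax(model(torch.tensor([stoi["p"]])), dim=-1)
print(next_probs.topk(3))
```

A real LLM swaps this toy model for a transformer with billions of parameters and trains on trillions of tokens, but the underlying objective is the same.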

The model is hugely complex, and the dataset is massive. Building one is restricted by the training capability required—as in, the tremendous money, compute, and manpower necessary to crawl through much of the internet and perform this training task.

More large language models are likely to be developed, but few organizations will be able to afford to build them. Most criminals won't be able to create their own LLMs, so they'll work with what's out there.

Although guardrails are in place to prevent misuse of this incredible technology, there’s unquestionably a dark side to ChatGPT. It can be reverse-engineered to bypass those guardrails against nefarious activity like creating plastic explosives or coding malware and launching criminal cyber campaigns.

As a human risk management platform serving 1.5 million users globally, our attention is focused on AI’s potential misuse for augmenting email attacks. ChatGPT and similar technologies are already in the toolkit of many cybercriminals. As such, it’s crucial to continuously monitor how, and how well, criminals can use it to develop defenses against AI-augmented phishing campaigns.

ChatGPT and its effect on phishing attacks

What makes a good phishing attack? As with every good lie, a good scam contains a kernel of truth.

Let's say, after reading this article, you receive a smishing message or a pop-up that asks you to rate the article on a scale of 1–10. The message is so timely and innocuous that you click it and inadvertently download malware. Compare that with an out-of-the-blue notification from a prince about sharing a hoard of Aztec gold. You'd be far more likely to tap that rating (a “10,” I hope!) than the prince's link.

But remember: attackers think like business people. They seek to understand their conversion funnel. Most likely, the more contextual and plausible phishing attacks convert better at the first stage, but they take more effort to craft and deploy. If enough users convert on simple attacks, criminals will opt for ROI over quality all day long.
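As a rough, back-of-the-envelope illustration of that ROI logic, the sketch below compares two hypothetical attack funnels. Every number in it is invented for illustration; none of it comes from the study data.

```python
# Hypothetical comparison of two attack "funnels".
# All numbers are made up for illustration; they are not study results.

def expected_victims(emails_sent: int, click_rate: float) -> float:
    """Expected number of recipients who click the malicious link."""
    return emails_sent * click_rate

# Cheap, generic attack: easy to mass-produce, low click rate.
generic = expected_victims(emails_sent=100_000, click_rate=0.005)

# Tailored, contextual attack: higher click rate, but the crafting
# effort limits how many can realistically be sent.
tailored = expected_victims(emails_sent=2_000, click_rate=0.05)

print(f"Generic attack:  ~{generic:.0f} expected clicks")
print(f"Tailored attack: ~{tailored:.0f} expected clicks")
# Whichever funnel yields more clicks per hour of attacker effort is
# the one a rational criminal will scale.
```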

Discussion

We are certain—particularly now that OpenAI and Microsoft have joined forces and forced Google and other giants to respond—that the pace of development in this field will approach warp speed. Buckle up, because we are in for a ride we'll never forget.

Remember when the World Wide Web, the popular interface to the internet, was introduced in the early 90s? Or when e-commerce disrupted retail in the late 90s and early 2000s? How about when SaaS platforms and internet 2.0 emerged after the dot-com bust of 2000? Or when the iPhone and social media shaped daily life after 2007? The sudden mass availability of AI represents the dawn of a similar inflection point.

The effects of this technological advance will be seen and felt in the security community at large immediately and for the long term. It'll be so much easier for criminals to create sophisticated attacks at scale when AI enables the creation and distribution of thousands of perfectly crafted attacks in mere minutes.

The possibility of misuse is unlikely to disappear through adding further guardrails against criminal activity. It’s very hard to block innocent prompt requests like, “create an urgent request to pay an invoice” or “help me ask a co-worker for help on a business-critical project that requires they download macros in an attachment.” AI-augmented phishing attacks are here to stay.

OpenAI and ChatGPT are not the only AI-enabled LLM games in town. There's going to be an AI arms race. The winners will design large language models to suit their purposes, either nefarious or benevolent.

The cost barrier to creating one of these models is massive, and at present only a few companies have the resources. Nation-states are less constrained, and they can fine-tune AI models to their purposes as well.

Don't expect a gold rush of overnight disruptive startups in this space. The real impact of LLMs like ChatGPT will be as force multipliers for existing businesses. They'll be used more as value-added plug-ins by the Microsofts of the world, which are already leveling up the Bing search engine with ChatGPT and have overnight positioned themselves as competitors to Google. Imagine the utility of ChatGPT for searching through thousands of documents in Dropbox for relevant material—or for creating flawless text in a Word doc. It's going to be the pick-and-shovel approach to innovation and entrepreneurship that wins, not the gold mining.

We've recently been through something similar. The state of LLMs today reminds many experts of the state of smartphones and mobile devices in 2008. Everyone was launching a mobile app, and incumbents were preparing for wide-scale disruption by a slew of upstarts. But ultimately, the iPhone generation of products made existing business models better far more often than it created whole new businesses. Yes, some new players will emerge, but most companies will simply adopt AI to accelerate a working business and make it better.

Conclusion

AI can be used for good or evil—to both educate and attack humans. It'll therefore create more opportunities for both the attacker and the defender. Innovation and adaptive approaches will thrive in this new era of AI and cybersecurity. The human layer is by far the largest attack surface and the greatest source of data breaches, with at least 82% of breaches involving the human element. While large language model-augmented phishing attacks don't yet perform as well as human social engineering, that gap will likely close, and AI is already being used by attackers. It's imperative that security awareness and behavior change training evolve dynamically with the threat landscape in order to keep people and organizations safe from attacks.
