
AI now generates 51% of all spam emails. That figure isn't cause for alarm as long as AI-generated spam remains text-based, according to Columbia University associate professor Asaf Cidon.
The finding comes from a new Barracuda study conducted with researchers from Columbia University and the University of Chicago.
Attackers are using AI to reduce typos and bad grammar
Two additional insights from the Barracuda study stand out:
- Attackers are A/B testing AI-generated scam variations to refine their messages for maximum impact.
- Rather than making messages more emotionally manipulative or urgent, AI appears to be used mainly to churn out high volumes of clean, typo-free text.
“The results show that currently, attackers are primarily using AI to evade spam filters and to reduce grammatical errors and typos,” Wei Hao, a computer science PhD student at Columbia University and the study’s lead author, told TechRepublic.
Hao’s advisor, Cidon, an associate professor at Columbia University, added that the researchers were pleasantly surprised to find they could accurately estimate how often cybercriminals use AI in spam. “So far, almost all the research done on this topic has been extremely anecdotal and speculative,” Cidon noted.
More effective BEC-style attacks expected
Right now, no one should be concerned about AI-generated spam, as long as it remains text-based, he said.
The study notes that AI’s presence in targeted business email compromise (BEC) attacks is rising, though still at 14% for now.
However, the rapid adoption of AI by attackers is alarming, Cidon stressed, “especially given the exponentially decreasing cost and increasing efficiency of multimodal models,” and notably, “the recent rise of very cheap and very efficient voice cloning and text-to-voice models.”
Once those models are widely adopted by attackers, Cidon said, “I am afraid we will see much more effective impersonation/BEC-style attacks.”
How testing was conducted in the study
To differentiate AI-generated messages from human-written content, researchers trained a model on pre-ChatGPT spam data and used it as a benchmark to detect AI-generated emails in a real-world sample. The researchers then applied the model to a large dataset of malicious emails from early 2022 to April 2025, tracking how tone and structure shifted after generative AI became mainstream.
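The detection approach described above can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code or data: a classifier is trained to distinguish pre-ChatGPT (human-written) spam from known AI-generated text, then applied to later emails to score how AI-like they read. The toy examples and model choice (TF-IDF plus logistic regression) are assumptions for demonstration only.

```python
# Hypothetical sketch of the study's methodology, not Barracuda's actual pipeline:
# train on pre-ChatGPT spam vs. AI-generated text, then score newer emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in corpora; the study used real malicious emails from 2022-2025.
human_spam = [
    "u have won a prize claim now!!! click here fastt",
    "cheap meds no perscription needed buy today",
]
ai_generated = [
    "We are pleased to inform you that your account qualifies for an exclusive reward.",
    "Please review the attached invoice at your earliest convenience and confirm payment.",
]

texts = human_spam + ai_generated
labels = [0] * len(human_spam) + [1] * len(ai_generated)  # 0 = human, 1 = AI-like

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new (post-ChatGPT) email: probability it reads as AI-generated.
new_email = ["Kindly verify your account details to ensure uninterrupted service."]
prob_ai = model.predict_proba(new_email)[0][1]
print(f"Estimated probability the email is AI-generated: {prob_ai:.2f}")
```

Applied across a large dataset over time, scores like this can be aggregated into the prevalence estimates the researchers report.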
Read our coverage of rising cyberattacks and Check Point’s analysis to learn how threat actors are evolving in the age of AI-powered malware.