Artificial intelligence agents, autonomous software that performs tasks or makes decisions on behalf of humans, are becoming increasingly prevalent in businesses. They can significantly improve efficiency by taking repetitive tasks, such as calling sales leads or handling data entry, off employees’ plates.
However, because AI agents can operate outside the user’s direct control, they also introduce a new security risk: Users may not always be aware of what their AI agents are doing, and these agents can interact with one another to expand the scope of their capabilities.
This is particularly problematic when it comes to identity-based threats. New research from security firm BeyondID has found that US businesses often allow AI agents to log in, access sensitive data, and trigger actions independently. Despite this, only 30% actively identify or map which AI agents have access to critical systems, creating a security blind spot.
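To make that blind spot concrete, the sketch below shows one way a team might begin such a mapping exercise: grouping access records by non-human principal and flagging any agent that has touched a critical system. It is a minimal Python illustration assuming a hypothetical export of access-log records tagged by principal type; the agent names, system names, and log format are invented for the example, not drawn from the BeyondID research.

```python
from collections import defaultdict

# Hypothetical access-log records: (principal, principal_type, system).
# In practice these would come from an identity provider or SIEM export.
ACCESS_LOG = [
    ("jsmith", "human", "crm"),
    ("invoice-agent", "ai_agent", "erp"),
    ("lead-caller-agent", "ai_agent", "crm"),
    ("invoice-agent", "ai_agent", "payments"),
]

# Systems the organisation considers critical (illustrative).
CRITICAL_SYSTEMS = {"erp", "payments"}

def map_agent_access(log):
    """Collect the set of systems each AI agent has accessed."""
    access = defaultdict(set)
    for principal, ptype, system in log:
        if ptype == "ai_agent":
            access[principal].add(system)
    return access

if __name__ == "__main__":
    for agent, systems in map_agent_access(ACCESS_LOG).items():
        critical = systems & CRITICAL_SYSTEMS
        flag = f" [CRITICAL: {', '.join(sorted(critical))}]" if critical else ""
        print(f"{agent}: {', '.join(sorted(systems))}{flag}")
```

Even a simple inventory like this surfaces the question the survey says most organisations are not asking: which agents can reach which systems, and who reviews that list.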
Top security threat related to AI agents
The survey of US-based IT leaders revealed that many are concerned about the security implications of introducing AI agents into workflows. The top threat on their minds, cited by 37% of respondents, is AI impersonation of users, a concern likely fuelled by the numerous high-profile impersonation scams that have resulted in substantial financial losses.
If agents are not properly secured, malicious actors can spoof or hijack them to mimic trusted behaviour, tricking systems or users into granting unauthorised access or executing harmful actions. Nevertheless, the BeyondID research revealed that only 6% of leaders consider securing non-human identities to be among their top security challenges.
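One common mitigation is to give each agent its own verifiable, short-lived, narrowly scoped credential, so that a spoofed or hijacked agent cannot act beyond what it was explicitly granted. Below is a minimal Python sketch using the open-source PyJWT library; the agent IDs, scope names, and secret handling are illustrative assumptions for the example, not a description of BeyondID’s recommendations.

```python
import datetime

import jwt  # PyJWT: pip install pyjwt

# Illustrative only; a real deployment would use a managed secret or key pair.
SECRET = "replace-with-a-managed-secret"

def issue_agent_token(agent_id, scopes, ttl_minutes=15):
    """Mint a short-lived, narrowly scoped credential for one named agent."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": agent_id,
        "scopes": scopes,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def authorize(token, required_scope):
    """Reject expired or invalid tokens and any request outside the grant."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.ExpiredSignatureError:
        return False, "token expired"
    except jwt.InvalidTokenError:
        return False, "token invalid"
    if required_scope not in claims.get("scopes", []):
        return False, f"scope '{required_scope}' not granted"
    return True, claims["sub"]

if __name__ == "__main__":
    token = issue_agent_token("lead-caller-agent", ["crm:read"])
    print(authorize(token, "crm:read"))        # allowed: within the grant
    print(authorize(token, "payments:write"))  # denied: scope not granted
```

The point of the short expiry and explicit scope list is accountability: every action traces back to a named agent, and a stolen token loses its value within minutes.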
“AI agents don’t need to be malicious to be dangerous,” the report states. “Left unchecked, they can become shadow users with far-reaching access and no accountability.”
This industry is particularly at risk from the security threat
The healthcare sector is particularly at risk, as it has rapidly adopted AI agents for tasks like diagnostics and appointment scheduling, yet it remains highly vulnerable to identity-related attacks. Of the IT leaders surveyed who work in healthcare, 61% said their business had experienced such an attack, while 42% said they had failed a compliance audit related to identity.
“AI agents are now handling Protected Health Information (PHI), accessing medical systems, and interacting with third parties often without strong oversight,” the researchers wrote.
Despite security risks, AI agents are becoming more powerful and popular
At the end of 2024, TechRepublic predicted that the use of AI agents would surge this year. OpenAI CEO Sam Altman echoed this in a January blog post, saying, “We may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Just this month, the CEO of Amazon hinted that future job cuts may result from the deeper integration of advanced AI agents.
OpenAI and Anthropic are both investing heavily in expanding the capabilities of their agentic products, with Altman touting their rapidly growing power. According to Gartner, 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024.
However, some organisations are unwilling to take the security risk; the European Commission, for instance, has banned the use of AI-powered virtual assistants during online meetings.
Want to safeguard your business’s AI agents? Read TechRepublic’s list of best AI security tools and guide to LLM vulnerability scanning, as well as tips on reducing the risk of shadow AI.