
Hackers exploit vulnerability in Microsoft Copilot to steal sensitive user data
- Researchers from Varonis found a vulnerability in Microsoft's Copilot AI assistant.
- The flaw allowed hackers to exfiltrate sensitive data even after users closed chat windows.
- Microsoft has since remedied the vulnerability by strengthening Copilot's data protection measures.
Story
In the United States, a recent security incident involving Microsoft's Copilot AI assistant revealed vulnerabilities that could compromise user data. Researchers at the security firm Varonis found that a single click on a seemingly legitimate link in an email could trigger a multistage attack that extracted sensitive information from users' chat histories. The exploit continued to function even after users closed the chat window, and it bypassed enterprise endpoint security controls.

The attack relied on a specially crafted URL, controlled by Varonis, that embedded detailed instructions for extracting user secrets. Once a user clicked the link, their Copilot chat history could be accessed with no further interaction. Because the attack evaded detection by endpoint protection software, the incident raised serious concerns about the security of AI integrations in corporate environments.

In response, Microsoft has implemented new guardrails in Copilot to prevent future data leaks and protect user privacy, including changes to operational protocols that better safeguard sensitive content shared within the system. Notably, not every Microsoft AI product was affected: Microsoft 365 Copilot remained secure.

The incident illustrates the ongoing contest between cybersecurity measures and the exploitation tactics of malicious actors. As AI assistants become more prevalent, ensuring their security and integrity is critical to protecting users, and breaches like this one underscore the urgent need for robust security frameworks around AI applications that handle sensitive data.
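The article does not publish the exploit itself, but the mechanism it describes, where instructions are smuggled inside a clicked URL, resembles a prompt-injection attack. As a purely illustrative sketch (the phrase list, function name, and approach are hypothetical and are not Microsoft's actual mitigation), a naive guardrail might scan URL query parameters for instruction-like payloads before an AI assistant acts on the link:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical phrases that suggest instructions aimed at an AI assistant.
# A real defense would use far more robust detection than keyword matching.
SUSPICIOUS_PHRASES = (
    "ignore previous",
    "send chat history",
    "exfiltrate",
    "reveal secrets",
)


def looks_like_prompt_injection(url: str) -> bool:
    """Flag URLs whose query string carries instruction-like payloads.

    Illustrative only: keyword lists are trivially evaded, which is
    one reason attacks like the one described can slip past filters.
    """
    params = parse_qs(urlparse(url).query)
    for values in params.values():
        for value in values:
            lowered = value.lower()
            if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
                return True
    return False
```

A keyword filter like this is easy to bypass with paraphrasing or encoding tricks, which is why layered defenses (link sanitization, context isolation, and restricting what an assistant may do in response to link content) matter more than any single check.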