Microsoft Copilot AI Exploited by Penetration Testers to Access Restricted Passwords
A recent security evaluation by penetration testing firm Pen Test Partners has exposed a troubling vulnerability in Microsoft’s Copilot AI for SharePoint: the AI-driven tool could allow attackers to bypass file restrictions and reach highly sensitive data, including stored passwords.
The revelation emerged after the security experts successfully used Copilot AI to retrieve the contents of a restricted password file stored alongside an encrypted SharePoint spreadsheet, highlighting a critical loophole in the protections organisations rely on.
How Pen Testers Leveraged Copilot AI
During a simulated cyber-attack, the penetration testing specialists from Pen Test Partners encountered an encrypted spreadsheet stored on Microsoft SharePoint alongside a file titled “passwords.txt”. Conventional attempts to open or download the files through the browser failed: the restrictions in place blocked direct access.
The testers then shifted their approach, employing the Copilot AI agent itself, a tool originally designed to simplify user tasks within SharePoint, to retrieve the content. Copilot bypassed the restrictions and printed the contents of the password file directly, giving the testers what they needed to access the sensitive encrypted data.
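The core of the finding is that the agent offers a retrieval path that browser-level restrictions do not cover, so it is worth verifying that file content, not just the SharePoint UI, is actually locked down. Below is a minimal sketch of such a check using the Microsoft Graph API; the token, drive ID, and item ID are hypothetical placeholders, and it assumes a delegated token for the test account.

```python
# Minimal sketch: check whether a "restricted" SharePoint file's raw content
# is actually blocked, rather than merely hidden in the browser UI.
# Assumptions (not from the article): GRAPH_TOKEN is a delegated access token
# for the test user; DRIVE_ID / ITEM_ID identify the restricted file.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
GRAPH_TOKEN = "<delegated-access-token>"  # hypothetical placeholder
DRIVE_ID = "<drive-id>"                   # hypothetical placeholder
ITEM_ID = "<item-id-of-passwords.txt>"    # hypothetical placeholder

headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

# Metadata is often still readable even when content access is restricted.
meta = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}", headers=headers)
print("metadata request:", meta.status_code)

# The raw content request is what a download restriction should reject (403).
content = requests.get(
    f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}/content", headers=headers
)
print("content request:", content.status_code)
if content.status_code == 200:
    print("WARNING: restricted file content was retrievable directly")
```

Even where this check passes, the Pen Test Partners result shows that an AI agent acting on the user’s behalf may still surface the content, which is exactly why the configuration questions below matter.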
Microsoft’s Security Stance Challenged
Responding to the report, Microsoft expressed confidence in its existing security controls. A spokesperson emphasised that SharePoint’s information protection is built on user-specific permissions, audit logging, and continuous monitoring, and asserted that unauthorised users should not be able to view restricted content via Copilot or any similar agent.
But according to Ken Munro, founder of Pen Test Partners, the issue lies not in Microsoft’s underlying technology but in how organisations configure and monitor it. Munro noted that Microsoft’s response regarding permissions and logging is technically accurate but fails to address real-world implementation gaps: many companies do not adequately configure or audit user permissions and Copilot interactions, and so inadvertently enable the AI agent to access sensitive information it shouldn’t.
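One practical response to the gap Munro describes is to sweep document libraries for files that look like stored secrets and review who can actually read them, since any permission a user holds is effectively a permission their Copilot session holds too. The sketch below uses Microsoft Graph to do this; the token and drive ID are hypothetical placeholders, it assumes Files.Read.All application permission, and the search terms should be adapted to your environment.

```python
# Minimal sketch of a permissions audit: find files whose names suggest
# stored secrets and list who has been granted access to them.
# Assumptions (not from the article): an app-only Microsoft Graph token with
# Files.Read.All, and a known drive (document library) to sweep.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-only-access-token>"  # hypothetical placeholder
DRIVE_ID = "<drive-id>"            # hypothetical placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# Search the library for filenames that commonly hold credentials.
for term in ("password", "credentials", "secret"):
    hits = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/root/search(q='{term}')", headers=headers
    ).json().get("value", [])
    for item in hits:
        perms = requests.get(
            f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions",
            headers=headers,
        ).json().get("value", [])
        grantees = [
            p.get("grantedToV2", {}).get("user", {}).get("displayName")
            or p.get("link", {}).get("scope", "unknown")
            for p in perms
        ]
        print(f"{item['name']}: readable by {grantees}")
```

Any hit granted to a broad group, or shared via an organisation-wide link, is content that Copilot could plausibly surface to every member of that group.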
Configuration Risks and Real-World Implications
This case illustrates a growing concern in cybersecurity circles: AI tools, for all their legitimate applications, pose serious risks if poorly implemented or configured. Pen Test Partners’ findings underscore the importance of rigorous configuration, comprehensive monitoring, and in-depth security training for organisations deploying advanced AI technologies.
With attackers increasingly adopting AI-powered techniques, such misconfigurations offer an obvious route for malicious actors to reach protected data.
Looking Ahead
This revelation comes as a stark reminder to enterprises adopting new technologies: AI-driven tools are only as secure as their weakest configuration. Companies must prioritise comprehensive user permissions management, ongoing configuration reviews, and heightened awareness of the risks posed by powerful, autonomous agents such as Copilot AI.
As organisations continue to expand their use of AI, proactive security measures and thorough logging are paramount to guarding against inadvertent access to restricted data.
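On the logging side, Copilot interactions are recorded in Microsoft’s unified audit log and can be pulled through the Office 365 Management Activity API. The sketch below is a starting point only: the tenant ID and token are hypothetical placeholders, it assumes an active Audit.General subscription with ActivityFeed.Read permission, and the exact operation names for Copilot events should be verified against current Microsoft Purview documentation.

```python
# Minimal sketch: pull recent unified-audit records and flag Copilot activity.
# Assumptions (not from the article): an Office 365 Management Activity API
# token with ActivityFeed.Read, an active Audit.General subscription, and
# that Copilot events carry "Copilot" in their Operation name (verify against
# current Microsoft Purview documentation; record names evolve).
import requests

TENANT_ID = "<tenant-guid>"       # hypothetical placeholder
TOKEN = "<management-api-token>"  # hypothetical placeholder
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
headers = {"Authorization": f"Bearer {TOKEN}"}

# List available content blobs for the Audit.General feed.
blobs = requests.get(
    f"{BASE}/subscriptions/content",
    headers=headers,
    params={"contentType": "Audit.General"},
).json()

for blob in blobs:
    # Each blob URI resolves to a batch of individual audit records.
    for record in requests.get(blob["contentUri"], headers=headers).json():
        if "copilot" in record.get("Operation", "").lower():
            print(
                record.get("CreationTime"),
                record.get("UserId"),
                record.get("Operation"),
            )
```

Reviewing this feed regularly, alongside the permissions audits above, is the kind of routine monitoring that would have flagged the behaviour Pen Test Partners demonstrated.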
Photo Credit: DepositPhotos.com