Microsoft AI researchers accidentally leaked up to 38TB of data, including secrets, private keys, and passwords
Recently, the cloud security firm Wiz described in a detailed blog post a security issue it helped Microsoft resolve. Microsoft's AI researchers had inadvertently exposed an Azure storage location holding a staggering 38 terabytes of data. The root cause was a configuration mistake, not a security vulnerability in Azure itself.
In June of this year, Wiz came across a GitHub repository used by Microsoft AI researchers. The repository contained Azure storage links whose access tokens were far too permissive, so anyone in possession of one of the links had unrestricted access to the entire dataset.
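The tokens in question were Azure Shared Access Signatures (SAS), which encode their scope directly in the link's query string: `sp` lists the granted permissions (`r` read, `w` write, `d` delete, `l` list, and so on) and `se` the expiry date. As a rough illustration of why such links are dangerous, the sketch below (the URL and `audit_sas_url` helper are hypothetical, not taken from the incident) flags tokens that grant more than read-only access or that expire far in the future:

```python
from urllib.parse import urlparse, parse_qs

# Permission letters beyond read/list mean the token can modify or delete data.
RISKY_PERMISSIONS = set("wdacu")  # write, delete, add, create, update

def audit_sas_url(url: str) -> list[str]:
    """Return heuristic warnings for an Azure SAS URL (illustrative sketch)."""
    params = parse_qs(urlparse(url).query)
    warnings = []
    perms = set(params.get("sp", [""])[0])
    risky = perms & RISKY_PERMISSIONS
    if risky:
        warnings.append(f"token grants more than read-only access: {sorted(risky)}")
    expiry = params.get("se", [""])[0]
    if expiry[:4].isdigit() and int(expiry[:4]) > 2030:
        warnings.append(f"token expiry is far in the future: {expiry}")
    return warnings

# Hypothetical SAS URL with full permissions and a distant expiry,
# similar in shape to the kind of link described in the incident.
url = ("https://example.blob.core.windows.net/container?"
       "sv=2021-08-06&sp=racwdl&se=2051-10-05T00:00:00Z&sig=...")
for warning in audit_sas_url(url):
    print(warning)
```

A token like this one never meaningfully expires and lets the holder write and delete data, which is why scanning shared links for broad `sp` values and long `se` horizons is a worthwhile precaution.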
As a result, the researchers unwittingly exposed 38 terabytes of data, including passwords, encryption keys, and chat logs, among other things.
This lapse by Microsoft's AI researchers mirrors a mistake long familiar to users of consumer cloud drives who shared content without adequate access controls. In essence, the researchers created publicly accessible links without any form of authentication, granting access to anyone who had the link.
That sharing model has since been tightened on such services: fully public links are no longer permitted, and a password is now required for access, so shared content no longer remains open to everyone indefinitely.
The silver lining is that the exposed storage contained hardly any of Microsoft's confidential or customer data. The sensitive data it did hold belonged to two Microsoft employees and included a large number of account passwords, encryption keys, and Microsoft Teams chat logs.
The investigation revealed that the misconfiguration dates back to 2020, meaning this massive 38TB trove had been publicly accessible since then. It went unnoticed all that time, or anyone who did notice chose not to alert Microsoft.
Wiz responsibly disclosed the issue to Microsoft on June 22nd. By June 24th, Microsoft's Security Response Center had revoked the shared access signature token for the Azure storage, invalidating the exposed links.
Today, Microsoft's Security Response Center made the incident public, emphasizing that no customer data was compromised and that Microsoft's internal services were unaffected.
However, given that Microsoft employees did expose passwords and keys, a malicious actor who found the data first could potentially have used it to breach Microsoft's systems. Since Microsoft states that its internal services were unaffected, it seems Wiz was indeed the first to identify the issue.