
Google has found itself at the center of a new privacy controversy following revelations that a system component called “SafetyCore” was silently installed on Android devices. The feature scans images locally to detect potentially undesirable content, but the lack of transparency surrounding its deployment has raised serious concerns.
A similar incident recently occurred with Apple, when users discovered that the “Enhanced Visual Search” feature was transmitting portions of their photos to remote servers for landmark recognition. Despite Apple’s assurances regarding privacy, the mere fact that the functionality was enabled without explicit user consent sparked widespread outrage. Google now faces a parallel situation, though in its case the data processing occurs entirely on the device.
“SafetyCore” is an Android component designed to scan images and other data locally for spam, fraud, and potentially harmful content. Unlike Apple’s cloud-based approach, Google’s system runs entirely on the device. Its most contentious aspect, however, is its covert installation without prior user approval.
Developers of GrapheneOS, an independent security-focused operating system, analyzed “SafetyCore” and confirmed that it does not transmit data to Google’s servers. Instead, it classifies images and messages on the device, helping to flag potential phishing links, for instance. Nevertheless, the researchers expressed frustration that Google has declined to release the source code of “SafetyCore”, which prevents independent experts from verifying the system’s potential privacy risks.
According to ZDNet, “SafetyCore” was deployed and activated on all devices running Android 9 or later through system updates beginning in October 2024. Notably, Google provided no explicit user notification regarding its introduction.
The most significant point of criticism is that “SafetyCore” was installed silently, without clear user consent. This has fueled concerns that Google may introduce further AI-driven functionality in the future without transparency or opt-in mechanisms. While the company insists that “SafetyCore” operates in complete isolation, experts warn that even if it does not transmit data today, future integrations with other Google services could alter its behavior.
“If you don’t trust Google, the concern isn’t whether ‘SafetyCore’ is sending data today, but whether it might start doing so in the future,” notes ZDNet.
Although Google has not publicly acknowledged the presence of this feature, users can manually disable it by following these steps:
- Open “Settings” → “Apps”
- Navigate to “System Apps”
- Locate “SafetyCore”
- Select “Force Stop” or “Disable”
The exact steps may vary depending on the smartphone model, but the general method remains the same: “SafetyCore” can be removed or deactivated if users know where to look. For readers comfortable with a command line, an ADB-based approach is sketched below.
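As a minimal sketch of that ADB route: the commands below assume the component ships under the package name com.google.android.safetycore, the identifier commonly reported for SafetyCore. That name is an assumption rather than something stated in this article, so verify it on your own device before acting.

```sh
# Confirm the assumed package name is actually present on the device.
adb shell pm list packages | grep -i safetycore

# Option 1: disable the component for the current user.
adb shell pm disable-user --user 0 com.google.android.safetycore

# Option 2: remove it for the current user (the system copy remains
# and can be restored later with `pm install-existing`).
adb shell pm uninstall --user 0 com.google.android.safetycore
```

Note that a later Google Play system update may reinstall or re-enable the component, so the check is worth repeating after major updates.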
This incident highlights a growing trend among major tech companies: the silent deployment of AI-powered features without user awareness or explicit consent. Apple’s “Enhanced Visual Search”, discussed above, drew the same kind of backlash. While both companies argue that such technologies are intended to enhance the user experience and improve data protection, the lack of open communication fosters growing distrust.
The key takeaway for Google and Apple is clear: if users remain unaware of new AI-driven functionalities on their devices, they are unlikely to trust subsequent assurances regarding security and privacy. Transparency, user notification, and the ability to opt out are what consumers expect. Otherwise, such controversies will continue to emerge, intensifying concerns over the opaque implementation of advanced technologies.