With the rapid advancement and widespread adoption of cutting-edge AI generation tools such as OpenAI’s Sora 2, public concern over the potential misuse of deepfake technology has escalated sharply. Both public figures and ordinary users now face the growing risk of having their likenesses manipulated or impersonated without consent.
In response, YouTube has officially begun rolling out its long-awaited “Likeness Detection Tool,” first announced last year, to a select group of creators. The feature is designed to combat unauthorized deepfake content on the platform, allowing creators to request the removal of videos that falsely depict them.
In its initial phase, the tool will be available exclusively to members of the YouTube Partner Program (YPP). This strategic decision reflects the platform’s recognition that monetized creators—who typically enjoy higher visibility and public exposure—are more likely to become targets of deepfake impersonation.
According to YouTube’s documentation, creators must complete a verification process before activating the feature. Specifically, users are required to submit a government-issued ID and a short video selfie.
This step serves a dual purpose: first, to ensure that the applicant is indeed the individual being represented, thereby preventing misuse of the tool; and second, to provide the AI system with sufficient source material to build an accurate facial model for subsequent scans and comparisons.
Once verified, the system operates in a manner similar to YouTube’s Content ID copyright detection system. It automatically scans newly uploaded videos and compares them against the verified creator’s facial data to identify any AI-generated or manipulated likenesses.
When a potential match is detected, the creator receives a notification and can review the flagged content. If the creator confirms that the video constitutes unauthorized use, they can mark it for removal, prompting YouTube to take it down.
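YouTube has not published how its matching works, but detection systems of this kind are commonly described as comparing a numeric "embedding" of a face in an uploaded video against the verified creator's reference embedding, and flagging the video for human review when the two are sufficiently similar. The sketch below is a hypothetical illustration of that general idea, not YouTube's actual implementation; the function names, toy vectors, and the 0.85 threshold are all assumptions made for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_potential_match(reference: np.ndarray, candidate: np.ndarray,
                       threshold: float = 0.85) -> bool:
    # Flag for creator review when similarity exceeds the threshold.
    # The threshold value here is illustrative, not YouTube's.
    return cosine_similarity(reference, candidate) >= threshold

# Toy vectors standing in for face-model embeddings
ref = np.array([0.9, 0.1, 0.4])
near = ref + np.array([0.01, -0.02, 0.03])   # close: flagged for review
far = np.array([-0.5, 0.8, -0.2])            # distant: ignored

print(is_potential_match(ref, near))  # True
print(is_potential_match(ref, far))   # False
```

In a real system the embeddings would come from a face-recognition model and the flagged video would go to the creator's review queue rather than being removed automatically, matching the confirm-then-remove flow described above.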
At present, the system is limited to detecting AI-modified facial imagery. Cases in which AI-generated voices are used without altering the accompanying visuals remain outside its detection capabilities—leaving AI voice impersonation as an unresolved challenge that YouTube must still address in future updates.