
Image: A wanted poster for DPRK IT workers, retrieved on March 20, 2025.
In an era where remote work is increasingly prevalent, a new cybersecurity threat has emerged: the use of real-time deepfakes to infiltrate organizations. A recent report by Unit 42 highlights the “alarming ease” with which synthetic identities can be created, posing significant risks to security, legal, and compliance frameworks.
The report indicates a surge in candidates using real-time deepfakes during job interviews. Investigators have uncovered cases where interviewees presented synthetic video feeds, often employing identical virtual backgrounds across different candidate profiles. This tactic enables malicious actors, including North Korean IT workers, to operate undetected and potentially generate revenue for sanctioned regimes.
Unit 42’s analysis reveals that this is a logical evolution of established fraudulent work infiltration schemes. North Korean threat actors have consistently shown a strong interest in identity manipulation techniques, including the creation of synthetic identities supported by compromised personal information. The use of real-time deepfakes offers key operational advantages: it allows a single operator to interview for the same position multiple times with different synthetic personas and helps operatives avoid identification.
One of the most unsettling revelations of the report is how easily deepfakes can be created. According to the report, “it took just over an hour with no prior experience to figure out how to create a real-time deepfake using readily available tools and cheap consumer hardware”. In a demonstration, a researcher used an AI search engine, a passable internet connection, and an RTX 3070 graphics processing unit to produce a sample deepfake. The researcher combined images generated by thispersondoesnotexist[.]org with free deepfake tools to create multiple identities. As the report states, “A simple wardrobe and background image change could be all it takes to come back to a hiring manager as a brand-new candidate”.
While the sophistication of deepfakes is increasing, there are still detection opportunities. The report outlines several technical shortcomings in real-time deepfake systems, including temporal consistency issues, occlusion handling problems, lighting adaptation inconsistencies, and audio-visual synchronization delays.
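To make one of those shortcomings concrete, below is a minimal sketch of a temporal-consistency check on a recorded video feed. The landmark-jitter metric, the threshold value, and the use of OpenCV with MediaPipe face tracking are illustrative assumptions on my part, not techniques taken from the report:

```python
import cv2
import numpy as np
import mediapipe as mp

JITTER_THRESHOLD = 0.02  # assumed: mean normalized landmark jump per frame pair


def scan_feed(source):
    """Count frames whose face landmarks jump more than the threshold."""
    cap = cv2.VideoCapture(source)
    suspicious, prev = 0, None
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                prev = None  # face lost mid-stream: occlusion trouble is itself a signal
                continue
            cur = np.array([(p.x, p.y) for p in
                            result.multi_face_landmarks[0].landmark])
            if prev is not None:
                # Genuine faces move smoothly between frames; face-swap output
                # can "snap", producing large per-landmark jumps.
                if np.mean(np.linalg.norm(cur - prev, axis=1)) > JITTER_THRESHOLD:
                    suspicious += 1
            prev = cur
    cap.release()
    return suspicious


if __name__ == "__main__":
    # e.g. an interview session recorded with the candidate's consent
    print("frames over jitter threshold:", scan_feed("interview.mp4"))
```

A check like this is only a heuristic; webcam compression artifacts and fast head movement can also trip it, so it belongs in a triage pipeline rather than an automated rejection step.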
To combat this threat, the report emphasizes the need for collaboration between HR and security teams. It provides a range of mitigation strategies, including:
For HR Teams:
- Asking candidates to turn their cameras on and recording sessions (with consent).
- Implementing comprehensive identity verification workflows.
- Training recruiters to identify suspicious patterns.
- Instructing interviewers to ask candidates to perform movements that real-time deepfake software struggles to render, such as turning the head to a full profile or passing a hand in front of the face.
For Security Teams:
- Securing the hiring pipeline by monitoring job application IP addresses (a minimal screening sketch follows this list).
- Enriching applicant-provided phone numbers to check for VoIP carriers (second sketch below).
- Maintaining information-sharing agreements.
- Identifying and blocking software applications that enable virtual webcam installation (third sketch below).
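For the IP-monitoring item, here is a minimal, standard-library-only sketch of screening an application's source IP against a block list. The CIDR blocks are placeholders (RFC 5737 documentation ranges); a real pipeline would pull a maintained feed of VPN, proxy, and hosting-provider networks:

```python
import ipaddress

# Placeholder CIDRs standing in for a maintained VPN/hosting-provider feed.
FLAGGED_RANGES = [ipaddress.ip_network(c) for c in (
    "203.0.113.0/24",
    "198.51.100.0/24",
)]


def is_flagged_ip(ip: str) -> bool:
    """True if the application came from a known VPN/hosting range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in FLAGGED_RANGES)


print(is_flagged_ip("203.0.113.77"))  # True: inside a flagged range
print(is_flagged_ip("192.0.2.10"))    # False: not in the block list
```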
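For the phone-number enrichment item, a sketch using the open-source `phonenumbers` library (a port of Google's libphonenumber; `pip install phonenumbers`). The applicant number is a hypothetical placeholder:

```python
import phonenumbers
from phonenumbers import PhoneNumberType, carrier


def flag_voip(raw_number: str, default_region: str = "US") -> bool:
    """Return True if the number is invalid or resolves to a VoIP type."""
    try:
        num = phonenumbers.parse(raw_number, default_region)
    except phonenumbers.NumberParseException:
        return True  # unparseable numbers are worth a manual look
    if not phonenumbers.is_valid_number(num):
        return True
    # Note: offline metadata only catches ranges dedicated to VoIP;
    # commercial enrichment APIs detect ported/virtual numbers more reliably.
    return phonenumbers.number_type(num) == PhoneNumberType.VOIP


applicant_number = "+14155550100"  # hypothetical applicant-supplied value
num = phonenumbers.parse(applicant_number, "US")
print(flag_voip(applicant_number), carrier.name_for_number(num, "en"))
```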
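And for the virtual-webcam item, a sketch that flags running processes whose names match common virtual-camera tools, using `psutil`. The name hints are illustrative assumptions; a production control would rely on EDR or application allowlisting rather than name matching:

```python
import psutil

# Assumed process-name hints for common virtual-camera software.
VIRTUAL_CAM_HINTS = ("obs", "manycam", "xsplit", "snapcamera")


def find_virtual_camera_processes():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if any(hint in name for hint in VIRTUAL_CAM_HINTS):
            hits.append((proc.info["pid"], proc.info["name"]))
    return hits


if __name__ == "__main__":
    for pid, name in find_virtual_camera_processes():
        print(f"possible virtual camera: pid={pid} name={name}")
```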
The report also advises monitoring for abnormal network access patterns post-hiring (one such check is sketched below), deploying multi-factor authentication, developing protocols for handling suspected cases, creating security awareness programs, establishing technical controls that limit access for new employees, and documenting verification failures.
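As one illustration of post-hire monitoring, the sketch below flags a new employee's logins from countries that differ from their first verified session. The event shape, the 90-day window, and the country-only baseline are assumptions for the example; a real SIEM rule would also track ASN, device, and working hours:

```python
from datetime import datetime, timedelta

PROBATION = timedelta(days=90)  # assumed heightened-monitoring window


def flag_geo_anomalies(events, hire_date):
    """events: time-ordered (timestamp, country_code) login tuples."""
    baseline, alerts = None, []
    for ts, country in events:
        if ts - hire_date > PROBATION:
            break  # hand off to normal monitoring after probation
        if baseline is None:
            baseline = country  # first verified login sets expected geography
        elif country != baseline:
            alerts.append((ts, country))
    return alerts


hire = datetime(2025, 3, 1)
logins = [(hire + timedelta(days=1), "US"),
          (hire + timedelta(days=9), "RO")]  # hypothetical login events
print(flag_geo_anomalies(logins, hire))  # flags the second login
```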
By implementing layered defenses and fostering collaboration between HR and security teams, organizations can significantly reduce the risk of deepfake infiltration and protect their operations. As the report aptly states, “Organizations must implement layered defenses by combining enhanced verification procedures, technical controls and ongoing monitoring throughout the employee lifecycle”.