Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning
Nov 04, 2024 Ravie Lakshmanan Vulnerability / Cyber Threat
Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that could be exploited by a malicious actor to perform various actions, including denial-of-service, model poisoning, and model theft.
"Collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including denial-of-service (DoS) attacks, model poisoning, model theft, and more," Oligo Security researcher Avi Lumelsky said in a report published last week.
Ollama is an open-source application that allows users to deploy and operate large language models (LLMs) locally on Windows, Linux, and macOS devices. Its project repository on GitHub has been forked 7,600 times to date.
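Because Ollama's HTTP API is unauthenticated by default, the standard hardening step is to keep the listener bound to loopback rather than all interfaces. A minimal sketch, using Ollama's documented `OLLAMA_HOST` environment variable and its default port of 11434:

```shell
# Keep the Ollama API reachable only from the local machine (this is the
# default bind address); do not set 0.0.0.0 unless an authenticating
# reverse proxy sits in front of the service.
export OLLAMA_HOST=127.0.0.1:11434
ollama serve
```

Exposing the service on all interfaces without a proxy in front is what turns the flaws below into remotely reachable ones.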
A brief description of the six vulnerabilities is below -

- CVE-2024-39719 (CVSS score: 7.5) - A vulnerability that can be exploited using the /api/create endpoint to determine the existence of a file in the server (Fixed in version 0.1.47)
- CVE-2024-39720 (CVSS score: 8.2) - An out-of-bounds read vulnerability that could cause the application to crash by means of the /api/create endpoint, resulting in a DoS condition (Fixed in version 0.1.46)
- CVE-2024-39721 (CVSS score: 7.5) - A vulnerability that causes resource exhaustion and ultimately a DoS when invoking the /api/pull endpoint repeatedly with the file /dev/random as input (Fixed in version 0.1.34)
- CVE-2024-39722 (CVSS score: 7.5) - A path traversal vulnerability in the /api/push endpoint that exposes the files existing on the server and the entire directory structure on which Ollama is deployed (Fixed in version 0.1.46)
- A vulnerability that could cause model poisoning via the /api/pull endpoint from an untrusted source (No CVE identifier, unpatched)
- A vulnerability that could cause model theft via the /api/push endpoint to an untrusted target (No CVE identifier, unpatched)
"Exposing Ollama to the internet without authorization is the equivalent to exposing the docker socket to the public internet, because it can upload files and has model pull and push capabilities (that can be abused by attackers)," Lumelsky noted.