Keras Deep Learning Framework Vulnerability (CVE-2025-1550)

Age: 5 days ago
Summary
A critical security vulnerability, CVE-2025-1550, has been identified in the Keras deep learning framework, allowing arbitrary code execution through the `Model.load_model` function, even with `safe_mode=True` enabled. Attackers can exploit the flaw by crafting malicious `.keras` archive files whose `config.json` specifies arbitrary Python modules and functions, which are executed when the model is loaded. This poses significant risks, including data compromise and system control, and carries a CVSS score of 7.3 (high severity). The vulnerability affects Keras versions before 3.9. Users are urged to upgrade to version 3.9 or later, load models only from trusted sources, and use self-created model archives to mitigate risk. No direct Indicators of Compromise have been identified; the primary indicator is the loading of a malicious `.keras` file itself.
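The advisory's mitigations (upgrade to 3.9+, load only self-created archives from trusted sources) can be enforced in application code. The following is a minimal sketch of that idea; the `TRUSTED_MODEL_DIR` location and the `load_trusted_model` helper are illustrative assumptions, not part of Keras.

```python
from pathlib import Path

import keras  # assumes Keras 3.x is installed

# Hypothetical allowlist directory holding only archives we built ourselves.
TRUSTED_MODEL_DIR = Path("/opt/models/self_built").resolve()

def load_trusted_model(path: str):
    """Load a .keras archive only from a trusted location and only on a fixed Keras version."""
    model_path = Path(path).resolve()

    # Upgrade guard: CVE-2025-1550 is reported fixed in Keras 3.9 and later.
    major, minor = (int(part) for part in keras.__version__.split(".")[:2])
    if (major, minor) < (3, 9):
        raise RuntimeError(
            f"Keras {keras.__version__} is affected by CVE-2025-1550; upgrade to >= 3.9"
        )

    # Provenance guard: only load archives created by us, per the advisory's guidance.
    if TRUSTED_MODEL_DIR not in model_path.parents:
        raise ValueError(f"Refusing to load model outside {TRUSTED_MODEL_DIR}: {model_path}")

    # safe_mode alone was insufficient before 3.9, so treat it as a second layer, not the fix.
    return keras.models.load_model(model_path, safe_mode=True)
```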
How Blue Rock Helps

This vulnerability gives an attacker the ability to execute arbitrary code on a victim's system by tricking them into loading a specially crafted Keras model file, even with Keras's `safe_mode` enabled. The following protection guardrails can block the successive steps an attacker would take:
  • Python Deserialization Protection: When an attacker crafts a malicious `.keras` file whose configuration specifies arbitrary Python functions to run at load time, this guardrail prevents those functions from being called and executed, blocking the initial arbitrary code execution that occurs when `Model.load_model` processes the compromised configuration (see the conceptual sketch after this list).
  • Python OS Command Injection Prevention: Should the initial payload attempt to run further operating system commands, for instance using Python's `os` or `subprocess` modules to scan directories for sensitive data or launch reconnaissance tools, this guardrail blocks those unauthorized system-level commands.
  • Container Drift Protection (Binaries & Scripts): If the compromised Keras application runs in a container and the attacker's code tries to execute scripts or binaries that were not part of the original container image, such as downloading and running a second-stage payload like a Remote Access Trojan to establish persistence, this guardrail prevents the drifted executables from running.
  • Reverse Shell Protection: Finally, if the attacker, having gained code execution, attempts to establish a persistent command-and-control channel by opening a direct socket connection back to their server to exfiltrate data or issue further commands, this guardrail detects and blocks the malicious outbound connection, severing the attacker's remote access.
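The guardrails above are Blue Rock features. As a rough illustration of the idea behind the first one only, the sketch below shows how a deserialization guard can resolve module/function names from a model configuration against an explicit allowlist instead of importing them freely; the allowlist contents and the `resolve_symbol` helper are assumptions for illustration, not Blue Rock's actual mechanism.

```python
# Conceptual sketch only -- not Blue Rock's implementation. It illustrates a
# deserialization guard: "module"/"class_name" entries from a model config are
# resolved against an explicit allowlist rather than imported unconditionally.
import importlib

# Assumed allowlist of modules a model config may legitimately reference.
ALLOWED_MODULES = {"keras.layers", "keras.activations", "keras.initializers"}

def resolve_symbol(module_name: str, symbol_name: str):
    """Return a callable named in a model config, or refuse if its module is not allowlisted."""
    if module_name not in ALLOWED_MODULES:
        raise PermissionError(
            f"Blocked deserialization of {module_name}.{symbol_name}: module not allowlisted"
        )
    module = importlib.import_module(module_name)
    return getattr(module, symbol_name)

# A manipulated config entry such as {"module": "builtins", "class_name": "exec"}
# would be rejected here before any attacker-controlled callable could run.
```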

MITRE ATT&CK Techniques Inferred
  • T1203: Exploitation for Client Execution: The vulnerability allows arbitrary code execution when a maliciously crafted model file is loaded into the Keras framework. An attacker exploiting this flaw to run code on the victim's system aligns with the MITRE ATT&CK technique 'Exploitation for Client Execution'. The key element is abuse of the `Model.load_model` function to execute code specified in a manipulated `config.json` inside a `.keras` archive.
