Malicious Metadata: How a Docker AI Assistant Nearly Opened the Floodgates to Code Execution
A critical flaw in Docker’s Ask Gordon AI let attackers turn innocent-looking image labels into devastating cyberweapons - until a recent emergency patch.
It sounded like science fiction: a simple AI query, a line of image metadata, and suddenly an attacker is running commands on your machine. But for users of Docker’s Ask Gordon AI assistant, this nightmare scenario was all too real - until a fix in late 2025 quietly closed the door. The “DockerDash” vulnerability, as uncovered by Noma Labs, exposes how AI trust boundaries can be dangerously porous, especially in the heart of the software supply chain.
Fast Facts
- Critical flaw dubbed "DockerDash" affected Ask Gordon AI in Docker Desktop and CLI.
- Attackers could inject malicious code via Docker image metadata, leading to remote code execution or data theft.
- Flaw exploited the AI’s failure to distinguish between safe metadata and executable instructions.
- Patched in Docker version 4.50.0 (November 2025).
- Highlights urgent AI supply chain risks and the need for zero-trust validation.
The Anatomy of a Metadata Meltdown
Docker’s Ask Gordon AI was designed to streamline container management by answering user queries and automating tasks. But, as security researchers at Noma Labs discovered, it also offered a backdoor to anyone crafty enough to tamper with image metadata.
The attack was deceptively simple. An adversary would publish a Docker image, embedding weaponized instructions inside the LABEL fields - metadata typically used for descriptions or versioning. When a user asked Ask Gordon about the image, the AI would dutifully read every label, unable to tell the difference between harmless info and hidden commands. These instructions were then passed - without any checks - to the MCP Gateway, a middleware layer responsible for interfacing between the AI and the user’s local environment. The result: the attacker’s code ran with the victim’s Docker privileges, potentially compromising cloud, desktop, or CLI environments.
This wasn’t just a theoretical risk. The same vulnerability also enabled a form of data exfiltration, where attackers could trick Ask Gordon into revealing sensitive information about a victim’s environment, such as installed tools, configurations, mounted directories, or network details. All it took was a cleverly crafted LABEL field and an unsuspecting query.
What made DockerDash particularly alarming was the total lack of validation at every step - a textbook case of “Meta-Context Injection.” Not only did the AI trust every piece of metadata, but the underlying system failed to flag or sanitize dangerous instructions. The flaw highlights a growing concern: as AI becomes more deeply embedded in developer tools, the distinction between trusted and untrusted data is blurring, making it easier for attackers to slip through the cracks.
Lessons from the Edge of AI Automation
While Docker has since patched the vulnerability (and a related prompt injection issue), the incident is a stark reminder that even the most routine pieces of data - like image labels - can be weaponized in the age of AI. As Sasi Levi of Noma Labs warns, “trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path.” The next frontier of cybersecurity isn’t just patching software bugs, but rethinking how AI systems interpret and validate the very data they’re built to process.
WIKICROOK
- Docker Image: A Docker Image is a packaged environment containing all components needed to run an application consistently across different systems and cloud platforms.
- Metadata: Metadata is hidden information attached to digital files, like photos or ads, containing details such as creation date, author, or device used.
- Remote Code Execution (RCE): Remote Code Execution (RCE) is when an attacker runs their own code on a victim’s system, often leading to full control or compromise of that system.
- Prompt Injection: Prompt injection is when attackers feed harmful input to an AI, causing it to act in unintended or dangerous ways, often bypassing normal safeguards.
- Zero: A zero-day vulnerability is a hidden security flaw unknown to the software maker, with no fix available, making it highly valuable and dangerous to attackers.