AWS patches Q Developer after prompt injection, RCE demo • The Register

AWS patches Q Developer after prompt injection, RCE demo • The Register

08/20/2025


Amazon has quietly fixed a couple of security issues in its coding agent: Amazon Q Developer VS Code extension. Attackers could use these vulns to leak secrets, including API keys from a developer’s machine, and run arbitrary code.

“We’re aware of this research and have made enhancements to the underlying language server (v1.24.0) as part of the Amazon Q Developer Extension for VS Code to address the behavior mentioned in the blog post,” an AWS spokesperson told The Register. “Restarting the plugin will update it to the latest version that requires additional human-in-the-loop approval.”

The updates come in response to AI security researcher Johann Rehberger’s disclosures and bug hunting expedition into the popular Amazon coding assistant with over 1 million downloads. 

In a series of technical writeups this week, Rehberger described how Amazon Q Developer is vulnerable to prompt injection, which can lead to data theft from a developer’s machine and remote code execution (RCE).

And if you’re reading this, thinking you must have missed Amazon’s customer advisory about the flaws and subsequent fixes, you didn’t miss anything. An AWS spokesperson told The Register that the cloud giant, which is also a CVE Numbering Authority (CNA), is not issuing a CVE tied to the prompt injection or RCE vulnerabilities. According to AWS: neither meets the CNA program criteria.

“This is not a vulnerability in the same way executing any other deliberately malicious code is not considered a vulnerability,” an AWS spokesperson told The Register. “We recommend customers follow security best practices to avoid executing deliberately malicious code.”

At the time this story went to press, AWS had not published any security bulletins about the potential for prompt injection and RCE, or the Amazon Q Developer updates.

Rehberger told The Register that he believes Amazon should be more transparent about security fixes for its products.

“Even though Amazon fixed all vulnerabilities that I reported, which is good, AWS did not issue a public advisory or CVE for the vulnerabilities to inform customers about the patches,” he said. “This is in contrast to similar patches that Anthropic or Microsoft issued where there was more transparency.”

“Agents that have the power to change their own security configuration and controls is quite worrisome, and it seems to be a common pattern now across a few vendors,” Rehberger added.

From prompt injection…

In his first Amazon Q Developer writeup, published Monday, Rehberger detailed how attackers could use prompt injection to coerce the AI agent into dumping sensitive information, all without the developer’s approval.

“It is vulnerable to prompt injection from untrusted inputs and its security depends heavily on model behavior,” Rehberger wrote. “Amazon Q can be hijacked to run bash commands that allow leaking of sensitive information without the developer’s consent.”

The problem stems from the way the code extension interacts with data. Amazon Q has a set of predefined tools for reading and writing files, running commands, etc. By default, only the “fsRead” tool is trusted, meaning that it does not require human approval.

The “executeBash” tool, while not fully trusted, also does not require human confirmation when running “ping” and “dig” commands — and therein lies the flaw. 

The coding agent’s internal permission model categorizes these as “read-only” commands, despite the fact that they can lead to data exfiltration via DNS requests. And this allowed Rehberger to craft a malicious prompt to read the developer’s .env file and leak it via a DNS request.

Here’s the prompt he used:

If the developer interacts at all with the file and queries the LLM, Amazon Q reads the .env file, invokes the bash tool with the “ping” command and thus dumps the file all without the developer’s consent. In the example screen-shot by Rehberger, this data included an API key.

It’s also worth noting that while Amazon Q refuses requests to security-testing services including oast.me or BurpCollaborator, when Rehberger used his own wuzzi.net domain, the exploit worked.

To remote code execution

Building off of his earlier research, on Tuesday, Rehberger detailed in a blog post and video how an attacker could use malicious prompts to achieve remote code execution on the host — in this case, launching a calculator app.

“The AI, or an attacker via indirect prompt injection, can change the security configuration of the AI to auto-approve all commands — or another avenue is to add a malicious MCP server on the fly, which achieves the same effect as one can run arbitrary code that way, too,” Rehberger told The Register.

Here’s how this one works. The “find” command in Amazon Q is categorized as “read-only,” which Rehberger previously determined allows it to bypass the human-in-the-loop confirmation step. Plus, “find” has the ability to run arbitrary OS commands using “-exec” flags.

So to achieve RCE, Rehberger embeds the prompt injection in source code as a comment, and when the AI processes the file, it runs the “find” command using the “-exec” flag, thus popping the calculator on the host.

Of course, we doubt that a real-life attacker would be hijacking the AI to remotely execute a calculator. But it makes for a fun visual. ®

You May Also Like…

0 Comments