ChatGPT’s research assistant sprung a leak – since patched – that let attackers steal Gmail secrets with just a single carefully crafted email.
Deep Research, a tool unveiled by OpenAI in February, enables users to ask ChatGPT to browse the internet or their personal email inbox and generate a detailed report on its findings. The tool can be integrated with apps like Gmail and GitHub, allowing people to do deep dives into their own documents and messages without ever leaving the chat window.
Cybersecurity outfit Radware this week disclosed a critical flaw in the feature, dubbed “ShadowLeak,” warning that it could allow attackers to siphon data from inboxes with no user interaction whatsoever. Researchers showed that simply sending a maliciously crafted email to a Deep Research user was enough to get the agent to exfiltrate sensitive data when it later summarized that inbox.
The attack relies on hiding instructions inside the HTML of an email using white-on-white text, CSS tricks, or metadata, which a human recipient would never notice. When Deep Research later crawls the mailbox, it dutifully follows the attacker’s hidden orders and sends the contents of messages, or other requested data, to a server controlled by the attacker.
Radware stressed that this isn’t just a prompt injection on the user’s machine. The malicious request is executed from OpenAI’s own infrastructure, making it effectively invisible to corporate security tooling.
That server-side element is what makes ShadowLeak particularly nasty. There’s no dodgy link for a user to click, and no suspicious outbound connection from the victim’s laptop. The entire operation happens in the cloud, and the only trace is a benign-looking query from the user to ChatGPT asking it to “summarize today’s emails”.
Radware’s report warns that attackers could leak personally identifiable information, internal deal memos, legal correspondence, customer records, and even login credentials, depending on what sits in the mailbox. The researchers argue that the risk isn’t limited to Gmail either. Any integration that lets ChatGPT hoover up private documents could be vulnerable to the same trick if input sanitization isn’t watertight.
“ShadowLeak weaponizes the very capabilities that make AI assistants useful: email access, tool use, and autonomous web calls,” Radware researchers said. “It results in silent data loss and unlogged actions performed ‘on behalf of the user,’ bypassing traditional security controls that assume intentional user clicks or data leakage prevention at the gateway level.”
The potential consequences go beyond embarrassment. Depending on what is leaked, companies could find themselves on the hook for GDPR or CCPA violations, suffer regulatory investigations, and become victims of downstream fraud. Because the attack leaves so little forensic evidence, incident responders may struggle to prove what was taken.
Radware said it reported the ShadowLeak bug to OpenAI on June 18 and the company released a fix on September 3. The Register asked OpenAI what specific changes were made to mitigate this vulnerability and whether it had seen any evidence that the vulnerability had been exploited in the wild before disclosure, but did not receive a response.
Radware is urging organizations to treat AI agents as privileged users and to lock down what they can access. HTML sanitization, stricter control over which tools agents can use, and better logging of every action taken in the cloud are all on its list of recommendations. ®
0 Comments