Gartner’s document warns that AI sidebars mean “Sensitive user data – such as active web content, browsing history, and open tabs – is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed.”
The document suggests it’s possible to mitigate those risks by assessing the back-end AI services that power an AI browser to understand if their security measures present an acceptable risk to your organization.
If that process leads to approval for use of a browser’s back-end AI, Gartner advises that organizations should still “Educate users that anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions.”
But if you decide the back-end AI is too risky, Gartner recommends blocking users from downloading or installing AI browsers.
Gartner’s fears about the agentic capabilities of AI browsers relate to their susceptibility to “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.”
The authors also suggest that employees “might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting” and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions.
Another scenario they consider is exposing agentic browsers to internal procurement tools, then watching LLMs make mistakes that cause organizations to buy things they don’t want or need.
“A form could be filled out with incorrect information, a wrong office supply item might be ordered… or a wrong flight might be booked,” they imagine.
Again, the analysts recommend some mitigations, such as ensuring agents can’t use email, which will limit their ability to perform some actions. They also suggest using settings that ensure AI browsers can’t retain data.
But overall, the trio of analysts think AI browsers are just too dangerous to use without first conducting risk assessments and suggest that even after that exercise you’ll likely end up with a long list of prohibited use cases – and the job of monitoring an AI browser fleet to enforce the resulting policies. ®