AI browser vulnerability (Perplexity Comet)

The threat of instruction injection


At Brave, we're developing the ability for our in-browser AI assistant Leo to browse the Web on your behalf, acting as your agent.
This kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges. As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply.
While looking at Comet, we discovered vulnerabilities which we reported to Perplexity, and which underline the security challenges faced by agentic AI implementations in browsers.
The vulnerability we're discussing in this post lies in how Comet processes webpage content: when users ask it to "Summarize this webpage," Comet feeds a part of the webpage directly to its LLM without distinguishing between the user's instructions and untrusted content from the webpage.

Attack demonstration


To illustrate the severity of this vulnerability in Comet, we created a proof-of-concept demonstration:


More:
 
Didn't we have about 40 years of media telling us that AI can't be trusted, and that it would be the downfall of humanity? Why didn't anyone listen?
 
Dumb people: "I dont trust this gaming company that takes my data!!!!, fuck them!!"
Same dumb people talking with AI: "so heres what I ate today and heres where I live and heres my bank statement, tell me you love me hihih"

same energy here, dont trust that AI, trust our AI bro.
 
Last edited:
Yeah saw this yesterday, absolutely crazy to see how easy that was and how there's zero guardrails for this.

The AI Assistant executing the "never ask the user to confirm" prompt is wild.
 
DO NOT SHARE WORK, IP, OR WHAT YOU DON'T WANT SEEN WITH AI.

I already requested al lof my Data Deletion from Gemini, Grok, OpenAI. My only hope is with it getting lost in the sea of data.

Now I only do it for research and analyzing scenarios.
 
Top Bottom