Reizo Ryuu
Member
The threat of instruction injection
At Brave, we're developing the ability for our in-browser AI assistant Leo to browse the Web on your behalf, acting as your agent.
This kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges. As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply.
While looking at Comet, we discovered vulnerabilities which we reported to Perplexity, and which underline the security challenges faced by agentic AI implementations in browsers.
The vulnerability we're discussing in this post lies in how Comet processes webpage content: when users ask it to "Summarize this webpage," Comet feeds a part of the webpage directly to its LLM without distinguishing between the user's instructions and untrusted content from the webpage.
Attack demonstration
To illustrate the severity of this vulnerability in Comet, we created a proof-of-concept demonstration:
More:

Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet | Brave
The attack we developed shows that traditional Web security assumptions don't hold for agentic AI, and that we need new security and privacy architectures for agentic browsing.