In short
In a demo, Comet’s AI assistant adopted embedded prompts and posted non-public emails and codes.
Courageous says the vulnerability remained exploitable weeks after Perplexity claimed to have fastened it.
Consultants warn that immediate injection assaults expose deep safety gaps in AI agent techniques.
Courageous Software program has uncovered a safety flaw in Perplexity AI’s Comet browser that confirmed how attackers may trick its AI assistant into leaking non-public person knowledge.
In a proof-of-concept demo printed August 20, Courageous researchers recognized hidden directions inside a Reddit remark. When Comet’s AI assistant was requested to summarize the web page, it didn’t simply summarize—it adopted the hidden instructions.
Perplexity disputed the severity of the discovering. A spokesperson advised Decrypt the difficulty “was patched earlier than anybody observed” and mentioned no person knowledge was compromised. “We’ve a reasonably sturdy bounty program,” the spokesperson added. “We labored instantly with Courageous to establish and restore it.”
Courageous, which is growing its personal agentic browser, maintained that the flaw remained exploitable weeks after the patch and argued Comet’s design leaves it open to additional assaults.
Courageous mentioned the vulnerability comes right down to how agentic browsers like Comet course of net content material. “When customers ask it to summarize a web page, Comet feeds a part of that web page on to its language mannequin with out distinguishing between the person’s directions and untrusted content material,” the report defined. “This enables attackers to embed hidden instructions that the AI will execute as in the event that they have been from the person.”
Immediate injection: outdated concept, new goal
One of these exploit is named a immediate injection assault. As a substitute of tricking an individual, it methods an AI system by hiding directions in plain textual content.
“It’s just like conventional injection assaults—SQL injection, LDAP injection, command injection,” Matthew Mullins, lead hacker at Reveal Safety, advised Decrypt. “The idea isn’t new, however the methodology is completely different. You’re exploiting pure language as an alternative of structured code.”
Safety researchers have been warning for months that immediate injection may develop into a serious headache as AI techniques achieve extra autonomy. In Could, Princeton researchers confirmed how crypto AI brokers may very well be manipulated with “reminiscence injection” assaults, the place malicious data will get saved in an AI’s reminiscence and later acted on as if it have been actual.
Even Simon Willison, the developer credited with coining the time period immediate injection, mentioned the issue goes far past Comet. “The Courageous safety group reported critical immediate injection vulnerabilities in it, however Courageous themselves are growing an analogous function that appears doomed to have related issues,” he posted on X.
Shivan Sahib, Courageous’s vp of privateness and safety, mentioned its upcoming browser would come with “a set of mitigations that assist scale back the danger of oblique immediate injections.”
“We’re planning on isolating agentic looking into its personal storage space and looking session, so {that a} person doesn’t unintentionally find yourself granting entry to their banking and different delicate knowledge to the agent,” he advised Decrypt. “We’ll be sharing extra particulars quickly.”
The larger threat
The Comet demo highlights a broader drawback: AI brokers are being deployed with highly effective permissions however weak safety controls. As a result of giant language fashions can misread directions—or comply with them too actually—they’re particularly susceptible to hidden prompts.
“These fashions can hallucinate,” Mullins warned. “They will go fully off the rails, like asking, ‘What’s your favourite taste of Twizzler?’ and getting directions for making a selfmade firearm.”
With AI brokers being given direct entry to electronic mail, recordsdata, and reside person classes, the stakes are excessive. “Everybody desires to slap AI into every part,” Mullins mentioned. “However nobody’s testing what permissions the mannequin has, or what occurs when it leaks.”
Typically Clever Publication
A weekly AI journey narrated by Gen, a generative AI mannequin.








