In the rapidly evolving world of artificial intelligence, few innovations are more intriguing and controversial than agentic AI browsers. These are web browsers powered by AI agents that don’t just load and render pages, but act on behalf of the user: summarizing content, navigating websites, making purchases, and more. Perplexity’s Comet browser is one of the prominent players in this field.
But with great power comes great risk. As users have flocked to Comet for its promise of intelligent browsing, security researchers and critics have sounded alarms. Prompt injection attacks, phishing vulnerabilities, and even “assistant hijacking” have raised serious red flags. In response, Perplexity has rolled out a significant update, adding new safety barriers but the debate over agentic browsers is far from settled.
Let’s dive into the full story: what went wrong, how Perplexity is responding, and what it means for the future of AI-powered browsing.
What Is Comet, Anyway?
To understand the issue, we first need to grasp what Comet is and what makes it different from traditional browsers.
-
Agentic Browser: Comet isn’t just a browser; it’s also an AI assistant. The AI lives in a sidebar (sometimes called a “sidecar”) and can actively act on your behalf. Whether you ask it to “summarize this article,” “go to my email and check unread,” or “shop for a laptop,” Comet’s agent can take you there or even do parts of the task for you.
-
Deep Integration: Because the AI is deeply integrated with the browser, it has access to your web sessions, cookies, and potentially sensitive data assuming you have already logged into websites like Gmail, Amazon, or banking portals.
-
Autonomous Actions: One of the most powerful (and dangerous) features: Comet’s assistant can act in a semi-autonomous way. That means it may take small decisions without explicitly asking you every single time, but for “sensitive” actions, it is supposed to pause and ask.
-
User Choice for Control: In its recent update, Perplexity introduced a refined “omnibar” (search bar) that lets users pick how much autonomy Comet’s assistant should have: manual browsing, one-time assistance, or automatic stepping in.
In short, Comet is a powerful tool, blending browsing and AI in a way that could redefine how many of us interact with the web.
The Big Security Concerns: Why Comet Raised Eyebrows
When Comet first made waves, security researchers were excited but also deeply worried. Several vulnerabilities began emerging quickly, showing that agentic browsers bring novel risks. Let’s break down the key concerns.
1. Prompt Injection Vulnerabilities
One of the most serious issues that researchers uncovered is known as indirect prompt injection. The problem? Comet sometimes fails to clearly separate user instructions from the webpage content it is summarizing or processing.
Here’s how the attack works:
-
An attacker hides malicious instructions within a webpage. These instructions might be invisible or disguised (white text on white background, HTML comments, or hidden in other ways).
-
When the user asks Comet to “summarize this page,” the agent fetches the content of the page and feeds it into its language model without filtering out the hidden, malicious part.
-
The AI, thinking it’s executing a user command, ends up following those malicious instructions which could involve navigating to a user’s email or bank page, extracting sensitive information, or performing actions the user never intended.
-
Bravery’s researchers demonstrated a proof-of-concept where Comet fetched a Gmail OTP (one-time password) and revealed it to an attacker.
This kind of vulnerability is especially dangerous because it bypasses classical web security protections. Traditional safeguards like the same-origin policy (SOP) or cross-origin resource sharing (CORS) assume that content on a page can’t execute privileged actions outside its domain. But when an AI agent is interpreting instructions from that page, those protections break down.
2. CometJacking and One-Click Data Theft
Even more alarming is a threat discovered by security researchers at LayerX, dubbed “CometJacking”. This attack doesn’t rely on stealing passwords — instead, it abuses the AI’s existing privileges:
-
A malicious URL crafted in a certain way triggers the Comet agent via a hidden prompt embedded in the link.
-
When a user clicks this URL (which might look innocuous), Comet’s assistant reads encoded instructions (for example, some Base64-encoded payload) and, because the assistant already has access to Gmail, Calendar, or other connected services, it can exfiltrate data without re-prompting for credentials or warnings.
-
In practical terms: with one click, the AI can be tricked into sending your email or calendar data to an attacker-controlled endpoint. The “trusted-but-evil” link makes the browser itself a command-and-control point.
What’s frightening about CometJacking is that it abuses trusted connectors not by cracking login, but by commanding the assistant to do what it’s already authorized to do. Perplexity initially downplayed the issue, saying it had “no security impact.” But many researchers and security experts strongly disagree.
3. Phishing and Fraudulent Purchases
Guardio Labs, another security research firm, uncovered yet more worrying scenarios:
-
They created a fake e-commerce storefront and told Comet, “Buy me an Apple Watch.” The AI followed through: it added the watch to the cart, loaded saved credit card details, and checked out all autonomously.
-
In another test, researchers sent a phishing email (fake Wells Fargo, for example) to a user. Comet did not flag any issue instead, it promptly visited the link and suggested that the user log in, effectively helping the user fall for a scam.
-
Guardio’s argument is chilling: human intuition, the gut sense we use to detect sketchy links or phishing, is bypassed when an AI agent is in control.
4. User Trust and Transparency
Beyond the technical vulnerabilities, there’s a big question about trust and transparency:
-
Some users on Reddit and other forums are worried that Perplexity isn’t being sufficiently open about which vulnerabilities are patched and which are still live.
-
According to one user, after the safety upgrade, Comet has become overly cautious: it refuses to do certain tasks, citing “security restrictions” or “prompt injection risks.”
-
That friction is real: users expected a high degree of autonomy, but many now feel constrained by the newly tightened guardrails.
Perplexity’s Response: New Safety Barriers, But Is It Enough?
Faced with growing scrutiny, Perplexity has taken steps to shore up Comet’s safety. On November 14, 2025, the company announced a set of new user controls and security mechanisms built into Comet Assistant. Here’s a breakdown of what changed, and what those changes mean.
Major Upgrades in the Update
-
Explicit Confirmation for Sensitive Actions
-
Comet will now pause and ask permission before performing “high-stakes” tasks like logging into websites or making a purchase.
-
This is a big deal: earlier, some of these actions could be more automated, depending on how much autonomy the user granted. Now, certain operations require explicit user consent.
-
-
Visibility Into What the Assistant Is Doing
-
A new assistant “sidecar” shows a step-by-step breakdown of what the AI agent is doing behind the scenes, including its decision-making rationale.
-
There are also clear buttons to halt the assistant or to guide it more precisely, giving users more immediate control.
-
-
More Granular Control Over Autonomy
-
In the omnibar, users can now choose between:
-
Browsing manually,
-
Allowing Comet to help only once,
-
Or letting the assistant automatically step in when it detects it can help.
-
-
This helps mitigate risk: users can dial back the autonomy if they prefer tighter control.
-
-
Reinforcement of Core Principles
-
Perplexity says these changes reflect its foundational principles: transparency, user control, and sound judgment.
-
By showing exactly what the assistant is doing and requiring permissions for sensitive tasks, the company tries to strike a balance between powerful automation and responsible AI.
-
Why These Changes Matter And Where Risks Still Linger
Perplexity’s updates are meaningful, but they don’t fully eliminate the threat landscape. Here’s a closer look at what’s improved and what remains concerning.
Where the New Barriers Help
-
User Awareness: By making the assistant’s actions visible, users are less likely to be “in the dark” about what the AI is doing. Transparency builds trust.
-
Checkpoint for Risky Actions: The confirmation pause before sensitive operations is a solid mitigation. It means an attacker can’t always rely on total automation some actions will have human eyeballs.
-
Flexible Autonomy: Not every user wants full AI autonomy. Giving users the choice helps those who are more cautious. It’s also a pragmatic hedge against risk.
But These Don’t Solve All Problems
-
Prompt Injection Is Not Fully Solved: Indirect prompt injection still remains a deep, structural problem. Even with better UI controls, the core vulnerability of mixing web content with user instructions is tricky to eliminate entirely.
-
CometJacking Risk: Sophisticated “CometJacking” attacks (via URL-based hidden prompt injection) may still be possible, because the AI agent continues to have broad access to connected services.
-
User Behavior: Not all users will carefully monitor the assistant’s actions or refuse permission every time. Some will blindly trust the AI, especially if they’re used to delegating.
-
Future Exploits: As researchers evolve their techniques, new vulnerabilities could emerge. Agentic browsers are a new frontier, and security models are still catching up.
-
Performance Trade-Off: According to some user feedback, the stronger security may be making Comet less “agentic” in practice.
-
Trust Deficit: Some in the community feel Perplexity’s response hasn’t been transparent enough, and patching without fully acknowledging the risks may not be sufficient.
Why Agentic Browsers Are Under Scrutiny
Comet isn’t alone. The debate around its security is part of a larger conversation about AI-native, Perplexity browsers and whether their benefits outweigh the risks.
What Makes Agentic Browsers Attractive
-
Time-saving: These Perplexity browsers promise to literally do tasks for you: summarizing, shopping, scheduling.
-
AI as a Co-Pilot: Instead of just giving you search results, the AI can act. That’s a transformative shift in how we interact with the web.
-
Automation Potential: For power users, enterprises, or people juggling many online tasks, agentic agents could be tremendously valuable.
But the Risks Are Real and Novel
-
New Attack Surface: Traditional Perplexity browsers have well-known security boundaries. Perplexity browsers create a new kind of “privileged agent” that interacts autonomously. That’s a completely different threat model.
-
Prompt Injection: As seen with Perplexity Comet, attackers can hide instructions that the AI might interpret as valid commands. Unlike typical code injection, this is about manipulating language.
-
Phishing at Scale: Because the AI can navigate and transact, phishing becomes potentially more dangerous: the user is not directly clicking or entering data; the AI is doing it.
-
Trust Misalignment: Users may overtrust the AI, especially when they don’t understand exactly how decisions are being made or when the assistant runs in the background.
-
Account Compromise: If the agent has access to authenticated sessions (email, cloud storage, banking), exploiters could gain a lot more than just passwords.
-
Regulatory and Ethical Issues: Who’s responsible when the AI makes a mistake? If the assistant buys something unwanted, who refunds the bill? And how do you audit what the AI did?
What Comes Next: Recommendations and Outlook
Perplexity’s move to tighten safety is a positive step but for agentic browsers to become truly safe and trustworthy, more needs to happen. Below are some recommendations and potential future scenarios.
For Perplexity (and Other Agentic Browser Makers)
-
Continuous Security Auditing: Use independent teams (like Brave, Guardio, LayerX) to audit Comet regularly, not just after major vulnerabilities are exposed.
-
Formal Security Roadmap: Publish a transparent roadmap that explains which vulnerabilities have been patched, which are being worked on, and how new defenses will evolve.
-
User Education: Many users may not fully understand the risks of delegating web tasks to an agent. Educational prompts, clear UI indicators, and regular reminders could help.
-
Permission Granularity: Allow even more fine-grained control: not just “ask me for purchases,” but “disallow purchases from unknown/new sites,” or “require 2FA for finance-related transactions.”
-
Behavioral Anomaly Detection: Build in monitoring to detect if the assistant’s actions deviate from typical behavior (for example, if it’s exfiltrating data or navigating to suspicious domains).
For Users
-
Be Cautious with Sensitive Accounts: Avoid connecting high-risk services (banking, email) until you fully understand the security tradeoffs.
-
Use the New Controls: Leverage the omnibar’s autonomy settings. For safety-critical operations, prefer “manual” or “ask every time.”
-
Verify Before You Click: Even though Comet may act for you, always double-check what it plans to do especially when link navigation or purchases are involved.
-
Stay Informed: Follow security researchers or Perplexity’s own updates. If there’s a major vulnerability or patch, you should know.
For Regulators and Industry
-
Security Standards: Work toward defining security norms for agentic Perplexity browsers analogous to how password managers or browsers are regulated today.
-
Disclosure Requirements: Mandate that AI browser vendors disclose major security risks, architecture decisions, and how they are mitigating prompt injection or similar threats.
-
User Protection Mechanisms: Explore policies that protect users from unauthorized transactions initiated by AI agents (refund rights, dispute resolution, etc.).
A Turning Point in Web Evolution
Perplexity’s Comet browser is more than just a product it’s a symbol of how AI is beginning to reshape how we experience the web. The idea that an AI agent can browse, summarize, shop, and interact for you is not just ambitious; it could be revolutionary.
But innovation like this comes with growing pains. The security risks unveiled in Comet’s early days prompt injection, CometJacking, phishing exploitation show that we’re still in the early phase of understanding what agentic browsers mean for privacy, trust, and safety.
Perplexity’s addition of safety barriers is a constructive step forward. By forcing permission for sensitive actions, giving users more visibility into what the assistant is doing, and offering more control, the company is acknowledging the severity of the threats. But these are only mitigations, not cure-alls.
Ultimately, the future of agentic browsers will be decided by how well companies like Perplexity balance power and responsibility. If they get it right, we may well enter a new age of intelligent, proactive web interaction but if they get it wrong, the risks could be severe.
For now, Comet is a powerful experiment, a bold bet on the next generation of browsing. Users, researchers, and regulators will all be watching closely to see whether that bet pays off or backfires. To know more about Perplexity browser subscribe Jatininfo.in now.











