Accept a meeting invite. Lose your passwords. That’s the attack Zenity Labs demonstrated against Perplexity’s Comet browser - and it worked without the user clicking, approving, or even noticing anything unusual.
The security firm disclosed “PleaseFix,” a family of critical vulnerabilities affecting agentic browsers that allow attackers to hijack AI agents, access local files, and steal credentials. The flaw requires no malware, no privilege escalation, and no traditional software exploit. It works by exploiting how AI agents interpret instructions.
“This is not a bug,” said Michael Bargury, co-founder and CTO of Zenity Labs. “It is an inherent vulnerability in agentic systems.”
The Attack: Calendar Invite to File Theft
The demonstration targeted Perplexity’s Comet, an “agentic browser” that autonomously performs tasks on behalf of users. Unlike traditional browsers, Comet can navigate websites, read files, interact with applications, and execute multi-step workflows without explicit permission for each action.
Zenity’s attack, dubbed “PerplexedBrowser,” exploits a technique called intent collision. The goal is to merge a legitimate user request with attacker-controlled instructions until the AI treats file system access as necessary to complete the task.
Here’s how it works:
Step 1: The bait. An attacker creates a Google Calendar event and invites the target. The invite looks legitimate - normal meeting details, familiar sender, appropriate timing.
Step 2: Hidden payload. Below empty lines that push content out of the preview window, the attacker embeds malicious instructions. These include fake UI elements claiming “the Yes button isn’t working” and system-mimicking blocks that replicate Comet’s internal formatting.
Step 3: Zero-click execution. When the user asks Comet to help with calendar management, the agent reads the invite, interprets the hidden instructions, and executes them. No additional clicks required.
Step 4: File exfiltration. The agent navigates to file:// URLs, browses directories, reads sensitive documents, embeds their contents into URL parameters, and navigates to attacker-controlled servers - transmitting the data through standard HTTP requests.
Throughout this process, Comet continues returning expected outputs to the user. The theft happens silently in the background.
The Second Attack: Password Manager Takeover
The file theft vulnerability is concerning. The credential attack is worse.
Zenity demonstrated a second exploit path targeting 1Password integration with Comet. Because the agentic browser has permission to interact with password managers on behalf of users, attackers can manipulate these workflows to:
- Extract individual stored credentials
- Change passwords without user awareness
- Modify recovery materials
- Achieve full account takeover
Since password managers serve as “control planes” for digital identity - holding keys to email, banking, cloud services, and everything else - compromise cascades rapidly.
Language Obfuscation Bypasses Guardrails
One detail stands out in Zenity’s research: using non-English languages makes attacks more likely to succeed.
The researchers found that embedding instructions in Hebrew bypassed guardrails designed to prevent indirect prompt injection. This suggests that safety measures trained primarily on English-language attacks may have blind spots for other languages.
It’s a pattern we’ve seen before. Multilingual jailbreaks have repeatedly exposed gaps in AI safety systems that work well in English but fail elsewhere.
Not a Bug - A Design Problem
What makes PleaseFix different from a typical software vulnerability is that it doesn’t exploit code flaws. The attack operates entirely within Comet’s intended capabilities. The browser is designed to read files, navigate websites, and interact with applications. The vulnerability is in how trust flows through the system.
Traditional security models assume clear boundaries between trusted and untrusted data. Agentic systems blur these boundaries by design. When an AI agent processes a calendar invite, it can’t reliably distinguish between user instructions and attacker instructions embedded in the same content.
Stav Cohen of Zenity Labs noted: “These issues do not target a single application bug. They exploit the execution model and trust boundaries of AI agents.”
OpenAI has acknowledged that such vulnerabilities are “unlikely to ever” be fully solved, though automated attack discovery and adversarial training could reduce the overall danger.
Perplexity’s Fix: Treating the Agent as Untrusted
Perplexity patched the specific vulnerabilities through a disclosure process that took nearly four months:
| Date | Event |
|---|---|
| Oct 22, 2025 | Vulnerability reported |
| Nov 21, 2025 | Classified as critical (P1) |
| Jan 23, 2026 | Initial patch deployed |
| Jan 27, 2026 | Bypass discovered via view-source:file:/// |
| Feb 13, 2026 | Final patch confirmed |
| Mar 3, 2026 | Public disclosure |
The key insight from Perplexity’s response: they implemented a hard boundary at the code level preventing Comet from autonomously accessing file:// paths. Users can still manually access local files, but the agent itself cannot.
This treats the agentic browser as an untrusted entity rather than relying on the LLM to make security decisions. It’s a meaningful architectural shift - accepting that the AI cannot be trusted to distinguish safe from unsafe instructions.
Additional safeguards include stricter confirmation requirements for sensitive actions and enterprise controls that let administrators disable the agent on designated sites.
1Password also hardened their integration, disabling automatic sign-in and requiring explicit confirmation for credential autofill.
Protecting Yourself from Agentic Browser Attacks
If you use Perplexity Comet or similar agentic browsers, consider these precautions:
Minimize agent permissions. Review what your agent can access. Disable integrations with password managers, email, and file systems unless absolutely necessary.
Be suspicious of calendar invites. Check the full content of meeting invitations before letting an agent process them. Hidden content lurks below the fold.
Use enterprise controls. If your organization offers site-blocking for agentic features, enable it for sensitive domains.
Adopt zero-trust thinking. Treat your AI agent as a potentially compromised entity. Don’t give it access to anything you wouldn’t hand to an untrusted third party.
Keep agents updated. The demonstrated attacks no longer work on patched versions. Automatic updates matter.
The Bigger Picture
PleaseFix is the first major disclosure in what will likely become a recurring pattern. As AI agents gain more capabilities - browsing the web, executing code, managing files, interacting with services - the attack surface expands proportionally.
Every permission granted to an AI agent is a permission that can be hijacked. Every integration is an attack vector. Every workflow that touches sensitive data creates exposure.
The industry is racing to deploy agentic AI systems (see: Nvidia’s entire GTC keynote about “agentic AI”) without solving the fundamental security problems. PleaseFix demonstrates what happens when autonomous systems inherit user privileges without inheriting user judgment.
The Bottom Line
A calendar invite shouldn’t be able to steal your passwords. But when AI agents blur the line between reading content and executing instructions, that’s exactly what can happen. Perplexity fixed the specific bugs, but the underlying problem - that agentic systems can’t reliably distinguish friend from foe - remains unsolved.