Your AI Browser Can Steal Your Passwords via a Calendar Invite

Zenity Labs reveals how a malicious calendar event could let attackers hijack Perplexity's Comet browser to exfiltrate local files and take over your 1Password account.

Accepting a calendar invite could let attackers steal your files and passwords. Security researchers at Zenity Labs have disclosed a family of vulnerabilities in AI-powered browsers that allowed malicious actors to hijack your AI agent using nothing more than a Google Calendar event.

The vulnerabilities, collectively called “PleaseFix,” were found in Perplexity’s Comet browser and highlight a fundamental security problem with agentic AI systems: they can’t reliably tell the difference between your instructions and an attacker’s.

How the Attack Worked

The exploit was elegant in its simplicity. Attackers crafted a calendar invitation with legitimate-looking meeting details at the top - participants, agenda, Zoom link - the standard fare that makes you click “accept” without thinking.

Below several empty lines that humans naturally skip but AI agents read completely, the attackers embedded hidden instructions. The payload included fake HTML button elements with node identifiers that mimicked genuine UI controls, system reminder blocks formatted to match Comet’s internal structure, and crucially, text written in Hebrew to potentially bypass English-language guardrails against prompt injection.

Once processed, the AI agent followed a devastating sequence:

  1. Navigate to an attacker-controlled website in “background mode” (no visible browser activity)
  2. Treat the website’s instructions as authoritative guidance
  3. Access the local file system via file:// paths
  4. Search directories for sensitive files
  5. Read file contents and embed them in URL parameters
  6. Send everything to the attacker through ordinary page loads

The entire attack required zero clicks beyond accepting the calendar invite. The agent continued returning normal responses while silently exfiltrating your data.

1Password Takeover

The file exfiltration was bad enough. But researchers demonstrated something worse.

“Once the 1Password extension is installed in the Comet browser and is unlocked, we can instruct Comet to go to the extension URL and hijack your 1Password account,” explained Michael Bargury, CTO of Zenity Labs.

The attack didn’t exploit any vulnerability in 1Password itself. Instead, it weaponized the AI agent’s legitimate access. If you’d given Comet permission to interact with your password manager (as many users do for convenience), the agent could navigate to account settings, change passwords, and extract recovery materials - all while you thought it was just booking your meeting room.

Not a Bug, an Architecture Problem

What makes PleaseFix particularly concerning isn’t what Perplexity did wrong. It’s what every agentic AI system does by design.

“This is not a bug. It is an inherent vulnerability in agentic systems,” Bargury stated.

The vulnerability stems from what researchers call “intent collision” - the AI merging your benign requests with attacker instructions from untrusted content. The agent simply cannot reliably distinguish between the two, treating them as a single coherent task.

“Anything that you put out on the internet that the user interacts with is being fed into the LLM’s context,” Bargury told The Register. “And so the attack surface is massive.”

Every email you open, every document you view, every website you visit becomes a potential vector for prompt injection. The more capable and autonomous your AI agent, the more damage an attacker can do once they’ve hijacked it.

Perplexity’s Response

The specific vulnerabilities in Comet have been patched. After Zenity reported the issues on October 22, 2025, Perplexity implemented fixes in stages:

  • January 23, 2026: Initial patch blocked file:// access at the code level
  • January 27, 2026: Zenity identified a bypass using view-source:file:/// prefixes
  • February 13, 2026: Second patch confirmed effective

Perplexity also added stricter user confirmations for sensitive actions and enterprise controls to disable agents on designated sites. The company declined to comment publicly.

1Password Reacts

1Password published a security advisory on January 30, 2026, adding hardening options for users of AI browsers:

  • New ability to disable automatic sign-in for the 1Password web app
  • Team policy controls for automatic sign-in permissions
  • Recommendations to lock the extension when browsing untrusted content
  • Confirmation prompts for sensitive information like credit cards

The company emphasized that their cryptography and vault design remain secure - the risk comes from AI agents leveraging your existing permissions, not from breaking security controls.

The Bigger Picture

PleaseFix isn’t an isolated incident. It’s a preview of the security landscape as AI agents gain more autonomy.

These “agentic browsers” promise to handle tasks across applications without constant human oversight. Comet can browse the web, manage your calendar, compose emails, and yes, interact with your password manager. That power comes with proportional risk.

The vulnerability family’s name is itself instructive. PleaseFix is a riff on ClickFix, a social engineering technique where attackers trick users into executing malicious actions. But where ClickFix requires convincing a human, PleaseFix targets AI agents - which can be manipulated without any human involvement at all.

What You Should Do

If you use any AI-powered browser or agent:

Lock down your password manager: Enable shorter lock timeouts. Require confirmation for all autofill. Disable automatic sign-in to web apps. Treat your password manager like what it is - the keys to your digital life.

Review agent permissions: What can your AI assistant access? Does it really need to read local files? Control your calendar? Access browser extensions? Reduce the blast radius of a potential compromise.

Be skeptical of calendar invites: This particular vector has been patched, but the principle remains. Any content your AI agent processes is an attack surface. Unexpected meeting requests from unfamiliar senders deserve extra scrutiny.

Consider whether you need agentic browsers at all: The convenience of AI automation comes with security tradeoffs that the industry hasn’t solved. For high-sensitivity work, a traditional browser with manual control may still be the safer choice.

The Bottom Line

Your AI browser can be turned against you through content you’d normally ignore. Perplexity’s Comet has been patched, but the underlying architecture problem - AI agents that can’t distinguish your intentions from an attacker’s - isn’t going away. The more autonomy we grant these systems, the more we need to think about what happens when that autonomy is hijacked.