Eighty-eight percent of organizations have experienced confirmed or suspected AI agent security incidents in the past year. More than half of all deployed agents operate without any security monitoring. And 64% of billion-dollar companies have lost more than a million dollars to AI failures.
Into this mess drops Lobster Trap: a single-file, MIT-licensed security proxy that watches every conversation between your AI agents and the models they talk to.
Veea Inc. announced the open-source release at Mobile World Congress 2026 in Barcelona on March 2. The tool solves a specific problem: AI agents communicate with language models through API calls, and nobody’s watching what goes in or comes out.
What It Does
Lobster Trap runs inline between AI agents and whatever LLM they’re talking to. Every prompt from the agent and every response from the model gets evaluated against security policies before anything proceeds.
Out of the box, it detects:
- Prompt injection attempts
- Credential exposure
- Personal information leakage
- Suspicious file access patterns
- Data exfiltration
When it catches something, it can block the interaction, flag it for review, or just log it for later analysis. The scanning happens in under a millisecond, adding no meaningful latency.
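The detect-then-decide model described above can be sketched in a few lines. This is a minimal illustration of pattern-based scanning with block/flag/log outcomes, not Lobster Trap's actual implementation; the rule names, patterns, and action labels here are all hypothetical.

```python
import re
from dataclasses import dataclass

# Hypothetical rules illustrating the block/flag/log model;
# not Lobster Trap's actual rule set or API.
@dataclass
class Rule:
    name: str
    pattern: re.Pattern
    action: str  # "block", "flag", or "log"

RULES = [
    Rule("prompt_injection", re.compile(r"ignore (all )?previous instructions", re.I), "block"),
    Rule("aws_credential", re.compile(r"AKIA[0-9A-Z]{16}"), "block"),
    Rule("email_pii", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "flag"),
    Rule("sensitive_file", re.compile(r"/etc/passwd"), "log"),
]

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule name, action) for every rule the text matches."""
    return [(r.name, r.action) for r in RULES if r.pattern.search(text)]

def decide(text: str) -> str:
    """Overall verdict: block wins over flag, flag wins over allow."""
    actions = {action for _, action in scan(text)}
    if "block" in actions:
        return "block"
    if "flag" in actions:
        return "flag"
    return "allow"
```

A proxy applies a check like this to every prompt and response before forwarding it; sub-millisecond scanning is plausible because compiled regex matching on short payloads is cheap.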
The technical design is deliberately minimal. Written in Go, it compiles to a single binary with no external dependencies. It runs on Linux, macOS, or Windows. If your AI setup uses an OpenAI-compatible API - which most local deployments do - you can drop it in without changing your application code.
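In practice, "no application changes" usually amounts to redirecting the base URL of your OpenAI-compatible client. A hedged sketch, assuming the proxy listens on localhost:8080 and Ollama serves its OpenAI-compatible API on the default port 11434 (the proxy's actual listen address and path layout are whatever its documentation specifies):

```python
import json
import urllib.request

# Before: the agent talks directly to a local model backend.
# After: only the base URL changes; the proxy forwards to the backend.
DIRECT_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
PROXY_BASE = "http://localhost:8080/v1"    # hypothetical Lobster Trap listen address

def chat_request(base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against any base URL."""
    payload = {
        "model": "llama3",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# The request body is identical either way; only the host changes.
direct = chat_request(DIRECT_BASE, "hello")
proxied = chat_request(PROXY_BASE, "hello")
```

Because the wire format is unchanged, swapping the endpoint is a one-line configuration edit in most agent frameworks rather than a code change.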
Why This Matters Now
The timing isn’t coincidental. A recent survey of over 900 executives and technical practitioners revealed what the industry calls a “confidence paradox”: 82% of executives feel their policies protect against unauthorized agent actions, yet only 14.4% of organizations have full security approval for their entire agent fleet.
The numbers get worse the deeper you look:
- Organizations monitor an average of only 47.1% of their AI agents
- Just 21.9% treat AI agents as independent, identity-bearing entities
- 45.6% still rely on shared API keys for agent-to-agent authentication
- Healthcare reported a 92.7% incident rate
The security industry reports similar findings. Eighty percent of surveyed organizations have documented risky autonomous agent behaviors, including unauthorized system access and data exposure. Only 21% of executives have complete visibility into what their agents are actually doing.
The Shadow AI Problem
Here’s the number that should concern anyone running local AI: 63% of employees using AI tools in 2025 pasted sensitive company data into personal chatbot accounts. Enterprises average approximately 1,200 unofficial AI applications in active use. That’s shadow IT, but for AI - and it costs an average of $670,000 more per breach than standard security incidents.
This is exactly why local deployment matters. Running models on your own hardware keeps data in-house. But local doesn’t automatically mean secure. If your locally-running agent can still be manipulated through prompt injection or tricked into leaking data, you’ve just moved the vulnerability closer to your sensitive systems.
Lobster Trap addresses this by creating an audit trail and enforcement layer that works whether you’re running Ollama on a laptop or a production cluster. Everything stays local. The tool runs on your infrastructure, logs stay on your infrastructure, and the MIT license means you can modify it however you need.
Practical Deployment
A partnership with NativelyAI gives Lobster Trap immediate distribution to over 250,000 AI developers through the lablab.ai builder community. It's being packaged within Native.Builder, allowing development teams to deploy AI agents with policy enforcement built in from the start.
For existing deployments, the integration path is straightforward: point your agent’s API calls through Lobster Trap instead of directly at your model backend. The tool handles the rest.
Configuration uses standard policy files. You define what’s allowed, what’s blocked, and what just gets flagged. The project includes example policies for common scenarios like development environments, customer-facing agents, and high-security deployments.
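A policy file along these lines conveys the allow/block/flag split. The schema below is a hypothetical sketch for illustration; the real format is defined by the project's bundled example policies.

```yaml
# Hypothetical policy sketch -- not Lobster Trap's actual schema.
policies:
  - name: prompt-injection
    match: regex
    pattern: "ignore previous instructions"
    action: block
  - name: email-pii
    match: regex
    pattern: "[\\w.+-]+@[\\w-]+\\.[\\w.]+"
    action: flag
  - name: sensitive-file-access
    match: substring
    pattern: "/etc/passwd"
    action: log
```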
What It Can’t Do
Lobster Trap isn’t magic. It catches known patterns in the communication between agents and models. It won’t detect a model that’s been fine-tuned to behave maliciously, catch semantic attacks that don’t match its patterns, or protect against vulnerabilities in the agent code itself.
The tool also assumes you trust the language model on the other end of the connection. If you’re calling a cloud API, that traffic leaves your network regardless. For true isolation, you need local models - and Lobster Trap works perfectly well with Ollama, llama.cpp, vLLM, or any other OpenAI-compatible local backend.
The Bigger Picture
NIST launched an AI Agent Standards Initiative in January to develop security frameworks for autonomous AI systems, and is collecting public input on agent security considerations through the Federal Register. Prompt injection sits at the top of OWASP's 2025 LLM security risks.
The industry knows there’s a problem. What’s been missing is practical, accessible tooling that organizations can actually deploy today.
Lobster Trap doesn’t solve everything, but it solves something concrete: visibility into what your AI agents are doing, and the ability to stop them when they shouldn’t. For a tool you can set up in ten minutes and that adds under a millisecond of latency, that’s not a bad start.
The Bottom Line
If you’re running AI agents locally - or thinking about it - Lobster Trap is worth evaluating. It’s free, it’s fast, and it provides a security layer that most local deployments are currently missing. Given that 88% of organizations have already experienced agent security incidents, the question isn’t whether you need monitoring. It’s whether you’re going to deploy it before or after something goes wrong.
Code is available at github.com/veeainc/lobstertrap.