An AI Dinosaur Toy Exposed 50,000 Children's Private Conversations to Anyone With a Gmail Account

Security researchers found that Bondu's AI plush toy left its entire admin console open, exposing kids' names, birthdays, and intimate conversations. A senator wants answers.

Security researchers Joseph Thacker and Joel Margolis wanted to know how safe AI-powered toys really were for their own kids. So they started poking at Bondu, a company selling AI dinosaur plush toys aimed at children ages 3 to 9. What they found took only minutes to uncover and should have been impossible: the company’s admin console was open to anyone with a Gmail account.

No special tools. No exploits. Just click “Login with Google,” use any Gmail address, and you’re inside - staring at roughly 50,000 conversation transcripts between small children and their AI dinosaur. Names, birthdays, family details, schools, daily routines, and the kind of unguarded things kids say when they think they’re talking to a stuffed animal.
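Based on the researchers’ description, the console’s core failure was conflating authentication with authorization: it confirmed a visitor had signed in with some valid Google account, but never checked which account it was. A minimal Python sketch of that class of bug (all names and the allowlist are hypothetical, not Bondu’s actual code):

```python
# Hypothetical sketch: authentication (a valid Google sign-in) is not
# the same thing as authorization (permission to use the admin console).

ADMIN_ALLOWLIST = {"admin@toy-company.example"}  # hypothetical allowlist

def console_access_vulnerable(verified_email):
    """The bug: any verified Google account is let in."""
    return verified_email is not None

def console_access_fixed(verified_email):
    """The fix: the verified account must also be on an explicit allowlist."""
    return verified_email is not None and verified_email in ADMIN_ALLOWLIST

print(console_access_vulnerable("anyone@gmail.com"))  # True -> anyone gets in
print(console_access_fixed("anyone@gmail.com"))       # False
```

The one-line difference is the whole vulnerability: the vulnerable version asks “is this a real Google user?” when the question should be “is this one of our admins?”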

What Was Exposed

The Bondu admin console at console.bondu.com gave full access to the company’s backend. Thacker and Margolis found they could view chat transcripts, device information, and child profiles - including IP-based location data, battery status, and whether a device was currently online. They could also trigger firmware updates and device reboots remotely.

On top of the open console, the researchers discovered an IDOR (Insecure Direct Object Reference) vulnerability that let them pull up any child’s profile simply by incrementing sequential ID numbers. This meant that even if the console login had been locked down, an attacker could still have walked through every child in the system.
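An IDOR of the kind described can be illustrated with a toy in-memory sketch (the data, names, and functions here are invented for illustration): the lookup trusts whatever ID the caller supplies and never checks who is asking, so sequential IDs let one account walk the entire table.

```python
# Hypothetical in-memory illustration of an IDOR with sequential IDs.
PROFILES = {1: "child_a", 2: "child_b", 3: "child_c"}    # invented data
OWNERS   = {1: "parent_a", 2: "parent_b", 3: "parent_c"}

def get_profile_vulnerable(requester, profile_id):
    # The bug: no object-level authorization check at all --
    # any requester can fetch any record by ID.
    return PROFILES.get(profile_id)

def get_profile_fixed(requester, profile_id):
    # The fix: verify the requester owns the record before returning it.
    if OWNERS.get(profile_id) != requester:
        return None
    return PROFILES.get(profile_id)

# One account can enumerate every record on the vulnerable endpoint:
leaked = [get_profile_vulnerable("parent_a", i) for i in range(1, 4)]
print(leaked)  # ['child_a', 'child_b', 'child_c']
```

Random, non-guessable IDs would slow this attack down, but the real defense is the ownership check: every read should verify the requester’s right to that specific object.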

The chat logs themselves contained exactly the kind of data that makes this breach particularly alarming. Children shared their names, ages, and birthdays. They talked about parents and siblings by name. They mentioned their schools, their fears, their daily routines. These are the raw materials for social engineering, identity theft, or worse - and the targets here are children too young to understand what a data breach is.

How Bondu Responded

Bondu moved fast once Thacker reported the issue on January 10, 2026. The company took the console offline within minutes of the disclosure and patched the vulnerabilities by the next day. During their internal audit, Bondu’s team found additional row-level security issues beyond what the researchers had reported and fixed those too. The CEO said they found “no evidence” of unauthorized access by anyone other than the researchers and brought in an external security firm.

Credit where it’s due: the response was swift and apparently thorough. But the vulnerability existed from the moment the product shipped. There’s no telling how long that console sat open before two security researchers decided to test it as a side project.

A Senator Wants Answers

On February 3, U.S. Senator Maggie Hassan - ranking member of Congress’s Joint Economic Committee - sent a letter to Bondu demanding the company explain how this happened and what it’s doing to prevent it from happening again.

Hassan called the exposure a threat that “raises serious concerns regarding the business model” and the company’s ability to protect child safety. Her letter specifically asked about Bondu’s data security practices, its plans for preventing future incidents, and the scope of data that was accessible.

She’s not the only one in Congress paying attention. Senators Cantwell, Klobuchar, and Markey have separately raised concerns about AI in toys, and in California, Senator Steve Padilla introduced SB 867, a bill that would impose a four-year moratorium on selling AI chatbot toys to children under 12.

A Pattern, Not an Incident

Bondu isn’t an isolated case. It’s the latest in a growing list of AI toy failures that collectively paint a picture of an industry shipping products to the most vulnerable users with the least amount of security testing.

In November 2025, FoloToy’s AI teddy bear Kumma was pulled from sale after researchers from the Public Interest Research Group found it discussing BDSM, helping children locate knives and matches in their homes, and generating sexually explicit content when prompted. The bear used OpenAI’s language model with inadequate content filtering - it was a $99 pipeline straight from a child’s voice to an unrestricted LLM.

In September 2025, the FTC settled with Apitor, a robot toy maker whose companion app was secretly sending children’s geolocation data to servers in China through a third-party SDK. The app required location permissions to program the robots, then a hidden component called JPush forwarded that data overseas without parental consent.

And in February 2026, Common Sense Media published research finding that more than a quarter of AI toy outputs included inappropriate content such as mentions of self-harm, drugs, and risky behaviors. Their recommendation: no AI companion toys for children 5 and under, and extreme caution for ages 6 to 12.

The Regulatory Gap

Under COPPA - the Children’s Online Privacy Protection Act - companies collecting data from children under 13 need verifiable parental consent, clear privacy disclosures, and reasonable data security measures. The FTC has confirmed that COPPA applies to connected toys and IoT devices, not just websites and apps.

But enforcement is slow, penalties are small, and the AI toy market is growing faster than any regulator can monitor. Apitor’s COPPA penalty was $500,000 - which was then suspended because the company said it couldn’t afford to pay. That’s the deterrent facing a global market of AI toy manufacturers racing to ship products before regulations catch up.

California’s SB 867 represents the most aggressive legislative response so far, but it only covers one state and hasn’t passed yet. At the federal level, there’s congressional interest but no comprehensive legislation. The existing COPPA framework was designed for websites in the early 2000s, not for AI-powered plush animals that record children’s voices and store their conversations on cloud servers.

What Parents Can Do

Assume the toy is recording everything. If your child’s toy connects to the internet and responds to voice commands, it’s collecting data. Check what data the manufacturer collects, where it’s stored, and who can access it before handing the toy to a child.

Check the company’s security practices. Look for a privacy policy that specifically addresses children’s data. If the policy is vague or doesn’t mention COPPA compliance, that’s a red flag.

Consider whether the toy needs AI. A stuffed dinosaur that tells stories and answers questions sounds appealing. But the same functionality comes from a parent reading a book, without the data collection, content filtering failures, and security vulnerabilities that come with routing a child’s voice through a cloud API.

Monitor conversations. If you do use an AI toy, periodically check what your child is discussing with it. The Bondu breach revealed that children share surprisingly personal information with toys they trust - information that becomes a liability when it’s stored in a database.

The Bottom Line

A company built an AI toy for young children, gave it a cloud backend that stored every word those children said, and then left the admin panel open to anyone with a Gmail account. The vulnerability was trivial to find and trivial to exploit. That it was discovered by researchers and not by someone with worse intentions is luck, not design.

The AI toy industry has a choice: build security into products from day one, or wait for a breach that involves more than researchers doing the right thing. Given the track record so far, the smart money isn’t on the industry making the right call.