AI News: The 'AI Bowl' lands with a thud as viewers reject tech's biggest ad blitz

Daily roundup for February 9, 2026 covering Super Bowl LX AI ad backlash, New York's FAIR News Act targeting AI-generated journalism, shadow AI security risks in the workplace, and the AI regulation standoff between federal and state governments

Top Stories

The morning-after verdict on Super Bowl LX’s AI advertising blitz is in, and it’s brutal. At least eight advertisers leaned on AI during the broadcast, earning the event the nickname “the AI Bowl” on social media - and not as a compliment. NFL fans reported being “sick of AI ads” before the first quarter was over.

The numbers tell the story. According to iSpot, Anthropic’s Claude campaign landed in the bottom 3% of Super Bowl ad likeability over the past five years, and purchase intent scored 24% below Super Bowl norms. The most common viewer reaction in surveys: “WTF.” Only 7% of respondents reported using Claude, compared to 73% for ChatGPT - raising the question of why Anthropic spent tens of millions reaching an audience that overwhelmingly doesn’t use, or recognize, the product it was selling.

Svedka’s AI-generated vodka ad fared no better. Brand match registered at 7% against a 63% industry norm for alcohol brands, and viewers called the AI-rendered robots “weird” and “surreal.”

Other AI advertisers took different tacks: Google’s Gemini spot showcased AI home design tools, Meta ran Oakley smart glasses ads featuring Spike Lee and Marshawn Lynch, and OpenAI positioned ChatGPT as the “Kleenex of AI.”

Adweek’s post-game analysis identified a deeper problem: four years into the AI hype cycle, these companies still can’t articulate what makes one AI offering different from another. Every ad promised helpfulness, accessibility, and seamless integration into daily life. None explained why anyone should care. The industry has a technology adoption problem disguised as a marketing budget - and the Super Bowl just made it visible to 120 million people.

Sources: Adweek, Sports Illustrated, Slate, Washington Post

New York Introduces the FAIR News Act to Label AI-Generated Journalism

New York Senator Patricia Fahy and Assemblymember Nily Rozic introduced the NY FAIR News Act (Fundamental Artificial Intelligence Requirements in News Act), which would require news organizations to label any content “substantially composed, authored, or created through the use of generative artificial intelligence.”

The bill goes further than disclosure. It mandates that a human employee with editorial control review all AI-generated content - including audio, images, and video - before publication. News organizations would need to tell their journalists when and how AI is being used in the newsroom. And it explicitly prohibits using AI in ways that result in job displacement, reductions in hours or wages, or the erosion of collective bargaining agreements.

The bill has endorsements from WGA-East, SAG-AFTRA, DGA, and the NewsGuild of New York. It cites polling in which 76% of Americans said they worry about AI stealing journalism and local news stories.

If enacted, the FAIR News Act would take effect 60 days after becoming law, creating one of the most comprehensive state-level frameworks for AI in journalism. It also sets up a potential collision with the Trump administration’s executive order directing the Attorney General to challenge state AI laws - adding journalism to the growing list of domains where federal and state regulators are fighting over who gets to set the rules.

Sources: Nieman Lab

Security & Privacy

40% of Employee AI Use Now Involves Sensitive Corporate Data

New research from Cyberhaven reveals that nearly 40% of all employee interactions with AI tools involve sensitive corporate data - and most of it flows through unsanctioned applications that IT departments don’t control.

The scope of the problem is striking. More than 80% of the top 100 GenAI SaaS apps are classified as medium, high, or critical risk. The number of employees using generative AI applications has tripled, while data policy violations have doubled. The average organization now experiences 223 AI-related data security incidents per month.

Employees are choosing niche AI tools that offer specialized workflows and require no prompting expertise, bypassing official channels in the process. Microsoft’s own Data Security Index report, published in January, paints a similar picture: only one in three organizations has established comprehensive AI governance frameworks, creating what amounts to a permanent gap between how fast workers adopt AI tools and how fast security teams can evaluate them.

The pattern is familiar from the early days of cloud computing and shadow IT, but the stakes are higher. When employees paste proprietary code, financial data, or customer information into third-party AI tools, that data can become training material for models they don’t control. The security community’s term for this - “shadow AI” - undersells the problem. This isn’t rogue software; it’s a structural failure to match AI adoption speed with data governance.

Sources: TechNewsWorld, Microsoft Security Blog

Regulation & Policy

The Federal-State AI Regulation Standoff Enters Its Critical Month

February 2026 is shaping up to be a decisive month in the fight over who regulates AI in America. President Trump’s December executive order directed the Attorney General to establish an AI Litigation Task Force to challenge state AI laws, with a March deadline for action. Meanwhile, states keep legislating.

Colorado’s AI Act - the first comprehensive state AI law in the country - was delayed from February 1 to June 30 after contentious negotiations collapsed during a special session. Governor Polis signed a bill that literally did nothing but find-and-replace every instance of “February 1, 2026” with “June 30, 2026,” kicking the can five months down the road. The law itself remains unchanged: when it takes effect, it will still require transparency and accountability for high-risk AI systems used in employment, healthcare, and education decisions.

California’s Transparency in Frontier AI Act took effect January 1 and is already operational, requiring developers of powerful AI models to implement safety protocols and report critical safety incidents. Texas’s Responsible AI Governance Act went live the same day. New York now has two AI-related bills in play - the data center moratorium and the FAIR News Act - with more likely coming.

The federal government’s strategy is to declare state laws incompatible with national policy and sue them out of existence. Whether that works depends on courts that haven’t yet been asked to weigh in. In the meantime, twenty states have AI-specific laws either enacted or in development, creating a patchwork that companies have to navigate regardless of what Washington prefers.

Sources: Reed Smith, ETC Journal

Quick Hits

  • Anthropic’s anti-ad stance may age poorly: Adweek spoke to advertising industry creatives who noted that Anthropic’s “no ads in AI” Super Bowl campaign risks looking hypocritical if the company eventually needs ad revenue to fund its massive compute costs. Several pointed out that promising “never” in a capital-intensive industry is a bet against economics. Adweek

  • Seahawks win the actual game: In case anyone forgot there was a football game happening between the AI ads, the Seattle Seahawks defeated the New England Patriots at Levi’s Stadium in Santa Clara. CNBC

  • AI regulation at 20 and counting: Twenty US states now have AI-specific laws either enacted or in legislative development, creating a compliance landscape that grows more complex by the month - especially with the federal government actively trying to preempt state action. Drata

Worth Watching

The Super Bowl AI ad debacle deserves more attention than the industry will probably give it. When you spend an estimated $7 million per 30-second spot to reach 120 million viewers and your campaign lands in the bottom 3% of likeability, the problem isn’t creative execution - it’s that consumers don’t want what you’re selling, at least not the way you’re selling it. The AI industry has spent the past four years talking to investors, developers, and each other. The Super Bowl was its first real attempt to talk to everyone else, and the response was confusion, indifference, and hostility. That gap between industry enthusiasm and public reception is the single biggest risk to AI adoption that nobody in Silicon Valley seems willing to address.

The New York FAIR News Act also bears watching. It’s the most aggressive attempt yet to regulate AI in journalism, and its labor protection provisions - explicitly barring AI from displacing journalists or cutting wages - go well beyond transparency. If it passes, expect similar bills in other states. And expect the Trump administration’s AI Litigation Task Force to add it to its target list, setting up a First Amendment vs. state labor rights collision that could define how AI intersects with media for years.