AI News: Seedance 2.0 rattles Hollywood, Temporal hits $5B, memory chip crisis deepens

Daily roundup for February 21, 2026, covering ByteDance's viral video AI sparking copyright battles, Temporal's $300M raise for AI agent infrastructure, the global memory shortage, Alibaba's Qwen3.5 launch, and the chatbot safety bill surge across 27 states.

Top Stories

ByteDance’s Seedance 2.0 video generator dominated the internet this week with AI-generated clips that crossed from impressive into unsettling. Friends characters reimagined as otters. Tom Cruise fighting Brad Pitt. Will Smith battling a spaghetti monster. The videos spread faster than ByteDance’s content moderation could catch them.

The Motion Picture Association denounced the model for copyright infringement. Disney sent a cease-and-desist claiming Seedance was trained on its works without compensation. ByteDance pledged to “strengthen safeguards” and “respect intellectual property rights” - standard PR language that commits to nothing specific.

The technical capabilities are genuine. Seedance 2.0 handles four input types - text, image, video, and audio - producing multi-shot sequences with consistent characters and phoneme-level lip-sync in eight languages. It outputs at 2K resolution and generates 30% faster than its predecessor.

The copyright question isn’t going away. Video generators trained on Hollywood content will keep producing Hollywood-adjacent output. The legal framework for this doesn’t exist yet. ByteDance operating from China complicates any enforcement. We’re watching the preview of a much larger collision between generative AI and entertainment IP.

Sources: CNN, TechCrunch, Al Jazeera

Temporal Raises $300M as AI Agent Infrastructure Becomes Critical

Temporal closed a $300 million Series D at a $5 billion valuation, doubling its worth from October. Andreessen Horowitz led; Lightspeed and Sapphire participated. The company reported 380% year-over-year revenue growth.

The pitch: AI agents fail. They hallucinate, lose state, time out, and crash. Temporal’s platform preserves application state, retries failed steps automatically, and lets workflows resume exactly where they broke instead of starting over. For long-running AI tasks that interact with real systems, this matters.
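The core idea - checkpoint each completed step so a restarted workflow replays finished work and retries only the step that failed - can be sketched in a few lines. This is a toy illustration of the durable-execution pattern, not Temporal's actual SDK; the `DurableWorkflow` class and step names are invented for the example.

```python
import json
import os
import tempfile

class DurableWorkflow:
    """Toy sketch of durable execution (not Temporal's real API):
    every completed step's result is checkpointed to disk, so a
    restarted run replays finished steps and resumes at the failure."""

    def __init__(self, checkpoint_path):
        self.path = checkpoint_path
        self.history = {}
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.history = json.load(f)  # recover prior progress

    def step(self, name, fn):
        if name in self.history:
            return self.history[name]        # replay: don't redo completed work
        result = fn()                        # may raise; a failure records nothing
        self.history[name] = result
        with open(self.path, "w") as f:
            json.dump(self.history, f)       # checkpoint after every success
        return result

# Demo: the "charge" step fails on the first run; the restarted run
# resumes without repeating "reserve".
path = os.path.join(tempfile.mkdtemp(), "wf.json")
calls = {"reserve": 0, "charge": 0}

def reserve():
    calls["reserve"] += 1
    return "seat-12A"

def charge():
    calls["charge"] += 1
    if calls["charge"] == 1:
        raise RuntimeError("payment gateway timeout")
    return "paid"

def run():
    wf = DurableWorkflow(path)
    wf.step("reserve", reserve)
    return wf.step("charge", charge)

try:
    run()              # first run: reserve succeeds, charge fails
except RuntimeError:
    pass
outcome = run()        # restart: reserve replayed from history, charge retried
print(outcome, calls)  # reserve ran once, charge twice
```

Temporal's production version of this records a full event history per workflow and handles timeouts, retry policies, and distributed workers - but the state-replay mechanic above is the reason a crashed agent doesn't re-book your flight.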

The broader signal: infrastructure for agentic AI is becoming a distinct investment category. As companies move from chatbots to autonomous agents that execute multi-step workflows, the reliability layer underneath them becomes critical. An agent that books your travel, manages your calendar, or processes your expenses needs to handle failures gracefully - or it creates expensive messes.

Temporal’s growth suggests enterprises are hitting this problem at scale. The AI agent gold rush requires picks and shovels; Temporal is selling them.

Sources: GeekWire, SiliconANGLE

Memory Chip Shortage Squeezes Everyone Outside AI

The AI infrastructure buildout is consuming the world’s memory chips. DRAM prices rose 80-90% this quarter. Tesla, Apple, and a dozen other major companies signaled that shortages will constrain production. Chinese smartphone makers Xiaomi and Oppo are cutting shipment forecasts by up to 20%.

The root cause: data centers will consume approximately 70% of memory manufactured in 2026. Meta, Microsoft, Amazon, and Google are spending an estimated $650 billion this year on AI infrastructure. That demand is diverting high-bandwidth memory away from every other industry.

Synopsys CEO Sassine Ghazi says the shortage runs through 2027 at minimum - new fab capacity takes two years to bring online. Samsung, SK Hynix, and Micron are prioritizing AI customers who pay premium prices for HBM (high-bandwidth memory). Consumer electronics get whatever’s left.

This is the hidden tax of the AI boom. Every phone, laptop, car, and appliance that needs memory is now competing with trillion-dollar companies building AI data centers. Prices go up. Products get delayed. Features get cut. The chip crisis from 2020-2022 taught supply chains nothing about diversification.

Sources: Bloomberg, CNBC

Quick Hits

  • Alibaba drops Qwen3.5: The 397B-parameter MoE model (17B active per task) supports 201 languages, native multimodal understanding, and “visual agentic capability” for controlling applications autonomously. Alibaba claims 60% lower costs and 8x throughput versus its predecessor, with benchmarks beating GPT-5.2 and Claude Opus 4.5 on some tasks. Open-weight version available. CNBC, Dataconomy

  • 78 chatbot safety bills in 27 states: Six weeks into 2026’s legislative session, states are flooding the zone with chatbot regulation. Most focus on child safety - prohibiting chatbots from encouraging self-harm or suicidal ideation and from grooming minors. Virginia and Washington passed bills out of their Senates. Michigan’s SB 760 would ban chatbot products for minors unless they’re incapable of encouraging harmful behavior. Transparency Coalition

  • OpenAI fundraise may exceed $100B: The company is finalizing its record-breaking round, which could value OpenAI above $850 billion including the new capital. Microsoft remains among the backers. This would be the largest private funding round in history by a wide margin. Bloomberg

  • Altman and Amodei’s awkward moment: At the India AI Summit photo-op, OpenAI’s Sam Altman and Anthropic’s Dario Amodei stood side by side with PM Modi and other tech leaders - and pointedly refused to hold hands, raising fists instead. The rivalry between ChatGPT and Claude makers is now too personal for group photos. CNBC

  • India AI Summit extended through today: The expo opened to the public on February 20-21, drawing 300,000+ registered participants. The summit produced bilateral AI cooperation agreements and a joint declaration on “inclusive AI development” signed by 87 nations. Anthropic partnered with Infosys to deploy Claude to Indian enterprises. India AI Summit

Worth Watching

The chatbot legislation surge deserves attention. 78 bills in 27 states signal genuine bipartisan concern about AI interactions with minors - the kind of momentum that produces actual laws. The patchwork approach means compliance headaches for chatbot operators, but also room for regulatory experimentation. Watch which states’ frameworks other jurisdictions copy.

The memory shortage creates an interesting dynamic: AI infrastructure spending is essentially taxing every other tech sector through component scarcity. Consumer devices get worse or more expensive so that data centers can train larger models. That’s a wealth transfer from ordinary users to AI companies and their investors. As the shortage extends through 2027, this tension will surface in product reviews, earnings calls, and eventually policy discussions about semiconductor priorities.

Temporal’s valuation jump (from $2.5B to $5B in four months) reflects where the AI infrastructure buildout is headed. The first wave was foundation models and inference APIs. The next wave is reliability, orchestration, and monitoring for autonomous agents. Expect more infrastructure plays hitting unicorn valuations as enterprises discover that “just connect an LLM” doesn’t produce production-ready agent systems.