ByteDance's Seedance 2.0 Ignites All-Out War With Hollywood Over AI-Generated Video

Five major studios have sent cease-and-desist letters. Netflix is threatening immediate litigation. And ByteDance's AI video tool is only available in China - making enforcement nearly impossible.

Eight days. That’s how long it took for ByteDance’s Seedance 2.0 to go from launch to the target of cease-and-desist letters from five major Hollywood studios, a threat of immediate litigation from Netflix, and condemnation from SAG-AFTRA, the Motion Picture Association, and the Human Artistry Campaign.

The AI video generator - which ByteDance launched on February 12 through its Chinese Jianying app - can turn text prompts into near-cinematic video clips complete with synchronized audio, realistic human motion, and multi-camera storytelling. It accepts text, images, audio, and video as inputs. Swiss consulting firm CTOL described it as the “most advanced AI video generation model available,” saying it outperformed OpenAI’s Sora 2 and Google’s Veo 3.1 in testing.

The model generates clips up to 15 seconds long in up to 2K resolution, roughly 30 percent faster than its predecessor Seedance 1.5. Chinese media drew direct parallels to the DeepSeek R1 moment - viral demonstrations racked up tens of millions of views on Weibo within days.

Then the copyright problems started.

The Viral Videos That Broke Hollywood

Within hours of launch, users started generating videos featuring characters and actors they had no rights to. A clip showing Tom Cruise and Brad Pitt fighting on a rooftop, posted by filmmaker and VFX artist Ruairi Robinson, went massively viral. Other users created alternative endings to Stranger Things, inserted Elon Musk into the Squid Game environment, and generated scenes featuring Spider-Man, Darth Vader, Grogu, Walter White, and a roster of A-list actors including Will Smith, Tom Hanks, Chris Evans, and Anne Hathaway.

Disney fired the first shot on February 13, sending a cease-and-desist letter to ByteDance general counsel John Rogovin - who, in a twist of irony, previously served as Warner Bros.’ general counsel. Disney accused ByteDance of making available “a pirated library of Disney’s copyrighted characters from Star Wars, Marvel, and other Disney franchises, as if Disney’s coveted intellectual property were free public domain clip art.”

Paramount followed the same day. Then Warner Bros. Then Netflix, which gave ByteDance three days to comply and explicitly threatened “immediate litigation” - the first studio to do so. Sony joined as the fifth studio, demanding the removal of its intellectual property including Breaking Bad and the Spider-Verse films, and dismissing ByteDance’s safeguard promises as “belated” and “half-baked.”

The Motion Picture Association called Seedance 2.0’s enabling of copyright infringement “unauthorized use of US copyrighted works on a massive scale.” SAG-AFTRA condemned what it called “blatant infringement” that included “the unauthorized use of our members’ voices and likenesses.” The Human Artistry Campaign, which counts SAG-AFTRA and the Directors Guild of America among its members, described the launch as an attack on every creator in the world.

Screenwriter Rhett Reese, who co-wrote Deadpool, captured the industry mood: “It’s likely over for us.”

ByteDance’s Response: Guardrails Without Details

ByteDance said it “respects intellectual property rights” and pledged to “strengthen current safeguards as we work to prevent the unauthorised use of intellectual property and likeness by users.”

The company did not specify what those safeguards would look like. It also declined to say whether copyrighted material was used to train the model - the question at the center of every AI copyright dispute to date.

That silence matters. If ByteDance trained Seedance 2.0 on copyrighted films and television shows without permission, the training itself could constitute mass copyright infringement, regardless of what the outputs look like. It’s the same legal theory behind lawsuits against other AI companies, but with an added complication: the outputs here don’t just resemble copyrighted material in style or substance. They explicitly recreate recognizable characters, settings, and real people’s faces.

Sony’s response to ByteDance’s guardrail promises was blunt: “SPE will not tolerate delayed or half-baked measures.”

The Enforcement Problem Nobody Wants to Talk About

Here is where the story gets truly uncomfortable for Hollywood. Seedance 2.0 is currently only available in China, accessible through ByteDance’s Jianying app (the Chinese version of CapCut) and the Jimeng AI platform. It has no U.S.-facing product, no U.S.-based servers, and no direct U.S. revenue.

That creates a jurisdictional nightmare. As copyright law experts have pointed out, serving legal process in China through the Hague Convention can take up to two years. And while ByteDance does have U.S.-based employees and operations through TikTok, Seedance 2.0 itself exists in a regulatory grey zone.

Entertainment lawyer Jonathan Handel told Al Jazeera this marks “the beginning of a difficult road” for the film industry, adding that “until courts make a significant ruling, AI-generated videos will have major implications on the film industry.”

There’s a precedent here, and it’s not encouraging. Disney, NBCUniversal, and Warner Bros. Discovery sued MiniMax, another Chinese AI firm, in September 2025 over similar claims. That case is still grinding through the courts.

ByteDance has said it plans to bring Seedance 2.0 to global CapCut users, which would put the tool directly into a product used by hundreds of millions of people worldwide - including in the United States. That rollout could change the legal calculus significantly, giving studios a domestic target. But until then, Hollywood’s legal threats land in a jurisdiction that has historically shown little interest in enforcing American copyright claims against Chinese tech companies.

Why This Is Different From Sora

You might wonder: haven’t OpenAI’s Sora and Google’s Veo already raised these same issues? Yes, but with a key distinction. Both OpenAI and Google operate primarily in the United States and are already subject to domestic copyright lawsuits. They also built content filtering systems designed to block the generation of recognizable characters and real people’s likenesses - imperfect systems, but systems that exist.

Seedance 2.0 launched with no such filters. Users immediately generated Disney characters, recreated copyrighted TV show scenes, and produced deepfake videos of real actors - all apparently without resistance from the model. The quality was good enough that even industry professionals struggled to dismiss the outputs as obvious AI slop.

This isn’t a model that occasionally slips through its guardrails. This is a model that launched without them.

The Bigger Picture

Seedance 2.0 is a stress test for international copyright enforcement in the age of generative AI. It exposes a gap that has been widening for years: the tools that can infringe on copyright most effectively are increasingly being built by companies in jurisdictions where the rights holders have the least legal leverage.

The entertainment industry’s legal arguments are strong on paper. Training AI on copyrighted works without permission likely violates copyright law. Generating recognizable characters and real people’s likenesses implicates copyrights, trademarks, and publicity rights simultaneously. But legal arguments only matter in courtrooms that have jurisdiction, and the courts that have jurisdiction are in countries with limited motivation to act.

ByteDance will almost certainly add some guardrails - it needs to if it wants the planned global CapCut rollout to proceed without getting blocked entirely in Western markets. But the underlying question remains unanswered and possibly unanswerable through litigation alone: what happens when the most powerful AI creative tools are built by companies that U.S. copyright law can’t effectively reach?

For now, Hollywood is throwing cease-and-desist letters across the Pacific and hoping something sticks. The screenwriters, actors, and artists whose work was used to train these models - and whose faces and characters appear in the outputs - are left watching the most realistic AI deepfakes ever produced of themselves, generated by a tool they can’t block, hosted in a country that won’t shut it down.

Netflix gave ByteDance three days. It’s now been four.