The creative community’s relationship with AI in March 2026 is fractured in ways that defy simple narratives. A major survey of more than 1,100 music creators shows adoption growing while anxieties mount. Label lawsuits are settling into licensing deals. And artists continue fleeing platforms that train on their work without consent.
Here’s what’s actually happening.
The Producer Survey That Cuts Through the Noise
Sonarworks and Sound On Sound surveyed more than 1,100 working music creators (producers, engineers, songwriters, composers, mixers) to understand how AI is actually changing studio workflows.
The headline numbers: one in five producers is already a regular AI user, and nearly half have experimented with AI tools. But the concerns reveal more than the adoption rates do.
The biggest fear isn’t job loss - it’s sameness. When asked to identify their primary concern about AI in music, respondents consistently pointed to originality. Producers worry less about being replaced than about music losing its “sense of authorship, perspective, and emotional specificity.”
They’re not wrong. Listen to any Spotify playlist tagged “lo-fi beats to study to” and you’ll hear what AI optimization produces when it chases engagement metrics: palatable, interchangeable, forgettable.
The survey found producers envision their role evolving toward “creative director” - guiding musicians, shaping aesthetic vision, and directing intelligent tools. Genre matters too: producers see styles rooted in digital workflows (EDM, trap) as more susceptible to automation, while jazz, classical, and live performance remain “AI-resistant.”
What they’re actually using AI for: audio cleanup, noise reduction, stem separation, session organization. The grunt work, not the creative decisions.
The Label Lawsuits Are Settling - Mostly
The AI music copyright war is winding down for two of the three majors.
Warner Music Group settled its landmark infringement lawsuit against Suno, becoming the first major label to officially partner with the AI music platform. Universal Music Group followed with an Udio settlement and licensing deal for a “next-generation” platform coming later this year.
Sony Music hasn’t settled with either company; its cases remain active.
A ruling on fair use is expected this summer in UMG v. Suno, which could set major precedent for whether training AI models on copyrighted music constitutes fair use. Oral arguments on Suno’s motion to dismiss in an independent artist class action are scheduled for March 20 - two days from now. Germany’s GEMA has a ruling scheduled for June 12.
Meanwhile, Suno has reached nearly 100 million users and raised $250 million at a $2.45 billion valuation. The Danish collective management organization Koda has also filed suit against Suno.
Artists concerned about their work being used for training face a familiar problem: proving it happened and stopping it.
Artists Keep Fleeing X
X’s December rollout of a Grok-powered AI image editor that lets anyone edit any image on the platform (with no opt-out) triggered another wave of departures.
Manga illustrator Boichi (Dr. Stone) announced he would no longer post images on X, stating he doesn’t want his work “used for AI training or exploited without his consent or proper compensation.” Gachiakuta creator Kei Urana made a similar announcement, shifting to Instagram.
Bluesky has become the default destination. Its terms of service explicitly prohibit using posted images for AI training, and its decentralized architecture gives users more control. The “Artists Who Ditched Twitter/X” starter pack keeps growing.
The broader context: a NYC subway vandalism campaign against Friend AI’s $1 million ad buy demonstrated street-level hostility to AI companionship products. Over two months, New Yorkers covered Friend ads with messages like “AI wouldn’t care if you lived or died” and “stop profiting off of loneliness.”
Friend’s CEO designed the ads with huge amounts of white space specifically to invite graffiti. He told Adweek that “people in New York hate AI.”
The Image Generation Landscape Has Fragmented
No single tool dominates in 2026. Instead, creatives are mixing platforms strategically:
Midjourney remains the default for artistic styles. V7 added voice mode, draft mode (half cost, 10x speed), and improved character consistency. V8 Alpha launched March 17. The video generation tools produce 60 seconds of footage from six images in about three hours.
FLUX (from Black Forest Labs, founded by Stable Diffusion’s original team) leads for photorealism using open weights. It’s the go-to for anyone prioritizing control and local deployment.
Ideogram owns text rendering. For logos, posters, and typographic work, it’s “the only tool that reliably gets it right.”
Recraft V4 launched as #1 on HuggingFace benchmarks for logos and vectors. It’s the only AI model generating actual editable SVG files with structured layers - useful for production design work.
Leonardo AI carved out the game development and concept art niche with a generous free tier and deep customization.
Stable Diffusion 3.5 remains the open-source option for unlimited local use, though hardware requirements apply.
Stanford researchers are developing tools that give artists more control over outputs. ControlNet teaches models about spatial composition, letting creators guide layouts rather than just describe them.
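As a concrete illustration of that spatial guidance, here is a minimal sketch using Hugging Face's `diffusers` library, which ships ControlNet pipeline support. The model IDs and parameters are illustrative, and the heavy imports are deferred so the script only needs a GPU (and the `torch`, `diffusers`, and `Pillow` packages) when it actually runs:

```python
def generate_with_layout(prompt: str, edge_map_path: str,
                         out_path: str = "controlled.png") -> None:
    """Condition image generation on an edge map, so the layout is
    drawn by the artist rather than described in the prompt.

    Heavy dependencies are imported lazily: `torch`, `diffusers`,
    and `Pillow` are needed only when the function is called.
    """
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # A Canny-edge ControlNet checkpoint (model IDs are illustrative).
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # The edge map fixes the composition; the prompt supplies style only.
    layout = Image.open(edge_map_path)
    image = pipe(prompt, image=layout, num_inference_steps=30).images[0]
    image.save(out_path)
```

The point of the sketch: the `image=` argument carries the creator's composition into the model, which is what "guiding layouts rather than just describing them" means in practice.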
The Adobe Contradiction
Adobe’s “commercially safe” messaging around Firefly continues colliding with reality.
In 2024, Adobe updated its terms of service with language that appeared to grant access to users’ creative work, triggering a backlash. The company quickly backtracked, calling it a “misunderstanding” and adding “We don’t train generative AI on customer content” as a legal obligation.
But Adobe Stock contributors say Adobe trained Firefly on their images without express notification. In 2024, Bloomberg revealed roughly five percent of Firefly’s training data consisted of AI-generated images (including outputs from rival systems like Midjourney) because Adobe Stock had accepted synthetic images from contributors.
Now Adobe is integrating third-party AI platforms like Runway’s Gen-4 Video and Google’s Veo 3 directly into Firefly. These external models weren’t built with Adobe’s strict licensing standards, creating an apparent contradiction in the “commercially safe” pitch.
The #boycottadobe hashtag resurfaces regularly on X, driven partly by Creative Cloud price hikes.
Writers Are Pragmatic, Publishers Are Cautious
The AI writing tools landscape has matured. Sudowrite’s Muse 1.5, a proprietary fiction model fine-tuned on published novels, was preferred two-to-one over Claude 3.7 Sonnet in blind tests of fiction prose.
Writers use AI for brainstorming, dialogue drafts, and plot assistance. Few are using it to generate final prose wholesale - the community backlash remains fierce. As one review put it: Sudowrite has been “called an insult to writers everywhere and generally dismissed as a tool for hacks.”
NaNoWriMo shut down in April 2025 after 25 years, partly due to an AI stance controversy alongside participation decline and forum safety issues. The writing community has fractured into smaller groups.
What This Means
The “AI will replace creatives” narrative was always too simple. What’s happening instead:
Tools are fragmenting by use case. No single platform wins at everything. Professionals are building workflows across multiple tools.
The legal landscape is consolidating toward licensing. Major labels settling with Suno and Udio signals where this ends: deals, not bans.
Platform trust is collapsing. X’s AI image editor was the latest trigger, but artists have been drifting to Bluesky and Cara for months. Consent matters.
The originality concern is real. Producers aren’t worried about losing jobs. They’re worried about music becoming generic. That’s a harder problem to solve.
For creatives, the practical path forward involves treating AI as one tool among many - useful for specific tasks, dangerous for others, and always worth understanding what’s happening with your data.
What You Can Do
- Check your platform terms. X trains on images by default. Bluesky doesn’t. Adobe’s Firefly training claims are contested.
- Experiment on your terms. Use open-source tools like Stable Diffusion or FLUX locally if you want control.
- Follow the lawsuits. The March 20 Suno hearing and summer fair use ruling will shape what’s legal.
- Pick the right tool for the task. Ideogram for text, Recraft for vectors, Midjourney for artistic styles, FLUX for photos.
- Keep originals offline. If you don’t want something trained on, don’t post it publicly anywhere.
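The "experiment on your terms" advice above can be sketched in a few lines with Hugging Face's `diffusers` library. This is a hedged example, not an endorsement of any particular checkpoint: the model ID is Stability AI's public Stable Diffusion 3.5 Medium repository, hardware requirements (a recent GPU) apply, and imports are deferred so the script fails only when run without them:

```python
def generate_locally(prompt: str, out_path: str = "output.png") -> None:
    """Generate an image entirely on your own machine; the prompt and
    the output never touch a hosted service.

    Requires `torch`, `diffusers`, and a GPU; dependencies are
    imported lazily so merely defining the function costs nothing.
    """
    import torch
    from diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-medium",  # public SD 3.5 weights
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    image = pipe(prompt, num_inference_steps=28, guidance_scale=4.5).images[0]
    image.save(out_path)  # stays local; nothing is uploaded
```

Running locally is what "control" means in the bullet above: the weights, the prompt, and the result all live on your hardware, outside any platform's terms of service.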