AI News Roundup: February 3, 2026

Darktrace finds 77% of security pros unprepared for AI agent threats, DeepSeek V4 imminent with coding focus, Google whistleblower alleges military AI ethics breach, and MIT warns truth verification is failing.

Here’s what happened in AI over the last 24 hours.

Darktrace: 77% of Security Pros Feel Unprepared for AI Agent Attacks

A new Darktrace report released today surveyed 1,540 cybersecurity professionals across 14 countries. The findings paint a concerning picture of the agentic AI security landscape.

Key numbers:

  • 73% say AI-powered threats are already significantly impacting their organizations
  • 87% report AI is increasing attack volume
  • 89% say AI is making attacks more sophisticated
  • 91% note AI is supercharging phishing and social engineering
  • 46% admit they feel unprepared to defend against AI-driven attacks
  • 77% feel unprepared specifically for attacks carried out by autonomous AI agents

The top threat? Hyper-personalized phishing (50%), followed by automated vulnerability scanning (45%) and adaptive malware (40%).

Darktrace also announced a new product, Darktrace/SECURE AI, aimed at giving security teams visibility into how AI tools and agents operate within organizations. VP of Product Issy Richards framed the challenge directly: “These systems can act with the reach of an employee — accessing sensitive data and triggering business processes — without human context or accountability.”

DeepSeek V4 Expected Mid-February with Coding Focus

DeepSeek is preparing to release its V4 flagship model around the Lunar New Year period, according to The Information. Internal testing reportedly shows V4 outperforming Anthropic’s Claude and OpenAI’s GPT series on coding tasks.

The model builds on DeepSeek’s Mixture of Experts (MoE) architecture and introduces improved long-context processing based on the sparse attention tech from V3.2-Exp. Developers have spotted references to an unidentified “MODEL1” in DeepSeek’s GitHub repos, suggesting the release is imminent.

DeepSeek also published a new paper this month describing “Engram,” a technique for training larger models on less powerful chips. The approach stores basic facts separately from complex calculations, freeing up memory for reasoning tasks. It’s a direct response to US export controls limiting Chinese access to advanced GPUs.
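The separation described above — cheap, plentiful memory for static facts, scarce fast memory reserved for active computation — can be illustrated with a toy sketch. This is purely illustrative: the class names, the eviction policy, and the structure are my assumptions, not DeepSeek's actual Engram design.

```python
# Toy illustration of the memory-separation idea (NOT DeepSeek's actual
# Engram implementation): a large store of static facts lives in cheap
# host memory, while the "reasoner" keeps only a small, budgeted working
# set — standing in for scarce accelerator memory.

class FactStore:
    """Large, static knowledge held in plain host memory."""
    def __init__(self):
        self.facts = {}

    def put(self, key, value):
        self.facts[key] = value

    def get(self, key):
        return self.facts.get(key)


class Reasoner:
    """Small working set: holds only what the current step needs."""
    def __init__(self, store, budget=4):
        self.store = store
        self.budget = budget      # cap on items held in "fast" memory
        self.working = {}

    def load(self, key):
        # Evict the oldest entry once the fast-memory budget is reached.
        if key not in self.working and len(self.working) >= self.budget:
            self.working.pop(next(iter(self.working)))
        self.working[key] = self.store.get(key)
        return self.working[key]


store = FactStore()
for i in range(1000):                 # 1,000 "facts" in cheap memory
    store.put(f"fact-{i}", i * i)

r = Reasoner(store, budget=4)
vals = [r.load(f"fact-{i}") for i in (3, 7, 11, 19, 23)]
print(vals)              # [9, 49, 121, 361, 529]
print(len(r.working))    # never exceeds the budget of 4
```

The point of the sketch is the asymmetry: the fact store can grow arbitrarily large without enlarging the reasoner's working set, which is the property that would let training proceed on less powerful chips.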

ByteDance and Alibaba are reportedly also preparing February model launches, intensifying China’s AI race.

Google Whistleblower Alleges Military AI Ethics Breach

A Washington Post report published Saturday detailed a whistleblower complaint filed with the SEC. The former Google employee alleges the company violated its own AI principles by helping an Israeli military contractor use Gemini AI for drone surveillance analysis.

According to SEC complaint documents, a user with an Israel Defense Forces email contacted Google Cloud in July 2024 seeking help improving Gemini’s ability to detect drones, tanks, and troops in aerial footage. Google’s cloud team allegedly provided technical guidance and conducted internal trials.

Google disputes the characterization, claiming the account spent “less than a few hundred dollars per month” and received only “standard customer support information.”

The timing matters: Google updated its AI policies in February 2025, removing pledges against using AI for weapons or surveillance. The company said the change was necessary to help “democratically elected governments maintain global AI dominance.”

This comes as the Pentagon rolls out GenAI.mil to over 3 million military members and contractors, with Google’s Gemini for Government as the first product available.

MIT Technology Review: AI Truth Tools Are Failing

MIT Technology Review published a sobering analysis of AI-generated misinformation this weekend.

The article’s core argument: the promised truth-verification tools aren’t working. The Content Authenticity Initiative, co-founded by Adobe and adopted by major tech companies, was supposed to attach labels disclosing when content was AI-generated. But even Adobe only applies these labels when content is entirely AI-made, not when AI is used for editing or enhancement.

More concerning: the article confirms that the Department of Homeland Security is using AI video generators from Google and Adobe to create public-facing content supporting the administration’s immigration policies.

“We responded by preparing for a world in which the main danger was confusion,” the author writes. “What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button.”

NVIDIA Nemotron 3: Open Models for Self-Hosted AI

NVIDIA’s Nemotron 3 family is now available in Nano size, with Super and Ultra versions coming in the first half of 2026. The open-source models use a hybrid Mamba-Transformer mixture-of-experts architecture with native 1M-token context windows.

The key for privacy-conscious developers: the NVIDIA Open Model License is fully permissive. You can use, modify, distribute, and commercially deploy these models without crediting NVIDIA. Training datasets, techniques, and model weights are all published.

Major companies already adopting Nemotron include Bosch, CrowdStrike, Palantir, Salesforce, and Uber. The models run on vLLM, SGLang, Ollama, and llama.cpp across any NVIDIA hardware.

State AI Laws: What’s Now in Effect

A quick regulatory update. As of January 1, 2026, several state AI laws are now active:

California:

  • AI companion chatbots must disclose that they are AI whenever a reasonable user could believe they’re talking to a human
  • Protections for employees who report AI safety concerns to authorities
  • CalCompute public AI cloud consortium established

Colorado:

  • The Colorado AI Act (requiring algorithmic discrimination protections) delayed to June 30, 2026

New York:

  • RAISE Act targets high-cost AI developers, mandating safety policies with penalties up to $10M for first offense

Meanwhile, President Trump’s December executive order on AI directed the Commerce Secretary to identify “burdensome state AI laws” by March 11, 2026, for potential federal preemption challenges.


Got a tip? Drop us a line.