The European Parliament has disabled AI features on approximately 8,000 devices used by lawmakers, their staff, and administrative personnel. The directive, communicated through an internal memo this week, blocks Microsoft Copilot in Windows 11, writing assistants, text summarization tools, and other AI features that process data through cloud services.
The Parliament’s IT department said it “could not guarantee the safety” of AI functions that send data off-device for processing, particularly given the sensitive nature of legislative materials passing through parliamentary systems.
What’s Blocked
The restrictions go further than most government AI policies. According to official parliamentary documents, the Directorate-General for Innovation and Technological Support disabled:
- Microsoft Copilot system-wide on Windows 11 devices
- Local AI processing through Windows Studio Effects
- Cloud AI service connections at the network level
- Microsoft 365 AI integrations
- AI-enhanced Windows Search features
- Writing assistants and text summarization tools
- Virtual assistants and web page summary features
The IT department explained its reasoning: “As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled.”
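The memo doesn't describe the technical mechanism, but Windows 11 has shipped a documented Group Policy for suppressing the Copilot UI, backed by a registry value. A minimal sketch of how an IT department might push such a block across a fleet, assuming the documented TurnOffWindowsCopilot policy (shown here at per-user scope; Microsoft has revised Copilot controls across builds, so the exact key may vary):

```
Windows Registry Editor Version 5.00

; Registry-backed Group Policy that disables the Windows Copilot UI.
; Documented policy name: TurnOffWindowsCopilot (per-user scope).
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsCopilot]
"TurnOffWindowsCopilot"=dword:00000001
```

In a managed fleet this would typically be deployed through Active Directory Group Policy or Intune rather than raw .reg files. A device-level toggle like this still leaves cloud endpoints reachable, which is presumably why the Parliament pairs it with blocks at the network level.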
The Data Sovereignty Problem
The underlying concern is jurisdictional. Data uploaded to AI services operated by US companies falls within reach of US law, most notably the CLOUD Act, under which American authorities can compel those companies to turn over information about their users regardless of where the data was generated or where the users are located.
For EU lawmakers handling sensitive legislative discussions, diplomatic communications, and constituent data, this creates an unacceptable risk. Parliamentary documents express concern about whether AI queries might be stored indefinitely and whether that data could face foreign surveillance laws.
There’s also the GDPR compliance problem. The “black box” nature of AI systems sits uneasily with the regulation’s transparency and accountability requirements, under which organizations must be able to explain how personal data is processed. When an AI summarizes an email or helps draft a document, it’s often unclear what data is transmitted, how it’s processed, and where it ends up.
Beyond Work Devices
The internal memo also advised lawmakers to review AI settings on personal devices and warned against exposing work materials to third-party AI tools that “scan or analyze content.”
This extends the Parliament’s data hygiene concerns beyond official hardware. The message is clear: any AI tool that processes parliamentary business through external servers poses a potential security risk, regardless of whether it’s on a work device or personal phone.
Not the First Ban
The Parliament has form on restricting technology platforms over data concerns. In 2023, it banned TikTok on staff devices over similar worries about where data was going. Lawmakers have also discussed moving away from Microsoft products toward European alternatives as part of a broader “digital sovereignty” push.
The AI restrictions fit a pattern: as data processing moves to cloud services operated by US tech giants, European institutions are increasingly uncomfortable with the surveillance implications.
More Restrictive Than Others
The Parliament’s approach is more aggressive than that of other Western legislatures. The US Congress has issued AI usage guidelines but hasn’t implemented technical blocks that disable features at the device level. The EU appears to be “leading by example” in its role as co-legislator of the AI Act.
Whether this precautionary stance influences other governments remains to be seen, but the message to AI vendors is direct: building features that require sending sensitive data to US cloud infrastructure creates compliance problems for customers who take data sovereignty seriously.
What This Means
The EU Parliament isn’t arguing that AI tools are inherently dangerous. The problem is uncertainty: uncertainty about what data is transmitted, where it goes, who can access it, and what legal regimes apply.
Until those questions have clear answers, Europe’s most important legislative body has decided the safest approach is to keep AI features off. For organizations handling similarly sensitive data, the implication is worth considering: if you can’t explain where your AI sends your data, maybe it shouldn’t be sending it anywhere.
The ban is temporary, pending clarification from tech providers. But “temporary” has a way of lasting when the underlying jurisdictional conflicts remain unresolved.