Europe stands at a critical crossroads in the AI era. The technology Elon Musk described as “the most disruptive force in history” is rapidly shaping national security, economic power, and the delicate balance between citizens and the state.
Today, AI-powered surveillance blankets over half the world’s population, more than 4 billion people, across at least 97 countries, enabling governments to track, profile, and control citizens on an unprecedented scale.
Across the EU, high-profile scandals reveal the real human cost: state spyware infiltrating journalists’ devices, deepfakes twisting election outcomes, and opaque algorithms judging citizens without accountability or appeal.
Europe has countered with landmark safeguards, including the EU AI Act and GDPR, to rein in this acceleration. Yet evidence reveals gaps between regulation and reality. European organizations lag global benchmarks on critical AI security controls, from anomaly detection (France 32%, Germany 35%, UK 37% versus a 40% global average) to training data recovery and supply chain visibility, leaving them less equipped to detect anomalous AI behaviour or respond to breaches.
As enforcement gaps widen, the uncomfortable question remains whether Europe’s AI rules are a genuine safeguard, or simply a slower-moving response to a faster technological reality.
Clearview AI Biometric Scraping: Illegal Mass Facial Recognition
Clearview AI’s database of over 30 billion scraped images has sparked one of the EU’s most persistent surveillance scandals, enabling unauthorized facial recognition for police and private entities.
The U.S. company has repeatedly violated GDPR by processing biometric data without consent or legal basis, ignoring bans and fines totaling over €100 million across the EU. The Dutch data protection authority issued the largest penalty against the company to date, fining it €30.5 million in 2024.
The scandal spans at least Belgium, France, Germany, Greece, Italy, Netherlands, Sweden, and Austria, with persistent complaints underscoring Clearview’s systematic defiance of European privacy rules.
Key Impacts & Regulatory Implications
- Mass unauthorized surveillance, enabling police misuse, identity theft & discrimination
- Vast biometric exposure (violations likely underreported)
- Prohibited under the EU AI Act for untargeted scraping & live facial recognition
- Enforcement gaps persist (fines ignored) – urgent need for unified EU mechanisms & stricter transatlantic rules
Pegasus and Predator: Widespread State-Sponsored Hacking
The 2021 Pegasus Project exposed NSO Group’s spyware as a tool for governments to remotely infect smartphones, extract messages, record calls, track locations, and activate cameras/microphones without detection.
Similar capabilities emerged in Intellexa’s Predator and Paragon’s Graphite spyware, often using AI for target selection, evasion of security software, and automated data processing, making them explicitly high-risk under the EU AI Act.
At least 14 EU countries purchased Pegasus, with high-profile cases including Spain’s CatalanGate (Pegasus allegedly used against over 60 independence leaders) and Greece (Predator allegedly targeting opposition politicians and journalists during the 2022–2023 political crisis).
Amnesty International describes the pattern as Europe’s “growing spyware crisis,” with WhatsApp notifying 90+ victims in 2025 alone, including journalists and human rights defenders.
Key Impacts & Regulatory Implications
- Severe erosion of democratic trust and press freedom
- Total access to intimate personal data (cases likely far underreported)
- Falls under prohibited and high-risk categories of the EU AI Act
- Enforcement gaps persist despite PEGA committee recommendations and export controls; urgent need for unified EU spyware prohibition and oversight
Deepfake Manipulation in Elections: AI Undermining Democracy
AI-generated deepfakes have become a powerful surveillance-adjacent threat, combining disinformation with targeted manipulation to influence voters and erode public trust.
A clear example is Slovakia’s 2023 election, where AI-manipulated audio clips falsely accused opposition leader Michal Šimečka of vote-rigging, potentially swaying the outcome and exposing early weaknesses in the Digital Services Act (DSA).
The trend has spread across Slovakia, Romania, Poland, Hungary, Italy, Lithuania, and Malta, seriously threatening electoral integrity throughout the EU.
Key Impacts & Regulatory Implications
- Fuels voter manipulation and societal division, intensifying “hybrid wars” in Eastern Europe
- Erodes shared truth by amplifying targeted disinformation drawn from surveillance data
- Classified as high-risk under the EU AI Act, mandating watermarking, detection tools, and transparency measures
- Enforcement gaps persist in DSA platform obligations despite mandatory risk assessments, calling for EU-wide deepfake bans and real-time monitoring
Predictive Policing and Algorithmic Bias: Discriminatory AI Profiling
EU countries have faced scandals over AI systems that predict crimes but perpetuate biases through flawed data, leading to discriminatory over-policing and profiling.
A key case is Belgium, where police developed a predictive profiling system criticized as an “AI surveillance machine” targeting marginalized communities with little transparency. A 2025 Statewatch report called for a ban due to risks from biased databases and sociodemographic statistics.
The issue spans Belgium, Netherlands, Denmark, Sweden, and parts of Southeast Europe, highlighting algorithmic injustice in public services.
Key Impacts & Regulatory Implications
- Amplifies inequalities and over-policing of minorities through biased training data
- Lack of transparency deepens public distrust and magnifies real-world harms
- Classified as high-risk under the EU AI Act, triggering mandatory audits and oversight
- Gaps persist despite GDPR enforcement and growing calls for stricter bans, demanding urgent bias mitigation and independent review
DeepSeek AI Bans and Foreign Tech Risks: Data Sovereignty Concerns
Chinese AI company DeepSeek drew EU-wide scrutiny for storing user data in China, creating risks of state access under PRC national intelligence laws and breaching GDPR requirements for data protection.
Czechia led with a 2025 public-sector ban, prohibiting use in administration due to cybersecurity and sovereignty threats. This set the tone for other countries: France introduced a nationwide public-sector ban in September 2025, Italy blocked the app in January 2025 amid privacy investigations, and Germany, the Netherlands, and Luxembourg followed with app-store removal requests and similar restrictions.
Key Impacts & Regulatory Implications
- State access risk to EU citizens’ data undermines trust in foreign AI tools
- Sensitive inputs exposed beyond EU jurisdiction, heightening privacy threats
- Reinforces EU AI Act priorities on data sovereignty and high-risk foreign tech restrictions
- Driven by GDPR probes and spreading national bans, demanding stricter localization and unified global oversight
The Road Ahead for EU AI Regulation in 2026
These recent scandals lay bare the profound threats AI poses across Europe. The EU AI Act and GDPR together form the backbone of essential protections against these dangers. Yet enforcement remains uneven, fines are frequently disregarded, and significant implementation gaps continue to exist.
Thousands have endured direct harm. Millions more have seen their trust in technology erode. The fundamental question remains pressing:
Can Europe’s regulatory framework evolve swiftly enough to confront these escalating risks?
Author: Grace Sharp
See Also:
The Sovereign AI Race: The EU’s Fight for Tech Independence