By Annika Voigt, EU Affairs Analyst at Phoenix24
Brussels, July 2025 —
Behind the steel-and-glass facades of the European Quarter, a quiet battle is unfolding—one that could determine whether democracies can still govern in the age of autonomous systems. With the recent adoption of the European Union’s Artificial Intelligence Act (AI Act), Europe has taken a bold, historic step: attempting to regulate the most disruptive force of the 21st century. But the deeper question lingers—can democratic institutions keep pace with technologies designed to outpace us all?
While European officials celebrate the AI Act as a landmark achievement in ethical digital governance, critics warn that the clock is ticking faster than the legal machinery can move. Algorithms that determine creditworthiness, automate job applications, screen asylum seekers, or power facial recognition at borders are already deeply embedded across the continent. And yet, most of these systems remain opaque, unevaluated, and in many cases, beyond public accountability.
The ambition behind the AI Act is admirable: to impose a risk-based framework on artificial intelligence, banning unacceptable uses (such as social scoring or manipulative surveillance) and placing strict oversight on high-risk applications. But in the corridors of Brussels, lobbyists from major tech firms—many based outside Europe—are working relentlessly to dilute the law’s scope. According to Corporate Europe Observatory, over 3,000 lobbying meetings were held in the last two years on the AI Act alone, most pushing for exemptions, self-regulation, or looser interpretations of what counts as “high-risk.”
The result? A regulatory compromise that risks being too weak to restrain the power it claims to oversee.
As a journalist covering both the European Parliament and NATO’s digital security strategies, I’ve seen firsthand the growing schism between policy language and operational reality. In theory, democratic oversight is central to Europe’s identity. In practice, algorithms often operate in legal gray zones—outsourced to private contractors, shielded by intellectual property laws, or deployed under vague “pilot programs” without parliamentary scrutiny.
This isn’t just a legal problem. It’s a democratic one.
In 2020, a Dutch court ruled that the SyRI system—an automated risk-profiling tool used to detect welfare fraud—violated the right to privacy and risked discriminating against residents of low-income and minority neighbourhoods. In France, Amnesty International documented how AI-powered surveillance tools were used during protests, despite public outcry and a lack of proper authorization. Meanwhile, in Eastern Europe, AI-driven border controls have been deployed on migrants without transparency, sparking human rights concerns echoed by the European Data Protection Supervisor.
What we are witnessing is not a lack of regulation—it is the erosion of democratic control in real time.
Europe prides itself on being a global leader in digital rights. But if the enforcement of the AI Act is left to underfunded national agencies, and if the public remains unaware of how these systems shape their lives, we are building an illusion of control over systems that already shape democratic behavior from the shadows.
The real challenge is not simply technological—it is institutional.
Can a bureaucracy—however well-meaning—govern an ecosystem that evolves faster than legislative cycles? Can oversight mechanisms built in the 20th century manage tools built on neural networks whose decisions leave no clear logic trail? Can citizens truly give informed consent when algorithms are invisible, proprietary, and unintelligible to non-experts?
The urgency now lies not just in regulating AI, but in repoliticizing it. We must reject the idea that these systems are neutral. Algorithms are expressions of power, often encoded with the values, assumptions, and blind spots of those who design them. If democratic institutions do not assert their authority, others will—whether they are corporations, autocracies, or opaque partnerships between the two.
And if nothing changes, Europe’s AI Act will become a case study in legislative theatre: ambitious on paper, powerless in practice. But if disruption occurs—a legal precedent, a whistleblower leak, a citizen-driven algorithm audit—then the act may gain teeth. Either way, Brussels must choose between regulatory timidity and global leadership in rights-based AI governance.
Because in the end, governing AI isn’t just about technology.
It’s about deciding who governs at all.
Annika Voigt is a German journalist and international affairs correspondent for Phoenix24 since 2025, specializing in political analysis and digital society.