The EU AI Act’s Uneven Shield: Malicious Risks and Global Reach

The EU AI Act: A Flawed Blueprint for Global AI Safety

Presented as a landmark regulatory model and expected to spread through the Brussels Effect, the EU AI Act aspires to steer global AI governance. Its risk-based architecture advances protections for many applications, but its treatment of malicious AI use is uneven. That inconsistency weakens the Act’s authority as an exportable standard at a moment when international AI competition, especially from the United States and China, often prioritises capability over restraint.

Critical Gaps in Malicious AI Risk Coverage

  • Bioweapons and chemical threats: The Act offers only generic provisions that touch on high-level AI misuse. It stops short of a targeted framework for AI-assisted biological or chemical harm.
  • Intentional rogue AIs: Mitigation relies heavily on human oversight standards for high-risk systems, which may not address systems designed to evade control or act maliciously.
  • Autonomous weapons and military use: Explicit exclusions leave a major class of harmful applications outside the Act’s scope.
  • Corporate concentration of power: The Act does not meaningfully curb platform dominance or systemic risk from a few large AI providers, a gap not fully closed by other EU digital rules.
  • Partial or indirect coverage: Disinformation, deepfakes, fraud, and offensive cyber operations are addressed piecemeal, often through separate EU or national laws rather than coherent AI-specific rules.
  • State surveillance, by contrast: The Act imposes comparatively robust constraints on oppressive state surveillance, showing that its coverage choices are selective rather than comprehensive.

Impact on Global Governance and Business

By deferring to other legislation, excluding purely personal use, and leaving the boundaries of “reasonably foreseeable misuse” open to interpretation, the Act is less predictable for non-EU firms. That unpredictability invites regulatory arbitrage, complicates cross-border compliance, and leaves investment exposed to unregulated threat vectors. For businesses and investors, these blind spots translate into legal uncertainty and operational risk when deploying AI globally.

EU policymakers can strengthen the Act’s legitimacy by clarifying misuse definitions, integrating targeted rules for bio and military risks, and addressing market concentration. More realistic, transparent international dialogue is also necessary so the Act becomes a practical foundation for global AI governance rather than a partial template with important blind spots.