Elsewhere, while the pivotal EU AI Act remains in a legislative chrysalis, the Future of Life Institute (FLI) has developed an interesting new tool to help companies gauge whether they may have legal obligations under the Act (in 2025 or 2026). It is only a toy simplification of potential future obligations, but it could be a good way to bring the EU AI Act to the attention of clients developing or deploying AI systems in or around the European Union.
FLI also publishes regular breakdowns of developments in the Act, compiling analyses and viewpoints from civil society organizations, academics, and other experts, with a focus on potential gaps, ambiguities, and ethical implications.
Here are some notes on FLI’s most recent analyses and critiques:
Loophole in 'High-Risk' Classification
Hundreds of organizations have joined in a critique that the Act, as recently amended, lets AI developers decide for themselves whether their systems qualify as 'high-risk,' potentially undermining the entire legislation.
Ambiguity in Definitions
An article by Matija Franklin and others argues that the Act's amendments on manipulation lack clarity and scientific support, especially concerning terms like 'personality traits.'
Lack of Attention to Complexity and Power Asymmetries
The EU AI Act was conceived before general purpose AI came on the scene (with generative AI regulations patched into drafts of the Act in 2023 after significant lobbying). Now, a workshop report from AlgorithmWatch and the University of Amsterdam calls for a more fundamental refocus on the complexities, power imbalances, and extensive impacts of general purpose AI systems.
Inadequate Child Protection
Among other fundamental-rights critiques, Susanna Lindroos-Hovinheimo points out that, despite the Act's broader attention to fundamental rights, it contains no specific provisions for children.