The EU AI Act is a landmark regulatory framework with a global impact. In broad strokes, the AI Act carries obligations for providers and deployers based both inside and outside the EU (as well as authorised representatives, importers, and distributors based in the EU). It takes a risk-based approach to the regulation of AI systems (and, separately, "general-purpose AI models"). But let's start with the most pressing question.
What is banned in February 2025?
As of February 2025, certain AI literacy requirements will be in place and, more significantly, certain AI systems will be outright banned in the European Union, marking the first wave of restrictions under this comprehensive legislation. Let's see what that entails.
AI systems are broadly defined as machine-based systems that operate with some degree of autonomy and infer from their inputs how to generate outputs such as predictions, content, recommendations, or decisions. The Act distinguishes between AI systems and general-purpose AI models, with the former subject to more stringent regulations depending on their risk level and application. In this first wave, certain uses or purposes of AI systems are prohibited: manipulation, exploitation of vulnerabilities, social scoring, predictive policing, untargeted scraping of facial images, emotion inference in workplaces and schools, biometric categorisation, and real-time remote biometric identification.
In more detail, Article 5 of the AI Act prohibits the placing on the EU market, putting into service, or use in the EU of AI systems for:
Manipulation through imperceptible stimuli: AI systems that use subliminal or deliberately manipulative techniques, such as undetectable audio or visual cues, to materially distort people's behaviour in ways that cause or are likely to cause significant harm.
Exploitation of vulnerabilities: AI systems that exploit individuals' vulnerabilities, including those related to age, disability, or socio-economic status, causing significant harm.
Social scoring and classification: AI systems that evaluate or classify individuals based on social behaviors or personal characteristics, potentially leading to discrimination.
Predictive policing: AI systems used to predict an individual's likelihood of committing a crime based solely on profiling or an assessment of their personality traits.
Unauthorized biometric data collection: AI systems that perform untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases.
Emotion inference in sensitive environments: AI systems that infer emotions in workplaces or educational institutions, potentially affecting privacy and autonomy.
Biometric categorization: AI systems that use biometric data to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, or sexual orientation, posing risks of discrimination and bias.

Real-time remote biometric identification: AI systems used by law enforcement for real-time remote biometric identification (for example, live facial recognition) in publicly accessible spaces.

There are narrow exceptions to some of the above for law enforcement purposes, such as searching for missing persons or preventing an imminent terrorist attack, and these are subject to strict safeguards.
Enforcement and Penalties
Non-compliance with the prohibitions in Article 5 can lead to severe penalties: fines of up to €35,000,000 or 7% of the offender's total worldwide annual turnover, whichever is higher. In addition to financial penalties, non-compliant AI systems can be forced off the EU market. European national regulators have a pragmatic reputation, but these are serious consequences. These are not routine regulatory requirements; they represent a significant risk to any company operating within the EU or targeting EU markets.
Compliance
Given the significant risks associated with non-compliance, companies at risk must take steps to ensure their AI systems adhere to the new regulations. The following are the first steps to help navigate the complexities of the Act and avoid the severe penalties coming into effect in February 2025.
AI Inventory: Catalog all AI systems in use, categorizing them by purpose and data processed.
Assessment: Review each system to identify any that fall under the prohibited practices in Article 5, with special focus on customer interaction, decision-making, and sensitive data processing systems.
Compliance Measures: Discontinue or modify any non-compliant systems, and establish policies for ongoing monitoring to ensure continued adherence.
Training: Educate employees on the new regulations, emphasizing the importance of compliance and risk mitigation.
Documentation: Keep detailed records of all AI systems, assessments, and compliance actions, ready for regulatory review if necessary.
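As a rough first-pass illustration of the inventory and assessment steps above (not a legal tool, and all system names, category labels, and field names below are hypothetical, not terms from the Act), one could sketch a simple catalogue that flags any system whose declared practices overlap with the Article 5 categories for legal review:

```python
from dataclasses import dataclass, field

# Simplified labels for the Article 5 prohibited practices; the legal
# definitions in the Act are narrower and more nuanced than these tags.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "predictive_policing",
    "untargeted_facial_scraping",
    "emotion_inference_work_or_education",
    "biometric_categorisation",
    "realtime_remote_biometric_id",
}

@dataclass
class AISystemRecord:
    """One entry in the AI inventory: purpose, data, declared practices."""
    name: str
    purpose: str
    data_processed: list
    practices: set = field(default_factory=set)

    def flagged_practices(self) -> set:
        # Any overlap with the prohibited categories should trigger
        # escalation to legal counsel, not an automated verdict.
        return self.practices & PROHIBITED_PRACTICES

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystemRecord("ad-optimiser", "marketing", ["clickstream"],
                   {"product_recommendation"}),
    AISystemRecord("hr-sentiment", "HR analytics", ["video", "audio"],
                   {"emotion_inference_work_or_education"}),
]

for record in inventory:
    flags = record.flagged_practices()
    if flags:
        print(f"REVIEW {record.name}: {sorted(flags)}")
```

A real inventory would of course live in a governance platform with audit trails, but even a flat catalogue like this makes the Article 5 screening step concrete and repeatable.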
Conclusion
As the most stringent prohibitions of the EU AI Act come into force on 2 February 2025, companies must be vigilant in ensuring their AI systems comply with these new rules. While these prohibitions represent the most immediate regulatory challenge, the broader rules on high-risk AI systems and general-purpose AI models will follow later in 2025 and beyond. In the next article, we'll delve into the implications of these upcoming regulations and how they will shape the future of AI in the shadow of Brussels.