So, the latest I’m hearing is that the EU AI Act is expected to be published in late July due to a legislative backlog. It will enter into force 20 days after publication, potentially by mid-August. This means that the rules on AI literacy (Art. 4) and prohibited AI systems (Art. 5) will be the first obligations to apply, likely by February 2025.
Although I’m not based in the EU, the de jure reach of the AI Act and the de facto “Brussels Effect” make it globally significant. The following views (not legal advice) come from a close reading of the Act and discussions with experts in the EU.
In broad strokes, the AI Act imposes obligations on providers and deployers based both inside and outside the EU (and on authorised representatives, importers, and distributors based in the EU). It takes a risk-based approach to the regulation of AI systems (and, separately, of “general-purpose AI models”).
But let’s not worry about any of that today. Let’s start small, and just answer one question.
Do I need to establish AI literacy for my client or my company before February 2025?
Article 4 of the AI Act establishes AI literacy obligations for providers and deployers of AI systems, as defined in Art. 3(3) and 3(4) respectively, while Art. 2(1)(a)–(c) defines the scope of the AI Act’s application to ‘providers and deployers’. ‘AI literacy’ is defined in Art. 3(56), and ‘AI system’ in Art. 3(1). Putting them together, a guided hand on ChatGPT does a reasonable job of paraphrasing this simplest of the simple obligations under the AI Act:
Certain machine-based systems need special attention. These systems can work on their own in different ways and might learn and change after they are set up. They take the information they get and use it to make things like predictions, content, suggestions, or decisions. These results can change things in the real world or online.
If you use, sell, or share the results of these systems in the EU, you should check if you need to follow any special rules. To the best of your ability, you must make sure your people dealing with these systems know enough about them. This helps them use the systems correctly. Your people must also gain awareness about the opportunities, risks, and possible harm these systems can cause.
The level of this ‘AI literacy’ required depends on the technical knowledge, experience, education, and training of your people dealing with these systems. You also need to think about how and where these systems will be used. Finally, when providing ‘AI literacy’ you should consider who will be affected by these systems.
But what does AI literacy actually entail from a Brussels perspective? The smooth machine-generated paragraphs above are no less ambiguous than the hazy requirements of Art. 4 itself. Will you be at risk around February 2025 if your client or company falls within the scope of the AI Act and a Brussels enforcer determines that you haven’t put your best efforts into the AI literacy of the relevant people?
Despite the allure of lawyer marketing copy claiming that “breaches [of the AI literacy requirement] could result in significant regulatory penalties”, the risks of not preemptively satisfying the EU on AI literacy appear very low. Art. 99 of the AI Act does set out huge potential penalties (including up to 7% of total worldwide annual turnover) for violations of the big Art. 5 prohibitions, and several other obligations in other articles also merit large fines. However, Art. 4 on AI literacy is not among them.
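For scale, the headline Art. 5 fine cap works out as “the higher of a fixed amount or a share of turnover”. A minimal sketch of that arithmetic (the €35 million floor and 7% rate are taken from Art. 99(3); check the final published text before relying on these figures):

```python
def art5_penalty_cap(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an Art. 5 violation per Art. 99(3):
    the higher of EUR 35 million or 7% of total worldwide annual turnover."""
    # Multiply before dividing to keep round turnover figures exact in floats.
    return max(35_000_000, worldwide_annual_turnover_eur * 7 / 100)

# A firm with EUR 2 billion in turnover faces a cap of EUR 140 million,
# while a firm with EUR 10 million in turnover still faces the EUR 35 million floor.
```

The point of the sketch: even a small company’s theoretical exposure under Art. 5 is capped at the fixed floor, which is why the absence of Art. 4 from this penalty regime matters so much in practice.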
AI literacy will be enforced by this somewhat unfinished promise from Art. 99(1): “Member States shall lay down the [effective, proportionate and dissuasive] rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures…” Take that together with the practical enforcement challenges – quoting from AI regulation guru Techie Ray:
European national regulators generally tend to be pragmatic and issue warnings/notices in the first instance to give the non-compliant entity an opportunity to fix their non-compliance. Penalties are handed down only if the non-compliant ignores the warning or fails to correct their conduct.
Due to resourcing and manpower constraints, European national regulators also tend to be selective over which instances of non-compliance to pursue. Generally, national regulators will focus their efforts on serious cases from the big players of the market (e.g. Big Tech) rather than small-medium enterprises (SMEs). Cracking down on the big players can also effectively send signals throughout the market, which boosts the deterrent effect of the law.
Speaking of resourcing, the field of AI regulation will likely require people who possess both technical and legal skillsets to competently assess and pursue AI cases. Such talent is relatively rare, and regulators would need to compete with the private sector to secure it. Indeed, it has been reported that the EU AI Office is struggling to fill positions (source).
In the AI Act context, it is likely national regulators will focus their attention on Big Tech first (as the primary producers of GPAI models) and on socially significant providers of high-risk systems (e.g. banks, insurers, healthcare institutions).
Putting that together, AI literacy training is important. If you use, sell, or share the results of AI in the EU (or anywhere), then get started with it. But don’t fear the regulators. Watch for Member States and the EU to offer additional guidance, and think hard about whether your people dealing with these systems know enough about them, given all the various contexts swirling around AI.
AI literacy may be among the least burdensome obligations under the AI Act, but it is the first in time, alongside the prohibited AI systems in Art. 5. I’m looking forward to a deeper examination of Article 5 next.