Silicon Continent is a weekly Substack newsletter by Luis Garicano and Pieter Garicano that explores why the European Union lags behind the United States and Asia in digital technology, artificial intelligence, and broader innovation. The newsletter combines hard economic data with case studies to diagnose those challenges and propose solutions, an ambitious project inspired by transatlantic conversations between the authors.
I’ve discussed the EU AI Act previously (literacy, timelines, prohibited AI), but for the heavy work of examining its impact on high-risk systems, let’s start with Pieter Garicano’s take. I’ll let him set up the Act again in brief:
Originally, the AI Act was supposed to work by regulating outcomes rather than capabilities. It places AI models into risk categories based on their uses — unacceptable, high, limited and minimal risk — and imposes regulations for each of those categories.
Many important AI uses … are treated as ‘high risk’. These are:
- Systems that are used in sectors like education, employment, law enforcement, recruiting, and essential public services
- Systems used in certain kinds of products, including machinery, toys, lifts, medical devices, and vehicles.
Garicano’s criticism of the regulatory regime is harsh. He argues that the high-risk rules are unnecessarily restrictive and unclear, and that they will burden startups in high-value areas like education, employment, law enforcement, and healthcare, as well as in sophisticated use cases like automated decision-making. The administrative and financial overhead is unrealistic for many organizations, and compliance will be virtually unattainable for smaller actors. As a striking example:
Imagine you have a start-up and have built an AI teacher — an obvious and good AI use case. Before you may release it in the EU, you must do the following:
1. Build a comprehensive ‘risk management system’
2. Ensure the system is trained on data that has ‘the appropriate statistical properties’
3. Draw up extensive technical documentation
4. Create an ‘automatic recording of events across the system’s lifetime’
5. Build a system so a deployer can ‘interpret a system’s output’
6. Build in functions for ‘human oversight’ and a ‘stop button’
7. Build a cybersecurity system
8. Build a ‘quality management system’ that includes ‘the setting-up, implementation and maintenance of a post-market monitoring system’
9. Keep all the above for the next 10 years
10. Appoint an ‘authorized representative which is established in the Union’
11. Undergo a ‘conformity assessment’ with a designated authority verifying that you have done the above, and receive a certificate
12. Undergo a fundamental rights impact assessment and submit it to the Market Surveillance Authority
13. Draw up an EU Declaration of Conformity
14. Register in an EU database

If you get any of that wrong, you may be fined up to the higher of 15 million euros or 3% of total revenue.
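That penalty formula is worth a second look. Here is a minimal sketch of how the cap scales, assuming ‘total revenue’ corresponds to the Act’s ‘total worldwide annual turnover’ (the helper name is mine, for illustration only):

```python
# Hypothetical helper: the fine cap is the HIGHER of a flat 15 million
# euros or 3% of revenue, per the quoted penalty provision.
def max_fine_eur(annual_revenue_eur: float) -> float:
    return max(15_000_000, 0.03 * annual_revenue_eur)

# The 3% branch only takes over above 500 million euros in revenue
# (15_000_000 / 0.03), so every firm below that threshold faces the
# same flat 15 million euro exposure:
print(max_fine_eur(5_000_000))      # small startup   -> 15,000,000
print(max_fine_eur(2_000_000_000))  # large incumbent -> 60,000,000
```

The flat floor is what makes the exposure so lopsided: a five-million-euro startup and a half-billion-euro firm face the same maximum fine, which is exactly the fixed-cost asymmetry Garicano highlights below.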
Many aspects of AI Act compliance remain undefined. Nevertheless, Garicano has harsher words still for the enforcement procedures:
The rules are bad. It gets worse once you look at how the regulations are actually enforced …
At the EU level there will be four bodies: an AI Office responsible for defining guidelines and definitions and for coordinating bloc-wide enforcement; a Board staffed by representatives from the member states; a scientific panel supporting both the Office and the Board; and an advisory forum.
None of these EU bodies will actually be responsible for the vast majority of the Act. While the laws were crafted at the European level, day-to-day execution will not be led by the Commission but by multiple authorities within each country…. at least one Market Surveillance Authority [per country] responsible for ensuring compliance, investigating failures and applying penalties… at least one Notifying Authority [per country] that will supervise the organizations (called notified bodies) that … are responsible for certifying that systems conform to the requirements imposed on them.
And the consequences for European innovation appear dire. These rules impose fixed compliance costs that disproportionately disadvantage startups and smaller firms while benefiting established players like Google or Microsoft.
Rather than let banks experiment with automatic AI tellers, Europe categorizes biometric systems as high risk from the start and requires human monitors to supervise them. Rather than let schools try to improve their quality by bringing in AI tutors, Europe pre-emptively says that there must be impact assessments, authorized representatives, notified bodies and monitoring.
This is all, understandably, framed as counterproductive to the EU’s goals. Perhaps … start over?
But the reason the impact of the Act is still limited is that many of its provisions are not yet enforced…. The delay provides Europe with an opportunity to nullify the damage the Act can do before it takes force. There are smaller escape clauses: the Commission has been given the power to amend Annex III, which lists high-risk use cases, and many of the enforcement practices are still to be defined. Given the stakes, it would be better to start from scratch. The Commission has already committed itself to reviewing the Act in 2029. To escape the strange world of the EU AI Act, Europe should do that now.
It’s worth reading the entire post and subscribing to the Silicon Continent newsletter.