I will write separately about politically-realistic ways to implement the EU AI Act in a way that supports innovation.
However, I want to be clear. There is only one optimal approach to this Act: CTRL+ALT+DEL.
A very imperfect piece of regulation
The early stages of the EU AI Act were promising. Alas, the process degenerated into a hastily negotiated piece of regulation that is, at best, severely imperfect.
- Incredibly complex. As described in the Draghi report: “While the ambitions of the EU’s GDPR and AI Act are commendable, their complexity and risk of overlaps and inconsistencies can undermine developments in the field of AI by EU industry actors”.1 It is also true that the Act is but one of many related laws being approved in Europe at record speed, which has been described as a “tsunami” of overregulation.2
- Super costly. The cost of complying with the Act has been estimated at around 300,000 euros for an SME of 50 people.3 Smaller companies may not even have the resources needed to comply at all.
- Not great for open source. While the Act has some exceptions for open source, these do not cover open source in the highest-risk categories or, more worryingly, open-source products made publicly available in the market4 (can you even do open source without making the result available in one market or another?).
An already-outdated approach
Supporters of the EU AI Act respond to these and other criticisms by noting that while improvement is needed, it is also feasible.
I doubt that was ever true, but it is certainly not true now.
Any hope for the success of the EU AI Act rested on the so-called Brussels effect (or some variant of it). If the EU managed to get other countries to adopt similar standards, the global baseline would be homogeneous, and EU companies would not have to compete at a disadvantage.
The harmonisation of global AI governance is not currently realistic.
The US and China will offer the world what they want to offer the world. Their AI products will be good (I hope) or bad (others fear); that remains to be seen. But there is little the EU can do to significantly influence these products.
And so, what next for the EU AI Act?
- If the EU blocks all non-complying AIs (let’s assume it can), the geopolitical cost would be enormous, and companies and people in the EU would suffer from a lack of access to technology.
- If the EU fully or partially exempts foreign AIs while forcing local innovators to comply with the Act, it would disfavour its companies even inside the EU.
- Anything in between is one or another version of ad-hoc implementation, which would invite regulatory capture and worsen regulatory uncertainty for anyone without access to decision-makers.
Conclusion
People in the EU value things like privacy, sustainability, and ethics. Delivering on these values requires enabling innovators, not restraining them.
Alas, the EU AI Act is not designed to enable innovators.
Supporters of the Act describe it as balancing safety and innovation, but in reality the Act is all sticks and no carrots. In the current geopolitical context, these sticks are next to useless vis-à-vis foreign powers. Yet they can easily be used to bash local innovators.
The best thing the EU could do with the Act is to bin it somewhere no one will ever find it.