AI or More? A Risk-based Approach to a Technology-based Society
A lot has already been said about the EU draft regulation for Artificial Intelligence (the ‘Regulation’), published by the European Commission on 21 April 2021. What has, however, remained little discussed is the fundamental issue of regulating an unspecified object (‘AI’). What started out as a simple discussion on how to improve the definition of ‘AI’ in the Regulation has resulted in this blogpost, in which we come to the conclusion that there might be a better way of ‘shaping Europe’s digital future’.
AI Regulation (draft): scope and applicability
Article 3, sub 1, of the AI Regulation defines ‘AI’ by referring to software systems generating outputs for human-defined objectives:
‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’
In Annex I, the European Commission (‘EC’) captures virtually every technique currently known:
‘(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation,
inductive (logic) programming, knowledge bases, inference and deductive engines,
(symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.’
Although most individuals will have a (conceptual) understanding of what should and should not qualify as ‘AI’, it seems the EC has struggled to find a clear definition. Assuming the intention was to capture all techniques, approaches and software that might now or in the future qualify as ‘AI’, one can understand why the techniques and approaches are formulated so broadly. From a legal point of view, however, it is certainly not desirable to introduce legislation that deals with an object that is at once very broad and very unspecified.
The EC has tried to mitigate this pitfall by introducing a risk-based approach, meaning that only those ‘AI’ systems that pose a high or moderate risk to the fundamental rights and freedoms of European citizens fall within the categories affected by the Regulation.
While we can see the benefit of this practical approach of looking at risks rather than at techniques and systems, it raises issues of legal acceptability and legal certainty. It is far from ideal to introduce a Regulation that creates such uncertainty as to its scope.
Moreover, according to the impact assessment accompanying the proposal, the aim is to ‘prevent unilateral Member States actions that risk to fragment the market and to impose even higher regulatory burdens on operators developing or using AI systems’. The proposed Regulation is based on Article 114 TFEU, which governs shared competence. Member States can only regulate those areas that fall outside the scope of the proposal.
At first glance, tweaking the definition of ‘AI’ might seem like a way to solve these issues. We could indeed try to draft a narrower definition of ‘AI’, although it would be tricky to find one that remains ‘future-proof’. We therefore believe it is more logical to assess the Regulation from a practical point of view, to distil what it actually intends to capture, and to see how this might be achieved in a ‘future-proof’ manner.
Technique vs AI
Looking at the setup and at the definition of AI included in Article 3, it seems the EC is trying to capture risky techniques rather than the sub-category ‘AI’. Indeed, the definition is broad enough to capture virtually every technique known to humans (including calculators and decision trees), so why limit the Regulation to ‘AI’ only? As the world’s first AI legislation from a major legislative body, a proper and thorough Regulation would be a perfect way to demonstrate Europe’s readiness for the digital age. Instead of well-considered legislation, however, we now run the risk of becoming the world’s laughingstock: passing calculators off as AI and making it even harder for software vendors to enter the European market.
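To make the breadth of the definition concrete, consider the following hypothetical sketch (ours, not from the Regulation or its annexes): a few lines of ordinary rule-based code. Under Annex I(b) (‘logic- and knowledge-based approaches’, including ‘expert systems’), such plain if/else logic arguably already satisfies Article 3, since it generates a ‘decision’ for a human-defined objective.

```python
# Hypothetical illustration (not an official example): a trivial
# rule-based credit check. A single human-authored expert rule
# generates a "decision" for a human-defined objective, which is
# arguably enough to fall under Article 3 read with Annex I(b).

def approve_loan(income: float, debt: float) -> str:
    """Return a loan decision based on a fixed debt-to-income rule."""
    if income <= 0:
        return "rejected"
    # The 'knowledge base': one rule encoded by a human expert.
    debt_ratio = debt / income
    return "approved" if debt_ratio < 0.4 else "rejected"

print(approve_loan(income=50_000, debt=10_000))  # prints "approved"
print(approve_loan(income=50_000, debt=30_000))  # prints "rejected"
```

No machine learning is involved anywhere, yet the output is exactly the kind of ‘decision influencing the environment’ the definition describes, which is the point of the calculator objection above.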
What to do? Let us look at the Regulation’s essence: making Europe fit for the digital age and protecting EU citizens from inventions that could threaten their fundamental rights, whilst avoiding bottlenecks for innovation. The risk-based approach creates leeway for inventions that are cutting-edge but harmless. However, if we want to prepare ourselves for a digital future, would it not make sense to also capture digital techniques not yet known, instead of limiting protection to known AI techniques? This would provide a robust system for future inventions and immediately tackle the most frequent objection to the current draft, namely its flawed definition of ‘AI’. We therefore propose to broaden the scope of the Regulation further, making it apply to all (digital) technology, whilst keeping the risk-based approach in place to ensure innovation is dealt with in a practical manner.
In practice, this suggestion leads to the following amendment of the current draft: eliminating references to sub-categories of ‘AI’ and opting for the overarching term (‘technology’) instead. This would ensure that Europe remains ready for the new digital age and that EU citizens are protected from exposure to new, risky technological advancements.
In summary, we propose to show the (draft) AI Regulation’s true colours and to rename it the ‘Technology Regulation’, making it apply to all current and future technology. With the risk-based approach in place, this would in our view not hinder innovation but create a practical and forward-thinking framework that is fit for Europe’s digital future. Technology is here to stay; so should its Regulation!
Raimond Dufour is an attorney at Dufour Advocatuur.
Josje Koehof is a Legal Counsel at Optiver.
Tina van der Linden is an assistant professor at Vrije Universiteit Amsterdam.
Jan Smits is an Emeritus Full Professor of Law & Technology at Eindhoven University of Technology.