EU AI Act: start of regulation or end of innovation?
The European Commission proposed the world’s first law to regulate artificial intelligence (“AI”) in 2021, but it is only this year that the EU Artificial Intelligence Act has reached its final stages. Although finalising the law may yet take some time, it is possible that the Act will be adopted by the end of 2023.
According to the European Parliament’s website, the Act is designed to ensure AI systems in use in the EU are “safe, transparent, traceable, non-discriminatory and environmentally friendly”. Broadly, the Act aims to put in place different rules depending on assessed risk levels:
- “Unacceptable risk” AI, such as a system that undertakes cognitive behavioural manipulation of people or specific groups, will be banned (with some limited exceptions).
- “High risk” AI systems which may negatively affect safety or fundamental rights, such as systems designed for law enforcement or migration and border control management, will have to be registered in an EU database and assessed both prior to being put on the market and during their lifetime.
- “Limited risk” AI systems, for example those that manipulate images or audio, must comply with minimal transparency requirements so that consumers may make informed decisions about their use.
- “Generative AI” systems such as ChatGPT are also subject to transparency requirements – for example, disclosure that the content was AI-generated.
Consequences of the Act
The Act, when implemented, will have consequences far beyond the borders of the EU. UK-based companies are both consumers and competitors of EU-based AI systems, and the US does business regularly with the EU. Due consideration will therefore need to be given to the Act’s effects on new technologies being developed across the globe. The Act may also prompt similar legislation elsewhere. In some countries this has already begun: the Cyberspace Administration of China is consulting on a proposal for AI regulation, as is the UK’s Department for Science, Innovation & Technology.
For consumers concerned by the increasing involvement of AI in everyday technology, the news that regulation is soon to be imposed in this space may be welcome. Some comfort may be taken in the knowledge that the currently largely unregulated use of AI will soon be subject to tighter controls. There have, for example, been recent cases of deepfake pornographic images and videos being generated without the subject’s consent, and AI-authored “fake news” has been cited as a possible means of inciting violence or unrest within targeted communities. Regulation of the AI sector could lead to more protective measures around the development and distribution of technology that could be used for criminal purposes. Privacy, too, is often cited as one of the major issues with AI, and regulation of the space may go some way to addressing these concerns.
However, while the Act will have its benefits, there are concerns about its potential impact on innovation. The Center for Data Innovation in Washington DC has estimated that the Act will cost almost €11 billion per year, damage the EU’s AI industry and decrease overall innovation. The Center also noted that the European Commission has not made clear how many technologies will fall into the “high risk” category and therefore require additional regulation before being permitted onto the market: developers may have to carry out expensive conformity assessments for safe, useful and low risk AI technology. The Center takes the point further, noting that despite its critics, AI has “quietly but dramatically improved many aspects of consumers’ lives”, for example in fraud protection, harmful online content filters, language translation, health and diagnostic data, and the improvement of voice and video calling. The development of such technologies may be slowed, or stopped altogether, by the onerous and expensive requirements placed on innovators.
Another criticism that may be levelled at the Act concerns who decides how AI is defined and which technologies ought to be subject to regulation. There is no formally recognised definition of AI: the technology is continually developing, so formulating a clear, accurate definition has proved next to impossible. While the Act does attempt to define AI for its own purposes, critics have argued that the proposed definition may either (i) capture too little, as it does not accommodate future AI methods; or (ii) capture too much, by including AI methods which have no bearing on the supposed requirement for regulation.
What does this all mean?
The answer is not straightforward. Like the definition of AI itself, the impact of this legislation is an ever-moving target, and with the Act still being debated by the European Parliament, the above concerns may be addressed more fully in the coming months. In the meantime, innovators and consumers of AI, and those advising them, should keep a close eye on the Act, which may be adopted by the end of 2023 and applicable across the EU by 2025.