Tech News: EU’s AI Regulations Agreed

Following 36 hours of talks, EU officials have finally reached a historic provisional deal on laws to regulate the use of artificial intelligence. 

The Artificial Intelligence Act 

The provisional agreement reached between the Council presidency and the European Parliament’s negotiators relates to the proposal for harmonised rules on artificial intelligence (AI), the so-called Artificial Intelligence Act.

The EU says the main idea behind the rules is to regulate AI based on its capacity to cause harm to society, i.e. following a ‘risk-based’ approach: the higher the risk, the stricter the rules.  

Protection & Stimulating Investment 

The comprehensive, world-first draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. The hope is that this will also help stimulate investment and innovation in AI within Europe. 

The Key Elements 

Some of the key elements in the draft AI act include: 

– Rules on high-impact general-purpose AI models that could cause systemic risk in the future, as well as on high-risk AI systems. 

– A revised system of governance with some enforcement powers at EU level.

– The extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards.

– Improved protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

The Key Aspects

The new EU Artificial Intelligence Act covers several key aspects: 

– Clarifying the definitions and scope of the proposed act. For example, the definition of an AI system aligns with the Organisation for Economic Co-operation and Development’s (OECD) approach, providing clear criteria to distinguish AI from simpler software. The regulation excludes areas outside EU law, national security, military/defence purposes, and AI used solely for research, innovation, or non-professional reasons. 

– The classification of AI systems and prohibited practices. AI systems are classified into high-risk and limited-risk categories. High-risk AI systems must meet certain requirements and obligations for EU market access, while limited-risk ones have lighter transparency obligations. The act bans AI practices considered unacceptable, like cognitive behavioural manipulation and untargeted facial image scraping.

– Law enforcement exceptions. For example, the draft rules include specific provisions allowing law enforcement to use AI subject to safeguards, including emergency deployment of high-risk AI tools and restricted use of real-time remote biometric identification.

– New rules addressing general-purpose AI (GPAI) systems and foundation models, with specific transparency obligations and a stricter regime for high-impact foundation models.

– A new governance architecture. An AI Office within the Commission will oversee advanced AI models, supported by a scientific panel. The AI Board, comprising member states’ representatives, will coordinate and advise, complemented by an advisory forum for stakeholders. 

– Penalties. Fines for violations are set as a percentage of the offending company’s global annual turnover or a predetermined amount, with caps for SMEs and startups. 

– Rules around transparency and protection of fundamental rights. For example, high-risk AI systems require a fundamental rights impact assessment before market deployment, while increased transparency is mandated, especially for public entities using such systems. 

– Measures in support of innovation including AI regulatory sandboxes for real-world testing and specific conditions and safeguards for AI system testing. The act also aims to reduce the administrative burden for smaller companies. 

EU Pleased 

The comments of Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence, highlight how pleased the EU is that it’s managed to be first to at least put a provisional, draft set of regulations together. As she says on the Council of the EU’s pages: “This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.” 

More Than Two Years Away 

However, despite the three days of negotiations and the announcement of the provisional rules, it’s understood that the AI Act they will lead to won’t apply until two years after it comes into force (with some exceptions for specific provisions). Given that it’s just over a year since ChatGPT was released, and that in that short time we’ve also seen the release of OpenAI’s DALL-E, Microsoft’s Copilot, Google’s Bard and Duet (and now its Gemini AI model), X’s Grok, and Amazon’s Q, you can’t help thinking that effective regulation of AI looks set to stay some way behind the rapidly advancing and evolving technology for some time yet. 

Criticism

The idea of putting the AI Act together got a negative response back in June, when it was criticised in an open letter signed by 150 executives representing many well-known companies, including Renault, Heineken, and Airbus. Among the criticisms were that the rules are too strict and ineffective, and that they could negatively impact competition and opportunity and undermine the EU’s technological ambitions. 

What Does This Mean For Your Business? 

The provisional agreement on the EU’s Artificial Intelligence Act is a double-edged sword for businesses in the AI sector. On one hand, it establishes a framework for regulating AI technologies, yet on the other, its long gestation period and the potential for stringent regulations have raised concerns about its possible impact on innovation and competition in the EU.

The Act’s implementation timeline is a crucial factor for businesses. The new regulations won’t apply until at least two years after being finalised, creating a window of uncertainty. During this period, AI technology will continue to evolve rapidly, most likely outpacing the regulations being put in place. This could lead to a regulatory framework that is outdated by the time it is implemented, potentially stifling innovation and putting the EU at a technological disadvantage compared to regions with more agile or less restrictive approaches. 

Also, the Act’s stringent rules, particularly for high-risk AI systems, could impose significant compliance burdens on businesses. While these measures are intended to ensure safety and ethical use of AI, there is a risk that they might be too restrictive, hampering the ability of European companies to innovate and compete globally. Over-regulation, therefore, could deter investment in the AI sector, hindering the EU’s technological ambitions and possibly leading to a competitive disadvantage in the global AI landscape. 

The balance between regulation and innovation is therefore a delicate one. While (what will become) the Act aims to protect fundamental rights and ensure the ethical use of AI, it also needs to foster an environment conducive to technological advancement. If the regulations are perceived as overly burdensome or inflexible, they could inhibit the growth and competitiveness of EU-based AI companies, impacting the broader European technology sector. 

The EU’s AI Act may be a significant step towards regulating emerging technologies, but its success will largely depend on its ability to strike the right balance between safeguarding ethical standards and supporting innovation and competitiveness in the AI industry. Businesses must, therefore, prepare for a landscape that could change significantly in the coming years, staying agile and adaptable to navigate these upcoming regulatory challenges effectively.