EU's landmark AI Act set to become law: What are the implications?
The European Union is on the brink of making history as its groundbreaking AI Act is poised to become law in the coming weeks. The legislation, hailed as a global benchmark for AI regulation, aims to balance innovation with protecting citizens' rights and safety.
The legislation is expected to be formally adopted by the EU's Council of Ministers shortly, with a phased implementation plan rolling out over the next three years. "Users will be able to trust that the AI tools they have access to have been carefully vetted and are safe to use," said Guillaume Couneson, a partner at the law firm Linklaters, drawing a parallel to the stringent security measures users expect from banking apps.
Defining AI Under the Act
The Act provides a detailed definition of AI as a "machine-based system designed to operate with varying levels of autonomy," covering technologies from chatbots to systems sifting through job applications. The definition covers systems that may continue to learn after deployment and that generate outputs such as predictions, recommendations, or decisions influencing physical or virtual environments.
Some AI Systems Are Banned Under the New Act
The legislation prohibits AI systems that pose "unacceptable risks," such as those that manipulate people to cause harm, social scoring systems, predictive policing based solely on profiling, emotion recognition in workplaces and schools, biometric categorization, and the building of facial recognition databases through untargeted scraping of images from the internet or CCTV footage. However, these bans do not extend to AI tools designed for military, defense, or national security purposes, a point of contention among tech safety advocates.
"We fear that the exemptions for national security in the AI Act provide member states with a carte blanche to bypass crucial AI regulations and create a high risk of abuse," warned Kilian Vieth-Ditlmann, deputy head of policy at Algorithmwatch, a German non-profit advocating for responsible AI use.
Regulation of High-Risk Systems
The Act categorizes certain AI systems as "high risk," requiring stringent oversight. These include systems used in critical infrastructure, education, employment, healthcare, banking, and law enforcement. High-risk AI tools must be accurate, undergo risk assessments, operate under human oversight, and log their usage. EU citizens will also have the right to explanations of decisions these systems make that affect them.
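The Act does not prescribe how logging or human oversight must be implemented in practice. As a purely illustrative sketch, assuming Python and entirely hypothetical names (score_application, audited_decision), a deployer might wrap each model call in an audit record that captures the input, the output, and the human who signed off:

```python
import json
import time
import uuid

def score_application(features: dict) -> dict:
    """Hypothetical high-risk model call (e.g. CV screening)."""
    return {"decision": "shortlist", "confidence": 0.87}

def audited_decision(features: dict, reviewer: str, log_path: str = "audit.jsonl") -> dict:
    """Run the model, then persist an audit record of the kind the
    Act's logging obligation gestures at: what went in, what came
    out, when, and which human reviewed it."""
    result = score_application(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "input": features,
        "output": result,
        "human_reviewer": reviewer,  # human-oversight hook
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return result

if __name__ == "__main__":
    audited_decision({"years_experience": 4, "degree": "BSc"}, reviewer="hr_lead_01")
```

Appending one JSON line per decision keeps the trail easy to inspect, though a real deployment would also need retention policies and access controls well beyond this toy example.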
Generative AI and Copyright
Generative AI, capable of producing text, images, video, and audio from simple prompts, is addressed under provisions for "general-purpose" AI systems. Developers must comply with EU copyright law and provide detailed summaries of the content used to train their models. A stricter regime is reserved for models posing "systemic risks," including obligations to report serious incidents and conduct adversarial testing.
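The exact format of these training-data summaries will be set by an official template rather than by the Act itself. The snippet below is only a hypothetical guess at the kind of information such a summary might record, with all field names and figures invented for illustration:

```python
import json

# Hypothetical fields; the official template is to be published by
# the European AI Office and may differ substantially.
training_data_summary = {
    "model": "example-gpt",  # invented model name
    "data_sources": [
        {"name": "Web crawl snapshot", "type": "web", "share_pct": 60},
        {"name": "Licensed news archive", "type": "licensed", "share_pct": 25},
        {"name": "Public-domain books", "type": "public_domain", "share_pct": 15},
    ],
    "copyright_policy": "opt-out requests honored per EU copyright law",
}

print(json.dumps(training_data_summary, indent=2))
```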
Deepfakes and Transparency
Creators of deepfakes must disclose that their content has been artificially generated or manipulated. For clearly artistic, creative, or satirical work the obligation is lighter, but the presence of generated content must still be disclosed in an appropriate way. AI-generated text informing the public on matters of public interest must be flagged as AI-made unless it has undergone human review. Developers must also ensure their outputs are detectable as AI-generated, for example through watermarking.
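The Act does not mandate any particular labeling technology. As a toy illustration only, assuming the Pillow imaging library, a generator could attach a machine-readable disclosure to its images as PNG text metadata; real systems would rely on robust watermarking or provenance standards such as C2PA, since plain metadata is trivially stripped:

```python
from PIL import Image, PngImagePlugin

def save_with_disclosure(img: Image.Image, path: str) -> None:
    """Embed an 'AI-generated' disclosure as a PNG text chunk.
    Toy example: real deployments need tamper-resistant marks."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # invented model name
    img.save(path, pnginfo=meta)

def is_labelled_ai(path: str) -> bool:
    """Check whether the disclosure chunk is present."""
    return Image.open(path).text.get("ai_generated") == "true"

if __name__ == "__main__":
    save_with_disclosure(Image.new("RGB", (64, 64), "grey"), "out.png")
    print(is_labelled_ai("out.png"))  # True
```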
Enforcement and Penalties
The Act stipulates fines ranging from €7.5 million or 1.5% of a company's total worldwide turnover, whichever is higher, for minor breaches, up to €35 million or 7% of turnover for deploying or developing banned AI tools. Smaller companies and startups will face proportionate penalties. Obligations will take effect in phases after the act becomes law, with a European AI Office established to oversee compliance and set standards.
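For the turnover-based tiers, the applicable ceiling is the higher of the fixed sum and the percentage of worldwide turnover. A quick back-of-the-envelope calculation, with the turnover figure invented for illustration:

```python
def applicable_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Higher of the fixed amount and the turnover-based amount,
    as the Act's penalty tiers are generally framed."""
    return max(fixed_eur, turnover_eur * pct)

# A hypothetical firm with €2bn worldwide turnover:
turnover = 2_000_000_000
print(applicable_fine(turnover, 7_500_000, 0.015))  # 30,000,000 -> minor-breach tier
print(applicable_fine(turnover, 35_000_000, 0.07))  # 140,000,000 -> banned-AI tier
```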
A Global Impact
The ramifications of the EU AI Act extend far beyond European borders. As one of the world's most influential tech regulators, the EU could set a precedent for other countries with its approach to AI. "Many other countries will be watching what happens in the EU following the adoption of the AI Act. The EU approach will likely only be copied if it is shown to work," said Couneson.
The AI Act has elicited mixed reactions from the tech industry. While major companies publicly support the legislation, concerns about overregulation persist. Amazon expressed its commitment to collaborating with the EU on safe AI development, whereas Meta warned against stifling innovation. Privately, some tech executives have criticized the Act's stringent requirements, fearing they may push companies to relocate outside the EU.