The European Union has taken a significant step in shaping the future of technology by reaching a provisional agreement on the Artificial Intelligence Act (AI Act), groundbreaking legislation designed to regulate the use of artificial intelligence (AI).
The EU hopes the act will position it as a pioneering force in establishing comprehensive legal frameworks for AI, reflecting a strong commitment to ensuring the safe and ethical development and use of the technology.
The AI Act represents the world’s first extensive legal framework dedicated to AI. Presented by the European Commission in April 2021, the legislation has been crafted to manage the risks associated with AI while harnessing its potential benefits. The act classifies AI systems based on the risk they pose to users, with different levels of regulation applied accordingly.
One of the key elements of the AI Act is its emphasis on the protection of fundamental rights and safety for both individuals and businesses. AI systems in the EU are expected to be safe, transparent, traceable, non-discriminatory, and environmentally friendly, with human oversight to prevent harmful outcomes. Additionally, the act aims to establish a technology-neutral, uniform definition of AI that could apply to future AI systems.
"Historic! The EU becomes the very first continent to set clear rules for the use of AI 🇪🇺 The #AIAct is much more than a rulebook — it's a launchpad for EU startups and researchers to lead the global AI race. The best is yet to come! 👍" — Thierry Breton (@ThierryBreton), December 8, 2023
The legislation identifies specific categories of risk associated with AI systems:
Unacceptable Risk: AI systems considered a threat to individuals' free will, such as those that manipulate behavior or enable social scoring, will be banned. Narrow exceptions exist, such as the use of "post" (retrospective) remote biometric identification systems to prosecute serious crimes, subject to court approval.
High Risk: AI systems affecting safety or fundamental rights will face stricter controls. This includes AI used in products like toys, aviation, and medical devices, as well as AI in areas like law enforcement and education.
Limited Risk: Other AI applications will undergo assessments but face fewer restrictions.
Specific Transparency Risk: The regulations will require clear disclosure when users interact with AI-generated content, including chatbots and deepfakes. All AI-created or modified content, such as audio, video, text, and images, must be marked as artificial, and the use of biometric and emotion recognition technologies must be transparently communicated to users.
Systems like ChatGPT will be required to adhere to transparency guidelines, including disclosure of AI-generated content and prevention of illegal content generation. Indiscriminate scraping of images from the internet or security footage to create facial recognition databases will be banned. However, the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement will be allowed under certain conditions.
The act also provides for significant fines for companies that violate the AI regulations, ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the nature of the violation and the company's size. Importantly, the legislation gives EU citizens the right to file complaints about AI systems and to receive explanations of how high-risk systems might affect their rights.
This sweeping legislative proposal, hailed as a "global first" by European Commission President Ursula von der Leyen, is expected to serve as a model for other countries, contrasting with the more hands-off approaches of the UK and Japan and the more security-focused policies of the US and China. The AI Act is thus expected to set a global benchmark, influencing how governments worldwide regulate the rapidly evolving field of AI.
The agreement on the AI Act came after extensive negotiations and is set to undergo further refinement before its expected implementation by 2025. This regulatory framework is seen as a balance between protecting individuals and businesses from AI-related risks and not stifling innovation in this dynamic field.
The AI Act is now pending approval by the European Parliament and member states, marking a critical phase in its journey towards becoming law.
The news has not been met with positivity from most corners of the internet, of course. Many have aired their frustration on social media about the EU's proposed legislation, fearing it will spell the end of AI startups in EU member states.
The general feeling among critics is that EU lawmakers have not fully understood the technology they are attempting to regulate, and that the act is a knee-jerk reaction to the formidable pace at which AI is evolving.