With landmark AI Act, the EU takes the lead in regulating artificial intelligence
The European Union is once again the leader in big tech regulation, for better or for worse.
What you need to know
- European Parliament and Council today reached a provisional agreement to regulate artificial intelligence with the AI Act.
- The legislation is the first successful attempt to regulate AI, aiming to curb potential risks of advanced automation and misinformation.
- Though the AI Act still needs to be formally adopted, its enactment is nearly definite.
- However, the act's provisions are not expected to be enforceable for another year or two.
The European Parliament and Council reached a provisional agreement today on the Artificial Intelligence Act, which represents the world's first concrete set of rules governing the use of AI. The EU, for better or worse, has been the leader in big tech regulation. The governing body of 27 European countries has had input on everything from data collection to the charging port on iPhones in recent years.
This latest piece of legislation was conceptualized in 2021 but has changed considerably following the AI boom of the past year. AI went mainstream, with chatbots like OpenAI's ChatGPT and Google's Bard debuting for use by the masses. EU legislators retooled the proposed AI Act accordingly, and it took two years of negotiation to reach the version agreed upon today.
However, getting to this point wasn't easy. You'll notice that both of the aforementioned AI chatbots come from American companies, and some EU lawmakers expressed concern about the AI Act becoming an obstacle for homegrown startups.
The AI Act attempts to limit how the technology can be used by companies, governments, and law enforcement. It focuses on malicious applications of AI, like using the tech to violate a person's civil rights. Some examples given by the EU include predictive policing, image scraping for facial recognition purposes, and manipulating human behavior. These examples, and a few others, will be barred outright by the AI Act.
The EU did note a few exceptions under which law enforcement can use AI-based biometric identification; generally, these apply when there is a specific and imminent threat.
The legislation also targets big tech companies under the "high-risk systems" provisions. The AI Act says that general-purpose AI, such as chatbots like ChatGPT, will have to comply with new transparency requirements. These companies must share technical documentation, comply with copyright law, and provide detailed summaries of the content used for training. The guidelines apply to both AI systems (like ChatGPT) and AI models (like GPT-4).
If the EU deems that an AI tool poses a "systemic risk," there are stricter requirements. These AI systems, if they meet undisclosed criteria, will have to "conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency," per the EU.
If companies defy the AI Act's stipulations, they could face fines of up to 7% of their global annual revenue.
"AI Act: Council and Parliament strike a deal on the first rules for AI in the world," the EU announced in a press release on December 9, 2023.
Before becoming law, the AI Act will need to be formally adopted by both the European Parliament and the Council. However, this provisional agreement means it is all but certain the AI Act will be enacted by the EU.
Although the legislation is significant in that it is the first to try to curb AI's rapid expansion, its effectiveness is unclear. The provisions in the law are not expected to be enforceable for 12 to 24 months, according to the New York Times. Who knows what the artificial intelligence industry will look like by that time or if the EU's rules will even still be relevant?