The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology have been caught off guard by AI’s rapid rise.
The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General-purpose AI systems such as chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them, but weren’t sure how, or even whether it was necessary.
“Then came the ChatGPT boom,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If anyone still had doubts about whether we needed anything at all, I think those doubts dissipated quickly.”
ChatGPT’s release last year captured worldwide attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online material. With concerns mounting, European lawmakers moved quickly in recent weeks to add language on general-purpose AI systems as they put the finishing touches on the legislation.
The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market makes compliance easier than developing different products for different regions.
“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at the digital rights group EDRi.
Authorities around the world are scrambling to figure out how to control rapidly advancing technologies to ensure they improve people’s lives without threatening their rights or safety. Regulators are concerned about the new ethical and social risks associated with ChatGPT and other general purpose artificial intelligence systems that could change everyday life, from work and education to copyright and privacy.
The White House recently invited the heads of AI tech companies, including Microsoft, Google and ChatGPT maker OpenAI, to discuss the risks, while the Federal Trade Commission warned it would not hesitate to crack down.
China has released a draft regulation requiring security assessments of any products that use generative artificial intelligence systems such as ChatGPT. The UK competition watchdog has launched an AI market review, and Italy briefly blocked ChatGPT over a privacy breach.
Wide-ranging EU rules covering any provider of AI services or products are expected to be approved by a European Parliament committee on Thursday, after which negotiations will begin among the 27 member states, the Parliament and the EU’s executive Commission.
European rules influencing the rest of the world – the so-called Brussels effect – have previously played out after the EU tightened data privacy rules and mandated common charging cables for phones, though such efforts have been criticized for stifling innovation.
This time the attitude may be different. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month break to consider the risks.
Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about the unchecked development of AI.
Mr Tudorache said such warnings show that the EU’s decision to start developing AI rules in 2021 was “the right decision.”
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment on the legislation. The company has told the EU that “AI is too important not to be regulated.”
Microsoft, which backs OpenAI, did not respond to a request for comment. It has hailed the EU’s efforts as an important step “towards making robust AI the norm in Europe and around the world.”
Mira Murati, Chief Technology Officer of OpenAI, said in an interview last month that she believes governments should be involved in regulating AI technologies.
But when asked whether some of OpenAI’s tools should be classified as higher risk under the proposed European rules, she said there is “a lot of nuance.”
“It depends on where you’re applying the technology,” she said, citing a “very high-risk medical or legal use case” compared to an accounting or advertising application as an example.
OpenAI CEO Sam Altman plans a world tour this month, with stops in Brussels and other European cities, to talk about the technology with users and developers.
According to a recent partial draft of the legislation obtained by The Associated Press, newly added provisions in the EU AI Act would require “foundation” AI models to disclose copyrighted material used to train their systems.
Foundation models, which include large language models, are a subcategory of general-purpose AI that encompasses systems such as ChatGPT. Their algorithms are trained on vast pools of online information, such as blog posts, digital books, scientific articles and popular songs.
“You have to make a significant effort to document the copyrighted material that you use in training the algorithm,” Mr. Tudorache said.
Policy makers for AI must balance the risks the technology poses against the transformative benefits it promises.
According to EDRi’s Ms Chander, big tech companies developing AI systems and European national ministries looking to deploy them are “seeking to limit the power of regulators,” while civil society groups are pushing for greater accountability.
“We need more information about how these systems are developed — about the level of environmental and economic resources invested in them — and how and where these systems are used so that we can effectively challenge them,” she said.
Under the EU’s risk-based approach, uses of AI that threaten people’s safety or rights are subject to strict controls.
Remote facial recognition is expected to be banned, as are government “social scoring” systems that judge people based on their behavior. The indiscriminate “scraping” of photos from the internet for biometric matching and facial recognition is also prohibited.
Predictive policing tools and emotion-recognition systems are also out, except for therapeutic or medical uses.
Violations can result in fines of up to 6% of a company’s global annual revenue.
Even after receiving final approval, expected by the end of the year or early 2024 at the latest, the AI Act will not take effect immediately. Companies and organizations will be given a grace period to figure out how to comply with the new rules.
It’s possible the industry will push for more time, arguing that the final version of the AI Act goes further than the original proposal, said Frederico Oliveira da Silva, senior lawyer at European consumer group BEUC.
They may object that “instead of one and a half to two years, we need two to three years,” he said.
He noted that ChatGPT launched only six months ago and has already generated a host of problems and benefits in that time.
If the AI Act does not take full effect for years, “what will happen in those four years?” Mr. Da Silva said. “That’s really our concern, and that’s why we’re asking the authorities to be aware of this, to really focus on this technology.”
This story was reported by The Associated Press. AP technology writer Matt O’Brien contributed from Providence, Rhode Island.