EU's 'pyramid of risk' aims to set global standards for AI
The European Commission on Wednesday announced draft rules on the use of artificial intelligence (AI), including a ban on most biometric surveillance, in an attempt to set global standards for the key technology.
"On artificial intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," Margrethe Vestager, the Commission's executive vice-president for digital affairs and competition, said in a statement.
The announcement was met with skepticism from tech lobbying groups, which warned that the rules should not create more red tape for companies and users.
On the other side, civil and digital rights groups warned of worrying gaps in the proposal that could serve as loopholes, leaving room for abuse by repressive governments.
The Commission said the proposed new set of rules for AI systems used in the EU was created to ensure they "are safe, transparent, ethical, unbiased and under human control."
They would create a pyramid of risk, with artificial intelligence systems ranked from "unacceptable" through to "minimal risk."
"Anything considered a clear threat to EU citizens will be banned," said the policy document shared by Vestager.
AI would be classified "high risk" if it falls under one of eight categories, including critical infrastructure, safety components, and law enforcement uses that may interfere with people's fundamental rights.
"Limited risk" products would include chatbots and other systems that would be labeled to "allow those interacting with the content to make informed decisions."
Finally, systems categorized as "minimal risk" would allow "free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category where the new rules do not intervene as these systems represent only minimal or no risk for citizens' rights or safety."
High-risk AI systems would not be banned, but they would have to undergo much stricter assessment and registration than those in lower categories.
Companies breaching the rules would face fines of up to 6 percent of their global turnover or 30 million euros ($36 million), whichever is higher.
Speaking at a news conference on Wednesday, Vestager said that, while AI was an important force for progress in Europe, certain rules and controls had to be put in place to create trust.
"There is no room for mass surveillance in our society and that is why, in our proposal, the use of biometric identification in public places is prohibited in principle," Vestager said. "We propose a very narrow exception that is strictly defined, limited and regulated. Those are extreme cases, such as when the police authorities are in need of the technology in search for a missing child," she explained.
The proposal was, however, met with criticism from various quarters, including the European Digital Rights advocacy group, which warned that "the draft law does not prohibit the full extent of unacceptable uses of AI and in particular all forms of biometric mass surveillance."
Patrick Breyer, a Green Party lawmaker in the European Parliament, was also scathing about the new rules, saying: "Biometric and mass surveillance, profiling and behavioral prediction technology in our public spaces undermine our freedoms and threaten our open societies. The proposed procedural requirements are a mere smokescreen."
The Commission will have to thrash out the details with EU national governments and the European Parliament before the rules can come into force.
That could take years marked by intense lobbying from companies and even foreign governments, said Patrick Van Eecke, partner and head of the European cyber practice at law firm Cooley.
The European Commission has previously set out a plan to invest more than $1 billion in AI.