Europe’s artificial intelligence blind spot: Race

Europe's vision of artificial intelligence regulation is color-blind — and not in a good way.

Between the U.S.'s laissez-faire and China's dirigiste approaches, the EU is intent on carving out a “third way” for AI regulation that boosts innovation but respects “European values,” including privacy and human rights. But activists and academics fear the rules will not consider the communities most at risk of AI-based discrimination — people of color.

In recent years, there have been high-profile examples of AI systems discriminating against racial minorities: facial recognition systems that fail to recognize women and Black and brown faces; opaque, unaccountable and discriminatory hiring algorithms; and applications that disproportionately predict criminality among minorities and produce worse legal outcomes.

The European Commission will unveil its AI rules this spring, requiring "high-risk" AI systems to meet minimum standards of trustworthiness. But with European countries already struggling to address racial discrimination in representation, policing and online abuse, Sarah Chander of the digital rights group EDRi said those problems are likely to seep into tech as well.

“We shouldn't see the issues of the potential harmful impact on racialized communities through tech as a U.S. issue. It's going to be wherever you find manifest structural discrimination and racial inequality,” Chander said, adding that complacent attitudes among policymakers and an unwillingness to recognize the problem will only exacerbate it.

Algorithmic harms

Even with strict data protection rules, strong fundamental rights frameworks and a directive on racial equality, European minorities are not safe from algorithmic harms.

In January, the Dutch government resigned over a scandal in which the tax authority had used an algorithm to predict who was likely to wrongly claim child benefits. Without any evidence of fraud, it forced 26,000 parents — singling out families with dual nationality and from ethnic minorities — to pay back tens of thousands of euros, with no right to appeal. The Dutch Data Protection Authority found the tax authority's methods “discriminatory.”

“It seems like there's a complete disconnect between reality, which is that automating bias, automating prejudice, automating racism that has huge impacts on huge groups within society, and this blind vision that anything that can be automated is a good thing,” said Nani Jansen Reventlow of the Digital Freedom Fund, which supports digital rights through strategic litigation.

Nakeema Stefflbauer is the founder of Frauenloop.org, an organization that trains women in tech, and runs a network for people of color in tech called Techincolor.eu. Stefflbauer said her daughter recently applied to the University of Amsterdam and had to sit an entrance exam monitored by Proctorio, a controversial exam-surveillance software with a poor track record of recognizing Black and brown faces.

If Europe doesn’t “have some type of clear target goal in regulating the outcomes of algorithmic implementation in Europe, then we're going to have the same problems that they're having in North America, where it’s like, 'oops, nobody thought about this whole group of women or transgender people, or Black people, or Asian people, or any people that are not white men,'” Stefflbauer said, referring to a slew of legal complaints brought forward in the U.S. as a result of racial bias in AI systems.

"In Europe we should be able to at least avoid the worst of those excesses."

It's a problem the European Commission is aware of.

Last year, Vice President Věra Jourová specifically warned against "copy-pasting the imperfections and unfairness of the real world into the AI world," including racial bias. In sketching out its AI plans, the EU warned against the harms AI systems with racial bias could do, adding that any AI law "should concentrate on how to minimise the various risks of potential harm."

But it's unclear what provisions the Commission will actually put into upcoming rules. A spokesperson for the Commission did not comment on the upcoming proposal, but said that the EU has a "solid framework of legislation at EU and national level to protect fundamental rights and to ensure safety and consumer rights. To prevent breaches of these rules and to ensure that possible breaches can be addressed by national authorities, high-risk AI systems need to be well-documented and provide an adequate degree of transparency."

A coalition of civil society groups, led by EDRi, has campaigned for red lines in the upcoming AI laws that would ban technologies such as live facial recognition, which they warn would discriminate against people of color. 

EDRi’s Chander argues that such limits are necessary because human oversight or technical fixes alone — including broader datasets to train algorithms — cannot be relied upon to erase bias or discriminatory effects.

“It's not necessarily an issue of not enough representation in the datasets that are used, but rather how such systems might perpetuate existing discriminatory impact in society,” Chander said.
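Chander's distinction can be made concrete with a minimal, hypothetical sketch (nothing below is from the article; the data, numbers and model choices are illustrative assumptions): even when two groups are equally represented in the training data, a model fitted to historical labels that encode discriminatory enforcement will reproduce that discrimination.

```python
# Illustrative sketch only: simulated data, not any real system described
# in the article. It shows why "more representative data" alone may not
# remove bias when the historical labels themselves are biased.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups, equally represented, so representation is not the problem.
group = rng.integers(0, 2, n)          # 0 = majority, 1 = minority
behaviour = rng.normal(0, 1, n)        # true risk factor, identical across groups

# Historical labels: the same behaviour was flagged more often for group 1,
# mimicking past discriminatory enforcement (the 1.5 penalty is an assumption).
flag_prob = 1 / (1 + np.exp(-(behaviour + 1.5 * group)))
label = (rng.random(n) < flag_prob).astype(int)

# Train on the balanced dataset with the biased labels.
X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, label)

# Two people with identical behaviour but different group membership:
# the model still scores the minority-group person as higher risk.
same_person = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_person)[:, 1])  # roughly [0.5, 0.8]
```

Because the bias in this toy setup lives in the labels rather than in who is represented, collecting a broader or more balanced dataset would leave the disparity intact, which is the crux of Chander's argument.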

Inclusive policymaking

Conversations about race tend to happen at the national level rather than in Brussels, and “a lot of efforts to deal with systemic racism are not always connected with the conversations about technology,” Jansen Reventlow said.

That disconnect can leave minority communities excluded from the technical discussions that shape AI policy, said Vanja Skoric, program director at the European Center for Not-for-Profit Law. “Often they themselves don’t feel good enough or ‘expert enough’ to participate, which leads to a lack of critical important voices in discussions,” she said.

When policymakers and industry fail to actively seek input from diverse voices, or in some cases ignore or shut them out, the resulting legislation is poorer for it. The Council of Europe, the 47-country human rights body that is drafting its own guidelines on ethical AI, has no specific requirement to ensure representation of marginalized or vulnerable groups or ethnic minorities, Skoric said.

A spokesperson for the Council of Europe said non-discrimination will be one of the subjects addressed in its upcoming legal framework for AI, due by the end of 2021. Its steering committee on anti-discrimination, diversity and inclusion has "agreed that work on AI and non-discrimination will be one of its future priorities" and will engage in preparing a "sectoral instrument on AI, non-discrimination and equality," the spokesperson added.

That absence of representation applies to Brussels, too, which scores poorly in ethnic diversity. Out of 705 members of the European Parliament, only a handful are people of color. All 27 EU commissioners are white.

The first thing Europe needs to address is the power structures behind AI regulation, said Os Keyes of the University of Washington, who studies gender, race and power in AI systems. If the EU is serious about tackling racial bias in policy, Keyes added, it should also examine its own internal power structures.

Keyes said that lack of diversity partly explains discriminatory activities and policies. For example, the EU's research fund paid for a paper on race classification, technology that attempts to infer a person’s race from characteristics such as skin color and facial features. The approach has drawn harsh criticism from computer scientists, who fear it could fuel discrimination and argue that it ignores the social context of racial identity. The fund has also funneled money into the now-scrapped iBorderCtrl project, which tried to build a border security system that used facial recognition technology to determine whether travelers were lying.

Frontex, the European Border and Coast Guard Agency, has also tested military-grade surveillance drones over the Mediterranean and Aegean to spot migrants and refugees trying to reach Europe.

“Too often the decisions about what… should be regulated are handled by very narrow groups of people with very narrow ranges of interest,” Keyes said.

Source: https://www.politico.eu/article/europe-artificial-intelligence-blindspot-race-algorithmic-harm/
