Facebook and Twitter Could Face Fines in Germany Over Hate Speech Posts
BERLIN — Social media giants including Facebook and Twitter are not doing enough to curb hate speech on their platforms and could face fines of up to $53 million if they do not strengthen their efforts to delete illegal posts, a German government minister said on Tuesday.
The move by the country’s authorities comes as technology companies face increasing scrutiny worldwide over how they police online material including hate speech, possible terrorist propaganda and so-called fake news. The debate has been particularly acute in Germany, which has become a case study for combating such material because of its stringent laws on what can and cannot be published.
For tech companies and free speech campaigners, this global regulatory push could limit how individuals communicate online by restricting people’s digital activities and allowing governments to expand their control over vast parts of the internet.
Yet for a growing number of policy makers in Europe, the United States and elsewhere, social media companies have a responsibility to block harmful content from their digital platforms, and they must respect national rules that often run counter to Silicon Valley’s efforts to operate across borders.
On Tuesday, Heiko Maas, Germany’s minister of justice and consumer protection, said he would propose a law that would impose stiff fines on companies whose social media platforms did not respond swiftly enough to complaints about illegal content. Mr. Maas has been a vocal critic of how companies treat online content that violates the country’s strict rules on hate speech.
“We must increase the pressure on social networks,” he said in a statement announcing the proposed legislation.
“This will set binding standards for how companies running social networks must handle complaints and require them to delete criminal content,” Mr. Maas said of the proposal.
If the law is approved, tech companies may face fines of up to 50 million euros, or $53 million, for not combating hate speech, potentially the highest such penalty in the Western world.
It would require social media platforms to make it easy for users to report contentious material, and to respond to those reports promptly. It calls for “obviously criminal content” to be deleted or blocked within 24 hours, while companies would have seven days to remove posts that are less clear-cut.
Communications experts in Germany welcomed the move.
“It doesn’t mean that the internet will no longer be a free space,” said Birgit Stark, head of the institute for communications at the University of Mainz. “You can’t just defame people, just because it is the internet.”
The development followed the publication on Tuesday of the results of a study that showed that Facebook and Twitter failed to meet the German target of removing 70 percent of hate speech within 24 hours of being alerted to its presence.
The yearlong study noted that while the two companies eventually erased nearly all illegal hate speech, in January and February Facebook managed to delete only 39 percent within the time frame sought by the German authorities, and Twitter just 1 percent. Google’s YouTube video service fared best, taking down 90 percent of all flagged content within 24 hours.
Since September, the figure for Facebook has fallen by seven percentage points, while Twitter’s takedown rate has not changed. The issue has taken on new urgency as Germany gears up for parliamentary elections in September.
Tech companies deny playing fast and loose with national hate speech laws, saying they have taken down illegal material when it has been flagged by users. They also argue, however, that there is a fine line between complying with a country’s rules and outright censorship.
“We are doing far more than any other company to try and get on top of hate speech on our platform,” Richard Allan, Facebook’s head of public policy in Europe, said in an interview late last year. “We recognize that this is a work in progress.”
Germany, where it is illegal to promote Nazi ideology or to deny the Holocaust, has been at the center of the debate about what can be published on social media platforms and who is responsible for such content.
Many Germans — among the most engaged users of these services — also remain deeply wary of how much information American tech companies routinely collect about their online activities. Facebook and Google have run into problems with local lawmakers over what can be disseminated on their social networks and on video sites like YouTube.
In response to this criticism and to a recent tide of hate speech targeting new refugees in Germany, many tech companies agreed to work with the country’s officials in 2015 to remove xenophobic and racist messages from their digital platforms.
“We are disappointed by the results,” Klaus Gorny, a Facebook spokesman, said in a statement on Tuesday referring to the German government study. “We have clear rules against hate speech and work hard to keep it off our platform.”
Al Verney, a YouTube spokesman, said that the company was analyzing the proposed legislation and that the video service’s procedures for removing content were robust. Twitter declined to comment on Mr. Maas’s proposal.
The German criticism over how social media companies handle hate speech and other illicit content online is part of a wider global pushback.
In December, companies including Facebook, Google and Microsoft announced that they were teaming up to fight the spread of terrorist propaganda on the web by sharing technology and information. They have also agreed to a voluntary code of conduct in Europe to fight the spread of hate speech online.
Many of these companies have also been accused of not doing enough to tackle the spread of fake news, which became endemic on social media before the presidential election in the United States in November. With a series of national elections in Europe this year, Facebook and Google have said they will clamp down on false reports shared on their platforms.
“Trust is one of the main assets that social media has,” Andrus Ansip, a European Commission vice president in charge of the region’s digital agenda, said in an interview last month. “If people can’t trust these channels, then they will stop using these platforms.”