Tech Companies Are Deleting Evidence of War Crimes
Algorithms that take down “terrorist” videos could hamstring efforts to bring human-rights abusers to justice.

Facebook took down the bloody video of the killings, whose source has yet to be conclusively determined, shortly after it surfaced in July 2017. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi, Libya. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents, from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.
Shortly after the Werfalli arrest warrant was issued, Hadi Al Khatib, a Syrian-born open-source investigator based in Berlin, noticed something that distressed him: User-generated videos documenting firsthand accounts of the war in Syria were vanishing from the internet by the thousands. Khatib is the founder of the Syrian Archive, a collective of activists that, since 2014, has been scouring the web for digital materials posted by people left behind in Syria’s war zone. The Syrian Archive’s aim is “to build a kind of visual documentation relating to human-rights violations and other crimes committed by all sides during the eight-year-old conflict,” Khatib said in an interview.
In the late summer of 2017, Khatib and his colleagues were systematically building a case against the regime of Bashar al-Assad in much the same way ICC investigators had pursued Werfalli. They had amassed scores of citizen accounts, including videos and photos that purportedly showed Assad’s forces targeting hospitals and medical clinics in bombing campaigns. “We were collecting, archiving, and geolocating evidence, doing all sorts of verification for the case,” Khatib recalled. “Then one day we noticed that all the videos that we had been going through, all of a sudden, all of them were gone.”
It wasn’t a sophisticated hack by pro-Assad forces that wiped out their work. It was machine-learning algorithms deployed by the social networks themselves, particularly YouTube and Facebook, operating with ruthless efficiency.
With some reluctance, technology companies in Silicon Valley have taken on the role of prosecutors, judges, and juries in decisions about which words and images should be banished from the public’s sight. Lately, tech companies have become almost as skilled at muzzling speech as they are at enabling it. This hasn’t gone unnoticed by government entities that are keen to transform social networks into listening posts. Government, in effect, is “subcontracting” social-media platforms to be its eyes and ears on all kinds of content it deems objectionable, says Fionnuala Ní Aoláin, a law professor and special rapporteur for the United Nations Human Rights Council.
But some of what governments ask tech companies to do, such as suppressing violent content, cuts against other legitimate goals, such as bringing warlords and dictators to justice. Balancing these priorities is hard enough when humans are making judgments in accordance with established legal norms. In contrast, tech giants operate largely in the dark. They are governed by opaque terms-of-service policies that, more and more, are enforced by artificial-intelligence tools developed in-house with little to no input from the public. “We don’t even know what goes into the algorithms, what kind of in-built biases and structures there are,” Ní Aoláin said in an interview.
Designed to identify and take down content posted by “extremists,” as defined by software engineers, machine-learning software has become a potent catch-and-kill tool that has made the world’s largest social networks far more sanitized places than they were just a year ago. Google and Facebook break out the numbers in their quarterly transparency reports. YouTube pulled 33 million videos off its network in 2018, roughly 90,000 a day. Of the videos removed after automated systems flagged them, 73 percent were taken down so fast that no community members ever saw them. Meanwhile, Facebook removed 15 million pieces of content it deemed “terrorist propaganda” from October 2017 to September 2018. In the third quarter of 2018, machines performed 99.5 percent of Facebook’s “terrorist content” takedowns; just 0.5 percent of the purged material was reported by users first.
Those statistics are deeply troubling to open-source investigators, who complain that the machine-learning tools are black boxes. Few people, if any, in the human-rights world know how they’re programmed. Are these AI-powered vacuum cleaners able to discern that a video from Syria, Yemen, or Libya might be a valuable piece of evidence, something someone risked his or her life to post, and therefore worth preserving? YouTube, for one, says it’s working with human-rights experts to fine-tune its takedown procedures. But deeper discussions about the technology involved are rare.
“Companies are very loath to let civil society talk directly to engineers,” says Dia Kayyali, a technology-advocacy program manager at Witness, a human-rights organization that works with Khatib and the Syrian Archive. “It’s something that I’ve pushed for. A lot.”
These concerns are being drowned out by a counterargument, this one from governments, that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are also pressing social networks for round-the-clock controls in an effort to protect impressionable minds from violent content that could radicalize them; if the platforms fail to comply, they could face hefty fines, and their executives could even face jail time. After the March 15 mosque massacre in Christchurch, New Zealand, was streamed live on Facebook, countries including New Zealand, Australia, and the United Kingdom passed or proposed sweeping new online-terror laws. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron intend to up the ante at a summit next week, calling on tech executives and world leaders to band together to eliminate the publication of extremist content online.
Human-rights advocates worry about the decisions tech giants and their algorithms will make under such outside pressure. “The danger is that governments will often get the balance wrong,” argued Ní Aoláin. “But actually we have the methods and means to challenge governments when they do so. But private entities? We don’t have the legal processes. These are private companies. And the legal basis upon which they regulate their relationships with their users, whether they’re in conflict zones or not, is determined by [the company’s] terms of service. It’s neither transparent nor fair. Your recourse is quite limited.”
In July, she wrote an open letter to Facebook’s founder, Mark Zuckerberg, finding fault with how Facebook defines terrorism-related content, a key determination in what it decides to flag and take down. From what Ní Aoláin can tell, “they just came up with a definition for terrorism that bears no relationship to the global definition agreed by states, which I think is a very dangerous precedent. I made that very clear in my communications with them.”
When I asked Facebook to comment on Ní Aoláin’s complaint, a company spokesperson shared detailed minutes from a December content-standards forum. The minutes are a remarkable document, one that underscores the complexity of the judgments tech companies are being asked to make as they seek to monetize human interactions on a global scale. Is a terrorist organization one that “engages in premeditated acts of violence against persons or property,” or should the definition expand to include any non-state group that “engages in or advocates and lends substantial support” to “purposive and planned acts of violence”? “It would shock me,” one person at the meeting commented, “if in a year we don’t come back and say we need to refine this definition again.” (A company spokesperson said recently that there’s no update on the matter to announce.)
How the tech giants’ algorithms will implement these subtle standards is an open question. But a new crop of anti-terrorism bills, post-Christchurch, will thrust technology companies into an even more assertive enforcement role. Under the threat of massive fines, tech giants are likely to invest more in aggressive machine-learning content filters to suppress potentially objectionable material. All this will have a chilling effect on those who are trying to expose wrongdoing in war zones.
“On the ground in Syria,” Khatib said, “Assad is doing everything he can to make sure the physical evidence [of potential human-rights violations] is destroyed, and the digital evidence, too. The combination of all this—the filters, the machine-learning algorithms, and new laws—will make it harder for us to document what’s happening in closed societies.” That, he fears, is what dictators want.
Source: https://www.theatlantic.com/ideas/archive/2019/05/facebook-algorithms-are-making-it-harder/588931/