Facebook's AI war on terror revealed: Software is learning to spot and shut down terror groups online


  • Uses image matching and language understanding to find and remove content
  • Is rolling out the system across all platforms including WhatsApp and Instagram
Facebook has offered new insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.
Facebook has ramped up use of artificial intelligence such as image matching and language understanding to identify and remove content quickly, Monika Bickert, Facebook's director of global policy management, and Brian Fishman, counterterrorism policy manager, explained in a blog post.
Facebook uses artificial intelligence for image matching that allows the company to see if a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates, the company said in the blog post.
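Facebook has not published its matching algorithm, but the general idea behind this kind of image matching can be illustrated with a simple perceptual "average hash": visually similar images produce nearly identical bit strings, so a new upload can be compared against a database of hashes of known terrorist content. The function names and the distance threshold below are purely illustrative.

```python
# Illustrative sketch only: Facebook's actual matching system is unpublished.
# An "average hash" turns an image into a short bit string; near-duplicate
# images differ in only a few bits, so matching reduces to Hamming distance.

def average_hash(pixels):
    """Compute a perceptual hash from an 8x8 grayscale pixel grid (values 0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's average.
    return tuple(1 if p > avg else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_content(upload_hash, known_hashes, threshold=5):
    """Flag an upload whose hash is within `threshold` bits of a known hash."""
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)
```

Because the hash depends only on each pixel's relationship to the image average, small uniform changes in brightness or compression noise leave it almost unchanged, which is what lets re-uploads of the same video frame be caught.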
'Our stance is simple: There’s no place on Facebook for terrorism,' the post says. 
'We remove terrorists and posts that support terrorism whenever we become aware of them. 
'When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. 
'Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role — and we don’t want Facebook to be used for any terrorist activity whatsoever.
'We believe technology, and Facebook, can be part of the solution.'
YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
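The format and hashing scheme of that shared industry database are not public; the sketch below uses a plain SHA-256 digest purely to illustrate the mechanism, in which one member platform contributes a fingerprint of flagged media and the others can then recognise the identical file. The class and method names are hypothetical.

```python
# Hypothetical sketch of a shared content-fingerprint database.
# The real consortium database's design is unpublished; SHA-256 is used
# here only to show the contribute-then-lookup pattern.
import hashlib

class SharedHashDatabase:
    def __init__(self):
        self._fingerprints = {}  # digest -> platform that contributed it

    @staticmethod
    def fingerprint(media_bytes):
        return hashlib.sha256(media_bytes).hexdigest()

    def contribute(self, media_bytes, platform):
        """A member platform adds a fingerprint of content it has removed."""
        self._fingerprints[self.fingerprint(media_bytes)] = platform

    def is_known(self, media_bytes):
        """Any member can check an upload against all contributed fingerprints."""
        return self.fingerprint(media_bytes) in self._fingerprints
```

Note that an exact cryptographic hash like this only catches byte-identical copies; matching re-encoded or cropped versions requires perceptual hashing of the kind sketched earlier.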

HOW FACEBOOK SPOTS TERROR

Facebook's AI is using several techniques to try to spot terror groups online.
Image matching: When someone tries to upload a terrorist photo or video, systems look for whether the image matches a known terrorism photo or video.
Language understanding: AI is learning to understand text that might be advocating for terrorism. 'We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda,' Facebook says. The machine learning algorithms work on a feedback loop and get better over time.
Removing terrorist clusters: When the system identifies Pages, groups, posts or profiles as supporting terrorism, it also uses algorithms to “fan out” to try to identify related material that may also support terrorism. It uses signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
Recidivism: 'We’ve also gotten much faster at detecting new fake accounts created by repeat offenders,' Facebook says.
Cross-platform collaboration: Facebook is working on systems to enable it to take action against terrorist accounts across all of its platforms, including WhatsApp and Instagram.
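The 'fan out' idea described in the box above can be sketched in a few lines: starting from accounts already disabled for terrorism, score other accounts by signals such as how many of their friends have been disabled. This is a minimal sketch under assumed data structures; the real signals, thresholds and names are not public.

```python
# Hedged sketch of the "fan out" cluster signal. All names and the
# threshold are illustrative; Facebook's actual signals are unpublished.

def fan_out(friends, disabled, min_disabled_friends=2):
    """Return accounts whose friend lists suggest ties to disabled accounts.

    friends:  dict mapping account -> set of friend accounts
    disabled: set of accounts already disabled for terrorism
    """
    flagged = set()
    for account, friend_set in friends.items():
        if account in disabled:
            continue  # already actioned
        if len(friend_set & disabled) >= min_disabled_friends:
            flagged.add(account)
    return flagged
```

In practice a flagged account would feed into human review rather than being disabled automatically, consistent with the article's point that algorithms still lag people at judging context.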
Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organizations to develop text-based signals for such propaganda.
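One simple way to see how removed text can yield 'text-based signals' is a toy bag-of-words classifier: posts previously removed for praising militant groups form one class, benign posts the other, and new text is scored by which class its words resemble. This stands in for Facebook's unpublished (and far more sophisticated) models; the sample phrases and function names are invented for illustration.

```python
# Illustrative only: a toy log-likelihood-ratio text scorer trained on
# previously removed posts, standing in for Facebook's unpublished signals.
import math
from collections import Counter

def train(removed_posts, benign_posts):
    """Build per-class word counts from labelled example posts."""
    def counts(posts):
        c = Counter()
        for post in posts:
            c.update(post.lower().split())
        return c
    return counts(removed_posts), counts(benign_posts)

def propaganda_score(text, removed_counts, benign_counts):
    """Positive score means the text looks more like removed content."""
    r_total = sum(removed_counts.values())
    b_total = sum(benign_counts.values())
    score = 0.0
    for word in text.lower().split():
        # Add-one smoothing so unseen words don't zero out the ratio.
        p_removed = (removed_counts[word] + 1) / (r_total + 1)
        p_benign = (benign_counts[word] + 1) / (b_total + 1)
        score += math.log(p_removed / p_benign)
    return score
```

The feedback loop the article mentions corresponds to retraining these counts as human reviewers confirm or reject the model's flags.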
However, the firm says it has not abandoned humans, and is using them more than ever.
'AI can’t catch everything. 
'Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. To understand more nuanced cases, we need human expertise.'
'More than half the accounts we remove for terrorism are accounts we find ourselves. That is something that we want to let our community know, so they understand we are really committed to making Facebook a hostile environment for terrorists,' Bickert said in a telephone interview.
Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other social media sites such as Google and Twitter to do more to remove militant content and hate speech.
Government officials have threatened to fine the company and strip the broad legal protections it enjoys against liability for the content posted by its users.
Asked why Facebook was opening up now about policies that it had long declined to discuss, Bickert said recent attacks were naturally starting conversations among people about what they could do to stand up to militancy.
In addition, she said, 'we're talking about this because we are seeing this technology really start to become an important part of how we try to find this content.'

Source http://www.dailymail.co.uk/sciencetech/article-4608412/Facebook-discloses-new-details-removing-terrorism-content.html
