Artificial intelligence reveals hate and extremism online

Antidemocratic forces have increasingly moved online to organise hate crimes. Until now, many of these perpetrators of violence have avoided detection, but researchers at Uppsala University will soon be launching a tool to identify threats before they are translated into violence.

The internet has opened the door to an entirely new world of communication and democracy. But as the old walls have been torn down, we have seen hate and terror gain ground online. After the attack at London Bridge in June 2017, the then British Prime Minister, Theresa May, called for new methods to stop extremist ideologies from spreading through social media. Three years later, researchers at Uppsala University have developed Dechefr, a digital system that can help police and security services analyse written communication. The ambition is to reveal hate, radicalised thinking and violent intentions before they are put into action.

Nazar Akrami, Uppsala University

“We have fed the tool with over 50,000 randomly chosen texts along with a large number of documents created by perpetrators of violence and terrorists. Our tool uses algorithms to scan entire websites and identify potential threat scenarios with a high degree of accuracy. Several tests remain to be done, but we expect it to be ready by this summer,” says Nazar Akrami, a researcher in psychology.

Among the destructive forces growing quickly are various extremist organisations, which have gone from handing out flyers and staging isolated actions in market squares to coordinating on online platforms where they encourage one another to act. Until now, monitoring these antidemocratic movements has been done manually and has required extensive resources. The possibility of switching to automated assessments with Dechefr has already attracted widespread interest, both in Sweden and internationally.

“Nationally, we have cooperated with the Swedish police, and our next step is a test run of the system with the help of around thirty organisations. Internationally, we have presented our research to Europol, to many national security services and, most recently, to the Finnish police, who are interested in translating the tool into their own language,” says Nazar Akrami.

Dechefr uses artificial intelligence to analyse suspect texts and identify threatening behaviours and risk factors in the writer. The results can be used to make risk and hazard assessments of digital communication.
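
Dechefr's inner workings are not public, but the general approach described above, training on labelled texts and reading the model's output as a risk estimate, can be sketched in a few lines of Python using scikit-learn. Everything in the example, including the toy data, is invented for illustration and says nothing about Dechefr's actual features, training corpus or thresholds.

    # Illustrative sketch only: Dechefr's actual models and training data are
    # not public. This toy example shows the general idea of training a text
    # classifier on labelled examples and reading its probability output as a
    # rough risk score for an analyst to review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy data: 1 = written with violent intent, 0 = neutral.
    texts = [
        "we will make them pay on the day of the march",
        "looking forward to the farmers market this weekend",
        "the target deserves what is coming, i have the address",
        "does anyone know a good recipe for sourdough bread",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word and word-pair features
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)

    # The probability of the "violent intent" class is used as a crude risk
    # score for prioritising texts for closer human examination.
    new_text = ["they will regret it when we show up at the square"]
    risk = model.predict_proba(new_text)[0][1]
    print(f"estimated risk score: {risk:.2f}")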

“When attacks or crimes are planned, something always leaks out, particularly among lone wolves. This could be the choice of words, psychological markers or the naming of people and places: quite simply a collection of variables that together make the tool nearly impossible to trick. We don’t claim the results should be understood as the absolute truth, but they give analysts a basis for closer examination and, potentially, for taking action.”
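
The combination of signals Akrami describes, word choice, references to people and places, and other psychological markers, can be pictured as a set of simple counts weighed together into one overall level. The sketch below is hypothetical throughout: the word lists, weights and "elevated" threshold are invented for illustration and do not reflect Dechefr's real indicators.

    # Purely hypothetical illustration: the lexicons, weights and threshold
    # below are invented and do not reflect Dechefr's real indicators.
    import re

    THREAT_WORDS = {"attack", "revenge", "destroy", "pay"}  # assumed word-choice markers
    WATCHED_PLACES = {"parliament", "bridge", "square"}     # assumed place references

    def extract_indicators(text: str) -> dict:
        """Count a few simple markers in a text (toy example)."""
        tokens = re.findall(r"[a-zåäö']+", text.lower())
        return {
            "threat_words": sum(t in THREAT_WORDS for t in tokens),
            "named_places": sum(t in WATCHED_PLACES for t in tokens),
            # crude stand-in for a psychological marker of personal involvement
            "first_person": sum(t in {"i", "my", "me"} for t in tokens),
        }

    def overall_assessment(indicators: dict) -> str:
        """Fold the individual counts into one coarse threat level."""
        score = (2 * indicators["threat_words"]
                 + indicators["named_places"]
                 + indicators["first_person"])
        return "elevated" if score >= 3 else "low"

    sample = "I will attack the parliament and they know my reasons"
    marks = extract_indicators(sample)
    print(marks, "->", overall_assessment(marks))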

Dechefr is being released at a time when most people are online. Terrorism can be mobilised quickly across national borders, and in her 2017 speech Theresa May encouraged democratic states to sign international agreements aimed at regulating the internet. This is a sensitive subject, and no such pacts have been signed to date. Nazar Akrami emphasises the importance of keeping a wide margin to both legal and ethical boundaries.

“We have designed the tool so that both unauthorised use and indiscriminate, large-scale scanning of the internet are impossible. If everything goes as planned, the system will be administered by analysts within the police and security services, and I am completely confident that Dechefr will provide exactly the support in the fight against violent extremism that we have planned.”

Facts

  • Dechefr is a tool for text analysis that combines machine learning, analysis of language markers, the psychology of deviant and violent behaviour, and statistics.
  • Dechefr helps identify threats of violence in texts. The tool assesses threat levels based on several key indicators and generates an overall assessment.
  • The working group developing Dechefr includes:
    Nazar Akrami, Department of Psychology, Uppsala University
    Katie Cohen, Swedish Defence Research Agency (FOI)
    Lisa Kaati, Department of Information Technology, Uppsala University, and FOI
    Amendra Shrestha, Department of Information Technology, Uppsala University
