Why the tech giants can no longer regulate themselves

The government’s recognition of the need to limit “online harms” is an important first step.

In 1996, the American “cyber-libertarian” John Perry Barlow wrote the “Declaration of the Independence of Cyberspace”, in which he called on governments worldwide to “leave us alone”. The internet was then a new space, free from the shackles of the real world, and politicians (and their regulations) were “not welcome among us”.

This ideology has been remarkably successful. The mindset it helped create has pervaded the culture of Silicon Valley, where CEOs are not mere managers, but leaders of a new world with a vision for a better society. It’s not about the product, it’s about the mission. And in the great mission, interference from pesky elected officials is still most unwelcome.
But with great power should come great regulation. For too long, society has taken a laissez-faire approach to the growing power of the technology giants, depending on the benevolence of Silicon Valley CEOs rather than attempting to subject them to meaningful democratic control.

This problem is much worse outside of the US, as nation states lack the ability to enforce domestic laws on these international monoliths. Under pressure from their citizens to take action on cyber-bullying, hate speech, extremist content and more, governments across the world are starting to act. Among them is the UK government, which has today proposed a raft of new legislation to combat so-called “online harms”. If successful, these plans would mark a significant break with the status quo and could entirely reshape how we approach the internet.

I, for one, would welcome it. The centuries-long challenges of racism, sexism, and violence cannot be left to the suits in California to combat alone. They are not the arbiters of morality, nor should they be. Under their watch, hate-peddlers have been given a platform beyond their wildest dreams, paedophiles have been given unfettered access to children, and violent extremists have been able to download best-practice guidebooks.
They have enabled great things, too. We now have the entirety of the world’s knowledge accessible in the palm of our hands. Couples and families have been created by apps that bring people together. But for every smile induced by social media there are tears, too.
The government’s proposals to make tech bosses liable for harmful content on their platforms, and to mandate Ofcom to oversee actions against online harms, are in my view measured responses to the problem. The approach looks a lot like the regulation applied to the finance sector, and the lessons learned there could be very useful.

However, social media isn’t finance, and users are not employees to be trained or controlled. It may help focus minds within the offices of Facebook and Google, but will it make a substantial difference? In five years’ time, will bullying and racism online be diminished as a result of these proposals? Will MPs continue to be on the receiving end of abuse?
The truth is that there is no algorithm which can defeat racism, no mute button which will end misogyny, and no filter which can clamp down on extremism. Without fundamental changes to how we operate online, we are likely to see only minimal impact. The debates which Ofcom now needs to lead are around anonymity online, the limitations of free speech, and the ills of pay-per-click advertising. Beyond this, we need to consider truly effective methods of enforcement. Are fines meaningful against the rich? Is prosecuting one individual as effective as holding to account an entire organisation?

While regulation is important, small tweaks can only ever hope to mask the problem and provide short-term fixes. Muting a tweet does not eradicate the offensive sentiments it contains, much less the motives of the person who wrote it. The root cause of many of these “harms” exists in the offline world. And it is on the real world that we must focus our efforts, using preventative measures such as investment in digital literacy, support for anti-discrimination campaigns, and funding for the police. This will require more than fines; it will require taxation. A “civil internet tax” on social media platforms, with its revenues ringfenced for spending on offline initiatives that reduce the prevalence of online harms, would be one way for the state to intervene effectively.

The challenge now is for regulators and lawmakers to ensure that this break from the era of self-regulation really is a break. What comes next should be a settlement in which tech giants recognise their place in society as subservient to democracy and the rule of law, and in which we, as a society, are clear about what is and isn’t acceptable behaviour online. We cannot simply enforce sanctions on companies without having a clear view of what the online experience should be. Accepting the need for regulation is only the first step in this journey.


Source: https://www.newstatesman.com/science-tech/2020/02/why-tech-giants-can-no-longer-regulate-themselves
