Social media is lauded for its ability to disseminate information, fuel social change and protest movements, influence the powerful, and empower the marginalised. It’s fast becoming an essential component of NGO activity for all these reasons – but despite the praise, social media has a dark side.
A recent MRG Council seminar focused on the topic of hate speech. Speakers were invited from Article 19 and Index on Censorship, inter-faith organisation Faith Matters, the Holocaust Memorial Day Trust, and Gypsy-Traveller support organisation Families and Friends of Travellers.
All participants highlighted the increasing role of social media in propagating hatred. Fiyaz Mughal, Director of Faith Matters, spoke about the shocking level of Islamophobia on Twitter in the UK. Matt Harris, of Index on Censorship, called social media a ‘game changer’ in terms of hate speech; reports from Pakistan, Saudi Arabia and the United States confirm that the problem is international.
Sticks and stones? The impact of online hate
Online hate has many effects: it causes untold suffering to individuals, worsens inter-community relations and increases marginalisation. Faith Matters and other organisations point to its effect on ‘cumulative extremism’. More worryingly, social media can also incite real physical violence against minorities.
Hate speech on social media aggravated tensions during Kenya’s post-election turmoil in 2007, and provoked inter-ethnic riots between Macedonians and Albanians in Skopje in 2011. Online speech activity also fuelled the violence in Burma’s Rakhine state across 2012 and 2013.
In Sri Lanka, vitriolic Islamophobic campaigns on social networks accompanied the recent attacks against the country’s Muslim population. Social media activity was also implicated in the communal violence between Hindus and Muslims in India earlier this year.
The ‘wild web’: applying the law to the digital world
How can we combat online hate? The first approach is legislation. Hate speech laws exist in many countries; Article 20(2) of the International Covenant on Civil and Political Rights obliges states to prohibit hate speech when it amounts to “incitement to discrimination, hostility or violence”.
The internet is no longer the ‘lawless wasteland’ that it once was – individuals can be held accountable. Nevertheless, it’s exceptionally difficult to apply ‘real-world’ legal criteria online. Individuals can conceal their identity; blocked material can easily be hosted elsewhere; and the viral nature of content makes it hard to track and trace. Hate speech laws vary widely across countries, and aren’t uniformly applied; hardly ideal given the transnational nature of the web.
Media self-regulation: part of the problem or part of the solution?
Another option is for internet service providers (ISPs) and social media platforms to police hateful content themselves. Progress here is uneven. ISPs have the power to block entire domain names, though the process is lengthy and unwieldy. Facebook and Google signed a recent pact to combat internet hate, which Twitter notably opted out of. Last year, Jewish students in France had to pursue Twitter through the courts in order to block an anti-Semitic group.
Online operators need to work harder. More effective means of enforcement, enabling users to flag hate speech, and lifting or easing anonymity policies would be positive steps forward. There’s a notable absence of industry-wide initiatives focusing on this issue. Even so, media regulation doesn’t exist in a vacuum. Ordinary law provides a backdrop – and for hate speech, ordinary law is a shaky foundation.
Freedom of speech vs protection of minorities
Any prohibition of hate speech – either through law or self-regulation – can also easily run afoul of freedom of expression. Aside from the obvious ethical issues, a UNESCO report emphasises the dangers of pushing ‘unwanted’ opinions underground, making them impossible to counter. Matt Harris voiced fears that censoring content can give undue credibility to extreme individuals and groups.
Who watches the watchmen?
There are even more sinister concerns – legal and censorship powers can easily be abused. As Index on Censorship reports, attempts to tackle social media hate speech in India have been marred by politically motivated arrests and removal of anti-government material.
Such powers can also be used to persecute already marginalised groups. One speaker at the seminar highlighted the suppression of pro-Rohingya articles in Burma’s press under hate speech laws. Other examples include Roma in the Czech Republic who have been prosecuted under defamation laws.
An alternative solution: counter speech
Almost all the speakers at the seminar agreed that any prohibitive measures should only be part of a larger response. They, and many others, advocate for an alternative – counter speech.
Counter speech means raising awareness, improving education and building the capacity to speak out against hate speech. NGOs and campaigns like the No Hate Speech Movement and the Stop Racism and Hate Campaign are working towards these ends. Chris Whitwell of Families, Friends and Travellers talked about his own organisation’s efforts to challenge misconceptions and stereotypes about Roma and Gypsies in the media.
Counter speech has its own problems. Marginalised groups often lack the capacity or motivation to engage in such activity, and counter speech is inadequate in highly volatile situations. Nevertheless, in the long term this approach is essential to dismantle the culture of permissibility that allows hate speech to thrive online.
How and when should we combat online hate speech? Whose responsibility is it to police this sort of behaviour, and when should we prohibit it outright? Can social media play a positive role? There are no easy answers to these questions – but they are questions that must be asked. As noted by Emma Eastwood in an earlier blog post, the events of Rwanda in 1994 offer a harrowing example of what can happen when the media becomes an unchecked platform for hate.
Image credit: European Commission DG ECHO