This year seems to have started off in much the same way as 2016 ended. Celebrities, politicians, and everyday people have flocked to social media to provide their commentary on everything from global crises to envelope sagas.

Towards the end of 2016, Twitter announced that no user is above its policies, particularly in respect of hate speech, and threatened to remove Donald Trump’s verified account if the President continued to violate them. But what exactly do the Twitter policies say?

The rules

In terms of the Twitter rules, the following conduct amounts to what the company calls “Abusive Behaviour”:

  • Direct or indirect violent threats, or threats to promote violence or terrorism;
  • Targeted abuse or harassment of others; and
  • Hateful conduct, such as attacking or threatening someone on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.

What are the consequences of posting content that amounts to abusive behaviour?

Not following Twitter’s rules can lead to various consequences, depending on the severity of the violation, including:

  • A temporary lock on the user’s account until the abusive content is removed; or
  • A permanent suspension of the user’s account (usually enforced where a user has been reported for multiple violations of the rules).

What else are countries doing to stop online racism and hate speech?

In addition to existing laws that seek to curb racism on any platform, such as dedicated equality legislation and the rights to dignity and equality entrenched in modern constitutions, many countries are starting to put in place legislation aimed specifically at online racism and hate speech.

In South Africa, for example, the South African Human Rights Commission held hearings in February 2017 with various stakeholders (including freedom of expression activists) to discuss the country’s way forward in addressing online racism, especially on social media. These hearings follow the publication of the Prevention and Combating of Hate Crimes and Hate Speech Bill in 2016, which specifically covers hate speech communicated through electronic channels.

Germany recently announced that it plans to impose hefty fines on companies (specifically social media platforms) that do not take active steps to curb hate speech. There are also indications that Germany will soon promulgate a law to this effect.

Both of these countries have, of course, experienced gross injustices along racial lines in their past, and it is no coincidence that they are taking positive steps to curb racism online.

The European Commission recently announced that it will be placing increasing pressure on social media companies (which are predominantly based in the USA) to amend their terms and conditions for European users so that they comply with EU consumer law. The term that has come under the most scrutiny is the one requiring users to enforce their rights in the state of California (where most social media companies are based) rather than in their own country of residence.

What steps is Twitter taking while balancing users’ rights to expression?

Twitter’s policy states that “Freedom of expression means little if voices are silenced because people are afraid to speak up. We do not tolerate behaviour that harasses, intimidates, or uses fear to silence another person’s voice.”

In addition to the above statement, Twitter has, over the past 18 months, made changes to the way users can regulate the content in their feeds, allowing them to block and report multiple accounts, and to mute selected words so that topics using those words are eliminated from their notifications. More information on how to customise your content can be found here.

This does not stop online racism from occurring, but it is a step towards giving users easier control over what topics come into their view.

The responsible use of social media

We previously wrote about the High Court of South Africa placing a positive obligation on users to take steps to remove defamatory content contained in posts in which they were tagged. The same approach could be followed for offensive tweets that tag you. Users should therefore be mindful of even retweeting posts containing hate speech or offensive material.

Before engaging in any commentary on social media, users should keep in mind that a post can have long-lasting implications even once it is removed. Although removing the content may be sufficient to have a locked account reinstated, it may not be enough to escape legal consequences such as a court finding of defamation, hate speech, or an infringement of human rights.

For employers, it is recommended that appropriate social media policies be put in place to regulate how employees may use social media, clearly explaining the consequences of breaching those policies. These consequences can be far-reaching: employees should be aware that what they post ‘off the clock’ can still affect their employer’s reputation and could lead to a disciplinary hearing and, perhaps, termination of employment.