Twitter and hate speech policy

This year seems to have started off in much the same way as 2016 ended. Celebrities, politicians, and everyday people have flocked to social media to provide their commentary on everything from global crises to envelope sagas.

Towards the end of 2016, Twitter announced that no person is above their policies, particularly in respect of hate speech, and threatened to remove Donald Trump’s verified account if the President continued to violate them. But what exactly do the Twitter policies say?

The rules

In terms of the Twitter rules, the following amounts to what the company calls “Abusive Behaviour”:

  • Direct or indirect violent threats, or threats to promote violence or terrorism;
  • Targeted abuse or harassment of others; and
  • Hateful conduct, such as attacking or threatening someone on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.

What are the consequences of posting content that amounts to abusive behaviour?

Breaking Twitter’s rules can lead to various consequences depending on the severity of the violation, including:

  • A temporary locking of a user’s account until the abusive content is removed; or
  • A permanent suspension of the user’s account (usually enforced where a user has been reported for multiple violations of the rules).


What else are countries doing to stop online racism and hate speech?

In addition to existing laws that seek to curb racism on any platform, such as specific equality legislation and the rights to dignity and equality entrenched in modern constitutions, many countries are starting to enact legislation directed specifically at online racism and hate speech.

In South Africa, for example, hearings were held in February 2017 by the South African Human Rights Commission and various stakeholders (including freedom of expression activists) to discuss the country’s way forward in addressing online racism, especially on social media. These hearings follow the publication of the Prevention and Combating of Hate Crimes and Hate Speech Bill in 2016, which specifically covers hate speech on electronic channels.

Germany recently announced that it plans to issue hefty fines to companies, specifically social media platforms, that do not take active steps to curb hate speech. There are also indications that Germany will soon promulgate a law to this effect.

Both these countries have, of course, experienced gross injustices along racial lines in their past, and it is no coincidence that they are taking positive steps to curb instances of racism online.

The European Commission recently announced that it will place increasing pressure on social media companies (which are predominantly based in the USA) to amend their terms and conditions for European users so that they comply with EU consumer law. The requirement that has come under the most scrutiny is the one obliging users to enforce their rights in the state of California (where most social media companies are based) rather than in their own country of residence.

What steps is Twitter taking while balancing users’ rights to expression?

Twitter’s policy states that “Freedom of expression means little if voices are silenced because people are afraid to speak up. We do not tolerate behaviour that harasses, intimidates, or uses fear to silence another person’s voice.”

In addition to the above statement, Twitter has, over the past 18 months, changed the way users can regulate the content of their feeds, allowing them to block and report multiple accounts, and to mute selected words so that topics using those words are eliminated from their notifications.

This does not stop online racism from occurring, but it is a step towards giving users easier control over which topics come into their view.

The responsible use of social media

We previously wrote about the High Court of South Africa placing a positive obligation on users to take steps to remove defamatory content contained in posts in which they were tagged. The same approach could be followed for offensive tweets that tag you. Users should therefore be mindful of even retweeting posts containing hate speech or offensive material.

Users should always keep in mind, before engaging in any commentary on social media, that a post can have long-lasting implications even once it is removed. Although removing the content may be sufficient to reinstate a blocked account, it may not be enough to escape legal consequences such as a finding of defamation, hate speech, or a transgression of human rights in court.

For employers, it is recommended that appropriate social media policies be put in place to regulate how employees are permitted to use social media, clearly explaining the consequences of breaching those policies. These consequences can be far-reaching: employees should be aware that what they post during their ‘off the clock’ time can still affect their employer’s reputation and could lead to a disciplinary hearing and, perhaps, termination of employment.

Social Media, Age, and the Entertainment Industry

Can a state law prevent a social media site from publicly posting accurate age information about individuals in the entertainment industry—even if that information is posted by users? The California legislature and Governor believed it was permissible, and the legislation went into effect on September 24, 2016 (Cal. AB 1687, adding Cal. Civ. Code § 1798.83.5). Five months later, a federal judge temporarily enjoined the government from enforcing that law in IMDb.com Inc. v. Becerra, No. 16-cv-06535-VC (N.D. Cal. Feb. 22, 2017).

UK Government seeks to tackle the “fake news” problem

In the past, concerns regarding news focussed on traditional media (typically newspapers and broadcasters) and the role they played in curating and controlling public information and sentiment. In recent months, the focus has shifted to the distribution of news on the internet and social media and the so-called ‘fake news’ problem.

Industrial Designs: Protecting Graphical User Interfaces – A Primer for Social Media Entrepreneurs

This post is directed to entrepreneurs and developers who are building platforms incorporating features of social media networks, or building their own social media technologies, and addresses design protection requirements in Canada. The Canadian Industrial Design Office has very recently issued several practice notices providing guidance on designs including colour and animated graphical user interfaces (GUIs), among others.

Facebook’s California Choice-of-Law Provision Rules the Day

On January 9, 2017, the Northern District of California granted Facebook’s motion to dismiss claims brought under New Jersey’s Truth-in-Consumer Contract, Warranty, and Notice Act (“the TCCWNA”). In Palomino v. Facebook, Inc., a putative class of New Jersey residents challenged Facebook’s Terms of Service, which, among other provisions, require users to waive potential claims for misconduct such as deceptive and fraudulent practices. The plaintiffs argued that this violated two provisions of the TCCWNA that prohibit such waivers. The case was resolved before reaching the merits.

NLRB Reviews and Approves Northwestern University’s Revised Football Handbook Social Media Policy

On January 1, 2017, the National Labor Relations Board (“NLRB”) released an advice memorandum (dated September 22, 2016) that reviewed and approved Northwestern University’s revised Football Handbook’s social media policy. The NLRB Office of the General Counsel, which prepared the advice memorandum, was asked to advise whether the university’s Football Handbook policies, including its social media policy, were lawful.

Tort claims may be adapting to a world of social media

On January 18, 2017, the United States District Court for the Southern District of New York ruled on a defendant’s motion to dismiss replevin, conversion, and trespass claims related to the misuse of various domain names and social media accounts. Salonclick LLC d/b/a Min New York, 16 Civ. 2555 (KMW), 2017 WL 239379 (S.D.N.Y. Jan. 18, 2017).

The plaintiff in the case (“Plaintiff”) operated a business that manufactured and sold a variety of grooming products, including hair and skin care products. The Plaintiff used various domain names and tag lines in its business, including www.newyorkheart.org and a corresponding Facebook page, which spoke out against ivory poaching. The social media page was used, in part, to promote the company as a socially responsible business.

Risks of unlawful social media content: changes in UK defamation landscape and what you need to know

A carefully curated social media presence is a critical business requirement, but there are risks. One of these risks is unlawful content, be that unlawful content posted to your business’s own social media account (exposing the company to potential liability) or harmful content about your business (or its C-Suite or key personnel) posted on independent sites.

So how do you tackle unlawful content? Often the first port of call is the law of defamation. The UK is renowned as a claimant-friendly jurisdiction for defamation litigation. With its widely respected court system and judiciary, the UK has been the forum of choice for international defamation disputes. Note that the rules have recently been tightened, with stricter thresholds brought in for defamation actions and a requirement, aimed at stopping “libel tourism,” that for claims against non-EU defendants the UK must be the “most appropriate place” in which to litigate (the Defamation Act 2013).

Liability for Hyperlinks: German Court Increases Responsibility of Website Operators

The Regional Court of Hamburg recently applied, for the first time, the new decision by the Court of Justice of the European Union (CJEU) regarding liability for hyperlinks, further increasing the risks and responsibilities for social media website operators.

The EU Court Decision

The CJEU held in September 2016 that using a hyperlink may constitute an infringement of copyright law if (a) the linked website contains infringing content, (b) the hyperlink was provided with the intent to realize profits, and (c) the person providing the link did not review the content on the linked website.

The edit button: can the past be erased?

Social media users have a new demand for 2017: they want the ability to edit their public messages. Spelling mistakes, missing words, and misplaced pronouns can have embarrassing, unintended, and sometimes dangerous consequences, so the ability to edit one’s message is an attractive feature. This demand has led some users of the social media platform Twitter to ask its CEO when an edit function would be introduced.
