Social media platforms have revolutionized the way people receive and deliver news and information. Industry players, legislators, and consumers of social media have all had to adapt to this new medium of speech. While social media posts share the permanence and public nature of traditional news formats, such as newspapers, they are not subject to the same editorial review and control. The sheer volume and pace of social media posts have made it impractical for social media companies to maintain the same degree of content review as newspapers or television broadcasts.

Although this new environment has provided a robust avenue for free speech, it also creates legal risk, as it is difficult to guard against illicit forms of speech. Social media companies face both business and legal risks in Canada where consumers use their platforms to spread illicit speech. To mitigate this risk, social media companies may need to consider monitoring and removing illicit content without taking on undue expense or undermining the benefits of the platform.

The German law on hate speech (the Network Enforcement Act, or Netzwerkdurchsetzungsgesetz), which came into effect on October 1, 2017, continues to draw criticism. Its legal and political implications are widely discussed in the media (see, e.g., our posts here and here), particularly with regard to the current global debate on dealing with differing opinions, the power and influence of social media over information and disinformation, and the law's place in the context of an increasingly fragmented internet.

Since January 1, 2018, social media providers have been obliged to maintain a complaints procedure. This procedure forms a core element of the law, as the social media provider's obligation to delete unlawful content, and the time period for deletion, are triggered by the receipt of a complaint.

June 2017 ended with the German parliament approving a bill targeting hate speech and fake news on social media, including on Facebook, YouTube, and Google. The law takes effect in October 2017 and carries fines of up to EUR 50 million.

We previously discussed the bill in this blog post. Now that the bill has been passed into law, social media companies are required to remove illegal hate speech within 24 hours of receiving notification or a complaint, and to block other offensive content within seven days. The law also requires social media companies to report, every six months, the number of complaints they have received and how they have dealt with them.

In India, the administrator of a WhatsApp group was recently arrested after the sharing of an allegedly defamatory photoshopped image of Prime Minister Narendra Modi. South Africa has yet to test the liability of a group administrator for what is shared in their group. However, given the rise in online racism and hate speech, paired with the millions of people around the world who use WhatsApp, it may only be a matter of time before a case like the one in India comes before the South African courts.

The German Justice Ministry has introduced a draft law that would impose fines of up to €50 million on social media companies that fail to remove hate speech and other illegal content from their platforms quickly.

The fines would be imposed whenever social media companies fail to remove online threats, hate speech, or slanderous fake news. Obviously illegal content would have to be deleted within 24 hours; reported material later determined to be illegal would have to be removed within seven days.

This year seems to have started off in much the same way as 2016 ended. Celebrities, politicians, and everyday people have flocked to social media to provide their commentary on everything from global crises to envelope sagas.

Towards the end of 2016, Twitter announced that no person is above its policies, particularly with respect to hate speech, and threatened to remove Donald Trump's verified account if the President continued to violate them. But what exactly do Twitter's policies say?