June 2017 ended with the German parliament approving a bill aimed at eliminating hate speech and fake news on social media platforms, including Facebook, YouTube, and Google. The law will take effect in October 2017 and provides for fines of up to EUR 50 million.

We previously discussed the bill in an earlier blog post.  Now that the bill has been passed into law, social media companies are required to remove illegal hate speech within 24 hours of receiving a notification or complaint, and to block other offensive content within seven days.  The law also requires social media companies to report, every six months, the number of complaints they have received and how they have dealt with them.

What is a chatbot?  Essentially, it is a computer program that simulates human behaviour online, including on social media. Chatbots are not a new concept, but they are becoming increasingly sophisticated in what they can do and how closely they can mimic human behaviour online, to the point that they are increasingly replacing humans in managing organisations’ social media presences.

Chatbots are widely used by corporations to stimulate conversation, promote products and services, increase consumer engagement and generally enhance the user experience. For example, RBS has announced plans to launch a chatbot, “Luvo”, to help its customers with more straightforward queries; H&M has a chatbot on Kik, which learns about a user’s style through viewing photographs and recommends outfits; and Pizza Hut has a chatbot on Facebook and Twitter, which allows customers to place orders via those platforms.
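To illustrate the basic mechanics, the sketch below shows a minimal rule-based chatbot in Python. It is a hypothetical example of our own: the keyword rules and canned responses are invented for illustration, and commercial chatbots such as those described above typically rely on natural-language-processing and machine-learning services rather than simple keyword matching.

    # A minimal, hypothetical rule-based chatbot sketch.
    # Real commercial chatbots use NLP/ML services; this
    # keyword-matching approach is for illustration only.

    RULES = {
        "order": "I can help you place an order. What would you like?",
        "hours": "We are open 9am to 5pm, Monday to Friday.",
        "human": "Let me connect you with a member of our team.",
    }

    FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

    def reply(message: str) -> str:
        """Return the canned response for the first matching keyword."""
        text = message.lower()
        for keyword, response in RULES.items():
            if keyword in text:
                return response
        return FALLBACK

    if __name__ == "__main__":
        print(reply("I'd like to order a pizza"))  # order response
        print(reply("What's the weather?"))        # fallback response

Even this toy version shows why chatbots scale so easily for organisations: once the response logic is written, the program can hold an unlimited number of simultaneous "conversations" at no marginal cost.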

On March 8, 2017, federal Judge Sidney Fitzwater of the Northern District of Texas issued a memorandum opinion and order in Charalambopoulos v. Grammer, No. 3:14-CV-2424-D, 2017 WL 930819. The case had already been in litigation for years and involved allegations of domestic violence and defamation.  According to earlier opinions issued in Charalambopoulos, the parties had been staying in Houston, Texas, where the defendant – a reality television star and the former wife of Kelsey Grammer – was undergoing cancer treatment.  The parties, who were dating at the time, got into an argument at their hotel during the trip.  Days later, the defendant tweeted about the incident to her roughly 198,000 Twitter followers.

In India, the administrator of a WhatsApp group was recently arrested following the sharing of an allegedly defamatory photoshopped image of Prime Minister Narendra Modi.  South Africa has yet to test the liability of a group administrator for what is shared in their group.  However, given the rise in online racism and hate speech, paired with the millions of people around the world who use the WhatsApp application, it may only be a matter of time before a case like the one in India comes before the South African courts.

The German Justice Ministry has introduced a draft law that would impose fines of up to €50 million on social media companies that fail to remove hate speech and other illegal content from their platforms quickly.

The fines would be imposed whenever social media companies do not remove online threats, hate speech, or slanderous fake news. Any obviously illegal content would have to be deleted within 24 hours; reported material that is later determined to be illegal would have to be removed within seven days.

With the proliferation of so-called “fake news”, companies are starting to rely on third-party organizations to perform a “fact-checking” function in order to distinguish between legitimate news and fake news. The fake news epidemic gained traction during the recent US presidential election.  We have previously written about the fake news problem, as well as the UK Government’s plan to tackle the issue.

While fake news began as false information disguised as reporting from legitimate news sources, the problem of fake news, and the question of what constitutes it, are becoming more complicated and nuanced.

In the past, concerns regarding news focussed on traditional media (typically newspapers and broadcasters) and the role they played in curating and controlling public information and sentiment. In recent months, the focus has shifted to the distribution of news on the internet and social media and the so-called ‘fake news’ problem.

A carefully curated social media presence is a critical business requirement, but it carries risks. One of these risks is unlawful content – be that unlawful content posted to your business’s own social media account (exposing the company to potential liability) or harmful content about your business (or its C-suite or key personnel) posted on independent sites.

So how do you tackle unlawful content? Often the first port of call is the law of defamation. The UK is renowned as a claimant-friendly jurisdiction for defamation litigation; with its widely respected court system and judiciary, it has been the forum of choice for international defamation disputes. Note, however, that the rules have recently been tightened: the Defamation Act 2013 introduced stricter thresholds for defamation actions and, to curb “libel tourism”, a requirement that for claims against non-EU defendants the UK must be the “most appropriate place” in which to litigate.

Social media users have a new demand for 2017 – they want the ability to edit their public messages. Spelling mistakes, missing words and misplaced pronouns can have embarrassing, unintended and sometimes dangerous consequences.  The ability to edit one’s message is an attractive feature.  This demand has led some users of the social media platform Twitter to ask its CEO when an edit function will be introduced.

On Thursday, December 15, 2016, President Obama signed into law H.R. 5111, now officially titled the “Consumer Review Fairness Act of 2016.” The substantive provisions of the bill, which we discussed in a previous post, are virtually unchanged, but the law’s text provides further details regarding enforcement by the Federal Trade Commission and the states.

One noteworthy enforcement feature of the law is a cross-reference to the Federal Trade Commission Act. Offering a form contract containing a provision that the law declares void is not only a violation of the Consumer Review Fairness Act of 2016 but also a violation of 15 U.S.C. 45(a)(2), which essentially prohibits unfair or deceptive acts or practices.