Companies, especially those based outside the U.S., sometimes ask why it is so difficult to bring a lawsuit based on something posted on social media. A recent federal court decision from California helps illustrate how courts view these actions. Prehired, LLC v. Provins, No. 2:22-cv-00384-TLN-AC, 2022 WL 1093237 (E.D. Cal. Apr. 12, 2022).

Your friend tells you they saw a video of you on social media. You look it up. The person in the video looks like you. That person even sounds like you. To make matters worse, the video shows this counterfeit version of you doing something incredibly embarrassing. You have never done what the video portrays, and yet here it is, online forever. You have just been victimized by a deepfake.

What is a Deepfake?

Deepfakes (short for ‘deep learning’ and ‘fake’[1]) use AI models trained in audio and video synthesis to manufacture fake videos. The underlying system is based on generative adversarial learning.[2] These models learn how a face moves from a bank of photos that a user supplies, and then superimpose that face onto the body of someone else in an unrelated video.
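To make the adversarial setup concrete, below is a minimal sketch of generative adversarial training, the technique behind deepfakes, written in Python with PyTorch. It is a toy example on flat vectors rather than an actual face-swap pipeline, and every name and size in it (G, D, LATENT_DIM, the layer widths) is a hypothetical illustration, not drawn from any real deepfake tool.

    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM = 64, 784  # hypothetical sizes; 784 = a 28x28 image flattened

    # Generator: maps random noise to a synthetic sample ("the fake").
    G = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, DATA_DIM), nn.Tanh(),
    )

    # Discriminator: outputs a logit scoring how "real" a sample looks.
    D = nn.Sequential(
        nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    real_batch = torch.rand(32, DATA_DIM)  # stand-in for a batch of real images

    for step in range(200):
        # 1) Train the discriminator to separate real samples from generated ones.
        fake_batch = G(torch.randn(32, LATENT_DIM)).detach()  # detach: don't update G here
        d_loss = (loss_fn(D(real_batch), torch.ones(32, 1)) +
                  loss_fn(D(fake_batch), torch.zeros(32, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2) Train the generator to fool the discriminator into labeling fakes "real".
        g_loss = loss_fn(D(G(torch.randn(32, LATENT_DIM))), torch.ones(32, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is why mature deepfake output can be hard to distinguish from genuine footage.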

As a wise person once said, truth is often stranger than fiction. The Texas Court of Appeals for the Fourth District (the “Appellate Court”) recently decided Hosseini v. Hansen, a bizarre case intertwining a tax preparation business, primate trainers and enthusiasts, and a defamation claim. Despite its unique factual circumstances, the case offers useful general insight into social media use as it relates to defamation.

We have previously written about the U.S. legal landscape regarding consumers’ rights to post negative reviews of products or services on the internet, including some of the implications of the Consumer Review Fairness Act for those rights. The Consumer Review Fairness Act was passed in December 2016 in response to some businesses’ efforts to prevent customers from giving honest reviews by requiring them to sign non-disparagement or similar agreements as a condition of receiving a particular product or service.

This post concerns the federal Communications Decency Act of 1996 (the “CDA”) and its relationship to the rights and obligations of companies that provide a forum for reviews and ratings of businesses (the “review sites”), the reviewers, and the businesses that are reviewed. In July of this year, the Supreme Court of California issued an opinion, styled Hassell v. Bird, that analyzed the relationship among these entities and provided some guidance and clarity as to the legal rights the CDA provides in this context.

June 2017 ended with the German parliament approving a bill aimed at eliminating hate speech and fake news on social media, including on Facebook, YouTube, and Google. The law will take effect in October 2017 and provides for fines of up to EUR 50 million.

We previously discussed the bill in this blog post. Now that the bill has been passed into law, social media companies are required to remove illegal hate speech within 24 hours of receiving notification or a complaint, and to block other offensive content within seven days. The law also requires social media companies to report, every six months, the number of complaints they have received and how they have dealt with them.

On March 8, 2017, federal Judge Sidney Fitzwater of the Northern District of Texas issued a memorandum opinion and order in Charalambopoulos v. Grammer, No. 3:14-CV-2424-D, 2017 WL 930819. The case had already been in litigation for years and involved allegations of domestic violence and defamation. According to earlier opinions issued in Charalambopoulos, the parties had been staying in Houston, Texas, where the defendant, a reality television star and former wife of Kelsey Grammer, was undergoing cancer treatment. The parties, who were dating at the time, got into an argument at their hotel during the trip. Days later, the defendant tweeted about the incident to her roughly 198,000 Twitter followers.

In India, the administrator of a WhatsApp group recently faced arrest following the sharing of an allegedly defamatory photoshopped image of Prime Minister Narendra Modi. South Africa has yet to test the liability of a group admin for what is shared in their group. However, given the rise in online racism and hate speech, paired with the millions of people around the world who use the WhatsApp application, it may be only a matter of time before a case like the one in India comes before the South African courts.

The German Justice Ministry has introduced a draft law that would impose fines of up to €50 million on social media companies that fail to remove hate speech and other illegal content from their platforms quickly.

The fines would be imposed whenever social media companies do not remove online threats, hate speech, or slanderous fake news. Any obviously illegal content would have to be deleted within 24 hours; reported material that is later determined to be illegal would have to be removed within seven days.

With the proliferation of so-called “fake news”, companies are starting to rely on third-party organizations to perform a “fact-checking” function in order to distinguish between legitimate news and fake news. The fake news epidemic gained traction in the recent US presidential election. We have previously written about the fake news problem, as well as the UK Government’s plan to tackle the issue.

While fake news began as false information disguised as legitimate news reporting, the problem of fake news, and the question of what constitutes fake news, are becoming more complicated and nuanced.

In the past, concerns regarding news focussed on traditional media (typically newspapers and broadcasters) and the role they played in curating and controlling public information and sentiment. In recent months, the focus has shifted to the distribution of news on the internet and social media, and to the so-called ‘fake news’ problem.