Your friend tells you they saw a video of you on social media. You look it up. The person in that video looks like you. That person even sounds like you. To make matters worse, the video shows this counterfeit version of you doing something incredibly embarrassing. You have never done what the video portrays, and yet here it is, online forever. You have just been victimized by a deepfake.

What is a Deepfake?

Deepfakes (a portmanteau of ‘deep learning’ and ‘fake’[1]) use AI systems trained in audio and video synthesis to manufacture fake videos. The underlying AI is based on generative adversarial learning.[2] These systems learn how a face moves from a bank of photos that a user inputs, and then superimpose that face onto the body of someone else in an unrelated video.
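To make the adversarial setup more concrete, here is a minimal sketch of a generative adversarial training loop, assuming the PyTorch library is available. The network sizes, image dimensions, and stand-in training data are illustrative placeholders only, not the architecture of any actual deepfake tool.

```python
# Minimal sketch of generative adversarial learning (assumes PyTorch is installed).
# A generator learns to produce fakes; a discriminator learns to spot them.
import torch
import torch.nn as nn

# Generator: maps a random latent vector to a fake "image" (flattened for simplicity).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be a real image.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Placeholder for a batch of real face photos (the "bank of photos" a user supplies).
    real = torch.rand(32, 64 * 64)
    noise = torch.randn(32, 100)
    fake = generator(noise)

    # 1) Train the discriminator to tell real images from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The two networks improve in tandem: as the discriminator gets better at detecting fakes, the generator is pushed to produce ever more convincing output, which is what makes deepfake video so difficult to distinguish from genuine footage.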

As we have previously written, the Pew Research Center found in 2016 that 62% of American adults consumed news on social media to some extent.

In September 2017, the Pew Research Center updated its research, finding that about 67%, or two-thirds, of American adults reported getting “at least some of their news on social media,” a five-percentage-point increase over 2016.

According to the research, this growth was driven by more substantial increases among certain demographic groups. The research shows that 55% of American adults over 50 now consume news on social media sites, up from 45% in 2016. It also reports that 74% of non-white Americans got news on social media sites in 2017, up from 64% in 2016. Finally, 69% of those with less than a bachelor’s degree now get news from social media, up from 60% previously.

Despite all the headlines and studies on social media’s role in spreading fake news and influencing public opinion, the majority of the public shows no sign of abandoning news on social media any time soon. However, some optimistic leaders of the traditional news media see fake news as an opportunity to highlight the integrity of mainstream media.

The German Justice Ministry has introduced a draft law that would impose fines of up to €50 million on social media companies that fail to remove hate speech and other illegal content from their platforms quickly.

The fines would be imposed whenever social media companies do not remove online threats, hate speech, or slanderous fake news. Any obviously illegal content would have to be deleted within 24 hours; reported material that is later determined to be illegal would have to be removed within seven days.

With the proliferation of so-called “fake news”, companies are starting to rely on third-party organizations to perform a fact-checking function to distinguish legitimate news from fake news. The fake news epidemic gained traction during the recent US presidential election. We have previously written about the fake news problem, as well as the UK Government’s plan to tackle the issue.

While fake news began as false information disguised as reporting from legitimate news sources, the problem of fake news, and the question of what constitutes fake news, are becoming more complicated and nuanced.

In the past, concerns regarding news focused on traditional media (typically newspapers and broadcasters) and the role they played in curating and controlling public information and sentiment. In recent months, the focus has shifted to the distribution of news on the internet and social media, and the so-called ‘fake news’ problem.

In the few months leading up to the United States election, social media was flooded with articles bearing sensationalized titles and incendiary content. Many of these “news” stories were fake, written to sway public opinion or to generate profit from ad revenue, and often published by sham entities or news websites. Large, popular companies may be the next targets, so this post will describe a few actions companies could take.