2019

In this age of social media, companies and brands have faced countless criticisms for their lack of transparency, for copyright infringements disguised as “flattery” or “inspiration,” and, not least, for their many inclusivity flops.

Brands, including beauty brands, are now dedicating more of their marketing budgets to paying influencers for their “honest” reviews in hopes of convincing the public to purchase their products. More striking still, consumers rely heavily on social media to decide where to place their trust and money. With these stakes, some companies have turned to deceptive practices in search of social media popularity.

Artificial intelligence (AI) is a field of computer science referring to intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Social media platforms use artificial intelligence technologies such as natural language processing to understand text data, and image processing for facial recognition.

In some instances, regulation tries to create a “legal” definition of AI. For example, a law requiring disclosure of chatbots defines “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” Article 22 of the GDPR provides a right not to be subject to a decision based solely on “automated processing, including profiling” that has legal or similarly significant effects. Other AI laws address driverless vehicles. These legal definitions determine whether a law applies to a particular AI process or system.

On June 13, 2019, the 9th Circuit handed down a decision in Duguid v. Facebook, Inc., 926 F.3d 1146 (9th Cir. 2019), which has at least partially brought into question the future of the Telephone Consumer Protection Act (“TCPA”).

Around January 2014, Facebook started sending Noah Duguid sporadic text messages alerting him that an unrecognized browser was attempting to access his Facebook account. The messages followed a template akin to “Your Facebook account was accessed [by/from] <browser> at <time>. Log in for more info.” While such a message might alarm the everyday Facebook user worried that their account had been hacked, these texts alarmed Duguid for a completely different reason – he did not have a Facebook account.

Your friend tells you they saw a video of you on social media. You look it up. The person in that video looks like you. That person even sounds like you. To make matters worse, the video shows this counterfeit version of you doing something incredibly embarrassing. You have never done what the video portrays, and yet here it is, online forever. You have just been victimized by a deepfake.

What is a Deepfake?

Deepfakes (a portmanteau of “deep learning” and “fake”[1]) use AI models trained in audio and video synthesis to manufacture fake videos. The underlying system is typically a generative adversarial network (GAN).[2] From a bank of photos supplied by a user, the model learns how a face moves, then superimposes that face onto the body of someone else in an unrelated video.
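To make the adversarial idea concrete, here is a deliberately tiny numpy sketch – not a deepfake pipeline – in which a one-parameter-pair “generator” learns to mimic a target distribution by trying to fool a logistic “discriminator.” All names, learning rates, and the toy setup are illustrative assumptions; real deepfake systems use deep neural networks for both players, but the training dynamic is the same: the discriminator learns to tell real from fake while the generator learns to produce fakes the discriminator accepts.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Discriminator: d(x) = sigmoid(w*x + c), outputs P(x is real)
w, c = 0.1, 0.0
# Generator: g(z) = a*z + b, maps random noise z to a "fake" sample
a, b = 1.0, 0.0

lr = 0.05
real_mean, real_sd = 4.0, 1.0  # the "real" data the generator must imitate

for step in range(3000):
    # --- discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = rng.normal(real_mean, real_sd)
    x_fake = a * rng.normal() + b
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w += lr * ((1 - s_r) * x_real - s_f * x_fake)
    c += lr * ((1 - s_r) - s_f)

    # --- generator step: adjust (a, b) so the discriminator is fooled ---
    z = rng.normal()
    s_f = sigmoid(w * (a * z + b) + c)
    a += lr * (1 - s_f) * w * z
    b += lr * (1 - s_f) * w

fake_samples = a * rng.normal(size=1000) + b
print(f"mean of generated samples: {fake_samples.mean():.2f}")
```

After training, the generated samples cluster near the real data's mean even though the generator never saw a real sample directly – it learned only from the discriminator's feedback, which is the core trick deepfake systems exploit at far larger scale.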

In an August 1, 2019 post titled “Without Proper Enforcement, Even the Strongest Social Media Policies May Not Protect Employers,” we discussed how enforcement of corporate social media policies is paramount to protecting employers from liability stemming from employee violations of those policies. That post explained that employers must take care not only to formulate comprehensive social media policies, but also to provide thorough training to employees and managers and to enforce those policies rigorously.

In keeping with that theme, this article examines a specific illustration of the importance of maintaining and enforcing corporate social media policies.

We have previously written about trademark cases where one party was ordered to turn over social media accounts, websites, links, etc. that included the disputed mark(s). But what happens if the defendant doesn’t turn them over but instead destroys them?

Most of us are familiar with Instagram – a social media platform, used primarily through its all-too-familiar phone application, that allows users to share images and videos of themselves or others for public viewing and potential recognition.

With the increased popularity of photo-sharing social media tools like Instagram, users have begun to wonder more about what, if any, intellectual property rights they may own to the content they publish to such sites. In a previous post, we discussed the legal implications of posting content to social media and found that the user is often the primary owner of their content.

This raises the question: if each user owns the content he or she posts, what, if any, are the legal implications of reposting another user’s content?