Socially responsible advertising

On April 6, 2016, the UK’s Advertising Standards Authority (ASA) considered a complaint against Guccio Gucci SpA regarding a video advert that originally appeared on www.thetimes.co.uk. The advert featured several models dancing in a house, clothed in the global fashion brand’s apparel, and the complaint centred on the models’ physical appearance.

The complainant believed that the models appeared “unhealthily thin”, and that the advertisement therefore violated the applicable advertising code, which provides that “marketing communications must be prepared with a sense of responsibility to consumers and to society”.

The ASA ruled that the advert was irresponsible and must not appear again in its current form, and directed Gucci to ensure that the images in its advertisements were prepared responsibly.

This ruling follows on the heels of various similar findings by the ASA against irresponsible advertisements in the past few years.

In January 2015, The Shop Channel UK received an adverse ruling against its television advert for a corset that “added to the impression that women should aspire to very small waists.”

A few months later, Yves Saint Laurent received an adverse ruling from the ASA regarding their advert that appeared in Elle magazine, and Urban Outfitters was dealt the same fate in 2014 in regard to an advertisement that appeared on its own website.

An interesting point to note is that Gucci argued that its target audience was the “older, sophisticated” readership of the Times.

The ASA did not seem to give much consideration to this argument in upholding the complaint. Although the ASA did not specifically address the issue of the target audience, in an era when social media has become the advertiser’s greatest asset, it would be misguided to earnestly try to convince an advertising watchdog that an advertisement will reach only a specific audience.

Given the ease with which an advertisement can be reproduced on various social media platforms (most often via direct links to Facebook, YouTube, Instagram, and Pinterest on the website on which the advertisement appears), brand owners should diligently review the content of advertisements prior to publishing them.

Although Twitter does not ask for a potential user’s age at sign-up, its privacy policy indicates that users must be at least 13 years old, the minimum age also required by Facebook, Instagram, Pinterest, Tumblr, Reddit, and Snapchat.

The ASA has regularly held that the number of complaints made against an advert is only one factor taken into consideration. In reaching a decision on whether to uphold complaints, the ASA will also take into account factors including the audience, medium, and context of the advert.

The ASA can impose various sanctions in respect of an irresponsible advert, ranging from ordering the withdrawal of the advertisement to issuing an ad alert advising its members to withhold services (such as access to advertising space) and calling for pre-vetting of all adverts from persistent offenders. For advertisers who have broken the advertising code on the grounds of taste and decency or social responsibility, the pre-vetting period can last for two years.

The expansive landscape of social media should be at the forefront of every advertiser’s mind when assessing the suitability of the content of their advertisements in order to avoid the irony of breaking through thin ice, only to land in hot water.

Liability for friends’ defamatory statements

Liability for third-party defamatory comments on one’s personal account, whether on Facebook or another internet-based platform, is an emerging legal issue in Canadian law.

If a social media “friend” posts defamatory statements about another person on your profile or another site, can you be personally liable to the defamed person? Do you have any obligation to actively monitor your social media presence in the face of such statements? Are you liable for third-party statements that you may not even be aware of?

These questions were considered in a recent decision by the Supreme Court of British Columbia, which provided guidance in relation to potential liability and duties that result from the usage of social media in Canada (Pritchard v. Van Nes, 2016 BCSC 686).

The case involved a plaintiff who was the target of comments on the defendant’s Facebook page. Third parties posted defamatory comments about the plaintiff, implying, among other things, that he was a paedophile. The allegations had serious ramifications for the professional career of the plaintiff, who was a music teacher at a local school. In the court’s decision, Saunders J. noted that while there is a general rule that a person is responsible only for his or her own defamatory publications, and not for their repetition by others, there are several exceptions to the rule. As reproduced in the decision from Professor Brown’s text, The Law of Defamation in Canada, 2nd ed. (Scarborough: Carswell, 1994), at 348-350 (emphasis added):

Republication occurs where the person to whom the words were originally published communicates them to someone else. The general rule is that a person is responsible only for his or her own defamatory publications, and not for their repetition by others. There is no liability for a republication by a third person that the defendant neither authorized nor intended to be made.

There is no liability upon the original publisher of the libel when the repetition is the voluntary act of a free agent, over whom the original publisher had no control and for whose acts he is not responsible …

However, there are several exceptions to this rule. The defendant may intend or authorize another to publish a defamatory communication on his or her behalf. Secondly, a defendant may publish it to someone who is under some moral, legal or social duty to repeat the information to another person. Thirdly, a defendant may be liable if the repetition was the natural and probable result of his or her publication. These rules apply only where the information repeated is the same or substantially the same so that the sum and substance of the original charge remains. Once the requirements have been satisfied, the plaintiff is entitled to recover damages from the defendant both for the original publication and for the republication by the person to whom it was initially published.

The specific context and environment of social media was considered in determining whether the present fact scenario lent itself to an exception from the general rule. Saunders J. took judicial notice regarding the nature and operation of social media platforms and applications.  In particular, he noted the ubiquity of social media platforms, and how social media platforms facilitate distribution through their structure and architecture.  In coming to his decision, Saunders J. reviewed social media specific factors, including:

  • the defendant’s privacy settings (set as public),
  • her number of friends (over 2,000),
  • her initial posts,
  • the timeframe of her replies (implying that she was actively viewing her page and did not delete the statements within a reasonable time – but she did delete some posts following a complaint to the police),
  • the gravity of the defamatory remarks, and
  • the ease with which deletion could be accomplished.

The comments proliferated across the internet and had “gone viral” (despite being eventually deleted from the defendant’s page). In concluding that the defendant was indeed liable for the statements, Saunders J. found that there was a reasonable expectation of further defamatory statements being made and, further, that she had a positive obligation to actively monitor and control posted comments. He distinguished the present facts from those involving passive service providers, and noted that the defendant’s failure to monitor allowed what may have started off as thoughtless “venting” to snowball and become perceived as a call to action: offers of participation in confrontations and interventions, and recommendations of active steps to shame the plaintiff publicly, with devastating consequences.

Saunders J. found that “the defendant ought to share in responsibility for the defamatory comments posted by third parties, from the time those comments were made, regardless of whether or when she actually became aware of them” (emphasis added). The plaintiff was awarded $2,500 for nuisance, $50,000 for defamation, $15,000 in punitive damages, and his costs. Notably, as discussed in one of our recent posts, a similar positive obligation to monitor social media accounts was recognized in a recent South African decision, and it appears that this issue will become increasingly common in the age of social media. The law in the United States differs, as can be seen in this 2015 post.


Rise of the social media bots

As we discussed in a recent post, “Social media overload”, social media has grown exponentially over the past decade and has caused businesses to change how they operate and how they make decisions. Social media has quickly become one of the most important marketing platforms, providing a convenient way for companies to reach broad audiences.

Often, the popularity or the success of an entity or an individual on social media is determined by some quantitative measurement, such as likes or followers. The more likes or followers (or other such social media unit) that an account has, the more likely it is to be perceived as being popular, successful, or relevant to the ongoing social conversation. This phenomenon has given rise to numerous services that offer to provide likes or followers by the thousands at extremely low costs. These services create and/or provide “bots” that mimic real social media users and can engage with the client’s account by following it, liking it, or even re-sharing or commenting on its posts. Such services provide an easy way for entities to gain large numbers of followers, likes, etc. that may result in the company getting attention as “trending” or simply getting noticed on a social media platform.

On the other hand, companies do not want to risk losing their credibility by having it revealed that some or even most of their followers are fake accounts run by bots. It is also important for companies to have an accurate account of their social media activity and followers, a task that is nearly impossible when there are unknown numbers of bot accounts present.

While platforms like Instagram and Twitter have tried to purge fake accounts themselves, it is nearly impossible for them to keep up, and harder still for smaller social media platforms. Fortunately, several services can help identify bots and fake/spam accounts, but they cannot guarantee that they will always be right. Bots have become increasingly sophisticated: some limit their activity to mimic normal human sleep patterns, and others “follow” each other to create the appearance of legitimacy. Such features have made it difficult for a computer program to distinguish the real from the fake, and they create the risk of wrongly identifying a real follower as a bot, which may be even more problematic than simply allowing the bots to remain.
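
To make the detection problem concrete, the sketch below scores a follower against a few simple heuristics. This is a minimal illustration in TypeScript: the fields and thresholds are invented for the example and do not reflect the logic of any real detection service.

```typescript
// A minimal sketch of heuristic bot scoring. All fields and thresholds
// are illustrative assumptions, not any real service's detection logic.
interface Follower {
  accountAgeDays: number;   // newly created accounts are more suspect
  postCount: number;        // many bots follow and like but never post
  followerCount: number;
  followingCount: number;
  hasProfilePhoto: boolean;
}

function botScore(f: Follower): number {
  let score = 0;
  if (!f.hasProfilePhoto) score += 1;
  if (f.postCount === 0) score += 1;
  if (f.accountAgeDays < 30) score += 1;
  // Follows thousands of accounts while almost nobody follows back.
  if (f.followingCount > 1000 && f.followingCount > 50 * Math.max(f.followerCount, 1)) {
    score += 1;
  }
  return score;
}

// Example: flag followers scoring 3 or more for human review.
const suspect: Follower = {
  accountAgeDays: 3,
  postCount: 0,
  followerCount: 2,
  followingCount: 4800,
  hasProfilePhoto: false,
};
console.log(botScore(suspect)); // prints 4
```

A bot that posts occasionally, keeps human hours, and trades follows with other bots would score low on every one of these signals, while an unusual but genuine account might score high, which is precisely how the false positives described above arise.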

The best way to address the bot issue while avoiding (or at least minimizing) the false positives in identifying bots may be to do it the old-fashioned way—through hands-on, human labor. People who are familiar with a company’s social media account and with its regular, active followers are in the best position to identify a bot. Companies that already have employees tasked with managing their social media accounts and Internet presence should leverage the knowledge of those experts to identify bots and fake/spam accounts. Others may consider another, more creative way to handle the problem by crowdsourcing the work. Companies with loyal, active followers may want to enlist their help in identifying and reporting fake accounts as they encounter them during their normal social media use.

Whether through computer-based services, employees, or active social media users, companies will have to find some way to tackle the issue of fake accounts and bots—at least until the social media platforms or regulators develop more sophisticated methods to take down such accounts or prevent them from being created in the first place.


Adding to Glossary of Social Media Terms

We have several new readers to our blog, and we want to make sure everyone is aware of our features, which include not only blog posts but also several glossaries. One of these, our Glossary of Terms, helps our readers understand what marketing and promotions employees have already discovered. Here are some new terms that you may not be familiar with:

  • Vine. Vine, a social media application owned by Twitter, allows users to record and share looping videos that are six seconds or less. A video can be a single six-second recording, or several separate, short videos compiled into one six-second video. A user’s videos published on Vine can also be shared on Twitter, Facebook, or Tumblr, or embedded on a website. Users can also share other Vine users’ content with their own Vine followers by “revining” a video, which posts the video directly onto the user’s feed. Vine is used by individual users, but has also been used by numerous companies and brands to promote their products or services. See our posts that include Vine.
  • Revine. The act of sharing a Vine video with all of the user’s Vine followers, similar to a Retweet on Twitter or a Share on Facebook.
  • Thunderclap. Thunderclap is a crowdspeaking platform on which a user creates a campaign (a “Thunderclap”) and invites followers to support it. The campaign message is limited to 117 characters; however, the user can tell a more detailed story on their Thunderclap page. A campaign can be a link to a charity or online petition, an auction, a health PSA, a political campaign, a campaign to raise awareness of a certain issue, a book launch, or any other campaign. If a Thunderclap user chooses to support the campaign, the supporter gives Thunderclap access to their social media account and permission to share a message on the supporter’s Facebook, Twitter, or Tumblr account. If the number of campaign supporters reaches a pre-determined threshold before the end of the campaign, Thunderclap will “blast out” the social media messages from the supporters’ accounts at the same time; if the campaign creator does not recruit enough supporters by the campaign’s end date and time, no posts are made (see the sketch of this all-or-nothing logic just below). According to Thunderclap, the platform is not just for non-profits and social causes: it can also be used by brands to connect with their audience, promote an event or product launch, and more. See our posts on Thunderclap.
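
Thunderclap’s all-or-nothing trigger is simple to express in code. The following TypeScript sketch illustrates the logic described in the glossary entry above; the type and function names are invented for the example and do not reflect Thunderclap’s actual implementation.

```typescript
// Illustrative sketch of Thunderclap's all-or-nothing trigger, as
// described above; all names and types are invented for this example.
interface Campaign {
  message: string;             // campaign message, limited to 117 characters
  supporterAccounts: string[]; // accounts that granted posting permission
  threshold: number;           // pre-determined supporter goal
  endsAt: Date;                // campaign end date and time
}

function resolveCampaign(
  campaign: Campaign,
  now: Date,
  post: (account: string, message: string) => void,
): void {
  if (now.getTime() < campaign.endsAt.getTime()) {
    return; // campaign still running: nothing is posted yet
  }
  if (campaign.supporterAccounts.length >= campaign.threshold) {
    // Goal met: "blast out" the same message from every supporter at once.
    for (const account of campaign.supporterAccounts) {
      post(account, campaign.message);
    }
  }
  // Goal missed: no posts are made at all.
}
```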

You may also be interested in our other glossaries: Worldwide Glossary of Law, Glossary of Guidance, and Glossary of Social Media Sites.

Social Media and Potential Jurors

Social media profiles and postings by potential jurors can provide litigation counsel with substantial information about these individuals, including their likes, dislikes, views on various issues, and potential biases. A March 25, 2016 federal trial court ruling, however, led both parties to agree to forgo these searches.

Background

The case involved a high-profile copyright dispute between Google and Oracle. Oracle claimed that Google’s Android smartphone operating system software infringed Oracle’s copyright in its Java technology, and Google denied the claim.

By way of background, in the United States, a federal trial judge will typically instruct jurors that they are prohibited not only from discussing on social media the cases in which they are involved, but also from using the Internet and social media during the trial to research or comment on anything relating to the case.

Concerns Regarding Social Media Searches on Potential Jurors

The court in this case used a two-page questionnaire for potential jurors, and was surprised when the parties requested up to two days to review the responses before beginning to question the potential jurors (voir dire). The judge stated that he “eventually realized” that the purpose of the delay was to permit counsel, their clients, and their agents to research the potential jurors’ online information. The judge stated three reasons “to restrict, if not forbid, such searches by counsel, their jury consultants, investigators and clients”:

  1. “The danger that, upon learning of counsel’s own searches directed at them, our jurors would stray from the Court’s admonition to refrain from conducting Internet searches on the lawyers and the case.”
  2. Allowing that research “will facilitate improper personal appeals to particular jurors via jury arguments and witness examinations patterned after preferences of jurors found through such Internet searches.” The judge cited examples of trial counsel using quotes from a juror’s favorite books or attitudes on “fair trade, innovation, politics, or history” in order to sway the jurors.
  3. “To protect the privacy of the venire.” With respect to jurors, the court stated:  “Their privacy matters.  Their privacy should yield only as necessary to reveal bias or a reluctance to follow the Court’s instructions.”

The Court’s Options to the Parties

The court considered exercising its discretion to ban counsel and the parties from conducting social media and Internet searches on potential jurors, but instead gave the parties a choice:

A. Consent to a ban against Internet and social media research on jurors until the trial is over; or

B. Conduct the searches subject to all of the following:

   i. Counsel would be required to inform the jurors of the specific extent to which the party will use Internet searches to investigate and monitor jurors, “including specifically searches on Facebook, LinkedIn, Twitter, and so on, including the extent to which they will log onto their own social media accounts to conduct searches and the extent to which they will perform ongoing searches while the trial is underway.”

   ii. Counsel could not explain their searches “on the ground the other side will do it.”

   iii. Counsel could not intimate that the court permitted the searches “and thereby leave the false impression that the judge approves of the intrusion.”

   iv. Jurors would be informed that the trial teams would discover and review their “social media profiles and postings, depending on the social media privacy settings in place. The venire persons will then be given a few minutes to use their mobile devices to adjust their privacy settings, if they wish.”

   v. Jurors would be reminded that they cannot do electronic research on the case.

   vi. Counsel would not be permitted to make any personal appeals to any juror “exploiting information learned about a juror via searches.”

Both parties chose the ban against Internet and social media research on potential jurors.

The Takeaway

Companies may wish to consider how they would respond to such a choice, and whether to request a similar choice in their own jury trials. Another question to consider is what arguments could be made against having to make such a choice at all, including the need to uncover potential juror bias.

Asking Employee to Delete Twitter Posts Can Be Unlawful

On March 14, 2016, the popular chain Chipotle Mexican Grill was found to have violated the National Labor Relations Act (NLRA) when it asked an employee to delete posts on his Twitter account about the company. Specifically, in Chipotle Services LLC d/b/a Chipotle Mexican Grill and Pennsylvania Workers Organizing Committee, a National Labor Relations Board (NLRB) administrative law judge determined that the employee’s “tweets” constituted protected activity.

The opinion stated that Chipotle employed a “national social media strategist” whose job duties included monitoring employee social media postings in order to flag violations of company policy. The national social media strategist found a series of posts by the worker discussing his dissatisfaction with his employer. In one post, he complained that hourly workers were required to work on snow days. In another post, he responded to a customer who tweeted, “Free chipotle is the best thanks,” with “nothing is free, only cheap #labor. Crew members only make $8.50hr how much is that steak bowl really?”

At the social media strategist’s urging, the employee’s regional manager and store manager spoke with the employee about the Twitter posts during his shift at the Havertown, Pennsylvania Chipotle. After the employee acknowledged that he had written the posts, the managers asked him to delete them and provided him with an outdated copy of the company’s social media code of conduct.

The social media code of conduct provided to the employee stated, among other things:

If you aren’t careful and don’t use your head, your online activity can also damage Chipotle or spread incomplete, confidential, or inaccurate information. To avoid this, our Social Media Code of Conduct applies to you. Chipotle will take all steps to stop unlawful and unethical acts and behavior and may take disciplinary action, up to and including termination, against you if you violate this code or any other company policy, including Chipotle’s Code of Conduct.

You may not make disparaging, false, misleading, harassing or discriminatory statements about or relating to Chipotle, our employees, suppliers, customers, competition, or investors.

The administrative law judge found that these provisions of the company’s outdated social media code of conduct were unlawful because they could limit Section 7 activities. Section 7 of the NLRA grants employees in all workplaces (regardless of whether employees are represented by a union) the right to engage in concerted activities for the purpose of mutual aid or protection. Employers are prohibited under Section 8 of the NLRA from restraining employees from exercising their Section 7 rights. In this case, the policy’s references to “confidential,” “disparaging,” “inaccurate,” “false,” and “misleading” statements were overly broad or ambiguous, and an employee could construe them as limiting Section 7 activities.

The judge also found that Chipotle could be liable for terms in an old policy that was no longer in effect because the outdated policy had been distributed by the social media strategist on several occasions and because it formed the basis for the manager’s request that the employee remove his Twitter posts.

The judge also found that Chipotle’s request that the employee remove his Twitter posts constituted a violation of the NLRA. Because the Twitter posts related to working conditions—pay rates and being required to work on snow days—they qualified as protectable, concerted activity.  Moreover, even though the manager did not discipline the employee or demand that he remove the posts, the judge found that the request was equivalent to an implicit order not to post about wages or working conditions on Twitter.

This case is a reminder that the NLRB is still on the lookout for overbroad social media policies and keen to remind employers that employees are free to discuss working conditions on all forms of social media.

If you’re looking for more information about how to craft a social media policy that is designed to meet the NLRB requirements, see our previous posts.

Facebook “like” button violates privacy laws

By Nerushka Deosaran and Tatum Govender

On 9 March 2016, the Düsseldorf Regional Court in Germany ruled that an online shopping site, Peek & Cloppenburg, had violated users’ privacy rights by integrating Facebook’s “like” button into its website.

How the “like” button works

The button allows website users who click on it to instantly share pages and content from the website on their Facebook profiles. This technology is a rapidly growing marketing tool.

What most people are not aware of is that this button also uses cookies that automatically send personal information, such as the user’s IP address and browser string, from the website user’s computer to Facebook when the website is accessed. This transmission of information occurs even if the button is not clicked or the user is not a registered Facebook user.

Although the German case related to the Facebook “like” button, it applies equally to other social media buttons, such as those used for LinkedIn, Twitter and Google+, all of which result in similar data transfers.

Finding of the Court

Having found that the IP addresses amounted to personal information and that the transfer of IP addresses to Facebook was not necessary for the functioning of the website, the court found that Peek & Cloppenburg violated German data privacy laws when it integrated the button into its website.

The court held that the link to a data protection statement at the bottom of the website did not save the business’s defense: the link was insufficient to indicate that data was being, or would be, processed.

The court found against Peek & Cloppenburg because the personal information of users was sent to Facebook without their prior, express consent and there was no way to revoke the data transfer. The court ruled that users should have been informed of what personal information was being processed and of the purpose of the processing, and should have consented to it.
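
One approach some publishers (notably in Germany) have adopted in response to concerns of this kind is a “two-click” pattern: the page initially shows an inert placeholder, and the real widget, with its automatic data transfer, loads only after the user explicitly opts in. Below is a minimal browser-side sketch in TypeScript; the element ID and wiring are illustrative assumptions, and this is not the specific remedy ordered by the court.

```typescript
// A minimal sketch of a "two-click" consent pattern. The element ID and
// placeholder wiring are illustrative assumptions; this is not Facebook's
// documented embed code, nor a remedy endorsed by the court.
function loadLikeButton(container: HTMLElement): void {
  // Only at this point does the browser contact the third party,
  // transmitting the visitor's IP address and browser string.
  const script = document.createElement("script");
  script.src = "https://connect.facebook.net/en_US/sdk.js"; // Facebook's public JS SDK
  script.async = true;
  document.body.appendChild(script);
  container.innerHTML = '<div class="fb-like"></div>'; // real widget replaces the placeholder
}

// On page load, render only an inert placeholder: no request leaves the
// page, so no personal data reaches the third party until the user opts in.
const placeholder = document.getElementById("like-placeholder");
if (placeholder) {
  placeholder.addEventListener("click", () => loadLikeButton(placeholder), { once: true });
}
```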

A South African take

Although the Protection of Personal Information Act, 2013 (POPI) is not yet fully in force, POPI has similar requirements to inform data subjects (in this case, the website users) of the personal information being processed and the purpose of the processing, and to obtain their consent to processing.

The website owner will likely be a “responsible party” under POPI because the owner determines the purpose and means of processing the personal information by integrating the button into the website so that it transmits information to Facebook, even though the transmission occurs automatically.

The Information Regulator and South African courts may come to a similar conclusion once POPI is in force, but the issue may turn on the adequacy of the website’s privacy policy and terms and conditions. Many of these terms specify that consent is given by the user using the website.  Whether this “consent” amounts to voluntary, specific and informed consent, as required by POPI, would probably depend on the circumstances of each case.

Social Media Overload

The explosion of social media in the past decade has caused a major shift in the way we conduct our affairs. In particular, businesses have been required to adapt to new ways of communicating with their clients. With thousands of social media applications surfacing each month, and new legal issues surrounding the use of social media, it can all feel overwhelming, especially for new businesses. In order to take advantage of this new phenomenon, it is important to consider what steps can be taken to maximize the benefits of social media:

  • Research and determine the best platforms to launch and maintain
  • Consider legal issues surrounding the choice of platforms
  • Consider implementing corporate policies on the use of social media

Research

Maximizing the benefits of social media takes time and research. First, you need to determine which social media platforms are best suited for your company and will keep clients engaged and interested in your business:

  • Would your company benefit from a communication platform such as Facebook or Twitter?
  • Would your company benefit from a content sharing platform such as YouTube, Vine, Instagram or Pinterest?

Once you determine which social media apps your company should invest in, you may wish to consider bundling the apps in one place. For example, there are apps that allow your company to run all of its social media platforms from a single application.  This consolidation has the obvious benefits of convenience and maintaining consistency in your brand and the way your company communicates with its clients.

Once you have done the research to determine how your company will participate in social media, it is important to invest time to ensure that the platforms you have selected are actually providing material benefits to your company. Are users recognizing your brand?  Are you targeting the right demographic for the wares and services you are providing?  Are you increasing your client base as a result of your participation in social media?  Are you tracking and managing the number of users following your brand?  Luckily, there are social media dashboard apps that can help collect this information for you, such as Cyfe, Google Analytics and SumAll.  This category of apps is becoming more sophisticated and can provide valuable statistical and reputational information to help a company to make the best use of social media.

Legal issues

Once your company has determined which social media platforms will likely enhance its brand and increase its clientele, it is important to consider the legal implications of engaging in social media:

  • Communication platforms can engage issues such as defamation and privacy. For example, in an earlier post, we reported that the High Court of South Africa found a Facebook user liable for defamation because a defamatory post appeared on his Facebook wall and was not removed by him, even though he was not the author of the post. These issues are only now beginning to surface in the courts, and it is important to take steps to guard against exposure to these types of claims.
  • Content sharing platforms can engage issues such as trade-mark and copyright infringement. With thousands of videos, pictures, GIFs and logos promulgated each day through social media, companies may find themselves inadvertently running afoul of others’ intellectual property rights. In order to minimize this risk, it is important for companies to invest the time to educate their employees on the proper use of social media.

Corporate policies

As important as it is to conduct proper research and invest the time to protect your brand using social media, it is equally important that your employees are aware of the benefits and risks of using social media. Corporate policies aimed at curtailing the misuse of social media can have a lasting positive effect on a company’s image.

To avoid the feeling of social media overload, the key to success is proper research, careful analysis of the legal implications of social media, and measures to ensure that the corporation maximizes the value of social media while minimizing the risks it can bring.

A Recipe for Confusion

TTAB Denies Registration of “JAWS” for Online Cooking Show

Diving head first into the deep end, the Trademark Trial and Appeal Board (“TTAB”) recently decided whether a chef’s application to register “JAWS” for an online cooking channel should sink or swim. The precedential decision is useful for anyone wishing to learn more about the role that a famous trademark, such as the JAWS® film name in this case, can play in the “likelihood of confusion” analysis at the U.S. Patent and Trademark Office.

On an ex parte appeal, the TTAB refused to register two trademark applications for proposed marks JAWS and JAWS DEVOUR YOUR HUNGER. The applicant, a chef who uses the moniker “Mr. Recipe,” had applied to register the two marks for “Entertainment, namely, streaming of audiovisual material via an Internet channel providing programming related to cooking.”

The initial examining attorney and then the TTAB on appeal denied registration on the ground that the marks would create a likelihood of confusion with a prior JAWS trademark registration for the film. The JAWS registration at issue specifically covered “video recordings in all formats all featuring motion pictures.”

The TTAB reviewed evidence of the fame of the JAWS movie, commenting that “JAWS has permeated into general culture, including being parodied by filmmakers.” The applicant tried arguing that its mark JAWS DEVOUR YOUR HUNGER conveyed a meaning of food, which would distinguish the online cooking show from the film. The TTAB remained unpersuaded, however, opining that the mark would connote the “shark’s reputation as having a voracious appetite.”

In addition to analyzing the similarities of the marks and the fame of the film, the TTAB also found that video recordings and Internet streaming services were sufficiently related such that confusion could result. The TTAB reasoned that the consuming public “generally understands that video recordings of movies may be converted to a format that may be streamed over the Internet.”

Learning from Mr. Recipe’s Trademark Experiences

As the TTAB noted in its decision, the trademark prosecution process is restricted to the goods and services named on paper, and actual marketplace realities can alter the likelihood of confusion analysis. Nevertheless, the decision serves as a reminder that the question of confusion and trademark infringement can arise in unexpected situations. Furthermore, brand owners may be inclined to use puns and plays on other trademarks in their social media presence, but doing so can create the risk of a trademark infringement and/or dilution claim.

Owners of famous marks, meanwhile, may be able to enforce their mark against third-party uses on social media. As the TTAB recognized in the JAWS case, fame can increase the likelihood of confusion between a famous mark and a junior mark.

The U.S. also confers enhanced trademark protection to owners of famous marks under the federal trademark dilution cause of action. Notably, the dilution statute recognizes a defense for parody uses. In this case, given that the TTAB held that Mr. Recipe’s JAWS Internet cooking show would have created a likelihood of confusion with the film, it seems unlikely that a parody defense would have been successful in a dilution dispute before the TTAB.

In general, brand owners can help prevent potential conflicts and disputes by conducting clearance searches before adopting names and slogans, including those used for their online services, such as blogs, Internet videos, and social media profiles. A preliminary “knock-out” search can help avoid obvious disputes, and a comprehensive search can reduce the likelihood of prosecution issues when seeking protection under a federal U.S. trademark registration.


Social media users responsible for comments

The High Court of South Africa ruled in Isparta v Richter that a Facebook user was liable for defamation because a defamatory post appeared on his Facebook wall and was not removed by him, even though he was not the author of the post. The court ruled that because he knew of the post and “allowed his name to be coupled” with the author, he was as liable as the author.

This case began when a couple underwent an acrimonious divorce. After the divorce was finalised, the ex-husband’s new partner posted defamatory remarks on Facebook concerning the ex-wife and tagged the ex-husband in these posts. The court found that the ex-husband was liable for defamation because he had not taken down the posts. This judgment places a positive obligation on social media users to monitor the content of their social media accounts and to take positive steps to remove any objectionable material.

The five elements of defamation in South African law are:

  1. the wrongful and
  2. intentional
  3. publication of
  4. a defamatory statement
  5. concerning the plaintiff.

Publication may take the form of a positive act or, as was the situation in this case, a failure to act.

If you have a social media page or profile you should be aware that:

  • At common law in South Africa, a failure to speak up or to make all relevant facts known that results in a half-truth being propagated is sufficient to show that publication has taken place.

The reasoning in Isparta, although not explicit, seems to rely on this common law principle.

  • You are responsible for the content on your pages

The fact that comments are made by third parties may not assist in your defence.

  • You should regularly monitor your pages

Facebook has a setting that allows users to approve posts before they appear on a profile or page.

  • Respond to complaints

If you receive a complaint or a request to remove offending content, or such content appears on your pages, you should remove it immediately.

The above applies not only to defamatory content, but to any other form of objectionable speech, such as hate speech.
