Social media: What happens to your account when you die?

Will you instruct your executor to memorialise or close your Facebook account, or will you sign up to DeadSocial to post goodbye messages posthumously? The US government has created guidelines for dealing with your digital afterlife, including a template social media will. Its first guideline is to read the terms and privacy policies of the various social media websites. Let us break it down for you…

Twitter:

  • Will deactivate a deceased’s account on request by a person authorised to act on behalf of the estate or by a verified immediate family member.
  • Requires the following information to be sent by fax or post to its head office in San Francisco in order to process the request:
    • Deceased’s username (for example, @fake_username);
    • Death certificate;
    • Deceased’s identity document; and
    • A signed statement from the requester which includes:
      • First and last name;
      • Email address;
      • Contact information;
      • Relationship to the deceased;
      • Action required (i.e. “Please deactivate Twitter account”);
      • Evidence that the account belongs to the deceased (if the name of the account does not match the name on the death certificate); and
      • A link to an online obituary or a copy of a newspaper obituary (optional).
  • May contact the requester via email if further information is required.
  • Also accepts requests from immediate family members and other authorised individuals for the removal of images or videos of deceased individuals taken from the point of critical injury to the moments before or after death. Requests to remove images or video can be sent to privacy@twitter.com. Twitter considers public interest factors (such as the newsworthiness of the content) when reviewing such requests and may not be able to honour every request.
  • May also deactivate inactive accounts after six months.

Facebook:

  • May memorialise the account of a deceased person (i.e. the deceased’s timeline is kept on Facebook but access and some features are limited). Anyone can make the request to memorialise a profile; it does not have to be a family member or executor.
  • Requires the following information in order to memorialise an account:
    • Full name of deceased person as it is listed on the account;
    • A link to the Timeline, so that Facebook can check it does not memorialise the account of someone with the same name;
    • Email addresses listed on the account;
    • Requester’s relationship to the deceased person;
    • Date of death;
    • Proof of death (for example, a link to an obituary or news article);
    • Requested action (i.e. to memorialise account); and
    • Requester’s contact email.
  • May process certain special requests, including requests to close an account, on receipt of a formal request from a verified family member or an executor of the deceased’s estate. Closing an account will completely remove the deceased’s profile and all associated content from Facebook so that no-one can view it.
  • Requires verification that the requester for special requests is an immediate family member or executor and will not process the request if they are unable to verify the requester’s relationship to the deceased person. Verification includes:
    • Providing the deceased person’s birth certificate;
    • Providing the deceased person’s death certificate;
    • Providing proof under local law that the requester is the lawful representative of the deceased person or their estate.
  • Will not provide anyone with login information to the deceased’s account.

LinkedIn:

  • May close a deceased person’s account and remove their profile on receipt of a request (anyone can make the request to remove the profile; it does not have to be a family member or executor). To do this, the requester needs to answer some questions about the deceased by filling in a form via DocuSign. The form requires the following information:
    • Deceased’s name;
    • The company the deceased worked for;
    • The requester’s relationship to the deceased;
    • Link to the deceased’s profile; and
    • Deceased’s email address (if possible).
  • May, in closing an account, remove content associated with that account, for example, recommendations posted by the deceased on other profiles. There is no option to “memorialise” an account.
  • If the account is not closed, the deceased’s LinkedIn connections will continue to receive notifications to endorse the deceased or to congratulate the deceased on a work anniversary or birthday. Users have reported that even the profiles of closed accounts of deceased persons continue to appear in “people you may know” notifications.

[Terms as at 27 August 2014]

No Secret in Brazil

The popular social networking app “Secret” has reportedly been temporarily enjoined in Brazil. A civil court in Brazil ordered both Google and Apple to remove Secret from their respective app stores, and to pull the apps from the phones of their users.

In its opinion, the Brazilian court raised concerns that Secret’s anonymity feature can become an instrument of cyberbullying. Indeed, the case was brought by a young marketing consultant who complained of intimate photos of himself being shared anonymously, leaving him no legal recourse.

Brazil is not the only jurisdiction dealing with anonymity issues related to social media. In the United States, companies have run into problems bringing defamation suits against authors of anonymous postings. But in at least one case, a Virginia court pierced the anonymity of Yelp reviews, forcing Yelp to identify the users who posted negative reviews of the plaintiff.

Read other Social Media Law Bulletin posts on anonymous postings.

 

Seth Jaffe (seth.jaffe@nortonrosefulbright.com / +1 713 651 5370) is an associate in Norton Rose Fulbright’s Intellectual Property Practice Group.

Social media ads and physical injury

Can social media ads lead a court to hold a business responsible for a physical assault that occurred after the customer left the business’ premises?  On July 9, 2014, a federal trial court in Pennsylvania ruled in Paynton v Spuds that the restaurant’s marketing and ads on Facebook were a “significant” factor in denying the business’ motion for summary judgment.

The case involved a restaurant located in a college town. The restaurant, Spuds, did not serve alcohol but was located near several establishments that did. The restaurant, like many businesses, chose to advertise around local events. For example, the restaurant advertised on Facebook: “Don’t forget to come get some Spuds after dollar drinks tonight! We are open until 3 AM!”

One late night during college homecoming, the plaintiff and his companions were subjected to abusive behavior by several defendants in the restaurant.  The trial court found that the videotape surveillance was inconclusive as to whether the Spuds staff condoned this behavior.  The plaintiff claimed he feared for his safety and left the restaurant.  He was attacked two storefronts away, resulting in a traumatic brain injury and permanent damage to his right eye.

In this civil action, defendant Spuds moved for summary judgment based on the general principle that a business is not responsible for conduct that takes place off its property.  The court denied the motion, stating that the assault “might credibly be related to events occurring inside Spuds.”  Moreover, “I place significant weight on the nature of Spuds’ business, that is, marketing itself as a destination for young people after a night of consuming alcohol at nearby bars.”

Does your business advertise in connection with events where alcohol can be served or other potentially dangerous conduct can occur?  Have you reviewed your social media ads lately?

 

Sue Ross (susan.ross@nortonrosefulbright.com / +1 212 318 3280) is a lawyer in Norton Rose Fulbright’s US intellectual property practice.

Introducing our social media Glossary of law

Norton Rose Fulbright’s Social Media Bulletin is proud to offer a Glossary of Law describing various statutes involved in social media issues. The Glossary includes links to our respective blog posts on the statutes.

Take a look at our Glossary Section for more information and a list of these statutes.


Seth Jaffe (seth.jaffe@nortonrosefulbright.com / +1 713 651 5370) is an associate in Norton Rose Fulbright’s Intellectual Property Practice Group.

eDiscovery and private social media accounts

The United States District Court for the District of Kansas recently clarified the scope of discoverable information from private social media accounts.

In Stonebarger v. Union Pacific Corp., a wrongful death case where plaintiffs were seeking to recover damages for the deaths of two individuals killed in a collision, the defendants requested two types of information from plaintiffs’ social media accounts: (1) all account data for each plaintiff’s Facebook page, and (2) all photographs posted, uploaded or otherwise added to any social networking sites or blogs since the date of the accident.

Plaintiffs asserted various objections (including relevancy, overbreadth, and invasion of privacy), though they offered to produce requested information from their “public” Facebook postings.

Surveying a few other cases discussing the discoverability of social media, the court ultimately took the middle ground. It held that permitting discovery of all social media went too far, but given that the plaintiffs had put their emotional states at issue, limited discovery was permitted into private Facebook postings relevant to that issue. This approach, in the court’s view, provided an appropriate balance between allowing the defendant to discover information relevant to the plaintiffs’ claims while also protecting Plaintiffs from a fishing expedition. We previously covered a New York state court opinion that reached a similar result.


This article was prepared by James V. Leito IV
(james.leito@nortonrosefulbright.com / +1 214 855 8004), an associate in Norton Rose Fulbright’s litigation practice group.

Snapchat and Maryland Attorney General

We had previously written about the U.S. Federal Trade Commission’s proposed complaint and consent with mobile messaging service Snapchat, best known for promoting its “ephemeral” photo messaging app. The FTC alleged that Snapchat violated the Federal Trade Commission Act through six false or deceptive acts or practices, including Snapchat’s claim that messages can “disappear forever.” Under the proposed FTC consent, Snapchat does not admit or deny any liability. If the consent is approved, to settle the matter Snapchat would be:

  • Prohibited from misrepresenting its products and services and treatment of personal information, or their privacy and security; and
  • Required to implement a comprehensive privacy program that is subject to a third-party audit every 2 years for the next 20 years.

On June 12, 2014, the Maryland Attorney General issued a press release stating that it had entered into a settlement agreement with Snapchat. (The settlement agreement is not publicly available.) Similar to the FTC’s claims, the Maryland Attorney General alleged that Snapchat misled consumers by representing that the messages were only temporary, and that Snapchat collected names and phone numbers from users’ electronic contact lists—a practice not always disclosed to consumers and to which consumers did not always consent. Unlike the FTC, the Maryland Attorney General also claimed that Snapchat was aware that some of its users were under age 13 and had therefore failed to comply with the Children’s Online Privacy Protection Act (COPPA).

Similar to the FTC’s proposed consent, Snapchat did not accept any liability in settling with the Maryland Attorney General. The settlement, like the FTC’s proposed consent, prohibits Snapchat from making false representations relating to its app, and from misrepresenting the temporary nature of the messages. Unlike the FTC’s proposed consent, the Maryland Attorney General settlement requires Snapchat to make affirmative disclosures to users that the recipients of messages have the ability to capture or copy the messages they receive. The Maryland Attorney General’s settlement also requires Snapchat to obtain affirmative consent from consumers before it collects and saves contact information from address books. In addition, Snapchat has agreed, for a period of 10 years, to take specific steps to “ensure children under the age of 13 are not creating Snapchat accounts.” Finally, Snapchat agreed to pay Maryland $100,000.

The FTC has no general authority to issue fines; it can do so only where a specific law or regulation (like COPPA) authorizes them or where a company violates a consent agreement. As this matter illustrates, state attorneys general typically do not have such limitations on their authority.


Sue Ross (susan.ross@nortonrosefulbright.com / +1 212 318 3280) is a lawyer in Norton Rose Fulbright’s US intellectual property practice.

Employer potentially liable for disability discrimination after Facebook comment

On June 23, 2014, in Shoun v. Best Formed Plastics, a federal judge declined to dismiss a lawsuit alleging that the plaintiff’s former employer, Best Formed Plastics, violated the Americans With Disabilities Act after it wrongfully disclosed his confidential medical information via a Facebook post.

While employed at Best, the plaintiff, George Shoun, spent several months away from work recovering from a shoulder injury he sustained on the job. An employee in Best’s human resources department handled the plaintiff’s workers’ compensation claim. Five days after Shoun filed an ADA lawsuit against Best, the human resources employee took to her personal Facebook page and posted:

“Isn’t [it] amazing how Jimmy experienced a 5 way heart bypass just one month ago and is back to work, especially when you consider George Shoun’s shoulder injury kept him away from work for 11 months and now he is trying to sue us.”

Shoun then filed an amended complaint, claiming that the post was a deliberate disclosure of his medical condition in violation of the ADA. Shoun alleged that the post contained information the employee learned in her capacity as a claims processor for Best, where she was responsible for monitoring and reporting Shoun’s medical treatment over a seven-month period.

Section 102 of the ADA provides that any information related to an employee’s medical condition obtained through an employment-related medical examination or inquiry must be treated by the employer as a confidential medical record. The confidentiality provisions are not invoked, however, where the employee volunteers their medical information to their employer or a co-worker outside the context of an employment-related medical inquiry.

Best argued that the lawsuit should be dismissed because Shoun had voluntarily disclosed his medical information to the public when he filed his ADA lawsuit, which is a public record. The court ruled that this argument was inappropriate for a motion to dismiss. It reasoned that whether the human resources employee gained knowledge of Shoun’s medical condition solely within the context of his workers’ compensation medical examination (which would be confidential) or through Shoun’s voluntary disclosure in the lawsuit (which would not be confidential) was a fact question for the jury. Therefore, Shoun may continue to pursue his claims.

This case serves as a good reminder of the importance of regular training on this issue. Here, one individual’s comments on her “personal” social media page have entangled a company in a federal lawsuit. Companies can better insulate themselves from such claims if employees understand that they have a duty to protect employee personnel information (particularly medical information) learned through the course of their duties, the laws that impose that duty, and the extension of that duty beyond the office walls and into cyberspace.


Heather Sherrod (heather.sherrod@nortonrosefulbright.com / +1 713 651 5163) is a lawyer in the US employment and labor practice.

A two pronged prescription: The FDA releases new social media guidelines

The U.S. Food and Drug Administration (“FDA”) released two new sets of guidance regarding the use of social media to disseminate information about prescription drugs and medical devices.  This guidance supplements years of FDA warning letters and untitled letters sent to manufacturers, packers or distributors in regulated industries (each a “Regulated Entity”).  We have previously discussed the FDA letters on this blog.

In the first prong of its social media guidance, the FDA provided new “Twitter Rules.” For Internet and social media platforms with character space limitations (e.g., Twitter, Google AdWords and the paid search results links on Yahoo! and Bing), the Regulated Entity must present risk and benefit information for prescription drugs or medical devices (U.S. Department of Health and Human Services Food and Drug Administration, Guidance for Industry Internet/Social Media Platforms with Character Space Limitations — Presenting Risk and Benefit Information for Prescription Drugs and Medical Devices (June 2014)). In the guidance, the FDA reminds Regulated Entities that the “benefit claims in product promotions should be balanced with risk information.” That is, for every statement of a drug or device benefit, the benefit description should:

  • be accurate and not misleading;
  • include material facts about the indications and usage of the product; and
  • be accompanied by risk information disclosing, at a minimum, the most serious risks associated with the product (Id., pages 5 and 9).

In addition to balanced drug or device descriptions, the description should provide “direct access” (e.g., a URL) to a more complete discussion of the associated risks (Id., page 8).

The FDA recommends that Regulated Entities “carefully consider the complexity of the indication and risk profiles for each of their products to determine whether a character-space-limited platform is a viable promotional tool for a particular product” (Id., page 4). In other words, the FDA is signaling that the agency believes online interactions having a limited character space may not be suitable for discussing the benefits and risks of drugs and devices, and as such, the FDA may show less flexibility when objecting to behavior on platforms with character space limitations, both in present-day enforcement and in future guidelines.

In the second prong of its social media guidance, the FDA addressed the murkier topic of misinformation made public by unrelated third parties (“User-Generated Content” or “UGC”) as part of a conversational media forum (e.g., Facebook, Twitter, Tumblr, etc.) regarding a Regulated Entity’s own prescription drugs and medical devices (U.S. Department of Health and Human Services Food and Drug Administration, Guidance for Industry Internet/Social Media Platforms: Correcting Independent Third-Party Misinformation About Prescription Drugs and Medical Devices (June 2014)). If a Regulated Entity chooses to correct erroneous UGC, any “appropriate corrective action” should “address all misinformation in a clearly defined portion of a forum on the Internet or social media, whether the misinformation is positive or negative.” To qualify as “appropriate corrective action,” the Regulated Entity’s corrections should meet the following criteria (Id., pages 5 and 6):

  • Be relevant and responsive to the misinformation;
  • Be limited and tailored to the misinformation;
  • Be non-promotional in nature, tone, and presentation;
  • Be accurate;
  • Be consistent with the FDA-required labeling for the product;
  • Be supported by sufficient evidence, including substantial evidence, when appropriate, for prescription drugs;
  • Either be posted in conjunction with the misinformation in the same area or forum (if posted directly to the forum by the firm), or should reference the misinformation and be intended to be posted in conjunction with the misinformation (if provided to the forum operator or author); and
  • Disclose that the person providing the corrective information is affiliated with the firm that manufactures, packs, or distributes the product.

Appropriate corrective action may be provided directly on the conversational media forum itself, when feasible (Id., page 7), or to the independent author for the author to incorporate (Id., page 8); alternatively, the Regulated Entity may request that the author or site administrator modify or remove the misinformation (Id.). It is important to note that this guidance does not mandate that a Regulated Entity actively police all UGC available on the Internet. Instead, it outlines the process by which a Regulated Entity may go about correcting misinformation in conversational media.

As a reward for staying within the boundaries of this guidance, the FDA “does not intend” to enforce the more stringent standard requirements related to labeling or advertising (Id., page 9). If a Regulated Entity chooses to provide information outside the scope of this guidance, however, it will be subject to the aforementioned labeling and advertising requirements.


Jay Greathouse (jay.greathouse@nortonrosefulbright.com / +1 210 270 7155) is a lawyer in Norton Rose Fulbright’s San Antonio Corporate, M&A and securities practice.

Online abuse

We have previously written about the difficulties that businesses can encounter when trying to get a posting removed from a social media site. We have also previously covered several matters relating to the special online protections legislators have created for children (e.g., COPPA).

These two areas combined in a local law passed by Albany, New York relating to cyberbullying. Bullying of children has long existed in schools, but advances in technology, including social media, have expanded both the frequency and extent of the problem. Albany addressed the issue with Albany County Local Law No. 11 of 2010, which made cyberbullying a criminal offense if committed against “any minor or person”—which included corporations. The law defined “cyberbullying” very broadly:

Any act of communicating or causing a communication to be sent by mechanical or electronic means, including posting statements on the internet or through a computer or email network, disseminating embarrassing or sexually explicit photographs; disseminating private, personal, false or sexual information, or sending hate mail, with no legitimate private, personal or public purpose, with the intent to harass, annoy, threaten, abuse, taunt, intimidate, torment, humiliate, or otherwise inflict significant emotional harm on another person.

Violations could be punished by up to one year in jail and a $1,000 fine.

In December of 2010, after the local law was in effect, a 15-year-old high school student created a Facebook page and posted sexual information about fellow classmates. He was charged with violating the law, and entered a guilty plea on the condition that he reserved the right to challenge the constitutionality of the law.

On July 1, 2014, New York’s highest court ruled, 5-2, that the law was too broad and violated the First Amendment to the U.S. Constitution. Although Albany conceded that the law was overbroad, the county asked the court to sever the overbroad language and to leave the remainder of the law intact. The court refused, stating that severing the problematic provisions was not permissible in this instance because the court would encroach upon the county’s legislative authority.

The court agreed with the county that its motivation to protect children from cyberbullying was a permissible purpose for the county, and that the defendant’s Facebook communications were “repulsive and harmful to the subjects of his rants and potentially created a risk of physical or emotional injury based on the private nature of the comments.”  Nevertheless, the court dismissed the charges against the teenager because the law was “overbroad and facially invalid under the Free Speech Clause of the First Amendment.”  A county executive has stated that he will work with the legislature to propose new legislation that addresses the court’s concerns.


Sue Ross (susan.ross@nortonrosefulbright.com / +1 212 318 3280) is a lawyer in Norton Rose Fulbright’s US intellectual property practice.

Google – Hanginout in court

In November 2013, Hanginout, Inc. (“Hanginout”) filed a lawsuit against Google Inc. (“Google”) alleging, among other things, that Google had infringed on Hanginout’s HANGINOUT mark.

Hanginout, a Virginia-based social media company, uses its HANGINOUT mark for its interactive video response platform, which enables its users to create, promote, and sell their own brands by engaging directly with potential customers via pre-recorded video messages and/or video profiles. Google alleged that by 2009, it had already developed an internal version of what would become its HANGOUTS product. Google made its HANGOUTS product accessible to the public as part of its Google+ platform in June 2011. Google’s HANGOUTS product allows users to engage in live interactions with other users, including instant messaging and real-time video-conferencing. Neither Hanginout’s HANGINOUT mark nor Google’s HANGOUTS mark is a registered trademark; both are still the subject of pending trademark applications with the U.S. Patent and Trademark Office (“USPTO”).

In its action before the U.S. District Court for the Southern District of California, Hanginout moved for a preliminary injunction in January 2014.

However, the district court stated that establishing priority of use is not in and of itself sufficient to bestow common law trademark rights. To warrant a preliminary injunction, Hanginout must also establish sufficient market penetration in a specific geographic area.

In support of its argument that it had sufficient market penetration, Hanginout asserted:

  1. it had a Facebook profile by March 2011;
  2. by May 2011, over 200 consumers had registered for and used Version 1.0 of the HANGINOUT web based platform;
  3. from September 15, 2012 through December 23, 2013, the HANGINOUT smart phone application had 30,000 visits from California consumers, 7,000 visits from New York consumers, 3,500 visits from Florida consumers, 2,700 visits from Michigan consumers, 2,600 visits from Texas consumers, and a varying number of visits from consumers in the remaining states;
  4. nearly 8,000 individuals had registered for the HANGINOUT iTunes application by the filing of the preliminary injunction; and
  5. various celebrities and politicians have created HANGINOUT accounts and published content utilizing the HANGINOUT platform.

The court did not find this evidence persuasive and found that Hanginout’s failure to present any evidence as to the actual location of its registered users or sales was dispositive. Further, the court noted a dramatic decline in the overall number of views of the HANGINOUT application. Therefore, the court denied Hanginout’s motion, finding that Hanginout had not established the market penetration required to enforce its common law trademark under the Lanham Act.

While the court ultimately denied Hanginout’s motion for a preliminary injunction, it did find that Hanginout was the senior user of the mark. There was a dispute as to whether Google began use of HANGOUTS in June 2011 or not until May 2013; however, the court determined that the issue did not need to be resolved, because Hanginout began use of its HANGINOUT mark in commerce in or around May 2011, prior to both of Google’s alleged first use dates.


This article was prepared by Shelby Bruce (shelby.bruce@nortonrosefulbright.com / +1 612 321 2207), a lawyer in Norton Rose Fulbright’s Minneapolis intellectual property practice.
