Online video celebrity Chrissy Chambers has recently settled a case against her ex-partner for damages suffered as a result of his posting sexually explicit video clips online. The terms of the settlement are confidential, but this and several other high-profile cases have generated much publicity around social media platforms’ responsibility to monitor and remove harmful or abusive content.

Harmful content takes a number of forms, including cyberbullying, threats of violence, hate speech and even “revenge porn” (the sharing, usually on a public platform, of intimate photos or videos of a person without their consent).

Corporate reputations are delicate things, and employees can be your best ambassadors, but they can also be your worst. If an employee posts harmful content, immediate action must be taken to see that the offending material is taken down and that the employee is appropriately dealt with. Being seen to tolerate harmful content (or even something like a breach of copyright) is bad for business, perhaps very bad.

How do social media platforms handle harmful content?

Most social media platforms do not tolerate harmful or abusive content and provide a mechanism for users to report it. For example, some platforms will remove content that violates their guidelines and will age-restrict (or “age-gate”) content that is not suitable for children.

Many social media sites permit users to flag inappropriate content, which is then assessed against the site’s standards. Other platforms have developed tools to proactively combat harmful content, particularly the sharing of intimate images of someone without their consent. Image matching technology is used to identify copies or near-duplicates of reported images and prevent them from being shared again.
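
As an illustration of the principle only (a simplified sketch, not any platform’s actual system; the file names and matching threshold below are hypothetical), an image can be reduced to a compact perceptual fingerprint and compared against the fingerprints of previously reported content:

    # Simplified "average hash" image matching, assuming the Pillow library is installed
    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink the image to a tiny grayscale grid and record which pixels are
        # brighter than the average: a compact perceptual fingerprint.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return [1 if p > avg else 0 for p in pixels]

    def hamming_distance(a, b):
        # Count differing bits; a small distance means the images look alike.
        return sum(x != y for x, y in zip(a, b))

    # Hypothetical usage: compare a new upload against previously reported content.
    reported = average_hash("reported_image.jpg")
    uploaded = average_hash("new_upload.jpg")
    if hamming_distance(reported, uploaded) <= 5:  # threshold chosen for illustration
        print("Near-duplicate of reported content; block or send for review")

Production systems use far more robust matching that tolerates cropping, resizing and re-encoding, but the principle is the same: a match against a known fingerprint allows the upload to be blocked or routed for review.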

Are social media platforms legally required to remove harmful content?

Although most social media platforms do not tolerate harmful or abusive content, whether they are legally obliged to remove it depends on the jurisdiction. For example:

  • In Germany, social media platforms with over 2 million registered users must, in terms of the Network Enforcement Act, remove content which is “clearly illegal” within 24 hours of receiving a user complaint. If the content is not clearly illegal, the platform must investigate the complaint and, if the content is found to be illegal, remove it within 7 days. Germany’s top court ruled in 2014 that an ex-partner must delete intimate images on request, so it is likely that social media platforms would be required to do the same.
  • South Africa’s draft Cybercrimes and Cybersecurity Bill criminalises the distribution of certain types of harmful or abusive content. It provides for a victim to obtain a court order, pending finalisation of the criminal proceedings, prohibiting the offender from continuing to make the content available and requiring the administrator of the applicable system to remove or disable access to it.

These laws typically assist victims only if the social media platform is located in their jurisdiction.