As the telemedicine industry continues to grow, especially in light of COVID-19, businesses should reconsider their policies and procedures in connection with telehealth services and user safety.

Notably, Facebook recently responded to the growing use of telemedicine by amending its policies with respect to advertisements by telemedicine companies for prescription drugs. The new policy, which according to Facebook is intended to ban the promotion of illicit drugs and unsafe substances, goes into effect on August 25, 2021.

Have you considered what you would like to happen to your social media accounts when you die? Where the platform gives you options, have you selected one? A while ago we wrote about what happens to your social media account when you die.

Many platforms, including Facebook, Twitter, Instagram and LinkedIn, have different policies about what will happen to a deceased person’s profile. Since our last post, some of these policies have changed. Here is the current status as of the date of this post:

June 2017 ended with the German parliament approving a bill aimed at eliminating hate speech and fake news on social media platforms, including Facebook, YouTube, and Google. The law will take effect in October 2017 and provides for fines of up to EUR 50 million.

We previously discussed the bill in this blog post. Now that the bill has been passed into law, social media companies are required to remove illegal hate speech within 24 hours of receiving a notification or complaint, and to block other offensive content within seven days. The law also requires social media companies to report, every six months, the number of complaints they have received and how they have dealt with them.

In South Africa, employees are under the mistaken belief that what they do in their time away from the office, specifically on social media, is private and beyond the reach of their employer’s control.

They fail to consider that they could face disciplinary action for their online rants and comments. This could be fatal to their employment. The reality is that with the escalating use of social media during working hours as well as outside of company time, employees are regularly coming under fire for what they post online.

Each year Harvard University, one of the world’s most prestigious universities, receives over 30,000 applications from prospective students for about 2,000 places in its first-year class. Recently, ten of those successful applicants, due to graduate in 2021, had their offers of admission revoked before they had set foot on campus.  The reason?  The offensive memes they had shared in a private Facebook group, which at one stage had been named “Harvard memes for horny bourgeois teens”.

A 37-year-old woman from Nottingham has lost a claim for future pain and suffering after a hospital failed to notify her of a positive test result for a sexually transmitted infection, with the result that the infection was left untreated for a year.

The German Justice Ministry has introduced a draft law that would impose fines of up to €50 million on social media companies that fail to remove hate speech and other illegal content from their platforms quickly.

The fines would be imposed whenever social media companies do not remove online threats, hate speech, or slanderous fake news. Any obviously illegal content would have to be deleted within 24 hours; reported material that is later determined to be illegal would have to be removed within seven days.

With the proliferation of so-called “fake news”, companies are starting to rely on third-party organizations to perform a fact-checking function in order to distinguish legitimate news from fake news. The fake news epidemic gained prominence during the recent US presidential election.  We have previously written about the fake news problem, as well as the UK Government’s plan to tackle the issue.

While fake news began as false information disguised as reporting from legitimate news sources, both the problem of fake news and the question of what constitutes it are becoming more complicated and nuanced.

In the past, concerns regarding news focussed on traditional media (typically newspapers and broadcasters) and the role they played in curating and controlling public information and sentiment. In recent months, the focus has shifted to the distribution of news on the internet and social media and the so-called ‘fake news’ problem.