In December 2021, our post “Increased likelihood of US social media regulation” discussed Facebook whistleblower Frances Haugen and her call to hold social media platforms accountable for the potentially dangerous content that appears on their sites.
In February 2022, Haugen once again flagged Facebook’s algorithms as potentially harmful, but this time she was speaking outside the United States, before the Australian Parliament’s Select Committee on Social Media and Online Safety (the “Committee”).
At a public hearing before the Committee, Haugen argued that, outside of the United States, Facebook provides even less online abuse reporting and safety for its users. According to Haugen, non-United States governments allow Facebook to operate with less regulation, and as a result, Facebook underinvests in safety measures in those regions. Australia’s Department of Home Affairs corroborated Haugen’s allegations, sharing findings with the Committee that Facebook is often the most reluctant platform among the “big tech” companies to work with the Australian Government on online safety efforts.
Facebook disputes the claims.
Australia Takes Social Media Control into Its Own Hands
Haugen’s invitation to provide submissions before the Committee is part of a broader push by the Australian Government to develop a legislative and policy framework for better online safety. The commencement of the Online Safety Act 2021 (the “Act”) on 23 January 2022 is another major component of those efforts.
Complaints
The objects of the Act are to improve and promote online safety for Australians. According to Australia’s Federal Government, the Act creates a “world-first Adult Cyber Abuse Scheme”, which is intended to protect not only children, but also individuals 18 years and older who are the target of cyber-abuse material.
Under the Act, generally, social media users can make a complaint to Australia’s independent regulator for online safety, the eSafety Commissioner, if a reasonable person would conclude that the material in a post (for example, cyber-bullying, intimate images, or material that depicts violent conduct) would cause serious harm to an Australian adult or have a serious effect on an Australian child.
Pro-active regulation
The Act encourages pro-active action and behavior by establishing a wide-ranging set of basic online safety expectations. These expectations may be determined in a legislative instrument from time to time and must include the expectations that the provider of the service will:
- take reasonable steps to ensure that end users are able to use the service in a safe manner;
- take reasonable steps to minimize the extent to which offensive material is provided on the service;
- ensure that the service has clear and readily identifiable mechanisms that enable end‑users to report, and make complaints about, any offensive material; and
- ensure that the service has clear and readily identifiable mechanisms that enable end‑users to report, and make complaints about, breaches of the service’s terms of use.
The eSafety Commissioner may require service providers to report on their compliance with the applicable determination.
The Act also creates a framework for the establishment of mandatory, enforceable industry codes and standards.
Reactive regulation and enforcement
The Act gives the eSafety Commissioner a number of powers to investigate a complaint. These powers include information gathering powers and investigative powers to help reveal the identities behind accounts people use to conduct serious online abuse or to exchange illegal content.
In terms of reactive enforcement, the Act establishes three key notice regimes:
- Removal Notice. Generally, if a complaint is made in accordance with the Act, the eSafety Commissioner may give the social media service (or an end user, hosting service provider, relevant electronic service, or designated internet service, as relevant) a removal notice requiring the recipient to take all reasonable steps to remove the offensive material within 24 hours.
- Blocking Violent Content. Where the eSafety Commissioner identifies material that depicts or incites abhorrent violent conduct, the Commissioner can issue a blocking request (which is not enforceable) or a blocking notice. These powers allow the eSafety Commissioner to respond to crisis events such as the 2019 Christchurch terrorist attack.
- Link and App Deletion Notice. The provider of an internet search engine may be given a notice known as a ‘link deletion notice’ requiring the provider to cease providing a link to certain material. The provider of an app distribution service (e.g. an app store) may be given a notice known as an ‘app removal notice’ requiring the provider to cease enabling end users to download an app that facilitates the posting of certain material on a social media service.
The Act gives the Commissioner a range of enforcement options, including civil penalties, formal warnings, infringement notices, enforceable undertakings and injunctions.
Key takeaways
In summary, the eSafety Commissioner has substantial new powers under the Act which impact social media. The Commissioner’s expanded regulatory role includes:
- registering, regulating and enforcing industry codes;
- receiving complaints regarding cyber-bullying material and non-consensual sharing of intimate images;
- investigating and gathering information;
- issuing notices which direct online service providers to remove material;
- seeking civil penalties for failure to respond to notices; and
- seeking court orders to stop the provision of certain online services where there is a significant community risk.
Looking Forward
Whether the United States will follow Australia’s lead and tighten social media regulation in 2022 remains to be seen. In the meantime, online service providers should continue to review their current policies and procedures in anticipation of increased regulation in this space.