Australia’s Social Media Ban for Under-16s: What Parents and Teens Need to Know

In a groundbreaking move that’s sparked global attention, Australia has passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, making it illegal for children under 16 to create or maintain accounts on major social media platforms.

This world-first legislation is being hailed as a bold step towards protecting youth mental health and holding tech giants accountable for the digital wellbeing of young Australians.

The new law requires platforms like TikTok, Instagram, Facebook, Snapchat, X (formerly Twitter), and Reddit to verify users’ ages and block access for anyone under the age of 16. Failure to comply could see companies fined up to AUD 49.5 million. It marks a significant shift: rather than placing the onus on parents and educators, the law demands real safeguards from the technology companies themselves.

Why Now? The Alarming Rise in Youth Mental Health Concerns

Behind this legislative push lies a disturbing trend: increasing evidence that social media is harming the mental health of young people. The Australian Institute of Family Studies reported that one in four adolescents aged 14 to 17 experienced cyberbullying in the past year alone. Excessive screen time – often upwards of seven hours per day – has been linked to anxiety, depression, disrupted sleep, and body image issues, particularly among teenage girls.

Julie Inman Grant, Australia’s eSafety Commissioner, said the new law reflects a growing urgency to act. “This legislation is about giving children the chance to enjoy a childhood and adolescence free from algorithmic manipulation, inappropriate content, and constant comparison,” she said in a recent blog post.

Inman Grant, a former tech executive herself, has been a long-time advocate for digital safety, arguing that “parents cannot go it alone” when it comes to online safety – tech companies must play a key role.

What Does the Law Require?

Under the new act, platforms must take “reasonable steps” to verify a user’s age before allowing access. This goes well beyond the familiar checkbox asking “Are you over 13?” Instead, companies are expected to roll out age estimation tools, government ID checks, or other age assurance technologies within 12 months – though the law prevents platforms from making government ID the only option.

The law doesn’t stop at account creation. If platforms fail to remove underage users already on their systems or fail to apply their policies uniformly, they face multimillion-dollar fines. The eSafety Commissioner has been granted new powers to investigate breaches and enforce these rules.

Can It Be Enforced? Challenges and Loopholes

As ambitious as the law is, enforcement is proving a technical and ethical minefield. Critics have raised concerns about privacy, data collection, and the effectiveness of biometric verification. Some platforms – like Discord – have started experimenting with age checks using facial recognition and ID uploads, which has led to mixed reactions from privacy advocates.

A bigger issue is circumvention. Teens are notoriously tech-savvy and may still find ways to bypass age gates using VPNs, fake accounts, or older relatives’ details. There’s also the question of scope: the law doesn’t cover encrypted messaging apps like WhatsApp or social gaming platforms, which many younger users frequent just as much as TikTok or Instagram.

Despite these challenges, supporters argue that this is a crucial first step. “The perfect should not be the enemy of the good,” says Professor Susan Sawyer from the University of Melbourne’s Centre for Adolescent Health. “Even if enforcement is difficult, this law sends a strong cultural message – that we value the wellbeing of our children more than platform profits.”

Guidance for Parents: What You Can Do Now

For parents, this new digital landscape may offer some relief – but it also requires action. The government isn’t expecting mums and dads to become cybersecurity experts overnight, but it is encouraging open, ongoing conversations with teens about digital maturity.

Here are a few practical tips for navigating the change:

  • Talk early and often: Explain why the law exists and what the risks of social media can be.
  • Use parental controls: Tools like Apple’s Screen Time or Google Family Link can help manage usage and monitor activity.
  • Encourage alternative platforms: For younger kids, consider age-appropriate platforms like Messenger Kids or moderated forums for hobbies and learning.
  • Model good behaviour: Children often mimic adult screen habits, so it’s worth reflecting on your own digital routines.

For more resources, the eSafety website has an excellent toolkit tailored specifically for parents and carers.

Will Other Countries Follow Suit?

Australia’s legislation is already influencing conversations overseas. The UK’s Online Safety Act 2023 introduced similar measures to protect children, but stopped short of implementing a strict age floor. In the United States, several states have proposed digital age verification laws, though none have gone as far as Australia’s national-level ban.

Regulators across Europe are closely monitoring how Australia’s law unfolds. If it proves enforceable and effective, it may become a template for future child protection policies in the digital space.

Over to You

What do you think about Australia’s decision to ban under-16s from social media? Is it a necessary safeguard or a case of government overreach? Are you a parent, teacher or teen affected by the change? I’d love to hear your thoughts – share your opinions, experiences or questions in the comments below.
