Australia’s under-16s social media ban is doing what critics said it couldn’t
Australia’s decision to bar under‑sixteens from mainstream social media platforms was widely dismissed as unrealistic.
Many predicted teenagers would lie about their age or use VPNs to get around the rules. A month on, early evidence shows the opposite. Platforms are blocking large numbers of underage accounts and raising barriers that did not exist before.
Meta alone removed more than half a million suspected under‑sixteen accounts across Instagram, Facebook and Threads in the first week after the ban took effect on 10 December. That does not signal failure. It shows that checks are finally being taken seriously and that younger users can no longer sign up with the ease they once did. Early feedback from families also suggests a broader benefit, with some children spending less time glued to their phones and more time on offline activities such as sport, hobbies and socialising.
Many parents have also welcomed the fact that responsibility for policing access to online spaces no longer rests solely with them. Under Australia’s framework, platforms such as TikTok, Snapchat and X must take reasonable steps to keep under‑sixteens out, facing fines of up to A$49.5 million if they fall short. For families, this creates backing that did not previously exist.
Critics point to children using VPNs, borrowed details, or alternative apps. But the aim is not perfection. It is to replace effortless access with something that requires intent and effort. For most teenagers, that change alone significantly reduces exposure.
However, traditional age checks remain part of the problem. Document uploads introduce friction for users and raise concerns about how and where sensitive data is stored. Facial recognition systems add friction of their own and are increasingly undermined by deepfakes and real‑time face swaps. These approaches risk wrongly blocking legitimate users while still allowing determined under‑sixteens to slip through.
Fortunately, a more practical option already exists. Mobile network age verification lets platforms check whether a user is over or under a required age without collecting documents or storing personal data. A platform sends a simple query to the mobile operator, receives an over‑16 or minor‑or‑unknown response, and grants or denies access accordingly. It is quick, private, and harder to fake.
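In pseudocode terms, the query‑and‑response flow described above might look like the following minimal sketch. The function names, response labels, and operator lookup here are illustrative assumptions for the sake of the example, not any real carrier or platform API:

```python
# Illustrative sketch only: a hypothetical operator-side age signal.
# The operator returns a binary answer, never a birth date or document,
# so the platform learns nothing beyond over/under the threshold.

OVER_16 = "over_16"
MINOR_OR_UNKNOWN = "minor_or_unknown"

def query_operator(msisdn: str, operator_records: dict) -> str:
    """Stand-in for the mobile operator's yes/no age check."""
    age = operator_records.get(msisdn)
    if age is not None and age >= 16:
        return OVER_16
    # Fail closed: an unknown number and an under-16 user
    # produce the same response, so absence of data denies access.
    return MINOR_OR_UNKNOWN

def grant_access(msisdn: str, operator_records: dict) -> bool:
    """Platform-side decision based solely on the binary signal."""
    return query_operator(msisdn, operator_records) == OVER_16

# Example with made-up records:
records = {"+61400000001": 17, "+61400000002": 14}
print(grant_access("+61400000001", records))  # over 16: allowed
print(grant_access("+61400000002", records))  # under 16: denied
print(grant_access("+61400000099", records))  # unknown: denied
```

The key design point the sketch illustrates is data minimisation: the platform never sees an age or identity document, only a pass/fail signal, and any uncertainty defaults to denial.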
This method can also support additional layers of verification when necessary. It can be linked to an active session to prevent hand‑me‑down logins and provide ongoing assurance with minimal disruption. It offers a realistic way for platforms to meet regulatory expectations without building complex new systems from scratch.
The UK is moving in the right direction with the Online Safety Act, but the framework is still shaped around moderating harmful content rather than preventing underage access. Australia shows how outcome‑led rules can complement the Act by setting clear expectations while allowing platforms to choose the least intrusive technology to meet them.
Australia’s model is not perfect, but it proves that tougher rules can drive real change when enforcement is clear. Mobile network verification should play a central role as the UK finalises its approach.
______________
John Wilkinson is CEO and Co-Founder of age verification specialist TMT ID
LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.
The views expressed are those of the authors and do not necessarily reflect the official LBC position.
To contact us email opinion@lbc.co.uk