Australia’s online watchdog has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s using their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and inadequate safeguards to prevent new underage accounts from being created. In its first compliance assessment since the prohibition came into force, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Revealed in First Major Review
Australia’s eSafety Commissioner has outlined a worrying pattern of non-compliance amongst the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to establish appropriate safeguards to prevent minors from using their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, highlighting that some platforms have permitted children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings mark a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has made clear that simply showing some children still hold accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to stop under-16s from opening accounts in the first place. This shift reflects the government’s commitment to hold tech giants responsible, with potential penalties looming for companies that fail to meet the legal requirements. Among the failings identified by the regulator were:
- Allowing previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same age assurance method with no repercussions
- Insufficient safeguards to prevent under-16s from creating new accounts
- Limited reporting tools for parents and members of the public
- Lack of transparent data about compliance actions and user account terminations
The Extent of the Challenge
The considerable scale of social media usage amongst young Australians highlights the regulatory challenge confronting both the government and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures point to extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to enforcing age restrictions have proved considerably more complex than expected, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities grappling with the fundamental question of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a wider issue about the readiness of companies to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing privacy concerns and the genuine difficulty of verifying age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to implement the legally mandated systems. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance infrastructure, or they risk facing substantial fines that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s implementation, Australian officials stated that 4.7 million accounts had been suspended or removed. Whilst this figure initially seemed to signal regulatory success, subsequent analysis reveals a more complex picture. The substantial number of account removals suggests that many under-16s had managed to establish accounts in the initial stages, demonstrating that preventive controls were inadequate. Additionally, the data raises doubts about whether suspended accounts constitute genuine enforcement or merely reflect users deleting their accounts voluntarily in light of the new restrictions.
The limited transparency around these figures has frustrated independent observers attempting to evaluate the ban’s true effectiveness. Platforms have disclosed scant detail about their compliance procedures, effectiveness metrics, or the nature of removed accounts. This absence of transparency makes it difficult for regulators and the general public to determine whether the ban is working as intended or whether teenagers are simply finding alternative ways to access social media. The Commissioner’s demand for thorough documentation of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to disclose complete details.
Sector Reaction and Pushback
The major tech platforms have responded to the regulator’s enforcement action with a mixture of assurances of compliance and scepticism about the ban’s practicality. Meta, which runs Facebook and Instagram, stressed its commitment to complying with Australian law whilst contending that accurate age determination remains a significant industry-wide challenge. The company has called for an alternative strategy, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This position reflects wider concerns across the industry that the current regulatory framework places an impractical burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had suspended 450,000 accounts since the ban took effect and saying it continues to lock more daily. However, sector analysts question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which traditionally depend on maximising user engagement and growth, and the regulatory requirement to actively exclude an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, citing privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for enforcement.
- Meta argues age verification ought to take place at app store level instead of on individual platforms
- Snap says it has locked 450,000 user accounts since the ban’s implementation in December
- Industry groups highlight privacy issues and technical challenges as barriers to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s overall effectiveness
Wider Considerations Regarding the Ban’s Impact
As Australia’s under-16 online platform ban enters its enforcement phase, fundamental questions remain about whether the legislation will accomplish its intended goals or merely push young users towards less regulated platforms. The regulator’s first compliance report shows that significant loopholes remain since implementation: children continue to find ways around age verification mechanisms, and platforms have struggled to prevent new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will truly leave major social networks or simply migrate to alternative services, encrypted messaging applications, or VPNs that conceal their location and help them evade age checks.
The ban’s international ramifications add another layer of complexity to assessments of its success. Countries such as the United Kingdom, Canada, and various European states are watching Australia’s experiment closely as they explore similar laws for their own populations. If the ban fails to reduce children’s social media usage or to protect them from harmful material, it could weaken the case for comparable regulations elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage participation, it may encourage other nations to adopt similar strategies. The outcome will likely shape international regulatory direction for years to come, ensuring Australia’s enforcement efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health campaigners and child safety organisations have championed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people: keeping friendships alive, accessing educational content, and engaging with online communities around shared interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families question.
The ban’s practical implications reach beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban inadvertently benefits large technology companies with the resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the straightforward goal of child protection.
What Lies Ahead for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a marked shift from passive oversight to proactive enforcement, marking a pivotal moment in the rollout of the youth access prohibition. The authority will now gather evidence to ascertain whether companies have failed to take “reasonable steps” to restrict child participation, a regulatory standard that goes beyond simply recording that young people remain on these platforms. This approach requires tangible verification that companies have introduced proper safeguards and processes designed to exclude minors. The Commissioner’s office has stated it will launch investigations methodically, building cases that could lead to significant fines for non-compliance. This shift from observation to enforcement demonstrates mounting concern with the companies’ present approach and indicates that voluntary participation alone will not be enough.
The enforcement phase raises significant questions about the adequacy of penalties and the practical mechanisms for holding tech giants accountable. Australia’s legislation provides enforcement mechanisms, but their efficacy depends on the eSafety Commissioner’s willingness to initiate formal proceedings and the platforms’ capacity to adapt meaningfully. International observers, especially regulators in Britain and Europe, will watch Australia’s implementation tactics and outcomes closely. A successful enforcement campaign could establish a blueprint for other nations considering equivalent prohibitions, whilst failure might weaken the entire regulatory framework. The forthcoming period will be critical in determining whether Australia’s groundbreaking legislation produces real safeguards for adolescents or becomes largely performative in its effect.
