Australia’s online watchdog has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting practices such as allowing banned users to repeatedly attempt age verification, alongside inadequate safeguards to prevent new accounts being created. In its first compliance report since the prohibition came into force, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Regulatory Breaches Exposed in First Large-scale Review
Australia’s eSafety Commissioner has detailed a worrying pattern of non-compliance among the world’s most prominent social media platforms in her first formal review since the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively neglected to establish adequate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification systems, noting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings mark a notable intensification in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has made clear that merely demonstrating some children still maintain accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift signals the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that fail to meet their statutory obligations. Among the deficiencies identified in the report were:
- Permitting formerly banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequence
- Insufficient safeguards to prevent under-16s from creating new accounts
- Limited reporting tools for families and the wider community
- Lack of transparent data about regulatory measures and user account terminations
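The first two deficiencies above stem from treating each verification attempt as independent, so a rejected user can simply retry with a different birth date. As a purely illustrative sketch (the `AgeGate` class, its method names, and the in-memory store are invented for this example and do not describe any platform’s actual system), a binding self-declaration might look like:

```python
# Hypothetical sketch: once a user self-declares as under 16, record that
# declaration so later attempts with a different birth date are rejected,
# closing the "retry until you pass" loophole described in the report.
from datetime import date

MIN_AGE = 16


class AgeGate:
    def __init__(self):
        # User IDs that have ever declared an under-16 birth date.
        self._declared_under_16 = set()

    def check(self, user_id: str, birth_date: date, today: date) -> bool:
        """Return True if the user may proceed, False if blocked."""
        if user_id in self._declared_under_16:
            return False  # an earlier under-16 declaration is binding

        # Age in whole years, accounting for whether the birthday has
        # occurred yet this year.
        age = today.year - birth_date.year - (
            (today.month, today.day) < (birth_date.month, birth_date.day)
        )
        if age < MIN_AGE:
            self._declared_under_16.add(user_id)
            return False
        return True
```

A real deployment would need durable storage and some way to tie declarations to a person rather than an account, which is exactly the hard problem the platforms cite; the sketch only shows why statelessly re-running the same check offers no protection.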
The Magnitude of the Problem
The substantial scale of social media usage amongst young Australians underscores the compliance challenge confronting both the authorities and the platforms themselves. With millions of accounts already restricted or removed since the ban’s implementation, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings indicate that the technical and procedural obstacles to enforcing age restrictions have proven far more complex than expected, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities wrestling with the core question of whether current age verification technologies are fit for purpose.
Beyond the operational challenges lies a broader concern about the willingness of platforms to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing data protection worries and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance systems, or they risk substantial fines that could transform their operations in Australia and possibly affect compliance frameworks internationally.
What the Figures Indicate
In the first month after the ban’s implementation, Australian regulators reported that 4.7 million accounts had been suspended or deleted. Whilst this statistic initially seemed to demonstrate enforcement effectiveness, subsequent analysis reveals a more nuanced picture. The sheer volume of account deletions indicates that many under-16s had successfully created accounts in the first place, revealing that preventative measures were inadequate. Furthermore, the data raises doubts about whether suspended accounts constitute genuine compliance or merely users voluntarily closing their accounts in response to the new restrictions.
The limited transparency around these figures has troubled independent observers attempting to evaluate the ban’s actual effectiveness. Platforms have revealed little data about their implementation approaches, performance indicators, or the characteristics of deleted profiles. This lack of clarity makes it challenging for regulators and the public to determine whether the ban is operating as planned or whether young people are merely discovering alternative ways to use social media. The Commissioner’s demand for detailed evidence of systematic compliance measures reflects growing frustration with platforms’ reluctance to disclose full information.
Sector Reaction and Opposition
The social media giants have responded to the regulator’s enforcement action with a combination of assurances of compliance and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, stressed its dedication to adhering to Australian law whilst simultaneously contending that accurate age determination remains a major challenge across the industry. The company has advocated an alternative strategy, proposing that robust age verification systems and parental consent requirements implemented at the app store level would be more effective than enforcement at the platform level. This position reflects broader industry concerns that the existing regulatory system places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, stating that it had suspended 450,000 accounts since the ban took effect and that it continues to suspend more each day. However, industry observers question whether such figures reflect genuine compliance or merely reactive account management. The fundamental tension between platforms’ commercial models, which have historically relied on maximising user engagement and growth, and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have consistently opposed rigorous age verification methods, citing privacy issues and technical constraints, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta maintains age verification should occur at app store level rather than on individual platforms
- Snap says it has suspended 450,000 accounts since the ban’s implementation in December
- Industry groups point to privacy issues and technical challenges as impediments to effective age verification
- Platforms assert they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Efficacy
As Australia’s under-16 social media ban enters its implementation stage, fundamental questions remain about whether the legislation will achieve its stated objectives or merely push young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, significant loopholes remain: children continue finding ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will genuinely abandon mainstream platforms or simply shift towards alternative platforms, encrypted messaging applications, or virtual private networks used to disguise their location.
The ban’s global implications add to the complexity of assessing its impact. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s initiative closely, evaluating similar regulatory measures for their own populations. If the ban fails to reduce children’s social media usage or to protect them from dangerous online content, it could damage the case for equivalent legislation elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage participation, it may inspire other governments to adopt similar strategies. The outcome will probably shape international regulatory direction for years to come, ensuring that Australia’s efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health advocates and organisations focused on child safety have backed the ban as an essential measure to counter algorithmic manipulation and contact with harmful content. Parents and educators argue that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, obtaining educational material, and participating in online communities built around common interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s concrete implications extend beyond individual users to content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban unintentionally benefits large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends far beyond the simple goal of child protection.
What Follows for Enforcement
Australia’s eSafety Commissioner has announced a significant shift from hands-off observation to direct intervention, marking a critical turning point in the rollout of the age restriction. The regulator will now gather evidence to establish whether companies have failed to take “reasonable steps” to prevent underage access, a statutory benchmark that goes beyond simply documenting that minors continue using these platforms. This approach demands concrete evidence that organisations have implemented proper safeguards and procedures designed to keep out minors. The regulatory body has stated it will launch investigations methodically, building evidence that could lead to significant fines for non-compliance. This move from observation to enforcement reflects mounting concern with the platforms’ existing measures and indicates that voluntary cooperation alone is insufficient.
The rollout phase raises important questions about the adequacy of fines and the operational systems for maintaining corporate accountability. Australia’s statutory provisions provide compliance mechanisms, but their effectiveness relies on the eSafety Commissioner’s willingness to pursue formal action and the platforms’ capacity to respond substantively. Overseas authorities, especially regulators in Britain and Europe, will closely monitor Australia’s implementation tactics and outcomes. An effective regulatory push could set a template for other jurisdictions considering equivalent prohibitions, whilst failure might compromise the entire regulatory framework. The coming months will determine whether Australia’s pioneering approach produces genuine protection for adolescents or proves largely performative in its influence.
