
Beginning December 10, 2025, children under the age of 16 in Australia are officially barred from holding accounts on major social-media platforms, as the country becomes the first in the world to enforce a nationwide minimum-age rule under the Online Safety Amendment (Social Media Minimum Age) Act 2024.
Under the new regulatory framework, ten major social-media platforms (Facebook, Instagram, TikTok, YouTube, Snapchat, X, Reddit, Threads, Twitch and Kick) are required to prevent users under 16 from holding accounts. Those platforms must take “reasonable steps” to enforce the restriction or face fines of up to A$49.5 million (roughly US$33 million).
Some companies have already begun implementing the measures. Australians under 16 logging into previously active accounts are being greeted with lock-out screens, while others are being prompted to verify their age or download their data before their accounts are deleted.
The move has stirred a spectrum of reactions across the country. Federal leaders frame the law as a bold step toward protecting children’s mental health and digital safety. Prime Minister Anthony Albanese praised the legislation as “a major cultural shift,” urging young people to use the holiday period for offline activities instead of endless scrolling.
Many parents and child-safety advocates welcomed the ban, seeing it as necessary to shield minors from cyberbullying, addictive social-media features and other online harms. Yet not everyone is convinced. Critics, including some teenagers and privacy experts, argue the law may push minors toward unregulated platforms, VPNs or other workarounds. One Sydney teen quoted in media reports warned that banning the apps will not eliminate harmful content and could instead drive kids to less safe corners of the internet.
Concerns have also been raised about the social and psychological impact on youths, especially those in remote or rural communities who often rely on social media to maintain contact with friends.
Meanwhile, enforcement remains a work in progress. The government concedes that age-verification systems, which may include facial recognition, document checks or behavioural-analysis tools, are imperfect and may not catch every under-age user immediately.