Australia has widened its crackdown on youth social media use, adding Reddit and Kick to the roster of platforms required to ban accounts for children under 16. The move, announced in Melbourne, extends earlier restrictions on Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube. It signals a hardening stance on online safety and age control as officials weigh how to enforce the policy across global companies.
“Australia has added message board Reddit and livestreaming service Kick to its list of social media platforms that must ban children younger than 16 from holding accounts. The platforms join Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube…”
Why Australia Is Tightening Rules
Officials argue that younger users face higher risks online, including bullying, grooming, and exposure to harmful content. The government has flagged age limits as one tool to reduce harm. The policy follows years of debate over whether platforms do enough to verify ages and protect minors.
Australia’s online safety framework has expanded in recent years, with fines and takedown requirements for harmful material. Lawmakers have pushed platforms to show stronger duty of care to children. Age-based account bans now sit at the center of that pressure.
What Changes for Reddit and Kick
Reddit and Kick must now apply the same under-16 restrictions already expected of larger social apps. That means preventing new sign-ups for users under 16 and addressing existing accounts held by minors. Both platforms host active youth communities, which raises immediate enforcement questions.
Kick’s livestream format can make content moderation and age checks more complex. Reddit’s forum-style communities also vary widely in rules and oversight, which complicates the task of policing underage accounts at scale.
Enforcement and Practical Hurdles
The core challenge is age assurance. Most platforms rely on self-declared birthdays. Stronger checks, such as ID scans, biometrics, or third-party verification, raise privacy and security concerns. They also risk excluding teens without access to formal identification.
Industry lawyers warn that strict bans could push younger users to lie about their ages or move to lesser-known apps. That can make risks harder to track and reduce oversight by parents and teachers.
- Age checks can be inaccurate or intrusive.
- ID-based systems may create data security risks.
- Workarounds by minors could undermine the policy.
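The weakness of self-declared birthdays is easy to see in code. The sketch below (illustrative only; `MIN_AGE` and the function names are hypothetical, not any platform's actual sign-up logic) shows that the check itself is trivial arithmetic on a date the user types in, which is exactly why it is trivially falsifiable:

```python
from datetime import date

MIN_AGE = 16  # Australia's under-16 threshold

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Compute age in whole years from a self-declared birthdate."""
    years = today.year - birthdate.year
    # Subtract one if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def may_register(birthdate: date, today: date) -> bool:
    """Gate sign-up on the self-declared date. Nothing stops a minor
    from entering an earlier year, which is why regulators push for
    stronger age assurance."""
    return age_from_birthdate(birthdate, today) >= MIN_AGE

# A user born 1 Jan 2012, checked on 1 Jan 2026, is 14 and is blocked.
print(may_register(date(2012, 1, 1), date(2026, 1, 1)))  # False
```

The same user who simply claims a 2008 birth year passes the gate, which is the policy gap the stronger (and more privacy-sensitive) checks discussed above try to close.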
Reactions From Experts and Advocates
Child-safety advocates say an under-16 ban sends a clear message. They argue that less time on social media can help mental health, sleep, and school performance. Some teachers welcome the shift, citing rising classroom distractions and reports of online bullying.
Digital rights groups take a different view. They warn that blanket bans may harm privacy and free expression, especially for teens who rely on online communities for support. They also argue that meaningful protections should focus on design choices, such as turning on strong privacy settings by default and limiting data collection.
Parents’ groups are split. Many want clear rules and better tools, including strong parental controls and reliable age checks that do not store sensitive data. Others fear that the rules will be hard to enforce at home and could spur risky workarounds.
Industry Impact and Next Steps
Technology firms face new compliance costs. They may need to upgrade age checks, redesign sign-up flows, and adjust recommendation systems for suspected teen users. Those changes require engineering work, legal reviews, and new policies for appeals and account recovery.
Companies will also have to show evidence of compliance. Regulators are likely to demand clear metrics, audits, and quicker response times when underage accounts are reported. Public pressure may grow if enforcement looks uneven across different apps.
Global Trend and Comparison
Australia’s push fits a wider global trend. Governments in Europe and the United States are testing or passing age-based rules for social media and online content. Many proposals pair age limits with stronger privacy defaults for young users.
Despite broad agreement on the goal, methods differ. Some laws rely on parental consent. Others require high-assurance age checks. Most face legal tests over privacy, speech, and the practicality of proof-of-age systems at scale.
Australia’s update adds new pressure on Reddit and Kick while reinforcing limits already applied to larger platforms. The core test is enforcement. If age checks are too weak, the rules may have little effect. If they are too intrusive, they could spark backlash and drive teens to darker corners of the internet. The next phase will hinge on how platforms verify ages, what data they collect, and how regulators measure success. Watch for clearer guidance on acceptable age assurance, stronger default protections for teens, and early compliance reports from the companies now on the list.