Meta Expands Teen Safety Features Across Facebook and Messenger Amid Industry Debate

Meta has announced a significant expansion of its teen safety initiatives, introducing new protective measures across Facebook and Messenger to complement the existing features on Instagram. Since its initial launch in September 2024, the Teen Accounts feature has gained widespread acceptance: Meta reports that 97% of teens aged 13-15 have kept the default safety settings and 94% of parents find the tools beneficial. Meta is now extending these automatic safeguards to more of its platforms, aiming to build a safer social media environment for adolescents.
Enhanced Safety Measures for Teen Users
Teen Accounts automatically apply a suite of safety features designed to limit potentially harmful interactions and content exposure. These include restrictions on who can contact teens, content filtering, and time management tools to prevent excessive usage. Meta asserts that these features directly address parents’ primary concerns, giving teens more control over their online experiences and families greater peace of mind. According to Meta’s Head of Instagram, Adam Mosseri, “Teen Accounts are designed to give parents confidence in their teens’ social media use, while empowering teens to navigate online spaces responsibly.”
Critics Question Effectiveness of Safety Features
Despite Meta’s confidence, independent researchers and advocacy groups have questioned how effective these safety measures actually are. A study conducted by Northeastern University in September 2025 found that only eight of the 47 safety features tested were fully effective. Critics argue that certain protections, such as manual comment-hiding, shift responsibility onto teens rather than proactively preventing harmful interactions. Concerns also persist about whether the time management tools are robust enough to curb excessive use; some features earned only middling evaluations even when they functioned as intended.
Meta Responds to Safety Concerns
Meta has responded to these critiques, emphasizing that its safety tools are improving and that millions of teens and parents are actively using them. The company states, “Our safety protections lead the industry by providing automatic, easy-to-use controls that reduce exposure to sensitive content and unwanted contact, while giving parents tools to monitor and limit usage.” Meta also highlights ongoing efforts to refine these features based on feedback and research, aiming to strike a balance between user autonomy and protection.
New Initiatives Extend Safety to Schools and Education
Beyond individual accounts, Meta has expanded its efforts into educational settings. The School Partnership Program is now open to all middle and high schools across the United States, allowing educators to report issues such as bullying or unsafe content directly through Instagram. These reports receive prioritized review within 48 hours, enabling quicker intervention. Participating schools also gain access to resources such as a nationwide online safety curriculum developed in partnership with Childhelp, which teaches students how to recognize online exploitation, seek help, and use reporting tools effectively.
This curriculum has already reached hundreds of thousands of students, with plans to educate one million middle schoolers over the next year. A peer-led version, created with LifeSmarts, encourages high school students to share safety lessons with younger peers, fostering a community approach to online protection.
The Ongoing Debate Over Teen Safety Measures
While Meta’s initiatives aim to create a safer online environment for teens, critics argue that more comprehensive measures are necessary. Industry watchdogs and researchers suggest that current protections may not be enough to prevent harm. Meta defends its efforts, asserting that its tools are effective and continuously improving, but the debate over the adequacy of online safety measures for youth remains active.
As digital engagement among teens continues to grow, so does the pressure on industry leaders to implement stronger safeguards. The effectiveness of Meta’s expanded safety features will be tested as online threats evolve and new challenges emerge.