Meta has announced a new set of safety enhancements for teenagers across its platforms, part of a broader push to create safer digital environments for young users.
The update centers on improved age assurance technology, combining artificial intelligence, product design changes, and parental support tools to better identify teenage users and ensure they are placed into age-appropriate experiences by default.
A key focus of the rollout is the use of advanced AI systems to detect accounts that may belong to underage users even when incorrect birth dates are provided. These systems analyze behavioral signals, profile activity, and increasingly, visual and contextual cues to flag potential age misrepresentation.
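The announcement does not describe how these systems work internally. As a purely illustrative sketch, one common approach to this kind of detection is to combine several normalized behavioral signals into a weighted risk score and flag accounts above a threshold. The signal names, weights, and threshold below are all hypothetical assumptions, not Meta's actual system:

```python
# Hypothetical sketch: combining weighted behavioral signals into an
# age-misrepresentation score. Signal names, weights, and the threshold
# are illustrative assumptions, not Meta's actual system.

def misrepresentation_score(signals: dict, weights: dict) -> float:
    """Weighted sum of normalized signals (each assumed to be in [0, 1])."""
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

THRESHOLD = 0.6  # assumed cutoff for routing an account to further review

def flag_account(signals: dict) -> bool:
    """Return True if the combined signals suggest age misrepresentation."""
    weights = {
        "stated_age_vs_network_age_gap": 0.40,  # stated age far from network's
        "teen_content_engagement": 0.35,        # teen-typical activity patterns
        "birthday_edit_history": 0.25,          # repeated birth-date changes
    }
    return misrepresentation_score(signals, weights) >= THRESHOLD

# Example: an account with strong teen-typical signals crosses the threshold
print(flag_account({
    "stated_age_vs_network_age_gap": 0.9,
    "teen_content_engagement": 0.8,
    "birthday_edit_history": 0.2,
}))  # 0.36 + 0.28 + 0.05 = 0.69 >= 0.6, so True
```

In practice, production systems of this kind typically use learned models rather than hand-set weights; the sketch only shows the general shape of signal aggregation.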
Once identified, such accounts are automatically moved into “Teen Account” settings, which come with built-in safeguards. These include restrictions on who can contact teens, tighter content controls, and default privacy protections designed to limit exposure to harmful material.
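Conceptually, moving a flagged account into Teen Account settings amounts to overriding its current settings with safety-oriented defaults. The setting names and values below are hypothetical, chosen only to mirror the protections described above:

```python
# Illustrative only: "Teen Account" defaults modeled as a settings override.
# All field names and values are assumptions, not Meta's real configuration.

TEEN_DEFAULTS = {
    "who_can_message": "friends_only",  # restrict who can contact teens
    "profile_visibility": "private",    # default privacy protection
    "sensitive_content": "reduced",     # tighter content controls
}

def apply_teen_defaults(settings: dict) -> dict:
    """Return a copy of the account's settings with teen safeguards enforced."""
    updated = dict(settings)       # leave the original settings untouched
    updated.update(TEEN_DEFAULTS)  # safeguards win over existing choices
    return updated

# Example: an account previously configured with open settings
adult_settings = {
    "who_can_message": "everyone",
    "profile_visibility": "public",
    "sensitive_content": "standard",
}
print(apply_teen_defaults(adult_settings))
```

The key design point this sketch captures is that the safeguards are applied by default and override the account's prior choices, rather than being opt-in.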
Meta is also strengthening enforcement against users under 13, who are not permitted on its platforms, by scaling technologies that can detect and remove such accounts more effectively.
Beyond detection, the company is expanding parental support features, providing guidance and tools to help families manage teens’ online experiences and encourage honest age reporting.
The move comes amid growing global scrutiny of how social media platforms handle teen safety, with regulators increasingly demanding stronger safeguards against online abuse, harmful content, and age misrepresentation.
Overall, Meta’s latest update signals a broader shift toward proactive, AI-driven safety enforcement, aiming to balance platform accessibility with stricter protections for younger users.