Beyond the Birth Date: How Meta Is Using AI to Stop Underage Users
For years, the barrier to entry for children on social media has been a simple dropdown menu. By selecting a birth year that makes them appear 13 or older, millions of underage users have bypassed safety filters to access Instagram and Facebook. Meta is now moving to close this loophole with advanced AI-powered age assurance technology that looks beyond what a user claims on their profile.
The shift marks a transition from relying on “self-attestation”—where users simply tell the platform their age—to a system of active estimation. By analyzing visual cues and behavioral patterns, Meta aims to ensure that teenagers are placed in protected environments and that children under 13 are removed from its services entirely.
The Shift to AI Visual Analysis
While Meta has long used AI to estimate age, the company is now introducing visual analysis to increase accuracy. Previously, the platform relied heavily on textual and behavioral inputs, such as the language used in bios, captions, comments, and general profile activity. However, these methods can be deceptive or incomplete.

The new system analyzes faces in photos and videos to estimate a user’s age. It’s critical to distinguish this from facial recognition. According to Meta, the tool does not recognize specific individuals or identities; instead, it identifies visible indicators of age to determine if a user is under 13 or a teenager between 13 and 18.
Regional Rollouts and Regulatory Pressure
This technological push comes as regulators in the United States, Brazil, and Europe increase pressure on social media companies to tighten protections for minors. In response, Meta is expanding protections for users suspected of misrepresenting their age in specific regions:
- Instagram: Expanded protections are being deployed in the European Union and Brazil.
- Facebook: Enhanced measures are being implemented in the United States.
Creating a “Safe by Default” Experience
Age assurance isn’t just about removing users; it’s about placing the right users in the right settings. Once the AI identifies a user as a teenager, Meta automatically applies “safe by default” configurations. This includes the deployment of Teen Accounts across Instagram, Facebook, and Messenger.
These specialized accounts include built-in protections that limit who can contact teens and filter the content they encounter. Meta has updated its content policies to automatically place any user identified as under 18 into a “13+ content setting,” restricting access to mature material.
The Industry-Wide Challenge
Despite these advancements, Meta acknowledges that verifying age online is a complex, industry-wide struggle. To address the root of the problem, the company has renewed its call for app stores to take more responsibility. Meta argues that app stores should verify a user’s age during the initial signup process, creating a more robust first line of defense before a user even reaches a social media platform.
Key Takeaways
- Visual Analysis: AI now analyzes photos and videos to estimate age, moving beyond simple birthday inputs.
- Estimation, Not Recognition: The technology focuses on age indicators rather than identifying specific individuals.
- Automated Protections: Teens are automatically routed into “Teen Accounts” with restricted contact and content settings.
- Targeted Regions: Initial expanded protections are focused on the US, EU, and Brazil.
Frequently Asked Questions
Does Meta use facial recognition to find underage users?
No. Meta has clarified that it uses facial analysis to estimate age based on visible cues, not facial recognition technology to identify specific people.

What happens if the AI thinks I’m underage?
Users identified as being under 13 are removed from the services. Users identified as teenagers (13 to 17) are placed into default, age-appropriate experiences, such as Teen Accounts, which feature stricter privacy and content settings.
Which platforms are affected by these changes?
These AI age assurance measures are being implemented across Facebook and Instagram, with integrated protections extending to Messenger.
The Bottom Line
Meta’s move toward AI-driven age estimation represents a significant pivot in how social platforms handle user safety. By removing the “honor system” of birthdates and replacing it with visual and behavioral analysis, the company is attempting to satisfy global regulators and create a more secure environment for minors. However, the ultimate success of these measures may depend on whether app stores step up to provide the primary verification Meta is requesting.