It is just the beginning of 2026, and things are happening even faster than last year. Not only in technology, but also in regulations, laws, and in how we deal with all the information around us.
As someone born in the 90s, I first encountered social media as an unknown land, a place that felt genuine in the beginning. It still had dangers, but it seemed less risky, or maybe our parents' rules were simply stricter.
I don’t want to go down the psychological path here, but I want to look at where we are headed with so much risk, especially on social media platforms.
Across Europe, governments and lawmakers are wrestling with a question that until recently belonged to parents and tech platforms: should social media have a legal age limit above 13?
The debate has moved fast, from discussions about digital safety to concrete proposals that would impose hard boundaries on who can sign up for TikTok, Instagram, Snapchat, X, and other networks.
Are lawmakers encroaching on freedoms, or does protecting the freedom of an undisturbed childhood and a straight road ahead sometimes require regulation?
If we take the big picture, this isn’t just another regulation. It reflects a deeper shift in how democracies are responding to the pervasive presence of social media in young lives, and how they judge its long-term impact.
From soft law to hard proposals
In late 2025, the European Parliament adopted a resolution urging a minimum age of 16 to access social media and related services, while allowing 13- to 16-year-olds in with parental consent.
I won't even dwell on the parents who create profiles for their children from birth. To make my point: according to CNIL, France's data protection authority, content shared by family members has been misused for harmful purposes, including redistribution on networks used for abusive content.
It estimates that a significant portion of photos and videos found on paedo-criminal forums were initially published by parents on social media.
The resolution goes further: it targets “addictive practices” on platforms and asks for the default disabling of features like infinite scroll and auto-play. This proposal isn’t legally binding, but it frames the politics of the debate in Brussels and across capitals.
At the national level, several countries have translated these concerns into draft laws. Spain’s government announced plans to ban access altogether for children under 16 unless platforms implement strict age verification.
The bill would also make tech leaders liable for illegal or harmful content on their networks, a notable shift from earlier models that focused on platform notice and takedown procedures.
France, a few months earlier, moved to ban social media use by those under 15 and to require platforms to verify the ages of all users, not just minors. Enforcement could include vetting existing accounts as well as new ones.
Smaller states are joining the chorus too: Slovenia is drafting a law to outlaw social media use by under-15s, while Denmark has publicly debated similar limits. At the same time, parties in Germany are weighing a 16-year minimum.
Even in the United Kingdom, rules proposed under the Online Safety Act would effectively require platforms to block access by under-16s through age verification; that law is already shaping how tech companies handle youth content and access.
The cumulative effect: across Europe, lawmakers are rejecting the idea that platforms can self-police minors' access with a mere date-of-birth entry.
What’s driving the push
Two currents run through this debate.
First, lawmakers are reacting to real concerns about exposure to harmful content, addiction-like engagement models, and algorithmic amplification of risky material.
Social media companies have repeatedly been criticised for targeting youth with addictive design, and national campaigns increasingly highlight risks to mental health and privacy.
Second, the political framing around digital rights has changed. Leaders now talk openly about a “digital age of majority,” a threshold at which the perks of online interaction outweigh potential harms. For some policymakers, that line is 16. Others see 15 as defensible.
What unites them is the belief that the current model, where platforms rely on self-reported age, is insufficient.
That belief is backed by evidence that online systems struggle to know a user’s true age. Age assurance technology is still emerging, and widespread implementation remains uneven. OECD research highlights gaps in verification and incomplete protection in many jurisdictions.
Still, how practical are these measures? This is where the narrative shifts from aspiration to hard reality.
First, enforcement is a technical and legal challenge.
Simply stating that under-16s are banned from signing up does not instantly make it so. Platforms can require more robust checks, but any system that tries to verify age directly introduces new privacy questions.
Verification systems that scan IDs or use biometric tools can protect minors, but they also gather sensitive data. That trade-off is at the centre of ongoing criticism.
Second, universal compliance is hard to monitor. A teenager might still register for a platform using a VPN, a family member’s account, or a proxy. Even nations with strict digital ID ecosystems would need cross-border enforcement mechanisms, since platforms operate internationally.
Third, politics matters. In Spain, the government does not hold a stable majority, and opponents, including tech founders, have pushed back strongly, framing the rules as threats to freedom of speech and privacy.
For example, Telegram’s founder sent mass messages accusing regulators of overreach, leading to a public spat that highlights how much of this debate is symbolic as well as substantive.
“In the UK, a coroner found that content seen on Instagram and Pinterest contributed to the death of 14-year-old Molly Russell, who took her own life after repeatedly engaging with material about self-harm and suicide.”
So, I am asking, what freedom of speech does Molly Russell still have? None.
Possible consequences
The measures under discussion could reshape how digital culture evolves among young people.
Positive outcomes may include stronger age verification that genuinely limits harmful exposure. That could reduce early adoption of addictive scrolling habits and limit the reach of explicit material.
Parental control features could become more powerful if linked to legally enforced standards.
On the other hand, hard bans can create unintended effects. Teens who are excluded might drift toward unregulated corners of the internet or feel pushed into informal digital networks without safeguards.
Schools and social networks might become central hubs of digital identity outside mainstream platforms, raising questions about equity and access to information.
There’s also the risk that privacy trade-offs become normalized. If age verification requires sharing sensitive documents or biometric scans, policymakers will need to defend those systems against misuse, breaches, and mission creep.
Finally, these policies could influence global governance.
Australia already enacted a ban on under-16s late last year, offering a precedent that Europe is watching closely. The performance of that law, in terms of real effects on youth engagement versus tech freedom, will likely shape future debates.
At its core, Europe’s social media age discussion is not just about numbers. It’s about how societies balance digital opportunity and vulnerability.
Whether the final policies adopt 15, 16, or conditional frameworks with parental consent, the process is sparking intellectual contest over autonomy, safety, and the role of government in digital life.
For journalists, technologists, and digital citizens alike, the unfolding story raises a fundamental question: can we as a society protect children and teenagers online while also safeguarding their freedom of expression, access to information, and ability to participate in digital culture?