There is a familiar story that plays out every time another news report emerges of children being seriously harmed online. Parents are told to “take control”. Schools are urged to “do more”. Tech companies promise another round of tweaks. But this framing misses the real issue. The harm children experience on social media is not a failure of parenting or education. It is the outcome of commercial systems designed to maximise engagement at all costs.
If the tech sector genuinely prioritised child safety, we would not be facing the scale of harm that now confronts children and young people. What is happening online is not accidental, or the result of a few bad actors. It is the consequence of algorithmic recommender systems deliberately engineered to keep users scrolling. Systems optimised for profit do not suddenly behave differently because the user is a child.
This was laid bare by the findings of the Big Tech’s Little Victims Algorithm Experiment. The project, led by the National Education Union, created four fictional profiles of British 13-year-olds across TikTok, Snapchat, YouTube and Instagram to see what content children are served when they sign up for the first time. The results were shocking, but sadly not surprising to teachers. Within minutes, children were shown harmful and inappropriate content, including guns, self-harm, sexualised material and misogynistic narratives.
Harmful material in three minutes
Most alarming, the experiment found that for every minute spent scrolling, children were shown a piece of concerning content. Harmful material appeared within just three minutes of logging on – and in some cases it was the very first thing served.
This matters because teachers are not debating the online harm of children in theory – they are already dealing with its consequences. In classrooms, we see the impact of children being exposed to violent content, self-harm and suicide material, sexualised imagery, and extreme narratives pushed at scale.
One visible example is the rise of online misogyny – girls being targeted or harassed, and female staff facing open hostility. What starts on a feed becomes offline behaviour and, once embedded, becomes far harder for schools to unpick. As Louis Theroux’s recent documentary The Manosphere has brought into sharp focus, the scaling of misogynistic content is not incidental – it is by design.
So what needs to happen?
First, we need honesty about the limits of half measures. The Government has launched a national consultation on children’s digital wellbeing. Ministers have also announced a six-week pilot involving 300 teenagers, in which families will trial different forms of social media restriction at home – including disabling social media apps entirely, imposing one-hour daily limits, or enforcing overnight curfews – with a control group continuing as normal, to assess the impact on children’s sleep, wellbeing and school life.
This approach fundamentally misunderstands how social media platforms actually work. A partial ban that still leaves some children on social media is not a meaningful test of safety. Harmful content does not stay neatly contained on one screen. If even one child in a friendship group remains on a platform, others will still be exposed through shared videos, images and messages. When algorithms can push extreme material within minutes of account creation, tinkering with time limits or overnight blocks will not keep children safe.
Secondly, tech companies must take accountability now, not later. If platforms know a user is a child – or cannot be sure they are not – the duty of care must be to prevent foreseeable harm by design, not to apologise after it happens.
Why social media for under 16s should be banned
This failure is why we are calling for a ban on social media access for under-16s. Of course, raising the age of access is not a silver bullet. It must be paired with guaranteed space in the curriculum for high-quality digital literacy, so young people develop the skills to navigate online life safely and critically.
The tech sector has had repeated warnings, mounting evidence and countless opportunities to act – and it has failed to do so. That is why Government action now matters. Raising the age of social media access to 16 is the only meaningful step that would reduce harm at scale – and every day of inaction leaves more children exposed to avoidable harm.