Parents, Companies Must Act Against Social Media Harms: Why Raising the Age Limit Is Only the Beginning
In 2017, a 14-year-old girl in the United Kingdom took her own life after being exposed to a stream of harmful online content. Like most teenagers her age, she used social media regularly, navigating the same platforms that have become the default social environment for an entire generation. After a lengthy inquest, a coroner concluded that the content she encountered online had contributed directly to her death. The case became a catalyst for the UK government to strengthen its laws regulating digital platforms. It was a tragedy that forced a nation to confront the dark side of the digital revolution. But sadly, it was not unique. It was a single, devastating data point in a global crisis that has been building for over a decade.
Globally, evidence has been mounting that excessive use of social media is harming children’s mental health at an alarming and unprecedented rate. Research across several countries shows a clear and disturbing link: heavy social media use is associated with a two- to three-fold higher risk of suicidal ideation and self-harm among adolescents. In his landmark book, The Anxious Generation, social psychologist Jonathan Haidt traces the sharp, sudden decline in youth mental health directly to the rise of the smartphone-based childhood. The data is stark. Between 2010 and 2020, as smartphones became ubiquitous, rates of depression among teenagers rose sharply, including a reported 145 per cent increase among girls and 150 per cent among boys in some datasets. This is not a coincidence; it is a public health emergency.
Against this overwhelming backdrop of evidence, governments around the world are finally reconsidering how young people access social media. In late 2024, Australia passed landmark legislation raising the minimum age for social media use from 13 to 16. The decision was controversial, as any attempt to regulate the digital space inevitably is. Critics often describe such measures as a “ban,” arguing that they amount to an overreach, a violation of digital rights, or a simplistic solution to a complex problem. But this framing misses the fundamental point. The aim is not to “ban” young users from the digital world; it is to protect them from the specific, documented, and severe harms of engagement-driven social media platforms. Raising the age limit will not solve every problem, but it may be a sensible, necessary place to begin.
One of the most powerful effects of a higher age limit is its potential to change social norms. Currently, social pressure often drives children to join platforms before they are ready. When most of a child’s peers are already online, parents feel compelled to allow access even if they have deep reservations. They worry about their child being socially isolated, left out of conversations, or bullied for not being present. A higher, legally mandated age threshold could help reset these expectations. It would give parents a firm basis to say “no,” backed by the authority of the state. It could ease the burden on families who want to delay exposure but feel powerless against the tide of peer pressure.
The current minimum age of 13, after all, was never based on any research about adolescent brain development or child psychology. It originates from the United States’ Children’s Online Privacy Protection Act (COPPA) of 1998, which set 13 as the age at which companies could legally collect children’s data without parental consent. At the time, social media platforms as we know them today did not even exist. There was no Facebook, no Instagram, no TikTok, no Snapchat. Nearly three decades later, the same arbitrary age threshold persists, even as the digital landscape has been completely and irrevocably transformed.
Today’s social media platforms are not neutral tools. They are sophisticated attention-extraction machines, built on engagement-driven design. Every feature, from the algorithmically curated feed to the infinite scroll to the constant push notifications, is meticulously engineered to keep users’ eyes on the screen for as long as possible. For adults, resisting these persuasive designs is difficult enough. For children, whose brains are still developing, it is far harder. The prefrontal cortex, the part of the brain responsible for impulse control, judgment, and long-term decision-making, is not fully mature until the mid-20s. Teenagers are therefore far less equipped to resist these features or to critically evaluate the content they encounter. They are navigating a digital environment designed to exploit their vulnerabilities.
Adolescence is also a time when social validation from peers becomes paramount. A study across 26 districts in India found that nearly half of adolescents reported feeling distressed when their posts did not receive enough “likes.” This is not a trivial concern. For a developing brain, the validation of a “like” or a comment triggers a dopamine response, creating a cycle of craving and dependence. The absence of that validation can feel like a personal rejection, a measure of social failure. This is not how childhood is supposed to work.
Beyond mental health, there are serious and escalating safety risks. Technology-facilitated child sexual exploitation is expanding worldwide, affecting an estimated 300 million children. The Annual Status of Education Report (ASER) 2024 found that nearly 90 per cent of adolescents aged 14-16 in India have access to a smartphone at home, and social media remains a dominant activity. Predators use these platforms to groom, manipulate, and exploit minors, hiding in plain sight within comment sections, direct messages, and private groups, spaces where exploitation often unfolds without parents’ knowledge.
Despite the overwhelming evidence of harm, the principle of “safety-by-design” remains the exception, not the norm. Platforms routinely introduce safety features only after problems emerge, only after a scandal breaks, only after a tragedy forces their hand. This reactive approach places an unreasonable burden on children and parents to manage risks that are embedded in the very architecture of the platforms. A child should not have to be a digital security expert to avoid being groomed. A parent should not have to be a forensic investigator to understand the algorithmic forces shaping their child’s daily life.
Parents certainly have a role to play. They must engage with their children, set boundaries, and educate themselves about the digital world. But the responsibility cannot rest solely on families. The companies that profit from these platforms, that have built billion-dollar businesses on the attention of young users, must also be held accountable. They must be required to prove that their products are safe for children before they are deployed at scale, not after the damage has been done.
Raising the minimum age for social media access to 16 should be seen as one critical step in a much broader effort to protect young users. Governments must ensure platform accountability through legislation. They must push for stronger, legally mandated safety-by-design standards that require platforms to prioritize child safety over engagement from the very beginning of the design process. They must demand greater transparency around algorithms and platform practices, so that researchers, parents, and regulators can understand the forces shaping children’s online experiences.

A higher age limit is not a panacea, but it is a signal. It signals that society is finally taking this crisis seriously. It signals that the well-being of children is not a secondary consideration to be balanced against corporate profits. It may even encourage companies to finally invest in creating truly age-appropriate platforms, allowing children to benefit from technology rather than be harmed by it. The 14-year-old girl in the UK should not have died in vain. Her death, and the deaths and suffering of countless others, demand action. The time for half-measures is over.
Questions and Answers
Q1: What was the significance of the 2017 case of a 14-year-old girl in the UK?
A1: The girl took her own life after being exposed to harmful online content. A coroner concluded that the online content contributed directly to her death. This tragic case became a catalyst for the UK government to strengthen its laws regulating digital platforms, highlighting the direct link between harmful online content and harm to young people.
Q2: What does the author argue is the real purpose of raising the minimum age for social media use, rather than simply “banning” young users?
A2: The author argues the purpose is not to ban young users from the digital world, but to protect them from the specific harms of engagement-driven platforms. A higher age limit would also help reset social norms, reducing the peer pressure on families who want to delay their children’s exposure but feel powerless to do so.
Q3: Why is the current minimum age of 13 considered arbitrary and outdated?
A3: The age of 13 originates from the United States’ Children’s Online Privacy Protection Act (COPPA) of 1998, which set 13 as the age at which companies could legally collect children’s data without parental consent. At that time, modern social media platforms did not exist. The same threshold persists nearly three decades later, despite the complete transformation of the digital landscape.
Q4: According to the article, why are adolescents particularly vulnerable to the persuasive design of social media platforms?
A4: Adolescents are vulnerable because their brains are still developing. The prefrontal cortex, responsible for impulse control and judgment, is not fully developed until the mid-20s. They are less equipped to resist engagement-driven features like infinite scroll and algorithmic feeds. Adolescence is also a time when social validation is paramount, making them more susceptible to the dopamine-driven cycle of “likes.”
Q5: What broader actions does the article recommend beyond simply raising the minimum age?
A5: The article recommends several broader actions:
- Platform accountability: governments must ensure, through legislation, that companies are held accountable.
- Safety-by-design standards: legally mandated safety features must be required from the start of the design process, not added as reactive fixes.
- Transparency: platforms must be more open about their algorithms and practices, so that researchers, parents, and regulators can understand the forces shaping children’s online experiences.
