The Worst Argument That Social-Media Companies Use to Defend Themselves

Many in Silicon Valley claim that regulation would hurt teens from historically disadvantaged communities. They’re wrong.

When the tobacco industry was accused of marketing harmful products to teens, its leaders denied the charge but knew it was true. Even worse, the industry had claimed that smoking made people healthier—by reducing anxiety, say, or slimming waistlines.

The social-media industry is using a similar technique today. Instead of acknowledging the damage their products have done to teens, tech giants insist that they are blameless and that their products are mostly harmless. At times, they make an even more audacious claim: that social media helps teens, even as mounting evidence suggests that it’s harming many of them and playing a substantial role in the mental-health crisis afflicting young people in numerous countries around the world.

When Mark Zuckerberg was asked in 2022 about Meta’s own finding that Instagram made many teen users feel worse about their bodies, for instance, he cleverly reframed the result. After noting other, more favorable findings in the same study, he proclaimed that his platform was “generally positive” for teens’ mental health, even though at least one in 10 teen girls reported that Instagram worsened each of the following: body image, sleep, eating habits, and anxiety. (Zuckerberg also failed to mention internal data demonstrating the other dangers that social media poses for teens.)

[Derek Thompson: America’s teenage girls are not okay]

Tech lobbyists have gone further, deploying the dual argument that social media is especially beneficial to teens from historically marginalized communities, and therefore nearly any regulation would harm them. Through their funding and, at times, their own statements, many leaders in Silicon Valley have used these claims as part of their efforts to oppose a pair of bills—now before Congress—aimed at strengthening online protections for minors, referred to collectively as the Kids Online Safety and Privacy Act. (KOSPA combines the Kids Online Safety Act, widely known as KOSA, and the Children and Teens’ Online Privacy Protection Act.)

The talking point plays into a long-running strand of progressive thought that sees digital technology as a means of empowering disadvantaged groups. The early internet did in fact help many Black, low-income, and LGBTQ+ Americans—among others—find resources and community. And even today, surveys find that LGBTQ+ teens report experiencing more benefits from social media than non-LGBTQ+ teens.

That’s a good reason to be careful about imposing new regulation. But the wholesale opposition to legislation ignores strong evidence that social media also disproportionately harms young people in those same communities.

KOSPA could help. The legislation would require social-media companies to develop a version of their platforms that’s safe for children—eliminating advertising that targets minors, for example, and allowing users to scroll feeds that aren’t generated by personal-recommendation algorithms. It would demand that social-media companies take reasonable measures to mitigate potential harms such as sexual exploitation, mental-health disorders, and bullying. It would also hold companies responsible for ensuring that underage children obtain parental consent to use their platforms, without preventing teens from freely accessing social media. In July, the Senate passed the two bills 91–3; the House could take up the legislation as soon as this month.

Even some tech companies support the legislation, but digital-rights groups, many of which are subsidized by the industry (including by Meta), have largely opposed it, arguing that KOSPA would take away the benefits that marginalized teens enjoy from social-media platforms. Some of these groups have released statements warning about the dangers that the legislation poses to LGBTQ+ youth, even after many LGBTQ+ advocates dropped their objections once they’d worked with legislators to revise KOSPA.

A think tank supported by tech companies, meanwhile, has argued that the bills’ ban on targeted advertising for minors might result in “fewer free online services designed for children, which would prove most detrimental to lower-income households.” While digital-rights groups appeal to the political left with unsubstantiated claims about marginalized groups, they tell the right that KOSPA amounts to censorship, even though it wouldn’t limit the kinds of content that teens could search for.

Whatever he actually believes, Zuckerberg is wrong that social media is “generally positive” for teens’ mental health. The tech industry is wrong that social media is especially good for teens in historically disadvantaged communities. And its lobbyists are wrong that regulation would do more harm than good for these groups. The evidence—from the private lives of tech executives, a growing body of empirical research, and the testimony of young users—by now strongly supports each of these points.


One technique for determining whether a product harms children is to ask the people who designed that product if they let their kids use it.

Steve Jobs limited his children’s use of technology. TikTok CEO Shou Zi Chew doesn’t let his children on TikTok. Bill Gates restricted his kids’ screen time and did not give them a phone until they were 14. Google CEO Sundar Pichai didn’t give his 11-year-old a phone. Mark Zuckerberg has carefully monitored his kids’ screen time and avoided sharing identifying photos of them on Instagram. Snap CEO Evan Spiegel limited his 7-year-old’s technology use to 90 minutes a week. (Compare that with the average American teen, who spends nearly nine hours a day on screens, not including time spent on school or homework.)

[Jonathan Haidt: End the phone-based childhood now]

The examples continue: Some tech executives write up “nanny contracts,” compelling babysitters to keep their children away from screens. Many of them pay more than $35,000 a year to send their kids to the Waldorf School of the Peninsula—a few miles down the road from Meta’s and Google’s headquarters—which doesn’t allow children to use screens until seventh or eighth grade.

Of course, few people would call the children of tech elites marginalized. But it is curious that these elites publicly assert that digital technology helps children—especially the most vulnerable—while expunging it from their own kids’ lives. Those choices are particularly galling given how intensely social-media companies try to attract other people’s children to their products; how little they do to prevent underage use; and how hard many of them fight to block legislation that would protect young people on their platforms.


The social-media platforms of today are not like the internet of the 1990s. The early internet helped isolated and disadvantaged teens find information and support, as do many modern platforms. But today’s social media is engineered in ways that make it more dangerous than much of the early internet. Do teens really need bottomless, algorithmically curated news feeds that prioritize emotional power and political extremity just to find information? Do they really benefit from being interrupted throughout the day with manipulative notifications designed to keep them looking and clicking? How much was gained when social-media platforms took over teens’ online lives? How much was lost?

Researchers at Instagram didn’t have to ask that last question when they interviewed young users around 2019. Unprompted, teens across multiple focus groups blamed the platform for increasing rates of anxiety and depression. Other studies have found that a substantial share of young people believe that social media is bad for their mental health. An increasing amount of empirical evidence backs them up. On the Substack After Babel, written by two of this article’s authors, Jon and Zach, we have run numerous essays by young people testifying to these harms and have reported on organizations created by members of Gen Z to push back on social-media companies. Where are the Gen Z voices praising social media for the mental-health benefits it has conferred upon their generation? They are few and far between.

Of course, many teens do not feel that smartphones or social media have been a negative force in their lives; a majority tend to view the impacts of digital technology as neither positive nor negative. But that’s no reason to dismiss the harm experienced by so many young people. If evidence suggested that another product were hurting any significant number of the children and adolescents who used it, that product would be pulled from the shelves immediately and the manufacturer would be forced to fix it. Big Tech must be held to the same standard.

As it turns out, the adolescents being harmed the most by social media are those from historically disadvantaged groups. Recent surveys have found that LGBTQ+ adolescents are much more likely than their peers to say that social media has a negative impact on their health and that using it less would improve their lives. Compared with non-LGBTQ+ teens, nearly twice as many LGBTQ+ teens reported that they would be better off without TikTok and Instagram. Nearly three times as many said the same for Snapchat.

Youth from marginalized groups have good reason to feel this way. LGBTQ+ teens are significantly more likely to experience cyberbullying, online sexual predation, and a range of other online harms, including disrupted sleep and fragmented attention, compared with their peers. LGBTQ+ minors are also three times more likely to experience unwanted and risky online interactions.

One of us—Lennon, an LGBTQ+ advocate—has experienced many of these harms firsthand. At age 13, while navigating adolescence as a young transgender person, she got her first iPhone and immediately downloaded Facebook, Instagram, and Snapchat. Her Instagram following grew from fewer than 100 to nearly 50,000 in just one month as she began to achieve national recognition as a competitive dancer. Soon she was receiving insulting messages about her queer identity—even death threats. Seeking a friendlier place to explore her identity, she took the advice of some online users and began corresponding on gay chat sites, often with middle-aged men. Some offered her the support that she had been looking for, but others were malicious.

Several men asked Lennon to perform sexual acts on camera, threatening to publicize revealing screenshots they had taken of her if she tried to refuse. The shame, fear, and regret that she felt motivated her to devote her career to protecting children online, ultimately joining the Heat Initiative, which pushes the tech industry to make safer products and platforms for children.

What about youth from other historically disadvantaged communities? Black and Hispanic teens are slightly less likely than white teens to report cyberbullying, but they are much more likely to say that online harassment is “a major problem for people their age.” Evidence suggests that teens with depression may be at higher risk of harm from social media, and studies show that reducing social-media use is most beneficial for young people with preexisting mental-health problems.

[Jonathan Haidt: The dangerous experiment on teen girls]

Although social media can certainly provide benefits to vulnerable teens, the industry has regularly dismissed the fact that its platforms are consistently, and disproportionately, hurting them.


For the past three decades, the term digital divide has been used to refer to a seemingly immutable law: Kids in wealthy households have ample access to digital technologies; kids in other households, not so much. Policy makers and philanthropists put up large sums of money to close the gap. Although it persists in some parts of the world, the digital divide is starting to reverse in many developed nations, where kids from low-income families are now spending more time on screens and social media—and suffering more harm from them—than their economically privileged peers.

“Entertainment screen use” occupies about two additional hours a day for teens from low-income families compared with those from high-income families. A 2020 Pew Research Center report found that young children whose parents have no more than a high-school education are about three times likelier to use TikTok than children whose parents have a postgraduate degree. The same trend holds for Snapchat and Facebook. Part of the reason is that college-educated parents are more likely than parents without a college degree to believe that smartphones might adversely affect their children—and therefore more inclined to limit screen time.

The discrepancy isn’t just a matter of class. LGBTQ+ teens report spending more time on social media than non-LGBTQ+ teens. And according to a 2022 Pew survey, “Black and Hispanic teens are roughly five times more likely than White teens to say they are on Instagram almost constantly.”

In other words, expanding access to smartphones and social media seems to be increasing social disparities, not decreasing them. As Jim Steyer, the CEO of Common Sense Media, told The New York Times:

[Greater use of social media by Black and Hispanic young people] can help perpetuate inequality in society because higher levels of social media use among kids have been demonstrably linked to adverse effects such as depression and anxiety, inadequate sleep, eating disorders, poor self-esteem, and greater exposure to online harassment.

Meanwhile, tech leaders are choosing to delay their children’s access to digital devices, sending their kids to tech-free Waldorf schools and making their nannies sign screen-time contracts.


The tech industry and others who oppose regulations such as KOSPA often argue that more education and parental controls are the best ways to address social media’s harms. These approaches are certainly important, but they will do nothing to deter tech companies from continuing to develop products that are, by design, difficult to quit. Calling for “consumer education” is, in fact, a tactic that other industries with harmful products, including alcohol and tobacco, have relied on to generate public sympathy and defer regulation.

The approach would do little to change the underlying reality that social-media platforms, as currently engineered, create environments that are unsafe for children and adolescents. They disseminate harmful content through personalized recommendation algorithms, they foster behavioral addiction, and they enable adult strangers from around the world to communicate directly and privately with children.

Social-media companies have shown over and over again that they will not solve these problems on their own. They need to be forced to change. Young people agree. A recent Harris Poll found that 69 percent of 18-to-27-year-olds support “a law requiring social media companies to develop a ‘child safe’ account option for users under 18.” Seventy-two percent of LGBTQ+ members of Gen Z do too.

Legislators must reject the flawed arguments that social-media companies and tech lobbyists promote in their efforts to block regulation, just as legislators rejected the arguments of tobacco companies in the 20th century. It’s time to listen to the young people—and the thousands of kids with stories like Lennon’s—who have been telling us for years that social media has to be fixed.
