A New Development in the Debate About Instagram and Teens

Meta, infamous for kicking researchers off its platform, flirts with slightly more transparency.

The teens are on Instagram. That much is obvious. A majority of teens say they use the app, including 8 percent who say they use it “almost constantly,” according to the Pew Research Center. And yet a lot is still unknown about what such extensive use might do to kids. Many people believe that it and other social-media apps are contributing to a teen mental-health crisis.

Now, after years of contentious relationships with academic researchers, Meta is opening a small pilot program that will allow a handful of them to access Instagram data for up to about six months in order to study the app’s effect on the well-being of teens and young adults. The company will announce today that it is seeking proposals that focus on certain research areas—investigating whether social-media use is associated with different effects in different regions of the world, for example—and that it plans to accept up to seven submissions. Once their proposals are approved, researchers will be able to access relevant data from study participants—how many accounts they follow, for example, or how much they use Instagram and when. Meta has said that certain types of data will be off-limits, such as user-demographic information and the content of media published by users; a full list of eligible data is forthcoming, and it is as yet unclear whether internal information, such as data about the ads served to users or about Instagram’s content-sorting algorithm, might be provided.

The program is being run in partnership with the Center for Open Science, or COS, a nonprofit. Researchers, not Meta, will be responsible for recruiting the teens, and will be required to get parental consent and take privacy precautions. Meta shared details about the initiative exclusively with The Atlantic ahead of the announcement.

The project cracks open the door to greater insight into social media’s effects—yet some researchers regard it with trepidation. Like many online platforms, Instagram is essentially a black box, which has made it difficult for outsiders to draw direct links between the app and its possible effects on mental health. “We consider ourselves to be in a very difficult and unusual situation, which is [that] the social-media companies have treasure troves of data that no academic researcher will ever amass on their own,” Holden Thorp, the editor in chief of Science, which published studies about the 2020 election in collaboration with Meta, told me. “So you have potentially a resource that could answer questions that you can’t answer any other way.”

[Read: No one knows exactly what social media is doing to teens]

Part of the reason this feels particularly fraught is that leaks from within Meta have indicated that the company has conducted its own research into the harms of its products. In 2021, documents released by the whistleblower Frances Haugen showed that the company’s own research had repeatedly found that Instagram can harm teenagers, especially teenage girls. “Almost no one outside of Facebook knows what happens inside of Facebook,” Haugen said in congressional testimony that year. (Meta, which owns Facebook, was previously known by that name; the company rebranded just a few weeks after Haugen’s appearance.) Later in her testimony, she said that “there is a broad swath of research that supports the idea that usage of social media amplifies the risk” of mental-health issues such as depression. Before that, Facebook had become notorious among researchers for restricting their ability to study the site, most notably in a high-profile incident in 2021, when it kicked a group of researchers from New York University off the platform.

All of which underscores the value of independent research: The stakes are high, but the actual data are limited. Existing experimental research has produced mixed results, in part because of the issues around access. In the meantime, the idea that social media is harmful has calcified. Last month, the U.S. surgeon general proposed putting a cigarette-style warning label on social sites—to serve as a reminder to parents that they haven’t been proved safe. Cities and school districts across the country are busy passing rules and legislation to restrict the use of devices in the classroom.

[Read: Get phones out of schools now]

It is against this backdrop that Meta has decided to loosen its grip, however slightly. “As this topic has heated up, we have felt like we needed to find a way to share data in a responsible way, in a privacy-preserving way,” Curtiss Cobb, a vice president of research at Meta, told me. “It’s reasonable for people to have these questions. If we have the data that can illuminate it, and it can be shared in a responsible way, it’s in all of our interests to do that.”

Outside experts I talked with had mixed opinions on the project. Thorp pointed out that Meta has ultimate control over the data that are handed over. Candice Odgers, a psychologist at UC Irvine who studies the effects of technology on adolescent mental health and has written on the subject for The Atlantic, said the pilot program is a decent, if limited, first step. “Scientifically, I think this is a critical step in the right direction as it offers a potentially open and transparent way of testing how social media may be impacting adolescents’ well-being and lives,” she told me. “It can help to ensure that science is conducted in the light of day, by having researchers preregister their findings and openly share their code, data, and results for others to replicate.” Researchers have long called for more data sharing from Meta, Odgers noted. “This announcement represents one step forward, although they can, and should, certainly do more.”

Notably, Meta has been a complicated research partner on similar projects in the past. The political-partisanship studies published in Science came from a kindred program, though its design was slightly different: Meta played a bigger role as a research partner. As The Wall Street Journal reported, the company and the researchers ended up disagreeing about the work’s conclusions before the studies were even published. The studies were ultimately inconclusive about Facebook’s ability to drive partisanship in U.S. elections, though Meta positioned them as adding “to a growing body of research showing there is little evidence that key features of Meta’s platforms alone” cause partisanship or changes in political attitudes.

Cobb told me that Meta has eliminated some of the problems that dogged the 2020 election project by adopting a publishing format called “registered reports.” This, he said, should head off the kind of after-the-fact disputes over interpreting results that cropped up last time: Would-be researchers will be required to get their methods peer-reviewed up front, and the results will be published regardless of outcome. Cobb also noted that Meta won’t be a research collaborator on the work, as it was in 2020. “We’re just going to be providing the data,” he explained. (The company is funding the research through a grant to the COS.)

Meta, for its part, has framed the project as one that could be built upon if it proves successful. Perhaps it’s best understood as a baby step toward data transparency—and a much-needed one at that.
