Meta stopped an internal study on Facebook's impact on mental health after its own research showed the platform could harm users, according to court documents from a lawsuit filed by U.S. school districts. The documents reveal that in 2020, Meta ran a project called "Project Mercury" with the survey firm Nielsen. The study examined what happened when people took a week-long break from Facebook: those who deactivated their accounts reported lower levels of depression, anxiety, loneliness, and social comparison.
Despite these findings, Meta chose not to continue the research or share the results publicly. Instead, the company dismissed the study, attributing its results to the influence of negative media coverage of Meta. Yet some employees argued privately that the findings were valid. One researcher pointed to the clear link to harmful social comparison, while another warned that withholding such information resembled the way the tobacco industry concealed the dangers of cigarettes for years.
The lawsuit also accuses Meta of misleading Congress by claiming it could not measure the harm its products caused to teenage girls. A Meta spokesperson said the study was halted because its methodology was flawed, and defended the company's efforts to improve safety on its platforms.
This case is part of a larger legal battle in which several social media giants, including Google, TikTok, and Snapchat, face accusations from school districts. The plaintiffs say these companies conceal from parents, teachers, and users the risks their platforms pose to children. They claim the platforms encourage children under 13 to join, fail to effectively combat child sexual abuse content, and have pushed to get more teens using social media during school hours. There are also allegations that some companies tried to influence child-focused organizations to publicly defend their products.
The documents accuse Meta of designing youth safety features to be ineffective and of blocking tests that might hurt user growth. They describe disturbing policies, such as requiring that a user be caught trying to traffic people 17 times before being banned. Meta reportedly knew that boosting teen engagement exposed teens to more harmful content but continued anyway. Internal efforts to stop predators from contacting minors were stalled, with safety teams pressured to justify the delays. Even Mark Zuckerberg reportedly said child safety was not his top priority, focusing instead on other projects such as building the metaverse. Requests from Meta's former head of public policy for better child safety funding were ignored.
Meta disputes these claims, saying its teen safety measures are effective and that accounts linked to sex trafficking are removed promptly. The company argues the lawsuit relies on selective quotes and misinformed opinions to paint an inaccurate picture. The internal documents cited in the suit remain under seal, and Meta has asked the court to limit what is made public. A hearing on these matters is scheduled for January 26 in Northern California.
This case adds to the growing scrutiny of how social media companies handle user safety, especially for teenagers. It raises serious questions about transparency and the responsibility platforms have to protect young users from harm.