Instagram and Facebook to hide harmful content from teenagers
The move follows global regulatory pressure on Meta to protect children from inappropriate content on its apps
On Tuesday, Meta said it would hide more types of sensitive content from teenagers on Instagram and Facebook, responding to pressure from regulators around the world to shield children from harmful material on its apps. The changes are designed to make it harder for teens to come across content related to suicide, self-harm, and eating disorders through features such as search and explore on Instagram. Meta said all teen accounts will default to the most restrictive content control settings on both Instagram and Facebook, and that additional search terms will be restricted on Instagram, according to a blog post.
“Our aim is to provide teens with safe and age-appropriate experiences on our apps,” the blog post said, announcing additional protections focused on the content teenagers see on Instagram and Facebook.
Even if a teenager follows an account that posts about sensitive topics, those posts will be removed from the teen’s feed, Meta said in the blog. The company expects the measures, which will roll out over the coming weeks, to make teens’ experiences more “age-appropriate.”
“For instance, consider someone sharing their ongoing struggle with thoughts of self-harm. While this is a significant narrative that can destigmatize these issues, it is a nuanced subject and may not be suitable for all young individuals. Henceforth, we will begin removing such content from teenagers’ experiences on Instagram and Facebook,” the company wrote in its blog post.
Meta faces regulatory pressure in both the United States and Europe over allegations that its apps are addictive and have helped fuel a youth mental health crisis. In October, attorneys general from 33 U.S. states, including California and New York, sued the company, accusing it of repeatedly misleading the public about the dangers of its platforms. In Europe, the European Commission has requested information on the measures Meta takes to protect children from illegal and harmful content.
This regulatory scrutiny intensified after a former Meta employee, Arturo Bejar, testified before the U.S. Senate, asserting that the company was aware of harassment and other threats faced by teenagers on its platforms but neglected to take corrective action.
Bejar called for design changes on Facebook and Instagram to nudge users toward more positive behavior and to give young people better tools to manage unpleasant experiences. He testified that his own daughter had received unwanted advances on Instagram, a problem he raised with the company’s senior leadership, but said Meta’s top executives dismissed his appeals.
Children have long been an attractive demographic for businesses, which hope to capture them as consumers at an impressionable age and build lasting brand loyalty.
For Meta, which has competed fiercely with TikTok for young users in recent years, teenagers are key to attracting advertisers, who expect those users to keep buying their products as they grow up.