Instagram under fire for encouraging self-harm among adolescents


Instagram has been in the eye of the storm on countless occasions for allegedly damaging the health of young people. Now the Meta-owned social network faces serious accusations once again over the way content disseminated on its platform encourages serious health problems among adolescents. Research recently undertaken in Denmark by the organization Digitalt Ansvar shows that Instagram is flagrantly failing to remove content directly related to self-harm, allowing it to reach the eyes of the youngest users and ultimately generate interactions.

Digitalt Ansvar created a private network focused on self-harm on Instagram. This private network included “fake” profiles of young people, some as young as 13, through which 85 pieces of self-harm content were shared. The content disseminated through the “fake” profiles created by the researchers progressively increased in severity and included images of blood and blades as well as explicit calls for self-harm.

The objective of the report undertaken by Digitalt Ansvar was to determine whether Meta had really improved, as Instagram’s parent company claims, the processes focused on removing potentially harmful content distributed through its platforms. With the help of AI, Meta boasts of removing 99% of the potentially harmful content that makes its way onto its platforms.

However, the research undertaken by Digitalt Ansvar, which lasted a month, suggests that Meta’s effectiveness in removing harmful content from Instagram is practically nil. Not a single one of the images distributed through the “fake” profiles created by the researchers was deleted by Meta.

Instagram does not detect or remove content directly related to self-harm on its platform

The results of the study show that Meta does not comply with the legislation currently in force in the European Union (EU), says Digitalt Ansvar. After all, under the EU Digital Services Act (DSA), large digital platforms are obliged to identify and mitigate risks that directly affect the physical and mental well-being of their users.

“Content that promotes self-harm is against our policies and we remove this type of content when we detect it. During the first half of 2024, in fact, we removed more than 12 million pieces of content directly related to suicide and self-harm,” says a Meta spokesperson.

The American multinational also emphasizes that a couple of months ago it launched so-called “Teen Accounts” on Instagram to protect younger users from potentially sensitive content on the social network.

Nevertheless, Digitalt Ansvar’s research concluded that, far from clipping the wings of self-harm content, the Instagram algorithm actively contributed to its spread on the social network. Users as young as 13 ended up connecting with all the “fake” profiles created within the framework of the investigation after adding only one of the group’s members to their friends list.

According to Digitalt Ansvar, the conclusions of its report suggest that “the Instagram algorithm actively contributes to the formation and dissemination of networks focused on self-harm.”

Ask Hesby Holm, CEO of Digitalt Ansvar, insists in statements to The Observer that ineffective moderation of content directly related to self-harm can translate into “severe consequences” and ultimately lead to suicide attempts. In Holm’s opinion, Meta does not bother to moderate small private groups like the one created for the research in order to keep traffic and engagement as high as possible. “We don’t know for sure if Meta adequately moderates larger groups, but the problem is that most self-harm-oriented groups are small,” he adds.

For her part, Lotte Rubæk, a psychologist who left a global expert group on suicide prevention created by Meta last March because the multinational was apparently not doing enough to stop harmful content on Instagram, admits she is not surprised by the conclusions of the Digitalt Ansvar report. Rubæk is, however, surprised that Instagram did not even delete the most explicit images disseminated by the group created by the researchers.

“Meta repeatedly assures in the media that it constantly improves its technology and that it has the best engineers in the world. However, the results of the Digitalt Ansvar study show that what the multinational says in this regard is not true,” Rubæk emphasizes.


