
Brands will be able to “gag” trolls who comment on their ads on Facebook and Instagram
Brand ads on Facebook and Instagram: the trolls that swarm social networks cause serious trouble for brands, not only in organic posts but also in the advertisements that make their way onto these channels. Aware of this problem, Meta has released a new “brand safety” tool that will allow brands to silence comments on their advertising campaigns on Facebook and Instagram.
The company led by Mark Zuckerberg has announced updates to its “brand safety” platform, which had been in the works for years before making its official debut last year.
Meta’s efforts to strengthen the sacrosanct “brand safety” in its domains coincide (as chance would have it) with TikTok’s recent alliance with Zefr, a company specializing in ad metrics and viewability. Under this agreement, brands will be able to avoid appearing next to TikTok videos that touch on potentially sensitive topics (which will be part of an exclusion list).
The eagerness with which Meta and TikTok are striving to provide brands with better safety options on their respective platforms shows that the industry continues to work on “brand safety” after Elon Musk (X) forced the collapse, a few months ago, of the Global Alliance for Responsible Media (GARM), the coalition that ensured the protection of advertisers in online environments.
With the newly released option to mute comments on ads on Facebook and Instagram, brands will be able to turn down the (often loud) volume of harassment and misinformation that frequently poisons sponsored posts on these two platforms.
Beyond this option, Meta has also introduced other new features dedicated to “brand safety”, including so-called exclusion lists, which prevent brand advertisements from being placed on the profile pages of undesirable users or media outlets. Meta only recently launched advertising on Facebook and Instagram profiles, and exclusion lists will allow brands to avoid placements that are not aligned with their values, according to AdAge.
Meta has also strengthened its alliance with Integral Ad Science (IAS), which will help advertisers avoid having their ads appear next to content that addresses topics they are not comfortable with.
“Brand safety” has become a headache for brands
The changes now presented by the parent company of Facebook and Instagram build on the adjacency controls Meta introduced last year, which allow brands to prevent their ads from appearing next to user-generated content that is not in line with their “brand safety” principles.
Meta, TikTok, Google and other platforms are working on new methods to control where brands’ ads appear and to measure the effectiveness of their campaigns in safe environments. Until a few months ago, they collaborated closely with GARM on standards aimed at “brand safety”.
Last August, however, Elon Musk decided to sue GARM, arguing that it was conspiring to orchestrate a boycott against X and thereby silence the most conservative voices. That lawsuit forced GARM, which was part of the World Federation of Advertisers (WFA), to shut down, which in turn has pushed online platforms to look for individual solutions to meet advertisers’ “brand safety” needs.
After all, social networks, awash in “deepfakes” and “fake news”, have ended up becoming a real ordeal for brands when it comes to “brand safety”. And even when social networks do what they can to safeguard brands’ reputations in their domains, it is more than evident that the problem is alive and well. Just this summer, Adalytics published a report documenting glaring failures in the area of “brand safety”.
The arrival of AI on the scene has also exacerbated the “brand safety” problem on social networks, which, on the one hand, are enthusiastically throwing themselves into the arms of this technology and, on the other, harbor a copious volume of “fake news” born directly from the belly of AI. In this increasingly complex landscape, social platforms and brands are forced to come up with their own solutions to defend themselves against the threats looming over “brand safety” now that they are no longer under GARM’s collective protection.