Introduction
Mental health and online harms
Data collection and ads
Conclusion


Introduction

From Facebook's 'thumbs up' to Reddit's 'upvote' and Instagram's and TikTok's 'likes', so-called 'vanity metrics' are ubiquitous in social media, and 'double-tapping' has become so habitual that for many it is almost a reflex. 'Likes' have become a form of currency: they are often used as a measure of a post's popularity and of follower engagement, and as such are highly valuable for influencers who are paid for the content in their posts.(1)

It may then come as some surprise that Facebook-owned Instagram has recently announced a trial that hides the number of 'likes' users receive from their followers, in a move that has been dubbed as signalling the 'death of influencers'. In the trial, affected users will not see the number of 'likes' on their own posts unless they click through deliberately, and 'like' counts on other users' posts will not be visible at all.

Whilst those who have fallen victim to accidentally 'liking' a 52-week-old photo might breathe a sigh of relief at the changes, the proposed removal of the double-tap may have some unintended consequences, particularly for those using social media to build a business.

Mental health and online harms

Instagram's rationale behind the trial is an admirable one: to "remove the pressure" of worrying about how many 'likes' a post will receive and to improve mental well-being. This is well-timed in light of the increasing pressure that policy makers globally are putting on social media companies to tackle the harmful effects of their platforms.

For example, in its Online Harms White Paper the UK government has promised that it will introduce a new statutory duty of care "to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services." Further, the specific role that 'likes', along with other strategies to extend user engagement, play in contributing to harms online has been highlighted by the Information Commissioner's Office (the UK's data protection regulator) in its new draft code of practice on children's privacy, which states that online service providers should not use "reward loops" (such as 'likes') "that seek to exploit human susceptibility to reward, or anticipatory and pleasure seeking behaviours in order to keep children engaged" to maximise data collection. Similarly, the UK Chief Medical Officers have recommended that tech companies "recognise a precautionary approach in developing structures and remove addictive capabilities" to safeguard the mental health of children and young people, calling on the technology industry to create a voluntary code of conduct to regulate these issues. Perhaps, then, the trial can be considered a precautionary step taken to self-regulate in advance of incoming legislation, in the hope of staving off further regulatory scrutiny and addressing some of its critics' concerns.

So far, so good. But by removing 'likes' as users' main source of a digital pat on the back, the likelihood is that users will look to other forms of interaction for their dopamine hit. This could manifest itself as a shift away from 'likes' and towards comments as the primary form of engagement – a potentially dangerous development. Comments have the potential to do much greater damage than 'likes': the words used can be negative (and perhaps even harmful and abusive) as well as positive, whereas the worst a user can do with a 'like' is simply withhold it.

In light of policy makers' proposals to crack down on online harms, such a shift could increase the already heavy burden on digital content providers and social media platforms to provide tools to monitor, prevent and remove harmful content. Harnessing the power of technology, such as machine learning, to build such tools into platforms and content from the start, as acknowledged in the UK government's green paper on the Internet Safety Strategy, could however help to achieve this. In this respect, Instagram's new feature that uses AI to detect inappropriate or offensive comments as a user is typing and asks the user to reconsider before posting might come in very useful.

Data collection and ads

A secondary consequence of removing 'likes' is that, without pressure to conform to popular content trends (determined by the amount of 'likes' a post receives), users may be encouraged to share more. Whilst this might encourage more creativity, innovation and creation amongst user communities, concerns have emerged amongst sceptics that this is an attempt by platforms to enhance usage, maximise data collection and, as a result, maintain influence and market reach. To prove the sceptics wrong, tech platforms will need to ensure they have effective measures and policies to protect the increased data that comes with increased sharing.

Some critics have predicted that removing 'likes' could stop posts from travelling as far as they otherwise would, as users are more likely to 'like' and share content that their friends have liked. What, then, for the nascent influencer industry? One concern is that, should 'like' counts be removed, brands might be more willing to spend their marketing budgets on sponsored posts and targeted ads – boosting revenue for the social media platforms and the adtech market – rather than on emerging influencer marketing routes. Platforms should tread carefully in this respect: the Competition and Markets Authority (CMA) is due to publish a report in 2020 examining concerns around the market power of online platforms and competition in the supply of digital advertising, and the potential for these, if substantiated, to lead to direct consumer harm. And for influencers, it will be increasingly important to tap into alternative engagement strategies to ensure that they continue to drive reach and maintain demand.

As for the use of 'likes' by the influencer economy as a measure of success, the effect of removing them is unlikely to be as profound as predicted. 'Likes' are now considered a dated metric for brands to rely upon, as 'likes' and other visual metrics can easily be bought or manipulated by bots. Further, Instagram has confirmed that the removal of public 'like' counts would not affect measurement tools for businesses and creators, leaving intact alternative (better) indicators of whether an account is getting good engagement, and thus whether a particular influencer is worth working with.

Conclusion

As the impact of the use of social media to communicate content continues to come under the spotlight, regulatory focus on social media platforms is unlikely to let up. For these platforms, it is important to consider the impact that shifting engagement trends might have on user-generated content in the context of online harms. Platforms should continue to look at what can be done to self-regulate, whether this is done through the removal of 'addictive' structures such as visual metrics, or safeguards built into mechanisms that fuel the creation of user-generated content, to ensure that the online world is a supportive place. For influencers and those using social media channels for monetary gain, the effect of removing visual metrics should not be as drastic as predicted providing that influencers focus on more useful metrics (such as brand affinity or click-throughs) to pitch to brands and measure success. Influencers may need to rethink their engagement strategies as well as their partnership and compensation models with brands in order to reflect the changes.

For further information on this topic please contact Bryony Gold at Bird & Bird LLP by telephone (+44 20 7415 6000) or email ([email protected]). The Bird & Bird LLP website can be accessed at www.twobirds.com.

Endnotes

(1) This article was originally published on MediaWrites – a news hub by the Media Group of international law firm Bird & Bird – https://mediawrites.law

This article has been reproduced from Lexology – www.Lexology.com.