Published on Apr 02, 2020
Humanising digital labour: The toll of content moderation on mental health

When tech giant Facebook banned a user who had uploaded a photo of herself breastfeeding her youngest child in November 2018, it found itself embroiled in a controversy over its questionable content moderation policies. In this case, the user was served a notice citing the reasons for the post’s takedown: the post was deemed to carry sexual content, or more precisely ‘sexual content involving a minor’ or ‘depictions of sexual violence.’ The gross misattribution of the post’s content indicates how confused and arbitrary the application of content moderation policies can be across different social media platforms. The takedown of this post sparked a debate on the profanity of breastfeeding in public. However, questions about the frameworks social media platforms follow to determine objectionable content, decisions that affect millions of users every day, are yet to be mainstreamed in media debates on online safety.

Similarly, the affective labour involved in content moderation, work tasked with the responsibility of ensuring a safe internet without compromising a user’s freedom of expression, is rarely subject to public scrutiny. The cognitive load of such a job and its effects on mental health and resilience must not be underestimated. For instance, content moderators in India are often paid less than their counterparts in Silicon Valley and sometimes have to make split-second decisions on more than 2,000 images in an hour to ascertain whether or not they offend public sensibilities around pornography or violence. Such decisions carry significant consequences if made incorrectly, compounding the mental health implications and the high pressure of this profession. Given the explosion of demand for moderators following platforms’ public commitments to, and investment in, content scrutiny, it is critical to look at the factors that affect the mental health and well-being of digital labour. This examination matters because it highlights the precarious working conditions of content moderators and helps explain how work-related stress affects their mental health and impedes their ability to decide what a user does or does not see on a platform.

‘Just a body in a seat’

The mental health of content moderators is adversely affected by both the nature and the demands of content moderation work. The work is mostly outsourced to countries like India and the Philippines, where employees are hired on a contract basis, making them ineligible for health benefits or other perks associated with the job. Facebook alone employs around 20,000 moderators for its 2.38 billion users. Given the volume of content uploaded to social media servers every second, even this substantial human force struggles to vet the data effectively. Apart from having to sift through numerous posts of murder, rape and other repugnant and inhuman acts, the job involves making judgements on the intention and contextual relevance of the content. This is not limited to Facebook. Moderating content on all popular digital media platforms requires nuanced and culturally sensitive interventions, a capacity that may erode in those exposed to gruesome content on a daily basis through the continuous, almost mechanical repetition of sifting through copious quantities of material. For example, moderators are frequently expected to differentiate between child pornography and iconic photos of the Vietnam War on the grounds of whether a post highlights human rights abuses or glorifies violence. Hate speech guidelines, on the other hand, are not enforced uniformly across platforms, leading to discrepancies that allow certain kinds of racist and homophobic content to remain online. In other cases, platforms stretch constitutional free speech protections to allow users unfettered expression. Thus, the grounds for these judgements are highly culture-specific and lack universality, reinforcing the heightened pressure on content moderators.

Performance appraisal mechanisms for content moderators, too, can take a toll on an employee’s mental health. A moderator’s performance is measured through accuracy scores that calculate the inter-rater reliability of their judgements vis-à-vis others in the office. There is mounting pressure to “correctly” judge the veracity of content, with an emphasis on consensus. In his provocative coverage of the subject, Casey Newton found employees being threatened, sometimes with dire consequences, over disagreements about how content was moderated. The non-disclosure agreements that employees have to sign are ostensibly meant to protect users’ personal information, but are often weaponised to silence moderators about the emotional toll of the job. Despite the legal risks involved, a few employees narrated their experiences, including the cover-up of the death of a 42-year-old man who suffered a heart attack while on the job. The stress caused by exposure to hideous content, coupled with the pressure to comply with arbitrary standards of judging it, increases the mental health burden on employees. Moderators develop what psychologists call ‘secondary trauma’, which mirrors many of the symptoms of post-traumatic stress disorder (PTSD), often in response to repeatedly witnessing and evaluating trauma-inducing visuals or texts of violent or perverse content. Content moderators have shown difficulty adapting to work-related stress. As Newton reports, moderators turn to the “dopamine rush of sex and incessant marijuana use”: “We’d go down and get stoned and go back to work ... that’s not professional ... knowing that the content moderators for the world’s biggest social media platform are doing this on the job, while they are moderating content.” This excerpt from Newton’s article hints at moderators displaying signs of ‘maladjusted coping,’ a pattern of troubling, indulgent and self-destructive behaviour among individuals unable to psychologically adapt to external stressors.
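
To make the appraisal mechanism concrete, the short sketch below computes Cohen’s kappa, one standard measure of inter-rater reliability, between a moderator’s decisions and the office consensus. The metric, the labels and the sample data are assumptions chosen purely for illustration; they do not describe any platform’s actual accuracy-scoring system.

```python
# Illustrative sketch: scoring a moderator's agreement with colleagues.
# Cohen's kappa, the labels and the sample data are assumptions for
# illustration only, not any platform's real appraisal formula.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two sets of decisions, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement if both raters labelled posts at random,
    # each with their own observed label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)

    return (observed - expected) / (1 - expected)

# Decisions ("keep" or "remove") on the same batch of ten posts.
moderator = ["keep", "remove", "keep", "keep", "remove", "keep", "remove", "keep", "keep", "remove"]
consensus = ["keep", "remove", "keep", "remove", "remove", "keep", "remove", "keep", "keep", "keep"]

print(f"kappa = {cohens_kappa(moderator, consensus):.2f}")  # ~0.58, i.e. only moderate agreement
```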

Harsh working conditions, characterised by designated bathroom breaks and a meagre “nine minutes of wellness time”, make maladjusted coping seem like the most viable option. The stress resulting from such working conditions is further exacerbated by the downplaying of mental health care at the workplace. In response to growing allegations, certain social media giants have reiterated their commitment to safeguarding their employees’ mental health and have placed clinical psychologists on call. However, an open letter drafted by content moderators for one of the world’s leading social media platforms in Austin alleged that Accenture managers repeatedly coerced on-site counsellors into breaking patient confidentiality. Although Accenture refuted these allegations, such fault lines between workers and management are bound to affect organisational morale. This distrust is captured in the plight of an employee who believes, “we are trash to them, just a body in a seat”.

Transparency vs Privacy

Content moderation is a contentious issue, even though there is little doubt about its necessity in an online environment that can turn vitriolic, especially in politically volatile times. Yet the human cost of moderation is immense and often absent from discussions on online safety. Decentralising content moderation systems, using artificial intelligence, and improving wages and working conditions for moderators are some of the solutions proposed to this worrying problem. At the moment, however, the jury is still out on the long-term efficacy of any of these solutions. Technologists are developing software for speech recognition, image classification and natural language processing (NLP) that can detect and remove offensive content without the need for human mediation. Currently, algorithms have been trained to detect nudity with up to 96 percent accuracy. However, such systems frequently flag content featuring ‘safe nudity’, such as a breastfeeding image or a Renaissance painting, which remains problematic.
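
A minimal sketch of how such automated triage might work, and why ‘safe nudity’ is so often misclassified, is given below. The `nudity_score` function is a placeholder standing in for any trained image classifier, and the thresholds are arbitrary assumptions; a real model sees only pixels, not context, which is why a breastfeeding photo can score like violating content.

```python
# Illustrative sketch of automated triage for potentially nude imagery.
# `nudity_score` is a placeholder for a trained classifier (not a real
# library call) and the thresholds are arbitrary assumptions.

def nudity_score(image_bytes: bytes) -> float:
    """Placeholder for an image classifier returning P(nudity) in [0, 1]."""
    return 0.72  # a real model would compute this from the image pixels

def triage(image_bytes: bytes) -> str:
    score = nudity_score(image_bytes)
    if score < 0.30:
        return "allow"         # confidently benign: never reaches a human
    if score < 0.90:
        return "human_review"  # ambiguous cases (artworks, medical and
                               # breastfeeding images) land in a moderator's queue
    return "remove"            # confidently violating: taken down automatically

if __name__ == "__main__":
    print(triage(b"example image bytes"))  # -> "human_review" with the placeholder score
```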

Meanwhile, systems analysts advocating more transparency in content moderation policies have argued that it must be supplemented by a system for holding companies accountable to users and to independent institutions. At the moment, very few platforms report information on how they enforce their terms of service or community rules, which obstructs detailed analysis by third parties of how moderation systems actually work and of where they could improve. The Santa Clara Principles (2018), a joint declaration signed by a group of civil society organisations and academics, urge platforms to give users individual notices about specific content decisions and to publish aggregated information on their interventions globally. Demands for more transparency, however, clash with concerns over privacy as platforms opt for end-to-end encryption of messages. These initial steps are encouraging and need to be backed by narrow, empirically grounded arguments about the individual technologies that compose the social media ecosystem. Transparency, coupled with reducing the human component of digital labour, may help alleviate some of the mental health implications of content moderation.

The views expressed above belong to the author(s).

Contributor

Prithvi Iyer

Prithvi Iyer was a Research Assistant at Observer Research Foundation Mumbai. His research interests include understanding the mental health implications of political conflict, the role ...
