Facebook Pays $52 Million to Content Moderators With PTSD



Facebook has announced it will pay $52 million to content moderators who say they have been plagued by mental health issues as a direct result of their work for the platform.


Facebook’s announcement comes after a class-action lawsuit brought on behalf of moderators who said they developed a range of mental health issues, including addiction, depression and, in extreme cases, post-traumatic stress disorder (PTSD), during their time moderating content for Facebook.


The settlement agreement has been filed in a California court, with a judge expected to sign off on it later this year. According to the BBC, the settlement “covers moderators who worked in California, Arizona, Texas and Florida from 2015 until now. Each moderator, both former and current, will receive a minimum of $1,000, as well as additional funds if they were diagnosed with PTSD or related conditions. Around 11,250 moderators are eligible for compensation.”


There are also reports that eligible former moderators could receive multiple payments, one for each diagnosed condition, totalling up to $50,000 in damages per employee.


The moderators in question say that after repeatedly viewing traumatic content such as rape, suicide and violence posted to the social network, they were left with considerable emotional damage which the social media giant failed to address.


Steve Williams, a lawyer representing the moderators, said: “We are so pleased that Facebook worked with us to create an unprecedented program to help people performing work that was unimaginable even a few years ago… The harm that can be suffered from this work is real and severe.”


As part of the settlement, Facebook said it would reshape its tools for moderators, muting a video’s audio by default and displaying footage in black and white to minimise the potential for emotional damage. These changes are slated to be rolled out to 80% of its moderators by the end of the year, and to all of them by 2021.
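Facebook has not published how its review tools implement these defaults, but the general idea is straightforward to sketch. The snippet below is a minimal, hypothetical illustration, assuming the ffmpeg command-line tool is available (the article names no specific software), of stripping a clip’s audio and desaturating it before it reaches a reviewer.

```python
# Illustrative sketch only: the article does not describe Facebook's internal tooling.
# Requires the ffmpeg command-line tool to be installed and on the PATH.
import subprocess

def soften_clip(src: str, dst: str) -> None:
    """Re-encode a clip so it plays muted and in black and white for review."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,          # input clip queued for moderation
            "-vf", "hue=s=0",   # drop saturation to zero -> black and white
            "-an",              # remove the audio stream entirely (muted by default)
            "-y",               # overwrite the output file if it already exists
            dst,
        ],
        check=True,
    )

# Hypothetical usage:
# soften_clip("reported_clip.mp4", "reported_clip_review.mp4")
```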


Facebook has also said it will lean more heavily on machine learning and artificial intelligence to moderate content going forward, with a spokesperson adding that the company is “committed to providing them additional support through this settlement and in the future.”


In addition, moderators who regularly view disturbing content will gain access to weekly sessions with a mental health professional, monthly group therapy sessions and, where necessary, a licensed counselor. Facebook also says it will screen future moderator applicants for emotional resilience, post more information about the psychological support on offer, and inform moderators of how to report violations of workplace standards.


The Verge, which first broke news of the settlement, writes that “in September 2018, former Facebook moderator Selena Scola sued Facebook, alleging that she developed PTSD after being placed in a role that required her to regularly view photos and images of rape, murder and suicide. Scola developed symptoms of PTSD after nine months on the job. The complaint alleged that Facebook had failed to provide them with a safe workspace.”


Selena Scola was one of many moderators hired in the wake of the 2016 US presidential election, after which Facebook faced harsh criticism for failing to crack down on fake news and harmful content on its platform. The company responded by engaging large consulting firms such as Accenture, Cognizant, Genpact and ProUnlimited to contract thousands of moderators on its behalf.


One of those firms, Accenture, reportedly asked applicants to sign a form acknowledging the risks of the job and the potential for emotional trauma.


Facebook released its fifth Community Standards Enforcement Report earlier this week, stating that the company had detected 90% of “hate speech content” through the use of human moderators and AI. The company said that it is now developing a neural network named SimSearchNet that is able to identify copies of images that contain misleading or false information. This will, according to Facebook’s CTO Mike Schroepfer, allow human moderators to focus their attention on “new instances of misinformation,” instead of “near-identical variations” of images.
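SimSearchNet itself is proprietary and the report gives no implementation detail, but the underlying task of spotting near-identical variants of a known image can be illustrated with a simple perceptual-hash comparison. The sketch below is a generic stand-in for that idea, not Facebook’s method; the `imagehash` and `Pillow` libraries, the file names and the distance threshold are all assumptions made for illustration.

```python
# Illustrative sketch only: a generic near-duplicate image check, not SimSearchNet.
# Requires the third-party packages: pip install imagehash pillow
from PIL import Image
import imagehash

# Perceptual hashes of images already labelled as misinformation (hypothetical store).
known_hashes = [imagehash.phash(Image.open("flagged_example.png"))]

def is_near_duplicate(path: str, max_distance: int = 8) -> bool:
    """Return True if the image is a close variant of a known flagged image."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two imagehash values yields their Hamming distance; small
    # distances indicate near-identical images (crops, re-compressions, filters).
    return any(candidate - known <= max_distance for known in known_hashes)

# Near-duplicates can then be handled automatically, leaving genuinely new
# images for human reviewers, which is the workflow the article describes.
```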
