Behind the Curtain: Exploitation of Content Moderators and Steps for Prevention
Abstract
This topic was chosen for this study primarily because it is often overlooked in the world of internet media. In the current internet era, everyone is encouraged to voice personal opinions and concerns, which can be either constructive or destructive, and content moderators strike the balance between the two. The importance of content moderation is rarely emphasized because the work happens behind closed doors and is seldom open to public review; partly as a result, the working conditions for moderators are often dismal.
Thousands of content moderators work for giant tech companies through outsourcing arrangements. Moderators are placed in a high-stakes environment that demands near-perfect accuracy. The situation can be improved by adhering to a few guidelines for this type of work: setting limits on the amount of time spent on moderation, reducing stress in the work environment, and building a supportive work culture in which workers can flourish. This article explains in detail the steps for prevention and safety, for instance how employers can follow safety protocols for content moderators and how employees should be provided free, unlimited medical support such as counseling sessions.
Artificial intelligence tools are used in content moderation to make the work easier, faster, and less demanding. AI certainly has its limitations, mainly because algorithmic systems can perform badly when data is lacking or training data is biased. AI tools designed to detect images and analyze videos face many challenges that can become serious obstacles during moderation. This article also briefly explains how AI is used to classify obscenities and extremist content in videos and different variations of images.
Keywords – Mental health, Labor exploitation, Content Moderation, AI
Introduction
An estimated 100,000 people work today as commercial content moderators (1). A content moderator is responsible for viewing and evaluating user-generated content. Commercial content moderators are not only responsible for content regulation; they are also expected to guard the company's reputation and work against anything that would cause brand damage. User-generated content (UGC) can be anything, from harmless spam messages to videos and images of beheadings, rape, or child sexual abuse. Without proper content moderation, the internet would become a chaotic space, making it impossible for people to use internet media productively. The internet can sometimes seem like a vast, fraudulent, and ever-expanding immoral haven, and it is up to content moderators to clean up this "digital pollution". As a consequence, they tend to overwork themselves and continue their duties even during personal time. Numerous articles and case studies on this subject point to a clear inference: content moderators are not treated with due respect in terms of pay or health services. Their job role is hidden for safety reasons, and all content moderators have to sign non-disclosure agreements; because of this, employers are able to minimize the visibility and importance of the human work involved. Content moderators are not given the latitude to speak up or share their mental pain even with close friends and family.
Facebook employs a minimum of 15,000 content moderators globally, yet these moderators are not directly hired by Facebook; they are hired by third-party vendors. The activity of content moderation somehow does not fit into Silicon Valley's self-image, and moderators are underpaid and given low-status jobs. Because content moderators are exposed to disturbing content, usually involving repeated viewing of negative media, they are often diagnosed with PTSD and trauma along with exhaustion and burnout. Regardless, most employers do not seem to care about their working conditions and do not provide any sort of wellness program. Thousands of content moderators are outsourced from India and the Philippines. This paper dives into the ethical issues concerning this subject and the exploitation of moderators' labor status. I point out the mental health conditions caused by such work and the subpar remediation measures employers engage in, and I compare the working conditions of content moderators in the US with those in developing countries like India and the Philippines. This article places particular emphasis on the mental health of content moderators. After studying several papers, I formulated a list of steps to help prevent a health crisis for content moderators. I believe these steps will improve moderators' physical and mental well-being while empowering them in their field of work.
I believe AI tools will play an important role in the future of content moderation, for better or worse. When AI tools and human judgment are combined, the job can be done more efficiently, ultimately reducing exposure to explicit content. This article explains a bit about the AI tools used in content moderation and briefly discusses the downsides and upsides of AI and its future potential in content moderation.
Current Situation
Content moderation can seem inconsistent and sometimes confusing. It relies largely on community policing, that is, users reporting other users when they see hateful content or content that violates community standards. This leads to inconsistency, as some users are affected much more heavily than others. A person with a public profile and a large following tends to garner more attention and is therefore moderated more intensely than a person with a private account; popular social media influencers, for example, are targeted far more readily than people with smaller followings. When there is a trend, or sometimes a 'threat', platform companies tend to overreact as a result of the huge pressure they face from the public. For example, Facebook implemented a policy on 'sexual solicitation' that quickly became a magnet for trolls (11). Many platforms have now adopted filtering systems that maintain lists of words, IP addresses, and email addresses associated with hateful or violent content; matching content is then removed or flagged for moderation. This certainly helps human content moderators by reducing their workload, but such filters are not effective solutions, for several reasons: (I) they are difficult to maintain, since the lists must be updated, reviewed, and managed manually; and (II) they are easily manipulated, because when a letter is replaced or removed the filter can no longer recognize the term (12). The sketch below illustrates how trivially such keyword filters can be evaded.
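To make concrete how easily keyword filters are bypassed, here is a minimal Python sketch of a blocklist filter. The blocklist and the sample messages are invented for illustration; real filter lists are far larger, but they share the same weakness.

```python
# Minimal sketch of a keyword blocklist filter (illustrative terms only).
BLOCKLIST = {"badword", "slur"}  # hypothetical entries, maintained by hand

def is_flagged(message: str) -> bool:
    """Flag a message if any token matches the blocklist exactly."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

print(is_flagged("this contains a badword"))   # True  -> caught
print(is_flagged("this contains a b4dword"))   # False -> one swapped letter evades it
print(is_flagged("watch out for that slur!"))  # True  -> punctuation stripped, still caught
```

Exact matching also explains the maintenance burden noted above: every new spelling variant has to be added to the list by hand.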
During the 2020 presidential election, the social platform Parler experienced immense growth, gaining 4 million users in its first two weeks (12). Parler attracted a large conservative audience that saw the platform as an alternative to mainstream social media platforms like Facebook and Twitter, which were accused of liberal bias. Parler was touted as the 'free speech alternative', with a permissive content moderation policy (12). It relied on user complaints for content moderation, which were then reviewed by contracted moderators. This weak moderation system made Parler a target for pornography peddlers, and tens of thousands of pornographic images and videos were posted to the feeds of regular users over just a few days. This disrupted the user experience and slowed the momentum of the platform's growth (12). It is just one of many real-life examples showing how effective content moderation can prevent damage to brand reputation and growth.
Due to COVID-19, Facebook sent thousands of content moderators home, and its algorithms suffered. Facebook's organic content policy manager Varun Reddy acknowledged that because many human reviewers across the globe had to be sent home during the early months of the pandemic, the feedback loop for monitoring content was fractured. AI depends on test data curated by human moderators, he explained, adding that the shortage of human reviewers diminished "how effective the AI is over time" (13). The pandemic changed the situation for obvious reasons: companies' reliance on AI tools increased in response to disrupted content moderation workforces. Toward the end of 2020, Facebook defended its decision to bring some content moderators back into its offices in the midst of the coronavirus pandemic, a day after more than 200 contractors posted an open letter asking the company to let them work from home (14). Facebook Vice President of Integrity Guy Rosen said the company needed some of its most sensitive content reviewed from an office setting and had taken steps to make those environments safe (14).
“We’re not able to route some of the most sensitive and graphic content to outsourced reviewers at home,” Rosen said on a press call. “This is really sensitive content. This is not something you want people reviewing from home with their family around” (14).
Guy Rosen
Facebook and Instagram reportedly removed more than 12 million pieces of content between March and October 2020 containing fake news and misinformation related to the coronavirus that could have caused physical harm to unsuspecting viewers (14). This content included exaggerated cures and fake preventive measures. Because of the ongoing pandemic, companies are seeking the right balance between AI and human workforces, with platforms also leaning toward expanding their use of AI for content moderation. Although AI plays a major role and can process far more content than humans, it cannot recognize variation and subtlety the way humans do. With the increasing volume of content on the internet, there is a greater need for human intervention. Given the current circumstances, where posts containing misleading health suggestions, even well-intentioned ones, circulate rapidly on social media, moderation efforts now carry added urgency and pressure.
Platforms have recently decided to direct users toward statements from official health authorities. Because of high scientific uncertainty and limited knowledge about the evolving situation, it is difficult for platforms to maintain clear guidelines about which COVID-19 information to push to users. This is further complicated by political leaders taking advantage of the situation and making statements that contradict advice from official health authorities.
Labor Exploitation
With the rapid, widespread adoption of technology, there is an unlimited flow of information on the internet. Tech companies set up rules for users who upload or create content on their platforms, and content moderators are expected to know these rules thoroughly and act accordingly. They must look at each piece of content, analyze it, and decide whether it is "safe" enough to remain online, and this entire process happens in a matter of minutes. The moderators making these calls are generally low-paid and regarded as low-status workers, and they may be working in cultural or linguistic contexts they are not even familiar with. Content moderation tends to be treated as an extra cost rather than an opportunity to innovate and develop the platform. Sometimes content moderators have to set aside their own views, follow the company's protocols, and make a decision even when they feel it is wrong. From the available information, content moderation is treated as an entry-level job, which means many young, often well-educated people are doing it. These people nominally work for Silicon Valley firms, but instead of joining them directly as full-badge employees with good career prospects, they come in through third-party outsourced contract labor, with minimal wages and poor working conditions. Social media companies have created tens of thousands of jobs around the world to analyze and remove violent or hateful content, in an attempt to regain their reputation after failing to adequately police content on their platforms, including live-streamed terrorist attacks. Silicon Valley seems to care little about the challenging work of content moderation and maintains immense discrepancies in benefits between its own employees and contractors (2).
“In Silicon Valley, there is a predisposition to favor computational solutions to just about every problem you can think of. And content moderation has been a problem that has, in many ways, eluded the full-on computational mechanism that would easily solve it” (3)
Sarah Roberts
Moderators for Twitter were often expected to review as many as 1,000 items a day, including individual tweets, replies, and messages, according to current and former workers (4). India and the Philippines are the world's major content moderation outsourcing hubs, since they have existing IT infrastructure such as call center sites and large populations with a fairly strong command of English, which is a prerequisite for moderation. Moderators in the Philippines, one of the biggest and fastest-growing hubs of such work, face another major issue in content moderation: the near absence of limits on screen time. Moderators in Manila are forced to screen terabytes of data shared from all around the world, as opposed to India and the US, where the work is usually limited to homegrown content.
Moderators in the Philippines said they were never provided with counselors at work; some remarked they had access to a counselor only once a month or once every six months, unlike moderators in the United States, who can book time with a counselor once a week (4). Though labor laws in the Philippines generally state that companies share responsibility for employees working for a contracting firm, those regulations do not apply to the Business Process Outsourcing (BPO) industry, which allows companies that outsource work to BPO firms to avoid responsibility for the employees who do that work (5). In India, workers' compensation lawsuits are adjudicated through various laws, but the country's legal framework does not consider mental health an occupational hazard (5).
Employers burden moderators with the obligation to understand cultures they may not be familiar with and ask them to moderate content in languages they do not speak, while these decisions have to be made in seconds or minutes. Moderators in developing countries also face prevalent social stigmas around mental health and a lack of healthcare access, which stand in the way of speaking openly about the psychological issues they face. While some companies do offer well-structured workplace wellness programs, moderation work differs from traditional corporate positions because of repeated and prolonged exposure to disturbing content, so the impact of such programs is not always sufficient.
Take Facebook, for example: when Facebook reached a $52 million settlement earlier this year with lawyers representing more than 10,000 former and current contract content moderators in four U.S. states, their colleagues in India and the Philippines were entirely left out (5). The payout followed a class-action lawsuit filed in September 2018 describing the circumstances of Facebook content moderators like Selena Scola, the lead plaintiff in the suit. She worked as a public content contractor for about a year at Facebook's offices in Menlo Park and Mountain View, Calif., where she was employed by the Florida-based contractor Pro Unlimited Inc. (21).
Employees in the United States claimed that their job required them to examine and screen gruesome material such as child pornography, terrorist decapitations, suicides, and rape, which led to severe psychological issues. Like their American colleagues, thousands of Asian workers are employed as content moderators by outsourcing firms like Cognizant, and they have reported grueling working conditions that leave some of them with lasting mental wounds. Facebook employs more than 15,000 content moderators globally, but while the lawsuit provided relief for some moderators in the U.S., it did not address the plight of those who worked for the company abroad (5).
Since the lawsuit, Facebook has been forced to make changes to better support content moderators, including requiring outsourcing firms to offer more psychological support. The proposed support and compensation were not extended to international moderators who had already suffered psychological harm, despite earlier comments to the media by Facebook representatives that the changes would apply to moderators working outside the US as well. Marte-Wood worries that the threat of future lawsuits in the U.S. may push moderation work to places like India and the Philippines, where labor laws are thinner and similar lawsuits less likely (6).
Mental Health
As mentioned earlier, moderators are exposed to violent and extremist content such as rapes, beheadings, murders, and child abuse videos, which can lead to lasting psychological and emotional distress, including panic disorders and depression. The main purpose of this paper is to describe the struggle and trauma moderators face through online content moderation and to identify some potential solutions to make their lives better. Many have called attention to this problem, but few have suggested solutions. Content moderators experience trauma after screening these videos, images, and hate speech while working in a field where wages are tied to quotas, so they face tremendous pressure to meet their numbers every single day. Facebook moderators go through hundreds of pieces of upsetting content during every shift (1). These posts include violent death, including suicide and murder, self-harm, assault, violence against animals, hate speech, and sexualized violence (1).
“The nature of the work demanded total psychic engagement and commitment in a way that was disturbing, because it was a flow that they could not predict, and they were always open to anything at any time. People were flocking to these platforms, in no small part, at least in the American context, because they were being led to believe, either tacitly or overtly, in some cases, that being online in this way would allow them to express themselves.”
Sarah Roberts
Studies show that repeated exposure to this kind of content can have serious consequences, including the development of PTSD. There are instances of suicide as well, highlighted in the documentary film "The Cleaners", which is about content moderators; in the film, a moderator commits suicide after repeated requests for a transfer are rejected (1). In the article "The Psychological Well-Being of Content Moderators", Dwoskin highlighted a comment made by one of the counselors working with moderators in Austin: that the work could cause a form of PTSD known as vicarious trauma (1). Vicarious trauma is the emotional residue of exposure that counselors accumulate from working with people as they hear their trauma stories and become witnesses to the pain, fear, and terror that trauma survivors have endured (7). There are also cases where employees have been threatened over disagreements about how particular content should be moderated. Non-disclosure agreements do not allow employees to talk about this, and employers use them as a weapon to silence workers, taking a major emotional toll on moderators and resulting in secondary trauma and eventual burnout (8). "Secondary trauma is defined as indirect exposure to trauma through a firsthand account or narrative of a traumatic event" (9). Sarah Roberts has researched and described the mental health of content moderators longer than almost anyone and coined the term 'commercial content moderation' for industrial-style moderation (8).
Commercial content moderation (CCM) is not an industry unto itself per se but rather a series of practices with shared characteristics that take place across a variety of worksites. CCM workers face pressures beyond moderating the content itself. For instance, real monetary and other kinds of value can be assigned to content that is deplorable but sensationalistic, often precisely because it is disturbing or distasteful (10); such content can become trending or "go viral", increasing clicks and the platform's growth. For this reason, CCM finds itself in a conflicting position, having to balance the platform's aim of attracting users and clicks against the need to protect brand reputation.
Prevention
As mentioned earlier, in the majority of these cases workers develop a form of PTSD after screening huge numbers of videos, images, and pieces of hate speech for days and weeks. As the old saying goes, 'prevention is better than cure': employers should start focusing on moderators' well-being and promote better health conditions in the workplace. It is always better to follow preventive steps than to address the harm to workers after the fact.
- Free unlimited medical access. Employers should provide free, unlimited medical support, such as counseling sessions, to their moderators. This applies to office settings all around the world, whether in the US or in developing countries like India and the Philippines (6). Twitter says it now offers daily counseling and has instructed outsourcing companies to begin offering additional psychological support to workers after they leave the job (6). Other platforms have said that moderators all over the world are free to leave their shifts early if they see a piece of content that disturbs them to the point that they cannot carry on with their work, and that they will be paid for the entire shift regardless (6).
- The employers should provide 24/7 phone support, off-site counselors, and mandatory scheduled on-site counselors.
- "The accuracy quotas, surveillance metrics and other mechanized routines of call-center work, honed for years by IT outsourcing companies, are ill-suited to a job that should require breaks, psychological support, and time to process emotionally harrowing material," says Stefania Pifer, a psychologist who runs the Workplace Wellness Project, a San Francisco-based consulting service for social media companies. "It's an old type of model struggling to fit into a new type of work," she said. "And it can lead to unethical working conditions." (6)
- Follow safety standard protocols. Employers should follow certain safety protocols for content moderators, for example limiting the amount of time per day of exposure to violent graphic content. Another option is to give moderators the choice to opt out of viewing specific kinds of content that might trigger them.
- Companies should acknowledge and compensate past harm caused by the job.
- “The Guidebook”. In January 2015, the “Employee Resilience Guidebook for Handling Child Sex Abuse Images” (hereinafter “The Guidebook”) was published as a rule of thumb for employee management in content moderation (6). The Guidebook advises limiting the amount of time employees are exposed to such content. The prevailing literature on the topic makes clear that employees are often forced to screen such content all day long without a proper break; this should be discouraged.
- The Guidebook recommends that companies “have a robust, formal ‘resilience’ program in place to support an employee’s well-being and mitigate any effects of exposure.” Employees of a number of subcontractors in the Philippines report no such program (6). Efforts must be made to enforce such programs.
- Clear consent to the job role. A multitude of articles suggests that moderators are often taken by surprise when they have to screen child pornography or beheadings as part of their job. Employers should obtain clear, informed consent, providing all the information moderators need to understand what the role entails.
- There should be clear communication between the main platform companies and the subcontractor on the risks that the employees are getting involved in.
- Platform companies should fund training programs and other initiatives focused on all the kinds of content that the content moderators are likely to encounter on the job.
- Commission studies on content moderation. One concrete, extensive step companies can take is to band together, potentially through one of the industry associations such as the Internet Association or the Software Alliance, and commission a study on the working conditions of content moderators which would include recommendations for improvements (6).
There is an organization called the Technology Coalition, formed in 2006, whose primary goal is to fight online child sexual exploitation. The organization claims members such as Adobe, Apple, Amazon, Facebook, GoDaddy, Google, TikTok, Microsoft, Oath, PayPal, Snapchat, and Twitter (15). The Occupational Safety and Health Administration (OSHA) should address the issue by enforcing a safety and health regime that can prevent the mental health problems that arise from content moderation work (16). There is talk at Facebook and Google of technology solutions that can mitigate the psychological effects of viewing disturbing content: options to blur out faces in videos, view videos in grayscale, and mute the audio; a simple sketch of this kind of mitigation is given below. These solutions, along with other industry standards drafted by big tech companies (though never implemented), could also provide a useful benchmark for OSHA regulations (16). It is important for companies to be transparent about the hiring process when recruiting for content moderation roles. Companies should also administer psychological assessments during hiring to ensure that the people placed in these roles are suited to the work, which reduces risk for both employees and employers.
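As an illustration of the kind of technical mitigation described above, the following is a minimal Python sketch (assuming the Pillow imaging library) that converts a frame to grayscale and heavily blurs one region before a moderator sees it. The function name, the fixed region box, and the synthetic sample image are placeholders; a real tool would locate faces with a detection model and apply the same idea frame by frame to video.

```python
from PIL import Image, ImageFilter

def soften_frame(frame: Image.Image, region_box: tuple) -> Image.Image:
    """Convert a frame to grayscale and heavily blur one region (e.g. a face)."""
    softened = frame.convert("L")                         # grayscale lowers visceral impact
    region = softened.crop(region_box)                    # cut out the sensitive region
    region = region.filter(ImageFilter.GaussianBlur(12))  # blur it heavily
    softened.paste(region, region_box[:2])                # paste the blurred patch back
    return softened

# Hypothetical usage: in a real tool the box would come from a face detector.
frame = Image.new("RGB", (640, 480), color=(180, 120, 90))
result = soften_frame(frame, (100, 80, 300, 280))
result.save("softened_frame.png")
```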
Due to COVID-19, many employees are working from home. There are some basic steps companies can follow to support employees' well-being.
- Invest in virtual training sessions. Training should include COVID-19-specific issues.
- Improve tools and software for virtual content moderation. For example, censor sensitive content on screen while people work from home to protect their families; workers can unmask it manually as necessary.
- Prioritize one-to-one online counseling sessions.
The work environment also matters: moderators should be able to relax whenever required and to destress on breaks. Providing a spacious lounge, a fitness center, a gaming center, and healthy snack options are some of the ways companies can support their employees' mental health. Companies that engage with such a study and follow its recommendations could go far in building trust and improving working conditions for the people helping to keep their platforms safe. Research also suggests that resilience can be developed in the workplace through guidance.
AI in Content Moderation
Companies operating with large-scale user-generated content need to invest in strategies such as Artificial Intelligence (AI) technology paired with human moderators, employee wellness programs, and innovative workforce staffing models. AI can help human moderation by helping triage, tag, and prioritize the content to be reviewed based on the severity of the probabilities of explicit content at an automated level of processing. AI can also internally moderate the amount of content reaching the review stage through automation like identifying and blurring out areas of images and videos in advance of moderators having to look at them (17). Depending completely on human content moderation can be difficult and time-consuming, in addition to being harmful to people doing the job. At the same time, AI algorithms cannot fully take over content moderation. As of today, the best approach would be to combine both in the most practical and efficient ways. Combining the power of human judgment and AI can be very helpful in providing a balance in the job duties of content moderators. This method will reduce a significant workload reduction for hundreds of thousands of psychologically harming content moderator positions.
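The following is a minimal Python sketch of that triage idea, not any platform's actual pipeline: an upstream classifier is assumed to have scored each item with a probability of violating policy, high-confidence items are handled automatically, and the uncertain middle band is queued for human review with the most likely violations first. The thresholds and item scores are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    score: float  # model's estimated probability of a policy violation

AUTO_REMOVE = 0.95  # hypothetical high-confidence threshold
AUTO_ALLOW = 0.10   # hypothetical low-confidence threshold

def triage(items: list[Item]):
    """Split items into auto-removed, auto-allowed, and a prioritized human review queue."""
    removed = [i for i in items if i.score >= AUTO_REMOVE]
    allowed = [i for i in items if i.score <= AUTO_ALLOW]
    review_queue = sorted(
        (i for i in items if AUTO_ALLOW < i.score < AUTO_REMOVE),
        key=lambda i: i.score,
        reverse=True,  # most likely violations reach a human first
    )
    return removed, allowed, review_queue

removed, allowed, queue = triage([
    Item("a", 0.99), Item("b", 0.42), Item("c", 0.03), Item("d", 0.81),
])
print([i.item_id for i in queue])  # ['d', 'b'] -> only these reach a human, worst first
```

The design point is that only the uncertain band ever reaches a person, which is how AI can shrink (but not eliminate) the volume of harmful material moderators see.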
AI tools certainly have difficulty understanding subtlety, sarcasm, and subcultural meaning, but it is possible to find an efficient trade-off in content moderation. Online content comes in different formats, such as videos, images, memes, and text, and each format requires a different type of AI tool. For video, AI must analyze images over multiple frames and combine this with audio analysis. For memes, it requires a combination of text and image analysis with contextual and cultural understanding. AI tools designed to detect images face the challenge of being expected to identify anything that is not right in an image, for example a specific symbol of hate or toxicity, under any circumstance: the tool should recognize variations of that symbol under different lighting conditions, resolutions, angles, rotation, or skew. Other tools are designed to classify whether an image contains a feature such as nudity. One approach to detecting nudity is to analyze the proportion of pixels in an image that fall into a specific color range pre-identified as representing skin color; a simplified version of this heuristic is sketched below. This kind of tool can lead to the misclassification of underrepresented skin tones.
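A simplified Python sketch of that skin-tone heuristic is given below (assuming Pillow). The RGB bounds and the 0.4 threshold are invented for illustration; the point is that any fixed color range will misjudge skin tones it was not tuned for, which is exactly the bias problem noted above.

```python
from PIL import Image

SKIN_MIN = (95, 40, 20)     # hypothetical lower RGB bound for "skin"
SKIN_MAX = (255, 220, 180)  # hypothetical upper RGB bound

def skin_ratio(img: Image.Image) -> float:
    """Fraction of pixels whose RGB values fall inside the fixed 'skin' range."""
    pixels = list(img.convert("RGB").getdata())
    in_range = sum(
        1 for r, g, b in pixels
        if SKIN_MIN[0] <= r <= SKIN_MAX[0]
        and SKIN_MIN[1] <= g <= SKIN_MAX[1]
        and SKIN_MIN[2] <= b <= SKIN_MAX[2]
    )
    return in_range / len(pixels)

def probably_nudity(img: Image.Image, threshold: float = 0.4) -> bool:
    """Crude flag: 'too much' of the image matches the predefined skin range."""
    return skin_ratio(img) > threshold

# Hypothetical usage: a solid tan-coloured image trips the heuristic,
# while skin tones outside the hand-picked range would be missed entirely.
print(probably_nudity(Image.new("RGB", (64, 64), color=(210, 160, 120))))  # True
```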
Natural language processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and humans using natural language. The ultimate objective of NLP is to read, decipher, understand, and make sense of human language in a way that is valuable (18). This can help determine whether particular content is positive or negative and whether it belongs to a category such as hate speech or content promoting violence; a toy example of such a classifier is sketched below. Human communication patterns can change quickly, however, which can leave static AI tools outdated and unable to analyze users' style of communication (19). Biased training can also lead to algorithmic discrimination: algorithmic systems have the potential to perform badly on data related to underrepresented groups, including racial and ethnic minorities, non-dominant languages, and particular political leanings (19). Undue bias in algorithms increases the risk to freedom of speech and expression for marginalized communities and individuals, and could silence them for a long time as the technology evolves. Facebook has long preferred to have AI handle more content moderation than humans and has been working toward this goal for quite a while. The way it works is that posts thought to violate any of the company's guidelines, covering everything from spam to hate speech, are flagged by users or by AI filters. Such content is then automatically removed by AI, and in the case of more 'complicated' content it goes to human moderators (20). Facebook aims to deal with the most harmful posts first. Facebook's use of AI for moderation has come under fire in the past, with detractors noting that AI lacks the human capacity to judge the context of online content (14); topics like misinformation and bullying are extremely difficult for a computer to interpret. Facebook's Chris Palow, a software engineer on the company's interaction integrity team, agreed that AI is limited in these respects but told reporters that the technology can still play a role in removing unwanted content. "The system is about marrying AI and human reviewers to make fewer total mistakes," said Palow. "The AI is never going to be perfect." He added, "The bar for automated action is very high" (20). Nevertheless, Facebook's goal is to improve and add more AI to content moderation.
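As a toy illustration of the NLP classification step, the sketch below (assuming scikit-learn is available) trains a tiny TF-IDF and logistic-regression pipeline on a handful of invented example posts and scores a new post. The training texts and label names are fabricated purely for illustration; real systems are trained on large curated corpora, and the bias concerns discussed above originate in exactly that training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data: a real corpus would have millions of labeled posts.
texts = [
    "have a great day everyone",
    "thanks for sharing this",
    "i will hurt you",
    "these people should be attacked",
]
labels = ["benign", "benign", "violent", "violent"]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

post = "they deserve to be attacked"
proba = model.predict_proba([post])[0]
# Class probabilities a moderation pipeline could triage on.
print(dict(zip(model.classes_, proba.round(2))))
```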
Conclusion
In this article, I contextualize content moderation and the effects of persistent exposure to graphic content on the human moderators who protect the wider public. While content moderators have received attention from researchers and journalists, very little has been done to improve their working conditions, and employers make little effort to discuss or improve their mental health. There have been several real-life examples of big tech faltering in its responsibilities toward moderators, and the benefits provided vary vastly by location. This article covers the mental health conditions of moderators and how to improve their current work environments. It also discusses the role of AI in content moderation, detailing negative consequences of automated moderation such as biased training data. Finally, it points out how crucial it is to open the conversation about conditions in the moderation field to a global audience.
References
- Steiger M, Bharucha TJ, Venkatagiri S, Riedl MJ, Lease M. The Psychological Well-Being of Content Moderators. 2021;14.
- Inside the Work of Social Media Content Moderators [Internet]. US News & World Report. [cited 2021 Apr 24]. Available from: https://www.usnews.com/news/best-countries/articles/2019-08-22/when-social-media-companies-outsource-content-moderation-far-from-silicon-valley
- Roberts ST, Noble SU. Empowered to Name, Inspired to Act: Social Responsibility and Diversity as Calls to Action in the LIS Context. Libr Trends. 2016;64(3):512–32.
- Dwoskin E, Whalen J, Cabato R. Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently. Washington Post [Internet]. [cited 2021 Apr 24]; Available from: https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/
- “The despair and darkness of people will get to you” [Internet]. Rest of World. 2020 [cited 2021 Apr 24]. Available from: https://restofworld.org/2020/facebook-international-content-moderators/
- The Human Cost of Online Content Moderation [Internet]. Harvard Journal of Law & Technology. [cited 2021 Apr 22]. Available from: https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation
- Vicarious Trauma (Fact Sheet 9) [Internet]. American Counseling Association. [cited 2021 Apr 22]. Available from: https://www.counseling.org/docs/trauma-disaster/fact-sheet-9—vicarious-trauma.pdf
- Barve PI and S. Humanising digital labour: The toll of content moderation on mental health [Internet]. ORF. [cited 2021 Apr 22]. Available from: https://www.orfonline.org/expert-speak/humanising-digital-labour-the-toll-of-content-moderation-on-mental-health-64005/
- Secondary Traumatization in Mental Health Care Providers [Internet]. [cited 2021 Apr 22]. Available from: https://www.psychiatrictimes.com/view/secondary-traumatization-mental-health-care-providers
- Roberts ST. Commercial Content Moderation: Digital Laborers’ Dirty Work. Dirty Work. :12.
- York JC, McSherry C. Content Moderation is Broken. Let Us Count the Ways. [Internet]. Electronic Frontier Foundation. 2019 [cited 2021 Apr 23]. Available from: https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways
- Davis L. 7 Best Practices for Content Moderation [Internet]. [cited 2021 Apr 23]. Available from: https://www.spectrumlabsai.com/the-blog/best-practices-for-content-moderation
- Bhattacharya A. How Covid-19 lockdowns weakened Facebook’s content moderation algorithms [Internet]. Quartz. [cited 2021 Apr 23]. Available from: https://qz.com/india/1976450/facebook-covid-19-lockdowns-hurt-content-moderation-algorithms/
- Rodriguez S. Facebook defends decision to bring content moderators back to offices despite Covid-19 risks [Internet]. CNBC. 2020 [cited 2021 Apr 23]. Available from: https://www.cnbc.com/2020/11/19/facebook-defends-choice-to-bring-moderators-to-offices-during-pandemic.html
- Technology Coalition – Fighting child sexual exploitation online [Internet]. [cited 2021 Apr 22]. Available from: https://www.technologycoalition.org/
- Content Moderation: The Importance of Employee Wellness [Internet]. 24-7 Intouch. 2020 [cited 2021 Apr 22]. Available from: https://24-7intouch.com/blog/content-moderation-the-importance-of-employee-wellness/
- cambridge-consultants-ai-content-moderation.pdf [Internet]. [cited 2021 Apr 26]. Available from: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf
- Garbade DMJ. A Simple Introduction to Natural Language Processing [Internet]. Medium. 2018 [cited 2021 Apr 26]. Available from: https://becominghuman.ai/a-simple-introduction-to-natural-language-processing-ea66a1747b32
- Llansó E. Artificial intelligence. :32.
- Vincent J. Facebook is now using AI to sort content for quicker moderation [Internet]. The Verge. 2020 [cited 2021 Apr 27]. Available from: https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation
- In Settlement, Facebook To Pay $52 Million To Content Moderators With PTSD [Internet]. NPR. 2020. Available from: https://www.npr.org/2020/05/12/854998616/in-settlement-facebook-to-pay-52-million-to-content-moderators-with-ptsd