TL;DR

  • Content moderators and data annotators in African countries like Kenya and Uganda work long hours for low pay, often exposed to disturbing content.

  • These workers play a crucial role in training AI and maintaining social media platforms, but face poor working conditions and psychological stress.

  • Many workers are on short-term contracts with little job security, and fear losing their jobs if they complain about conditions.

  • The work is intensely monitored, with strict productivity targets and surveillance of workers’ activities.

Behind the sleek façade of artificial intelligence and social media platforms lies a hidden workforce of content moderators and data annotators, many based in African countries, who endure grueling conditions for low pay.

These workers, essential to the functioning of AI systems and social media platforms, face long hours, psychological stress, and job insecurity while processing disturbing content and labeling data for some of the world’s largest tech companies.

In outsourcing centers across Kenya and Uganda, workers like Mercy and Anita spend their days sifting through social media posts to remove toxic content or labeling data to train AI algorithms.

Mercy, a content moderator for Meta in Nairobi, is expected to process one “ticket” every 55 seconds during her 10-hour shift. This often involves viewing disturbing images and videos, including graphic violence and sexual content.

“The most disturbing thing was not just the violence,” one moderator reported, “it was the sexually explicit and disturbing content.”

Workers in these moderation centers are continually exposed to graphic material, including suicides, torture, and rape, with little time to process what they’re witnessing. They’re expected to handle between 500 and 1,000 tickets per day, leading to severe psychological strain.

Anita, working for a business process outsourcing (BPO) company in Gulu, Uganda, spends hours reviewing footage of drivers for an autonomous vehicle company.

Her task is to identify any lapses in concentration or signs of drowsiness, helping to develop an “in-cabin behavior monitoring system.” For this intense, stressful work, data annotators like Anita earn approximately $1.16 per hour.

The working conditions in these facilities are oppressive. Every aspect of the workers’ lives is closely monitored, from biometric scanners at entry to extensive CCTV coverage.

Productivity is tracked by efficiency-monitoring software, with every second of their shift accounted for. Workers describe alternating between numbing boredom and suffocating anxiety as they perform repetitive tasks at high speed under constant surveillance.

Job security is minimal, with many workers on short-term contracts that can be terminated at any time. This precarity leads to a culture of fear, where workers are afraid to voice concerns or demand better conditions.

“Most of us are damaged psychologically, some have attempted suicide … some of our spouses have left us and we can’t get them back,” one moderator commented.

The tech industry’s reliance on this labor force is significant. Roughly 80% of the time spent training AI systems goes into annotating datasets.

The global market for data annotation was estimated at $2.22 billion in 2022 and is expected to exceed $17 billion by 2030. Yet tech companies often obscure the reality of this human labor, presenting a vision of autonomous machines rather than acknowledging the grueling work involved.

This exploitation is rooted in global economic inequalities. Countries in the global south, with high unemployment rates and large informal job sectors, provide a vulnerable workforce that can be paid lower wages and is less likely to demand better conditions.

The outsourcing of this work is driven not by a desire to provide economic opportunities, but by the pursuit of a more tightly disciplined workforce and lower costs.

The stories of workers like Mercy and Anita highlight the human cost of our digital lives. Every time we use a search engine, interact with a chatbot, or scroll through social media, we are participating in a global network that relies on the labor of these hidden workers.

As consumers and users of AI-powered products and social media platforms, we have a responsibility to demand transparency and better conditions for these essential workers.

The AI revolution is not just about technological advancement; it’s also about the human beings who power it from behind the scenes, often at great personal cost.

The post African Workers: The Unseen Force Behind AI and Social Media Moderation appeared first on Blockonomi.