TikTok will overhaul its trust-and-safety operations in Germany, replacing its Berlin-based content moderators with AI systems and external contractors. The shift led to the termination of 150 employees – nearly 40% of the local workforce responsible for moderating content for TikTok’s 32 million German-speaking users. It is part of a broader trend among major social platforms to automate content moderation while cutting back on in-house teams.
Employees and trade unions are angry about the decision. Ver.di, a German trade union, has criticised TikTok’s refusal to negotiate severance packages or notice periods, and the dispute has led to protests and strikes. Workers argue that AI cannot replace human moderators when it comes to complex content such as hate speech, misinformation, or sensitive topics. Despite their efficiency at processing large volumes of content, AI-powered moderation systems are often criticized for misclassifying posts: they may mistakenly flag a harmless video as inappropriate or fail to identify harmful material. This raises questions about whether automated systems can fulfil TikTok’s obligations under the European Union’s Digital Services Act (DSA), which requires platforms to take immediate and effective action to remove harmful or illegal content – a responsibility that falls largely on the content moderation team.
Outsourcing moderation to contractors raises concerns beyond technical accuracy, including worker welfare. Moderators directly employed by TikTok in Berlin received mental health care and workplace protections. Many contractors working for third-party providers lack the same resources, leaving them at greater risk of psychological stress from exposure to disturbing content.
As technology companies face growing economic pressures, they are increasingly turning to AI and outsourcing. TikTok is automating moderation to reduce costs and increase efficiency, but critics say this could compromise moderation quality and user safety.
Ver.di’s protests highlight the growing unease of digital content moderators around the world, who often face high-stress conditions and job insecurity. The union has demanded more transparency from TikTok and better treatment for the affected employees, and has threatened further industrial action if the company fails to meet its demands. The dispute is part of a larger debate about the balance between automation and human oversight in digital platforms’ content moderation. While AI can handle the sheer volume of daily posts, it struggles to interpret cultural nuance, humor, and evolving online behavior – areas in which human moderators excel. Ensuring a respectful and safe online environment requires a combination of human judgment and technology.
TikTok’s German Trust and Safety team has played an important role in monitoring and removing harmful content to protect users. As the platform grows in popularity, particularly among young audiences, the quality and responsiveness of its moderation face increasing scrutiny. TikTok’s decision to replace a substantial portion of its German Trust and Safety staff with AI and outsourced workers has proved controversial: while the company emphasizes efficiency and innovation, workers and unions say the approach could undermine both content quality and employee well-being. The outcome of this transition will likely set a precedent for how social media platforms balance automation and human input in the future.