TikTok fails to ease MPs’ concerns over AI moderation amid proposed job cuts

13 Nov 2025
Photo: Yui Mok/PA Wire

TikTok has failed to show that proposals to lay off staff and expand the use of artificial intelligence (AI) to moderate content will not lead to more harms for its users, MPs have warned.

The Commons Science, Innovation and Technology Committee had asked the video-sharing app to explain how it will protect users from harmful content amid proposed job losses.

Chairwoman Dame Chi Onwurah on Thursday said it was “deeply concerning” that the company had “come up empty to show” that the shift towards AI moderation would not result in greater risks to users.

TikTok has put more than 400 jobs at risk in London as part of a restructuring of its trust and safety operations.

It has said the plan would see work concentrated in fewer sites globally as it invests in the use of AI to scale up its moderation.

In a letter to the MP, dated November 7 and published by the committee, TikTok’s Northern Europe public policy and government affairs director Ali Law wrote that “the evidence suggests that the proposed changes would speed up and improve the efficacy of moderation through the use of AI, third parties and specialist teams”.

He said company analysis indicated that using AI would have a “positive impact on user safety”.

Risk assessment

But the panel of MPs noted that TikTok did not share its data or risk assessment that justified this.

Dame Chi said: “TikTok’s response represents a commitment to reducing staffing levels in favour of increasing the use of AI to moderate content on its platform.

“But TikTok have come up empty to show that this transition to AI won’t lead to more harms for its users.

“This is deeply concerning, as the committee has heard time and time again – from TikTok itself and many others – that there are limitations to AI moderation.

“Not only this, reports of AI causing harm by advising people on how to do things such as commit suicide show that the technology just isn’t reliable or safe enough to take on work like this.

“There is a real risk to the lives of TikTok users. The Government and Ofcom must do more before it’s too late.

“TikTok refers to evidence showing that their proposed staffing cuts and changes will improve content moderation and fact-checking – but at no point do they present any credible data on this to us.

“In their evidence to the committee, only seven months ago, they told us that they were accountable to Parliament. It’s alarming that they aren’t offering us transparency over this information.

“Without it, how can we have any confidence whether these changes will safeguard users?”

A spokesperson from the company said: “We are disappointed that this does not accurately reflect the facts.

“TikTok has long used a combination of AI and human moderators in content moderation, as is industry standard, and will continue to do so.

“We transparently publish every quarter statistics on our content moderation, which shows over 99% of violative content we removed was proactively taken down before anybody reported it to us, and more than 90% was removed before gaining a single view.”

The Chinese-owned company has already been making layoffs within its trust and safety teams around the world, including at its German head office in Berlin.

