TikTok rows back AI video descriptions in US after absurd errors

Liv McMahon, Technology reporter

TikTok logo displayed on a smartphone screen, with enlarged, faded versions of the logo reflected across it (Getty Images)

TikTok has rowed back on an AI feature which incorrectly summarised some videos on the platform, including describing a video of a celebrity as fruit.

The company's 'AI overviews' recently began appearing beneath content on the platform to describe what a video was showing, or provide more context.

While only rolled out to some users in the US and the Philippines, the feature's incorrect and bizarre AI-generated summaries of TikTok content - seen beneath videos of celebrities like platform star Charli D'Amelio - have been shared widely.

According to TikTok, its experimental summaries have been tweaked to only suggest products similar to those shown in videos.

Much like the AI Overviews at the top of most Google search results, TikTok's AI-generated overviews would attempt to sum up the contents of videos for some users when they clicked to see more of a video's caption.

Some examples screenshotted by users and seen by the BBC showed videos on the platform being accurately described, but Business Insider also identified a number of "wildly inaccurate" AI overviews.

This included one which saw a video of dancer Charli D'Amelio described as a "collection of various blueberries with different toppings," the publication said.


The publication saw similarly vague, inaccurate and strange AI-generated summaries on other TikTok videos of celebrities and artists, including Shakira and Olivia Rodrigo.

The feature will now only be used to surface information about items in videos, according to TikTok.

It comes as tech firms look to deploy more AI products on their platforms to boost user engagement. However, some such efforts have been met with user backlash, or mockery, when these tools go awry.

Posts reacting to TikTok's testing of AI overviews on its videos first began appearing in January.

But it appears the summaries were made more widely available, with several users and creators highlighting AI-generated descriptions containing absurd mistakes in late April.

A recent example shared on Reddit saw a performance by ballroom dancers Reagan and Juli To described in an AI overview on TikTok as "a person repeatedly striking their head with a rubber chicken".

Other examples shared by TikTok users contained similarly strange descriptions.

For instance, AI overviews for two separate videos, neither of which featured violence or tools, said they featured "a person repeatedly striking their head with a hammer".


According to TikTok, users were able to report and provide feedback about AI overviews.

But this did not stop some from speculating as to whether the platform was "trolling" its users.

"The new AI Overview is so bad it feels like it has to be a joke," wrote TikTok user and creator Brett Vanderbrook alongside his video.

He showed a range of examples where TikTok's AI feature conjured up bizarre descriptions for what was happening in videos - such as a comedy skit described as someone "demonstrating a new, clever technique for cutting through water".

TikTok says it has identified the cause of AI overview errors and inconsistencies, without detailing what this was.

But generative AI tools often make things up when responding to users or summarising information, and their errors can range from hilarious to potentially harmful.

Apple previously faced criticism after an AI tool designed to summarise notifications created false headlines for the BBC News and the New York Times apps.

Since then AI development has continued, with firms claiming the tech has vastly improved in ability and accuracy, but so-called "hallucinations" persist.
