
Deborah Turness – AI distortion is a new threat to trusted information

Disinformation. We are all now aware of its polarising impact and its real-world consequences.

But how many of us are aware of the new and growing threat to trusted information that is emerging from the explosion of generative AI onto the scene?

I’m talking about “distortion”. Distortion is what happens when an AI assistant scrapes up information to answer a question and serves up a response that is factually incorrect, misleading and potentially dangerous.

Don’t get me wrong – AI is the future and brings endless opportunities. At BBC News, we are already moving forward with AI tools that will help us deliver more trusted journalism to more consumers in more formats – and on the platforms where they want it. And we are in discussions with technology companies about new AI applications that could further improve and enhance our output.

But the price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact. In what can feel like a chaotic world, it is surely not right that consumers seeking clarity are met with yet more confusion.

It is not hard to see how quickly AI distortion could undermine people’s already fragile faith in facts and verified information.

We live in troubled times, and how long will it be before an AI-distorted headline causes significant harm in the real world?

Companies developing generative AI tools are playing with fire.

And that is why we at the BBC want to open up a new conversation with AI technology providers and other leading news brands, so that we can work together in partnership to find solutions.

But before that conversation could begin, we needed to find out the scale of the problem of news distortion. In the absence of any existing research that we could find, we made a start ourselves – and we hope that regulators who oversee the online space will consider further work in this area.

Today, we are publishing research which shows how distortion affects the current generation of AI assistants.

Our researchers tested the market-leading consumer AI tools – ChatGPT, Perplexity, Microsoft Copilot and Google Gemini – giving them access to the BBC News website and asking them to answer one hundred basic questions about the news, prompting them to use BBC News articles as sources.

The results? The team found “significant issues” with just over half of the answers generated by the assistants.

The AI assistants introduced clear factual errors into around a fifth of the answers they said came from BBC material.

And where AI assistants included “quotations” from BBC articles, more than one in ten had been altered, or did not exist in the article cited.

Part of the problem appears to be that AI assistants do not discern between facts and opinion in news coverage; do not distinguish between current and archive material; and tend to inject opinions into their answers.

The results they deliver can be a confused cocktail of all of these – a world away from the verified facts and clarity that we know consumers are looking for and deserve.

The full research is published on the BBC website, but I will share a few examples here to illustrate the point.

A Perplexity answer about the escalating conflict in the Middle East, citing the BBC as its source, said that Iran initially showed “restraint” and described Israel’s actions as “aggressive” – but those adjectives had not been used in the BBC’s impartial reporting.

In December 2024, ChatGPT told us that Rishi Sunak was still in office; Copilot made a similar mistake, saying that Nicola Sturgeon was too. They were not.

Gemini misrepresented NHS advice on vaping.

Of course, AI tools will often include caveats about the accuracy of their output, but there is clearly a problem here. Because when it comes to news, we all deserve accurate information we can trust – not a confusing mixture presented as fact.

At least one of the big technology companies is taking this problem seriously.

Last month, Apple pressed “pause” on its AI feature that summarises news notifications, after BBC News alerted it to serious problems. The Apple Intelligence feature had hallucinated and distorted BBC News alerts to create wildly inaccurate headlines, displayed alongside the BBC News logo.

When a BBC News alert said that Los Angeles officials had “arrested looters” during the city’s wildfires, the Apple AI-generated summary said it was the Los Angeles officials themselves who had been arrested for looting.

There were many other examples, but Apple’s bold and responsible decision to withdraw its AI summaries feature for news alerts shows that it recognises the high stakes around distorted news and information.

And if generative AI technology is not yet ready to scrape and serve up news without distorting and contorting the facts, isn’t it in everyone’s interest to do as Apple has done?

We would like other technology companies to hear our concerns, just as Apple did. It’s time for us to work together – the news industry and the technology companies – and of course government also has a big role to play here.

There is a wider conversation to be had around regulation, to ensure that in this new version of our online world, consumers can still find clarity through accurate news and information from sources they can trust.

Earning trust has never been more critical. As CEO of BBC News, it is my number one priority.

And this new phenomenon of distortion – an unwelcome sibling to disinformation – threatens to undermine people’s ability to trust any information at all.

So I will end with a question: how can we work together, urgently, to ensure that this emerging technology is designed to help people find trusted information, rather than adding to the chaos and confusion?

We at the BBC are ready to host that conversation.