Author: Ermakova M.

ChatGPT cites studies and articles that have never been published

The problem is bigger than it looks: ChatGPT invents studies and articles that were never published, complete with fabricated links and quotations.

ChatGPT and its derivatives, such as the Bing chatbot, have quickly become an incredibly useful resource for a wide variety of users. But at this early stage, as users probe its capabilities, weaknesses are also coming to light.

Alongside useful and up-to-date information, ChatGPT exhibits a serious problem that will only worsen over time if left uncorrected: OpenAI's chatbot serves users fabricated facts and misinformation.

In fact, ChatGPT has invented entire articles attributed to The Guardian that were never actually published, and neither users nor ChatGPT itself can say with certainty what is real and what is fiction. This is a huge problem not only for the credibility of the technology, but also for users' confidence that the chatbot is not a source of misinformation dressed up as reliable, valuable content.

ChatGPT invents research to justify its answers

The problem could be exacerbated if this technology is deployed as a search engine, offering information that appears to come from reliable sources but is in fact an invention of ChatGPT itself.

"Much has been written about the tendency of generative AI to fabricate facts and events. But this feature is of particular concern to news organizations and journalists, whose inclusion adds legitimacy and weight to a convincingly written fantasy. And for readers and the information ecosystem, it opens up entirely new questions about whether citations can be trusted at all, and could well fuel conspiracy theories about sensitive articles that never existed," said Chris Moran, head of editorial innovation at The Guardian.

And the problem goes beyond fictional articles, Futurism writes. The solution is unclear, as is who is to blame. For now, responsibility for addressing the issue lies with the companies developing AI.
