Authenticity Issues with ChatGPT Search Tool

2024-12-04

Researchers at Columbia University’s Tow Center for Digital Journalism have identified significant accuracy problems in OpenAI’s ChatGPT search tool.

OpenAI introduced the tool to subscribers in October, promoting it as capable of providing “quick and timely answers, along with relevant webpage links.” However, according to Futurism, the researchers found that ChatGPT’s search functionality struggles to accurately identify quotations within articles, even when the quotes come from publishers that have data-sharing agreements with OpenAI.

In their tests, the researchers asked ChatGPT to identify the sources of “200 quotes from 20 different publications.” Of these, 40 quotes came from publishers that had barred OpenAI’s search crawlers from accessing their websites. Even so, ChatGPT confidently supplied incorrect information and rarely acknowledged any uncertainty about the details it provided.

Overall, out of 160 responses, ChatGPT delivered partially or entirely inaccurate answers in 153 instances, and admitted it could not accurately answer a query only seven times. In those seven cases, ChatGPT used qualifiers such as “seems,” “may,” or “perhaps,” or stated, “I cannot find the exact article.”

The Tow Center researchers documented one case in which ChatGPT mistakenly attributed a reader’s letter published in The Orlando Sentinel to an article in Time magazine. In another, when asked to identify the source of a quote from a New York Times article about endangered whales, ChatGPT linked to a website that had plagiarized the story wholesale.

In response to Columbia’s report, OpenAI stated, “Without access to the data and methods retained by the Tow Center, it is difficult to address the citation errors,” and described the study as “an atypical test of the product.” The company also pledged to “continue improving search results.”