Over the past year, several major news media companies have signed on the dotted line with OpenAI, entering a content licensing partnership with the developer of ChatGPT. Most of those partnership announcements state that as part of the deals, ChatGPT will produce attributed summaries of each media company’s reporting and link to their publications’ websites.
On June 13, I reported that despite its deal, ChatGPT is outputting hallucinated links to one such partnered publication, Business Insider. Using details from a leaked letter written by the Business Insider Union’s steward, I confirmed the chatbot is generating fake URLs for some of the outlet’s biggest investigations and directing some users to 404 errors instead of real article pages.
Now, my reporting confirms that ChatGPT is hallucinating URLs for at least 10 other publications that are part of OpenAI’s ongoing licensing deals. These publications include The Associated Press, The Wall Street Journal, the Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico.
In my testing, I repeatedly prompted ChatGPT to link out to these publications’ marquee articles, including Pulitzer Prize-winning stories and years-long investigations. These types of stories are editorial investments that can be both incredibly valuable to a brand’s reputation and incredibly costly to produce.
Altogether, my tests show that ChatGPT is currently unable to reliably link out to even these publications’ most noteworthy stories.
While the specific language differs, most partnered media companies have explicitly stated that ChatGPT will link out to their websites. “Queries that surface The Atlantic will include attribution and a link to read the full article on theatlantic.com,” reads The Atlantic’s licensing deal announcement from last month. “ChatGPT’s answers to user queries will include attribution and links to the full articles for transparency and further information,” reads a similar announcement by Berlin-based publisher Axel Springer from December 2023. OpenAI has also pitched news publishers “priority placement and ‘richer brand expression’ in chat conversations” and “more prominent link treatments” in ChatGPT, according to reporting earlier this year by Adweek on leaked OpenAI slide decks.
It is unclear how OpenAI can guarantee these attribution and citation features for its partners while the underlying ChatGPT product is regularly outputting broken links to those same websites.
OpenAI told me in a statement that it has not yet launched the citation features promised in its licensing contracts. “Together with our news publisher partners, we’re building an experience that blends conversational capabilities with their latest news content, ensuring proper attribution and linking to source material — an enhanced experience still in development and not yet available in ChatGPT,” said Kayla Wood, an OpenAI spokesperson. OpenAI declined to answer questions on the hallucinations I documented or explain how new features might address the problem of fake URLs.
From my testing across 10 publications, it appears that ChatGPT is currently doing what predictive text generation does best: outputting the most likely version of a URL for a given story, rather than the correct one.
“The page you are trying to access does not exist”
To test ChatGPT’s ability to link out to its partner publications, I mostly prompted the chatbot to search the web for information on exclusive investigations by each respective outlet. For example, in 2019, the Financial Times broke news of a massive fraud scandal in the world of payment processing. Its investigation into Wirecard not only won awards, but prompted action by international regulatory bodies and contributed to the company’s swift decline, leading to its insolvency filing in 2020.
When I prompted ChatGPT to search the web for news articles on the Wirecard fraud scandal, ChatGPT correctly answered that the FT broke the story in February 2019. But at first it only cited links to websites like Money Laundering Watch and Markets Business Insider, which had both aggregated the FT’s original reporting.
When I followed up and asked ChatGPT to share a link to the original story, it told me to read the story on the FT’s website at this URL: https://www.ft.com/content/44dcb5d2-2a29-11e9-b2e4-601dbf7d9eff
The link led to a 404 error that stated, “The page you are trying to access does not exist.”
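Verifying a suspect link like this takes only a few lines of code. Below is a minimal Python sketch, using the third-party requests library, that fetches the hallucinated FT URL quoted above and prints its HTTP status code; any reader can run the same check on links ChatGPT provides.

```python
import requests

# The hallucinated FT URL that ChatGPT provided in my test
url = "https://www.ft.com/content/44dcb5d2-2a29-11e9-b2e4-601dbf7d9eff"

# Some sites reject bare scripts, so send a browser-like User-Agent
# and follow any redirects before reading the final status code.
response = requests.get(
    url,
    headers={"User-Agent": "Mozilla/5.0"},
    allow_redirects=True,
    timeout=10,
)

print(response.status_code)  # 404 means no article lives at this address
```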
The Wirecard story wasn’t the only prestigious investigation to turn up in ChatGPT with a fake URL. In my tests, I documented hallucinated links to two Pulitzer Prize-winning stories, including The Wall Street Journal’s 2018 reporting on Donald Trump’s involvement in hush money payments made to Stormy Daniels and Karen McDougal during his presidential campaign. That reporting helped trigger a criminal investigation that recently culminated in a jury finding Trump guilty on 34 felony counts. Despite the investigation’s notoriety, the link offered by ChatGPT to the WSJ’s reporting once again landed on a 404 error. (In May, the WSJ’s parent company, News Corp, signed a reported $250 million licensing contract with OpenAI.)
“We are sharing insights with OpenAI to help create the best product experience for both users and publishers, where provenance and attribution are explicit, more accurate, and more intentional,” Rhonda Taylor, an FT spokesperson, told me in a statement, emphasizing that publisher citations are a work in progress. “The new experience is still under development and not yet live in ChatGPT, but the priority should be a better experience with high quality attribution, not speed of release.”
Several publications declined to share details about this updated “experience” or a general timeline for its release, but there have been early signs that OpenAI is experimenting with how ChatGPT cites its sources. In March, OpenAI started rolling out a new feature that makes links more prominent in ChatGPT by including the name of a cited website in parentheses, with a hyperlink to the specific story it is citing.
“We’re making links more prominent when ChatGPT browses the internet. This gives more context to its responses and makes it easier for users to discover content from publishers and creators. Browse is available in ChatGPT Plus, Team and Enterprise,” OpenAI posted on X (@OpenAI) on March 29, 2024.
While hallucinated URLs did sometimes appear in these parentheses across my tests, these hyperlinks had a much higher rate of accuracy than the hyperlinks that appeared in other parts of ChatGPT’s responses. OpenAI declined to answer questions about how ChatGPT generates its hyperlinks or how the methodology for these two types of citations may differ.
Often in my tests, ChatGPT would, on first pass, link out in parentheses to news outlets or blogs that don’t have partnership deals with OpenAI. For the most part, these articles aggregated major investigations by the likes of Le Monde, Politico, and The Verge. With some exceptions, these URLs to aggregations were accurate. And to its credit, in almost all of my tests, ChatGPT was able to at least correctly name the outlet that broke a major news story, occasionally detailing the publication date and naming the author in its response.
It was when I asked ChatGPT to share a link to the first outlet that reported a given story, or to share a link to reporting that it had already correctly identified and summarized, that the URLs were most likely to break.
For example, I prompted ChatGPT to search the web for the first news article that exposed the use of popular copyrighted novels in Books3 — a dataset widely used by Silicon Valley AI developers to train LLMs. ChatGPT correctly answered that Alex Reisner broke that story in The Atlantic as part of an exclusive series.
The slug in the URL it provided included the correct vertical on the site, the correct publication month and year, and a plausible string of search-engine-optimized keywords: technology/archive/2023/09/books3-dataset-ai-copyright-infringement/675324. (The broken link redirects to another technology story published in September 2023.)
Whoever chose the actual URL for The Atlantic’s Books3 investigation ended up making a similar, but marginally different, choice. This is the actual slug for one of the articles: technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/. Unfortunately, close enough doesn’t cut it for URLs.
Examples like this seem to indicate that ChatGPT is, at times, outputting the most likely URL in its response — predicting what the slug for a story could be, without knowing what it actually is. Across my tests ChatGPT’s hallucinated links regularly followed the standard URL format of a given website, but got the specific words or numbers in that URL wrong. Sometimes when I repeated a question, ChatGPT would even output slight variations on the same fake URL, leading to a string of 404 errors.
“Hallucinations in LLMs are known issues. We’re raising to OpenAI any inaccuracies we encounter involving The Atlantic,” Anna Bross, a spokesperson for The Atlantic, told me in a statement. “We believe that participating with AI search in its early stages — and shaping it in a way that values, respects, and protects our work — could be an important way to help build our audience in the future.”
As I detailed in my previous reporting on the Business Insider Union’s letter to management, many journalists in newsrooms that have partnered with OpenAI have publicly expressed skepticism about ChatGPT’s potential as a search tool. In May, The Atlantic Union published its own open letter demanding more transparency from its employer about its contract with OpenAI. On Wednesday, The Atlantic published a story documenting similar problems with ChatGPT’s ability to link out to its own reporting.
“These hallucinations are deeply concerning, and point to why we raised questions about The Atlantic’s agreement with OpenAI,” said David A. Graham, a staff writer and member of the union’s editorial bargaining committee. “We need to know much more about what the agreement says, and the company must work with us to demand protections for the integrity of our journalism and The Atlantic’s legacy.”
Cite your sources
All of my testing used ChatGPT’s free and most widely accessible version, which only requires a basic login. Most of my testing was also conducted using GPT-4o (OpenAI’s latest multimodal model, which can browse the web in real time to generate ChatGPT’s responses). But I was also able to replicate the URL hallucinations using models without real-time web browsing.
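My tests ran through ChatGPT’s consumer interface rather than code, but a similar probe can be scripted against OpenAI’s public API using its official Python SDK. Note that the API does not include ChatGPT’s web browsing feature, so a script like the sketch below exercises the no-browsing case described next.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Mirror the kind of prompt used in my ChatGPT tests: ask for a
# direct link to a specific, well-known investigation.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Share a link to the Financial Times' original 2019 "
            "investigation into the Wirecard fraud scandal."
        ),
    }],
)

print(response.choices[0].message.content)
# Any URL in the reply can then be checked for a 404, as in the earlier sketch.
```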
For example, without using free GPT-4o credits, I asked ChatGPT to share a link to the first investigation into Hollywood director Bryan Singer’s sexual misconduct allegations.
ChatGPT correctly identified The Atlantic as the outlet behind the headline-making 2019 investigation, but wrongly stated the story ran in October 2014. Even though ChatGPT claimed it could not browse the web, it still provided a hallucinated link to the supposed 2014 article and suggested I read more there. That broken link redirected to a different October 2014 story on The Atlantic about the Nigerian militant group Boko Haram.
The hallucinations were also not specific to English-language publications. ChatGPT hallucinated links to major national investigations in French by the publisher Le Monde, and stories in Spanish published by the outlet El País (owned by Prisa Media). Both international media companies entered content licensing deals with OpenAI in March.
Alongside these frequent hallucinated URLs, ChatGPT was also able to output accurate hyperlinks. Among other examples, in my tests ChatGPT correctly linked out to Politico’s publication of the leaked Supreme Court decision on Roe v. Wade in 2022. The chatbot also provided the correct URL for the WSJ’s 2021 Facebook Files investigation — the first reporting on a whistleblower leak of thousands of internal Facebook documents.
Several of the publications I tested also only announced their OpenAI licensing deals in the past two months. That includes The Verge and Vox (owned by Vox Media), The Wall Street Journal and The Times (owned by News Corp), and The Atlantic. But from my tests, the age of an OpenAI partnership doesn’t appear to have a strong bearing on whether ChatGPT, in its current form, will produce a hallucinated URL.
ChatGPT output fake URLs to Politico and Business Insider investigations. Both outlets are owned by Axel Springer, which signed its content licensing deal with OpenAI over six months ago for a reported “tens of millions of euros.”
I also documented fake URLs to stories by the AP, which was the first major publisher to sign a licensing deal with OpenAI in July 2023. Nearly a year later, in my testing, ChatGPT was still unable to correctly link out to a two-year-long investigation on West African migrants that won the AP a Livingston Award for International Reporting earlier this month.
Overall, the stories I tested for were often groundbreaking investigations and articles that incited a wave of follow-up coverage, sometimes kicking off a years-long news cycle. For digital publishers, these types of stories are often expensive and core to building a brand’s reputation and audience. If a product used by more than 200 million people a month republishes the contents of this reporting without properly linking back to the source, the return on those editorial investments could take a hit.
My tests demonstrate that ChatGPT is hallucinating URLs frequently, and that the product is currently unable to reliably link out to the most noteworthy stories by its partners. That said, this was not a full audit of ChatGPT and I plan to follow up with more reporting on the technical factors that might be at play here. If these URL hallucinations are happening at scale, though, OpenAI would likely need to resolve the issue to follow through on its general pitch to news publishers. That includes both ChatGPT accurately citing publications it has licensing deals with and its commitment to becoming a dependable source of referral traffic to their websites.