14 March 2025
AI Search Engines Under Fire Over Alarming 60% Error Rate

The Rise of Generative AI Search Engines: A Growing Concern for Accuracy and Trust
A recent study by the Columbia Journalism Review’s Tow Center for Digital Journalism has found that generative AI search engines give incorrect answers to more than 60% of news-related queries, casting a shadow over the trustworthiness of online news searches.
These AI-driven search tools are becoming increasingly popular: roughly 1 in 4 Americans now use them in place of traditional search engines. That growing reliance has significant implications for the reliability of online news searches.
The study tested eight AI-driven search engines with live search functionality and found that error rates varied widely among the platforms. Perplexity answered 37% of queries incorrectly, ChatGPT Search misidentified 67% (134 out of 200) of the articles it was asked about, and Grok 3 had the highest error rate at a staggering 94%.
To test the models, the researchers fed direct excerpts from actual news articles into each platform and asked it to identify the article’s headline, original publisher, publication date, and URL. Because every query had a single verifiable answer, accuracy could be measured directly.
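To make the shape of this evaluation concrete, the sketch below shows how such excerpt-based queries might be scored. It is not the Tow Center’s actual test harness: `query_model` is a hypothetical stand-in for whichever chatbot API is under test, and the field names are assumptions.

```python
# Illustrative sketch only, assuming a generic chat-style API; not the study's actual code.
from dataclasses import dataclass

@dataclass
class Article:
    excerpt: str     # direct quote fed to the model
    headline: str    # ground-truth metadata used to check the answer
    publisher: str
    url: str

def query_model(prompt: str) -> dict:
    """Hypothetical placeholder for a call to the AI search engine being tested."""
    raise NotImplementedError

def attribution_accuracy(articles: list[Article]) -> float:
    """Ask the model to identify each excerpt's source and count exact matches."""
    correct = 0
    for a in articles:
        prompt = (
            "Identify the headline, original publisher, and URL of the article "
            f"containing this excerpt:\n\n{a.excerpt}"
        )
        answer = query_model(prompt)  # e.g. {"headline": ..., "publisher": ..., "url": ...}
        if (answer.get("headline") == a.headline
                and answer.get("publisher") == a.publisher
                and answer.get("url") == a.url):
            correct += 1
    return correct / len(articles)
```

In practice the researchers also graded partially correct answers and noted when a model declined to respond, but the core idea is the same: each query has a known right answer, so accuracy is easy to verify.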
One striking finding was the tendency of AI models to deliver “confidently wrong” search results. Rather than declining to respond when they lacked reliable information, the models frequently offered plausible-sounding but incorrect or speculative answers. This behavior was consistent across all of the tested tools, not limited to just one.
The researchers emphasized that this points to a lack of transparency and accountability in how AI-driven search tools are built and deployed. When a model answers in the same confident tone whether or not it actually knows the answer, users have no cue to distinguish reliable responses from guesses, which fosters a false sense of trust.
The study also highlighted problems with citations and publisher control. Researchers found that some AI tools ignored Robot Exclusion Protocol (robots.txt) settings, which publishers use to tell web crawlers which content they may access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, even though National Geographic explicitly disallows Perplexity’s web crawlers.
Such behavior raises important questions about the role of AI-driven search tools in disseminating accurate information. If these models are not transparent about their limitations and biases, how can users trust the information they provide? And if premium paid versions of these tools can be just as confidently wrong as their free counterparts, what does that say about the value of paying for a more reliable experience?
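For readers unfamiliar with the Robot Exclusion Protocol, the snippet below shows how a compliant crawler would consult a site’s robots.txt before fetching a page, using Python’s standard library. The domain and user-agent string are illustrative, not taken from the study.

```python
# A well-behaved crawler checks robots.txt before fetching a page; the study suggests
# some AI tools retrieved content even where such rules disallowed their crawlers.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # hypothetical publisher domain
rp.read()

# A robots.txt entry such as:
#   User-agent: ExampleAIBot
#   Disallow: /
# would make this check return False for that crawler.
allowed = rp.can_fetch("ExampleAIBot", "https://www.example.com/2025/03/some-article")
print("Fetch permitted:", allowed)
```

The protocol is purely advisory: nothing technically prevents a crawler from ignoring it, which is exactly the gap the study’s findings expose.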
The study’s findings have significant implications for the media industry. As the use of AI-driven search engines continues to grow, publishers and content creators must take steps to ensure that their online presence is protected from exploitation by these models. This may involve implementing stricter controls on access to published content, using alternative metadata formats that are more resistant to manipulation, or investing in more sophisticated tools for detecting and mitigating the spread of misinformation.
The rapid evolution of AI technology has created a complex landscape for online news seekers. With the proliferation of generative AI search engines, users must be aware of the potential risks associated with these tools and take steps to verify the accuracy of the information they provide. The stakes are high, and it is essential that we prioritize accuracy and transparency in our digital lives.
The study’s authors hope that their research will spark a broader conversation about the role of AI-driven search tools in online news ecosystems. By exploring the challenges and limitations of these models, we can work towards creating a more trustworthy and reliable online environment for all users.