Openais Cutting-Edge Search Tool Found Vulnerable To Manipulation

Openais Cutting-Edge Search Tool Found Vulnerable To Manipulation

OpenAI’s ChatGPT Search Tool Exposed to Manipulation and Deception Tests

The Guardian has revealed potential security issues with OpenAI’s new search tool, ChatGPT. The search product is available to paying customers and has been encouraged to become the default search tool, but it appears that users can manipulate the results using hidden content.

Testing involved asking ChatGPT to summarize webpages containing hidden content. This hidden content can include instructions from third parties designed to alter ChatGPT’s responses or influence its output with positive, fake reviews. Such techniques can be used maliciously to cause ChatGPT to return a favorable assessment of a product despite negative reviews on the same page.

In an incident reported by security expert Karsten Nohl, a cryptocurrency enthusiast was using ChatGPT for programming assistance and received code that stole their credentials, resulting in them losing $2,500. This highlights the potential risks associated with relying solely on AI-generated content without proper vetting.

Nohl emphasizes that AI chat services should be used as “co-pilots,” and their output should not be viewed or used completely unfiltered. He notes that Large Language Models (LLMs) are “very trusting technology, almost childlike” with a huge memory but limited ability to make judgment calls.

The investigation found that hidden text, historically penalized by search engines like Google, may become less relevant as the use of AI-enabled search tools becomes more widespread. However, Nohl compares these issues to “SEO poisoning,” a technique where hackers manipulate websites to rank highly in search results, potentially with malicious code.

These vulnerabilities have significant implications, and website practices may change dramatically if combining search and LLMs becomes more widespread. OpenAI has warned users about possible mistakes from the service with a disclaimer at the bottom of every ChatGPT page. While the company’s efforts are commendable, it is crucial that users remain vigilant and skeptical when relying on AI-generated content.

As AI-enabled search tools become more prevalent, it is essential to address these vulnerabilities and ensure that users can trust the information they receive. The Guardian’s investigation serves as a reminder of the importance of responsible AI development and ongoing security research and testing to mitigate potential risks.

Latest Posts