Italy’s AGCM Accepts Binding Commitments From AI Chatbot Providers

Italy’s antitrust authority has closed its investigations into three AI chatbot providers: China’s DeepSeek, France’s Mistral AI, and Turkey’s Scaleup Yazilim (operator of Nova AI). Each company agreed to binding commitments designed to improve how users are warned about the risk of AI hallucinations.

The closures were published in the AGCM’s official bulletin, marking a significant development in the European Union’s efforts to regulate AI and protect consumers from potential harm. The three cases (PS12942 for DeepSeek, PS12968 for Mistral Le Chat, and PS12973 for Nova AI) were each opened on the allegation that the companies’ chatbots had failed to inform users clearly, immediately, and intelligibly that they could generate inaccurate, misleading, or entirely fabricated content.

According to the AGCM, the companies’ failure to provide adequate transparency about hallucinations constituted a potentially unfair commercial practice under Italy’s Consumer Code, because it prevented users from making informed decisions about whether to use the services, particularly in high-stakes areas such as health, finance, and law. None of the three cases resulted in a formal finding of infringement or a fine; all were resolved through the commitment mechanism available under Article 27(7) of the Consumer Code.

The AGCM accepted each company’s proposals; failure to comply within the 120-day implementation window exposes each company to fines of up to approximately $11.6 million. The commitments reflect the specific transparency failures identified in each case.

DeepSeek, operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, agreed to the broadest package of commitments: prominent Italian-language warnings about hallucination risk added directly to its chat interfaces and website, a full Italian translation of the relevant disclosures, internal compliance training workshops, and a technical commitment to invest in reducing hallucination rates. The AGCM explicitly acknowledged that current technology cannot eliminate hallucinations entirely, making DeepSeek’s technical commitment a forward-looking obligation rather than a present-state claim.

Mistral AI, the French company behind Le Chat, structured its commitments along four lines under AGCM decision No. 31864: in-chat disclaimers; strengthened, Italian-localized terms of service with explicit reference to the potential unreliability of outputs; improved accessibility of those terms throughout the user journey; and a full Italian translation of its website and help centre.

The AGCM’s emphasis was on what it called ‘contextual’ transparency: users must be warned at the moment and place where risk materializes, not merely in terms and conditions buried at the end of a sign-up flow. This approach is likely to inform how other EU regulators, and eventually the European Commission under the AI Act’s transparency obligations for general-purpose AI, approach the same question.

For AI companies operating in Europe, the message is clear: a disclaimer in the terms of service no longer satisfies the obligation. The warning must be where the user is, at the moment the risk is live.
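To make that standard concrete, the sketch below shows one way a provider might surface a localized hallucination notice beside the chat input itself, rather than only in its terms of service. It is a minimal illustration assuming a web-based chat client; the element ID, class name, and warning wording are hypothetical, not taken from any of the three companies’ products.

```ts
// Minimal sketch of point-of-use hallucination disclosure in a web chat UI.
// All identifiers and strings here are illustrative assumptions, not any
// provider's actual code or wording.

type Locale = "it" | "en";

const HALLUCINATION_NOTICE: Record<Locale, string> = {
  // Italian first: the AGCM commitments require Italian-language warnings.
  it: "L'IA può generare risposte inesatte, fuorvianti o inventate. Verifica le informazioni importanti.",
  en: "The AI can generate inaccurate, misleading, or fabricated answers. Verify important information.",
};

// Attach the notice directly beside the message composer so it is visible
// at the moment of use, not buried at the end of a sign-up flow.
function mountChatDisclaimer(composer: HTMLElement, locale: Locale): void {
  const note = document.createElement("p");
  note.setAttribute("role", "note"); // exposed to assistive technology
  note.className = "chat-disclaimer";
  note.textContent = HALLUCINATION_NOTICE[locale];
  composer.insertAdjacentElement("afterend", note);
}

// Usage: mount next to a (hypothetical) composer element in the chat page.
const composer = document.querySelector<HTMLElement>("#chat-composer");
if (composer) mountChatDisclaimer(composer, "it");
```

The substance here is placement: the same text shown only on a terms-of-service page would not meet the contextual standard the AGCM describes.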

The AGCM’s three-case sweep is the first time a European regulator has extracted binding, specific commitments from AI companies on hallucination disclosure as a consumer protection obligation, and the first to do so simultaneously across companies from three different jurisdictions (China, France, Turkey), applying the same standard to all. The conceptual framework Italy has established is transferable, and the argument is simple: if a consumer product can cause harm through user overreliance on its outputs, then informing users of that risk at the point of use is a basic consumer protection obligation, not optional transparency.

The closures fit a broader European push to regulate AI and protect consumers. Italy has been among Europe’s most aggressive regulators in the AI consumer protection space: alongside the hallucination probes, the AGCM launched a separate abuse-of-dominance investigation into Meta’s integration of Meta AI into WhatsApp, and the European Commission has opened its own antitrust case into the same integration.

The practical standard Italy has now articulated through these commitments, namely that hallucination warnings must be contextual (present in the chat interface at the moment of use rather than buried in terms and conditions), gives regulators elsewhere a template. The AGCM’s enforcement arrives first and sets a concrete precedent for what ‘adequate’ means in practice under the AI Act.

For consumers, the closures mark a step forward in ensuring that AI companies disclose the risk of hallucinations. As AI systems become more deeply embedded in daily life, clear standards for transparency and accountability matter, and the binding commitments the AGCM has extracted on hallucination disclosure set a demanding benchmark for the industry.
