In a technological landscape where artificial intelligence increasingly shapes the way we access information, the trust placed in tools like ChatGPT, developed by OpenAI, deserves careful qualification. A test carried out in 2025 on the free version of this conversational AI revealed that it falsely claimed François Bayrou had never been Prime Minister – a patently erroneous assertion. This blunder raises important questions about the reliability of the data these systems provide, even as tech giants such as Google DeepMind, Microsoft Azure AI, IBM Watson and Meta AI continue their efforts to improve the precision and timeliness of the responses their algorithms generate.
The limitations of training data and their impact on ChatGPT's reliability
When ChatGPT is asked about the political career of François Bayrou, the AI is categorical: “François Bayrou has never been Prime Minister”. In fact, he held the office for almost nine months before being replaced in September 2025. This error illustrates one of the major weaknesses of artificial intelligence based on language models: its dependence on training data, which in ChatGPT's case stops at June 2024 – before Bayrou's appointment.
This time lag is one of the main reasons why AI sometimes fails to reflect recent news or political changes. Microsoft Azure AI and AI21 Labs are working, among others, to reduce these delays by connecting their models to sources of information updated in real time, as the version of ChatGPT linked to the Bing engine does.
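The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenAI's actual architecture: `parametric_memory` stands for facts frozen at the training cutoff (Gabriel Attal was Prime Minister in June 2024), while `live_source` stands for a real-time retrieval step such as a web search.

```python
from datetime import date

# Hypothetical sketch: a model whose parametric memory is frozen at its
# training cutoff, versus the same model grounded in live retrieval.

TRAINING_CUTOFF = date(2024, 6, 1)  # cutoff cited in the article

# Snapshot memorised during training (accurate as of June 2024).
parametric_memory = {"prime_minister_fr": "Gabriel Attal"}

# Stand-in for a live source (web search, news API); values illustrative.
live_source = {"prime_minister_fr": "François Bayrou"}

def answer(key: str, use_retrieval: bool = False) -> str:
    """Answer from the live source if retrieval is enabled, otherwise
    from the frozen snapshot -- which is exactly how a confident but
    stale reply arises."""
    if use_retrieval:
        return live_source[key]
    return parametric_memory[key]

print(answer("prime_minister_fr"))                      # stale answer
print(answer("prime_minister_fr", use_retrieval=True))  # current answer
```

The point of the sketch is that both calls succeed and both sound equally confident; only the retrieval-grounded one reflects the world after the cutoff.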

When AI doesn’t check its sources, errors accumulate
The free version of ChatGPT has no direct internet connection, which limits its ability to verify information against reliable, up-to-date sources. As a result, some facts, even relatively simple ones such as the identity of the current Prime Minister or the current Pope, can be wrong. For example, ChatGPT initially confirmed that Pope Francis was still in office, even though he had died the previous April. Only after repeated insistence did the AI acknowledge his successor, Leo XIV.
This situation also stems from the “opt-out” policy of news sites, protected under the European copyright directive, which restricts AI bots’ access to recent content. In response, players such as Anthropic, Mistral AI, and Hugging Face are advocating a better balance between content protection and AI’s access to the data needed to provide reliable, up-to-date information.
Concrete examples illustrating the constraints of AI in the dissemination of political information
Franceinfo made numerous attempts to correct the situation with ChatGPT, rephrasing the questions and requesting further confirmation of François Bayrou’s political role. It was only after multiple requests that the AI admitted its error, finally acknowledging that Bayrou had held the position of Prime Minister.
Similar mishaps have affected other areas, such as the deaths of famous figures: the AI initially denied the death of Thierry Ardisson before integrating this information. Such inconsistencies show that despite the progress made by platforms like IBM Watson, the models remain vulnerable to “hallucinations” and disinformation campaigns.
The Challenges for the Future of Conversational Artificial Intelligence
These limitations call into question the place of AI in our information systems and in society more broadly, particularly in terms of law and regulation. In 2025, the deployment of the “Artificial Intelligence Act” in Europe imposes a strict framework governing the use of, and risks associated with, these technologies, encouraging developer accountability and increased control over data validity.
While Meta AI, Google DeepMind, and Mistral AI are investing massively in research to improve the veracity and contextualization of responses, the public is urged to remain actively vigilant. It is advisable to cross-reference information from artificial intelligence with official sources, particularly on political questions where the civic stakes are high.
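The cross-checking habit recommended above can itself be sketched as code: treat an AI answer as a claim to verify, not a fact. The record and the matching rule below are illustrative placeholders, not a real API; in practice the record would come from an official source such as gouvernement.fr.

```python
# Minimal sketch of cross-referencing an AI claim against an official
# record. The record and the string-matching rule are illustrative only.

ai_answer = "François Bayrou has never been Prime Minister"

# Placeholder for data drawn from an official source.
official_record = {"François Bayrou": "Prime Minister (Dec 2024 - Sep 2025)"}

def cross_check(claim: str, record: dict) -> str:
    """Flag a claim of the form '<name> has never been Prime Minister'
    that contradicts the official record."""
    for name, role in record.items():
        if name in claim and "never" in claim and "Prime Minister" in role:
            return f"Contradicted: official record lists {name} as {role}"
    return "No contradiction found; still check the source yourself"

print(cross_check(ai_answer, official_record))
```

Even this toy version makes the design point: verification requires a source of truth outside the model, which is precisely what a cutoff-bound chatbot lacks.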