Simian Words

Under these conditions, LLMs basically never hallucinate

I think it is important to understand where LLMs work well and where they may falter. This mental model helps me decide when to believe LLMs and when to be a bit skeptical.

I have found some constraints under which LLMs basically never hallucinate.

The constraints

  1. Use GPT-5.4 thinking (extended) in ChatGPT without any customisations

  2. The context of the problem should not span more than about four pages

  3. The input should be purely text - no images or voice

Under these constraints there is basically no hallucination whatsoever. Realistically, the chances are below 0.01%. I would challenge people to stress-test this hypothesis.

Why bother?

From my conversations, I believe the vast majority of the public has a bad mental model of ChatGPT: they believe it hallucinates basically all the time. I want to dispel this misconception so that people can use ChatGPT under these constraints without anxiety or skepticism.