
Hallucinations

Until recently, computer systems have been quite reliable at producing facts. When they were wrong, it was because of logical errors or errors in the source data. But we could be certain the machine didn't make things up.

The new AI language models are a different matter. They can provide facts and produce other useful content. But sometimes, when they can't come up with a "correct" answer, they produce completely plausible-sounding yet utterly wrong answers. They can make things up.

This phenomenon is called hallucination. The machine has learned to "experience" something that isn't real, just as the human mind is capable of producing realistic experiences (in certain circumstances) that have no basis in reality.

It's scary that it's not the machine's intelligent capabilities that make it more human-like, but its flaws.