Hallucination in a human signals a malfunction. Hallucination in an LLM signals that it is working exactly as designed

Concept::
LLMs are designed to generate plausible answers and present them in an authoritative tone. All too often, however, they make things up that aren’t true.

Computer scientists have a technical term for this: hallucination. But this term is a misnomer because when a human hallucinates, they are doing something very different.

In psychiatric medicine, the term “hallucination” refers to the experience of false or misleading perceptions. LLMs are not the sorts of things that have experiences or perceptions.

Moreover, a hallucination is a pathology. It’s something that happens when systems are not working properly.

When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.

When LLMs get things wrong, they aren’t hallucinating. They are bullshitting.

Source:: LESSON 2