Core Concepts

Hallucination

When an AI generates confident but factually incorrect responses.

Hallucination occurs when an AI language model produces information that sounds plausible but is factually wrong or entirely fabricated. In a customer-facing chat agent, hallucinations can damage brand trust — for example, quoting incorrect pricing or inventing product features. The best defenses are well-scoped system prompts, retrieval-augmented generation (RAG), and knowledge base constraints that ground the agent's answers in verified sources rather than the model's own recall.
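The grounding idea above can be sketched in a few lines. This is a minimal illustration, not a production pattern: the in-memory `KNOWLEDGE_BASE`, the keyword-overlap `retrieve` function, and the prompt wording are all hypothetical stand-ins for a real vector store and retriever.

```python
# Hypothetical in-memory knowledge base standing in for a real document store.
KNOWLEDGE_BASE = [
    "The Pro plan costs $20 per user per month.",
    "The Basic plan includes 5 GB of storage.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank knowledge base entries by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to retrieved facts and tell it to refuse otherwise."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the facts below. If the facts do not cover the "
        'question, reply exactly: "I don\'t have that information."\n\n'
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("How much does the Pro plan cost?"))
```

The refusal instruction is as important as the retrieved context: without an explicit "say you don't know" path, the model will still improvise when retrieval comes back empty.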

Related Terms