Core Concepts
Hallucination
When an AI generates confident but factually incorrect responses.
Hallucination occurs when an AI language model produces information that sounds plausible but is factually wrong or entirely fabricated. In a customer-facing chat agent, hallucinations can damage brand trust — for example, quoting incorrect pricing or inventing product features. The most effective defenses are well-scoped system prompts, retrieval-augmented generation (RAG), and knowledge base constraints that ground the agent's answers in verified content rather than letting it guess.
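The grounding pattern described above can be sketched in a few lines. This is a minimal, illustrative example, not a real agent framework: the knowledge base, system prompt, and keyword-overlap retriever are all hypothetical stand-ins for a production retrieval pipeline. The key idea is that the agent only answers from retrieved context, and refuses when nothing relevant is found.

```python
import re

# Hypothetical knowledge base: in practice this would be a vector store
# over curated docs and FAQs, not an in-memory list of strings.
KNOWLEDGE_BASE = [
    "Our Basic plan costs $10/month and includes email support.",
    "Our Pro plan costs $30/month and includes priority chat support.",
    "Refunds are available within 30 days of purchase.",
]

# System prompt that scopes the agent to the retrieved context only.
SYSTEM_PROMPT = (
    "You are a support agent. Answer ONLY from the provided context. "
    "If the context does not contain the answer, say you don't know."
)

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question.

    A real system would use embeddings; keyword overlap keeps the
    sketch self-contained.
    """
    q_words = tokenize(question)
    scored = sorted(docs, key=lambda d: len(q_words & tokenize(d)), reverse=True)
    # Keep only documents that actually share words with the question.
    return [d for d in scored[:top_k] if q_words & tokenize(d)]

def build_prompt(question: str) -> str:
    """Assemble the model input: system rules + retrieved context + question."""
    context = retrieve(question, KNOWLEDGE_BASE)
    if not context:
        # No grounding available: refuse instead of letting the model fabricate.
        return SYSTEM_PROMPT + "\n\nContext: (none found)\nAnswer: I don't know."
    return SYSTEM_PROMPT + "\n\nContext:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
```

A question about pricing pulls in the matching plan document, while an out-of-scope question (say, about the weather) retrieves nothing and produces a refusal instead of an invented answer.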
Related Terms
System Prompt
Instructions given to an AI model that define its persona, rules, and behavior.
Knowledge Base
A curated set of documents and FAQs that an AI agent uses to answer questions.
Retrieval-Augmented Generation (RAG)
Enhancing AI responses by retrieving relevant documents before generating an answer.