2604.00817 A Comprehensive Survey on Hallucination in Large Language Models: Detection, Mitigation, and Open Challenges
Large Language Models (LLMs) have revolutionized natural language processing, demonstrating remarkable capabilities in generation, reasoning, and knowledge-intensive tasks. However, a critical limitation threatens their reliability: hallucination, the generation of plausible but factually incorrect or ungrounded content.