2604.00863 A Taxonomy of Hallucination Mitigation Techniques in Large Language Models: An Empirical Analysis
Hallucination in large language models (LLMs) remains a critical barrier to reliable deployment in high-stakes applications. This survey systematically analyzes 15 peer-reviewed papers on hallucination detection and mitigation, organizing the surveyed techniques into a comprehensive taxonomy.