Understanding and Mitigating Hallucinations in Multi-Agent LLM Systems via Data Journalism Gameplay
This paper was accepted to IEEE VIS 2025 (short paper track).
IEEE Xplore: https://ieeexplore.ieee.org/abstract/document/11298897
Yilin Lu, Shurui Du, Qianwen Wang
Conference Presentation: https://m.youtube.com/watch?v=AJ95H_3rPaw&feature=youtu.be
Large language models (LLMs) are increasingly used to support data analysis and visualization tasks but remain prone to hallucinations that produce incorrect results. Recent work suggests that multi-agent systems (MAS) can mitigate hallucinations by enabling internal validation and cross-verification among diverse agents. However, designing effective MAS architectures is challenging, particularly for newcomers, due to the wide range of coordination strategies and the lack of interactive, hands-on learning tools. To address this, we present The Agentopia Times, an educational game that teaches MAS design for hallucination mitigation through active experimentation. The Agentopia Times simulates a newsroom where LLM agents collaborate to create data-driven narratives, tasking users with adjusting communication protocols to manage hallucinated content. The game features a structured mapping between MAS concepts and familiar gameplay mechanics, providing immediate feedback on agent performance and hallucination outcomes. Through two use cases, we demonstrate how The Agentopia Times enables users to explore hallucination propagation and refine MAS strategies.
First, install all dependencies:

```
npm install
```

Then, run the development server:

```
npm run start
```
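To give a flavor of the kind of agent communication protocol the game lets users experiment with, here is a minimal TypeScript sketch of one coordination strategy: a writer agent that may introduce an unsupported claim, followed by a fact-checker agent that blocks ungrounded content before "publication". All names here (Claim, Agent, writerAgent, factCheckerAgent, runPipeline) are illustrative assumptions, not the game's actual implementation or API.

```typescript
// Hypothetical sketch of a sequential cross-verification protocol.
// Names and logic are illustrative only; the game's internals may differ.

interface Claim {
  text: string;
  supportedByData: boolean; // whether the claim is grounded in the source data
}

// An agent transforms the current set of claims into a new set.
type Agent = (claims: Claim[]) => Claim[];

// A "writer" agent that occasionally hallucinates an unsupported claim.
const writerAgent: Agent = (claims) => [
  ...claims,
  { text: "Crime fell 40% this year", supportedByData: false }, // hallucination
];

// A "fact-checker" agent that filters out claims not grounded in the data.
const factCheckerAgent: Agent = (claims) =>
  claims.filter((c) => c.supportedByData);

// A simple sequential protocol: each agent's output feeds the next agent.
function runPipeline(agents: Agent[], seed: Claim[]): Claim[] {
  return agents.reduce((claims, agent) => agent(claims), seed);
}

const groundedClaims: Claim[] = [
  { text: "Median income rose 3% in 2023", supportedByData: true },
];

// Without verification, the hallucinated claim reaches the front page.
console.log(runPipeline([writerAgent], groundedClaims));

// Adding the fact-checker downstream removes the unsupported claim.
console.log(runPipeline([writerAgent, factCheckerAgent], groundedClaims));
```

In the game, swapping agents in and out of such a pipeline, or reordering them, corresponds to adjusting the newsroom's communication protocol and directly changes whether hallucinated content propagates to the final narrative.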