WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation

1King Abdullah University of Science and Technology, 2Lanzhou University, 3The University of Sydney, 4IHPC, A*STAR
*Equal contribution

Comparison of existing text-only article generation methods and our proposed WikiAutoGen. Existing approaches rely exclusively on textual sources, often producing inconsistent or inaccurate results. For example, in (a), the target topic is ‘Benzoxonium Chloride’, yet the baseline incorrectly generates information about ‘Benzalkonium Chloride’. In contrast, our WikiAutoGen framework integrates both visual and textual modalities to generate coherent multimodal content. Additionally, WikiAutoGen employs a multi-perspective self-reflection mechanism, significantly improving content accuracy and reliability, as illustrated in (b).

Abstract

Knowledge discovery and collection are intelligence-intensive tasks that traditionally require significant human effort to ensure high-quality outputs. Recent research has explored multi-agent frameworks for automating Wikipedia-style article generation by retrieving and synthesizing information from the internet. However, these methods focus primarily on text-only generation, overlooking the importance of multimodal content in enhancing informativeness and engagement. In this work, we introduce WikiAutoGen, a novel system for automated multimodal Wikipedia-style article generation. Unlike prior approaches, WikiAutoGen retrieves and integrates relevant images alongside text, enriching both the depth and visual appeal of generated content. To further improve factual accuracy and comprehensiveness, we propose a multi-perspective self-reflection mechanism, which critically assesses retrieved content from diverse viewpoints to enhance its reliability, breadth, and coherence. Additionally, we introduce WikiSeek, a benchmark of Wikipedia articles whose topics are paired with both textual and image-based representations, designed to evaluate multimodal knowledge generation on more challenging topics. Experimental results show that WikiAutoGen outperforms previous methods by 8%-29% on our WikiSeek benchmark, producing more accurate, coherent, and visually enriched Wikipedia-style articles.

Method


Overview of WikiAutoGen, our multimodal framework for Wikipedia-style article generation. The pipeline includes three main stages: (1) an Outline Proposal module that structures the article outline based on the multimodal topic input (image and text); (2) a Textual Article Writing module involving persona generation, multi-agent collaborative exploration, and article drafting; and (3) a Multimodal Article Writing module that incorporates relevant images through positioning proposals, retrieval, selection, and final polishing. The entire generation process is enhanced by a Multi-Perspective Self-Reflection module, which leverages supervisory and agent-specific feedback (writer, reader, editor) to iteratively improve article quality in terms of coherence, readability, and engagement.
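The three stages and the self-reflection loop described above can be sketched as a simple orchestration skeleton. This is a minimal illustrative sketch only: every function name, signature, and stub body here is a hypothetical placeholder, not the authors' actual implementation or API.

```python
# Hypothetical sketch of the WikiAutoGen pipeline; all names are
# illustrative placeholders, not the paper's actual code.

def propose_outline(topic_text, topic_image=None):
    # Stage 1 (Outline Proposal): structure an outline from the
    # multimodal topic input (stubbed with a fixed outline).
    return ["Introduction", "Background", "Properties", "Conclusion"]

def write_textual_article(outline):
    # Stage 2 (Textual Article Writing): persona generation,
    # multi-agent collaborative exploration, and drafting (stubbed).
    return {section: f"Draft text for {section}." for section in outline}

def add_images(article):
    # Stage 3 (Multimodal Article Writing): position proposals, image
    # retrieval, selection, and polishing (stubbed with no images).
    return {sec: {"text": body, "image": None} for sec, body in article.items()}

def self_reflect(article, max_rounds=2):
    # Multi-Perspective Self-Reflection: in the real system, writer,
    # reader, and editor agents critique the draft and it is revised;
    # here the loop is a no-op placeholder.
    for _ in range(max_rounds):
        pass
    return article

def generate_article(topic_text, topic_image=None):
    outline = propose_outline(topic_text, topic_image)
    draft = write_textual_article(outline)
    multimodal = add_images(draft)
    return self_reflect(multimodal)

article = generate_article("Benzoxonium Chloride")
print(list(article.keys()))
```

The point of the skeleton is the data flow: the outline constrains the textual draft, the draft constrains image placement, and self-reflection wraps the whole result rather than any single stage.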

BibTeX

@misc{yang2025wikiautogenmultimodalwikipediastylearticle,
      title={WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation}, 
      author={Zhongyu Yang and Jun Chen and Dannong Xu and Junjie Fei and Xiaoqian Shen and Liangbing Zhao and Chun-Mei Feng and Mohamed Elhoseiny},
      year={2025},
      eprint={2503.19065},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.19065}, 
}