LongT2IBench: A Benchmark for Evaluating Long Text-to-Image Generation with
Graph-structured Annotations

AAAI 2026 Oral
1School of Artificial Intelligence, Xidian University
2State Key Laboratory of Electromechanical Integrated Manufacturing of High-Performance Electronic Equipments, Xidian University

Abstract

The increasing popularity of long Text-to-Image (T2I) generation has created an urgent need for automatic and interpretable models that can evaluate image-text alignment in long-prompt scenarios. However, existing T2I alignment benchmarks predominantly focus on short prompts and only provide MOS or Likert-scale annotations. This limitation hinders the development of long T2I evaluators, particularly with respect to the interpretability of alignment. In this study, we contribute LongT2IBench, which comprises 14K long text-image pairs accompanied by graph-structured human annotations. Given the detail-intensive nature of long prompts, we first design a Generate-Refine-Qualify annotation protocol to convert them into textual graph structures encompassing entities, attributes, and relations. Through this transformation, fine-grained alignment annotations are collected at the level of these granular elements. Finally, the graph-structured annotations are converted into alignment scores and interpretations to facilitate the design of T2I evaluation models. Based on LongT2IBench, we further propose LongT2IExpert, a long T2I evaluator that enables multi-modal large language models (MLLMs) to provide both quantitative scores and structured interpretations through an instruction-tuning process with a Hierarchical Alignment Chain-of-Thought (CoT). Extensive experiments and comparisons demonstrate the superiority of the proposed LongT2IExpert in alignment evaluation and interpretation.
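
The abstract describes reducing graph-structured annotations, whose elements are entities, attributes, and relations, to alignment scores. The minimal Python sketch below illustrates one way such an annotation could be represented and converted into a scalar score; the class names, fields, and the simple fraction-of-aligned-elements scoring rule are illustrative assumptions, not the actual LongT2IBench format.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical representation of a graph-structured alignment annotation.
# Field names and the scoring rule are illustrative assumptions.

@dataclass
class Entity:
    name: str                                                  # e.g. "car"
    attributes: Dict[str, bool] = field(default_factory=dict)  # attribute -> depicted correctly?
    aligned: bool = True                                        # is the entity itself depicted?

@dataclass
class Relation:
    subject: str      # entity name
    predicate: str    # e.g. "parked beside"
    obj: str          # entity name
    aligned: bool = True

@dataclass
class GraphAnnotation:
    entities: List[Entity]
    relations: List[Relation]

    def alignment_score(self) -> float:
        """Fraction of graph elements (entities, attributes, relations) judged aligned."""
        judgements: List[bool] = []
        for e in self.entities:
            judgements.append(e.aligned)
            judgements.extend(e.attributes.values())
        judgements.extend(r.aligned for r in self.relations)
        return sum(judgements) / len(judgements) if judgements else 0.0

# Toy usage: one entity with two attributes plus one relation -> 3 of 4 elements aligned.
ann = GraphAnnotation(
    entities=[Entity("car", {"red": True, "vintage": False})],
    relations=[Relation("car", "parked beside", "lamppost", aligned=True)],
)
print(f"alignment score: {ann.alignment_score():.2f}")  # 0.75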


LongPrompt-3K
LongT2IBench-14K
LongT2IExpert

BibTeX

@misc{yang2025longt2ibenchbenchmarkevaluatinglong,
      title={LongT2IBench: A Benchmark for Evaluating Long Text-to-Image Generation with Graph-structured Annotations}, 
      author={Zhichao Yang and Tianjiao Gu and Jianjie Wang and Feiyu Lin and Xiangfei Sheng and Pengfei Chen and Leida Li},
      year={2025},
      eprint={2512.09271},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.09271}, 
}