To handle complex and implicit visual requirements, recent unified multimodal models (UMMs) increasingly adopt chain-of-thought reasoning to guide image generation. However, the actual effect of this reasoning on visual synthesis remains unclear.
We present UReason, a diagnostic benchmark for reasoning-driven image generation that evaluates whether reasoning can be faithfully executed in pixels. UReason contains 2,000 instances across five task families: Code, Arithmetic, Spatial, Attribute, and Text reasoning. To isolate the role of reasoning traces, we introduce an evaluation framework comparing direct generation, reasoning-guided generation, and de-contextualized generation, which conditions only on the refined prompt.
Across eight open-source unified models, we observe a consistent Reasoning Paradox: reasoning traces generally improve performance over direct generation, yet retaining the intermediate thoughts as conditioning context often hinders visual synthesis, while conditioning only on the refined prompt yields substantial gains.
Our analysis suggests that the bottleneck lies in contextual interference rather than insufficient reasoning capacity. UReason provides a principled testbed for studying reasoning in unified models and motivates future methods that effectively integrate reasoning for visual generation while mitigating interference.
UReason is designed to evaluate the visual executability of reasoning chains. Unlike standard benchmarks that focus primarily on aesthetic quality or direct description, UReason challenges models to perform multi-step deduction to determine the correct visual target. The benchmark consists of 2,000 manually annotated instances spanning five diagnostic tasks: Code, Arithmetic, Spatial, Attribute, and Text reasoning.
Each instance is paired with a verifiable criterion (e.g., exact counts, specific spatial arrangements) to enable objective performance measurement.
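To make the idea of a verifiable criterion concrete, here is a minimal sketch of how an exact-count criterion could be scored against detections extracted from a generated image. The names (`Criterion`, `check_count`) and the detection format are illustrative assumptions, not the UReason toolkit's actual API.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """A verifiable visual target, e.g. an exact object count.

    Illustrative structure; the real benchmark may encode criteria differently.
    """
    object_name: str
    expected_count: int


def check_count(criterion: Criterion, detected: dict[str, int]) -> bool:
    """Pass iff the image contains exactly the expected number of objects."""
    return detected.get(criterion.object_name, 0) == criterion.expected_count


# Example: an arithmetic instance whose prompt implies "3 + 2 apples",
# so the correct visual target is exactly five apples.
crit = Criterion(object_name="apple", expected_count=5)
print(check_count(crit, {"apple": 5}))  # correct deduction passes
print(check_count(crit, {"apple": 3}))  # copying a surface number fails
```

Exact-match scoring like this is what makes performance measurement objective: there is a single correct target, so no aesthetic judgment is needed.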
Fig 1. Representative UReason instances covering Code, Arithmetic, Spatial, Attribute, and Text reasoning.
To rigorously diagnose the impact of reasoning on image generation, we introduce the UReason Evaluation Toolkit. This framework implements a controlled ablation protocol to isolate the effectiveness of reasoning from potential interference. We evaluate models across three distinct settings: Direct (the raw instruction only), Reasoning-Guided (the instruction together with the full reasoning trace), and De-contextualized (only the refined prompt distilled from the trace).
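The three settings differ only in the text handed to the image generator. A minimal sketch of how the conditioning could be constructed, assuming a generic text-to-image `generate(prompt)` callable; the helper name and prompt templates below are illustrative, not the toolkit's actual interface:

```python
def build_conditioning(instruction: str, trace: str, refined_prompt: str) -> dict[str, str]:
    """Return the conditioning text for each of the three ablation settings.

    Illustrative templates; the real toolkit may format prompts differently.
    """
    return {
        # Direct: the raw instruction, with no reasoning at all.
        "direct": instruction,
        # Reasoning-guided: the instruction plus the full chain-of-thought,
        # so the intermediate thoughts remain in context at generation time.
        "reasoning_guided": f"{instruction}\n\nReasoning: {trace}\n\n{refined_prompt}",
        # De-contextualized: only the refined prompt distilled from the trace.
        "decontextualized": refined_prompt,
    }


settings = build_conditioning(
    instruction="Draw as many apples as 3 + 2.",
    trace="3 + 2 = 5, so the image must show exactly five apples.",
    refined_prompt="An image of exactly five apples.",
)
for name, prompt in settings.items():
    print(name, "->", prompt)
```

Comparing the same model under these three prompts, with everything else held fixed, is what lets the verbose trace itself be identified as the source of interference.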
This comparative approach reveals the Reasoning Paradox: while reasoning is essential for planning (determining what to draw), the verbose traces often act as "contextual noise" that hinders the visual generator's execution of that plan.
Fig 2. Overview of the UReason evaluation framework comparing Direct, Reasoning-Guided, and De-contextualized settings.
If you have any inquiries about UReason, feel free to reach out to us at ureason2026@gmail.com, chy085@ucsd.edu, chufansh@usc.edu.