UReason: Benchmarking the Reasoning Paradox in Unified Multimodal Models

1University of California San Diego   2University of Southern California
3University of Illinois Urbana-Champaign   4Carnegie Mellon University
*Equal contribution.

Abstract

To address complex and implicit visual requirements, recent unified multimodal models (UMMs) increasingly adopt chain-of-thought reasoning to guide image generation. However, the actual effect of such reasoning on visual synthesis remains unclear.

We present UReason, a diagnostic benchmark for reasoning-driven image generation that evaluates whether reasoning can be faithfully executed in pixels. UReason contains 2,000 instances across five task families: Code, Arithmetic, Spatial, Attribute, and Text reasoning. To isolate the role of reasoning traces, we introduce an evaluation framework comparing direct generation, reasoning-guided generation, and de-contextualized generation, which conditions only on the refined prompt.

Across eight open-source unified models, we observe a consistent Reasoning Paradox: reasoning traces generally improve performance over direct generation, yet retaining the intermediate thoughts as conditioning context often hinders visual synthesis, and conditioning only on the refined prompt yields substantial gains.

Our analysis suggests that the bottleneck lies in contextual interference rather than insufficient reasoning capacity. UReason provides a principled testbed for studying reasoning in unified models and motivates future methods that effectively integrate reasoning for visual generation while mitigating interference.

The UReason Benchmark

UReason is designed to evaluate the visual executability of reasoning chains. Unlike standard benchmarks that focus primarily on aesthetic quality or direct description, UReason challenges models to perform multi-step deduction to determine the correct visual target. The benchmark consists of 2,000 manually annotated instances spanning five diagnostic tasks:

  • Code Reasoning: Interpreting and executing code (e.g., HTML, Python) to render visual outputs.
  • Arithmetic Reasoning: Tracking object quantities through mathematical reasoning.
  • Spatial Reasoning: Inferring complex layouts from implicit spatial cues and logical constraints.
  • Attribute Reasoning: Tracking state transitions to determine final object properties.
  • Text Reasoning: Deriving text strings via logical rules rather than direct quotation.

Each instance is paired with a verifiable criterion (e.g., exact counts, specific spatial arrangements) to enable objective performance measurement.
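To make the pairing of instances with verifiable criteria concrete, here is a minimal sketch of an arithmetic-reasoning instance and its automatic check. The field names and the `verify_count` function are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical instance schema and verifier; field names are assumptions,
# not UReason's actual data format.

def verify_count(detected_objects, criterion):
    """Check an arithmetic-reasoning criterion: exact object counts.

    detected_objects: mapping from object name to count found in the image.
    """
    return all(detected_objects.get(name, 0) == n
               for name, n in criterion["counts"].items())

instance = {
    "task": "arithmetic",
    "prompt": "Draw the apples left after eating 2 of 5.",
    "criterion": {"counts": {"apple": 3}},
}

print(verify_count({"apple": 3}, instance["criterion"]))  # True
print(verify_count({"apple": 5}, instance["criterion"]))  # False
```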


Fig 1. Representative UReason instances covering Code, Arithmetic, Spatial, Attribute, and Text reasoning.

Evaluation Framework

To rigorously diagnose the impact of reasoning on image generation, we introduce the UReason Evaluation Toolkit. This framework implements a controlled ablation protocol to isolate the effectiveness of reasoning from potential interference. We evaluate models across three distinct settings:

  1. Direct Generation: The baseline setting where the model generates images directly from the original prompt.
  2. Reasoning-Guided Generation: The model first generates a Chain-of-Thought (CoT) reasoning trace, then generates the image conditioned on the full context (prompt + reasoning).
  3. De-contextualized Generation: The model performs reasoning to derive a refined prompt, but the intermediate thoughts are discarded; the image is generated conditioned only on the refined prompt.
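The three settings above can be sketched as follows. `generate_reasoning`, `generate_image`, and `extract_refined_prompt` are hypothetical stand-ins for a unified model's interfaces, not the toolkit's actual API.

```python
# Illustrative sketch of the three evaluation settings.
# The model interface and refined-prompt extraction are assumptions.

def extract_refined_prompt(reasoning):
    # Assumption: the trace ends with the refined prompt on its last line.
    return reasoning.strip().splitlines()[-1]

def evaluate(model, prompt, setting):
    if setting == "direct":
        # Baseline: generate directly from the original prompt.
        return model.generate_image(prompt)
    reasoning = model.generate_reasoning(prompt)  # CoT trace
    if setting == "reasoning_guided":
        # Condition on the full context: prompt + intermediate thoughts.
        return model.generate_image(prompt + "\n" + reasoning)
    if setting == "decontextualized":
        # Keep only the refined prompt; discard intermediate thoughts.
        return model.generate_image(extract_refined_prompt(reasoning))
    raise ValueError(f"unknown setting: {setting}")
```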

This comparative approach reveals the Reasoning Paradox: while reasoning is essential for planning (determining what to draw), the verbose traces often act as "contextual noise" that hinders the visual generator's execution.


Fig 2. Overview of the UReason evaluation framework comparing Direct, Reasoning-Guided, and De-contextualized settings.

Main Results

We evaluate 8 UMMs on UReason across all three evaluation settings. The UReason leaderboard reports Visual Verification Accuracy (%) and Performance Gain (Δ).

📬 Contact Us

If you have any inquiries about UReason, feel free to reach out to us at ureason2026@gmail.com, chy085@ucsd.edu, chufansh@usc.edu.