CVPR 2026

Exploring Spatial Intelligence
from a Generative Perspective

GSI-Bench — a benchmark and training set for Generative Spatial Intelligence, spanning real-world (GSI-Real) and large-scale synthetic (GSI-Syn) settings.
Muzhi Zhu1,2*, Shunyao Jiang1*, Huanyi Zheng1, Zekai Luo1, Hao Zhong1, Anzhou Li1,2,
Kaijun Wang1, Jintao Rong4, Yang Liu1, Hao Chen1, Tao Lin3,2, Chunhua Shen1,2
1Zhejiang University, State Key Laboratory of CAD & CG  ·  2Ant Group  ·  3Westlake University  ·  4Zhejiang University of Technology  
*Equal contribution    Corresponding authors
GSI-Bench overview
Figure 1. GSI-Bench evaluates a diverse set of spatial editing skills across real-world (GSI-Real) and synthetic (GSI-Syn-Tabletop, GSI-Syn-Room) domains. A unified evaluation protocol measures Instruction Compliance (IC), Spatial Accuracy (SA), Edit Locality (EL), and Appearance Consistency (AC). Fine-tuning on GSI-Syn substantially boosts spatial generation on every subset and also improves spatial understanding.

Abstract

Spatial intelligence is essential for multimodal large language models, yet current benchmarks largely assess it only from an understanding perspective. We ask whether modern generative or unified multimodal models also possess Generative Spatial Intelligence (GSI)—the ability to respect and manipulate 3D spatial constraints during image generation—and whether such capability can be measured or improved.

We introduce GSI-Bench, the first benchmark designed to quantify GSI through spatially grounded image editing. It consists of two complementary components: GSI-Real, a high-quality real-world dataset built via a 3D-prior-guided generation and filtering pipeline, and GSI-Syn, a large-scale synthetic benchmark with controllable spatial operations and fully automated labeling. Together with a unified evaluation protocol, GSI-Bench enables scalable, model-agnostic assessment of spatial compliance and editing fidelity.

Experiments show that fine-tuning unified multimodal models on GSI-Syn yields substantial gains on both synthetic and real tasks and, strikingly, also improves downstream spatial understanding. This provides the first clear evidence that generative training can tangibly strengthen spatial reasoning—establishing a new pathway for advancing spatial intelligence in multimodal models.

Key Statistics

A compact snapshot of GSI-Bench's scale and scope.

GSI-Real

441
real-world samples drawn from 211 diverse ScanNet++ scenes, spanning 3 operation types.

GSI-Syn-Tabletop

600
MesaTask-based tabletop samples with 3 controllable operations.

GSI-Syn-Room

593
AI2-THOR-based room samples covering 6 spatial operation categories.

GSI-Syn-Train

10,500
training pairs (1,500 per operation per environment), with strict scene separation from the evaluation sets.
Spatial operations: Camera-Relative Movement · Object Rotation · Object Scaling · Object Removal · Perspective Control · Object Placement · Receptacle Placement

Contributions

Four concrete steps toward spatial intelligence from generation.

01

A benchmark for Generative Spatial Intelligence

We introduce GSI-Bench, which operationalizes GSI as spatially grounded image editing—requiring models to respect explicit 3D constraints while generating.

02

Two complementary datasets

GSI-Real is the first high-quality real-world set for spatially grounded editing; GSI-Syn is a large-scale synthetic set with controllable operations and difficulty levels.

03

Automated pipelines

We build an end-to-end pipeline for data generation and evaluation that combines 3D grounding priors, rule-based operation sampling, MLLM captioning & validation, and human verification.

04

Generation improves understanding

Fine-tuning on GSI-Syn—with no understanding or QA supervision—improves spatial generation and transfers to downstream spatial understanding benchmarks.

Benchmark Construction Pipeline

A unified pipeline for synthetic and real-world spatial editing data.

GSI-Bench construction pipeline
GSI-Syn (top) samples actionable viewpoints, validates candidate operations in 3D, executes them in a simulator, and filters outcomes with multimodal quality checks. GSI-Real (bottom) reuses 3D priors from ScanNet++ for scene reconstruction and pair mining, followed by MLLM captioning and human refinement—yielding high-fidelity (I, T, I′) triplets with aligned scene metadata.
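For readers who want the control flow at a glance, here is a minimal Python sketch of the GSI-Syn loop described above: sample an actionable viewpoint, validate the candidate operation in 3D, execute it in the simulator, and quality-filter the outcome before emitting an (I, T, I′) triplet. Every function body below is an illustrative stub (sample_viewpoint, execute_in_simulator, and so on are not the released API); this sketches the stages, not the actual pipeline code.

```python
import random
from dataclasses import dataclass

# The seven operation types listed in the statistics above.
OPERATIONS = [
    "camera_relative_movement", "object_rotation", "object_scaling",
    "object_removal", "perspective_control", "object_placement",
    "receptacle_placement",
]

@dataclass
class Triplet:
    source_image: str   # I: rendered "before" view
    instruction: str    # T: natural-language spatial edit
    edited_image: str   # I': rendered "after" view
    metadata: dict      # aligned scene metadata (operation, 3D params, ...)

def sample_viewpoint(scene: str) -> dict:
    """Stage 1 stub: pick an actionable camera pose in the scene."""
    return {"scene": scene, "pose": [random.random() for _ in range(6)]}

def validate_in_3d(view: dict, op: str) -> bool:
    """Stage 2 stub: check the operation is geometrically feasible here."""
    return random.random() > 0.3

def execute_in_simulator(view: dict, op: str):
    """Stage 3 stub: apply the edit and render the before/after pair."""
    return "before.png", "after.png", {"op": op, "view": view}

def passes_quality_checks(meta: dict) -> bool:
    """Stage 4 stub: multimodal quality filtering (rules + MLLM judge)."""
    return True

def build_triplet(scene: str):
    view = sample_viewpoint(scene)
    op = random.choice(OPERATIONS)
    if not validate_in_3d(view, op):
        return None  # reject infeasible candidates before rendering
    before, after, meta = execute_in_simulator(view, op)
    if not passes_quality_checks(meta):
        return None
    instruction = f"Apply {op.replace('_', ' ')} to the target object."
    return Triplet(before, instruction, after, meta)

if __name__ == "__main__":
    print(build_triplet("example_scene"))
```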

Four Evaluation Dimensions

Each edit is scored along a fine-grained, model-agnostic protocol.

IC
Instruction Compliance

Does the output actually perform the requested spatial operation?

SA
Spatial Accuracy

Is the 3D displacement, rotation, or scale close to the ground-truth geometry?

AC
Appearance Consistency

Are object identity, category, and appearance preserved after editing?

EL
Edit Locality

Is the rest of the scene left untouched outside the intended region?
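To make the protocol concrete, the sketch below shows one plausible way to score SA and EL given ground-truth geometry and edit masks. The page does not spell out the exact metric formulas, so the exponential error decay and the mask-based locality ratio here are assumptions, not the official implementation.

```python
import math

def spatial_accuracy(pred_delta, gt_delta, tol=0.5):
    """SA sketch: score decays with the L2 error between predicted and
    ground-truth 3D edit parameters (a translation vector here; rotation
    and scale would use analogous error terms). `tol` is an assumed scale."""
    err = math.dist(pred_delta, gt_delta)
    return 100.0 * math.exp(-err / tol)

def edit_locality(diff_mask, edit_mask):
    """EL sketch: fraction of pixels outside the intended edit region that
    were left unchanged. Masks are flattened 0/1 lists of equal length."""
    changed_outside = sum(1 for d, e in zip(diff_mask, edit_mask) if d and not e)
    total_outside = sum(1 for e in edit_mask if not e)
    return 100.0 * (1.0 - changed_outside / max(total_outside, 1))

# Toy usage: a 0.1 m translation error, and a 2x2 image where one pixel
# outside the intended edit region was modified.
print(round(spatial_accuracy([0.9, 0.0, 0.0], [1.0, 0.0, 0.0]), 2))  # 81.87
print(round(edit_locality([1, 1, 0, 0], [1, 0, 0, 0]), 2))           # 66.67
```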

Main Results

Performance of nine state-of-the-art image-editing and unified multimodal models on GSI-Bench, plus BAGEL fine-tuned on GSI-Syn (BAGEL + GSI-Syn). Fine-tuning yields consistent gains over the BAGEL baseline on every subset and dimension (the Δ↑ column). Higher is better.

Performance comparison on GSI-Bench across three subsets and four dimensions (higher is better). Nano Banana and GPT-img are closed-source; all other models are open-source.

| Subset | Dim | Nano Banana | GPT-img | AnyEdit | UniWorld | Ultra | Qwen | OmniGen2 | Emu3.5 | BAGEL | BAGEL + GSI-Syn | Δ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GSI-Real | IC | 38.78 | 41.72 | 10.20 | 28.80 | 10.66 | *51.02* | 33.56 | **51.70** | 31.97 | 40.14 | +8.16 |
| GSI-Real | SA | 21.60 | 28.04 | 8.37 | 18.36 | 5.70 | **31.22** | 19.62 | *29.51* | 22.07 | 27.76 | +5.68 |
| GSI-Real | AC | 38.78 | 41.52 | 9.68 | 28.75 | 9.48 | *50.95* | 33.20 | **51.70** | 31.88 | 40.14 | +8.25 |
| GSI-Real | EL | 34.92 | 27.52 | 8.75 | 18.51 | 8.97 | *40.55* | 29.82 | **41.17** | 27.89 | 37.11 | +9.22 |
| GSI-Real | Avg | 33.52 | 34.70 | 9.25 | 23.61 | 8.70 | *43.44* | 29.05 | **43.52** | 28.46 | 36.28 | +7.83 |
| GSI-Syn-Tabletop | IC | 36.62 | 39.33 | 10.33 | 15.83 | 2.17 | 27.33 | 0.00 | *39.17* | 27.17 | **50.67** | +23.50 |
| GSI-Syn-Tabletop | SA | 38.96 | 26.16 | 22.84 | *30.33* | 3.09 | 25.52 | 0.00 | 24.09 | 26.52 | **44.10** | +17.58 |
| GSI-Syn-Tabletop | AC | 36.62 | 38.40 | 10.33 | 15.58 | 1.33 | 27.27 | 0.00 | *38.82* | 26.52 | **50.67** | +24.15 |
| GSI-Syn-Tabletop | EL | 35.91 | 31.98 | 9.52 | 14.43 | 1.93 | 25.51 | 0.00 | *34.91* | 26.17 | **49.52** | +23.36 |
| GSI-Syn-Tabletop | Avg | 37.03 | 33.97 | 13.26 | 19.04 | 2.13 | 26.41 | 0.00 | *34.25* | 26.59 | **48.74** | +22.15 |
| GSI-Syn-Room | IC | 20.65 | 8.05 | 7.00 | 12.69 | 2.20 | 20.40 | 18.71 | *20.70* | 16.11 | **24.01** | +7.90 |
| GSI-Syn-Room | SA | 16.85 | 8.05 | 6.46 | 11.55 | 2.21 | *17.73* | 15.03 | 16.56 | 14.53 | **19.41** | +4.88 |
| GSI-Syn-Room | AC | 28.01 | 16.69 | 11.85 | 20.40 | 3.46 | *28.67* | 25.94 | 26.98 | 24.00 | **31.64** | +7.64 |
| GSI-Syn-Room | EL | 19.65 | 7.34 | 5.50 | 11.03 | 1.86 | *18.48* | 17.13 | 17.56 | 14.82 | **22.61** | +7.79 |
| GSI-Syn-Room | Avg | 21.29 | 10.03 | 7.70 | 13.92 | 2.43 | *21.32* | 19.20 | 20.45 | 17.37 | **24.42** | +7.05 |

**bold** = best open-source · *italic* = second-best open-source · Δ↑ = BAGEL + GSI-Syn − BAGEL baseline
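Two columns are directly recomputable from the others: the Avg rows match the plain mean of IC, SA, AC, and EL (up to per-cell rounding), and Δ↑ is BAGEL + GSI-Syn minus the BAGEL baseline. A quick check on the GSI-Real rows:

```python
# (BAGEL, BAGEL + GSI-Syn) scores on GSI-Real, copied from the table above.
scores = {
    "IC": (31.97, 40.14),
    "SA": (22.07, 27.76),
    "AC": (31.88, 40.14),
    "EL": (27.89, 37.11),
}

base_avg = sum(b for b, _ in scores.values()) / len(scores)
ft_avg = sum(f for _, f in scores.values()) / len(scores)

# Recomputing from rounded cells can drift from the table by ~0.01
# (28.45 vs. 28.46 and 36.29 vs. 36.28 here); the delta still matches.
print(f"BAGEL Avg: {base_avg:.2f}  BAGEL + GSI-Syn Avg: {ft_avg:.2f}")
print(f"Delta: {ft_avg - base_avg:+.2f}")  # +7.83, as reported
```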

Generation → Understanding Transfer

Fine-tuning on GSI-Syn uses only spatially grounded generative editing data—no QA or reasoning supervision—yet consistently improves downstream spatial understanding.
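Concretely, the supervision is nothing more than instruction-conditioned editing pairs. A minimal sketch of how each GSI-Syn (I, T, I′) triplet could be serialized into a standard instruction-editing training record (field names are illustrative, not the released data format):

```python
import json

def to_training_record(triplet: dict) -> dict:
    """Map an (I, T, I') triplet to a generic instruction-editing sample.
    Note there are no QA or reasoning labels: supervision is purely
    generative (source image + instruction -> target image)."""
    return {
        "source_image": triplet["source_image"],   # I
        "instruction": triplet["instruction"],     # T
        "target_image": triplet["edited_image"],   # I'
    }

# Hypothetical example record:
triplet = {
    "source_image": "scene_0001/before.png",
    "instruction": "Move the mug 20 cm to the left of the plate.",
    "edited_image": "scene_0001/after.png",
}
print(json.dumps(to_training_record(triplet), indent=2))
```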

OmniSpatial benchmark (accuracy %). Best among open-source 7B models in **bold**. †Proprietary.

| Model | Size | Overall | Dynamic Reasoning | Spatial Interaction | Complex Logic | Perspective Taking |
|---|---|---|---|---|---|---|
| GPT-4-turbo† | – | 34.06 | 38.39 | 36.49 | 24.80 | 33.69 |
| Gemini-2.5† | – | 52.12 | 63.59 | 67.46 | 35.67 | 43.10 |
| LLaVA-1.5 | 7B | 34.97 | 35.38 | 35.13 | 25.99 | 38.82 |
| Qwen-VL-2.5 | 7B | 39.25 | 46.30 | 30.06 | **35.65** | 39.68 |
| BAGEL | 7B | 41.55 | 47.38 | 45.67 | 32.14 | 39.22 |
| BAGEL + GSI-Syn | 7B | **42.07** | **48.33** | **47.67** | 28.97 | **40.29** |
SAT-Real benchmark (accuracy %) across five spatial reasoning dimensions.

| Model | Overall | Perspective | Goal Aiming | Egocentric Action | Object Motion | Egocentric Motion |
|---|---|---|---|---|---|---|
| Qwen-VL-2.5 | 56.33 | 43.94 | 67.65 | 56.76 | 56.52 | 56.52 |
| BAGEL | 65.33 | 46.97 | 75.00 | 75.68 | 65.22 | 60.87 |
| BAGEL + GSI-Syn | 69.33 | 48.48 | 85.29 | 72.97 | 65.22 | 73.91 |

Qualitative Comparison

Five instruction types across GSI-Real, GSI-Syn-Tabletop, and GSI-Syn-Room. BAGEL+ (BAGEL fine-tuned on GSI-Syn) preserves unaffected content while executing the spatial operation more faithfully.

Qualitative spatial editing comparisons
Columns: input, Emu3.5, BAGEL, BAGEL+, ground truth. Rows 1–2 use GSI-Real; rows 3–4 use GSI-Syn-Tabletop; the last row uses GSI-Syn-Room.
Additional qualitative examples
Additional spatial editing cases across operation types.
GSI-Real gallery
Gallery of real-world examples from GSI-Real.
GSI-Syn gallery
Synthetic examples from GSI-Syn-Tabletop and GSI-Syn-Room.

Key Findings

Current SOTA struggles with spatial precision

Even leading closed-source models show large gaps on GSI-Real and GSI-Syn-Room, with Spatial Accuracy often below 30. Precise 3D-aware editing remains an open challenge.

Synthetic supervision transfers to real scenes

Trained purely on synthetic pairs, BAGEL + GSI-Syn still improves over the BAGEL baseline on GSI-Real by +7.83 Avg, with Edit Locality jumping +9.22.

Removal is easier than geometric manipulation

Across models, spatial removal succeeds more often than translation, rotation, or scaling—indicating an inductive bias toward deletion rather than true geometric edits.

Generation supervision boosts understanding

On OmniSpatial, GSI-Syn fine-tuning raises BAGEL by +2.00 on Spatial Interaction and +1.07 on Perspective Taking; on SAT-Real, overall accuracy jumps by +4.00.
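All three quoted deltas can be recomputed directly from the transfer tables above:

```python
# (BAGEL, BAGEL + GSI-Syn) accuracies from the two transfer tables.
omnispatial = {
    "Spatial Interaction": (45.67, 47.67),
    "Perspective Taking": (39.22, 40.29),
}
sat_real_overall = (65.33, 69.33)

for dim, (bagel, finetuned) in omnispatial.items():
    print(f"OmniSpatial {dim}: {finetuned - bagel:+.2f}")  # +2.00, +1.07
print(f"SAT-Real Overall: {sat_real_overall[1] - sat_real_overall[0]:+.2f}")  # +4.00
```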

Citation

If you find GSI-Bench useful for your research, please cite our paper.

@article{zhu2026exploringspatial,
  title={Exploring Spatial Intelligence from a Generative Perspective},
  author={Zhu, Muzhi and Jiang, Shunyao and Zheng, Huanyi and Luo, Zekai and Zhong, Hao and Li, Anzhou and Wang, Kaijun and Rong, Jintao and Liu, Yang and Chen, Hao and Lin, Tao and Shen, Chunhua},
  journal={arXiv preprint arXiv:2604.20570},
  year={2026}
}