---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: img_type
      dtype: string
    - name: format_type
      dtype: string
    - name: task
      dtype: string
    - name: source
      dtype: string
    - name: image
      sequence: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: test
      num_bytes: 1887306946.625
      num_examples: 7211
  download_size: 1840289781
  dataset_size: 1887306946.625
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---


# 🎯 Spatial Perception And Reasoning Benchmark (SPAR-Bench)

A benchmark to evaluate spatial perception and reasoning in vision-language models (VLMs), with high-quality QA across 20 diverse tasks.

SPAR-Bench is a high-quality benchmark for evaluating spatial perception and reasoning in vision-language models (VLMs). It covers 20 diverse spatial tasks across single-view, multi-view, and video settings, with a total of 7,207 manually verified QA pairs.

SPAR-Bench is derived from the large-scale SPAR-7M dataset and is specifically designed to support zero-shot evaluation and task-specific analysis.

📌 SPAR-Bench at a glance:

- ✅ 7,207 manually verified QA pairs
- 🧠 20 spatial tasks (depth, distance, relation, imagination, etc.)
- 🎥 Supports single-view, multi-view, and video inputs
- 📏 Two evaluation metrics: Accuracy & MRA
- 📷 Available in RGB-only and RGB-D versions

## 🧱 Available Variants

We provide four versions of SPAR-Bench, covering both RGB-only and RGB-D settings, as well as full-size and lightweight variants:

| Dataset Name | Description |
| --- | --- |
| SPAR-Bench | Full benchmark (7,207 QA pairs) with RGB images |
| SPAR-Bench-RGBD | Full benchmark with depth maps, camera poses, and intrinsics |
| SPAR-Bench-Tiny | 1,000-sample subset (50 QA pairs per task), for fast evaluation or API use |
| SPAR-Bench-Tiny-RGBD | Tiny version with RGB-D inputs |

🔎 Tiny versions are designed for quick evaluation (e.g., via APIs or human studies).
💡 RGB-D versions include depth maps, camera poses, and intrinsics, making them suitable for 3D-aware models.

To load a different version via `datasets`, simply change the dataset name:

```python
from datasets import load_dataset

spar = load_dataset("jasonzhango/SPAR-Bench")
spar_rgbd = load_dataset("jasonzhango/SPAR-Bench-RGBD")
spar_tiny = load_dataset("jasonzhango/SPAR-Bench-Tiny")
spar_tiny_rgbd = load_dataset("jasonzhango/SPAR-Bench-Tiny-RGBD")
```
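
Each row follows the schema in the metadata above (`id`, `img_type`, `format_type`, `task`, `source`, `image`, `question`, `answer`), and each variant ships a single `test` split. A minimal sketch of iterating over samples (field names are taken from the dataset metadata; everything else is illustrative):

```python
from datasets import load_dataset

# Load only the "test" split (the benchmark's sole split).
spar = load_dataset("jasonzhango/SPAR-Bench", split="test")

sample = spar[0]
print(sample["task"])      # one of the 20 spatial task names
print(sample["question"])  # the question text
print(sample["answer"])    # ground-truth answer, stored as a string

# "image" is a sequence of images: single-view samples carry one frame,
# multi-view/video samples carry several.
print(len(sample["image"]))
```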

πŸ•ΉοΈ Evaluation

SPAR-Bench supports two evaluation metrics, depending on the question type:

- **Accuracy** – for multiple-choice questions (exact match)
- **Mean Relative Accuracy (MRA)** – for numerical-answer questions (e.g., depth, distance)

🧠 The MRA metric is inspired by the design in *Thinking in Space* and is tailored for spatial reasoning tasks involving quantities like distance and depth.
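
For concreteness, here is a minimal sketch of both metrics. The exact-match part follows directly from the description above; the MRA part assumes the threshold grid {0.50, 0.55, …, 0.95} used in *Thinking in Space*, so check the evaluation code in our GitHub repository for the authoritative settings:

```python
import numpy as np

def exact_match_accuracy(preds: list[str], gts: list[str]) -> float:
    """Accuracy for multiple-choice questions: exact string match."""
    return float(np.mean([p.strip() == g.strip() for p, g in zip(preds, gts)]))

def mean_relative_accuracy(pred: float, gt: float,
                           thresholds=np.arange(0.50, 1.00, 0.05)) -> float:
    """MRA for a single numerical answer: the prediction counts as correct
    at threshold t if the relative error |pred - gt| / |gt| is below 1 - t,
    and MRA averages this indicator over all thresholds.
    NOTE: the grid {0.50, ..., 0.95} is assumed from Thinking in Space,
    not read out of the SPAR-Bench evaluation code."""
    rel_err = abs(pred - gt) / abs(gt)
    return float(np.mean([rel_err < (1.0 - t) for t in thresholds]))
```

For example, predicting 1.8 m against a ground truth of 2.0 m (10% relative error) passes the eight thresholds below 0.90, giving an MRA of 0.8 under this grid.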

We provide an evaluation pipeline in our GitHub repository, built on top of `lmms-eval`.

## 📚 BibTeX

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
    title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
    author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
    year={2025},
    journal={arXiv preprint arXiv:2503.22976},
}
```