---
license: apache-2.0
---
# GeoGramBench: Benchmarking the Geometric Program Reasoning in Modern LLMs
GeoGramBench is a benchmark dataset for evaluating the geometric spatial reasoning of large language models (LLMs) over procedural drawing code. It introduces a novel task, **Program-to-Geometry**, which requires models to translate programmatic drawing code into abstract geometric reasoning in order to solve problems.
## Features of GeoGramBench
- **500 Curated Problems:** Each sample pairs procedural drawing code with a geometry reasoning problem; all problems are curated for quality, fairness, and diversity (an illustrative record is sketched after this list).
- **Taxonomy-Based Evaluation:** Problems are categorized into three difficulty levels:
  - **Primitive Recognition:** Basic geometric problems requiring direct recognition of a few elements.
  - **Local Relation Composition:** Problems requiring reasoning about relations among multiple geometric components.
  - **Global Abstract Integration:** Complex problems requiring global spatial synthesis, parameterization, or 3D reasoning.
- **Six Subtypes:** Problems span six mathematical subfields: `Angle`, `Length`, `Area`, `Volume`, `Ratio`, and `Count`, supporting fine-grained diagnostics.
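The exact field layout is defined by the released files; as a rough illustration (field names and values below are hypothetical, not the official schema), a single record might pair an Asymptote snippet with a question, an answer, and the two taxonomy labels:

```python
# Hypothetical sketch of a GeoGramBench-style record; field names are
# placeholders and may differ from the released dataset schema.
sample = {
    "problem": "Based on the figure produced by the code below, what is the area of triangle ABC?",
    "code": (
        "pair A = (0,0), B = (4,0), C = (0,3);\n"   # Asymptote drawing code
        "draw(A--B--C--cycle);\n"
        'label("$A$", A, SW); label("$B$", B, SE); label("$C$", C, N);'
    ),
    "answer": "6",                      # right triangle with legs 4 and 3
    "level": "Primitive Recognition",   # one of the three difficulty levels
    "subtype": "Area",                  # one of the six subtypes
}
```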
## Dataset Composition
| Subtype | Primitive | Compositional | Abstract |
|-----------|-----------|---------------|----------|
| Angle | 22 | 20 | 7 |
| Length | 25 | 88 | 20 |
| Area | 26 | 89 | 46 |
| Ratio | 14 | 51 | 4 |
| Count | 15 | 31 | 15 |
| Volume | 0 | 0 | 27 |
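The counts in the table sum to 102 Primitive, 279 Compositional, and 119 Abstract problems, for 500 in total, matching the dataset size. A quick arithmetic check:

```python
# Per-subtype counts copied from the composition table above:
# (Primitive, Compositional, Abstract)
counts = {
    "Angle":  (22, 20, 7),
    "Length": (25, 88, 20),
    "Area":   (26, 89, 46),
    "Ratio":  (14, 51, 4),
    "Count":  (15, 31, 15),
    "Volume": (0, 0, 27),
}

primitive, compositional, abstract = (sum(col) for col in zip(*counts.values()))
print(primitive, compositional, abstract)    # 102 279 119
print(primitive + compositional + abstract)  # 500
```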
## Benchmark Highlights
- GeoGramBench differs from traditional math benchmarks by emphasizing the symbolic-to-spatial abstraction capabilities of LLMs, leveraging procedural code expressed in formats such as `Asymptote`.
- Initial evaluation of 17 state-of-the-art LLMs revealed substantial gaps, particularly on higher-abstraction tasks (per-category accuracies, in %, are reported in the table below):
  - Models achieved less than **50%** accuracy on the most challenging **Global Abstract Integration** category.
  - Even advanced models struggle to bridge procedural code with reliable spatial reasoning.
| Model | Primitive | Compositional | Abstract | ALL |
|-------|-----------|-----------|-----------|--------------|
| **Closed-source Models** | | | | |
| GPT-o3-mini | 84.33 | 75.66 | 42.16 | 70.00 |
| GPT-o1 | 86.76 | 76.02 | 43.35 | 70.92 |
| GPT-o1-preview | 74.79 | 55.98 | 26.20 | 53.15 |
| GPT-o1-mini | 79.62 | 63.21 | 29.09 | 58.94 |
| GPT-4o | 39.81 | 21.29 | 4.96 | 21.40 |
| Gemini-Pro-1.5 | 49.26 | 31.79 | 15.92 | 31.64 |
| **Open-source Models** | | | | |
| Qwen3-235B-Thinking-2507| 89.09 | 79.12 | 49.05 | 74.00 |
| DeepSeek-R1 | 85.66 | 75.27 | 40.38 | 69.17 |
| DeepSeek-v3-0324 | 80.57 | 68.89 | 27.67 | 62.05 |
| QwQ-32B | 85.17 | 73.12 | 37.92 | 67.20 |
| DeepSeek-R1-Distill-Qwen-32B | 79.78 | 67.83 | 35.92 | 62.68 |
| Bespoke-Stratos-32B | 62.50 | 42.56 | 17.02 | 40.55 |
| s1.1-32B | 75.37 | 58.96 | 26.58 | 54.60 |
| DeepSeek-R1-Distill-Qwen-7B | 72.79 | 58.74 | 24.16 | 53.38 |
| Sky-T1-mini-7B | 71.45 | 57.75 | 24.79 | 52.70 |
| DeepSeek-R1-Distill-Qwen-1.5B | 60.29 | 39.02 | 11.03 | 36.70 |
| DeepScaleR-1.5B-preview | 65.44 | 47.89 | 15.76 | 43.83 |
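The table reports accuracy (%) per taxonomy level and overall for each model. As a minimal sketch, per-level scores could be computed from final answers as below; exact string matching is an assumption here, and the paper's actual grading may normalize answers differently:

```python
from collections import defaultdict

def per_level_accuracy(predictions, references, levels):
    """Accuracy (%) per taxonomy level and overall (simplified sketch).

    predictions, references: lists of final-answer strings
    levels: taxonomy labels such as "Global Abstract Integration"
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, ref, level in zip(predictions, references, levels):
        totals[level] += 1
        if pred.strip() == ref.strip():  # assumed grading rule; may differ from the paper
            hits[level] += 1
    per_level = {lvl: 100.0 * hits[lvl] / totals[lvl] for lvl in totals}
    overall = 100.0 * sum(hits.values()) / sum(totals.values())
    return per_level, overall
```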
## Use Cases
GeoGramBench is designed for:
- Researchers developing **geometry-aware LLMs** for symbolic-to-spatial reasoning.
- Model diagnostics to pinpoint weaknesses in handling code-driven geometric reasoning or abstract spatial relations.
- Evaluating and tracking LLM progress on spatial reasoning tasks.
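For these use cases, if the dataset is hosted on the Hugging Face Hub it can be loaded with the `datasets` library; the repository id, split name, and field names below are placeholders, not confirmed values:

```python
from datasets import load_dataset

# Placeholder repository id and split; substitute the actual values from this card.
ds = load_dataset("your-org/GeoGramBench", split="test")

# Example diagnostic slice: keep only the hardest taxonomy level.
# The "level" field name is an assumption about the schema.
hardest = ds.filter(lambda ex: ex["level"] == "Global Abstract Integration")
print(len(hardest))
```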
## Citation
If you use GeoGramBench in your research, please cite:
```bibtex
@article{luo2025geogrambench,
  title   = {{GeoGramBench}: Benchmarking the Geometric Program Reasoning in Modern {LLMs}},
  author  = {Luo, Shixian and Zhu, Zezhou and Yuan, Yu and Yang, Yuncheng and Shan, Lianlei and Wu, Yong},
  journal = {arXiv preprint arXiv:2505.17653},
  year    = {2025}
}
```