<div align="center">

<p align="center">
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/logo.png" width="300"/>
</p>

<h1 align="center">
dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
</h1>

[![Blog](https://img.shields.io/badge/Blog-View_on_GitHub-333.svg?logo=github)](https://github.com/rednote-hilab/dots.ocr/blob/master/assets/blog.md)
[![HuggingFace](https://img.shields.io/badge/HuggingFace%20Weights-black.svg?logo=HuggingFace)](https://huggingface.co/rednote-hilab/dots.ocr)

<div align="center">
<a href="https://dotsocr.xiaohongshu.com" target="_blank" rel="noopener noreferrer"><strong>🖥️ Live Demo</strong></a> |
<a href="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/wechat.png" target="_blank" rel="noopener noreferrer"><strong>💬 WeChat</strong></a> |
<a href="https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c" target="_blank" rel="noopener noreferrer"><strong>📕 rednote</strong></a> |
<a href="https://x.com/rednotehilab" target="_blank" rel="noopener noreferrer"><strong>🐦 X</strong></a>
</div>

</div>

## Introduction

**dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.

1. **Powerful Performance:** **dots.ocr** achieves SOTA results for text, tables, and reading order on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), while delivering formula recognition comparable to much larger models such as Doubao-1.5 and Gemini2.5-Pro.
2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing of low-resource languages, achieving decisive advantages in both layout detection and content recognition on our in-house multilingual document benchmark.
3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex multi-model pipelines. Switching between tasks is accomplished simply by changing the input prompt, showing that a VLM can achieve detection results competitive with traditional detection models like DocLayout-YOLO.
4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference than many other high-performing models based on larger foundations.

### Performance Comparison: dots.ocr vs. Competing Models
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/chart.png" border="0" />

> **Notes:**
> - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.

## News
* ```2025.07.30 ``` 🚀 We release [dots.ocr](https://github.com/rednote-hilab/dots.ocr) — a multilingual document parsing model based on a 1.7B LLM, with SOTA performance.

## Benchmark Results

### 1. OmniDocBench

#### The end-to-end evaluation results of different tasks.

<table>
<thead>
<tr>
<th rowspan="2"><strong>Model<br>Type</strong></th>
<th rowspan="2"><strong>Methods</strong></th>
<th colspan="2"><strong>Overall<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Text<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Formula<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Table<sup>TEDS</sup>↑</strong></th>
<th colspan="2"><strong>Table<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Read Order<sup>Edit</sup>↓</strong></th>
</tr>
<tr>
<th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th>
</tr>
</thead>
<tbody>
<tr><td rowspan="8"><strong>Pipeline<br>Tools</strong></td><td>MinerU</td><td>0.150</td><td>0.357</td><td>0.061</td><td>0.215</td><td>0.278</td><td>0.577</td><td>78.6</td><td>62.1</td><td>0.180</td><td>0.344</td><td>0.079</td><td>0.292</td></tr>
<tr><td>Marker</td><td>0.336</td><td>0.556</td><td>0.080</td><td>0.315</td><td>0.530</td><td>0.883</td><td>67.6</td><td>49.2</td><td>0.619</td><td>0.685</td><td>0.114</td><td>0.340</td></tr>
<tr><td>Mathpix</td><td>0.191</td><td>0.365</td><td>0.105</td><td>0.384</td><td>0.306</td><td>0.454</td><td>77.0</td><td>67.1</td><td>0.243</td><td>0.320</td><td>0.108</td><td>0.304</td></tr>
<tr><td>Docling</td><td>0.589</td><td>0.909</td><td>0.416</td><td>0.987</td><td>0.999</td><td>1</td><td>61.3</td><td>25.0</td><td>0.627</td><td>0.810</td><td>0.313</td><td>0.837</td></tr>
<tr><td>Pix2Text</td><td>0.320</td><td>0.528</td><td>0.138</td><td>0.356</td><td>0.276</td><td>0.611</td><td>73.6</td><td>66.2</td><td>0.584</td><td>0.645</td><td>0.281</td><td>0.499</td></tr>
<tr><td>Unstructured</td><td>0.586</td><td>0.716</td><td>0.198</td><td>0.481</td><td>0.999</td><td>1</td><td>0</td><td>0.06</td><td>1</td><td>0.998</td><td>0.145</td><td>0.387</td></tr>
<tr><td>OpenParse</td><td>0.646</td><td>0.814</td><td>0.681</td><td>0.974</td><td>0.996</td><td>1</td><td>64.8</td><td>27.5</td><td>0.284</td><td>0.639</td><td>0.595</td><td>0.641</td></tr>
<tr><td>PPStruct-V3</td><td>0.145</td><td>0.206</td><td>0.058</td><td>0.088</td><td>0.295</td><td>0.535</td><td>-</td><td>-</td><td>0.159</td><td>0.109</td><td>0.069</td><td>0.091</td></tr>
<tr><td rowspan="9"><strong>Expert<br>VLMs</strong></td><td>GOT-OCR</td><td>0.287</td><td>0.411</td><td>0.189</td><td>0.315</td><td>0.360</td><td>0.528</td><td>53.2</td><td>47.2</td><td>0.459</td><td>0.520</td><td>0.141</td><td>0.280</td></tr>
<tr><td>Nougat</td><td>0.452</td><td>0.973</td><td>0.365</td><td>0.998</td><td>0.488</td><td>0.941</td><td>39.9</td><td>0</td><td>0.572</td><td>1.000</td><td>0.382</td><td>0.954</td></tr>
<tr><td>Mistral OCR</td><td>0.268</td><td>0.439</td><td>0.072</td><td>0.325</td><td>0.318</td><td>0.495</td><td>75.8</td><td>63.6</td><td>0.600</td><td>0.650</td><td>0.083</td><td>0.284</td></tr>
<tr><td>OLMOCR-sglang</td><td>0.326</td><td>0.469</td><td>0.097</td><td>0.293</td><td>0.455</td><td>0.655</td><td>68.1</td><td>61.3</td><td>0.608</td><td>0.652</td><td>0.145</td><td>0.277</td></tr>
<tr><td>SmolDocling-256M</td><td>0.493</td><td>0.816</td><td>0.262</td><td>0.838</td><td>0.753</td><td>0.997</td><td>44.9</td><td>16.5</td><td>0.729</td><td>0.907</td><td>0.227</td><td>0.522</td></tr>
<tr><td>Dolphin</td><td>0.206</td><td>0.306</td><td>0.107</td><td>0.197</td><td>0.447</td><td>0.580</td><td>77.3</td><td>67.2</td><td>0.180</td><td>0.285</td><td>0.091</td><td>0.162</td></tr>
<tr><td>MinerU 2</td><td>0.139</td><td>0.240</td><td>0.047</td><td>0.109</td><td>0.297</td><td>0.536</td><td>82.5</td><td>79.0</td><td>0.141</td><td>0.195</td><td>0.069</td><td>0.118</td></tr>
<tr><td>OCRFlux</td><td>0.195</td><td>0.281</td><td>0.064</td><td>0.183</td><td>0.379</td><td>0.613</td><td>71.6</td><td>81.3</td><td>0.253</td><td>0.139</td><td>0.086</td><td>0.187</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td>0.138</td><td>0.206</td><td>0.067</td><td>0.107</td><td><strong>0.246</strong></td><td>0.421</td><td>81.5</td><td>87.5</td><td>0.139</td><td>0.111</td><td>0.100</td><td>0.185</td></tr>
<tr><td rowspan="5"><strong>General<br>VLMs</strong></td><td>GPT4o</td><td>0.233</td><td>0.399</td><td>0.144</td><td>0.409</td><td>0.425</td><td>0.606</td><td>72.0</td><td>62.9</td><td>0.234</td><td>0.329</td><td>0.128</td><td>0.251</td></tr>
<tr><td>Qwen2-VL-72B</td><td>0.252</td><td>0.327</td><td>0.096</td><td>0.218</td><td>0.404</td><td>0.487</td><td>76.8</td><td>76.4</td><td>0.387</td><td>0.408</td><td>0.119</td><td>0.193</td></tr>
<tr><td>Qwen2.5-VL-72B</td><td>0.214</td><td>0.261</td><td>0.092</td><td>0.18</td><td>0.315</td><td>0.434</td><td>82.9</td><td>83.9</td><td>0.341</td><td>0.262</td><td>0.106</td><td>0.168</td></tr>
<tr><td>Gemini2.5-Pro</td><td>0.148</td><td>0.212</td><td>0.055</td><td>0.168</td><td>0.356</td><td>0.439</td><td>85.8</td><td>86.4</td><td>0.13</td><td>0.119</td><td>0.049</td><td>0.121</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.140</td><td>0.162</td><td>0.043</td><td>0.085</td><td>0.295</td><td><strong>0.384</strong></td><td>83.3</td><td><strong>89.3</strong></td><td>0.165</td><td><strong>0.085</strong></td><td>0.058</td><td>0.094</td></tr>
<tr><td rowspan="1"><strong>Expert VLMs</strong></td><td><strong>dots.ocr</strong></td><td><strong>0.125</strong></td><td><strong>0.160</strong></td><td><strong>0.032</strong></td><td><strong>0.066</strong></td><td>0.329</td><td>0.416</td><td><strong>88.6</strong></td><td>89.0</td><td><strong>0.099</strong></td><td>0.092</td><td><strong>0.040</strong></td><td><strong>0.067</strong></td></tr>
</tbody>
</table>

#### The end-to-end text recognition performance across 9 PDF page types.

<table>
<thead>
<tr>
<th><strong>Model<br>Type</strong></th>
<th><strong>Models</strong></th>
<th><strong>Book</strong></th>
<th><strong>Slides</strong></th>
<th><strong>Financial<br>Report</strong></th>
<th><strong>Textbook</strong></th>
<th><strong>Exam<br>Paper</strong></th>
<th><strong>Magazine</strong></th>
<th><strong>Academic<br>Papers</strong></th>
<th><strong>Notes</strong></th>
<th><strong>Newspaper</strong></th>
<th><strong>Overall</strong></th>
</tr>
</thead>
<tbody>
<tr><td rowspan="3"><strong>Pipeline<br>Tools</strong></td><td>MinerU</td><td>0.055</td><td>0.124</td><td><u>0.033</u></td><td>0.102</td><td>0.159</td><td><strong>0.072</strong></td><td><u>0.025</u></td><td>0.984</td><td>0.171</td><td>0.206</td></tr>
<tr><td>Marker</td><td>0.074</td><td>0.340</td><td>0.089</td><td>0.319</td><td>0.452</td><td>0.153</td><td>0.059</td><td>0.651</td><td>0.192</td><td>0.274</td></tr>
<tr><td>Mathpix</td><td>0.131</td><td>0.220</td><td>0.202</td><td>0.216</td><td>0.278</td><td>0.147</td><td>0.091</td><td>0.634</td><td>0.690</td><td>0.300</td></tr>
<tr><td rowspan="5"><strong>Expert<br>VLMs</strong></td><td>GOT-OCR</td><td>0.111</td><td>0.222</td><td>0.067</td><td>0.132</td><td>0.204</td><td>0.198</td><td>0.179</td><td>0.388</td><td>0.771</td><td>0.267</td></tr>
<tr><td>Nougat</td><td>0.734</td><td>0.958</td><td>1.000</td><td>0.820</td><td>0.930</td><td>0.830</td><td>0.214</td><td>0.991</td><td>0.871</td><td>0.806</td></tr>
<tr><td>Dolphin</td><td>0.091</td><td>0.131</td><td>0.057</td><td>0.146</td><td>0.231</td><td>0.121</td><td>0.074</td><td>0.363</td><td>0.307</td><td>0.177</td></tr>
<tr><td>OCRFlux</td><td>0.068</td><td>0.125</td><td>0.092</td><td>0.102</td><td>0.119</td><td>0.083</td><td>0.047</td><td>0.223</td><td>0.536</td><td>0.149</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td>0.084</td><td>0.129</td><td>0.060</td><td>0.090</td><td>0.107</td><td>0.073</td><td>0.050</td><td>0.171</td><td>0.107</td><td>0.100</td></tr>
<tr><td rowspan="4"><strong>General<br>VLMs</strong></td><td>GPT4o</td><td>0.157</td><td>0.163</td><td>0.348</td><td>0.187</td><td>0.281</td><td>0.173</td><td>0.146</td><td>0.607</td><td>0.751</td><td>0.316</td></tr>
<tr><td>Qwen2.5-VL-7B</td><td>0.148</td><td>0.053</td><td>0.111</td><td>0.137</td><td>0.189</td><td>0.117</td><td>0.134</td><td>0.204</td><td>0.706</td><td>0.205</td></tr>
<tr><td>InternVL3-8B</td><td>0.163</td><td>0.056</td><td>0.107</td><td>0.109</td><td>0.129</td><td>0.100</td><td>0.159</td><td>0.150</td><td>0.681</td><td>0.188</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.048</td><td>0.048</td><td>0.024</td><td><strong>0.062</strong></td><td>0.085</td><td>0.051</td><td>0.039</td><td><strong>0.096</strong></td><td>0.181</td><td>0.073</td></tr>
<tr><td rowspan="1"><strong>Expert VLMs</strong></td><td><strong>dots.ocr</strong></td><td><strong>0.031</strong></td><td><strong>0.047</strong></td><td><strong>0.011</strong></td><td>0.082</td><td><strong>0.079</strong></td><td><strong>0.028</strong></td><td><strong>0.029</strong></td><td>0.109</td><td><strong>0.056</strong></td><td><strong>0.055</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells from the result markdown.
> - We use the tikz_preprocess pipeline to upsample images to 200 DPI.

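For intuition on the Edit↓ columns in the tables above: they are normalized edit distances between predicted and ground-truth text, so lower is better and 0 is a perfect match. A minimal illustrative sketch (not the actual OmniDocBench implementation, which normalizes and segments the markdown first):

```python
def normalized_edit_distance(pred: str, ref: str) -> float:
    """Levenshtein distance divided by the length of the longer string."""
    m, n = len(pred), len(ref)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, n)

print(normalized_edit_distance("kitten", "sitting"))  # 3 edits / 7 chars ≈ 0.4286
```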
### 2. **dots.ocr-bench**

This is an in-house benchmark containing 1,493 PDF images covering 100 languages.

#### The end-to-end evaluation results of different tasks.

<table>
<thead>
<tr>
<th><strong>Methods</strong></th>
<th><strong>Overall<sup>Edit</sup>↓</strong></th>
<th><strong>Text<sup>Edit</sup>↓</strong></th>
<th><strong>Formula<sup>Edit</sup>↓</strong></th>
<th><strong>Table<sup>TEDS</sup>↑</strong></th>
<th><strong>Table<sup>Edit</sup>↓</strong></th>
<th><strong>Read Order<sup>Edit</sup>↓</strong></th>
</tr>
</thead>
<tbody>
<tr><td>MonkeyOCR-3B</td><td>0.483</td><td>0.445</td><td>0.627</td><td>50.93</td><td>0.452</td><td>0.409</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.291</td><td>0.226</td><td>0.440</td><td>71.2</td><td>0.260</td><td>0.238</td></tr>
<tr><td>doubao-1-6</td><td>0.299</td><td>0.270</td><td>0.417</td><td>71.0</td><td>0.258</td><td>0.253</td></tr>
<tr><td>Gemini2.5-Pro</td><td>0.251</td><td>0.163</td><td>0.402</td><td>77.1</td><td>0.236</td><td>0.202</td></tr>
<tr><td><strong>dots.ocr</strong></td><td><strong>0.177</strong></td><td><strong>0.075</strong></td><td><strong>0.297</strong></td><td><strong>79.2</strong></td><td><strong>0.186</strong></td><td><strong>0.152</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - We use the same metric calculation pipeline as [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
> - We delete the Page-header and Page-footer cells from the result markdown.

#### Layout Detection

<table>
<thead>
<tr>
<th rowspan="2"><strong>Method</strong></th>
<th colspan="5" style="text-align: center;"><strong>F1@IoU=.50:.05:.95↑</strong></th>
<th colspan="5" style="text-align: center;"><strong>F1@IoU=.50↑</strong></th>
</tr>
<tr>
<th>Overall</th><th>Text</th><th>Formula</th><th>Table</th><th>Picture</th><th>Overall</th><th>Text</th><th>Formula</th><th>Table</th><th>Picture</th>
</tr>
</thead>
<tbody>
<tr><td>DocLayout-YOLO-DocStructBench</td><td>0.733</td><td>0.694</td><td>0.480</td><td>0.803</td><td>0.619</td><td>0.806</td><td>0.779</td><td>0.620</td><td>0.858</td><td>0.678</td></tr>
<tr><td>dots.ocr-parse all</td><td>0.831</td><td>0.801</td><td>0.654</td><td>0.838</td><td>0.748</td><td>0.922</td><td>0.909</td><td>0.770</td><td>0.888</td><td>0.831</td></tr>
<tr><td><strong>dots.ocr-detection only</strong></td><td><strong>0.845</strong></td><td><strong>0.816</strong></td><td><strong>0.716</strong></td><td><strong>0.875</strong></td><td><strong>0.765</strong></td><td><strong>0.930</strong></td><td><strong>0.917</strong></td><td><strong>0.832</strong></td><td><strong>0.918</strong></td><td><strong>0.843</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - **parse all** uses prompt_layout_all_en and **detection only** uses prompt_layout_only_en; please refer to [prompts](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py).

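For intuition on the F1@IoU columns above: a predicted box counts as a true positive when its intersection-over-union with a matching ground-truth box of the same category clears the threshold. A minimal IoU sketch for `[x1, y1, x2, y2]` boxes (illustrative only, not the evaluation code):

```python
def box_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou([0, 0, 10, 10], [5, 0, 15, 10]))  # 50 / 150 ≈ 0.333
```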
### 3. olmOCR-bench

<table>
<thead>
<tr>
<th>Model</th>
<th>ArXiv</th>
<th>Old Scans<br>Math</th>
<th>Tables</th>
<th>Old Scans</th>
<th>Headers and<br>Footers</th>
<th>Multi<br>column</th>
<th>Long Tiny<br>Text</th>
<th>Base</th>
<th>Overall</th>
</tr>
</thead>
<tbody>
<tr><td>GOT OCR</td><td>52.7</td><td>52.0</td><td>0.2</td><td>22.1</td><td>93.6</td><td>42.0</td><td>29.9</td><td>94.0</td><td>48.3 ± 1.1</td></tr>
<tr><td>Marker</td><td>76.0</td><td>57.9</td><td>57.6</td><td>27.8</td><td>84.9</td><td>72.9</td><td>84.6</td><td>99.1</td><td>70.1 ± 1.1</td></tr>
<tr><td>MinerU</td><td>75.4</td><td>47.4</td><td>60.9</td><td>17.3</td><td><strong>96.6</strong></td><td>59.0</td><td>39.1</td><td>96.6</td><td>61.5 ± 1.1</td></tr>
<tr><td>Mistral OCR</td><td>77.2</td><td>67.5</td><td>60.6</td><td>29.3</td><td>93.6</td><td>71.3</td><td>77.1</td><td>99.4</td><td>72.0 ± 1.1</td></tr>
<tr><td>Nanonets OCR</td><td>67.0</td><td>68.6</td><td>77.7</td><td>39.5</td><td>40.7</td><td>69.9</td><td>53.4</td><td>99.3</td><td>64.5 ± 1.1</td></tr>
<tr><td>GPT-4o<br>(No Anchor)</td><td>51.5</td><td><strong>75.5</strong></td><td>69.1</td><td>40.9</td><td>94.2</td><td>68.9</td><td>54.1</td><td>96.7</td><td>68.9 ± 1.1</td></tr>
<tr><td>GPT-4o<br>(Anchored)</td><td>53.5</td><td>74.5</td><td>70.0</td><td>40.7</td><td>93.8</td><td>69.3</td><td>60.6</td><td>96.8</td><td>69.9 ± 1.1</td></tr>
<tr><td>Gemini Flash 2<br>(No Anchor)</td><td>32.1</td><td>56.3</td><td>61.4</td><td>27.8</td><td>48.0</td><td>58.7</td><td><strong>84.4</strong></td><td>94.0</td><td>57.8 ± 1.1</td></tr>
<tr><td>Gemini Flash 2<br>(Anchored)</td><td>54.5</td><td>56.1</td><td>72.1</td><td>34.2</td><td>64.7</td><td>61.5</td><td>71.5</td><td>95.6</td><td>63.8 ± 1.2</td></tr>
<tr><td>Qwen 2 VL<br>(No Anchor)</td><td>19.7</td><td>31.7</td><td>24.2</td><td>17.1</td><td>88.9</td><td>8.3</td><td>6.8</td><td>55.5</td><td>31.5 ± 0.9</td></tr>
<tr><td>Qwen 2.5 VL<br>(No Anchor)</td><td>63.1</td><td>65.7</td><td>67.3</td><td>38.6</td><td>73.6</td><td>68.3</td><td>49.1</td><td>98.3</td><td>65.5 ± 1.2</td></tr>
<tr><td>olmOCR v0.1.75<br>(No Anchor)</td><td>71.5</td><td>71.4</td><td>71.4</td><td><strong>42.8</strong></td><td>94.1</td><td>77.7</td><td>71.0</td><td>97.8</td><td>74.7 ± 1.1</td></tr>
<tr><td>olmOCR v0.1.75<br>(Anchored)</td><td>74.9</td><td>71.2</td><td>71.0</td><td>42.2</td><td>94.5</td><td>78.3</td><td>73.3</td><td>98.3</td><td>75.5 ± 1.0</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td><strong>83.8</strong></td><td>68.8</td><td>74.6</td><td>36.1</td><td>91.2</td><td>76.6</td><td>80.1</td><td>95.3</td><td>75.8 ± 1.0</td></tr>
<tr><td><strong>dots.ocr</strong></td><td>82.1</td><td>64.2</td><td><strong>88.3</strong></td><td>40.9</td><td>94.1</td><td><strong>82.4</strong></td><td>81.2</td><td><strong>99.5</strong></td><td><strong>79.1 ± 1.0</strong></td></tr>
</tbody>
</table>

> **Note:**
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [olmOCR](https://github.com/allenai/olmocr), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells from the result markdown.

# Quick Start
## 1. Installation
### Install dots.ocr
```shell
conda create -n dots_ocr python=3.12
conda activate dots_ocr

git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr

# Install PyTorch; see https://pytorch.org/get-started/previous-versions/ for your CUDA version
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
pip install -e .
```

If you have trouble with the installation, try our [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) for an easier setup, and follow these steps:
```shell
git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr
pip install -e .
```

### Download Model Weights
> 💡**Note:** Please use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`) for the model save path. This is a temporary workaround pending our integration with Transformers.
```shell
python3 tools/download_model.py

# with modelscope
python3 tools/download_model.py --type modelscope
```

## 2. Deployment
### vLLM inference
We highly recommend using vLLM for deployment and inference. All of our evaluation results are based on vLLM 0.9.1.
The [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) is based on the official vLLM image. You can also follow the [Dockerfile](https://github.com/rednote-hilab/dots.ocr/blob/master/docker/Dockerfile) to build the deployment environment yourself.

```shell
# You need to register the model with vLLM first
python3 tools/download_model.py
export hf_model_path=./weights/DotsOCR  # Path to your downloaded model weights. Use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`); this is a temporary workaround pending our integration with Transformers.
export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
from DotsOCR import modeling_dots_ocr_vllm' `which vllm`  # If you downloaded the model weights yourself, replace `DotsOCR` with your model directory name, and remember to use a directory name without periods

# launch the vLLM server
CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code

# If you get ModuleNotFoundError: No module named 'DotsOCR', check the note above about the model directory name.

# vLLM API demo
python3 ./demo/demo_vllm.py --prompt_mode prompt_layout_all_en
```

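The server launched above speaks vLLM's OpenAI-compatible chat API, which is what `demo/demo_vllm.py` wraps. As a sketch of how such a request can be assembled with an inline base64 image (the helper name and prompt text here are illustrative, not part of the repo):

```python
import base64
import json

def build_chat_payload(image_bytes: bytes, prompt: str, model: str = "model") -> dict:
    """Build an OpenAI-style chat payload with an inline base64-encoded image."""
    data_url = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": model,  # matches the --served-model-name used when launching vLLM
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_url}},
                {"type": "text", "text": prompt},
            ],
        }],
    }

payload = build_chat_payload(b"\xff\xd8...", "Parse the layout of this page.")
print(json.dumps(payload)[:60])
```

Posting this payload to the server's `/v1/chat/completions` endpoint with any HTTP client should return the parsed result in the assistant message.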
### Hugging Face inference
```shell
python3 demo/demo_hf.py
```

<details>
<summary><b>Hugging Face inference details</b></summary>

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "./weights/DotsOCR"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

image_path = "demo/demo_image1.jpg"
prompt = """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.

1. Bbox format: [x1, y1, x2, y2]

2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].

3. Text Extraction & Formatting Rules:
- Picture: For the 'Picture' category, the text field should be omitted.
- Formula: Format its text as LaTeX.
- Table: Format its text as HTML.
- All Others (Text, Title, etc.): Format their text as Markdown.

4. Constraints:
- The output text must be the original text from the image, with no translation.
- All layout elements must be sorted according to human reading order.

5. Final Output: The entire output must be a single JSON object.
"""

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": prompt}
        ]
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)

inputs = inputs.to("cuda")

# Inference: generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=24000)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

</details>

### Hugging Face inference with CPU
Please refer to [CPU inference](https://github.com/rednote-hilab/dots.ocr/issues/1#issuecomment-3148962536).

## 3. Document Parse
**Based on the vLLM server**, you can parse an image or a PDF file using the following commands:
```bash
# Parse all layout info, both detection and recognition
# Parse a single image
python3 dots_ocr/parser.py demo/demo_image1.jpg
# Parse a single PDF
python3 dots_ocr/parser.py demo/demo_pdf1.pdf --num_thread 64  # try a bigger num_thread for PDFs with many pages

# Layout detection only
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_layout_only_en

# Parse text only, except Page-header and Page-footer
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_ocr

# Parse layout info within a given bbox
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_grounding_ocr --bbox 163 241 1536 705
```
**Based on Transformers**, you can parse an image or a PDF file with the same commands as above; just add `--use_hf true`.

> Note: Transformers inference is slower than vLLM. If you want to use `demo/*` with Transformers, just pass `use_hf=True` to `DotsOCRParser(..., use_hf=True)`.

1163
- <details>
1164
- <summary><b>Output Results</b></summary>
1165
-
1166
- 1. **Structured Layout Data** (`demo_image1.json`): A JSON file containing the detected layout elements, including their bounding boxes, categories, and extracted text.
1167
- 2. **Processed Markdown File** (`demo_image1.md`): A Markdown file generated from the concatenated text of all detected cells.
1168
- * An additional version, `demo_image1_nohf.md`, is also provided, which excludes page headers and footers for compatibility with benchmarks like Omnidocbench and olmOCR-bench.
1169
- 3. **Layout Visualization** (`demo_image1.jpg`): The original image with the detected layout bounding boxes drawn on it.
1170
-
1171
- </details>
1172
-
1173
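For reference, the header/footer filtering behind the `*_nohf.md` variant can be done entirely in post-processing. A minimal sketch, assuming the JSON cells carry `category` and `text` fields as described in the Output Results above (the field names, category labels, and sample data are assumptions, not a guaranteed schema):

```python
# Post-process the parser's JSON output into Markdown, mirroring the
# *_nohf.md variant that drops page headers and footers.
# NOTE: field names ("category", "text") and the category labels below are
# assumptions based on the output description, not a guaranteed schema.

EXCLUDED_CATEGORIES = {"Page-header", "Page-footer"}

def cells_to_markdown(cells, include_headers_footers=False):
    """Concatenate the text of detected layout cells (already in reading
    order) into one Markdown string."""
    parts = []
    for cell in cells:
        if not include_headers_footers and cell.get("category") in EXCLUDED_CATEGORIES:
            continue
        text = (cell.get("text") or "").strip()
        if text:
            parts.append(text)
    return "\n\n".join(parts)

# Hypothetical cells, shaped like the demo_image1.json description:
SAMPLE_CELLS = [
    {"bbox": [10, 5, 600, 30], "category": "Page-header", "text": "CONFIDENTIAL"},
    {"bbox": [10, 40, 600, 80], "category": "Section-header", "text": "## 1. Overview"},
    {"bbox": [10, 90, 600, 300], "category": "Text", "text": "Body paragraph."},
    {"bbox": [10, 950, 600, 980], "category": "Page-footer", "text": "Page 1"},
]
```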
## 4. Demo
You can run the demo with the following command, or try it directly at the [live demo](https://dotsocr.xiaohongshu.com/):
```bash
python demo/demo_gradio.py
```

We also provide a demo for grounding OCR:
```bash
python demo/demo_gradio_annotion.py
```


### Example for formula document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula1.png" alt="formula1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula2.png" alt="formula2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula3.png" alt="formula3.png" border="0" />

### Example for table document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table1.png" alt="table1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table2.png" alt="table2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table3.png" alt="table3.png" border="0" />

### Example for multilingual document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/Tibetan.png" alt="Tibetan.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/tradition_zh.png" alt="tradition_zh.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/nl.png" alt="nl.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/kannada.png" alt="kannada.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/russian.png" alt="russian.png" border="0" />

### Example for reading order
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/reading_order.png" alt="reading_order.png" border="0" />

### Example for grounding OCR
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/grounding.png" alt="grounding.png" border="0" />

## Acknowledgments
We would like to thank [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [aimv2](https://github.com/apple/ml-aim), [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and [PyMuPDF](https://github.com/pymupdf/PyMuPDF) for providing code and models.

We also thank [DocLayNet](https://github.com/DS4SD/DocLayNet), [M6Doc](https://github.com/HCIILAB/M6Doc), [CDLA](https://github.com/buptlihang/CDLA), and [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery) for providing valuable datasets.

## Limitation & Future Work

- **Complex Document Elements:**
  - **Table & Formula**: dots.ocr is not yet perfect for high-complexity table and formula extraction.
  - **Picture**: Pictures in documents are currently not parsed.

- **Parsing Failures:** The model may fail to parse under certain conditions:
  - When the character-to-pixel ratio is excessively high. Try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended). However, please note that the model performs optimally on images with a resolution under 11,289,600 pixels.
  - Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py)).

- **Performance Bottleneck:** Despite its 1.7B-parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
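The DPI advice above can be automated: start from the recommended 200 DPI and scale down whenever a page would exceed the stated pixel cap. A minimal sketch; the cap and the 200 DPI default come from the note above, while the helper and its name are illustrative, not part of the repo:

```python
# Pick a PDF rasterization DPI that honors the pixel budget noted above.
# The 11,289,600-pixel cap and 200 DPI recommendation come from the docs;
# this helper itself is illustrative, not part of the dots.ocr toolkit.

MAX_PIXELS = 11_289_600

def choose_render_dpi(page_width_pt, page_height_pt, target_dpi=200, max_pixels=MAX_PIXELS):
    """Return target_dpi if the rendered page fits within max_pixels,
    otherwise the largest smaller DPI that does.
    PDF page sizes are in points (1 pt = 1/72 inch)."""
    width_in = page_width_pt / 72.0
    height_in = page_height_pt / 72.0
    pixels = (width_in * target_dpi) * (height_in * target_dpi)
    if pixels <= max_pixels:
        return target_dpi
    scale = (max_pixels / pixels) ** 0.5
    return max(1, int(target_dpi * scale))

# e.g. with PyMuPDF (fitz), which supports a dpi argument:
#   pix = page.get_pixmap(dpi=choose_render_dpi(page.rect.width, page.rect.height))
```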

We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on Vision-Language Models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of the pictures in documents** is also a key priority for our future work.
We believe that collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and are interested in contributing to these future endeavors, we would love to hear from you. Please reach out to us via email at: [[email protected]].

2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing capabilities for low-resource languages, achieving decisive advantages across both layout detection and content recognition on our in-house multilingual documents benchmark.
3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex, multi-model pipelines. Switching between tasks is accomplished simply by altering the input prompt, proving that a VLM can achieve competitive detection results compared to traditional detection models like DocLayout-YOLO.
4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference speeds than many other high-performing models based on larger foundations.


### Performance Comparison: dots.ocr vs. Competing Models
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/chart.png" border="0" />

> **Notes:**
> - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench); the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.


## News
* ```2025.07.30 ``` 🚀 We release [dots.ocr](https://github.com/rednote-hilab/dots.ocr), a multilingual document parsing model built on a 1.7B LLM, with SOTA performance.



## Benchmark Results

### 1. OmniDocBench

#### The end-to-end evaluation results of different tasks.

<table>
<thead>
<tr><th rowspan="2"><strong>Model<br>Type</strong></th><th rowspan="2"><strong>Methods</strong></th><th colspan="2"><strong>Overall<sup>Edit</sup>↓</strong></th><th colspan="2"><strong>Text<sup>Edit</sup>↓</strong></th><th colspan="2"><strong>Formula<sup>Edit</sup>↓</strong></th><th colspan="2"><strong>Table<sup>TEDS</sup>↑</strong></th><th colspan="2"><strong>Table<sup>Edit</sup>↓</strong></th><th colspan="2"><strong>Read Order<sup>Edit</sup>↓</strong></th></tr>
<tr><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th><th><em>EN</em></th><th><em>ZH</em></th></tr>
</thead>
<tbody>
<tr><td rowspan="8"><strong>Pipeline<br>Tools</strong></td><td>MinerU</td><td>0.150</td><td>0.357</td><td>0.061</td><td>0.215</td><td>0.278</td><td>0.577</td><td>78.6</td><td>62.1</td><td>0.180</td><td>0.344</td><td>0.079</td><td>0.292</td></tr>
<tr><td>Marker</td><td>0.336</td><td>0.556</td><td>0.080</td><td>0.315</td><td>0.530</td><td>0.883</td><td>67.6</td><td>49.2</td><td>0.619</td><td>0.685</td><td>0.114</td><td>0.340</td></tr>
<tr><td>Mathpix</td><td>0.191</td><td>0.365</td><td>0.105</td><td>0.384</td><td>0.306</td><td>0.454</td><td>77.0</td><td>67.1</td><td>0.243</td><td>0.320</td><td>0.108</td><td>0.304</td></tr>
<tr><td>Docling</td><td>0.589</td><td>0.909</td><td>0.416</td><td>0.987</td><td>0.999</td><td>1</td><td>61.3</td><td>25.0</td><td>0.627</td><td>0.810</td><td>0.313</td><td>0.837</td></tr>
<tr><td>Pix2Text</td><td>0.320</td><td>0.528</td><td>0.138</td><td>0.356</td><td>0.276</td><td>0.611</td><td>73.6</td><td>66.2</td><td>0.584</td><td>0.645</td><td>0.281</td><td>0.499</td></tr>
<tr><td>Unstructured</td><td>0.586</td><td>0.716</td><td>0.198</td><td>0.481</td><td>0.999</td><td>1</td><td>0</td><td>0.06</td><td>1</td><td>0.998</td><td>0.145</td><td>0.387</td></tr>
<tr><td>OpenParse</td><td>0.646</td><td>0.814</td><td>0.681</td><td>0.974</td><td>0.996</td><td>1</td><td>64.8</td><td>27.5</td><td>0.284</td><td>0.639</td><td>0.595</td><td>0.641</td></tr>
<tr><td>PPStruct-V3</td><td>0.145</td><td>0.206</td><td>0.058</td><td>0.088</td><td>0.295</td><td>0.535</td><td>-</td><td>-</td><td>0.159</td><td>0.109</td><td>0.069</td><td>0.091</td></tr>
<tr><td rowspan="9"><strong>Expert<br>VLMs</strong></td><td>GOT-OCR</td><td>0.287</td><td>0.411</td><td>0.189</td><td>0.315</td><td>0.360</td><td>0.528</td><td>53.2</td><td>47.2</td><td>0.459</td><td>0.520</td><td>0.141</td><td>0.280</td></tr>
<tr><td>Nougat</td><td>0.452</td><td>0.973</td><td>0.365</td><td>0.998</td><td>0.488</td><td>0.941</td><td>39.9</td><td>0</td><td>0.572</td><td>1.000</td><td>0.382</td><td>0.954</td></tr>
<tr><td>Mistral OCR</td><td>0.268</td><td>0.439</td><td>0.072</td><td>0.325</td><td>0.318</td><td>0.495</td><td>75.8</td><td>63.6</td><td>0.600</td><td>0.650</td><td>0.083</td><td>0.284</td></tr>
<tr><td>OLMOCR-sglang</td><td>0.326</td><td>0.469</td><td>0.097</td><td>0.293</td><td>0.455</td><td>0.655</td><td>68.1</td><td>61.3</td><td>0.608</td><td>0.652</td><td>0.145</td><td>0.277</td></tr>
<tr><td>SmolDocling-256M</td><td>0.493</td><td>0.816</td><td>0.262</td><td>0.838</td><td>0.753</td><td>0.997</td><td>44.9</td><td>16.5</td><td>0.729</td><td>0.907</td><td>0.227</td><td>0.522</td></tr>
<tr><td>Dolphin</td><td>0.206</td><td>0.306</td><td>0.107</td><td>0.197</td><td>0.447</td><td>0.580</td><td>77.3</td><td>67.2</td><td>0.180</td><td>0.285</td><td>0.091</td><td>0.162</td></tr>
<tr><td>MinerU 2</td><td>0.139</td><td>0.240</td><td>0.047</td><td>0.109</td><td>0.297</td><td>0.536</td><td>82.5</td><td>79.0</td><td>0.141</td><td>0.195</td><td>0.069</td><td>0.118</td></tr>
<tr><td>OCRFlux</td><td>0.195</td><td>0.281</td><td>0.064</td><td>0.183</td><td>0.379</td><td>0.613</td><td>71.6</td><td>81.3</td><td>0.253</td><td>0.139</td><td>0.086</td><td>0.187</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td>0.138</td><td>0.206</td><td>0.067</td><td>0.107</td><td><strong>0.246</strong></td><td>0.421</td><td>81.5</td><td>87.5</td><td>0.139</td><td>0.111</td><td>0.100</td><td>0.185</td></tr>
<tr><td rowspan="5"><strong>General<br>VLMs</strong></td><td>GPT4o</td><td>0.233</td><td>0.399</td><td>0.144</td><td>0.409</td><td>0.425</td><td>0.606</td><td>72.0</td><td>62.9</td><td>0.234</td><td>0.329</td><td>0.128</td><td>0.251</td></tr>
<tr><td>Qwen2-VL-72B</td><td>0.252</td><td>0.327</td><td>0.096</td><td>0.218</td><td>0.404</td><td>0.487</td><td>76.8</td><td>76.4</td><td>0.387</td><td>0.408</td><td>0.119</td><td>0.193</td></tr>
<tr><td>Qwen2.5-VL-72B</td><td>0.214</td><td>0.261</td><td>0.092</td><td>0.18</td><td>0.315</td><td>0.434</td><td>82.9</td><td>83.9</td><td>0.341</td><td>0.262</td><td>0.106</td><td>0.168</td></tr>
<tr><td>Gemini2.5-Pro</td><td>0.148</td><td>0.212</td><td>0.055</td><td>0.168</td><td>0.356</td><td>0.439</td><td>85.8</td><td>86.4</td><td>0.13</td><td>0.119</td><td>0.049</td><td>0.121</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.140</td><td>0.162</td><td>0.043</td><td>0.085</td><td>0.295</td><td><strong>0.384</strong></td><td>83.3</td><td><strong>89.3</strong></td><td>0.165</td><td><strong>0.085</strong></td><td>0.058</td><td>0.094</td></tr>
<tr><td rowspan="1"><strong>Expert VLMs</strong></td><td><strong>dots.ocr</strong></td><td><strong>0.125</strong></td><td><strong>0.160</strong></td><td><strong>0.032</strong></td><td><strong>0.066</strong></td><td>0.329</td><td>0.416</td><td><strong>88.6</strong></td><td>89.0</td><td><strong>0.099</strong></td><td>0.092</td><td><strong>0.040</strong></td><td><strong>0.067</strong></td></tr>
</tbody>
</table>


#### The end-to-end text recognition performance across 9 PDF page types.

<table>
<thead>
<tr><th><strong>Model<br>Type</strong></th><th><strong>Models</strong></th><th><strong>Book</strong></th><th><strong>Slides</strong></th><th><strong>Financial<br>Report</strong></th><th><strong>Textbook</strong></th><th><strong>Exam<br>Paper</strong></th><th><strong>Magazine</strong></th><th><strong>Academic<br>Papers</strong></th><th><strong>Notes</strong></th><th><strong>Newspaper</strong></th><th><strong>Overall</strong></th></tr>
</thead>
<tbody>
<tr><td rowspan="3"><strong>Pipeline<br>Tools</strong></td><td>MinerU</td><td>0.055</td><td>0.124</td><td><u>0.033</u></td><td>0.102</td><td>0.159</td><td><strong>0.072</strong></td><td><u>0.025</u></td><td>0.984</td><td>0.171</td><td>0.206</td></tr>
<tr><td>Marker</td><td>0.074</td><td>0.340</td><td>0.089</td><td>0.319</td><td>0.452</td><td>0.153</td><td>0.059</td><td>0.651</td><td>0.192</td><td>0.274</td></tr>
<tr><td>Mathpix</td><td>0.131</td><td>0.220</td><td>0.202</td><td>0.216</td><td>0.278</td><td>0.147</td><td>0.091</td><td>0.634</td><td>0.690</td><td>0.300</td></tr>
<tr><td rowspan="5"><strong>Expert<br>VLMs</strong></td><td>GOT-OCR</td><td>0.111</td><td>0.222</td><td>0.067</td><td>0.132</td><td>0.204</td><td>0.198</td><td>0.179</td><td>0.388</td><td>0.771</td><td>0.267</td></tr>
<tr><td>Nougat</td><td>0.734</td><td>0.958</td><td>1.000</td><td>0.820</td><td>0.930</td><td>0.830</td><td>0.214</td><td>0.991</td><td>0.871</td><td>0.806</td></tr>
<tr><td>Dolphin</td><td>0.091</td><td>0.131</td><td>0.057</td><td>0.146</td><td>0.231</td><td>0.121</td><td>0.074</td><td>0.363</td><td>0.307</td><td>0.177</td></tr>
<tr><td>OCRFlux</td><td>0.068</td><td>0.125</td><td>0.092</td><td>0.102</td><td>0.119</td><td>0.083</td><td>0.047</td><td>0.223</td><td>0.536</td><td>0.149</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td>0.084</td><td>0.129</td><td>0.060</td><td>0.090</td><td>0.107</td><td>0.073</td><td>0.050</td><td>0.171</td><td>0.107</td><td>0.100</td></tr>
<tr><td rowspan="4"><strong>General<br>VLMs</strong></td><td>GPT4o</td><td>0.157</td><td>0.163</td><td>0.348</td><td>0.187</td><td>0.281</td><td>0.173</td><td>0.146</td><td>0.607</td><td>0.751</td><td>0.316</td></tr>
<tr><td>Qwen2.5-VL-7B</td><td>0.148</td><td>0.053</td><td>0.111</td><td>0.137</td><td>0.189</td><td>0.117</td><td>0.134</td><td>0.204</td><td>0.706</td><td>0.205</td></tr>
<tr><td>InternVL3-8B</td><td>0.163</td><td>0.056</td><td>0.107</td><td>0.109</td><td>0.129</td><td>0.100</td><td>0.159</td><td>0.150</td><td>0.681</td><td>0.188</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.048</td><td>0.048</td><td>0.024</td><td><strong>0.062</strong></td><td>0.085</td><td>0.051</td><td>0.039</td><td><strong>0.096</strong></td><td>0.181</td><td>0.073</td></tr>
<tr><td rowspan="1"><strong>Expert VLMs</strong></td><td><strong>dots.ocr</strong></td><td><strong>0.031</strong></td><td><strong>0.047</strong></td><td><strong>0.011</strong></td><td>0.082</td><td><strong>0.079</strong></td><td><strong>0.028</strong></td><td><strong>0.029</strong></td><td>0.109</td><td><strong>0.056</strong></td><td><strong>0.055</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells in the result markdown.
> - We use the tikz_preprocess pipeline to upsample the images to 200 DPI.


### 2. **dots.ocr-bench**

This is an in-house benchmark which contains 1493 PDF images covering 100 languages.

#### The end-to-end evaluation results of different tasks.

<table>
<thead>
<tr><th rowspan="1"><strong>Methods</strong></th><th colspan="1"><strong>Overall<sup>Edit</sup>↓</strong></th><th colspan="1"><strong>Text<sup>Edit</sup>↓</strong></th><th colspan="1"><strong>Formula<sup>Edit</sup>↓</strong></th><th colspan="1"><strong>Table<sup>TEDS</sup>↑</strong></th><th colspan="1"><strong>Table<sup>Edit</sup>↓</strong></th><th colspan="1"><strong>Read Order<sup>Edit</sup>↓</strong></th></tr>
</thead>
<tbody>
<tr><td>MonkeyOCR-3B</td><td>0.483</td><td>0.445</td><td>0.627</td><td>50.93</td><td>0.452</td><td>0.409</td></tr>
<tr><td>doubao-1-5-thinking-vision-pro-250428</td><td>0.291</td><td>0.226</td><td>0.440</td><td>71.2</td><td>0.260</td><td>0.238</td></tr>
<tr><td>doubao-1-6</td><td>0.299</td><td>0.270</td><td>0.417</td><td>71.0</td><td>0.258</td><td>0.253</td></tr>
<tr><td>Gemini2.5-Pro</td><td>0.251</td><td>0.163</td><td>0.402</td><td>77.1</td><td>0.236</td><td>0.202</td></tr>
<tr><td><strong>dots.ocr</strong></td><td><strong>0.177</strong></td><td><strong>0.075</strong></td><td><strong>0.297</strong></td><td><strong>79.2</strong></td><td><strong>0.186</strong></td><td><strong>0.152</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - We use the same metric calculation pipeline as [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
> - We delete the Page-header and Page-footer cells in the result markdown.

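The Edit↓ columns above are normalized edit distances between predicted and reference markdown, as computed by the OmniDocBench pipeline. A minimal illustration of the underlying metric (the benchmark applies its own tokenization and normalization on top, so this is a sketch of the idea, not the evaluator itself):

```python
def normalized_edit_distance(pred: str, ref: str) -> float:
    """Levenshtein distance divided by the longer string's length,
    so 0.0 means identical text and 1.0 means completely different."""
    m, n = len(pred), len(ref)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, n)
```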
#### Layout Detection

<table>
<thead>
<tr><th rowspan="2"><strong>Method</strong></th><th colspan="5" style="text-align: center;"><strong>F1@IoU=.50:.05:.95↑</strong></th><th colspan="5" style="text-align: center;"><strong>F1@IoU=.50↑</strong></th></tr>
<tr><th>Overall</th><th>Text</th><th>Formula</th><th>Table</th><th>Picture</th><th>Overall</th><th>Text</th><th>Formula</th><th>Table</th><th>Picture</th></tr>
</thead>
<tbody>
<tr><td>DocLayout-YOLO-DocStructBench</td><td>0.733</td><td>0.694</td><td>0.480</td><td>0.803</td><td>0.619</td><td>0.806</td><td>0.779</td><td>0.620</td><td>0.858</td><td>0.678</td></tr>
<tr><td>dots.ocr-parse all</td><td>0.831</td><td>0.801</td><td>0.654</td><td>0.838</td><td>0.748</td><td>0.922</td><td>0.909</td><td>0.770</td><td>0.888</td><td>0.831</td></tr>
<tr><td><strong>dots.ocr-detection only</strong></td><td><strong>0.845</strong></td><td><strong>0.816</strong></td><td><strong>0.716</strong></td><td><strong>0.875</strong></td><td><strong>0.765</strong></td><td><strong>0.930</strong></td><td><strong>0.917</strong></td><td><strong>0.832</strong></td><td><strong>0.918</strong></td><td><strong>0.843</strong></td></tr>
</tbody>
</table>

> **Notes:**
> - prompt_layout_all_en for **parse all**, prompt_layout_only_en for **detection only**; please refer to [prompts](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py).

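The F1@IoU scores above count a predicted box as correct when its overlap with a ground-truth box exceeds the given IoU threshold. A minimal sketch of the IoU computation for `[x1, y1, x2, y2]` boxes (the matching criterion only, not the benchmark's full evaluator):

```python
def box_iou(a, b):
    """Intersection-over-Union of two boxes in [x1, y1, x2, y2] pixel
    coordinates. At threshold .50, a prediction matches a ground-truth
    box when box_iou(pred, gt) >= 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```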
### 3. olmOCR-bench

<table>
<thead>
<tr><th>Model</th><th>ArXiv</th><th>Old Scans<br>Math</th><th>Tables</th><th>Old Scans</th><th>Headers and<br>Footers</th><th>Multi<br>column</th><th>Long Tiny<br>Text</th><th>Base</th><th>Overall</th></tr>
</thead>
<tbody>
<tr><td>GOT OCR</td><td>52.7</td><td>52.0</td><td>0.2</td><td>22.1</td><td>93.6</td><td>42.0</td><td>29.9</td><td>94.0</td><td>48.3 ± 1.1</td></tr>
<tr><td>Marker</td><td>76.0</td><td>57.9</td><td>57.6</td><td>27.8</td><td>84.9</td><td>72.9</td><td>84.6</td><td>99.1</td><td>70.1 ± 1.1</td></tr>
<tr><td>MinerU</td><td>75.4</td><td>47.4</td><td>60.9</td><td>17.3</td><td><strong>96.6</strong></td><td>59.0</td><td>39.1</td><td>96.6</td><td>61.5 ± 1.1</td></tr>
<tr><td>Mistral OCR</td><td>77.2</td><td>67.5</td><td>60.6</td><td>29.3</td><td>93.6</td><td>71.3</td><td>77.1</td><td>99.4</td><td>72.0 ± 1.1</td></tr>
<tr><td>Nanonets OCR</td><td>67.0</td><td>68.6</td><td>77.7</td><td>39.5</td><td>40.7</td><td>69.9</td><td>53.4</td><td>99.3</td><td>64.5 ± 1.1</td></tr>
<tr><td>GPT-4o<br>(No Anchor)</td><td>51.5</td><td><strong>75.5</strong></td><td>69.1</td><td>40.9</td><td>94.2</td><td>68.9</td><td>54.1</td><td>96.7</td><td>68.9 ± 1.1</td></tr>
<tr><td>GPT-4o<br>(Anchored)</td><td>53.5</td><td>74.5</td><td>70.0</td><td>40.7</td><td>93.8</td><td>69.3</td><td>60.6</td><td>96.8</td><td>69.9 ± 1.1</td></tr>
<tr><td>Gemini Flash 2<br>(No Anchor)</td><td>32.1</td><td>56.3</td><td>61.4</td><td>27.8</td><td>48.0</td><td>58.7</td><td><strong>84.4</strong></td><td>94.0</td><td>57.8 ± 1.1</td></tr>
<tr><td>Gemini Flash 2<br>(Anchored)</td><td>54.5</td><td>56.1</td><td>72.1</td><td>34.2</td><td>64.7</td><td>61.5</td><td>71.5</td><td>95.6</td><td>63.8 ± 1.2</td></tr>
<tr><td>Qwen 2 VL<br>(No Anchor)</td><td>19.7</td><td>31.7</td><td>24.2</td><td>17.1</td><td>88.9</td><td>8.3</td><td>6.8</td><td>55.5</td><td>31.5 ± 0.9</td></tr>
<tr><td>Qwen 2.5 VL<br>(No Anchor)</td><td>63.1</td><td>65.7</td><td>67.3</td><td>38.6</td><td>73.6</td><td>68.3</td><td>49.1</td><td>98.3</td><td>65.5 ± 1.2</td></tr>
<tr><td>olmOCR v0.1.75<br>(No Anchor)</td><td>71.5</td><td>71.4</td><td>71.4</td><td><strong>42.8</strong></td><td>94.1</td><td>77.7</td><td>71.0</td><td>97.8</td><td>74.7 ± 1.1</td></tr>
<tr><td>olmOCR v0.1.75<br>(Anchored)</td><td>74.9</td><td>71.2</td><td>71.0</td><td>42.2</td><td>94.5</td><td>78.3</td><td>73.3</td><td>98.3</td><td>75.5 ± 1.0</td></tr>
<tr><td>MonkeyOCR-pro-3B</td><td><strong>83.8</strong></td><td>68.8</td><td>74.6</td><td>36.1</td><td>91.2</td><td>76.6</td><td>80.1</td><td>95.3</td><td>75.8 ± 1.0</td></tr>
<tr><td><strong>dots.ocr</strong></td><td>82.1</td><td>64.2</td><td><strong>88.3</strong></td><td>40.9</td><td>94.1</td><td><strong>82.4</strong></td><td>81.2</td><td><strong>99.5</strong></td><td><strong>79.1 ± 1.0</strong></td></tr>
</tbody>
</table>


> **Note:**
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [olmocr](https://github.com/allenai/olmocr), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells in the result markdown.



# Quick Start
## 1. Installation
### Install dots.ocr
```shell
conda create -n dots_ocr python=3.12
conda activate dots_ocr

git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr

# Install PyTorch; see https://pytorch.org/get-started/previous-versions/ for your CUDA version
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
pip install -e .
```

If you have trouble with the installation, try our [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) for an easier setup, and follow these steps:
```shell
git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr
pip install -e .
```
+
1022
+
1023
+ ### Download Model Weights
1024
+ > 💡**Note:** Please use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`) for the model save path. This is a temporary workaround pending our integration with Transformers.
1025
+ ```shell
1026
+ python3 tools/download_model.py
1027
+
1028
+ # with modelscope
1029
+ python3 tools/download_model.py --type modelscope
1030
+ ```
1031
+
1032
+
1033
## 2. Deployment
### vLLM inference
We highly recommend using vLLM for deployment and inference. All of our evaluation results are based on vLLM version 0.9.1.
The [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) is based on the official vLLM image. You can also follow the [Dockerfile](https://github.com/rednote-hilab/dots.ocr/blob/master/docker/Dockerfile) to build the deployment environment yourself.

```shell
# Register the model with vLLM first
python3 tools/download_model.py
export hf_model_path=./weights/DotsOCR  # Path to your downloaded model weights. Use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`); this is a temporary workaround pending our integration with Transformers.
export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
from DotsOCR import modeling_dots_ocr_vllm' `which vllm`  # If you downloaded the model weights yourself, replace `DotsOCR` with your model directory name (again, use a name without periods)

# Launch the vLLM server
CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code

# If you get "ModuleNotFoundError: No module named 'DotsOCR'", check the note above on the saved model directory name.

# vLLM API demo
python3 ./demo/demo_vllm.py --prompt_mode prompt_layout_all_en
```
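The launched server speaks an OpenAI-compatible chat API, so any HTTP client can query it; `demo/demo_vllm.py` is the reference client. As a minimal sketch, the payload can be built like this (the port `8000` default, the served model name `model` from the launch command above, and the base64 image-URL shape follow common vLLM conventions; adjust them to your setup):

```python
import base64

def build_chat_request(image_bytes: bytes, prompt: str, model: str = "model") -> dict:
    """Build an OpenAI-style chat payload with a base64-encoded image."""
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                {"type": "text", "text": prompt},
            ],
        }],
    }

# Send with any HTTP client, e.g. (assumes the server above is running locally):
#   import json, urllib.request
#   body = json.dumps(build_chat_request(open("demo/demo_image1.jpg", "rb").read(),
#                                        "Please output the text in this image.")).encode()
#   req = urllib.request.Request("http://localhost:8000/v1/chat/completions",
#                                data=body, headers={"Content-Type": "application/json"})
#   print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```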

### Hugging Face inference
```shell
python3 demo/demo_hf.py
```

<details>
<summary><b>Hugging Face inference details</b></summary>

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "./weights/DotsOCR"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

image_path = "demo/demo_image1.jpg"
prompt = """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.

1. Bbox format: [x1, y1, x2, y2]

2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].

3. Text Extraction & Formatting Rules:
- Picture: For the 'Picture' category, the text field should be omitted.
- Formula: Format its text as LaTeX.
- Table: Format its text as HTML.
- All Others (Text, Title, etc.): Format their text as Markdown.

4. Constraints:
- The output text must be the original text from the image, with no translation.
- All layout elements must be sorted according to human reading order.

5. Final Output: The entire output must be a single JSON object.
"""

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": image_path
            },
            {"type": "text", "text": prompt}
        ]
    }
]

# Prepare inputs for inference
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)

inputs = inputs.to("cuda")

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=24000)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

</details>

### Hugging Face inference on CPU
Please refer to [CPU inference](https://github.com/rednote-hilab/dots.ocr/issues/1#issuecomment-3148962536).


## 3. Document Parse
**Based on the vLLM server**, you can parse an image or a PDF file with the following commands:
```bash

# Parse all layout info, both detection and recognition
# Parse a single image
python3 dots_ocr/parser.py demo/demo_image1.jpg
# Parse a single PDF
python3 dots_ocr/parser.py demo/demo_pdf1.pdf --num_thread 64  # use more threads for PDFs with many pages

# Layout detection only
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_layout_only_en

# Parse text only, excluding Page-header and Page-footer
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_ocr

# Parse layout info within a given bbox
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_grounding_ocr --bbox 163 241 1536 705

```
**Based on Transformers**, you can parse an image or a PDF file with the same commands as above; just add `--use_hf true`.

> Note: Transformers inference is slower than vLLM. If you want to use the scripts in `demo/` with Transformers, pass `use_hf=True` to the parser, i.e. `DotsOCRParser(..., use_hf=True)`.

<details>
<summary><b>Output Results</b></summary>

1. **Structured Layout Data** (`demo_image1.json`): A JSON file containing the detected layout elements, including their bounding boxes, categories, and extracted text.
2. **Processed Markdown File** (`demo_image1.md`): A Markdown file generated from the concatenated text of all detected cells.
    * An additional version, `demo_image1_nohf.md`, is also provided; it excludes page headers and footers for compatibility with benchmarks like OmniDocBench and olmOCR-bench.
3. **Layout Visualization** (`demo_image1.jpg`): The original image with the detected layout bounding boxes drawn on it.

</details>
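The structured JSON output can be post-processed directly. As an illustrative sketch, the following rebuilds a markdown string from the layout cells while skipping page headers and footers, mirroring the `*_nohf.md` output; the exact field names (`category`, `text`) are an assumption based on the output description above, so check your own JSON file:

```python
# Hypothetical post-processing of the layout JSON (e.g. demo_image1.json).
# Assumes each cell is a dict with "category" and (except for pictures) "text".
SKIP_CATEGORIES = {"Page-header", "Page-footer"}

def cells_to_markdown(cells: list[dict]) -> str:
    """Concatenate cell text in the given (reading) order, skipping headers/footers."""
    parts = []
    for cell in cells:
        if cell.get("category") in SKIP_CATEGORIES:
            continue
        text = cell.get("text")
        if text:  # 'Picture' cells carry no text field
            parts.append(text)
    return "\n\n".join(parts)

# Usage (path is illustrative):
#   import json
#   with open("output/demo_image1/demo_image1.json") as f:
#       print(cells_to_markdown(json.load(f)))
```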

## 4. Demo
You can run the demo with the following command, or try it directly at the [live demo](https://dotsocr.xiaohongshu.com/):
```bash
python demo/demo_gradio.py
```

We also provide a demo for grounding OCR:
```bash
python demo/demo_gradio_annotion.py
```


### Example for formula document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula1.png" alt="formula1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula2.png" alt="formula2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula3.png" alt="formula3.png" border="0" />

### Example for table document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table1.png" alt="table1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table2.png" alt="table2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table3.png" alt="table3.png" border="0" />

### Example for multilingual document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/Tibetan.png" alt="Tibetan.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/tradition_zh.png" alt="tradition_zh.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/nl.png" alt="nl.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/kannada.png" alt="kannada.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/russian.png" alt="russian.png" border="0" />

### Example for reading order
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/reading_order.png" alt="reading_order.png" border="0" />

### Example for grounding OCR
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/grounding.png" alt="grounding.png" border="0" />

## Acknowledgments
We would like to thank [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [aimv2](https://github.com/apple/ml-aim), [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and [PyMuPDF](https://github.com/pymupdf/PyMuPDF) for providing code and models.

We also thank [DocLayNet](https://github.com/DS4SD/DocLayNet), [M6Doc](https://github.com/HCIILAB/M6Doc), [CDLA](https://github.com/buptlihang/CDLA), and [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery) for providing valuable datasets.

## Limitations & Future Work

- **Complex Document Elements:**
  - **Table & Formula**: dots.ocr does not yet handle highly complex tables and formulas perfectly.
  - **Picture**: Pictures in documents are currently not parsed.

- **Parsing Failures:** The model may fail to parse under certain conditions:
  - When the character-to-pixel ratio is excessively high. Try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended). Note, however, that the model performs best on images with a resolution under 11289600 pixels.
  - Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py)).

- **Performance Bottleneck:** Despite its 1.7B-parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
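The resolution guidance in the parsing-failure notes above (enlarge small text, but stay under 11289600 total pixels) can be checked programmatically. A small sketch, where the pixel cap comes from the note and the function name is ours:

```python
# Scale an image's dimensions down uniformly so it fits the model's
# ~11,289,600-pixel budget noted above, preserving the aspect ratio.
MAX_PIXELS = 11_289_600

def fit_to_pixel_budget(width: int, height: int, max_pixels: int = MAX_PIXELS) -> tuple[int, int]:
    """Return (width, height) scaled so that width * height <= max_pixels."""
    if width * height <= max_pixels:
        return width, height
    scale = (max_pixels / (width * height)) ** 0.5
    return max(1, int(width * scale)), max(1, int(height * scale))

# Example: an A4 page (8.27 x 11.69 in) rendered at the recommended 200 DPI
# is 1654 x 2338, roughly 3.9M pixels, comfortably within budget.
```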

We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on vision-language models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of pictures in documents** is also a key priority for our future work.
We believe collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and interested in contributing to these future endeavors, we would love to hear from you. Please reach out via email at: [[email protected]].