---
tags:
- ocr
- arabic
- document-understanding
- structure-preservation
- computer-vision
pretty_name: Misraj-DocOCR
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: uuid
    dtype: string
  - name: markdown
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 541115359
    num_examples: 400
  download_size: 537036141
  dataset_size: 541115359
---
# Misraj-DocOCR: An Arabic Document OCR Benchmark 📄
**Dataset:** [Misraj/Misraj-DocOCR](https://huggingface.co/datasets/Misraj/Misraj-DocOCR)
**Domain:** Arabic Document OCR (text + structure)
**Size:** 400 expertly verified pages (real + synthetic)
**Use cases:** OCR, Document Understanding, Markdown/HTML structure preservation
**Status:** Public 🤝
## ✨ Overview
**Misraj-DocOCR** is a curated, expert-verified benchmark for **Arabic document OCR** with an emphasis on **structure preservation** (Markdown/HTML tables, lists, footnotes, math, watermarks, multi-column, marginalia, etc.). Each page includes high-quality ground truth designed to evaluate both **text fidelity** and **layout/structure fidelity**.
- **Diverse content:** books, reports, forms, scholarly pages, and complex layouts.
- **Expert-verified ground truth:** human-reviewed for text **and** structure.
- **Open & reproducible:** intended for fair comparisons and reliable benchmarking.
---
## 📦 Data format
Each example includes:
- `uuid`: unique identifier of the sample
- `image`: page image (PIL-compatible)
- `markdown`: target transcription with structure preserved
### 🔌 Loading
```python
from datasets import load_dataset

ds = load_dataset("Misraj/Misraj-DocOCR")
split = ds["train"]  # the benchmark ships a single train split

ex = split[0]
img = ex["image"]     # PIL.Image of the page
gt = ex["markdown"]   # ground-truth transcription with structure
print(gt[:400])
# img.show()  # uncomment in a local environment
```
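To benchmark a model against the ground truth, iterate over the split and pair each page image with its reference markdown. The sketch below is illustrative only: `run_ocr` is a hypothetical placeholder for whatever OCR system you evaluate, not part of the dataset or any library.
```python
from datasets import load_dataset

def run_ocr(image):
    """Hypothetical placeholder: replace with your OCR model's inference call."""
    return ""

split = load_dataset("Misraj/Misraj-DocOCR", split="train")

# Collect model outputs and reference transcriptions for scoring.
predictions = [run_ocr(ex["image"]) for ex in split]
references = [ex["markdown"] for ex in split]
```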
---
## 🧪 Metrics
We report both **text** and **structure** metrics (a minimal scoring sketch is shown below):
* **Text:** WER ↓, CER ↓, BLEU ↑, ChrF ↑
* **Structure:** **TEDS ↑**, **MARS ↑** (Markdown/HTML structure fidelity)
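A minimal sketch for the text metrics, assuming the open-source `jiwer` and `sacrebleu` packages are installed; TEDS and MARS require dedicated structure-aware scorers and are not shown here.
```python
import jiwer       # pip install jiwer
import sacrebleu   # pip install sacrebleu

reference = "النص المرجعي للصفحة"   # ground-truth markdown for one page
hypothesis = "النص المتوقع للصفحة"  # model output for the same page

wer = jiwer.wer(reference, hypothesis)                     # lower is better
cer = jiwer.cer(reference, hypothesis)                     # lower is better
bleu = sacrebleu.corpus_bleu([hypothesis], [[reference]])  # higher is better
chrf = sacrebleu.corpus_chrf([hypothesis], [[reference]])  # higher is better

print(f"WER={wer:.2f} CER={cer:.2f} BLEU={bleu.score:.2f} ChrF={chrf.score:.2f}")
```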
---
## 🏆 Leaderboard (Misraj-DocOCR)
Best values are **bold**, second-best are <u>underlined</u>.
| Model | WER ↓ | CER ↓ | BLEU ↑ | ChrF ↑ | TEDS ↑ | MARS ↑ |
| ----------------------------- | ---------: | ---------: | ----------: | ----------: | -------: | -----------: |
| **Baseer (ours)** | **0.25** | 0.53 | <u>76.18</u> | <u>87.77</u> | **66** | **76.885** |
| Gemini-2.5-pro | <u>0.37</u> | <u>0.31</u> | **77.92** | **89.55** | <u>52</u> | <u>70.775</u> |
| Azure AI Document Intelligence | 0.44 | **0.27** | 62.04 | 82.49 | 42 | 62.245 |
| Dots.ocr | 0.50 | 0.40 | 58.16 | 78.41 | 40 | 59.205 |
| Nanonets | 0.71 | 0.55 | 42.22 | 67.89 | 37 | 52.445 |
| Qari | 0.76 | 0.64 | 38.59 | 64.50 | 21 | 42.750 |
| Qwen2.5-VL-32B | 0.76 | 0.59 | 37.62 | 62.64 | 41 | 51.820 |
| GPT-5 | 0.86 | 0.62 | 40.67 | 61.6 | 48 | 54.8 |
| Qwen2.5-VL-3B-Instruct | 0.87 | 0.71 | 25.39 | 53.42 | 27 | 40.210 |
| Qwen2.5-VL-7B | 0.92 | 0.77 | 31.57 | 54.70 | 27 | 40.850 |
| Gemma3-12B | 0.96 | 0.80 | 19.75 | 44.53 | 33 | 38.765 |
| Gemma3-4B | 1.01 | 0.85 | 9.57 | 31.39 | 28 | 29.695 |
| GPT-4o-mini | 1.36 | 1.10 | 22.63 | 47.04 | 26 | 36.52 |
| AIN | 1.23 | 1.11 | 1.25 | 2.24 | 21 | 11.620 |
| Aya-vision | 1.41 | 1.07 | 2.91 | 9.81 | 26 | 17.905 |
**Highlights:**
* **Baseer (ours)** leads on **WER**, **TEDS**, and **MARS** → strong text & structure fidelity.
* **Gemini-2.5-pro** tops **BLEU/ChrF**; **Azure AI Document Intelligence** attains the lowest **CER**.
---
## 📚 How to cite
If you use **Misraj-DocOCR**, please cite:
```bibtex
@misc{hennara2025baseervisionlanguagemodelarabic,
title={Baseer: A Vision-Language Model for Arabic Document-to-Markdown OCR},
author={Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Ahmad Bastati and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan},
year={2025},
eprint={2509.18174},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.18174},
}
```