davanstrien (HF Staff) · Claude Opus 4.6 (1M context) committed
Commit dafc1a9 · 1 Parent(s): 6786450

Rename dots-ocr-1.5 → dots-mocr (final release name)


- dots.mocr is the official final name for what was dots.ocr-1.5
- Model now on HF directly: rednote-hilab/dots.mocr (no ModelScope mirror needed)
- Added SVG prompt mode (--prompt-mode svg) with auto temperature/top_p
- SVG-optimized variant: rednote-hilab/dots.mocr-svg
- Updated README with new model entry and detailed docs
- Tested: OCR mode 3/3 on L4, SVG mode 3/3 on L4

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed (2)
  1. README.md +42 -3
  2. dots-ocr-1.5.py → dots-mocr.py +85 -59
README.md CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
- 15 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
@@ -43,7 +43,7 @@ That's it! The script will:
 | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
 | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
 | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
- | `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/davanstrien/dots.ocr-1.5) | 3B | vLLM | 7 prompt modes, layout + bbox, 100+ languages |
 | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
 | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
 | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
 
@@ -392,7 +392,46 @@ Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggi
 - 🔍 **Multi-column Layouts** - Handles complex document structures
 - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`
 
- ### DoTS.ocr (`dots-ocr.py`)
 
 Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
+ 19 OCR scripts covering models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
 | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
 | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
 | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
+ | `dots-mocr.py` | [dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) | 3B | vLLM | 8 prompt modes incl. SVG generation, layout + bbox, 100+ languages |
 | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
 | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
 | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
 
 - 🔍 **Multi-column Layouts** - Handles complex document structures
 - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`
 
+ ### dots.mocr (`dots-mocr.py`) — SVG generation + SOTA OCR
+
+ Advanced multilingual OCR and SVG generation using [rednote-hilab/dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) with 3B parameters:
+
+ - 🌍 **100+ Languages** - Extensive multilingual support
+ - 📝 **Document OCR** - Clean text extraction (default mode)
+ - 📊 **Layout Analysis** - Structured output with bboxes and categories
+ - 📐 **Formula Recognition** - LaTeX format support
+ - 🖼️ **SVG Generation** - Convert charts, UI layouts, figures to editable SVG code
+ - 🔀 **8 Prompt Modes** - OCR, layout-all, layout-only, web-parsing, scene-spotting, grounding-ocr, svg, general
+ - 📄 **[Paper](https://arxiv.org/abs/2603.13032)** - 83.9% on olmOCR-Bench
+
+ **SVG variant:** Use `--model rednote-hilab/dots.mocr-svg` with `--prompt-mode svg` for best SVG results.
+
+ **Quick start:**
+
+ ```bash
+ # Basic OCR
+ hf jobs uv run --flavor l4x1 \
+     -s HF_TOKEN \
+     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
+     your-input-dataset your-output-dataset \
+     --max-samples 100
+
+ # SVG generation from charts/figures
+ hf jobs uv run --flavor l4x1 \
+     -s HF_TOKEN \
+     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
+     your-charts svg-output \
+     --prompt-mode svg --model rednote-hilab/dots.mocr-svg
+
+ # Layout analysis with bounding boxes
+ hf jobs uv run --flavor l4x1 \
+     -s HF_TOKEN \
+     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
+     your-documents layout-output \
+     --prompt-mode layout-all
+ ```
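The layout modes return JSON records with `bbox` in `[x1, y1, x2, y2]` format and a `category` from the list in the prompt template. A minimal sketch of post-processing that output; the sample records and the filtering choices are hypothetical illustrations, not real model output:

```python
import json

# Hypothetical layout-all output for one page: a list of
# {"bbox": [x1, y1, x2, y2], "category": ..., "text": ...} records.
raw = """[
  {"bbox": [48, 40, 560, 88], "category": "Title", "text": "Quarterly Report"},
  {"bbox": [48, 120, 560, 700], "category": "Text", "text": "Revenue grew..."},
  {"bbox": [48, 720, 560, 760], "category": "Page-footer", "text": "1"}
]"""

blocks = json.loads(raw)

# Keep body content only: drop headers/footers, sort by top edge (y1)
# to approximate reading order.
body = [b for b in blocks if b["category"] not in {"Page-header", "Page-footer"}]
body.sort(key=lambda b: b["bbox"][1])
print([b["category"] for b in body])  # ['Title', 'Text']
```

The category names (`Title`, `Text`, `Page-footer`, ...) come straight from the `layout-only` prompt template above, so a filter like this is stable across pages.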
+
+ ### DoTS.ocr v1 (`dots-ocr.py`)
 
 Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:
 
dots-ocr-1.5.py → dots-mocr.py RENAMED
@@ -13,23 +13,27 @@
 # ///
 
 """
- Convert document images to markdown using DoTS.ocr-1.5 with vLLM.
 
- DoTS.ocr-1.5 is a 3B multilingual document parsing model with SOTA performance
- on 100+ languages. Compared to v1 (1.7B), it adds web screen parsing, scene text
- spotting, SVG code generation, and stronger multilingual document parsing.
 
 Features:
 - Multilingual support (100+ languages)
 - Table extraction and formatting
 - Formula recognition
 - Layout-aware text extraction
- - Web screen parsing (NEW in v1.5)
- - Scene text spotting (NEW in v1.5)
- - SVG code generation (requires dots.ocr-1.5-svg variant)
-
- Model: rednote-hilab/dots.ocr-1.5
- vLLM: Officially supported (same DotsOCRForCausalLM architecture as v1)
 """
 
 import argparse
@@ -56,8 +60,8 @@ logger = logging.getLogger(__name__)
 
 
 # ────────────────────────────────────────────────────────────────
- # DoTS OCR 1.5 Prompt Templates (from official dots.ocr repo)
- # Source: https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py
 # ────────────────────────────────────────────────────────────────
 
 PROMPT_TEMPLATES = {
@@ -86,10 +90,13 @@ PROMPT_TEMPLATES = {
 # resized_h, resized_w = smart_resize(orig_h, orig_w)
 # scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
 "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
- # NEW in v1.5:
 "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
 "scene-spotting": """Detect and recognize the text in the image.""",
 "grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
 "general": """ """,
 }
 
@@ -122,6 +129,12 @@ def make_ocr_message(
 # Convert to RGB
 pil_img = pil_img.convert("RGB")
 
 # Convert to base64 data URI
 buf = io.BytesIO()
 pil_img.save(buf, format="PNG")
@@ -159,7 +172,7 @@ def create_dataset_card(
 tags:
 - ocr
 - document-processing
- - dots-ocr-1.5
 - multilingual
 - markdown
 - uv-script
@@ -168,7 +181,7 @@ tags:
 
 # Document OCR using {model_name}
 
- This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DoTS.ocr-1.5, a 3B multilingual model with SOTA document parsing.
 
 ## Processing Details
 
@@ -191,13 +204,14 @@ This dataset contains OCR results from images in [{source_dataset}](https://hugg
 
 ## Model Information
 
- DoTS.ocr-1.5 is a 3B multilingual document parsing model that excels at:
 - 100+ Languages — Multilingual document support
 - Table extraction — Structured data recognition
 - Formulas — Mathematical notation preservation
 - Layout-aware — Reading order and structure preservation
 - Web screen parsing — Webpage layout analysis
 - Scene text spotting — Text detection in natural scenes
 
 ## Dataset Structure
 
@@ -227,10 +241,10 @@ for info in inference_info:
 
 ## Reproduction
 
- This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR 1.5 script:
 
 ```bash
- uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\
 {source_dataset} \\
 <output-dataset> \\
 --image-column {image_column} \\
@@ -250,7 +264,7 @@ def main(
 output_dataset: str,
 image_column: str = "image",
 batch_size: int = 16,
- model: str = "rednote-hilab/dots.ocr-1.5",
 max_model_len: int = 24000,
 max_tokens: int = 24000,
 gpu_memory_utilization: float = 0.9,
@@ -269,7 +283,7 @@
 top_p: float = 0.9,
 verbose: bool = False,
 ):
- """Process images from HF dataset through DoTS.ocr-1.5 model."""
 
 # Check CUDA availability first
 check_cuda_availability()
@@ -320,6 +334,12 @@ def main(
 gpu_memory_utilization=gpu_memory_utilization,
 )
 
 sampling_params = SamplingParams(
     temperature=temperature,
     top_p=top_p,
@@ -335,7 +355,7 @@
 for batch_indices in tqdm(
     partition_all(batch_size, range(len(dataset))),
     total=(len(dataset) + batch_size - 1) // batch_size,
-     desc="DoTS.ocr-1.5 processing",
 ):
     batch_indices = list(batch_indices)
     batch_images = [dataset[i][image_column] for i in batch_indices]
@@ -344,7 +364,7 @@
 # Create messages for batch
 batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
 
- # Process with vLLM (dots.ocr-1.5 needs "string" content format)
 outputs = llm.chat(
     batch_messages,
     sampling_params,
@@ -372,7 +392,7 @@
 # Handle inference_info tracking (for multi-model comparisons)
 inference_entry = {
     "model_id": model,
-     "model_name": "DoTS.ocr-1.5",
     "column_name": output_column,
     "timestamp": datetime.now().isoformat(),
     "prompt_mode": prompt_mode if not custom_prompt else "custom",
@@ -453,7 +473,7 @@
 card = DatasetCard(card_content)
 card.push_to_hub(output_dataset, token=HF_TOKEN)
 
- logger.info("DoTS.ocr-1.5 processing complete!")
 logger.info(
     f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
 )
@@ -475,77 +495,83 @@ if __name__ == "__main__":
 # Show example usage if no arguments
 if len(sys.argv) == 1:
     print("=" * 80)
-     print("DoTS.ocr-1.5 Document Processing")
     print("=" * 80)
-     print("\n3B multilingual OCR model supporting 100+ languages")
     print("\nFeatures:")
     print("- Multilingual support (100+ languages)")
     print("- Fast processing with vLLM")
     print("- Table extraction and formatting")
     print("- Formula recognition")
     print("- Layout-aware text extraction")
-     print("- Web screen parsing (NEW in v1.5)")
-     print("- Scene text spotting (NEW in v1.5)")
     print("\nPrompt modes:")
-     print("  ocr            - Text extraction (default)")
-     print("  layout-all     - Layout + bboxes + text (JSON)")
-     print("  layout-only    - Layout + bboxes only (JSON)")
-     print("  web-parsing    - Webpage layout analysis (JSON)")
     print("  scene-spotting - Scene text detection")
-     print("  grounding-ocr  - Text from bounding box region")
-     print("  general        - Free-form (use with --custom-prompt)")
     print("\nExample usage:")
     print("\n1. Basic OCR:")
-     print("   uv run dots-ocr-1.5.py input-dataset output-dataset")
-     print("\n2. Web screen parsing:")
-     print("   uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing")
-     print("\n3. Scene text spotting:")
-     print("   uv run dots-ocr-1.5.py photos detected --prompt-mode scene-spotting")
     print("\n4. Layout analysis with structure:")
-     print("   uv run dots-ocr-1.5.py papers analyzed --prompt-mode layout-all")
     print("\n5. Running on HF Jobs:")
     print("   hf jobs uv run --flavor l4x1 \\")
     print("     -s HF_TOKEN \\")
     print(
-         "     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\"
     )
     print("   input-dataset output-dataset")
     print("\n" + "=" * 80)
-     print("\nFor full help, run: uv run dots-ocr-1.5.py --help")
     sys.exit(0)
 
 parser = argparse.ArgumentParser(
-     description="Document OCR using DoTS.ocr-1.5 (3B multilingual model)",
     formatter_class=argparse.RawDescriptionHelpFormatter,
     epilog="""
- Prompt Modes (official DoTS.ocr-1.5 prompts):
 ocr            - Simple text extraction (default)
 layout-all     - Layout analysis with bboxes, categories, and text (JSON output)
 layout-only    - Layout detection with bboxes and categories only (JSON output)
- web-parsing    - Webpage layout analysis (JSON output) [NEW in v1.5]
- scene-spotting - Scene text detection and recognition [NEW in v1.5]
- grounding-ocr  - Extract text from bounding box region [NEW in v1.5]
- general        - Free-form QA (use with --custom-prompt) [NEW in v1.5]
 
 SVG Code Generation:
- For SVG output, use --model rednote-hilab/dots.ocr-1.5-svg with:
- --custom-prompt 'Please generate the SVG code based on the image.'
 
 Examples:
 # Basic text OCR (default)
- uv run dots-ocr-1.5.py my-docs analyzed-docs
 
- # Web screen parsing
- uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing
 
- # Scene text spotting
- uv run dots-ocr-1.5.py photos spotted --prompt-mode scene-spotting
 
 # Full layout analysis with structure
- uv run dots-ocr-1.5.py papers structured --prompt-mode layout-all
 
 # Random sampling for testing
- uv run dots-ocr-1.5.py large-dataset test --max-samples 50 --shuffle
 """,
 )
 
@@ -564,8 +590,8 @@ Examples:
 )
 parser.add_argument(
     "--model",
-     default="rednote-hilab/dots.ocr-1.5",
-     help="Model to use (default: rednote-hilab/dots.ocr-1.5)",
 )
 parser.add_argument(
     "--max-model-len",
 # ///
 
 """
+ Convert document images to markdown using dots.mocr with vLLM.
 
+ dots.mocr is a 3B multilingual document parsing model with SOTA performance
+ on 100+ languages. It excels at converting structured graphics (charts, UI
+ layouts, scientific figures) directly into SVG code. Core capabilities include
+ grounding, recognition, semantic understanding, and interactive dialogue.
 
 Features:
 - Multilingual support (100+ languages)
 - Table extraction and formatting
 - Formula recognition
 - Layout-aware text extraction
+ - Web screen parsing
+ - Scene text spotting
+ - SVG code generation (use --prompt-mode svg, or --model rednote-hilab/dots.mocr-svg for best results)
+
+ Model: rednote-hilab/dots.mocr
+ SVG variant: rednote-hilab/dots.mocr-svg
+ vLLM: Officially integrated since v0.11.0
+ GitHub: https://github.com/rednote-hilab/dots.mocr
+ Paper: https://arxiv.org/abs/2603.13032
 """
 
 import argparse
 
 
 # ────────────────────────────────────────────────────────────────
+ # dots.mocr Prompt Templates
+ # Source: https://github.com/rednote-hilab/dots.mocr/blob/master/dots_mocr/utils/prompts.py
 # ────────────────────────────────────────────────────────────────
 
 PROMPT_TEMPLATES = {
 
 # resized_h, resized_w = smart_resize(orig_h, orig_w)
 # scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
 "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
 "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
 "scene-spotting": """Detect and recognize the text in the image.""",
 "grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
+ # SVG code generation — {width} and {height} are replaced with actual image dimensions.
+ # For best results, use --model rednote-hilab/dots.mocr-svg
+ # Uses higher temperature (0.9) and top_p (1.0) per official recommendation.
+ "svg": """Please generate the SVG code based on the image. viewBox="0 0 {width} {height}" """,
 "general": """ """,
 }
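The `smart_resize` comment above notes that layout bboxes come back in the resized coordinate space, with `scale_x = orig_w / resized_w` to map them home. A standalone sketch of that scale-back step; the function name and all sizes below are illustrative, not part of the script:

```python
# Sketch: map a layout bbox from the model's resized space back to
# original image pixels, following the scale factors in the comment above.
def rescale_bbox(bbox, orig_size, resized_size):
    """Scale [x1, y1, x2, y2] from resized space to original pixels."""
    orig_w, orig_h = orig_size
    resized_w, resized_h = resized_size
    sx, sy = orig_w / resized_w, orig_h / resized_h
    x1, y1, x2, y2 = bbox
    return [round(x1 * sx), round(y1 * sy), round(x2 * sx), round(y2 * sy)]

# A 1000x1400 page that the model saw resized to 500x700:
print(rescale_bbox([50, 70, 250, 350], (1000, 1400), (500, 700)))
# [100, 140, 500, 700]
```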
 
 # Convert to RGB
 pil_img = pil_img.convert("RGB")
 
+ # For SVG mode, inject actual image dimensions into the prompt
+ if "{width}" in prompt and "{height}" in prompt:
+     prompt = prompt.replace("{width}", str(pil_img.width)).replace(
+         "{height}", str(pil_img.height)
+     )
+
 # Convert to base64 data URI
 buf = io.BytesIO()
 pil_img.save(buf, format="PNG")
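The `{width}`/`{height}` injection added above can be exercised without PIL. The template string matches the `"svg"` entry in `PROMPT_TEMPLATES`; the 640x480 size stands in for `pil_img.width`/`pil_img.height`:

```python
# Standalone check of the {width}/{height} substitution shown above.
template = 'Please generate the SVG code based on the image. viewBox="0 0 {width} {height}" '

width, height = 640, 480  # illustrative; normally pil_img.width / pil_img.height
prompt = template
if "{width}" in prompt and "{height}" in prompt:
    prompt = prompt.replace("{width}", str(width)).replace("{height}", str(height))

assert 'viewBox="0 0 640 480"' in prompt
assert "{width}" not in prompt and "{height}" not in prompt
```

Because the guard checks for both placeholders, non-SVG prompts pass through `make_ocr_message` untouched.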
 
 tags:
 - ocr
 - document-processing
+ - dots-mocr
 - multilingual
 - markdown
 - uv-script
 
 # Document OCR using {model_name}
 
+ This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using dots.mocr, a 3B multilingual model with SOTA document parsing and SVG generation.
 
 ## Processing Details
 
 
 ## Model Information
 
+ dots.mocr is a 3B multilingual document parsing model that excels at:
 - 100+ Languages — Multilingual document support
 - Table extraction — Structured data recognition
 - Formulas — Mathematical notation preservation
 - Layout-aware — Reading order and structure preservation
 - Web screen parsing — Webpage layout analysis
 - Scene text spotting — Text detection in natural scenes
+ - SVG code generation — Charts, UI layouts, scientific figures to SVG
 
 ## Dataset Structure
 
 
 ## Reproduction
 
+ This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) dots.mocr script:
 
 ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \\
 {source_dataset} \\
 <output-dataset> \\
 --image-column {image_column} \\
 
 output_dataset: str,
 image_column: str = "image",
 batch_size: int = 16,
+ model: str = "rednote-hilab/dots.mocr",
 max_model_len: int = 24000,
 max_tokens: int = 24000,
 gpu_memory_utilization: float = 0.9,
 
 top_p: float = 0.9,
 verbose: bool = False,
 ):
+ """Process images from HF dataset through dots.mocr model."""
 
 # Check CUDA availability first
 check_cuda_availability()
 
 gpu_memory_utilization=gpu_memory_utilization,
 )
 
+ # SVG mode uses higher temperature/top_p per official recommendation
+ if prompt_mode == "svg" and temperature == 0.1 and top_p == 0.9:
+     logger.info("SVG mode: using recommended temperature=0.9, top_p=1.0")
+     temperature = 0.9
+     top_p = 1.0
+
 sampling_params = SamplingParams(
     temperature=temperature,
     top_p=top_p,
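The override added above only fires when the user left both sampling knobs at the script defaults (0.1 / 0.9, per the diff). The guard can be exercised standalone; `resolve_sampling` is a hypothetical name wrapping the same logic:

```python
# Sketch of the SVG-mode sampling override shown above: apply the
# recommended values only when the user kept the defaults (0.1 / 0.9).
def resolve_sampling(prompt_mode, temperature=0.1, top_p=0.9):
    if prompt_mode == "svg" and temperature == 0.1 and top_p == 0.9:
        return 0.9, 1.0  # recommended for SVG generation
    return temperature, top_p

assert resolve_sampling("svg") == (0.9, 1.0)                   # defaults overridden
assert resolve_sampling("svg", temperature=0.5) == (0.5, 0.9)  # explicit choice wins
assert resolve_sampling("ocr") == (0.1, 0.9)                   # non-SVG untouched
```

Comparing against the literal defaults is a simple heuristic: a user who explicitly passes `--temperature 0.1` is indistinguishable from one who passed nothing, which is an accepted trade-off here.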
 
 
 for batch_indices in tqdm(
     partition_all(batch_size, range(len(dataset))),
     total=(len(dataset) + batch_size - 1) // batch_size,
+     desc="dots.mocr processing",
 ):
     batch_indices = list(batch_indices)
     batch_images = [dataset[i][image_column] for i in batch_indices]
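`partition_all` here chunks the index range into fixed-size batches, with a short final batch. A stdlib-only equivalent (the real script imports it from toolz) that also checks the ceiling-division formula used for `total=`:

```python
from itertools import islice

def partition_all(n, seq):
    """Yield tuples of up to n items; stdlib stand-in for toolz.partition_all."""
    it = iter(seq)
    while batch := tuple(islice(it, n)):
        yield batch

batches = list(partition_all(16, range(40)))
assert [len(b) for b in batches] == [16, 16, 8]  # last batch is short
assert (40 + 16 - 1) // 16 == len(batches)       # the tqdm total= formula above
```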
 
 # Create messages for batch
 batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
 
+ # Process with vLLM (dots.mocr needs "string" content format)
 outputs = llm.chat(
     batch_messages,
     sampling_params,
 
 # Handle inference_info tracking (for multi-model comparisons)
 inference_entry = {
     "model_id": model,
+     "model_name": "dots.mocr",
     "column_name": output_column,
     "timestamp": datetime.now().isoformat(),
     "prompt_mode": prompt_mode if not custom_prompt else "custom",
 
 card = DatasetCard(card_content)
 card.push_to_hub(output_dataset, token=HF_TOKEN)
 
+ logger.info("dots.mocr processing complete!")
 logger.info(
     f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
 )
 
 # Show example usage if no arguments
 if len(sys.argv) == 1:
     print("=" * 80)
+     print("dots.mocr Document Processing")
     print("=" * 80)
+     print("\n3B multilingual OCR model with SVG generation")
     print("\nFeatures:")
     print("- Multilingual support (100+ languages)")
     print("- Fast processing with vLLM")
     print("- Table extraction and formatting")
     print("- Formula recognition")
     print("- Layout-aware text extraction")
+     print("- Web screen parsing")
+     print("- Scene text spotting")
+     print("- SVG code generation (charts, UI, figures)")
     print("\nPrompt modes:")
+     print("  ocr            - Text extraction (default)")
+     print("  layout-all     - Layout + bboxes + text (JSON)")
+     print("  layout-only    - Layout + bboxes only (JSON)")
+     print("  web-parsing    - Webpage layout analysis (JSON)")
     print("  scene-spotting - Scene text detection")
+     print("  grounding-ocr  - Text from bounding box region")
+     print("  svg            - SVG code generation")
+     print("  general        - Free-form (use with --custom-prompt)")
     print("\nExample usage:")
     print("\n1. Basic OCR:")
+     print("   uv run dots-mocr.py input-dataset output-dataset")
+     print("\n2. SVG generation:")
+     print(
+         "   uv run dots-mocr.py charts svg-output --prompt-mode svg --model rednote-hilab/dots.mocr-svg"
+     )
+     print("\n3. Web screen parsing:")
+     print("   uv run dots-mocr.py screenshots parsed --prompt-mode web-parsing")
     print("\n4. Layout analysis with structure:")
+     print("   uv run dots-mocr.py papers analyzed --prompt-mode layout-all")
     print("\n5. Running on HF Jobs:")
     print("   hf jobs uv run --flavor l4x1 \\")
     print("     -s HF_TOKEN \\")
     print(
+         "     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \\"
     )
     print("   input-dataset output-dataset")
     print("\n" + "=" * 80)
+     print("\nFor full help, run: uv run dots-mocr.py --help")
     sys.exit(0)
 
 parser = argparse.ArgumentParser(
+     description="Document OCR using dots.mocr (3B multilingual model with SVG generation)",
     formatter_class=argparse.RawDescriptionHelpFormatter,
     epilog="""
+ Prompt Modes (official dots.mocr prompts):
 ocr            - Simple text extraction (default)
 layout-all     - Layout analysis with bboxes, categories, and text (JSON output)
 layout-only    - Layout detection with bboxes and categories only (JSON output)
+ web-parsing    - Webpage layout analysis (JSON output)
+ scene-spotting - Scene text detection and recognition
+ grounding-ocr  - Extract text from bounding box region
+ svg            - SVG code generation (auto-injects image dimensions into viewBox)
+ general        - Free-form QA (use with --custom-prompt)
 
 SVG Code Generation:
+ Use --prompt-mode svg for SVG output. For best results, combine with
+ --model rednote-hilab/dots.mocr-svg (the SVG-optimized variant).
+ SVG mode automatically uses temperature=0.9, top_p=1.0 unless overridden.
 
 Examples:
 # Basic text OCR (default)
+ uv run dots-mocr.py my-docs analyzed-docs
 
+ # SVG generation with optimized variant
+ uv run dots-mocr.py charts svg-out --prompt-mode svg --model rednote-hilab/dots.mocr-svg
 
+ # Web screen parsing
+ uv run dots-mocr.py screenshots parsed --prompt-mode web-parsing
 
 # Full layout analysis with structure
+ uv run dots-mocr.py papers structured --prompt-mode layout-all
 
 # Random sampling for testing
+ uv run dots-mocr.py large-dataset test --max-samples 50 --shuffle
 """,
 )
 
 )
 parser.add_argument(
     "--model",
+     default="rednote-hilab/dots.mocr",
+     help="Model to use (default: rednote-hilab/dots.mocr, or rednote-hilab/dots.mocr-svg for SVG)",
 )
 parser.add_argument(
     "--max-model-len",