davanstrien (HF Staff) committed
Commit 975e471 · verified · Parent(s): 5d7bd9c

Upload 2 files

Files changed (2):
  1. README.md (+98)
  2. main.py (+292)
README.md ADDED
@@ -0,0 +1,98 @@
# OCR to Markdown with Nanonets

Convert document images to structured markdown using [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) with vLLM acceleration.

## Quick Start

```bash
# Basic OCR conversion
uv run main.py document-images markdown-output

# With a custom image column
uv run main.py scanned-docs extracted-text --image-column page

# Test with a subset
uv run main.py large-dataset test-output --max-samples 100

# Run directly from the Hub
uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
    input-dataset output-dataset
```

## Features

Nanonets-OCR-s excels at:

- **LaTeX equations**: Mathematical formulas preserved in LaTeX format
- **Tables**: Complex table structures converted to markdown
- **Document structure**: Headers, lists, and formatting maintained
- **Special elements**: Signatures, watermarks, and checkboxes detected
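
All of these behaviors are elicited by a single instruction. For reference, the sketch below reproduces the default prompt that `main.py` attaches to every image (see `make_ocr_message` in the script):

```python
# Default per-image instruction used by main.py's make_ocr_message()
PROMPT = (
    "Convert this image to markdown. "
    "Include all text, tables, equations, and structure."
)
```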

## HF Jobs Deployment

Deploy on GPU infrastructure:

```bash
hfjobs run \
  --flavor l4x1 \
  --secret HF_TOKEN=$HF_TOKEN \
  ghcr.io/astral-sh/uv:latest \
  /bin/bash -c "
    uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
      your-document-dataset \
      your-markdown-output \
      --batch-size 32 \
      --gpu-memory-utilization 0.8
  "
```

## Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--image-column` | `"image"` | Column containing images |
| `--batch-size` | `8` | Images per batch |
| `--model` | `nanonets/Nanonets-OCR-s` | OCR model to use |
| `--max-model-len` | `8192` | Max model context length |
| `--max-tokens` | `4096` | Max output tokens |
| `--gpu-memory-utilization` | `0.7` | Fraction of GPU memory vLLM may use |
| `--hf-token` | None | Hugging Face API token (falls back to the `HF_TOKEN` env var) |
| `--split` | `"train"` | Dataset split to process |
| `--max-samples` | None | Limit samples (for testing) |
| `--private` | False | Make the output dataset private |
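
For orientation, here is a minimal sketch of how these flags are wired into the vLLM engine inside `main.py` (the values shown are the defaults from the table above):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="nanonets/Nanonets-OCR-s",      # --model
    trust_remote_code=True,
    max_model_len=8192,                   # --max-model-len
    gpu_memory_utilization=0.7,           # --gpu-memory-utilization
    limit_mm_per_prompt={"image": 1},     # one image per prompt
)
sampling_params = SamplingParams(
    temperature=0.0,  # deterministic decoding for OCR
    max_tokens=4096,  # --max-tokens
)
```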

## Examples

### Scientific Papers

```bash
uv run main.py arxiv-papers arxiv-markdown \
  --max-tokens 8192  # Longer output for equations
```

### Scanned Documents

```bash
uv run main.py historical-scans extracted-text \
  --image-column scan \
  --batch-size 4  # Lower batch size for high-res images
```

### Multi-page Documents

```bash
uv run main.py pdf-pages document-text \
  --image-column page_image \
  --batch-size 16
```

## Tips

- **Batch size**: Reduce if you hit out-of-memory (OOM) errors
- **GPU memory**: Raise `--gpu-memory-utilization` for better throughput
- **Max tokens**: Increase for long or equation-heavy documents
- **Testing**: Use `--max-samples` to validate the pipeline before a full run (see the sketch below)
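
After a test run, a quick sanity check might look like the following minimal sketch. It assumes the default `markdown` output column, uses a hypothetical output dataset ID, and relies on the `[OCR FAILED]` placeholder that `main.py` writes for failed batches:

```python
from datasets import load_dataset

# Load the generated dataset (hypothetical repo ID)
ds = load_dataset("your-markdown-output", split="train")
print(ds[0]["markdown"][:500])  # preview the first page

# Count rows where a batch errored and the placeholder was written
failed = sum(md == "[OCR FAILED]" for md in ds["markdown"])
print(f"{failed}/{len(ds)} pages failed OCR")
```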

## Model Details

Nanonets-OCR-s (576M parameters) is optimized for:

- High-quality markdown output
- Complex document understanding
- Efficient GPU inference
- Multi-language support

For more details, see the [model card](https://huggingface.co/nanonets/Nanonets-OCR-s).
main.py ADDED
@@ -0,0 +1,292 @@
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets",
#     "huggingface-hub",
#     "pillow",
#     "vllm",
#     "tqdm",
#     "toolz",
# ]
# ///

"""
Convert document images to markdown using Nanonets-OCR-s with vLLM.

This script processes images through the Nanonets-OCR-s model to extract
text and structure as markdown, ideal for document understanding tasks.
"""

import argparse
import base64
import io
import logging
import os
import sys
from typing import Any, Dict, List, Optional, Union

from datasets import load_dataset
from huggingface_hub import login
from PIL import Image
from toolz import partition_all
from tqdm.auto import tqdm
from vllm import LLM, SamplingParams

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def make_ocr_message(
    image: Union[Image.Image, Dict[str, Any], str],
    prompt: str = "Convert this image to markdown. Include all text, tables, equations, and structure.",
) -> List[Dict]:
    """Create a chat message for OCR processing."""
    # Convert to a PIL Image if needed
    if isinstance(image, Image.Image):
        pil_img = image
    elif isinstance(image, dict) and "bytes" in image:
        pil_img = Image.open(io.BytesIO(image["bytes"]))
    elif isinstance(image, str):
        pil_img = Image.open(image)
    else:
        raise ValueError(f"Unsupported image type: {type(image)}")

    # Encode as a base64 PNG data URI
    buf = io.BytesIO()
    pil_img.save(buf, format="PNG")
    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"

    # Return a single-turn message in vLLM chat format
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": prompt},
            ],
        }
    ]
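
# Example (hypothetical usage): one conversation per page image, e.g.
#
#   msg = make_ocr_message(Image.open("page_001.png"))
#   outputs = llm.chat([msg], sampling_params)
#   print(outputs[0].outputs[0].text)
#
# main() below builds a list of such conversations and sends the whole
# batch to llm.chat() in a single call.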


def main(
    input_dataset: str,
    output_dataset: str,
    image_column: str = "image",
    batch_size: int = 8,
    model: str = "nanonets/Nanonets-OCR-s",
    max_model_len: int = 8192,
    max_tokens: int = 4096,
    gpu_memory_utilization: float = 0.7,
    hf_token: Optional[str] = None,
    split: str = "train",
    max_samples: Optional[int] = None,
    private: bool = False,
):
    """Process images from an HF dataset through the OCR model."""
    # Log in to HF if a token is provided (or set in the environment)
    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    # Load the dataset
    logger.info(f"Loading dataset: {input_dataset}")
    dataset = load_dataset(input_dataset, split=split)

    # Validate the image column
    if image_column not in dataset.column_names:
        raise ValueError(
            f"Column '{image_column}' not found. Available: {dataset.column_names}"
        )

    # Limit samples if requested
    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))
        logger.info(f"Limited to {len(dataset)} samples")

    # Initialize the vLLM engine
    logger.info(f"Initializing vLLM with model: {model}")
    llm = LLM(
        model=model,
        trust_remote_code=True,
        max_model_len=max_model_len,
        gpu_memory_utilization=gpu_memory_utilization,
        limit_mm_per_prompt={"image": 1},  # one image per prompt
    )

    sampling_params = SamplingParams(
        temperature=0.0,  # deterministic decoding for OCR
        max_tokens=max_tokens,
    )

    # Process images in batches to bound memory use
    all_markdown = []

    logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")

    for batch_indices in tqdm(
        partition_all(batch_size, range(len(dataset))),
        total=(len(dataset) + batch_size - 1) // batch_size,
        desc="OCR processing",
    ):
        batch_indices = list(batch_indices)
        batch_images = [dataset[i][image_column] for i in batch_indices]

        try:
            # Build one chat message per image in the batch
            batch_messages = [make_ocr_message(img) for img in batch_images]

            # Run the whole batch through vLLM in one call
            outputs = llm.chat(batch_messages, sampling_params)

            # Collect the generated markdown
            for output in outputs:
                all_markdown.append(output.outputs[0].text.strip())

        except Exception as e:
            logger.error(f"Error processing batch: {e}")
            # Keep row alignment by adding a placeholder per failed image
            all_markdown.extend(["[OCR FAILED]"] * len(batch_images))

    # Add the markdown column to the dataset
    logger.info("Adding markdown column to dataset")
    dataset = dataset.add_column("markdown", all_markdown)

    # Push to the Hub
    logger.info(f"Pushing to {output_dataset}")
    dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)

    logger.info("✅ OCR conversion complete!")
    logger.info(f"Dataset available at: https://huggingface.co/datasets/{output_dataset}")


if __name__ == "__main__":
    # Show example usage if called with no arguments
    if len(sys.argv) == 1:
        print("=" * 80)
        print("Nanonets OCR to Markdown Converter")
        print("=" * 80)
        print("\nThis script converts document images to structured markdown using")
        print("the Nanonets-OCR-s model with vLLM acceleration.")
        print("\nFeatures:")
        print("- LaTeX equation recognition")
        print("- Table extraction and formatting")
        print("- Document structure preservation")
        print("- Signature and watermark detection")
        print("\nExample usage:")
        print("\n1. Basic OCR conversion:")
        print("   uv run main.py document-images markdown-docs")
        print("\n2. With custom settings:")
        print("   uv run main.py scanned-pdfs extracted-text \\")
        print("       --image-column page \\")
        print("       --batch-size 16 \\")
        print("       --gpu-memory-utilization 0.8")
        print("\n3. Running on HF Jobs:")
        print("   hfjobs run \\")
        print("       --flavor l4x1 \\")
        print("       --secret HF_TOKEN=$HF_TOKEN \\")
        print("       ghcr.io/astral-sh/uv:latest \\")
        print('       /bin/bash -c "')
        print("       uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \\\\")
        print("           your-document-dataset \\\\")
        print("           your-markdown-output \\\\")
        print("           --batch-size 32")
        print('       "')
        print("\n" + "=" * 80)
        print("\nFor full help, run: uv run main.py --help")
        sys.exit(0)

    parser = argparse.ArgumentParser(
        description="OCR images to markdown using Nanonets-OCR-s",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Basic usage
  uv run main.py my-images-dataset ocr-results

  # With a specific image column
  uv run main.py documents extracted-text --image-column scan

  # Process a subset for testing
  uv run main.py large-dataset test-output --max-samples 100
""",
    )

    parser.add_argument(
        "input_dataset",
        help="Input dataset ID on the Hugging Face Hub",
    )
    parser.add_argument(
        "output_dataset",
        help="Output dataset ID on the Hugging Face Hub",
    )
    parser.add_argument(
        "--image-column",
        default="image",
        help="Column containing images (default: image)",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=8,
        help="Batch size for processing (default: 8)",
    )
    parser.add_argument(
        "--model",
        default="nanonets/Nanonets-OCR-s",
        help="Model to use (default: nanonets/Nanonets-OCR-s)",
    )
    parser.add_argument(
        "--max-model-len",
        type=int,
        default=8192,
        help="Maximum model context length (default: 8192)",
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=4096,
        help="Maximum tokens to generate (default: 4096)",
    )
    parser.add_argument(
        "--gpu-memory-utilization",
        type=float,
        default=0.7,
        help="Fraction of GPU memory vLLM may use (default: 0.7)",
    )
    parser.add_argument(
        "--hf-token",
        help="Hugging Face API token (defaults to the HF_TOKEN env var)",
    )
    parser.add_argument(
        "--split",
        default="train",
        help="Dataset split to use (default: train)",
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--private",
        action="store_true",
        help="Make the output dataset private",
    )

    args = parser.parse_args()

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        image_column=args.image_column,
        batch_size=args.batch_size,
        model=args.model,
        max_model_len=args.max_model_len,
        max_tokens=args.max_tokens,
        gpu_memory_utilization=args.gpu_memory_utilization,
        hf_token=args.hf_token,
        split=args.split,
        max_samples=args.max_samples,
        private=args.private,
    )