📄 granite-vision-3.2-2b-table2html
Overview
granite-vision-3.2-2b-table2html is a fine-tuned multimodal model based on granite-vision-3.2-2b. It specializes in extracting HTML <table> structures from images of tables.
Intended Use
- 🧾 Input: An image containing a table (e.g., screenshot, scan, or photo).
- 🧪 Output: HTML snippet limited to the `<table>...</table>` content that structurally and semantically represents the table in the image.
Use Cases
- OCR post-processing for tables
- Automatic document parsing
- AI agents generating structured markup from visual input
Training Details
This model was fine-tuned using PEFT with LoRA (Low-Rank Adaptation) to reduce memory footprint and improve training efficiency.
- Training Dataset: `apoidea/pubtabnet-html`
- System Message: "Convert table to HTML (<table> ... </table>)"
- Number of Training Images: 10,000
- Number of Test Images: 250
- Max Sequence Length: 1024
- Gradient Accumulation Steps: 8
- Epochs: 1
- Batch Size: 1 (per device)
- Learning Rate: 3e-4
- Warmup Steps: 10
- Weight Decay: 0.01
- Optimizer: `adamw_torch_fused`
- Precision: bf16
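For reference, here is a minimal sketch of how these hyperparameters might be expressed with `transformers.TrainingArguments`; the exact trainer, output directory, and sequence-length handling are not specified in this card, so treat those parts as assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# The output_dir is hypothetical; the max sequence length (1024) is enforced by
# the tokenizer/data collator rather than by TrainingArguments.
training_args = TrainingArguments(
    output_dir="granite-vision-3.2-2b-table2html",  # hypothetical
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=3e-4,
    warmup_steps=10,
    weight_decay=0.01,
    optim="adamw_torch_fused",
    bf16=True,
)
```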
LoRA Configuration (PEFT)
```python
from peft import LoraConfig

# layers_to_tune is the list of module-name substrings selected for adaptation;
# collect every matching projection ('_proj') layer of the base model.
target_modules = []
for layer_type in layers_to_tune:
    target_modules.extend(
        name for name, _ in model.named_modules()
        if (layer_type in name) and ('_proj' in name)
    )

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=target_modules,
    use_dora=True,
    init_lora_weights="gaussian",
)
```
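Assuming the standard PEFT workflow, the config above would then be applied to the base model roughly as follows (illustrative; not necessarily the exact training script used):

```python
from peft import get_peft_model

# Attach the LoRA/DoRA adapters and report how many parameters are trainable.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```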
Evaluation
- 🧪 Eval Loss: 0.0118
- 🧮 HTML Similarity: 0.770

These metrics indicate that the model not only converged well during training but also performs accurately on semantic table reconstruction tasks.
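The card does not say how the HTML Similarity score is computed. As a rough stand-in (not necessarily the metric used here), a character-level similarity between predicted and reference markup can be sketched with the standard library:

```python
from difflib import SequenceMatcher

def html_similarity(pred_html: str, ref_html: str) -> float:
    # Naive character-level similarity; the actual evaluation metric may be a
    # structure-aware comparison of the parsed tables instead.
    return SequenceMatcher(None, pred_html, ref_html).ratio()

print(html_similarity(
    "<table><tr><td>1</td></tr></table>",
    "<table><tr><td>1</td><td>2</td></tr></table>",
))
```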
Limitations
- ❌ Not designed for full HTML document generation
- ❌ May struggle with highly complex or nested tables
- ⚠️ Requires reasonably clean and well-captured table images
How to Use
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from datasets import load_dataset
from IPython.display import display, HTML
import torch

model_path = "ibm-granite/granite-vision-3.2-2b"
processor = AutoProcessor.from_pretrained(model_path, use_fast=True)
model = AutoModelForVision2Seq.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2"
)

def predict(img):
    # Prepare prompt
    conversation = [
        {
            "role": "system",
            "content": [
                {"type": "text", "text": "Convert table to HTML (<table> ... </table>)"}
            ]
        },
        {
            "role": "user",
            "content": [
                {"type": "image"}
            ],
        },
    ]
    text = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(images=[img], text=text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=1500)
    output = processor.decode(output[0], skip_special_tokens=True)
    # Keep only the assistant's reply (the generated HTML table)
    return output.split('<|assistant|>')[-1].strip()

# Load a sample table image from the validation split
ds = load_dataset('apoidea/pubtabnet-html', streaming=True)['validation']
sample = next(iter(ds))

# Autoregressively complete the prompt
table = predict(sample['image'])
display(HTML(table))
```
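To run the same pipeline on your own screenshot instead of a dataset sample, the image can be loaded with Pillow (the file path below is a placeholder):

```python
from PIL import Image

# Load a local table image; replace the path with your own file.
img = Image.open("table_screenshot.png").convert("RGB")
print(predict(img))
```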
GitHub Repo
Blog Post
👉 Read the full story behind this project: "Fine-Tuning Granite-Vision 2B to Outperform 90B Giants (Table Extraction Task)"
Citation
If you use this model, please cite the work:
```bibtex
@misc{granite2025table2html,
  title={granite-vision-3.2-2b-table2html: Table HTML extraction from images},
  author={Julio Sánchez},
  year={2025},
  howpublished={\url{https://huggingface.co/JulioSnchezD/granite-vision-3.2-2b-table2html}},
}
```