1 Introduction
DIVE-Doc is a vision-language model (VLM) architecture designed as a trade-off between end-to-end lightweight architectures and LVLMs for the DocVQA task.
It processes its inputs end to end, without relying on external tools such as OCR: it takes a document image and a question as input and returns an answer.
- Repository: https://github.com/JayRay5/DIVE-Doc
- Paper [optional]: [More Information Needed]
2 Model Summary
DIVE-Doc is built as a trade-off between end-to-end lightweight architectures and LVLMs. Where the first category pairs a lightweight visual encoder with a lightweight language decoder, and LVLMs pair a large visual encoder with a large decoder, DIVE-Doc combines a small visual encoder with a large decoder in order to balance model size and performance. It is built by distilling the SigLIP-400m visual encoder of PaliGemma into a small hierarchical Swin transformer initialized with the weights of Donut, while reusing the original Gemma decoder. This reduces the visual encoder's parameter count by 80%. The model is then finetuned using LoRA adapters, which have been merged into the base model using merge_and_unload. Trained on the DocVQA dataset for both the distillation and finetuning steps, this strategy allows DIVE-Doc to be competitive with LVLMs while outperforming lightweight architectures.
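For reference, LoRA adapters trained with the PEFT library can be folded back into the base weights with merge_and_unload. The snippet below is a minimal sketch, not the exact training code used for DIVE-Doc; the base model identifier and the adapter directory (dive-doc-lora-adapters) are placeholders.

# Minimal sketch: merging LoRA adapters into the base weights with PEFT.
# The model identifier and adapter path below are placeholders, not the real checkpoints.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id")        # placeholder base model
model = PeftModel.from_pretrained(base, "dive-doc-lora-adapters")   # hypothetical adapter directory
merged = model.merge_and_unload()                                   # fold the LoRA deltas into the base weights
merged.save_pretrained("dive-doc-merged")                           # standalone checkpoint, no PEFT needed at inference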
3 Quick Start
Installation
git clone https://github.com/JayRay5/DIVE-Doc.git
cd DIVE-Doc
conda create -n dive-doc-env python=3.11.5
conda activate dive-doc-env
pip install -r requirements.txt
Inference example using the model repository and gradio
In app.py, set the path variable to "JayRay5/DIVE-Doc-ARD-HRes":
if __name__ == "__main__":
    path = "JayRay5/DIVE-Doc-ARD-HRes"
    app(path)
Then run:
python app.py
This starts a gradio web interface where you can upload a document image, ask a question, and get the model's answer.
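If you prefer to query the running interface from code rather than the browser, the gradio client can call it over HTTP. The snippet below is a sketch that assumes the default local address and that the app exposes a single /predict endpoint taking a document image and a question; check app.py for the actual input signature.

# Sketch: querying the local gradio app programmatically (assumes the default /predict endpoint).
from gradio_client import Client, handle_file

client = Client("http://127.0.0.1:7860")           # default local gradio address
answer = client.predict(
    handle_file("example_document.png"),           # placeholder document image
    "What is the invoice total?",                  # placeholder question
    api_name="/predict",
)
print(answer)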
4 Uses
Direct Use
This model is designed to answer questions about single-page document images and is trained mainly on the DocVQA dataset, which consists of industry documents.
Downstream Use
This model can be finetuned on other document-VQA datasets such as InfographicVQA to improve its performance on infographic documents.
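As a starting point for such fine-tuning, LoRA adapters can be attached to the decoder with the PEFT library. The configuration below is a sketch with illustrative hyperparameters and placeholder target module names; it does not reproduce the exact setup used to train DIVE-Doc.

# Sketch: attaching LoRA adapters for downstream fine-tuning (illustrative values only).
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                  # illustrative rank
    lora_alpha=32,                         # illustrative scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # placeholder: adapt to the decoder's actual module names
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)   # `model` loaded as in the merge example above
peft_model.print_trainable_parameters()           # report how few parameters are trained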
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]