Overview
The MMDocIR/MMDocIR_Retrievers Hugging Face repository hosts all retriever checkpoints needed for MMDocIR, collected in one place so they can be downloaded together.
🛠️Retriever Checkpoints
The available retrievers are as follows:
- BGE: bge-large-en-v1.5, cloned from BAAI/bge-large-en-v1.5.
- ColBERT: colbertv2.0, cloned from colbert-ir/colbertv2.0.
- E5: e5-large-v2, cloned from intfloat/e5-large-v2.
- Contriever: contriever-msmarco, cloned from facebook/contriever-msmarco.
- DPR:
  - question encoder: dpr-question_encoder-multiset-base, cloned from facebook/dpr-question_encoder-multiset-base.
  - passage encoder: dpr-ctx_encoder-multiset-base, cloned from facebook/dpr-ctx_encoder-multiset-base.
- ColPali:
  - retriever adapter: colpali-v1.1, cloned from vidore/colpali-v1.1.
  - retriever base VLM: colpaligemma-3b-mix-448-base, cloned from vidore/colpaligemma-3b-mix-448-base.
- ColQwen:
  - retriever adapter: colqwen2-v1.0, cloned from vidore/colqwen2-v1.0.
  - retriever base VLM: colqwen2-base, cloned from vidore/colqwen2-base.
- DSE-wikiss: dse-phi3-v1, processed as follows:
  - clone from Tevatron/dse-phi3-v1.0;
  - fix the batch-processing issue following https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/discussions/32/files;
  - change config.json and preprocessor_config.json to point to the .py files in the checkpoint.
- DSE-docmatix: dse-phi3-docmatix-v2, cloned from Tevatron/dse-phi3-docmatix-v2.
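To download all of these checkpoints locally, one option is huggingface_hub's snapshot_download. A minimal sketch; the target directory checkpoint/ is an assumption chosen to match the demo paths below, so adjust it as needed:

```python
from huggingface_hub import snapshot_download

# Fetch every retriever checkpoint in the repo into ./checkpoint,
# e.g. checkpoint/dse-phi3-v1 as used in the vision demo below.
snapshot_download(repo_id="MMDocIR/MMDocIR_Retrievers", local_dir="checkpoint")
```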
Environment
```
python 3.9
torch==2.4.0+cu121
transformers==4.45.0
sentence-transformers==2.2.2  # for BGE, GTE, E5 retrievers
colbert-ai==0.2.21            # for the ColBERT retriever
flash-attn==2.7.4.post1       # for DSE retrievers to run with flash attention
```
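A quick way to verify that the installed versions match the pins above (a sketch; exact version strings can differ by build):

```python
import torch
import transformers

# Compare installed versions against the pins listed above.
print("torch:", torch.__version__)                # expect 2.4.0+cu121
print("transformers:", transformers.__version__)  # expect 4.45.0
print("CUDA available:", torch.cuda.is_available())
```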
How to use these checkpoints
We standardize the code for all retrievers in two Python files:
- For text retrievers: refer to text_wrapper.py.
- For vision retrievers: refer to vision_wrapper.py.
If you want to encode MMDocIR_Evaluation_Dataset with these retrievers, refer to MMDocIR/encode.py and its inference command.
If you want to encode your own queries/pages/layouts with these retrievers, here are some simple demos:
For text retrievers:
```python
from text_wrapper import DPR, BGE, GTE, E5, ColBERTReranker, Contriever

retriever = E5()
query = ['how much protein should a child consume', 'What is the CDC requirements for women?']
passage = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]

# Encode queries and passages, then score every query against every passage.
query_embeds = retriever.embed_queries(query)
passage_embeds = retriever.embed_quotes(passage)
scores = retriever.score(query_embeds, passage_embeds)
print(scores)
```
For image retrievers:
```python
import os

from PIL import Image
from vision_wrapper import DSE, ColQwen2Retriever, ColPaliRetriever

retriever = DSE(model_name="checkpoint/dse-phi3-v1", bs=2)
queries = ['how much protein should a child consume', 'What is the CDC requirements for women?']
prefix = "/home/user/xxx"
images = [
    "0704.0418_1.jpg", "0704.0418_2.jpg", "0704.0418_3.jpg",
    "0705.1104_0.jpg", "0705.1104_1.jpg", "0705.1104_2.jpg",
    "0704.0418_1.jpg", "0704.0418_2.jpg", "0704.0418_3.jpg",
    "0705.1104_0.jpg", "0705.1104_1.jpg", "0705.1104_2.jpg",
]
images = [Image.open(os.path.join(prefix, x)) for x in images]

# Encode queries and page images, then score every query against every page.
q_embeds = retriever.embed_queries(queries)
img_embeds = retriever.embed_quotes(images)
scores = retriever.score(q_embeds, img_embeds)
print(scores)
```
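To turn the score matrix into a per-query ranking, a minimal sketch, assuming scores is a (num_queries, num_pages) torch tensor; check the actual return type of score() in vision_wrapper.py / text_wrapper.py:

```python
import torch

# Rank the top-3 pages for each query from the score matrix.
# Assumes `scores` has shape (num_queries, num_pages).
top_scores, top_idx = torch.topk(scores, k=3, dim=-1)
for qi in range(top_idx.size(0)):
    print(f"query {qi}: pages {top_idx[qi].tolist()}, scores {top_scores[qi].tolist()}")
```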
💾Citation
If you use any datasets or models from this organization in your research, please cite our work as follows:
```bibtex
@misc{dong2025mmdocirbenchmarkingmultimodalretrieval,
  title={MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents},
  author={Kuicai Dong and Yujing Chang and Xin Deik Goh and Dexun Li and Ruiming Tang and Yong Liu},
  year={2025},
  eprint={2501.08828},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2501.08828},
}
```