---
library_name: transformers
license: apache-2.0
datasets:
  - aimagelab/ReT-M2KR
base_model:
  - openai/clip-vit-large-patch14
  - colbert-ir/colbertv2.0
pipeline_tag: visual-document-retrieval
---

# Model Card: ReT-2

Official implementation of ReT-2, presented in the paper [Recurrence Meets Transformers for Universal Multimodal Retrieval](https://arxiv.org/abs/2509.08897).

This model features a visual backbone based on [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) and a textual backbone based on [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0).
Both backbones have been fine-tuned on the M2KR dataset.
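
The snippet below is a minimal loading sketch with the Transformers library. It assumes the checkpoint ships custom modeling and processing code that can be loaded with `trust_remote_code=True`; the repository ID `aimagelab/ReT2-CLIP-ViT-L-M2KR`, the processor arguments, and the file `query_page.png` are illustrative placeholders rather than the official interface, so check the project repository for the exact usage.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Placeholder repository ID: substitute the actual ReT-2 checkpoint name.
model_id = "aimagelab/ReT2-CLIP-ViT-L-M2KR"

# trust_remote_code=True lets transformers load the custom ReT-2 modeling
# and processing code shipped with the checkpoint (if provided).
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Encode a multimodal query (image + text). Argument names and the output
# format depend on the custom code, so treat this as a schematic example.
image = Image.open("query_page.png")
inputs = processor(images=image, text="solar panel installation manual", return_tensors="pt")
with torch.no_grad():
    query_embedding = model(**inputs)
```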

## Model Sources

## Training Data

[aimagelab/ReT-M2KR](https://huggingface.co/datasets/aimagelab/ReT-M2KR)
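
The fine-tuning data can be pulled directly from the Hub with the `datasets` library. This is a rough sketch: the split name below is an assumption, and the dataset may expose multiple configurations, so consult the dataset card for the exact names.

```python
from datasets import load_dataset

# Load the ReT-M2KR data used to fine-tune ReT-2.
# The split name (and any configuration name the dataset may require)
# is an assumption; see the dataset card for the actual values.
m2kr = load_dataset("aimagelab/ReT-M2KR", split="train")
print(m2kr)
```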

## Citation

```bibtex
@article{caffagni2025recurrencemeetstransformers,
  title={{Recurrence Meets Transformers for Universal Multimodal Retrieval}},
  author={Davide Caffagni and Sara Sarto and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara},
  journal={arXiv preprint arXiv:2509.08897},
  year={2025}
}
```