Update README.md
- multimodal-retrieval
- embedding-model
---

<h1 align="center">MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval</h1>

<p align="center">
    <a href="https://arxiv.org/abs/2412.14475">
        <img alt="Build" src="http://img.shields.io/badge/cs.CV-arXiv%3A2412.14475-B31B1B.svg">
    </a>
    <a href="https://github.com/VectorSpaceLab/MegaPairs">
        <img alt="Build" src="https://img.shields.io/badge/Github-Code-blue">
    </a>
    <a href="https://huggingface.co/datasets/BAAI/MegaPairs">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-MegaPairs-yellow">
    </a>
</p>

<p align="center">
    <a href="https://huggingface.co/BAAI/BGE-VL-base">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_base-yellow">
    </a>
    <a href="https://huggingface.co/BAAI/BGE-VL-large">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_large-yellow">
    </a>
    <a href="https://huggingface.co/BAAI/BGE-VL-MLLM-S1">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_MLLM_S1-yellow">
    </a>
    <a href="https://huggingface.co/BAAI/BGE-VL-MLLM-S2">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-BGE_VL_MLLM_S2-yellow">
    </a>
</p>

## News
```2025-3-4``` 🎉🎉 We have released the BGE-VL-MLLM models on Huggingface: [BGE-VL-MLLM-S1](https://huggingface.co/BAAI/BGE-VL-MLLM-S1) and [BGE-VL-MLLM-S2](https://huggingface.co/BAAI/BGE-VL-MLLM-S2). **BGE-VL-MLLM-S1** is trained exclusively on our MegaPairs dataset, achieving outstanding performance in composed image retrieval, with an 8.1% improvement on the CIRCO benchmark (mAP@5) over the previous state-of-the-art. **BGE-VL-MLLM-S2** builds on BGE-VL-MLLM-S1 with an additional epoch of fine-tuning on the MMEB benchmark training set, delivering enhanced performance across a broader range of multimodal embedding tasks.

```2024-12-27``` 🎉🎉 We have released the BGE-VL-CLIP models on Huggingface: [BGE-VL-base](https://huggingface.co/BAAI/BGE-VL-base) and [BGE-VL-large](https://huggingface.co/BAAI/BGE-VL-large).

```2024-12-19``` 🎉🎉 We have released our paper: [MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval](https://arxiv.org/pdf/2412.14475).

## Release Plan
- [x] Paper
- [x] BGE-VL-base and BGE-VL-large models
- [x] BGE-VL-MLLM models
- [ ] MegaPairs Dataset
- [ ] Evaluation code
- [ ] Fine-tuning code


## Introduction
In this work, we introduce **MegaPairs**, a novel data synthesis method that leverages open-domain images to create *heterogeneous KNN triplets* for universal multimodal retrieval. Our MegaPairs dataset contains over 26 million triplets, which we use to train a series of multimodal retrieval models, **BGE-VL**, including BGE-VL-CLIP (base and large) and BGE-VL-MLLM.

BGE-VL achieves state-of-the-art performance on four popular zero-shot composed image retrieval benchmarks and on the massive multimodal embedding benchmark (MMEB). Extensive experiments demonstrate the ***efficiency, scalability, and generalization*** of MegaPairs. Please refer to our [paper](https://arxiv.org/abs/2412.14475) for more details.
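
Conceptually, a MegaPairs triplet pairs a query image and a textual relation with a related target image, mirroring the image-plus-instruction queries in the usage examples below. The following is a rough, hypothetical sketch of such a record; the field names and paths are illustrative only and the released dataset may use a different schema:

```python
# Hypothetical illustration of a MegaPairs-style heterogeneous KNN triplet.
# Field names and file paths are placeholders, not the official dataset schema.
triplet = {
    "query_image": "images/000123.jpg",   # source image
    "relation_text": "the same landmark photographed at night, from across the river",
    "target_image": "images/987654.jpg",  # related image used as the retrieval target
}
```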

## Model Usage

### 1. BGE-VL-CLIP Models
You can easily use the BGE-VL-CLIP models with ```transformers```:
```python
import torch
from transformers import AutoModel

MODEL_NAME = "BAAI/BGE-VL-base" # or "BAAI/BGE-VL-large"

model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True) # You must set trust_remote_code=True
model.set_processor(MODEL_NAME)
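# The rest of this example is a sketch: `model.encode` is an assumption here (check
# the model card for the exact API); the image paths and instruction text follow the
# other examples in this README.
model.eval()

with torch.no_grad():
    query = model.encode(
        images="./assets/cir_query.png",
        text="Make the background dark, as if the camera has taken the photo at night",
    )
    candidates = model.encode(
        images=["./assets/cir_candi_1.png", "./assets/cir_candi_2.png"]
    )

    # Higher dot-product scores indicate better query-candidate matches.
    scores = query @ candidates.T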
print(scores)
```

See the [demo](./retrieval_demo.ipynb) for a complete example of using BGE-VL for multimodal retrieval.


### 2. BGE-VL-MLLM Models

```python
import torch
from transformers import AutoModel
from PIL import Image

MODEL_NAME = "BAAI/BGE-VL-MLLM-S1"

model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
model.eval()
model.cuda()

with torch.no_grad():
    model.set_processor(MODEL_NAME)

    # Query inputs: an image plus a modification instruction (q_or_c="q" marks the query side).
    query_inputs = model.data_process(
        text="Make the background dark, as if the camera has taken the photo at night",
        images="./assets/cir_query.png",
        q_or_c="q",
        task_instruction="Retrieve the target image that best meets the combined criteria by using both the provided image and the image retrieval instructions: "
    )

    # Candidate inputs: images only (q_or_c="c" marks the candidate side).
    candidate_inputs = model.data_process(
        images=["./assets/cir_candi_1.png", "./assets/cir_candi_2.png"],
        q_or_c="c",
    )

    # Take the final hidden state of the last token as the embedding.
    query_embs = model(**query_inputs, output_hidden_states=True)[:, -1, :]
    candi_embs = model(**candidate_inputs, output_hidden_states=True)[:, -1, :]

    # L2-normalize so the dot products below are cosine similarities.
    query_embs = torch.nn.functional.normalize(query_embs, dim=-1)
    candi_embs = torch.nn.functional.normalize(candi_embs, dim=-1)

    scores = torch.matmul(query_embs, candi_embs.T)
print(scores)
```
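
Here, the embedding of each query or candidate is the final hidden state of the last token, and both sides are L2-normalized, so every entry of `scores` is a cosine similarity; the highest-scoring candidate is the retrieved image.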


## Model Performance
### Zero-Shot Composed Image Retrieval

BGE-VL sets a new performance benchmark in zero-shot composed image retrieval tasks. On the CIRCO benchmark, our BGE-VL-base model, with only 149 million parameters, surpasses all previous models, including those with 50 times more parameters. Additionally, BGE-VL-MLLM achieves an 8.1% improvement over the previous state-of-the-art model.

<img src="./assets/res-zs-cir.png" width="800">

### Zero-Shot Performance on MMEB

BGE-VL-MLLM achieves state-of-the-art zero-shot performance on the Massive Multimodal Embedding Benchmark (MMEB), despite being trained only on the ImageText-to-Image paradigm. This demonstrates the excellent generalization capability of MegaPairs for multimodal embedding.

<img src="./assets/res-zs-mmeb.png" width="800">

### Fine-Tuning Performance on MMEB

After fine-tuning on downstream tasks, BGE-VL-MLLM maintains its leading performance. Notably, it surpasses the previous state-of-the-art by 7.1% on the MMEB out-of-distribution (OOD) set. These results demonstrate the robust generalization capability of BGE-VL-MLLM and highlight the potential of MegaPairs as foundational training data for universal multimodal embedding.

<img src="./assets/res-ft-mmeb.png" width="800">

### Performance Scaling
MegaPairs showcases **scalability**: BGE-VL-base improves as training data increases. It also demonstrates **efficiency**: with just 0.5M training samples, BGE-VL-base significantly outperforms MagicLens, which uses the same CLIP-base backbone and was trained on 36.7M samples.

<img src="./assets/res-scaling.png" width="800">


## License
The annotations for MegaPairs and the BGE-VL models are released under the [MIT License](LICENSE). The images in MegaPairs originate from the [Recap-Datacomp](https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B) dataset, which is released under the CC BY 4.0 license.


## Citation
If you find this repository useful, please consider giving a star ⭐ and citation:

```bibtex
@article{zhou2024megapairs,
  title={MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval},
  journal={arXiv preprint arXiv:2412.14475},
  year={2024}
}
```