Update README.md
README.md
```diff
@@ -1,8 +1,7 @@
 ---
 license: mit
 datasets:
--
-- royokong/cirr_imgs
+- BUAADreamer/cir_dataset
 language:
 - en
 metrics:
@@ -541,5 +540,4 @@ About code, our project is based on [CLIP4Cir](https://github.com/ABaldrati/CLIP
 
 About data, we train and evaluate on two CIR datasets, [FashionIQ](https://github.com/XiaoxiaoGuo/fashion-iq/) and [CIRR](https://github.com/Cuberick-Orion/CIRR). We use [LLaVA](https://github.com/haotian-liu/LLaVA) for caption generation and [Unicom](https://github.com/deepglint/unicom) for image-pair matching.
 
 Thanks for their great work! If you use a particular part of our code, please cite the relevant papers.
-
```
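Since the front matter now points at `BUAADreamer/cir_dataset` on the Hugging Face Hub, here is a minimal sketch of pulling it with the stock `datasets` API. It assumes the dataset is public and loadable with the default configuration; split and feature names are printed rather than assumed.

```python
# Minimal sketch: load the CIR dataset referenced in the updated front matter.
# Assumes the `datasets` library is installed (pip install datasets) and that
# BUAADreamer/cir_dataset is publicly available on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("BUAADreamer/cir_dataset")

# Inspect the available splits and features instead of hard-coding their names.
print(ds)
```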