---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
---

# UniCTokens Dataset

*Version · 2025-10-24*

## 1 Data Overview

| Item                   | Description                                                                             |
| ---------------------- | --------------------------------------------------------------------------------------- |
| **Total concepts**     | 20 (Human × 10 · Animal × 5 · Object × 5)                                                |
| **Images per concept** | **N ≈ 10–15**, already split into *train* / *test*                                       |
| **Negative samples**   | `random_images/` (100 random irrelevant images) + `negative_example/` (hard negatives)   |
+
## 2 Benchmark Tasks
|
| 24 |
+
|
| 25 |
+
### 2.1 MMU (Multi-Modal Understanding)
|
| 26 |
+
|
| 27 |
+
| Sub-task | Source files | Evaluation focus |
|
| 28 |
+
| ---------------- | --------------------------------- | ---------------------------------------------------------------- |
|
| 29 |
+
| **Text-Only QA** | `test/<concept>/text_only.json` | Check whether the model remembers concept knowledge (no image) |
|
| 30 |
+
| **VQA** | `test/<concept>/vqa.json` + image | Visual question answering about the concept image |
|
| 31 |
+
| **Rec** | `test/*.png` | Pure visual recognition capability |
|
| 32 |
+
|
| 33 |
+
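The exact JSON schemas are not reproduced here. As an illustration only, loading a QA file such as `text_only.json` might look like the sketch below; it writes a toy file first, and the `question` / `answer` field names are assumptions, not the dataset's confirmed schema:

```python
import json
import tempfile
from pathlib import Path

# Toy stand-in for test/<concept>/text_only.json. The question/answer
# keys are assumed for illustration and may differ from the actual files.
qa_path = Path(tempfile.mkdtemp()) / "text_only.json"
qa_path.write_text(json.dumps([
    {"question": "What breed is <concept>?", "answer": "A corgi."},
]))

for item in json.loads(qa_path.read_text()):
    print(item["question"], "->", item["answer"])
```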
### 2.2 T2I (Text-to-Image Generation)

| Mode                              | Input                                                       | Metrics                                                              |
| --------------------------------- | ----------------------------------------------------------- | -------------------------------------------------------------------- |
| **Vanilla generation**            | Prompts from the DreamBooth dataset → target-concept images | CLIP-I / CLIP-T · ArcFace similarity                                  |
| **Personalized knowledge-driven** | `t2i_conditions.json`                                       | Combined T2I score: must satisfy both visual and textual attributes  |
## 3 Directory Structure

```text
UniCTokens/
├── black_512x512.png                      # Pure black placeholder
├── concepts_list.json                     # List of the 20 concept names
├── template.json                          # Template for generating training data
├── random_images/                         # 100 simple negative samples for training
│   ├── 0.png
│   └── … 99.png
├── concept/                               # 🔑 Concept data (train / test)
│   ├── train/
│   │   └── <concept_name>/                # 20 folders
│   │       ├── 0.png … N.png              # Original training images
│   │       ├── cropped/                   # Cropped concept regions
│   │       ├── info.json                  # Concept profile & extra info
│   │       ├── conversations.json         # Training dialogues
│   │       ├── positive_recognitions.json # Positive QA pairs
│   │       ├── random_recognitions.json   # Negative QA pairs
│   │       └── negative_example/          # Hard negatives + score.json
│   └── test/
│       └── <concept_name>/
│           ├── 0.png … 4.png
│           ├── text_only.json             # Text-only QA
│           ├── vqa.json                   # VQA pairs
│           └── t2i_conditions.json        # Conditions for knowledge-driven T2I
├── gen_showo_training_data.py             # Creates Stage-1/2/3 training files
├── gen_test_data.py                       # Creates all evaluation files
└── README.md
```
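A loader that walks this layout could be sketched as follows. This is a hypothetical example, not part of the repository: it builds a one-concept toy copy of the tree in a temp directory (the concept name `bo` and the `name` field inside `info.json` are made up), then iterates it the way `concepts_list.json` suggests:

```python
import json
import tempfile
from pathlib import Path

# Build a toy copy of the UniCTokens layout (assumed structure from the
# README; "bo" and the info.json "name" field are illustrative only).
root = Path(tempfile.mkdtemp()) / "UniCTokens"
concept_dir = root / "concept" / "train" / "bo"
concept_dir.mkdir(parents=True)
(root / "concepts_list.json").write_text(json.dumps(["bo"]))
(concept_dir / "0.png").write_bytes(b"")  # placeholder training image
(concept_dir / "info.json").write_text(json.dumps({"name": "bo"}))

# Walk every concept listed in concepts_list.json.
concepts = json.loads((root / "concepts_list.json").read_text())
for name in concepts:
    folder = root / "concept" / "train" / name
    images = sorted(folder.glob("*.png"))
    info = json.loads((folder / "info.json").read_text())
    print(name, len(images), info["name"])
```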

## 4 Quick Start

1. **Set the dataset root**

   Open `gen_showo_training_data.py` and `gen_test_data.py` and change

   ```python
   DATA_ROOT = "/path/to/UniCTokens_Dataset"
   ```

   to the actual dataset path.

2. **Generate data**

   ```bash
   # Create Stage-1/2/3 training samples
   python gen_showo_training_data.py

   # Create MMU & T2I evaluation samples
   python gen_test_data.py
   ```
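Before running the scripts, it can help to sanity-check that `DATA_ROOT` actually points at the dataset. The helper below is a hypothetical pre-flight check, not part of the repository; the file names it looks for come from the directory listing in Section 3:

```python
from pathlib import Path

def looks_like_dataset_root(path: str) -> bool:
    """Hypothetical check that `path` contains the expected top-level
    entries from the UniCTokens layout (see Section 3)."""
    root = Path(path)
    required = ["concepts_list.json", "template.json", "concept"]
    return all((root / name).exists() for name in required)

print(looks_like_dataset_root("/nonexistent"))  # False for a missing path
```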

## 5 License

The dataset is released under **CC-BY-NC 4.0** and is intended for academic research **only**. Commercial use is not permitted.

## 6 Citation

```bibtex
@article{an2025unictokens,
  title={UniCTokens: Boosting Personalized Understanding and Generation via Unified Concept Tokens},
  author={An, Ruichuan and Yang, Sihan and Zhang, Renrui and Shen, Zijun and Lu, Ming and Dai, Gaole and Liang, Hao and Guo, Ziyu and Yan, Shilin and Luo, Yulin and others},
  journal={arXiv preprint arXiv:2505.14671},
  year={2025}
}
```

## 7 Contact

* GitHub Issues: [https://github.com/arctanxarc/UniCTokens/issues](https://github.com/arctanxarc/UniCTokens/issues)
* Email: [[email protected]](mailto:[email protected])