kanashi6 committed
Commit 2409825 · verified · 1 Parent(s): 3376747

Update README.md

Files changed (1)
1. README.md +10 -10
README.md CHANGED
@@ -1,18 +1,18 @@
- ---
- license: apache-2.0
- datasets:
- - BLIP3o/BLIP3o-Pretrain-Long-Caption
- - BLIP3o/BLIP3o-Pretrain-Short-Caption
- - BLIP3o/BLIP3o-Pretrain-JourneyDB
- base_model:
- - OpenGVLab/InternVL3-1B
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - BLIP3o/BLIP3o-Pretrain-Long-Caption
+ - BLIP3o/BLIP3o-Pretrain-Short-Caption
+ - BLIP3o/BLIP3o-Pretrain-JourneyDB
+ base_model:
+ - OpenGVLab/InternVL3-1B
+ ---
  This repository contains the models (**autoencoders**) presented in the paper UniLIP: Adapting CLIP for Unified Multimodal Understanding, Generation and Editing.

  UniLIP proposes a unified, CLIP-based encoder featuring both rich semantics and fine-grained image details. Through a **two-stage, self-distillation training** scheme for reconstruction, we empower CLIP to achieve excellent reconstruction results **without compromising its original understanding abilities**. Leveraging this powerful unified representation, UniLIP excels across understanding, generation, and editing tasks.

  For more details, please refer to the original paper and the GitHub repository:

- Paper: [UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface](https://www.arxiv.org/abs/2507.23278)
+ Paper: https://www.arxiv.org/abs/2507.23278

  GitHub: https://github.com/nnnth/UniLIP
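Not part of the commit, but for context: a minimal, hypothetical sketch of how a Hub checkpoint like this is commonly loaded with `transformers`. The repository id `kanashi6/UniLIP` is a placeholder assumption, and the actual model class and loading API may differ; consult the GitHub repository above for the authors' real usage.

```python
# Illustrative sketch only -- not taken from this commit.
# "kanashi6/UniLIP" is a placeholder repo id (assumption); the real id
# and model class may differ. See https://github.com/nnnth/UniLIP.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "kanashi6/UniLIP",       # placeholder repo id (assumption)
    trust_remote_code=True,  # custom architectures ship their own code
)
model.eval()  # inference mode for reconstruction/generation
```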