---
base_model:
- microsoft/Phi-3.5-mini-instruct
- google/siglip-so400m-patch14-384
datasets:
- starriver030515/FUSION-Pretrain-10M
- starriver030515/FUSION-Finetune-12M
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# Model Card for FUSION

This is the checkpoint of FUSION-Phi3.5-3B after Stage 1, Stage 1.5, and Stage 2 training.

## Model Details

**Model Description**

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/encoder.jpg" alt="encoder" width="1000px">

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/decoder.jpg" alt="decoder" width="1000px">

FUSION is a family of multimodal large language models that adopts a fully integrated vision-language architecture, enabling comprehensive and fine-grained cross-modal understanding. In contrast to prior approaches that primarily perform shallow or late-stage modality fusion during the LLM decoding phase, FUSION achieves deep, dynamic integration across the entire vision-language processing pipeline.

To enable this, FUSION utilizes Text-Guided Unified Vision Encoding, which incorporates textual context directly into the vision encoder. This design allows for pixel-level vision-language alignment and facilitates early-stage cross-modal interaction.
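
To illustrate the general idea of text-conditioned vision encoding, the sketch below feeds question embeddings into a vision-encoder block so that self-attention mixes the two modalities before any LLM decoding. This is a minimal conceptual sketch only, not the released FUSION implementation; the module, dimensions, and the way text reaches the encoder are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TextGuidedVisionBlock(nn.Module):
    """Toy encoder block: patch tokens attend jointly over question tokens."""
    def __init__(self, dim: int = 1152, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, patch_tokens, text_tokens):
        # Concatenate question embeddings with patch tokens so self-attention
        # mixes the two modalities inside the vision encoder itself.
        x = torch.cat([patch_tokens, text_tokens], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        x = x + self.mlp(self.norm2(x))
        return x[:, : patch_tokens.size(1)]  # keep only the text-conditioned patches

patches = torch.randn(1, 729, 1152)   # SigLIP-so400m/14 @ 384px yields 729 patch tokens
question = torch.randn(1, 32, 1152)   # question embeddings projected to the vision width
print(TextGuidedVisionBlock()(patches, question).shape)  # torch.Size([1, 729, 1152])
```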

During decoding, FUSION employs a Context-Aware Recursive Alignment Decoding strategy. This component dynamically aggregates and refines visual features based on the evolving textual context at each decoding step, allowing the model to capture question-level semantics with high precision.
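
As a rough mental model of that per-step refinement (hypothetical module and shapes; consult the paper and repository for the actual decoding scheme), a cross-attention layer can re-query the visual features with the latest text hidden state at every generation step:

```python
import torch
import torch.nn as nn

class RecursiveVisualAligner(nn.Module):
    """Toy per-step refinement: the current text state queries the visual features."""
    def __init__(self, dim: int = 3072, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def step(self, text_state, visual_feats):
        # text_state: (B, 1, D) hidden state at the current decoding position
        # visual_feats: (B, N, D) visual features projected to the LLM width
        refined, _ = self.cross_attn(text_state, visual_feats, visual_feats)
        return refined  # (B, 1, D) question-aware visual context for this step

aligner = RecursiveVisualAligner()
visual = torch.randn(2, 729, 3072)
for t in range(3):  # emulate three decoding steps with evolving text context
    ctx = aligner.step(torch.randn(2, 1, 3072), visual)
    print(t, ctx.shape)
```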

To further enhance alignment and reduce the semantic gap between modalities, FUSION integrates a Dual-Supervised Semantic Mapping Loss, which provides simultaneous supervision in both the visual and textual embedding spaces. This dual-path guidance strengthens the consistency and semantic coherence of the fused representations.
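
A bare-bones numerical illustration of dual-space supervision is given below. It is a sketch only: the projection heads, similarity measure, and loss weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

v_dim, t_dim = 1152, 3072                 # assumed vision / LLM hidden sizes
vis_to_txt = nn.Linear(v_dim, t_dim)      # maps visual features into the text space
txt_to_vis = nn.Linear(t_dim, v_dim)      # maps text features into the visual space

visual_feats = torch.randn(4, v_dim)      # pooled visual features for a batch of pairs
text_embeds = torch.randn(4, t_dim)       # paired text embeddings

# Supervise the mapping in both embedding spaces simultaneously.
loss_txt_space = (1 - F.cosine_similarity(vis_to_txt(visual_feats), text_embeds)).mean()
loss_vis_space = (1 - F.cosine_similarity(txt_to_vis(text_embeds), visual_feats)).mean()
dual_loss = loss_txt_space + loss_vis_space
print(dual_loss.item())
```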

**Base Model**

**LLM**: [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)

**Vision Encoder**: [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
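
Since the checkpoint ships custom modeling code, loading it through the standard `transformers` entry points requires `trust_remote_code=True`. The snippet below is a hedged loading sketch: the Hub id is an assumption, and the processor, prompt template, and image handling come from the FUSION GitHub repository rather than from this snippet.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "starriver030515/FUSION-Phi3.5-3B"  # assumed Hub id for this checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the checkpoint uses custom modeling code
    torch_dtype="auto",
    device_map="auto",       # requires `accelerate`
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```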

## Training Details

**Training Strategies**

FUSION is trained with a three-stage framework that ensures comprehensive alignment and integration between the visual and linguistic modalities.

- **Stage 1: Foundational Semantic Alignment**: We pretrain the vision encoder on extensive image-caption datasets to establish precise semantic alignment between visual and textual representations.
- **Stage 1.5: Contextual Multimodal Fusion**: In contrast to Stage 1, this intermediate stage adds various types of QA data alongside image-caption pairs, enhancing the model's adaptability in aligning vision and language representations across a broad spectrum of scenarios.
- **Stage 2: Visual Instruction Tuning**: At this stage, we expose the model to a variety of visual tasks, enabling it to answer downstream vision-related questions effectively.

**Training Data**

- [10M FUSION Alignment Data](https://huggingface.co/datasets/starriver030515/FUSION-Pretrain-10M) for Stage 1
- [12M FUSION Curated Instruction Tuning Data](https://huggingface.co/datasets/starriver030515/FUSION-Finetune-12M) for Stage 1.5 and Stage 2 (a download sketch follows the list)
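
Both corpora are hosted as Hugging Face dataset repos; a minimal sketch for fetching local copies with `huggingface_hub` follows. The on-disk layout and how the training scripts consume the files are described on the dataset cards and in the FUSION repository, not here.

```python
# Minimal sketch: download local copies of the training corpora. These repos are
# large; check the dataset cards for layout before pulling everything.
from huggingface_hub import snapshot_download

pretrain_path = snapshot_download("starriver030515/FUSION-Pretrain-10M", repo_type="dataset")
finetune_path = snapshot_download("starriver030515/FUSION-Finetune-12M", repo_type="dataset")
print(pretrain_path, finetune_path)
```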

## Performance

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/performance.jpg" alt="performance" width="1000px">

**Where to send questions or comments about the model:**

https://github.com/starriver030515/FUSION/issues

## Paper or resources for more information

- [https://arxiv.org/abs/2504.09925](https://arxiv.org/abs/2504.09925)
- [https://github.com/starriver030515/FUSION](https://github.com/starriver030515/FUSION)

## Citation

If you find FUSION useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{liu2025fusionfullyintegrationvisionlanguage,
      title={FUSION: Fully Integration of Vision-Language Representations for Deep Cross-Modal Understanding}, 
      author={Zheng Liu and Mengjie Liu and Jingzhou Chen and Jingwei Xu and Bin Cui and Conghui He and Wentao Zhang},
      year={2025},
      eprint={2504.09925},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.09925}, 
}
```