VisCoder2-32B

🏠 Project Page | 📖 Paper | 💻 GitHub | 🤗 VisCode2

VisCoder2-32B is a multi-language visualization coding model trained for executable code generation, rendering, and iterative self-debugging.


🧠 Model Description

VisCoder2-32B is trained on the VisCode-Multi-679K dataset, a large-scale instruction-tuning dataset for executable visualization tasks across 12 programming languages. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces visual outputs semantically consistent with the natural-language instruction and the intended rendering.
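
This card does not include a usage snippet; the following is a minimal inference sketch with Hugging Face transformers, assuming the standard chat template of the Qwen2.5 base model. The example instruction and generation settings are illustrative, not prescribed by this card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # BF16 weights; device_map="auto" requires accelerate
    device_map="auto",
)

# Illustrative visualization request; any plotting task in a supported language works.
messages = [
    {"role": "user", "content": "Write Python code with matplotlib that draws a bar chart of monthly sales and saves it to chart.png."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))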


📊 Main Results on VisPlotBench

We evaluate VisCoder2-32B on VisPlotBench, a benchmark of 888 executable visualization tasks spanning 8 languages that supports both standard generation and multi-round self-debug evaluation.

(Figure: main results of VisCoder2-32B on VisPlotBench.)

VisCoder2-32B shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
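
The exact self-debug protocol is not spelled out in this card; the sketch below only illustrates the general pattern of executing generated code and feeding the error back to the model for another attempt. The helpers run_code and self_debug, the generate_code callback, and the prompt wording are hypothetical.

import os
import subprocess
import tempfile

def run_code(code: str) -> tuple[bool, str]:
    # Execute candidate visualization code in a subprocess and capture stderr.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=120)
        return proc.returncode == 0, proc.stderr
    finally:
        os.remove(path)

def self_debug(generate_code, instruction: str, max_rounds: int = 3) -> str:
    # generate_code(prompt) is a hypothetical wrapper around model.generate().
    code = generate_code(instruction)
    for _ in range(max_rounds):
        ok, err = run_code(code)
        if ok:
            break
        code = generate_code(
            f"{instruction}\n\nThe previous code failed with:\n{err}\n"
            "Please return a corrected version of the full script."
        )
    return code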


๐Ÿ“ Training Details

  • Base model: Qwen2.5-Coder-32B-Instruct
  • Framework: ms-swift
  • Tuning method: Full-parameter supervised fine-tuning (SFT)
  • Dataset: VisCode-Multi-679K

📖 Citation

If you use VisCoder2-32B or related datasets in your research, please cite:

@article{ni2025viscoder2,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Ni, Yuansheng and Cai, Songcheng and Chen, Xiangchao and Liang, Jiarong and Lyu, Zhiheng and Deng, Jiaqi and Zou, Kai and Nie, Ping and Yuan, Fei and Yue, Xiang and others},
  journal={arXiv preprint arXiv:2510.23642},
  year={2025}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}

For evaluation scripts and more information, see our GitHub repository.
