✨LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion✨

Fangfu Liu1, Hao Li2, Jiawei Chi1, Hanyang Wang1,3, Minghui Yang3, Fudong Wang3, Yueqi Duan1
1Tsinghua University, 2NTU, 3Ant Group

ICCV 2025 🔥


Teaser Visualization

LangScene-X: We propose LangScene-X, a unified model that generates RGB images, segmentation maps, and normal maps, enabling reconstruction of a 3D field from sparse-view inputs.

📖 Abstract

Recovering 3D structures with open-vocabulary scene understanding from 2D images is a fundamental but daunting task. Recent developments have achieved this by performing per-scene optimization with embedded language information. However, they heavily rely on the calibrated dense-view reconstruction paradigm, thereby suffering from severe rendering artifacts and implausible semantic synthesis when limited views are available. In this paper, we introduce a novel generative framework, coined LangScene-X, to unify and generate 3D consistent multi-modality information for reconstruction and understanding. Powered by the generative capability of creating more consistent novel observations, we can build generalizable 3D language-embedded scenes from only sparse views. Specifically, we first train a TriMap video diffusion model that can generate appearance (RGBs), geometry (normals), and semantics (segmentation maps) from sparse inputs through progressive knowledge integration. Furthermore, we propose a Language Quantized Compressor (LQC), trained on large-scale image datasets, to efficiently encode language embeddings, enabling cross-scene generalization without per-scene retraining. Finally, we reconstruct the language surface fields by aligning language information onto the surface of 3D scenes, enabling open-ended language queries. Extensive experiments on real-world data demonstrate the superiority of our LangScene-X over state-of-the-art methods in terms of quality and generalizability.

📢 News

  • 🔥 [04/07/2025] We release "LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion". Check our project page and arXiv paper.

🌟 Pipeline

Pipeline Visualization

Pipeline of LangScene-X. Our model is composed of a TriMap Video Diffusion model which generates RGB, segmentation map, and normal map videos, an Auto Encoder that compresses the language feature, and a field constructor that reconstructs 3DGS from the generated videos.
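The Language Quantized Compressor named above compresses high-dimensional language features into a compact, generalizable representation. As a minimal illustrative sketch only (not the paper's implementation), the core idea of vector quantization can be shown with a random codebook standing in for a learned one; all sizes and arrays below are assumptions:

```python
import numpy as np

# Minimal sketch of vector quantization, the idea behind a language
# quantized compressor: map high-dimensional language features to indices
# into a small codebook. The random codebook and features below are
# placeholders for learned values (e.g. per-pixel CLIP features).
rng = np.random.default_rng(0)
feat_dim, codebook_size = 512, 64

codebook = rng.standard_normal((codebook_size, feat_dim))  # learned in practice
features = rng.standard_normal((100, feat_dim))            # features to compress

# Nearest-codebook-entry assignment under L2 distance.
d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
indices = d2.argmin(axis=1)   # compressed form: one small integer per feature
recon = codebook[indices]     # decoded (quantized) features

print(indices.shape, recon.shape)
```

Storing one codebook index per feature instead of a 512-dimensional vector is what makes embedding language into every Gaussian tractable.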

🎨 Video Demos from TriMap Video Diffusion

https://github.com/user-attachments/assets/55346d53-eb04-490e-bb70-64555e97e040

https://github.com/user-attachments/assets/d6eb28b9-2af8-49a7-bb8b-0d4cba7843a5

https://github.com/user-attachments/assets/396f11ef-85dc-41de-882e-e249c25b9961

⚙️ Setup

1. Clone Repository

git clone https://github.com/liuff19/LangScene-X.git
cd LangScene-X

2. Environment Setup

  1. Create conda environment
conda create -n langscenex python=3.10 -y
conda activate langscenex
  2. Install dependencies
conda install pytorch torchvision -c pytorch -y
pip install -e field_construction/submodules/simple-knn
pip install -e field_construction/submodules/diff-langsurf-rasterizer
pip install -e auto-seg/submodules/segment-anything-1
pip install -e auto-seg/submodules/segment-anything-2
pip install -r requirements.txt
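After installation, a quick sanity check can confirm the key packages import cleanly. This is a hedged sketch: the module names below are guesses based on the pip commands above and may differ from the actual import names in your environment:

```python
# Report which of the installed dependencies can be imported.
# Module names are inferred from the install commands and may differ.
from importlib.util import find_spec

for mod in ["torch", "torchvision", "simple_knn"]:
    status = "ok" if find_spec(mod) else "MISSING"
    print(f"{mod}: {status}")
```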

3. Model Checkpoints

The checkpoints of SAM, SAM2, and the fine-tuned CogVideoX model can be downloaded from our Hugging Face repository.

💻 Running

Quick Start

You can get started quickly by running the following script:

chmod +x quick_start.sh
./quick_start.sh <first_rgb_image_path> <last_rgb_image_path>

Render

Run the following command to render from the reconstructed 3DGS field:

python entry_point.py \
    pipeline.rgb_video_path="does/not/matter" \
    pipeline.normal_video_path="does/not/matter" \
    pipeline.seg_video_path="does/not/matter" \
    pipeline.data_path="does/not/matter" \
    gaussian.dataset.source_path="does/not/matter" \
    gaussian.dataset.model_path="output/path" \
    pipeline.selection=False \
    gaussian.opt.max_geo_iter=1500 \
    gaussian.opt.normal_optim=True \
    gaussian.opt.optim_pose=True \
    pipeline.skip_video_process=True \
    pipeline.skip_lang_feature_extraction=True \
    pipeline.mode="render"

You can also configure the pipeline by editing configs/field_construction.yaml.
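Once the field is reconstructed, the abstract's "open-ended language queries" amount to scoring each Gaussian's language feature against a text embedding. The sketch below illustrates the idea with random vectors standing in for real CLIP embeddings; the threshold and all arrays are illustrative assumptions, not the repository's API:

```python
import numpy as np

# Hedged sketch of an open-vocabulary query against a language field:
# each Gaussian carries a language feature, and relevancy to a text
# prompt is its cosine similarity with the prompt's embedding.
# Random vectors stand in for real CLIP embeddings here.
rng = np.random.default_rng(1)
num_gaussians, dim = 500, 512

gauss_feats = rng.standard_normal((num_gaussians, dim))
text_embed = rng.standard_normal(dim)  # e.g. embedding of "a wooden chair"

# Cosine similarity -> per-Gaussian relevancy score in [-1, 1].
gn = gauss_feats / np.linalg.norm(gauss_feats, axis=1, keepdims=True)
tn = text_embed / np.linalg.norm(text_embed)
relevancy = gn @ tn

mask = relevancy > 0.1  # illustrative threshold selecting the queried region
print(mask.sum(), "of", num_gaussians, "Gaussians selected")
```

The selected mask can then be used to highlight or segment the queried region when rendering the 3DGS field.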

🔗 Acknowledgements

We are grateful to the following great works that we built upon when implementing LangScene-X:

📚Citation

@misc{liu2025langscenexreconstructgeneralizable3d,
      title={LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion}, 
      author={Fangfu Liu and Hao Li and Jiawei Chi and Hanyang Wang and Minghui Yang and Fudong Wang and Yueqi Duan},
      year={2025},
      eprint={2507.02813},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.02813}, 
}