Abstract
FlashWorld generates 3D scenes from single images or text prompts quickly and with high quality by combining MV-oriented and 3D-oriented generation methods.
We propose FlashWorld, a generative model that produces 3D scenes from a single image or text prompt in seconds, 10–100× faster than previous works while delivering superior rendering quality. Our approach shifts from the conventional multi-view-oriented (MV-oriented) paradigm, which generates multi-view images for subsequent 3D reconstruction, to a 3D-oriented approach in which the model directly produces 3D Gaussian representations during multi-view generation. While ensuring 3D consistency, 3D-oriented methods typically suffer from poor visual quality. FlashWorld includes a dual-mode pre-training phase followed by a cross-mode post-training phase, effectively integrating the strengths of both paradigms. Specifically, leveraging the prior from a video diffusion model, we first pre-train a dual-mode multi-view diffusion model that jointly supports MV-oriented and 3D-oriented generation. To bridge the quality gap in 3D-oriented generation, we further propose cross-mode post-training distillation, which matches the distribution of the consistent 3D-oriented mode to that of the high-quality MV-oriented mode. This not only enhances visual quality while maintaining 3D consistency, but also reduces the number of denoising steps required at inference. We also propose a strategy that leverages massive single-view images and text prompts during this process to improve the model's generalization to out-of-distribution inputs. Extensive experiments demonstrate the superiority and efficiency of our method.
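To make the cross-mode distillation idea concrete, below is a minimal PyTorch-style sketch of one post-training step. It follows the general distribution-matching distillation recipe implied by the abstract, not the authors' actual implementation; all interfaces (`generate_gaussians`, `render`, `score`, `add_noise`, the `mode` argument) are hypothetical names introduced for illustration.

```python
# Minimal sketch (assumptions labeled): the 3D-oriented student renders
# multi-view images from its predicted 3D Gaussians in a few denoising
# steps, and a distribution-matching loss pulls those renderings toward
# the distribution of the high-quality MV-oriented mode.
import torch

def cross_mode_distill_step(model, teacher, cameras, cond, opt):
    """One hypothetical post-training step for cross-mode distillation."""
    # Student: 3D-oriented mode predicts 3D Gaussians directly, then
    # renders them into the target views (3D-consistent by construction).
    # Few denoising steps, matching the abstract's fast-inference claim.
    gaussians = model.generate_gaussians(cond, cameras, num_steps=4)
    renders = model.render(gaussians, cameras)

    # Teacher signal: compare the scores of the two modes on noised
    # renderings. The difference is the distribution-matching gradient
    # that moves renderings toward the MV-oriented distribution.
    t = torch.randint(0, teacher.num_timesteps, (renders.shape[0],))
    noisy = teacher.add_noise(renders, t)
    with torch.no_grad():
        score_mv = teacher.score(noisy, t, cond, mode="mv")  # high quality
        score_3d = teacher.score(noisy, t, cond, mode="3d")  # 3D-consistent
        grad = score_3d - score_mv  # descent direction toward MV dist.

    # Surrogate loss whose gradient w.r.t. the renderings equals `grad`,
    # so backprop updates only the student's parameters.
    loss = (renders * grad.detach()).sum() / renders.shape[0]
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because both modes live in one dual-mode diffusion model, the same network can serve as student (3D-oriented) and teacher (MV-oriented) in this sketch, which is consistent with the dual-mode pre-training the abstract describes.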
Community
TLDR: FlashWorld enables fast (7 seconds on a single A100/A800 GPU) and high-quality 3D scene generation across diverse scenes, from a single image or text prompt.
Project Page: https://imlixinyang.github.io/FlashWorld-Project-Page/
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- WorldSplat: Gaussian-Centric Feed-Forward 4D Scene Generation for Autonomous Driving (2025)
- FantasyWorld: Geometry-Consistent World Modeling via Unified Video and 3D Prediction (2025)
- VideoFrom3D: 3D Scene Video Generation via Complementary Image and Video Diffusion Models (2025)
- MV-Performer: Taming Video Diffusion Model for Faithful and Synchronized Multi-view Performer Synthesis (2025)
- Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation (2025)
- ShapeGen4D: Towards High Quality 4D Shape Generation from Videos (2025)
- 4DNeX: Feed-Forward 4D Generative Modeling Made Easy (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend