MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second
Abstract
MoVieS synthesizes 4D dynamic novel views from monocular videos using Gaussian primitives, enabling unified modeling of appearance, geometry, and motion with minimal task-specific supervision.
We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry, and motion, and enables view synthesis, reconstruction, and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering speedups of several orders of magnitude.
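To make the representation in the abstract concrete, below is a minimal PyTorch sketch of a pixel-aligned Gaussian prediction head with a per-timestep motion branch. It is only an illustration of the general idea (every pixel lifted to a 3D Gaussian along its camera ray, plus a 3D offset per target timestep); all module, tensor, and parameter names here are our own assumptions, not the authors' released code or API.

```python
# Illustrative sketch (not the official MoVieS implementation): a head that turns
# per-pixel backbone features into pixel-aligned 3D Gaussian parameters and
# per-timestep motion offsets, as described at a high level in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelAlignedGaussianHead(nn.Module):
    def __init__(self, feat_dim: int, num_timesteps: int):
        super().__init__()
        # Static per-pixel Gaussian parameters:
        # depth (1) + rotation quaternion (4) + log-scale (3) + opacity (1) + RGB (3) = 12
        self.static_head = nn.Conv2d(feat_dim, 12, kernel_size=1)
        # Per-pixel, per-timestep 3D motion offsets (3 values per target timestep).
        self.motion_head = nn.Conv2d(feat_dim, 3 * num_timesteps, kernel_size=1)
        self.num_timesteps = num_timesteps

    def forward(self, feats, rays_o, rays_d):
        # feats: (B, C, H, W) backbone features
        # rays_o, rays_d: (B, 3, H, W) camera ray origins and directions per pixel
        B, _, H, W = feats.shape
        static = self.static_head(feats)
        depth = F.softplus(static[:, :1])                  # positive per-pixel depth
        centers = rays_o + depth * rays_d                  # (B, 3, H, W) Gaussian means
        quats = F.normalize(static[:, 1:5], dim=1)         # unit rotation quaternions
        scales = static[:, 5:8].exp()                      # positive anisotropic scales
        opacity = torch.sigmoid(static[:, 8:9])
        colors = torch.sigmoid(static[:, 9:12])
        # Motion: one 3D offset per pixel per target timestep, added to static centers.
        offsets = self.motion_head(feats).view(B, self.num_timesteps, 3, H, W)
        centers_t = centers.unsqueeze(1) + offsets          # (B, T, 3, H, W)
        return centers_t, quats, scales, opacity, colors
```

Under this reading, rendering the time-indexed Gaussians at held-out viewpoints would supply photometric supervision, while the same predicted offsets could be compared against sparse point tracks, which is one plausible way the shared appearance, geometry, and motion supervision described above could fit together.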
Community
Paper: https://arxiv.org/pdf/2507.10065
Project page: https://chenguolin.github.io/projects/MoVieS
Code: https://github.com/chenguolin/MoVieS
We just released MoVieS, a feed-forward model that reconstructs 4D scenes in 1 second!
My favorite part: it learns dense (pixel-wise), sharp 3D world motion from novel view rendering + sparse point tracking supervision.
Check it out: https://chenguolin.github.io/projects/MoVieS
We're excited to share our work, which we hope will facilitate downstream tasks such as point tracking, dynamic object segmentation, and video depth estimation.