arxiv:2507.10065

MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second

Published on Jul 14
· Submitted by chenguolin on Jul 15
Abstract

AI-generated summary: MoVieS synthesizes 4D dynamic novel views from monocular videos using Gaussian primitives, enabling unified modeling of appearance, geometry, and motion with minimal task-specific supervision.

We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry and motion, and enables view synthesis, reconstruction and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering several orders of magnitude speedups.
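
A minimal sketch can make the representation concrete. The PyTorch snippet below illustrates what a "pixel-aligned grid of Gaussian primitives" with explicitly modeled time-varying motion could look like; all tensor names, shapes, and the displacement-based motion model are assumptions inferred from the abstract, not the authors' implementation.

```python
# Illustrative sketch (not the MoVieS code) of a pixel-aligned grid of
# Gaussian primitives with explicit, time-varying motion.
import torch

H, W, T = 64, 64, 8  # image height/width and number of timesteps (assumed)

# One Gaussian primitive per pixel: the grid is aligned with the image plane.
means     = torch.randn(H, W, 3)   # 3D center of each Gaussian
scales    = torch.rand(H, W, 3)    # per-axis extent
rotations = torch.randn(H, W, 4)   # quaternion orientation
opacities = torch.rand(H, W, 1)
colors    = torch.rand(H, W, 3)

# Explicit motion: a per-pixel 3D displacement for every timestep, which is
# what the network would predict and what the tracking loss would supervise.
motion = torch.zeros(H, W, T, 3)

def gaussians_at_time(t: int) -> torch.Tensor:
    """Return the (H, W, 3) Gaussian centers displaced to timestep t."""
    return means + motion[:, :, t]

# The displaced centers double as dense 3D point tracks: following one
# pixel's Gaussian across t = 0..T-1 traces that point through the scene.
track_of_pixel = torch.stack([gaussians_at_time(t)[10, 20] for t in range(T)])
print(track_of_pixel.shape)  # torch.Size([8, 3])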

Community

Paper author Paper submitter

🚨 We just released 🎞️MoVieS: a feed-forward model that reconstructs 4D scenes in ⚡️1 second

My favorite part: it learns dense (pixel-wise), sharp 3D world motion from novel view rendering + sparse point tracking supervision 🤯🎯 (sketched below)

Check it out 👉 https://chenguolin.github.io/projects/MoVieS

We're excited to share this work, and we hope it will facilitate downstream tasks such as point tracking, dynamic object segmentation, and video depth estimation.
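
For the supervision mix mentioned above, here is a hedged sketch in PyTorch: a dense photometric loss on rendered novel views combined with a sparse 3D point-tracking loss. The function name, tensor shapes, MSE/L2 choices, and the 0.1 weighting are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of the supervision mix: dense rendering loss + sparse
# point-tracking loss. Names, shapes, and weighting are assumptions.
import torch
import torch.nn.functional as F

def combined_loss(rendered: torch.Tensor,     # (V, 3, H, W) rendered novel views
                  target: torch.Tensor,       # (V, 3, H, W) ground-truth frames
                  pred_tracks: torch.Tensor,  # (N, T, 3) tracks read off the Gaussians
                  gt_tracks: torch.Tensor,    # (N, T, 3) sparse annotated 3D tracks
                  visible: torch.Tensor,      # (N, T) float mask, 1 where annotated
                  w_track: float = 0.1) -> torch.Tensor:
    # Dense term: every pixel of every rendered view is supervised.
    render_loss = F.mse_loss(rendered, target)
    # Sparse term: only visible annotated points contribute, but gradients
    # still flow into the per-pixel motion field that produced the tracks.
    err = (pred_tracks - gt_tracks).norm(dim=-1)              # (N, T)
    track_loss = (err * visible).sum() / visible.sum().clamp(min=1.0)
    return render_loss + w_track * track_loss
```

The appeal of this combination is that the tracking term is sparse but grounds the motion field in 3D, while the rendering term densely supervises every pixel's Gaussian.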
