Overview
MapAnything is a simple, end-to-end trained transformer model that directly regresses the factored metric 3D geometry of a scene from a variety of input modalities. A single feed-forward model supports over 12 different 3D reconstruction tasks, including multi-image structure-from-motion (SfM), multi-view stereo, monocular metric depth estimation, registration, depth completion, and more.
This is the Apache 2.0 variant of the model.
Quick Start
Please refer to our GitHub repository for installation and usage instructions.
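As a rough orientation only, a minimal inference call might look like the sketch below. The module paths, class and function names (`MapAnything`, `from_pretrained`, `load_images`, `infer`), and the checkpoint id are assumptions based on the repository layout, not the documented API; consult the GitHub repo for the authoritative quick-start code.

```python
# Hypothetical minimal inference sketch -- API names and checkpoint id are assumed,
# not guaranteed to match the released package. See the GitHub repo for the real entry points.
import torch
from mapanything.models import MapAnything          # assumed module path
from mapanything.utils.image import load_images     # assumed helper

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the Apache 2.0 variant of the pretrained model (assumed checkpoint id).
model = MapAnything.from_pretrained("facebook/map-anything-apache").to(device)

# Prepare a set of input views; depending on the task, additional modalities
# (intrinsics, poses, depth) can optionally accompany the images.
views = load_images(["scene/frame_000.jpg", "scene/frame_001.jpg"])

# A single feed-forward pass regresses factored metric 3D geometry for all views.
with torch.no_grad():
    predictions = model.infer(views)
```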
Citation
If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:
```bibtex
@inproceedings{keetha2025mapanything,
  title={{MapAnything}: Universal Feed-Forward Metric {3D} Reconstruction},
  author={Nikhil Keetha and Norman Müller and Johannes Schönberger and Lorenzo Porzi and Yuchen Zhang and Tobias Fischer and Arno Knapitsch and Duncan Zauss and Ethan Weber and Nelson Antunes and Jonathon Luiten and Manuel Lopez-Antequera and Samuel Rota Bulò and Christian Richardt and Deva Ramanan and Sebastian Scherer and Peter Kontschieder},
  booktitle={arXiv},
  year={2025}
}
```