UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations
This repository provides the pretrained weights of the UrbanFusion model, a framework for learning robust spatial representations through stochastic multimodal fusion.
UrbanFusion can generate location encodings from any subset of the following modalities:
- Geographic coordinates
- Street-view imagery
- Remote sensing data
- OSM basemaps
- Points of interest (POIs)
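The core idea behind handling arbitrary modality subsets can be illustrated with a minimal sketch. This is not the actual UrbanFusion API; all names (`fuse`, the modality keys, the mean-pooling fusion rule, and the `keep_prob` parameter) are illustrative assumptions: each modality is encoded to a vector, a random subset is kept during training, and the surviving embeddings are pooled into one location encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(embeddings, keep_prob=0.5, rng=rng, train=True):
    """Mean-pool a random subset of modality embeddings.

    embeddings: dict mapping modality name -> (d,) vector.
    During training a random subset is kept (stochastic fusion);
    at inference all available modalities are used.
    At least one modality is always retained so the mean is defined.
    """
    names = list(embeddings)
    if train:
        mask = rng.random(len(names)) < keep_prob
        if not mask.any():  # guarantee at least one surviving modality
            mask[rng.integers(len(names))] = True
        names = [n for n, m in zip(names, mask) if m]
    return np.mean([embeddings[n] for n in names], axis=0)

# Toy 8-dimensional embeddings, one per modality (random stand-ins).
mods = {m: rng.standard_normal(8) for m in
        ["coords", "street_view", "remote_sensing", "osm", "poi"]}

z_train = fuse(mods)              # random subset, as during training
z_infer = fuse(mods, train=False) # all available modalities at inference
```

At inference time, the same fusion can be called with whatever subset of modalities is actually available for a location, which is what makes the learned representation robust to missing inputs.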
The full source code is available on GitHub, and further details are described in our paper.
Citation
@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. Mühlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.13774}
}
Model tree for DominikM198/UrbanFusion
Base model: BAAI/bge-small-en-v1.5