UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations

This repository provides the pretrained weights of the UrbanFusion model, a framework for learning robust spatial representations through stochastic multimodal fusion.

UrbanFusion can generate location encodings from any subset of the following modalities:

  • ๐Ÿ“ Geographic coordinates
  • ๐Ÿ™๏ธ Street-view imagery
  • ๐Ÿ›ฐ๏ธ Remote sensing data
  • ๐Ÿ—บ๏ธ OSM basemaps
  • ๐Ÿฌ Points of interest (POIs)

🔗 The full source code is available on GitHub, and further details are described in our paper.
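The key idea behind the model is that a random subset of modalities is fused during training, so the resulting encoder stays useful when some inputs are missing at inference time. The sketch below is purely illustrative (it is not the UrbanFusion API; all names, the averaging fusion, and the drop probability are assumptions for demonstration):

```python
import random

# Conceptual sketch of stochastic multimodal fusion (illustrative only, not
# the actual UrbanFusion implementation): randomly drop modalities, then
# average the embeddings of the kept subset into one location encoding.

def stochastic_fuse(embeddings, drop_prob=0.5, rng=None):
    """Average the vectors of a randomly kept subset of modalities.

    embeddings: dict mapping modality name -> embedding (list of floats).
    At least one modality is always kept so the fused vector is defined.
    """
    rng = rng or random.Random()
    kept = [m for m in embeddings if rng.random() > drop_prob]
    if not kept:  # guarantee a non-empty subset
        kept = [rng.choice(list(embeddings))]
    dim = len(next(iter(embeddings.values())))
    fused = [0.0] * dim
    for m in kept:
        for i, v in enumerate(embeddings[m]):
            fused[i] += v
    return [x / len(kept) for x in fused], kept

# Example: toy 2-d embeddings for three available modalities
emb = {
    "coordinates": [1.0, 0.0],
    "street_view": [0.0, 1.0],
    "poi": [1.0, 1.0],
}
fused, kept = stochastic_fuse(emb, drop_prob=0.5, rng=random.Random(0))
```

At inference, the same fusion can be applied with `drop_prob=0.0` over whichever modalities happen to be available, which is what lets the model encode locations from any subset of inputs.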


📖 Citation

@article{muehlematter2025urbanfusion,
  title   = {UrbanFusion: Stochastic Multimodal Fusion for Contrastive Learning of Robust Spatial Representations},
  author  = {Dominik J. M{\"u}hlematter and Lin Che and Ye Hong and Martin Raubal and Nina Wiedemann},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.13774}
}
