Abstract
The distribution-conditioned transport framework enables generalization to unseen distribution pairs and supports semi-supervised learning for scientific applications.
Learning a transport model that maps a source distribution to a target distribution is a canonical problem in machine learning, but scientific applications increasingly require models that can generalize to source and target distributions unseen during training. We introduce distribution-conditioned transport (DCT), a framework that conditions transport maps on learned embeddings of source and target distributions, enabling generalization to unseen distribution pairs. DCT also allows semi-supervised learning for distributional forecasting problems: because it learns from arbitrary distribution pairs, it can leverage distributions observed at only one condition to improve transport prediction. DCT is agnostic to the underlying transport mechanism, supporting models ranging from flow matching to distributional divergence-based models (e.g. Wasserstein, MMD). We demonstrate the practical performance benefits of DCT on synthetic benchmarks and four applications in biology: batch effect transfer in single-cell genomics, perturbation prediction from mass cytometry data, learning clonal transcriptional dynamics in hematopoiesis, and modeling T-cell receptor sequence evolution.
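To make the conditioning idea concrete, here is a minimal numpy sketch (all names and architectural choices are hypothetical, not taken from the paper) of how a flow-matching velocity network could be conditioned on permutation-invariant embeddings of the source and target sample sets, in the spirit of DCT's framing:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(samples, W, b):
    # Hypothetical permutation-invariant set embedding: a random-feature
    # map applied per sample, then mean-pooled over the whole set.
    return np.tanh(samples @ W + b).mean(axis=0)

d, k = 2, 8  # data dimension and embedding dimension (illustrative)
W = rng.normal(size=(d, k))
b = rng.normal(size=k)

# Two paired empirical distributions, e.g. a control and a perturbed condition.
src = rng.normal(loc=0.0, size=(256, d))
tgt = rng.normal(loc=2.0, size=(256, d))

# Embed each distribution; these vectors are what the transport map is
# conditioned on, so unseen (source, target) pairs just yield new embeddings.
z_src = embed(src, W, b)
z_tgt = embed(tgt, W, b)

# One conditional flow-matching training example: interpolate a coupled
# sample pair and regress the constant velocity (x1 - x0), with the two
# distribution embeddings appended to the network input.
x0, x1 = src[0], tgt[0]
t = rng.uniform()
x_t = (1 - t) * x0 + t * x1
v_target = x1 - x0
net_input = np.concatenate([x_t, [t], z_src, z_tgt])  # fed to a velocity net

assert net_input.shape == (d + 1 + 2 * k,)
```

Because the embedding is mean-pooled over samples, it is invariant to the ordering of the set, which is what lets a single network amortize over many distribution pairs rather than fitting one map per pair.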
Community
Introducing distribution-conditioned transport (DCT): a framework that generalizes transport maps to unseen distribution pairs via distribution embeddings, enables semi-supervised distributional forecasting, and remains compatible with diverse transport models such as flow matching and Wasserstein-based objectives.
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Shortest-Path Flow Matching with Mixture-Conditioned Bases for OOD Generalization to Unseen Conditions (2026)
- MapPFN: Learning Causal Perturbation Maps in Context (2026)
- Better Source, Better Flow: Learning Condition-Dependent Source Distribution for Flow Matching (2026)
- Training-Free Distribution Adaptation for Diffusion Models via Maximum Mean Discrepancy Guidance (2026)
- Transfer Learning Through Conditional Quantile Matching (2026)
- Causally-Aware Information Bottleneck for Domain Adaptation (2026)
- SOTAlign: Semi-Supervised Alignment of Unimodal Vision and Language Models via Optimal Transport (2026)