MIST: Mutual Information Via Supervised Training
Abstract
A data-driven neural network approach estimates mutual information using a meta-dataset of synthetic distributions, offering flexibility, efficiency, and uncertainty quantification.
We propose a fully data-driven approach to designing mutual information (MI) estimators. Since any MI estimator is a function of the observed sample from two random variables, we parameterize this function with a neural network (MIST) and train it end-to-end to predict MI values. Training is performed on a large meta-dataset of 625,000 synthetic joint distributions with known ground-truth MI. To handle variable sample sizes and dimensions, we employ a two-dimensional attention scheme ensuring permutation invariance across input samples. To quantify uncertainty, we optimize a quantile regression loss, enabling the estimator to approximate the sampling distribution of MI rather than return a single point estimate. This research program departs from prior work by taking a fully empirical route, trading universal theoretical guarantees for flexibility and efficiency. Empirically, the learned estimators largely outperform classical baselines across sample sizes and dimensions, including on joint distributions unseen during training. The resulting quantile-based intervals are well-calibrated and more reliable than bootstrap-based confidence intervals, while inference is orders of magnitude faster than existing neural baselines. Beyond immediate empirical gains, this framework yields trainable, fully differentiable estimators that can be embedded into larger learning pipelines. Moreover, exploiting MI's invariance to invertible transformations, meta-datasets can be adapted to arbitrary data modalities via normalizing flows, enabling flexible training for diverse target meta-distributions.
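To make the training recipe described above concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the `SetEstimator` below is a simple DeepSets-style stand-in for the paper's two-dimensional attention encoder, the bivariate Gaussian tasks (with closed-form MI of -0.5·log(1 − ρ²)) stand in for the full 625,000-distribution meta-dataset, and all names and hyperparameters are illustrative assumptions. What it does show is the core idea: sample synthetic distributions with known MI and fit predicted MI quantiles with the pinball (quantile-regression) loss.

```python
import torch
from torch import nn

def pinball_loss(pred_quantiles, target, taus):
    """Quantile-regression (pinball) loss.

    pred_quantiles: (batch, n_quantiles) predicted MI quantiles
    target:         (batch,) ground-truth MI for each sampled distribution
    taus:           (n_quantiles,) quantile levels, e.g. [0.05, 0.5, 0.95]
    """
    diff = target.unsqueeze(1) - pred_quantiles
    return torch.maximum(taus * diff, (taus - 1.0) * diff).mean()

def sample_gaussian_task(n_samples, rho):
    """Toy meta-dataset entry: a bivariate Gaussian with correlation rho,
    whose ground-truth MI is known in closed form: -0.5 * log(1 - rho**2)."""
    cov = torch.tensor([[1.0, rho], [rho, 1.0]])
    xy = torch.distributions.MultivariateNormal(torch.zeros(2), cov).sample((n_samples,))
    mi = -0.5 * torch.log(torch.tensor(1.0 - rho ** 2))
    return xy[:, :1], xy[:, 1:], mi

class SetEstimator(nn.Module):
    """DeepSets-style stand-in (assumption) for MIST's two-dimensional
    attention encoder: permutation invariant over the input samples."""
    def __init__(self, n_quantiles=3, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_quantiles))

    def forward(self, x, y):
        # x, y: (n_samples, 1); embed each (x_i, y_i) pair, mean-pool over the set
        h = self.phi(torch.cat([x, y], dim=-1)).mean(dim=0)
        return self.head(h)  # (n_quantiles,) predicted MI quantiles

taus = torch.tensor([0.05, 0.5, 0.95])
model = SetEstimator(n_quantiles=len(taus))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    rho = float(torch.empty(1).uniform_(-0.9, 0.9))   # draw a random "task"
    x, y, mi = sample_gaussian_task(n_samples=256, rho=rho)
    pred = model(x, y).unsqueeze(0)                   # (1, n_quantiles)
    loss = pinball_loss(pred, mi.unsqueeze(0), taus)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because MI is invariant under invertible transformations applied separately to each variable, synthetic samples like these could in principle be pushed through normalizing flows to match other data modalities without changing the ground-truth labels, which is the adaptation mechanism the abstract alludes to.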
Community
TL;DR: MIST is a fast, differentiable, neural mutual information estimator trained on synthetic data that outperforms baselines and provides well-calibrated uncertainty intervals.
paper: https://arxiv.org/abs/2511.18945
github: https://github.com/grgera/mist
dataset: zenodo.org/records/17599669
The following similar papers were recommended automatically by Librarian Bot via the Semantic Scholar API:
- Neural Mutual Information Estimation with Vector Copulas (2025)
- FMMI: Flow Matching Mutual Information Estimation (2025)
- Connecting Jensen-Shannon and Kullback-Leibler Divergences: A New Bound for Representation Learning (2025)
- From Kernels to Attention: A Transformer Framework for Density and Score Estimation (2025)
- Contrastive Predictive Coding Done Right for Mutual Information Estimation (2025)
- MINERVA: Mutual Information Neural Estimation for Supervised Feature Selection (2025)
- Partial Information Decomposition via Normalizing Flows in Latent Gaussian Distributions (2025)