Estimating Time Series Foundation Model Transferability via In-Context Learning
Abstract
TimeTic is a transferability estimation framework that predicts the performance of time series foundation models after fine-tuning on unseen datasets, using tabular foundation models as in-context learners and entropy evolution across layers for model characterization.
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training, yet fine-tuning remains critical for boosting performance in domains with limited public data. With the growing number of TSFMs, efficiently identifying the best model for downstream fine-tuning becomes increasingly challenging. In this work, we introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem: given observations on known (source) datasets, it predicts how a TSFM will perform after fine-tuning on a downstream (target) dataset. TimeTic flexibly organizes the observed model-data relationships as contextual information, allowing it to adapt seamlessly to various test-time scenarios. Leveraging the natural tabular structure formed by dataset meta-features, model characteristics, and fine-tuned performance, we employ tabular foundation models as in-context learners. We further introduce a novel model characterization based on entropy evolution across model layers, capturing embedding-space distinctions and enabling TimeTic to generalize across arbitrary model sets. We establish a comprehensive benchmark for transferability estimation comprising 10 datasets, 10 foundation models, and 3 forecasting tasks. On this benchmark, TimeTic's estimates align strongly with actual fine-tuned performance on previously unseen datasets, achieving a mean rank correlation of approximately 0.6 and a 30% improvement over using zero-shot performance as the transferability score.
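The tabular framing suggests a compact implementation sketch. The snippet below illustrates the core idea under stated assumptions: the feature columns and values are hypothetical placeholders (not the paper's actual meta-features), and TabPFN's scikit-learn-style `TabPFNRegressor` stands in for the tabular foundation model; any in-context tabular regressor with `fit`/`predict` would slot in the same way.

```python
# Minimal sketch of TimeTic-style in-context transferability estimation.
# Assumptions: feature names/values are illustrative placeholders, and
# TabPFNRegressor (pip install tabpfn) stands in for the tabular foundation
# model; any scikit-learn-style regressor with fit/predict would work here.
import numpy as np
from tabpfn import TabPFNRegressor  # assumed tabular in-context learner

# Context table: one row per observed (source dataset, model) pair.
# Columns: dataset meta-features + model characterization (e.g., the
# layer-wise entropy profile); target: fine-tuned performance (e.g., MSE).
X_context = np.array([
    # [seasonality_strength, series_length, entropy_layer_1, entropy_layer_L]
    [0.8, 10_000, 1.2, 0.9],
    [0.3, 2_000, 1.5, 1.1],
    [0.6, 50_000, 0.7, 0.6],
])  # placeholder meta-features / model features
y_context = np.array([0.21, 0.35, 0.18])  # observed fine-tuned scores (placeholder)

# Query rows: the unseen target dataset paired with each candidate model.
X_query = np.array([
    [0.5, 8_000, 1.2, 0.9],   # target dataset + model A's characterization
    [0.5, 8_000, 0.7, 0.6],   # target dataset + model B's characterization
])

# In-context prediction: "fit" only conditions on the context table;
# no gradient-based training of the estimator is performed.
estimator = TabPFNRegressor()
estimator.fit(X_context, y_context)
predicted_scores = estimator.predict(X_query)
best_model = int(np.argmin(predicted_scores))  # lower forecast error is better
print(predicted_scores, "-> pick candidate", best_model)
```

Because the estimator is itself an in-context learner, a new source observation or a new candidate model is just another row in the context table; no retraining of the estimator is needed, which is what allows adaptation to different test-time scenarios.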
Community
How can we select the best pretrained time series foundation model (TSFM) for a given transfer scenario?
We introduce TimeTic, a transferability estimation framework that predicts how a TSFM will perform after fine-tuning—without the need for additional training.
In-context transferability estimation: TimeTic leverages tabular foundation models as in-context learners to infer fine-tuned performance from prior observations, adapting flexibly to diverse scenarios (e.g., unseen models or unseen datasets).
Novel model characterization: We represent models by their entropy evolution across layers, capturing embedding-space distinctions and how they relate to transfer performance (sketched below).
Unified fine-tuning framework: Our implementation supports multiple models and tasks, enabling systematic benchmarking and further exploration of TSFM transfer in the community.
On a large-scale benchmark (10 datasets, 10 models, 3 forecasting tasks), TimeTic achieves strong alignment with actual fine-tuned performance, improving transferability estimation by ~30% compared to using zero-shot performance alone.
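The entropy-evolution characterization can be sketched as follows. The paper's exact entropy definition is not reproduced here; as a plausible stand-in, this sketch uses the spectral entropy of each layer's embedding covariance, and the probe batch and all shapes are illustrative placeholders.

```python
# Sketch of a layer-wise entropy profile for model characterization.
# The paper's exact entropy definition is not given here; as a stand-in we
# use the spectral (eigenvalue) entropy of each layer's embedding covariance,
# one common way to quantify how "spread out" representations are.
import numpy as np

def spectral_entropy(embeddings: np.ndarray) -> float:
    """Shannon entropy of the normalized covariance spectrum of (n, d) embeddings."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(embeddings) - 1, 1)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eigvals / eigvals.sum()          # eigenvalues as a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_evolution(hidden_states: list[np.ndarray]) -> np.ndarray:
    """Entropy profile across layers; hidden_states[l] has shape (n_tokens, d)."""
    return np.array([spectral_entropy(h) for h in hidden_states])

# Usage: run a probe batch through a TSFM, collect per-layer hidden states
# (placeholder shapes below), and use the profile as the model's features.
rng = np.random.default_rng(0)
hidden_states = [rng.normal(size=(256, 64)) for _ in range(6)]  # placeholder
profile = entropy_evolution(hidden_states)
print(profile)  # one entropy value per layer -> model characterization vector
```

The resulting per-layer profile serves as the model's feature vector in the context table above; rank agreement between estimated and actual fine-tuned scores can then be checked with, e.g., `scipy.stats.spearmanr`, matching the rank-correlation metric reported above.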
Discussions are welcome 👋🏻
This is an automated message from the Librarian Bot. I found the following similar papers, recommended by the Semantic Scholar API:
- One-Embedding-Fits-All: Efficient Zero-Shot Time Series Forecasting by a Model Zoo (2025)
- Adapting LLMs to Time Series Forecasting via Temporal Heterogeneity Modeling and Semantic Alignment (2025)
- UniCast: A Unified Multimodal Prompting Framework for Time Series Forecasting (2025)
- Super-Linear: A Lightweight Pretrained Mixture of Linear Experts for Time Series Forecasting (2025)
- QuiZSF: An efficient data-model interaction framework for zero-shot time-series forecasting (2025)
- Kairos: Towards Adaptive and Generalizable Time Series Foundation Models (2025)
- MOMEMTO: Patch-based Memory Gate Model in Time Series Foundation Model (2025)