arXiv:2509.23695

Estimating Time Series Foundation Model Transferability via In-Context Learning

Published on Sep 28 · Submitted by Qingren Yao on Oct 1
Abstract

AI-generated summary

TimeTic is a transferability estimation framework that predicts the performance of time series foundation models after fine-tuning on unseen datasets, using tabular foundation models and entropy evolution for model characterization.

Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training, yet fine-tuning remains critical for boosting performance in domains with limited public data. With the growing number of TSFMs, efficiently identifying the best model for downstream fine-tuning becomes increasingly challenging. In this work, we introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem: given observations on known (source) datasets, it predicts how a TSFM will perform after fine-tuning on a downstream (target) dataset. TimeTic flexibly organizes the observed model-data relationships as contextual information, allowing it to adapt seamlessly to various test-time scenarios. Leveraging the natural tabular structure formed by dataset meta-features, model characteristics, and fine-tuned performance, we employ tabular foundation models to serve as in-context learners. We further introduce a novel model characterization based on entropy evolution across model layers, capturing embedding-space distinctions and enabling TimeTic to generalize across arbitrary model sets. We establish a comprehensive benchmark for transferability estimation including 10 datasets, 10 foundation models, and 3 forecasting tasks. On this benchmark, TimeTic's estimation demonstrates strong alignment with actual fine-tuned performance for previously unseen datasets, achieving a mean rank correlation of approximately 0.6 and a 30% improvement compared to using zero-shot performance as the transferability score.
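
To make the in-context formulation concrete, here is a minimal, hypothetical sketch: observed (dataset meta-features, model characteristics, fine-tuned performance) rows form the context table, and the target dataset's rows are the queries. The abstract does not name a specific tabular foundation model or feature set; TabPFN's regressor interface and every dimension below are illustrative assumptions.

```python
# Hypothetical sketch: transferability estimation as in-context learning.
# Each context row = dataset meta-features + a model-characterization vector,
# labeled with the observed fine-tuned error on that (dataset, model) pair.
# Query rows pair the unseen target dataset with each candidate model.
# TabPFN stands in for "a tabular foundation model"; the paper does not
# prescribe this package, and all feature dimensions here are made up.
import numpy as np
from tabpfn import TabPFNRegressor  # assumption: tabpfn v2 regressor API

rng = np.random.default_rng(0)

n_models = 10
X_context = rng.normal(size=(9 * n_models, 16))   # 9 source datasets x 10 models
y_context = rng.normal(size=9 * n_models)         # fine-tuned forecast error (e.g., MAE)
X_query = rng.normal(size=(n_models, 16))         # target dataset x 10 models

est = TabPFNRegressor()
est.fit(X_context, y_context)      # in-context conditioning, no gradient training
scores = est.predict(X_query)      # estimated post-fine-tuning error per model

best = int(np.argmin(scores))      # lower predicted error = better transfer
print(f"Fine-tune model #{best} on the target dataset")
```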

Community

Paper author · Paper submitter

How can we select the best pretrained time series foundation model (TSFM) for a given transfer scenario?

We introduce TimeTic, a transferability estimation framework that predicts how a TSFM will perform after fine-tuning, without requiring any additional training.

  • In-context transferability estimation: TimeTic leverages tabular foundation models as in-context learners to infer fine-tuned performance from prior observations, adapting flexibly to diverse scenarios (e.g., unseen models or unseen datasets).

  • Novel model characterization: We represent models via their entropy evolution across layers, capturing architectural distinctions and their relation to transfer performance (see the sketch after this list).

  • Unified fine-tuning framework: Our implementation supports multiple models and tasks, enabling systematic benchmarking and further exploration of TSFM transfer in the community.
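
As a rough illustration of the entropy-evolution idea, the sketch below computes one entropy value per layer from a model's hidden states and stacks them into a signature vector. The exact estimator is not specified in this post; a Gaussian differential-entropy approximation over each layer's embeddings is assumed here purely for illustration.

```python
# Illustrative sketch: an entropy-evolution signature. Assumption: entropy of
# each layer's embeddings is approximated as Gaussian differential entropy,
# H = 0.5 * log det(2*pi*e*Sigma).
import numpy as np

def layer_entropy(h: np.ndarray, eps: float = 1e-6) -> float:
    """Entropy of embeddings h with shape (n_tokens, dim), Gaussian approx."""
    h = h - h.mean(axis=0, keepdims=True)
    cov = (h.T @ h) / max(h.shape[0] - 1, 1)
    cov += eps * np.eye(cov.shape[0])             # regularize near-singular covariances
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov)
    return 0.5 * logdet

def entropy_evolution(hidden_states: list) -> np.ndarray:
    """One entropy per layer, stacked into a fixed-length model signature."""
    return np.array([layer_entropy(h) for h in hidden_states])

# Fake hidden states for a 6-layer model: 128 tokens, 64-dim embeddings per layer.
rng = np.random.default_rng(0)
states = [rng.normal(scale=1.0 + 0.2 * layer, size=(128, 64)) for layer in range(6)]
print(entropy_evolution(states).round(2))  # this vector joins the context table
```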

On a large-scale benchmark (10 datasets, 10 models, 3 forecasting tasks), TimeTic achieves strong alignment with actual fine-tuned performance, improving transferability estimation by ~30% compared to using zero-shot performance alone.
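
The form of the headline metric is easy to reproduce: per target dataset, take the Spearman rank correlation between estimated and actual fine-tuned scores, then average across datasets. The numbers below are synthetic placeholders, not results from the paper.

```python
# Mean rank correlation between estimated and actual fine-tuned performance.
# All values are synthetic placeholders for illustration only.
import numpy as np
from scipy.stats import spearmanr

actual = {
    "target_ds_a": np.array([0.31, 0.28, 0.45, 0.33, 0.40, 0.27, 0.36, 0.30, 0.42, 0.29]),
    "target_ds_b": np.array([0.52, 0.47, 0.61, 0.49, 0.58, 0.45, 0.55, 0.50, 0.60, 0.48]),
}
estimated = {
    "target_ds_a": np.array([0.30, 0.29, 0.44, 0.35, 0.38, 0.26, 0.37, 0.32, 0.41, 0.31]),
    "target_ds_b": np.array([0.50, 0.49, 0.59, 0.51, 0.56, 0.44, 0.57, 0.52, 0.58, 0.46]),
}

rhos = [spearmanr(actual[k], estimated[k]).correlation for k in actual]
print(f"mean rank correlation: {np.mean(rhos):.2f}")
```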

Discussion is welcome 👋🏻

