⚑ LARGE WIRELESS MODELS (LWMs) 2025 CHALLENGE

The goal is to improve performance across five wireless downstream tasks by optimizing a baseline LWM and/or designing new downstream models.


Challenge Overview • Provided Materials • Getting Started • Submission Process • Tutorials • Citation • Community & Support • Team

 


📑 Large Wireless Model (LWM) Challenge

Welcome to the official repository of the LWM 2025 Challenge, a competition designed to advance foundation models for wireless communications and sensing. Participants are invited to optimize a provided baseline Large Wireless Model (LWM) and to design downstream models that tackle five core wireless tasks with limited labeled data.


🧠 About LWM

Large Wireless Model (LWM) 1.1 is a Transformer-based foundation model pre-trained with self-supervised learning on over 1 million unlabeled wireless channel samples. It generates rich, task-agnostic embeddings that significantly outperform raw channel representations on downstream tasks, especially when labeled data is scarce or noisy, or when downstream models must remain simple.
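For orientation, the intended workflow looks roughly like the sketch below. The loader name, checkpoint path, and tensor shapes are hypothetical placeholders; the actual interface is defined in pretrained_model.py.

import torch
from pretrained_model import load_model  # hypothetical loader name

lwm = load_model(checkpoint="lwm_v1.1.pth")  # hypothetical checkpoint path
lwm.eval()

# Toy batch of channel samples; the real patching/tokenization scheme
# is defined by the challenge code, not by these shapes.
channels = torch.randn(8, 128, 64)  # (batch, patches, patch_dim), illustrative

with torch.no_grad():
    embeddings = lwm(channels)  # task-agnostic features for downstream heads
print(embeddings.shape)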


🏁 Challenge Overview

Participants are given:

  • A pre-trained LWM 1.1 checkpoint
  • Baseline downstream task models
  • Training, validation, and public test sets for each task
  • Helper functions and templates

Your goal is to improve the Composite Generalization Score (CG-Score) across these five tasks:

  1. LoS/NLoS Classification – F1-score
  2. Sub-6 GHz Channel to mmWave Beam Prediction – Top-1 Beam F1-score
  3. Channel Interpolation – Normalized MSE
  4. Channel Estimation – Normalized MSE
  5. Localization – Normalized Localization Error

Final rankings are based on hidden test sets evaluated by the organizers.
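The exact CG-Score formula is defined by the organizers and is not reproduced here. The sketch below only illustrates the aggregation problem it solves: folding metrics with mixed directions (higher-is-better F1-scores versus lower-is-better NMSE and localization error) into a single number, assuming equal task weights.

def composite_score(f1_los, f1_beam, nmse_interp, nmse_est, loc_err):
    # Map lower-is-better errors onto a higher-is-better scale (illustrative).
    gains = [
        f1_los,                     # Task 1: LoS/NLoS F1-score
        f1_beam,                    # Task 2: Top-1 beam F1-score
        1.0 / (1.0 + nmse_interp),  # Task 3: interpolation NMSE
        1.0 / (1.0 + nmse_est),     # Task 4: estimation NMSE
        1.0 / (1.0 + loc_err),      # Task 5: localization error
    ]
    return sum(gains) / len(gains)  # equal weighting, for illustration only

print(composite_score(0.92, 0.85, 0.08, 0.12, 0.05))  # toy inputs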


📦 Provided Materials

This repository contains:

  • pretrained_model.py — Loads the baseline or your refined LWM model
  • train_heads.py — The main script for training and evaluating all task-specific models. It is provided as a standardized template to ensure fairness and consistency across teams and must not be modified; design your submission to align with it. The organizers will run an equivalent version of train_heads.py for final evaluation, and any deviation from the expected structure will result in automatic disqualification.
  • train_heads_config.py — Contains training configs and model head definitions
  • train_lwm.py — Script for reproducing LWM 1.1 pre-training and its dataset
  • utils.py — Helper functions (training, scoring, data handling)
  • task_{t}/ — Contains the training, validation, and public test sets for each downstream task; see the loading sketch after this list. These datasets are used for jointly fine-tuning your refined LWM and training the corresponding task-specific models. While downstream training is restricted to the provided datasets, you are free to use any dataset for LWM pre-training. Participants are granted early access to the DeepMIMO v4 dataset, which offers new, large-scale scenarios suitable for extended LWM refinement.
  • requirements.yml — Conda environment file for dependency setup
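As a quick sanity check on the data, you might load one task's splits along the following lines. The file names and .npz layout here are hypothetical; defer to the helpers in utils.py for the actual format.

from pathlib import Path
import numpy as np

def load_task_split(task_id: int, split: str) -> dict:
    # Hypothetical layout: task_{t}/{split}.npz with 'channels' and 'labels'.
    data = np.load(Path(f"task_{task_id}") / f"{split}.npz")
    return {"x": data["channels"], "y": data["labels"]}

train = load_task_split(task_id=1, split="train")
print(train["x"].shape, train["y"].shape)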

🚀 Getting Started

📥 Clone the repo

git clone https://huggingface.co/wi-lab/lwm-competition-2025
cd lwm-competition-2025

πŸ› οΈ Set up the environment

conda env create -f requirements.yml
conda activate lwm_env

🧪 Run baseline pipeline

python train_heads.py

This jointly fine-tunes the LWM and the downstream heads, evaluates them on the public test sets, and creates a submission ZIP file.
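Conceptually, each training step updates the LWM backbone and a task head together, as in the toy sketch below. All modules here are stand-ins; the real logic lives in train_heads.py, train_heads_config.py, and utils.py.

import torch
import torch.nn as nn

backbone = nn.Linear(64, 32)         # toy stand-in for the LWM encoder
head = nn.Linear(32, 2)              # toy stand-in for a LoS/NLoS head
optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()      # task-dependent in practice

channels = torch.randn(16, 64)       # toy batch of channel features
labels = torch.randint(0, 2, (16,))  # toy LoS/NLoS labels

optimizer.zero_grad()
loss = loss_fn(head(backbone(channels)), labels)
loss.backward()                      # gradients reach backbone and head alike
optimizer.step()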

🧩 Submission Process

  1. Refine your LWM or downstream heads
  2. Update pretrained_model.py, train_heads_config.py, and utils.py
  3. Run:
python train_heads.py
  4. Submit the generated ZIP file to the competition portal

🛑 Do not modify train_heads.py. You may adapt it for local development or experimentation, but your final submission must be fully compatible with the original, unmodified version provided. The evaluation script used by the organizers assumes this exact structure; any deviation may result in disqualification.
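In practice, the compliant place to change a downstream model is its definition in train_heads_config.py, not the driver script. A minimal sketch, assuming heads are plain PyTorch modules registered in the config (the class and registry names below are hypothetical):

import torch.nn as nn

class DeeperLoSHead(nn.Module):
    # Example replacement head for the LoS/NLoS classification task.
    def __init__(self, embed_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.GELU(),
            nn.Dropout(0.1),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical registration point inside train_heads_config.py:
# TASK_HEADS["los_nlos"] = DeeperLoSHead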


📚 Tutorials

Visit the official tutorials page:

👉 https://lwm-wireless.net/tutorials


🧪 Citation

If you use the LWM model or its components, please cite:

@misc{alikhani2025largewirelessmodellwm,
      title={Large Wireless Model (LWM): A Foundation Model for Wireless Channels}, 
      author={Sadjad Alikhani and Gouranga Charan and Ahmed Alkhateeb},
      year={2025},
      eprint={2411.08872},
      archivePrefix={arXiv},
      primaryClass={cs.IT},
      url={https://arxiv.org/abs/2411.08872}, 
}

👥 Community & Support


πŸ‘¨β€πŸ”¬ Team

Developed by the Wireless Intelligence Lab at Arizona State University.

Sadjad Alikhani   Gouranga Charan   Ahmed Alkhateeb
