LARGE WIRELESS MODELS (LWMs) 2025 CHALLENGE
The goal is to improve performance across five wireless downstream tasks by optimizing a baseline LWM and/or designing new downstream models
Challenge Overview • Provided Materials • Getting Started • Submission Process • Tutorials • Citation • Community & Support • Team
Large Wireless Model (LWM) Challenge
Welcome to the official repository of the LWM 2025 Challenge, a competition designed to advance the state of foundation models in wireless communications and sensing. Participants are invited to optimize a provided baseline Large Wireless Model (LWM) and design downstream models to tackle five core wireless tasks with limited labeled data.
About LWM
Large Wireless Model (LWM) 1.1 is a Transformer-based foundation model pre-trained with self-supervised learning on over 1 million unlabeled wireless channel samples. It generates rich, task-agnostic embeddings that significantly outperform raw channel representations on downstream tasks, especially when data is scarce or noisy, or when downstream models need to be simple.
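As a mental model, the sketch below shows the general pattern such a model follows: raw channel patches are linearly embedded into tokens and passed through a Transformer encoder to produce contextual embeddings. The `WirelessEncoder` class, patching scheme, and dimensions are illustrative stand-ins, not the actual LWM 1.1 architecture (which is loaded via `pretrained_model.py`):

```python
# Hypothetical sketch of embedding extraction with a Transformer encoder.
# Class name, patching scheme, and dimensions are illustrative stand-ins;
# the real model is loaded through pretrained_model.py.
import torch
import torch.nn as nn

class WirelessEncoder(nn.Module):
    def __init__(self, patch_dim=32, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)  # channel patch -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, patches):            # patches: (B, N, patch_dim)
        tokens = self.embed(patches)
        return self.encoder(tokens)        # embeddings: (B, N, d_model)

# A batch of 8 channels, each split into 16 patches of 32 real values.
channels = torch.randn(8, 16, 32)
embeddings = WirelessEncoder()(channels)
print(embeddings.shape)                    # torch.Size([8, 16, 64])
```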
Challenge Overview
Participants are given:
- A pre-trained LWM 1.1 checkpoint
- Baseline downstream task models
- Training, validation, and public test sets for each task
- Helper functions and templates
Your goal is to improve the Composite Generalization Score (CG-Score) across these five tasks:
- LoS/NLoS Classification β F1-score
- Sub-6 GHz Channel to mmWave Beam Prediction β Top-1 Beam F1-score
- Channel Interpolation β Normalized MSE
- Channel Estimation β Normalized MSE
- Localization β Normalized Localization Error
Final rankings are based on hidden test sets evaluated by the organizers.
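The official CG-Score is computed by the organizers' evaluation script and is not reproduced here. As a rough mental model only, it can be thought of as aggregating the five per-task metrics after orienting them so that higher is uniformly better; the sketch below shows one such illustrative (unofficial, equally weighted) aggregation:

```python
# Illustrative aggregation only; the official CG-Score is computed by the
# organizers' hidden evaluation script and may weight tasks differently.
def cg_score(f1_los, f1_beam, nmse_interp, nmse_est, nloc_err):
    """Average the five per-task metrics after flipping error metrics
    so that higher is uniformly better (errors assumed in [0, 1])."""
    oriented = [
        f1_los,             # LoS/NLoS classification (higher is better)
        f1_beam,            # beam prediction (higher is better)
        1.0 - nmse_interp,  # channel interpolation (lower NMSE is better)
        1.0 - nmse_est,     # channel estimation (lower NMSE is better)
        1.0 - nloc_err,     # localization (lower error is better)
    ]
    return sum(oriented) / len(oriented)

print(cg_score(0.92, 0.75, 0.10, 0.15, 0.20))  # 0.844
```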
Provided Materials
This repository contains:
- `pretrained_model.py` – Loads the baseline or your refined LWM model
- `train_heads.py` – The main script for training and evaluating all task-specific models. This file must not be modified. It is provided as a standardized template to ensure fairness and consistency across all teams. Participants must design their submissions to align with this script. The organizers will use an equivalent version of `train_heads.py` for final evaluation, and any deviation from the expected structure will result in automatic disqualification.
- `train_heads_config.py` – Contains training configs and model head definitions
- `train_lwm.py` – Contains the LWM 1.1 pre-training and dataset reproducibility script
- `utils.py` – Helper functions (training, scoring, data handling)
- `task_{t}/` – Contains the training, validation, and public test sets for each downstream task. These datasets are used for jointly fine-tuning your refined LWM and training the corresponding task-specific models. While downstream training is restricted to the provided datasets, you are free to use any dataset for LWM pre-training. Participants are granted early access to the DeepMIMO v4 dataset, which offers new, large-scale scenarios suitable for extended LWM refinement.
- `requirements.yml` – Conda environment file for dependency setup
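For orientation, a data split for one task might be loaded as sketched below. The file names (`{split}_channels.npy`, `{split}_labels.npy`) and array formats are assumptions for illustration only; the real loading logic lives in the provided `utils.py` helpers:

```python
# Hypothetical loading sketch; the actual file names and formats are those
# handled by the challenge's utils.py helpers, not this example.
from pathlib import Path
import numpy as np

def load_task_split(task_id: int, split: str):
    """Load one split ('train', 'val', or 'test') of a downstream task."""
    root = Path(f"task_{task_id}")
    x = np.load(root / f"{split}_channels.npy")  # assumed channel tensors
    y = np.load(root / f"{split}_labels.npy")    # assumed task labels
    return x, y

x_train, y_train = load_task_split(1, "train")
print(x_train.shape, y_train.shape)
```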
Getting Started

Clone the repo

```bash
git clone https://huggingface.co/wi-lab/lwm-competition-2025
cd lwm-competition-2025
```

Set up the environment

```bash
conda env create -f requirements.yml
conda activate lwm_env
```

Run the baseline pipeline

```bash
python train_heads.py
```
This jointly fine-tunes the LWM and trains the downstream heads, evaluates them on the public test sets, and creates a submission ZIP file.
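Conceptually, the core of this pipeline is one optimizer stepping both the pre-trained backbone and a small task head together. The following is a minimal, self-contained sketch of that joint fine-tuning pattern, using stand-in modules and random data rather than the actual LWM or the challenge datasets:

```python
# Minimal joint fine-tuning pattern with stand-in modules and random data;
# train_heads.py implements the real, standardized version of this loop.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # stands in for LWM
head = nn.Linear(64, 2)                                 # e.g. LoS/NLoS logits

# One optimizer over both parameter sets => joint fine-tuning, with a
# smaller learning rate on the pre-trained backbone.
opt = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 32)          # stand-in channel features
y = torch.randint(0, 2, (256,))   # stand-in LoS/NLoS labels
for _ in range(5):                # a few toy epochs
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```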
Submission Process

- Refine your LWM or downstream heads
- Update `pretrained_model.py`, `train_heads_config.py`, and `utils.py`
- Run `python train_heads.py`
- Submit the generated ZIP file to the competition portal
Do not modify `train_heads.py`. While you may adapt it for local development or experimentation, your final submission must be fully compatible with the original, unmodified version provided. The evaluation script used by the organizers assumes this exact structure; any deviation may result in disqualification.
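As an illustration, a custom task head of the kind registered in `train_heads_config.py` might look like the following. This is a hypothetical PyTorch module; the exact interface a head must expose is dictated by the unmodified `train_heads.py`, and the names and dimensions here are assumptions:

```python
# Hypothetical custom regression head for channel estimation; the required
# interface is whatever the unmodified train_heads.py expects.
import torch
import torch.nn as nn

class ChannelEstimationHead(nn.Module):
    """Maps LWM embeddings to estimated channel coefficients."""
    def __init__(self, embed_dim: int = 64, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, 2 * embed_dim),
            nn.GELU(),
            nn.Linear(2 * embed_dim, out_dim),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:  # (B, N, embed_dim)
        return self.net(emb.mean(dim=1))  # pool over tokens, then regress

print(ChannelEstimationHead()(torch.randn(4, 16, 64)).shape)  # (4, 128)
```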
Tutorials
Visit the official tutorials page:
https://lwm-wireless.net/tutorials
Citation
If you use the LWM model or its components, please cite:
```bibtex
@misc{alikhani2025largewirelessmodellwm,
  title={Large Wireless Model (LWM): A Foundation Model for Wireless Channels},
  author={Sadjad Alikhani and Gouranga Charan and Ahmed Alkhateeb},
  year={2025},
  eprint={2411.08872},
  archivePrefix={arXiv},
  primaryClass={cs.IT},
  url={https://arxiv.org/abs/2411.08872},
}
```
Community & Support

- Discussion Forum
- Contact: [email protected]

Team
Developed by the Wireless Intelligence Lab at Arizona State University.