---
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
## Model Description
Boomerang distillation is a phenomenon in LLMs where a teacher model is distilled into a smaller student, and teacher layers can then be reincorporated into the student to create intermediate-sized models with no additional training. This is the student model distilled from Pythia-2.8B in our paper.
## Training Procedure
This model was initialized from Pythia-2.8B by copying every other layer along with the last 2 layers. It was then distilled on 2.1B tokens of The Pile (deduplicated) using cross-entropy, KL-divergence, and per-layer cosine losses to match the activations of Pythia-2.8B (a sketch of this objective follows the hyperparameter list below). We used the following hyperparameters:
- Learning rate: 3e-4
- Learning rate scheduler: cosine
- Warmup ratio: 0.01
- Optimizer: AdamW
- Adam betas: (0.9, 0.95)
- Adam epsilon: 1e-8
- Weight decay: 0.1
- Max. gradient norm: 1.0
- Number of training steps: 500
- Max. sequence length: 2048
- Effective batch size: 2048
- Mixed precision: bf16
- KLDiv weight: 0.1
- Cosine distance weight per layer: 0.11
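For concreteness, here is a minimal sketch of how the three loss terms and the weights above might be combined. This is an illustrative reconstruction from the description in this card, not the actual training code; the tensor names (`student_logits`, `teacher_logits`, `student_hidden`, `teacher_hidden`, `labels`) and the layer alignment are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, labels,
                      kl_weight=0.1, cosine_weight_per_layer=0.11):
    # Standard next-token cross-entropy against the ground-truth tokens.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))
    # KL divergence between the student and teacher token distributions.
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")
    # Cosine distance between matched student/teacher hidden states,
    # summed over the aligned layers (assumed already paired up).
    cos = sum(
        (1 - F.cosine_similarity(s, t, dim=-1)).mean()
        for s, t in zip(student_hidden, teacher_hidden)
    )
    return ce + kl_weight * kl + cosine_weight_per_layer * cos
```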
## Use
To interpolate between this model and Pythia-2.8B, please use the `build_intermediate_model` function from our GitHub repository:
```python
import torch

from patching.patch import build_intermediate_model

# Reinsert 2 teacher layers into the student to build an intermediate-sized model.
intermediate_model = build_intermediate_model(
    teacher_name_or_path="EleutherAI/pythia-2.8b",
    student_name_or_path="Harvard-DCML/boomerang-pythia-1.6B",
    num_layers_to_patch=2,
    patch_first_k_layers=False,
    dtype=torch.bfloat16,
)
```
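Once built, the intermediate model can be used for inference like any other causal LM. The snippet below is a sketch that assumes the returned model follows the standard `transformers` generation API and uses the Pythia tokenizer:

```python
from transformers import AutoTokenizer

# The student shares the teacher's tokenizer (assumption for this example).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
inputs = tokenizer("Boomerang distillation is", return_tensors="pt")
outputs = intermediate_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```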
Notes:
- Changing `num_layers_to_patch` changes the size of the intermediate model by patching different numbers of student layers (see the sketch after these notes).
- `patch_first_k_layers` should be set to `False` for this model for optimal interpolation performance.
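For example, one could sweep `num_layers_to_patch` to obtain a family of intermediate sizes. The values below are hypothetical; the valid range depends on the number of student layers:

```python
# Illustrative sweep: build several intermediate models of increasing size
# by patching different numbers of layers (values chosen for illustration).
for k in (0, 2, 4, 8):
    model = build_intermediate_model(
        teacher_name_or_path="EleutherAI/pythia-2.8b",
        student_name_or_path="Harvard-DCML/boomerang-pythia-1.6B",
        num_layers_to_patch=k,
        patch_first_k_layers=False,
        dtype=torch.bfloat16,
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"num_layers_to_patch={k}: {n_params} parameters")
```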
## Citation
```bibtex
@article{kangaslahti2025boomerang,
  title={Boomerang Distillation Enables Zero-Shot Model Size Interpolation},
  author={Kangaslahti, Sara and Nayak, Nihal V and Geuter, Jonathan and Fumero, Marco and Locatello, Francesco and Alvarez-Melis, David},
  journal={arXiv preprint arXiv:2510.05064},
  year={2025},
  url={https://arxiv.org/abs/2510.05064}
}
```