Boomerang Distillation Collection
Distilled models from the boomerang distillation paper (https://arxiv.org/abs/2510.05064).
Boomerang distillation is a phenomenon in LLMs where we can distill a teacher model into a student and then reincorporate teacher layers to create intermediate-sized models with no additional training. This is the student model from our paper, distilled from Pythia-2.8B.
This model was initialized from Pythia-2.8B by copying every other layer and the last 2 layers. It was then distilled on 2.1B tokens of the deduplicated Pile using cross-entropy, KL, and cosine losses to match the activations of Pythia-2.8B. See our paper for the full set of distillation hyperparameters.
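For illustration, the sketch below shows one way the layer-copying initialization described above could be reproduced with Hugging Face transformers. It is a hypothetical reconstruction: the exact layer indices and implementation details used in the paper may differ.

import torch
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

# Hypothetical sketch of the initialization: keep every other teacher layer
# plus the final two (the exact indices used in the paper may differ).
teacher = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-2.8b", torch_dtype=torch.bfloat16
)
n_layers = teacher.config.num_hidden_layers  # 32 for Pythia-2.8B
keep = sorted(set(range(0, n_layers, 2)) | {n_layers - 2, n_layers - 1})

student_config = GPTNeoXConfig.from_pretrained("EleutherAI/pythia-2.8b")
student_config.num_hidden_layers = len(keep)
student = GPTNeoXForCausalLM(student_config).to(torch.bfloat16)

# Copy embeddings, final layer norm, and LM head from the teacher.
student.gpt_neox.embed_in.load_state_dict(teacher.gpt_neox.embed_in.state_dict())
student.gpt_neox.final_layer_norm.load_state_dict(
    teacher.gpt_neox.final_layer_norm.state_dict()
)
student.embed_out.load_state_dict(teacher.embed_out.state_dict())

# Copy the selected teacher layers into the student, preserving their order.
for s_idx, t_idx in enumerate(keep):
    student.gpt_neox.layers[s_idx].load_state_dict(
        teacher.gpt_neox.layers[t_idx].state_dict()
    )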
To interpolate between this model and Pythia-2.8B, please use the build_intermediate_model function from our GitHub repository:
import torch
from patching.patch import build_intermediate_model

# Patch 2 student layers with the corresponding teacher layers to build an
# intermediate-sized model between the 1.6B student and the 2.8B teacher.
intermediate_model = build_intermediate_model(
    teacher_name_or_path="EleutherAI/pythia-2.8b",
    student_name_or_path="Harvard-DCML/boomerang-pythia-1.6B",
    num_layers_to_patch=2,
    patch_first_k_layers=False,
    dtype=torch.bfloat16,
)
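The returned model can then be used like any other causal language model. As a sketch (assuming build_intermediate_model returns a standard Hugging Face model compatible with the Pythia tokenizer):

from transformers import AutoTokenizer

# Generate with the interpolated model using the teacher's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
inputs = tokenizer("Boomerang distillation is", return_tensors="pt")
with torch.no_grad():
    outputs = intermediate_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))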
Notes:
- num_layers_to_patch changes the size of the intermediate model by patching different numbers of student layers (see the sketch below).
- patch_first_k_layers should be set to False for this model for optimal interpolation performance.
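As a sketch of how num_layers_to_patch controls the intermediate model size (assuming the build_intermediate_model signature shown above; the valid range of values depends on the number of student layers):

# Sweep num_layers_to_patch to obtain a family of intermediate-sized models
# and report their parameter counts.
for n in (2, 4, 6):
    model = build_intermediate_model(
        teacher_name_or_path="EleutherAI/pythia-2.8b",
        student_name_or_path="Harvard-DCML/boomerang-pythia-1.6B",
        num_layers_to_patch=n,
        patch_first_k_layers=False,
        dtype=torch.bfloat16,
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"num_layers_to_patch={n}: {n_params / 1e9:.2f}B parameters")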
Citation:
@article{kangaslahti2025boomerang,
title={Boomerang Distillation Enables Zero-Shot Model Size Interpolation},
author={Kangaslahti, Sara and Nayak, Nihal V and Geuter, Jonathan and Fumero, Marco and Locatello, Francesco and Alvarez-Melis, David},
journal={arXiv preprint arXiv:2510.05064},
year={2025},
url={https://arxiv.org/abs/2510.05064}
}