Model Card for Switch Generation
This model implements Switch Generation, a novel approach presented in the paper Don't Throw Away Your Pretrained Model. Switch Generation gets the best of both worlds by letting pretrained and aligned model versions "speak" in turns within a single response. The method addresses the tradeoffs of alignment training through model collaboration: a "switcher LM" dynamically guides different model checkpoints to generate the segments where their strengths are most needed. Extensive experiments show that Switch Generation consistently outperforms individual models and baselines, discovering compositional skills and reusing by-products of expensive training pipelines.
Model Details
Model Description
Switch Generation is a model collaboration framework designed to overcome the limitations of alignment training, which can lead to losses in skills like creativity and calibration, where unaligned base models often excel. The core idea is to train a "switcher LM" that learns to choose between different models (e.g., a pretrained base model and an aligned version) to generate the next segment of text. This dynamic switching allows the system to harness the unique strengths of each participating model, leading to improved performance across tasks requiring diverse skills such as reasoning, instruction following, creativity, and calibration. It generalizes to unseen models and tasks by effectively repurposing existing model assets.
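To make the control flow concrete, below is a minimal, illustrative sketch of segment-level switching in Python. It is not the repository's implementation: the real segment length, switcher prompt, and model loading live in main_generate.py, and the candidate and switcher callables here are placeholders.

```python
# Illustrative sketch of Switch Generation's control flow (not the repo's code).
# Assumptions: `candidates` are stand-ins for real checkpoints' generate() calls,
# and `choose` stands in for the switcher LM that picks who speaks next.
from typing import Callable, Dict, List

Candidate = Callable[[str], str]  # context -> next segment


def switch_generate(
    prompt: str,
    candidates: Dict[str, Candidate],
    choose: Callable[[str, List[str]], str],  # (context, candidate names) -> chosen name
    max_segments: int = 8,
) -> str:
    text = prompt
    for _ in range(max_segments):
        name = choose(text, list(candidates))  # switcher LM decides who generates next
        segment = candidates[name](text)       # chosen checkpoint produces the segment
        if not segment:                        # stop when the chosen model is done
            break
        text += segment
    return text


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real usage wires in the
    # P, F, and A checkpoints that main_generate.py places on separate GPUs.
    toy = {
        "pretrained": lambda ctx: " <creative continuation>",
        "aligned": lambda ctx: " <instruction-following continuation>",
    }
    order = iter(["aligned", "pretrained", "aligned"])
    print(switch_generate("Write a short poem about GPUs.", toy,
                          choose=lambda ctx, names: next(order, "aligned"),
                          max_segments=3))
```

In the actual system, `choose` is the trained switcher LM and the candidates are full model checkpoints; this sketch only shows how their turns are interleaved.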
- Developed by: [More Information Needed]
- Model type: Switcher Language Model (LoRA adapter for Causal LM orchestration; see the loading sketch below)
- Language(s) (NLP): English
- License: other
- Finetuned from model: allenai/Llama-3.1-Tulu-3-8B
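If you want to inspect the switcher checkpoint outside of main_generate.py, the fields above suggest it is a LoRA adapter on top of allenai/Llama-3.1-Tulu-3-8B. The snippet below is a loading sketch under that assumption only; in normal use, main_generate.py loads the switcher for you via --overide_selector_path.

```python
# Sketch: loading the switcher checkpoint directly with PEFT.
# Assumption: bunsenfeng/PFA_switcher_1 is a LoRA adapter on top of
# allenai/Llama-3.1-Tulu-3-8B, as the "Model type" and "Finetuned from" fields suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "allenai/Llama-3.1-Tulu-3-8B"
adapter_id = "bunsenfeng/PFA_switcher_1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
switcher = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
switcher.eval()
```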
Model Sources
- Repository: https://github.com/BunsenFeng/switch_generation
- Paper: Don't Throw Away Your Pretrained Model
Uses
Direct Use
The Switch Generation framework is intended for text generation tasks where combining the strengths of different language models (e.g., aligned for instruction following and unaligned for creativity) can lead to superior and more balanced responses. It is designed to orchestrate the generation process by dynamically selecting the most suitable underlying model for each segment.
Out-of-Scope Use
This model is not intended for standalone direct text generation without the orchestrated collaboration of multiple underlying language models. It functions as a "switcher" or controller within a larger generation system. As with any language model, users should be aware of potential biases and limitations in generated content.
How to Get Started with the Model
Use the code below to get started with the model.
Quick Start
Initialization
Create a conda environment for Switch Generation
conda env create -f switch.yml
conda activate switch_generation
Log in to Hugging Face (for model access).
huggingface-cli login
Execute your first Switch Generation inference
bash main.sh
main.sh by default contains:
python main_generate.py \
    --input data/input_sample.jsonl \
    --gpu_ids 0,1,2,3 \
    --overide_selector_path bunsenfeng/PFA_switcher_1 \
    --total_max_length 256
--input: a JSONL file of inputs; see data/input_sample.jsonl for an example of how to prepare your custom inputs. The output is written to the same directory, here data/input_sample_switch_generation.jsonl (a reading sketch follows this list).
--gpu_ids: a comma-separated string of GPU IDs; 4 GPUs are needed (one each for the P, F, and A models plus one for the switcher).
--overide_selector_path: Hugging Face path of the switcher LM. We provide bunsenfeng/PFA_switcher_1 and bunsenfeng/PFA_switcher_2, which differ in task and training exposure; you can also try the aligned model itself, allenai/Llama-3.1-Tulu-3-8B, or any model that can follow instructions.
--total_max_length: essentially max_new_tokens.
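After bash main.sh finishes, the generations are written next to the input file. Here is a minimal sketch for inspecting them; the output field names are whatever main_generate.py emits, so this just pretty-prints each record.

```python
# Sketch: inspect the output written next to the input file.
# Field names are whatever main_generate.py emits, so we just print each record.
import json

with open("data/input_sample_switch_generation.jsonl") as f:
    for line in f:
        record = json.loads(line)
        print(json.dumps(record, indent=2))
```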
Other Settings
Your own data: format it like data/input_sample.jsonl (a preparation sketch follows this list).
Your own candidate models: change lines 46-48 in main_generate.py. Make sure --gpu_ids provides (n+1) GPU IDs, where n is the number of candidate models; you are not limited to 3 models. Another recommended set: ["Qwen/Qwen2.5-7B", "bunsenfeng/yuru_qw_oasst1", "Qwen/Qwen2.5-7B-Instruct"], where the middle one is an SFT model we made here.
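For custom data, a minimal sketch that mirrors whatever schema data/input_sample.jsonl uses; the QUERY_FIELD name below is an assumption, so replace it with the key you see printed from the sample.

```python
# Sketch: build a custom input file with the same schema as the provided sample.
# We read one sample record to learn its keys instead of assuming field names.
import json

# 1) Learn the expected fields from the shipped example.
with open("data/input_sample.jsonl") as f:
    sample = json.loads(f.readline())
print("expected fields:", list(sample.keys()))

# 2) Write your own records with the same keys. QUERY_FIELD is hypothetical --
#    replace it with the key that actually holds the query in the sample.
QUERY_FIELD = "instruction"  # assumption, not confirmed by the repo

my_prompts = ["Write a haiku about GPUs.", "Explain LoRA adapters in one paragraph."]
with open("data/my_inputs.jsonl", "w") as f:
    for prompt in my_prompts:
        record = dict(sample, **{QUERY_FIELD: prompt})  # copy schema, swap in your prompt
        f.write(json.dumps(record) + "\n")
```

Then point the run at your file with --input data/my_inputs.jsonl.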
Training Details
Training Data
[More Information Needed]
Training Procedure
The switcher LM is trained by learning from outcomes of choosing different models to generate the next segment across diverse queries and contexts. At inference time, the switcher LM guides different model checkpoints to dynamically generate the next segment where their strengths are most needed.
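As an illustration only, here is one way such supervision could be assembled: for each context, every candidate proposes the next segment, an outcome signal scores the results, and the winning candidate becomes the label the switcher learns to predict. The segmentation scheme, scoring signal, and training objective used in the paper are not specified here, so treat this as a hedged sketch rather than the actual recipe.

```python
# Hedged sketch of assembling switcher training examples from "outcomes of
# choosing different models" -- not the paper's actual data pipeline.
from typing import Callable, Dict, List

Candidate = Callable[[str], str]      # context -> candidate next segment
Scorer = Callable[[str, str], float]  # (context, continuation) -> outcome quality


def build_switcher_examples(
    contexts: List[str],
    candidates: Dict[str, Candidate],
    score: Scorer,
) -> List[dict]:
    examples = []
    for ctx in contexts:
        # Let every candidate propose the next segment, then record which one
        # led to the best outcome; the switcher is later trained to predict it.
        outcomes = {name: score(ctx, gen(ctx)) for name, gen in candidates.items()}
        best = max(outcomes, key=outcomes.get)
        examples.append({"context": ctx, "label": best, "scores": outcomes})
    return examples


if __name__ == "__main__":
    # Toy candidates and a placeholder outcome signal, just to show the shape of the data.
    toy_candidates = {
        "base": lambda ctx: ctx + " ...",
        "aligned": lambda ctx: ctx + " Sure, here is an answer.",
    }

    def toy_score(ctx: str, continuation: str) -> float:
        return float(len(continuation))  # placeholder; the real signal is task-dependent

    print(build_switcher_examples(["Explain entropy briefly."], toy_candidates, toy_score))
```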
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
- Training regime: [More Information Needed]
Speeds, Sizes, Times [optional]
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation
If Switch Generation is helpful to you, please cite:
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]