Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with dphn/Dolphin3.0-Llama3.2-1B as the base model.
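For intuition, here is a minimal sketch of the Model Stock rule as described in the paper, applied independently to each weight tensor. This is an illustration only, not mergekit's actual implementation; `model_stock_merge` and its arguments are hypothetical names.

```python
# Sketch of the Model Stock rule for one weight tensor (arXiv:2403.19522).
# Illustrative only -- not mergekit's implementation; names are hypothetical.
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    if len(finetuned) < 2:
        raise ValueError("Model Stock needs at least two fine-tuned models")
    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [(w - base).flatten() for w in finetuned]
    n = len(deltas)
    # cos(theta): average pairwise cosine similarity between task vectors.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cos_theta = torch.stack([
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i, j in pairs
    ]).mean()
    # Closed-form interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    # Merged layer: the fine-tuned average, pulled back toward the base by (1 - t).
    return t * torch.stack(finetuned).mean(dim=0) + (1 - t) * base
```

The more the fine-tuned task vectors disagree (smaller cos θ), the smaller t becomes and the closer the merged weights stay to the base model.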
The following models were included in the merge:
- marcuscedricridia/badllama3.2-1B
- CodeAtCMU/Llama-3.2-1B-GenerativePerturbations_full_sft_code_data_120K_imaginary
- syvai/emotion-reasoning-1b
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: marcuscedricridia/badllama3.2-1B
  - model: CodeAtCMU/Llama-3.2-1B-GenerativePerturbations_full_sft_code_data_120K_imaginary
  - model: syvai/emotion-reasoning-1b
merge_method: model_stock
base_model: dphn/Dolphin3.0-Llama3.2-1B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
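Assuming mergekit is installed, a config like the one above can be run with the `mergekit-yaml` CLI (`mergekit-yaml config.yml ./merged`). The sketch below shows the equivalent call through mergekit's documented Python entry point; the paths are placeholders.

```python
# Sketch based on mergekit's documented Python entry point; paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge configuration shown above.
with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the result to the output directory.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(copy_tokenizer=True),
)
```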