Add text-generation and library name #1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,6 +1,20 @@
+---
+license: llama2
+library_name: transformers
+pipeline_tag: text-generation
+base_model:
+- unsloth/llama-2-13b
+- layoric/llama-2-13b-code-alpaca
+- vanillaOVO/WizardMath-13B-V1.0
+- WizardLMTeam/WizardLM-13B-V1.2
+tags:
+- merge
+---

---
license: llama2
+library_name: transformers
+pipeline_tag: text-generation
base_model:
- unsloth/llama-2-13b
- layoric/llama-2-13b-code-alpaca
@@ -21,3 +35,136 @@ This repository includes one of the checkpoints used in the paper "Activation-In
- **AIM:** True

Benchmark results and paper details can be found at the official [GitHub](https://github.com/ahnobari/ActivationInformedMerging.git).
# Usage

You can re-do the experiments reported here using the provided code. Below we detail how to replicate them.

## Merging Models

If you wish to merge the models yourself instead of using the provided checkpoints, you can do so with the provided `merge.py` script. For example, to perform DARE Ties merging on the Code, Math, and Instruction Tuned models, run:

```bash
python merge.py --method dare_ties --base_model unsloth/llama-2-13b --models_to_merge WizardLMTeam/WizardLM-13B-V1.2,vanillaOVO/WizardMath-13B-V1.0,layoric/llama-2-13b-code-alpaca --save_path ./DARE_TIES_InstructMathCode
```
## Evaluating Models on Benchmarks

Once you have the checkpoints you want to test, you can run the `evaluate_model.py` script to run the benchmarks on the model. For example, to run the benchmarks on the model merged above:

```bash
python evaluate_model.py --model ./DARE_TIES_InstructMathCode
```

or, if you want to use the provided checkpoints:

```bash
python evaluate_model.py --model ahn1376/DARETies___Code-Math-Instruction_Tuned
```
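Since the model card declares `library_name: transformers` and `pipeline_tag: text-generation`, the checkpoints can also be loaded directly with the `transformers` library, independently of the benchmark scripts. A minimal sketch (the prompt and generation settings are only illustrative, and `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import pipeline

# Load one of the provided merged checkpoints as a plain text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="ahn1376/DARETies___Code-Math-Instruction_Tuned",
    device_map="auto",  # requires accelerate and enough memory for a 13B model
)

out = generator(
    "Write a Python function that checks whether a number is prime.",
    max_new_tokens=128,
)
print(out[0]["generated_text"])
```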
## Applying AIM to a Merged Model

If you want to apply AIM to any merged model, you will need to provide the merged checkpoint as well as the base model checkpoint. The only hyper-parameter in AIM is $\omega$, which we recommend setting between $0.2$ and $0.6$; we set it to $0.4$ for the experiments in our paper, but in some cases lower values (more relaxation) will yield better results. Below is how you can apply AIM to the checkpoint produced by the command above:

```bash
python performAIM.py --merged_model ./DARE_TIES_InstructMathCode --pretrained_model_name unsloth/llama-2-13b --omega 0.4 --save_path ./DARE_TIES_AIM_InstructMathCode
```
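Because the best $\omega$ can vary between model populations, it can be worth sweeping a few values and benchmarking each result. A small sketch that drives the two scripts above (the value grid and output directory names are arbitrary choices, not part of the original workflow):

```python
import subprocess

# Illustrative sweep over omega; each relaxed checkpoint is then benchmarked.
for omega in (0.2, 0.3, 0.4, 0.5, 0.6):
    save_path = f"./DARE_TIES_AIM_InstructMathCode_omega{omega}"
    subprocess.run(
        [
            "python", "performAIM.py",
            "--merged_model", "./DARE_TIES_InstructMathCode",
            "--pretrained_model_name", "unsloth/llama-2-13b",
            "--omega", str(omega),
            "--save_path", save_path,
        ],
        check=True,
    )
    subprocess.run(["python", "evaluate_model.py", "--model", save_path], check=True)
```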
# Summary of Findings

We find that in essentially all merging methods we tested, applying AIM improves performance, pushes out the Pareto front of the resulting model population, and achieves the highest scores on the benchmarks. The figure below shows how decreasing $\omega$ (more AIM relaxation) leads to further improvements in some models (HV gain is the hypervolume gained by adding the model to the population of models used for merging; more is better):

<img width="600px" alt="Screenshot 2025-02-04 at 10 15 38 AM" src="https://github.com/user-attachments/assets/5cd5119e-a292-45d4-972f-b2dd6febf6f8" />
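For intuition about the HV gain metric, here is a toy two-objective sketch of how such a gain can be computed. The scores are MBPP/GSM8K numbers taken from the tables below purely for illustration; the paper's actual metric is computed over all six benchmarks with its own implementation, and the reference point here is an assumption.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by 2-objective points (maximization) w.r.t. a reference point."""
    pts = np.asarray(points, dtype=float)
    pts = pts[(pts[:, 0] > ref[0]) & (pts[:, 1] > ref[1])]  # keep points that dominate ref
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(-pts[:, 0])]  # sweep from the largest first objective downward
    hv, cur_y = 0.0, ref[1]
    for x, y in pts:
        if y > cur_y:  # non-dominated point: add its new rectangular slab
            hv += (x - ref[0]) * (y - cur_y)
            cur_y = y
    return hv

# Illustrative (MBPP, GSM8K) scores: population = Code, Instruction Tuned, Math base models;
# candidate = a merged model. HV gain is the hypervolume added by the candidate.
population = [(31.6, 24.1), (34.8, 43.4), (27.6, 59.1)]
candidate = (36.0, 46.2)
ref = (0.0, 0.0)

hv_gain = hypervolume_2d(population + [candidate], ref) - hypervolume_2d(population, ref)
print(f"HV gain from adding the candidate: {hv_gain:.1f}")
```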
We can observe this better by visualizing some of the Pareto fronts for different model populations:

<img width="100%" alt="Screenshot 2025-02-04 at 10 22 25 AM" src="https://github.com/user-attachments/assets/5d88a71e-16ca-4f71-84f7-6e8de96ea69a" />
Overall, the results of our experiments are as follows for the different tests:

## Base Models

| Method | Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|--------|----------|-----|-----------|------|------|------|-------|--------|---------|
| - | Base | - | 17.07 | 27.80 | 52.18 | 0.70 | 4.20 | 25.10 | - |
| - | Code | - | 17.07 | 31.60 | 52.91 | 6.00 | 24.10 | 26.25 | - |
| - | Instruction Tuned | - | **26.83** | **34.80** | **53.41** | 7.50 | 43.40 | **35.67** | - |
| - | Math | - | 15.24 | 27.60 | 51.89 | **13.10** | **59.10** | 21.58 | - |
## Merged Models

### DARE Task Arithmetic

| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|----------|-----|-----------|------|------|------|-------|--------|---------|
| Code + Instruction Tuned | No | 26.83 | 34.40 | 53.53 | 8.40 | 45.80 | 33.42 | 0.27 |
| | Yes | 29.27 (+9.09%) | 36.00 (+4.65%) | 54.18 (+1.21%) | 8.30 (-1.19%) | 46.20 (+0.87%) | 32.00 (-4.25%) | 0.28 (+2.49%) |
| Code + Math | No | 16.46 | 28.60 | 51.96 | 15.10 | 64.70 | 22.02 | 0.23 |
| | Yes | 15.85 (-3.71%) | 29.60 (+3.50%) | 52.50 (+1.04%) | 14.80 (-1.99%) | 64.10 (-0.93%) | 21.91 (-0.50%) | 0.23 (-1.65%) |
| Instruction Tuned + Math | No | 5.49 | 19.00 | 51.08 | 9.80 | 54.30 | 32.35 | 0.18 |
| | Yes | 12.20 (+122.22%) | 28.20 (+48.42%) | 52.72 (+3.21%) | 12.90 (+31.63%) | 62.20 (+14.55%) | 31.96 (-1.21%) | 0.26 (+40.71%) |
| Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 50.89 | 9.10 | 49.70 | 33.20 | 0.16 |
| | Yes | 15.85 (+36.76%) | 27.00 (+37.76%) | 52.59 (+3.34%) | 12.20 (+34.07%) | 60.70 (+22.13%) | 33.59 (+1.17%) | 0.23 (+40.59%) |
### DARE Ties

| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|----------|-----|-----------|------|------|------|-------|--------|---------|
| Code + Instruction Tuned | No | 30.49 | 35.20 | 53.40 | 8.60 | 46.20 | 33.28 | 0.28 |
| | Yes | **30.49** | **36.80** (+4.55%) | 54.02 (+1.16%) | 8.60 | 47.20 (+2.16%) | 33.16 (-0.36%) | 0.29 (+1.63%) |
| Code + Math | No | 17.07 | 27.40 | 51.92 | 14.90 | 63.60 | 22.53 | 0.23 |
| | Yes | 17.68 (+3.57%) | 29.00 (+5.84%) | 52.61 (+1.33%) | 15.20 (+2.01%) | 63.90 (+0.47%) | 21.10 (-6.35%) | 0.24 (+4.00%) |
| Instruction Tuned + Math | No | 8.54 | 23.80 | 51.39 | 9.20 | 54.10 | 33.89 | 0.20 |
| | Yes | 15.85 (+85.60%) | 30.20 (+26.89%) | 52.89 (+2.92%) | 11.60 (+26.09%) | 57.80 (+6.84%) | 35.63 (+5.13%) | 0.26 (+31.22%) |
| Code + Instruction Tuned + Math | No | 13.41 | 21.20 | 51.15 | 8.70 | 51.50 | 35.75 | 0.17 |
| | Yes | 19.51 (+45.49%) | 28.60 (+34.91%) | 52.63 (+2.89%) | 11.60 (+33.33%) | 57.00 (+10.68%) | **36.20** (+1.26%) | 0.24 (+41.28%) |
### Task Arithmetic

| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|----------|-----|-----------|------|------|------|-------|--------|---------|
| Code + Instruction Tuned | No | 29.27 | 33.80 | 53.44 | 8.60 | 47.10 | 31.60 | 0.28 |
| | Yes | 29.88 (+2.08%) | 35.80 (+5.92%) | 54.12 (+1.27%) | 7.80 (-9.30%) | 46.60 (-1.06%) | 32.01 (+1.30%) | 0.28 (+0.61%) |
| Code + Math | No | 18.29 | 28.60 | 52.10 | 15.00 | 64.70 | 21.92 | 0.24 |
| | Yes | 17.68 (-3.34%) | 29.20 (+2.10%) | 52.52 (+0.81%) | 14.60 (-2.67%) | 64.50 (-0.31%) | 21.54 (-1.73%) | 0.24 (-2.65%) |
| Instruction Tuned + Math | No | 4.27 | 20.20 | 51.50 | 10.00 | 54.20 | 31.31 | 0.18 |
| | Yes | 8.54 (+100.00%) | 26.40 (+30.69%) | 52.83 (+2.58%) | 12.80 (+28.00%) | 61.30 (+13.10%) | 32.62 (+4.18%) | 0.24 (+34.52%) |
| Code + Instruction Tuned + Math | No | 11.59 | 19.60 | 51.20 | 9.00 | 52.70 | 32.87 | 0.16 |
| | Yes | 15.24 (+31.49%) | 27.40 (+39.80%) | 52.63 (+2.79%) | 12.00 (+33.33%) | 58.10 (+10.25%) | 33.91 (+3.16%) | 0.22 (+31.97%) |
### Ties Merging

| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|----------|-----|-----------|------|------|------|-------|--------|---------|
| Code + Instruction Tuned | No | 16.46 | 23.60 | 52.70 | 2.70 | 5.40 | 24.48 | 0.00 |
| | Yes | 15.24 (-7.41%) | 24.20 (+2.54%) | 53.15 (+0.85%) | 2.60 (-3.70%) | 5.20 (-3.70%) | 22.87 (-6.58%) | 0.05 (+inf%) |
| Code + Math | No | 15.85 | 26.80 | 51.86 | 14.30 | 62.60 | 21.63 | 0.20 |
| | Yes | 15.85 | 28.60 (+6.72%) | 52.29 (+0.83%) | **15.30** (+6.99%) | 63.80 (+1.92%) | 22.64 (+4.67%) | 0.23 (+13.55%) |
| Instruction Tuned + Math | No | 28.05 | 34.60 | 54.45 | 8.70 | 44.70 | 34.04 | 0.23 |
| | Yes | 27.44 (-2.17%) | 35.00 (+1.16%) | 54.74 (+0.53%) | 9.30 (+6.90%) | 46.10 (+3.13%) | 34.51 (+1.38%) | 0.25 (+6.38%) |
| Code + Instruction Tuned + Math | No | 21.34 | 29.20 | 53.97 | 6.30 | 29.20 | 26.95 | 0.11 |
| | Yes | 20.73 (-2.86%) | 29.20 | 54.46 (+0.91%) | 5.70 (-9.52%) | 23.70 (-18.84%) | 25.98 (-3.60%) | 0.11 (+4.33%) |
### WIDEN

| Model(s) | AIM | HumanEval | MBPP | MMLU | MATH | GSM8K | IFEval | HV Gain |
|----------|-----|-----------|------|------|------|-------|--------|---------|
| Code + Instruction Tuned | No | 26.22 | 35.60 | 54.90 | 8.30 | 45.00 | 30.42 | 0.27 |
| | Yes | 25.61 (-2.33%) | 34.60 (-2.81%) | 54.97 (+0.13%) | 8.20 (-1.20%) | 44.10 (-2.00%) | 31.60 (+3.88%) | 0.26 (-0.93%) |
| Code + Math | No | 17.07 | 29.40 | 53.35 | 14.20 | 64.40 | 24.02 | 0.24 |
| | Yes | 17.07 | 29.60 (+0.68%) | 53.36 (+0.02%) | 14.30 (+0.70%) | 62.20 (-3.42%) | 23.95 (-0.29%) | 0.24 (-1.22%) |
| Instruction Tuned + Math | No | 24.39 | 30.40 | 54.20 | 14.60 | 66.00 | 30.82 | 0.30 |
| | Yes | 23.78 (-2.50%) | 32.00 (+5.26%) | 54.69 (+0.90%) | 15.10 (+3.42%) | **68.20** (+3.33%) | 31.23 (+1.33%) | **0.31** (+2.54%) |
| Code + Instruction Tuned + Math | No | 25.00 | 33.20 | 54.58 | 13.50 | 64.20 | 31.44 | 0.29 |
| | Yes | 26.83 (+7.32%) | 32.80 (-1.20%) | **54.98** (+0.73%) | 14.40 (+6.67%) | 64.00 (-0.31%) | 32.82 (+4.39%) | 0.30 (+4.70%) |
# Citation

```bibtex
@misc{nobari2025activationinformedmerginglargelanguage,
      title={Activation-Informed Merging of Large Language Models},
      author={Amin Heyrani Nobari and Kaveh Alimohammadi and Ali ArjomandBigdeli and Akash Srivastava and Faez Ahmed and Navid Azizan},
      year={2025},
      eprint={2502.02421},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02421},
}
```