A quick multi-disciplinary fine-tune of openchat/openchat-3.5-0106, trained on an alpaca-style dataset spanning several disciplines. I used LoRA adapters and then merged them back into the base model for ease of use.
# Prompting

## Prompt Template for alpaca style

```
### Instruction:

<prompt> (without the <>)

### Response:
```
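For programmatic use, a small helper can fill in this template. This is a minimal sketch; `build_prompt` is an illustrative name, not part of the model's API:

```python
def build_prompt(instruction: str) -> str:
    # Wrap a raw instruction in the alpaca-style template shown above.
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"
```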
## Sample Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Create tensors on the GPU by default.
torch.set_default_device("cuda")

# device_map="auto" lets transformers place the weights across available devices.
model = AutoModelForCausalLM.from_pretrained("ibivibiv/multimaster-7b", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/multimaster-7b")

# Build an alpaca-style prompt and tokenize it.
inputs = tokenizer("### Instruction: Who would win in an arm wrestling match between Abraham Lincoln and Chuck Norris?\nA. Abraham Lincoln\nB. Chuck Norris\n### Response:\n", return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
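The snippet above decodes greedily, and `max_length` counts the prompt tokens against the budget. A sketch of a sampled variant; the parameter values are illustrative, not tuned recommendations for this model:

```python
# max_new_tokens budgets only the completion; sampling adds variety.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```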
# Model Details
* **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **Model type**: **multimaster-7b** is a LoRA-tuned version of openchat/openchat-3.5-0106 with the adapter merged back into the base model (see the sketch below)
* **Language(s)**: English
* **Purpose**: This model focuses on multi-disciplinary tuning
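As a reference for the adapter-merge step described above, a minimal sketch using [peft](https://github.com/huggingface/peft); the adapter path is a placeholder, and this is a general pattern rather than the exact setup used for this model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then fold the LoRA weights into it.
base = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("multimaster-7b")
```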
# Benchmark Scores

Coming soon.

## Citations
```
@misc{open-llm-leaderboard,
  author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
  title = {Open LLM Leaderboard},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```
```
@software{eval-harness,
  author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title = {A framework for few-shot language model evaluation},
  month = sep,
  year = 2021,
  publisher = {Zenodo},
  version = {v0.0.1},
  doi = {10.5281/zenodo.5371628},
  url = {https://doi.org/10.5281/zenodo.5371628}
}
```
```
@misc{clark2018think,
  title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
  author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
  year = {2018},
  eprint = {1803.05457},
  archivePrefix = {arXiv},
  primaryClass = {cs.AI}
}
```
```
@misc{zellers2019hellaswag,
  title = {HellaSwag: Can a Machine Really Finish Your Sentence?},
  author = {Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
  year = {2019},
  eprint = {1905.07830},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```
```
@misc{hendrycks2021measuring,
  title = {Measuring Massive Multitask Language Understanding},
  author = {Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  year = {2021},
  eprint = {2009.03300},
  archivePrefix = {arXiv},
  primaryClass = {cs.CY}
}
```
```
@misc{lin2022truthfulqa,
  title = {TruthfulQA: Measuring How Models Mimic Human Falsehoods},
  author = {Stephanie Lin and Jacob Hilton and Owain Evans},
  year = {2022},
  eprint = {2109.07958},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-1907-10641,
  title = {{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
  author = {Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
  year = {2019},
  eprint = {1907.10641},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-2110-14168,
  title = {Training Verifiers to Solve Math Word Problems},
  author = {Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Mark Chen and Heewoo Jun and Lukasz Kaiser and Matthias Plappert and Jerry Tworek and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
  year = {2021},
  eprint = {2110.14168},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```