---
license: other
license_name: llm-jp-3-172b-instruct3-tou
license_link: https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3/raw/main/LICENSE_ja
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-172b-instruct3

This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

The development was partially supported by [GENIAC](https://www.meti.go.jp/policy/mono_info_service/geniac/index.html).

| Model Variants |
| :--- |
| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) |
| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) |
| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |
| [llm-jp-3-172b-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2) |
| [llm-jp-3-172b-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2) |
| [llm-jp-3-172b](https://huggingface.co/llm-jp/llm-jp-3-172b) |
| [llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) |

Checkpoints format: Hugging Face Transformers

## Required Libraries and Their Versions

- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
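To confirm your environment meets these minimums, a quick check is possible with `importlib.metadata` from the standard library (a minimal sketch, not part of the original card):

```python
# Print installed versions of the required packages against the minimums above.
from importlib.metadata import PackageNotFoundError, version

REQUIREMENTS = {
    "torch": "2.3.0",
    "transformers": "4.40.1",
    "tokenizers": "0.19.1",
    "accelerate": "0.29.3",
    "flash-attn": "2.5.8",
}

for name, minimum in REQUIREMENTS.items():
    try:
        print(f"{name} {version(name)} (requires >= {minimum})")
    except PackageNotFoundError:
        print(f"{name} MISSING (requires >= {minimum})")
```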
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-172b-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    # "Below is an instruction that describes a task. Write a response that appropriately fulfills the request."
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    # "What is natural language processing?"
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```

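For interactive use, the same generation call can stream tokens to stdout as they are produced. A minimal sketch with `transformers.TextStreamer`, reusing `tokenizer`, `model`, and `tokenized_input` from the snippet above (an illustration, not part of the original card):

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are generated;
# skip_prompt=True suppresses echoing the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
        streamer=streamer,
    )
```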
## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:**
  - llm-jp-3-1.8b: 2.1T
  - llm-jp-3-3.7b: 2.1T
  - llm-jp-3-13b: 2.1T
  - llm-jp-3-172b-beta1: 0.7T
  - llm-jp-3-172b-beta2: 1.4T
  - llm-jp-3-172b: 2.1T

|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|

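The embedding parameter counts in the table are consistent with separate (untied) input and output embedding matrices over a 99,487-token vocabulary, i.e. 2 × vocab × hidden size. A minimal sanity check (the vocabulary size is inferred from the table, not stated in this card):

```python
# Hypothesis: embedding parameters = 2 * vocab_size * hidden_size
# (untied input/output embeddings). Vocab size inferred, not official.
VOCAB_SIZE = 99_487

TABLE = {  # model: (hidden_size, embedding_parameters)
    "1.8b": (2048, 407_498_752),
    "3.7b": (3072, 611_248_128),
    "13b": (5120, 1_018_746_880),
    "172b": (12288, 2_444_992_512),
}

for name, (hidden, expected) in TABLE.items():
    assert 2 * VOCAB_SIZE * hidden == expected, name
print("all embedding parameter counts match 2 * vocab * hidden size")
```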
## Tokenizer

The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (pure SentencePiece training does not reproduce our vocabulary).
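A minimal sketch of the tokenizer in isolation; because the Unigram model has byte fallback, out-of-vocabulary characters are decomposed into bytes rather than mapped to an unknown token, so decoding is expected to recover the input text:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-instruct3")

text = "自然言語処理とは何か"
ids = tokenizer.encode(text, add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # subword pieces
print(tokenizer.decode(ids))                 # expected to match the input
```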
## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B|
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B|
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B|
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B|
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B|
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B|
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B|
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B|
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B|
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B|
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B|
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B|
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B|
|Chinese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.8B|
|Korean|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.3B|
### Post-training

We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.

#### Supervised Fine-tuning

The datasets used for supervised fine-tuning are as follows:

| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset. |
| |[answer-carefully-002](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. We used a sampled subset. |
| |[wizardlm8x22b-logical-math-coding-sft_additional-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja)| A synthetic instruction dataset. We used a sampled subset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN)| We used a sampled subset. |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset-567k](https://huggingface.co/datasets/Aratako/Synthetic-JP-EN-Coding-Dataset-567k)| A synthetic instruction dataset. We used a sampled subset. |

#### Direct Preference Optimization

We used synthetic preference data to improve both the helpfulness and harmlessness of the LLM. The datasets will be made available soon.
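For reference, DPO trains the policy $\pi_\theta$ directly on preference pairs against a frozen reference model $\pi_{\mathrm{ref}}$ (here, the SFT checkpoint), with no separate reward model; this is the standard objective from the DPO paper, stated as general background rather than a detail of our training setup:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses for prompt $x$, and $\beta$ controls the strength of the implicit KL constraint toward the reference model.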
## Evaluation

### llm-jp-eval (v1.4.1)

We evaluated the models using 100 examples from the dev split. Note that we skipped the CG (Code Generation) task.

| Model name | average | EL | FA | HE | MC | MR | MT | NLI | QA | RC | SUM |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) | 0.5174 | 0.4460 | 0.2556 | 0.3700 | 0.6400 | 0.6100 | 0.8265 | 0.5600 | 0.5720 | 0.8505 | 0.0434 |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) | 0.5700 | 0.4306 | 0.2292 | 0.4350 | 0.8433 | 0.6200 | 0.8228 | 0.6820 | 0.5873 | 0.8964 | 0.1529 |
| [llm-jp-3-172b-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2) | 0.5422 | 0.3337 | 0.2725 | 0.4700 | 0.7767 | 0.6900 | 0.8283 | 0.5960 | 0.6133 | 0.8380 | 0.0037 |
| [llm-jp-3-172b-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2) | 0.6022 | 0.5470 | 0.2665 | 0.5100 | 0.8600 | 0.7000 | 0.8392 | 0.6800 | 0.6346 | 0.8770 | 0.1076 |
| [llm-jp-3-172b](https://huggingface.co/llm-jp/llm-jp-3-172b) | 0.5431 | 0.4077 | 0.2662 | 0.5150 | 0.7633 | 0.6700 | 0.8227 | 0.5740 | 0.5686 | 0.8289 | 0.0148 |
| [llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) | 0.6130 | 0.5173 | 0.2711 | 0.5700 | 0.8733 | 0.7300 | 0.8437 | 0.7280 | 0.6012 | 0.8829 | 0.1121 |

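The reported `average` is consistent with an unweighted mean over the ten task scores; a minimal check against the llm-jp-3-172b-instruct3 row:

```python
# Per-task scores for llm-jp-3-172b-instruct3, copied from the table above.
scores = {
    "EL": 0.5173, "FA": 0.2711, "HE": 0.5700, "MC": 0.8733, "MR": 0.7300,
    "MT": 0.8437, "NLI": 0.7280, "QA": 0.6012, "RC": 0.8829, "SUM": 0.1121,
}
print(round(sum(scores.values()) / len(scores), 4))  # 0.613 (reported: 0.6130)
```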
### Japanese MT Bench

We evaluated the models using `gpt-4-0613` as the judge. Please see the [code](https://github.com/wandb/llm-leaderboard/tree/g-leaderboard) for details.

| Model name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) | 5.14 | 2.90 | 5.30 | 8.80 | 2.15 | 2.45 | 6.95 | 7.45 | 5.15 |
| [llm-jp-3-172b-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2) | 6.72 | 4.10 | 6.90 | 7.60 | 4.00 | 6.35 | 8.70 | 7.95 | 8.15 |
| [llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) | 7.57 | 4.85 | 8.55 | 9.56 | 3.75 | 7.60 | 8.10 | 8.95 | 9.20 |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

See the [LICENSE](LICENSE_ja) file.

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.