Update README.md

This repo contains the weights of the Koala 13B model produced at Berkeley.

This version has since been converted to HF format.
## My Koala repos

I have the following Koala model repositories available (a short loading sketch follows the lists):

**13B models:**
* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
* [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML)

**7B models:**
* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)
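
For the HF-format repos above, a minimal loading sketch using Hugging Face `transformers` (with `accelerate` for `device_map`) might look like the following. The model ID comes from the lists above; the prompt format and generation settings are illustrative assumptions, not a confirmed reference for these repos:

```python
# Minimal sketch: load one of the unquantized HF-format repos listed above.
# Assumes torch, transformers and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/koala-13B-HF"  # any of the HF-format repos above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 halves memory versus fp32
    device_map="auto",          # place layers across available devices
)

# Koala expects a dialogue-style prompt; this format is my understanding
# of the Koala convention, so treat it as an assumption.
prompt = "BEGINNING OF CONVERSATION: USER: What is a koala? GPT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The GPTQ and GGML repos need their own loaders (a GPTQ loader and `llama.cpp` respectively) rather than plain `transformers`.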

## How the Koala delta weights were merged
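
Berkeley released Koala as delta weights against the original LLaMA checkpoint, so the merge applies each delta tensor to its matching base tensor. As a conceptual illustration only (a toy sketch assuming a simple additive delta and hypothetical file names; the actual merge used EasyLM, as shown next):

```python
# Toy sketch of additive delta-weight merging. File names are hypothetical
# and the additive rule is an assumption; the real merge used EasyLM.
import torch

base = torch.load("llama-13b-state-dict.pt", map_location="cpu")  # base LLaMA tensors
delta = torch.load("koala-13b-delta.pt", map_location="cpu")      # Koala delta tensors

# Recover usable weights by adding each delta tensor to the base tensor.
merged = {name: base[name] + delta[name] for name in base}
torch.save(merged, "koala-13b-merged.pt")
```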

The Koala delta weights were merged using the following commands:
```
git clone https://github.com/young-geng/EasyLM