Update README.md
README.md (CHANGED)

@@ -35,6 +35,17 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!

## Required: latest version of Transformers

Before trying these GPTQs, please update Transformers to the latest GitHub code:

```
pip3 install git+https://github.com/huggingface/transformers
```

If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.
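
After upgrading, it can be worth confirming which Transformers build the active Python environment actually picked up. A minimal sketch, assuming 4.31.0 was the first release with Llama 2 support (that version threshold is an assumption, not something this README states):

```python
# Sanity check (not from the original README): print the Transformers version
# in the current environment and warn if it predates Llama 2 support.
# The 4.31.0 cut-off is an assumption; the git install above will be newer.
import transformers
from packaging import version

print("transformers:", transformers.__version__)
if version.parse(transformers.__version__) < version.parse("4.31.0"):
    raise RuntimeError("Transformers is too old for Llama 2; rerun the pip3 install above.")
```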

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)

@@ -56,14 +67,14 @@ Each separate quant is in a different branch. See below for instructions on fet

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | None | True | 35.33 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | Still processing | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | Still processing | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-3bit--1g-actorder_True | 3 | None | True | Still processing | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| gptq-3bit-128g-actorder_False | 3 | 128 | False | Still processing | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
| gptq-3bit-128g-actorder_True | 3 | 128 | True | Still processing | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
| gptq-3bit-64g-actorder_True | 3 | 64 | True | Still processing | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
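
For reference, the Bits, Group Size and Act Order columns correspond directly to AutoGPTQ quantisation parameters. A minimal sketch of how the `gptq-4bit-32g-actorder_True` combination would be expressed (illustrative only: the branches above already contain quantised weights, so this is not needed to use them):

```python
# Illustration only: how the table's Bits / Group Size / Act Order columns map
# onto AutoGPTQ's quantisation settings. Not required for using the
# pre-quantised branches in this repo.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # "Bits" column
    group_size=32,   # "Group Size" column; -1 means no group size ("None" / "--1g")
    desc_act=True,   # "Act Order (desc_act)" column
)
```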

## How to download from branches

@@ -80,6 +91,13 @@ Please make sure you're using the latest version of [text-generation-webui](http

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

Remember to update Transformers to the latest GitHub version:
```
pip3 install git+https://github.com/huggingface/transformers
```

ExLlama is not currently compatible with Llama 2 70B.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
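
Outside the UI, a specific branch can also be fetched programmatically. A minimal sketch using `huggingface_hub` (the repo id and branch name come from the table above):

```python
# Minimal sketch (not from the original README): download one quantisation
# branch with huggingface_hub. The revision is the branch name from the table.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/Llama-2-70B-chat-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print("Downloaded to:", local_path)
```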

@@ -99,6 +117,12 @@ First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) instal

`GITHUB_ACTIONS=true pip install auto-gptq`

Also update Transformers to the latest GitHub version:

```
pip3 install git+https://github.com/huggingface/transformers
```

Then try the following example code:
```python
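# The original example code is truncated at this point in the diff. What
# follows is a minimal sketch, not the original code: it assumes the model is
# loaded with AutoGPTQ's AutoGPTQForCausalLM.from_quantized() plus a standard
# Transformers tokenizer, which is what this section describes.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
    # Fused attention does not handle the 70B grouped-query attention layout
    # in some AutoGPTQ versions; disabling it is the safe default here.
    inject_fused_attention=False,
)

prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output[0]))
```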

@@ -167,7 +191,7 @@ print(pipe(prompt_template)[0]['generated_text'])

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama does not currently work with Llama 2 70B models.

<!-- footer start -->
## Discord