Upload README.md

README.md CHANGED
@@ -86,7 +86,7 @@ Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/CodeLlama-13B-Python-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | Processing, coming soon |

<!-- README_AWQ.md-provided-files end -->
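The Branch column names a Git revision of this repo, and GS is the AWQ group size. As a minimal sketch of fetching the sharded safetensors files for one branch (the use of `huggingface_hub.snapshot_download` is our illustration, not something the README above prescribes):

```python
# Minimal sketch: download the quantized shards for one branch in the table.
# snapshot_download is our choice of helper, not part of the original README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/CodeLlama-13B-Python-AWQ",
    revision="main",  # the Branch column in the table above
)
print(local_dir)  # local path holding the sharded safetensors files
```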
@@ -98,7 +98,7 @@ Documentation on installing and using vLLM [can be found here](https://vllm.read

- When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/CodeLlama-13B-Python-AWQ --quantization awq
```
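Once the server is running it can be queried over HTTP. A hedged sketch, assuming the demo server's default port 8000 and its `/generate` endpoint (prompt and sampling values are illustrative):

```shell
# Hedged sketch: query the demo API server started above.
# Assumes vllm.entrypoints.api_server's default port 8000 and its
# /generate endpoint; adjust for your deployment.
curl http://localhost:8000/generate \
    -d '{
        "prompt": "def fibonacci(n):",
        "max_tokens": 128,
        "temperature": 0.0
    }'
```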
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
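The excerpt cuts off before the Python example itself; below is a minimal sketch of such a call using vLLM's `LLM` class (the prompt and sampling values are illustrative, not from the original README):

```python
# Minimal sketch: offline inference with vLLM on the AWQ-quantized model.
# Prompt text and sampling values are illustrative.
from vllm import LLM, SamplingParams

prompts = ["def fibonacci(n):"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# quantization="awq" tells vLLM to load the AWQ 4-bit weights
llm = LLM(model="TheBloke/CodeLlama-13B-Python-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```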