Update README.md
README.md CHANGED
@@ -76,6 +76,12 @@ res = pipe(
 print(res[0]["generated_text"])
 ```
 
+This will apply and run the correct prompt format out of the box:
+
+```
+<|prompt|>Why is drinking water so healthy?</s><|answer|>
+```
+
 ## Quantization and sharding
 
 You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
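
The template in the added lines above is what the pipeline applies automatically. As a small illustrative sketch (not part of the card), the same prompt could be built by hand; the question string is just the example from the template:

```python
# Minimal sketch: constructing the prompt manually instead of letting the
# pipeline apply the template shown in the README diff above.
question = "Why is drinking water so healthy?"
prompt = f"<|prompt|>{question}</s><|answer|>"
print(prompt)
# <|prompt|>Why is drinking water so healthy?</s><|answer|>
```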
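
As a sketch of how the quantization and sharding options from the last paragraph are typically passed to `from_pretrained` (the model id below is a placeholder, not the id from this card, and 4-bit/8-bit loading requires the `bitsandbytes` package):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: substitute the model id from this model card.
model_name = "your-org/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_4bit=True (or load_in_8bit=True) quantizes the weights via bitsandbytes;
# device_map="auto" shards the layers across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    device_map="auto",
    torch_dtype=torch.float16,
)
```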