Update README.md
README.md CHANGED
@@ -46,6 +46,8 @@ Can I ask a question?<|im_end|>
 
 ## Support
 
+## No longer needed as llama.cpp has merged support - just update.
+
 To run inference on this model, you'll need to use Aphrodite, vLLM, or EXL2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the Llama 3.1 rope_freqs issue with custom head dimensions.
 
 However, you can work around this by quantizing the model yourself to create a functional GGUF file. Note that until [this PR](https://github.com/ggerganov/llama.cpp/pull/9141) is merged, the context will be limited to 8k tokens.
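
Since the Support section above lists vLLM as one of the working inference backends, here is a minimal sketch of offline inference with vLLM's Python API. The model ID and sampling settings are placeholders, not values taken from this repository.

```python
# Minimal vLLM offline-inference sketch. The model ID below is a
# placeholder -- substitute the actual Hugging Face repository name.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/llama-3.1-finetune")  # hypothetical model ID
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Can I ask a question?"], params)
for out in outputs:
    print(out.outputs[0].text)
```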
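
For the self-quantization workaround in the last paragraph, the usual llama.cpp flow is to convert the Hugging Face checkpoint to GGUF and then quantize it. A sketch follows, driven from Python for consistency with the example above; the paths are placeholders, and the script and binary names (convert_hf_to_gguf.py, llama-quantize) match recent llama.cpp checkouts but should be verified against yours.

```python
# Sketch of the GGUF self-quantization workaround: convert the HF
# checkpoint to an f16 GGUF with llama.cpp's conversion script, then
# quantize it. All paths are placeholders.
import subprocess

model_dir = "path/to/hf-model"       # local HF checkpoint (placeholder)
f16_gguf = "model-f16.gguf"
quant_gguf = "model-Q5_K_M.gguf"

# 1. Convert the Hugging Face weights to an f16 GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", model_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the f16 GGUF down to a smaller format (Q5_K_M here).
subprocess.run(
    ["./llama-quantize", f16_gguf, quant_gguf, "Q5_K_M"],
    check=True,
)
```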