# EXL2 Quants of Darkhn/L3.3-70B-Animus-V6-Exp

EXL2 quants of Darkhn/L3.3-70B-Animus-V6-Exp, produced with the exllamav2 quantization library.
## Quants
| Quant (Revision) | Bits per Weight | Head Bits |
|---|---|---|
| 2.5_H6 | 2.5 | 6 |
| 3.0_H6 | 3.0 | 6 |
| 3.5_H6 | 3.5 | 6 |
| 4.0_H6 | 4.0 | 6 |
| 4.25_H6 | 4.25 | 6 |
| 5.0_H6 | 5.0 | 6 |
| 6.0_H6 | 6.0 | 6 |
| 8.0_H8 | 8.0 | 8 |
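For context, the two table columns map to exllamav2's conversion options: `-b` sets the target bits per weight and `-hb` the precision kept for the output (head) layer. Below is a minimal sketch of how a quant like 5.0_H6 is produced; all paths are placeholders, and the flags assume a recent exllamav2 checkout's `convert.py`.

```shell
# Sketch: building a 5.0 bpw / H6 EXL2 quant with exllamav2's convert.py.
# Run from an exllamav2 checkout; all paths are placeholders.
#   -i  : source model (full-precision weights)
#   -o  : scratch/working directory for measurement and conversion
#   -cf : directory the finished quant is written to
#   -b  : target bits per weight  -> "Bits per Weight" column
#   -hb : output layer precision  -> "Head Bits" column
python convert.py \
  -i /path/to/L3.3-70B-Animus-V6-Exp \
  -o /path/to/workdir \
  -cf /path/to/L3.3-70B-Animus-V6-Exp-5.0bpw-h6 \
  -b 5.0 \
  -hb 6
```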
## Downloading quants with huggingface-cli
Install huggingface-cli:

```shell
pip install -U "huggingface_hub[cli]"
```
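If a repository is gated or private (this quant repo may not require it), authenticate with a Hugging Face token first:

```shell
# Optional: only needed for gated or private repositories
huggingface-cli login
```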
Download a quant by targeting its specific revision (branch):

```shell
huggingface-cli download ArtusDev/Darkhn_L3.3-70B-Animus-V6-Exp-EXL2 --revision "5.0bpw_H6" --local-dir ./
```
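Once downloaded, the quant can be loaded by any EXL2-capable backend (exllamav2 itself, TabbyAPI, text-generation-webui, and others). As a quick smoke test, here is a sketch using exllamav2's bundled chat example; the script path assumes an exllamav2 checkout, and the `llama3` prompt mode is an assumption based on the base model family.

```shell
# Sketch: smoke-test the downloaded quant with exllamav2's example chat script.
# Run from an exllamav2 checkout; point -m at the directory you downloaded into.
python examples/chat.py -m /path/to/downloaded/quant -mode llama3
```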
## Model tree for ArtusDev/Darkhn_L3.3-70B-Animus-V6-Exp-EXL2

- Base model: meta-llama/Llama-3.1-70B
  - Finetuned: meta-llama/Llama-3.3-70B-Instruct
    - Finetuned: Darkhn/L3.3-70B-Animus-V6-Exp (quantized in this repo)