Swallow-8B-it-v05-gguf-q6_k-mixed-v1

  • Quantization Type: Mixed Precision (q5_K, q6_K, q8_0)
  • Bits Per Weight (BPW): 7.13

Swallow-8B-it-v05-gguf-q6_k-mixed-v2

  • Quantization Type: Mixed Precision (q6_K, q8_0)
  • Bits Per Weight (BPW): 7.50

Swallow-8B-it-v05-gguf-q8_0-mixed-v1

  • Quantization Type: Mixed Precision (bf16, q4_K, q5_K, q6_K, q8_0)
  • Bits Per Weight (BPW): 8.01

Swallow-8B-it-v05-gguf-q8_0-mixed-v2

  • Quantization Type: Mixed Precision (bf16, q5_K, q6_K, q8_0)
  • Bits Per Weight (BPW): 9.31

Swallow-8B-it-v05-gguf-q8_0-mixed-v3

  • Quantization Type: Mixed Precision (bf16, q6_K, q8_0)
  • Bits Per Weight (BPW): 11.44

Swallow-8B-it-v05-gguf-q8_0-mixed-v4

  • Quantization Type: Mixed Precision (bf16, q8_0)
  • Bits Per Weight (BPW): 13.38
Model metadata

  • Format: GGUF
  • Model size: 8.03B params
  • Architecture: llama
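
These GGUF files work with any llama.cpp-compatible runtime. Below is a minimal sketch using the huggingface_hub and llama-cpp-python packages; the GGUF filename is an assumption based on the variant names above and should be checked against the repository's file listing:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is an assumption derived from the variant names above; verify the
# exact GGUF filename in the repository before downloading.
model_path = hf_hub_download(
    repo_id="marcelone/Llama-3.1-Swallow-8B-Instruct-v0.5-gguf",
    filename="Swallow-8B-it-v05-gguf-q6_k-mixed-v2.gguf",
)

# Load the quantized model and run a short chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介してください。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```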