Perplexity Benchmarks

#7
by thad0ctor - opened

Would it be possible to post perplexity measurements please?

Unsloth AI org
•
edited 1 day ago

> Would it be possible to post perplexity measurements please?

Perplexity is a poor measure of quality for quants. KL divergence is usually the better metric, but running those benchmarks takes a lot of time. In general, Q2_K_XL is always the best in terms of size + accuracy.
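For context, KL divergence compares the quantized model's next-token probability distribution against the full-precision model's, token by token, rather than scoring each model in isolation the way perplexity does. A minimal sketch of the computation (the probability values below are made up for illustration, not real benchmark data):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q): how far the quantized model's token distribution Q
    drifts from the full-precision model's distribution P.
    Zero means identical distributions; larger means more degradation."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities over a tiny 3-token vocabulary.
baseline  = [0.70, 0.20, 0.10]  # full-precision (e.g. BF16) model
quantized = [0.65, 0.25, 0.10]  # a low-bit quant of the same model

print(kl_divergence(baseline, quantized))  # small value -> quant tracks the original closely
```

In a real evaluation this per-token divergence is averaged over a large corpus, which is why it is much slower to run than a single perplexity pass.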

There's an Aider Polyglot benchmark for Q4_K_XL: https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF/discussions/8
