Qwen/Qwen3-Next-80B-A3B-Thinking reports MMLU_PRO 82.7, but your evaluation gets 0.7271


What explains the difference?

quantization?


Not possible; I mean the MMLU-Pro benchmark score reported for the baseline model.
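
For what it's worth, the two numbers appear to be on different scales: assuming 0.7271 is an accuracy fraction (as harness outputs usually are) and 82.7 is a percentage, the measured score is 72.71% and the gap to the reported number is about 10 points, not 82 vs 0.7. A minimal sketch of that arithmetic (variable names are just for illustration):

```python
# Minimal sketch, assuming 0.7271 is an accuracy fraction and 82.7 a percentage.
# The variable names below are illustrative, not from any evaluation harness.

reported_pct = 82.7      # MMLU-Pro score from the Qwen3-Next-80B-A3B-Thinking report
measured_frac = 0.7271   # MMLU-Pro score from this evaluation

measured_pct = measured_frac * 100   # 72.71%
gap = reported_pct - measured_pct    # ~9.99 percentage points

print(f"measured {measured_pct:.2f}% vs reported {reported_pct:.1f}% -> gap {gap:.2f} points")
```

So the discrepancy to explain is roughly 10 percentage points, not two orders of magnitude.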
