Is Kimi K2 trained with FP8?

#30
by ct-2 - opened

Thanks for sharing the model with the world. It's wonderful, and it has few enough active parameters to run quickly on a CPU.

Are you releasing training details or a whitepaper?
Really nice choices and work, and it's easily accessible to a regular person!

Moonshot AI org

We will discuss these details in our technical report.

I admire Kimi K2 and am now exploring its API.

Moonshot AI org

I am confused.
Section 2.4.3 of the tech report says "we do not apply FP8 in computation".
But judging from the size of the checkpoint, the weights appear to be stored in FP8.
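For anyone who wants to verify this directly rather than infer it from file size, here is a minimal sketch that prints the stored dtypes from a downloaded safetensors shard. The shard filename is a placeholder, not an actual file name from this repo:

```python
# Minimal sketch: inspect the stored dtypes of a downloaded checkpoint shard.
# The filename below is a placeholder; substitute a real shard from the repo.
from safetensors import safe_open

with safe_open("model-00001-of-000xx.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:        # peek at the first few tensors
        t = f.get_tensor(name)
        print(name, t.dtype, tuple(t.shape))
```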

Moonshot AI org

Yes, the dtype of the weights is FP8. The report mainly covers training. For inference, we use blockwise FP8, the same as DeepSeek's FP8. We have tested the model on all benchmarks, and its performance matches BF16 inference.
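For context, here is a minimal sketch of what DeepSeek-style blockwise FP8 quantization of a BF16 weight looks like, assuming 128x128 blocks and the float8_e4m3fn format. This is only an illustration of the technique, not Moonshot's actual conversion code:

```python
# Sketch of DeepSeek-style blockwise FP8 quantization: a BF16 weight matrix is
# split into 128x128 blocks, each block gets its own scale, and the scaled
# values are stored as float8_e4m3fn. Assumptions: 2-D weights, 128x128 blocks.
import torch

BLOCK = 128                                     # block size (assumed)
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3

def quantize_blockwise_fp8(weight_bf16: torch.Tensor):
    """Quantize a 2-D BF16 weight to FP8 with one scale per 128x128 block."""
    rows, cols = weight_bf16.shape
    w = weight_bf16.float()
    n_row_blocks = (rows + BLOCK - 1) // BLOCK
    n_col_blocks = (cols + BLOCK - 1) // BLOCK
    scales = torch.empty(n_row_blocks, n_col_blocks)
    w_fp8 = torch.empty_like(w, dtype=torch.float8_e4m3fn)
    for bi in range(n_row_blocks):
        for bj in range(n_col_blocks):
            r, c = bi * BLOCK, bj * BLOCK
            block = w[r:r + BLOCK, c:c + BLOCK]
            # one scale per block so the block's max maps to the FP8 range
            scale = block.abs().max().clamp(min=1e-12) / FP8_MAX
            scales[bi, bj] = scale
            w_fp8[r:r + BLOCK, c:c + BLOCK] = (block / scale).to(torch.float8_e4m3fn)
    return w_fp8, scales

def dequantize_blockwise_fp8(w_fp8: torch.Tensor, scales: torch.Tensor):
    """Recover an approximate BF16 weight by re-applying the per-block scales."""
    w = w_fp8.float()
    for bi in range(scales.shape[0]):
        for bj in range(scales.shape[1]):
            r, c = bi * BLOCK, bj * BLOCK
            w[r:r + BLOCK, c:c + BLOCK] *= scales[bi, bj]
    return w.to(torch.bfloat16)
```

The per-block scales are stored alongside the FP8 weights and are either re-applied at load time or consumed directly by an FP8 GEMM kernel during inference.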

So can we say the model is trained in BF16, and after training the weights are converted to blockwise FP8 for release?

Moonshot AI org

Yes