Hanami

Llama-3.1-70B-Hanami-x1


This is an experiment built on top of Euryale v2.2, and I think it worked out nicely.

It feels different from Euryale, in a good way. From my testing, I prefer it over both v2.2 and v2.1.

As usual, the Euryale v2.1 and v2.2 settings work with it.

A min_p of at least 0.1 is recommended, as with other Llama 3 based models.
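
For reference, here is a minimal sketch (not part of the original card, assuming a recent transformers version that supports the min_p sampling parameter) of loading the model and generating with min_p set to 0.1. Only the min_p value comes from this card; the other sampler settings are placeholders.

```python
# Sketch: sampling from the model with min_p >= 0.1, as recommended above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/L3.1-70B-Hanami-x1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy city."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# min_p=0.1 follows the recommendation above; temperature is an assumption.
output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.1,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```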

I like it, so try it out?

Model size: 71B params (Safetensors, BF16)

Model tree for Sao10K/L3.1-70B-Hanami-x1: 1 finetune, 137 merges, 5 quantizations.

Spaces using Sao10K/L3.1-70B-Hanami-x1: 1