Quantized version of TutorRL-7B-think.
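The card carries no usage snippet. Since the base model is Qwen/Qwen2.5-7B, the underlying chat format is ChatML; below is a minimal sketch of assembling such a prompt by hand before feeding it to a GGUF runtime such as llama.cpp. The helper name and the example system message are my own, not part of the model card.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2-family models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful tutor.",
    "What is 2 + 2? Explain your reasoning.",
)
print(prompt)
```

In practice most GGUF runtimes (e.g. llama.cpp's chat mode) can apply the model's bundled chat template automatically, so manual assembly is only needed for raw completion-style calls.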

Downloads last month: 16
Format: GGUF
Model size: 8B params
Architecture: qwen2
Quantization: 16-bit
Inference Providers: this model is not deployed by any inference provider.

Model tree for dmacjam/TutorRL-7B-think-GGUF
Base model: Qwen/Qwen2.5-7B
Quantized: 3 models, including this one
