This model uses the Llama architecture, so it doesn't need any special llama.cpp support and works out of the box.
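As a minimal sketch of what "out of the box" means in practice, the example below loads one of the GGUF files with the llama-cpp-python bindings. The quant filename, context size, and GPU-offload settings are illustrative assumptions, not values taken from this repo.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The quant filename below is hypothetical; substitute any .gguf file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="IQuest-Coder-V1-40B-Instruct-Q4_K_M.gguf",  # assumed local download
    n_ctx=8192,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if the build supports it
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```

The same files can equally be served with the stock llama.cpp binaries (llama-cli / llama-server), since no custom architecture support is required.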

Format: GGUF
Model size: 40B params
Architecture: llama
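A rough, back-of-the-envelope memory figure for the 40B-parameter weights follows from bytes ≈ params × bits-per-weight / 8. The sketch below uses nominal 4-bit and 8-bit widths; real GGUF quant types use slightly more bits per weight, and KV cache plus runtime overhead come on top, so treat the results as lower bounds rather than measured requirements.

```python
# Rough weight-only memory estimate; ignores KV cache and runtime overhead.
PARAMS = 40e9  # 40B parameters

def weight_gib(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate weight memory in GiB at a given quantization width."""
    return params * bits_per_weight / 8 / 2**30

for label, bits in [("4-bit", 4.0), ("8-bit", 8.0)]:
    print(f"{label}: ~{weight_gib(bits):.0f} GiB")
```

With nominal widths this comes out around 19 GiB at 4-bit and 37 GiB at 8-bit, before cache and overhead.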

Model tree for ilintar/IQuest-Coder-V1-40B-Instruct-GGUF: Quantized (14), including this model.