Community discussions

- #18 Docker image · opened about 2 months ago by ixn321
- #15 Running the model in Ollama is not supported · opened 10 months ago by humble92
- #14 llama_cpp_python: gguf_init_from_file_impl: failed to read tensor info · opened 10 months ago by miscw
- #13 Kind of strange responses: GGGGGGGGGGGG... · opened 10 months ago by avicohen
- #12 Run with 400MB · opened 10 months ago by Dinuraj
- #11 How to easily run on Windows OS? · opened 10 months ago by lbarasc
- #8 Chat template issue · opened 11 months ago by tdh111
- #7 TQ1 quant version · opened 11 months ago by TobDeBer
- #6 Does not work in LM Studio · opened 11 months ago by mailxp
- #3 Chinese Ha is not supported · opened 11 months ago by digmouse100
- #2 GGUF not llama.cpp compatible yet · opened 11 months ago by lefromage
- #1 Update README.md · opened 11 months ago by bullerwins