Timon (KeyboardMasher)
AI & ML interests: None yet
Organizations: None yet
Error with built-in Web UI · 2 · #3 opened 4 months ago by KeyboardMasher
Thanks for IQ4_NL · ❤️ 1 · #1 opened 5 months ago by KeyboardMasher
128k Context GGUF, please? · 4 · #2 opened 7 months ago by MikeNate
Update README.md · #1 opened 7 months ago by KeyboardMasher
Other Imatrix quants (IQ3_XS)? · 3 · 6 · #1 opened 8 months ago by deleted
llama.cpp inference too slow? · 3 · #6 opened 11 months ago by ygsun
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp · 🔥 4 · 9 · #13 opened 10 months ago by ubergarm
Issue with --n-gpu-layers 5 Parameter: Model Only Running on CPU · 12 · #10 opened 11 months ago by vuk123
Advice on running llama-server with Q2_K_L quant · 3 · #6 opened 11 months ago by vmajor
I loaded DeepSeek-V3-Q5_K_M up on my 10-year-old Tesla M40 (Dell C4130) · 3 · #8 opened 11 months ago by gng2info
Model will need to be requantized, RoPE issues for long context · ❤️ 2 · 3 · #2 opened 11 months ago by treehugg3
Instruct version? · 3 · #1 opened about 1 year ago by KeyboardMasher
We need Llama Athene 3.1 70B · 5 · #5 opened over 1 year ago by gopi87
Change the 'Original model' link to tree/9092a8a, which contains the updated weights. · 1 · #2 opened about 1 year ago by AaronFeng753
Remove this model from Recent highlights collection · 1 · #9 opened about 1 year ago by KeyboardMasher
Continuous output · 1 · 8 · #1 opened about 1 year ago by kth8
Q8_0 file is damaged. · 5 · #1 opened over 1 year ago by KeyboardMasher