Commit History
26fdc9f  ggml : mul_mat_id use the same tensor for all the experts (llama/6387)
1ff7b08  Vulkan k-quant mmq and ggml-backend offload functionality (llama/6155)
80db462  ggml : fix bounds checking of zero size views (llama/6347)  [slaren]
cbbfa9e  sync : ggml (#2001)
507b9dd  ggml, ci : Windows ARM runner and build fixes (llama/5979)  [Michael Podvitskiy]
11a2545  ggml : remove old quantization functions (llama/5942)
224fbc2  llama : support Mamba Selective State Space Models (llama/5328)  [compilade]
909dbdc  ggml : use SYS_get_cpu if SYS_getcpu is not defined (llama/5906)
394e5d8  ggml : fix unknown status (llama/0)
151c676  ggml : introduce ggml_status (ggml/750)
9a07f42  ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (llama/5760)
0ee1bfb  IQ4_XS: a 4.25 bpw quantization (llama/5747)
10ac4bb  add google magika inference example (ggml/748)  [slaren]
93e0830  code : normalize enum names (llama/5697)
32589c9  IQ3_S: a much better alternative to Q3_K (llama/5676)
a7eb9f6  Introduce backend GUIDs (ggml/743)  [UEXTM.com, slaren]
bc567d3  ggml : always define ggml_fp16_t as uint16_t (llama/5666)
f8e8d34  sync : llama.cpp (ggml/0)
7d255ac  Allow for Vulkan build with Accelerate.
4e31c82  ggml : compute forward no longer pass src tensors (ggml/729)  [Siddharth Ramakrishnan (siddharthvader)]
99ece5c  ggml : fix conv_2d batch mode (ggml/737)
0206c2d  ggml : android and old glibc NUMA incompatibility bugfixes (llama/5557)
2f3a004  ggml, common, examples, tests : fixed type arguments in printf (llama/5528)
9c3aa6a  1.5 bit quantization (llama/5453)
26c019a  ggml : add ALiBi support for ggml_soft_max_ext (llama/5488)
7c952d2  ggml : add numa options (llama/5377)
0d50a29  ggml : add mmla kernels for quantized GEMM (llama/4966)  [snadampal]
5cffd6f  ggml-alloc : v3 (ggml/727)  [slaren]
5d130aa  Basic Vulkan Multi-GPU implementation (llama/5321)
9bb2b0a  ggml : avoid duplicating function calls using MIN/MAX macros (llama/5325)
f17a416  llava : add MobileVLM support (llama/5132)  [JidongZhang-THU, slaren]
2645c33  ggml : limit n_threads to the max n_tasks (llama/5238)  [slaren]
0c9c434  kompute : llama-bench support and ggml_cpu_has_kompute() (llama/5226)
a8ea91b  ggml : add abort_callback for cpu backend (ggml/725)  [Michael Podvitskiy]
4649943  SOTA 3-bit quants (llama/5196)
80cfca4  gguf : fix comparison (ggml/715)
5bf1614  gguf : add input validation, prevent integer overflows (ggml/709)
5a97aba  ggml : add Vulkan backend (llama/2059)
1bbb1a9  ggml : minor type fix (int64_t -> size_t)
f833987  Add OpenCL add kernel (llama/5151)
3a3eb8e  ggml : update softmax n_task calculation (llama/5126)  [snadampal]