Hugging Face Space: natasa365/whisper.cpp
whisper.cpp / ggml / src

6.66 MB · 100 contributors · 566 commits
Latest commit: d6b6852 by William Tambellini, 10 months ago
  ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
Directories:

  ggml-amx/      ggml : adapt AMX to tensor->grad removal (llama/0)                     about 1 year ago
  ggml-blas/     ggml : add support for dynamic loading of backends (llama/10469)       about 1 year ago
  ggml-cann/     ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-cpu/      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-cuda/     ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-hip/      CUDA: app option to compile without FlashAttention (llama/12025)       10 months ago
  ggml-kompute/  llama : add Qwen2VL support + multimodal RoPE (llama/10361)            about 1 year ago
  ggml-metal/    cuda/cpu: Increase support for fp16 unary operations (ggml/1125)       10 months ago
  ggml-musa/     CUDA: app option to compile without FlashAttention (llama/12025)       10 months ago
  ggml-opencl/   ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-rpc/      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-sycl/     ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-vulkan/   ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
Files:

  CMakeLists.txt        11.9 kB    whisper : support GGML_BACKEND_DL (#2843)                    10 months ago
  ggml-alloc.c          38.5 kB    ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-backend-impl.h   12 kB      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-backend-reg.cpp  17.2 kB    ggml : allow loading backend with env variable (ggml/1059)   12 months ago
  ggml-backend.cpp      77.6 kB    ggml : upgrade init_tensor API to return a ggml_status (llama/11854)   10 months ago
  ggml-common.h         133 kB     CUDA: use arch list for compatibility check (llama/11775)    10 months ago
  ggml-impl.h           18.4 kB    MUSA: support ARM64 and enable dp4a .etc (llama/11843)       10 months ago
  ggml-opt.cpp          31.7 kB    ggml-opt: fix data corruption (ggml/1022)                    about 1 year ago
  ggml-quants.c         214 kB     ggml : refactor online repacking (llama/10446)               about 1 year ago
  ggml-quants.h         8.34 kB    ggml : build backends as libraries (llama/10256)             about 1 year ago
  ggml-threading.cpp    250 Bytes  ggml : build backends as libraries (llama/10256)             about 1 year ago
  ggml-threading.h      198 Bytes  remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)        about 1 year ago
  ggml.c                209 kB     ggml-cpu: Support s390x SIMD Instruction Set (llama/12019)   10 months ago
  gguf.cpp              45 kB      cmake : add sanitizer flags for llama.cpp (llama/11279)      11 months ago