Spaces: natasa365 / whisper.cpp
whisper.cpp / ggml / src
7.99 MB · 100 contributors · History: 946 commits
Latest commit fea8f94 by taronaeo, 6 months ago: ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)
| Name | Size | Last commit | Last updated |
| --- | --- | --- | --- |
| ggml-amx/ | | ggml : adapt AMX to tensor->grad removal (llama/0) | about 1 year ago |
| ggml-blas/ | | cmake : Fix broken CMake error messages (ggml/1252) | 7 months ago |
| ggml-cann/ | | CANN: Simplify the environment variable setting (#13104) | 6 months ago |
| ggml-cpu/ | | ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317) | 6 months ago |
| ggml-cuda/ | | CUDA/HIP: optimize mmv paths taken for HIP devices (llama/14324) | 6 months ago |
| ggml-hip/ | | HIP: disable rocwmma on gfx12 by default until rocm 7.0 (llama/14202) | 6 months ago |
| ggml-kompute/ | | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | about 1 year ago |
| ggml-metal/ | | metal : fix thread-safety (llama/14300) | 6 months ago |
| ggml-musa/ | | musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (llama/13647) | 7 months ago |
| ggml-opencl/ | | opencl: ref count `ggml_backend_opencl_context` and refactor profiling (llama/14254) | 6 months ago |
| ggml-rpc/ | | rpc : nicer error messages for RPC server crash (llama/14076) | 6 months ago |
| ggml-sycl/ | | sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (llama/13973) | 6 months ago |
| ggml-vulkan/ | | Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (llama/13792) | 6 months ago |
| CMakeLists.txt | 15 kB | Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286) | 6 months ago |
| ggml-alloc.c | 38.5 kB | ggml: Don't assert fail when tensor data changes (llama/13222) | 8 months ago |
| ggml-backend-impl.h | 12 kB | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 10 months ago |
| ggml-backend-reg.cpp | 17.5 kB | build : suppress gcc15 compile warnings (llama/14261) | 6 months ago |
| ggml-backend.cpp | 78.3 kB | sched : avoid changing cur_copy when a graph is already allocated (llama/13922) | 7 months ago |
| ggml-common.h | 133 kB | ggml-cpu : split arch-specific implementations (llama/13892) | 6 months ago |
| ggml-impl.h | 15.2 kB | ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317) | 6 months ago |
| ggml-opt.cpp | 39.8 kB | mnist: fix segmentation fault (ggml/1227) | 7 months ago |
| ggml-quants.c | 215 kB | ggml-quants : rename best_mad to best_error (ggml/1283) | 6 months ago |
| ggml-quants.h | 8.34 kB | ggml : build backends as libraries (llama/10256) | about 1 year ago |
| ggml-threading.cpp | 250 Bytes | ggml : build backends as libraries (llama/10256) | about 1 year ago |
| ggml-threading.h | 198 Bytes | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) | about 1 year ago |
| ggml.c | 211 kB | ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317) | 6 months ago |
| ggml.cpp | 738 Bytes | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 7 months ago |
| gguf.cpp | 46.1 kB | ggml : do not output unprintable characters on GGUF load failure (llama/14381) | 6 months ago |