whisper.cpp/ggml/src
Latest commit: Eve · d8bf63a41b · vulkan: dynamic subgroup size for the remaining k quants (llama/10745) · 2024-12-18 12:52:16 +02:00

Commit message:
* q5_k
* q4_k
* q3_k
* q2_k
* q6_k multi row example
* revert as multi row isnt faster for k quants
Name                  Last commit message                                                      Last commit date
ggml-amx/             ggml : adapt AMX to tensor->grad removal (llama/0)                       2024-11-20 21:00:08 +02:00
ggml-blas/            ggml : add support for dynamic loading of backends (llama/10469)         2024-12-08 20:14:35 +02:00
ggml-cann/            ggml : refactor online repacking (llama/10446)                           2024-12-18 12:52:16 +02:00
ggml-cpu/             ggml : disable iq4_nl interleave size 8 (llama/10709)                    2024-12-18 12:52:16 +02:00
ggml-cuda/            CUDA: rename macros to avoid conflicts with WinAPI (llama/10736)         2024-12-18 12:52:16 +02:00
ggml-hip/             ggml : add support for dynamic loading of backends (llama/10469)         2024-12-08 20:14:35 +02:00
ggml-kompute/         kompute : improve backend to pass test_backend_ops (llama/10542)         2024-12-08 20:14:35 +02:00
ggml-metal/           metal : Extend how Llama.cpp locates metal resources (llama/10676)       2024-12-18 12:52:16 +02:00
ggml-musa/            ggml : remove old files (skip) (#0)                                      2024-12-08 23:04:26 +02:00
ggml-rpc/             ggml : add support for dynamic loading of backends (llama/10469)         2024-12-08 20:14:35 +02:00
ggml-sycl/            ggml : refactor online repacking (llama/10446)                           2024-12-18 12:52:16 +02:00
ggml-vulkan/          vulkan: dynamic subgroup size for the remaining k quants (llama/10745)   2024-12-18 12:52:16 +02:00
CMakeLists.txt        ggml : refactor online repacking (llama/10446)                           2024-12-18 12:52:16 +02:00
ggml-aarch64.c        ggml : optimize Q4_0 into Q4_0_X_Y repack (llama/10324)                  2024-11-20 21:00:08 +02:00
ggml-aarch64.h        ggml : build backends as libraries (llama/10256)                         2024-11-20 21:00:08 +02:00
ggml-alloc.c          ggml : remove return from ggml_gallocr_allocate_node (ggml/1048)         2024-12-18 12:52:16 +02:00
ggml-backend-impl.h   ggml : move AMX to the CPU backend (llama/10570)                         2024-12-08 20:14:35 +02:00
ggml-backend-reg.cpp  ggml : add predefined list of CPU backend variants to build (llama/10626) 2024-12-08 20:14:35 +02:00
ggml-backend.cpp      ggml : move AMX to the CPU backend (llama/10570)                         2024-12-08 20:14:35 +02:00
ggml-common.h         CUDA: rename macros to avoid conflicts with WinAPI (llama/10736)         2024-12-18 12:52:16 +02:00
ggml-impl.h           Avoid using __fp16 on ARM with old nvcc (llama/10616)                    2024-12-08 20:14:35 +02:00
ggml-opt.cpp          ggml-opt: fix data corruption (ggml/1022)                                2024-12-08 20:14:35 +02:00
ggml-quants.c         ggml : refactor online repacking (llama/10446)                           2024-12-18 12:52:16 +02:00
ggml-quants.h         ggml : build backends as libraries (llama/10256)                         2024-11-20 21:00:08 +02:00
ggml-threading.cpp    ggml : build backends as libraries (llama/10256)                         2024-11-20 21:00:08 +02:00
ggml-threading.h      ggml : build backends as libraries (llama/10256)                         2024-11-20 21:00:08 +02:00
ggml.c                ggml : add check for grad_accs (ggml/1046)                               2024-12-18 12:52:16 +02:00