whisper.cpp/ggml/src

Latest commit: a815940e0e, Diego Devesa, 2024-12-08 20:14:35 +02:00
ggml : add predefined list of CPU backend variants to build (llama/10626)

* ggml : add predefined list of CPU backend variants to build
* update CPU dockerfiles
Directories:
ggml-amx              ggml : adapt AMX to tensor->grad removal (llama/0)  2024-11-20 21:00:08 +02:00
ggml-blas             ggml : add support for dynamic loading of backends (llama/10469)  2024-12-08 20:14:35 +02:00
ggml-cann             CANN: RoPE operator optimization (llama/10563)  2024-12-08 20:14:35 +02:00
ggml-cpu              ggml : add predefined list of CPU backend variants to build (llama/10626)  2024-12-08 20:14:35 +02:00
ggml-cuda             Add some minimal optimizations for CDNA (llama/10498)  2024-12-08 20:14:35 +02:00
ggml-hip              ggml : add support for dynamic loading of backends (llama/10469)  2024-12-08 20:14:35 +02:00
ggml-kompute          kompute : improve backend to pass test_backend_ops (llama/10542)  2024-12-08 20:14:35 +02:00
ggml-metal            ggml: add GGML_SET Metal kernel + i32 CPU kernel (ggml/1037)  2024-12-08 20:14:35 +02:00
ggml-musa             mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516)  2024-12-08 20:14:35 +02:00
ggml-rpc              ggml : add support for dynamic loading of backends (llama/10469)  2024-12-08 20:14:35 +02:00
ggml-sycl             SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend (llama/10584)  2024-12-08 20:14:35 +02:00
ggml-vulkan           vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (llama/10642)  2024-12-08 20:14:35 +02:00

Files:
CMakeLists.txt        ggml : add predefined list of CPU backend variants to build (llama/10626)  2024-12-08 20:14:35 +02:00
ggml-aarch64.c        ggml : optimize Q4_0 into Q4_0_X_Y repack (llama/10324)  2024-11-20 21:00:08 +02:00
ggml-aarch64.h        ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-alloc.c          ggml: new optimization interface (ggml/988)  2024-11-20 21:00:08 +02:00
ggml-backend-impl.h   ggml : move AMX to the CPU backend (llama/10570)  2024-12-08 20:14:35 +02:00
ggml-backend-reg.cpp  ggml : add predefined list of CPU backend variants to build (llama/10626)  2024-12-08 20:14:35 +02:00
ggml-backend.cpp      ggml : move AMX to the CPU backend (llama/10570)  2024-12-08 20:14:35 +02:00
ggml-common.h         ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541)  2024-12-08 20:14:35 +02:00
ggml-impl.h           Avoid using __fp16 on ARM with old nvcc (llama/10616)  2024-12-08 20:14:35 +02:00
ggml-opt.cpp          ggml-opt: fix data corruption (ggml/1022)  2024-12-08 20:14:35 +02:00
ggml-quants.c         ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-quants.h         ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-threading.cpp    ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml-threading.h      ggml : build backends as libraries (llama/10256)  2024-11-20 21:00:08 +02:00
ggml.c                ggml : add GGML_PAD_REFLECT_1D operation (ggml/1034)  2024-12-08 20:14:35 +02:00