whisper.cpp/ggml/include
Latest commit 3298916e5e by Charles Xu (2024-11-20 21:00:08 +02:00):
backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (llama/9921)
Co-authored-by: Diego Devesa <slarengh@gmail.com>
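As I read llama/9921, the point of the "online" flow is that the CPU backend can now repack plain Q4_0 weights into the interleaved layouts its aarch64 GEMV/GEMM kernels expect at load time, instead of requiring the model to be converted offline to the Q4_0_4_4 / Q4_0_4_8 / Q4_0_8_8 types. Below is a minimal sketch of the interleaving idea only; the struct layouts and the make_block_q4_0x4 name are simplified stand-ins, not the exact ggml definitions (the real repack also adjusts the quant encoding for the kernels).

```cpp
// Sketch of the block-interleaving idea behind the "online" Q4_0 repack.
// All types here are simplified illustrations, not the real ggml structs.

#include <cstdint>
#include <cstring>

constexpr int QK4_0 = 32;              // values per Q4_0 block

using ggml_half = uint16_t;            // fp16 scale stored as raw bits

struct block_q4_0 {                    // plain Q4_0: one scale + 32 4-bit quants
    ggml_half d;
    uint8_t   qs[QK4_0 / 2];
};

struct block_q4_0x4 {                  // 4 row-blocks interleaved for SIMD loads
    ggml_half d[4];
    uint8_t   qs[4 * QK4_0 / 2];
};

// Interleave one Q4_0 block from each of 4 consecutive weight rows so a
// NEON kernel can fetch the quants of all 4 rows with contiguous loads.
// chunk is the interleave width in bytes (e.g. 4 or 8, per target kernel).
block_q4_0x4 make_block_q4_0x4(const block_q4_0 in[4], size_t chunk) {
    block_q4_0x4 out{};
    for (int r = 0; r < 4; ++r) {
        out.d[r] = in[r].d;            // keep each row's scale
    }
    const size_t nchunks = sizeof(in[0].qs) / chunk;
    for (size_t c = 0; c < nchunks; ++c) {
        for (int r = 0; r < 4; ++r) {  // round-robin the rows chunk by chunk
            std::memcpy(out.qs + (c * 4 + r) * chunk,
                        in[r].qs + c * chunk,
                        chunk);
        }
    }
    return out;
}
```

In the real backend, as I understand it, this repacking runs once when the weight tensors are set, so the model file itself stays plain Q4_0; the entry points live in ggml-cpu.h, per the listing below.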
ggml-alloc.h      ggml : fix typo in example usage ggml_gallocr_new (ggml/984)                  2024-10-05 15:23:51 +03:00   (usage sketch below)
ggml-amx.h        ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-backend.h    ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-blas.h       ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-cann.h       ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-cpp.h        llama : use smart pointers for ggml resources (llama/10117)                   2024-11-15 15:21:04 +02:00   (usage sketch below)
ggml-cpu.h        backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (llama/9921)  2024-11-20 21:00:08 +02:00
ggml-cuda.h       ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-kompute.h    ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-metal.h      ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-rpc.h        ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-sycl.h       ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml-vulkan.h     ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
ggml.h            ggml : build backends as libraries (llama/10256)                              2024-11-20 21:00:08 +02:00
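The ggml-alloc.h entry above fixes a typo in the header's example usage of ggml_gallocr_new. The pattern that header documents looks roughly like the sketch below, using the ggml-alloc API as I know it; build_graph() is a hypothetical helper, and error handling is omitted.

```cpp
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"

// Hypothetical helper standing in for whatever code constructs the
// compute graph for the current batch.
struct ggml_cgraph * build_graph(struct ggml_context * ctx);

void compute(ggml_backend_t backend, struct ggml_context * ctx) {
    // graph allocator bound to the CPU buffer type
    ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());

    // allocate tensor data for this graph; ggml_gallocr_reserve() can
    // pre-size the buffers from a worst-case graph to avoid reallocations
    struct ggml_cgraph * graph = build_graph(ctx);
    ggml_gallocr_alloc_graph(galloc, graph);

    // run the graph on the backend, then release the allocator
    ggml_backend_graph_compute(backend, graph);
    ggml_gallocr_free(galloc);
}
```

In real code the bool results of ggml_gallocr_reserve() and ggml_gallocr_alloc_graph() should be checked; they fail if the backend cannot allocate the buffers.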
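The ggml-cpp.h entry (llama/10117) adds a C++-only header that wraps raw ggml handles in std::unique_ptr with custom deleters, so ggml_free() and friends run automatically. A minimal sketch of the pattern follows; ggml_context_ptr is the typedef name as I read the header, so treat the exact names as assumptions.

```cpp
#include "ggml.h"
#include "ggml-cpp.h"   // C++-only: unique_ptr typedefs with ggml deleters

int main() {
    ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ nullptr,
        /*.no_alloc   =*/ false,
    };

    // ggml_context_ptr owns the raw context; its deleter calls ggml_free(),
    // so the context cannot leak on early return or exception
    ggml_context_ptr ctx { ggml_init(params) };

    ggml_tensor * t = ggml_new_tensor_1d(ctx.get(), GGML_TYPE_F32, 8);
    (void) t;

    return 0;   // ggml_free(ctx) runs automatically here
}
```

The design choice is plain RAII: ownership stays explicit at the creation site, while cleanup order falls out of scope nesting instead of hand-written free calls.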