whisper.cpp/ggml/include
Djip007 e990d1b791 ggml : refactor online repacking (llama/10446)
* rename ggml-cpu-aarch64.c to .cpp

* Reformat the extra CPU backend.

- clean Q4_0_N_M and IQ4_0_N_M
  - remove from "file" tensor type
  - allow only with dynamic repack

- extract CPU extra buffer types (bufts) and convert to C++
  - hbm
  - "aarch64"

- more generic use of extra buffers
  - generalise extra_supports_op
  - new API for "cpu-accel":
     - amx
     - aarch64

* clang-format

* Clean Q4_0_N_M ref

Enable restrict on C++

* add op GGML_OP_MUL_MAT_ID for Q4_0_N_M with runtime repack

* Add/correct tensor-size checks for Q4 repacking.

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add debug logging for repacking.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-12-18 12:52:16 +02:00
ggml-alloc.h ggml : fix typo in example usage ggml_gallocr_new (ggml/984) 2024-10-05 15:23:51 +03:00
ggml-backend.h ggml : add support for dynamic loading of backends (llama/10469) 2024-12-08 20:14:35 +02:00
ggml-blas.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-cann.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-cpp.h llama : use smart pointers for ggml resources (llama/10117) 2024-11-15 15:21:04 +02:00
ggml-cpu.h ggml : refactor online repacking (llama/10446) 2024-12-18 12:52:16 +02:00
ggml-cuda.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-kompute.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-metal.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-opt.h ggml: new optimization interface (ggml/988) 2024-11-20 21:00:08 +02:00
ggml-rpc.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-sycl.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-vulkan.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml.h ggml : refactor online repacking (llama/10446) 2024-12-18 12:52:16 +02:00