whisper.cpp/ggml

Latest commit: 32f88af17b by luoyu-intel — Add oneDNN primitive support (llama/9091), 2024-08-28 13:22:20 +03:00

* add onednn
* add sycl_f16
* add dnnl stream
* add engine map
* use dnnl for intel only
* use fp16fp16fp16
* update doc
Name                          Last commit                                                      Date
cmake                         whisper : reorganize source code + improve CMake (#2256)         2024-06-26 19:34:09 +03:00
include                       llama : simplify Mamba with advanced batch splits (llama/8526)   2024-08-28 13:22:20 +03:00
src                           Add oneDNN primitive support (llama/9091)                        2024-08-28 13:22:20 +03:00
.gitignore                    whisper : reorganize source code + improve CMake (#2256)         2024-06-26 19:34:09 +03:00
CMakeLists.txt                cmake : remove unused option GGML_CURL (llama/9011)              2024-08-28 13:22:20 +03:00
ggml_vk_generate_shaders.py   whisper : reorganize source code + improve CMake (#2256)         2024-06-26 19:34:09 +03:00