ExternalVendorCode / whisper.cpp
Mirror of https://github.com/ggerganov/whisper.cpp.git, synced 2025-04-27 14:29:43 +00:00
whisper.cpp / ggml
Latest commit: 1d50c6ac22 by Jeff Bolz, 2025-04-24 20:39:16 +03:00
vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
cmake           ggml : sync/merge cmake,riscv,powerpc, add common.cmake (ggml/0)           2025-03-27 11:06:03 +02:00
include         ggml : add bilinear upscale support (ggml/1185)                            2025-04-24 20:39:16 +03:00
src             vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)  2025-04-24 20:39:16 +03:00
.gitignore      whisper : reorganize source code + improve CMake (#2256)                   2024-06-26 19:34:09 +03:00
CMakeLists.txt  ggml : sync/merge cmake,riscv,powerpc, add common.cmake (ggml/0)           2025-03-27 11:06:03 +02:00