mirror of
https://github.com/ggerganov/whisper.cpp.git
synced 2025-06-17 22:38:07 +00:00
whisper : add GPU support via cuBLAS (#834)
* make : add WHISPER_CUBLAS
* make : fix CUBLAS build
* whisper : disable Flash Attention + adjust memory buffers
* whisper : remove old commented code
* readme : add cuBLAS instructions
* cmake : add WHISPER_CUBLAS option
* gitignore : ignore build-cublas
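Based on the flags named in the commit message (`WHISPER_CUBLAS` for both make and CMake, and the ignored `build-cublas` directory), a cuBLAS-enabled build can be invoked roughly as follows; the exact steps may differ from the README instructions this commit adds, and a working CUDA toolkit is assumed:

```shell
# Build with cuBLAS via make (assumes the CUDA toolkit is installed)
make clean
WHISPER_CUBLAS=1 make -j

# Or via CMake, using the WHISPER_CUBLAS option added by this commit;
# the build-cublas directory matches the new .gitignore entry
cmake -B build-cublas -DWHISPER_CUBLAS=ON
cmake --build build-cublas -j
```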
@@ -1,4 +1,4 @@
-if (WHISPER_SUPPORT_SDL2)
+if (WHISPER_SDL2)
     # talk-llama
     set(TARGET talk-llama)
     #add_executable(${TARGET} talk-llama.cpp llama.cpp)