whisper.cpp/extra

Latest commit: 933c5bef97 by Georgi Gerganov (2023-11-10 22:26:50 +02:00)
whisper : support ggml_conv with CUDA and Metal (#1473)

* ggml : add CUDA support for ggml_conv
* whisper : remove ggml_repeat for conv bias + single backend
* cuda : fix im2col kernel
* metal : add im2col support + mul mat-vec f16 x f16
* bench-all : add q4 models
File              Last commit message                                                        Date
bench-all.sh      whisper : support ggml_conv with CUDA and Metal (#1473)                    2023-11-10 22:26:50 +02:00
bench-wts.sh      bench-wts.sh : rename script + add execute permission                      2023-03-06 21:02:24 +02:00
bench.py          extra: Add benchmark script implemented in Python (#1298)                  2023-09-25 23:45:15 +08:00
convert-all.sh    whisper : add support for large v3 (#1444)                                 2023-11-07 15:30:18 +02:00
deploy-wasm.sh    Node.js package (#260)                                                     2022-12-12 20:17:27 +02:00
quantize-all.sh   extra : update 'quantize-all.sh' to quantize all downloaded models (#1054) 2023-06-28 22:07:02 +03:00
sha-all.sh        extra : compute SHA of all models files                                    2022-11-02 18:31:55 +02:00
sync-ggml.sh      cuda : fix HIPBLAS build                                                   2023-11-05 19:41:15 +02:00