Commit Graph

864 Commits

Author SHA1 Message Date
geniusnut
ce6f747064
whisper.android : support decoding wav files with 2 channels (#972) 2023-05-31 10:13:14 +03:00
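
whisper expects 16 kHz mono float PCM, so stereo input is typically downmixed by averaging the two channels before inference. A minimal C++ sketch of that idea (illustrative only; the Android example's actual decode path may differ):

```
#include <cstddef>
#include <vector>

// Downmix interleaved stereo float PCM (L R L R ...) to mono by averaging the
// two channels. Illustrative only; the Android example's decoder may differ.
std::vector<float> stereo_to_mono(const std::vector<float> & stereo) {
    std::vector<float> mono(stereo.size() / 2);
    for (size_t i = 0; i < mono.size(); ++i) {
        mono[i] = 0.5f * (stereo[2*i] + stereo[2*i + 1]);
    }
    return mono;
}
```

The resulting mono buffer is what gets passed to `whisper_full()`.
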
Nicholas Albion
d7c936b44a
Feature/java bindings2 (#944)
* Java needs to call `whisper_full_default_params_by_ref()`; returning the struct by value does not seem to work (see the sketch after this entry).
* added convenience methods to WhisperFullParams
* Remove unused WhisperJavaParams
2023-05-29 09:38:58 +10:00
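
`whisper_full_default_params()` returns the params struct by value, which is hard to express across a JNI/JNA boundary; the `_by_ref` variant returns a pointer instead, which maps naturally to a native handle in Java. A minimal C++ sketch of the two calls (cleanup of the returned pointer is version-dependent and left out here; consult whisper.h):

```
#include "whisper.h"

void params_example() {
    // By value: convenient from C/C++, awkward to bind through JNA/JNI.
    struct whisper_full_params p_val = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    // By reference: returns a pointer, which Java bindings can hold as a native handle.
    struct whisper_full_params * p_ref = whisper_full_default_params_by_ref(WHISPER_SAMPLING_GREEDY);

    // ... configure p_val / *p_ref and pass to whisper_full() ...
    // NOTE: releasing p_ref is version-dependent; check whisper.h for the matching helper.
    (void) p_val;
    (void) p_ref;
}
```
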
genevera (she/her)
9b926844e3
models : fix README.md (#964)
Fixes typo on line 76 of models/README.md
2023-05-27 10:40:28 +03:00
DGdev91
5e2b3407ef
examples : update elevenlabs scripts to use official python API (#837)
* Update elevenlabs example to use official python API
2023-05-24 21:11:01 +03:00
0xsourcecode
4e16a8fb63
readme : highlight OpenBLAS support (#956)
* highlight OpenBLAS support

* Update README.md
2023-05-24 11:23:51 +03:00
Georgi Gerganov
77eab3fbfe
talk-llama : sync latest llama.cpp (close #922, close #954) 2023-05-23 14:04:39 +03:00
Alexey Kharlamov
041be06d58
cmake : build with any BLAS compatible library (#927)
* Build with any BLAS library

* ci: Removed explicit CUDA nvcc path
2023-05-20 21:23:45 +03:00
Georgi Gerganov
429b9785c0
ggml : update WASM SIMD 2023-05-20 20:00:06 +03:00
Georgi Gerganov
e410cfc3ce
ggml : sync latest ggml repo
- new Q4 and Q8 quantization
- updated CUDA
2023-05-20 18:56:30 +03:00
Nicholas Albion
bc89f285d8
bindings : add java bindings (#931)
* WIP - java bindings

* updated README

* failed attempt at JNI

* fullTranscribe() test passes

* tested on Ubuntu 20

* link to Java bindings
2023-05-20 18:25:02 +03:00
Elkana Bardugo
56a87ba45d
whisper : fix hebrew language code (#935) 2023-05-20 18:17:54 +03:00
Ahmad Bilal
95b02d76b0
coreml : add support of large-v1 model (#926) 2023-05-15 18:36:06 +03:00
Georgi Gerganov
a5defbc1b9
release : v1.4.2 2023-05-14 19:06:45 +03:00
Georgi Gerganov
aaf0d41c7c
ggml : add AVX dot products 2023-05-14 18:56:46 +03:00
Georgi Gerganov
0cb820e0f9
talk-llama : fix build + sync latest llama.cpp 2023-05-14 18:46:42 +03:00
Jhen-Jie Hong
16564f554f
readme : improve Core ML model conversion guidance (#915) 2023-05-14 18:11:08 +03:00
Georgi Gerganov
fd01209d09
coreml : support quantized model files 2023-05-14 18:09:44 +03:00
Georgi Gerganov
e693074aa6
ggml : sync latest ggml
- New Q4 and Q5 formats
- Various improvements
2023-05-14 18:04:23 +03:00
Rich Jones
d652cf12ec
main : fix help for --no-timestamps arg (#908) 2023-05-14 17:54:57 +03:00
Georgi Gerganov
2b6a074305
extra : update ggml sync script 2023-05-14 10:01:52 +03:00
Jhen-Jie Hong
5300117471
whisper.objc : enable Core ML in example & fix segmentation fault (#910)
* coreml : update encoder header import path

* coreml : force objc_arc in whisper-encoder.mm

* whisper.objc : create coreml/ group link

* whisper.objc : add coreml model link

* whisper.objc : update readme

* coreml : use -fobjc-arc for coreml/whisper-encoder.mm

* ci: create dummy .mlmodelc so the iOS build passes

* whisper.objc : update readme

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-14 09:47:02 +03:00
Georgi Gerganov
70af52a316
coreml : fix seg fault, double free (#919, #917, #899) 2023-05-14 09:42:19 +03:00
Georgi Gerganov
1d17cd5bb3
coreml : fix memory leak (#899) 2023-05-09 18:38:12 +03:00
Jonathan Soo
bf2449dfae
cmake : fix define used for COREML_ALLOW_FALLBACK (#893) 2023-05-08 21:08:09 +03:00
Luis Herrera
4e4d00c67a
talk-llama : only copy used KV cache in get / set state (#890)
---------

Co-authored-by: ejones <evan.q.jones@gmail.com>
2023-05-08 20:59:21 +03:00
Clifford Heath
9931d66400
readme : add instructions on converting to GGML + "--no-config" to wget (#874) 2023-05-08 20:58:36 +03:00
ZaBlazzingZephyrus
1a548c048e
cmake : fix options disabling AVX and AVX2 flags (#885) 2023-05-08 20:45:53 +03:00
Georgi Gerganov
14bee39b29
cmake : add options to disable CPU flags (#860) 2023-05-04 19:31:04 +03:00
RelatedTitle
d458fcbc15
ci : add cuBLAS build workflow and fix error causing lines in CMakeLists (#867)
* Add windows build with cuBLAS

* Remove error causing lines for cuBLAS on Windows
2023-05-03 23:47:37 +03:00
Vulcan
919e58b96a
readme : partial OpenCL GPU support via CLBlast (#863)
* ggml : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
WHISPER_CLBLAST=1 make

* CMake/Makefile : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
```
# Makefile:
cd whisper.cpp
WHISPER_CLBLAST=1 make

# CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ..
make
```

* Update README.md

Added OpenCL Build Instructions

* Instruction: Partial OpenCL GPU support via CLBlast

Added build instructions and examples for Make and CMake to support OpenCL enabled GPUs.
2023-05-03 19:24:43 +03:00
Vulcan
05bef0f0e9
build : CLBlast support as in llama.cpp (#862)
* ggml : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
WHISPER_CLBLAST=1 make

* CMake/Makefile : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
```
# Makefile:
cd whisper.cpp
WHISPER_CLBLAST=1 make

# CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ..
make
```
2023-05-02 22:50:32 +03:00
Georgi Gerganov
5974c8facd
ggml : fix 32-bit ARM build + quantization 2023-05-02 21:52:26 +03:00
Georgi Gerganov
0bcb64b184
ggml : sync ggml (clBLAST + tensor names) 2023-05-02 21:24:18 +03:00
Luis Herrera
0bf680fea2
talk-llama : fix session prompt load (#854) 2023-05-02 20:05:27 +03:00
CRD716
b806420873
whisper : add detect-language mode (#853)
* add detectlanguage flag

* renaming and help

* no idea why that last one didn't commit

* run language detection if --detect-language is set (see the sketch after this entry)

* help message fix

* various fixes

* fix quitting

* fix language being english on print
2023-05-02 19:51:52 +03:00
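
In C++ terms, the new mode roughly corresponds to enabling `detect_language` in the full params and reading the detected language back from the context. A minimal sketch, assuming a loaded `whisper_context` and a 16 kHz mono PCM buffer (field names and the early-exit behavior are taken from the commit description and whisper.h of this period):

```
#include "whisper.h"
#include <cstdio>

// Detect the spoken language of a 16 kHz mono PCM buffer without transcribing it.
void detect_language_only(struct whisper_context * ctx, const float * pcm, int n_samples) {
    struct whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.language        = "auto"; // let the model pick the language
    params.detect_language = true;   // stop after detection, like the --detect-language flag

    if (whisper_full(ctx, params, pcm, n_samples) == 0) {
        printf("detected language: %s\n", whisper_lang_str(whisper_full_lang_id(ctx)));
    }
}
```
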
Luis Herrera
be5911a9f3
talk-llama : add --session support (#845)
* feat: adding session support

* readme: adding --session info in examples/talk-llama

* llama: adding session fixes

* readme: updating session doc

* talk-llama: set need_to_save_session to true so the session is saved in the subsequent interaction

* talk-llama: adding missing function which updates session_tokens (see the sketch after this entry)
2023-05-01 20:18:10 +03:00
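
The --session feature builds on llama.cpp's session-file API: on startup the saved tokens are loaded back into the context, and on later interactions the evaluated tokens are written out again. A minimal C++ sketch under that assumption (function names follow the llama.cpp API of this period; exact signatures may differ):

```
#include "llama.h"
#include <vector>

// Try to restore a previous session from disk; returns false if none exists yet.
bool restore_session(struct llama_context * lctx, const char * path, std::vector<llama_token> & session_tokens) {
    session_tokens.resize(llama_n_ctx(lctx));
    size_t n_loaded = 0;
    if (!llama_load_session_file(lctx, path, session_tokens.data(), session_tokens.size(), &n_loaded)) {
        session_tokens.clear();
        return false;
    }
    session_tokens.resize(n_loaded);
    return true;
}

// Persist the tokens evaluated so far (this is what need_to_save_session gates in talk-llama).
void save_session(struct llama_context * lctx, const char * path, const std::vector<llama_token> & session_tokens) {
    llama_save_session_file(lctx, path, session_tokens.data(), session_tokens.size());
}
```
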
Georgi Gerganov
d375d73b2e
bench : improve benchmarks 2023-05-01 14:44:39 +03:00
Georgi Gerganov
7765770f89
whisper : add memory sizes for Q8_0 (close #846) 2023-05-01 10:03:56 +03:00
Baffin Lee
872a85ae94
whisper.wasm : fix typo in readme (#832) 2023-05-01 09:28:05 +03:00
Georgi Gerganov
9c61f5f585
release : v1.4.1 2023-04-30 22:57:42 +03:00
Georgi Gerganov
c94c469592
whisper : fix quantize bug (#842)
* whisper : debug

* whisper : fix bug during quantization
2023-04-30 22:50:04 +03:00
Georgi Gerganov
feac80dd3f
ggml : fix UB (int << 31) 2023-04-30 22:27:30 +03:00
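
The issue here is that `1 << 31` shifts a signed `int` past its representable range, which is undefined behavior; shifting an unsigned value is well defined. A tiny illustration (not the exact ggml code):

```
#include <cstdint>

// Undefined behavior: the literal 1 is a signed int, and 1 << 31 overflows it.
// int32_t bad_mask = 1 << 31;

// Well defined: shift an unsigned value instead.
uint32_t good_mask = 1u << 31;
```
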
Georgi Gerganov
fa8dbdc888
release : v1.4.0 2023-04-30 19:23:37 +03:00
Georgi Gerganov
4a7d49af95
examples : fix + refactor Levenshtein distance 2023-04-30 19:12:49 +03:00
Georgi Gerganov
794b162a46
whisper : add integer quantization support (#540)
* whisper : add integer quantization support

* examples : add common-ggml + prepare to add "quantize" tool

* whisper : quantization tool ready

* whisper : fix F32 support

* whisper : try to fix shared lib linkage

* wasm : update quantized models to Q5

* bench.wasm : remove "medium" button

* bench.wasm : fix custom model button

* ggml : add Q5_0 and Q5_1 WASM SIMD

* wasm : add quantized models to all WASM examples

* wasm : bump DB version number to 2

* talk-llama : update example to latest llama.cpp

* node : increase test timeout to 10s

* readme : add information for model quantization (an illustrative sketch follows this entry)

* wasm : add links to other examples
2023-04-30 18:51:57 +03:00
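
As background, ggml-style integer quantization works on small blocks of floats: each block stores a single scale plus low-bit integers, which is what makes the Q4/Q5/Q8 model files much smaller. The sketch below shows the general idea for a symmetric 4-bit scheme; the block size, bias, and byte layout are illustrative and do not reproduce the exact ggml formats.

```
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize one block of 32 floats to 4-bit values plus a per-block scale.
// Illustrative only: the real ggml block formats differ in layout and rounding.
struct BlockQ4 {
    float   d;      // per-block scale
    uint8_t q[16];  // 32 x 4-bit values, two per byte
};

BlockQ4 quantize_block(const float * x) {
    float amax = 0.0f;
    for (int i = 0; i < 32; ++i) amax = std::max(amax, std::fabs(x[i]));

    BlockQ4 b{};
    b.d = amax / 7.0f;                        // map [-amax, amax] onto [-7, 7]
    const float id = b.d ? 1.0f / b.d : 0.0f; // inverse scale (0 if the block is all zeros)

    for (int i = 0; i < 16; ++i) {
        const int v0 = (int) std::lround(x[2*i + 0] * id) + 8; // bias into [1, 15]
        const int v1 = (int) std::lround(x[2*i + 1] * id) + 8;
        b.q[i] = (uint8_t) ((v0 & 0x0F) | ((v1 & 0x0F) << 4));
    }
    return b;
}

// Dequantization recovers x ≈ d * (q - 8) for each 4-bit value q.
```
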
Georgi Gerganov
5fd1bdd7fc
whisper : add GPU support via cuBLAS (#834)
* make : add WHISPER_CUBLAS

* make : fix CUBLAS build

* whisper : disable Flash Attention + adjust memory buffers

* whisper : remove old commented code

* readme : add cuBLAS instructions

* cmake : add WHISPER_CUBLAS option

* gitignore : ignore build-cublas
2023-04-30 12:14:33 +03:00
Georgi Gerganov
0ccd6746c9
ggml : fix WASM build 2023-04-29 21:37:23 +03:00
Georgi Gerganov
d9b550c0a1
ggml : fix 32-bit ARM NEON (#836)
* ggml : add support for 32-bit ARM

* ggml : fix

* ggml : fix
2023-04-29 21:33:33 +03:00
Georgi Gerganov
e9b091c92a
ggml : use vzip instead of vuzp for consistency 2023-04-29 21:14:09 +03:00
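
For reference, `vzip` interleaves the lanes of two NEON vectors while `vuzp` de-interleaves them, so using one consistently avoids subtle lane-order mismatches between code paths. A tiny illustration (requires an ARM NEON target; lane ordering in the comments follows the standard intrinsic behavior):

```
#include <arm_neon.h>

// Given a = {a0..a7} and b = {b0..b7}:
//   vzip_u8(a, b).val[0] = {a0,b0,a1,b1,a2,b2,a3,b3}   (interleave)
//   vuzp_u8(a, b).val[0] = {a0,a2,a4,a6,b0,b2,b4,b6}   (de-interleave, even lanes)
void zip_vs_uzp(uint8x8_t a, uint8x8_t b, uint8x8x2_t * zipped, uint8x8x2_t * unzipped) {
    *zipped   = vzip_u8(a, b);
    *unzipped = vuzp_u8(a, b);
}
```
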
Georgi Gerganov
1f30b99208
ggml : fix WASM build 2023-04-29 20:21:25 +03:00