Przemysław Pawełczyk
62642bb61c
talk-llama : fix build after ggml sync ( #1049 )
...
sed -i 's,GGML_BACKEND_CUDA,GGML_BACKEND_GPU,g' examples/talk-llama/llama.cpp
2023-06-25 16:13:50 +03:00
Roddur Dasgupta
f11f33f1c0
models : cd statements are quoted to allow spaces in path ( #1041 )
2023-06-25 15:27:28 +03:00
Colin
14baf2e7f3
main : add diarization support for all current output types ( #1031 )
...
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-06-25 15:07:57 +03:00
Georgi Gerganov
5feb0dffba
ggml : sync latest ggml lib
2023-06-25 14:30:44 +03:00
faker
598f607e28
main : gracefully exit when invalid params are passed ( #1002 )
...
* Refactor whisper_params_parse to return false on failure
* Updated help flag behavior
2023-06-25 13:51:59 +03:00
Nicholas Albion
5b9e59bc07
speak scripts for Windows
2023-06-01 22:45:00 +10:00
geniusnut
ce6f747064
whisper.android : support decoding wav files with 2 channels ( #972 )
2023-05-31 10:13:14 +03:00
DGdev91
5e2b3407ef
examples : update elevenlabs scripts to use official python API ( #837 )
...
* Update elevenlabs example to use official python API
2023-05-24 21:11:01 +03:00
Georgi Gerganov
77eab3fbfe
talk-llama : sync latest llama.cpp ( close #922 , close #954 )
2023-05-23 14:04:39 +03:00
Georgi Gerganov
e410cfc3ce
ggml : sync latest ggml repo
...
- new Q4 and Q8 quantization
- updated CUDA
2023-05-20 18:56:30 +03:00
Georgi Gerganov
0cb820e0f9
talk-llama : fix build + sync latest llama.cpp
2023-05-14 18:46:42 +03:00
Georgi Gerganov
e693074aa6
ggml : sync latest ggml
...
- New Q4 and Q5 formats
- Various improvements
2023-05-14 18:04:23 +03:00
Rich Jones
d652cf12ec
main : fix help for --no-timestamps arg ( #908 )
2023-05-14 17:54:57 +03:00
Jhen-Jie Hong
5300117471
whisper.objc : enable Core ML in example & fix segmentation fault ( #910 )
...
* coreml : update encoder header import path
* coreml : force objc_arc in whisper-encoder.mm
* whisper.objc : create coreml/ group link
* whisper.objc : add coreml model link
* whisper.objc : update readme
* coreml : use -fobjc-arc for coreml/whisper-encoder.mm
* ci : create dummy .mlmodelc to pass the iOS build
* whisper.objc : update readme
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-14 09:47:02 +03:00
Luis Herrera
4e4d00c67a
talk-llama : only copy used KV cache in get / set state ( #890 )
...
---------
Co-authored-by: ejones <evan.q.jones@gmail.com>
2023-05-08 20:59:21 +03:00
Luis Herrera
0bf680fea2
talk-llama : fix session prompt load ( #854 )
2023-05-02 20:05:27 +03:00
CRD716
b806420873
whisper : add detect-language mode ( #853 )
...
* add detectlanguage flag
* renaming and help
* no idea why that last one didn't commit
* run language detection if dl is set
* help message fix
* various fixes
* fix quitting
* fix language being english on print
2023-05-02 19:51:52 +03:00
Luis Herrera
be5911a9f3
talk-llama : add --session support ( #845 )
...
* feat: adding session support
* readme: adding --session info in examples/talk-llama
* llama: adding session fixes
* readme: updating session doc
* talk-llama: update the value of need_to_save_session to true in order to save the session in the subsequent interaction
* talk-llama: adding missing function which updates session_tokens
2023-05-01 20:18:10 +03:00
Georgi Gerganov
7765770f89
whisper : add memory sizes for Q8_0 ( close #846 )
2023-05-01 10:03:56 +03:00
Baffin Lee
872a85ae94
whisper.wasm : fix typo in readme ( #832 )
2023-05-01 09:28:05 +03:00
Georgi Gerganov
c94c469592
whisper : fix quantize bug ( #842 )
...
* whisper : debug
* whisper : fix bug during quantization
2023-04-30 22:50:04 +03:00
Georgi Gerganov
4a7d49af95
examples : fix + refactor Levenshtein distance
2023-04-30 19:12:49 +03:00
Georgi Gerganov
794b162a46
whisper : add integer quantization support ( #540 )
...
* whisper : add integer quantization support
* examples : add common-ggml + prepare to add "quantize" tool
* whisper : quantization tool ready
* whisper : fix F32 support
* whisper : try to fix shared lib linkage
* wasm : update quantized models to Q5
* bench.wasm : remove "medium" button
* bench.wasm : fix custom model button
* ggml : add Q5_0 and Q5_1 WASM SIMD
* wasm : add quantized models to all WASM examples
* wasm : bump DB version number to 2
* talk-llama : update example to latest llama.cpp
* node : increase test timeout to 10s
* readme : add information for model quantization
* wasm : add links to other examples
2023-04-30 18:51:57 +03:00
Georgi Gerganov
5fd1bdd7fc
whisper : add GPU support via cuBLAS ( #834 )
...
* make : add WHISPER_CUBLAS
* make : fix CUBLAS build
* whisper : disable Flash Attention + adjust memory buffers
* whisper : remove old commented code
* readme : add cuBLAS instructions
* cmake : add WHISPER_CUBLAS option
* gitignore : ignore build-cublas
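Based on the options this commit introduces (`WHISPER_CUBLAS` for both Make and CMake, and the ignored `build-cublas` directory), a cuBLAS-enabled build would look roughly like this; the exact invocations are a sketch, not taken from the commit itself:

```shell
# Make-based build: enable the WHISPER_CUBLAS switch added by this commit.
WHISPER_CUBLAS=1 make -j

# CMake-based build: use the WHISPER_CUBLAS option, building into the
# build-cublas directory that the commit adds to .gitignore.
cmake -B build-cublas -DWHISPER_CUBLAS=ON
cmake --build build-cublas -j
```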
2023-04-30 12:14:33 +03:00
Zollner
5cc17418c7
whisper.android : add some tips ( #816 )
2023-04-29 11:00:20 +03:00
Laytan Laats
70567eff23
main : escape quotes in csv output ( #815 )
2023-04-23 19:01:59 +03:00
Taras Glek
02ec83c5d5
stream : flush upon finishing inference ( #811 )
2023-04-23 17:00:30 +03:00
Philipp Zabel
2bd4b8d577
examples : add missing #include <cstdint> ( #798 )
...
common.cpp uses uint8_t and uint64_t, which are defined in <cstdint>.
2023-04-23 16:52:52 +03:00
Tauseef Mohiuddin
eecf2c3d41
main : update escape_double_quotes() function ( #776 )
...
Updated the escape_double_quotes() function so that it now escapes both double quotes and backslashes in the input string.
Changes made:
- Renamed the function to escape_quotes_and_backslashes
- Modified the condition in the first loop to increment 'escaped_length' for both double quotes and backslashes.
- Modified the condition in the second loop to add a backslash before the current character if it is a double quote or a backslash.
Resolves : #769
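The two-pass scheme described above (count the escaped length first, then build the output) can be sketched as follows; this is a minimal reconstruction from the commit description, not the exact code from the PR:

```cpp
#include <string>

// Two-pass escaping as described in the commit:
// pass 1 measures the escaped length, pass 2 builds the result.
std::string escape_quotes_and_backslashes(const std::string & in) {
    // Pass 1: count characters that need a preceding backslash.
    size_t escaped_length = in.size();
    for (char c : in) {
        if (c == '"' || c == '\\') {
            escaped_length++;
        }
    }

    // Pass 2: copy, inserting a backslash before each quote or backslash.
    std::string out;
    out.reserve(escaped_length);
    for (char c : in) {
        if (c == '"' || c == '\\') {
            out += '\\';
        }
        out += c;
    }
    return out;
}
```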
2023-04-23 16:47:30 +03:00
Georgi Gerganov
f19e23fbd1
whisper : restore decoder temperature fallbacks
...
I disabled this because there were many complaints about slow decoding.
The current implementation does not allow batching the decoders when
using the "best of" or "beam size" parameters, so the decoding time is
proportional to the number of decoders, which is obviously not great.
However, now there are even more complaints about wrong decodings and
repetition.
So, making a compromise by re-enabling the fallbacks, but defaulting to
just 2 "best of" / "beam size" decoders. Also, the temperature step is
increased from 0.2 to 0.4 - i.e. from maximum of 5 fallbacks to maximum
of 2.
Also, the stream example now has fallbacks enabled by default.
close #471 #477 #508 #612 #719 #731
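The temperature schedule implied by these numbers can be sketched as follows; `fallback_temperatures` is a hypothetical helper for illustration, not the actual whisper.cpp implementation:

```cpp
#include <vector>

// Fallback schedule: start at temperature 0.0 and, on decoding failure,
// retry at increasing temperatures up to 1.0. With a step of 0.4 this
// yields {0.0, 0.4, 0.8}, i.e. at most 2 fallbacks after the initial
// attempt; the old 0.2 step gave {0.0, 0.2, ..., 1.0}, i.e. up to 5.
std::vector<float> fallback_temperatures(float t_inc) {
    std::vector<float> ts;
    for (float t = 0.0f; t <= 1.0f + 1e-6f; t += t_inc) {
        ts.push_back(t);
    }
    return ts;
}
```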
2023-04-15 16:12:55 +03:00
Bader-eddine Ouaich
2c856fb9e5
whisper : fix potential memory leaks ( #740 )
...
* fix potential memory leak if whisper_init_state failed
* fix potential memory leak if gpt2_init failed
2023-04-14 20:05:56 +03:00
Ali Alameh
2c4ac2627d
stream : support language auto-detect ( #501 )
...
Fixes #445: the language auto-detect ("auto" flag) did not work in the stream tool.
2023-04-14 20:02:18 +03:00
DGdev91
001083a769
talk, talk-llama : add basic example script for eleven-labs tts ( #728 )
2023-04-14 19:53:58 +03:00
Maciek
78548dc03f
talk-llama : correct default speak.sh path ( #720 )
...
There is `speak.sh` file in `./examples/talk-llama` as described in README.
However, `./examples/talk/speak.sh` was used in `talk-llama.cpp`; this commit corrects that.
2023-04-14 19:36:09 +03:00
LittleLoli
66110dafcc
main : add lrc output support ( #718 )
...
* add lrc output support.
* fix wrong comment
2023-04-14 19:35:33 +03:00
Georgi Gerganov
514cd04452
whisper : fix bug in prompt processing ( close #705 )
...
Was dereferencing a dangling pointer
2023-04-14 19:17:07 +03:00
Georgi Gerganov
114df388fe
talk-llama : increase context to 2048
2023-04-10 23:09:15 +03:00
Georgi Gerganov
ea36831459
talk-llama : update to latest llama.cpp (improved performance)
2023-04-10 22:59:13 +03:00
InconsolableCellist
5e6e2187a3
talk-llama : fixing usage message for talk-llama ( #687 )
...
"-ml" instead of "-mg" for specifying the llama file
2023-03-30 00:10:20 +03:00
Georgi Gerganov
a7f1f33715
main : add <cstring> header
2023-03-29 23:59:45 +03:00
Lucas Zanek
86ecfc6333
whisper.addon : fixed test to new async implementation ( #686 )
...
* fixed blocking code on node addon
* modify the example to run async
* format
* added logic to see the whisper output
* added logic to see the whisper output
* removed extra function for more clean example
* fixed whisper test to new async implementation
2023-03-29 23:59:17 +03:00
Egor Egorov
0f759f125d
main : fix typo in JSON output ( #648 )
...
* typo in JSON output
* fix double quotes in JSON output
2023-03-29 23:26:39 +03:00
Jhen-Jie Hong
eefed45e37
whisper : add initial_prompt param ( #645 )
2023-03-29 23:23:23 +03:00
Jonno
21c1e6afc5
whisper.swiftui : update README.md ( #682 )
...
- Slight tweaks to README for improved comprehension.
2023-03-29 23:04:38 +03:00
Evan Jones
a47e812a54
talk-llama : add alpaca support ( #668 )
2023-03-29 23:01:14 +03:00
Georgi Gerganov
e5c197d8aa
talk-llama : add discussion link
2023-03-28 10:11:34 +03:00
Georgi Gerganov
7cd1d3bc34
talk-llama : try to fix windows build ..
2023-03-27 22:40:59 +03:00
Georgi Gerganov
4a0deb8b1e
talk-llama : add new example + sync ggml from llama.cpp ( #664 )
...
* talk-llama : talk with LLaMA AI
* talk.llama : disable EOS token
* talk-llama : add README instructions
* ggml : fix build in debug
2023-03-27 21:00:32 +03:00
Lucas Zanek
21165580a1
Nodejs Addon blocking main thread. Implemented Napi::AsyncWorker ( #642 )
...
* fixed blocking code on node addon
* modify the example to run async
* format
* added logic to see the whisper output
* added logic to see the whisper output
* removed extra function for more clean example
2023-03-22 22:19:22 +02:00
Jhen-Jie Hong
1d749919e3
whisper.objc : add -O3 -DNDEBUG in release mode ( #640 )
2023-03-22 22:16:04 +02:00