mirror of https://github.com/ggerganov/whisper.cpp.git
synced 2024-12-20 05:07:52 +00:00

Minor

commit 63b6786767
parent f7ab81fe51
@@ -12,7 +12,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Zero memory allocations at runtime
 - Runs on the CPU
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
-- Supported platforms: Linux, Mac OS (Intel and Arm), Raspberry Pi, Android
+- Supported platforms: Linux, Mac OS (Intel and Arm), Windows (MinGW), Raspberry Pi, Android

 ## Usage

@@ -34,7 +34,7 @@ For a quick demo, simply run `make base.en`:

 ```java
 $ make base.en
 cc -O3 -std=c11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c ggml.c
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread -c whisper.cpp
 c++ -O3 -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -pthread main.cpp whisper.o ggml.o -o main
 ./main -h
@@ -248,6 +248,8 @@ The original models are converted to a custom binary format. This allows to pack
 - vocabulary
 - weights

-You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script.
+You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script or from here:
+
+https://ggml.ggerganov.com

 For more details, see the conversion script [convert-pt-to-ggml.py](convert-pt-to-ggml.py) or the README in [models](models).
@@ -4,7 +4,7 @@ The [original Whisper PyTorch models provided by OpenAI](https://github.com/open
 have been converted to custom `ggml` format in order to be able to load them in C/C++. The conversion has been performed using the
 [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script. You can either obtain the original models and generate the `ggml` files
 yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the
-already converted models.
+already converted models from https://ggml.ggerganov.com

 Sample usage:

@@ -2387,7 +2387,7 @@ int whisper_full(
 // print the prompt
 //printf("\n\n");
 //for (int i = 0; i < prompt.size(); i++) {
-// printf("%s: prompt[%d] = %s\n", __func__, i, vocab.id_to_token[prompt[i]].c_str());
+// printf("%s: prompt[%d] = %s\n", __func__, i, ctx->vocab.id_to_token[prompt[i]].c_str());
 //}
 //printf("\n\n");
