commit 9931d66400 (parent 1a548c048e)
https://github.com/ggerganov/whisper.cpp.git

readme : add instructions on converting to GGML + "--no-config" to wget (#874)
--- a/README.md
+++ b/README.md
@@ -71,6 +71,8 @@ Then, download one of the Whisper models converted in [ggml format](models). For
 bash ./models/download-ggml-model.sh base.en
 ```
 
+If you wish to convert the Whisper models to ggml format yourself, instructions are in [models/README.md](models/README.md).
+
 Now build the [main](examples/main) example and transcribe an audio file like this:
 
 ```bash
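The context lines of this hunk come from the README's quick-start walk-through. For orientation, here is a minimal sketch of the full sequence that section describes; the `make` step is an assumption about the project's documented build, since only the download and transcription commands actually appear in this hunk's context:

```bash
# Fetch a pre-converted ggml model (base.en), build, then transcribe
# the bundled sample. `make` is assumed here and is not part of this diff.
bash ./models/download-ggml-model.sh base.en
make
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```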
--- a/models/README.md
+++ b/models/README.md
@@ -1,15 +1,17 @@
 ## Whisper model files in custom ggml format
 
 The [original Whisper PyTorch models provided by OpenAI](https://github.com/openai/whisper/blob/main/whisper/__init__.py#L17-L27)
-have been converted to custom `ggml` format in order to be able to load them in C/C++. The conversion has been performed
-using the [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script. You can either obtain the original models and generate
-the `ggml` files yourself using the conversion script, or you can use the [download-ggml-model.sh](download-ggml-model.sh)
-script to download the already converted models. Currently, they are hosted on the following locations:
+are converted to custom `ggml` format in order to be able to load them in C/C++.
+Conversion is performed using the [convert-pt-to-ggml.py](convert-pt-to-ggml.py) script.
+
+You can either obtain the original models and generate the `ggml` files yourself using the conversion script,
+or you can use the [download-ggml-model.sh](download-ggml-model.sh) script to download the already converted models.
+Currently, they are hosted on the following locations:
 
 - https://huggingface.co/ggerganov/whisper.cpp
 - https://ggml.ggerganov.com
 
-Sample usage:
+Sample download:
 
 ```java
 $ ./download-ggml-model.sh base.en
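The hunk above lists where the converted models are hosted. As a hedged sketch of fetching one directly instead of via the script: the `resolve/main` URL layout is an assumption about the Hugging Face repository named above, not something stated in this diff:

```bash
# Hypothetical direct fetch from the Hugging Face repo listed above.
# Assumes the standard <repo>/resolve/main/<file> URL layout;
# ggml-base.en.bin matches the sample download shown in the hunk.
wget --no-config -O models/ggml-base.en.bin \
    https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
```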
@@ -21,6 +23,16 @@ You can now use it like this:
 $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
 ```
 
+To convert the files yourself, use the convert-pt-to-ggml.py script. Here is an example usage.
+The original PyTorch files are assumed to have been downloaded into ~/.cache/whisper
+Change `~/path/to/repo/whisper/` to the location for your copy of the Whisper source:
+```
+mkdir models/whisper-medium
+python models/convert-pt-to-ggml.py ~/.cache/whisper/medium.pt ~/path/to/repo/whisper/ ./models/whisper-medium
+mv ./models/whisper-medium/ggml-model.bin models/ggml-medium.bin
+rmdir models/whisper-medium
+```
+
 A third option to obtain the model files is to download them from Hugging Face:
 
 https://huggingface.co/ggerganov/whisper.cpp/tree/main
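The conversion walk-through added above is written out for `medium`, but the four steps generalize to any model name. A hypothetical wrapper under the same assumptions (checkpoint already in `~/.cache/whisper`; `WHISPER_SRC`, a variable introduced here and not in the diff, points at a checkout of the openai/whisper sources):

```bash
#!/usr/bin/env bash
set -e
# Hypothetical generalization of the steps added in the hunk above.
# $1          : model name, e.g. tiny, base, small, medium, large
# WHISPER_SRC : path to a local copy of the openai/whisper sources
model=${1:-medium}
mkdir -p "models/whisper-$model"
python models/convert-pt-to-ggml.py "$HOME/.cache/whisper/$model.pt" \
    "$WHISPER_SRC" "./models/whisper-$model"
mv "./models/whisper-$model/ggml-model.bin" "models/ggml-$model.bin"
rmdir "models/whisper-$model"
```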
--- a/models/download-ggml-model.sh
+++ b/models/download-ggml-model.sh
@@ -62,7 +62,7 @@ if [ -f "ggml-$model.bin" ]; then
 fi
 
 if [ -x "$(command -v wget)" ]; then
-    wget --quiet --show-progress -O ggml-$model.bin $src/$pfx-$model.bin
+    wget --no-config --quiet --show-progress -O ggml-$model.bin $src/$pfx-$model.bin
 elif [ -x "$(command -v curl)" ]; then
     curl -L --output ggml-$model.bin $src/$pfx-$model.bin
 else
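The only change in this hunk is wget's `--no-config` flag, which stops wget from reading `~/.wgetrc`, so a user's local configuration cannot interfere with the scripted download. The curl branch is untouched by this commit; if symmetric behaviour were wanted there, curl's `-q` flag (which must be the first argument) skips `~/.curlrc`. A sketch, not part of this diff:

```bash
# Hypothetical symmetric hardening of the curl branch: -q must come
# first and makes curl ignore ~/.curlrc, mirroring wget --no-config.
curl -q -L --output ggml-$model.bin $src/$pfx-$model.bin
```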