Mirror of https://github.com/ggerganov/whisper.cpp.git
Update README.md
commit 3996ecc156
parent faa85f9840
README.md: 30 changed lines (15 additions, 15 deletions)
@@ -52,21 +52,6 @@ The tensor operators are optimized heavily for Apple silicon CPUs. Depending on
 intrinsics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since
 the Accelerate framework utilizes the special-purpose AMX coprocessor available in modern Apple products.
 
-## Limitations
-
-- Inference only
-- No GPU support
-- Very basic greedy sampling scheme - always picks the token with the highest probability.
-  This should be similar to the [GreedyDecoder](https://github.com/openai/whisper/blob/main/whisper/decoding.py#L249-L274)
-  from the original Python implementation, so to make a fair comparison between the two implementations, make sure
-  to run the Python code with the following parameters:
-
-  ```
-  whisper --best_of None --beam_size None ...
-  ```
-
-In the future, `whisper.cpp` will support more sampling strategies.
-
 ## Quick start
 
 First, download one of the Whisper models converted in [ggml format](models). For example:
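The context kept by the hunk above explains that larger matrix multiplications are routed through the CBLAS interface of Apple's Accelerate framework. As a rough illustration of what such a call looks like, here is a minimal standalone sketch of a single-precision GEMM through that interface; the matrix sizes and file name in the build comment are made up for the example and are not taken from whisper.cpp:

```c
// Minimal sketch of a CBLAS call through Apple's Accelerate framework.
// Build on macOS with: clang gemm.c -framework Accelerate
// The sizes below are illustrative only.
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    enum { M = 2, N = 2, K = 3 };
    float A[M * K] = {1, 2, 3,
                      4, 5, 6};   // M x K, row-major
    float B[K * N] = {7,  8,
                      9,  10,
                      11, 12};    // K x N, row-major
    float C[M * N] = {0};         // M x N result

    // C = 1.0 * A * B + 0.0 * C
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);

    printf("%.0f %.0f\n%.0f %.0f\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```

Per the README text above, dispatching big multiplications through Accelerate like this is what brings the AMX coprocessor into play on modern Apple hardware.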
@@ -220,6 +205,21 @@ make large
 | medium | 1.5 GB | ~2.6 GB | `fd9727b6e1217c2f614f9b698455c4ffd82463b4` |
 | large  | 2.9 GB | ~4.7 GB | `0f4c8e34f21cf1a914c59d8b3ce882345ad349d6` |
 
+## Limitations
+
+- Inference only
+- No GPU support
+- Very basic greedy sampling scheme - always picks the token with the highest probability.
+  This should be similar to the [GreedyDecoder](https://github.com/openai/whisper/blob/main/whisper/decoding.py#L249-L274)
+  from the original Python implementation, so to make a fair comparison between the two implementations, make sure
+  to run the Python code with the following parameters:
+
+  ```
+  whisper --best_of None --beam_size None ...
+  ```
+
+In the future, `whisper.cpp` will support more sampling strategies.
+
 ## Another example
 
 Here is another example of transcribing a [3:24 min speech](https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg)