# talk
Talk with an Artificial Intelligence in your terminal
Web version: examples/talk.wasm
## Building
The `talk` tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:
```bash
# Install SDL2
# On Debian based linux distributions:
sudo apt-get install libsdl2-dev

# On Fedora Linux:
sudo dnf install SDL2 SDL2-devel

# Install SDL2 on Mac OS
brew install sdl2

# Build the "talk" executable
make talk

# Run it
./talk -p Santa
```
## GPT-2
To run this, you will need a ggml GPT-2 model: instructions
Alternatively, you can simply download the smallest ggml GPT-2 117M model (240 MB) like this:
```bash
wget --quiet --show-progress -O models/ggml-gpt-2-117M.bin https://huggingface.co/ggerganov/ggml/resolve/main/ggml-model-gpt-2-117M.bin
```
## TTS
For the best experience, this example needs a TTS tool to convert the generated text responses to voice. You can use any TTS engine that you would like - simply edit the `speak` script to your needs. By default, it is configured to use macOS's `say`, `espeak`, or the Windows SpeechSynthesizer, but you can use whatever you wish.
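
As a rough illustration, here is a minimal sketch of what a customized `speak` script could look like, falling back from `say` to `espeak`. The calling convention shown (the text to speak passed as the last argument) is an assumption for this sketch - check the bundled `speak` script for the exact interface before adapting it:

```bash
#!/bin/bash

# Sketch of a custom "speak" script (not the repository's version).
# Assumption: the generated text is passed as the last argument.
text="${@: -1}"

if command -v say >/dev/null 2>&1; then
    # macOS built-in TTS
    say "$text"
elif command -v espeak >/dev/null 2>&1; then
    # espeak: -v selects the voice, -s the speed in words per minute
    espeak -v en -s 175 "$text"
else
    echo "No TTS engine found; install 'espeak' or use macOS 'say'." >&2
fi
```

On Windows, the equivalent changes would go into the PowerShell variant of the script instead.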