talk
Talk with an Artificial Intelligence in your terminal
Web version: examples/talk.wasm
Building
The talk tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:
# Install SDL2 on Linux
sudo apt-get install libsdl2-dev
# Install SDL2 on macOS
brew install sdl2
# Build the "talk" executable
make talk
# Run it
./talk -p Santa
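If the build fails because SDL2 cannot be found, it usually helps to confirm that the SDL2 development files are actually visible to the build. A minimal check, assuming pkg-config or the sdl2-config helper that ships with SDL2 is installed:
# Either command should print an SDL2 version number (e.g. 2.0.x);
# if both fail, install the libsdl2 development package first.
pkg-config --modversion sdl2 || sdl2-config --version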
GPT-2
To run this, you will need a ggml GPT-2 model: instructions
Alternatively, you can simply download the smallest ggml GPT-2 117M model (240 MB) like this:
wget --quiet --show-progress -O models/ggml-gpt-2-117M.bin https://huggingface.co/ggerganov/ggml/raw/main/ggml-model-gpt-2-117M.bin
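If you prefer a slightly more defensive download, the sketch below wraps the same URL and file name as above: it creates the models directory if needed and skips the transfer when the file is already present.
# Download the ggml GPT-2 117M model only if it is not already there
mkdir -p models
if [ ! -f models/ggml-gpt-2-117M.bin ]; then
    wget --quiet --show-progress -O models/ggml-gpt-2-117M.bin \
        https://huggingface.co/ggerganov/ggml/raw/main/ggml-model-gpt-2-117M.bin
fi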
TTS
For the best experience, this example needs a TTS tool to convert the generated text responses to voice.
You can use any TTS engine that you would like - simply edit the speak script to your needs.
By default, it is configured to use macOS's say, espeak, or the Windows SpeechSynthesizer, but you can use whatever you wish.
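As an illustration of how another engine can be plugged in, here is a minimal sketch of a custom speak script. It assumes the calling convention where the first argument is a voice id and the remaining arguments are the text to speak; verify this against talk.cpp and the bundled speak script before relying on it.
#!/bin/bash
# Minimal custom "speak" sketch.
# Assumption: $1 is a voice id, the remaining arguments are the text to speak.
shift
text="$*"

# Prefer macOS's `say`, fall back to `espeak` elsewhere.
if command -v say >/dev/null 2>&1; then
    say "$text"
elif command -v espeak >/dev/null 2>&1; then
    espeak "$text"
else
    echo "speak: no TTS engine found" >&2
fi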