Mirror of https://github.com/ggerganov/whisper.cpp.git (synced 2024-12-27 15:58:50 +00:00)
A sample SwiftUI app using whisper.cpp to do voice-to-text transcriptions. See also: whisper.objc.
**Usage**:

1. Select a model from the whisper.cpp repository.[^1]
2. Add the model to `whisper.swiftui.demo/Resources/models` via Xcode.
3. Select a sample audio file (for example, `jfk.wav`).
4. Add the sample audio file to `whisper.swiftui.demo/Resources/samples` via Xcode.
5. Select the "Release"[^2] build configuration under "Run", then deploy and run it on your device.
**Note:** Pay attention to the folder paths: `whisper.swiftui.demo/Resources/models` is the appropriate directory in which to place resources, whilst `whisper.swiftui.demo/Models` contains the actual code.
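Under the hood, the transcription boils down to a handful of calls into whisper.cpp's C API, exposed to Swift through a bridging header that imports `whisper.h`. The following is a minimal, hypothetical sketch (not the demo's actual source): the `transcribe` helper is illustrative, and it assumes `samples` already holds 16 kHz mono Float32 PCM decoded from the WAV file.

```swift
import Foundation

// Illustrative helper (assumed name, not part of the demo): load a ggml model,
// run whisper.cpp's full pipeline on the samples, and collect the segment text.
func transcribe(modelPath: String, samples: [Float]) -> String? {
    // Load the model from the app bundle (e.g. Resources/models/ggml-base.en.bin).
    guard let ctx = whisper_init_from_file_with_params(
        modelPath, whisper_context_default_params()) else { return nil }
    defer { whisper_free(ctx) }

    // Greedy sampling keeps things simple; cap threads for mobile hardware.
    var params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY)
    params.n_threads = Int32(min(4, ProcessInfo.processInfo.processorCount))

    // Run encoder + decoder over the whole buffer; 0 means success.
    guard whisper_full(ctx, params, samples, Int32(samples.count)) == 0 else {
        return nil
    }

    // Concatenate the decoded segments into one transcript.
    var text = ""
    for i in 0..<whisper_full_n_segments(ctx) {
        text += String(cString: whisper_full_get_segment_text(ctx, i))
    }
    return text
}
```

In a SwiftUI app this would typically run off the main actor (for example, inside a `Task`) so the UI stays responsive while the model processes the audio.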
[^1]: I recommend the tiny, base or small models for running on an iOS device.

[^2]: The "Release" build can significantly boost transcription performance. In this project it also adds `-O3 -DNDEBUG` to `Other C Flags`, but adding flags at the app-project level is not ideal in a real-world app (they apply to all C/C++ files); consider splitting the Xcode project into a workspace in your own project.