A sample SwiftUI app using [whisper.cpp](https://github.com/ggerganov/whisper.cpp/) to do voice-to-text transcriptions.
See also: [whisper.objc](https://github.com/ggerganov/whisper.cpp/tree/master/examples/whisper.objc).
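
For orientation, the app ultimately drives whisper.cpp through its C API. The snippet below is a minimal sketch (not the project's actual source) of what a transcription call might look like from Swift, assuming `whisper.h` is exposed via a bridging header and that 16 kHz mono `Float` samples have already been decoded from a WAV file:

```swift
// Minimal sketch of calling the whisper.cpp C API from Swift.
// Assumes whisper.h is visible through a bridging header.
import Foundation

func transcribe(modelPath: String, samples: [Float]) -> String? {
    // Load the ggml model added under Resources/models.
    guard let ctx = whisper_init_from_file(modelPath) else { return nil }
    defer { whisper_free(ctx) }

    // Greedy sampling is the simplest decoding strategy.
    var params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY)
    params.n_threads = Int32(max(1, ProcessInfo.processInfo.processorCount - 1))

    // Run the full encode/decode pipeline on 16 kHz mono PCM samples.
    guard whisper_full(ctx, params, samples, Int32(samples.count)) == 0 else { return nil }

    // Concatenate the decoded segments into a single transcript.
    return (0..<whisper_full_n_segments(ctx))
        .map { String(cString: whisper_full_get_segment_text(ctx, $0)) }
        .joined()
}
```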
To use:
1. Select a model from the [whisper.cpp repository](https://github.com/ggerganov/whisper.cpp/tree/master/models).[^1]
2. Add the model to "whisper.swiftui.demo/Resources/models" via Xcode.
3. Select a sample audio file (for example, [jfk.wav](https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav)).
4. Add the audio file to "whisper.swiftui.demo/Resources/samples" via Xcode (see the sketch after this list for how the bundled resources can be located at runtime).
5. Select the "Release" build configuration under "Run",[^2] then deploy and run on your device.
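
How the bundled files are looked up depends on how the folders were added in Xcode (folder references preserve the subdirectory structure, groups do not). A rough, hypothetical sketch, using example resource names that may differ from the files you actually added:

```swift
import Foundation

// Hypothetical resource names; substitute the model and sample you added.
// Drop the `subdirectory:` argument if the folders were added as groups.
let modelURL = Bundle.main.url(forResource: "ggml-base.en",
                               withExtension: "bin",
                               subdirectory: "models")
let sampleURL = Bundle.main.url(forResource: "jfk",
                                withExtension: "wav",
                                subdirectory: "samples")
```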
[^1]: I recommend the tiny, base or small models for running on an iOS device.
[^2]: The `Release` build can significantly boost transcription performance. In this project it also adds `-O3 -DNDEBUG` to `Other C Flags`, but setting compiler flags at the app project level is not ideal in a real-world setup (they apply to all C/C++ files); in your own project, consider splitting the code into separate Xcode projects within a workspace.
![image](https://user-images.githubusercontent.com/1991296/212539216-0aef65e4-f882-480a-8358-0f816838fd52.png)