Mirror of https://github.com/ggerganov/whisper.cpp.git (synced 2024-12-24)
A sample SwiftUI app that uses whisper.cpp for voice-to-text transcription. See also: whisper.objc.
Usage:

1. Select a model from the whisper.cpp repository.[^1]
2. Add the model to `whisper.swiftui.demo/Resources/models` via Xcode.
3. Select a sample audio file (for example, `jfk.wav`).
4. Add the sample audio file to `whisper.swiftui.demo/Resources/samples` via Xcode.
5. Select the "Release"[^2] build configuration under "Run", then deploy and run on your device.
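For step 1, the ggml models are hosted on Hugging Face and are normally fetched with the `models/download-ggml-model.sh` script from the whisper.cpp repository. As a hedged sketch, the URL it downloads from has the form below (verify the exact path against the script in your checkout):

```shell
# Sketch: construct the download URL for a ggml model. The Hugging Face path
# below matches what whisper.cpp's models/download-ggml-model.sh uses, but
# double-check against the script version in your checkout.
model="base.en"   # tiny, base, or small are recommended for iOS devices
url="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-${model}.bin"
echo "$url"
# To fetch it manually:  curl -L -o "ggml-${model}.bin" "$url"
# then add the resulting file under Resources/models via Xcode (step 2).
```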
Note: Pay attention to the folder paths: `whisper.swiftui.demo/Resources/models` is the directory for resource files, while `whisper.swiftui.demo/Models` contains source code.
[^1]: I recommend the tiny, base, or small models for running on an iOS device.

[^2]: The "Release" build can significantly speed up transcription. In this project it also adds `-O3 -DNDEBUG` to `Other C Flags`; however, adding flags at the app-project level is not ideal in a real-world project, since they apply to all C/C++ files. Consider splitting the xcodeproj into a workspace in your own project.
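One way to keep such flags from applying to every C/C++ file is to scope them to a single target via a build-configuration file. A minimal sketch of a per-target `.xcconfig` (the file name and target setup are hypothetical, not part of this project):

```
// Whisper.xcconfig (hypothetical) -- assign it only to the target that
// compiles the whisper.cpp sources, so these flags do not leak into the
// rest of the app's C/C++ files.
OTHER_CFLAGS = -O3 -DNDEBUG
```

In Xcode, such a file is attached under the project's Info tab ("Configurations"), per configuration and per target.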