# whisper.swiftui

A sample SwiftUI app that uses whisper.cpp for voice-to-text transcription. See also: whisper.objc.

Usage:

  1. Select a model from the whisper.cpp repository.[^1]
  2. Add the model to whisper.swiftui.demo/Resources/models via Xcode.
  3. Select a sample audio file (for example, jfk.wav).
  4. Add the sample audio file to whisper.swiftui.demo/Resources/samples via Xcode.
  5. Select the "Release"[^2] build configuration under "Run", then deploy and run on your device.

Note: Pay attention to the folder paths: whisper.swiftui.demo/Resources/models is the directory in which to place resources, while whisper.swiftui.demo/Models contains actual code.
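Steps 1–2 can also be prepared from the command line before opening Xcode. The sketch below assumes the standard whisper.cpp repository layout (the `models/download-ggml-model.sh` helper and the demo's resource path); verify both against your checkout:

```shell
#!/bin/sh
# Download a ggml model and stage it for the SwiftUI demo.
# Run from the whisper.cpp repository root.
MODEL="${1:-base.en}"   # e.g. tiny, base.en, small

# The repository ships a model download helper under models/.
bash ./models/download-ggml-model.sh "$MODEL"

# Copy the model into the folder that Xcode bundles as a resource.
cp "./models/ggml-${MODEL}.bin" \
   "examples/whisper.swiftui/whisper.swiftui.demo/Resources/models/"
```

After copying, the model still has to be added to the Xcode project (step 2) so it is included in the app bundle.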



[^1]: I recommend the tiny, base, or small models for running on an iOS device.

[^2]: The Release build can significantly boost transcription performance. In this project it also adds `-O3 -DNDEBUG` to Other C Flags, but adding flags at the app-project level is not ideal in real-world projects, since they apply to all C/C++ files; consider splitting the Xcode project into a workspace in your own project.
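One way to scope such flags, as a sketch (the file name and its assignment are hypothetical, not part of this project), is an `.xcconfig` file attached only to the Release configuration:

```
// Release.xcconfig (hypothetical): assign this file to the Release
// configuration in the project's Info tab so Debug builds keep
// assertions and stay debuggable.
OTHER_CFLAGS = $(inherited) -O3 -DNDEBUG
```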