docs : replace Core ML with OpenVINO (#2686)

Konosuke Sakai 2025-01-02 19:03:02 +09:00 committed by GitHub
parent 227b5ffa36
commit 85b60f31d0

@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
cached for the next run.
-For more information about the Core ML implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
## NVIDIA GPU support
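
For context on the blob caching mentioned in the hunk above, the snippet below is a minimal sketch (not part of whisper.cpp or this commit) of how OpenVINO's model caching generally works: once `ov::cache_dir` is set, the first `compile_model()` call compiles the IR into a device-specific blob and stores it, and later runs load the cached blob instead of recompiling. The model path, cache directory, and device name are illustrative placeholders.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Enable model caching: compiled, device-specific blobs are written to this
    // directory and reused on subsequent runs (placeholder path).
    core.set_property(ov::cache_dir("./ov_cache"));

    // First run: OpenVINO compiles the IR (.xml/.bin) into a device-specific blob,
    // which is slow. Later runs: the cached blob is loaded directly, which is fast.
    ov::CompiledModel compiled =
        core.compile_model("ggml-base-encoder-openvino.xml", "CPU");

    ov::InferRequest request = compiled.create_infer_request();
    (void)request;  // running inference is out of scope for this sketch
    return 0;
}
```

whisper.cpp's OpenVINO integration performs this caching internally, so no user-side code is needed; the sketch only illustrates why only the first run on a given device is slow.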