diff --git a/README.md b/README.md
index 079c73b7..c268fd0c 100644
--- a/README.md
+++ b/README.md
@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
 The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get cached for the next run.
 
-For more information about the Core ML implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
 
 ## NVIDIA GPU support