From 85b60f31d01d37fe2cade54eebbcfe3f39e26624 Mon Sep 17 00:00:00 2001
From: Konosuke Sakai
Date: Thu, 2 Jan 2025 19:03:02 +0900
Subject: [PATCH] docs : replace Core ML with OpenVINO (#2686)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 079c73b7..c268fd0c 100644
--- a/README.md
+++ b/README.md
@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
 The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This
 device-specific blob will get cached for the next run.
 
-For more information about the Core ML implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
 
 ## NVIDIA GPU support