From b95b162f5ad11534e4b01ca832c3541d0cb5ce84 Mon Sep 17 00:00:00 2001
From: SevaSk
Date: Thu, 1 Jun 2023 11:28:27 -0400
Subject: [PATCH] API now transcribes rather than translates.

---
 README.md            | 4 ++--
 TranscriberModels.py | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 89db7ed..1d3adea 100644
--- a/README.md
+++ b/README.md
@@ -83,7 +83,7 @@ python main.py --api
 
 Upon initiation, Ecoute will begin transcribing your microphone input and speaker output in real-time, generating a suggested response based on the conversation. Please note that it might take a few seconds for the system to warm up before the transcription becomes real-time.
 
-The --api flag significantly enhances transcription speed and accuracy, and it's expected to be the default option in future releases. However, keep in mind that using the Whisper API will consume more OpenAI credits than using the local model. This increased cost is attributed to the advanced features and capabilities that the Whisper API provides. Despite the additional cost, the considerable improvements in speed and transcription accuracy might make it a worthwhile investment for your use case.
+The --api flag will use the Whisper API for transcriptions. This significantly enhances transcription speed and accuracy, and it works in most languages (rather than just English without the flag). It's expected to become the default option in future releases. However, keep in mind that using the Whisper API will consume more OpenAI credits than using the local model. This increased cost is attributed to the advanced features and capabilities that the Whisper API provides. Despite the additional expense, the substantial improvements in speed and transcription accuracy may make it a worthwhile investment for your use case.
 
 ### ⚠️ Limitations
 
@@ -93,7 +93,7 @@ While Ecoute provides real-time transcription and response suggestions, there ar
 
 **Whisper Model**: If the --api flag is not used, we utilize the 'tiny' version of the Whisper ASR model, due to its low resource consumption and fast response times. However, this model may not be as accurate as the larger models in transcribing certain types of speech, including accents or uncommon words.
 
-**Language**: The Whisper model used in Ecoute is set to English. As a result, it may not accurately transcribe non-English languages or dialects. We are actively working to add multi-language support to future versions of the program.
+**Language**: If you are not using the --api flag, the Whisper model used in Ecoute is set to English. As a result, it may not accurately transcribe non-English languages or dialects. We are actively working to add multi-language support to future versions of the program.
 
 ## 📖 License
 
diff --git a/TranscriberModels.py b/TranscriberModels.py
index b0dc87e..fe31108 100644
--- a/TranscriberModels.py
+++ b/TranscriberModels.py
@@ -26,7 +26,7 @@ class APIWhisperTranscriber:
     def get_transcription(self, wav_file_path):
         try:
             with open(wav_file_path, "rb") as audio_file:
-                result = openai.Audio.translate("whisper-1", audio_file)
+                result = openai.Audio.transcribe("whisper-1", audio_file)
         except Exception as e:
             print(e)
             return ''
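
For reference, the one-line code change above swaps the Whisper API's translation endpoint for its transcription endpoint. Below is a minimal standalone sketch (not part of the patch) of the behavioral difference, assuming the same pre-1.0 openai-python client that TranscriberModels.py already uses; "sample.wav" is a placeholder path, and OPENAI_API_KEY is assumed to be set in the environment:

    import openai

    # transcribe: returns text in whatever language is spoken in the audio
    with open("sample.wav", "rb") as audio_file:  # placeholder file path
        transcription = openai.Audio.transcribe("whisper-1", audio_file)

    # translate: always returns English text regardless of the source language,
    # which is why the patch replaces it for ordinary transcription
    with open("sample.wav", "rb") as audio_file:  # reopen; the handle was consumed
        translation = openai.Audio.translate("whisper-1", audio_file)

    print(transcription["text"])  # transcript in the original language
    print(translation["text"])    # English rendering of the same audio

For English audio the two calls return essentially the same text, which is why the earlier behavior went unnoticed; for non-English audio only transcribe preserves the speaker's language.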