🎧 Ecoute

Ecoute is a live transcription tool that provides real-time transcripts of both the user's microphone input (You) and the user's speaker output (Speaker) in a textbox. It also uses OpenAI's GPT-3.5 to generate a suggested response for the user to say, based on the live transcription of the conversation.

📖 Demo

https://github.com/SevaSk/ecoute/assets/50382291/8ac48927-8a26-49fd-80e9-48f980986208

Ecoute is designed to help users in their conversations by providing live transcriptions and generating contextually relevant responses. By leveraging the power of OpenAI's GPT-3.5, Ecoute aims to make communication more efficient and enjoyable.
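
For context, the response-suggestion step boils down to sending the running transcript to the chat completion API. Below is a minimal sketch of that idea, assuming the pre-1.0 openai Python interface; the prompt wording, variable names, and parameters are illustrative and not Ecoute's actual GPTResponder.py code:

import openai

openai.api_key = "API KEY"  # the same key you will place in keys.py later

# Rolling transcript built from the two live streams: "You" (microphone) and "Speaker" (system audio).
transcript = (
    "Speaker: Thanks for joining. Could you walk me through your background?\n"
    "You: Sure, I've spent the last three years working on data tooling in Python.\n"
)

# Ask GPT-3.5 what the user could say next, given the conversation so far.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Suggest a natural next line for the person labeled 'You'."},
        {"role": "user", "content": transcript},
    ],
)

print(response["choices"][0]["message"]["content"])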

🚀 Getting Started

Follow these steps to set up and run Ecoute on your local machine.

📋 Prerequisites

  • Python >=3.8.0
  • An OpenAI API key
  • Windows OS or macOS (not tested on other systems)
  • FFmpeg
Windows

If FFmpeg is not installed on your system, you can follow the steps below to install it.

First, you need to install Chocolatey, a package manager for Windows. Open your PowerShell as Administrator and run the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

Once Chocolatey is installed, you can install FFmpeg by running the following command in your PowerShell:

choco install ffmpeg-full

Please ensure that you run these commands in a PowerShell window with administrator privileges. If you face any issues during the installation, you can visit the official Chocolatey and FFmpeg websites for troubleshooting.

macOS

If FFmpeg is not installed on your system, you can follow the steps below to install it, along with the PortAudio and Tk dependencies:

brew install ffmpeg
brew install portaudio
brew install python-tk

Depending on your audio setup, you might need to change the speaker device index to 0 or 1 on line 55 of AudioRecorder.py (see the illustrative snippet below).
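
As a purely hypothetical illustration (the real variable name and surrounding code on that line may differ), the change amounts to editing a small integer device index:

# Hypothetical example of the speaker device index in AudioRecorder.py - names may differ.
speaker_device_index = 1  # try 0 if the Speaker transcript stays empty on your Mac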

🔧 Installation

  1. Clone the repository:

    git clone https://github.com/SevaSk/ecoute
    
  2. Navigate to the ecoute folder:

    cd ecoute
    
  3. Install the required packages:

    pip install -r requirements.txt
    
  4. Create a keys.py file in the ecoute directory and add your OpenAI API key:

    • Option 1: You can create the file from your command prompt. Run the following command, replacing "API KEY" with your actual OpenAI API key:

      python -c "with open('keys.py', 'w', encoding='utf-8') as f: f.write('OPENAI_API_KEY=\"API KEY\"')"
      
    • Option 2: You can create the keys.py file manually. Open up your text editor of choice and enter the following content:

      OPENAI_API_KEY="API KEY"
      

      Replace "API KEY" with your actual OpenAI API key. Save this file as keys.py within the ecoute directory.

🎬 Running Ecoute

Run the main script:

python main.py

For faster and more accurate transcription using the OpenAI Whisper API, use:

python main.py --api

Once started, Ecoute will begin transcribing your microphone input and speaker output in real time and generating a suggested response based on the conversation. Note that it might take a few seconds for the system to warm up before the transcription becomes real-time.

The --api flag significantly improves transcription speed and accuracy, and it is expected to become the default option in future releases. Keep in mind, however, that the Whisper API consumes more OpenAI credits than the local model, which runs on your own machine. Despite the additional cost, the considerable improvement in speed and transcription accuracy may make it a worthwhile trade-off for your use case.
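
Conceptually, the --api flag moves transcription from the bundled local model to OpenAI's hosted Whisper model. A minimal sketch of that hosted call, again assuming the pre-1.0 openai Python interface (the file name and surrounding code are illustrative, not Ecoute's AudioTranscriber.py):

import openai

openai.api_key = "API KEY"

# Illustrative only: send one recorded audio chunk to the hosted Whisper model ("whisper-1").
with open("chunk.wav", "rb") as audio_file:
    result = openai.Audio.transcribe("whisper-1", audio_file)

print(result["text"])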

⚠️ Limitations

While Ecoute provides real-time transcription and response suggestions, there are several known limitations to its functionality that you should be aware of:

Default Mic and Speaker: Ecoute is currently configured to listen only to the default microphone and speaker set in your system. It will not detect sound from other devices or systems. If you wish to use a different mic or speaker, you will need to set it as your default device in your system settings.
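
If you are not sure which devices are currently the defaults, a short PyAudio snippet can list them (this is a standalone diagnostic, not part of Ecoute; install the pyaudio package separately if you don't already have it):

import pyaudio

# Print every audio device and mark the current system defaults.
p = pyaudio.PyAudio()
try:
    default_input = p.get_default_input_device_info()["index"]
    default_output = p.get_default_output_device_info()["index"]
    for i in range(p.get_device_count()):
        name = p.get_device_info_by_index(i)["name"]
        marker = ""
        if i == default_input:
            marker = " <- default input (You)"
        elif i == default_output:
            marker = " <- default output (Speaker)"
        print(i, name + marker)
finally:
    p.terminate()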

Whisper Model: If the --api flag is not used, we utilize the 'tiny' version of the Whisper ASR model, due to its low resource consumption and fast response times. However, this model may not be as accurate as the larger models in transcribing certain types of speech, including accents or uncommon words.
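
For reference, this is roughly what transcribing a chunk with a local Whisper 'tiny' English model looks like using the openai-whisper package; the file name is illustrative and this is not Ecoute's exact code:

import whisper

# Load the English-only tiny model (small download and fast, but the least accurate variant).
model = whisper.load_model("tiny.en")

# Transcribe one recorded chunk; fp16=False avoids the FP16 warning when running on CPU.
result = model.transcribe("chunk.wav", fp16=False)
print(result["text"])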

Language: The Whisper model used in Ecoute is set to English. As a result, it may not accurately transcribe non-English languages or dialects. We are actively working to add multi-language support to future versions of the program.

📖 License

This project is licensed under the MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Feel free to open issues or submit pull requests to improve Ecoute.