This is a simple example of how to feed an audio stream into Whisper, an AI model that can transcribe and translate speech.
This example was created on Debian Linux, using Hugging Face for the model, training, and translation.
Audacity is used to record the audio stream. You will need to set your recording device so that it captures both your voice and the speaker line. I will include a picture of what Audacity looks like on a simple laptop.
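Once you have a recording, export it from Audacity as a WAV file and load it into Python. Below is a minimal sketch of that step; the filename recording.wav is just a placeholder for whatever you exported, and it assumes the librosa library is installed. Whisper expects 16 kHz mono audio, so the sketch resamples on load.

```python
# A minimal sketch of loading an Audacity export for Whisper.
# "recording.wav" is a placeholder: export your own recording from
# Audacity (File -> Export Audio) and point this path at it.
import librosa

# Whisper models expect 16 kHz mono audio; librosa resamples on load.
audio, sample_rate = librosa.load("recording.wav", sr=16000, mono=True)
print(f"Loaded {len(audio) / sample_rate:.1f} seconds of audio")
```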
Before we begin, let's try to get a domain in Estonia so we can transmit around Europe.
First we need some audio, in any language. Then we can set up a Whisper model from Hugging Face, as in the sketch below.
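The following is a minimal sketch of loading a pretrained Whisper checkpoint through the Hugging Face Transformers pipeline and asking it to translate the recording into English. The checkpoint name openai/whisper-small is an assumption; any Whisper size on the Hub works the same way, and recording.wav is again a placeholder for your Audacity export.

```python
# A minimal sketch, assuming the openai/whisper-small checkpoint from the
# Hugging Face Hub and a local file called recording.wav.
from transformers import pipeline

# Build an automatic-speech-recognition pipeline around Whisper.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
)

# task="translate" asks Whisper to translate the speech into English;
# "transcribe" would keep the original language instead.
result = asr(
    "recording.wav",
    generate_kwargs={"task": "translate"},
)
print(result["text"])
```

When given a filename, the pipeline decodes and resamples the audio itself, so you can pass either the raw file or the array loaded earlier.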