Whenever an `offset_ms` is provided, the value of `seek_end` is
calculated incorrectly. This causes Whisper to keep transcribing
after the end of the file.
The current behavior looks like this:
```
[00:34:40.000 --> 00:34:47.000] This is an example audio file.
[00:34:47.000 --> 00:34:49.000] The text has been redacted
[00:34:49.000 --> 00:34:51.000] This is the end of the audio.
[00:34:51.000 --> 00:34:52.000] ***
[00:34:52.000 --> 00:34:53.000] ***
[00:34:53.000 --> 00:34:54.000] ***
[00:34:55.000 --> 00:34:56.000] ***
...
```
The expected behavior is:
```
[00:34:40.000 --> 00:34:47.000] This is an example audio file.
[00:34:47.000 --> 00:34:49.000] The text has been redacted
[00:34:49.000 --> 00:34:51.000] This is the end of the audio.
- end of program -
```
This commit changes the calculation of the `seek_end` variable to
only add `seek_start` if a custom `duration_ms` is provided.
Otherwise, it defaults to the end of the file.
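A sketch of the change, assuming whisper.cpp-style names (`params`,
`whisper_n_len`) and the 10 ms frame granularity; the exact call site
may differ:
```
const int seek_start = params.offset_ms/10;

// before: seek_start was added unconditionally, so with offset_ms > 0 and
// no custom duration, seek_end landed past the end of the file:
//
//   const int seek_end = seek_start + (params.duration_ms == 0 ? whisper_n_len(ctx) : params.duration_ms/10);

// after: default to the end of the file; only add seek_start when a
// custom duration_ms is provided
const int seek_end = params.duration_ms == 0 ? whisper_n_len(ctx) : seek_start + params.duration_ms/10;
```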
Signed-off-by: Thijs Raymakers <thijs@raymakers.nl>
If the Core ML model cannot be loaded, continue without Core ML instead of
returning. This allows a single build to transcribe using Core ML models
where available, and regular models when not.
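A minimal sketch of the fallback, assuming whisper.cpp-style names
(`whisper_coreml_init`, `ctx->model.coreml_encoder`):
```
ctx->model.coreml_encoder = whisper_coreml_init(path_coreml.c_str());
if (!ctx->model.coreml_encoder) {
    // previously a hard failure (return); now only warn and fall back
    // to the regular encoder
    fprintf(stderr, "%s: failed to load Core ML model from '%s'\n",
            __func__, path_coreml.c_str());
}
```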
Updated the escape_double_quotes() function so that it now escapes both double quotes and backslashes in the input string.
Changes Made:
- Renamed the function to escape_quotes_and_backslashes
- Modified the condition in the first loop to increment the value of 'escaped_length' for both double quotes and backslashes.
- Modified the condition in the second loop to add a backslash before the current character if it is a double quote or a backslash (see the sketch below).
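A sketch of the described two-pass implementation (the exact code in the
repo may differ):
```
#include <stdlib.h>
#include <string.h>

static char * escape_quotes_and_backslashes(const char * str) {
    // first loop: count the extra bytes needed for the escapes
    size_t escaped_length = strlen(str) + 1;
    for (size_t i = 0; str[i] != '\0'; i++) {
        if (str[i] == '"' || str[i] == '\\') {
            escaped_length++;
        }
    }

    char * escaped = (char *) calloc(escaped_length, 1);
    if (escaped == NULL) {
        return NULL;
    }

    // second loop: copy, adding a backslash before each '"' or '\'
    size_t pos = 0;
    for (size_t i = 0; str[i] != '\0'; i++) {
        if (str[i] == '"' || str[i] == '\\') {
            escaped[pos++] = '\\';
        }
        escaped[pos++] = str[i];
    }

    return escaped;
}
```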
Resolves: #769
I disabled the decoding fallbacks because there were many complaints about slow decoding.
The current implementation does not allow batching the decoders when
using the "best of" or "beam size" parameters, so the decoding time is
proportional to the number of decoders, which is obviously not great.
However, now there are even more complaints about wrong decodings and
repetition.
So, making a compromise by re-enabling the fallbacks, but defaulting to
just 2 "best of" / "beam size" decoders. Also, the temperature step is
increased from 0.2 to 0.4, i.e. from a maximum of 5 fallbacks to a
maximum of 2.
Also, the stream example now has fallbacks enabled by default.
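For illustration, how the larger temperature step bounds the number of
fallbacks (the 1.0 temperature cap follows the usual whisper.cpp fallback
schedule and is an assumption here):
```
const float temperature     = 0.0f; // initial decode
const float temperature_inc = 0.4f; // was 0.2f

// step 0.2 -> fallbacks at 0.2, 0.4, 0.6, 0.8, 1.0 (max 5)
// step 0.4 -> fallbacks at 0.4, 0.8                (max 2)
int n_fallbacks = 0;
for (float t = temperature + temperature_inc; t <= 1.0f + 1e-6f; t += temperature_inc) {
    n_fallbacks++;
}
```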
close #471 #477 #508 #612 #719 #731
There is a `speak.sh` file in `./examples/talk-llama`, as described in the README.
However, `talk-llama.cpp` uses `./examples/talk/speak.sh`; this commit corrects that path.
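A sketch of the one-line fix in `talk-llama.cpp`; the surrounding parameter
struct and field name are assumptions:
```
struct whisper_params {
    // ...
    // was: "./examples/talk/speak.sh" (the path from the talk example)
    std::string speak = "./examples/talk-llama/speak.sh";
};
```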