Using AI to generate subtitles is inventive. Is it smart enough to insert the time codes such that the subtitle is well enough synchronised to the spoken line?
As someone who has started losing the higher frequencies and thus clarity, I have subtitles on all the time just so I don't miss dialogue. The only pain point is when the subtitles (of the same language) are not word-for-word with the spoken line. The discordance between what you are reading and hearing is really distracting.
This is my major peeve with my West Wing DVDs, where the subtitles are often an abridgement of the spoken line.
> Is it smart enough to insert the time codes such that the subtitle is well enough synchronised to the spoken line?
Yes, Whisper has been able to do this since the first release. At work we use it to live-transcribe-and-translate all-hands meetings and it works very well.
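To make that concrete: Whisper's `transcribe()` returns a `"segments"` list whose entries carry `"start"`, `"end"`, and `"text"` fields, which map directly onto subtitle cues. A minimal sketch of turning those segments into an .srt file (the segments below are hard-coded stand-ins; in practice they would come from something like `whisper.load_model("base").transcribe("episode.mp3")`):

```python
def srt_time(seconds: float) -> str:
    """Format a time in seconds as an SRT time code: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """Render Whisper-style segments as SubRip (.srt) cues."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(
            f"{i}\n"
            f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(cues)

# Stand-in segments shaped like Whisper's result["segments"].
segments = [
    {"start": 0.0, "end": 2.4, "text": " What's next?"},
    {"start": 2.4, "end": 5.1, "text": " Let's move on."},
]
print(to_srt(segments))
```

Whisper also ships a CLI (`whisper episode.mp3 --output_format srt`) that does this end to end, so the sketch is mostly useful when you want to post-process the segments first.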