The fourth generation of ENCO’s enCaption closed-captioning system is getting a full rollout at the 2017 NAB Show in Las Vegas, and the newest version is designed to distinguish between multiple speakers — further reducing the labor needed for live captioning.
Powered by speech-to-text voice recognition, the fourth-generation enCaption3R4 is pitched on the promise of “no respeaking, voice training, supervision, or real-time captioners.”
ENCO notes that enCaption3R4 integrates a special algorithm with the intelligence to manage complex captioning situations where multiple subjects are speaking at once. It achieves this by isolating each speaker’s microphone throughout the live program.
The system supports up to six independent microphone feeds, and the speakers’ names can be preconfigured based upon their assigned microphone position. Multilingual support is also built into the algorithm, and includes personalized and/or localized spelling capabilities to ensure greater accuracy.
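The mic-to-name mapping described above can be pictured as a simple lookup: each of the (up to six) microphone feeds is preconfigured with a speaker name, and recognized text from that feed is labeled accordingly. The sketch below is purely illustrative — the names, structure, and function are assumptions for demonstration, not ENCO’s actual implementation.

```python
# Illustrative sketch of per-microphone speaker labeling, as described
# for enCaption3R4. Names and API are hypothetical, not ENCO's.

# Speaker names preconfigured by assigned microphone position
# (the system supports up to six independent feeds).
MIC_SPEAKERS = {
    1: "Anchor",
    2: "Co-Anchor",
    3: "Meteorologist",
}

def label_caption(mic_channel: int, text: str) -> str:
    """Prefix recognized text with the speaker assigned to that mic."""
    name = MIC_SPEAKERS.get(mic_channel, "Speaker")
    return f"{name}: {text}"
```

With a mapping like this, a hearing-impaired viewer sees not just the words but an attribution, e.g. `label_caption(1, "Good evening.")` yields `"Anchor: Good evening."`.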
“With our new multi-speaker identification feature, hearing-impaired viewers will not only know what is being said, but also who is saying it,” said ENCO GM Ken Frommert.
While one of the audio inputs could be a feed from a production truck, the system treats that audio stream as a single speaker, even if multiple people are speaking. If a pre-recorded video clip is rolled during a live show, the captioning of that audio automatically takes precedence over anyone speaking on set.
Frommert adds, “The algorithm does its best to determine who ‘owns’ the conversation—such as the person that started it or who dominates the discussion—and ignores distractions like low voices and brief interruptions. As soon as the conversation shifts to the next speaker, the algorithm immediately and seamlessly transitions to focus on that speaker. Without this selective management process, it becomes very difficult to caption live events, such as roundtable or panel discussions, where people often compete to be heard and disrupt the flow of conversation.”
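The “ownership” behavior Frommert describes — track the dominant speaker, ignore low voices and brief interruptions, and switch only when the conversation genuinely moves on — can be sketched as a simple level-and-persistence rule. The code below is an assumption-laden illustration of that idea, not ENCO’s algorithm; the thresholds and function names are invented for the example.

```python
# Illustrative sketch (not ENCO's actual algorithm) of "conversation
# ownership": caption the mic that dominates, and ignore quiet or
# short-lived interruptions from other mics.

MIN_LEVEL = 0.2        # relative levels below this are treated as noise
MIN_HOLD_FRAMES = 10   # a challenger must stay loudest this long to take over

def update_owner(owner, levels, hold_counts):
    """Return the mic that 'owns' the conversation for this audio frame.

    levels: {mic_id: current relative level, 0.0-1.0}
    hold_counts: {mic_id: consecutive frames that mic has been loudest}
    """
    loudest = max(levels, key=levels.get)
    if levels[loudest] < MIN_LEVEL:
        return owner  # everyone is quiet; keep the current owner
    if loudest == owner:
        hold_counts.clear()  # owner still dominates; reset any challengers
        return owner
    # A different mic is loudest: switch only once it has held the floor
    # long enough to be a real speaker change, not a brief aside.
    hold_counts[loudest] = hold_counts.get(loudest, 0) + 1
    if hold_counts[loudest] >= MIN_HOLD_FRAMES:
        hold_counts.clear()
        return loudest
    return owner
```

Under this rule a heckle lasting a few frames never steals the caption focus, while a sustained handoff — the shift to the next speaker — transitions cleanly, which is the selective management the quote credits for making roundtable and panel discussions captionable.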