YouTube's Auto Caps Not Only Help The Deaf, But Searchers Too


Google has offered user-generated captions on YouTube videos for three years. What makes "auto-caps" different is that Google will now use the speech-recognition algorithms employed in Google Voice to automatically generate captions for videos.

According to Google's blog, 20 hours of video are uploaded to YouTube every minute. The goal for auto-caps is to quickly get more videos captioned. Not only will the captions help the deaf and hearing-impaired understand videos, but they will also help people around the world access video content, improve search and let viewers pinpoint exact sections of videos.

For now, auto-caps will be visible only on a limited number of partner channels, including U.C. Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google and YouTube channels. Google acknowledges the technology is not perfect, and is awaiting feedback from both viewers and video owners before a broader rollout.

In addition, Google announced automatic caption timing, or auto-timing, which eases the process of creating captions manually. Video creators build and upload a text file containing all the words spoken in the video. Google's automatic speech recognition (ASR) technology then determines when each word is spoken and generates the caption track. Google hopes the technology will bring even more captioned videos to the site, since it significantly reduces the time and resources needed to create professional caption tracks.
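To make the auto-timing idea concrete, here is a minimal sketch (not Google's implementation) of what alignment output could be turned into. It assumes an ASR aligner has already produced hypothetical word-level timestamps for the uploaded transcript, and it groups those words into cues in the common SRT caption format; the format choice and all timings are illustrative assumptions, not details from the announcement.

```python
# Sketch: turn hypothetical word-level ASR timings into an SRT caption track.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float    # seconds

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:00:03,250."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words: list[Word], max_words: int = 8) -> str:
    """Group aligned words into caption cues of at most max_words each."""
    cues = [words[i:i + max_words] for i in range(0, len(words), max_words)]
    lines = []
    for idx, cue in enumerate(cues, start=1):
        lines.append(str(idx))
        lines.append(f"{to_timestamp(cue[0].start)} --> {to_timestamp(cue[-1].end)}")
        lines.append(" ".join(w.text for w in cue))
        lines.append("")  # blank line separates SRT cues
    return "\n".join(lines)

# Example with made-up timings standing in for ASR aligner output:
aligned = [Word("Welcome", 0.0, 0.4), Word("to", 0.4, 0.5),
           Word("the", 0.5, 0.6), Word("lecture", 0.6, 1.1)]
print(words_to_srt(aligned))
```

The point of the sketch is simply that once word timings exist, producing a usable caption file is mechanical, which is why auto-timing removes most of the manual effort of captioning.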
