MIT launches searchable video courseware
Posted by evolvingwheel on November 14, 2007
This one should grab your attention: a searchable video lecture with the ability to jump straight to specific sections. Recently, MIT's Regina Barzilay and James Glass launched a new lecture search engine that lets a student land on the exact location in a video lecture and listen to the section significant to him/her. With paper transcripts, it is easy to perform a text-based search, but there has been no such easy cross-section search in video and audio.
Researchers from MIT's Computer Science and Artificial Intelligence Lab tackled this problem. The team created transcripts of the lectures using speech recognition software, then loaded each video with the text aligned to its flow. I just saw a demo, and it was incredible. As the speaker kept talking, the application moved a cursor through the words one after another, and more than 90% of the time the words were mapped correctly. This mapping makes it possible to locate any element of the video on the fly: students no longer have to play through the whole lecture to find a specific section. They can use the transcript search utility to land on the right spot.
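To see why this works, here is a minimal sketch of the idea in Python. The data format and function names are my own assumptions for illustration, not MIT's actual implementation: once speech recognition has paired each recognized word with a timestamp, searching the video reduces to a plain text search over those pairs.

```python
# Hypothetical sketch: searching a time-aligned transcript to find
# where in a lecture video a word is spoken. The (timestamp, word)
# list format is an assumption for illustration, not MIT's real one.

def find_word(transcript, query):
    """Return (seconds, word) pairs where the query word occurs."""
    query = query.lower()
    return [(t, w) for (t, w) in transcript if w.lower() == query]

# Each entry pairs a start time (seconds into the video) with one
# recognized word, as a speech-recognition aligner might emit.
transcript = [
    (0.0, "Today"), (0.4, "we"), (0.6, "discuss"),
    (1.1, "Fourier"), (1.7, "transforms"),
    (95.2, "Fourier"), (95.8, "series"),
]

for t, w in find_word(transcript, "fourier"):
    print(f"{w} spoken at {t:.1f}s")
```

A real player would then seek the video to the returned timestamp, which is exactly the "land on the right spot" behavior described above.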
This utility could also be used in medical transcription practice. Another use could be reading content to visually impaired people: a voice recognition element could take instructions from them and jump to the exact section of the text/speech in the video. Interactive audio books could use this concept too. Any other ideas? You can read the entire article [here].