Posted by evolvingwheel on November 14, 2007
This one has got to grab your attention: a searchable video lecture with the ability to jump to specific sections. Recently, MIT's Regina Barzilay and James Glass launched a new lecture search engine that allows a student to land on the exact location in a video lecture and listen to the section significant to him/her. With paper transcripts, it is easy to perform a text-based search, but no such cross-section search has existed for video and audio.
Researchers from MIT's Computer Science and Artificial Intelligence Lab tackled this problem. The team created transcripts of lectures using speech recognition software, then aligned the text with the flow of the video. I just watched a demo and it was incredible. As the speaker talked, the application cursored through the words one after another, and more than 90% of the time the words were mapped correctly. This mapping makes it possible to locate any element of the video on the fly: instead of playing through the whole recording to find a specific section, students can use the transcript search utility to land on the right spot.
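The core idea can be illustrated with a small sketch. This is not MIT's actual system; it simply assumes the speech recognizer emits (start-time, word) pairs, and shows how a phrase search over that time-aligned transcript yields a point to seek the video to. The transcript fragment and function name are invented for illustration.

```python
# Hypothetical sketch: searching a time-aligned transcript for a seek point.
# Assumes the recognizer produces (start_time_in_seconds, word) pairs.

def find_seek_points(transcript, query):
    """Return the start times (seconds) where the query phrase begins."""
    words = [w.lower() for _, w in transcript]
    q = query.lower().split()
    hits = []
    for i in range(len(words) - len(q) + 1):
        if words[i:i + len(q)] == q:
            hits.append(transcript[i][0])  # timestamp of the first word
    return hits

# Toy transcript fragment: (start time in seconds, recognized word)
transcript = [
    (12.0, "the"), (12.2, "Fourier"), (12.8, "transform"),
    (13.5, "maps"), (13.9, "a"), (14.0, "signal"),
]

print(find_seek_points(transcript, "Fourier transform"))  # → [12.2]
```

A real player would then seek the video to the returned timestamp, which is exactly the "land on the right spot" behavior the demo showed.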
This utility could also be used in medical transcription practice. Another use could be reading content aloud to visually impaired people: a voice recognition element could take spoken instructions from them and jump to the exact section of the text or speech in the video. Interactive audiobooks could use this concept too. Any other ideas? You can read the entire article [here].
Posted in artificial intelligence, brain, Communication, Innovation | Leave a Comment »
Posted by evolvingwheel on October 4, 2007
I remember the android from Will Smith's I, Robot (2004), which moves its arm with unimaginable degrees of freedom. That artificial machine is capable of translating its wishes into motion by communicating effectively with its mechanical appendages: signals from its brain (central intelligence system) are decoded and converted into mechanical actions. A researcher at MIT has embarked on just such a project, creating these movements in artificial prosthetics by decoding neural commands from the brain.
Laxminarayan Srinivasan has developed an algorithm that will enable a prosthetic device to move according to neural signals [read article here]. People who lose the use of their arms or limbs through accidents or paralysis are still able to think and form intentions in the brain. The challenge is to interpret those intentions, which originate as neural signals, match them with the intended mode of action, and then make the prosthetic device operate accordingly. The researcher and his team have developed an algorithm that matches such recorded signals against an archive of mechanical actions and then instructs the machine to behave accordingly. Presumably, a lot of work still needs to be done in understanding the nature of the neural transmissions associated with the movements of our arms and limbs. The algorithm processes the signal modalities, with all their subtle variations in stimulation, and then connects each command with the appropriate action code. In effect, it requires a highly robust library of actions and a very sensitive, precise recorder of signals.
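To make the matching idea concrete, here is a minimal sketch, assuming the recorded neural activity can be summarized as a feature vector (say, firing rates) and that the archive stores one template vector per action. The library entries, feature values, and action labels are all invented for illustration; the published algorithm is certainly more sophisticated than this nearest-template lookup.

```python
# Hypothetical sketch: match a recorded neural feature vector against an
# archived library of action templates and pick the closest one.
import math

def decode_action(signal, action_library):
    """Return the action label whose archived template is nearest the signal."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(action_library, key=lambda label: dist(signal, action_library[label]))

# Toy library: action label -> archived feature vector (e.g. mean firing rates)
action_library = {
    "reach_forward": [0.9, 0.1, 0.3],
    "rotate_wrist":  [0.2, 0.8, 0.5],
    "close_grip":    [0.1, 0.2, 0.9],
}

recorded = [0.85, 0.15, 0.35]  # a freshly recorded, slightly noisy signal
print(decode_action(recorded, action_library))  # → reach_forward
```

The decoded label would then drive the prosthetic's action code; the hard parts the post describes, building the action library and recording clean signals, sit on either side of this lookup.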
With a very difficult task in hand, Srinivasan aspires to build a unifying model of decoding in the coming years.
This kind of work will one day lead to artificial movements very close to natural ones. The difference between science fiction and reality is TIME. As we develop smart interfaces and recording devices for neural signals, and learn to interpret their messages, we will get closer to understanding the motor behaviors tied to those signals. My forecast is that over the next decade industry will focus on developing such interfaces and creating small prosthetics that use AI to learn and refine actions from the recorded signals. A burgeoning area of bio-engineering.
Posted in artificial intelligence, brain, robotics, robots | Leave a Comment »