Posted by evolvingwheel on November 14, 2007
This one ought to grab your attention: a searchable video lecture with the ability to jump straight to specific sections. Recently, MIT’s Regina Barzilay and James Glass launched a new lecture search engine that lets a student land on the exact location in a video lecture and listen to the section significant to him/her. Text-based search is easy in a paper transcript, but there has been no comparably easy way to search inside video and audio.
The researchers from MIT’s Computer Science and Artificial Intelligence Lab tackled this problem. The team created transcripts of the lectures using speech recognition software; the video is then loaded with the text aligned to its flow. I just watched a demo, and it was incredible: as the speaker kept talking, the application kept cursoring through the words one after the other, and more than 90% of the time the words were mapped correctly. This mapping makes it possible to locate any element of the video on the fly. Students no longer have to play the whole recording to find a specific section; they can use the transcript search utility to land on the right spot.
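To make the idea concrete, here is a minimal sketch (my own illustration, not MIT’s actual code) of how a word-level aligned transcript could drive video search: if the speech recognizer tags each recognized word with a timestamp, then a text match becomes a seek offset for the video player. The transcript data below is invented for the example.

```python
# Hypothetical aligned transcript: (start_time_in_seconds, recognized_word)
# pairs, as a speech recognizer might emit for a lecture recording.
transcript = [
    (0.0, "today"), (0.6, "we"), (0.9, "discuss"), (1.5, "fourier"),
    (2.2, "transforms"), (3.0, "and"), (3.3, "fourier"), (4.0, "series"),
]

def find_word(transcript, query):
    """Return every timestamp (in seconds) at which `query` is spoken."""
    q = query.lower()
    return [t for t, w in transcript if w == q]

# A student searching for "Fourier" gets two offsets to seek to:
print(find_word(transcript, "Fourier"))  # -> [1.5, 3.3]
```

Each hit is a place the player can jump to directly, which is exactly why the viewer never has to scrub through the whole lecture.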
This utility could also be used in medical transcription practice. Another use could be reading content aloud to visually impaired people; a voice recognition element could take instructions from them and jump to the exact section of the text/speech in the video. Interactive audio books could use this concept too. Any other ideas? You can read the entire article [here].
Posted in artificial intelligence, brain, Communication, Innovation | Leave a Comment »
Posted by evolvingwheel on October 4, 2007
I remember the robot from Will Smith’s I, Robot (2004), where the android moves its arm with unimaginable degrees of freedom. The artificial machine is capable of translating its wishes into motion by communicating effectively with its mechanical appendages: brain (central intelligence system) signals are decoded and converted into mechanical actions. A researcher at MIT has embarked on just such a project, creating these movements in artificial prosthetics by decoding neural commands from the brain.
Laxminarayan Srinivasan has developed an algorithm that enables a prosthetic device to move according to neural signals [read article here]. People who lose their arms or limbs to accidents or paralysis are still able to think and form intentions in the brain. The challenge is to interpret those intentions, which originate as neural signals, match them with the intended mode of action, and then make the prosthetic device operate accordingly. Srinivasan and his team have developed an algorithm that matches such recorded signals against an archive of mechanical actions and then instructs the machine to act. Presumably, a lot of work remains in understanding the nature of the neural transmissions associated with the movements of our arms and limbs. The algorithm processes the signal modalities, with all their subtle variations in stimulation, and then connects the command with the appropriate action code. This demands a highly robust library of actions and a very sensitive, precise recorder of signals.
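As a toy illustration of the matching idea (entirely my own sketch, with made-up numbers; the actual decoding model is far more sophisticated), one could treat the archived actions as labeled feature vectors and pick the one nearest to the recorded signal:

```python
import math

# Hypothetical action library: each archived mechanical action is labeled
# with a representative neural feature vector (just 3 numbers here).
action_library = {
    "reach_forward": [0.9, 0.1, 0.2],
    "grasp":         [0.2, 0.8, 0.3],
    "rotate_wrist":  [0.1, 0.3, 0.9],
}

def decode(signal):
    """Match a recorded signal to the closest archived action (nearest neighbor)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(action_library, key=lambda name: dist(action_library[name], signal))

# A noisy recording close to the "reach_forward" template still decodes correctly:
print(decode([0.85, 0.15, 0.25]))  # -> "reach_forward"
```

The real difficulty, as the post notes, lies in building that library and recording signals cleanly enough for any such matching to work.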
With a very difficult task in hand, Srinivasan aspires to build a unifying model of decoding in the coming years.
Activities like these will one day lead to artificial movements very close to natural ones. The difference between science fiction and reality is TIME. As we develop smart interfaces and recording devices for neural signals and become able to interpret their messages, we will get closer to understanding the motor behaviors behind those signals. My forecast: over the next decade, industry will focus on developing such interfaces and creating small prosthetics that use AI to learn and refine actions from the recorded signals. A burgeoning area of bio-engineering.
Posted in artificial intelligence, brain, robotics, robots | Leave a Comment »
Posted by evolvingwheel on September 12, 2007
Consider this – your grandfather, 80+, lives alone. He gets up early in the morning for his first dose of pills. However, his frail knees prevent him from getting out of bed right away and walking to the bathroom cabinet… he forgot to keep the pills by his bedside last night. How does he get help? Well, he has a smart robot called Zen at home. When he calls Zen and asks it to get the pills, Zen follows the command, rolls to the bathroom, opens the cabinet, grabs the right bottle, and brings it back to him.
Intelligent robots are being designed that will soon find their way into our homes – if not in the immediate future, then in the distant one, and if I am luckily wrong, maybe within the next 5-10 years. Researchers in Japan, considered the powerhouse of industrial robots, are working intently to build smart robots that can perform daily chores in the absence of human labor. Scientists are eyeing the possibility of helping a growing elderly population with these smart robots. You may read the full [article here].
As I mentioned in my Boeing Dreamliner post, all kinds of sensors are being manufactured that can provide real-time knowledge of our surrounding environment – from ambient light intensity to odor, and from heat to vibration. Smart algorithms coupled with these sensors can make robots far more intelligent: reactive decision makers acting on their surroundings and the requirement logic. Such robots will then find their way not only into daily household chores but also into commercial uses.
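A tiny sketch of what "reactive decision making" could look like (my own illustration; the sensor names and thresholds are invented): sensor readings come in, a bit of requirement logic runs over them, and out comes the robot’s next action.

```python
# Toy reactive controller: sensor readings plus simple requirement logic
# decide the robot's next action. Names and thresholds are illustrative.
def decide(readings):
    """Pick an action from a dict of current sensor readings."""
    if readings.get("smoke_ppm", 0) > 50:        # safety rules come first
        return "sound_alarm"
    if readings.get("light_lux", 100) < 10:      # it's dark in the room
        return "switch_on_lights"
    if readings.get("vibration_hz", 0) > 5:      # something is rattling
        return "investigate_noise"
    return "idle"

print(decide({"light_lux": 3}))  # -> "switch_on_lights"
```

Even logic this simple becomes useful once the robot can sense its environment continuously; the intelligence in research robots comes from replacing the hand-written rules with learned behavior.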
Posted in artificial intelligence, Innovation, robotics, robots | Leave a Comment »