Scientists from NASA and Xerox have developed Clarissa, a voice-operated assistant designed to help astronauts work through complex procedures in space.
The Clarissa system is entirely hands-free: it responds to astronauts' voice commands, reads procedure steps out loud as they work, helps keep track of which steps have been completed, and supports flexible voice-activated alarms and timers, said Beth Ann Hockey, project lead on the team that developed Clarissa at NASA Ames.
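To make the description above concrete, here is a minimal sketch of how a voice-driven procedure reader of this kind might be structured. The class, command phrasings and timer behaviour are illustrative assumptions, not Clarissa's actual design:

```python
# Illustrative sketch only: a tiny procedure reader that steps through a
# checklist on voice command, tracks completed steps and sets named timers.
import time


class ProcedureReader:
    def __init__(self, steps):
        self.steps = steps       # ordered list of step texts
        self.current = 0         # index of the step being worked on
        self.completed = set()   # indices of finished steps
        self.timers = {}         # timer name -> absolute expiry time

    def handle(self, command):
        """Dispatch a recognised voice command to the matching action."""
        if command == "next step":
            self.completed.add(self.current)
            self.current = min(self.current + 1, len(self.steps) - 1)
            return self.steps[self.current]
        if command == "previous step":
            self.current = max(self.current - 1, 0)
            return self.steps[self.current]
        if command == "read step":
            return self.steps[self.current]
        if command.startswith("set timer"):
            # e.g. "set timer purge 300" -> a 300-second timer named "purge"
            _, _, name, seconds = command.split()
            self.timers[name] = time.time() + float(seconds)
            return f"timer {name} set for {seconds} seconds"
        return "command not recognised"


reader = ProcedureReader(["Open valve A", "Check pressure", "Close valve A"])
print(reader.handle("read step"))            # Open valve A
print(reader.handle("next step"))            # Check pressure
print(reader.handle("set timer purge 300"))  # timer purge set for 300 seconds
```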
Because the system must always be ready to accept a voice command, an early version tried to process everything that was said, including conversations between crew members. As a result, it had difficulty distinguishing side conversations from commands directed at the system.
In 2004, NASA Ames contacted Xerox Research Centre Europe in Grenoble, France, whose researchers believed their language-processing techniques could solve the problem.
They were right. The Xerox methodology allowed Clarissa to analyse each utterance more accurately: it can now recognise words, sentences and word context, and can act on a variety of commands phrased in different ways. The system examines every word in a sentence, takes into account its confidence that each word has been recognised correctly, and uses a machine-learning algorithm to weigh the various pieces of positive and negative evidence.
This significantly improves the system's ability to distinguish commands directed at it from side conversations. The improvements cut the system's error rate by more than half.
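One way to picture the approach just described: each recognised word contributes evidence for or against "this was a command", scaled by the recogniser's per-word confidence, and a learned classifier combines the evidence. The weights, vocabulary and threshold in this sketch are invented for illustration; it is not the Xerox/NASA model:

```python
# Hedged sketch of open-microphone rejection: per-word confidences feed a
# simple linear classifier that accepts commands and ignores chatter.
import math

# Hypothetical learned per-word weights: positive values push towards
# "command", negative values push towards "side conversation".
WORD_WEIGHTS = {
    "next": 2.0, "step": 1.8, "read": 1.5, "timer": 1.7,
    "lunch": -1.2, "yesterday": -1.0, "think": -0.8,
}
BIAS = -1.0  # prior towards rejection: most speech is not a command


def command_probability(words_with_confidence):
    """words_with_confidence: list of (word, recogniser_confidence) pairs."""
    score = BIAS
    for word, confidence in words_with_confidence:
        # Unknown words contribute mild negative evidence.
        weight = WORD_WEIGHTS.get(word, -0.3)
        # Scale each word's evidence by how sure the recogniser was.
        score += weight * confidence
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing to [0, 1]


utterance = [("next", 0.95), ("step", 0.90)]
chatter = [("what", 0.80), ("did", 0.85), ("you", 0.90), ("think", 0.88)]
print(command_probability(utterance) > 0.5)  # True: treat as a command
print(command_probability(chatter) > 0.5)    # False: ignore as conversation
```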
Clarissa currently supports about 75 individual commands, which can be accessed using a vocabulary of some 260 words. The number of commands and size of the vocabulary will be increased in the future.