Robots may one day be equipped with the advanced listening skills of human beings if a team of UK researchers has its way.
Dr Adrian Rees, who is leading the project at Newcastle University's department of Neurology, Neurobiology and Psychiatry, told The Engineer that his group is developing a computer model of the auditory midbrain - or inferior colliculus.
This is the part of the brain responsible for identifying and processing different sounds. A highly complex structure, it is described as a convergence centre, where information on different aspects of sound picked up by the ear is brought together and processed.
Rees said the group plans to develop a biologically realistic computerised model of this auditory pathway, then adapt the system to control an experimental robot that will be able to respond to different sound stimuli in a noisy environment.
'If we can make this thing as realistic as possible we will start to see a number of things that the brain can do that most ways of analysing sound mechanically or electronically don't do very well,' he claimed.
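To give a sense of what such a model involves, the sketch below - my own illustration, not the Newcastle group's model - passes a sound through a bank of cochlea-like band-pass filters, each feeding a leaky integrate-and-fire neuron that stands in for a midbrain cell. The bands, gain, threshold and choice of libraries (numpy, scipy) are all assumptions made for the example.

```python
# Toy "ear -> midbrain" pipeline - an illustrative sketch only, not the
# project's model. Parameters (bands, gain, threshold) are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000  # sample rate in Hz

def bandpass(signal, lo, hi, fs=FS, order=4):
    """Cochlea-like channel: keep only one frequency band of the input."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, signal)

def lif_spike_count(drive, dt=1.0 / FS, tau=0.01, threshold=1.0):
    """Leaky integrate-and-fire neuron: a crude stand-in for a midbrain cell."""
    v, spikes = 0.0, 0
    for x in drive:
        v += dt * (-v / tau + x)   # leaky integration of the rectified drive
        if v >= threshold:         # fire and reset on crossing the threshold
            spikes += 1
            v = 0.0
    return spikes

# A toy stimulus: a 1 kHz tone buried in broadband noise.
t = np.arange(0, 0.2, 1.0 / FS)
sound = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(t.size)

# Three channels, each driving one "midbrain" neuron; only the channel
# containing the tone should fire strongly.
for lo, hi in [(200, 400), (800, 1200), (3000, 4000)]:
    drive = np.maximum(bandpass(sound, lo, hi), 0.0) * 500  # rectify + gain
    print(f"{lo}-{hi} Hz channel: {lif_spike_count(drive)} spikes")
```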
Rees explained that the big limitation of existing computer sound recognition systems, and the most difficult issue to resolve, is extracting sound in the presence of background noise.
'Speech recognition works relatively well when you have a system that can be trained or is working in a quiet environment,' he said.
'But when you have lots of people talking at a party, machine hearing falls over very quickly, whereas humans do a remarkably good job.'
The reason this is so difficult, he explained, is that most sounds, including background noise, consist of many different frequency components and there is a lot of overlap between these components.
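A quick way to see the problem (again my own illustration, not the project's code): give two simulated 'talkers' different pitches, and their harmonics still land on some of the same frequencies, so nothing that selects by frequency alone can assign that energy to one voice or the other.

```python
# Why overlap defeats simple filtering - an illustrative sketch.
# Two harmonic "voices" share energy at their common harmonics.
import numpy as np

FS = 8_000
t = np.arange(0, 1.0, 1.0 / FS)

def talker(pitch_hz, n_harmonics=10):
    """Toy voice: a harmonic complex at the given pitch."""
    return sum(np.sin(2 * np.pi * pitch_hz * k * t)
               for k in range(1, n_harmonics + 1))

voice_a, voice_b = talker(200), talker(300)  # two simultaneous talkers

freqs = np.fft.rfftfreq(t.size, 1.0 / FS)
spec_a = np.abs(np.fft.rfft(voice_a))
spec_b = np.abs(np.fft.rfft(voice_b))

# At common multiples of the two pitches, both voices carry energy,
# so those frequency bins cannot be attributed to either talker alone.
for f in (600, 1200, 1800):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz bin: voice A = {spec_a[i]:.0f}, voice B = {spec_b[i]:.0f}")
```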
The hardware for the system will be developed by a group from the University of Sunderland's school of computing and technology.
Professor Stefan Wermter, who is leading this team, said the system will be installed on an experimental mobile 'Koala' robot, which will be equipped with extremely sensitive, purpose-built microphones.
The group begins work on the three-year project this July, and while the initial aim is to provide improved insight into how the brain works, Rees said that in the longer term the technology could prove particularly advantageous in a number of industrial and healthcare applications.
It could, for instance, enable voice control of machines in noisy conditions, or sit at the heart of a new generation of sophisticated hearing aids that more accurately replicate the way humans process sound.