Scientists hope to add a new dimension to music with technology that will transport digital home entertainment system users into the venue of performance.
In collaboration with Surrey University sound recording engineer Prof Francis Rumsey, Dr Zoran Cvetkovic, a digital signal-processing engineer at King's College London, is developing five- to 10-channel audio software.
This will give music lovers such a realistic experience that they will be able to accurately identify the type of venue in which the music was recorded (the 'envelopment experience') and where in the venue instruments are being played, no matter where they are in the room in relation to the speakers.
However, the researchers need to understand what constitutes a convincing musical illusion before they can apply those parameters to their technology.
They will investigate a number of factors to find the best ways of capturing sound field cues, such as the time difference between the ears, known as the interaural time difference (ITD), and the intensity difference between the ears, known as the interaural level difference (ILD). They will then recreate these cues using a multi-channel playback system to give the most credible illusion of the original sound field.
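As a rough sketch of how these cues might be measured (an illustrative example, not the researchers' own code; the signals, sample rate and delay are invented), the ITD can be estimated from the lag at which the cross-correlation between the two ear signals peaks, and the ILD from the ratio of their energies:

import numpy as np

def estimate_itd_ild(left, right, fs):
    # ITD: lag (in seconds) at which the cross-correlation between the ear
    # signals peaks; positive means the sound reached the left ear first.
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    itd = lag / fs
    # ILD: ratio of the signal energies at the two ears, in decibels.
    ild = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
    return itd, ild

# Invented test case: a noise burst arriving 0.5 ms earlier, and louder,
# at the left ear than at the right.
fs = 44_100
delay = 22                                   # roughly 0.5 ms at 44.1 kHz
noise = np.random.default_rng(0).standard_normal(fs // 10)
left = np.concatenate([noise, np.zeros(delay)])
right = 0.6 * np.concatenate([np.zeros(delay), noise])
print(estimate_itd_ild(left, right, fs))     # about (+0.0005 s, +4.4 dB)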
'The idea is to create a new technology that is going to produce real stereo experience. If you look at current stereo technologies, their major problem is that the size of the sweet spot, the area of the room where somebody can have some sort of stereo experience, is very limited,' said Cvetkovic.
To illustrate the sweet spot in a two-channel playback system: if a listener stands at the point that forms an equilateral triangle with the two speakers, the sound received gives them the full stereo experience. The experience is lost as soon as the listener moves away from this spot.
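A back-of-the-envelope calculation (an illustrative sketch with an assumed two-metre speaker spacing and a speed of sound of 343 m/s) shows why the spot is so small: the skew between the arrival times from the two speakers is zero at the apex of the equilateral triangle, but grows to more than a millisecond only half a metre off-centre.

import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second

def arrival_time_skew_ms(listener, spk_left, spk_right):
    # Difference between the left and right speakers' arrival times, in ms.
    d_left = np.linalg.norm(np.subtract(listener, spk_left))
    d_right = np.linalg.norm(np.subtract(listener, spk_right))
    return (d_left - d_right) / SPEED_OF_SOUND * 1000.0

spk_left, spk_right = (-1.0, 0.0), (1.0, 0.0)       # speakers 2 m apart
sweet_spot = (0.0, np.sqrt(3.0))                    # apex of the equilateral triangle
print(arrival_time_skew_ms(sweet_spot, spk_left, spk_right))           # 0.0 ms
print(arrival_time_skew_ms((0.5, np.sqrt(3.0)), spk_left, spk_right))  # about 1.4 ms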
Unfortunately, simply adding more speakers is not enough, said Cvetkovic. 'Classical music is always recorded with two channels. If you reproduce it with more than two channels, somehow the additional channels are created by just combining these two channels, so that the third channel will be one combination of the original channels, the fourth another combination and so on. Although it would give some richer sound, the new channels do not bring any new information,' he said.
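Cvetkovic's point can be illustrated with a toy calculation (the upmix weights below are hypothetical, not a real decoder): however many channels are derived as fixed combinations of a stereo pair, the result still spans only a two-dimensional signal space.

import numpy as np

stereo = np.random.default_rng(1).standard_normal((2, 1000))   # left and right channels

# Hypothetical upmix: five channels, each a fixed weighted mix of L and R.
upmix = np.array([
    [1.0,  0.0],    # front left   = L
    [0.0,  1.0],    # front right  = R
    [0.7,  0.7],    # centre       = mix of L and R
    [0.8, -0.5],    # rear left    = another mix
    [-0.5, 0.8],    # rear right   = another mix
])
five_channel = upmix @ stereo

# The extra channels carry no new information: the rank is still two.
print(np.linalg.matrix_rank(five_channel))   # prints 2, not 5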
'We cannot reconstruct the exact sound field of a music performance in its original venue. It can be mathematically shown that to do so we would need millions of microphones, and that would require thousands of speakers.
'As soon as you introduce this number of microphones and speakers you significantly change the acoustic properties of the space.'
From preliminary research, the scientists have found that to recreate a convincing auditory perspective of an event on the horizontal plane in front of the listener, at least three independent channels are needed, and at least five are needed to achieve the illusion of instruments beside or behind the listener.
To create a 360° auditory perspective, the researchers have proposed placing speakers at the vertices of a regular polygon. Each speaker will consist of two components, with one part radiating the direct sound field toward the listener and the other introducing additional scattering to reproduce a diffused sound field.
'It's still unclear what will be the optimal microphone array and optimal distribution of loudspeakers,' said Cvetkovic. 'But from a common sense point of view, if you want to create this well-balanced auditory perspective, microphones and speakers should probably be placed in a circle with equal angles between them.
'It is not realistic that somebody would be able to place speakers in those ideal positions in the room, so another question to investigate is how sensitive that (the ideal positioning) is to displacement.'
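The layout Cvetkovic describes can be sketched as follows (an illustrative example only; the five-speaker count, two-metre radius and 10 cm placement error are assumptions): speakers sit at the vertices of a regular polygon, with equal angles between them, and a small random displacement shows how far each speaker drifts from its ideal angle.

import numpy as np

def polygon_speaker_positions(n_speakers, radius=2.0):
    # (x, y) coordinates of speakers at the vertices of a regular polygon.
    angles = 2 * np.pi * np.arange(n_speakers) / n_speakers
    return np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])

ideal = polygon_speaker_positions(5)                       # five-channel layout
displaced = ideal + np.random.default_rng(2).normal(scale=0.1, size=ideal.shape)

ideal_angles = np.degrees(np.arctan2(ideal[:, 1], ideal[:, 0]))
actual_angles = np.degrees(np.arctan2(displaced[:, 1], displaced[:, 0]))
print(np.round(actual_angles - ideal_angles, 1))           # angular error per speaker, in degrees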
In an effort to reproduce the desired sound field, the researchers will also explore cross-talk cancellation techniques. When two speakers are playing, each ear hears a combination (cross-talk) of the sounds from the speakers, and it is this effect the scientists want to cancel using new and existing algorithms.
'We want to ensure that what is actually picked up by the left ear is produced by the left speaker and the right ear, the right speaker. This is something used in binaural techniques, that is, in music to be heard on headphones.
'We want to find techniques to prefilter those sounds so that when they finally get combined (played through speakers) they will be equal to what is pre-processed, the original sound of the left speaker and the right speaker,' said Cvetkovic.
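In its textbook form (a simplified frequency-domain sketch, not necessarily the algorithms the team will develop; the gains and delays below are invented), cross-talk cancellation amounts to inverting the two-by-two matrix of acoustic paths from the speakers to the ears and applying that inverse as a prefilter.

import numpy as np

n_fft = 512
fs = 44_100
freqs = np.fft.rfftfreq(n_fft, 1 / fs)

def path(gain, delay_s):
    # Frequency response of an acoustic path modelled as a pure gain plus delay.
    return gain * np.exp(-2j * np.pi * freqs * delay_s)

# Acoustic transfer matrix C(f): speakers (columns) to ears (rows).
# Direct paths are assumed stronger and earlier than the cross-talk paths.
C = np.empty((len(freqs), 2, 2), dtype=complex)
C[:, 0, 0] = path(1.0, 3.0e-3)   # left speaker  -> left ear  (direct)
C[:, 0, 1] = path(0.6, 3.2e-3)   # right speaker -> left ear  (cross-talk)
C[:, 1, 0] = path(0.6, 3.2e-3)   # left speaker  -> right ear (cross-talk)
C[:, 1, 1] = path(1.0, 3.0e-3)   # right speaker -> right ear (direct)

# Prefilter H(f) = C(f)^-1, so each ear receives its binaural signal intact:
# C(f) @ H(f) equals the identity at every frequency.
H = np.linalg.inv(C)
residual = np.einsum("fij,fjk->fik", C, H) - np.eye(2)
print(np.max(np.abs(residual)))   # ~0: cross-talk cancelled in this idealised model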