This week I wrote about a group of researchers who have essentially decoded parts of conversations heard by paralysed patients by analysing signals from within the brain.
It apparently opens up the possibility of a system that might transcribe the imagined internal speech of patients who cannot speak for themselves.
Now, my initial thoughts on reading the press release were somewhat sceptical, since there has been a constant stream of brain imaging studies in the past decade that overstate their findings and, worse, employ somewhat questionable scientific rigour (for a more detailed critique see http://www.nature.com/news/2009/090113/full/457245a.html).
This includes the burgeoning field of ‘neuromarketing’, where researchers (perhaps looking for some extracurricular income) provide a consultancy service centring on the use of functional magnetic resonance imaging (fMRI) to measure brain changes in response to, for example, advertising, and ultimately to learn why consumers make the decisions they do.
And in the past few months we’ve had brain imaging studies of people having an orgasm in an MRI machine and of people taking magic mushrooms.
But on skimming the most recent full academic paper – by a team at the University of California, Berkeley – I found more reason to be hopeful.
For a start, the latest study looked at electrical signals from electrodes placed directly on the brain surface (the paralysed patients were already scheduled for open brain surgery, and presumably bravely thought a bit more tinkering in the name of science couldn’t do much harm). fMRI, by contrast, though very useful for some purposes, is an indirect measure that uses blood perfusion as a proxy for actively firing neurons.
Patients in the Berkeley study were played recordings of conversations 5-10 minutes in length. Using the electrode data captured during this time, the team was able to reconstruct and play back the sounds the patients had heard.
This was possible because there is evidence that the brain breaks down sound into its component acoustic frequencies – ranging from a low of about 1 hertz (one cycle per second) to a high of about 8,000 hertz – that are important for speech sounds.
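For the technically curious, here is a minimal sketch of the kind of frequency decomposition involved: breaking an audio clip into a spectrogram of energy across frequency bands over time. This is purely illustrative – the file name and all parameters are my own assumptions, not those used by the Berkeley team.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Illustrative only: decompose an audio clip into a time-frequency
# representation, similar in spirit to the spectrogram features the
# brain is thought to encode. Parameters are arbitrary.
rate, audio = wavfile.read("speech_clip.wav")   # hypothetical recording
if audio.ndim > 1:                              # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(
    audio.astype(float),
    fs=rate,
    nperseg=512,        # ~32 ms analysis windows at 16 kHz
    noverlap=256,
)

# Keep only the speech-relevant range mentioned above (~1 Hz to 8 kHz).
speech_band = (freqs >= 1) & (freqs <= 8000)
speech_power = power[speech_band, :]
print(speech_power.shape)  # (frequency bins, time steps)
```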
The team then tested two different computational models to match spoken sounds to the pattern of activity at the electrodes. The patients were then played a single word, and the models were able to predict the word from the electrode recordings.
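The paper’s actual models are considerably more sophisticated, but a toy version of the idea is a linear mapping from electrode activity to a sound spectrogram, with the candidate word chosen by comparing the predicted spectrogram against templates. The sketch below uses random placeholder data and ridge regression purely for illustration; none of the names or numbers come from the study.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy illustration, not the Berkeley team's models.
# X_train: electrode activity features (trials x features)
# Y_train: corresponding sound spectrograms, flattened (trials x freq*time)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))      # hypothetical training data
Y_train = rng.normal(size=(200, 128))

decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# Decode a new trial and pick the candidate word whose spectrogram
# template best matches the prediction (nearest neighbour by correlation).
x_new = rng.normal(size=(1, 64))
predicted = decoder.predict(x_new).ravel()

word_templates = {                         # hypothetical word spectrograms
    "water": rng.normal(size=128),
    "doubt": rng.normal(size=128),
}
best_word = max(
    word_templates,
    key=lambda w: np.corrcoef(predicted, word_templates[w])[0, 1],
)
print(best_word)
```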
While it should certainly be repeated independently by other teams, it’s difficult not to be impressed by the work. It sets one thinking about the possibility of entirely decoding and breaking down what are essentially biological binary signals into something we can smoothly interface with our silicon (or whatever platform replaces it) computer-based world. And letting the mind wander even further (pardon the pun), if it’s possible to entirely decode the brain into its constituent binary components, would it be possible to transplant an entire consciousness into a non-biological, synthetic form? For now, only in science fiction, of course.
And it does build on a growing body of work on machine-brain interfaces (MBIs) – including some good recent stuff from the UK.
Portsmouth University, in collaboration with Essex University, now has a system that allows patients with locked-in syndrome (a very severe form of paralysis) to compose music with their thoughts alone.
The research team used a method which combines electroencephalography (EEG) analysis with what the team terms a music engine module.
Participants sit in front of a computer screen that displays several ‘buttons’ that flash at different frequencies (normally between 8 Hz and 16 Hz).
The participant is asked to focus his or her attention on a particular button, and the EEG device he or she is wearing captures that frequency – a phenomenon known as the frequency-following effect. This frequency-tagged EEG signal is then matched to a pre-specified note, or series of notes, played by the computer.
Where the current system differs from previous offerings is that it adds a second level of control: participants can vary the intensity of their focus on a button to shape the composition.
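The published system is of course more elaborate, but the core trick of frequency tagging can be sketched quite simply: take a window of EEG, look at its power spectrum, and see which of the button flicker frequencies dominates; the strength of that spectral peak can then stand in, crudely, for the ‘intensity’ of attention. Everything below – sampling rate, window length, the note mapping – is an assumption for illustration, not the Portsmouth/Essex pipeline.

```python
import numpy as np

def detect_focus(eeg_window, sample_rate, button_freqs):
    """Return (attended button frequency, its spectral power).

    Illustrative sketch only: real frequency-tagging pipelines use
    filtering, harmonics and artefact rejection, all omitted here.
    """
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / sample_rate)

    powers = {}
    for f in button_freqs:                    # buttons flashing at 8-16 Hz
        idx = np.argmin(np.abs(freqs - f))    # nearest FFT bin
        powers[f] = spectrum[idx]

    chosen = max(powers, key=powers.get)
    return chosen, powers[chosen]

# Hypothetical usage: a 2-second EEG window sampled at 256 Hz while
# the participant attends a button flashing at 10 Hz.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.normal(size=t.size)

freq, power = detect_focus(eeg, fs, button_freqs=[8, 10, 12, 16])
note_map = {8: "C4", 10: "E4", 12: "G4", 16: "C5"}   # assumed note mapping
print(note_map[freq], "intensity:", round(float(power), 1))
```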
Where it could really get interesting is when MBIs that achieve cognitive tasks like language or music composition converge with the equally promising field of neuroprosthetics, where people control movement with brain activity.
In October last year pioneering neuroscientist Miguel Nicolelis of Duke University demonstrated two-way interaction between a primate brain and a machine interface. And the bidirectional link is the crucial part here.
The team used implants with hundreds of hair-like filaments that could both record brain activity and deliver neural stimulation simultaneously and in real time.
The monkeys in the study eventually learned to use brain activity alone to move an avatar hand and identify the texture of virtual objects (the experimental set-up is quite complex, but pretty important here, so see the link – http://www.nature.com/news/2011/111005/full/news.2011.576.html). In doing this, the set-up basically bypassed the body’s network of nerve endings and supplied the sensations directly to the brain.
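To make the shape of that bidirectional loop clearer, here is a schematic sketch: decode movement intent from recorded activity on each cycle, and when the avatar hand touches a virtual object, send a texture-specific stimulation pattern back the other way. All of the names, signals and numbers are hypothetical stand-ins, not the Duke team’s implementation.

```python
import random

# Schematic sketch of a bidirectional brain-machine loop.
# Pulse-interval patterns per texture are made up for illustration.
TEXTURES = {"smooth": [200, 200, 200], "rough": [50, 300, 50]}

def decode_intent(spike_counts):
    """Toy decoder: map summed activity to a 1-D hand velocity."""
    return 0.01 * (sum(spike_counts) - 50)

def run_trial(record, stimulate, touch_sensor, steps=100):
    hand_position = 0.0
    for _ in range(steps):
        hand_position += decode_intent(record())      # brain -> machine
        texture = touch_sensor(hand_position)
        if texture is not None:
            stimulate(TEXTURES[texture])               # machine -> brain

# Dummy stand-ins so the sketch runs on its own.
run_trial(
    record=lambda: [random.randint(0, 2) for _ in range(64)],
    stimulate=lambda pattern: None,
    touch_sensor=lambda pos: "rough" if pos > 1.0 else None,
)
```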
It’s important because although complex robotic limbs with multiple degrees of freedom are currently being developed, people rely on tactile feedback for fine control of their limbs.
‘Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,’ said Nicolelis.