Algorithms aid prosthetics development

A transatlantic research team is developing advanced algorithms for decoding neural activity into physical commands, such as parameters for controlling a robotic arm.

Engineers from Cambridge University, along with neuroscientists at Stanford University in the US, believe that current decoding approaches are not capable of producing a clinically viable prosthetic device with speed and accuracy comparable to that of a healthy human arm.

Principal investigator Zoubin Ghahramani, a professor of information engineering at Cambridge, said the challenge is that neural prosthetic designers do not completely understand how movements are represented in the brain.

‘Neurons are noisy information channels,’ he said. ‘So you get activity from many, many neurons spiking and it is a challenge to infer the desired action and direction of movement.

‘There have been advances in the field over the last decade or so but the methods people have used have generally been fairly simple linear filtering methods for decoding neural activities.

‘The main thing we’re hoping to contribute is much more advanced machine-learning methods.’

The £410,000 EPSRC-funded research project will create an intelligent algorithm that is more adaptive than current decoding mechanisms.
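The kind of linear filtering Ghahramani refers to can be illustrated with a small sketch. The snippet below is a hypothetical example, not the team's algorithm: it fits a ridge-regularised linear map from simulated noisy neural activity to two-dimensional movement velocity, using synthetic data and illustrative variable names throughout.

```python
# Minimal sketch of a linear decoder for noisy neural activity.
# This is an illustration only, not the Cambridge/Stanford method;
# all data are synthetic and all names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_bins = 50, 2000                               # simulated population and time bins
true_velocity = rng.standard_normal((n_bins, 2))           # hidden 2-D hand velocity
tuning = rng.standard_normal((2, n_neurons))               # each neuron's directional tuning
rates = true_velocity @ tuning                             # noiseless firing rates
activity = rates + 2.0 * rng.standard_normal(rates.shape)  # "noisy information channels"

# Fit a linear filter W so that activity @ W approximates velocity (ridge-regularised least squares).
lam = 1.0
W = np.linalg.solve(activity.T @ activity + lam * np.eye(n_neurons),
                    activity.T @ true_velocity)

decoded = activity @ W
corr = [np.corrcoef(decoded[:, i], true_velocity[:, i])[0, 1] for i in range(2)]
print(f"decoded-vs-true velocity correlation: x={corr[0]:.2f}, y={corr[1]:.2f}")
```

A decoder like this maps population activity to movement parameters with a single fixed matrix; the project described above is aimed at machine-learning methods that adapt beyond such fixed linear mappings.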
