The new framework incorporates computer vision into prosthetic leg control and includes artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.
"Lower-limb robotic prosthetics need to execute different behaviours based on the terrain users are walking on," said Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University (NC State). "The framework we've created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making."
The researchers focused on distinguishing between six different terrains that require adjustments in a robotic prosthetic's behaviour, namely tile, brick, concrete, grass, 'upstairs' and 'downstairs'.
"If the degree of uncertainty is too high, the AI isn't forced to make a questionable decision - it could instead notify the user that it doesn't have enough confidence in its prediction to act, or it could default to a 'safe' mode," said Boxuan Zhong, lead author of the paper and a recent Ph.D. graduate from NC State.
The researchers designed the "environmental context" framework for use with any lower-limb robotic exoskeleton or robotic prosthetic device, with cameras supplying the visual input. In their study, the researchers used cameras worn on eyeglasses and cameras mounted on the lower-limb prosthesis itself, and evaluated how well the AI could make use of computer vision data from the two types of camera, both separately and together.
"Incorporating computer vision into control software for wearable robotics is an exciting new area of research," said Helen Huang, a co-author of the paper. "We found that using both cameras worked well but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well - particularly for near-term predictions, such as what the terrain would be like for the next step or two."
According to NC State, the most significant advance is to the AI itself.
"We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making," Lobaton said in a statement. "This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system."
To train the AI system, the researchers had able-bodied individuals wear the cameras while walking through a variety of indoor and outdoor environments. They then conducted a proof-of-concept evaluation by having a person with lower-limb amputation wear the cameras while traversing the same environments.
"We found that the model can be appropriately transferred so the system can operate with subjects from different populations," Lobaton said. "That means that the AI worked well even thought it was trained by one group of people and used by somebody different."
The new framework has not yet been tested in a robotic device.
"We are excited to incorporate the framework into the control system for working robotic prosthetics - that's the next step," Huang said.
The team also plans to make the system more efficient, so that it requires less visual data input and less data processing.