According to NTU Singapore, gesture recognition precision is currently hampered by the low quality of data arriving from wearable sensors, often caused by their bulkiness and poor contact with the user, as well as by visual occlusion and poor lighting. Further challenges arise from integrating visual and sensory data: as mismatched datasets, they must be processed separately and merged only at the end, which is inefficient and slows response times.
The NTU team’s 'bioinspired' data fusion system uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, coupled with an AI approach that mimics the way the brain handles skin sensing and vision together.
The NTU scientists developed their bio-inspired AI system by combining three neural network approaches in a single system: a 'convolutional neural network', a machine learning method used for early visual processing; a multilayer neural network for early somatosensory information processing; and a 'sparse neural network' that 'fuses' the visual and somatosensory information together.
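To make that arrangement concrete, the following is a minimal PyTorch sketch of the three-network idea described above. All layer sizes, input shapes, the sensor channel count, and the top-k masking used here to stand in for the sparse fusion network are illustrative assumptions, not details from the Nature Electronics paper.

```python
# Minimal sketch of a bio-inspired fusion network (assumed shapes/sizes).
import torch
import torch.nn as nn

class BioInspiredFusionNet(nn.Module):
    def __init__(self, num_gestures=10):
        super().__init__()
        # Early visual processing: a small CNN over camera frames
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32 features
        )
        # Early somatosensory processing: an MLP over strain-sensor readings
        # (8 sensor channels is an assumption for illustration)
        self.somatosensory = nn.Sequential(
            nn.Linear(8, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion stage: combine both streams, then keep only the strongest
        # activations as a crude stand-in for a sparse fusion network
        self.fuse = nn.Linear(64, 64)
        self.classify = nn.Linear(64, num_gestures)

    def forward(self, image, strain):
        v = self.visual(image)          # (batch, 32) visual features
        s = self.somatosensory(strain)  # (batch, 32) somatosensory features
        h = torch.relu(self.fuse(torch.cat([v, s], dim=1)))
        # Top-k masking: zero out all but the 16 strongest fused activations
        topk = torch.topk(h, 16, dim=1)
        mask = torch.zeros_like(h).scatter_(1, topk.indices, 1.0)
        return self.classify(h * mask)

# Example: one 64x64 RGB frame plus one 8-channel strain-sensor reading
model = BioInspiredFusionNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 8))
print(logits.shape)  # torch.Size([1, 10])
```

The key design point this sketch illustrates is that the two modalities are merged before classification, rather than each producing its own decision that is reconciled afterwards.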
The result is a system that can recognise human gestures more accurately and efficiently than existing methods. The team, comprising scientists from NTU Singapore and the University of Technology Sydney (UTS), has had its findings published in Nature Electronics.
In a statement, lead author Prof. Chen Xiaodong, from the School of Materials Science and Engineering at NTU, said, "Our data fusion architecture has its own unique bioinspired features which include a man-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture distinct from existing approaches.
"Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attaches onto the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks."
To capture reliable sensory data from hand gestures, the research team fabricated a transparent, stretchable strain sensor that adheres to the skin but cannot be seen in camera images.
As a proof of concept, the team tested their bio-inspired AI system by using hand gestures to guide a robot through a maze.
Results showed that hand gesture recognition powered by the bio-inspired AI system guided the robot through the maze with zero errors, compared with six recognition errors made by a vision-based recognition system.
High accuracy was also maintained when the new AI system was tested under poor conditions, including noise and unfavourable lighting. The AI system worked effectively in the dark, achieving a recognition accuracy of over 96.7 per cent.
First author of the study, Dr Wang Ming from the School of Materials Science & Engineering at NTU Singapore, said, "The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation. As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy."
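The early-stage interaction Dr Wang describes can be contrasted with conventional late fusion in a few lines of code. Below is a hedged Python (PyTorch) comparison; all shapes and layer sizes are assumptions chosen purely for illustration.

```python
# Contrast between late fusion (modalities merged only at the decision
# stage) and early fusion (modalities interact before interpretation).
# Feature sizes and class count are illustrative assumptions.
import torch

v = torch.randn(1, 32)  # early visual features
s = torch.randn(1, 32)  # early somatosensory features

# Late fusion: each modality is classified independently and the two
# decisions are averaged; the modalities never inform each other.
vis_head = torch.nn.Linear(32, 10)
som_head = torch.nn.Linear(32, 10)
late = (vis_head(v).softmax(1) + som_head(s).softmax(1)) / 2

# Early fusion: features are combined first, so later layers can use one
# modality to resolve ambiguity in the other before any decision is made.
joint = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 10),
)
early = joint(torch.cat([v, s], dim=1)).softmax(1)
print(late.shape, early.shape)  # both torch.Size([1, 10])
```

In the late-fusion path, a gesture that is ambiguous to the camera alone stays ambiguous until the final merge; in the early-fusion path, the strain-sensor signal can disambiguate it before classification, which is the redundancy reduction the quote refers to.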
The NTU research team is now looking to build a VR and AR system based on its AI architecture for use in areas including entertainment technologies and home-based rehabilitation.