AI-powered ‘smart choker’ gives voice to speech impaired

Researchers at Cambridge University have developed a ‘smart choker’ that uses a combination of flexible graphene sensors and AI to decode speech from throat movements.

The smart choker could help people with speech impairments communicate - University of Cambridge

The device is what’s known as a silent speech interface (SSI), analysing non-vocal signals to decode speech silently, with the user simply mouthing the words. It is made from a sustainable bamboo-based textile, embedded with graphene ink strain sensors. Under strain, tiny, controllable cracks form in the graphene, with different vocal movements creating different patterns that can be recognised by AI.
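The paper does not publish its model code here, but the idea of mapping windows of strain-sensor readings to word labels with a lightweight classifier can be sketched as below. This is an illustrative example only: the channel count, window length, vocabulary size and network shape are all assumptions, not details from the study.

```python
# Illustrative sketch only: classify short windows of throat strain-sensor
# readings into word labels with a small, compute-efficient neural network.
# All sizes below are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

WINDOW = 256    # assumed samples per mouthed word
CHANNELS = 4    # assumed number of graphene strain-sensor channels
NUM_WORDS = 48  # assumed vocabulary size

class TinySpeechDecoder(nn.Module):
    """Small 1-D CNN that maps a strain-signal window to a word class."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CHANNELS, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, NUM_WORDS)

    def forward(self, x):
        # x: (batch, CHANNELS, WINDOW) strain readings
        return self.classifier(self.features(x).squeeze(-1))

model = TinySpeechDecoder()
fake_window = torch.randn(1, CHANNELS, WINDOW)  # stand-in for real sensor data
word_logits = model(fake_window)
print(word_logits.argmax(dim=-1))  # predicted word index
```

A compact model of this kind is one way a wearable could keep computational energy low, which is the trade-off the researchers highlight.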

Described in the journal npj Flexible Electronics, the smart choker could help people with speech impairments communicate. The researchers claim the sensitivity of its sensors is more than four times higher than that of existing SSIs. In tests, the smart choker recognised words with over 95 per cent accuracy while using 90 per cent less computational energy than existing devices.

“Current solutions for people with speech impairments often fail to capture words and require a lot of training,” said research lead Dr Luigi Occhipinti, from the Cambridge Graphene Centre. “They are also rigid, bulky and sometimes require invasive surgery to the throat.

“These sensors can detect tiny vibrations, such as those formed in the throat when whispering or even silently mouthing words, which makes them ideal for speech detection. By combining the ultra-high sensitivity of the sensors with highly efficient machine learning, we’ve come up with a device we think could help a lot of people who struggle with their speech.”

The researchers trained their machine learning model on a database of the most frequently used words in English, deliberately including words that are often confused with each other, such as ‘book’ and ‘look’. The model was trained with a variety of users, including people of different genders, native and non-native English speakers, and people with different accents and speaking speeds.
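To make the evaluation idea concrete, the sketch below shows how accuracy might be broken down per confusable word pair across a mix of speakers. The word pairs, speaker metadata and results are invented for illustration and are not from the study.

```python
# Illustrative sketch only: track accuracy per confusable word pair.
# The pairs, speakers and predictions below are made up.
from collections import defaultdict

confusable_pairs = [("book", "look"), ("light", "right"), ("pat", "bat")]

# Hypothetical recordings: (speaker_id, accent, true_word, predicted_word)
results = [
    ("s01", "native", "book", "book"),
    ("s02", "non-native", "look", "book"),
    ("s03", "native", "light", "light"),
]

# Accuracy broken down per confusable pair
pair_stats = defaultdict(lambda: [0, 0])  # pair -> [correct, total]
for _, _, word, pred in results:
    for pair in confusable_pairs:
        if word in pair:
            pair_stats[pair][0] += int(pred == word)
            pair_stats[pair][1] += 1

for pair, (correct, total) in pair_stats.items():
    print(f"{pair}: {correct}/{total} correct")
```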

“We chose to train the model with lots of different English speakers, so we could show it was capable of learning,” said Occhipinti. “Machine learning has the capability to learn quickly and efficiently from one user to the next, so the retraining process is quick.”

Although the choker will have to undergo extensive testing and clinical trials before it is approved for use in patients with speech impairments, the researchers say that their smart choker could also be used in other health monitoring applications, or for improving communication in noisy or secure environments.