Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and a mouse can delay the procedure and increase the risk of spreading infection-causing bacteria, according to Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.
The researchers are creating a system that uses depth-sensing cameras and specialised algorithms to recognise hand gestures as commands for manipulating MRI images on a large display. Recent work on the algorithms has been led by doctoral student Mithun George Jacob.
To validate the system, the researchers worked with veterinary surgeons to collect a set of gestures that come naturally to clinicians and surgeons. The surgeons were asked to specify the functions they perform with MRI images in typical surgeries and to suggest gestures for commands. Ten gestures were chosen: rotate clockwise and counter-clockwise; browse left and right; browse up and down; increase and decrease brightness; and zoom in and out.
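The paper describes this gesture vocabulary but not the software interface that executes it. As a rough sketch, the ten commands could be wired to an image viewer through a simple dispatch table along the following lines; the Python names and viewer methods are hypothetical illustrations, not taken from the Purdue system.

    from enum import Enum, auto

    class Gesture(Enum):
        # The ten gestures chosen by the surgeons
        ROTATE_CW = auto()
        ROTATE_CCW = auto()
        BROWSE_LEFT = auto()
        BROWSE_RIGHT = auto()
        BROWSE_UP = auto()
        BROWSE_DOWN = auto()
        BRIGHTNESS_UP = auto()
        BRIGHTNESS_DOWN = auto()
        ZOOM_IN = auto()
        ZOOM_OUT = auto()

    class MRIViewer:
        """Stand-in for the image-viewing system (hypothetical interface)."""
        def rotate(self, degrees): print(f"rotate {degrees} deg")
        def browse(self, dx, dy): print(f"browse by ({dx}, {dy})")
        def adjust_brightness(self, step): print(f"brightness {step:+d}")
        def zoom(self, factor): print(f"zoom x{factor}")

    # Dispatch table from recognised gesture to viewer command.
    COMMANDS = {
        Gesture.ROTATE_CW:       lambda v: v.rotate(+90),
        Gesture.ROTATE_CCW:      lambda v: v.rotate(-90),
        Gesture.BROWSE_LEFT:     lambda v: v.browse(-1, 0),
        Gesture.BROWSE_RIGHT:    lambda v: v.browse(+1, 0),
        Gesture.BROWSE_UP:       lambda v: v.browse(0, -1),
        Gesture.BROWSE_DOWN:     lambda v: v.browse(0, +1),
        Gesture.BRIGHTNESS_UP:   lambda v: v.adjust_brightness(+1),
        Gesture.BRIGHTNESS_DOWN: lambda v: v.adjust_brightness(-1),
        Gesture.ZOOM_IN:         lambda v: v.zoom(2.0),
        Gesture.ZOOM_OUT:        lambda v: v.zoom(0.5),
    }

    COMMANDS[Gesture.ZOOM_IN](MRIViewer())  # prints "zoom x2.0"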
Critical to the system’s accuracy is the use of ‘contextual information’ in the operating room. Cameras observe the surgeon’s torso and head to determine and continuously monitor what the surgeon wants to do.
‘A major challenge is to endow computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures,’ said Wachs in a statement.
‘Surgeons will make many gestures during the course of a surgery to communicate with other doctors and nurses. The main challenge is to create algorithms capable of understanding the difference between these gestures and those specifically intended as commands to browse the image-viewing system. We can determine context by looking at the position of the torso and the orientation of the surgeon’s gaze. Based on the direction of the gaze and the torso position, we can assess whether the surgeon wants to access medical images.’
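Wachs does not spell out how the gaze and torso cues gate a gesture, but the core idea can be sketched as a check that both are oriented towards the display before a gesture is accepted as a command. The fixed angle thresholds below are illustrative assumptions; the actual system presumably learns such boundaries from data.

    # Assumed thresholds, purely illustrative; the published system infers
    # context rather than applying fixed angles.
    GAZE_TOLERANCE_DEG = 20.0   # gaze must point within 20 deg of the display
    TORSO_TOLERANCE_DEG = 30.0  # torso must face within 30 deg of the display

    def gesture_is_intended(gaze_angle_deg, torso_angle_deg):
        """Gate a recognised gesture on contextual cues: treat it as a
        command only when both gaze and torso are oriented towards the
        display (0 deg = facing the display head-on)."""
        return (abs(gaze_angle_deg) <= GAZE_TOLERANCE_DEG and
                abs(torso_angle_deg) <= TORSO_TOLERANCE_DEG)

    # A gesture made while talking to a colleague (gaze turned 70 deg away)
    # is rejected rather than executed as a browsing command.
    print(gesture_is_intended(gaze_angle_deg=70.0, torso_angle_deg=10.0))  # False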
The hand-gesture-recognition system uses Microsoft’s Kinect camera, which senses three-dimensional space and maps the surgeon’s body in 3D.
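The Kinect's skeleton tracking supplies 3D joint positions, from which quantities such as torso orientation can be derived. A minimal sketch, assuming shoulder positions are reported in metres in camera coordinates:

    import math

    def torso_yaw_degrees(left_shoulder, right_shoulder):
        """Estimate torso yaw from 3D shoulder positions (x, y, z in metres)
        as a depth camera such as the Kinect reports them for a tracked
        skeleton. 0 deg means the shoulder line is parallel to the camera
        plane, i.e. the surgeon squarely faces the camera/display."""
        dx = right_shoulder[0] - left_shoulder[0]
        dz = right_shoulder[2] - left_shoulder[2]
        return math.degrees(math.atan2(dz, dx))

    # Shoulders level with the camera -> yaw of 0 deg (facing the display).
    print(torso_yaw_degrees((-0.2, 1.4, 2.0), (0.2, 1.4, 2.0)))  # 0.0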
Findings show that integrating context allows the algorithms to accurately distinguish image-browsing commands from unrelated gestures, reducing false positives from 20.8 per cent to 2.3 per cent.
‘If you are getting false alarms 20 per cent of the time, that’s a big drawback,’ Wachs said. ‘So we’ve been able to greatly improve accuracy in distinguishing commands from other gestures.’
The system has also been shown to translate gestures into specific commands, such as rotating and browsing images, with a mean accuracy of about 93 per cent.
The algorithm also takes into account which phase the surgery is in, which helps establish the proper context for interpreting gestures and reduces browsing time.
‘By observing the progress of the surgery, we can tell what is the most likely image the surgeon will want to see next,’ Wachs said.
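As a toy illustration of that idea, a predictor might count, separately for each phase of the surgery, which image typically follows the one on screen and prefetch the most frequent successor. The paper's actual model is not reproduced here, and the phase and image names below are invented.

    from collections import Counter, defaultdict

    class NextImagePredictor:
        """Toy model of phase-aware prefetching: per surgical phase, count
        which image is requested after the current one, then preload the
        most frequent successor."""

        def __init__(self):
            # (phase, current_image) -> Counter of images requested next
            self._transitions = defaultdict(Counter)

        def observe(self, phase, current_image, next_image):
            self._transitions[(phase, current_image)][next_image] += 1

        def predict(self, phase, current_image):
            counts = self._transitions.get((phase, current_image))
            return counts.most_common(1)[0][0] if counts else None

    p = NextImagePredictor()
    p.observe("craniotomy", "axial_12", "axial_13")
    p.observe("craniotomy", "axial_12", "axial_13")
    p.observe("closing", "axial_12", "sagittal_04")
    print(p.predict("craniotomy", "axial_12"))  # axial_13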
Findings from the research were detailed in a paper published in December in the Journal of the American Medical Informatics Association. The paper was written by Jacob, Wachs and Rebecca A Packer, an associate professor of neurology and neurosurgery in Purdue University's College of Veterinary Medicine.