MobilePoser is said to leverage sensors already embedded within consumer mobile devices, including smartphones, smartwatches and wireless earbuds. Using a combination of sensor data, machine learning and physics, MobilePoser accurately tracks a person’s full-body pose and global translation in space in real time.
In a statement, study lead Karan Ahuja said: “Running in real time on mobile devices, MobilePoser achieves state-of-the-art accuracy through advanced machine learning and physics-based optimisation, unlocking new possibilities in gaming, fitness and indoor navigation without needing specialised equipment.
“This technology marks a significant leap toward mobile motion capture, making immersive experiences more accessible and opening doors for innovative applications across various industries.”
Ahuja’s team unveiled MobilePoser on October 15, 2024 at the 2024 ACM Symposium on User Interface Software and Technology in Pittsburgh, USA.
In the filmmaking world, motion-capture techniques require an actor to wear a form-fitting suit covered in sensors while moving around a specialised set. A computer captures the sensor data and reconstructs the actor’s movements and expressions.
“This is the gold standard of motion capture, but it costs upward of $100,000 to run that setup,” said Ahuja. “We wanted to develop an accessible, democratised version that basically anyone can use with equipment they already have.”
Other motion-sensing systems rely on stationary cameras to observe body movements. These systems work well as long as a person stays within the camera’s field of view, but they are impractical for mobile, on-the-go applications.
To overcome these limitations, Ahuja’s team turned to inertial measurement units (IMUs), devices that combine accelerometers, gyroscopes and magnetometers to measure a body’s movement and orientation.
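As background, a classic way to fuse these sensors into an orientation estimate is a complementary filter, which blends the gyroscope’s drift-prone integration with the accelerometer’s noisy but stable gravity reference. The Python sketch below is purely illustrative and is not drawn from MobilePoser’s code; every name and parameter in it is hypothetical.

```python
import numpy as np

def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into roll/pitch angles.

    gyro:  (N, 3) angular velocity in rad/s
    accel: (N, 3) linear acceleration in m/s^2 (gravity included)
    Returns an (N, 2) array of [roll, pitch] in radians.
    """
    angles = np.zeros((len(gyro), 2))
    roll = pitch = 0.0
    for i, (w, a) in enumerate(zip(gyro, accel)):
        # Integrate angular rate: accurate short-term, drifts long-term.
        roll += w[0] * dt
        pitch += w[1] * dt
        # Tilt from the gravity direction: noisy short-term, stable long-term.
        accel_roll = np.arctan2(a[1], a[2])
        accel_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend the two estimates.
        roll = alpha * roll + (1 - alpha) * accel_roll
        pitch = alpha * pitch + (1 - alpha) * accel_pitch
        angles[i] = [roll, pitch]
    return angles
```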
These sensors already reside within smartphones and other devices, but their fidelity is too low for accurate motion-capture applications. To enhance their performance, Ahuja’s team added a custom-built, multi-stage artificial intelligence (AI) algorithm, trained on a large, publicly available dataset of synthesised IMU measurements generated from high-quality motion-capture data.
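The article does not detail how that training data was synthesised, but the general technique of deriving accelerometer-like signals from motion-capture trajectories can be sketched with finite differences, as below; the function name, frame rate and Y-up convention are all assumptions.

```python
import numpy as np

def synthesise_imu(positions, fps=60.0):
    """Generate synthetic accelerometer readings from mocap positions.

    positions: (N, 3) world-space positions of one body point over time.
    Returns (N-2, 3) accelerations via second-order finite differences,
    with gravity added back so the signal resembles a real accelerometer.
    """
    dt = 1.0 / fps
    accel = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt**2
    accel += np.array([0.0, 9.81, 0.0])  # assumes a Y-up world frame
    return accel
```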
From the sensor data, MobilePoser obtains information about acceleration and body orientation. It then feeds this data through an AI algorithm, which estimates joint positions and rotations, walking speed and direction, and contact between the user’s feet and the ground.
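The team’s actual architecture is not described here. Purely as an illustration of a multi-task model with this shape, the following PyTorch sketch maps an IMU sequence to per-joint rotations, root velocity and foot-contact probabilities; every layer choice and dimension is hypothetical.

```python
import torch
import torch.nn as nn

class PoseEstimator(nn.Module):
    """Hypothetical multi-task model: IMU sequence in, pose/velocity/contact out."""
    def __init__(self, imu_dim=24, hidden=256, joints=24):
        super().__init__()
        self.encoder = nn.LSTM(imu_dim, hidden, num_layers=2, batch_first=True)
        self.pose_head = nn.Linear(hidden, joints * 6)  # 6D rotation per joint
        self.vel_head = nn.Linear(hidden, 3)            # root velocity (speed + direction)
        self.contact_head = nn.Linear(hidden, 2)        # left/right foot contact logits

    def forward(self, imu_seq):
        feats, _ = self.encoder(imu_seq)                # (batch, time, hidden)
        return (self.pose_head(feats),
                self.vel_head(feats),
                torch.sigmoid(self.contact_head(feats)))
```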
Finally, MobilePoser uses a physics-based optimiser to refine the predicted movements, ensuring they match real-life body movements. The resulting system is said to have a tracking error of 8 to 10 cm.
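One simple constraint such an optimiser might enforce is that a planted foot should not slide. The sketch below cancels horizontal root motion on frames where foot contact is predicted; it is a toy stand-in for a full physics-based optimiser, not the paper’s method.

```python
import numpy as np

def refine_with_contacts(root_positions, foot_contact, threshold=0.5):
    """Suppress foot sliding in a predicted root trajectory.

    root_positions: (N, 3) predicted root positions (x, y, z), Y up.
    foot_contact:   (N,) predicted probability that the stance foot is planted.
    On frames where contact is predicted, the horizontal part of the
    per-frame motion is cancelled so a planted foot cannot skate.
    """
    steps = np.diff(root_positions, axis=0)           # per-frame displacement
    refined = [root_positions[0]]
    for step, contact in zip(steps, foot_contact[1:]):
        if contact > threshold:
            step = step * np.array([0.0, 1.0, 0.0])   # keep only vertical motion
        refined.append(refined[-1] + step)
    return np.asarray(refined)
```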
“The accuracy is better when a person is wearing more than one device, such as a smartwatch on their wrist plus a smartphone in their pocket,” said Ahuja. “But a key part of the system is that it’s adaptive. Even if you don’t have your watch one day and only have your phone, it can adapt to figure out your full-body pose.”
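A common way to make a single model robust to missing devices is to zero-fill absent inputs and pass an explicit presence mask, so the network learns to rely on whatever is available. The sketch below assumes three device slots; the slot names and dimensions are invented for illustration.

```python
import torch

def assemble_imu_input(device_readings, slots=("phone", "watch", "earbud"), dim=12):
    """Pack whichever devices are present into a fixed-size input tensor.

    device_readings: dict mapping a slot name to a (time, dim) tensor.
    Missing devices are zero-filled and flagged via a per-slot mask, so
    one model can handle any subset of devices.
    """
    time_steps = next(iter(device_readings.values())).shape[0]
    parts, mask = [], []
    for slot in slots:
        if slot in device_readings:
            parts.append(device_readings[slot])
            mask.append(torch.ones(time_steps, 1))
        else:
            parts.append(torch.zeros(time_steps, dim))
            mask.append(torch.zeros(time_steps, 1))
    return torch.cat(parts + mask, dim=-1)  # (time, len(slots) * (dim + 1))
```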
While MobilePoser could give gamers more immersive experiences, the new app also opens up possibilities for health and fitness. It goes beyond simply counting steps, enabling users to view their full-body posture so they can check their form when exercising. The app could also help physicians analyse patients’ mobility, activity levels and gait.