MIT robot uses 3D keypoints for advanced coordination
Researchers at MIT have developed a new robot vision system that allows previously unseen objects to be picked up, moved and placed accurately.

Robots are extremely good at repetitive tasks with little to no variation, but struggle when dealing with added complexity or unfamiliar objects. To assess how objects should be picked up, robots tend to use either pose-based or geometry-based systems. Both methods have limitations, however, especially in the face of everyday tasks like picking up and placing a mug – a seemingly straightforward action that in fact requires advanced coordination and subtlety.
To equip its robot with that subtlety, the team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) used an approach that represents objects as a collection of 3D keypoints, providing a kind of visual roadmap that allows more nuanced manipulation. Known as kPAM (KeyPoint Affordances for Category-Level Robotic Manipulation), the technique gives the robot all the information it needs to pick up, move and place objects accurately, while also providing enough flexibility to cope with variation within a category of objects, such as differently shaped mugs or different styles of shoe.
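As a rough illustration of the keypoint idea (a minimal sketch, not the CSAIL team's published code), the snippet below represents a mug by three hypothetical semantic keypoints and computes the rigid transform that would carry the detected keypoints to user-specified target positions, which is the kind of geometric quantity a pick-and-place action can be built around. All keypoint names and coordinates are invented for the example.

```python
import numpy as np

def rigid_transform_from_keypoints(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    source keypoints onto target keypoints (Kabsch / orthogonal Procrustes)."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_centroid = src.mean(axis=0)
    tgt_centroid = tgt.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_centroid).T @ (tgt - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det = -1) creeping into the SVD solution
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Hypothetical detected keypoints for a mug, in metres:
# bottom centre, top centre and handle tip.
detected = [[0.42, 0.10, 0.02],
            [0.42, 0.10, 0.12],
            [0.47, 0.10, 0.08]]

# Where those same keypoints should end up once the mug is placed upright on a shelf.
target = [[0.80, -0.20, 0.30],
          [0.80, -0.20, 0.40],
          [0.85, -0.20, 0.36]]

R, t = rigid_transform_from_keypoints(detected, target)
print("Rotation:\n", R)
print("Translation:", t)
```

Because the action is defined on the keypoints rather than on a full object model, the same target specification works for any mug whose keypoints can be detected, which is what gives the approach its tolerance to within-category variation.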