The mobile telecommunications network is running out of space, the industry often tells us. In the coming years, 5G standards will supplant the existing 4G state of the art, allowing more data to be sent between wireless devices, faster and more reliably. However, a team of mechanical engineers from Purdue University in West Lafayette, Indiana, claims to have developed a system that uses the existing standard wireless network to enable high-quality 3D video communication on mobile devices such as tablets and smartphones.
“To our knowledge, this system is the first of its kind that can deliver dense and accurate 3D video content in real time across standard wireless networks,” said research leader Sung Zhang, who will present his team's research at the upcoming Electronic Imaging 2018 conference in California.
The system, which the team has called Holostream, works by converting the 3D video into a 2D format and using standard 2D video compression algorithms to make the data transmittable over the existing network.
“Standard 2D image and video compression techniques are quite mature and enable today’s modern 2D video communications over standard wireless networks,” Zhang said. “If 3D geometry can be efficiently and precisely converted into standard 2D images, existing 2D video communication platforms can be immediately leveraged for low bandwidth 3D video communications.”
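The idea of packing 3D geometry into ordinary 2D images can be sketched in a few lines. The snippet below is a hypothetical illustration, not the team's actual encoding: it quantises a floating-point depth map into two 8-bit image channels (a coarse channel plus a fine residual), so that the result can be carried by any standard 2D image or video codec and decoded back to depth at the receiver.

```python
import numpy as np

def encode_depth(depth, zmin, zmax):
    """Pack a float depth map into two 8-bit channels (coarse + fine).

    Hypothetical sketch: real systems use more sophisticated encodings,
    but the principle is the same -- represent 3D geometry as ordinary
    2D images so existing codecs can compress and transmit it.
    """
    norm = (depth - zmin) / (zmax - zmin)             # normalise to [0, 1]
    coarse = np.floor(norm * 255.0)                   # high-order bits
    fine = np.round((norm * 255.0 - coarse) * 255.0)  # low-order residual
    return np.stack([coarse, fine], axis=-1).astype(np.uint8)

def decode_depth(img, zmin, zmax):
    """Recover the depth map from the two 8-bit channels."""
    coarse = img[..., 0].astype(np.float64)
    fine = img[..., 1].astype(np.float64)
    norm = (coarse + fine / 255.0) / 255.0
    return norm * (zmax - zmin) + zmin
```

With 16 bits per pixel, the round-trip quantisation error is bounded by roughly 1/65 025 of the depth range, which is why a carefully chosen image encoding can preserve dense geometry through a standard 2D pipeline.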
The innovation in the system is how the 3D video data and colour texture are captured and converted into a 2D format. The images are formed by projecting structured stripe patterns from an LED light source onto the object, which is scanned by a 3D camera. The stripes allow the camera to determine the shape and depth of the object. The system then represents the scanned object as a mesh of intersecting lines that form triangles. Overlaid onto this mesh is a "texture" of features that makes the object appear realistic. This allows moving video to be recorded, compressed, transmitted and decompressed in real time.
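Because a structured-light scanner samples depth on a regular camera-pixel grid, turning those samples into the triangle mesh described above is mechanical: each grid cell is split into two triangles. The helper below is a minimal sketch of that step (the function name and row-major indexing are assumptions for illustration, not the team's implementation).

```python
def grid_triangles(rows, cols):
    """Connect a rows x cols grid of depth samples into a triangle mesh.

    Vertices are indexed row-major (vertex i sits at row i // cols,
    column i % cols); each grid cell yields two triangles.
    """
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                              # top-left vertex
            tris.append((i, i + 1, i + cols))             # upper triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower triangle
    return tris
```

A 3x3 grid of samples, for example, yields four cells and hence eight triangles; the colour texture is then mapped onto these triangles to make the reconstructed object look realistic.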
Such is the fidelity of the transmission, the team claims, that it could be used in online "facial behaviour analysis", where expressions and facial movements are used to diagnose medical conditions such as depression and post-traumatic stress disorder. It also has applications in collaborative design and remote control of surgical apparatus, they added.