By combining attributes from the natural world, researchers have created a multidimensional imaging system with an extraordinary depth range that can scan around blind spots.
Powered by computational image processing, the camera can decipher the size and shape of objects hidden around corners or behind other items. The technology could be incorporated into autonomous vehicles or medical imaging tools with sensing capabilities far beyond what is considered state of the art today, the team claimed. Their research has been published in Nature Communications.
In the dark, bats visualise their surroundings using a form of echolocation, or sonar. Their high-frequency squeaks bounce off their surroundings and are picked back up by their ears. How long each echo takes to return, and how intense it is, tells the nocturnal animals in real time where things are, what’s in the way and how close potential prey is.
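The ranging principle behind echolocation is simple to state: the sound travels out and back, so the one-way distance is the round-trip time multiplied by the speed of sound, divided by two. A minimal sketch (the function name and example values here are illustrative, not from the study):

```python
# Toy echolocation ranging: infer distance from the round-trip time of
# an echo, as a bat does with sound. Values are illustrative only.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_distance(round_trip_seconds, speed=SPEED_OF_SOUND):
    """One-way distance to a reflector from a round-trip echo delay."""
    return speed * round_trip_seconds / 2.0

# An echo returning after 10 ms puts the target about 1.7 m away.
print(echo_distance(0.010))
```

The factor of two is the crucial step: the delay covers the path to the target *and* back, so forgetting it doubles every range estimate.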
Many insects have compound eyes, in which each ‘eye’ is composed of hundreds to tens of thousands of individual visual units, making it possible to see the same object along multiple lines of sight. Compound eyes give flies a near-360-degree view, but their fixed focal length makes it difficult for them to see anything far away.
Inspired by these two natural phenomena, the UCLA-led team set out to design a high-performance 3D camera system with advanced capabilities that leverage these advantages but also address nature’s shortcomings.
“While the idea itself has been tried, seeing across a range of distances and around occlusions has been a major hurdle,” said study leader Liang Gao, an associate professor of bioengineering at the UCLA Samueli School of Engineering. “To address that, we developed a novel computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors.”
Dubbed Compact Light-field Photography (CLIP), the framework reportedly allows the camera system to ‘see’ with an extended depth range and around objects. In experiments, the researchers demonstrated that their system can ‘see’ hidden objects that are not spotted by conventional 3D cameras.
The researchers also use a type of LiDAR (Light Detection And Ranging) in which a laser scans the surroundings to create a 3D map of the area.
Without CLIP, conventional LiDAR would take a high-resolution snapshot of the scene but miss hidden objects, much like human eyes would.
With CLIP, an array of seven LiDAR cameras takes lower-resolution images of the scene, processes what each individual camera sees, then reconstructs the combined scene in high-resolution 3D. The researchers demonstrated that the camera system could image a complex 3D scene with several objects, all set at different distances.
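The idea of merging several low-resolution views into one higher-resolution result can be illustrated with a toy “shift-and-add” reconstruction. This is a deliberately simplified stand-in for the team’s CLIP framework, not their actual algorithm: it assumes each camera sees the scene offset by a known sub-pixel shift, so interleaving the frames recovers a finer grid.

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, factor):
    """Interleave low-resolution frames, each offset by a known
    (dy, dx) sub-pixel shift, onto a grid `factor` times finer.
    A toy stand-in for multi-view reconstruction."""
    h, w = low_res_frames[0].shape
    high = np.zeros((h * factor, w * factor))
    count = np.zeros_like(high)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        high[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    # Average wherever more than one frame contributed a sample.
    return high / np.maximum(count, 1)

# Four 2x2 frames, shifted by one high-res pixel each, rebuild a 4x4 scene.
truth = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in shifts]
print(np.allclose(shift_and_add(frames, shifts, 2), truth))
```

Real multi-view reconstruction is far harder, since the shifts are unknown, the views overlap irregularly, and the scene is 3D, but the core intuition (many coarse, offset samples jointly constrain a finer image) is the same.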
“If you’re covering one eye and looking at your laptop computer, and there’s a coffee mug just slightly hidden behind it, you might not see it, because the laptop blocks the view,” Gao said in a statement. “But if you use both eyes, you’ll notice you’ll get a better view of the object. That’s sort of what’s happening here, but now imagine seeing the mug with an insect’s compound eye. Now multiple views of it are possible.”
According to Gao, CLIP helps the camera array make sense of what’s hidden in a similar manner. Combined with LiDAR, the system can achieve the bat echolocation effect so one can sense a hidden object by how long it takes for light to bounce back to the camera.
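The LiDAR side of the system uses the same round-trip logic as a bat’s sonar, only with light, which makes the timing demands severe: light covers a metre in a few nanoseconds. A hedged sketch (values illustrative, not from the paper) showing the ranges involved:

```python
# Toy LiDAR ranging: the same round-trip principle as echolocation,
# but with light, so the electronics must resolve nanoseconds.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_seconds):
    """One-way range from a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 20 ns corresponds to a target ~3 m away,
print(round(lidar_range(20e-9), 3))
# and each nanosecond of timing uncertainty is ~15 cm of range error.
print(round(lidar_range(1e-9), 3))
```

This is why timing resolution, rather than optics alone, sets the depth precision of a time-of-flight system.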
The co-lead authors of the published research are UCLA bioengineering graduate student Yayao Ma, who is a member of Gao’s Intelligent Optics Laboratory, and Xiaohua Feng — a former UCLA Samueli postdoc working in Gao’s lab and now a research scientist at the Research Center for Humanoid Sensing at the Zhejiang Laboratory in Hangzhou, China.