This is the claim of Daniel Gehrig and Davide Scaramuzza from the Department of Informatics at the University of Zurich (UZH), whose work is detailed in Nature.
Most current cameras are frame-based, taking snapshots at regular intervals. Those currently used for driver assistance in cars typically capture 30 to 50 frames per second, and an artificial neural network can be trained to recognise objects in their images, such as pedestrians, bikes and other cars.
“But if something happens during the 20 or 30 milliseconds between two snapshots, the camera may see it too late. The solution would be increasing the frame rate, but that translates into more data that needs to be processed in real-time and more computational power,” first author Gehrig said in a statement.
Event cameras are a recent innovation: instead of operating at a constant frame rate, they have smart pixels that record information every time they detect fast movement.
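The per-pixel behaviour described above can be sketched in a few lines. This is a minimal, illustrative simulation of the general event-camera principle (the function name, threshold value, and frame-based input are assumptions for the sketch, not details from the paper): each pixel fires an event whenever its log-intensity changes by more than a contrast threshold, rather than being sampled at a fixed rate.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Emit (t, x, y, polarity) events from a stack of intensity frames.

    A pixel fires a +1 (or -1) event when its log-intensity rises (or falls)
    past `threshold` relative to its last reference level, mimicking how an
    event-camera pixel responds to brightness change rather than to a clock.
    """
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            for y, x in zip(*np.nonzero(mask)):
                events.append((t, int(x), int(y), polarity))
            log_ref[mask] = log_i[mask]  # reset reference where events fired
    return events
```

A static scene produces no events at all, which is why such sensors have no fixed data rate; only moving edges generate output.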
“This way, they have no blind spot between frames, which allows them to detect obstacles more quickly. They are also called neuromorphic cameras because they mimic how human eyes perceive images”, said Scaramuzza, head of the Robotics and Perception Group at UZH.
However, they can miss things that move slowly, and their output is not easily converted into the kind of data used to train AI algorithms.
Now, Gehrig and Scaramuzza have developed a hybrid system that includes a standard camera that collects 20 images per second, a relatively low frame rate compared to the ones currently in use.
Its images are processed by a convolutional neural network, an AI system trained to recognise cars or pedestrians.
The data from the event camera is coupled to an asynchronous graph neural network, a different type of AI system that the team said is particularly suited to analysing 3D data that changes over time.
Detections from the event camera are used to anticipate detections by the standard camera and boost its performance.
“The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would do but requires the same bandwidth as a standard 50-frame-per-second camera”, said Gehrig.
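The bandwidth claim in the quote above can be checked with back-of-envelope arithmetic. The sensor resolution and bytes-per-pixel below are illustrative assumptions, not figures from the paper; the point is that raw data rate scales linearly with frame rate, so matching a 5,000 fps camera's latency at 50 fps bandwidth is a hundredfold saving.

```python
# Assumed greyscale VGA sensor: 640x480 pixels, 1 byte per pixel.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 640, 480, 1

def frame_bandwidth_mb_per_s(fps):
    """Raw data rate of a conventional frame camera at `fps`, in MB/s."""
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * fps / 1e6

fast = frame_bandwidth_mb_per_s(5000)  # latency-equivalent frame camera
slow = frame_bandwidth_mb_per_s(50)    # bandwidth the hybrid actually needs
print(f"5000 fps: {fast:.0f} MB/s; 50 fps: {slow:.1f} MB/s; ratio {fast/slow:.0f}x")
```

Under these assumptions a 5,000 fps camera would stream roughly 1.5 GB/s, versus about 15 MB/s at 50 fps, in line with the hundredfold figure reported by the team.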
The team tested their system against the best cameras and visual algorithms currently on the automotive market, finding that it delivers detections one hundred times faster while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the computational power needed to process the images, all without affecting accuracy.
Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two subsequent frames of the standard camera.
According to the scientists, the method could be made even more powerful by integrating cameras with LiDAR sensors. “Hybrid systems like this could be crucial to allow autonomous driving, guaranteeing safety without leading to a substantial growth of data and computational power,” said Scaramuzza.