If you’ve been out on the roads lately, the chances are you were being watched. We may still be some way from the sci-fi vision of fully self-driving passenger cars, but just about every new car now uses some degree of electronic sensing to perceive its environment.
Training and calibrating these systems – many of which employ some degree of artificial intelligence – is a mammoth task. And it’s only getting more complex as the systems evolve, with potentially millions of different scenarios to consider.
Simulation specialist rFpro believes it has achieved a major breakthrough with its new ray tracing rendering technology. For the first time, this is said to allow software and hardware in the loop to ‘see’ a virtual image in the same way that a camera or lidar system would.
Virtual testing plays a key role in autonomous vehicle and ADAS development. Simulations can iterate through a multitude of different conditions – for instance, subtly varying the angle of the sun and the level of glare on the camera each time.
But there’s a problem. Traditional video rendering techniques are optimised for the human eye. To deliver beautifully crisp, high-resolution images in real time, they employ a number of tricks that are undetectable to our brains but can mislead an image recognition system.
One of the biggest problems is reflections, explained Matt Daley, operations director of rFpro: “In a well-lit situation with most of the light coming from the sun, these simplified techniques can work quite well. But if the car is, say, driving on a shiny wet road at night with street lights, tail lights and oncoming headlights, you could suddenly have hundreds of different light sources and literally thousands of reflections that need to be represented to give a true picture.”
As the name implies, ray tracing tracks the light from all these different sources and models the way it interacts with the rest of the scene. This isn’t a new concept, but traditional approaches can be incredibly processor-intensive, with the systems used to render special effects for Hollywood blockbusters potentially taking 10 to 30 minutes for each frame. In contrast, the rasterisation approach used for a real-time driving simulator might need to churn out 240 frames per second.
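To put that gap in perspective, a quick back-of-envelope calculation is shown below; the 10-minute and 240 Hz figures are those quoted above, and everything else is simple arithmetic.

```python
# Back-of-envelope comparison of the frame-time gap between offline ray tracing
# and a real-time driving simulator (figures as quoted in the article).
offline_raytrace_s = 10 * 60    # film-quality ray tracing: ~10-30 minutes per frame (lower bound)
realtime_budget_s = 1 / 240     # a 240 Hz simulator must deliver a new frame every ~4.2 ms

print(f"Real-time budget per frame: {realtime_budget_s * 1000:.1f} ms")
print(f"Offline ray tracing is roughly {offline_raytrace_s / realtime_budget_s:,.0f}x slower per frame")
```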
To add to the confusion, a simplified reflection model based on the concept of ray tracing is sometimes added on top of these real-time rasterised images, but Daley pointed out this is still very much an approximation. It’s good enough for our eyes, but not for a camera that reads the image out pixel by pixel.
Faster processing
The new technique that Daley and his colleagues have developed looks at ray tracing specifically for the needs of vehicle simulation. It’s been developed from the ground up for use with a new generation of graphics cards that have dedicated hardware for ray tracing calculations, and it draws directly from rFpro’s collaboration with a major automotive sensor manufacturer.
The result is said to be several orders of magnitude quicker. It’s still not quite fast enough to run in real time – each frame takes a handful of seconds to process, rather than the milliseconds required to project at 240 Hz – but, crucially, it is now fast enough to crunch through offline simulations in an acceptable timescale.
“The big difference is that we’ve not taken an animation studio rendering ray tracer and tried to manipulate it and apply it here. Instead, we’ve written our own ray tracer from first principles, knowing what the challenge is, and knowing where we need to be efficient,” commented Daley. “Another advantage is that we’ve been working on this at a time when RTX [ray tracing] cores have been coming into graphics cards, so it’s been written specifically for hardware acceleration.”
Alongside the ray tracing technology, there’s a camera emulation system. Modern cameras typically take multiple exposures and then combine them into one HDR image. On top of that, the camera scans the image from top to bottom, with perhaps one microsecond separating each line of pixels. This leads to so-called rolling shutter effects, where something like a rotating propeller blade or a passing lamppost can appear curved as a result of the bottom of the image being captured slightly later than the top.
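As a rough illustration of why that matters, the sketch below estimates the apparent skew a rolling shutter would produce; the one-microsecond line delay is the figure quoted above, while the image height and the speed of the moving edge are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative rolling-shutter sketch: each row of the image is captured slightly
# later than the one above it, so a fast-moving vertical edge appears slanted.
rows = 1080                      # assumed image height in pixels
line_delay_s = 1e-6              # ~1 microsecond between successive lines (figure from the article)
edge_speed_px_per_s = 2.0e5      # assumed horizontal speed of an edge across the sensor

row_capture_time = np.arange(rows) * line_delay_s          # when each row is read out
horizontal_shift = edge_speed_px_per_s * row_capture_time  # where the edge has moved to by then

print(f"Bottom row lags the top by {row_capture_time[-1] * 1e3:.2f} ms")
print(f"Apparent skew of the edge, top to bottom: {horizontal_shift[-1]:.0f} px")
```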
The rFpro software can mimic this by feeding each image, and even each individual line of pixels, to the vehicle’s software one at a time. An additional feature known as Synchro-Step slows down the software under test so the two stay in sync – effectively creating the illusion that the images are being fed in real time.
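Conceptually, the lockstep idea works something like the minimal sketch below. It is not rFpro’s actual interface – the names feed_frame_synchro_step and perception_step are hypothetical – but it shows how advancing the simulated clock only after each line has been consumed keeps the software under test in step with the image feed.

```python
# Minimal sketch of line-by-line feeding with a lockstep clock (hypothetical names,
# not rFpro's API): the simulated clock advances only after the software under test
# has processed each line of pixels, so the images appear to arrive in real time.
def feed_frame_synchro_step(frame_lines, perception_step, line_delay_s=1e-6):
    sim_time_s = 0.0
    for line in frame_lines:
        perception_step(line, sim_time_s)   # software under test consumes one line of pixels
        sim_time_s += line_delay_s          # clock steps forward only once it has done so
    return sim_time_s
```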
One of the key features of the rFpro system – something the company currently believes to be unique – is that the simulation engine and the digital assets remain the same, irrespective of which rendering technology is used. This means an engineer can sit down and create hundreds of scenarios and run through them in real time on the desktop or in the simulator before picking out those they wish to test in high fidelity.
Test options
Daley said the company anticipates two main uses for the software. The first is generating offline training data to be fed into machine learning systems while they’re under development; the second is to test the finished system with software or hardware in the loop. Depending on the test requirements, this can be achieved by removing the lens and feeding the data directly into the sensor hardware, or by pointing the complete camera at a high-fidelity image.
Intriguingly, a third option has emerged, which is to pre-render the scenarios and then play them back in a driving simulator with a human passenger.
“We’ve discovered there’s actually huge interest in passenger in the loop simulation,” commented Daley. “With autonomous vehicles we’re moving from a driver in the loop, with driver-influenced action and motion, to a system that’s fully or partially automated. The passengers won’t be able to change the path of the vehicle themselves, but they will be able to experience it in far greater fidelity. We’ve already had customers approach us about that.”
Interest so far is said to have included tier one suppliers working with sensors and cameras for automated driving, a company focusing on the in-car passenger experience, and a headlight manufacturer looking to enhance the simulation of its optics.
“It’s far more than improving the aesthetics of the simulation,” noted Daley. “This really is about applying a physically-modelled and engineering-led approach to simulation. We’re confident that we’ve now reached the position where this technology can make a real difference to the safety of autonomous vehicles.”
It’s likely that those applications will span a very broad range. As a company, rFpro has its roots in motor racing (Daley also worked as a simulation engineer with the likes of McLaren and Ferrari before joining the company). However, its customers now cover just about every conceivable type of land vehicle. The latest application rFpro has become involved in is autonomous guided vehicles (AGVs) for port handling and logistics.
“Will we all be driving to work in autonomous taxis in a few years’ time? No. But I think a substantial proportion of our logistics could be moving around silently at night-time across thousands of kilometres of the US and Europe before too long,” commented Daley. “The autonomous industry is concentrating on where it can have success safely and economically. That might not always be the most glamorous situations. But then, engineering isn’t about glamour, it’s about making the right situations better.”