You’d be forgiven for thinking that fully autonomous cars were just around the corner. In some respects, of course, they are. Partial automation – along the lines of Tesla’s much-publicised Autopilot – is set to become commonplace on premium cars over the next few years. Even when it comes to higher levels of autonomy, much of the required hardware is already available.
It’s all so tantalisingly close. And yet there is a huge amount of work – not to mention a good deal of legal and administrative wrangling – to be done before we can safely switch our cars over to autonomous mode and go to sleep.
To cross that threshold, autonomous cars have to truly comprehend their environment. They need to be able to identify potential hazards, anticipate the actions of others and make decisions of their own. The key to this ability is artificial intelligence, with systems such as neural networks promising to take us into a brave new world of machines that think for themselves.
Most of the sensor technology is here already. For long-range use, radar is the default choice: it’s already widely used in adaptive cruise control, and developers are aiming for ranges of up to 400 metres. The same technology can be used to provide mid-range detection, along with lidar and stereo video cameras. For close-proximity work, ultrasonic sensors and short-range cameras are the preferred solutions.
“All these sensors have strengths and weaknesses,” explained Charles Degutis, director of product management for highly automated driving at Bosch. “Radar is very powerful, but it can bounce off tunnels and bridges, and it can struggle to differentiate small closely-spaced objects. Video provides lots of detail, but it can be blinded by things like glare. And lidar gives you a 3D picture, but being light-based it can degrade in high moisture situations.”
Bosch believes the best way forward is to combine all three sensor types, giving a more comprehensive picture, plus a degree of redundancy. Some, however, claim that video on its own could be sufficient, and potentially cheaper, given the right processing. Either way, the sensor technology is unlikely to be an obstacle.
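To make the idea of fusion concrete, here’s a minimal sketch of the kind of logic involved, in Python. The sensor names, confidence figures and the assumption of independent sensor errors are illustrative only – this is a sketch of the principle, not Bosch’s implementation.

```python
from dataclasses import dataclass

# Each sensor reports the same object with its own range estimate and
# confidence; fusing them means no single weakness (radar ghosting,
# camera glare, lidar in rain) decides the outcome on its own.
@dataclass
class Detection:
    sensor: str        # "radar", "camera" or "lidar"
    range_m: float     # estimated distance to the object
    confidence: float  # sensor's own confidence, in [0, 1]

def fuse(detections: list[Detection]) -> dict:
    """Combine per-sensor detections of one object into a single estimate."""
    total = sum(d.confidence for d in detections)
    # Weight each sensor's range estimate by its reported confidence.
    fused_range = sum(d.range_m * d.confidence for d in detections) / total
    # Redundancy: probability at least one sensor is right, under the
    # (simplifying) assumption that their errors are independent.
    miss = 1.0
    for d in detections:
        miss *= 1.0 - d.confidence
    return {"range_m": fused_range, "confidence": 1.0 - miss}

print(fuse([Detection("radar", 102.0, 0.9),
            Detection("camera", 98.0, 0.6),
            Detection("lidar", 100.0, 0.7)]))
```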
The final piece of the jigsaw is high-resolution mapping. In urban environments, autonomous cars will be able to pinpoint their location down to an inch or so by referencing sensor data to highly accurate 3D maps. These will be generated by radar surveys and kept up to date using data from fleets of vehicles connected to the cloud.
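As a toy illustration of that referencing step, the sketch below corrects a rough satellite-navigation fix by matching landmark positions seen by the car’s sensors against surveyed landmarks in a stored map. The coordinates, and the assumption that each observation is already paired with the right map landmark, are invented for the example.

```python
import numpy as np

# Surveyed landmark positions from the high-resolution map (world frame)
map_landmarks = np.array([[12.0, 5.0], [14.5, 9.0], [20.0, 6.5]])
# The same landmarks as seen by the car's sensors (car's own frame)
observed = np.array([[2.1, 1.0], [4.5, 5.1], [10.1, 2.4]])
# Rough position fix, e.g. from satellite navigation; in practice this is
# what lets the car pair each observation with the right map landmark
rough_pose = np.array([10.2, 3.8])

# Each matched pair "votes" for the car's true position; averaging the
# votes gives a least-squares estimate of the car's location
votes = map_landmarks - observed
refined_pose = votes.mean(axis=0)

print("rough:", rough_pose, "refined:", refined_pose)
```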
AI inside
Increasingly, the challenge facing autonomous vehicles is not so much capturing the world around them, but making sense of it.
The process of identifying and classifying objects from sensor data is known as semantic segmentation. For human adults – trained to recognise patterns from birth – it barely registers as a task at all: we see an image of a car and instinctively know what it is, even if it’s not a specific type we’ve encountered before. For computers, however, this poses a significant challenge. The system has to recognise that, say, a small two-seater convertible is fundamentally the same type of object as a seven-seat SUV. Likewise, pedestrians and roadside objects come in a bewildering array of sizes and forms.
In order to decipher these complex situations, autonomous vehicle developers are turning to artificial neural networks. As the name implies, these computer systems are inspired by the vast clusters of neurons found in the brain, and they ‘learn’ in a very similar way.
In place of traditional programming, the network is given a set of inputs and a target output (in this case, the inputs being image data and the output being a particular class of object). Essentially, it feeds the data through the mass of interconnected neurons – each of which can have tens of thousands of connections to the others – and then compares the observed output to the target. Over successive iterations the network refines itself, changing the strength of certain connections until the observed output matches the desired one.
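The loop below is a toy version of that process – a single layer of ‘neurons’ trained on invented data, rather than a full network – but the mechanism of comparing the observed output with a target and nudging the connection strengths is the same one used at scale.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 4))                       # inputs (stand-in for image data)
target = (x.sum(axis=1) > 2.0).astype(float)   # desired output for each example

w = rng.standard_normal(4)   # connection strengths, initially random
b = 0.0                      # bias term
lr = 0.5                     # how hard to nudge the weights each iteration

for step in range(1000):
    out = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # observed output (sigmoid)
    err = out - target                         # compare with the target
    w -= lr * (x.T @ err) / len(x)             # strengthen/weaken connections
    b -= lr * err.mean()

print(f"accuracy after training: {((out > 0.5) == target).mean():.2f}")
```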
Eventually, the network can learn to spot the tell-tale features that identify a particular class of object. It doesn’t follow any preset rules for identifying them, though. For want of a better description it simply ‘knows’. It’s this ability to think outside the box that makes neural networks such a powerful tool for semantic segmentation.
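To see what this looks like in practice, the sketch below runs an off-the-shelf, general-purpose segmentation network from the torchvision library – a stand-in for the automotive-grade systems described here, not one of them – and assigns every pixel of an image a class label.

```python
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights)

# Pre-trained, general-purpose model; weights download on first use
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = torch.rand(3, 520, 520)          # random tensor standing in for a camera frame
batch = preprocess(image).unsqueeze(0)   # resize, normalise, add batch dimension

with torch.no_grad():
    logits = model(batch)["out"]         # shape: [1, num_classes, H, W]

labels = logits.argmax(dim=1)            # per-pixel class index
print(labels.shape)                      # in this model's label set, 7 = car, 15 = person
```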
“In the right conditions, neural networks can already exceed the capacity of humans in discerning specific patterns,” said Christoph Peylo, global head of the Bosch Centre for Artificial Intelligence (BCAI). “What sets them apart is that they are capable of digesting high-dimensional data. Other processes, such as decision trees, can work well for some applications, but they can’t cope with too many attributes. If you think about the range of inputs on an autonomous car, you might have data from the camera, radar, lidar, the road conditions, the humidity… perhaps 10 high-dimensional sources. With so many attributes a neural network would make sense.”
The process of training a neural network for semantic segmentation involves feeding it numerous sets of training data with labels to identify key elements, such as cars or pedestrians. This data can be generated from simulations (providing they’re accurate enough) or captured from real-world footage.
The engineers at BCAI use a combination of the two, explained Peylo: “The system learns through specific examples, so you have to ensure that everything that’s potentially relevant can be trained. You can drive for perhaps millions of miles and not encounter a specific hazard, so you have to add those cases [artificially].”
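In code terms, that mixing of sources might look something like the sketch below. The loader names and file names are hypothetical; the point is simply that rare hazards enter the training set via simulation, alongside real footage.

```python
def load_real_samples():
    """Labelled frames captured from real driving (hypothetical loader)."""
    return [("frame_000001.png", "pedestrian"),
            ("frame_000002.png", "car")]

def simulate_rare_hazards():
    """Synthetic labelled frames for events rarely seen on real roads."""
    return [("sim_deer_on_motorway.png", "animal"),
            ("sim_mattress_in_lane.png", "debris")]

# Rare cases are added artificially, as Peylo describes
training_set = load_real_samples() + simulate_rare_hazards()
for image_file, label in training_set:
    print(image_file, "->", label)
```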
Accurately identifying objects is a major step towards predicting their behaviour. A car, for example, generally follows a different set of rules to a pedestrian. But in order to make decisions, the car also needs to be able to cope with situations and behaviours that fall outside the normal rules. What should it do if a broken-down vehicle is blocking the carriageway, for instance, or how should it merge into another stream of traffic if there are no clear road markings?
In theory, this is another prime candidate for the use of neural networks. They could be used to predict behaviour based on a sequence of events. It’s not inconceivable, for instance, that a neural network could be taught to recognise that a ball bouncing into the road could be followed by a child.
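One way such a predictor could be structured is sketched below: a small recurrent network reads a short sequence of scene observations and outputs a hazard probability. It is untrained and fed random stand-in data – an illustration of the shape of the idea, not a production design.

```python
import torch
import torch.nn as nn

class HazardPredictor(nn.Module):
    """Reads a sequence of observation vectors, outputs P(hazard follows)."""
    def __init__(self, feature_dim=16, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, seq):
        _, (h, _) = self.lstm(seq)               # final state summarises the sequence
        return torch.sigmoid(self.head(h[-1]))   # probability that a hazard follows

model = HazardPredictor()
events = torch.rand(1, 10, 16)   # ten observations, e.g. "ball enters road"
print(model(events))             # untrained, so roughly 0.5
```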
“The expertise that you need to drive a car cannot be fully described in an algorithm, but you can learn by experience. Machine learning allows computers to carry out the same process,” said Peylo.
Unfortunately, at the moment there’s a snag. “Neural networks are very powerful, but they are not yet fully understood,” he explained. “We see the results, but we cannot say exactly how the machine came up with the solution. Making it understandable and explainable is a very important challenge, particularly for applications that have to be verified and certified. Understanding how the neural network functions is a prerequisite for that, and it’s one of our major research topics at BCAI.”
For the time being, Bosch prefers to use probability-based models for high-level decision making. These look at the chances of a vehicle diverging from its anticipated behaviour (failing to stop for a red light, for example) and evaluate the potential risk. This technique is not as powerful or as flexible as a neural network, but it does have the key advantage that every decision can be tracked and understood.
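The appeal of that approach is easy to show. In the toy example below – every probability and severity figure is invented – the expected risk of each action is an explicit sum, so the final decision can be audited term by term.

```python
# Probability the other car fails to stop for its red light,
# estimated from its observed behaviour (invented figure)
p_runs_red = 0.05

# Severity scores for each (our action, their behaviour) combination
risk = {
    ("proceed", "stops"):    0.0,
    ("proceed", "runs_red"): 100.0,   # potential collision
    ("yield",   "stops"):    1.0,     # small cost: lost time
    ("yield",   "runs_red"): 1.0,
}

def expected_risk(action):
    return ((1 - p_runs_red) * risk[(action, "stops")]
            + p_runs_red * risk[(action, "runs_red")])

# Every term in the decision can be inspected, unlike a neural network
best = min(("proceed", "yield"), key=expected_risk)
print({a: expected_risk(a) for a in ("proceed", "yield")}, "->", best)
```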
In the meantime, machine learning is already employed for semantic segmentation in driver assistance systems such as autonomous emergency braking. It allows partially automated cars to carry out tasks that would be virtually impossible with traditional computing techniques, helping them to comprehend the abstract and unpredictable world of driving. And in the future it could hold the key to cars that truly think for themselves.
Smart trucks
Passenger cars may grab the headlines, but it’s arguably commercial vehicles that lead the way in the adoption of driver assistance systems. Since 2015, all new trucks over 8 tonnes sold in the EU have had to be fitted with autonomous emergency braking (AEB). Lane keeping functions and adaptive cruise control (ACC) are also widespread, putting commercial vehicles ahead of many passenger cars in this respect.
“We see a lot of potential for driver assistance systems in trucks,” said Bosch’s Emanuel Willman. “The commercial vehicle business is driven by total cost of ownership and reducing the number of accidents is a major part of that. With a highly automated truck, we think you could eliminate 90 per cent of the accidents that occur today.”
The next big thing is likely to be turn assist systems, which could dramatically reduce the number of accidents that occur between lorries and cyclists, he explained.
Looking ahead, several companies are already testing autonomous trucks and the technology required for partial autonomy on motorways is more or less production-ready. It’s closely related to that found on passenger cars, although the sensors have to be more robust (to cope with a potential million-mile lifespan). The software also has to be retrained to recognise objects from a very different angle, perched several metres further up.
Another function Bosch expects to see in the future is platooning. Here, a driver can alert others via the cloud that they are willing to lead a convoy. The other trucks can then ‘dock’ autonomously with the lead vehicle as they drive along. Willman said he foresees a system in which the vehicles behind would pay a contribution towards the lead driver’s costs in return for this service. Platooning could reduce the aerodynamic drag of the convoy as a whole, as well as providing time for the other drivers to rest or carry out admin.