Guest blog
Anthony Martin, HORIBA MIRA
HORIBA MIRA’s Anthony Martin, EMC Chief Engineer, considers how vehicle control systems built on machine learning can be verified and why ISO 26262 for functional safety and SAE J3061™ for cyber security hold the key.
With machine learning set to take the place of classical “if-then-else” rules for autonomous vehicle control, how will control system integrity be verified?
As more control passes from the human driver to machine controllers, simplistic ‘if-then-else’ rule-based algorithms are not dynamic enough to cope with the stochastic environment in which autonomous vehicles must safely and securely operate. Decision making will need to mimic the intelligence of the human brain, meaning that machine learning, a subset of Artificial Intelligence (AI), is currently the only viable option. But if the safe operation of autonomous vehicles will be in the hands of machines, how will we verify the integrity of their algorithms?
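To make the distinction concrete, the minimal sketch below contrasts a hand-coded ‘if-then-else’ rule with a parameter learned from example data. It is purely illustrative: the braking scenario, the time-to-collision threshold and the example data are all assumptions, not real vehicle logic.

```python
# Illustrative sketch only: a fixed rule versus a parameter learned from data.
# All numbers, names and the braking scenario are hypothetical.

def rule_based_brake(distance_m: float, speed_mps: float) -> bool:
    """Fixed rule: brake if the time-to-collision falls below 2 seconds."""
    return distance_m / max(speed_mps, 0.1) < 2.0

def learn_brake_threshold(examples):
    """Pick the time-to-collision threshold that best matches human decisions.

    `examples` is a list of (distance_m, speed_mps, human_braked) tuples.
    The 'learning' here is a simple search - a stand-in for the optimisation
    a real machine-learning model performs over far richer data.
    """
    best_threshold, best_score = 2.0, -1
    for candidate in [t / 10 for t in range(5, 60)]:   # 0.5 s .. 5.9 s
        score = sum(
            (dist / max(spd, 0.1) < candidate) == braked
            for dist, spd, braked in examples
        )
        if score > best_score:
            best_threshold, best_score = candidate, score
    return best_threshold

# Hypothetical labelled data: (distance, speed, did the human brake?)
data = [(40, 15, False), (20, 15, True), (10, 20, True), (60, 10, False)]
print(f"Learned threshold: {learn_brake_threshold(data):.1f} s")
print("Fixed rule says brake:", rule_based_brake(18, 14))
```

The fixed rule never changes; the learned threshold shifts as new examples arrive, which is precisely what makes verification harder.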
This issue is compounded by the fact that autonomous vehicles will be continually ‘learning’, or making adjustments to their algorithms, based on the myriad inputs from their local environment (via sensors), ‘lessons learnt’ from other autonomous vehicles (uploaded to the fleet to correct behavioural issues), and the many remote forms of information that will be made available, such as traffic and weather data.
Thought-provoking advertising tells us to imagine a world where work starts when you leave the house and you are free to read emails, write reports, or participate in a conference call. A world where everyone is mobile, whether they are young or old, and all journeys are safe. Where autonomous vehicles are an integral and convenient part of everyone’s life and congestion is a thing of the past.
I don’t doubt it; I fully embrace it, and hope that I see it in my lifetime. But as an engineer I have to look through the glossy benefits and get to the nuts and bolts of what is required to realise the change. This is a seismic shift in the direction of transport, the like of which has not been seen in the automotive industry since horses were replaced by cars in the early 1900s. It is such a fundamental shift that companies completely unrelated to automotive, let alone transport, are now posing a very real threat to the automotive giants of the past in their quest to revolutionise the future of mobility. This level of disruption was last observed with the advent of the World Wide Web, which also offered a hotbed of opportunity and has since enabled and accelerated Information Technology into the industry we know today.
As the electrification of vehicles has hinged upon battery technology, so the automation of vehicles hinges on machine learning. For the near future, rule-based algorithms may be adequate for semi-autonomous vehicles, and even for fully autonomous vehicles driven in tightly controlled environments. However, the need for machine learning is fast approaching as the industry pursues semi-bounded and unbounded autonomy. So what is the history of this key enabling technology?
In simple terms, machine learning is a subset of AI in which a framework algorithm is initially coded within a computer, and then, through learning, the computer can modify and optimise its algorithms itself. From its formal conception in the 1950s, machine learning took a steady but slow development path. It wasn’t until the 1990s, when machine learning shifted from knowledge-driven learning to data-driven learning, that its true potential was realised. Through a data-driven approach, computers are able to draw conclusions or ‘learn’. With the exception of a few breakthroughs, we then have to wait until 2011 for the fruits of huge research and development by major technology giants like Google, Tesla, Facebook, Amazon, Microsoft and IBM to see a significant increase in capability. In 2011 Google Brain used its deep neural network to learn to discover and categorise objects. In 2014 Facebook’s DeepFace demonstrated the ability to recognise and verify individuals in images to the same level as humans. In 2015 Google DeepMind’s AlphaGo showed an unprecedented level of ‘intelligence’ by beating a professional player at one of the world’s most complex board games, Go, five games out of five. More applicably, Tesla has implemented its Autopilot system in the Model S, and Google’s self-driving cars have been on the streets of a number of cities for some years now.
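The data-driven idea described above can be shown in a few lines: the engineer codes only the model framework and a learning rule, and the parameters are then adjusted automatically from example data. The model form, data values and learning rate below are illustrative assumptions, not a real vehicle algorithm.

```python
# Minimal sketch of data-driven learning: the *framework* (y = w*x + b) is
# hand-coded; the parameters w and b are learned from examples.

examples = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # (input, target)

w, b = 0.0, 0.0                  # parameters the machine will "learn"
learning_rate = 0.05

for _ in range(2000):            # repeated exposure to the data
    for x, target in examples:
        prediction = w * x + b
        error = prediction - target
        # Gradient-descent update: nudge parameters to reduce the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"Learned model: y = {w:.2f}*x + {b:.2f}")   # close to y = 2x + 1
```

Scale the same principle up to millions of parameters and continuous streams of driving data and you have, in essence, the learning systems now being fitted to vehicles.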
There have been significant strides in the development of the basic algorithms used in machine learning, and this, coupled with the amount of quality data now available, has revolutionised machine learning, especially in vehicles. Infra-red sensors, Light Detection and Ranging (LiDAR) systems, 360° vision systems, wireless connectivity and many more data sources all combine to provide machine learning algorithms with a wealth of rich information from which to learn, optimise and grow.
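As a rough sketch of how such mixed sensor readings might be combined into a single observation for a learning algorithm, consider the snippet below. The sensor names, fields and values are all hypothetical placeholders, not any particular vehicle architecture.

```python
# Hypothetical sketch: fusing mixed sensor data into one numeric feature vector.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    lidar_ranges_m: List[float]        # sparse sample of LiDAR returns
    infrared_temp_c: float             # infra-red sensor reading
    camera_objects: List[str]          # labels from the 360-degree vision system
    v2x_traffic_speed_kph: float       # wirelessly received traffic information

    def as_feature_vector(self) -> List[float]:
        """Flatten mixed sensor data into the numeric vector a model consumes."""
        return (
            self.lidar_ranges_m
            + [self.infrared_temp_c, self.v2x_traffic_speed_kph]
            + [float("pedestrian" in self.camera_objects)]
        )

obs = Observation(
    lidar_ranges_m=[12.4, 8.9, 30.0],
    infrared_temp_c=21.5,
    camera_objects=["pedestrian", "cyclist"],
    v2x_traffic_speed_kph=28.0,
)
print(obs.as_feature_vector())
```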
So the technology is close and we are on the brink of a revolution, but is the industry ready for such a leap? Can some semblance of order be imposed on the stochastic nature of machine learning for such a complex application as autonomous driving? Think of the number of variables in a global environment even if autonomy were bounded to cities only - pedestrians, cyclists, any movable object (bins, boxes, bags), weather, light conditions, building work, seasonal implications (leaves, high temperatures), etc. Each brings with it a number of issues, but together the combinations of issues are incomputable without simplification. The algorithms required for such an application are currently incomprehensible but, as Google’s self-driving cars are illustrating, they are potentially realisable with suitable machine learning and long-term data input through real-world driving. A more unbounded set of tests is required before the true complexity of the algorithms can start to be fully understood.
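A back-of-envelope calculation shows how quickly this scenario space grows. The factors and the number of states per factor below are invented purely for illustration.

```python
# Illustrative combinatorics: hypothetical environmental factors and state counts.
from math import prod

factor_states = {
    "pedestrian behaviour": 6,
    "cyclist behaviour": 5,
    "movable objects": 8,
    "weather": 7,
    "light conditions": 4,
    "roadworks": 3,
    "seasonal effects": 4,
}
scenarios = prod(factor_states.values())
print(f"{scenarios:,} combinations from just {len(factor_states)} factors")
# -> 80,640 combinations, before interactions, timing and continuous
#    variables are even considered.
```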
The issue facing this revolution, however, is product integrity. Safety, security and functionality all combine to give a measure of resilience. But can these aspects be measured, assessed and verified for such a complex system? Currently, ISO 26262 for functional safety and SAE J3061™ for cyber security offer the best chance of achieving the high levels of confidence required to engineer vehicles that are safer and more secure. Whilst changes are being implemented to tackle the issues surrounding autonomy, and significant work is still required to align the standards, even ISO 26262 Edition 2, scheduled for release in 2018, is unlikely to fully cover the requirements for autonomous vehicles. This is a reflection of the complexity of verifying the safe, secure operation of autonomous vehicles rather than any inadequacy in the standards generation process.
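To give a flavour of the risk-driven classification at the heart of ISO 26262: a hazardous event is rated for Severity (S1-S3), Exposure (E1-E4) and Controllability (C1-C3), and those ratings map to an Automotive Safety Integrity Level (QM, A-D) which drives the rigour of the engineering requirements. The sketch below uses the widely quoted additive shorthand for the standard’s risk table; it is illustrative only - neither a substitute for the standard itself nor a description of any company’s specific process.

```python
# Simplified ASIL determination in the spirit of ISO 26262 (additive shorthand
# for the standard's S/E/C risk table). Illustrative only.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Map S (1-3), E (1-4) and C (1-3) ratings to an ASIL."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("Ratings outside the ranges defined by ISO 26262")
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Example: a highly severe, frequently encountered, hard-to-control hazard.
print(asil(severity=3, exposure=4, controllability=3))   # -> ASIL D
```

The challenge for autonomy is that the hazardous events themselves, and the vehicle’s response to them, emerge from continually learning algorithms rather than from fixed, reviewable logic.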
It is the Engineering Processes within these standards, defining rigorous recommendations and regulations - throughout the product lifecycle from concept to decommissioning - that must be built upon to fully realise resilience for autonomous systems. In this way HORIBA MIRA are providing a risk-driven approach for determining the requirements needed to achieve an acceptable level of safety, security and functionality and the fundamental processes required to verify that those levels have been achieved.
By assisting with the generation of key standards, ensuring that they underpin our engineering and test services, and investing in targeted research and development, HORIBA MIRA will continue to ensure that its customers deliver resilient systems and services to market that are safer, more secure and functional.
For more information on HORIBA MIRA, please visit www.horiba-mira.com