Artificial intelligence (AI) is emerging as one of the key trends in modern engineering. In the automotive industry, AI in its various forms is already being used for applications ranging from the use of machine learning for object detection in automated driving systems through to battery life predictions in electric vehicles. Even infotainment systems are starting to adopt generative AI technology.
The power of AI comes from the unique way it operates. For example, instead of using fixed models based on physics or classical statistics, machine learning uses vast sets of training data to teach an algorithm to spot patterns. Once trained, it can spot these patterns with uncanny speed and accuracy, whether the task is unpicking human speech for voice control applications or predicting where an oncoming vehicle might go next. As a result, AI can accomplish tasks that simply wouldn’t be feasible with the time or computing power available to traditional methods.
But along with the huge opportunities that come with AI, there are a number of risks, including new pathways for potential attackers. AI doesn’t necessarily follow traditional logic, so it can be hard to detect if the system has been manipulated. Plus, the sheer complexity of AI-based applications such as automated driving systems means that there are a huge number of different variables to consider, increasing the potential for failures or vulnerabilities.
With varying degrees of automation and driver assistance now seen on virtually every new car, attackers no longer need direct access to a vehicle to influence its operation. Researchers have demonstrated various methods of ‘spoofing’ that intentionally mislead the vehicle’s sensors. Examples include stickers placed on road signs that change the way they are read and interpreted by image recognition systems while going unnoticed by the human eye, or the injection of false radar, sonar or Lidar signals to manipulate object detection.
This is an area where the intersection of cybersecurity and functional safety is particularly important. Cybersecurity engineers generally work to make the vehicle robust and resilient against intentional attack, whereas functional safety is concerned with the risks of systems failing by chance.
In some cases, though, a snow-covered lens or a faulty sensor could produce much the same effect as a spoofing attack. As such, a lot of work goes into validating sensor inputs and providing redundancy. Vehicles tend to use a fusion of multiple sensors, for instance combining Lidar and camera inputs to build a more complete and more reliable picture of the external environment. This use of redundant and diverse sensor types also makes it harder for an attacker to successfully spoof the signals.
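To illustrate the principle, the sketch below shows one way a camera detection might be cross-checked against Lidar returns before being acted upon. It is a deliberately simplified Python example; the data structures, thresholds and function names are assumptions made for illustration, not a description of any production system.

```python
# Illustrative sketch only: gate camera detections on corroborating Lidar returns.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float    # range to the object reported by the camera pipeline
    bearing_deg: float   # bearing relative to the vehicle heading
    confidence: float    # classifier confidence, 0..1

def lidar_supports(det, lidar_points, range_tol_m=2.0, bearing_tol_deg=3.0):
    """Return True if at least one Lidar return lies close to the camera detection."""
    return any(
        abs(r - det.distance_m) <= range_tol_m and abs(b - det.bearing_deg) <= bearing_tol_deg
        for r, b in lidar_points
    )

def validated_detections(camera_detections, lidar_points):
    # Only trust detections corroborated by a second, diverse sensor: a spoofed
    # camera input (or a snow-covered lens) alone is not enough to act on.
    return [d for d in camera_detections
            if d.confidence > 0.5 and lidar_supports(d, lidar_points)]
```

A real system would perform this association probabilistically and over time, but even a simple gate of this kind shows why diverse, redundant sensing raises the bar for an attacker.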
The cybersecurity challenge, which HORIBA MIRA’s consultancy services help OEMs and suppliers to overcome, extends far beyond the vehicles themselves. There have been several high-profile cases of intellectual property theft in the automotive industry in recent years, and the huge value attached to AI algorithms could make them a particularly lucrative target. If a third party can access the system - either through a vehicle or by other means, such as an unattended laptop - then they could potentially copy the algorithm for their own use or manipulate it as an act of sabotage.
Due to the unique way AI systems ‘learn’, attackers could manipulate an AI system without needing access to the algorithm itself. These systems are only as reliable as the training data used to teach them, so removing data or inserting false information into the training data (known as data poisoning) could have a profound impact.
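As a rough illustration of the idea, the Python sketch below flips the labels of a small fraction of a stand-in training set and measures the effect on a simple classifier; the dataset and model are placeholders chosen for brevity, not a representation of an automotive perception pipeline.

```python
# Illustrative sketch only: label flipping as a simple form of data poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training examples."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.05, 0.20):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned fraction {fraction:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```

Real poisoning attacks tend to be subtler than random label flipping, targeting specific classes or inputs so that the degradation is harder to notice.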
Imagine a trivial example: swapping images of red and green traffic lights in the training process for an automated driving system. Such a fundamental flaw would soon be identified in vehicle testing, but the complexity of automated driving systems could make more subtle changes harder to spot. There’s also the danger of so-called supply chain attacks, where malicious code is inserted into legitimate software further down the supply chain, for example in control modules from third party suppliers. Such attacks are very difficult to detect at the vehicle level, highlighting the importance of effective supply chain risk management.
Part of the challenge is that securing a system is potentially an uneven contest. Development teams have a finite amount of time and resources available before the vehicle will need to be signed off for production. In contrast, attackers have the entire service life of the vehicle to attempt to attack it, and if just one of them gets through on one occasion they could do a huge amount of damage. AI may also be harnessed by attackers to increase their chances of gaining access to the system, to disguise the presence of malicious code or to adapt to changing security measures.
Fortunately, AI is also giving us new tools to tackle this asymmetry. Technologies such as machine learning are extremely adept at spotting anomalous behaviour that may be the first warning sign that a system has been tampered with. AI’s ability to sift efficiently through huge datasets also makes it a powerful tool for monitoring published information about new attacks. For example, we can use techniques like natural language processing to extract intelligence from unstructured information, filter out irrelevant noise and highlight potential threats.
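As a minimal sketch of the anomaly-detection side of this, the example below trains an unsupervised detector on stand-in features describing normal in-vehicle message traffic and flags windows that deviate from it; the features and values are assumptions for illustration, and a production monitoring system would draw on far richer telemetry.

```python
# Illustrative sketch only: flagging unusual in-vehicle message behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Stand-in features per time window: message rate and mean payload entropy.
normal_traffic = rng.normal(loc=[100.0, 4.0], scale=[5.0, 0.2], size=(1000, 2))
detector = IsolationForest(random_state=1).fit(normal_traffic)

new_windows = np.array([
    [101.0, 4.1],   # consistent with normal traffic
    [480.0, 7.5],   # flooded bus with high-entropy payloads: possible tampering
])
print(detector.predict(new_windows))  # 1 = inlier (normal), -1 = anomaly
```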
Given the inherent opportunities and risks, there remain a lot of unanswered questions around the regulation of AI, but progress is being made. With the introduction of UN Regulation 155, cybersecurity is already part of the Vehicle Type Approval process in markets like the EU, Japan and South Korea. It’s possible that an AI element could be added to these automotive-specific approval processes in the future or it may be that the industry can follow the general-purpose AI frameworks that are starting to emerge, such as the EU AI Act or the US National Artificial Intelligence Initiative Act.
Whatever shape future regulations may take, AI will have a major role to play in the automotive industry and beyond. Organisations and individual engineers will need to be mindful of the potential risks introduced by this new technology. But harnessed securely, it will open up a whole range of exciting new possibilities to enhance future vehicle resilience.
Paul Wooderson is chief engineer for cybersecurity at HORIBA MIRA