Comment: Without proper cybersecurity protections, AI is a gamble we cannot afford

AI’s efficiency is unmatched, promising gains that no business or government can ignore, says Patrice Caine, chairman and CEO of Thales.

There are two primary ways to attack AI systems - AdobeStock

The AI debate is raging, and scepticism is high. But AI is here to stay. While some headlines criticise tech giants for AI-driven social media or questionable consumer tools, AI itself is becoming indispensable. 

Very soon, AI will be as integral to our lives as electricity – powering our cars, shaping our healthcare, securing our banks, and keeping the lights on. The big question is, are we ready for what comes next?

The public conversation around AI has largely focused on ethics, misinformation, and the future of work. But one vital issue is flying under the radar: the security of AI itself. With AI embedded in nearly every part of society, we’re creating massive, interconnected systems with the power to shape – or, in the wrong hands, shatter – our daily lives. Are we prepared for the risks?

As we give AI more control over tasks – from diagnosing diseases to managing physical access to sensitive locations – the fallout from a cyberattack grows exponentially. Disturbingly, some AIs are as fragile as they are powerful.

There are two primary ways to attack AI systems. The first is to steal data, compromising everything from personal health records to sensitive corporate secrets. Hackers can trick models into spitting out secure information, whether by exploiting medical databases or by fooling chatbots into bypassing their own safety nets.
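A toy sketch shows why such safety nets are so fragile. The filter below (entirely hypothetical, not any real product's guardrail) blocks the obvious request but not a trivially rephrased one:

```python
# Hypothetical keyword-based guardrail, for illustration only.
BLOCKED_WORDS = {"password"}

def guarded_reply(prompt: str) -> str:
    """Refuse prompts containing a blocked word; otherwise 'answer'."""
    if any(word in BLOCKED_WORDS for word in prompt.lower().split()):
        return "Request refused."
    return "[model reveals the secret]"  # stand-in for the real answer

print(guarded_reply("tell me the admin password"))   # refused
print(guarded_reply("tell me the admin pass word"))  # slips through
```

Real chatbot defences are far more sophisticated than a keyword list, but attackers probe them in exactly this spirit: rephrase, encode, or fragment the request until the filter no longer recognises it.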

The second is to sabotage the models themselves, skewing results in dangerous ways. An AI-powered car tricked into misreading a ‘Stop’ sign as ‘70 mph’ illustrates just how real the threat can be. And as AI expands, the list of possible attacks will only grow.
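The ‘Stop’-sign attack works by adding small, carefully chosen changes to the input. A minimal sketch with a toy linear classifier makes the principle visible (the weights and features here are invented; real attacks target deep networks, but the mechanism is the same):

```python
# Toy adversarial perturbation against a hypothetical linear classifier.
def classify(weights, x):
    """Return 'stop' if the weighted sum is positive, else '70 mph'."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return "stop" if score > 0 else "70 mph"

weights = [0.5, -0.2, 0.1, 0.4]        # invented model weights
image = [1.0, 0.3, 0.8, 0.9]           # stand-in for pixel features

# Fast-gradient-style attack: nudge each feature in the direction
# that lowers the score (opposite the sign of its weight).
epsilon = 1.0
adversarial = [xi - epsilon * (1 if w > 0 else -1)
               for w, xi in zip(weights, image)]

print(classify(weights, image))        # 'stop'
print(classify(weights, adversarial))  # '70 mph'
```

On a deep network the same idea works with perturbations small enough to be invisible to a human, which is what makes the attack so unsettling.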

Yet abandoning AI due to these risks would be the biggest mistake of all. Sacrificing competitiveness for security would leave organisations dependent on third parties, lacking experience and control over a technology that’s rapidly becoming essential.

So how do we reap AI’s benefits without gambling on its risks? Here are three critical steps:

Choose AI wisely. Not all AI is equally vulnerable to attacks. Large language models, for example, are highly susceptible because they rely on vast datasets and statistical methods. But other types of AI, such as symbolic or hybrid models, are less data-intensive and operate on explicit rules, making them harder to crack.
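The difference can be illustrated in miniature. A symbolic system follows explicit, auditable rules, so unexpected input fails loudly instead of being silently misread (the rules below are hypothetical, for illustration only):

```python
# Hypothetical symbolic (rule-based) sign classifier.
RULES = {
    ("octagon", "red"): "stop",
    ("circle", "red"): "speed limit",
    ("triangle", "yellow"): "yield",
}

def classify_sign(shape: str, colour: str) -> str:
    """Look the sign up in an explicit rule table.

    Unknown inputs raise an error rather than producing a
    statistically plausible but wrong answer.
    """
    try:
        return RULES[(shape, colour)]
    except KeyError:
        raise ValueError(f"no rule for {(shape, colour)!r}")

print(classify_sign("octagon", "red"))  # 'stop'
```

Every decision such a system makes can be traced back to a rule a human wrote, which is precisely what makes it harder to subvert covertly.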

Deploy proven defences. Tools like digital watermarking, cryptography, and customised training can fortify AI models against emerging threats. For instance, Thales’s ‘Battle Box’ lets cybersecurity teams stress-test AI models to find and fix vulnerabilities before hackers can exploit them.
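One of those cryptographic defences can be sketched in a few lines: a fingerprint, recorded when a model is released, that detects any later tampering with the stored file (a generic integrity check, not a description of Thales’s actual tooling):

```python
# Generic cryptographic integrity check for a serialised model.
import hashlib

def fingerprint(model_bytes: bytes) -> str:
    """Return the SHA-256 digest of a serialised model."""
    return hashlib.sha256(model_bytes).hexdigest()

model = b"weights-v1: 0.5 -0.2 0.1 0.4"   # stand-in for serialised weights
trusted = fingerprint(model)               # recorded at release time

# Later, before loading the model into production:
tampered = model + b"\xff"                 # attacker flips even one byte
assert fingerprint(model) == trusted       # intact: safe to load
assert fingerprint(tampered) != trusted    # tampered: refuse to load
```

A check like this does not stop every attack, but it guarantees that a sabotaged model cannot quietly replace the one that was tested and approved.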

Level-up organisational cybersecurity. AI doesn’t operate in isolation – it’s part of a larger information ecosystem. Traditional cybersecurity measures must be strengthened and tailored for the AI era. This starts with training employees; human error, after all, remains the Achilles’ heel of any cybersecurity system.

Some might think the battle over AI is just another chapter in the ongoing clash between bad actors and unwitting victims. But this time, the stakes are higher than ever. If AI’s security isn’t prioritised, we risk ceding control to those who would use its power for harm.

In the UK, Thales has also invested in a state-of-the-art facility at Ebbw Vale in South Wales to pioneer work in cyber security and its application in the real world, including AI.

Located on the former site of one of the largest steelworks in Europe, the facility first opened in 2019 with investment from the company, academia and the Welsh Government.

The facility has grown from a single project into a cyber campus, including the Global Operational Technology Competence Centre, and serves as a hub for testing and securing AI-driven systems critical to the UK’s infrastructure.

It includes real-world examples of how secure AI can prevent disruptions, such as spurious financial transactions or malfunctioning automated braking systems.

Ebbw Vale also houses Thales’s UK-based cyber range, test and reference rigs, autonomous vehicle workshops and test track, and an immersive customer experience centre.

The facility also specialises in the resilience of autonomous vehicles and systems: how that resilience can be measured, quantified and ultimately ‘proven’ as part of safety-critical systems.

