The AI Opportunities Action Plan is certainly wide-reaching, with recommendations focusing on much-needed areas in AI such as skills, diversity and data usage through the National Data Centre. AI is a broad subject, and the plan does justice to the breadth and depth of the challenges we face now and in the very near future.
The pipeline for UK skills in emerging tech is lacking, so it’s promising to see a strong focus on this within the plan. Employers are telling us that industry lacks the skills to take advantage of AI. There is also concern about implementing AI without sufficient information or confidence in using it – IET research has previously shown that nearly 30% of respondents surveyed felt this way, with over a quarter saying they wished there was more information on how it works and how to use it (source: Artificial Intelligence behind 3 times more daily tasks than we think, IET, 2023).
When considering AI for government use, it’s imperative that there are strong data foundations, competency and full transparency, as this will enhance national public trust and uptake of the technology.
Critical areas missing
However, despite its extensive recommendations, the report does not place enough emphasis on several critical areas.
Firstly, sustainability. Although the independent report by Matt Clifford CBE gives a nod to the need for increased ‘mitigation’ regarding sustainability, it does not go far enough. For too long, the environmental impacts of AI have been overlooked for the ‘greater good’. We are looking at this the wrong way round – the UK has a unique opportunity here to be a global leader in sustainable innovation.
There are clear steps that can be taken. For example, new data centres in the UK – which house AI servers – could be approved against bronze, silver and gold standards based on a sustainability rating. This would be in addition to the extensive range of measures data centres can take to minimise energy waste in the cooling process, many of which are already in practice.
Yet, fundamentally, the biggest waste comes from using AI when it is not needed. We need to change the way we look at sustainable technology and encourage ways to reduce its carbon footprint – for example, re-using existing AI models instead of starting from scratch. If this sort of approach were standard practice, the savings would be immense. The government’s response to this recommendation clearly focuses on the energy consumption of AI, but not on its sustainability as a technology.
Responsible data and safety
Secondly, the responsible handover of AI. Too much information is lost between a product being developed and being used. That information is critical to safe use and to understanding what the AI can do – and, more importantly, its limitations. For those who haven’t seen it, I recommend the Responsible Handover of AI framework by Sense about Science.
AI, in all its forms, is not infallible and must be applied diligently and appropriately. Much like the data and digital transformation it experienced in the 2010s, the government must now use that learning to ensure a smooth and just transition to AI and its services. To do this, AI solutions must be aligned with, and influenced by, software engineering, software architecture, management, governance, technology operations, and service delivery/service management. There really is no underestimating the power of getting the fundamentals right, especially for data.
There is not enough detail about safety, which is pivotal to future success. The definition of safety must be expanded: AI safety and the assessment of risk must go beyond physical harm to cover financial, societal and reputational harm, and risks to mental health, among others. Although AI has huge potential and a range of applications, it is not suitable for every task and every challenge. Assessing AI’s potential to ‘Scan, Pilot, Scale’ relies heavily on the scan phase to ensure it is developed in the right way for the right reason. Increasing the standards available for reporting on the efficacy of new products would help regulators ensure they are fit for purpose, and cultivate an innovative and thriving AI environment.
Striving to be the best
Finally, the UK should strive to operate above industry standards. The professional standards published by ISO consider all aspects of AI use and should be applied where appropriate. The government should treat AI as a digital asset and follow asset-management standards.
It is critical that the appropriate legal and regulatory structures are in place to allow AI’s safe development and use without stifling innovation. There needs to be greater transparency around the training and operation of AI systems. This is especially relevant for publicly accessible LLMs, such as ChatGPT, which are trained in part on user data. The government should establish firm rules on which data can and cannot be used to train AI systems – and ensure this is unbiased as part of the new data centres outlined in the manifesto pledges.
The new AI report certainly takes a few steps in the right direction – and if the government demonstrates successful use of AI, it will encourage other sectors to adopt it.
Dr Graham Herries is Chair of the Institution of Engineering and Technology’s (IET) Digital Futures Policy Centre