Comment: Why the UK needs environmentally sustainable AI

There is no choosing between AI opportunities and environmentally sustainable AI. To access the former, we need the latter, writes Professor Tom Rodden, Chair of the NEPC working group on responsible AI.


The massive potential of AI is widely recognised, and reaping the benefits of AI as quickly as possible is seen as a critical imperative by many governments. At this year’s AI Action Summit in Paris, a key debate was how to drive AI innovation to deliver benefits while still addressing potential risks.


Something of a consensus seems to have emerged from the summit – now is not the time for new regulations. Speaking at the event, one European Commissioner stated “we will cut red tape and the administrative burden from our industries.”

Comments like these have been heralded by some as a sign that a cutting of ‘red tape’ is afoot – one that will unleash AI innovation and enable greater levels of productivity. In the UK too, there are signs that an increasingly lightweight approach to regulation is on the cards to promote innovation and drive growth.

While it is presently uncertain what a ‘cutting of red tape’ would look like in the UK, what is certain is that if ‘cutting red tape’ means de-emphasising environmental sustainability, it will expose the UK to a host of risks – and block access to several opportunities.

The case for environmental regulation is laid out in a report from the National Engineering Policy Centre that I recently published alongside a working group of experts. In that report we outline how environmental regulation can make the environmental impacts of AI more visible while reducing environmental risks in the short term. The right regulation can also, as the report lays out, create a niche that UK-based AI companies can step into.

AI uses huge quantities of energy, water, and critical materials, but we don’t actually know precisely how much it’s using. Though every data centre is outfitted with a metering system that records energy and water use – not too dissimilar from the smart meters in people’s homes – operators are not obligated to make this data available to policymakers.

As a result, those who want to understand how much resource AI is using now, let alone predict how much it might use in the future, are forced to play a guessing game. This includes ministers, civil servants, and local authorities – who are currently grappling with the challenge of actioning the AI Opportunities Action Plan’s commitment to fast-track development of data centres.

Without regulation to mandate the sharing of environmental data we don’t know, for instance, how newly proposed AI Growth Zones will affect the availability of water or energy in the regions where they’re located. We do know, however, that concerns over water availability have already been raised in Oxfordshire – where the first of these Growth Zones is set to be located, and where residents were forced to rely on bottled water due to shortages less than three years ago.

If we want to be able to plan effectively for long-term prosperity – and ensure that the moves we make in the next few years do not put us on an unsustainable path – we need to better understand AI’s resource demands. Doing so means we can ensure that the build-out of data centres does not negatively impact the lives of the people who live near them – or the businesses that increasingly need access to renewable energy to sustainably grow.

Regulation, however, is not just about estimating future costs and reducing risk. Smart regulation also means we can steer development to suit the UK’s needs. Up to now, advances in AI systems and services have largely been driven by a race for size and scale and have demanded increasing amounts of computational power (or ‘compute’). The average amount of compute used to train popular AI models has increased 4.4 times a year since 2010, for instance. While this increasing demand for compute power has, to a certain extent, been offset by efficiency gains – those efficiency gains have themselves been offset by increasing demand for AI.

This trend of development does not suit the UK well. The resource-intensive development that has come to characterise the products produced by companies like OpenAI favours ‘big tech’ – and disadvantages smaller companies that cannot afford the up-front costs. While there are no ‘big tech’ firms in the UK, there are a number of smaller firms that would be advantaged by regulations incentivising less resource-intensive development. The UK is also a leader in AI assurance tech (i.e., tools for compliance), and government has noted the UK’s strength in this area. If regulators work with assurers appropriately, assurance tools can streamline compliance with new environmental regulation – minimising the burden to industry while creating new opportunities for UK-based firms.

AI, in the right circumstances, could deliver significant benefit to businesses and people in the UK – but there are risks. As we look to access the opportunity AI affords, we need to ensure those risks do not come to cancel out the gains we make. Environmental regulation, if done right, is one way to do just that – and access the AI opportunity to its fullest extent.

Professor Tom Rodden CBE FREng FRS FBCS is Pro-Vice-Chancellor of Research & Knowledge Exchange and Professor of Computing at the University of Nottingham and Chair of the NEPC working group on responsible AI.