It’s a truism of life in the West that more and more of our lives are bound up with the digital world. We generate a storm of data throughout the day, whether we want to or not: when we shop, when we use our bank accounts, when we use our phones. Our salaries are logged in our employers’ data systems, as are our tax records. Social networking adds to the pile, as do online photo storage and other internet-based activities. And the amount of data we generate personally is dwarfed by the volumes generated by government, industry and commerce.
All this data has to be stored, and that is giving rise to a new form of building characteristic of the early 21st century: the data centre. Sharing some of the form and character of age-old strongrooms and more modern hardened bunkers, these are the locations that keep the data vital to our lifestyles, and to the fortunes of government and industry, safe. But they have also created a set of problems for civil engineers. The most vital thing a data centre has to do is keep its ranks of computer servers running, and for that it needs two things: power and cooling.
The two are connected: it’s the power consumed by the chips in the computers that generates the heat, and that heat has to be removed to keep the servers operating; as computers get faster, their chips consume more energy and produce more heat. Both the energy consumption and the heat are of concern. Consulting firm McKinsey has predicted that data centres could outstrip airlines as CO2 emitters as early as 2020 and, with all energy consumers now under pressure to do their bit to cut carbon emissions, data centre operators are looking for ways to reduce their power requirements. For computer manufacturers, this means exploring low-energy data storage systems. For data centre designers and operators, it means finding ways to cut the bills for the air conditioning and refrigeration needed to keep their data chilly.
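The link between the two is simple: in effect, every watt the IT equipment draws ends up as heat that the cooling plant must carry away. A minimal sketch of that relationship, using hypothetical figures rather than data from any facility mentioned here:

```python
# Illustrative only: hypothetical figures, not data from any facility in this article.
IT_LOAD_KW = 1_000        # electrical power drawn by the servers (kW), assumed constant
HOURS_PER_YEAR = 8_760

# Virtually all of the electrical power drawn by the IT equipment is
# converted to heat, so the cooling plant must reject roughly the same load.
heat_load_kw = IT_LOAD_KW

# Annual thermal energy to be removed, assuming the load runs flat out all year.
annual_heat_kwh = heat_load_kw * HOURS_PER_YEAR
print(f"Heat to reject: {heat_load_kw} kW, or {annual_heat_kwh:,} kWh a year")
```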
One way of cutting those bills that has generated headlines recently is to build your data centre somewhere cold, and that’s the solution social networking giant Facebook has opted for. The company has announced that it is building its first data centre outside the US at Lulea in northern Sweden, 60 miles south of the Arctic Circle.
Serving customers in Europe, Africa and the Middle East, the centre will use two resources that Lulea has in abundance: cold air and hydroelectric energy. The site’s servers will draw some 120MW of electricity, supplied by the hydropower schemes on the Lule River. The facility will be connected to several substations, and this redundancy reduces the need for diesel-powered back-up generators by 70 per cent, although it will still have 40MW of back-up generation capacity.
“Being able to use air cooling for a significant portion of the year is a big advantage”
STEFAN SADOKIERSKI, ARUP
Sub-zero temperatures for much of the year will allow the centre to cool its servers with outside air for 97 per cent of the year. The particular design of Facebook’s servers, which are taller than average, also lets the company use larger fans running more slowly to circulate the cold air, cutting power consumption by 10 per cent compared with conventional systems.
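The logic behind bigger, slower fans follows from the fan affinity laws, under which a fan’s power draw rises steeply with speed. The sketch below illustrates the idealised scaling only; it ignores the fixed system pressure a real fan must overcome, so it overstates the saving, and none of the figures are Facebook’s.

```python
# Idealised fan affinity-law scaling: a sketch, not Facebook's design data.
# For geometrically similar fans: airflow ~ N * D^3 and power ~ N^3 * D^5,
# where N is rotational speed and D is impeller diameter.

def power_ratio_same_airflow(diameter_ratio: float) -> float:
    """Power ratio of a scaled-up fan delivering the SAME airflow.

    Holding airflow constant forces the speed ratio to 1 / diameter_ratio**3,
    so power scales as (1/D^3)^3 * D^5 = D^-4. This ignores the fixed system
    pressure the fan works against, so it overstates the real-world saving.
    """
    speed_ratio = 1.0 / diameter_ratio ** 3
    return speed_ratio ** 3 * diameter_ratio ** 5

# A fan 10 per cent larger in diameter, slowed down to move the same air:
print(f"Ideal power ratio: {power_ratio_same_airflow(1.10):.2f}")  # ~0.68
```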
Facebook isn’t the first internet giant to turn to the frozen North for free cooling. In 2009, Google bought a paper mill in Finland, at Hamina, north east of Helsinki, and converted it into a data centre. ’Our team was anxious to use the opportunities of it being right on the Gulf of Finland to come up with an innovative and very efficient cooling system,’ said Joe Kava, Google’s director for operations.
Seawater from the gulf is pumped into the centre through a tunnel in the granite bedrock that was originally built for the paper mill, run through heat exchangers to remove the heat from the servers and mixed with more seawater to reduce its temperature before being returned to the gulf. ’That ensures that it’s closer in temperature to the inlet water, which minimises the impact we have on the environment,’ said Kava.
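Tempering the discharge is a straightforward mixing calculation: the temperature of the returned stream is the flow-weighted average of the warm cooling water and the extra seawater blended into it. A simple sketch with invented temperatures and flow rates, not Google’s figures:

```python
# Flow-weighted mixing temperature: illustrative figures only, not Google's data.
def mixing_temperature(flow_warm: float, t_warm: float,
                       flow_cold: float, t_cold: float) -> float:
    """Temperature of the combined stream, assuming equal heat capacities."""
    return (flow_warm * t_warm + flow_cold * t_cold) / (flow_warm + flow_cold)

# e.g. 1 m^3/s of 18 C return water blended with 2 m^3/s of 6 C seawater
print(f"Discharge temperature: {mixing_temperature(1.0, 18.0, 2.0, 6.0):.1f} C")  # 10.0 C
```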
According to Stefan Sadokierski, a mechanical engineer and heat-dissipation specialist at Arup, siting data centres in cold climates is a growing trend. ’Being able to use air cooling for a significant portion of the year is an advantage; anything that means that you don’t have to use refrigeration keeps your costs and power consumption down considerably,’ he said. ’We’re seeing some countries in northern Europe making a definite effort to attract large data centres: Iceland and Ireland are other examples. Both of them have cold climates — although Iceland’s is colder — but, just as importantly, both of them have access to renewable energy. Iceland is mostly run on geothermal power and Ireland has a lot of wind power. When you’re consuming a lot of energy, access to low-emission sources is very useful.’
Some good news for Iceland and Ireland came this week with the announcement of plans to build a fast transatlantic cable linking London and New York that would go via both countries. The cable, planned by a company called Emerald Networks, would be able to carry internet traffic from London to New York and back in 62 milliseconds, making it the fastest of 10 cable links between the US and Europe; it would also be the most northerly, with most of the other transatlantic cabling making its UK landfall in Cornwall.
However, as Sadokierski pointed out, not every operator has the option to build an Arctic base. Financial companies, for example, are constrained by the legislation governing their industry as to where they can site their data centres. ’The FSA [Financial Services Authority] says you have to back up your data a certain minimum distance from your headquarters, and financial companies have to replicate their data within a very short period of time. So you tend to find a ring of data centres surrounding major financial centres such as London, Frankfurt and Paris,’ he said. ’In reality, it isn’t much of a problem; the average temperatures in most of the northern half of western Europe are low enough that you still don’t need refrigeration. It’s when you get out to hotter locations, such as the Middle East, that you start getting problems.’
Data centre design issues mainly concern the efficient placement of server units, to ensure that power and cooling can be accessed easily, Sadokierski said. ’Once upon a time, you’d just put a load of computers in a room, turn on the air conditioning, mix the air up and remove the heat that way,’ he explained. ’That worked reasonably well but, when the heat loads went up, you might have one computer blowing hot air into the inlet of another one, and that’s no good. That’s when we started stacking the servers in cabinets and arranging those in aisles, front to front and back to back, and supplying cold air in the aisles where it was needed. That makes things more efficient.’
In the last few years, data centres have been partitioned to wall off the hot aisles and cool those specifically in a closed loop. ’That stops the heat from getting out altogether and allows you to have higher loads,’ Sadokierski said. Concern about the environmental profile of data centres led to the formation in 2007 of the Green Grid, an organisation to promote good environmental practice among data centre builders, designers and operators. Originally formed by IT companies, it now numbers civil engineers, electronics specialists, data centre contractors and data-intensive organisations among its members.
’Originally, we looked at the facilities management side of things — cooling, mostly — but this year we’ve expanded our remit to look at IT as well,’ said Harkeeret Singh, chair of the Green Grid’s EMEA (Europe, Middle East and Africa) technical work group. ’We’re also looking more deeply at environmental sustainability, such as the issues of electronics waste and recycling.’
The crucial measurement that determines a data centre’s energy efficiency is power usage effectiveness (PUE), which is the ratio of the total power load of the facility to the IT equipment power load. This measurement, Singh said, was pioneered by Green Grid members and is now accepted among data centre operators. ’If you look at data centres built five or six years ago, they were operating at a PUE of 2.0 to 2.4,’ he said. ’Look at them today and it’s not difficult — in fact common practice — to build a data centre with a PUE of 1.5.’
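In practice the figure comes straight from metered loads: divide the total power entering the facility by the power delivered to the IT equipment. A quick illustration with hypothetical meter readings:

```python
# PUE = total facility power / IT equipment power. Hypothetical meter readings.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

print(pue(2_000, 1_000))  # 2.0 -- typical of centres built five or six years ago
print(pue(1_500, 1_000))  # 1.5 -- now common practice, according to the Green Grid
print(pue(1_100, 1_000))  # 1.1 -- the figure quoted for the PEER1 centre below
```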
Recently, there has been a focus on the efficiency of data centres when they aren’t operating at full capacity. ’You really only operate at 100 per cent for a fairly short time,’ Singh said. ’So our members are looking at efficiencies at 25 per cent, 30 per cent and so on, and we’ve seen a lot of improvements at those scales.’
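The reason part-load performance matters is that a slice of the overhead, such as UPS losses and the baseline power of fans and pumps, is broadly fixed, so it looms larger as the IT load falls. A hedged sketch of that effect, with invented overhead figures:

```python
# Why PUE degrades at partial load: a sketch with invented overhead figures.
def pue_at_utilisation(utilisation: float,
                       it_capacity_kw: float = 1_000,
                       fixed_overhead_kw: float = 200,
                       variable_overhead_fraction: float = 0.3) -> float:
    """PUE when the IT load is `utilisation` * capacity.

    Fixed overhead (UPS losses, base fan and pump power) does not shrink with
    load; variable overhead (much of the cooling) is taken as proportional to it.
    """
    it_load = utilisation * it_capacity_kw
    total = it_load + fixed_overhead_kw + variable_overhead_fraction * it_load
    return total / it_load

for u in (1.0, 0.5, 0.25):
    print(f"{u:>4.0%} load -> PUE {pue_at_utilisation(u):.2f}")
# 100% load -> PUE 1.50
#  50% load -> PUE 1.70
#  25% load -> PUE 2.10
```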
In terms of technology, Singh believes there’s a lot of room for optimising current systems. ’New technology tends to focus on using less refrigeration,’ he said. ’We want to see more innovation, such as reducing the amount of mechanical cooling — we have a technology roadmap that calls on our members to reduce the amount of time in the year they use mechanical cooling. We’re also interested in the liquid cooling of chips and similar technological solutions.’
One interesting question about data centres is whether they are a temporary problem — whether the ever-increasing power usage and heat output of computer servers is a trend that’s going to continue. ’I can certainly foresee a time when data centres don’t need extra cooling at all,’ Singh said, ’although that’s more likely for centres in temperate climates’.
Sadokierski agrees. ’I think that, eventually, there will be a change in computer technology, with chips that produce less heat; it’s certainly something that the IT companies are working towards,’ he said. ’But in the short-to-medium term, we’re still going to have these systems and heat management will continue to be an issue.’
in depth
closer to home
The PEER1 hosting centre in Portsmouth shows you don’t need to go north to keep your data cool
Low-energy data centres are not confined to the frozen North. One of the most efficient in Europe opened this year in the decidedly non-frigid environment of Portsmouth.
The PEER1 hosting centre offers data storage to companies across the south east and operates at a PUE of 1.1. Costing £45m and occupying 5,372m², it has space for 11,000 servers and connects to a 10Gbit/s optical-fibre network. ’Data centre demand shows no signs of slowing down,’ said Dominic Monkhouse, EMEA managing director of PEER1. ’By investing, we have developed a wholly owned data centre that leads the way in reducing the carbon footprint for our customers and provides them with the capacity to grow.’
The servers are cooled by a system developed by Excool, a data centre cooling specialist based near Birmingham. In the winter, heat from the data hall is transferred to the outside air via air-to-air heat exchangers, with no outdoor air entering the building. In the summer, if the outdoor temperature rises above 24°C, water is sprayed into the outside air stream; its evaporation cools that air, which in turn lowers the temperature in the data hall via the heat exchangers.
This system is known as indirect cooling and has also been used in the US, somewhat ironically, to cool the servers at the National Snow and Ice Data Centre (NSIDC) in Boulder, Colorado. The centre also uses renewable energy, with a solar array to power equipment and recharge its back-up batteries. ’The technology works and it shows that others can do this too,’ said NSIDC technical services manager David Gallaher.
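In control terms, indirect cooling of this kind amounts to switching between dry heat exchange and evaporative assistance according to the outdoor temperature. The sketch below is a simplified illustration of that mode selection, not Excool’s actual control code; only the 24°C threshold comes from the description above.

```python
# Simplified mode selection for an indirect air-to-air cooling system.
# Illustrative only -- not Excool's control code. The 24 C threshold is the
# figure quoted for the Portsmouth installation; everything else is assumed.

EVAPORATIVE_THRESHOLD_C = 24.0

def cooling_mode(outdoor_temp_c: float) -> str:
    """Choose how to reject heat from the sealed data hall.

    Outdoor air never enters the building: it only passes across the other
    side of the air-to-air heat exchangers.
    """
    if outdoor_temp_c <= EVAPORATIVE_THRESHOLD_C:
        return "dry"          # heat exchangers alone are enough
    return "evaporative"      # spray water into the outdoor air stream first

for t in (5.0, 20.0, 28.0):
    print(f"{t:>5.1f} C outside -> {cooling_mode(t)} mode")
```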
in depth
cost cutters
A system combining two types of memory is claimed to be far cheaper to run than conventional server memory
Reducing the energy consumed by computer memory would solve many of the cooling problems associated with data centres. One project to tackle this has recently reached fruition at the University of Pittsburgh, with a system that its designers claim could cut power costs dramatically.
The project, led by Bruce Childers, a professor of computer science, aimed to replace dynamic random-access memory (DRAM), the main memory technology used in today’s servers. ’DRAM is rapidly reaching its limit in power consumption and capacity for data-centre-sized applications,’ Childers said.
The system combines a smaller bank of DRAM with a different type of memory called PCM, or phase-change memory, which works by exploiting a property of glasses containing compounds of the elements sulphur, selenium or tellurium. Known as chalcogenide glasses, these switch between crystalline and amorphous states when an electric current passes through them. Chalcogenide glasses are already used in rewritable CDs, and PCM is being developed as an alternative to flash memory. PCM can store large amounts of data and consumes much less power than DRAM, although it is slower. By combining the two, Childers’ team developed a hybrid memory system that uses DRAM for fast data retrieval and PCM for bulk storage.
’Our innovations in memory circuits have led to an eight-fold reduction in power cost,’ he said. ’They have also improved PCM lifetime, permitting this technology to last long enough for several years of usage in a data centre — something that was not possible previously.’
The team is now developing an operational prototype of the system that can be deployed in data centres.
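The team has not published its code here, but the general shape of such a hybrid can be sketched as a tiering policy: recently used pages stay in the small, fast DRAM, while everything else spills into the larger, lower-power PCM. The sketch below illustrates that idea under those assumptions; it is not the Pittsburgh design.

```python
from collections import OrderedDict

# Illustrative DRAM/PCM tiering sketch -- not the Pittsburgh team's design.
# Hot pages live in a small DRAM cache; everything else sits in larger,
# slower but lower-power PCM. A least-recently-used policy decides what stays in DRAM.

class HybridMemory:
    def __init__(self, dram_pages: int = 4):
        self.dram_pages = dram_pages
        self.dram = OrderedDict()   # page -> data, ordered by recency of use
        self.pcm = {}               # page -> data, the larger backing store

    def read(self, page: int):
        if page in self.dram:                    # fast path: DRAM hit
            self.dram.move_to_end(page)
            return self.dram[page]
        data = self.pcm.get(page)                # slow path: fetch from PCM
        self._promote(page, data)
        return data

    def write(self, page: int, data):
        self._promote(page, data)

    def _promote(self, page: int, data):
        self.dram[page] = data
        self.dram.move_to_end(page)
        if len(self.dram) > self.dram_pages:     # evict the least-recently-used page
            victim, victim_data = self.dram.popitem(last=False)
            self.pcm[victim] = victim_data       # each PCM write wears the cells,
                                                 # hence the team's focus on lifetime

mem = HybridMemory(dram_pages=2)
for p in (1, 2, 3, 1):
    mem.write(p, f"data-{p}")
# DRAM keeps the most recently used pages; evicted pages sit in PCM.
print(sorted(mem.dram), sorted(mem.pcm))
```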