Data centres don’t have to come at the expense of urban growth

Vik Malyala
EMEA President at Supermicro

Vik Malyala, Managing Director at Supermicro, argues that smarter data centre design and more efficient cooling will be essential if AI infrastructure is to expand without squeezing power availability for homes, businesses, and wider society.

AI has become a normal part of business operations across sectors and organisation types. Demand for AI-driven tools is expected to continue rising, increasing the power requirements of the data centre infrastructure that supports them. While utility providers are seeking ways to expand energy supply, resources are limited, and growing competition for power can constrain efforts to address wider urban needs.

However, a range of technologies can help reduce data centre power usage, including more efficient liquid-cooling architectures and modular rack-scale designs. These approaches can lower operational costs while also helping to free up electricity for broader societal projects and initiatives.

AI growth versus energy needs

Recent reporting in the UK found that housing construction projects were delayed because the electrical grid could not supply new homes with sufficient electricity. This was attributed in part to the large amount of power consumed by data centres. Nor is this challenge isolated to the UK. Global growth in AI workloads is driving demand for new facilities designed specifically to support these applications. That creates a multifaceted challenge, potentially limiting electricity for other uses, pushing up prices, and increasing carbon emissions where utilities still rely on fossil fuels.

To mitigate the potential negative effects of rising electricity demand, operators can look at a number of technologies and strategies that reduce data centre power consumption and improve overall efficiency, potentially increasing power availability for other users.

Setting the groundwork for modernised infrastructure

As a starting point, data centre designers need to consider the age and effectiveness of the systems and technologies in use. One significant barrier to reducing energy consumption, particularly at scale, is outdated legacy infrastructure. Upgrading to newer server technology, which can deliver more work per watt than previous generations, can help address this challenge.

In addition, data centres supporting intensive AI workloads often operate at significantly higher temperatures. To prevent components from overheating and to maintain performance, appropriate cooling systems need to be in place.

One important option is liquid cooling, where recent developments have improved heat exchange efficiency at ambient temperatures of up to 45°C.

Liquid cooling can reduce power requirements by using liquid to remove heat from CPUs, GPUs, and other microelectronics. While that liquid still needs to be cooled, whether inside or outside the data centre, the reliance on traditional air-cooling infrastructure can be reduced.
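To give a sense of scale, the coolant flow a liquid-cooled rack needs can be estimated from the standard heat-transfer relation Q = ṁ·c·ΔT. The sketch below is illustrative only: the 100 kW rack load and 10 K coolant temperature rise are hypothetical figures, not Supermicro specifications.

```python
# Back-of-envelope sketch: coolant flow required to carry away a given
# heat load, from Q = m_dot * c_p * delta_T. Figures are illustrative.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def flow_lpm(heat_kw: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to absorb heat_kw with a
    coolant temperature rise of delta_t_k kelvin."""
    kg_per_s = (heat_kw * 1000.0) / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0  # 1 kg of water is roughly 1 litre

# e.g. a hypothetical rack dissipating 100 kW with a 10 K coolant rise
print(f"{flow_lpm(100.0, 10.0):.1f} L/min")
```

The same relation shows why liquid is attractive: water's specific heat and density let a modest flow remove heat that would require far larger volumes of moving air.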

Liquid cooling is not the only option, however, and designers should work with partners and engineers to evaluate which approach is best suited to their operational requirements.

In air-cooled data centres, for example, careful planning is needed to keep hot and cold air separate, improving cooling efficiency and reducing the power consumption of computer room air-conditioning (CRAC) units. Depending on the data centre’s location, free-air cooling may also be an option, provided humidity can be managed effectively, helping to reduce power consumption for part of the year.

Finally, operators can use hardware and software controls to reduce energy consumption when servers are idle or underused. This helps avoid unnecessary power draw, lowers operational costs, and allows energy to be directed where it is needed most.

Used individually or in combination, these measures can help reduce overall power demand in data centres.

Reducing peak power usage

The latest servers featuring next-generation CPUs and GPUs can offer higher performance per watt than earlier generations. For example, tokens per watt, a measure of the practical AI output delivered per unit of energy, can be significantly higher than in previous generations. That can allow the same amount of work to be completed using less power, or greater workloads to be supported within a more efficient footprint. Depending on service-level agreements and application requirements, this can play an important role in reducing server-level electricity consumption.
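The arithmetic behind this is straightforward. The sketch below uses hypothetical tokens-per-watt figures (they are not benchmarks of any specific product) to show how a higher rate translates into less energy for the same inference workload.

```python
# Illustrative only: the throughput figures are hypothetical, not
# measured data. Higher tokens per watt means the same workload
# completes with less energy.

def energy_kwh(tokens: float, tokens_per_watt_second: float) -> float:
    """Energy needed to produce `tokens` at a given tokens/(W*s) rate."""
    watt_seconds = tokens / tokens_per_watt_second
    return watt_seconds / 3_600_000  # convert W*s to kWh

WORKLOAD = 1e9  # one billion tokens

old_gen = energy_kwh(WORKLOAD, tokens_per_watt_second=5.0)
new_gen = energy_kwh(WORKLOAD, tokens_per_watt_second=15.0)

print(f"older generation: {old_gen:.1f} kWh")
print(f"newer generation: {new_gen:.1f} kWh")
print(f"saving: {100 * (1 - new_gen / old_gen):.0f}%")
```

With these assumed figures, tripling tokens per watt cuts the energy for a fixed workload to a third, which is the kind of generational gap the article describes.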

Balancing compute with societal needs

AI data centres generally need to be designed to handle peak workloads. In many cases, however, there will also be periods when CPU or GPU utilisation falls below maximum levels.

Intelligent software management can help operators concentrate workloads on specific servers during those periods, while powering down or reducing power to others. As well as reducing direct power consumption, this can also improve a facility’s overall power usage effectiveness (PUE).
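PUE is the ratio of total facility power to IT equipment power, so values closer to 1.0 are better. The sketch below uses hypothetical figures to show how consolidation can help: if idle servers are powered down and the cooling serving their zones is switched off, the non-IT overhead can fall faster than the IT load does.

```python
# Illustrative sketch with hypothetical power figures.
# PUE (power usage effectiveness) = total facility power / IT power.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Ratio of total facility power to IT equipment power."""
    return (it_kw + overhead_kw) / it_kw

# Before: load spread thinly across many servers; every zone's cooling runs.
before = pue(it_kw=800.0, overhead_kw=400.0)

# After: same work consolidated on fewer, busier servers; idle servers
# powered down and cooling in their zones switched off.
after = pue(it_kw=600.0, overhead_kw=180.0)

print(f"PUE before consolidation: {before:.2f}")
print(f"PUE after consolidation:  {after:.2f}")
```

The improvement depends on how much of the overhead is fixed: if cooling and power-distribution losses cannot be scaled down with the IT load, consolidation reduces total consumption but may not improve the PUE ratio itself.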

Overall, there are several practical ways to reduce a data centre’s power requirements. Improving efficiency within the data centre does more than lower costs for operators; it can also help make more power available for other societal uses, including private homes, shared infrastructure, and small businesses.
