Risk is a constant in the data centre industry – but the range of factors driving it is broader than ever.
Technologies such as Artificial Intelligence (AI) and the Internet of Things (IoT), combined with an accelerated reliance on digital services, are creating increasingly compute-intensive workloads. The result? Rising power density per rack, which affects every facet of the data centre, including capacity planning, cooling, design, and power provisioning.
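To make the density shift concrete, here is a minimal back-of-the-envelope sketch. The server counts and wattages are invented for illustration, not taken from the article; the only principle used is that essentially all IT power ends up as heat the cooling system must remove.

```python
# Illustrative sketch: per-rack power density and the cooling load it
# implies. All figures below are assumptions for demonstration.

def rack_power_density_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total IT load per rack in kW."""
    return servers_per_rack * watts_per_server / 1000

def cooling_load_kw(it_load_kw: float) -> float:
    """Nearly all IT power is rejected as heat, so the cooling system
    must remove roughly the same load."""
    return it_load_kw

# A traditional rack of general-purpose servers vs a hypothetical AI rack:
legacy = rack_power_density_kw(servers_per_rack=20, watts_per_server=400)
ai_rack = rack_power_density_kw(servers_per_rack=8, watts_per_server=6000)

print(f"Legacy rack: {legacy} kW IT load, ~{cooling_load_kw(legacy)} kW of cooling")
print(f"AI rack: {ai_rack} kW IT load, ~{cooling_load_kw(ai_rack)} kW of cooling")
```

Even with fewer servers per rack, the denser hardware multiplies the thermal load several times over – which is why cooling, capacity planning, and power provisioning all feel the impact at once.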
These changes are also fuelling more disruption than previously anticipated: data centre outages are becoming both more frequent and more severe. In 2018 and 2019, 50% of data centre operators experienced an IT outage; by 2020 that figure had risen considerably, with 78% of operators reporting an outage. Crucially, 75% of those surveyed believe their downtime was preventable.
To better understand how data centre professionals can navigate these changes and prevent outages, let’s explore the challenges currently facing the industry, as well as the tools operators are leveraging to overcome them.
Accommodating evolving thermal requirements with new cooling technology
New IoT and AI applications, combined with new use cases – in manufacturing and healthcare, for example – demand advanced levels of support from digital infrastructure. Although powerful, these high-density workloads are putting increasing strain on data centres to perform in new ways while supporting more complex thermal loads. As identified above, if managed improperly, this can result in costly outages and downtime. To mitigate these challenges, operators must embrace the latest cooling technology, including cold plates and immersion cooling.
Naturally, this adds another layer of complexity to data centre design. Digital Twin simulation software that harnesses Computational Fluid Dynamics (CFD) can help operators manage it effectively. Operators who leverage Digital Twins can trial and test cooling, flow, and heat processes in the digital realm before introducing such changes in real life. By first examining the impact of these changes in a 3D simulation, operators avoid costly downtime and product incompatibility. Put simply, Digital Twins create a risk-free virtual environment.
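A toy example gives a feel for the kind of physics such software solves. The sketch below is a 1D heat-diffusion model using an explicit finite-difference scheme – real data centre CFD is 3D, includes airflow, and is vastly more sophisticated, and every number here is an assumption for demonstration only.

```python
# Toy 1D heat diffusion: a hot spot (e.g. a dense rack) in a cool aisle,
# smoothed over time by conduction. Illustrative only; not real CFD.

def diffuse(temps, alpha=0.1, steps=100):
    """Advance a 1D temperature profile by `steps` explicit Euler steps.
    `alpha` is the dimensionless diffusion number; it must stay <= 0.5
    for the explicit scheme to remain numerically stable. The two
    endpoints are held fixed (constant-temperature boundaries)."""
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            new[i] = t[i] + alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
        t = new
    return t

# A 45 °C hot spot in the middle of a 20 °C aisle:
profile = [20.0] * 5 + [45.0] + [20.0] * 5
smoothed = diffuse(profile)
print(max(smoothed))  # peak temperature falls as the heat spreads out
```

Even this crude model shows why simulation matters: the designer can see where heat ends up, and how quickly, before any hardware is installed.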
Implementing changes in data centre operations
It’s important to note that most operators are not starting from a blank canvas. Legacy data centres must accommodate new applications within their existing architecture – all while keeping commercial costs to a minimum. This speaks to how change within a data centre environment impacts both its design and operational efficiency. Against this backdrop, operators must justify costs and balance space and resource utilisation. After all, power and real estate are expensive commodities that need to be managed effectively.
With Digital Twin technology powered by CFD, operators can accurately understand how new technologies affect the overall ecosystem. For example, Digital Twins streamline the capacity planning process by ensuring all changes are validated in the model. The technology enriches data centre power analysis by visualising the entire power network, from utility to IT. It can even be leveraged to check breaker loadings for future deployments and to test failure scenarios, ensuring uninterrupted power can be supplied to every piece of IT in the facility and reducing data centre risk.
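The breaker-loading and failure-scenario analysis described above can be sketched in miniature. The topology, ratings, and load figures below are invented for illustration – a real Digital Twin models the full electrical network – but the logic is the same: reroute each load to its backup feed when its primary fails, then check every breaker against its rating.

```python
# Hypothetical sketch of failure-scenario analysis on a dual-feed power
# network. All names, ratings, and loads are assumptions for illustration.

BREAKER_RATING_KW = {"feed_A": 100, "feed_B": 100}

# Each IT load draws from a primary feed and fails over to a backup.
loads = [
    {"name": "rack_1", "kw": 30, "primary": "feed_A", "backup": "feed_B"},
    {"name": "rack_2", "kw": 40, "primary": "feed_A", "backup": "feed_B"},
    {"name": "rack_3", "kw": 50, "primary": "feed_B", "backup": "feed_A"},
]

def breaker_loads(failed_feed=None):
    """Sum the load on each feed, rerouting any load whose primary
    feed has failed onto its backup."""
    totals = {feed: 0 for feed in BREAKER_RATING_KW}
    for load in loads:
        feed = load["primary"] if load["primary"] != failed_feed else load["backup"]
        totals[feed] += load["kw"]
    return totals

def overloaded(failed_feed=None):
    """Feeds whose post-failure load exceeds the breaker rating."""
    return [feed for feed, kw in breaker_loads(failed_feed).items()
            if kw > BREAKER_RATING_KW[feed]]

print(breaker_loads())        # normal operation: both feeds within rating
print(overloaded("feed_A"))   # losing feed_A pushes feed_B past 100 kW
```

Normal operation looks healthy here, but the simulated loss of feed_A reveals that feed_B would carry 120 kW against a 100 kW rating – exactly the kind of hidden risk a model-validated capacity plan is meant to surface before deployment.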
Looking ahead with data centre simulation
CFD-powered Digital Twins make it possible to build data centres housing high-density supercomputers, and to do so quickly. The technology even empowered Nvidia to build a supercomputer called Cambridge-1 in under 20 weeks – a world record. By adopting simulation, operators not only simplify the design process but also reduce the risk of change, because they can examine future plans through a series of ‘what if?’ scenarios to understand the impact on sustainability, energy efficiency, or performance.