
The next evolution in DCIM


Today, our dependency on digital infrastructure shows no sign of abating. Driven by the proliferation of smart devices, the rollout of 5G networks and the growth of the Internet of Things (IoT), the volume of digital information flowing across the digital economy continues to increase rapidly.

Little of this data is permanently stored on phones, PCs or IoT devices. Instead, it is stored in data centres and, in many cases, accessed remotely. Given the always-on nature of the digital world, it is essential that such data centres are secure, sustainable and resilient, providing 24/7 access to data.

Gartner predicts that by 2025, 75% of enterprise-generated data will be created and processed outside a centralised data centre or cloud. The demands of hybrid IT have required data centres to undergo significant evolution in terms of design, deployment and operations. 

Hyperscale data centres endure, for instance, but requirements for low-latency connectivity and data availability in TV, streaming, social media and gaming platforms have driven more data centres to the edge of the network.

Additionally, concerns around data sovereignty, security, location and privacy, combined with the need for businesses to react quickly to emerging market opportunities, have produced a plethora of new data centre architectures. Many of these are smaller and more distributed, and bring attendant challenges of uptime, remote management and maintenance.

The evolution of management software

From the earliest days of digitisation, software has been used to monitor and manage digital infrastructure. Today, we describe such software as Data Centre Infrastructure Management (DCIM), and in reality, we have reached the third generation of this technology.  

In the 1980s, at the dawn of the server era, the infrastructure needed to provide resilience and continuity to hosted applications consisted of little more than racks and uninterruptible power supplies (UPS), with rudimentary tools to monitor such systems and alert users in the event of a power outage. Such tools were not called DCIM at the time, but they were effectively the first examples of the category. With hindsight, we can refer to them as DCIM 1.0.

In the 1990s, the heyday of the dot-com era spurred the growth of larger data centres and cloud-distributed software. The industry chose to consolidate core IT infrastructure in purpose-built data centres, which brought a new set of management challenges. These included cooling high-density racks reliably, managing space effectively and keeping energy costs to a minimum. The latter issue in particular forced operators to pay greater attention to efficiency and drove the development of metrics such as power usage effectiveness (PUE) to benchmark these efforts.
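PUE itself is a simple ratio: the total energy drawn by the facility divided by the energy consumed by the IT equipment alone, so a value approaching 1.0 means almost no power is lost to cooling, lighting or power conversion. A minimal sketch in Python, with hypothetical figures for illustration:

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    # Power usage effectiveness = total facility energy / IT equipment energy
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility drawing 1,500 kWh to deliver 1,000 kWh of IT load
print(pue(1500.0, 1000.0))  # -> 1.5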

In light of this, management software evolved into a phase we can call DCIM 2.0. Here, performance data monitored from numerous infrastructure components, including racks, power distribution units (PDUs), cooling equipment and UPS, was used to provide insights to decision-makers, so that data centres could be designed, built or even modernised for greater efficiency and reliability. Space utilisation was another key challenge addressed, as was managing vulnerabilities through diligent planning, modelling and reporting to ensure resiliency.
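In practice, this style of monitoring amounts to polling each component for telemetry and rolling the readings up into facility-level views. The sketch below shows the idea in Python; the component names, metrics and sample values are illustrative assumptions rather than any particular vendor's API.

from dataclasses import dataclass

@dataclass
class Reading:
    component: str  # e.g. "rack-12", "pdu-3", "ups-1"
    metric: str     # e.g. "power_kw", "inlet_temp_c"
    value: float

def summarise(readings: list[Reading], metric: str) -> dict:
    # Roll per-component telemetry up into a facility-level summary
    values = [r.value for r in readings if r.metric == metric]
    if not values:
        return {}
    return {"min": min(values), "max": max(values),
            "avg": sum(values) / len(values), "count": len(values)}

# Illustrative readings such as a DCIM 2.0 poller might collect
sample = [Reading("pdu-1", "power_kw", 4.2),
          Reading("pdu-2", "power_kw", 5.1),
          Reading("rack-7", "inlet_temp_c", 24.5)]
print(summarise(sample, "power_kw"))  # {'min': 4.2, 'max': 5.1, 'avg': 4.65, 'count': 2}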

Such tools were mainly focused on large data centres containing highly integrated and consolidated equipment, typically from a handful of vendors. These data centres were likely to have on-site personnel and IT management professionals with highly formalised security procedures. Importantly, the software was typically hosted on premises and, frequently, on proprietary hardware.

The era of DCIM 3.0

With the emergence of hybrid IT and edge computing, data centre software has had to evolve again to meet the new challenges posed to owners, operators and CIOs. HPE states that while in 2000 enterprise software was hosted entirely at the core of the network, by 2025, 20% of IT will be hosted in the core, 30% in the public cloud and 50% at the edge.

In this era of infrastructure everywhere, it is clear that the data centre environment has become increasingly complex and difficult to manage. One might even say that everything has, in part, become a data centre.

New research from IDC found that the chief concerns around edge deployments were managing the infrastructure at scale, securing remote edge facilities and finding suitable space, with the attendant facilities to ensure resilience and security at the edge. Moreover, between 2014 and 2021 there was a 40% increase in the number of companies compromised by a cyberattack.

The pandemic, for example, forced people to work remotely and brought these challenges into sharp focus. Now the data centre itself is not the only critical point in the ecosystem: a home router, a PC or an enterprise network closet is as mission-critical a link in the chain as a cloud data centre, with its strict security regime and redundancy.

For many senior decision-makers, managing energy at distributed sites is also going to be a bigger challenge than in traditional data centres. Schneider Electric estimates that by 2040, total IT energy consumption will reach 2,700 TWh, with 60% (around 1,620 TWh) coming from distributed sites and 40% (around 1,080 TWh) from data centres.

Resilience and sustainability

Today, distributed mission-critical environments need the same levels of security, resilience and efficiency across all points of the network. To realise this, a new iteration of management software is required, which we can call DCIM 3.0. 

Recognising that the role of the Chief Information Officer (CIO) in many companies has become increasingly business-focused and strategic, DCIM 3.0 will equip these decision-makers with insights into strategic issues: where technology can best be deployed, how efficiently and sustainably it can be operated, and how it can be managed remotely without loss of resilience.

In some respects, this requires greater use of artificial intelligence and machine learning to glean actionable information from the data amassed by IoT sensors. It will also require greater standardisation of both software tools and hardware assets to offer ease of management and faster deployment. Further, increased customisation and integration will be key to making the hybrid IT environment resilient, secure and sustainable.
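To make that concrete, one simple technique in this family is flagging anomalous sensor readings with a rolling z-score, so an operator is alerted only when a value drifts well outside its recent history. This is a minimal sketch, not a description of any specific DCIM product, and the temperatures and threshold are assumed values:

from collections import deque
from statistics import mean, stdev

def anomalies(readings: list[float], window: int = 20, threshold: float = 3.0) -> list[int]:
    # Flag indices whose value sits more than `threshold` standard
    # deviations from the rolling mean of the preceding `window` samples
    history: deque = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# Hypothetical UPS inlet temperatures (°C) with one sudden spike
temps = [22.1, 22.3, 22.0, 22.4, 22.2, 22.1, 29.8, 22.3]
print(anomalies(temps, window=5))  # -> [6]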

Customers also seek to deploy management tools in several ways: some demand on-premises deployments, others insist on private cloud implementations, and still others are happy to trust the public cloud. All of these methods must be supported to make DCIM 3.0 a reality.

Ultimately, the issue of environmental sustainability will become increasingly important due to customer demand and government regulation. As well as providing operational data, DCIM 3.0 tools will have to support decisions such as how to source power from renewables, how to dispose of end-of-life products and how to manage the overall carbon footprint, not just of the IT infrastructure but of the enterprise as a whole.

Right now, DCIM 3.0 is still in its infancy, although many of the above capabilities are already available. To deliver on the promise of DCIM 3.0, however, we must learn the lessons of the past and evolve DCIM to support a new generation of resilient, secure, and sustainable data centres.

Marc Garner
VP, Secure Power Division, Schneider Electric UK&I
