
In The Spotlight… RiT Tech

Jeff Safovich, Chief Technology Officer at RiT Tech, sits down with DCR to discuss how AI can help improve efficiency and reduce risk for data centre operators, the company's XpedITe AI provisioning module, and the challenges facing the sector.

DCR: XpedITe's AI provisioning module is the reason that RiT Tech has been shortlisted for DCIM Solution of the Year at the ER & DCR Excellence Awards. How would you describe its value to data centre operators and owners, and how different is it from other DCIM platforms?

JS: RiT Tech's XpedITe software platform is quite different from traditional DCIM solutions; we actually don't even like the traditional term DCIM. We are developing a newer, more comprehensive approach to data centre infrastructure management, which we call 'UIM': Universal & Intelligent Infrastructure Management. This is an evolution of traditional DCIM. XpedITe is our tool, our software platform, built on the methodologies of UIM.

It's different to other DCIM platforms in many ways. First of all, traditional DCIM platforms rely a lot on manual planning, manual work, manual calculations – which is a good thing in itself, but I believe that in today's era, with innovative and modern technologies, there is so much room for improvement.

XpedITe includes an AI-powered provisioning module, built to automate the entire process of planning changes and installations – the smooth installation of servers and IT equipment – in a sophisticated way. That not only removes human error, which is very significant and very important, but it also handles the planning in a much more comprehensive way.

One reason for this is that the XpedITe AI provisioning module integrates with legacy and existing third-party software systems and hardware equipment, such as CMDB, BMS, EMS and ticketing systems. It takes the information from all of these systems together to build a comprehensive model of the entire environment. Based on that model, it makes the planning and optimisation of resource use much more effective and efficient.
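
To illustrate the kind of cross-system aggregation Safovich describes, here is a minimal, hypothetical Python sketch that merges per-asset records from a CMDB, BMS, EMS and ticketing system into one unified model. All class, function and field names are invented for the example and do not reflect XpedITe's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class AssetView:
    """One asset's unified view, merged from several source systems."""
    asset_id: str
    attributes: dict = field(default_factory=dict)
    sources: list = field(default_factory=list)

def build_unified_model(cmdb, bms, ems, tickets):
    # Each argument is a dict of {asset_id: attributes} from one source system.
    model = {}
    for source_name, records in [("cmdb", cmdb), ("bms", bms),
                                 ("ems", ems), ("tickets", tickets)]:
        for asset_id, attrs in records.items():
            view = model.setdefault(asset_id, AssetView(asset_id))
            view.attributes.update(attrs)   # later sources can refine values
            view.sources.append(source_name)
    return model

# Power data from the BMS and ownership data from the CMDB land on one record.
model = build_unified_model(
    cmdb={"rack-01": {"owner": "IT", "u_height": 42}},
    bms={"rack-01": {"power_kw": 4.2}},
    ems={}, tickets={},
)
print(model["rack-01"].attributes)  # {'owner': 'IT', 'u_height': 42, 'power_kw': 4.2}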

Now, not only does it take into account the current state of the data centre, it also looks into the future. We have a machine-learning-based model which predicts the future state of the data centre, so the automated provisioning planning takes the future evolution of the infrastructure into account, and that makes the whole plan much more resilient to future change.
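
As a purely illustrative example of such a prediction, the sketch below fits a straight-line trend to historical rack power readings and extrapolates it forward. A real system would use far richer models; this is a toy under stated assumptions, not XpedITe's implementation.

import numpy as np

def forecast_power(history_kw, months_ahead):
    """Fit a linear trend to monthly power readings and extrapolate."""
    t = np.arange(len(history_kw))
    slope, intercept = np.polyfit(t, history_kw, 1)  # least-squares line fit
    future_t = len(history_kw) + months_ahead - 1
    return slope * future_t + intercept

history = [3.1, 3.3, 3.6, 3.8, 4.0]          # kW drawn in each recent month
print(round(forecast_power(history, 6), 2))   # projected draw six months out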

In addition to that, one of the main capabilities of this provisioning module is its automated workflow functionality. Not only does it plan how the infrastructure should look after provisioning, it also plans all the steps and activities for technicians to carry out along the way, in order to facilitate the management and actual execution of the provisioning workflow.

The module is fully compliant with organisational policies and with industry standards. So not only does it make the data centre more resilient against downtime, and not only does it make planning more efficient and much faster, it also aligns the infrastructure with how the operator wants it built: cable lengths, and compliance with the capacities of the resources being used – power, cooling, space and networking. All of that is done automatically by this AI-powered provisioning module.
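
A hedged sketch of what such policy and capacity checking might look like in code: the function below verifies that a proposed server placement respects power, space, cooling and cable-length limits. All thresholds and field names here are invented for the example.

def check_placement(rack, server, max_cable_m=90.0):
    violations = []
    if rack["used_power_kw"] + server["power_kw"] > rack["power_capacity_kw"]:
        violations.append("power capacity exceeded")
    if rack["used_u"] + server["u_height"] > rack["u_capacity"]:
        violations.append("rack space exceeded")
    if rack["cooling_load_kw"] + server["power_kw"] > rack["cooling_capacity_kw"]:
        violations.append("cooling capacity exceeded")
    if rack["distance_to_switch_m"] > max_cable_m:
        violations.append("cable run over policy length")
    return violations  # an empty list means the placement is compliant

rack = {"used_power_kw": 3.5, "power_capacity_kw": 5.0,
        "used_u": 30, "u_capacity": 42,
        "cooling_load_kw": 3.5, "cooling_capacity_kw": 6.0,
        "distance_to_switch_m": 40.0}
server = {"power_kw": 1.0, "u_height": 2}
print(check_placement(rack, server))  # [] -> compliant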

DCR: How will this specific module improve data centre operations in terms of minimising downtime and helping support operators and owners in meeting their SLAs?

JS: In a recent report, the Uptime Institute pinpointed human error as one of the main causes of downtime in data centres. We rely so much on human planning, human work, human analysis and human judgement – which was the best we could do until recently. As a result, many errors occur, and they cause a lot of problems, such as downtime and resource shortages.

The XpedITe AI provisioning module specifically targets this challenge by reducing human error. It takes into account all sorts of considerations which a human being cannot possibly hold in mind all at the same time, and thus it makes the whole process of installs, moves, adds and changes – what is called IMAC – much more resilient to human error.

In addition to that, it also improves the overall efficiency of the data centre. It actually saves about 95% of the time spent planning provisioning steps – that is, it cuts planning to one-twentieth of the manual effort, a factor of 20. It also reduces the likelihood of over-utilising resources. That is achieved by looking at all the constraints within the data centre infrastructure and the data from all possible sources – the hardware equipment and the software systems already installed – and, taking them all into account, balancing resource utilisation.

By balancing resource utilisation, it not only reduces power and energy consumption, but also makes the data centre more stable and resilient. It takes into account the predicted future state of the data centre, and thus makes operation more reliable, improving SLAs and reducing unplanned downtime.

DCR: In your recent article, ‘Unleashing the power of AI in data centres: A path to sustainability’, you noted that innovations like this will improve efficiency and reduce compliance risk for operators and owners. Can you describe how XpedITe is improving operations and financial KPIs, as well as its potential impact on environmental goals?

JS: XpedITe cuts the time it takes to consider all the challenges and constraints by about 95%, so that improves the efficiency of the manual work alone. In addition to that, it empowers people: it not only aids their planning, it also highlights the possible hotspots carrying high potential risk, combining all the resources being utilised at the same time. As a result, it makes people much more effective and efficient in the work they do, empowered by this module.

The financial impact is also significant. Making energy utilisation more efficient adds cost savings, and the automated planning and provisioning improve both CAPEX and OPEX through better use of equipment – and it also saves on daily power consumption.

One significant factor is reducing the frequency and severity of outages and downtime. It's well known today that outages cost data centres a fortune; they are actually one of the main sources of loss in terms of ROI and financial impact.

So, reducing the frequency and severity of such outages is a major contribution to the financial case and ROI. In terms of environmental impact – it's very important, especially these days. We have climate challenges, net zero and an EU pact, so it's not just the large data centres that are now obliged to comply with various legislation and regulations around sustainability management, sustainability reporting and optimisation. They also have their own corporate agendas. I would say that's an evolution; more and more data centres are realising the importance of putting sustainability on their agenda, on their strategic list of goals.

Tools like the XpedITe AI provisioning module help them to track sustainability, and also make reporting much easier, more comprehensive and compliant with industry standards and regulations, such as EED and CSRD reporting. It also helps to make sustainability genuinely better – not just by counting KPIs and metrics, but by actually improving power utilisation and, as a result, the carbon footprint. And not only within the realm of power: that also goes for water utilisation and even IT equipment utilisation, so KPIs such as ITU could be improved as well, alongside the standard sustainability metrics such as PUE – which has been the main KPI until now.
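
For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is defined as total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt goes to IT. A minimal worked example:

def pue(total_facility_kwh, it_equipment_kwh):
    # PUE = total facility energy / IT equipment energy
    return total_facility_kwh / it_equipment_kwh

print(pue(1_500_000, 1_000_000))  # 1.5: half a kWh of overhead per IT kWh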

DCR: What other AI-powered modules are you looking to introduce and why – particularly in terms of sustainability?

JS: We have quite a comprehensive roadmap driven by the UIM, the Universal Intelligent Infrastructure Management practice – by the way, that’s a practice we are not developing just by ourselves; we have a comprehensive forum of industry experts, including data centre operators, integrators, consultants, and the end-users.

So together with them, we are developing this concept, the principles of UIM – and the roadmap for AI development actually derives from the long-term vision, goals and roadmap of UIM.

One of these models is data validation AI. One of the challenges today in relying on any type of data in data centres is how accurate the data is. It turns out that the data is often not so accurate, and there are multiple ways in which it can be inaccurate, starting with conflicting data coming from different data sources. So, you need to identify which data is correct and identify the discrepancies.

In many cases, data is simply missing. There are gaps – be it in the type of data, metrics that are just not captured, or data that is not measured within the proper boundaries defined by industry standards. So data validation AI is one of the models we are about to introduce.
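
As a rough illustration of that idea, the sketch below compares the same metric as reported by different source systems, flagging conflicts and gaps. The tolerance and the source names are assumptions made for the example, not part of RiT Tech's product.

def validate(readings, tolerance=0.05):
    # readings: {asset_id: {source_system: value or None}} -> list of findings
    findings = []
    for asset_id, by_source in readings.items():
        present = {s: v for s, v in by_source.items() if v is not None}
        missing = [s for s, v in by_source.items() if v is None]
        if missing:
            findings.append((asset_id, f"missing data from {missing}"))
        if len(present) >= 2:
            lo, hi = min(present.values()), max(present.values())
            if hi - lo > tolerance * hi:  # sources disagree beyond tolerance
                findings.append((asset_id, f"conflicting values {present}"))
    return findings

print(validate({"rack-01": {"bms": 4.2, "ems": 3.6},    # conflicting sources
                "rack-02": {"bms": None, "ems": 2.0}}))  # missing reading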

Another is proactive reprovisioning. Proactive reprovisioning is a whole new approach to how data centres are structured – or how we believe they should be managed today. At present, any change, any type of provisioning, any type of planning within the data centre is driven by a reactive approach: e.g. we have this new request, so we need to make this or that change today; or there is an issue, a problem, a downtime or some other external factor which requires us to make a change. So it's all reactive, it's all a response to whatever is going on outside.

Now, we believe we've come to a new era in which such management should become proactive. With this new proactive reprovisioning model, not only will it be possible to plan changes in response to specific requirements and needs, or to a particular downtime or other challenge, but the system will also automatically and proactively look into the data centre. It will analyse the level of operation, the SLAs, the resilience; it will look at all the factors and the resource utilisation. It will consider temperatures, it will consider fan speeds, it will consider performance even at the CPU level.

Taking all of that into account, it will identify a) risks and b) inefficiencies. Based on this, the system will be able to identify potential future risks in advance and proactively come up with recommendations to make specific changes before those risks materialise as actual problems. Proactive reprovisioning will come up with suggestions and recommendations for specific minor changes to the equipment, and these changes will not only prevent potential risks, but also optimise the overall efficiency and operation of the data centre infrastructure. These are two of the models, among others, that we are about to introduce to the market.
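
A speculative sketch of that proactive loop: scan live telemetry (inlet temperature, fan speed, CPU load) for early-warning patterns and emit recommendations before a failure occurs. The thresholds and rules below are invented for illustration and are not XpedITe's logic.

def proactive_checks(telemetry):
    # telemetry: list of dicts with 'asset', 'temp_c', 'fan_rpm', 'cpu_pct'
    recommendations = []
    for t in telemetry:
        if t["temp_c"] > 32 and t["fan_rpm"] > 9000:
            recommendations.append(
                (t["asset"], "fans near max at high inlet temp: rebalance load"))
        elif t["cpu_pct"] < 5:
            recommendations.append(
                (t["asset"], "near-idle server: candidate for consolidation"))
    return recommendations

print(proactive_checks([
    {"asset": "srv-17", "temp_c": 33.5, "fan_rpm": 9500, "cpu_pct": 70},
    {"asset": "srv-22", "temp_c": 24.0, "fan_rpm": 4000, "cpu_pct": 2},
]))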

DCR: So looking to the future, what do you think are going to be the main challenges facing the data centre industry?

JS: One of the main challenges that all of us are familiar with is bridging the silos. It's a challenge within data centres that has been spoken about a lot. Even now, we can see that, for example, the facility and IT teams usually operate completely separately from each other, so the optimisation, considerations and planning of one team are not aligned with the challenges, needs and constraints of the other.

So bridging between the silos is definitely one of the challenges, and we believe it should be addressed at all possible levels, starting from the strategic planning of the organisation and its management, then taking that down to technology and actual cooperation between the teams.

Another challenge we see a lot – especially these days, when sustainability has become one of the main topics being considered and addressed – is that some organisations are focusing on ticking the box: 'we need to be compliant with sustainability reporting regulations, such as the EED'. That is definitely very important, because it will create a whole new level of transparency and awareness in the industry. But we believe this is not enough.

If we really want to address sustainability as a whole and commit to the net zero goal, corporations have to go beyond ticking the box and reporting only the minimal set of sustainability data required by the EED and the CSRD in Europe, as well as some other standards in the United States.

Companies should define their organisational strategy and address sustainability as a business project with measurable ROI. They should also leverage innovative tools. I cannot stress enough the importance of leveraging modern technologies. Even today, many tools on the market are still working in the old way, focusing just on monitoring and reporting – which is, of course, very important, but not sufficient. Today's technologies have so much more to offer.

So I believe that data centre operators should start looking into the future of data centre infrastructure management, and consider how they want to see their data centre in the next few years.

The transcript of this video interview has been lightly edited for clarity.
