The future of data centre management in the financial services sector

Alex Blake, business development director for ABM Critical Solutions, looks back on 70 years of evolution and explores some of the challenges facing the financial services sector – one of the early adopters of data centres – both now and in the future.

With every swipe of a bank card, tap onto the tube or post on Facebook, a data centre is hard at work behind the scenes, embedded in everyday transactions.

This hasn’t always been the case; the financial services sector was the first adopter of the concept over 70 years ago, paving the way for the robust critical frameworks we have today. However, with legacy sometimes comes a hangover, and as new and old processes collide, how should the industry navigate this?

In the 1950s and 60s, data centres – or mainframes, as they were then known – were a different beast. Running these facilities was labour intensive and came at enormous expense.

Pitt Turner, executive director of the Uptime Institute, summed this up nicely when recalling how the process worked historically at a large regional bank: “In the evening, all trucks would arrive carrying reams of paper. Throughout the night the paper would be processed, the data crunched and printouts created. These printouts would be sent back to bank branches by truck in the morning.”

Those once cutting-edge mainframes are a far cry from where we are today and, frankly, with pace and accuracy at the heart of how all industries run, they wouldn’t cut the mustard – especially in the financial sector, which has grown exponentially and relentlessly, demanding speed and efficiency.

What’s come with this growth is a trend towards mixed processes – the sector manages huge data footprints through a combination of outsourcing to colocation services and operating out of its original sites.

For financial institutions working under a cloud of uncertainty and risk, these centres need constant investment, but there is often an unwillingness for this to come from CAPEX, so building their own data centres or updating and maintaining legacy systems isn’t a priority. Instead, existing data centres expand with more racks and hardware, making monitoring a constantly evolving job. At what risk, though?

Downtime

Downtime is the biggest risk factor in legacy data centres, and it is regularly driven by air particle contamination. Unlike new facilities, which are designed to control airflow and limit the ability of particles to contaminate equipment, legacy centres are often more exposed to threat, require expert cleaning teams and demand constant management from specialists.

It’s hard to equate cleaning with serious financial risk, but in the financial sector there’s pressure for online banking, payment processing and the protection of personal information to work around the clock. Failure to deliver means fines and reputational damage – both of which can be avoided with the right technical cleaning services and expert infrastructure management.

Preventative cleaning measures

Frequent air particle testing, carried out by specialist engineers and cleaners, is fundamental to identifying issues ahead of time, especially in legacy centres. Companies shouldn’t wait for issues to occur – as the saying goes, prevention is better than cure. A preventative cleaning regime comes at a cost, but it will help manage issues before they threaten service.

Some specialists can determine the cause of contamination on surfaces, but often the real damage is caused by airborne particles not visible to the naked eye. The solution is to implement an annual preventative technical cleaning programme to ensure ISO Class 8 standards are maintained in critical spaces.
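To give a sense of what maintaining that standard involves, the sketch below compares a set of particle-counter readings against the ISO 14644-1 Class 8 concentration limits. The counter readings and their format are hypothetical, included purely for illustration; the limits shown are the published Class 8 maxima per cubic metre of air.

```python
# Hedged sketch: compare particle-counter readings with ISO 14644-1 Class 8 limits.
# The sample readings are hypothetical; the limits are the published Class 8
# maximum concentrations (particles per cubic metre of air).

ISO_CLASS_8_LIMITS = {
    0.5: 3_520_000,  # particles >= 0.5 µm per m³
    1.0: 832_000,    # particles >= 1.0 µm per m³
    5.0: 29_300,     # particles >= 5.0 µm per m³
}

def check_iso_class_8(readings: dict[float, int]) -> list[str]:
    """Return a warning for any particle size that exceeds the Class 8 limit."""
    warnings = []
    for size, limit in ISO_CLASS_8_LIMITS.items():
        count = readings.get(size)
        if count is not None and count > limit:
            warnings.append(
                f"Particles >= {size} µm: {count:,}/m³ exceeds the Class 8 limit of {limit:,}/m³"
            )
    return warnings

# Example survey of a data hall using a (hypothetical) handheld counter
sample = {0.5: 4_100_000, 1.0: 500_000, 5.0: 12_000}
for warning in check_iso_class_8(sample):
    print(warning)
```

A breach at any single particle size is enough to take a space out of Class 8, which is why regular surveys across all measured sizes matter.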

The right infrastructure

Downtime can also be successfully avoided in a legacy data centre by implementing data centre infrastructure management (DCIM). Relying on older, outdated solutions can be a gamble, given how susceptible legacy centres are to building degradation and contamination.

DCIM can enable smart, real-time decision making and introduce fail-safes, meaning an issue doesn’t have to disrupt services to catastrophic effect.

For example, a custom alarm can be developed and installed that alerts a specific team or contact as soon as an error occurs. This ensures that technology and people work together: a problem will always be flagged immediately and attended to by an expert, who can assess and remedy the issue quickly.
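To make the idea concrete, here is a minimal sketch of how such a threshold alarm might be wired up in a simple monitoring script. The sensor read, the temperature threshold and the on-call contact are all assumptions for illustration, not a description of any particular DCIM product.

```python
# Minimal sketch of a DCIM-style custom alarm: poll a sensor on a fixed cycle
# and alert a named contact the moment a reading breaches its configured limit.
# The sensor read, threshold and notification route are assumptions only.
import random
import time

TEMPERATURE_LIMIT_C = 27.0                      # assumed rack-inlet alarm threshold
ONCALL_CONTACT = "critical-environments-team"   # placeholder contact or group

def read_inlet_temperature() -> float:
    """Stand-in for a real sensor read (e.g. via SNMP, Modbus or a BMS API)."""
    return 24.0 + random.uniform(-2.0, 6.0)

def raise_alarm(contact: str, message: str) -> None:
    """In a real deployment this would page or email the contact; here we print."""
    print(f"ALARM -> {contact}: {message}")

def monitor(poll_seconds: float = 1.0, polls: int = 10) -> None:
    """Check the sensor on a fixed cycle and flag any breach immediately."""
    for _ in range(polls):
        reading = read_inlet_temperature()
        if reading > TEMPERATURE_LIMIT_C:
            raise_alarm(
                ONCALL_CONTACT,
                f"Rack inlet temperature {reading:.1f} C exceeds {TEMPERATURE_LIMIT_C} C",
            )
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()
```

The value of this pattern is less in the code itself than in the pairing it enforces: the system detects the breach, and a named person is accountable for responding to it.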

The future

Monitoring technology

Monitoring technology will continue to grow and expand its remit, becoming more intelligent, precise and affordable. This will benefit legacy sites and, with the right measures in place, will limit vulnerabilities.

I see a time in the not-too-distant future when advanced monitoring technology will help drive efficiencies that lead to more remote, cost-effective offsite management models. Used correctly, it will ultimately provide users with data that guides their decisions and keeps them one step ahead.

The role of sensors

Sensor technology, managed remotely, will play a huge role in flagging areas of concern. ABM Critical Solutions is currently trialling a new sensor technology in a new-build data centre; we’ve incorporated sensors into our maintenance cleaning routine and hope to share our findings soon.

New locations

Last year, we saw a data centre submerged in the sea off the coast of Scotland. As technology increasingly helps us identify and fix issues remotely, I expect we’ll see more non-traditional data centre locations come into play.

We’re at a very exciting inflection point in the industry; infrastructure, technology and artificial intelligence are working together in ways we didn’t think possible.

There are more options than ever before to get it right, and while I believe we’ll continue to see a shift towards utilising colocation services, legacy centres will be more protected than ever, owing to advancements across the board.
