Automating IP-optical integration for dynamic interconnect services

Enterprises' needs for cloud-based services are changing as they embrace digital technologies across their operations.

Edge cloud use is expanding, hosting workloads for a growing range of use cases such as automation, IoT and digital twins, where latency and data sovereignty are at the fore. Hybrid work models, the use of virtual and augmented reality, and cloud-native network architectures for 5G are also contributing to the shift.

A distributed cloud architecture that includes centralised and edge cloud capabilities requires the interconnection fabric to be more dynamic and resilient, and makes it inherently more complex. Networks will need to rapidly provision resources at both the IP and optical layers to interconnect webscale and carrier data centres, co-location facilities and on-premises edge clouds.

The IP-optical challenge

A number of players in the cloud services value chain need to consume network services: wholesale carriers, hyperscale cloud and interconnect providers, regional data centres and colocation providers, as well as enterprise private clouds. Whether using optical transport or IP peering and transit services, data centre interconnection should be as seamless as the internal data centre network.

From the perspective of enterprises and cloud operators, diverse network resources need to present as a single and consumable fabric with the assurance that dynamic service levels are being met. As cloud applications become more distributed and dynamic, it is not just virtual compute and storage resources that must respond to dynamic workload needs, but network services as well.

A software-defined network should be able to advertise and distribute capacity to central, regional and edge data centres, ultimately enabling the end customers at the OT level of the enterprise to manage network slices or partitions on a per-application basis. This is a cloud model where an industrial process can scale at will, consuming edge or centralised compute, storage and network interconnection resources as required.

One of the key challenges for consuming wide area interconnect services is that today's network infrastructure is enabled by IP and optical layers with very little integration between them. Traditionally, the IP layer dynamically routes traffic flows locally, whereas the wide area optical transport layer has been built around statistically stable traffic volumes that do not have to be as dynamic as the routing layer. Typically, this means that each layer is over-engineered and under-utilised for the actual traffic demands, leading to increased CAPEX and OPEX.

At first sight, some might think the consumable model of the network could be achieved purely at the IP level. However, this ignores the importance of optical links in providing data centre interconnect. The cloud requires huge amounts of data to be transported between data centres. It also requires replication of this data using synchronous and asynchronous backup between primary, secondary and tertiary data centres, both for on- and off-premises business continuity and disaster recovery, and often between regions.

To drive more capacity at a lower cost per bit, interconnection infrastructure providers cannot ignore their IP-optical network designs. The amount of capacity needed and/or latency performance requirements means that traffic flows are sometimes more efficiently steered directly at the optical layer than at the IP layer. This is not a decision that should involve the service’s consumer; the consumed service needs to route the traffic at whatever layer is able to meet the performance parameters most efficiently.
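The layer-selection decision described here can be expressed as a simple heuristic. The thresholds and latency figures below are illustrative assumptions, not values from any real platform; the point is only that the service, not its consumer, picks the layer.

```python
def select_layer(demand_gbps: float, latency_budget_ms: float,
                 ip_hop_latency_ms: float = 1.0, ip_hops: int = 4,
                 optical_latency_ms: float = 2.0) -> str:
    """Pick the layer that meets the latency budget most efficiently.

    Large, steady flows are steered directly onto a wavelength, bypassing
    intermediate routers; smaller or latency-tolerant flows ride the IP
    layer and benefit from its statistical multiplexing. All numbers here
    are illustrative assumptions.
    """
    ip_path_latency = ip_hop_latency_ms * ip_hops
    # A full wavelength is typically cheaper per bit than router ports
    # once a flow is large enough to fill most of it.
    if demand_gbps >= 100 and optical_latency_ms <= latency_budget_ms:
        return "optical"
    if ip_path_latency <= latency_budget_ms:
        return "ip"
    # IP hops blow the budget: fall back to the optical express path.
    return "optical" if optical_latency_ms <= latency_budget_ms else "infeasible"
```

A bulk replication flow of several hundred gigabits lands on the optical layer, while a modest, latency-tolerant flow stays at the IP layer.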

Managing the interconnectivity value chain

This means abstracting the interconnection service from the transmission media. Service abstraction has long been possible in IP networks; however, it is new territory for the optical layer. Optical services have long been presented hierarchically at differing levels of granularity, but not virtually. The required evolution is for a software-defined networking model to be implemented at the optical as well as the IP layer.

To evolve to this level, the network industry is moving slowly to enable an interconnectivity value chain which starts with the wholesale carrier offering large partitions or slices to a hyperscale cloud provider or an interconnect provider. They in turn can further partition this capacity — whether fibre, wavelength, optical transport and IP transit services — to regional operators and colocation players, who in turn provide enterprise customers with retail services.
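The chain of partitions described above is naturally recursive: each player sub-divides the capacity it bought from the player above. The sketch below models that hierarchy; the class, field and owner names are illustrative assumptions.

```python
class Partition:
    """A capacity partition that can be sub-partitioned down the value chain."""
    def __init__(self, owner: str, capacity_gbps: float, parent=None):
        self.owner = owner
        self.capacity_gbps = capacity_gbps
        self.parent = parent
        self.children: list["Partition"] = []

    def available(self) -> float:
        # Capacity not yet handed down to child partitions.
        return self.capacity_gbps - sum(c.capacity_gbps for c in self.children)

    def partition(self, owner: str, capacity_gbps: float) -> "Partition":
        # A child can never be carved out of capacity the parent lacks.
        if capacity_gbps > self.available():
            raise ValueError("insufficient capacity in parent partition")
        child = Partition(owner, capacity_gbps, parent=self)
        self.children.append(child)
        return child
```

A wholesale carrier partitions to a hyperscaler, which partitions to a regional colocation provider, which in turn sells a retail slice to an enterprise, with each level's remaining capacity tracked independently.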

Some leading-edge network vendors now enable this capacity to be managed end-to-end by software. They provide portals for all players in the value chain to manage their own capacity consumption and provisioning, including the enterprise cloud services manager. This gives control at each level, whether it is to bring up applications and access services when needed, increase utilisation efficiency, respond quickly to fast-changing demand, or ensure reliability and security.

Automating interconnectivity

For this to be implemented, the various layers of the virtual data centre interconnect fabric need to be programmable. Using high-level, intent-based languages, a network engineer should be able to describe the kind of performance characteristics required so that the resources are automatically configured across both the IP and optical layers.
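As a rough illustration of intent-based provisioning, a high-level service description might be compiled into per-layer settings along these lines. The intent schema and the emitted parameters are assumptions invented for this sketch, not a real configuration model.

```python
def compile_intent(intent: dict) -> dict:
    """Expand a high-level service intent into IP- and optical-layer settings.

    Illustrative only: field names and thresholds are assumptions.
    """
    configs = {}
    # IP layer: a routing policy constrained by the stated latency bound.
    configs["ip"] = {
        "policy": f"{intent['name']}-policy",
        "latency_bound_ms": intent["max_latency_ms"],
        "bandwidth_gbps": intent["bandwidth_gbps"],
    }
    # Optical layer: dedicate a wavelength only for very large demands,
    # with protection switching if the intent asks for resilience.
    if intent["bandwidth_gbps"] >= 100:
        configs["optical"] = {
            "wavelength": "dedicated",
            "protection": "1+1" if intent.get("resilient") else "unprotected",
        }
    return configs
```

The engineer states only what the application needs; the compiler decides which layers are touched and how.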

To tailor services for a specific enterprise customer application, cloud providers should have the ability to orchestrate and control all network elements and functions, at all layers, using both off-the-shelf applications as well as custom-built ones. For the latter, they need a development environment that enables them to manipulate orchestrators, APIs and operations support systems using standard programmable platforms.

Many cloud providers, used to the DevOps approach for applications, will also want to implement a similar NetOps methodology that automates their development process through continuous integration and continuous delivery (CI/CD), including digital sandboxes for testing new network services before launch.

Wholesale transport and data centre interconnect services must provide deterministic end-to-end services with guaranteed service level agreements. Given that the interconnectivity value chain may involve several players, these service level agreements have to be managed across multiple overlapping contracts over the shared network infrastructure. The network must use telemetry to provide information on element performance at all levels of the network and use this to provide closed-loop assurance.

The consumable network

As the cloud ramifies from centralised clouds into regional, edge and far-edge clouds, interconnection has to keep pace. All of the cloud's advantages, including its ability to scale on demand and to shift compute and storage functions to where they are most efficiently utilised, rest on the availability of an equally agile and consumable network.

Essentially, IP/optical network automation for data centre connection and interconnection will give customers access to services when needed and help operators increase efficiency, respond quickly to fast-changing demand and ensure service performance reliability. Key attributes of these systems should include software-programmability that is customisable through APIs, the integration of customer web portals with network interconnection platforms, and the ability to coordinate IP/optical network operations from end-to-end through a single pane of glass.

Patrick McCabe
Head of Webscale Marketing, IP Network Solutions at Nokia
