
Why think about 800G now?


The growth of home working, streaming services for games, music and movies, and the rise of data-intensive applications such as machine learning and artificial intelligence (AI) are just a few of the many factors driving rising bandwidth demand.

These developments challenge hyperscalers as well as enterprise and colocation data centres: alongside higher capacity requirements, they must also ensure lower latency while meeting climate targets.

One way to achieve this is to make more efficient use of existing switch architectures (high-radix ASICs). For example, 32-port switches offer up to 12,800 Gb/s of bandwidth (32 x 400G), and 800G versions offering up to 25,600 Gb/s are also available. These high-speed ports can easily be broken out into smaller bandwidths, which enables more energy-efficient operation while increasing packing or port density (32 x 400G = 128 x 100G).
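As a quick sanity check of that arithmetic, the sketch below works through the figures quoted above in Python; the function names are illustrative, not part of any switch API.

```python
# Illustrative arithmetic for fixed-radix switches, using the
# figures quoted in the text above.

def aggregate_bandwidth_gbps(ports: int, port_speed_gbps: int) -> int:
    """Total switch bandwidth: port count x per-port speed."""
    return ports * port_speed_gbps

def breakout_port_count(ports: int, port_speed_gbps: int, lane_gbps: int) -> int:
    """Number of lower-speed ports after breaking out every high-speed port."""
    assert port_speed_gbps % lane_gbps == 0, "lane speed must divide port speed"
    return ports * (port_speed_gbps // lane_gbps)

print(aggregate_bandwidth_gbps(32, 400))   # 12800 Gb/s
print(aggregate_bandwidth_gbps(32, 800))   # 25600 Gb/s
print(breakout_port_count(32, 400, 100))   # 128 x 100G ports
```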

The need to support low-latency, high-availability and very high-bandwidth applications will continue to grow. The question is not whether data centre operators need to upgrade to meet the increasing demand for bandwidth, but when and how. Operators should therefore prepare and adapt their network design now. After all, with a flexible infrastructure it is possible to upgrade from 100G to 400G to 800G, for example, with surprisingly few changes.

Network design is becoming increasingly complex

Higher data rates, however, also increase the complexity of solutions and offerings. As mentioned previously, it is not necessarily a matter of fully utilising 800G on every port, but of supporting the bandwidth requirements of the end devices. Examples include spine-leaf connections running 4 x 200G, or leaf-server connections using 400G ports operated as 8 x 50G, which at the same time makes the network much more energy efficient. A variety of solutions exist to achieve this, along with new transceiver interfaces.
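To make those breakout options concrete, here is a minimal sketch that models them as data; the mode labels follow common transceiver naming conventions and are illustrative rather than tied to any vendor's CLI.

```python
# Common breakout options for high-speed ports, as discussed above.
# Mode labels are illustrative conventions, not a vendor CLI.

BREAKOUT_MODES = {
    400: ["1x400G", "2x200G", "4x100G", "8x50G"],
    800: ["1x800G", "2x400G", "4x200G", "8x100G"],
}

def parse_mode(mode: str) -> tuple[int, int]:
    """Split 'NxSPEEDG' into (lane count, per-lane speed in Gb/s)."""
    count, speed = mode.split("x")
    return int(count), int(speed.rstrip("G"))

# Check that every mode adds up to the port's nominal speed.
for port_speed, modes in BREAKOUT_MODES.items():
    for mode in modes:
        count, speed = parse_mode(mode)
        assert count * speed == port_speed
```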

LC duplex and MPO/MTP connectors (12/24 fibres) are the well-known interfaces for transmission speeds of 10, 40 and 100G. For higher data rates such as 400G, 800G and beyond, additional connector types such as MDC, SN and CS (very-small-form-factor connectors), as well as MTP/MPO connectors with 16 fibres in a single row, have been introduced.

For network operators, it can be a challenge to keep track of these options and choose the right technology and network components for their needs. The demand for ever-higher bandwidth in network expansions often conflicts with a lack of space for additional racks and frames, or with the costs this incurs. Network equipment suppliers are therefore constantly working on new solutions that deliver more density within the same space while keeping the network design scalable and as simple as possible.

Port breakout applications for more sustainability

In addition to better utilisation of high-speed ports and the associated gain in port density, port breakout applications can also reduce the power consumption of network components and transceivers.

A 100G duplex transceiver consumes about 4.5 watts, while a 400G parallel-optic transceiver in a QSFP-DD, operated in breakout mode as four 100G ports, consumes only about 3 watts per port. This equates to savings of up to 30% per port, not counting the additional savings in air conditioning/cooling and switch chassis power consumption, and their contribution to space savings.
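The worked arithmetic below reproduces that comparison; the wattage figures are those quoted above, not independent measurements.

```python
# Worked example of the per-port power comparison quoted above.

watts_100g_duplex = 4.5        # one 100G duplex transceiver
watts_per_breakout_port = 3.0  # per 100G port of a 400G parallel transceiver

ports = 4
before = ports * watts_100g_duplex       # 18.0 W for four discrete modules
after = ports * watts_per_breakout_port  # 12.0 W in 4 x 100G breakout mode
saving = 1 - after / before

print(f"{saving:.0%} lower transceiver power")  # ~33%, i.e. 'up to 30%' above
```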

Effects on the network infrastructure

Backbone or trunk cabling scales well when the lowest common multiple of the applications' fibre counts serves as its basis. For duplex applications, this classically corresponds to 'Factor 4', i.e. base-8 cabling, onto which -R4 or -R8 transceiver types can be mapped. This type of cabling thus supports both current technologies and future developments.
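A small sketch of that sizing logic follows; the fibre counts per application (2 for duplex, 8 for -R4, 16 for -R8) are standard conventions assumed here for illustration, and the 144-fibre trunk is purely an example size.

```python
# Trunk sizing from the lowest common multiple of per-application
# fibre counts. Counts are standard conventions, assumed here for
# illustration: duplex = 2 fibres, -R4 = 8, -R8 = 16.

from math import lcm

FIBRES_PER_APPLICATION = {"duplex": 2, "-R4": 8, "-R8": 16}

base = lcm(*FIBRES_PER_APPLICATION.values())
print(base)  # 16 -> two base-8 groups cover every application

# Links supported by a 144-fibre trunk (example size) per application:
for app, fibres in FIBRES_PER_APPLICATION.items():
    print(f"{app}: {144 // fibres} links")
```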

In addition to selecting a granular, scalable backbone, it is also important to plan sufficient fibre reserves for future upgrades, so that expansions can be implemented with the least possible change effort. With sufficient reserve in place, network adjustments require replacing only a few components: an upgrade from 10G to 40/100G or 400/800G, for example, can be achieved by swapping MPO/MTP-to-LC modules and LC duplex patch cords for MTP adapter panels and MTP patch cords, without any changes to the backbone (fibre plant).

Modular fibre housings also allow different technologies to be mixed and new connector interfaces (very-small-form-factor connectors) to be integrated in a few simple steps. Termination options are already available today as 8-, 12-, 24- and 36-fibre modules. The use of bend-insensitive fibre also helps make the cabling infrastructure durable, reliable and fail-safe.

Being prepared pays off

Data rates of 400G or 800G are still a long way off for most enterprise data centre operators, but bandwidth demand is growing, and fast. Sales of 400G and 800G transceivers are already on the rise, and it’s beneficial to be prepared, rather than having to upgrade later under time pressure. Data centre operators can make their facilities ready for 400G and 800G now, with just a few changes, to be optimally prepared for the future. Of course, this also applies to Fibre Channel applications.

Cindy Ryborz
Marketing Manager Data Centre EMEA at Corning Optical Communications
