
Scalable Data Center Networks

Flat Network Architectures Offer Superior Performance and Scale

A new data center network model being promoted by the incumbent enterprise switch providers consists of a central monolithic router, which brings high cost, limited performance, and significant complexity. This solution may also include switch fabric access nodes (often in the form of top-of-rack switches) with similar performance and complexity. The scalability and performance of the overall data center network are limited by the central router, and the complexity serves to lock customers in through the large up-front investment and training it requires.

The Fat Tree Architecture, originally implemented using proprietary fabrics, was first brought to Ethernet with Intel® Ethernet Switch Silicon. This combines Ethernet's lower cost with standards-compliant scalability and no compromise in performance. The figure below compares a fully connected fat tree architecture to a network built from traditional enterprise switches. The switch elements are simple and efficient, and they connect together in a uniform fashion to create a large, scalable fabric. Intel's white paper on the Fat Tree Architecture provides an in-depth analysis of this approach. With Intel Ethernet Switch silicon, the Fat Tree Architecture delivers greater scale, lower cost, and lower power while maintaining low end-to-end latency.

The Fat Tree Architecture, implemented with simple, low-latency fabric building blocks, offers high scale and non-blocking throughput. Alternative architectures based on complex and costly central routers are limited in scale, throughput, and latency.
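
To make the scaling claim concrete, the following sketch (a rough illustration in Python, not taken from Intel's white paper) computes how many hosts and switch elements a classic three-tier k-ary fat tree built entirely from identical k-port switches can support; the formulas are the standard folded-Clos counts, and the radix values chosen are assumptions.

```python
# Illustrative sketch: capacity of a three-tier k-ary fat tree built from
# identical k-port switch elements (standard folded-Clos counts).
def fat_tree_capacity(k: int) -> dict:
    """Return host and switch counts for a k-ary fat tree (k must be even)."""
    assert k % 2 == 0, "switch radix k must be even"
    edge = agg = k * (k // 2)          # k pods, each with k/2 edge and k/2 aggregation switches
    core = (k // 2) ** 2               # (k/2)^2 core switches
    hosts = k * (k // 2) * (k // 2)    # k pods x k/2 edge switches x k/2 hosts per edge switch
    return {"hosts": hosts, "edge": edge, "aggregation": agg, "core": core,
            "total_switches": edge + agg + core}

if __name__ == "__main__":
    for k in (16, 32, 64):             # assumed switch radixes
        print(k, fat_tree_capacity(k))
```

With 64-port elements, for example, the same simple building block scales to 65,536 non-blocking host ports using 5,120 identical switches, which is the uniform scaling the figure above depicts.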

The Case for Low Latency

Although some debate remains about the relative importance of low latency in the data center fabric, there is general agreement that lower latency leads to higher application performance. In the past, it was argued that fabric latency was dwarfed by latency elsewhere in the system (application, NIC, disk access, etc.). With each generation, however, the latency in the rest of the system continues to improve, so the fabric accounts for a growing share of the total. Additionally, as data center clusters continue to grow, more tiers of switching are required to reach new levels of density, and each tier adds latency. Fabric latency therefore matters: the lower the latency of each switch, the more layers of switching can be introduced without impacting the overall performance of the fabric. Low latency leads to greater scale.
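
A back-of-the-envelope calculation shows why; the per-switch latencies and tier counts below are hypothetical values chosen only to illustrate how per-hop delay compounds as tiers are added, not measurements of any product.

```python
# Rough sketch (hypothetical numbers): cumulative switching delay across a
# folded-Clos/fat tree fabric. A worst-case path through an n-tier fabric
# traverses 2n - 1 switches (up to the top tier and back down).
def fabric_switching_delay_ns(per_switch_ns: float, tiers: int) -> float:
    hops = 2 * tiers - 1
    return hops * per_switch_ns

if __name__ == "__main__":
    for per_switch_ns in (400, 1000, 3000):   # assumed per-switch latencies
        for tiers in (2, 3):
            total = fabric_switching_delay_ns(per_switch_ns, tiers)
            print(f"{per_switch_ns:>5} ns/switch, {tiers} tiers -> {total:>6.0f} ns of switching delay")
```

The lower the per-switch latency, the more tiers, and therefore the more ports, a fabric can add before its switching delay becomes significant relative to the rest of the system.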

The new-generation data center fabrics are leveraging commercial silicon technologies such as Intel Ethernet Switch Silicon. Devices based on Intel Ethernet Switch silicon are especially compelling in terms of latency, achieved through a cut-through architecture and a latency-optimized internal data path. This yields extremely low latency that is independent of packet size. The figure below, from a recent Lippis Report comparing ToR switches from several vendors, shows that systems using Intel's Ethernet switches have cut-through latency at least 2x better than systems built on competing silicon.
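
The packet-size independence of cut-through forwarding follows from simple arithmetic; the link rate, fixed pipeline delay, and header size in the sketch below are illustrative assumptions rather than measured figures for any product.

```python
# Minimal sketch of why cut-through latency is independent of packet size
# while store-and-forward latency is not. All constants are assumptions.
LINK_BPS = 10e9        # 10G Ethernet link rate
PIPELINE_NS = 400      # hypothetical fixed switch pipeline delay

def store_and_forward_ns(frame_bytes: int) -> float:
    # The whole frame must be received before forwarding, adding serialization delay.
    return PIPELINE_NS + frame_bytes * 8 / LINK_BPS * 1e9

def cut_through_ns(header_bytes: int = 64) -> float:
    # The forwarding decision is made as soon as the header has arrived.
    return PIPELINE_NS + header_bytes * 8 / LINK_BPS * 1e9

if __name__ == "__main__":
    for size in (64, 512, 1518, 9216):
        print(f"{size:>5} B  store-and-forward {store_and_forward_ns(size):7.1f} ns   "
              f"cut-through {cut_through_ns():6.1f} ns")
```

As the output shows, store-and-forward latency grows with frame size because the switch must buffer the entire frame, whereas cut-through latency stays flat once the header has been parsed.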

Throughput Matters

Just as low latency leads to higher performance and scale in a data center, so does high throughput, though for slightly different reasons. New-generation data centers have embraced the clustering of multi-socket, multi-core computing elements for improved application performance. With 32 or more threads running, each at GHz frequencies, it is now reasonable to expect a single server to be constrained by its I/O bandwidth, even with 10G Ethernet. A 10G Ethernet pipe at only 50 percent utilization used to be compelling, especially compared with the traditional approach of aggregating two or four 1G Ethernet ports in a link-aggregation group. Today, however, the additional bandwidth is needed; without it, compute systems are again I/O constrained and additional congestion builds in the fabric, limiting performance and scale relative to a non-blocking environment. High throughput leads to greater scale. Many enterprise switches are not fully provisioned and thus introduce congestion, which limits performance and scale. Intel Ethernet Switches, by contrast, are architected to support fully non-blocking throughput regardless of packet size and traffic pattern.
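
The effect of under-provisioned uplinks can be quantified with a simple oversubscription calculation; the port counts and speeds below are illustrative assumptions, not the configuration of any particular switch.

```python
# Hedged sketch (illustrative port counts): how an oversubscribed top-of-rack
# switch throttles per-server bandwidth when every server transmits at once,
# compared with a fully provisioned, non-blocking design.
def per_server_gbps(servers: int, downlink_gbps: float, uplink_gbps_total: float) -> float:
    """Bandwidth each server can push into the fabric under full load."""
    offered = servers * downlink_gbps
    oversubscription = offered / uplink_gbps_total
    return downlink_gbps / max(oversubscription, 1.0)

if __name__ == "__main__":
    # 48 servers at 10G each; compare 480G of uplink capacity (non-blocking)
    # with 160G (3:1 oversubscribed).
    for uplinks in (480.0, 160.0):
        print(f"uplink capacity {uplinks:5.0f}G -> "
              f"{per_server_gbps(48, 10.0, uplinks):.2f}G per server under full load")
```

In the oversubscribed case, each server is limited to roughly a third of its 10G link the moment the cluster is busy, which is exactly the congestion a non-blocking fabric avoids.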

Lossless Qualities Enable Convergence

Whether you're talking about Data Center Bridging (DCB), Fibre Channel over Ethernet (FCoE), iSCSI or High Performance Computing (HPC), the notion of combining the compute, storage, and network traffic onto a single unified fabric has tremendous appeal in the data center. A well-implemented unified fabric can lead to reduced cost, greater efficiency, and greater simplicity, without compromising performance or scale.

Several new IEEE initiatives enable Ethernet to simultaneously support various traffic types, each with unique characteristics. These include:

  • Priority Flow Control (PFC), which supports pause-based flow control on a per-priority basis, providing the ability to protect the flow of certain traffic types over others on the same link.
  • Enhanced Transmission Selection (ETS), which controls the allocation of bandwidth among traffic classes and provides bandwidth guarantees to certain traffic types (a simplified allocation sketch follows this list).
  • Data Center Bridging Exchange (DCBx) protocol, which allows neighboring switches to exchange information on their level of support for PFC, ETS, and Quantized Congestion Notification (QCN).
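
As a rough illustration of the bandwidth sharing that ETS provides, the sketch below models a simplified weighted allocation across traffic classes; the class names, weights, and offered loads are assumptions, and the actual IEEE 802.1Qaz scheduler behavior is more involved than this.

```python
# Simplified ETS-style allocation: each traffic class gets its guaranteed
# share of the link, and bandwidth left unused by one class is redistributed
# to still-hungry classes in proportion to their weights.
def ets_allocate(link_gbps: float, weights: dict, demand: dict) -> dict:
    grant = {tc: min(demand[tc], link_gbps * w / 100) for tc, w in weights.items()}
    spare = link_gbps - sum(grant.values())
    hungry = {tc: w for tc, w in weights.items() if demand[tc] - grant[tc] > 1e-9}
    while spare > 1e-9 and hungry:
        total_w = sum(hungry.values())
        for tc, w in list(hungry.items()):
            extra = min(spare * w / total_w, demand[tc] - grant[tc])
            grant[tc] += extra
        spare = link_gbps - sum(grant.values())
        hungry = {tc: w for tc, w in weights.items() if demand[tc] - grant[tc] > 1e-9}
    return grant

if __name__ == "__main__":
    weights = {"lan": 40, "storage": 40, "ipc": 20}      # percent guarantees (assumed)
    demand = {"lan": 6.0, "storage": 2.0, "ipc": 4.0}    # offered load in Gb/s (assumed)
    print(ets_allocate(10.0, weights, demand))           # storage keeps its guarantee; spare goes to lan and ipc
```

The point of the sketch is that a class with a guarantee (storage here) is protected even when other classes are oversubscribed, while bandwidth it does not use is not wasted.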

Intel Ethernet switch silicon supports these along with other emerging data center protocols to enable the most scalable, lowest-latency layer 3 data center network solutions in the industry.