Paving the way for Terabit Ethernet

Despite advances in Wi-Fi technology and the recent introduction of Wi-Fi 6, Ethernet is still the go-to for technology businesses that need to transfer large amounts of data quickly, especially in data centers. While the technology behind Ethernet is now over 40 years old, new protocols have been developed over the years that enable ever more gigabits of data to be sent over it.

To learn more about the latest technologies, protocols and advancements, and about the future of Gigabit Ethernet (and, maybe one day soon, Terabit Ethernet), Nerdshala Pro spoke with Tim Klein, CEO of storage connectivity company ATTO.

  • We’ve put together a list of the best small business routers available
  • These are the best cloud storage services on the market
  • Also check out our roundup of the best powerline adapters

Ethernet was first introduced in 1980. How has the technology evolved since then, and where does it fit into today’s data center?

Now more than four decades old, Ethernet technology has undergone some major improvements, but one of its greatest strengths is that it still looks much the same as when it was first introduced. Originally intended to let scientists share small packets of data at 10 megabits per second (Mbps), Ethernet networks now let giant data centers share massive chunks of unstructured data, and the roadmap could see Terabit Ethernet arrive in just a few years.

The exponential growth of data, driven by new formats such as digital images, created huge demand, and early implementations of shared storage over Ethernet could not meet the performance needs or handle congestion with deterministic latency. As a result, protocols such as Fibre Channel were developed specifically for storage. Over the years, several innovations such as smart offloads and RDMA have been introduced so that Ethernet can meet the requirements of unstructured data and overcome the gridlock that arises when large pools of data are transferred. The latest high-speed Ethernet standards such as 10/25/40/50/100GbE are now the backbone of modern data centers.

Applications today are demanding higher and higher performance. What are the challenges of configuring these faster protocols? Can software help here?

Tuning is of utmost importance nowadays due to the demand for high performance. Each system, whether it is a client or a server, must be tailored to the needs of each specific workflow. The sheer number of file-sharing protocols and workflow requirements can be overwhelming. In the past, you would have had to accept that half your bandwidth was taken up by overhead, with misfires and packet loss slowing you to a crawl.
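To put a number on how quickly packet loss erodes throughput, the classic Mathis approximation (throughput is roughly MSS/RTT multiplied by 1.22/sqrt(p), where p is the loss rate) is a useful back-of-the-envelope check. The sketch below is purely illustrative and is not from the interview; the MSS and RTT values are assumptions rather than measurements from any real network.

# Back-of-the-envelope TCP throughput ceiling under packet loss, using the
# well-known Mathis approximation: rate ~= (MSS / RTT) * (1.22 / sqrt(p)).
# The MSS and RTT below are illustrative assumptions, not measurements.
from math import sqrt

MSS_BITS = 1460 * 8   # payload bits per standard Ethernet segment
RTT_S = 0.0005        # 0.5 ms round trip inside a data center (assumed)
C = 1.22              # constant from the Mathis et al. model

for loss in (1e-6, 1e-4, 1e-2):
    ceiling_gbps = (MSS_BITS / RTT_S) * (C / sqrt(loss)) / 1e9
    print(f"loss rate {loss:g}: ~{ceiling_gbps:.2f} Gb/s per flow")

Even a 0.01 percent loss rate caps a single flow at a few gigabits per second, far below what a 100GbE link can carry, which is why the tuning and offload techniques described below matter so much.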

There are several methods available today to optimize throughput and tune Ethernet adapters for highly intensive workloads. Hardware drivers now come with built-in algorithms that improve efficiency, and TCP Offload Engines reduce the overhead that comes from the host network stack. Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) can also be implemented in both hardware and software to aid the transfer of large amounts of unstructured data. Adding buffers, such as a striding receive queue, speeds up packet delivery, increases fairness and improves performance. Newer technologies such as RDMA allow direct memory access, bypassing the OS network stack and virtually eliminating overhead.
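As a concrete example of the offload tuning described above, the minimal sketch below assumes a Linux host with the standard ethtool utility and root privileges; the interface name eth0 is a placeholder, and some drivers expose GRO rather than LRO.

# Minimal sketch: inspect and enable segmentation/receive offloads on a Linux
# NIC using the standard ethtool utility. Run as root; "eth0" is a placeholder.
import subprocess

IFACE = "eth0"  # assumed interface name

# Show the current offload settings (ethtool -k lists feature states).
subprocess.run(["ethtool", "-k", IFACE], check=True)

# Enable TCP Segmentation Offload and Large Receive Offload
# (ethtool -K sets features). Some drivers offer GRO instead of LRO.
subprocess.run(["ethtool", "-K", IFACE, "tso", "on", "lro", "on"], check=True)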

What is the reason behind the adoption of the 10/25/50/100GbE interfaces?

The demand for large, high-performance storage solutions and the enthusiasm for newer Ethernet technologies such as RDMA and NVMe-over-Fabrics are driving the adoption of high-speed Ethernet in modern data centers. 10 Gigabit Ethernet (10GbE) is now the dominant interconnect for server-class adapters, and 40GbE was quickly introduced to push the envelope by combining four lanes of 10GbE traffic. This eventually evolved into the 25/50/100GbE standard, which uses 25-gigabit lanes. Networks now use a mix of 10/25/40/50/100GbE speeds, with 100GbE links at the core and 50GbE and 25GbE at the edge.
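The lane arithmetic behind those speeds is straightforward; the short sketch below simply restates the combinations mentioned above.

# Each Ethernet generation is built from serial lanes at a given per-lane rate.
lane_configs = {
    "40GbE":  (4, 10),   # four 10Gb lanes
    "25GbE":  (1, 25),   # one 25Gb lane
    "50GbE":  (2, 25),   # two 25Gb lanes
    "100GbE": (4, 25),   # four 25Gb lanes
}
for name, (lanes, rate) in lane_configs.items():
    print(f"{name}: {lanes} x {rate}Gb/s lanes = {lanes * rate}Gb/s")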

The ability to mix and match speeds, design pathways that deliver power where it is needed, and balance the data center from core to edge is driving rapid adoption of the 25/50/100GbE standard. Newer technologies such as RDMA open up new opportunities for businesses to use NICs and network-attached storage (NAS) with deterministic latency to handle workloads that in the past required more expensive storage area networks (SANs) built on Fibre Channel adapters, which need more specialized support. More recently, we have been looking at NVMe-over-Fabrics, which uses RDMA transport to share bleeding-edge NVMe technology over a storage fabric. 100GbE NICs with RDMA have opened the door to NVMe storage fabrics that achieve the fastest throughput on the market today. These previously unimaginable levels of speed and reliability allow businesses to do more with their data than ever before.

What is RDMA and how does it affect Ethernet technology?

Remote Direct Memory Access (RDMA) allows smart NICs to access memory directly on another system, without going through the traditional TCP path and without any CPU involvement. Traditional transfers relied on the OS network stack (TCP/IP), which caused massive overhead, costing performance and limiting what was possible with Ethernet and storage. RDMA now enables lossless transfers that virtually eliminate that overhead, with a huge gain in efficiency from the CPU cycles saved. Performance goes up and latency comes down, allowing organizations to do more with less. RDMA is in fact an extension of DMA (Direct Memory Access) and bypasses the CPU to allow “zero-copy” operations. These technologies have been fixtures in Fibre Channel storage for many years. The deterministic latency that made Fibre Channel the premier choice for enterprise and intensive workloads is now readily available over Ethernet, making it easier for organizations of all sizes to enjoy high-end shared storage.
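For readers who want to see what the RDMA building blocks look like in practice, the sketch below uses the pyverbs bindings that ship with rdma-core. It assumes an RDMA-capable NIC (RoCE or InfiniBand) is present; the device name mlx5_0 is only an example, and module paths can vary slightly between rdma-core versions.

# Minimal sketch of RDMA resource setup with pyverbs (rdma-core's Python
# bindings): open a device, create a protection domain, and register a buffer
# the NIC can read and write directly, without copying through the OS stack.
import pyverbs.enums as e
from pyverbs.device import Context
from pyverbs.pd import PD
from pyverbs.mr import MR

with Context(name="mlx5_0") as ctx:    # "mlx5_0" is an assumed device name
    with PD(ctx) as pd:                # protection domain for RDMA resources
        mr = MR(pd, 1024 * 1024,
                e.IBV_ACCESS_LOCAL_WRITE |
                e.IBV_ACCESS_REMOTE_READ |
                e.IBV_ACCESS_REMOTE_WRITE)
        # lkey/rkey are the handles a peer needs to target this memory region.
        print(f"registered 1 MiB region: lkey={mr.lkey}, rkey={mr.rkey}")

Setting up queue pairs and posting actual RDMA reads and writes takes more scaffolding than fits here, but registering memory is the step that makes the zero-copy path possible.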

How does NVMe fit in?

Where NVMe fits in with Ethernet is through the NVMe-over-Fabrics protocol. It is the fastest way to transfer files over Ethernet today. NVMe itself was designed to take advantage of modern SSD and flash storage, replacing the older SATA/SAS protocols. NVMe raises the bar by exploiting non-volatile memory’s ability to work in parallel. Since NVMe is a direct-attached storage technology, the next leap is shared storage, and that is where Ethernet (or Fibre Channel) comes in: moving NVMe onto a shared storage fabric.
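As a rough illustration of what that looks like on the host side, the sketch below drives the standard Linux nvme-cli tool; the target address and NQN are placeholder assumptions, and a real fabric would supply its own discovery details.

# Minimal sketch: discover and attach an NVMe-over-Fabrics namespace over RDMA
# using the standard Linux nvme-cli utility. Address, port and NQN are
# placeholder assumptions, not values from any real deployment.
import subprocess

TARGET_ADDR = "192.168.100.8"                        # assumed target IP
TARGET_NQN = "nqn.2014-08.org.example:storage-pool"  # assumed subsystem NQN

# Ask the target which subsystems it exposes (discovery service, port 4420).
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Connect; the remote namespace then appears as a local /dev/nvmeXnY block
# device even though the flash sits across the Ethernet fabric.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)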

What are the Ethernet requirements for storage technologies like RAM Disk and Smart Storage?

SmartNIC is a relatively new term that refers to the ability of network controllers to handle operations that in the past were a burden on the CPU. Offloading work from the system’s CPU improves overall efficiency. Taking that concept further, NIC manufacturers are adding Field Programmable Gate Array (FPGA) technology that enables application-specific features, including offloads and data acceleration, to be developed and coded into the FPGA. Sitting at the hardware layer makes these NICs incredibly fast, with great potential for further innovations to be added at that layer in the future.

RAM Disk Smart Storage advances this area by integrating data acceleration hardware into storage devices that use volatile RAM (which is faster than the non-volatile memory used in NVMe devices today). The result is extremely fast storage with the ability to streamline heavy workloads.

By combining lightning-fast RAM storage with a NIC controller and an FPGA that integrates smart offloads and data acceleration, there is immense potential for extremely high-speed storage. RAM disk and smart storage would not exist without the latest innovations in Ethernet RDMA and NVMe-over-Fabrics.

What does the future hold when it comes to Ethernet technology?

200 Gigabit Ethernet is already starting to trickle down from HPC solutions into data centers. The standard doubles each lane to 50Gb, and the roadmap beyond it could reach 1.6 Terabit Ethernet in just a few years. PCI Express 4.0 and 5.0 will play a key role in enabling these higher speeds, and companies will continue to look for ways to bring power to the edge, accelerate transfer speeds, and hand CPU and GPU operations to network controllers and FPGAs.

  • We’ve also featured the best network switches
