Paving the way for Terabit Ethernet


Despite advancements in Wi-Fi technology and the recent introduction of Wi-Fi 6, Ethernet is still the go-to technology businesses rely on when they need to move large amounts of data quickly, particularly in data centres. While the technology behind Ethernet is now more than 40 years old, new protocols developed over the years allow ever more data to be pushed across it.

To learn more about the latest technologies, protocols and advancements, and about the future of Gigabit Ethernet (and perhaps, one day soon, Terabit Ethernet), TechRadar Pro spoke with Tim Klein, CEO of storage connectivity company ATTO.

Ethernet was first introduced in 1980. How has the technology evolved since then, and where does it fit in today’s data centre?

Now over four decades old, Ethernet has seen some major enhancements, yet a great deal of it looks exactly the same as it did when it was first introduced. Originally intended for scientists to share small packets of data at 10 megabits per second (Mbps), Ethernet now connects giant data centres sharing massive pools of unstructured data, with a roadmap that will reach Terabit Ethernet in just a few years.

The exponential growth of data, driven by new formats such as digital images, created huge demand, and those early implementations of shared storage over Ethernet could not meet the performance requirements or handle congestion with deterministic latency. As a result, protocols like Fibre Channel were developed specifically for storage. Over the years, innovations such as smart offloads and RDMA have been introduced so that Ethernet can meet the requirements of unstructured data and overcome the gridlock that can arise when large pools of data are transferred. The latest high-speed Ethernet standards, 10/25/40/50/100GbE, are now the backbone of the modern data centre.


Applications today are demanding higher and higher performance. What are the challenges of configuring faster protocols? Can software help here?

Tuning is extremely important nowadays because of the demand for higher performance. Each system, whether it is a client or a server, should be fine-tuned to the requirements of each specific workflow. The sheer number of file-sharing protocols and workflow requirements can be overwhelming. In the past, you might simply have accepted that half of your bandwidth was lost to overhead, with misfires and packet loss slowing you to a crawl.

There are a number of methods available today to optimise throughput and tune Ethernet adapters for highly intensive workloads. Hardware drivers now come with built-in algorithms that improve efficiency, and TCP offload engines reduce the overhead generated by the network stack. Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) can also be implemented in both hardware and software to aid in the transfer of large volumes of unstructured data. Buffers such as a striding receive queue pace packet delivery, increasing fairness and improving performance. Newer technologies such as RDMA allow direct memory access that bypasses the OS network stack and virtually eliminates overhead.
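As a concrete illustration of the offloads mentioned above, the short C sketch below queries whether TSO and LRO are currently enabled on a Linux NIC through the legacy SIOCETHTOOL ioctl (the same interface the ethtool command-line utility uses). It is only a minimal example under stated assumptions: a Linux host, and an interface name passed on the command line, with "eth0" as a placeholder default.

```c
/* Minimal sketch: query TSO and LRO state on a Linux NIC via the legacy
 * SIOCETHTOOL ioctl. The interface name comes from argv[1]; "eth0" is only
 * a placeholder default. Build with: cc -o offload_check offload_check.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Issue one ethtool "get value" command (e.g. ETHTOOL_GTSO) for ifname. */
static int ethtool_get(int fd, const char *ifname, __u32 cmd, __u32 *value)
{
    struct ethtool_value eval = { .cmd = cmd };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&eval;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        return -1;
    *value = eval.data;
    return 0;
}

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0";  /* placeholder */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket carries SIOCETHTOOL */
    __u32 tso = 0, flags = 0;

    if (fd < 0) { perror("socket"); return 1; }

    if (ethtool_get(fd, ifname, ETHTOOL_GTSO, &tso) == 0)
        printf("tcp-segmentation-offload: %s\n", tso ? "on" : "off");
    else
        perror("ETHTOOL_GTSO");

    /* LRO is reported as a bit in the device flags word. */
    if (ethtool_get(fd, ifname, ETHTOOL_GFLAGS, &flags) == 0)
        printf("large-receive-offload:    %s\n",
               (flags & ETH_FLAG_LRO) ? "on" : "off");
    else
        perror("ETHTOOL_GFLAGS");

    close(fd);
    return 0;
}
```

The corresponding "set" commands in the same ioctl family are what toggling these features with ethtool on the command line relies on under the hood.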

What is driving the adoption of 10/25/50/100GbE interfaces?

The demand for larger, higher-performing storage solutions and enthusiasm for new Ethernet technologies such as RDMA and NVMe-over-Fabrics are driving the adoption of high-speed Ethernet in the modern data centre. 10 Gigabit Ethernet (10GbE) is now the dominant interconnect for server-class adapters, and 40GbE was quickly introduced to push the envelope by combining four lanes of 10GbE traffic. This eventually evolved into the 25/50/100GbE standard, which uses 25 Gigabit lanes. Networks now use a mixture of speeds (10/25/40/50/100GbE), with 100GbE links at the core and 25GbE and 50GbE towards the edge.
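The lane arithmetic behind these speeds is straightforward, and the short C sketch below simply works it through: 40GbE aggregates four 10Gb/s lanes, while the 25/50/100GbE family builds one, two or four 25Gb/s lanes into a single link.

```c
/* Tiny worked example of the lane arithmetic described above: Ethernet
 * speeds beyond 10GbE are built by aggregating serial lanes, first 10Gb/s
 * lanes (4 x 10 = 40GbE) and then 25Gb/s lanes (1/2/4 x 25 = 25/50/100GbE).
 */
#include <stdio.h>

int main(void)
{
    struct { const char *name; int lanes; int lane_gbps; } links[] = {
        { "10GbE",  1, 10 },
        { "40GbE",  4, 10 },   /* early aggregation of 10Gb/s lanes        */
        { "25GbE",  1, 25 },
        { "50GbE",  2, 25 },   /* the 25/50/100GbE family uses 25Gb/s lanes */
        { "100GbE", 4, 25 },
    };

    for (size_t i = 0; i < sizeof(links) / sizeof(links[0]); i++)
        printf("%-6s = %d lane(s) x %2d Gb/s = %3d Gb/s\n",
               links[i].name, links[i].lanes, links[i].lane_gbps,
               links[i].lanes * links[i].lane_gbps);
    return 0;
}
```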

The ability to mix and match speeds, designing pathways that give each part of the network as much bandwidth as it needs and balancing loads across the data centre from the core to the edge, is driving rapid adoption of the 25/50/100GbE standard. Newer technologies such as RDMA open up new opportunities for businesses to use NICs and Network-Attached Storage (NAS) with deterministic latency to handle workloads that in the past would have required more expensive Storage Area Networks (SANs) built on Fibre Channel adapters needing more specialised support. More recently, we are seeing NVMe-over-Fabrics, which uses RDMA transport to share bleeding-edge NVMe technology over a storage fabric. 100GbE NICs with RDMA have opened the door for NVMe storage fabrics that achieve the fastest throughput on the market today. These previously unthinkable levels of speed and reliability allow businesses to do more with their data than ever before.

What is RDMA and what impact does it have on Ethernet technology?

Remote Direct Memory Access (RDMA) allows smart NICs to access memory directly on another system without going through the traditional TCP path and without any CPU intervention. Traditional transfers relied on the OS network stack (TCP/IP) to communicate, and this was the cause of massive overhead, resulting in lost performance and limiting what was possible with Ethernet and storage. RDMA now enables lossless transfers that virtually eliminate that overhead, with a massive increase in efficiency from the CPU cycles saved. Performance goes up and latency comes down, allowing organisations to do more with less. RDMA is in fact an extension of DMA (Direct Memory Access) and bypasses the CPU to allow “zero-copy” operations. These techniques have been fixtures of Fibre Channel storage for many years, and the deterministic latency that made Fibre Channel the premier choice for enterprise and intense workloads is now readily available over Ethernet, making it easier for organisations of all sizes to enjoy high-end shared storage.
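To make the zero-copy idea more concrete, here is a minimal libibverbs sketch (assuming a Linux host with rdma-core installed and an RDMA-capable NIC) of the core step RDMA relies on: registering an ordinary user buffer with the adapter so the hardware can read and write it directly, without the data passing through the kernel TCP/IP stack. It deliberately stops short of a full transfer; queue-pair setup and key exchange with the remote peer are omitted.

```c
/* Minimal sketch of the RDMA building block described above: pinning a user
 * buffer with libibverbs so the NIC can DMA into it directly ("zero-copy").
 * Assumes rdma-core and an RDMA-capable adapter. Build with:
 *   cc rdma_reg.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;  /* protection domain */
    if (!pd) { fprintf(stderr, "failed to open device / alloc PD\n"); return 1; }

    size_t len = 4 * 1024 * 1024;
    void *buf = malloc(len);                             /* ordinary user memory */

    /* Register the buffer: the adapter pins it and returns keys that a remote
     * peer can use for zero-copy RDMA reads/writes into this memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    /* A real application would now create queue pairs, exchange the rkey and
     * buffer address with the peer, and post RDMA read/write work requests;
     * the data then moves with no intermediate copies and no CPU involvement. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```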

How does NVMe fit in?

Where NVMe fits in with Ethernet is via the NVMe-over-Fabrics protocol, which is simply the fastest way to transfer files over Ethernet today. NVMe itself was designed to take full advantage of modern SSD and flash storage, replacing the older SATA/SAS protocols, and it sets the bar much higher by exploiting non-volatile memory’s ability to operate in parallel. Since NVMe is a direct-attached storage technology, the next leap, to shared storage, is where Ethernet or Fibre Channel comes in: taking NVMe to a shared storage fabric.


What are the Ethernet requirements of storage technologies such as RAM disks and smart storage?

Smart NIC is a relatively new term for network controllers that can handle operations which in the past were the burden of the CPU. Offloading the system’s CPU improves overall efficiency. Taking that concept even further, NIC manufacturers are introducing field-programmable gate array (FPGA) technology, which allows application-specific features, including offloads and data acceleration, to be developed and coded directly onto the FPGA. Sitting at the hardware layer makes these NICs incredibly fast, with huge potential for further innovations to be added at that layer in the future.

RAM disk smart storage advances this area further by integrating data acceleration hardware into storage devices that use volatile RAM, which is faster than the non-volatile memory used in NVMe devices today. The result is extremely fast storage with the ability to streamline intense workloads.

The combination of lightning-fast RAM storage, a NIC controller and an FPGA, integrated with smart offloads and data acceleration, has enormous potential for extremely high-speed storage. RAM disks and smart storage would not exist without the latest innovations in Ethernet, RDMA and NVMe-over-Fabrics.

What does the future hold when it comes to Ethernet technology?

200 Gigabit Ethernet is already starting to bleed over from HPC solutions into data centres. The standard doubles the lane rate to 50Gb/s, and there is a hefty roadmap that will see 1.5 Terabit Ethernet in just a few years. PCI Express 4.0 and 5.0 will play an important role in enabling these higher speeds, and companies will continue to look for ways to bring power to the edge, accelerate transfer speeds, and offload CPU and GPU operations to network controllers and FPGAs.
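A rough back-of-the-envelope calculation shows why the PCIe generation matters here: the host bus has to carry at least as much traffic as the network port. The C sketch below uses the standard PCIe per-lane rates (8, 16 and 32 GT/s with 128b/130b encoding) and assumes a x16 slot; protocol overheads beyond line encoding are ignored, so the figures are upper bounds.

```c
/* Back-of-the-envelope check: can a x16 PCIe slot of each generation feed a
 * 200GbE port? Per-lane rates are the standard 8/16/32 GT/s figures with
 * 128b/130b encoding; other protocol overhead is ignored (upper bounds).
 */
#include <stdio.h>

int main(void)
{
    const double eth_gbps = 200.0;               /* 200 Gigabit Ethernet port */
    struct { const char *gen; double gtps; } pcie[] = {
        { "PCIe 3.0",  8.0 },
        { "PCIe 4.0", 16.0 },
        { "PCIe 5.0", 32.0 },
    };

    for (size_t i = 0; i < sizeof(pcie) / sizeof(pcie[0]); i++) {
        /* usable bits per second across 16 lanes after 128b/130b encoding */
        double gbps_x16 = pcie[i].gtps * (128.0 / 130.0) * 16.0;
        printf("%s x16: %6.1f Gb/s usable -> %s a 200GbE port\n",
               pcie[i].gen, gbps_x16,
               gbps_x16 >= eth_gbps ? "can feed" : "cannot saturate");
    }
    return 0;
}
```

On those numbers a PCIe 3.0 x16 slot tops out around 126 Gb/s, which is why 200GbE adapters need PCIe 4.0 or later to run at full speed.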


