Wikibon has been looking at the proposition that Gigabit Ethernet is destined to become the dominant, or even the only, interconnect fabric of the future. In a posting on systems management and storage protocol performance, Wikibon examined the performance of different interconnects. Wikibon concluded that Gigabit Ethernet excels in connectivity and cost for networks close to the end-user, but that other protocols such as SAS and Fibre Channel offer better performance and lower latency for networks closer to the processor. The cutover point fell between high-performance and low-performance storage.
Wikibon believes it is always instructive to look at the edges of computing endeavor and observe the trends. The Top 500 Supercomputer List has been published for many years and provides a wealth of detail about these very expensive, high-performance systems. Figures 1 and 2 show the interconnect families broken down into three types:
- InfiniBand;
- Gigabit Ethernet;
- Other (this includes Custom, Proprietary, and NUMALink interconnects, mainly for MPP supercomputers; Myrinet for clustered supercomputers; and a few others).
For the 2011 Top 500 supercomputers with an MPP architecture, 38% used InfiniBand and only 1% used Gigabit Ethernet. For clustered supercomputers, 98% used either InfiniBand or Gigabit Ethernet.
The Top 100 supercomputer analysis in Figure 1 shows that almost all of the highest performing systems used either an InfiniBand solution from Mellanox or QLogic, or a Custom/Proprietary solution. Only 1% used Gigabit Ethernet.
From both charts it is clear that there is no overall trend towards Gigabit Ethernet. They show that InfiniBand and Ethernet each have their own niches in supercomputer design. For the fastest systems requiring the lowest latency and highest bandwidth, InfiniBand and Custom/Proprietary interconnects dominate. Indeed, the only trend visible in Figures 1 and 2 is a slight shift towards Custom/Proprietary interconnects as new ideas are tried out, particularly in MPP systems. Innovation is still alive and well.
In high-performance systems and arrays, InfiniBand has become more prevalent. EMC’s Isilon, HP’s Ibrix, and IBM’s SONAS scale-out file systems all use InfiniBand to help with metadata. Oracle’s Exadata and Exalogic systems use InfiniBand as an interconnect and even transport Ethernet over InfiniBand. Startup Pure Storage also uses InfiniBand in its flash-only array. SGI, NetApp (E5400), Data Direct Networks, and others offer InfiniBand-attached storage.
Bottom Line: Horses for courses – each interconnect has its strengths and weaknesses. Fibre Channel is great for distance and high performance, SAS for low cost over shorter distances and excellent performance, InfiniBand for very low latency and very high bandwidth, and Gigabit Ethernet brings ubiquity, connectivity, and reasonable performance.
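As a rough illustration of this "horses for courses" logic, the hypothetical Python sketch below encodes the qualitative trade-offs above as a simple selection rule. The requirement flags, the ordering of the checks, and the function name are illustrative assumptions only, not measured data or a vendor recommendation.

```python
# Minimal sketch, assuming a workload can be described by a few coarse
# requirement flags. The flags and check ordering are assumptions made
# for illustration; they mirror the strengths listed in the Bottom Line.

def pick_interconnect(needs_lowest_latency: bool,
                      needs_long_distance: bool,
                      short_distance_block_storage: bool) -> str:
    """Return the interconnect whose strengths best match the stated needs."""
    if needs_lowest_latency:
        return "InfiniBand"            # very low latency, very high bandwidth
    if needs_long_distance:
        return "Fibre Channel"         # distance plus high performance
    if short_distance_block_storage:
        return "SAS"                   # low cost, excellent performance over short distances
    # Default: ubiquity, connectivity, and reasonable performance
    return "Gigabit Ethernet / iSCSI"

# Example: a latency-bound scale-out cluster
print(pick_interconnect(True, False, False))   # -> "InfiniBand"
# Example: general-purpose storage close to the end-user
print(pick_interconnect(False, False, False))  # -> "Gigabit Ethernet / iSCSI"
```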
In general, buyers should look for excellence in the unique characteristics of the interconnect technology they choose. Vendors will be much better off focusing on achieving the lowest latency and highest throughput for InfiniBand, the highest availability and throughput for Fibre Channel, and the lowest cost and easiest management for Gigabit Ethernet and iSCSI. This is a better strategy than trying to cover two or more interconnect technologies with a hybrid approach.
Reasonable people can discuss and differ on the optimum interconnect for a given environment. However, professionals and vendors who tout the general superiority of any particular interconnect reveal more about themselves than about the technology.
Action Item: If part of an organization is saying that Ethernet should be the only interconnect, make sure it does not have responsibility for performance I/O and performance computing. If a vendor is arguing that Ethernet is the only way to go for performance computing or performance I/O, talk to another vendor. Pick the appropriate interconnect technology for the job, and then pick the best vendor and department to deliver and manage the computing reliant on that technology.