Widely seen as the next generation in Layer 2 network infrastructure, Fibre Channel over Ethernet (FCoE) is a relatively new industry effort to combine the lossless features of Fibre Channel (FC) with the ubiquity of Ethernet. Combined with the new 10 Gbit Ethernet, it also promises something close to Fibre Channel speeds and the opportunity to converge FC and Ethernet networks, allowing organizations to simplify their networking infrastructure. Because it carries both FC and Ethernet traffic over a single physical infrastructure, it allows storage and network traffic to converge onto a single set of cables, switches, and adapters, eliminating the need to maintain two physical networks and reducing energy consumption, heat generation, and overall cost and complexity. Storage management on FCoE has the same look and feel as management on traditional FC interfaces. On the Ethernet side, FCoE introduces 10 Gbit lossless Ethernet, the next generation of that technology, providing higher data transfer rates with stronger protection against packet loss than previous versions. Overall, this promises to be a win all around for the network infrastructure.
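To make the encapsulation concrete, the following Python sketch wraps a raw FC frame in an Ethernet frame the way FC-BB-5 describes it. The Ethertype (0x8906) is the real assigned value; the SOF/EOF code points and the helper function itself are illustrative assumptions, not a production implementation.

```python
import struct

FCOE_ETHERTYPE = 0x8906   # Ethertype assigned to FCoE in FC-BB-5

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw FC frame (header + payload + CRC) for transmission
    on Ethernet, per the FC-BB-5 frame format (simplified sketch)."""
    eth_hdr = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    # FCoE header: a 4-bit version field plus reserved bits (13 bytes
    # total), followed by a 1-byte SOF code; 0x2E (SOFi3) is illustrative.
    fcoe_hdr = bytes(13) + b"\x2e"
    # Trailer: 1-byte EOF code (0x42, EOFt, illustrative) + 3 reserved
    # bytes; the Ethernet FCS is appended by the NIC, not built here.
    trailer = b"\x42" + bytes(3)
    return eth_hdr + fcoe_hdr + fc_frame + trailer
```

The key point the sketch illustrates is that the FC frame travels intact inside the Ethernet payload, so the FC stack above it is unchanged.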
While FCoE standards are still a work in progress, they have matured to the point that vendors are beginning to build and market products with the reasonable expectation that they will comply with the final versions of the standards. This means that users can begin experimenting with FCoE and plan the eventual migration of their production environments to this next-generation, converged infrastructure platform. Specifically:
- Fourteen companies participated in a test drive in June 2009 with quite favorable results.
- At SNW Fall 2009 – QLogic, Emulex, Intel, Brocade, PMC-Sierra, Cisco, NetApp, EMC and LSI Engenio successfully demonstrated FCoE and FC switches, Converged Network Adapters (CNAs), FC HBAs and storage targets.
- Another plug fest is scheduled for this month.
- An Open-FCoE stack has been accepted into the Linux kernel v2.6.29. OpenSolaris has both target and initiator stacks, Chelsio has announced software stacks, and Microsoft is developing its own.
- Many disk subsystem vendors now offer FCoE support, and most of the rest expect to within 12 months.
- Dell and IBM, among others, are offering servers with FCoE support, and IBM just added support on its blade offering.
- Protocol analyzers are becoming available from the likes of JDSU and Wireshark.
- Multi-hop is available from Cisco. The Nexus 5000 is a DCB-capable switch, and Cisco has supported FC-BB-5, including FIP (the FCoE Initialization Protocol), since September 2009, though Cisco does not recommend multi-hop deployments – likely for cost reasons.
- Cisco has released its “Palo” Virtual Interface Card/Controller super-CNA.
Not everything is perfect, however. The bad news includes:
- No Multi-hop with Brocade – A highly technical discussion explores why multi-hop FCoE switches were not initially available; the comments are especially enlightening. The take-away is that Brocade does not yet support the FIP portion of FC-BB-5 (FC Backbone 5). Very recent Brocade documents specifically refer to a “pre-FIP version of the protocol”.
- No Multipath – Since Ethernet is a Layer 2 protocol, it is not routable, but the IETF is working on the problem with a specification called TRILL (Transparent Interconnection of Lots of Links). TRILL will provide shortest-path frame routing in multi-hop CEE networks with arbitrary topologies, and it closely resembles Cisco’s L2MP (Layer 2 Multipathing) protocol. Multipathing will arrive only after TRILL is fully approved and implemented in products; until then, the result is less than optimal resource utilization and no load balancing.
- Shared Media Issues – While native FC protocols and FC switches are designed to deal only with point-to-point connectivity, FCoE pass-through switches introduce shared-media (point-to-multipoint) semantics. In other words, an FCoE pass-through switch is invisible to the Fibre Channel stack, but it acts as a concentrator of flows from multiple servers into the same port of a dual-stack switch, and thus it creates a shared medium hanging off the dual-stack switch port. A typical Fibre Channel stack is not prepared to address shared-media links, since they do not exist in native Fibre Channel; these new semantics must therefore be handled by FCoE without affecting the Fibre Channel stack.
- FC-BB-6 – This project is starting up within the T11 committee, and while its charter and timeline are not finalized, it is expected to explore larger FCoE configurations: creation of a single routable CEE cloud; point-to-point CEE configurations with no switches; improved support for high-BER Ethernet transmission media such as 10GBASE-T; and any other items deemed necessary during development. In other words, one Layer 2 for everything in the data center.
- New infrastructure required from limited suppliers – Not really a new issue, as the market for native FC switches consolidated down to Brocade, Cisco and QLogic. And despite demonstrated interoperability at a base level, users will, as usual, consciously lock themselves into one switch vendor, chosen for value-added features or price. The same applies to adapter cards.
- It’s not really 10-Gbit – While storage traffic starts out on 10-gigabit CEE, it gets dropped onto an 8 Gbit (or 4 Gbit) native FC SAN. This is not really a big deal, as it roughly balances the encapsulation overhead of FCoE.
- FC and FCoE are diverging – At the October 2009 Storage Networking World conference, the Fibre Channel Industry Association (FCIA) showed a roadmap for FC that goes 4->8->16->32 gigabit, whereas the roadmap for FCoE is 10->40 gigabit. Fortunately, all but 40Gb-FCoE use the same typical data center optical and copper assemblies, i.e., OM2, OM3, OM4 and TwinAx with the same SFP+ module connection.
FCoE offers several advantages, the most compelling of which are the cost savings, reduced complexity, and reduction of heat, energy use and physical cabling that result from convergence of data and storage networks. This fits the broader convergence trend driven by server virtualization; FCoE both supports virtualization and adds to the savings it delivers. Users should take a hard look at FCoE, try it out in a test environment, and start thinking about when and how they will eventually migrate their production environments.