On March 10th, 2009, Wikibon attended EMC's financial analysts meeting. At the session, EMC's Chairman and CEO, Joe Tucci, cited Fibre Channel over Ethernet (FCoE) as one of EMC's high-growth initiatives. Tucci mentioned FCoE alongside server virtualization, cloud storage, solid state disk and data center efficiency (green), all pretty hot areas. This grabbed our attention.
Shortly thereafter, Wikibon was contacted by QLogic for a briefing about a single-chip FCoE card that is available to OEMs today and will be generally available in Q2 of 2009. Emulex has also provided us with a briefing, and Brocade is offering CNAs. This prompted us to ask the question: "Should CIOs care about FCoE?"
Point #1 Ethernet is the Future
To answer this question, we started by looking at the future of networking and came to a simple conclusion: no matter what happens on the technology front, Ethernet is the future of networking because it is ubiquitous, with virtually no replacement on the horizon. Recent IDC figures suggest more than 350 million Ethernet cards shipped worldwide in 2008. Ethernet has consistently evolved its performance, and 10 Gigabit Ethernet (10GbE) is the next wave, poised for rapid adoption. Importantly, 10GbE brings numerous protocol improvements, the most significant being a lossless capability, meaning frames are not dropped at high utilization rates.
Point #2 Fibre Channel is the Dominant Enterprise Storage Protocol
For enterprise SAN storage, Fibre Channel (FC) is a low-overhead protocol: it's efficient (i.e., it's point-to-point), very reliable, and boasts an enormous ecosystem of vendors providing cards, drivers, services and software. Protocols such as iSCSI have competed very effectively in small enterprises but have not displaced FC in large SAN installations. This point is notable to CIOs because any technology that requires a conversion from FC will struggle to gain adoption.
FCoE, an emerging protocol, exploits the lossless capability of 10Gb Ethernet, allowing organizations to construct high-speed, high-bandwidth storage interconnects that leverage the ubiquity of Ethernet while efficiently utilizing their massive investments in Fibre Channel.
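For readers who want a more concrete picture of what "Fibre Channel over Ethernet" means, the sketch below wraps an opaque FC frame in an Ethernet envelope using FCoE's registered EtherType (0x8906). The function name, example MAC addresses and simplified header/trailer sizes are our own illustration, not a wire-accurate implementation of the standard.

```python
# Conceptual sketch: how FCoE carries a native Fibre Channel frame inside an
# Ethernet frame. Only the 0x8906 EtherType is taken from the standard; the
# other field sizes are simplified for illustration.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE


def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an (opaque) FC frame in a minimal Ethernet/FCoE envelope."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(14)   # version + reserved bits + start-of-frame (simplified)
    fcoe_trailer = bytes(4)   # end-of-frame + reserved (simplified)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer


# The FC frame itself is untouched, which is why existing FC drivers, zoning
# and management practices continue to apply underneath the Ethernet transport.
frame = encapsulate_fc_frame(
    dst_mac=b"\x0e\xfc\x00\x00\x00\x01",  # illustrative MAC addresses
    src_mac=b"\x0e\xfc\x00\x00\x00\x02",
    fc_frame=b"\x00" * 36,                # placeholder FC frame payload
)
print(len(frame), "bytes on the wire")
```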
Point #3 Networks are Converging
Why does this matter to CIOs?
Today's networks consist of separate infrastructures to support different types of traffic. For example, Fibre Channel networks are best suited for SAN traffic because they've proven to be reliable and fast, while Ethernet networks are used to deliver low-cost LAN connectivity. This stove-piped approach means organizations must maintain different networks, different sets of switches and multiple adapters in each server to handle each type of network stream. The improved protocols (e.g. lossless capability) within 10Gb Ethernet allow a converged network to address, for example, SAN and LAN connectivity through a single pipe. This means lower-cost connectivity, centralized network management and more flexible provisioning of data center resources.
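As a rough illustration of what "one pipe, multiple networks" means, the sketch below models a single 10GbE link carrying LAN and SAN traffic in separate priority classes, with only the storage class treated as lossless. The specific priority values and bandwidth shares are assumptions chosen for the example, not vendor defaults.

```python
# Illustrative sketch of a converged 10GbE link carrying LAN and SAN traffic
# in separate priority classes. Values below are example assumptions.
from dataclasses import dataclass


@dataclass
class TrafficClass:
    name: str
    priority: int         # 802.1p priority value (0-7)
    lossless: bool        # True => frames in this class are never dropped
    bandwidth_share: int  # guaranteed percent of the pipe under congestion


converged_link = [
    TrafficClass("LAN / IP traffic", priority=0, lossless=False, bandwidth_share=50),
    TrafficClass("FCoE / SAN traffic", priority=3, lossless=True, bandwidth_share=50),
]

for tc in converged_link:
    mode = "lossless" if tc.lossless else "best effort"
    print(f"{tc.name}: priority {tc.priority}, {mode}, {tc.bandwidth_share}% minimum")
```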
Point #4 Future Servers and Blade Systems will use a Single Ethernet Card for all Communications
Currently a server has multiple network interface and SAN cards to support the communications requirements of the data center. A rack of servers contains a massive nest of cables to support connectivity. This creates space, power and density problems, constricts air flow, and consumes considerable staff resources to install and maintain. The ideal solution is a single card and a single cable supporting all networking needs. Server and blade vendors are eager to design such a capability into new systems because it will be less expensive and more space efficient, with lower power consumption and fewer cables. It’s inevitable.
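To make that end state concrete: on a Linux host with a converged adapter and its driver installed, we would expect the one physical card to surface both as an ordinary network interface and as a Fibre Channel host. The short sketch below simply lists both views from standard sysfs locations; the actual output depends entirely on the hardware and drivers present.

```python
# Rough sketch: enumerate the two faces a converged adapter presents to a
# Linux host -- a network interface and an FC/FCoE host. Standard sysfs paths;
# results depend on the installed hardware and drivers.
import os


def list_entries(path: str) -> list:
    """Return the entries under a sysfs class directory, if it exists."""
    return sorted(os.listdir(path)) if os.path.isdir(path) else []


print("Network interfaces :", list_entries("/sys/class/net"))
print("FC/FCoE hosts      :", list_entries("/sys/class/fc_host"))
```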
Point #5 Single Chip Converged Network Adapters are here
QLogic recently announced a converged network adapter that provides 10Gb Ethernet and FCoE on a single chip, over a single cable. This is an example of an emerging technology in this space that exploits the ubiquity of Ethernet while preserving installed investments in Fibre Channel infrastructure.
Figure 1 shows, on the left-hand side, a traditional network configuration with separate Ethernet and FC cards dedicated to a LAN and SAN respectively. The right side of the graphic shows conceptually how QLogic has converged these capabilities into a single chip in what it calls the 8100 Series Converged Network Adapters, based on its new Network Plus Architecture. In our view, the most significant piece of this announcement is the broad industry support coalescing around this new technology. QLogic's press release quotes nine tier 1 vendors that presumably are leveraging this product: Cisco, Dell, EMC, HP, IBM, Microsoft, NetApp, Sun and VMware. Also significant is that QLogic has created a first-to-market advantage for itself, delivering a purchasable, single-chip CNA to its OEMs ahead of competitors.
In our view, QLogic’s announcement validates our assertions and marks a turning point in the evolution of data center networks. Specifically, in our experience, when a leading vendor packages multiple critical functions into a single chipset, it signals that volumes are about to escalate and price barriers will be broken, marking wide-scale adoption. QLogic claims 70% of the blade market and more than 50% of the FC HBA space, meaning it has the relationships and ecosystem to support this vision.
Should FCoE be Part of Data Center Strategies?
Clearly a converged network brings substantial benefits to IT organizations, and the Wikibon community believes 10Gb Ethernet will be the backbone of a converged network strategy, although the transition will happen over a 2-5 year period. For organizations with investments in Fibre Channel, FCoE is a no-brainer because it leverages existing FC investments while exploiting a 10Gb Ethernet converged strategy. This is not to say there is no place for iSCSI or other protocols; there is. In particular, many organizations have avoided large investments in FC SAN infrastructure, and for them FCoE will not play an important role in a converged network strategy.
For the FC crowd, however, the number one action in this area should be to ensure that new server acquisitions are equipped with a converged network adapter that includes FCoE. This transition will not happen overnight; however, within the next twenty-four months this strategy will become the norm for most SAN-based enterprises. Organizations should create an overall Ethernet infrastructure by carving out virtual sub-networks that specialize in supporting various connectivity needs (e.g. LAN, SAN and IPC) and leverage FCoE where appropriate.
Caveats
Practitioners in the Wikibon community have cited four concerns about FCoE, specifically:
- It introduces another protocol that needs to be managed. In the near term this will increase complexity.
- Performance implications are unclear. In all likelihood, technologies will converge around 10 GigE but networks may very well remain separate (e.g. separate SAN and LAN network).
- Cascading dependencies: a change in the SAN will have ripple effects throughout the Ethernet network.
- Organizational considerations. Who is in charge, the network professionals or the storage group? Most practitioners believe that while the network group will likely have the most authority, the storage group will maintain responsibility for data integrity.
Despite these near-term concerns, the Wikibon community on balance believes the benefits of convergence will outweigh the drawbacks, and over a 2-5 year period adoption will escalate dramatically.
Action Item: Organizations with Fibre-based SANs should ensure that 10Gb Ethernet and FCoE strategies are in place and that new server and storage gear is equipped with converged network adapters (CNAs). Such strategies will support a vision of a converged data center with high-function/high-speed communications between compute, storage, clients, voice, LAN, internal clouds and external clouds.