Can You Really Build an All-Ethernet Data Center with FCoE?

Let’s take a hypothetical customer who is looking to build a new data center and would like to embrace the latest and greatest technologies.  Can they construct an entire environment built on a Converged Network – an all-Ethernet environment?  This post will focus on the technology and the ecosystem.

The Single Network

First of all, by convergence I mean having all traffic travel over a single wire.  Existing environments today have separate networks for security, management and traffic requirements (multiple Ethernet connections and/or a mixture of Ethernet, Fibre Channel (FC) and InfiniBand).  If we can get down to a single wire (with proper high availability, of course), we can save power, cooling and space, and most importantly we simplify the environment operationally, which allows for mobility of virtualized environments.  The industry has known the “winner” of the single network for many years – the answer is ETHERNET – but there have been limitations (some technical, some organizational).  With 10 Gigabit Ethernet, there is the opportunity to change things.  If you are already using NAS or iSCSI, it is simply a matter of moving to the faster speed and considering having multiple traffic types on a single wire (which is different from 1Gb iSCSI configurations).  For FC customers, FCoE is the option that will allow large storage environments to migrate onto Ethernet without disrupting their current processes.
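To make concrete why FCoE is the low-disruption path for FC shops: an FCoE frame is a complete Fibre Channel frame carried inside an Ethernet frame under its own EtherType (0x8906), which is why existing FC zoning, naming and management practices carry over.  A minimal sketch of the encapsulation idea (field layout only – the real FCoE header defined in FC-BB-5 also carries version, SOF and EOF fields, and this is not a working initiator):

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE (FIP uses 0x8914)

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete FC frame in an Ethernet frame (simplified sketch)."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# A dummy 36-byte stand-in for a real FC frame (header + payload + CRC).
fc_frame = bytes(36)
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",  # hypothetical MACs
                             b"\x00\x11\x22\x33\x44\x55",
                             fc_frame)
assert frame[12:14] == b"\x89\x06"  # receivers demultiplex FCoE on this EtherType
print(len(frame))  # → 50: 14-byte Ethernet header + encapsulated FC frame
```

The key point the sketch illustrates is that the FC frame rides through untouched – which is also why FCoE demands a lossless Ethernet underneath, since FC assumes the network does not drop frames.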

The Vendor Discussion

Most vendors discussing FCoE fall into one of two camps:

  • Blue Sky: “no barriers – every solution works great today – buy it all now”
  • FUD: “sure we have FCoE, but there are standards issues, and limitations – why don’t we do a beta for the next 18 months”

It’s not all bad: there are some good discussions going on in the blogosphere about some of the advanced functionality being created.  If you’re ready for an advanced topic on FCoE, there have been some great posts digging into TRILL (Layer 2 multipathing created in the IETF standards group; it replaces Spanning Tree Protocol), including from Brad Hedlund (Cisco) and Greg Ferro (EtherealMind).
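To see why TRILL matters for a converged fabric: Spanning Tree keeps the topology loop-free by blocking redundant links, so only one path carries traffic, while TRILL-style Layer 2 routing can forward over all of them.  A toy illustration (not either protocol’s actual algorithm – a BFS tree simply stands in for the tree STP would converge on) using a four-switch full mesh:

```python
from collections import deque
from itertools import combinations

# Four switches in a full mesh: 6 links total.
switches = ["A", "B", "C", "D"]
links = set(combinations(switches, 2))

def spanning_tree(root):
    """Build one loop-free tree from the root (n-1 links); every other
    link would be blocked by STP.  (Real STP elects root and ports via
    BPDUs; BFS is just a stand-in for the resulting tree.)"""
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        u = queue.popleft()
        for a, b in links:
            v = b if a == u else a if b == u else None
            if v and v not in seen:
                seen.add(v)
                tree.add((a, b))
                queue.append(v)
    return tree

tree = spanning_tree("A")
blocked = links - tree
print(f"STP forwards on {len(tree)} of {len(links)} links; {len(blocked)} sit idle")
# → STP forwards on 3 of 6 links; 3 sit idle
# A TRILL-style fabric routes over the full topology instead, so all six
# links can carry traffic and equal-cost paths are used in parallel.
```

Half the mesh sitting idle is tolerable for bursty LAN traffic, but it is a poor fit for storage traffic that expects the bandwidth and predictability of an FC fabric – hence the interest in Layer 2 multipathing.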


The reality is that most customers are still learning the basics of FCoE.  At EMC World last month, I gave an introductory-level presentation on FCoE and Converged Networks; between the two sessions there were almost one thousand attendees, the majority of whom were hearing details on FCoE for the first time.

Let’s look at some of the “limitations” that might prevent customers from going All-Ethernet:

  • Operating Systems: today it is primarily Windows, Linux and VMware – UNIX support is coming soon, and Mainframe is a big TBD
  • Cabling – the #1 issue for the person building the data center is that the infrastructure must be usable for the next 5-10 years. There are copper options at 10Gb Ethernet, but over that 5-10 year horizon 40Gb and 100Gb must be considered – so is copper dead?
  • Multi-hop configurations are coming – today you can connect a blade server switch to an edge switch, with more flexibility coming soon (see Joe Onisick’s post on multi-hop FCoE)
  • Only edge/access-layer switches are shipping – director-class products (either Ethernet or FC directors with FCoE capability) are not yet available. Be on the lookout to make sure these products have the same high-availability features as FC solutions offer today
  • Native FCoE is available from NetApp today and is expected from other vendors soon – storage is typically attached to a director in the core, so expect the FCoE director products and native FCoE storage to come out around the same time

As you can see, the technical issues are all being worked on and should be resolved in the next 6-12 months.  The ecosystem for Lossless Ethernet is growing, which will address customer concerns about vendor lock-in – there need to be more options than just Cisco, and there will be.

The answer to the question – can you build an All-Ethernet Data Center with FCoE – is a qualified yes if you are planning a new data center for the end of 2010 or for 2011. Exact configurations and settings are still being tested, and best practices are starting to be written now.  Turnkey rack-based solutions that take advantage of FCoE will simplify deployments. IBM, HP and Cisco (and Vblocks from the VCE initiative of Cisco, EMC and VMware) all have Rack Area Networks (RANs) that are architected with Ethernet as the converged network that can run FCoE.  We are still very early in the adoption curve; the technology barriers are coming down, and it will be the organizational and operational issues that decide whether FCoE becomes a component of customer data centers.

For more background on FCoE, see the archive on my personal blog.  Comments and discussions are always welcome.


  • The challenge with the Spanning Tree / TRILL discussion is that these are mostly written from the networking guy’s perspective. Here is a take from the storage guy’s perspective:
    What is Trill?

  • You can build a storage network over ethernet using IP, but you'd be really brave to try and use FCoE to do it. It's still years before we will have the standards and vendor support that we need.

    Let’s not quibble over proprietary, ad hoc installations that we have today. That's not brave, that's stupid.

  • stu

    Greg – proprietary? T11, IEEE and IETF are all involved – how many more standards do you want? You've got the server, network, storage, chip & test vendors all involved here. We're 3 years into the development of FCoE with deployments for the last 1 1/2 years – are these all ad hoc? I know you prefer iSCSI – it's fine for smaller environments (100-300 nodes), but for the FC customer with 1000s of servers, FCoE is the option to get onto Ethernet. Let's point out what else needs to be done to build these solutions reliably and securely rather than trashing anything related to FC…

  • That's part of my problem. Too many standards, not enough completion and all dependent on DCB which isn't here yet and was promised in the middle of last year.

    We are 3 years into FCoE deployments on twenty, maybe forty nodes in a single rack. That's not 300 nodes, that's toytime.

    We need better storage protocols than FC, and more reliable technology than FC / FCoE can give us. Oversized switches to deliver a low-latency, lossless network are good for vendors and even better for IP storage. FCoE isn't going to last long enough to make a difference.

  • Stu,
    Brocade has an FCoE blade for their DCX director (24 ports) so you can connect FCoE to the core.

    On the other hand, Greg has a point calling it proprietary. There are only two switch vendors, and the DCB standards haven't been adopted yet. It will be a real ecosystem when the Ethernet vendors that aren't part of the FC oligopoly join in.

  • Sorry guys, but calling any of this “proprietary” or “DCB not ready” in the context of an FCoE conversation is complete B/S. Let's revisit the indisputable facts.

    Fact: Nothing in any vendor FCoE offering is proprietary
    Fact: FCoE has been standard since June 2009
    Fact: The DCB standards required by FCoE have been silicon-ready for quite some time.

    Have a great weekend.


  • stu

    Howard – has the DCX been qualified by any OEMs? The last I heard, there were some limitations around HA on the solution, which would stop many from putting it in a production environment. Customers should make sure that online code load and replacement/failover are similar to what an FC solution offers today.


  • The DCB standards are NOT completed, and it doesn't look like we will get them this year. You can't bet your data centre on “guessing” at standards as Cisco is doing.

    You can tell stories about “DCB ready” all you like, but they are not here. The level of fallacy over the last two years is astonishing. The FCoE standard arrived a year later than promised, and DCB is at least one year away, maybe two.

    In that time, iSCSI will continue to grow from 20% of the storage networking market and make FCoE obsolete. It's plainly inevitable.

