Do Long Distance Live vMotion Storage Gems from VMworld 2009 Portend the Future for EMC?

At VMworld 2009 in San Francisco, EMC, VMware, and Cisco presented a “super” session (TA-3105) entitled “Long Distance Live vMotion.” Cisco published a white paper about it, and Chad Sakac of EMC discussed it extensively in his blog entry. A video of this standing-room-only session is available on Blip TV (the link in Chad’s blog entry did not play for me and does not work with Firefox). VMware also reversed course and announced that it now supports this configuration.

Curiously, while the network and server virtualization communities analyzed this demonstration extensively, the storage angle received far less attention; the storage community merely mentioned it and did not dive into much detail. Yet there were several astonishing storage-related gems – not hidden gems – in the aforementioned publicly available material.

First, a storage gem from Cisco’s white paper:

“Extended VLAN and Active-Active Storage – An extended VLAN and active-active storage solution incorporates technologies that make data actively available at both the local and remote data centers at all times. The LAN extends across the data centers, and storage is provisioned in both data centers. Data is replicated across data centers using synchronous replication technology and rendered in an active-active state by the storage manufacturer. Normally when data is replicated, the secondary storage is locked by the replication process and is available to the remote server only in a read-only state. In contrast, active-active storage allows both servers to mount the data with read and write permissions as dictated by the VMware vMotion requirements.”

The paper never names EMC and makes no further reference to this curious statement.

However, a second storage gem can be found in Chad Sakac’s blog regarding an Active/Active configuration (use case Option 2 of the demo), slightly edited here for clarity:

“Option 2 is a preview of something to come from EMC. We had a lot of internal debate about whether or not to show this – historically, EMC didn’t show things prior to GA, though this is starting to change. We thought: 1) there was a lot of interest; 2) we had data on solution behavior; 3) there were enough customers that would like Options 1a/b [two other use cases in the demo] but desire a faster transit time; 4) the solution is relatively close. Based on all that, we decided we should share the current data and demonstrate it. This also allows us to start to get customer feedback on our approach.

“Option 2 is EMC’s primary locus of effort for this use case (as we think it meets all the requirements the most broadly), and it will be the first one available from EMC as a ‘hardware accelerated’ option (it simply looks like a vMotion – the underlying storage mechanism is transparent to vSphere). …I know that this is very exciting, but PLEASE: don’t immediately reach out to your EMC team and ask to get in on this – it will only slow us down. We’re on it around the clock – let us focus on finishing with the quality customers expect from EMC.”

Also from Chad’s second video in the same post:

  • “Note: changing datastore to one hosted at the secondary site. Both sets of datastores must be visible to each vSphere cluster.”
  • Option 2: Long Distance vMotion with advanced Active/Active Storage – “one that leverages technology coming from EMC around active/active storage virtualization across distance.”

The real gem here is the active/active part. At first glance, using traditional thinking, one might assume this means the two sides of a storage controller or cluster are active, or that two storage subsystems at different locations are active. However, this demo showed two VM datastores at two locations presented as one, with both sides read/write active. vMotion requires all ESX hosts to have read/write access to the same shared storage, yet shared storage does not work well over narrow, high-latency WAN links. There must be some kind of synchronization, or a different way to present VM datastores on both sides.

Replication won’t work in this use case because replication does not provide active/active access to the data. The secondary data center has only a passive copy to which it cannot write; it has no active read/write access to the replicated data. Using replication as a method of getting vMotion working will result in major problems.
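To make that distinction concrete, here is a minimal Python sketch. The class names and the site/LUN model are invented purely for illustration – this is not EMC’s or VMware’s code – but it shows why a passive replica breaks a migrated VM while an active/active datastore does not:

```python
# Hypothetical sketch only -- names and behavior are invented for
# illustration; this is not EMC's or VMware's implementation.

class ReadOnlyDatastoreError(Exception):
    """Raised when a host writes to a passive replica."""

class ReplicatedDatastore:
    """Classic synchronous replication: one active side, one locked replica."""
    def __init__(self):
        self.active_site = "A"   # only site A may write
        self.blocks = {}

    def write(self, site, lba, data):
        if site != self.active_site:
            # The replication process holds the secondary read-only, so a
            # VM moved to site B cannot write to its own VMDK.
            raise ReadOnlyDatastoreError(f"site {site} holds a passive copy")
        self.blocks[lba] = data  # change is shipped to B, but B stays read-only

class ActiveActiveDatastore:
    """Both sites mount the same logical datastore read/write."""
    def __init__(self):
        self.blocks = {}

    def write(self, site, lba, data):
        # Either site may write; the array keeps both sides coherent.
        self.blocks[lba] = data

# A VM that lands at site B can keep running only in the active/active case:
for ds in (ReplicatedDatastore(), ActiveActiveDatastore()):
    try:
        ds.write("B", lba=42, data=b"vmdk block")
        print(type(ds).__name__, "-> write at site B succeeded")
    except ReadOnlyDatastoreError as err:
        print(type(ds).__name__, "-> write at site B failed:", err)
```

In the replicated case, the first write at site B fails immediately; in the active/active case it succeeds, which is exactly the property vMotion needs.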

Answering questions after the session at VMworld, Chad explained that this is where EMC’s solution starts to shine. It involves a SAN solution with additional layers of virtualization built in, so two physically separated ESX/vSphere servers effectively share RAM, CPU, and storage, turning the hosts and their storage into a single logical entity. When doing a vMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary data center; the SAN itself does all the heavy lifting. The technique is completely transparent to the vSphere environment, as only a single LUN is presented to the two hosts. So a vMotion and a “logical” Storage vMotion (actually a hyper-speed synchronous dual write) are combined into a single vMotion that takes only minutes, or even seconds, to execute.
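As a back-of-the-envelope model of that “synchronous dual write,” consider the sketch below. The 5 ms one-way latency figure and all class names are assumptions for illustration, not details from the demo; the point is that the host is acknowledged only after both arrays commit, so write latency is bounded by the inter-site round trip:

```python
import time

ONE_WAY_LATENCY = 0.005  # assumed 5 ms one-way inter-site latency (illustrative)

class SiteArray:
    """One physical array at one site (hypothetical model)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def commit(self, lba, data):
        self.blocks[lba] = data

class DistributedLun:
    """Presents a single logical LUN; mirrors every write synchronously."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote

    def write(self, lba, data):
        self.local.commit(lba, data)
        time.sleep(ONE_WAY_LATENCY)   # ship the write to the remote site
        self.remote.commit(lba, data)
        time.sleep(ONE_WAY_LATENCY)   # wait for the remote acknowledgement
        return "ack"  # only now does the host see the write complete

lun = DistributedLun(SiteArray("A"), SiteArray("B"))
start = time.perf_counter()
lun.write(0, b"data")
print(f"host-visible write latency ~{(time.perf_counter() - start) * 1e3:.1f} ms")
```

That per-write round trip is also why this style of solution is tied to synchronous-replication distances: latency, not bandwidth, is the limiting factor.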

So, has EMC solved the long-distance cache coherency and distributed lock management problems that have plagued the industry forever? With only two nodes and fairly small latencies, this demo was certainly small scale. Yet EMC seems to be working vigorously to prove the approach will scale. Stay tuned.
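To see why that problem is hard, here is a toy Python sketch. The per-extent ownership scheme and the 10 ms round trip are assumptions for illustration, not EMC’s design; it simply shows that every time the “other” site wants to write an extent, the lock must travel across the WAN, and contention multiplies that cost:

```python
import time

RTT = 0.010  # assumed 10 ms inter-site round trip (illustrative)

class ExtentLockManager:
    """Toy distributed lock manager: each extent has one owning site."""
    def __init__(self):
        self.owner = {}  # extent -> site currently holding the write lock

    def acquire(self, site, extent):
        current = self.owner.get(extent)
        if current is not None and current != site:
            time.sleep(RTT)  # revoke and transfer the lock over the WAN
        self.owner[extent] = site

mgr = ExtentLockManager()
start = time.perf_counter()
for i in range(10):
    mgr.acquire("A" if i % 2 else "B", extent=0)  # two sites contending
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"10 contended acquires cost ~{elapsed_ms:.0f} ms of pure WAN waiting")
```

With two nodes and modest latency the waiting is tolerable; add nodes or distance and the coherency traffic grows, which is presumably what EMC still has to prove.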


  • http://chucksblog.emc.com/ Chuck Hollis

    Hi Nick — no comment! :-)

  • http://twitter.com/dvellante dvellante

    Thanks for the no comment, Chuck. Well done, Nick! Hollis is rarely speechless :-)

