The following is a transcribed interview of Rick Walsworth, director of product marketing, EMC Infrastructure Management Group, on SiliconAngle.TV from EMC World 2011. He was interviewed by Wikibon Co-Founder David Vellante and SiliconAngle Founder John Furrier.
DV: I want to talk about bringing enterprise applications to the cloud. CIOs are concerned about security. CEOs want to get to the cloud as fast as possible. So the response has been to virtualize. Then the issue is: How do I make these applications – SAP and Oracle and Microsoft – how do I get them enterprise ready?
RW: What happens is that as you start to move these applications into the cloud, the expectation is that I am going to get better service-level performance. The reality is that as it goes in, and you start to virtualize and consolidate, you lose some control. So the question is how do I take advantage of some of the capabilities in the infrastructure to begin to deliver service levels that meet the needs of the enterprise.
DV: That brings me to the notion of IT as a service & particularly data protection as a service. People are saying data protection is broken. People think of it as a bolt-on, an after-thought. I get this application and then I’ve got to protect the data. Is that changing?
RW: Absolutely changing. What happens is the economics start to become very compelling. When you look at the economics of trying to consolidate the infrastructure, I now have the ability to outsource my backups and replication, taking them out through a service provider that pays for the LAN, the connectivity, and the recovery capability. So the economics from a CIO standpoint make a lot of sense, but only if they can guarantee the service levels.
DV: So talk specifically about how that manifests itself in a solution, whether it is with RecoverPoint or with partner products.
RW: From the standpoint of where RecoverPoint fits in, RecoverPoint delivers levels of service so that for SAP, Oracle, and my other mission-critical applications I can deliver a quality of service comparable to what I can deliver in a physical environment. So I'm not giving anything up. The ability to extend that out to my tier-2 applications also makes sense, because now I can take them and leverage that same infrastructure. So now I can take advantage of the services running in the cloud, but at the same time make sure that the CIO's application service levels are being met.
DV: So one of the things we hear as we talk to members of the Wikibon community is that a lot of them don't do charge-backs. And that is one of the fundamental premises of the cloud, that we are going to pay by the drink. Are you seeing more interest in doing charge-backs, or is it more of a show-back model? How is that whole thing being rationalized?
RW: It really depends on where you are relative to virtualization – the amount of storage and services you have virtualized in the environment itself. The deeper into virtualization you get, the more the expectation is that I want charge-back. So I want accountability back to the business unit for the service level I am giving. So for my ERP and CRM systems that I am giving the highest levels of quality, I am putting more infrastructure into those environments, and I want to make sure that they have the ability to charge back. So charge-back is absolutely a requirement that you're seeing more and more, and I think as more companies virtualize more of their infrastructure, charge-back will be more of a requirement.
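The tiered charge-back model described above can be sketched in a few lines. This is a purely illustrative example – the tier names and per-GB rates are hypothetical, not an actual EMC or service-provider price list.

```python
# Hypothetical sketch of a tiered storage charge-back calculation.
# Tier names and monthly per-GB rates are illustrative only.
TIER_RATES = {
    "mission-critical": 0.90,   # e.g. synchronous replication, near-zero RPO
    "tier-2": 0.35,             # e.g. asynchronous replication
    "archive": 0.08,            # e.g. snapshot-only protection
}

def monthly_chargeback(usage):
    """usage: list of (business_unit, tier, gb_consumed) tuples.
    Returns a dict mapping business unit -> monthly charge."""
    bill = {}
    for unit, tier, gb in usage:
        bill[unit] = bill.get(unit, 0.0) + gb * TIER_RATES[tier]
    return bill

usage = [
    ("finance", "mission-critical", 2000),   # ERP data
    ("sales", "tier-2", 5000),               # CRM replicas
    ("finance", "archive", 10000),
]
bill = monthly_chargeback(usage)
print(bill)   # finance ~ 2600.00, sales ~ 1750.00
```

A show-back model would run the same calculation but report the figures to the business units without actually billing them.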
JF: One of the big things people want to hear about the infrastructure is what is the big disruption happening out there? Can you give us a summary of what is going on out there?
RW: There’s a couple of disruptions. One of the big ones you will run into is the ability to virtualize and provide data protection across your heterogeneous infrastructure. So I may have EMC storage, I may have IBM storage in there. How do I consolidate that? How do I provide one way to protect all the data in that infrastructure? So RecoverPoint, one of the fundamental units we are using in all this cloud-based DR, is one of the tools that allows you the ability to connect VMAX and VNX, and other non-EMC storage across that infrastructure.
At the same time, one of the big pain points you have in any kind of data replication service is logical corruption: if my primary copy of data is corrupted, I'm also potentially going to corrupt my replica as well. So how do I protect the business against data corruption within an environment and be able to recover? So the other disruption we're seeing is the ability to roll back in time – to give you that TiVo-like capability for your data right within the data center itself.
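The "TiVo-like" rollback described here is typically implemented as journal-based continuous data protection: every write is journaled with a timestamp, so any prior point-in-time image can be reconstructed. The sketch below is a minimal toy model of that idea; the class and method names are hypothetical, not the RecoverPoint API.

```python
# Toy model of journal-based continuous data protection:
# journal every write, then rebuild the volume image at any timestamp.
class WriteJournal:
    def __init__(self):
        self._entries = []          # (timestamp, block_id, data), in write order

    def record(self, ts, block_id, data):
        """Journal a write with its timestamp."""
        self._entries.append((ts, block_id, data))

    def image_at(self, ts):
        """Reconstruct the volume as it looked at time ts by replaying
        only the writes that happened at or before that point."""
        image = {}
        for t, block, data in self._entries:
            if t <= ts:
                image[block] = data
        return image

j = WriteJournal()
j.record(1, "blk0", "payroll v1")
j.record(2, "blk1", "orders v1")
j.record(3, "blk0", "CORRUPTED")     # logical corruption arrives
# Roll back to just before the corruption:
print(j.image_at(2))   # {'blk0': 'payroll v1', 'blk1': 'orders v1'}
```

Because the corrupted write is in the journal too, recovery is simply a matter of choosing a timestamp before the corruption – the corruption never has to reach the protected copy.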
JF: What do you think about the Playstation hack? So on the big cloud side you’ve got Amazon crashed. RSA got hacked. Playstation got hacked. And then you’ve got Hadoop innovation. How do you think about that? How do you make sense of that?
RW: It goes back to the old adage that every time you create a new solution to try to protect the data, the hackers seem to be one step ahead. So obviously it's just a matter of staying ahead of the smart people who are out there creating these intrusions and making sure you have an effective way to protect against that. And obviously there are a lot of R&D dollars going into making that much more robust than it is today.
DV: I think it comes back to that notion that we were talking about earlier about data protection as a service. Data protection is not a one size fits all. Talk about the discussions that are going on in the customer base & maybe how they should occur. We talk about data protection as a service, but what does that mean? Do you sit down with the LOB and talk about what’s the requirement, how much are you willing to spend and is it an iterative process?
RW: Typically what happens is the data discussion starts around my ERP or CRM services that are mission critical. So the discussion starts around how do I protect that data. But at the same time I have the rest of my infrastructure that I want to include in that service. So you need the ability to assign and enforce service levels dynamically across the system – to assign priority and quality of service to the data that's being protected across the environment. So within RecoverPoint I can take an application set and say that this data set is my mission-critical service, so I can guarantee a specific recovery point objective (RPO) in that environment, and I can also guarantee how long it is going to take me to recover the applications once the data's back in line.
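The per-application policy model described above – a protection tier with a guaranteed RPO and a recovery-time target – can be sketched as follows. The policy schema, application names, and numbers here are hypothetical, not RecoverPoint's actual configuration format.

```python
# Illustrative per-application protection policies with RPO/RTO targets.
# The schema and values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    tier: str
    rpo_seconds: int     # max tolerable data loss (recovery point objective)
    rto_seconds: int     # max tolerable recovery time (recovery time objective)

POLICIES = {
    "sap-erp":   ProtectionPolicy("mission-critical", rpo_seconds=0,    rto_seconds=300),
    "crm":       ProtectionPolicy("mission-critical", rpo_seconds=30,   rto_seconds=600),
    "fileshare": ProtectionPolicy("tier-2",           rpo_seconds=3600, rto_seconds=14400),
}

def rpo_violations(replica_lag):
    """replica_lag: app -> seconds since last consistent replica.
    Flags apps whose replica lag exceeds the guaranteed RPO."""
    return [app for app, lag in replica_lag.items()
            if lag > POLICIES[app].rpo_seconds]

violations = rpo_violations({"sap-erp": 5, "crm": 20, "fileshare": 7200})
print(violations)   # sap-erp lags 5s against a 0s RPO; fileshare 7200s against 3600s
```

Enforcement then means the replication layer prioritizes bandwidth and resources so the mission-critical data sets stay inside their RPO, while lower tiers are allowed more lag.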
DV: And the goal of course is to automate that and make it policy based. Is that happening today?
RW: It's definitely happening. What you are seeing is tools like Site Recovery Manager (SRM) from VMware that give the ability to automate fail-over and testing within a virtual environment. So tools like this really build out an infrastructure that helps automate fail-over of virtual machines, so that now, rather than having to build it out separately, I have the ability to send a single command and automate fail-over of those virtual entities. So that's definitely helping to automate a lot of fail-over, which is a necessity as you start to grow out the infrastructure. When I'm talking about 10 VMs, it's very easy to fail those over. When I'm talking about thousands or tens of thousands of virtual machines that I want to fail over, I need an automation and orchestration framework to be able to do that. VMware's done a very good job with SRM in working with the storage vendors to provide that.
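The core of a single-command, orchestrated failover is a priority-ordered recovery plan: databases come up before application servers, which come up before web front ends. The sketch below illustrates that ordering logic only; the function names are hypothetical, and real SRM recovery plans are configured in vCenter, not written as Python.

```python
# Sketch of priority-ordered failover orchestration for many VMs.
# Illustrative only; names and structure are hypothetical.
def failover(vms):
    """vms: list of (name, priority) pairs. Fail over in priority order
    (lower number first) and return the resulting boot sequence."""
    sequence = []
    for name, _priority in sorted(vms, key=lambda v: v[1]):
        # In a real recovery plan, each step would first tell the storage
        # layer to promote the replica, then power on the VM at the DR site.
        sequence.append(name)
    return sequence

vms = [("web-01", 3), ("db-01", 1), ("app-01", 2)]
order = failover(vms)
print(order)   # ['db-01', 'app-01', 'web-01']
```

The value of the orchestration framework is that this ordering, plus the storage promotion steps, is executed identically for ten VMs or ten thousand – and can be test-run without disrupting production.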
DV: Do you see a requirement for zero data loss or near-zero data loss as we get to the cloud? Is that becoming more important, or is it still “Oh that’s too expensive, I don’t want to do it.”
RW: It’s absolutely becoming a requirement, especially as I move my mission-critical applications into the cloud. The ability to guarantee near-zero data loss is very important, and not only to protect against a site outage or a power outage, but how do I protect against that data corruption that may have impacted data at both my primary and secondary sites. So as you move services, and especially mission-critical applications, into the cloud, it is very important to deliver that near-zero data loss across there and to be able to tune the application recovery capabilities to the data that’s being protected in there.
DV: A lot of the stuff we’re talking about falls into that “boring but really important” category. If you don’t figure this stuff out, your cloud is not going to work. So my last question to you is: What advice would you give to customers out there that are thinking of architecting in the cloud, particularly in the context of data protection.
RW: Data protection needs to be part of the cloud design from the beginning. You can't bolt it on afterwards, because when people do that it becomes an afterthought – you try to fit it into an existing infrastructure. So it needs to be designed in from the beginning.
One of the things we’ve seen certainly working with the VCE team is they get that. They’ve taken the Vblock infrastructure and integrated RecoverPoint and a lot of the capabilities into the Vblock, so now it’s a standard offering. So now you don’t have to go in and try to architect it in afterwards, it actually gets designed in as part of the initial deployment, so it’s part of my initial roll-out. And you want to make sure as you’re doing this that you’re protecting at the local site, so I have protection from an operational recovery standpoint, and also that disaster recovery failover at the same time.
DV: So Vblock has that capability, and as you think of data protection as a service I can dial up or down depending on my application’s requirements, how much money I want to spend, and the like, right?
RW: Exactly. And I can also automate it. So if you want to automate it you add SRM into it. Now it provides a complete solution around storage infrastructure, networking infrastructure, and the server roll-out as well.