There is no doubt that virtualization is transforming data centers across the world. More and more organizations are committed to a virtualization track, each somewhere along the spectrum from zero virtualized guest systems to 100 percent virtualized. Organizations enjoy the fiscal and operational benefits of migrating existing physical systems in their data centers to virtual ones, but there are many things to consider before jumping right in.
Data centers benefit from reduced costs: less floor space, lower power and cooling draw, and management savings. What once filled many server racks with hundreds of physical rack-mount servers can now realistically be replaced by a dozen or so specialized virtualization hosts. That is no small feat. In other scenarios the virtualization infrastructure, often deployed as a private cloud, can span data centers, wide area networks, or large regional environments. Another well-known benefit is system provisioning. Depending on a number of factors, it can take weeks to order, ship, build, and deploy a traditional rack-mounted system; in a virtualized environment, a build can conceivably be provisioned and ready for production in a couple of hours or less. If you’re breathing and in this business, you know all this.
However, not all applications can or should be virtualized. While most mainstream applications have published a virtualization support stance, many others still have no virtualization story in 2012. It sounds hard to believe, but I witness it again and again, and it is hard to fault the person in an organization who is reluctant to virtualize a critical application that has no such support. Through communication with the publisher or vendor of that critical application, however, a turning point may be in sight. It may well be that the authors lack the resources to invest in a virtualization track and are unwilling to support something they have not tried or tested themselves. These situations can be turned into a win. Those of us in the practice know that almost everything can be virtualized; the real question is whether the organization is ready to have it virtualized, and therein lies the challenge.
Other applications may be serving thousands of users, and sheer scale creates a similar situation. You must architect appropriately in terms of network, storage, and host systems to satisfy the collective demands of all potential hosted systems, and that is a very concerted task. For example, some applications may be ready to go on VMware but not on Hyper-V, or vice versa. Other times you may be looking at an application designed to scale horizontally rather than vertically, such as Exchange with its distributed nature. Further still, resilient infrastructure strategies such as VMware’s Site Recovery Manager provide recovery at the data center level.
Needless to say, my first instinct in almost any environment is to virtualize everything, then look for the “why-nots,” and move on to P2V goodness. It is with this same perspective that putting together a virtualization plan delivers the most service to an organization. Know what level of risk is brought into an environment as a result of support gaps, infrastructure limits, or putting all your eggs in one basket. There is definitely a difference between site resiliency, recovery, and high availability. Get familiar with best practices for any and all applications in your environment, assign a value to virtualizing each host, evaluate the risks, and proceed with a plan.
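That value-and-risk exercise can be as simple as a scored inventory. Here is a minimal sketch of the idea in Python; the field names and the weighting scheme are illustrative assumptions of mine, not a standard formula, so adapt them to whatever criteria matter in your shop:

```python
# Hypothetical sketch: rank physical servers as P2V candidates.
# Weighting and penalty values are illustrative assumptions, not a standard.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    vendor_supported: bool  # does the application vendor support virtualization?
    business_value: int     # 1 (low) to 5 (high): payoff from virtualizing
    risk: int               # 1 (low) to 5 (high): exposure if the move goes badly


def score(c: Candidate) -> int:
    """Higher score = better P2V candidate; unsupported apps take a flat penalty."""
    penalty = 0 if c.vendor_supported else 3
    return c.business_value - c.risk - penalty


servers = [
    Candidate("file-server-01", vendor_supported=True, business_value=4, risk=1),
    Candidate("erp-db-01", vendor_supported=False, business_value=5, risk=4),
    Candidate("print-server-02", vendor_supported=True, business_value=3, risk=1),
]

# Work the best candidates first; revisit the "why-nots" with the vendor.
for c in sorted(servers, key=score, reverse=True):
    print(f"{c.name}: score {score(c)}")
```

A flat penalty for missing vendor support keeps unsupported critical applications at the bottom of the queue until that vendor conversation turns them into a win.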