#memeconnect #ql
The principles used to configure server and storage infrastructure will change drastically starting in 2010. The traditional approach is to configure servers by application type: one rack of servers runs Web applications; another performs SQL work; a third crunches technical computations. Each set of servers is configured with the I/O and communications connectivity appropriate for its assigned work. Server utilization hovers around 6%.
Server virtualization and the new Intel Nehalem architecture are changing that server landscape. Nehalem's I/O improvements for virtual machines remove the performance barriers to running production workloads on virtual systems. Processors can be driven to 30%-50% utilization, and Nehalem servers have sufficient I/O bandwidth to support these utilization levels. The three server racks of the traditional architecture can be condensed into a single rack or less.
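The consolidation claim follows directly from the utilization figures above. A back-of-the-envelope sketch, with illustrative rack and server counts that are assumptions rather than vendor data:

```python
# Rough consolidation estimate from the utilization figures cited above
# (6% traditional vs. 30-50% virtualized). Rack sizes are assumptions.

def consolidation_ratio(current_util: float, target_util: float) -> float:
    """How many legacy servers one virtualized server can absorb,
    assuming CPU utilization is the binding constraint."""
    return target_util / current_util

legacy_servers = 3 * 42                    # assume three full racks of 1U servers
ratio = consolidation_ratio(0.06, 0.40)    # 0.40 = midpoint of the 30-50% range
needed = -(-legacy_servers // int(ratio))  # ceiling division
print(ratio, needed)                       # ~6.7:1 ratio; 21 servers, i.e. half a rack
```

Under these assumptions, 126 legacy servers collapse to roughly 21 virtualized ones, consistent with the "single rack or less" claim.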
To achieve that, work previously done on dedicated servers must move to general-purpose servers. The issue is how to configure a server's I/O and communications so that it has the flexibility to do any work. The traditional approach of installing a maximum configuration, with separate cards for each network, is not viable: the cost of FC cards, 10GbE cards, 1GbE cards, etc., is prohibitive; the power and space requirements limit server density; and the cabling becomes unmanageable.
There are three virtual I/O approaches to solving this problem:
- HP Virtual Connect on its c-Class BladeSystems,
- CNAs (Converged Network Adapters) from companies such as QLogic and Emulex,
- Virtual I/O switches from companies such as NextIO, Xsigo, and VirtenSys.
HP Virtual Connect is a mature technology, available since 2007. It simplifies configuring and reconfiguring the connections between SAN and LAN systems and the servers. However, HBA cards still need to be installed in each server.
CNAs virtualize multiple protocols, such as 10GbE and FCoE (Fibre Channel over Ethernet), on a single adapter. By connecting these to a top-of-rack switch and connecting the outside SAN, LAN, and other networks to that switch, the number of cards, the number of switch ports, the amount of cabling, and the power consumption can all be significantly reduced. This approach is being marketed in particular by EMC and Cisco.
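The cabling reduction is easy to quantify. A minimal sketch, assuming a 40-server rack with two redundant links per network (both counts are illustrative assumptions):

```python
# Illustrative per-rack cable counts: separate FC/10GbE/1GbE links per
# server vs. a pair of converged CNA links. All counts are assumptions.

servers_per_rack = 40
traditional_links = {"FC": 2, "10GbE": 2, "1GbE": 2}  # links per server
cna_links = {"converged 10GbE/FCoE": 2}               # links per server

def rack_cables(links_per_server: dict) -> int:
    """Total cables leaving the rack for a given per-server link plan."""
    return servers_per_rack * sum(links_per_server.values())

print(rack_cables(traditional_links))  # 240
print(rack_cables(cna_links))          # 80
```

Under these assumptions a rack drops from 240 cables to 80, with a matching reduction in switch ports and adapter cards.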
The virtual I/O switches from NextIO, Xsigo, and VirtenSys allow a different and potentially more flexible approach. NextIO connects an HSEC (High Speed Expansion Card) in each server directly to a PCI Express switch, which in turn connects to all SAN, LAN, and IPC (inter-processor communication) networks. VirtenSys uses PCIe connections from the server in a similar fashion to NextIO. Xsigo uses an HCA (Host Channel Adapter) in each server to connect over InfiniBand to an Xsigo I/O director, which again connects to the SAN, LAN, and IPC networks. The advantage of these solutions is that servers can be dynamically configured to provide whatever connection protocol and bandwidth the workload requires. This again reduces cabling, I/O ports, and the number of HBA cards, and improves manageability. Technology refreshes (e.g., 4Gb FC to 8Gb FC) are handled in the I/O switch or director rather than on each server. The types of I/O that can be directly connected are broader and can be intermixed. Xsigo was used at VMworld 2009 to connect EMC's broad range of storage protocols to the server rack deployed for a wide variety of demonstrations.
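The dynamic-configuration idea can be sketched as a toy model: virtual adapters are carved out of shared director uplinks and reassigned without touching server hardware. All class and method names below are hypothetical illustrations of the concept, not any vendor's API:

```python
# Toy model of an I/O director: servers request protocol/bandwidth
# allocations from shared uplinks, so a server can be re-purposed
# (e.g. given FC connectivity) purely in software.

from dataclasses import dataclass, field

@dataclass
class Uplink:
    protocol: str            # e.g. "FC", "10GbE", "IB"
    capacity_gb: float
    allocated_gb: float = 0.0

@dataclass
class IODirector:
    uplinks: list
    # server name -> list of (protocol, bandwidth) virtual adapters
    assignments: dict = field(default_factory=dict)

    def assign(self, server: str, protocol: str, bandwidth_gb: float) -> bool:
        """Grant a virtual adapter if an uplink has spare capacity."""
        for up in self.uplinks:
            if up.protocol == protocol and up.capacity_gb - up.allocated_gb >= bandwidth_gb:
                up.allocated_gb += bandwidth_gb
                self.assignments.setdefault(server, []).append((protocol, bandwidth_gb))
                return True
        return False

director = IODirector([Uplink("FC", 8.0), Uplink("10GbE", 10.0)])
director.assign("web-01", "10GbE", 4.0)
director.assign("db-01", "FC", 4.0)
# Re-purpose web-01 for storage work by granting it FC bandwidth too:
director.assign("web-01", "FC", 4.0)
print(director.assignments)
```

The point of the sketch is that no cards are added or cables moved when `web-01` gains FC connectivity; only the director's allocation table changes.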
All three approaches provide a more flexible environment for connecting I/O networks to servers. They simplify cabling, reduce I/O port counts, reduce power consumption and heat density, and improve the overall management of server racks and the I/O infrastructure. CNAs provide significant reductions, particularly in high-performance FC storage environments. The virtual I/O solutions from NextIO, Xsigo, and VirtenSys are more radical but more flexible, suited to environments with a broad variety of I/O connectivity. The bottom line is that all the virtual I/O solutions discussed will significantly improve the flexibility of I/O connection and reduce server/storage infrastructure costs.
Action Item: Senior managers and CTOs responsible for server and I/O deployment should ensure that the design of server racks moves rapidly towards a general-purpose server model supporting virtual servers running any workload. To achieve this, senior managers should ensure that virtual I/O solutions that reduce the number of HBA cards, reduce cabling, and provide configuration flexibility are carefully evaluated for all 2010 server deployments.
Footnotes: @nigelpoulton blog posts on IO virtualization.
Nick Allen provides more detail on IO Virtualization at Wikibon.