Storage Peer Incite: Notes from Wikibon’s March 20, 2012 Research Meeting
In our last Peer Incite meeting we discussed 10Gb Ethernet in light of Intel's Xeon E5-2600 announcement and the servers already appearing that use this latest processor. Wikibon's analysts have long been champions of 10GbE and of Fibre Channel over Ethernet (FCoE), but while these related technologies are widely seen as the next step in data center networking, adoption has been slow. The advent of the E5-2600 eight-core processor, however, may be the tipping point.
E5-2600 servers consume large amounts of data, and their motherboards typically come with 10GbE built in to meet that need. While companies can plug ten 1GbE cables into their servers, this is a kludge that creates major physical cabling challenges and is much less flexible, particularly in virtualized environments in which multiple applications with different data demands often run on a single server. Multiple 1GbE connections can only be allocated one way: one connection at a time. A single 10GbE link, by contrast, can be partitioned to provide each application with the network performance it needs, and those partitions can change dynamically as those needs change. This could, for instance, allow one connection to meet the needs of more than 10 small applications, or alternatively provide an effective 4.5Gb/s to a single application while dividing the rest among several others.
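As a rough sketch of this flexibility argument, consider a hypothetical proportional allocator; the function and demand figures below are illustrative, not any vendor's API:

```python
# Illustrative sketch: sharing one 10GbE link among applications.
# The allocator and the demand figures are hypothetical.

LINK_CAPACITY_GBPS = 10.0

def allocate(demands_gbps):
    """Grant each application its demanded bandwidth if the total fits;
    otherwise scale all grants proportionally so the link is never
    oversubscribed."""
    total = sum(demands_gbps.values())
    scale = min(1.0, LINK_CAPACITY_GBPS / total) if total else 1.0
    return {app: demand * scale for app, demand in demands_gbps.items()}

# One heavy application plus several lighter ones on the same port.
# With ten separate 1GbE links, "analytics" could never exceed 1 Gb/s.
grants = allocate({"analytics": 4.5, "web": 1.5, "backup": 2.0, "db": 2.0})
```

Because the partitions are logical, the same link can later be re-divided (say, 30 small applications at a third of a gigabit each) without touching a cable.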
However, not all applications need 10GbE, and moving up from the 1GbE networks typical today is expensive. Moving to 10GbE can also create friction between storage and network admins over who owns what will quickly become a converged network. The articles below, provided by experts in their fields, offer an authoritative examination of these issues and advantages, designed to give IT professionals a basis for deciding whether and where 10GbE is appropriate in their environments.

Bert Latamore, Editor
On March 20th, 2012, the Wikibon community held a Peer Incite to discuss the impact of Intel’s Xeon E5-2600 processor family on the adoption of 10Gb Ethernet. McLeod Glass, Director of HP Industry Standard Servers, and Greg Scherer, VP of Server and Storage Strategy at Broadcom, joined the call.
Intel Xeon E5 Support for 10Gb Ethernet
Outside of blade servers, motherboards based upon previous generations of Xeon processors did not have embedded 10GbE. The Xeon E5-2600 family extends the reach of 10 GbE LAN on motherboards (LOMs) to rack and tower servers. Prior to the Xeon E5, bandwidth out of the server acted as a constraint on application performance, particularly in highly-consolidated, virtualized environments.
Latest generation servers from HP, Dell, IBM, and Cisco, among others, leverage PCI-Express 3 controllers. This enables a balanced system, where bandwidth out of the servers more closely matches the bandwidth within the servers. This is particularly important when servers require access to external, direct-attached or SAN-attached storage.
Throughput-Intensive Workloads, the Logical First Step
Certain workloads stand out as prime targets for migration to Xeon E5-2600-based servers and 10GbE. Possible application choices include streaming video, high-performance computing, and big data. These applications will create the least disruption and offer the fastest time-to-value. At the same time, some workloads, such as large-database, single-image environments where the focus is on latency rather than throughput, will continue to run most cost-effectively on Westmere-EX-based servers.
Virtualization Drives Need for High-Performance, Meshed Network
Highly virtualized environments represent the logical second step in 10GbE deployments. Three-tiered architectures of the past took the form of a database layer, an application layer, and a presentation layer. Each layer ran on dedicated physical servers, and each layer connected to the next, leveraging a network layer tuned for its specific section of the service-delivery stack. With the broader adoption of virtualization, servers will less frequently be dedicated to a single application, and applications less frequently dedicated to a single server. As a result, networks will need to be higher-performance, dynamic, and meshed to support workloads that may migrate from one server to another at any time.
In a highly virtualized infrastructure, the physical layer of the application stack will change more rapidly than the logical. To ease migration and management challenges in a virtualized world, CIOs should leverage tools that present a logical view of the network consistent with today's physical view.
Action item: With the advent of embedded 10GbE, CIOs should begin evaluating the performance of throughput-intensive applications on the latest generation of Xeon E5-2600 based servers. Beyond that, applications running on virtualized servers represent the next logical phase of 10GbE exploitation. Longer term, organizations may want to consider eliminating siloed management for converged server and storage network infrastructures, but in larger organizations this will require substantial organizational change and new ways of monitoring and managing performance.
More than ever before, IT departments are being tasked with demands from opposite ends of the request spectrum. Often summed up as "doing more with less," the reality is that a fierce business environment and a struggling economy mean IT departments need to provide value-add in everything they do. New technologies--even replacement technologies--have to prove themselves worthy of investment of both hard dollars and human capital.
Today, a confluence of new technologies is hitting the market that may be just what the CIO ordered. These new technologies fit extremely well with general equipment lifecycle replacement plans and leverage existing staff skills in ways that provide a mostly seamless transition to more powerful hardware.
First on the docket is the release of Intel’s Xeon E5-2600 eight-core processing behemoth. Along with this processor comes significant support for 10Gb Ethernet, which is quickly becoming a necessity in the data center. With the E5 processor, workload dynamics shift a bit, too. This processor class is designed with virtualization in mind and provides excellent performance for these kinds of workloads. With a high core count--eight in each processor--E5-based systems can do more than ever before.
Some traditional workloads, such as large-database, single-image applications or applications with a large memory footprint, are better suited to other processor products, as they don't leverage the E5's improvements as much as floating-point- and memory-bandwidth-intensive workloads do. As CIOs begin to consider replacement server hardware, they should look at systems that integrate the E5's new advancements for this latter set of applications.
To this end, some vendors are taking their work on servers to a new level. For example, with many of the company’s generation 8 blade systems, HP is bringing a streamlined management and maintenance experience to the product line. With FlexibleLOM (LAN on Motherboard) capability, there is additional flexibility when it comes to changing long-term needs. With traditional systems, IT managers needed to make a number of upfront hardware procurement decisions, which became difficult to change to meet evolving needs. One such decision revolved around the choice of network adapters. You chose 1 Gb Ethernet or 10 Gb Ethernet, and 1 Gb often won out due to cost and lack of immediate need for 10 GbE. These embedded adapters could not be upgraded. To add 10 GbE capability in the future, a mezzanine or PCI slot needed to be populated, if it was possible at all.
With FlexibleLOM technology in Gen8 ProLiant servers, administrators can upgrade in place from 1 GbE to 10 GbE by swapping out the modular LOM component. HP has partnered with a number of networking vendors--including Broadcom and Emulex--to provide the hardware. HP FlexibleLOMs are available for different fabrics, including Ethernet, Fibre Channel over Ethernet, and InfiniBand.
The newest FlexibleLOMs from HP also support I/O virtualization, which allows administrators to slice a 10 GbE link into multiple smaller logical links. A single 10 GbE link can thus serve all of the various needs of even the most complex virtual environments while still maintaining the best practice of logically separating network traffic types. Better yet, this management and network link division is achieved using the tools that administrators already use.
From the CIO’s perspective, these technologies can help organizations meet their ever-growing IT needs without increasing the resources dedicated to infrastructure and other non-core IT services. Here are some of the ways that this is achieved:
- Ability to leverage existing skills = less cost. Perhaps the biggest challenge in deploying any new technology lies in training staff to implement and manage it. By enabling management of new technologies with existing tools, IT staff can implement faster, easier, and at less overall cost. Obviously, this doesn’t mean that organizations can stop training staff, but it makes retraining less necessary for minor generational changes.
- Increased workload size = more virtualized workloads. With such high core counts in the new E5 processors, larger workloads become viable candidates for virtualization. Rather than targeting only large, memory-intensive single-application workloads, the E5 line is tuned for the varied, multi-application nature of virtualization. So, have you been avoiding virtualizing that big SQL Server? Consider it!
- Less cabling = focus on IT value-add, lower costs. With 10 GbE and I/O virtualization, organizations can enjoy a 10-to-1 reduction in the network cabling necessary for a server. This translates into less time spent on physical networking needs.
As these efforts scale, organizations can begin to reap significant ongoing benefits related to both money and time. Companies should start by testing some of the new capabilities in a test bed to determine an appropriate level of investment based on the measured performance of the new hardware. They should begin to discover where such investments can be made sooner rather than later and what kinds of tools may need to be put into place to supplement existing ones.
- Start testing now in a lab.
- Measure workload performance on the new systems to adequately size virtual hosts in preparation for larger workloads.
- Work to shift spending away from infrastructure and toward direct business value-add.
On March 20, 2012, the Wikibon community gathered for a Peer Incite to discuss the impact of Intel’s new family of Xeon E5-2600-based servers on networking. Those of us who have been watching the adoption of 10Gb Ethernet have been waiting for this release, which could mark an inflection point in customer deployments.
As I discussed in an article about HP’s Gen8 server launch, convergence is a topic that needs to be revisited as customers deploy these new solutions. On the networking side, 10Gb Ethernet presents the opportunity for a single network that includes storage traffic using iSCSI, NAS, and/or FCoE. There can be a long technical debate about choosing a protocol, but a key issue for customers is determining who owns a converged environment.
Both virtualization and network convergence break down the traditional silos of IT management. While there will always be a need for expertise on individual technology segments (networking, compute, storage), the forces of automation and solutions that blur the boundaries of management require coordination and cross-training of the workforce. There are huge impacts on the networking organization.
The flattening of the network to a fabric/mesh architecture will require significant changes from traditional three-tier environments; users should look to external professional services for deployment. Customer adoption of storage options for Ethernet is increasing and this requires careful consideration of the roles and boundaries of the storage and networking teams.
Both storage and networking administrators have risk avoidance as job #1; one worries about potential data loss or unavailability, while the other is on call to avoid network outages. iSCSI has had smooth adoption for mid-sized companies where both networking and storage administration are handled by the same person/team.
Storage administrators have handled FC and have very different best practices than the LAN/Ethernet team. With the latest generation of adapters, switches, and storage management tools, customers can migrate as much (or as little) of an environment as they choose to converged FCoE solutions, while allowing storage administration to be carved out logically. In many companies, a virtualization administrator plays the role of an IT generalist and can smooth over some of the old struggles between silos.
Action item: As 10Gb Ethernet continues to move deeper into enterprise data centers, CIOs have a window of opportunity to reshape architectures and the teams that support the full environment. Neither the technology nor the organizational adjustments need be made in a single move, but it should be a top priority to align resources in a more agile way that can be used at higher efficiencies.
In big data environments, the goal is to move as much of the processing as possible to where the data resides. But once processing is completed on local nodes, it is critical to move the results to the next point in the chain – be that another node where additional processing occurs or to the application layer – as fast as possible.
When trying to move big data between nodes using 1Gb Ethernet in any sort of mesh network framework, the elapsed time between when the data leaves point A and arrives at point B is prohibitively high, and throughput is much too low. The result is that destination B waits idle for data to arrive before it can perform its part of the processing chain. Put simply, 1Gb Ethernet does not cut it in big data environments.
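Back-of-envelope arithmetic makes the gap concrete; the payload size below is illustrative, and the calculation ignores protocol overhead and congestion:

```python
# Time to move a payload between nodes at a given line rate
# (illustrative only; real transfers carry protocol overhead).

def transfer_seconds(payload_gigabytes, link_gbps):
    bits = payload_gigabytes * 8e9        # decimal GB -> bits
    return bits / (link_gbps * 1e9)       # line rate in bits/second

payload_gb = 50.0                          # hypothetical per-node shuffle
t_1g = transfer_seconds(payload_gb, 1.0)   # 400 seconds on 1GbE
t_10g = transfer_seconds(payload_gb, 10.0) # 40 seconds on 10GbE
```

At 1GbE the downstream node idles for nearly seven minutes per transfer; at 10GbE that wait drops by an order of magnitude.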
10Gb Ethernet, therefore, should be a no-brainer in big data and high-performance computing environments. 10Gb Ethernet on the Intel Xeon Processor E5-2600 will provide dramatically improved transmission speeds and much faster big data movement to support the types of real-time big data applications that enterprises across industries are eager to exploit. And as the latency of 10Gb Ethernet continues to decline, effective transfer rates will continue to improve.
Action item: When it comes to the networking component of the new big data paradigm, the watchword is “speed.” End-users want “real-time” big data applications and analytics to make decisions faster and smarter than the competition. This requires moving large volumes of multi-structured data at much higher speeds than in traditional application environments. In such scenarios, 1Gb Ethernet simply doesn’t cut it. Enterprises that want to leverage big data to build real-time applications should explore alternative networking options, such as 10Gb Ethernet on the Intel Xeon Processor E5-2600, to build the high-speed, high-throughput networks needed in the big data era.
10GbE pipes on the Romley (officially the Intel Xeon Processor E5-2600, previously known as Sandy Bridge) server motherboard are a “good thing”; simple queuing theory tells you that the throughput for a given response time will be much higher from a single 10GbE pipe than from ten 1GbE pipes. Fewer ports mean less real estate, less power, and fewer switches. And of course, many fewer cables. Slam dunk, right?
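The queuing claim can be checked with the standard M/M/1 mean-response-time formula, 1/(μ − λ); the load figure below is illustrative:

```python
# M/M/1 comparison: one fast 10 Gb/s pipe vs. ten separate 1 Gb/s pipes,
# each thin pipe carrying a tenth of the total offered load.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of a stable M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

mu_thin = 1.0   # service rate of one 1GbE pipe (transfers per unit time)
lam = 6.0       # total offered load, 60% of aggregate capacity

t_fat = mm1_response_time(lam, 10 * mu_thin)    # single 10GbE pipe
t_thin = mm1_response_time(lam / 10, mu_thin)   # each of ten 1GbE pipes
# t_thin / t_fat == 10: at the same utilization, the single fat pipe's
# mean response time is ten times lower -- the queuing argument above.
```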
Clearly, applications that are bandwidth-constrained (e.g., some big data applications and some high-performance computing (HPC) applications) should move to Romley-based servers in a heartbeat. The combination delivers blazing processor performance and bandwidth. But the simple fact is that not many applications consume 10GbE’s worth of bandwidth. This is especially true in virtualized environments, where a 10GbE pipe must be shared across multiple processors, cores, operating systems, and applications.
Broadcom, QLogic, and other 10GbE providers can carve out logical partitions within a 10GbE pipe and provide shared IO with multiple connections and multiple protocols. Blade systems have already provided a different solution to IO sharing. For example, HP blade systems have used Virtual Connect to virtualize IO for years. The result is far fewer cables and higher-speed connections coming out of the blade system. There is clearly a need for IO sharing in virtualized non-blade systems with large numbers of small virtual systems on a single physical machine.
What is equally clear is that the supporting software ecosystem is not yet in place for rack-based and tower servers. The focus area for the ecosystem surrounding Romley and 10GbE needs to be the optimization of operating systems, hypervisors, and key middleware to exploit IO sharing efficiently. For example, each core in a multi-core processor has its own L1 cache, but the cores share a last-level cache; the IO systems in hypervisors and operating systems will need significant improvement to avoid thrashing that shared cache. Management, problem determination, and security will also need to be significantly enhanced.
Action item: CXOs and server specialists should demand that vendors package and brand Romley servers together with the correct OS, hypervisor, and middleware technologies to deliver proven improvements in throughput with minimal retraining of staff and minimal changes to management and security procedures. VMware, Hyper-V, and Xen should be the initial targets for these configurations.
The new data center is being dramatically reshaped by the rise of 10Gb Ethernet and new Xeon E5-based servers
On March 20th, I had the privilege of joining the Wikibon Peer Incite: "The Rise of 10Gb Ethernet and the Impact of Intel's Xeon E5 Family of Processors". This was an important discussion that drilled down on many aspects of 10Gb Ethernet, its speed advantages over the existing 1Gb Ethernet environment, and HP’s FlexibleLOM architecture, which makes it easier to choose the right time to upgrade to faster speeds.
Another aspect of the 10Gb Ethernet upgrade cycle is its impact on how data centers are now being built. The traditional three-tier data center (presentation tier, application tier, and database tier) leaves server resources in islands that are optimized for north-south networking traffic only. Each tier talks to its adjacent tiers, but servers do not talk to others in the same tier (so-called east-west traffic). Reconfiguring this kind of data center to adapt to changing workloads is a physical exercise that is both labor- and time-intensive. Read this as limited flexibility with a lot of operating expense (OpEx) when a change is needed.
This traditional three-tier data center is rapidly giving way to the construction of “virtual” data centers, where all the servers are fully connected via 10Gb Ethernet in a “flat” network, that is, one with fewer network tiers. This not only simplifies network construction but allows ANY set of servers to be configured into any ‘logical’ tier of the data center as the need arises. Ultimately, this eliminates compute islands, increases flexibility, and lowers OpEx. This is one of the biggest lessons learned from utility/public cloud computing architectural practices: Fast, fat, and flat networks save time and money.
Action item: One of the revolutions of the rise of 10Gb Ethernet in the data center is simplified data center design that avoids inflexible and expensive reconfiguration to adapt to today’s changing workloads. Fully connected 10Gb Ethernet networks deployed all the way to the edge of the network (inside modern servers) accomplish long-term cost savings by eliminating unneeded “physical” network tiers and compute islands whose resources can’t easily be shared or reconfigured.
Footnotes: Greg Scherer is the Vice President of Server and Storage Strategy for Broadcom Corporation.