Storage Peer Incite: Notes from Wikibon’s May 6, 2008 Research Meeting
Moderator: Dave Vellante & Analyst: Tina Rose
Remote replication is a key to disaster recovery, not just from a database crash but also from disasters that put an entire data center out of commission. It serves several needs, including creating an aggregate database for complex analysis, data protection, and DR. Remote replication, however, comes in two basic choices, synchronous and asynchronous, and in a multiplicity of flavors, from synchronous replication to another SAN in the same building to asynchronous replication across a continent or ocean. In general, the faster the replication, the more expensive the solution: synchronous is usually much more expensive than asynchronous, and fast async, which reduces the amount of data at risk of loss at any given moment, is more expensive than slower async. Overall there is no one best answer. The temptation is to say that the business needs full synchronous replication with an instantaneous cut-over in the event of a failure at the primary site, but such a solution is very expensive and serious overkill for many businesses. It also puts physical limits on the solution, particularly on the distances involved, that can leave the business vulnerable to a regional disaster such as a large hurricane or flood that puts both sites out of action.
Thus best practice is to start with a realistic assessment of the business's actual needs. Many businesses can live with the loss of some data and with a delay of up to several hours in recovery after a major failure or disaster. Additionally, in a major disaster customers and business partners expect service disruptions and will tolerate delays in service restoration.
This week's newsletter is based on a Peer Incite presentation by Hewlett-Packard Customer Focused Testing group Senior Systems Engineer Tina Rose on the results of her hands-on research into the technical issues surrounding various replication choices for a large sample Oracle OLTP database, an investigation that produced some important technical findings.
Best practices in Oracle 11g remote replication: An HP EVA example
David Vellante with Tina Rose
In late 2007, HP's Customer Focused Testing (CFT) Group initiated a project to understand the best way to configure an Oracle 11g database on the HP EVA array for replication. HP wanted to share best practices with customers for the major elements of the deployment, namely servers, storage, interconnect, and the database itself. As well, HP wanted to understand how replication methods (e.g. synchronous, asynchronous), bandwidth, latency, and distance affect replication behavior.
The basic premise of this Peer Incite is that by taking advantage of HP's effort and leveraging its recommendations, customers can make better technology choices for their specific environments, optimize performance, speed implementation, cut costs, and reduce implementation risks.
The Project
HP replicated data between two HP Enterprise Virtual Array (EVA) 8000s connected by Fibre Channel over Internet Protocol (FCIP). The primary site (Site A) ran an Oracle 11g Real Application Clusters (RAC) database using Automatic Storage Management (ASM). This was replicated to an Oracle 11g single-instance database using ASM at the backup site (Site B).
Figure 1 depicts the solution implemented by HP.

Following Oracle best practices and tuning the database to its most efficient settings, HP was able to improve the base performance of the OLTP workload selected for the project by 16%. Additionally, for recovery purposes, HP configured two EVA disk groups and two ASM disk groups, with the main online files in the first group and the backup files in the second. HP used a two-controller configuration with 12 disk enclosures holding 168 146-GB drives spinning at 15K rpm, with each LUN configured as RAID 1. The backup disk group comprised only 32 physical devices, because the backup data is accessed far less frequently and could in theory be configured using RAID 5 and lower-spin-speed drives.
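As a sanity check on such a layout, the back-of-the-envelope sketch below shows how the RAID choice drives usable capacity. The drive counts and sizes come from the configuration above; the assumption that the 32 backup drives are drawn from the 168 total, and the eight-drive RAID 5 set size, are ours for illustration only.

```python
# Rough usable-capacity arithmetic for the configuration described above.
# Drive counts and sizes are from the HP setup; RAID overheads are the
# standard textbook values, not HP-published figures.

DRIVE_GB = 146

main_drives = 168 - 32   # assumes the 32 backup drives come out of the 168
backup_drives = 32       # drives in the backup disk group

# RAID 1 mirroring halves usable capacity.
main_usable_gb = main_drives * DRIVE_GB / 2

# If the backup group were instead RAID 5 (one parity drive per set of n),
# usable capacity is (n - 1)/n. An 8-drive RAID 5 set is a hypothetical choice.
raid5_set = 8
backup_usable_gb = backup_drives * DRIVE_GB * (raid5_set - 1) / raid5_set

print(f"Main group usable:   {main_usable_gb:,.0f} GB (RAID 1)")
print(f"Backup group usable: {backup_usable_gb:,.0f} GB (RAID 5, assumed)")
```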
Best Practice
One critical finding of this project was the recommendation to understand your specific environment before choosing an Oracle replication approach. Specifically, customers should evaluate five key attributes that define recovery goals and business objectives:
- Recovery point objective (RPO) – the amount of tolerable data loss;
- Recovery time objective (RTO) – the maximum time to recover from a primary site failure;
- Bandwidth of the intersite link and other traffic contention for the connection;
- Latency – the round-trip delay on the replication link;
- Workload – in particular the write intensity of the application and its peaks and valleys.
Understanding these business and technical attributes will lead to the correct choice of replication technology: synchronous, asynchronous, or variations of these (e.g., enhanced asynchronous).
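As an illustration of how these attributes interact, the sketch below maps them to a replication style. The latency and RPO thresholds are our own illustrative assumptions, loosely based on the latency guidance later in this newsletter, not HP-published cutoffs.

```python
def choose_replication(rpo_seconds: float,
                       round_trip_ms: float,
                       link_mbps: float,
                       peak_write_mbps: float) -> str:
    """Map recovery goals and link characteristics to a replication style.

    All thresholds here are illustrative assumptions, not vendor guidance.
    """
    if link_mbps < peak_write_mbps:
        # No technology choice rescues an undersized link.
        return "undersized link: add bandwidth before choosing a technology"
    if rpo_seconds == 0:
        # Zero data loss requires synchronous replication, which in turn
        # requires low round-trip latency (see the latency guidance below).
        if round_trip_ms <= 20:
            return "synchronous"
        return "zero-RPO not achievable at this latency; revisit RPO or distance"
    if rpo_seconds < 300:
        return "enhanced/fast asynchronous"
    return "asynchronous"

print(choose_replication(rpo_seconds=0, round_trip_ms=8,
                         link_mbps=622, peak_write_mbps=200))
```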
To ensure successful recovery, customers are advised to separate database files using two disk groups, comprising two groups on the array and two corresponding ASM disk groups. Place the online files in the main group and the backup files in the secondary group, and consider less expensive disk devices and protection schemes for the backup group if warranted. Note that if the flashback area is configured in Oracle 11g, Oracle will place a mirrored copy of the online redo logs in the backup disk group; this copy can be removed to ensure best performance.
The choice of replication technology can have performance impacts that customers should understand. Specifically, the amount of data pushed through the link, the link bandwidth, and the associated latencies can dramatically and detrimentally affect performance in a synchronous environment. Asynchronous replication will maintain performance as latencies increase but has the drawback of creating greater exposure to data loss as write data queues up in the write history log.
For Oracle 10g or 11g replication environments, customers should review HP's test data, and that of any other vendor, before acquiring technology in order to determine the configuration that best meets business requirements (see Replication Best Practices for Oracle 11g with ASM & EVA8x00).
Advice for administrators
Basic database tuning allowed HP to improve OLTP workload performance by approximately 15%. As well, choosing the appropriate bandwidth for your workload is fundamental; as an example, HP saw a 17% improvement in application performance when upgrading the link from OC6 to OC9. HP's findings suggest synchronous replication should be specified for latencies of 20 ms or less (ideally below 10 ms), with sufficient bandwidth so as not to negatively impact application performance. The rule of thumb of 1 ms of latency added for every 100 kilometers, over a base minimum of, say, 4 ms, is a reasonable starting point, but users should be warned that mileage will vary depending on the number of switches in the network routing, line noise, and a variety of other factors.
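That rule of thumb reduces to a trivial estimator, sketched below. As the caveat above notes, actual links will vary; the 800 km example distance is hypothetical.

```python
def estimated_round_trip_ms(distance_km: float, base_ms: float = 4.0) -> float:
    """Rule-of-thumb round-trip latency: a base minimum of ~4 ms plus
    ~1 ms per 100 km of link distance. Real links vary with switch
    count, line noise, and routing."""
    return base_ms + distance_km / 100.0

# A hypothetical 800 km link: 4 + 8 = 12 ms round trip, within the
# sub-20 ms (ideally sub-10 ms) guidance for synchronous replication.
print(estimated_round_trip_ms(800))
```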
In addition, the following specific guidelines for storage administrators warrant consideration:
- Create at least two disk groups with multiple LUNs for the database.
- Data consistency is critical. Create one or two data replication groups (or data consistency groups), depending on the number of applications being replicated.
- Balance multiple data replication groups across controllers.
- Avoid filling the write history log by sizing the log appropriately based on the RPO requirements of the business (see the sizing sketch after this list).
- Ensure the link between sites has adequate bandwidth for the workload being replicated.
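To make the log-sizing guidance concrete, the sketch below shows one way to reason about write history log capacity and asynchronous data-at-risk. The headroom factor and the input figures are illustrative assumptions, not HP-tested values.

```python
def min_write_history_log_gb(peak_write_mb_s: float,
                             outage_tolerance_min: float,
                             headroom: float = 1.5) -> float:
    """Size the write history log to absorb writes during the longest
    link outage or slowdown to be ridden out without forcing a full
    resynchronization. The headroom factor is an assumption."""
    burst_mb = peak_write_mb_s * outage_tolerance_min * 60
    return burst_mb * headroom / 1024

def data_at_risk_gb(avg_write_mb_s: float, replication_lag_s: float) -> float:
    """Approximate async exposure: data queued but not yet replicated.
    Compare this figure against the business RPO."""
    return avg_write_mb_s * replication_lag_s / 1024

# Hypothetical workload: 50 MB/s peak writes, 30-minute outage tolerance,
# 20 MB/s average writes with a 60-second replication lag.
print(f"Log size:     {min_write_history_log_gb(50, 30):.1f} GB")
print(f"Data at risk: {data_at_risk_gb(20, 60):.2f} GB")
```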
Action item: HP's CFT initiatives and those like it represent some of the best customer freebies in the business. Based on real-world, customer-initiated implementations, these best-practice guidelines can save substantial time and money and help users avoid critical mistakes. Storage executives managing projects should ask technical staff three questions: 1) Are such best practices available, and have you read them? 2) Are they being followed? 3) Where and why do you differ?
Planning for Oracle 11g Remote Replication with HP's EVA
Anyone contemplating remote replication must understand that the planning phase is the most important step. Getting it right from the start can save major hassles down the road. Disk array-based remote copy infrastructures have been available and in use for more than a decade. When properly configured and deployed, they are capable of providing a timely recovery from a variety of failures, including the loss of an entire data center site.
The place to start is establishing your RPO and RTO (recovery point objective and recovery time objective). Depending on the RPO, one then chooses synchronous or asynchronous replication. The choice of replication technology will have performance impacts that customers must understand. Specifically, the amount of data pushed through the link, the link bandwidth, and the associated latencies can dramatically and detrimentally affect performance in a synchronous environment. Asynchronous replication will maintain performance as latencies increase but has the drawback of creating greater exposure to data loss as write data queues up in the write history log. An aggressive RPO dictates synchronous replication and therefore a low-latency network link – typically a link of 100 miles or less.
Once that choice is made, the next planning phase is research. Read all the manuals, white papers, and case studies you can find, both from the vendors and from independent research such as Wikibon. One should, of course, speak with as many users as possible who have similar implementations. Asking industry analysts' opinions is also wise.
After that it becomes mostly a sizing and testing exercise. Since HP’s EVA falls in the mid-range category, will it have enough power for the full life of this application? Will it scale with growth? What happens if one of the EVA’s two controllers fails or is shut down for an upgrade? Can this planning and implementation be leveraged into a high-end array, or is it back to square one?
Next comes testing. Test, test, and test some more. Then test it again. If possible get HP to test your application at one of its centers or install test systems at your sites. Be sure to pursue extensive fault injection. Test not only fail-over, but also fail-back to the original site. Do this several times.
Action item: When it comes to array-based replication, up-front planning comes first and foremost. This should be followed by extreme testing. Users should free up and allocate extensive resources for such a project.
Organizing Oracle array-based replication projects for success
Array-based replication projects for Oracle, or any DBMS, require orchestration by senior-level management (i.e., a CTO or IT director). Starting with the administrator roles, executives must get the normally stove-piped and isolated database, server, and storage administrators working together and communicating. As one example, these roles all have performance-related activities that can be made more productive through better communication. The DBA must optimize disk I/O, properly set up automatic storage management (ASM) disk groups, properly size the redo logs, and control the frequency of log switches to minimize system waits. The server admin must provide path availability for optimal performance, and storage administrators must ensure that links between sites can provide adequate performance for the workloads being replicated.
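As one concrete example of the DBA's redo-log task, the sketch below sizes online redo logs from the peak redo generation rate. The 20-minute switch-interval target is a common rule of thumb, not a figure from HP's testing, and the example workload is hypothetical.

```python
def redo_log_size_mb(peak_redo_mb_per_min: float,
                     target_switch_minutes: float = 20.0) -> float:
    """Size online redo logs so that log switches occur no more often
    than the target interval at peak redo generation; overly frequent
    switches inflate checkpoint activity and system waits. The
    20-minute target is a common rule of thumb, not a mandate."""
    return peak_redo_mb_per_min * target_switch_minutes

# Hypothetical workload generating 25 MB of redo per minute at peak:
print(f"{redo_log_size_mb(25):.0f} MB per online redo log")
```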
Beyond these critical roles, application owners need to provide business input on RPO and RTO and assist in communicating what other applications are being replicated to determine how and if supporting infrastructure and processes can be leveraged. Furthermore, a business case must be made to determine if remote sites are staffed, which will decrease RTO but represents an added expense for the organization that must be justified, funded and maintained.
Action item: Organizations must assign senior-level management explicit responsibility for coordinating the efforts of key technical and business stakeholders in Oracle array-based replication projects. Deploying replication solutions that are not optimized for performance and business resilience, without input from key stakeholders, can create the illusion of protection until disaster strikes and it is too late.
Integrating replication into an application
Integrating replication into the application is a study in the art of the possible. The ideal solution for the business unit application will often be remote replication over long distances with zero loss of data and instant recovery. Physics and dollars will usually dictate compromise.
Remote replication is about recovering from a disaster at the primary site. No application is an island; the first priority is to understand the application and its ecosystem of supporting applications. The next stage is to understand the business impact of an outage and of data loss on this application and its ecosystem. This business impact has to be expressed in dollars and the expected loss (probability of loss x loss) calculated. Then the impact of different solutions on loss, expected loss, and recovery time has to be estimated, along with the cost of those alternatives.
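The arithmetic here is simple enough to sketch. Only the expected-loss formula comes from the discussion above; every figure below is invented purely to illustrate how alternatives can be compared.

```python
def expected_annual_loss(outage_probability: float,
                         outage_cost: float,
                         data_loss_probability: float,
                         data_loss_cost: float) -> float:
    """Expected loss = probability of loss x loss, summed over the
    outage and data-loss scenarios for one solution alternative."""
    return (outage_probability * outage_cost
            + data_loss_probability * data_loss_cost)

# Hypothetical comparison of two alternatives (all figures invented):
# synchronous replication costs more to run but leaves near-zero
# expected data loss.
alternatives = {
    "synchronous":  {"annual_cost": 500_000, "expected_loss":
                     expected_annual_loss(0.05, 200_000, 0.001, 1_000_000)},
    "asynchronous": {"annual_cost": 150_000, "expected_loss":
                     expected_annual_loss(0.05, 400_000, 0.05, 1_000_000)},
}
for name, alt in alternatives.items():
    print(name, "total exposure:", alt["annual_cost"] + alt["expected_loss"])
```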
Action item: Integrating replication into an application requires solid communication between the business and IT and a rich toolkit of alternative approaches, including host-based replication, array-based asynchronous and synchronous replication, and different communication methods. Calculation and communication are the staples of such projects.
HP's "Cookbook" Planning Guide for Oracle11G Remote Replication Hits the Mark
A previous Wikibon article asked, "Why don't more storage vendors write cookbooks?" The obvious response from some vendors was that they are expensive and don't necessarily yield immediate returns.
Yet for years, many vendors have made significant investments in this area. IBM has for decades published its excellent "Redbooks," which set the standard. EMC also has some great recipes, though they are often hard to find. Now, as witnessed by this white paper, HP's array-based replication cookbook, and by its Customer Focused Testing (CFT) effort, HP is producing good stuff as well.
As we have seen, having the right, detailed, open and complete information can produce an excellent return compared with traditional vendor marketing hype. This approach can provide a terrific opportunity to build relationships with key clients and educate practitioners on specific solutions. It helps customers. It helps marketing.
The secret, of course, is to choose the right best-practice development in which to invest. We think this has been a sore point with some vendors that did not embark with sufficient discipline and commitment. They also had poor processes for getting the information into the field.
Action item: Users should always ask if a vendor has a “cookbook” for a proposed solution. Vendors should write more of them, and perhaps more importantly, be sure the world knows they exist. Users should expect this quality of information and level of investment from their suppliers.
Leveraging installed assets for remote replication projects
Remote replication doesn't help get rid of stuff (GRS) per se. However, depending on RPO and other business considerations, users should employ common-sense approaches to array-based remote replication to improve utilization. Specifically, the use of installed equipment such as older arrays and slower CPUs often makes sense at the B site (although users should be aware this will not improve the carbon footprint of a data center). As well, more economical approaches to backup disk groups include slower-spin-speed devices, lower-cost protection schemes (e.g., RAID 5 versus mirroring), and single-instance databases rather than Real Application Clusters at the backup site.
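A quick sketch shows why cheaper protection schemes at the B site matter. The RAID efficiency factors are the standard textbook values; the target capacity, drive size, and RAID 5 set size are hypothetical.

```python
import math

def drives_needed(usable_tb: float, drive_tb: float, raid: str,
                  raid5_set: int = 8) -> int:
    """Physical drives required to deliver a target usable capacity.
    RAID 1 yields 50% usable space; RAID 5 yields (n - 1)/n for an
    n-drive set. These are textbook figures, not array-specific ones."""
    efficiency = 0.5 if raid == "raid1" else (raid5_set - 1) / raid5_set
    return math.ceil(usable_tb / (drive_tb * efficiency))

# Hypothetical 10 TB usable backup group on 146-GB (0.146 TB) drives:
print("RAID 1:", drives_needed(10, 0.146, "raid1"), "drives")
print("RAID 5:", drives_needed(10, 0.146, "raid5"), "drives")
```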
Action item: Organizations should start array-based replication projects by understanding the business and technical requirements and ensuring the RPO is clearly understood, with strategies in place to achieve objectives. This approach will immediately define the amount of wiggle room IT professionals have in managing costs. Leveraging installed assets and lower-cost implementation approaches at the B site (e.g., less expensive disks) are common ways of keeping costs in line while still meeting business needs.