Introduction
Wikibon has emphasized that Infrastructure 2.0 provides IT as a service to the businesses that IT serves. Included in that vision is application data protection that can be tuned to the budgets and requirements of individual lines of business and departments in a virtualized environment.
Traditional backup technologies require a backup window which is increasingly an impediment to the data protection SLAs demanded by the business. Array-based replication technologies provide the SLAs but are costly and difficult to manage. The combination is unlikely to be adequate to meet the requirements of Infrastructure 2.0 in many installations.
Wikibon believes that a new model of data protection is emerging, based on underlying snapshot technologies: consistent snapshots are taken on a regular basis, and the incremental data since the last snapshot is moved to another site. How often the snapshots are run, how quickly they are transferred, and how quickly the data can be recovered remotely can all be varied to meet the business's recovery-time and budget requirements. Most important of all, this model eliminates the backup window, offering a far more flexible and cost-effective strategy for meeting Infrastructure 2.0 requirements.
FalconStor’s data protection vision is in line with these requirements. The announcement of RecoverTrac is a significant enhancement to that vision, in particular because it provides automated fail-over and fail-back.
Traditional Data Protection tied to the Backup Window
Traditional data protection software and methodologies still use a backup window model, during which a consistent set of data is copied from the production systems to the backup systems. Most systems today have a mixture of local disk to improve recovery times and remote copies of data (tape or disk) to guard against a true disaster to the primary production site.
Technologies such as de-duplication have been used to reduce the cost of local backup copies. Software techniques such as incremental backup have also helped to reduce the backup window, and storage-array techniques such as wide-striping and snapshots have improved the speed of backup within the backup window, but the fundamental model has remained the same.
Data Protection Metrics
The two metrics of data protection, RPO (recovery point objective) and RTO (recovery time objective), remain unchanged. RPO measures how much data is lost; RTO measures how quickly the system can be restored. Backups are usually run daily. For a primary-site disaster protected only by daily backups, the average amount of data lost can be estimated as half the sum of 24 hours and the time taken to move the data offsite. In practice this RPO is usually 18-24 hours, and the maximum amount of data lost is about two days. Recovery times (RTO) from a primary-site disaster are also usually measured in days.
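The arithmetic above can be made concrete with a small sketch. The function names are illustrative only, not from any product:

```python
def average_rpo_hours(backup_interval_h, offsite_transport_h):
    # Average data lost = half of (backup interval + time to get the copy offsite).
    return 0.5 * (backup_interval_h + offsite_transport_h)

def maximum_rpo_hours(backup_interval_h, offsite_transport_h):
    # Worst case: the disaster strikes just before the next backup lands offsite.
    return backup_interval_h + offsite_transport_h

# Daily backups with 12-24 hours to move the copy offsite reproduce the
# figures in the text: an average RPO of 18-24 hours, maximum near two days.
print(average_rpo_hours(24, 12))   # 18.0 hours
print(average_rpo_hours(24, 24))   # 24.0 hours
print(maximum_rpo_hours(24, 24))   # 48.0 hours, i.e. about two days
```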
Array Replication
The traditional method of improving the RPO and RTO of specific applications is replication based on storage arrays, which allows consistent copies of data to be replicated to a local and/or remote site, either synchronously or asynchronously. These techniques and the associated array software allow much shorter RPO and RTO times for specific applications but are very expensive (about five times the cost of a system without replication). On its own, storage-array replication is not disaster recovery: turning the replicated data into a true data protection mechanism requires significant bespoke work and testing.
There are some application-based recovery products, such as Oracle's Data Guard. However, most high-availability solutions today use replication technologies based on storage arrays.
A New Snapshot Model of Data Protection
A new model of data protection that fits the Infrastructure 2.0 requirements much better is to base data protection on snapshots of data that contain the incremental changes, copy these snapshots to another site, and use the replicated snapshots for recovery. This eliminates the requirement for a backup window. Backups can still be taken if required (e.g., as a recovery of last resort) but no longer interfere with production.
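The mechanics of this model can be sketched as follows. This is a toy illustration of snapshot-based incremental replication, not any vendor's API; the helpers and the dict-of-blocks "volume" are assumptions for the example:

```python
def take_snapshot(volume):
    # A consistent point-in-time copy (toy model: a plain dict copy).
    return dict(volume)

def incremental_delta(prev_snap, curr_snap):
    # Only blocks that changed since the previous snapshot need to travel.
    return {blk: data for blk, data in curr_snap.items()
            if prev_snap.get(blk) != data}

def replicate(remote, delta):
    # Apply the changed blocks at the recovery site.
    remote.update(delta)

# Two snapshot cycles on a toy block device.
volume = {0: "boot", 1: "db-page-A"}
remote = {}

snap1 = take_snapshot(volume)
replicate(remote, snap1)            # initial full copy to the remote site

volume[1] = "db-page-A2"            # production keeps writing...
volume[2] = "db-page-B"             # ...with no backup window needed

snap2 = take_snapshot(volume)
delta = incremental_delta(snap1, snap2)
replicate(remote, delta)            # only blocks 1 and 2 are transferred
assert remote == snap2              # recovery site matches the latest snapshot
```

Shortening the snapshot interval improves RPO at the cost of more frequent transfers, which is exactly the per-application tuning knob the model offers.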
On their own, snapshots are not sufficient for data protection; software is also required to manage the copies and to orchestrate recovery from them. One product that provides this software is FalconStor’s Continuous Data Protector (CDP). Within this environment, the FalconStor RecoverTrac product allows definition of all IT resources, including VMware and Hyper-V hypervisors; the source and target of the recovery can be either virtual or physical1. Recovery procedures can then be automated by RecoverTrac.
Fail-over is the normal method of starting systems after a disaster, and fail-back is the method of cutting back to the original site. Fail-back, however, is very complicated and, in traditional array-based replication, is offered only with synchronous replication. Most businesses require longer distances between sites than synchronous replication supports. As a result, true fail-back is almost never tested.
The provision of automated fail-back is a significant enhancement and will provide IT installations with a method of fully testing disaster recovery scenarios.
Conclusions
Wikibon believes that Infrastructure 2.0 requires a new snapshot-based model of data protection. The introduction of FalconStor’s CDP and RecoverTrac technologies gives users a greater choice of technologies to meet this important strategic requirement.
Action Item: Senior IT executives should plan to introduce snapshot-based models of data protection as part of planning for Infrastructure 2.0. FalconStor’s CDP and RecoverTrac technologies should be included in any evaluation of modern data protection strategies.
Footnotes: