Most continuous data protection (CDP) solutions capture byte- or block-level rather than file-level differences. This means that if you change one byte of a 100 GB file, only the changed byte or block is backed up. Traditional incremental and differential backups, like file-level backup, copy entire files.
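A minimal sketch of block-level change detection makes the difference concrete. This is illustrative only, not the mechanism of any particular product: it assumes fixed 4 KiB blocks (real CDP products may track changes at byte granularity or with variable block sizes), and the function names are invented for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size


def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block so changed blocks can be identified."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old: bytes, new: bytes) -> list:
    """Return (block_index, block_bytes) for each block that differs."""
    old_h = block_hashes(old)
    new_h = block_hashes(new)
    changes = []
    for i in range(0, len(new), BLOCK_SIZE):
        idx = i // BLOCK_SIZE
        if idx >= len(old_h) or new_h[idx] != old_h[idx]:
            changes.append((idx, new[i:i + BLOCK_SIZE]))
    return changes


# A 1 MiB file (256 blocks) with a single byte changed yields one dirty
# block to back up, not a full-file copy.
old = bytes(1024 * 1024)
new = bytearray(old)
new[500_000] = 0xFF
deltas = changed_blocks(old, bytes(new))
print(len(deltas))  # prints 1
```

A file-level backup of the same change would copy the entire 1 MiB; the block-level approach ships 4 KiB.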
For any backup or replication process there is always a first pass in which the initial copy of the data is created. This must, of course, be a full copy, and the process is called seeding. For local copies this is usually not a problem. For remote copies, however, bandwidth becomes the limiting factor: seeding can take hours, days, or weeks, and it can interfere with other traffic on the network. In some instances, users have made the initial seed copy on tape, portable disk, or even a whole appliance and then transported the copy to the central site to act as the seed.
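The "hours, days, or weeks" claim is easy to verify with back-of-the-envelope arithmetic. The sketch below is an assumption-laden estimate, not a vendor formula; the 50% utilization factor is an invented hedge for protocol overhead and competing traffic.

```python
def seed_time_days(data_bytes: float, link_bps: float,
                   utilization: float = 0.5) -> float:
    """Estimate wall-clock days to seed a full copy over a WAN link.

    utilization is an assumed fraction of the link usable by the seed
    transfer (protocol overhead, sharing with other traffic).
    """
    seconds = data_bytes * 8 / (link_bps * utilization)
    return seconds / 86400


# 10 TB over a 100 Mb/s link with half the bandwidth usable:
print(round(seed_time_days(10e12, 100e6), 1))  # prints 18.5 (days)
```

At that rate, shipping the seed on portable media and replicating only deltas thereafter is clearly attractive.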
However, a portable copy does not typically work with block-based replication, because the replicating system has no knowledge of the file system. The blocks may be faithfully copied, but their relation to the file system is lost and the data is not usable. One exception is when a disk image can be made and the target hardware closely matches the source hardware.
Another approach is to obtain a temporary increase in bandwidth from the carrier(s), but such increases are not usually available.
So users often just seed over the network, applying "intelligent" throttling to avoid impacting other traffic; QoS techniques can also be used. However, early versions of CDP products have no such intelligent throttling: they offer no provisions, crude provisions, or totally manual provisions. Sophisticated products, which usually have been in the field for quite a while, provide advanced policy-based automation not only for seeding but also for restarts, network outages, and bandwidth shortages.
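One common building block for such throttling is a token bucket, which caps average send rate while allowing short bursts. The sketch below is a generic illustration, not taken from any CDP product; class and parameter names are invented, and a real policy engine would adjust the rate on a schedule (for example, full speed overnight) and react to restarts and outages.

```python
import time


class TokenBucketThrottle:
    """Token-bucket rate limiter for seeding traffic (illustrative).

    rate_bps: allowed average bytes per second.
    burst:    largest instantaneous burst permitted, in bytes.
    """

    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def throttle(self, nbytes: int) -> None:
        """Block until nbytes may be sent without exceeding the rate."""
        while True:
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last call.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((nbytes - self.tokens) / self.rate)


# Usage: cap the seed transfer at 1 MB/s so it does not starve other
# traffic; call throttle(len(chunk)) before each send.
tb = TokenBucketThrottle(rate_bps=1_000_000, burst=65_536)
tb.throttle(65_536)  # first burst passes immediately
```

A policy-driven version would also persist its position so an interrupted seed restarts where it left off rather than from byte zero.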
Action Item: Vendors should include sophisticated intelligent throttling in version 1.0 of their products.