Originating Author: Aaron Bowers
ITworld needed to deploy five identical systems, each with two large hard disks configured as a hardware-controlled RAID 1 mirror, running Linux on the EXT3 file system. We tried two approaches before we got the results we wanted. Here's more detail on our experience.
Our first attempt was to install and configure one model system using scripting technologies, add supplemental software and configuration information by hand as needed, create a hard disk image with Ghost or G4U, and then distribute that image to the other four target systems simultaneously over the network.
The creation of the model system using a combination of KiXtart and manual tweaking went smoothly. Creating the Ghost image, however, took around 30 hours because Ghost cannot recognize and skip free space in the EXT3 file system, so it copies every sector. Distributing this image to the other four systems appeared to proceed as expected, although that step also took 30 hours. Unfortunately, and unknown to us at the time, there was an error in the image that prevented any of the ghosted machines from booting. Not wanting to risk wasting another 30 to 60 hours, we decided to try something a little different.
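Had we verified the image before (and after) distributing it, that boot-breaking error would have surfaced before we spent the second 30 hours. Here is a minimal sketch of the idea using generic tools rather than Ghost's own format; the device path and share location are hypothetical:

```python
#!/usr/bin/env python3
# Image a raw block device and record a checksum so a corrupt image
# is caught before it is distributed. Like Ghost on EXT3, this reads
# every sector, free space included; the gain is the recorded hash,
# which each target can compare against its restored disk before
# attempting to boot. Requires root to read the device.
import gzip
import hashlib

SOURCE = "/dev/sda"                # hypothetical: the model system's disk
IMAGE = "/mnt/share/model.img.gz"  # hypothetical network share

def image_disk(source, image):
    """Stream the raw device through gzip, hashing the raw bytes."""
    digest = hashlib.sha256()
    with open(source, "rb") as dev, gzip.open(image, "wb") as out:
        while chunk := dev.read(1 << 20):  # 1 MiB at a time
            digest.update(chunk)
            out.write(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    checksum = image_disk(SOURCE, IMAGE)
    # Save the checksum alongside the image for the targets to verify.
    with open(IMAGE + ".sha256", "w") as f:
        f.write(checksum + "\n")
    print("image written:", checksum)
```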
For our second attempt, we again used scripting technologies to install and configure one model system (SystemA) and added supplemental software and configuration information by hand. We then shut the system down and pulled both of its mirrored drives (A0 and A1). Next, we pulled the slot 0 drive from two of the target systems (SystemB and SystemC), set those drives aside, and installed A0 and A1 in their place. Invoking the hardware RAID controller's rebuilding utility then copied each model drive onto the remaining disk in slot 1 of its target. Once that rebuild completed, we pulled the model drives to seed the next pair of targets, moved each freshly rebuilt copy from slot 1 into slot 0, returned the set-aside drives to slot 1, and initiated a new rebuild. Either the model drives or the freshly rebuilt copies can seed slot 0 of the two remaining target systems (SystemD and SystemE); we used the model drives, as shown below. In the diagram, drives are named by system and slot, and X -> Y means the controller rebuilds the contents of drive X onto drive Y:
Given:    SystemA: A0 A1

Round 1:  SystemB: A0 -> B1    SystemC: A1 -> C1
Round 2:  SystemD: A0 -> D1    SystemE: A1 -> E1
  while:  SystemB: B0 -> B1    SystemC: C0 -> C1
Round 3:  SystemD: D0 -> D1    SystemE: E0 -> E1
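The pattern above generalizes: the two model drives seed two fresh systems per round, and each freshly seeded system finishes itself internally during the following round. Here is a sketch of that schedule for an arbitrary list of targets, using the same slot-naming convention as the diagram (the 3.5-hour figure is simply our observed rebuild time; yours will vary):

```python
#!/usr/bin/env python3
# Generate the seed-and-rebuild schedule for N identical targets,
# assuming two seed drives (the model's mirror members) and the
# SystemX slot 0/slot 1 naming used in the diagram above.

REBUILD_HOURS = 3.5  # our observed per-round rebuild time

def schedule(targets):
    seeds = ["A0", "A1"]   # the model system's two members
    pending_internal = []  # systems still owing their own internal rebuild
    remaining = list(targets)
    round_no = 0
    while remaining or pending_internal:
        round_no += 1
        steps = []
        # Each seed drive goes into slot 0 of one fresh system, and the
        # controller rebuilds it onto that system's slot 1 disk.
        seeded_now = []
        for seed in seeds:
            if remaining:
                name = remaining.pop(0)
                steps.append(f"System{name}: {seed} -> {name}1")
                seeded_now.append(name)
        # Systems seeded last round now hold the copy in slot 0 and
        # their original (set-aside) disk in slot 1; rebuild internally.
        for name in pending_internal:
            steps.append(f"System{name}: {name}0 -> {name}1")
        pending_internal = seeded_now
        print(f"Round {round_no} ({REBUILD_HOURS} h):  " + "   ".join(steps))
    print(f"Total rebuild time: {round_no * REBUILD_HOURS} h")

schedule(["B", "C", "D", "E"])
```

For our four targets this prints the three rounds diagrammed above, 10.5 hours of rebuilding in all; presumably the balance of our 12-hour total went to the drive swaps and checks between rounds.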
Each round of rebuilding took 3.5 hours, and at the end of each round you could check success or failure with certainty. Using this technique, duplicating the model system to the four other systems took a total of 12 hours. Compare that to 60+ hours with Ghost or similar technologies. What's more, each machine remains online and available the entire time its array is rebuilding. Although you probably wouldn't want to change content on a system working off the original seed drives, you could certainly change the content on Systems B through E during Rounds 2 and 3 above without worry. On the RAID controllers we used, it was also possible to give priority either to rebuild requests or to normal I/O. Depending on your situation, you might want to tweak those settings to expedite the rebuild; on our systems, giving priority to the rebuild process under a normal load only shaved about ten minutes off the 3.5 hours.
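Our controllers exposed that priority setting through a vendor utility, so there is no universal command to show. As a software analogue, Linux's md RAID exposes the same rebuild-versus-I/O trade-off through sysctl, and rebuild progress appears in /proc/mdstat. A rough sketch, assuming a software RAID rather than our hardware setup:

```python
#!/usr/bin/env python3
# Software-RAID (md) analogue of the hardware controller's rebuild
# priority setting; run as root on a system with an md array.
import time

# Minimum guaranteed rebuild rate in KB/s per disk (default 1000).
# Raising it favors the rebuild over normal I/O, much like the
# controller setting described above; lowering it favors foreground
# traffic.
with open("/proc/sys/dev/raid/speed_limit_min", "w") as f:
    f.write("50000\n")

# Poll /proc/mdstat until no rebuild or resync is in flight.
while True:
    with open("/proc/mdstat") as f:
        status = f.read()
    print(status)
    if "recovery" not in status and "resync" not in status:
        break
    time.sleep(60)
```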
Action Item: There are a number of techniques for deploying a model system configuration to systems with identical hardware, but some are far more time-consuming, labor-intensive, and prone to failure than others. Using the RAID 1 members of a reference Linux system as seed drives for the other RAID 1 arrays in the same group of systems proved to be an easy, rapid, and reliable method of system deployment.

