Is it better to ask whether protection from failure is an appropriate goal? Is it better to ask how we develop systems that expect certain, or even continuous, failure yet still enable continuous access? Would it be more useful to drop failure from our thinking in favor of something else, for example, availability assurance based on client-provided mechanisms? Is one of our problems that we are approaching this from a vendor-provided technology perspective (RAID is our technique) and failing (pun intended) to see the problem properly by not viewing it from the client-side perspective? Is not another problem with our thinking that we are not accounting for context, which is a way to identify the priority for protection or assurance?
In the Web-oriented world of today and tomorrow, the bigger picture is essential to understanding what IT service delivery profile is required. Maybe if we just renamed RAID to be a Redundant Array of Instantiations of Data, we would move away from the device-silo level of thinking. For example, in the sharing-oriented world of the Web, the data protection or data availability peer may be a competitor, a customer, a vendor, or all three; that would be a new chunk size for present protection mechanisms to try to grapple with.
In closing, the Peer Incite summary states that innovation is burgeoning. I am not sure that is the case. I think that instances of invention are burgeoning (more devices, more software, more systems), but because the process of innovation is stagnating, the resulting inventions are not advancing the industry very much.
Mike Alvarado