Cloud Provider ISVs (CPIs) have become essential IT providers for many organizations, offering applications that range from sales force optimization (e.g., Salesforce.com) to individual productivity tools (e.g., Microsoft 365, Google Apps) to IT workflow management (e.g., ServiceNow). The number of applications available from CPIs is expanding dramatically; with or without IT involvement, cloud is becoming a significant part of both shadow and budgeted IT.
By exploiting economies of scale, hyperscale architectures based on scale-out commodity hardware that is expected to fail, open-source software, and DevOps organizations, CPIs can lower the barriers to adoption and deliver application value to the business at an accelerated rate. In addition, low-latency storage is removing the barriers to integrated, higher-function systems that can adapt to the workflow an organization requires, rather than the traditional approach of changing workflows to fit the ISV package.
The migration to any new IT platform is never easy, and it is impossible to predict which CPI applications will be available at any given time in the future. However, some clear principles can be extracted to guide how topologies of legacy and CPI applications should co-exist, and how to take advantage of the massive volume of big data and data streams that the universes of people and things will generate. Ten guiding principles:
- Active data and metadata will migrate from the SAN to the server, and data latency will improve from milliseconds to microseconds to nanoseconds (from hundreds of miles, to hundreds of yards, to just feet away). Keep all active data as close together as possible on flash, and keep servers as close as possible to that data.
- CPIs will offer the majority of innovative applications. Move your legacy systems to the same mega-datacenter as your most important CPIs: back-hauling data into one location will reduce cost, increase security, and increase the value of the combined data. Cloud-bursting within a datacenter may make sense; cloud-bursting between datacenters is ill-advised.
- Choose a mega-datacenter that supports an in-house ecosystem of CPIs relevant to the industry served, including competitors.
- Choose mega-datacenters that offer business continuity and disaster recovery as a service, including synchronous protection of new data as well as geographically distributed asynchronous protection. This service should be integrated across legacy systems and cloud service providers.
- Choose mega-datacenters that offer multiple competing telecommunication vendor services to all major organizational hubs.
- Choose a mega-datacenter that also has cloud data and stream providers for the relevant industry.
- Overall, focus on where the data is, and always bring processing to the data where possible. Avoid data sprawl and CPI sprawl; if a CPI does not offer cloud services in your mega-datacenter, choose a CPI that does.
- Ensure that data sources about your competitors are as close as possible, even closer than internal data.
- A very high proportion of data should flow from active to passive storage; expansive metadata retained in active storage should minimize transfers in the opposite direction. Movement of large amounts of data over networks is a sign of a fundamentally weak data infrastructure architecture and should be avoided.
- Passive data should be placed on the lowest cost magnetic media with appropriate geographic distribution to ensure security, immutability, provenance and integrity. The metadata should allow creation of multiple logical views of data (e.g., a backup view, an eDiscovery view, etc.) without physical data movement.
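The latency tiers in the first principle follow directly from propagation-delay arithmetic. A minimal Python sketch, assuming signals travel at roughly two-thirds the speed of light in fiber or copper (about 200,000 km/s); real latencies add switching, protocol, and media overheads on top of this floor:

```python
# Approximate signal speed in fiber/copper (~2/3 the speed of light).
C_FIBER_KM_PER_S = 200_000

def round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay in microseconds for a given distance."""
    return 2 * distance_km / C_FIBER_KM_PER_S * 1_000_000

# Hundreds of miles (remote SAN or DR site), hundreds of yards (same
# campus), and feet away (flash inside the server).
for label, km in [("300 km, remote site", 300),
                  ("300 m, same campus", 0.3),
                  ("3 m, in the server", 0.003)]:
    print(f"{label}: {round_trip_us(km):,.3f} microseconds round trip")
```

Even before any protocol overhead, the remote site costs milliseconds, the campus costs microseconds, and the in-server path costs tens of nanoseconds, which is the physical basis for keeping active data and compute together.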
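The last principle's "multiple logical views without physical data movement" can be made concrete with a small metadata catalog. A minimal Python sketch with hypothetical names: each record describes an immutable object on low-cost media, and a view (backup, eDiscovery, etc.) is just a metadata filter over the same records, so the underlying bytes never move:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ObjectRecord:
    """Metadata for one immutable object on low-cost passive media."""
    object_id: str
    location: str            # where the bytes physically live
    sha256: str              # integrity / provenance checksum
    tags: frozenset = frozenset()

@dataclass
class Catalog:
    records: list = field(default_factory=list)

    def add(self, rec: ObjectRecord) -> None:
        self.records.append(rec)

    def view(self, *required_tags: str) -> list:
        """A logical view is a metadata filter; no data is copied or moved."""
        return [r for r in self.records if set(required_tags) <= r.tags]

# Hypothetical objects on geographically distributed media.
cat = Catalog()
cat.add(ObjectRecord("obj-1", "vault-east/tape-17", "ab12...",
                     frozenset({"backup"})))
cat.add(ObjectRecord("obj-2", "vault-west/tape-03", "cd34...",
                     frozenset({"backup", "ediscovery"})))

backup_view = cat.view("backup")        # both objects
legal_view = cat.view("ediscovery")     # only obj-2
```

The checksum and location fields illustrate how provenance and integrity can live in the metadata layer, while each new compliance or recovery view is created by adding tags rather than copying data.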
The migration to data-movement minimalism will require compromises and pragmatism along the way, but adherence to the guiding principles should be unwavering.
Action Item: CIOs and CTOs should ensure that data architects are the main drivers of infrastructure strategy, and that expensive and slow data movement over distance is minimized.