In its heyday, IBM controlled 50% of the IT industry’s revenue and a whopping two-thirds of its profit. I remember in the 1980s, when IBM would release its annual report and 10-Ks, we used to pore through the documents looking for clues as to how its lines of business were performing. Because I was a storage analyst at the time, I was seeking any guidance on that segment of the company’s business so that my forecasts could be somewhat consistent with the industry’s leader. In short – IBM was what mattered most in the storage business.
Here’s one way to look at how the storage business evolved in the past 20 years. The arrows indicate the way in which value shifted between the various segments.
In the early 1990s, it became obvious that the structure of the entire IT business was shifting from a vertically integrated model to one where competition occurred at each layer of the value chain. This had clear implications for storage generally and IBM specifically.
In a storage context, value (and function) began to flow away from servers and into storage systems, storage software and storage services. Servers became “de-frilled,” and while device makers like Seagate still thrived, razor-thin margins forced massive consolidation – from 50+ players down to just a handful today. Meanwhile, storage systems, led by EMC’s moves and the trend to “open systems,” spawned a new growth category.
From IBM’s perspective, under storage head Ray Abuzayyad, with the blessing of CEO John Akers, IBM created a separate storage division called ADSTAR (Advanced Storage and Retrieval). The conventional wisdom of the day held that by spinning out separate divisions along the “dis-integrated” value chain of the IT business, IBM could better focus on growth segments, invest in product leadership, properly incent sales teams and win in the market.
CEO Lou Gerstner quashed that idea and made a monumental decision to lead with services and a single point of customer contact. It worked for IBM as a whole, but product leadership suffered, especially in storage. Many argue that had IBM continued with the Akers plan, it would have maintained leadership in several product categories. It’s hard to say, although it’s highly likely the quality of customer experiences would have declined significantly.
The Next Twenty Years
During this period of massive disruption for IBM’s storage division, IBM executives have cited the advantages of IBM’s deep R&D, its full portfolio and the fact that it is an end-to-end systems supplier. Many at IBM have forecast that the pendulum would eventually swing back in IBM’s favor. Thus far it hasn’t, at least as measured by market performance. The question is whether it will going forward.
IBM’s most recent quarter ending December 2012 showed IBM’s storage business declined 5% to $1.1B—a shadow of its former greatness. The fact is that IBM’s storage group hasn’t optimized its relationship with other parts of the organization. Despite being integrated as part of the Systems and Technology Group (STG) and under Steve Mills’ software group, cross-group synergies have been elusive for IBM.
[In fairness to the storage folks, the storage software business (Tivoli) has done quite well at IBM and is generally considered a leading solution. Alas, storage software is not counted under IBM’s storage division. Hence, as has often been the case, IBM’s internal organization is one of its biggest challenges.]
I’ve had many “discussions” with the likes of Tom Georgens and David Scott (when he ran 3PAR as an independent company), arguing that in fact the pendulum was swinging back toward integrated systems suppliers and that these companies would begin to thrive in storage, leveraging synergies and convergence. So far I’ve been wrong, if market share is the key metric.
So how should IBM proceed in storage? Let’s start by looking at the macro trends impacting the storage business.
There are five major disruptions we’re tracking today that will directly impact competition, strategies and actions:
- The trend toward hyperscale computing by large Internet players;
- Big Data as a source of competitive value, which increasingly carries more weight than the perception that data growth is an expense to be contained;
- The “de-frilling” trend we saw in servers, combined with virtualization, migrating to the network and storage layers and creating a “software-led” paradigm shift that is driving the reconsolidation of infrastructure resources;
- Flash as a persistent storage medium, which is completely changing the mental model of how storage is architected and how applications will be developed, while shifting computing bottlenecks from spinning disks to networks;
- Everything-as-a-Service, which is creating new and powerful distribution channels that put the Amazons of the world on a collision course with traditional enterprise markets.
The implications of these trends are nuanced, but suffice it to say that initiatives like Facebook’s Open Compute Project are revolutionary and a harbinger for the data center of the future. Rich software running on “disposable” commoditized hardware is more a long-term trend than an outlier. In turn, the move to converged infrastructure is more evolutionary – a stepping-stone that allows enterprise IT to reduce labor costs and somewhat keep pace with the efficiencies of massive scale-out infrastructures.
The five trends cited could lead to a picture that looks something like this:
In the diagram, the past has been characterized by purpose-built storage services tightly coupled with specialized hardware. The right-hand side of the diagram shows the traditional storage OS being subsumed by the OS and hypervisor services of the system (i.e. Windows, Linux, VMware). Drivers will invoke specific storage services, which will be layered on top of commodity hardware. Over time (within three to five years), suppliers will extract these storage services from “the box” and sell them as independent offerings.
The implication of the diagram is that all active data will be served from flash; that “fast servers,” not “slow storage,” will control key metadata; and that differentiation, value and profits will come from storage software. Specifically, the classical storage array as we know it is passé.
Implications for IBM
IBM has a ridiculously large storage portfolio. It simply can’t adequately fund the development and marketing of such a vast array of products. So IBM must focus.
Ambuj Goyal is the new head of IBM’s storage business. Do a quick read of his bio and you’ll see this guy has a strong systems, silicon and development background. So he has the chops to make changes. In my view there are some tactical and strategic things he should undertake.
At the risk of sounding banal here are some thoughts on how IBM should approach its storage business.
1. Focus Marketing on Growth. IBM should divide its portfolio into three categories: Grow, Integrate and Cash Cows. It should place 90% of its marketing muscle into Grow.
- Grow should comprise everything branded Storwize – the V7000, V7000U and the V3700. As well, Grow should include the other “hot” products, including EasyTier, Real-time Compression (RtC) and the “Storage Hypervisor” (i.e. SVC).
- Integrate should include all the stuff that IBM can leverage with other parts of the company that really haven’t taken off and/or are slower growth opportunities. Things like GPFS, SONAS, LTFS and probably OEM stuff like the N-Series (from NetApp).
- Cash Cows should include the DS-Series products, the rest of tape and XIV. Minimize R&D expense, minimize marketing spend and maximize revenue for as long as possible.
Now, maybe this mix isn’t perfect, but you get the point. Put all the marketing wood behind a limited set of products that are clearly growing.
2. Focus R&D on Growth. A senior storage exec at IBM (who shall remain nameless) once told me that it’s far more profitable for him to integrate with other parts of the organization’s portfolio (e.g. some DB2 or Z-Series function) than to chase EMC’s product feature rollouts. OK… maybe given IBM’s services heft that’s true. But I can’t help thinking that, with all of IBM’s R&D, some earth-shattering innovation is possible that could dramatically improve its position. SVC is a good example. I’d like to see more of those.
3. Figure out Flash. The TMS acquisition is nice, but somehow I feel IBM could do so much more in this important area. Near-to-mid term, I would shore up the enterprise business and protect it from attack. IBM should integrate the best of the TMS IP into as many products as reasonable, with a TMS flash-only module optimized for performance and efficiency, leveraging each architecture’s data management stack. This would give IBM a way to maintain its brand share against the onslaught of flash-only vendors. Also, because TMS is more high-end, I would consider buying a lower-cost hybrid storage company (e.g. someone like Tintri) and using it to expand the low-end integrated storage and flash offering. Longer term, I would aggressively pursue a partnership with my server brethren and develop an integrated set of offerings, focusing initially on SMB and hyperscale/server providers.
4. Compete with Tivoli. Heresy, I know, but do it! Tivoli is fine, but it’s not your future. Software-led infrastructure is. Consider developing a robust storage and data management stack, perhaps using open source, with proprietary functionality on top. Leverage your current IP, including Real-time Compression, the Storage Hypervisor (SVC), EasyTier, etc. This set of services will appeal to emerging growth customers. Functionally, the vision is to take the storage services capabilities shown in the slide above and create a suite of software products that can run on any open platform.
5. Learn to Compete for Hyperscale Business. Scale-out, distributed/dispersed, self-healing, auto-protected object stores are the future. Get in the game now.
To be sure, this prescription is not a detailed strategic plan. But I’m betting IBM is already on parts of this path internally. The big thing I’m tracking is the impact of the new boss. I listened to Goyal very intently this past fall at IBM’s analyst event and earlier in the year at the Pure Systems launch. I was impressed by his intelligence, thoughtful remarks and measured commentary. He’s just coming off a stint running a 23,000-person engineering organization responsible for systems and storage, and he has a software background. Will he make big changes or keep the same playbook?
I’m betting on the former.