NetApp is a company with a rich history, a culture of innovation, and a track record of consistently proving the naysayers wrong. Still, NetApp is under fire again, for some rather strange reasons:
- The company rocketed out of the recession in 2010 and 2011 and hasn’t been able to sustain its incredible market share gains and growth momentum.
- The company has too much cash – nearly $7B.
- NetApp is not currently perceived by some on Wall Street as a company positioned for the future.
To the first point, on a relative basis NetApp is actually doing okay, as most of its major competitors are flat to down. The notable exception is EMC, which rubs salt in the wound. Regardless, NetApp is going through a fairly major transition, which I’ll discuss in this post. Wall Street often doesn’t like to bet heavily on companies in transition, which is why on the last earnings call CEO Tom Georgens specifically stated that the company has its “complex product transition behind us…”
Perhaps Georgens is right and the transition is done…but from the outside looking in, the transition is not behind NetApp…it’s in full swing. Maybe it’s even just beginning. Which means there’s value here for sharp investors. To this end, and to point #2, activist shareholder Paul Singer pressured NetApp to “share the cash love,” and the company recently announced stock buybacks and a dividend. It’s a playbook that works: take a big stake in a company that’s sitting on a pile of cash; pressure management to buy back stock, pay dividends and reduce headcount; stock goes up; nice profit.
In this sense NetApp made a tactical move to placate investors. But if NetApp is to survive as an independent company, it must learn to walk the tightrope between keeping shareholders happy and ensuring long-term viability.
To point #3 above, here’s a recent quote about NetApp from Brian Alexander, a sell-side analyst:
“A takeover is less likely because their technology is not well positioned for where the future in storage will occur, such as big data and cloud computing.”
I can’t help but wonder who is “well positioned” for big data and cloud computing – Cloudera? Hortonworks? Amazon? Google? One has to realize that you can hardly play storage as a sector today. If you like storage, where do you place your bets? EMC? That’s a bet on VMware. The market is devoid of pure plays. Go to Google Finance and type in NTAP or EMC. Who comes up as competitors? HILL, XRTX, HPQ, BRCD, QTM, OVRL, OCZ, ORCL, FIO…
Here’s the bottom line: NetApp is the last pure play “storage company” standing. Storage as we’ve known it for the past twenty years is over.
What does this mean for the future of NetApp?
NetApp has always been known for making great filers. It also pioneered the notion of unified storage and has done a very good job of hopping on the virtualization trend, integrating particularly well with VMware. As well, NetApp has done an outstanding job within application environments such as Microsoft, Oracle and SAP. Moreover, practitioners in the Wikibon community tell us that NetApp products are simple, reliable, highly available and feature rich (e.g. copy services, cloning, snapshotting, compression, de-duplication, etc.).
The bottom line from customers is that NetApp makes one of the best if not the best product lines in the business.
NetApp’s primary technical challenge has always been scaling. Customers tell us that historically, the way they’ve scaled NetApp is to simply “buy another filer.” And that’s worked generally well for NetApp’s SAN and NAS customers. But customers consistently tell us that as IOPS requirements increase, NetApp’s WAFL file system hits limits. Small and mid-sized installations seem to have fewer problems, but at scale the story is consistent: NetApp systems run out of gas, meaning they become more complicated, harder to manage, harder to tune and too disruptive.
Clustering – NetApp’s Big Bet
NetApp is attacking this problem with clustering. Its Clustered ONTAP OS is designed specifically to address it. Architecturally, NetApp has embarked on an incredibly challenging mission: to build a platform that can truly scale out seamlessly and linearly with a very large number of nodes (theoretically up to 256).
In 2003, the company purchased Spinnaker with the promise of integrating clustering technologies into its core platform. This practitioner sums up the frustrations we’ve heard time and time again from NetApp customers:
“Fast forward to 2011. They’ve had seven years to integrate the technology and meanwhile lots of other players such as Isilon, IBM SoNAS, Panasas, and others have matured in NetApp’s traditional areas of strength.”
“They failed to get the feedback that ‘limiting us to slice-and-dice 32-bit 16TB aggregates really sucks…’”
And regarding ONTAP 8.1…
“Okay, we get 64-bit aggregates which will give us ~100TB sized aggregates. Nowadays, that’s not nearly good enough. Yes, we’ll get a clumsily unified namespace that I still have to manage behind the scenes. It’s too little and too late. Perhaps 8.1.x or 8.2, huh?”
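The arithmetic behind those aggregate ceilings is worth a quick sketch. Assuming WAFL’s 4KB block size, a 32-bit block address space caps an aggregate at exactly 16TiB; 64-bit addressing removes that architectural ceiling entirely (the ~100TB figure above was a shipped product limit, not an addressing limit):

```python
# Back-of-the-envelope aggregate ceilings, assuming WAFL's 4 KB block size.
BLOCK_SIZE = 4 * 1024  # bytes per WAFL block

def max_aggregate_tib(address_bits: int) -> float:
    """Theoretical aggregate ceiling for a given block-address width, in TiB."""
    return (2 ** address_bits) * BLOCK_SIZE / 2 ** 40

print(max_aggregate_tib(32))  # 16.0 -- the 32-bit "16TB aggregate" complaint above
print(max_aggregate_tib(64))  # 64-bit addressing: ceiling far beyond any shipped limit
```

The point the practitioner is making: the 16TB wall was architectural, while the ~100TB limit of early 64-bit aggregates was a product decision, which is why it felt like “too little and too late.”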
This particular individual is not alone. In the past 12 months we’ve talked to nearly fifty NetApp customers in depth and the story is similar. The author of the blog blames politics and product managers for the delays. Maybe there’s some truth to that but the reality is what NetApp is trying to do is incredibly difficult.
Why is Clustering So Hard?
NetApp’s architecture is incredibly flexible in that pretty much everything inside is virtualized (e.g. disks, controllers, ports, LUNs, etc.). It’s probably the most flexible architecture in the industry in terms of how you can set up arrays and spread things across multiple arrays.
The issue with clustering is that when you scale out, by design you have no single point of control; the system is the single point of control. As such, if something goes wrong and Humpty Dumpty falls off the wall, putting the pieces back together again is ten times more complicated.
With clustering, instead of one nice neat ONTAP you have n ONTAPs. There’s no common clock. There’s no easy way to capture state, so it’s very hard to fix things at a specific point in time. It’s a free-for-all of sorts. Specifically, error recovery (i.e. rolling back to where you were when things were right) is very complicated. As well, you can scale linearly in theory, but when something goes wrong you have to start freezing processes, and when you do that it’s very hard to maintain linear scalability.
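To see why freezing processes eats into linear scalability, consider a toy illustration (not NetApp’s actual mechanism): with no common clock, one blunt way to capture consistent state across n nodes is to quiesce every node first, copy state, then resume. The unavailability window grows with the cluster:

```python
# Toy illustration (not NetApp's mechanism): capturing consistent state
# across n nodes that share no common clock, by freezing everything first.
class Node:
    def __init__(self, name):
        self.name = name
        self.frozen = False
        self.state = {}

    def freeze(self):
        self.frozen = True   # stop accepting writes

    def thaw(self):
        self.frozen = False

def consistent_snapshot(nodes):
    """Freeze every node, copy state, then thaw. Every node is unavailable
    for writes during the capture, and the window grows with node count --
    one way linear scalability erodes when recovery kicks in."""
    for n in nodes:                                      # phase 1: quiesce
        n.freeze()
    snapshot = {n.name: dict(n.state) for n in nodes}    # phase 2: capture
    for n in nodes:                                      # phase 3: resume
        n.thaw()
    return snapshot

cluster = [Node(f"node{i}") for i in range(4)]
cluster[0].state["vol1"] = "blocks..."
snap = consistent_snapshot(cluster)
```

Real systems use far more sophisticated coordination, but the tension is the same: consistency across nodes costs you the independence that made scale-out linear in the first place.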
Historically, when technology companies introduce clustering they start with four nodes (where NetApp is today) and struggle to get to eight. To really make a difference in the market sixteen is probably a good target.
Clustered ONTAP: To 8.1 and Beyond!
NetApp is betting everything on Clustered ONTAP. The issue for NetApp right now is adoption. NetApp must perform rigorous testing and move in a step-by-step fashion and prove to its customers that ONTAP 8 is a safe bet.
What’s in it for customers? Simplicity, scale, ease of movement, less disruption, better productivity…
What’s the risk? The whole shop!
The move to clustered ONTAP is a profound change for customers. It literally could be a “cluster-$&#*” if something goes wrong. So customers are taking their time evaluating the change. But they are moving.
According to NetApp CEO Tom Georgens, from the last quarterly earnings call:
“The momentum of clustered Data ONTAP has grown over the course of fiscal year ’13 with a 4x increase in clustered nodes from fiscal year ’12. Sales of clustered nodes in Q4 increased 95% from Q3 on top of sequential increases of almost 70%, Q2 to Q3; and 120%, Q1 to Q2. The installed base includes almost 1,000 unique customers, of which 1/3 are repeat clustered ONTAP customers. In Q4, 18% of mid-range and high-end systems shipped are running clustered ONTAP. In addition, over half of our installed base has migrated to Data ONTAP 8, the industry’s most innovative storage operating system, and we will continue to enhance it. You can expect to hear more about the next version of Data ONTAP in the near future.”
So squinting through this statement I get:
- There’s clearly adoption
- It’s growing super fast off a small base, so adoption is not yet universal
- The “show me” customers in Missouri are still waiting for proof, as are many others
- More is coming in the “near future” which to me implies by the next time NetApp reports its quarter.
OnCommand: The Future of ONTAP is Automation
Today it’s all about NetApp delivering and customer confidence. The future is all about automation.
Many of the trends we’ve been tracking at Wikibon and SiliconANGLE come from watching the hyperscale crowd (i.e. Google, Amazon, Facebook, etc.) and learning from them how enterprise customers can improve their operations. Virtually all the mega dislocations we’re seeing today – scale out, software-defined, big data and even flash adoption – were pioneered by hyperscale practitioners. One of the defining attributes of hyperscale shops is the degree to which they’ve automated.
For NetApp, automation is critical. It’s easy to move stuff around with NetApp’s architecture, but it’s still manual. Running scripts on one box and moving things around to another is no big deal, but in a large environment scripts don’t scale. When something goes bad, you need the system to heal itself.
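What “the system heals itself” means in practice is usually a reconciliation loop: continuously compare desired state against observed state and generate repair actions, rather than a human running per-box scripts. A minimal sketch of the pattern (the resource names and attributes here are illustrative, not any NetApp API):

```python
# A generic reconciliation loop -- the pattern behind "self-healing" automation.
# Resource names and attributes are hypothetical, not any NetApp interface.
def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. observed state and emit repair actions."""
    actions = []
    for resource, want in desired.items():
        have = actual.get(resource)
        if have is None:
            actions.append(("create", resource, want))   # resource is missing
        elif have != want:
            actions.append(("repair", resource, want))   # resource has drifted
    return actions

desired = {"vol1": {"size_gb": 500, "mirrored": True}}
actual = {"vol1": {"size_gb": 500, "mirrored": False}}   # mirror dropped somewhere
print(reconcile(desired, actual))  # [('repair', 'vol1', {'size_gb': 500, 'mirrored': True})]
```

The design point is that the loop is idempotent and runs continuously, so a failure at 3am gets repaired the same way a failure at 3pm does, with no script author in the loop.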
OnCommand is how NetApp will ultimately get there. When people talk about separating (for example) the control plane from the data plane, OnCommand is essentially NetApp’s control plane. It is key to NetApp’s software-defined storage (SDS) strategy and it enables the potential of Clustered ONTAP to be realized. OnCommand is a combination of products purchased via acquisition (e.g. Onaro and Akorri, the heterogeneous pieces) blended with organic, home-grown NetApp IP to manage NetApp workflows. Importantly, the suite not only manages NetApp storage; parts of it can deal with heterogeneous systems as well – another key to SDS.
OnCommand is how NetApp automates things like setting up resources, managing cluster performance, policy compliance, managing SLAs, incident/problem/change management, capacity management, reporting, etc. Essentially, without OnCommand, Clustered ONTAP is just a bunch of tech.
NetApp and SDS
Everyone’s talking about software-defined or what we call “software-led” storage/infrastructure and we expect NetApp will be weighing in on this topic more frequently. In many respects NetApp is well positioned in software-defined because 1) its architecture is highly virtualized; 2) it offers rich sets of services in a single platform (e.g. Snaps, Clones, compression, de-duplication, etc.); and 3) it has a means of addressing non-NetApp storage (e.g. V-Series and OnCommand).
The big question for NetApp is how it will play in the “Storage-as-a-Platform” game. Who are the big players here in the enterprise? VMware, OpenStack, Amazon (with Google and Microsoft Azure chasing), EMC with ViPR, and the Hadoop crowd. HP is betting on OpenStack. IBM has yet to weigh in.
What we envision for NetApp is that the company will take an open platform approach: supporting OpenStack, playing with Amazon and offering a set of APIs above OnCommand, into OnCommand and into ONTAP. We expect NetApp to be part of the OpenStack framework, for example, contributing to Cinder and Swift and creating a ViPR-like platform built around Clustered ONTAP with OnCommand as the control plane.
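NetApp does in fact ship a Cinder volume driver, so the OpenStack play is concrete. A hedged sketch of what wiring a clustered ONTAP backend into `cinder.conf` looks like (option names are from the NetApp unified driver of this era and vary by OpenStack release, so check your release’s documentation; the hostname and credentials are placeholders):

```ini
# Illustrative cinder.conf fragment for a NetApp clustered ONTAP backend.
# Option names vary by OpenStack release; values below are placeholders.
[netapp-cdot]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = secret
netapp_vserver = svm_openstack
```

With a backend like this registered, OpenStack provisions and attaches volumes through Cinder’s API while ONTAP does the work underneath, which is exactly the “storage as a platform” posture described above.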
The issue for NetApp is time. The clock is ticking. Flash-only arrays are moving fast, encroaching on the high-end space that NetApp covets. EMC’s marketing machine is in high gear, and NetApp must move fast enough not to get boxed into the lower-end archiving, bit-bucket, file-serving space. The company must demonstrate with Clustered ONTAP and OnCommand that it is capable of delivering across a wide performance spectrum while maintaining its reliability and demonstrating best-in-class automation.
The pieces are there and it’s all for the taking.