Whither NetApp: The Future of a Silicon Valley Icon

NetApp is a company with a rich history, a culture of innovation and a track record of consistently proving the naysayers wrong. Still, NetApp is under fire again, and for some strange reasons:

  1. The company rocketed out of the recession in 2010 and 2011 but hasn’t been able to sustain its incredible market share gains and growth momentum.
  2. The company has too much cash: nearly $7B.
  3. NetApp is not currently perceived by some on Wall Street as a company positioned for the future.

To the first point: on a relative basis, NetApp is actually doing okay, as most of its major competitors are flat to down. The notable exception is EMC, which rubs salt in the wound. Regardless, NetApp is going through a fairly major transition, which I’ll discuss in this post. Wall Street often doesn’t like to bet heavily on companies in transition, which is why, on the last earnings call, CEO Tom Georgens specifically stated that the company has its “complex product transition behind us…”

Perhaps Georgens is right and the transition is done…but from the outside looking in, the transition is not behind NetApp…it’s in full swing. Maybe it’s even just beginning. Which means there’s value here for sharp investors. To this end, and to point #2, activist shareholder Paul Singer pressured NetApp to “share the cash love,” and the company recently announced stock buybacks and a dividend. It’s a playbook that works: take a big stake in a company that’s sitting on a pile of cash; pressure management to buy back stock, pay dividends and reduce headcount; stock goes up; nice profit.

In this sense NetApp made a tactical move to placate investors. But if NetApp is to survive as an independent company, it must learn to walk the tightrope between keeping shareholders happy and ensuring long-term viability.

To point #3 above, here’s a recent quote about NetApp from Brian Alexander, a sell-side analyst:

“A takeover is less likely because their technology is not well positioned for where the future in storage will occur, such as big data and cloud computing.”

I can’t help but wonder who is “well positioned” for big data and cloud computing. Cloudera? Hortonworks? Amazon? Google? One has to realize that you can hardly play storage as a sector today. If you like storage, where do you place your bets? EMC? That’s a bet on VMware. The market is devoid of pure plays. Go to Google Finance and type in NTAP or EMC. Who comes up as competitors? HILL, XRTX, HPQ, BRCD, QTM, OVRL, OCZ, ORCL, FIO…

Here’s the bottom line: NetApp is the last pure play “storage company” standing. Storage as we’ve known it for the past twenty years is over.

What Does This Mean for the Future of NetApp?

NetApp has always been known for making great filers. It also pioneered the notion of unified storage and has done a very good job of hopping on the virtualization trend, integrating particularly well with VMware. NetApp has likewise done an outstanding job within application environments such as Microsoft, Oracle and SAP. Moreover, practitioners in the Wikibon community tell us that NetApp products are simple, reliable, highly available and feature-rich (e.g. copy services, cloning, snapshotting, compression, de-duplication, etc.).

The bottom line from customers is that NetApp makes one of the best, if not the best, product lines in the business.

NetApp’s primary technical challenge has always been scaling. Customers tell us that historically, the way they’ve scaled NetApp is to simply “buy another filer,” and that has generally worked well for NetApp’s SAN and NAS customers. But customers consistently tell us that as IOPS requirements increase, NetApp’s WAFL file system hits limits. Small and mid-sized installations seem to have fewer problems, but at scale the story is consistent: NetApp systems run out of gas, meaning they become more complicated, harder to manage, harder to tune and too disruptive.
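
To make the scale-by-buying-filers tradeoff concrete, here’s a trivial back-of-the-envelope sketch in Python. The per-filer IOPS ceiling is an assumed, purely illustrative number, not a NetApp spec; the point is that scaling by adding independent filers multiplies namespaces and management points roughly linearly with the workload.

```python
# Illustrative only: an assumed per-filer IOPS ceiling, not a NetApp spec.
PER_FILER_IOPS = 60_000  # assumed ceiling for a single HA pair

def filers_needed(target_iops: int) -> int:
    """How many independent filers it takes to serve target_iops."""
    return -(-target_iops // PER_FILER_IOPS)  # ceiling division

for workload in (50_000, 250_000, 1_000_000):
    n = filers_needed(workload)
    print(f"{workload:>9,} IOPS -> {n} filers, i.e. {n} separate "
          f"namespaces and management points")
```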

Clustering – NetApp’s Big Bet

NetApp is attacking this problem with clustering. Its Clustered ONTAP OS is designed specifically to deal with it. Architecturally, NetApp has embarked on an incredibly challenging mission: to build a platform that can truly scale out, seamlessly and linearly, across a very large number of nodes (theoretically as many as 256).

In 2003, the company purchased Spinnaker with the promise of integrating its clustering technology into NetApp’s core platform. One practitioner sums up the frustrations we’ve heard time and time again from NetApp customers:

“Fast forward to 2011. They’ve had seven years to integrate the technology, and meanwhile lots of other players such as Isilon, IBM SONAS, Panasas, and others have matured in NetApp’s traditional areas of strength.”

“They failed to get the feedback that ‘limiting us to slice-and-dice 32-bit 16TB aggregates really sucks…’”

And regarding ONTAP 8.1…

“Okay, we get 64-bit aggregates, which will give us ~100TB-sized aggregates. Nowadays, that’s not nearly good enough. Yes, we’ll get a clumsily unified namespace that I still have to manage behind the scenes. It’s too little and too late. Perhaps 8.1.x or 8.2, huh?”

This particular individual is not alone. Over the past 12 months we’ve talked in depth with nearly fifty NetApp customers, and the story is similar. The author of the blog blames politics and product managers for the delays. Maybe there’s some truth to that, but the reality is that what NetApp is trying to do is incredibly difficult.

Why is Clustering So Hard?

NetApp’s architecture is incredibly flexible in that pretty much everything inside is virtualized (e.g. disks, controllers, ports, LUNs, etc.). It’s probably the most flexible architecture in the industry in terms of how you can set up arrays and spread resources across multiple arrays.

The issue with clustering is that when you scale out, by design there is no single point of control; the system itself is the point of control. So if something goes wrong and Humpty Dumpty falls off the wall, putting the pieces back together again is ten times more complicated.

With clustering, instead of one nice, neat ONTAP you have n ONTAPs. There’s no common clock and no easy way to capture state, so it’s very hard to fix things as of a specific point in time. It’s a free-for-all of sorts. Error recovery in particular (i.e. rolling back to where you were when things were right) is very complicated. And while you can scale linearly in theory, when something goes wrong you have to start freezing processes, and once you do that, it’s very hard to maintain linear scalability.
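
Here’s a toy Python sketch of the principle (this is not ONTAP internals, just an illustration): with n independent nodes and no common clock, getting a consistent point-in-time capture tends to require freezing I/O first, and that freeze is exactly what erodes linear scalability.

```python
import threading, time

class Node:
    """One controller with purely local state and no shared clock."""
    def __init__(self, name):
        self.name, self.writes, self.frozen = name, 0, False

def io_loop(node, stop):
    # Simulated write stream hitting this node.
    while not stop.is_set():
        if not node.frozen:
            node.writes += 1
        time.sleep(0.0001)

def capture_state(nodes, freeze):
    """Visit every node in turn and record its write count."""
    if freeze:
        for n in nodes:  # quiesce first: consistent cut, but I/O stalls
            n.frozen = True
    state = [(n.name, n.writes) for n in nodes]
    for n in nodes:
        n.frozen = False
    return state

nodes = [Node(f"n{i}") for i in range(4)]
stop = threading.Event()
threads = [threading.Thread(target=io_loop, args=(n, stop)) for n in nodes]
for t in threads:
    t.start()
time.sleep(0.05)
# Without freezing, each node is sampled at a different moment, so the
# capture is not a consistent point in time; freezing fixes that at the
# cost of stopping writes while the capture runs.
print("no freeze:", capture_state(nodes, freeze=False))
print("freeze   :", capture_state(nodes, freeze=True))
stop.set()
for t in threads:
    t.join()
```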

Historically, when technology companies introduce clustering they start with four nodes (where NetApp is today) and struggle to get to eight. To really make a difference in the market, sixteen is probably a good target.
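
One hedged way to see why node counts climb so slowly is a toy overhead model: assume each extra peer costs every node a small, fixed fraction of its work in coordination. The 2% figure below is invented for illustration, not a measurement, but the shape of the curve is the point.

```python
def effective_throughput(nodes: int, f: float = 0.02) -> float:
    """Throughput in single-node units, with an assumed coordination
    overhead f per extra peer (purely illustrative numbers)."""
    return nodes * max(0.0, 1 - f * (nodes - 1))

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} nodes -> {effective_throughput(n):5.1f}x (ideal: {n}x)")
```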

Clustered ONTAP: To 8.1 and Beyond!  

NetApp is betting everything on Clustered ONTAP. The issue for NetApp right now is adoption. NetApp must test rigorously, move in step-by-step fashion and prove to its customers that ONTAP 8 is a safe bet.

What’s in it for customers? Simplicity, scale, ease of movement, less disruption, better productivity…

What’s the risk? The whole shop!

The move to clustered ONTAP is a profound change for customers. It literally could be a “cluster-$&#*” if something goes wrong. So customers are taking their time evaluating the change. But they are moving.

According to NetApp CEO Tom Georgens, from the last quarterly earnings call:

“The momentum of clustered Data ONTAP has grown over the course of fiscal year ’13 with a 4x increase in clustered nodes from fiscal year ’12. Sales of clustered nodes in Q4 increased 95% from Q3 on top of sequential increases of almost 70%, Q2 to Q3; and 120%, Q1 to Q2. The installed base includes almost 1,000 unique customers, of which 1/3 are repeat clustered ONTAP customers. In Q4, 18% of mid-range and high-end systems shipped are running clustered ONTAP. In addition, over half of our installed base has migrated to Data ONTAP 8, the industry’s most innovative storage operating system, and we will continue to enhance it. You can expect to hear more about the next version of Data ONTAP in the near future.”

So squinting through this statement I get:

  • There’s clearly adoption.
  • It’s growing super fast, which means it’s coming off a small base, so it’s not yet universal.
  • Those customers in Missouri, the “Show-Me” state, are still waiting for more, as are many others.
  • More is coming in the “near future,” which to me implies by the next time NetApp reports its quarter.

OnCommand: The Future of ONTAP is Automation

Today it’s all about NetApp delivering and building customer confidence. The future is all about automation.

Many of the trends we’ve been tracking at Wikibon and SiliconANGLE come from watching the hyperscale crowd (i.e. Google, Amazon, Facebook, etc.) and learning how enterprise customers can improve their operations by following suit. Virtually all the mega-dislocations we’re seeing today (scale-out, software-defined, big data and even flash adoption) were pioneered by hyperscale practitioners. One of the defining attributes of hyperscale shops is the degree to which they’ve automated.

For NetApp, automation is critical. NetApp’s architecture makes it easy to move stuff around, but the process is still manual. Running scripts on one box and moving things to another is no big deal. But in a large environment, scripts don’t scale. When something goes bad, you need the system to heal itself.
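
Here’s a minimal sketch of the difference, in Python. None of these names are a real NetApp API; it just contrasts per-box scripting with a declarative control loop that detects drift (a lost replica, say) and converges back to the desired state, which is what “the system heals itself” means in practice.

```python
# Hypothetical names throughout: this is the generic control-loop
# pattern, not a real NetApp interface.
desired = {"vol_finance": {"size_gb": 500, "replicas": 2}}

def observe(cluster):
    """Poll actual state from the cluster (stubbed for the sketch)."""
    return cluster

def reconcile(name, want, have):
    if have is None:
        print(f"create {name}: {want}")
    elif have != want:
        print(f"repair {name}: {have} -> {want}")
    # else: already converged, nothing to do

def control_loop(cluster):
    actual = observe(cluster)
    for name, want in desired.items():
        reconcile(name, want, actual.get(name))

# A node failure simply shows up as drift (one replica lost); the same
# loop that provisions also heals, with no human running scripts box by box.
control_loop({"vol_finance": {"size_gb": 500, "replicas": 1}})
```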

OnCommand is how NetApp will ultimately get there. When people talk about separating, for example, the control plane from the data plane, OnCommand is essentially NetApp’s control plane. It is key to NetApp’s software-defined storage (SDS) strategy, and it enables the potential of Clustered ONTAP to be realized. OnCommand is a combination of products purchased via acquisition (e.g. Onaro and Akorri, the heterogeneous pieces) blended with organic, home-grown NetApp IP that manages NetApp workflows. Importantly, the suite not only manages NetApp storage, but parts of it can deal with heterogeneous systems as well, another key to SDS.

OnCommand is how NetApp automates things like setting up resources, managing cluster performance, enforcing policy compliance, managing SLAs, incident/problem/change management, capacity management, reporting, etc. Essentially, without OnCommand, Clustered ONTAP is just a bunch of tech.
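
As a hedged illustration of what “setting up resources against an SLA” might look like from the outside, here’s a sketch of policy-driven provisioning through a REST control plane. The host, endpoint and payload below are invented for this post (they are not the actual OnCommand API); the idea is that the caller states a policy and the control plane picks the placement.

```python
import requests

CONTROL_PLANE = "https://oncommand.example.com/api"  # placeholder host

def provision(name: str, size_gb: int, service_level: str) -> dict:
    """Request storage that satisfies a policy, not a specific box."""
    payload = {
        "name": name,
        "size_gb": size_gb,
        "policy": {
            "service_level": service_level,  # e.g. "gold": flash-backed
            "snapshots": True,               # copy services switched on
            "dedupe": True,
        },
    }
    # The control plane, not the caller, decides which cluster node and
    # aggregate the volume lands on.
    resp = requests.post(f"{CONTROL_PLANE}/volumes", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example (against the placeholder host above):
# provision("vol_erp", 500, "gold")
```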

NetApp and SDS

Everyone’s talking about software-defined (or what we call “software-led”) storage and infrastructure, and we expect NetApp will be weighing in on this topic more frequently. In many respects NetApp is well positioned in software-defined because: 1) its architecture is highly virtualized; 2) it offers rich sets of services in a single platform (e.g. snaps, clones, compression, de-duplication, etc.); and 3) it has a means of addressing non-NetApp storage (e.g. V-Series and OnCommand).

The big question for NetApp is how it will play in the “Storage-as-a-Platform” game. Who are the big players here in the enterprise? VMware, OpenStack, Amazon (with Google and Microsoft Azure chasing), EMC with ViPR and the Hadoop crowd. HP is betting on OpenStack. IBM has yet to weigh in.

What we envision is that NetApp will take an open platform approach: supporting OpenStack, playing with Amazon and offering a set of APIs above OnCommand, into OnCommand and into ONTAP. We expect NetApp to be part of the OpenStack framework, for example contributing to Cinder and Swift, and to create a ViPR-like platform built around Clustered ONTAP with OnCommand as the control plane.
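
To make the OpenStack point concrete: a storage vendor plugs into Cinder by shipping a volume driver. The stub below only mirrors the shape of that contract (a real driver subclasses cinder.volume.driver.VolumeDriver and talks to the array); it is a sketch, not NetApp’s actual upstream driver.

```python
class SketchVolumeDriver:
    """Stub with the shape of a Cinder-style volume driver's hooks."""

    def create_volume(self, volume):
        # Map the OpenStack request onto the array: carve out a volume
        # of volume["size"] GB tagged with volume["id"].
        print(f"create {volume['id']} ({volume['size']} GB)")

    def delete_volume(self, volume):
        print(f"delete {volume['id']}")

    def create_snapshot(self, snapshot):
        # Arrays with cheap native snapshots (ONTAP snaps, for example)
        # can satisfy this hook almost for free.
        print(f"snapshot {snapshot['volume_id']} -> {snapshot['id']}")

    def initialize_connection(self, volume, connector):
        # Hand back transport details (iSCSI here) for the requesting host.
        return {"driver_volume_type": "iscsi",
                "data": {"target_iqn": "iqn.2013-01.example:" + volume["id"]}}

drv = SketchVolumeDriver()
drv.create_volume({"id": "vol-1", "size": 100})
```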

The issue for NetApp is time. The clock is ticking. Flash-only arrays are moving fast, encroaching on the high-end space that NetApp covets. EMC’s marketing machine is in high gear, and NetApp must move fast enough not to get boxed into the lower-end archiving, bit-bucket, file-serving space. The company must demonstrate with Clustered ONTAP and OnCommand that it can deliver across a wide performance spectrum while maintaining its reliability and demonstrating best-in-class automation.

The pieces are there and it’s all for the taking.

  • Jim

    Thanks for this article. It raises some interesting questions regarding NetApp’s future. My opinion is that of the top 5 storage vendors, NetApp is the most vulnerable right now. This opinion comes from a variety of sources, but most convincingly from talking to sales reps who have left NetApp over the past year, primarily because of the pressure to sell a very problematic Cluster-Mode and because of the lack of overall innovation. NetApp Cluster-Mode sales are growing, but so is customer resentment. Many feel pressured to move to NetApp’s more complex and expensive clustered OS not because of a compelling value proposition, but because that’s where NetApp announced much of its future R&D will go. For example, why would a customer want to move to an OS that cuts their current performance by 46%? You can verify this by comparing NetApp’s single-system 2-controller FAS 6240 SPECsfs result to that of their FAS 6240 4-controller Cluster-Mode.
    There was a time when NetApp was innovative and had some unique features, but those days are gone. Not only has the industry passed them by in performance, block ease-of-use, capacity utilization, rate of innovation, and disaster recovery, but there is a growing sense that their 1992 ONTAP code has been fully tapped out. How else do you explain their ONTAP innovation practically grinding to a halt over the past few years? And this is the platform on which the entire company depends for its existence. You can test this by asking NetApp customers which NetApp innovations they are currently using that were released in the past two years. All of the popular NetApp features were introduced years ago, including Unified Storage (2003), SnapManager (2000), Dedupe (2007, originally called Advanced Single Instance Storage or ASIS), Flash Cache (2008, originally called Performance Acceleration Module or PAM), Cluster-Mode (2006, originally called GX), and the ONTAP 8.0 64-bit OS (2009). NetApp acquired Spinnaker in January 2004, and here it is over 9 years later, and NetApp and the industry still talk about Cluster-Mode as if it were just introduced and they’re still working out the bugs. Also, NetApp’s current all-flash array is not based on ONTAP but on non-SSD-optimized technology from Engenio. Clearly, NetApp would have loved to make ONTAP their flash platform; they didn’t, and there’s a reason.
    At one point in your article, you mentioned how virtualized the NetApp architecture is. I see your point, and I do agree that they (and every other major vendor) have software that works with VMware and other application vendors, but I’m not sure that is a result of NetApp’s prowess in virtualization. All storage is virtualized to some extent, but the more advanced implementations manage to stripe across all the disks in the array, put different RAID levels on the same disks, and offer competent thin provisioning. NetApp has none of these. In fact, NetApp enables thin provisioning not by turning a feature on but by turning one off: turning off space reservations, which, if you understand the NetApp technology, you will know are in there for a very good reason. Also, they have no native ability to reclaim zeroes, and they require separate SnapDrive licenses for each OS. In my opinion, these are not the traits of a highly virtualized array. For all these reasons and more, I believe NetApp is in for some rough years ahead. Note: I work for a NetApp competitor.

  • GeneralRetard

    Well, given you guys got caught red-handed fudging the numbers while running tests between FAS and EMC VNX, do you really expect people to believe you now?

  • Raj

    This is a pro-EMC article.

    If you look at ONTAP 8.2, it’s a sure bet. You just need to get yourself updated, buddy.
    Keep reading and stay updated…

  • Citerio

    7-Mode is already slow: not even a 6290 can provide >1,200 MB/s of throughput, nor can it handle SSDs at 100,000-200,000 IOPS. The CPU is overused most of the time; WAFL, and ONTAP as a whole, especially the Kahuna algorithm, are not multithreaded. Cluster takeover always stops I/O for 10-60 sec. All of which makes the whole story a storage system for small companies, but not for the enterprise. Even NetApp itself doesn’t believe in ONTAP anymore, as it pushes E-Series very strongly. E-Series is faster, but it cannot handle >300,000 IOPS on SSD, and there’s no block tiering, as CPU power is still too low there as well.

    NetApp is a marketing company, and the question is how long customers will accept a slow, very expensive, complex storage system (Cluster-Mode). It has always been a surprise to me to see how ignorant customers are about technical deficits.

  • Dimitris Krekoukias

    Caught red-handed? When and how? Provide proof pls.

  • GeneralRetard

    If I want a FreeBSD-based SAN I’ll build one myself, because then I’ll know who to blame when it’s a POS.

  • Dimitris Krekoukias

    Ah right, that article. I have commented therein long ago.

    The main arguments are pretty weak – that PCI-X cards restricted the performance dramatically, and that 8m cables are responsible for the long latencies.

    8m cables, right…

    The whole point of that test was to show performance with and without snaps anyway – not absolute performance.

    I somehow doubt that, with short cables and PCI-E cards, the CLARiiON would have performed 3x faster taking snaps.

    Think like an engineer – not everything is a conspiracy.

    Thx

    D

  • GeneralRetard

    That’s a good point, and if excuses changed results you’d surely have something there, but since they don’t…
    I use NetApp 2240s on a daily basis, and for the price they simply aren’t worth it. Data ONTAP 8.1.3 is a nice piece of software, albeit almost a decade behind other storage providers (8.2 Cluster-Mode is very new and not mature enough for even your support staff to have a good base, which is why we were told “don’t go that direction for at least a few months”).
    Anyway, if NetApp wants to reinvent itself as a leader it will need to do more than sit on the $7 billion it has in the bank right now to change the curve. Right now NetApp has more in common with BlackBerry than with any vendor that is a current major player (i.e. EMC, Pure Storage, Nimble). (I’m guessing IBM will buy NetApp, given they are the only big company still pushing the product.)
    While we’re here, let’s touch on the fact that while all the other storage vendors are spending money on custom ASICs for real-time compression and dedupe, NetApp is almost entirely reliant on commodity hardware; only the prices don’t reflect that.
    A shelf for an IBM V7000 with 24 15K 600GB SAS drives runs $14K; the NetApp equivalent is $25K (a difference of over $10K).

  • Dimitris Krekoukias

    Who’s making excuses? :) “Truth in Engineering” as Audi says.

    I do find your response intriguing – IBM is responsible for a tiny percentage of all FAS sales. We don’t sell our FAS systems through other companies.

    We do very well indeed: ONTAP is by far the most prevalent storage OS, and E-Series is the most resold platform on the planet. And growing. And we’re not resting on our laurels.

    And no, we are not 10 years behind (though the competition would love that). In many respects we are well ahead. In others, not so much.

    I’m curious who you think is researching custom ASICs and what value you think they provide. Most storage vendors do use “commodity” CPUs since they have been growing in performance very nicely indeed (even high end platforms like clustered NetApp 62xx or EMC VMAX use largely commodity CPUs).

    But based on the tone of your message you may not be using the 2240s (among our smallest boxes) the right way, or maybe they weren’t what you needed.

    Feel free to contact dimitri at NetApp dot com.

    Thx

    D

  • GeneralRetard

    I guess if I had done a poor job rebranding FreeBSD like NetApp has, I would probably feel a little butthurt right now as well. Don’t worry, I get it.

    I appreciate the offer, but I’ve had my fill of NetApp, which is why we’re moving to another storage vendor: one where, when I call first-tier support, they won’t be oblivious to the product or supporting applications.

  • Dimitris Krekoukias

    I wish you all the best. The grass is always greener and all that.

    But “rebranding FreeBSD”?

    Sure, there’s a modified FreeBSD kernel in there somewhere. It’s not what makes the boxes tick. Kinda like saying VNX runs Windows.

    Anyway – best of luck and happy new year!

    Thx

    D
