Dave Hitz on NetApp’s Re-inventiveness: Live Blogging Notes from #ntapasummit12

We Just Turned 20... That's Like 140 in the Valley

NetApp has a history of delivering innovation over a twenty-year period and has re-invented itself just about every five years. By identifying the major trends and making big bets (through storage innovations) that capitalize on these trends, NetApp has excelled in the market. The next major wave is large-scale infrastructure generally and, specifically, data as infrastructure. This was the message put forth by NetApp founder Dave Hitz at the 2012 Analyst Summit.

What follows are live notes from his remarks to 100+ analysts with some commentary and points of view.

Relationships are like sharks – if they’re not moving forward…they die.

Companies are like that as well…”We just turned 20 – that’s like 140 years old in Silicon Valley dog years.”

NetApp has reinvented itself about every 5 years.

NetApp is in the process of reinventing itself, again.

Twenty years of innovation, looking back: what was the big bet in 1992? The big bet was solving problems for engineering workgroups in different ways. Technical computing – someone with 5, 10, or 20 workstations. Schematics, engineering or even chip design, oil and gas, Hollywood animation.

What were the main innovations? 1) fast and simple; 2) WAFL – snapshots (see the sketch below), virtualization, always-on RAID (no knob to disable RAID) – benchmarking was done with RAID on.
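As an aside on how such snapshots work in general: in a copy-on-write layout like WAFL, writes go to fresh blocks and a snapshot is just a saved copy of the block map, which makes snapshots nearly free to take. A toy Python sketch of that general idea – my illustration, not NetApp's actual design:

```python
class COWVolume:
    """Toy copy-on-write volume. A snapshot is just a saved copy of the
    block map, so taking one is nearly free. Illustrative sketch only,
    not NetApp's actual WAFL design."""

    def __init__(self):
        self.blocks = {}      # physical location -> data (never overwritten in place)
        self.block_map = {}   # logical block number -> physical location
        self.snapshots = {}   # snapshot name -> frozen copy of a block map
        self.next_loc = 0

    def write(self, logical, data):
        # Write to a fresh location and repoint the map; the old data stays
        # put, so any snapshot that references it remains intact.
        self.blocks[self.next_loc] = data
        self.block_map[logical] = self.next_loc
        self.next_loc += 1

    def snapshot(self, name):
        self.snapshots[name] = dict(self.block_map)  # copy the map, not the data

    def read(self, logical, snapshot=None):
        bmap = self.snapshots[snapshot] if snapshot else self.block_map
        return self.blocks[bmap[logical]]

vol = COWVolume()
vol.write(0, b"v1")
vol.snapshot("snap1")
vol.write(0, b"v2")            # overwrite after the snapshot
print(vol.read(0))             # b'v2' -- live data
print(vol.read(0, "snap1"))    # b'v1' -- the snapshot still sees the old block
```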

Many VCs who passed on NetApp could never understand why doing less was a good thing – "a toaster does less than an oven." NetApp created a whole new category of startups – appliance companies.

My Point of View: NetApp is a Silicon Valley icon in VC circles and the notion of an appliance that does one job really well was absolutely popularized by NetApp (whose name used to be Network Appliance). That notion lives on today and can be seen in database, backup, security, search, cloud and many other markets.

1997 – doubled down on the dot-com boom. Big bet. Evolution pushed Internet companies to become much more like enterprise companies – e.g., Yahoo's email couldn't go down. Much of the innovation (e.g., clustered failover and remote mirroring) was designed to accommodate this market's needs – also NFS and CIFS support on the same architecture. NAS – NetApp invented a term to describe a whole new industry.

The dot-com crash forced NetApp to double down on the enterprise – a focus on companies, partnerships, and apps. Oracle on NFS. Unified – NAS + SAN – and cloning (an innovation).

"Once we stopped being religious about NAS vs. SAN, people started to focus on the value of Ethernet storage – changing the way applications worked. Cloning was really designed to accelerate database environments. The big difference was app-centric storage – Oracle, SAP, Exchange… really making the storage work tightly with the applications."

My PoV: At the time, NetApp was one of the best at marrying storage with applications. It's somewhat ironic that a storage company, rather than a systems vendor selling storage, cracked this code first. Others have copied NetApp's moves in this regard, and this capability has become fundamental to selling storage.

2007 – VMware – NetApp doubled down on virtualization, with the bets always building on each other. Innovations: fast and flexible provisioning, storage efficiency, SATA and dedupe for primary storage – leading innovations that led to the VMware economic partnership… a deep partnership with its biggest competitor (via EMC's ownership of VMware). It seemed crazy to think NetApp could partner with VMware.
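On dedupe for primary storage: the general technique is to fingerprint each block and store duplicates only once, which pays off enormously in virtualized environments full of near-identical guest images. A minimal Python sketch of the idea – illustrative only, not NetApp's implementation:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Store each unique block once; a list of fingerprints records how to
    rebuild the original data. Illustrative only, not NetApp's implementation."""
    store = {}    # fingerprint -> block contents (unique blocks only)
    recipe = []   # ordered fingerprints that reconstruct the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # a duplicate block costs no new space
        recipe.append(fp)
    return store, recipe

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # lots of repeated content
store, recipe = dedupe_blocks(data)
print(len(recipe), "logical blocks,", len(store), "unique blocks stored")
# -> 4 logical blocks, 2 unique blocks stored
```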

My PoV: In the mid-1990s, Lotus Development Corporation refused to write to Windows because Microsoft was its biggest competitor. It was a fatal move by Lotus. We saw similar mistakes by some storage companies with regard to EMC's ownership of VMware, and it gave NetApp and EMC a lead in VMware integration. NetApp claims it has better VMware integration than even EMC. I would say these two companies are about on par, but it seems that EMC gets there with brute-force engineering, a massive commitment, and inside baseball, whereas NetApp's unified architecture makes integration simpler than dealing across a very large portfolio.

2012's big bet – large-scale infrastructure. What does it mean to have a huge infrastructure? Infrastructure means a shared resource managed centrally – think of the highway system, the power grid, the phone network… In IT it's compute, network, and data (the last still in process). 1954 – the first commercial computers were introduced. 1964 – the IBM System/360 was introduced; IBM's goal was a computer that could do everything, and the 360 stood for 360 degrees. Networking was originally siloed the way storage is today – DECnet, SNA, Token Ring… With the advent of the PC in '84/'85, Cisco saw the opportunity for shared infrastructure, where client/server drove not just a technology change but also a change in the IT organization – a network computing group.

We will see the same thing with data – 2005 – virtualization driving data. (Hmmm, not sure I get that. Seems to me that mobile and social are driving data, and that virtualization and cloud are good ways to create a data infrastructure.) Data has to be fluid and flexible, able to follow the cloud. (Again… seems to me that the cloud has to be fluid and flexible to handle the data, not the other way around.)

New era – data as infrastructure. Think about what happened with networking as infrastructure; the same applies to data.

The first thing about infrastructure: CIOs used to worry about building blocks like routers and switches. Today, however, they worry about the infrastructure, not the individual components – i.e., the bandwidth. Once you install infrastructure, it's "immortal" – it's there forever.

Scaling requirements are incredible. Moore's Law means roughly a 100X increase every decade – 10,000X over two decades. Rough math says the TCP/IP network installed today has 10 million times more aggregate bandwidth than what we started with. So it's not only immortal – it's infinite.
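A quick sanity check on that rough math (a sketch assuming the usual ~18-month doubling rule of thumb for Moore's Law, which wasn't a figure from Hitz's talk):

```python
# Moore's Law as a doubling every ~18 months compounds to ~100x per decade.
doublings_per_decade = 120 / 18           # months in a decade / months per doubling
per_decade = 2 ** doublings_per_decade    # ~100x

print(f"growth per decade:       ~{per_decade:,.0f}x")        # ~102x
print(f"growth over two decades: ~{per_decade ** 2:,.0f}x")   # ~10,000x
```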

Agile Infrastructure is Intelligent (easy, smart), Infinite (scales) and Immortal (lives forever).

What I feel good about at age 20 is that here we are, reinventing ourselves for this enormous opportunity. A key area of innovation is clustering – stay tuned.

My Summary:

Hitz is a legend in Silicon Valley, and when he speaks I listen carefully. He and his company have made some incredible calls, so it's best not to dismiss their visions. The big question is whether NetApp's architecture can deliver on the promise of mega-scale. What role will Engenio play? What about object storage? Where do erasure coding and storage dispersal fit? NetApp has some of these assets in place – will it successfully integrate and "unify" them? Can it keep pace organically with the required innovation, or will NetApp need to continue to acquire to compete? My bottom-line bet: NetApp is becoming a more complicated beast, and while it will figure this all out, it will be harder and take longer. As well, my belief is that NetApp will have to increase its software and services contribution to deal with increased customer complexity.

Comments:

  • CJ (http://pulse.yahoo.com/_MSN4PFN24NOTA2Z6U56M3AQOOQ):

    you missed the 2002?

  • stu (http://blogstu.wordpress.com):

    2002 was about Unified (obviously a critical innovation for NetApp) and Oracle on NFS. See this Wikibon research note on the business case for unified storage: http://wikibon.org/wiki/v/Business_Case_for_Unified_Storage_Consolidation_for_Microsoft_Windows_Installations
