Storage Peer Incite: Notes from Wikibon’s December 16, 2008 Research Meeting
Economic booms generally float all boats, but downturns separate the strong from the weak. Thus, a recession can be a time for cutting back for some organizations, but for others it can be an opportunity to position for the eventual recovery. In this period, when the world economy seems to be entering what might be the deepest downturn since the 1930s, it is relevant to realize that many businesses that still prosper today and in some cases dominate their industries were founded during the Great Depression, and that whole industries -- film and aerospace to name two -- grew to become giants during that period.
EMC's leadership certainly is aware of this. In the dot-com bust of 2001, EMC was caught off guard and struggled to remake itself. This time around it is sending a clear message that it is in a strong position and has the strategy, resources, and technologies to set the pace in both technological development and market growth. That certainly is the message it sent in its annual analyst meeting this month, and if it can maintain that pace, it could position itself to gain significant market and mindshare over the next year.
EMC does have some work to do in the area of energy efficiency. While it is doing good work internally, and this was presented at the meeting, it has not been as forthcoming about reducing energy consumption of its products. As the leader in the storage market, it needs to provide its customers with a clear set of plans and expectations that they can build into their own green IT and energy savings planning.
Several Wikibon members attended the EMC event. While it is impossible to cover everything from those two days in Hopkinton in a newsletter, the articles below analyze some of the major issues discussed -- and in some cases notably not discussed -- and provide our consensus on EMC's major strengths and weaknesses. G. Berton Latamore
On December 9th and 10th 2008, the industry analyst community converged on Hopkinton, Mass., for two days of business updates, insights, and directions from EMC. Unfortunately, Joe Tucci could not be in attendance because he was announcing a renewed relationship with Dell that both companies probably wished they’d announced before the financial meltdown. Tucci was missed. He is always a highlight at these meetings because of his powerful presence and willingness to provide a glimpse of the future that other managers don't have the authority to convey. As one Wikibon member in attendance said, "Tucci always shows a little leg."
EMC’s analyst meeting is consistently one of the most well-attended and informative sessions in the industry (notes from 2007's meeting). Despite the notable lack of any direct customer presentations and the non-participation of EMC’s most senior executives, this year’s event provided an excellent update of the company’s directions.
B.J. Jenkins, EMC’s Senior Vice President of Marketing, kicked off the meeting with some impressive year-on-year financial figures, including: 1) twenty-one consecutive quarters of double-digit revenue growth; 2) Q3 revenue of $3.7B, up 13%; and 3) non-GAAP EPS up 14%, leading some to ask “What recession?”
Given the economic outlook, this could be the end of a pretty impressive run. Nonetheless, Jenkins made the case that EMC in this economy is in a much stronger position than it was in the 2001 downturn when it was mostly a slow-to-change, premium-priced hardware company with a high-end focus and a weak balance sheet. He argued EMC has a much broader portfolio today, is more responsive, and has the industry’s strongest balance sheet. In fact, Jenkins flatly stated EMC will gain share in its top markets and may selectively cut prices, but only in emerging regions (e.g., China and India). This is a sharp contrast to 2001, when EMC was clearly on the defensive. Jenkins delineated EMC's guiding principles for 2009 (shown in Figure 1) which provided a credible mental model of EMC’s operating approach for the coming year.
User implication: Look to EMC in 2009 as the safest bet with competitive market pricing and a willingness to provide a degree of up-front hand holding, for free, to win business.
EMC’s core point product story seems to be taking a back seat to a solutions emphasis largely enabled by a huge portfolio. The Wikibon community believes this is a credible way to present complexity—it’s easier for sales teams to describe solutions than to painstakingly enumerate point products. It more closely fits the model of how customers want to buy, aligning with a business capability rather than a product or feature. Integration remains a challenge, and EMC customers should not expect a seamless answer to integration across all EMC’s core lines in the near or mid-term. Serious exploitation will continue to require external services or internal efforts.
User implication: EMC’s broad portfolio is good news for large customers, as bundled deals will come easier with fewer throats to choke.
The cloud and the consumer
EMC shared how it is expanding its portfolio into far-flung places like cloud computing (with Atmos) and consumer markets. The Wikibon community feels that while these initiatives bring some risk of EMC spreading itself thin, a number of its strategic investments will pay returns, and customers should look for EMC to be a leader or fast follower in many emerging spaces. EMC can afford to invest in these ventures given its strong financial condition, while other firms may not be inclined to keep pace.
User implications: Strategically, the cloud and the consumerization of IT mean new ways of doing computing, lower-cost service-based models, and new types of functionality. Enterprises need to begin to understand these issues, and EMC's roadmap is a must-see for users.
Security and CMA
Security with RSA is a big differentiator for EMC, and despite slow progress on a unified security platform, traditional EMC customers will see substantial benefits through integration. Non-EMC storage platforms will have to leverage APIs and the RSA ecosystem, and how facile this integration will be remains to be seen. RSA is a major force in the security space and a gem in EMC’s portfolio. It is a clear leader with a fresh perspective on fundamentally protecting information versus putting up more firewalls and perimeter infrastructure. The consensus in the Wikibon community is that RSA will continue to innovate and lead in the security space and will become one of EMC's most prized possessions.
User implication: EMC customers will see integrated RSA solutions first. Non-EMC storage customers will be more reliant on RSA's ecosystem and APIs to integrate RSA's leading technology into storage solutions.
Content Management and Archiving (CMA)
The content management business is a "tale of two cities": 1) document workflow, the roots of CMA, and 2) risk mitigation, driven by regulation, litigation risks, and the 2006 changes to the Federal Rules of Civil Procedure. Unfortunately the business world had to mash these two together, and the result was not pretty. The former is about business process efficiency, workflow, collaboration, version control, etc., with a strong productivity value proposition. Architecturally it is suited to a centralized repository model. The latter, risk mitigation, has proven to be a distributed problem with risk in devices, desktops, laptops, and even social media tools. Centralized solutions don't scale, are expensive, perform inconsistently, and suffer from rapid obsolescence. Shoving everything that's discoverable into an archive was a knee-jerk reaction to government regulations and litigation risks. Applying centralized document management approaches to risk mitigation is a short-term strategy for users.
EMC’s content management and archiving business is leading, competitive, diverse, and capable but suffers from the same woes as other CMA suppliers: 1) The traditional business is becoming commoditized by open source solutions (e.g. Alfresco) and 2) Applying the centralized model to mitigate risk is proving complicated, expensive, difficult to scale, and solutions purchased today may not be viable tomorrow.
Wikibon believes EMC is keenly aware of these trends and has some advantages in services, implementation skills, and perspectives, as evidenced by the presentation of Andy Cohen, EMC's Assistant General Counsel. However, EMC's large installed base makes it vulnerable to a market that is in flux and demanding new tools, methods, and solutions, especially for managing risk. EMC and others must move fast to thrive, and that's not easy in an entrenched market like content management.
User implications: Customers had better remain flexible and open to adopting new technologies as they enter the market, especially in the risk mitigation area. With regard to risk management, customers should use point products to fill holes and set low expectations regarding simple, cost-effective integration. Customers should continue to expect challenges servicing the needs of business lines, legal, records management, audit, and finance functions with an integrated solution.
Data center efficiency
On the VMware front the message is clear: EMC is a dominant storage force in the virtualization space. EMC’s VMware storage activities, initiatives, best practice documentation, and partnerships are second to none, and while VMware must continue to maintain independence from its majority shareholder, it is clear EMC the storage company intends to compete vigorously in the VMware storage space and grab as much VMware storage land as possible-- this is a high priority for EMC.
In a related matter, many Wikibon users have touted the overwhelming advantages of virtualizing storage in a VMware environment (ease of migration, better utilization, etc.). EMC has never bought this argument; it stresses flexibility as the key to VMware storage and claims that a fully virtualized storage backend is not a prerequisite for flexibility. Notably, Invista, EMC's only fully virtualized heterogeneous storage system, didn’t receive a single mention at the meeting. Yet the lack of a credible heterogeneous storage virtualization platform has not hurt EMC in the market up to this point, and there is little evidence it ever will. While Wikibon members continue to see advantages to fully virtualizing a VMware storage backend, discussions with EMC customers suggest they are exceedingly happy with their results.
User implications: Look beyond storage vendor claims of VMware leadership and push vendors for demonstrable milestones. These include broad support for multiple protocols, proof of scalability, backup knowledge, admin features, reference architecture, and strong services offerings.
The Wikibon community feels that the EMC story on sustainability goes something like this: EMC is a strong corporate citizen but not noticeably more active than other Fortune 500 companies. Certain competitors such as IBM, Sun and Hitachi have done a better job of communicating goals, milestones, and metrics. While EMC's Kathrin Winkler, Senior Director of Corporate Sustainability, showed several credible initiatives (e.g. lowering electricity consumption in its labs, telecommuting, water conservation, etc.) the full commitment from the very top is not evident across the spectrum. The storage industry in general and EMC specifically need to place more emphasis on green innovation.
Specifically, on the product design front, is EMC worse off than other storage suppliers? Not really in a broad sense, but certain companies stand out architecturally, and it's difficult to argue that EMC is 'The Leader' in green storage. As 'The Leader' in storage, EMC needs to more clearly and forcefully demonstrate its lead in this important area.
User implications: The U.S. IT industry in general needs to do more in green. Falling oil prices risk repeating the ecological lip service paid to conservation in the 1970s and threaten the planet, the industry's long-term competitiveness, and organizations' operational efficiencies.
Other notable areas at the meeting included flash, where EMC bolted to the lead in January 2008 and is now right on the early slope of the S-curve. Other suppliers are behind, but EMC pre-announced flash before it was ready in order to gain a marketing advantage, and the storage industry as a whole will cluster around flash in the next 24 months. Nonetheless, we expect EMC to maintain its time-to-market lead (in flash as a disk replacement) for some time to come and perhaps indefinitely.
User implications: Customers should start testing various flash and solid state technologies and identifying application candidates to get ready to dissect a slew of offerings across the I/O spectrum.
Finally, EMC’s data de-duplication strategy is coming together nicely, as it is strongly positioned, especially in VMware environments. Avamar initially saw resistance in shops that didn’t want to re-architect the backup process; however, the increasing popularity of VMware and the need to improve backup processes for efficiency has been a boon to Avamar. Users should familiarize themselves with the various de-duplication alternatives to understand where they fit. EMC should be on the short list.
User implications: Customers willing to re-assess their backup processes could see substantial efficiencies with Avamar's source-based data de-duplication approach. While such solutions can be expensive and should never be a band-aid for getting rid of unneeded data, the cost of backup generally is one of the most onerous areas in the storage budget and next generation backup methodologies warrant investigation.
On balance, EMC did a very credible job at this year's analyst event of convincing analysts that it would maintain a leadership position and forge ahead in new markets. It is clearly an exceedingly well-run company and will likely strengthen its position in 2009-- EMC remains the safe bet for users.
However, users should expect EMC to continue to solve key integration challenges with a mix of professional services and investments in product synergies where EMC’s own returns can be maximized (e.g. Avamar integration). Users should not expect similar integration where EMC's position is less likely to be advanced (e.g. heterogeneous SRM). This is a blessing and a curse for customers as they are drawn to EMC’s problem-solving capabilities and outstanding service while at the same time becoming increasingly reliant on the company’s solutions, thereby reducing negotiating leverage.
Action item: Budget constraints in 2009 bring unique challenges and opportunities to EMC’s customers. Users should take advantage of EMC ‘freebies’ in its solutions space to reduce implementation risk by ensuring performance and configuration testing. Users should also treat 2009 EMC purchases as a project, organizing a cross-functional team with visibility into all acquisitions from the company. This will ensure the highest quality service and best price from the storage giant.
Back in May 2008 Mark Lewis, who heads EMC’s Content Management and Archiving (CMA) solutions group, along with several EMC colleagues, made a flurry of announcements that outlined their intention to integrate and develop a suite of existing and next generation products to support the creation, capture, flow, storage, and retrieval of the majority of unstructured data or electronically stored information (ESI) that is produced and received within an enterprise.
Using cool codenames such as projects Athena, Janus, and Magellan, EMC outlined promising Web 2.0 applications to enhance user collaboration for Documentum clients along with announcements regarding the “integration of transactional content management for out-of-the-box solutions through the integration of Captiva and Document Sciences” as well as the development of SOA (service-oriented architecture) and next generation capture technology.
Last week I spent most of my time at EMC’s annual IT analyst event with the CMA group, in particular to understand how project Janus, the "next generation in email archiving", was progressing. Much of what was disclosed is not public; however, piecing together publicly available information, one can readily infer that project Janus has a ways to go before it’s ready for prime time.
If you are a happy EMC client and have been waiting for its Next Gen Enterprise Email Archiving (EEA) solution, don’t take this too hard. All of the market-leading solutions in the EEA space are either immature or clunky first-generation systems that don’t scale, are not well integrated, or lack basic functionality. Rome wasn’t built in a day, and neither will Janus be, as archiving solutions are complicated, expensive, and time consuming to bring to market. Take, for instance, segment leader Symantec, which opted to acquire major components such as KVS and “integrate” with other point solutions; or Zantaz/Autonomy, which maintains separate hosted and enterprise solutions and has announced but not yet delivered a next-generation solution for the enterprise; or Oracle, which has opted to OEM a solution from ZL Technologies and integrate it with its Stellent product line rather than build an EEA.
Moreover, with additional major systems players such as IBM, HP, Google, and Dell, along with a raft of smaller players, competing in EEA with in-house or hosted solutions, the space is very crowded. And for good reason: email archiving is big bucks, and over the next twenty-four months users should expect rapid change and innovation to address performance, scaling, and inflexibility issues.
Large banks and financial services companies, as well as quasi-regulated big pharma firms, have been forced to become early adopters for regulatory and compliance reasons. These firms report clear concerns that technologies they install today will be obsolete in five years-- it's the nature of the beast. As well, they will tell you the price tag for acquiring and maintaining EEA solutions can run into the tens of millions of dollars per year. Even reasonably well managed ESI storage growth for these same companies can run upwards of 5 to 10 terabytes per month. Compounding the problem, not having a reasonably robust archiving solution to support e-discovery and litigation activities has the very real potential of costing a major corporation multiple times more than the archiving solution itself. These days lawsuits can be lost in the “meet and confer” stage, as corporate lawyers, armed with the newly minted FRCP (Federal Rules of Civil Procedure) rules governing access to ESI, are extremely IT savvy when it comes to archiving solutions and mapping data. The cost of retrieving data using a forensic data IT service can be staggering.
Many vendors including Oracle, IBM, and EMC, have chosen to include archiving within their overall content management or information management framework. However, most of the real action, and customer need, is in the archiving area. Having a well integrated CMA suite of tools and solutions could benefit users but it promises to be expensive, time consuming, and complicated. In today’s business climate the only “have to have” solution in the CMA suite is archiving.
EMC, like others, obviously sees the opportunity an EEA solution provides for additional application growth and, of course, for storage. Up until now, EMC has kept its best clients from migrating off EmailXtender and other point solutions, including search, e-discovery, mailbox management, case management, policy management, auto-classification, and journaling offerings provided directly or through partners to placate user archiving needs. But with next-generation solutions on the horizon from other major players and the limitations of first-generation systems becoming more exposed daily, EMC needs to bring Janus to market by the summer or it will start to lose the confidence of its most valued users and be passed by other innovators in the space. The fact that EMC has committed resources to support its Exchange and SharePoint clients should help.
Action item: We’re at an inflection point in the CMA business, with new technologies, consumerization, search, and new classification capabilities, and big bets are risky unless you really need to fill some holes (which you should have done already). Be prepared to spend a lot. Wait and see the EMC roadmap because it’s in flux. Point solutions from all vendors look viable but don’t expect integrated CMA any time soon. Patchy is the watchword. Use point solutions to fill gaps.
After many years of being a niche solution, Solid State Disk (SSD) technology for the enterprise is finally gaining some momentum, and the price points are becoming interesting. Like virtualization, SSD technology has been around quite a while, but current flash memory technologies are now gaining immense popularity in the consumer marketplace.
The first SSD product that I’m aware of was made by Dataram Corp in 1976 for the mini-computer market. StorageTek introduced its 4305 mainframe SSD product in 1978, the same year that Texas Memory Systems, an SSD product company, was founded. Other SSD products and companies have emerged since then. I used a RAM disk in my first PC in the early 1980's. In all these cases, the SSD products were designed to appear to the system as one or more disk drives but with significantly better performance than mechanical drives. These early products were quite expensive and based on RAM technology.
Today, enterprise SSD products use either DRAM technology, enterprise NAND flash technology, or a combination of both. DRAM is the same memory technology found inside servers. It is very fast but loses data when the electric power goes out. NAND flash technology is non-volatile, retaining its data when the power goes out, but is not quite as fast as DRAM. Today’s flash technology is available in two flavors: enterprise and consumer. Enterprise flash drives use single-level cell (SLC) technology, which stores one bit per cell and has a typical endurance of 100,000 write/erase cycles per cell. Consumer flash drives use multi-level cell (MLC) technology, which stores multiple bits per cell and has a typical endurance of 10,000 write/erase cycles per cell. SLC technology is faster than MLC and, as expected, more expensive.
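The endurance numbers translate directly into drive lifetime. A minimal back-of-envelope sketch, assuming ideal wear leveling (real drives fall short of this) and an invented 73 GB drive and workload, neither of which is a vendor specification:

```python
# Back-of-envelope flash endurance estimate. Assumes ideal wear
# leveling: writes are spread evenly over all cells, so total write
# capacity is simply capacity x cycles-per-cell. Real drives lose
# some endurance to write amplification. All figures illustrative.
def endurance_years(capacity_gb, cycles_per_cell, writes_gb_per_day):
    total_write_capacity_gb = capacity_gb * cycles_per_cell
    return total_write_capacity_gb / writes_gb_per_day / 365.0

# Hypothetical 73 GB drive absorbing 500 GB of writes per day:
slc = endurance_years(73, 100_000, 500)  # SLC: ~100,000 cycles/cell
mlc = endurance_years(73, 10_000, 500)   # MLC: ~10,000 cycles/cell
print(f"SLC: {slc:.0f} years, MLC: {mlc:.0f} years")
```

Under these assumptions the SLC drive outlasts any sane deployment window while the MLC drive does not, which is why enterprise products pay the SLC premium.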
Enterprise flash drives are architected to provide at least five years of useful life by using wear-leveling algorithms and some self-healing capabilities. In addition, enterprise flash drives provide significant performance improvements when compared to mechanical disk drives. Typical enterprise flash drives provide 25x-30x IOPS performance, 10x faster response time, have no moving parts, and have significant benefits in terms of power, heat, space, noise and weight savings. In addition, administrators of mission-critical applications can spend significantly less time in activities focused on overcoming storage performance bottlenecks. Other obvious improvements include the ability to drive higher transaction volumes with existing servers, reduced disk-based backup times, reduced “rebuild” times, and other similar benefits.
Does this mean that we should now replace all our mechanical disk drives with enterprise flash drives? Probably not, for at least two fundamental reasons. One obvious reason is cost. The raw cost of enterprise flash drives on a total-capacity basis is still significantly higher than for mechanical disk drives. However, the price points are now such that one could populate a disk array with a combination of flash drives, Fibre Channel (or SAS) drives, and lower-cost SATA drives for the same price range as the array fully populated with Fibre Channel disk drives. Configuring an array this way provides multiple tiers of storage with different performance levels for various applications, lower power consumption, potential savings in open slots, and possibly even a lower price.
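The tiering arithmetic can be sketched in a few lines. The per-gigabyte prices and tier sizes below are invented purely for illustration, not list prices from EMC or anyone else:

```python
# Illustrative tiered-array comparison. Prices ($/GB) and tier
# sizes (GB) are hypothetical assumptions for the sketch.
tiers = {
    "flash": (30.0,   1_000),   # small, fast tier for hot data
    "fc":    ( 5.0,  20_000),   # Fibre Channel for active data
    "sata":  ( 1.0, 100_000),   # cheap bulk tier for cold data
}
mixed_cost = sum(price * gb for price, gb in tiers.values())
mixed_gb = sum(gb for _, gb in tiers.values())

# The same total capacity as a homogeneous Fibre Channel array:
fc_price_per_gb = 5.0
all_fc_gb = 121_000
all_fc_cost = fc_price_per_gb * all_fc_gb

print(f"Mixed:  {mixed_gb:,} GB for ${mixed_cost:,.0f}")
print(f"All-FC: {all_fc_gb:,} GB for ${all_fc_cost:,.0f}")
```

With these assumed numbers the mixed configuration delivers the same capacity for well under half the cost of the all-Fibre-Channel array, while adding a flash tier the homogeneous array lacks.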
The second fundamental reason for not populating an entire disk array entirely with enterprise flash drives is that today’s disk array controllers are architected for the performance characteristics of mechanical disk drives, with some headroom. Populating a disk array entirely with enterprise flash drives would overwhelm today’s controllers, moving the bottleneck from the drives to the controllers.
So what are the short-term and long-term implications? In the short term, enterprise flash drives should be viewed as complementary to mechanical disk drives. 15K RPM drives are probably the fastest mechanical disk drives we will ever see, and we should look to enterprise flash drives as the next higher performance category.
In the long-term, it will be probably five to ten years before enterprise flash drives become the preferred default choice for disk drives over mechanical disk drives. In addition to flash technology, there are other memory-based technologies in the research labs that may prove to be very cost effective as storage devices.
As these enterprise flash drives become more commonplace and we gradually move away from mechanical disk drives, we can begin to re-think storage. A great deal of our storage thought processes are either directly or indirectly related to the fact that the basic storage device is a mechanical piece in an otherwise all-electronic system.
Action item: Today’s large disk subsystems cannot support an array completely populated with enterprise flash drives. Ask your storage vendor how many enterprise flash drives the arrays you are considering can support. Also, consider a configuration that mixes flash, Fibre Channel (or SAS), and SATA disk drives, and compute the potential cost and power savings compared with an array of a single drive type. Look for management software that can handle three tiers of disk storage.
EMC’s corporate commitment to green IT and sound ecological practices is certainly to be commended. At last week's analyst briefing, Kathrin Winkler, Senior Director of Corporate Sustainability, presented an impressive array of programs ranging from waste water recycling, on-site renewable energy generation, e-waste, and hazardous materials to green packaging concepts.
However, we would have liked to see more emphasis on what EMC is doing to engineer efficiency into its storage products. Yes, EMC has driven spin down on some products (although so far that capability is limited to EDL and is not widely usable). But at the meeting, EMC placed very little emphasis on the most basic of engineering concepts such as using highly efficient power supplies and adaptive cooling techniques. While these are nitty gritty details probably not appropriate for a high level analysts meeting, we encourage customers to ask EMC to discuss its roadmap in these areas.
In fairness, most storage suppliers today are not using highly efficient power supplies, but some, like Nexsan, Verari, and Xyratex, are leading the way. EMC has announced adaptive cooling with the new CX4, and it's probably a reasonable bet that this technology will find its way to both DMX and Celerra products eventually. It's also reasonable to assume that EMC engineers are committed to fundamental efficiency designs, and it's probable that EMC's internal IT department, like every other IT department, is applying green thinking. However, we believe EMC needs to pull together these largely grass-roots efforts and set a more forceful green leadership agenda for the entire storage industry.
Here's the bottom line: We expect EMC, as the clear storage leader, to take a leadership role in innovative green technologies and as a premier product company, be held to the highest standard of product design innovations. We believe these innovations exist or are in the works but feel EMC needs to communicate better both internally and externally about them. We have an expectation of high performance from EMC in all significant areas and feel that EMC has some work to do to live up to its brand promise with this one.
To answer the growing demand for more energy-efficient solutions, EMC has elected to promote a high-level conservation approach typified by power-aware information management; translated, this means understand your data and its impact on power, and move or manipulate the data to gain power efficiency. Virtualization, consolidation, archiving, de-duplication, tiering, and automation are the recommendations EMC is offering in response to power conservation concerns.
Nothing is wrong with these resource and data management techniques; quite the contrary, they can be very effective. However, a reasonable skeptic wonders whether they serve as a smokescreen that lets EMC camouflage excessive power consumption in its storage products. The lack of a highly visible and meaningful commitment to drive energy efficiencies at the most basic product level would be a significant failure on the part of the world's number one supplier of data storage hardware. It was missing from the conversation in Hopkinton and needs to be placed on the front burner in our view. We understand these capabilities exist and are calling for EMC to share its roadmap more publicly.
Wikibon has requested and EMC has agreed to share details in this area, and we remain upbeat about the progress EMC can make. Our impressions are based on observations over the past 15 months, during which we have listened to EMC's public presentations and compared its commitment to green with that of other suppliers. At this point we see EMC in the middle of the pack; whether the company is poised for a strong run will become more clear in the coming months.
Action item: It is time for Joe Tucci to unequivocally demonstrate EMC's commitment to improve energy efficiency across the spectrum, including unveiling the basic engineering innovations and roadmap on its products and empowering a green czar with the authority to commit resources. EMC expressed a credible commitment to “The Total Customer Experience” at the meeting and was rightly proud of its success to date. If customers push back and influence this metric, positive action will result. Remember lower energy consumption equates to lower operating expenses and TCO.
System and networking specs rate computer performance according to bandwidth and clock speed and ignore latency. Latency is the time that elapses between a request for data and its delivery. It is the sum of the delays each component adds in processing a request. Since it applies to every byte or packet that travels through a system, latency is at least as important as bandwidth, a much-quoted spec whose importance is overrated. High bandwidth just means having a wide, smooth road instead of a bumpy country lane. Moreover, latency is actually a much thornier problem in a world where applications are broken into pieces and often distributed around the world.
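The road analogy can be made quantitative. For a small request, total fetch time is latency plus transfer time, and the latency term swamps the bandwidth term no matter how wide the pipe. The link figures below are illustrative assumptions, not measurements:

```python
# Time to fetch one small block: latency + transfer time.
# Link speeds and latencies here are illustrative assumptions.
def fetch_time_ms(size_bytes, bandwidth_mbps, latency_ms):
    transfer_ms = size_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1000
    return latency_ms + transfer_ms

block = 4096  # one 4 KB block
lan = fetch_time_ms(block, 100, 0.5)    # modest LAN, sub-ms latency
wan = fetch_time_ms(block, 1000, 70.0)  # 10x the bandwidth, WAN latency
print(f"LAN: {lan:.2f} ms, WAN: {wan:.2f} ms")
```

Under these assumptions the WAN link has ten times the bandwidth yet is nearly two orders of magnitude slower per block, which is the whole point: for chatty, small-request workloads, latency, not bandwidth, sets the ceiling.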
Advocates of the Cloud Computing hoopla tend to reduce the relationships between software components and their associated data to interactions within some idealized network abstraction. Such an abstraction is a useful concept, but it does tend to de-emphasize how the parts interact in the real, physical world.
Even a lot of staunch Cloud Computing advocates who liken using processor cycles out of some network grid to a computing version of the electric utility generally concede that storage is a trickier problem. Whereas computing is something you just consume, data has state. And if you lose that state, the data is gone. This is a fundamentally more serious problem than losing access to a compute utility for a few minutes or even an hour. For this reason, internal storage clouds will likely become far more popular than external storage clouds.
We tend to run applications close to the data they operate on for a reason: Application performance is often largely a function of how quickly it can read and write the data that it’s working on. And data stored on a local hard disk can almost always be accessed faster than that same data sitting at the other end of a network pipe hundreds or even thousands of miles away.
Thus, if storage stays inside organizations, a lot of the processing of that data will stay inside as well. And the general trend towards more data-intensive modeling and mining only strengthens this relationship, because "latency matters more than ever" in a world where the pipes are distributed networks.
Local disk subsystems these days typically deliver data response times of five milliseconds or less. What does the Internet yield? To research this, we conducted a few simple tests. To sample typical Internet delays, we pinged 15 of the most popular sites as listed by Alexa, once a second for one minute, with the following results:
- Average Latency 72 ms
- Maximum Latency 142 ms
- Minimum Latency 25 ms
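This kind of sampling is easy to reproduce by parsing the round-trip times out of ordinary ping output and summarizing them. A rough sketch (the regular expression assumes the common `time=XX ms` output format; the sample lines below are illustrative, not our actual measurements):

```python
import re

def latency_stats(ping_output):
    """Extract round-trip times (ms) from ping output; return (avg, min, max)."""
    times = [float(t) for t in re.findall(r"time=([\d.]+)\s*ms", ping_output)]
    if not times:
        raise ValueError("no round-trip times found in ping output")
    return sum(times) / len(times), min(times), max(times)

# Illustrative ping output for one sampled site.
sample = """64 bytes from 93.184.216.34: icmp_seq=1 ttl=56 time=25.1 ms
64 bytes from 93.184.216.34: icmp_seq=2 ttl=56 time=72.4 ms
64 bytes from 93.184.216.34: icmp_seq=3 ttl=56 time=142.0 ms"""

avg, lo, hi = latency_stats(sample)
```

Running the same summary across many sites and many samples is what surfaces the wide spread between minimum and maximum latencies noted above.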
Interestingly, more than half of the larger latencies were encountered en route within the Internet itself rather than at the targeted sites, usually at one or two routers that seemed to be having a bad day. This is an all-too-common occurrence.
On small LANs, ping times are practically immeasurable (less than one millisecond), so bandwidth and protocol overhead dominate instead. To explore this, we copied a one-gigabyte file, first locally and then over our LAN. All desktops were running Windows XP Pro with CPUs in the 1 GHz range and 7200 RPM disk drives. The LAN was operating at 100 Mbit/s. The results were:
- File copy disk to same disk 215 seconds
- File copy disk to a LAN attached desktop 420 seconds
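Some back-of-envelope arithmetic puts those numbers in context: on a 100 Mbit/s LAN, one gigabyte needs at least 80 seconds of pure wire time, so the observed 420 seconds implies that protocol and disk overhead cut effective throughput to roughly a fifth of wire speed. A sketch of the arithmetic:

```python
GB = 1_000_000_000  # bytes in one gigabyte (decimal)

wire_speed_bps = 100e6                           # 100 Mbit/s LAN
theoretical_min_s = (GB * 8) / wire_speed_bps    # 80.0 s just to serialize the bits
observed_s = 420                                 # measured LAN copy time
effective_bps = (GB * 8) / observed_s            # ~19 Mbit/s actually achieved
efficiency = effective_bps / wire_speed_bps      # ~0.19, i.e. about a fifth of wire speed
```

The gap between the 80-second floor and the 420-second measurement is where disk seeks, SMB protocol chatter, and TCP overhead live.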
So, just copying this file over a local network to a system only five feet away took almost twice as long as copying it locally. From experience, we know that the time would be significantly shorter, but still not great, over a 1 Gbit/s network. This is often the reason users shun iSCSI for higher performing applications, turning to Fibre Channel SANs or direct attach disk instead.
These simple tests illustrate that latency on the Internet is 100 to 1,000 times higher than on local networks. Most disconcerting is that Internet latency is highly variable and unpredictable. For the most part, users will tolerate slow but consistent response times; what they will not tolerate is unpredictability. What's more, getting tech support for a slow Internet is like going down a black hole.
Action item: Don’t forget that the best I/O is no I/O. Cloud storage players need to reveal how they deal with high and extremely variable latencies. Users should approach cloud storage with informed skepticism.
The economics of consolidating large numbers of poorly utilized servers under VMware or other virtual server products is a no-brainer. Projects usually break even within a year, provisioning of new virtual servers is improved from weeks or months to hours, and if storage virtualization is tackled as well, overall computing and power costs can be significantly reduced.
Clearly the first application areas that have been virtualized are the less mission-critical. File and print servers, data mart servers, development and networking servers, and tier 3/tier 4 applications have been moved across in droves.
However, IT departments have correctly been holding back on moving tier 1 and tier 2 mission-critical applications. Reasons include a natural lag and reluctance to change anything in a mission critical environment that works. But other factors impact the business decision to virtualize workloads:
- Mission critical applications (especially high-performance applications) tend to have higher processor utilizations, so the business case for migration is not as strong;
- VMware's processor performance overheads become more acute in mission critical workloads;
- I/O performance and manageability issues in virtualized environments can impact the achievement of consistent response times and data availability levels;
- Elapsed time for application support services such as backup, recovery, and business continuance processes can be significantly longer, which can impact meeting RPO and RTO service levels for the applications.
EMC’s VMware group is working with Intel and other partners to introduce improved architectural features, and has aggressive plans to introduce additional end-to-end capabilities for a virtualized infrastructure. However, this will take time and requires significant testing, both by the industry as a whole and within specific data centers, before these capabilities reach general acceptance. In the meantime, IT organizations should and will work their way up the application tiers one at a time, introducing pragmatic virtualization infrastructures according to the business requirements of the applications.
Action item: Most organizations will keep some application groups off virtual environments for the foreseeable future. IT executives should not force-fit all application groups to one virtualization environment. It is likely that multiple virtual environments will be created to meet the performance, availability, and recovery requirements of different application tiers. Senior IT executives should be careful not to oversell the adoption and cost reduction potential of virtualization for mission critical applications.
One of the more disappointing areas of coverage at the 2008 EMC Analyst Event was the consumer business. Not because Joel Schwartz and Jay Krone were not engaging, interesting, and effective in describing Iomega and its products -- they were. The disappointment came because EMC didn't even come close to teasing the audience with the potential of its consumer strategy. In fact, there was very little discussion of the consumer vision in the way EMC has provided a directional glimpse of cloud storage with ATMOS. The audience was left wanting more.
There were likely three primary reasons for this: 1) EMC is still figuring it out and doesn't want to tip its hand too early; 2) The consumer business is largely a separate entity similar to VMware; and 3) There was a lot of ground to cover at this meeting.
Here are the fundamentals. EMC is fond of citing IDC data suggesting that 70% of new information will be created by individuals but that much of that data will ultimately be stored in a secure online location (presumably enough to make a good business). This data is scattered, redundant, and persistent (often hanging around for many decades). It's interesting to note that few if any other storage vendors cite these statistics. Are they dubious? Not likely; they sound perfectly reasonable. Are there technical issues (e.g., speed of light)? Not likely with the type of unstructured information we're talking about. Are others just not paying attention? Perhaps.
EMC's consumer strategy is finally coming together and moving fast. In the fall of 2007, shortly after EMC's acquisition of Mozy, EMC put Tom Heiser, a longtime EMC Type A overachiever, in charge of what was then referred to as the SaaS business (Software, or Storage, as a Service), essentially comprising Mozy and a vision of bringing SaaS to EMC's portfolio. EMC then acquired Pi, the company founded by Microsoft's Paul Maritz. Maritz was in, and Heiser moved over to help Art Coviello cream RSA's competition. Then Maritz was tapped to run VMware when Diane Greene was sent packing, leaving a void in what has become EMC's consumer services business.
Then in November 2008, EMC announced Decho, which stands for "digital echo," and the pieces of the puzzle started to come together. Funny how a good name can catalyze thought and action. Digital echo is an allusion to the bouncing of information between devices like cell phones, PDAs, notebooks, and the cloud. Decho's goal is to bring information-centric computing to the cloud to address the exploding sea of digital information. The problem Decho addresses is helping individuals organize their digital lives, meaning photos, audio, financial records, personal documents, and so forth. These documents have increasing value to each individual, and Decho hopes to store, protect, and provide services to exploit the value of this information.
Decho is made up of Mozy, Pi, and some homegrown EMC IP. When acquired by EMC, Pi, which stands for "personal information," was a stealth startup. It develops technology to search, tag, and index information with an understanding of where the information lives in the cloud. Pi's metadata platform will be combined with Mozy and, presumably, ATMOS and other cloud services geared to consumers wanting to organize their information stovepipes.
To be sure, Decho will be competing with the likes of Google, Flickr, Facebook, Xdrive, Yousendit and all the other zillions of places people store information, often for free. The challenge will be to offer a value proposition that organizes personal information (without the stovepipes) and monetizes Decho technology by offering services that extend existing services to places where these other stovepiped services aren't going. An example might be to offer Mozy customers a way to access or share some or all the information that's been backed up.
Where does Iomega, a device company, fit? Iomega has a brand and channel and sells truckloads of devices that store -- you guessed it -- personal information, the exact target of Decho. The ability to upsell Iomega customers is clearly an asset.
Today, Decho is an autonomous $10M+ company with 100 employees that gets virtually all its revenue from Mozy. It has facilities in Seattle, Utah, Canada, and India. It has numerous job openings and just hired a CEO to make the vision come to fruition.
Questions: 1) Will it work? Just maybe. 2) Can EMC compete effectively in Web 2.0 consumer services? Why not? 3) Can Decho execute? Probably. 4) Does it have the ecosystem and all the pieces? Doubtful.
One thing is clear -- EMC has the motivation, the vision, the technical skills, the cash, and the execution ethos to make it happen.
Action item: EMC's consumer strategy is to anticipate change and attempt to exploit opportunity rather than hang on to the high margin past and resist clear trends. The survival of companies with entrenched business models requires this type of anticipatory thinking, and the industry at large should take note.