Archive for category Storage

Why Tape is Poised for a George Foreman-Like Comeback

Tape is Dead, Not!

The combination of tape and flash will yield much better performance and substantially lower cost than spinning disk. This statement will prove true for long-term data retention use cases storing large data objects. The implications of this forecast are: 1) Tape is relevant in this age of Big Data; 2) Certain tape markets may actually show growth again; 3) Spinning disk is getting squeezed from the top by flash and from below by a disk/tape mashup we call “flape.”
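To make the shape of that claim concrete, here is a minimal back-of-envelope sketch comparing an all-disk archive with a "flape" design that keeps a thin flash index in front of a tape library. All of the per-terabyte prices and the flash-index ratio are illustrative assumptions, not Wikibon figures.

# Illustrative "flape" vs. all-disk cost comparison for a long-term archive.
# All prices and ratios below are assumptions for illustration only.

ARCHIVE_TB = 1000            # total archive size in TB (assumption)
DISK_COST_PER_TB = 60.0      # $/TB for nearline spinning disk (assumption)
TAPE_COST_PER_TB = 15.0      # $/TB for tape media plus amortized library (assumption)
FLASH_COST_PER_TB = 500.0    # $/TB for flash (assumption)
FLASH_INDEX_RATIO = 0.02     # fraction of data kept on flash as a hot index (assumption)

all_disk = ARCHIVE_TB * DISK_COST_PER_TB
flape = (ARCHIVE_TB * FLASH_INDEX_RATIO * FLASH_COST_PER_TB
         + ARCHIVE_TB * TAPE_COST_PER_TB)

print(f"All-disk archive : ${all_disk:,.0f}")
print(f"Flape archive    : ${flape:,.0f}")
print(f"Flape saving     : {100 * (1 - flape / all_disk):.0f}%")

The point of the sketch is the tradeoff, not the exact numbers: because only a thin flash layer sees active I/O, the bulk of the capacity can sit on the cheapest medium available.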

Spinning Disk: Slow and Getting Slower


SDS Is More Than Just Virtualization

Today, I read an article at SearchVirtualStorage entitled "SDS a fancy way to say virtualization, says DataCore Software chairman."  In this article, DataCore Software Corp. chairman and founder Ziya Aral argues that SDS is essentially just virtualization and that he sees little real difference between the two terms.

SDS is a superset that includes virtualization

First, I understand where Mr. Aral is coming from. There are major similarities between the two concepts, people use the terms interchangeably, and, let's face it, vendors love to invent new terms in a valiant effort to prove their forward-thinkingness; we're seeing software-defined everything these days.  However, I see SDS as a superset technology that includes virtualization as one of its primary components.


Whither NetApp: The Future of a Silicon Valley Icon

NetApp is a company with a rich history and a culture of innovation, and it has consistently proved the naysayers wrong. Still, NetApp is under fire again, for several reasons, some of them strange:

  1. The company rocketed out of the recession in 2010 and 2011 and hasn’t been able to sustain its incredible market share gains and growth momentum
  2. The company has too much cash – nearly $7B
  3. NetApp is not currently perceived by some on Wall Street as a company positioned for the future.

IBM’s FlashSystem Isn’t For Mainstream CIOs…yet

Introduction

Recently, IBM announced a $1 billion initiative intended to improve the overall flash storage market and integrate flash storage into the company's line of enterprise technology equipment, including servers, storage, and other products.  The company believes that flash-based storage is at a tipping point in the marketplace and is poised to become much more widely used, thanks to the incredible performance gains the technology offers.  Further, as with any technology approaching critical mass, overall costs begin to drop, and this is certainly happening with flash storage.  There are other significant cost benefits to flash-based storage as well, such as reduced power consumption; at scale, such power savings can be real and significant.
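To put a rough number on "real and significant at scale," the sketch below estimates annual power cost for a disk-based versus a flash-based tier. The wattages, drive count, and electricity price are illustrative assumptions, not figures from IBM's announcement.

# Rough power-cost comparison for a large drive population.
# Wattages, counts, and the electricity price are assumptions for illustration.

DRIVES = 5000                 # drives in the tier (assumption)
HDD_WATTS = 10.0              # average watts per enterprise HDD (assumption)
SSD_WATTS = 4.0               # average watts per enterprise SSD (assumption)
PRICE_PER_KWH = 0.10          # dollars per kWh (assumption)
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts_per_drive):
    kwh = DRIVES * watts_per_drive * HOURS_PER_YEAR / 1000.0
    return kwh * PRICE_PER_KWH

hdd_cost = annual_cost(HDD_WATTS)
ssd_cost = annual_cost(SSD_WATTS)
print(f"HDD tier power cost : ${hdd_cost:,.0f}/year")
print(f"SSD tier power cost : ${ssd_cost:,.0f}/year")
print(f"Estimated saving    : ${hdd_cost - ssd_cost:,.0f}/year (before cooling)")

The saving roughly doubles once cooling is counted, since every watt of drive power is also a watt of heat to remove.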


Flash Storage will Radically Change Systems and Application Design


I’d like to explore the topic of how system and storage architectures are changing and the impact this will have on application delivery and organizational productivity.

Allow me to put forth the following premise:

Today’s enterprise IT infrastructure limits application value.

What does that mean? To answer this, let's first explore the notion of value. The value IT brings to an organization flows directly from the application to the business and is measured in terms of the productivity of the organization. Infrastructure in and of itself delivers no direct value; however, the applications that run on that infrastructure directly affect business value. Value comes in many forms, but at the highest level it's about increasing revenue and/or cutting costs, and ultimately delivering bottom-line profits.


Flash Wars Heat Up as EMC and Fusion-io Battle for Top Gun

Quick Take

Flash competitors are aggressively jockeying for position as the market heats up. It's a tale of two styles. At one end of the spectrum, EMC's entrance into the all-flash array market targets traditional IT segments. It will pressure both competitive offerings and EMC's own high-end block storage business. EMC is positioning to cannibalize its own base before others cut too deep into the EMC muscle, but it must walk a fine line. At the other end of the spectrum, Fusion-io is uniquely positioned to serve the hyperscale market and currently stands alone with a software-led strategy that leverages atomic writes and delivers new value to database workloads.
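Why atomic writes matter to databases is worth a quick, deliberately simplified sketch. Without an atomic multi-block write primitive, engines such as InnoDB protect against torn pages by writing each page twice (a doublewrite area plus the tablespace); with one, the second copy is unnecessary. This is not Fusion-io's API, just an illustration of the write-volume effect, and the numbers are assumptions.

# Simplified sketch: write volume with and without atomic multi-block writes.
# Page size and flush rate are illustrative assumptions.

PAGE_SIZE = 16 * 1024          # 16 KB database page (assumption)
PAGES_FLUSHED_PER_SEC = 2000   # dirty pages flushed per second (assumption)

def bytes_written_per_sec(atomic_writes_supported):
    # Without atomic writes, each page is written twice (doublewrite protection).
    copies = 1 if atomic_writes_supported else 2
    return PAGES_FLUSHED_PER_SEC * PAGE_SIZE * copies

legacy = bytes_written_per_sec(False)
atomic = bytes_written_per_sec(True)
print(f"Without atomic writes: {legacy / 1e6:.0f} MB/s of page writes")
print(f"With atomic writes   : {atomic / 1e6:.0f} MB/s of page writes")

Halving page-write traffic both raises throughput and reduces flash wear, which is the kind of new value to database workloads that a software-led approach is chasing.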


Pushing Forward on Backup as a Service

Storage-as-a-Service is something we’ve been covering for years at Wikibon. In a piece we wrote way back in 2006, we said:

“The storage needs of business and application owners are simple: Give me storage when I need it. Provide services appropriate for my application in the most cost-effective manner. Charge me for what I use, don’t charge me for unnecessary waste.

Service-oriented storage has the potential to meet business needs by inherently offering the ability to:

  1. Provision storage capacity and function that meets application requirements based on performance, scalability, availability, cost and security needs of the business.
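A minimal sketch of the "give me storage when I need it, charge me for what I use" idea quoted above: a request carries the application's requirements, a catalog maps them to a tier, and the monthly charge reflects capacity actually consumed. The tier names, prices, and fields here are hypothetical, not from the 2006 piece.

# Hypothetical service-catalog sketch: match application requirements to a
# storage tier and charge only for capacity actually consumed.

CATALOG = {
    # tier name      $/GB-month   min IOPS   replicated
    "performance": {"price": 0.30, "iops": 10000, "replicated": True},
    "general":     {"price": 0.10, "iops": 1000,  "replicated": True},
    "archive":     {"price": 0.02, "iops": 50,    "replicated": False},
}

def provision(required_iops, needs_replication):
    """Pick the cheapest tier that satisfies the request."""
    candidates = [
        (spec["price"], name) for name, spec in CATALOG.items()
        if spec["iops"] >= required_iops
        and (spec["replicated"] or not needs_replication)
    ]
    return min(candidates)[1] if candidates else None

def monthly_charge(tier, gb_used):
    """Charge for what is used, not for what was provisioned."""
    return CATALOG[tier]["price"] * gb_used

tier = provision(required_iops=800, needs_replication=True)
print(tier, monthly_charge(tier, gb_used=500))   # e.g. "general 50.0"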

Iomega: Simple and huge capacity sometimes overshadows huge features

When you think of EMC, you probably think of a massive storage company that builds enterprise-grade storage arrays chock full of all kinds of storage features, such as thin provisioning, inline deduplication, seven hundred and thirty different connectivity ports of varying type for every need, and any kind of replication that you might desire.

We write a lot about the changing storage needs of modern organizations as it pertains to changing ways of doing business.  It’s a sure thing that newer technologies, such as virtualization, have had a major impact on how storage systems are designed, sized, procured and configured.


7 Green Data Centers Just in Time for Spring

The power of Green, it's working! That's right: growth in the electricity consumed by the world's data centers has slowed substantially despite the rapid growth in the number and power of data centers. The conservation is directly linked to the adoption of green-friendly power and cooling tactics at many forward-thinking data centers.

A recent study by Stanford Professor Jonathan G. Koomey, PhD, conducted at the request of the New York Times, found that approximately 1.3% of the world's electricity is being consumed by data centers. The growth rate from 2000-2005, however, indicated that by 2010 data centers should have been consuming 2.2% of the world's electricity. What slowed the growth? Well, the recession actually helped (maybe the only time anyone will ever say that), but more significant was an industry-wide effort to make data centers more eco-friendly via various energy-saving techniques.
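The gap between the projected and observed shares is worth making explicit; the quick calculation below uses only the two percentages cited above.

# Compare the projected vs. observed data center share of world electricity,
# using the figures cited from the Koomey study.

projected_share = 0.022   # share projected for 2010 from the 2000-2005 trend
actual_share = 0.013      # share actually observed

shortfall = 1 - actual_share / projected_share
print(f"Data centers came in roughly {shortfall:.0%} below the share "
      f"the 2000-2005 trend predicted.")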


Data deduplication is an increasingly important aspect of storage technology

Although new products have shipped continuously over the past two decades, storage advancement has remained relatively stagnant, at least from a performance perspective.  According to PCWorld's 50 Years of Hard Drives, the first 10,000 RPM disk was released in 1996 and the first 15,000 RPM disk in 2000.  Since that time, storage companies have focused on density and capacity rather than on performance, leading to the need for an ever-increasing number of spindles (the spinning disks inside an array) to improve overall storage performance.  As a result of this eager march toward density, the primary metric by which storage has been measured has been cost per unit of capacity: dollars per gigabyte or dollars per terabyte, for example.
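The "ever-increasing number of spindles" point is easy to quantify. The sketch below estimates how many 15,000 RPM drives a purely disk-based array needs to hit a target IOPS figure and what that implies for the cost picture; the per-drive IOPS, capacity, and price are rough assumptions, not vendor figures.

# Rough illustration of buying spindles for performance rather than capacity.
# Per-drive IOPS, capacity, and price are assumptions for illustration only.

import math

TARGET_IOPS = 50000
IOPS_PER_15K_DRIVE = 180     # rule-of-thumb random IOPS for a 15K RPM drive (assumption)
DRIVE_CAPACITY_GB = 600      # typical 15K SAS drive capacity (assumption)
DRIVE_PRICE = 400.0          # dollars per drive (assumption)

drives_needed = math.ceil(TARGET_IOPS / IOPS_PER_15K_DRIVE)
capacity_tb = drives_needed * DRIVE_CAPACITY_GB / 1000
cost = drives_needed * DRIVE_PRICE

print(f"Drives needed for {TARGET_IOPS} IOPS : {drives_needed}")
print(f"Capacity purchased along the way     : {capacity_tb:.0f} TB")
print(f"Spend                                : ${cost:,.0f} "
      f"(${cost / (capacity_tb * 1000):.2f}/GB)")

The buyer ends up with far more capacity than the workload needs simply to reach the IOPS target, which is why dollars per gigabyte alone is a misleading yardstick for performance-sensitive workloads.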
