SMB CIO: Sometimes “scale” means small

Last week, I attended and participated in the Next Generation Storage Symposium, where a number of vendors and community participants discussed the future of storage and how the storage revolution will change IT as we know it.  During one segment of the discussion, the conference organizer and panel participant Stephen Foskett made what was, to me, a profound statement.

“Scale doesn’t necessarily mean ‘big’.”

Why do I think this was an important statement?

Simply put, I’ve reviewed marketing collateral from dozens of storage vendors over the years.  Many of them present their ability to scale to hundreds of petabytes and millions of IOPS as absolute proof that they can meet a workload of any size.

However, for CIOs that aren’t in the market for petabytes (PB) of storage or hundreds of thousands of IOPS, scale may mean something altogether different.  Rather than the ability to reach massive capacity and massive IOPS, scale could mean:

  • Having the ability for the environment to expand beyond what it can do today to meet tomorrow’s needs.  How easy is it to simply expand the capacity?
  • Thinking small.  Can the solution scale in a granular way?  For big gear, it’s obvious that the equipment can grow all the way into the petabytes by adding dozens of terabytes (TB) at a time, but is it possible to affordably scale the unit in much smaller increments, or to seamlessly add either flash cache or a flash tier for a performance boost down the line?  These CIOs can’t afford to add capacity in huge chunks and need the ability to think a bit smaller.

For SMB CIOs, “scale” really does mean something very different than it does for enterprise CIOs.  SMB CIOs need solutions that can scale granularly and affordably, without requiring a forklift every time capacity needs to be expanded.  Further, these CIOs don’t need hundreds of thousands of IOPS; they just need a solution that supports their current workloads and scales to meet additional demands as they arise.

At the same time, “scale” should include the ease with which an administrator can manage storage that has been upgraded.  After all, if adding storage in small, budget-friendly units results in a massive increase in complexity, then the organization loses in the long run as IT focuses more on the tech and less on the business.

What do you think?  Does “scaling scale” make sense based on the size of your environment?  Do you agree?



  • Rob Commins

    Hi Scott –

    I agree with Steve’s position on scale. No end user in the mid-range market cares whether the fastest hybrid array in Tegile’s line can do 200,000 IOPS versus Pure’s all-flash array that does 1,000,000. Just yesterday, I was at an account that is ready to spend a respectable amount of his budget on storage that will meet his requirements to sustain a consistent 8,000 IOPS and burst to 20,000. I think it applies to the large enterprise too. Before my current role at Tegile, I was with 3PAR/HP. Our V800 system could run over 500,000 IOPS and scale to almost a petabyte. This was before vendors were building all-flash arrays – this was just by wide striping volumes over 15K drives. We loved the bragging rights in marketing, but customers really didn’t care that much. They cared that they could get a predictable number of IOPS and could scale in increments of our little 4-drive “magazines” (we recommended scaling by a pair of them to ensure symmetry of IO, but that’s not important here).

    Kind of reminds me of the argument over torque vs. horsepower in cars. What most of us really like is torque that gets us to a rational freeway speed quickly. It doesn’t really matter that the car has enough horsepower to keep the vehicle going at 130 MPH; we’ll never use that.

    Good catch Stephen – very valid point.


  • Scott, great post. As you know, this came up in the Peer Incite that we did with Poulin Grain: With an IT staff of one, they needed a data protection solution that cost less than $15K and didn’t have to be revisited every couple of years.

    Scaling down also matters to big organizations with multiple branches. StorMagic (I’m on their board) recently had a major win at a very large retailer. One of the reasons they were selected was that they could deliver a high-availability VMware solution in a 2-server configuration. The competition all required three servers. The savings of even a single server per location, multiplied across more than 2,000 locations, translate into big savings in acquisition cost, installation cost, maintenance, and reduced complexity.

  • Good analogy, Rob. My son is constantly hammering on about torque.

  • John –

    Excellent point! It’s easy to forget that a lot of “enterprises” are really just dozens of SMBs smashed together into one entity 🙂

    That ability to scale granularly can be really significant in so many ways.


  • I think that “scale” means the ability to simply meet tomorrow’s needs… but it also means that if tomorrow the needs grow again you can “scale”… and if the next tomorrow they expand again you can scale again. “Scale” is more than “expand”… because many CIOs consistently misjudge the size of the expansion required (no aspersions cast… it is a very hard problem). Marketing guys quote the high end of scale to demonstrate that there is no limit.

    The real issue if you are small is that “scaling” is not free. You have to have scalable infrastructure, and there is some cost associated with that infrastructure. A small EMC DCA cluster is significantly more expensive than the underlying servers + storage + network. But it scales.
