Home

From Wikibon

Revision as of 01:50, 1 October 2009 by Wikibon (Talk | contribs)



Latest Peer Incite Research:




1. How Shopzilla Manages Insane Storage Growth (5:18)

Media:Shopzilla_mashup-short_version.mp3‎‎




Wikitip

SSDs: The Beginning of the End for RAM Caching?

http://www.velobit.com/storage-performance-blog/bid/118241/SSDs-The-Beginning-of-the-End-for-RAM-Caching

Introduction

For years, the solution to IO bottlenecks has been pretty consistent: (1) add spindles to decrease seek time and increase throughput, and (2) add as much RAM as you can so your filesystems and applications can cache hot data and avoid disk access entirely.
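The RAM half of that recipe is essentially what the operating system's page cache does: keep hot data in memory so repeat reads never touch a spindle. A minimal sketch of the idea in Python, where the block IDs, 4 KiB block size, and 1,024-entry cache are illustrative assumptions rather than anything from the article:

```python
from functools import lru_cache

# Hypothetical read path: fall through to the disk only on a cache miss.
# disk_reads counts how often we actually touch the (simulated) disk.
disk_reads = 0

@lru_cache(maxsize=1024)  # RAM-resident cache of hot blocks
def read_block(block_id: int) -> bytes:
    global disk_reads
    disk_reads += 1           # cache miss: simulate an expensive disk read
    return b"x" * 4096        # stand-in for real block data

# A skewed workload: block 7 is "hot" and is served from RAM after one miss.
for _ in range(1000):
    read_block(7)
read_block(8)

print(disk_reads)  # 2 -> only two reads ever hit the disk
```

The point of the sketch is the skew: with a hot working set, almost every read is absorbed by RAM, which is exactly why "add more RAM" has been the default answer for so long.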

These brute-force attempts to gain performance are inherently flawed and costly. The price of adding disks to an array adds up quickly, to say nothing of the investment in additional JBODs once you run out of slots in your array. And although the cost of consumer-grade memory has fallen, relying on RAM for caching in enterprise environments quickly gets expensive. Worse, once you run out of DIMM slots for all that RAM, you’re left with no way to grow your cache short of purchasing more servers and building a clustered environment.

Cheaper IOPS: SSD vs Spinning Rust

SSDs, on the other hand, are cheap when compared to RAM, and fast when compared to disk. The price advantage per random IOP on an SSD vs. traditional SAS disks is overwhelming:

[Chart: Comparing IOPS; there really is no comparison]
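As a back-of-the-envelope check on that claim, the cost-per-IOP gap can be worked out directly. The prices and IOPS figures below are illustrative assumptions, not numbers taken from the article's chart:

```python
# Assumed figures: a 15K SAS drive doing ~180 random IOPS at ~$250,
# versus a SATA SSD doing ~40,000 random IOPS at ~$400.
sas_price, sas_iops = 250.0, 180
ssd_price, ssd_iops = 400.0, 40_000

sas_cost_per_iop = sas_price / sas_iops   # ~$1.39 per random IOP
ssd_cost_per_iop = ssd_price / ssd_iops   # ~$0.01 per random IOP

print(f"SAS: ${sas_cost_per_iop:.2f}/IOP, SSD: ${ssd_cost_per_iop:.3f}/IOP")
print(f"SSD advantage: ~{sas_cost_per_iop / ssd_cost_per_iop:.0f}x cheaper per random IOP")
```

Even with generous assumptions for the SAS drive, the SSD comes out two orders of magnitude cheaper per random IOP, which is the gap the chart illustrates.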

Clearly, adding a single SSD to your environment can deliver performance improvements beyond expanding your SAS array, even with multiple spindles. But moving your primary storage away from your SAN is a headache and requires huge investments in time and resources. Furthermore, you may well lose the data protection and infrastructure you’ve grown comfortable with. Using an SSD as server-side cache, however, is almost entirely painless, and you still get the benefits of SSD performance. You can have your cake and eat it, too.

Lots of RAM: Lots of Money, Lots of Headaches

Since operating systems use free RAM for caching IO, one easy way to improve application performance is to add RAM to an existing system. If money is no object and you have unlimited DIMM slots, this technique works well; but I don’t know anyone who fits into either of those categories.

Other RAM-caching technologies exist, such as memcached, which let you put unused RAM to work and bring more servers to bear (increasing your DIMM slot count, at the cost of an entire server) to grow your RAM cache capacity. 37signals recently posted about their acquisition of 864GB of RAM to build out their “Russian-doll architecture of nested caching.” Kudos to them for their efforts to guarantee an excellent user experience, but I feel the pain of their system architects and admins. Not to mention their bank account: they report that this cache cost them $12,000!

That’s a ton of cache.
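The pattern those memcached-style deployments build on is cache-aside: check the RAM cache first, fall through to the backend on a miss, and populate the cache for next time. A minimal sketch, where a plain dict stands in for the memcached client and `fetch_from_db` is a hypothetical slow backend:

```python
cache = {}
db_hits = 0

def fetch_from_db(key):
    global db_hits
    db_hits += 1                      # simulate an expensive backend query
    return f"value-for-{key}"

def cached_get(key):
    value = cache.get(key)            # 1) try the RAM cache first
    if value is None:
        value = fetch_from_db(key)    # 2) miss: hit the backend
        cache[key] = value            # 3) populate the cache for next time
    return value

cached_get("user:42")   # miss: goes to the backend
cached_get("user:42")   # hit: served from RAM
print(db_hits)          # 1
```

The logic is simple; the operational pain the article describes comes from scaling it, because growing the cache means adding whole servers and keeping every layer of the nesting consistent.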

There’s an easier way! Use SSD as server-side cache, and avoid the Russian-doll, nested-caching headache.

The VeloBit Solution: RAM+SSD

If you already have a storage latency problem, you’ve probably started down the path of increasing RAM and building out your storage arrays. The good news is that with VeloBit, you can add SSD to your environment and still make use of the RAM and spindles you already own. Even better, since VeloBit HyperCache uses your RAM as a high-speed compressed cache, you can get even more out of the memory you already have!
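The payoff of a compressed RAM cache is capacity: storing blocks compressed lets a fixed RAM budget hold more data. A rough sketch of the effect using zlib; the 64 KiB budget and the repetitive block contents are illustrative assumptions, not a description of how HyperCache itself compresses:

```python
import zlib

RAM_BUDGET = 64 * 1024                                  # pretend cache RAM budget
block = b"SELECT * FROM orders WHERE id=%d\n" * 128     # a compressible ~4 KiB block

raw_size = len(block)
packed = zlib.compress(block)
packed_size = len(packed)

uncompressed_capacity = RAM_BUDGET // raw_size          # blocks that fit uncompressed
compressed_capacity = RAM_BUDGET // packed_size         # blocks that fit compressed

print(f"raw block: {raw_size} B, compressed: {packed_size} B")
print(f"blocks per budget: {uncompressed_capacity} raw vs {compressed_capacity} compressed")

# Reads decompress on the fly; the data round-trips exactly.
assert zlib.decompress(packed) == block
```

The ratio depends entirely on how compressible the workload is; highly repetitive data like this compresses dramatically, while already-compressed data would gain little.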

Using VeloBit HyperCache to add SSD caching to your infrastructure is painless: just install an SSD, load a driver, and you’re done. You don’t have to dedicate entire servers as cache nodes or set up a complex “Russian-doll architecture of nested caching.” In just five minutes, you can be seeing improved performance across the board without spending thousands of dollars on rapidly depreciating hardware!

Got five minutes? Try VeloBit now!

Sysadmins are busy people with difficult problems. But with such a quick installation and such powerful results, can you really afford not to take VeloBit HyperCache for a test drive? Register to try VeloBit now and start seeing improved performance immediately - without the headache of more disks, or the expense of more RAM.


Featured Case Study

Financial giant goes green

The corporate IT group of a very large, worldwide financial organization with 100,000 employees has initiated an ongoing “greening” process. This effort focuses largely on reducing energy use, both to decrease the corporation's carbon footprint and to create a net savings in operational costs over the lifetime of new, more energy-efficient equipment, including new storage systems.

read more...

Storage Professional Alerts


Featured How-To Note

Planning a Green Storage Initiative

Fluctuating energy prices have made electricity consumption a major issue within the technology community. IT is a significant consumer of energy, and IT energy costs have been rising disproportionately because of continued investment in denser IT equipment. Estimates from the EPA and others indicate that IT will account for 3% of energy consumption by 2012.

read more...
