Portal:Storage
Wikitip

SSDs: The Beginning of the End for RAM Caching?

Introduction

For years, the solution to IO bottlenecks has been fairly consistent: (1) add spindles to reduce seek latency and increase throughput, and (2) add as much RAM as possible so that filesystems and applications can cache hot data and avoid disk access entirely. These brute-force attempts to gain performance are inherently flawed and costly. The price of adding disks to an array adds up quickly, to say nothing of the investment in additional JBODs once you run out of slots in the array. And although the cost of consumer-grade memory has fallen, relying on RAM for caching in enterprise environments gets expensive quickly. Worse, once you run out of DIMM slots for all that RAM, you are left with no way to grow your cache short of purchasing more servers and building a clustered environment.

Cheaper IOPS: SSD vs. Spinning Rust

SSDs, on the other hand, are cheap compared to RAM and fast compared to disk. The price advantage per random IOP of an SSD over traditional SAS disks is overwhelming; a rough back-of-the-envelope sketch follows this section.

[Chart: Comparing IOPS; there really is no comparison]

Clearly, adding a single SSD to your environment can deliver performance improvements beyond what expanding your SAS array can, even with multiple spindles. But moving your primary storage away from your SAN is a headache and requires a huge investment of time and resources. Furthermore, you may well lose the data protection and infrastructure you have grown comfortable with. Using an SSD as server-side cache, however, is almost entirely painless, and you still get the benefits of SSD performance. You can have your cake and eat it, too.
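To make the price-per-IOPS argument concrete, here is a minimal Python sketch of the arithmetic. The drive prices and IOPS ratings in it are illustrative assumptions chosen only to show the calculation; real figures depend on capacity, interface, and workload.

```python
# Back-of-the-envelope dollars-per-IOPS comparison.
# NOTE: every price and IOPS figure below is an illustrative assumption,
# not a vendor quote or benchmark result; substitute your own numbers.

drives = {
    # name: (street_price_usd, sustained_random_iops) -- hypothetical values
    "15K RPM SAS HDD":     (300.0,   200),    # ~200 random IOPS per spindle (assumed)
    "Enterprise SATA SSD": (500.0, 40000),    # ~40,000 random IOPS (assumed)
}

for name, (price, iops) in drives.items():
    cost_per_iops = price / iops
    print(f"{name}: ${price:,.0f} for ~{iops:,} IOPS "
          f"=> ${cost_per_iops:.4f} per IOPS")

# With these assumptions the SAS spindle costs about $1.50 per random IOP,
# while the SSD costs about $0.0125 -- a gap of roughly two orders of
# magnitude, which is the point the chart above illustrates.
```

Even if the assumed numbers are off by a factor of two in either direction, the dollars-per-IOPS gap remains enormous, which is why a single SSD can outrun a shelf of spindles for random IO.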
Lots of RAM: Lots of Money, Lots of Headaches

Since operating systems use free RAM for caching IO, one easy way to improve application performance is to add RAM to an existing system. If money is no object and you have unlimited DIMM slots, this technique works well; but I don't know anyone who fits into either of those categories. Other RAM-caching technologies, such as memcached, let you put unused RAM to work and bring more servers to bear (increasing your DIMM slot count at the cost of an entire server) in order to grow your RAM cache capacity. 37signals recently posted about their acquisition of 864GB of RAM to build out their "Russian-doll architecture of nested caching." Kudos to them for their efforts to guarantee an excellent user experience, but I feel the pain of their system architects and admins. Not to mention their bank account: they report that this cache cost them $12,000! That's a ton of cache. There's an easier way: use SSD as server-side cache and avoid the Russian-doll, nested-caching headache.

The VeloBit Solution: RAM+SSD

If you already have a storage latency problem, you have probably already started down the path of adding RAM and building out your storage arrays. The good news is that with VeloBit, you can add SSD to your environment and still make use of the RAM and spindles you already own. Even better, since VeloBit HyperCache uses your RAM as a high-speed compressed cache, you get even more out of the memory you already have. Adding SSD caching to your infrastructure with VeloBit HyperCache is painless: just install an SSD, load a driver, and you're done. You don't have to dedicate entire servers as cache nodes or set up a complex "Russian-doll architecture of nested caching." In just five minutes, you can see improved performance across the board without spending thousands of dollars on rapidly depreciating hardware.

Got five minutes? Try VeloBit now!

Sysadmins are busy people with difficult problems. But with such a quick installation and such powerful results, can you really afford not to take VeloBit HyperCache for a test drive? Register to try VeloBit now and start seeing improved performance immediately, without the headache of more disks or the expense of more RAM.
Featured Case Study

Virtualization Energizes Cal State University

John Charles is the CIO of California State University, East Bay (CSUEB), and Rich Avila is Director of Server & Network Operations. In late 2007 they were both looking down the barrel of a gun. The data center was drawing 67 kVA against a maximum plant capacity of 75 kVA, and PG&E had informed them that no more power could be delivered. They would be out of power in less than six months. A new data center was planned, but it would not be available for two years.
Featured How-To Note
Storage Virtualization Design and Deployment

A main impediment to storage virtualization is the lack of support for multiple storage vendors (heterogeneous environments) within available virtualization technologies. This inhibits deployment across a data center. The only practical approaches are either to implement a single-vendor solution across the whole data center (practical only for small and some medium-size data centers) or to implement virtualization in one or more of the largest storage pools within a data center.