Flash runs past read cache
An IT industry analyst article published by SearchDataCenter.
Just because you can add a cache doesn’t mean you should. It’s possible to deploy the wrong kind, so weigh your options before implementing a memory-based cache for a storage boost.
Can you ever have too much cache?
[Cache is the new black…] As a performance optimizer, cache has never gone out of style, but with today’s affordable flash and cheap memory, it is now worn by every data center device.
Fundamentally, a classic read cache helps avoid long repetitive trips through a tough algorithm or down a relatively long input/output (I/O) channel. If a system does something tedious once, it temporarily stores the result in a read cache in case it is requested again.
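A minimal sketch of that idea, in Python, assuming a hypothetical slow_fetch() back end and a plain in-memory dictionary standing in for the read cache:

```python
import time

# Hypothetical slow back end: stands in for a long I/O trip or an expensive algorithm.
def slow_fetch(key):
    time.sleep(0.5)          # simulate the tedious work
    return f"data-for-{key}"

read_cache = {}              # in-memory read cache

def cached_fetch(key):
    # Serve repeat requests from the cache instead of repeating the slow trip.
    if key not in read_cache:
        read_cache[key] = slow_fetch(key)   # do the tedious work once, keep the result
    return read_cache[key]

cached_fetch("block-42")     # first request pays the full cost
cached_fetch("block-42")     # repeat request is answered from the cache
```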
Duplicate requests don’t need to come from the same client. For example, in a large virtual desktop infrastructure (VDI) scenario, hundreds of virtual desktops might boot from the same master operating system image. With a cache, every user gets a performance boost, and the downstream system is spared a lot of duplicate I/O work.
The problem with using old-school, memory-based cache for writes is that if you lose power, you lose the cache. Thus, unless it is protected by battery backup, memory is used only as a read cache. Writes are set up to “write through”: new data must persist somewhere safe on the back end before the application continues.
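A minimal sketch of that write-through discipline, again in Python, assuming a hypothetical backing_store dictionary standing in for durable back-end storage:

```python
class WriteThroughCache:
    """Toy write-through cache: writes persist to the back end before returning."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # stands in for safe, durable back-end storage
        self.cache = {}                      # volatile in-memory copy

    def write(self, key, value):
        # Write through: commit to the safe back end first...
        self.backing_store[key] = value
        # ...then update the volatile cache, so a power loss never loses acknowledged data.
        self.cache[key] = value

    def read(self, key):
        # Reads are served from memory when possible, otherwise fetched and cached.
        if key not in self.cache:
            self.cache[key] = self.backing_store[key]
        return self.cache[key]

store = {}                                   # hypothetical durable store
c = WriteThroughCache(store)
c.write("block-7", b"new data")              # returns only after the store holds the data
```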
Flash is nonvolatile random access memory (NVRAM) and is used as cache or as a tier of storage directly…
…(read the complete as-published article there)