(Excerpt from original post on the Taneja Group News Blog)
As both a vendor product marketer and now an analyst, I’ve often been asked to help produce an “official” ROI (or full TCO) calculator for some product. I used to love pulling out Excel and chaining together pages of cascading formulas. But I’m getting older and wiser. Now I see that ROI calculators are, by and large, just big rat holes. In fact, I was asked again this week and, instead of quickly replying “yes, if you have enough money” and spinning out some rehashed spreadsheet (like some other IT analyst firms), I spent some time thinking about why the time and money spent producing detailed ROI calculators is usually a wasted investment, if not a wasted opportunity (to do better).
…(read the full post)
An IT industry analyst article published by SearchCloudStorage.
Customers should evaluate the cost and effectiveness of a hybrid cloud storage implementation when selecting a provider to store their valuable nearline data.
Security, governance, cost, bandwidth, migration, access control and provider stability once impeded the journey to the cloud, but today many companies’ on-premises storage arrays tier directly to public cloud storage.
Public cloud storage has graduated into an elastic utility that everyone can now use profitably. But there are some differences between the storage services various providers offer, and organizations should shop around before migrating terabytes of corporate data into the ether with a hybrid cloud implementation.
Cloud storage has evolved to the point where companies can use it for business, not just as a remote backup archive. Network latencies still prevent I/O-hungry data center workloads from using the cloud as primary storage, but a hybrid cloud implementation that can also move workloads to the cloud using virtual machines and containers is more the norm than the exception. Still, many data centers today have yet to start using the cloud for purposes beyond cold storage.
To nail down good use cases, look at the various tiers of storage each cloud service provider offers. Companies should consider each tier as a possible plug-in resource in their architecture. Businesses should ask service providers how simple — and costly — it is to transfer data between cloud storage services directly, as this can make it easier to shift data around should needs change. Be sure to understand the costs involved in both storing data over time and accessing it when necessary. There may be access costs, but they might be within expected budgets. And pay attention to access latencies: Backups that take hours to recall may not provide satisfactory levels of business continuity.
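The storing-versus-accessing trade-off described above can be sketched as a simple cost model. The tier names and per-gigabyte rates below are hypothetical placeholders, not any provider's published pricing; the point is only that the cheapest tier depends on how often data is recalled:

```python
# Hypothetical tier pricing in $/GB-month -- illustrative only,
# not any real provider's rate card.
TIERS = {
    "hot":     {"storage_per_gb": 0.023, "retrieval_per_gb": 0.00},
    "cool":    {"storage_per_gb": 0.010, "retrieval_per_gb": 0.01},
    "archive": {"storage_per_gb": 0.004, "retrieval_per_gb": 0.09},
}

def monthly_cost(tier: str, stored_gb: float, retrieved_gb: float) -> float:
    """Monthly cost: at-rest storage charge plus per-GB retrieval fees."""
    p = TIERS[tier]
    return stored_gb * p["storage_per_gb"] + retrieved_gb * p["retrieval_per_gb"]

# 100 TB stored; compare light vs. heavy monthly recall.
stored = 100_000  # GB
for retrieved in (100, 20_000):
    costs = {t: monthly_cost(t, stored, retrieved) for t in TIERS}
    cheapest = min(costs, key=costs.get)
    print(f"retrieve {retrieved:>6} GB/mo -> cheapest tier: {cheapest}")
```

With these made-up rates, the archive tier wins for rarely recalled data, but heavy monthly retrieval flips the answer to the mid-priced tier; a buyer's break-even point shifts with every provider price change, which is why the access-cost question belongs in the evaluation up front.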
Service providers have been constantly dropping prices, a welcome trend for consumers. This means organizations considering a hybrid cloud implementation should position themselves to take advantage of the lowest available costs when all other factors are equal. Migrating massive amounts of data from one provider to another currently isn’t easy, nor is it necessarily cheap. The result is a degree of friction-based lock-in, which is something to be aware of…(read the complete as-published article there)
An IT industry analyst article published by SearchSolidStateStorage.
Sometimes comparing the costs of flash arrays is an apples-to-oranges affair — interesting, but not very helpful.
We’re often told by hybrid and all-flash array vendors that their particular total cost of ownership (TCO) is effectively lower than the other guy’s. We’ve even heard vendors claim that, by taking certain particulars into account, the per-gigabyte price of their flash solution is lower than that of spinning disk. Individually, the arguments sound compelling, but stack them side by side and you quickly run into apples-and-oranges issues.
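Those per-gigabyte claims usually reduce to one formula: raw price divided by usable capacity and a claimed data-reduction ratio. The sketch below uses entirely hypothetical prices, usable fractions and reduction ratios to show why the "flash cheaper than disk" conclusion hinges on the assumptions a vendor plugs in:

```python
def effective_cost_per_gb(raw_cost_per_gb: float,
                          usable_fraction: float,
                          reduction_ratio: float) -> float:
    """Effective $/GB: raw media price adjusted for capacity lost to
    RAID/formatting overhead and the claimed dedupe/compression ratio."""
    return raw_cost_per_gb / (usable_fraction * reduction_ratio)

# Hypothetical inputs: flash claiming 5:1 reduction vs. disk with none.
flash = effective_cost_per_gb(0.50, 0.80, 5.0)  # = 0.125 effective $/GB
disk  = effective_cost_per_gb(0.03, 0.70, 1.0)  # ~ 0.043 effective $/GB
print(f"flash: ${flash:.3f}/GB  disk: ${disk:.3f}/GB")
```

Under these particular assumptions disk still comes out ahead, yet a vendor assuming a 15:1 reduction ratio would reach the opposite conclusion from the same formula. That sensitivity to half-hidden inputs is exactly the apples-and-oranges problem.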
Storage has many factors that should be profiled and evaluated, such as IOPS, latency, bandwidth, protection, reliability, consistency and so on. These must match up with client workloads, each with its own read/write mix, burstiness, data sizes, metadata overhead and quality of service/service-level agreement requirements. Standard benchmarks may be interesting, but the best way to evaluate storage is to test it under your particular production workloads; a sophisticated load-generation and modeling tool like that from Load DynamiX can help with that process.
But as analysts, when we try to make industry-level evaluations hoping to compare apples to apples, we run into a host of half-hidden factors we’d like to see made explicitly transparent if not standardized across the industry. Let’s take a closer look…
…(read the complete as-published article there)