Cloudy Object Storage Mints Developer Currency

Having just been to VMworld 2017, and now getting pre-briefs for the upcoming Strata Data NY, I’ve noticed a few hot trends aiming to help foster cross-cloud, multi-cloud, hybrid operations. Among these are

  1. A focus on centralized management for both on-prem and cloud infrastructure
  2. The accelerating pace of systems Management as a Service (MaaS!?) offerings
  3. Container management “distributions” with enterprise features
  4. And an increasing demand for fluid, cross-hybrid infrastructure storage

We all know about AWS S3, the current lingua franca of object storage for web-scale application development and the de facto REST-based “standard” for object storage APIs. Only it’s not actually a standard. If you want to write apps for deployment anywhere, on any cloud, on-premises server, or even your laptop, you might not appreciate being locked into coding directly to the ultimately proprietary S3 API (or waiting for your big enterprise storage solutions to support it).

Which is where the very cool object (“blob”) storage solution from Minio comes into play. Minio is effectively a software-defined object storage layer that transforms just about any underlying storage into a consistent, developer-friendly (and open source) object storage interface that is still S3-compatible. This means you can code to Minio, and actually deploy Minio WITH your application if needed, onto any cloud, storage infrastructure, or local volume. It also adds all kinds of enterprise features, such as advanced reliability with erasure coding, which will no doubt please those IT/devops folks who want to manage a consistent storage footprint no matter where the application deploys.

At first this might all sound complicated, but I installed and ran Minio locally with just two commands in less than a minute.

$ brew install minio/stable/minio

$ minio server --address localhost:9500 /users/mike/minio
Minio Install and Launch

Done. We can immediately code against Minio’s universal REST API at that URL and port. And of course, point a browser at it and you get a nice browser-based interface; that alone is all you need if you just want something like a photo “dropbox” repository of your own.
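Because the interface is S3-compatible, existing S3 tooling should just work against it. As a minimal sketch, here’s the AWS CLI pointed at the local Minio endpoint (assuming you have the CLI installed; the keys come from the server’s startup output, and the bucket and file names here are made up):

$ export AWS_ACCESS_KEY_ID=<ACCESS KEY>
$ export AWS_SECRET_ACCESS_KEY=<SECRET KEY>

# Point any S3 client at the local Minio endpoint instead of AWS
$ aws --endpoint-url http://localhost:9500 s3 mb s3://photos
$ aws --endpoint-url http://localhost:9500 s3 cp vacation.jpg s3://photos/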

But wait! One more step gets you a lot of really useful functionality. Install the Minio Client (mc) and configure it with your Minio server’s keys. Now you have a powerful, consistent command-line interface to manage and muck with both object and file stores.

$ brew install minio/stable/mc

# This line (and keys) is provided as output by the minio server start above

$ mc config host add myminio http://localhost:9500 <ACCESS KEY> <SECRET KEY>
Minio Client

The “mc” interface supersets a whole bunch of familiar UNIX command-line utilities. It’s likely that many of your existing administrative scripts can be trivially updated to work across clouds, and across file and object stores alike.

$ mc

NAME:
  mc - Minio Client for cloud storage and filesystems.

USAGE:
  mc [FLAGS] COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]

COMMANDS:
  ls       List files and folders.
  mb       Make a bucket or a folder.
  cat      Display file and object contents.
  pipe     Redirect STDIN to an object or file or STDOUT.
  share    Generate URL for sharing.
  cp       Copy files and objects.
  mirror   Mirror buckets and folders.
  diff     Show differences between two folders or buckets.
  rm       Remove files and objects.
  events   Manage object notifications.
  watch    Watch for files and objects events.
  policy   Manage anonymous access to objects.
  admin    Manage Minio servers
  session  Manage saved sessions for cp and mirror commands.
  config   Manage mc configuration file.
  update   Check for a new software update.
  version  Print version info.
Minio Client Help Text
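To give a flavor of that consistency, here’s a hypothetical session using the myminio alias configured above (the bucket and file names are made up):

# Make a bucket, copy a local file into it, and list the contents
$ mc mb myminio/photos
$ mc cp ~/Pictures/beach.jpg myminio/photos/
$ mc ls myminio/photos

# Generate a time-limited URL for sharing the object
$ mc share download myminio/photos/beach.jpg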

One can easily see that Minio delivers a robust object storage service that insulates both developers and resource admins (devops) from the underlying infrastructure and architecture.

Minio can be pointed at, or deployed on, many different cloud providers to enable easier cross-cloud and multi-cloud migration. And it’s locally efficient: when in AWS, it uses S3; when on Azure, it uses native Azure Blob storage services.
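That trick is delivered by Minio’s gateway mode. A minimal sketch for the Azure case (the account name and key placeholders here stand in for real credentials; check Minio’s documentation for the current syntax):

# Serve the same S3-compatible API, backed by native Azure Blob storage
$ export MINIO_ACCESS_KEY=<AZURE STORAGE ACCOUNT NAME>
$ export MINIO_SECRET_KEY=<AZURE STORAGE ACCOUNT KEY>
$ minio gateway azure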

Folks looking at taking advantage of hybridizing operations with something like VMware on AWS might also want to deploy Minio over both their local storage solutions (SANs, NAS, even vSAN et al.) AND on the cloud side (AWS S3). This solves a big challenge when transitioning to hybrid ops: getting consistent, enterprise-grade storage services everywhere.
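A hypothetical version of that setup: run Minio erasure-coded across several locally mounted volumes on-prem, and use mc to keep a bucket synchronized with S3 (this assumes an s3 alias has been added for your AWS account alongside the myminio alias above):

# Passing four or more drives enables Minio's erasure coding automatically
$ minio server /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4

# Mirror the local bucket up to AWS S3 for a consistent hybrid footprint
$ mc mirror myminio/photos s3/my-backup-bucket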

There is a lot more to be said about Minio’s advanced feature set and exciting roadmap, but you can read all about those on the Minio website. Have fun!

New approaches to scalable storage

An IT industry analyst article published by SearchDataCenter.

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.


Unrelenting data growth has spawned new scalable storage designs.

We’ve all read the storage reports about overwhelming data growth. It’s certainly a big and growing challenge that deserves attention, but I’ll skip the part where I scare you into thinking we’re about to be buried under a deluge of data. We tend to store about as much data as we can, no matter how much data there might be. There has always been more data than we could keep. That’s why even the earliest data center storage systems implemented quotas, archives and data summarization.

The new challenge today is effectively mining business value out of the huge amount of newly useful data, with even more coming fast in all areas of IT storage: block, file, object, and big data. If you want to stay competitive, you’ll likely have to tackle some data storage scaling projects soon. Newer approaches to large-scale storage can help.
Scaling storage out into space

The first thing to consider is the difference between scale-up and scale-out approaches. Traditional storage systems are based on the scale-up principle, in which you incrementally grow storage capacity by simply adding more disks under a relatively fixed number of storage controllers (or a small cluster of storage controllers, with one to four high-availability pairs being common). If you exceed the system’s capacity (or performance drops off), you add another system alongside it.

Scale-up storage approaches are still relevant, especially in flash-first and high-end hybrid platforms, where latency and IOPS performance are important. A large amount of dense flash can serve millions of IOPS from a small footprint. Still, larger-capacity scale-up deployments can create difficult challenges — rolling out multiple scale-up systems tends to fragment the storage space, creates a management burden and requires uneven CapEx investment.

In response, many scalable storage designs have taken a scale-out approach. In scale-out designs, capacity and performance throughput grow incrementally by adding more storage nodes to a networked system cluster. Scale-up designs are often interpreted as having limited vertical growth, whereas scale-out designs imply relatively unconstrained horizontal growth. Each node can usually service client I/O requests, and depending on how data is spread and replicated internally, each node may access any data in the cluster. As a single cluster can grow to very large scale, system management remains unified (as does the namespace in most cases). This gives scale-out designs a smoother CapEx growth path and a more linear overall performance curve.

Another trend that helps address storage scalability is a shift from hierarchical file systems towards object storage…

…(read the complete as-published article there)

Commodity storage has its place, but an all-flash architecture thrills

An IT industry analyst article published by SearchSolidStateStorage.

Some IT folks are trying to leverage commodity servers and disks with software-implemented storage services. But others want an all-flash architecture.


Every day we hear of budget-savvy IT folks attempting to leverage commodity servers and disks by layering on software-implemented storage services. But at the same time, and at some of the same data centers, highly optimized flash-fueled acceleration technologies are racing in with competitive performance and compelling price comparisons. Architecting IT infrastructure to balance cost against capability has never been easy, but the differences and tradeoffs between these storage approaches are reaching extremes. It’s easy to wonder: Is storage going commodity or custom?

One of the drivers for these trends has been with us since the beginning of computing: Moore’s famous law is still delivering ever-increasing CPU power. Today, we see the current glut of CPU muscle being recovered and applied to power up increasingly virtualized and software-implemented capabilities. Last year, for example, the venerable EMC VNX line touted a multi-year effort toward making its controllers operate “multi-core,” which is to say they’re now able to take advantage of plentiful CPU power with new software-based features. This trend also shows up in the current vendor race to roll out deduplication. Even if software-based dedupe requires significant processing, cheap extra compute is enabling wider adoption.

In cloud and object storage, economics trump absolute performance with capacity-oriented and software-implemented architectures popping up everywhere. Still, competitive latency matters for many workloads. When performance is one of the top requirements, optimized solutions that leverage specialized firmware and hardware have an engineered advantage.

For maximum performance, storage architects are shifting rapidly toward enterprise-featured solid-state solutions. Among vendors, the race is on to build and offer the best all-flash solution…

…(read the complete as-published article there)

Are You Making Money With Your Object Storage?

An IT industry analyst article published by Infostor.

by Mike Matchett, Sr. Analyst and Consultant
Object storage has long been pigeonholed as a necessary overhead expense for long-term archive storage—a data purgatory one step before tape or deletion. We have seen many IT shops view object storage as something exotic they have to implement to meet government regulations rather than as a competitive strategic asset that can help their businesses make money.

Normally, when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways. For example, they might accelerate the performance of market competitive applications or efficiently consolidate data centers. Maybe they are even starting to analyze big data to find better ways to run the business.

These kinds of “money-making” initiatives have been mainly associated with file and block types of storage—the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects.

But that’s about to change.

If you’ve intentionally dismissed or just overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms and developing new revenue streams.

…(read the complete as-published article there)

EMC Atmos 2.1 Accelerates Cloud Value

(Excerpt from original post on the Taneja Group News Blog)

Object storage is certainly a hot topic, and it’s rising above its old data retention “jail” perception. And for good reasons. We think that, due to the build-out and adoption of cloud storage, increasingly mobile users and distributed apps, the benefits of active archiving, and the retention of ever bigger data sets, having a solid object storage strategy becomes significantly important going into 2013.

EMC is aiming to be a key part of that object strategy – today releasing Atmos 2.1, making wider adoption not only possible but more profitable for both in-house cloud builders and service providers. There are some performance improvements under the hood (for larger file reads/writes), and significant increases in manageability intended to support ever larger deployments. But we think the cloud accelerators that enable better integration with organizational needs are going to provide the biggest bang. This latest version comes with expanded browser integration, an enhanced GeoDrive, more developer tools, and even some support for transitioning traditional apps to the cloud (bulk ingest, CAS metadata). The theme is definitely to broaden integration and hasten the adoption of cloud storage, gaining both cloud economics and enhanced productivity.

Atmos is already a great cloud object storage solution for web developers, but it now also provides an API for Android, which is fast taking over the mobile marketplace. For developers in general, Atmos 2.1 can now provide anonymous URLs, which means developers can easily build one-time upload/download features into their apps (this is key for many collaboration use cases: picture or image uploads, external file sharing, content distribution and other schemes). Atmos 2.1 also supports “named objects,” which may ease certain kinds of distributed development challenges.

GeoDrive, a free add-on for licensed Atmos customers, provides a secure, cached, drag-and-drop cloud drive interface. GeoDrive really makes collaboration easy by eliminating the need to set up complicated shares or mount points. Now with GeoDrive 1.1, there are a bunch of enhancements, including built-in data encryption and a CIFS cloud gateway, so you don’t always need client-side software. Shareable URLs bring more collaboration into the picture to improve the private “dropbox” use case. And collaboration is truly going global, with GeoDrive now available in 10 languages (Atmos itself is already highly suited for distributed global cloud storage).

Perhaps most important is the new native (Amazon) S3 API support. By enabling customers to migrate S3 apps to Atmos (and vice versa), Atmos cloud providers can now offer hybrid and mixed solution alternatives without the threat of vendor lock-in. Enterprises holding back because of lock-in fears (or that were tied into S3) can now consider the various SLAs, services, and price options presented by Atmos-powered offerings.

…(read the full post)