Open source strategies bring benefits, but don’t rush in

An IT industry analyst article published by SearchDataCenter.


Before your organization can reap the benefits of open source, it’s important to understand your options and map out a plan that will guarantee success.

Mike Matchett
Small World Big Data

It’s ironic that we spend a lot of money on proprietary databases, business applications and structured business intelligence platforms for “little” data, but we turn to open source platforms for big data analytics. Why not just scale down free, open source big data systems to handle the little data too?

Of course, there are a number of real reasons, including minimizing risk and meeting enterprise-class data management requirements. Cost probably isn’t even the first criterion for most enterprises. And even on cost, open source doesn’t mean free in any real economic sense: open source strategies require cutting-edge expertise, professional support and often a buy-up into proprietary enterprise-class feature sets. The truth is, open source platforms don’t necessarily maximize ROI.

Still, open source strategies create attractive opportunities for businesses that want to evolve their aging applications. Many IT investing strategies now include a core principle preferring open source for new applications. In fact, we’d claim open source now represents the fastest growing segment of enterprise IT initiatives. From a theoretical point of view, when it comes to developing new ways of doing business, new types of agile and web-scale applications, and new approaches to analyze today’s ever-bigger data, open source presents innovative opportunities to compete and even disrupt the competition.

But this is much easier said than done. We’ve seen many enterprises fumble with aggressive open source strategies, eventually reverting to tried-and-true proprietary software stacks. So if enterprises aren’t adopting open source because it’s cheaper, and it often lacks enterprise-class features, then why has it become such a popular strategy?

Adopting open source strategies goes hand in hand with an ability to attract top technical talent, Rajnish Verma said at the DataWorks Summit in June, when he was president of big data software vendor Hortonworks. Smart people want to work in an open source environment so they can develop in-demand skills, establish broader relationships outside a single company and potentially contribute back to a larger community — all part of building a personal brand, I suppose.

In other words, organizations adopt open source because that’s what today’s prospective employees want to work on…(read the complete as-published article there)

Cloudy Object Storage Mints Developer Currency

Having just been to VMworld 2017, and with pre-briefs rolling in for the upcoming Strata Data NY, I’ve noticed a few hot trends aiming to help foster cross-cloud, multi-cloud, hybrid operations. Among these are:

  1. A focus on centralized management for both on-prem and cloud infrastructure
  2. The accelerating pace of systems Management as a Service (MaaS!?) offerings
  3. Container management “distributions” with enterprise features
  4. An increasing demand for fluid, cross-hybrid infrastructure storage

We all know about AWS S3, the current lingua franca of object storage for web-scale application development and the de facto REST-based “standard” for object storage APIs. Only it’s not actually a standard. If you want to write apps that can deploy anywhere (on any cloud, on an on-premises server, or even on your laptop), you might not appreciate being locked into coding directly to the ultimately proprietary S3 API (or waiting for your big enterprise storage solutions to support it).

Which is where the very cool object (“blob”) storage solution from Minio comes into play. Minio is effectively a software-defined object storage layer that, on the back end, transforms just about any underlying storage into a consistent, developer-friendly (and open source) object storage interface on the front end, still S3-compatible. This means you can code to Minio — and actually deploy Minio WITH your application if needed — onto any cloud, storage infrastructure or local volume. And it adds enterprise features like advanced reliability with erasure coding, which will no doubt please the IT/DevOps folks who want to manage a consistent storage footprint no matter where the application deploys.

This might all sound complicated at first, but I installed and ran Minio locally with just two commands in less than a minute.

$ brew install minio/stable/minio

$ minio server --address localhost:9500 /users/mike/minio
Minio Install and Launch

Done. We can immediately code against Minio’s universal REST API at that URL and port. And of course, point a browser at it and you get a nice browser-based interface; that alone is all you need if you just want something like a photo “dropbox” repository of your own.
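Minio itself doesn’t need any AWS tooling, but because the API is S3-compatible, even the stock AWS CLI makes a quick smoke test. This is just a sketch, assuming you have the AWS CLI installed; the bucket and file names are illustrative, and the keys are the ones printed at server startup:

# Point the standard AWS CLI at the local Minio endpoint
$ aws configure set aws_access_key_id <ACCESS KEY>
$ aws configure set aws_secret_access_key <SECRET KEY>

# Any S3-style call now works against localhost:9500
$ aws --endpoint-url http://localhost:9500 s3 mb s3://photos
$ aws --endpoint-url http://localhost:9500 s3 cp vacation.jpg s3://photos/
$ aws --endpoint-url http://localhost:9500 s3 ls s3://photos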

But wait! One more step gets you a lot of really useful functionality. Install the Minio Client (mc) and configure it with your Minio server keys. Now you have a powerful, consistent command-line interface to manage and muck with both object and file stores.

$ brew install minio/stable/mc

# This line (and keys) is provided as output by the minio server start above

$ mc config host add myminio http://localhost:9500 <ACCESS KEY> <SECRET KEY>
Minio Client

The “mc” interface supersets a whole bunch of familiar UNIX command-line utilities. It’s likely many of your existing administrative scripts can be trivially updated to work across hybrid clouds and across file and object stores alike; see the usage examples after the help listing below.

$ mc

NAME:
  mc - Minio Client for cloud storage and filesystems.

USAGE:
  mc [FLAGS] COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]

COMMANDS:
  ls       List files and folders.
  mb       Make a bucket or a folder.
  cat      Display file and object contents.
  pipe     Redirect STDIN to an object or file or STDOUT.
  share    Generate URL for sharing.
  cp       Copy files and objects.
  mirror   Mirror buckets and folders.
  diff     Show differences between two folders or buckets.
  rm       Remove files and objects.
  events   Manage object notifications.
  watch    Watch for files and objects events.
  policy   Manage anonymous access to objects.
  admin    Manage Minio servers
  session  Manage saved sessions for cp and mirror commands.
  config   Manage mc configuration file.
  update   Check for a new software update.
  version  Print version info.
Minio Client Help Text
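
For example, the everyday verbs work just as you’d hope (the bucket and file names here are purely illustrative):

# Make a bucket, copy in a local file and list it back
$ mc mb myminio/photos
$ mc cp vacation.jpg myminio/photos/
$ mc ls myminio/photos

# The same verbs span local folders and remote buckets
$ mc diff ~/Pictures myminio/photos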

One can easily see that Minio delivers a nice, robust object storage service that insulates both developers and resource admins (DevOps) from the underlying infrastructure and architecture.

Minio can be pointed at, or deployed on, many different cloud providers to enable easier cross-cloud and multi-cloud migration. And it’s locally efficient: when running in AWS it uses S3, and when on Azure it uses native Azure Blob storage services.
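As a sketch of what that looks like, here is Minio’s gateway mode fronting Azure Blob storage (the account name and key are placeholders; check the Minio site for the current syntax):

# Run Minio as an S3-compatible gateway in front of Azure Blob storage
$ export MINIO_ACCESS_KEY=<AZURE STORAGE ACCOUNT NAME>
$ export MINIO_SECRET_KEY=<AZURE STORAGE ACCOUNT KEY>
$ minio gateway azure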

Folks looking at taking advantage of hybridizing operations with something like VMware on AWS might also want to deploy Minio over both their local storage solutions (SANs, NAS, even vSAN et al.) AND on the cloud side (AWS S3). This solves a big challenge when transitioning to hybrid ops: getting consistent, enterprise-grade storage services everywhere.
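And with the mirror command from the mc toolbox above, keeping the two sides in sync is a one-liner (the onprem and awscloud host aliases are hypothetical, configured the same way as myminio earlier):

# Mirror an on-premises bucket to its cloud-side twin
$ mc mirror onprem/appdata awscloud/appdata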

There is a lot more to be said about Minio’s advanced feature set and exciting roadmap, but you can read all about those on the Minio website!  Have fun!

 

What’s a Multi-cloud, Really? Some Insider Notes from VMworld 2017

(Excerpt from original post on the Taneja Group News Blog)

As comfortable 65-to-70-degree weather blankets New England near the end of summer, flying into Las Vegas for VMworld at 110 degrees seemed like dropping into hell. The last time I was in that kind of heat, I was stepping off a C-130 into the Desert Shield/Desert Storm theater of operations. At least here, as everyone still able to breathe immediately says, “at least it’s a dry heat.”

…(read the full post)

The power and benefits of encouraging IT disruption

An IT industry analyst article published by SearchStorage.


IT can’t remain a reactive cost center and cheerful help desk, but must become a competitive, cutthroat service provider and powerful champion of emerging disruptive technology.

Mike Matchett
Small World Big Data

I have a theory that true IT disruption happens when something nonlinear occurs to change the traditional expectation or baseline for a key operating capability. In storage, this could be related to capacity, performance or value. We’ve seen great market disruption — not to mention data center evolution — with the rise of scale-out vs. scale-up storage architectures, flash vs. disk and big data analytics vs. data warehouse business intelligence, for example. These disruptions have all brought orders of magnitude improvement, enabling many new ways to distill more value out of data.

I’m not saying we leave disrupted technologies completely behind, but old top-tier technologies can quickly drop down our pyramids of perceived value. Some older techs do disappear (think floppy drives and CRT monitors). But usually, they get subsumed as a lower tier inside a larger, newer umbrella and relegated to narrower, less prestigious use cases.

Emerging disruptive storage technologies include nonvolatile memory express server-side flash and persistent memory; in-storage data processing, combining software-defined storage, containerization and in-stream processing; global file systems and databases with global consistency, security, protection and access features; pervasive machine learning; and truly distributed internet of things data processing.

If you know it’s coming, is it really disruptive?

You might think it’s been the monstrous volume of data, which is certainly growing nonlinearly, that’s given birth to these disruptive technologies. That’s the story told by all the vendor presentations I’ve seen in the last few years. I’ve even presented that line of thinking myself.

Successful disruption requires that early adopters be willing to take some risks.

The problem with that interpretation is that it makes the storage industry look reactive instead of proactive. It also makes vendors look like heroes, or at least paragons of product management, figuring out exactly what people need and delivering it just in time to save IT from certain disaster.

If we’re honest, the truth is more likely that newer generations of storage technologies let us retain massively growing volumes of data, access that data faster and increase the ROI for keeping and analyzing ever bigger data sets. In other words, it’s new technology that enables more data, not the fact of more data causing new technology. Creating disruptive technology may be as much a matter of luck as intentional design.

Disruptive technologies arise out of vendors serving our competitive natures…(read the complete as-published article there)

Persistent data storage in containerized environments

An IT industry analyst article published by SearchStorage.


The most significant challenge to the rise of containerized applications is quickly and easily providing enterprise-class persistent storage for containers.

Mike Matchett

The pace of change in IT is staggering. Fast growing data, cloud-scale processing and millions of new internet of things devices are driving us to find more efficient, reliable and scalable ways to keep up. Traditional application architectures are reaching their limits, and we’re scrambling to evaluate the best new approaches for development and deployment. Fortunately, the hottest prospect — containerization — promises to address many, if not all, of these otherwise overwhelming challenges.

In containerized application design, each individual container hosts an isolatable, and separately scalable, processing component of a larger application web of containers. Unlike monolithic application processes of the past, large, containerized applications can consist of hundreds, if not thousands, of related containers. The apps support Agile design, development and deployment methodologies. They can scale readily in production and are ideally suited for hosting in distributed, and even hybrid, cloud infrastructure.

Unfortunately, containers weren’t originally designed to implement full-stack applications or really any application that requires persistent data storage. The original idea for containers was to make it easy to create and deploy stateless microservice application layers on a large scale. Think of microservices as a form of highly agile middleware with conceptually no persistent data storage requirements to worry about.

Persistence in persisting

Because the container approach has delivered great agility, scalability, efficiency and cloud-readiness, and is lower-cost in many cases, people now want to use it for far more than microservices. Container architectures provide such a markedly better way to build modern applications that many commercial software and systems vendors are transitioning internal development to container form and even deploying containers widely, often without explicit end-user or IT awareness. It’s a good bet that most Fortune 1000 companies already host third-party production IT applications in containers, especially inside appliances, converged systems and purpose-built infrastructure.

It’s a good bet that most Fortune 1000 companies already host third-party container applications within production IT.

You might find large, containerized databases and even storage systems. Still, designing enterprise persistent storage for these applications is a challenge, as containers can come and go and migrate across distributed and hybrid infrastructure. Because data needs to be mastered, protected, regulated and governed, persistent data storage acts in many ways like an anchor, holding containers down and threatening to reduce many of their benefits.
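
To make that anchor concrete, here’s the basic persistence mechanism container runtimes expose today; a minimal sketch using a Docker named volume (the image and paths are illustrative):

# Create a named volume that outlives any single container
$ docker volume create appdata

# Run a containerized database with its data directory on that volume;
# the container can be killed and rescheduled, but the data persists
$ docker run -d --name mydb -v appdata:/var/lib/postgresql/data postgres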

Container architectures need three types of storage…(read the complete as-published article there)