Virtual Instruments Finally Gets NAS-ty

(Excerpt from original post on the Taneja Group News Blog)

When Virtual Instruments merged with (effectively acquired) Load Dynamix recently, we thought good things were going to happen. VI could now offer its users a full performance management “loop” of monitoring and testing in a common suite. Apparently VI’s clientele agreed, because the company just finished a stellar first half of the year financially. Now, to sweeten the offer even more, VI is broadening its traditionally Fibre Channel/block-focused monitoring (historically rooted in its original FC SAN probes) to fully encompass NAS monitoring too.

…(read the full post)

Oracle ZS5 Throws Down a Cloud Ready Gauntlet

(Excerpt from original post on the Taneja Group News Blog)

Is anyone in storage really paying close enough attention to Oracle? I think too many mistakenly dismiss Oracle’s infrastructure solutions as expensive, custom, proprietary and Oracle database-only hardware. But, surprise, Oracle has been successfully evolving the well-respected ZFS into a solid cloud-scale filer, today releasing the fifth version of its ZFS storage array – the Oracle ZS5. And perhaps most surprising, the ZS series powers Oracle’s own fast-growing cloud storage services (at huge scale – over 600 PB and growing).

…(read the full post)

What do IT administrator skills mean now?

An IT industry analyst article published by SearchIToperations.


In a world full of data-aware this and internet-connected that, deep IT administrator skills should be more in-demand than ever.

Mike Matchett

It seems everything in the IT world is getting smarter and more connected. Storage is becoming data-aware, IT infrastructure components are becoming part of the internet of things and even our applications are going global, mobile and always on. And big data analytics and machine learning promise to find any information, buried anywhere, to optimize operations and business processes. So where does that leave long-time IT administrators?

The hot trend of DevOps was just an early warning sign that IT is no longer going to be made up of backroom, silo-focused, shell-scripting admin jobs. DevOps is great because having someone versed as deeply in the application as in the production infrastructure hosting it avoids many of the problems that occur when IT folks are thrown some black-box code over the wall and told to just make it run well at scale. But as we’ve seen, native DevOps folks who can dig into application code as easily as they troubleshoot, rebalance and even capacity plan production systems are quite rare.

It’s common to see DevOps folks coming from the application side when infrastructure is easily and simply cloud provisioned — hence the ready interest in containerized applications. But when it isn’t, especially if hybrid architectures are involved, IT experts might become better DevOps masters in the long run.

I suspect many IT experts consider that kind of move to be somewhat of a downgrade. Perhaps it should instead be seen as moving closer to providing direct business value. Personally, I love hacking code, building accurate capacity planning models, tuning production performance and yes, even troubleshooting arcane and exotic problems. But as I’ve often told anyone who doesn’t know the true depth of IT administrator skills — usually at cocktail parties when it comes out that I do something in technology — “I AM NOT [JUST] A PROGRAMMER!” (This is usually followed by everyone within earshot beating a hasty retreat. I’m really a lot of fun at parties!)

It’s all virtualization’s fault

IT specialists also need to broaden into — or be replaced by — IT generalists. Here we can blame virtualization and, to some extent, infrastructure convergence. There are an awful lot more virtual admins out there than 10 years ago. Virtual environment administration isn’t actually easy, but a big part of the value of virtualizing infrastructure is lowering operational expenditures by making it easier to administer: more automatic sharing, simpler point-and-click operations, scalable policy-based management and plug-and-play integration. I often hear from virtual admins that their IT administrator skills are still challenged daily simply by keeping the lights on and ensuring things are running smoothly, but that they rely more and more on built-in lower-level intelligence and automation. This frees up some time to take a bigger-picture view and operate across a wider span of control. Still, the trend toward IT generalists often disenfranchises the IT silo expert whose cheese gets virtualized or converged.

The role of the IT administrator will definitely need to change as data centers hybridize across multiple types of private and public clouds, stacks of infrastructure converge and hyper-converge, and systems management develops sentience. Of course, change is inevitable. But how can old-school IT administrators stay current and continue providing mastery-level value to their organizations? …(read the complete as-published article there)

When data storage infrastructure really has a brain

An IT industry analyst article published by SearchStorage.


Big data analysis and the internet of things are helping produce more intelligent storage infrastructure.

Mike Matchett

Cheaper and denser CPUs are driving smarter built-in intelligence into each layer of the data storage infrastructure stack.

Take storage, for example. Excess compute power can be harnessed to deploy agile software-defined storage (e.g., Hewlett Packard Enterprise StoreVirtual), transition to hyper-converged architectures (e.g., HyperGrid, Nutanix, Pivot3, SimpliVity), or optimize I/O by smartly redistributing storage functionality between application servers and disk hosts (e.g., Datrium).

There is a downside to all this built-in intelligence, however. It can diminish the visibility we might otherwise have into how our data storage infrastructure responds to changes — any IT change, really, whether due to intentional patching and upgrades, expanding usage and users, or complex bugs and component failures. Or, to put it another way, native, dynamic optimization enabled by powerful and inexpensive processors is making it increasingly difficult for us humans to figure out what’s going on with our infrastructures.

So while it’s really great when we don’t need to know any details, and can simply rely on low-level components to always do the right thing, until there is an absolutely autonomous data center — and, no, today’s public cloud computing doesn’t do away with the need for internal experts — IT may find baked-in intelligence a double-edged sword. Furthermore, while smarter data storage infrastructure helps us with provisioning, optimization, growth plans and troubleshooting, it can blind or fool us and actively work against our best efforts to bend infrastructure to our “will.”

Still, in spite of all these potential negatives, given the choice, I’d rather live in a smarter and more autonomous IT world than not (even if there is some risk of runaway AI). I’ll explain.

It’s all about the data

Remember when analysis used to be an offline process? Capture some data in a file; open Excel, SAS or other desktop tool; and weeks later receive a recommendation. Today, that kind of analysis latency is entirely too long, and that approach far too naïve.
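To make that workflow concrete, here is a minimal sketch of the old batch approach, in Python rather than Excel, with a hypothetical export file, column name and threshold standing in for whatever a real array would actually produce:

# Hypothetical offline analysis: batch-process an exported CSV of storage metrics.
# The file name, the "capacity_used_pct" column and the 80% threshold are all
# illustrative assumptions, not any particular vendor's format.
import csv
import statistics

def recommend_from_capture(path="array_stats_export.csv"):
    utilization = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            utilization.append(float(row["capacity_used_pct"]))
    peak = max(utilization)
    avg = statistics.mean(utilization)
    # A crude, after-the-fact recommendation, available only long after capture.
    if peak > 80:
        return f"Peak utilization {peak:.1f}% (avg {avg:.1f}%): plan a capacity expansion."
    return f"Peak utilization {peak:.1f}% (avg {avg:.1f}%): no action needed yet."

if __name__ == "__main__":
    print(recommend_from_capture())

The arithmetic is trivial; the problem is the turnaround time, since by the time such a report gets run and read, the moment to act has usually passed.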

Given the speed and agility of our applications and users nowadays, not to mention bigger data streams and minute-by-minute elastic cloud brokering, we need insight and answers faster than ever. This kind of intelligence starts with plentiful, reliable data, which today’s infrastructures are producing more and more of every day (in fact, we’ll soon be drowning in new data thanks to the internet of things [IoT]), and a way to process and manage all that information.
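As a contrast, here is an equally minimal sketch of the faster path: a rolling baseline computed over a live stream of latency samples, flagging outliers as they arrive instead of weeks later. The sample feed, window size and three-sigma rule are again just illustrative assumptions.

# Hypothetical near-real-time check: flag latency spikes from a metrics stream.
# The synthetic sample stream, window size and 3-sigma rule are illustrative only.
from collections import deque
import statistics

WINDOW = 60  # number of recent samples kept for the rolling baseline

def detect_anomalies(samples):
    """Yield a message for any sample that looks abnormal versus the rolling window."""
    window = deque(maxlen=WINDOW)
    for latency_ms in samples:
        if len(window) >= 10:  # wait for a minimal baseline before judging
            mean = statistics.mean(window)
            stdev = statistics.pstdev(window)
            if latency_ms > mean + 3 * stdev:
                yield f"latency {latency_ms:.1f} ms is well above the {mean:.1f} ms baseline"
        window.append(latency_ms)

if __name__ == "__main__":
    # A synthetic stream standing in for real array or IoT telemetry.
    stream = [5.0, 5.3, 4.7, 5.1, 4.9] * 10 + [25.0, 5.2]
    for alert in detect_anomalies(stream):
        print(alert)

Nothing sophisticated, but it illustrates the shift: the analysis lives next to the data feed, so the answer arrives in minutes rather than weeks.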

Storage arrays, for example, have long produced insightful data, but they historically required vendor-specific, complex and expensive storage resource management applications to make good use of it. Fortunately, today there is a series of developments helping us become smarter about IT systems management and better (and faster) users of the data generated by our infrastructures: …(read the complete as-published article there)