Reap IT automation benefits in every layer of the stack

An IT industry analyst article published by SearchITOperations.


Automation technologies create an artificial brain for IT operations, but that won’t turn skilled admins and engineers into zombies — far from it.

Mike Matchett
Small World Big Data

As a technology evangelist and professional IT systems optimizer, I see the benefits of IT automation and can only champion trends that increase it. When we automate onerous tasks and complex manual procedures, we naturally free up time to focus our energies higher in the stack. Better and more prevalent automation increases the relative return on our total effort so that we each become more productive and valuable. Simply put, IT automation provides leverage. So it’s all good, right?

Another IT automation benefit is that it captures, encapsulates and applies valuable knowledge to real-world problems. And actually, it’s increasingly hard to find IT automation platforms that don’t promote embedded machine learning and artificially intelligent algorithms. There is a fear that once our hard-earned knowledge is automated, we’ll no longer be necessary.

So, of course, I need to temper my automation enthusiasm. Automation can eliminate low-level jobs, and not everyone can instantly adjust or immediately convert to higher-value work. For example, industrial robots, self-driving cars or a plethora of internet of things (IoT)-enabled devices that cut out interactions with local retailers all tend to remove the bottom layer of the related pyramid of available jobs. In those situations, there will be fewer, more-utilized positions left as one climbs upward in skill sets.

Still, I believe automation, in the long run, can’t help but create even more pyramids to climb. We are a creative species after all. Today, we see niches emerging for skilled folks with a combination of internal IT and, for example, service provider, high-performance computing, data science, IoT and DevOps capabilities.

Automation initiatives aren’t automatic

A service provider has a profit motive, so the benefit of IT automation is creating economies of scale. Those, in turn, drive competitive margins. But even within enterprise IT, where IT is still booked as a cost center, the drive toward intelligent automation is inevitable. Today, enterprise IT shops, following in the footsteps of the big service providers, are edging toward hybrid cloud-scale operations internally and finding that serious automation isn’t a nice-to-have, but a must-have.

If one squints a bit, almost every IT initiative aims to increase automation. Most projects can be sorted roughly into these three areas, each with different IT automation benefits, from cost savings to higher uptime:

  • Assurance. Efforts to automate support and help desk tasks, shorten troubleshooting cycles, shore up security, protect data, reduce outages and recover operations quickly.
  • Operations. Necessary automation to stand up self-service catalogs, provision apps and infrastructure across hybrid and multi-cloud architectures to enable large-scale operations, and orchestrate complex system management tasks.
  • Optimization. Automation that improves or optimizes performance in complex, distributed environments, and minimizes costs through intelligent brokering, resource recovery and dynamic usage balancing.

Automation enablers at large

Successful automation initiatives don’t necessarily start by implementing new technologies like machine learning or big data. Organizational commitment to automation can drive a whole business toward a new, higher level of operational excellence…(read the complete as-published article there)

A serverless architecture could live in your data center

An IT industry analyst article published by SearchITOperations.


Just because you don’t see the server doesn’t mean it’s not there. Serverless frameworks are superseding containers, but is the extra abstraction worth it?

Mike Matchett

Have you figured out everything you need to know about managing and operating container environments already? How to host them in your production data centers at scale? Transform all your legacy apps into containerized versions? Train your developers to do agile DevOps, and turn your IT admins into cloud brokers? Not quite yet?

I hate to tell you, but the IT world is already moving past containers. Now you need to look at the next big thing: serverless computing.

I don’t know who thought it was a good idea to label this latest application architecture trend serverless computing. Code is useless, after all, unless it runs on a computer. There has to be a server in there somewhere. I guess the idea was to imply that when you submit application functionality for execution without caring about servers, it feels completely serverless.

In cloud infrastructure as a service, you don’t have to own or manage your own physical infrastructure. With cloud serverless architecture, you also don’t have to care about virtual machines, operating systems or even containers.

Serving more through serverless architecture?

So what is serverless computing? It’s a service in which a programmer can write relatively contained bits of code and then directly deploy them as standalone, function-sized microservices. You can easily set up these microservices to execute on a serverless computing framework, triggering or scheduling them by policy in response to supported events or API calls.
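
To make that concrete, here is a minimal sketch of one of these function-sized microservices, written in the style of an AWS Lambda Python handler. The (event, context) signature is Lambda's Python convention; the function name, the event fields and the thumbnail math are hypothetical placeholders, not any particular production API.

```python
import json

def resize_request_handler(event, context):
    # The platform invokes this function when a configured trigger fires
    # (an API call, a queue message, a file landing in object storage, etc.).
    payload = json.loads(event.get("body") or "{}")

    # Do one contained piece of work; there is no server, VM or container to manage.
    width = int(payload.get("width", 100))
    height = int(payload.get("height", 100))
    result = {"thumbnail_pixels": width * height}

    # Return a response; the framework handles scaling, retries and routing.
    return {"statusCode": 200, "body": json.dumps(result)}

if __name__ == "__main__":
    # Local smoke test with a fake event, standing in for the framework's trigger.
    fake_event = {"body": json.dumps({"width": 640, "height": 480})}
    print(resize_request_handler(fake_event, context=None))
```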

A serverless framework is designed to scale well with inherently stateless microservices — unlike today’s containers, which can host stateful computing as well as stateless code. You might use serverless functions to tackle applications that need highly elastic, event-driven execution or when you create a pipeline of arbitrary functionality to transform raw input into polished output. This event-pipeline concept meshes well with expected processing needs related to the internet of things. It could also prove useful with applications running in a real-time data stream.
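
The event-pipeline idea is easy to picture with ordinary stateless functions. In this illustrative sketch, each stage could be deployed as its own serverless function and triggered when the previous one completes; the stage names and the sensor-style input are hypothetical.

```python
# Illustrative only: an event pipeline built from small, stateless functions.

def parse_reading(raw: str) -> dict:
    # Stage 1: turn raw input into a structured event.
    device_id, value = raw.split(",")
    return {"device": device_id, "value": float(value)}

def enrich(reading: dict) -> dict:
    # Stage 2: no local state; everything needed arrives in the event itself.
    return {**reading, "units": "celsius"}

def format_output(reading: dict) -> str:
    # Stage 3: produce the polished output.
    return f'{reading["device"]}: {reading["value"]} {reading["units"]}'

# Raw IoT-style events flow through the pipeline to polished output.
for raw_event in ["sensor-1,21.5", "sensor-2,19.0"]:
    print(format_output(enrich(parse_reading(raw_event))))
```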

A well-known public cloud example of serverless computing is Amazon Web Services’ Lambda. The Lambda name no doubt refers to the anonymous lambda functions used extensively in functional programming. In languages such as JavaScript or Ruby, a function can be a first-class object, defined as a closure over some code within a prescribed variable scope. Some languages have actual lambda operators that a programmer can use to dynamically create new function objects at runtime (e.g., as other code executes).
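
For readers who haven't run into them, here is what a closure and a lambda look like in practice. The sketch uses Python rather than JavaScript or Ruby, but the point is the same: a new function object is created at runtime, closing over a variable from its enclosing scope.

```python
# A closure: make_adder returns a new function object created at runtime,
# which "closes over" the variable n from its enclosing scope.
def make_adder(n):
    return lambda x: x + n   # the lambda expression builds the function inline

add_five = make_adder(5)     # a first-class function object
print(add_five(3))           # prints 8
```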

So with a serverless framework, where does the actual infrastructure come into the picture? It’s still there, just under multiple layers of abstraction. Talk about software-defined computing. With this latest evolution into serverless computing, we now have perhaps several million lines of system- and platform-defining code between application code and hardware. It’s a good thing Moore’s Law hasn’t totally quit on us…(read the complete as-published article there)

What do IT administrator skills mean now?

An IT industry analyst article published by SearchITOperations.


In a world full of data-aware this and internet-connected that, deep IT administrator skills should be more in demand than ever.

Mike Matchett

It seems everything in the IT world is getting smarter and more connected. Storage is becoming data-aware, IT infrastructure components are becoming part of the internet of things and even our applications are going global, mobile and always on. And big data analytics and machine learning promise to find any information, buried anywhere, to optimize operations and business processes. So where does that leave long-time IT administrators?

The hot trend of DevOps was just an early warning sign that IT is no longer going to be made up of backroom, silo-focused, shell-scripting admin jobs. DevOps is great because having someone versed as deeply in the application as in the production infrastructure hosting it avoids many of the problems that occur when IT folks are thrown some black-box code over the wall and told to just make it run well at scale. But as we’ve seen, native DevOps folks who can dig into application code as easily as they troubleshoot, rebalance and even capacity plan production systems are quite rare.

It’s common to see DevOps folks coming from the application side when infrastructure is easily and simply cloud provisioned — hence the ready interest in containerized applications. But when it isn’t, especially if hybrid architectures are involved, IT experts might become better DevOps masters in the long run.

I suspect many IT experts consider that kind of move to be somewhat of a downgrade. Perhaps it should instead be seen as moving closer to providing direct business value. Personally, I love hacking code, building accurate capacity planning models, tuning production performance and yes, even troubleshooting arcane and exotic problems. But as I’ve often told anyone who doesn’t know the true depth of IT administrator skills — usually at cocktail parties when it comes out that I do something in technology — “I AM NOT [JUST] A PROGRAMMER!” (This is usually followed by everyone within earshot beating a hasty retreat. I’m really a lot of fun at parties!)

It’s all virtualization’s fault

IT specialists also need to broaden into — or be replaced by — IT generalists. Here we can blame virtualization and, to some extent, infrastructure convergence. There are an awful lot more virtual admins out there than 10 years ago. Virtual environment administration isn’t actually easy, but a big part of the value of virtualizing infrastructure is lower operational expenditure through easier administration: more automatic sharing, simpler point-and-click operations, scalable policy-based management and plug-and-play integration. I often hear from virtual admins that their IT administrator skills are still challenged daily simply with keeping the lights on and ensuring things run smoothly, but they are relying more and more on built-in lower-level intelligence and automation. This frees up some time to take a bigger-picture view and operate across a wider span of control. Still, the trend toward IT generalists often disenfranchises the IT silo expert whose cheese gets virtualized or converged.

The role of the IT administrator will definitely need to change as data centers hybridize across multiple types of private and public clouds, stacks of infrastructure converge and hyper-converge, and systems management develops sentience. Of course, change is inevitable. But how can old-school IT administrators stay current and continue providing mastery-level value to their organizations? …(read the complete as-published article there)