Serverless technology obfuscates workflows, performance data

An IT industry analyst article published by SearchITOperations.


Serverless and microservices reshape the application stack into something that looks like a swath of stars in the sky. How do you find a slow, misconfigured component in this interconnected galaxy?

Mike Matchett
Small World Big Data

I’m hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away all the complexities of explicit infrastructure layers, such as storage arrays and physical servers.

On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).

But can we have computing without the server? And where did the server go?

Serving more with serverless technology
There is a certain hazard in my life that comes from telling non-IT people that, as an IT industry analyst, I explore and explain technology. I’m asked all the time, even by my mom, questions like, “I suppose you can explain what the cloud is?”

I tend to bravely charge in, and, after a lot of at-bats with this question, I’ve got the first 25 seconds down: “It’s like running all your favorite applications and storing all your data on somebody else’s servers that run somewhere else — you just rent it while you use it.” Then I lose them with whatever I say next, usually something about the internet and virtualization.

The same is mostly true with serverless computing. We are just moving one more level up the IT stack. Of course, there is always a server down in the stack somewhere, but you don’t need to care about it anymore. With serverless technology in the stack, you pay for someone else to provide and operate the servers for you.

We submit our code (functions) to the service, which executes it for us according to whatever event triggers we set. As clients, we don’t have to deal with machine instances, storage, execution management, scalability or any other lower-level infrastructure concerns.

The event-driven part is a bit like how stored procedures worked in old databases, or the way modern webpages hook JavaScript functions to various clicks and other web events and fire them off in response. In fact, AWS Lambda, a popular serverless computing service, executes client JavaScript functions, likely running Node.js in the background in some vastly scalable way.
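
To make that concrete, here is a minimal sketch of what such a function might look like. Lambda supports several runtimes besides Node.js, including Python, which I'll use here; the S3 "object created" trigger and the handler details are illustrative assumptions, not anyone's production code.

import json

def lambda_handler(event, context):
    # For an S3 "object created" trigger, each record carries bucket and key details.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # The return value goes back to the caller (or is discarded for asynchronous
    # event sources). Note what's missing: no servers, instances or scaling logic.
    return {"statusCode": 200, "body": json.dumps("processed")}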

Look ma, no server!
We need to tackle several issues to ready serverless technology for primetime enterprise use. The first is controlling complexity…(read the complete as-published article there)

A serverless architecture could live in your data center

An IT industry analyst article published by SearchITOperations.


Just because you don’t see the server doesn’t mean it’s not there. Serverless frameworks are superseding containers, but is the extra abstraction worth it?

Mike Matchett

Have you figured out everything you need to know about managing and operating container environments already? How to host them in your production data centers at scale? Transform all your legacy apps into containerized versions? Train your developers to do agile DevOps, and turn your IT admins into cloud brokers? Not quite yet?

I hate to tell you, but the IT world is already moving past containers. Now you need to look at the next big thing: serverless computing.

I don’t know who thought it was a good idea to label this latest application architecture trend serverless computing. Code is useless, after all, unless it runs on a computer. There has to be a server in there somewhere. I guess the idea was to imply that when you submit application functionality for execution without caring about servers, it feels completely serverless.

In cloud infrastructure as a service, you don’t have to own or manage your own physical infrastructure. With cloud serverless architecture, you also don’t have to care about virtual machines, operating systems or even containers.

Serving more through serverless architecture?

So what is serverless computing? It’s a service in which a programmer can write relatively contained bits of code and then directly deploy them as standalone, function-sized microservices. You can easily set up these microservices to execute on a serverless computing framework, triggering or scheduling them by policy in response to supported events or API calls.

A serverless framework is designed to scale well with inherently stateless microservices — unlike today’s containers, which can host stateful computing as well as stateless code. You might use serverless functions to tackle applications that need highly elastic, event-driven execution, or to create a pipeline of arbitrary functionality that transforms raw input into polished output. This event-pipeline concept meshes well with the expected processing needs of the internet of things, and it could also prove useful for applications that work on real-time data streams.
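
To illustrate the pipeline idea, here is a hedged, plain-Python sketch: each stage is a small, stateless function, and a serverless framework would wire the stages together so that one stage's output becomes the next stage's triggering event. The stage names and the sensor-reading example are hypothetical.

from functools import reduce

def parse(raw):          # raw text -> structured record
    device, temp = raw.split(",")
    return {"device": device, "temp_c": float(temp)}

def enrich(record):      # add derived fields
    record["temp_f"] = record["temp_c"] * 9 / 5 + 32
    return record

def flag(record):        # label anything that needs attention
    record["alert"] = record["temp_c"] > 80.0
    return record

pipeline = [parse, enrich, flag]

def run_pipeline(event):
    # In a serverless framework each stage would run (and scale) independently;
    # here we simply fold the stages over the incoming event.
    return reduce(lambda data, step: step(data), pipeline, event)

print(run_pipeline("sensor-42,85.2"))
# {'device': 'sensor-42', 'temp_c': 85.2, 'temp_f': 185.36, 'alert': True}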

A well-known public cloud example of serverless computing is Amazon Web Services’ Lambda. The Lambda name no doubt refers to the anonymous lambda functions used extensively in functional programming. In languages such as JavaScript or Ruby, a function can be a first-class object defined as a closure, which binds a block of code to the variable scope in which it was created. Some languages have actual lambda operators that a programmer can use to dynamically create new function objects at runtime (e.g., as other code executes).
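
Python makes the same point in a few lines, if you'd rather see it than take my word for it: functions are first-class objects, closures capture the scope they were defined in, and a lambda expression builds a new function object at runtime. The multiplier example is purely illustrative.

def make_multiplier(factor):
    # The inner function closes over 'factor' from the enclosing scope.
    def multiply(x):
        return x * factor
    return multiply

double = make_multiplier(2)        # a new function object, created at runtime
print(double(21))                  # 42

# The same thing with a lambda expression, built while other code executes.
triple = lambda x: x * 3
print(triple(14))                  # 42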

So with a serverless framework, where does the actual infrastructure come into the picture? It’s still there, just under multiple layers of abstraction. Talk about software-defined computing. With this latest evolution into serverless computing, we now have perhaps several million lines of system- and platform-defining code between application code and hardware. It’s a good thing Moore’s Law hasn’t totally quit on us…(read the complete as-published article there)

SQL Server machine learning goes full throttle on operational data

An IT industry analyst article published by SearchSQLServer.


Artificial intelligence is a hot topic in IT, and Microsoft has made strides to synchronize SQL Server with machine learning tools for use in analyzing operational data pipelines.

Mike Matchett

One of the hottest IT trends today is augmenting traditional business applications with artificial intelligence or machine learning capabilities. I predict the next generation of data center application platforms will natively support the real-time convergence of online transaction processing with analytics. Why not bring the sharp point of operational insight right to the front line, where business actually happens?

But modifying production application code that is optimized for handling transactions to embed machine learning algorithms is a tough slog. As most IT folks are reluctant — OK, absolutely refuse — to take apart successfully deployed operational applications to fundamentally rebuild them from the inside out, software vendors have rolled out some new ways to insert machine intelligence into business workflows. Microsoft is among them, pushing SQL Server machine learning tools tied to its database software.

Basically, adding intelligence to an application means folding in a machine learning model to recognize patterns in data, automatically label or categorize new information, recommend priorities for action, score business opportunities or make behavioral predictions about customers. Sometimes this intelligence is overtly presented to the end user, but it can also transparently supplement existing application functionality.

In conventional data science and analytics activities, machine learning models typically are built, trained and run in separate analytics systems. But models applied to transactional workflows need a way to be invoked operationally at the right time and place, and they may need another operational method to support ongoing training (e.g., to learn from new data).
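
As a hedged sketch of those two operational methods, the snippet below uses scikit-learn's SGDClassifier, which supports incremental fitting via partial_fit, to stand in for a real model; the toy features and the function names are assumptions made purely for illustration.

import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")

# Initial training would normally happen in a separate analytics environment.
X_hist = np.array([[1, 200.0], [0, 15.0], [1, 350.0], [0, 5.0]])
y_hist = np.array([1, 0, 1, 0])
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))

def score(transaction):
    """Operational hook #1: score a transaction at the right time and place."""
    return model.predict_proba(np.array([transaction]))[0][1]

def learn(transaction, outcome):
    """Operational hook #2: fold new, labeled data back into the model."""
    model.partial_fit(np.array([transaction]), np.array([outcome]))

print(score([1, 275.0]))     # score a live transaction
learn([1, 275.0], 1)         # later, learn from what actually happened
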
Closeness counts in machine learning

In the broader IT world, many organizations are excited by serverless computing and lambda function cloud services, in which small bits of code are executed in response to data flows and event triggers. But this isn’t really a new idea in the database world, where stored procedures have been around for decades. They effectively bring compute processes closer to data, the core idea behind many of today’s big data tools.

Database stored procedures can offload data-intensive modeling tasks such as training, but they can also integrate machine learning functionality directly into application data flows. With such injections, some transactional applications may be able to take advantage of embedded intelligence without any upstream application code being modified. Additionally, applying machine learning models close to the data in a database allows the operational intelligence to be readily shared among different downstream users…(read the complete as-published article there)
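
To picture what "close to the data" can look like in practice, here is a hedged sketch in the style of the Python scripts that SQL Server Machine Learning Services runs through its sp_execute_external_script procedure. By that procedure's convention, the queried rows arrive as a pandas DataFrame named InputDataSet and the result set goes back as OutputDataSet; both are stand-ins below so the sketch runs on its own, and the column names and tiny inline model are illustrative assumptions.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Stand-in for the rows a T-SQL query would normally feed into the script.
InputDataSet = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "is_repeat_customer": [1, 0, 1],
    "order_total": [275.0, 12.5, 640.0],
})

# Stand-in for a model that would normally be trained earlier and loaded from a
# table (for example, as a pickled blob passed in as a procedure parameter).
model = LogisticRegression().fit(
    [[1, 200.0], [0, 15.0], [1, 350.0], [0, 5.0]], [1, 0, 1, 0])

# Score every row right where the data lives, then hand the scored rows back to
# the calling T-SQL so downstream users and reports share the same intelligence.
features = InputDataSet[["is_repeat_customer", "order_total"]].to_numpy()
InputDataSet["score"] = model.predict_proba(features)[:, 1]
OutputDataSet = InputDataSet

print(OutputDataSet)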