Follow That Transaction! APM Across Clouds

It’s one thing to stand up a new cloud-native application, but it’s quite another to manage its end-to-end performance; the approaches and tooling we’ve long used in the data center simply won’t do the job.

I’m hearing a lot about transitional “new” cloud-native applications that actually combine and span layers of existing persistent storage, traditional data stores, key legacy application functionality hosted in VMs, and brand-new containerized development. Existing back-end “stuff with APIs” can now be readily topped and extended by thousands (hundreds of thousands?) of microservices running web-like across hybrid and multi-cloud hosting platforms. Even the idea of what makes up any particular application can get pretty fuzzy.

There are certainly security, data protection, and availability/resilience concerns, but the problem we are talking about today is that when you pile up that much complexity and scale, assuring production performance becomes quite a challenge.

Transactional History in Performance Management

Performance management includes monitoring targeted service levels, but it should also provide ways to identify both sudden and creeping problems, troubleshoot down to root cause (and then help remediate in situ), optimize bottlenecks to deliver better service (an endless task, because there is always a “next” longest pole in the tent), and plan/predict for possible changes in functionality, usage, and resources (capacity planning in the cloud era).

I spent many years working for one of the so-called “big 4” system management companies, implementing infrastructure capacity planning and performance management (IPM) solutions in Fortune 500 data centers. With a little operational queuing-model work on rather monolithic workloads (mainframe, AS/400, mid-range UNIX…), we could help steer multi-million-dollar IT buys toward the right resources to solve today’s problems and assure future performance.

A core concept is the mythical “workload transaction” as the unit of application work. In those days, at least for capacity planning, we could get away with a statistical transaction unit of work. For example, we’d observe a certain amount of active usage on a given system in terms of its CPU utilization, memory, IO, etc., and then divide those metrics by an arbitrary usage metric (e.g. number of known users, number of IOs written, processes forked, forms processed, function points, or the default generic CPU-second itself). This statistical modeling approach worked remarkably well in helping right-size, right-time, and right-host infrastructure investments.
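As a rough illustration of that arithmetic (hypothetical numbers and metric names, not any vendor’s actual tooling), the statistical-transaction calculation boils down to a simple division and projection:

```python
# Hypothetical illustration of a "statistical transaction" unit of work:
# observed resource consumption divided by an arbitrary usage metric.

observed = {"cpu_seconds": 5_400.0, "mem_mb_avg": 32_768.0, "io_writes": 2_100_000}
known_users = 450  # the chosen usage metric (could be IOs, forms, forks, ...)

# Per-"transaction" cost profile
per_user = {metric: value / known_users for metric, value in observed.items()}
print(per_user)

# Capacity question: what resources would a 40% growth in usage demand?
projected_users = known_users * 1.4
projected = {metric: cost * projected_users for metric, cost in per_user.items()}
print(projected)
```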

However, this approach only went so far when it came to troubleshooting or optimizing within an application. We could readily look at application behavior in some kind of aggregate way, maybe isolating it down to a specific observed process (or set of identified processes). In some cases you could even get developers to add instrumentation (anybody remember ARM?) to key application code to count and report arbitrary app-specific transactions. Of course this was rarely achievable in practice (most business-critical code was third-party, and painful performance problems that needed solving “fast” were already in production).

If you needed to go inside the app itself, or track individual transactions across a distributed system (classically a three-tier presentation/business logic/database architecture), you needed application insight from another set of tools that came to be called Application Performance Management (APM). APM solutions aimed to provide performance insight into application-specific transaction “definitions.” Instrumentation for transaction tracking was often “inserted” early in the app development process, which of course requires some up-front discipline. Alternatively, a non-intrusive (but in many ways halfway) approach might capture network traffic and parse it (with deep packet inspection, or DPI) to produce information on transactional workflow, sometimes drilling down to identify individual transactions flowing between systems.
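In spirit (hypothetical code, not ARM’s actual API), instrumenting a transaction “definition” amounts to bracketing the business operation with start/stop markers and reporting counts and latencies:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical in-app transaction instrumentation, in the spirit of ARM-style APIs.
stats = defaultdict(lambda: {"count": 0, "total_secs": 0.0})

@contextmanager
def transaction(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        stats[name]["count"] += 1
        stats[name]["total_secs"] += elapsed

# Developers wrap the code paths that define an application transaction:
with transaction("submit_order"):
    time.sleep(0.05)  # stand-in for the real business logic

print(dict(stats))
```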

Hybrid Containerized PM

It’s nearly impossible to follow a unique transaction across today’s potentially huge web of containerized microservices. Visually, I think of it like the way our neurons theoretically fire and cascade in the brain: an overlapping mesh of activity. We can see behavior in aggregate easily enough, but tracking what goes into each unique transaction?

First we need to realize that transaction workflow in this kind of environment is naturally complex. Application devs (and third-party services) can implement messaging buses and delivery queues, make synchronous calls while at the same time firing asynchronous events and triggers, span arbitrarily long pauses (to account for human interactions like web page navigation), cause large cascades, aggregate behavior (trigger some X every 10 Y’s), and so on.

The only real approach to tracking unique transactions is still instrumentation. Luckily there is now a “tracing” standard (see the OpenTracing project). But tracing is even more challenging at large scale (and across dynamic, abstracted platform hosting). How much constant instrumentation data (and how fast) can something like Splunk ingest from hundreds of thousands of microservices, and how much will that cost? This can easily become a case where performance measurement consumes as much or more resource than the app itself.
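A minimal sketch of what that instrumentation looks like with the OpenTracing Python client (assuming a concrete tracer such as Jaeger or LightStep has been registered elsewhere; the service calls here are just placeholders):

```python
import opentracing

# Assumes a concrete tracer (Jaeger, LightStep, ...) has been registered;
# opentracing ships a no-op tracer by default, so this runs either way.
tracer = opentracing.global_tracer()

def handle_checkout(order_id):
    # Each service hop starts a span; context propagation across services
    # (inject/extract on HTTP headers or message metadata) ties them into one trace.
    with tracer.start_active_span("checkout") as scope:
        scope.span.set_tag("order.id", order_id)
        with tracer.start_active_span("charge-card"):
            pass  # call out to the payment microservice here
        with tracer.start_active_span("reserve-inventory"):
            pass  # call out to the inventory microservice here

handle_checkout("A-1234")
```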

To tackle this, some folks are rolling out practical tracing services designed to handle both the distributed complexity and the huge scales involved. Just last week LightStep rolled out of stealth (founder Ben Sigelman was instrumental in OpenTracing 🙂). LightStep [x]PM, a managed service offering that incurs minimal performance-analysis overhead on site, provides 100% transaction tracing at scale by doing some sophisticated sampling during aggregation/monitoring while preserving full tracing info for immediate audit/drill-down. LightStep already has some impressively large-scale use cases stacked up.

FaaS Performance Management

This of course is not the end of the transactional tracing saga. I’ve written before about Fission, a developing open source Function-as-a-Service layer (FaaS on top of Kubernetes). That project has recently started on a next layer called Fission Workflow, which uses a YAML-like blueprint file to declare and stitch together functions into larger workflows (compare to AWS Step Functions). I think workflows of functions will naturally correspond to interesting “application” transactions.

And FaaS workflows could very well be the future of application development. Each function runs as a container, but by using something like Fission the developer doesn’t need to know about containers or container management. And when it comes to generating performance insight across webs of functions, the Fission Workflow engine itself can (or will) explicitly track transactions wherever they are defined to flow (tracing state/status, timing, etc.).
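To see why the workflow layer is such a natural tracing point, here’s a toy sketch (purely hypothetical, not Fission Workflow’s actual engine or blueprint syntax) of a runner that records state and timing for each function it invokes:

```python
import time

# Purely hypothetical workflow runner, illustrating why the engine that
# stitches functions together is also the natural place to trace them.
def run_workflow(name, steps, payload):
    trace = {"workflow": name, "steps": []}
    for step_name, fn in steps:
        start = time.perf_counter()
        status = "ok"
        try:
            payload = fn(payload)
        except Exception as exc:
            status, payload = "error", {"error": str(exc)}
        trace["steps"].append({
            "step": step_name,
            "status": status,
            "secs": round(time.perf_counter() - start, 4),
        })
        if status == "error":
            break
    return payload, trace

# Each "function" here stands in for a containerized FaaS function.
steps = [
    ("parse-request", lambda p: {**p, "parsed": True}),
    ("score-risk",    lambda p: {**p, "risk": 0.12}),
    ("persist",       lambda p: {**p, "stored": True}),
]
result, trace = run_workflow("loan-application", steps, {"id": 42})
print(trace)
```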

[Check out the Fission Workflow work-in-progress page for an interesting categorization of the complexity involved in tracking async “waiting” workflows…]

This immediately makes me want to collect Fission Workflow data into something like Cassandra and play with subsets in Spark (especially graph-structured queries and visualization). There are a lot of new frontiers here to explore.
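As a starting point, a hedged PySpark sketch (assuming the DataStax spark-cassandra-connector is on the classpath and workflow trace events land in a hypothetical traces.workflow_traces table) might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the spark-cassandra-connector is available and that workflow trace
# events land in a hypothetical traces.workflow_traces table.
spark = (SparkSession.builder
         .appName("fission-workflow-traces")
         .config("spark.cassandra.connection.host", "cassandra.example.internal")
         .getOrCreate())

traces = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(table="workflow_traces", keyspace="traces")
          .load())

# Simple starting point: a latency profile per workflow step.
(traces.groupBy("workflow", "step")
       .agg(F.count("*").alias("runs"),
            F.avg("secs").alias("avg_secs"),
            F.expr("percentile_approx(secs, 0.99)").alias("p99_secs"))
       .orderBy(F.desc("p99_secs"))
       .show())
```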

I can’t wait to see what comes next!


The New Challenges of Capacity Management In Virtualized Cloudy IT

(Excerpt from original post on the Taneja Group News Blog)

As a long-time performance and capacity planning consultant, I used to help IT organizations in the worst of situations remediate thorny resource allocation issues. In other words, someone had just bought a lot of expensive vendor-specified infrastructure but the resulting performance was still terrible. Sometimes, after having been burned a few times or while still retaining some mainframe-era corporate wisdom, they would engage expert help to actually forward-plan optimal infrastructure investments (i.e. before spending the money). Not everybody had the discipline, budget, or maturity for proactive planning, and the result was a lot of unnecessary performance pain delivered to end users, with many IT shops living in fire-fighting mode. In fact, I knew several IT admins who thrived on the adrenaline of the daily fire-fight!
Once delivering good service to end users finally became a popular IT goal, one of the big attractions of virtualization technologies was that they enabled highly responsive, even dynamic, allocation of resources on demand. Many IT folks assumed this would alleviate the need for up-front capacity planning because you could now easily and quickly react to performance problems by allocating more resources dynamically from a shared pool. In fact, higher-end hypervisor capabilities can automate dynamic resource assignment and leveling through judicious setting of resource prioritization policies. And it does work well, but only up to a point.
Now we have at least three “new” capacity management challenges. The biggest one is sizing the resources needed for the entire resource pool. As we virtualize more and more of our mission-critical applications, it’s ever more important that the entire cluster be able to handle the aggregate demands of many kinds of applications co-hosted together. Despite increasingly popular modular scale-out virtual infrastructure solutions, this still requires capacity planning at the larger scale, or you risk overspending on quickly obsolescing infrastructure (remember, Moore’s law will get you more for your money the later you spend it) or facing severe performance bottlenecks at the worst possible times, when critical applications peak together. Capacity planning has always been about right-sizing the right infrastructure at the right time. Sure, hybrid cloud bursting is just around the corner for many as yet another reactive panacea for in-house resource constraints, yet it’s still possible to overspend on cloud allocations, or to undersubscribe and end up with poor performance. While AWS is elastic, it’s elastic at the machine level, with the best cost management offered by reserving known volumes of machines in advance.
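A back-of-the-envelope version of that pool-sizing exercise (illustrative numbers only) makes the point that you have to plan for how workloads peak together, not just for their averages:

```python
# Illustrative only: sizing a shared pool means planning for how workloads
# peak *together*, not just summing their averages.
workloads = {
    "web":       {"avg_cpu": 40, "peak_cpu": 120},   # units: vCPU-equivalents
    "analytics": {"avg_cpu": 60, "peak_cpu": 200},
    "erp":       {"avg_cpu": 30, "peak_cpu": 90},
}

sum_of_averages = sum(w["avg_cpu"] for w in workloads.values())
coincident_peak = sum(w["peak_cpu"] for w in workloads.values())  # worst case: all peak at once

headroom = 1.25  # planning buffer for growth and failover
print("naive sizing from averages:", sum_of_averages * headroom)
print("sizing for coincident peaks:", coincident_peak * headroom)
```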
The second issue is that as we virtualize deeper into our mission-critical application portfolio, we simply can’t continue to guess at what virtual resources might deliver satisfactory application performance and trust that reactive system dynamics will smooth everything out. Virtualization is essentially sharing, and good sharing schemes require a sound understanding of the resource demands of each application within each VM in order to set the knobs and buttons to do the right thing at run time. It’s possible and maybe even desirable to oversubscribe the low-hanging fruit of test and dev servers, but don’t try that with your mission-critical apps in production.
Finally, much of what is happening in IT infrastructure these days is converging. It’s no longer sufficient to examine performance or capacity plan silo by silo (if it ever really was). Today, it’s critical that capacity management take a holistic view across servers, storage, networking, and any other critical resources. And with the advent of clouds, capacity management isn’t limited to the data center anymore either. It’s an enterprise function, visible at the CIO level.
The bottom line is that performance analysis and capacity planning disciplines aren’t even close to dead, although there are fewer and fewer adherents who learned the formal discipline on big iron. What’s needed for this new generation is a competitive approach to optimizing total IT spend for maximum business value, one that can be leveraged by the average virtual admin. It’s been hard for classic capacity management vendors to evolve their tooling as fast as virtualization and cloud technologies mature, but there are a few standouts. TeamQuest, for one, has not only been thriving as an employee-owned firm for many years but is actively investing in and expanding its solutions. Recently it folded in a product called Surveyor, which promises to stitch together whatever systems, infrastructure, financial management, and other data you have into a cohesive, ready-to-roll analytical and reporting environment. TeamQuest claims painless deployment, in that Surveyor effectively creates a virtual capacity management database over all your other tools and data sources without having to ETL into yet another monolithic repository.
TeamQuest’s core capacity planning for servers is based on non-linear predictive modeling that relates interactive system response time to resource utilization (via expected workload demands). Non-linear modeling can analytically “predict” the right-sized infrastructure proactively to guarantee end-user performance goals. A non-linear queuing analysis is also baked into NetApp’s Balance solution, which enables it to identify the optimal “balance” between loading and performance in virtual infrastructures, accounting not only for virtual server resources but also for attached storage arrays. Key to its value is the cross-domain way it pierces through layers of virtualization to stitch together an end-to-end performance perspective, with analysis from within the VM, from the hypervisor, and from the storage array points of view.
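The non-linearity being modeled here is the familiar queuing curve; a minimal sketch using the classic single-queue approximation R = S / (1 − U) shows why response time explodes as utilization approaches saturation:

```python
# Classic open-queue approximation: response time R = service time S / (1 - utilization U).
# This is the textbook non-linearity behind "predictive" capacity models.
def response_time(service_time_ms, utilization):
    if utilization >= 1.0:
        raise ValueError("utilization at or above 1.0 means an unbounded queue")
    return service_time_ms / (1.0 - utilization)

service_time_ms = 20.0
for u in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"U={u:.2f}  R={response_time(service_time_ms, u):6.1f} ms")
```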
Old school capacity planning might be dead, but long live the new virtual infrastructure capacity management!

…(read the full post)