Emerging PaaS model puts cloud app dev into the fast lane

An IT industry analyst article published by SearchCloudComputing.


As they grapple with application backlogs and a shortage of seasoned, business-savvy developers, enterprises will increasingly look to drag-and-drop programming options.

Mike Matchett
Small World Big Data

Any complex organization wants to move faster, increase efficiency and lower its costs. That’s never been easy. There are too many moving parts, spread across layers of heterogeneous, hybrid IT, and not enough of the right expertise to accomplish everything.

It’s no wonder that the get-to-the-cloud message, with its push-button, pay-someone-else-to-manage-it vision, has finally penetrated almost every organization. With the cloud-first mantra these days, the CIO might as well be thought of as the cloud information officer. However, in today’s highly internetworked, hybrid world, IaaS is no longer the big cloud opportunity.

Where IT can help the business gain real competitive advantage is now up the stack with some form of PaaS model, such as high-productivity application PaaS. To be competitive, companies want to build and deploy new applications quickly. PaaS promises to enable developers to build better apps and deploy them faster without IT infrastructure friction, thereby unleashing pent-up productivity.

Switching to PaaS, however, can be hard, much like the move to Agile development methods. Using PaaS assumes you have a bevy of highly experienced and web-savvy developers willing to work on relatively plebeian business processes — and PaaS alone won’t solve all your problems.

Backlogged application development
Great business-savvy application developers are rare. In fact, I’ve never heard anyone say they ran out of applications to build before they ran out of good application developers to build them. And it’s not just developers. An organization’s application backlog problem could get worse for a number of reasons:

  • App dev bottleneck. How many people really grasp top-notch, web-scale coding practices and know the business? Among them, how many also understand scalable databases and machine learning algorithms, and have the patience to provide internal customer support?
  • Data swampiness. Some of today’s most valuable data is big, bulky, barely structured, increasingly real-time and growing. Put it all in a data lake and maybe you can make some use of it, but only if you can sort out what’s relevant, what’s compliant and what’s true. Even harder, most new apps will want to naturally span and combine both structured and unstructured data sources.
  • Creativity cost. It takes a good idea and dedicated resources to make a great new business app work well. And it requires a culture that approves of investing in projects that might not always produce results. The biggest returns come from taking the biggest risks, which usually means more money on the line.
  • Ticking time. Ask everyone within your organization for application ideas, and you’ll be sure to compile a huge backlog. Many of those applications are impractical for the simple reason that, by the time developers finish, the app’s window of competitive value will have disappeared. Who needs another outdated application? It’s hard enough to maintain the ones already in use.

PaaS adoption can be a very good thing, helping enable and accelerate development on a number of fronts. But for many of the above reasons, the PaaS model itself won’t help everyone take advantage of all the potential new application opportunities…(read the complete as-published article there)

Serverless technology obfuscates workflows, performance data

An IT industry analyst article published by SearchITOperations.


Serverless and microservices reshape the application stack into something that looks like a swath of stars in the sky. How do you find a slow, misconfigured component in this interconnected galaxy?

Mike Matchett
Small World Big Data

I’m hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away all the complexities of explicit infrastructure layers, such as storage arrays and physical servers.

On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).

But can we have computing without the server? And where did the server go?

Serving more with serverless technology
There is a certain hazard in my life that comes from telling non-IT people that, as an IT industry analyst, I explore and explain technology. I’m asked all the time, even by my mom, questions like, “I suppose you can explain what the cloud is?”

I tend to bravely charge in, and, after a lot of at-bats with this question, I’ve got the first 25 seconds down: “It’s like running all your favorite applications and storing all your data on somebody else’s servers that run somewhere else — you just rent it while you use it.” Then I lose them with whatever I say next, usually something about the internet and virtualization.

The same is mostly true with serverless computing. We are just moving one more level up the IT stack. Of course, there is always a server down in the stack somewhere, but you don’t need to care about it anymore. With serverless technology in the stack, you pay for someone else to provide and operate the servers for you.

We submit our code (functions) to the service, which executes it for us according to whatever event triggers we set. As clients, we don’t have to deal with machine instances, storage, execution management, scalability or any other lower-level infrastructure concerns.

The event-driven part is a bit like how stored procedures acted in old databases, or the way modern webpages call in JavaScript functions, hooked to and fired off in response to various clicks and other web events. In fact, AWS Lambda, a popular serverless computing service, executes client JavaScript functions (among other languages), running Node.js behind the scenes in some vastly scalable way.
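To make that concrete, here is a minimal sketch of what an event-triggered function might look like, written as a TypeScript handler in the AWS Lambda style. The exported async handler follows the standard Node.js Lambda convention; the S3-style event fields and the processing logic are illustrative assumptions for this sketch, not code from any particular deployment.

```typescript
// A minimal sketch of an event-triggered function in the AWS Lambda style.
// The event shape mimics an S3 "object created" notification and is defined
// locally so the example stays self-contained; it is illustrative only.

interface S3Record {
  s3: { bucket: { name: string }; object: { key: string; size: number } };
}

interface S3Event {
  Records: S3Record[];
}

// The platform invokes this exported handler whenever the configured trigger
// fires; we never provision or manage the server that runs it.
export const handler = async (event: S3Event): Promise<{ processed: number }> => {
  for (const record of event.Records) {
    const { bucket, object } = record.s3;
    console.log(`New object ${object.key} (${object.size} bytes) in bucket ${bucket.name}`);
    // Business logic would go here: resize an image, index a document, etc.
  }
  return { processed: event.Records.length };
};
```

The notable part is what’s missing: there is no machine instance, container or process for the client to size, patch or scale. The platform decides all of that each time the trigger fires, and you pay only for the executions.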

Look ma, no server!
We need to tackle several issues to ready serverless technology for primetime enterprise use. The first is controlling complexity…(read the complete as-published article there)

Learn storage techniques for managing unstructured data use

An IT industry analyst article published by SearchStorage.


Rearchitect storage to maximize unstructured data use at the global scale for larger data sets coming from big data analytics and other applications.

Mike Matchett
Small World Big Data

Back in the good old days, we mostly dealt with two storage tiers. We had online, high-performance primary storage directly used by applications and colder secondary storage used to tier less-valuable data out of primary storage. It wasn’t that most data lost value on a hard expiration date, but primary storage was pricey enough to constrain capacity, and we needed to make room for newer, more immediately valuable data.

We spent a lot of time trying to intelligently summarize and aggregate aging data to keep some kind of historical information trail online. Still, masses of detailed data were sent off to bed, out of sight and relatively offline. That’s all changing as managing unstructured data becomes a bigger concern. New services provide storage for big data analysis of detailed unstructured and machine data, as well as to support web-speed DevOps agility, deliver storage self-service and control IT costs. Fundamentally, these services help storage pros provide and maintain more valuable online access to ever-larger data sets.

Products for managing unstructured data may include copy data management (CDM), global file systems, hybrid cloud architectures, global data protection and big data analytics. These features help keep much, if not all, data available and productive.

Handling the data explosion
We’re seeing a lot of high-variety, high-volume and unstructured data. That’s pretty much everything other than highly structured database records. The new data explosion includes growing files and file systems, machine-generated data streams, web-scale application exhaust, endless file versioning, finer-grained backups and rollback snapshots to meet lower tolerances for data integrity and business continuity, and vast image and media repositories.

The public cloud is one way to deal with this data explosion, but it’s not always the best answer by itself. Elastic cloud storage services make it easy to deploy large amounts of storage capacity. However, unless you want to create a growing and increasingly expensive cloud data dump, you still need advanced capabilities for managing unstructured data. The underlying theme of many new storage offerings is to extend enterprise-quality IT management and governance across multiple tiers of global storage, including hybrid and public cloud configurations.

If you’re architecting a new approach to storage, especially unstructured data storage at a global enterprise scale, here are seven advanced storage capabilities to consider:

Automated storage tiering. Storage tiering isn’t a new concept, but today it works across disparate storage arrays and vendors, often virtualizing in-place storage first. Advanced storage tiering products subsume yesterday’s simpler cloud gateways. They learn workload-specific performance needs and implement key quality of service, security and business cost control policies.

Much of what used to make up individual products, such as storage virtualizers, global distributed file systems, bulk data replicators and migrators, and cloud gateways, is converging into single-console, unifying storage services. Enmotus and Veritas offer these simple-to-use services. This type of storage tiering enables unified storage infrastructure and provides a core service for many different types of storage management products.
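As a rough illustration of the kind of decision an automated tiering engine makes, here is a hypothetical TypeScript sketch. The tier names and thresholds are invented for this example; real products, including those mentioned above, learn workload-specific thresholds and policies rather than hard-coding them.

```typescript
// Hypothetical illustration of an automated tiering decision. Real products
// learn thresholds from observed workload behavior; here they are hard-coded.

type Tier = "flash" | "capacity-disk" | "cloud-archive";

interface FileStats {
  sizeBytes: number;
  daysSinceLastAccess: number;
  readsPerDay: number;
}

function chooseTier(stats: FileStats): Tier {
  // Hot data, accessed frequently and recently, stays on flash.
  if (stats.readsPerDay > 10 && stats.daysSinceLastAccess < 7) {
    return "flash";
  }
  // Warm data, still referenced occasionally, moves to a local capacity tier.
  if (stats.daysSinceLastAccess < 90) {
    return "capacity-disk";
  }
  // Cold data, rarely touched, goes to the cheapest (cloud) tier.
  return "cloud-archive";
}

// Example: a 2 GB file untouched for six months lands in the archive tier.
console.log(chooseTier({ sizeBytes: 2e9, daysSinceLastAccess: 180, readsPerDay: 0 }));
```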

Metadata at scale. There’s a growing focus on collecting and using storage metadata — data about stored data — when managing unstructured data. By properly aggregating and exploiting metadata at scale, storage vendors can better virtualize storage, optimize services, enforce governance policies and augment end-user analytical efforts.

Metadata concepts are most familiar in an object or file storage context. However, advanced block and virtual machine-level storage services are increasingly using metadata detail to help with tiering for performance. We also see metadata in data protection features. Reduxio’s infinite snapshots and immediate recovery based on timestamping changed blocks take advantage of metadata, as do change data capture techniques and N-way replication. When looking at heavily metadata-driven storage, it’s important to examine metadata protection schemes and potential bottlenecks. Interestingly, metadata-heavy approaches can actually improve storage performance, because metadata is typically handled out of band from the data path, where it can be scaled independently.
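To show the basic idea of metadata-driven management, here is a small hypothetical sketch that builds an out-of-band catalog of file metadata (path, size, modification time) by walking a directory tree with Node’s standard fs APIs. The directory path and the 180-day cold-data cutoff are assumptions for illustration; real systems gather this continuously and at far larger scale.

```typescript
// Hypothetical sketch: building a small out-of-band metadata catalog by
// walking a directory tree. Decisions (tiering, governance, analytics) can
// then be driven by data about the data, without touching the data itself.
import { readdir, stat } from "node:fs/promises";
import { join } from "node:path";

interface FileMeta {
  path: string;
  sizeBytes: number;
  modified: Date;
}

async function catalog(dir: string, out: FileMeta[] = []): Promise<FileMeta[]> {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      await catalog(full, out); // recurse into subdirectories
    } else if (entry.isFile()) {
      const s = await stat(full);
      out.push({ path: full, sizeBytes: s.size, modified: s.mtime });
    }
  }
  return out;
}

// Example: count candidates for an archive tier (untouched for ~180 days).
// "/data/projects" is a placeholder path for this illustration.
const cutoff = Date.now() - 180 * 24 * 3600 * 1000;
catalog("/data/projects").then(meta =>
  console.log(meta.filter(m => m.modified.getTime() < cutoff).length, "cold files")
);
```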

Storage analytics. Metadata and other introspective analytics about storage use can be gathered across enterprise storage and applied both offline and, increasingly, in dynamic optimizations. Call-home management is one example of how these analytics are used to better manage storage…(read the complete as-published article there)

Is demand for data storage or supply driving increased storage?

An IT industry analyst article published by SearchStorage.


Figuring out whether we’re storing more data than ever because we’re producing more data or because constantly evolving storage technology lets us store more of it isn’t easy.

Mike Matchett
Small World Big Data

Whether you’re growing on-premises storage or your cloud storage footprint this year, it’s likely you’re increasing total storage faster than ever. Where we used to see capacity upgrade requests for proposals (RFPs) in terms of tens of terabytes of growth, we now regularly see RFPs for half a petabyte or more. When it comes to storage size, huge is in.

Do we really need that much more data to stay competitive? Yes, probably. Can we afford extremely deep storage repositories? It seems that we can. However, these questions raise a more basic chicken-and-egg question: Are we storing more data because we’re making more data or because constantly evolving storage technology lets us?

Data storage economics
Looked at from a pricing perspective, the question becomes what’s driving price — more demand for data storage or more storage supply? I’ve heard economics professors say they can tell who really understands basic supply and demand price curve lessons when students ask this kind of question and consider a supply-side answer first. People tend to focus on demand-side explanations as the most straightforward way of explaining why prices fluctuate. I guess it’s easier to assume supply is a remote constant while envisioning all the possible changes in demand for data storage.

As we learn to wring more value out of our data, we want to both make and store more data.

But if storage supply were constant, then given our massive data growth, storage would be really expensive. The massive squirreling away of data would instead be constrained by that high price, a reflection of limited supply. That’s how it was years ago. Remember when traditional IT application environments struggled to fit into limited storage infrastructure that was already stretched thin to meet ever-growing demand?

Today, data capacities are growing fast, and yet the price per unit of storage capacity keeps dropping. There’s no doubt supply is rising faster than demand for data storage. Supply-side technologies, such as the inherent efficiencies of shared cloud storage, Moore’s law and clustered open source file systems like the Hadoop Distributed File System, have made bulk capacity so affordable that the price of storage continues to fall despite massive growth in demand.

Endless data storage
When we think of hot new storage technologies, we tend to focus on primary storage advances such as flash and nonvolatile memory express. All so-called secondary storage comes, well, second. It’s true the relative value of a gigabyte of primary storage has greatly increased. Just compare the ROI of buying a whole bunch of dedicated, short-stroked HDDs as we did in the past to investing in a modicum of today’s fully deduped, automatically tiered and workload-shared flash.

It’s also worth thinking about flash storage in terms of impact on capacity, not just performance. If flash storage can serve a workload in one-tenth the time, it can also serve 10 similar workloads in the same time, providing an effective 10-times capacity boost.
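The consolidation math behind that claim can be made explicit with a back-of-the-envelope sketch, assuming workloads are perfectly shareable and service time is the only constraint; the numbers below are illustrative.

```typescript
// Rough illustration of the capacity-equivalence argument: if one device
// serves a workload N times faster, it can (ideally) stand in for N of the
// slower devices, so the speedup acts like an effective capacity multiplier.
function effectiveCapacityMultiplier(hddServiceMs: number, flashServiceMs: number): number {
  return hddServiceMs / flashServiceMs;
}

// Example: 10 ms on short-stroked HDD vs. 1 ms on shared flash -> ~10x.
console.log(effectiveCapacityMultiplier(10, 1));
```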

But don’t discount the major changes that have happened in secondary storage…(read the complete as-published article there)

What’s our future if we don’t secure IoT devices?

An IT industry analyst article published by SearchITOperations.


When everything from the coffee maker to the manufacturing plant’s robots to the electric grid is connected, shouldn’t security be IT’s primary concern?

Mike Matchett
Small World Big Data

I was recently asked about the most pressing IT challenge in 2018. At first, I was going to throw out a pat answer, something like dealing with big data or finally deploying hybrid cloud architecture. But those aren’t actually all that difficult to pull off anymore.

Then I thought about how some people like to be irrationally scared about the future, and about bogeymen like artificial intelligence in particular. But AI really isn’t the scary part. It’s the blind trust we already tend to put into black-box algorithms and short-sighted local optimizations that inevitably bring about unintended consequences. We should be much more afraid of today’s human ignorance than tomorrow’s AI.

Instead, what I came up with as the hard, impending problem for IT is how to adequately secure the fast-expanding internet of things. To be clear, I interpret IoT rather broadly to include existing mobile devices — e.g., smartphones that can measure us constantly with multiple sensors and GPS — connected consumer gadgets and household items, and the burgeoning realm of industrial IoT.

The rush to secure IoT devices isn’t just about your personal things, as in the risk of someone hacking your future driverless car. The potential scope of an IoT security compromise is, by definition, huge. Imagine every car on the road hacked — at the same time.

IoT exploits could also go wide and deep. Sophisticated compromises could attack your car, your phone, your home security system, your pacemaker and your coffeepot simultaneously. Imagine every coffee machine out of service on the same morning. We haven’t even begun to outline the potential nightmare scenarios caused by insecure IoT devices. And I sure hope Starbucks is keeping some analog percolators on standby.

If personal physical danger isn’t scary enough, think about the ease with which a single penetration of a key connected system could cause a nationwide or even global disaster. For example, the 2003 cascading power outage that affected more than 50 million people across the Northeastern U.S. and Ontario was triggered in part by the failure of a single control-room alarm system. An inability to recover or reset something easily at that scale could push one into imagining a truly dystopian future.

Vulnerable with a capital V
What worries me more than the possibility of a large, direct attack is the very real likelihood of slow, insidious, creeping subversion, achieved through IoT device security breaches. And not just by one party or a single bad actor, but by many competing interests and organizations over time — some with supposedly good intentions.

We will make mistakes, take shortcuts and ignore vulnerabilities until it’s too late.

The total IoT attack surface will be too large to keep everything fully secured…(read the complete as-published article there)