What’s a Software Defined Data Center? – Pensa Aims Really High

This week Pensa came out of its stealthy development phase to announce the launch of the company and its Pensa Maestro cloud-based (SaaS) platform, accessible today through an initial service offering called Pensa Lab. The technology here presents a great opportunity, and importantly, Pensa is staffing up with the best folks (I used to work for Tom Joyce).

I’m not sure we analysts have firmed up the words to easily describe what they do yet, but basically Pensa provides a way to define the whole data center in code, validate it as a model, and then pull a trigger and aim it at some infrastructure to deploy it automatically. Data centers on demand! Of course, doing all the background transformations to validate and actually deploy this über level of complexity and scale requires big smarts – a large part of the magic here is some cleverly applied ML algorithms that drive the required transformations, enforce policies and set up SDN configurations.

What is Software Defined?

So let’s back up a bit and explore some of the technologies involved. One of the big benefits of software and software-defined resources is that they can be spun up dynamically (and readily converged within compute hosts alongside applications and other software-defined resources). These software-side “resources” are usually provisioned and configured through editable model/manifest files or templates – so-called “infrastructure as code”. Because they are implemented in software, they are often also dynamically reconfigurable and remotely programmable through APIs.
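To make that concrete, here is a minimal sketch of the idea in Python. The REST endpoint, resource schema and field names are all hypothetical, invented purely for illustration, not any particular vendor’s API:

```python
import requests

# A purely illustrative "infrastructure as code" resource model; the schema,
# field names and endpoint below are hypothetical, not any vendor's format.
volume_spec = {
    "kind": "BlockVolume",
    "name": "app-data-01",
    "size_gb": 500,
    "replicas": 2,
    "policy": {"encryption": True, "qos_iops": 5000},
}

def provision(resource: dict, api_base: str = "https://sdi.example.com/v1") -> str:
    """Submit a declarative resource spec to a (hypothetical) software-defined
    infrastructure API and return the new resource's ID."""
    resp = requests.post(f"{api_base}/resources", json=resource, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

def resize(resource_id: str, new_size_gb: int,
           api_base: str = "https://sdi.example.com/v1") -> None:
    """Because the resource exists as data, reconfiguration is just another
    API call rather than a trip to the machine room."""
    resp = requests.patch(f"{api_base}/resources/{resource_id}",
                          json={"size_gb": new_size_gb}, timeout=30)
    resp.raise_for_status()
```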

Application Blueprinting for DevOps

On the other side of the IT fence, applications are increasingly provisioned and deployed dynamically via recipes or catalog-style automation, which in turn rely on internal application “blueprint” or container manifest files that can drive automated configuration and deployment of application code and needed resources, like private network connections, storage volumes and specific data sets. This idea is most visible in new containerized environments, but we also see application blueprinting coming on strong for legacy hypervisor environments and bare metal provisioning solutions too.
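As a rough illustration, an application blueprint is just data plus an interpreter. The blueprint schema and the infra interface below are invented for this sketch (not Kubernetes, not any specific product):

```python
# A hypothetical application "blueprint": everything the app needs, as data.
blueprint = {
    "app": "orders-service",
    "image": "registry.example.com/orders:2.3",
    "replicas": 3,
    "needs": {
        "network": {"private": True, "ports": [8443]},
        "volumes": [{"name": "orders-db", "size_gb": 100}],
        "datasets": ["customers-snapshot"],
    },
}

def deploy(bp: dict, infra) -> None:
    """Walk a blueprint and ask an infrastructure layer to satisfy each
    requirement. The infra interface (create_network, create_volume,
    attach_dataset, run) is invented for this sketch."""
    needs = bp["needs"]
    net = infra.create_network(private=needs["network"]["private"],
                               ports=needs["network"]["ports"])
    vols = [infra.create_volume(**v) for v in needs["volumes"]]
    for ds in needs["datasets"]:
        infra.attach_dataset(ds, vols[0])
    infra.run(bp["image"], replicas=bp["replicas"], network=net, volumes=vols)
```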

Truly Software Defined Data Centers

If you put these two ideas together – software-defined resources and application blueprinting – you might envision a truly software defined data center, describable fully in code. With some clever discovery solutions, you can imagine an existing data center being explored and captured/documented into a model file describing a complete blueprint for both infrastructure and applications (and the enterprise services that wrap around them). Versions of that data center “file” could be edited as desired (e.g. to make a test or dev version), with the resulting data center models deployable at will on some other actual infrastructure – like “another” public cloud.

Automation of this scenario requires an intelligent translation of high-level blueprint service and resource requirements into practical provisioning and operational configurations on a specific target infrastructure. But imagine being able to effectively snapshot your current data center top to bottom, and then be able to deploy a full, complete copy on demand for testing, replication or even live DR – we might call this a “live re-inflation DR” (LR-DR) scenario.
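A hedged sketch of what that snapshot-edit-reinflate loop might look like, with the discovery and deployment interfaces left abstract since they would be product-specific:

```python
import copy
import json

def snapshot_datacenter(discover) -> dict:
    """Capture a full data center model through a discovery interface.
    discover is a stand-in for whatever tooling crawls the environment."""
    return {
        "infrastructure": discover.infrastructure(),  # networks, storage, hosts
        "applications": discover.applications(),      # one blueprint per app
        "services": discover.enterprise_services(),   # DNS, auth, backup, ...
    }

def make_test_variant(model: dict) -> dict:
    """Derive an edited copy of the model, e.g. a scaled-down test/dev version."""
    variant = copy.deepcopy(model)
    for app in variant["applications"]:
        app["replicas"] = max(1, app["replicas"] // 3)  # shrink for test
    return variant

def reinflate(model: dict, target_cloud) -> None:
    """'Live re-inflation DR': aim the validated model at other infrastructure.
    target_cloud.deploy() stands in for the translation layer that maps
    high-level requirements onto a specific target."""
    target_cloud.deploy(json.dumps(model))
```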

Of course, today’s data center is increasingly hybrid/multi-cloud, consisting of a mix of physical servers, virtual machines and containerized apps, along with the corporate data they hold. But through emerging cutting-edge IT capabilities like hybrid-supporting software defined networking and storage, composable bare metal provisioning, virtualizing hypervisors and cloud-orchestration stacks, container systems, PaaS, and hybrid cloud storage services (e.g. HPE’s Cloud Volumes), it’s becoming possible to blueprint and dynamically deploy not just applications, but soon the whole data center around them.

There is no way that VMware, whose tagline has been SDDC for some time, will roll over and cede this territory completely to Pensa (or any other startup). But Pensa now has a live service out there today – and that could prove disruptive to the whole enterprise IT market.

‘Software-defined’ to define data center of the future

An IT industry analyst article published by SearchDataCenter.

Software-defined means many — sometimes conflicting — things to many people. At the core, it means letting the application control its resources.


Is there a real answer for how “software” can define a “data center” underneath all the software-defined hype?

Vendors bombard IT pros with the claim that whatever they are selling is a “software-defined solution.” Each of these solutions claims to actually define what “software-defined” means in whatever category that vendor serves. It’s all very clever individually, but doesn’t make much sense collectively.

We suffered through something similar with cloud washing, in which every bit of IT magically became critical for every cloud adoption journey. But at least in all that cloudiness, there was some truth. We all at least think we know what cloud means. The future cloud is likely a hybrid in which most IT solutions still play a role. But this rush to claim the software-defined high ground is turning increasingly bizarre. Even though VMware seems to be leading the pack with its Software-Defined Data Center (SDDC) concept, no one seems to agree on what software-defined actually means. The term is in danger of becoming meaningless.

Before the phrase gets discredited completely, let’s look at what it could mean, starting with software-defined networking (SDN). In the networking space, the fundamental shift that SDN brought was to enable IT not only to dynamically and programmatically define and shape logical network layers, but also to manipulate the underlying physical network by remotely controlling switches (and other components).

Once infrastructure becomes remotely programmable – essentially definable through software – it creates a new dynamic agility. No longer do networking changes bring whole systems to a grinding halt while staff manually move cables and reconfigure switches and host bus adapters one by one. Instead of an all-hands-on-deck-over-the-weekend effort to migrate from static state A to static state B, SDN enables networks to be effectively redefined remotely, on the fly.
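As a concrete sketch of that remote programmability, here is what pushing a flow rule through an SDN controller’s northbound REST API might look like. The controller URL and payload schema are invented for illustration; real controllers (OpenDaylight, ONOS, etc.) each define their own:

```python
import requests

CONTROLLER = "https://sdn-controller.example.com/api"  # hypothetical endpoint

def redirect_subnet(src_cidr: str, out_port: int, switch_id: str) -> None:
    """Install a flow rule on a switch via an SDN controller's REST API.
    The URL and payload shape here are illustrative only."""
    rule = {
        "switch": switch_id,
        "match": {"ipv4_src": src_cidr},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }
    resp = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=10)
    resp.raise_for_status()

# Migrating traffic from static state A to static state B becomes a loop
# over rules rather than a weekend of recabling:
for switch in ("spine-1", "spine-2"):
    redirect_subnet("10.20.0.0/16", out_port=7, switch_id=switch)
```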

This remote programmability brings third-party intelligence and optimization into the picture (a potential use for all that machine-generated big data you’re piling up)…

…(read the complete as-published article there)

Converging Facilities and Systems Management – Controlling Energy in the Data Center

(Excerpt from original post on the Taneja Group News Blog)

Looking back, 2012 has really been the year of convergence. IT resource domain silos have been breaking down in favor of more coherent, cohesive, and unified architectures that look more like construction building blocks and less like electronics research computer labs. However, the vast majority of today’s data centers still have a long-established hard line between data center facilities and IT operations.

Power and cooling have always been integral to the data center, but they have been managed separately from the compute and storage infrastructures that occupy the facility. However, emerging technologies, driven by green initiatives and by cost efficiency projects (perhaps most important to service providers), are going to become increasingly important to the enterprise data center. Many of these technologies will enable a level of convergence between facilities and systems management.

As an EE who has always fallen on the compute side of the fence, I’m entranced by the idea of integrating end-to-end power and cooling management with advanced analytical and predictive systems management solutions, especially proactive performance and capacity management. An interesting power systems company, TrendPoint, focuses on what facilities folks call branch circuit power and cooling monitoring, and recently we had a chance to drill down into how that provides value to the larger data center “enterprise”. They produce clever metering solutions that track power utilization to each circuit/rack, and meter heating and cooling at a fine grain as well (all exposed as SNMP data sources!).
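Since those meters present as SNMP data sources, collecting a reading is a short script. A minimal sketch with pysnmp, assuming SNMP v2c and an invented OID standing in for whatever the meter’s actual MIB defines:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Hypothetical enterprise OID for a branch-circuit power reading in watts;
# a real meter's MIB documents the actual OID layout.
POWER_OID = "1.3.6.1.4.1.99999.1.2.1"

def read_circuit_watts(meter_ip: str, community: str = "public") -> float:
    """Poll a single SNMP power reading from a branch-circuit meter."""
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),   # SNMP v2c
               UdpTransportTarget((meter_ip, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(POWER_OID))))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    return float(var_binds[0][1])
```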

With detailed energy usage and demand information, you can recoup energy costs by billing them back to each data center client. Similarly, correlatable heat mapping can translate into spot cooling requirements. Energy costs can now be accounted for as part of the service level negotiation. This is extremely valuable to service providers and colo data center operators, but can also be used to help drive costs down in enterprise data centers of any size.
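The chargeback arithmetic itself is simple once the metered data exists. A toy example with made-up rates and usage figures:

```python
# Toy chargeback: turn metered per-tenant kWh into a monthly bill.
RATE_PER_KWH = 0.12   # $/kWh, illustrative
PUE = 1.6             # facility overhead multiplier (cooling, power losses)

rack_kwh = {"tenant-a": 4200.0, "tenant-b": 1850.0}  # made-up monthly usage

for tenant, kwh in rack_kwh.items():
    bill = kwh * PUE * RATE_PER_KWH
    print(f"{tenant}: {kwh:,.0f} kWh metered -> ${bill:,.2f} (PUE {PUE})")
```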

If you can then use this energy/heat information dynamically on the IT management side, the value to be gained beyond simple economic accountability is tremendous. You can imagine migrating virtual machines around a data center (or even to other regions) to dynamically balance power consumption, and also to take advantage of better cooling patterns in order to fully maximize VM densities. Most interestingly, you increase service reliability on the compute/storage side by avoiding tripped breakers that shut whole racks of equipment down on the power side, and by keeping all equipment running within heat tolerances to avoid heat-induced failures and errors. (Gotta keep that flash cool!)
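A hedged sketch of such a policy loop. Every function it calls stands in for site-specific monitoring and hypervisor management APIs, and the thresholds are illustrative:

```python
BREAKER_LIMIT_W = 8000   # illustrative branch-circuit rating
SAFETY_MARGIN = 0.85     # act at 85% of the breaker limit

def rebalance(racks, get_rack_power, pick_movable_vm,
              coolest_rack_with_headroom, migrate_vm):
    """One pass of a power-aware placement policy. All callables passed in
    are stand-ins for site-specific monitoring and hypervisor APIs."""
    for rack in racks:
        watts = get_rack_power(rack)
        if watts > BREAKER_LIMIT_W * SAFETY_MARGIN:
            vm = pick_movable_vm(rack)               # e.g. busiest movable VM
            target = coolest_rack_with_headroom()    # best thermal + power fit
            migrate_vm(vm, target)
```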

I see 2013 as the year that more data center sensors, more metering, and more raw information will be collected and leveraged than ever before (an emerging big data problem). And I imagine we’ll see new active controls on power and cooling – a real software-defined data center facility (actually, that is not much of a stretch – see Nest for a consumer-market active thermostat). And software-defined managed resources of all kinds will require new advanced operational intelligence (a machine learning challenge). So keep an eye on enabling solutions like those from TrendPoint; they are poised to help change the way data centers work.

…(read the full post)