Converging Facilities and Systems Management – Controlling Energy in the Data Center

(Excerpt from original post on the Taneja Group News Blog)

Looking back over 2012, it has really been the year of convergence. IT resource domain silos have been breaking down in favor of more coherent and unified architectures that look more like standardized building blocks and less like one-off electronics research labs. However, the vast majority of today’s data centers still have a long-established hard line between data center facilities and IT operations.

Power and cooling have always been integral to the data center, but have been managed separately from the compute and storage infrastructures they support. However, there are emerging technologies driven by green initiatives and by cost efficiency projects (perhaps most importantly for service providers) that are going to become increasingly important to the enterprise data center. Many of these technologies will enable a level of convergence between facilities and systems management.

As an EE who has always fallen on the compute side of the fence, I’m entranced by the idea of integrating end-to-end power and cooling management with advanced analytical and predictive systems management solutions, especially proactive performance and capacity management. An interesting power systems company, TrendPoint, focuses on what facilities folks call branch circuit power and cooling monitoring, and recently we had a chance to drill down into how that provides value to the larger data center “enterprise”.  They produce clever metering solutions that track power utilization down to each circuit and rack, along with fine-grained heating and cooling metrics (all exposed as SNMP data sources!).
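To make the monitoring idea concrete, here is a minimal sketch of what you might do with per-circuit current readings once they have been polled (e.g., over SNMP) from branch circuit meters. All names, readings, and thresholds are hypothetical; the 80% derating mirrors the common practice of limiting continuous load to 80% of a breaker's rating.

```python
# Hypothetical sketch: flag branch circuits nearing breaker capacity
# using per-circuit current readings (e.g., polled via SNMP from
# branch circuit meters). Names and values here are illustrative.

BREAKER_DERATE = 0.80  # common practice: continuous load <= 80% of rating

def circuits_at_risk(readings, breaker_amps, threshold=0.90):
    """Return circuit IDs drawing more than `threshold` of their
    derated breaker capacity.

    readings:     {circuit_id: measured amps}
    breaker_amps: {circuit_id: breaker rating in amps}
    """
    at_risk = []
    for cid, amps in readings.items():
        usable = breaker_amps[cid] * BREAKER_DERATE
        if amps > usable * threshold:
            at_risk.append(cid)
    return sorted(at_risk)

# rack-A1 draws 15.5 A on a 20 A breaker: usable = 16 A, alarm at 14.4 A
readings = {"rack-A1": 15.5, "rack-A2": 8.0, "rack-B1": 12.0}
breakers = {"rack-A1": 20, "rack-A2": 20, "rack-B1": 20}
print(circuits_at_risk(readings, breakers))  # -> ['rack-A1']
```

An alerting loop like this is the simplest payoff of branch circuit metering: you see the trip coming before the breaker does.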

With detailed energy usage and demand information, you can recoup energy costs by billing them back to each data center client.  Similarly, correlated heat maps can pinpoint spot cooling requirements. Energy costs can now be accounted for as part of the service level negotiation. This is extremely valuable to service providers and colo data center operators, but can also help drive costs down in enterprise data centers of any size.
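A chargeback calculation of this kind can be quite simple once per-tenant kWh readings exist. The sketch below is illustrative only: tenant names, rates, and the PUE multiplier (used to allocate facility overhead such as cooling and distribution losses) are assumptions, not anything from TrendPoint's products.

```python
# Illustrative sketch: per-tenant energy chargeback from metered kWh.
# Tenants, rates, and the PUE value are hypothetical.

def chargeback(usage_kwh, rate_per_kwh, pue=1.6):
    """Bill each tenant for metered IT energy plus a PUE-scaled share
    of facility overhead (cooling, power distribution losses)."""
    return {tenant: round(kwh * pue * rate_per_kwh, 2)
            for tenant, kwh in usage_kwh.items()}

# Monthly kWh totals as they might come from branch circuit meters
monthly_usage = {"tenant-a": 1200.0, "tenant-b": 450.0}
print(chargeback(monthly_usage, rate_per_kwh=0.12))
# -> {'tenant-a': 230.4, 'tenant-b': 86.4}
```

The design choice worth noting is the PUE multiplier: billing raw IT kWh understates true cost, since every watt of IT load drags facility overhead along with it.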

If you can then utilize this energy/heat information dynamically on the IT management side, the values to be gained beyond simple economic accountability are tremendous. You can imagine migrating virtual machines around a data center (or even to other regions) to dynamically balance power consumption, and to take advantage of better cooling patterns in order to maximize VM densities. Most interestingly, you increase service reliability on the compute/storage side by avoiding tripped breakers that shut whole racks of equipment down on the power side, and by keeping all equipment running within heat tolerances to avoid introducing failures and heat-induced errors. (Gotta keep that flash cool!)
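The power-aware VM migration idea can be sketched as a simple greedy policy: relieve the rack closest to its power cap by moving its largest VM to the rack with the most headroom. This is a toy illustration under assumed rack and VM names, not how any particular DCIM or hypervisor product implements it.

```python
# Hypothetical sketch: suggest one VM migration to relieve the rack
# drawing closest to its power capacity. Rack/VM data is illustrative.

def suggest_migration(racks):
    """racks: {rack_id: {"cap_w": capacity in watts,
                         "vms": {vm_id: watts drawn}}}
    Returns (vm_id, src_rack, dst_rack), or None if no move fits."""
    def load(r):
        return sum(r["vms"].values())

    # Source: rack with the highest fraction of its capacity in use.
    src = max(racks, key=lambda r: load(racks[r]) / racks[r]["cap_w"])
    vm = max(racks[src]["vms"], key=racks[src]["vms"].get)
    vm_w = racks[src]["vms"][vm]

    # Destination: the other rack with the most absolute headroom.
    dst = max((r for r in racks if r != src),
              key=lambda r: racks[r]["cap_w"] - load(racks[r]))
    if racks[dst]["cap_w"] - load(racks[dst]) >= vm_w:
        return vm, src, dst
    return None

racks = {
    "rack-1": {"cap_w": 5000, "vms": {"vm-a": 2200, "vm-b": 2400}},
    "rack-2": {"cap_w": 5000, "vms": {"vm-c": 900}},
}
print(suggest_migration(racks))  # -> ('vm-b', 'rack-1', 'rack-2')
```

A production policy would fold in thermal maps, migration cost, and SLA constraints, but the core loop is the same: metered power data in, placement decisions out.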

I see 2013 as the year that more data center sensors, more metering, and more raw information will be collected and leveraged than ever before (an emerging big data problem). And I imagine we’ll see new active controls on power and cooling – a real software-defined data center facility (actually, it is not that much of a stretch – see Nest for a consumer-market active thermostat). Software-defined managed resources of all kinds will require new advanced operational intelligence (a machine learning challenge). So keep an eye on enabling solutions like those from TrendPoint; they are poised to help change the way data centers work.

…(read the full post)

EMC Atmos 2.1 Accelerates Cloud Value

(Excerpt from original post on the Taneja Group News Blog)

Object storage is certainly a hot topic, and it’s rising above its old data retention “jail” perception. And for good reasons. We think that, given cloud storage build-out and adoption, increasingly mobile users and distributed apps, and the benefits of active archiving and ever-bigger retained data sets, having a solid object storage strategy becomes significantly more important going into 2013.

EMC is aiming to be a key part of that object strategy – today releasing Atmos 2.1, making wider adoption not only possible but more profitable for both in-house cloud builders and service providers. There are some performance improvements under the hood (for larger file read/write), and significant increases in manageability intended to support ever larger deployments. But we think the cloud accelerators that enable better integration with organizational needs are going to provide the biggest bang. This latest version comes with expanded browser integration, an enhanced GeoDrive, more developer tools, and even some support for transitioning traditional apps to the cloud (bulk ingest, CAS metadata).  The theme is definitely to broaden the integration and hasten the adoption of cloud storage, gaining both cloud economics and enhanced productivity.

Atmos is already a great cloud object storage solution for web developers, but now also provides an API for Android, which is fast taking over the mobile marketplace. For developers in general, Atmos 2.1 can now provide anonymous URLs, which means those developers can easily build one-time upload/download features into their apps (this is key for many collaboration use cases – picture uploads, external file sharing, content distribution, and other schemes).  Atmos 2.1 also supports “named objects”, which may ease certain kinds of distributed development challenges.

GeoDrive, a free add-on for licensed Atmos customers, provides a secure, cached, drag-and-drop cloud drive interface. GeoDrive really makes collaboration easy by eliminating the need to set up complicated shares or mount points.  Now with GeoDrive 1.1, there are a bunch of enhancements including built-in data encryption and a CIFS cloud gateway, so you don’t always need client-side software. Shareable URLs bring more collaboration into the picture to improve the private “dropbox” use case.  And collaboration is truly going global, with GeoDrive now available in 10 languages (Atmos itself is already highly suited for distributed global cloud storage).

Perhaps most important is the new native (Amazon) S3 API support.  By enabling customers to migrate S3 apps to Atmos (and vice versa), Atmos cloud providers can now offer hybrid and mixed-solution alternatives without the threat of vendor lock-in.  Enterprises holding back because of lock-in fears (or that were tied into S3) can now consider the various SLAs, services, and price options presented by Atmos-powered offerings.

…(read the full post)

Commodity Infrastructure for Software Defined Storage – Coraid’s Scale Out Block

(Excerpt from original post on the Taneja Group News Blog)

As storage professionals, we can get overwhelmed trying to keep up with every new “alternative” architecture professing to replace our tried-and-true storage solutions. But there seems to be no way to avoid constantly searching for new solutions to deal with data growth, demand for more and differentiated storage services, and constrained budgets. Wouldn’t it be great if there was a solid storage infrastructure building block we could acquire like a commodity across all our projects, rather than hunting down specific storage for each new application and use case? We could plug them in like Legos when we needed to transform or expand, and use software to configure them for block, file, and even object storage services.

…(read the full post)