Converging Facilities and Systems Management – Controlling Energy in the Data Center

(Excerpt from original post on the Taneja Group News Blog)

Looking back over 2012, it has really been the year of convergence. IT resource domain silos have been breaking down in favor of more coherent, cohesive, and unified architectures that look more like construction building blocks and less like one-off electronics research lab setups. However, the vast majority of today’s data centers still maintain a long-established hard line between data center facilities and IT operations.

Power and cooling have always been integral to the data center, but they have been managed separately from the compute and storage infrastructures they support. However, emerging technologies, driven by green initiatives and by cost-efficiency projects (perhaps most importantly for service providers), are going to become increasingly important to the enterprise data center. Many of these technologies will enable a level of convergence between facilities and systems management.

As an EE who has always fallen on the compute side of the fence, I’m entranced by the idea of integrating end-to-end power and cooling management with advanced analytical and predictive systems management solutions, especially proactive performance and capacity management. An interesting power systems company, TrendPoint, focuses on what facilities folks call branch circuit power and cooling monitoring, and recently we had a chance to drill down into how that provides value to the larger data center “enterprise”. They produce clever metering solutions that track power utilization down to each circuit and rack, and meter heating and cooling at a similarly fine grain (all exposed as SNMP data sources!).
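Because those meters surface readings over SNMP, pulling per-rack power draw into an IT monitoring pipeline can be as simple as polling an OID on a schedule. Here is a minimal sketch using Python’s pysnmp; the host, community string, and OID are placeholders I made up for illustration, not actual TrendPoint identifiers, so the real values would come from the meter’s MIB.

```python
# Illustrative only: the host, community string, and OID below are placeholders,
# not actual TrendPoint values -- consult the meter's MIB for the real identifiers.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

def read_rack_power_watts(host, oid="1.3.6.1.4.1.99999.1.1.0"):
    """Poll a single branch-circuit power reading (watts) from an SNMP meter."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),       # SNMP v2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(oid)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    return float(var_binds[0][1])

if __name__ == "__main__":
    # Hypothetical meter hostname; prints the current draw for one rack circuit.
    print(read_rack_power_watts("meter-rack-a7.example.com"))
```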

With detailed energy usage and demand information, you can recoup energy costs by billing them back to each data center client. Similarly, correlatable heat mapping can translate into spot cooling requirements. Energy costs can now be accounted for as part of the service level negotiation. This is extremely valuable to service providers and colo data center operators, but it can also help drive costs down in enterprise data centers of any size.
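To make the chargeback idea concrete, the sketch below rolls per-rack metered energy (kWh) up into a simple per-tenant bill. The tenant-to-rack mapping, readings, and flat rate are invented inputs for illustration, not anything TrendPoint prescribes; a real bill would apply whatever tariff was negotiated in the service level agreement.

```python
# Toy chargeback calculation: the tenant/rack mapping, readings, and rate are
# assumptions for this sketch -- real billing would pull metered kWh from the
# branch-circuit monitors and apply the negotiated tariff.
from collections import defaultdict

RATE_PER_KWH = 0.12  # assumed flat rate in dollars

rack_to_tenant = {"rack-a7": "tenant-1", "rack-a8": "tenant-1", "rack-b3": "tenant-2"}
monthly_kwh = {"rack-a7": 1280.5, "rack-a8": 990.0, "rack-b3": 2110.7}

def bill_by_tenant(rack_to_tenant, monthly_kwh, rate=RATE_PER_KWH):
    """Aggregate metered energy per tenant and convert it to a dollar charge."""
    totals = defaultdict(float)
    for rack, kwh in monthly_kwh.items():
        totals[rack_to_tenant[rack]] += kwh
    return {tenant: round(kwh * rate, 2) for tenant, kwh in totals.items()}

print(bill_by_tenant(rack_to_tenant, monthly_kwh))
# e.g. {'tenant-1': 272.46, 'tenant-2': 253.28}
```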

If you can then use this energy and heat information dynamically on the IT management side, the value to be gained beyond simple economic accountability is tremendous. You can imagine migrating virtual machines around a data center (or even to other regions) to dynamically balance power consumption and to take advantage of better cooling patterns in order to maximize VM densities. Most interestingly, you increase service reliability on the compute/storage side by avoiding tripped breakers that shut down whole racks of equipment, and by keeping everything running within heat tolerances to avoid heat-induced errors and failures. (Gotta keep that flash cool!)
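One way to picture that power-aware rebalancing is a simple policy loop: flag racks whose measured draw approaches the breaker rating, then nominate VM moves toward racks with headroom. The rack data, per-VM wattage estimate, and 80% headroom threshold below are purely illustrative assumptions, not any vendor’s actual API; in practice the live branch-circuit readings would feed the virtualization platform’s placement engine.

```python
# Illustrative power-aware rebalancing policy. Rack capacities, current draws,
# per-VM wattage, and the 80% headroom threshold are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    breaker_watts: float   # rated branch-circuit capacity
    draw_watts: float      # current measured draw

HEADROOM = 0.80  # keep sustained draw under 80% of the breaker rating

def plan_migrations(racks, vm_watts=300.0):
    """Return (from_rack, to_rack) moves that relieve racks over the threshold."""
    moves = []
    hot = [r for r in racks if r.draw_watts > HEADROOM * r.breaker_watts]
    cool = sorted(racks, key=lambda r: r.draw_watts / r.breaker_watts)
    for rack in hot:
        while rack.draw_watts > HEADROOM * rack.breaker_watts:
            target = next(
                (t for t in cool
                 if t is not rack
                 and t.draw_watts + vm_watts <= HEADROOM * t.breaker_watts),
                None,
            )
            if target is None:
                break  # no rack has headroom; leave the remaining load in place
            moves.append((rack.name, target.name))
            rack.draw_watts -= vm_watts
            target.draw_watts += vm_watts
    return moves

racks = [Rack("rack-a7", 5000, 4400), Rack("rack-b3", 5000, 2100)]
print(plan_migrations(racks))  # e.g. [('rack-a7', 'rack-b3'), ('rack-a7', 'rack-b3')]
```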

I see 2013 as the year in which more data center sensors, more metering, and more raw information will be collected and leveraged than ever before (an emerging big data problem). And I imagine we’ll see new active controls on power and cooling – a real software-defined data center facility (it’s not that much of a stretch – see Nest for a consumer-market active thermostat). Software-defined managed resources of all kinds will require new, advanced operational intelligence (a machine learning challenge). So keep an eye on enabling solutions like those from TrendPoint; they are poised to help change the way data centers work.

…(read the full post)