(Excerpt from original post on the Taneja Group News Blog)
One thing is certain in technology – the wheel keeps turning from differentiating advantage to fungible commodity, and then eventually back again. Now we think the time has come for data center connectivity to rise once more and become a competitive asset again. Yep, I’m talking about cables, switches, and the actual physical connections that tie all our precious IT infrastructure together. We think Fiber Mountain is about to change the game here, and provide some real disruption to the way data centers are wired.
First, consider the changes in data center traffic: the shift from North-South to more East-West, driven by increased virtualization and hyperconvergence densities, scale-out clusters for SANs, big data, and private clouds, and the increasingly dynamic (even fluid) placement of “software-defined” resources. According to Fiber Mountain, 70% of traffic these days actually stays within a rack. Maintaining a proper cabling and switch infrastructure requires a lot of time, attention to detail, and adherence to “best practices”, not to mention endless spreadsheets and arcane cabling schemes. It’s getting harder to maintain, and on top of that comes the rising cost of huge core switches needed to keep up that precious hub design, in which most packets must still go up to the core and come back down.
Given this reality, Fiber Mountain has just announced a new paradigm. First, they let you pull ultra-dense, intelligent fiber cables everywhere (think 24+ fiber “ribbon” cables with MPO connectors) across the row and the data center. Everything is on fiber, which supports any speed of any protocol, current or future. Second, they provide matching intelligent top-of-rack, end-of-row, and core switches based on fully optical cross-connections. Light speed ahead. While YMMV, think about sub-5ns latency anywhere to anywhere in the data center (and maybe beyond?), with a much smaller (and cheaper) core switch requirement.
The first mind-bender here is that every cable termination knows, and can report on, what it is physically connected to (by MAC address). That’s right, the fiber (and/or copper if you insist) itself knows what it is connected to. You no longer have to track which cable is connected to which port on which device – the system does it for you. Just rack whatever you have and plug it into whatever fiber happens to be dangling nearby (no, we don’t really recommend leaving fiber just dangling, but you get the point). No more time or effort spent labeling, tracking, inventorying, or troubleshooting which cable goes to which port on which device.
The rack and row switches can then smartly switch any traffic directly from point A to point B without going through a formerly slow and bottlenecked core. And if traffic does need to flow through the core now, it’s all glass. And if required, fiber capacity can be dedicated to things like “remote” DASD or even physical-level multi-tenancy separation assurance. It’s all managed by the Fiber Mountain “meta” orchestration system, which knows about topology, integrates alarms, and controls connections (and has a REST API for you hackers out there). This is SDN in spades…
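To make the auto-discovery idea concrete, here is a minimal sketch of what querying that physical topology might look like. Fiber Mountain’s actual API schema isn’t described in this post, so the endpoint shape, field names (`connections`, `port`, `mac`, `device`), and sample payload below are all hypothetical, purely for illustration of the port-to-MAC inventory concept:

```python
import json

# Hypothetical payload, as an orchestration REST API might return it.
# Field names and port-naming scheme are assumptions, not Fiber Mountain's
# actual schema.
sample_response = json.dumps({
    "connections": [
        {"port": "row3/rack12/panel1/fiber03",
         "mac": "00:1b:44:11:3a:b7", "device": "esx-host-07"},
        {"port": "row3/rack12/panel1/fiber04",
         "mac": "00:1b:44:11:3a:b8", "device": "esx-host-08"},
    ]
})

def port_for_mac(payload, mac):
    """Return the physical termination a given MAC is plugged into,
    replacing the spreadsheet lookup with a system-of-record query."""
    for conn in json.loads(payload)["connections"]:
        if conn["mac"] == mac:
            return conn["port"]
    return None

print(port_for_mac(sample_response, "00:1b:44:11:3a:b7"))
# → row3/rack12/panel1/fiber03
```

The point is the inversion: instead of humans maintaining a cable inventory, the inventory is generated by the terminations themselves and read back on demand.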
There is a great roadmap for trying this out, starting with one row and migrating row by row, running in parallel with existing connectivity as desired. Fiber Mountain estimates the total cost of this scheme at one-third the CapEx of what folks pay today for the big core-hub-centered design, with at least twice the capacity and clear performance advantages. And the OpEx is expected to be much lower – less space, power, cooling, MAC (moves/adds/changes) effort, connectivity mistakes/outages, etc.
I’ve been predicting the eventual convergence of DCIM, IPM, and APM. I had thought the key overlap was in tying power consumption to performance to application (showback/chargeback). But now I see that it’s going to be much more than that, with physical connectivity as “software-defined” as servers, storage, and logical-layer network functions.
What can you do with 2/3+ of your data center connectivity budget back in your pocket? Let us know if you’ve had a chance to look at Fiber Mountain, and what you think about this new future data center design.
…(read the full post)