The Internet of Things and Beyond: 5 Things We’ll Be Tracking for a Better Tomorrow

An IT industry analyst article published by Virtualization Review.

The Internet of Things offers a huge opportunity to build intelligent applications that can actively optimize and direct just about any system that is dynamically programmable. Here are the five types of things that are soon likely to be “sensorized” in your IT shop.


With the incredible rise in the number of mobile devices, we are also seeing the advent of the Internet of Things. Every device, mobile or otherwise, that has some ability to generate an interesting stream of data is getting “sensorized” and connected. The resulting streams of big data provide a wild new frontier for intelligence mining. We at Taneja Group see this trend opening up huge opportunities to build intelligent applications that can actively optimize and direct just about any system that is dynamically programmable.
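
To make the “sensorized and connected” idea concrete, here is a minimal sketch of a device-side telemetry agent in Python. Everything in it (the device ID, the field names, and the print-to-stdout stand-in for shipping records to a collector) is an illustrative assumption, not any particular product's format:

```python
import json
import time

def read_sensors():
    # In a real device these values come from hardware and OS counters;
    # the field names here are illustrative assumptions, not a standard.
    return {
        "device_id": "rack42-node07",
        "timestamp": time.time(),
        "location": {"site": "dc1", "rack": 42, "slot": 7},
        "connectivity": {"link_up": True, "latency_ms": 0.4},
        "performance": {"cpu_pct": 37.5, "iops": 1200},
        "capacity": {"disk_free_gb": 512},
        "health": {"temp_c": 41.0, "error_count": 0},
        "config_version": "2024.04-a",
    }

def stream_telemetry(samples=3, interval_s=1):
    """Emit newline-delimited JSON; a real agent would ship each record
    to a collector over HTTP or MQTT instead of printing it."""
    for _ in range(samples):
        print(json.dumps(read_sensors()))
        time.sleep(interval_s)

if __name__ == "__main__":
    stream_telemetry()
```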

Creating intelligent feedback and control loops has always been an engineering goal, whether it’s controlling home automation, a fleet of airplane engines, or your next IT “software-defined” data center. It all starts with getting all the devices that have a story to tell to stream back their location, connectivity, performance, capacity, health, usage, errors, and configurations dynamically. Now, some food for thought: Here are the five types of things that are soon likely to be “sensorized” in your IT shop.

  • TAKE 1 – User computing devices
    Laptops, tablets, and smartphones already generate large streams of interesting data, but printers, monitors, desktops, spare batteries, and even peripherals (e.g., do you know where all your USB sticks are right now?) could get detailed sensors. Maybe even the break room coffee machine.
  • TAKE 2 – Each infrastructure box or “appliance”
    We get reams of data from the logical “application” side of systems today, but knowing where the physical boxes are racked or stacked, how each experiences temperature, jostling, and vibration, and how each uses power will all get correlated with how each is delivering on its performance, capacity, and resiliency service requirements.
  • TAKE 3 – Cards, modules, and components

…(read the complete as-published article there)

Database performance tuning: Five ways for IT to save the day

An IT industry analyst article published by SearchDataCenter.

IT teams can play heroes when database performance issues disrupt applications. Try these five tips for performance tuning before there’s a problem.


When database performance takes a turn for the worse, IT can play the hero. Production database performance can slow dramatically as both data and the business grow. Whenever a key database slows down, widespread business damage can result.

Technically, performance can be tackled at many different levels — applications can be optimized, databases tuned or new architectures built. However, in production, the problem often falls on IT operations to implement something fast and in a minimally disruptive manner.

There are some new ways for IT pros to tackle slowdown problems. However, one question must be addressed first: Why is it up to IT?

Database administrators and developers have many ways to achieve database performance tuning. They can adjust configuration files to better align database requirements with underlying infrastructure, add indexing, implement stored procedures or even modify the schema to (gasp!) denormalize some tables.
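
As a simple illustration of the indexing lever, here is a sketch using SQLite from Python; the orders table and column names are made up for the example, but the same idea applies to any relational engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # toy database; names below are made up
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
cur.executemany(
    "INSERT INTO orders (customer_id) VALUES (?)",
    [(i % 500,) for i in range(10_000)],
)

# Without an index, filtering on customer_id forces a full table scan.
# Adding one lets the engine seek directly to the matching rows instead.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# EXPLAIN QUERY PLAN confirms whether the index is actually being used.
for row in cur.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM orders WHERE customer_id = ?", (42,)
):
    print(row)  # expect a SEARCH using idx_orders_customer, not a SCAN
```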

Developers have significant control over how a database is used; they determine what data is processed and how it is queried. Great developers wield fierce SQL ninja skills to tune client queries, implement caching and build application-side throttling. Or, they rewrite the app to use a promising new database platform, such as a NoSQL variant.
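
A minimal sketch of the client-side caching and throttling idea follows; the query stub, cache size, and rate limit are assumptions for illustration, not any specific product's API:

```python
import time
from functools import lru_cache

MIN_INTERVAL_S = 0.1  # assumed limit: at most ~10 database calls per second
_last_call = 0.0

def query_database(sql, *params):
    # Stub standing in for a real database client call.
    return "Acme Corp"

def throttled(fn):
    """Crude application-side throttle that spaces out database calls."""
    def wrapper(*args):
        global _last_call
        wait = MIN_INTERVAL_S - (time.monotonic() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.monotonic()
        return fn(*args)
    return wrapper

@lru_cache(maxsize=1024)  # cache sits outside the throttle on purpose
@throttled
def get_customer_name(customer_id):
    return query_database("SELECT name FROM customers WHERE id = ?", customer_id)
```

Note the decorator ordering: because the cache wraps the throttle, repeated lookups are answered client-side and never touch the database at all.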

All kinds of databases, however, can eventually suffer performance degradation at operational scales. Worse, many developers simply expect IT to add more infrastructure if things get slow in production, which clearly isn’t the best option.

Five ways [IT] can address database performance issues:…

…(read the complete as-published article there)

PernixData FVP Now Cache Mashing RAM and Flash

(Excerpt from original post on the Taneja Group News Blog)

Performance acceleration solutions tend to either replace key infrastructure or augment what you have. PernixData FVP for VMware clusters is firmly in the second camp; today’s new release makes even better use of total cluster resources to provide I/O performance acceleration to “any VM, on any host, with any shared storage”.
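
Conceptually, pooling RAM and flash into a single acceleration layer behaves like a two-level cache. The toy sketch below illustrates that general pattern only; it is not PernixData's actual design, and the slot counts are arbitrary:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy read cache: hot items live in RAM, overflow is demoted to flash."""

    def __init__(self, ram_slots=4, flash_slots=16):
        self.ram = OrderedDict()    # fastest tier, smallest
        self.flash = OrderedDict()  # larger, still far faster than the array
        self.ram_slots = ram_slots
        self.flash_slots = flash_slots

    def get(self, key, fetch_from_storage):
        if key in self.ram:                 # RAM hit: cheapest path
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.flash:               # flash hit: promote back to RAM
            value = self.flash.pop(key)
        else:                               # miss: go to shared storage
            value = fetch_from_storage(key)
        self._put_ram(key, value)
        return value

    def _put_ram(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:      # demote coldest RAM entry
            old_key, old_val = self.ram.popitem(last=False)
            self.flash[old_key] = old_val
            if len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)  # evict coldest flash entry

cache = TwoTierCache()
print(cache.get("vm1-block9", lambda k: f"<data for {k}>"))
```

Reads try RAM first, then flash, and only then the shared storage array, mirroring the goal of spending every cluster resource before paying for an array round trip.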

…(read the full post)

New Team At Violin Memory Playing Flashier Microsoft Music

(Excerpt from original post on the Taneja Group News Blog)

Recently we caught up with Violin Memory, and they are full of energetic plans to capitalize on their high-performance flash arrays, elevating their game from a focus on bringing fast technology to market to one of addressing big market problems head-on. Today they are announcing a very interesting new solution that creates a whole new segment of storage: a Microsoft Windows-specific file-serving flash array.

…(read the full post)

Converged Infrastructure in the Branch: Riverbed Granite Becomes SteelFusion

(Excerpt from original post on the Taneja Group News Blog)

With today’s rebranding of Riverbed Granite as SteelFusion, Riverbed is prodding all branch IT owners (and vested users) to step up and consider what branch IT should ideally look like. Instead of a disparate package of network optimization, remote servers and storage arrays, difficult (if not forsworn) data protection approaches, and independently maintained branch applications and IT support, simple converged SteelFusion edge appliances sit in the branch to provide local computing performance while working on “projected” data that is actually consolidated and protected back in the data center.
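
The “projected” data pattern is essentially an edge cache in front of an authoritative data-center copy. Here is a deliberately simplified sketch of that idea; the class names and the synchronous write-through are assumptions for illustration, not Riverbed's implementation:

```python
class EdgeAppliance:
    """Toy model of branch IT on projected data: reads are served from a
    local cache when possible, while the data center stays authoritative."""

    def __init__(self, datacenter_store):
        self.datacenter = datacenter_store  # authoritative, centrally protected
        self.local_cache = {}               # blocks projected to the branch

    def read(self, block_id):
        if block_id not in self.local_cache:   # WAN round trip only on a miss
            self.local_cache[block_id] = self.datacenter[block_id]
        return self.local_cache[block_id]

    def write(self, block_id, data):
        self.local_cache[block_id] = data
        # Simplification: write through synchronously. A real appliance would
        # acknowledge locally and stream changes back over an optimized WAN.
        self.datacenter[block_id] = data


datacenter = {"blk0": b"branch payroll data"}
branch = EdgeAppliance(datacenter)
branch.write("blk1", b"new report")
assert datacenter["blk1"] == b"new report"  # protected centrally, not in branch
```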

…(read the full post)

A Billion Here, A Billion There – Big Data Is Big Money

(Excerpt from original post on the Taneja Group News Blog)

When we talk about big data today, we aren’t talking just about the data and its three V’s (or up to 15, depending on whom you consult), but more and more about the promise of big transformation to the data center. In other words, it’s about big money.

First, consider recent news about some key Hadoop distro vendors. Many of them are now billion-dollar players, much of that on speculation and expectation of future data center occupation. When Pivotal spun off from EMC, it got to start with a gaggle of successful, commercially deployed products, giving it a tremendous day-one revenue stream; GE’s 10% outside stake at $105M made it a billion-dollar startup. Coming back from the Cloudera Analyst Event last month, we found that Cloudera was doing really well with $160M in new funding, and soon thereafter Intel weighed in to top it up to over a billion in funding (at a $4.1B valuation). Not to be left out in the cold, Hortonworks announced a $100M round that valued it at $1B (ostensibly it claims it could take in 20x more, but it is raising funds as needed).

Second, consider the infrastructure on which not just billions but trillions (gadzillions?) of pieces of data still have to land, even if it is made up of commodity disk/server clusters. Of course, most companies are going to want to build out big data solutions, or they risk getting left behind competitively. But many of these projects will eventually turn into massive investments that only grow as the data grows (which is predicted to be exponential!) and occupy more and more of the data center, rather than staying constrained as little R&D projects or simple data warehouse offloading platforms.

Clearly, big data is now a playing field for competition amongst billionaires. I’m sure the many startups in that space are only encouraged by the ecosystem’s wealth and the opportunity for acquisition, but as the big money grows, keep an eye on how standards and open source foundations increasingly become political battlefields, with results not always in the best interest of the end user.

While there is an open source model underpinning the whole ecosystem, with this much money on the table it will be interesting to see how fair competition plays out. From my perspective, it looks like big data isn’t going to be very free, or there wouldn’t be billions of dollars in bets being made. Until now, most of the ecosystem vendors have been making arguments about providing better support than the other guy. In that academic view, there is not much call for outside analysis or third-party validation.

But every big, big data vendor we talk to now has some proprietary angle on how it does things better than the next guy (with implied vendor lock-in lurking), based on how enterprises can effectively manage big data or extract maximal value from it. That sounds like the current IT vendor ecosystem we know and love, and it requires some analysis and validation to separate the wheat from the chaff, the real from the hype.

As an IT organization faced with big data challenges, how do you feel about suddenly dealing with billion-dollar behemoths in a space founded on open source principles? In the end, it doesn’t really change our recommended approach: you need enterprise capabilities for big data, and you were always likely to get the best of those from vendors with highly competitive proprietary technology. We’ve now started working with big data vendors as real IT solution vendors. In our book, Pivotal, Cloudera, Hortonworks, and the like have simply graduated into the full-fledged IT vendor category, which can only help the IT organization faced with enterprise-level big data challenges.

…(read the full post)