An IT industry analyst article published by SearchDataManagement.
Small World Big Data
I’ve long said that all data will eventually become big data, and big data platforms will evolve into our next-generation data processing platform. We have reached a point in big data evolution where it is now mainstream, and if your organization is not neck-deep in figuring out how to implement big data technologies, you might be running out of time.
Indeed, the big data world continues to change rapidly, as I observed recently at the Strata Data Conference in New York. While there, I met with over a dozen key vendors in sessions and on the show floor.
Overall, the folks attending conferences like this one are less and less the slightly goofy, idealistic, research-focused open source geeks of years past, and more and more real-world big data and machine learning practitioners looking to solve real business problems in enterprise production environments. Given that basic vibe, here are my top five takeaways from Strata on the trends driving big data evolution.
1. Structured data
Big data isn’t just about unstructured or semi-structured data anymore. Many of the prominent vendors, led by the key platform providers like Hortonworks, MapR and Cloudera, are now talking about big data implementations as full enterprise data warehouses (EDWs). The passive, often swampy data lake idea seems a bit passé, while there is a lot of energy aimed at providing practical, real-time business intelligence to a wider corporate swath of BI consumers.
I noted that a large number of the competing big data acceleration vendors are applying on-demand analytics against tremendous volumes of structured data, both historical and streaming IoT-style.
Clearly, there is a war going on for the corporate BI and EDW investment. Given what I’ve seen, my bet is on big data platforms to inevitably outpace and outperform monolithic, proprietary legacy EDWs.
2. Converged system of action
This leads into the observation that big data evolution includes implementations that host more and more of a company’s entire data footprint — structured and unstructured data together.
We’ve previously noted that many advanced analytical approaches can add tremendous value when they combine many formerly disparate corporate data sets of all different types…(read the complete as-published article there)
…(Also posted on Cobalt Iron’s blog)
Are you actually protecting all your important data today? Not “almost,” “just about,” or “we have plans,” but all of it?
And I don’t mean just your most mission-critical data sets, but any and all of your data – critical/operational, analytical, and even archival – that could be important to anyone (or any process) in your organization. The sad truth is that very few enterprise IT shops are able to claim they provide adequate protection for all their important data, much less provide rock-solid, business-enhancing protection services even on just their mission-critical data.
Why? First, IT seems to grow more complex every day. Our high-tech IT architecture keeps evolving – stack convergence, hybrid cloud operations, multi-cloud brokering, distributed and mobile users, edge computing, and more.
Second, data growth is not quite totally out of control, but only because we can keep no more than we can actually store. With the Internet of Things streaming more data every day, machine learning algorithms feeding on longer tails of detailed history, and demanding users expecting ever more active archives, both the available data and the demand for it are increasing non-linearly quarter by quarter.
And third, businesses change. Mergers and acquisitions add new layers of data center complexity. Corporations upsize, downsize, globalize, reorganize and even evolve new ways to conduct business.
It’s no wonder that we’ve outgrown the older generation of increasingly fragile data protection solutions. You might pretend that keeping up is just a matter of buying more backup licenses, hiring more IT people, and finally getting that extra budget increase this year (unlike every past year). But the truth is that what we are doing today isn’t working, and it isn’t going to start working on its own.
It used to be simple: have an admin back up the main file system once a week to some deep storage repository and be done with it. Then we added virtualization, deduplicating target storage, incremental backups, and remote cloud repositories. Swizzle in a growing morass of compliance and regulatory requirements, and yesterday’s solutions become overwhelmingly difficult to maintain, much less able to protect all those new applications that harness big data, leverage cloud processing, and deliver end-user experiences to mobile devices. (Note – change is hard! I can still call on old muscle memory to type “tar cvf - /home | gzip > /dev/tape” without thinking.)
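To make the contrast concrete, here is a minimal sketch of that old-school weekly full backup, modernized just enough to actually run. The paths (./data as the tree to protect, ./backup.tar.gz as a stand-in for a tape device or deep storage target) are hypothetical, and a real job would also ship the archive off-site and track retention:

```shell
#!/bin/sh
# Illustrative sketch of a classic weekly full backup.
# SRC and DEST are assumptions for this example, not real production paths.
SRC=./data            # directory tree to protect
DEST=./backup.tar.gz  # stand-in for /dev/tape or a remote repository

# Stage some sample data so the sketch is self-contained.
mkdir -p "$SRC"
echo "payroll records" > "$SRC/payroll.txt"

# Create a compressed full archive of the source tree.
tar -czf "$DEST" "$SRC"

# List the archive contents to verify it is readable --
# a backup you never test restoring from doesn't really count.
tar -tzf "$DEST"
```

Everything that made this approach obsolete (incrementals, deduplication, cloud targets, compliance-driven retention) is precisely what the cron-and-tar model above cannot express.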
In fact, we’ve outgrown many generations of data protection solutions and yet sometimes expect many of them to work reliably forever. How many different types of protection software do you have deployed? How many different skill sets does that require? Can you upgrade and patch all those older solutions in a timely manner? And I’d still bet you have major gaps in protection coverage, have blown out backup windows, and are unsure that a timely restore for some systems is actually possible.
Yet a New Hope
But there is always hope. While many of the changes listed above make assuring protection more complex, there are new approaches that can also help make things simple again. We know how we got here, but how do we get out of this morass?
Fundamentally we have to recognize and accept that all applications (and thus most if not all data) are becoming critical to the business. People depend on data and applications today for almost every aspect of their jobs. If there are still manual process fallbacks, no one quite remembers. Those kinds of business continuity plans are simply no longer realistic. We have built the basis of our brave new high-tech world on increasing task automation and the enhancement of human capabilities through intelligent applications. All of that data deserves protection.
The rise of pervasive, intelligent, and automated applications is not just a growing data protection problem for IT. Luckily, that trend also provides big clues for how to solve today’s data protection challenges.
There is great hope now available that combines automation, machine learning, and managed services. None of these alone really offers a better mousetrap, as it were, but applied smartly together they can provide very practical (and actually affordable) assurance that all those naughty mice get caught.
I can say this with some conviction because I was recently introduced to Cobalt Iron and their latest Adaptive Data Protection release (ADP 4.1 as of this writing). They have successfully encapsulated a massive amount of field experience into their data protection service, eliminating a tremendous amount of complexity and low-level skill requirement (what I might call technical trivia) on behalf of their clients. It’s all about advanced automation, applied intelligence, and leveraging computers to help people better manage all their data.