Actual Hybrid of Enterprise Storage and Public Cloud? Oracle Creates a Cloud Converged System

(Excerpt from original post on the Taneja Group News Blog)

What’s a Cloud Converged system? It is really what we naive people thought hybrid storage was all about all along. Yet until now no high performance enterprise class storage ever actually delivered it. But now, Oracle’s latest ZFS Storage Appliance, the ZS5, comes natively integrated with Oracle Cloud storage. What does that mean? On-premise ZS5 storage object pools now extend organically into Oracle Cloud storage (which is also made up of ZS storage) – no gateway or third-party software required.
 
Oracle has essentially brought enterprise hybrid cloud storage to market, no integration required. I’m not really surprised that Oracle has been able to roll this out, but I am a little surprised that they are leading the market in this area.
 
Why hasn’t Dell EMC come up with a straightforward hybrid cloud leveraging their enterprise storage and cloud solutions? Despite having all the parts, they failed to actually produce the long desired converged solution – maybe due to internal competition between infrastructure and cloud divisions? Well, guess what. Customers want to buy hybrid storage, not bundles or bunches of parts and disparate services that could be integrated (not to mention wondering who supports the resulting stack of stuff).
 
Some companies are so married to their legacy solutions – NetApp, for example – that they don’t even offer their own cloud services. Maybe they were hoping this cloud thing would just blow over? Maybe all those public cloud providers would stick with web 2.0 apps and wouldn’t compete for enterprise GB dollars?
 
(Microsoft does have StorSimple, which may have pioneered on-prem storage integrated with cloud tiering (to Azure). However, StorSimple is not a high performance, enterprise class solution capable of handling PBs+ with massive memory-accelerated performance. And it appears that Microsoft is no longer driving direct sales of StorSimple, apparently positioning it now only as one of many on-ramps to herd SMEs fully into Azure.)
 
We’ve reported on the Oracle ZFS Storage Appliance itself before. It has been highly augmented over the years. The Oracle ZFS Storage Appliance is a great filer on its own, competing favorably on price and performance with all the major NAS vendors. And it provides extra value with all the Oracle Database co-engineering poured into it. Now that it’s inherently cloud enabled, we think for some folks it’s likely the last NAS they will ever need to invest in (if you want more performance, you will likely move to in-memory solutions, and if you want more capacity – well, that’s what the cloud is for!).
 
Oracle’s Public Cloud is made up of – actually built out of – Oracle ZFS Storage Appliances. That means the same storage is running on the customer’s premises as in the public cloud they are connected with. Not only does this eliminate a whole raft of potential issues, but solving any problems that might arise is going to be much simpler (and problems are less likely to happen in the first place, given the scale at which Oracle deploys its own hardware).
 
Compare this to NetApp’s offering to run a virtual image of NetApp storage in a public cloud, which only layers on complexity and potential failure points. We don’t see many taking the risk of running or migrating production data into that kind of storage. Their NPS co-located private cloud storage is perhaps a better offering, but the customer still owns and operates all the storage – there is really no public cloud storage benefit like elasticity or utility pricing.
 
Other public clouds and on-prem storage can certainly be linked with products like Attunity CloudBeam, or additional cloud gateways or replication solutions.  But these complications are exactly what Oracle’s new offering does away with.
 
There is certainly a core vendor alignment of on-premises Oracle storage with an Oracle Cloud subscription, and no room for cross-cloud brokering at this point. But a ZFS Storage Appliance presents no more technical lock-in than any other NAS (other than the claim that it is more performant at lower cost, especially for key workloads that run Oracle Database), nor does Oracle Cloud restrict the client to just Oracle on-premise storage.
 
And if you are buying into the Oracle ZFS family, you will probably find that the co-engineering benefits with Oracle Database (and Oracle Cloud) make the whole set that much more attractive, both technically and financially. I haven’t done recent pricing in this area, but I suspect that while there may be cheaper cloud storage prices per vanilla GB out there, the full TCO for an enterprise GB – including hybrid features and agility – could bring Oracle Cloud Converged Storage to the top of the list.

…(read the full post)

Oracle ZS5 Throws Down a Cloud Ready Gauntlet

(Excerpt from original post on the Taneja Group News Blog)

Is anyone in storage really paying close enough attention to Oracle? I think too many mistakenly dismiss Oracle’s infrastructure solutions as expensive, custom, proprietary hardware good only for Oracle Database. But, surprise, Oracle has been successfully evolving the well-respected ZFS platform into a solid cloud-scale filer, today releasing the fifth version of the ZFS storage array – the Oracle ZS5. And perhaps most surprising, the ZS series powers Oracle’s own fast-growing cloud storage services (at huge scale – over 600 PB and growing).

…(read the full post)

Agile Big Data Clusters: DriveScale Enables Bare Metal Cloud

(Excerpt from original post on the Taneja Group News Blog)

We’ve been writing recently about the hot, potentially inevitable trend toward a dense IT infrastructure in which components like CPU cores and disks are not only commoditized, but deployed in massive stacks or pools (with fast switching matrices between them). A layered provisioning solution can then dynamically compose any desired “physical” server or cluster out of those components. Conceptually this becomes the foundation for a bare-metal cloud. DriveScale today announces their agile architecture built on this approach, aimed first at solving big data multi-cluster operational challenges.
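
To make the composability idea concrete, here is a minimal, purely illustrative Python sketch of a provisioning layer carving “physical” nodes out of shared pools of cores and disks. This is not DriveScale’s actual API; the pool sizes, the ResourcePool/NodeSpec names and the cluster shape are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """A shared pool of disaggregated components (e.g., CPU cores or disks)."""
    name: str
    available: int

    def allocate(self, count: int) -> int:
        if count > self.available:
            raise RuntimeError(f"{self.name} pool exhausted: asked for {count}, have {self.available}")
        self.available -= count
        return count

@dataclass
class NodeSpec:
    """A dynamically composed 'physical' server: cores and disks wired together on demand."""
    cores: int
    disks: int

def compose_node(core_pool: ResourcePool, disk_pool: ResourcePool, cores: int, disks: int) -> NodeSpec:
    """Compose a bare-metal node by drawing components from the shared pools."""
    return NodeSpec(cores=core_pool.allocate(cores), disks=disk_pool.allocate(disks))

if __name__ == "__main__":
    core_pool = ResourcePool("cores", available=512)
    disk_pool = ResourcePool("disks", available=240)
    # Compose a small big-data-style cluster of three identical nodes on the fly.
    cluster = [compose_node(core_pool, disk_pool, cores=16, disks=12) for _ in range(3)]
    print(cluster, "remaining cores:", core_pool.available, "remaining disks:", disk_pool.available)
```

The point of the sketch is simply that once components sit in pools behind a fast fabric, “building a cluster” becomes a bookkeeping operation rather than a procurement project.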

…(read the full post)

Integrate cloud tiering with on-premises storage

An IT industry analyst article published by SearchCloudStorage.


Cloud and on-premises storage are increasingly becoming integrated. This means cloud tiering is just another option available to storage administrators. Organizations aren’t likely to move 100% of their data into cloud services, but most will want to take advantage of cloud storage benefits for at least some data. The best approaches to using cloud storage in a hybrid fashion create a seamless integration between on-premises storage resources and the cloud. The cloud tiering integration can be accomplished with purpose-built software, cloud-enabled applications or the capabilities built into storage systems or cloud gateway products.
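
As a rough illustration of the purpose-built-software approach, here is a minimal sketch of an age-based tiering policy. It assumes a Unix-like filesystem and a 90-day cold threshold, and upload_to_object_store is a hypothetical stand-in for whatever cloud SDK or gateway call a real product would use.

```python
import time
from pathlib import Path

COLD_AFTER_DAYS = 90  # hypothetical policy threshold

def upload_to_object_store(path: Path, bucket: str) -> None:
    """Placeholder for a real cloud SDK or gateway call (an S3- or Swift-style PUT)."""
    print(f"would upload {path} to {bucket}")

def tier_cold_files(root: str, bucket: str) -> None:
    """Find files not accessed within the policy window and push them to the cloud tier."""
    root_path = Path(root)
    if not root_path.is_dir():
        return
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for path in root_path.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            upload_to_object_store(path, bucket)
            # A real tiering product would replace the file with a stub or link
            # and recall it transparently on access; this sketch only reports.

if __name__ == "__main__":
    tier_cold_files("/data/projects", "cold-archive-bucket")  # example paths only
```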

This may be the year that public cloud adoption finally moves beyond development projects and Web 2.0 companies and enters squarely into the mainstream of IT. Cloud service providers can offer tremendous advantages in terms of elasticity, agility, scalable capacity and utility pricing. Of course, there remain some unavoidable concerns about security, competitiveness, long-term costs and performance. Also, not all applications or workloads are cloud-ready, and most organizations are not able to operate fully in a public cloud. These concerns lead to what we are seeing in practice: a hybrid cloud approach that attempts to combine the best of both worlds.

Taneja Group research supports that view, finding that only about 10% of enterprise IT organizations are even considering moving wholesale into public clouds. The vast majority of IT shops continue to envision future architectures combining cloud and on-premises infrastructure augmented by hyperconverged products, at least for the next three to five years. Yet, in those same shops, increasing storage consolidation, virtualization and building out cloud services are the top IT initiatives planned for the next 18 months. These initiatives lean toward using available public cloud capabilities where it makes sense — supporting Web apps and mobile users, collaboration and sharing, deep archives, off-site backups, DRaaS and even, in some cases, a primary storage tier.

By many estimates, the amount of data that IT shops have to store, manage, protect and help process will double every year for the foreseeable future. Given very real limits on data centers, staffing and budget, it will become increasingly hard to deal with this data growth completely in-house.

…(read the complete as-published article there)

All Your Clouds Belong to Us! EMC Federates Virtustream

(Excerpt from original post on the Taneja Group News Blog)

Today EMC (the squared version) announced the acquisition of Virtustream, and the positioning of it as a new full EMC federation member alongside EMC II (storage, etc.), VMware, and Pivotal.  Virtustream is all about managing mission-critical production workload-hosting clouds, and has both a software business selling management layer solutions and an IaaS business as a service provider.

…(read the full post)

Signs it may be time to adopt a hybrid cloud strategy

An IT industry analyst article published by SearchCloudStorage.

The cloud is gaining traction, but the public cloud raises security concerns. Learn why a hybrid cloud strategy can offer businesses more benefits.


If you’re like most data storage professionals, you’re likely faced with the prospect of phasing cloud storage into your traditional storage environment. Many companies are reluctant to move into public cloud storage for obvious reasons — loss of control, oversight, security and concerns about how the cloud impacts compliance requirements, to name just a few. But the public cloud also offers compelling economics and elastic computing opportunities that have some businesses wanting to seize the potential benefits.

A sign that it may be time to adopt a hybrid cloud strategy is when business folks start contracting directly with public cloud providers for shadow IT services. Some of the reasons public clouds are attractive to business folks, assuming it’s not simply the friction of having to work with an underfunded internal IT group, include:

  • Economic elasticity. Cloud services are available under a number of on-demand agreements, all of which shift the IT budget from periodic large Capex investments to smoother Opex payments. It’s possible that using large amounts of public cloud services may prove more expensive over time from a TCO perspective, but the ability to continually adjust the volume of services needed – paying essentially only for what you use – makes a lot of sense in the face of unpredictable business environments.
  • Agility and quickness. Massive amounts of resources can be spun up in minutes when needed, as opposed to the days, weeks or months required for IT to procure, stage and deliver new infrastructure. At the same time, these resources can be shifted, almost on-demand, as needs change.
  • Broad functionality. Today’s public clouds offer any range or level of cloud outsourcing desired, including low-level infrastructure, container-like development platforms, fully functional applications and complete subscription business services.

The sensible cloud storage strategy is a hybrid approach in which IT retains control of cloud consumption and integrates it with on-premises resources as appropriate.

But there’s another side to the story. When business essentially goes outside the IT department to contract with public cloud services, problems can arise. That’s when issues of governance and control surface, including lack of compliance oversight, loss of data management control and potential security risks…

…(read the complete as-published article there)

How to become an internal hybrid cloud service provider

An IT industry analyst article published by SearchCloudStorage.

Working on a hybrid cloud project? Mike Matchett explains the steps an organization should take to become an internal hybrid cloud service provider.


One major key to success with a hybrid cloud project is to ensure that IT fundamentally transitions into an internal hybrid cloud service provider. This means understanding that business users are now your clients. You must now proactively create and offer services instead of the traditional reactive stance of working down an endless queue of end-user requests. It becomes critical to track and report on the service quality delivered and the business service utilization. Without those metrics, it’s impossible to optimize the services for price, performance, surge capacity, availability or whatever factor might be important to the overall business.
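
For instance, even a tiny showback report helps make service quality and utilization visible to internal clients. The sketch below is only illustrative; the usage records, service names and internal rates are hypothetical, not drawn from any particular provider or catalog.

```python
from collections import defaultdict

# Hypothetical usage records: (client, service, units consumed this month)
usage_records = [
    ("marketing", "object-storage-GB", 1200),
    ("marketing", "vm-hours", 340),
    ("engineering", "object-storage-GB", 5400),
    ("engineering", "vm-hours", 2100),
]

# Hypothetical internal rates per unit; a real internal provider would pull these from a service catalog.
rates = {"object-storage-GB": 0.02, "vm-hours": 0.05}

def showback(records, rate_table):
    """Aggregate consumption per client and price it at internal rates (no invoice is actually sent)."""
    totals = defaultdict(float)
    for client, service, units in records:
        totals[client] += units * rate_table[service]
    return dict(totals)

for client, cost in showback(usage_records, rates).items():
    print(f"{client}: ${cost:,.2f} of services consumed this month")
```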


Hallmarks of a successful hybrid organization include:

  • A renewed focus on implementing higher levels of automation, spurred by the need to provide clients ways to provision and scale services in an agile manner. This automation usually extends to other parts of IT, like helping to build non-disruptive maintenance processes.
  • An effective monitoring and management scheme that works at cloud scale to help ensure service-level agreements are met.
  • Clients who are aware of what they are consuming, even if they’re not actually seeing a bill for the services.

Perhaps the first step is to evaluate the involved workloads and their data sets to look for good hybrid opportunities. If you find that workloads are currently fine or require specialized support, it might be best to leave them alone for now and focus instead on workloads that are based on common platforms.

Next, it’s imperative to address the following implementation concerns before letting real data travel across hybrid boundaries…

…(read the complete as-published article there)

Siloing stifles data center growth

An IT industry analyst article published by SearchDataCenter.

It’s time to knock down those silos, one by one. IT is transforming from a siloed set of reactive cost centers into a service provider with a focus on helping the business compete.


In the old days of IT, admins built clear silos of domain expertise; IT infrastructure was complicated. Server admins monitored compute hosts, storage admins wrangled disks and network people untangled wires. Implementing parallel domains seemed like the best way to optimize IT. The theory was that you could run IT as efficiently as possible, allowing experts to learn specialized skills, deploy domain-specific hardware and manage complex resources.

Except that dealing with multiple IT domains was never optimal for anyone in the data center. When IT is organized into silos, anytime there is a problem — troubleshooting application performance, competing for rack space, or allocating a limited budget — the resulting bickering, finger-pointing and political posturing wastes valuable time and money. And heterogeneous infrastructure is not very interoperable, despite standardized protocols and thorough vendor validation testing.

Navigating a byzantine organization just to try out new things can stifle business creativity and innovation, but things are beginning to change. There is a massive shift in IT organization and staffing…

…(read the complete as-published article there)

What does the next big thing in technology mean for the data center?

An IT industry analyst article published by SearchDataCenter.

With modern technology, it’s hard to pinpoint one single next big thing, but there are plenty of options ready to wreak havoc on the data center.


There are plenty of technologies touted as the next big thing. Big data, flash, high-performance computing, in-memory processing, NoSQL, virtualization, convergence and software-defined whatever all represent wild new forces that could bring real disruption, but also big opportunities, to your local data center.

As a senior analyst at Taneja Group, I will discuss what we learn in ongoing IT industry research. We specialize in analyzing disruptive new technologies — figuring out the opportunities they present and identifying what will actually work in practice.

We must get past the marketing hype and weed through conflicting messages from competing vendors. In their enthusiasm, brash startups can make grossly exaggerated claims, while larger incumbents might introduce fear, uncertainty and doubt when faced with new competition. How do you decide what will be the next big thing in technology?

New products promise a compelling increase in performance, efficiency, productivity or end results. Sometimes these improvements justify an immediate rip and replace, but it’s more likely that a careful evolutionary approach is warranted. For example, big data presents a potentially disruptive opportunity. The amount of interesting and available data is growing fast. Our competitive natures make us want to mine all the value out of it as quickly as we can. In response, a multitude of emerging infrastructure systems offers to help us cruise through these floods of data. It can be hard to know where to look first.

In this column, I [will] explore new approaches to big data inspired by advances in high-performance computing, Web-scale applications and supercomputing architecture, including new forms of distributed storage, in-memory processing, cross-cloud workflows and scale-out processing platforms like Hadoop….

…(read the complete as-published article there)

How Many Eyeballs Does It Take?  New Network Performance Insight with Thousand Eyes

(Excerpt from original post on the Taneja Group News Blog)

How many eyeballs does it take to really see into and across your whole network – internal and external?  Apparently a “thousand” according to Thousand Eyes, who is coming out of stealth mode today with a very promising new approach to network performance management. We got to talk recently with Mohit Lad, CEO of Thousand Eyes, who gave us a fascinating preview. Thousand Eyes already has a very strong list of logos signed up, and we think that they will simply explode across the market as one of the few strong solutions that really helps unify management across clouds.

One of the issues with network-side performance management these days is that much of the network that enterprise IT relies on is external. Over the years there have been a few attempts at quantifying and monitoring external network issues – who hasn’t scripted some traceroute commands to check on a questionable service? You might have many network tools – maybe you run NetScout in-house but rely on some APM tools to monitor a SaaS provider. Still, you are likely missing all the network traversal in between, and it’s almost impossible to analyze for actual contention and hotspots in a deep topology (there is an echo here of what NetApp Balance did for layers of server/storage virtualization).
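
For what it’s worth, the sort of home-grown check alluded to above might look like the sketch below. It assumes a Unix-like host with the standard traceroute command installed, and the service hostnames are just placeholders for your own external dependencies.

```python
import subprocess

# Example external dependencies to spot-check; replace with your own hosts.
SERVICES = ["example.com", "storage.example-saas.net"]

def check_path(host: str) -> None:
    """Run traceroute with a 2-second hop timeout and flag hops that never answered ('* * *')."""
    result = subprocess.run(
        ["traceroute", "-w", "2", host],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if "* * *" in line:
            print(f"{host}: unresponsive hop -> {line.strip()}")

if __name__ == "__main__":
    for service in SERVICES:
        check_path(service)
```

This kind of script tells you a hop went dark, but not whose network it is, whether it matters to your application, or what to do about it – which is exactly the gap the tools discussed below try to close.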

Companies like Gomez/Compuware, BMC, Keynote, and others have created vast endpoint armies (the good botnets, I suppose) to test public-facing web apps from many, many places at once. But if there is a problem, then what? There is now a growing challenge to find and isolate devious performance issues hidden in complex networks hosting all kinds of internal and external apps and services that span private and public clouds, CDNs, and even DDoS protection vendors. Network topologies are just darned complex, and often your visibility is extremely limited.

This is where Thousand Eyes comes in. They do offer HTTP page component loading analysis similar to Compuware and Keynote, but the exciting thing is how they link that application performance with network performance constraints on end-to-end network topology mappings, regardless of where those networks traverse. Imagine being able to tell your service provider that they are dropping packets at IP xxx in their network, and it’s affecting your apps x, y, and z in locations a and b!

It’s a bit hard to describe in text – a picture here would really be worth thousands of words – but when there is a network issue, Thousand Eyes’ uniquely focused topology visualization (supported by “deep path analysis”) nails down the bottleneck fast, regardless of whether the problem is in-house, at your ISP, service provider, CDN, etc. And since it’s SaaS-based, you can easily “share” your dynamic view of the problem with the support teams at those providers for quick collaborative resolution (and, I expect, providing a motivation for those folks to also subscribe to Thousand Eyes).

This may be one of those things that you need to see to appreciate, but I was quickly impressed by the thoughtful visual analysis of some issues that would otherwise be pretty opaque and hairy to figure out. If you manage application performance across wide areas and multiple providers, or are responsible for network performance on complex topologies, you’ll want to check these guys out. Network pros will probably get more out of it immediately than others, but even a server or storage guy can tell that a red circle on a service provider’s IP address is something to call them about.

…(read the full post)