The New Big Thing in Big Data: Results From Our Apache Spark Survey

(Excerpt from original post on the Taneja Group News Blog)

In the last few months I’ve been really bullish on Apache Spark as a big enabler of wider big data solution adoption. Recently we got the great opportunity to conduct some deep Spark market research (with Cloudera’s sponsorship) and were able to survey nearly seven thousand (6900+) highly qualified technical and managerial people working with big data from around the world.
   
Some highlights: First, across the broad range of industries, company sizes, and big data maturities, over one-half (54%) of respondents are already actively using Spark to solve a primary organizational use case. That’s an incredible adoption rate, and no doubt due to the many ways Spark makes big data analysis accessible to a much wider audience – not just PhDs but anyone with a modicum of SQL and scripting skills.
   
When it comes to use cases, in addition to the expected Data Processing/Engineering/ETL use case (55%), we found high rates of forward-looking and analytically sophisticated use cases like Real-time Stream Processing (44%), Exploratory Data Science (33%) and Machine Learning (33%). And support for the more traditional Customer Intelligence (31%) and BI/DW (29%) use cases wasn’t far behind. By adding those numbers up, you can see that many organizations indicated that Spark was already being applied to more than one important type of use case at the same time – a good sign that Spark supports nuanced applications and offers some great efficiencies (sharing big data, converging analytical approaches).
 
Is Spark going to replace Hadoop and the Hadoop ecosystem of projects? A lot of folks run Spark on its own cluster, but we assess that’s mostly for performance and availability isolation. And that is likely just a matter of platform maturity – future schedulers (and/or something like Pepperdata) will likely solve the multi-tenancy QoS issues with running Spark alongside and converged with any and all other kinds of data processing solutions (e.g. NoSQL, Flink, search…).
 
In practice, converged analytics are already the big trend, with nearly half of current users (48%) saying they use Spark with HBase, and 41% also with Kafka. Production big data solutions are actually pipelines of activities that span from data acquisition and ingest through full data processing and disposition. We believe that as Spark grows its organizational footprint out from initial data processing and ad hoc data science into advanced operational (i.e. data center) production applications, it truly blossoms when fully enabled by other supporting big data ecosystem technologies.

…(read the full post)

Virtual Instruments Finally Gets NAS-ty

(Excerpt from original post on the Taneja Group News Blog)

When Virtual Instruments merged in/acquired Load Dynamix recently, we thought good things were going to happen. VI could now offer its users a full performance management “loop” of monitoring and testing in a common suite. Apparently VI’s clientele agreed, because they’ve just finished out a stellar first half of the year financially. Now, to sweeten the offer even more, VI is broadening its traditionally Fibre Channel/block-focused monitoring (historically rooted in their original FC SAN probes) to fully encompass NAS monitoring too.

…(read the full post)

Oracle ZS5 Throws Down a Cloud Ready Gauntlet

(Excerpt from original post on the Taneja Group News Blog)

Is anyone in storage really paying close enough attention to Oracle? I think too many mistakenly dismiss Oracle’s infrastructure solutions as expensive, custom, and proprietary Oracle-database-only hardware. But, surprise, Oracle has been successfully evolving the well-respected ZFS into a solid cloud-scale filer, today releasing the fifth version of the ZFS storage array – the Oracle ZS5. And perhaps most surprising, the ZS series powers Oracle’s own fast-growing cloud storage services (at huge scale – over 600PB and growing).

…(read the full post)

Do ROI Calculators Produce Real ROI?

(Excerpt from original post on the Taneja Group News Blog)

As both a vendor product marketer and now an analyst, I’ve often been asked to help produce an “official” ROI (or the full TCO) calculator for some product. I used to love pulling out Excel and chaining together pages of cascading formulas.  But I’m getting older and wiser.  Now I see that ROI calculators are by and large just big rat holes. In fact I was asked again this week and, instead of quickly replying “yes, if you have enough money” and spinning out some rehashed spreadsheet (like some other IT analyst firms), I spent some time thinking about why the time and money spent producing detailed ROI calculators is usually a wasted investment, if not a wasted opportunity (to do better).

…(read the full post)

We Can No Longer Contain Containers!

(Excerpt from original post on the Taneja Group News Blog)

Despite naysayers (you know who you are!) I’ve been saying this is the year for containers, and halfway into 2016 it’s looking like I’m right. The container community is maturing enterprise-grade functionality extremely rapidly, perhaps modeled on its virtualization predecessors. ContainerX is one of those interesting solutions that fills in a lot of gaps for enterprises looking to stand up containers in production. In fact, they claim to be the “vSphere for Containers”.

…(read the full post)

Hyperconverged Storage Evolves – Or is it Pivoting When it Comes to Pivot3?

(Excerpt from original post on the Taneja Group News Blog)

Pivot3 recently acquired NexGen (Mar 2016). Many folks have been wondering what they are doing. Pivot3 has made a name in the surveillance/video vertical with bulletproof hyperconvergence based on highly reliable data protection (native erasure coding) and large scalability (no additional east/west traffic with scale) as a specialty. So what does the NexGen IP bring? For starters, multi-tier flash performance and enterprise storage features (like snapshots).

…(read the full post)

Server Side Is Where It’s At – Leveraging Server Resources For Performance

(Excerpt from original post on the Taneja Group News Blog)

If you want performance, especially in IO, you have to bring it to where the compute is happening. We’ve recently seen Datrium launch a smart “split” array solution in which the speedy (and compute-intensive) bits of the logical array are hosted server-side, with persisted data served from a shared, simplified controller and (almost-JBOD) disk shelf. Now Infinio has announced version 3.0 of their caching solution this week, adding tiered cache support for server-side SSDs and other flash to their historically memory-focused IO acceleration.

…(read the full post)

Unifying Big Data Through Virtualized Data Services – Iguaz.io Rewrites the Storage Stack

(Excerpt from original post on the Taneja Group News Blog)

One of the more interesting new companies to arrive on the big data storage scene is iguaz.io. The iguaz.io team has designed a whole new, purpose-built storage stack that can store and serve the same master data in multiple formats, at high performance and parallel streaming speeds, to multiple different kinds of big data applications. This promises to obliterate the spaghetti data flows with many moving parts, numerous transformation and copy steps, and Frankenstein architectures currently required to stitch together increasingly complex big data workflows. We’ve seen enterprises need to build environments that commonly span from streaming ingest and real-time processing through interactive query and into larger data lake and historical archive-based analysis, and that end up making multiple data copies in multiple storage formats in multiple storage services.

…(read the full post)

Playing with Neo4j version 3.0

I’ve been playing again with Neo4j now that v3 is out. And hacking through some ruby scripts to load some interesting data I have lying around (e.g. the database for this website, which I’m mainly modeling as “(posts)<-(tags); (posts:articles)<-(publisher)”).

For ruby hacking in the past I’ve used the Neology gem, but now I’m trying out the Neo4jrb set of gems. And though I think an OGM is where it’s at (next Rails app I build will no doubt be using some graph db), I’m starting with just neo4j-core to get a handle on graph concepts and Cypher.
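
Here's roughly what loading that model looks like through neo4j-core. This is just a sketch: the labels and relationship types (:Post, :Article, :Tag, :Publisher, :TAGGED, :PUBLISHED) are purely illustrative, and it assumes the default session from the helper shown a bit further down is already open.

# Illustrative nodes for the post/tag/publisher model above
post = Neo4j::Node.create({title: 'Some article title'}, :Post, :Article)
tag  = Neo4j::Node.create({name: 'neo4j'}, :Tag)
pub  = Neo4j::Node.create({name: 'Example Publisher'}, :Publisher)

# (posts)<-(tags): tags point at the posts they label
tag.create_rel(:TAGGED, post)

# (posts:articles)<-(publisher): publishers point at the articles they ran
pub.create_rel(:PUBLISHED, post)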

One thing that stumped me for a bit: with the latest version of these gems – maybe now that they support multiple Neo4j sessions – I found it helped to add a “default: true” parameter to the session “open” call to keep everything downstream working at the neo4j-core level. Otherwise Node and other neo4j-core classes seemed to lose the current session and give a weird error (depending on scope?). Or maybe I just kept clobbering my session context somehow. Anyway, it doesn't seem to hurt.

require 'neo4j-core'

# Memoize a single session, opened as the default so that downstream
# neo4j-core classes (Neo4j::Node, etc.) can find the current session.
@_neo_session = nil
def neo_session
  @_neo_session ||= Neo4j::Session.open(:server_db,
    'http://user:password@localhost:7474',
    default: true)
end
#...
neo_session
# Create a node labeled :Blog with a single title property
Neo4j::Node.create({title: "title"}, :Blog)
#...
Neo4j-core Session
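
With the default session open, Cypher can also be run straight from ruby, which is handy for quick sanity checks. In neo4j-core the rows come back with accessors named after the RETURN aliases (at least in the versions I've been playing with); the query here is just a made-up example.

# Run raw Cypher through the session; row accessors match the RETURN aliases
neo_session.query('MATCH (p:Blog) RETURN p.title AS title LIMIT 5').each do |row|
  puts row.title
end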

The Neo4j v3 Mac OS X “desktop install” has removed terminal neo4j-shell access in favor of the updated slick browser interface. This updated browser interface is pretty good, but for some things I’d still really like to play with a terminal window command shell. Maybe I’m just getting old :)… If you still want the neo4j shell, apparently you can instead install the linux tarball version (but then you don’t get the browser client?). I’m not sure why product managers make either-or packaging decisions like this. It’s not as if the shell was deprecated (e.g. to save much dev time or testing effort).

Anyway, things look pretty cool in the browser interface, and playing with Cypher is straightforward as you can change between table, text, and graph views of results with just a click.

I’ve also been wanting to play with Gephi more. So I’m exporting data from Neo (using .csv files, though, as the Gephi community neo4j importer plugin isn’t yet updated for Gephi v0.9) using Cypher statements like these and the browser interface download button.

// for the node table export -> Gephi import
MATCH (n) RETURN ID(n) AS Id, LABELS(n) AS Label, n.title AS Title, n.url AS URL, toString(n.date) AS Date, n.name AS Name, n.publisher AS Publisher

// for the edge table export -> Gephi import
MATCH (l)-[rel]->(r) RETURN ID(rel) AS Id, TYPE(rel) AS Label, ID(l) AS Source, ID(r) AS Target
Cypher Queries for Importing into Gephi
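
The download button works fine, but the same export could also be scripted. Here's a rough sketch of what that might look like, using ruby's CSV library and the node query from above (with lowercased aliases so they map onto the result row accessors), assuming the neo_session helper from earlier.

require 'csv'

# Scripted version of the node-table export (lowercased aliases map onto
# the result row accessors)
node_query = <<-CYPHER
  MATCH (n)
  RETURN ID(n) AS id, LABELS(n) AS label, n.title AS title, n.url AS url,
         toString(n.date) AS date, n.name AS name, n.publisher AS publisher
CYPHER

CSV.open('nodes.csv', 'w') do |csv|
  csv << %w[Id Label Title URL Date Name Publisher]
  neo_session.query(node_query).each do |row|
    # LABELS(n) comes back as a list, so join it for Gephi's single Label column
    csv << [row.id, Array(row.label).join(';'), row.title, row.url,
            row.date, row.name, row.publisher]
  end
end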