Data Stream Mining with Cube

Time-series data analysis can be approached in two ways. Traditionally, time-series data is aggregated into partitioned historical databases and then reported on at scheduled intervals; commonly, reports delivered today cover data collected yesterday. A modern approach (and perhaps the one most relevant to Big Data) is to recognize that time-series data just “keeps coming”. And since the timeliest analysis could theoretically deliver the most value, visualizations should update as soon as the data streams in.

Square’s evolving Cube library (it’s still an early version 0) enables web developers to easily deliver real-time charting of streaming time-series data on dynamic web pages:

Cube is an open-source system for visualizing time series data, built on MongoDB, Node and D3. If you send Cube timestamped events (with optional structured data), you can easily build realtime visualizations of aggregate metrics for internal dashboards.
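To make that concrete, here is a minimal sketch of sending one timestamped event to a Cube collector from Node. The port (1080) and the /1.0/event/put path are assumptions based on my reading of Cube’s default setup and may differ by version, so treat this as an illustration rather than Cube’s definitive API; the event type and data fields are made up.

```javascript
// Minimal sketch: POST one timestamped event to a Cube collector.
// Assumptions: collector on localhost:1080, HTTP endpoint "/1.0/event/put"
// (check your Cube version's docs for the exact path and port).
var http = require("http");

var payload = JSON.stringify([{
  type: "request",                                // event type (hypothetical)
  time: new Date().toISOString(),                 // timestamp
  data: { path: "/checkout", duration_ms: 142 }   // optional structured payload
}]);

var req = http.request({
  host: "localhost",
  port: 1080,
  method: "POST",
  path: "/1.0/event/put",
  headers: {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(payload)
  }
}, function(res) {
  console.log("collector responded:", res.statusCode);
});

req.write(payload);
req.end();
```

From there, Cube’s evaluator and a D3 front end can aggregate and chart the metric as the events arrive.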

I’ve spent a large chunk of my professional life working at IT system management vendors, each of whom spent significant resources building and delivering proprietary event and time-series data analysis and visualization tools. In the last few years, successful open-source discrete event monitoring and management tools (thresholds, alerts, etc.) have really disrupted the market for old-school proprietary event solutions. Open-source time-series solutions like Cube have similar potential to disrupt proprietary time-series analysis markets.

Time-Series Data Stream Mining

Real-time time-series visualization is fundamentally data stream mining. It may not be at Big Data scale yet, but the way Cube is architected offers some hints about the future of Big Data stream mining.

What is the Question?

The answer, I’m sure, is innovation.

Practically, the first thing to do is figure out the questions to ask. Don’t stick to questions that are already hanging out there waiting to be answered; create new questions that you couldn’t answer before you had your Big Data. And don’t forget that the data you have isn’t limited to what’s in-house; you can find and mash up “tons” of public, government, and licensed data sets.

Data mining, just like data visualization, is as much art as science…

When You Have a Traditional Question, All Data Looks Traditional

Old mine near Woodburn, Oregon (image by OSU Special Collections & Archives via Flickr)

Is the challenge simply to map and reduce the Big Data into smaller data so we can look at it the same way we always have? So we can support the same business processes, the same decision-making? Answer the same questions, but perhaps at a larger scale?

The real challenge is to think differently: to ask different questions that can only be answered by unlocking the Big Information spread across the Big Data. The whole process, from data gathering through mining, analysis, visualization, and presentation, needs to be designed to help create and answer these new and different questions.


Why Didn’t We Already Find What We’re Looking For?

Building the data plotter (image by !mz via Flickr)

What we primarily look for in data is to make sense of it: summaries and statistics that help inform analytical decision-making, or patterns and stories that create new insights into the larger world behind the data.

This should all sound familiar if you are a Flowing Data blog fan, as I am. From author Nathan Yau in his book Visualize This: “Whatever you decide visualization is… you’re ultimately looking for the truth.” But the truth is hard to come by. Basically, numbers don’t lie; people do, either on purpose or through incompetence.

Most of us have probably read How to Lie with Statistics, but with Big Data the dangers are multiplied by magnitudes. Search for the truth, always try to tell the truth, but beware of people saying they have the big truth.

Big Data Visual Exploration

There are lots of tools to analyze and visualize non-Big Data (smaller data?). But when we approach Big Data, our options are almost by definition limited; in fact, most definitions of Big Data are in terms of the constraints current “smaller data” tools face in handling it effectively. What we do have currently is centered around map/reduce processing (see Hadoop), which essentially first makes smaller datasets for analysis (e.g. check out the free Infobright/Pentaho VM).
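To illustrate that “make it smaller first” step, here is a minimal map/reduce sketch in plain JavaScript. It is not Hadoop code, just the general shape, and the event fields and in-memory driver are made up: events are mapped to key/value pairs and reduced down to a per-day summary small enough to chart with conventional tools.

```javascript
// Minimal map/reduce sketch: boil a huge event log down to per-day counts.
// The "events" array stands in for a distributed data set; field names are hypothetical.
function map(event, emit) {
  emit(event.time.slice(0, 10), 1);   // key: "YYYY-MM-DD", value: 1
}

function reduce(key, values) {
  return values.reduce(function(sum, v) { return sum + v; }, 0);
}

// Toy driver; a real framework shuffles keys across many machines.
function mapReduce(events) {
  var groups = {};
  events.forEach(function(e) {
    map(e, function(key, value) {
      (groups[key] = groups[key] || []).push(value);
    });
  });
  var result = {};
  Object.keys(groups).forEach(function(key) {
    result[key] = reduce(key, groups[key]);
  });
  return result;   // small enough to chart directly
}

// mapReduce([{ time: "2011-10-03T12:00:00Z" }, { time: "2011-10-03T14:30:00Z" }])
// returns { "2011-10-03": 2 }
```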

This map/reduce approach, requiring low-level distributed programming, isn’t well suited to serendipitous discovery by amateur data scientists, although there is ongoing work in this area (see Pig and Hive). There are also emerging companies specializing in automating the deep “data scientist” geekery to provide a “small data” exploration experience over Big Data sets (Opera Solutions, the still-stealthy Zillabyte?).

The real challenge is still that we don’t really know what we are looking for in Big Data sets before we find it: discovery more than answers to questions. And whatever it is, it probably wasn’t in the smaller data we have already made optimal use of (or not; most data goes unexamined even in non-big databases).


Big Data Analytics – Intelligence for Disruption

Taken together, the V-word characteristics of Big Data both identify and shape the kinds of innovative solutions that can be created from Big Data opportunities.  These solutions will tend to provide intelligence more than absolute truth.

Disruption is the Real Opportunity

Hurricane Irene makes landfall in North Carolina (image by NASA Goddard Photo and Video via Flickr)

It’s worth keeping in mind that adding Big Data Analysis to a current business isn’t the whole enchilada. Having better intelligence than the next guy is a great competitive advantage, but in itself it isn’t “disruptive.” The idea that Big Data will enable game-changing new business opportunities, rather than simply adding insight to current processes or decision-support practices, is why Big Data Analysis is exciting.

Entrepreneurs who create new ways of doing business fueled by Big Data intelligence will dominate. The key to the difference between improving current business and innovative disruption is looking for answers to new and different questions. Sounds easy enough but that is truly difficult creative work.

Big Data Doesn’t Come with an Instruction Manual

Big Data sets don’t start with a schema model that defines the answers “findable” within them. It’s not just a huge BI warehouse. Rather, it takes a cunning mind and a dedicated soul to explore through Big Data: for example, trying various map/reduce algorithms to find new patterns, and assembling new visualizations to discover new ways of looking and seeing.

This skilled data mining and keen perceptive ability must be fused with an entrepreneurial mindset that is always evaluating how any new big data intelligence could be formed into new and ultimately disruptive innovation.

Big Data Defined by the V’s

There are lots of definitions of Big Data. Most of them are fuzzy marketing speak along the lines of “Big Data is just bigger than your old data, too big to deal with the same way you dealt with data before.” Amusingly, a lot of the examples given for “historical” Big Data successes are based on traditional data methods and technologies applied to overly large amounts of traditional data.

Data represented in an interactive 3-D form (image by Idaho National Laboratory via Flickr)

Clearly there is something new happening with the way we can get value out of very large data sets, but it’s really hard to see where the line between Big Data and not-so-Big Data actually lies. Ironically, most pundits seem to be saying we can spot Big Data the same way we know what’s obscene: we’ll simply recognize it when we see it. The irony, of course, is that Big Data is just too big to see, or to visualize as it is.

Think how big a picture it would take to show a 5 PB Big Data set at one pixel per data point.
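A rough back-of-the-envelope calculation (assuming, very generously, one data point per byte) shows just how big that picture would be:

```javascript
// Back-of-the-envelope: how many screens to show 5 PB at one pixel per point?
// Assumption: one data point per byte, which is absurdly generous.
var points = 5 * Math.pow(10, 15);          // 5 PB ~= 5e15 bytes/points
var pixelsPerHdScreen = 1920 * 1080;        // ~2.07 million pixels per 1080p display
var screens = points / pixelsPerHdScreen;   // ~2.4 billion displays
console.log((screens / 1e9).toFixed(1) + " billion HD screens");
```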

Big Data by the V-words

I’ve read more than a few definitions that talk about some clever V-word characteristics that Big Data scientists need to be concerned with:

  1. Volume – Obviously Big Data is Big.
  2. Variety – Many identified Big Data sets are internally heterogeneous (e.g. big data documents).  The data isn’t collected or authored according to a single master schema.
  3. Velocity – Big Data sets tend to grow rapidly, even as we use them, which implies some dynamic and possibly real-time behavior as well.

I’d add a fourth V:

  4. Veracity – Or rather, the lack thereof. Raw Big Data is often neither verifiable/verified nor validated (until processed for that goal specifically, e.g. security fraud). Analysis can’t always be duplicated (as the data keeps growing/changing). Duplication, omission, and general incompleteness are to be expected.

It may be impossible to repeat the same analysis definitively on a truly big “big data” set.  If results can’t be exactly reproduced (or explained back to raw data), they can’t serve as literal truth.


Nowhere To Hide

Your life so far has been a big data trail for someone else to mine.

Trail (image by Xpectro via Flickr)

Google took what were essentially crumbs of data left by millions (billions?) of people as they navigated around the internet, and compiled and analyzed them into an index of how relevant and popular any place you might want to visit is. As they compile more bits of information about you, your social circles, your browsing history (and recommendations, and…), your lifetime is laid bare to their ultimately commercial interests.

Privacy is being hotly debated in some circles, but most people are not even aware of what is at stake. For some, the world has evolved and we can no longer apply past expectations of privacy to the constructs and capabilities emerging today; the new world is a shared one. For others, any data associated with their personal identification is off-limits.

There is a huge new privacy conflict dead ahead.

It Is a Small (and connected) World

Despite bigger and bigger data, the world is a small place and it is full of people: increasingly networked people. I like Clay Shirky’s thinking in Here Comes Everybody about the new ways people online can gather and form loose communities whose effectiveness is multiplied by new-found freedoms and capabilities for distributed but coordinated group action. (Twitter doesn’t topple governments; people linked by Twitter do.)

In Cognitive Surplus he writes about the ability to harness huge untapped human potential. For example, the average Westernized civilization’s tuned-out TV time represents a significant amount of lost “cognition”. If it were possible to recover just a small percentage of that wasted human capital in the pursuit of just about anything, tremendous things could happen. Given the emerging abilities of internet societies to both encourage and allow everyone to contribute, we might be at the start of a tremendous acceleration in human achievement (e.g. see how online gamers solved an AIDS protein puzzle).

It Is a Small World After All

Small world #5 (image by bass_nroll via Flickr)

It is no longer news that companies can (and must) look for competitive advantage and innovative, even disruptive, opportunities in their “big data”. We are flooded daily with press releases about new big data technology, much of it designed to make the analysis and visualization of big data easier – even for the non-data scientist. You might even call 2011 the start of a renaissance for data visualization gurus and infographic artists.  (And we are seeing data mining history being rewritten to cast any past complex analysis victory as a win for “big data”.)

But not that much is being said about the human psychology around big data analysis. Maybe a few cautionary stories about ensuring good design and not intentionally lying with big data stats (the bigger the data, the bigger the potential lie…). And some advice that the career of the future is “data scientist,” which conflicts with the emerging technology marketing hype suggesting we won’t really need them.

The world is changing for the people who live here but we talk mostly about gadgetry.


Who Will Drive Data Driven Documents?

Check out the JavaScript library D3 for “Data Driven Documents” (D3 on Github). At first D3 seems to be just yet another way to add graphs and charts to web pages, but if you spend a few minutes soaking it in, you can see how it might change how people think about data and documents altogether.

Example data diagrams produced with the D3 JavaScript library

Like anything that changes our mental paradigm, it takes a bit of noodling to wrap your head around. D3 is similar to basic jQuery, with the twist that you can attach and transform data bound to arbitrary DOM elements, then use that data to drive the visualization and behavior of the DOM dynamically.

D3.js is a small, free JavaScript library for manipulating documents based on data.

D3 allows you to bind arbitrary data to a Document Object Model (DOM), and then apply data-driven transformations to the document. As a trivial example, you can use D3 to generate a basic HTML table from an array of numbers. Or, use the same data to create an interactive SVG bar chart with smooth transitions and interaction.

Data Driven Functions

There are some clever things to be done in just a few lines of code when you use D3 to map what might normally be static attributes of your CSS/HTML/SVG (or other DOM elements) to data-driven functions, as in the sketch below.
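Here is a minimal sketch of that idea using D3’s standard select/data/enter pattern (the data values and class name are made up, and it assumes d3.js is already loaded on the page): the width and label of each bar are functions of the bound datum rather than static attributes.

```javascript
// Minimal D3 sketch (assumes d3.js is already included on the page).
// Data values are made up; the point is that style and text are
// functions of the bound datum, not static attributes.
var counts = [4, 8, 15, 16, 23, 42];

d3.select("body").selectAll("div.bar")
    .data(counts)                  // bind one datum per placeholder element
  .enter().append("div")           // create a div for each new datum
    .attr("class", "bar")
    .style("background", "steelblue")
    .style("margin", "1px")
    .style("width", function(d) { return d * 10 + "px"; })   // data-driven width
    .text(function(d) { return d; });                        // data-driven label
```

Swap in a different array and the same few lines produce a different document, which is the mental shift the library’s name points at.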

MongoDB – Storing Big Data Documents

If a Big Data set (or a smaller one) is in the form of documents, it’s difficult to store in a traditional schema-defined row-and-column database. Sure, you can create large blob fields to hold arbitrary chunks of data, serialize and encode the documents in some way, or just store them in a filesystem, but those options aren’t much good for querying or analysis when the data gets big.

MongoDB Document Database

MongoDB is a great example of a document database. There are no predefined schemas for tables (the schema is considered “dynamic”). Rather you declare “collections” and insert or update documents directly into each collection.

A document in this case is basically JSON with some extensions (actually BSON, binary-encoded JSON). It supports nested arrays and other things that you wouldn’t find in a relational database. If you think in object-oriented terms, this is fundamentally an object store.

Documents added to a single collection can vary widely from each other in content and composition/structure (although an application layer above could obviously enforce consistency, as happens when MongoDB is used under Rails).
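A quick mongo-shell sketch makes the point (the collection and field names are hypothetical): no schema is declared up front, nested arrays and sub-documents are stored directly, and two differently shaped documents coexist in the same collection.

```javascript
// Minimal mongo-shell sketch: no schema declared up front, and two
// documents with different shapes live in the same collection.
// Collection and field names are hypothetical.
db.events.insert({
  type: "pageview",
  time: new Date(),
  user: { id: 42, locale: "en-US" }            // nested sub-document
});

db.events.insert({
  type: "purchase",
  time: new Date(),
  items: [ { sku: "A-100", qty: 2 }, { sku: "B-7", qty: 1 } ],  // nested array
  total: 31.50
});

// Query by a field that only some documents have:
db.events.find({ "items.sku": "A-100" });
```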

MongoDB’s list of key features is a fantastic mini-tutorial in itself.