I’m Going Fission

I just spent a couple of weeks in Boston at Red Hat Summit and OpenStack Summit. Containers are clearly the big thing this year – Kubernetes, OpenShift, etc. And increasingly, IT is learning how to take advantage of remote Management as a Service (MaaS) offerings that free up folks to focus more on business value and less on running complex stacks. On that front I talked with folks like Platform9, who also happen to sponsor a “server-less” computing solution called Fission (later in this blog post I’ll show how I got Fission deployed on my Mac).

Because I’m an industry analyst (in my day job), here is a big picture of the evolution happening in application infrastructure: Physically hosted apps (server and O/S) –> Virtual machines (in a hypervisor) –> Cloud platforms (e.g. OpenStack) –> Container “ships” (e.g. OpenShift, Docker, Kubernetes) –> Serverless Computing (e.g. AWS Lambda and Fission).

Applications have always been constructed out of multiple tiers and communicating parts, but generally we are moving towards a world in which functionality is both defined and deployed (distributable, scalable) in small, testable bits (i.e. “units” as in unit testing), while an application “blueprint” defines all the related bits and required service properties in operation.  Some folks are calling the blueprinting part “infrastructure as code”.

(BTW – the next evolutionary step is probably some kind of highly intelligent, dynamic IoT/Big Data/Distributed engine that inherently analyzes and distributes compute functionality out as far as it can go towards the IoT edge while centralizing data only as much as required. Kind of like a database query planner on IoT-size steroids).

So, on to my Mac deployment of Fission. I’ve already got VirtualBox installed for running Hadoop cluster sandboxes and other fun projects, but OpenStack is probably not something I really need or want to run on my own Mac (although apparently I could, if I wanted more agility in spinning big data clusters up and down). But – aha! – now a mental lightbulb goes on! (Or rather, an LED – gotta save power these days.)

This Fission project means I can now run my own lambda services on my little desktop Mac too, and then easily deploy really cool stuff to really big clouds when someday I create that killer app (with lambdas that happily interface with other coolness like Spark, Neo4j, Ruby on Rails…). Ok, this is definitely something I want to play with. And I’m thinking – wait for it – Ruby lambdas! (Ruby is not dead, you fools! You’ll all eventually see why Ruby is the one language that will be used in the darkness to bind them all!)

Well, we’ll come back to Ruby later.  First things first – we’ll start with the default node.js example. Let’s aim for a local nested stack that will run like this:

osx (-> virtualbox (-> minikube (-> fission (-> node.js))))

host server – hypervisor – container cluster – lambda services – execution environment

While the lambda execution will be nested, the CLI commands to interface with minikube/kubernetes (kubectl) and fission (fission) will be available locally at the osx command line (in a terminal window).

Ok, I’ve already got VirtualBox, but it’s out of date for minikube. So I download the latest directly off the web and install – oops, first issue! Mac OS X now has a fancy SIP (System Integrity Protection) security layer that prevents anyone from actually getting anything done as root (I swear, if they keep making my Unix-based Mac work like iOS, I’m gonna convert to Ubuntu!). So after working around security to get that update in place (and thank you, Oracle, for VirtualBox) we are moving on!

$ virtualbox

Oh, and make sure to also have kubectl installed locally. The local kubectl will get pointed at the minikube kubernetes environment that will be running inside virtualbox.

$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.6.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
$ kubectl version

For the minikube install I used brew, which of course I had to update first. And of course, I had to again work around the Mac OS X SIP challenge above (hopefully this is a one-time fix) by setting /usr/local directory ownership to myself (then back to root:wheel after the dust settled).
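For reference, the ownership dance looked something like this (a sketch of what I did – adjust to your own brew setup before copying blindly):

$ sudo chown -R $(whoami) /usr/local     # temporarily take ownership so the install can write
# ... run the brew commands below ...
$ sudo chown root:wheel /usr/local       # restore ownership after the dust settles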

$ brew update
$ brew cask install minikube
$ minikube start
# minikube stop                                   (to shut the cluster down later)
# minikube service [-n NAMESPACE] [--url] NAME    (to look up a deployed service URL)
$ minikube ip
$ minikube dashboard
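A quick sanity check I like to do at this point (my habit, not an official step): confirm that kubectl is actually talking to the new minikube cluster and can see its node.

$ kubectl config current-context
minikube
$ kubectl get nodes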

At this point you can deploy containerized apps with kubectl into the minikube “cluster”. This next bit is an example of a simple “echo” server from the minikube GitHub docs.

$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
$ kubectl get pod
$ curl $(minikube service hello-minikube --url)
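When you’re done poking at the echo server, cleanup is plain kubectl (nothing minikube-specific):

$ kubectl delete service hello-minikube
$ kubectl delete deployment hello-minikube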

(If you are following along, you might suggest that I should play here with minishift too, but now is not yet the time! Maybe I’ll climb into that PaaS arena in another post.)

Now it’s time for Fission. These next snippets are taken from the Fission GitHub README. The first curl installs the fission command-line client locally. The kubectl lines start the fission services up. The two shell variables are just for the convenience of the provided examples, and not part of the required install.

$ curl http://fission.io/mac/fission > fission && chmod +x fission && sudo mv fission /usr/local/bin/

$ kubectl create -f http://fission.io/fission.yaml
$ kubectl create -f http://fission.io/fission-nodeport.yaml

$ export FISSION_URL=http://$(minikube ip):31313
$ export FISSION_ROUTER=$(minikube ip):31314    # for these examples
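Before deploying anything, it’s worth checking that the fission pods actually came up and that the client can reach the controller (I’m assuming here that the yaml above lands everything in a fission namespace – poke around with kubectl get namespaces if not):

$ kubectl --namespace fission get pods    # wait for Running
$ fission env list                        # empty for now, but proves connectivity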

Ok, so now we have our own lambda services host running. Next we can start deploying lambda functions. Fission does a number of things for us, like scaling out our services and keeping a few containers ready for fast startup, and probably a bunch of stuff I won’t figure out until some O’Reilly book comes out (oh, I could just read the code…).

$ fission env create --name nodejs --image fission/node-env
$ curl https://raw.githubusercontent.com/fission/fission/master/examples/nodejs/hello.js > hello.js

$ fission function create --name hello --env nodejs --code hello.js
$ fission route create --method GET --url /hello --function hello

First, we create a fission environment, associating a fission environment container image with the name “nodejs”. Then we create a fission function from our hello.js lambda “code”, placing it into that fission environment. Here we are using JavaScript and node.js, but there are other execution environments available (and we can make our own!). We also then need to map a web services route to our fission function.


hello.js:

module.exports = async function(context) {
    return {
        status: 200,
        body: "Hello, World!\n"
    };
}

You can see that a Fission lambda function is just a JavaScript function. In this case all it does is return a standard HTTP response.

$ curl http://$FISSION_ROUTER/hello
 ->  Hello, World!

Testing it out – we hit the URL with a GET request and tada!  Hello World!
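Since a lambda here is just a function of a context, you can do a bit more than return a static string. Here’s a sketch that echoes a query parameter back – assuming the node environment hands us the incoming HTTP request as context.request (that’s my reading of the fission node-env, so treat it as an assumption to verify against the source):

module.exports = async function(context) {
    // context.request is assumed to be the incoming HTTP request
    // (express-style), per my reading of the fission node-env
    var name = context.request.query.name || "World";
    return {
        status: 200,
        body: "Hello, " + name + "!\n"
    };
}

After editing hello.js, pushing the new code should just be an update (verify the exact flags against fission --help for your version):

$ fission function update --name hello --code hello.js
$ curl "http://$FISSION_ROUTER/hello?name=Fission"
 ->  Hello, Fission!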

This is quite an onion we’ve built, but you hopefully can appreciate that each layer is adding to the architecture that would enable easy deployment at large scale and wide distribution down the road. Next up though, I personally want Ruby lambdas!

I could build a full native Ruby fission environment (it should be easy enough to start with an existing Red Hat or Docker Ruby container). There is a Python fission example that wouldn’t be hard to emulate. I’d have to decide on key gems to pre-load, and that leads to a big question of what I’d actually like to do and how big and fat that environment might get (which could be slow and bloated). Or we could try to stay very small – there have been small embeddable Rubys like mruby (although that one looks dead since 2015). There is also some interesting advice out there for building minimal Ruby app containers.

While not actually Ruby, CoffeeScript – transpiling Ruby-like CoffeeScript code to JavaScript – seems the easiest route at the moment, and it just uses the vanilla fission node.js environment we already have above. I could also see embedding “coffee” in a fission environment easily enough, so that I could send CoffeeScript code directly to fission (although that would require transpiling on every lambda execution – it’s always a trade-off). To get started with coffee, add it to your local node.js environment (install Node first if you don’t already have that).

$ npm install -g coffee-script
$ coffee

Using coffee is easy enough. Learning it might take a bit of study, although if you like Ruby and only suffer when forced to work with native JavaScript, it’s well worth it.
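As a quick sketch (hand-rolled, not from the fission docs), here’s the hello lambda as CoffeeScript, transpiled down to the same kind of hello.js we deployed above. Note I’m returning the response object synchronously rather than async, since classic CoffeeScript has no async/await – my assumption is the node environment accepts a plain return value as well as a promise:

hello.coffee:

module.exports = (context) ->
    status: 200
    body: "Hello, CoffeeScript!\n"

$ coffee -c hello.coffee    # transpiles to hello.js
$ fission function create --name hello-coffee --env nodejs --code hello.js
$ fission route create --method GET --url /hello-coffee --function hello-coffee
$ curl http://$FISSION_ROUTER/hello-coffee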

But CoffeeScript is not Ruby. Something like Opal (transpiling full Ruby syntax to JS) is an even more interesting project, and if it were ever solid it could be implemented here with fission in a number of ways – possibly embedded in its own Opal Ruby fission environment, applied statically upstream of a node.js fission environment as with CoffeeScript, or even used dynamically as a wrapper around Ruby code sent to the node.js environment.

Another idea is to build a small native Ruby fission solution with something like a nested Ruby Sinatra design. First create a local “super-fission-sinatra” DSL that would deploy Sinatra-like web service definition code to an embedded Ruby/Sinatra fission environment. Kind of meta-meta, but maybe an interesting way to build scalable, instrumented APIs – something like the sketch below.
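Completely hypothetical – none of this exists – but the DSL might read something like this, with each route block getting deployed as its own fission function and route rather than running inside one long-lived app process:

require "json"
require "super_fission_sinatra"   # imaginary gem, for illustration only

SuperFissionSinatra.define do
  # each block would become a separate fission function + route pair
  get "/hello" do
    [200, "Hello, World!\n"]
  end

  get "/status" do
    [200, { service: "demo", healthy: true }.to_json]
  end
end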

All right – that’s enough for now. Time to play! Let me know if you create any Ruby Fission examples!

Diamanti Reveals Hyperconverged Scale-out Appliances for Containers

(Excerpt from original post on the Taneja Group News Blog)

Diamanti (pre-launch known as Datawise.io) has recently rolled out their brand new hyperconverged “container” appliances. Why would containers, supposedly able to be fluidly hosted just about anywhere, need a specially built host? Kubernetes et al. might take care of CPU allotment, but there are still big obstacles for naked containers in a production data center, especially as containers are now being lined up to host far more than simple stateless micro-services. Now their real-world storage and networking needs have to be matched, aligned, and managed, or the whole efficiency opportunity can easily be lost.

…(read the full post)

Will container virtualization be the biggest data center trend of 2016?

An IT industry analyst article published by SearchServerVirtualization.


It’s hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I’ve decided that this is the year that containers will arrive in a big way — much earlier and faster than many expect, catching unprepared IT shops by surprise.

Unlike other technologies like big data that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting that don’t require massive shifts in mindset or vision. It’s just quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also have the compelling operational and financial benefits of cheaper or free licensing, more efficient use of physical resources, better scalability and ultimately service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.

Server virtualization was also a great idea when it first came out with significant advantages over physical hosting, but it still took many years for it to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private or hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has gotten farther down the maturity road by leveraging the roadmap laid out by server virtualization. And you can get a jumpstart by using trusted hypervisors like VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn’t looking or paying attention (and even if IT doesn’t want them yet).

…(read the complete as-published article there)

Container technology’s role in storage

An IT industry analyst article published by SearchServerVirtualization.


Could containers dethrone virtual machines as the next generation compute architecture? I’ve heard many industry folks say that containers are moving faster into real deployments than almost any previous technology, driven by application developers, DevOps and business-side folks looking for agility as much as by IT’s need for efficiency and scale.

Containers were one of the hottest topics at VMworld 2015. VMware clearly sees a near-term mash-up of virtual machines and containers coming quickly to corporate data centers. And IT organizations still need to uphold security and data management requirements — even with containerized applications. VMware has done a bang-up job of delivering that on the VM side, and now it’s weighed in with designs that extend its virtualization and cloud management solutions to support (and, we think, ultimately assimilate) enterprise containerization projects.

VMware’s new vSphere Integrated Containers (VICs) make managing and securing containers, which in this case are running nested in virtual machines (called “virtual container hosts”), pretty much the same as managing and securing traditional VMs. The VICs show up in VMware management tools as first-class IT-managed objects equivalent to VMs, and inherit much of what vSphere offers for virtual machine management, including robust security. This makes container adoption something every VMware customer can simply slide into.

However, here at Taneja Group we think the real turning point for container adoption will be when containers move beyond being simply stateless compute engines and deal directly with persistent data.

…(read the complete as-published article there)