Container Technologies – Just Pack Up What You Need

April 3rd, 2015 | Cloud Computing


Experienced road warriors and backpackers share a common goal when preparing for travel.  Just pack what you need… no more, no less.  The reasons are simple: less weight to lug around and more portability.  You can head right to the airport TSA line or step off the plane without the hassle of checking or retrieving luggage.  There’s a reason we use the phrase “less baggage” to quickly communicate these traits.

When dealing with the lifecycle of software applications, we can all benefit from the less baggage metaphor in even more ways than our backpacker.  What if we could stuff everything we need for the development, testing, and production phases of the application lifecycle into a single container?  We could accomplish this today using virtual machine (VM) images as containers, but they are pretty heavyweight.  Including a whole operating system just to run your application is the equivalent of packing everything, including the kitchen sink!

A much lighter approach has been gathering widespread adoption in the DevOps community.  It is a technology that combines Linux containers (via libcontainer) with the Docker management toolkit.  Together these allow you to pack up just the application dependencies into a self-contained image that is independent of hardware and hosting environment.  The result has a much smaller footprint than a VM image and is more portable and easily replicated.  It combines a copy-on-write file system, where the majority of files are shared in a read-only manner, with the Docker engine, which provides shared runtime resources in an environment available on Linux and even Microsoft Windows hosts.
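
To make the “pack up just the application dependencies” idea concrete, here is a minimal sketch that drives the Docker engine from Python using the Docker SDK for Python (the “docker” package); the ./myapp build directory, its Dockerfile, and the myapp:latest tag are hypothetical, and the same steps can be performed with the docker command-line tool.

import docker

# Connect to the local Docker engine (assumes the daemon is running).
client = docker.from_env()

# Build an image from a hypothetical ./myapp directory whose Dockerfile
# lists only the application and its dependencies -- no full OS install.
image, build_logs = client.images.build(path="./myapp", tag="myapp:latest")

# Run the image as a container; it shares the host kernel and only adds
# a thin copy-on-write layer on top of the read-only image.
container = client.containers.run("myapp:latest", detach=True)
print(container.short_id, container.status)

# Tear down when finished.
container.stop()
container.remove()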

But what new capabilities does this represent?  Does this replace VMs?  No; you still need a minimal Docker-enabled host environment running on bare metal or within a virtual machine.  However, immediate advantages can be realized in typical development and test scenarios.  This is operating-system-level virtualization, in which containers share the host kernel but run in separate namespaces, resulting in extremely fast boot times, minimal additional disk and memory consumption, and low CPU overhead.  That lets you fire up tens or even hundreds of identically configured instances of an environment on a laptop and run them in parallel for development or test scenarios.  The copy-on-write file system gives each instance its own pristine copy of the data for testing.  There is no restoring a VM or reloading a database to start a test over again; you can snapshot, roll back, or commit the state of a container image.
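
As an illustrative sketch (again using the Docker SDK for Python and the hypothetical myapp:latest image), launching a batch of identically configured containers and then snapshotting one of them might look like this:

import docker

client = docker.from_env()

# Start ten identically configured containers from the same image.
# Each gets its own thin copy-on-write layer over the shared, read-only
# image, so startup time and disk overhead stay small.
containers = [client.containers.run("myapp:latest", detach=True)
              for _ in range(10)]

# ... run a destructive test against one instance ...

# "Snapshot" that instance by committing its writable layer as a new
# image; a fresh test can simply start again from the untouched original.
snapshot = containers[0].commit(repository="myapp", tag="after-test-1")
print(snapshot.tags)

# Clean up all instances.
for c in containers:
    c.stop()
    c.remove()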

The advantages extend into the deployment realm of Operations as well.  A container allows developers to configure and tune it for the application, letting Operations focus on simply running containers.  This supports a DevOps collaboration model and reduces the situations where an application runs fine in development but fails in the deployment environment.

The less baggage benefits of container technology, with its efficient portability, enable hybrid IT cloud adoption and continuous application deployment.  Many recognizable names in the industry have embraced the technology: AWS, Google, IBM, Microsoft, Red Hat, SUSE, and VMware, to name just a few.  Interest has exploded in the past two years, with over 750 community contributors, 75,000 registered “Dockerized” applications, and over 100 million Docker engine downloads.  The ecosystem is expanding through projects like Flocker for stateful Docker cluster management, Apache Mesos cluster management to launch Docker images, and OpenStack integration to enable host management for hundreds of containers.  Google has open sourced a version of its container orchestration software known as Kubernetes, the technology it uses to launch more than 2 billion container instances weekly to serve its Apps, Maps, Gmail, and Search applications.

As you would expect, there is always room for improvement.  CoreOS, an early adopter of the Docker container approach, developed a stripped-down version of the Linux operating system specifically designed to run containers.  More recently, it has focused on improving security, container composition, and the runtime with the release of Rocket.

The net of all this is that we can now combine “just what you need” containers for any process that runs on a computer with a secure and svelte runtime engine infrastructure.  You can readily envision a highly scalable application distribution system spanning a very large, heterogeneous environment, and that’s exactly what is taking place.


Tags: Application Lifecycle, Cloud, Container, Docker, Microsoft Azure