
Kicking the Tires on Kubernetes – Part 1

Containerization — My Historical Perspective

Containerization is not really a new concept in the *NIX world. The first time I ran into containers was when I managed web infrastructure at a large auto-financing company back in 2000-2001. Back then we ran Netscape (or was it called iPlanet by then?) web servers to host the website, which provided online vehicle selection, leasing, financing, and auctioning capabilities. We ran this on a bunch of E250 servers from Sun Microsystems, running Solaris 2.6. Those servers were expensive, and we had to improvise to provide dev/test environments on limited resources. So our lead engineer, an old-school UNIX/Linux programmer, devised a plan to use ‘chroot’ (providing essential binaries via ‘/bin’, ‘/sbin’, and ‘/usr/bin’, libraries via ‘/lib’, and configurations via ‘/etc’ in each instance) to create self-contained mini-OS instances running on a single Solaris 2.6 host. We didn’t call them containers per se; we just used them to build multiple web server instances for the test rig, and had our developers and testers use them as dev instances for developing test cases, and so on.
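A minimal sketch of that chroot approach, as I remember it (the paths and binary list here are illustrative; in practice, every binary’s shared-library dependencies had to be copied in as well):

    # Build a self-contained root for one web server instance
    JAIL=/export/webjail1
    mkdir -p $JAIL/bin $JAIL/sbin $JAIL/usr/bin $JAIL/lib $JAIL/etc

    # Copy in the essential binaries, plus the shared libraries they need
    cp /bin/sh /bin/ls $JAIL/bin/
    ldd /bin/sh                  # lists required libraries; copy each into $JAIL/lib
    cp /lib/libc.so.1 $JAIL/lib/

    # Minimal configuration for the instance
    cp /etc/passwd /etc/group /etc/resolv.conf $JAIL/etc/

    # Enter the mini-OS instance; the web server is then started from inside
    chroot $JAIL /bin/sh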

Later on, Sun Microsystems introduced a containerization feature, starting with Solaris 10, named “Zones”. It introduced the concept of lightweight containers sharing a single host OS kernel, fully isolated from one another, with progressively more advanced features for resource sharing and capping (hard allocation of vCPUs, memory, and virtual network instances), along with provisions to use a dedicated ZFS pool per container. I used this functionality between 2007 and 2012 in conjunction with Sun Cluster (the host and application cluster-ware for Solaris) to build farms of Solaris 10 containers running in full HA mode, migrating a whole bunch of standalone legacy apps and Oracle Database instances onto them. That project saw a massive data center footprint reduction and significant cost savings, and it brought agility and automation to the process: we could set up these massive database instances within minutes using shell scripts.
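From memory, defining one of those zones looked roughly like this (the zone name, paths, NIC, address, and resource caps are all illustrative, not from any real configuration):

    # Define a zone with capped CPU and memory (Solaris 10 era syntax)
    zonecfg -z dbzone1 <<EOF
    create
    set zonepath=/zones/dbzone1
    set autoboot=true
    add capped-cpu
    set ncpus=2
    end
    add capped-memory
    set physical=8g
    end
    add net
    set physical=e1000g0
    set address=10.0.0.21/24
    end
    commit
    EOF

    # Install and boot it
    zoneadm -z dbzone1 install
    zoneadm -z dbzone1 boot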

Now onto Kubernetes

Kubernetes brings back memories of my previous work on containers, but in a completely different light. Much is the same, but much is also different. Solaris zones could not run a non-Solaris OS or non-Solaris code (except for a brief foray into running Linux branded zones, which never really saw mainstream use).

The Solaris zone-based approach had one major issue: the zones needed to be “patched” in the traditional “UNIX” way, especially on Solaris 10 based installations. When Sun/Oracle released Solaris 11, they did away with the old patch management system, which was cumbersome and often fraught with danger; it required considerable time and effort, with many engineers spending nights and weekends to patch effectively and test functionality. So while Solaris zones provided some of the advantages of ‘modern’ containerization via Docker and Kubernetes, they did not render all pets into cattle. They did allow us to scale out by rapidly deploying (and even automating the deployment of) clustered zones with far greater agility than deploying physical servers, or even VMs in a VMware farm.

The ‘modern’ container approach has some specific features, such as abstracting the uniqueness out of the containers so that they can be turned into cattle (as opposed to pets, which require far more individual care and nurturing). If an application needs to be patched, a new container image is built and deployed. This provides the ability to integrate version control and to automate the creation and delivery of applications (via container images) in a very fine-grained manner, which in turn allows for CI/CD, blue-green deployment strategies, and so on, as sketched below.
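A minimal sketch of that patch-via-new-image workflow (the app name, registry, and tags here are hypothetical):

    # Build and version a new image containing the patched application
    docker build -t registry.example.com/myapp:v2 .
    docker push registry.example.com/myapp:v2

    # Roll the new image out; Kubernetes replaces old pods with new ones
    # in a controlled rolling update
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

    # Watch the rollout, and back out if something looks wrong
    kubectl rollout status deployment/myapp
    kubectl rollout undo deployment/myapp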

Initially, I think the main focus was on stateless applications, which could run anywhere, could rapidly be recreated when the underlying infrastructure failed, and could be scaled out dynamically. But modern datacenters and applications have stateful aspects to them as well, such as databases, key-value stores, etc. Hence additional functionality was added to support persistent storage options.
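In Kubernetes terms, a stateful workload claims storage that outlives any individual pod; a minimal sketch (the claim name and size are illustrative):

    # A persistent volume claim; pods reference the claim, and the cluster
    # binds it to backing storage that survives pod restarts and rescheduling
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: db-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    EOF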

The 12-factor app methodology, a set of rules of thumb for ‘modern’ containerization (part of what’s now the new industry buzzword, “infrastructure modernization”), is a great resource for thinking these considerations through. The official Kubernetes documentation provides insight into what Kubernetes offers, what it IS, and what it IS NOT. To summarize, Kubernetes is a container orchestration and scheduling framework that provides network load balancing, self-contained deployment of apps (via containers), the ability to store and share secrets and configurations, network overlay functionality that brings SDN capabilities, resource autoscaling, and some self-healing functionality. In a sense, Kubernetes does what I tried to cobble together with Solaris zones and Sun Cluster many moons ago, but it has the collective brainpower of the open-source community, with its varied backgrounds and experiences, behind it, making it, in essence, an excellent tool of choice for modern application deployment and delivery.
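A few of those features in action, as a rough sketch (the app and file names are hypothetical):

    # Store configuration and secrets centrally, decoupled from the image
    kubectl create configmap myapp-config --from-file=app.properties
    kubectl create secret generic myapp-creds --from-literal=db-password=changeme

    # Expose the app behind a load-balanced service
    kubectl expose deployment myapp --port=80 --type=LoadBalancer

    # Autoscale on CPU utilization
    kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=70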
