
Kicking the Tires on Kubernetes — Part 2

High-level Architecture

First, a disclaimer — the way I learn and understand things is the “Rosetta stone” way. By that I mean I try to map my subject of study onto something I’m already familiar with (when such an association is practical). So please bear with me as I ramble through the rest of this particular post, and feel free to comment on anything you think can be improved upon.

The primary workload that runs in a Kubernetes cluster is known as a Pod. A Pod is a logical grouping of one or more containers that together constitute an application service. The most common example is an Nginx instance running within a single container in a Pod. But it could also be a web instance (nginx) running in one container and a fluentd instance running in another container as a log processor, in what is known as the “side-car” design pattern. This article does a great job of explaining the different container design patterns that can be used. The Pod is addressable over what is called the container network, an overlay on the node-level network topology built with an SDN implementation (called a CNI, or Container Network Interface, plugin) such as Flannel, Weave Net, or Project Calico.
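To make the side-car pattern concrete, here is a minimal sketch of a two-container Pod manifest — an nginx web container and a fluentd side-car sharing a log volume. The names, image tags, and log path are illustrative assumptions, not taken from a real deployment:

```yaml
# Illustrative side-car Pod: nginx writes logs into a shared volume,
# and a fluentd container reads them from the same volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: web
    image: nginx:1.19
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx   # nginx writes access/error logs here
  - name: log-processor
    image: fluentd:v1.11
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx   # fluentd tails the same files
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                  # scratch volume shared by both containers
```

Both containers share the Pod’s network namespace and the emptyDir volume, which is what makes the side-car arrangement work without any extra plumbing.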

As with any clustered solution, there are three main categories of components involved in Kubernetes at the infrastructure level.

  • Nodes
  • Network
  • Storage

The nodes fall into two categories — Master nodes and Worker nodes. The Master nodes are the brains of the cluster, i.e., they form what is called the Control Plane: they determine how and where containers run, govern how users and other services communicate with the cluster (and the resources within it), and provide high availability. The Master nodes run four components —

  • kube-controller-manager — A daemon that runs on the master nodes (i.e., control plane nodes); it is a control loop that watches the state of the cluster through the apiserver and works to move the cluster toward the desired state. The controller manager is a single binary that provides the following controllers —
    • Node Controller — Keeps track of Cluster Node states
    • Replication Controller — Keeps track of and ensures the proper number of pods are running (for every replication controller object in the system)
    • Endpoints Controller — Populates the Endpoints objects (i.e., joins Services and Pods)
    • Service Account and Token Controllers — Create default accounts and API access tokens for new namespaces
  • kube-scheduler — This is the process that assigns Pods to Nodes.
  • kube-apiserver — This service validates and configures data for the API objects such as Pods, services, replicationcontrollers etc. It services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.
  • etcd — The key-value (KV) store that maintains state information and associated cluster metadata. This is a distributed, highly available service and requires a minimum of three nodes in HA mode.

There’s also a “cloud-controller-manager” service that doesn’t apply to on-prem distributions but is applicable to public-cloud-based Kubernetes implementations.

The worker nodes constitute the actual compute resources of the cluster, and contain three main services —

  • kubelet — An agent that runs on each node in the cluster; it is responsible for ensuring that the containers described in PodSpecs (i.e., YAML or JSON objects that define a Pod) supplied by the apiserver are running and healthy.
  • kube-proxy — A service instance that acts as a network proxy on each node, and maintains network rules on the cluster nodes. These allow for communication to/from Pods within the cluster.
  • A container runtime — the name is self-explanatory. Several container runtimes are supported, such as Docker, CRI-O, containerd, and any other implementation of the Kubernetes CRI (Container Runtime Interface).

There are other functionalities that come with a Kubernetes cluster build, such as a DNS service that allows for a user-friendly way to reference Pods and Services. This DNS service is internal to the Kubernetes cluster, but it can be extended onto external DNS infrastructure via the ExternalDNS service or the k8s_external plugin to CoreDNS. This blog post does a good job of discussing the details.
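Inside the cluster, a Service is resolvable at a well-known name of the form <service>.<namespace>.svc.cluster.local (e.g., mysql-svc.default.svc.cluster.local). As a sketch of the k8s_external approach, a CoreDNS Corefile stanza along the following lines would additionally publish Services under an external zone — example.org here is an assumed placeholder zone, not from my cluster:

```
# Sketch of a CoreDNS Corefile using the k8s_external plugin.
# "example.org" is a placeholder zone for illustration.
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
}
```

With this in place, a Service named mysql-svc in the default namespace would also resolve as mysql-svc.default.example.org, a zone that external DNS servers can delegate to CoreDNS.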

What I’ve written above is all information readily available on the official Kubernetes website, but somehow it felt apropos in the context of this series. Now, to pull this brief overview together in the context of the workloads and how they are delivered, consider the following.

Containerization goes hand-in-hand with microservices architecture, wherein the venerable principle envisioned and brought to life by the sages of UNIX is embodied at a distributed scale. What I mean is that the UNIX (and hence Linux) philosophy holds that each of the various tools provided therein is designed to do one thing, and one thing alone, and that they can be chained together to achieve a larger objective. As an example, consider the following —


$ find . -name \*.txt | wc -l
1926

Here we’re searching for all files with the extension ‘.txt’ in the current directory on my trusted MBP (BSD UNIX under the hood) and counting them. Of course, if you’re reading this, you’re no stranger to UNIX or Linux, and hence will have no trouble understanding what I mean. Getting back to microservices, it would be akin to designing one pod to only do the “find”, another to only do the count (“wc”), and providing a way to send the output of the find pod to the count pod. I’m reducing this to a very simple example for the sake of elucidation, but there are several books that describe microservices architecture and what it entails in great detail. To summarize, let me quote from the website microservices.io —

Microservices architecture is an architectural style that structures an application as a collection of services that are highly maintainable and testable, loosely coupled, independently deployable, organized around business capabilities

Pods are network-enabled applications that other applications need to communicate with, and then there are the end users who need to reach those Pods. Usually, a Pod is instantiated in a declarative manner via a “Deployment”. A Deployment takes the form of a YAML or JSON file that provides the specifications for the Pod, including details such as the number of replicas that should run in the cluster (e.g., for a web-based service, we’d like to run more than one instance of a web server such as Nginx).
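A minimal Deployment along those lines might look like the following sketch — the name, labels, and image tag are illustrative assumptions:

```yaml
# Illustrative Deployment running three nginx replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired number of Pod replicas
  selector:
    matchLabels:
      app: web                # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
```

Applying this with kubectl apply hands it to the apiserver, and the controller machinery described earlier then works to keep three replicas of the Pod running.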

Consider applications deployed within Kubernetes as living inside a private network, such that only applications within that network can communicate with each other. To let the “outside world” communicate with them, they must be explicitly exposed. A Deployment (or another resource) is therefore exposed via a Service (i.e., a network service), which provides some form of load-balancing, port-forwarding, etc. For instance, I’ve exposed a MySQL pod running in my test cluster to the outside world, such that I can reach it on the port specified in the Service.

$ kubectl expose po mysql-1605200795-6949fc588-sps8w --type=NodePort --name=mysql-svc 

 
$ kubectl get svc mysql-svc
 NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
 mysql-svc   NodePort   10.111.235.117   <none>        3306:31812/TCP   11m  

$ kubectl describe svc mysql-svc
 Name:                     mysql-svc
 Namespace:                default
 Labels:                   app=mysql-1605200795
                           pod-template-hash=6949fc588
                           release=mysql-1605200795
 Annotations:              <none>
 Selector:                 app=mysql-1605200795,pod-template-hash=6949fc588,release=mysql-1605200795
 Type:                     NodePort
 IP:                       10.111.235.117
 Port:                     <unset>  3306/TCP
 TargetPort:               3306/TCP
 NodePort:                 <unset>  31812/TCP
 Endpoints:                10.244.3.4:3306
 Session Affinity:         None
 External Traffic Policy:  Cluster
 Events:                   <none> 
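For reference, the imperative kubectl expose above is roughly equivalent to the following declarative Service manifest. The selector and ports mirror the kubectl describe output; note that in practice the nodePort field is usually omitted and auto-assigned by the cluster — pinning 31812 here merely mirrors this example:

```yaml
# Declarative equivalent of the "kubectl expose" command above.
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
spec:
  type: NodePort
  selector:                       # copied from the describe output
    app: mysql-1605200795
    pod-template-hash: 6949fc588
    release: mysql-1605200795
  ports:
  - port: 3306                    # Service (cluster) port
    targetPort: 3306              # container port on the Pod
    nodePort: 31812               # normally auto-assigned; pinned to match
```

The Service then routes traffic arriving on port 31812 of any node to port 3306 of the matching Pod endpoints.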

Now, if I try to reach this from the external network on my laptop, where 192.168.7.10 is the virtual IP associated with my HA Kubernetes cluster control plane, I get the MySQL server’s greeting (the raw handshake bytes, showing version 5.7.30) —

$ telnet 192.168.7.10 31812
 Trying 192.168.7.10...
 Connected to k8s.realsysadmin.com.
 Escape character is '^]'.
 J
 5.7.30??0H&QdCB0K^
 #.pVmysql_native_password 
