
Kicking the Tires on Kubernetes — Part 5

The topic of providing persistent storage in the Kubernetes world is simpler today than it was a couple of years back. For the purposes of this series, and for my personal edification, I opted to set up GlusterFS on the Kubernetes nodes in what is known as a “hyper-converged” deployment. The industry terms “converged” and “hyper-converged” infrastructure denote the manner in which storage is presented to the compute infrastructure. If the compute connects to a remote storage layer via some network mechanism (typically a “Storage Area Network”), the model is called “converged”; if the storage layer resides on the same physical nodes as the compute, the model is called “hyper-converged”.

A little anecdote from my early days at Cloudera comes to mind, when I was tasked with ascertaining whether a GlusterFS backend could provide a viable storage layer for a Hadoop cluster. During that exercise, I had to install CDH4 and connect it to a GlusterFS cluster using a plugin that allowed GlusterFS to replace HDFS, the default distributed storage layer of the Hadoop stack. This was more than six years ago, and unfortunately the stringent performance demands of Hadoop workloads meant that GlusterFS did not make the cut. As a technology, however, Gluster is very good, and we had toyed with using it as a distributed, scalable network storage layer in a previous lifetime (way back in 2012-13). With solid-state drives getting progressively cheaper and faster, storage performance really becomes a function of the network on which the storage layer is presented: a single SATA SSD can provide roughly 400 MB/s of sequential IO, while an NVMe SSD can provide roughly 800 MB/s to 1 GB/s.

The graphic above is taken from the public documentation of the Gluster project (https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/).

Gluster works by setting up distributed storage volumes of various types, chosen according to the data redundancy and performance the target workload requires. This hands-on workshop does a great job of stepping through configuring Gluster and installing Heketi, the RESTful management service for Gluster. As I indicated in a previous post in this series, I could have opted for something like Ceph or Portworx, but my humble lab had neither the storage nor the horsepower to do anything functional with either of those two solutions.
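Once the trusted storage pool is up, the Gluster CLI on any of the nodes offers a quick sanity check that the peers see each other and that the volumes Heketi creates look healthy (outputs omitted here):

$ sudo gluster peer status
$ sudo gluster volume info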

In my test environment, I used all the nodes of the cluster in a hyper-converged mode, with GlusterFS running on the Kubernetes nodes. It is possible to run Gluster and Heketi as a service within Kubernetes, or as an external service running alongside Kubernetes. I opted to run it externally, as a parallel service —

$ heketi-cli --secret "password" --user admin cluster info d94e72ab3d24aac75f98267dbcfb6cc8 
Cluster id: d94e72ab3d24aac75f98267dbcfb6cc8
Nodes:
3b7cbec32fc8f56ffc3be1fd81147e1a
4c9798884377d35742729d36ff9219f1
60c66fcbc252965be21a1b916df1e7f9
b618b3538e4a6b3ca3589cfc6b90a730
bee1d57a805d6323a726fbb90ff2a580
Volumes:
266e53e25893dc4949ec513b2c742664
8f91d54dda06b9d665fc65bceca5b30e
Block: true
File: true
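The cluster id passed to the command above can be looked up first with the list subcommand:

$ heketi-cli --secret "password" --user admin cluster list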

And to get the topology —

$ heketi-cli --secret "password" --user admin topology info

Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8

    File:  true
    Block: true

    Volumes:

	Name: vol_266e53e25893dc4949ec513b2c742664
	Size: 1
	Id: 266e53e25893dc4949ec513b2c742664
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Mount: 192.168.7.15:vol_266e53e25893dc4949ec513b2c742664
	Mount Options: backup-volfile-servers=192.168.7.11,192.168.7.14,192.168.7.12,192.168.7.13
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 06f3f24487865c204d37cfd02f21a970
			Path: /var/lib/heketi/mounts/vg_111b083c9b70c5225ec0deb55a76e34b/brick_06f3f24487865c204d37cfd02f21a970/brick
			Size (GiB): 1
			Node: bee1d57a805d6323a726fbb90ff2a580
			Device: 111b083c9b70c5225ec0deb55a76e34b

			Id: 75eefd0490902a4b7cca651c6d89f7b7
			Path: /var/lib/heketi/mounts/vg_34cb0ed290ffab8e1b3e5a6f6366d50b/brick_75eefd0490902a4b7cca651c6d89f7b7/brick
			Size (GiB): 1
			Node: b618b3538e4a6b3ca3589cfc6b90a730
			Device: 34cb0ed290ffab8e1b3e5a6f6366d50b

			Id: e0457339b2b33e420efc548983a594fa
			Path: /var/lib/heketi/mounts/vg_3e87b449588b8b27cf3bc4134e17cdb5/brick_e0457339b2b33e420efc548983a594fa/brick
			Size (GiB): 1
			Node: 60c66fcbc252965be21a1b916df1e7f9
			Device: 3e87b449588b8b27cf3bc4134e17cdb5


	Name: vol_8f91d54dda06b9d665fc65bceca5b30e
	Size: 8
	Id: 8f91d54dda06b9d665fc65bceca5b30e
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Mount: 192.168.7.15:vol_8f91d54dda06b9d665fc65bceca5b30e
	Mount Options: backup-volfile-servers=192.168.7.11,192.168.7.14,192.168.7.12,192.168.7.13
	Durability Type: replicate
	Replica: 3
	Snapshot: Enabled
	Snapshot Factor: 1.00

		Bricks:
			Id: 3ed3f1175ab10e7d0049a4de46cdc88b
			Path: /var/lib/heketi/mounts/vg_e2a934d91180d50552187cf669e25e21/brick_3ed3f1175ab10e7d0049a4de46cdc88b/brick
			Size (GiB): 8
			Node: 4c9798884377d35742729d36ff9219f1
			Device: e2a934d91180d50552187cf669e25e21

			Id: 4508c1144f3fcf9e6d29b58225ac37ca
			Path: /var/lib/heketi/mounts/vg_3e87b449588b8b27cf3bc4134e17cdb5/brick_4508c1144f3fcf9e6d29b58225ac37ca/brick
			Size (GiB): 8
			Node: 60c66fcbc252965be21a1b916df1e7f9
			Device: 3e87b449588b8b27cf3bc4134e17cdb5

			Id: a35c59ee0f415b7c8a6cf24f5dbd9b2d
			Path: /var/lib/heketi/mounts/vg_111b083c9b70c5225ec0deb55a76e34b/brick_a35c59ee0f415b7c8a6cf24f5dbd9b2d/brick
			Size (GiB): 8
			Node: bee1d57a805d6323a726fbb90ff2a580
			Device: 111b083c9b70c5225ec0deb55a76e34b


    Nodes:

	Node Id: 3b7cbec32fc8f56ffc3be1fd81147e1a
	State: online
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Zone: 1
	Management Hostnames: k8s05
	Storage Hostnames: 192.168.7.15
	Devices:
		Id:e65030ad8527b004a1a0c8602de82942   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):500     
			Bricks:

	Node Id: 4c9798884377d35742729d36ff9219f1
	State: online
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Zone: 1
	Management Hostnames: k8s01
	Storage Hostnames: 192.168.7.11
	Devices:
		Id:e2a934d91180d50552187cf669e25e21   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):8       Free (GiB):491     
			Bricks:
				Id:3ed3f1175ab10e7d0049a4de46cdc88b   Size (GiB):8       Path: /var/lib/heketi/mounts/vg_e2a934d91180d50552187cf669e25e21/brick_3ed3f1175ab10e7d0049a4de46cdc88b/brick

	Node Id: 60c66fcbc252965be21a1b916df1e7f9
	State: online
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Zone: 1
	Management Hostnames: k8s04
	Storage Hostnames: 192.168.7.14
	Devices:
		Id:3e87b449588b8b27cf3bc4134e17cdb5   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):9       Free (GiB):490     
			Bricks:
				Id:4508c1144f3fcf9e6d29b58225ac37ca   Size (GiB):8       Path: /var/lib/heketi/mounts/vg_3e87b449588b8b27cf3bc4134e17cdb5/brick_4508c1144f3fcf9e6d29b58225ac37ca/brick
				Id:e0457339b2b33e420efc548983a594fa   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_3e87b449588b8b27cf3bc4134e17cdb5/brick_e0457339b2b33e420efc548983a594fa/brick

	Node Id: b618b3538e4a6b3ca3589cfc6b90a730
	State: online
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Zone: 1
	Management Hostnames: k8s02
	Storage Hostnames: 192.168.7.12
	Devices:
		Id:34cb0ed290ffab8e1b3e5a6f6366d50b   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):1       Free (GiB):498     
			Bricks:
				Id:75eefd0490902a4b7cca651c6d89f7b7   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_34cb0ed290ffab8e1b3e5a6f6366d50b/brick_75eefd0490902a4b7cca651c6d89f7b7/brick

	Node Id: bee1d57a805d6323a726fbb90ff2a580
	State: online
	Cluster Id: d94e72ab3d24aac75f98267dbcfb6cc8
	Zone: 1
	Management Hostnames: k8s03
	Storage Hostnames: 192.168.7.13
	Devices:
		Id:111b083c9b70c5225ec0deb55a76e34b   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):9       Free (GiB):490     
			Bricks:
				Id:06f3f24487865c204d37cfd02f21a970   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_111b083c9b70c5225ec0deb55a76e34b/brick_06f3f24487865c204d37cfd02f21a970/brick
				Id:a35c59ee0f415b7c8a6cf24f5dbd9b2d   Size (GiB):8       Path: /var/lib/heketi/mounts/vg_111b083c9b70c5225ec0deb55a76e34b/brick_a35c59ee0f415b7c8a6cf24f5dbd9b2d/brick
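For reference, a topology like the one above is typically described in a JSON file and loaded into Heketi with heketi-cli topology load. A trimmed sketch showing a single node (the remaining nodes follow the same pattern; the file name is arbitrary and the structure follows the Heketi sample topology rather than my exact file):

$ cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["k8s01"],
              "storage": ["192.168.7.11"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
$ heketi-cli --secret "password" --user admin topology load --json=topology.json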

Ensure that the following ports are open —

firewall-cmd --permanent --add-port=24007/tcp
firewall-cmd --permanent --add-port=24008/tcp
firewall-cmd --permanent --add-port=2222/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --permanent --add-port=8080/tcp
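Since these rules are added with --permanent, reload the firewall for them to take effect:

firewall-cmd --reload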

Refer to the latest documentation for updated port details — https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
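For Helm charts to get storage dynamically, the cluster also needs a StorageClass pointing at the external Heketi endpoint. Here is a minimal sketch of the kind of class behind the “hyperconverged” name that shows up in the PV output further down (the Heketi URL, secret name, and default-class annotation are assumptions about my setup rather than a verbatim copy):

# Assumed secret holding the Heketi admin key, referenced by the StorageClass below
kubectl create secret generic heketi-secret \
  --type="kubernetes.io/glusterfs" \
  --from-literal=key="password" --namespace=default

# Assumed StorageClass using the in-tree GlusterFS provisioner against the external Heketi service
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperconverged
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.7.15:8080"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:3"
EOF

Marking it as the default class is what lets a chart that does not specify a storageClass (such as stable/mysql below) get a replicated Gluster volume automatically.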

Install Helm and deploy something to test this out

With Helm installed on my admin node (my trusty laptop), I was able to install MySQL backed by the newly minted persistent storage layer.

$ brew install helm
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add stable https://charts.helm.sh/stable
$ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"dirty", GoVersion:"go1.15.3"}

After installing Helm, we can deploy services such as MySQL —

$ helm install stable/mysql --generate-name
NAME: mysql-1605200795
LAST DEPLOYED: Thu Nov 12 11:06:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql-1605200795.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1605200795 -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql-1605200795 -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql-1605200795 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

Check all the resources launched —

$ kubectl get all -l app=mysql-1605200795
NAME                                   READY   STATUS    RESTARTS   AGE
pod/mysql-1605200795-6949fc588-sps8w   1/1     Running   0          135m

NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/mysql-1605200795   ClusterIP   10.97.35.89   <none>        3306/TCP   135m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-1605200795   1/1     1            1           135m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-1605200795-6949fc588   1         1         1       135m

Check the PV being used —

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS     REASON   AGE
pvc-28508741-1b70-454f-87f1-9df077af8117   8Gi        RWO            Delete           Bound    default/mysql-1605200795   hyperconverged            136m
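The claim behind that PV, and the matching volume on the Gluster side, can be cross-checked as well:

$ kubectl get pvc --namespace default
$ heketi-cli --secret "password" --user admin volume list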
