Running your own Bare-Metal Kubernetes cluster

Godfrey Menezes
Oct 16, 2020

I just happened to finish reading the book ‘Kubernetes: Up and Running’. The appendix had an interesting project, ‘Building a Raspberry Pi Kubernetes Cluster’, which piqued my interest, so I set out to build my own!

First, the parts that I used

  1. 3 Raspberry Pi 4 boards (Raspberry Pi 2/3 will also work), $55 each — $172.95 (shipping and taxes included)
  2. 3 microSD cards (I used 64 GB each; 8 GB will do) — $21.98 + taxes
  3. 3 12-inch USB-C cables to power the Pis — $10
  4. One 5-port USB charger — $17

The items below are extras if you intend to run the Pis over Ethernet. I am using WiFi, so I didn't find them necessary, which saved me some money. I did have an Ethernet cable lying around, and that helped with bringing up the RasPis after flashing the OS onto the cards.

Extra items if you want to connect the Pis via Ethernet:

One 5-port 10/100 Fast Ethernet switch — $8
Cat. 6 Ethernet cables — $10
One Raspberry Pi stackable case capable of holding four Pis — $40

Getting the Raspberry Pis to work

Flash the OS image (I used HypriotOS) onto the SD cards. To avoid getting into the details of the how, I followed a guide online. Once done, insert the SD cards into the Pis.
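For reference, here is a minimal flashing sketch from a Linux machine. The image file name and the /dev/sdX device are assumptions; double-check the device with lsblk before writing, since dd will happily overwrite the wrong disk.

unzip hypriotos-rpi-v1.12.3.img.zip
sudo dd if=hypriotos-rpi-v1.12.3.img of=/dev/sdX bs=4M status=progress conv=fsync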

Connect the Pis, one at a time, to your router/modem. I have an AT&T-supplied ARRIS BGW210-700, which has a fair number of Ethernet ports I can use. Power them up, and after a little while you will notice them in the router's device list with their IP addresses. My Pis were listed as black-pearl, HypriotOS's default hostname. From my Ubuntu laptop, I ssh'd into each one with the default credentials pirate/hypriot. Run the following commands to update and upgrade them:

sudo apt update
sudo apt full-upgrade
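If you would rather update all three Pis from your laptop in one go, a small loop does it. A sketch, assuming the IP addresses used later in this article; you may be prompted for the password (hypriot) on each host.

for host in 192.168.1.190 192.168.1.191 192.168.1.192; do
  ssh pirate@$host 'sudo apt update && sudo apt full-upgrade -y'
done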

Setting up WiFi access

I had a fair amount of trouble trying to set up WiFi access on the RasPi. The help document, while extensive, did not get me there. I finally found a StackExchange post that helped me resolve it. Here are the contents of my /etc/network/interfaces:

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Add the connectivity details using the following command, substituting SSID and KEY with your wireless network's name and passphrase.

wpa_passphrase SSID KEY >> /etc/wpa_supplicant/wpa_supplicant.conf
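Note that wpa_passphrase also records the plaintext passphrase as a comment in the generated block, which you may want to delete. To pick up the new configuration without a full reboot, restarting the interface should work on this ifupdown-based setup (a reboot is the safe fallback):

sudo ifdown wlan0
sudo ifup wlan0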

Setting the Hostname and a Fixed IP Address

Setting the hostnames for the Pis was a challenge. I tried to set the hostname using raspi-config, but every time I rebooted, the name would go back to black-pearl. After googling around, I found the following steps that worked for me.

Edit /etc/hosts and set the machine's name:

127.0.1.1 masterpi masterpi

Edit /etc/hostname and put the machine's name in the file:

masterpi

This is the clincher: edit /etc/cloud/cloud.cfg to make the changes permanent.

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: true

Also comment out the following in the cloud_init_modules section of the same file.

# The modules that run in the 'init' stage
cloud_init_modules:
# - update_hostname
# - update_etc_hosts
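Putting the three steps together, here is a sketch of a one-shot rename script; the script name and argument handling are my own, and it assumes the default black-pearl hostname and that preserve_hostname is already set to true.

#!/bin/bash
# rename-pi.sh NEW_NAME -- run as root on the Pi being renamed
NEW_NAME="$1"
echo "$NEW_NAME" > /etc/hostname                 # set the new hostname
sed -i "s/black-pearl/$NEW_NAME/g" /etc/hosts    # update the 127.0.1.1 entry
reboot                                           # changes take effect on boot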

Finally, for easier communication between the systems, I designated the hostnames masterpi for the master and pinode-1 and pinode-2 for the worker nodes. Allocate a fixed private IP address to each Pi on your home network so that the machines can reach each other by hostname. In my case, I added these entries to the /etc/hosts file on all three machines to allow them to communicate with each other:

192.168.1.190 masterpi
192.168.1.191 pinode-1
192.168.1.192 pinode-2
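These addresses need to stay fixed, e.g. via DHCP reservations in your router. If your router does not support reservations, you can pin the address on the Pi itself in /etc/network/interfaces instead; a sketch for masterpi, assuming a 192.168.1.1 gateway:

allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.1.190
    netmask 255.255.255.0
    gateway 192.168.1.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf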

Setting up the Cluster

So we have our cluster components ready; let's work towards setting them up. The first step is to initialize the master. The 10.244.0.0/16 pod-network CIDR below matches Flannel's default, which we will install later.

# kubeadm init --pod-network-cidr=10.244.0.0/16

If all is successful, it will end with the following:

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.190:6443 --token rgc3v3.57wawjohvov3uq86 \
    --discovery-token-ca-cert-hash sha256:259269a2a40ecee85e8227b0d90cfc35e8f94826c412721e3fa460c70fe9c15d

Make a note of the kubeadm join command at the end; it will be used on the worker nodes to join the cluster. Run the following as the non-root user on the master to set up the kubectl configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
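One note on the join token: it expires after 24 hours by default. If you come back later to add another node, generate a fresh join command on the master:

kubeadm token create --print-join-command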

After all the commands have been run, check the node list. If all is good, it will show that the master node is running and available:

$ kubectl get node
NAME STATUS ROLES AGE VERSION
masterpi Ready master 3m23s v1.19.2

Moving to the worker nodes, execute the ‘kubeadm join’ command to join the cluster. Here is the first worker node:

HypriotOS/armv7: root@pinode-1 in ~
# kubeadm join 192.168.1.190:6443 --token rgc3v3.57wawjohvov3uq86 \
> --discovery-token-ca-cert-hash sha256:259269a2a40ecee85e8227b0d90cfc35e8f94826c412721e3fa460c70fe9c15d

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

And the second worker node:

HypriotOS/armv7: root@pinode-2 in ~
# kubeadm join 192.168.1.190:6443 --token rgc3v3.57wawjohvov3uq86 \
> --discovery-token-ca-cert-hash sha256:259269a2a40ecee85e8227b0d90cfc35e8f94826c412721e3fa460c70fe9c15d

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Give it a moment, then check from the master node; it will show that the worker nodes have been added to the cluster and our cluster is ready.

$ kubectl get node
NAME STATUS ROLES AGE VERSION
masterpi Ready master 6m53s v1.19.2
pinode-1 Ready <none> 92s v1.19.2
pinode-2 Ready <none> 83s v1.19.2
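The <none> under ROLES is purely cosmetic. If you would like the workers labeled like the master, you can add the conventional role label yourself; the label key is a convention, not something the scheduler requires.

kubectl label node pinode-1 node-role.kubernetes.io/worker=worker
kubectl label node pinode-2 node-role.kubernetes.io/worker=worker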

Inter-communication between the Nodes

We have the nodes ready and part of the cluster, but their pods still need a network to communicate over. For a detailed understanding, I'd recommend reading up on Kubernetes networking add-ons. To facilitate the communication, I've used Flannel, applying the manifest available on GitHub:

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created
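While the Flannel DaemonSet rolls out across the nodes, you can watch the kube-system pods come up:

kubectl get pods -n kube-system -w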

Give it some time to complete. The final output should look similar to this:

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-4qz2p 0/1 Running 0 9m33s
kube-system coredns-f9fd979d6-xkslk 0/1 Running 0 9m33s
kube-system etcd-masterpi 1/1 Running 0 9m39s
kube-system kube-apiserver-masterpi 1/1 Running 0 9m39s
kube-system kube-controller-manager-masterpi 1/1 Running 0 9m39s
kube-system kube-flannel-ds-4jqh5 1/1 Running 0 17s
kube-system kube-flannel-ds-7tm82 1/1 Running 0 17s
kube-system kube-flannel-ds-z8pr2 1/1 Running 0 17s
kube-system kube-proxy-ctchx 1/1 Running 0 4m29s
kube-system kube-proxy-n7nt6 1/1 Running 0 9m33s
kube-system kube-proxy-wzhxm 1/1 Running 0 4m19s
kube-system kube-scheduler-masterpi 1/1 Running 0 9m39s

Testing the Cluster setup

Now that we have the cluster up and running, let's try deploying a simple nginx container and check that it works as desired.

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

Check if the deployment has been created.

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 40s

With the deployment successful, create a service that exposes it.

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl create service nodeport nginx --tcp=80:80

service/nginx created

Just to make sure the deployment's pod is up and running:

$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-2959q 1/1 Running 0 80s
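With two workers available, you can also scale the deployment to spread pods across the nodes; for example:

kubectl scale deployment nginx --replicas=2
kubectl get pod -o wide   # shows which node each pod landed on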

To access the service, get its details:

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m
nginx NodePort 10.104.25.140 <none> 80:32348/TCP 61s

and describing the service tells us the IP address on which the service is accessible and the NodePort to use:

HypriotOS/armv7: pirate@masterpi in ~
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.104.25.140
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 32348/TCP
Endpoints: 10.244.2.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Issue a request to one of the nodes on the exposed port using curl:

HypriotOS/armv7: pirate@masterpi in ~
$ curl http://192.168.1.191:32348/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Success!!!!

So, just to summarize: we built a Raspberry Pi Kubernetes cluster and deployed a simple application to test it and prove that it works!

Honorable Mentions

Since the publication of this article, I've noticed that I missed some glaring steps, e.g. the installation of the software that actually runs Kubernetes. So here are the missing pieces:

Install the software for Kubernetes (kubeadm, kubelet, and kubectl).
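For completeness, here is a sketch of the kubeadm/kubelet/kubectl install as it was commonly done on Debian-based systems around this time; the package repository layout has since changed, so check the current official documentation.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl   # keep apt from upgrading them behind your back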

Install Docker, the container runtime.
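Docker's convenience script supports Raspbian-derived systems; a sketch:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh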

Resolution for the 'Get http://localhost:10248/healthz' error.

Turn swap off.
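The kubelet refuses to start while swap is on. A sketch, assuming the Raspbian-style dphys-swapfile service; verify the service name on your image before disabling it.

sudo swapoff -a                        # turn swap off immediately
sudo systemctl disable dphys-swapfile  # keep it off across reboots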
