Kubernetes Install

We have chosen to use K3s as the Kubernetes flavor for our setup, as it offers a complete Kubernetes experience while requiring fewer resources in terms of CPU and RAM compared to other options available.

With your favorite SSH client, log into Node1.

For our reference, we will use this table again:

  • Node1 - Kubernetes controller and worker node
  • Node2 - Kubernetes worker node
  • Node3 - Kubernetes worker node and NFS server
  • Node4 - Kubernetes worker node

Initializing Master Node

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable servicelb --token myrandompassword --node-ip <node1-ip> --disable-cloud-controller --disable local-storage

Some explanations:

  • --write-kubeconfig-mode 644 - Sets the file mode of the generated kubeconfig file so that non-root users can read it. It is optional, but you will need it if you plan to connect to the Rancher manager later.
  • --disable servicelb - Disables the built-in service load balancer. We will use MetalLB instead.
  • --token - Sets the token that worker nodes use to join the K3s master node. Choose a secure random value and keep it safe.
  • --node-ip - Binds the K3s node to a specific IP address. Set it to Node1's address on the interface the cluster should use; this matters on multi-homed hosts (for example, Wi-Fi plus Ethernet).
  • --disable-cloud-controller - Disables the K3s cloud controller, which is unnecessary in our use case.
  • --disable local-storage - Disables the K3s local storage provisioner, as we will set up the Longhorn storage provider and an NFS provider instead.
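Since the --token value is what every worker will use to join the cluster, it should be a strong secret rather than a guessable word like "myrandompassword". One quick way to generate a 32-character alphanumeric token (a sketch; it assumes a standard Linux userland with /dev/urandom, base64, tr, and head, all present on Raspberry Pi OS):

```shell
# Generate a 32-character alphanumeric token to use in place of "myrandompassword".
# Assumes a standard Linux userland (/dev/urandom, base64, tr, head).
TOKEN=$(head -c 48 /dev/urandom | base64 | tr -dc 'a-zA-Z0-9' | head -c 32)
echo "$TOKEN"
```

Keep the value somewhere safe: the exact same token must be supplied on every worker node later.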

After the installation is complete, you can verify its success by using the command "kubectl get nodes". This will return output similar to the following:

root@cube01:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
cube01   Ready    control-plane,master   6m22s   v1.25.6+k3s1

Adding Workers

Next we will add the remaining workers. Log in via SSH to Nodes 2 through 4 and execute the following command.

curl -sfL https://get.k3s.io | K3S_URL=https://<node1-ip>:6443 K3S_TOKEN=myrandompassword sh -

You do not have to do this one node at a time; you can execute the command on the rest of the nodes simultaneously.
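If Node1 has SSH access to the workers, the join step can also be scripted. The sketch below is a dry run: it only prints the per-node command so you can review it before actually running each line. The hostnames, the root user, and the URL/token values are assumptions to adapt to your cluster:

```shell
# Dry run: print the K3s join command for each worker node.
# Adjust MASTER_URL (Node1's address; 6443 is the default K3s API port),
# TOKEN, and the hostname list to match your cluster before running them.
MASTER_URL="https://cube01:6443"
TOKEN="myrandompassword"

for node in cube02 cube03 cube04; do
  printf 'ssh root@%s '\''curl -sfL https://get.k3s.io | K3S_URL=%s K3S_TOKEN=%s sh -'\''\n' \
    "$node" "$MASTER_URL" "$TOKEN"
done
```

Once the printed commands look right, execute them (or pipe the output into a shell) to join all three workers in one go.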

When the script finishes, run "kubectl get nodes" on Node1 again. This will return output similar to the following:

root@cube01:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
cube01   Ready    control-plane,master   43m   v1.25.6+k3s1
cube04   Ready    <none>                 38s   v1.25.6+k3s1
cube02   Ready    <none>                 35s   v1.25.6+k3s1
cube03   Ready    <none>                 34s   v1.25.6+k3s1


This step is optional, but it is recommended to label the nodes so that their role shows as "worker" instead of "<none>". Labeling is not just cosmetic; it can also be used to steer certain workloads to a specific node. For instance, if one node has specialized hardware, such as a Jetson Nano, you can label that node and direct applications that require that hardware to run only there.

Let's add the label kubernetes.io/role=worker to the nodes. This one is mostly cosmetic: it produces nicer output from "kubectl get nodes".

kubectl label nodes cube01 kubernetes.io/role=worker
kubectl label nodes cube02 kubernetes.io/role=worker
kubectl label nodes cube03 kubernetes.io/role=worker
kubectl label nodes cube04 kubernetes.io/role=worker

There is another label worth adding. It can be used to express a preference for nodes labeled node-type=worker when scheduling deployments. "node-type" is simply the key name we chose; it can be anything.

kubectl label nodes cube01 node-type=worker
kubectl label nodes cube02 node-type=worker
kubectl label nodes cube03 node-type=worker
kubectl label nodes cube04 node-type=worker

If your cube04 is an Nvidia Jetson, you could label it node-type=jetson instead and use that to run ML containers only on that node.
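To see how such a label is consumed, here is a minimal Deployment manifest that schedules pods only onto nodes labeled node-type=worker via a nodeSelector. The sketch just writes the file for review; the demo-web name and nginx image are illustrative placeholders, not part of this guide's setup:

```shell
# Write a minimal Deployment pinned to nodes labeled node-type=worker.
# The deployment name and nginx image are placeholders for your own workload.
cat > demo-web.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      nodeSelector:
        node-type: worker    # only nodes carrying this label are eligible
      containers:
      - name: web
        image: nginx:stable
EOF
# Review the file, then apply it from Node1:
# kubectl apply -f demo-web.yaml
```

With node-type=jetson on cube04, the same nodeSelector mechanism pins ML workloads to the Jetson node.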

Your nodes should look like this now:

root@cube01:~# kubectl get nodes
NAME     STATUS   ROLES                         AGE   VERSION
cube01   Ready    control-plane,master,worker   52m   v1.25.6+k3s1
cube02   Ready    worker                        10m   v1.25.6+k3s1
cube03   Ready    worker                        10m   v1.25.6+k3s1
cube04   Ready    worker                        10m   v1.25.6+k3s1

You can list all labels per node with:

kubectl get nodes --show-labels

Congratulations, you have successfully set up a Kubernetes cluster! However, there are still additional steps to take to enhance the usability of the cluster. Keep reading our guide for next steps.



Comments (3)

Chris S

The `node-type` label example is redundant with the standard `kubernetes.io/role` label.

Would it not make more sense to align the example with the text and make the `node-type` label reflect the compute module type we are using? Values would then be `cm4`, `jetson`, etc.

Chris S

If you are trying to install K3s on a multi-homed RPI, e.g. that has both a WIFI and Ethernet interface, you must use `--node-ip` instead of `--bind-address` when installing K3s on the master node:

curl -sfL https://get.k3s.io | sh -s - \
--write-kubeconfig-mode 644 \
--disable servicelb \
--token myrandompassword \
--node-ip <node-ip> \
--disable-cloud-controller \
--disable local-storage

Cf. the Multi-homed RPI and bad TLS certificate #6993 issue on GitHub.


Chris S, thanks, changed it now.

