The plan

Having a clear plan and defined goals is always advisable. When setting up a Kubernetes cluster, there are certain requirements that must be met. Some of these requirements may be easily fulfilled by using the Turing Pi V2 board, while others may require further decision-making and planning.

What do we need?

Every Kubernetes cluster in its very basic form needs just a few things:

  • CPU power
  • RAM
  • Storage
    • Persistent storage for apps
  • Networking

CPU and RAM

It is important to note that the resource demands of a single application should not exceed the capacity of a single cluster node. If one of the nodes has only 2 GB of RAM, like the Raspberry Pi 2 GB modules, then no single application scheduled on that node can use more than 2 GB of RAM. Despite its impressive capabilities, Kubernetes cannot merge two 2 GB nodes into one node with 4 GB of RAM. When choosing hardware or planning workloads, keep in mind that more RAM is typically more beneficial than more CPU power.
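In practice this sizing shows up in the resource requests and limits you put on your pods: the scheduler will only place a pod on a node that can satisfy its requests. Below is a minimal sketch; the container image and the numbers are only illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-sizing-example
    spec:
      containers:
        - name: app
          image: nginx:alpine          # illustrative image
          resources:
            requests:
              memory: "512Mi"          # must fit on a single node, e.g. a 2 GB module
              cpu: "250m"
            limits:
              memory: "1Gi"            # keep below the smallest node's usable RAM
              cpu: "500m"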

Storage

Storage is an often-overlooked aspect of setting up a Kubernetes cluster, yet it is crucial for applications that need to persist data. If you plan to run applications that must keep their state, proper storage configuration is a must. If, on the other hand, your applications are stateless and do not need to retain data between restarts, you can skip this aspect entirely.

In our scenario, we have two reliable storage options:

  • Longhorn
  • NFS

While there are other storage options available, these two have proven to be effective solutions. Longhorn requires a minimum of 3 nodes with hard drives in order to establish a storage pool. Using the SD card on Raspberry Pi is not recommended for this purpose (although possible). Instead, we can utilize the onboard SATA connectors and two mini PCIe slots to achieve 3 nodes with SATA SSDs for proper storage. You can find mini PCIe to SATA converters on websites such as AliExpress.

[Image: mini PCIe to SATA adapter (minipcie_to_sata.png)]
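Once installed, Longhorn exposes a StorageClass (named longhorn by default) that applications can request volumes from. A minimal PersistentVolumeClaim sketch, with an illustrative name and size, might look like this:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-pvc                 # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn     # Longhorn's default StorageClass
      resources:
        requests:
          storage: 5Gi               # Longhorn replicates this volume across the storage nodes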

The most straightforward solution, albeit with fewer features, is the NFS plugin. With this option you only need a single disk attached to the onboard SATA connector, and we will designate Node3 as the NFS server. Although this option is easy to set up, it introduces a single point of failure.
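With the NFS approach, the share exported by Node3 can be surfaced to the cluster either through an NFS provisioner or, most simply, as a statically defined PersistentVolume. A rough sketch of the static variant (the export path and size are placeholders; the server address matches Node3 in the schema further below):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.62            # Node3, the NFS server
        path: /srv/nfs/kubernetes    # placeholder export path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""           # bind to the static PV above, not a provisioner
      resources:
        requests:
          storage: 5Gi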

Networking

Networking in a Kubernetes cluster can be a complex topic, particularly around network plugins (CNI) and related components once the cluster is running. The underlying network, however, can be relatively straightforward, especially with the Turing Pi V2 board, which provides a 1 Gbps connection between the nodes. Note that with persistent storage the network can become a bottleneck, because an application may need to reach a disk on a different node than the one it is running on. To ensure proper communication between the nodes, make sure the built-in switch is connected to a router and that each node can receive an IP address. This can be accomplished either by connecting the Turing Pi V2 directly to your existing network or by using one of the nodes as a router.

 

What we will achieve

We will construct a 4-node Kubernetes cluster using 4 Raspberry Pi CM modules, set up the network and persistent storage, and deploy a sample application that uses that storage. This will provide a foundational Kubernetes platform for further exploration and development.
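As a preview of that sample application, a pod that consumes a PersistentVolumeClaim (here the illustrative data-pvc sketched in the storage section) could look roughly like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-app               # illustrative name
    spec:
      containers:
        - name: web
          image: nginx:alpine        # illustrative image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html   # data here survives pod restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-pvc      # the claim sketched earlier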

We will be using this schema:

  • Node1 - 10.0.0.60 - Kubernetes controller and worker node
  • Node2 - 10.0.0.61 - Kubernetes worker node
  • Node3 - 10.0.0.62 - Kubernetes worker node and NFS server
  • Node4 - 10.0.0.63 - Kubernetes worker node
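This article does not prescribe a particular installer, but to illustrate how the schema maps onto a cluster bootstrap, a kubeadm configuration for Node1 (the controller) might look roughly like this; the pod CIDR is only an example and must match whichever CNI you choose:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.0.0.60    # Node1, the controller from the schema above
      bindPort: 6443
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: 10.244.0.0/16       # example pod CIDR, must match the chosen CNI

The remaining nodes would then join the controller at 10.0.0.60:6443 as workers.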

Before setting up Kubernetes, adjust the addresses above to match your network's configuration. In the storage section we will provide instructions for both NFS and Longhorn; however, it is recommended to choose only one storage solution. Lastly, as an optional task, we will replace one of the nodes with an Nvidia Jetson module and aim to use it for machine learning tasks.

We would be interested in learning about your Kubernetes projects utilizing the Turing Pi V2, so please feel free to share in the comments section.

 
