Network settings

Networking in Kubernetes can be a complex and multifaceted challenge. It involves considering various networking scenarios, such as how containers will communicate with one another internally (Internal Networking), and how services will be made accessible to users outside of the cluster (External Networking). This can involve setting up routing rules, configuring load balancers, and deciding on appropriate DNS or IP addresses. Ensuring a robust and secure network architecture is crucial for the proper functioning of a Kubernetes cluster.


In order to keep the networking aspect of Kubernetes simple and contained within the cluster, we will opt for an internal solution rather than using external DNS servers or load balancers. This means that we will assign IP addresses from our network to services and allow them to be accessed through their own IPs, resulting in a more straightforward and manageable setup.

We are going to use Helm in this example, so please read the Helm guide first.


MetalLB

You can read in detail about what MetalLB does on its home page at https://metallb.universe.tf/.

MetalLB is a load-balancer implementation for Kubernetes clusters. It works by assigning IP addresses to Services and responding to network requests for those IPs. This allows services to be exposed to clients outside the cluster, making them more accessible and scalable. It is useful when a Kubernetes cluster has no external load balancer available (for example, on bare metal), or when a cluster administrator wants to run their own load-balancing solution.
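In practice, MetalLB acts on Services of type LoadBalancer: when such a Service is created, MetalLB picks a free address from its configured pool and assigns it as the Service's external IP. A minimal sketch of such a Service (the names `my-app` and the ports here are hypothetical, not part of this guide's deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: LoadBalancer      # MetalLB will assign this Service an external IP
  selector:
    app: my-app           # pods labeled app=my-app receive the traffic
  ports:
  - port: 80              # port exposed on the external IP
    targetPort: 8080      # port the pods listen on
```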

We can install it via Helm. Please do this only on your main control node.

# First, add the MetalLB repository to Helm
helm repo add metallb https://metallb.github.io/metallb
# Check if it was found
helm search repo metallb

Example of "helm search repo metallb"

root@cube01:~# helm search repo metallb
NAME              CHART VERSION   APP VERSION   DESCRIPTION
metallb/metallb   0.13.7          v0.13.7       A network load-balancer implementation for Kube...

Install MetalLB

helm upgrade --install metallb metallb/metallb --create-namespace \
--namespace metallb-system --wait

This will return output similar to the following:

root@cube01:~# helm upgrade --install metallb metallb/metallb --create-namespace \
--namespace metallb-system --wait
Release "metallb" does not exist. Installing it now.
NAME: metallb
LAST DEPLOYED: Tue Jan 31 14:28:54 2023
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs

We now have MetalLB installed, but it does not perform its function yet. We need to give it an IP range that it is allowed to use. In our case, we will allow MetalLB to use the range 10.0.0.70 to 10.0.0.80, which gives us 11 addresses to assign to our services.

cat << 'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.70-10.0.0.80
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF


This will return the following output:

ipaddresspool.metallb.io/default-pool created
l2advertisement.metallb.io/default created


Check

To check that everything worked, note that K3s installs Traefik by default, and its service should have received the first free IP address from our pool.

root@cube01:~# kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       3h45m
metrics-server   ClusterIP      10.43.201.157   <none>        443/TCP                      3h45m
traefik          LoadBalancer   10.43.99.73     10.0.0.70     80:31771/TCP,443:30673/TCP   3h44m

root@cube01:~# kubectl get events -n kube-system --field-selector involvedObject.name=traefik
LAST SEEN   TYPE     REASON         OBJECT            MESSAGE
61s         Normal   IPAllocated    service/traefik   Assigned IP ["10.0.0.70"]
60s         Normal   nodeAssigned   service/traefik   announcing from node "cube02" with protocol "layer2"

The first command lists our services, and we can see that traefik is available on 10.0.0.70. The second command filters the events for traefik, confirming that the IP was assigned and is being announced.


Now we can assign individual IPs from our network to services. How to do this will be shown in the sample deployment.
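As a preview, a Service can also request a specific address from the pool instead of taking the next free one. Since MetalLB 0.13 this is done with the `metallb.universe.tf/loadBalancerIPs` annotation (the service name `my-app` below is hypothetical; 10.0.0.71 is simply a free address from our pool):

```shell
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app                                       # hypothetical name
  annotations:
    metallb.universe.tf/loadBalancerIPs: 10.0.0.71   # pin a specific IP from the pool
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
EOF
```

If the requested address is outside the configured pool or already taken, MetalLB will leave the Service without an external IP and record an event explaining why.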


Traefik

Traefik is a popular open-source reverse proxy and load balancer for microservices in a Kubernetes environment. It dynamically routes incoming requests to appropriate microservices based on their domain name, path, and other attributes. Traefik integrates with Kubernetes and other cloud-native tools to provide features such as service discovery, automatic SSL certificate management, and request routing based on custom rules. It also provides advanced features like circuit breaking, canary deployments, and traffic shaping.

Traefik comes pre-installed in the Kubernetes cluster if you followed the K3s installation method from our guide. To use Traefik effectively, you need a working DNS server, which is external to the Kubernetes cluster. For local testing purposes, however, you can leverage the /etc/hosts file on your local machine and essentially fake the DNS resolution.

The hosts file is located at:

  • Mac: /private/etc/hosts
  • Windows: c:\windows\system32\drivers\etc\hosts
  • Linux: /etc/hosts

You can edit this file and add an entry like:

10.0.0.70 turing-cluster turing-cluster.local

When you then enter https://turing-cluster.local in your browser, you should be greeted by Traefik's 404 page. Mind you, this will work only on computers where you edited the hosts file. To make it work across your whole network, you need a DNS server that all PCs are configured to use.

Now you can use Traefik to access your services under turing-cluster.local/service or service.turing-cluster.local (of course, you need to configure Traefik to route to the specific service).
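For example, such hostname-based routing can be sketched with a standard Kubernetes Ingress, which Traefik picks up automatically on K3s. The Service name `whoami` and its port are hypothetical stand-ins for whatever you deploy:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami                          # hypothetical name
spec:
  rules:
  - host: whoami.turing-cluster.local   # hostname Traefik matches on
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami                # hypothetical Service to route to
            port:
              number: 80
```

After adding whoami.turing-cluster.local to your hosts file (pointing at Traefik's IP, 10.0.0.70 in our case), requests to that hostname would be routed to the Service.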
