4.2. Architecture

In order to secure Kubernetes we first want to understand its different components. For that, we install a minimal Kubernetes distribution ourselves:

Task 4.2.1: Install a Kubernetes Cluster

For this task we need to switch to a VM, where we will install a Kubernetes cluster using kind.

SSH into your VM; you find the relevant command in the file welcome:

ssh -i /home/project/id-ecdsa <namespace>@159.69.155.196

Now download kind:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Similar to Kubernetes itself, kind can be configured using a YAML resource. Execute the command below to create the file cluster.yaml:

cat <<EOF > cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
EOF
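
The same file can carry further settings. As a sketch, assuming you want reproducible cluster creation, you can additionally pin the node image (the tag below is an example; pick one listed for your kind release):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # pinning the node image makes cluster creation reproducible;
  # the tag is an example -- use one from your kind version's release notes
  image: kindest/node:v1.31.0
- role: worker
  image: kindest/node:v1.31.0
```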

To create a two-node cluster, please execute:

kind create cluster --config cluster.yaml

After a while, you should have a cluster running. Please check it with:

kubectl get nodes

The goal now is to identify the minimal moving parts of Kubernetes and address some security-relevant features. By using kubectl you used the standard config ~/.kube/config. If you are curious, you can inspect the client certificate it contains to see which user you are, as well as the address of the API server.
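
One way to do that is to decode the client certificate embedded in the kubeconfig. A sketch, assuming the kind-generated config stores the certificate inline as client-certificate-data and that openssl is installed:

```shell
# print the API server address of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# decode the embedded client certificate and print its subject:
# the CN is your user name, the O entries are your groups
kubectl config view --minify --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject -enddate
```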

You find the main control plane parts in the kube-system namespace:

kubectl -n kube-system get pods

This will give you an output like the following:

NAME                                         READY   STATUS    RESTARTS   AGE
coredns-7db6d8ff4d-gdqhg                     1/1     Running   0          43s
coredns-7db6d8ff4d-zkfq4                     1/1     Running   0          43s
etcd-kind-control-plane                      1/1     Running   0          59s
kindnet-5w4n8                                1/1     Running   0          41s
kindnet-fqnhp                                1/1     Running   0          43s
kube-apiserver-kind-control-plane            1/1     Running   0          59s
kube-controller-manager-kind-control-plane   1/1     Running   0          59s
kube-proxy-2fmst                             1/1     Running   0          41s
kube-proxy-s7g8c                             1/1     Running   0          43s
kube-scheduler-kind-control-plane            1/1     Running   0          59s

The core services for Kubernetes are all here:

  • etcd-kind-control-plane: etcd is a key-value store used by Kubernetes to store all cluster data, including configuration, state, and other critical data.
  • kindnet: kindnet is a CNI (Container Network Interface) plugin that handles networking for the Kind cluster.
  • kube-apiserver-kind-control-plane: kube-apiserver is the central component of the control plane, which exposes the Kubernetes API. It processes requests and interacts with other control plane components.
  • kube-controller-manager-kind-control-plane: kube-controller-manager runs controller processes that handle routine tasks like managing node states, replicas, and deployments.
  • kube-proxy: kube-proxy runs on each node and maintains network rules, enabling routing and load balancing for traffic between services and pods in the cluster.
  • kube-scheduler-kind-control-plane: kube-scheduler assigns pods to nodes based on resource availability and constraints. Note that it only decides where a pod runs; starting and restarting the containers on a node is the kubelet's job.
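
Security-wise it is worth knowing where these control plane pods come from: the kubelet starts them as static pods from manifests on the control plane node. With kind, that node is a Docker container, so you can look at the manifests directly (the container name assumes the default kind cluster name):

```shell
# list the static pod manifests the kubelet reads on the control plane node
docker exec kind-control-plane ls /etc/kubernetes/manifests

# inspect e.g. the API server flags -- many CIS checks audit exactly these
docker exec kind-control-plane cat /etc/kubernetes/manifests/kube-apiserver.yaml
```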

Kubernetes CIS Benchmark

We want to check our local Kubernetes cluster against the CIS Kubernetes Benchmark, a set of best practices and security guidelines developed by the Center for Internet Security (CIS) to help organizations secure their Kubernetes clusters. For this we use a tool named kube-bench, which should also be part of every Kubernetes lifecycle pipeline as a test stage.

Task 4.2.2: Check your cluster's security

We will run kube-bench directly in our Kubernetes cluster using a Kubernetes job:

kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/refs/heads/main/job.yaml

Wait a few seconds for the job to complete:

kubectl get pods
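
Instead of polling kubectl get pods, you can let kubectl block until the job is done:

```shell
# block until the kube-bench job reports completion (give up after 2 minutes)
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
```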

The results are held in the pod’s logs (adjust the pod name to match yours):

kubectl logs kube-bench-fpwnt
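
Each result line in the report starts with a bracketed status ([PASS], [WARN], [FAIL], [INFO]), so the failures are easy to filter with grep. A minimal sketch on illustrative sample lines (placeholder text; your real log comes from the kube-bench pod above):

```shell
# a few lines in kube-bench's result format -- placeholder text only,
# the real log comes from the kube-bench pod (kubectl logs job/kube-bench)
cat > /tmp/kube-bench.log <<'EOF'
[PASS] 1.1.1 Example passing check
[WARN] 1.2.1 Example warning
[FAIL] 5.1.1 Example RBAC failure
EOF

# count the failed checks, then list them
grep -c '^\[FAIL\]' /tmp/kube-bench.log
grep '^\[FAIL\]' /tmp/kube-bench.log
```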

We see that our kind cluster fails in the RBAC section because it binds the user kubernetes and the group kubeadm:cluster-admins to the cluster-admin role, giving them full privileges. Also check the warnings in the other sections; for many of them we already have the knowledge to remediate the issues.
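
You can confirm that finding yourself by inspecting the bindings to cluster-admin (the binding name kubeadm:cluster-admins is what recent kubeadm versions create inside the kind node):

```shell
# list all cluster role bindings that reference the cluster-admin role
kubectl get clusterrolebindings -o wide | grep cluster-admin

# show the subjects of the binding kube-bench complains about
kubectl describe clusterrolebinding kubeadm:cluster-admins
```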