Kubernetes cluster for CKA


I registered for the CKA certification. To practice, I set up a mini cluster with one master and two worker nodes. Although I already have some Kubernetes experience, dedicated practice is still required.

This post records the whole process.

Prepare the nodes

  • three Debian 12 VMs running locally on KVM (one master, two workers); on every node, install the prerequisites and add the Kubernetes apt repository

    Terminal window
    apt install containerd bash-completion apt-transport-https ca-certificates curl gpg -y
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
    apt update
    # disable swap now; comment out the swap entry in /etc/fstab to keep it off after a reboot
    swapoff -a
  • enable IP forwarding by adding the following lines to /etc/sysctl.conf, then apply them with sysctl -p

    Terminal window
    # append to /etc/sysctl.conf
    net.ipv4.ip_forward=1
    net.ipv6.conf.all.forwarding=1
    # reload the settings without a reboot
    sysctl -p
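
    To confirm the settings are live (a quick optional check), read the values back; both should print 1:

    Terminal window
    sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding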

Set up the cluster

  • Get all kubeadm versions

    Terminal window
    apt list kubeadm -a
    Listing... Done
    kubeadm/unknown 1.32.1-1.1 amd64
    kubeadm/unknown,now 1.32.0-1.1 amd64
  • Configure containerd

    • Generate default config

      Terminal window
      containerd config default > /etc/containerd/config.toml

      and then modify the runc options to enable SystemdCgroup,

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      BinaryName = ""
      CriuImagePath = ""
      CriuPath = ""
      CriuWorkPath = ""
      IoGid = 0
      IoUid = 0
      NoNewKeyring = false
      NoPivotRoot = false
      Root = ""
      ShimCgroup = ""
      SystemdCgroup = true
      • I encountered an issue where the cluster would not run properly. After extensive investigation and consulting the official documentation, I discovered that the SystemdCgroup parameter needed to be configured. By default it is set to SystemdCgroup = false; once I changed it to SystemdCgroup = true and restarted the containerd service, the cluster worked as expected.
      • since the kubelet's default cgroup driver is systemd, containerd must be configured to match

      also modify the sandbox image, from

      sandbox_image = "registry.k8s.io/pause:3.6"

      to

      sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"
  • Create the config for cluster initialization; content of kubeadm-config.yaml:

    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    imageRepository: "registry.aliyuncs.com/google_containers"
    kubernetesVersion: "v1.32.0"
    networking:
      serviceSubnet: "10.96.0.0/16"
      podSubnet: "10.244.0.0/24"
      dnsDomain: "cluster.local"
    • I pinned kubernetesVersion: "v1.32.0" explicitly, as we will upgrade the cluster to v1.32.1 later

    optionally enable nftables mode by appending the following kube-proxy configuration to the same file:

    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: nftables
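
    Optionally, pre-pull the control-plane images with the same config to verify the mirror registry is reachable before running kubeadm init:

    Terminal window
    kubeadm config images pull --config kubeadm-config.yaml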
  • Initialize the cluster

    Terminal window
    # on all nodes: install and pin the tooling
    export VERSION=1.32.0-1.1
    apt install -y kubeadm=$VERSION kubectl=$VERSION kubelet=$VERSION
    apt-mark hold kubectl kubeadm kubelet
    # on the master
    kubeadm init --config kubeadm-config.yaml
    export KUBECONFIG=/etc/kubernetes/admin.conf
    # on each worker, run the join command printed by kubeadm init
    kubeadm join 192.168.122.203:6443 --token 2h76lp.w9p7f7ooxpu2n12g \
    --discovery-token-ca-cert-hash sha256:981bb84350c02e5795e71960c70c569bda9ac6f6ac47a2d30d91f23afe10579a
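
    The bootstrap token printed by kubeadm init expires after 24 hours; if needed, a fresh join command can be generated on the master. The workers should then appear in the node list (NotReady until a network plugin is installed):

    Terminal window
    kubeadm token create --print-join-command
    kubectl get nodes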
  • install a network plugin (choose one)

    • calico
      Terminal window
      curl -SLO https://docs.projectcalico.org/v3.25/manifests/calico.yaml
      kubectl apply -f calico.yaml
    • cilium
      Terminal window
      curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
      chmod 700 get_helm.sh
      ./get_helm.sh
      helm repo add cilium https://helm.cilium.io/
      helm install cilium cilium/cilium --version 1.17.1 --namespace kube-system

    we may need to replace the default image registry, see reference 2
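
    Once the CNI pods are running, the nodes should flip to Ready (a quick check regardless of which plugin was chosen):

    Terminal window
    kubectl get pods -n kube-system -o wide
    kubectl get nodes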

  • install metrics server

    Terminal window
    curl -SLO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    modify the image registry, replacing registry.k8s.io with k8s.m.daocloud.io

    also add the following argument to the metrics-server container, since the kubelet serving certificates in this lab are self-signed:

    - --kubelet-insecure-tls

    then install metrics server

    Terminal window
    kubectl apply -f components.yaml
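
    After the deployment rolls out, metrics should appear within a minute or so:

    Terminal window
    kubectl top nodes
    kubectl top pods -A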
  • install ingress controller

    Terminal window
    curl -SLO https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml

    modify the image registry, replacing registry.k8s.io with k8s.m.daocloud.io

    then change the Service type of ingress-nginx-controller to NodePort

    after that, install ingress controller

    Terminal window
    kubectl apply -f deploy.yaml
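
    A throwaway deployment can be used to confirm the controller answers (the names web and web.example.local here are arbitrary; substitute the node IP and the NodePort shown for the ingress-nginx-controller service):

    Terminal window
    kubectl create deployment web --image=nginx --port=80
    kubectl expose deployment web --port=80
    kubectl create ingress web --class=nginx --rule="web.example.local/*=web:80"
    kubectl get svc -n ingress-nginx ingress-nginx-controller   # note the NodePort
    curl -H "Host: web.example.local" http://<node-ip>:<node-port>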
  • install hostpath provisioner

    Terminal window
    export SNAPSHOTTER_BRANCH=release-6.3
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    export SNAPSHOTTER_VERSION=v6.3.3
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
    git clone https://github.com/kubernetes-csi/csi-driver-host-path.git
    cd csi-driver-host-path
    deploy/kubernetes-latest/deploy.sh
    • install the storage class; content of csi-storageclass.yaml:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: csi-hostpath-sc
      provisioner: hostpath.csi.k8s.io
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
      allowVolumeExpansion: true

      then

      Terminal window
      kubectl apply -f csi-storageclass.yaml

    then modify the statefulset, changing the image registry: replace registry.k8s.io with k8s.m.daocloud.io
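
    To check that dynamic provisioning works, a small throwaway PVC can be bound against the new class (the file name test-pvc.yaml and claim name test-pvc are arbitrary); content of test-pvc.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: csi-hostpath-sc
      resources:
        requests:
          storage: 1Gi

    Terminal window
    kubectl apply -f test-pvc.yaml
    kubectl get pvc test-pvc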

    • install gateway resources
      Terminal window
      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
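
      this only installs the Gateway API CRDs (no controller); to confirm they are registered:

      Terminal window
      kubectl api-resources --api-group=gateway.networking.k8s.io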
  • Create KVM snapshots used for rollback after each practice

    Terminal window
    virsh snapshot-create-as --domain kube-master --name "kube-master-snap-02" --description "cluster created"
    virsh snapshot-create-as --domain kube-node01 --name "kube-node01-snap-02" --description "cluster created"
    virsh snapshot-create-as --domain kube-node02 --name "kube-node02-snap-02" --description "cluster created"

    rollback if required

    Terminal window
    virsh shutdown --domain kube-master
    virsh snapshot-revert --domain kube-master --snapshotname kube-master-snap-02 --running
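
    the available snapshots per domain can be listed before reverting:

    Terminal window
    virsh snapshot-list --domain kube-master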

Reference
