Kubernetes cluster for CKA
I registered for the CKA certification. To practice, I set up a mini cluster with one master and two worker nodes. Although I already have some Kubernetes experience, dedicated practice is still required.
This post records the whole process.
Prepare the nodes
- Two nodes (Debian 12), running locally on KVM. Install the required packages, add the Kubernetes apt repository, and disable swap:

  ```sh
  apt install -y containerd bash-completion apt-transport-https ca-certificates curl gpg
  curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
  sudo swapoff -a
  ```

- Enable IP forwarding by adding the following to `/etc/sysctl.conf`:

  ```
  net.ipv4.ip_forward=1
  net.ipv6.conf.all.forwarding=1
  ```
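  To apply the new sysctl settings without a reboot and confirm they took effect, something like this should work:

  ```sh
  # reload kernel parameters from /etc/sysctl.conf and /etc/sysctl.d/*
  sudo sysctl --system
  # both values should print as 1
  sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
  ```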
Set up the Cluster
- Get all available kubeadm versions:

  ```sh
  apt list kubeadm -a
  # Listing... Done
  # kubeadm/unknown 1.32.1-1.1 amd64
  # kubeadm/unknown,now 1.32.0-1.1 amd64
  ```
- Configure containerd

  - Generate the default config:

    ```sh
    containerd config default > /etc/containerd/config.toml
    ```

    and then modify the runc options to enable `SystemdCgroup`:

    ```toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      BinaryName = ""
      CriuImagePath = ""
      CriuPath = ""
      CriuWorkPath = ""
      IoGid = 0
      IoUid = 0
      NoNewKeyring = false
      NoPivotRoot = false
      Root = ""
      ShimCgroup = ""
      SystemdCgroup = true
    ```

  - I encountered an issue where the cluster could not run properly. After extensive investigation and consulting the official documentation, I discovered that the `SystemdCgroup` parameter needed to be configured. By default it is set to `SystemdCgroup = false`; once I changed it to `SystemdCgroup = true` and restarted the containerd service, the cluster worked as expected.
  - Since the default cgroup driver in the `kubelet` config is `systemd`, containerd needs to be kept consistent with it.
  - Also modify the sandbox image, from `sandbox_image = "registry.k8s.io/pause:3.6"` to `sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"`.
- Create the config for initialization. Content of `kubeadm-config.yaml`:

  ```yaml
  apiVersion: kubeadm.k8s.io/v1beta4
  kind: ClusterConfiguration
  imageRepository: "registry.aliyuncs.com/google_containers"
  kubernetesVersion: "v1.32.0"
  networking:
    serviceSubnet: "10.96.0.0/16"
    podSubnet: "10.244.0.0/24"
    dnsDomain: "cluster.local"
  ```

  - I pinned `kubernetesVersion: "v1.32.0"` statically, as we will upgrade the cluster to `v1.32.1` later.
  - Optionally enable nftables mode with an additional kube-proxy configuration document:

    ```yaml
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: nftables
    ```
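  Before running `kubeadm init`, the config can be sanity-checked; this is a small sketch, and `kubeadm config validate` is only available in reasonably recent kubeadm releases:

  ```sh
  # print the built-in defaults for comparison with kubeadm-config.yaml
  kubeadm config print init-defaults
  # validate the file against the kubeadm API schema
  kubeadm config validate --config kubeadm-config.yaml
  ```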
- Initialize the cluster:

  ```sh
  export VERSION=1.32.0-1.1
  apt install -y kubeadm=$VERSION kubectl=$VERSION kubelet=$VERSION
  apt-mark hold kubectl kubeadm kubelet
  # on the master node
  kubeadm init --config kubeadm-config.yaml
  export KUBECONFIG=/etc/kubernetes/admin.conf
  # on each worker node, using the token printed by kubeadm init
  kubeadm join 192.168.122.203:6443 --token 2h76lp.w9p7f7ooxpu2n12g \
      --discovery-token-ca-cert-hash sha256:981bb84350c02e5795e71960c70c569bda9ac6f6ac47a2d30d91f23afe10579a
  ```
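  The follow-up steps on the master are the usual kubeadm ones; a minimal sketch, assuming the default `admin.conf` location:

  ```sh
  # make kubectl usable for a regular user, as kubeadm init suggests
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # print a fresh join command if the original token has expired
  kubeadm token create --print-join-command

  # both workers should show up (NotReady until a network plugin is installed)
  kubectl get nodes
  ```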
- Install a network plugin

  - Calico:

    ```sh
    curl -SLO https://docs.projectcalico.org/v3.25/manifests/calico.yaml
    kubectl apply -f calico.yaml
    ```

  - Cilium:

    ```sh
    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
    helm repo add cilium https://helm.cilium.io/
    helm install cilium cilium/cilium --version 1.17.1 --namespace kube-system
    ```

    We may need to replace the default image registry; see reference 2.
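  Whichever plugin is chosen, a quick health check could look like this:

  ```sh
  # the CNI pods (calico-* or cilium-*) should reach Running
  kubectl -n kube-system get pods -o wide
  # nodes should move from NotReady to Ready once the CNI is up
  kubectl get nodes
  ```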
- Install the metrics server:

  ```sh
  curl -SLO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  ```

  Modify the image registry, replacing `registry.k8s.io` with `k8s.m.daocloud.io`, and also add the parameter `--kubelet-insecure-tls`, then install the metrics server:

  ```sh
  kubectl apply -f components.yaml
  ```
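  Once the metrics-server pod is up, resource metrics should be queryable (it can take a minute or two to collect the first samples):

  ```sh
  kubectl top nodes
  kubectl top pods -A
  ```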
- Install the ingress controller:

  ```sh
  curl -SLO https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
  ```

  Modify the image registry, replacing `registry.k8s.io` with `k8s.m.daocloud.io`, then change the service type of `ingress-nginx-controller` to NodePort. After that, install the ingress controller:

  ```sh
  kubectl apply -f deploy.yaml
  ```
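  To exercise the controller, a throwaway Ingress like the one below can be applied; the backend Service name `web` and the host `web.example.local` are hypothetical and must match something actually running in the cluster:

  ```yaml
  # hypothetical test Ingress; assumes a Service named "web" exposing port 80
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web-ingress
  spec:
    ingressClassName: nginx
    rules:
      - host: web.example.local
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: web
                  port:
                    number: 80
  ```

  Traffic can then be sent through the controller's NodePort with a matching Host header, e.g. `curl -H 'Host: web.example.local' http://<node-ip>:<node-port>/`.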
- Install the hostpath provisioner:

  ```sh
  export SNAPSHOTTER_BRANCH=release-6.3
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_BRANCH}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
  export SNAPSHOTTER_VERSION=v6.3.3
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
  git clone https://github.com/kubernetes-csi/csi-driver-host-path.git
  cd csi-driver-host-path
  deploy/kubernetes-latest/deploy.sh
  ```
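  A quick way to check that the snapshot controller and the hostpath driver came up:

  ```sh
  # snapshot controller and csi-hostpath pods should all be Running
  kubectl get pods -A | grep -E 'snapshot-controller|csi-hostpath'
  # the hostpath CSI driver should be registered
  kubectl get csidrivers
  ```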
- Install the storage class. Content of `csi-storageclass.yaml`:

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-hostpath-sc
  provisioner: hostpath.csi.k8s.io
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  ```

  then

  ```sh
  kubectl apply -f csi-storageclass.yaml
  ```

  Then modify the statefulset, replacing the image registry `registry.k8s.io` with `k8s.m.daocloud.io`.
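  A throwaway PVC is an easy way to confirm that dynamic provisioning works; the claim name here is made up:

  ```yaml
  # hypothetical PVC used only to verify the csi-hostpath-sc StorageClass
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: csi-hostpath-test-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: csi-hostpath-sc
    resources:
      requests:
        storage: 1Gi
  ```

  With `volumeBindingMode: Immediate`, the claim should become `Bound` shortly after `kubectl apply -f`, and `kubectl get pv` should list the dynamically created volume.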
- Install the Gateway API resources:

  ```sh
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
  ```
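  The standard install only registers CRDs, so verifying it is just a matter of checking that they exist:

  ```sh
  # GatewayClass, Gateway, HTTPRoute, etc. should be listed
  kubectl get crds | grep gateway.networking.k8s.io
  kubectl api-resources --api-group=gateway.networking.k8s.io
  ```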
- Create KVM snapshots, used for rollback after each practice session:

  ```sh
  virsh snapshot-create-as --domain kube-master --name "kube-master-snap-02" --description "cluster created"
  virsh snapshot-create-as --domain kube-node01 --name "kube-node01-snap-02" --description "cluster created"
  virsh snapshot-create-as --domain kube-node02 --name "kube-node02-snap-02" --description "cluster created"
  ```

  Roll back if required:

  ```sh
  virsh shutdown --domain kube-master
  virsh snapshot-revert --domain kube-master --snapshotname kube-master-snap-02 --running
  ```
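  To review which snapshots exist before reverting, a small loop over the three domains is enough:

  ```sh
  # list available snapshots for each VM
  for d in kube-master kube-node01 kube-node02; do
    virsh snapshot-list --domain "$d"
  done
  ```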