Module 1 — Cluster Architecture¶
Overview¶
Understanding the architecture of a Kubernetes cluster is the foundation for everything else in the CKA exam. This module covers every component, what it does, how it communicates, and where it runs.
1. High-Level Architecture¶
A Kubernetes cluster is composed of two types of nodes:
- Control plane nodes — run the components that manage the cluster (Section 2)
- Worker nodes — run the application workloads (Section 3)
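Assuming a working kubeconfig, you can see both node types and the roles assigned to them:

```shell
# Control plane nodes carry the "control-plane" role;
# worker nodes typically show "<none>" in the ROLES column
kubectl get nodes -o wide
```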
2. Control Plane Components¶
The control plane makes global decisions about the cluster (scheduling, detecting and responding to events). In a kubeadm-based cluster, control plane components run as static pods in the kube-system namespace.
2.1 kube-apiserver¶
The central hub of the entire cluster. Every component communicates through it.
| Aspect | Detail |
|---|---|
| Role | RESTful API frontend for the cluster — the only component that talks directly to etcd |
| Port | 6443 (HTTPS) by default |
| Authentication | Client certificates, bearer tokens, OIDC, webhook |
| Authorization | RBAC (default), Node, Webhook, ABAC |
| Admission | Mutating → Validating admission controllers (webhooks) |
| Manifest path | /etc/kubernetes/manifests/kube-apiserver.yaml |
Key flags to know for the exam:
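A sketch of flags worth recognizing, as they commonly appear in a kubeadm-generated manifest (exact values vary by cluster):

```shell
# Inspect the flags actually set on your API server
grep -E 'etcd-servers|service-cluster-ip-range|authorization-mode|client-ca-file' \
  /etc/kubernetes/manifests/kube-apiserver.yaml

# Typical kubeadm-set flags:
#   --advertise-address         IP the API server advertises to the cluster
#   --etcd-servers              e.g. https://127.0.0.1:2379 (where state lives)
#   --service-cluster-ip-range  CIDR for Service ClusterIPs
#   --authorization-mode        e.g. Node,RBAC
#   --client-ca-file            CA used to verify client certificates
```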
CKA Tip: If the API server is down, `kubectl` won't work. Check the static pod manifest and container logs with `crictl` or `docker`.
2.2 etcd¶
The brain of the cluster — a distributed key-value store that holds all cluster state.
| Aspect | Detail |
|---|---|
| Role | Stores all cluster data (pods, services, secrets, configmaps, etc.) |
| Port | 2379 (client), 2380 (peer) |
| Protocol | gRPC over TLS |
| Consistency | Raft consensus algorithm |
| Manifest path | /etc/kubernetes/manifests/etcd.yaml |
| Data directory | /var/lib/etcd |
Key flags:
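A sketch of the etcd flags to recognize, as set in a typical kubeadm-generated manifest (paths may differ on your cluster):

```shell
# Inspect the etcd static pod flags
grep -E 'data-dir|listen-client-urls|cert-file|trusted-ca-file' \
  /etc/kubernetes/manifests/etcd.yaml

# Flags to recognize:
#   --data-dir             /var/lib/etcd (this is what you back up)
#   --listen-client-urls   client endpoint, usually https://127.0.0.1:2379
#   --cert-file/--key-file etcd server TLS certificate and key
#   --trusted-ca-file      CA used to verify clients (needed for etcdctl)
```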
CKA Tip: etcd backup/restore is a very common exam task. We cover it in detail in 04-etcd-backup-restore.md.
2.3 kube-scheduler¶
Decides which node a newly created pod should run on.
| Aspect | Detail |
|---|---|
| Role | Watches for unscheduled pods and assigns them to nodes |
| Port | 10259 (HTTPS) |
| Manifest path | /etc/kubernetes/manifests/kube-scheduler.yaml |
Scheduling process: the scheduler first filters out nodes that cannot run the pod, then scores the remaining candidates and binds the pod to the highest-scoring node.
Filtering reasons a node may be excluded:
- Insufficient CPU/memory (vs resource requests)
- Node taints not tolerated by the pod
- `nodeSelector` / `nodeAffinity` mismatch
- PodAntiAffinity conflict
- Unschedulable node (`spec.unschedulable: true` — cordoned)
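When a pod stays Pending, the scheduler records the filtering failures as events. A quick way to see which filter excluded each node (the pod name is a placeholder):

```shell
# Events include messages such as:
# "0/3 nodes are available: 1 node(s) had untolerated taint ..."
kubectl describe pod <pod-name> | grep -A 10 Events
```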
2.4 kube-controller-manager¶
Runs a collection of controllers — control loops that watch the cluster state and make changes to move toward the desired state.
| Aspect | Detail |
|---|---|
| Role | Runs all built-in controllers |
| Port | 10257 (HTTPS) |
| Manifest path | /etc/kubernetes/manifests/kube-controller-manager.yaml |
Key controllers bundled inside:
| Controller | What it does |
|---|---|
| Node Controller | Monitors node health, marks nodes as NotReady, evicts pods after timeout |
| ReplicaSet Controller | Ensures the desired number of pod replicas are running |
| Deployment Controller | Manages ReplicaSets for rolling updates/rollbacks |
| Job Controller | Creates pods for Job/CronJob workloads |
| ServiceAccount Controller | Creates default ServiceAccounts in new namespaces |
| Endpoint Controller | Populates Endpoints objects for Services |
| Namespace Controller | Cleans up resources when a namespace is deleted |
CKA Tip: If the controller manager is down, existing pods keep running but no new replicas will be created, no scaling will happen, and node health won't be monitored.
2.5 cloud-controller-manager (optional)¶
Only present in clusters running on a cloud provider (EKS, AKS, GKE). Handles cloud-specific logic:
- Node lifecycle (detecting deleted VMs)
- Route configuration
- LoadBalancer service provisioning
Not a focus for CKA, but good to know it exists.
3. Worker Node Components¶
3.1 kubelet¶
The agent running on every node (including control plane nodes).
| Aspect | Detail |
|---|---|
| Role | Ensures containers described in PodSpecs are running and healthy |
| Port | 10250 (HTTPS) |
| Config | /var/lib/kubelet/config.yaml |
| Service | systemctl status kubelet |
| Logs | journalctl -u kubelet -f |
What kubelet does:
1. Registers the node with the API server
2. Watches the API server for pods assigned to its node
3. Instructs the container runtime to pull images and start containers
4. Reports pod and node status back to the API server
5. Runs liveness, readiness, and startup probes
6. Manages static pods from /etc/kubernetes/manifests/
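Point 6 can be seen in action. A minimal sketch (the pod name and nginx image are arbitrary): dropping a manifest into the static pod directory makes kubelet start the pod without any API call, and the API server then shows a read-only mirror pod suffixed with the node name.

```shell
# On a node, create a static pod manifest; kubelet picks it up automatically
cat <<'EOF' | sudo tee /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
EOF

# Moments later a mirror pod appears, named static-web-<node-name>
kubectl get pods -A | grep static-web

# Deleting the manifest stops the pod; deleting the mirror pod with
# kubectl does not, because kubelet recreates it
sudo rm /etc/kubernetes/manifests/static-web.yaml
```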
CKA Tip: kubelet is the only component that does NOT run as a pod — it runs as a systemd service. If kubelet is down, the node appears `NotReady`. Check with:
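The checks referenced above (these commands are the same ones listed in the kubelet table):

```shell
systemctl status kubelet     # is the service running? recent failures?
journalctl -u kubelet -f     # follow kubelet logs for the root cause
systemctl restart kubelet    # restart after fixing the config
```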
3.2 kube-proxy¶
Maintains network rules on each node to implement Services.
| Aspect | Detail |
|---|---|
| Role | Programs iptables/ipvs rules for Service → Pod routing |
| Runs as | DaemonSet in kube-system namespace |
| Modes | iptables (default), ipvs, userspace (legacy) |
| Config | ConfigMap kube-proxy in kube-system |
How kube-proxy works (iptables mode): it watches Services and their endpoints via the API server and writes iptables DNAT rules, so that traffic sent to a Service's ClusterIP is rewritten in the kernel to the IP of one of the backing pods.
Check kube-proxy mode:
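A couple of ways to check it (the pod name is a placeholder; an empty `mode` means the iptables default):

```shell
# The mode is set in the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w mode

# Or read it from a kube-proxy pod's startup logs
kubectl -n kube-system logs <kube-proxy-pod> | grep -i proxier

# Inspect the Service rules kube-proxy programmed (iptables mode)
sudo iptables -t nat -L KUBE-SERVICES | head
```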
For networking experts: kube-proxy doesn't proxy traffic itself in iptables/ipvs mode — it only programs kernel rules. The kernel handles the actual packet forwarding.
3.3 Container Runtime¶
The software responsible for running containers.
| Runtime | CRI Compatible | Notes |
|---|---|---|
| containerd | Yes | Default in most distributions (kubeadm, EKS, GKE) |
| CRI-O | Yes | Used by OpenShift |
| Docker Engine | Via cri-dockerd shim | Built-in dockershim support removed in K8s 1.24 |
Useful commands with crictl (CRI-compatible CLI):
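A handful of everyday `crictl` subcommands (run as root on the node; IDs are placeholders):

```shell
crictl ps                      # running containers
crictl ps -a                   # include exited containers
crictl pods                    # pod sandboxes on this node
crictl images                  # pulled images
crictl logs <container-id>     # container logs (like kubectl logs)
crictl inspect <container-id>  # full container details as JSON
```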
CKA Tip: `crictl` is the go-to tool when `kubectl` is not available (e.g., API server is down). Configure it in `/etc/crictl.yaml`:
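A minimal `/etc/crictl.yaml`, assuming containerd as the runtime (for CRI-O, the socket is `unix:///var/run/crio/crio.sock`):

```shell
cat <<'EOF' | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```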
4. Communication Flows¶
4.1 Component-to-API Server Communication¶
All components communicate through the API server. No component talks directly to another.
| Communication | Direction | Protocol | Port |
|---|---|---|---|
| kubectl → API server | Client → Server | HTTPS | 6443 |
| kubelet → API server | Node → Control Plane | HTTPS | 6443 |
| API server → kubelet | Control Plane → Node | HTTPS | 10250 |
| API server → etcd | Internal | gRPC/TLS | 2379 |
| Scheduler → API server | Internal | HTTPS | 6443 |
| Controller Manager → API server | Internal | HTTPS | 6443 |
| kube-proxy → API server | Node → Control Plane | HTTPS | 6443 |
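One way to confirm these ports on a control plane node (`ss` is part of iproute2, present on most distributions):

```shell
# Which control plane processes are listening on which ports
sudo ss -tlnp | grep -E '6443|2379|2380|10250|10257|10259'

# The API server health endpoint typically answers unauthenticated
# thanks to the default system:public-info-viewer binding
curl -k https://localhost:6443/healthz
```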
4.2 Pod-to-Pod Communication¶
Kubernetes networking model requires:
- Every pod gets its own IP address
- Pods on any node can communicate with pods on any other node without NAT
- Agents on a node can communicate with all pods on that node
This is implemented by the CNI plugin (Calico, Flannel, Cilium, etc.). Detailed in 11-networking-model.md.
4.3 Pod-to-Service Communication¶
Detailed in 10-services.md.
5. Inspecting the Cluster Architecture¶
5.1 Useful Commands¶
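A starting set of inspection commands (node names are placeholders):

```shell
kubectl get nodes -o wide                 # nodes, roles, versions, IPs
kubectl -n kube-system get pods -o wide   # control plane + kube-proxy pods
kubectl cluster-info                      # API server and DNS endpoints
kubectl api-resources | head              # what the API server can serve
kubectl -n kube-system describe pod kube-apiserver-<node-name>
```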
5.2 Key Directories and Files¶
| Path | Content |
|---|---|
| /etc/kubernetes/manifests/ | Static pod manifests (apiserver, etcd, scheduler, controller-manager) |
| /etc/kubernetes/pki/ | Cluster PKI certificates and keys |
| /etc/kubernetes/admin.conf | Admin kubeconfig file |
| /etc/kubernetes/kubelet.conf | Kubelet kubeconfig |
| /etc/kubernetes/scheduler.conf | Scheduler kubeconfig |
| /etc/kubernetes/controller-manager.conf | Controller manager kubeconfig |
| /var/lib/kubelet/config.yaml | Kubelet configuration |
| /var/lib/etcd/ | etcd data directory |
| /etc/cni/net.d/ | CNI plugin configuration |
| /opt/cni/bin/ | CNI plugin binaries |
6. Practice Exercises¶
Exercise 1 — Identify Components¶
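One possible approach, assuming kubectl access (the node name is a placeholder):

```shell
# List every control plane component and the node it runs on
kubectl -n kube-system get pods -o wide

# Static pods are owned by their Node, not by a controller
kubectl -n kube-system get pod kube-apiserver-<node-name> \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'
```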
Exercise 2 — Explore the API Server Configuration¶
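A possible sketch for this exercise: since the API server is a static pod, its entire configuration is the flag list in its manifest.

```shell
# Read the whole manifest on a control plane node
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml

# Pick out a few flags covered in this module
grep -E 'authorization-mode|etcd-servers|secure-port' \
  /etc/kubernetes/manifests/kube-apiserver.yaml
```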
Exercise 3 — Explore a Worker Node¶
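A possible sketch, run on (or via SSH to) a worker node:

```shell
# kubelet: systemd service, config file, logs
systemctl status kubelet
sudo head /var/lib/kubelet/config.yaml
journalctl -u kubelet --no-pager | tail

# Containers via the CRI, bypassing the API server
sudo crictl ps

# kube-proxy runs here too, as a DaemonSet pod
kubectl -n kube-system get pods -o wide | grep kube-proxy
```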
Exercise 4 — Break and Fix¶
7. Key Takeaways for the CKA Exam¶
| Point | Detail |
|---|---|
| API server is the single point of contact | Every component talks to the API server, never directly to each other |
| etcd is the single source of truth | Only the API server reads/writes to etcd |
| kubelet is not a pod | It's a systemd service — troubleshoot with systemctl and journalctl |
| Static pods | Control plane components are managed as static pods in /etc/kubernetes/manifests/ |
| Know the ports | 6443 (API), 2379/2380 (etcd), 10250 (kubelet), 10259 (scheduler), 10257 (ctrl-mgr) |
| Know the certificate paths | /etc/kubernetes/pki/ — you may need to inspect or fix cert issues |
| crictl is your friend | When kubectl doesn't work, use crictl to inspect containers directly |
Next: 02-cluster-installation-kubeadm.md — Installing a cluster with kubeadm