Module 14 — Storage
Overview
Storage in Kubernetes decouples data from pod lifecycles. When a pod dies, its filesystem is lost — Volumes, PersistentVolumes, and PersistentVolumeClaims solve this. The CKA exam tests PV/PVC creation, StorageClasses, access modes, volume expansion, and common volume types. This module covers all five topics.
1. Storage Concepts
1.1 The Problem
```
Without persistent storage:
  Pod starts   → writes data to container filesystem
  Pod dies     → data is LOST
  Pod restarts → starts with empty filesystem

With persistent storage:
  Pod starts   → writes data to a Volume
  Pod dies     → data persists on the Volume
  Pod restarts → mounts the same Volume → data is still there
```
1.2 The Three Layers
```
┌──────────────────────────────────────────────────────┐
│ Pod                                                  │
│   spec.volumes + spec.containers[].volumeMounts      │
│          │                                           │
│          ▼                                           │
│ ┌──────────────────────────────────┐                 │
│ │ PersistentVolumeClaim (PVC)      │ ← what the pod  │
│ │ "I need 10Gi of RWO storage"     │   asks for      │
│ └──────────────┬───────────────────┘                 │
│                │ bound                               │
│                ▼                                     │
│ ┌──────────────────────────────────┐                 │
│ │ PersistentVolume (PV)            │ ← actual        │
│ │ 10Gi, RWO, /mnt/data             │   storage       │
│ └──────────────┬───────────────────┘                 │
│                ▼                                     │
│ ┌──────────────────────────────────┐                 │
│ │ Physical Storage                 │ ← disk, NFS,    │
│ │ (cloud disk, NFS, local, etc.)   │   cloud volume  │
│ └──────────────────────────────────┘                 │
└──────────────────────────────────────────────────────┘
```
| Object | Scope | Purpose |
|---|---|---|
| Volume | Pod-level | Attaches storage to a pod (ephemeral or persistent) |
| PersistentVolume (PV) | Cluster-wide | Represents a piece of provisioned storage |
| PersistentVolumeClaim (PVC) | Namespaced | A request for storage by a pod |
| StorageClass | Cluster-wide | Defines how to dynamically provision PVs |
2. Volumes (Pod-Level)
Volumes are defined in the pod spec and mounted into containers. They exist as long as the pod exists.
2.1 emptyDir
A temporary directory created when the pod starts. Deleted when the pod is removed (not on container restart).
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/message; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /data/message; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}
```
| Aspect | Detail |
|---|---|
| Lifetime | Same as the pod (survives container restarts, deleted with the pod) |
| Use case | Shared scratch space between containers, caching |
| Backed by | Node's filesystem (or `medium: Memory` for tmpfs) |
```yaml
# Memory-backed emptyDir (tmpfs — counts against the container memory limit)
volumes:
- name: cache
  emptyDir:
    medium: Memory
    sizeLimit: 256Mi
```
2.2 hostPath
Mounts a file or directory from the host node's filesystem into the pod.
```yaml
volumes:
- name: host-data
  hostPath:
    path: /var/log/pods
    type: Directory
```
| type | Behavior |
|---|---|
| `""` (empty) | No checks — mount whatever is at the path |
| `DirectoryOrCreate` | Create the directory if it doesn't exist |
| `Directory` | Must already exist as a directory |
| `FileOrCreate` | Create the file if it doesn't exist |
| `File` | Must already exist as a file |
| `Socket` | Must be a Unix socket |
Warning: hostPath ties the pod to a specific node and is a security risk (it grants access to the host filesystem). Avoid it in production. It is used by static pods (e.g. the etcd data directory) and in CKA exam scenarios.
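As an illustration of that static-pod usage, here is the volume-related portion of a typical kubeadm-generated etcd manifest (paths are the kubeadm defaults; unrelated fields are omitted):

```yaml
# Excerpt from a typical /etc/kubernetes/manifests/etcd.yaml (kubeadm defaults)
spec:
  containers:
  - name: etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd        # etcd's data directory inside the container
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd             # same path on the host node
      type: DirectoryOrCreate
```

Because the data lives on the node's filesystem, the etcd pod must always run on that same node — exactly the node-affinity limitation described above.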
2.3 configMap and secret Volumes
Mount ConfigMap or Secret data as files (covered in detail in 09-configmaps-secrets.md):
```yaml
volumes:
- name: config
  configMap:
    name: app-config
- name: creds
  secret:
    secretName: db-secret
```
2.4 projected Volume
Combines multiple sources into a single directory:
```yaml
volumes:
- name: all-in-one
  projected:
    sources:
    - configMap:
        name: app-config
    - secret:
        name: db-secret
    - downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
    - serviceAccountToken:
        path: token
        expirationSeconds: 3600
```
2.5 downwardAPI Volume
Exposes pod metadata as files:
```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"
      fieldRef:
        fieldPath: metadata.labels
    - path: "annotations"
      fieldRef:
        fieldPath: metadata.annotations
    - path: "cpu-request"
      resourceFieldRef:
        containerName: app
        resource: requests.cpu
```
3. PersistentVolumes (PV)
3.1 What Is a PV?
A PV is a cluster-wide storage resource provisioned by an admin (static) or dynamically by a StorageClass. It exists independently of any pod.
3.2 PV YAML
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:                 # backend storage type
    path: /mnt/data
```
3.3 PV Fields
| Field | Purpose |
|---|---|
| `capacity.storage` | Size of the volume |
| `accessModes` | How the volume can be mounted (RWO, ROX, RWX) |
| `persistentVolumeReclaimPolicy` | What happens when the PVC is deleted |
| `storageClassName` | Links to a StorageClass (or `""` for no class) |
| Backend spec | `hostPath`, `nfs`, `csi`, `awsElasticBlockStore`, etc. |
3.4 PV Lifecycle Phases
```
Available → Bound → Released → (Reclaimed or Deleted)
```

| Phase | Meaning |
|---|---|
| Available | PV is free and not yet bound to a PVC |
| Bound | PV is bound to a PVC |
| Released | PVC was deleted, but the PV hasn't been reclaimed yet |
| Failed | Automatic reclamation failed |

```bash
kubectl get pv
# NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      STORAGECLASS
# pv-data   10Gi       RWO            Retain           Available   manual
```
3.5 Reclaim Policies
| Policy | Behavior | Use case |
|---|---|---|
| Retain | PV is kept after PVC deletion — data preserved, manual cleanup needed | Production data |
| Delete | PV and underlying storage are deleted when the PVC is deleted | Dynamic provisioning (default) |
| Recycle | Deprecated — performed `rm -rf` on the volume | Don't use |
4. PersistentVolumeClaims (PVC)
4.1 What Is a PVC?
A PVC is a namespaced request for storage. It binds to a PV that satisfies its requirements.
4.2 PVC YAML
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual   # must match the PV's storageClassName
```
4.3 PVC-to-PV Binding
Kubernetes matches a PVC to a PV based on:
```
PVC requests:                  PV must have:
─────────────                  ─────────────
accessModes: [RWO]        →    accessModes includes RWO
storage: 5Gi              →    capacity >= 5Gi
storageClassName: manual  →    storageClassName: manual
```

```
┌──────────────┐         ┌──────────────┐
│  PVC         │  bind   │  PV          │
│  5Gi, RWO    │────────▶│  10Gi, RWO   │
│  class:manual│         │  class:manual│
└──────────────┘         └──────────────┘
```
Note: A PVC can bind to a larger PV (5Gi claim → 10Gi PV), but the extra space is not available to other PVCs. The PV is exclusively bound.
4.4 Using a PVC in a Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim   # references the PVC
```
4.5 PVC Status
```bash
kubectl get pvc
# NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
# my-claim   Bound    pv-data   10Gi       RWO            manual
```

| Status | Meaning |
|---|---|
| Pending | No matching PV found (or waiting for dynamic provisioning) |
| Bound | Successfully bound to a PV |
| Lost | The bound PV no longer exists |
5. Access Modes
| Mode | Abbreviation | Meaning |
|---|---|---|
| ReadWriteOnce | RWO | Mounted read-write by a single node |
| ReadOnlyMany | ROX | Mounted read-only by multiple nodes |
| ReadWriteMany | RWX | Mounted read-write by multiple nodes |
| ReadWriteOncePod | RWOP | Mounted read-write by a single pod (K8s 1.27+) |
5.1 Access Mode Support by Storage Type
| Storage Type | RWO | ROX | RWX |
|---|---|---|---|
| hostPath | ✅ | ❌ | ❌ |
| AWS EBS | ✅ | ❌ | ❌ |
| Azure Disk | ✅ | ❌ | ❌ |
| GCE PD | ✅ | ✅ | ❌ |
| NFS | ✅ | ✅ | ✅ |
| AWS EFS | ✅ | ✅ | ✅ |
| CephFS | ✅ | ✅ | ✅ |
CKA Tip: RWO is the most common. RWX is needed when multiple pods on different nodes must write to the same volume (e.g., shared file storage). The exam usually uses RWO.
5.2 Important Clarification
RWO means single node, not single pod. Multiple pods on the same node can mount an RWO volume simultaneously.
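A minimal sketch of this behavior — two pods pinned to the same node can both mount one RWO claim (the node name `worker-1` and claim name `my-claim` are illustrative; in practice you would normally let the scheduler place pods rather than hard-code `nodeName`):

```yaml
# Two pods sharing one RWO PVC — this works only because both
# are forced onto the same node. Names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: rwo-pod-a
spec:
  nodeName: worker-1          # pin to the same node as rwo-pod-b
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: rwo-pod-b
spec:
  nodeName: worker-1          # same node → the RWO mount succeeds
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim
```

If the second pod landed on a different node, it would be stuck in ContainerCreating with a volume-attach error. `ReadWriteOncePod` is the mode that enforces single-pod access.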
6. StorageClasses and Dynamic Provisioning
6.1 The Problem with Static Provisioning
```
Static provisioning:
  Admin creates PV → User creates PVC → PVC binds to PV
  (Admin must pre-create PVs for every request — doesn't scale)

Dynamic provisioning:
  Admin creates StorageClass → User creates PVC → StorageClass auto-creates PV
  (Fully automated)
```
6.2 StorageClass YAML
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # default SC
provisioner: kubernetes.io/aws-ebs        # who creates the volumes
parameters:
  type: gp3                               # provisioner-specific params
  fsType: ext4
reclaimPolicy: Delete                     # default for dynamic PVs
allowVolumeExpansion: true                # allow PVC resize
volumeBindingMode: WaitForFirstConsumer   # or Immediate
```
| Field | Purpose |
|---|---|
| `provisioner` | Plugin that creates the actual storage |
| `parameters` | Provisioner-specific settings (disk type, IOPS, etc.) |
| `reclaimPolicy` | `Delete` (default) or `Retain` |
| `allowVolumeExpansion` | Whether PVCs using this class can be resized |
| `volumeBindingMode` | When to bind the PVC to a PV |
6.3 Common Provisioners
| Provisioner | Storage |
|---|---|
| `kubernetes.io/aws-ebs` | AWS EBS |
| `kubernetes.io/gce-pd` | GCE Persistent Disk |
| `kubernetes.io/azure-disk` | Azure Managed Disk |
| `kubernetes.io/no-provisioner` | Local/static — no dynamic provisioning |
| `rancher.io/local-path` | Local path provisioner (common in lab environments) |
| CSI drivers | `ebs.csi.aws.com`, `disk.csi.azure.com`, etc. |
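To see which provisioner each class in your own cluster uses, a `custom-columns` query works well (the field paths below follow the StorageClass API schema):

```bash
# List StorageClasses with their provisioner and binding mode
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,BINDING:.volumeBindingMode
```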
6.4 volumeBindingMode
| Mode | Behavior |
|---|---|
| Immediate | PV is provisioned as soon as the PVC is created |
| WaitForFirstConsumer | PV is provisioned only when a pod using the PVC is scheduled |
WaitForFirstConsumer is preferred because it provisions storage in the same zone/node as the pod, avoiding scheduling failures.
6.5 Dynamic Provisioning in Action
```yaml
# 1. StorageClass (created by admin, once)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# 2. PVC (created by user — PV is auto-created)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
```
```bash
# PVC is Pending until a pod uses it (WaitForFirstConsumer)
kubectl get pvc
# NAME            STATUS    VOLUME   CAPACITY   STORAGECLASS
# dynamic-claim   Pending                       standard

# After a pod mounts it, a PV is auto-created and the PVC becomes Bound
kubectl get pvc
# NAME            STATUS   VOLUME                                     CAPACITY   STORAGECLASS
# dynamic-claim   Bound    pvc-a1b2c3d4-e5f6-7890-abcd-ef1234567890   5Gi        standard
```
6.6 Default StorageClass
If a PVC doesn't specify storageClassName, the default StorageClass is used:
```bash
# Check which StorageClass is default
kubectl get sc
# NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
# standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer
```
To prevent dynamic provisioning for a PVC, set storageClassName: "":
```yaml
spec:
  storageClassName: ""   # explicitly no StorageClass — only binds to static PVs
```
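To change which class is the default, toggle the `is-default-class` annotation (the class names `standard` and `fast` below are illustrative; substitute your own):

```bash
# Remove the default marker from the old class, then mark the new one
kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass fast -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

Having two classes marked default at once causes new PVCs without a `storageClassName` to be rejected, so always demote the old default first.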
7. Volume Expansion
7.1 Requirements
- StorageClass must have `allowVolumeExpansion: true`
- Only works for dynamically provisioned PVs (or CSI volumes that support it)
- You can only increase the size, never decrease it
7.2 Expanding a PVC
```bash
# Check if expansion is allowed
kubectl get sc <storageclass> -o yaml | grep allowVolumeExpansion

# Edit the PVC to increase storage
kubectl edit pvc my-claim
# Change spec.resources.requests.storage from 5Gi to 10Gi
```
Or patch:
```bash
kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```
7.3 Expansion Process
```
PVC edit (5Gi → 10Gi)
  │
  ├── Controller expands the underlying volume (cloud API call)
  │
  ├── If a filesystem resize is needed:
  │     └── Pod must be restarted (or volume re-mounted)
  │         Condition: FileSystemResizePending
  │
  └── PVC shows the new capacity
```
```bash
# Check expansion status
kubectl get pvc my-claim
kubectl describe pvc my-claim | grep -A5 Conditions
# Type: FileSystemResizePending   (waiting for pod restart)
# or
# Type: Resizing                  (in progress)
```
7.4 Offline vs Online Expansion
| Type | Behavior | Support |
|---|---|---|
| Offline | Pod must be deleted/restarted for the filesystem resize | Most storage backends |
| Online | Filesystem resized while the pod is running | Some CSI drivers (EBS CSI, etc.) |
8. Static Provisioning — Full Walkthrough
8.1 Step-by-Step
```bash
# 1. Create the PV
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/static-data
EOF

# 2. Create the PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual
EOF

# 3. Verify binding
kubectl get pv static-pv
kubectl get pvc static-pvc
# Both should show STATUS: Bound

# 4. Use in a pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: static-pvc
EOF
```
9. Troubleshooting Storage
| Problem | Cause | Fix |
|---|---|---|
| PVC stuck in Pending | No matching PV (capacity, access mode, or storageClassName mismatch) | Check that the PV specs match the PVC requirements |
| PVC Pending with dynamic provisioning | StorageClass doesn't exist or provisioner is broken | `kubectl get sc`, check provisioner pods |
| PVC Pending with WaitForFirstConsumer | Normal — PV is created when a pod uses the PVC | Create a pod that mounts the PVC |
| Pod stuck in ContainerCreating | PVC not bound, or volume mount failure | `kubectl describe pod` — check Events |
| PV shows Released after PVC deletion | Reclaim policy is Retain | Manually delete the PV, or remove `claimRef` to reuse it |
| Volume expansion not working | `allowVolumeExpansion: false` on the StorageClass | Edit the StorageClass or recreate it with expansion enabled |
| FileSystemResizePending | Filesystem needs a resize after volume expansion | Restart the pod to trigger the filesystem resize |
```bash
# Debugging commands
kubectl get pv
kubectl get pvc
kubectl describe pvc <name>
kubectl describe pv <name>
kubectl get sc
kubectl describe pod <name>   # check Events for mount errors
kubectl get events --sort-by='.lastTimestamp'
```
9.1 Reusing a Released PV
When a PVC is deleted and the PV has Retain policy, the PV goes to Released state. To reuse it:
```bash
# Remove the claimRef to make it Available again
kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'

# PV goes back to Available
kubectl get pv
```
10. Practice Exercises
Exercise 1 — Static PV and PVC
```bash
# 1. Create a PV with 500Mi, RWO, storageClassName: exam
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: exam-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
  - ReadWriteOnce
  storageClassName: exam
  hostPath:
    path: /mnt/exam-data
EOF

# 2. Create a PVC requesting 200Mi with the same storageClassName
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exam-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: exam
EOF

# 3. Verify binding
kubectl get pv,pvc

# 4. Create a pod that mounts the PVC at /data
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: exam-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo 'CKA exam' > /data/test.txt; sleep 3600"]
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: exam-pvc
EOF

# 5. Verify the data was written
kubectl exec exam-pod -- cat /data/test.txt

# 6. Clean up
kubectl delete pod exam-pod
kubectl delete pvc exam-pvc
kubectl delete pv exam-pv
```
Exercise 2 — emptyDir Shared Between Containers
```bash
# 1. Create a multi-container pod sharing an emptyDir
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-vol
spec:
  containers:
  - name: producer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /shared/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: consumer
    image: busybox
    # touch first so tail doesn't fail if it starts before the producer writes
    command: ["sh", "-c", "touch /shared/log.txt; tail -f /shared/log.txt"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}
EOF

# 2. Check that the consumer sees the producer's data
kubectl logs shared-vol -c consumer

# 3. Clean up
kubectl delete pod shared-vol
```
Exercise 3 — Dynamic Provisioning
```bash
# 1. Check available StorageClasses
kubectl get sc

# 2. Create a PVC using the default StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# 3. Check PVC status (may be Pending if WaitForFirstConsumer)
kubectl get pvc dynamic-pvc

# 4. Create a pod to trigger provisioning
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dynamic-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: dynamic-pvc
EOF

# 5. Verify the PVC is now Bound and a PV was auto-created
kubectl get pvc dynamic-pvc
kubectl get pv

# 6. Clean up
kubectl delete pod dynamic-pod
kubectl delete pvc dynamic-pvc
```
Exercise 4 — Troubleshoot a Pending PVC
```bash
# 1. Create a PVC with a non-existent StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: broken-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: does-not-exist
EOF

# 2. Check status
kubectl get pvc broken-pvc
# STATUS: Pending

# 3. Describe to find the reason
kubectl describe pvc broken-pvc
# "storageclass.storage.k8s.io \"does-not-exist\" not found"

# 4. Fix: change to an existing StorageClass
kubectl delete pvc broken-pvc
# Recreate with the correct storageClassName

# 5. Clean up
kubectl delete pvc broken-pvc --ignore-not-found
```
Exercise 5 — Reclaim Policy Behavior
```bash
# 1. Create a PV with the Retain policy
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: retain-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: retain-test
  hostPath:
    path: /mnt/retain-data
EOF

# 2. Create and bind a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: retain-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: retain-test
EOF

# 3. Verify Bound
kubectl get pv retain-pv

# 4. Delete the PVC
kubectl delete pvc retain-pvc

# 5. Check PV status — should be Released (not Available)
kubectl get pv retain-pv
# STATUS: Released

# 6. Make it Available again by removing the claimRef
kubectl patch pv retain-pv -p '{"spec":{"claimRef": null}}'
kubectl get pv retain-pv
# STATUS: Available

# 7. Clean up
kubectl delete pv retain-pv
```
11. Key Takeaways for the CKA Exam
| Point | Detail |
|---|---|
| PV = cluster-wide, PVC = namespaced | PVCs bind to PVs that match capacity, access mode, and storageClassName |
| Static vs dynamic | Static: admin creates the PV. Dynamic: StorageClass auto-creates a PV from the PVC |
| storageClassName must match | PVC and PV must have the same class; the PVC uses the default SC if omitted |
| `storageClassName: ""` | Explicitly disables dynamic provisioning — only binds to static PVs with no class |
| RWO = single node, not single pod | Multiple pods on the same node can share an RWO volume |
| Reclaim: Retain vs Delete | Retain keeps data after PVC deletion; Delete removes everything |
| WaitForFirstConsumer | PV provisioned when the pod is scheduled — avoids zone mismatches |
| Volume expansion | Only increase, never decrease. Requires `allowVolumeExpansion: true` on the SC |
| emptyDir dies with the pod | Survives container restarts but not pod deletion |
| hostPath ties the pod to a node | Avoid in production; used by static pods and exam scenarios |
| PVC Pending = check describe | `kubectl describe pvc` shows why binding failed |
| Released PV → remove claimRef | `kubectl patch pv <name> -p '{"spec":{"claimRef": null}}'` |
Previous: 13-network-policies.md — Network Policies
Next: 15-troubleshooting-clusters.md — Troubleshooting Clusters