LVM storage with TopoLVM — full interaction example

This applies to Kubernetes and works identically on OpenShift Container Platform.

The CSI driver used here is TopoLVM.


0 Preconditions (important)

On each node:

  1. A disk is available (e.g. /dev/sdb)
  2. An LVM Volume Group exists

Example (done outside Kubernetes):

pvcreate /dev/sdb
vgcreate vg-data /dev/sdb
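To confirm the prerequisites are in place (a sketch using the example device and VG names):

```shell
# Run as root on each node; names match the example above.
pvs /dev/sdb    # the disk is registered as an LVM Physical Volume
vgs vg-data     # the Volume Group exists and has free space
```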

TopoLVM will only manage existing VGs — it does not create them.

Source: https://topolvm.io/docs/


1 TopoLVM CSI driver (conceptual)

TopoLVM is installed via:

  • Helm
  • or Operator (in OpenShift)

What matters for manifests:

  • CSI provisioner name: topolvm.io

You do not manually write the CSI deployment YAML.

Source: https://github.com/topolvm/topolvm
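A minimal sketch of a Helm-based install, following the TopoLVM docs (cert-manager is a prerequisite, and the TopoLVM namespace must carry the webhook-ignore label):

```shell
# Add the chart repository and install into a dedicated namespace.
helm repo add topolvm https://topolvm.github.io/topolvm
helm repo update
kubectl create namespace topolvm-system
# TopoLVM's admission webhook must ignore its own namespace.
kubectl label namespace topolvm-system topolvm.io/webhook=ignore
helm install --namespace=topolvm-system topolvm topolvm/topolvm
```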


2 StorageClass (TopoLVM)

Purpose: tell Kubernetes which VG to use.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-storage
provisioner: topolvm.io
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  csi.storage.k8s.io/fstype: ext4
  topolvm.io/vg-name: vg-data

Key points

  • topolvm.io/vg-name → LVM Volume Group name
  • WaitForFirstConsumer is critical: the volume must be created on the node where the Pod runs

Source: https://topolvm.io/docs/storageclass/
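Assuming the manifest above is saved as storageclass.yaml, registering and checking it looks like:

```shell
kubectl apply -f storageclass.yaml
kubectl get storageclass lvm-storage
# PROVISIONER shows topolvm.io, VOLUMEBINDINGMODE shows WaitForFirstConsumer
```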


4 PersistentVolumeClaim (PVC)

Purpose: request an LVM Logical Volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lvm-storage
  resources:
    requests:
      storage: 1Gi

What happens now

  • PVC stays Pending
  • No PV exists yet
  • No LV has been created yet

This is because provisioning is delayed until a Pod that uses the claim is scheduled (WaitForFirstConsumer).

Source: https://kubernetes.io/docs/concepts/storage/storage-classes/
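You can watch the delayed binding directly (assuming the PVC above is saved as pvc.yaml):

```shell
kubectl apply -f pvc.yaml
kubectl get pvc lvm-claim
# STATUS stays Pending: no PV and no LV until a Pod uses the claim
```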


5 Pod using the PVC

apiVersion: v1
kind: Pod
metadata:
  name: lvm-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lvm-claim

6 What happens when the Pod is created

Step-by-step (this is the important part):

  1. Scheduler picks a node
  2. Kubernetes calls TopoLVM on that node
  3. TopoLVM creates an LVM Logical Volume
  4. TopoLVM formats it (ext4)
  5. CSI creates a PV
  6. PVC is Bound
  7. Volume is mounted into /data
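The same sequence, observed from the CLI (a sketch assuming the Pod manifest above is saved as pod.yaml):

```shell
kubectl apply -f pod.yaml
kubectl get pvc lvm-claim            # STATUS becomes Bound
kubectl get pv                       # an auto-created PV appears
kubectl exec lvm-app -- df -h /data  # the ext4 volume is mounted
```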

7 Auto-created PersistentVolume (for understanding)

You normally never write this, but this is what appears:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-7c4b2e1d-xxxx
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: lvm-storage
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: topolvm.io
    volumeHandle: lvm-abcdef1234
    fsType: ext4

Mapping to LVM

Kubernetes     Linux
PV             Logical Volume
StorageClass   VG policy
PVC            LV size request

Source: https://topolvm.io/docs/concepts/


8 Verify on the node (optional but enlightening)

On the node where the Pod runs:

lvs

You will see something like:

LV               VG       Attr       LSize
pvc-abcdef1234   vg-data  -wi-ao----  1.00g

This confirms:

  • real LVM is used
  • Kubernetes is just orchestrating it

9 Deletion behavior

Action         Result
Delete Pod     LV remains
Delete PVC     LV is deleted
Delete PV      LV already gone
Node failure   Data inaccessible

This assumes the StorageClass reclaim policy is Delete, as set above.

Source: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
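The table above can be reproduced step by step (commands against the example objects):

```shell
kubectl delete pod lvm-app    # the LV on the node is untouched
kubectl delete pvc lvm-claim  # the PV is removed and the backing LV is deleted
lvs vg-data                   # run on the node: the LV is gone
```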


10 Constraints (must be clear)

  • ❌ No ReadWriteMany
  • ❌ No automatic HA
  • ❌ Pod is pinned to the node that holds its volume
  • ✅ Very fast local storage
  • ✅ Simple, transparent design

11 Full interaction chain (LVM)

Pod
 └─ PVC (1Gi)
      └─ PV (CSI-managed)
           └─ TopoLVM
                └─ LVM Logical Volume
                     └─ Physical disk

Bottom line

  • TopoLVM = local block storage via CSI
  • Kubernetes does not manage data
  • LVM remains pure Linux LVM
  • CSI is just the control plane glue