SR-IOV Drivers Explained

The driver controls how an SR-IOV virtual function (VF) shows up inside the container.

netdevice driver — "Just a normal network card"

  • The VF works exactly like any other Linux network device
  • The container's apps don't need to know anything special — they just use standard Linux networking (sockets, etc.)
  • The OS kernel is still involved in managing it
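On OpenShift, the driver is selected per node policy. The sketch below shows a minimal `SriovNetworkNodePolicy` that requests kernel-managed VFs via `deviceType: netdevice`; the policy name, `resourceName`, and PF name (`ens3f0`) are illustrative assumptions, not values from this document.

```yaml
# Hypothetical example: create 4 kernel-driver (netdevice) VFs on a chosen PF.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-netdevice          # assumed name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic          # assumed resource name, exposed by the device plugin
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    pfNames: ["ens3f0"]           # assumed physical function name
  deviceType: netdevice           # VF appears as a normal kernel network interface
```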

vfio-pci driver — "Direct hardware access"

  • Requires the app to use a special library (typically DPDK) to drive the hardware itself
  • The kernel essentially steps aside — the app handles everything
  • This is where you get the absolute lowest latency and highest throughput
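A corresponding policy for userspace networking only differs in the `deviceType` field. Again a hedged sketch with assumed names, not a definitive configuration:

```yaml
# Hypothetical example: bind VFs to vfio-pci for DPDK-style userspace drivers.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-vfio               # assumed name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: dpdknic           # assumed resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    pfNames: ["ens3f1"]           # assumed physical function name
  deviceType: vfio-pci            # VF is handed to the container as raw PCI hardware
```

With `vfio-pci`, the container sees a PCI device rather than a network interface, so the application (e.g. a DPDK app) must drive the NIC itself.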

In short:

  • netdevice → VF looks like a normal network card inside the container; the kernel network stack still applies ✅

  • vfio-pci → VF is exposed as raw hardware; kernel networking is bypassed entirely

SR-IOV and Red Hat OpenShift Container Platform

SR-IOV Network Operator creates and manages the components of the SR-IOV stack:

  • Orchestrates discovery and management of SR-IOV network devices
  • Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI)
  • Creates and updates the configuration of the SR-IOV network device plugin
  • Creates node-specific SriovNetworkNodeState custom resources
  • Updates the spec.interfaces field in each SriovNetworkNodeState custom resource
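To make the VFs attachable, a `SriovNetwork` resource tells the operator to generate the corresponding NetworkAttachmentDefinition; a pod then references it by annotation. The names (`sriov-net`, `app-namespace`, the subnet) are illustrative assumptions:

```yaml
# Hypothetical example: the operator renders this into a NetworkAttachmentDefinition
# in the target namespace, backed by the device-plugin resource "sriovnic".
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-net                 # assumed name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic          # must match a SriovNetworkNodePolicy resourceName
  networkNamespace: app-namespace # assumed target namespace
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.0.0/24"
    }
```

A pod in `app-namespace` would then attach a VF with the annotation `k8s.v1.cni.cncf.io/networks: sriov-net`.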