Configuration · Intermediate · ⏱ 20 minutes · K8s 1.28+

Configure ClusterPolicy kernelModuleType for GPU Operator

Understand and configure the driver.kernelModuleType field in the NVIDIA GPU Operator ClusterPolicy to choose between the auto, open, and proprietary kernel module variants.

By Luca Berton • 📖 5 min read

💡 Quick Answer: Set driver.kernelModuleType: open to enable DMA-BUF and GPUDirect Storage. The auto default selects the recommended type based on the driver branch and GPU model, but setting open explicitly ensures these features are available on GPUs that support the open modules.

The driver.kernelModuleType field in the ClusterPolicy controls which NVIDIA kernel module variant the GPU Operator builds and loads on each node.
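In a ClusterPolicy manifest, the field sits under spec.driver. A minimal excerpt as a sketch (the resource name gpu-cluster-policy matches the commands used later in this recipe; other driver fields are omitted):

```yaml
apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: gpu-cluster-policy
spec:
  driver:
    enabled: true
    # auto | open | proprietary (auto is the default since v25.3.0)
    kernelModuleType: open
```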

Available Options

| Value | Behavior |
| --- | --- |
| auto | GPU Operator chooses based on driver branch and GPU devices (default since v25.3.0) |
| open | Forces the Open GPU Kernel Module (required for DMA-BUF and GDS) |
| proprietary | Forces the proprietary kernel module (legacy) |

When to Use Each Option

  • auto β€” Safe default for most clusters. Newer driver versions automatically select open.
  • open β€” Required when you need DMA-BUF GPUDirect RDMA, GPUDirect Storage (GDS v2.17.5+), or want to ensure forward compatibility.
  • proprietary β€” Only needed for legacy GPU architectures that do not support open modules.
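Before choosing, it can help to see which GPU models are actually present in the cluster. A sketch, assuming cluster access and the nvidia.com/gpu.product label applied by GPU Feature Discovery:

```
# List GPU product names per node (label is set by GPU Feature Discovery)
oc get nodes -l nvidia.com/gpu.present=true \
  -o custom-columns='NODE:.metadata.name,GPU:.metadata.labels.nvidia\.com/gpu\.product'
```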

Check Current Setting

oc get clusterpolicy gpu-cluster-policy -o jsonpath='{.spec.driver.kernelModuleType}'

Change the Setting

oc edit clusterpolicy gpu-cluster-policy
spec:
  driver:
    kernelModuleType: open
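If you prefer a non-interactive change (for scripting), a merge patch achieves the same result; a sketch assuming the same resource name:

```
oc patch clusterpolicy gpu-cluster-policy --type merge \
  -p '{"spec": {"driver": {"kernelModuleType": "open"}}}'
```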

Restart driver pods to apply:

oc delete pod -n gpu-operator -l app=nvidia-driver-daemonset

Verify the Active Module Type

Check the driver container logs for the resolution:

oc logs -n gpu-operator ds/nvidia-driver-daemonset -c nvidia-driver-ctr | grep -i kernel

With auto, the driver container queries the recommended type and logs the result; look for lines like:

nvidia-installer --print-recommended-kernel-module-type
kernel_module_type=open

This confirms the Operator resolved auto β†’ open for your hardware.

Verify on the Host

oc debug node/<node-name>
chroot /host
modinfo nvidia | grep license

Open kernel modules show Dual MIT/GPL licensing. Proprietary modules show NVIDIA only.
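The license check above can be scripted. A minimal sketch that classifies a modinfo license string; the sample value is an assumption matching the licensing noted above, so substitute the real output of `modinfo nvidia | grep license`:

```shell
# Classify the driver flavor from a `modinfo nvidia` license field.
license='Dual MIT/GPL'   # assumption: sample value; use real modinfo output here
case "$license" in
  *'MIT/GPL'*) echo "open" ;;         # open kernel modules are dual-licensed
  *'NVIDIA'*)  echo "proprietary" ;;  # proprietary modules are NVIDIA-only
  *)           echo "unknown" ;;
esac
# prints: open
```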

Feature Dependency Matrix

| Feature | Requires open |
| --- | --- |
| DMA-BUF GPUDirect RDMA | Yes |
| GPUDirect Storage (GDS v2.17.5+) | Yes |
| Legacy nvidia-peermem RDMA | No |
| Standard GPU compute | No |

Why This Matters

Choosing the correct kernel module type determines which GPU features are available. Setting open unlocks DMA-BUF and GDS while maintaining full compute compatibility on Turing+ GPUs.

#nvidia #gpu-operator #clusterpolicy #kernel-modules #configuration #openshift
Written by Luca Berton

Principal Solutions Architect specializing in Kubernetes, AI/GPU infrastructure, and cloud-native platforms. Author of Kubernetes Recipes and creator of CopyPasteLearn courses.


Want More Kubernetes Recipes?

This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.
