Configure ClusterPolicy kernelModuleType for GPU Operator
Understand and configure the driver.kernelModuleType field in the NVIDIA GPU Operator ClusterPolicy to choose between the auto, open, and proprietary kernel module types.
💡 Quick Answer: Set `driver.kernelModuleType: open` to enable DMA-BUF and GPUDirect Storage. The `auto` default selects the recommended type based on driver branch and GPU model, but `open` guarantees compatibility with modern features.
The `driver.kernelModuleType` field in the ClusterPolicy controls which NVIDIA kernel module variant the GPU Operator builds and loads on each node.
Available Options
| Value | Behavior |
|---|---|
| `auto` | GPU Operator chooses based on driver branch and GPU devices (default since v25.3.0) |
| `open` | Forces the Open GPU Kernel Module (required for DMA-BUF and GDS) |
| `proprietary` | Forces the proprietary kernel module (legacy) |
When to Use Each Option
- `auto`: Safe default for most clusters. Newer driver versions automatically select `open`.
- `open`: Required when you need DMA-BUF GPUDirect RDMA, GPUDirect Storage (GDS v2.17.5+), or want to ensure forward compatibility.
- `proprietary`: Only needed for legacy GPU architectures that do not support the open modules.
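If you are unsure which option your hardware needs, the architecture rule of thumb (open modules require Turing or newer) can be sketched as a small lookup. This is illustrative only: the function name `supports_open_module` and the GPU-name keywords are our own shorthand, not an official NVIDIA list.

```shell
#!/bin/sh
# Sketch: guess whether a GPU can run the open kernel module from its
# marketed name. Open modules require Turing-or-newer silicon; the keyword
# lists below are an illustrative, non-exhaustive mapping.
supports_open_module() {
  case "$1" in
    *T4*|*RTX*|*A100*|*A30*|*H100*|*L4*|*L40*) echo "yes" ;;      # Turing+
    *K80*|*P100*|*V100*|*M60*)                 echo "no"  ;;      # pre-Turing
    *)                                         echo "unknown" ;;  # check NVIDIA docs
  esac
}

# Example: feed it the names reported by nvidia-smi on a node:
#   nvidia-smi --query-gpu=name --format=csv,noheader \
#     | while read -r gpu; do echo "$gpu: $(supports_open_module "$gpu")"; done
```

If any node reports a pre-Turing GPU, stay on `proprietary` (or rely on `auto`, which makes the same decision per driver branch and device).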
Check Current Setting
```shell
oc get clusterpolicy gpu-cluster-policy -o jsonpath='{.spec.driver.kernelModuleType}'
```
Change the Setting
```shell
oc edit clusterpolicy gpu-cluster-policy
```
```yaml
spec:
  driver:
    kernelModuleType: open
```
Restart the driver pods to apply:
```shell
oc delete pod -n gpu-operator -l app=nvidia-driver-daemonset
```
Verify the Active Module Type
Check the driver container logs for the resolution:
```shell
oc logs -n gpu-operator ds/nvidia-driver-daemonset -c nvidia-driver-ctr | grep -i kernel
```
With `auto`, look for the lines:
```
nvidia-installer --print-recommended-kernel-module-type
kernel_module_type=open
```
This confirms the Operator resolved `auto` → `open` for your hardware.
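To check this without eyeballing the logs, the resolved value can be extracted mechanically. A minimal sketch, assuming the `kernel_module_type=<value>` log format shown above; the helper name `resolved_module_type` is our own:

```shell
#!/bin/sh
# Sketch: extract the resolved kernel module type from driver-container logs.
# Assumes the logs contain a line of the form "kernel_module_type=<value>".
resolved_module_type() {
  grep -o 'kernel_module_type=[a-z]*' | head -n1 | cut -d= -f2
}

# Example usage against a live cluster:
#   oc logs -n gpu-operator ds/nvidia-driver-daemonset -c nvidia-driver-ctr \
#     | resolved_module_type
```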
Verify on the Host
```shell
oc debug node/<node-name>
chroot /host
modinfo nvidia | grep license
```
Open kernel modules show `Dual MIT/GPL` licensing. Proprietary modules show `NVIDIA` only.
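The license string is a reliable discriminator, so the host-side check can be scripted. A small sketch (the function name `module_type_from_license` is our own, not part of any NVIDIA tooling):

```shell
#!/bin/sh
# Sketch: classify the loaded nvidia module from the output of
# `modinfo nvidia | grep license`. Open kernel modules are dual-licensed
# MIT/GPL; proprietary modules report "NVIDIA" only.
module_type_from_license() {
  if grep -q "Dual MIT/GPL"; then
    echo "open"
  else
    echo "proprietary"
  fi
}

# Example (run inside `oc debug node/<node-name>` after `chroot /host`):
#   modinfo nvidia | grep license | module_type_from_license
```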
Feature Dependency Matrix
| Feature | Requires `open` |
|---|---|
| DMA-BUF GPUDirect RDMA | Yes |
| GPUDirect Storage (GDS v2.17.5+) | Yes |
| Legacy nvidia-peermem RDMA | No |
| Standard GPU compute | No |
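The matrix above can be encoded as a lookup for use in preflight scripts. A sketch; the function name `required_module_type` and the feature keys are our own shorthand for the rows of the table:

```shell
#!/bin/sh
# Sketch: map a required feature (per the dependency matrix) to the kernel
# module type it needs. "any" means both open and proprietary work.
required_module_type() {
  case "$1" in
    dma-buf|gds)            echo "open" ;;  # DMA-BUF GPUDirect RDMA, GDS v2.17.5+
    nvidia-peermem|compute) echo "any"  ;;  # legacy peermem RDMA, standard compute
    *)                      echo "unknown" ;;
  esac
}

# Example: required_module_type gds   -> open
```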
Why This Matters
Choosing the correct kernel module type determines which GPU features are available. Setting `open` unlocks DMA-BUF and GDS while maintaining full compute compatibility on Turing and newer GPUs.
