Master Kubernetes networking with recipes covering Ingress controllers, Service types, NetworkPolicies, DNS configuration, and load balancing patterns.
49 recipes available
Learn different ways to expose Kubernetes services externally using LoadBalancer, NodePort, and ExternalIPs. Compare options for various environments.
Implement a zero-trust network security model in Kubernetes by creating a default deny-all NetworkPolicy. Learn how to block all ingress and egress.
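As a sketch of the pattern this recipe covers: an empty `podSelector` matches every pod in the namespace, and listing both policy types with no allow rules blocks all traffic by default (the `production` namespace is an assumption for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production      # hypothetical namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress                # deny all inbound traffic
    - Egress                 # deny all outbound traffic
```

Additional allow-list policies are then layered on top for the specific flows each workload needs.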
Use agent-config.yaml to select network devices by PCI path for SR-IOV VF creation, ensuring deterministic NIC targeting across OpenShift nodes.
Create bonded network interfaces on Kubernetes worker nodes using NodeNetworkConfigurationPolicy for NIC redundancy and link aggregation.
Configure static routes, DNS servers, and policy-based routing on worker nodes using NodeNetworkConfigurationPolicy for multi-network setups.
Create Linux bridges on Kubernetes worker nodes using NodeNetworkConfigurationPolicy for KubeVirt VM networking and pod bridging.
Set MTU and enable jumbo frames on worker node interfaces using NodeNetworkConfigurationPolicy for high-throughput storage and AI networking.
Troubleshoot NodeNetworkConfigurationPolicy failures, monitor enactments, configure rollback timeouts, and recover from bad network configurations.
Use NodeNetworkConfigurationPolicy to assign static IPv4 and IPv6 addresses to worker node interfaces with nodeSelector targeting.
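A minimal sketch of the static-address pattern, assuming a worker NIC named `eth1` and an illustrative address range (both are assumptions, not values from the recipe):

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-static-ip
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # apply only to worker nodes
  desiredState:
    interfaces:
      - name: eth1                       # hypothetical NIC name
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.10.10          # example address
              prefix-length: 24
```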
Configure VLAN interfaces on Kubernetes worker nodes using NodeNetworkConfigurationPolicy for network segmentation and traffic isolation.
Route raw GPU inference traffic through NodePort for low-latency gRPC, and serve HTTPS model endpoints through the OpenShift ingress controller.

Enable NIC Feature Discovery through NicClusterPolicy and verify the node labels required by SR-IOV and RDMA GPU networking workflows on Kubernetes.
Use the OpenShift SR-IOV Network Operator to create and manage Virtual Functions from selected Physical Functions on GPU worker nodes.
Confirm the actual OVN underlay network path by checking ovn-encap-ip, bridge port ownership, and physical route associations on Kubernetes nodes.
Implement the Gateway API for advanced traffic routing in Kubernetes. Learn HTTPRoute, TLSRoute, and traffic splitting with the next-generation Ingress.
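As an example of the traffic-splitting pattern this recipe describes, an `HTTPRoute` can weight traffic across two backend Services (the Gateway name, hostname, and Service names are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
    - name: example-gateway      # hypothetical Gateway
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: app-v1           # 90% of requests
          port: 8080
          weight: 90
        - name: app-v2           # 10% canary
          port: 8080
          weight: 10
```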
Deploy Linkerd service mesh for Kubernetes. Learn to add mTLS encryption, traffic management, and observability with minimal configuration overhead.
Customize CoreDNS configuration for your cluster. Add custom DNS entries, configure forwarding, and optimize DNS resolution.
Configure custom DNS settings in Kubernetes. Learn CoreDNS customization, stub domains, upstream servers, and pod DNS policies.
Control pod DNS resolution with DNS policies and configs. Configure custom nameservers, search domains, and optimize DNS for your workloads.
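A minimal sketch of a per-pod DNS override, assuming a hypothetical internal resolver at `10.0.0.53`; with `dnsPolicy: "None"` the pod ignores cluster DNS defaults and uses only the supplied `dnsConfig`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: "None"            # discard cluster DNS defaults
  dnsConfig:
    nameservers:
      - 10.0.0.53              # assumed internal resolver
    searches:
      - svc.cluster.local
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
```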
Configure advanced routing rules with Kubernetes Ingress. Implement path-based routing, host-based routing, and traffic management.
Configure TLS termination for Kubernetes Ingress using cert-manager and Let's Encrypt. Automate certificate issuance and renewal.
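The core of this pattern is an Ingress annotated for cert-manager; the controller then issues a certificate into the referenced Secret automatically (the issuer name, hostname, and Service are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tls
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls    # cert-manager populates this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # assumed backend Service
                port:
                  number: 80
```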
Understand and manage EndpointSlices for scalable service discovery. Configure endpoint slicing, troubleshoot connectivity, and optimize large clusters.
Understand and configure Kubernetes DNS with CoreDNS. Customize DNS policies, configure external DNS resolution, and troubleshoot DNS issues.
Secure pod-to-pod communication with Kubernetes Network Policies. Learn to create ingress and egress rules, isolate namespaces, and implement zero-trust.
Protect your services with rate limiting. Configure rate limits using Ingress, service mesh, and API gateways to prevent abuse and ensure fair usage.
Learn how to set up NGINX Ingress Controller with automatic TLS certificates from Let's Encrypt using cert-manager. Includes complete YAML examples.
Configure GPUDirect RDMA using DMA-BUF kernel subsystem for zero-copy GPU-to-GPU transfers over InfiniBand and RoCE networks.
Configure HAProxy with Keepalived VIPs for per-tenant GPU cluster ingress with Jinja2 templates and per-tenant logging.
Compare InfiniBand and Ethernet networking for GPU AI workloads on Kubernetes, including RDMA, RoCE, latency, and throughput considerations.
Configure bonded NICs for NFS over RDMA using switch access-mode ports for VLAN assignment, with link aggregation on untagged interfaces for RDMA redundancy.
Configure dedicated NICs for NFS over RDMA on Kubernetes worker nodes. NFSoRDMA requires untagged interfaces: no VLAN tagging is supported.
Configure a 9000-byte MTU (jumbo frames) for NFSoRDMA interfaces using NNCP to maximize RDMA throughput on Kubernetes nodes.
Design multi-VLAN NFSoRDMA networks using switch access mode ports. Separate storage, replication, and backup traffic with dedicated NICs per VLAN.
Troubleshoot NFS over RDMA connectivity issues, diagnose TCP fallback, tune performance, and benchmark RDMA throughput on Kubernetes workers.
Complete worker node setup for NFS over RDMA including kernel modules, NFS client configuration, PersistentVolume mounts, and RDMA transport verification.
Design a complete multi-NIC worker node architecture with NNCP for separated management, storage, tenant, and GPU traffic using bonds, VLANs, and bridges.
Configure Open vSwitch bridges on Kubernetes worker nodes using NodeNetworkConfigurationPolicy for advanced SDN and DPDK networking.
Configure SR-IOV virtual functions and macvlan interfaces on worker nodes using NodeNetworkConfigurationPolicy for high-performance networking.
Understand and customize the NVIDIA NIC driver container entrypoint for MOFED and DOCA driver lifecycle on Kubernetes and OpenShift.
Configure SR-IOV with mixed ConnectX-7 and ConnectX-6 NICs for RDMA data plane and management traffic on GPU worker nodes.
Configure SR-IOV Virtual Functions on Mellanox ConnectX NICs using the NVIDIA Network Operator NicClusterPolicy for high-performance Kubernetes networking.
Deploy SR-IOV Virtual Functions with RDMA support for distributed AI training on Kubernetes, including multi-NIC pod configuration and NCCL tuning.
Set up GPUDirect RDMA on Kubernetes using the NVIDIA GPU Operator with either DMA-BUF or legacy nvidia-peermem, including Network Operator integration.
Migrate from the legacy nvidia-peermem kernel module to the recommended DMA-BUF GPUDirect RDMA path using the NVIDIA GPU Operator.
Run ib_write_bw with CUDA DMA-BUF to verify GPUDirect RDMA data transfer rates between GPU pods and validate network operator configuration.
Create a SriovNetwork resource that auto-generates a Multus NetworkAttachmentDefinition using nv-ipam for high-performance SR-IOV secondary interfaces.
Define a valid nv-ipam IPPool and node-aware sizing strategy so SR-IOV workloads can reliably obtain secondary interface IP addresses on Kubernetes.
Implement advanced traffic management with Istio service mesh including traffic splitting, fault injection, circuit breaking, and intelligent routing.
Deploy Istio service mesh for traffic management, security, and observability. Learn to configure virtual services, destination rules, and mTLS.
Our book includes an entire chapter dedicated to networking with dozens more examples.
Explore All Chapters