πŸ“šBook Signing at KubeCon EU 2026Meet us at Booking.com HQ (Mon 18:30-21:00) & vCluster booth #521 (Tue 24 Mar, 12:30-1:30pm) β€” free book giveaway!RSVP Booking.com Event
Troubleshooting • Intermediate • ⏱ 20 minutes • K8s 1.28+

OpenShift Ingress Router Troubleshooting

Debug OpenShift HAProxy router issues: pods stuck Pending, hostPort conflicts, PDB violations during maintenance, and custom router deployment scaling problems.

By Luca Berton • 📖 5 min read

πŸ’‘ Quick Answer: Router pods stuck Pending usually means hostPort conflicts β€” all nodes already have a router bound to ports 80/443. Check oc describe pod <pending-router> for β€œdidn’t have free ports”. Fix: reduce replicas to N-1 (where N is node count), or move some routers to dedicated infra nodes with different ports.

The Problem

Custom OpenShift IngressControllers (e.g., for different domains or environments) create multiple router deployments, each running with hostNetwork: true and binding host ports 80/443. When nodes are drained for maintenance or cluster updates, replacement router pods cannot schedule, because every remaining node already has those ports occupied.
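The port claim happens at the pod level. A trimmed router pod spec looks roughly like this (the values are illustrative, but the hostNetwork/hostPort combination is what makes only one router per node possible):

```yaml
# Trimmed router pod spec (illustrative): hostNetwork pods bind node ports directly
spec:
  hostNetwork: true
  containers:
  - name: router
    ports:
    - containerPort: 80
      hostPort: 80       # only one pod per node can claim this port
    - containerPort: 443
      hostPort: 443
```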

The Solution

Step 1: Identify Stuck Routers

# Find Pending router pods
oc get pods -n openshift-ingress | grep -E "Pending|ContainerCreating"

# Check scheduling failures
oc describe pod <pending-router-pod> -n openshift-ingress
# Events:
#   Warning FailedScheduling: 0/6 nodes are available:
#     6 didn't have free ports for the requested host ports

Step 2: Map Router Distribution

# See which routers are on which nodes
oc get pods -n openshift-ingress -o wide --sort-by='{.spec.nodeName}'

# Count routers per node
oc get pods -n openshift-ingress -o json | \
  jq -r '.items[] | select(.status.phase=="Running") | .spec.nodeName' | sort | uniq -c | sort -rn
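If jq isn't available, the same per-node count can be derived from the wide listing with awk. A standalone sketch on canned `oc get pods -o wide` output (column 3 is STATUS, column 7 is NODE):

```shell
# Sketch: count running routers per node without jq, from canned -o wide output
PODS='router-default-abc   1/1  Running  0  3d  10.0.0.1  worker-1  <none>  <none>
router-default-def   1/1  Running  0  3d  10.0.0.2  worker-2  <none>  <none>
router-custom-ghi    1/1  Running  0  1d  10.0.0.3  worker-1  <none>  <none>'

# Tally column 7 (NODE) for rows whose column 3 (STATUS) is Running
COUNTS=$(printf '%s\n' "$PODS" \
  | awk '$3=="Running" {c[$7]++} END {for (n in c) print c[n], n}' \
  | sort -rn)
echo "$COUNTS"
```

In a real cluster, pipe `oc get pods -n openshift-ingress -o wide --no-headers` into the same awk expression.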

Step 3: Check IngressController Configuration

# List all IngressControllers
oc get ingresscontroller -n openshift-ingress-operator

# Check a specific one
oc get ingresscontroller custom-router -n openshift-ingress-operator -o yaml

Key fields:

spec:
  replicas: 6                    # Too many? Should be ≀ node count - 1
  endpointPublishingStrategy:
    type: HostNetwork            # Uses host ports 80, 443
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""

Step 4: Fix the Configuration

Option A: Reduce replicas

# Set replicas to worker_count - 1 for maintenance headroom
WORKER_COUNT=$(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | wc -l)
oc patch ingresscontroller custom-router -n openshift-ingress-operator \
  --type merge -p "{\"spec\":{\"replicas\":$((WORKER_COUNT - 1))}}"
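The headroom arithmetic is worth sanity-checking before patching. A standalone sketch (the worker count here is illustrative, not read from a cluster):

```shell
# Illustrative headroom math: with 6 workers, cap router replicas at 5
# so one node can always be drained without stranding a router pod
WORKER_COUNT=6
REPLICAS=$((WORKER_COUNT - 1))
echo "replicas=$REPLICAS"
```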

Option B: Use different ports per router

spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 8080      # Non-standard port
      httpsPort: 8443
      statsPort: 1937
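Whether the port fields can be updated in place depends on your cluster version; building the merge patch as a variable makes it easy to review before applying (the custom-router name comes from the earlier example, and the ports are the ones shown above):

```shell
# Merge patch moving the custom router off ports 80/443 (ports are examples)
PATCH='{"spec":{"endpointPublishingStrategy":{"type":"HostNetwork","hostNetwork":{"httpPort":8080,"httpsPort":8443,"statsPort":1937}}}}'
echo "$PATCH"
# Apply with:
#   oc patch ingresscontroller custom-router -n openshift-ingress-operator \
#     --type merge -p "$PATCH"
```

Note that external clients still expect 80/443, so a load balancer or NAT rule in front of the nodes must map those ports to 8080/8443.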

Option C: Use NodePort instead of HostNetwork

Note that endpointPublishingStrategy is set at IngressController creation and cannot be changed afterwards, so switching an existing router to NodePortService means deleting and recreating the IngressController.

spec:
  endpointPublishingStrategy:
    type: NodePortService
    nodePort:
      protocol: TCP

During Maintenance: Temporary Scale-Down

# Before draining a node, scale down routers that will block
ROUTERS=$(oc get deploy -n openshift-ingress -o name)
for router in $ROUTERS; do
  replicas=$(oc get "$router" -n openshift-ingress -o jsonpath='{.spec.replicas}')
  echo "$router: $replicas replicas"
done

# Scale down the router blocking the drain, recording its count first
SAVED=$(oc get deploy/router-custom -n openshift-ingress -o jsonpath='{.spec.replicas}')
oc scale deploy/router-custom -n openshift-ingress --replicas=0
# ... drain node ...
# Restore afterwards
oc scale deploy/router-custom -n openshift-ingress --replicas="$SAVED"

Common Issues

Default Router Conflicts with Custom Routers

Both the default router and custom routers bind to ports 80/443. Solutions:

  • Use different ports for custom routers
  • Use nodeSelector to separate them onto different node groups
  • Remove the default router if not needed
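For the second option, a nodePlacement block on the custom IngressController keeps it off the default router's nodes. A sketch using the common infra-node convention (adjust the label and toleration to your cluster):

```yaml
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
```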

Router Pods Evicted During Node Pressure

# Check for evicted router pods
oc get pods -n openshift-ingress | grep Evicted

# Clean up
oc delete pods -n openshift-ingress --field-selector status.phase=Evicted

Best Practices

  • Set replicas to N-1 where N is eligible nodes β€” always leave headroom for maintenance
  • Use dedicated infra nodes for ingress routers with taints and tolerations
  • Separate routers by port if running multiple IngressControllers on the same nodes
  • Use maxUnavailable: 1 PDB β€” not minAvailable: N which blocks drains
  • Monitor router readiness during MCP rollouts
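For the PDB bullet, a drain-friendly budget looks like the sketch below. Operator-managed IngressControllers create their own PDB, so this applies to self-managed router deployments; the selector label follows the ingress operator's convention and should be checked against your actual pods:

```yaml
# Sketch: drain-friendly PDB for a self-managed router deployment
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: router-custom-pdb
  namespace: openshift-ingress
spec:
  maxUnavailable: 1     # always allows one pod to be evicted, so drains proceed
  selector:
    matchLabels:
      ingresscontroller.operator.openshift.io/deployment-ingresscontroller: custom-router
```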

Key Takeaways

  • hostNetwork routers bind host ports β€” only one router can use port 80 per node
  • Pending routers mean all nodes’ ports are occupied β€” no room for rescheduling
  • Set replicas ≀ node_count - 1 for maintenance headroom
  • Consider NodePort or different port assignments for multiple IngressControllers
  • During maintenance, temporarily scale down blocking routers then restore
#openshift #ingress #haproxy #router #troubleshooting
Written by Luca Berton

Principal Solutions Architect specializing in Kubernetes, AI/GPU infrastructure, and cloud-native platforms. Author of Kubernetes Recipes and creator of CopyPasteLearn courses.


Want More Kubernetes Recipes?

This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.
