How to Set Up Linkerd Service Mesh
Deploy Linkerd service mesh for Kubernetes. Learn to add mTLS encryption, traffic management, and observability with minimal configuration overhead.
The Problem
You need to secure service-to-service communication, implement traffic management, and gain deep observability into your microservices without modifying application code.
The Solution
Deploy Linkerd, an ultralight, security-first service mesh that provides automatic mTLS, observability, and reliability features with minimal resource overhead.
Linkerd Architecture
flowchart TB
    subgraph controlplane["CONTROL PLANE"]
        direction LR
        destination["destination<br/>(routing)"]
        identity["identity<br/>(mTLS CA)"]
        injector["proxy-injector<br/>(sidecar injection)"]
    end
    controlplane --> dataplane
    subgraph dataplane["DATA PLANE"]
        subgraph podA["Pod A"]
            AppA["App"]
            ProxyA["Proxy"]
            AppA --- ProxyA
        end
        subgraph podB["Pod B"]
            ProxyB["Proxy"]
            AppB["App"]
            ProxyB --- AppB
        end
        ProxyA <-->|"mTLS"| ProxyB
    end
Step 1: Install Linkerd CLI
# Install CLI
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
# Add to PATH
export PATH=$HOME/.linkerd2/bin:$PATH
# Verify installation
linkerd version
# Check cluster compatibility
linkerd check --pre
Step 2: Install Linkerd Control Plane
Option A: Using CLI
# Generate certificates for identity (required for production)
step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
--profile intermediate-ca --not-after 8760h --no-password --insecure \
--ca ca.crt --ca-key ca.key
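# (Optional) Sanity-check the generated issuer certificate before installing;
# this assumes the step CLI used above is available
step certificate inspect issuer.crt --short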
# Install CRDs
linkerd install --crds | kubectl apply -f -
# Install control plane
linkerd install \
--identity-trust-anchors-file ca.crt \
--identity-issuer-certificate-file issuer.crt \
--identity-issuer-key-file issuer.key \
| kubectl apply -f -
# Wait for deployment
linkerd check
Option B: Using Helm
# Add Helm repo
helm repo add linkerd https://helm.linkerd.io/stable
helm repo update
# Install CRDs
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
# Install control plane
helm install linkerd-control-plane linkerd/linkerd-control-plane \
-n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key
Step 3: Install Viz Extension (Observability Dashboard)
# Install viz extension
linkerd viz install | kubectl apply -f -
# Check viz deployment
linkerd viz check
# Access dashboard
linkerd viz dashboard &
Step 4: Inject Linkerd Proxy into Workloads
Automatic Injection (Recommended)
Add annotation to namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  annotations:
    linkerd.io/inject: enabled
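If the namespace already exists, the annotation can be added with kubectl; existing pods must be recreated for the proxy to be injected (a sketch, namespace name as above):
# Annotate the existing namespace
kubectl annotate namespace myapp linkerd.io/inject=enabled
# Restart workloads so pods come back with the sidecar
kubectl rollout restart deploy -n myapp
# Verify the data plane proxies in the namespace
linkerd check --proxy -n myapp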
Manual Injection
# Inject into existing deployment
kubectl get deploy myapp -o yaml | linkerd inject - | kubectl apply -f -
# Inject all deployments in namespace
kubectl get deploy -n myapp -o yaml | linkerd inject - | kubectl apply -f -
Deployment with Linkerd Annotations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        linkerd.io/inject: enabled
        # Optional: Configure proxy resources
        config.linkerd.io/proxy-cpu-request: "100m"
        config.linkerd.io/proxy-memory-request: "64Mi"
        config.linkerd.io/proxy-cpu-limit: "500m"
        config.linkerd.io/proxy-memory-limit: "128Mi"
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
Step 5: Verify mTLS
# Check mTLS status for all meshed pods
linkerd viz edges deployment -n myapp
# Check specific connection
linkerd viz tap deployment/web-app -n myapp
# View traffic statistics
linkerd viz stat deployment -n myapp
Traffic Management with Linkerd
Traffic Split (Canary Deployments)
In recent Linkerd releases (2.12+), TrafficSplit support is provided by the linkerd-smi extension.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-app-canary
  namespace: myapp
spec:
  service: web-app
  backends:
  - service: web-app-stable
    weight: 900m # 90%
  - service: web-app-canary
    weight: 100m # 10%
Services for Traffic Split
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: myapp
spec:
  ports:
  - port: 80
  selector:
    app: web-app
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-stable
  namespace: myapp
spec:
  ports:
  - port: 80
  selector:
    app: web-app
    version: stable
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-canary
  namespace: myapp
spec:
  ports:
  - port: 80
  selector:
    app: web-app
    version: canary
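To progress the canary, the backend weights can be updated in place; a minimal sketch with kubectl patch (the 50/50 split is illustrative):
kubectl patch trafficsplit web-app-canary -n myapp --type merge \
  -p '{"spec":{"backends":[{"service":"web-app-stable","weight":"500m"},{"service":"web-app-canary","weight":"500m"}]}}'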
Service Profiles for Advanced Routing
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web-app.myapp.svc.cluster.local
  namespace: myapp
spec:
  routes:
  - name: GET /api/users
    condition:
      method: GET
      pathRegex: /api/users
    responseClasses:
    - condition:
        status:
          min: 500
          max: 599
      isFailure: true
    # Retry configuration
    isRetryable: true
  - name: POST /api/orders
    condition:
      method: POST
      pathRegex: /api/orders
    # Timeout configuration
    timeout: 30s
    responseClasses:
    - condition:
        status:
          min: 500
          max: 599
      isFailure: true
  - name: GET /health
    condition:
      method: GET
      pathRegex: /health
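Route lists like the one above don't have to be written by hand; the CLI can scaffold a ServiceProfile from an OpenAPI spec or from live traffic (the swagger.json file name is an example):
# Generate from an OpenAPI/Swagger definition
linkerd profile --open-api swagger.json web-app -n myapp | kubectl apply -f -
# Or generate from traffic observed via tap
linkerd viz profile web-app -n myapp --tap deploy/web-app --tap-duration 10s | kubectl apply -f -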
Retries and Timeouts
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: backend.myapp.svc.cluster.local
  namespace: myapp
spec:
  # Default retry budget: 20% additional requests
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
  routes:
  - name: GET /api
    condition:
      method: GET
      pathRegex: /api/.*
    timeout: 5s
    isRetryable: true
Circuit Breaking with Failure Accrual
A Server resource identifies the port and protocol that policies and per-route handling apply to:
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-app-server
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      app: web-app
  port: http
  proxyProtocol: HTTP/1
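The Server above only describes the workload's port; since Linkerd 2.13, circuit breaking itself is enabled by annotating the destination Service with failure-accrual settings (a sketch; thresholds are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: myapp
  annotations:
    # Eject an endpoint after consecutive failures
    balancer.linkerd.io/failure-accrual: consecutive
    balancer.linkerd.io/failure-accrual-consecutive-max-failures: "7"
    balancer.linkerd.io/failure-accrual-consecutive-min-penalty: 1s
    balancer.linkerd.io/failure-accrual-consecutive-max-penalty: 60s
spec:
  ports:
  - port: 80
  selector:
    app: web-app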
Authorization Policies
Allow Traffic from Specific Services
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: backend-server
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      app: backend
  port: 8080
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: frontend-to-backend
  namespace: myapp
spec:
  server:
    name: backend-server
  client:
    meshTLS:
      serviceAccounts:
      - name: frontend
        namespace: myapp
Deny All by Default
To reject any traffic that is not explicitly authorized, set the proxy's default inbound policy to deny on the namespace (or workload) rather than relying on an empty AuthorizationPolicy:
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/default-inbound-policy: deny
Observability Features
Golden Metrics via CLI
# View success rate, requests/sec, latency
linkerd viz stat deployment -n myapp
# Real-time traffic
linkerd viz tap deployment/web-app -n myapp
# Top traffic sources
linkerd viz top deployment/web-app -n myapp
# Route-level metrics
linkerd viz routes deployment/web-app -n myapp
Prometheus Integration
# ServiceMonitor for Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: linkerd-proxies
  namespace: monitoring
spec:
  selector:
    matchLabels:
      linkerd.io/control-plane-component: proxy
  namespaceSelector:
    any: true
  endpoints:
  - port: admin-http
    path: /metrics
    interval: 30s
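With proxy metrics scraped, the golden metrics can be queried directly; for example, a rough inbound success-rate query over the proxy's response_total counter (label names depend on your scrape configuration):
sum(rate(response_total{namespace="myapp", direction="inbound", classification="success"}[1m]))
/
sum(rate(response_total{namespace="myapp", direction="inbound"}[1m]))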
Grafana Dashboards
Linkerd provides pre-built Grafana dashboards. In Linkerd 2.11 and earlier, Grafana ships with the viz extension; from 2.12 onward, install Grafana separately and point the viz extension at it via the grafana.url setting.
# Install viz with bundled Grafana (Linkerd 2.11 and earlier)
linkerd viz install --set grafana.enabled=true | kubectl apply -f -
# Access Grafana
kubectl port-forward -n linkerd-viz svc/grafana 3000:3000
Multi-Cluster Communication
Link Clusters
# On target cluster: Create gateway
linkerd multicluster install | kubectl apply -f -
# Link clusters: run against the target cluster's context, apply the output to the source cluster
linkerd multicluster link --cluster-name target | kubectl apply -f -
# Verify link
linkerd multicluster check
linkerd multicluster gateways
Export Service to Other Clusters
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: myapp
  labels:
    mirror.linkerd.io/exported: "true"
spec:
  ports:
  - port: 8080
  selector:
    app: backend
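Once exported, the service is mirrored into the source cluster as <service>-<link-name>; with the link named target above, meshed workloads can call it like any local service (a sketch):
# The mirrored service appears in the source cluster
kubectl get svc -n myapp backend-target
# Call it via its cluster-local name from a meshed pod
curl http://backend-target.myapp.svc.cluster.local:8080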
Debugging with Linkerd
Debug Container
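The debug container is enabled with a pod-template annotation; Linkerd then injects a linkerd-debug sidecar containing tshark, tcpdump, and iproute2 (a sketch):
metadata:
  annotations:
    config.linkerd.io/enable-debug-sidecar: "true"
After the pods restart, exec into the debug container:
kubectl exec -it deploy/web-app -n myapp -c linkerd-debug -- tshark -i any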
# Tap live traffic between two deployments
linkerd viz tap deployment/web-app -n myapp --to deployment/backend
# Check proxy logs
kubectl logs deploy/web-app -n myapp -c linkerd-proxy
# Proxy diagnostics
linkerd diagnostics proxy-metrics -n myapp pod/web-app-xxx
Common Issues
# Check if injection is working
kubectl get pods -n myapp -o jsonpath='{.items[*].spec.containers[*].name}'
# Should list the app container (e.g., web) and linkerd-proxy
# Verify mTLS
linkerd viz edges deployment -n myapp
# Check for skipped ports
kubectl get deploy web-app -n myapp -o yaml | grep skip-outbound
Production Configuration
High Availability Control Plane
linkerd install --ha | kubectl apply -f -
Or with Helm:
helm install linkerd-control-plane linkerd/linkerd-control-plane \
-n linkerd \
--set controllerReplicas=3 \
--set webhookFailurePolicy=Fail \
--set podDisruptionBudget.enabled=true
Resource Tuning
# Deployment pod-template annotations for proxy resources
metadata:
  annotations:
    config.linkerd.io/proxy-cpu-request: "200m"
    config.linkerd.io/proxy-cpu-limit: "1"
    config.linkerd.io/proxy-memory-request: "128Mi"
    config.linkerd.io/proxy-memory-limit: "256Mi"
    # Bypass the proxy for specific ports
    config.linkerd.io/skip-outbound-ports: "3306,6379"
    config.linkerd.io/skip-inbound-ports: "9090"
Verification Commands
# Overall health check
linkerd check
# Viz check
linkerd viz check
# View meshed pods
linkerd viz stat -n myapp deploy
# Check edges (connections)
linkerd viz edges deploy -n myapp
# Live traffic tap
linkerd viz tap deploy/web-app -n myapp
# Route statistics
linkerd viz routes deploy/web-app -n myapp --to svc/backend
Cleanup
# Remove viz extension
linkerd viz uninstall | kubectl delete -f -
# Remove control plane
linkerd uninstall | kubectl delete -f -
# Remove CRDs (if installed via Helm)
helm uninstall linkerd-crds -n linkerd
Linkerd vs Istio Comparison
| Feature | Linkerd | Istio |
|---|---|---|
| Resource usage | Very light (~10MB/proxy) | Heavier (~50MB/proxy) |
| Complexity | Simple | Complex, feature-rich |
| mTLS | Automatic | Configurable |
| Traffic management | Basic (SMI) | Advanced (VirtualService) |
| Learning curve | Low | High |
| Best for | Simplicity, performance | Advanced traffic control |
Summary
Linkerd provides a lightweight, secure service mesh with automatic mTLS, golden metrics, and traffic management. Its simplicity makes it ideal for teams wanting service mesh benefits without operational complexity.
Go Further with Kubernetes Recipes
Love this recipe? There's so much more! This is just one of 100+ hands-on recipes in our comprehensive Kubernetes Recipes book.
Inside the book, you'll master:
- Production-ready deployment strategies
- Advanced networking and security patterns
- Observability, monitoring, and troubleshooting
- Real-world best practices from industry experts
"The practical, recipe-based approach made complex Kubernetes concepts finally click for me."
Get Your Copy Now and start building production-grade Kubernetes skills today!
Get All 100+ Recipes in One Book
Stop searching: get every production-ready pattern with detailed explanations, best practices, and copy-paste YAML.
Want More Kubernetes Recipes?
This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.