How to Set Up Centralized Logging with EFK Stack
Deploy Elasticsearch, Fluentd, and Kibana for centralized Kubernetes logging. Learn to collect, parse, and visualize container logs at scale.
The EFK stack (Elasticsearch, Fluentd, Kibana) provides centralized logging for Kubernetes clusters. Collect logs from all containers and visualize them in a unified dashboard.
Architecture Overview
flowchart TB
subgraph Cluster["Kubernetes Cluster"]
subgraph Pods["Application Pods"]
P1[Pod 1]
P2[Pod 2]
P3[Pod 3]
P4[...]
end
FD["Fluentd DaemonSet<br/>(every node)"]
ES[Elasticsearch]
KB[Kibana]
P1 --> FD
P2 --> FD
P3 --> FD
P4 --> FD
FD --> ES
ES --> KB
end
Create Logging Namespace
# logging-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: logging
labels:
name: logging
Deploy Elasticsearch
# elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
ports:
- port: 9200
name: http
- port: 9300
name: transport
selector:
app: elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: logging
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: fix-permissions
image: busybox:latest
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox:latest
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
resources:
limits:
memory: 2Gi
cpu: 1000m
requests:
memory: 1Gi
cpu: 500m
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
- name: xpack.security.enabled
value: "false"
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 50Gi
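With the manifests applied, the three Elasticsearch pods should discover each other and form a cluster. A minimal sketch for checking this from a workstation, assuming a kubectl port-forward svc/elasticsearch 9200:9200 -n logging session is running (the script name is illustrative):
# check-es-health.py - probe cluster health over a local port-forward
import json
import urllib.request

ES_URL = "http://localhost:9200"  # reachable via `kubectl port-forward svc/elasticsearch 9200:9200 -n logging`

# _cluster/health reports overall status plus node and shard counts
with urllib.request.urlopen(f"{ES_URL}/_cluster/health") as resp:
    health = json.load(resp)

print("status:", health["status"])           # expect green (yellow while replicas allocate)
print("nodes: ", health["number_of_nodes"])  # expect 3 for the StatefulSet above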
Deploy Fluentd DaemonSet
# fluentd-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
namespace: logging
data:
fluent.conf: |
# Collect container logs
<source>
@type tail
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
tag kubernetes.*
read_from_head true
<parse>
@type json
time_key time
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
# Enrich with Kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
@id filter_kube_metadata
kubernetes_url "https://#{ENV['KUBERNETES_SERVICE_HOST']}:#{ENV['KUBERNETES_SERVICE_PORT']}"
verify_ssl false
skip_labels false
skip_container_metadata false
skip_master_url true
</filter>
# Parse application JSON logs
<filter kubernetes.**>
@type parser
key_name log
reserve_data true
remove_key_name_field true
<parse>
@type json
</parse>
</filter>
# Output to Elasticsearch
<match kubernetes.**>
@type elasticsearch
@id out_es
host elasticsearch.logging.svc.cluster.local
port 9200
logstash_format true
logstash_prefix k8s-logs
include_tag_key true
type_name _doc
tag_key @log_name
<buffer>
@type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_interval 5s
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 8
overflow_action block
</buffer>
</match>
---
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: logging
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
memory: 200Mi
cpu: 100m
volumeMounts:
- name: varlog
mountPath: /var/log
- name: containers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config
mountPath: /fluentd/etc/fluent.conf
subPath: fluent.conf
volumes:
- name: varlog
hostPath:
path: /var/log
- name: containers
hostPath:
path: /var/lib/docker/containers
- name: config
configMap:
name: fluentd-config
Note that the tail source above assumes Docker-style JSON log files. On containerd or CRI-O nodes, the files under /var/log/containers use the CRI log format, so you would swap the JSON parser for a CRI-aware one (for example the cri parser from fluent-plugin-parser-cri), and the /var/lib/docker/containers mount is only needed when nodes run the Docker engine.
Fluentd RBAC
# fluentd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
rules:
- apiGroups: [""]
resources:
- pods
- namespaces
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fluentd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluentd
subjects:
- kind: ServiceAccount
name: fluentd
namespace: logging
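Once the ServiceAccount, ClusterRole, and binding are in place and the DaemonSet pods are running, Fluentd should begin writing to daily k8s-logs-* indices (the logstash_prefix configured earlier). One quick way to confirm documents are arriving, assuming the same local port-forward to Elasticsearch (the script name is illustrative):
# check-log-ingestion.py - confirm Fluentd is shipping documents
import json
import urllib.request

ES_URL = "http://localhost:9200"  # via `kubectl port-forward svc/elasticsearch 9200:9200 -n logging`

# The _count API totals documents across all indices matching the pattern
with urllib.request.urlopen(f"{ES_URL}/k8s-logs-*/_count") as resp:
    count = json.load(resp)["count"]

print(f"documents indexed so far: {count}")  # should grow steadily while pods log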
Deploy Kibana
# kibana.yaml
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
ports:
- port: 5601
targetPort: 5601
selector:
app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: logging
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:8.11.0
resources:
limits:
memory: 1Gi
cpu: 1000m
requests:
memory: 512Mi
cpu: 500m
ports:
- containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: "http://elasticsearch:9200"
- name: SERVER_PUBLICBASEURL
value: "http://kibana.example.com"Access Kibana
Access Kibana
# kibana-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
namespace: logging
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: kibana-auth
spec:
ingressClassName: nginx
rules:
- host: kibana.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana
port:
number: 5601
The auth-secret annotation refers to a kibana-auth secret that must already exist in the logging namespace; for the NGINX ingress controller this is a generic secret whose auth key holds an htpasswd-format file (for example, created with kubectl create secret generic kibana-auth --from-file=auth -n logging).
Structured Application Logging
# sample-app.yaml - Log in JSON format for parsing
apiVersion: apps/v1
kind: Deployment
metadata:
name: sample-app
spec:
template:
spec:
containers:
- name: app
image: myapp:latest
env:
- name: LOG_FORMAT
value: "json"
- name: LOG_LEVEL
value: "info"Sample JSON log output:
{"timestamp":"2024-01-15T10:30:00Z","level":"info","message":"Request processed","method":"GET","path":"/api/users","status":200,"duration_ms":45}Useful Kibana Queries (KQL)
Useful Kibana Queries (KQL)
# Find errors in specific namespace
kubernetes.namespace_name: "production" AND level: "error"
# Search by pod name
kubernetes.pod_name: myapp-* AND message: "timeout"
# Filter by container
kubernetes.container_name: "nginx" AND status: 500
# Time-based search
@timestamp >= "2024-01-15" AND level: ("error" OR "warn")Summary
Summary
The EFK stack provides comprehensive logging for Kubernetes. Fluentd collects logs from all nodes, Elasticsearch stores and indexes them, and Kibana provides visualization. Use structured JSON logging in applications for better searchability.
📘 Go Further with Kubernetes Recipes
Love this recipe? There’s so much more! This is just one of 100+ hands-on recipes in our comprehensive Kubernetes Recipes book.
Inside the book, you’ll master:
- ✅ Production-ready deployment strategies
- ✅ Advanced networking and security patterns
- ✅ Observability, monitoring, and troubleshooting
- ✅ Real-world best practices from industry experts
“The practical, recipe-based approach made complex Kubernetes concepts finally click for me.”
👉 Get Your Copy Now — Start building production-grade Kubernetes skills today!