In modern cloud-native architectures, centralized logging is a key part of monitoring and troubleshooting. The EFK stack (Elasticsearch, Fluentd, Kibana) is a popular choice for handling logs from Kubernetes workloads: Elasticsearch stores the logs, Fluentd collects them, and Kibana visualizes them.
In this guide, we will deploy the EFK stack on Google Kubernetes Engine (GKE) and test it with a simple CRUD operation in Elasticsearch.
1. Prerequisites
Before we begin, ensure you have:
- Google Cloud CLI installed and authenticated (gcloud auth login)
- kubectl installed and configured
- Helm installed for managing Kubernetes charts
- A Google Cloud project with billing enabled
2. Create a GKE Cluster
First, let’s create a Kubernetes cluster on GKE.
export ZONE=us-central1-c

gcloud container clusters create demo-cluster \
  --zone $ZONE \
  --release-channel rapid \
  --num-nodes 5 \
  --gateway-api=standard \
  --machine-type e2-standard-4
Once the cluster is ready, connect to it:
gcloud container clusters get-credentials demo-cluster --zone=$ZONE
3. Deploy Elasticsearch
Add the official Elastic Helm repository:
helm repo add elastic https://helm.elastic.co
Install Elasticsearch with a LoadBalancer service. Note that a single replica with persistence disabled is suitable for a demo only, not for production:
helm install elasticsearch elastic/elasticsearch \
  --set replicas=1 \
  --set persistence.enabled=false \
  --set service.type=LoadBalancer
Check the status:
kubectl get pods -w
4. Configure Fluentd
Fluentd will read container logs and send them to Elasticsearch.
Let's create a custom Fluentd configuration file (custom-fluentd.conf):
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type regexp
    expression /^(?<time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)\s+(?<level>\w+)\s+(?<message>.*)$/
  </parse>
</source>

<match **>
  @type stdout
</match>
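Before deploying, it's worth sanity-checking the parse regular expression against a sample line. A minimal sketch in Python (the sample log line is made up for illustration):

```python
import re

# The same regular expression used in the <parse> section of custom-fluentd.conf.
LOG_PATTERN = re.compile(
    r"^(?P<time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)\s+"
    r"(?P<level>\w+)\s+(?P<message>.*)$"
)

# A hypothetical container log line in the expected format.
sample = "2024-01-15T09:30:00.123Z INFO Application started successfully"

match = LOG_PATTERN.match(sample)
if match:
    print(match.group("time"))     # 2024-01-15T09:30:00.123Z
    print(match.group("level"))    # INFO
    print(match.group("message"))  # Application started successfully
```

Lines that don't match this shape (multi-line stack traces, for example) would fail to parse, so check the expression against your workloads' actual log format.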
Create a ConfigMap for Fluentd:
kubectl create configmap custom-fluentd-config --from-file=custom-fluentd.conf
Add the Bitnami Helm chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install Fluentd using the Bitnami Helm chart:
helm install fluentd bitnami/fluentd \
  --set elasticsearch.host=elasticsearch-master \
  --set replicas=1 \
  --set configMap=custom-fluentd-config
5. Deploy Kibana
Kibana will help us visualize the logs stored in Elasticsearch.
helm install kibana-new elastic/kibana \
  --set replicas=1 \
  --set service.type=LoadBalancer
Get the Kibana LoadBalancer IP:
kubectl get svc
6. Access Elasticsearch Credentials
Get the Elasticsearch username and password:
kubectl get secrets --namespace=default elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=default elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
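The secret values are stored base64-encoded, which is why each command pipes through base64 -d. The same decoding step in Python, using a made-up encoded value for illustration:

```python
import base64

# Hypothetical value as it would appear in the Secret's .data field.
encoded = "ZWxhc3RpYw=="

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # elastic
```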
7. Test CRUD in Elasticsearch
In Kibana, go to Menu → Management → Dev Tools and run:
POST my_index/_doc/1
{
  "name": "Oly Mahmud",
  "role": "Backend Engineer",
  "skills": ["Java", "Spring Boot", "Go", "DevOps"],
  "experience": 1
}
Output:
{
  "_index": "my_index",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}
Congratulations 🎉 — we just created a document in Elasticsearch!
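That covers the Create step. The remaining Read, Update, and Delete operations can be run from the same Dev Tools console using Elasticsearch's standard document APIs:

```
# Read the document back
GET my_index/_doc/1

# Update a single field (partial update)
POST my_index/_update/1
{
  "doc": {
    "experience": 2
  }
}

# Delete the document
DELETE my_index/_doc/1
```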
8. Summary
We successfully:
- Created a GKE cluster
- Installed Elasticsearch, Fluentd, and Kibana using Helm
- Configured Fluentd to read Kubernetes logs
- Connected Kibana to Elasticsearch
- Tested a simple data insertion in Elasticsearch