A centralized logging system can be indispensable in evaluating the health of multiple services deployed in a Kubernetes cluster (including the cluster itself), and is useful when troubleshooting and optimising those services.
In this tutorial, we are going to set up a logging pipeline made up of 3 distinct components: Elasticsearch (log storage and search), Fluent Bit (log collection and forwarding) and Kibana (visualisation).
All the code in this tutorial can be found here
For this tutorial we will use a k8s cluster that runs locally on your computer. We will achieve this using minikube. If you haven't already, please go ahead and install it.
In order to follow along with the tutorial, aside from minikube's own resource requirements, you will also need at least 15GB of free RAM.
Once you have minikube set up, you are ready to get started :smile:
Start the cluster:
minikube start
Enable the csi-hostpath-driver addon, which we will later use to provision persistent volumes for Elasticsearch:
minikube addons enable csi-hostpath-driver
Optionally, launch the Kubernetes dashboard to keep an eye on the cluster:
minikube dashboard
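If you want to confirm that the addon is active and that its storage class (referenced later in this tutorial as csi-hostpath-sc) exists, a couple of optional sanity checks along these lines should work:
# check that the addon shows up as enabled
minikube addons list | grep csi-hostpath-driver
# check that the CSI hostpath storage class has been registered
kubectl get storageclass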
We will deploy everything into its own namespace. A namespace is a 'virtual cluster' abstraction in k8s; names of namespaced resources & objects need to be unique within a namespace, but not across namespaces. List the namespaces that already exist in the cluster:
apiyo@castle:kube-logging$ kubectl get namespaces
NAME STATUS AGE
cert-manager Active 35d
default Active 69d
ingress-nginx Active 66d
kube-node-lease Active 69d
kube-public Active 69d
kube-system Active 69d
kubernetes-dashboard Active 66d
monitoring Active 63d
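As a quick, optional illustration of that scoping rule (not needed for the rest of the tutorial), the same resource name can happily exist in two different namespaces. The name demo-scope below is just a placeholder:
# create a ConfigMap with the same name in two namespaces
kubectl -n default create configmap demo-scope --from-literal=owner=tutorial
kubectl -n kube-public create configmap demo-scope --from-literal=owner=tutorial
# both exist independently of each other
kubectl -n default get configmap demo-scope
kubectl -n kube-public get configmap demo-scope
# clean up
kubectl -n default delete configmap demo-scope
kubectl -n kube-public delete configmap demo-scope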
Create a namespace.yaml file with the following content:
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
Apply the manifest, then list the namespaces again to confirm that kube-logging was created:
kubectl apply -f namespace.yaml
kubectl get namespaces
NAME STATUS AGE
cert-manager Active 35d
default Active 69d
ingress-nginx Active 66d
kube-logging Active 5s
kube-node-lease Active 69d
kube-public Active 69d
kube-system Active 69d
kubernetes-dashboard Active 66d
monitoring Active 63d
Create a file called elasticsearch_statefulset.yaml and paste/type in the following YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  # serviceName points at the headless service we create in the next step,
  # giving the pods stable DNS names such as es-cluster-0.elasticsearch
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.2.0
        resources:
          limits:
            memory: 4096Mi
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: es-pv-home
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        # cap the JVM heap so the pods fit comfortably within their memory limit
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      # make the data directory writable by the elasticsearch user (uid/gid 1000)
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: es-pv-home
          mountPath: /usr/share/elasticsearch/data
      # raise vm.max_map_count to the minimum Elasticsearch requires
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      # raise the open file descriptor limit
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  # each replica gets its own PersistentVolumeClaim, provisioned via the
  # csi-hostpath-driver addon we enabled earlier
  volumeClaimTemplates:
  - metadata:
      name: es-pv-home
      labels:
        type: local
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: csi-hostpath-sc
      resources:
        requests:
          storage: 10Mi
You can check on the persistent volumes that get provisioned by running kubectl -n kube-logging get pv once the elasticsearch containers start. Create the StatefulSet:
apiyo@castle:kube-logging$ kubectl apply -f elasticsearch_statefulset.yaml
statefulset.apps/es-cluster created
You can watch the pods come up by running: watch kubectl -n kube-logging get pods
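As a quick optional check, you can also confirm that each replica got its own PersistentVolumeClaim bound through the csi-hostpath-sc storage class:
# there should be one claim per elasticsearch pod (es-pv-home-es-cluster-0, -1, -2)
kubectl -n kube-logging get pvc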
Create a file called elasticsearch_svc.yaml and paste/type in the following YAML:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  # a headless service: no cluster IP, DNS resolves straight to the pod IPs
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
apiyo@castle:kube-logging$ kubectl apply -f elasticsearch_svc.yaml
service/elasticsearch created
apiyo@castle:kube-logging$ kubectl -n kube-logging get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 11s
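If you are curious, you can verify that the headless service resolves to the individual elasticsearch pod IPs (which is what the discovery.seed_hosts entries above rely on) using a throwaway busybox pod, something along these lines:
# run an ad-hoc DNS lookup inside the kube-logging namespace
kubectl -n kube-logging run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup elasticsearch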
In order to be able to access the elasticsearch cluster from our host computer, forward the ES port as follows:
apiyo@castle:kube-logging$ kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
You can then check the state of the elasticsearch cluster by running: curl http://localhost:9200/_cluster/state?pretty
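For a shorter summary, the cluster health endpoint is also handy; with the port-forward above still running, you would expect to see "status" : "green" and "number_of_nodes" : 3 once all three replicas have joined:
curl http://localhost:9200/_cluster/health?pretty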
Next up is Kibana. Create a file called kibana.yaml and paste/type in the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.2.0
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 100m
        env:
        # point Kibana at the elasticsearch service we created earlier
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
apiyo@castle:kube-logging$ kubectl -n kube-logging apply -f kibana.yaml
service/kibana created
deployment.apps/kibana created
apiyo@castle:kube-logging$ kubectl -n kube-logging get pods
NAME READY STATUS RESTARTS AGE
es-cluster-0 1/1 Running 0 15m
es-cluster-1 1/1 Running 0 15m
es-cluster-2 1/1 Running 0 14m
kibana-7595dd5f5f-j87qw 1/1 Running 0 77s
You could also check its logs by running: kubectl -n kube-logging logs kibana-7595dd5f5f-j87qw
In order to be able to access kibana from our host computer, forward the kibana container port as follows:
apiyo@castle:kube-logging$ kubectl port-forward kibana-7595dd5f5f-j87qw 5601:5601 --namespace=kube-logging
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
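With that port-forward running, a quick way to confirm that Kibana is up and talking to Elasticsearch is its status API:
curl http://localhost:5601/api/status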
Finally, we will deploy Fluent Bit to collect the container logs and ship them to Elasticsearch. We will install it with the fluent-bit helm chart, overriding some of the chart's default values. Create a file called fluentbit.yaml and type/paste the following YAML:
---
image:
  repository: onaio/fluent-bit
  tag: "1.9.3-hardened"
config:
  ## https://docs.fluentbit.io/manual/pipeline/inputs
  ## tail every container log file on the node, plus basic CPU metrics
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
    [INPUT]
        Name cpu
        Tag cpu
  ## https://docs.fluentbit.io/manual/pipeline/filters
  # filters: |
  #   [FILTER]
  #       Name k8s
  #       Match *
  #       Tag k8s
  ## https://docs.fluentbit.io/manual/pipeline/outputs
  ## ship everything to the elasticsearch service, into daily fluent-bit-temp-* indices
  outputs: |
    [OUTPUT]
        Name es
        Match *
        Host elasticsearch
        Port 9200
        Generate_ID On
        Logstash_Format On
        Logstash_Prefix fluent-bit-temp
        Retry_Limit False
        Replace_Dots On
## how to deploy
# helm upgrade -n kube-logging -f fluentbit.yaml fluent-bit fluent/fluent-bit
Add the fluent helm repo:
helm repo add fluent https://fluent.github.io/helm-charts
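Depending on your local helm setup, you may also need to refresh the chart index after adding the repo:
helm repo update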
Install fluentbit as follows:
apiyo@castle:kube-logging$ helm upgrade --install -n kube-logging -f fluentbit.yaml fluent-bit fluent/fluent-bit
Release "fluent-bit" does not exist. Installing it now.
NAME: fluent-bit
LAST DEPLOYED: Thu Oct 13 22:10:28 2022
NAMESPACE: kube-logging
STATUS: deployed
REVISION: 1
NOTES:
Get Fluent Bit build information by running these commands:
export POD_NAME=$(kubectl get pods --namespace kube-logging -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace kube-logging port-forward $POD_NAME 2020:2020
curl http://127.0.0.1:2020
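You can also confirm that logs are actually landing in Elasticsearch. Because the output above uses Logstash_Format with the fluent-bit-temp prefix, Fluent Bit writes into daily indices named fluent-bit-temp-YYYY.MM.DD; with the Elasticsearch port-forward from earlier still running, listing them should show documents accumulating:
curl "http://localhost:9200/_cat/indices/fluent-bit-temp-*?v"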
Open Kibana in your browser at http://localhost:5601/ (the kibana port-forward from earlier needs to still be running).
Now, hit Discover in the left hand navigation menu.
You should see a histogram graph and some recent log entries.
Once you are done, you can clean everything up by deleting the kube-logging namespace:
apiyo@castle:kube-logging$ kubectl delete namespace kube-logging
namespace "kube-logging" deleted