Deliver Kubernetes application logs to ELK with filebeat

One of the problems you may face while running applications in a Kubernetes cluster is gaining visibility into what is going on. Running kubectl logs is fine while you have a few nodes, but as the cluster grows you need to be able to view and query your logs from a centralized location.

If you have an Elastic Stack in place, you can run a logging agent (filebeat, for instance) as a DaemonSet and securely deliver your application logs from the Kubernetes cluster to Logstash. Logs that applications write to stdout/stderr are picked up by the Docker engine and saved under /var/lib/docker/containers in JSON format. Filebeat can be configured to read from these files and deliver the messages to your stack.

A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them, and as nodes are removed from the cluster, those pods are garbage collected.

In this example, filebeat will run as a DaemonSet, with one pod on every node in the cluster, delivering the application logs to Logstash.

Let’s assume Logstash is expecting connections from filebeat on port 5514:
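A minimal beats input for this might look like the sketch below; the certificate paths under /etc/logstash/ssl are assumptions, so adjust them to wherever your Logstash server keeps its certificates:

```
input {
  beats {
    port => 5514
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ssl/cacert.crt"]
    ssl_certificate => "/etc/logstash/ssl/server.crt"
    ssl_key => "/etc/logstash/ssl/server.key"
    # Require clients to present a certificate signed by our CA
    ssl_verify_mode => "force_peer"
  }
}
```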

Next, we add a filter to Logstash to parse the messages delivered by filebeat:
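The exact filter depends on your pipeline; below is a minimal sketch, assuming filebeat marks its events with document_type kube-logs (as in the filebeat.yml we will write shortly). It decodes the Docker JSON line carried in the message field and promotes the Docker-recorded timestamp to the event timestamp:

```
filter {
  if [type] == "kube-logs" {
    # Each Docker log line is a JSON object: {"log": "...", "stream": "...", "time": "..."}
    json {
      source => "message"
    }
    # Use the timestamp recorded by the Docker engine as the event timestamp
    date {
      match => ["time", "ISO8601"]
      remove_field => ["time"]
    }
  }
}
```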

Now that Logstash is ready for filebeat, let's create a Secret object to store the SSL CA, client certificate, and private key that filebeat will use to secure the connection to Logstash.
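A sketch of the Secret is below, named filebeatssl to match the volume definition used later in the DaemonSet. The data values are placeholders; generate them with, for example, base64 -w0 cacert.crt:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: filebeatssl
  namespace: kube-system
type: Opaque
data:
  cacert: <base64-encoded contents of cacert.crt>
  cert: <base64-encoded contents of client.crt>
  key: <base64-encoded contents of client.key>
```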

Note: The Base64-encoded contents of cacert.crt, client.crt, and client.key are stored as cacert, cert, and key. Once the filebeat pods running on each node have mounted this Secret object as a directory, cacert, cert, and key will be accessible as regular files inside it.

The next step is creating a Docker image to run filebeat. Although there are a number of public filebeat images that could be used for the DaemonSet, you may want to control the filebeat configuration that is built into the image.

Save the configuration below as filebeat.yml.
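Here is a minimal sketch, written for the filebeat 5.x configuration layout; the Logstash hostname is an assumption, so point it at your own endpoint:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - "/var/log/containers/*.log"
    # Tag events so the Logstash filter shown earlier can match on [type]
    document_type: kube-logs

output.logstash:
  # Hostname is a placeholder; use your Logstash endpoint
  hosts: ["logstash.example.com:5514"]
  # These paths map to the Secret that will be mounted at /etc/ssl
  ssl.certificate_authorities: ["/etc/ssl/cacert"]
  ssl.certificate: "/etc/ssl/cert"
  ssl.key: "/etc/ssl/key"
```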

Note: The ssl options seen above point to the directory /etc/ssl, where the certificate secrets we stored earlier will be available once we create the filebeat DaemonSet.

Filebeat will process log files ending with .log in the /var/log/containers directory. This directory will be made available to the filebeat pods when we create the DaemonSet.

Next, the Docker image. Save the following as Dockerfile:
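A sketch of the build is below. The base image and filebeat version are assumptions; pick whichever version matches your Elastic Stack:

```dockerfile
FROM debian:jessie

# Install filebeat from Elastic's download site; the version here is an assumption
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -L -o /tmp/filebeat.deb \
      https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.4-amd64.deb && \
    dpkg -i /tmp/filebeat.deb && \
    rm /tmp/filebeat.deb && \
    apt-get clean

# Bake our configuration into the image
COPY filebeat.yml /etc/filebeat/filebeat.yml

# Run in the foreground and log to stderr so the container keeps running
CMD ["/usr/share/filebeat/bin/filebeat", "-e", "-c", "/etc/filebeat/filebeat.yml"]
```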

To build the image and push it to your registry, save the following as Makefile and run make.
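A minimal Makefile for this; the registry, image name, and tag are placeholders for your own:

```makefile
# Registry, image name, and tag are placeholders; use your own
IMAGE := registry.example.com/filebeat
TAG   := 5.6.4

.PHONY: all build push

all: build push

build:
	docker build -t $(IMAGE):$(TAG) .

push:
	docker push $(IMAGE):$(TAG)
```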

Now on to the DaemonSet. Using hostPath volumes, the /var/log and /var/lib/docker/containers directories that reside on each node are made available to the filebeat pods. Why two directories? The JSON-formatted log files in /var/log/containers are symlinks into /var/log/pods, which in turn are symlinks into /var/lib/docker/containers. Filebeat pods must have access to both directories in order to follow the symlinks and read the log files.

Save the file below as filebeat-daemonset.yml:
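A sketch of the manifest, using the filebeatssl Secret created earlier; the image name is a placeholder for the one you pushed above:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          # Placeholder; use the image you built and pushed with the Makefile
          image: registry.example.com/filebeat:5.6.4
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: filebeatssl
              mountPath: /etc/ssl
              readOnly: true
      volumes:
        # Node directories needed to follow the log file symlinks
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        # The Secret holding cacert, cert, and key
        - name: filebeatssl
          secret:
            secretName: filebeatssl
```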

Note: The SSL secrets we stored earlier will be made available under /etc/ssl per the filebeatssl volume definitions seen above.

Run kubectl create -f filebeat-daemonset.yml to create the DaemonSet. You can verify that the pods are in the Running state, one pod per node:

```
$ kubectl get pods --all-namespaces --selector=app=filebeat
NAMESPACE     NAME             READY     STATUS    RESTARTS   AGE
kube-system   filebeat-17g9g   1/1       Running   0          43s
kube-system   filebeat-4ckh7   1/1       Running   0          43s
kube-system   filebeat-7d0jr   1/1       Running   0          43s
kube-system   filebeat-8vll4   1/1       Running   0          43s
kube-system   filebeat-cxcfm   1/1       Running   0          43s
kube-system   filebeat-gvgh2   1/1       Running   0          43s
kube-system   filebeat-lc5sm   1/1       Running   0          43s
```
