Logging Docker Containers with Elasticsearch

Raju Dawadi
3 min read · Jun 21, 2021

Containers are ephemeral

Keeping logs of containers is challenging because of their temporary nature. Although we can map volumes or use other means to make the logs persistent, it is hard to keep track of logs over time, and harder still to get metrics out of them.
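For example, a bind mount can keep a container's log files on the host (the paths below are illustrative, not from the repo), but you still end up with flat files to search through by hand:

```yaml
services:
  app:
    image: alpine
    volumes:
      # Persist the container's log directory on the host
      # (hypothetical paths -- adjust to where your app writes logs)
      - ./logs:/var/log/app
```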

Elasticsearch is a Lucene-based search engine used to store, search, and analyze large amounts of structured and unstructured data. Combined with its open-source UI, Kibana, Elasticsearch is a good choice for storing logs of any size and visualizing metrics from them.

In this post, we use Fluent Bit, a lightweight log forwarder that also runs as a container and ships the Docker logs to an Elasticsearch endpoint.

We are using Docker Compose for ease of setup, and all the code and configuration used in this post is on this github repo.

version: "3.8"
services:
  app:
    image: alpine
    command: [/bin/echo, "Listen my log!"]
    depends_on:
      - fluentbit
    logging:
      driver: fluentd
      options:
        tag: 'app'

  fluentbit:
    image: fluent/fluent-bit:1.7
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    hostname: 'app'

Docker has a built-in mechanism for handling the output of running containers, called “logging drivers”. Many drivers are available out of the box: local, json-file, awslogs, splunk, fluentd, syslog, etc. We are using the fluentd driver, since Fluent Bit natively speaks the same forward protocol that this driver uses to ship logs.
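As an aside, if you want every container on a host to use the same driver rather than setting it per service, Docker also lets you configure a default in /etc/docker/daemon.json. A minimal sketch (the address and tag here mirror our compose setup; adjust them to yours):

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "app"
  }
}
```

A daemon restart is required for this to take effect, and per-container `logging:` settings still override it.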

We have a minimal fluent bit configuration:

[SERVICE]
    log_level debug

# The forward input plugin listens for log records sent over the forward protocol, which the fluentd logging driver uses
[INPUT]
    Name forward

# The record_modifier filter plugin allows appending fields to, or excluding fields from, each record
[FILTER]
    Name record_modifier
    Match *

# The es output plugin ships the received records to an Elasticsearch cluster
[OUTPUT]
    Name es
    Match **
    Host elasticsearch-cluster.us-east-1.es.amazonaws.com
    Port 443
    tls On
    Index ${HOSTNAME}

A few things need to be considered in the config, but there are four sections that need definition: SERVICE, INPUT, FILTER, and OUTPUT. Further options can be added as described in the official doc.

In the SERVICE section, we set log_level to debug, which includes error, warning, info, and debug messages. In the INPUT section, we use the forward plugin to receive logs from the fluentd logging driver. The FILTER matches and passes through all logs from the app container.
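As written, the filter passes records through unchanged. A sketch of how record_modifier could be put to work enriching or trimming records (the field name and value below are illustrative):

```ini
[FILTER]
    Name record_modifier
    Match *
    # Append a static field to every record (hypothetical key/value)
    Record environment production
    # Or drop a noisy field instead:
    # Remove_key container_id
```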

We are sending the logs via Fluent Bit to an Elasticsearch cluster, which can be self-hosted or managed. The OUTPUT section needs the host, port, index name, and so on. We use ${HOSTNAME} as the index name, and the hostname is set to app in the docker-compose.yml file.
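If your cluster is not open, the es output plugin also supports basic auth and, for Amazon's managed Elasticsearch, signed requests. A sketch with placeholder credentials (swap in your own, or use the AWS options instead):

```ini
[OUTPUT]
    Name es
    Match **
    Host elasticsearch-cluster.us-east-1.es.amazonaws.com
    Port 443
    tls On
    Index ${HOSTNAME}
    # Basic auth (placeholder credentials)
    HTTP_User fluentbit
    HTTP_Passwd changeme
    # Or, for AWS-managed Elasticsearch, sign requests instead:
    # AWS_Auth On
    # AWS_Region us-east-1
```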

Time to make it run

docker-compose up -d

Check the logs of the main container; in our case it exits right away because the simple alpine image only echoes a message, but yours should stay up. There will be one extra container running, fluent-bit, which sits alongside and forwards the logs of the adjacent container to the ES cluster. Its resource usage is negligible, so I see little downside to running an additional forwarder container.

If you browse the Elasticsearch cluster through Kibana or the API, there should be a new index created. Enjoy your persistent logging, and have fun exploring the Kibana user interface.
