If you’re running a fleet of containerized applications on Kubernetes, aggregating and analyzing your logs can be a bit daunting if you’re not equipped with the proper knowledge and tools. Thankfully, there’s plenty of useful documentation to help you get started; observIQ provides the tools you need to gather and analyze your application logs with ease. In the first part of this blog series Kubernetes Logging Simplified, I’ll highlight a few ‘need to know’ concepts so you can start digging into your application logs quickly.
The simplest logging method for containerized applications is writing to stdout and stderr. If you’re deploying an application, it’s best practice to enable logging to stdout and stderr or build this functionality into your custom application. Doing so will streamline your overall Kubernetes logging configuration and will help facilitate the implementation of a Cluster-Level Logging solution.
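As a quick illustration of the pattern, anything a container writes to stdout flows through the Kubernetes logging pipeline and is retrievable with kubectl logs. A minimal sketch (the busybox image and echo-test pod name are just placeholders):

```sh
# Run a throwaway pod whose only job is to write a line to stdout.
kubectl run echo-test --image=busybox --restart=Never -- sh -c 'echo "hello from stdout"'

# Read that line back through the kubelet/container engine pipeline.
kubectl logs echo-test

# Clean up.
kubectl delete pod echo-test
```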
Out of the box, Kubernetes and container engines do not provide a complete Cluster-Level Logging solution, so it’s important to implement a logging backend like ELK, Google Cloud Logging, or observIQ to ensure you can gather, store and analyze your application logs as the state and scale of your cluster changes.
For applications that log to stdout and stderr, the kubelet detects the streams and hands them off to the container engine, which writes them to a path on your node; the exact behavior is determined by the logging driver you’ve configured. For Docker and containerd, this path typically defaults to /var/log/containers. Using a Node Log Agent architecture is recommended to gather these logs, which I’ll touch on a bit more below.
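If you SSH to a node, you can see these files directly; entries under /var/log/containers are typically symlinks into /var/log/pods. A quick sketch (the filename below is a placeholder following the usual <pod>_<namespace>_<container>-<id>.log convention):

```sh
# On a node: list the per-container log files the runtime has written.
ls -l /var/log/containers/

# Inspect one directly (placeholder filename).
sudo tail /var/log/containers/my-pod_default_my-container-0123abc.log
```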
As application logs will ultimately be written to your nodes, it’s important to administer a Node log rotation solution, as filling Node storage could impact the overall health of your cluster. Depending on how you deploy your cluster, node log rotation may or may not be configured by default. For example, if you deploy using kube-up.sh, logrotate will be configured automatically. If you’re using Docker, you can set the max-size and max-file options for the json-file logging driver, as sketched below.
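Here’s one way to configure Docker’s json-file driver for rotation in /etc/docker/daemon.json; the 10m size and 5-file limits are illustrative values, not recommendations:

```sh
# Rotate container logs at 10 MB, keeping at most 5 files per container.
# (Overwrites any existing daemon.json; merge by hand if you already have one.)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF

# Restart Docker to pick up the change (applies to newly created containers).
sudo systemctl restart docker
```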
The Kubernetes docs outline logging architecture in a pretty clear and concise way. This particular blog focuses on application logs, but if you’re just getting started with Kubernetes, I’d encourage you to check out the following links to better understand container, system, and audit logging more deeply.
You can gather your application logs in a number of ways: manually via the command line, or by implementing one of the Cluster-level logging architectures described below.
Ahead of implementing a complete Cluster-level logging solution, it’s useful to familiarize yourself with some basic commands to manually access, stream and dump your application logs.
For a quick list of logging-related commands, check out the kubectl cheat sheet. For a complete list of kubectl commands, check out the docs here: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
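Here’s a quick sketch of the most common invocations for accessing, streaming, and dumping logs; my-pod, my-container, and app=my-app are placeholder names:

```sh
# Dump the logs for a single pod.
kubectl logs my-pod

# Dump logs for a specific container in a multi-container pod.
kubectl logs my-pod -c my-container

# Stream (follow) logs as they arrive.
kubectl logs -f my-pod

# Fetch logs from the previous instance of a crashed container.
kubectl logs my-pod --previous

# Dump logs from all pods matching a label selector.
kubectl logs -l app=my-app
```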
There are also a few handy open source tools for tailing logs across multiple pods and containers at once:
Stern – https://github.com/wercker/stern
Kubetail – https://github.com/johanhaleby/kubetail
Kail – https://github.com/boz/kail
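For instance, Stern matches pods by regular expression and tails them all together. A quick sketch (my-app and the production namespace are placeholders):

```sh
# Tail logs from every pod whose name matches "my-app".
stern my-app

# Limit to a namespace and include the last 15 minutes of logs.
stern my-app --namespace production --since 15m
```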
When you are ready to implement Cluster-level logging, there are a few primary architectures you should consider:
To best leverage Node-level logging, you can deploy a log agent like Fluentd, Logstash, or the observIQ log agent to the nodes in your cluster to read your application logs and ship them to your preferred backend. Typically, it’s recommended that the agent run as a DaemonSet, deploying one agent on each node in the cluster. At observIQ, we recommend deploying Node Log Agents as the simplest and most efficient method to gather your application logs.
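To make the pattern concrete, here’s a minimal, hypothetical DaemonSet sketch; the log-agent name and image are placeholders, not any particular vendor’s manifest. It schedules one agent pod per node and mounts the node’s /var/log directory read-only so the agent can read the container log files:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent              # hypothetical agent name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/log-agent:latest   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true       # the agent only needs to read node logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log       # where the container runtime writes pod logs
EOF
```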
If your application can’t log to stdout and stderr, you can use a Streaming Sidecar. The sidecar reads logs from the application container’s filesystem and streams them to its own stdout and stderr. As with Node log agents, this is another path to getting your application logs written to the Node.
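Here’s a minimal sketch of the pattern, along the lines of the example in the Kubernetes docs; the busybox images, paths, and names are illustrative. The app container writes to a file on a shared emptyDir volume, and the sidecar tails that file to its own stdout, where the container engine and kubelet pick it up like any other container log:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-streaming-sidecar
spec:
  containers:
  # Stand-in for an application that can only log to a file.
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  # The streaming sidecar: tail the file to its own stdout.
  - name: streaming-sidecar
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  volumes:
  - name: applog
    emptyDir: {}
EOF
```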
Alternatively, you can deploy a log agent itself as a sidecar, which reads the logs from your application container’s filesystem and sends them directly to your preferred backend, bypassing the Node’s log files entirely.
Now that we’ve stepped through the basic architectures, let’s walk through setting up Cluster-level logging with observIQ. With observIQ, you can easily implement the Node-level logging agent architecture, deploying the observIQ log agent as a DaemonSet and gathering logs from one, many, or all of your containerized applications in a few simple steps.
A Template in observIQ is an object that allows you to manage the same logging configuration across multiple agents, all from a single place in the UI. It also gives you the ability to both define and update logging configuration before and after you deploy observIQ agents, something I’ll be exploring a bit more in my next post in the series.
To create a Template, navigate to the Fleet > Templates page in observIQ, select ‘Add Template’, and then select ‘Kubernetes’ as the platform.
Specify a friendly name for your cluster (GKE US East 1 in this example) and choose ‘Enable Container Logs’. From here, you can target a specific pod or container, or keep the default option to gather logs from all pods and containers. In this case, I’m going to keep the defaults and gather all of the application logs from my cluster. Then click ‘Create’.
Once you have your Template created, click ‘Add Agents’.
On the Install Kubernetes Agents page, download the observiq-agent.yaml, copy it to your cluster, and apply it by running the kubectl apply -f observiq-agent.yaml command.
After a few minutes, observIQ agents will be running in your cluster. If you run kubectl get pods | grep observiq-agent, you’ll see an observIQ agent for each node in your cluster. If you return to your Template, you’ll also see each of these agents associated with it. Good to know: if you want to make configuration changes to your agents, you can now modify the agent configuration directly from the Template.
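A quick way to sanity-check the rollout (a sketch; the exact output will vary with your cluster):

```sh
# One observIQ agent pod should be scheduled per node.
kubectl get pods -o wide | grep observiq-agent

# Compare against the number of nodes in the cluster.
kubectl get nodes
```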
After a few minutes, you’ll start to see your application logs appear on the observIQ Explore page, with each message labeled with its log type.
Opening up one of the application logs, you can see the application message, along with useful labels and metadata that have been added automatically to help trace the message back to your specific application.
Gathering your application logs is critical to understanding and debugging application workloads. Knowing the manual commands is useful, but as your applications and cluster scale, it’s important to implement a Cluster-level logging solution that fits your environment and requirements.
For more information about observIQ and our other log management solutions, check out: https://observiq.com/solutions
In my next post, I’ll be diving into System and Cluster events and will step through how to easily ship and analyze your logs with observIQ.