
Kubernetes Logging Simplified – Pt 1: Applications

by Joe Howell on March 4, 2021

Overview

If you’re running a fleet of containerized applications on Kubernetes, aggregating and analyzing your logs can be a bit daunting if you’re not equipped with the proper knowledge and tools. Thankfully, there’s plenty of useful documentation to help you get started, and observIQ provides the tools you need to gather and analyze your application logs with ease. In the first part of this blog series, Kubernetes Logging Simplified, I’ll highlight a few ‘need to know’ concepts so you can start digging into your application logs quickly.

Kubernetes Logging Architecture – A Few Things You Need to Know

Standard Output and Error streams

The simplest logging method for containerized applications is writing to stdout and stderr. If you’re deploying an application, it’s best practice to enable logging to stdout and stderr or build this functionality into your custom application. Doing so will streamline your overall Kubernetes logging configuration and will help facilitate the implementation of a Cluster-Level Logging solution.
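For instance, here is a minimal pod spec, adapted from the ‘counter’ example in the Kubernetes documentation, whose container writes a line to stdout every second (the names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Write a numbered line to stdout once per second
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Because the container writes to stdout, the container engine captures the output and you can read it immediately with kubectl logs counter.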

Cluster-Level Logging

Out of the box, Kubernetes and container engines do not provide a complete Cluster-Level Logging solution, so it’s important to implement a logging backend like ELK, Google Cloud Logging, or observIQ to ensure you can gather, store and analyze your application logs as the state and scale of your cluster changes.

Node-Level Logging

For applications that log to stdout and stderr, the kubelet hands these streams off to the container engine, which writes them to a path on your node; the exact behavior is determined by the logging driver you’ve configured. For Docker and containerd, this path typically defaults to /var/log/containers. Using a Node Log Agent architecture is recommended to gather these logs, which I’ll touch on a bit more below.
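As a quick sanity check, you can inspect that directory directly on a node (not through kubectl); the file names encode the pod, namespace, and container they belong to:

# Run on the node itself
ls /var/log/containers/
# Each entry is named <pod-name>_<namespace>_<container-name>-<container-id>.log
# and is typically a symlink into /var/log/pods or the container engine's own log directory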

Node-Level Log Rotation

As application logs are ultimately written to your nodes, it’s important to put a node log rotation solution in place, since filling node storage can impact the overall health of your cluster. Depending on how you deploy your cluster, node log rotation may or may not be configured by default. For example, if you deploy using kube-up.sh, logrotate will be configured automatically. If you’re using Docker, you can set its max-size and max-file log options.
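Those options live under the log-opts key in /etc/docker/daemon.json on each node. A minimal sketch, with illustrative values that cap each log file at roughly 10 MB and keep three rotated files per container:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Note that the Docker daemon needs a restart to pick up the change, and the settings only apply to containers created afterwards.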

Where can I find out more?

The Kubernetes docs outline logging architecture in a pretty clear and concise way. This particular blog focuses on application logs, but if you’re just getting started with Kubernetes, I’d encourage you to check out the following links to understand container, system, and audit logging more deeply.

https://kubernetes.io/docs/concepts/cluster-administration/logging/

https://kubernetes.io/docs/concepts/cluster-administration/system-logs/

https://kubernetes.io/docs/tasks/debug-application-cluster/audit/

How Do I Get Application Logs From Kubernetes?   

You can gather your application logs in a number of ways: manually via the command line, or by implementing one of the Cluster-level logging architectures described below.

Manual Commands

Ahead of implementing a complete Cluster-level logging solution, it’s useful to familiarize yourself with some basic commands to manually access, stream and dump your application logs. 
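A few common patterns, with placeholder pod, container, and label names:

kubectl logs my-pod                           # dump the logs of a single-container pod
kubectl logs my-pod -c my-container           # pick a specific container in a multi-container pod
kubectl logs -f my-pod                        # stream (follow) the logs in real time
kubectl logs my-pod --previous                # logs from the previous, crashed instance of a container
kubectl logs -l app=my-app --all-containers   # logs from all containers in pods matching a label
kubectl logs deploy/my-deployment             # logs from a pod belonging to a Deployment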

Cheat Sheet

For a quick list, check out the kubectl cheat sheet here:

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Complete list

For a complete list of kubectl commands, check out the docs here:  https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs

Custom utilities worth checking out

Stern: https://github.com/wercker/stern

Kubetail: https://github.com/johanhaleby/kubetail

Kail: https://github.com/boz/kail

Cluster-Level Logging Architecture

When you are ready to implement Cluster-level logging, there are a few primary architectures you should consider:

Node Log Agents (recommended)

To best leverage Node-level logging, you can deploy a log agent like Fluentd, Logstash, or the observIQ log agent to the nodes in your cluster to read your application logs and ship them to your preferred backend. Typically, the agent is run as a DaemonSet, which deploys one agent pod on each node in the cluster. At observIQ, we recommend deploying Node Log Agents as the simplest and most efficient method to gather your application logs.
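Whatever agent you choose, the shape of the deployment is the same: a DaemonSet whose pods mount the node’s log directories read-only and ship what they read. The manifest below is a generic sketch of that pattern (the image and names are placeholders, not the actual observIQ manifest):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: example/log-agent:latest      # placeholder: your node log agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true                     # the agent only needs to read node logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                     # where the node-level container logs live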

Stream Sidecar

If your application can’t log to stdout and stderr, you can use a Stream Sidecar. The Stream Sidecar reads logs from the application container’s filesystem and re-emits them on its own stdout and stderr streams. Similar to the Node log agent approach, this is another way to get application logs written to the node.
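In practice the pattern looks like the sketch below, adapted from the streaming-sidecar example in the Kubernetes documentation: the application writes to a file on a shared emptyDir volume, and a lightweight sidecar tails that file to its own stdout (the application image and log path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-stream-sidecar
spec:
  containers:
  - name: app
    image: example/my-app:latest             # placeholder: writes its logs to /var/log/app/app.log
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  - name: log-streamer
    image: busybox
    # Re-emit the application's log file to this container's stdout
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  volumes:
  - name: applog
    emptyDir: {}

From there, the sidecar’s stdout is handled like any other container log, so a node log agent can pick it up from /var/log/containers.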

Agent Sidecar

If your application can’t log to stdout and stderr, you can instead deploy a log agent as a sidecar, which reads the logs from your application container’s filesystem and sends them directly to your preferred backend.
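Structurally this looks like the streaming sidecar above, except the second container is the agent itself, usually configured through a ConfigMap, and nothing needs to reach stdout at all. A compact sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent-sidecar
spec:
  containers:
  - name: app
    image: example/my-app:latest             # placeholder: writes its logs to /var/log/app/app.log
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  - name: log-agent
    image: example/log-agent:latest          # placeholder: a log agent image (Fluentd, Stanza, etc.)
    volumeMounts:
    - name: applog
      mountPath: /var/log/app                # the agent reads the shared file and ships it to your backend
  volumes:
  - name: applog
    emptyDir: {}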

Deploying Kubernetes Cluster-level Logging with observIQ

Now that we’ve stepped through the basic architectures, let’s walk through setting up Cluster-level logging with observIQ. With observIQ, you can easily implement the Node log agent architecture, deploying the observIQ log agent as a DaemonSet and gathering logs from one, many, or all of your containerized applications in a few simple steps.

Create a Kubernetes Template

What is a Template?

A template in observIQ is an object that allows you to manage the same logging configuration across multiple agents, all from a single place in the UI. It also gives you the ability to define and update logging configuration both before and after you deploy observIQ agents, something I’ll be exploring a bit more in my next post in the series.

Add a new Template

To create a Template, navigate to the Fleet > Templates page in observIQ, select ‘Add Template’, and then choose ‘Kubernetes’ as the platform.

Specify a friendly name for your cluster, GKE US East 1 in this example, and choose ‘Enable Container Logs’. From here, you can target a specific pod or container, or leave the default option and gather logs from all pods and containers. In this case, I’m going to leave the default options and gather all of the application logs from my cluster. Then click ‘Create’.

Creating a Kubernetes Template

Deploy observIQ to your Kubernetes Cluster

Once you have your Template created, click ‘Add Agents’.

On the Install Kubernetes Agents page, download the observiq-agent.yaml manifest, copy it to a machine with access to your cluster, and apply it by running the kubectl apply -f observiq-agent.yaml command.

Install Kubernetes Agents page

After a few minutes, observIQ agents will be running in your cluster. If you run kubectl get pods | grep observiq-agent, you’ll see an observIQ agent for each node in your cluster. If you return to your Template, you’ll also see each of these agents associated with it. A good thing to know: if you want to make configuration changes to your agents, you can now modify the agent configuration directly from the Template.
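In shell form, the two commands referenced above are simply:

kubectl apply -f observiq-agent.yaml        # deploy the agents from the downloaded manifest
kubectl get pods | grep observiq-agent      # expect one observIQ agent pod per node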

Kubectl get pods | grep observiq-agent
Kubernetes agents associated with Template

View your Application Logs in observIQ

After a few minutes, you’ll start to see your application logs appear on the observIQ Explore page. The messages will be labeled with the type k8s.container.

Opening up one of the application logs, you can see application messages, as well as useful labels and metadata that have been automatically added to help trace the message to your specific application.

Application log in observIQ with Kubernetes Labels and Metadata

Wrapping up

Gathering your application logs is critical to understanding and debugging application workloads. Knowing the manual commands is useful, but as your applications and cluster scale, it’s important to implement a Cluster-level logging solution that fits your environment and requirements.

For more information about observIQ and our other log management solutions check out: https://observiq.com/solutions

In my next post, I’ll be diving into System and Cluster events and will step through how to easily ship and analyze your logs with observIQ.

observIQ’s Stanza Log Agent Now Part Of OpenTelemetry Project

by Mike Kelly on January 27, 2021

Today I’m happy to announce that observIQ’s Stanza Log Agent will become a key part of the OpenTelemetry project. This has been in the works for many months and the team at observIQ is thrilled to see it becoming a reality. We’re particularly pleased to see it happening just as we launch our log management platform which will be the first platform to take full advantage of the log agent technology now incorporated into OpenTelemetry.

Our mission since launching observIQ has been to deliver a simple to use, powerful, and performant log management experience. We found that the biggest source of complexity for the customers we worked with was the ingestion pipeline itself. We’ve changed that with observIQ Cloud and we started with the log agent.

We launched Stanza alongside observIQ. It was a critical component of the log management pipeline, and building it required starting from scratch to create an agent that was performant, highly configurable, and flexible. Stanza is a small-footprint, high-capability log shipping agent. It uses roughly 10% of the CPU and memory of other popular log agents. We launched it as an open source project and committed to keeping it that way. From the beginning we’ve felt strongly that there was an opportunity to take a big step forward in log agent capabilities, and we wanted everyone to benefit from it, whether using observIQ or another platform.

I’ve also strongly believed in the mission of OpenTelemetry since its start in 2019. Like much of the industry, we recognize the value that a standardized telemetry system, used by all of the industry’s best platforms, will bring to observability and monitoring in general. An end to vendor lock-in is key to unlocking innovation.

OpenTelemetry started with tracing and soon moved to add metrics. Together with the OpenTelemetry team, we saw the opportunity to accelerate the log component with the addition of the Stanza project and we quickly moved to make it a reality.

Today, Stanza is a component of OpenTelemetry. Over the next few months, we’ll be making improvements to more tightly couple Stanza with the OpenTelemetry collector.

You can find more information on Stanza and the OpenTelemetry project here. Or sign up for a free trial of observIQ and see Stanza in action as a native component.

Hello From observIQ!

by Mike Kelly on July 21, 2020

At observIQ our mission is to build the very best open source observability solutions for the DevOps and ITOps communities.

Welcome to observIQ!

observIQ is a small group of engineers based in Grand Rapids, Michigan with a long history of developing the very best metric and log observability tools in the world. We’re launching today with our first open source product, focused entirely on high performance log data acquisition. We’ve worked together with some of the largest organizations in the industry to solve the challenge of streaming terabytes of logs with the lowest possible resource footprint, while maintaining the ease of deployment and configurability you’d expect from a high-quality log agent. We’re excited to share this project with the community and can’t wait for you to try it out! Read on to hear what to expect from observIQ in the future.


We May Look Familiar

The observIQ team started as Blue Medora, and for over 10 years developed metric and log integrations for some of the leading organizations in observability. We developed integrations and agent technology for Google, New Relic, IBM, Oracle, and VMware along with many others. As Blue Medora, we developed an unmatched expertise in the field of IT data acquisition. This led to a particularly strong partnership with VMware. Blue Medora delivered metric monitoring integrations to VMware vRealize® Operations customers through the Blue Medora True Visibility Suite of products. Over time, the True Visibility Suite became a critical piece of the VMware vRealize Operations ecosystem. Due to the success of the relationship and a desire to provide a seamless process for our shared customers, VMware acquired the True Visibility Suite team and products from Blue Medora and will move forward with True Visibility Suite as a VMware solution. We couldn’t be happier to see this powerful collection of tools reach more VMware customers.

What’s Next for observIQ?

Over the past several years, apart from the VMware side of our business, we’ve launched a number of market-defining products, most recently BindPlane. BindPlane is a first-of-its-kind platform designed to bring hundreds of metric and log monitoring integrations to nearly any observability platform.

What we’ve found after years of laser focus on the monitoring space, and after talking to hundreds of customers, is that there’s a gap in log management capabilities. Logs are a struggle for most organizations. The growth in log data has been exponential and the tools to monitor logs haven’t kept up with that expansion. Some of the largest organizations in the world are struggling to solve the challenge of shipping terabytes of log data every day, or every hour. The bottleneck is often at the agent level. We’ve been working hand in hand with some of our largest customers to develop a new, high performance, highly configurable, open source log agent capable of meeting their needs. We’re pleased to announce that today it’s available for download.

We’re excited to share this with you and as a way of marking this transition, Blue Medora has now become observIQ. At observIQ we’re focused on solving the log data acquisition challenges through innovative open source products that are designed with performance and configurability first.

The open source log agent is the first product we’re releasing. Architected from the ground up for high performance, the agent is written in Go, optimized for low resource utilization, and designed to have even greater configurability and customizability than legacy agents. We’ve released an initial set of input and output plugins to support the most common workloads and will be releasing new plugins weekly. We developed our agent with open source standards in mind and have designed it for maximum compatibility with existing observability projects.

Coming Soon

Over the next few months, we’ll be launching observIQ Cloud which provides remote configuration, simplified deployment, and best-of-class visualization for a full log solution. Click here to join the beta.

At observIQ, we’re producing solutions by engineers, for engineers, and we love hearing from our customers. Connect with us, give our products a try, or, if you share our passion for observability, consider working here by visiting our careers page!

Check out our open source log agent today!
