
Filtering Metrics with the observIQ Distro for OpenTelemetry Collector

Dylan Myers

In this post, we will address a common monitoring use case: filtering metrics within observIQ’s distribution of the OpenTelemetry (OTEL) collector. Whether metrics are deemed unnecessary or are filtered out for security reasons, the process is fairly straightforward.

For our sample environment, we will use MySQL on Red Hat Enterprise Linux 8. The destination will be Google Cloud Operations, but the process is exporter-agnostic. We are using this exporter to provide the visual charts showing the metric before and after filtering.

Environment Prerequisites

  • Suitable operating system
  • observIQ Distro for OTEL Collector installed
  • MySQL installed
  • MySQL Least Privilege User (LPU) setup
  • OTEL configured to collect metrics from MySQL
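With these prerequisites in place, the collector’s config.yaml wires the MySQL receiver to the destination. Below is a minimal sketch of that wiring; the `googlecloud` exporter name and pipeline layout follow standard OTEL collector conventions, and your credentials and authentication setup may differ:

```yaml
receivers:
  mysql:
    endpoint: localhost:3306
    username: otel
    password: otelPassword
    collection_interval: 60s

exporters:
  # Assumes the collector runs with credentials able to write to
  # Google Cloud Operations (e.g. a service account on the host).
  googlecloud:

service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [googlecloud]
```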


Initial Metrics

Once the collector is configured with the LPU created earlier, MySQL metrics should be flowing. For our purposes, we will focus on one specific metric: `mysql.buffer_pool.limit`. Currently, the MySQL section of our config.yaml looks like this:

```yaml
mysql:
    endpoint: localhost:3306
    username: otel
    password: otelPassword
    collection_interval: 60s
```

After waiting for at least 5 minutes to get a good amount of data, metrics will look something like this in Google’s Metrics Explorer:

Filtering

Now that metrics are flowing, we can filter them. First, why filter this specific metric? The answer is simple: it isn’t particularly useful or important. Barring a configuration change by the DBA, it will be a flat line. Even after such a change, the line would simply step up or down.

To do the filtering, we first need to look at the metadata file for the MySQL receiver (metadata.yaml in the receiver’s source directory of the opentelemetry-collector-contrib repository). This file lists the attributes and metrics associated with the receiver. If we go to the metrics section of the file and find our pool limit metric, we see that it looks like this:

```yaml
mysql.buffer_pool.limit:
  enabled: true
  description: The configured size of the InnoDB buffer pool.
  unit: By
  sum:
    value_type: int
    input_type: string
    monotonic: false
    aggregation: cumulative
```

This tells us that the metric is enabled by default, gives its description, and provides some other important details about it. Since these are the defaults, we can infer that setting the `enabled` parameter to false should disable – that is, filter – the metric. It will not be collected, and since it isn’t collected, it also will not be sent to the exporter.

To achieve this in our configuration file, we make the following changes:

```yaml
mysql:
    endpoint: localhost:3306
    username: otel
    password: otelPassword
    collection_interval: 60s
    metrics:
      mysql.buffer_pool.limit:
        enabled: false
```

This replicates the structure from the metadata file, trimmed down to the bare minimum number of lines needed to achieve our goal.
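The same pattern extends to any metric the receiver exposes: each one listed in the metadata file can be toggled under the same `metrics:` key. A sketch with a second disabled metric might look like this (the second metric name is illustrative; check the receiver’s metadata.yaml for the exact names it supports):

```yaml
mysql:
    metrics:
      mysql.buffer_pool.limit:
        enabled: false
      # Illustrative second entry - substitute a real metric name
      # from the receiver's metadata.yaml.
      mysql.buffer_pool.usage:
        enabled: false
```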

Once this has been changed and the collector restarted, I again wait at least 5 minutes and check Google’s Metrics Explorer to see what has changed:

As shown in the screenshot, data was last sent to Google at 10:48, and it is now 11:13 – the metric is no longer being exported.
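Before restarting the collector, it can be worth sanity-checking that the override actually landed in the config file. Below is a quick sketch of such a check; it writes the receiver snippet from this post to a scratch file and greps it, but in practice you would point the `grep` at your real config.yaml (the default install path on a host like ours would be something under /opt/observiq-otel-collector, an assumption here):

```shell
# Write the receiver section from this post to a scratch file.
# In real use, skip this step and grep your actual config.yaml.
cat > /tmp/mysql-receiver.yaml <<'EOF'
mysql:
    endpoint: localhost:3306
    username: otel
    password: otelPassword
    collection_interval: 60s
    metrics:
      mysql.buffer_pool.limit:
        enabled: false
EOF

# Check that the metric is explicitly disabled: find the metric name,
# then look for "enabled: false" on the following line.
grep -A1 'mysql.buffer_pool.limit:' /tmp/mysql-receiver.yaml \
  | grep -q 'enabled: false' && echo "metric disabled"
```

If the check prints "metric disabled", restart the collector service and wait a few minutes before rechecking Metrics Explorer.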

Conclusion

While the information needed is spread across a few different places, filtering is very easy to do. If you cannot find the relevant documents, you can always reach out to observIQ support. Finally, don’t forget that the metadata file we looked at also provides other information that can be useful in understanding your data.
