How to Monitor MySQL with OpenTelemetry

Deepa Ramachandra

MySQL is the trusted open-source database management system behind many desktop, mobile, web, and cloud applications. Monitoring MySQL's performance is critical, but as applications expand across multi-cloud, cloud-native, and hybrid-cloud environments, monitoring grows in complexity. Continuous monitoring and scaling help applications take full advantage of MySQL's reliability, security, flexibility, availability, and performance scalability. Our primary goals for monitoring a MySQL database are to:

  • Identify areas of improvement in the user experience
  • Find possible security vulnerabilities
  • Create an effective resource management plan
  • Perform impact analysis easily for database-related changes
  • Optimize database performance

To help facilitate this, observIQ recently built and contributed a MySQL metric receiver to the OpenTelemetry Collector contrib repo. You can check it out here!

You can utilize this receiver in conjunction with any OTel collector, including the upstream OpenTelemetry Collector and observIQ's distribution of the collector.

Below are steps to get up and running quickly with observIQ's distribution, shipping MySQL metrics to a popular backend: Google Cloud Operations. You can find out more on observIQ's GitHub page.

What signals matter?

The metrics categories that matter the most to maintain the health and efficiency of the MySQL database are:

  • Connections and connection-related errors
  • Query throughput
  • Threads: running, created, and connected
  • Commands executed
  • InnoDB-related actions
  • Query cache
  • Table statuses

All of the metrics related to the categories above can be gathered with the MySQL receiver – so let’s get started!
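Under the hood, most of these categories map to MySQL's built-in status counters, which the receiver gathers by querying the server. If you want to eyeball the raw values yourself, you can run statements like the following against your instance (the LIKE patterns below are illustrative, not an exhaustive list of what the receiver collects):

```sql
-- Thread and connection counters
SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Connection%';

-- InnoDB buffer pool and row-operation counters
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_%';
SHOW GLOBAL STATUS LIKE 'Innodb_rows_%';

-- Per-command execution counts (SELECT, INSERT, ...)
SHOW GLOBAL STATUS LIKE 'Com_%';
```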

Step 1: Installing the collector

Use the following single-line installation script to install the observiq-otel-collector. Note: you can install the collector on the same host as the database instance or on a separate VM.

Windows:

msiexec /i "" /quiet

Linux:

sudo sh -c "$(curl -fsSlL"

Step 2: Setting up prerequisites and authentication credentials

Please note that, in the following example, we are using Google Cloud Operations as the destination. However, OpenTelemetry offers exporters for over a dozen destinations. Check out the list of exporters here.
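For example, if your backend speaks OTLP, only the exporter block and the pipeline reference change; a hedged sketch (the endpoint is a placeholder, and this fragment omits the receiver and processor sections shown later in this post):

```yaml
exporters:
  otlp:
    endpoint: my-otlp-backend.example.com:4317

service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [otlp]
```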

  • Setting up Google Cloud exporter prerequisites

If running outside of Google Cloud (on-prem, AWS, etc.) or without the Cloud Monitoring scope, the Google Cloud exporter requires a service account.

Create a service account with the following roles:

  • Metrics: roles/monitoring.metricWriter

Create a service account JSON key and place it on the system that is running the collector.


In this example, the key is placed at /opt/observiq-otel-collector/sa.json and its permissions are restricted to the user running the collector process.

sudo cp sa.json /opt/observiq-otel-collector/sa.json
sudo chown observiq-otel-collector: /opt/observiq-otel-collector/sa.json
sudo chmod 0400 /opt/observiq-otel-collector/sa.json
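If you want to confirm the resulting permissions before pointing the collector at the key, here is a minimal local sketch (using a throwaway file in the current directory and skipping the chown step, since the observiq-otel-collector user may not exist on a test machine):

```shell
# Create a placeholder key file and restrict it to owner read-only.
touch sa.json
chmod 0400 sa.json

# Verify the octal mode; should print 400.
stat -c '%a' sa.json
```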

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable by creating a systemd override. A systemd override allows users to modify the systemd service configuration without modifying the service directly. This allows package upgrades to happen seamlessly. You can learn more about systemd units and overrides here.

Run the following command:

sudo systemctl edit observiq-otel-collector

If this is the first time an override is being created, paste the following contents into the file:

[Service]
Environment=GOOGLE_APPLICATION_CREDENTIALS=/opt/observiq-otel-collector/sa.json

If an override is already in place, simply insert the Environment line into the existing [Service] section.

Restart the collector:

sudo systemctl restart observiq-otel-collector


On Windows, in this example, the key is placed at C:/observiq/collector/sa.json.

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable with the command prompt's setx command.

Run the following command:

setx GOOGLE_APPLICATION_CREDENTIALS "C:/observiq/collector/sa.json" /m

Restart the service using the Services application.

  • Setting up authentication environment variables

The configuration assumes the following environment variables are set:

  • MYSQL_USERNAME
  • MYSQL_PASSWORD
Set the variables by creating a systemd override.

Run the following command:

sudo systemctl edit observiq-otel-collector

If this is the first time an override is being created, the file will be empty. Paste the following contents into the file. If the [Service] section is already present, append the two Environment lines to it. Please note that the GOOGLE_APPLICATION_CREDENTIALS path from the previous step also needs to be in the override file. Here's an example with the credentials file variable included:

[Service]
Environment=GOOGLE_APPLICATION_CREDENTIALS=/opt/observiq-otel-collector/creds.json
Environment=MYSQL_USERNAME=otel
Environment=MYSQL_PASSWORD=otel

Replace otel with your MySQL username and password.


After restarting the collector, the configuration will attempt to use the username:password pair otel:otel (or the values you set):

sudo systemctl restart observiq-otel-collector

Step 3: Configuring the mysql receiver

After the installation, the config file for the collector can be found at:

  • C:\Program Files\observIQ OpenTelemetry Collector\config.yaml (Windows)
  • /opt/observiq-otel-collector/config.yaml (Linux)

Edit the configuration file and use the following configuration:

receivers:
  mysql:
    endpoint: localhost:3306
    username: $MYSQL_USERNAME
    password: $MYSQL_PASSWORD
    collection_interval: 60s

processors:
  # Resourcedetection is used to add a unique identifier
  # to the metric resource(s), allowing users to filter
  # between multiple agent systems.
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]

  # Used for Google generic_node mapping.
  resource:
    attributes:
    - key: namespace
      value: "mysql"
      action: upsert
    - key: location
      value: "global"
      action: upsert

  normalizesums:

  batch:

exporters:
  googlecloud:
    retry_on_failure:
      enabled: false
    metric:
      prefix:
    resource_mappings:
    - source_type: ""
      target_type: generic_node
      label_mappings:
      - source_key: host.name
        target_key: node_id
      - source_key: location
        target_key: location
      - source_key: namespace
        target_key: namespace

service:
  pipelines:
    metrics:
      receivers:
      - mysql
      processors:
      - resourcedetection
      - resource
      - normalizesums
      - batch
      exporters:
      - googlecloud

In the above example, we've set the configuration to:

  1. Receive metrics from the MySQL system at the specified endpoint, using the authentication credentials provided for the MySQL host.
  2. Set the time interval for fetching the metrics. The receiver's default for this parameter is 10s; however, when exporting metrics to Google Cloud Operations, this value is set to 60s.
  3. Use the resourcedetection processor to distinguish metrics received from multiple MySQL systems. This helps with filtering metrics from specific MySQL hosts in the monitoring tool, in this case Google Cloud Operations.
  4. In the googlecloud exporter, do the following mapping:
  • Set the target type to generic_node to simplify filtering metrics from the collector in Metrics Explorer.
  • Set node_id, location, and namespace for the metrics. Location and namespace are taken from the resource processor.
  5. Note that the project ID is not set in the googlecloud exporter configuration; Google automatically detects the project ID from the JSON credentials file.
  6. Add the normalizesums processor to smooth out data points for more comprehensive visualizations. To learn more about this processor, check the documentation.
  7. Add the batch processor to bundle the metrics from multiple receivers. We highly recommend using this processor in the configuration. To learn more about this processor, check the documentation.
  8. Set retry_on_failure to false. If this is not set, failed exports are retried up to five times, which can turn into a retry loop on persistent errors.
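The batch processor referenced above also exposes tuning knobs if the defaults don't fit your load; a hedged sketch using fields from the upstream processor's documentation:

```yaml
processors:
  batch:
    timeout: 10s           # flush a batch at least every 10 seconds
    send_batch_size: 8192  # or sooner, once this many items accumulate
```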

Step 4: Viewing the metrics collected in Google Cloud Operations

Listed below are the metrics fetched by this configuration of the MySQL receiver.

| Metric | Description |
| --- | --- |
| mysql.buffer_pool_pages | The number of pages in the InnoDB buffer pool |
| mysql.buffer_pool_operations | The number of operations on the InnoDB buffer pool |
| mysql.buffer_pool_size | The number of bytes in the InnoDB buffer pool |
| mysql.commands | The number of times each type of command has been executed |
| mysql.handlers | The number of requests to various MySQL handlers |
| mysql.double_writes | The number of writes to the InnoDB doublewrite buffer |
| mysql.log_operations | The number of InnoDB log operations |
| mysql.operations | The number of InnoDB operations |
| mysql.page_operations | The number of InnoDB page operations |
| mysql.row_locks | The number of InnoDB row locks |
| mysql.row_operations | The number of InnoDB row operations |
| mysql.locks | The number of MySQL locks |
| mysql.sorts | The number of MySQL sorts |
| mysql.threads | The state of MySQL threads |

To view the metrics, follow the steps outlined below:

  1. In the Google Cloud Console, head to Metrics Explorer.
  2. Select the resource type generic_node.
  3. Search for the metric name from the table above and apply filters to view the chart.


observIQ’s distribution is a game-changer for companies looking to implement the OpenTelemetry standards. The single-line installer and seamlessly integrated receivers, exporters, and processors make working with this collector simple. Follow this space to keep up with all our future posts and simplified configurations for various sources. For questions, requests, and suggestions, reach out to our support team.
