MySQL is a trusted open-source database management system behind many desktop, mobile, web, and cloud applications. Monitoring MySQL’s performance is critical, but as applications spread across multi-cloud, cloud-native, and hybrid environments, monitoring grows in complexity. Continuous monitoring helps applications take full advantage of MySQL’s reliability, security, flexibility, availability, and scalability. When monitoring a MySQL database, our primary goals are to:
- Identify areas of improvement in the user experience
- Find possible security vulnerabilities
- Create an effective resource management plan
- Perform impact analysis easily for database-related changes
- Optimize database performance
To help facilitate this, observIQ recently built and contributed a MySQL metric receiver to the OpenTelemetry Collector contrib repository. You can check it out here!
You can use this receiver with any OpenTelemetry collector, including the upstream OpenTelemetry Collector and observIQ’s distribution of the collector.
Below are the steps to get up and running quickly with observIQ’s distribution, shipping MySQL metrics to a popular backend: Google Cloud Operations. You can find out more on observIQ’s GitHub page: https://github.com/observIQ/observiq-otel-collector
What signals matter?
The metric categories that matter most for maintaining the health and efficiency of a MySQL database are:
- Connections and connection-related errors
- Query throughput
- Threads: running, created, and connected
- Commands executed
- InnoDB-related actions
- Query cache
- Table statuses
All of the metrics related to the categories above can be gathered with the MySQL receiver – so let’s get started!
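If you’d like to spot-check these counters directly on the server before wiring up a collector, these categories largely correspond to MySQL’s standard status variables. A minimal sketch, run from a shell with the mysql client installed (adjust the user and the LIKE patterns to taste):

# Threads, InnoDB buffer pool, and command counters straight from the server.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads%';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Com_select';"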
Step 1: Installing the collector
Use the following single-line installation commands to install the observiq-otel-collector. Note: you can install the collector on the same host as the database instance or on a separate VM.
Windows:
msiexec /i "https://github.com/observIQ/observiq-otel-collector/releases/latest/download/observiq-otel-collector.msi" /quiet
MacOS/Linux:
sudo sh -c "$(curl -fsSlL https://github.com/observiq/observiq-otel-collector/releases/latest/download/install_unix.sh)" install_unix.sh
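On Linux, the installer registers the collector as a systemd service, so a quick way to confirm the installation succeeded is to check the service and its recent logs (a sanity check, not a required step):

sudo systemctl status observiq-otel-collector
sudo journalctl -u observiq-otel-collector --no-pager -n 50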
Step 2: Setting up prerequisites and authentication credentials
Please note that in the following example we are using Google Cloud Operations as the destination. However, OpenTelemetry offers exporters for over a dozen destinations. Check out the list of exporters here.
- Setting up Google Cloud exporter prerequisites
If running outside of Google Cloud (on-prem, AWS, etc.) or without the Cloud Monitoring scope, the Google Cloud exporter requires a service account.
Create a service account with the following roles:
- Metrics: roles/monitoring.metricWriter
Create a service account JSON key and place it on the system that is running the collector.
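If you prefer the gcloud CLI to the console, a sketch along these lines creates the service account, grants it the metric writer role, and downloads the key. The project ID and account name below are placeholders; substitute your own:

# Placeholder project and service-account names; replace with your own values.
gcloud iam service-accounts create otel-collector --display-name="observIQ OTel collector"
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:otel-collector@my-project-id.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
gcloud iam service-accounts keys create sa.json \
  --iam-account="otel-collector@my-project-id.iam.gserviceaccount.com"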
Linux
In this example, the key is placed at /opt/observiq-otel-collector/sa.json and its permissions are restricted to the user running the collector process.
sudo cp sa.json /opt/observiq-otel-collector/sa.json
sudo chown observiq-otel-collector: /opt/observiq-otel-collector/sa.json
sudo chmod 0400 /opt/observiq-otel-collector/sa.json
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable by creating a systemd override. A systemd override allows users to modify the systemd service configuration without modifying the service directly. This allows package upgrades to happen seamlessly. You can learn more about systemd units and overrides here.
Run the following command
sudo systemctl edit observiq-otel-collector
If this is the first time an override is being created, paste the following contents into the file:
[Service]
Environment=GOOGLE_APPLICATION_CREDENTIALS=/opt/observiq-otel-collector/sa.json
If an override is already in place, simply insert the Environment parameter into the existing Service section.
Restart the collector
sudo systemctl restart observiq-otel-collector
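To confirm the override was picked up, you can inspect the merged unit and the environment systemd passes to the service (optional, but handy when credentials don’t seem to take effect):

sudo systemctl cat observiq-otel-collector
sudo systemctl show observiq-otel-collector --property=Environment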
Windows
In this example, the key is placed at C:/observiq/collector/sa.json.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable with the setx command from an elevated command prompt (the /m flag writes a machine-wide variable).
Run the following command
setx GOOGLE_APPLICATION_CREDENTIALS "C:/observiq/collector/sa.json" /m
Restart the service using the services application.
- Setting up authentication environment variables
The configuration assumes the following environment variables are set:
- MYSQL_USERNAME
- MYSQL_PASSWORD
Set the variables by creating a systemd override.
Run the following command
sudo systemctl edit observiq-otel-collector
If this is the first time an override is being created, the file will be empty. Paste the following contents into the file, replacing otel with your MySQL username and password. If the [Service] section is already present, append the two MySQL Environment lines to it. Please note that the JSON key path from the previous step also needs to remain in the override file:
[Service]
Environment=GOOGLE_APPLICATION_CREDENTIALS=/opt/observiq-otel-collector/sa.json
Environment=MYSQL_USERNAME=otel
Environment=MYSQL_PASSWORD=otel
After restarting the collector, the configuration will use the username and password set in the override (otel:otel in this example).
sudo systemctl restart observiq-otel-collector
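The receiver only needs a MySQL account it can log in with. If you’d rather give it a dedicated monitoring user than reuse an existing one, a sketch like the following matches the otel/otel placeholders above; the exact grants required may vary with your MySQL version, so treat these privileges as a starting point:

# Create a dedicated monitoring user matching the otel/otel example above.
# SELECT and PROCESS are a reasonable starting point for reading status and InnoDB data.
mysql -u root -p <<'SQL'
CREATE USER IF NOT EXISTS 'otel'@'localhost' IDENTIFIED BY 'otel';
GRANT SELECT, PROCESS ON *.* TO 'otel'@'localhost';
SQL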
Step 3: Configuring the mysql receiver
After the installation, the config file for the collector can be found at:
- C:\Program Files\observIQ OpenTelemetry Collector\config.yaml (Windows)
- /opt/observiq-otel-collector/config.yaml (Linux)
Edit the configuration file and use the following configuration:
receivers:
  mysql:
    endpoint: localhost:3306
    username: $MYSQL_USERNAME
    password: $MYSQL_PASSWORD
    collection_interval: 60s
processors:
  # Resourcedetection is used to add a unique (host.name)
  # to the metric resource(s), allowing users to filter
  # between multiple agent systems.
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  # Used for Google generic_node mapping.
  resource:
    attributes:
      - key: namespace
        value: "mysql"
        action: upsert
      - key: location
        value: "global"
        action: upsert
  normalizesums:
  batch:
exporters:
  googlecloud:
    retry_on_failure:
      enabled: false
    metric:
      prefix: workload.googleapis.com
    resource_mappings:
      - source_type: ""
        target_type: generic_node
        label_mappings:
          - source_key: host.name
            target_key: node_id
          - source_key: location
            target_key: location
          - source_key: namespace
            target_key: namespace
service:
  pipelines:
    metrics:
      receivers:
        - mysql
      processors:
        - resourcedetection
        - resource
        - normalizesums
        - batch
      exporters:
        - googlecloud
In the above example, we’ve set the configuration to:
- Receive metrics from the MySQL instance at the specified endpoint, using the authentication credentials provided for the MySQL host.
- Set the collection interval for fetching metrics. The default value for this parameter is 10s; however, when exporting metrics to Google Cloud Operations, this value is automatically set to 60s.
- Use the resourcedetection processor to distinguish metrics received from multiple MySQL systems. This makes it easy to filter metrics from specific MySQL hosts in the monitoring tool, in this case Google Cloud Operations.
- Configure the following mappings in the googlecloud exporter:
  - Set the target type to generic_node to simplify filtering the collector’s metrics in Metrics Explorer.
  - Set node_id, location, and namespace for the metrics; location and namespace come from the resource processor.
  - Note that the project ID is not set in the googlecloud exporter configuration; Google automatically detects it from the JSON credentials file.
- Add the normalizesums processor to smooth out data points for cleaner visualizations. To learn more about this processor, check the documentation.
- Add the batch processor to bundle metrics from multiple receivers. We highly recommend using this processor in every configuration. To learn more about this processor, check the documentation.
- Set retry_on_failure to false, as recommended; otherwise, failed export attempts are retried in a loop up to five times.
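With the configuration saved, restart the collector and watch its logs for receiver or exporter errors before moving on to the backend (Linux shown; on Windows, restart the service from the services application):

sudo systemctl restart observiq-otel-collector
sudo journalctl -u observiq-otel-collector -f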
Step 4: Viewing the metrics collected in Google Cloud Operations
Listed below are the metrics fetched by this configuration of the MySQL receiver and the equivalent metric namespace used in Google Cloud Operations.
| Metric | Description | Metric Namespace |
|---|---|---|
| mysql.buffer_pool_pages | The number of pages in the InnoDB buffer pool. | custom.googleapis.com/opencensus/mysql.buffer_pool_pages |
| mysql.buffer_pool_operations | The number of operations on the InnoDB buffer pool. | custom.googleapis.com/opencensus/mysql.buffer_pool_operations |
| mysql.buffer_pool_size | The number of bytes in the InnoDB buffer pool. | custom.googleapis.com/opencensus/mysql.buffer_pool_size |
| mysql.commands | The number of times each type of command has been executed. | custom.googleapis.com/opencensus/mysql.commands |
| mysql.handlers | The number of requests to various MySQL handlers. | custom.googleapis.com/opencensus/mysql.handlers |
| mysql.double_writes | The number of writes to the InnoDB doublewrite buffer. | custom.googleapis.com/opencensus/mysql.double_writes |
| mysql.log_operations | The number of InnoDB log operations. | custom.googleapis.com/opencensus/mysql.log_operations |
| mysql.operations | The number of InnoDB operations. | custom.googleapis.com/opencensus/mysql.operations |
| mysql.page_operations | The number of InnoDB page operations. | custom.googleapis.com/opencensus/mysql.page_operations |
| mysql.row_locks | The number of InnoDB row locks. | custom.googleapis.com/opencensus/mysql.row_locks |
| mysql.row_operations | The number of InnoDB row operations. | custom.googleapis.com/opencensus/mysql.row_operations |
| mysql.locks | The number of MySQL locks. | custom.googleapis.com/opencensus/mysql.locks |
| mysql.sorts | The number of MySQL sorts. | custom.googleapis.com/opencensus/mysql.sorts |
| mysql.threads | The state of MySQL threads. | custom.googleapis.com/opencensus/mysql.threads |
To view the metrics, follow the steps outlined below:
- In the Google Cloud Console, head to Metrics Explorer.
- Select Generic Node as the resource type.
- Use the metric namespace from the table above to filter down to the metric you want to chart.
observIQ’s distribution is a game-changer for companies looking to adopt the OpenTelemetry standard. The single-line installer and the seamlessly integrated pool of receivers, processors, and exporters make working with this collector simple. Follow this space to keep up with all our future posts and simplified configurations for various sources. For questions, requests, and suggestions, reach out to our support team at support@observIQ.com.