
How to monitor MongoDB with OpenTelemetry

Deepa Ramachandra

MongoDB is a document-oriented, cross-platform database that stores its documents in BSON, a binary-encoded JSON format. MongoDB's replication capabilities and horizontal scalability through sharding make it highly available. An effective monitoring solution makes it easier to identify issues with MongoDB, such as resource exhaustion, execution slowdowns, and scaling bottlenecks.

observIQ recently built and contributed a MongoDB metric receiver to the OpenTelemetry contrib repo. You can check it out here!

You can use this receiver with any OpenTelemetry Collector distribution, including the upstream OpenTelemetry Collector and observIQ's distribution of the collector.

Below are steps to get up and running quickly with observIQ’s distribution, shipping MongoDB metrics to any popular backend. You can find out more about it on observIQ’s GitHub page.

You can find OTel config examples for MongoDB and other applications shipping to Google Cloud here.

Let’s get started!

What signals matter?

The most critical MongoDB-related metrics to monitor are:

  • The status of processes and memory utilization: Monitoring MongoDB's server processes helps identify slowness in its activity or health. Unresponsive processes during command execution are an example of a scenario that needs further analysis. The mongodb.collection.count metric helps determine the stability, restart counts, and backup performance of the collections in that MongoDB instance, and the mongodb.data.size metric gives the storage space consumed by the data in your current MongoDB instance.


  • Operations and connections metrics: When an application has performance issues, it is necessary to rule out the database layer as the source of the problem. In this case, monitoring connection and operation patterns becomes critical. Metrics such as mongodb.cache.operations and mongodb.connection.count give insight into cache operations and connection counts. By monitoring operations over time, you can identify patterns and set thresholds and alerts against them.


  • Query optimization: For each query, the MongoDB query optimizer chooses and caches the most efficient query plan given the available indexes. Efficiency is evaluated by the number of "work units" (works) each candidate execution plan performs while the query planner evaluates it. Metrics such as mongodb.global_lock.time show trends in lock time that are relevant to query optimization.
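Conceptually, plan selection by work units can be sketched as follows. This is a simplified illustration, not MongoDB's actual planner code; the plan names and works counts below are made up:

```python
def choose_plan(candidate_plans):
    """Pick the candidate plan that completed a trial run in the fewest
    "work units" (works), mimicking how a cost-based planner ranks
    candidates. `candidate_plans` maps a plan name to the works count
    observed while trialing that plan.
    """
    return min(candidate_plans, key=candidate_plans.get)


# Hypothetical trial results for three candidate plans
candidates = {
    "IXSCAN {age: 1}": 35,    # selective index scan
    "IXSCAN {city: 1}": 210,  # less selective index
    "COLLSCAN": 5000,         # full collection scan
}
print(choose_plan(candidates))  # IXSCAN {age: 1}
```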

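The thresholding idea from the operations metrics above can be sketched as: convert cumulative operation counts into a per-second rate, then flag samples that exceed a threshold. This is a simplified illustration; the threshold and sample values are made up:

```python
def operation_rates(samples):
    """Convert (timestamp_seconds, cumulative_count) samples from a
    counter such as mongodb.operation.count into per-second rates."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) / (t1 - t0))
    return rates


def breaches(rates, threshold):
    """Return the indices of rate samples that exceed the alert threshold."""
    return [i for i, r in enumerate(rates) if r > threshold]


# Hypothetical 60-second scrapes of a cumulative operation counter
samples = [(0, 1000), (60, 1600), (120, 7600), (180, 8200)]
rates = operation_rates(samples)      # [10.0, 100.0, 10.0]
print(breaches(rates, threshold=50))  # [1] -- the middle interval spiked
```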

Before creating your configuration, you should have observIQ’s distribution of the OpenTelemetry Collector installed. For installation instructions and the collector's latest version, check our GitHub repo.

Configuring the MongoDB receiver

After the installation, the config file for the collector can be found at:

  • C:\Program Files\observIQ OpenTelemetry Collector\config.yaml (Windows)
  • /opt/observiq-otel-collector/config.yaml (Linux)

Let’s begin with the configuration for the receiver.

  • Here, we set the host as the endpoint: the IP address and port of the MongoDB system.
  • For all configurations that use Google Cloud Operations as the destination, the collection interval must be set to 60s.
  • TLS is disabled so that the metrics data can be transmitted to the third-party destination, in this case Google Cloud Operations, without TLS restrictions.
```yaml
receivers:
  mongodb:
    hosts:
      - endpoint:
    collection_interval: 60s
    # disable TLS
    tls:
      insecure: true
```
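If your MongoDB deployment requires TLS instead, the receiver supports the standard OpenTelemetry TLS client settings. A sketch, assuming the common ca_file/cert_file/key_file options and placeholder paths you would replace with your own:

```yaml
receivers:
  mongodb:
    hosts:
      - endpoint:
    collection_interval: 60s
    tls:
      insecure: false
      ca_file: /path/to/ca.crt    # placeholder: CA that signed the server cert
      cert_file: /path/to/client.crt
      key_file: /path/to/client.key
```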

Next up, the processors:

Please note that these processors are optional. You may choose to use any of the available processors documented here.

  • The resourcedetection processor will create a unique identifier for each MongoDB instance monitored using this configuration.
  • Use the normalizesums processor to normalize cumulative metrics against the first data point received, for better visualization.
  • Use the batch processor to collate the metrics from multiple receivers and send them to the exporter destination. We recommend using this processor with all receiver configurations when applicable.
```yaml
processors:
  # resourcedetection is used to add a unique identifier
  # to the metric resource(s), allowing users to filter
  # between multiple agent systems.
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]

  resourceattributetransposer:
    operations:
      - from:
        to: agent

  normalizesums:

  batch:
```
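As a rough illustration of what a sum-normalizing processor does (a sketch, not the processor's actual implementation): cumulative counters are rebased against the first observed value so that charts start at zero instead of an arbitrary offset.

```python
def normalize_sums(samples):
    """Rebase a cumulative counter series against its first observed value.

    `samples` is a list of raw cumulative values as scraped, e.g. total
    operations since the server started. Returns the series re-zeroed at
    the first sample.
    """
    if not samples:
        return []
    baseline = samples[0]
    return [value - baseline for value in samples]


raw = [10452, 10500, 10620, 10700]  # hypothetical cumulative counts
print(normalize_sums(raw))  # [0, 48, 168, 248]
```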

In this example, we show a sample configuration for exporting metrics to Google Cloud; you may instead export the metrics to any of the available destinations documented here.

```yaml
exporters:
  googlecloud:
    retry_on_failure:
      enabled: false
    metric:
      prefix:
```
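As an example of a different destination, the collector could instead expose a Prometheus scrape endpoint. A sketch, assuming the contrib prometheus exporter and a placeholder listen address:

```yaml
exporters:
  prometheus:
    endpoint: "0.0.0.0:9464"  # placeholder: address/port for Prometheus to scrape
```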

Finally, set up the pipeline.

```yaml
service:
  pipelines:
    metrics:
      receivers:
        - mongodb
      processors:
        - resourcedetection
        - resourceattributetransposer
        - normalizesums
        - batch
      exporters:
        - googlecloud
```
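Putting the pieces together, the full config.yaml would look roughly like this; the endpoint and prefix values are left empty as placeholders to fill in for your environment:

```yaml
receivers:
  mongodb:
    hosts:
      - endpoint:
    collection_interval: 60s
    tls:
      insecure: true

processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  resourceattributetransposer:
    operations:
      - from:
        to: agent
  normalizesums:
  batch:

exporters:
  googlecloud:
    retry_on_failure:
      enabled: false
    metric:
      prefix:

service:
  pipelines:
    metrics:
      receivers: [mongodb]
      processors: [resourcedetection, resourceattributetransposer, normalizesums, batch]
      exporters: [googlecloud]
```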

Viewing the metrics collected

The following metrics are fetched using the configuration above:

  • mongodb.cache.operations: The number of cache operations of the instance.
  • mongodb.collection.count: The number of collections.
  • mongodb.connection.count: The number of connections.
  • mongodb.data.size: The size of the collection. Data compression does not affect this value.
  • mongodb.extent.count: The number of extents.
  • mongodb.global_lock.time: The time the global lock has been held.
  • mongodb.index.count: The number of indexes.
  • mongodb.index.size: Sum of the space allocated to all indexes in the database, including free index space.
  • mongodb.memory.usage: The amount of memory used.
  • mongodb.object.count: The number of objects.
  • mongodb.operation.count: The number of operations executed.
  • mongodb.storage.size: The total amount of storage allocated to this collection.

To view the metrics, follow the steps outlined below:

  1. In the Google Cloud Console, head to Metrics Explorer.
  2. Select generic_node as the resource type.
  3. Filter on the metric names listed in the table above to view the chart.
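For example, assuming the googlecloud exporter's common workload.googleapis.com prefix for custom metrics (verify against the prefix you configured above), the connection-count metric would appear in Metrics Explorer under a path like:

```
workload.googleapis.com/mongodb.connection.count
```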


observIQ’s distribution is a game-changer for companies looking to adopt the OpenTelemetry standards. The single-line installer and the seamlessly integrated pool of receivers, processors, and exporters make working with this collector simple. Follow this space to keep up with all our future posts and simplified configurations for various sources. For questions, requests, and suggestions, contact our support team.
