
Serverless Monitoring In The Cloud With The observIQ Distro for OpenTelemetry

Dylan Myers

Part 1: Google Cloud Run

In part 1 of this blog series on serverless monitoring, we will learn how to run the observIQ Distro for OpenTelemetry Collector, referred to as “oiq-otel-collector”, in Google Cloud Run. There are many reasons someone may want to run monitoring serverless. In our example, we will monitor MongoDB Atlas, a cloud-hosted version of MongoDB.

Environment Prerequisites:

  • MongoDB Atlas target
    • API access keys already created for this target
  • Access to Google Cloud Run
  • Access to Google Cloud Secret Manager
    • Secrets created for the config, public key, and private key
  • oiq-otel-collector configuration with the MongoDB Atlas receiver
  • Container image in the GCR repository


Setting Up Prerequisites

The first task on our agenda is to get our container image transferred from Docker Hub to the Google Container Registry (GCR). To do this, we need a system with Docker installed. Additionally, we need to have the project already created in Google Cloud; for this blog, we’ve created a temporary project called dm-cloudrun-blog. Now that we’re ready with Docker and our Google Cloud project, we can run the following commands to import the image into GCR:

docker pull observiq/observiq-otel-collector:1.4.0
docker tag observiq/observiq-otel-collector:1.4.0 gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0
docker push gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0
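Optionally, we can confirm the push succeeded by listing the image’s tags in GCR. This is a quick sanity check, assuming the dm-cloudrun-blog project from above:

```shell
# List the tags present for the image in our project's GCR repository;
# we expect to see 1.4.0 in the output
gcloud container images list-tags gcr.io/dm-cloudrun-blog/observiq-otel-collector
```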

Our second prerequisite task is to set up our secrets. For this piece of the puzzle, I go to the Google Cloud Secret Manager and create three secrets: mongo-atlas-priv-key, mongo-atlas-pub-key, and mongo-otel-config. The values of the two key secrets are the API keys set up on the MongoDB Atlas site, and the value of mongo-otel-config is the collector configuration we’ve written. This is the configuration we’re using today:

receivers:
  mongodbatlas:
    public_key: ${MONGODB_ATLAS_PUBLIC_KEY}
    private_key: ${MONGODB_ATLAS_PRIVATE_KEY}
    collection_interval: 60s
exporters:
  googlecloud:
    metric:
      resource_filters:
        - prefix: mongodb_atlas
service:
  pipelines:
    metrics:
      receivers:
      - mongodbatlas
      exporters:
      - googlecloud
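If you prefer the command line, the same three secrets can be created with gcloud. This is a sketch; the local file names holding the key values and the config are assumptions for illustration:

```shell
# Create the two API key secrets from local files containing the key values
gcloud secrets create mongo-atlas-pub-key --data-file=atlas-public-key.txt
gcloud secrets create mongo-atlas-priv-key --data-file=atlas-private-key.txt

# Create the collector config secret from the YAML configuration above
gcloud secrets create mongo-otel-config --data-file=config.yaml
```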

Creation of the Cloud Run Deployment

Now that we have the prerequisites out of the way, we can focus on creating our deployment. We click “Create Service” in the Google Cloud Console under Cloud Run.

On the next page, we will need to fill in several values. Under the initial display, fill in the following:

  • Container image URL: the image we pushed to GCR above
  • Service name: I’m using oiq-otel-mongo
  • CPU allocation: check “CPU is always allocated”
  • Container port: 8888 (the collector’s metrics port)
  • Autoscaling: minimum 1 instance, maximum 1 instance
  • Ingress: Allow internal traffic only
  • Authentication: Require authentication

Now, we need to expand the Container, Variables & Secrets, Connections, and Security section by clicking the dropdown arrow to the right of that heading. Once it grows, we can access the VARIABLES & SECRETS tab.

Click the REFERENCE A SECRET link. Using the Secret dropdown, select mongo-atlas-priv-key, and change the Reference Method to “Exposed as environment variable”. Finally, set the Name to MONGODB_ATLAS_PRIVATE_KEY. Repeat this process for mongo-atlas-pub-key, naming it MONGODB_ATLAS_PUBLIC_KEY.

One more time, we click the REFERENCE A SECRET link. This time, we set the secret to mongo-otel-config and the Reference Method to “Mounted as volume”. For the mount path, we enter etc/otel/config.yaml; the leading slash is a permanent part of the input textbox, so do not type it. This mounts the config file in the appropriate place inside the container’s file system.

We are now finished with container parameters. All other parameters can be left at the default, and we can click the blue CREATE button at the bottom of the page.
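The console steps above can also be scripted with gcloud run deploy. This is a sketch mirroring the settings we chose; the region is an assumption, and the secret versions are pinned to latest:

```shell
# Deploy the collector image to Cloud Run with the same settings chosen
# in the console: CPU always allocated, fixed at one instance, internal
# ingress, authentication required, and the three secrets wired in.
gcloud run deploy oiq-otel-mongo \
  --image gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0 \
  --port 8888 \
  --min-instances 1 --max-instances 1 \
  --no-cpu-throttling \
  --ingress internal \
  --no-allow-unauthenticated \
  --region us-central1 \
  --set-secrets "MONGODB_ATLAS_PRIVATE_KEY=mongo-atlas-priv-key:latest,MONGODB_ATLAS_PUBLIC_KEY=mongo-atlas-pub-key:latest,/etc/otel/config.yaml=mongo-otel-config:latest"
```

Note that --set-secrets handles both reference methods: an environment variable name on the left exposes the secret as a variable, while a file path on the left mounts it as a volume.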

Reviewing the Container

Now that our service is deployed, we can click on it in the list of Cloud Run services. Doing so brings us to a dashboard of metrics for the container, with other tabs for logs, revisions, and triggers. The metrics can tell us whether the container needs to be edited to allocate more CPU and/or memory. The logs tab displays the logs from inside the container, where we can see what the collector is doing and rectify any issues by editing the configuration file secret.
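The same logs are also available from the command line via Cloud Logging. A sketch, assuming the oiq-otel-mongo service name from earlier:

```shell
# Read the 20 most recent log entries emitted by the Cloud Run service,
# which include the collector's own startup and pipeline messages
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="oiq-otel-mongo"' \
  --limit 20
```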


At some point, most tech teams will need to monitor a serverless computing resource. Running an instance of a telemetry collector inside another serverless computing platform can often be an inexpensive and effective way to address this need.

In the next installment of this three-part series, we’ll repeat what we achieved with Google Cloud Run over in AWS Elastic Container Service. I look forward to seeing you there.
