The Observability Blog

  • Google Cloud
  • Metrics
  • OpenTelemetry

Serverless Monitoring In The Cloud With The observIQ Distro for OpenTelemetry

by Dylan Myers on
July 25, 2022

Part 1: Google Cloud Run

In part 1 of this blog series on serverless monitoring, we will learn how to run the observIQ Distro for OpenTelemetry collector, referred to as “oiq-otel-collector”, in Google Cloud Run. There are many reasons someone may want to run monitoring workloads serverlessly. In our example, we will be monitoring MongoDB Atlas, a cloud-hosted version of MongoDB.

Environment Prerequisites

  • MongoDB Atlas target
    • Already set up, with API access keys created
  • Access to Google Cloud Run
  • Access to Google Cloud Secret Manager
    • Secrets created for the config, public key, and private key
  • oiq-otel-collector configuration with the MongoDB Atlas receiver
  • Container image pushed to the Google Container Registry (gcr.io)


Setting Up Prerequisites

The first task on our agenda is to get our container image transferred from Docker Hub to the Google Container Registry (gcr.io). In order to do this, we need a system with Docker installed. Additionally, we need to have the project already created in Google Cloud. For the purpose of this blog, I’ve created a temporary project called dm-cloudrun-blog. Now that we’re ready with Docker and our Google Cloud project, we can run the following commands to import the image into gcr.io:

docker pull observiq/observiq-otel-collector:1.4.0
docker tag observiq/observiq-otel-collector:1.4.0 gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0
docker push gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0
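If this is the first push to gcr.io from this machine, Docker also needs to be able to authenticate to Google Cloud. Assuming the gcloud CLI is installed and logged in, a one-time setup takes care of this:

```shell
# Register gcloud as a Docker credential helper, so pushes to
# gcr.io/<project>/... authenticate with your Google Cloud credentials.
gcloud auth configure-docker
```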

Our second prerequisite task is to set up our secrets. For this piece of the puzzle, I go to Google Cloud Secret Manager and create three secrets: mongo-atlas-priv-key, mongo-atlas-pub-key, and mongo-otel-config. The two key secrets hold the API keys set up on the MongoDB Atlas site, and the config secret holds the collector configuration we’ve written. This is the configuration we’re using today:

receivers:
  mongodbatlas:
    public_key: ${MONGODB_ATLAS_PUBLIC_KEY}
    private_key: ${MONGODB_ATLAS_PRIVATE_KEY}
    collection_interval: 60s

exporters:
  googlecloud:
    metric:
      prefix: mongodb_atlas

service:
  pipelines:
    metrics:
      receivers:
        - mongodbatlas
      exporters:
        - googlecloud
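As an alternative to the Secret Manager UI, the same three secrets can be created from the command line; a sketch, assuming the gcloud CLI and local files holding the two key values and the config (the local file names here are placeholders):

```shell
# Each secret's first version is populated from a local file.
gcloud secrets create mongo-atlas-pub-key --data-file=atlas-pub-key.txt
gcloud secrets create mongo-atlas-priv-key --data-file=atlas-priv-key.txt
gcloud secrets create mongo-otel-config --data-file=config.yaml
```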

Creation of the Cloud Run Deployment

Now that we have the prerequisites out of the way, we can focus on creating our deployment. In the Google Cloud Console, under Cloud Run, we click “Create Service”.

On the next page, we will need to fill in several values. Under the initial display, fill in the following:

  • Container image URL: the gcr.io image we pushed above
  • Service name: I’m using oiq-otel-mongo
  • Check “CPU is always allocated”
  • Container port: 8888 (collector’s metrics port)
  • Set autoscaling min 1 and max 1
  • Ingress: Allow internal traffic only
  • Authentication: Require authentication
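For readers who prefer the CLI, the settings above map onto a single gcloud run deploy invocation; a sketch, assuming the image tag from earlier:

```shell
# Service settings mirror the console choices: metrics port 8888,
# CPU always allocated, exactly one instance, internal ingress only,
# and required authentication. The region here is a placeholder.
gcloud run deploy oiq-otel-mongo \
  --image gcr.io/dm-cloudrun-blog/observiq-otel-collector:1.4.0 \
  --port 8888 \
  --no-cpu-throttling \
  --min-instances 1 \
  --max-instances 1 \
  --ingress internal \
  --no-allow-unauthenticated \
  --region us-central1
```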

Now we need to expand the section called Container, Variables & Secrets, Connections, Security by clicking the dropdown arrow to the right of that heading. Once it expands, we can access the VARIABLES & SECRETS tab.

Click the REFERENCE A SECRET link. Using the Secret dropdown, select mongo-atlas-priv-key and change the Reference Method to Exposed as environment variable. Finally, set the Name to MONGODB_ATLAS_PRIVATE_KEY. Repeat this process for mongo-atlas-pub-key, naming it MONGODB_ATLAS_PUBLIC_KEY.

One more time, we click the REFERENCE A SECRET link. This time we set the Secret to mongo-otel-config, and the Reference Method to Mounted as volume. Set the Mount path to etc/otel/config.yaml; the leading slash is a permanent part of the input textbox, so do not type it. This will place the config file in the appropriate location inside the container’s file system.
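The same three secret references can also be attached from the CLI; a sketch with gcloud, using the secrets and names from above:

```shell
# Expose the two keys as environment variables, and mount the config
# secret as a file at /etc/otel/config.yaml inside the container.
gcloud run services update oiq-otel-mongo \
  --update-secrets=MONGODB_ATLAS_PRIVATE_KEY=mongo-atlas-priv-key:latest,MONGODB_ATLAS_PUBLIC_KEY=mongo-atlas-pub-key:latest,/etc/otel/config.yaml=mongo-otel-config:latest
```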

We are now finished with container parameters. All other parameters can be left at the default, and we can click the blue CREATE button at the bottom of the page.

Reviewing the Container

Now that our image is deployed, we can click on it in the list of Cloud Run services. Doing so brings us to a dashboard of metrics for the container. There are other tabs we can choose from, such as logs, revisions, and triggers. The metrics here can tell us whether the container needs to be edited to allocate more CPU and/or memory. The logs tab displays the logs from inside the container, where we can see what is happening with the collector and rectify any issues by editing the configuration file secret.
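If you would rather pull those logs from a terminal, the collector’s output is also queryable through Cloud Logging; a sketch with gcloud:

```shell
# Fetch the 20 most recent log entries for the oiq-otel-mongo service.
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="oiq-otel-mongo"' \
  --limit=20
```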


At some point, the vast majority of tech teams will need to monitor a serverless computing resource. Running an instance of a telemetry collector inside of another serverless computing platform can often be an inexpensive and effective way to address this need.

I look forward to the next installment of this three-part series: AWS Elastic Container Service. In that installment, I will repeat what we achieved here with Google Cloud Run, over in AWS. See you there.