We are constantly working on contributing monitoring support for various sources; the latest addition is support for Tomcat monitoring using the JMX Receiver in the OpenTelemetry Collector. If you are as excited as we are, take a look at the details of this support in OpenTelemetry’s repo.
You can use this receiver with any OpenTelemetry Collector distribution, including the upstream OpenTelemetry Collector and observIQ’s distribution of the collector.
In this post, we take you through the steps to set up this receiver with observIQ’s distribution of the OpenTelemetry Collector and send out the metrics to Google Cloud Operations.
What signals matter?
Performance metrics are the most important to monitor for Tomcat servers. Here’s a list of signals to keep track of:
- Application metrics: Metrics related to each application that is deployed. Metrics such as tomcat.sessions and tomcat.processing_time give insight into the number of active sessions and the processing times for the application since startup.
- Request processor metrics: Monitoring request processing times helps gauge the hardware needed for the Tomcat server to handle the required number of requests in a given time period. Metrics such as tomcat.request_count and tomcat.max_time give insight into the total number of requests processed since startup and the maximum time taken to process a request.
- Managing traffic to the server: Tracking the bytes sent and received gives a good idea of the volume of traffic the server is handling at any point in time. This is especially important during peak traffic periods, when the server’s performance should be watched closely. The tomcat.traffic metric reports the bytes received and sent at any given time.
- Number of threads: By default, Tomcat servers allow up to 200 worker threads; once that limit is reached, additional connections are queued rather than processed immediately. It is therefore important to keep track of the total number of threads. The tomcat.threads metric gives the total thread count.
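The 200-thread default comes from Tomcat’s HTTP connector configuration. As a rough illustration (the values shown are Tomcat’s documented defaults, not recommendations from this post), the limit can be made explicit in conf/server.xml:

```xml
<!-- conf/server.xml: example HTTP connector with an explicit worker-thread limit -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />
```

Connections beyond maxThreads wait in the accept queue (up to acceptCount) before being refused, which is why watching tomcat.threads against this limit matters.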
All of the metrics related to the categories above can be gathered with the JMX receiver – so let’s get started!
The first step in this configuration is to install observIQ’s distribution of the OpenTelemetry Collector. For installation instructions and the latest version of the collector check our GitHub repo.
Enabling JMX for Tomcat
Tomcat does not have remote JMX enabled by default. To enable JMX, follow the instructions linked here.
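For reference, enabling remote JMX typically amounts to passing JMX system properties to the JVM that runs Tomcat, for example via $CATALINA_BASE/bin/setenv.sh. The sketch below is a minimal, unauthenticated setup suitable only for local testing (the port matches the endpoint used later in this post):

```shell
# bin/setenv.sh -- sketch: expose an unauthenticated local JMX endpoint on port 9000
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9000 \
  -Dcom.sun.management.jmxremote.rmi.port=9000 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=localhost"
```

Restart Tomcat after adding this so the JVM picks up the new options. For production, enable authentication and SSL rather than disabling them as shown here.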
Configuring the JMX receiver
After the installation, the config file for the collector can be found at:
- C:\Program Files\observIQ OpenTelemetry Collector\config.yaml (Windows)
The first step is the receiver’s configuration:
- We are using the JMX receiver to gather Tomcat metrics. The jar_path attribute lets you specify the path to the jar file that facilitates gathering Tomcat metrics using the JMX receiver. This file path is created automatically when observIQ’s distribution of the OpenTelemetry Collector is installed.
- Set the IP address and port for the system from which the metrics are gathered as the endpoint.
- The target_system attribute specifies which categories of metrics to scrape over JMX; this configuration collects both the Tomcat metrics and the JVM metrics.
- The collection_interval attribute sets how often metrics are fetched. The default value is 10s; however, when exporting metrics to Google Cloud Operations, this value should be set to 60s.
- The properties attribute allows you to set arbitrary attributes. For instance, if you configure multiple JMX receivers to collect metrics from several Tomcat servers, this attribute lets you attach the unique endpoint of each system as a resource attribute. Note that this is not the only use of the properties option.
```yaml
receivers:
  jmx:
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    endpoint: localhost:9000
    target_system: tomcat,jvm
    collection_interval: 60s
    properties:
      # Attribute 'endpoint' will be used for generic_node's node_id field.
      otel.resource.attributes: endpoint=localhost:9000
```
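If you monitor more than one Tomcat server, the properties option mentioned above can distinguish them. The following is a sketch only; the second host, tomcat-02.example.com, is hypothetical and not from the original setup:

```yaml
receivers:
  jmx/tomcat01:
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    endpoint: localhost:9000
    target_system: tomcat,jvm
    collection_interval: 60s
    properties:
      otel.resource.attributes: endpoint=localhost:9000
  jmx/tomcat02:
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    endpoint: tomcat-02.example.com:9000  # hypothetical second server
    target_system: tomcat,jvm
    collection_interval: 60s
    properties:
      otel.resource.attributes: endpoint=tomcat-02.example.com:9000
```

Each named receiver (jmx/tomcat01, jmx/tomcat02) would then be listed under the pipeline’s receivers so its metrics carry a distinct endpoint attribute.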
The next step is to configure the processors:
- Use the resourcedetection processor to create an identifier value for each Tomcat system that the metrics are scraped from.
- Add the batch processor to bundle metrics from multiple receivers into batches. We highly recommend using this processor in every configuration. To learn more about this processor, check the documentation.
```yaml
processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  batch:
```
The next step is to set up a destination for exporting the metrics, as shown below. You can check the configuration for your preferred destination in OpenTelemetry’s documentation here.
```yaml
exporters:
  googlecloud:
    retry_on_failure:
      enabled: false
```
Set up the pipeline.
```yaml
service:
  pipelines:
    metrics:
      receivers:
        - jmx
      processors:
        - resourcedetection
        - batch
      exporters:
        - googlecloud
```
Viewing the metrics collected
The JMX metrics gatherer scrapes the following metrics and exports them to Google Cloud Operations, based on the config detailed above.
| Metric | Description |
| --- | --- |
| tomcat.sessions | The number of active sessions. |
| tomcat.errors | The number of errors encountered. |
| tomcat.processing_time | The total processing time. |
| tomcat.traffic | The number of bytes transmitted and received. |
| tomcat.threads | The number of threads. |
| tomcat.max_time | Maximum time to process a request. |
| tomcat.request_count | The total requests. |
observIQ’s distribution is a game-changer for companies looking to implement the OpenTelemetry standards. The single-line installer and the seamlessly integrated pool of receivers, exporters, and processors make working with this collector simple. Follow this space to keep up with all our future posts and simplified configurations for various sources. For questions, requests, and suggestions, reach out to our support team at support@observIQ.com.