Monitoring Solr is critical because it handles search and analysis of data in your application. Simplifying this monitoring is key to gaining full visibility into Solr's availability and ensuring it performs as expected. We'll show you how to do this using the JMX receiver for the OpenTelemetry Collector.
You can use this receiver with any OTel collector distribution, including the upstream OpenTelemetry Collector and observIQ's distribution of the collector.
What signals matter?
Monitoring Solr involves scraping JVM metrics, such as memory utilization and JVM thread counts, as well as metrics exposed exclusively by Solr, such as request counts and caching-related metrics. The JMX receiver scrapes all the metrics needed to draw the following critical inferences:
Understanding request handling using request rates
Solr nodes and clusters handle the requests sent to them. Tracking the volume of requests received and handled helps you fine-tune performance and eliminate bottlenecks. A Solr dashboard can highlight sudden dips or spikes in the requests received.

Monitoring caching capabilities
Caching is another key feature to monitor, largely because of Solr's architecture. Caching provides fast access to frequently requested data without paying the cost of disk reads, but it does consume memory, which can eat into performance. Monitoring cache operations keeps memory health and CPU utilization in check: caches that are too small lead to low hit rates (the hit ratio, solr.cache.hit.count divided by solr.cache.lookup.count, is a useful summary), which reduces node performance, while caches that are too large put pressure on the JVM heap and degrade node performance as well.

Request Latency
How quickly requests are handled is another factor to monitor closely. Request latency gives a clear indication of how queries and requests are being processed. In an architecture where search handlers are assigned to specific search categories, tracking latency across those handlers shows how request latency differs between handlers and data types. Comparing request latency against request rates also makes it much easier to identify request-handling issues.

Configuring the JMX receiver to gather Solr metrics
Use the following configuration to gather metrics with the JMX receiver and forward them to the destination of your choice. OpenTelemetry supports over a dozen destinations for the gathered metrics; more information about exporters is available in OpenTelemetry's repositories at https://github.com/open-telemetry. This sample covers the configuration for the JMX receiver.
Receiver configuration:
- Configure the collection_interval attribute; it is set to 30 seconds in this sample configuration.
- Set the endpoint attribute to the host and JMX port of the system running the Solr instance.
- Specify the jar_path attribute. This is the path to the JMX metrics gatherer jar file that the JMX receiver uses to collect Solr metrics; the file is placed at this path automatically when the observIQ OpenTelemetry collector is installed.
- Set the target_system attribute to solr. The JMX metrics gatherer can scrape several categories of metrics, and this attribute selects which ones to collect. This sample scrapes the Solr metrics; see the note after the sample configuration for including the generic JVM metrics as well.
- Use resource_attributes to tag the metrics with the Solr endpoint (host and port).
receivers:
  jmx:
    collection_interval: 30s
    endpoint: localhost:9999
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    target_system: solr
    resource_attributes:
      solr.endpoint: localhost:9999
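If you also want the generic JVM metrics (heap usage, threads, garbage collection) alongside the Solr metrics, the target_system attribute generally accepts a comma-separated list of target systems. The following is a sketch under that assumption; verify the supported values against your collector version's documentation:

receivers:
  jmx:
    collection_interval: 30s
    endpoint: localhost:9999
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    # a comma-separated list scrapes both the Solr metrics and the generic JVM metrics
    target_system: solr,jvm
    resource_attributes:
      solr.endpoint: localhost:9999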
Processor configuration:
- The resourcedetection processor attaches a unique identity to each metric's host, so you can filter between hosts and view the metrics specific to each one.
- The system detector gathers the host information, with hostname_sources set to use the OS hostname.
- The batch processor batches the metrics together before they are exported.
processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  batch:
Exporter configuration:
In this example, the metrics are exported to New Relic using the OTLP exporter. If you would like to forward your metrics to a different destination, check the exporters that OpenTelemetry currently supports in the repositories linked above.
exporters:
  otlp:
    endpoint: https://otlp.nr-data.net:443
    headers:
      api-key: 00000-00000-00000
    tls:
      insecure: false
Set up the pipeline:
service:
  pipelines:
    metrics:
      receivers:
        - jmx
      processors:
        - resourcedetection
        - batch
      exporters:
        - otlp
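While validating the setup, it can also be useful to print the scraped metrics to the collector's own logs. The following sketch adds the collector's logging exporter alongside the OTLP exporter; newer collector releases rename this exporter to debug and replace loglevel with verbosity, so adjust for your version:

exporters:
  logging:
    # prints each scraped metric to the collector log for troubleshooting
    loglevel: debug
  otlp:
    endpoint: https://otlp.nr-data.net:443
    headers:
      api-key: 00000-00000-00000
    tls:
      insecure: false

service:
  pipelines:
    metrics:
      receivers:
        - jmx
      processors:
        - resourcedetection
        - batch
      exporters:
        - otlp
        - logging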
Viewing the metrics
All the metrics the JMX receiver scrapes are listed below.
Metric | Description |
---|---|
solr.cache.eviction.count | The number of cache evictions for the current index searcher. |
solr.cache.hit.count | The number of cache hits for the current index searcher. |
solr.cache.insert.count | The number of inserts into the cache. |
solr.cache.lookup.count | The number of lookups against the cache. |
solr.cache.size | The size of the cache occupied in memory, in bytes. |
solr.document.count | The total number of indexed documents. |
solr.index.size | The size of the Solr index. |
solr.request.count | The number of requests received. |
solr.request.error.count | The number of requests that resulted in an error. |
solr.request.time.average | The average time taken to complete a request. |
solr.request.timeout.count | The number of requests that resulted in a timeout. |
Alerting
Now that you have the metrics gathered and exported to the destination of your choice, you may want to configure alerts on them. Here are some alerting possibilities for Solr, followed by a sketch of what a threshold rule might look like:
- Alerts to notify that the Solr server is down
- Alerts based on threshold values for request rate, cache size, timeout count, and cache hit count
- Alerts for anomaly scenarios where the values of certain metrics deviate from the baseline
- Set up resampling to avoid reacting to false alarms
- Notifying the on-call support team about any critical alerts
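The exact alerting syntax depends on the backend you export to. As an illustration of the threshold idea, here is a sketch of a Prometheus-style alerting rule built on the request and error counters from the table above; the underscored metric names, label names, and thresholds are assumptions that will vary with how your backend ingests and translates OTLP metric names:

groups:
  - name: solr-alerts
    rules:
      - alert: SolrHighRequestErrorRate
        # fire when more than 5% of requests error over a 5 minute window
        expr: rate(solr_request_error_count[5m]) / rate(solr_request_count[5m]) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Solr request error rate is above 5% on {{ $labels.host_name }}"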
observIQ's distribution is a game-changer for companies looking to implement the OpenTelemetry standards. The single-line installer and the seamlessly integrated pool of receivers, processors, and exporters make working with this collector simple. Follow this space to keep up with all our future posts and simplified configurations for various sources. For questions, requests, and suggestions, reach out to our support team at support@observIQ.com.