Apache Solr is an open source search platform built on top of Apache Lucene. It is used to quickly and easily search large volumes of data. It provides powerful features such as faceting, text analysis, and geo-spatial search. Solr is highly scalable and can be used in a wide variety of applications.
To monitor Solr with Netdata, the only prerequisites are a running Solr instance and Netdata installed on your system.
Netdata auto-discovers hundreds of services, and for those it doesn't, turning on manual discovery is a one-line configuration. For more information on configuring Netdata for Solr monitoring, please read the collector documentation.
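If the Solr collector isn't picked up automatically, a minimal job definition in the collector's configuration file is enough to point it at your instance. A sketch, based on the collector's documented defaults (adjust the path and URL for your setup):

```yaml
# /etc/netdata/go.d/solr.conf
jobs:
  - name: local
    url: http://localhost:8983
```

After editing the file, restart the Netdata agent so the new job is loaded.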
You should now see the Solr section on the Overview tab in Netdata Cloud, populated with charts for all the metrics you care about.
Netdata has a public demo space (no login required) where you can explore different monitoring use-cases and get a feel for Netdata.
The search requests metric measures the number of search requests sent to Apache Solr per second. It reflects the load on the search service: a sustained spike can mean the service is over-utilized, or that too many users are trying to access it at once.
The search errors metric measures the number of errors encountered while serving search requests, per second. A rising error rate can point to problems with the service, such as overly complex search requests, an index that is not being updated properly, or slow queries.
The search errors by type metric breaks the search error rate down by type (client errors, server errors, timeouts), per second. The breakdown helps narrow down whether a problem lies with the clients sending requests or with the server itself.
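For context on where these per-second charts come from: Solr exposes cumulative counters, and the collector converts the delta between consecutive samples into a rate. A minimal sketch of that conversion (the sample numbers are made up):

```python
# Sketch: deriving a per-second rate from Solr's cumulative counters.
# Netdata's collector does this internally; this is only a manual check.

def rate_per_second(prev_count, curr_count, interval_s):
    """Counter delta divided by the sampling interval.

    Clamps to zero so a counter reset (e.g. after a Solr restart)
    doesn't produce a negative rate.
    """
    return max(curr_count - prev_count, 0) / interval_s

# Example: two samples of a search error counter taken 10 seconds apart.
prev, curr = 1200, 1260
print(rate_per_second(prev, curr, 10))  # 6.0 errors/s
```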
The search requests processing time metric measures how long Apache Solr takes to process each search request, in milliseconds. High processing times can indicate slow queries or indexing issues and help pinpoint potential bottlenecks.
The search requests timings metric reports the min, median, mean, and max processing time per search request, in milliseconds. Comparing these values shows whether slowness is uniform across requests or driven by a few outliers.
The search requests processing time percentile metric reports the 75th, 95th, 99th, and 99.9th (p999) percentile of search request processing time, in milliseconds. Percentiles expose the tail latency that averages hide: a healthy mean can coexist with a p999 that is unacceptable to users.
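To make the percentile charts concrete, here is what they summarise, computed on a toy latency sample. Solr maintains these statistics internally over a sliding window; this nearest-rank implementation is only illustrative:

```python
# Sketch: nearest-rank percentiles on a toy sample of request latencies.
import math

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted list (p in percent)."""
    k = max(math.ceil(p / 100 * len(sorted_vals)) - 1, 0)
    return sorted_vals[k]

latencies_ms = sorted([3, 4, 4, 5, 5, 6, 7, 9, 12, 40])
for p in (75, 95, 99, 99.9):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```

Note how the single 40 ms outlier dominates p95 and above while barely moving the mean; that is exactly why the percentile chart matters.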
The update requests metric measures the number of update requests sent to Apache Solr per second. It reflects the indexing load: a sustained spike can mean the service is over-utilized, or that too many clients are writing at once.
The update errors metric measures the number of errors encountered while serving update requests, per second. A rising error rate can point to problems with the service, such as overly complex update requests, an index that is not being updated properly, or slow processing.
The update errors by type metric breaks the update error rate down by type (client errors, server errors, timeouts), per second, which helps narrow down whether a problem lies with the clients or with the server.
The update requests processing time metric measures how long Apache Solr takes to process each update request, in milliseconds. High processing times can indicate indexing bottlenecks and help pinpoint where they occur.
The update requests timings metric reports the min, median, mean, and max processing time per update request, in milliseconds, showing whether slowness is uniform or driven by a few outliers.
The update requests processing time percentile metric reports the 75th, 95th, 99th, and 99.9th (p999) percentile of update request processing time, in milliseconds, exposing the tail latency that averages hide.
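If you want to eyeball the raw numbers behind these charts, they come from Solr's metrics API (`/solr/admin/metrics`). A sketch of picking the update-handler counters out of a response; the JSON below is a trimmed, made-up sample, and the core name `mycore` is hypothetical:

```python
# Sketch: extracting update-handler counters from a (trimmed, illustrative)
# Solr metrics API response. In practice you would fetch this JSON from
# http://localhost:8983/solr/admin/metrics on your own instance.
import json

sample = json.loads("""
{
  "metrics": {
    "solr.core.mycore": {
      "UPDATE./update.requests": {"count": 5120},
      "UPDATE./update.serverErrors": {"count": 3}
    }
  }
}
""")

core = sample["metrics"]["solr.core.mycore"]
print(core["UPDATE./update.requests"]["count"])      # cumulative update requests
print(core["UPDATE./update.serverErrors"]["count"])  # cumulative server errors
```

These are cumulative counters; the per-second charts in Netdata are the deltas between consecutive samples.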
Want a personalised demo of Netdata for your use case?