Summary:
This idea outlines the need for a generic per-process configuration for scraping Prometheus data in Instana.
The absence of this feature limits Prometheus monitoring on Kubernetes and OpenShift; current workarounds cause unnecessary overhead.
Current Situation:
Instana supports configuring specific sensors through process environment variables, as described in the documentation:
https://www.ibm.com/docs/en/instana-observability/222?topic=agent-host-configuration#configurations-from-process-environment
Unfortunately, the Prometheus sensor "com.instana.plugin.prometheus" does not recognize process environment variables and always falls back to the specified default value.
Example config:
--------
com.instana.plugin.prometheus:
  # Generic definition of the Prometheus scraper
  customMetricSources:
    # metrics endpoint; the IP and port are auto-discovered
    - url:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_URL
        default_value: '/metrics'
--------
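For illustration, if the sensor honored the environment lookup above, a service team could override the scrape endpoint per workload by setting the variable in its container spec. This is a sketch only; the Deployment and container names are placeholders:

--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # placeholder name
spec:
  template:
    spec:
      containers:
        - name: app            # placeholder name
          env:
            - name: MON_PROMETHEUS_URL
              value: '/internal/metrics'   # per-service override of the '/metrics' default
--------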
Problem:
We do not use Prometheus on every service, and querying all of our services on OpenShift / Kubernetes for Prometheus metrics poses a security and performance risk.
Therefore, granular settings are needed to enable, disable, or change the configuration, since every service is different.
These changes should be applied by the service teams themselves for their own services.
Letting teams define how their services are scraped is also standard practice with Prometheus and is part of the Kubernetes scraper via labels.
Proposed Idea:
Introduce the option to define the following attributes of the Prometheus sensor at the process level:
enabled
url
user
pwd
metricexcludeRegex
poll-interval
This would allow teams to configure Prometheus scraping per process with individual settings, on top of a company-wide default behaviour.
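A minimal sketch of what such a per-process configuration could look like, assuming each attribute can be resolved from a process environment variable. The environment variable names and the per-attribute lookup are illustrative proposals, not an existing Instana API:

--------
com.instana.plugin.prometheus:
  customMetricSources:
    - enabled:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_ENABLED
        default_value: 'false'          # opt-in: scrape only services that ask for it
      url:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_URL
        default_value: '/metrics'
      user:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_USER
      pwd:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_PWD
      metricexcludeRegex:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_EXCLUDE_REGEX
      poll-interval:
        configuration_from:
          type: env
          env_name: MON_PROMETHEUS_POLL_INTERVAL
        default_value: '60'             # company default, in seconds
--------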
Long-Term Vision:
Eventually, these settings should align with pod labels, similar to how the Prometheus scraper functions. The Instana Kubernetes sensor should read Kubernetes labels on pods and remotely configure agents to scrape the pods. Additionally, Prometheus metrics should be visible in the Instana UI at the process level and queryable together with the associated infrastructure hierarchy information.
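As a point of reference, the widely used community convention for per-pod scrape control relies on pod annotations that Prometheus's Kubernetes service discovery relabels on. A comparable label- or annotation-driven mechanism for Instana could follow the same shape (the pod name is a placeholder):

--------
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                   # placeholder name
  annotations:
    prometheus.io/scrape: 'true'      # community convention, not a Prometheus built-in
    prometheus.io/path: '/metrics'
    prometheus.io/port: '8080'
--------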
Necessity:
This idea is crucial for organizations using Prometheus metrics as part of the OpenTelemetry support with Instana on Kubernetes and OpenShift. The present configuration options are not sufficient and result in inefficiency. Implementing this idea is necessary to enable effective Prometheus monitoring on these platforms.
Conclusion:
We strongly encourage the prioritization of this idea in upcoming Instana updates. It will enhance Instana's compatibility with standard monitoring practices on Kubernetes and OpenShift, reduce superfluous overhead, and increase user satisfaction.
Thank you for considering this vital innovation.