cAdvisor metrics and problematic identification

Created by: bobheadxi

Background

Prometheus generally attaches useful labels based on the target it is scraping. For example, when scraping frontend, Prometheus reaches out to frontend, knows certain things about it (e.g. service name, pod, instance), and can attach those labels onto the metrics frontend exports.
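
For example, a series scraped directly from frontend might end up stored looking something like this (the metric name and label values here are made up for illustration; the point is that the job/instance/namespace/pod labels are derived from the scrape target):

      # Illustrative only - the metric name and label values are hypothetical.
      src_http_requests_total{job="frontend", instance="10.0.0.12:6060", namespace="ns-sourcegraph", pod="sourcegraph-frontend-0"}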

cAdvisor, however, exports metrics on behalf of other containers: even though every cAdvisor metric looks like it comes from cAdvisor itself, each series actually describes some other container.
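
For instance, a single cAdvisor target might expose series like the following, each describing a different container (the values are made up; the label names correspond to the Kubernetes labels listed under the constraints further down):

      # Both series come from the same cAdvisor scrape target, but each one
      # describes a different container.
      container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="frontend", container_label_io_kubernetes_pod_namespace="ns-sourcegraph"}
      container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_namespace="ns-sourcegraph"}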

On some systems, cAdvisor generates a name label by combining fields that should make each container it monitors unique. This worked well enough for a while, right up until we discovered it didn't: https://github.com/sourcegraph/sourcegraph/issues/17069, https://github.com/sourcegraph/sourcegraph/issues/17072
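
As a point of reference, on nodes that use the Docker runtime the generated name tends to follow the k8s_<container>_<pod>_<namespace>_<uid>_<attempt> convention; the exact format is runtime-dependent, and the values below are invented:

      # Hypothetical example of a generated container name on a Docker-runtime node.
      name="k8s_frontend_sourcegraph-frontend-0_ns-sourcegraph_2a9c0f7e-1b2c-4d5e-8f90-1234567890ab_0"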

Problem

We need an effective way to identify Sourcegraph services inside cAdvisor metrics. The current strategy is outlined in our docs, but the approach is not perfect:

  • in environments like GCP (Cloud and k8s.sgdev), certain containers like GCP's prometheus-to-* exporters get picked up by the prometheus matcher (see the example after this list).
  • in the past, cAdvisor's export-everything nature has caused issues like killing Prometheus: https://github.com/sourcegraph/customer/issues/75
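
To illustrate the first point: dashboard queries select cAdvisor series by regex on the name label, so a matcher along these lines (a simplified sketch, not the exact expression from our dashboards) also catches GCP's prometheus-to-* containers:

      # Intended to match our Prometheus container, but on GKE nodes this also
      # matches containers such as prometheus-to-sd.
      container_memory_usage_bytes{name=~"^prometheus.*"}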

We are a bit hamstrung in that whatever name-labelling convention we have must also:

  • work with the limited set of labels that cAdvisor provides (e.g. in Kubernetes, we only get io.kubernetes.container.name, io.kubernetes.pod.name, io.kubernetes.pod.namespace, io.kubernetes.pod.uid)
  • work in environments that don't use the Docker runtime (which cAdvisor is geared towards)
  • be easy to match on across both Kubernetes (where we need to encode a lot of information in the name, e.g. pod and container) and docker-compose (where just the container name is sufficient)
    • by extension, it needs to work despite varying naming conventions across the two (e.g. sourcegraph-frontend vs frontend vs sourcegraph-frontend-internal)
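
For example, a single matcher for frontend has to tolerate all of those variants in both deployment types; a sketch of such a regex (not necessarily the exact one we ship) might be:

      # One name regex intended to cover frontend across deployments:
      #   docker-compose: "frontend" / "sourcegraph-frontend-0"
      #   Kubernetes:     "sourcegraph-frontend-<hash>", "sourcegraph-frontend-internal-<hash>"
      name=~"^(sourcegraph-)?frontend.*"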

Docker-compose doesn't seem to be as much of an issue, since those deployments generally run on machines that do nothing other than serve Sourcegraph, but in Kubernetes there's no telling what else is on the nodes.

One approach attempted was to filter on namespace via metric_relabel_configs in k8s (https://github.com/sourcegraph/deploy-sourcegraph/pull/1644), e.g.:

      metric_relabel_configs:
      # cAdvisor-specific customization. Drop container metrics exported by cAdvisor
      # not in the same namespace as Sourcegraph.
      # Uncomment this if you have problems with certain dashboards or cAdvisor itself
      # picking up non-Sourcegraph services. Ensure all Sourcegraph services are running
      # within the Sourcegraph namespace you have defined.
      # The regex must keep matches on '^$' (empty string) to ensure other metrics do not
      # get dropped.
      - source_labels: [container_label_io_kubernetes_pod_namespace]
        regex: ^$|ns-sourcegraph # ensure this matches with namespace declarations
        action: keep

but because a namespace can be applied in various ways when deploying, there's no guarantee that a customer won't forget to update the Prometheus relabel rule - more discussion in https://github.com/sourcegraph/deploy-sourcegraph/pull/1578. Regardless, this is the change currently applied in Cloud and k8s.sgdev.org to resolve our issue.