There are two ways to set up Prometheus integration, depending on where your apps are running:

* For deployments on Kubernetes, GitLab can automatically [deploy and manage Prometheus](#managed-prometheus-on-kubernetes).
* For other deployment targets, [specify the Prometheus server](#manual-configuration-of-prometheus).
Integration with Prometheus requires the following:
1. GitLab 9.0 or higher
1. Prometheus must be configured to collect one of the [supported metrics](prometheus_library/metrics.md)
1. Each metric must have a label to indicate the environment
1. GitLab must have network connectivity to the Prometheus server (a quick way to check the last two requirements is shown after this list)
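If you want to verify the label and connectivity requirements up front, you can query the Prometheus HTTP API directly from the GitLab host. This is only a sketch; the server address and the `environment` value are assumptions to adapt to your setup.

```bash
# Query a supported metric, filtered by the environment label GitLab expects.
# Replace prometheus.example.com with your Prometheus server's address.
curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_usage_bytes{environment="production"}'
```

An empty `result` array usually means the metric is being collected but without the expected `environment` label.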
## Getting started with Prometheus monitoring

Depending on your deployment and where you have located your GitLab server, there are a few options to get started with Prometheus monitoring.

* If both GitLab and your applications are installed in the same Kubernetes cluster, you can leverage the [bundled Prometheus server within GitLab](#configuring-omnibus-gitlab-prometheus-to-monitor-kubernetes).
* If your applications are deployed on Kubernetes, but GitLab is not in the same cluster, then you can [configure a Prometheus server in your Kubernetes cluster](#configuring-your-own-prometheus-server-within-kubernetes).
* If your applications are not running in Kubernetes, [get started with Prometheus](#getting-started-with-prometheus-outside-of-kubernetes).
### Getting started with Prometheus outside of Kubernetes
Installing and configuring Prometheus to monitor applications is fairly straightforward.
1. Set up one of the [supported monitoring targets](prometheus_library/metrics.md)
1. Configure the Prometheus server to [collect their metrics](https://prometheus.io/docs/operating/configuration/#scrape_config); a sample scrape configuration is sketched below
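For example, a minimal scrape job might look like the following. This is a sketch rather than a canonical configuration: the file path, target address, and label values are assumptions you should replace with your own.

```bash
# Minimal Prometheus configuration with one scrape job and an environment label.
cat > /etc/prometheus/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['app.example.com:9100']   # for example, node_exporter on the application host
        labels:
          environment: production           # the label GitLab uses to match the environment
EOF

# Reload the configuration (Prometheus 2.x requires the --web.enable-lifecycle flag),
# or restart the Prometheus service instead.
curl -X POST http://localhost:9090/-/reload
```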
### Configuring Omnibus GitLab Prometheus to monitor Kubernetes deployments

With Omnibus GitLab running inside of Kubernetes, you can leverage the bundled version of Prometheus to collect the supported metrics. Once enabled, Prometheus will automatically begin monitoring Kubernetes Nodes and any [annotated Pods](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Ckubernetes_sd_config%3E). Read how to configure the bundled Prometheus server in the GitLab administration documentation.

## Managed Prometheus on Kubernetes

GitLab can seamlessly deploy and manage Prometheus on a [connected Kubernetes cluster](../clusters/index.md), making monitoring of your apps easy.

### Requirements

* GitLab [10.5 or above](https://gitlab.com/gitlab-org/gitlab-ce/issues/28916)
* A [connected Kubernetes cluster](../clusters/index.md)
* Helm Tiller [installed by GitLab](../clusters/index.md#installing-applications)
### Getting started
Prometheus is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/kubernetes/charts/tree/master/stable/prometheus). Prometheus is only accessible within the cluster, with GitLab communicating through the [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
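To see what was deployed, you can list the resources in that namespace with `kubectl`, assuming your local `kubectl` is configured for the connected cluster:

```bash
# The managed Prometheus is installed into the gitlab-managed-apps namespace.
kubectl get pods,svc -n gitlab-managed-apps
```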
The Prometheus server will [automatically detect and monitor](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Ckubernetes_sd_config%3E) nodes, pods, and endpoints. To configure a resource to be monitored by Prometheus, simply set the following [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/):
* `prometheus.io/scrape` to `true` to enable monitoring of the resource.
* `prometheus.io/port` to define the port of the metrics endpoint.
* `prometheus.io/path` to define the path of the metrics endpoint. Defaults to `/metrics`.
Or use `kubectl` to apply an updated manifest:

```bash
kubectl apply -f path/to/prometheus.yml
```
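For example, the annotations above can be added to an existing Deployment's Pod template with `kubectl patch`. This is only a sketch; the Deployment name `my-app` and the port are assumptions for your own workload.

```bash
# Add the Prometheus scrape annotations to the Pod template of a Deployment.
kubectl patch deployment my-app --type merge -p '{
  "spec": {"template": {"metadata": {"annotations": {
    "prometheus.io/scrape": "true",
    "prometheus.io/port": "9102",
    "prometheus.io/path": "/metrics"
  }}}}
}'
```

Note that changing the Pod template triggers a rolling update of the Deployment.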
CPU and Memory consumption are monitored, but require [naming conventions](prometheus_library/kubernetes.md#specifying-the-environment) in order to determine the environment. If you are using [Auto DevOps](../../../topics/autodevops/), this is handled automatically.
The [NGINX Ingress](../clusters/index.md#installing-applications) that is deployed by GitLab to clusters is automatically annotated for monitoring, providing key response metrics: latency, throughput, and error rates.

### Configuring your own Prometheus server within Kubernetes

Once your Prometheus server is deployed (for example, by applying a `prometheus.yml` template with `kubectl` as shown above), you should see the Prometheus service, deployment, and pod start within the `prometheus` namespace. The server will begin to collect metrics from each Kubernetes Node in the cluster, based on the configuration provided in the template. It will also attempt to collect metrics from any Kubernetes Pods that have been [annotated for Prometheus](https://prometheus.io/docs/operating/configuration/#pod).
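A quick way to confirm that everything started is to list the resources in that namespace; this assumes the template's default `prometheus` namespace:

```bash
# Check the service, deployment, and pods created by the template.
kubectl get svc,deploy,pods -n prometheus
```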
Since GitLab is not running within Kubernetes, the template provides external
network access via a `NodePort` running on `30090`. This method allows access
to be controlled using provider firewall rules, such as within Google Compute Engine.
Since a `NodePort` does not automatically have firewall rules created for it,
one will need to be created manually to allow access. In GCP/GKE, you will want
to confirm the Node that the Prometheus pod is running on. This can be done
either by looking at the Pod in the Kubernetes dashboard, or by running:
```bash
kubectl describe pods -n prometheus
```
Next on GKE, we need to get the `tag` of the Node or VM Instance, so we can
create an accurate firewall rule. The easiest way to do this is to go into the
Google Cloud Platform Compute console and select the VM instance that matches
the name of the Node gathered from the step above. In this case, the node tag
needed is `gke-prometheus-demo-5d5ada10-node`. Also make a note of the
**External IP**, which will be the IP address the Prometheus server is reachable on.
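With the node tag and the `30090` NodePort, the firewall rule can then be created from the command line. The rule name and source range below are placeholders; restrict the source range to the IP address of your GitLab server.

```bash
# Allow the GitLab server to reach the Prometheus NodePort on the tagged GKE nodes.
gcloud compute firewall-rules create prometheus-nodeport \
  --allow tcp:30090 \
  --target-tags gke-prometheus-demo-5d5ada10-node \
  --source-ranges 203.0.113.10/32
```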
## Manual configuration of Prometheus

### Requirements

You need a Prometheus server up and running. You have two options here:

* If you have an Omnibus based GitLab installation within your Kubernetes cluster, you can leverage the bundled Prometheus server to [monitor Kubernetes](../../../../administration/monitoring/prometheus/index.md#configuring-prometheus-to-monitor-kubernetes).
* To configure your own Prometheus server, you can follow the [Prometheus documentation](https://prometheus.io/docs/introduction/overview/) or [our guide](../../../../administration/monitoring/prometheus/index.md#configuring-your-own-prometheus-server-within-kubernetes).
## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab [looks for an `environment` label](metrics.md#identifying-environments).
CPU and Memory metrics, however, are tracked at the container level, where traditional Kubernetes labels are not available, so GitLab needs a different way to detect which containers belong to an environment.
Instead, the [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) name should begin with [CI_ENVIRONMENT_SLUG](../../../../ci/variables/README.md#predefined-variables-environment-variables). It can be followed by a `-` and additional content if desired.
If you are using [GitLab Auto-Deploy](../../../../ci/autodeploy/index.md) and one of the two [provided Kubernetes monitoring solutions](../prometheus.md#getting-started-with-prometheus-monitoring), the `environment` label will be automatically added.
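For example, a deploy job could name the Deployment after the environment slug. This is only a sketch; the `-web` suffix and the image are assumptions, and Auto DevOps applies this convention for you.

```bash
# Name the Deployment after CI_ENVIRONMENT_SLUG so its containers can be matched
# to the environment (an optional suffix after the slug is allowed).
kubectl create deployment "${CI_ENVIRONMENT_SLUG}-web" \
  --image="registry.example.com/my-app:latest"
```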
These metrics expect an `environment` label of the form `$CI_ENVIRONMENT_SLUG-canary`:
| Name | Query |
| ---- | ----- |
| Average Memory Usage (MB) | (sum(avg(container_memory_usage_bytes{container_name!="POD",environment="%{ci_environment_slug}-canary"}) without (job))) / count(avg(container_memory_usage_bytes{container_name!="POD",environment="%{ci_environment_slug}-canary"}) without (job)) /1024/1024 |
| Average CPU Utilization (%) | sum(avg(rate(container_cpu_usage_seconds_total{container_name!="POD",environment="%{ci_environment_slug}-canary"}[2m])) without (job)) * 100 |
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/13438) in GitLab 9.5
GitLab has support for automatically detecting and monitoring the Kubernetes NGINX ingress controller. This is provided by leveraging the built-in Prometheus metrics included in [version 0.9.0](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/Changelog.md#09-beta1) and above of the ingress.
## Requirements
[Prometheus integration](../prometheus/index.md) must be active.
## Metrics supported
## Configuring NGINX ingress monitoring
If you have deployed NGINX Ingress using GitLab's [Kubernetes cluster integration](../../clusters/index.md#installing-applications), it will automatically be monitored by Prometheus.
For other deployments, there is some configuration required depending on your installation:
* NGINX Ingress should be version 0.9.0 or above, with metrics enabled
* NGINX Ingress should be annotated for Prometheus monitoring
* Prometheus should be configured to monitor annotated pods
### Manually setting up NGINX Ingress for Prometheus monitoring
Version 0.9.0 and above of [NGINX ingress](https://github.com/kubernetes/ingress/tree/master/controllers/nginx) have built-in support for exporting Prometheus metrics. To enable, a ConfigMap setting must be passed: `enable-vts-status: "true"`. Once enabled, a Prometheus metrics endpoint will start running on port 10254.
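For example, if the controller reads its settings from a ConfigMap, the flag can be turned on with a patch. The ConfigMap name and namespace below are assumptions; they vary between installations.

```bash
# Enable the VTS status module so the controller exposes metrics on port 10254.
kubectl -n kube-system patch configmap nginx-ingress-controller \
  --type merge -p '{"data":{"enable-vts-status":"true"}}'

# Spot-check the endpoint from inside the cluster (replace with a controller pod IP):
# curl http://<controller-pod-ip>:10254/metrics
```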
With metric data now available, Prometheus needs to be configured to collect it. The easiest way to do this is to leverage Prometheus' [built-in Kubernetes service discovery](https://prometheus.io/docs/operating/configuration/#kubernetes_sd_config), which automatically detects a variety of Kubernetes components and makes them available for monitoring. Since NGINX ingress metrics are exposed per pod, a scrape job for Kubernetes pods is required. A sample pod scraping configuration [is available](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L248). This configuration will detect pods and enable collection of metrics **only if** they have been specifically annotated for monitoring.
Depending on how NGINX ingress was deployed, typically a DaemonSet or Deployment, edit the corresponding YML spec. Two new annotations need to be added:
* `prometheus.io/scrape: "true"`
* `prometheus.io/port: "10254"`
Prometheus should now be collecting NGINX ingress metrics. To validate, view the Prometheus Targets, available under `Status > Targets` on the Prometheus dashboard. New entries for NGINX should be listed in the Kubernetes pod monitoring job, `kubernetes-pods`.
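The same check can be done against the Prometheus API if the dashboard is not convenient; the server address below is an assumption.

```bash
# List active targets and look for the NGINX ingress pods.
curl -s http://prometheus.example.com:9090/api/v1/targets | grep -i nginx
```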
Managing these settings depends on how NGINX ingress has been deployed. If you have deployed via the [official Helm chart](https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress), metrics can be enabled with `controller.stats.enabled` along with the required annotations. Alternatively, it is possible to edit the NGINX ingress YML directly in the [Kubernetes dashboard](https://github.com/kubernetes/dashboard).
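A Helm-based sketch might look like the following. The release name and the `controller.podAnnotations` values key are assumptions; check the values supported by your chart version.

```bash
# Enable metrics and annotate the controller pods via the official Helm chart.
helm upgrade nginx-ingress stable/nginx-ingress \
  --reuse-values \
  --set controller.stats.enabled=true \
  --set-string controller.podAnnotations."prometheus\.io/scrape"=true \
  --set-string controller.podAnnotations."prometheus\.io/port"=10254
```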