Commit bbd532fb authored by Amy Qualls, committed by Craig Norris

Minor tone and style revisions

Touching up the pages to bring them closer to GitLab standards.
parent 07d2ebc6
@@ -6,85 +6,90 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Grafana Configuration
[Grafana](https://grafana.com/) is a tool that enables you to visualize time
series metrics through graphs and dashboards. GitLab writes performance data to Prometheus,
and Grafana allows you to query the data to display useful graphs.
## Installation
Omnibus GitLab can [help you install Grafana (recommended)](https://docs.gitlab.com/omnibus/settings/grafana.html),
or you can use Grafana's package repositories (Yum/Apt) for easy installation.
See [Grafana installation documentation](https://grafana.com/docs/grafana/latest/installation/)
for detailed steps.
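
For an Omnibus installation, enabling the bundled Grafana is a small change to
`/etc/gitlab/gitlab.rb`. A minimal sketch (see the linked Omnibus documentation
for the authoritative list of settings):

```ruby
# /etc/gitlab/gitlab.rb -- minimal sketch for enabling the bundled Grafana.
grafana['enable'] = true
```

Run `sudo gitlab-ctl reconfigure` afterward for the change to take effect.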
NOTE: **Note:**
Before starting Grafana for the first time, set the admin user
and password in `/etc/grafana/grafana.ini`. If you don't, the default password
is `admin`.
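
For reference, the admin credentials live in the `[security]` section of
`grafana.ini`. A minimal sketch with placeholder values:

```ini
; /etc/grafana/grafana.ini -- placeholder values; choose your own credentials.
[security]
admin_user = admin
admin_password = <a-strong-password>
```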
## Configuration
1. Log in to Grafana as the admin user.
1. Expand the menu by clicking the Grafana logo in the top left corner.
1. Choose **Data Sources** from the menu.
1. Click **Add new** in the top bar:
![Grafana empty data source page](img/grafana_data_source_empty.png)
1. Edit the data source to fit your needs:
![Grafana data source configurations](img/grafana_data_source_configuration.png)
1. Click **Save**.
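
For a GitLab instance scraping the bundled Prometheus, the data source values
typically look like the following sketch. The URL assumes Prometheus listens on
its default port on the same host, and the exact field labels vary between
Grafana versions:

```plaintext
Name:   Prometheus
Type:   Prometheus
URL:    http://localhost:9090
Access: Server (proxy)
```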
## Import Dashboards
You can now import a set of default dashboards to start displaying useful information.
GitLab has published a set of default
[Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards) to get you started.
Clone the repository, or download a ZIP file or tarball, then follow these steps to import each
JSON file individually:
1. Log in to Grafana as the admin user.
1. Open the dashboard dropdown menu and click **Import**:
![Grafana dashboard dropdown](img/grafana_dashboard_dropdown.png)
1. Click **Choose file**, and browse to the location where you downloaded or
cloned the dashboard repository. Select a JSON file to import:
![Grafana dashboard import](img/grafana_dashboard_import.png)
1. After the dashboard is imported, click the **Save dashboard** icon in the top bar:
![Grafana save icon](img/grafana_save_icon.png)
NOTE: **Note:**
If you don't save the dashboard after importing it, the dashboard is removed
when you navigate away from the page.
Repeat this process for each dashboard you wish to import.
Alternatively, you can import all the dashboards into your Grafana
instance. For more information about this process, see the
[README of the Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards)
repository.
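
Either way, fetching the dashboard definitions is a single clone. A sketch
(the repository layout may differ):

```shell
# Fetch the published dashboard definitions; each JSON file can then be
# imported through the Grafana UI.
git clone https://gitlab.com/gitlab-org/grafana-dashboards.git
find grafana-dashboards -name '*.json'
```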
## Integration with GitLab UI
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/61005) in GitLab 12.1.
After setting up Grafana, you can enable a link to access it easily from the
GitLab sidebar:
1. Navigate to the **{admin}** **Admin Area > Settings > Metrics and profiling**.
1. Expand **Metrics - Grafana**.
1. Check the "Enable access to Grafana" checkbox.
1. If Grafana is enabled through Omnibus GitLab and on the same server,
leave **Grafana URL** unchanged. It should be `/-/grafana`.
In any other case, enter the full URL of the Grafana instance.
1. Check the **Enable access to Grafana** checkbox.
1. Configure the **Grafana URL**:
- *If Grafana is enabled through Omnibus GitLab and on the same server,*
leave **Grafana URL** unchanged. It should be `/-/grafana`.
- *Otherwise,* enter the full URL of the Grafana instance.
1. Click **Save changes**.
GitLab displays your link in the **{admin}** **Admin Area > Monitoring > Metrics Dashboard**.
## Security Update
Users running GitLab version 12.0 or later should immediately upgrade to one of the
following security releases due to a known vulnerability with the embedded Grafana dashboard:
- 12.0.6
- 12.1.6
After upgrading, the Grafana dashboard is disabled, and the location of your
existing Grafana data is changed from `/var/opt/gitlab/grafana/data/` to
`/var/opt/gitlab/grafana/data.bak.#{Date.today}/`.
To prevent the data from being relocated, you can run the following command prior to upgrading:
@@ -100,19 +105,23 @@ sudo mv /var/opt/gitlab/grafana/data.bak.xxxx/ /var/opt/gitlab/grafana/data/
However, you should **not** reinstate your old data _except_ under one of the following conditions:
1. If you're certain that you changed your default admin password when you enabled Grafana.
1. If you run GitLab in a private network, accessed only by trusted users, and your
Grafana login page has not been exposed to the internet.
If you require access to your old Grafana data but don't meet one of these criteria, you may consider:
1. Reinstating it temporarily.
1. [Exporting the dashboards](https://grafana.com/docs/grafana/latest/reference/export_import/#exporting-a-dashboard) you need.
1. Refreshing the data and [re-importing your dashboards](https://grafana.com/docs/grafana/latest/reference/export_import/#importing-a-dashboard).
DANGER: **Danger:**
These actions pose a temporary vulnerability while your old Grafana data is in use.
Deciding to take any of these actions should be weighed carefully with your need to access
existing data and dashboards.
For more information and further mitigation details, refer to our
[blog post on the security release](https://about.gitlab.com/releases/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
@@ -9,27 +9,25 @@ info: To determine the technical writer assigned to the Stage/Group associated w
>- Available since [Omnibus GitLab 8.17](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/1132).
>- Renamed from `GitLab monitor exporter` to `GitLab exporter` in [GitLab 12.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/16511).
The [GitLab exporter](https://gitlab.com/gitlab-org/gitlab-exporter) enables you to
measure various GitLab metrics pulled from Redis and the database in Omnibus GitLab
instances.
NOTE: **Note:**
For installations from source, you must install and configure it yourself.
To enable the GitLab exporter in an Omnibus GitLab instance:
1. [Enable Prometheus](index.md#configuring-prometheus).
1. Edit `/etc/gitlab/gitlab.rb`.
1. Add, or find and uncomment, the following line, making sure it's set to `true`:
```ruby
gitlab_exporter['enable'] = true
```
1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.
Prometheus automatically begins collecting performance data from
the GitLab exporter exposed at `localhost:9168`.
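
To verify the exporter is serving data, you can query it directly from the
GitLab host. A quick sanity check, assuming the default port (the exact metrics
path can vary by exporter version):

```shell
# Assumes the exporter listens on the default port 9168.
curl --silent "http://localhost:9168/metrics" | head
```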
@@ -14,18 +14,18 @@ To enable the GitLab Prometheus metrics:
1. [Restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart) for the changes to take effect.
NOTE: **Note:**
For installations from source, you must configure it yourself.
## Collecting the metrics
GitLab monitors its own internal service metrics, and makes them available at the
`/-/metrics` endpoint. Unlike other [Prometheus](https://prometheus.io) exporters, to access
the metrics, the client IP address must be [explicitly allowed](../ip_whitelist.md).
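
For example, a client on the allowlist can fetch the endpoint directly. A
sketch, using `gitlab.example.com` as a placeholder host:

```shell
# Succeeds only from a client IP address on the monitoring allowlist.
curl --silent "https://gitlab.example.com/-/metrics" | head
```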
For [Omnibus GitLab](https://docs.gitlab.com/omnibus/) and Chart installations,
these metrics are enabled and collected as of
[GitLab 9.4](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/1702).
For source installations, these metrics must be enabled
manually and collected by a Prometheus server.
To enable and view metrics from Sidekiq nodes, see [Sidekiq metrics](#sidekiq-metrics).
@@ -40,7 +40,7 @@ The following metrics are available:
| `gitlab_banzai_cacheless_render_real_duration_seconds` | Histogram | 9.4 | Duration of rendering Markdown into HTML when cached output does not exist | `controller`, `action` |
| `gitlab_cache_misses_total` | Counter | 10.2 | Cache read miss | `controller`, `action` |
| `gitlab_cache_operation_duration_seconds` | Histogram | 10.2 | Cache access time | |
| `gitlab_cache_operations_total` | Counter | 12.2 | Cache operations by controller or action | `controller`, `action`, `operation` |
| `gitlab_ci_pipeline_creation_duration_seconds` | Histogram | 13.0 | Time in seconds it takes to create a CI/CD pipeline | |
| `gitlab_ci_pipeline_size_builds` | Histogram | 13.1 | Total number of builds within a pipeline grouped by a pipeline source | `source` |
| `job_waiter_started_total` | Counter | 12.9 | Number of batches of jobs started where a web request is waiting for the jobs to complete | `worker` |
@@ -92,9 +92,9 @@ The following metrics are available:
| `gitlab_view_rendering_duration_seconds` | Histogram | 10.2 | Duration for views (histogram) | `controller`, `action`, `view` |
| `http_requests_total` | Counter | 9.4 | Rack request count | `method` |
| `http_request_duration_seconds` | Histogram | 9.4 | HTTP response time from rack middleware | `method`, `status` |
| `gitlab_transaction_db_count_total` | Counter | 13.1 | Counter for total number of SQL calls | `controller`, `action` |
| `gitlab_transaction_db_write_count_total` | Counter | 13.1 | Counter for total number of write SQL calls | `controller`, `action` |
| `gitlab_transaction_db_cached_count_total` | Counter | 13.1 | Counter for total number of cached SQL calls | `controller`, `action` |
| `http_redis_requests_duration_seconds` | Histogram | 13.1 | Redis requests duration during web transactions | `controller`, `action` |
| `http_redis_requests_total` | Counter | 13.1 | Redis requests count during web transactions | `controller`, `action` |
| `http_elasticsearch_requests_duration_seconds` **(STARTER)** | Histogram | 13.1 | Elasticsearch requests duration during web transactions | `controller`, `action` |
@@ -120,7 +120,7 @@ The following metrics can be controlled by feature flags:
## Sidekiq metrics
Sidekiq jobs may also gather metrics, and these metrics can be accessed if the
Sidekiq exporter is enabled: for example, using the `monitoring.sidekiq_exporter`
configuration option in `gitlab.yml`. These metrics are served from the
`/metrics` path on the configured port.
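
For a source installation, the relevant `gitlab.yml` stanza looks roughly like
this sketch (the port shown is a commonly documented default, not a requirement):

```yaml
# gitlab.yml -- sketch of the Sidekiq exporter settings.
monitoring:
  sidekiq_exporter:
    enabled: true
    address: localhost
    port: 8082
```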
@@ -187,16 +187,16 @@ The following metrics are available:
## Connection pool metrics
These metrics record the status of the database
[connection pools](https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html),
and the metrics all have these labels:
- `class` - the Ruby class being recorded.
- `ActiveRecord::Base` is the main database connection.
- `Geo::TrackingBase` is the connection to the Geo tracking database, if
enabled.
- `host` - the host name used to connect to the database.
- `port` - the port used to connect to the database.
| Metric | Type | Since | Description |
|:----------------------------------------------|:------|:------|:--------------------------------------------------|
@@ -256,10 +256,10 @@ When Puma is used instead of Unicorn, the following metrics are available:
GitLab's Prometheus client requires a directory to store metrics data shared between multi-process services.
These files are shared among all instances running under the Unicorn server.
The directory must be accessible to all running Unicorn processes, or
metrics can't function correctly.
The location of this directory is configured using the environment variable `prometheus_multiproc_dir`.
For best performance, create this directory in `tmpfs`.
If GitLab is installed using [Omnibus GitLab](https://docs.gitlab.com/omnibus/)
and `tmpfs` is available, then GitLab configures the metrics directory for you.
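
For a source installation, providing a `tmpfs`-backed directory might look like
the following sketch (the path is illustrative):

```shell
# /run is typically tmpfs on modern Linux; the exact path is a choice,
# not a requirement.
sudo mkdir -p /run/gitlab/prometheus-metrics
export prometheus_multiproc_dir=/run/gitlab/prometheus-metrics
```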