Commit fd75d74a authored by Evan Read's avatar Evan Read

Merge branch 'docs-aqualls-admin-monitor-scrub' into 'master'

Docs aqualls admin monitor scrub

See merge request gitlab-org/gitlab!47038
parents 57116cc0 c83c6415
@@ -14,8 +14,8 @@ Find more about them [in Audit Events documentation](audit_events.md).
System log files are typically plain text in a standard log file format.
This guide talks about how to read and use these system log files.
[Read more about how to customise logging on Omnibus GitLab
installations](https://docs.gitlab.com/omnibus/settings/logs.html)
Read more about how to
[customize logging on Omnibus GitLab installations](https://docs.gitlab.com/omnibus/settings/logs.html)
including adjusting log retention, log forwarding,
switching logs from JSON to plain text logging, and more.
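The linked page covers the available options in detail. As a minimal sketch, assuming the logrotate settings documented there, adjusting retention on an Omnibus installation involves editing `/etc/gitlab/gitlab.rb` and reconfiguring:

```shell
# Example only: after adding logrotate settings such as the following to
# /etc/gitlab/gitlab.rb (setting names per the Omnibus logs documentation;
# verify them against that page before use):
#
#   logging['logrotate_frequency'] = "daily"
#   logging['logrotate_rotate'] = 30
#
# apply the change with:
sudo gitlab-ctl reconfigure
```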
@@ -87,7 +87,7 @@ In addition, the log contains the originating IP address,
(`remote_ip`), the user's ID (`user_id`), and username (`username`).
Some endpoints such as `/search` may make requests to Elasticsearch if using
[Advanced Search](../user/search/advanced_global_search.md). These will
[Advanced Search](../user/search/advanced_global_search.md). These
additionally log `elasticsearch_calls` and `elasticsearch_call_duration_s`,
which correspond to:
@@ -367,9 +367,9 @@ After GitLab version 12.2, this file was renamed from `githost.log` to
`git_json.log` and stored in JSON format.
GitLab has to interact with Git repositories, but in some rare cases
something can go wrong, and in this case you may need know what exactly
something can go wrong. If this happens, you need to know exactly what
happened. This log file contains all failed requests from GitLab to Git
repositories. In the majority of cases this file will be useful for developers
repositories. In the majority of cases this file is useful for developers
only. For example:
```json
@@ -482,7 +482,7 @@ This file contains logging information about jobs before Sidekiq starts
processing them, such as before being enqueued.
This log file follows the same structure as
[`sidekiq.log`](#sidekiqlog), so it will be structured as JSON if
[`sidekiq.log`](#sidekiqlog), so it is structured as JSON if
you've configured this for Sidekiq as mentioned above.
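With JSON formatting enabled, individual entries can be inspected with standard tools. A minimal sketch, assuming this is the Sidekiq client log at its default Omnibus location (adjust the path for your installation):

```shell
# Pretty-print the five most recent entries (requires jq; the path is an assumption).
sudo tail -n 5 /var/log/gitlab/gitlab-rails/sidekiq_client.log | jq .
```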
## `gitlab-shell.log`
@@ -605,7 +605,7 @@ installations from source.
## Unicorn Logs
Starting with GitLab 13.0, Puma is the default web server used in GitLab
all-in-one package based installations as well as GitLab Helm chart deployments.
all-in-one package-based installations and GitLab Helm chart deployments.
### `unicorn_stdout.log`
@@ -726,21 +726,26 @@ was initiated, such as `1509705644.log`
## `sidekiq_exporter.log` and `web_exporter.log`
If Prometheus metrics and the Sidekiq Exporter are both enabled, Sidekiq
will start a Web server and listen to the defined port (default:
starts a Web server and listens to the defined port (default:
`8082`). By default, Sidekiq Exporter access logs are disabled but can
be enabled via the `sidekiq['exporter_log_enabled'] = true` option in `/etc/gitlab/gitlab.rb`
for Omnibus installations, or via the `sidekiq_exporter.log_enabled` option
in `gitlab.yml` for installations from source. When enabled,
access logs will be generated in
be enabled:
- For Omnibus GitLab installations, using the `sidekiq['exporter_log_enabled'] = true`
option in `/etc/gitlab/gitlab.rb`.
- For installations from source, using the `sidekiq_exporter.log_enabled` option
in `gitlab.yml`.
When enabled, access logs are generated in
`/var/log/gitlab/gitlab-rails/sidekiq_exporter.log` for Omnibus GitLab
packages or in `/home/git/gitlab/log/sidekiq_exporter.log` for
installations from source.
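To confirm the exporter is responding and that access logging is active, a quick check might look like the following, assuming the default port and an Omnibus installation:

```shell
# Query the Sidekiq Exporter on its assumed default port and endpoint,
# then watch the access log (only written when exporter logging is enabled).
curl -s http://localhost:8082/metrics | head
sudo tail -f /var/log/gitlab/gitlab-rails/sidekiq_exporter.log
```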
If Prometheus metrics and the Web Exporter are both enabled, Puma/Unicorn will
start a Web server and listen to the defined port (default: `8083`). Access logs
will be generated in `/var/log/gitlab/gitlab-rails/web_exporter.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/web_exporter.log` for
installations from source.
If Prometheus metrics and the Web Exporter are both enabled, Puma/Unicorn
starts a Web server and listens to the defined port (default: `8083`), and access logs
are generated:
- For Omnibus GitLab packages, in `/var/log/gitlab/gitlab-rails/web_exporter.log`.
- For installations from source, in `/home/git/gitlab/log/web_exporter.log`.
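A similar spot check for the Web Exporter, under the same assumptions about the default port and endpoint:

```shell
# Query the Web Exporter on its assumed default port.
curl -s http://localhost:8083/metrics | head
```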
## `database_load_balancing.log` **(PREMIUM ONLY)**
@@ -756,13 +761,11 @@ It's stored at:
> Introduced in GitLab 12.6.
This file lives in
`/var/log/gitlab/gitlab-rails/elasticsearch.log` for Omnibus GitLab
packages or in `/home/git/gitlab/log/elasticsearch.log` for installations
from source.
This file logs information related to the Elasticsearch Integration, including
errors during indexing or searching Elasticsearch. It's stored at:
It logs information related to the Elasticsearch Integration, including
errors during indexing or searching Elasticsearch.
- `/var/log/gitlab/gitlab-rails/elasticsearch.log` for Omnibus GitLab packages.
- `/home/git/gitlab/log/elasticsearch.log` for installations from source.
Each line contains a JSON line that can be ingested by services like Elasticsearch and Splunk.
Line breaks have been added to the following example line for clarity:
@@ -785,12 +788,12 @@ Line breaks have been added to the following example line for clarity:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17819) in GitLab 12.6.
This file lives in
`/var/log/gitlab/gitlab-rails/exceptions_json.log` for Omnibus GitLab
packages or in `/home/git/gitlab/log/exceptions_json.log` for installations
from source.
This file logs the information about exceptions being tracked by
`Gitlab::ErrorTracking`, which provides a standard and consistent way of
[processing rescued exceptions](https://gitlab.com/gitlab-org/gitlab/blob/master/doc/development/logging.md#exception-handling). This file is stored in:
It logs the information about exceptions being tracked by `Gitlab::ErrorTracking`, which provides a standard and consistent way of [processing rescued exceptions](https://gitlab.com/gitlab-org/gitlab/blob/master/doc/development/logging.md#exception-handling).
- `/var/log/gitlab/gitlab-rails/exceptions_json.log` for Omnibus GitLab packages.
- `/home/git/gitlab/log/exceptions_json.log` for installations from source.
Each line contains a JSON line that can be ingested by Elasticsearch. For example:
@@ -820,7 +823,7 @@ Omnibus GitLab packages or in `/home/git/gitlab/log/service_measurement.log` for
installations from source.
It contains only a single structured log with measurements for each service execution.
It will contain measurements such as the number of SQL calls, `execution_time`, `gc_stats`, and `memory usage`.
It contains measurements such as the number of SQL calls, `execution_time`, `gc_stats`, and `memory usage`.
For example:
@@ -945,7 +948,7 @@ For Omnibus GitLab installations, Alertmanager logs reside in `/var/log/gitlab/a
## Crond Logs
For Omnibus GitLab installations, crond logs reside in `/var/log/gitlab/crond/`.
For Omnibus GitLab installations, `crond` logs reside in `/var/log/gitlab/crond/`.
## Grafana Logs
@@ -973,7 +976,7 @@ from a GitLab instance.
If the bug or error is readily reproducible, save the main GitLab logs
[to a file](troubleshooting/linux_cheat_sheet.md#files--dirs) while reproducing the
problem once or more times:
problem a few times:
```shell
sudo gitlab-ctl tail | tee /tmp/<case-ID-and-keywords>.log
@@ -986,7 +989,7 @@ Conclude the log gathering with <kbd>Ctrl</kbd> + <kbd>C</kbd>.
If performance degradations or cascading errors occur that can't readily be attributed to one
of the previously listed GitLab components, [GitLabSOS](https://gitlab.com/gitlab-com/support/toolbox/gitlabsos/)
can provide a perspective spanning all of Omnibus GitLab. For more details and instructions
to run it, see [the GitLabSOS documentation](https://gitlab.com/gitlab-com/support/toolbox/gitlabsos/#gitlabsos).
to run it, read [the GitLabSOS documentation](https://gitlab.com/gitlab-com/support/toolbox/gitlabsos/#gitlabsos).
NOTE: **Note:**
GitLab Support likes to use this custom-made tool.
@@ -995,5 +998,5 @@ GitLab Support likes to use this custom-made tool.
[Fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats) is a tool
for creating and comparing performance statistics from GitLab logs.
For more details and instructions to run it, see
[read the documentation for fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats#usage).
For more details and instructions to run it, read the
[documentation for fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats#usage).
@@ -17,11 +17,10 @@ monitor the health and progress of the importer.
|------------------------------------------|-----------|
| `github_importer_total_duration_seconds` | histogram |
This metric tracks the total time spent (in seconds) importing a project (from
This metric tracks the total time, in seconds, spent importing a project (from
project creation until the import process finishes), for every imported project.
The name of the project is stored in the `project` label in the format
`namespace/name` (e.g. `gitlab-org/gitlab`).
`namespace/name` (such as `gitlab-org/gitlab`).
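For example, assuming these metrics are exposed through the Sidekiq metrics endpoint on its default port (`8082`), you can check that the histogram is being exported with:

```shell
# Look for the importer duration histogram in the exported metrics
# (port and endpoint are assumptions; adjust for your configuration).
curl -s http://localhost:8082/metrics | grep github_importer_total_duration_seconds
```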
## Number of imported projects
@@ -59,7 +58,7 @@ projects. This metric does not expose any labels.
This metric tracks the number of imported issues across all projects.
The name of the project is stored in the `project` label in the format
`namespace/name` (e.g. `gitlab-org/gitlab`).
`namespace/name` (such as `gitlab-org/gitlab`).
## Number of imported pull requests
@@ -70,7 +69,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported pull requests across all projects.
The name of the project is stored in the `project` label in the format
`namespace/name` (e.g. `gitlab-org/gitlab`).
`namespace/name` (such as `gitlab-org/gitlab`).
## Number of imported comments
@@ -81,7 +80,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported comments across all projects.
The name of the project is stored in the `project` label in the format
`namespace/name` (e.g. `gitlab-org/gitlab`).
`namespace/name` (such as `gitlab-org/gitlab`).
## Number of imported pull request review comments
@@ -92,7 +91,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported comments across all projects.
The name of the project is stored in the `project` label in the format
`namespace/name` (e.g. `gitlab-org/gitlab`).
`namespace/name` (such as `gitlab-org/gitlab`).
## Number of imported repositories
@@ -7,21 +7,17 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Configuration
GitLab Performance Monitoring is disabled by default. To enable it and change any of its
settings, navigate to **Admin Area > Settings > Metrics and profiling**
(`/admin/application_settings/metrics_and_profiling`).
settings:
![GitLab Performance Monitoring Admin Settings](img/metrics_gitlab_configuration_settings.png)
1. Navigate to **Admin Area > Settings > Metrics and profiling**
(`/admin/application_settings/metrics_and_profiling`):
Finally, a restart of all GitLab processes is required for the changes to take
effect:
![GitLab Performance Monitoring Administration Settings](img/metrics_gitlab_configuration_settings.png)
```shell
# For Omnibus installations
sudo gitlab-ctl restart
1. You must restart all GitLab processes for the changes to take effect:
# For installations from source
sudo service gitlab restart
```
- For Omnibus GitLab installations: `sudo gitlab-ctl restart`
- For installations from source: `sudo service gitlab restart`
## Pending Migrations
@@ -7,7 +7,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Performance Monitoring
GitLab comes with its own application performance measuring system as of GitLab
8.4, simply called "GitLab Performance Monitoring". GitLab Performance Monitoring is available in both the
8.4, called "GitLab Performance Monitoring". GitLab Performance Monitoring is available in both the
Community and Enterprise editions.
Apart from this introduction, you are advised to read through the following
@@ -42,7 +42,7 @@ Two types of metrics are collected:
Transaction metrics are metrics that can be associated with a single
transaction. This includes statistics such as the transaction duration, timings
of any executed SQL queries, time spent rendering HAML views, etc. These metrics
of any executed SQL queries, time spent rendering HAML views, and so on. These metrics
are collected for every Rack request and Sidekiq job processed.
### Sampled Metrics
@@ -59,5 +59,5 @@ parts:
The actual interval can be anywhere between a half of the defined interval and a
half above the interval. For example, for a user defined interval of 15 seconds
the actual interval can be anywhere between 7.5 and 22.5. The interval is
re-generated for every sampling run instead of being generated once and re-used
re-generated for every sampling run instead of being generated one time and reused
for the duration of the process' lifetime.
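To make the arithmetic concrete, the following illustrative snippet (not GitLab's actual implementation) picks a value between half the configured interval and half above it:

```shell
# Illustrative only: for a configured interval of 15 seconds, print a value
# uniformly distributed between 7.5 and 22.5 seconds.
interval=15
awk -v i="$interval" 'BEGIN { srand(); printf "%.1f\n", i / 2 + rand() * i }'
```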