Commit fefbb917 authored by Evan Read

Merge branch 'docs-aqualls-20200306.2-spelling' into 'master'

Docs: fix spelling errors

See merge request gitlab-org/gitlab!26709
parents d899c283 a7303803
......@@ -13,7 +13,7 @@ This guide talks about how to read and use these system log files.
This file lives in `/var/log/gitlab/gitlab-rails/production_json.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/production_json.log` for
installations from source. When GitLab is running in an environment
other than production, the corresponding logfile is shown here.
other than production, the corresponding log file is shown here.
It contains a structured log for Rails controller requests received from
GitLab, thanks to [Lograge](https://github.com/roidrage/lograge/). Note that
......@@ -105,7 +105,7 @@ NOTE: **Note:** Starting with GitLab 12.5, if an error occurs, an
This file lives in `/var/log/gitlab/gitlab-rails/production.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/production.log` for
installations from source. (When GitLab is running in an environment
other than production, the corresponding logfile is shown here.)
other than production, the corresponding log file is shown here.)
It contains information about all performed requests. You can see the
URL and type of request, IP address, and what parts of code were
......
......@@ -39,8 +39,8 @@ Test Connection to ensure the configuration is correct.
- **Name**: `InfluxDB`
- **Default**: Checked
- **Type**: `InfluxDB 0.9.x` (Even if you're using InfluxDB 0.10.x)
- **Url**: `https://localhost:8086` (Or the remote URL if you've installed InfluxDB
on a separate server)
- For the URL, use `https://localhost:8086`, or provide the remote URL if you've installed InfluxDB
on a separate server
- **Access**: `proxy`
- **Database**: `gitlab`
- **User**: `admin` (Or the username configured when setting up InfluxDB)
......@@ -52,7 +52,7 @@ Test Connection to ensure the configuration is correct.
If you intend to import the GitLab provided Grafana dashboards, you will need to
set up the right retention policies and continuous queries. The easiest way of
doing this is by using the [influxdb-management](https://gitlab.com/gitlab-org/influxdb-management)
doing this is by using the [InfluxDB Management](https://gitlab.com/gitlab-org/influxdb-management)
repository.
To use this repository, you must first clone it:
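A minimal sketch of that clone step (the repository URL comes from the link above; the destination directory is up to you):

```shell
# Clone the influxdb-management repository and move into it:
git clone https://gitlab.com/gitlab-org/influxdb-management.git
cd influxdb-management
```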
......@@ -74,7 +74,7 @@ and then editing the `.env` file to contain the correct InfluxDB settings. Once
configured, you can simply run `bundle exec rake` and the InfluxDB database will
be configured for you.
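Sketched end to end, and assuming the `.env` keys follow that repository's README (not reproduced here):

```shell
# Install the gems, point .env at your InfluxDB instance, then create the
# retention policies and continuous queries:
bundle install
"${EDITOR:-vi}" .env
bundle exec rake
```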
For more information see the [influxdb-management README](https://gitlab.com/gitlab-org/influxdb-management/blob/master/README.md).
For more information see the [InfluxDB Management README](https://gitlab.com/gitlab-org/influxdb-management/blob/master/README.md).
## Import Dashboards
......
......@@ -33,7 +33,7 @@ page was open. Only the first two requests per unique URL are captured.
## Request warnings
For requests exceeding pre-defined limits, a warning icon will be shown
For requests exceeding predefined limits, a warning icon will be shown
next to the failing metric, along with an explanation. In this example,
the Gitaly call duration exceeded the threshold:
......
......@@ -201,7 +201,7 @@ When Puma is used instead of Unicorn, the following metrics are available:
| `puma_running_workers` | Gauge | 12.0 | Number of booted workers |
| `puma_stale_workers` | Gauge | 12.0 | Number of old workers |
| `puma_running` | Gauge | 12.0 | Number of running threads |
| `puma_queued_connections` | Gauge | 12.0 | Number of connections in that worker's "todo" set waiting for a worker thread |
| `puma_queued_connections` | Gauge | 12.0 | Number of connections in that worker's "to do" set waiting for a worker thread |
| `puma_active_connections` | Gauge | 12.0 | Number of threads processing a request |
| `puma_pool_capacity` | Gauge | 12.0 | Number of requests the worker is capable of taking right now |
| `puma_max_threads` | Gauge | 12.0 | Maximum number of worker threads |
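As an illustration only (the endpoint path and its access restrictions are assumptions about the monitoring setup, not taken from this table), the Puma gauges can be spot-checked by scraping the Rails metrics endpoint:

```shell
# Hypothetical spot check; gitlab.example.com is a placeholder, and the /-/metrics
# endpoint is normally limited to monitoring or allowlisted IPs.
curl -s "https://gitlab.example.com/-/metrics" | grep '^puma_'
```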
......
......@@ -22,8 +22,8 @@ To enable the PgBouncer exporter:
Prometheus will now automatically begin collecting performance data from
the PgBouncer exporter exposed under `localhost:9188`.
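A quick way to confirm the exporter is answering on that port (the `/metrics` path is the usual exporter convention, assumed here rather than quoted from this page):

```shell
# Verify the PgBouncer exporter is serving metrics locally:
curl -s "http://localhost:9188/metrics" | head
```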
The PgBouncer exporter will also be enabled by default if the [pgbouncer_role][postgres roles]
is enabled.
The PgBouncer exporter will also be enabled by default if the [`pgbouncer_role`][postgres roles]
role is enabled.
[← Back to the main Prometheus page](index.md)
......
......@@ -98,7 +98,7 @@ This is a brief overview. Please refer to the above instructions for more contex
1. [Rebuild the `authorized_keys` file](../raketasks/maintenance.md#rebuild-authorized_keys-file)
1. Enable writes to the `authorized_keys` file in Application Settings
1. Remove the `AuthorizedKeysCommand` lines from `/etc/ssh/sshd_config` or from `/assets/sshd_config` if you are using Omnibus Docker.
1. Reload sshd: `sudo service sshd reload`
1. Reload `sshd`: `sudo service sshd reload`
1. Remove the `/opt/gitlab-shell/authorized_keys` file (see the sketch after this list)
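A combined sketch of the last three steps, assuming a non-Docker Omnibus host (using `sed` to strip the lines is my choice of method, not quoted from this page):

```shell
# Remove the AuthorizedKeysCommand lines, reload sshd, then delete the old file:
sudo sed -i '/AuthorizedKeysCommand/d' /etc/ssh/sshd_config
sudo service sshd reload
sudo rm /opt/gitlab-shell/authorized_keys
```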
## Compiling a custom version of OpenSSH for CentOS 6
......@@ -184,7 +184,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
You should see a line that reads: "debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5"
If not, you may need to restart sshd (e.g. `systemctl restart sshd.service`).
If not, you may need to restart `sshd` (e.g. `systemctl restart sshd.service`).
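One way to check for that line without opening an interactive shell (the hostname is a placeholder):

```shell
# Print the server's advertised SSH banner; expect OpenSSH_7.5 after the upgrade.
ssh -v git@your-gitlab-server exit 2>&1 | grep 'remote software version'
```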
1. *IMPORTANT!* Open a new SSH session to your server before exiting to make
sure everything is working! If you need to downgrade, simply install the
......
......@@ -31,7 +31,7 @@ If you want to see progress, replace `-xf` with `-xvf`.
### Tar pipe to another server
You can also use a tar pipe to copy data to another server. If your
`git` user has SSH access to the newserver as `git@newserver`, you
`git` user has SSH access to the new server as `git@newserver`, you
can pipe the data through SSH.
```shell
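# A hedged sketch only: the exact command from the original document is elided in this diff.
# Assumes Omnibus repository storage under /var/opt/gitlab/git-data; adjust both paths to your setup.
sudo -u git sh -c 'tar -C /var/opt/gitlab/git-data -cf - repositories | ssh git@newserver tar -C /var/opt/gitlab/git-data -xf -'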
......
......@@ -33,7 +33,7 @@ uploading user SSH keys to GitLab entirely.
How to fully set up SSH certificates is outside the scope of this
document. See [OpenSSH's
PROTOCOL.certkeys](https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD)
`PROTOCOL.certkeys`](https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD)
for how it works, and e.g. [RedHat's documentation about
it](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_openssh_certificate_authentication).
......
......@@ -57,7 +57,7 @@ local location or even use object storage.
The packages for Omnibus GitLab installations are stored under
`/var/opt/gitlab/gitlab-rails/shared/packages/` and for source
installations under `shared/packages/` (relative to the Git homedir).
installations under `shared/packages/` (relative to the Git home directory).
To change the local storage path:
**Omnibus GitLab installations**
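A hedged sketch for Omnibus (the setting name and example path are assumptions, not quoted from this page); the change goes into `/etc/gitlab/gitlab.rb` and is applied with a reconfigure:

```shell
# Hypothetical: point local package storage at a different path, then apply the change.
echo "gitlab_rails['packages_storage_path'] = '/mnt/gitlab/packages'" | sudo tee -a /etc/gitlab/gitlab.rb
sudo gitlab-ctl reconfigure
```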
......
......@@ -38,7 +38,7 @@ which you can set it up:
the Pages daemon is installed, so you will have to share it via network.
- Run the Pages daemon in the same server as GitLab, listening on the same IP
but on different ports. In that case, you will have to proxy the traffic with
a loadbalancer. If you choose that route note that you should use TCP load
a load balancer. If you choose that route note that you should use TCP load
balancing for HTTPS. If you use TLS-termination (HTTPS-load balancing) the
pages will not be able to be served with user provided certificates. For
HTTP it's OK to use HTTP or TCP load balancing.
......@@ -256,7 +256,7 @@ GitLab supports [custom domain verification](../../user/project/pages/custom_dom
When adding a custom domain, users will be required to prove they own it by
adding a GitLab-controlled verification code to the DNS records for that domain.
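As an illustration only (the record name and domain below are placeholders, not taken from this page), you can confirm the verification record is visible in DNS before retrying verification:

```shell
# Hypothetical check: substitute the TXT record name and code shown on the
# domain's Pages settings page.
dig +short TXT _gitlab-pages-verification-code.example.com
```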
If your userbase is private or otherwise trusted, you can disable the
If your user base is private or otherwise trusted, you can disable the
verification requirement. Navigate to **Admin Area > Settings > Preferences** and
uncheck **Require users to prove ownership of custom domains** in the **Pages** section.
This setting is enabled by default.
......@@ -358,7 +358,7 @@ For Omnibus, normally this would be fixed by [installing a custom CA in GitLab O
but a [bug](https://gitlab.com/gitlab-org/gitlab/issues/25411) is currently preventing
that method from working. Use the following workaround:
1. Append your GitLab server TLS/SSL certficate to `/opt/gitlab/embedded/ssl/certs/cacert.pem` where `gitlab-domain-example.com` is your GitLab application URL
1. Append your GitLab server TLS/SSL certificate to `/opt/gitlab/embedded/ssl/certs/cacert.pem` where `gitlab-domain-example.com` is your GitLab application URL
```shell
printf "\ngitlab-domain-example.com\n===========================\n" | sudo tee --append /opt/gitlab/embedded/ssl/certs/cacert.pem
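# Hypothetical follow-up (the original document's next command is elided here): append the
# certificate itself, assuming it is saved locally as gitlab-domain-example.com.crt.
cat gitlab-domain-example.com.crt | sudo tee --append /opt/gitlab/embedded/ssl/certs/cacert.pem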
......@@ -582,7 +582,7 @@ but commented out to help encourage others to add to it in the future. -->
### `open /etc/ssl/ca-bundle.pem: permission denied`
GitLab Pages runs inside a `chroot` jail, usually in a uniquely numbered directory like
GitLab Pages runs inside a chroot jail, usually in a uniquely numbered directory like
`/tmp/gitlab-pages-*`.
Within the jail, a bundle of trusted certificates is
......@@ -592,7 +592,7 @@ from `/opt/gitlab/embedded/ssl/certs/cacert.pem`
as part of starting up Pages.
If the permissions on the source file are incorrect (they should be `0644`) then
the file inside the `chroot` jail will also be wrong.
the file inside the chroot jail will also be wrong.
Pages will log errors in `/var/log/gitlab/gitlab-pages/current` like:
......@@ -601,7 +601,7 @@ x509: failed to load system roots and no roots provided
open /etc/ssl/ca-bundle.pem: permission denied
```
The use of a `chroot` jail makes this error misleading, as it is not
The use of a chroot jail makes this error misleading, as it is not
referring to `/etc/ssl` on the root filesystem.
The fix is to correct the source file permissions and restart Pages:
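A minimal sketch of that fix, assuming an Omnibus install (the restart command is an assumption, not quoted from this page):

```shell
# Correct the source file permissions (0644, as stated above), then restart Pages:
sudo chmod 0644 /opt/gitlab/embedded/ssl/certs/cacert.pem
sudo gitlab-ctl restart gitlab-pages
```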
......
......@@ -35,7 +35,7 @@ which you can set it up:
the Pages daemon is installed, so you will have to share it via network.
1. Run the Pages daemon in the same server as GitLab, listening on the same IP
but on different ports. In that case, you will have to proxy the traffic with
a loadbalancer. If you choose that route note that you should use TCP load
a load balancer. If you choose that route note that you should use TCP load
balancing for HTTPS. If you use TLS-termination (HTTPS-load balancing) the
pages will not be able to be served with user provided certificates. For
HTTP it's OK to use HTTP or TCP load balancing.
......@@ -51,7 +51,7 @@ Before proceeding with the Pages configuration, make sure that:
this document we assume that to be `example.io`.
1. You have configured a **wildcard DNS record** for that domain.
1. You have installed the `zip` and `unzip` packages in the same server that
GitLab is installed since they are needed to compress/uncompress the
GitLab is installed since they are needed to compress and decompress the
Pages artifacts.
1. (Optional) You have a **wildcard certificate** for the Pages domain if you
decide to serve Pages (`*.example.io`) under HTTPS.
......@@ -388,7 +388,7 @@ Each request to view a resource in a private site is authenticated by Pages
using that token. For each request it receives, it makes a request to the GitLab
API to check that the user is authorized to read that site.
From [GitLab 12.8](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/3689) onwards,
From [GitLab 12.8](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/3689) onward,
Access Control parameters for Pages are set in a configuration file, which
by convention is named `gitlab-pages-config`. The configuration file is passed to
Pages using the `-config` flag or the `CONFIG` environment variable.
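For illustration only (the binary location, config path, and invocation are assumptions about an Omnibus layout, not taken from this page), the flag form might look like:

```shell
# Hypothetical invocation passing the configuration file to the Pages daemon:
/opt/gitlab/embedded/bin/gitlab-pages -config=/var/opt/gitlab/gitlab-pages/gitlab-pages-config
```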
......
......@@ -129,7 +129,7 @@ Done!
## LDAP Check
The LDAP check Rake task will test the bind_dn and password credentials
The LDAP check Rake task will test the bind DN and password credentials
(if configured) and will list a sample of LDAP users. This task is also
executed as part of the `gitlab:check` task, but can run independently.
See [LDAP Rake Tasks - LDAP Check](ldap.md#check) for details.
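For example, on Omnibus installations the standalone run is typically:

```shell
# Run the LDAP check on its own; it also runs as part of gitlab:check.
sudo gitlab-rake gitlab:ldap:check
```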
......
......@@ -184,13 +184,13 @@ Courier, which we will install later to add IMAP authentication, requires mailbo
imapd start
```
1. The courier-authdaemon isn't started after installation. Without it, imap authentication will fail:
1. The `courier-authdaemon` isn't started after installation. Without it, IMAP authentication will fail:
```shell
sudo service courier-authdaemon start
```
You can also configure courier-authdaemon to start on boot:
You can also configure `courier-authdaemon` to start on boot:
```shell
sudo systemctl enable courier-authdaemon
......
......@@ -118,7 +118,7 @@ downtime. Otherwise skip to the next section.
```
1. This forces the process to generate a Ruby backtrace. Check
`/var/log/gitlab/unicorn/unicorn_stderr.log` for the backtace. For example, you may see:
`/var/log/gitlab/unicorn/unicorn_stderr.log` for the backtrace. For example, you may see:
```plaintext
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
......
......@@ -19,7 +19,7 @@ running on.
## strace-parser
[strace-parser](https://gitlab.com/wchandler/strace-parser) is a small tool to analyze
and summarize raw strace data.
and summarize raw `strace` data.
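For context, the raw data is captured with `strace` itself; a minimal capture sketch (the PID and output path are placeholders):

```shell
# Capture raw strace data to feed into strace-parser; replace <pid> with the target process ID.
sudo strace -f -tt -T -o /tmp/gitlab-strace.txt -p <pid>
```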
## Pritaly
......
......@@ -8,7 +8,7 @@ Troubleshooting Elasticsearch requires:
## Common terminology
- **Lucene**: A full-text search library written in Java.
- **Near Realtime (NRT)**: Refers to the slight latency from the time to index a
- **Near real time (NRT)**: Refers to the slight latency from the time to index a
document to the time when it becomes searchable.
- **Cluster**: A collection of one or more nodes that work together to hold all
the data, providing indexing and search capabilities.
......@@ -271,7 +271,7 @@ Generally speaking, ensure:
- The Elasticsearch server has enough RAM and CPU cores.
- That sharding **is** being used.
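A quick way to sanity-check both points (the host and port assume a default local Elasticsearch and are not taken from this page):

```shell
# Hypothetical diagnostics against a default local Elasticsearch instance:
curl "http://localhost:9200/_cluster/health?pretty"   # overall health and node count
curl "http://localhost:9200/_cat/shards?v"            # shows how shards are allocated
```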
Going into some more detail here, if Elasticsearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, Elasticsearch, which requires ample resources, should be running on its own server (maybe coupled with logstash and kibana).
Going into some more detail here, if Elasticsearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, Elasticsearch, which requires ample resources, should be running on its own server (maybe coupled with Logstash and Kibana).
When it comes to Elasticsearch, RAM is the key resource. Elasticsearch themselves recommend:
......