Commit d84052f7 authored by Amy Qualls, committed by Evan Read

Add more spelling exceptions to the file

More words that shouldn't be tracked by the spell checker.
parent 8e1a207f
......@@ -61,6 +61,7 @@ burndown
cacheable
CAS
CentOS
chai
Chatops
checksummed
checksumming
......@@ -86,6 +87,7 @@ crosslinking
crosslinks
Crossplane
CrowdIn
Dangerfile
datetime
Debian
deduplicate
......@@ -96,20 +98,26 @@ deduplication
denylist
denylisting
denylists
deployer
deployers
deprovision
deprovisioned
deprovisioning
deprovisions
DevOps
discoverability
Disqus
Dockerfile
Dockerfiles
dogfood
dogfoods
dogfooding
dotenv
downvoted
downvotes
Dpl
Dreamweaver
Ecto
Elasticsearch
enablement
enqueued
......@@ -128,6 +136,7 @@ Flawfinder
Flowdock
Fluentd
Forgerock
Fugit
Gantt
Gemnasium
gettext
......@@ -178,6 +187,7 @@ jasmine-jquery
JavaScript
Jaeger
Jenkins
Jenkinsfile
Jira
jQuery
jsdom
......@@ -195,6 +205,7 @@ Kubesec
Laravel
LDAP
ldapsearch
Leiningen
Libravatar
Lograge
Logstash
......@@ -220,6 +231,7 @@ mergeable
Microsoft
middleware
middlewares
migratus
Minikube
MinIO
mitmproxy
......@@ -235,6 +247,8 @@ mixins
mockup
mockups
ModSecurity
monorepo
monorepos
mutex
nameserver
nameservers
......@@ -260,6 +274,8 @@ parallelization
parallelizations
passwordless
performant
phaser
phasers
Pipfile
Pipfiles
Piwik
......@@ -276,6 +292,9 @@ prefill
prefilled
prefilling
prefills
preload
preloading
preloads
prepend
prepended
prepends
......@@ -302,6 +321,7 @@ Redcarpet
Redis
Redmine
reCAPTCHA
refactorings
referer
referers
reindex
......@@ -490,10 +510,17 @@ webpack
webserver
whitepaper
whitepapers
wireframe
wireframes
wireframed
wireframing
Wireshark
Wordpress
worktree
worktrees
Xcode
Xeon
YouTrack
Zeitwerk
Zendesk
zsh
......@@ -356,7 +356,7 @@ test:
### Run your CI/CD pipeline
That's it! Add all your new files, commit, and push. For a reference of what our repo should
That's it! Add all your new files, commit, and push. For a reference of what our repository should
look like at this point, please refer to the [final commit related to this article on my sample repository](https://gitlab.com/blitzgren/gitlab-game-demo/commit/8b36ef0ecebcf569aeb251be4ee13743337fcfe2).
By applying both build and test stages, GitLab will run them sequentially at every push to
our repository. If all goes well you'll end up with a green check mark on each job for the pipeline:
......@@ -422,15 +422,15 @@ fully understand [IAM Best Practices in AWS](https://docs.aws.amazon.com/IAM/lat
1. Log into your AWS account and go to the [Security Credentials page](https://console.aws.amazon.com/iam/home#/security_credential)
1. Click the **Access Keys** section and **Create New Access Key**. Create the key and keep the ID and secret around; you'll need them later
![AWS Access Key Config](img/aws_config_window.png)
![AWS Access Key Configuration](img/aws_config_window.png)
1. Go to your GitLab project, click **Settings > CI/CD** on the left sidebar
1. Expand the **Variables** section
![GitLab Secret Config](img/gitlab_config.png)
1. Add a key named `AWS_KEY_ID` and copy the key ID from Step 2 into the **Value** textbox
1. Add a key named `AWS_KEY_SECRET` and copy the key secret from Step 2 into the **Value** textbox
1. Add a key named `AWS_KEY_ID` and copy the key ID from Step 2 into the **Value** field
1. Add a key named `AWS_KEY_SECRET` and copy the key secret from Step 2 into the **Value** field
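With both variables in place, any later job can read them from the environment, for example to hand credentials to the AWS CLI. A minimal sketch (the bucket name, region, and build directory are placeholders, not taken from the article):

```shell
# Sketch only: map the GitLab CI/CD variables configured above onto the
# environment variables the AWS CLI reads, then sync the built game to S3.
export AWS_ACCESS_KEY_ID="$AWS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="$AWS_KEY_SECRET"
export AWS_DEFAULT_REGION="us-east-1"   # adjust to your bucket's region

aws s3 sync ./built s3://your-game-bucket --acl public-read
```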
### Deploy your game with GitLab CI/CD
......@@ -529,7 +529,7 @@ a lot of breathing room in quickly getting changes to players.
Here are some ideas to further investigate that can speed up or improve your pipeline:
- [Yarn](https://yarnpkg.com) instead of npm
- Set up a custom [Docker](../../../ci/docker/using_docker_images.md#define-image-and-services-from-gitlab-ciyml) image that can preload dependencies and tools (like AWS CLI)
- Set up a custom [Docker](../../../ci/docker/using_docker_images.md#define-image-and-services-from-gitlab-ciyml) image that can pre-load dependencies and tools (like AWS CLI)
- Forward a [custom domain](https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html) to your game's S3 static website
- Combine jobs if you find it unnecessary for a small project
- Avoid the queues and set up your own [custom GitLab CI/CD runner](https://about.gitlab.com/blog/2016/03/01/gitlab-runner-with-docker/)
......@@ -32,7 +32,7 @@ It uses a clean, minimal [Blade syntax](https://laravel.com/docs/master/blade) t
## Initialize our Laravel app on GitLab
We assume [you have installed a new laravel project](https://laravel.com/docs/master/installation#installation), so let's start with a unit test, and initialize Git for the project.
We assume [you have installed a new Laravel project](https://laravel.com/docs/master/installation#installation), so let's start with a unit test, and initialize Git for the project.
### Unit Test
......@@ -667,7 +667,7 @@ If something doesn't work as expected, you can roll back to the latest working v
Click the external link icon on the right side, and GitLab opens the production website.
Our deployment was successful, and we can see that the application is live.
![laravel welcome page](img/laravel_welcome_page.png)
![Laravel welcome page](img/laravel_welcome_page.png)
If you're interested in how the application directory structure looks on the production server after deployment, here are three directories named `current`, `releases` and `storage`.
As you know, the `current` directory is a symbolic link that points to the latest release.
......
......@@ -46,7 +46,7 @@ Phoenix can run in any OS where Erlang is supported:
- Debian
- Windows
- Fedora
- Raspbian
- Raspberry Pi OS
Check the [Phoenix learning guide](https://hexdocs.pm/phoenix/overview.html) for more information.
......@@ -154,7 +154,7 @@ point `localhost` to `127.0.0.1`.
Great, now we have a local Phoenix Server running our app.
Locally, our application is running in an `iex` session. [iex](https://elixir-lang.org/getting-started/introduction.html#interactive-mode) stands for Interactive Elixir.
Locally, our application is running in an [`iex`](https://elixir-lang.org/getting-started/introduction.html#interactive-mode) session, which stands for Interactive Elixir.
In this interactive mode, we can type any Elixir expression and get its result. To exit `iex`, we
need to press `Ctrl+C` twice. So, when we need to stop the Phoenix server, we have to hit `Ctrl+C`
twice.
......@@ -164,7 +164,7 @@ twice.
With GitLab, we can manage our development workflow, improve our productivity, track issues,
perform code review, and much more from a single platform. With GitLab CI/CD, we can be much more
productive, because every time we, or our co-workers push any code, GitLab CI/CD will build and
test the changes, telling us in realtime if anything goes wrong.
test the changes, telling us in real time if anything goes wrong.
Certainly, when our application starts to grow, we'll need more developers working on the same
project and this process of building and testing can easily become a mess without proper management.
......
......@@ -75,7 +75,7 @@ There are some high level differences between the products worth mentioning:
with the [`rules` syntax](../yaml/README.md#rules).
- GitLab [pipeline scheduling concepts](../pipelines/schedules.md) are also different than with Jenkins.
- You can reuse pipeline configurations using the [`include` keyword](../yaml/README.md#include)
and [templates](#templates). Your templates can be kept in a central repo (with different
and [templates](#templates). Your templates can be kept in a central repository (with different
permissions), and then any project can use them. This central project could also
contain scripts or other reusable code.
- You can also use the [`extends` keyword](../yaml/README.md#extends) to reuse configuration
......@@ -139,7 +139,7 @@ GitLab works a bit differently, we use the more highly structured [YAML](https:/
places scripting elements inside of `script:` blocks separate from the pipeline specification itself.
This is a strength of GitLab, in that it helps keep the learning curve for getting up and running much gentler
and avoids some of the problem of unconstrained complexity which can make your Jenkinsfiles hard to understand
and avoids some of the problem of unconstrained complexity which can make your Jenkinsfile hard to understand
and manage.
That said, we do of course still value DRY (don't repeat yourself) principles and want to ensure that
......@@ -205,9 +205,9 @@ be used by all projects in the group. An instance administrator can set a group
the source for [instance project templates](../../user/group/custom_project_templates.md),
which can be used by projects in that instance.
## Converting Declarative Jenkinsfiles
## Converting a declarative Jenkinsfile
Declarative Jenkinsfiles contain "Sections" and "Directives" which are used to control the behavior of your
A declarative Jenkinsfile contains "Sections" and "Directives" which are used to control the behavior of your
pipelines. There are equivalents for all of these in GitLab, which we've documented below.
This section is based on the [Jenkinsfile syntax documentation](https://www.jenkins.io/doc/book/pipeline/syntax/)
......
......@@ -79,7 +79,7 @@ merge happens.
For more information, read the [documentation on Merge Trains](merge_trains/index.md).
## Automatic pipeline cancelation
## Automatic pipeline cancellation
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/12996) in [GitLab Premium](https://about.gitlab.com/pricing/) 12.3.
......
......@@ -11,7 +11,7 @@ type: reference
## Overview
GitLab provides a lot of great reporting tools for [merge requests](../user/project/merge_requests/index.md) - [JUnit reports](junit_test_reports.md), [codequality](../user/project/merge_requests/code_quality.md), performance tests, etc. While JUnit is a great open framework for tests that "pass" or "fail", it is also important to see other types of metrics from a given change.
GitLab provides a lot of great reporting tools for [merge requests](../user/project/merge_requests/index.md) - [JUnit reports](junit_test_reports.md), [code quality](../user/project/merge_requests/code_quality.md), performance tests, etc. While JUnit is a great open framework for tests that "pass" or "fail", it is also important to see other types of metrics from a given change.
You can configure your job to use custom Metrics Reports, and GitLab will display a report on the merge request so that it's easier and faster to identify changes without having to check the entire log.
......
......@@ -140,7 +140,7 @@ third party ports for other languages like JavaScript, Python, Ruby, and so on.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/207528) in GitLab 13.0.
> - Requires [GitLab Runner](https://docs.gitlab.com/runner/) 11.5 and above.
The `terraform` report obtains a Terraform `tfplan.json` file. [JQ processing required to remove creds](../../user/infrastructure/index.md#output-terraform-plan-information-into-a-merge-request). The collected Terraform
The `terraform` report obtains a Terraform `tfplan.json` file. [JQ processing required to remove credentials](../../user/infrastructure/index.md#output-terraform-plan-information-into-a-merge-request). The collected Terraform
plan report will be uploaded to GitLab as an artifact and will be automatically shown
in merge requests. For more information, see
[Output `terraform plan` information into a merge request](../../user/infrastructure/index.md#output-terraform-plan-information-into-a-merge-request).
......
......@@ -142,7 +142,7 @@ If you want to see the evolution of your project code coverage over time,
you can download a CSV file with this data. From your project:
1. Go to **{chart}** **Project Analytics > Repository**.
1. Click **Download raw data (.csv)**
1. Click **Download raw data (`.csv`)**
### Removing color codes
......
......@@ -159,7 +159,7 @@ For variables with the type **File**, the Runner creates an environment variable
For the value, the Runner writes the variable value to a temporary file and uses this path.
You can use tools like [the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html)
and [kubectl](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable)
and [`kubectl`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable)
to customize your configuration by using **File** type variables.
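As a rough illustration of what this means inside a job script: the variable holds a *path* to a temporary file containing the value, not the value itself. A small sketch, assuming a **File** type variable named `KUBECONFIG_FILE` (an illustrative name, not a predefined one):

```shell
# Sketch only: KUBECONFIG_FILE is a File-type variable, so it expands to the
# path of a temporary file that the Runner wrote the value into.
echo "Kubernetes config is stored at: $KUBECONFIG_FILE"
kubectl --kubeconfig "$KUBECONFIG_FILE" get pods
```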
In the past, a common pattern was to read the value of a CI variable, save it in a file, and then
......@@ -566,7 +566,7 @@ then be available as environment variables on the running application
container.
CAUTION: **Caution:**
Variables with multiline values are not currently supported due to
Variables with multi-line values are not currently supported due to
limitations with the current Auto DevOps scripting environment.
### Override a variable by manually running a pipeline
......@@ -648,8 +648,8 @@ Below you can find supported syntax reference:
It sometimes happens that you want to check whether a variable is defined
or not. To do that, you can compare a variable to the `null` keyword, like
`$VARIABLE == null`. This expression is going to evaluate to truth if
variable is not defined when `==` is used, or to falsey if `!=` is used.
`$VARIABLE == null`. This expression evaluates to true if
variable is not defined when `==` is used, or to false if `!=` is used.
1. Checking for an empty variable
......
......@@ -66,7 +66,7 @@ You can add a command to your `.gitlab-ci.yml` file to
| `CI_JOB_TOKEN` | 9.0 | 1.2 | Token used for authenticating with the [GitLab Container Registry](../../user/packages/container_registry/index.md) and downloading [dependent repositories](../../user/project/new_ci_build_permissions_model.md#dependent-repositories) |
| `CI_JOB_JWT` | 12.10 | all | RS256 JSON web token that can be used for authenticating with third party systems that support JWT authentication, for example [HashiCorp's Vault](../examples/authenticating-with-hashicorp-vault). |
| `CI_JOB_URL` | 11.1 | 0.5 | Job details URL |
| `CI_KUBERNETES_ACTIVE` | 13.0 | all | Included with the value `true` only if the pipeline has a Kubernetes cluster available for deployments. Not included if no cluster is availble. Can be used as an alternative to [`only:kubernetes`/`except:kubernetes`](../yaml/README.md#onlykubernetesexceptkubernetes) with [`rules:if`](../yaml/README.md#rulesif) |
| `CI_KUBERNETES_ACTIVE` | 13.0 | all | Included with the value `true` only if the pipeline has a Kubernetes cluster available for deployments. Not included if no cluster is available. Can be used as an alternative to [`only:kubernetes`/`except:kubernetes`](../yaml/README.md#onlykubernetesexceptkubernetes) with [`rules:if`](../yaml/README.md#rulesif) |
| `CI_MERGE_REQUEST_ASSIGNEES` | 11.9 | all | Comma-separated list of username(s) of assignee(s) for the merge request if [the pipelines are for merge requests](../merge_request_pipelines/index.md). Available only if `only: [merge_requests]` or [`rules`](../yaml/README.md#rules) syntax is used and the merge request is created. |
| `CI_MERGE_REQUEST_CHANGED_PAGE_PATHS` | 12.9 | all | Comma-separated list of paths of changed pages in a deployed [Review App](../review_apps/index.md) for a [Merge Request](../merge_request_pipelines/index.md). A [Route Map](../review_apps/index.md#route-maps) must be configured. |
| `CI_MERGE_REQUEST_CHANGED_PAGE_URLS` | 12.9 | all | Comma-separated list of URLs of changed pages in a deployed [Review App](../review_apps/index.md) for a [Merge Request](../merge_request_pipelines/index.md). A [Route Map](../review_apps/index.md#route-maps) must be configured. |
......
......@@ -75,10 +75,10 @@ ordering of variables definitions.
### Execution shell environment
This is an expansion that takes place during the `script` execution.
How it works depends on the used shell (bash/sh/cmd/PowerShell). For example, if the job's
How it works depends on the used shell (`bash`, `sh`, `cmd`, PowerShell). For example, if the job's
`script` contains a line `echo $MY_VARIABLE-${MY_VARIABLE_2}`, it should be properly handled
by bash/sh (leaving empty strings or some values depending on whether the variables were
defined or not), but will not work with Windows' cmd/PowerShell, since these shells
defined or not), but will not work with Windows' `cmd` or PowerShell, since these shells
use a different variable syntax.
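A small sketch of the bash/sh behavior described above, runnable outside of CI (variable names match the example in the text; values are illustrative):

```shell
# Undefined variables expand to empty strings in bash/sh, so this prints just "-":
unset MY_VARIABLE MY_VARIABLE_2
echo "$MY_VARIABLE-${MY_VARIABLE_2}"

# With values defined, the same line interpolates them and prints "foo-bar":
MY_VARIABLE=foo MY_VARIABLE_2=bar sh -c 'echo "$MY_VARIABLE-${MY_VARIABLE_2}"'
```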
Supported:
......
......@@ -20,8 +20,8 @@ existing indexes need to be updated. The more indexes there are the slower this
can potentially become. Indexes can also take up quite some disk space depending
on the amount of data indexed and the index type. For example, PostgreSQL offers
"GIN" indexes which can be used to index certain data types that can not be
indexed by regular btree indexes. These indexes however generally take up more
data and are slower to update compared to btree indexes.
indexed by regular B-tree indexes. These indexes however generally take up more
data and are slower to update compared to B-tree indexes.
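For instance, a GIN index is the usual choice for `jsonb` or array columns. A hypothetical example (the table and column names are made up for illustration, not taken from the GitLab schema):

```shell
# Hypothetical: a GIN index over a jsonb column, which a plain B-tree index
# cannot usefully cover for containment queries such as payload @> '{"key": "value"}'.
psql -d gitlabhq_development -c \
  "CREATE INDEX index_audit_payloads_on_payload ON audit_payloads USING gin (payload);"
```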
Because of all this, one should not blindly add a new index for every column used
to filter data by. Instead, one should ask the following questions:
......
......@@ -21,7 +21,7 @@ In the `plan_limits` table, you have to create a new column and insert the
limit values. It's recommended to create separate migration script files.
1. Add new column to the `plan_limits` table with non-null default value
that represents desired limit, eg:
that represents desired limit, such as:
```ruby
add_column(:plan_limits, :project_hooks, :integer, default: 100, null: false)
......@@ -31,7 +31,7 @@ limit values. It's recommended to create separate migration script files.
enabled. You should use this setting only in special and documented circumstances.
1. (Optionally) Create the database migration that fine-tunes each level with
a desired limit using `create_or_update_plan_limit` migration helper, eg:
a desired limit using `create_or_update_plan_limit` migration helper, such as:
```ruby
class InsertProjectHooksPlanLimits < ActiveRecord::Migration[5.2]
......@@ -65,7 +65,7 @@ for plans that do not exist.
#### Get current limit
Access to the current limit can be done through the project or the namespace,
eg:
such as:
```ruby
project.actual_limits.project_hooks
......@@ -76,13 +76,13 @@ project.actual_limits.project_hooks
There is one method `PlanLimits#exceeded?` to check if the current limit is
being exceeded. You can use either an `ActiveRecord` object or an `Integer`.
Ensures that the count of the records does not exceed the defined limit, eg:
Ensures that the count of the records does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, ProjectHook.where(project: project))
```
Ensures that the number does not exceed the defined limit, eg:
Ensures that the number does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, 10)
......
......@@ -12,7 +12,7 @@ Both EE and CE require some add-on components called GitLab Shell and Gitaly. Th
## Components
A typical install of GitLab will be on GNU/Linux. It uses NGINX or Apache as a web front end to proxypass the Unicorn web server. By default, communication between Unicorn and the front end is via a Unix domain socket but forwarding requests via TCP is also supported. The web front end accesses `/home/git/gitlab/public` bypassing the Unicorn server to serve static pages, uploads (e.g. avatar images or attachments), and precompiled assets. GitLab serves web pages and the [GitLab API](../api/README.md) using the Unicorn web server. It uses Sidekiq as a job queue which, in turn, uses Redis as a non-persistent database backend for job information, meta data, and incoming jobs.
A typical install of GitLab will be on GNU/Linux. It uses NGINX or Apache as a web front end to proxypass the Unicorn web server. By default, communication between Unicorn and the front end is via a Unix domain socket but forwarding requests via TCP is also supported. The web front end accesses `/home/git/gitlab/public` bypassing the Unicorn server to serve static pages, uploads (e.g. avatar images or attachments), and pre-compiled assets. GitLab serves web pages and the [GitLab API](../api/README.md) using the Unicorn web server. It uses Sidekiq as a job queue which, in turn, uses Redis as a non-persistent database backend for job information, meta data, and incoming jobs.
We also support deploying GitLab on Kubernetes using our [GitLab Helm chart](https://docs.gitlab.com/charts/).
......@@ -254,7 +254,7 @@ Elasticsearch is a distributed RESTful search engine built for the cloud.
- Process: `gitaly`
- GitLab.com: [Service Architecture](https://about.gitlab.com/handbook/engineering/infrastructure/production/architecture/#service-architecture)
Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability Deployments). As of 11.3.0, this service handles all Git level access in GitLab. You can read more about the project [in the project's readme](https://gitlab.com/gitlab-org/gitaly).
Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability Deployments). As of 11.3.0, this service handles all Git level access in GitLab. You can read more about the project [in the project's README](https://gitlab.com/gitlab-org/gitaly).
#### Praefect
......@@ -287,7 +287,7 @@ repository updates to secondary nodes.
- Process: `gitlab-exporter`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
GitLab Exporter is a process designed in house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's readme](https://gitlab.com/gitlab-org/gitlab-exporter).
GitLab Exporter is a process designed in house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's README](https://gitlab.com/gitlab-org/gitlab-exporter).
#### GitLab Pages
......@@ -551,7 +551,7 @@ An external registry can also be configured to use GitLab as an auth endpoint.
- Layer: Monitoring
- GitLab.com: [Searching Sentry](https://about.gitlab.com/handbook/support/workflows/500_errors.html#searching-sentry)
Sentry fundamentally is a service that helps you monitor and fix crashes in realtime.
Sentry fundamentally is a service that helps you monitor and fix crashes in real time.
The server is in Python, but it contains a full API for sending events from any language, in any application.
For monitoring deployed apps, see the [Sentry integration docs](../user/project/operations/error_tracking.md)
......@@ -657,7 +657,7 @@ When making a request to an HTTP Endpoint (think `/users/sign_in`) the request w
### GitLab Git Request Cycle
Below we describe the different pathing that HTTP vs. SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
Below we describe the different paths that HTTP vs. SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
### Web Request (80/443)
......@@ -790,7 +790,7 @@ ps aux | grep '^git'
```
GitLab has several components to operate. It requires a persistent database
(PostgreSQL) and Redis database, and uses Apache httpd or NGINX to proxypass
(PostgreSQL) and Redis database, and uses Apache `httpd` or NGINX to proxypass
Unicorn. All these components should run as different system users to GitLab
(e.g., `postgres`, `redis` and `www-data`, instead of `git`).
......@@ -866,7 +866,7 @@ NGINX:
- `/var/log/nginx/` contains error and access logs.
Apache httpd:
Apache `httpd`:
- [Explanation of Apache logs](https://httpd.apache.org/docs/2.2/logs.html).
- `/var/log/apache2/` contains error and output logs (on Ubuntu).
......@@ -880,7 +880,7 @@ PostgreSQL:
- `/var/log/postgresql/*`
### GitLab specific config files
### GitLab specific configuration files
GitLab has configuration files located in `/home/git/gitlab/config/*`. Commonly referenced config files include:
......@@ -902,7 +902,7 @@ bundle exec rake gitlab:env:info RAILS_ENV=production
bundle exec rake gitlab:check RAILS_ENV=production
```
Note: It is recommended to log into the `git` user using `sudo -i -u git` or `sudo su - git`. While the sudo commands provided by gitlabhq work in Ubuntu they do not always work in RHEL.
Note: It is recommended to log into the `git` user using `sudo -i -u git` or `sudo su - git`. While the sudo commands provided by GitLab work in Ubuntu they do not always work in RHEL.
## GitLab.com
......
......@@ -24,7 +24,7 @@ trigger.
If you want to create a package from a specific branch, commit or tag of any of
the GitLab components (like GitLab Workhorse, Gitaly, GitLab Pages, etc.), you
can specify the branch name, commit sha or tag in the component's respective
can specify the branch name, commit SHA or tag in the component's respective
`*_VERSION` file. For example, if you want to build a package that uses the
branch `0-1-stable`, modify the content of `GITALY_SERVER_VERSION` to
`0-1-stable` and push the commit. This will create a manual job that can be
......
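A sketch of that workflow, assuming a checkout of the package repository and the branch name from the example above:

```shell
# Pin Gitaly to the 0-1-stable branch for this package build, then push to
# create the pipeline with the manual package-build job described above.
echo "0-1-stable" > GITALY_SERVER_VERSION
git add GITALY_SERVER_VERSION
git commit -m "Build against Gitaly 0-1-stable"
git push
```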
......@@ -10,7 +10,7 @@ the main components.
![CI software architecture](img/ci_architecture.png)
<!-- Editable diagram available at https://app.diagrams.net/#G1LFl-KW4fgpBPzz8VIH9rsOlAH4t0xwKj -->
On the left side we have the events that can trigger a pipeline based on various events (trigged by a user or automation):
On the left side we have the events that can trigger a pipeline based on various events (triggered by a user or automation):
- A `git push` is the most common event that triggers a pipeline.
- The [Web API](../../api/pipelines.md#create-a-new-pipeline).
......
......@@ -314,7 +314,7 @@ experience, refactors the existing code). Then:
- For non-mandatory suggestions, decorate with (non-blocking) so the author knows they can
optionally resolve it within the merge request or follow up at a later stage.
- After a round of line notes, it can be helpful to post a summary note such as
"LGTM :thumbsup:", or "Just a couple things to address."
"Looks good to me", or "Just a couple things to address."
- Assign the merge request to the author if changes are required following your
review.
......@@ -381,7 +381,7 @@ Merge Results against the latest `master` at the time of the pipeline creation.
One of the most difficult things during code review is finding the right
balance in how deep the reviewer can interfere with the code created by a
reviewee.
author.
- Learning how to find the right balance takes time; that is why we have
reviewers that become maintainers after some time spent on reviewing merge
......@@ -389,7 +389,7 @@ reviewee.
- Finding bugs and improving code style is important, but thinking about good
design is important as well. Building abstractions and good design is what
makes it possible to hide complexity and makes future changes easier.
- Asking the reviewee to change the design sometimes means the complete rewrite
- Asking the author to change the design sometimes means the complete rewrite
of the contributed code. It's usually a good idea to ask another maintainer or
reviewer before doing it, but have the courage to do it when you believe it is
important.
......@@ -402,7 +402,7 @@ reviewee.
- There is a difference in doing things right and doing things right now.
Ideally, we should do the former, but in the real world we need the latter as
well. A good example is a security fix which should be released as soon as
possible. Asking the reviewee to do the major refactoring in the merge
possible. Asking the author to do the major refactoring in the merge
request that is an urgent fix should be avoided.
- Doing things well today is usually better than doing something perfectly
tomorrow. Shipping a kludge today is usually worse than doing something well
......
......@@ -70,7 +70,9 @@ Use these instructions for exploring the GitLab database while developing with t
1. **PostgreSQL user to authenticate as**: usually your local username, unless otherwise specified during PostgreSQL installation.
1. **Password of the PostgreSQL user**: the password you set when installing PostgreSQL.
1. **Port number to connect to**: `5432` (default).
1. **Use an ssl connection?** This depends on your installation. Options are:
1. <!-- vale gitlab.Spelling = NO -->
**Use an ssl connection?**
<!-- vale gitlab.rulename = NO --> This depends on your installation. Options are:
- **Use Secure Connection**
- **Standard Connection** (default)
1. **(Optional) The database to connect to**: `gitlabhq_development`.
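The same connection details can be sanity-checked from a terminal before configuring the extension. A quick sketch (the host and user are assumptions; adjust them to your setup):

```shell
# Connects with the values listed above: local user, port 5432, development database.
psql -h localhost -p 5432 -U "$USER" -d gitlabhq_development -c 'SELECT 1;'
```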
......@@ -86,7 +88,7 @@ of the extension documentation.
### `ActiveRecord::PendingMigrationError` with Spring
When running specs with the [Spring preloader](rake_tasks.md#speed-up-tests-rake-tasks-and-migrations),
When running specs with the [Spring pre-loader](rake_tasks.md#speed-up-tests-rake-tasks-and-migrations),
the test database can get into a corrupted state. Trying to run the migration or
dropping/resetting the test database has no effect.
......
......@@ -87,7 +87,7 @@ the following preparations into account.
#### Preparation when adding or modifying queries
- Write the raw SQL in the MR description. Preferably formatted
nicely with [sqlformat.darold.net](http://sqlformat.darold.net) or
nicely with [pgFormatter](https://sqlformat.darold.net) or
[paste.depesz.com](https://paste.depesz.com).
- Include the output of `EXPLAIN (ANALYZE, BUFFERS)` of the relevant
queries in the description. If the output is too long, wrap it in
......
......@@ -2,7 +2,7 @@
Sometimes it is useful to import the database from a production environment
into a staging environment for testing. The procedure below assumes you have
SSH+sudo access to both the production environment and the staging VM.
SSH and `sudo` access to both the production environment and the staging VM.
**Destroy your staging VM** when you are done with it. It is important to avoid
data leaks.
......@@ -20,7 +20,7 @@ sudo gitlab-ctl stop sidekiq
```
Next, we let the production environment stream a compressed SQL dump to our
local machine via SSH, and redirect this stream to a psql client on the staging
local machine via SSH, and redirect this stream to a `psql` client on the staging
VM.
```shell
......
......@@ -5,7 +5,7 @@ the possibility of the migration already been included in past releases or in th
Because of it, it's not possible to delete existing migrations, as that could lead to:
- Schema inconsistency, as changes introduced into the database were not rollbacked properly.
- Schema inconsistency, as changes introduced into the database were not rolled back properly.
- Leaving a record in the `schema_versions` table that points to a migration that no longer exists in the codebase.
Instead of deleting, we can opt for disabling the migration.
......@@ -22,7 +22,7 @@ Migrations can be disabled if:
In order to disable a migration, the following steps apply to all types of migrations:
1. Turn the migration into a noop by removing the code inside `#up`, `#down`
1. Turn the migration into a no-op by removing the code inside `#up`, `#down`
or `#perform` methods, and adding `#no-op` comment instead.
1. Add a comment explaining why the code is gone.
......
......@@ -26,4 +26,4 @@ end
It's also possible to run an entire scenario with a feature flag enabled, without having to edit existing tests or write new ones.
Please see the [QA readme](https://gitlab.com/gitlab-org/gitlab/tree/master/qa#running-tests-with-a-feature-flag-enabled) for details.
Please see the [QA README](https://gitlab.com/gitlab-org/gitlab/tree/master/qa#running-tests-with-a-feature-flag-enabled) for details.
......@@ -222,7 +222,7 @@ See GitLab merge requests for examples: <https://gitlab.com/gitlab-org/gitlab-fo
- Dashboard
- User Preferences
- ReadMe, Changelog, License shortcuts
- README, Changelog, License shortcuts
- Issues
- Milestones and Labels
- Manage project members
......
......@@ -793,7 +793,7 @@ posthog:
```
You can customize the installation of PostHog by defining `.gitlab/managed-apps/posthog/values.yaml`
in your cluster management project. Refer to the [Configuration section of the PostHog chart's readme](https://github.com/PostHog/charts/tree/master/charts/posthog)
in your cluster management project. Refer to the [Configuration section of the PostHog chart's README](https://github.com/PostHog/charts/tree/master/charts/posthog)
for the available configuration options.
NOTE: **Note:**
......