Commit 77c640c7 authored by Suzanne Selhorn, committed by Marcel Amirault

Add more language tags

Related to 32881
parent ef172552
......@@ -26,7 +26,7 @@ To do so we'll generate a dump of our current database. This dump will only
contain the structure, not any data. To generate this dump run the following
command on your active database server:
```bash
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump -h /var/opt/gitlab/postgresql -p 5432 -U gitlab-psql -s -f /tmp/structure.sql gitlabhq_production
```
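Before moving on, you may want to confirm that the dump was actually written; a quick check along these lines (not part of the original steps) is enough:

```shell
# List the dump and peek at its first lines to confirm it was generated.
ls -lh /tmp/structure.sql
head -n 20 /tmp/structure.sql
```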
......@@ -39,14 +39,14 @@ Once the structure dump is generated we also need to generate a dump for the
can't be replicated easily by Slony. To generate this dump run the following
command on your active database server:
```bash
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump -h /var/opt/gitlab/postgresql/ -p 5432 -U gitlab-psql -a -t schema_migrations -f /tmp/migrations.sql gitlabhq_production
```
Next we'll need to move these files somewhere accessible to the new database
server. The easiest way is to download these files to your local system:
```bash
```shell
scp your-user@production-database-host:/tmp/*.sql /tmp
```
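From your local system you would then upload them to the new database server in the same way; a sketch, with `new-database-host` as a placeholder:

```shell
# Copy the structure and migrations dumps into /tmp on the new database server.
scp /tmp/*.sql your-user@new-database-host:/tmp
```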
......@@ -63,7 +63,7 @@ install Slony using said package manager.
When compiling Slony from source, you *must* use the following commands:
```bash
```shell
./configure --prefix=/path/to/installation/directory --with-perltools --with-pgconfigdir=/path/to/directory/containing/pg_config/bin
make
make install
......@@ -71,7 +71,7 @@ make install
Omnibus users can use the following commands:
```bash
```shell
./configure --prefix=/opt/gitlab/embedded --with-perltools --with-pgconfigdir=/opt/gitlab/embedded/bin
make
make install
......@@ -81,7 +81,7 @@ This assumes you have installed GitLab into `/opt/gitlab`.
To test if Slony is installed properly, run the following commands:
```bash
```shell
test -f /opt/gitlab/embedded/bin/slonik && echo 'Slony installed' || echo 'Slony not installed'
test -f /opt/gitlab/embedded/bin/slonik_init_cluster && echo 'Slony Perl tools are available' || echo 'Slony Perl tools are not available'
/opt/gitlab/embedded/bin/slonik -v
......@@ -91,7 +91,7 @@ This assumes Slony was installed to `/opt/gitlab/embedded`. If Slony was
installed properly the output of these commands will be (the mentioned "slonik"
version may be different):
```
```plaintext
Slony installed
Slony Perl tools are available
slonik version 2.2.5
......@@ -126,7 +126,7 @@ First we'll need to create some required directories and set the correct
permissions. To do so, run the following commands on both the old and new
database servers:
```bash
```shell
sudo mkdir -p /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
sudo chown gitlab-psql:root /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
```
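If you want to double-check the result, listing the directories should show them owned by `gitlab-psql`:

```shell
# All three directories should exist and be owned by gitlab-psql.
ls -ld /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
```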
......@@ -184,7 +184,7 @@ use it. The following placeholders should be replaced:
The list of tables to replicate can be generated by running the following
command on your old PostgreSQL database:
```
```shell
sudo gitlab-psql gitlabhq_production -c "select concat('\"', schemaname, '.', tablename, '\",') from pg_catalog.pg_tables where schemaname = 'public' and tableowner = 'gitlab' and tablename != 'schema_migrations' order by tablename asc;" -t
```
......@@ -216,13 +216,13 @@ sure that the SQL files we generated earlier can be found in the `/tmp`
directory of the new server. Once these files are in place start a `psql`
session on this server:
```
```shell
sudo gitlab-psql gitlabhq_production
```
Now run the following commands:
```
```plaintext
\i /tmp/structure.sql
\i /tmp/migrations.sql
```
......@@ -231,7 +231,7 @@ To verify if the structure is in place close the session, start it again, then
run `\d`. If all went well you should see output along the lines of the
following:
```
```plaintext
List of relations
Schema | Name | Type | Owner
--------+---------------------------------------------+----------+-------------
......@@ -248,13 +248,13 @@ following:
Now we can initialize the required tables and other objects that Slony will use
for its replication process. To do so, run the following on the old database:
```
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_init_cluster --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
```
If all went well this will produce something along the lines of:
```
```plaintext
<stdin>:10: Set up replication nodes
<stdin>:13: Next: configure paths for each node/origin
<stdin>:16: Replication nodes prepared
......@@ -264,13 +264,13 @@ If all went well this will produce something along the lines of:
Next we need to start a replication node on every server. To do so, run the
following on the old database:
```
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_start 1 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
```
If all went well this will produce output such as:
```
```plaintext
Invoke slon for node 1 - /opt/gitlab/embedded/bin/slon -p /var/run/slony1/slony_replication_node1.pid -s 1000 -d2 slony_replication 'host=192.168.0.7 dbname=gitlabhq_production user=slony port=5432 password=hieng8ezohHuCeiqu0leeghai4aeyahp' > /var/log/gitlab/slony/node1/gitlabhq_production-2016-10-06.log 2>&1 &
Slon successfully started for cluster slony_replication, node node1
PID [26740]
......@@ -279,7 +279,7 @@ Start the watchdog process as well...
Next we need to run the following command on the _new_ database server:
```
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_start 2 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
```
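To confirm that the `slon` daemons are actually running on each server, you could check the processes and logs; a minimal sketch (the `node1`/`node2` log directories follow the pattern shown in the output above):

```shell
# Look for running slon processes.
ps aux | grep '[s]lon'
# Inspect the most recent replication log entries.
sudo tail -n 20 /var/log/gitlab/slony/node*/*.log
```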
......@@ -288,13 +288,13 @@ This will produce similar output if all went well.
Next we need to tell the new database server what it should replicate. This can
be done by running the following command on the _new_ database server:
```
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_create_set 1 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
```
This should produce output along the lines of the following:
```
```plaintext
<stdin>:11: Subscription set 1 (set1) created
<stdin>:12: Adding tables to the subscription set
<stdin>:16: Add primary keyed table public.abuse_reports
......@@ -308,13 +308,13 @@ This should produce output along the lines of the following:
Finally we can start the replication process by running the following on the
_new_ database server:
```
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_subscribe_set 1 2 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
```
This should produce the following output:
```
```plaintext
<stdin>:6: Subscribed nodes to set 1
```
......@@ -324,7 +324,7 @@ not days. Unfortunately Slony itself doesn't really provide a way of knowing
when the two databases are in sync. To get an estimate of the progress you can
use the following shell script:
```
```shell
#!/usr/bin/env bash
set -e
......@@ -365,7 +365,7 @@ GitLab so it can use the new database, etc.
First, let's stop all of GitLab. Omnibus users can do so by running the
following on their GitLab server(s):
```
```shell
sudo gitlab-ctl stop unicorn
sudo gitlab-ctl stop sidekiq
sudo gitlab-ctl stop mailroom
......@@ -382,7 +382,7 @@ as this data will be lost.
To stop replication, run the following on both database servers:
```bash
```shell
sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_kill --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
```
......@@ -394,7 +394,7 @@ The above setup does not replicate database sequences, as such these must be
reset manually in the target database. You can use the following script for
this:
```bash
```shell
#!/usr/bin/env bash
set -e
......@@ -459,7 +459,7 @@ main
Upload this script to the _target_ server and execute it as follows:
```bash
```shell
bash path/to/the/script/above.sh
```
......@@ -471,7 +471,7 @@ This will correct the ownership of sequences and reset the next value for the
Next we need to remove all Slony-related data. To do so, run the following
command on the _target_ server:
```bash
```shell
sudo gitlab-psql gitlabhq_production -c "DROP SCHEMA _slony_replication CASCADE;"
```
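To confirm the schema was removed, you could list the remaining schemas; `_slony_replication` should no longer appear:

```shell
sudo gitlab-psql gitlabhq_production -c "\dn"
```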
......
......@@ -81,7 +81,7 @@ If you wanted to increase the max attachment size to 200m in a GitLab
[Omnibus](https://docs.gitlab.com/omnibus/) install, for example, you might need to
add the line below to `/etc/gitlab/gitlab.rb` before increasing the max attachment size:
```
```ruby
nginx['client_max_body_size'] = "200m"
```
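Changes to `/etc/gitlab/gitlab.rb` only take effect after reconfiguring, so you would follow this with:

```shell
sudo gitlab-ctl reconfigure
```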
......
......@@ -6,7 +6,7 @@ type: reference
GitLab protects the following paths with Rack Attack by default:
```
```plaintext
'/users/password',
'/users/sign_in',
'/api/#{API::API.version}/session.json',
......@@ -23,7 +23,7 @@ that exceed 10 requests per minute per IP address.
This header is included in responses to blocked requests:
```
```plaintext
Retry-After: 60
```
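Purely as an illustration (the hostname is a placeholder), you could inspect the response headers of a throttled request with curl; the header only appears once the limit has been exceeded:

```shell
curl -I https://gitlab.example.com/users/sign_in
```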
......
......@@ -213,7 +213,7 @@ An approval will be optional when a license report:
When including a security job template like [`SAST`](sast/index.md#configuration),
the following error can be raised, depending on your GitLab CI/CD configuration:
```
```plaintext
Found errors in your .gitlab-ci.yml:
* sast job: stage parameter should be unit-tests
......
......@@ -27,7 +27,7 @@ that the IP address of the pods are routable within the GCP network.
First, we need to declare some environment variables with configuration values that will be used throughout this guide:
```sh
```shell
export PROJECT_ID=crossplane-playground # the GCP project where all resources reside.
export NETWORK_NAME=default # the GCP network where your GKE is provisioned.
export REGION=us-central1 # the GCP region where the GKE cluster is provisioned.
......@@ -43,7 +43,7 @@ NOTE: **Note:**
For a non-GitLab managed cluster, ensure that the service account for the token provided can manage resources in the `database.crossplane.io` API group.
1. Save the following YAML as `crossplane-database-role.yaml`:
```sh
```shell
cat > crossplane-database-role.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
......@@ -69,7 +69,7 @@ EOF
Once the file is created, apply it with the following command to create the necessary role:
```sh
```shell
kubectl apply -f crossplane-database-role.yaml
```
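You can confirm the role exists by querying the same manifest:

```shell
# Shows the ClusterRole created from the manifest above.
kubectl get -f crossplane-database-role.yaml
```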
......@@ -94,7 +94,7 @@ This can done by either:
[configuring private services access](https://cloud.google.com/vpc/docs/configure-private-services-access).
Create the GlobalAddress and Connection resources:
```sh
```shell
cat > network.yaml <<EOF
---
# gitlab-ad-globaladdress defines the IP range that will be allocated for cloud services connecting to the instances in the given Network.
......@@ -133,14 +133,14 @@ EOF
Apply the settings specified in the file with the following command:
```sh
```shell
kubectl apply -f network.yaml
```
You can verify creation of the network resources with the following commands.
Verify that the status of both of these resources is ready and synced.
```sh
```shell
kubectl describe connection.servicenetworking.gcp.crossplane.io gitlab-ad-connection
kubectl describe globaladdress.compute.gcp.crossplane.io gitlab-ad-globaladdress
```
......@@ -154,7 +154,7 @@ Resource classes are a way of defining a configuration for the required managed
1. A default CloudSQLInstanceClass.
1. A CloudSQLInstanceClass with labels.
```sh
```shell
cat > gcp-postgres-standard.yaml <<EOF
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstanceClass
......@@ -204,13 +204,13 @@ EOF
Apply the resource class configuration with the following command:
```sh
```shell
kubectl apply -f gcp-postgres-standard.yaml
```
Verify creation of the resource class with the following command:
```sh
```shell
kubectl get cloudsqlinstanceclasses
```
......@@ -239,13 +239,13 @@ The Auto DevOps pipeline should provision a PostgresqlInstance when it runs succ
Verify creation of the PostgreSQL Instance.
```sh
```shell
kubectl get postgresqlinstance
```
Sample Output: The `STATUS` field of the PostgresqlInstance transitions to `BOUND` when it is successfully provisioned.
```
```plaintext
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
staging-test8 Bound CloudSQLInstanceClass cloudsqlinstancepostgresql-standard CloudSQLInstance xp-ad-demo-24-staging-staging-test8-jj55c 9m
```
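If the claim stays unbound, describing it shows the underlying conditions; the name below is taken from the sample output above:

```shell
kubectl describe postgresqlinstance staging-test8
```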
......@@ -254,13 +254,13 @@ The endpoint of the PostgreSQL instance, and the user credentials, are present i
Verify that the secret with the database information was created with the following command:
```sh
```shell
kubectl describe secret app-postgres
```
Sample Output:
```
```plaintext
Name: app-postgres
Namespace: xp-ad-demo-24-staging
Labels: <none>
......
......@@ -19,7 +19,7 @@ Below are the fingerprints for GitLab.com's SSH host keys.
Add the following to `.ssh/known_hosts` to skip manual fingerprint
confirmation in SSH:
```
```plaintext
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
......@@ -41,7 +41,7 @@ GitLab.com can be reached via a [different SSH port][altssh] for `git+ssh`.
An example `~/.ssh/config` is the following:
```
```plaintext
Host gitlab.com
Hostname altssh.gitlab.com
User git
......@@ -455,7 +455,7 @@ per second per IP address.
The following example headers are included for all API requests:
```
```plaintext
RateLimit-Limit: 600
RateLimit-Observed: 6
RateLimit-Remaining: 594
......@@ -481,7 +481,7 @@ user confirmation, user sign in, and password reset.
This header is included in responses to blocked requests:
```
```plaintext
Retry-After: 60
```
......