Commit f55d8570 authored by Evan Read's avatar Evan Read

Merge branch 'docs-geo-markdown-1' into 'master'

Docs: Clean up markdown spacing in geo docs

See merge request gitlab-org/gitlab-ce!30112
parents 4771ad2c f3ed4ecc
@@ -143,11 +143,13 @@ If the **primary** and **secondary** nodes have a checksum verification mismatch

1. On the project admin page get the **Gitaly storage name**, and **Gitaly relative path**:

   ![Project admin page](img/checksum-differences-admin-project-page.png)

1. Navigate to the project's repository directory on both **primary** and **secondary** nodes
   (the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs`
   is customized, check the directory layout on your server to be sure.

   ```sh
   cd /var/opt/gitlab/git-data/repositories
   ```

1. Run the following command on the **primary** node, redirecting the output to a file:

...
@@ -21,20 +21,20 @@ To bring the former **primary** node up to date:

1. SSH into the former **primary** node that has fallen behind.
1. Make sure all the services are up:

   ```sh
   sudo gitlab-ctl start
   ```

   NOTE: **Note:** If you [disabled the **primary** node permanently][disaster-recovery-disable-primary],
   you need to undo those steps now. For Debian/Ubuntu you just need to run
   `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
   the GitLab instance from scratch and set it up as a **secondary** node by
   following the [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.

   NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
   for this node during the disaster recovery procedure, you may need to [block
   all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node)
   during this procedure.

1. [Setup database replication][database-replication]. Note that in this
   case, **primary** node refers to the current **primary** node, and **secondary** node refers to the

...
@@ -143,26 +143,26 @@ access to the **primary** node during the maintenance window.

   all HTTP, HTTPS and SSH traffic to/from the **primary** node, **except** for your IP and
   the **secondary** node's IP.

   For instance, you might run the following commands on the server(s) making up your **primary** node:

   ```sh
   sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 22 -j ACCEPT
   sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 22 -j ACCEPT
   sudo iptables -A INPUT -p tcp --destination-port 22 -j REJECT

   sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 80 -j ACCEPT
   sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 80 -j ACCEPT
   sudo iptables -A INPUT -p tcp --destination-port 80 -j REJECT

   sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 443 -j ACCEPT
   sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 443 -j ACCEPT
   sudo iptables -A INPUT -p tcp --destination-port 443 -j REJECT
   ```

   From this point, users will be unable to view their data or make changes on the
   **primary** node. They will also be unable to log in to the **secondary** node.
   However, existing sessions will work for the remainder of the maintenance period, and
   public data will be accessible throughout.

1. Verify the **primary** node is blocked to HTTP traffic by visiting it in a browser via
   another IP. The server should refuse the connection.
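   As a quick command-line variant of this check (a sketch only; `primary.example.com` is a placeholder for your **primary** node's address), you can run `curl` from a machine whose IP is not in the allow list:

   ```sh
   # Run from a NON-allow-listed IP; the request should fail rather than return headers.
   # curl exit code 7 means "connection refused", 28 means the attempt timed out.
   curl --connect-timeout 5 -m 10 -sI https://primary.example.com \
     && echo "WARNING: primary still reachable" \
     || echo "primary blocked (curl exit $?)"
   ```

   Either failure mode confirms the firewall rules are in effect for that source IP.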
@@ -187,10 +187,11 @@ access to the **primary** node during the maintenance window.

   before it is completed will cause the work to be lost.

1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
   following conditions to be true of the **secondary** node you are failing over to:

   - All replication meters reach 100% replicated, 0% failures.
   - All verification meters reach 100% verified, 0% failures.
   - Database replication lag is 0ms.
   - The Geo log cursor is up to date (0 events behind).

1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.

...
@@ -16,11 +16,10 @@ The basic steps of configuring a **secondary** node are to:

You are encouraged to first read through all the steps before executing them
in your testing/production environment.

NOTE: **Note:**
**Do not** set up any custom authentication for the **secondary** nodes. This will be handled by the **primary** node.
Any change that requires access to the **Admin Area** needs to be done in the
**primary** node because the **secondary** node is a read-only replica.

### Step 1. Manually replicate secret GitLab values
@@ -31,47 +30,47 @@ they must be manually replicated to the **secondary** node.

1. SSH into the **primary** node, and execute the command below:

   ```sh
   sudo cat /etc/gitlab/gitlab-secrets.json
   ```

   This will display the secrets that need to be replicated, in JSON format.

1. SSH into the **secondary** node and log in as the `root` user:

   ```sh
   sudo -i
   ```

1. Make a backup of any existing secrets:

   ```sh
   mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.`date +%F`
   ```

1. Copy `/etc/gitlab/gitlab-secrets.json` from the **primary** node to the **secondary** node, or
   copy-and-paste the file contents between nodes:

   ```sh
   sudo editor /etc/gitlab/gitlab-secrets.json

   # paste the output of the `cat` command you ran on the primary
   # save and exit
   ```

1. Ensure the file permissions are correct:

   ```sh
   chown root:root /etc/gitlab/gitlab-secrets.json
   chmod 0600 /etc/gitlab/gitlab-secrets.json
   ```
1. Reconfigure the **secondary** node for the change to take effect:

   ```sh
   gitlab-ctl reconfigure
   gitlab-ctl restart
   ```

### Step 2. Manually replicate the **primary** node's SSH host keys
@@ -89,80 +88,80 @@ keys must be manually replicated to the **secondary** node.

1. SSH into the **secondary** node and log in as the `root` user:

   ```sh
   sudo -i
   ```

1. Make a backup of any existing SSH host keys:

   ```sh
   find /etc/ssh -iname ssh_host_* -exec cp {} {}.backup.`date +%F` \;
   ```

1. Copy OpenSSH host keys from the **primary** node:

   If you can access your **primary** node using the **root** user:

   ```sh
   # Run this from the secondary node, change `<primary_node_fqdn>` for the IP or FQDN of the server
   scp root@<primary_node_fqdn>:/etc/ssh/ssh_host_*_key* /etc/ssh
   ```

   If you only have access through a user with **sudo** privileges:

   ```sh
   # Run this from your primary node:
   sudo tar --transform 's/.*\///g' -zcvf ~/geo-host-key.tar.gz /etc/ssh/ssh_host_*_key*

   # Run this from your secondary node:
   scp <user_with_sudo>@<primary_node_fqdn>:geo-host-key.tar.gz .
   tar zxvf ~/geo-host-key.tar.gz -C /etc/ssh
   ```

1. On your **secondary** node, ensure the file permissions are correct:

   ```sh
   chown root:root /etc/ssh/ssh_host_*_key*
   chmod 0600 /etc/ssh/ssh_host_*_key*
   ```

1. To verify that the key fingerprints match, execute the following command on both nodes:

   ```sh
   for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
   ```

   You should get an output similar to this one and they should be identical on both nodes:

   ```sh
   1024 SHA256:FEZX2jQa2bcsd/fn/uxBzxhKdx4Imc4raXrHwsbtP0M root@serverhostname (DSA)
   256 SHA256:uw98R35Uf+fYEQ/UnJD9Br4NXUFPv7JAUln5uHlgSeY root@serverhostname (ECDSA)
   256 SHA256:sqOUWcraZQKd89y/QQv/iynPTOGQxcOTIXU/LsoPmnM root@serverhostname (ED25519)
   2048 SHA256:qwa+rgir2Oy86QI+PZi/QVR+MSmrdrpsuH7YyKknC+s root@serverhostname (RSA)
   ```

1. Verify that you have the correct public keys for the existing private keys:

   ```sh
   # This will print the fingerprint for private keys:
   for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done

   # This will print the fingerprint for public keys:
   for file in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf $file; done
   ```

   NOTE: **Note:**
   The private key and public key commands should generate the same fingerprints.
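   If you prefer an automated comparison, a small loop along these lines (a sketch; run as root) pairs each private key with its `.pub` file and flags any mismatch:

   ```sh
   for key in /etc/ssh/ssh_host_*_key; do
     # ssh-keygen -l prints "<bits> <fingerprint> <comment> (<type>)"; field 2 is the fingerprint.
     priv=$(ssh-keygen -lf "$key" | awk '{print $2}')
     pub=$(ssh-keygen -lf "$key.pub" | awk '{print $2}')
     if [ "$priv" = "$pub" ]; then
       echo "$key: OK"
     else
       echo "$key: MISMATCH"
     fi
   done
   ```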
1. Restart sshd on your **secondary** node:

   ```sh
   # Debian or Ubuntu installations
   sudo service ssh reload

   # CentOS installations
   sudo service sshd reload
   ```
### Step 3. Add the **secondary** node
@@ -176,22 +175,22 @@ keys must be manually replicated to the **secondary** node.

1. Click the **Add node** button.
1. SSH into your GitLab **secondary** server and restart the services:

   ```sh
   gitlab-ctl restart
   ```

   Check if there are any common issues with your Geo setup by running:

   ```sh
   gitlab-rake gitlab:geo:check
   ```

1. SSH into your **primary** server and log in as root to verify the
   **secondary** node is reachable and check for any common issues with your Geo setup:

   ```sh
   gitlab-rake gitlab:geo:check
   ```

Once added to the admin panel and restarted, the **secondary** node will automatically start
replicating missing data from the **primary** node in a process known as **backfill**.
@@ -250,9 +249,8 @@ The two most obvious issues that can become apparent in the dashboard are:

1. Database replication not working well.
1. Instance to instance notification not working. In that case, it can be
   one of the following:

   - You are using a custom certificate or custom CA (see the [troubleshooting document](troubleshooting.md)).
   - The instance is firewalled (check your firewall rules).

Please note that disabling a **secondary** node will stop the synchronization process.
@@ -304,5 +302,4 @@ See the [troubleshooting document](troubleshooting.md).

[gitlab-org/gitlab-ee#3789]: https://gitlab.com/gitlab-org/gitlab-ee/issues/3789
[gitlab-com/infrastructure#2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
[omnibus-ssl]: https://docs.gitlab.com/omnibus/settings/ssl.html
[using-geo]: using_a_geo_server.md
@@ -4,7 +4,7 @@ This document is relevant if you are using a PostgreSQL instance that is *not

managed by Omnibus*. This includes cloud-managed instances like AWS RDS, or
manually installed and configured PostgreSQL instances.

NOTE: **Note:**
We strongly recommend running Omnibus-managed instances as they are actively
developed and tested. We aim to be compatible with most external
(not managed by Omnibus) databases but we do not guarantee compatibility.
@@ -13,17 +13,17 @@ developed and tested. We aim to be compatible with most external

1. SSH into a GitLab **primary** application server and log in as root:

   ```sh
   sudo -i
   ```

1. Execute the command below to define the node as the **primary** node:

   ```sh
   gitlab-ctl set-geo-primary-node
   ```

   This command will use the `external_url` defined in `/etc/gitlab/gitlab.rb`.
### Configure the external database to be replicated
@@ -101,26 +101,27 @@ To configure the connection to the external read-replica database and enable Log

1. SSH into a GitLab **secondary** application server and log in as root:

   ```bash
   sudo -i
   ```

1. Edit `/etc/gitlab/gitlab.rb` and add the following:

   ```ruby
   ##
   ## Geo Secondary role
   ## - configure dependent flags automatically to enable Geo
   ##
   roles ['geo_secondary_role']

   # note this is shared between both databases,
   # make sure you define the same password in both
   gitlab_rails['db_password'] = '<your_password_here>'

   gitlab_rails['db_username'] = 'gitlab'
   gitlab_rails['db_host'] = '<database_read_replica_host>'
   ```

1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
### Configure the tracking database
@@ -147,73 +148,72 @@ the tracking database on port 5432.

1. SSH into a GitLab **secondary** server and log in as root:

   ```bash
   sudo -i
   ```

1. Edit `/etc/gitlab/gitlab.rb` with the connection params and credentials for
   the machine with the PostgreSQL instance:

   ```ruby
   geo_secondary['db_username'] = 'gitlab_geo'
   geo_secondary['db_password'] = '<your_password_here>'

   geo_secondary['db_host'] = '<tracking_database_host>'
   geo_secondary['db_port'] = <tracking_database_port>  # change to the correct port
   geo_secondary['db_fdw'] = true       # enable FDW
   geo_postgresql['enable'] = false     # don't use internal managed instance
   ```

1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
1. Run the tracking database migrations:

   ```bash
   gitlab-rake geo:db:create
   gitlab-rake geo:db:migrate
   ```

1. Configure the [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
   connection and credentials:

   Save the script below in a file, e.g. `/tmp/geo_fdw.sh`, and modify the connection
   params to match your environment. Execute it to set up the FDW connection.

   ```bash
   #!/bin/bash

   # Secondary Database connection params:
   DB_HOST="<public_ip_or_vpc_private_ip>"
   DB_NAME="gitlabhq_production"
   DB_USER="gitlab"
   DB_PASS="<your_password_here>"
   DB_PORT="5432"

   # Tracking Database connection params:
   GEO_DB_HOST="<public_ip_or_vpc_private_ip>"
   GEO_DB_NAME="gitlabhq_geo_production"
   GEO_DB_USER="gitlab_geo"
   GEO_DB_PORT="5432"

   query_exec () {
     gitlab-psql -h $GEO_DB_HOST -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
   }

   query_exec "CREATE EXTENSION postgres_fdw;"
   query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
   query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}', password '${DB_PASS}');"
   query_exec "CREATE SCHEMA gitlab_secondary;"
   query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
   ```

   NOTE: **Note:** The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
   but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
   `psql` for AWS RDS.

1. Save the file and [restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart)
1. Populate the FDW tables:

   ```bash
   gitlab-rake geo:db:refresh_foreign_tables
   ```
@@ -50,17 +50,17 @@ The following steps enable a GitLab cluster to serve as the **primary** node.

1. Edit `/etc/gitlab/gitlab.rb` and add the following:

   ```ruby
   ##
   ## Enable the Geo primary role
   ##
   roles ['geo_primary_role']

   ##
   ## Disable automatic migrations
   ##
   gitlab_rails['auto_migrate'] = false
   ```

After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
@@ -107,36 +107,36 @@ Configure the [**secondary** database](database.md) as a read-only replica of

the **primary** database. Use the following as a guide.

1. Edit `/etc/gitlab/gitlab.rb` in the replica database machine, and add the
   following:

   ```ruby
   ##
   ## Configure the PostgreSQL role
   ##
   roles ['postgres_role']

   ##
   ## Secondary address
   ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
   ## - replace '<tracking_database_ip>' with the public or VPC address of your Geo tracking database node
   ##
   postgresql['listen_address'] = '<secondary_node_ip>'
   postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32', '<tracking_database_ip>/32']

   ##
   ## Database credentials password (defined previously in primary node)
   ## - replicate same values here as defined in primary node
   ##
   postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
   gitlab_rails['db_password'] = '<your_password_here>'

   ##
   ## When running the Geo tracking database on a separate machine, disable it
   ## here and allow connections from the tracking database host. And ensure
   ## the tracking database IP is in postgresql['md5_auth_cidr_addresses'] above.
   ##
   geo_postgresql['enable'] = false
   ```

After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
@@ -151,47 +151,47 @@ only a single machine, rather than as a PostgreSQL cluster.

Configure the tracking database.

1. Edit `/etc/gitlab/gitlab.rb` in the tracking database machine, and add the
   following:

   ```ruby
   ##
   ## Enable the Geo secondary tracking database
   ##
   geo_postgresql['enable'] = true
   geo_postgresql['listen_address'] = '<ip_address_of_this_host>'
   geo_postgresql['sql_user_password'] = '<tracking_database_password_md5_hash>'

   ##
   ## Configure FDW connection to the replica database
   ##
   geo_secondary['db_fdw'] = true
   geo_postgresql['fdw_external_password'] = '<replica_database_password_plaintext>'
   geo_postgresql['md5_auth_cidr_addresses'] = ['<replica_database_ip>/32']
   gitlab_rails['db_host'] = '<replica_database_ip>'

   # Prevent reconfigure from attempting to run migrations on the replica DB
   gitlab_rails['auto_migrate'] = false

   ##
   ## Disable all other services that aren't needed, since we don't have a role
   ## that does this.
   ##
   alertmanager['enable'] = false
   consul['enable'] = false
   gitaly['enable'] = false
   gitlab_monitor['enable'] = false
   gitlab_workhorse['enable'] = false
   nginx['enable'] = false
   node_exporter['enable'] = false
   pgbouncer_exporter['enable'] = false
   postgresql['enable'] = false
   prometheus['enable'] = false
   redis['enable'] = false
   redis_exporter['enable'] = false
   repmgr['enable'] = false
   sidekiq['enable'] = false
   unicorn['enable'] = false
   ```

After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
following modifications:
1. Edit `/etc/gitlab/gitlab.rb` on each application server in the **secondary**
   cluster, and add the following:

   ```ruby
   ##
   ## Enable the Geo secondary role
   ##
   roles ['geo_secondary_role', 'application_role']

   ##
   ## Disable automatic migrations
   ##
   gitlab_rails['auto_migrate'] = false

   ##
   ## Configure the connection to the tracking DB. And disable application
   ## servers from running tracking databases.
   ##
   geo_secondary['db_host'] = '<geo_tracking_db_host>'
   geo_secondary['db_password'] = '<geo_tracking_db_password>'
   geo_postgresql['enable'] = false

   ##
   ## Configure connection to the streaming replica database, if you haven't
   ## already
   ##
   gitlab_rails['db_host'] = '<replica_database_host>'
   gitlab_rails['db_password'] = '<replica_database_password>'

   ##
   ## Configure connection to Redis, if you haven't already
   ##
   gitlab_rails['redis_host'] = '<redis_host>'
   gitlab_rails['redis_password'] = '<redis_password>'

   ##
   ## If you are using custom users not managed by Omnibus, you need to specify
   ## UIDs and GIDs like below, and ensure they match between servers in a
   ## cluster to avoid permissions issues
   ##
   user['uid'] = 9000
   user['gid'] = 9000
   web_server['uid'] = 9001
   web_server['gid'] = 9001
   registry['uid'] = 9002
   registry['gid'] = 9002
   ```
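After reconfiguring each application server, GitLab ships a built-in health check Rake task that can be used to sanity-check the node's Geo configuration (availability of the task depends on your GitLab EE version, so treat this as a suggested check rather than a required step):

```shell
# Run GitLab's Geo health checks on this node and report any misconfiguration.
sudo gitlab-rake gitlab:geo:check
```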
NOTE: **Note:**
If you had set up the PostgreSQL cluster using the omnibus package and you had set
In this diagram:
- If present, the [LDAP server](#ldap) should be configured to replicate for [Disaster Recovery](../disaster_recovery/index.md) scenarios.
- A **secondary** node performs different types of synchronization against the **primary** node, using a special
  authorization protected by JWT:
  - Repositories are cloned/updated via Git over HTTPS.
  - Attachments, LFS objects, and other files are downloaded via HTTPS using a private API endpoint.

From the perspective of a user performing Git operations:
The following are required to run Geo:
- An operating system that supports OpenSSH 6.9+ (needed for
  [fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md))

  The following operating systems are known to ship with a current version of OpenSSH:

  - [CentOS](https://www.centos.org) 7.4+
  - [Ubuntu](https://www.ubuntu.com) 16.04+
- PostgreSQL 9.6+ with [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
- Git 2.9+
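A quick way to compare an installed version against these minimums from a shell is GNU `sort -V`. The helper below is our own illustrative sketch (`meets_min` is not a GitLab tool):

```shell
# Return success when version $1 is at least minimum version $2.
# Relies on `sort -V` (GNU coreutils version-number sort).
meets_min() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: OpenSSH 7.4 satisfies the 6.9+ requirement above.
meets_min "7.4" "6.9" && echo "OpenSSH OK"

# Example: Git 2.8 falls short of the 2.9+ requirement.
meets_min "2.8" "2.9" || echo "Git too old"
```

Feed it the output of `ssh -V`, `git --version`, or `psql --version` on your own servers to check each requirement.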
Once removed from the Geo admin page, you must stop and uninstall the **secondary** node:
1. On the **secondary** node, stop GitLab:

   ```bash
   sudo gitlab-ctl stop
   ```

1. On the **secondary** node, uninstall GitLab:

   ```bash
   # Stop gitlab and remove its supervision process
   sudo gitlab-ctl uninstall

   # Debian/Ubuntu
   sudo dpkg --remove gitlab-ee

   # Redhat/Centos
   sudo rpm --erase gitlab-ee
   ```

Once GitLab has been uninstalled from the **secondary** node, the replication slot must be dropped from the **primary** node's database as follows:

1. On the **primary** node, start a PostgreSQL console session:

   ```bash
   sudo gitlab-psql
   ```

   NOTE: **Note:**
   Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.

1. Find the name of the relevant replication slot. This is the slot that is specified with `--slot-name` when running the replicate command: `gitlab-ctl replicate-geo-database`.

   ```sql
   SELECT * FROM pg_replication_slots;
   ```

1. Remove the replication slot for the **secondary** node:

   ```sql
   SELECT pg_drop_replication_slot('<name_of_slot>');
   ```
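To confirm the slot was removed, the same catalog view can be queried again from the open `gitlab-psql` session; after a successful drop, the slot name should no longer appear:

```sql
-- The dropped slot should no longer be listed.
SELECT slot_name, active FROM pg_replication_slots;
```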