Commit aaef2603 authored by Achilleas Pipinellis's avatar Achilleas Pipinellis

Merge branch 'dewet-geo-docs' into 'master'

Refactor instructions for Geo docs

When I followed these instructions, it was hard to keep my place or get a feel for how close I was to the beginning or the end, so I am trying to create a numbered list for the **Secondary Node** instructions, but doing the formatting while keeping it a list is proving to be really hard.

/cc @axil

See merge request !864
parents 3e51eeca 437c4c8d
@@ -43,18 +43,14 @@ Keep in mind that:
## Setup instructions
GitLab Geo requires some additional work installing and configuring your
instance compared to a normal setup.
There are a couple of things you need to do in order to have one or more GitLab
Geo instances. Follow the steps below in the **exact order** that they appear:
1. Follow the instructions to [install GitLab Enterprise Edition][install-ee]
on the server that will serve as the secondary Geo node, but don't further
configure GitLab as authentication will be handled by the primary node (more
on this in the configuration step).
1. [Set up database replication](database.md) in a `primary <-> secondary (read-only)` topology.
1. [Configure GitLab](configuration.md) and set the primary and secondary nodes.
In order to set up one or more GitLab Geo instances, follow the steps below in
this **exact order**:
1. Follow the first 3 steps to [install GitLab Enterprise Edition][install-ee]
on the server that will serve as the secondary Geo node. Do not log in or
set up anything else on the secondary node for the moment.
1. [Set up the database replication](database.md) (`primary <-> secondary (read-only)` topology)
1. [Configure GitLab](configuration.md) to set the primary and secondary nodes.
## After setup
# GitLab Geo configuration
> **Important:**
Make sure you have followed the first two steps of the
[Setup instructions](README.md#setup-instructions).
This is the final step you need to follow in order to set up a Geo node.
---
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Setting up GitLab](#setting-up-gitlab)
- [Prerequisites](#prerequisites)
- [Step 1. Adding the primary GitLab node](#step-1-adding-the-primary-gitlab-node)
- [Step 2. Updating the `known_hosts` file of the secondary nodes](#step-2-updating-the-known_hosts-file-of-the-secondary-nodes)
- [Step 3. Copying the database encryption key](#step-3-copying-the-database-encryption-key)
- [Step 4. Enabling the secondary GitLab node](#step-4-enabling-the-secondary-gitlab-node)
- [Step 5. Replicating the repositories data](#step-5-replicating-the-repositories-data)
- [Step 6. Regenerating the authorized keys in the secondary node](#step-6-regenerating-the-authorized-keys-in-the-secondary-node)
- [Next steps](#next-steps)
- [Adding another secondary Geo node](#adding-another-secondary-geo-node)
- [Additional information for the SSH key pairs](#additional-information-for-the-ssh-key-pairs)
- [Troubleshooting](#troubleshooting)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Setting up GitLab
>**Notes:**
- Don't set up any custom authentication in the secondary nodes; this will be
handled by the primary node.
- Do not add anything in the secondary nodes' Geo admin area
(**Admin Area ➔ Geo Nodes**). This is handled solely by the primary node.
---
After having installed GitLab Enterprise Edition in the instance that will serve
as a Geo node and set up the [database replication](database.md), the next steps can be summed
up to:
1. Configure the primary node
@@ -13,85 +42,159 @@ up to:
1. Start GitLab in the secondary node's machine
1. Configure every secondary node in the primary's Admin screen
After the GitLab instance is online and defined in the **Geo Nodes** admin screen,
new data will start to be automatically replicated, but you still need to copy
old data from the primary machine (more information below).
### Prerequisites
## Primary node GitLab setup
This is the last step of configuring a Geo node. Make sure you have followed the
first two steps of the [Setup instructions](README.md#setup-instructions):
>**Notes:**
- You will need to set up your database in a **Primary <-> Secondary (read-only)** replication
topology, and your primary node should always point to the primary database
instance. If you haven't done that already, read [database replication](./database.md).
- Only in the Geo nodes admin area of the primary node will you be adding all
nodes' information (secondary and primary). Do not add anything in the Geo
nodes admin area of the secondaries.
To set up the primary node:
1. [Create the SSH key pair][ssh-pair] for the primary node.
1. Visit the primary node's **Admin Area > Geo Nodes** (`/admin/geo_nodes`).
1. Add your primary node by providing its full URL and the public SSH key
1. You have already installed on the secondary server the same version of
GitLab Enterprise Edition that is present on the primary server.
1. You have set up the database replication.
1. Your secondary node is allowed to communicate via HTTP/HTTPS and SSH with
your primary node (make sure your firewall is not blocking that). A quick
way to check this is shown below.
Some of the following steps require you to configure the primary and secondary
nodes almost at the same time. For your convenience, make sure you have SSH
sessions open on all nodes, as we will be moving back and forth between them.
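To verify that connectivity prerequisite before you start, you can probe the
primary from the secondary. This is only an illustrative sketch and not part of
the official steps; `primary.example.com` is a placeholder for your primary
node's FQDN:

```bash
# Run from the secondary node.

# HTTP/HTTPS reachability (expect an HTTP status line back)
curl -I https://primary.example.com

# SSH reachability on port 22 (expect a host key prompt or banner,
# even if the remote command itself is rejected)
ssh -o ConnectTimeout=5 git@primary.example.com exit
```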
### Step 1. Adding the primary GitLab node
1. SSH into the **primary** node and log in as root:
```
sudo -i
```
1. Create a new SSH key pair for the primary node. Choose the default location
and leave the password blank by hitting 'Enter' three times:
```bash
sudo -u git -H ssh-keygen -b 4096 -C 'Primary GitLab Geo node'
```
Read more in [additional info for SSH key pairs](#additional-information-for-the-ssh-key-pairs).
1. Get the contents of `id_rsa.pub` that was just created:
```
# Omnibus GitLab installations
sudo -u git cat /var/opt/gitlab/.ssh/id_rsa.pub
# Installations from source
sudo -u git cat /home/git/.ssh/id_rsa.pub
```
1. Visit the primary node's **Admin Area ➔ Geo Nodes** (`/admin/geo_nodes`) in
your browser.
1. Add the primary node by providing its full URL and the public SSH key
you created previously. Make sure to check the box 'This is a primary node'
when adding it.
![Add new primary Geo node](img/geo_nodes_add_new.png)
---
1. Click the **Add node** button.
>**Note:**
Don't set anything up for the `secondary` node yet, make sure to follow the
[Secondary node GitLab setup](#secondary-node-gitlab-setup) first.
### Step 2. Updating the `known_hosts` file of the secondary nodes
1. SSH into the **secondary** node and log in as root:
In the following table you can see what all these settings mean:
```
sudo -i
```
| Setting | Description |
| --------- | ----------- |
| Primary | This marks a Geo Node as primary. There can be only one primary; make sure that you first add the primary node and then all the others. |
| URL | Your instance's full URL, in the same way it is configured in `gitlab.yml` (source based installations) or `/etc/gitlab/gitlab.rb` (omnibus installations). |
| Public Key | The SSH public key of the user that your GitLab instance runs on (unless changed, should be the user `git`). That means that you have to go to each Geo node separately and create an SSH key pair. See the [SSH key creation][ssh-pair] section. |
1. The secondary nodes need to know the SSH fingerprint of the primary node that
will be used for the Git clone/fetch operations. In order to add it to the
`known_hosts` file, run the following command and type `yes` when asked:
## Secondary node GitLab setup
```
sudo -u git -H ssh git@<primary-node-url>
```
>**Note:**
The Geo nodes admin area (**Admin Area > Geo Nodes**) is not used when setting
up the secondary nodes. This is handled on the primary one.
Replace `<primary-node-url>` with the FQDN of the primary node.
To install a secondary node, you must follow the normal GitLab Enterprise
Edition installation, with some extra requirements:
1. Verify that the fingerprint was added by checking `known_hosts`:
- You should point your database connection to a [replicated instance](./database.md).
- Your secondary node should be allowed to communicate via HTTP/HTTPS and
SSH with your primary node (make sure your firewall is not blocking that).
- Don't perform any of the extra steps you would do for a normal new installation.
- Don't set up any custom authentication (this will be handled by the `primary` node).
```
# Omnibus GitLab installations
cat /var/opt/gitlab/.ssh/known_hosts

# Installations from source
cat /home/git/.ssh/known_hosts
```

You need to make sure you restored the database backup (that is part of setting
up replication) and that the primary node's PostgreSQL instance is ready to
replicate data.
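If you prefer a non-interactive way to populate `known_hosts` (for example when
scripting the setup), `ssh-keyscan` can be used instead of the interactive `ssh`
call above. Treat this as an optional sketch rather than part of the documented
procedure, and replace `<primary-node-url>` with the FQDN of the primary node:

```bash
# Append the primary node's host key to the git user's known_hosts file
# (Omnibus path shown; use /home/git/.ssh/known_hosts for source installations)
ssh-keyscan -H <primary-node-url> | sudo -u git tee -a /var/opt/gitlab/.ssh/known_hosts
```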
### Database Encryption Key
### Step 3. Copying the database encryption key
GitLab stores a unique encryption key on disk that we use to safely store
sensitive data in the database. Any secondary node must have the
**exact same value** for `db_key_base` as defined in the primary one.
1. SSH into the **primary** node and log in as root:
```
sudo -i
```
1. Find the value of `db_key_base` and copy it:
```
# Omnibus GitLab installations
cat /etc/gitlab/gitlab-secrets.json
# Installations from source
cat /home/git/gitlab/config/secrets.yml
```
1. SSH into the **secondary** node and log in as root:
```
sudo -i
```
1. Open the secrets file and paste the value of `db_key_base` you copied in the
previous step:
```
# Omnibus GitLab installations
editor /etc/gitlab/gitlab-secrets.json
# Installations from source
editor /home/git/gitlab/config/secrets.yml
```
1. Save and close the file.
- For Omnibus installations it is stored at `/etc/gitlab/gitlab-secrets.json`.
- For installations from source it is stored at `/home/git/gitlab/config/secrets.yml`.
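To quickly locate the key without opening the whole file, a simple `grep`
against those files works on either installation type; this is just a
convenience sketch:

```bash
# On the primary node: print the line containing db_key_base

# Omnibus GitLab installations
sudo grep db_key_base /etc/gitlab/gitlab-secrets.json

# Installations from source
sudo grep db_key_base /home/git/gitlab/config/secrets.yml
```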
### Step 4. Enabling the secondary GitLab node
Find that key in the primary node and copy-paste its value to the secondaries.
1. SSH into the **secondary** node and log in as root:
### Enable the secondary GitLab instance
```
sudo -i
```
1. Create a new SSH key pair for the secondary node. Choose the default location
and leave the password blank by hitting 'Enter' three times:
```bash
sudo -u git -H ssh-keygen -b 4096 -C 'Secondary GitLab Geo node'
```
Your new GitLab secondary node can now be safely started.
Read more in [additional info for SSH key pairs](#additional-information-for-the-ssh-key-pairs).
1. Get the contents of `id_rsa.pub` that was just created:
1. [Create the SSH key pair][ssh-pair] for the secondary node.
1. Visit the primary node's **Admin Area > Geo Nodes** (`/admin/geo_nodes`).
1. Add your secondary node by providing its full URL and the public SSH key
you created previously.
1. Hit the **Add node** button.
```
# Omnibus installations
sudo -u git cat /var/opt/gitlab/.ssh/id_rsa.pub
# Installations from source
sudo -u git cat /home/git/.ssh/id_rsa.pub
```
1. Visit the **primary** node's **Admin Area ➔ Geo Nodes** (`/admin/geo_nodes`)
in your browser.
1. Add the secondary node by providing its full URL and the public SSH key
you created previously. **Do NOT** check the box 'This is a primary node'.
1. Click the **Add node** button.
---
@@ -101,29 +204,41 @@ accessible.
The two most obvious issues that replication can have here are:
- Database replication not working well
- Instance to instance notification not working. In that case, it can be
one of the following:
- You are using a custom certificate or custom CA (see the
[Troubleshooting](#troubleshooting) section)
- Instance is firewalled (check your firewall rules)
1. Database replication not working well (a diagnostic sketch follows this list)
1. Instance to instance notification not working. In that case, it can be
one of the following:
- You are using a custom certificate or custom CA (see the
[Troubleshooting](#troubleshooting) section)
- Instance is firewalled (check your firewall rules)
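To narrow down which of the two is the culprit, you can check the replication
status directly on the primary's PostgreSQL instance. The query below uses the
standard `pg_stat_replication` view and mirrors the Omnibus `psql` invocation
used earlier in this guide; it is only a diagnostic sketch:

```bash
# On the primary node: list connected standby servers.
# An empty result means the secondary is not streaming from the primary.
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql \
  -d template1 \
  -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
```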
### Repositories data replication
### Step 5. Replicating the repositories data
Getting a new secondary Geo node up and running will also require the
repositories directory to be synced from the primary node. You can use `rsync`
for that.

Make sure `rsync` is installed in both the primary and secondary servers and
that root SSH access with a password is enabled. Otherwise, you can set up an
SSH key-based connection between the servers (an example follows the commands
below).
1. SSH into the **secondary** node and log in as root:
```
sudo -i
```
1. Assuming `1.2.3.4` is the IP of the primary node, run the following command
to start the sync:
```bash
# For Omnibus installations
rsync -guavrP root@1.2.3.4:/var/opt/gitlab/git-data/repositories/ /var/opt/gitlab/git-data/repositories/
gitlab-ctl reconfigure # to fix directory permissions
# For installations from source
rsync -guavrP root@1.2.3.4:/home/git/repositories/ /home/git/repositories/
chmod ug+rwX,o-rwx /home/git/repositories
```
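If you disabled root password authentication and set up an SSH key instead, the
same commands work by pointing `rsync` at the key via its `-e` option. The key
path below is only an example:

```bash
# For Omnibus installations, using a dedicated key instead of a root password
rsync -guavrP -e "ssh -i /root/.ssh/geo_rsync_key" \
  root@1.2.3.4:/var/opt/gitlab/git-data/repositories/ /var/opt/gitlab/git-data/repositories/
gitlab-ctl reconfigure # to fix directory permissions
```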
If this step is not followed, the secondary node will eventually clone and
fetch every missing repository as they are updated with new commits on the
@@ -133,12 +248,12 @@ While active repositories will be eventually replicated, if you don't rsync
the files, any archived/inactive repositories will not end up in the secondary node
as Geo doesn't run any routine task to look for missing repositories.
### Authorized keys regeneration
### Step 6. Regenerating the authorized keys in the secondary node
The final step is to regenerate the keys for `~/.ssh/authorized_keys`
(HTTPS clone will still work without this extra step).

On the **secondary** node where the database is [already replicated](./database.md),
run:
```
@@ -152,47 +267,28 @@ sudo -u git -H bundle exec rake gitlab:shell:setup RAILS_ENV=production
This will enable `git` operations to authorize against your existing users.
New users and SSH keys updated after this step will be replicated automatically.
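As a quick sanity check after running the rake task, you can confirm that the
`authorized_keys` file was repopulated. The paths below follow the conventions
used elsewhere in this guide; this is only an optional sketch:

```bash
# Count the regenerated entries (expect roughly one line per user SSH key)

# Omnibus GitLab installations
sudo wc -l /var/opt/gitlab/.ssh/authorized_keys

# Installations from source
sudo wc -l /home/git/.ssh/authorized_keys
```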
### Ready to use
### Next steps
Your instance should be ready to use. You can visit the Admin area in the
secondary node to check if it's correctly identified as a secondary Geo node and
if Geo is enabled.
Your nodes should now be ready to use. You can log in to the secondary node
with the same credentials as you used on the primary. Visit the secondary node's
**Admin Area ➔ Geo Nodes** (`/admin/geo_nodes`) in your browser to check if it's
correctly identified as a secondary Geo node and if Geo is enabled.
If your installation isn't working properly, check the
[troubleshooting](#troubleshooting) section.
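If you prefer a command-line check over the browser, the Rails runner shown in
the [troubleshooting](#troubleshooting) section can also report the node's role.
The `Gitlab::Geo` helpers below are assumed to be available in your release, so
treat this as an optional sketch:

```bash
# Run on the secondary node: both commands should print "true"
sudo gitlab-rails runner "puts Gitlab::Geo.enabled?"
sudo gitlab-rails runner "puts Gitlab::Geo.secondary?"
```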
## Create SSH key pairs for new Geo nodes
## Adding another secondary Geo node
>**Note:**
These are general instructions to create a new SSH key pair for a new Geo node,
either primary or secondary.
To add another Geo node to an already configured Geo infrastructure, just follow
[the steps starting from step 2](#step-2-updating-the-known_hosts-file-of-the-secondary-nodes).
Just omit the first step, which sets up the primary node.
---
## Additional information for the SSH key pairs
When adding a new Geo node, you must provide an SSH public key of the user that
your GitLab instance runs on (unless changed, should be the user `git`). This
user will act as a "normal user" who fetches from the primary Geo node.
1. Run the command below on each server that will be a Geo node:
```bash
sudo -u git -H ssh-keygen
```
1. Get the contents of `id_rsa.pub` that was just created:
```
# Omnibus installations
sudo -u git cat /var/opt/gitlab/.ssh/id_rsa.pub
# Installations from source
sudo -u git cat /home/git/.ssh/id_rsa.pub
```
1. Copy them to the admin area of the **primary** node (**Admin Area > Geo Nodes**).
---
If for any reason you generate the key using a different name from the default
`id_rsa`, or you want to generate an extra key only for the repository
synchronization feature, you can do so, but you have to create/modify your
@@ -213,28 +309,6 @@ Host example.com # The FQDN of the primary Geo node
IdentityFile ~/.ssh/mycustom.key # The location of your private key
```
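After creating or modifying the `~/.ssh/config` entry, it can be worth
confirming that the custom key is actually picked up when connecting to the
primary. This optional sketch reuses the example names from above:

```bash
# Run as the git user on the node that holds the custom key. The verbose
# output lists which identity files ssh attempts to use.
sudo -u git -H ssh -v git@example.com 2>&1 | grep -i "identity file"
```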
### Add the primary node to the `known_hosts` file of the secondary nodes
>**Note:**
This operation is only needed for the secondary nodes.
---
The secondary nodes need to know the SSH fingerprint of the primary node that
will be used for the Git clone/fetch operations. In order to add it to the
`known_hosts` file, while in the terminal of a secondary node, run the
following command and type `yes` when asked:
```
sudo -u git -H ssh git@<primary-node-url>
```
Replace `<primary-node-url>` with the FQDN of the primary node. You can verify
that the fingerprint was added by checking:
- `/var/opt/gitlab/.ssh/known_hosts` for Omnibus installations or
- `/home/git/.ssh/known_hosts` for installations from source
## Troubleshooting
Setting up Geo requires careful attention to details and sometimes it's easy to
@@ -247,13 +321,13 @@ where you have to fix (all commands and path locations are for Omnibus installs)
writing permissions.
- Any secondary nodes should point only to read-only instances.
- Can Geo detect my current node correctly?
- Geo uses your defined node from the `Admin > Geo` screen, and tries to match
with the value defined in `/etc/gitlab/gitlab.rb` configuration file.
The relevant line looks like: `external_url "http://gitlab.example.com"`.
- To check if the node on the current machine is correctly detected, type:
```
sudo gitlab-rails runner "Gitlab::Geo.current_node"
sudo gitlab-rails runner "puts Gitlab::Geo.current_node.inspect"
```
and expect something like:
@@ -4,53 +4,111 @@ This document describes the minimal steps you have to take in order to
replicate your GitLab database into another server. You may have to change
some values according to your database setup, how big it is, etc.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [PostgreSQL replication](#postgresql-replication)
- [PostgreSQL - Configure the primary server](#postgresql-configure-the-primary-server)
- [PostgreSQL - Configure the secondary server](#postgresql-configure-the-secondary-server)
- [PostgreSQL - Initiate the replication process](#postgresql-initiate-the-replication-process)
- [Prerequisites](#prerequisites)
- [Step 1. Configure the primary server](#step-1-configure-the-primary-server)
- [Step 2. Configure the secondary server](#step-2-configure-the-secondary-server)
- [Step 3. Initiate the replication process](#step-3-initiate-the-replication-process)
- [Next steps](#next-steps)
- [MySQL replication](#mysql-replication)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## PostgreSQL replication
The GitLab primary node, where the write operations happen, will connect to the
`primary` database server, and the secondary nodes, which are read-only, will
connect to the `secondary` database servers (which are read-only too).

>**Note:**
In many databases' documentation you will see `primary` being referred to as `master`
and `secondary` as either `slave` or `standby` server (read-only).
### Prerequisites
The following guide assumes that:
- You are using PostgreSQL 9.1 or later which includes the
[`pg_basebackup` tool][pgback]. As of this writing, the latest Omnibus
packages (8.5) have version 9.2.
- You have a primary server already set up (the GitLab server you are
replicating from), running PostgreSQL 9.2.x, and you
have a new secondary server set up on the same OS and PostgreSQL version. If
you are using Omnibus, make sure the GitLab version is the same on all nodes.
- The IP of the primary server for our examples will be `1.2.3.4`, whereas the
secondary's IP will be `5.6.7.8`.
[pgback]: http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html
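Since the prerequisites require the same PostgreSQL version on the primary and
the secondary, it can help to verify it up front. The commands below are only a
sketch; the paths match the Omnibus and source conventions used in this guide:

```bash
# Run on both the primary and the secondary and compare the output

# Omnibus GitLab installations
/opt/gitlab/embedded/bin/psql --version

# Installations from source
psql --version
```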
### Step 1. Configure the primary server
**For Omnibus installations**
1. SSH into your GitLab **primary** server and log in as root:
```
sudo -i
```
1. Omnibus GitLab already has a replication user called `gitlab_replicator`.
You must set its password manually. Replace `thepassword` with a strong
password:
```bash
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql \
-d template1 \
-c "ALTER USER gitlab_replicator WITH ENCRYPTED PASSWORD 'thepassword'"
```
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
postgresql['listen_address'] = "1.2.3.4"
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32']
postgresql['md5_auth_cidr_addresses'] = ['5.6.7.8/32']
postgresql['sql_replication_user'] = "gitlab_replicator"
postgresql['wal_level'] = "hot_standby"
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 10
postgresql['hot_standby'] = "on"
```
Where `1.2.3.4` is the public IP address of the primary server, and `5.6.7.8`
the public IP address of the secondary one. If you want to add another
secondary, the relevant setting would look like:
```ruby
postgresql['md5_auth_cidr_addresses'] = ['5.6.7.8/32','11.22.33.44/32']
```
### PostgreSQL - Configure the primary server
Edit the `wal` values as you see fit.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
1. Now that the PostgreSQL server is set up to accept remote connections, run
`netstat -plnt` to make sure that PostgreSQL is listening on the server's
public IP (see the example after this list).
1. Continue to [set up the secondary server](#step-2-configure-the-secondary-server).
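The `netstat` check mentioned in the list above can be narrowed down to the
PostgreSQL port; this is just an illustrative filter:

```bash
# PostgreSQL should be listening on the primary's public IP, port 5432
netstat -plnt | grep 5432
```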
---
**For installations from source**
1. SSH into your database **primary** server and log in as root:
```
sudo -i
```
1. Create a replication user named `gitlab_replicator`:
```bash
sudo -u postgres psql -c "CREATE USER gitlab_replicator REPLICATION ENCRYPTED PASSWORD 'thepassword';"
```
1. Edit `postgresql.conf` to configure the primary server for streaming replication
(for Debian/Ubuntu that would be `/etc/postgresql/9.x/main/postgresql.conf`):
```bash
listen_address = '1.2.3.4'
@@ -66,7 +124,7 @@ The following guide assumes that:
1. Set the access control on the primary to allow TCP connections using the
server's public IP and set the connection from the secondary to require a
password. Edit `pg_hba.conf` (for Debian/Ubuntu that would be
`/etc/postgresql/9.x/main/pg_hba.conf`):
```bash
host all all 127.0.0.1/32 trust
@@ -75,70 +133,88 @@ The following guide assumes that:
```
Where `1.2.3.4` is the public IP address of the primary server, and `5.6.7.8`
the public IP address of the secondary one. If you want to add another
secondary, add one more row like the replication one and change the IP
address:
```bash
host all all 127.0.0.1/32 trust
host all all 1.2.3.4/32 trust
host replication gitlab_replicator 5.6.7.8/32 md5
host replication gitlab_replicator 11.22.33.44/32 md5
```
---
1. Restart PostgreSQL for the changes to take effect.
1. Now that the PostgreSQL server is set up to accept remote connections, run
`netstat -plnt` to make sure that PostgreSQL is listening on the server's
public IP.
### Step 2. Configure the secondary server
**For Omnibus installations**
1. Omnibus GitLab already has a replication user called `gitlab_replicator`.
You must set its password manually:
1. SSH into your GitLab **secondary** server and log in as root:
```
sudo -i
```
1. Test that the remote connection to the primary server works:
```
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h 1.2.3.4 -U gitlab_replicator -d gitlabhq_production -W
```
When prompted, enter the password you set in the first step for the
`gitlab_replicator` user. If all worked correctly, you should see the
database prompt.
1. Exit the PostgreSQL console:
```bash
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql \
-d template1 \
-c "ALTER USER gitlab_replicator WITH ENCRYPTED PASSWORD 'thepassword'"
```
```
\q
```
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
postgresql['listen_address'] = "1.2.3.4"
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32']
postgresql['md5_auth_cidr_addresses'] = ['5.6.7.8/32']
postgresql['sql_replication_user'] = "gitlab_replicator"
postgresql['wal_level'] = "hot_standby"
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 10
postgresql['hot_standby'] = "on"
```
Where `1.2.3.4` is the public IP address of the primary server, and `5.6.7.8`
the public IP address of the secondary one.
Edit the `wal` values as you see fit.
1. [Reconfigure GitLab][] for the changes to take effect.
1. Continue to [initiate the replication process](#step-3-initiate-the-replication-process).
---
Now that the PostgreSQL server is set up to accept remote connections, run
`netstat -plnt` to make sure that PostgreSQL is listening on the server's
public IP.
**For installations from source**
1. SSH into your database **secondary** server and log in as root:

```
sudo -i
```

1. Test that the remote connection to the primary server works:

```
sudo -u postgres psql -h 1.2.3.4 -U gitlab_replicator -d gitlabhq_production -W
```
When prompted, enter the password you set in the first step for the
`gitlab_replicator` user. If all worked correctly, you should see the
database prompt.
### PostgreSQL - Configure the secondary server
1. Exit the PostgreSQL console:
**For installations from source**
```
\q
```
1. Edit `postgresql.conf` to configure the secondary for streaming replication
(for Debian/Ubuntu that would be `/etc/postgresql/9.x/main/postgresql.conf`):
```bash
wal_level = hot_standby
@@ -148,76 +224,93 @@ prompt.
hot_standby = on
```
1. Restart PostgreSQL for the changes to take effect
---
**For Omnibus installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
postgresql['wal_level'] = "hot_standby"
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 10
postgresql['hot_standby'] = "on"
```
1. [Reconfigure GitLab][] for the changes to take effect.
1. Restart PostgreSQL for the changes to take effect.
1. Continue to [initiate the replication process](#step-3-initiate-the-replication-process).
### PostgreSQL - Initiate the replication process
### Step 3. Initiate the replication process
Below we provide a script that connects to the primary server, replicates the
database and creates the needed files for replication.
The directories used are the defaults that are set up in Omnibus. If you have
changed any defaults or are using a source installation, configure it as you
see fit, replacing the directories and paths.
>**Warning:**
Make sure to run this on the **secondary** server as it removes all PostgreSQL's
data before running `pg_basebackup`.
1. SSH into your GitLab **secondary** server and log in as root:

```
sudo -i
```

1. Save the snippet below in a file, let's say `/tmp/replica.sh`:

```bash
#!/bin/bash

PORT="5432"
USER="gitlab_replicator"
echo ---------------------------------------------------------------
echo WARNING: Make sure this script is run from the secondary server
echo ---------------------------------------------------------------
echo
echo Enter the IP of the primary PostgreSQL server
read HOST
echo Enter the password for $USER@$HOST
read -s PASSWORD

echo Stopping PostgreSQL and all GitLab services
gitlab-ctl stop

echo Backing up postgresql.conf
sudo -u gitlab-psql mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/

echo Cleaning up old cluster directory
sudo -u gitlab-psql rm -rf /var/opt/gitlab/postgresql/data
rm -f /tmp/postgresql.trigger

echo Starting base backup as the replicator user
echo Enter the password for $USER@$HOST
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P

echo Writing recovery.conf file
sudo -u gitlab-psql bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
standby_mode = 'on'
primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD'
trigger_file = '/tmp/postgresql.trigger'
_EOF1_
"

echo Restoring postgresql.conf
sudo -u gitlab-psql mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/

echo Starting PostgreSQL and all GitLab services
gitlab-ctl start
```

1. Run it with:

```
bash /tmp/replica.sh
```

When prompted, enter the password you set up for the `gitlab_replicator`
user in the first step.

The replication process is now over.

### Next steps

Now that the database replication is done, the next step is to configure GitLab.
[➤ GitLab Geo configuration](configuration.md)
## MySQL replication
We don't support MySQL replication for GitLab Geo.
[pgback]: http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html
[reconfigure GitLab]: ../administration/restart_gitlab.md#omnibus-gitlab-reconfigure
# Geo nodes admin area
For more information about setting up GitLab Geo, read the
[Geo documentation](../../gitlab-geo/README.md).
When you're done, you can navigate to **Admin area ➔ Geo nodes** (`/admin/geo_nodes`).
In the following table you can see what all these settings mean:
| Setting | Description |
| --------- | ----------- |
| Primary | This marks a Geo Node as primary. There can be only one primary; make sure that you first add the primary node and then all the others. |
| URL | Your instance's full URL, in the same way it is configured in `/etc/gitlab/gitlab.rb` (Omnibus GitLab installations) or `gitlab.yml` (source based installations). |
| Public Key | The SSH public key of the user that your GitLab instance runs on (unless changed, should be the user `git`). |
A primary node will have a star right next to it to distinguish it from the
secondaries.
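Since the URL setting must match the instance's configured external URL exactly,
a quick way to double-check it before saving is to read it straight from the
configuration files. This is only a convenience sketch:

```bash
# Omnibus GitLab installations: the URL you enter should match this value
grep external_url /etc/gitlab/gitlab.rb

# Installations from source: check the host, port and https settings
grep -E "host:|port:|https:" /home/git/gitlab/config/gitlab.yml
```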