Commit b3288eec authored by Achilleas Pipinellis

Merge branch '1664-geo-improve-setup' into 'master'

Geo improved setup for 9.1

See merge request !1584
parents 5003d0b7 5b2bceec
......@@ -62,25 +62,15 @@ logins opened on all nodes as we will be moving back and forth.
sudo -i
```
1. Get the contents of `id_rsa.pub` for the git user:
1. Execute the command below to define the node as the primary Geo node:
```
# Omnibus GitLab installations
sudo -u git cat /var/opt/gitlab/.ssh/id_rsa.pub
gitlab-ctl set-geo-primary-node
```
Read more in [additional info for SSH key pairs](#additional-information-for-the-ssh-key-pairs).
1. Visit the primary node's **Admin Area ➔ Geo Nodes** (`/admin/geo_nodes`) in
your browser.
This command uses the `external_url` defined in `gitlab.rb` and the pre-generated SSH key pairs.
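You can confirm which URL will be registered by checking the configured `external_url` first, for example:
```bash
# Show the external_url that set-geo-primary-node will pick up (Omnibus installations)
grep '^external_url' /etc/gitlab/gitlab.rb
```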
1. Add the primary node by providing its full URL and the public SSH key
you created previously. Make sure to check the box 'This is a primary node'
when adding it.
![Add new primary Geo node](img/geo_nodes_add_new.png)
1. Click the **Add node** button.
Read more in [additional info for SSH key pairs](#additional-information-for-the-ssh-key-pairs).
### Step 2. Updating the `known_hosts` file of the secondary nodes
......@@ -103,7 +93,6 @@ logins opened on all nodes as we will be moving back and forth.
1. Verify that the fingerprint was added by checking `known_hosts`:
```
# Omnibus GitLab installations
cat /var/opt/gitlab/.ssh/known_hosts
```
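If the primary's fingerprint is missing, one way to add it is with `ssh-keyscan`; a sketch assuming Omnibus paths and a hypothetical `primary.example.com` hostname:
```bash
# Append the primary's RSA host key to the git user's known_hosts (hypothetical hostname)
sudo -u git sh -c "ssh-keyscan -t rsa primary.example.com >> /var/opt/gitlab/.ssh/known_hosts"
```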
......@@ -119,11 +108,10 @@ sensitive data in the database. Any secondary node must have the
sudo -i
```
1. Find the value of `db_key_base` and copy it:
1. Execute the command below to display the current encryption key and copy it:
```
# Omnibus GitLab installations
cat /etc/gitlab/gitlab-secrets.json | grep db_key_base
gitlab-rake geo:db:show_encryption_key
```
1. SSH into the **secondary** node and login as root:
......@@ -136,7 +124,6 @@ sensitive data in the database. Any secondary node must have the
previous step:
```
# Omnibus GitLab installations
editor /etc/gitlab/gitlab-secrets.json
```
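The value lives under the `gitlab_rails` section of that file; the relevant fragment looks roughly like this (placeholder shown, not a real key):
```json
{
  "gitlab_rails": {
    "db_key_base": "<db_key_base value copied from the primary>"
  }
}
```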
......@@ -150,69 +137,22 @@ sensitive data in the database. Any secondary node must have the
sudo -i
```
1. (This step is required only if you want to enable the new Disaster Recovery
feature in Alpha shipped in GitLab 9.0) Edit `/etc/gitlab/gitlab.rb`:
1. Edit `/etc/gitlab/gitlab.rb`:
```
geo_postgresql['enable'] = true
```
1. (This step is required only if you want to enable the new Disaster Recovery
feature in Alpha shipped in GitLab 9.0) Create `database_geo.yml` with the
information of your secondary PostgreSQL database. Note that GitLab will
set up another database instance separate from the primary, since this is
where the secondary will track its internal state:
```
sudo cp /opt/gitlab/embedded/service/gitlab-rails/config/database_geo.yml.postgresql /opt/gitlab/embedded/service/gitlab-rails/config/database_geo.yml
```
1. (This step is required only if you want to enable the new Disaster Recovery
feature in Alpha shipped in GitLab 9.0) Edit the content of
`database_geo.yml` in `production:` to be like the following:
```yaml
#
# PRODUCTION
#
production:
adapter: postgresql
encoding: unicode
database: gitlabhq_geo_production
pool: 10
username: gitlab_geo
# password:
host: /var/opt/gitlab/geo-postgresql
port: 5431
```
1. (This step is required only if you want to enable the new Disaster Recovery
feature in Alpha shipped in GitLab 9.0) Reconfigure GitLab:
1. Reconfigure GitLab:
```
sudo gitlab-ctl reconfigure
```
1. (This step is required only if you want to enable the new Disaster Recovery
feature in Alpha shipped in GitLab 9.0) Set up the Geo tracking database:
```
sudo gitlab-rake geo:db:setup
```
1. Create a new SSH key pair for the secondary node. Choose the default location
and leave the password blank by hitting 'Enter' three times:
```bash
sudo -u git -H ssh-keygen -b 4096 -C 'Secondary GitLab Geo node'
```
Read more in [additional info for SSH key pairs](#additional-information-for-the-ssh-key-pairs).
1. Get the contents of the `id_rsa.pub` that was just created:
1. Get the contents of the `id_rsa.pub` key that was pre-generated by Omnibus GitLab
and copy them:
```
# Omnibus installations
sudo -u git cat /var/opt/gitlab/.ssh/id_rsa.pub
```
......
......@@ -75,7 +75,6 @@ logins opened on all nodes as we will be moving back and forth.
1. Get the contents of `id_rsa.pub` for the git user:
```
# Installations from source
sudo -u git cat /home/git/.ssh/id_rsa.pub
```
......@@ -111,7 +110,6 @@ logins opened on all nodes as we will be moving back and forth.
1. Verify that the fingerprint was added by checking `known_hosts`:
```
# Installations from source
cat /home/git/.ssh/known_hosts
```
......@@ -127,11 +125,10 @@ sensitive data in the database. Any secondary node must have the
sudo -i
```
1. Find the value of `db_key_base` and copy it:
1. Execute the command below to display the current encryption key and copy it:
```
# Installations from source
cat /home/git/gitlab/config/secrets.yml | grep db_key_base
bundle exec rake geo:db:show_encryption_key
```
1. SSH into the **secondary** node and login as root:
......@@ -184,7 +181,6 @@ sensitive data in the database. Any secondary node must have the
previous step:
```
# Installations from source
editor /home/git/gitlab/config/secrets.yml
```
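In `secrets.yml` the value sits under the environment key; a rough sketch with a placeholder value:
```yaml
# config/secrets.yml -- fragment for illustration only
production:
  db_key_base: <db_key_base value copied from the primary>
```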
......@@ -210,7 +206,6 @@ sensitive data in the database. Any secondary node must have the
1. Get the contents of the `id_rsa.pub` that was just created:
```
# Installations from source
sudo -u git cat /home/git/.ssh/id_rsa.pub
```
......
......@@ -66,14 +66,12 @@ The following guide assumes that:
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
geo_primary_role['enable'] = true
postgresql['listen_address'] = "1.2.3.4"
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32']
postgresql['md5_auth_cidr_addresses'] = ['5.6.7.8/32']
postgresql['sql_replication_user'] = "gitlab_replicator"
postgresql['wal_level'] = "hot_standby"
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 10
postgresql['hot_standby'] = "on"
# postgresql['max_wal_senders'] = 10
# postgresql['wal_keep_segments'] = 10
```
Where `1.2.3.4` is the public IP address of the primary server, and `5.6.7.8`
......@@ -109,7 +107,9 @@ The following guide assumes that:
postgresql['md5_auth_cidr_addresses'] = ['5.6.7.8/32','11.22.33.44/32']
```
Edit the `wal` values as you see fit.
You may also want to adjust `wal_keep_segments` and `max_wal_senders` to
match your database replication requirements, as in the example below. Consult the [PostgreSQL - Replication documentation](https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html)
for more information.
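For example, in `/etc/gitlab/gitlab.rb` (illustrative values only):
```ruby
# Tune to how much WAL the secondary may need to catch up on
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 50
```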
1. Check to make sure your firewall rules are set so that the secondary nodes
can access port 5432 on the primary node.
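For example, with `ufw` on the primary (an assumption; adapt to your firewall tooling and substitute the secondary's real address for `5.6.7.8`):
```bash
# Allow the secondary to reach PostgreSQL on the primary
sudo ufw allow from 5.6.7.8 to any port 5432 proto tcp
```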
......@@ -146,11 +146,7 @@ The following guide assumes that:
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
postgresql['wal_level'] = "hot_standby"
postgresql['max_wal_senders'] = 10
postgresql['wal_keep_segments'] = 10
postgresql['hot_standby'] = "on"
gitlab_rails['auto_migrate'] = false # prevents migrations from being executed on the secondary server
geo_secondary_role['enable'] = true
```
1. [Reconfigure GitLab][] for the changes to take effect.
......@@ -175,59 +171,15 @@ data before running `pg_basebackup`.
sudo -i
```
1. Save the snippet below in a file, let's say `/tmp/replica.sh`:
```bash
#!/bin/bash
PORT="5432"
USER="gitlab_replicator"
echo ---------------------------------------------------------------
echo WARNING: Make sure this script is run from the secondary server
echo ---------------------------------------------------------------
echo
echo Enter the IP of the primary PostgreSQL server
read HOST
echo Enter the password for $USER@$HOST
read -s PASSWORD
echo Stopping PostgreSQL and all GitLab services
gitlab-ctl stop
echo Backing up postgresql.conf
sudo -u gitlab-psql mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/
echo Cleaning up old cluster directory
sudo -u gitlab-psql rm -rf /var/opt/gitlab/postgresql/data
rm -f /tmp/postgresql.trigger
echo Starting base backup as the replicator user
echo Enter the password for $USER@$HOST
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P
echo Writing recovery.conf file
sudo -u gitlab-psql bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
standby_mode = 'on'
primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD'
trigger_file = '/tmp/postgresql.trigger'
_EOF1_
"
echo Restoring postgresql.conf
sudo -u gitlab-psql mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/
echo Starting PostgreSQL and all GitLab services
gitlab-ctl start
```
1. Run it with:
1. Execute the command below to start a backup/restore and begin the replication:
```
bash /tmp/replica.sh
gitlab-ctl replicate-geo-database --host=1.2.3.4
```
When prompted, enter the password you set up for the `gitlab_replicator`
user in the first step.
Change the `--host=` value to the primary node's IP or FQDN. You can check other possible
parameters with `--help`. When prompted, enter the password you set up for
the `gitlab_replicator` user in the first step.
The replication process is now complete.
......
......@@ -74,7 +74,9 @@ The following guide assumes that:
See the Omnibus notes above for more details of `listen_address`.
Edit the `wal` values as you see fit.
You may also want to adjust `wal_keep_segments` and `max_wal_senders` to
match your database replication requirements, as in the sketch below. Consult the [PostgreSQL - Replication documentation](https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html)
for more information.
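For installations from source these are set directly in `postgresql.conf` on the primary; a sketch with illustrative values:
```
# postgresql.conf -- illustrative values only
max_wal_senders = 10
wal_keep_segments = 50
```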
1. Set the access control on the primary to allow TCP connections using the
server's public IP and set the connection from the secondary to require a
......
......@@ -42,6 +42,11 @@ namespace :geo do
Rake::Task['geo:config:restore'].reenable
end
end
desc 'Display database encryption key'
task show_encryption_key: :environment do
puts Rails.application.secrets.db_key_base
end
end
namespace :config do
......