For instructions about how to set up Patroni on the primary node, see the
If you are currently using `repmgr` on your Geo primary, see [these instructions](#migrating-from-repmgr-to-patroni) for migrating from `repmgr` to Patroni.
A production-ready and secure setup requires at least three Patroni instances on
the primary site, and a similar configuration on the secondary sites. Be sure to
use password credentials and other database best practices.
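As one sketch of that practice: in Omnibus GitLab, database password settings in `/etc/gitlab/gitlab.rb` are supplied as MD5 hashes rather than plaintext, and a bundled helper generates them. The username below is only an example:

```shell
# Generate an MD5 password hash for a database user. The command prompts for
# the password and prints the hash, which you then place in the corresponding
# password setting in /etc/gitlab/gitlab.rb.
sudo gitlab-ctl pg-password-md5 gitlab
```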
Similar to `repmgr`, using Patroni on a secondary node is optional.
### Step 1. Configure Patroni permanent replication slot on the primary site

To set up database replication with Patroni on a secondary node, we need to
configure a _permanent replication slot_ on the primary node's Patroni cluster,
and ensure password authentication is used.

For each Patroni instance on the primary site, **starting on the Patroni
Leader instance**:

1. SSH into your Patroni instance and login as root.

1. Add the following to the `/etc/gitlab/gitlab.rb` file:

   ```ruby
   consul['enable'] = true

   patroni['use_pg_rewind'] = true
   patroni['postgresql']['max_wal_senders'] = 8 # Use double of the amount of patroni/reserved slots (3 patronis + 1 reserved slot for a Geo secondary).
   patroni['postgresql']['max_replication_slots'] = 8 # Use double of the amount of patroni/reserved slots (3 patronis + 1 reserved slot for a Geo secondary).

   # You need one entry for each secondary, with a unique name following PostgreSQL slot_name constraints:
   patroni['replication_slots'] = {
     'geo_secondary' => { 'type' => 'physical' }
   }

   postgresql['md5_auth_cidr_addresses'] = [
     'PATRONI_SECONDARY1_IP/32', 'PATRONI_SECONDARY2_IP/32', 'PATRONI_SECONDARY3_IP/32', 'PATRONI_SECONDARY_PGBOUNCER/32', # We list all secondary instances as they can all become a Standby Leader
     # any other instance that needs access to the database as per documentation
   ]
   ```

### Step 2. Configure a Standby cluster on the secondary site

On the Patroni instances for the secondary site, enable the standby cluster and
point it at the primary's permanent replication slot by adding the following to
the `/etc/gitlab/gitlab.rb` file:

```ruby
patroni['enable'] = true
patroni['standby_cluster']['enable'] = true
patroni['standby_cluster']['host'] = 'PATRONI_PRIMARY_LEADER_IP' # this needs to be changed anytime the primary Leader changes
patroni['standby_cluster']['primary_slot_name'] = 'geo_secondary' # or the unique replication slot name you setup before
```
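After reconfiguring, one way to confirm the permanent replication slot was created is to query PostgreSQL's `pg_replication_slots` view on the primary's Patroni Leader. This is a sketch assuming the slot name `geo_secondary` used above:

```shell
# On the Patroni Leader of the primary site, list replication slots and
# confirm the permanent slot (for example, geo_secondary) is present.
sudo gitlab-psql -c "SELECT slot_name, slot_type, active FROM pg_replication_slots;"
```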