1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
needs privileges to create the `gitlabhq_production` database.
1. If you are using a cloud-managed service, you may need to grant additional
roles to your `gitlab` user:
- Amazon RDS requires the [`rds_superuser`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Roles) role.
- Azure Database for PostgreSQL requires the [`azure_pg_admin`](https://docs.microsoft.com/en-us/azure/postgresql/howto-create-users#how-to-create-additional-admin-users-in-azure-database-for-postgresql) role.
1. Configure the GitLab application servers with the appropriate connection details
for your external PostgreSQL service in your `/etc/gitlab/gitlab.rb` file:
```ruby
# Disable the bundled Omnibus provided PostgreSQL
postgresql['enable'] = false
# PostgreSQL connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'unicode'
gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
gitlab_rails['db_password'] = 'DB password'
```
For more information on GitLab HA setups, refer to [configuring GitLab for HA](high_availability/gitlab.md).
1. Reconfigure for the changes to take effect:
```shell
sudo gitlab-ctl reconfigure
```
Because of the additional complexity involved in setting up this configuration
for PostgreSQL and Redis, it is not covered by this Geo multi-server documentation.
For more information about setting up a multi-server PostgreSQL cluster and Redis cluster using the Omnibus GitLab package, see the multi-server documentation for
[PostgreSQL](../../postgresql/replication_and_failover.md) and
[Redis](../../high_availability/redis.md).
As part of its High Availability stack, GitLab Premium includes a bundled version of [Consul](https://www.consul.io/), a service networking solution that can be managed through `/etc/gitlab/gitlab.rb`. Within the [GitLab architecture](../../development/architecture.md), Consul is supported for configuring:
1. [Monitoring in Scaled and Highly Available environments](monitoring_node.md)
1. [PostgreSQL High Availability with Omnibus](../postgresql/replication_and_failover.md)
A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the Consul cluster.
...
...
On each Consul node perform the following:
1. Make sure you collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), the IP addresses or DNS records of the Consul server nodes, before executing the next step.
1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
NOTE: **Note:** The role `postgres_role` was introduced with GitLab 10.3.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
1. Note the PostgreSQL node's IP address or hostname, port, and
plain text password. These will be necessary when configuring the GitLab
application servers later.
1. [Enable monitoring](#enable-monitoring)
Advanced configuration options are supported and can be added if
needed.
### High Availability with Omnibus GitLab **(PREMIUM ONLY)**
> Important notes:
>
> - This document will focus only on configuration supported with [GitLab Premium](https://about.gitlab.com/pricing/), using the Omnibus GitLab package.
> - If you are a Community Edition or Starter user, consider using a cloud hosted solution.
> - This document will not cover installations from source.
>
> - If HA setup is not what you were looking for, see the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
> for the Omnibus GitLab packages.
>
> Please read this document fully before attempting to configure PostgreSQL HA
> for GitLab.
>
> This configuration is GA in EE 10.2.
The recommended configuration for PostgreSQL HA requires:
- A minimum of three database nodes
  - Each node will run the following services:
    - `PostgreSQL` - The database itself
    - `repmgrd` - A service to monitor and handle failover in case of a failure
    - `Consul` agent - Used for service discovery, to alert other nodes when failover occurs
- A minimum of three `Consul` server nodes
- A minimum of one `pgbouncer` service node, but it's recommended to have one per database node
- An internal load balancer (TCP) is required when there is more than one `pgbouncer` service node
You also need to take into consideration the underlying network topology,
making sure you have redundant connectivity between all database and GitLab instances,
otherwise the network becomes a single point of failure.
#### Architecture
![PostgreSQL HA Architecture](img/pg_ha_architecture.png)
Database nodes run two services alongside PostgreSQL:

- Repmgrd. Monitors the cluster and handles failover when issues with the master occur. The failover consists of:
  - Selecting a new master for the cluster.
  - Promoting the new node to master.
  - Instructing remaining servers to follow the new master node.

  On failure, the old master node is automatically evicted from the cluster, and should be rejoined manually once recovered.
- Consul. Monitors the status of each node in the database cluster and tracks its health in a service definition on the Consul cluster.

Alongside each PgBouncer, there is a Consul agent that watches the status of the PostgreSQL service. If that status changes, Consul runs a script which updates the PgBouncer configuration and reloads PgBouncer.
##### Connection flow
Each service in the package comes with a set of [default ports](https://docs.gitlab.com/omnibus/package-information/defaults.html#ports). You may need to make specific firewall rules for the connections listed below:
- Application servers connect to either PgBouncer directly via its [default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#pgbouncer) or via a configured Internal Load Balancer (TCP) that serves multiple PgBouncers.
- PgBouncer connects to the primary database server's [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- Repmgr connects to the database servers' [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- PostgreSQL secondaries connect to the primary database server's [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- Consul servers and agents connect to each other's [Consul default ports](https://docs.gitlab.com/omnibus/package-information/defaults.html#consul).
#### Required information
Before proceeding with configuration, you will need to collect all the necessary
information.
##### Network information
PostgreSQL does not listen on any network interface by default. It needs to know
which IP address to listen on in order to be accessible to other services.
Similarly, PostgreSQL access is controlled based on the network source.
This is why you will need:
- IP address of each node's network interface. This can be set to `0.0.0.0` to
  listen on all interfaces. It cannot be set to the loopback address `127.0.0.1`.
- Network Address. This can be in subnet (e.g. `192.168.0.0/255.255.255.0`)
  or CIDR (e.g. `192.168.0.0/24`) form.
##### User information
Various services require different configuration to secure
the communication, as well as information required for running the service.
Below you will find details on each service and the minimum required
information you need to provide.
##### Consul information
When using the default setup, the minimum configuration requires:
- `CONSUL_USERNAME`. Defaults to `gitlab-consul`.
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of the Consul username/password pair.
  It can be generated with:

  ```shell
  sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
  ```

- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
A few notes on the service itself:
- The service runs under a system account, by default `gitlab-consul`.
  - If you are using a different username, you will have to specify it. We
    will refer to it with `CONSUL_USERNAME`.
- There will be a database user created with read-only access to the repmgr
  database.
- Passwords will be stored in the following locations:
  - `/etc/gitlab/gitlab.rb`: hashed
  - `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
  - `/var/opt/gitlab/consul/.pgpass`: plaintext
##### PostgreSQL information
When configuring PostgreSQL, we will set `max_wal_senders` to one more than
the number of database nodes in the cluster.
This is used to prevent replication from using up all of the
available database connections.
In this document we assume 3 database nodes, which makes this configuration:

```ruby
postgresql['max_wal_senders'] = 4
```
As previously mentioned, you'll have to prepare the network subnets that will
be allowed to authenticate with the database.
You'll also need to supply the IP addresses or DNS records of Consul
server nodes.
We will need the following password information for the application's database user:
- `POSTGRESQL_USERNAME`. Defaults to `gitlab`.
- `POSTGRESQL_USER_PASSWORD`. The password for the database user.
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair (see below for how to generate it).
- `PGBOUNCER_NODE`. The IP address or FQDN of the node running PgBouncer.
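Like the Consul hash above, `POSTGRESQL_PASSWORD_HASH` can be generated with the bundled helper:

```shell
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```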
A few notes on the service itself:
- The service runs as the same system account as the database.
  - In the package, this is by default `gitlab-psql`.
- If you use a non-default user account for the PgBouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement with `PGBOUNCER_USERNAME`.
- The service will have a regular database user account generated for it.
  - This defaults to `repmgr`.
- Passwords will be stored in the following locations:
  - `/etc/gitlab/gitlab.rb`: hashed, and in plain text
  - `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
##### Repmgr information
When using the default setup, you will only have to prepare the network subnets that will
be allowed to authenticate with the service.
A few notes on the service itself:
- The service runs under the same system account as the database.
  - In the package, this is by default `gitlab-psql`.
- The service will have a superuser database user account generated for it.
  - This defaults to `gitlab_repmgr`.
#### Installing Omnibus GitLab
First, make sure to [download/install](https://about.gitlab.com/install/)
Omnibus GitLab **on each node**.
Make sure you install the necessary dependencies from step 1,
and add the GitLab package repository from step 2.
When installing the GitLab package, do not supply the `EXTERNAL_URL` value.
#### Configuring the Database nodes
1. Make sure to [configure the Consul nodes](consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
1. On the master database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
```ruby
# Disable all components except PostgreSQL and Repmgr and Consul
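roles ['postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent
consul['services'] = %w(postgresql)

# START user configuration
# The settings below are a representative sketch; replace the placeholder
# hashes, network addresses, and Consul nodes with the values collected above.
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
# Replace X with the number of database nodes + 1
postgresql['max_wal_senders'] = X
postgresql['max_replication_slots'] = X
# Replace XXX.XXX.XXX.XXX/YY with your network address
postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 XXX.XXX.XXX.XXX/YY)
# Replace the placeholders with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
  retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
# END user configuration
```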
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
##### Application node post-configuration
Ensure that all migrations ran:
```shell
gitlab-rake gitlab:db:configure
```
> **Note**: If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
[PgBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
in the Troubleshooting section before proceeding.
##### Ensure GitLab is running
At this point, your GitLab instance should be up and running. Verify that you are
able to log in, and create issues and merge requests. If you run into issues, check
the [Troubleshooting section](#troubleshooting).
#### Example configuration
Here we'll show you some fully expanded example configurations.
##### Example recommended setup
This example uses 3 Consul servers, 3 PgBouncer servers (with associated internal load balancer),
3 PostgreSQL servers, and 1 application node.
We start with all servers on the same 10.6.0.0/16 private network range; they
can connect to each other freely on those addresses.
Here is a list and description of each machine and the assigned IP:
- `10.6.0.11`: Consul 1
- `10.6.0.12`: Consul 2
- `10.6.0.13`: Consul 3
- `10.6.0.20`: Internal Load Balancer
- `10.6.0.21`: PgBouncer 1
- `10.6.0.22`: PgBouncer 2
- `10.6.0.23`: PgBouncer 3
- `10.6.0.31`: PostgreSQL master
- `10.6.0.32`: PostgreSQL secondary
- `10.6.0.33`: PostgreSQL secondary
- `10.6.0.41`: GitLab application
All passwords are set to `toomanysecrets`; please do not use this password or hashes derived from it. The `external_url` for GitLab is `http://gitlab.example.com`.
Please note that after the initial configuration, if a failover occurs, the PostgreSQL master will change to one of the available secondaries until it is failed back.
##### Example recommended setup for Consul servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Consul
roles ['consul_role']

consul['configuration'] = {
  server: true,
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
##### Example recommended setup for PgBouncer servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Pgbouncer and Consul agent
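roles ['pgbouncer_role']

# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Representative sketch: replace the placeholders with the password hashes
# generated for your environment (in this example, hashes of `toomanysecrets`)
pgbouncer['users'] = {
  'gitlab-consul': {
    password: 'CONSUL_PASSWORD_HASH'
  },
  'pgbouncer': {
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}

# Configure the Consul agent
consul['watchers'] = %w(postgresql)
consul['configuration'] = {
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```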
[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
##### Internal load balancer setup
An internal load balancer (TCP) must then be set up to serve each PgBouncer node (in this example on the IP `10.6.0.20`). An example of how to do this can be found in the [PgBouncer Configure Internal Load Balancer](pgbouncer.md#configure-the-internal-load-balancer) section.
##### Example recommended setup for PostgreSQL servers
###### Primary node
On the primary node, edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except PostgreSQL and Repmgr and Consul
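roles ['postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent
consul['services'] = %w(postgresql)

# A representative sketch for the example network above; replace the
# placeholder hashes with the ones generated for your environment
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
postgresql['max_wal_senders'] = 4
postgresql['max_replication_slots'] = 4
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/16)

consul['configuration'] = {
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```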
##### Example minimal setup

This example uses 3 PostgreSQL servers and 1 application node (with PgBouncer set up alongside).

It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers onto the same servers we use for PostgreSQL.
The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](consul.md#outage-recovery) on the same set of machines.

In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
Here is a list and description of each machine and the assigned IP:
- `10.6.0.21`: PostgreSQL master
- `10.6.0.22`: PostgreSQL secondary
- `10.6.0.23`: PostgreSQL secondary
- `10.6.0.31`: GitLab application
All passwords are set to `toomanysecrets`; please do not use this password or hashes derived from it.
The `external_url` for GitLab is `http://gitlab.example.com`.
Please note that after the initial configuration, if a failover occurs, the PostgreSQL master will change to one of the available secondaries until it is failed back.
##### Example minimal configuration for database servers
###### Primary node
On the primary database node, edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except PostgreSQL, Repmgr, and Consul
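roles ['consul_role', 'postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent; in this minimal example the Consul servers
# run on the database nodes themselves
consul['services'] = %w(postgresql)
consul['configuration'] = {
  server: true,
  retry_join: %w(10.6.0.21 10.6.0.22 10.6.0.23)
}

# A representative sketch; replace the placeholder hashes with the ones
# generated for your environment
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
postgresql['max_wal_senders'] = 4
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/16)
```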
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
## Troubleshooting
### Consul and PostgreSQL changes not taking effect
Due to the potential impacts, `gitlab-ctl reconfigure` only reloads Consul and PostgreSQL; it will not restart the services. However, not all changes can be activated by reloading.
To restart either service, run `gitlab-ctl restart SERVICE`.
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start it afterwards with `gitlab-ctl start repmgrd`.
On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
**Note**: We recommend that you follow the instructions here for a full [PostgreSQL cluster](#high-availability-with-omnibus-gitlab-premium-only).
If you are reading this section due to an old bookmark, you can find that old documentation [in the repository](https://gitlab.com/gitlab-org/gitlab/blob/v10.1.4/doc/administration/high_availability/database.md#configure-using-omnibus).
Read more on high-availability configuration:
1. [Configure Redis](redis.md)
1. [Configure NFS](nfs.md)
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. [Manage the bundled Consul cluster](consul.md)
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
If you enable Monitoring, it must be enabled on **all** GitLab servers.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
...
...
Read more on high-availability configuration:
1. [Configure the database](../postgresql/replication_and_failover.md)
1. [Configure Redis](redis.md)
1. [Configure NFS](nfs.md)
1. [Configure the load balancers](load_balancer.md)
package you want using **steps 1 and 2** from the GitLab downloads page.
- Do not complete any other steps on the download page.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
In an HA setup, it's recommended to run a PgBouncer node separately for each database node.
### Running PgBouncer as part of an HA GitLab installation
1. Make sure you collect [`CONSUL_SERVER_NODES`](database.md#consul-information), [`CONSUL_PASSWORD_HASH`](database.md#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](database.md#pgbouncer-information) before executing the next step.
1. On each node, edit the `/etc/gitlab/gitlab.rb` config file and replace the values noted in the `# START user configuration` section as below:
```ruby
# Disable all components except PgBouncer and Consul agent
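roles ['pgbouncer_role']

# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)

# Configure the Consul agent
consul['watchers'] = %w(postgresql)

# START user configuration
# A representative sketch; replace the placeholder hashes and Consul
# addresses with the values you collected earlier
pgbouncer['users'] = {
  'gitlab-consul': {
    password: 'CONSUL_PASSWORD_HASH'
  },
  'pgbouncer': {
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}
consul['configuration'] = {
  retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
# END user configuration
```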
If you're running more than one PgBouncer node as recommended, then at this time you'll need to set up a TCP internal load balancer to serve each correctly. This can be done with any reputable TCP load balancer.
As an example here's how you could do it with [HAProxy](https://www.haproxy.org/):
```plaintext
global
    log /dev/log local0
    log localhost local1 notice
    log stdout format raw local0

defaults
    log global
    default-server inter 10s fall 3 rise 2
    balance leastconn

frontend internal-pgbouncer-tcp-in
    bind *:6432
    mode tcp
    option tcplog

    default_backend pgbouncer

backend pgbouncer
    mode tcp
    option tcp-check

    server pgbouncer1 <ip>:6432 check
    server pgbouncer2 <ip>:6432 check
    server pgbouncer3 <ip>:6432 check
```
Refer to your preferred Load Balancer's documentation for further guidance.
### Running PgBouncer as part of a non-HA GitLab installation
If you enable Monitoring, it must be enabled on **all** Redis servers.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
...
...
Read more on High Availability:
1. [High Availability Overview](README.md)
1. [Configure the database](../postgresql/replication_and_failover.md)
1. [Configure NFS](nfs.md)
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
needs privileges to create the `gitlabhq_production` database.
1. If you are using a cloud-managed service, you may need to grant additional
roles to your `gitlab` user:
- Amazon RDS requires the [`rds_superuser`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Roles) role.
- Azure Database for PostgreSQL requires the [`azure_pg_admin`](https://docs.microsoft.com/en-us/azure/postgresql/howto-create-users#how-to-create-additional-admin-users-in-azure-database-for-postgresql) role.
1. Configure the GitLab application servers with the appropriate connection details
for your external PostgreSQL service in your `/etc/gitlab/gitlab.rb` file:
```ruby
# Disable the bundled Omnibus provided PostgreSQL
postgresql['enable'] = false
# PostgreSQL connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'unicode'
gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
gitlab_rails['db_password'] = 'DB password'
```
For more information on GitLab HA setups, refer to [configuring GitLab for HA](../high_availability/gitlab.md).
# PostgreSQL replication and failover with Omnibus GitLab **(PREMIUM ONLY)**
> Important notes:
>
> - This document will focus only on configuration supported with [GitLab Premium](https://about.gitlab.com/pricing/), using the Omnibus GitLab package.
> - If you are a Community Edition or Starter user, consider using a cloud hosted solution.
> - This document will not cover installations from source.
>
> - If HA setup is not what you were looking for, see the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
> for the Omnibus GitLab packages.
>
> Please read this document fully before attempting to configure PostgreSQL HA
> for GitLab.
## Architecture
The Omnibus GitLab recommended configuration for a PostgreSQL high availability
cluster requires:
- A minimum of three database nodes.
- A minimum of three `Consul` server nodes.
- A minimum of one `pgbouncer` service node, but it's recommended to have one
per database node.
- An internal load balancer (TCP) is required when there is more than one
`pgbouncer` service node.
![PostgreSQL HA Architecture](img/pg_ha_architecture.png)
You also need to take into consideration the underlying network topology, making
sure you have redundant connectivity between all Database and GitLab instances
to avoid the network becoming a single point of failure.
### Database node
Each database node runs three services:
`PostgreSQL` - The database itself.
`repmgrd` - Communicates with other repmgrd services in the cluster and handles
failover when issues with the master server occur. The failover procedure
consists of:
- Selecting a new master for the cluster.
- Promoting the new node to master.
- Instructing remaining servers to follow the new master node.
- The old master node is automatically evicted from the cluster and should be
rejoined manually once recovered.
`Consul` agent - Monitors the status of each node in the database cluster and
tracks its health in a service definition on the Consul cluster.
### Consul server node
The Consul server node runs the Consul server service.
### PgBouncer node
Each PgBouncer node runs two services:
`PgBouncer` - The database connection pooler itself.
`Consul` agent - Watches the status of the PostgreSQL service definition on the
Consul cluster. If that status changes, Consul runs a script which updates the
PgBouncer configuration to point to the new PostgreSQL master node and reloads
the PgBouncer service.
### Connection flow
Each service in the package comes with a set of [default ports](https://docs.gitlab.com/omnibus/package-information/defaults.html#ports). You may need to make specific firewall rules for the connections listed below:
- Application servers connect to either PgBouncer directly via its [default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#pgbouncer) or via a configured Internal Load Balancer (TCP) that serves multiple PgBouncers.
- PgBouncer connects to the primary database server's [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- Repmgr connects to the database servers' [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- PostgreSQL secondaries connect to the primary database server's [PostgreSQL default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#postgresql).
- Consul servers and agents connect to each other's [Consul default ports](https://docs.gitlab.com/omnibus/package-information/defaults.html#consul).
## Setting it up
### Required information
Before proceeding with configuration, you will need to collect all the necessary
information.
#### Network information
PostgreSQL does not listen on any network interface by default. It needs to know
which IP address to listen on in order to be accessible to other services.
Similarly, PostgreSQL access is controlled based on the network source.
This is why you will need:
- IP address of each node's network interface. This can be set to `0.0.0.0` to
  listen on all interfaces. It cannot be set to the loopback address `127.0.0.1`.
- Network Address. This can be in subnet (e.g. `192.168.0.0/255.255.255.0`)
  or CIDR (e.g. `192.168.0.0/24`) form.
#### Consul information
When using the default setup, the minimum configuration requires:
- `CONSUL_USERNAME`. Defaults to `gitlab-consul`.
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of the Consul username/password pair.
  It can be generated with:

  ```shell
  sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
  ```

- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
A few notes on the service itself:
- The service runs under a system account, by default `gitlab-consul`.
  - If you are using a different username, you will have to specify it. We
    will refer to it with `CONSUL_USERNAME`.
- There will be a database user created with read-only access to the repmgr
  database.
- Passwords will be stored in the following locations:
  - `/etc/gitlab/gitlab.rb`: hashed
  - `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
  - `/var/opt/gitlab/consul/.pgpass`: plaintext
#### PostgreSQL information
When configuring PostgreSQL, we will set `max_wal_senders` to one more than
the number of database nodes in the cluster.
This is used to prevent replication from using up all of the
available database connections.
In this document we assume 3 database nodes, which makes this configuration:

```ruby
postgresql['max_wal_senders'] = 4
```
As previously mentioned, you'll have to prepare the network subnets that will
be allowed to authenticate with the database.
You'll also need to supply the IP addresses or DNS records of Consul
server nodes.
We will need the following password information for the application's database user:
- `POSTGRESQL_USERNAME`. Defaults to `gitlab`.
- `POSTGRESQL_USER_PASSWORD`. The password for the database user.
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair (see below for how to generate it).
- `PGBOUNCER_NODE`. The IP address or FQDN of the node running PgBouncer.
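Like the Consul hash above, `POSTGRESQL_PASSWORD_HASH` can be generated with the bundled helper:

```shell
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```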
A few notes on the service itself:
- The service runs as the same system account as the database.
  - In the package, this is by default `gitlab-psql`.
- If you use a non-default user account for the PgBouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement with `PGBOUNCER_USERNAME`.
- The service will have a regular database user account generated for it.
  - This defaults to `repmgr`.
- Passwords will be stored in the following locations:
  - `/etc/gitlab/gitlab.rb`: hashed, and in plain text
  - `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
#### Repmgr information
When using the default setup, you will only have to prepare the network subnets that will
be allowed to authenticate with the service.
A few notes on the service itself:
- The service runs under the same system account as the database.
  - In the package, this is by default `gitlab-psql`.
- The service will have a superuser database user account generated for it.
  - This defaults to `gitlab_repmgr`.
### Installing Omnibus GitLab
First, make sure to [download/install](https://about.gitlab.com/install/)
Omnibus GitLab **on each node**.
Make sure you install the necessary dependencies from step 1,
and add the GitLab package repository from step 2.
When installing the GitLab package, do not supply the `EXTERNAL_URL` value.
### Configuring the Database nodes
1. Make sure to [configure the Consul nodes](../high_availability/consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
1. On the master database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
```ruby
# Disable all components except PostgreSQL and Repmgr and Consul
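roles ['postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent
consul['services'] = %w(postgresql)

# START user configuration
# The settings below are a representative sketch; replace the placeholder
# hashes, network addresses, and Consul nodes with the values collected above.
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
# Replace X with the number of database nodes + 1
postgresql['max_wal_senders'] = X
postgresql['max_replication_slots'] = X
# Replace XXX.XXX.XXX.XXX/YY with your network address
postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 XXX.XXX.XXX.XXX/YY)
# Replace the placeholders with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
  retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
# END user configuration
```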
If the 'Role' column for any node says "FAILED", check the
[Troubleshooting section](#troubleshooting) before proceeding.
Also, check that the check master command works successfully on each node:
```shell
su - gitlab-consul
gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
```
This command relies on exit codes to tell Consul whether a particular node is a master
or secondary. The most important thing here is that this command does not produce errors.
If there are errors it's most likely due to incorrect `gitlab-consul` database user permissions.
Check the [Troubleshooting section](#troubleshooting) before proceeding.
### Configuring the PgBouncer node
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`CONSUL_PASSWORD_HASH`](#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information) before executing the next step.
1. On each node, edit the `/etc/gitlab/gitlab.rb` config file and replace the values noted in the `# START user configuration` section as below:
```ruby
# Disable all components except PgBouncer and Consul agent
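roles ['pgbouncer_role']

# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)

# Configure the Consul agent
consul['watchers'] = %w(postgresql)

# START user configuration
# A representative sketch; replace the placeholder hashes and Consul
# addresses with the values you collected earlier
pgbouncer['users'] = {
  'gitlab-consul': {
    password: 'CONSUL_PASSWORD_HASH'
  },
  'pgbouncer': {
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}
consul['configuration'] = {
  retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
# END user configuration
```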
If you're running more than one PgBouncer node as recommended, then at this time you'll need to set up a TCP internal load balancer to serve each correctly. This can be done with any reputable TCP load balancer.
As an example here's how you could do it with [HAProxy](https://www.haproxy.org/):
```plaintext
global
    log /dev/log local0
    log localhost local1 notice
    log stdout format raw local0

defaults
    log global
    default-server inter 10s fall 3 rise 2
    balance leastconn

frontend internal-pgbouncer-tcp-in
    bind *:6432
    mode tcp
    option tcplog

    default_backend pgbouncer

backend pgbouncer
    mode tcp
    option tcp-check

    server pgbouncer1 <ip>:6432 check
    server pgbouncer2 <ip>:6432 check
    server pgbouncer3 <ip>:6432 check
```
Refer to your preferred Load Balancer's documentation for further guidance.
### Configuring the Application nodes
These will be the nodes running the `gitlab-rails` service. You may have other attributes set, but the following need to be set.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
#### Application node post-configuration
Ensure that all migrations ran:
```shell
gitlab-rake gitlab:db:configure
```
> **Note**: If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
[PgBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
in the Troubleshooting section before proceeding.
### Ensure GitLab is running
At this point, your GitLab instance should be up and running. Verify that you are
able to log in, and create issues and merge requests. If you run into issues, check
the [Troubleshooting section](#troubleshooting).
## Example configuration
Here we'll show you some fully expanded example configurations.
### Example recommended setup
This example uses 3 Consul servers, 3 PgBouncer servers (with associated internal load balancer),
3 PostgreSQL servers, and 1 application node.
We start with all servers on the same 10.6.0.0/16 private network range; they
can connect to each other freely on those addresses.
Here is a list and description of each machine and the assigned IP:
- `10.6.0.11`: Consul 1
- `10.6.0.12`: Consul 2
- `10.6.0.13`: Consul 3
- `10.6.0.20`: Internal Load Balancer
- `10.6.0.21`: PgBouncer 1
- `10.6.0.22`: PgBouncer 2
- `10.6.0.23`: PgBouncer 3
- `10.6.0.31`: PostgreSQL master
- `10.6.0.32`: PostgreSQL secondary
- `10.6.0.33`: PostgreSQL secondary
- `10.6.0.41`: GitLab application
All passwords are set to `toomanysecrets`; please do not use this password or hashes derived from it. The `external_url` for GitLab is `http://gitlab.example.com`.
Please note that after the initial configuration, if a failover occurs, the PostgreSQL master will change to one of the available secondaries until it is failed back.
#### Example recommended setup for Consul servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Consul
roles ['consul_role']

consul['configuration'] = {
  server: true,
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
#### Example recommended setup for PgBouncer servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Pgbouncer and Consul agent
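roles ['pgbouncer_role']

# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Representative sketch: replace the placeholders with the password hashes
# generated for your environment (in this example, hashes of `toomanysecrets`)
pgbouncer['users'] = {
  'gitlab-consul': {
    password: 'CONSUL_PASSWORD_HASH'
  },
  'pgbouncer': {
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}

# Configure the Consul agent
consul['watchers'] = %w(postgresql)
consul['configuration'] = {
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```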
[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
#### Internal load balancer setup
An internal load balancer (TCP) must then be set up to serve each PgBouncer node (in this example on the IP `10.6.0.20`). An example of how to do this can be found in the [PgBouncer Configure Internal Load Balancer](#configure-the-internal-load-balancer) section.
#### Example recommended setup for PostgreSQL servers
##### Primary node
On the primary node, edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except PostgreSQL and Repmgr and Consul
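roles ['postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent
consul['services'] = %w(postgresql)

# A representative sketch for the example network above; replace the
# placeholder hashes with the ones generated for your environment
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
postgresql['max_wal_senders'] = 4
postgresql['max_replication_slots'] = 4
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/16)

consul['configuration'] = {
  retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```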
### Example minimal setup

This example uses 3 PostgreSQL servers and 1 application node (with PgBouncer set up alongside).

It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers onto the same servers we use for PostgreSQL.
The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](../high_availability/consul.md#outage-recovery) on the same set of machines.

In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
Here is a list and description of each machine and the assigned IP:
- `10.6.0.21`: PostgreSQL master
- `10.6.0.22`: PostgreSQL secondary
- `10.6.0.23`: PostgreSQL secondary
- `10.6.0.31`: GitLab application
All passwords are set to `toomanysecrets`; please do not use this password or hashes derived from it.
The `external_url` for GitLab is `http://gitlab.example.com`.
Please note that after the initial configuration, if a failover occurs, the PostgreSQL master will change to one of the available secondaries until it is failed back.
#### Example minimal configuration for database servers
##### Primary node
On the primary database node, edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except PostgreSQL, Repmgr, and Consul
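roles ['consul_role', 'postgres_role']

# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'

# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false

# Configure the Consul agent; in this minimal example the Consul servers
# run on the database nodes themselves
consul['services'] = %w(postgresql)
consul['configuration'] = {
  server: true,
  retry_join: %w(10.6.0.21 10.6.0.22 10.6.0.23)
}

# A representative sketch; replace the placeholder hashes with the ones
# generated for your environment
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
postgresql['max_wal_senders'] = 4
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/16)
```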
1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb` (a sketch of the relevant setting follows this list):
1. Ensure `gitlab_rails['db_password']` is set to the plaintext password for
   the `gitlab` database user.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
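For the first step, a minimal sketch of the PgBouncer setting involved, assuming the database is reachable at a placeholder `DATABASE_HOST` and the default `pgbouncer` user (adjust the host, user, and hash for your environment):

```ruby
pgbouncer['databases'] = {
  gitlabhq_production: {
    host: 'DATABASE_HOST',
    user: 'pgbouncer',
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}
```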
## Troubleshooting
### Consul and PostgreSQL changes not taking effect
Due to the potential impacts, `gitlab-ctl reconfigure` only reloads Consul and PostgreSQL; it will not restart the services. However, not all changes can be activated by reloading.
To restart either service, run `gitlab-ctl restart SERVICE`.
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start it afterwards with `gitlab-ctl start repmgrd`.
On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../high_availability/consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
[PgBouncer in conjunction with Repmgr](../postgresql/replication_and_failover.md)
is recommended.
### Instance level replication with GitLab Geo **(PREMIUM ONLY)**
...
...
| [Consul](../../development/architecture.md#consul) ([3](#footnotes)) | Service discovery and health checks/failover | [Consul configuration](../high_availability/consul.md) **(PREMIUM ONLY)** | Yes |
| Repmgr | PostgreSQL cluster management and failover | [PostgreSQL and Repmgr configuration](../postgresql/replication_and_failover.md) | Yes |
| [Redis](../../development/architecture.md#redis) ([3](#footnotes)) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) | Yes |