After a failover, it is possible to fail back to the demoted primary to
restore your original configuration. This process consists of two steps:
1. Making the old primary a secondary
1. Promoting a secondary to a primary
## Configure the former primary to be a secondary
Since the former primary will be out of sync with the current primary, the first
step is to bring the former primary up to date. There is one downside, though:
some uploads and repositories that were deleted on the current primary while
the former primary was idle will not be removed from the former primary's
disk, but the overall sync will be much faster. As an alternative, you can set
up a [GitLab instance from scratch](../replication/index.md#setup-instructions)
to work around this downside.
To bring the former primary up to date:
1. SSH into the former primary that has fallen behind.
1. Make sure all the services are up:
   ```bash
   sudo gitlab-ctl start
   ```
   NOTE: **Note:** If you [disabled the primary permanently](index.md#step-2-permanently-disable-the-primary),
   you need to undo those steps now. For Debian/Ubuntu you just need to run
   `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
   the GitLab instance from scratch and set it up as a secondary node by
   following the [setup instructions](../replication/index.md#setup-instructions).
   In this case you don't need to follow the next step.
1. [Set up database replication](../replication/database.md). Note that in this
   case, primary refers to the current primary, and secondary refers to the
   former primary. Once replication is configured, you can verify that it is
   running as shown in the sketch below.
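The following is a minimal sketch of that verification, assuming a standard
Omnibus installation with the bundled PostgreSQL; adapt the commands to your
own setup:

```bash
# On the former primary (now acting as a secondary), run GitLab's built-in Geo check:
sudo gitlab-rake gitlab:geo:check

# On the current primary, confirm the new secondary appears as a streaming
# replication client of the bundled PostgreSQL:
sudo gitlab-psql -c "SELECT application_name, state, sync_state FROM pg_stat_replication;"
```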
If you have lost your original primary, follow the
[setup instructions](../replication/index.md#setup-instructions) to set up a new secondary.
## Promote the secondary to primary
When the initial replication is complete and the primary and secondary are
closely in sync, you can do a [planned failover](planned_failover.md).
## Restore the secondary node
If your objective is to have two nodes again, you need to bring your secondary
node back online as well by repeating the first step
([configure the former primary to be a secondary](#configure-the-former-primary-to-be-a-secondary))
for the secondary node.
## Planned failover

A planned failover is similar to a disaster recovery scenario, except you are
able to notify users of the maintenance window and allow data to finish
replicating to secondaries.
Please read this entire document as well as [Disaster Recovery](index.md)
before proceeding.
### Notify users of scheduled maintenance
1. On the primary, in Admin Area > Messages, add a broadcast message.
Check Admin Area > Geo Nodes to estimate how long it will take to finish syncing.
   ```
   We are doing scheduled maintenance at XX:XX UTC, expected to take less than 1 hour.
   ```
1. On the secondary, you may need to clear the cache for the broadcast message to show up.
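One way to clear the cache is GitLab's maintenance Rake task; a minimal
sketch, assuming an Omnibus installation:

```bash
# On the secondary, clear the Rails cache so the broadcast message is fetched again:
sudo gitlab-rake cache:clear
```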
### Block primary traffic
1. At the scheduled time, using your cloud provider or your node's firewall, block HTTP and SSH traffic to/from the primary except for your IP and the secondary's IP.
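For example, if you manage the node's firewall directly with `iptables`, the
rules could look like the following sketch. The addresses `203.0.113.10` and
`203.0.113.20` are placeholders for your own IP and the secondary's IP; if you
use a cloud provider, apply equivalent security group or firewall rules instead:

```bash
# Allow HTTP(S) and SSH only from your own IP and the secondary's IP, then drop the rest.
sudo iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -s 203.0.113.10 -j ACCEPT
sudo iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -s 203.0.113.20 -j ACCEPT
sudo iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -j DROP
```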
### Allow replication to finish as much as possible
1. On the secondary, navigate to Admin Area > Geo Nodes and wait until all replication progress is 100% on the secondary "Current node".
1. Navigate to Admin Area > Monitoring > Background Jobs > Queues and wait until the "geo" queues drop, ideally to 0.
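If you prefer to watch this from a terminal instead of the UI, a minimal
sketch, assuming an Omnibus installation (run on the secondary):

```bash
# Print the overall Geo sync status (repositories, uploads, and so on):
sudo gitlab-rake geo:status

# Print the size of the Geo-related Sidekiq queues; these should drop to 0:
sudo gitlab-rails runner 'Sidekiq::Queue.all.each { |q| puts "#{q.name}: #{q.size}" if q.name.include?("geo") }'
```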
### Promote the secondary
1. Finally, follow [Disaster Recovery](index.md) to promote the secondary to a primary.
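For orientation only: on a single-node Omnibus secondary, the promotion
described in [Disaster Recovery](index.md) generally ends with removing the
Geo secondary role from `/etc/gitlab/gitlab.rb` and running the Geo promotion
Rake task. Treat the following as a sketch and follow the linked document for
the authoritative steps:

```bash
# After editing /etc/gitlab/gitlab.rb as described in the Disaster Recovery
# document, apply the configuration and promote this node to primary:
sudo gitlab-ctl reconfigure
sudo gitlab-rake geo:set_secondary_as_primary
```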