The `per_repository` election strategy solves this problem by electing a primary Gitaly node for each
repository. Combined with [configurable replication factors](#configure-replication-factor), you can
horizontally scale storage capacity and distribute write load across Gitaly nodes.
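For example, on an Omnibus-installed Praefect node, both settings live in `/etc/gitlab/gitlab.rb`. The following is a sketch that assumes the `failover_election_strategy` and `default_replication_factor` Omnibus settings of this era; verify the exact keys for your GitLab version:

```ruby
# /etc/gitlab/gitlab.rb on the Praefect node.
# Key names are assumed Omnibus settings; verify them for your GitLab version.
praefect['failover_election_strategy'] = 'per_repository'

praefect['virtual_storages'] = {
  'default' => {
    # Keep three copies of each repository across this virtual storage's Gitaly nodes.
    'default_replication_factor' => 3,
    # ... existing node definitions for this virtual storage ...
  }
}
```

After editing `gitlab.rb`, apply the change with `sudo gitlab-ctl reconfigure` on the Praefect node.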
Primary elections are run:
- In GitLab 14.1 and later, lazily. This means that Praefect doesn't immediately elect
a new primary node if the current one is unhealthy. A new primary is elected if it is
necessary to serve a request while the current primary is unavailable.
- In GitLab 13.12 to GitLab 14.0 when:
  - Praefect starts up.
  - The cluster's consensus of a Gitaly node's health changes.
A valid primary node candidate is a Gitaly node that:
- Is healthy. A Gitaly node is considered healthy if `>=50%` of Praefect nodes have
  successfully health checked the Gitaly node in the previous ten seconds.
- Has a fully up to date copy of the repository.
If there are multiple primary node candidates, Praefect:
- Picks one of them randomly.
- Prioritizes promoting a Gitaly node that is assigned to host the repository. If
there are no assigned Gitaly nodes to elect as the primary, Praefect may temporarily
elect an unassigned one. The unassigned primary is demoted in favor of an assigned
one when one becomes available.
If there are no valid primary candidates for a repository:
- The unhealthy primary node is demoted and the repository is left without a primary node.
- Operations that require a primary node fail until a primary is successfully elected.
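The rules above amount to a small decision procedure. The following Go sketch is purely illustrative and is not Praefect's implementation; the `Node` type and `ElectPrimary` function are invented for this example:

```go
// Illustrative sketch of the per_repository election rules described above.
// Not Praefect's implementation; types and function are invented for this example.
package election

import "math/rand"

type Node struct {
	Name     string
	Healthy  bool // >=50% of Praefect nodes health checked it in the previous ten seconds
	UpToDate bool // has a fully up to date copy of the repository
	Assigned bool // assigned to host the repository
}

// ElectPrimary returns the new primary for a repository, or false if there is
// no valid candidate and the repository is left without a primary.
func ElectPrimary(nodes []Node) (Node, bool) {
	var assigned, unassigned []Node
	for _, n := range nodes {
		if !n.Healthy || !n.UpToDate {
			continue // not a valid primary candidate
		}
		if n.Assigned {
			assigned = append(assigned, n)
		} else {
			unassigned = append(unassigned, n)
		}
	}

	// Assigned nodes are preferred; an unassigned node may be elected temporarily
	// and is demoted once an assigned candidate becomes available.
	candidates := assigned
	if len(candidates) == 0 {
		candidates = unassigned
	}
	if len(candidates) == 0 {
		// No primary: operations that require one fail until a later election succeeds.
		return Node{}, false
	}
	return candidates[rand.Intn(len(candidates))], true
}
```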
...
...
Gitaly Cluster recovers from a failing primary Gitaly node by promoting a healthy secondary as the
new primary.
In GitLab 14.1 and later, Gitaly Cluster:
- Elects a healthy secondary with a fully up to date copy of the repository as the new primary.
- Makes the repository unavailable if there are no fully up to date copies of it on healthy secondaries.
To minimize data loss in GitLab 13.0 to 14.0, Gitaly Cluster:
- Switches repositories that are outdated on the new primary to [read-only mode](#read-only-mode).
- Elects the secondary with the least unreplicated writes from the primary to be the new
primary. Because there can still be some unreplicated writes,
[data loss can occur](#check-for-data-loss).
### Read-only mode
> - Introduced in GitLab 13.0 as [generally available](https://about.gitlab.com/handbook/product/gitlab-the-product/#generally-available-ga).
> - Between GitLab 13.0 and GitLab 13.2, read-only mode applied to the whole virtual storage and occurred whenever failover occurred.
> - [In GitLab 13.3 and later](https://gitlab.com/gitlab-org/gitaly/-/issues/2862), read-only mode applies on a per-repository basis and only occurs if a new primary is out of date.
> - Removed in GitLab 14.1. Instead, repositories [become unavailable](#unavailable-repositories).
In GitLab 13.0 to 14.0, when Gitaly Cluster switches to a new primary, repositories enter
read-only mode if they are out of date. This can happen after failing over to an outdated
secondary. Read-only mode eases data recovery efforts by preventing writes that may conflict
with the unreplicated writes on other nodes.
To enable writes again in GitLab 13.0 to 14.0, an administrator can:
1. [Check](#check-for-data-loss) for data loss.
1. Attempt to [recover](#data-recovery) missing data.
...
...
[accept data loss](#enable-writes-or-accept-data-loss) if necessary, depending on the version of
GitLab.
## Unavailable repositories
> - From GitLab 13.0 through 14.0, repositories became read-only if they were outdated on the primary but fully up to date on a healthy secondary. `dataloss` sub-command displays read-only repositories by default through these versions.
> - Since GitLab 14.1, Praefect contains more responsive failover logic which immediately fails over to one of the fully up to date secondaries rather than placing the repository in read-only mode. Since GitLab 14.1, the `dataloss` sub-command displays repositories which are unavailable due to having no fully up to date copies on healthy Gitaly nodes.
A repository is unavailable if all of its up to date replicas are unavailable. Unavailable repositories are
not accessible through Praefect to prevent serving stale data that may break automated tooling.
### Check for data loss
The Praefect `dataloss` subcommand identifies:
- Copies of repositories in GitLab 13.0 to GitLab 14.0 that are likely to be outdated.
  This can help identify potential data loss after a failover.
- Repositories in GitLab 14.1 and later that are unavailable. This helps identify potential
  data loss and repositories which are no longer accessible because all of their up to date
  replicas are unavailable.
The following parameters are available:
- `-virtual-storage` that specifies which virtual storage to check. Because they might require
an administrator to intervene, the default behavior is to display:
  - In GitLab 13.0 to 14.0, copies of read-only repositories.
  - In GitLab 14.1 and later, unavailable repositories.
- In GitLab 14.1 and later, [`-partially-unavailable`](#unavailable-replicas-of-available-repositories)
that specifies whether to include in the output repositories that are available but have
some assigned copies that are not available.
NOTE:
`dataloss` is still in beta and the output format is subject to change.
To check for repositories with outdated primaries or for unavailable repositories, run:
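A sketch of the invocation, assuming the default Omnibus paths for the Praefect binary and its configuration file:

```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>]
```

In GitLab 14.1 and later, add `-partially-unavailable` to also list available repositories that have some unavailable assigned copies.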