Commit 308eb4a3 authored by Mek Stittri, committed by Achilleas Pipinellis

Move all HA related docs to a new page

- `availability/index.md` to `reference_architectures/index.md`
- `high_availability/index.md` to `reference_architectures/index.md`
- `scaling/index.md` to `reference_architectures/index.md`
parent 8a7806ff
......@@ -23,10 +23,11 @@ No matter how you use GitLab, we have documentation for you.
| Essential Documentation | Essential Documentation |
|:-------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------|
| [**User Documentation**](user/index.md)<br/>Discover features and concepts for GitLab users. | [**Administrator documentation**](administration/index.md)<br/>Everything GitLab self-managed administrators need to know. |
| [**Contributing to GitLab**](#contributing-to-gitlab)<br/>At GitLab, everyone can contribute! | [**New to Git and GitLab?**](#new-to-git-and-gitlab)<br/>We have the resources to get you started. |
| [**Contributing to GitLab**](#contributing-to-gitlab)<br/>At GitLab, everyone can contribute! | [**New to Git and GitLab?**](#new-to-git-and-gitlab)<br/>We have the resources to get you started. |
| [**Building an integration with GitLab?**](#building-an-integration-with-gitlab)<br/>Consult our automation and integration documentation. | [**Coming to GitLab from another platform?**](#coming-to-gitlab-from-another-platform)<br/>Consult our handy guides. |
| [**Install GitLab**](https://about.gitlab.com/install/)<br/>Installation options for different platforms. | [**Customers**](subscriptions/index.md)<br/>Information for new and existing customers. |
| [**Update GitLab**](update/README.md)<br/>Update your GitLab self-managed instance to the latest version. | [**GitLab Releases**](https://about.gitlab.com/releases/)<br/>What's new in GitLab. |
| [**Update GitLab**](update/README.md)<br/>Update your GitLab self-managed instance to the latest version. | [**Reference Architectures**](administration/reference_architectures/index.md)<br/>GitLab's reference architectures. |
| [**GitLab Releases**](https://about.gitlab.com/releases/)<br/>What's new in GitLab. | |
## Popular Documentation
......
---
type: reference, concepts
redirect_to: ../reference_architectures/index.md
---
# Availability
GitLab offers a number of options to manage availability and resiliency. Below are the options to consider, along with their trade-offs.
| Event | GitLab Feature | Recovery Point Objective (RPO) | Recovery Time Objective (RTO) | Cost |
| ----- | -------------- | --- | --- | ---- |
| Availability Zone failure | "GitLab HA" | No loss | No loss | 2x Git storage, multiple nodes balanced across AZs |
| Region failure | [GitLab Geo Disaster Recovery](../geo/disaster_recovery/index.md) | 5-10 minutes | 30 minutes | 2x primary cost |
| All failures | Backup/Restore | Last backup | Hours to Days | Cost of storing the backups |
This document was moved to [another location](../reference_architectures/index.md).
......@@ -35,7 +35,7 @@ Follow the steps below to set up a custom hook:
`/home/git/gitlab/file_hooks/`. For Omnibus installs the path is
usually `/opt/gitlab/embedded/service/gitlab-rails/file_hooks`.
For [highly available](availability/index.md) configurations, your hook file should exist on each
For [highly available](reference_architectures/index.md) configurations, your hook file should exist on each
application server.
1. Inside the `file_hooks` directory, create a file with a name of your choice,
......
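For illustration, a minimal file hook might look like the following sketch (a hypothetical example, assuming the hook file is made executable; GitLab passes the triggering event as JSON on standard input):

```ruby
#!/opt/gitlab/embedded/bin/ruby
# Hypothetical minimal file hook (illustrative sketch, not from this commit).
# GitLab runs every executable file in the file_hooks directory and passes
# the event payload as JSON on STDIN.
require 'json'

data = JSON.parse($stdin.read)

# Append the received event name to a log file as a trivial demonstration.
File.open('/tmp/file_hook_events.log', 'a') do |f|
  f.puts "#{Time.now.utc}: #{data['event_name']}"
end
```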
......@@ -143,7 +143,7 @@ To configure the connection to the external read-replica database and enable Log
database to keep track of replication status and automatically recover from
potential replication issues. Omnibus automatically configures a tracking database
when `roles ['geo_secondary_role']` is set. For high availability,
refer to [Geo High Availability](../../availability/index.md).
refer to [Geo High Availability](../../reference_architectures/index.md).
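As a minimal sketch, enabling this role on an Omnibus node is a one-line change followed by a reconfigure:

```ruby
# /etc/gitlab/gitlab.rb on the Geo secondary node
roles ['geo_secondary_role']

# Apply the change with:
#   sudo gitlab-ctl reconfigure
```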
If you want to run this database external to Omnibus, please follow the instructions below.
The tracking database requires an [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html)
......
......@@ -47,12 +47,12 @@ It is possible to use cloud hosted services for PostgreSQL and Redis, but this i
## Prerequisites: Two working GitLab HA clusters
One cluster will serve as the **primary** node. Use the
[GitLab HA documentation](../../availability/index.md) to set this up. If
[GitLab HA documentation](../../reference_architectures/index.md) to set this up. If
you already have a working GitLab instance that is in use, it can be used as a
**primary**.
The second cluster will serve as the **secondary** node. Again, use the
[GitLab HA documentation](../../availability/index.md) to set this up.
[GitLab HA documentation](../../reference_architectures/index.md) to set this up.
It's a good idea to log in and test it. However, note that its data will be
wiped out as part of the process of replicating from the **primary**.
......@@ -371,7 +371,7 @@ more information.
The minimal reference architecture diagram above shows all application services
running together on the same machines. However, for high availability we
[strongly recommend running all services separately](../../availability/index.md).
[strongly recommend running all services separately](../../reference_architectures/index.md).
For example, a Sidekiq server could be configured similarly to the frontend
application servers above, with some changes to run only the `sidekiq` service:
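A minimal sketch of such a `gitlab.rb`, assuming Omnibus defaults (illustrative only):

```ruby
# /etc/gitlab/gitlab.rb on a Sidekiq-only node (illustrative sketch)
sidekiq['enable'] = true

# Disable the services this node does not need to run:
unicorn['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
prometheus_monitoring['enable'] = false
```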
......
......@@ -2,7 +2,7 @@
> - Introduced in GitLab Enterprise Edition 8.9.
> - Using Geo in combination with
> [High Availability](../../availability/index.md)
> [High Availability](../../reference_architectures/index.md)
> is considered **Generally Available** (GA) in
> [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.
......
---
type: reference, concepts
redirect_to: ../reference_architectures/index.md
---
The page has been deprecated. Please see:
# Reference Architectures
1. [Availability page](../availability/index.md)
1. [Scaling page](../scaling/index.md)
1. [Docs page for high availability](./gitlab.md)
1. [High availability solutions page](https://about.gitlab.com/solutions/high-availability/)
This document was moved to [another location](../reference_architectures/index.md).
......@@ -24,7 +24,7 @@ If you use a cloud-managed service, or provide your own PostgreSQL:
## PostgreSQL in a Scaled and Highly Available Environment
This section is relevant for [Scalable and Highly Available Setups](../scaling/index.md).
This section is relevant for [Scalable and Highly Available Setups](../reference_architectures/index.md).
### Provide your own PostgreSQL instance **(CORE ONLY)**
......
......@@ -11,7 +11,7 @@ should consider using Gitaly on a separate node.
See the [Gitaly HA Epic](https://gitlab.com/groups/gitlab-org/-/epics/289) to
track plans and progress toward high availability support.
This document is relevant for [Scalable and Highly Available Setups](../scaling/index.md).
This document is relevant for [scalable and highly available setups](../reference_architectures/index.md).
## Running Gitaly on its own server
......@@ -19,7 +19,7 @@ See [Running Gitaly on its own server](../gitaly/index.md#running-gitaly-on-its-
in Gitaly documentation.
Continue configuration of other components by going back to the
[Scaling](../scaling/index.md#components-provided-by-omnibus-gitlab) page.
[reference architecture](../reference_architectures/index.md#configure-gitlab-to-scale) page.
## Enable Monitoring
......
......@@ -2,7 +2,9 @@
type: reference
---
# Configuring GitLab for Scaling and High Availability
# Configuring GitLab application (Rails)
This section describes how to configure the GitLab application (Rails) component.
NOTE: **Note:** There is some additional configuration near the bottom for
additional GitLab application servers. It's important to read and understand
......
......@@ -11,7 +11,7 @@ You can configure a Prometheus node to monitor GitLab.
## Standalone Monitoring node using Omnibus GitLab
The Omnibus GitLab package can be used to configure a standalone Monitoring node running [Prometheus](../monitoring/prometheus/index.md) and [Grafana](../monitoring/performance/grafana_configuration.md).
The monitoring node is not highly available. See [Scaling and High Availability](../scaling/index.md)
The monitoring node is not highly available. See [Scaling and High Availability](../reference_architectures/index.md)
for an overview of GitLab scaling and high availability options.
The steps below are the minimum necessary to configure a Monitoring node running Prometheus and Grafana with
......
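A hedged sketch of what such a standalone monitoring node's `gitlab.rb` might contain (hostname and values are illustrative):

```ruby
# /etc/gitlab/gitlab.rb on the standalone monitoring node (illustrative sketch)
external_url 'http://monitoring.example.com'   # hypothetical hostname

prometheus['enable'] = true
prometheus['listen_address'] = '0.0.0.0:9090'
grafana['enable'] = true

# Disable the GitLab application services on this node:
unicorn['enable'] = false
sidekiq['enable'] = false
gitaly['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
```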
......@@ -27,7 +27,7 @@ These will be necessary when configuring the GitLab application servers later.
## Redis in a Scaled and Highly Available Environment
This section is relevant for [Scalable and Highly Available Setups](../scaling/index.md).
This section is relevant for [scalable and highly available setups](../reference_architectures/index.md).
### Provide your own Redis instance **(CORE ONLY)**
......@@ -43,8 +43,8 @@ In this configuration Redis is not highly available, and represents a single
point of failure. However, in a scaled environment the objective is to allow
the environment to handle more users or to increase throughput. Redis itself
is generally stable and can handle many requests, so it is an acceptable
trade-off to have only a single instance. See [High Availability](../availability/index.md)
for an overview of GitLab scaling and high availability options.
trade-off to have only a single instance. See the [reference architectures](../reference_architectures/index.md)
page for an overview of GitLab scaling and high availability options.
The steps below are the minimum necessary to configure a Redis server with
Omnibus:
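As a sketch, assuming a standalone Redis node managed by Omnibus (values are illustrative placeholders):

```ruby
# /etc/gitlab/gitlab.rb on the Redis node (illustrative sketch)
redis['enable'] = true
redis['bind'] = '0.0.0.0'       # example: listen on all interfaces
redis['port'] = 6379
redis['password'] = 'redis-password-goes-here'  # example placeholder

# Disable the other bundled services on this node:
unicorn['enable'] = false
sidekiq['enable'] = false
postgresql['enable'] = false
nginx['enable'] = false
```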
......@@ -89,7 +89,7 @@ Advanced configuration options are supported and can be added if
needed.
Continue configuration of other components by going back to the
[Scaling](../scaling/index.md#components-provided-by-omnibus-gitlab) page.
[reference architectures](../reference_architectures/index.md#configure-gitlab-to-scale) page.
### High Availability with Omnibus GitLab **(PREMIUM ONLY)**
......
......@@ -34,7 +34,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
- [Install](../install/README.md): Requirements, directory structures, and installation methods.
- [Database load balancing](database_load_balancing.md): Distribute database queries among multiple database servers. **(STARTER ONLY)**
- [Omnibus support for log forwarding](https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-shipping-gitlab-enterprise-edition-only) **(STARTER ONLY)**
- [High Availability](availability/index.md): Configure multiple servers for scaling or high availability.
- [High Availability](reference_architectures/index.md): Configure multiple servers for scaling or high availability.
- [Installing GitLab HA on Amazon Web Services (AWS)](../install/aws/index.md): Set up GitLab High Availability on Amazon AWS.
- [Geo](geo/replication/index.md): Replicate your GitLab instance to other geographic locations as a read-only fully operational version. **(PREMIUM ONLY)**
- [Disaster Recovery](geo/disaster_recovery/index.md): Quickly fail-over to a different site with minimal effort in a disaster situation. **(PREMIUM ONLY)**
......
......@@ -37,7 +37,7 @@ For configuring GitLab to use Object Storage refer to the following guides:
### Other alternatives to filesystem storage
If you're working to [scale out](scaling/index.md) your GitLab implementation,
If you're working to [scale out](reference_architectures/index.md) your GitLab implementation,
or add [fault tolerance and redundancy](high_availability/README.md), you may be
looking at removing dependencies on block or network filesystems.
See the following guides and
......@@ -77,7 +77,7 @@ with the Fog library that GitLab uses. Symptoms include:
### GitLab Pages requires NFS
If you're working to add more GitLab servers for [scaling or fault tolerance](scaling/index.md)
If you're working to add more GitLab servers for [scaling or fault tolerance](reference_architectures/index.md)
and one of your requirements is [GitLab Pages](../user/project/pages/index.md), this currently requires
NFS. There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/issues/196)
to remove this dependency. In the future, GitLab Pages may use
......
---
type: reference, concepts
---
# Reference architectures
<!-- TBD to be reviewed by Eric -->
You can set up GitLab on a single server or scale it up to serve many users.
This page details the recommended Reference Architectures that were built and verified by GitLab's Quality and Support teams.
Below is a chart representing each architecture tier and the number of users they can handle. As your number of users grows over time, it's recommended that you scale GitLab accordingly.
![Reference Architectures](img/reference-architectures.png)
<!-- Internal link: https://docs.google.com/spreadsheets/d/1obYP4fLKkVVDOljaI3-ozhmCiPtEeMblbBKkf2OADKs/edit#gid=1403207183 -->
Testing of these reference architectures was performed with [GitLab's Performance Tool](https://gitlab.com/gitlab-org/quality/performance)
at specific coded workloads, and the throughputs used for testing were calculated based on sample customer data.
After selecting the reference architecture that matches your scale, refer to
[Configure GitLab to Scale](#configure-gitlab-to-scale) to see the components
involved, and how to configure them.
Each endpoint type is tested with the following number of requests per second (RPS) per 1000 users:
- API: 20 RPS
- Web: 2 RPS
- Git: 2 RPS
For GitLab instances with fewer than 2,000 users, it's recommended that you use the [default setup](#automated-backups-core-only)
by [installing GitLab](../../install/README.md) on a single machine to minimize maintenance and resource costs.
If your organization has more than 2,000 users, the recommendation is to scale GitLab's components to multiple
machine nodes. The machine nodes are grouped by component(s). The addition of these
nodes increases the performance and scalability of your GitLab instance.
As long as there is at least one of each component online and capable of handling
the instance's usage load, your team's productivity will not be interrupted.
Scaling GitLab in this manner also enables you to perform [zero-downtime updates](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates).
When scaling GitLab, there are several factors to consider:
- Multiple application nodes to handle frontend traffic.
- A load balancer is added in front to distribute traffic across the application nodes.
- The application nodes connect to a shared file server and to PostgreSQL and Redis services on the backend.
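As an illustration, an application node's `gitlab.rb` might point at the shared backend services like this (hostnames and credentials are hypothetical):

```ruby
# /etc/gitlab/gitlab.rb on an application node (illustrative sketch)
external_url 'https://gitlab.example.com'   # hypothetical URL

# Use the shared PostgreSQL and Redis services instead of the bundled ones:
postgresql['enable'] = false
gitlab_rails['db_host'] = 'postgres.internal.example.com'
gitlab_rails['db_password'] = 'db-password-goes-here'

redis['enable'] = false
gitlab_rails['redis_host'] = 'redis.internal.example.com'
gitlab_rails['redis_password'] = 'redis-password-goes-here'
```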
NOTE: **Note:** Depending on your workflow, the following recommended
reference architectures may need to be adapted accordingly. Your workload
is influenced by factors including how active your users are,
how much automation you use, mirroring, and repository/change size. Additionally, the
displayed memory values are provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types).
For different cloud vendors, attempt to select options that best match the provided architecture.
## Up to 1,000 users
From 1 to 1,000 users, a [single-node setup with frequent backups](#automated-backups-core-only) is adequate.
| Users | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|-------|--------------------------------|---------------|---------------------------|
| 100 | 2 vCPU, 7.2GB Memory | n1-standard-2 | c5.2xlarge |
| 500 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| 1000 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don't have strict availability requirements.
You can also optionally configure GitLab to use an [external PostgreSQL service](../external_database.md) or an [external object storage service](../high_availability/object_storage.md) for added performance and reliability at a relatively low complexity cost.
<!--
## Up to 2,000 users
For up to 2,000 users, defining the reference architecture is [being worked on](https://gitlab.com/gitlab-org/quality/performance/-/issues/223).
-->
## Up to 3,000 users
NOTE: **Note:** The 3,000-user reference architecture documented below is
designed to help your organization achieve a highly-available GitLab deployment.
If you do not have the expertise or need to maintain a highly-available
environment, you can have a simpler and less costly-to-operate environment by
deploying two or more GitLab Rails servers, external load balancing, an NFS
server, a PostgreSQL server and a Redis server. A reference architecture with
this alternative in mind is [being worked on](https://gitlab.com/gitlab-org/quality/performance/-/issues/223).
> - **Supported users (approximate):** 3,000
> - **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|---------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
## Up to 5,000 users
> - **Supported users (approximate):** 5,000
> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|---------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 16 vCPU, 14.4GB Memory | n1-highcpu-16 | c5.4xlarge |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
## Up to 10,000 users
> - **Supported users (approximate):** 10,000
> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
| Service | Nodes | GCP Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|-------------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
## Up to 25,000 users
> - **Supported users (approximate):** 25,000
> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 5 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 32 vCPU, 120GB Memory | n1-standard-32 | m5.8xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
## Up to 50,000 users
> - **Supported users (approximate):** 50,000
> - **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 12 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 64 vCPU, 240GB Memory | n1-standard-64 | m5.16xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge |
## Availability complexity
GitLab comes with the following availability options, listed from
least to most complex:
1. [Automated backups](#automated-backups-core-only)
1. [Traffic load balancer](#traffic-load-balancer-starter-only)
1. [Automated database failover](#automated-database-failover-premium-only)
1. [Instance level replication with GitLab Geo](#instance-level-replication-with-gitlab-geo-premium-only)
As you get started implementing HA, begin with a single server and automated
backups. Only after completing that first level should you proceed to the next.
Also, not implementing HA for GitLab doesn't necessarily mean that you'll have
more downtime. Depending on your needs and experience level, non-HA servers can
have more actual perceived uptime for your users.
### Automated backups **(CORE ONLY)**
> - Level of complexity: **Low**
> - Required domain knowledge: PostgreSQL, GitLab configurations, Git
> - Supported tiers: [GitLab Core, Starter, Premium, and Ultimate](https://about.gitlab.com/pricing/)
This solution is appropriate for many teams that have the default GitLab installation.
With automatic backups of the GitLab repositories, configuration, and the database,
this can be an optimal solution if you don't have strict availability requirements.
[Automated backups](../../raketasks/backup_restore.md#configuring-cron-to-make-daily-backups)
are the least complex option to set up. They provide point-in-time recovery on a predetermined schedule.
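For example, backup retention can be tuned in `gitlab.rb`, with the backup itself triggered from cron (values are illustrative):

```ruby
# /etc/gitlab/gitlab.rb (illustrative values)
gitlab_rails['backup_path'] = '/var/opt/gitlab/backups'
gitlab_rails['backup_keep_time'] = 604800   # keep backups for 7 days (in seconds)

# A cron entry (outside this file) can then create the daily backup, for example:
#   0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1
```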
### Traffic load balancer **(STARTER ONLY)**
> - Level of complexity: **Medium**
> - Required domain knowledge: HAProxy, shared storage, distributed systems
> - Supported tiers: [GitLab Starter, Premium, and Ultimate](https://about.gitlab.com/pricing/)
This requires separating out GitLab into multiple application nodes with an added
[load balancer](../high_availability/load_balancer.md). The load balancer will distribute traffic
across GitLab application nodes. Meanwhile, each application node connects to a
shared file server and database systems on the back end. This way, if one of the
application servers fails, the workflow is not interrupted.
[HAProxy](https://www.haproxy.org/) is recommended as the load balancer.
With this added availability component you have a number of advantages compared
to the default installation:
- Support for more users.
- Zero-downtime upgrades.
- Increased availability.
### Automated database failover **(PREMIUM ONLY)**
> - Level of complexity: **High**
> - Required domain knowledge: PgBouncer, Repmgr, shared storage, distributed systems
> - Supported tiers: [GitLab Premium and Ultimate](https://about.gitlab.com/pricing/)
By adding automatic failover for database systems, you can achieve higher uptime
with additional database nodes. This extends the default database with
cluster management and failover policies.
[PgBouncer](../../development/architecture.md#pgbouncer) in conjunction with
[Repmgr](../high_availability/database.md) is recommended.
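As a rough sketch, each Omnibus-managed database node takes the `postgres_role`, with Repmgr and Consul coordinating failover (example values only; see the linked configuration guide for the full procedure):

```ruby
# /etc/gitlab/gitlab.rb on each PostgreSQL node (illustrative sketch)
roles ['postgres_role']

postgresql['listen_address'] = '0.0.0.0'   # example value
repmgr['trust_auth_cidr_addresses'] = %w(10.6.0.0/16 127.0.0.1/32)   # example CIDRs
consul['services'] = %w(postgresql)
```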
### Instance level replication with GitLab Geo **(PREMIUM ONLY)**
> - Level of complexity: **Very High**
> - Required domain knowledge: Storage replication
> - Supported tiers: [GitLab Premium and Ultimate](https://about.gitlab.com/pricing/)
[GitLab Geo](../geo/replication/index.md) allows you to replicate your GitLab
instance to other geographical locations as a read-only fully operational instance
that can also be promoted in case of disaster.
## Configure GitLab to scale
The following components are the ones you need to configure in order to scale
GitLab. They are listed in the order you'll typically configure them if they are
required by your [reference architecture](#reference-architectures) of choice.
Most of them are bundled in the GitLab deb/rpm package (called Omnibus GitLab),
but depending on your system architecture, you may require some components which are
not included in it. If required, those should be configured before
setting up components provided by GitLab. Advice on how to select the right
solution for your organization is provided in the configuration instructions
column.
| Component | Description | Configuration instructions | Bundled with Omnibus GitLab |
|-----------|-------------|----------------------------|-----------------------------|
| Load balancer(s) ([6](#footnotes)) | Handles load balancing, typically when you have multiple GitLab application services nodes | [Load balancer configuration](../high_availability/load_balancer.md) ([6](#footnotes)) | No |
| Object storage service ([4](#footnotes)) | Recommended store for shared data objects | [Cloud Object Storage configuration](../object_storage.md) | No |
| NFS ([5](#footnotes)) ([7](#footnotes)) | Shared disk storage service. Can be used as an alternative for Gitaly or Object Storage. Required for GitLab Pages | [NFS configuration](../high_availability/nfs.md) | No |
| [Consul](../../development/architecture.md#consul) ([3](#footnotes)) | Service discovery and health checks/failover | [Consul HA configuration](../high_availability/consul.md) **(PREMIUM ONLY)** | Yes |
| [PostgreSQL](../../development/architecture.md#postgresql) | Database | [PostgreSQL configuration](https://docs.gitlab.com/omnibus/settings/database.html) | Yes |
| [PgBouncer](../../development/architecture.md#pgbouncer) | Database connection pooler | [PgBouncer configuration](../high_availability/pgbouncer.md#running-pgbouncer-as-part-of-a-non-ha-gitlab-installation) **(PREMIUM ONLY)** | Yes |
| Repmgr | PostgreSQL cluster management and failover | [PostgreSQL and Repmgr configuration](../high_availability/database.md) | Yes |
| [Redis](../../development/architecture.md#redis) ([3](#footnotes)) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) | Yes |
| Redis Sentinel | High availability for Redis | [Redis Sentinel configuration](../high_availability/redis.md) | Yes |
| [Gitaly](../../development/architecture.md#gitaly) ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | Provides access to Git repositories | [Gitaly configuration](../gitaly/index.md#running-gitaly-on-its-own-server) | Yes |
| [Sidekiq](../../development/architecture.md#sidekiq) | Asynchronous/background jobs | [Sidekiq configuration](../high_availability/sidekiq.md) | Yes |
| [GitLab application services](../../development/architecture.md#unicorn)([1](#footnotes)) | Unicorn/Puma, Workhorse, GitLab Shell - serves front-end requests (UI, API, Git over HTTP/SSH) | [GitLab app scaling configuration](../high_availability/gitlab.md) | Yes |
| [Prometheus](../../development/architecture.md#prometheus) and [Grafana](../../development/architecture.md#grafana) | GitLab environment monitoring | [Monitoring node for scaling](../high_availability/monitoring_node.md) | Yes |
## Footnotes
1. In our architectures we run each GitLab Rails node using the Puma webserver
and have its number of workers set to 90% of available CPUs along with four threads.
1. Gitaly node requirements are dependent on customer data, specifically the number of
projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
and at least four nodes should be used when supporting 50,000 or more users.
We also recommend that each Gitaly node should store no more than 5TB of data
and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
set to 20% of available CPUs. Additional nodes should be considered in conjunction
with a review of expected data size and spread based on the recommendations above.
1. Recommended Redis setup differs depending on the size of the architecture.
For smaller architectures (less than 5,000 users), we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
[Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
1. For data objects such as LFS, Uploads, and Artifacts, we recommend a [Cloud Object Storage service](../object_storage.md)
   over NFS where possible, due to better performance and availability.
1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
   object storage, but this isn't typically recommended for performance reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
as the load balancer. Although other load balancers with similar feature sets
could also be used, those load balancers have not been validated.
1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
   HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write,
   as these components have heavy I/O. These IOPS values are recommended only as a starting
   point, as they may be adjusted higher or lower over time depending on the scale of your
   environment's workload. If you're running the environment on a Cloud provider,
   you may need to refer to their documentation on how to configure IOPS correctly.
1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
CPU platform on GCP. On different hardware you may find that adjustments, either lower
or higher, are required for your CPU or Node counts accordingly. For more information, a
[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
1. AWS-equivalent configurations are rough suggestions and may change in the
future. They have not yet been tested and validated.
---
type: reference, concepts
redirect_to: ../reference_architectures/index.md
---
# Scaling
GitLab supports a number of scaling options to ensure that your self-managed
instance is able to scale to meet your organization's needs.
On this page, we present examples of self-managed instances which demonstrate
how GitLab can be scaled up, scaled out or made highly available. These
examples progress from simple to complex as scaling or highly-available
components are added.
For detailed insight into how GitLab scales and configures GitLab.com, you can
watch [this 1 hour Q&A](https://www.youtube.com/watch?v=uCU8jdYzpac)
with [John Northrup](https://gitlab.com/northrup), and live questions coming
in from some of our customers.
## Reference architectures
GitLab can be set up on a single machine or scaled out to handle a large number of users. In this section, we detail the Reference Architectures that were built and verified by our Quality and Support teams.
Testing was done with our [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) at specific coded workloads, and the throughputs used for testing were calculated based on sample customer data.
We test each endpoint type with the following number of requests per second (RPS) per 1000 users:
- API: 20 RPS
- Web: 2 RPS
- Git: 2 RPS
For up to 2,000 users we recommend going with a simple setup. Going above 2,000 users, we recommend scaling GitLab components to multiple machine nodes.
The machine nodes are grouped by component(s). The addition of these nodes adds limited fault tolerance to your GitLab instance.
As long as there is at least one of each component online and capable of handling the instance's usage load, your team's productivity will not be interrupted.
The same is true if you are looking to perform [zero-downtime updates](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates).
When scaling GitLab, there are a few factors to consider:
- Multiple application nodes to handle frontend traffic.
- A load balancer is added in front to distribute traffic across the application nodes.
- The application nodes connect to a shared file server and to PostgreSQL and Redis services on the backend.
References:
- [Configure your load balancer for GitLab](../high_availability/load_balancer.md)
- [Configure your NFS server to work with GitLab](../high_availability/nfs.md)
- [Configure packaged PostgreSQL server to listen on TCP/IP](https://docs.gitlab.com/omnibus/settings/database.html#configure-packaged-postgresql-server-to-listen-on-tcpip)
- [Setting up a Redis-only server](https://docs.gitlab.com/omnibus/settings/redis.html#setting-up-a-redis-only-server)
NOTE: **Note:** Depending on your workflow, the following recommended
reference architectures may need to be adapted accordingly. Your workload
is influenced by factors such as, but not limited to, how active your users are,
how much automation you use, mirroring, and repository/change size. Additionally, the
memory values shown are provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types).
For different cloud vendors, attempt to select options that best match the provided architecture.
### Up to 1,000 users
From 1 to 1,000 users, a single-node [Omnibus](https://docs.gitlab.com/omnibus/) setup with frequent backups is adequate.
Please refer to the [installation documentation](../../install/README.md) and [backup/restore documentation](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration).
| Users | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|-------|--------------------------------|---------------|---------------------------|
| 100 | 2 vCPU, 7.2GB Memory | n1-standard-2 | c5.2xlarge |
| 500 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| 1000 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don't have strict availability requirements.
You can also optionally configure GitLab to use an [external PostgreSQL service](../external_database.md) or an [external object storage service](../high_availability/object_storage.md) for added performance and reliability at a relatively low complexity cost.
### Up to 2,000 users
For up to 2,000 users, defining the reference architecture is [being worked on](https://gitlab.com/gitlab-org/quality/performance/-/issues/223).
### Up to 3,000 users
NOTE: **Note:** The 3,000-user reference architecture documented below is
designed to help your organization achieve a highly-available GitLab deployment.
If you do not have the expertise or need to maintain a highly-available
environment, you can have a simpler and less costly-to-operate environment by
deploying two or more GitLab Rails servers, external load balancing, an NFS
server, a PostgreSQL server and a Redis server. A reference architecture with
this alternative in mind is [being worked on](https://gitlab.com/gitlab-org/quality/performance/-/issues/223).
- **Supported users (approximate):** 2,000
- **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
- **Known issues:** [List of known performance issues](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=Quality%3Aperformance-issues)
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|---------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
### Up to 5,000 users
- **Supported users (approximate):** 5,000
- **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
- **Known issues:** [List of known performance issues](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=Quality%3Aperformance-issues)
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|---------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 16 vCPU, 14.4GB Memory | n1-highcpu-16 | c5.4xlarge |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
### Up to 10,000 users
- **Supported users (approximate):** 10,000
- **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
- **Known issues:** [List of known performance issues](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=Quality%3Aperformance-issues)
| Service | Nodes | GCP Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|-------------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
### Up to 25,000 users
- **Supported users (approximate):** 25,000
- **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
- **Known issues:** [List of known performance issues](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=Quality%3Aperformance-issues)
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 5 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 32 vCPU, 120GB Memory | n1-standard-32 | m5.8xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
### Up to 50,000 users
- **Supported users (approximate):** 50,000
- **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
- **Known issues:** [List of known performance issues](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=Quality%3Aperformance-issues)
| Service | Nodes | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|----------------------------|
| GitLab Rails ([1](#footnotes)) | 12 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge |
| PostgreSQL | 3 | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 64 vCPU, 240GB Memory | n1-standard-64 | m5.16xlarge |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| Cloud Object Storage ([4](#footnotes)) | - | - | - | - |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large |
| Internal load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge |
## Configuring GitLab to scale
### Components not provided by Omnibus GitLab
Depending on your system architecture, you may require some components which are
not provided in Omnibus GitLab. If required, these should be configured before
setting up components provided by GitLab. Advice on how to select the right
solution for your organization is provided in the configuration instructions
listed below.
| Component | Description | Configuration instructions |
|-----------|-------------|----------------------------|
| Load balancer(s) ([6](#footnotes)) | Handles load balancing, typically when you have multiple GitLab application services nodes | [Load balancer configuration](../high_availability/load_balancer.md) ([6](#footnotes)) |
| Object storage service ([4](#footnotes)) | Recommended store for shared data objects | [Cloud Object Storage configuration](../object_storage.md) |
| NFS ([5](#footnotes)) ([7](#footnotes)) | Shared disk storage service. Can be used as an alternative for Gitaly or Object Storage. Required for GitLab Pages | [NFS configuration](../high_availability/nfs.md) |
### Components provided by Omnibus GitLab
The following components are provided by Omnibus GitLab. They are listed in the
order you'll typically configure them if they are required by your
[reference architecture](#reference-architectures) of choice.
| Component | Description | Configuration instructions |
|-----------|-------------|----------------------------|
| [Consul](../../development/architecture.md#consul) ([3](#footnotes)) | Service discovery and health checks/failover | [Consul HA configuration](../high_availability/consul.md) **(PREMIUM ONLY)** |
| [PostgreSQL](../../development/architecture.md#postgresql) | Database | [PostgreSQL configuration](https://docs.gitlab.com/omnibus/settings/database.html) |
| [PgBouncer](../../development/architecture.md#pgbouncer) | Database connection pooler | [PgBouncer configuration](../high_availability/pgbouncer.md#running-pgbouncer-as-part-of-a-non-ha-gitlab-installation) **(PREMIUM ONLY)** |
| Repmgr | PostgreSQL cluster management and failover | [PostgreSQL and Repmgr configuration](../high_availability/database.md) |
| [Redis](../../development/architecture.md#redis) ([3](#footnotes)) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) |
| Redis Sentinel | High availability for Redis | [Redis Sentinel configuration](../high_availability/redis.md) |
| [Gitaly](../../development/architecture.md#gitaly) ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | Provides access to Git repositories | [Gitaly configuration](../gitaly/index.md#running-gitaly-on-its-own-server) |
| [Sidekiq](../../development/architecture.md#sidekiq) | Asynchronous/background jobs | [Sidekiq configuration](../high_availability/sidekiq.md) |
| [GitLab application services](../../development/architecture.md#unicorn)([1](#footnotes)) | Unicorn/Puma, Workhorse, GitLab Shell - serves front-end requests (UI, API, Git over HTTP/SSH) | [GitLab app scaling configuration](../high_availability/gitlab.md) |
| [Prometheus](../../development/architecture.md#prometheus) and [Grafana](../../development/architecture.md#grafana) | GitLab environment monitoring | [Monitoring node for scaling](../high_availability/monitoring_node.md) |
## Footnotes
1. In our architectures we run each GitLab Rails node using the Puma webserver
and have its number of workers set to 90% of available CPUs along with 4 threads.
1. Gitaly node requirements are dependent on customer data, specifically the number of
projects and their sizes. We recommend 2 nodes as an absolute minimum for HA environments
and at least 4 nodes should be used when supporting 50,000 or more users.
We also recommend that each Gitaly node should store no more than 5TB of data
and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
set to 20% of available CPUs. Additional nodes should be considered in conjunction
with a review of expected data size and spread based on the recommendations above.
1. Recommended Redis setup differs depending on the size of the architecture.
For smaller architectures (up to 5,000 users) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
[Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run a separate Redis Sentinel cluster for each Redis Cluster.
1. For data objects such as LFS, Uploads, and Artifacts, we recommend a [Cloud Object Storage service](../object_storage.md)
   over NFS where possible, due to better performance and availability.
1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
   object storage, but this isn't typically recommended for performance reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).
1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
   as the load balancer. However, other reputable load balancers with similar feature sets
   should also work, though be aware these haven't been validated.
1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
   HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write,
   as these components have heavy I/O. These IOPS values are recommended only as a starting
   point, as they may be adjusted higher or lower over time depending on the scale of your
   environment's workload. If you're running the environment on a Cloud provider,
   you may need to refer to their documentation on how to configure IOPS correctly.
1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
CPU platform on GCP. On different hardware you may find that adjustments, either lower
or higher, are required for your CPU or Node counts accordingly. For more information, a
[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
1. AWS-equivalent configurations are rough suggestions and may change in the
future. They have not yet been tested and validated.
This document was moved to [another location](../reference_architectures/index.md).
......@@ -14,15 +14,15 @@ and cost of hosting.
There are many ways you can install GitLab depending on your platform:
1. **Omnibus GitLab**: The official deb/rpm packages that contain a bundle of GitLab
and the various components it depends on like PostgreSQL, Redis, Sidekiq, etc.
and the various components it depends on, like PostgreSQL, Redis, Sidekiq, etc.
1. **GitLab Helm chart**: The cloud native Helm chart for installing GitLab and all
its components on Kubernetes.
1. **Docker**: The Omnibus GitLab packages dockerized.
1. **Source**: Install GitLab and all its components from scratch.
TIP: **If in doubt, choose Omnibus:**
The Omnibus GitLab packages are mature, scalable, support
[high availability](../administration/availability/index.md) and are used
The Omnibus GitLab packages are mature,
[scalable](../administration/reference_architectures/index.md), and are used
today on GitLab.com. The Helm charts are recommended for those who are familiar
with Kubernetes.
......@@ -36,7 +36,7 @@ The Omnibus GitLab package uses our official deb/rpm repositories. This is
recommended for most users.
If you need additional flexibility and resilience, we recommend deploying
GitLab as described in our [High Availability documentation](../administration/availability/index.md).
GitLab as described in our [reference architecture documentation](../administration/reference_architectures/index.md).
[**> Install GitLab using the Omnibus GitLab package.**](https://about.gitlab.com/install/)
......@@ -67,7 +67,7 @@ GitLab maintains a set of official Docker images based on the Omnibus GitLab pac
## Installing GitLab from source
If the Omnibus GitLab package is not available in your distribution, you can
install GitLab from source: Useful for unsupported systems like *BSD. For an
install GitLab from source: Useful for unsupported systems like \*BSD. For an
overview of the directory structure, read the [structure documentation](structure.md).
[**> Install GitLab from source.**](installation.md)
......
......@@ -739,7 +739,7 @@ Have a read through these other resources and feel free to
[open an issue](https://gitlab.com/gitlab-org/gitlab/issues/new)
to request additional material:
- [Scaling GitLab](../../administration/scaling/index.md):
- [Scaling GitLab](../../administration/reference_architectures/index.md):
GitLab supports several different types of clustering and high availability.
- [Geo replication](../../administration/geo/replication/index.md):
Geo is the solution for widely distributed development teams.
......
......@@ -95,7 +95,7 @@ This is the recommended minimum hardware for a handful of example GitLab user ba
- 4 cores supports up to 500 users
- 8 cores supports up to 1,000 users
- 32 cores supports up to 5,000 users
- More users? Run it high-availability on [multiple application servers](https://about.gitlab.com/solutions/high-availability/)
- More users? Consult the [reference architectures page](../administration/reference_architectures/index.md)
### Memory
......@@ -112,7 +112,7 @@ errors during usage.
- 16GB RAM supports up to 500 users
- 32GB RAM supports up to 1,000 users
- 128GB RAM supports up to 5,000 users
- More users? Run it high-availability on [multiple application servers](https://about.gitlab.com/solutions/high-availability/)
- More users? Consult the [reference architectures page](../administration/reference_architectures/index.md)
We recommend having at least [2GB of swap on your server](https://askubuntu.com/a/505344/310789), even if you currently have
enough available RAM. Having swap will help reduce the chance of errors occurring
......