Commit 9338931a authored by Marcel Amirault

Merge branch 'eread/add-iconography-to-geo-docs' into 'master'

Add iconography to Geo documentation

See merge request gitlab-org/gitlab!25776
parents 0a70abdf 038f2431
@@ -51,14 +51,14 @@ Feature.enable('geo_repository_verification')
## Repository verification
Navigate to the **Admin Area > Geo** dashboard on the **primary** node and expand
Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node and expand
the **Verification information** tab for that node to view automatic checksumming
status for repositories and wikis. Successes are shown in green, pending work
in grey, and failures in red.
![Verification status](img/verification-status-primary.png)
Navigate to the **Admin Area > Geo** dashboard on the **secondary** node and expand
Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node and expand
the **Verification information** tab for that node to view automatic verification
status for repositories and wikis. As with checksumming, successes are shown in
green, pending work in grey, and failures in red.
@@ -85,7 +85,7 @@ data. The default and recommended re-verification interval is 7 days, though
an interval as short as 1 day can be set. Shorter intervals reduce risk but
increase load and vice versa.
Navigate to the **Admin Area > Geo** dashboard on the **primary** node, and
Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node, and
click the **Edit** button for the **primary** node to customize the minimum
re-verification interval:
@@ -134,7 +134,7 @@ sudo gitlab-rake geo:verification:wiki:reset
If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:
1. Navigate to the **Admin Area > Projects** dashboard on the **primary** node, find the
1. Navigate to the **{admin}** **Admin Area >** **{overview}** **Overview > Projects** dashboard on the **primary** node, find the
project for which you want to check the checksum differences, and click the
**Edit** button:
![Projects dashboard](img/checksum-differences-admin-projects.png)
@@ -205,20 +205,20 @@ secondary domain, like changing Git remotes and API URLs.
This command will use the changed `external_url` configuration defined
in `/etc/gitlab/gitlab.rb`.
1. For GitLab 11.11 through 12.7 only, you may need to update the primary
1. For GitLab 11.11 through 12.7 only, you may need to update the **primary**
node's name in the database. This bug has been fixed in GitLab 12.8.
To determine if you need to do this, search for the
`gitlab_rails["geo_node_name"]` setting in your `/etc/gitlab/gitlab.rb`
file. If it is commented out with `#` or not found at all, then you will
need to update the primary node's name in the database. You can search for it
need to update the **primary** node's name in the database. You can search for it
like so:
```shell
grep "geo_node_name" /etc/gitlab/gitlab.rb
```
To update the primary node's name in the database:
To update the **primary** node's name in the database:
```shell
gitlab-rails runner 'Gitlab::Geo.primary_node.update!(name: GeoNode.current_node_name)'
@@ -92,7 +92,7 @@ The maintenance window won't end until Geo replication and verification is
completely finished. To keep the window as short as possible, you should
ensure these processes are as close to 100% as possible during active use.
Navigate to the **Admin Area > Geo** dashboard on the **secondary** node to
Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node to
review status. Replicated objects (shown in green) should be close to 100%,
and there should be no failures (shown in red). If a large proportion of
objects aren't yet replicated (shown in grey), consider giving the node more
@@ -117,8 +117,8 @@ This [content was moved to another location][background-verification].
### Notify users of scheduled maintenance
On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
message. You can check under **Admin Area > Geo** to estimate how long it
On the **primary** node, navigate to **{admin}** **Admin Area >** **{bullhorn}** **Messages** and add a broadcast
message. You can check under **{admin}** **Admin Area >** **{location-dot}** **Geo** to estimate how long it
will take to finish syncing. An example message would be:
> A scheduled maintenance will take place at XX:XX UTC. We expect it to take
@@ -162,8 +162,8 @@ access to the **primary** node during the maintenance window.
existing Git repository with an SSH remote URL. The server should refuse
connection.
1. Disable non-Geo periodic background jobs on the primary node by navigating
to **Admin Area > Monitoring > Background Jobs > Cron** , pressing `Disable All`,
1. Disable non-Geo periodic background jobs on the **primary** node by navigating
to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Cron**, pressing `Disable All`,
and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
This job will re-enable several other cron jobs that are essential for planned
failover to complete successfully.
@@ -172,11 +172,11 @@ access to the **primary** node during the maintenance window.
1. If you are manually replicating any data not managed by Geo, trigger the
final replication process now.
1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
and wait for all queues except those with `geo` in the name to drop to 0.
These queues contain work that has been submitted by your users; failing over
before it is completed will cause the work to be lost.
1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
- All replication meters reach 100% replicated, 0% failures.
@@ -184,7 +184,7 @@ access to the **primary** node during the maintenance window.
- Database replication lag is 0ms.
- The Geo log cursor is up to date (0 events behind).
1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
1. On the **secondary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
1. On the **secondary** node, use [these instructions][foreground-verification]
to verify the integrity of CI artifacts, LFS objects, and uploads in file
@@ -201,7 +201,7 @@ Finally, follow the [Disaster Recovery docs][disaster-recovery] to promote the
Once it is completed, the maintenance window is over! Your new **primary** node will now
begin to diverge from the old one. If problems do arise at this point, failing
back to the old **primary** node [is possible][bring-primary-back], but likely to result
in the loss of any data uploaded to the new primary in the meantime.
in the loss of any data uploaded to the new **primary** in the meantime.
Don't forget to remove the broadcast message after failover is complete.
@@ -184,7 +184,7 @@ keys must be manually replicated to the **secondary** node.
gitlab-ctl reconfigure
```
1. Visit the **primary** node's **Admin Area > Geo**
1. Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
(`/admin/geo/nodes`) in your browser.
1. Click the **New node** button.
![Add secondary node](img/adding_a_secondary_node.png)
@@ -231,7 +231,7 @@ You can login to the **secondary** node with the same credentials as used for th
Using Hashed Storage significantly improves Geo replication. Project and group
renames no longer require synchronization between nodes.
1. Visit the **primary** node's **Admin Area > Settings > Repository**
1. Visit the **primary** node's **{admin}** **Admin Area >** **{settings}** **Settings > Repository**
(`/admin/application_settings/repository`) in your browser.
1. In the **Repository storage** section, check **Use hashed storage paths for newly created and renamed projects**.
@@ -248,7 +248,7 @@ on the **secondary** node.
### Step 6. Enable Git access over HTTP/HTTPS
Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
method to be enabled. Navigate to **Admin Area > Settings**
method to be enabled. Navigate to **{admin}** **Admin Area >** **{settings}** **Settings**
(`/admin/application_settings/general`) on the **primary** node, and set
`Enabled Git access protocols` to `Both SSH and HTTP(S)` or `Only HTTP(S)`.
@@ -257,13 +257,13 @@ method to be enabled. Navigate to **Admin Area > Settings**
Your **secondary** node is now configured!
You can log in to the **secondary** node with the same credentials you used for the
**primary** node. Visit the **secondary** node's **Admin Area > Geo**
**primary** node. Visit the **secondary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
(`/admin/geo/nodes`) in your browser to check if it's correctly identified as a
**secondary** Geo node and if Geo is enabled.
The initial replication, or 'backfill', will probably still be in progress. You
can monitor the synchronization process on each Geo node from the **primary**
node's Geo Nodes dashboard in your browser.
node's **Geo Nodes** dashboard in your browser.
![Geo dashboard](img/geo_node_dashboard.png)
@@ -97,7 +97,7 @@ as well as permissions and credentials.
PostgreSQL can also hold some cached data, such as rendered Markdown HTML and cached merge request diffs (the diffs can
also be configured to be offloaded to object storage).
We use PostgreSQL's own replication functionality to replicate data from the primary to secondary nodes.
We use PostgreSQL's own replication functionality to replicate data from the **primary** to **secondary** nodes.
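As a generic illustration of what relying on PostgreSQL's own replication means in practice (a sketch, not a Geo-specific or documented command), you can inspect streaming replication on the **primary** with the standard `pg_stat_replication` view, assuming an Omnibus installation where `gitlab-psql` is available:

```shell
# List connected replicas and their replication state; pg_stat_replication is
# a built-in PostgreSQL statistics view, so this works for any streaming setup.
sudo gitlab-psql -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'
```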
We use Redis both as a cache store and to hold persistent data for our background jobs system. Because both
use cases have data that is exclusive to the same Geo node, we don't replicate it between nodes.
@@ -17,7 +17,7 @@ integrated [Container Registry](../../packages/container_registry.md#container-r
You can enable storage-agnostic replication so it
can be used for cloud or local storage. Whenever a new image is pushed to the
primary node, each **secondary** node will pull it to its own container
**primary** node, each **secondary** node will pull it to its own container
repository.
To configure Docker Registry replication:
@@ -111,6 +111,7 @@ generate a short-lived JWT that is pull-only-capable to access the
### Verify replication
To verify Container Registry replication is working, go to **Admin Area > Geo** (`/admin/geo/nodes`) on the **secondary** node.
To verify Container Registry replication is working, go to **{admin}** **Admin Area >** **{location-dot}** **Geo**
(`/admin/geo/nodes`) on the **secondary** node.
The initial replication, or "backfill", will probably still be in progress.
You can monitor the synchronization process on each Geo node from the **primary** node's **Geo Nodes** dashboard in your browser.
@@ -270,7 +270,7 @@ For answers to common questions, see the [Geo FAQ](faq.md).
Since GitLab 9.5, Geo stores structured log messages in a `geo.log` file. For Omnibus installations, this file is at `/var/log/gitlab/gitlab-rails/geo.log`.
This file contains information about when Geo attempts to sync repositories and files. Each line in the file contains a separate JSON entry that can be ingested into, for example, Elasticsearch or Splunk.
This file contains information about when Geo attempts to sync repositories and files. Each line in the file contains a separate JSON entry that can be ingested into a service such as Elasticsearch or Splunk.
For example:
@@ -24,7 +24,7 @@ whether they are stored on the local filesystem or in object storage.
To enable GitLab replication, you must:
1. Go to **Admin Area > Geo**.
1. Go to **{admin}** **Admin Area >** **{location-dot}** **Geo**.
1. Press **Edit** on the **secondary** node.
1. Enable the **Allow this secondary node to replicate content on Object Storage**
checkbox.
@@ -2,7 +2,7 @@
**Secondary** nodes can be removed from the Geo cluster using the Geo admin page of the **primary** node. To remove a **secondary** node:
1. Navigate to **Admin Area > Geo** (`/admin/geo/nodes`).
1. Navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`).
1. Click the **Remove** button for the **secondary** node you want to remove.
1. Confirm by clicking **Remove** when the prompt appears.
@@ -19,7 +19,7 @@ Before attempting more advanced troubleshooting:
### Check the health of the **secondary** node
Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`) in
your browser. We perform the following health checks on each **secondary** node
to help identify if something is wrong:
@@ -122,7 +122,7 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:
- If that is not defined, using the `external_url` setting.
This name is used to look up the node with the same **Name** in
**Admin Area > Geo**.
**{admin}** **Admin Area >** **{location-dot}** **Geo**.
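For illustration only, a quick way to see which of these two settings is present in your configuration file (a sketch; the check task described next remains the way to validate the name against the database):

```shell
# Show geo_node_name (if set) and external_url; Geo falls back to external_url
# when geo_node_name is commented out or missing.
grep -E "geo_node_name|external_url" /etc/gitlab/gitlab.rb
```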
To check if the current machine has a node name that matches a node in the
database, run the check task:
@@ -211,9 +211,9 @@ sudo gitlab-rake gitlab:geo:check
Checking Geo ... Finished
```
- Ensure that you have added the secondary node in the Admin Area of the primary node.
- Ensure that you entered the `external_url` or `gitlab_rails['geo_node_name']` when adding the secondary node in the admin are of the primary node.
- Prior to GitLab 12.4, edit the secondary node in the Admin Area of the primary node and ensure that there is a trailing `/` in the `Name` field.
- Ensure that you have added the secondary node in the Admin Area of the **primary** node.
- Ensure that you entered the `external_url` or `gitlab_rails['geo_node_name']` when adding the secondary node in the Admin Area of the **primary** node.
- Prior to GitLab 12.4, edit the secondary node in the Admin Area of the **primary** node and ensure that there is a trailing `/` in the `Name` field.
1. Check returns `Exception: PG::UndefinedTable: ERROR: relation "geo_nodes" does not exist`
@@ -359,7 +359,7 @@ To help us resolve this problem, consider commenting on
GitLab places a timeout on all repository clones, including project imports
and Geo synchronization operations. If a fresh `git clone` of a repository
on the primary takes more than a few minutes, you may be affected by this.
on the **primary** takes more than a few minutes, you may be affected by this.
To increase the timeout, add the following line to `/etc/gitlab/gitlab.rb`
on the **secondary** node:
@@ -750,7 +750,7 @@ If you are able to log in to the **primary** node, but you receive this error
when attempting to log into a **secondary**, you should check that the Geo
node's URL matches its external URL.
1. On the primary, visit **Admin Area > Geo**.
1. On the primary, visit **{admin}** **Admin Area >** **{location-dot}** **Geo**.
1. Find the affected **secondary** and click **Edit**.
1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
in `external_url "https://gitlab.example.com"` on the frontend server(s) of
@@ -833,4 +833,4 @@ To resolve this issue:
- Check `/var/log/gitlab/gitlab-rails/geo.log` to see if the **secondary** node is
using IPv6 to send its status to the **primary** node. If it is, add an entry to
the **primary** node using IPv4 in the `/etc/hosts` file. Alternatively, you should
[enable IPv6 on the primary node](https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-address-or-addresses).
[enable IPv6 on the **primary** node](https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-address-or-addresses).
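For illustration, a hypothetical `/etc/hosts` entry for the **primary** node, using placeholder values rather than anything from this documentation:

```shell
# Resolve the primary node's hostname to an IPv4 address (placeholder values)
# so the secondary's status updates travel over IPv4 instead of IPv6.
echo "192.0.2.1  gitlab-primary.example.com" | sudo tee -a /etc/hosts
```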
@@ -2,8 +2,8 @@
## Changing the sync capacity values
In the Geo admin page (`/admin/geo/nodes`), there are several variables that
can be tuned to improve performance of Geo:
In the Geo admin page at **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`),
there are several variables that can be tuned to improve the performance of Geo:
- Repository sync capacity.
- File sync capacity.
@@ -186,7 +186,7 @@ Replicating over SSH has been deprecated, and support for this option will be
removed in a future release.
To switch to HTTP/HTTPS replication, log into the **primary** node as an admin and visit
**Admin Area > Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
**{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
press the "Edit" button, change the "Repository cloning" setting from
"SSH (deprecated)" to "HTTP/HTTPS", and press "Save changes". This should take
effect immediately.