@@ -448,7 +448,7 @@ The first way is simply by running the experiment. Assuming the experiment has b
The second way doesn't run the experiment and is intended to be used if the experiment only needs to surface in the client layer. To accomplish this, we can simply `.publish` the experiment. This won't run any logic, but it does surface the experiment details in the client layer so they can be used there.
An example might be to publish an experiment in a `before_action` in a controller. Assuming we've defined the `PillColorExperiment` class, like we have above, we can surface it to the client by publishing it instead of running it:
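A minimal sketch of that `before_action` usage follows; the controller class and the `:show` action are illustrative assumptions, and only the `experiment(:pill_color).publish` call comes from the example above:

```ruby
# Hypothetical controller; only the experiment(:pill_color).publish call is
# taken from the PillColorExperiment example above.
class ProjectsController < ApplicationController
  before_action -> { experiment(:pill_color).publish }, only: [:show]
end
```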
@@ -112,7 +112,7 @@ while and there are no issues, we can proceed.
### Proposed solution: Migrate data by using MultiStore with the fallback strategy
We need a way to migrate users to a new Redis store without causing any inconvenience from a UX perspective.
We also want the ability to fall back to the "old" Redis instance if something goes wrong with the new instance.
Migration Requirements:
...
...
@@ -129,13 +129,13 @@ We need to write data into both Redis instances (old + new).
We read from the new instance, but fall back to the old instance if a read from the new dedicated Redis instance fails.
We need to log any issues or exceptions with the new instance, but still fall back to the old instance.
The proposed migration strategy is to implement and use the [MultiStore](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/multi_store.rb).
We used this approach with [adding new dedicated Redis instance for session keys](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/579).
MultiStore also comes with corresponding [specs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/multi_store_spec.rb).
The `MultiStore` looks like a `redis-rb` `::Redis` instance.
In the new Redis instance class you added in [Step 1](#step-1-support-configuring-the-new-instance),
override the [Redis](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/sessions.rb#L20-28) method from the `::Gitlab::Redis::Wrapper` class, as in the sketch below.
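A sketch of what that override might look like for a hypothetical `Gitlab::Redis::Foo` class, modelled on the linked `Sessions` example; the exact guards and method names in the real code may differ:

```ruby
# Hypothetical Gitlab::Redis::Foo; a sketch only, based on the Sessions example
# linked above rather than on the actual implementation.
module Gitlab
  module Redis
    class Foo < ::Gitlab::Redis::Wrapper
      class << self
        # The old instance this data used to live on, used as the fallback store.
        def config_fallback
          SharedState
        end

        def redis
          primary_store = ::Redis.new(params)                   # new dedicated instance
          secondary_store = ::Redis.new(config_fallback.params)  # old (fallback) instance

          MultiStore.new(primary_store, secondary_store, store_name)
        end
      end
    end
  end
end
```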
By enabling `use_primary_and_secondary_stores_for_foo` feature flag, our `Gitlab::Redis::Foo` will use `MultiStore` to write to both new Redis instance
and the [old (fallback-instance)](#fallback-instance).
If we fail to fetch data from the new instance, we fall back and read from the old Redis instance.
We can monitor logs for `Gitlab::Redis::MultiStore::ReadFromPrimaryError`, and also the Prometheus counter `gitlab_redis_multi_store_read_fallback_total`.
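To illustrate the dual-write and fallback-read behaviour described above, here is a simplified, self-contained sketch; it is not the actual `MultiStore` implementation, and the logging is only a stand-in for the real error class and Prometheus counter:

```ruby
require 'redis'

# Simplified stand-in for MultiStore, for illustration only.
class SimplifiedMultiStore
  def initialize(primary_store, secondary_store)
    @primary_store = primary_store     # new dedicated Redis instance
    @secondary_store = secondary_store # old (fallback) instance
  end

  # Write to both instances so the new one is gradually warmed up.
  def set(key, value)
    @secondary_store.set(key, value)
    @primary_store.set(key, value)
  end

  # Read from the new instance first; fall back to the old instance when the
  # key is missing or the read raises. The real code surfaces
  # ReadFromPrimaryError and increments gitlab_redis_multi_store_read_fallback_total here.
  def get(key)
    value = begin
      @primary_store.get(key)
    rescue StandardError => error
      warn("read from new instance failed: #{error.message}")
      nil
    end

    value.nil? ? @secondary_store.get(key) : value
  end
end

# Example usage (URLs are placeholders):
# store = SimplifiedMultiStore.new(Redis.new(url: 'redis://new:6379'), Redis.new(url: 'redis://old:6379'))
```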
...
...
@@ -218,7 +218,7 @@ When a command outside of the supported list is used, `method_missing` will pass
This ensures that anything unexpected behaves like it would before.
NOTE:
By tracking `gitlab_redis_multi_store_method_missing_total` counter and `Gitlab::Redis::MultiStore::MethodMissingError`,
a developer will need to add an implementation for missing Redis commands before proceeding with the migration.
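As a purely illustrative sketch of that pass-through behaviour (the helper methods here are hypothetical; only the counter and error names come from the text above):

```ruby
# Hypothetical sketch, not the real MultiStore code.
def method_missing(command_name, *args, &block)
  log_method_missing(command_name)             # would surface Gitlab::Redis::MultiStore::MethodMissingError
  increment_method_missing_count(command_name) # would bump gitlab_redis_multi_store_method_missing_total

  # Pass the unsupported command through to the old instance so behaviour is unchanged.
  secondary_store.send(command_name, *args, &block)
end

def respond_to_missing?(command_name, include_private = false)
  true
end
```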
- `elapsed` - Amount of time that passed between the start of the Service Ping report process and the moment the error occurred
- `message` - Error message
<pre>
<code>
{
  "uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
  "hostname"=>"127.0.0.1",
  "version"=>"14.7.0-pre",
  "elapsed"=>0.006946,
  "message"=>'PG::UndefinedColumn: ERROR: column \"non_existent_attribute\" does not exist\nLINE 1: SELECT COUNT(non_existent_attribute) FROM \"issues\" /*applica...'
}
</code>
</pre>
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/7388) in GitLab 14.8 with a flag named `admin_application_settings_service_usage_data_center`. Disabled by default.
...
...
@@ -596,7 +596,7 @@ To upload payload manually:
## Monitoring
Service Ping reporting process state is monitored with an [internal SiSense dashboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
@@ -20,7 +20,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:github` | The test requires a GitHub personal access token. |
| `:group_saml` | The test requires a GitLab instance that has SAML SSO enabled at the group level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:instance_saml` | The test requires a GitLab instance that has SAML SSO enabled at the instance level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:integrations` | This aims to test the available [integrations](../../../user/project/integrations/overview.md#integrations-listing). The test requires Docker to be installed in the run context. It will provision the containers and can be run against a local instance or using the `gitlab-qa` scenario `Test::Integration::Integrations` |
| `:service_ping_disabled` | The test interacts with the GitLab configuration at the instance level to turn the Service Ping admin setting checkbox on or off. This tag will have the test run only in the `service_ping_disabled` job and must be paired with the `:orchestrated` and `:requires_admin` tags. |
| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) provisions the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run. |
| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test also includes provisioning of at least one Kubernetes cluster to test against. _This tag is often paired with `:orchestrated`._ |
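For context, these tags are applied as ordinary RSpec metadata. A hypothetical example (the spec content is made up; only the tag comes from the table above):

```ruby
# Hypothetical end-to-end spec showing how a metadata tag is attached.
module QA
  RSpec.describe 'Integrations', :integrations do
    it 'configures a project integration' do
      # test steps...
    end
  end
end
```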
On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the feature flag](../administration/feature_flags.md) named `omniauth_login_minimal_scopes`. On GitLab.com, this feature is not available.
If you use a GitLab instance for authentication, you can reduce access rights when an OAuth application is used for sign in.
Any OAuth application can advertise the purpose of the application with the
authorization parameter: `gl_auth_type=login`. If the application is
1. On the left sidebar, select **Monitoring > Background Jobs**.
1. Under the Sidekiq dashboard, verify that the numbers
match with what was shown on the old server.
1. While still under the Sidekiq dashboard, select **Cron** and then **Enable All**
to re-enable periodic background jobs.
1. Test that read-only operations on the GitLab instance work as expected. For example, browse through project repository files, merge requests, and issues.
@@ -154,7 +154,7 @@ To change the namespace linked to a subscription:
for that group.
1. Select **Proceed to checkout**.
Subscription charges are calculated based on the total number of users in a group, including its subgroups and nested projects. If the [total number of users](gitlab_com/index.md#view-seat-usage) exceeds the number of seats in your subscription, your account is charged for the additional users and you need to pay for the overage before you can change the linked namespace.
Only one namespace can be linked to a subscription.
@@ -190,7 +190,7 @@ You can override the default values in the `values.yaml` file in the
`HELM_UPGRADE_VALUES_FILE` [CI/CD variable](#cicd-variables) with
the path and name.
Some values cannot be overridden with the options above. Settings like `replicaCount` should instead be overridden with the `REPLICAS`
[build and deployment](#build-and-deployment) CI/CD variable. Follow [this issue](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/issues/31) for more information.
@@ -39,7 +39,7 @@ release if the patch release is not the latest. For example, upgrading from
14.1.1 to 14.2.0 should be safe even if 14.1.2 has been released. We do recommend
you check the release posts of any releases between your current and target
version just in case they include any migrations that may require you to upgrade
one release at a time.
We also recommend you verify the [version specific upgrading instructions](index.md#version-specific-upgrading-instructions) relevant to your [upgrade path](index.md#upgrade-paths).
Value stream analytics shows the following deployment metrics for your project:
- Deploys: The number of successful deployments in the date range.
- Deployment Frequency: The average number of successful deployments per day in the date range.
...
...
@@ -174,14 +174,14 @@ This example shows a workflow through all seven stages in one day. In this
example, milestones have been created and CI for testing and setting environments is configured.
- 09:00: Create issue. **Issue** stage starts.
- 11:00: Add issue to a milestone, start work on the issue, and create a branch locally.
  **Issue** stage stops and **Plan** stage starts.
- 12:00: Make the first commit.
- 12:30: Make the second commit to the branch that mentions the issue number. **Plan** stage stops and **Code** stage starts.
- 14:00: Push branch and create a merge request that contains the [issue closing pattern](../project/issues/managing_issues.md#closing-issues-automatically). **Code** stage stops and **Test** and **Review** stages start.
- The CI takes 5 minutes to run scripts defined in [`.gitlab-ci.yml`](../../ci/yaml/index.md).
  **Test** stage stops.
- Review merge request.
- 19:00: Merge the merge request. **Review** stage stops and **Staging** stage starts.
- 19:30: Deployment to the `production` environment starts and finishes. **Staging** stops.
...
...
@@ -191,7 +191,7 @@ Value stream analytics records the following times for each stage:
- **Plan**: 11:00 to 12:00: 1 hr
- **Code**: 12:00 to 14:00: 2 hrs
- **Test**: 5 minutes
- **Review**: 14:00 to 19:00: 5 hrs
- **Staging**: 19:00 to 19:30: 30 minutes
There are some additional considerations for this example:
...
...
@@ -202,5 +202,5 @@ still collects analytics data for the issue.
as every merge request should be tested.
- This example illustrates only one cycle of multiple stages. The value
stream analytics dashboard shows the calculated median elapsed time for these issues.
- Value stream analytics identifies production environments based on the
[deployment tier of environments](../../ci/environments/index.md#deployment-tier-of-environments).
### `secret-detection` job fails with `ERR fatal: ambiguous argument` message
Your `secret-detection` job can fail with an `ERR fatal: ambiguous argument` error if your
repository's default branch is unrelated to the branch the job was triggered for.
See issue [!352014](https://gitlab.com/gitlab-org/gitlab/-/issues/352014) for more details.
To resolve the issue, make sure to correctly [set your default branch](../../project/repository/branches/default.md#change-the-default-branch-name-for-a-project) on your repository. You should set it to a branch
that shares history with the branch you run the `secret-detection` job on.
@@ -7,7 +7,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Vulnerability Report **(ULTIMATE)**
The Vulnerability Report provides information about vulnerabilities from scans of the default branch. It contains cumulative results of all successful jobs, regardless of whether the pipeline was successful.
The scan results from a pipeline are only ingested after all the jobs in the pipeline complete. Partial results for a pipeline with jobs in progress can be seen in the pipeline security tab.
@@ -51,7 +51,7 @@ Once [Group Single Sign-On](index.md) has been configured, we can:
The SAML application that was created during [Single sign-on](index.md) setup for [Azure](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/view-applications-portal) now needs to be set up for SCIM. You can refer to [Azure SCIM setup documentation](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#getting-started).
1. In your app, go to the **Provisioning** tab, and set the **Provisioning Mode** to **Automatic**.
Then fill in the **Admin Credentials**, and save. The **Tenant URL** and **secret token** are the items
retrieved in the [previous step](#gitlab-configuration).
...
...
@@ -60,7 +60,7 @@ The SAML application that was created during [Single sign-on](index.md) setup fo
- **Settings**: We recommend setting a notification email and selecting the **Send an email notification when a failure occurs** checkbox.
You also control what is actually synced by selecting the **Scope**. For example, **Sync only assigned users and groups** only syncs the users and groups assigned to the application. Otherwise, it syncs the whole Active Directory.
- **Mappings**: We recommend keeping **Provision Azure Active Directory Users** enabled, and disabling **Provision Azure Active Directory Groups**.
Leaving **Provision Azure Active Directory Groups** enabled does not break the SCIM user provisioning, but it causes errors in Azure AD that may be confusing and misleading.
1. You can then test the connection by selecting **Test Connection**. If the connection is successful, save your configuration before moving on. See below for [troubleshooting](#troubleshooting).
@@ -24,7 +24,7 @@ Project access tokens are similar to [group access tokens](../../group/settings/
and [personal access tokens](../../profile/personal_access_tokens.md), except they are
associated with a project rather than a group or user.
In self-managed instances, project access tokens are subject to the same [maximum lifetime limits](../../admin_area/settings/account_and_limit_settings.md#limit-the-lifetime-of-personal-access-tokens) as personal access tokens if the limit is set.