The v2 `auto-deploy-image` drops support for Kubernetes 1.15 and lower. If you need to upgrade your
Kubernetes cluster, follow your cloud provider's instructions. Here's
[an example on GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster).

#### Helm v3

> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/228609) in GitLab 13.4.

Previously, `auto-deploy-image` used Helm v2, which relied on Tiller running in the cluster.
The v2 `auto-deploy-image` uses Helm v3, which doesn't require Tiller.

If your Auto DevOps project has an active environment that was deployed with the v1
`auto-deploy-image`, use the following steps to upgrade to v2, which uses Helm v3:

1. Include the [Helm 2to3 migration CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml):

   - If you are on GitLab.com, or GitLab 14.0.1 or later, this template is already included in Auto DevOps.
   - On other versions of GitLab, you can modify your `.gitlab-ci.yml` to include the templates:

     ```yaml
     include:
       - template: Auto-DevOps.gitlab-ci.yml
       - remote: https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml
     ```

1. Set the following CI/CD variables (a `.gitlab-ci.yml` sketch follows this list):

   - `MIGRATE_HELM_2TO3` to `true`. If this variable is not present, migration jobs do not run.
   - `AUTO_DEVOPS_FORCE_DEPLOY_V2` to `1`.
   - **Optional:** `BACKUP_HELM2_RELEASES` to `1`. If you set this variable, the migration
     job saves a backup for 1 week in a job artifact called `helm-2-release-backups`.
     If you accidentally delete the Helm v2 releases before you are ready, you can restore
     this backup from a Kubernetes manifest file by using `kubectl apply -f $backup`.

     **WARNING:**
     *Do not use this if you have public pipelines*.
     This artifact can contain secrets and is visible to any
     user who can see your job.

1. Run a pipeline and trigger the `<environment-name>:helm-2to3:migrate` job.
1. Deploy your environment as usual. This deployment uses Helm v3.
1. If the deployment succeeds, you can safely run `<environment-name>:helm-2to3:cleanup`.
   This deletes all Helm v2 release data from the namespace.
1. Remove the `MIGRATE_HELM_2TO3` CI/CD variable or set it to `false`. You can do this
   one environment at a time using
   [environment scopes](../../ci/environments/index.md#scoping-environments-with-specs).
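
If you manage configuration in `.gitlab-ci.yml` rather than in the project's CI/CD
variable settings, the variables from step 2 can also be defined there. The following
is a minimal sketch based on the variables above, not the exact template configuration.
Note that variables defined in `.gitlab-ci.yml` apply to every environment, so the
per-environment scoping described in the last step requires project-level variables instead:

```yaml
variables:
  # Enables the Helm 2to3 migration jobs. Remove this line (or set the value
  # to "false") after every environment is migrated.
  MIGRATE_HELM_2TO3: "true"
  # Opt in to the v2 auto-deploy-image.
  AUTO_DEVOPS_FORCE_DEPLOY_V2: 1
  # Optional: back up Helm v2 releases in the `helm-2-release-backups` artifact.
  # WARNING: With public pipelines, this artifact is public and can expose secrets.
  # BACKUP_HELM2_RELEASES: 1
```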