Commit 80b03c73 authored by Marcia Ramos

Merge branch 'docs-refactor-add-clusters' into 'master'

Docs: Refactor adding and removing clusters doc

See merge request gitlab-org/gitlab!62846
parents 125daf21 0f9e30e5
......@@ -12,6 +12,6 @@
'role-arn' => @aws_role.role_arn,
'instance-types' => @instance_types,
'kubernetes-integration-help-path' => help_page_path('user/project/clusters/index'),
'account-and-external-ids-help-path' => help_page_path('user/project/clusters/add_eks_clusters.md', anchor: 'new-eks-cluster'),
'create-role-arn-help-path' => help_page_path('user/project/clusters/add_eks_clusters.md', anchor: 'new-eks-cluster'),
'account-and-external-ids-help-path' => help_page_path('user/project/clusters/add_eks_clusters.md', anchor: 'create-a-new-certificate-based-eks-cluster'),
'create-role-arn-help-path' => help_page_path('user/project/clusters/add_eks_clusters.md', anchor: 'create-a-new-certificate-based-eks-cluster'),
'external-link-icon' => sprite_icon('external-link') } }
......@@ -22,7 +22,7 @@ This can be useful for:
## Permissions
Only the management project receives `cluster-admin` privileges. All
other projects continue to receive [namespace scoped `edit` level privileges](../project/clusters/add_remove_clusters.md#rbac-cluster-resources).
other projects continue to receive [namespace scoped `edit` level privileges](../project/clusters/cluster_access.md#rbac-cluster-resources).
Management projects are restricted to the following:
......
......@@ -163,7 +163,7 @@ are deployed to the Kubernetes cluster, see the documentation for
## Security of runners
For important information about securely configuring runners, see
[Security of runners](../../project/clusters/add_remove_clusters.md#security-of-runners)
[Security of runners](../../project/clusters/cluster_access.md#security-of-runners)
documentation for project-level clusters.
## More information
......
......@@ -4,85 +4,57 @@ group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Adding EKS clusters **(FREE)**
# EKS clusters (DEPRECATED) **(FREE)**
GitLab supports adding new and existing EKS clusters.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/22392) in GitLab 12.5.
> - [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/327908) in GitLab 14.0.
## EKS requirements
WARNING:
Use [Infrastructure as Code](../../infrastructure/index.md) to create new clusters. The method described in this document is deprecated as of GitLab 14.0.
Before creating your first cluster on Amazon EKS with the GitLab integration, make sure the following
requirements are met:
Through GitLab, you can create new clusters and add existing clusters hosted on Amazon Elastic
Kubernetes Service (EKS).
- An [Amazon Web Services](https://aws.amazon.com/) account is set up and you are able to log in.
- You have permissions to manage IAM resources.
- If you want to use an [existing EKS cluster](#existing-eks-cluster):
- An Amazon EKS cluster with worker nodes properly configured.
- `kubectl` [installed and configured](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl)
for access to the EKS cluster.
## Add an existing EKS cluster
### Additional requirements for self-managed instances **(FREE SELF)**
If you already have an EKS cluster and want to integrate it with GitLab,
see how to [add an existing cluster](add_existing_cluster.md).
If you are using a self-managed GitLab instance, GitLab must first be configured with a set of
Amazon credentials. These credentials are used to assume an Amazon IAM role provided by the user
creating the cluster. Create an IAM user and ensure it has permissions to assume the role(s) that
your users need to create EKS clusters.
## Create a new certificate-based EKS cluster
For example, the following policy document allows assuming a role whose name starts with
`gitlab-eks-` in account `123456789012`:
Prerequisites:
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::123456789012:role/gitlab-eks-*"
}
}
```
- An [Amazon Web Services](https://aws.amazon.com/) account.
- Permissions to manage IAM resources.
### Configure Amazon authentication
For instance-level clusters, see [additional requirements for self-managed instances](#additional-requirements-for-self-managed-instances). **(FREE SELF)**
To configure Amazon authentication in GitLab, generate an access key for the IAM user in the Amazon AWS console, and follow the steps below.
To create new Kubernetes clusters for your project, group, or instance through the certificate-based method:
1. Navigate to **Admin Area > Settings > General** and expand the **Amazon EKS** section.
1. Check **Enable Amazon EKS integration**.
1. Enter your **Account ID**.
1. Depending on your configuration, enter your access key and ID:
1. [Define the access control (RBAC or ABAC) for your cluster](cluster_access.md).
1. [Create a cluster in GitLab](#create-a-new-eks-cluster-in-gitlab).
1. [Prepare the cluster in Amazon](#prepare-the-cluster-in-amazon).
1. [Configure your cluster's data in GitLab](#configure-your-clusters-data-in-gitlab).
- _GitLab 13.7 and later, and using an instance profile_: You may leave
**Access key ID** and **Secret access key** blank.
Read [Instance profiles](#instance-profiles) for more information.
- _All GitLab versions_: Enter your access key credentials into
**Access key ID** and **Secret access key**.
Further steps:
1. Click **Save changes**.
1. [Create a default Storage Class](#create-a-default-storage-class).
1. [Deploy the app to EKS](#deploy-the-app-to-eks).
#### Instance profiles
### Create a new EKS cluster in GitLab
> Introduced in [GitLab 13.7](https://gitlab.com/gitlab-org/gitlab/-/issues/291015).
To create a new EKS cluster:
You may leave the `Access key ID` and `Secret access key` fields blank if
you are using an instance profile
[to pass an IAM role to an EC2 instance](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html).
Instance profiles dynamically retrieve temporary credentials from AWS when needed.
## New EKS cluster
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/22392) in GitLab 12.5.
To create and add a new Kubernetes cluster to your project, group, or instance:
1. Navigate to your:
1. Go to your:
- Project's **Infrastructure > Kubernetes clusters** page, for a project-level cluster.
- Group's **Kubernetes** page, for a group-level cluster.
- **Admin Area > Kubernetes**, for an instance-level cluster.
1. Click **Integrate with a cluster certificate**.
- **Menu >** **{admin}** **Admin > Kubernetes**, for an instance-level cluster.
1. Select **Integrate with a cluster certificate**.
1. Under the **Create new cluster** tab, click **Amazon EKS** to display an
`Account ID` and `External ID` needed for later steps.
1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home), create an IAM policy:
1. From the left panel, select **Policies**.
1. Click **Create Policy**, which opens a new window.
1. Select **Create Policy**, which opens a new window.
1. Select the **JSON** tab, and paste the following snippet in place of the
existing content. These permissions give GitLab the ability to create
resources, but not delete them:
......@@ -133,132 +105,163 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
}
```
If an error is encountered during the creation process, changes will
not be rolled back and you must remove resources manually. You can do this by deleting
the relevant [CloudFormation stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html)
If you get an error during this process, GitLab does not roll back the changes. You must remove resources manually. You can do this by deleting
the relevant [CloudFormation stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html).
1. Click **Review policy**.
1. Enter a suitable name for this policy, and click **Create Policy**. You can now close this window.
1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home), create an **EKS IAM role** following the [Amazon EKS cluster IAM role instructions](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html). This role should exist so that Kubernetes clusters managed by Amazon EKS can make calls to other AWS services on your behalf to manage the resources that you use with the service.
In addition to the policies that the guide suggests, you must also include the `AmazonEKSClusterPolicy`
policy for this role so that GitLab can manage the EKS cluster correctly.
1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home), create another IAM role which will be used by GitLab to authenticate with AWS. Follow these steps to create it:
1. On the AWS IAM console, select **Roles** from the left panel.
1. Click **Create role**.
1. Under `Select type of trusted entity`, select **Another AWS account**.
1. Enter the Account ID from GitLab into the `Account ID` field.
1. Check **Require external ID**.
1. Enter the External ID from GitLab into the `External ID` field.
1. Click **Next: Permissions**, and select the policy you just created.
1. Click **Next: Tags**, and optionally enter any tags you wish to associate with this role.
1. Click **Next: Review**.
1. Enter a role name and optional description into the fields provided.
1. Click **Create role**, the new role name displays at the top. Click on its name and copy the `Role ARN` from the newly created role.
1. In GitLab, enter the copied role ARN into the `Role ARN` field.
### Prepare the cluster in Amazon
1. [Create an **EKS IAM role** for your cluster](#create-an-eks-iam-role-for-your-cluster) (**role A**).
1. [Create **another EKS IAM role** for GitLab authentication with Amazon](#create-another-eks-iam-role-for-gitlab-authentication-with-amazon) (**role B**).
#### Create an EKS IAM role for your cluster
In the [IAM Management Console](https://console.aws.amazon.com/iam/home),
create an **EKS IAM role** (**role A**) following the [Amazon EKS cluster IAM role instructions](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html).
This role is necessary so that Kubernetes clusters managed by Amazon EKS can make calls to other AWS
services on your behalf to manage the resources that you use with the service.
For GitLab to manage the EKS cluster correctly, you must include `AmazonEKSClusterPolicy` in
addition to the policies the guide suggests.
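If you manage IAM with the AWS CLI rather than the console, attaching the managed policy might look like the following sketch (the role name `gitlab-eks-cluster-role` is illustrative):
```shell
# Illustrative only: attach the AWS-managed AmazonEKSClusterPolicy to your EKS cluster role (role A).
aws iam attach-role-policy \
  --role-name gitlab-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```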
#### Create another EKS IAM role for GitLab authentication with Amazon
In the [IAM Management Console](https://console.aws.amazon.com/iam/home),
create another IAM role (**role B**) for GitLab authentication with AWS:
1. On the AWS IAM console, select **Roles** from the left panel.
1. Click **Create role**.
1. Under **Select type of trusted entity**, select **Another AWS account**.
1. Enter the Account ID from GitLab into the **Account ID** field.
1. Check **Require external ID**.
1. Enter the External ID from GitLab into the **External ID** field.
1. Click **Next: Permissions**, and select the policy you just created.
1. Click **Next: Tags**, and optionally enter any tags you wish to associate with this role.
1. Click **Next: Review**.
1. Enter a role name and optional description into the fields provided.
1. Click **Create role**. The new role name displays at the top. Click on its name and copy the
`Role ARN` from the newly created role.
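After you complete these steps, the trust relationship that the console generates for role B should look roughly like the following sketch, where the account ID and external ID placeholders are the values you copied from GitLab:
```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::<gitlab-account-id>:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "<external-id-from-gitlab>" } }
  }
}
```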
### Configure your cluster's data in GitLab
1. Back in GitLab, enter the copied role ARN into the **Role ARN** field.
1. In the **Cluster Region** field, enter the [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) you plan to use for your new cluster. GitLab confirms you have access to this region when authenticating your role.
1. Click **Authenticate with AWS**.
1. Choose your cluster's settings:
- **Kubernetes cluster name** - The name you wish to give the cluster.
- **Environment scope** - The [associated environment](index.md#setting-the-environment-scope) to this cluster.
- **Kubernetes version** - The [Kubernetes version](index.md#supported-cluster-versions) to use.
- **Service role** - Select the **EKS IAM role** you created earlier to allow Amazon EKS
and the Kubernetes control plane to manage AWS resources on your behalf.
NOTE:
This IAM role is _not_ the IAM role you created in the previous step. It should be
the one you created much earlier by following the
[Amazon EKS cluster IAM role](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html)
guide.
- **Key pair name** - Select the [key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
that you can use to connect to your worker nodes if required.
- **VPC** - Select a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
to use for your EKS Cluster resources.
- **Subnets** - Choose the [subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
in your VPC where your worker nodes run. You must select at least two.
- **Security group** - Choose the [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
to apply to the EKS-managed Elastic Network Interfaces that are created in your worker node subnets.
- **Instance type** - The [instance type](https://aws.amazon.com/ec2/instance-types/) of your worker nodes.
- **Node count** - The number of worker nodes.
- **GitLab-managed cluster** - Leave this checked if you want GitLab to manage namespaces and service accounts for this cluster.
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. Finally, click the **Create Kubernetes cluster** button.
1. Select **Authenticate with AWS**.
1. Adjust your [cluster's settings](#cluster-settings).
1. Select the **Create Kubernetes cluster** button.
After about 10 minutes, your cluster is ready to go.
NOTE:
If you have [installed and configured](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl) `kubectl` and you would like to manage your cluster with it, you must add your AWS external ID in the AWS configuration. For more information on how to configure AWS CLI, see [using an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-xaccount).
### Cluster creation flow
The following sequence illustrates how GitLab works with AWS to create an EKS cluster:
```mermaid
sequenceDiagram
autonumber
participant G as GitLab
participant A as AWS
participant E as EKS cluster
alt static credentials
G->>G: Load AWS Access and secret key
end
alt IAM instance profile
G->>A: Fetch temporary credentials
A->>G: Temporary access credentials
end
G->>A: AssumeRole: EKS Provision Role
A->>A: Check account, external IDs
A->>A: Check permissions
A->>G: New access credentials
note over G: user selects EKS cluster options
note over G,A: Use Service Role credentials
G->>A: CreateStack (CloudFormation)
A->>G: Received
G->>G: Wait 5 minutes
loop Poll for cluster creation
G->>A: DescribeStacks
A->>G: CREATE_IN_PROGRESS
end
note over G,E: EKS Cluster Created
G->>A: DescribeStacks
A->>G: CREATE_COMPLETE
G->>E: kubectl create role (service account)
E->>G: OK
If you have [installed and configured](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl) `kubectl` and you would like to manage your cluster with it, you must add your AWS external ID in the AWS configuration. For more information on how to configure AWS CLI, see [using an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-xaccount).
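For example, assuming a hypothetical profile named `gitlab-eks`, the `~/.aws/config` entry might look like this sketch:
```plaintext
# Illustrative values: replace the role ARN, external ID, and region with your own.
[profile gitlab-eks]
role_arn = arn:aws:iam::123456789012:role/gitlab-eks-provision
external_id = <external-id-from-gitlab>
source_profile = default
region = us-east-1
```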
#### Cluster settings
When you create a new cluster, you have the following settings:
| Setting | Description |
| ----------------------- |------------ |
| Kubernetes cluster name | Your cluster's name. |
| Environment scope | The [associated environment](index.md#setting-the-environment-scope). |
| Service role | The **EKS IAM role** (**role A**). |
| Kubernetes version | The [Kubernetes version](index.md#supported-cluster-versions) for your cluster. |
| Key pair name | The [key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) that you can use to connect to your worker nodes. |
| VPC | The [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) to use for your EKS Cluster resources. |
| Subnets | The [subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) in your VPC where your worker nodes run. At least two are required. |
| Security group | The [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) to apply to the EKS-managed Elastic Network Interfaces that are created in your worker node subnets. |
| Instance type | The [instance type](https://aws.amazon.com/ec2/instance-types/) of your worker nodes. |
| Node count | The number of worker nodes. |
| GitLab-managed cluster | Check if you want GitLab to manage namespaces and service accounts for this cluster. |
## Create a default Storage Class
Amazon EKS doesn't have a default Storage Class out of the box, which means
requests for persistent volumes are not automatically fulfilled. As part
of Auto DevOps, the deployed PostgreSQL instance requests persistent storage,
and without a default storage class it cannot start.
If a default Storage Class doesn't already exist and is desired, follow Amazon's
[guide on storage classes](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html)
to create one.
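For illustration, a minimal default Storage Class along the lines of that guide might look like the following sketch, assuming the in-tree `kubernetes.io/aws-ebs` provisioner and `gp2` volumes:
```yaml
# The annotation marks this class as the cluster default, so persistent volume
# claims without an explicit storageClassName use it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Apply it with `kubectl apply -f <filename>`.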
Alternatively, disable PostgreSQL by setting the project variable
[`POSTGRES_ENABLED`](../../../topics/autodevops/customize.md#cicd-variables) to `false`.
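For example, one way to set this variable is in your `.gitlab-ci.yml`:
```yaml
variables:
  POSTGRES_ENABLED: "false"
```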
## Deploy the app to EKS
With RBAC disabled and services deployed,
[Auto DevOps](../../../topics/autodevops/index.md) can now be leveraged
to build, test, and deploy the app.
[Enable Auto DevOps](../../../topics/autodevops/index.md#at-the-project-level)
if not already enabled. If a wildcard DNS entry was created resolving to the
Load Balancer, enter it in the `domain` field under the Auto DevOps settings.
Otherwise, the deployed app isn't externally available outside of the cluster.
![Deploy Pipeline](img/pipeline.png)
GitLab creates a new pipeline, which begins to build, test, and deploy the app.
After the pipeline has finished, your app runs in EKS, and is available
to users. Click on **CI/CD > Environments**.
![Deployed Environment](img/environment.png)
GitLab displays a list of the environments and their deploy status, as well as
options to browse to the app, view monitoring metrics, and even access a shell
on the running pod.
## Additional requirements for self-managed instances **(FREE SELF)**
If you are using a self-managed GitLab instance, you need to configure
Amazon credentials. GitLab uses these credentials to assume an Amazon IAM role to create your cluster.
Create an IAM user and ensure it has permissions to assume the role(s) that
your users need to create EKS clusters.
For example, the following policy document allows assuming a role whose name starts with
`gitlab-eks-` in account `123456789012`:
```json
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::123456789012:role/gitlab-eks-*"
}
}
```
First, GitLab must obtain an initial set of credentials to communicate with the AWS API.
These credentials can be retrieved in one of two ways:
### Configure Amazon authentication
To configure Amazon authentication in GitLab, generate an access key for the
IAM user in the Amazon AWS console, and follow these steps:
- Statically, through the [Configure Amazon authentication](#configure-amazon-authentication) steps.
- Dynamically via an IAM instance profile ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/291015) in GitLab 13.7).
1. In GitLab, on the top bar, select **Menu >** **{admin}** **Admin > Settings > General** and expand the **Amazon EKS** section.
1. Check **Enable Amazon EKS integration**.
1. Enter your **Account ID**.
1. Enter your [access key and ID](#eks-access-key-and-id).
1. Click **Save changes**.
After GitLab retrieves the AWS credentials, it makes an
[AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
API call to obtain credentials for the Provision Role. AWS confirms
the request has the correct account ID, external ID, and permissions.
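Conceptually, this is similar to the following AWS CLI call, where the role ARN and external ID are placeholders (GitLab performs the equivalent call through the AWS API, not the CLI):
```shell
# Illustrative only: GitLab makes the equivalent AssumeRole API call internally.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/gitlab-eks-provision \
  --role-session-name gitlab-eks-provision \
  --external-id <external-id-from-gitlab>
```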
#### EKS access key and ID
If the request is valid, AWS returns a new set of temporary credentials GitLab
uses to load the **Create cluster** options page.
> Instance profiles were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/291015) in GitLab 13.7.
On the **Create cluster** page, the user must select a **Service Role**, which is
the IAM role that is actually used to create the cluster, and other options
such as the Kubernetes cluster name, Kubernetes version, and region.
After the user clicks the **Create Kubernetes cluster** button, GitLab
submits a CloudFormation API request to create an EKS cluster with the given parameters
from the user. GitLab waits 5 minutes before checking whether the cluster was created,
and polls once a minute for up to 30 minutes.
If you're using GitLab 13.7 or later, you can use instance profiles to
dynamically retrieve temporary credentials from AWS when needed.
In this case, leave the `Access key ID` and `Secret access key` fields blank
and [pass an IAM role to an EC2 instance](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html).
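For illustration, the role attached to that instance profile typically has a trust policy that allows EC2 to assume it, roughly like this sketch:
```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
```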
After GitLab receives a `CREATE_COMPLETE` message from AWS, GitLab talks
to the EKS cluster to create a Kubernetes service account with `cluster-admin`
privileges, and updates its internal database to reflect the newly-created
Kubernetes cluster. From this point forward, GitLab uses this service account to
interact with the cluster.
Otherwise, enter your access key credentials into **Access key ID** and **Secret access key**.
### Troubleshooting creating a new cluster
## Troubleshooting
The following errors are commonly encountered when creating a new cluster.
#### Validation failed: Role ARN must be a valid Amazon Resource Name
### Validation failed: Role ARN must be a valid Amazon Resource Name
Check that the `Provision Role ARN` is correct. An example of a valid ARN:
......@@ -266,7 +269,7 @@ Check that the `Provision Role ARN` is correct. An example of a valid ARN:
arn:aws:iam::123456789012:role/gitlab-eks-provision
```
#### Access denied: User `arn:aws:iam::x` is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`
### Access denied: User `arn:aws:iam::x` is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`
This error occurs when the credentials defined in the
[Configure Amazon authentication](#configure-amazon-authentication) cannot assume the role defined by the
......@@ -280,7 +283,7 @@ Provision Role ARN. Check that:
![AWS IAM Trust relationships](img/aws_iam_role_trust.png)
#### Could not load Security Groups for this VPC
### Could not load Security Groups for this VPC
When populating options in the configuration form, GitLab returns this error
because GitLab has successfully assumed your provided role, but the role has
......@@ -307,46 +310,3 @@ This role should be the role you created by following the
[EKS cluster IAM role](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html) guide.
In addition to the policies that the guide suggests, you must also include the
`AmazonEKSClusterPolicy` policy for this role so that GitLab can manage the EKS cluster correctly.
## Existing EKS cluster
For information on adding an existing EKS cluster, see
[Existing Kubernetes cluster](add_remove_clusters.md#existing-kubernetes-cluster).
### Create a default Storage Class
Amazon EKS doesn't have a default Storage Class out of the box, which means
requests for persistent volumes are not automatically fulfilled. As part
of Auto DevOps, the deployed PostgreSQL instance requests persistent storage,
and without a default storage class it cannot start.
If a default Storage Class doesn't already exist and is desired, follow Amazon's
[guide on storage classes](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html)
to create one.
Alternatively, disable PostgreSQL by setting the project variable
[`POSTGRES_ENABLED`](../../../topics/autodevops/customize.md#cicd-variables) to `false`.
### Deploy the app to EKS
With RBAC disabled and services deployed,
[Auto DevOps](../../../topics/autodevops/index.md) can now be leveraged
to build, test, and deploy the app.
[Enable Auto DevOps](../../../topics/autodevops/index.md#at-the-project-level)
if not already enabled. If a wildcard DNS entry was created resolving to the
Load Balancer, enter it in the `domain` field under the Auto DevOps settings.
Otherwise, the deployed app isn't externally available outside of the cluster.
![Deploy Pipeline](img/pipeline.png)
GitLab creates a new pipeline, which begins to build, test, and deploy the app.
After the pipeline has finished, your app runs in EKS, and is available
to users. Click on **CI/CD > Environments**.
![Deployed Environment](img/environment.png)
GitLab displays a list of the environments and their deploy status, as well as
options to browse to the app, view monitoring metrics, and even access a shell
on the running pod.
---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Add an existing Kubernetes cluster
If you have an existing Kubernetes cluster, you can add it to a project, group,
or instance and benefit from the integration with GitLab.
## Prerequisites
See the prerequisites below to add existing clusters to GitLab.
### All clusters
To add any cluster to GitLab, you need:
- Either a GitLab.com account or an account for a self-managed installation
running GitLab 12.5 or later.
- Maintainer permissions for group-level and project-level clusters.
- Access to the Admin area for instance-level clusters. **(FREE SELF)**
- A Kubernetes cluster.
- Cluster administration access to the cluster with `kubectl`.
You can host your cluster in [EKS](#eks-clusters), [GKE](#gke-clusters),
on premises, and with other providers.
To host your cluster on premises or with another provider,
use either the EKS or GKE method as a guide, and enter your cluster's
settings manually.
WARNING:
GitLab doesn't support `arm64` clusters. See the issue
[Helm Tiller fails to install on `arm64` cluster](https://gitlab.com/gitlab-org/gitlab/-/issues/29838)
for details.
### EKS clusters
To add an existing **EKS** cluster, you need:
- An Amazon EKS cluster with worker nodes properly configured.
- `kubectl` [installed and configured](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl)
for access to the EKS cluster.
- A token for the account with administrator privileges for the cluster.
### GKE clusters
To add an existing **GKE** cluster, you need:
- The `container.clusterRoleBindings.create` permission to create a cluster
role binding. You can follow the [Google Cloud documentation](https://cloud.google.com/iam/docs/granting-changing-revoking-access)
to grant access.
## How to add an existing cluster
<!-- (REVISE - BREAK INTO SMALLER STEPS) -->
To add a Kubernetes cluster to your project, group, or instance:
1. Navigate to your:
1. Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level cluster.
1. Group's **{cloud-gear}** **Kubernetes** page, for a group-level cluster.
1. **Menu >** **{admin}** **Admin >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Click **Add Kubernetes cluster**.
1. Click the **Add existing cluster** tab and fill in the details:
1. **Kubernetes cluster name** (required) - The name you wish to give the cluster.
1. **Environment scope** (required) - The
[associated environment](index.md#setting-the-environment-scope) to this cluster.
1. **API URL** (required) -
The URL that GitLab uses to access the Kubernetes API. Kubernetes
exposes several APIs; use the "base" URL that is common to all of them.
For example, `https://kubernetes.example.com` rather than `https://kubernetes.example.com/api/v1`.
Get the API URL by running this command:
```shell
kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'
```
1. **CA certificate** (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We use the certificate created by default.
1. List the secrets with `kubectl get secrets`. One should have a name like
`default-token-xxxxx`. Copy that secret name for use below.
1. Get the certificate by running this command:
```shell
kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
```
If the command returns the entire certificate chain, you must copy the Root CA
certificate and any intermediate certificates at the bottom of the chain.
A chain file has the following structure:
```plaintext
-----BEGIN MY CERTIFICATE-----
-----END MY CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN ROOT CERTIFICATE-----
-----END ROOT CERTIFICATE-----
```
1. **Token** -
GitLab authenticates against Kubernetes using service tokens, which are
scoped to a particular `namespace`.
**The token used should belong to a service account with
[`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
privileges.** To create this service account:
1. Create a file called `gitlab-admin-service-account.yaml` with contents:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gitlab
namespace: kube-system
```
1. Apply the service account and cluster role binding to your cluster:
```shell
kubectl apply -f gitlab-admin-service-account.yaml
```
You need the `container.clusterRoleBindings.create` permission
to create cluster-level roles. If you do not have this permission,
you can alternatively enable Basic Authentication and then run the
`kubectl apply` command as an administrator:
```shell
kubectl apply -f gitlab-admin-service-account.yaml --username=admin --password=<password>
```
NOTE:
Basic Authentication can be turned on and the password credentials
can be obtained using the Google Cloud Console.
Output:
```shell
serviceaccount "gitlab" created
clusterrolebinding "gitlab-admin" created
```
1. Retrieve the token for the `gitlab` service account:
```shell
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab | awk '{print $1}')
```
Copy the `<authentication_token>` value from the output:
```plaintext
Name: gitlab-token-b5zv4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=gitlab
kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: <authentication_token>
```
1. **GitLab-managed cluster** - Leave this checked if you want GitLab to manage namespaces and service accounts for this cluster.
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. **Project namespace** (optional) - You don't have to fill this in. If you leave
it blank, GitLab creates one for you. Also:
- Each project should have a unique namespace.
- The project namespace is not necessarily the namespace of the secret, if
you're using a secret with broader permissions, like the secret from `default`.
- You should **not** use `default` as the project namespace.
- If you or someone created a secret specifically for the project, usually
with limited permissions, the secret's namespace and project namespace may
be the same.
1. Select the **Add Kubernetes cluster** button.
After about 10 minutes, your cluster is ready.
## Disable Role-Based Access Control (RBAC) (optional)
When connecting a cluster via GitLab integration, you may specify whether the
cluster is RBAC-enabled or not. This affects how GitLab interacts with the
cluster for certain operations. If you did *not* check the **RBAC-enabled cluster**
checkbox at creation time, GitLab assumes RBAC is disabled for your cluster
when interacting with it. If so, you must disable RBAC on your cluster for the
integration to work properly.
![RBAC](img/rbac_v13_1.png)
WARNING:
Disabling RBAC means that any application running in the cluster,
or user who can authenticate to the cluster, has full API access. This is a
[security concern](index.md#security-implications), and may not be desirable.
To effectively disable RBAC, global permissions can be applied granting full access:
```shell
kubectl create clusterrolebinding permissive-binding \
--clusterrole=cluster-admin \
--user=admin \
--user=kubelet \
--group=system:serviceaccounts
```
......@@ -4,7 +4,15 @@ group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Adding GKE clusters **(FREE)**
# GKE clusters (DEPRECATED) **(FREE)**
> - [Deprecated](https://gitlab.com/groups/gitlab-org/-/epics/6049) in GitLab 14.0.
WARNING:
Use [Infrastructure as Code](../../infrastructure/index.md) to create new clusters. The method described in this document is deprecated as of GitLab 14.0.
Through GitLab, you can create new clusters and add existing clusters hosted on Google
Kubernetes Engine (GKE).
GitLab supports adding new and existing GKE clusters.
......@@ -19,7 +27,12 @@ requirements are met:
take up to 10 minutes after you create a project. For more information see the
["Before you begin" section of the Kubernetes Engine docs](https://cloud.google.com/kubernetes-engine/docs/quickstart#before-you-begin).
## New GKE cluster
## Add an existing GKE cluster
If you already have a GKE cluster and want to integrate it with GitLab,
see how to [add an existing cluster](add_existing_cluster.md).
## Create new GKE cluster
Starting from [GitLab 12.4](https://gitlab.com/gitlab-org/gitlab/-/issues/25925), all the GKE clusters
provisioned by GitLab are [VPC-native](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips).
......@@ -30,13 +43,13 @@ Note the following:
at the instance level. If that's not the case, ask your GitLab administrator to enable it. On
GitLab.com, this is enabled.
- Starting from [GitLab 12.1](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/55902), all GKE clusters
created by GitLab are RBAC-enabled. Take a look at the [RBAC section](add_remove_clusters.md#rbac-cluster-resources) for
created by GitLab are RBAC-enabled. Take a look at the [RBAC section](cluster_access.md#rbac-cluster-resources) for
more information.
- Starting from [GitLab 12.5](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18341), the
cluster's pod address IP range is set to `/16` instead of the regular `/14`. `/16` is a CIDR
notation.
- GitLab requires basic authentication enabled and a client certificate issued for the cluster to
set up an [initial service account](add_remove_clusters.md#access-controls). In [GitLab versions
set up an [initial service account](cluster_access.md). In [GitLab versions
11.10 and later](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/58208), the cluster creation process
explicitly requests GKE to create clusters with basic authentication enabled and a client
certificate.
......@@ -49,7 +62,7 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
- Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level
cluster.
- Group's **{cloud-gear}** **Kubernetes** page, for a group-level cluster.
- **Admin Area >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
- **Menu >** **{admin}** **Admin >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Click **Integrate with a cluster certificate**.
1. Under the **Create new cluster** tab, click **Google GKE**.
1. Connect your Google account if you haven't already done so by clicking the
......@@ -81,8 +94,3 @@ You can choose to use Cloud Run for Anthos in place of installing Knative and Is
separately after the cluster has been created. This means that Cloud Run
(Knative), Istio, and HTTP Load Balancing are enabled on the cluster
from the start, and cannot be installed or uninstalled.
## Existing GKE cluster
For information on adding an existing GKE cluster, see
[Existing Kubernetes cluster](add_remove_clusters.md#existing-kubernetes-cluster).
......@@ -4,28 +4,16 @@ group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Add a cluster using cluster certificates **(FREE)**
# Add a cluster using cluster certificates (DEPRECATED) **(FREE)**
> [Deprecated](https://gitlab.com/groups/gitlab-org/-/epics/6049) in GitLab 14.0.
> [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/327908) in GitLab 14.0.
WARNING:
Creating a new cluster or adding an existing cluster to GitLab through the certificate-based method
is deprecated and no longer recommended. Kubernetes clusters, similar to any other
infrastructure, should be created, updated, and maintained using [Infrastructure as Code](../../infrastructure/index.md).
infrastructure, should be created, updated, and maintained using [Infrastructure as Code](../../infrastructure/index.md).
GitLab is developing a built-in capability to create clusters with Terraform.
You can follow along in this [epic](https://gitlab.com/groups/gitlab-org/-/epics/6049).
GitLab offers integrated cluster creation for the following Kubernetes providers:
- Google Kubernetes Engine (GKE).
- Amazon Elastic Kubernetes Service (EKS).
GitLab can also integrate with any standard Kubernetes provider, either on-premise or hosted.
NOTE:
Watch the webcast [Scalable app deployment with GitLab and Google Cloud Platform](https://about.gitlab.com/webcast/scalable-app-deploy/)
and learn how to spin up a Kubernetes cluster managed by Google Cloud Platform (GCP)
in a few clicks.
You can follow along in this [epic](https://gitlab.com/groups/gitlab-org/-/epics/6049).
NOTE:
Every new Google Cloud Platform (GCP) account receives
......@@ -35,351 +23,76 @@ accounts to get started with the GitLab integration with Google Kubernetes Engin
[Follow this link](https://cloud.google.com/partners/partnercredit/?pcn_code=0014M00001h35gDQAQ#contact-form)
to apply for credit.
## Before you begin
Before [adding a Kubernetes cluster](#create-new-cluster) using GitLab, you need:
- GitLab itself. Either:
- A [GitLab.com account](https://about.gitlab.com/pricing/#gitlab-com).
- A [self-managed installation](https://about.gitlab.com/pricing/#self-managed) with GitLab version
12.5 or later. This ensures the GitLab UI can be used for cluster creation.
- The following GitLab access:
- [Maintainer role for a project](../../permissions.md#project-members-permissions) for a
project-level cluster.
- [Maintainer role for a group](../../permissions.md#group-members-permissions) for a
group-level cluster.
- [Admin Area access](../../admin_area/index.md) for a self-managed instance-level
cluster. **(FREE SELF)**
## Access controls
> - Restricted service account for deployment was [introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51716) in GitLab 11.5.
When creating a cluster in GitLab, you are asked if you would like to create either:
- A [Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
cluster, which is the GitLab default and recommended option.
- An [Attribute-based access control (ABAC)](https://kubernetes.io/docs/reference/access-authn-authz/abac/) cluster.
When GitLab creates the cluster,
a `gitlab` service account with `cluster-admin` privileges is created in the `default` namespace
to manage the newly created cluster.
Helm also creates additional service accounts and other resources for each
installed application. Consult the documentation of the Helm charts for each application
for details.
If you are [adding an existing Kubernetes cluster](add_remove_clusters.md#add-existing-cluster),
ensure the token of the account has administrator privileges for the cluster.
The resources created by GitLab differ depending on the type of cluster.
### Important notes
Note the following about access controls:
- Environment-specific resources are only created if your cluster is
[managed by GitLab](index.md#gitlab-managed-clusters).
- If your cluster was created before GitLab 12.2, it uses a single namespace for all project
environments.
### RBAC cluster resources
GitLab creates the following resources for RBAC clusters.
| Name | Type | Details | Created when |
|:----------------------|:---------------------|:-----------------------------------------------------------------------------------------------------------|:-----------------------|
| `gitlab` | `ServiceAccount` | `default` namespace | Creating a new cluster |
| `gitlab-admin` | `ClusterRoleBinding` | [`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) roleRef | Creating a new cluster |
| `gitlab-token` | `Secret` | Token for `gitlab` ServiceAccount | Creating a new cluster |
| `tiller` | `ServiceAccount` | `gitlab-managed-apps` namespace | Installing Helm charts |
| `tiller-admin` | `ClusterRoleBinding` | `cluster-admin` roleRef | Installing Helm charts |
| Environment namespace | `Namespace` | Contains all environment-specific resources | Deploying to a cluster |
| Environment namespace | `ServiceAccount` | Uses namespace of environment | Deploying to a cluster |
| Environment namespace | `Secret` | Token for environment ServiceAccount | Deploying to a cluster |
| Environment namespace | `RoleBinding` | [`admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) roleRef | Deploying to a cluster |
NOTE:
Watch the webcast [Scalable app deployment with GitLab and Google Cloud Platform](https://about.gitlab.com/webcast/scalable-app-deploy/)
and learn how to spin up a Kubernetes cluster managed by Google Cloud Platform (GCP)
in a few clicks.
The environment namespace `RoleBinding` was
[updated](https://gitlab.com/gitlab-org/gitlab/-/issues/31113) in GitLab 13.6
to `admin` roleRef. Previously, the `edit` roleRef was used.
## Create new cluster
### ABAC cluster resources
> The certificate-based method for creating clusters from GitLab was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/327908) in GitLab 14.0.
GitLab creates the following resources for ABAC clusters.
As of GitLab 14.0, use [Infrastructure as Code](../../infrastructure/index.md)
to **safely create your new cluster from GitLab**.
| Name | Type | Details | Created when |
|:----------------------|:---------------------|:-------------------------------------|:---------------------------|
| `gitlab` | `ServiceAccount` | `default` namespace | Creating a new cluster |
| `gitlab-token` | `Secret` | Token for `gitlab` ServiceAccount | Creating a new cluster |
| `tiller` | `ServiceAccount` | `gitlab-managed-apps` namespace | Installing Helm charts |
| `tiller-admin` | `ClusterRoleBinding` | `cluster-admin` roleRef | Installing Helm charts |
| Environment namespace | `Namespace` | Contains all environment-specific resources | Deploying to a cluster |
| Environment namespace | `ServiceAccount` | Uses namespace of environment | Deploying to a cluster |
| Environment namespace | `Secret` | Token for environment ServiceAccount | Deploying to a cluster |
The certificate-based method is **deprecated** and scheduled for removal in
GitLab 15.0. However, you can still use it until then. Through
this method, you can host your cluster in EKS, GKE, on premises, and with other
providers. To host your cluster on premises or with another provider,
use either the EKS or GKE method as a guide, and enter your cluster's
settings manually:
### Security of runners
- [New cluster hosted on Google Kubernetes Engine (GKE)](add_gke_clusters.md).
- [New cluster hosted on Amazon Elastic Kubernetes Service (EKS)](add_eks_clusters.md).
Runners have the [privileged mode](https://docs.gitlab.com/runner/executors/docker.html#the-privileged-mode)
enabled by default, which allows them to execute special commands and run
Docker in Docker. This functionality is needed to run some of the
[Auto DevOps](../../../topics/autodevops/index.md)
jobs. This implies the containers are running in privileged mode and you should,
therefore, be aware of some important details.
## Add existing cluster
The privileged flag gives all capabilities to the running container, which in
turn can do almost everything that the host can do. Be aware of the
inherent security risk associated with performing `docker run` operations on
arbitrary images as they effectively have root access.
If you already have a cluster and want to integrate it with GitLab, see how to
[add an existing cluster](add_existing_cluster.md).
If you don't want to use a runner in privileged mode, either:
## Configure your cluster
- Use shared runners on GitLab.com. They don't have this security issue.
- Set up your own runners using the configuration described at
[shared runners](../../gitlab_com/index.md#shared-runners) using
[`docker+machine`](https://docs.gitlab.com/runner/executors/docker_machine.html).
As of GitLab 14.0, use the [GitLab Kubernetes Agent](../../clusters/agent/index.md) to configure your cluster.
## Create new cluster
## Disable a cluster
New clusters can be created using GitLab on Google Kubernetes Engine (GKE) or
Amazon Elastic Kubernetes Service (EKS) at the project, group, or instance level:
When you successfully create a new Kubernetes cluster or add an existing
one to GitLab, the cluster connection to GitLab becomes enabled. To disable it:
1. Navigate to your:
- Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level
cluster.
1. Go to your:
- Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level cluster.
- Group's **{cloud-gear}** **Kubernetes** page, for a group-level cluster.
- **Admin Area >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Click **Integrate with a cluster certificate**.
1. Click the **Create new cluster** tab.
1. Click either **Amazon EKS** or **Google GKE**, and follow the instructions for your desired service:
- [Amazon EKS](add_eks_clusters.md#new-eks-cluster).
- [Google GKE](add_gke_clusters.md#creating-the-cluster-on-gke).
After creating a cluster, you can [install runners](https://docs.gitlab.com/runner/install/kubernetes.html),
add a [cluster management project](../../clusters/management_project.md),
configure [Auto DevOps](../../../topics/autodevops/index.md),
or start [deploying right away](index.md#deploying-to-a-kubernetes-cluster).
## Add existing cluster
If you have an existing Kubernetes cluster, you can add it to a project, group,
or instance, and [install runners](https://docs.gitlab.com/runner/install/kubernetes.html)
on it (the cluster does not need to be added to GitLab first).
After adding a cluster, you can add a [cluster management project](../../clusters/management_project.md),
configure [Auto DevOps](../../../topics/autodevops/index.md),
or start [deploying right away](index.md#deploying-to-a-kubernetes-cluster).
### Existing Kubernetes cluster
To add a Kubernetes cluster to your project, group, or instance:
1. Navigate to your:
1. Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level
cluster.
1. Group's **{cloud-gear}** **Kubernetes** page, for a group-level cluster.
1. **Admin Area >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Click **Add Kubernetes cluster**.
1. Click the **Add existing cluster** tab and fill in the details:
1. **Kubernetes cluster name** (required) - The name you wish to give the cluster.
1. **Environment scope** (required) - The
[associated environment](index.md#setting-the-environment-scope) to this cluster.
1. **API URL** (required) -
The URL that GitLab uses to access the Kubernetes API. Kubernetes
exposes several APIs; use the "base" URL that is common to all of them.
For example, `https://kubernetes.example.com` rather than `https://kubernetes.example.com/api/v1`.
Get the API URL by running this command:
```shell
kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'
```
1. **CA certificate** (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We use the certificate created by default.
1. List the secrets with `kubectl get secrets`. One should have a name like
`default-token-xxxxx`. Copy that secret name for use below.
1. Get the certificate by running this command:
```shell
kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
```
If the command returns the entire certificate chain, you must copy the Root CA
certificate and any intermediate certificates at the bottom of the chain.
A chain file has the following structure:
```plaintext
-----BEGIN MY CERTIFICATE-----
-----END MY CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN ROOT CERTIFICATE-----
-----END ROOT CERTIFICATE-----
```
1. **Token** -
GitLab authenticates against Kubernetes using service tokens, which are
scoped to a particular `namespace`.
**The token used should belong to a service account with
[`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
privileges.** To create this service account:
1. Create a file called `gitlab-admin-service-account.yaml` with contents:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gitlab
namespace: kube-system
```
1. Apply the service account and cluster role binding to your cluster:
```shell
kubectl apply -f gitlab-admin-service-account.yaml
```
You need the `container.clusterRoleBindings.create` permission
to create cluster-level roles. If you do not have this permission,
you can alternatively enable Basic Authentication and then run the
`kubectl apply` command as an administrator:
```shell
kubectl apply -f gitlab-admin-service-account.yaml --username=admin --password=<password>
```
NOTE:
Basic Authentication can be turned on and the password credentials
can be obtained using the Google Cloud Console.
Output:
```shell
serviceaccount "gitlab" created
clusterrolebinding "gitlab-admin" created
```
1. Retrieve the token for the `gitlab` service account:
```shell
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab | awk '{print $1}')
```
Copy the `<authentication_token>` value from the output:
```plaintext
Name: gitlab-token-b5zv4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=gitlab
kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: <authentication_token>
```
NOTE:
For GKE clusters, you need the
`container.clusterRoleBindings.create` permission to create a cluster
role binding. You can follow the [Google Cloud
documentation](https://cloud.google.com/iam/docs/granting-changing-revoking-access)
to grant access.
1. **GitLab-managed cluster** - Leave this checked if you want GitLab to manage namespaces and service accounts for this cluster.
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. **Project namespace** (optional) - You don't have to fill it in. If you leave
it blank, GitLab creates one for you. Also:
- Each project should have a unique namespace.
- The project namespace is not necessarily the namespace of the secret, if
you're using a secret with broader permissions, like the secret from `default`.
- You should **not** use `default` as the project namespace.
- If you or someone created a secret specifically for the project, usually
with limited permissions, the secret's namespace and project namespace may
be the same.
1. Finally, click the **Create Kubernetes cluster** button.
After a couple of minutes, your cluster is ready.
#### Disable Role-Based Access Control (RBAC) (optional)
When connecting a cluster via GitLab integration, you may specify whether the
cluster is RBAC-enabled or not. This affects how GitLab interacts with the
cluster for certain operations. If you did *not* check the **RBAC-enabled cluster**
checkbox at creation time, GitLab assumes RBAC is disabled for your cluster
when interacting with it. If so, you must disable RBAC on your cluster for the
integration to work properly.
![RBAC](img/rbac_v13_1.png)
WARNING:
Disabling RBAC means that any application running in the cluster,
or user who can authenticate to the cluster, has full API access. This is a
[security concern](index.md#security-implications), and may not be desirable.
To effectively disable RBAC, global permissions can be applied granting full access:
```shell
kubectl create clusterrolebinding permissive-binding \
--clusterrole=cluster-admin \
--user=admin \
--user=kubelet \
--group=system:serviceaccounts
```
- **Menu >** **{admin}** **Admin >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Select the name of the cluster you want to disable.
1. Toggle **GitLab Integration** off (in gray).
1. Click **Save changes**.
## Enabling or disabling integration
## Remove a cluster
The Kubernetes cluster integration is enabled after you successfully create
a new cluster or add an existing one. To disable the Kubernetes cluster integration:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/26815) in GitLab 12.6, you can remove cluster integrations and resources.
1. Navigate to your:
- Project's **{cloud-gear}** **Infrastructure > Kubernetes clusters** page, for a project-level
cluster.
- Group's **{cloud-gear}** **Kubernetes** page, for a group-level cluster.
- **Admin Area >** **{cloud-gear}** **Kubernetes** page, for an instance-level cluster.
1. Click on the name of the cluster.
1. Click the **GitLab Integration** toggle.
1. Click **Save changes**.
When you remove a cluster integration, you only remove the cluster relationship
to GitLab, not the cluster. To remove the cluster itself, visit your cluster's
GKE or EKS dashboard to do it from their UI or use `kubectl`.
## Removing integration
You need at least Maintainer [permissions](../../permissions.md) to your
project or group to remove the integration with GitLab.
To remove the Kubernetes cluster integration from your project, first navigate to the **Advanced Settings** tab of the cluster details page and either:
When removing a cluster integration, you have two options:
- Select **Remove integration**, to remove only the Kubernetes integration.
- [From GitLab 12.6](https://gitlab.com/gitlab-org/gitlab/-/issues/26815), select
**Remove integration and resources**, to also remove all related GitLab cluster resources (for
example, namespaces, roles, and bindings) when removing the integration.
- **Remove integration**: remove only the Kubernetes integration.
- **Remove integration and resources**: remove the cluster integration and
all GitLab cluster-related resources such as namespaces, roles, and bindings.
When removing the cluster integration, note:
To remove the Kubernetes cluster integration:
- You need Maintainer [permissions](../../permissions.md) and above to remove a Kubernetes cluster
integration.
- When you remove a cluster, you only remove its relationship to GitLab, not the cluster itself. To
remove the cluster, visit the GKE or EKS dashboard, or use `kubectl`.
1. Go to your cluster details page.
1. Select the **Advanced Settings** tab.
1. Select either **Remove integration** or **Remove integration and resources**.
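If you prefer to clean up manually, the GitLab-created objects can also be deleted with
`kubectl`. This is only a sketch; the namespace and binding names are examples and
depend on your project and environments.

```shell
# Delete an environment namespace GitLab created for the project (example name)
kubectl delete namespace my-project-123-production

# Remove the cluster-wide binding and the service account created for the integration
kubectl delete clusterrolebinding gitlab-admin
kubectl -n default delete serviceaccount gitlab
```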
## Learn more
## Access controls
To learn more about automatically deploying your applications,
read about [Auto DevOps](../../../topics/autodevops/index.md).
See [cluster access controls (RBAC or ABAC)](cluster_access.md).
## Troubleshooting
......
---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Cluster access controls (RBAC or ABAC)
> Restricted service account for deployment was [introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51716) in GitLab 11.5.
When creating a cluster in GitLab, you are asked if you would like to create either:
- A [Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
cluster, which is the GitLab default and recommended option.
- An [Attribute-based access control (ABAC)](https://kubernetes.io/docs/reference/access-authn-authz/abac/) cluster.
When GitLab creates the cluster,
a `gitlab` service account with `cluster-admin` privileges is created in the `default` namespace
to manage the newly created cluster.
Helm also creates additional service accounts and other resources for each
installed application. Consult the documentation of the Helm charts for each application
for details.
If you are [adding an existing Kubernetes cluster](add_remove_clusters.md#add-existing-cluster),
ensure the token of the account has administrator privileges for the cluster.
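A common way to provide such a token is to create a dedicated service account and
bind it to `cluster-admin`. The following is only a sketch, assuming a `gitlab`
service account in the `default` namespace:

```shell
# Create a service account for GitLab in the default namespace
kubectl -n default create serviceaccount gitlab

# Grant it cluster-admin so GitLab can fully manage the cluster
kubectl create clusterrolebinding gitlab-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitlab
```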
The resources created by GitLab differ depending on the type of cluster.
## Important notes
Note the following about access controls:
- Environment-specific resources are only created if your cluster is
[managed by GitLab](index.md#gitlab-managed-clusters).
- If your cluster was created before GitLab 12.2, it uses a single namespace for all project
environments.
## RBAC cluster resources
GitLab creates the following resources for RBAC clusters.
| Name | Type | Details | Created when |
|:----------------------|:---------------------|:-----------------------------------------------------------------------------------------------------------|:-----------------------|
| `gitlab` | `ServiceAccount` | `default` namespace | Creating a new cluster |
| `gitlab-admin` | `ClusterRoleBinding` | [`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) roleRef | Creating a new cluster |
| `gitlab-token` | `Secret` | Token for `gitlab` ServiceAccount | Creating a new cluster |
| Environment namespace | `Namespace` | Contains all environment-specific resources | Deploying to a cluster |
| Environment namespace | `ServiceAccount` | Uses namespace of environment | Deploying to a cluster |
| Environment namespace | `Secret` | Token for environment ServiceAccount | Deploying to a cluster |
| Environment namespace | `RoleBinding` | [`admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) roleRef | Deploying to a cluster |
The environment namespace `RoleBinding` was
[updated](https://gitlab.com/gitlab-org/gitlab/-/issues/31113) in GitLab 13.6
to `admin` roleRef. Previously, the `edit` roleRef was used.
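For illustration, the per-environment binding that GitLab creates is roughly equivalent
to the following command. The namespace, binding, and service account names are examples only:

```shell
# Bind the environment service account to the built-in admin ClusterRole,
# scoped to the environment namespace
kubectl -n my-project-123-production create rolebinding gitlab-production \
  --clusterrole=admin \
  --serviceaccount=my-project-123-production:my-project-123-production-service-account
```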
## ABAC cluster resources
GitLab creates the following resources for ABAC clusters.
| Name | Type | Details | Created when |
|:----------------------|:---------------------|:-------------------------------------|:---------------------------|
| `gitlab` | `ServiceAccount` | `default` namespace | Creating a new cluster |
| `gitlab-token` | `Secret` | Token for `gitlab` ServiceAccount | Creating a new cluster |
| Environment namespace | `Namespace` | Contains all environment-specific resources | Deploying to a cluster |
| Environment namespace | `ServiceAccount` | Uses namespace of environment | Deploying to a cluster |
| Environment namespace | `Secret` | Token for environment ServiceAccount | Deploying to a cluster |
## Security of runners
Runners have the [privileged mode](https://docs.gitlab.com/runner/executors/docker.html#the-privileged-mode)
enabled by default, which allows them to execute special commands and run
Docker in Docker. This functionality is needed to run some of the
[Auto DevOps](../../../topics/autodevops/index.md)
jobs. This implies the containers are running in privileged mode and you should,
therefore, be aware of some important details.
The privileged flag gives all capabilities to the running container, which in
turn can do almost everything that the host can do. Be aware of the
inherent security risk associated with performing `docker run` operations on
arbitrary images as they effectively have root access.
If you don't want to use a runner in privileged mode, either:
- Use shared runners on GitLab.com. They don't have this security issue.
- Set up your own runners using the configuration described at
[shared runners](../../gitlab_com/index.md#shared-runners) using
[`docker+machine`](https://docs.gitlab.com/runner/executors/docker_machine.html).
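If you set up your own runner, privileged mode is controlled at registration time, so
you can simply leave it disabled. A minimal sketch, assuming the Docker executor; the
URL, token, and image are placeholders:

```shell
# Register a runner with the Docker executor; privileged mode stays off
# unless you pass --docker-privileged
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "ruby:2.7"
```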
......@@ -12,35 +12,31 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/39840) in
> GitLab 11.11 for [instances](../../instance/clusters/index.md).
Using the GitLab project Kubernetes integration, you can:
You can use GitLab to manage your clusters and [benefit from the GitLab-Kubernetes integration](#benefit-from-the-gitlab-kubernetes-integration).
- Use [Review Apps](../../../ci/review_apps/index.md).
- Run [pipelines](../../../ci/pipelines/index.md).
See the [supported cluster versions](#supported-cluster-versions) before
you begin.
## Benefit from the GitLab-Kubernetes integration
Using the GitLab-Kubernetes integration, you can benefit from GitLab
features such as:
- Preview your applications with [Review Apps](../../../ci/review_apps/index.md).
- Create GitLab CI/CD [Pipelines](../../../ci/pipelines/index.md) to build, test, and deploy to your cluster.
- [Deploy](#deploying-to-a-kubernetes-cluster) your applications.
- Detect and [monitor Kubernetes](#monitoring-your-kubernetes-cluster).
- Use it with [Auto DevOps](#auto-devops).
- Detect and [monitor](#monitoring-your-kubernetes-cluster) your clusters.
- Use [Auto DevOps](#auto-devops) to automate the CI/CD process.
- Use [Web terminals](#web-terminals).
- Use [Deploy Boards](#deploy-boards).
- Use [Canary Deployments](#canary-deployments). **(PREMIUM)**
- Use [deployment variables](#deployment-variables).
- Use [role-based or attribute-based access controls](add_remove_clusters.md#access-controls).
- Use [role-based or attribute-based access controls](cluster_access.md).
- View [Logs](#viewing-pod-logs).
- Run serverless workloads on [Kubernetes with Knative](serverless/index.md).
- Connect GitLab to in-cluster applications using [cluster integrations](../../clusters/integrations.md).
Besides integration at the project level, Kubernetes clusters can also be
integrated at the [group level](../../group/clusters/index.md) or
[GitLab instance level](../../instance/clusters/index.md).
To view your project level Kubernetes clusters, navigate to **Infrastructure > Kubernetes clusters**
from your project. On this page, you can [add a new cluster](#adding-and-removing-clusters)
and view information about your existing clusters, such as:
- Nodes count.
- Rough estimates of memory and CPU usage.
## Setting up
### Supported cluster versions
## Supported cluster versions
GitLab is committed to support at least two production-ready Kubernetes minor
versions at any given time. We regularly review the versions we support, and
......@@ -61,19 +57,31 @@ Kubernetes version to any supported version at any time:
Some GitLab features may support versions outside the range provided here.
NOTE:
[GKE Cluster creation](add_remove_clusters.md#create-new-cluster) by GitLab is currently not supported for Kubernetes 1.19+. For these versions you can create the cluster through GCP, then [Add existing cluster](add_remove_clusters.md#add-existing-cluster). See [the related issue](https://gitlab.com/gitlab-org/gitlab/-/issues/331922) for more information.
## Add and remove clusters
You can create new or add existing clusters to GitLab:
### Adding and removing clusters
- On the project-level, to have a cluster dedicated to a project.
- On the [group level](../../group/clusters/index.md), to use the same cluster across multiple projects within your group.
- On the [instance level](../../instance/clusters/index.md), to use the same cluster across multiple groups and projects. **(FREE SELF)**
See [Adding and removing Kubernetes clusters](add_remove_clusters.md) for details on how
to:
To create new clusters, use one of the following methods:
- Create a cluster in Google Cloud Platform (GCP) or Amazon Elastic Kubernetes Service
(EKS) using the GitLab UI.
- Add an integration to an existing cluster from any Kubernetes platform.
- [Infrastructure as Code](../../infrastructure/index.md) (**recommended**).
- [Cluster certificates](add_remove_clusters.md) (**deprecated**).
### Multiple Kubernetes clusters
You can also [add existing clusters](add_existing_cluster.md) to GitLab.
## View your clusters
To view your project-level Kubernetes clusters, go to **Infrastructure > Kubernetes clusters**
from your project. On this page, you can add a new cluster
and view information about your existing clusters, such as:
- Node count.
- Rough estimates of memory and CPU usage.
## Multiple Kubernetes clusters
> - Introduced in [GitLab Premium](https://about.gitlab.com/pricing/) 10.3
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/35094) to GitLab Free in 13.2.
......@@ -85,7 +93,7 @@ Add another cluster, like you did the first time, and make sure to
[set an environment scope](#setting-the-environment-scope) that
differentiates the new cluster from the rest.
#### Setting the environment scope
### Setting the environment scope
When adding more than one Kubernetes cluster to your project, you need to differentiate
them with an environment scope. The environment scope associates clusters with [environments](../../../ci/environments/index.md) similar to how the
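For example, in a sketch like the following, each deployment job uses the variables from
the cluster whose environment scope matches the job's environment (job names and scopes
are illustrative only):

```yaml
deploy_staging:
  stage: deploy
  script:
    # Uses the KUBE_* variables from the cluster scoped to "staging"
    - echo "Deploying to the staging cluster"
  environment:
    name: staging

deploy_production:
  stage: deploy
  script:
    # Uses the KUBE_* variables from the cluster scoped to "production"
    - echo "Deploying to the production cluster"
  environment:
    name: production
```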
......@@ -141,8 +149,8 @@ The results:
## Configuring your Kubernetes cluster
After [adding a Kubernetes cluster](add_remove_clusters.md) to GitLab, read this section that covers
important considerations for configuring Kubernetes clusters with GitLab.
Use the [GitLab Kubernetes Agent](../../clusters/agent/index.md) to safely
configure your clusters. Otherwise, there are [security implications](#security-implications).
### Security implications
......@@ -172,6 +180,8 @@ for your deployment jobs to use. Otherwise, a namespace is created for you.
#### Important notes
Note the following with GitLab and clusters:
Be aware that manually managing resources that have been created by GitLab, like
namespaces and service accounts, can cause unexpected errors. If this occurs, try
[clearing the cluster cache](#clearing-the-cluster-cache).
......@@ -253,6 +263,11 @@ This list provides a generic solution, and some GitLab-specific approaches:
If you see a trailing `%` on some Kubernetes versions, do not include it.
## Cluster integrations
See the available [cluster integrations](../../clusters/integrations.md)
to integrate third-party applications with your clusters through GitLab.
## Cluster management project
Attach a [Cluster management project](../../clusters/management_project.md)
......@@ -261,14 +276,8 @@ installation, such as an Ingress controller.
## Auto DevOps
Auto DevOps automatically detects, builds, tests, deploys, and monitors your
applications.
To make full use of Auto DevOps (Auto Deploy, Auto Review Apps, and
Auto Monitoring) the Kubernetes project integration must be enabled. However,
Kubernetes clusters can be used without Auto DevOps.
[Read more about Auto DevOps](../../../topics/autodevops/index.md).
You can use [Auto DevOps](../../../topics/autodevops/index.md) to automatically
detect, build, test, deploy, and monitor your applications.
## Deploying to a Kubernetes cluster
......@@ -309,7 +318,7 @@ GitLab CI/CD build environment to deployment jobs. Deployment jobs have
| Deployment Variable | Description |
|----------------------------|-------------|
| `KUBE_URL` | Equal to the API URL. |
| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](add_remove_clusters.md#access-controls). Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main service account of the cluster integration. |
| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](cluster_access.md). Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main service account of the cluster integration. |
| `KUBE_NAMESPACE` | The namespace associated with the project's deployment service account. In the format `<project_name>-<project_id>-<environment>`. For GitLab-managed clusters, a matching namespace is automatically created by GitLab in the cluster. If your cluster was created before GitLab 12.2, the default `KUBE_NAMESPACE` is set to `<project_name>-<project_id>`. |
| `KUBE_CA_PEM_FILE` | Path to a file containing PEM data. Only present if a custom CA bundle was specified. |
| `KUBE_CA_PEM` | (**deprecated**) Raw PEM data. Only if a custom CA bundle was specified. |
......