Commit 4768b738 authored by GitLab Bot

Automatic merge of gitlab-org/gitlab-ce master

parents 97551183 14cf8bf0
......@@ -98,10 +98,8 @@ class Projects::GitHttpClientController < Projects::ApplicationController
def repo_type
parse_repo_path unless defined?(@repo_type)
# When a project did not exist, the parsed repo_type would be empty.
# In that case, we want to continue with a regular project repository, as we
# could create the project if the user pushing is allowed to do so.
@repo_type || Gitlab::GlRepository::PROJECT
@repo_type
end
def handle_basic_authentication(login, password)
......
......@@ -31,7 +31,7 @@ From the client side, `git` `v2.18.0` or newer must be installed.
From the server side, if we want to configure SSH we need to set the `sshd`
server to accept the `GIT_PROTOCOL` environment.
In installations using [GitLab Helm Charts](../install/kubernetes/gitlab_chart.md)
In installations using [GitLab Helm Charts](https://docs.gitlab.com/charts/)
and [All-in-one docker image](https://docs.gitlab.com/omnibus/docker/), the SSH
service is already configured to accept the `GIT_PROTOCOL` environment and users
need not do anything more.
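For example, on a self-managed server running OpenSSH, this is usually done by adding an `AcceptEnv` directive to the SSH daemon configuration and restarting `sshd`. The sketch below assumes a typical Linux layout; adjust the path for your distribution:

```
# /etc/ssh/sshd_config
# Allow clients to pass the Git protocol version to the server
AcceptEnv GIT_PROTOCOL
```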
......
......@@ -2,14 +2,16 @@
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/15310) in GitLab 11.6.
Usually, when you create a new merge request, a pipeline runs on the
Usually, when you create a new merge request, a pipeline runs with the
new change and checks if it's qualified to be merged into a target branch. This
pipeline should contain only necessary jobs for checking the new changes.
pipeline should contain only necessary jobs for validating the new changes.
For example, unit tests, lint checks, and [Review Apps](../review_apps/index.md)
are often used in this cycle.
With pipelines for merge requests, you can design a specific pipeline structure
for merge requests.
for when you are running a pipeline in a merge request. This
could be either adding or removing steps in the pipeline, to make sure that
your pipelines are as efficient as possible.
## Configuring pipelines for merge requests
......@@ -30,9 +32,7 @@ build:
stage: build
script: ./build
only:
- branches
- tags
- merge_requests
- master
test:
stage: test
......@@ -43,6 +43,8 @@ test:
deploy:
stage: deploy
script: ./deploy
only:
- master
```
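Because the diff above interleaves the old and new versions of the example, here is a sketch of how the full `.gitlab-ci.yml` reads after this change (the `./build`, `./test`, and `./deploy` scripts are placeholders):

```yaml
build:
  stage: build
  script: ./build
  only:
  - master

test:
  stage: test
  script: ./test
  only:
  - merge_requests

deploy:
  stage: deploy
  script: ./deploy
  only:
  - master
```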
After the merge request is updated with new commits:
......@@ -50,18 +52,58 @@ After the merge request is updated with new commits:
- GitLab detects that changes have occurred and creates a new pipeline for the merge request.
- The pipeline fetches the latest code from the source branch and runs tests against it.
In the above example, the pipeline contains only `build` and `test` jobs.
Since the `deploy` job doesn't have the `only: merge_requests` parameter,
deployment jobs will not happen in the merge request.
In the above example, the pipeline contains only a `test` job.
Since the `build` and `deploy` jobs don't have the `only: merge_requests` parameter,
they will not run in the merge request.
Pipelines tagged with the **merge request** badge indicate that they were triggered
Pipelines tagged with the **detached** badge indicate that they were triggered
when a merge request was created or updated. For example:
![Merge request page](img/merge_request.png)
The same tag is shown on the pipeline's details:
## Combined ref pipelines **[PREMIUM]**
> [GitLab Premium](https://about.gitlab.com/pricing/) 11.10.
It's possible for your source and target branches to diverge, which can result
in a scenario where the source branch's pipeline is green, the target branch's pipeline is green,
but the combined result fails. By having your merge request pipeline automatically
create a new ref that contains the merge result of the source and target branch
(then running a pipeline on that ref), we can better test that the combined result
is also valid.
From GitLab 11.10, pipelines for merge requests run by default
on this merged result. That is, the source and target branches are combined into a
new ref, and a pipeline for this ref validates the result prior to merging.
![Merge request pipeline as the head pipeline](img/merge_request_pipeline.png)
There are some cases where creating a combined ref is not possible or not wanted,
for example when the source branch has conflicts with the target branch
or when the merge request is still in WIP status. In these cases, the merge request pipeline falls back to a "detached" state
and runs on the source branch ref as if it were a regular pipeline.
![Pipeline's details](img/pipeline_detail.png)
The detached state serves to warn you that you are working in a situation
subject to merge problems, and helps to highlight that you should
get out of WIP status or resolve merge conflicts as soon as possible.
### Enabling combined ref pipelines
This feature is disabled by default until we resolve issues with [contention handling](https://gitlab.com/gitlab-org/gitlab-ee/issues/9186). It can be enabled at the project level:
1. Visit your project's **Settings > General** and expand **Merge requests**.
1. Check **Merge pipelines will try to validate the post-merge result prior to merging**.
1. Click the **Save changes** button.
![Merge request pipeline config](img/merge_request_pipeline_config.png)
### Combined ref pipeline's limitations
- This feature requires [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) 11.9 or newer.
- This feature requires [Gitaly](https://gitlab.com/gitlab-org/gitaly) 1.21.0 or newer.
- After the merge request pipeline succeeds, if the target branch has moved forward, the result of the pipeline is stale and must be retried. In busy repos, this can become a problem as it is highly probable that the target branch will have moved ahead. Improvements are [planned](https://gitlab.com/gitlab-org/gitlab-ee/issues/9186) for future versions of GitLab.
- Forking/cross-repo workflows are not currently supported. To follow progress, see [#9713](https://gitlab.com/gitlab-org/gitlab-ee/issues/9713).
- This feature is not available for [fast forward merges](../../user/project/merge_requests/fast_forward_merge.md) yet. To follow progress, see [#58226](https://gitlab.com/gitlab-org/gitlab-ce/issues/58226).
## Excluding certain jobs
......@@ -138,3 +180,12 @@ External users could steal secret variables from the parent project by modifying
We're discussing a secure solution for running pipelines for merge requests
that are submitted from forked projects;
see [the issue about the permission extension](https://gitlab.com/gitlab-org/gitlab-ce/issues/23902).
## Additional predefined variables
By using pipelines for merge requests, GitLab exposes additional predefined variables to the pipeline jobs.
Those variables contain information about the associated merge request, which makes it easy
to integrate your job with the [GitLab Merge Request API](../../api/merge_requests.md).
You can find the list of available variables in [the reference sheet](../variables/predefined_variables.md).
The variable names begin with the `CI_MERGE_REQUEST_` prefix.
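As a sketch of how these variables might be used (the job name, the `gitlab.example.com` host, and the `$API_TOKEN` secret variable are illustrative assumptions; `CI_MERGE_REQUEST_PROJECT_ID` and `CI_MERGE_REQUEST_IID` are among the variables listed in the reference sheet):

```yaml
mr_info:
  stage: test
  only:
  - merge_requests
  script:
    # Fetch details of the merge request that triggered this pipeline
    # via the Merge Request API, using the predefined variables.
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_MERGE_REQUEST_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID"'
```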
......@@ -55,9 +55,9 @@ need to be aware of:
- It can be more expensive for smaller installations. The default installation
requires more resources than a single node Omnibus deployment, as most services
are deployed in a redundant fashion.
- There are some feature [limitations to be aware of](kubernetes/gitlab_chart.md#limitations).
- There are some feature [limitations to be aware of](https://docs.gitlab.com/charts/#limitations).
[**> Install GitLab on Kubernetes using the GitLab Helm charts.**](kubernetes/index.md)
[**> Install GitLab on Kubernetes using the GitLab Helm charts.**](https://docs.gitlab.com/charts/)
## Installing GitLab with Docker
......
# GitLab Helm Chart
---
redirect_to: https://docs.gitlab.com/charts/
---
This is the official way to install GitLab in a cloud native environment.
NOTE: **Kubernetes experience required:**
Our Helm charts are recommended for those who are familiar with Kubernetes.
If you're not sure if Kubernetes is for you, our
[Omnibus GitLab packages](../README.md#installing-gitlab-using-the-omnibus-gitlab-package-recommended)
are mature and scalable, support [high availability](../../administration/high_availability/README.md),
and are used today on GitLab.com.
It is not necessary to have GitLab installed on Kubernetes in order to use [GitLab Kubernetes integration](https://docs.gitlab.com/ee/user/project/clusters/index.html).
## Introduction
The `gitlab` chart is the best way to operate GitLab on Kubernetes. This chart
contains all the required components to get started, and can scale to large deployments.
The default deployment includes:
- Core GitLab components: Unicorn, Shell, Workhorse, Registry, Sidekiq, and Gitaly
- Optional dependencies: Postgres, Redis, Minio
- An auto-scaling, unprivileged [GitLab Runner](https://docs.gitlab.com/runner/) using the Kubernetes executor
- Automatically provisioned SSL via [Let's Encrypt](https://letsencrypt.org/).
## Limitations
Some features of GitLab are not currently available:
- [GitLab Pages](https://gitlab.com/charts/gitlab/issues/37)
- [GitLab Geo](https://gitlab.com/charts/gitlab/issues/8)
- [In-cluster HA database](https://gitlab.com/charts/gitlab/issues/48)
- MySQL will not be supported, as support is [deprecated within GitLab](https://docs.gitlab.com/omnibus/settings/database.html#using-a-mysql-database-management-server-enterprise-edition-only)
## Installing GitLab using the Helm Chart
The `gitlab` chart includes all required dependencies, and takes a few minutes
to deploy.
TIP: **Tip:**
For production deployments, we strongly recommend using the
[detailed installation instructions](https://gitlab.com/charts/gitlab/blob/master/doc/installation/index.md)
utilizing [external Postgres, Redis, and object storage](https://gitlab.com/charts/gitlab/tree/master/doc/advanced) services.
### Requirements
In order to deploy GitLab on Kubernetes, the following are required:
1. `helm` and `kubectl` [installed on your computer](preparation/tools_installation.md).
1. A Kubernetes cluster, version 1.8 or higher. 6 vCPU and 16 GB of RAM are recommended.
- [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
- [Google GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster)
- [IBM IKS](https://console.bluemix.net/docs/tutorials/scalable-webapp-kubernetes.html#create_kube_cluster)
- [Microsoft AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal)
1. A [wildcard DNS entry and external IP address](preparation/networking.md)
1. [Authenticate and connect](preparation/connect.md) to the cluster
1. Configure and initialize [Helm Tiller](preparation/tiller.md).
### Deployment of GitLab to Kubernetes
To deploy GitLab, the following three parameters are required:
- `global.hosts.domain`: the [base domain](preparation/networking.md) of the
wildcard host entry. For example, `example.com` if the wildcard entry is
`*.example.com`.
- `global.hosts.externalIP`: the [external IP](preparation/networking.md) which
the wildcard DNS resolves to.
- `certmanager-issuer.email`: the email address to use when requesting new SSL
certificates from Let's Encrypt.
NOTE: **Note:**
For deployments to Amazon EKS, there are
[additional configuration requirements](preparation/eks.md). A full list of
configuration options is [also available](https://gitlab.com/charts/gitlab/blob/master/doc/installation/command-line-options.md).
Once you have collected all of your configuration options, you can fetch any
dependencies and run `helm`. In this example, the Helm release is named "gitlab":
```sh
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600 \
  --set global.hosts.domain=example.com \
  --set global.hosts.externalIP=10.10.10.10 \
  --set certmanager-issuer.email=email@example.com
```
### Monitoring the Deployment
Once the deployment finishes, which may take 5-10 minutes, the `helm upgrade` command above
outputs the list of resources it installed.
The status of the deployment can be checked by running `helm status gitlab`,
which can also be done while the deployment is taking place if you run the
command in another terminal.
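For example, in a second terminal:

```sh
# Check the status of the release while the deployment is still in progress
helm status gitlab
```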
### Initial login
You can access the GitLab instance by visiting the domain name beginning with
`gitlab.` followed by the domain specified during installation. From the example
above, the URL would be `https://gitlab.example.com`.
If you manually created the secret for the initial root password, you
can use that to sign in as the `root` user. If not, GitLab automatically
created a random password for the `root` user. This can be extracted by the
following command (replace `<name>` with the name of the release, which is `gitlab`
if you used the command above):
```sh
kubectl get secret <name>-gitlab-initial-root-password -ojsonpath={.data.password} | base64 --decode ; echo
```
### Outgoing email
By default, outgoing email is disabled. To enable it, provide details for your SMTP server
using the `global.smtp` and `global.email` settings. You can find details for these settings in the
[command line options](https://gitlab.com/charts/gitlab/blob/master/doc/installation/command-line-options.md#email-configuration).
If your SMTP server requires authentication, make sure to read the section on providing
your password in the [secrets documentation](https://gitlab.com/charts/gitlab/blob/master/doc/installation/secrets.md#smtp-password).
You can disable authentication settings with `--set global.smtp.authentication=""`.
If your Kubernetes cluster is on GKE, be aware that SMTP port [25 is blocked](https://cloud.google.com/compute/docs/tutorials/sending-mail/#using_standard_email_ports).
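As an illustrative sketch only (the SMTP host and sender address are placeholders; consult the command line options linked above for the full set of keys and for supplying the SMTP password as a secret), the relevant settings can be passed at install or upgrade time:

```sh
helm upgrade --reuse-values gitlab gitlab/gitlab \
  --set global.smtp.enabled=true \
  --set global.smtp.address=smtp.example.com \
  --set global.smtp.port=587 \
  --set global.email.from=gitlab@example.com
```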
### Deploying the Community Edition
To deploy the Community Edition, include these options in your `helm install` command:
```sh
--set gitlab.migrations.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce
--set gitlab.sidekiq.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce
--set gitlab.unicorn.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce
--set gitlab.unicorn.workhorse.image=registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce
--set gitlab.task-runner.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce
```
## Updating GitLab using the Helm Chart
Once your GitLab Chart is installed, configuration changes and chart updates
should be done using `helm upgrade`:
```sh
helm repo update
helm upgrade --reuse-values gitlab gitlab/gitlab
```
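For example, to change a single value while keeping everything else as previously configured (the new domain is a placeholder):

```sh
# Change one value and keep all other previously supplied values
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set global.hosts.domain=example.org
```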
## Uninstalling GitLab using the Helm Chart
To uninstall the GitLab Chart, run the following:
```sh
helm delete gitlab
```
This document was moved to [another location](https://docs.gitlab.com/charts/).
......@@ -21,16 +21,15 @@ of the application including how it should be deployed, upgraded, and configured
## GitLab Chart
This chart contains all the required components to get started, and can scale to
large deployments. It offers a number of benefits:
large deployments. It offers a number of benefits, among others:
- Horizontal scaling of individual components
- No requirement for shared storage to scale
- Containers do not need `root` permissions
- Automatic SSL with Let's Encrypt
- An unprivileged GitLab Runner
- and plenty more.
- Horizontal scaling of individual components.
- No requirement for shared storage to scale.
- Containers do not need `root` permissions.
- Automatic SSL with Let's Encrypt.
- An unprivileged GitLab Runner.
Learn more about the [GitLab chart](gitlab_chart.md).
Learn more about the [GitLab chart](https://docs.gitlab.com/charts/).
## GitLab Runner Chart
......@@ -39,4 +38,4 @@ and you'd like to leverage the Runner's
[Kubernetes capabilities](https://docs.gitlab.com/runner/executors/kubernetes.html),
it can be deployed with the GitLab Runner chart.
Learn more about [gitlab-runner chart](gitlab_runner_chart.md).
Learn more about the [GitLab Runner chart](https://docs.gitlab.com/runner/install/kubernetes.html).
# Connecting your computer to a cluster
---
redirect_to: https://docs.gitlab.com/charts/installation/cloud/
---
In order to deploy software and settings to a cluster, you must connect and authenticate to it.
## Connect to GKE cluster
The command to connect to the cluster can be obtained from the
[Google Cloud Platform Console](https://console.cloud.google.com/kubernetes/list)
for each individual cluster.
Look for the **Connect** button in the clusters list page or use the command below,
filling in your cluster's information:
```
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
```
## Connect to EKS cluster
For the most up to date instructions, follow the Amazon EKS documentation on
[connecting to a cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl).
## Connect to local minikube cluster
If you are doing local development, you can use `minikube` as your
local cluster. If `kubectl cluster-info` is not showing `minikube` as the current
cluster, use `kubectl config use-context minikube` to switch to it.
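A minimal sketch of that local workflow, assuming `minikube` is already installed:

```sh
# Start a local cluster, point kubectl at it, and verify connectivity
minikube start
kubectl config use-context minikube
kubectl cluster-info
```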
This document was moved to [another location](https://docs.gitlab.com/charts/installation/cloud/).
# Running GitLab on EKS
---
redirect_to: https://docs.gitlab.com/charts/installation/cloud/eks.html
---
There are a few nuances of Amazon EKS which are important to be aware of when deploying GitLab.
## Persistent volume management
There are two methods to manage volume claims on Kubernetes:
1. Manually creating each persistent volume (recommended on EKS)
1. Utilizing dynamic provisioning to automatically create the persistent volumes
### Manual provisioning of volumes (Recommended)
Manually creating the volumes allows you to control the zone of each volume, as well as all other details supported by the underlying storage.
Follow our documentation on [manually creating persistent volumes](https://gitlab.com/charts/gitlab/blob/master/doc/installation/storage.md#manually-creating-static-volumes).
### Dynamic provisioning of volumes
Dynamic provisioning utilizes a Kubernetes provisioner, like `aws-ebs`, to automatically create persistent volumes to fulfill each claim.
With EKS, there are a few important details to keep in mind:
1. Clusters are required to span multiple availability zones (AZs).
1. Kubernetes volume provisioners create volumes across zones without regard to which pod they belong to. This leads to scenarios where a pod with multiple volumes is unable to start because the volumes are in different zones.
1. There is no default Storage Class.
The easiest way to solve this and still use dynamic provisioning is to use, or create, a Storage Class that is locked to a specific zone.
> **Note**: Restricting volumes to a specific zone will cause GitLab and any other application using this Storage Class to only reside in that zone. For multiple zone support, utilize [manually provisioned volumes](#manual-provisioning-of-volumes-recommended).
To create the storage class, download and edit Amazon EKS's [sample Storage Class](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html) and add the following parameter:
```yaml
parameters:
  zone: <desired-zone>
```
Then [specify the Storage Class](https://gitlab.com/charts/gitlab/blob/master/doc/installation/storage.md#using-a-custom-storage-class) name when deploying GitLab.
## External access to GitLab
By default, GitLab will deploy an Ingress which will create an associated Elastic Load Balancer (ELB). Since the DNS names of ELBs cannot be known ahead of time, it is difficult to use Let's Encrypt to automatically provision HTTPS certificates.
We recommend [using your own certificates](https://gitlab.com/charts/gitlab/blob/master/doc/installation/tls.md#option-2-use-your-own-wildcard-certificate), and then mapping your desired DNS name to the created ELB using a CNAME record.
This document was moved to [another location](https://docs.gitlab.com/charts/installation/cloud/eks.html).
# Networking Prerequisites
---
redirect_to: https://docs.gitlab.com/charts/installation/deployment.html#networking-and-dns
---
NOTE: **Note:**
Amazon EKS utilizes Elastic Load Balancers, which are addressed by DNS name and
cannot be known ahead of time. If you're using EKS, you can skip this section.
The `gitlab` chart configures a GitLab server and Kubernetes cluster which can support dynamic [Review Apps](https://docs.gitlab.com/ee/ci/review_apps/index.html), as well as services like the integrated [Container Registry](https://docs.gitlab.com/ee/user/project/container_registry.html).
To support the GitLab services and dynamic environments, a wildcard DNS entry is required which resolves to the external IP.
## External IP
To provision an external IP on GCP and Azure, simply request a new address from the Networking section. Ensure that the region matches the region your container cluster is created in. Note that it is important the IP is not attached to anything at this point; it will be automatically assigned to the Load Balancer once the Helm chart is installed.
Set `global.hosts.externalIP` to this IP address when [deploying GitLab](../gitlab_chart.md#installing-gitlab-using-the-helm-chart).
Then, create a [wildcard DNS record](#wildcard-dns-entry) which resolves to this IP address.
### Creating an external IP on GCP
When creating the external IP, it is critical to create it in the same region as your cluster. Otherwise, the IP address will fail to bind to the Load Balancer. The steps below use the web console; a `gcloud` equivalent is sketched after them.
1. Open the [web console](https://console.cloud.google.com)
1. In the sidebar, browse to `VPC Network > External IP addresses`
1. Click `Reserve static address`
1. Choose `Regional` and select the region of your cluster
1. Leave `Attached to` blank, as it will be automatically assigned during deployment
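Alternatively, the address can be reserved with the `gcloud` CLI (a sketch; the address name, region, and project are placeholders and should match your cluster):

```sh
# Reserve a regional static external IP in the same region as the cluster
gcloud compute addresses create gitlab-external-ip \
  --region <region> \
  --project <project-id>
```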
## Wildcard DNS entry
Now that an external IP address has been allocated, ensure that the wildcard DNS entry you would like to use resolves to this IP. Typically this would be an `A record` for `*`, resolving to the external IP above.
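Using the example values from this guide, the wildcard record would look something like this in zone-file notation (the TTL is arbitrary):

```
*.example.com.   300   IN   A   10.10.10.10
```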
Please consult the documentation for your DNS service for more information on creating DNS records:
- [Google Domains](https://support.google.com/domains/answer/3290350?hl=en)
- [GoDaddy](https://www.godaddy.com/help/add-an-a-record-19238)
Set `global.hosts.domain` to this DNS name when [deploying GitLab](../gitlab_chart.md#installing-gitlab-using-the-helm-chart).
This document was moved to [another location](https://docs.gitlab.com/charts/installation/deployment.html#networking-and-dns).
# Role Based Access Control
---
redirect_to: https://docs.gitlab.com/charts/installation/deployment.html#rbac
---
Until Kubernetes 1.7, there were no permission controls within a cluster. With the launch
of 1.7, there is now a [role based access control system (RBAC)](https://kubernetes.io/docs/admin/authorization/rbac/)
that determines which services can perform actions within a cluster.
RBAC affects a few different aspects of GitLab:
- [Installation of GitLab using Helm](tiller.md#preparing-for-helm-with-rbac)
- Prometheus monitoring
- GitLab Runner
## Checking that RBAC is enabled
Try listing the current cluster roles; if this fails, then RBAC is disabled.
The following command outputs `false` if RBAC is disabled and `true` otherwise:
```sh
kubectl get clusterroles > /dev/null 2>&1 && echo true || echo false
```
This document was moved to [another location](https://docs.gitlab.com/charts/installation/deployment.html#rbac).
# Configuring and initializing Helm Tiller
To make use of Helm, you must have a [Kubernetes][k8s-io] cluster. Ensure you can
access your cluster using `kubectl`.
Helm consists of two parts, the `helm` client and a `tiller` server inside Kubernetes.
NOTE: **Note:**
If you are not able to run Tiller in your cluster, for example on OpenShift, it
is possible to use [Tiller locally](https://docs.gitlab.com/charts/installation/tools.html#local-tiller)
and avoid deploying it into the cluster. This should only be used when Tiller
cannot be normally deployed.
## Initialize Helm and Tiller
Tiller is deployed into the cluster and interacts with the Kubernetes API to deploy your applications. If role based access control (RBAC) is enabled, Tiller will need to be [granted permissions](#preparing-for-helm-with-rbac) to allow it to talk to the Kubernetes API.
If RBAC is not enabled, skip to [initializing Helm](#initialize-helm).
If you are not sure whether RBAC is enabled in your cluster, or to learn more, read through our [RBAC documentation](rbac.md).
## Preparing for Helm with RBAC
Helm's Tiller will need to be granted permissions to perform operations. These instructions grant cluster-wide permissions; however, for more advanced deployments [permissions can be restricted to a single namespace](https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-only-in-that-namespace). To grant access to the cluster, we will create a new `tiller` service account and bind it to the `cluster-admin` role.
Create a file `rbac-config.yaml` with the following contents:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
Next we need to connect to the cluster and upload the RBAC config.
### Upload the RBAC config
Some clusters require authentication to use `kubectl` to create the Tiller roles.
#### Upload the RBAC config as an admin user (GKE)
For GKE, you need to obtain the admin credentials. This command will output the admin password:
```
gcloud container clusters describe <cluster-name> --zone <zone> --project <project-id> --format='value(masterAuth.password)'
```
Use the admin password to set the admin credentials. Replace the password value below with the output value from the above step:
```
kubectl config set-credentials admin --username=admin --password=xxxxxxxxxxxxxx
```
Once credentials have been set, create the role:
```
kubectl --user=admin create -f rbac-config.yaml
```
#### Upload the RBAC config (Non-GKE clusters)
For other clusters like Amazon EKS, you can directly upload the RBAC configuration:
```
kubectl create -f rbac-config.yaml
```
## Initialize Helm
Deploy Helm Tiller with a service account:
```
helm init --service-account tiller
```
If your cluster previously had Helm/Tiller installed,
run the following to ensure that the deployed version of Tiller matches the local Helm version:
```
helm init --upgrade --service-account tiller
```
### Patching Helm Tiller for Amazon EKS
Helm Tiller requires a flag to be enabled to work properly on Amazon EKS:
```
kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'
```
---
redirect_to: https://docs.gitlab.com/charts/installation/tools.html
---
[helm]: https://helm.sh
[helm-using]: https://docs.helm.sh/using_helm
[k8s-io]: https://kubernetes.io/
[gcp-k8s]: https://console.cloud.google.com/kubernetes/list
This document was moved to [another location](https://docs.gitlab.com/charts/installation/tools.html).
# Installing kubectl and Helm on your computer
---
redirect_to: https://docs.gitlab.com/charts/installation/tools.html
---
In order to work with the GitLab Helm charts, `kubectl` and `helm` must be installed and configured on your computer.
## Installing `kubectl`
`kubectl` is the Kubernetes command line tool, which can be used to deploy settings to the cluster.
Follow the [official documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for the most up to date instructions.
## Installing `helm`
Helm is a package management tool for Kubernetes, and is used to deploy charts.
You can get Helm from the project's [releases page](https://github.com/kubernetes/helm/releases), or follow other options under the official documentation of [Installing Helm](https://docs.helm.sh/using_helm/#installing-helm).
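As one illustrative option, on macOS both tools can be installed with Homebrew (formula names as of Helm 2; follow the official documentation above for other platforms):

```sh
# Install kubectl and helm via Homebrew
brew install kubernetes-cli kubernetes-helm
```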
# Next steps
Once installed, proceed to the next [installation step](../gitlab_chart.md#installing-gitlab-using-the-helm-chart).
This document was moved to [another location](https://docs.gitlab.com/charts/installation/tools.html).
......@@ -35,5 +35,9 @@ module Gitlab
[project, type]
end
def self.default_type
PROJECT
end
end
end
......@@ -24,7 +24,10 @@ module Gitlab
return [project, type, redirected_path] if project
end
nil
# When a project did not exist, the parsed repo_type would be empty.
# In that case, we want to continue with a regular project repository, as we
# could create the project if the user pushing is allowed to do so.
[nil, Gitlab::GlRepository.default_type, nil]
end
def self.find_project(project_path)
......
......@@ -44,8 +44,10 @@ describe ::Gitlab::RepoPath do
end
end
it "returns nil for non existent paths" do
expect(described_class.parse("path/non-existent.git")).to eq(nil)
it "returns the default type for non existent paths" do
_project, type, _redirected = described_class.parse("path/non-existent.git")
expect(type).to eq(Gitlab::GlRepository.default_type)
end
end
......
......@@ -644,6 +644,22 @@ describe API::Internal do
expect(response).to have_gitlab_http_status(404)
expect(json_response["status"]).to be_falsey
end
it 'returns a 200 response when using a project path that does not exist' do
post(
api("/internal/allowed"),
params: {
key_id: key.id,
project: 'project/does-not-exist.git',
action: 'git-upload-pack',
secret_token: secret_token,
protocol: 'ssh'
}
)
expect(response).to have_gitlab_http_status(404)
expect(json_response["status"]).to be_falsey
end
end
context 'user does not exist' do
......