1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings to enable single sign-on and add Authentiq as an OAuth provider.
1. Add the provider configuration for Authentiq:
For Omnibus packages:
...
...
```
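For reference, a minimal sketch of what the Omnibus provider entry in `/etc/gitlab/gitlab.rb` typically looks like (the `scope` value below is illustrative; take the exact string from Authentiq's documentation):

```ruby
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "authentiq",
    "app_id" => "YOUR_CLIENT_ID",
    "app_secret" => "YOUR_CLIENT_SECRET",
    "args" => {
      # Illustrative scope: name, signed email, and push sign-in permission
      "scope" => "aq:name email~rs aq:push"
    }
  }
]
```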
1. The `scope` is set to request the user's name, email (required and signed), and permission to send push notifications to sign in on subsequent visits.
See [OmniAuth Authentiq strategy](https://github.com/AuthentiqID/omniauth-authentiq/wiki/Scopes,-callback-url-configuration-and-responses) for more information on scopes and modifiers.
1. Change `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` to the Client credentials you received in step 1.
1. Save the configuration file.
1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect, depending on whether you installed GitLab via Omnibus or from source.
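In practice this usually means running one of the following:

```bash
# Omnibus installations:
sudo gitlab-ctl reconfigure

# Installations from source:
sudo service gitlab restart
```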
On the sign in page there should now be an Authentiq icon below the regular sign in form. Click the icon to begin the authentication process.
## Configure your server firewall
1. Open up port 25 on your server so that people can send email into the server over SMTP.
1. If the mail server is different from the server running GitLab, open up port 143 on your server so that GitLab can read email from the server over IMAP.
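For example, on a host managed with `ufw` (an assumption; adapt the commands to whatever firewall tooling you actually use):

```bash
# On the mail server: allow inbound SMTP so people can send email to it
sudo ufw allow 25/tcp

# On the mail server, if it is separate from GitLab: allow IMAP so GitLab can read email
sudo ufw allow 143/tcp
```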
Now go back to the Google interface, find your cluster, and follow the instructions under `Connect to the cluster` to open the Kubernetes Dashboard. The commands will look something like `gcloud container clusters get-credentials ruby-autodeploy --zone europe-west2-c --project api-project-XXXXXXX` followed by `kubectl proxy`.
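For readability, the same commands as a shell snippet (the cluster name, zone, and project ID are the example values above; substitute your own):

```bash
gcloud container clusters get-credentials ruby-autodeploy --zone europe-west2-c --project api-project-XXXXXXX
kubectl proxy
```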
extracted and shown right in the merge request widget.
[Learn more on Browser Performance Testing in merge requests](https://docs.gitlab.com/ee//user/project/merge_requests/browser_performance_testing.html).
A complete example can be found in our [Auto DevOps CI YML](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml).
## Previous job definitions
CAUTION: **Caution:**
Before GitLab 11.5, the Performance job and its artifact had to be named specifically
to automatically extract report data and show it in the merge request widget.
While these old job definitions are still maintained, they have been deprecated
and may be removed in the next major release, GitLab 12.0.
You are advised to update your current `.gitlab-ci.yml` configuration to reflect that change.
For GitLab 11.4 and earlier, the job should look like:
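A minimal sketch of such a legacy job, assuming the pre-11.5 convention of a job named `performance` exposing a `performance.json` artifact (the script line is a placeholder for your actual sitespeed.io invocation):

```yaml
performance:
  stage: performance
  script:
    - ./run-browser-performance-checks.sh   # placeholder for your sitespeed.io run
  artifacts:
    paths:
      - performance.json
```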
In this particular case, the `npm deploy` script is a Gulp script that does the following:
1. Compile CSS & JS
1. Create sprites
1. Copy various assets (images, fonts) around
1. Replace some strings
All these operations will put all files into a `build` folder, which is ready to be deployed to a live server.
...
...
In order, this means that:
1. We check if the `ssh-agent` is available and we install it if it's not.
1. We create the `~/.ssh` folder.
1. We make sure we're running bash.
1. We disable host checking (we aren't prompted to accept the server's host key on first connection, and since every job is effectively a first connection, we need this).
And this is basically all you need in the `before_script` section.
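A sketch of a `before_script` that implements those four steps, assuming a Debian-based image and an `SSH_PRIVATE_KEY` CI variable (details will vary per project):

```yaml
before_script:
  # 1. Install the SSH client (which provides ssh-agent) if it is missing, then start the agent
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  # 2. Create the ~/.ssh folder
  - mkdir -p ~/.ssh
  # 3. Make sure bash is available
  - which bash || apt-get install -y bash
  # 4. Disable strict host key checking, since every job is effectively a first connection
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
```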
...
...
Here's the breakdown:
1. `only:dev` means that this build will run only when something is pushed to the `dev` branch. You can remove this block completely and have everything run on every push (but that's probably not something you want).
1. `ssh-add ...` adds the private key you added in the web UI to the Docker container.
1. We connect via `ssh` and create a new `_tmp` folder.
1. We connect via `scp` and upload the `build` folder (which was generated by the `npm` script) to our previously created `_tmp` folder.
1. We connect again via `ssh`, move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
1. We connect via `ssh` one last time and remove the `_old` folder.
What's the deal with the artifacts? We just tell GitLab CI to keep the `build` directory (later on, you can download that as needed).
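Putting the breakdown together, a sketch of such a `stage_deploy` job; the host, user, and paths are placeholders, and it assumes the `before_script` above has already started `ssh-agent`:

```yaml
stage_deploy:
  stage: deploy
  only:
    - dev
  script:
    # Add the private key from the web UI variable to the agent inside the container
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # Create a fresh _tmp folder on the server
    - ssh user@example.com "mkdir -p /var/www/_tmp"
    # Upload the build folder generated by the npm script
    - scp -r build/* user@example.com:/var/www/_tmp
    # Swap the new release in: live -> _old, _tmp -> live
    - ssh user@example.com "mv /var/www/live /var/www/_old && mv /var/www/_tmp /var/www/live"
    # Remove the previous release
    - ssh user@example.com "rm -rf /var/www/_old"
  artifacts:
    paths:
      - build/
```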
We have a performance dashboard available in one of our [grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://sitespeed.io) every 6 hours. These changes are displayed after a set number of pages are aggregated.
These pages can be found inside a text file in the gitlab-build-images [repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [gitlab.txt](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt).
Any frontend engineer can contribute to this dashboard by adding or removing URLs of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/team) review your changes before assigning them to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after they are merged into `master`.
## Frontend unit tests
Unit tests are on the lowest abstraction level and typically test functionality that is not directly perceivable by a user.
### When to use unit tests
<details>
<summary>exported functions and classes</summary>
Anything that is exported can be reused at various places in a way you have no control over.
Therefore it is necessary to document the expected behavior of the public interface with tests.
</details>
<details>
<summary>Vuex actions</summary>
Any Vuex action needs to work in a consistent way independent of the component it is triggered from.
</details>
<details>
<summary>Vuex mutations</summary>
For complex Vuex mutations it helps to identify the source of a problem by separating the tests from other parts of the Vuex store.
</details>
### When *not* to use unit tests
<details>
<summary>non-exported functions or classes</summary>
Anything that is not exported from a module can be considered private or an implementation detail and doesn't need to be tested.
</details>
<details>
<summary>constants</summary>
Testing the value of a constant would mean copying it.
This results in extra effort without additional confidence that the value is correct.
</details>
<details>
<summary>Vue components</summary>
Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components and don't need to be tested.
They are implicitly covered by component tests.
The <a href="https://vue-test-utils.vuejs.org/guides/#getting-started">official Vue guidelines</a> suggest the same.
</details>
### What to mock in unit tests
<details>
<summary>state of the class under test</summary>
Modifying the state of the class under test directly rather than using methods of the class avoids side-effects in test setup.
</details>
<details>
<summary>other exported classes</summary>
Every class needs to be tested in isolation to prevent test scenarios from growing exponentially.
</details>
<details>
<summary>single DOM elements if passed as parameters</summary>
For tests that only operate on single DOM elements rather than a whole page, creating these elements is cheaper than loading a whole HTML fixture.
</details>
<details>
<summary>all server requests</summary>
When running frontend unit tests, the backend may not be reachable.
Therefore all outgoing requests need to be mocked.
Background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
</details>
### What *not* to mock in unit tests
<details>
<summary>non-exported functions or classes</summary>
Everything that is not exported can be considered private to the module and will be implicitly tested via the exported classes / functions.
</details>
<details>
<summary>methods of the class under test</summary>
By mocking methods of the class under test, the mocks will be tested and not the real methods.
</details>
<details>
<summary>utility functions (pure functions, or those that only modify parameters)</summary>
If a function has no side effects because it has no state, it is safe to not mock it in tests.
</details>
<details>
<summary>full HTML pages</summary>
Loading the HTML of a full page slows down tests, so it should be avoided in unit tests.
</details>
## Frontend component tests
Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state.
### When to use component tests
- Vue components
### When *not* to use component tests
<details>
<summary>Vue applications</summary>
Vue applications may contain many components.
Testing them on a component level requires too much effort.
Therefore they are tested on frontend integration level.
</details>
<details>
<summary>HAML templates</summary>
HAML templates contain only Markup and no frontend-side logic.
Therefore they are not complete components.
</details>
### What to mock in component tests
<details>
<summary>DOM</summary>
Operating on the real DOM is significantly slower than on the virtual DOM.
</details>
<details>
<summary>properties and state of the component under test</summary>
Similarly to testing classes, modifying the properties directly (rather than relying on methods of the component) avoids side-effects.
</details>
<details>
<summary>Vuex store</summary>
To avoid side effects and keep component tests simple, Vuex stores are replaced with mocks.
</details>
<details>
<summary>all server requests</summary>
Similar to unit tests, when running component tests, the backend may not be reachable.
Therefore all outgoing requests need to be mocked.
Similar to unit tests, background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
</details>
<details>
<summary>child components</summary>
Every component is tested individually, so child components are mocked.
See also <a href="https://vue-test-utils.vuejs.org/api/#shallowmount">shallowMount()</a>.
</details>
### What *not* to mock in component tests
<details>
<summary>methods or computed properties of the component under test</summary>
By mocking part of the component under test, the mocks will be tested and not the real component.
</details>
<details>
<summary>functions and classes independent from Vue</summary>
All plain JavaScript code is already covered by unit tests and does not need to be mocked in component tests.
</details>
## Frontend integration tests
Integration tests cover the interaction between all components on a single page.
Their abstraction level is comparable to how a user would interact with the UI.
### When to use integration tests
<details>
<summary>page bundles (<code>index.js</code> files in <code>app/assets/javascripts/pages/</code>)</summary>
Testing the page bundles ensures the corresponding frontend components integrate well.
</details>
<details>
<summary>Vue applications outside of page bundles</summary>
Testing Vue applications as a whole ensures the corresponding frontend components integrate well.
Rendering HAML views requires a Rails environment including a running database which we cannot rely on in frontend tests.
</details>
### What to mock in integration tests
<details>
<summary>all server requests</summary>
Similar to unit and component tests, when running integration tests, the backend may not be reachable.
Therefore all outgoing requests need to be mocked.
</details>
<details>
<summary>asynchronous background operations that are not perceivable on the page</summary>
Background operations that affect the page need to be tested on this level.
All other background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
</details>
### What *not* to mock in integration tests
<details>
<summary>DOM</summary>
Testing on the real DOM ensures our components work in the environment they are meant for.
Part of this will be delegated to <a href="https://gitlab.com/gitlab-org/quality/team-tasks/issues/45">cross-browser testing</a>.
</details>
<details>
<summary>properties or state of components</summary>
On this level, all tests can only perform actions a user would do.
For example to change the state of a component, a click event would be fired.
</details>
<details>
<summary>Vuex stores</summary>
When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well.
</details>
## Feature tests
In contrast to [frontend integration tests](#frontend-integration-tests), feature tests make requests against the real backend instead of using fixtures.
This also implies that database queries are executed which makes this category significantly slower.
### When to use feature tests
- use cases that require a backend and cannot be tested using fixtures
- behavior that is not part of a page bundle but defined globally
### Relevant notes
...
...
wait_for_requests
expect(page).not_to have_selector('.card')
```
## Test helpers
### Vuex Helper: `testAction`
We have a helper available to make testing actions easier, as per [official documentation](https://vuex.vuejs.org/en/testing.html):
...
...
Check an example in [spec/javascripts/ide/stores/actions_spec.js](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/spec/javascripts/ide/stores/actions_spec.js).
#### Vue Helper: `mountComponent`
### Vue Helper: `mountComponent`
To make mounting a Vue component easier and more readable, we have a few helpers available in `spec/helpers/vue_mount_component_helper`.
...
...
afterEach(() => {
vm.$destroy();
});
```
## Testing with older browsers
Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or Browserstack using the following steps:
...
...
You can find the credentials on 1Password, under `frontendteam@gitlab.com`.
### Firefox
#### macOS
You can download any older version of Firefox from the releases FTP server, https://ftp.mozilla.org/pub/firefox/releases/
1. From the website, select a version, in this case `50.0.1`.
1. Go to the mac folder.
1. Select your preferred language; you will find the dmg package inside. Download it.
1. Drag and drop the application to any other folder but the `Applications` folder.
1. Rename the application to something like `Firefox_Old`.
1. Move the application to the `Applications` folder.
1. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version.
1. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version.
When exporting SVGs, be sure to follow these guidelines:
- Convert all strokes to outlines.
- Use pathfinder tools to combine overlapping paths and create compound paths.
- SVGs that are limited to one color should be exported without a fill color so the color can be set using CSS.
- Ensure that exported SVGs have been run through an [SVG cleaner](https://github.com/RazrFalcon/SVGCleaner) to remove unused elements and attributes.
You can open your SVG in a text editor to ensure that it is clean.
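For example, the SVG cleaner linked above is typically invoked with just an input and an output path (an assumption; check the project's README for the options your version supports):

```bash
svgcleaner icon.svg icon-cleaned.svg
```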
* To save time. One of the reasons Matthieu moved his company to GitLab was to reduce the effort it took him to manage and configure multiple tools, thus saving him time. He has to balance his day job in addition to managing the company's GitLab installation and onboarding new teams to GitLab.
* To use a platform which is easy to manage. Matthieu isn't a Systems Administrator, and when updating GitLab, creating backups, etc., he would prefer to work within GitLab's UI. Explanations / guided instructions when configuring settings in GitLab's interface would really help Matthieu. He needs reassurance that what he is about to change:
- Is the right setting.
- Will provide him with the desired result he wants.
* Matthieu needs to educate his colleagues about GitLab. Matthieu's colleagues won't adopt GitLab as they're unaware of its capabilities and the positive impact it could have on their work. Matthieu needs support in getting this message across to them.
# Connecting and deploying to an Amazon EKS cluster
## Introduction
In this tutorial, we will show how to integrate an [Amazon EKS](https://aws.amazon.com/eks/) cluster with GitLab, and begin deploying applications.
For an end-to-end walkthrough we will:
...
...
You will need:
1. An account on GitLab, like [GitLab.com](https://gitlab.com)
1. An Amazon EKS cluster (with worker nodes properly configured)
1. `kubectl` [installed and configured for access to the EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl)
If you don't have an Amazon EKS cluster, one can be created by following [the EKS getting started guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
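To confirm that `kubectl` can reach your cluster before continuing, a quick check such as the following should list your worker nodes:

```bash
kubectl get nodes
```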
...
...
Give the project a name, and then select `Create project`.
![Create Project](img/create_project.png)
## Configuring and connecting the EKS cluster
From the left side bar, hover over `Operations` and select `Kubernetes`, then click on `Add Kubernetes cluster`, and finally `Add an existing Kubernetes cluster`.
A few details from the EKS cluster will be required to connect it to GitLab.
1. **Retrieve the certificate**: A valid Kubernetes certificate is needed to authenticate to the EKS cluster. We will use the certificate created by default. Open a shell and use `kubectl` to retrieve it:
- List the secrets with `kubectl get secrets`; one should be named similar to `default-token-xxxxx`. Copy that token name for use below.
- Get the certificate with `kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 -D`.
1. **Create admin token**: A `cluster-admin` token is required to install and manage Helm Tiller. GitLab establishes mutual SSL authentication with Helm Tiller and creates limited service accounts for each application. To create the token, we will create an admin service account as follows:
1. Create a file called `eks-admin-service-account.yaml` with the text below:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: eks-admin
namespace: kube-system
```
2. Apply the service account to your cluster:
```bash
kubectl apply -f eks-admin-service-account.yaml
```
Output:
```bash
serviceaccount "eks-admin" created
```
3. Create a file called `eks-admin-cluster-role-binding.yaml` with the text below:
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: eks-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: eks-admin
namespace: kube-system
```
4. Apply the cluster role binding to your cluster:
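A sketch of the corresponding commands; the `kubectl apply` mirrors the earlier service account step, and the token-retrieval command is an assumption based on the AWS EKS getting started guide, so adjust it to your setup:

```bash
kubectl apply -f eks-admin-cluster-role-binding.yaml

# Retrieve the admin token value to paste into GitLab later (assumed approach,
# following the AWS EKS getting started guide)
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
```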
1. The API server endpoint is also required, so GitLab can connect to the cluster. This is displayed on the AWS EKS console when viewing the EKS cluster details.
You now have all the information needed to connect the EKS cluster:
- Kubernetes cluster name: Provide a name for the cluster to identify it within GitLab.
- Environment scope: Leave this as `*` for now, since we are only connecting a single cluster.
- API URL: Paste in the API server endpoint retrieved above.
- CA Certificate: Paste the certificate data from the earlier step, as-is.
- Token: Paste the admin token value.
- Project namespace: This can be left blank to accept the default namespace, based on the project name.
![Add Cluster](img/add_cluster.png)
...
...
Click on `Add Kubernetes cluster`; the cluster is now connected to GitLab.
If you would like to utilize your own CI/CD scripts to deploy to the cluster, you can stop here.
## Disable Role-Based Access Control (RBAC) - Optional
When connecting a cluster via GitLab integration, you may specify whether the cluster is RBAC-enabled or not. This will affect how GitLab interacts with the cluster for certain operations. If you **did not** check the "RBAC-enabled cluster" checkbox at creation time, GitLab will assume RBAC is disabled for your cluster when interacting with it. If so, you must disable RBAC on your cluster for the integration to work properly.
Presently, Auto DevOps and one-click app installs do not support [Kubernetes role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). Support is [being worked on](https://gitlab.com/groups/gitlab-org/-/epics/136), but in the interim RBAC must be disabled to utilize these features.
![rbac](img/rbac.png)
> **Note**: Disabling RBAC means that any application running in the cluster, or user who can authenticate to the cluster, has full API access. This is a [security concern](https://docs.gitlab.com/ee/user/project/clusters/#security-implications), and may not be desirable.
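One common way to effectively disable RBAC is to grant broad permissions cluster-wide; the binding below is an assumption rather than an official requirement, so review it against your own security needs before applying it:

```bash
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
```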
Before you begin, ensure that any GitHub users who you want to map to GitLab users have either:
- A GitLab account that has logged in using the GitHub icon
\- or -
- A GitLab account with an email address that matches the [public email address](https://help.github.com/articles/setting-your-commit-email-address-on-github/) of the GitHub user
User-matching attempts occur in that order, and if a user is not identified either way, the activity is associated with
the user account that is performing the import.
...
...
If you are using a self-hosted GitLab instance, this process requires that you have configured the
[GitHub integration][gh-import].
1. From the top navigation bar, click **+** and select **New project**.
1. Select the **Import project** tab and then select **GitHub**.
1. Select the first button to **List your GitHub repositories**. You are redirected to a page on github.com to authorize the GitLab application.
1. Click **Authorize gitlabhq**. You are redirected back to GitLab's Import page and all of your GitHub repositories are listed.
1. Continue on to [selecting which repositories to import](#selecting-which-repositories-to-import).
### Using a GitHub token
...
...
If you are not using the GitHub integration, you can still perform an authorization with GitHub to grant GitLab access to your repositories:
1. Go to https://github.com/settings/tokens/new
1. Enter a token description.
1. Select the `repo` scope.
1. Click **Generate token**.
1. Copy the token hash.
1. Go back to GitLab and provide the token to the GitHub importer.
1. Hit the **List Your GitHub Repositories** button and wait while GitLab reads your repositories' information.
Once done, you'll be taken to the importer page to select the repositories to import.
### Selecting which repositories to import
...
...
1. By default, the proposed repository namespaces match the names as they exist in GitHub, but based on your permissions,
you can choose to edit these names before you proceed to import any of them.
1. Select the **Import** button next to any number of repositories, or select **Import all repositories**.
1. The **Status** column shows the import status of each repository. You can choose to leave the page open and it will
update in real time, or you can return to it later.
1. Once a repository has been imported, click its GitLab path to open its GitLab URL.
An API token is needed when integrating with JIRA Cloud. Follow the steps
below to create one:
1. Log in to https://id.atlassian.com with your email.
1. Click **API tokens**, then **Create API token**.
![JIRA API token](img/jira_api_token_menu.png)
![JIRA API token](img/jira_api_token.png)
1. Make sure to write down your new API token as you will need it in the next [steps](jira.md#configuring-gitlab).
NOTE: **Note**
It is important that the user associated with this email has 'write' access to projects in JIRA.
The JIRA configuration is complete. You will need this newly created token, and the email address you used to log in, when [configuring GitLab in the next section](jira.md#configuring-gitlab).
1. To disable the internal issue tracking system in a project, navigate to the General page, expand [Permissions](../settings/index.md#sharing-and-permissions), and turn off the **Issues** toggle.