1. The Usage Ping [cron job](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/gitlab_usage_ping_worker.rb#L30) is set in Sidekiq to run weekly.
1. When the cron job runs, it calls [`GitLab::UsageData.to_json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
1. `GitLab::UsageData.to_json` [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
1. The responses of all method calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in `GitLab::UsageData.to_json`.
1. The JSON payload is then [posted to the Versions application](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L20).
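A compressed sketch of that flow in Ruby (the class and method names other than `Gitlab::UsageData.to_json` are simplified for illustration; the real implementations live in the worker, service, and `lib/gitlab/usage_data.rb` files linked above):

```ruby
# Illustrative only — not the actual worker/service code.
class GitlabUsagePingWorker
  def perform
    SubmitUsagePingService.new.execute          # triggered weekly by the Sidekiq cron schedule
  end
end

class SubmitUsagePingService
  def execute
    payload = Gitlab::UsageData.to_json         # cascades into the ~400+ counter methods and merges their results
    post_to_versions_application(payload)       # hypothetical helper standing in for the HTTPS POST to Versions
  end
end
```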
## Implementing Usage Ping
...
...
For large tables, PostgreSQL can take a long time to count rows due to MVCC (Multi-version Concurrency Control).
For GitLab.com, there are extremely large tables with 15-second query timeouts, so we use batch counting to avoid encountering timeouts. Here are the sizes of some GitLab.com tables:
There are two batch counting methods provided: `Ordinary Batch Counters` and `Distinct Batch Counters`. Batch counting requires indexes on columns to calculate max, min, and range queries. In some cases, a specialized index may need to be added on the columns involved in a counter.
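As a sketch, counters built with these helpers could look like the following inside `Gitlab::UsageData` (the `count` and `distinct_count` helper names mirror the batch counters described above, but treat the exact signatures as an assumption and check `lib/gitlab/usage_data.rb`):

```ruby
# Ordinary batch counter: counts rows in ID batches between MIN(id) and MAX(id),
# so each individual query stays well under the statement timeout.
projects_count = count(Project)

# Distinct batch counter: counts distinct values of a column in batches.
# This is why an index on the counted column (here `creator_id`) is required —
# the MIN/MAX/range queries would otherwise be slow on large tables.
distinct_project_creators = distinct_count(Project, :creator_id)
```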
- [Assert against the underlying database state instead of against a page's content](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31437): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10934>
- In JS tests, shifting elements can cause Capybara to mis-click when the element moves at the exact time Capybara sends the click
- [Dropdowns rendering upward or downward due to window size and scroll position](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17660)
- [Lazy loaded images can cause Capybara to mis-click](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18713)
- [Triggering JS events before the event handlers are set up](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18742)
- [Wait for the image to be lazy-loaded when asserting on a Markdown image's `src` attribute](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25408)
This could be a sign that there are too many stale secrets and/or configuration maps.
**Where to look for further debugging:**
Look at [the list of Configurations](https://console.cloud.google.com/kubernetes/config?project=gitlab-review-apps)
or `kubectl get secret,cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-'`.
Any secrets or configuration maps older than 5 days are suspect and should be deleted.
**Useful commands:**
...
...
For the record, the debugging steps used to track down this issue were:
1. Web search for the exact error message, following the rabbit hole to [a relevant Kubernetes bug report](https://github.com/kubernetes/kubernetes/issues/57345)
1. Access the node over SSH via the GCP console (**Compute Engine > VM
instances**, then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs)
1. In the node: `systemctl --version` => `systemd 232`
with their own clone of the production database.
Joe is available in the
[`#database-lab`](https://gitlab.slack.com/archives/CLJMDRD8C) channel on Slack.
Unlike ChatOps, it gives you a way to execute DDL statements (like creating indexes and tables) and to get query plans not only for `SELECT` but also for `UPDATE` and `DELETE`.
For example, to test a new index you can do the following:
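A sketch of the kind of messages you would send to Joe for this (the `exec`, `explain`, and `reset` commands and the index, table, and column names below are illustrative; check the channel's pinned instructions for the exact syntax):

```plaintext
exec CREATE INDEX tmp_index_issues_on_updated_at ON issues (updated_at)
explain SELECT COUNT(*) FROM issues WHERE updated_at > now() - interval '7 days'
reset
```

In this sketch, `exec` creates the index on your private clone, `explain` shows whether the planner picks the new index up, and `reset` is meant to return the clone to its initial state when you are done.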
We can identify three major use-cases for an upload:
1. **storage:** if we are uploading to store a file (for example, artifacts, packages, or discussion attachments). In this case, [direct upload](#direct-upload) is the proper level, as it's the least resource-intensive operation. Additional information can be found on [File Storage in GitLab](file_storage.md).
1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
1. **Sidekiq/asynchronous processing:** Asynchronous processing must implement [direct upload](#direct-upload), because it's the only way to support Cloud Native deployments without a shared NFS.
For more details about the currently broken feature, see [epic &1802](https://gitlab.com/groups/gitlab-org/-/epics/1802).
...
...
This is the default kind of upload, and it's the most expensive in terms of resources.
In this case, Workhorse is unaware of files being uploaded and acts as a regular proxy.
When a multipart request reaches the Rails application, `Rack::Multipart` leaves behind temporary files in `/tmp` and uses valuable Ruby process time to copy files around.
to build the application. We also use the [GitLab Knative tool](https://gitlab.com/gitlab-org/gitlabktl)
CLI to simplify the deployment of services and functions to Knative.
1. **`serverless.yml`** (for [functions only](#deploying-functions)): When using serverless to deploy functions, the `serverless.yml` file
will contain the information for all the functions being hosted in the repository as well as a reference to the
runtime being used.
1. **`Dockerfile`** (for [applications only](#deploying-serverless-applications)): Knative requires a
`Dockerfile` in order to build your applications. It should be included at the root of your
project's repository and expose port `8080`. A `Dockerfile` is not required if you plan to build serverless functions
using our [runtimes](https://gitlab.com/gitlab-org/serverless/runtimes).
1. **Prometheus** (optional): Installing Prometheus allows you to monitor the scale and traffic of your serverless function/application.
See [Installing Applications](../index.md#installing-applications) for more information.
...
...
The minimum recommended cluster size to run Knative is 3 nodes, 6 vCPUs, and 22.5 GB of memory.
1. The Ingress is now available at this address and will route incoming requests to the proper service based on the DNS
name in the request. To support this, a wildcard DNS record should be created for the desired domain name. For example,
if your Knative base domain is `knative.info`, then you need to create an A record or CNAME record with domain `*.knative.info`
pointing to the IP address or hostname of the Ingress.
![DNS entry](img/dns-entry.png)
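For instance, in zone-file notation the wildcard record could look roughly like this (the TTL and the IP `203.0.113.10` are placeholders; use your Ingress's actual external IP address or hostname):

```plaintext
*.knative.info.   300   IN   A   203.0.113.10
```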
NOTE: **Note:**
You can deploy either [functions](#deploying-functions) or [serverless applications](#deploying-serverless-applications)
...
...
Explanation of the fields used above:

| Parameter | Description |
|-----------|-------------|
| `name` | Indicates which provider is used to execute the `serverless.yml` file. In this case, the TriggerMesh middleware. |
| `envs` | Includes the environment variables to be passed as part of function execution for **all** functions in the file, where `FOO` is the variable name and `BAR` is its value. You may replace these with your own variables. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for **all** functions in the file. The secrets are expected in INI format. |
### `functions`
...
...
subsequent lines contain the function attributes.

| Parameter | Description |
|-----------|-------------|
| `runtime` (optional) | The runtime to be used to execute the function. This can be a runtime alias (see [Runtime aliases](#runtime-aliases)), or it can be a full URL to a custom runtime repository. When the runtime is not specified, we assume that `Dockerfile` is present in the function directory specified by `source`. |
| `description` | A short description of the function. |
| `envs` | Sets an environment variable for the specific function only. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for the specific function only. The secrets are expected in INI format. |
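Putting the `provider` and function attributes together, a `serverless.yml` could look roughly like the following sketch. The service, directory, runtime alias, and secret names are invented for illustration, and any keys not described in the tables above are assumptions — adapt them to your project:

```yaml
service: my-functions
description: "Example serverless functions deployed through GitLab and TriggerMesh"

provider:
  name: triggermesh
  envs:
    FOO: BAR                  # available to all functions in this file
  secrets:
    - my-secrets              # Kubernetes secret, contents in INI format

functions:
  echo:
    source: ./echo            # directory containing the function code
    runtime: gitlab/runtimes/nodejs   # runtime alias, or a full URL to a custom runtime repository
    description: "Echoes the request payload back to the caller"
    envs:
      FUNCTION_NAME: echo     # set for this function only
    secrets:
      - my-secrets
```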
### Deployment
...
...
The sample function can now be triggered from any HTTP client using a simple `POST` call.
| [Custom domains and SSL/TLS Certificates](custom_domains_ssl_tls_certification/index.md) | Custom domains and subdomains, DNS records, and SSL/TLS certificates. |
| [Let's Encrypt integration](custom_domains_ssl_tls_certification/lets_encrypt_integration.md) | Secure your Pages sites with Let's Encrypt certificates, which are automatically obtained and renewed by GitLab. |
In brief, this is what you need to upload your website in GitLab Pages:
1. Domain of the instance: the domain name that is used for GitLab Pages
(ask your administrator).
1. GitLab CI/CD: a `.gitlab-ci.yml` file with a specific job named [`pages`](../../../ci/yaml/README.md#pages) in the root directory of your repository (a minimal example follows this list).
1. A directory called `public` in your site's repository containing the content
to be published.
1. GitLab Runner enabled for the project.
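For reference, a minimal `.gitlab-ci.yml` that satisfies the CI/CD requirement could look like this sketch, assuming the static content already lives in the `public` directory of the repository:

```yaml
pages:                # the job must be named `pages`
  stage: deploy
  script:
    - echo "The site content is already in public/, nothing to build."
  artifacts:
    paths:
      - public        # GitLab Pages publishes whatever ends up in this artifact
  only:
    - master
```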
...
...
will be deleted.
When using Pages under the general domain of a GitLab instance (`*.example.io`),
you _cannot_ use HTTPS with sub-subdomains. That means that if your
username or group name contains a dot, for example `foo.bar`, the domain
`https://foo.bar.example.io` will _not_ work. This is a limitation of the
[HTTP Over TLS protocol](https://tools.ietf.org/html/rfc2818#section-3.1). HTTP pages will continue to work provided you
See the [`git filter-repo` documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
for more examples and the complete documentation.
1. Force push your changes to overwrite all branches on GitLab.
...
...
unwanted files.
To reduce the size of your repository in GitLab, you need to remove GitLab
internal refs that reference commits containing large files. Before completing
these steps, first [purge files from your repository
See the [`git filter-repo` documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
for more examples and the complete documentation.
1. After running `git filter-repo`, the header and unchanged commits need to be