- 27 Apr, 2020 1 commit
-
-
Oswaldo Ferreira authored
We have 6-digit (microsecond) precision for a few Go service timings, so making all existing *_duration_s fields on Rails/API/Sidekiq use six-decimal precision instead of two makes more sense, and that's what we accomplish here.
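A minimal sketch of the precision change (the helper name is hypothetical, not GitLab's actual method):

```ruby
# Hypothetical helper: time a block with a monotonic clock and round the
# result to six decimal places (microsecond precision) instead of two.
def measure_duration_s
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start).round(6)
end
```

Usage: `measure_duration_s { sleep 0.001 }` returns something like `0.001243` rather than `0.0`.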
-
- 16 Apr, 2020 1 commit
-
-
Oswaldo Ferreira authored
It bases the decision on how to log timings within JSON logs on https://www.robustperception.io/who-wants-seconds.
-
- 07 Apr, 2020 1 commit
-
-
Stan Hu authored
If these logs are sent to Elasticsearch, it will not be able to process nested object fields, as this causes a type mismatch with scalar elements in the same array across log lines. This was done in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26267, but that change did not apply to Sidekiq exceptions. We also move all Sidekiq argument formatting into the JSON formatter. This puts the formatting of job logs in one place and avoids the pitfalls of altering job arguments in the actual payload. Closes https://gitlab.com/gitlab-org/gitlab/-/issues/213639
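The core idea can be sketched as follows (the method name is hypothetical; the real formatter lives in GitLab's JSON log formatter):

```ruby
require 'json'

# Sketch: render every Sidekiq job argument as a scalar string, so
# Elasticsearch never sees a mixed array of objects and scalars across
# log lines.
def format_job_args(args)
  args.map { |arg| arg.is_a?(String) ? arg : arg.to_json }
end
```

For example, `format_job_args([1, "abc", { "key" => "value" }])` yields a uniform string array instead of mixing integers, strings, and hashes.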
-
- 23 Mar, 2020 1 commit
-
-
Stan Hu authored
It has been difficult to see trends in the number of Redis calls without having the number and duration of the calls in structured logs. This commit adds `redis_calls` and `redis_duration_ms` fields to all relevant logs (e.g. api_json.log, production_json.log, Sidekiq, etc.). Closes https://gitlab.com/gitlab-org/gitlab/issues/208821
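A rough sketch of the accumulation side (module and field names are assumptions; the real implementation hooks Redis client instrumentation):

```ruby
# Sketch: accumulate per-request Redis call counts and durations, then
# expose them as the fields added to the structured log payload.
module RedisInstrumentation
  @calls = 0
  @duration_ms = 0.0

  class << self
    def record(duration_ms)
      @calls += 1
      @duration_ms += duration_ms
    end

    def payload
      { redis_calls: @calls, redis_duration_ms: @duration_ms.round(1) }
    end
  end
end
```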
-
- 02 Mar, 2020 1 commit
-
-
Craig Furman authored
If these logs are sent to Elasticsearch, it will not be able to process nested object fields, as this causes a type mismatch with scalar elements in the same array across log lines. This is a second attempt, as the first (reverted) one modified the actual job object that was used by Sidekiq.
-
- 29 Feb, 2020 1 commit
-
-
Stan Hu authored
This reverts merge request !26075
-
- 28 Feb, 2020 1 commit
-
-
Craig Furman authored
If these logs are sent to Elasticsearch, it will not be able to process nested object fields, as this causes a type mismatch with scalar elements in the same array across log lines.
-
- 17 Feb, 2020 1 commit
-
-
Sean McGivern authored
Sidekiq stores a job's error details in the payload for the _next_ run, so that it can display the error in the Sidekiq UI. This is because Sidekiq's main state is the queue of jobs to be run. However, in our logs, this is very confusing, because we shouldn't have any error at all when a job starts, and we already add an error message and class to our logs when a job fails.
-
- 14 Feb, 2020 1 commit
-
-
Sean McGivern authored
We did this for Sidekiq arguments, but not for HTTP request params. We now do the same everywhere: Sidekiq arguments, Grape params, and Rails controller params. As the params start life as hashes, the order is defined by whatever's creating the hashes.
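A sketch of the general technique (the exact logged shape is an assumption): representing params as a list of uniform key/value string pairs gives every log line the same field types.

```ruby
# Sketch: convert a params hash into an array of key/value string pairs so
# the logged field has an identical type across log lines.
def loggable_params(params)
  params.map { |key, value| { "key" => key.to_s, "value" => value.to_s } }
end
```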
-
- 10 Jan, 2020 1 commit
-
-
Stan Hu authored
Previously when an exception occurred in Sidekiq, Sidekiq would export logs with timestamps (e.g. created_at, enqueued_at) in floating point seconds, while other jobs would report in ISO 8601 format. This inconsistency in data types would cause Elasticsearch to drop logs that did not match the schema type (date in most cases). This commit moves the responsibility of formatting timestamps to the Sidekiq JSON formatter where it properly belongs. The job logger now generates timestamps with floats, just as Sidekiq does. This ensures that timestamps are manipulated only in one place. See https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8269
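Centralizing the conversion might look like this (helper name hypothetical; Sidekiq's `created_at`/`enqueued_at` really are epoch floats):

```ruby
require 'time'

# Sketch: convert a Sidekiq epoch-float timestamp to ISO 8601 in one place
# (the JSON formatter), so every log line carries the same date type.
def format_timestamp(epoch_float)
  Time.at(epoch_float).utc.iso8601(3)
end
```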
-
- 07 Jan, 2020 1 commit
-
-
Sean McGivern authored
Sidekiq JSON logs have total duration, queuing time, Gitaly time, and CPU time. They don't (before this change) have database time. We provide two fields: db_duration and db_duration_s. That's because the units across the different duration fields are currently inconsistent and confusing; providing an explicitly suffixed field moves us toward consistent units, while keeping the raw figure in the un-suffixed field.
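A sketch of the dual-field payload (assuming the raw figure is in milliseconds, which is an assumption here):

```ruby
# Sketch: emit both the un-suffixed raw value and an explicitly-suffixed
# seconds value, so consumers never have to guess the unit.
def db_duration_fields(db_duration_ms)
  { db_duration: db_duration_ms, db_duration_s: (db_duration_ms / 1000.0).round(6) }
end
```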
-
- 17 Dec, 2019 1 commit
-
-
Aakriti Gupta authored
This is done to standardize timestamp format in log files
-
- 28 Oct, 2019 1 commit
-
-
Andrew Newdigate authored
Adds a Prometheus histogram, `sidekiq_jobs_queue_duration_seconds` for recording the duration that a Sidekiq job is queued for before being executed. This matches the scheduling_latency_s field emitted from structured logging for the same purpose.
-
- 11 Oct, 2019 1 commit
-
-
Qingyu Zhao authored
When measuring Sidekiq job CPU time usage, `Process.times` is wrong because it counts CPU time for all threads in the current Sidekiq process. Use `Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)` instead. Removed `system_s`, `user_s`, and `child_s`, since we cannot get these values for the job thread. Added `cpu_s`, the CPU time used by the job thread, including system time and user time.
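The measurement can be sketched like this (works on platforms that support `CLOCK_THREAD_CPUTIME_ID`, such as Linux):

```ruby
# Sketch: CLOCK_THREAD_CPUTIME_ID reports CPU time (user + system) consumed
# by the calling thread only, unlike Process.times, which aggregates every
# thread in the process.
def thread_cpu_time_s
  Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)
end

start = thread_cpu_time_s
100_000.times { |i| i * i } # burn a little CPU on this thread
cpu_s = (thread_cpu_time_s - start).round(6)
```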
-
- 23 Sep, 2019 1 commit
-
-
Stan Hu authored
As mentioned in https://github.com/mperham/sidekiq/wiki/Error-Handling, Sidekiq can be configured with an exception handler. We use this to log the exception in a structured way so that `correlation_id`, `class`, and other useful fields are available. The previous error backtrace in the `StructuredLogger` class did not provide useful information because Sidekiq swallows the exception and raises a `JobRetry::Skip` exception. Closes https://gitlab.com/gitlab-org/gitlab/issues/29425
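Registering such a handler looks roughly like this configuration fragment (Sidekiq does support `error_handlers`; the handler body here is illustrative, not GitLab's actual code):

```ruby
# Sketch: register a server-side exception handler so failures are logged
# with structured fields (class, message, error class) instead of relying
# on the backtrace printed after JobRetry::Skip is raised.
Sidekiq.configure_server do |config|
  config.error_handlers << lambda do |exception, context|
    Sidekiq.logger.warn(
      'class'       => context.dig(:job, 'class'),
      'message'     => exception.message,
      'error_class' => exception.class.name
    )
  end
end
```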
-
- 22 Aug, 2019 2 commits
-
-
Balakumar authored
-
Thong Kuah authored
Using the sed script from https://gitlab.com/gitlab-org/gitlab-ce/issues/59758
-
- 09 Aug, 2019 2 commits
-
-
Stan Hu authored
This will help identify Sidekiq jobs that invoke an excessive number of filesystem accesses. The timing data is stored in `RequestStore`, but this is only active within the middleware and is not directly accessible to the Sidekiq logger. However, it is possible for the middleware to modify the job hash to pass this data along to the logger.
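The hand-off can be sketched as a server middleware (class and field names are hypothetical):

```ruby
# Sketch: a Sidekiq server middleware copies timing data it collected into
# the job hash, which the job logger can still read after the middleware's
# RequestStore scope is gone.
class FilesystemInstrumentationMiddleware
  def call(_worker, job, _queue)
    yield
  ensure
    job['file_count']       = 3     # hypothetical values from RequestStore
    job['file_duration_ms'] = 12.5
  end
end
```

Usage: `FilesystemInstrumentationMiddleware.new.call(nil, job, 'default') { perform }` leaves the extra fields on `job` for the logger.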
-
Stan Hu authored
This value was reported as a negative number because `current_time` was a monotonic counter, not an absolute time. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65748
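The correct pattern pairs two readings from the same monotonic clock:

```ruby
# Sketch: take both readings from the monotonic clock; subtracting a
# monotonic reading from wall-clock Time.now mixes two unrelated clocks
# and can produce negative durations.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
sleep 0.01
elapsed_s = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
```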
-
- 31 Jul, 2019 1 commit
-
-
Andrew Newdigate authored
-
- 30 Jan, 2019 1 commit
-
-
Andrew Newdigate authored
Re-enables and autocorrects all instances of the Style/MethodCallWithoutArgsParentheses rule
-
- 29 Jan, 2019 1 commit
-
-
Andrew Newdigate authored
Re-enables and autocorrects all instances of the Style/MethodCallWithoutArgsParentheses rule
-
- 22 Jan, 2019 1 commit
-
-
Sean McGivern authored
When logging arguments from Sidekiq to JSON, restrict the size of `args` to 10 KB (when converted to JSON). This is to avoid blowing up with excessively large job payloads.
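One way to sketch the truncation (the placeholder and cut-off behavior are assumptions; only the 10 KB limit comes from the commit):

```ruby
require 'json'

# Sketch: keep arguments until their cumulative JSON size exceeds 10 KB,
# then replace the remainder with a marker string.
MAX_ARGS_BYTES = 10 * 1024

def limit_job_args(args, max_bytes = MAX_ARGS_BYTES)
  total = 0
  args.map do |arg|
    total += arg.to_json.bytesize
    total <= max_bytes ? arg : '...'
  end
end
```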
-
- 06 Dec, 2018 3 commits
-
-
Kamil Trzciński authored
This reverts commit 3560b119.
-
Kamil Trzciński authored
This changes `correlation_id` to be `correlation-id` when passed via jobs
-
Kamil Trzciński authored
The Correlation ID is taken or generated from the received X-Request-ID. It is then passed to all executed services (Sidekiq workers or Gitaly calls). The Correlation ID is logged in all structured logs as `correlation_id`.
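The take-or-generate step can be sketched as (helper name hypothetical):

```ruby
require 'securerandom'

# Sketch: reuse the incoming X-Request-ID as the correlation ID when
# present, otherwise generate one; the value then travels with every
# downstream Sidekiq job and Gitaly call.
def correlation_id(request_id)
  request_id.to_s.empty? ? SecureRandom.uuid : request_id
end
```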
-
- 04 Apr, 2018 1 commit
-
-
Stan Hu authored
Closes #20060
-