  1. 27 Apr, 2020 1 commit
    •
      Use microseconds precision for log timings · 5c2a5394
      Oswaldo Ferreira authored
      A few Go service timings already have 6-decimal (microsecond)
      precision, so it makes more sense for all existing *_duration_s
      fields on Rails/API/Sidekiq to use 6 decimal places instead of 2,
      and that's what we accomplish here.
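A minimal sketch of the change in spirit (the helper name is hypothetical, not the actual GitLab code): durations rounded to 6 decimal places instead of 2.

```ruby
# Hypothetical helper illustrating the precision change: durations are
# rounded to 6 decimal places (microseconds) instead of the old 2.
def duration_s(started_at, finished_at)
  (finished_at - started_at).round(6)
end

# With 2 decimals a 123-microsecond call would log as 0.0;
# with 6 decimals it survives as 0.000123.
duration_s(1.0, 1.000123456)  # => 0.000123
```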
  2. 16 Apr, 2020 1 commit
  3. 07 Apr, 2020 1 commit
  4. 23 Mar, 2020 1 commit
  5. 02 Mar, 2020 1 commit
    •
      Stringify sidekiq job args in logs · 6c041a61
      Craig Furman authored
      If these logs are sent to Elasticsearch, it will not be able to process
      nested object fields, as this causes a type mismatch with scalar
      elements in the same array across log lines.
      
      This is a second attempt, as the first (reverted) one modified the
      actual job object that was used by sidekiq.
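A hypothetical sketch of the fix: build a stringified copy of the arguments for the log line only, leaving the job hash that Sidekiq executes untouched (mutating it is what got the first attempt reverted).

```ruby
# Stringify args for logging on a copy; the original job hash that
# Sidekiq will execute is left untouched.
def loggable_args(job)
  job['args'].map(&:to_s)
end

job = { 'class' => 'SomeWorker', 'args' => [42, { 'nested' => true }] }
log_payload = job.merge('args' => loggable_args(job))

log_payload['args'].all? { |a| a.is_a?(String) }  # => true
job['args'].first                                 # => 42 (unchanged)
```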
  6. 29 Feb, 2020 1 commit
  7. 28 Feb, 2020 1 commit
    •
      Stringify sidekiq job args in logs · ec3f228c
      Craig Furman authored
      If these logs are sent to Elasticsearch, it will not be able to process
      nested object fields, as this causes a type mismatch with scalar
      elements in the same array across log lines.
  8. 17 Feb, 2020 1 commit
    •
      Omit previous error from Sidekiq JSON logs · 5c030795
      Sean McGivern authored
      Sidekiq stores a job's error details in the payload for the _next_ run,
      so that it can display the error in the Sidekiq UI. This is because
      Sidekiq's main state is the queue of jobs to be run. However, in our
      logs, this is very confusing, because we shouldn't have any error at all
      when a job starts, and we already add an error message and class to our
      logs when a job fails.
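A hypothetical sketch of the idea: strip the previous run's error details (which Sidekiq stores in the payload for its UI) before logging a job's start, since a fresh run has not failed yet. The key names follow Sidekiq's retry payload; treat them as illustrative.

```ruby
# Keys Sidekiq adds to the payload after a failure, for the next run.
ERROR_KEYS = %w[error_message error_class error_backtrace failed_at retried_at].freeze

# Drop the stale error keys from the payload we log at job start.
def start_payload(job)
  job.reject { |key, _| ERROR_KEYS.include?(key) }
end

job = { 'class' => 'SomeWorker', 'jid' => 'abc123',
        'error_message' => 'boom', 'error_class' => 'RuntimeError' }
start_payload(job).keys  # => ["class", "jid"]
```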
  9. 14 Feb, 2020 1 commit
    •
      Limit size of params array in JSON logs to 10 KiB · f2d677ac
      Sean McGivern authored
      We did this for Sidekiq arguments, but not for HTTP request params. We
      now do the same everywhere: Sidekiq arguments, Grape params, and Rails
      controller params. As the params start life as hashes, the order is
      defined by whatever's creating the hashes.
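A hypothetical sketch of the limit: accumulate params until their serialized size passes 10 KiB, then truncate with a marker element. The cutoff constant and marker string here are illustrative, not the actual GitLab values.

```ruby
require 'json'

PARAMS_LIMIT_BYTES = 10 * 1024 # 10 KiB

# Keep params until their cumulative JSON size exceeds the limit,
# then append a truncation marker and stop.
def limited_params(params)
  bytes = 0
  out = []
  params.each do |param|
    bytes += param.to_json.bytesize
    if bytes > PARAMS_LIMIT_BYTES
      out << '[TRUNCATED]'
      break
    end
    out << param
  end
  out
end

limited_params([{ 'key' => 'id', 'value' => 1 }]).size              # => 1
limited_params([{ 'key' => 'blob', 'value' => 'x' * 20_000 }]).last # => "[TRUNCATED]"
```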
  10. 10 Jan, 2020 1 commit
    •
      Make Sidekiq timestamps consistently ISO 8601 · f89237f0
      Stan Hu authored
      Previously when an exception occurred in Sidekiq, Sidekiq would export
      logs with timestamps (e.g. created_at, enqueued_at) in floating point
      seconds, while other jobs would report in ISO 8601 format. This
      inconsistency in data types would cause Elasticsearch to drop logs that
      did not match the schema type (date in most cases).
      
      This commit moves the responsibility of formatting timestamps to the
      Sidekiq JSON formatter where it properly belongs. The job logger now
      generates timestamps with floats, just as Sidekiq does. This ensures
      that timestamps are manipulated only in one place.
      
      See https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8269
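A hypothetical sketch of the formatter-side conversion: float epoch timestamps are turned into ISO 8601 strings in one place, so every log line ships the same type for date fields.

```ruby
require 'time'

# Convert Sidekiq's float epoch timestamps to ISO 8601 strings
# just before the payload is serialized to JSON.
def format_timestamps!(payload)
  %w[created_at enqueued_at].each do |key|
    value = payload[key]
    payload[key] = Time.at(value).utc.iso8601(3) if value.is_a?(Numeric)
  end
  payload
end

# 'enqueued_at' becomes "2020-01-10T00:00:00.250Z"
format_timestamps!('enqueued_at' => 1578614400.25)
```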
  11. 07 Jan, 2020 1 commit
    •
      Add database timings to Sidekiq JSON logs · f248f182
      Sean McGivern authored
      Sidekiq JSON logs have total duration, queuing time, Gitaly time, and
      CPU time. They don't (before this change) have database time.
      
      We provide two fields: db_duration and db_duration_s. The units of
      the different duration fields are currently inconsistent, so the
      explicit _s suffix moves us closer to unambiguous units, while the
      un-suffixed field keeps the raw figure.
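A hypothetical sketch, assuming the raw database figure arrives in milliseconds (an assumption, not confirmed by the commit message): the un-suffixed field keeps the raw number, while the _s field makes the unit explicit.

```ruby
# Emit both the raw figure (assumed milliseconds here) and an
# explicitly second-suffixed field, as the commit message describes.
def db_duration_fields(raw_ms)
  {
    'db_duration'   => raw_ms,
    'db_duration_s' => (raw_ms / 1000.0).round(6)
  }
end

db_duration_fields(1234.5)['db_duration_s']  # => 1.2345
```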
  12. 17 Dec, 2019 1 commit
  13. 28 Oct, 2019 1 commit
    •
      Adds a Sidekiq queue duration metric · e1cbaf47
      Andrew Newdigate authored
      Adds a Prometheus histogram, `sidekiq_jobs_queue_duration_seconds` for
      recording the duration that a Sidekiq job is queued for before being
      executed.
      
      This matches the scheduling_latency_s field emitted from structured
      logging for the same purpose.
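A hypothetical sketch of the observed value: the gap between a job being enqueued and being picked up, matching the scheduling_latency_s field (the helper name is illustrative).

```ruby
# Queue duration in seconds: time between enqueue and pickup.
# Returns nil when the job carries no enqueued_at timestamp.
def queue_duration_s(job, now)
  enqueued_at = job['enqueued_at']
  return nil unless enqueued_at

  (now - enqueued_at).round(6)
end

queue_duration_s({ 'enqueued_at' => 100.0 }, 100.25)  # => 0.25
queue_duration_s({}, 100.25)                          # => nil
```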
  14. 11 Oct, 2019 1 commit
    •
      Fix Sidekiq job CPU time · 717f159b
      Qingyu Zhao authored
      When measuring Sidekiq job CPU time, `Process.times` is wrong
      because it counts CPU time for all threads in the current Sidekiq
      process. Use `Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)`
      instead.
      
      Removed `system_s`, `user_s`, and `child_s`, since we cannot get
      these values for the job thread. Added `cpu_s`: the CPU time used
      by the job thread, including both system and user time.
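A sketch of the measurement described above: `CLOCK_THREAD_CPUTIME_ID` reports CPU time for the calling thread only, so other threads in the Sidekiq process no longer inflate the figure the way `Process.times` did.

```ruby
# Per-thread CPU time (system + user) for the calling thread only.
def thread_cpu_s
  Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)
end

before = thread_cpu_s
50_000.times { Math.sqrt(2) }  # burn some CPU on this thread
cpu_s = (thread_cpu_s - before).round(6)
cpu_s >= 0  # => true
```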
  15. 23 Sep, 2019 1 commit
  16. 22 Aug, 2019 2 commits
  17. 09 Aug, 2019 2 commits
  18. 31 Jul, 2019 1 commit
  19. 30 Jan, 2019 1 commit
  20. 29 Jan, 2019 1 commit
  21. 22 Jan, 2019 1 commit
  22. 06 Dec, 2018 3 commits
  23. 04 Apr, 2018 1 commit