05 Oct, 2016 (2 commits)
    • Version 0.8.3 · 1ce06acc
      Jacob Vosmaer authored
    • Merge branch 'queue-requests' into 'master' · f3f03271
      Jacob Vosmaer (GitLab) authored
      Allow queueing of API requests and limit the capacity given to them
      
      This MR implements API request queueing on the Workhorse side.
      It is meant to give better control over the capacity given to different resources.
      
      This is meant to solve: https://gitlab.com/gitlab-com/infrastructure/issues/320.
      
      It is also meant to make a large number of requests easier to handle: https://gitlab.com/gitlab-org/gitlab-ce/issues/21698
      
      It fulfils these requirements:
      - limit the capacity given to the API, specifically by processing at most N requests at a time,
      - queue API requests and time them out, which slows down the processing of API calls when Unicorn cannot handle the current API requests in a reasonable time.
      
      The implementation has constant cost and is dead simple.
      It should not inflate the memory or CPU usage of Workhorse.
      
      It works like this (see the sketch after this list):
      - we hook into the processing of requests,
      - we try to acquire a slot for the request by pushing to a buffered channel; the channel's capacity limits how many requests are processed at a time,
      - if we cannot push to the channel, all concurrency slots are in use and we have to wait,
      - we block on the buffered channel until a slot frees up, while also waiting on a timer,
      - if the timer fires first, we return 502,
      - if we manage to push to the channel, we process the request,
      - when we finish processing the request we pop from the channel, allowing other requests to proceed,
      - if there are already too many requests waiting (over `apiQueueLimit`), we return 429.
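
      A minimal Go sketch of this mechanism, written as a `net/http` middleware. The identifiers (`queueRequests`, `queue`, the error bodies) are illustrative assumptions, not the actual Workhorse names:

      ```go
      package queueing

      import (
          "net/http"
          "time"
      )

      // queue tracks how many requests are being processed ("busy" slots) and
      // how many are allowed in the system at all (busy + waiting).
      type queue struct {
          busy    chan struct{} // capacity = limit
          waiting chan struct{} // capacity = limit + queueLimit
      }

      // queueRequests wraps h so that at most `limit` requests run concurrently
      // and at most `queueLimit` additional requests wait up to `timeout` for a
      // free slot.
      func queueRequests(h http.Handler, limit, queueLimit int, timeout time.Duration) http.Handler {
          q := &queue{
              busy:    make(chan struct{}, limit),
              waiting: make(chan struct{}, limit+queueLimit),
          }

          return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              // Too many requests already in the system (over the queue limit): 429.
              select {
              case q.waiting <- struct{}{}:
                  defer func() { <-q.waiting }()
              default:
                  http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
                  return
              }

              // Block until a processing slot frees up, or give up after the
              // queue timeout: 502.
              select {
              case q.busy <- struct{}{}:
                  defer func() { <-q.busy }()
              case <-time.After(timeout):
                  http.Error(w, "Bad Gateway", http.StatusBadGateway)
                  return
              }

              h.ServeHTTP(w, r)
          })
      }
      ```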
      
      This introduces three extra parameters (off by default):
      - `apiLimit` - the maximum number of API requests processed concurrently,
      - `apiQueueLimit` - the maximum backlog of API requests waiting for a free slot,
      - `apiQueueTimeout` - how long a request may sit in the queue before it times out.
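
      A hedged sketch of how these parameters might be exposed as command-line flags; the actual Workhorse flag names, types, and defaults may differ:

      ```go
      package main

      import (
          "flag"
          "fmt"
          "time"
      )

      // Illustrative flag definitions. A zero value would presumably leave the
      // API handler unwrapped, keeping the feature off by default.
      var (
          apiLimit        = flag.Uint("apiLimit", 0, "Max number of API requests processed at once (0 = no limit)")
          apiQueueLimit   = flag.Uint("apiQueueLimit", 0, "Max number of API requests allowed to wait for a free slot")
          apiQueueTimeout = flag.Duration("apiQueueTimeout", 30*time.Second, "How long a queued API request may wait")
      )

      func main() {
          flag.Parse()
          fmt.Printf("apiLimit=%d apiQueueLimit=%d apiQueueTimeout=%v\n",
              *apiLimit, *apiQueueLimit, *apiQueueTimeout)
      }
      ```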
      
      This makes it possible to:
      - limit the capacity used by the API to a share of the available workers, e.g. allow the API to use at most 25% of capacity (see the example below),
      - keep processing requests, just more slowly, when the backend is slow,
      - manage API calls better than rate limiting requests would,
      - automatically back off all services using the API by slowing them down.
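
      For example, on a host running 16 Unicorn workers, capping the API at 4 concurrent requests would reserve at most 25% of the workers for API traffic. A hypothetical wiring using the `queueRequests` sketch above (the handler names are made up):

      ```go
      // 16 Unicorn workers behind Workhorse: apiLimit=4 (25% of capacity),
      // up to 100 queued API requests, 30-second queue timeout.
      apiHandler := queueRequests(upstreamAPIHandler, 4, 100, 30*time.Second)
      http.Handle("/api/", apiHandler)
      ```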
      
      
      See merge request !65