1. 10 Jan, 2024 5 commits
    • fixup! NXD blob/auth: Teach it to handle HTTP Basic Auth too · 56b57090
      Kirill Smelkov authored
      @rafael approached me and asked why URLs like
      
      	https://gitlab-ci-token:XXX@hostname/group/project/raw/master/file
      
      work with curl, but not in Chrome for AJAX requests.
      
      After investigation it turned out that such URLs do not work with wget
      either, giving a 302 redirect to http://localhost:8080/users/sign_in:
      
      	kirr@deco:~$ wget https://gitlab-ci-token:XXX@lab.nexedi.com/kirr/test/raw/master/hello.txt
      	--2018-06-04 13:14:04--  https://gitlab-ci-token:*password*@lab.nexedi.com/kirr/test/raw/master/hello.txt
      	Resolving lab.nexedi.com (lab.nexedi.com)... 176.31.129.213, 85.118.38.162
      	Connecting to lab.nexedi.com (lab.nexedi.com)|176.31.129.213|:443... connected.
      	HTTP request sent, awaiting response... 302 Found
      	Location: http://localhost:8080/users/sign_in [following]
      	--2018-06-04 13:14:04--  http://localhost:8080/users/sign_in
      	Resolving localhost (localhost)... 127.0.0.1, ::1
      	Connecting to localhost (localhost)|127.0.0.1|:8080... failed: Connection refused.
      	Connecting to localhost (localhost)|::1|:8080... failed: Connection refused.
      
      This turned out to be because most clients (in accordance with RFC 2617 /
      RFC 7617) first send the request without the Authorization header and
      retry with that header only if the server challenges them to(*), while
      our authorization code handled HTTP Basic auth only when the
      Authorization header was already provided, without ever issuing a
      challenge on the server side.
      
      Fix it by checking the Rails backend reply for 302, which it gives for
      unauthorized non-raw requests, and converting it on our side into an
      HTTP Basic auth challenge if the raw request does not contain any token.
      This way user:password in URLs now works for both wget and Chrome.
      
      If any token was provided we leave the Rails auth response as is, because
      we handle user/password only for the "no token provided at all" case.
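
      A minimal sketch in Go of that conversion (function and parameter names
      here are illustrative, not workhorse's actual API):

        package main

        import "net/http"

        // Sketch only: turn the Rails backend's 302 "sign in" redirect into an
        // HTTP Basic auth challenge when the raw request carries neither a
        // token nor credentials yet.
        func maybeChallengeBasicAuth(w http.ResponseWriter, r *http.Request, authStatus int) bool {
            if authStatus != http.StatusFound {
                return false // backend did not redirect; nothing to convert
            }
            if r.URL.Query().Get("private_token") != "" {
                return false // a token was provided; keep the Rails response as is
            }
            if _, _, ok := r.BasicAuth(); ok {
                return false // user:password were already sent and rejected
            }
            w.Header().Set("WWW-Authenticate", `Basic realm="GitLab"`)
            w.WriteHeader(http.StatusUnauthorized)
            return true // the client will retry with the Authorization header set
        }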
      
      (*) see https://en.wikipedia.org/wiki/Basic_access_authentication for an overview.
      /cc @alain.takoudjou, @jerome
      
      /reviewed-on !2
    • fixup! NXD blob/auth: Teach it to handle HTTP Basic Auth too · 5fddfaff
      Kirill Smelkov authored
      Adjust the test because the download-archive format has changed (see the
      fixup to the first patch in the nxd series), while `git fetch` expects
      the old format.
    • NXD blob/auth: Teach it to handle HTTP Basic Auth too · bff38c87
      Kirill Smelkov authored
      [ Not sent upstream.

        The patch was not sent upstream, because the previous 2 raw blob
        patches were not accepted (see details there).

        OTOH it is very handy in the SlapOS environment to use CI token auth
        for raw downloads, so we just carry it with us as NXD. ]
      
      There are cases when using user:password for /raw/... access is handy:

      - when using the query string for auth (private_token) is not convenient
        for some reason (e.g. the client software does not handle query
        strings well when generating URLs)

      - when we do not want to set up many artificial users and use their
        tokens, but instead just use the per-project, automatically set up

          gitlab-ci-token : <ci-token>

        artificial user & "password", which the auth backend already handles
        for `git fetch` requests.
      
      Handling is easy: if the main auth backend rejects access and there is
      user:password in the original request, we retry asking the auth backend
      the same way a `git fetch` request would (see the sketch below).

      Access is granted if either of the two ways of asking the auth backend
      succeeds. This way both private tokens / cookies and HTTP auth are
      supported.
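
      A rough sketch of that retry logic (the askBackend callback is
      hypothetical; the real code talks to the auth backend over its internal
      API):

        package main

        import "net/http"

        // Sketch: first ask the auth backend the normal way; if that fails and
        // the request carries user:password, ask again the way a `git fetch`
        // request would, so the gitlab-ci-token:<ci-token> pair is checked too.
        func authorizeRaw(r *http.Request, askBackend func(r *http.Request, asGitFetch bool) bool) bool {
            if askBackend(r, false) {
                return true // private token / cookie auth succeeded
            }
            if _, _, hasBasic := r.BasicAuth(); hasBasic {
                return askBackend(r, true) // retry as a git-fetch style request
            }
            return false
        }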
    • NXD blob/auth: Cache auth backend reply for 30s · bcc21f3e
      Kirill Smelkov authored
      [ Sent upstream: https://gitlab.com/gitlab-org/gitlab-workhorse/merge_requests/17

        This patch was sent upstream but was not accepted because of the
        "complexity" of the auth cache, despite it providing more than an
        order of magnitude speedup. Just carry it with us as NXD ]
      
      In the previous patch we added code to serve blob content by running
      `git cat-file ...` directly, but for every such request a request is
      made to the slow RoR-based auth backend, which is bad for performance.
      
      Let's cache the auth backend reply for a small period of time, e.g. 30
      seconds, which changes the situation dramatically:

      If we have a lot of requests to the same repository, we query the auth
      backend only once per N requests; with e.g. 100 raw blob requests/s and
      a 30-second cache, N = 100 * 30 = 3000, which means the previous load on
      the RoR code essentially goes away.

      On the other hand, as we still query the auth backend once in a while
      and refresh the cache, we will not miss changes in project settings: an
      e.g. 25-second delay for a project to become public, or vice versa to
      become private, does no real harm.
      
      The cache is designed so that the read-side codepath can execute in
      parallel and is not blocked by concurrent cache updates (see the sketch
      below).
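
      A small sketch of that idea (illustrative, not the actual workhorse
      code): readers load the cached reply without taking a write lock, and
      only one request at a time refreshes an expired entry:

        package main

        import (
            "sync"
            "sync/atomic"
            "time"
        )

        type cachedReply struct {
            allowed bool
            at      time.Time
        }

        type authCache struct {
            ttl        time.Duration
            entry      atomic.Value // always holds a cachedReply
            refreshing sync.Mutex
        }

        // Allowed returns the cached auth decision, asking the backend via
        // query() only when the cached entry is missing or older than ttl.
        func (c *authCache) Allowed(query func() bool) bool {
            if e, ok := c.entry.Load().(cachedReply); ok && time.Since(e.at) < c.ttl {
                return e.allowed // fast path: no backend round-trip, no blocking
            }
            c.refreshing.Lock()
            defer c.refreshing.Unlock()
            if e, ok := c.entry.Load().(cachedReply); ok && time.Since(e.at) < c.ttl {
                return e.allowed // refreshed by another request while we waited
            }
            allowed := query() // ask the RoR auth backend
            c.entry.Store(cachedReply{allowed: allowed, at: time.Now()})
            return allowed
        }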
      
      Overall this improves performance a lot:
      
        (on an 8-CPU i7-3770S with 16GB of RAM, 2001:67c:1254:e:8b::c776 is on localhost)
      
        # request is handled by gitlab-workhorse, but without auth caching
        $ ./wrk -c40 -d10 -t1 --latency http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
        Running 10s test @ http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
          1 threads and 40 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   458.42ms   66.26ms 766.12ms   84.76%
            Req/Sec    85.38     16.59   120.00     82.00%
          Latency Distribution
             50%  459.26ms
             75%  490.09ms
             90%  523.95ms
             99%  611.33ms
          853 requests in 10.01s, 1.51MB read
        Requests/sec:     85.18
        Transfer/sec:    154.90KB
      
        # request goes to gitlab-workhorse with auth caching (this patch)
        $ ./wrk -c40 -d10 -t1 --latency http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
        Running 10s test @ http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
          1 threads and 40 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    34.52ms   19.28ms 288.63ms   74.74%
            Req/Sec     1.20k   127.21     1.39k    85.00%
          Latency Distribution
             50%   32.67ms
             75%   42.73ms
             90%   56.26ms
             99%   99.86ms
          11961 requests in 10.01s, 21.24MB read
        Requests/sec:   1194.51
        Transfer/sec:      2.12MB
      
      i.e. it is a ~14x improvement.
    • NXD Teach gitlab-workhorse to serve requests to get raw blobs · 82045ae5
      Kirill Smelkov authored
      [ Sent upstream: https://gitlab.com/gitlab-org/gitlab-workhorse/merge_requests/17
      
        This patch was sent upstream but was not accepted because of the
        "complexity" of the auth cache (next patch), despite it providing more
        than an order of magnitude speedup. Just carry it with us as NXD ]
      
      Currently GitLab serves requests to get raw blobs via Ruby-on-Rails code
      and Unicorn. Because RoR/Unicorn is relatively heavyweight, in an
      environment with a lot of simultaneous requests for raw blobs this is
      very slow and the server is constantly overloaded.
      
      On the other hand, to get raw blob content we do not need anything from
      the RoR framework - we only need access to the project's git repository
      on the filesystem, and to know whether access for getting data from
      there should be granted or not. That means it is possible to handle
      '.../raw/....' requests directly in the more lightweight and performant
      gitlab-workhorse.
      
      As gitlab-workhorse is written in Go, and Go has good
      concurrency/parallelism support and is generally much faster than Ruby,
      moving the raw blob serving task to it makes sense and should be a net
      win.
      
      In this patch we add infrastructure to process GET requests for
      '/raw/...' (a rough sketch of these steps follows the list below):
      
      - extract project / ref and path from URL
      - query auth backend for whether download access should be granted or not
      - emit blob content via spawning external `git cat-file`
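
      A simplified sketch of those three steps (illustrative names, minimal
      error handling; the real handler also mimics the headers emitted by the
      RoR code):

        package main

        import (
            "fmt"
            "net/http"
            "os/exec"
            "strings"
        )

        // handleRawBlob serves ".../raw/<ref>/<path>" by spawning `git cat-file`.
        // Splitting <ref> from <path> is oversimplified here (refs may contain
        // slashes), and the auth backend query is only marked as a comment.
        func handleRawBlob(w http.ResponseWriter, r *http.Request, gitDir string) {
            i := strings.Index(r.URL.Path, "/raw/")
            if i < 0 {
                http.NotFound(w, r)
                return
            }
            parts := strings.SplitN(r.URL.Path[i+len("/raw/"):], "/", 2)
            if len(parts) != 2 {
                http.NotFound(w, r)
                return
            }
            // ... query the auth backend for download access here ...
            cmd := exec.Command("git", "--git-dir="+gitDir, "cat-file", "blob",
                fmt.Sprintf("%s:%s", parts[0], parts[1]))
            cmd.Stdout = w // stream blob content straight to the response
            if err := cmd.Run(); err != nil {
                http.Error(w, "blob not found", http.StatusNotFound)
            }
        }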
      
      I've tried to mimic the output to be as close as possible to the one
      emitted by the RoR code, with the idea that for users the change should
      be transparent.
      
      As in this patch we query the auth backend for every blob request, the
      RoR code is still heavily loaded, so essentially there is no speedup yet:
      
        (on an 8-CPU i7-3770S with 16GB of RAM, 2001:67c:1254:e:8b::c776 is on localhost)
      
        # without patch: request eventually goes to unicorn  (9 unicorn workers)
        $ ./wrk -c40 -d10 -t1 --latency http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
        Running 10s test @ http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
          1 threads and 40 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   461.16ms   63.44ms 809.80ms   84.18%
            Req/Sec    84.84     17.02   131.00     80.00%
          Latency Distribution
             50%  460.21ms
             75%  492.83ms
             90%  524.67ms
             99%  636.49ms
          847 requests in 10.01s, 1.57MB read
        Requests/sec:     84.64
        Transfer/sec:    161.10KB
      
        # with this patch: request handled by gitlab-workhorse
        $ ./wrk -c40 -d10 -t1 --latency http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
        Running 10s test @ http://[2001:67c:1254:e:8b::c776]:7777/nexedi/slapos/raw/master/software/wendelin/software.cfg
          1 threads and 40 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   458.42ms   66.26ms 766.12ms   84.76%
            Req/Sec    85.38     16.59   120.00     82.00%
          Latency Distribution
             50%  459.26ms
             75%  490.09ms
             90%  523.95ms
             99%  611.33ms
          853 requests in 10.01s, 1.51MB read
        Requests/sec:     85.18
        Transfer/sec:    154.90KB
      
      In the next patch we'll cache the auth backend replies and that will
      improve performance dramatically.
      
      NOTE 20160228: there is internal/git/blob.go trying to get raw data via
          gitlab-workhorse, but still asking Unicorn about the blob->sha1
          mapping etc. That work started in

              86aaa133 (Prototype blobs via workhorse, @jacobvosmaer)

          and was inspired by this patch. It diverges from what we can do if
          we serve all blob data just from gitlab-workhorse (see next patch),
          so we avoid git/blob.go, put our stuff into git/xblob.go and tweak
          the routes, essentially deactivating the git/blob.go code.
  2. 04 Jun, 2020 8 commits
  3. 26 May, 2020 2 commits
  4. 22 May, 2020 1 commit
  5. 30 Apr, 2020 1 commit
  6. 07 Apr, 2020 3 commits
  7. 04 Apr, 2020 2 commits
  8. 03 Apr, 2020 3 commits
  9. 02 Apr, 2020 3 commits
  10. 01 Apr, 2020 1 commit
  11. 31 Mar, 2020 3 commits
  12. 30 Mar, 2020 1 commit
    • Bump Labkit version · 837c5ae7
      Oswaldo Ferreira authored
      This version bump refers to fac94cb42 in order to
      support Go Continuous Profiling with versioning.
      
      I.e. Workhorse will provide its build version to
      the profiler and it will be presented in the
      Stackdriver Profiler UI.
  13. 27 Mar, 2020 1 commit
  14. 26 Mar, 2020 1 commit
  15. 25 Mar, 2020 1 commit
  16. 23 Mar, 2020 4 commits
    • Merge branch 'security-193100-ignore-duplicate-multipart-params' into 'master' · 7168c2e3
      Alessio Caiazza authored
      Reject parameters that override upload fields
      
      See merge request gitlab-org/security/gitlab-workhorse!3
    • Release v8.28.0 · 3fbf8ef2
      Alessio Caiazza authored
    • Reject parameters that override upload fields · 7c324521
      Markus Koller authored
      When Workhorse intercepts file uploads, we store the files and send the
      information about the temporary file in new multipart form values called
      `file.path`, `file.size` etc.
      
      Since we're also copying all other multipart form values from the
      original client request, it was possible to override the values we
      set in Workhorse, causing Rails to e.g. load the uploaded file from
      an injected `file.path` parameter.
      
      To avoid this, we check whether any client parameter has the same name
      as one of our own added fields and reject the request if so (a sketch of
      the check follows below).
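
      A minimal sketch of such a check (simplified: it matches on the injected
      field-name suffixes; the real code compares against the exact field
      names Workhorse itself added):

        package main

        import "strings"

        // isReservedUploadParam reports whether a client-supplied multipart
        // field name collides with a field Workhorse injects itself, such as
        // "file.path" or "file.size"; such requests are rejected.
        func isReservedUploadParam(name string) bool {
            for _, suffix := range []string{".path", ".size", ".name", ".remote_url", ".remote_id"} {
                if strings.HasSuffix(name, suffix) {
                    return true
                }
            }
            return false
        }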
    • Always set internally used upload fields · 75a39b0b
      Markus Koller authored
      The `path` and `remote_*` fields are not always set by Workhorse,
      depending on the storage type, but they are still picked up in Rails.

      To avoid a client injecting params with the same name, we just set these
      fields to empty strings (see the sketch below).
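
      A small sketch of the idea (the attribute set and helper are assumptions
      following the "<upload>.<attr>" naming from the previous commit, not the
      real Workhorse code):

        package main

        import "mime/multipart"

        // writeInternalFields always writes the internally used fields for an
        // upload, using "" when the storage type provides no value, so that a
        // client part with the same name is caught as an override.
        func writeInternalFields(w *multipart.Writer, uploadName string, values map[string]string) error {
            for _, attr := range []string{"path", "remote_url", "remote_id"} {
                if err := w.WriteField(uploadName+"."+attr, values[attr]); err != nil {
                    return err
                }
            }
            return nil
        }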