1. 27 May, 2016 1 commit
  2. 24 May, 2016 1 commit
  3. 23 May, 2016 2 commits
  4. 12 May, 2016 2 commits
  5. 22 Apr, 2016 1 commit
  6. 21 Apr, 2016 2 commits
  7. 20 Apr, 2016 1 commit
  8. 19 Apr, 2016 2 commits
  9. 13 Apr, 2016 1 commit
  10. 01 Apr, 2016 1 commit
  11. 22 Mar, 2016 1 commit
  12. 01 Mar, 2016 1 commit
  13. 24 Feb, 2016 1 commit
  14. 21 Feb, 2016 1 commit
  15. 15 Feb, 2016 1 commit
  16. 12 Jan, 2016 1 commit
    • Sebastien Robin
      pbs: fixed condition of evaluation of rdiff backup status · 2e4408b9
      Initially, the condition checking the rdiff-backup status came right
      after the call. But then a check on CORRUPTED_ARGS was inserted in
      between, so $? no longer held the value initially expected. Due to
      this, the cleanup of old versions was never done, and the backup
      quickly became far too fat. Thus, clearly define a variable holding
      the rdiff-backup status so the check sees the expected value.
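      The fix is the same in any language: save the exit status the moment
      the backup command returns. A minimal sketch in Python (the actual
      pbs script is shell; `false` and `true` are stand-ins for the
      rdiff-backup call and the later CORRUPTED_ARGS check):

```python
import subprocess

# "false" stands in for a failing rdiff-backup run
backup_status = subprocess.run(["false"]).returncode  # captured immediately
subprocess.run(["true"])  # an intervening check can no longer hide it
if backup_status == 0:
    print("remove old increments")
else:
    print("backup failed; keep everything")
```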
  17. 17 Dec, 2015 1 commit
  18. 09 Dec, 2015 2 commits
    • Kirill Smelkov
      slapos/recipe/postgresql: Do not leave half-installed postgresql instance · b7f00def
      In case there are errors when creating the cluster or setting up its
      configuration files, we currently leave the pgsql database
      half-installed, and the next instantiation runs do nothing, because
      os.path.exists(pgdata) is already true.
      I personally hit this situation by providing the ipv4 and ipv6
      parameters as strings: the recipe wanted to do `ipv4.join(ipv6)`, but
      this works only for sets and raises for strings.
      What is worse, the above error becomes hidden in our default setup:
      webrunner tries instantiation _several_ times, and on the second run
      instantiation succeeds, because the pgdata directory already exists,
      the recipe thinks there is nothing to do, _and_ webrunner has already
      removed instance.log from the previous run.
      So do not hide errors: if we see there are problems, remove the whole
      freshly created pgsql database directory.
      /cc @kazuhiko, @jerome
      /proposed-for-review on !29
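      The shape of the fix can be sketched like this (hypothetical helper
      names; the real recipe runs initdb and writes configuration files):

```python
import os, shutil, tempfile

def create_cluster(pgdata):
    # hypothetical stand-in: fails after having done partial work
    os.makedirs(pgdata)
    raise RuntimeError("simulated error while writing configuration")

def install(pgdata):
    # on any error, remove the partially created PGDATA so the next run
    # does not see os.path.exists(pgdata) == True and silently do nothing
    try:
        create_cluster(pgdata)
    except Exception:
        if os.path.exists(pgdata):
            shutil.rmtree(pgdata)
        raise

pgdata = os.path.join(tempfile.mkdtemp(), "pgdata")
try:
    install(pgdata)
except RuntimeError:
    pass
print(os.path.exists(pgdata))  # False: no half-installed leftover
```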
    • Cédric Le Ninivin
  19. 07 Dec, 2015 3 commits
  20. 27 Nov, 2015 1 commit
  21. 25 Nov, 2015 3 commits
    • Rafael Monnerat
    • Kirill Smelkov
      check-url: Quote $URL in -z check · c1ecf017
      If one wants to check URLs over UNIX sockets, curl has no full URL
      scheme for this; the following has to be used instead:
          curl --unix-socket /path/to/socket http:/<url-path>
      For this to work, one can do e.g. the following trick:
          recipe  = slapos.cookbook:check_url_available
          url     = --unix-socket ${unicorn:socket}  http:/
      but then the generated promise script fails this way:
          ./etc/promise/unicorn: line 7: [: too many arguments
      By quoting $URL in the emptiness check we can support both usual URLs
      and URLs with the --unix-socket trick prepended.
      /reviewed-by @cedric.leninivin  (on !31)
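      The difference the quotes make can be reproduced outside the promise
      script; here via Python's subprocess for a self-contained demo (the
      URL value is the trick from above):

```python
import subprocess

script = 'URL="--unix-socket /path/to/socket http:/"; [ -z %s ]'
# unquoted: $URL expands to three words and the test builtin errors out
unquoted = subprocess.run(["sh", "-c", script % "$URL"])
# quoted: a single non-empty string, so -z is simply false
quoted = subprocess.run(["sh", "-c", script % '"$URL"'])
print(unquoted.returncode)  # an error status (> 1)
print(quoted.returncode)    # 1
```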
    • Kirill Smelkov
      check-url: Allow to specify expected HTTP code · 35024175
      In the GitLab SR, a service I need to check, gitlab-workhorse,
      returns 200 only when the request reaches some repository and the
      authentication backend allows the access.
      Requiring repository access just to check that the service is alive
      is not very good: the auth backend itself can be down, and initially
      there are no repositories at all. So gitlab-workhorse is checked for
      liveness by pinging it with a non-existing URL and expecting 403.
      For this to work we need to allow clients to specify the expected
      HTTP code instead of the previously hardcoded 200 (which still
      remains the default).
      /reviewed-by @cedric.leninivin  (on !31)
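      The essence of the change, sketched in plain Python (the real promise
      script drives curl; `check_url` and the throwaway demo server are
      illustrative only):

```python
import http.server, threading, urllib.request, urllib.error

def check_url(url, expected_http_code=200):
    # succeed when the observed HTTP code matches the configured one
    try:
        code = urllib.request.urlopen(url).getcode()
    except urllib.error.HTTPError as e:
        code = e.code
    return code == expected_http_code

# throwaway local server: an unknown path answers 404
srv = http.server.HTTPServer(("127.0.0.1", 0),
                             http.server.SimpleHTTPRequestHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/non-existing" % srv.server_port
ok = check_url(url, expected_http_code=404)
bad = check_url(url, expected_http_code=200)
srv.shutdown()
print(ok, bad)  # True False
```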
  22. 24 Nov, 2015 1 commit
  23. 23 Nov, 2015 1 commit
  24. 17 Nov, 2015 1 commit
  25. 06 Nov, 2015 1 commit
  26. 05 Nov, 2015 2 commits
  27. 04 Nov, 2015 3 commits
    • Kirill Smelkov
      slapos/recipe/redis: Add support for UNIX sockets · cbbfd405
      It is well known that UNIX sockets are faster than TCP over loopback.
      E.g. on my machine, according to lmbench[1], they have ~2 times lower
      latency and ~2-3 times more throughput compared to TCP over loopback:
          *Local* Communication latencies in microseconds - smaller is better
          Host                 OS 2p/0K  Pipe AF     UDP  RPC/   TCP  RPC/ TCP
                                  ctxsw       UNIX         UDP         TCP conn
          --------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
          teco      Linux 4.2.0-1  13.8  29.2 26.8  45.0  47.9  48.5  55.5  45.
          *Local* Communication bandwidths in MB/s - bigger is better
          Host                OS  Pipe AF    TCP  File   Mmap  Bcopy  Bcopy  Mem   Mem
                                       UNIX      reread reread (libc) (hand) read write
          --------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
          teco      Linux 4.2.0-1 1084 4353 1493 2329.1 3720.7 1613.8 1109.2 3402 1404.
      The same ratio holds for our std shuttle servers.
      The API for working with UNIX sockets is essentially the same as for
      TCP/UDP. Because of that it is easy to support both TCP and UNIX
      sockets in one program, and this way a lot of software supports UNIX
      sockets out of the box, including Redis.
      Because of the lower latency and higher throughput, it makes sense
      for performance reasons to interconnect services on one machine via
      UNIX sockets and talk via TCP only to the outside world.
      Here we add support for unix sockets to Redis recipe.
      [1] http://www.bitmover.com/lmbench/
      /reviewed-by @kazuhiko  (on !27)
      /cc @alain.takoudjou, @jerome, @vpelletier
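      In redis.conf terms, the recipe's new knob boils down to the standard
      Redis directives (the socket path shown is illustrative):

```
# serve clients on a UNIX socket; permissions restricted to the owner
unixsocket /srv/instance/var/redis.sock
unixsocketperm 700
```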
    • Kirill Smelkov
      slapos/recipe/redis/promise: Don't create connection pool explicitly · 442866bc
      The redis.Redis(...) constructor creates a connection pool on
      initialization, and we can rely on that.
      Another reason: the Redis constructor (in the form of
      StrictRedis.__init__()) already contains the logic to process
      arguments and select the transport: either TCP (`host` and `port`
      args) or a UNIX socket (`unix_socket_path` arg).
      Since we are going to introduce UNIX socket support in the redis
      recipe in the next patch, and don't want to duplicate
      StrictRedis.__init__() logic in the promise code, let's refactor the
      promise to delegate argument-processing logic to Redis.
      /reviewed-by @kazuhiko  (on !27)
      /cc @alain.takoudjou
    • Kirill Smelkov
      redis: v↑ (2.8.23) · 9b3cfff4
      - update Redis software to the latest upstream in the 2.8.* series
        (which now supports IPv6 out of the box);
      - update the Redis instance template to the one from 2.8.23 and
        re-merge our templating changes into it (file/dir locations, port
        and binding, master
        The whole diff to pristine 2.8.23 redis conf is now this:
        diff --git a/.../redis-2.8.23/redis.conf b/slapos/recipe/redis/template/redis.conf.in
        index 870959f..2895539 100644
        --- a/.../redis-2.8.23/redis.conf
        +++ b/slapos/recipe/redis/template/redis.conf.in
        @@ -46 +46 @@ daemonize no
        -pidfile /var/run/redis.pid
        +pidfile %(pid_file)s
        @@ -50 +50 @@ pidfile /var/run/redis.pid
        -port 6379
        +port %(port)s
        @@ -69,0 +70 @@ tcp-backlog 511
        +bind %(ipv6)s
        @@ -108 +109 @@ loglevel notice
        -logfile ""
        +logfile %(log_file)s
        @@ -174 +175 @@ rdbcompression yes
        -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
        +# hit to pay (around 10%%) when saving and loading RDB files, so you can disable it
        @@ -192 +193 @@ dbfilename dump.rdb
        -dir ./
        +dir %(server_dir)s
        @@ -217 +218 @@ dir ./
        -# masterauth <master-password>
      NOTE There are test failures for almost all Redis versions when the
      machine has a not-small number of CPUs:
      Because the failure is in the replication test, we do not use
      replication so far, and there is no feedback from the upstream author
      (for 7 days on my detailed report, and for ~3 months on this issue in
      general), we can just disable the replication test as a temporary
      solution. (To handle remote patches with md5 hashes easily, the build
      recipe is changed to slapos.recipe.cmmi.)
      NOTE Redis is updated to the 2.8 series because GitLab uses it.
      If/when we need a more recent one, we can add [redis30] in addition to
      /reviewed-by @kazuhiko  (on !27 and on !26)
      /cc @alain.takoudjou, @jerome
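      As an aside on the `10%%` hunk above: the template is rendered with
      Python %-formatting, so a literal percent sign has to be doubled or
      it would start a conversion specifier. For example:

```python
# redis.conf.in is filled with %-formatting, hence "%%" for a literal "%"
tmpl = "pidfile %(pid_file)s   # fork hit to pay (around 10%%)"
print(tmpl % {"pid_file": "/srv/redis.pid"})
```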
  28. 27 Oct, 2015 1 commit