- 24 May, 2016 1 commit

Rafael Monnerat authored
When creating a wrapper for etc/service/my-wrapper, placing the two files in the same folder caused a duplicated entry in the files added to supervisord, so always place the Python script wrapper in the bin folder of the buildout.

- 23 May, 2016 2 commits

Rafael Monnerat authored

Rafael Monnerat authored
Ensure the certificates provided by the user are actually valid and match each other; otherwise, refuse to write the bad certificates.

- 12 May, 2016 2 commits

Kazuhiko Shiozaki authored
so that the build will not fail if the directory was left behind by previous builds.

Rafael Monnerat authored
Include software-type in order to use multiple forms/schemas for the same software type, such as simplified or advanced. Include shared to include schemas for slaves.

- 22 Apr, 2016 1 commit

Julien Muchembled authored
In commit 71d0c4fd, I had in mind to replace basename %s by basename "$COMMAND". Commit cfde18ad was a wrong fix.

- 21 Apr, 2016 2 commits

Alain Takoudjou authored

Rafael Monnerat authored

- 20 Apr, 2016 1 commit

Julien Muchembled authored

- 19 Apr, 2016 2 commits

Jérome Perrin authored
This recipe only created ~/.ssh/authorized_keys, so it should not return the full ~/.ssh/ directory, because uninstallation would then delete "too much". @alain.takoudjou @rafael: after the machine was restarted yesterday, the keys I had added in ~/.ssh/ of my webrunner were not there anymore. I think this is the reason; thanks for taking a look when you have time. /reviewed-on !37

Jérome Perrin authored
http://stackoverflow.com/a/10826085 /reviewed-on !54

- 13 Apr, 2016 1 commit

Kazuhiko Shiozaki authored

- 01 Apr, 2016 1 commit

Alain Takoudjou authored

- 22 Mar, 2016 1 commit

Alain Takoudjou authored

- 01 Mar, 2016 1 commit

Kazuhiko Shiozaki authored

- 24 Feb, 2016 1 commit

Jérome Perrin authored
Since this parameter is a JSON-encoded string, the request parameter must be [].

- 21 Feb, 2016 1 commit

Kirill Smelkov authored
https://lab.nexedi.com/nexedi/slapos.git
This updates links to slapos.git in the tree to point to the new location. We do so whole-tree, except for one place in stack/monitor/:

    [download-monitor-static]
    recipe = hexagonit.recipe.download
    url = http://git.erp5.org/gitweb/slapos.git/snapshot/930be99041ea26b7b1186830e5eb56ef0acc1bdf.tar.gz
    ...

(see d8800c0b "monitor: Download statics files from snapshot")

The reason we do not update that link yet is that 930be99041ea26b7b1186830e5eb56ef0acc1bdf is a tree object, and GitLab does not allow downloading a tree object as an archive (yet?). So for now that link stays unconverted, and we'll think about what to do with it.

- 15 Feb, 2016 1 commit

- 12 Jan, 2016 1 commit

Sebastien Robin authored
Initially, the condition checking the rdiff-backup status was right after calling it. But then a condition about CORRUPTED_ARGS was inserted, making the value of $? different from what was initially expected. Due to this, the cleanup of old versions was never done, making the backup grow too fat very quickly. So clearly define a variable for the rdiff-backup status to get the expected condition.
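A minimal shell sketch of the fix described above (the function names are illustrative stand-ins, not the actual backup script): save the exit status into a variable immediately, before any other command can overwrite $?.

```shell
# Stand-in for the real rdiff-backup invocation.
run_backup() { return 0; }

run_backup
RDIFF_BACKUP_STATUS=$?       # captured right away, before $? is clobbered

# ... other checks (e.g. on CORRUPTED_ARGS) may run here and change $? ...
true

# The cleanup condition now tests the saved status, not a stale $?.
if [ "$RDIFF_BACKUP_STATUS" -eq 0 ]; then
    echo "backup ok, old increments can be removed"
fi
```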

- 17 Dec, 2015 1 commit

Kazuhiko Shiozaki authored

- 09 Dec, 2015 2 commits

Kirill Smelkov authored
In case there are errors when creating the cluster / setting up its configuration files, we currently leave the pgsql database half-installed, and the next instantiation runs do nothing, because os.path.exists(pgdata) is already true. I personally hit this situation by providing the ipv4 and ipv6 parameters as strings: the recipe wanted to do ipv4.join(ipv6), but this works only for sets and raises for strings. What is worse, the above error becomes hidden in our default setup, because the webrunner tries instantiation _several_ times, and on the second run instantiation succeeds, because the pgdata directory already exists and the recipe thinks there is nothing to do, _and_ the webrunner has already removed instance.log from the previous run. So do not hide errors: if we see there are problems, remove the whole newly created pgsql database directory. /cc @kazuhiko, @jerome /proposed-for-review on !29
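A minimal sketch of the cleanup pattern described above (run_initdb and the paths are illustrative, not the recipe's actual code): if cluster creation fails, remove the partially created data directory so the next instantiation run starts from scratch instead of silently "succeeding".

```shell
# Stand-in for initdb + writing the configuration files; fails here.
run_initdb() { return 1; }

PGDATA="$(mktemp -d)/pgdata"
mkdir -p "$PGDATA"

if ! run_initdb; then
    rm -rf "$PGDATA"   # do not leave a half-installed database behind
    echo "initdb failed, cleaned up data directory"
fi
```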

Cédric Le Ninivin authored

- 07 Dec, 2015 3 commits

Julien Muchembled authored

Julien Muchembled authored

Kazuhiko Shiozaki authored
That is a long-deprecated syntax, removed in haproxy 1.6.

- 27 Nov, 2015 1 commit

Rafael Monnerat authored

- 25 Nov, 2015 3 commits

Rafael Monnerat authored

Kirill Smelkov authored
If one wants to check URLs on UNIX sockets, there is no full URL scheme in curl for this; the following has to be used instead:

    curl --unix-socket /path/to/socket http:/<url-path>

For this to work, one can do e.g. the following trick:

    [promise-unicorn]
    recipe = slapos.cookbook:check_url_available
    url = --unix-socket ${unicorn:socket} http:/

but then the generated promise script fails this way:

    ./etc/promise/unicorn: line 7: [: too many arguments

By quoting "$URL" in the emptiness check we can support both usual URLs and URLs with the --unix-socket trick prepended. /reviewed-by @cedric.leninivin (on !31)
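A sketch of why the quoting matters in the generated promise script (the URL value is the one from the trick above): with the --unix-socket trick, $URL expands to three words, so an unquoted [ -z $URL ] becomes "[: too many arguments", while [ -z "$URL" ] sees a single non-empty string and works for ordinary URLs too.

```shell
URL='--unix-socket /path/to/socket http:/'

# Quoted: one word, so the test is well-formed for any URL value.
if [ -z "$URL" ]; then
    echo "URL is missing"
else
    echo "URL is set"
fi
```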

Kirill Smelkov authored
In the gitlab SR, a service I need to check, gitlab-workhorse, returns 200 only when the request comes for some repository and the authentication backend allows it. Requiring access to repositories just to check whether the service is alive is not very good; also, the auth backend itself can be down, and initially there are no repositories at all. So gitlab-workhorse is checked to be alive by pinging it with a non-existing URL and expecting 403. For this to work we need to allow clients to specify the expected HTTP code instead of the previously hardcoded 200 (which still remains the default). /reviewed-by @cedric.leninivin (on !31)
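A sketch of the generalized check (variable and function names are illustrative, not the cookbook's actual code): the expected HTTP code becomes a parameter defaulting to 200, and the observed code, e.g. from curl -s -o /dev/null -w '%{http_code}' "$URL", is compared against it.

```shell
EXPECTED_HTTP_CODE="${EXPECTED_HTTP_CODE:-200}"   # 200 stays the default

check_alive() {
    # $1: HTTP code actually returned by the service
    if [ "$1" -eq "$EXPECTED_HTTP_CODE" ]; then
        echo "service alive (got $1)"
    else
        echo "service broken (got $1, expected $EXPECTED_HTTP_CODE)" >&2
        return 1
    fi
}

# gitlab-workhorse case: probe a non-existing URL and expect 403.
EXPECTED_HTTP_CODE=403
check_alive 403    # prints: service alive (got 403)
```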

- 24 Nov, 2015 1 commit

Kazuhiko Shiozaki authored

- 23 Nov, 2015 1 commit

Kazuhiko Shiozaki authored

- 17 Nov, 2015 1 commit

Alain Takoudjou authored

- 06 Nov, 2015 1 commit

Rafael Monnerat authored

- 05 Nov, 2015 2 commits

Rafael Monnerat authored

Rafael Monnerat authored
Dump to the filesystem the IPv4 address used by the node to create connections.

- 04 Nov, 2015 3 commits

Kirill Smelkov authored
It is well known that UNIX sockets are faster than TCP over loopback. E.g. on my machine, according to lmbench[1], they have ~ 2 times lower latency and ~ 2-3 times more throughput compared to TCP over loopback:

    *Local* Communication latencies in microseconds - smaller is better
    ---------------------------------------------------------------------
    Host                 OS  2p/0K  Pipe  AF    UDP  RPC/   TCP  RPC/  TCP
                             ctxsw        UNIX        UDP         TCP  conn
    --------- ------------- ----- ----- ----- ----- ----- ----- ----- ----
    teco      Linux 4.2.0-1  13.8  29.2  26.8  45.0  47.9  48.5  55.5  45.

    *Local* Communication bandwidths in MB/s - bigger is better
    -----------------------------------------------------------------------------
    Host                 OS Pipe  AF   TCP  File   Mmap   Bcopy  Bcopy  Mem  Mem
                                  UNIX      reread reread (libc) (hand) read write
    --------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
    teco      Linux 4.2.0-1 1084 4353 1493 2329.1 3720.7 1613.8 1109.2 3402 1404.

The same ratio holds for our std shuttle servers. The API for working with UNIX sockets is essentially the same as for TCP/UDP. Because of that it is easy to support both TCP and UNIX sockets in one program, and this way a lot of software supports UNIX sockets out of the box, including Redis. Because of the lower latencies and higher throughput, for performance reasons it makes sense to interconnect services on one machine via UNIX sockets and to talk via TCP only to the outside world. Here we add support for UNIX sockets to the Redis recipe.

[1] http://www.bitmover.com/lmbench/

/reviewed-by @kazuhiko (on !27) /cc @alain.takoudjou, @jerome, @vpelletier
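For reference, on the Redis side listening on a UNIX socket is a matter of two redis.conf directives (the socket path below is illustrative; the recipe templates its own path into the generated configuration):

```
# Listen on a UNIX socket in addition to (or instead of) TCP.
unixsocket /srv/redis/redis.sock
unixsocketperm 700
```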

Kirill Smelkov authored
Because the redis.Redis(...) constructor creates the connection pool on initialization, we can rely on it. Another reason: the Redis constructor (in the form of StrictRedis.__init__()) contains the logic for processing arguments and selecting the transport - either TCP (host and port args) or UNIX socket (unix_socket_path arg): https://lab.nexedi.com/nexedi/slapos/blob/95dbb5b2/slapos/recipe/redis/MyRedis2410.py#L560 Since we are going to introduce UNIX socket support in the redis recipe in the next patch, and do not want to duplicate StrictRedis.__init__() logic in the promise code, let's refactor the promise to delegate the argument processing logic to Redis. /reviewed-by @kazuhiko (on !27) /cc @alain.takoudjou

Kirill Smelkov authored
- update the Redis software to the latest upstream in the 2.8.* series (which now supports IPv6 out of the box);
- update the Redis instance template to the one from 2.8.23 and re-merge our templating changes into it (file/dir locations, port and binding, master password).

The whole diff against the pristine 2.8.23 redis.conf is now this:

    diff --git a/.../redis-2.8.23/redis.conf b/slapos/recipe/redis/template/redis.conf.in
    index 870959f..2895539 100644
    --- a/.../redis-2.8.23/redis.conf
    +++ b/slapos/recipe/redis/template/redis.conf.in
    @@ -46 +46 @@ daemonize no
    -pidfile /var/run/redis.pid
    +pidfile %(pid_file)s
    @@ -50 +50 @@ pidfile /var/run/redis.pid
    -port 6379
    +port %(port)s
    @@ -69,0 +70 @@ tcp-backlog 511
    +bind %(ipv6)s
    @@ -108 +109 @@ loglevel notice
    -logfile ""
    +logfile %(log_file)s
    @@ -174 +175 @@ rdbcompression yes
    -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    +# hit to pay (around 10%%) when saving and loading RDB files, so you can disable it
    @@ -192 +193 @@ dbfilename dump.rdb
    -dir ./
    +dir %(server_dir)s
    @@ -217 +218 @@ dir ./
    -# masterauth <master-password>
    +%(master_passwd)s

NOTE: there are test failures for almost all Redis versions when the machine has a not-small number of CPUs: https://github.com/antirez/redis/issues/2715#issuecomment-151608948 Because the failure is in the replication test, and so far we do not use replication, and there is no feedback from the upstream author on handling this (for 7 days on my detailed report, and for ~ 3 months on this issue in general), we can just disable the replication test as a temporary solution. (To handle remote patches with an md5 hash easily, the building recipe is changed to slapos.recipe.cmmi.)

NOTE: Redis is updated to the 2.8 series because GitLab uses this series. If/when we need a more recent one, we can add [redis30] in addition to [redis28].

/reviewed-by @kazuhiko (on !27 and on !26) /cc @alain.takoudjou, @jerome

- 27 Oct, 2015 2 commits

Alain Takoudjou authored

Alain Takoudjou authored