Commit 2ee149d0 authored by Marco Mariani's avatar Marco Mariani

Merge remote-tracking branch 'origin/master' into cygwin-share-res

Conflicts:
	component/dash/buildout.cfg
	component/dcron/buildout.cfg
	component/dropbear/buildout.cfg
	component/glib/buildout.cfg
	component/openssl/buildout.cfg
	component/popt/buildout.cfg
	component/rdiff-backup/buildout.cfg
	software/apache-frontend/software.cfg
parents b2ab8ee3 c9e7414b
...@@ -9,3 +9,5 @@ eggs/
parts/
slapos.cookbook.egg-info
.*.swp
*~
\#*\#
\ No newline at end of file
Changes
=======
0.83.1 (2013-09-10)
-------------------
* slapconfiguration: fixes previous release (don't encode tap_set because it's not a string). [Cedric de Saint Martin]
0.83 (2013-09-10)
-----------------
* slaprunner recipe: remove trailing / from master_url. [Cedric de Saint Martin]
* librecipe: add pidfile option for singletons. [Cedric de Saint Martin]
* Resiliency: Use new pidfile option. [Cedric de Saint Martin]
* Fix request.py for slave instances. [Cedric de Saint Martin]
* slapconfiguration recipe: cast some parameters from unicode to str. [Cedric de Saint Martin]
0.82 (2013-08-30)
-----------------
* Certificate Authority: Can receive certificate to install. [Cedric Le Ninivin]
* Squid: Add squid recipe. [Romain Courteaud]
* Request: Transmit instance state to requested instances. [Benjamin Blanc / Cédric Le Ninivin]
* Slapconfiguration: Now return instance state. [Cédric Le Ninivin]
* Apache Frontend: Remove recipe
0.81 (2013-08-12)
-----------------
* KVM SR: implement resiliency test. [Cedric de Saint Martin]
0.80 (2013-08-06)
-----------------
* Add a simple readline recipe. [f4fce7e]
0.79 (2013-08-06)
-----------------
* KVM SR: Add support for NAT based networking (User Mode Network). [627895fe35]
* KVM SR: add virtual-hard-drive-url support. [aeb5df40cd, 8ce5a9aa1d0, a5034801aa9]
* Fix regression in GenericBaseRecipe.generatePassword. [3333b07d33c]
0.78.5 (2013-08-06)
-------------------
* check_url_available: add option to check secure links [6cbce4d8231]
0.78.4 (2013-08-06)
-------------------
* slapos.cookbook:slaprunner: Update to use https. [Cedric Le Ninivin]
0.78.3 (2013-07-18)
-------------------
* slapos.cookbook:publish: Add support to publish information for slaves. [Cedric Le Ninivin]
0.78.2 (2013-07-18)
-------------------
* Fix slapos.cookbook:request: Add backward compatibility about getInstanceGuid(). [Cedric de Saint Martin]
* slapos.cookbook:check_* promises: Add timeout to curl that is not otherwise killed by slapos promise subsystem. [Cedric de Saint Martin]
* Cloudooo: Allow any environment variables. [Yusei Tahara]
* ERP5: disable MariaDB query cache completely by 'query_cache_type = 0' for ERP5. [Kazuhiko Shiozaki]
* ERP5: enable haproxy admin socket and install haproxyctl script. [Kazuhiko Shiozaki]
* ERP5: increase the maximum number of open file descriptors before starting mysqld. [Kazuhiko Shiozaki]
* python 2.7: updated to 2.7.5 [Cedric de Saint Martin]
0.78.1 (2013-05-31)
-------------------
* Add boinc recipe: Allow to deploy an empty BOINC project. [Alain Takoudjou]
* Add boinc.app recipe: Allow to deploy and update a BOINC application into existing BOINC server instance. [Alain Takoudjou]
* Add boinc.client recipe: Allow to deploy a BOINC Client instance on SlapOS. [Alain Takoudjou]
...@@ -15,10 +80,6 @@ Changes
* Add trac recipe: for deploying Trac and manage project with support of SVN and GIT. [Alain Takoudjou]
* Add bonjourgrid recipe: for deploying BonjourGrid Master and submit BOINC or Condor project. [Alain Takoudjou]
* Add bonjourgrid.client recipe: for deploying BonjourGrid Worker instance and execute BOINC or Condor Jobs. [Alain Takoudjou]
* Cloudooo: Allow any environment variables. [Yusei Tahara]
* ERP5: disable MariaDB query cache completely by 'query_cache_type = 0' for ERP5. [Kazuhiko Shiozaki]
* ERP5: enable haproxy admin socket and install haproxyctl script. [Kazuhiko Shiozaki]
* ERP5: increase the maximum number of open file descriptors before starting mysqld. [Kazuhiko Shiozaki]
0.78.0 (2013-04-28)
-------------------
......
...@@ -121,8 +121,9 @@ make-targets =
[apache-2.2]
# inspired on http://old.aclark.net/team/aclark/blog/a-lamp-buildout-for-wordpress-and-other-php-apps/
recipe = slapos.recipe.cmmi
url = http://mir2.ovh.net/ftp.apache.org/dist//httpd/httpd-2.2.24.tar.gz
md5sum = 64a3392018ad60583209a16d728180d3
version = 2.2.25
url = http://mir2.ovh.net/ftp.apache.org/dist/httpd/httpd-${:version}.tar.bz2
md5sum = 9ebe3070c0bb4311f21a0cd0e34f0045
patch-options = -p1
configure-options = --disable-static
--enable-authn-alias
...@@ -183,6 +184,8 @@ environment =
recipe = slapos.recipe.cmmi
url = http://sourceforge.net/projects/mod-antiloris/files/mod_antiloris-0.4.tar.bz2/download
md5sum = 66862bf10e9be3a023e475604a28a0b4
depends =
${apache-2.2:version}
configure-command = ${apache-2.2:location}/bin/apxs
configure-options = -c mod_antiloris.c
make-binary = ${:configure-command}
......
...@@ -44,6 +44,13 @@ filename = cloud9-session-directory.patch
download-only = true
md5sum = 5dc8cc28447ed3747b8a53c768d872aa
[cloud9-socket.patch]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/${:filename}
filename = cloud9-socket.patch
download-only = true
#md5sum = 5dc8cc28447ed3747b8a53c768d872aa
[cloud9-git]
# Online IDE written in javascript/node.js
# URL : c9.io
...@@ -55,7 +62,7 @@ commit = f7d102bc225c922f116d2cea52a746d64343ea59
repository = https://github.com/ajaxorg/cloud9.git
location = ${buildout:parts-directory}/${:_buildout_section_name_}
environment = export GIT_SSL_NO_VERIFY=true; export PATH=${git:location}/bin:${nodejs:location}/bin:${node-sm:location}/node_modules/sm/bin:$PATH; export CPPFLAGS="-I${libxml2:location}/include -I${nodejs:location}/include"; export LDFLAGS="-L${libxml2:location}/lib -Wl,-rpath=${libxml2:location}/lib"; export HOME=${:location};
command = ${:environment} (git clone --quiet ${:repository} ${:location} && cd ${:location} && git reset --hard ${:commit} && ${node-sm:location}/node_modules/.bin/sm install && patch -p1 < ${cloud9-session-directory.patch:location}/${cloud9-session-directory.patch:filename}) || (rm -fr ${:location}; exit 1)
command = ${:environment} (git clone --quiet ${:repository} ${:location} && cd ${:location} && git reset --hard ${:commit} && ${node-sm:location}/node_modules/.bin/sm install && patch -p1 < ${cloud9-session-directory.patch:location}/${cloud9-session-directory.patch:filename} && ${node-sm:location}/node_modules/.bin/sm install && patch -p1 < ${cloud9-socket.patch:location}/${cloud9-socket.patch:filename}) || (rm -fr ${:location}; exit 1)
update-command =
executable = ${:location}/server.js
......
diff --git a/node_modules/smith.io/node_modules/engine.io/node_modules/engine.io-client/dist/engine.io-dev.js b/node_modules/smith.io/node_modules/engine.io/node_modules/engine.io-client/dist/engine.io-dev.js
index fa7e54a..14b8e67 100644
--- a/node_modules/smith.io/node_modules/engine.io/node_modules/engine.io-client/dist/engine.io-dev.js
+++ b/node_modules/smith.io/node_modules/engine.io/node_modules/engine.io-client/dist/engine.io-dev.js
@@ -2126,7 +2126,7 @@ Polling.prototype.uri = function () {
query = '?' + query;
}
- return schema + '://' + this.host + port + this.path + query;
+ return this.path + query;
};
});require.register("transports/websocket.js", function(module, exports, require, global){
...@@ -22,7 +22,7 @@ git-executable = ${git:location}/bin/git
[cloudooo]
recipe = zc.recipe.egg
python = python2.6
python = python2.7
extra-paths = ${cloudooo-repository:location}
eggs =
${lxml-python:egg}
......
...@@ -2,7 +2,7 @@
extends =
../xz-utils/buildout.cfg
parts =
coreutils
coreutils-output
[coreutils]
recipe = slapos.recipe.cmmi
...@@ -13,3 +13,17 @@ configure-options =
environment =
PATH=${xz-utils:location}/bin:%(PATH)s
LDFLAGS =-Wl,--as-needed
[coreutils-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${:test} -x ${:test} -a -x ${:cat} -a -x ${:rm} -a -x ${:echo} -a -x ${:date} -a -x ${:md5sum} -a -x ${:basename}
test = ${coreutils:location}/bin/test
cat = ${coreutils:location}/bin/cat
rm = ${coreutils:location}/bin/rm
echo = ${coreutils:location}/bin/echo
date = ${coreutils:location}/bin/date
md5sum = ${coreutils:location}/bin/md5sum
basename = ${coreutils:location}/bin/basename
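The [coreutils-output] part above works as a promise: plone.recipe.command re-runs the command on every update and, because stop-on-error = true, the buildout stops whenever one of the listed binaries is missing or not executable. A minimal Python sketch of the same check, with hypothetical install paths, would be:

import os

# Hypothetical binary locations; buildout fills these in via ${coreutils:location}.
expected = [
    '/opt/slapos/parts/coreutils/bin/test',
    '/opt/slapos/parts/coreutils/bin/cat',
    '/opt/slapos/parts/coreutils/bin/md5sum',
]

missing = [path for path in expected
           if not (os.path.isfile(path) and os.access(path, os.X_OK))]
if missing:
    # Equivalent of stop-on-error = true: abort instead of continuing the build.
    raise SystemExit("missing or non-executable: %s" % ", ".join(missing))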
[buildout]
extends =
../coreutils/buildout.cfg
parts = dash
parts = dash-output
[dash]
recipe = slapos.recipe.cmmi
...@@ -15,3 +17,11 @@ configure-options =
share = /usr
promises =
/usr/bin/dash.exe
[dash-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:dash}
dash = ${dash:location}/bin/dash
[buildout]
parts = dcron
extends =
../coreutils/buildout.cfg
parts = dcron-output
[dcron-hooks-download]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/${:filename}
md5sum = 860e914dff4108b47565965fe5ebc7b5
download-only = true
filename = dcron-hooks.py
[dcron-patch-nonroot]
recipe = hexagonit.recipe.download
md5sum = 2f5b22dc1cbe81060a9c28e6f5c06e8b
md5sum = d5408ab682b65cc1eda40d588fcd7db8
url = ${:_profile_base_location_}/${:filename}
filename = dcron-4.4.noroot.no.globals.patch
download-only = true
...@@ -18,8 +28,18 @@ patches =
patch-options = -p1
make-options =
PREFIX=${buildout:parts-directory}/${:_buildout_section_name_}
post-make-hook = ${dcron-hooks-download:location}/${dcron-hooks-download:filename}:post_make_hook
[dcron:cygwin]
share = /usr
promises =
/usr/sbin/cron.exe
[dcron-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:crond} -a -x ${:crontab} -a ! -u ${:crontab}
crond = ${dcron:location}/sbin/crond
crontab = ${dcron:location}/bin/crontab
# Patch for making dcron usable without root user, as a local service
diff -ru dcron-4.4.org/chuser.c dcron-4.4/chuser.c
--- dcron-4.4.org/chuser.c 2010-01-18 16:27:31.000000000 +0100
+++ dcron-4.4/chuser.c 2013-07-18 18:17:16.342147418 +0200
@@ -14,47 +14,6 @@
int
ChangeUser(const char *user, char *dochdir)
...@@ -53,7 +52,16 @@ diff -ru dcron-4.4.org/chuser.c dcron-4.4/chuser.c
diff -ru dcron-4.4.org/crontab.c dcron-4.4/crontab.c
--- dcron-4.4.org/crontab.c 2010-01-18 16:27:31.000000000 +0100
+++ dcron-4.4/crontab.c 2013-07-18 18:18:07.768535485 +0200
@@ -88,7 +88,7 @@
break;
case 'c':
/* getopt guarantees optarg != 0 here */
- if (*optarg != 0 && getuid() == geteuid()) {
+ if (*optarg != 0) {
CDir = optarg;
} else {
printlogf(0, "-c option: superuser only");
@@ -316,9 +316,6 @@
close(filedes[0]);
...@@ -75,7 +83,7 @@ diff -ru dcron-4.4.org/crontab.c dcron-4.4/crontab.c
ptr = PATH_VI;
diff -ru dcron-4.4.org/job.c dcron-4.4/job.c
--- dcron-4.4.org/job.c 2010-01-18 16:27:31.000000000 +0100
+++ dcron-4.4/job.c 2013-07-18 18:17:16.342147418 +0200
@@ -62,14 +62,6 @@
* Change running state to the user in question
*/
...@@ -108,7 +116,7 @@ diff -ru dcron-4.4.org/job.c dcron-4.4/job.c
/*
diff -ru dcron-4.4.org/Makefile dcron-4.4/Makefile
--- dcron-4.4.org/Makefile 2010-01-18 16:27:31.000000000 +0100
+++ dcron-4.4/Makefile 2013-07-18 18:17:16.342147418 +0200
@@ -3,7 +3,6 @@
# these variables can be configured by e.g. `make SCRONTABS=/different/path`
......
import os
import shutil

def post_make_hook(options, buildout):
    # Restrict the freshly installed crontab binary to owner and group (0750).
    crontab_path = os.path.join(options['location'], 'bin', 'crontab')
    os.chmod(crontab_path, 0750)
...@@ -7,9 +7,10 @@
[buildout]
extends =
../zlib/buildout.cfg
../coreutils/buildout.cfg
parts =
dropbear
dropbear-output
[dropbear]
recipe = slapos.recipe.cmmi
...@@ -40,3 +41,12 @@ patch-options=
patches =
${dropbear:patches}
${:_profile_base_location_}/dropbear-0.53.1-cygwin.patch
[dropbear-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:ssh} -a -x ${:keygen}
ssh = ${dropbear:location}/bin/dbclient
keygen = ${dropbear:location}/bin/dropbearkey
# faketime - report faked system time to programs without having to change the system-wide time
# http://www.code-wizards.com/projects/libfaketime
[buildout]
parts = faketime
[faketime]
recipe = slapos.recipe.cmmi
url = http://www.code-wizards.com/projects/libfaketime/libfaketime-0.9.1.tar.gz
md5sum = ce3f996dfd5826b4ac62f1a7cc36ea27
configure-command = true
make-options =
PREFIX=${buildout:parts-directory}/${:_buildout_section_name_}
make-binary = make -e -C src
make-targets = install
post-install = sed -i -e "16c\FTPL_PATH=${buildout:parts-directory}/${:_buildout_section_name_}/lib/faketime" ${buildout:parts-directory}/${:_buildout_section_name_}/bin/faketime
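The post-install step above uses sed's "16c\" command to overwrite line 16 of the generated faketime wrapper so that FTPL_PATH points at the library directory installed by this part. Roughly the same operation in Python, with hypothetical paths standing in for the buildout expressions, looks like this:

# Hypothetical locations standing in for ${buildout:parts-directory}/${:_buildout_section_name_}.
wrapper_path = '/opt/slapos/parts/faketime/bin/faketime'
library_dir = '/opt/slapos/parts/faketime/lib/faketime'

with open(wrapper_path) as wrapper:
    lines = wrapper.readlines()
# "16c\..." replaces line 16 of the wrapper script (index 15 when zero-based).
lines[15] = 'FTPL_PATH=%s\n' % library_dir
with open(wrapper_path, 'w') as wrapper:
    wrapper.writelines(lines)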
...@@ -2,6 +2,7 @@
extends =
../bzip2/buildout.cfg
../libpng/buildout.cfg
../patch/buildout.cfg
../pkgconfig/buildout.cfg
../zlib/buildout.cfg
...@@ -28,16 +29,25 @@ environment =
PATH=${pkgconfig:location}/bin:%(PATH)s
PKG_CONFIG_PATH=${libogg:location}/lib/pkgconfig
[libtheora-png_sizeof.patch]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/${:filename}
filename = libtheora-png_sizeof.patch
md5sum = eaa1454081b50f05b59495a12f52b0d5
download-only = true
[libtheora]
recipe = slapos.recipe.cmmi
url = http://downloads.xiph.org/releases/theora/libtheora-1.1.1.tar.bz2
md5sum = 292ab65cedd5021d6b7ddd117e07cd8e
depends =
${libpng:so_version}
patches = ${libtheora-png_sizeof.patch:location}/${libtheora-png_sizeof.patch:filename}
patch-options = -p1
configure-options =
--disable-static
environment =
PATH=${pkgconfig:location}/bin:%(PATH)s
PATH=${patch:location}/bin:${pkgconfig:location}/bin:%(PATH)s
PKG_CONFIG_PATH=${libogg:location}/lib/pkgconfig:${libpng:location}/lib/pkgconfig:${libvorbis:location}/lib/pkgconfig
[yasm]
......
--- libtheora-1.1.1/examples/png2theora.c.orig 2009-08-23 03:14:04.000000000 +0900
+++ libtheora-1.1.1/examples/png2theora.c 2013-07-16 12:40:07.629087870 +0900
@@ -462,9 +462,9 @@
png_set_strip_alpha(png_ptr);
row_data = (png_bytep)png_malloc(png_ptr,
- 3*height*width*png_sizeof(*row_data));
+ 3*height*width*sizeof(*row_data));
row_pointers = (png_bytep *)png_malloc(png_ptr,
- height*png_sizeof(*row_pointers));
+ height*sizeof(*row_pointers));
for(y = 0; y < height; y++) {
row_pointers[y] = row_data + y*(3*width);
}
...@@ -33,7 +33,7 @@ environment =
[ppl]
recipe = slapos.recipe.cmmi
# we should use version 0.10.x for gcc-4.5
url = http://www.cs.unipr.it/ppl/Download/ftp/releases/0.10.2/ppl-0.10.2.tar.bz2
url = http://bugseng.com/products/ppl/download/ftp/releases/0.10.2/ppl-0.10.2.tar.bz2
md5sum = 5667111f53150618b0fa522ffc53fc3e
configure-options =
--with-libgmp-prefix=${gmp:location}
...@@ -85,9 +85,6 @@ patches =
${gcc-multiarch.patch:location}/${gcc-multiarch.patch:filename}
patch-options = -p2
configure-command = make clean \\; make distclean \\; ./configure
# GMP does not correctly detect achitecture so it have to be given
# as slapos.recipe.cmmi is using shell expansion in subproceses
# backticks are working
configure-options =
--disable-bootstrap
--enable-languages="c,c++"
...@@ -97,12 +94,30 @@ configure-options =
--with-mpc=${mpc:location}
--with-ppl=${ppl:location}
--with-cloog=${cloog-ppl:location}
--with-ecj-jar=${ecj:location}/${ecj:filename}
--prefix=${buildout:parts-directory}/${:_buildout_section_name_}
environment =
LDFLAGS=-Wl,-rpath=${mpfr:location}/lib -Wl,-rpath=${gmp:location}/lib -Wl,-rpath=${mpc:location}/lib -Wl,-rpath=${ppl:location}/lib -Wl,-rpath=${cloog-ppl:location}/lib
PATH=${zip:location}/bin:%(PATH)s
# make install does not work when several core are used
make-targets = install -j1
[gcc-minimal]
recipe = slapos.recipe.cmmi
url = http://ftp.gnu.org/gnu/gcc/gcc-4.5.4/gcc-core-4.5.4.tar.bz2
md5sum = ca62e442629a9a7710f5d797bf1b521c
patches =
${gcc-multiarch.patch:location}/${gcc-multiarch.patch:filename}
patch-options = -p2
configure-options =
--disable-bootstrap
--enable-languages=c
--disable-multilib
--with-gmp=${gmp:location}
--with-mpfr=${mpfr:location}
--with-mpc=${mpc:location}
--without-ppl
--without-cloog
environment =
LDFLAGS=-Wl,-rpath=${mpfr:location}/lib -Wl,-rpath=${gmp:location}/lib -Wl,-rpath=${mpc:location}/lib
# make install does not work when several core are used
make-targets = install -j1
......
...@@ -4,6 +4,7 @@
[buildout]
extends =
../curl/buildout.cfg
../gettext/buildout.cfg
../libexpat/buildout.cfg
../openssl/buildout.cfg
../zlib/buildout.cfg
......
--- glib/gstrfuncs.c~ 2012-12-30 14:51:30.000000000 +0800
+++ glib/gstrfuncs.c 2012-12-30 14:51:50.203125000 +0800
@@ -1423,7 +1423,7 @@
#ifdef HAVE_STRSIGNAL
const char *msg_locale;
-#if defined(G_OS_BEOS) || defined(G_WITH_CYGWIN)
+#if defined(G_OS_BEOS)
extern const char *strsignal(int);
#else
/* this is declared differently (const) in string.h on BeOS */
[buildout]
extends =
../readline/buildout.cfg
../gmp/buildout.cfg
../nettle/buildout.cfg
../ncurses/buildout.cfg
../readline/buildout.cfg
../zlib/buildout.cfg
parts = gnutls
...@@ -22,14 +24,13 @@ environment =
LDFLAGS=-lgpg-error -L${gpg-error:location}/lib -Wl,-rpath=${gpg-error:location}/lib
[gnutls]
# XXX-Cedric : update to latest gnutls
recipe = slapos.recipe.cmmi
url = ftp://ftp.gnutls.org/gcrypt/gnutls/v2.8/gnutls-2.8.6.tar.bz2
url = ftp://ftp.gnutls.org/gcrypt/gnutls/v3.2/gnutls-3.2.0.tar.xz
md5sum = eb0a6d7d3cb9ac684d971c14f9f6d3ba
md5sum = e0cba4ddd923420026ff9739b3bc069a
configure-options =
--with-libgcrypt-prefix=${gcrypt:location}
--disable-static
environment =
CPPFLAGS=-I${zlib:location}/include -I${readline:location}/include -I${ncurses:location}/include -I${ncurses:location}/include/ncursesw -I${gcrypt:location}/include -I${gpg-error:location}/include
CPPFLAGS=-I${zlib:location}/include -I${readline:location}/include -I${ncurses:location}/include -I${ncurses:location}/include/ncursesw -I${gmp:location}/include -I${gcrypt:location}/include -I${gpg-error:location}/include -I${nettle:location}/include
LDFLAGS=-lgcrypt -L${readline:location}/lib -Wl,-rpath=${readline:location}/lib -L${ncurses:location}/lib -Wl,-rpath=${ncurses:location}/lib -L${gcrypt:location}/lib -Wl,-rpath=${gcrypt:location}/lib -L${zlib:location}/lib -Wl,-rpath=${zlib:location}/lib -L${gpg-error:location}/lib -Wl,-rpath=${gpg-error:location}/lib
LDFLAGS=-lgcrypt -L${gmp:location}/lib -Wl,-rpath=${gmp:location}/lib -L${readline:location}/lib -Wl,-rpath=${readline:location}/lib -L${ncurses:location}/lib -Wl,-rpath=${ncurses:location}/lib -L${gcrypt:location}/lib -Wl,-rpath=${gcrypt:location}/lib -L${nettle:location}/lib -Wl,-rpath=${nettle:location}/lib -L${zlib:location}/lib -Wl,-rpath=${zlib:location}/lib -L${gpg-error:location}/lib -Wl,-rpath=${gpg-error:location}/lib
PKG_CONFIG=${zlib:location}/lib/pkgconfig
[buildout]
extends =
../pcre/buildout.cfg
../coreutils/buildout.cfg
../xz-utils/buildout.cfg
parts =
grep
grep-output
[grep]
recipe = slapos.recipe.cmmi
...@@ -13,3 +14,11 @@ environment =
PATH=${xz-utils:location}/bin:%(PATH)s
CPPFLAGS=-I${pcre:location}/include
LDFLAGS=-L${pcre:location}/lib -Wl,-rpath=${pcre:location}/lib
[grep-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:grep}
grep = ${grep:location}/bin/grep
[buildout]
extends =
../autoconf/buildout.cfg
../automake/buildout.cfg
[make]
# make 3.82 breaks too many things. Stick with 3.81.
# See http://lists.gnu.org/archive/html/make-alpha/2010-07/msg00025.html
# for all incompatible changes.
# Moreover, vanilla 3.81 does some seg faults, so use Debian patched version.
<= make3.81-debian
[make-dfsg_3.81-8.2.diff]
# Debian patch coming from:
# http://ftp.de.debian.org/debian/pool/main/m/make-dfsg/make-dfsg_3.81-8.2.diff.gz
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/${:_buildout_section_name_}
md5sum = fa77bb989a096fafbe7c78582e9415e3
download-only = true
[make3.81-debian]
recipe = slapos.recipe.cmmi
url = http://ftp.de.debian.org/debian/pool/main/m/make-dfsg/make-dfsg_3.81.orig.tar.gz
md5sum = 7c93b1ab4680eb21c2c13f4f47741e2d
patches =
${make-dfsg_3.81-8.2.diff:location}/make-dfsg_3.81-8.2.diff
patch-options = -p1
...@@ -18,10 +18,10 @@ parts =
[mariadb]
recipe = slapos.recipe.cmmi
version = 5.5.31
version = 5.5.32
revision = 1
url = http://downloads.askmonty.org/f/mariadb-${:version}/kvm-tarbake-jaunty-x86/mariadb-${:version}.tar.gz/from/http://ftp.osuosl.org/pub/mariadb
md5sum = 3fe756bc76f0e7a3af2757e48ce0f3f4
md5sum = 565c2dce6a2fb027c9d0ffbae4934135
# compile directory is required to build mysql plugins.
keep-compile-dir = true
patch-options = -p0
......
[buildout]
extends =
../gmp/buildout.cfg
../m4/buildout.cfg
[nettle-lib-location.patch]
recipe = hexagonit.recipe.download
download-only = true
filename = ${:_buildout_section_name_}
url = ${:_profile_base_location_}/${:filename}
md5sum = 41dd0ce2a73487929bdc637b75dd62c9
[nettle]
recipe = slapos.recipe.cmmi
url = http://www.lysator.liu.se/~nisse/archive/nettle-2.7.1.tar.gz
md5sum = 003d5147911317931dd453520eb234a5
patches =
${nettle-lib-location.patch:location}/${nettle-lib-location.patch:filename}
configure-option =
--disable-static
--disable-assembler
--disable-openssl
environment =
PATH=${m4:location}/bin:%(PATH)s
CPPFLAGS=-I${gmp:location}/include
LDFLAGS=-L${gmp:location}/lib -Wl,-rpath=${gmp:location}/lib
--- configure.orig 2013-07-05 15:37:28.000000000 +0200
+++ configure 2013-07-05 15:47:48.000000000 +0200
@@ -4680,52 +4680,6 @@
if test "x$ABI" != xstandard ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: Compiler uses $ABI-bit ABI. To change, set CC." >&5
$as_echo "$as_me: Compiler uses $ABI-bit ABI. To change, set CC." >&6;}
- if test "$libdir" = '${exec_prefix}/lib' ; then
- # Try setting a better default
- case "$host_cpu:$host_os:$ABI" in
- *:solaris*:32|*:sunos*:32)
- libdir='${exec_prefix}/lib'
- ;;
- *:solaris*:64|*:sunos*:64)
- libdir='${exec_prefix}/lib/64'
- ;;
- # Linux conventions are a mess... According to the Linux File
- # Hierarchy Standard, all architectures except IA64 puts 32-bit
- # libraries in lib, and 64-bit in lib64. Some distributions,
- # e.g., Fedora and Gentoo, adhere to this standard, while at
- # least Debian has decided to put 64-bit libraries in lib and
- # 32-bit libraries in lib32.
-
- # We try to figure out the convention, except if we're cross
- # compiling. We use lib${ABI} if /usr/lib${ABI} exists and
- # appears to not be a symlink to a different name.
- *:linux*:32|*:linux*:64)
- if test "$cross_compiling" = yes ; then
- { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Cross compiling for linux. Can't guess if libraries go in lib${ABI} or lib." >&5
-$as_echo "$as_me: WARNING: Cross compiling for linux. Can't guess if libraries go in lib${ABI} or lib." >&2;}; else
- # The dash builtin pwd tries to be "helpful" and remember
- # symlink names. Use -P option, and hope it's portable enough.
- test -d /usr/lib${ABI} \
- && (cd /usr/lib${ABI} && pwd -P | grep >/dev/null "/lib${ABI}"'$') \
- && libdir='${exec_prefix}/'"lib${ABI}"
- fi
- ;;
- # On freebsd, it seems 32-bit libraries are in lib32,
- # and 64-bit in lib. Don't know about "kfreebsd", does
- # it follow the Linux fhs conventions?
- *:freebsd*:32)
- libdir='${exec_prefix}/lib32'
- ;;
- *:freebsd*:64)
- libdir='${exec_prefix}/lib'
- ;;
- *)
- { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Don't know where to install $ABI-bit libraries on this system." >&5
-$as_echo "$as_me: WARNING: Don't know where to install $ABI-bit libraries on this system." >&2;};
- esac
- { $as_echo "$as_me:${as_lineno-$LINENO}: Libraries to be installed in $libdir." >&5
-$as_echo "$as_me: Libraries to be installed in $libdir." >&6;}
- fi
fi
# Select assembler code
...@@ -3,11 +3,14 @@ extends =
../pcre/buildout.cfg
../zlib/buildout.cfg
../openssl/buildout.cfg
../coreutils/buildout.cfg
parts = nginx-output
[nginx]
recipe = slapos.recipe.cmmi
url = http://nginx.org/download/nginx-1.2.7.tar.gz
url = http://nginx.org/download/nginx-1.5.3.tar.gz
md5sum = d252f5c689a14a668e241c744ccf5f06
md5sum = 1e735dd6a6ade2b5c20e924b67c3d355
configure-options=
--with-ipv6
--with-http_ssl_module
...@@ -16,6 +19,15 @@ configure-options=
--with-ld-opt="-L ${zlib:location}/lib -L ${openssl:location}/lib -L ${pcre:location}/lib -Wl,-rpath=${pcre:location}/lib -Wl,-rpath=${zlib:location}/lib -Wl,-rpath=${openssl:location}/lib"
--with-cc-opt="-I ${pcre:location}/include -I ${openssl:location}/include -I ${zlib:location}/include"
[nginx-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:nginx} -a -f ${:mime}
nginx = ${nginx:location}/sbin/nginx
mime = ${nginx:location}/conf/mime.types
[nginx-unstable]
<= nginx
url = http://nginx.org/download/nginx-1.3.15.tar.gz
......
...@@ -8,9 +8,10 @@ extends =
../ca-certificates/buildout.cfg
../zlib/buildout.cfg
../patch/buildout.cfg
../coreutils/buildout.cfg
parts =
openssl
openssl-output
[openssl-nodoc.patch]
# Disable doc generation part in Makefile
...@@ -51,7 +52,7 @@ configure-options =
make-options =
-j1
make-targets =
install && rm -f ${buildout:parts-directory}/${:_buildout_section_name_}/etc/ssl/certs/* && for i in ${ca-certificates:location}/certs/*/*.crt; do ln -sv $i ${buildout:parts-directory}/${:_buildout_section_name_}/etc/ssl/certs/`${buildout:parts-directory}/${:_buildout_section_name_}/bin/openssl x509 -hash -noout -in $i`.0; done; true
all install_sw && rm -f ${buildout:parts-directory}/${:_buildout_section_name_}/etc/ssl/certs/* && for i in ${ca-certificates:location}/certs/*/*.crt; do ln -sv $i ${buildout:parts-directory}/${:_buildout_section_name_}/etc/ssl/certs/`${buildout:parts-directory}/${:_buildout_section_name_}/bin/openssl x509 -hash -noout -in $i`.0; done; true
[openssl:cygwin]
share = /usr
...@@ -62,3 +63,11 @@ promises =
/usr/bin/cygcrypto-1.0.0.dll
/usr/bin/cygssl-1.0.0.dll
/usr/include/openssl/x509.h
[openssl-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:openssl}
openssl = ${openssl:location}/bin/openssl
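The make-targets line above does more than install OpenSSL: it empties etc/ssl/certs and then symlinks every certificate shipped by ca-certificates under the name <subject-hash>.0, the hash being computed with `openssl x509 -hash -noout -in <cert>`. A rough Python equivalent of that shell loop, with hypothetical directories standing in for the buildout expressions:

import os
import subprocess

# Hypothetical paths; buildout substitutes the real part locations.
ca_certs_dir = '/opt/slapos/parts/ca-certificates/certs'
ssl_certs_dir = '/opt/slapos/parts/openssl/etc/ssl/certs'
openssl_bin = '/opt/slapos/parts/openssl/bin/openssl'

for root, _, files in os.walk(ca_certs_dir):
    for name in files:
        if not name.endswith('.crt'):
            continue
        cert = os.path.join(root, name)
        # Same subject hash the shell loop uses for the symlink name.
        subject_hash = subprocess.check_output(
            [openssl_bin, 'x509', '-hash', '-noout', '-in', cert]).decode().strip()
        link = os.path.join(ssl_certs_dir, subject_hash + '.0')
        if not os.path.lexists(link):
            os.symlink(cert, link)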
--- poptconfig.c~ 2011-09-10 22:49:30.802250000 +0800
+++ poptconfig.c 2011-09-10 23:46:30.572048000 +0800
@@ -45,8 +45,9 @@ extern int glob_pattern_p (const char *_
#if !defined(__GLIBC__)
/* Return nonzero if PATTERN contains any metacharacters.
Metacharacters can be quoted with backslashes if QUOTE is nonzero. */
+#define glob_pattern_p glob_pattern_p_s
static int
-glob_pattern_p (const char * pattern, int quote)
+glob_pattern_p_s(const char * pattern, int quote)
/*@*/
{
const char * p;
...@@ -44,6 +44,9 @@ configure-options =
--enable-unicode=ucs4
--with-system-expat
--with-threads
# Profiled build:
make-binary =
make-targets = make profile-opt && make install
# the entry "-Wl,-rpath=${file:location}/lib" below is needed by python-magic,
# which would otherwise load the system libmagic.so with ctypes
......
...@@ -8,6 +8,8 @@ parts =
[python-openssl]
recipe = zc.recipe.egg:custom
egg = pyOpenSSL
include-dirs =
${openssl:location}/include/
library-dirs =
${openssl:location}/lib/
rpath =
......
[buildout]
extends =
../../component/gnutls/buildout.cfg
../../component/libpng/buildout.cfg
../../component/libuuid/buildout.cfg
../../component/pkgconfig/buildout.cfg
../../component/xorg/buildout.cfg
../../component/zlib/buildout.cfg
[kvm]
# Backward compatibility
<= qemu-kvm
[qemu-kvm]
# XXX Change all reference to kvm section to qemu section, then
# use qemu as main name section.
[qemu]
<= kvm
[kvm]
recipe = slapos.recipe.cmmi
# qemu-kvm and qemu are now the same since 1.3.
url = http://wiki.qemu-project.org/download/qemu-1.4.1.tar.bz2
url = http://wiki.qemu-project.org/download/qemu-1.5.1.tar.bz2
md5sum = eb2d696956324722b5ecfa46e41f9a75
md5sum = b56e73bdcfdb214d5c68e13111aca96f
depends =
${libpng:so_version}
configure-options =
--target-list=""
--target-list=x86_64-softmmu
--enable-system
--with-system-pixman
--disable-sdl
--disable-xen
--enable-vnc-tls
...@@ -36,4 +40,27 @@ configure-options =
environment =
PATH=${pkgconfig:location}/bin:%(PATH)s
PKG_CONFIG_PATH=${gnutls:location}/lib/pkgconfig:${glib:location}/lib/pkgconfig:${pixman:location}/lib/pkgconfig
LDFLAGS=-L${pixman:location}/lib -Wl,-rpath=${pixman:location}/lib
# The following is only available in buildout2, which we don't use yet.
[kvm-bits64]
configure-options =
--target-list=x86_64-softmmu
${kvm:configure-options}
[kvm-bits32]
configure-options =
--target-list=i386-softmmu
${kvm:configure-options}
[debian-amd64-netinst.iso]
# Download the installer of Debian 7 (Wheezy)
recipe = hexagonit.recipe.download
url = http://cdimage.debian.org/debian-cd/7.1.0/amd64/iso-cd/debian-7.1.0-amd64-netinst.iso
filename = ${:_buildout_section_name_}
md5sum = 80f498a1f9daa76bc911ae13692e4495
download-only = true
mode = 0644
location = ${buildout:parts-directory}/${:_buildout_section_name_}
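hexagonit.recipe.download refuses to install the image when its checksum does not match the md5sum recorded above. The check it performs amounts to something like the following sketch (the local file name is hypothetical; the expected digest is the one pinned in the section):

import hashlib

# Hypothetical local copy of the installer image.
image_path = 'debian-7.1.0-amd64-netinst.iso'
expected_md5 = '80f498a1f9daa76bc911ae13692e4495'

digest = hashlib.md5()
with open(image_path, 'rb') as image:
    for chunk in iter(lambda: image.read(1 << 20), b''):
        digest.update(chunk)
if digest.hexdigest() != expected_md5:
    raise SystemExit('md5 mismatch for %s, refusing to use it' % image_path)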
[buildout]
extends =
../librsync/buildout.cfg
../coreutils/buildout.cfg
parts =
rdiff-backup
rdiff-backup-output
[rdiff-backup-build]
recipe = zc.recipe.egg:custom
...@@ -34,3 +35,11 @@ configure-options =
prep
compile
make-options = -C rdiff-backup-1.2.8-5/build
[rdiff-backup-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -x ${:rdiff-backup}
rdiff-backup = ${buildout:directory}/bin/rdiff-backup
# Squid: Optimising Web Delivery
# http://squid-cache.org
[buildout]
parts =
squid
extends =
../pkgconfig/buildout.cfg
[squid]
recipe = hexagonit.recipe.cmmi
url = http://www.squid-cache.org/Versions/v3/3.2/squid-3.2.1.tar.gz
md5sum = 3fb81acc6b70a432e3f0d8a0491056dc
configure-options =
--disable-dependency-tracking
--disable-translation
--disable-htcp
--disable-snmp
--disable-loadable-modules
--disable-icmp
--disable-esi
--disable-icap-client
--disable-wccp
--disable-wccpv2
--disable-eui
--enable-http-violations
--disable-ipfw-transparent
--disable-ipf-transparent
--disable-pf-transparent
--disable-linux-netfilter
--enable-follow-x-forwarded-for
--disable-auth
--disable-url-rewrite-helpers
--disable-auto-locale
--disable-kerberos
--enable-x-accelerator-vary
--disable-external-acl-helpers
--disable-auth-ntlm
--with-krb5-config=no
environment =
PATH=${pkgconfig:location}/bin:%(PATH)s
[buildout]
extends =
../ncurses/buildout.cfg
[texinfo]
# Most other components are not happy with texinfo 5, because it treats some
# used-to-be-warnings as errors.
<= texinfo4
[texinfo4]
recipe = slapos.recipe.cmmi
url = http://ftp.gnu.org/gnu/texinfo/texinfo-4.13.tar.gz
md5sum = 71ba711519209b5fb583fed2b3d86fcb
configure-options =
--disable-static
environment =
CFLAGS=-I${ncurses:location}/include
LDFLAGS=-L${ncurses:location}/lib -Wl,-rpath=${ncurses:location}/lib
[buildout]
extends =
../libtool/buildout.cfg
../libuuid/buildout.cfg
[zeromq]
<= zeromq3
[zeromq3]
recipe = slapos.recipe.cmmi
url = http://download.zeromq.org/zeromq-3.2.3.tar.gz
md5sum = 1abf8246363249baf5931a065ee38203
configure-options = --without-documentation
environment =
PATH=${libtool:location}/bin:%(PATH)s
LDFLAGS=-L${libtool:location}/lib -Wl,-rpath -Wl,${libtool:location}/lib -L${libuuid:location}/lib -Wl,-rpath -Wl,${libuuid:location}/lib
[zeromq2]
recipe = slapos.recipe.cmmi
url = http://download.zeromq.org/zeromq-2.2.0.tar.gz
md5sum = 1b11aae09b19d18276d0717b2ea288f6
configure-options =
--without-documentation
environment =
PATH=${libtool:location}/bin:%(PATH)s
CXXFLAGS=-I${libuuid:location}/include
LDFLAGS=-L${libtool:location}/lib -Wl,-rpath -Wl,${libtool:location}/lib -L${libuuid:location}/lib -Wl,-rpath -Wl,${libuuid:location}/lib
...@@ -28,7 +28,7 @@ from setuptools import setup, find_packages
import glob
import os
version = '0.78.2.dev'
version = '0.83.1'
name = 'slapos.cookbook'
long_description = open("README.txt").read() + "\n" + \
open("CHANGES.txt").read() + "\n"
...@@ -70,7 +70,6 @@ setup(name=name,
'zc.buildout': [
'addresiliency = slapos.recipe.addresiliency:Recipe',
'agent = slapos.recipe.agent:Recipe',
'apache.frontend = slapos.recipe.apache_frontend:Recipe',
'apache.zope.backend = slapos.recipe.apache_zope_backend:Recipe',
'apacheperl = slapos.recipe.apacheperl:Recipe',
'apachephp = slapos.recipe.apachephp:Recipe',
...@@ -165,8 +164,7 @@ setup(name=name,
'publish.serialised = slapos.recipe.publish:Serialised',
'publishsection = slapos.recipe.publish:PublishSection',
'publishurl = slapos.recipe.publishurl:Recipe',
'pwgen = slapos.recipe.pwgen:Recipe',
'pwgen.stable = slapos.recipe.pwgen:StablePasswordGeneratorRecipe',
'readline = slapos.recipe.readline:Recipe',
'redis.server = slapos.recipe.redis:Recipe',
'request = slapos.recipe.request:Recipe',
'request.serialised = slapos.recipe.request:Serialised',
...@@ -192,6 +190,7 @@ setup(name=name,
'slaprunner.import = slapos.recipe.slaprunner.backup:ImportRecipe',
'softwaretype = slapos.recipe.softwaretype:Recipe',
'sphinx= slapos.recipe.sphinx:Recipe',
'squid = slapos.recipe.squid:Recipe',
'sshkeys_authority = slapos.recipe.sshkeys_authority:Recipe',
'sshkeys_authority.request = slapos.recipe.sshkeys_authority:Request',
'stunnel = slapos.recipe.stunnel:Recipe',
......
...@@ -12,15 +12,14 @@ On slap console, you can instantiate varnish like this:
instance = request(
software_type='varnish',
partition_parameter_kw={
'tidstorage-url':'http://[your tidstorage address]:your tidstorage port',
'backend-url':'https://[your_backend_address]:your_backend_port',
'web-checker-frontend-url':'http://www.example.com',
'web-checker-mail-address':'web-checker-result@example.com',
'web-checker-smtp-host':'mail.example.com',
}
)
tidstorage-url is the backend url that varnish will cache. It is expected that
the backend is created by tidstorage recipe.
backend-url is the backend url that varnish will cache.
web-checker-frontend-url is the entry-point-url that web checker will check
the HTTP headers of all the pages in the web site.
......
...@@ -31,10 +31,10 @@ KVM with Remote and gzipped Image
gzip = true
# Use -hda instead -drive arg
# Default is drive (see Options bellow)
# Default is drive (see Options below)
image_type = hda
### Common Configuration bellow. ###
### Common Configuration below. ###
# VNC is optional
kvm_vnc = <SOME-IP>:<VNC-DISPLAY>
......
# -*- coding: utf-8 -*-
import logging
import time

import slapos

log = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)


class Renamer(object):
    def __init__(self, server_url, key_file, cert_file, computer_guid,
                 partition_id, software_release, namebase):
        self.server_url = server_url
        self.key_file = key_file
        self.cert_file = cert_file
        self.computer_guid = computer_guid
        self.partition_id = partition_id
        self.software_release = software_release
        self.namebase = namebase

    def _failover(self):
        """\
        This method does
        - retrieve the broken computer partition
        - change its reference to 'broken-...' and its software type to 'frozen'
        - retrieve the winner computer partition (attached to this process)
        - change its reference to replace the broken one.
          later, slapgrid will change its software_type as well.

        Then, after running slapgrid-cp a few times, the winner takes over and
        a new cp is created to replace it as an importer.
        """
        slap = slapos.slap.slap()
        slap.initializeConnection(self.server_url, self.key_file, self.cert_file)

        # partition that will take over.
        cp_winner = slap.registerComputerPartition(computer_guid=self.computer_guid,
                                                   partition_id=self.partition_id)

        # XXX although we can already rename cp_winner, to change its software type we need to
        # get hold of the root cp as well
        cp_exporter_ref = self.namebase + '0'  # this is ok. the boss is always number zero.

        # partition to be deactivated
        cp_broken = cp_winner.request(software_release=self.software_release,
                                      software_type='frozen',
                                      state='stopped',
                                      partition_reference=cp_exporter_ref)

        broken_new_ref = 'broken-{}'.format(time.strftime("%d-%b_%H:%M:%S", time.gmtime()))

        log.debug("Renaming {}: {}".format(cp_broken.getId(), broken_new_ref))
        cp_broken.rename(new_name=broken_new_ref)
        cp_broken.stopped()

        log.debug("Renaming {}: {}".format(cp_winner.getId(), cp_exporter_ref))
        # update name (and later, software type) for the partition that will take over
        cp_winner.rename(new_name=cp_exporter_ref)
        cp_winner.bang(message='partitions have been renamed!')

    def failover(self):
        try:
            self._failover()
            log.info('Renaming done')
        except slapos.slap.ServerError:
            log.info('Internal server error')
# -*- coding: utf-8 -*-
import slapos.recipe.addresiliency.renamer

def run(args):
    renamer = slapos.recipe.addresiliency.renamer.Renamer(server_url = args.pop('server_url'),
                                                          key_file = args.pop('key_file'),
                                                          cert_file = args.pop('cert_file'),
                                                          computer_guid = args.pop('computer_id'),
                                                          partition_id = args.pop('partition_id'),
                                                          software_release = args.pop('software'),
                                                          namebase = args.pop('namebase'))
    renamer.failover()

import logging
import time

import slapos
from slapos.slap.slap import NotFoundError

log = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)


def takeover(server_url, key_file, cert_file, computer_guid,
             partition_id, software_release, namebase,
             winner_instance_suffix = None):
    """
    This function does
    - retrieve the broken computer partition
    - change its reference to 'broken-...' and its software type to 'frozen'
    - retrieve the winner computer partition (attached to this process)
    - change its reference to replace the broken one.
      later, slapgrid will change its software_type as well.

    Then, after running slapgrid-cp a few times, the winner takes over and
    a new cp is created to replace it as an importer.
    """
    slap = slapos.slap.slap()
    slap.initializeConnection(server_url, key_file, cert_file)
    current_partition = slap.registerComputerPartition(computer_guid=computer_guid,
                                                       partition_id=partition_id)

    # partition that will take over.
    if winner_instance_suffix:
        winner_instance_name = namebase + winner_instance_suffix
        # XXX: we hardcode a lot of values here, because request is a settergetter, all at once.
        cp_winner = current_partition.request(software_release=software_release,
                                              software_type='%s-import' % namebase,
                                              partition_reference=winner_instance_name)
    else:
        # This script is run in the winning partition: use this one as winner
        cp_winner = current_partition

    # XXX although we can already rename cp_winner, to change its software type we need to
    # get hold of the root cp as well
    cp_exporter_ref = namebase + '0'  # this is ok. the boss is always number zero.

    # partition to be deactivated
    cp_broken = cp_winner.request(software_release=software_release,
                                  software_type='frozen',
                                  state='stopped',
                                  partition_reference=cp_exporter_ref)

    broken_new_ref = 'broken-{}'.format(time.strftime("%d-%b_%H:%M:%S", time.gmtime()))

    log.debug("Renaming {}: {}".format(cp_broken.getId(), broken_new_ref))
    cp_broken.rename(new_name=broken_new_ref)

    log.debug("Renaming {}: {}".format(cp_winner.getId(), cp_exporter_ref))
    # update name (and later, software type) for the partition that will take over
    while True:
        time.sleep(10)
        try:
            cp_winner.rename(new_name=cp_exporter_ref)
            break
        except NotFoundError:
            log.warning('Impossible to rename. Retrying in a few seconds...')
    log.debug('Renamed.')

    cp_winner.bang(message='partitions have been renamed!')
    # Note: Root instance will reconfigure itself the winning instance (software_type
    # and parameters.)


def run(args):
    slapos.recipe.addresiliency.takeover.takeover(server_url = args.pop('server_url'),
                                                  key_file = args.pop('key_file'),
                                                  cert_file = args.pop('cert_file'),
                                                  computer_guid = args.pop('computer_id'),
                                                  partition_id = args.pop('partition_id'),
                                                  software_release = args.pop('software'),
                                                  namebase = args.pop('namebase'))
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import BaseSlapRecipe
import os
import pkg_resources
import hashlib
import operator
import sys
import zc.buildout
import zc.recipe.egg
import ConfigParser
import re
import traceback
TRUE_VALUES = ['y', 'yes', '1', 'true']
class Recipe(BaseSlapRecipe):
def getTemplateFilename(self, template_name):
return pkg_resources.resource_filename(__name__,
'template/%s' % template_name)
def _install(self):
# Define directory not defined in deprecated lib
self.service_directory = os.path.join(self.etc_directory, 'service')
# Check for mandatory arguments
frontend_domain_name = self.parameter_dict.get("domain")
if frontend_domain_name is None:
raise zc.buildout.UserError('No domain name specified. Please define '
'the "domain" instance parameter.')
# Define optional arguments
frontend_port_number = self.parameter_dict.get("port", 4443)
frontend_plain_http_port_number = self.parameter_dict.get(
"plain_http_port", 8080)
base_varnish_port = 26009
slave_instance_list = self.parameter_dict.get("slave_instance_list", [])
self.path_list = []
self.requirements, self.ws = self.egg.working_set()
# self.cron_d is a directory, where cron jobs can be registered
self.cron_d = self.installCrond()
self.killpidfromfile = zc.buildout.easy_install.scripts(
[('killpidfromfile', 'slapos.toolbox.killpidfromfile',
'killpidfromfile')], self.ws, sys.executable, self.bin_directory)[0]
self.path_list.append(self.killpidfromfile)
rewrite_rule_list = []
rewrite_rule_https_only_list = []
rewrite_rule_zope_list = []
rewrite_rule_zope_path_list = []
slave_dict = {}
service_dict = {}
# Sort slave instance by reference to avoid most security issues
slave_instance_list = sorted(slave_instance_list,
key=operator.itemgetter('slave_reference'))
# dict of used domains, only used to track duplicates
domain_dict = {}
for slave_instance in slave_instance_list:
# Sanitize inputs
backend_url = slave_instance.get("url", None)
reference = slave_instance.get("slave_reference")
enable_cache = slave_instance.get('enable_cache', '').lower() in TRUE_VALUES
slave_type = slave_instance.get('type', '').lower() or None
https_only = slave_instance.get('https-only', '').lower() in TRUE_VALUES
# Set scheme (http? https?)
if https_only:
scheme = 'https://'
else:
scheme = 'http://'
self.logger.info('Processing slave instance: %s' % reference)
# Check for mandatory slave fields
if backend_url is None:
self.logger.warn('No "url" parameter is defined for %s slave'\
'instance. Ignoring it.' % reference)
continue
# Check for custom domain (like mypersonaldomain.com)
# If no custom domain, use generated one.
# Note: if we get an empty custom_domain parameter, we ignore it
domain = slave_instance.get('custom_domain')
if isinstance(domain, basestring):
domain = domain.strip()
if domain is None or domain.strip() == '':
domain = "%s.%s" % (reference.replace("-", "").lower(),
frontend_domain_name)
if domain_dict.get(domain):
# This domain already has been processed, skip this new one
continue
else:
domain_dict[domain] = True
# Define the URL where the instance will be available
# WARNING: we use default ports (443, 80) here.
slave_dict[reference] = "%s%s/" % (scheme, domain)
# Check if we want varnish+stunnel cache.
#if enable_cache:
# # XXX-Cedric : need to refactor to clean code? (to many variables)
# rewrite_rule = self.configureVarnishSlave(
# base_varnish_port, backend_url, reference, service_dict, domain)
# base_varnish_port += 2
#else:
# rewrite_rule = "%s %s" % (domain, backend_url)
# Temporary forbid activation of cache until it is properly tested
rewrite_rule = "%s %s" % (domain, backend_url)
# Finally, if successful, we add the rewrite rule to our list of rules
# We have 4 RewriteMaps:
# - One for generic (non-zope) websites, accepting both HTTP and HTTPS
# - One for generic websites that only accept HTTPS
# - Two for Zope-based websites
if rewrite_rule:
# We check if we have a zope slave. It requires different rewrite
# rule structure.
# So we will have one RewriteMap for normal websites, and one
# RewriteMap for Zope Virtual Host Monster websites.
if slave_type in ['zope']:
rewrite_rule_zope_list.append(rewrite_rule)
# For Zope, we have another dict containing the path e.g '/erp5/...
rewrite_rule_path = "%s %s" % (domain, slave_instance.get('path', ''))
rewrite_rule_zope_path_list.append(rewrite_rule_path)
else:
if https_only:
rewrite_rule_https_only_list.append(rewrite_rule)
else:
rewrite_rule_list.append(rewrite_rule)
# Certificate stuff
valid_certificate_str = self.parameter_dict.get("domain_ssl_ca_cert")
valid_key_str = self.parameter_dict.get("domain_ssl_ca_key")
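# If no certificate/key pair is provided by the user, generate one with the
# local certificate authority; otherwise install the provided pair as-is.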
if valid_certificate_str is None and valid_key_str is None:
ca_conf = self.installCertificateAuthority()
key, certificate = self.requestCertificate(frontend_domain_name)
else:
ca_conf = self.installValidCertificateAuthority(
frontend_domain_name, valid_certificate_str, valid_key_str)
key = ca_conf.pop("key")
certificate = ca_conf.pop("certificate")
if service_dict != {}:
if valid_certificate_str is not None and valid_key_str is not None:
self.installCertificateAuthority()
stunnel_key, stunnel_certificate = \
self.requestCertificate(frontend_domain_name)
else:
stunnel_key, stunnel_certificate = key, certificate
self.installStunnel(service_dict,
stunnel_certificate, stunnel_key,
ca_conf["ca_crl"],
ca_conf["certificate_authority_path"])
apache_parameter_dict = self.installFrontendApache(
ip_list=["[%s]" % self.getGlobalIPv6Address(),
self.getLocalIPv4Address()],
port=frontend_port_number,
plain_http_port=frontend_plain_http_port_number,
name=frontend_domain_name,
rewrite_rule_list=rewrite_rule_list,
rewrite_rule_https_only_list=rewrite_rule_https_only_list,
rewrite_rule_zope_list=rewrite_rule_zope_list,
rewrite_rule_zope_path_list=rewrite_rule_zope_path_list,
key=key, certificate=certificate)
# Send connection information about each slave
for reference, url in slave_dict.iteritems():
self.logger.debug("Sending connection parameters of slave "
"instance: %s" % reference)
try:
connection_dict = {
# Send the public IPs (if possible) so that the user knows which IP
# to bind their domain name to
'frontend_ipv6_address': self.getGlobalIPv6Address(),
'frontend_ipv4_address': self.parameter_dict.get("public-ipv4",
self.getLocalIPv4Address()),
'site_url': url,
}
self.setConnectionDict(connection_dict, reference)
except Exception:
self.logger.fatal("Error while sending slave %s information: %s",
reference, traceback.format_exc())
# Then set it for master instance
self.setConnectionDict(
dict(site_url=apache_parameter_dict["site_url"],
frontend_ipv6_address=self.getGlobalIPv6Address(),
frontend_ipv4_address=self.getLocalIPv4Address()))
# Promises: check that Apache listens on the IPv6 and IPv4 frontend ports
promise_config = dict(
hostname=self.getGlobalIPv6Address(),
port=frontend_port_number,
python_path=sys.executable,
)
promise_v6 = self.createPromiseWrapper(
'apache_ipv6',
self.substituteTemplate(
pkg_resources.resource_filename(
'slapos.recipe.check_port_listening',
'template/socket_connection_attempt.py.in'),
promise_config))
self.path_list.append(promise_v6)
promise_config = dict(
hostname=self.getLocalIPv4Address(),
port=frontend_port_number,
python_path=sys.executable,
)
promise_v4 = self.createPromiseWrapper(
'apache_ipv4',
self.substituteTemplate(
pkg_resources.resource_filename(
'slapos.recipe.check_port_listening',
'template/socket_connection_attempt.py.in'),
promise_config))
self.path_list.append(promise_v4)
return self.path_list
def configureVarnishSlave(self, base_varnish_port, url, reference,
service_dict, domain):
# Varnish should use stunnel to connect to the backend
base_varnish_control_port = base_varnish_port
base_varnish_port += 1
# Use regexes to extract the backend host (bracketed IPv6 or dotted IPv4) and port from the URL
host_regex = "((\[\w*|[0-9]+\.)(\:|)).*(\]|\.[0-9]+)"
slave_host = re.search(host_regex, url).group(0)
port_regex = "\w+(\/|)$"
matcher = re.search(port_regex, url)
if matcher is not None:
slave_port = matcher.group(0)
slave_port = slave_port.replace("/", "")
elif url.startswith("https://"):
slave_port = 443
else:
slave_port = 80
service_name = "varnish_%s" % reference
varnish_ip = self.getLocalIPv4Address()
stunnel_port = base_varnish_port + 1
self.installVarnishCache(service_name,
ip=varnish_ip,
port=base_varnish_port,
control_port=base_varnish_control_port,
backend_host=varnish_ip,
backend_port=stunnel_port,
size="1G")
service_dict[service_name] = dict(public_ip=varnish_ip,
public_port=stunnel_port,
private_ip=slave_host.replace("[", "").replace("]", ""),
private_port=slave_port)
return "%s http://%s:%s" % \
(domain, varnish_ip, base_varnish_port)
def requestCertificate(self, name):
hash_ = hashlib.sha512(name).hexdigest()
key = os.path.join(self.ca_private, hash_ + self.ca_key_ext)
certificate = os.path.join(self.ca_certs, hash_ + self.ca_crt_ext)
parser = ConfigParser.RawConfigParser()
parser.add_section('certificate')
parser.set('certificate', 'name', name)
parser.set('certificate', 'key_file', key)
parser.set('certificate', 'certificate_file', certificate)
parser.write(open(os.path.join(self.ca_request_dir, hash_), 'w'))
return key, certificate
def installCrond(self):
timestamps = self.createDataDirectory('cronstamps')
cron_output = os.path.join(self.log_directory, 'cron-output')
self._createDirectory(cron_output)
catcher = zc.buildout.easy_install.scripts([('catchcron',
__name__ + '.catdatefile', 'catdatefile')], self.ws, sys.executable,
self.bin_directory, arguments=[cron_output])[0]
self.path_list.append(catcher)
cron_d = os.path.join(self.etc_directory, 'cron.d')
crontabs = os.path.join(self.etc_directory, 'crontabs')
self._createDirectory(cron_d)
self._createDirectory(crontabs)
wrapper = zc.buildout.easy_install.scripts([('crond',
'slapos.recipe.librecipe.execute', 'execute')], self.ws, sys.executable,
self.service_directory, arguments=[
self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
'-t', timestamps, '-f', '-l', '5', '-M', catcher]
)[0]
self.path_list.append(wrapper)
return cron_d
def installValidCertificateAuthority(self, domain_name, certificate, key):
ca_dir = os.path.join(self.data_root_directory, 'ca')
ca_private = os.path.join(ca_dir, 'private')
ca_certs = os.path.join(ca_dir, 'certs')
ca_crl = os.path.join(ca_dir, 'crl')
self._createDirectory(ca_dir)
for path in (ca_private, ca_certs, ca_crl):
self._createDirectory(path)
key_path = os.path.join(ca_private, domain_name + ".key")
certificate_path = os.path.join(ca_certs, domain_name + ".crt")
self._writeFile(key_path, key)
self._writeFile(certificate_path, certificate)
return dict(certificate_authority_path=ca_dir,
ca_crl=ca_crl,
certificate=certificate_path,
key=key_path)
def installCertificateAuthority(self, ca_country_code='XX',
ca_email='xx@example.com', ca_state='State', ca_city='City',
ca_company='Company'):
backup_path = self.createBackupDirectory('ca')
self.ca_dir = os.path.join(self.data_root_directory, 'ca')
self._createDirectory(self.ca_dir)
self.ca_request_dir = os.path.join(self.ca_dir, 'requests')
self._createDirectory(self.ca_request_dir)
config = dict(ca_dir=self.ca_dir, request_dir=self.ca_request_dir)
self.ca_private = os.path.join(self.ca_dir, 'private')
self.ca_certs = os.path.join(self.ca_dir, 'certs')
self.ca_crl = os.path.join(self.ca_dir, 'crl')
self.ca_newcerts = os.path.join(self.ca_dir, 'newcerts')
self.ca_key_ext = '.key'
self.ca_crt_ext = '.crt'
for d in [self.ca_private, self.ca_crl, self.ca_newcerts, self.ca_certs]:
self._createDirectory(d)
for f in ['crlnumber', 'serial']:
if not os.path.exists(os.path.join(self.ca_dir, f)):
open(os.path.join(self.ca_dir, f), 'w').write('01')
if not os.path.exists(os.path.join(self.ca_dir, 'index.txt')):
open(os.path.join(self.ca_dir, 'index.txt'), 'w').write('')
openssl_configuration = os.path.join(self.ca_dir, 'openssl.cnf')
config.update(
working_directory=self.ca_dir,
country_code=ca_country_code,
state=ca_state,
city=ca_city,
company=ca_company,
email_address=ca_email,
)
self._writeFile(openssl_configuration, pkg_resources.resource_string(
__name__, 'template/openssl.cnf.ca.in') % config)
# XXX-Cedric: Don't use this, but use slapos.recipe.certificate_authority
# from the instance profile.
self.path_list.extend(zc.buildout.easy_install.scripts([
('certificate_authority', __name__ + '.certificate_authority',
'runCertificateAuthority')],
self.ws, sys.executable, self.service_directory, arguments=[dict(
openssl_configuration=openssl_configuration,
openssl_binary=self.options['openssl_binary'],
certificate=os.path.join(self.ca_dir, 'cacert.pem'),
key=os.path.join(self.ca_private, 'cakey.pem'),
crl=os.path.join(self.ca_crl),
request_dir=self.ca_request_dir
)]))
# configure backup
backup_cron = os.path.join(self.cron_d, 'ca_rdiff_backup')
open(backup_cron, 'w').write(
'''0 0 * * * %(rdiff_backup)s %(source)s %(destination)s'''%dict(
rdiff_backup=self.options['rdiff_backup_binary'],
source=self.ca_dir,
destination=backup_path))
self.path_list.append(backup_cron)
return dict(
ca_certificate=os.path.join(config['ca_dir'], 'cacert.pem'),
ca_crl=os.path.join(config['ca_dir'], 'crl'),
certificate_authority_path=config['ca_dir']
)
def _getApacheConfigurationDict(self, name, ip_list, port):
apache_conf = dict()
apache_conf['server_name'] = name
apache_conf['pid_file'] = self.options['pid-file']
apache_conf['lock_file'] = os.path.join(self.run_directory,
name + '.lock')
apache_conf['document_root'] = os.path.join(self.data_root_directory,
'htdocs')
apache_conf['instance_home'] = os.path.join(self.work_directory)
apache_conf['httpd_home'] = self.options['httpd_home']
apache_conf['ip_list'] = ip_list
apache_conf['port'] = port
apache_conf['server_admin'] = 'admin@'
apache_conf['error_log'] = self.options['error-log']
apache_conf['access_log'] = self.options['access-log']
return apache_conf
def installVarnishCache(self, name, ip, port, control_port, backend_host,
backend_port, size="1G"):
"""
Install a varnish daemon for a certain address
"""
directory = self.createDataDirectory(name)
varnish_config = dict(
directory=directory,
pid = "%s/varnish.pid" % directory,
port="%s:%s" % (ip, port),
varnishd_binary=self.options["varnishd_binary"],
control_port="%s:%s" % (ip, control_port),
storage="file,%s/storage.bin,%s" % (directory, size))
config_file = self.createConfigurationFile("%s.conf" % name,
self.substituteTemplate(self.getTemplateFilename('varnish.vcl.in'),
dict(backend_host=backend_host, backend_port=backend_port)))
varnish_argument_list = [varnish_config['varnishd_binary'].strip(),
"-F", "-n", directory, "-P", varnish_config["pid"], "-p",
"cc_command=exec %s " % self.options["gcc_binary"] +\
"-fpic -shared -o %o %s",
"-f", config_file,
"-a", varnish_config["port"], "-T", varnish_config["control_port"],
"-s", varnish_config["storage"]]
environment = dict(PATH="%s:%s" % (self.options["binutils_directory"],
os.environ.get('PATH')))
wrapper = zc.buildout.easy_install.scripts([(name,
'slapos.recipe.librecipe.execute', 'executee')], self.ws,
sys.executable, self.service_directory, arguments=[varnish_argument_list,
environment])[0]
self.path_list.append(wrapper)
return varnish_config
def installStunnel(self, service_dict, certificate,
key, ca_crl, ca_path):
"""Installs stunnel
service_dict =
{ name: (public_ip, private_ip, public_port, private_port),}
"""
template_filename = self.getTemplateFilename('stunnel.conf.in')
template_entry_filename = self.getTemplateFilename('stunnel.conf.entry.in')
log = os.path.join(self.log_directory, 'stunnel.log')
pid_file = os.path.join(self.run_directory, 'stunnel.pid')
stunnel_conf = dict(
pid_file=pid_file,
log=log,
cert = certificate,
key = key,
ca_crl = ca_crl,
ca_path = ca_path,
entry_str=''
)
entry_list = []
for name, parameter_dict in service_dict.iteritems():
parameter_dict["name"] = name
entry_str = self.substituteTemplate(template_entry_filename,
parameter_dict)
entry_list.append(entry_str)
stunnel_conf["entry_str"] = "\n".join(entry_list)
stunnel_conf_path = self.createConfigurationFile("stunnel.conf",
self.substituteTemplate(template_filename,
stunnel_conf))
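# execute_wait delays starting stunnel until the certificate and key files exist.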
wrapper = zc.buildout.easy_install.scripts([('stunnel',
'slapos.recipe.librecipe.execute', 'execute_wait')], self.ws,
sys.executable, self.service_directory, arguments=[
[self.options['stunnel_binary'].strip(), stunnel_conf_path],
[certificate, key]]
)[0]
self.path_list.append(wrapper)
return stunnel_conf
def installFrontendApache(self, ip_list, key, certificate, name,
port=4443, plain_http_port=8080,
rewrite_rule_list=None,
rewrite_rule_zope_list=None,
rewrite_rule_https_only_list=None,
rewrite_rule_zope_path_list=None,
access_control_string=None):
if rewrite_rule_list is None:
rewrite_rule_list = []
if rewrite_rule_https_only_list is None:
rewrite_rule_https_only_list = []
if rewrite_rule_zope_list is None:
rewrite_rule_zope_list = []
if rewrite_rule_zope_path_list is None:
rewrite_rule_zope_path_list = []
# Create htdocs, populate it with default 404 document
htdocs_location = os.path.join(self.data_root_directory, 'htdocs')
self._createDirectory(htdocs_location)
notfound_file_location = os.path.join(htdocs_location, 'notfound.html')
notfound_template_file_location = self.getTemplateFilename(
'notfound.html')
notfound_file_content = open(notfound_template_file_location, 'r').read()
self._writeFile(notfound_file_location, notfound_file_content)
# Create mod_ssl cache directory
cache_directory_location = os.path.join(self.var_directory, 'cache')
mod_ssl_cache_location = os.path.join(cache_directory_location,
'httpd_mod_ssl')
self._createDirectory(cache_directory_location)
self._createDirectory(mod_ssl_cache_location)
# Create "custom" apache configuration files if it does not exist.
# Note : Those files won't be erased or changed by slapgrid.
# It can be freely customized by node admin.
custom_apache_configuration_directory = os.path.join(
self.data_root_directory, 'apache-conf.d')
self._createDirectory(custom_apache_configuration_directory)
# The first one is included at the end of the apache configuration file
custom_apache_configuration_file_location = os.path.join(
custom_apache_configuration_directory, 'apache_frontend.custom.conf')
if not os.path.exists(custom_apache_configuration_file_location):
open(custom_apache_configuration_file_location, 'w')
# The second one is included in the virtualhost section of the apache configuration file
custom_apache_virtual_configuration_file_location = os.path.join(
custom_apache_configuration_directory,
'apache_frontend.virtualhost.custom.conf')
if not os.path.exists(custom_apache_virtual_configuration_file_location):
open(custom_apache_virtual_configuration_file_location, 'w')
# Create backup of custom apache configuration
backup_path = self.createBackupDirectory('custom_apache_conf_backup')
backup_cron = os.path.join(self.cron_d, 'custom_apache_conf_backup')
open(backup_cron, 'w').write(
'''0 0 * * * %(rdiff_backup)s %(source)s %(destination)s'''%dict(
rdiff_backup=self.options['rdiff_backup_binary'],
source=custom_apache_configuration_directory,
destination=backup_path))
self.path_list.append(backup_cron)
# Create configuration file and rewritemaps
apachemap_path = self.createConfigurationFile(
"apache_rewritemap_generic.txt",
"\n".join(rewrite_rule_list)
)
apachemap_httpsonly_path = self.createConfigurationFile(
"apache_rewritemap_httpsonly.txt",
"\n".join(rewrite_rule_https_only_list)
)
apachemap_zope_path = self.createConfigurationFile(
"apache_rewritemap_zope.txt",
"\n".join(rewrite_rule_zope_list)
)
apachemap_zopepath_path = self.createConfigurationFile(
"apache_rewritemap_zopepath.txt",
"\n".join(rewrite_rule_zope_path_list)
)
apache_conf = self._getApacheConfigurationDict(name, ip_list, port)
apache_conf['ssl_snippet'] = self.substituteTemplate(
self.getTemplateFilename('apache.ssl-snippet.conf.in'),
dict(login_certificate=certificate,
login_key=key,
httpd_mod_ssl_cache_directory=mod_ssl_cache_location,
)
)
apache_conf["listen"] = "\n".join([
"Listen %s:%s" % (ip, port)
for port in (plain_http_port, port)
for ip in ip_list
])
path = self.substituteTemplate(
self.getTemplateFilename('apache.conf.path-protected.in'),
dict(path='/',
access_control_string='none',
document_root=apache_conf['document_root'],
)
)
apache_conf.update(**dict(
path_enable=path,
apachemap_path=apachemap_path,
apachemap_httpsonly_path=apachemap_httpsonly_path,
apachemapzope_path=apachemap_zope_path,
apachemapzopepath_path=apachemap_zopepath_path,
apache_domain=name,
https_port=port,
plain_http_port=plain_http_port,
custom_apache_conf=custom_apache_configuration_file_location,
custom_apache_virtualhost_conf=custom_apache_virtual_configuration_file_location,
))
apache_conf_string = self.substituteTemplate(
self.getTemplateFilename('apache.conf.in'), apache_conf)
apache_config_file = self.createConfigurationFile('apache_frontend.conf',
apache_conf_string)
self.path_list.append(apache_config_file)
self.path_list.extend(zc.buildout.easy_install.scripts([(
'frontend_apache', 'slapos.recipe.erp5.apache', 'runApache')], self.ws,
sys.executable, self.service_directory, arguments=[
dict(
required_path_list=[key, certificate],
binary=self.options['httpd_binary'],
config=apache_config_file)
]))
return dict(site_url="https://%s:%s/" % (name, port))
import os
import subprocess
import time
import ConfigParser
import uuid
def popenCommunicate(command_list, input=None):
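"""Run command_list, optionally feeding it input on stdin; raise ValueError if it exits non-zero."""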
subprocess_kw = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if input is not None:
subprocess_kw.update(stdin=subprocess.PIPE)
popen = subprocess.Popen(command_list, **subprocess_kw)
result = popen.communicate(input)[0]
if popen.returncode is None:
popen.kill()
if popen.returncode != 0:
raise ValueError('Issue during calling %r, result was:\n%s' % (
command_list, result))
return result
class CertificateAuthority:
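"""Minimal certificate authority: (re)creates its own key/certificate when
missing and signs the certificate requests dropped into request_dir."""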
def __init__(self, key, certificate, openssl_binary,
openssl_configuration, request_dir):
self.key = key
self.certificate = certificate
self.openssl_binary = openssl_binary
self.openssl_configuration = openssl_configuration
self.request_dir = request_dir
def checkAuthority(self):
file_list = [ self.key, self.certificate ]
ca_ready = True
for f in file_list:
if not os.path.exists(f):
ca_ready = False
break
if ca_ready:
return
for f in file_list:
if os.path.exists(f):
os.unlink(f)
try:
# no CA, let us create new one
popenCommunicate([self.openssl_binary, 'req', '-nodes', '-config',
self.openssl_configuration, '-new', '-x509', '-extensions', 'v3_ca',
'-keyout', self.key, '-out', self.certificate, '-days', '10950'],
# Authority name will be random, so no instance has the same issuer
'Certificate Authority %s\n' % uuid.uuid1())
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
def _checkCertificate(self, common_name, key, certificate):
file_list = [key, certificate]
ready = True
for f in file_list:
if not os.path.exists(f):
ready = False
break
if ready:
return False
for f in file_list:
if os.path.exists(f):
os.unlink(f)
csr = certificate + '.csr'
try:
popenCommunicate([self.openssl_binary, 'req', '-config',
self.openssl_configuration, '-nodes', '-new', '-keyout',
key, '-out', csr, '-days', '3650'],
common_name + '\n')
try:
popenCommunicate([self.openssl_binary, 'ca', '-batch', '-config',
self.openssl_configuration, '-out', certificate,
'-infiles', csr])
finally:
if os.path.exists(csr):
os.unlink(csr)
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
else:
return True
def checkRequestDir(self):
for request_file in os.listdir(self.request_dir):
parser = ConfigParser.RawConfigParser()
parser.readfp(open(os.path.join(self.request_dir, request_file), 'r'))
if self._checkCertificate(parser.get('certificate', 'name'),
parser.get('certificate', 'key_file'), parser.get('certificate',
'certificate_file')):
print 'Created certificate %r' % parser.get('certificate', 'name')
def runCertificateAuthority(args):
ca_conf = args[0]
ca = CertificateAuthority(ca_conf['key'], ca_conf['certificate'],
ca_conf['openssl_binary'], ca_conf['openssl_configuration'],
ca_conf['request_dir'])
while True:
ca.checkAuthority()
ca.checkRequestDir()
time.sleep(60)
<Directory %(path)s>
Order Deny,Allow
Allow from %(access_control_string)s
</Directory>
<Directory %(document_root)s>
Order Allow,Deny
Allow from All
</Directory>
<Location %(location)s>
Order Deny,Allow
Deny from all
Allow from %(allow_string)s
</Location>
SSLCertificateFile %(login_certificate)s
SSLCertificateKeyFile %(login_key)s
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
SSLSessionCache shmcb:/%(httpd_mod_ssl_cache_directory)s/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLRandomSeed startup /dev/urandom 256
SSLRandomSeed connect builtin
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
# Accept proxy to sites using self-signed SSL certificates
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
%(file_list)s {
daily
dateext
rotate 30
compress
notifempty
sharedscripts
create
postrotate
%(postrotate)s
endscript
olddir %(olddir)s
}
#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#
# This definition stops the following lines choking if HOME isn't
# defined.
HOME = .
RANDFILE = $ENV::HOME/.rnd
# Extra OBJECT IDENTIFIER info:
#oid_file = $ENV::HOME/.oid
oid_section = new_oids
# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)
[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6
# Policies used by the TSA examples.
tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7
####################################################################
[ ca ]
default_ca = CA_default # The default ca section
####################################################################
[ CA_default ]
dir = %(working_directory)s # Where everything is kept
certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
# several certificates with the same subject.
new_certs_dir = $dir/newcerts # default place for new certs.
certificate = $dir/cacert.pem # The CA certificate
serial = $dir/serial # The current serial number
crlnumber = $dir/crlnumber # the current crl number
# must be commented out to leave a V1 CRL
crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem # The private key
RANDFILE = $dir/private/.rand # private random number file
x509_extensions = usr_cert # The extensions to add to the cert
# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt = ca_default # Subject Name options
cert_opt = ca_default # Certificate field options
# Extension copying option: use with caution.
# copy_extensions = copy
# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions = crl_ext
default_days = 3650 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = default # use public key default MD
preserve = no # keep passed DN ordering
# A few different ways of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy = policy_match
# For the CA policy
[ policy_match ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
####################################################################
[ req ]
default_bits = 2048
default_md = sha1
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
#attributes = req_attributes
x509_extensions = v3_ca # The extensions to add to the self-signed cert
# Passwords for private keys; if not present they will be prompted for
# input_password = secret
# output_password = secret
# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only
# req_extensions = v3_req # The extensions to add to a certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = %(country_code)s
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = %(state)s
localityName = Locality Name (eg, city)
localityName_value = %(city)s
0.organizationName = Organization Name (eg, company)
0.organizationName_value = %(company)s
# we can do this but it is not needed normally :-)
#1.organizationName = Second Organization Name (eg, company)
#1.organizationName_default = World Wide Web Pty Ltd
commonName = Common Name (eg, your name or your server\'s hostname)
commonName_max = 64
emailAddress = Email Address
emailAddress_value = %(email_address)s
emailAddress_max = 64
# SET-ex3 = SET extension number 3
#[ req_attributes ]
#challengePassword = A challenge password
#challengePassword_min = 4
#challengePassword_max = 20
#
#unstructuredName = An optional company name
[ usr_cert ]
# These extensions are added when 'ca' signs a request.
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This is required for TSA certificates.
# extendedKeyUsage = critical,timeStamping
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
[ v3_ca ]
# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true
# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as a test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign
# Some might want this also
# nsCertType = sslCA, emailCA
# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy
# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF
[ crl_ext ]
# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.
# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo
####################################################################
[ tsa ]
default_tsa = tsa_config1 # the default TSA section
[ tsa_config1 ]
# These are used by the TSA reply generation only.
dir = /etc/pki/tls # TSA root directory
serial = $dir/tsaserial # The current serial number (mandatory)
crypto_device = builtin # OpenSSL engine to use for signing
signer_cert = $dir/tsacert.pem # The TSA signing certificate
# (optional)
certs = $dir/cacert.pem # Certificate chain to include in reply
# (optional)
signer_key = $dir/private/tsakey.pem # The TSA private key (optional)
default_policy = tsa_policy1 # Policy if request did not specify it
# (optional)
other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional)
digests = md5, sha1 # Acceptable message digests (mandatory)
accuracy = secs:1, millisecs:500, microsecs:100 # (optional)
clock_precision_digits = 0 # number of digits after dot. (optional)
ordering = yes # Is ordering defined for timestamps?
# (optional, default: no)
tsa_name = yes # Must the TSA name be included in the reply?
# (optional, default: no)
ess_cert_id_chain = no # Must the ESS cert id chain be included?
# (optional, default: no)
[%(name)s]
accept = %(public_ip)s:%(public_port)s
connect = %(private_ip)s:%(private_port)s
foreground = yes
output = %(log)s
pid = %(pid_file)s
syslog = no
client = yes
CApath = %(ca_path)s
key = %(key)s
CRLpath = %(ca_crl)s
cert = %(cert)s
sslVersion = SSLv3
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
%(entry_str)s
# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# Default backend definition. Set this to point to your content
# server.
#
backend default {
.host = "%(backend_host)s";
.port = "%(backend_port)s";
.probe = {
.url = "/";
.timeout = 10s;
.interval = 10s;
.window = 4;
.threshold = 3;
}
}
#
# Below is a commented-out copy of the default VCL logic. If you
# redefine any of these subroutines, the built-in logic will be
# appended to your code.
#
# sub vcl_recv {
# if (req.http.x-forwarded-for) {
# set req.http.X-Forwarded-For =
# req.http.X-Forwarded-For ", " client.ip;
# } else {
# set req.http.X-Forwarded-For = client.ip;
# }
# if (req.request != "GET" &&
# req.request != "HEAD" &&
# req.request != "PUT" &&
# req.request != "POST" &&
# req.request != "TRACE" &&
# req.request != "OPTIONS" &&
# req.request != "DELETE") {
# /* Non-RFC2616 or CONNECT which is weird. */
# return (pipe);
# }
# if (req.request != "GET" && req.request != "HEAD") {
# /* We only deal with GET and HEAD by default */
# return (pass);
# }
# if (req.http.Authorization || req.http.Cookie) {
# /* Not cacheable by default */
# return (pass);
# }
# return (lookup);
# }
sub vcl_recv {
if (req.http.cache-control ~ "no-cache") {
purge_url(req.url);
}
if (req.url ~ "\.(css|js|ico)$") {
unset req.http.cookie;
}
# remove bogus cookies
if (req.http.Cookie) {
set req.http.Cookie = regsuball(req.http.Cookie, "(^|; ) *__utm.=[^;]+;? *", "\1");
set req.http.Cookie = regsuball(req.http.Cookie, "(^|; ) *__ac_name=\x22\x22;? *", "\1");
set req.http.Cookie = regsuball(req.http.Cookie, "(^|; ) *__ac=\x22Og.3D.3D\x22;? *", "\1");
}
if (req.http.Cookie == "") {
remove req.http.Cookie;
}
if (req.http.x-forwarded-for) {
set req.http.X-Forwarded-For =
req.http.X-Forwarded-For ", " client.ip;
} else {
set req.http.X-Forwarded-For = client.ip;
}
if (req.request != "GET" &&
req.request != "HEAD" &&
req.request != "PUT" &&
req.request != "POST" &&
req.request != "TRACE" &&
req.request != "OPTIONS" &&
req.request != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
if (req.request != "GET" && req.request != "HEAD") {
/* We only deal with GET and HEAD by default */
return (pass);
}
if (req.http.Authorization) {
/* Not cacheable by default */
return (pass);
}
if (req.http.Cookie && req.http.Cookie ~ "(^|; ) *__ac=") {
/* Not cacheable for authorised users,
but KM images are cacheable */
if (!(req.url ~ "/km_img/.*\.(png|gif)$")) {
return (pass);
}
}
# XXX login form can differ based on __ac_name cookie value
if (req.url ~ "/(login_form|WebSite_viewLoginDialog)($|\?)") {
return (pass);
}
if (req.backend.healthy) {
set req.grace = 1h;
} else {
set req.grace = 1w;
}
return (lookup);
}
#
# sub vcl_pipe {
# # Note that only the first request to the backend will have
# # X-Forwarded-For set. If you use X-Forwarded-For and want to
# # have it set for all requests, make sure to have:
# # set req.http.connection = "close";
# # here. It is not set by default as it might break some broken web
# # applications, like IIS with NTLM authentication.
# return (pipe);
# }
#
# sub vcl_pass {
# return (pass);
# }
#
# sub vcl_hash {
# set req.hash += req.url;
# if (req.http.host) {
# set req.hash += req.http.host;
# } else {
# set req.hash += server.ip;
# }
# return (hash);
# }
#
# sub vcl_hit {
# if (!obj.cacheable) {
# return (pass);
# }
# return (deliver);
# }
#
# sub vcl_miss {
# return (fetch);
# }
#
# sub vcl_fetch {
# if (!beresp.cacheable) {
# return (pass);
# }
# if (beresp.http.Set-Cookie) {
# return (pass);
# }
# return (deliver);
# }
sub vcl_fetch {
# we only cache 200 (OK) and 304 (Not Modified) responses.
if (beresp.status != 200 && beresp.status != 304) {
set beresp.cacheable = false;
}
if (beresp.http.cache-control ~ "no-cache") {
set beresp.cacheable = false;
}
if (!beresp.cacheable) {
unset beresp.http.expires;
set beresp.http.cache-control = "no-cache";
return (pass);
}
# we don't care about haproxy's cookie.
if (beresp.http.Set-Cookie && beresp.http.Set-Cookie !~ "^SERVERID=[^;]+; path=/$") {
return (pass);
}
if (req.url ~ "\.(css|js|ico)$") {
unset beresp.http.set-cookie;
set beresp.http.cache-control = regsub(beresp.http.cache-control, "^", "public,");
set beresp.http.cache-control = regsub(beresp.http.cache-control, ",$", "");
}
# remove some headers added by caching policy manager to avoid
# '304 Not Modified' in case of login <-> logout switching.
if (beresp.http.content-type ~ "^text/html") {
unset beresp.http.last-modified;
}
if (beresp.cacheable) {
/* Remove Expires from backend, it's not long enough */
unset beresp.http.expires;
/* Set the client's TTL on this object */
set beresp.http.cache-control = "max-age = 900";
/* Set how long Varnish will keep it */
set beresp.ttl = 1w;
/* marker for vcl_deliver to reset Age: */
set beresp.http.magicmarker = "1";
}
set beresp.grace = 1w;
return (deliver);
}
#
# sub vcl_deliver {
# return (deliver);
# }
sub vcl_deliver {
if (resp.http.magicmarker) {
/* Remove the magic marker */
unset resp.http.magicmarker;
/* By definition we have a fresh object */
set resp.http.age = "0";
}
if (obj.hits > 0) {
set resp.http.X-Cache = obj.hits;
} else {
set resp.http.X-Cache = "MISS";
}
return (deliver);
}
#
# sub vcl_error {
# set obj.http.Content-Type = "text/html; charset=utf-8";
# synthetic {"
# <?xml version="1.0" encoding="utf-8"?>
# <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
# "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
# <html>
# <head>
# <title>"} obj.status " " obj.response {"</title>
# </head>
# <body>
# <h1>Error "} obj.status " " obj.response {"</h1>
# <p>"} obj.response {"</p>
# <h3>Guru Meditation:</h3>
# <p>XID: "} req.xid {"</p>
# <hr>
# <p>Varnish cache server</p>
# </body>
# </html>
# "};
# return (deliver);
# }
...@@ -103,16 +103,27 @@ class Request(Recipe): ...@@ -103,16 +103,27 @@ class Request(Recipe):
key_file = self.options['key-file'] key_file = self.options['key-file']
cert_file = self.options['cert-file'] cert_file = self.options['cert-file']
key_content = self.options.get('key-content', None)
cert_content = self.options.get('cert-content', None)
request_needed = True
name = self.options['name'] name = self.options['name']
hash_ = hashlib.sha512(name).hexdigest() hash_ = hashlib.sha512(name).hexdigest()
key = os.path.join(self.ca_private, hash_ + self.ca_key_ext) key = os.path.join(self.ca_private, hash_ + self.ca_key_ext)
certificate = os.path.join(self.ca_certs, hash_ + self.ca_crt_ext) certificate = os.path.join(self.ca_certs, hash_ + self.ca_crt_ext)
parser = ConfigParser.RawConfigParser()
parser.add_section('certificate') # XXX Ugly hack to quickly provide custom certificate/key to everyone using the recipe
parser.set('certificate', 'name', name) if key_content and cert_content:
parser.set('certificate', 'key_file', key) open(key, 'w').write(key_content)
parser.set('certificate', 'certificate_file', certificate) open(certificate, 'w').write(cert_content)
parser.write(open(os.path.join(self.request_directory, hash_), 'w')) request_needed = False
else:
parser = ConfigParser.RawConfigParser()
parser.add_section('certificate')
parser.set('certificate', 'name', name)
parser.set('certificate', 'key_file', key)
parser.set('certificate', 'certificate_file', certificate)
parser.write(open(os.path.join(self.request_directory, hash_), 'w'))
for link in [key_file, cert_file]: for link in [key_file, cert_file]:
if os.path.islink(link): if os.path.islink(link):
...@@ -123,11 +134,14 @@ class Request(Recipe): ...@@ -123,11 +134,14 @@ class Request(Recipe):
os.symlink(key, key_file) os.symlink(key, key_file)
os.symlink(certificate, cert_file) os.symlink(certificate, cert_file)
wrapper = self.createPythonScript( path_list = [key_file, cert_file]
self.options['wrapper'], if request_needed:
'slapos.recipe.librecipe.execute.execute_wait', wrapper = self.createPythonScript(
[ [self.options['executable']], self.options['wrapper'],
[certificate, key] ], 'slapos.recipe.librecipe.execute.execute_wait',
) [ [self.options['executable']],
[certificate, key] ],
)
path_list.append(wrapper)
return [key_file, cert_file, wrapper] return path_list
...@@ -37,6 +37,7 @@ class Recipe(GenericBaseRecipe): ...@@ -37,6 +37,7 @@ class Recipe(GenericBaseRecipe):
'url': self.options['url'], 'url': self.options['url'],
'shell_path': self.options['dash_path'], 'shell_path': self.options['dash_path'],
'curl_path': self.options['curl_path'], 'curl_path': self.options['curl_path'],
'check_secure': self.options.get('check-secure', 0)
} }
# XXX-Cedric in this script, curl won't check certificate # XXX-Cedric in this script, curl won't check certificate
......
...@@ -31,6 +31,13 @@ if [ $CODE -eq 000 ]; then ...@@ -31,6 +31,13 @@ if [ $CODE -eq 000 ]; then
exit 1 exit 1
fi fi
if [ %(check_secure)s -eq 1 ]; then
if [ $CODE -eq 401 ]; then
echo "$URL is protected (returned $CODE)." >&2
exit 0
fi
fi
if ! [ $CODE -eq 200 ]; then if ! [ $CODE -eq 200 ]; then
echo "$URL is not available (returned $CODE)." >&2 echo "$URL is not available (returned $CODE)." >&2
exit 2 exit 2
......
...@@ -27,6 +27,8 @@ ...@@ -27,6 +27,8 @@
import subprocess import subprocess
import httplib import httplib
import base64 import base64
import os
import shutil
from slapos.recipe.librecipe import GenericBaseRecipe from slapos.recipe.librecipe import GenericBaseRecipe
...@@ -50,13 +52,29 @@ class Recipe(GenericBaseRecipe): ...@@ -50,13 +52,29 @@ class Recipe(GenericBaseRecipe):
user, password user, password
]) ])
htdocs_location = self.options['htdocs']
if not (os.path.exists(htdocs_location) and os.listdir(htdocs_location)):
try:
os.rmdir(htdocs_location)
except:
pass
shutil.copytree(self.options['source'], htdocs_location)
# Install php.ini
php_ini = self.createFile(os.path.join(self.options['php-ini-dir'],
'php.ini'),
self.substituteTemplate(self.getTemplateFilename('php.ini.in'),
dict(tmp_directory=self.options['tmp-dir']))
)
path_list.append(php_ini)
apache_config = dict( apache_config = dict(
pid_file=self.options['pid-file'], pid_file=self.options['pid-file'],
lock_file=self.options['lock-file'], lock_file=self.options['lock-file'],
davlock_db=self.options['davdb-lock'], davlock_db=self.options['davdb-lock'],
ip=self.options['ip'], ip=self.options['ip'],
port=self.options['port'], port_webdav=self.options['port_webdav'],
port_ajax=self.options['port_ajax'],
error_log=self.options['error-log'], error_log=self.options['error-log'],
access_log=self.options['access-log'], access_log=self.options['access-log'],
document_root=self.options['htdocs'], document_root=self.options['htdocs'],
...@@ -67,6 +85,7 @@ class Recipe(GenericBaseRecipe): ...@@ -67,6 +85,7 @@ class Recipe(GenericBaseRecipe):
htpasswd_file=htpasswd_file, htpasswd_file=htpasswd_file,
ssl_certificate=self.options['cert-file'], ssl_certificate=self.options['cert-file'],
ssl_key=self.options['key-file'], ssl_key=self.options['key-file'],
php_ini_dir=self.options['php-ini-dir']
) )
# Create logfiles # Create logfiles
...@@ -86,7 +105,7 @@ class Recipe(GenericBaseRecipe): ...@@ -86,7 +105,7 @@ class Recipe(GenericBaseRecipe):
promise = self.createPythonScript(self.options['promise'], promise = self.createPythonScript(self.options['promise'],
__name__ + '.promise', __name__ + '.promise',
dict(host=self.options['ip'], port=int(self.options['port']), dict(host=self.options['ip'], port=int(self.options['port_webdav']),
user=self.options['user'], password=self.options['password']) user=self.options['user'], password=self.options['password'])
) )
path_list.append(promise) path_list.append(promise)
......
ServerRoot "%(server_root)s" ServerRoot "%(server_root)s"
Listen [%(ip)s]:%(port)s Listen [%(ip)s]:%(port_webdav)s
Listen [%(ip)s]:%(port_ajax)s
NameVirtualHost [%(ip)s]:%(port_webdav)s
NameVirtualHost [%(ip)s]:%(port_ajax)s
# Needed modules # Needed modules
LoadModule unixd_module modules/mod_unixd.so LoadModule unixd_module "%(modules_dir)s/mod_unixd.so"
LoadModule access_compat_module modules/mod_access_compat.so LoadModule access_compat_module "%(modules_dir)s/mod_access_compat.so"
LoadModule authz_core_module modules/mod_authz_core.so LoadModule authn_core_module "%(modules_dir)s/mod_authn_core.so"
LoadModule authz_core_module "%(modules_dir)s/mod_authz_core.so"
LoadModule authn_file_module "%(modules_dir)s/mod_authn_file.so" LoadModule authn_file_module "%(modules_dir)s/mod_authn_file.so"
LoadModule authz_host_module "%(modules_dir)s/mod_authz_host.so" LoadModule authz_host_module "%(modules_dir)s/mod_authz_host.so"
LoadModule authz_user_module "%(modules_dir)s/mod_authz_user.so" LoadModule authz_user_module "%(modules_dir)s/mod_authz_user.so"
LoadModule auth_basic_module "%(modules_dir)s/mod_auth_basic.so" LoadModule auth_basic_module "%(modules_dir)s/mod_auth_basic.so"
LoadModule auth_digest_module "%(modules_dir)s/mod_auth_digest.so" # Comment auth_digest since we don't use it
#LoadModule auth_digest_module "%(modules_dir)s/mod_auth_digest.so"
LoadModule log_config_module "%(modules_dir)s/mod_log_config.so" LoadModule log_config_module "%(modules_dir)s/mod_log_config.so"
LoadModule headers_module "%(modules_dir)s/mod_headers.so" LoadModule headers_module "%(modules_dir)s/mod_headers.so"
LoadModule setenvif_module "%(modules_dir)s/mod_setenvif.so" LoadModule setenvif_module "%(modules_dir)s/mod_setenvif.so"
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_shmcb_module "%(modules_dir)s/mod_socache_shmcb.so"
LoadModule ssl_module "%(modules_dir)s/mod_ssl.so" LoadModule ssl_module "%(modules_dir)s/mod_ssl.so"
LoadModule mime_module "%(modules_dir)s/mod_mime.so" LoadModule mime_module "%(modules_dir)s/mod_mime.so"
LoadModule dav_module "%(modules_dir)s/mod_dav.so" LoadModule dav_module "%(modules_dir)s/mod_dav.so"
LoadModule dav_fs_module "%(modules_dir)s/mod_dav_fs.so" LoadModule dav_fs_module "%(modules_dir)s/mod_dav_fs.so"
LoadModule dir_module "%(modules_dir)s/mod_dir.so" LoadModule dir_module "%(modules_dir)s/mod_dir.so"
LoadModule php5_module "%(modules_dir)s/libphp5.so"
ServerAdmin %(email_address)s ServerAdmin %(email_address)s
# Quiet Server header (if not, Apache give its life history) # Quiet Server header (if not, Apache give its life history)
# It's safer # It's safer
ServerTokens ProductOnly ServerTokens ProductOnly
PidFile "%(pid_file)s"
PHPINIDir "%(php_ini_dir)s"
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
<VirtualHost [%(ip)s]:%(port_ajax)s>
#ServerName www.example.com
# Directory protection
<Directory />
Options FollowSymLinks
AllowOverride None
Require all denied
</Directory>
<Directory %(document_root)s>
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>
DocumentRoot "%(document_root)s"
DirectoryIndex index.html index.php
SSLEngine on
SSLCertificateFile "%(ssl_certificate)s"
SSLCertificateKeyFile "%(ssl_key)s"
</VirtualHost>
<VirtualHost [%(ip)s]:%(port_webdav)s>
DocumentRoot "%(document_root)s" DocumentRoot "%(document_root)s"
PidFile "%(pid_file)s"
DavLockDB "%(davlock_db)s" DavLockDB "%(davlock_db)s"
<Directory /> <Directory />
...@@ -66,6 +103,12 @@ DavLockDB "%(davlock_db)s" ...@@ -66,6 +103,12 @@ DavLockDB "%(davlock_db)s"
</Directory> </Directory>
SSLEngine on
SSLCertificateFile "%(ssl_certificate)s"
SSLCertificateKeyFile "%(ssl_key)s"
</VirtualHost>
ErrorLog "%(error_log)s" ErrorLog "%(error_log)s"
LogLevel warn LogLevel warn
...@@ -77,9 +120,5 @@ DefaultType text/plain ...@@ -77,9 +120,5 @@ DefaultType text/plain
TypesConfig "%(mime_types)s" TypesConfig "%(mime_types)s"
AddType application/x-compress .Z AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz AddType application/x-gzip .gz .tgz
AddType application/x-httpd-php .php .phtml .php5 .php4
SSLRandomSeed startup builtin AddType application/x-httpd-php-source .phps
SSLRandomSeed connect builtin \ No newline at end of file
SSLEngine on
SSLCertificateFile "%(ssl_certificate)s"
SSLCertificateKeyFile "%(ssl_key)s"
[PHP]
engine = On
safe_mode = Off
expose_php = Off
error_reporting = E_ALL & ~(E_DEPRECATED|E_NOTICE|E_WARNING)
display_errors = On
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
session.save_path = "%(tmp_directory)s"
session.auto_start = 0
date.timezone = Europe/Paris
file_uploads = On
upload_max_filesize = 8M
post_max_size = 8M
magic_quotes_gpc=Off
output_buffering=Off
\ No newline at end of file
...@@ -26,7 +26,11 @@ ...@@ -26,7 +26,11 @@
############################################################################## ##############################################################################
import os import os
from slapos.recipe.librecipe import GenericBaseRecipe if __name__ == '__main__': # Hack to easily run test below.
GenericBaseRecipe = object
else:
from slapos.recipe.librecipe import GenericBaseRecipe
from zc.buildout import UserError
class Recipe(GenericBaseRecipe): class Recipe(GenericBaseRecipe):
...@@ -51,18 +55,117 @@ class Recipe(GenericBaseRecipe): ...@@ -51,18 +55,117 @@ class Recipe(GenericBaseRecipe):
return [script] return [script]
class Part(GenericBaseRecipe): class Part(GenericBaseRecipe):
def install(self): def install(self):
try:
periodicity = self.options['frequency']
except KeyError:
periodicity = self.options['time']
try:
periodicity = systemd_to_cron(periodicity)
except Exception:
raise UserError("Invalid systemd calendar spec %r" % periodicity)
cron_d = self.options['cron-entries'] cron_d = self.options['cron-entries']
name = self.options['name'] name = self.options['name']
filename = os.path.join(cron_d, name) filename = os.path.join(cron_d, name)
with open(filename, 'w') as part: with open(filename, 'w') as part:
part.write('%(frequency)s %(command)s\n' % { part.write('%s %s\n' % (periodicity, self.options['command']))
'frequency': self.options['frequency'],
'command': self.options['command'],
})
return [filename] return [filename]
day_of_week_dict = dict((name, dow) for dow, name in enumerate(
"sunday monday tuesday wednesday thursday friday saturday".split())
for name in (name, name[:3]))
symbolic_dict = dict(hourly = '0 * * * *',
daily = '0 0 * * *',
monthly = '0 0 1 * *',
weekly = '0 0 * * 0')
def systemd_to_cron(spec):
"""Convert from systemd.time(7) calendar spec to crontab spec"""
try:
return symbolic_dict[spec]
except KeyError:
pass
if not spec.strip():
raise ValueError
spec = spec.split(' ')
try:
dow = ','.join(sorted('-'.join(str(day_of_week_dict[x.lower()])
for x in x.split('-', 1))
for x in spec[0].split(',')
if x))
del spec[0]
except KeyError:
dow = '*'
day = spec.pop(0) if spec else '*-*'
if spec:
time, = spec
elif ':' in day:
time = day
day = '*-*'
else:
time = '0:0'
day = day.split('-')
time = time.split(':')
if (# years not supported
len(day) > 2 and day.pop(0) != '*' or
# some crons ignore day of month if day of week is given, and dcron
# treats day of month in a way that is not compatible with systemd
dow != '*' != day[1] or
# seconds not supported
len(time) > 2 and int(time.pop())):
raise ValueError
month, day = day
hour, minute = time
spec = minute, hour, day, month, dow
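# Range-check minute, hour, day and month (including any /step values).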
for x, (y, z) in zip(spec, ((0, 60), (0, 24), (1, 31), (1, 12))):
if x != '*':
for x in x.split(','):
x = map(int, x.split('/', 1))
x[0] -= y
if x[0] < 0 or len(x) > 1 and x[0] >= x[1] or z <= sum(x):
raise ValueError
return ' '.join(spec)
def test(self):
def _(systemd, cron):
self.assertEqual(systemd_to_cron(systemd), cron)
_("Sat,Mon-Thu,Sun", "0 0 * * 0,1-4,6")
_("mon,sun *-* 2,1:23", "23 2,1 * * 0,1")
_("Wed, 17:48", "48 17 * * 3")
_("Wed-Sat,Tue 10-* 1:2", "2 1 * 10 2,3-6")
_("*-*-7 0:0:0", "0 0 7 * *")
_("10-15", "0 0 15 10 *")
_("monday *-12-* 17:00", "00 17 * 12 1")
_("12,14,13,12:20,10,30", "20,10,30 12,14,13,12 * * *") # TODO: sort
_("*-1/2-1,3 *:30", "30 * 1,3 1/2 *")
_("03-05 08:05", "05 08 05 03 *")
_("08:05:00", "05 08 * * *")
_("05:40", "40 05 * * *")
_("Sat,Sun 12-* 08:05", "05 08 * 12 0,6")
_("Sat,Sun 08:05", "05 08 * * 0,6")
def _(systemd):
self.assertRaises(Exception, systemd_to_cron, systemd)
_("test")
_("")
_("7")
_("121212:1:2")
_("Wed *-1")
_("08:05:40")
_("2003-03-05")
_("0-1"); _("13-1"); _("6/4-1"); _("5/8-1")
_("1-0"); _("1-32"); _("1-4/3"); _("1-14/18")
_("24:0");_("9/9:0"); _("8/16:0")
_("0:60"); _("0:22/22"); _("0:15/45")
if __name__ == '__main__':
import unittest
unittest.TextTestRunner().run(type('', (unittest.TestCase,), {
'runTest': test})())
...@@ -34,19 +34,23 @@ class Recipe(GenericBaseRecipe): ...@@ -34,19 +34,23 @@ class Recipe(GenericBaseRecipe):
""" """
def install(self): def install(self):
promise_parser = ConfigParser.RawConfigParser() promise_parser = ConfigParser.RawConfigParser()
for section_name, option_id_list in (
promise_parser.add_section('portal_templates') ('portal_templates', (
promise_parser.set('portal_templates', 'repository', self.options['bt5-repository-url']) ('repository', 'bt5-repository-url'),
promise_parser.set('portal_templates', 'expected_bt5', self.options['bt5']) ('expected_bt5', 'bt5'),
)),
promise_parser.add_section('external_service') ('external_service', (
promise_parser.set('external_service', 'cloudooo_url', self.options['cloudooo-url']) ('cloudooo_url', 'cloudooo-url'),
promise_parser.set('external_service', 'memcached_url', self.options['memcached-url']) ('memcached_url', 'memcached-url'),
promise_parser.set('external_service', 'kumofs_url', self.options['kumofs-url']) ('kumofs_url', 'kumofs-url'),
promise_parser.set('external_service', 'smtp_url', self.options['smtp-url']) ('smtp_url', 'smtp-url'),
)),
):
promise_parser.add_section(section_name)
for internal_id, option_id in option_id_list:
option = self.options.get(option_id)
if option:
promise_parser.set(section_name, internal_id, option)
promise_parser.write(open(self.options['promise-path'], 'w')) promise_parser.write(open(self.options['promise-path'], 'w'))
return [self.options['promise-path']] return [self.options['promise-path']]
...@@ -26,27 +26,69 @@ ...@@ -26,27 +26,69 @@
# #
############################################################################## ##############################################################################
import binascii import errno
import os import os
import random
import string
from slapos.recipe.librecipe import GenericBaseRecipe def generatePassword(length):
return ''.join(random.SystemRandom().sample(string.ascii_lowercase, length))
class Recipe(GenericBaseRecipe): class Recipe(object):
"""Generate a password that is only composed of lowercase letters
This recipe only makes sure that ${:passwd} does not end up in the `.installed`
file, which is world-readable by default. So be careful not to spread it
throughout the buildout configuration by referencing it directly: see
recipes like slapos.recipe.template:jinja2 to safely process the password.
Options:
- bytes: password length (default: 8 characters)
- storage-path: plain-text persistent storage for password,
that can only be accessed by the user
(default: ${buildout:parts-directory}/${:_buildout_section_name_})
"""
def __init__(self, buildout, name, options): def __init__(self, buildout, name, options):
if os.path.exists(options['storage-path']): options_get = options.get
open_file = open(options['storage-path'], 'r') try:
options['passwd'] = open_file.read() self.storage_path = options['storage-path']
open_file.close() except KeyError:
self.storage_path = options['storage-path'] = os.path.join(
buildout['buildout']['parts-directory'], name)
try:
with open(self.storage_path) as f:
passwd = f.read()
except IOError, e:
if e.errno != errno.ENOENT:
raise
passwd = None
if not passwd:
passwd = self.generatePassword(int(options_get('bytes', '8')))
self.update = self.install
self.passwd = passwd
# Password must not go into .installed file, for 2 reasons:
# security of course, but also to prevent buildout from always reinstalling.
options.get = lambda option, *args, **kw: passwd \
if option == 'passwd' else options_get(option, *args, **kw)
if options.get('passwd', '') == '': generatePassword = staticmethod(generatePassword)
options['passwd'] = binascii.hexlify(os.urandom(
int(options.get('bytes', '24'))))
return GenericBaseRecipe.__init__(self, buildout, name, options)
def install(self): def install(self):
with open(self.options['storage-path'], 'w') as fout: if self.storage_path:
fout.write(self.options['passwd']) try:
return [self.options['storage-path']] os.unlink(self.storage_path)
except OSError, e:
if e.errno != errno.ENOENT:
raise
fd = os.open(self.storage_path,
os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0600)
try:
os.write(fd, self.passwd)
finally:
os.close(fd)
return self.storage_path
def update(self):
return ()
...@@ -37,7 +37,12 @@ class Recipe(GenericSlapRecipe): ...@@ -37,7 +37,12 @@ class Recipe(GenericSlapRecipe):
""" """
def _install(self): def _install(self):
ip = self.options['ip'] ip = self.options['ip']
backend_url = self.parameter_dict['tidstorage-url'] backend_url = self.options.get('backend-url',
# BBB: Peeking in partition parameters directly. Eew.
self.parameter_dict.get('backend-url',
self.parameter_dict.get('tidstorage-url') # BBB
)
)
backend_server, backend_port = self._getBackendServer(backend_url) backend_server, backend_port = self._getBackendServer(backend_url)
path_list = [] path_list = []
if backend_url.startswith('https://'): if backend_url.startswith('https://'):
...@@ -70,6 +75,7 @@ class Recipe(GenericSlapRecipe): ...@@ -70,6 +75,7 @@ class Recipe(GenericSlapRecipe):
varnishd_pid_file=self.options['pid-file'], varnishd_pid_file=self.options['pid-file'],
varnish_instance_name=self.options['varnish-instance-name'], varnish_instance_name=self.options['varnish-instance-name'],
varnish_data=self.options['varnish-data'], varnish_data=self.options['varnish-data'],
gcc_location=self.options['gcc-location'],
shell_path=self.options['shell-path'], shell_path=self.options['shell-path'],
vcl_file=self.options['vcl-file'], vcl_file=self.options['vcl-file'],
backend_port=backend_port, backend_port=backend_port,
......
...@@ -4,11 +4,12 @@ DAEMON_OPTS="-F \ ...@@ -4,11 +4,12 @@ DAEMON_OPTS="-F \
-a %(varnish_ip)s:%(varnishd_server_port)s \ -a %(varnish_ip)s:%(varnishd_server_port)s \
-T %(varnish_ip)s:%(varnishd_manager_port)s \ -T %(varnish_ip)s:%(varnishd_manager_port)s \
-t 0 \ -t 0 \
-p nuke_limit=500 \
-n %(varnish_instance_name)s \ -n %(varnish_instance_name)s \
-f %(vcl_file)s \ -f %(vcl_file)s \
-s file,%(varnish_data)s/varnish_storage.bin,1G" -s file,%(varnish_data)s/varnish_storage.bin,1G"
PIDFILE=%(varnishd_pid_file)s PIDFILE=%(varnishd_pid_file)s
# exporting PATH here so that we will pass the PATH variable to the subprocess # exporting PATH here so that we will pass the PATH variable to the subprocess
export PATH export PATH="%(gcc_location)s:$PATH"
exec %(varnishd_binary)s -P ${PIDFILE} ${DAEMON_OPTS} 2>&1 exec %(varnishd_binary)s -P ${PIDFILE} ${DAEMON_OPTS} 2>&1
...@@ -25,6 +25,9 @@ rest-input-encoding utf-8 ...@@ -25,6 +25,9 @@ rest-input-encoding utf-8
rest-output-encoding utf-8 rest-output-encoding utf-8
default-zpublisher-encoding utf-8 default-zpublisher-encoding utf-8
# Disable ownership checking to execute code generated by alarms
skip-ownership-checking on
# Temporary storage database (for sessions) # Temporary storage database (for sessions)
<zodb_db temporary> <zodb_db temporary>
<temporarystorage> <temporarystorage>
......
...@@ -25,6 +25,9 @@ rest-input-encoding utf-8 ...@@ -25,6 +25,9 @@ rest-input-encoding utf-8
rest-output-encoding utf-8 rest-output-encoding utf-8
default-zpublisher-encoding utf-8 default-zpublisher-encoding utf-8
# Disable ownership checking to execute code generated by alarms
skip-ownership-checking on
# Temporary storage database (for sessions) # Temporary storage database (for sessions)
<zodb_db temporary> <zodb_db temporary>
<temporarystorage> <temporarystorage>
......
...@@ -87,7 +87,11 @@ class Recipe(GenericBaseRecipe): ...@@ -87,7 +87,11 @@ class Recipe(GenericBaseRecipe):
'haproxy-listen-snippet.cfg.in') 'haproxy-listen-snippet.cfg.in')
server_snippet = "" server_snippet = ""
ip = self.options['ip'] ip = self.options['ip']
server_check_path = self.options['server-check-path'] server_check_path = self.options.get('server-check-path', None)
if server_check_path:
httpchk = 'option httpchk GET %s' % server_check_path
else:
httpchk = ''
# FIXME: maxconn must be provided per-backend, not globally # FIXME: maxconn must be provided per-backend, not globally
maxconn = self.options['maxconn'] maxconn = self.options['maxconn']
i = 0 i = 0
...@@ -97,7 +101,7 @@ class Recipe(GenericBaseRecipe): ...@@ -97,7 +101,7 @@ class Recipe(GenericBaseRecipe):
'name': name, 'name': name,
'ip': ip, 'ip': ip,
'port': port, 'port': port,
'server_check_path': server_check_path, 'httpchk': httpchk,
}) })
for address in backend_list: for address in backend_list:
i += 1 i += 1
......
listen %(name)s %(ip)s:%(port)s listen %(name)s %(ip)s:%(port)s
cookie SERVERID insert cookie SERVERID insert
balance roundrobin balance roundrobin
option httpchk GET %(server_check_path)s %(httpchk)s
stats uri /haproxy stats uri /haproxy
stats realm Global\ statistics stats realm Global\ statistics
...@@ -41,39 +41,40 @@ class Recipe(GenericBaseRecipe): ...@@ -41,39 +41,40 @@ class Recipe(GenericBaseRecipe):
'"virtio" value.' '"virtio" value.'
self.options['disk-type'] = 'virtio' self.options['disk-type'] = 'virtio'
config = dict( self.options['python-path'] = sys.executable
tap_interface=self.options['tap'],
vnc_ip=self.options['vnc-ip'], path_list = []
vnc_port=self.options['vnc-port'],
nbd_ip=self.options['nbd-host'], if not self.isTrueValue(self.options.get('use-tap')):
nbd_port=self.options['nbd-port'], # XXX This could be done using Jinja.
nbd2_ip=self.options.get('nbd2-host', ''), for port in self.options['nat-rules'].split():
nbd2_port=self.options.get('nbd2-port', 1024), tunnel_port = int(port) + 10000
disk_path=self.options['disk-path'], tunnel_path = self.createExecutable(
disk_size=self.options['disk-size'], '%s-%s' % (self.options['6tunnel-wrapper-path'], tunnel_port),
disk_type=self.options['disk-type'], self.substituteTemplate(
mac_address=self.options['mac-address'], self.getTemplateFilename('6to4.in'),
smp_count=self.options['smp-count'], {
ram_size=self.options['ram-size'], 'ipv6': self.options['ipv6'],
socket_path=self.options['socket-path'], 'ipv6_port': tunnel_port,
pid_file_path=self.options['pid-path'], 'ipv4': self.options['ipv4'],
python_path=sys.executable, 'ipv4_port': tunnel_port,
shell_path=self.options['shell-path'], 'shell_path': self.options['shell-path'],
qemu_path=self.options['qemu-path'], '6tunnel_path': self.options['6tunnel-path'],
qemu_img_path=self.options['qemu-img-path'], },
vnc_passwd=self.options['passwd'] ),
) )
path_list.append(tunnel_path)
# Runners
runner_path = self.createExecutable( runner_path = self.createExecutable(
self.options['runner-path'], self.options['runner-path'],
self.substituteTemplate(self.getTemplateFilename('kvm_run.in'), self.substituteTemplate(self.getTemplateFilename('kvm_run.in'),
config)) self.options))
path_list.append(runner_path)
controller_path = self.createExecutable( controller_path = self.createExecutable(
self.options['controller-path'], self.options['controller-path'],
self.substituteTemplate(self.getTemplateFilename('kvm_controller_run.in'), self.substituteTemplate(self.getTemplateFilename('kvm_controller_run.in'),
config)) self.options))
return [runner_path, controller_path] return path_list
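A minimal sketch (assumed values, not part of the recipe) of how the nat-rules option above is turned into tunnel ports and a user-mode qemu network parameter, mirroring the "port + 10000" convention used by both the 6tunnel wrappers and the kvm_run template::

  # hypothetical values: the recipe reads these from self.options
  listen_ip = '10.0.34.5'     # ipv4 option
  nat_rules = '22 80 443'     # nat-rules option

  qemu_network_parameter = 'user,' + ','.join(
      'hostfwd=tcp:%s:%s-:%s' % (listen_ip, int(port) + 10000, port)
      for port in nat_rules.split())

  print(qemu_network_parameter)
  # user,hostfwd=tcp:10.0.34.5:10022-:22,hostfwd=tcp:10.0.34.5:10080-:80,hostfwd=tcp:10.0.34.5:10443-:443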
#!%(shell_path)s
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
exec %(6tunnel_path)s -6 -4 -d -l %(ipv6)s %(ipv6_port)s %(ipv4)s %(ipv4_port)s
#!%(python_path)s #!%(python-path)s
# BEWARE: This file is operated by slapgrid # BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically # BEWARE: It will be overwritten automatically
...@@ -6,12 +6,15 @@ ...@@ -6,12 +6,15 @@
import socket import socket
import time import time
socket_path = '%(socket-path)s'
vnc_password = '%(vnc-passwd)s'
# Connect to KVM qmp socket # Connect to KVM qmp socket
so = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) so = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
connected = False connected = False
while not connected: while not connected:
try: try:
so.connect('%(socket_path)s') so.connect(socket_path)
except socket.error: except socket.error:
time.sleep(1) time.sleep(1)
else: else:
...@@ -25,7 +28,7 @@ data = so.recv(1024) ...@@ -25,7 +28,7 @@ data = so.recv(1024)
# Set VNC password # Set VNC password
so.send('{ "execute": "change", ' \ so.send('{ "execute": "change", ' \
'"arguments": { "device": "vnc", "target": "password", ' \ '"arguments": { "device": "vnc", "target": "password", ' \
' "arg": "%(vnc_passwd)s" } }') ' "arg": "' + vnc_password + '" } }')
data = so.recv(1024) data = so.recv(1024)
# Finish # Finish
......
#!%(python_path)s #!%(python-path)s
# BEWARE: This file is operated by slapgrid # BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically # BEWARE: It will be overwritten automatically
# Echo client program import hashlib
import os import os
import socket import socket
import subprocess import subprocess
import urllib
# XXX: give all of this through parameter, don't use this as template, but as module
qemu_img_path = '%(qemu-img-path)s'
qemu_path = '%(qemu-path)s'
disk_size = '%(disk-size)s'
disk_type = '%(disk-type)s'
socket_path = '%(socket-path)s'
nbd_list = (('%(nbd-host)s', %(nbd-port)s), ('%(nbd2-host)s', %(nbd2-port)s))
default_disk_image = '%(default-disk-image)s'
disk_path = '%(disk-path)s'
virtual_hard_drive_url = '%(virtual-hard-drive-url)s'.strip()
virtual_hard_drive_md5sum = '%(virtual-hard-drive-md5sum)s'.strip()
nat_rules = '%(nat-rules)s'.strip()
use_tap = '%(use-tap)s'
tap_interface = '%(tap-interface)s'
listen_ip = '%(ipv4)s'
mac_address = '%(mac-address)s'
smp_count = '%(smp-count)s'
ram_size = '%(ram-size)s'
pid_file_path = '%(pid-file-path)s'
def md5Checksum(file_path):
with open(file_path, 'rb') as fh:
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
def getSocketStatus(host, port): def getSocketStatus(host, port):
s = None s = None
...@@ -26,27 +57,53 @@ def getSocketStatus(host, port): ...@@ -26,27 +57,53 @@ def getSocketStatus(host, port):
break break
return s return s
# create disk if doesn't exist # Download existing hard drive if needed at first boot
disk_path = '%(disk_path)s' if not os.path.exists(disk_path) and virtual_hard_drive_url != '':
print('Downloading virtual hard drive...')
urllib.urlretrieve(virtual_hard_drive_url, disk_path)
md5sum = virtual_hard_drive_md5sum.strip()
if md5sum:
print('Checking MD5 checksum...')
local_md5sum = md5Checksum(disk_path)
if local_md5sum != md5sum:
os.remove(disk_path)
raise Exception('MD5 mismatch. MD5 of local file is %%s, Specified MD5 is %%s.' %% (
local_md5sum, md5sum))
print('MD5sum check passed.')
else:
print('Warning: no checksum specified.')
# Create disk if doesn't exist
# XXX: move to Buildout profile
if not os.path.exists(disk_path): if not os.path.exists(disk_path):
subprocess.Popen(['%(qemu_img_path)s', 'create' ,'-f', 'qcow2', print('Creating virtual hard drive...')
disk_path, '%(disk_size)sG']) subprocess.Popen([qemu_img_path, 'create' ,'-f', 'qcow2',
disk_path, '%%sG' %% disk_size])
kvm_argument_list = ['%(qemu_path)s', print('Done.')
'-enable-kvm', '-net', 'nic,macaddr=%(mac_address)s',
'-net', 'tap,ifname=%(tap_interface)s,script=no,downscript=no', # Generate network parameters
'-smp', '%(smp_count)s', # XXX: use_tap should be a boolean
'-m', '%(ram_size)s', if use_tap == 'True':
'-drive', 'file=%(disk_path)s,if=%(disk_type)s', qemu_network_parameter = 'tap,ifname=%%s,script=no,downscript=no' %% tap_interface
'-vnc', '%(vnc_ip)s:1,ipv4,password', else:
qemu_network_parameter = 'user,' + ','.join('hostfwd=tcp:%%s:%%s-:%%s' %% (listen_ip, int(port) + 10000, port) for port in nat_rules.split())
kvm_argument_list = [qemu_path,
'-enable-kvm', '-net', 'nic,macaddr=%%s' %% mac_address,
'-net', qemu_network_parameter,
'-smp', smp_count,
'-m', ram_size,
'-drive', 'file=%%s,if=%%s' %% (disk_path, disk_type),
'-vnc', '%%s:1,ipv4,password' %% listen_ip,
'-boot', 'menu=on', '-boot', 'menu=on',
'-qmp', 'unix:%(socket_path)s,server', '-qmp', 'unix:%%s,server' %% socket_path,
'-pidfile', '%(pid_file_path)s', '-pidfile', pid_file_path,
] ]
# Try to connect to NBD server (and second nbd if defined) # Try to connect to NBD server (and second nbd if defined).
for nbd_ip, nbd_port in ( # If not available, don't even specify it in qemu command line parameters.
('%(nbd_ip)s', %(nbd_port)s), ('%(nbd2_ip)s', %(nbd2_port)s)): # Reason: if qemu starts with unavailable NBD drive, it will just crash.
for nbd_ip, nbd_port in nbd_list:
if nbd_ip and nbd_port: if nbd_ip and nbd_port:
s = getSocketStatus(nbd_ip, nbd_port) s = getSocketStatus(nbd_ip, nbd_port)
if s is None: if s is None:
...@@ -57,5 +114,10 @@ for nbd_ip, nbd_port in ( ...@@ -57,5 +114,10 @@ for nbd_ip, nbd_port in (
kvm_argument_list.extend([ kvm_argument_list.extend([
'-drive', '-drive',
'file=nbd:[%%s]:%%s,media=cdrom' %% (nbd_ip, nbd_port)]) 'file=nbd:[%%s]:%%s,media=cdrom' %% (nbd_ip, nbd_port)])
# If no NBD is specified/available: use internal disk image
else:
kvm_argument_list.extend([
'-drive', 'file=%%s,media=cdrom' %% default_disk_image
])
os.execv('%(qemu_path)s', kvm_argument_list) os.execv(qemu_path, kvm_argument_list)
...@@ -33,6 +33,7 @@ import sys ...@@ -33,6 +33,7 @@ import sys
import inspect import inspect
import re import re
import shutil import shutil
from textwrap import dedent
import urllib import urllib
import urlparse import urlparse
...@@ -129,10 +130,14 @@ class GenericBaseRecipe(object): ...@@ -129,10 +130,14 @@ class GenericBaseRecipe(object):
return script return script
def createWrapper(self, name, command, parameters, comments=[], def createWrapper(self, name, command, parameters, comments=[],
parameters_extra=False, environment=None): parameters_extra=False, environment=None,
pidfile=None
):
""" """
Creates a very simple (one command) shell script for process replacement. Creates a very simple (one command) shell script for process replacement.
Takes care of quoting. Takes care of quoting.
If the pidfile parameter is specified, the wrapper becomes a singleton:
it accepts to run only if no other instance is running.
""" """
lines = [ '#!/bin/sh' ] lines = [ '#!/bin/sh' ]
...@@ -144,6 +149,21 @@ class GenericBaseRecipe(object): ...@@ -144,6 +149,21 @@ class GenericBaseRecipe(object):
for key in environment: for key in environment:
lines.append('export %s=%s' % (key, environment[key])) lines.append('export %s=%s' % (key, environment[key]))
if pidfile:
lines.append(dedent("""\
# Check for other instances
pidfile=%s
if [ -e $pidfile ]; then
pid=$(cat $pidfile)
if ps -p "$pid" | grep -q "$(basename %s)"; then
echo "Already running with pid $pid."
exit 1
else
rm $pidfile
fi
fi
echo $$ > $pidfile""" % (pidfile, command)))
lines.append('exec %s' % shlex.quote(command)) lines.append('exec %s' % shlex.quote(command))
for param in parameters: for param in parameters:
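A standalone sketch (hypothetical pidfile path, not the recipe's API) of the singleton behaviour the generated wrapper implements: refuse to start when the pidfile points at a live process, otherwise drop the stale pidfile and record the current pid. Unlike the shell guard above, this simplified version does not compare the process name::

  import os
  import sys

  def acquire_pidfile(pidfile):
      if os.path.exists(pidfile):
          content = open(pidfile).read().strip()
          if content.isdigit():
              pid = int(content)
              try:
                  os.kill(pid, 0)          # probe: is that pid still alive?
              except OSError:
                  pass                     # no such process: pidfile is stale
              else:
                  sys.exit('Already running with pid %s.' % pid)
          os.remove(pidfile)               # stale or unreadable pidfile, take over
      with open(pidfile, 'w') as f:
          f.write(str(os.getpid()))

  acquire_pidfile('/tmp/example-singleton.pid')   # hypothetical path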
...@@ -183,17 +203,13 @@ class GenericBaseRecipe(object): ...@@ -183,17 +203,13 @@ class GenericBaseRecipe(object):
'template/%s' % template_name) 'template/%s' % template_name)
def generatePassword(self, len_=32): def generatePassword(self, len_=32):
""" # TODO: Consider having generate.password recipe inherit this class,
The purpose of this method is to generate a password which doesn't change # so that it can be easily inheritable.
from one execution to the next, so the generated password doesn't change # In the long-term, it's probably better that passwords are provided
on each slapgrid-cp execution. # by software requesters, to avoid keeping unhashed secrets in
# partitions when possible.
Currently, it returns a hardcoded password because no decision has been self.logger.warning("GenericBaseRecipe.generatePassword is deprecated."
taken on where a generated password should be kept (so it is generated " Use generate.password recipe instead.")
once only).
"""
# TODO: implement a real password generator which remember the last
# call.
return "insecure" return "insecure"
def isTrueValue(self, value): def isTrueValue(self, value):
...@@ -247,7 +263,8 @@ class GenericBaseRecipe(object): ...@@ -247,7 +263,8 @@ class GenericBaseRecipe(object):
destination = self.location destination = self.location
if os.path.exists(destination): if os.path.exists(destination):
# leftovers from a previous failed attempt, removing it. # leftovers from a previous failed attempt, removing it.
log.warning('Removing already existing directory %s' % destination) self.logger.warning('Removing already existing directory %s',
destination)
shutil.rmtree(destination) shutil.rmtree(destination)
os.mkdir(destination) os.mkdir(destination)
......
...@@ -64,7 +64,7 @@ class Callback(GenericBaseRecipe): ...@@ -64,7 +64,7 @@ class Callback(GenericBaseRecipe):
class Notify(GenericBaseRecipe): class Notify(GenericBaseRecipe):
def createNotifier(self, notifier_binary, wrapper, executable, def createNotifier(self, notifier_binary, wrapper, executable,
log, title, notification_url, feed_url): log, title, notification_url, feed_url, pidfile=None):
if not os.path.exists(log): if not os.path.exists(log):
# Just a touch # Just a touch
...@@ -82,6 +82,7 @@ class Notify(GenericBaseRecipe): ...@@ -82,6 +82,7 @@ class Notify(GenericBaseRecipe):
return self.createWrapper(name=wrapper, return self.createWrapper(name=wrapper,
command=notifier_binary, command=notifier_binary,
parameters=parameters, parameters=parameters,
pidfile=pidfile,
comments=[ comments=[
'', '',
'Call an executable and send notification(s).', 'Call an executable and send notification(s).',
...@@ -101,6 +102,7 @@ class Notify(GenericBaseRecipe): ...@@ -101,6 +102,7 @@ class Notify(GenericBaseRecipe):
executable=options['executable'], executable=options['executable'],
log=log, log=log,
title=options['title'], title=options['title'],
pidfile=options['pidfile'],
notification_url=options['notify'], notification_url=options['notify'],
feed_url=feed_url) feed_url=feed_url)
return [script] return [script]
...@@ -35,20 +35,22 @@ class Recipe(GenericSlapRecipe): ...@@ -35,20 +35,22 @@ class Recipe(GenericSlapRecipe):
publish_dict = dict() publish_dict = dict()
options = self.options.copy() options = self.options.copy()
del options['recipe'] del options['recipe']
slave_reference = options.pop('-slave-reference', None)
for k, v in options.iteritems(): for k, v in options.iteritems():
if k[:1] == '-':
continue
publish_dict[k] = v publish_dict[k] = v
self._setConnectionDict(publish_dict) self._setConnectionDict(publish_dict, slave_reference)
return [] return []
def _setConnectionDict(self, publish_dict): def _setConnectionDict(self, publish_dict, slave_reference=None):
return self.setConnectionDict(publish_dict) return self.setConnectionDict(publish_dict, slave_reference)
SERIALISED_MAGIC_KEY = '_' SERIALISED_MAGIC_KEY = '_'
class Serialised(Recipe): class Serialised(Recipe):
def _setConnectionDict(self, publish_dict): def _setConnectionDict(self, publish_dict, slave_reference=None):
return super(Serialised, self)._setConnectionDict(wrap(publish_dict)) return super(Serialised, self)._setConnectionDict(wrap(publish_dict), slave_reference)
......
# vim: set et sts=2:
############################################################################## ##############################################################################
# #
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. # Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
# #
# WARNING: This program as such is intended to be used by professional # WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential # programmers who take the whole responsibility of assessing all potential
...@@ -24,37 +25,39 @@ ...@@ -24,37 +25,39 @@
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# #
############################################################################## ##############################################################################
import subprocess
import os
from slapos.recipe.librecipe import GenericBaseRecipe import errno
class Recipe(GenericBaseRecipe): class Recipe(object):
"""Read the first line of a file.
def _options(self, options): As the result has to be provided as an option, it is mandatory that the
if not os.path.exists(self.options['file']): buildout profile fills the file content (if needed) before trying to read it.
password = subprocess.check_output([self.options['pwgen-binary'], '-1']).strip()
with open(self.options['file'], 'w') as password_file:
password_file.write(password)
else:
with open(self.options['file'], 'r') as password_file:
password = password_file.read()
options['password'] = password
def install(self): Options:
os.chmod(self.options['file'], 0600) - storage-path: file to read
return []
class StablePasswordGeneratorRecipe(GenericBaseRecipe): Result set in options:
- readline: first line of the file
""" """
The purpose of this class is to generate a password which doesn't change
from one execution to the next (hence "stable"), so the generated password
doesn't change on each slapgrid-cp execution.
See GenericBaseRecipe.generatePassword . def __init__(self, buildout, name, options):
""" storage_path = options['storage-path']
try:
with open(storage_path) as f:
readline = f.readline()
except IOError, e:
if e.errno != errno.ENOENT:
raise
readline = None
def _options(self, options): self.readline = readline
options['password'] = self.generatePassword() options['readline'] = readline
def install(self):
if self.readline is None:
raise ValueError('Unable to read the file content.')
return ()
update = install = lambda self: [] def update(self):
return ()
...@@ -33,6 +33,12 @@ import traceback ...@@ -33,6 +33,12 @@ import traceback
DEFAULT_SOFTWARE_TYPE = 'RootSoftwareInstance' DEFAULT_SOFTWARE_TYPE = 'RootSoftwareInstance'
def getListOption(option_dict, key, default=()):
result = option_dict.get(key, default)
if isinstance(result, basestring):
result = result.split()
return result
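A small standalone illustration (made-up option values) of the getListOption helper above: plain strings are split on whitespace, while values that are already lists (e.g. coming from serialised instance parameters) pass through unchanged::

  def getListOption(option_dict, key, default=()):
      # same logic as above ('basestring' on Python 2, 'str' here)
      result = option_dict.get(key, default)
      if isinstance(result, str):
          result = result.split()
      return result

  print(getListOption({'return': 'site_url backend_url'}, 'return'))  # ['site_url', 'backend_url']
  print(getListOption({'sla': ['computer_guid']}, 'sla'))             # ['computer_guid']
  print(getListOption({}, 'config'))                                  # ()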
class Recipe(object): class Recipe(object):
""" """
Request a partition to a slap master. Request a partition to a slap master.
...@@ -82,8 +88,15 @@ class Recipe(object): ...@@ -82,8 +88,15 @@ class Recipe(object):
installation of request section will fail. installation of request section will fail.
Possible names depend on requested partition's software type. Possible names depend on requested partition's software type.
state (optional)
Requested state, default value is the state of the requester.
Output: Output:
See "return" input key. See "return" input key.
"instance-state"
The current state of the instance.
"requested-state"
The requested state of the instance.
""" """
failed = None failed = None
...@@ -91,21 +104,23 @@ class Recipe(object): ...@@ -91,21 +104,23 @@ class Recipe(object):
self.logger = logging.getLogger(name) self.logger = logging.getLogger(name)
software_url = options['software-url'] software_url = options['software-url']
name = options['name'] name = options['name']
return_parameters = options.get('return', '').split() return_parameters = getListOption(options, 'return')
if not return_parameters: if not return_parameters:
self.logger.debug("No parameter to return to main instance." self.logger.debug("No parameter to return to main instance."
"Be careful about that...") "Be careful about that...")
software_type = options.get('software-type', DEFAULT_SOFTWARE_TYPE) software_type = options.get('software-type', DEFAULT_SOFTWARE_TYPE)
filter_kw = dict( filter_kw = dict(
(x, options['sla-' + x]) for x in options.get('sla', '').split() (x, options['sla-' + x]) for x in getListOption(options, 'sla')
if options['sla-' + x] if options['sla-' + x]
) )
partition_parameter_kw = self._filterForStorage(dict( partition_parameter_kw = self._filterForStorage(dict(
(x, options['config-' + x]) (x, options['config-' + x])
for x in options.get('config', '').split() for x in getListOption(options, 'config')
)) ))
slave = options.get('slave', 'false').lower() in \ slave = options.get('slave', 'false').lower() in \
librecipe.GenericBaseRecipe.TRUE_VALUES librecipe.GenericBaseRecipe.TRUE_VALUES
# By default XXXX Way of doing it is ugly and dangerous
requested_state = options.get('state', buildout['slap-connection'].get('requested','started'))
slap = slapmodule.slap() slap = slapmodule.slap()
slap.initializeConnection( slap.initializeConnection(
options['server-url'], options['server-url'],
...@@ -123,7 +138,7 @@ class Recipe(object): ...@@ -123,7 +138,7 @@ class Recipe(object):
try: try:
self.instance = request(software_url, software_type, self.instance = request(software_url, software_type,
name, partition_parameter_kw=partition_parameter_kw, name, partition_parameter_kw=partition_parameter_kw,
filter_kw=filter_kw, shared=slave) filter_kw=filter_kw, shared=slave, state=requested_state)
return_parameter_dict = self._getReturnParameterDict(self.instance, return_parameter_dict = self._getReturnParameterDict(self.instance,
return_parameters) return_parameters)
if not slave: if not slave:
...@@ -147,6 +162,13 @@ class Recipe(object): ...@@ -147,6 +162,13 @@ class Recipe(object):
except KeyError: except KeyError:
if self.failed is None: if self.failed is None:
self.failed = param self.failed = param
options['requested-state'] = requested_state
try:
options['instance-state'] = self.instance.getState()
except slapmodule.ResourceNotReady:
# Odd case: SlapOS Master doesn't send the state of a slave partition.
# XXX Should be fixed in the SlapOS Master, we should not care here.
pass
def _filterForStorage(self, partition_parameter_kw): def _filterForStorage(self, partition_parameter_kw):
return partition_parameter_kw return partition_parameter_kw
......
...@@ -78,6 +78,8 @@ class Recipe(object): ...@@ -78,6 +78,8 @@ class Recipe(object):
Partition parameter whose name cannot be represented unambiguously in Partition parameter whose name cannot be represented unambiguously in
buildout syntax are ignored. They cannot be accessed from buildout syntax buildout syntax are ignored. They cannot be accessed from buildout syntax
anyway, and are available through "configuration" output key. anyway, and are available through "configuration" output key.
instance-state
The instance state.
""" """
# XXX: used to detect if a configuration key is a valid section key. This # XXX: used to detect if a configuration key is a valid section key. This
...@@ -91,10 +93,12 @@ class Recipe(object): ...@@ -91,10 +93,12 @@ class Recipe(object):
options.get('key'), options.get('key'),
options.get('cert'), options.get('cert'),
) )
parameter_dict = slap.registerComputerPartition( computer_partition = slap.registerComputerPartition(
options['computer'], options['computer'],
options['partition'], options['partition'],
).getInstanceParameterDict() )
parameter_dict = computer_partition.getInstanceParameterDict()
options['instance-state'] = computer_partition.getState()
# XXX: those are not partition parameters, strictly speaking. # XXX: those are not partition parameters, strictly speaking.
# Make them available as individual section keys. # Make them available as individual section keys.
for his_key in ( for his_key in (
...@@ -129,9 +133,9 @@ class Recipe(object): ...@@ -129,9 +133,9 @@ class Recipe(object):
# also export single ip values for those recipes that don't support sets. # also export single ip values for those recipes that don't support sets.
if ipv4_set: if ipv4_set:
options['ipv4-random'] = list(ipv4_set)[0] options['ipv4-random'] = list(ipv4_set)[0].encode('UTF-8')
if ipv6_set: if ipv6_set:
options['ipv6-random'] = list(ipv6_set)[0] options['ipv6-random'] = list(ipv6_set)[0].encode('UTF-8')
options['tap'] = tap_set options['tap'] = tap_set
parameter_dict = self._expandParameterDict(options, parameter_dict) parameter_dict = self._expandParameterDict(options, parameter_dict)
......
...@@ -40,8 +40,8 @@ class Recipe(GenericBaseRecipe): ...@@ -40,8 +40,8 @@ class Recipe(GenericBaseRecipe):
self.partition_amount = options['partition-amount'].strip() self.partition_amount = options['partition-amount'].strip()
self.cloud9_url = options.get('cloud9-url', '').strip() self.cloud9_url = options.get('cloud9-url', '').strip()
self.log_file = os.path.join(options['log_dir'].strip(), 'slaprunner.log') self.log_file = os.path.join(options['log_dir'].strip(), 'slaprunner.log')
# Set slaprunner access URL # Set slaprunner access URL; CLN: beware, ipv6 access is made through nginx
options['access-url'] = 'http://[%s]:%s' % (self.ipv6, self.runner_port) options['access-url'] = 'https://[%s]:%s' % (self.ipv6, self.runner_port)
def install(self): def install(self):
path_list = [] path_list = []
...@@ -49,7 +49,7 @@ class Recipe(GenericBaseRecipe): ...@@ -49,7 +49,7 @@ class Recipe(GenericBaseRecipe):
configuration = dict( configuration = dict(
software_root=self.software_directory, software_root=self.software_directory,
instance_root=self.instance_directory, instance_root=self.instance_directory,
master_url='http://%s:%s/' % (self.ipv4, self.proxy_port), master_url='http://%s:%s' % (self.ipv4, self.proxy_port),
computer_id='slaprunner', computer_id='slaprunner',
partition_amount=self.partition_amount, partition_amount=self.partition_amount,
slapgrid_sr=self.options['slapgrid_sr'], slapgrid_sr=self.options['slapgrid_sr'],
...@@ -62,7 +62,7 @@ class Recipe(GenericBaseRecipe): ...@@ -62,7 +62,7 @@ class Recipe(GenericBaseRecipe):
etc_dir=self.options['etc_dir'], etc_dir=self.options['etc_dir'],
run_dir=self.options['run_dir'], run_dir=self.options['run_dir'],
log_dir=self.options['log_dir'], log_dir=self.options['log_dir'],
runner_host=self.ipv6, runner_host=self.ipv4,
runner_port=self.runner_port, runner_port=self.runner_port,
ipv4_address=self.ipv4, ipv4_address=self.ipv4,
ipv6_address=self.ipv6, ipv6_address=self.ipv6,
...@@ -132,7 +132,7 @@ class Test(GenericBaseRecipe): ...@@ -132,7 +132,7 @@ class Test(GenericBaseRecipe):
etc_dir=self.options['etc_dir'], etc_dir=self.options['etc_dir'],
run_dir=self.options['etc_dir'], run_dir=self.options['etc_dir'],
log_dir=self.workdir, log_dir=self.workdir,
runner_host=self.ipv6, runner_host=self.ipv4,
runner_port=self.runner_port, runner_port=self.runner_port,
ipv4_address=self.ipv4, ipv4_address=self.ipv4,
ipv6_address=self.ipv6, ipv6_address=self.ipv6,
......
...@@ -71,7 +71,7 @@ class ExportRecipe(GenericBaseRecipe): ...@@ -71,7 +71,7 @@ class ExportRecipe(GenericBaseRecipe):
done done
} }
sync_element %(srv-directory)s/runner %(backup-directory)s/runner/ instance project proxy.db softwareLink sync_element %(srv-directory)s/runner %(backup-directory)s/runner/ instance project proxy.db softwareLink
sync_element %(etc-directory)s %(backup-directory)s/etc/ .rcode .project .users ssh sync_element %(etc-directory)s %(backup-directory)s/etc/ .rcode .project .users .htpasswd ssh
if [ -d %(backup-directory)s/runner/software ]; then if [ -d %(backup-directory)s/runner/software ]; then
rm %(backup-directory)s/runner/software/* rm %(backup-directory)s/runner/software/*
fi fi
...@@ -120,7 +120,7 @@ class ImportRecipe(GenericBaseRecipe): ...@@ -120,7 +120,7 @@ class ImportRecipe(GenericBaseRecipe):
done done
} }
restore_element %(backup-directory)s/runner/ %(srv-directory)s/runner instance project proxy.db softwareLink restore_element %(backup-directory)s/runner/ %(srv-directory)s/runner instance project proxy.db softwareLink
restore_element %(backup-directory)s/etc/ %(etc-directory)s .rcode .project .users ssh restore_element %(backup-directory)s/etc/ %(etc-directory)s .rcode .project .users .htpasswd ssh
ifs=$IFS IFS=';' ifs=$IFS IFS=';'
read user pass remaining < %(etc-directory)s/.users read user pass remaining < %(etc-directory)s/.users
IFS=$ifs IFS=$ifs
......
...@@ -87,8 +87,6 @@ class Recipe: ...@@ -87,8 +87,6 @@ class Recipe:
computer_partition_id) computer_partition_id)
self.parameter_dict = self.computer_partition.getInstanceParameterDict() self.parameter_dict = self.computer_partition.getInstanceParameterDict()
software_type = self.parameter_dict['slap_software_type'] software_type = self.parameter_dict['slap_software_type']
self.logger.info('Deploying instance with software type %s' % \
software_type)
# Raise if request software_type does not exist ... # Raise if request software_type does not exist ...
if software_type not in self.options: if software_type not in self.options:
......
##############################################################################
#
# Copyright (c) 2012 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import GenericBaseRecipe
class Recipe(GenericBaseRecipe):
"""
squid instance configuration.
wrapper-path -- location of the init script to generate
prepare-path -- location of the directory creation script to generate
binary-path -- location of the squid command
conf-path -- location of the configuration file
cache-path -- location of the cache directory
XXXX No good, specific...
open_port -- entrance port to the host and allowed to use cache
ip -- ip of the squid server
port -- port of the squid server
backend-ip -- ip of the service to cache
backend-port -- port of the service to cache
access-log-path -- location of the access log
cache-log-path -- location of the cache log
pid-filename-path -- location of the pid filename
"""
def install(self):
config = dict(
ip=self.options['ip'],
port=self.options['port'],
backend_ip=self.options['backend-ip'],
backend_port=self.options['backend-port'],
cache_path=self.options['cache-path'],
access_log_path=self.options['access-log-path'],
cache_log_path=self.options['cache-log-path'],
pid_filename_path=self.options['pid-filename-path'],
open_port=self.options['open-port'],
)
template_filename = self.getTemplateFilename('squid.conf.in')
configuration_path = self.createFile(
self.options['conf-path'],
self.substituteTemplate(template_filename, config))
# Prepare directories
prepare_path = self.createPythonScript(
self.options['prepare-path'],
'slapos.recipe.librecipe.execute.execute',
arguments=[self.options['binary-path'].strip(),
'-z',
'-f', configuration_path,
],)
# Create running wrapper
wrapper_path = self.createPythonScript(
self.options['wrapper-path'],
'slapos.recipe.librecipe.execute.execute',
arguments=[self.options['binary-path'].strip(),
'-N',
'-f', configuration_path,
],)
return [configuration_path, wrapper_path, prepare_path]
refresh_pattern . 0 20%% 4320 max-stale=604800
# Disallow cachemgr access
http_access deny manager
# Squid service configuration
http_port %(ip)s:%(port)s accel defaultsite=%(ip)s
cache_peer %(backend_ip)s parent %(backend_port)s 0 no-query originserver name=backend
acl our_sites port %(open_port)s
http_access allow our_sites
cache_peer_access backend allow our_sites
cache_peer_access backend deny all
# Drop squid headers
# via off
# reply_header_access X-Cache-Lookup deny all
# reply_header_access X-Squid-Error deny all
# reply_header_access X-Cache deny all
header_replace X-Forwarded-For
follow_x_forwarded_for allow all
forwarded_for on
# Use 1 GB of RAM
cache_mem 1024 MB
# But do not keep big objects in RAM
maximum_object_size_in_memory 2048 KB
# Log
access_log %(access_log_path)s
cache_log %(cache_log_path)s
pid_filename %(pid_filename_path)s
...@@ -43,7 +43,11 @@ class Recipe(GenericBaseRecipe): ...@@ -43,7 +43,11 @@ class Recipe(GenericBaseRecipe):
repozo_wrapper = self.createPythonScript( repozo_wrapper = self.createPythonScript(
self.options['repozo-wrapper'], self.options['repozo-wrapper'],
'slapos.recipe.librecipe.execute.execute', 'slapos.recipe.librecipe.execute.execute',
[self.options['tidstorage-repozo-binary'], '--config', [self.options['tidstorage-repozo-binary'],
configuration_file, '--repozo', self.options['repozo-binary'], '-z']) '--config', configuration_file,
'--repozo', self.options['repozo-binary'],
'--gzip',
'--quick',
])
return [configuration_file, tidstorage_wrapper, repozo_wrapper] return [configuration_file, tidstorage_wrapper, repozo_wrapper]
...@@ -34,11 +34,15 @@ class Recipe(GenericSlapRecipe): ...@@ -34,11 +34,15 @@ class Recipe(GenericSlapRecipe):
""" """
def _install(self): def _install(self):
path_list = [] path_list = []
web_checker_mail_address = self.parameter_dict['web-checker-mail-address'] try:
web_checker_smtp_host = self.parameter_dict['web-checker-smtp-host'] web_checker_mail_address = self.options['mail-address']
web_checker_frontend_url = self.parameter_dict.get( web_checker_smtp_host = self.options['smtp-host']
'web-checker-frontend-url', web_checker_frontend_url = self.options['frontend-url']
self.options['frontend-url']) except KeyError:
# BBB
web_checker_mail_address = self.parameter_dict['web-checker-mail-address']
web_checker_smtp_host = self.parameter_dict['web-checker-smtp-host']
web_checker_frontend_url = self.parameter_dict['web-checker-frontend-url']
web_checker_working_directory = \ web_checker_working_directory = \
self.options['web-checker-working-directory'] self.options['web-checker-working-directory']
config = dict( config = dict(
......
...@@ -8,6 +8,31 @@ apache_frontend works using the master instance / slave instance design. ...@@ -8,6 +8,31 @@ apache_frontend works using the master instance / slave instance design.
It means that a single main instance of Apache will be used to act as frontend It means that a single main instance of Apache will be used to act as frontend
for many slaves. for many slaves.
Software type
=============
Apache frontend is available in 4 software types:
* default : The standard way to use the apache frontend, configuring everything with a few given parameters
* custom-personal : This software type allows each slave to edit its apache configuration file
* custom-group : This software type uses a template given as a parameter on the master instance to generate the apache configuration for all slaves
* replicate : This software type is used to replicate any kind of apache frontend
About replicate frontend
========================
Slaves of the root instance (type "replicate") are sent as a parameter to the requested frontends, which will process them. The only difference is that they will then return the "would-be published information" to the root instance instead of publishing it. The root instance will then do a synthesis and publish the information to its slaves. The replicate instance only uses 3 types of parameters for itself and transmits the rest to the requested frontends.
These parameters are :
* "-frontend-type" : the type to deploy frontends with. (default to 2)
* "-frontend-quantity" : The quantity of frontends to request (default to "default")
* "-sla-i-foo" : where "i" is the number of the concerned frontend (between 1 and "-frontend-quantity") and "foo" a sla parameter.
ex:
<parameter id="-frontend-quantity">3</parameter>
<parameter id="-frontend-type">custom-personal</parameter>
<parameter id="-sla-3-computer_guid">COMP-1234</parameter>
will request the third frontend on COMP-1234. All frontends will be of software type "custom-personal".
Note: the way slaves are transformed into a parameter avoids modifying more than 3 lines in the frontend logic.
Important NOTE: The way you ask for a slave of a replicate frontend is the same as the one you would use for the software given in "-frontend-quantity". Do not forget to use "replicate" for the software type. XXXXX So far it is not possible to do a simple request on a replicate frontend if you do not know the software_guid or another sla-parameter of the master instance. In fact we do not yet know the software type of the "requested" frontends. TO BE IMPLEMENTED
How to deploy a frontend server How to deploy a frontend server
=============================== ===============================
...@@ -43,24 +68,28 @@ all slave instances. ...@@ -43,24 +68,28 @@ all slave instances.
Finally, the slave instance will be accessible from: Finally, the slave instance will be accessible from:
https://someidentifier.moulefrite.org. https://someidentifier.moulefrite.org.
About SSL
How to have custom configuration in frontend server =========
=================================================== Default and custom-personal software type can handle specific ssl for one slave instance.
IMPORTANT: One apache cannot serve more than one specific SSL VirtualHost and stay compatible with obsolete browsers (e.g. IE8). See http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI
In your instance directory, you, as sysadmin, can directly edit two
configuration files that won't be overwritten by SlapOS to customize your #How to have custom configuration in frontend server
instance: #===================================================
#
* $PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.custom.conf #In your instance directory, you, as sysadmin, can directly edit two
* $PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.virtualhost.custom.conf #configuration files that won't be overwritten by SlapOS to customize your
#instance:
The first one is included in the end of the main apache configuration file. #
The second one is included in the virtualhost of the main apache configuration file. # * $PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.custom.conf
# * $PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.virtualhost.custom.conf
SlapOS will just create those two files for you, then completely forget them. #
#The first one is included in the end of the main apache configuration file.
Note: make sure that the UNIX user of the instance has read access to those #The second one is included in the virtualhost of the main apache configuration file.
files if you edit them. #
#SlapOS will just create those two files for you, then completely forget them.
#
#Note: make sure that the UNIX user of the instance has read access to those
#files if you edit them.
Instance Parameters Instance Parameters
=================== ===================
...@@ -78,6 +107,10 @@ for the subdomains of the chosen domain like:: ...@@ -78,6 +107,10 @@ for the subdomains of the chosen domain like::
Using the IP given by the Master Instance. Using the IP given by the Master Instance.
"domain" is a mandatory Parameter. "domain" is a mandatory Parameter.
public-ipv4
~~~~~~~~~~~
Public ipv4 of the frontend (the one Apache will be indirectly listening to)
port port
~~~~ ~~~~
Port used by Apache. Optional parameter, defaults to 4443. Port used by Apache. Optional parameter, defaults to 4443.
...@@ -87,8 +120,17 @@ plain_http_port ...@@ -87,8 +120,17 @@ plain_http_port
Port used by apache to serve plain http (only used to redirect to https). Port used by apache to serve plain http (only used to redirect to https).
Optional parameter, defaults to 8080. Optional parameter, defaults to 8080.
Slave Instance Parameters apache_custom_http (custom-group)
------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jinja template for apache virtualhost http configuration. It will be used by all slaves
apache_custom_https (custom-group)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jinja template for apache virtualhost https configuration. It will be used by all slaves
Slave Instance Parameters (default)
-----------------------------------
url url
~~~ ~~~
...@@ -98,7 +140,7 @@ Example: http://mybackend.com/myresource ...@@ -98,7 +140,7 @@ Example: http://mybackend.com/myresource
enable_cache enable_cache
~~~~~ ~~~~~
Specify if slave instance should use a varnish / stunnel to connect to backend. Specify if slave instance should use a squid to connect to backend.
Possible values: "true", "false". Possible values: "true", "false".
"enable_cache" is an optional parameter. Defaults to "false". "enable_cache" is an optional parameter. Defaults to "false".
Example: true Example: true
...@@ -111,10 +153,10 @@ Possible values: "zope", "default". ...@@ -111,10 +153,10 @@ Possible values: "zope", "default".
"type" is an optional parameter. Defaults to "default". "type" is an optional parameter. Defaults to "default".
Example: zope Example: zope
custom_domain domain (former custom_domain)
~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Domain name to use as frontend. The frontend will be accessible from this domain. Domain name to use as frontend. The frontend will be accessible from this domain.
"custom_domain" is an optional parameter. Defaults to "domain" is an optional parameter. Defaults to
[instancereference].[masterdomain]. [instancereference].[masterdomain].
Example: www.mycustomdomain.com Example: www.mycustomdomain.com
...@@ -123,7 +165,8 @@ https-only ...@@ -123,7 +165,8 @@ https-only
Specify if website should be accessed using https only. If so, the frontend Specify if website should be accessed using https only. If so, the frontend
will redirect the user to https if accessed from http. will redirect the user to https if accessed from http.
Possible values: "true", "false". Possible values: "true", "false".
This is an optional parameter. Defaults to false. "https-only" is an optional parameter. Defaults to "false".
Example: true
path path
~~~~ ~~~~
...@@ -135,14 +178,75 @@ VirtualHostMonster. ...@@ -135,14 +178,75 @@ VirtualHostMonster.
"path" is an optional parameter, ignored if not specified. "path" is an optional parameter, ignored if not specified.
Example of value: "/erp5/web_site_module/hosting/" Example of value: "/erp5/web_site_module/hosting/"
Slave Instance Parameters (custom-personal)
-------------------------------------------
apache_custom_https
~~~~~~~~~~~~~~~~~~~
Raw apache configuration in Python template format (i.e. write "%%" for one "%") for the slave listening to the https port. Its content will be processed as a template in order to access functionalities such as cache access, ssl certificates... The list of available parameters is given below.
NOTE: If you want to use the cache, use the apache option "ProxyPreserveHost On"
apache_custom_http
~~~~~~~~~~~~~~~~~~
Raw apache configuration in Python template format (i.e. write "%%" for one "%") for the slave listening to the http port. Its content will be processed as a template in order to access functionalities such as cache access, ssl certificates... The list of available parameters is given below.
NOTE: If you want to use the cache, use the apache option "ProxyPreserveHost On"
url
~~~
Necessary to activate the cache. URL of the backend to use.
"url" is an optional parameter.
Example: http://mybackend.com/myresource
domain
~~~~~~
Necessary to activate the cache. The frontend will be accessible from this domain.
"domain" is an optional parameter.
Example: www.mycustomdomain.com
enable_cache
~~~~~~~~~~~~
Necessary to activate the cache.
"enable_cache" is an optional parameter.
ssl_key, ssl_crt, ssl_ca_crt, ssl_csr
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SSL certificates of the slave.
They are optional.
Functionalities for apache configuration:
In the slave apache configuration you can use parameters that will be replaced during instantiation. They should be entered as Python template parameters, e.g. "%(parameter)s"
* cache_access : URL of the cache. Should replace the backend URL in the configuration in order to use the cache
* error_log : path of the slave error log, in order to log into a separate file.
* access_log : path of the slave access log, in order to log into a separate file.
* ssl_key, ssl_crt, ssl_ca_crt, ssl_csr : paths of the certificates given in the slave instance parameters
Slave Instance Parameters (custom-group)
----------------------------------------
url
~~~
Necessary to activate the cache. URL of the backend to use.
"url" is an optional parameter.
Example: http://mybackend.com/myresource
domain
~~~~~~
Domain name to use as frontend. The frontend will be accessible from this domain.
"domain" is an optional parameter necessary to activate cache. Defaults to
[instancereference].[masterdomain].
Example: www.mycustomdomain.com
The rest of the parameters are defined by the templates given to the master instance and are accessible through the slave_parameter dict inside them.
Examples Examples
======== ========
Here are some example of how to make your SlapOS service available through Here are some example of how to make your SlapOS service available through
an already deployed frontend. an already deployed frontend.
Simple Example Simple Example (default)
-------------- ------------------------
Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy:: redirected and accessible from the proxy::
...@@ -157,8 +261,8 @@ redirected and accessible from the proxy:: ...@@ -157,8 +261,8 @@ redirected and accessible from the proxy::
) )
Zope Example Zope Example (default)
------------ ----------------------
Request slave frontend instance using a Zope backend so that Request slave frontend instance using a Zope backend so that
https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the
...@@ -175,8 +279,8 @@ proxy:: ...@@ -175,8 +279,8 @@ proxy::
) )
Advanced example Advanced example (default)
---------------- --------------------------
Request slave frontend instance using a Zope backend, with Varnish activated, Request slave frontend instance using a Zope backend, with Varnish activated,
listening to a custom domain and redirecting to /erp5/ so that listening to a custom domain and redirecting to /erp5/ so that
...@@ -192,7 +296,192 @@ the proxy:: ...@@ -192,7 +296,192 @@ the proxy::
"enable_cache":"true", "enable_cache":"true",
"type":"zope", "type":"zope",
"path":"/erp5", "path":"/erp5",
"custom_domain":"mycustomdomain.com", "domain":"mycustomdomain.com",
}
)
Simple Example (custom-personal)
--------------------------------
Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy::
instance = request(
software_release=apache_frontend,
software_type="RootSoftwareInstance",
partition_reference='my frontend',
shared=True,
software_type="custom-personal",
partition_parameter_kw={
"url":"https://[1:2:3:4:5:6:7:8]:1234",
"apache_custom_https":'
ServerName www.example.org
ServerAlias example.org
ServerAdmin geronimo@example.org
SSLEngine on
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
RewriteRule ^/(.*) https://[1:2:3:4:5:6:7:8]:1234/$1 [L,P]',
"apache_custom_http":'
ServerName www.example.org
ServerAlias www.example.org
ServerAlias example.org
ServerAdmin geronimo@example.org
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# Remove "Secure" from cookies, as backend may be https
Header edit Set-Cookie "(?i)^(.+);secure$" "$1"
# Not using HTTPS? Ask that guy over there.
# Dummy redirection to https. Note: will work only if https listens
# on standard port (443).
RewriteRule ^/(.*) https://[1:2:3:4:5:6:7:8]:1234/$1 [L,P]',
}
)
Simple Cache Example (custom-personal)
----------------------------------------
Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy::
instance = request(
software_release=apache_frontend,
software_type="RootSoftwareInstance",
partition_reference='my frontend',
shared=True,
software_type="custom-personal",
partition_parameter_kw={
"url":"https://[1:2:3:4:5:6:7:8]:1234",
"domain": "www.example.org",
"enable_cache": "True",
"apache_custom_https":'
ServerName www.example.org
ServerAlias www.example.org
ServerAlias example.org
ServerAdmin geronimo@example.org
SSLEngine on
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
RewriteRule ^/(.*) %(cache_access)s/$1 [L,P]',
"apache_custom_http":'
ServerName www.example.org
ServerAlias www.example.org
ServerAlias example.org
ServerAdmin geronimo@example.org
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# Remove "Secure" from cookies, as backend may be https
Header edit Set-Cookie "(?i)^(.+);secure$" "$1"
# Not using HTTPS? Ask that guy over there.
# Dummy redirection to https. Note: will work only if https listens
# on standard port (443).
RewriteRule ^/(.*) %(cache_access)s/$1 [L,P]',
}
)
Advanced example (custom-personal)
----------------------------------
Request slave frontend instance using a custom apache configuration, the cache and SSL certificates,
listening to a custom domain and redirecting to /erp5/ so that
https://[1:2:3:4:5:6:7:8]:1234/erp5/ will be redirected and accessible from
the proxy::
instance = request(
software_release=apache_frontend,
partition_reference='my frontend',
shared=True,
software_type="custom-personal",
partition_parameter_kw={
"url":"https://[1:2:3:4:5:6:7:8]:1234",
"enable_cache":"true",
"type":"zope",
"path":"/erp5",
"domain":"example.org",
"apache_custom_https":'
ServerName www.example.org
ServerAlias www.example.org
ServerAdmin example.org
SSLEngine on
SSLProxyEngine on
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
# Use personal ssl certificates
SSLCertificateFile %(ssl_crt)s
SSLCertificateKeyFile %(ssl_key)s
SSLCACertificateFile %(ssl_ca_crt)s
SSLCertificateChainFile %(ssl_ca_crt)s
# Configure personal logs
ErrorLog "%(error_log)s"
LogLevel info
LogFormat "%%h %%l %%{REMOTE_USER}i %%t \"%%r\" %%>s %%b \"%%{Referer}i\" \"%%{User-Agent}i\" %%D" combined
CustomLog "%(access_log)s" combined
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# Redirect / to /index.html
RewriteRule ^/$ /index.html [R=302,L]
# Use cache
RewriteRule ^/(.*) %(cache_access)s/VirtualHostBase/https/www.example.org:443/erp5/VirtualHostRoot/$1 [L,P]',
"apache_custom_http":'
ServerName www.example.org
ServerAlias www.example.org
ServerAlias example.org
ServerAdmin geronimo@example.org
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# Configure personal logs
ErrorLog "%(error_log)s"
LogLevel info
LogFormat "%%h %%l %%{REMOTE_USER}i %%t \"%%r\" %%>s %%b \"%%{Referer}i\" \"%%{User-Agent}i\" %%D" combined
CustomLog "%(access_log)s" combined
# Remove "Secure" from cookies, as backend may be https
Header edit Set-Cookie "(?i)^(.+);secure$" "$1"
# Not using HTTPS? Ask that guy over there.
# Dummy redirection to https. Note: will work only if https listens
# on standard port (443).
RewriteRule ^/(.*)$ https://%%{SERVER_NAME}%%{REQUEST_URI}',
"ssl_key":"-----BEGIN RSA PRIVATE KEY-----
XXXXXXX..........XXXXXXXXXXXXXXX
-----END RSA PRIVATE KEY-----",
"ssl_crt":'-----BEGIN CERTIFICATE-----
XXXXXXXXXXX.............XXXXXXXXXXXXXXXXXXX
-----END CERTIFICATE-----',
"ssl_ca_crt":'-----BEGIN CERTIFICATE-----
XXXXXXXXX...........XXXXXXXXXXXXXXXXX
-----END CERTIFICATE-----',
"ssl_csr":'-----BEGIN CERTIFICATE REQUEST-----
XXXXXXXXXXXXXXX.............XXXXXXXXXXXXXXXXXX
-----END CERTIFICATE REQUEST-----',
} }
) )
......
Apache:
=======
- set a redirection of / in option
- Implement support of multiple IPs (if possible) for multiple SSL certificates (another solution is to ask the master instance, but what about too many apache processes?)
Squid:
======
- Only cache in RAM. Problems with too many squids on the computer or too many slaves?
SlapOS:
=======
- Implement intelligent apache graceful restart -> should check the slave configuration and return an error to the slave if there is a problem (important but difficult)
- useless srv/squid_cache directory so far
[buildout] [buildout]
extends = extends =
# dev Stuff
../../component/git/buildout.cfg
../../stack/slapos.cfg
../../component/binutils/buildout.cfg ../../component/binutils/buildout.cfg
../../component/lxml-python/buildout.cfg ../../component/lxml-python/buildout.cfg
../../component/apache/buildout.cfg ../../component/apache/buildout.cfg
../../component/gzip/buildout.cfg ../../component/gzip/buildout.cfg
../../component/stunnel/buildout.cfg ../../component/stunnel/buildout.cfg
../../component/varnish/buildout.cfg
../../component/dcron/buildout.cfg ../../component/dcron/buildout.cfg
../../component/logrotate/buildout.cfg ../../component/logrotate/buildout.cfg
../../component/rdiff-backup/buildout.cfg ../../component/rdiff-backup/buildout.cfg
../../stack/slapos.cfg ../../component/squid/buildout.cfg
parts = parts +=
slapos-cookbook
slapos-toolbox
template template
template-apache-frontend
template-apache-replicate
binutils binutils
apache-2.2 apache-2.2
apache-antiloris-apache-2.2 apache-antiloris-apache-2.2
stunnel stunnel
varnish-2.1
dcron dcron
logrotate logrotate
rdiff-backup rdiff-backup
squid
# Buildoutish [slapos-toolbox]
eggs
instance-recipe-egg
[instance-recipe]
# Note: In case if specific instantiation recipe is used this is the place to
# put its name
egg = slapos.cookbook
module = apache.frontend
[instance-recipe-egg]
recipe = zc.recipe.egg recipe = zc.recipe.egg
eggs = ${instance-recipe:egg}
[eggs]
recipe = z3c.recipe.scripts
eggs = eggs =
${lxml-python:egg} ${lxml-python:egg}
slapos.toolbox slapos.toolbox
scripts =
killpidfromfile
onetimedownload
[check-recipe]
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
[template] [template]
# Default template for apache instance.
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg url = ${:_profile_base_location_}/instance.cfg
md5sum = e7b9f57da7eb1450fc15789e239388d4 md5sum = 9c6346c8eaf484748e6be0b62b65cf2e
output = ${buildout:directory}/template.cfg output = ${buildout:directory}/template.cfg
mode = 0644 mode = 0644
[template-apache-frontend]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-apache-frontend.cfg
md5sum = 7e7e7599ec41cf1eb6e8e725d855c345
output = ${buildout:directory}/template-apache-frontend.cfg
mode = 0644
[template-apache-replicate]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/instance-apache-replicate.cfg.in
md5sum = 2c96799f1429d0541c04c0875d864777
mode = 0644
[template-slave-list]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/apache-custom-slave-list.cfg.in
md5sum = f5eef006211809669b12422240c6f436
mode = 640
[template-slave-configuration]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/slave-virtualhost.conf.in
md5sum = a7ad2e83b7f919fc45a7ef1e64344dcb
mode = 640
[template-replicate-publish-slave-information]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/replicate-publish-slave-information.cfg.in
mode = 640
[template-apache-frontend-configuration]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/apache.conf.in
md5sum = c141b9e78c7e80d75bb40493910294e5
mode = 640
[template-apache-cached-configuration]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/apache_cached.conf.in
md5sum = 0c4393db80670daf18b432b7f07383e9
mode = 640
[template-rewrite-cached]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/apache_cached_rewrite.txt.in
md5sum = 2f30af4f9da340c2b0618599da03ed4b
mode = 640
[template-custom-slave-list]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/apache-default-slave-list.cfg.in
md5sum = 9362384cd80727987b34c7746a6de196
mode = 640
[template-not-found-html]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/notfound.html
filename = notfound.html
md5sum = f20d6c3d2d94fb685f8d26dfca1e822b
mode = 640
[template-default-virtualhost]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/000.conf.in
md5sum = c2bbf029e6adc432de0884fb5cf5d2ab
mode = 640
[template-default-slave-virtualhost]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/default-virtualhost.conf.in
md5sum = ac845c0fa3835832307a0e7323cb339d
mode = 640
[template-empty]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/templates/empty.in
md5sum = c2314c3a9c3412a38d14b312d3df83c1
mode = 640
\ No newline at end of file
[buildout]
parts =
directory
configtest
logrotate
cron
cron-entry-logrotate
ca-frontend
certificate-authority
squid-cache
logrotate-entry-apache
logrotate-entry-apache-cached
logrotate-entry-squid
apache-frontend
apache-cached
switch-apache-softwaretype
frontend-apache-graceful
cached-apache-graceful
squid-reload
dynamic-template-default-vh
not-found-html
promise-apache-frontend-v4-https
promise-apache-frontend-v4-http
promise-apache-frontend-v6-https
promise-apache-frontend-v6-http
promise-apache-cached
promise-squid
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
# Create all needed directories
[directory]
recipe = slapos.cookbook:mkdirectory
bin = $${buildout:directory}/bin/
etc = $${buildout:directory}/etc/
srv = $${buildout:directory}/srv/
var = $${buildout:directory}/var/
template = $${buildout:directory}/template/
backup = $${:srv}/backup
log = $${:var}/log
run = $${:var}/run
service = $${:etc}/service
etc-run = $${:etc}/run
promise = $${:etc}/promise
logrotate-backup = $${:backup}/logrotate
logrotate-entries = $${:etc}/logrotate.d
cron-entries = $${:etc}/cron.d
crontabs = $${:etc}/crontabs
cronstamps = $${:etc}/cronstamps
ca-dir = $${:srv}/ssl
squid-cache = $${:srv}/squid_cache
[switch-apache-softwaretype]
recipe = slapos.cookbook:softwaretype
default = $${dynamic-default-template-slave-list:rendered}
custom-personal = $${dynamic-custom-personal-template-slave-list:rendered}
custom-group = $${dynamic-custom-group-template-slave-list:rendered}
[instance-parameter]
# Fetches parameters defined in SlapOS Master for this instance.
# Always the same.
recipe = slapos.cookbook:slapconfiguration.serialised
computer = $${slap-connection:computer-id}
partition = $${slap-connection:partition-id}
url = $${slap-connection:server-url}
key = $${slap-connection:key-file}
cert = $${slap-connection:cert-file}
# Define default parameter(s) that will be used later, in case the user didn't
# specify them.
# All parameters are available through the configuration.XX syntax.
# All possible parameters should have a default.
configuration.domain = example.org
configuration.public-ipv4 =
configuration.port = 4443
configuration.plain_http_port = 8080
configuration.server-admin = admin@example.com
configuration.apache_custom_https = ""
configuration.apache_custom_http = ""
configuration.apache-key =
configuration.apache-certificate =
configuration.open-port = 80 443
configuration.extra_slave_instance_list =
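Each configuration.* option above is only a default: the requester can override it by sending a parameter of the same name (plain_http_port, apache_custom_https, and so on), as in the truncated request example at the top of this diff. A sketch of such a parameter dictionary, with purely illustrative values:

# Illustrative overrides of the defaults defined in [instance-parameter];
# keys mirror the configuration.* option names, values are examples only.
instance_parameters = {
    "domain": "example.org",
    "public-ipv4": "203.0.113.10",
    "port": 4443,
    "plain_http_port": 8080,
    "server-admin": "webmaster@example.org",
    "open-port": "80 443",
}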
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
rendered = $${buildout:directory}/$${:filename}
extra-context =
context =
import json_module json
key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
key slap_software_type instance-parameter:slap-software-type
key slapparameter_dict instance-parameter:configuration
$${:extra-context}
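The context lines above are how slapos.recipe.template:jinja2 exposes buildout values to the templates in this profile: "import" binds a Python module, "key" binds another section's option, "raw" binds a literal. Roughly what the rendering step ends up doing, sketched with plain jinja2 (section names and values below are illustrative, this is not the recipe's actual code):

# Rough illustration of the context built from "import ... / key ... / raw ..."
# lines before a template is rendered.
import json
from jinja2 import Environment

env = Environment(extensions=["jinja2.ext.do"])
context = {
    "json_module": json,                       # import json_module json
    "eggs_directory": "/srv/buildout/eggs",    # key eggs_directory buildout:eggs-directory
    "slap_software_type": "custom-personal",   # key slap_software_type ...
    "slapparameter_dict": {"domain": "example.org"},
}
template = env.from_string("domain = {{ slapparameter_dict.get('domain') }}")
print(template.render(**context))              # -> domain = example.org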
[dynamic-template-default-vh]
< = jinja2-template-base
template = ${template-default-virtualhost:target}
rendered = $${apache-directory:slave-configuration}/000.conf
extensions = jinja2.ext.do
extra-context =
key http_port instance-parameter:configuration.plain_http_port
key https_port instance-parameter:configuration.port
[dynamic-custom-personal-template-slave-list]
< = jinja2-template-base
template = ${template-slave-list:target}
filename = custom-personal-instance-slave-list.cfg
extensions = jinja2.ext.do
extra-context =
key apache_configuration_directory apache-directory:slave-configuration
key http_port instance-parameter:configuration.plain_http_port
key https_port instance-parameter:configuration.port
key public_ipv4 instance-parameter:configuration.public-ipv4
key slave_instance_list instance-parameter:slave-instance-list
key extra_slave_instance_list instance-parameter:configuration.extra_slave_instance_list
key rewrite_cached_configuration apache-configuration:cached-rewrite-file
key custom_ssl_directory apache-directory:vh-ssl
key apache_log_directory apache-directory:slave-log
key local_ipv4 instance-parameter:ipv4-random
key cache_port apache-configuration:cache-port
raw empty_template ${template-empty:target}
raw template_slave_configuration ${template-slave-configuration:target}
raw template_rewrite_cached ${template-rewrite-cached:target}
raw software_type custom-personal
[dynamic-custom-group-template-slave-list]
< = jinja2-template-base
template = ${template-custom-slave-list:target}
filename = custom-group-instance-slave-list.cfg
extensions = jinja2.ext.do
extra-context =
key apache_configuration_directory apache-directory:slave-configuration
key domain instance-parameter:configuration.domain
key http_port instance-parameter:configuration.plain_http_port
key https_port instance-parameter:configuration.port
key public_ipv4 instance-parameter:configuration.public-ipv4
key slave_instance_list instance-parameter:slave-instance-list
key extra_slave_instance_list instance-parameter:configuration.extra_slave_instance_list
key rewrite_cached_configuration apache-configuration:cached-rewrite-file
key custom_ssl_directory apache-directory:vh-ssl
key template_slave_configuration dynamic-virtualhost-template-slave:rendered
key apache_log_directory apache-directory:slave-log
key local_ipv4 instance-parameter:ipv4-random
key cache_port apache-configuration:cache-port
raw empty_template ${template-empty:target}
raw template_rewrite_cached ${template-rewrite-cached:target}
raw software_type custom-group
[dynamic-default-template-slave-list]
< = jinja2-template-base
template = ${template-custom-slave-list:target}
filename = default-instance-slave-list.cfg
extensions = jinja2.ext.do
extra-context =
key apache_configuration_directory apache-directory:slave-configuration
key domain instance-parameter:configuration.domain
key http_port instance-parameter:configuration.plain_http_port
key https_port instance-parameter:configuration.port
key public_ipv4 instance-parameter:configuration.public-ipv4
key slave_instance_list instance-parameter:slave-instance-list
key extra_slave_instance_list instance-parameter:configuration.extra_slave_instance_list
key rewrite_cached_configuration apache-configuration:cached-rewrite-file
key custom_ssl_directory apache-directory:vh-ssl
key apache_log_directory apache-directory:slave-log
key local_ipv4 instance-parameter:ipv4-random
key cache_port apache-configuration:cache-port
raw template_slave_configuration ${template-default-slave-virtualhost:target}
raw empty_template ${template-empty:target}
raw template_rewrite_cached ${template-rewrite-cached:target}
raw software_type default-RootSoftwareInstance
# XXXX Hack to allow two software types
[dynamic-virtualhost-template-slave]
<= jinja2-template-base
template = ${template-slave-configuration:target}
rendered = $${directory:template}/slave-virtualhost.conf.in
extensions = jinja2.ext.do
extra-context =
key https_port instance-parameter:configuration.port
key http_port instance-parameter:configuration.plain_http_port
key apache_custom_https instance-parameter:configuration.apache_custom_https
key apache_custom_http instance-parameter:configuration.apache_custom_http
# Deploy Apache Frontend (new way, no recipe, jinja power)
[dynamic-apache-frontend-template]
< = jinja2-template-base
template = ${template-apache-frontend-configuration:target}
rendered = $${apache-configuration:frontend-configuration}
extra-context =
raw httpd_home ${apache-2.2:location}
key httpd_mod_ssl_cache_directory apache-directory:mod-ssl
key domain instance-parameter:configuration.domain
key document_root apache-directory:document-root
key instance_home buildout:directory
key ipv4_addr instance-parameter:ipv4-random
key ipv6_addr instance-parameter:ipv6-random
key http_port instance-parameter:configuration.plain_http_port
key https_port instance-parameter:configuration.port
key server_admin instance-parameter:configuration.server-admin
key protected_path apache-configuration:protected-path
key access_control_string apache-configuration:access-control-string
key login_certificate ca-frontend:cert-file
key login_key ca-frontend:key-file
key ca_dir certificate-authority:ca-dir
key ca_crl certificate-authority:ca-crl
key access_log apache-configuration:access-log
key error_log apache-configuration:error-log
key pid_file apache-configuration:pid-file
key slave_configuration_directory apache-directory:slave-configuration
[apache-frontend]
recipe = slapos.cookbook:wrapper
command-line = ${apache-2.2:location}/bin/httpd -f $${dynamic-apache-frontend-template:rendered} -DFOREGROUND
wrapper-path = $${directory:service}/frontend_apache
wait-for-files =
$${ca-frontend:cert-file}
$${ca-frontend:key-file}
# Deploy Apache for cached website
[dynamic-apache-cached-template]
< = jinja2-template-base
template = ${template-apache-cached-configuration:target}
rendered = $${apache-configuration:cached-configuration}
extra-context =
raw httpd_home ${apache-2.2:location}
key httpd_mod_ssl_cache_directory apache-directory:mod-ssl
key domain instance-parameter:configuration.domain
key document_root apache-directory:document-root
key instance_home buildout:directory
key ipv4_addr instance-parameter:ipv4-random
key cached_port apache-configuration:cache-through-port
key server_admin instance-parameter:configuration.server-admin
key protected_path apache-configuration:protected-path
key access_control_string apache-configuration:access-control-string
key login_certificate ca-frontend:cert-file
key login_key ca-frontend:key-file
key ca_dir certificate-authority:ca-dir
key ca_crl certificate-authority:ca-crl
key access_log apache-configuration:cache-access-log
key error_log apache-configuration:cache-error-log
key pid_file apache-configuration:cache-pid-file
key apachecachedmap_path apache-configuration:cached-rewrite-file
[apache-cached]
recipe = slapos.cookbook:wrapper
command-line = ${apache-2.2:location}/bin/httpd -f $${dynamic-apache-cached-template:rendered} -DFOREGROUND
wrapper-path = $${directory:service}/frontend_cached_apache
wait-for-files =
$${ca-frontend:cert-file}
$${ca-frontend:key-file}
[not-found-html]
recipe = slapos.cookbook:symbolic.link
target-directory = $${apache-directory:document-root}
link-binary =
${template-not-found-html:target}
[apache-directory]
recipe = slapos.cookbook:mkdirectory
document-root = $${directory:srv}/htdocs
slave-configuration = $${directory:etc}/apache-slave-conf.d/
cache = $${directory:var}/cache
mod-ssl = $${:cache}/httpd_mod_ssl
vh-ssl = $${:slave-configuration}/ssl
slave-log = $${directory:log}/httpd
[apache-configuration]
frontend-configuration = $${directory:etc}/apache_frontend.conf
cached-configuration = $${directory:etc}/apache_frontend_cached.conf
access-log = $${directory:log}/frontend-apache-access.log
error-log = $${directory:log}/frontend-apache-error.log
pid-file = $${directory:run}/httpd.pid
protected-path = /
access-control-string = none
cached-rewrite-file = $${directory:etc}/apache_rewrite_cached.txt
# Apache for cache configuration
cache-access-log = $${directory:log}/frontend-apache-access-cached.log
cache-error-log = $${directory:log}/frontend-apache-error-cached.log
cache-pid-file = $${directory:run}/httpd-cached.pid
# Communication with squid
cache-port = 26010
cache-through-port = 26011
# Create wrapper for "apachectl configtest" in bin
[configtest]
recipe = slapos.cookbook:wrapper
command-line = ${apache-2.2:location}/bin/httpd -f $${directory:etc}/apache_frontend.conf -t
wrapper-path = $${directory:bin}/apache-configtest
[certificate-authority]
recipe = slapos.cookbook:certificate_authority
openssl-binary = ${openssl:location}/bin/openssl
ca-dir = $${directory:ca-dir}
requests-directory = $${cadirectory:requests}
wrapper = $${directory:service}/certificate_authority
ca-private = $${cadirectory:private}
ca-certs = $${cadirectory:certs}
ca-newcerts = $${cadirectory:newcerts}
ca-crl = $${cadirectory:crl}
[cadirectory]
recipe = slapos.cookbook:mkdirectory
requests = $${directory:ca-dir}/requests/
private = $${directory:ca-dir}/private/
certs = $${directory:ca-dir}/certs/
newcerts = $${directory:ca-dir}/newcerts/
crl = $${directory:ca-dir}/crl/
[ca-frontend]
<= certificate-authority
recipe = slapos.cookbook:certificate_authority.request
key-file = $${cadirectory:certs}/apache_frontend.key
cert-file = $${cadirectory:certs}/apache_frontend.crt
executable = $${directory:service}/frontend_apache
wrapper = $${directory:service}/frontend_apache
key-content = $${instance-parameter:configuration.apache-key}
cert-content = $${instance-parameter:configuration.apache-certificate}
# Domain name used for the requested certificate
name = $${instance-parameter:configuration.domain}
[cron]
recipe = slapos.cookbook:cron
dcrond-binary = ${dcron:location}/sbin/crond
cron-entries = $${directory:cron-entries}
crontabs = $${directory:crontabs}
cronstamps = $${directory:cronstamps}
catcher = $${cron-simplelogger:wrapper}
binary = $${directory:service}/crond
[cron-simplelogger]
recipe = slapos.cookbook:simplelogger
wrapper = $${directory:bin}/cron_simplelogger
log = $${directory:log}/cron.log
[cron-entry-logrotate]
<= cron
recipe = slapos.cookbook:cron.d
name = logrotate
frequency = 0 0 * * *
command = $${logrotate:wrapper}
# Deploy Logrotate
[logrotate]
recipe = slapos.cookbook:logrotate
# Binaries
logrotate-binary = ${logrotate:location}/usr/sbin/logrotate
gzip-binary = ${gzip:location}/bin/gzip
gunzip-binary = ${gzip:location}/bin/gunzip
# Directories
wrapper = $${directory:bin}/logrotate
conf = $${directory:etc}/logrotate.conf
logrotate-entries = $${directory:logrotate-entries}
backup = $${directory:logrotate-backup}
state-file = $${directory:srv}/logrotate.status
[logrotate-entry-apache]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = apache
log = $${apache-directory:slave-log}/*_log $${apache-configuration:error-log} $${apache-configuration:access-log}
frequency = daily
rotate-num = 30
post = ${buildout:bin-directory}/killpidfromfile $${apache-configuration:pid-file} SIGUSR1
sharedscripts = true
notifempty = true
create = true
[logrotate-entry-apache-cached]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = apache-cached
log = $${apache-configuration:cache-error-log} $${apache-configuration:cache-access-log}
frequency = daily
rotate-num = 30
post = ${buildout:bin-directory}/killpidfromfile $${apache-configuration:cache-pid-file} SIGUSR1
sharedscripts = true
notifempty = true
create = true
[logrotate-entry-squid]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = squid
log = $${squid-cache:cache-log-path} $${squid-cache:access-log-path}
frequency = daily
rotate-num = 30
post = ${buildout:bin-directory}/killpidfromfile $${apache-configuration:pid-file} SIGHUP
sharedscripts = true
notifempty = true
create = true
[squid-cache]
recipe = slapos.cookbook:squid
prepare-path = $${directory:etc-run}/squid-prepare
wrapper-path = $${directory:service}/squid
binary-path = ${squid:location}/sbin/squid
conf-path = $${directory:etc}/squid.cfg
cache-path = $${directory:squid-cache}
ip = $${instance-parameter:ipv4-random}
port = $${apache-configuration:cache-port}
backend-ip = $${instance-parameter:ipv4-random}
backend-port = $${apache-configuration:cache-through-port}
open-port = $${instance-parameter:configuration.open-port}
access-log-path = $${directory:log}/squid-access.log
cache-log-path = $${directory:log}/squid-cache.log
pid-filename-path = $${directory:run}/squid.pid
[squid-reload]
recipe = slapos.cookbook:wrapper
command-line = ${buildout:bin-directory}/killpidfromfile $${squid-cache:pid-filename-path} SIGHUP
wrapper-path = $${directory:etc-run}/squid-reload
[frontend-apache-graceful]
recipe = slapos.cookbook:wrapper
command-line = ${buildout:bin-directory}/killpidfromfile $${apache-configuration:pid-file} SIGUSR1
wrapper-path = $${directory:etc-run}/frontend-apache-graceful
[cached-apache-graceful]
recipe = slapos.cookbook:wrapper
command-line = ${buildout:bin-directory}/killpidfromfile $${apache-configuration:cache-pid-file} SIGUSR1
wrapper-path = $${directory:etc-run}/cached-apache-graceful
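The graceful and reload wrappers above all go through killpidfromfile, which is installed from slapos.toolbox by the [slapos-toolbox] section of the software profile. A minimal stand-in with the same observable behaviour, as a sketch only:

# Sketch of a "killpidfromfile <pidfile> <signal-name>" helper: read the pid
# from the file and deliver the named signal to that process.
import os
import signal
import sys

def kill_pid_from_file(pidfile, signal_name):
    sig = getattr(signal, signal_name)        # e.g. "SIGUSR1" -> signal.SIGUSR1
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)

if __name__ == "__main__":
    kill_pid_from_file(sys.argv[1], sys.argv[2])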
[promise-apache-frontend-v4-https]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/apache_frontend_ipv4_https
hostname = $${instance-parameter:ipv4-random}
port = $${instance-parameter:configuration.port}
[promise-apache-frontend-v4-http]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/apache_frontend_ipv4_http
hostname = $${instance-parameter:ipv4-random}
port = $${instance-parameter:configuration.plain_http_port}
[promise-apache-frontend-v6-https]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/apache_frontend_ipv6_https
hostname = $${instance-parameter:ipv6-random}
port = $${instance-parameter:configuration.port}
[promise-apache-frontend-v6-http]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/apache_frontend_ipv6_http
hostname = $${instance-parameter:ipv6-random}
port = $${instance-parameter:configuration.plain_http_port}
[promise-apache-cached]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/apache_cached
hostname = $${instance-parameter:ipv4-random}
port = $${apache-configuration:cache-through-port}
[promise-squid]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promise}/squid
hostname = $${instance-parameter:ipv4-random}
port = $${apache-configuration:cache-port}
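All the promises above reduce to the same check: does something accept TCP connections on the given address and port (the frontend on its public ports, squid on cache-port, the cached Apache on cache-through-port)? Equivalent logic sketched in Python, not the actual slapos.cookbook:check_port_listening recipe:

# Sketch: exit 0 if host:port accepts a TCP connection, 1 otherwise.
import socket
import sys

def port_is_listening(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    sys.exit(0 if port_is_listening(sys.argv[1], int(sys.argv[2])) else 1)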
[slap_connection]
# Kept for backward compatibility
computer_id = $${slap-connection:computer-id}
partition_id = $${slap-connection:partition-id}
server_url = $${slap-connection:server-url}
software_release_url = $${slap-connection:software-release-url}
key_file = $${slap-connection:key-file}
cert_file = $${slap-connection:cert-file}
{% if slap_software_type.startswith(software_type) -%}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
rendered = ${buildout:directory}/${:filename}
extra-context =
context =
import json_module json
key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
key slap_software_type slap-parameter:slap_software_type
key slave_instance_list slap-parameter:slave_instance_list
${:extra-context}
{% set part_list = [] -%}
{% set type_key = 'replicate-' %}
{% set type_key_length = type_key | length %}
{% if slap_software_type.startswith(type_key) %}
{% set frontend_type = slap_software_type[type_key_length:] -%}
{% else -%}
{% set frontend_type = slapparameter_dict.pop('-frontend-type', 'default') -%}
{% endif -%}
{% set frontend_quantity = slapparameter_dict.pop('-frontend-quantity', '2') | int -%}
{% set slave_list_name = 'extra_slave_instance_list' -%}
{% set frontend_list = [] %}
{% set frontend_section_list = [] %}
{% set namebase = 'apache-frontend' -%}
# Here we request each frontend individually.
# SLA parameters are looked up and added to the request if found.
{% for i in range(1, frontend_quantity + 1) -%}
{% set frontend_name = "%s-%s" % (namebase, i) -%}
{% set request_section_title = 'request-%s' % frontend_name -%}
{% set sla_key = "-sla-%s-" % i -%}
{% set sla_key_length = sla_key | length %}
{% set sla_parameters = [] %}
{% for key in slapparameter_dict.keys() %}
{% if key.startswith(sla_key) %}
{% do sla_parameters.append(key[sla_key_length:]) %}
{% endif -%}
{% endfor -%}
{% do frontend_list.append(frontend_name) -%}
{% do frontend_section_list.append(request_section_title) -%}
{% do part_list.append(request_section_title) -%}
[{{request_section_title}}]
<= replicate
name = {{frontend_name}}
{% if sla_parameters %}
sla = {{ ' '.join(sla_parameters) }}
{% for parameter in sla_parameters -%}
sla-{{ parameter }} = {{ slapparameter_dict.pop( sla_key + parameter ) }}
{% endfor -%}
{% endif -%}
{% endfor -%}
[replicate]
<= slap-connection
recipe = slapos.cookbook:request
software-url = ${slap-connection:software-release-url}
software-type = {{frontend_type}}
return = private-ipv4 public-ipv4 slave-instance-information-list
config = {{ ' '.join(slapparameter_dict.keys()) + ' ' + slave_list_name }}
{% for parameter, value in slapparameter_dict.iteritems() -%}
config-{{parameter}} = {{ value }}
{% endfor -%}
config-{{ slave_list_name }} = {{ json_module.dumps(slave_instance_list) }}
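To summarise the loop and the [replicate] section above: "-sla-<i>-<key>" parameters are stripped of their prefix and re-emitted as sla-<key> options of the i-th request, and whatever remains in slapparameter_dict is forwarded to every frontend as config-<name> options. The same bookkeeping in plain Python, with example parameter names:

# Sketch of the prefix handling performed by the Jinja loop above.
slapparameter_dict = {
    "domain": "example.org",
    "-sla-1-computer_guid": "COMP-1234",    # example SLA pin for frontend 1
}
i = 1
sla_key = "-sla-%s-" % i
sla_parameters = {}
for key in list(slapparameter_dict):
    if key.startswith(sla_key):
        # "-sla-1-computer_guid" -> "computer_guid"
        sla_parameters[key[len(sla_key):]] = slapparameter_dict.pop(key)
config_options = dict(
    ("config-%s" % name, value) for name, value in slapparameter_dict.items())
print(sla_parameters)    # {'computer_guid': 'COMP-1234'}
print(config_options)    # {'config-domain': 'example.org'}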
[publish-information]
recipe = slapos.cookbook:publish
domain = {{ slapparameter_dict.get('domain') }}
slave-amount = {{ slave_instance_list | length }}
{% for frontend in frontend_list -%}
#{{frontend}}-private-ipv4 = ${request-{{frontend}}:private-ipv4}
{% endfor -%}
#----------------------------
#--
#-- Publish slave information
[publish-slave-information]
recipe = slapos.cookbook:softwaretype
default = ${dynamic-publish-slave-information:rendered}
replicate = ${dynamic-publish-slave-information:rendered}
[slave-information]
{% for frontend_section in frontend_section_list -%}
{{ frontend_section }} = {{ "${%s:connection-slave-instance-information-list}" % frontend_section }}
{% endfor -%}
[dynamic-publish-slave-information]
< = jinja2-template-base
template = {{ template_publish_slave_information }}
filename = dynamic-publish-slave-information.cfg
extensions = jinja2.ext.do
extra-context =
section slave_information slave-information
[buildout]
parts =
publish-slave-information
publish-information
{% for part in part_list -%}
{{ ' %s' % part }}
{% endfor -%}
# publish-information
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
[slap_connection]
# Kept for backward compatibility
computer_id = ${slap-connection:computer-id}
partition_id = ${slap-connection:partition-id}
server_url = ${slap-connection:server-url}
software_release_url = ${slap-connection:software-release-url}
key_file = ${slap-connection:key-file}
cert_file = ${slap-connection:cert-file}
[slap-parameter]
slave_instance_list =
-frontend-quantity = 2
-frontend-type = default
{%- endif %}
[buildout] [buildout]
parts = parts =
directory dynamic-template-apache-replicate
apache switch-softwaretype
configtest
logrotate
logrotate-entry-apache
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory} develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
# Create all needed directories
[directory] [slap-parameters]
recipe = slapos.cookbook:mkdirectory recipe = slapos.cookbook:slapconfiguration
computer = $${slap-connection:computer-id}
bin = $${buildout:directory}/bin/ partition = $${slap-connection:partition-id}
etc = $${buildout:directory}/etc/ url = $${slap-connection:server-url}
srv = $${buildout:directory}/srv/ key = $${slap-connection:key-file}
var = $${buildout:directory}/var/ cert = $${slap-connection:cert-file}
backup = $${:srv}/backup [jinja2-template-base]
log = $${:var}/log recipe = slapos.recipe.template:jinja2
run = $${:var}/run rendered = $${buildout:directory}/$${:filename}
service = $${:etc}/service extra-context =
context =
logrotate-backup = $${:backup}/logrotate import json_module json
logrotate-entries = $${:etc}/logrotate.d key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
key slap_software_type slap-parameters:slap-software-type
# Deploy Apache (old way, with monolithic recipe) key slapparameter_dict slap-parameters:configuration
[apache] key slave_instance_list slap-parameters:slave-instance-list
recipe = ${instance-recipe:egg}:${instance-recipe:module} $${:extra-context}
httpd_home = ${apache-2.2:location}
httpd_binary = ${apache-2.2:location}/bin/httpd [switch-softwaretype]
logrotate_binary = ${logrotate:location}/usr/sbin/logrotate recipe = slapos.cookbook:softwaretype
openssl_binary = ${openssl:location}/bin/openssl default = ${template-apache-frontend:output}
dcrond_binary = ${dcron:location}/sbin/crond custom-personal = ${template-apache-frontend:output}
varnishd_binary = ${varnish-2.1:location}/sbin/varnishd custom-group = ${template-apache-frontend:output}
stunnel_binary = ${stunnel:location}/bin/stunnel replicate-default = $${dynamic-template-apache-replicate:rendered}
rdiff_backup_binary = ${buildout:bin-directory}/rdiff-backup replicate-custom-personal = $${dynamic-template-apache-replicate:rendered}
gcc_binary = gcc replicate-custom-group = $${dynamic-template-apache-replicate:rendered}
binutils_directory = ${binutils:location}/bin/ replicate = $${dynamic-template-apache-replicate:rendered}
access-log = $${directory:log}/frontend-apache-access.log [dynamic-template-apache-replicate]
error-log = $${directory:log}/frontend-apache-error.log < = jinja2-template-base
pid-file = $${directory:run}/httpd.pid template = ${template-apache-replicate:target}
filename = instance-apache-replicate.cfg
extensions = jinja2.ext.do
# Create wrapper for "apachectl conftest" in bin extra-context =
[configtest] raw template_publish_slave_information ${template-replicate-publish-slave-information:target}
recipe = slapos.cookbook:wrapper # Must match the key id in [switch-softwaretype] which uses this section.
command-line = $${apache:httpd_binary} -f $${directory:etc}/apache_frontend.conf -t raw software_type replicate
wrapper-path = $${directory:bin}/apache-configtest
# Deploy Logrotate
[logrotate]
recipe = slapos.cookbook:logrotate
# Binaries
logrotate-binary = ${logrotate:location}/usr/sbin/logrotate
gzip-binary = ${gzip:location}/bin/gzip
gunzip-binary = ${gzip:location}/bin/gunzip
# Directories
wrapper = $${directory:bin}/logrotate
conf = $${directory:etc}/logrotate.conf
logrotate-entries = $${directory:logrotate-entries}
backup = $${directory:logrotate-backup}
state-file = $${directory:srv}/logrotate.status
[logrotate-entry-apache]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = apache
log = $${apache:error-log} $${apache:access-log}
frequency = daily
rotate-num = 30
post = ${buildout:bin-directory}/killpidfromfile $${apache:pid-file} SIGUSR1
sharedscripts = true
notifempty = true
create = true
...@@ -18,6 +18,15 @@ slapos.recipe.template = 2.4.2 ...@@ -18,6 +18,15 @@ slapos.recipe.template = 2.4.2
slapos.toolbox = 0.34.0 slapos.toolbox = 0.34.0
smmap = 0.8.2 smmap = 0.8.2
z3c.recipe.scripts = 1.0.1 z3c.recipe.scripts = 1.0.1
cliff = 1.4.4
cmd2 = 0.6.5.1
prettytable = 0.7.2
requests = 1.2.3
slapos.cookbook = 0.82
# Required by:
# slapos.cookbook==0.82
lock-file = 2.0
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.core==0.35.1
...@@ -37,17 +46,17 @@ atomize = 0.1.1 ...@@ -37,17 +46,17 @@ atomize = 0.1.1
feedparser = 5.1.3 feedparser = 5.1.3
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
inotifyx = 0.2.0-1 inotifyx = 0.2.0
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
# slapos.core==0.35.1 # slapos.core==0.35.1
# xml-marshaller==0.9.7 # xml-marshaller==0.9.7
lxml = 3.1.2 lxml = 3.1.2
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
netaddr = 0.7.10 netaddr = 0.7.10
# Required by: # Required by:
...@@ -67,11 +76,11 @@ psutil = 0.6.2 ...@@ -67,11 +76,11 @@ psutil = 0.6.2
pyflakes = 0.7 pyflakes = 0.7
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
pytz = 2013b pytz = 2013b
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
# slapos.core==0.35.1 # slapos.core==0.35.1
# slapos.toolbox==0.34.0 # slapos.toolbox==0.34.0
# zc.buildout==1.6.0-dev-SlapOS-010 # zc.buildout==1.6.0-dev-SlapOS-010
...@@ -79,7 +88,7 @@ pytz = 2013b ...@@ -79,7 +88,7 @@ pytz = 2013b
setuptools = 0.6c12dev-r88846 setuptools = 0.6c12dev-r88846
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
# slapos.toolbox==0.34.0 # slapos.toolbox==0.34.0
slapos.core = 0.35.1 slapos.core = 0.35.1
...@@ -92,7 +101,7 @@ supervisor = 3.0b1 ...@@ -92,7 +101,7 @@ supervisor = 3.0b1
unittest2 = 0.5.1 unittest2 = 0.5.1
# Required by: # Required by:
# slapos.cookbook==0.77.1 # slapos.cookbook==0.82
# slapos.toolbox==0.34.0 # slapos.toolbox==0.34.0
xml-marshaller = 0.9.7 xml-marshaller = 0.9.7
...@@ -102,10 +111,54 @@ zope.interface = 4.0.5 ...@@ -102,10 +111,54 @@ zope.interface = 4.0.5
[networkcache] [networkcache]
# signature certificates of the following uploaders. # signature certificates of the following uploaders.
# Cedric de Saint Martin
# Romain Courteaud # Romain Courteaud
# Test Agent # Sebastien Robin
# Kazuhiko Shiozaki
# Cedric de Saint Martin
# Yingjie Xu
# Gabriel Monnerat
# Łukasz Nowak
# Test Agent (Automatic update from tests)
signature-certificate-list = signature-certificate-list =
-----BEGIN CERTIFICATE-----
MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE
CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5
MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl
ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw
boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX
Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA
ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX
mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC
q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g
QUUGLQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8jCCAVugAwIBAgIJAPu2zchZ2BxoMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV
BAMMB3RzeGRldjMwHhcNMTExMDE0MTIxNjIzWhcNMTIxMDEzMTIxNjIzWjASMRAw
DgYDVQQDDAd0c3hkZXYzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrPbh+
YGmo6mWmhVb1vTqX0BbeU0jCTB8TK3i6ep3tzSw2rkUGSx3niXn9LNTFNcIn3MZN
XHqbb4AS2Zxyk/2tr3939qqOrS4YRCtXBwTCuFY6r+a7pZsjiTNddPsEhuj4lEnR
L8Ax5mmzoi9nE+hiPSwqjRwWRU1+182rzXmN4QIDAQABo1AwTjAdBgNVHQ4EFgQU
/4XXREzqBbBNJvX5gU8tLWxZaeQwHwYDVR0jBBgwFoAU/4XXREzqBbBNJvX5gU8t
LWxZaeQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQA07q/rKoE7fAda
FED57/SR00OvY9wLlFEF2QJ5OLu+O33YUXDDbGpfUSF9R8l0g9dix1JbWK9nQ6Yd
R/KCo6D0sw0ZgeQv1aUXbl/xJ9k4jlTxmWbPeiiPZEqU1W9wN5lkGuLxV4CEGTKU
hJA/yXa1wbwIPGvX3tVKdOEWPRXZLg==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB7jCCAVegAwIBAgIJAJWA0jQ4o9DGMA0GCSqGSIb3DQEBBQUAMA8xDTALBgNV
BAMMBHg2MXMwIBcNMTExMTI0MTAyNDQzWhgPMjExMTEwMzExMDI0NDNaMA8xDTAL
BgNVBAMMBHg2MXMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANdJNiFsRlkH
vq2kHP2zdxEyzPAWZH3CQ3Myb3F8hERXTIFSUqntPXDKXDb7Y/laqjMXdj+vptKk
3Q36J+8VnJbSwjGwmEG6tym9qMSGIPPNw1JXY1R29eF3o4aj21o7DHAkhuNc5Tso
67fUSKgvyVnyH4G6ShQUAtghPaAwS0KvAgMBAAGjUDBOMB0GA1UdDgQWBBSjxFUE
RfnTvABRLAa34Ytkhz5vPzAfBgNVHSMEGDAWgBSjxFUERfnTvABRLAa34Ytkhz5v
PzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFLDS7zNhlrQYSQO5KIj
z2RJe3fj4rLPklo3TmP5KLvendG+LErE2cbKPqnhQ2oVoj6u9tWVwo/g03PMrrnL
KrDm39slYD/1KoE5kB4l/p6KVOdeJ4I6xcgu9rnkqqHzDwI4v7e8/D3WZbpiFUsY
vaZhjNYKWQf79l6zXfOvphzJ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE----- -----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT
...@@ -120,17 +173,43 @@ signature-certificate-list = ...@@ -120,17 +173,43 @@ signature-certificate-list =
If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY= If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY=
-----END CERTIFICATE----- -----END CERTIFICATE-----
-----BEGIN CERTIFICATE----- -----BEGIN CERTIFICATE-----
MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE MIIB9jCCAV+gAwIBAgIJAIlBksrZVkK8MA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5 BAMMCENPTVAtMzU3MCAXDTEyMDEyNjEwNTUyOFoYDzIxMTIwMTAyMTA1NTI4WjAT
MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl MREwDwYDVQQDDAhDT01QLTM1NzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF ts+iGUwi44vtIfwXR8DCnLtHV4ydl0YTK2joJflj0/Ws7mz5BYkxIU4fea/6+VF3
AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw i11nwBgYgxQyjNztgc9u9O71k1W5tU95yO7U7bFdYd5uxYA9/22fjObaTQoC4Nc9
boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX mTu6r/VHyJ1yRsunBZXvnk/XaKp7gGE9vNEyJvPn2bkCAwEAAaNQME4wHQYDVR0O
Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA BBYEFKuGIYu8+6aEkTVg62BRYaD11PILMB8GA1UdIwQYMBaAFKuGIYu8+6aEkTVg
ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX 62BRYaD11PILMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAMoTRpBxK
mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC YLEZJbofF7gSrRIcrlUJYXfTfw1QUBOKkGFFDsiJpEg4y5pUk1s5Jq9K3SDzNq/W
q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g it1oYjOhuGg3al8OOeKFrU6nvNTF1BAvJCl0tr3POai5yXyN5jlK/zPfypmQYxE+
QUUGLQ== TaqQSGBJPVXYt6lrq/PRD9ciZgKLOwEqK8w=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAPHoWu90gbsgMA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCXZpZmlibm9kZTAeFw0xMjAzMTkyMzIwNTVaFw0xMzAzMTkyMzIwNTVaMBQx
EjAQBgNVBAMMCXZpZmlibm9kZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ozBijpO8PS5RTeKTzA90vi9ezvv4vVjNaguqT4UwP9+O1+i6yq1Y2W5zZxw/Klbn
oudyNzie3/wqs9VfPmcyU9ajFzBv/Tobm3obmOqBN0GSYs5fyGw+O9G3//6ZEhf0
NinwdKmrRX+d0P5bHewadZWIvlmOupcnVJmkks852BECAwEAAaNQME4wHQYDVR0O
BBYEFF9EtgfZZs8L2ZxBJxSiY6eTsTEwMB8GA1UdIwQYMBaAFF9EtgfZZs8L2ZxB
JxSiY6eTsTEwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAc43YTfc6
baSemaMAc/jz8LNLhRE5dLfLOcRSoHda8y0lOrfe4lHT6yP5l8uyWAzLW+g6s3DA
Yme/bhX0g51BmI6gjKJo5DoPtiXk/Y9lxwD3p7PWi+RhN+AZQ5rpo8UfwnnN059n
yDuimQfvJjBFMVrdn9iP6SfMjxKaGk6gVmI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAMNZBmoIOXPBMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMTMyMCAXDTEyMDUwMjEyMDQyNloYDzIxMTIwNDA4MTIwNDI2WjAT
MREwDwYDVQQDDAhDT01QLTEzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
6peZQt1sAmMAmSG9BVxxcXm8x15kE9iAplmANYNQ7z2YO57c10jDtlYlwVfi/rct
xNUOKQtc8UQtV/fJWP0QT0GITdRz5X/TkWiojiFgkopza9/b1hXs5rltYByUGLhg
7JZ9dZGBihzPfn6U8ESAKiJzQP8Hyz/o81FPfuHCftsCAwEAAaNQME4wHQYDVR0O
BBYEFNuxsc77Z6/JSKPoyloHNm9zF9yqMB8GA1UdIwQYMBaAFNuxsc77Z6/JSKPo
yloHNm9zF9yqMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAl4hBaJy1
cgiNV2+Z5oNTrHgmzWvSY4duECOTBxeuIOnhql3vLlaQmo0p8Z4c13kTZq2s3nhd
Loe5mIHsjRVKvzB6SvIaFUYq/EzmHnqNdpIGkT/Mj7r/iUs61btTcGUCLsUiUeci
Vd0Ozh79JSRpkrdI8R/NRQ2XPHAo+29TT70=
-----END CERTIFICATE----- -----END CERTIFICATE-----
-----BEGIN CERTIFICATE----- -----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
......
<VirtualHost *:{{ https_port }}>
ServerName www.example.org
SSLEngine on
SSLProxyEngine on
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
ErrorDocument 404 /notfound.html
</VirtualHost>
<VirtualHost *:{{ http_port }}>
ServerName www.example.org
ErrorDocument 404 /notfound.html
</VirtualHost>
\ No newline at end of file
{% if software_type == slap_software_type -%}
{% set cached_server_dict = {} -%}
{% set part_list = [] -%}
{% set cache_access = "http://%s:%s" % (local_ipv4, cache_port) -%}
{% set generic_instance_parameter_dict = {'cache_access': cache_access,} -%}
{% if extra_slave_instance_list -%}
{% set slave_instance_information_list = []-%}
{% set slave_instance_list = slave_instance_list + json_module.loads(extra_slave_instance_list) -%}
{% endif -%}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
rendered = {{ apache_configuration_directory }}/${:filename}
extra-context =
context =
key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
${:extra-context}
# Loop through the slave list to set up the slaves
{% for slave_instance in slave_instance_list -%}
{% set slave_reference = slave_instance.get('slave_reference') -%}
{% set slave_section_title = 'dynamic-template-slave-instance-%s' % slave_reference -%}
{% set slave_parameter_dict = generic_instance_parameter_dict.copy() -%}
{% do part_list.append(slave_section_title) -%}
# Set up log files
{% do slave_parameter_dict.__setitem__('access_log', '/'.join([apache_log_directory, '%s_access_log' % slave_reference])) -%}
{% do slave_parameter_dict.__setitem__('error_log', '/'.join([apache_log_directory, '%s_error_log' % slave_reference])) -%}
# Set up apache configuration file for slave
[{{ slave_section_title }}]
< = jinja2-template-base
template = {{ template_slave_configuration }}
filename = {{ '%s.conf' % slave_reference }}
extra-context =
key apache_custom_https {{ 'slave-instance-%s-configuration:apache_custom_https' % slave_reference }}
key apache_custom_http {{ 'slave-instance-%s-configuration:apache_custom_http' % slave_reference }}
raw https_port {{ https_port }}
raw http_port {{ http_port }}
{{ '\n' }}
# Set ssl certificates for each slave
{% for cert_name in ('ssl_key', 'ssl_crt', 'ssl_ca_crt', 'ssl_csr')-%}
{% if cert_name in slave_instance -%}
{% set cert_title = '%s-%s' % (slave_reference, cert_name.replace('ssl_', '')) -%}
{% set cert_file = '/'.join([custom_ssl_directory, cert_title.replace('-','.')]) -%}
{% do part_list.append(cert_title) -%}
{% do slave_parameter_dict.__setitem__(cert_name, cert_file) -%}
# Store certificates on fs
[{{ cert_title }}]
< = jinja2-template-base
template = {{ empty_template }}
rendered = {{ cert_file }}
extra-context =
key content {{ cert_title + '-config:value' }}
# Store certificate in config
[{{ cert_title + '-config' }}]
value = {{ dumps(slave_instance.get(cert_name)) }}
{% endif -%}
{% endfor -%}
# Set apache configuration value for slave
[{{ ('slave-instance-%s-configuration' % slave_reference) }}]
{% set apache_custom_http = ((slave_instance.get('apache_custom_http', '')) % slave_parameter_dict) -%}
{% set apache_custom_https = ((slave_instance.get('apache_custom_https', '')) % slave_parameter_dict) -%}
apache_custom_http = {{ dumps(apache_custom_http) }}
apache_custom_https = {{ dumps(apache_custom_https) }}
{{ '\n' }}
# The slave uses the cache
{% if 'enable_cache' in slave_instance and 'url' in slave_instance and 'domain' in slave_instance -%}
{% do cached_server_dict.__setitem__(slave_instance.get('domain'), slave_instance.get('url')) -%}
{% endif -%}
# Publish slave information
{% if not extra_slave_instance_list -%}
{% set publish_section_title = 'publish-%s-connection-information' % slave_instance.get('slave_reference') -%}
{% do part_list.append(publish_section_title) -%}
[{{ publish_section_title }}]
recipe = slapos.cookbook:publish
public-ipv4 = {{ public_ipv4 }}
-slave-reference = {{ slave_instance.get('slave_reference') }}
{% else -%}
{% do slave_instance_information_list.append({'slave-reference':slave_instance.get('slave_reference'), 'public-ipv4':public_ipv4}) -%}
{% endif -%}
{% endfor -%}
# Publish information for the instance
{% set publish_section_title = 'publish-apache-information' -%}
{% do part_list.append(publish_section_title) -%}
[{{ publish_section_title }}]
recipe = slapos.cookbook:publish
public-ipv4 = {{ public_ipv4 }}
private-ipv4 = {{ local_ipv4 }}
{% if extra_slave_instance_list -%}
slave-instance-information-list = {{ json_module.dumps(slave_instance_information_list) }}
{% endif -%}
{% do part_list.append('cached-rewrite-rules') -%}
[cached-rewrite-rules]
< = jinja2-template-base
template = {{ template_rewrite_cached }}
rendered = {{ rewrite_cached_configuration }}
extra-context =
import json_module json
key server_dict rewrite-rules:rules
[rewrite-rules]
rules = {{ dumps(cached_server_dict) }}
[buildout]
parts +=
{% for part in part_list -%}
{{ ' %s' % part }}
{% endfor -%}
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
cache-access = {{ cache_access }}
{% endif -%}
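A detail of the custom-personal template above that is easy to miss: each slave's apache_custom_http / apache_custom_https snippet is %-interpolated against slave_parameter_dict, which is what fills the %(access_log)s, %(error_log)s and %(cache_access)s placeholders seen in the request example at the top of this diff. Sketched in Python with placeholder values:

# Sketch of the %-interpolation applied to a slave's custom Apache snippet.
slave_parameter_dict = {
    "access_log": "/path/to/log/httpd/SLAVE1_access_log",   # example path
    "error_log": "/path/to/log/httpd/SLAVE1_error_log",     # example path
    "cache_access": "http://10.0.0.2:26010",                # local_ipv4:cache_port
}
apache_custom_https = (
    'ErrorLog "%(error_log)s"\n'
    'CustomLog "%(access_log)s" combined\n'
    "RewriteRule ^/(.*) %(cache_access)s/VirtualHostBase/https/"
    "www.example.org:443/erp5/VirtualHostRoot/$1 [L,P]"
)
print(apache_custom_https % slave_parameter_dict)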
{% if slap_software_type in software_type -%}
{% set cached_server_dict = {} -%}
{% set part_list = [] -%}
{% set cache_access = "http://%s:%s" % (local_ipv4, cache_port) -%}
{% set TRUE_VALUES = ['y', 'yes', '1', 'true'] -%}
{% set generic_instance_parameter_dict = {'cache_access': cache_access,} -%}
{% if extra_slave_instance_list -%}
{% set slave_instance_information_list = []-%}
{% set slave_instance_list = slave_instance_list + json_module.loads(extra_slave_instance_list) -%}
{% endif -%}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
rendered = {{ apache_configuration_directory }}/${:filename}
extra-context =
context =
key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
${:extra-context}
# Go through the slave list to set each slave's configuration
{% for slave_instance in slave_instance_list -%}
{% set slave_reference = slave_instance.get('slave_reference') -%}
{% set slave_section_title = 'dynamic-template-slave-instance-%s' % slave_reference -%}
{% set slave_parameter_dict = generic_instance_parameter_dict.copy() -%}
# Set slave domain if none was defined
{% if slave_instance.get('domain', None) == None -%}
# Backward compatibility
{% if slave_instance.get('custom_domain', None) != None -%}
{% do slave_instance.__setitem__('domain', slave_instance.get('custom_domain') )-%}
{% else -%}
{% do slave_instance.__setitem__('domain', "%s.%s" % (slave_instance.get('slave_reference').replace("-", "").lower(), domain)) -%}
{% endif -%}
{% endif -%}
# Set up personal logs, two per slave
{% set access_log = '/'.join([apache_log_directory, '%s_access_log' % slave_reference]) -%}
{% set error_log = '/'.join([apache_log_directory, '%s_error_log' % slave_reference]) -%}
# The slave uses the cache
# Next line is forbidden and people who copy it will be hanged short
{% set enable_cache = ('' ~ slave_instance.get('enable_cache', '')).lower() in TRUE_VALUES -%}
{% if enable_cache -%}
{% do cached_server_dict.__setitem__(slave_instance.get('domain'), slave_instance.get('url')) -%}
{% do slave_instance.__setitem__('url', cache_access) -%}
{% endif -%}
{% do part_list.append(slave_section_title) -%}
# Set up slave configuration file
[{{ slave_section_title }}]
< = jinja2-template-base
template = {{ template_slave_configuration }}
filename = {{ '%s.conf' % slave_reference }}
extensions = jinja2.ext.do
extra-context =
section slave_parameter {{ 'slave-instance-%s-configuration' % slave_reference }}
raw domain {{ domain }}
raw https_port {{ https_port }}
raw http_port {{ http_port }}
raw access_log {{ access_log }}
raw error_log {{ error_log }}
{{ '\n' }}
# Set ssl certificates for each slave
{% for cert_name in ('ssl_key', 'ssl_crt', 'ssl_ca_crt', 'ssl_csr')-%}
{% if cert_name in slave_instance -%}
{% set cert_title = '%s-%s' % (slave_reference, cert_name.replace('ssl_', '')) -%}
{% set cert_file = '/'.join([custom_ssl_directory, cert_title.replace('-','.')]) -%}
{% do part_list.append(cert_title) -%}
{% do slave_instance.__setitem__('path_to_' ~ cert_name, cert_file) -%}
# Store certificates on fs
[{{ cert_title }}]
< = jinja2-template-base
template = {{ empty_template }}
rendered = {{ cert_file }}
extra-context =
key content {{ cert_title + '-config:value' }}
# Store certificate in config
[{{ cert_title + '-config' }}]
value = {{ dumps(slave_instance.get(cert_name)) }}
{% endif -%}
{% endfor -%}
# Set apache configuration for slave
[{{ ('slave-instance-%s-configuration' % slave_reference) }}]
{% for key, value in slave_instance.iteritems() -%}
{{ key }} = {{ dumps(value) }}
{% endfor %}
# Publish slave information
{% if not extra_slave_instance_list -%}
{% set publish_section_title = 'publish-%s-connection-information' % slave_instance.get('slave_reference') -%}
{% do part_list.append(publish_section_title) -%}
[{{ publish_section_title }}]
recipe = slapos.cookbook:publish
-slave-reference = {{ slave_instance.get('slave_reference') }}
public-ipv4 = {{ public_ipv4 }}
domain = {{ slave_instance.get('domain') }}
url = http://{{ slave_instance.get('domain') }}
# Backward compatibility
site_url = ${:url}
{% else -%}
{% do slave_instance_information_list.append({'slave-reference':slave_instance.get('slave_reference'), 'public-ipv4':public_ipv4, 'domain':slave_instance.get('domain'), 'url':"http://%s" % slave_instance.get('domain'), 'site_url':"http://%s" % slave_instance.get('domain')}) -%}
{% endif -%}
{% endfor -%}
# Publish information for the instance
{% set publish_section_title = 'publish-apache-information' -%}
{% do part_list.append(publish_section_title) -%}
[{{ publish_section_title }}]
recipe = slapos.cookbook:publish
public-ipv4 = {{ public_ipv4 }}
private-ipv4 = {{ local_ipv4 }}
domain = {{ domain }}
{% if extra_slave_instance_list -%}
slave-instance-information-list = {{ json_module.dumps(slave_instance_information_list) }}
{% endif -%}
{% do part_list.append('cached-rewrite-rules') -%}
# Set rewrite rules for second apache
[cached-rewrite-rules]
< = jinja2-template-base
template = {{ template_rewrite_cached }}
rendered = {{ rewrite_cached_configuration }}
extra-context =
import json_module json
key server_dict rewrite-rules:rules
# Store Rewrite rules for second apache
[rewrite-rules]
rules = {{ dumps(cached_server_dict) }}
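The [cached-rewrite-rules] / [rewrite-rules] pair above turns the collected {domain: backend url} dict into the text file that the cached Apache loads as "RewriteMap apachemapcached txt:..." (see the apache_cached.conf.in diff further down). The shape of that map, sketched in Python with example entries and an example path:

# Sketch: write the {domain: backend-url} dict as a mod_rewrite txt map,
# one "key value" pair per line.
cached_server_dict = {
    "www.example.org": "http://10.0.0.9:8080",   # example slave with enable_cache
}
map_path = "/path/to/etc/apache_rewrite_cached.txt"  # example path
with open(map_path, "w") as f:
    for domain, backend in sorted(cached_server_dict.items()):
        f.write("%s %s\n" % (domain, backend))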
# Add parts generated by template
[buildout]
parts +=
{% for part in part_list -%}
{{ ' %s' % part }}
{% endfor -%}
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
cache-access = {{ cache_access }}
{% endif -%}
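Two pieces of logic in the template above are worth restating: the fallback domain derived from the slave reference, and the permissive truth test applied to enable_cache before the slave's url is swapped for the cache address. Both as a Python sketch, with example input:

# Sketch of the domain fallback and enable_cache handling above.
TRUE_VALUES = ['y', 'yes', '1', 'true']

def effective_domain(slave_instance, base_domain):
    # 'domain' wins, then legacy 'custom_domain', then <reference>.<base_domain>
    if slave_instance.get('domain') is not None:
        return slave_instance['domain']
    if slave_instance.get('custom_domain') is not None:
        return slave_instance['custom_domain']
    reference = slave_instance['slave_reference'].replace('-', '').lower()
    return '%s.%s' % (reference, base_domain)

slave = {'slave_reference': 'SLAP-123', 'enable_cache': 'True',
         'url': 'http://10.0.0.9:8080'}
print(effective_domain(slave, 'example.org'))   # -> slap123.example.org
if str(slave.get('enable_cache', '')).lower() in TRUE_VALUES:
    slave['url'] = 'http://10.0.0.2:26010'      # cache_access, example value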
# Apache frontend configuration file
# Automatically generated
# Basic server configuration
PidFile "{{ pid_file }}"
ServerName {{ domain }}
DocumentRoot {{ document_root }}
ServerRoot {{ instance_home }}
{% for ip in (ipv4_addr, "[%s]" % ipv6_addr) -%}
{% for port in (http_port, https_port) -%}
{{ "Listen %s:%s" % (ip, port) }}
{% endfor -%}
{% endfor -%}
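The nested loop just above expands into one Listen line per (address, port) pair: IPv4 and bracketed IPv6 on both the plain HTTP and HTTPS ports. A quick way to see the expansion, assuming the jinja2 package and using placeholder addresses:

# Illustration of the Listen expansion above; addresses and ports are examples.
from jinja2 import Template

snippet = (
    "{% for ip in (ipv4_addr, '[%s]' % ipv6_addr) -%}\n"
    "{% for port in (http_port, https_port) -%}\n"
    "{{ 'Listen %s:%s' % (ip, port) }}\n"
    "{% endfor -%}\n"
    "{% endfor -%}"
)
print(Template(snippet).render(
    ipv4_addr="10.0.0.2", ipv6_addr="2001:db8::2",
    http_port=8080, https_port=4443))
# Listen 10.0.0.2:8080
# Listen 10.0.0.2:4443
# Listen [2001:db8::2]:8080
# Listen [2001:db8::2]:4443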
ServerAdmin {{ server_admin }}
DefaultType text/plain
TypesConfig {{ httpd_home }}/conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
# As the backend trusts the REMOTE_USER header, always unset it
RequestHeader unset REMOTE_USER
ServerTokens Prod
# Log configuration
ErrorLog "{{ error_log }}"
LogLevel info
# LogFormat "%h %{REMOTE_USER}i %{Host}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
# LogFormat "%h %{REMOTE_USER}i %{Host}i %l %u %t \"%r\" %>s %b" common
# CustomLog "{{ access_log }}" common
LogFormat "%h %l %{REMOTE_USER}i %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined
CustomLog "{{ access_log }}" combined
<Directory {{ protected_path }}>
Order Deny,Allow
Allow from {{ access_control_string }}
</Directory>
<Directory {{ document_root }}>
Order Allow,Deny
Allow from All
</Directory>
# List of modules
#LoadModule unixd_module modules/mod_unixd.so
#LoadModule access_compat_module modules/mod_access_compat.so
#LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module {{ httpd_home }}/modules/mod_authz_host.so
LoadModule log_config_module {{ httpd_home }}/modules/mod_log_config.so
LoadModule deflate_module {{ httpd_home }}/modules/mod_deflate.so
LoadModule setenvif_module {{ httpd_home }}/modules/mod_setenvif.so
LoadModule version_module {{ httpd_home }}/modules/mod_version.so
LoadModule proxy_module {{ httpd_home }}/modules/mod_proxy.so
LoadModule proxy_http_module {{ httpd_home }}/modules/mod_proxy_http.so
LoadModule ssl_module {{ httpd_home }}/modules/mod_ssl.so
LoadModule mime_module {{ httpd_home }}/modules/mod_mime.so
LoadModule dav_module {{ httpd_home }}/modules/mod_dav.so
LoadModule dav_fs_module {{ httpd_home }}/modules/mod_dav_fs.so
LoadModule negotiation_module {{ httpd_home }}/modules/mod_negotiation.so
LoadModule rewrite_module {{ httpd_home }}/modules/mod_rewrite.so
LoadModule headers_module {{ httpd_home }}/modules/mod_headers.so
LoadModule cache_module {{ httpd_home }}/modules/mod_cache.so
LoadModule mem_cache_module {{ httpd_home }}/modules/mod_mem_cache.so
LoadModule antiloris_module {{ httpd_home }}/modules/mod_antiloris.so
# The following directives modify normal HTTP response behavior to
# handle known problems with browser implementations.
BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch ".*MSIE.*" nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4\.0" force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0
BrowserMatch "JDK/1\.0" force-response-1.0
# The following directive disables redirects on non-GET requests for
# a directory that does not include the trailing slash. This fixes a
# problem with Microsoft WebFolders which does not appropriately handle
# redirects for folders with DAV methods.
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV.
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
BrowserMatch "^gnome-vfs" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
# Cache directives
CacheEnable mem /
CacheDefaultExpire 3600
MCacheSize 8192
MCacheMaxObjectCount 1000
MCacheMaxObjectSize 8192
MCacheRemovalAlgorithm LRU
# Deflate
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/x-javascript application/javascript
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# SSL Configuration
SSLCertificateFile {{ login_certificate }}
SSLCertificateKeyFile {{ login_key }}
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
SSLSessionCache shmcb:/{{ httpd_mod_ssl_cache_directory }}/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLRandomSeed startup /dev/urandom 256
SSLRandomSeed connect builtin
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
# Accept proxy to sites using self-signed SSL certificates
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
NameVirtualHost *:{{ http_port }}
NameVirtualHost *:{{ https_port }}
include {{ slave_configuration_directory }}/*.conf
\ No newline at end of file
...@@ -2,16 +2,16 @@ ...@@ -2,16 +2,16 @@
# Automatically generated # Automatically generated
# Basic server configuration # Basic server configuration
PidFile "%(pid_file)s" PidFile "{{ pid_file }}"
ServerName %(server_name)s ServerName {{ domain }}
DocumentRoot %(document_root)s DocumentRoot {{ document_root }}
ServerRoot %(instance_home)s ServerRoot {{ instance_home }}
%(listen)s {{ "Listen %s:%s" % (ipv4_addr, cached_port) }}
ServerAdmin %(server_admin)s ServerAdmin {{ server_admin }}
DefaultType text/plain DefaultType text/plain
TypesConfig %(httpd_home)s/conf/mime.types TypesConfig {{ httpd_home }}/conf/mime.types
AddType application/x-compress .Z AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz AddType application/x-gzip .gz .tgz
...@@ -21,35 +21,45 @@ RequestHeader unset REMOTE_USER ...@@ -21,35 +21,45 @@ RequestHeader unset REMOTE_USER
ServerTokens Prod ServerTokens Prod
# Log configuration # Log configuration
ErrorLog "%(error_log)s" ErrorLog "{{ error_log }}"
LogLevel info LogLevel info
LogFormat "%%h %%{REMOTE_USER}i %%{Host}i %%l %%u %%t \"%%r\" %%>s %%b \"%%{Referer}i\" \"%%{User-Agent}i\"" combined # LogFormat "%h %{REMOTE_USER}i %{Host}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%%h %%{REMOTE_USER}i %%{Host}i %%l %%u %%t \"%%r\" %%>s %%b" common # LogFormat "%h %{REMOTE_USER}i %{Host}i %l %u %t \"%r\" %>s %b" common
CustomLog "%(access_log)s" common # CustomLog "{{ access_log }}" common
LogFormat "%h %l %{REMOTE_USER}i %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined
%(path_enable)s CustomLog "{{ access_log }}" combined
<Directory {{ protected_path }}>
Order Deny,Allow
Allow from {{ access_control_string }}
</Directory>
<Directory {{ document_root }}>
Order Allow,Deny
Allow from All
</Directory>
# List of modules # List of modules
#LoadModule unixd_module modules/mod_unixd.so #LoadModule unixd_module modules/mod_unixd.so
#LoadModule access_compat_module modules/mod_access_compat.so #LoadModule access_compat_module modules/mod_access_compat.so
#LoadModule authz_core_module modules/mod_authz_core.so #LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module %(httpd_home)s/modules/mod_authz_host.so LoadModule authz_host_module {{ httpd_home }}/modules/mod_authz_host.so
LoadModule log_config_module %(httpd_home)s/modules/mod_log_config.so LoadModule log_config_module {{ httpd_home }}/modules/mod_log_config.so
LoadModule deflate_module %(httpd_home)s/modules/mod_deflate.so LoadModule deflate_module {{ httpd_home }}/modules/mod_deflate.so
LoadModule setenvif_module %(httpd_home)s/modules/mod_setenvif.so LoadModule setenvif_module {{ httpd_home }}/modules/mod_setenvif.so
LoadModule version_module %(httpd_home)s/modules/mod_version.so LoadModule version_module {{ httpd_home }}/modules/mod_version.so
LoadModule proxy_module %(httpd_home)s/modules/mod_proxy.so LoadModule proxy_module {{ httpd_home }}/modules/mod_proxy.so
LoadModule proxy_http_module %(httpd_home)s/modules/mod_proxy_http.so LoadModule proxy_http_module {{ httpd_home }}/modules/mod_proxy_http.so
LoadModule ssl_module %(httpd_home)s/modules/mod_ssl.so LoadModule ssl_module {{ httpd_home }}/modules/mod_ssl.so
LoadModule mime_module %(httpd_home)s/modules/mod_mime.so LoadModule mime_module {{ httpd_home }}/modules/mod_mime.so
LoadModule dav_module %(httpd_home)s/modules/mod_dav.so LoadModule dav_module {{ httpd_home }}/modules/mod_dav.so
LoadModule dav_fs_module %(httpd_home)s/modules/mod_dav_fs.so LoadModule dav_fs_module {{ httpd_home }}/modules/mod_dav_fs.so
LoadModule negotiation_module %(httpd_home)s/modules/mod_negotiation.so LoadModule negotiation_module {{ httpd_home }}/modules/mod_negotiation.so
LoadModule rewrite_module %(httpd_home)s/modules/mod_rewrite.so LoadModule rewrite_module {{ httpd_home }}/modules/mod_rewrite.so
LoadModule headers_module %(httpd_home)s/modules/mod_headers.so LoadModule headers_module {{ httpd_home }}/modules/mod_headers.so
LoadModule cache_module %(httpd_home)s/modules/mod_cache.so LoadModule cache_module {{ httpd_home }}/modules/mod_cache.so
LoadModule mem_cache_module %(httpd_home)s/modules/mod_mem_cache.so LoadModule mem_cache_module {{ httpd_home }}/modules/mod_mem_cache.so
LoadModule antiloris_module %(httpd_home)s/modules/mod_antiloris.so LoadModule antiloris_module {{ httpd_home }}/modules/mod_antiloris.so
# The following directives modify normal HTTP response behavior to # The following directives modify normal HTTP response behavior to
# handle known problems with browser implementations. # handle known problems with browser implementations.
...@@ -85,14 +95,28 @@ AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javasc ...@@ -85,14 +95,28 @@ AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javasc
BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent
# SSL Configuration # SSL Configuration
%(ssl_snippet)s SSLCertificateFile {{ login_certificate }}
SSLCertificateKeyFile {{ login_key }}
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
SSLSessionCache shmcb:/{{ httpd_mod_ssl_cache_directory }}/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLRandomSeed startup /dev/urandom 256
SSLRandomSeed connect builtin
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
# Accept proxy to sites using self-signed SSL certificates
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
<VirtualHost *:%(https_port)s> # Only accept generic (i.e not Zope) backends on http
SSLEngine on <VirtualHost *:{{ cached_port }}>
SSLProxyEngine on SSLProxyEngine on
# Rewrite part # Rewrite part
ProxyVia On ProxyVia On
...@@ -100,68 +124,10 @@ Header append Vary User-Agent ...@@ -100,68 +124,10 @@ Header append Vary User-Agent
ProxyTimeout 600 ProxyTimeout 600
RewriteEngine On RewriteEngine On
# Include configuration file not operated by slapos. This file won't be erased RewriteMap apachemapcached txt:{{ apachecachedmap_path }}
# or changed when slapgrid is run. It can be freely customized by the node admin. RewriteCond ${apachemapcached:%{SERVER_NAME}} >""
Include %(custom_apache_virtualhost_conf)s RewriteRule ^/(.*)$ ${apachemapcached:%{SERVER_NAME}}/$1 [L,P]
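# Illustration only (hypothetical hostname and backend): the txt: map referenced
# above is a plain "key value" file with one entry per line, mapping the requested
# hostname to the backend URL the request should be proxied to, e.g.:
#   www.example.com http://[2001:db8::1]:8080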
# Define the 3 RewriteMaps (key -> value store): one for Zope, one generic,
# one generic https only,
# containing: rewritten URL -> original URL (a.k.a VirtualHostBase in Zope)
RewriteMap apachemapzope txt:%(apachemapzope_path)s
RewriteMap apachemapgeneric txt:%(apachemap_path)s
RewriteMap apachemapgenerichttpsonly txt:%(apachemap_httpsonly_path)s
# Define another RewriteMap for Zope, containing:
# rewritten URL -> VirtualHostRoot
RewriteMap apachemapzopepath txt:%(apachemapzopepath_path)s
# First, we check if we have a zope backend server
# If so, let's use Virtual Host Daemon rewrite
RewriteCond ${apachemapzope:%%{SERVER_NAME}} >""
# We suppose that Apache listens to 443 (even indirectly thanks to things like iptables)
RewriteRule ^/(.*)$ ${apachemapzope:%%{SERVER_NAME}}/VirtualHostBase/https/%%{SERVER_NAME}:443/${apachemapzopepath:%%{SERVER_NAME}}/VirtualHostRoot/$1 [L,P]
# If we have generic backend server, let's rewrite without virtual host daemon
RewriteCond ${apachemapgeneric:%%{SERVER_NAME}} >""
# We suppose that Apache listens to 443 (even indirectly thanks to things like iptables)
RewriteRule ^/(.*)$ ${apachemapgeneric:%%{SERVER_NAME}}/$1 [L,P]
# Same for https only server
RewriteCond ${apachemapgenerichttpsonly:%%{SERVER_NAME}} >""
# We suppose that Apache listens to 443 (even indirectly thanks to things like iptables)
RewriteRule ^/(.*)$ ${apachemapgenerichttpsonly:%%{SERVER_NAME}}/$1 [L,P]
# If nothing exists: put a nice error # If nothing exists: put a nice error
ErrorDocument 404 /notfound.html ErrorDocument 404 /notfound.html
</VirtualHost> </VirtualHost>
# Only accept generic (i.e not Zope) backends on http
<VirtualHost *:%(plain_http_port)s>
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# Remove "Secure" from cookies, as backend may be https
Header edit Set-Cookie "(?i)^(.+);secure$" "$1"
# Include configuration file not operated by slapos. This file won't be erased
# or changed when slapgrid is run. It can be freely customized by the node admin.
Include %(custom_apache_virtualhost_conf)s
RewriteMap apachemapgeneric txt:%(apachemap_path)s
RewriteCond ${apachemapgeneric:%%{SERVER_NAME}} >""
RewriteRule ^/(.*)$ ${apachemapgeneric:%%{SERVER_NAME}}/$1 [L,P]
# Not using HTTPS? Ask that guy over there.
# Dummy redirection to https. Note: will work only if https listens
# on standard port (443).
RewriteRule ^/(.*)$ https://%%{SERVER_NAME}%%{REQUEST_URI}
</VirtualHost>
# Include configuration file not operated by slapos. This file won't be erased
# or changed when slapgrid is run. It can be freely customized by the node admin.
Include %(custom_apache_conf)s
{% for server_tuple in server_dict.items() -%}
{{ "%s %s" % server_tuple }}
{% endfor -%}
{% set TRUE_VALUES = ['y', 'yes', '1', 'true'] -%}
<VirtualHost *:{{ https_port }}>
ServerName {{ slave_parameter.get('domain') }}
ServerAlias {{ slave_parameter.get('domain') }}
SSLEngine on
SSLProxyEngine on
SSLProtocol -ALL +SSLv3 +TLSv1
SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH
{% set ssl_configuration_list = [('SSLCertificateFile', 'path_to_ssl_crt'),
('SSLCertificateKeyFile', 'path_to_ssl_key'),
('SSLCACertificateFile', 'path_to_ssl_ca_crt'),
('SSLCertificateChainFile', 'path_to_ssl_ca_crt')] -%}
{% for key, value in ssl_configuration_list -%}
{% if value in slave_parameter -%}
{{ ' %s' % key }} {{ slave_parameter.get(value) }}
{% endif -%}
{% endfor -%}
# One Slave two logs
ErrorLog "{{ error_log }}"
LogLevel info
LogFormat "%h %l %{REMOTE_USER}i %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined
CustomLog "{{ access_log }}" combined
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
{% if slave_parameter.get('type', '') == 'zope' -%}
# First, we check if we have a zope backend server
# If so, let's use Virtual Host Daemon rewrite
# We suppose that Apache listens to 443 (even indirectly thanks to things like iptables)
RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/VirtualHostBase/https/{{ slave_parameter.get('domain', '') }}:443/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
{% else -%}
RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/$1 [L,P]
{% endif -%}
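{#- Illustration only, with hypothetical slave parameters: for a slave of type
    "zope" with url http://10.0.0.3:8080, domain www.example.com and path erp5,
    a request for https://www.example.com/foo is proxied to
    http://10.0.0.3:8080/VirtualHostBase/https/www.example.com:443/erp5/VirtualHostRoot/foo -#}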
</VirtualHost>
<VirtualHost *:{{ http_port }}>
ServerName {{ slave_parameter.get('domain') }}
ServerAlias {{ slave_parameter.get('domain') }}
SSLProxyEngine on
# Rewrite part
ProxyVia On
ProxyPreserveHost On
ProxyTimeout 600
RewriteEngine On
# One Slave two logs
ErrorLog "{{ error_log }}"
LogLevel info
LogFormat "%h %l %{REMOTE_USER}i %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined
CustomLog "{{ access_log }}" combined
# Remove "Secure" from cookies, as backend may be https
Header edit Set-Cookie "(?i)^(.+);secure$" "$1"
# Next line is forbidden and people who copy it will be hanged short
{% set https_only = ('' ~ slave_parameter.get('https-only', '')).lower() in TRUE_VALUES -%}
{% if https_only -%}
# Not using HTTPS? Ask that guy over there.
# Dummy redirection to https. Note: will work only if https listens
# on standard port (443).
RewriteCond %{SERVER_PORT} !^{{ https_port }}$
RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [NC,R,L]
{% elif slave_parameter.get('type', '') == 'zope' -%}
# First, we check if we have a zope backend server
# If so, let's use Virtual Host Daemon rewrite
# We suppose that Apache listens to 80 (even indirectly thanks to things like iptables)
RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/VirtualHostBase/http/{{ slave_parameter.get('domain', '') }}:80/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
{% else -%}
RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/$1 [L,P]
{% endif -%}
# If nothing exists: put a nice error
# ErrorDocument 404 /notfound.html
# Dadiboom
</VirtualHost>
{{ content }}
\ No newline at end of file
{% set part_list = [] -%}
{% set slave_information_dict = {} -%}
# regroup slave information from all frontends
{%- for frontend, slave_list_raw in slave_information.iteritems() -%}
{% set slave_list = json_module.loads(slave_list_raw) -%}
{% for slave_dict in slave_list -%}
{% set slave_reference = slave_dict.pop('slave-reference') %}
{% set current_slave_dict = slave_information_dict.get(slave_reference, {}) %}
{% do current_slave_dict.update(slave_dict) -%}
{% do current_slave_dict.__setitem__(
'replication_number',
current_slave_dict.get('replication_number', 0) + 1
) -%}
{% do slave_information_dict.__setitem__(slave_reference, current_slave_dict) -%}
{% endfor -%}
{% endfor %}
# Publish information for each slave
{% for slave_reference, slave_information in slave_information_dict.iteritems() %}
{% set publish_section_title = 'publish-%s' % slave_reference -%}
{% do part_list.append(publish_section_title) -%}
[{{ publish_section_title }}]
recipe = slapos.cookbook:publish
-slave-reference = {{ slave_reference }}
{% for key, value in slave_information.iteritems() -%}
{{ key }} = {{ value }}
{% endfor -%}
{% endfor %}
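# Illustration only (hypothetical slave and keys): a slave "SLAVE-1" answered by
# two frontends that both returned a "url" key would render roughly as:
#   [publish-SLAVE-1]
#   recipe = slapos.cookbook:publish
#   -slave-reference = SLAVE-1
#   replication_number = 2
#   url = https://slave-1.example.com/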
[buildout]
parts =
{% for part in part_list %}
{{ ' %s' % part }}
{% endfor %}
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
\ No newline at end of file
<VirtualHost *:{{ https_port }}>
{{ apache_custom_https }}
</VirtualHost>
<VirtualHost *:{{ http_port }}>
{{ apache_custom_http }}
</VirtualHost>
{% set part_list = [] -%}
{% set crontab_line_list = [] -%}
###############################
#
# Instantiate dcron
#
###############################
[directory]
recipe = slapos.cookbook:mkdirectory
etc = $${buildout:directory}/etc
bin = $${buildout:directory}/bin
srv = $${buildout:directory}/srv
var = $${buildout:directory}/var
run = $${:var}/run
log = $${:var}/log
varnginx = $${:var}/nginx
# scripts = $${:etc}/run
services = $${:etc}/service
cron-entries = $${:etc}/cron.d
cron-lines = $${:etc}/cron.lines
crontabs = $${:etc}/crontabs
cronstamps = $${:etc}/cronstamps
backup = $${:srv}/backup
status = $${:srv}/status
backupscript = $${:etc}/backup
www = $${:srv}/www
home = $${:etc}/home
ssl = $${:etc}/ssl
ssh = $${:home}/.ssh
#################################
# Cron service
#################################
[dcron-service]
recipe = slapos.recipe.template
url = ${template-dcron-service:output}
output = $${directory:services}/crond
mode = 0700
logfile = $${directory:log}/crond.log
#################################
# Slave backup scripts and crontab
#################################
# Go through the slave list to set each slave's configuration
{% for slave_instance in slave_instance_list -%}
{% set slave_reference = slave_instance.get('slave_reference') -%}
{% set frequency = slave_instance.get('frequency', '') -%}
{% set hostname = slave_instance.get('hostname', '') -%}
{% set connection = slave_instance.get('connection', '') -%}
{% set include = slave_instance.get('include', '') -%}
{% set include_string = "' --include='".join(include.split(' ')) -%}
{% set exclude = slave_instance.get('exclude', '') -%}
{% set exclude_string = '' -%}
{% set sudo = slave_instance.get('sudo', 'False') -%}
{% set remote_schema = 'rdiff-backup --server --restrict-read-only / -- "$@"' -%}
{% if (exclude != '') -%}
{% set exclude_string = "' --exclude='".join(exclude.split(' ')) -%}
{% set exclude_string = "--exclude='" + exclude_string + "'" -%}
{% endif -%}
{% if (sudo == 'True') -%}
{% set remote_schema = 'sudo backupagent_rdiff-backup' -%}
{% endif -%}
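{#- Illustration only (hypothetical parameters): with include = "/etc /home" and
    exclude = "/tmp", include_string renders as /etc' --include='/home and
    exclude_string as --exclude='/tmp', so the generated backup script ends up
    calling rdiff-backup with --include='/etc' --include='/home' --exclude='/tmp'. -#}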
{% if (frequency != '') and (hostname != '') and (connection != '') and (include != '') -%}
[{{ slave_reference }}-backup-directory]
recipe = slapos.cookbook:mkdirectory
directory = $${directory:backup}/$${:_buildout_section_name_}
[{{ slave_reference }}-backup-private_key]
recipe = plone.recipe.command
stop-on-error = false
command = ${dropbear-output:keygen} -t $${:type} -s 2048 -f $${:key}
key = $${directory:ssh}/$${:_buildout_section_name_}
type = rsa
[{{ slave_reference }}-backup-public_key]
recipe = plone.recipe.command
stop-on-error = true
command = ${coreutils-output:rm} -f $${:key} && ${dropbear-output:keygen} -y -f {{ '$${' ~ slave_reference }}-backup-private_key:key} | ${grep-output:grep} {{ '$${' ~ slave_reference }}-backup-private_key:type} > $${:key}
key = {{ '$${' ~ slave_reference }}-backup-private_key:key}.pub
location = $${:key}
# Insert at the beginning of the part list, to ensure that all public keys are generated before trying to publish. This reduces the number of slapgrid-cp runs.
{% do part_list.insert(0, "%s-backup-public_key" % slave_reference) -%}
[{{ slave_reference }}-backup-read-public_key]
recipe = slapos.cookbook:readline
storage-path = {{ '$${' ~ slave_reference }}-backup-public_key:key}
# Publish slave {{ slave_reference }} information
[{{ slave_reference }}-backup-publish]
recipe = slapos.cookbook:publish
-slave-reference = {{ slave_reference }}
authorized_key = {{ '$${' ~ slave_reference }}-backup-read-public_key:readline}
rss = https://[$${nginx-configuration:ip}]:$${nginx-configuration:port}/{{ '$${' ~ slave_reference }}-backup-script:status_name}.rss
{% do part_list.append("%s-backup-publish" % slave_reference) -%}
[{{ slave_reference }}-backup-script]
recipe = slapos.recipe.template
url = ${template-backup-script:output}
output = $${directory:backupscript}/$${:_buildout_section_name_}
mode = 0700
datadirectory = {{ '$${' ~ slave_reference }}-backup-directory:directory}
sshkey = {{ '$${' ~ slave_reference }}-backup-private_key:key}
connection = {{ connection }}
hostname = {{ hostname }}
include = {{ include_string }}
exclude_string = {{ exclude_string }}
remote_schema = {{ remote_schema }}
status_name = {{ slave_reference }}_status.txt
status_log = $${directory:status}/$${:status_name}
[{{ slave_reference }}-backup-crontab-line]
recipe = slapos.recipe.template
url = ${template-crontab-line:output}
output = $${directory:cron-lines}/$${:_buildout_section_name_}
mode = 0600
script = {{ '$${' ~ slave_reference }}-backup-script:output}
frequency = {{ frequency }}
{% do crontab_line_list.append("$${%s-backup-crontab-line:output}" % slave_reference) -%}
{% endif -%}
{% endfor -%}
#################################
# Generate crontab file
#################################
[update-rss-script]
recipe = slapos.recipe.template
url = ${template-update-rss-script:output}
output = $${directory:etc}/$${:_buildout_section_name_}
mode = 0700
global_rss = $${slap-connection:computer-id}-$${slap-connection:partition-id}.rss
[update-rss-crontab-line]
recipe = slapos.recipe.template
url = ${template-crontab-line:output}
output = $${directory:cron-lines}/$${:_buildout_section_name_}
mode = 0600
script = $${update-rss-script:output}
frequency = */5 * * * *
{% do crontab_line_list.append("$${update-rss-crontab-line:output}") -%}
[publish-global-rss]
recipe = slapos.cookbook:publish
rss = https://[$${nginx-configuration:ip}]:$${nginx-configuration:port}/$${update-rss-script:global_rss}
{% set crontab_line_list_string = " ".join(crontab_line_list) -%}
[activate-crontab-file]
# XXX File is never removed
recipe = plone.recipe.command
stop-on-error = true
command = ${coreutils-output:cat} ${template-crontab:output} {{ crontab_line_list_string }} | ${dcron-output:crontab} -c $${directory:crontabs} -
#################################
# Nginx service
#################################
[nginx-service]
recipe = slapos.recipe.template
url = ${template-nginx-service:output}
output = $${directory:services}/nginx
mode = 0700
virtual-depends =
$${nginx-configuration:ip}
[nginx-configuration]
recipe = slapos.recipe.template
url = ${template-nginx-configuration:output}
output = $${directory:etc}/nginx.cfg
mode = 0600
access_log = $${directory:log}/nginx-access.log
error_log = $${directory:log}/nginx-error.log
ip = $${slap-network-information:global-ipv6}
port = 9443
ssl_key = $${directory:ssl}/nginx.key
ssl_csr = $${directory:ssl}/nginx.csr
ssl_crt = $${directory:ssl}/nginx.crt
# Add parts generated by template
[buildout]
parts =
dcron-service
nginx-service
activate-crontab-file
publish-global-rss
{% for part in part_list -%}
{{ ' %s' % part }}
{% endfor -%}
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[buildout]
parts =
switch-softwaretype
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[dynamic-template-pullrdiffbackup]
recipe = slapos.recipe.template:jinja2
template = ${template-pullrdiffbackup:output}
rendered = $${buildout:parts-directory}/$${:_buildout_section_name_}/$${:filename}
filename = instance-pullrdiffbackup.cfg
extensions = jinja2.ext.do
context =
key slave_instance_list instance-parameter:slave-instance-list
[switch-softwaretype]
recipe = slapos.cookbook:softwaretype
default = $${:pullrdiffbackup}
# pullrdiffbackup = ${template-pullrdiffbackup:output}
pullrdiffbackup = $${dynamic-template-pullrdiffbackup:rendered}
[slap-connection]
# Part to migrate to the new '-'-separated option names
computer-id = $${slap_connection:computer_id}
partition-id = $${slap_connection:partition_id}
server-url = $${slap_connection:server_url}
software-release-url = $${slap_connection:software_release_url}
key-file = $${slap_connection:key_file}
cert-file = $${slap_connection:cert_file}
# [slap-parameter]
# slave-instance-list = []
[instance-parameter]
# Fetches parameters defined in SlapOS Master for this instance.
# Always the same.
recipe = slapos.cookbook:slapconfiguration.serialised
computer = $${slap_connection:computer_id}
partition = $${slap_connection:partition_id}
url = $${slap_connection:server_url}
key = $${slap_connection:key_file}
cert = $${slap_connection:cert_file}
[buildout]
extends =
../../component/dash/buildout.cfg
../../component/dcron/buildout.cfg
../../component/logrotate/buildout.cfg
../../component/openssl/buildout.cfg
../../component/nginx/buildout.cfg
../../component/rdiff-backup/buildout.cfg
# ../../component/duplicity/buildout.cfg
# ../../component/git/buildout.cfg
# ../../component/subversion/buildout.cfg
../../component/rsync/buildout.cfg
../../component/dropbear/buildout.cfg
../../component/grep/buildout.cfg
# ../../stack/flask.cfg
../../stack/slapos.cfg
parts =
extra-eggs
rdiff-backup
# duplicity
dcron
logrotate
nginx
openssl
# git
# subversion
rsync
# flask-egg
template
template-pullrdiffbackup
template-backup-script
template-crontab-line
slapos-cookbook
[extra-eggs]
recipe = zc.recipe.egg
interpreter = pythonforrssgen
eggs =
PyRSS2Gen
[networkcache]
# signature certificates of the following uploaders.
# Romain Courteaud
# Sebastien Robin
# Kazuhiko Shiozaki
# Cedric de Saint Martin
# Yingjie Xu
# Gabriel Monnerat
# Łukasz Nowak
# Test Agent (Automatic update from tests)
# Aurélien Calonne
signature-certificate-list =
-----BEGIN CERTIFICATE-----
MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE
CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5
MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl
ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw
boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX
Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA
ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX
mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC
q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g
QUUGLQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8jCCAVugAwIBAgIJAPu2zchZ2BxoMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV
BAMMB3RzeGRldjMwHhcNMTExMDE0MTIxNjIzWhcNMTIxMDEzMTIxNjIzWjASMRAw
DgYDVQQDDAd0c3hkZXYzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrPbh+
YGmo6mWmhVb1vTqX0BbeU0jCTB8TK3i6ep3tzSw2rkUGSx3niXn9LNTFNcIn3MZN
XHqbb4AS2Zxyk/2tr3939qqOrS4YRCtXBwTCuFY6r+a7pZsjiTNddPsEhuj4lEnR
L8Ax5mmzoi9nE+hiPSwqjRwWRU1+182rzXmN4QIDAQABo1AwTjAdBgNVHQ4EFgQU
/4XXREzqBbBNJvX5gU8tLWxZaeQwHwYDVR0jBBgwFoAU/4XXREzqBbBNJvX5gU8t
LWxZaeQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQA07q/rKoE7fAda
FED57/SR00OvY9wLlFEF2QJ5OLu+O33YUXDDbGpfUSF9R8l0g9dix1JbWK9nQ6Yd
R/KCo6D0sw0ZgeQv1aUXbl/xJ9k4jlTxmWbPeiiPZEqU1W9wN5lkGuLxV4CEGTKU
hJA/yXa1wbwIPGvX3tVKdOEWPRXZLg==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB7jCCAVegAwIBAgIJAJWA0jQ4o9DGMA0GCSqGSIb3DQEBBQUAMA8xDTALBgNV
BAMMBHg2MXMwIBcNMTExMTI0MTAyNDQzWhgPMjExMTEwMzExMDI0NDNaMA8xDTAL
BgNVBAMMBHg2MXMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANdJNiFsRlkH
vq2kHP2zdxEyzPAWZH3CQ3Myb3F8hERXTIFSUqntPXDKXDb7Y/laqjMXdj+vptKk
3Q36J+8VnJbSwjGwmEG6tym9qMSGIPPNw1JXY1R29eF3o4aj21o7DHAkhuNc5Tso
67fUSKgvyVnyH4G6ShQUAtghPaAwS0KvAgMBAAGjUDBOMB0GA1UdDgQWBBSjxFUE
RfnTvABRLAa34Ytkhz5vPzAfBgNVHSMEGDAWgBSjxFUERfnTvABRLAa34Ytkhz5v
PzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFLDS7zNhlrQYSQO5KIj
z2RJe3fj4rLPklo3TmP5KLvendG+LErE2cbKPqnhQ2oVoj6u9tWVwo/g03PMrrnL
KrDm39slYD/1KoE5kB4l/p6KVOdeJ4I6xcgu9rnkqqHzDwI4v7e8/D3WZbpiFUsY
vaZhjNYKWQf79l6zXfOvphzJ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT
MREwDwYDVQQDDAhDT01QLTIzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
wi/3Z8W9pUiegUXIk/AiFDQ0UJ4JFAwjqr+HSRUirlUsHHT+8DzH/hfcTDX1I5BB
D1ADk+ydXjMm3OZrQcXjn29OUfM5C+g+oqeMnYQImN0DDQIOcUyr7AJc4xhvuXQ1
P2pJ5NOd3tbd0kexETa1LVhR6EgBC25LyRBRae76qosCAwEAAaNQME4wHQYDVR0O
BBYEFMDmW9aFy1sKTfCpcRkYnP6zUd1cMB8GA1UdIwQYMBaAFMDmW9aFy1sKTfCp
cRkYnP6zUd1cMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAskbFizHr
b6d3iIyN+wffxz/V9epbKIZVEGJd/6LrTdLiUfJPec7FaxVCWNyKBlCpINBM7cEV
Gn9t8mdVQflNqOlAMkOlUv1ZugCt9rXYQOV7rrEYJBWirn43BOMn9Flp2nibblby
If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAIlBksrZVkK8MA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMzU3MCAXDTEyMDEyNjEwNTUyOFoYDzIxMTIwMTAyMTA1NTI4WjAT
MREwDwYDVQQDDAhDT01QLTM1NzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ts+iGUwi44vtIfwXR8DCnLtHV4ydl0YTK2joJflj0/Ws7mz5BYkxIU4fea/6+VF3
i11nwBgYgxQyjNztgc9u9O71k1W5tU95yO7U7bFdYd5uxYA9/22fjObaTQoC4Nc9
mTu6r/VHyJ1yRsunBZXvnk/XaKp7gGE9vNEyJvPn2bkCAwEAAaNQME4wHQYDVR0O
BBYEFKuGIYu8+6aEkTVg62BRYaD11PILMB8GA1UdIwQYMBaAFKuGIYu8+6aEkTVg
62BRYaD11PILMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAMoTRpBxK
YLEZJbofF7gSrRIcrlUJYXfTfw1QUBOKkGFFDsiJpEg4y5pUk1s5Jq9K3SDzNq/W
it1oYjOhuGg3al8OOeKFrU6nvNTF1BAvJCl0tr3POai5yXyN5jlK/zPfypmQYxE+
TaqQSGBJPVXYt6lrq/PRD9ciZgKLOwEqK8w=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAPHoWu90gbsgMA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCXZpZmlibm9kZTAeFw0xMjAzMTkyMzIwNTVaFw0xMzAzMTkyMzIwNTVaMBQx
EjAQBgNVBAMMCXZpZmlibm9kZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ozBijpO8PS5RTeKTzA90vi9ezvv4vVjNaguqT4UwP9+O1+i6yq1Y2W5zZxw/Klbn
oudyNzie3/wqs9VfPmcyU9ajFzBv/Tobm3obmOqBN0GSYs5fyGw+O9G3//6ZEhf0
NinwdKmrRX+d0P5bHewadZWIvlmOupcnVJmkks852BECAwEAAaNQME4wHQYDVR0O
BBYEFF9EtgfZZs8L2ZxBJxSiY6eTsTEwMB8GA1UdIwQYMBaAFF9EtgfZZs8L2ZxB
JxSiY6eTsTEwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAc43YTfc6
baSemaMAc/jz8LNLhRE5dLfLOcRSoHda8y0lOrfe4lHT6yP5l8uyWAzLW+g6s3DA
Yme/bhX0g51BmI6gjKJo5DoPtiXk/Y9lxwD3p7PWi+RhN+AZQ5rpo8UfwnnN059n
yDuimQfvJjBFMVrdn9iP6SfMjxKaGk6gVmI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAMNZBmoIOXPBMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMTMyMCAXDTEyMDUwMjEyMDQyNloYDzIxMTIwNDA4MTIwNDI2WjAT
MREwDwYDVQQDDAhDT01QLTEzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
6peZQt1sAmMAmSG9BVxxcXm8x15kE9iAplmANYNQ7z2YO57c10jDtlYlwVfi/rct
xNUOKQtc8UQtV/fJWP0QT0GITdRz5X/TkWiojiFgkopza9/b1hXs5rltYByUGLhg
7JZ9dZGBihzPfn6U8ESAKiJzQP8Hyz/o81FPfuHCftsCAwEAAaNQME4wHQYDVR0O
BBYEFNuxsc77Z6/JSKPoyloHNm9zF9yqMB8GA1UdIwQYMBaAFNuxsc77Z6/JSKPo
yloHNm9zF9yqMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAl4hBaJy1
cgiNV2+Z5oNTrHgmzWvSY4duECOTBxeuIOnhql3vLlaQmo0p8Z4c13kTZq2s3nhd
Loe5mIHsjRVKvzB6SvIaFUYq/EzmHnqNdpIGkT/Mj7r/iUs61btTcGUCLsUiUeci
Vd0Ozh79JSRpkrdI8R/NRQ2XPHAo+29TT70=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtNzcyMCAXDTEyMDgxMDE1NDI1MVoYDzIxMTIwNzE3MTU0MjUxWjAT
MREwDwYDVQQDDAhDT01QLTc3MjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
o7aipd6MbnuGDeR1UJUjuMLQUariAyQ2l2ZDS6TfOwjHiPw/mhzkielgk73kqN7A
sUREx41eTcYCXzTq3WP3xCLE4LxLg1eIhd4nwNHj8H18xR9aP0AGjo4UFl5BOMa1
mwoyBt3VtfGtUmb8whpeJgHhqrPPxLoON+i6fIbXDaUCAwEAAaNQME4wHQYDVR0O
BBYEFEfjy3OopT2lOksKmKBNHTJE2hFlMB8GA1UdIwQYMBaAFEfjy3OopT2lOksK
mKBNHTJE2hFlMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAaNRx6YN2
M/p3R8/xS6zvH1EqJ3FFD7XeAQ52WuQnKSREzuw0dsw12ClxjcHiQEFioyTiTtjs
5pW18Ry5Ie7iFK4cQMerZwWPxBodEbAteYlRsI6kePV7Gf735Y1RpuN8qZ2sYL6e
x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB+DCCAWGgAwIBAgIJAKGd0vpks6T/MA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCUNPTVAtMTU4NDAgFw0xMzA2MjAxMjE5MjBaGA8yMTEzMDUyNzEyMTkyMFow
FDESMBAGA1UEAwwJQ09NUC0xNTg0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB
gQDZTH9etPUC+wMZQ3UIiOwyyCfHsJ+7duCFYjuo1uZrhtDt/fp8qb8qK9ob+df3
EEYgA0IgI2j/9jNUEnKbc5+OrfKznzXjrlrH7zU8lKBVNCLzQuqBKRNajZ+UvO8R
nlqK2jZCXP/p3HXDYUTEwIR5W3tVCEn/Vda4upTLcPVE5wIDAQABo1AwTjAdBgNV
HQ4EFgQU7KXaNDheQWoy5uOU01tn1M5vNkEwHwYDVR0jBBgwFoAU7KXaNDheQWoy
5uOU01tn1M5vNkEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQASmqCU
Znbvu6izdicvjuE3aKnBa7G++Fdp2bdne5VCwVbVLYCQWatB+n4crKqGdnVply/u
+uZ16u1DbO9rYoKgWqjLk1GfiLw5v86pd5+wZd5I9QJ0/Sbz2vZk5S4ciMIGwArc
m711+GzlW5xe6GyH9SZaGOPAdUbI6JTDwLzEgA==
-----END CERTIFICATE-----
##########################################################
# Service startup scripts and configuration files
##########################################################
[template-nginx-service]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-nginx-service.sh.in
md5sum = 5c94d952305552dcbeaeaeb27dd28f3b
output = ${buildout:directory}/template-nginx-service.sh.in
mode = 0644
[template-nginx-configuration]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-nginx.cfg.in
md5sum = c54d36f55ba71c897505ed61213e104a
output = ${buildout:directory}/template-nginx.cfg.in
mode = 0644
[template-dcron-service]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-dcron-service.sh.in
md5sum = 1372441dac23e4fa7d2dc773a74725ea
output = ${buildout:directory}/template-dcron-service.sh.in
mode = 0644
[template-backup-script]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-backup-script.sh.in
md5sum = 8a076962fc4df7f154572543899328e3
output = ${buildout:directory}/template-backup-script.sh.in
mode = 0644
[template-crontab-line]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-crontab-line.in
md5sum = 5cbd64f04da0601ba4286516a6161f5e
output = ${buildout:directory}/template-crontab-line.in
mode = 0644
[template-crontab]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-crontab.in
md5sum = 072be0fd04896880c931d44d8eabde37
output = ${buildout:directory}/template-crontab.in
mode = 0644
[status2rss]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/status2rss.py
md5sum = 138c96e0836f2b06414b98ba2643f21c
output = ${buildout:directory}/status2rss.py
mode = 0644
[template-update-rss-script]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-update-rss.sh.in
md5sum = 529058c54e873ab26f7920c868b23c50
output = ${buildout:directory}/template-update-rss.sh.in
mode = 0644
##########################################################
# Buildout instance.cfg templates
##########################################################
[template-pullrdiffbackup]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-pullrdiffbackup.cfg.in
md5sum = 9bf3a34fa41ae6fe57b183293b3ff377
output = ${buildout:directory}/template-pullrdiffbackup.cfg
mode = 0644
[template]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
md5sum = 42021b325159dff29e4bd4e33b8ff2f3
output = ${buildout:directory}/template.cfg
mode = 0644
[versions]
zc.buildout = 1.6.0-dev-SlapOS-010
rdiff-backup = 1.0.5
Jinja2 = 2.7
MarkupSafe = 0.18
Werkzeug = 0.9.1
buildout-versions = 1.7
gunicorn = 17.5
itsdangerous = 0.22
meld3 = 0.6.10
plone.recipe.command = 1.1
slapos.cookbook = 0.80
slapos.recipe.build = 0.11.6
slapos.recipe.cmmi = 0.1.1
slapos.recipe.template = 2.4.2
zc.recipe.egg = 1.3.2
PyRSS2Gen = 1.1
# Required by:
# slapos.core==0.35.1
Flask = 0.10.1
# Required by:
# slapos.cookbook==0.78.1
inotifyx = 0.2.0-1
# Required by:
# slapos.cookbook==0.78.1
lock-file = 2.0
# Required by:
# slapos.cookbook==0.78.1
# slapos.core==0.35.1
# xml-marshaller==0.9.7
lxml = 3.2.1
# Required by:
# slapos.cookbook==0.78.1
netaddr = 0.7.10
# Required by:
# slapos.core==0.35.1
netifaces = 0.8-1
# Required by:
# slapos.core==0.35.1
pyflakes = 0.7.3
# Required by:
# slapos.cookbook==0.78.1
pytz = 2013b
# Required by:
# slapos.cookbook==0.78.1
# slapos.core==0.35.1
# zc.buildout==1.6.0-dev-SlapOS-010
# zc.recipe.egg==1.3.2
setuptools = 0.9.5
# Required by:
# slapos.cookbook==0.78.1
slapos.core = 0.35.1
# Required by:
# slapos.core==0.35.1
supervisor = 3.0b2
# Required by:
# slapos.core==0.35.1
unittest2 = 0.5.1
# Required by:
# slapos.cookbook==0.78.1
xml-marshaller = 0.9.7
# Required by:
# slapos.core==0.35.1
zope.interface = 4.0.5
cliff = 1.4
cmd2 = 0.6.5.1
prettytable = 0.7.2
requests = 1.2.3
import datetime
import uuid
import PyRSS2Gen
import sys
from email.utils import parsedate_tz, mktime_tz
import base64
# Based on http://thehelpfulhacker.net/2011/03/27/a-rss-feed-for-your-crontabs/
# ### Defaults
TITLE = sys.argv[1]
LINK = sys.argv[2]
DESCRIPTION = TITLE
items = []
while 1:
try:
line = sys.stdin.readline()
except KeyboardInterrupt:
break
if not line:
break
time, desc = line.split(',', 1)
rss_item = PyRSS2Gen.RSSItem(
title = desc,
description = "%s, %s" % (time, desc),
link = LINK,
pubDate = datetime.datetime.fromtimestamp(mktime_tz(parsedate_tz(time))),
guid = PyRSS2Gen.Guid(base64.b64encode("%s, %s" % (time, desc)))
)
items.append(rss_item)
### Build the rss feed
rss_feed = PyRSS2Gen.RSS2 (
title = TITLE,
link = LINK,
description = DESCRIPTION,
lastBuildDate = datetime.datetime.utcnow(),
items = items
)
print rss_feed.to_xml()
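# Illustration only (hypothetical paths): template-update-rss.sh.in pipes each status
# file, whose lines look like "<date -u output>, <hostname> backup success", through
# this script, e.g.:
#   cat srv/status/SLAVE-1_status.txt | python status2rss.py \
#     "Backup status SLAVE-1_status.txt" \
#     "https://[2001:db8::1]:9443/SLAVE-1_status.txt.rss" > www/SLAVE-1_status.txt.rss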
#!${dash-output:dash}
# trap "echo Backing up failed for $${:hostname}" ERR
export HOME=$${directory:home}
# Clean status file (no history needed)
${coreutils-output:rm} -f $${:status_log}
# Inform about beginning of backup
${coreutils-output:echo} "`${coreutils-output:date} -u`, $${:hostname} backup running" >> $${:status_log}
# set -e
cd $${:datadirectory}
${rdiff-backup-output:rdiff-backup} \
$${:exclude_string} \
--include='$${:include}' \
--exclude='**' \
--remote-schema '${dropbear-output:ssh} -T -y -i $${:sshkey} %s $${:remote_schema}' \
$${:connection}::/ ./
RESULT=$?
# Inform about backup status
${coreutils-output:rm} -f $${:status_log}
if [ $RESULT -eq 0 ]
then
${coreutils-output:echo} "`${coreutils-output:date} -u`, $${:hostname} backup success" >> $${:status_log}
else
${coreutils-output:echo} "`${coreutils-output:date} -u`, $${:hostname} backup failed" >> $${:status_log}
fi
# python scripts/verify_with_sudo.py ./ $${:connection}:/
# $${:_buildout_section_name_}
$${:frequency} $${:script}
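# Illustration only (hypothetical frequency and path): with frequency = "0 3 * * *"
# this template renders as something like:
#   0 3 * * * /srv/slapgrid/slappartN/etc/backup/SLAVE-1-backup-script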
# min(0-59) hours(0-23) day(1-31) month(1-12) dow(0-7) command
MAILTO=admins@erp5.org
#!${dash-output:dash}
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
exec ${dcron-output:crond} \
-s $${directory:cron-entries} \
-c $${directory:crontabs} \
-t $${directory:cronstamps} \
-f -l 5 \
-L $${dcron-service:logfile}
# -M cron_simplelogger
#!${dash-output:dash}
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
if [ ! -e $${nginx-configuration:ssl_crt} ]
then
${openssl-output:openssl} genrsa -out $${nginx-configuration:ssl_key} 2048
${openssl-output:openssl} req -new \
-subj "/C=AA/ST=Denial/L=Nowhere/O=Dis/CN=$${nginx-configuration:ip}" \
-key $${nginx-configuration:ssl_key} -out $${nginx-configuration:ssl_csr}
${openssl-output:openssl} x509 -req -days 365 \
-in $${nginx-configuration:ssl_csr} \
-signkey $${nginx-configuration:ssl_key} \
-out $${nginx-configuration:ssl_crt}
fi
exec ${nginx-output:nginx} \
-c $${nginx-configuration:output}
daemon off; # run in the foreground so supervisord can look after it
worker_processes 4;
pid $${directory:run}/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
error_log $${nginx-configuration:error_log};
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
default_type application/octet-stream;
include ${nginx-output:mime};
##
# Logging Settings
##
access_log $${nginx-configuration:access_log};
error_log $${nginx-configuration:error_log};
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen [$${nginx-configuration:ip}]:$${nginx-configuration:port};
ssl on;
ssl_certificate $${nginx-configuration:ssl_crt};
ssl_certificate_key $${nginx-configuration:ssl_key};
fastcgi_temp_path $${directory:varnginx} 1 2;
uwsgi_temp_path $${directory:varnginx} 1 2;
scgi_temp_path $${directory:varnginx} 1 2;
client_body_temp_path $${directory:varnginx} 1 2;
proxy_temp_path $${directory:varnginx} 1 2;
## Only allow GET and HEAD request methods
if ($request_method !~ ^(GET|HEAD)$ ) {
return 444;
}
## Serve an error 204 (No Content) for favicon.ico
location = /favicon.ico {
return 204;
}
location /
{
root $${directory:www};
# index index.html;
}
}
}
#!${dash-output:dash}
STATUS_DIR=$${directory:status}
RSS_DIR=$${directory:www}
CAT=${coreutils-output:cat}
PYTHON=${buildout:directory}/bin/${extra-eggs:interpreter}
STATUS2RSS=${status2rss:output}
BASENAME=${coreutils-output:basename}
for status in $STATUS_DIR/*
do
NAME=`$BASENAME $status`
$CAT $status | $PYTHON $STATUS2RSS "Backup status $NAME" "https://[$${nginx-configuration:ip}]:$${nginx-configuration:port}/$NAME.rss" > $RSS_DIR/$NAME.rss
done
$CAT $STATUS_DIR/* | $PYTHON $STATUS2RSS "Full backup status $${:global_rss}" "https://[$${nginx-configuration:ip}]:$${nginx-configuration:port}/$${:global_rss}" > $RSS_DIR/$${:global_rss}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"extends": "./schema-definitions.json#",
"properties": {
"tcpv4-port": {
"allOf": [{
"$ref": "#/definitions/tcpv4port"
}, {
"description": "Start allocating ports at this value, going upward",
"default": 23000
}]
},
"font-url-list": {
"description": "List of URLs from which fonts are to be downloaded",
"default": [],
"items": {
"type": "string"
},
"type": "array"
},
"backend-count": {
"description": "Number of backend cloudooo instances",
"default": 1,
"type": "integer"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "Values returned by Cloudooo instanciation",
"properties": {
"url": {
"description": "Conversion service access information",
"type": "string"
}
},
"type": "object"
}
{% set ipv4 = (ipv4_set | list)[0] -%}
{% set bin_directory = parameter_dict['buildout-bin-directory'] -%} {% set bin_directory = parameter_dict['buildout-bin-directory'] -%}
{% set section_list = [] -%}
{% macro section(name) %}{% do section_list.append(name) %}{{ name }}{% endmacro -%}
[buildout] [buildout]
parts = parts =
publish-cloudooo-connection-information publish-cloudooo-connection-information
...@@ -9,35 +12,53 @@ develop-eggs-directory = {{ develop_eggs_directory }} ...@@ -9,35 +12,53 @@ develop-eggs-directory = {{ develop_eggs_directory }}
offline = true offline = true
[publish-cloudooo-connection-information] [publish-cloudooo-connection-information]
recipe = slapos.cookbook:publishurl recipe = slapos.cookbook:publish.serialised
url = cloudooo://${cloudooo-instance:ip}:${cloudooo-instance:port}/ url = cloudooo://${haproxy:ip}:${haproxy:port}/
[cloudooo-instance] [cloudooo-base]
recipe = slapos.cookbook:generic.cloudooo recipe = slapos.cookbook:generic.cloudooo
ip = {{ ipv4 }}
# Network options
ip = ${slap-network-information:local-ipv4}
port = 23000
openoffice-port = 23060
# Paths
configuration-file = ${rootdirectory:etc}/cloudooo.cfg
wrapper = ${basedirectory:services}/cloudooo
# Paths: Data
data-directory = ${directory:cloudooo-data}
environment = environment =
LD_LIBRARY_PATH = {{ parameter_dict['file'] }}/lib:{{ parameter_dict['fontconfig'] }}/lib:{{ parameter_dict['freetype'] }}/lib:{{ parameter_dict['libICE'] }}/lib:{{ parameter_dict['libpng12'] }}/lib:{{ parameter_dict['libSM'] }}/lib:{{ parameter_dict['libX11'] }}/lib:{{ parameter_dict['libXau'] }}/lib:{{ parameter_dict['libXdmcp'] }}/lib:{{ parameter_dict['libXext'] }}/lib:{{ parameter_dict['libxcb'] }}/lib:{{ parameter_dict['libXrender'] }}/lib:{{ parameter_dict['zlib'] }}/lib LD_LIBRARY_PATH = {{ parameter_dict['file'] }}/lib:{{ parameter_dict['fontconfig'] }}/lib:{{ parameter_dict['freetype'] }}/lib:{{ parameter_dict['libICE'] }}/lib:{{ parameter_dict['libpng12'] }}/lib:{{ parameter_dict['libSM'] }}/lib:{{ parameter_dict['libX11'] }}/lib:{{ parameter_dict['libXau'] }}/lib:{{ parameter_dict['libXdmcp'] }}/lib:{{ parameter_dict['libXext'] }}/lib:{{ parameter_dict['libxcb'] }}/lib:{{ parameter_dict['libXrender'] }}/lib:{{ parameter_dict['zlib'] }}/lib
FONTCONFIG_FILE = ${fontconfig-instance:conf-path} FONTCONFIG_FILE = ${fontconfig-instance:conf-path}
PATH = ${binary-link:target-directory} PATH = ${binary-link:target-directory}
# Binary information # Binary information
# cloudooo specific configuration # cloudooo specific configuration
ooo-binary-path = {{ parameter_dict['libreoffice-bin'] }}/program ooo-binary-path = {{ parameter_dict['libreoffice-bin'] }}/program
ooo-paster = {{ bin_directory }}/cloudooo_paster ooo-paster = {{ bin_directory }}/cloudooo_paster
ooo-uno-path = {{ parameter_dict['libreoffice-bin'] }}/basis-link/program ooo-uno-path = {{ parameter_dict['libreoffice-bin'] }}/basis-link/program
{% set cloudooo_port = slapparameter_dict.get('tcpv4_port', 23000) | int -%}
{% set backend_count = slapparameter_dict.get('backend-count', 1) | int -%}
{% for index in range(backend_count) -%}
{% set name = 'cloudooo-' ~ index -%}
[{{ section(name) }}]
< = cloudooo-base
port = {{ cloudooo_port }}
openoffice-port = {{ cloudooo_port + 1 }}
configuration-file = ${directory:etc}/{{ name }}.cfg
data-directory = ${directory:srv}/{{ name }}
wrapper = ${directory:services}/{{ name }}
{% set cloudooo_port = cloudooo_port + 2 -%}
{% endfor -%}
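{#- Illustration only: with the defaults (tcpv4_port = 23000, backend-count = 1) a
    single section cloudooo-0 is generated with port = 23000 and openoffice-port =
    23001; the loop is meant to give each additional backend its own pair of ports
    (23002/23003, and so on), all fronted by the haproxy section below. -#}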
[haproxy]
recipe = slapos.cookbook:haproxy
name = cloudooo
conf-path = ${directory:etc}/haproxy.cfg
socket-path = ${directory:run}/haproxy.sock
ip = {{ ipv4 }}
port = 8001
maxconn = 1
wrapper-path = ${directory:services}/haproxy
binary-path = {{ parameter_dict['haproxy'] }}/sbin/haproxy
ctl-path = ${directory:bin}/haproxy-ctl
backend-list =
{%- for section_name in section_list %}
{{ "${" ~ section_name ~ ":ip}:${" ~ section_name ~ ":port}" }}
{%- endfor %}
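{#- Illustration only: for backend-count = 2 the loop above renders the two lines
    ${cloudooo-0:ip}:${cloudooo-0:port} and ${cloudooo-1:ip}:${cloudooo-1:port},
    which haproxy uses as its backend pool. -#}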
[cloudooo-test-runner] [cloudooo-test-runner]
recipe = slapos.cookbook:cloudooo.test recipe = slapos.cookbook:cloudooo.test
...@@ -45,24 +66,25 @@ prepend-path = ${buildout:bin-directory} ...@@ -45,24 +66,25 @@ prepend-path = ${buildout:bin-directory}
run-unit-test = ${buildout:bin-directory}/runUnitTest run-unit-test = ${buildout:bin-directory}/runUnitTest
run-test-suite = ${buildout:bin-directory}/runTestSuite run-test-suite = ${buildout:bin-directory}/runTestSuite
ooo-paster = ${cloudooo-instance:ooo-paster} ooo-paster = ${cloudooo-0:ooo-paster}
configuration-file = ${cloudooo-instance:configuration-file} configuration-file = ${cloudooo-0:configuration-file}
run-unit-test-binary = {{ bin_directory }}/runCloudoooUnitTest run-unit-test-binary = {{ bin_directory }}/runCloudoooUnitTest
run-test-suite-binary = {{ bin_directory }}/runCloudoooTestSuite run-test-suite-binary = {{ bin_directory }}/runCloudoooTestSuite
[fontconfig-instance] [fontconfig-instance]
recipe = slapos.cookbook:fontconfig recipe = slapos.cookbook:fontconfig
conf-path = ${rootdirectory:etc}/font.conf conf-path = ${directory:etc}/font.conf
font-system-folder = {{ parameter_dict['fonts'] }} font-system-folder = {{ parameter_dict['fonts'] }}
font-folder = ${directory:font} font-folder = ${directory:font}
url-list = {# XXX: violates "instanciation happens offline" rule -#}
service-folder = ${basedirectory:services} url-list = {{ slapparameter_dict.get('font-url-list', []) | join(' ') }}
service-folder = ${directory:services}
onetimedownload_path = {{ bin_directory }}/onetimedownload onetimedownload_path = {{ bin_directory }}/onetimedownload
[binary-link] [binary-link]
recipe = slapos.cookbook:symbolic.link recipe = slapos.cookbook:symbolic.link
target-directory = ${rootdirectory:bin} target-directory = ${directory:bin}
link-binary = link-binary =
{{ parameter_dict['coreutils'] }}/bin/basename {{ parameter_dict['coreutils'] }}/bin/basename
{{ parameter_dict['coreutils'] }}/bin/cat {{ parameter_dict['coreutils'] }}/bin/cat
...@@ -78,17 +100,12 @@ link-binary = ...@@ -78,17 +100,12 @@ link-binary =
{{ parameter_dict['poppler'] }}/bin/pdftohtml {{ parameter_dict['poppler'] }}/bin/pdftohtml
# The rest of the parts are candidates for some generic stuff # The rest of the parts are candidates for some generic stuff
[basedirectory]
recipe = slapos.cookbook:mkdirectory
services = ${rootdirectory:etc}/run
[directory] [directory]
recipe = slapos.cookbook:mkdirectory recipe = slapos.cookbook:mkdirectory
cloudooo-data = ${rootdirectory:srv}/cloudooo bin = ${buildout:directory}/bin
font = ${rootdirectory:srv}/font
[rootdirectory]
recipe = slapos.cookbook:mkdirectory
etc = ${buildout:directory}/etc etc = ${buildout:directory}/etc
font = ${:srv}/font
run = ${:var}/run
services = ${:etc}/run
srv = ${buildout:directory}/srv srv = ${buildout:directory}/srv
bin = ${buildout:directory}/bin var = ${buildout:directory}/var
...@@ -6,6 +6,14 @@ eggs-directory = {{ eggs_directory }} ...@@ -6,6 +6,14 @@ eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }} develop-eggs-directory = {{ develop_eggs_directory }}
offline = true offline = true
[slap-parameters]
recipe = slapos.cookbook:slapconfiguration
computer = ${slap-connection:computer-id}
partition = ${slap-connection:partition-id}
url = ${slap-connection:server-url}
key = ${slap-connection:key-file}
cert = ${slap-connection:cert-file}
[jinja2-template-base] [jinja2-template-base]
recipe = slapos.recipe.template:jinja2 recipe = slapos.recipe.template:jinja2
rendered = ${buildout:parts-directory}/${:_buildout_section_name_}/${:filename} rendered = ${buildout:parts-directory}/${:_buildout_section_name_}/${:filename}
...@@ -13,12 +21,14 @@ extra-context = ...@@ -13,12 +21,14 @@ extra-context =
context = context =
key eggs_directory buildout:eggs-directory key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory key develop_eggs_directory buildout:develop-eggs-directory
key slapparameter_dict slap-parameters:configuration
${:extra-context} ${:extra-context}
[dynamic-template-cloudooo-parameters] [dynamic-template-cloudooo-parameters]
file = {{ file_location }} file = {{ file_location }}
fontconfig = {{ fontconfig_location }} fontconfig = {{ fontconfig_location }}
freetype = {{ freetype_location }} freetype = {{ freetype_location }}
haproxy = {{ haproxy_location }}
libICE = {{ libICE_location }} libICE = {{ libICE_location }}
libpng12 = {{ libpng12_location }} libpng12 = {{ libpng12_location }}
libSM = {{ libSM_location }} libSM = {{ libSM_location }}
...@@ -40,8 +50,10 @@ buildout-bin-directory = {{ buildout_bin_directory }} ...@@ -40,8 +50,10 @@ buildout-bin-directory = {{ buildout_bin_directory }}
< = jinja2-template-base < = jinja2-template-base
template = {{ template_cloudooo }} template = {{ template_cloudooo }}
filename = instance-cloudoo.cfg filename = instance-cloudoo.cfg
extensions = jinja2.ext.do
extra-context = extra-context =
section parameter_dict dynamic-template-cloudooo-parameters section parameter_dict dynamic-template-cloudooo-parameters
key ipv4_set slap-parameters:ipv4
[switch-softwaretype] [switch-softwaretype]
recipe = slapos.cookbook:softwaretype recipe = slapos.cookbook:softwaretype
......
...@@ -14,10 +14,9 @@ parts += ...@@ -14,10 +14,9 @@ parts +=
# Local development # Local development
slapos.cookbook-repository slapos.cookbook-repository
check-recipe check-recipe
slapos.cookbook-python2.6
slapos.recipe.template-python2.6
# Create instance template # Create instance template
template template
slapos-cookbook
# XXX: Workaround for a SlapOS limitation # XXX: Workaround for a SlapOS limitation
# Unzipping of eggs is required, as SlapOS does not yet provide nicely working # Unzipping of eggs is required, as SlapOS does not yet provide nicely working
...@@ -26,11 +25,10 @@ unzip = true ...@@ -26,11 +25,10 @@ unzip = true
# Local development # Local development
[slapos.cookbook-repository] [slapos.cookbook-repository]
recipe = plone.recipe.command recipe = slapos.recipe.build:gitclone
stop-on-error = true repository = http://git.erp5.org/repos/slapos.git
location = ${buildout:parts-directory}/${:_buildout_section_name_} branch = master
command = ${git:location}/bin/git clone --branch cloudooo --quiet http://git.erp5.org/repos/slapos.git ${:location} git-executable = ${git:location}/bin/git
update-command = cd ${:location} && ${git:location}/bin/git pull --quiet
[check-recipe] [check-recipe]
recipe = plone.recipe.command recipe = plone.recipe.command
...@@ -38,18 +36,13 @@ stop-on-error = true ...@@ -38,18 +36,13 @@ stop-on-error = true
update-command = ${:command} update-command = ${:command}
command = grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link command = grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link
[slapos.cookbook-python2.6] [slap-parameters]
recipe = zc.recipe.egg recipe = slapos.cookbook:slapconfiguration
eggs = slapos.cookbook computer = ${slap-connection:computer-id}
scripts = partition = ${slap-connection:partition-id}
python = python2.6 url = ${slap-connection:server-url}
ugly-depend-on = ${slapos.cookbook-repository:command} ${slapos.cookbook-repository:update-command} key = ${slap-connection:key-file}
cert = ${slap-connection:cert-file}
[slapos.recipe.template-python2.6]
recipe = zc.recipe.egg
eggs = slapos.recipe.template
scripts =
python = python2.6
[template-jinja2-base] [template-jinja2-base]
recipe = slapos.recipe.template:jinja2 recipe = slapos.recipe.template:jinja2
...@@ -69,7 +62,7 @@ context = ...@@ -69,7 +62,7 @@ context =
# XXX: "template.cfg" is hardcoded in instanciation recipe # XXX: "template.cfg" is hardcoded in instanciation recipe
filename = template.cfg filename = template.cfg
template = ${:_profile_base_location_}/instance.cfg.in template = ${:_profile_base_location_}/instance.cfg.in
md5sum = 694205787e78c5d615d72d7b4b26d174 md5sum = 425cb2e76d46d53bb0b0eebdb8c1aa95
extra-context = extra-context =
key buildout_bin_directory buildout:bin-directory key buildout_bin_directory buildout:bin-directory
key dcron_location dcron:location key dcron_location dcron:location
...@@ -78,6 +71,7 @@ extra-context = ...@@ -78,6 +71,7 @@ extra-context =
key fonts_location fonts:location key fonts_location fonts:location
key freetype_location freetype:location key freetype_location freetype:location
key git_location git:location key git_location git:location
key haproxy_location haproxy:location
key imagemagick_location imagemagick:location key imagemagick_location imagemagick:location
key libICE_location libICE:location key libICE_location libICE:location
key libSM_location libSM:location key libSM_location libSM:location
...@@ -98,10 +92,9 @@ extra-context = ...@@ -98,10 +92,9 @@ extra-context =
[template-cloudooo] [template-cloudooo]
recipe = slapos.recipe.build:download recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/instance-cloudoo.cfg.in url = ${:_profile_base_location_}/instance-cloudoo.cfg.in
md5sum = 4c8608f9525be0f01a09d60b240315a9 md5sum = bbe84b4c9022db62c926e8a8a4bf02a1
mode = 640 mode = 640
[networkcache] [networkcache]
# signature certificates of the following uploaders. # signature certificates of the following uploaders.
# Romain Courteaud # Romain Courteaud
......
[buildout]
versions = versions
extends =
../../component/apache-php/buildout.cfg
../../component/apache/buildout.cfg
../../component/curl/buildout.cfg
../../component/dash/buildout.cfg
../../component/dcron/buildout.cfg
../../component/logrotate/buildout.cfg
../../stack/shacache-client.cfg
../../component/lxml-python/buildout.cfg
../../component/python-2.7/buildout.cfg
../../component/gzip/buildout.cfg
find-links +=
http://www.nexedi.org/static/packages/source/slapos.buildout/
http://www.nexedi.org/static/packages/source/
http://www.nexedi.org/static/packages/source/hexagonit.recipe.download/
# Use only sites that are known to work well.
allow-hosts +=
*.googlecode.com
*.nexedi.org
*.python.org
*.sourceforge.net
alastairs-place.net
bitbucket.org
dist.repoze.org
effbot.org
github.com
launchpad.net
peak.telecommunity.com
sourceforge.net
www.dabeaz.com
www.owlfish.com
parts =
apache-php
application
template
lxml-python
eggs
instance-recipe-egg
unzip= true
[eggs]
recipe = zc.recipe.egg
eggs =
[instance-recipe]
egg = slapos.cookbook
module = davstorage
[instance-recipe-egg]
recipe = zc.recipe.egg
python = python2.7
eggs = ${instance-recipe:egg}
[application]
recipe = hexagonit.recipe.download
url = http://garr.dl.sourceforge.net/project/ajaxplorer/ajaxplorer/dev-channel/4.3.4/ajaxplorer-core-4.3.4.tar.gz
md5sum = 2f2ff8bda7bbe841ef0e870c724eb74f
strip-top-level-dir = true
[template]
# Default template for the instance.
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg
md5sum = bed788dee6daf05349c4577e7a7f1299
output = ${buildout:directory}/template.cfg
mode = 0644
[instance-davstorage]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-davstorage.cfg
md5sum = 699ecf4678386667f58a3391bab7af0f
output = ${buildout:directory}/template-davstorage.cfg
mode = 0644
[lxml-python]
python = python2.7
[buildout]
extends =
../../component/git/buildout.cfg
common.cfg
parts +=
slapos.cookbook-repository
slapos.toolbox-repository
check-recipe
develop =
${:parts-directory}/slapos.cookbook-repository
${:parts-directory}/slapos.toolbox-repository
[slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.git
branch = davstorage-ajaxplorer
git-executable = ${git:location}/bin/git
[slapos.toolbox-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.toolbox.git
branch = master
git-executable = ${git:location}/bin/git
[check-recipe]
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link
[versions]
zc.buildout = 1.6.0-dev-SlapOS-002
Jinja2 = 2.6
Werkzeug = 0.8.3
buildout-versions = 1.7
hexagonit.recipe.cmmi = 1.6
hexagonit.recipe.download = 1.6nxd002
meld3 = 0.6.10
openssl = 1.0.1c
# Required by:
# slapos.core
Flask = 0.9
# Required by:
# slapos.cookbook
PyXML = 0.8.4
slapos.recipe.template = 2.4.2
# Required by:
# slapos.cookbook==0.24
# slapos.core==0.14
# xml-marshaller==0.9.7
lxml = 3.1.0
# Required by:
# slapos.cookbook==0.24
netaddr = 0.7.10
# Required by:
# slapos.core==0.14
netifaces = 0.8
# Required by:
# slapos.cookbook==0.24
# slapos.core==0.14
# zc.buildout==1.5.3-dev-SlapOS-009
# zc.recipe.egg==1.3.2
setuptools = 0.6c12dev-r88846
# Required by:
# slapos.cookbook==0.73.1
slapos.core = 0.35.1
# Required by:
# slapos.core==0.35.1
supervisor = 3.0b1
# Required by:
# slapos.cookbook==0.73.1
xml-marshaller = 0.9.7
# Required by:
# slapos.cookbook==0.24
zc.recipe.egg = 1.3.2
# Required by:
# slapos.core==0.35.1
zope.interface = 4.0.5
[buildout] [buildout]
parts = parts =
davstorage davstorage
url publish-connection-informations
certificate-authority certificate-authority
ca-davstorage ca-davstorage
cron cron
cron-entry-logrotate cron-entry-logrotate
logrotate logrotate
logrotate-entry-davstorage logrotate-entry-davstorage
request-frontend
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory} develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true offline = true
[url] [publish-connection-informations]
recipe = slapos.cookbook:publishurl recipe = slapos.cookbook:publish
scheme = webdavs webdav_access = https://$${request-frontend:connection-domain}
user = $${davstorage:user} url = https://$${request-frontend-ajaxupload:connection-domain}
port = $${davstorage:port} webdav_user = $${davstorage:user}
host = $${davstorage:ip} webdav_password = $${davstorage:password}
password = $${davstorage:password}
[davstorage] [davstorage]
recipe = slapos.cookbook:davstorage recipe = slapos.cookbook:davstorage
user = user user = user
port = 8080 password = insecure
port_webdav = 8080
port_ajax = 8070
ip = $${slap-network-information:global-ipv6} ip = $${slap-network-information:global-ipv6}
# Path # Path
...@@ -42,12 +44,15 @@ root = $${buildout:directory} ...@@ -42,12 +44,15 @@ root = $${buildout:directory}
email-address = admin+davstorage@vifib.net email-address = admin+davstorage@vifib.net
htpasswd-file = $${directory:davstorage-conf}/davstorage.htpasswd htpasswd-file = $${directory:davstorage-conf}/davstorage.htpasswd
promise = $${basedirectory:promises}/davstorage promise = $${basedirectory:promises}/davstorage
php-ini-dir = $${directory:php-ini-dir}
tmp-dir = $${directory:tmp-php}
# Binaries # Binaries
apache-binary = ${apache:location}/bin/httpd apache-binary = ${apache:location}/bin/httpd
apache-modules-dir = ${apache:location}/modules/ apache-modules-dir = ${apache:location}/modules/
apache-mime-file = ${apache:location}/conf/mime.types apache-mime-file = ${apache:location}/conf/mime.types
apache-htpasswd = ${apache:location}/bin/htpasswd apache-htpasswd = ${apache:location}/bin/htpasswd
source = ${application:location}
[certificate-authority] [certificate-authority]
recipe = slapos.cookbook:certificate_authority recipe = slapos.cookbook:certificate_authority
...@@ -66,7 +71,7 @@ dcrond-binary = ${dcron:location}/sbin/crond ...@@ -66,7 +71,7 @@ dcrond-binary = ${dcron:location}/sbin/crond
cron-entries = $${directory:cron-entries} cron-entries = $${directory:cron-entries}
crontabs = $${directory:crontabs} crontabs = $${directory:crontabs}
cronstamps = $${directory:cronstamps} cronstamps = $${directory:cronstamps}
catcher = $${cron-simplelogger:binary} catcher = $${cron-simplelogger:wrapper}
binary = $${basedirectory:services}/crond binary = $${basedirectory:services}/crond
[logrotate] [logrotate]
...@@ -91,8 +96,8 @@ command = $${logrotate:wrapper} ...@@ -91,8 +96,8 @@ command = $${logrotate:wrapper}
[cron-simplelogger] [cron-simplelogger]
recipe = slapos.cookbook:simplelogger recipe = slapos.cookbook:simplelogger
binary = $${rootdirectory:bin}/cron_simplelogger wrapper = $${rootdirectory:bin}/cron_simplelogger
output = $${directory:cronoutput} log = $${basedirectory:log}/crond.log
[logrotate-entry-davstorage] [logrotate-entry-davstorage]
...@@ -128,6 +133,7 @@ etc = $${buildout:directory}/etc/ ...@@ -128,6 +133,7 @@ etc = $${buildout:directory}/etc/
var = $${buildout:directory}/var/ var = $${buildout:directory}/var/
srv = $${buildout:directory}/srv/ srv = $${buildout:directory}/srv/
bin = $${buildout:directory}/bin/ bin = $${buildout:directory}/bin/
tmp = $${buildout:directory}/tmp/
[basedirectory] [basedirectory]
recipe = slapos.cookbook:mkdirectory recipe = slapos.cookbook:mkdirectory
...@@ -149,3 +155,38 @@ cron-entries = $${rootdirectory:etc}/cron.d/ ...@@ -149,3 +155,38 @@ cron-entries = $${rootdirectory:etc}/cron.d/
crontabs = $${rootdirectory:etc}/crontabs/ crontabs = $${rootdirectory:etc}/crontabs/
cronstamps = $${rootdirectory:etc}/cronstamps/ cronstamps = $${rootdirectory:etc}/cronstamps/
cronoutput = $${basedirectory:log}/cron/ cronoutput = $${basedirectory:log}/cron/
php-ini-dir = $${rootdirectory:etc}/php
tmp-php = $${rootdirectory:tmp}/php
# Request frontend
[request-frontend-ajaxupload]
<= slap-connection
recipe = slapos.cookbook:request
name = Frontend Ajax
# XXX We have hardcoded SR URL here.
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true
config = url https-only
config-https-only = true
config-url = https://[$${davstorage:ip}]:$${davstorage:port_ajax}/
return = domain
[request-frontend]
<= slap-connection
recipe = slapos.cookbook:request
name = Frontend Webdav
# XXX We have hardcoded SR URL here.
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true
config = url https-only
config-https-only = true
config-url = https://$${davstorage:user}:$${davstorage:password}@[$${davstorage:ip}]:$${davstorage:port_webdav}/
return = domain
# XXX Vivien: promise not working for now
#[frontend-ajaxupload-promise]
#recipe = slapos.cookbook:check_url_available
#path = $${basedirectory:promises}/frontend-ajaxupload
#url = $${request-frontend-ajaxupload:connection-site_url}
#dash_path = ${dash:location}/bin/dash
#curl_path = ${curl:location}/bin/curl
\ No newline at end of file
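For reference, the disabled promise above boils down to a plain availability check. A minimal manual equivalent, sketched under the assumption that curl is on PATH and with a placeholder URL instead of the value the recipe would compute:
#!/bin/sh
# Rough approximation of what a check_url_available-style promise verifies:
# exit 0 only if the frontend URL answers with a successful HTTP status in time.
FRONTEND_URL="https://frontend.example.invalid/"   # placeholder, not a generated value
curl --silent --show-error --insecure --max-time 20 --fail --output /dev/null "$FRONTEND_URL" \
  || { echo "frontend not available yet" >&2; exit 1; }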
[buildout] [buildout]
find-links +=
http://www.nexedi.org/static/packages/source/slapos.buildout/
versions = versions
extends = extends =
../../component/apache/buildout.cfg ../../component/git/buildout.cfg
../../component/dcron/buildout.cfg common.cfg
../../component/logrotate/buildout.cfg
../../stack/shacache-client.cfg parts +=
../../component/lxml-python/buildout.cfg slapos.cookbook-repository
../../component/python-2.7/buildout.cfg slapos.toolbox-repository
../../component/gzip/buildout.cfg check-recipe
# Use only quite well working sites. develop =
allow-hosts = ${:parts-directory}/slapos.cookbook-repository
*.nexedi.org ${:parts-directory}/slapos.toolbox-repository
*.python.org
*.sourceforge.net [slapos.cookbook-repository]
dist.repoze.org recipe = slapos.recipe.build:gitclone
effbot.org repository = http://git.erp5.org/repos/slapos.git
github.com branch = davstorage-ajaxplorer
peak.telecommunity.com git-executable = ${git:location}/bin/git
psutil.googlecode.com
www.dabeaz.com [slapos.toolbox-repository]
alastairs-place.net recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.toolbox.git
branch = master
parts = git-executable = ${git:location}/bin/git
template
lxml-python [check-recipe]
apache recipe = plone.recipe.command
logrotate stop-on-error = true
dcron update-command = ${:command}
eggs command =
gzip grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
instance-recipe-egg grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link
unzip = true
[eggs]
recipe = zc.recipe.egg
eggs =
[instance-recipe]
egg = slapos.cookbook
module = davstorage
[instance-recipe-egg]
recipe = zc.recipe.egg
python = python2.7
eggs = ${instance-recipe:egg}
[template]
# Default template for the instance.
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg
md5sum = bed788dee6daf05349c4577e7a7f1299
output = ${buildout:directory}/template.cfg
mode = 0644
[instance-davstorage]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-davstorage.cfg
md5sum = 49c8e049a78e233d3a553e64c8914592
output = ${buildout:directory}/template-davstorage.cfg
mode = 0644
[lxml-python]
python = python2.7
[versions] [versions]
zc.buildout = 1.6.0-dev-SlapOS-002 zc.buildout = 1.6.0-dev-SlapOS-002
Jinja2 = 2.6 Jinja2 = 2.6
Werkzeug = 0.7.1 Werkzeug = 0.8.3
buildout-versions = 1.6 buildout-versions = 1.7
hexagonit.recipe.cmmi = 1.5.0 hexagonit.recipe.cmmi = 1.6
hexagonit.recipe.download = 1.6nxd002 hexagonit.recipe.download = 1.6nxd002
meld3 = 0.6.7 meld3 = 0.6.10
slapos.cookbook = 0.26 openssl = 1.0.1c
# Required by: # Required by:
# slapos.core==0.14 # slapos.core
Flask = 0.7.2 Flask = 0.9
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook
PyXML = 0.8.4 PyXML = 0.8.4
# Required by: slapos.recipe.template = 2.4.2
# slapos.recipe.template==1.1
collective.recipe.template = 1.9
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook==0.24
# slapos.core==0.14 # slapos.core==0.14
# xml-marshaller==0.9.7 # xml-marshaller==0.9.7
lxml = 2.3 lxml = 3.1.0
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook==0.24
netaddr = 0.7.6 netaddr = 0.7.10
# Required by: # Required by:
# slapos.core==0.14 # slapos.core==0.14
#netifaces = 0.4 netifaces = 0.8
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook==0.24
...@@ -115,15 +73,15 @@ netaddr = 0.7.6
setuptools = 0.6c12dev-r88846 setuptools = 0.6c12dev-r88846
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook==0.73.1
slapos.core = 0.14 slapos.core = 0.35.1
# Required by: # Required by:
# slapos.core==0.14 # slapos.core==0.35.1
supervisor = 3.0a10 supervisor = 3.0b1
# Required by: # Required by:
# slapos.cookbook==0.24 # slapos.cookbook==0.73.1
xml-marshaller = 0.9.7 xml-marshaller = 0.9.7
# Required by: # Required by:
...@@ -131,5 +89,6 @@ xml-marshaller = 0.9.7
zc.recipe.egg = 1.3.2 zc.recipe.egg = 1.3.2
# Required by: # Required by:
# slapos.core==0.14 # slapos.core==0.35.1
zope.interface = 3.7.0 zope.interface = 4.0.5
TODO:
+ Move Ajaxplorer to a separate directory (it currently lives in the www/
directory served over WebDAV, so WebDAV clients could delete its files)
+ Make the configuration step disappear (configure when the instance is
deployed, so the user does not have to redo it each time a new instance starts)
...@@ -7,7 +7,6 @@ offline = true
parts = parts =
connection-dict connection-dict
testnode testnode
pwgen
shell shell
shellinabox shellinabox
certificate-authority certificate-authority
...@@ -16,12 +15,11 @@ parts =
[connection-dict] [connection-dict]
recipe = slapos.cookbook:publish recipe = slapos.cookbook:publish
url = http://[$${shellinabox:ipv6}]:$${shellinabox:port}/ url = http://[$${shellinabox:ipv6}]:$${shellinabox:port}/
password = $${pwgen:password} password = $${pwgen:passwd}
[pwgen] [pwgen]
recipe = slapos.cookbook:pwgen recipe = slapos.cookbook:generate.password
file = $${buildout:directory}/.password storage-path = $${buildout:directory}/.password
pwgen-binary = ${pwgen:location}/bin/pwgen
[testnode] [testnode]
recipe = slapos.cookbook:erp5testnode recipe = slapos.cookbook:erp5testnode
...@@ -82,7 +80,7 @@ port = 8080
shell = $${shell:wrapper} shell = $${shell:wrapper}
wrapper = $${rootdirectory:bin}/shellinaboxd wrapper = $${rootdirectory:bin}/shellinaboxd
shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd
password = $${pwgen:password} password = $${pwgen:passwd}
directory = $${buildout:directory}/ directory = $${buildout:directory}/
login-shell = $${rootdirectory:bin}/login login-shell = $${rootdirectory:bin}/login
certificate-directory = $${directory:shellinabox} certificate-directory = $${directory:shellinabox}
......
...@@ -20,7 +20,6 @@ extends =
../../component/zip/buildout.cfg ../../component/zip/buildout.cfg
../../component/busybox/buildout.cfg ../../component/busybox/buildout.cfg
../../component/shellinabox/buildout.cfg ../../component/shellinabox/buildout.cfg
../../component/pwgen/buildout.cfg
# Local development # Local development
develop = develop =
......
[buildout]
versions = versions
extends =
../../stack/slapos.cfg
../../component/gcc/buildout.cfg
../../component/openssl/buildout.cfg
../../component/curl/buildout.cfg
../../component/dash/buildout.cfg
../../component/dcron/buildout.cfg
../../component/logrotate/buildout.cfg
../../component/lxml-python/buildout.cfg
../../component/python-2.7/buildout.cfg
../../component/gzip/buildout.cfg
../../component/git/buildout.cfg
../../component/nodejs/buildout.cfg
../../component/postgresql/buildout.cfg
parts =
postgresql
nodejs
etherpad-lite-repository
install-deps
template
lxml-python
eggs
instance-recipe-egg
unzip = true
[eggs]
recipe = zc.recipe.egg
eggs =
[instance-recipe]
egg = slapos.cookbook
module = etherpad-lite
[instance-recipe-egg]
recipe = zc.recipe.egg
python = python2.7
eggs = ${instance-recipe:egg}
[etherpad-lite-repository]
recipe = slapos.recipe.build:gitclone
repository = http://github.com/ether/etherpad-lite.git
branch = develop
git-executable = ${git:location}/bin/git
[template]
# Default template for the instance.
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg
md5sum = 7ab2a242df988bf5c10bf8002acac3bd
output = ${buildout:directory}/template.cfg
mode = 0644
[instance-etherpad-lite]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-etherpad-lite.cfg
md5sum = fd7249be8988155110234c7bb877abb9
output = ${buildout:directory}/template-etherpad-lite.cfg
mode = 0644
[template-conf]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/templates/${:filename}
mode = 0644
filename = settings.json.in
md5sum = 19ab39e6b3256c82fd54ce074488b136
location = ${buildout:parts-directory}/${:_buildout_section_name_}
[template-run-script]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/templates/${:filename}
mode = 0644
filename = run.sh.in
md5sum = eac870b5f30e735e109a48913af2fae3
location = ${buildout:parts-directory}/${:_buildout_section_name_}
[template-deps-script]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/templates/${:filename}
etherpad-location = ${etherpad-lite-repository:location}
nodejs-location = ${nodejs:location}
curl-location = ${curl:location}
postgre-location = ${postgresql:location}
mode = 0755
md5sum = 53d0d53d419bd9ee592d3e1a1c84c758
filename = installDeps.sh.in
output = ${etherpad-lite-repository:location}/bin/installDeps.sh
[install-deps]
recipe = plone.recipe.command
command = ${template-deps-script:output}
update-command = ${:command}
[lxml-python]
python = python2.7
[buildout]
extends =
common.cfg
parts +=
slapos.cookbook-repository
slapos.toolbox-repository
check-recipe
develop =
${:parts-directory}/slapos.cookbook-repository
${:parts-directory}/slapos.toolbox-repository
[slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.git
branch = etherpad-lite
git-executable = ${git:location}/bin/git
[slapos.toolbox-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.toolbox.git
branch = master
git-executable = ${git:location}/bin/git
[check-recipe]
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link
\ No newline at end of file
[buildout]
parts =
etherpad-lite
publish-connection-informations
# frontend-etherpad
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[rootdirectory]
recipe = slapos.cookbook:mkdirectory
etc = $${buildout:directory}/etc/
var = $${buildout:directory}/var/
srv = $${buildout:directory}/srv/
bin = $${buildout:directory}/bin/
tmp = $${buildout:directory}/tmp/
[basedirectory]
recipe = slapos.cookbook:mkdirectory
log = $${rootdirectory:var}/log/
services = $${rootdirectory:etc}/run/
run = $${rootdirectory:etc}/run/
backup = $${rootdirectory:srv}/backup/
promises = $${rootdirectory:etc}/promise/
[directory]
recipe = slapos.cookbook:mkdirectory
etherpad-conf = $${rootdirectory:etc}/etherpad/
etherpad-repository-location = $${buildout:directory}/parts/etherpad-lite-repository
[publish-connection-informations]
recipe = slapos.cookbook:publish
url = $${request-frontend:connection-site_url}
[etherpad-conf-generation]
recipe = slapos.recipe.template
url = ${template-conf:location}/${template-conf:filename}
ip = $${slap-network-information:global-ipv6}
dirtydb-location = $${rootdirectory:var}/dirty.db
port = 9001
mode = 0644
output = $${directory:etherpad-conf}/settings.json
[etherpad-run-script]
recipe = slapos.recipe.template
url = ${template-run-script:location}/${template-run-script:filename}
etherpad-location = ${etherpad-lite-repository:location}
etherpad-repository-location = $${directory:etherpad-repository-location}
nodejs-location = ${nodejs:location}
etherpad-deps-script-location = ${template-deps-script:output}
etherpad-conf-location = $${etherpad-conf-generation:output}
etherpad-conf-name = settings-$${slap-connection:partition-id}.json
mode = 0755
output = $${rootdirectory:bin}/run.sh
# Command line comes from the run script of etherpad-lite
[etherpad-lite]
recipe = slapos.cookbook:wrapper
wrapper-path = $${basedirectory:run}/etherpad-lite
command-line = $${etherpad-run-script:output} -s $${etherpad-run-script:etherpad-conf-name}
[request-frontend]
<= slap-connection
recipe = slapos.cookbook:request
name = Frontend
# XXX We have hardcoded SR URL here.
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true
config = url
config-url = http://$${etherpad-conf-generation:ip}:$${etherpad-conf-generation:port}
return = site_url
[frontend-etherpad]
recipe = slapos.cookbook:check_url_available
path = $${basedirectory:promises}/frontend-etherpad
url = $${request-frontend:connection-site_url}
dash_path = ${dash:location}/bin/dash
curl_path = ${curl:location}/bin/curl
\ No newline at end of file
[buildout]
parts =
switch_softwaretype
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[switch_softwaretype]
recipe = slapos.cookbook:softwaretype
default = ${instance-etherpad-lite:output}
[buildout]
extends =
common.cfg
parts +=
slapos.cookbook-repository
slapos.toolbox-repository
check-recipe
develop =
${:parts-directory}/slapos.cookbook-repository
${:parts-directory}/slapos.toolbox-repository
[slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.git
branch = etherpad-lite
git-executable = ${git:location}/bin/git
[slapos.toolbox-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.toolbox.git
branch = master
git-executable = ${git:location}/bin/git
[check-recipe]
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link
\ No newline at end of file
#!/bin/sh
# Ugly setup
cd ${:etherpad-location}
export PATH=$PATH:${:postgre-location}/bin
#Is gnu-grep (ggrep) installed on SunOS (Solaris)
if [ $(uname) = "SunOS" ]; then
hash ggrep > /dev/null 2>&1 || {
echo "Please install ggrep (pkg install gnu-grep)" >&2
exit 1
}
fi
#Is curl installed?
hash ${:curl-location}/bin/curl > /dev/null 2>&1 || {
echo "Please install curl" >&2
exit 1
}
#Is node installed?
hash ${:nodejs-location}/bin/node > /dev/null 2>&1 || {
echo "Please install node.js ( http://nodejs.org )" >&2
exit 1
}
#Is npm installed?
hash ${:nodejs-location}/bin/npm > /dev/null 2>&1 || {
echo "Please install npm ( http://npmjs.org )" >&2
exit 1
}
echo "Ensure that all dependencies are up to date... If this is the first time you have run Etherpad please be patient."
(
mkdir -p node_modules
cd node_modules
[ -e ep_etherpad-lite ] || ln -s ../src ep_etherpad-lite
cd ep_etherpad-lite
${:nodejs-location}/bin/npm install --loglevel warn
) || {
rm -rf node_modules
exit 1
}
echo "Ensure jQuery is downloaded and up to date..."
DOWNLOAD_JQUERY="true"
NEEDED_VERSION="1.9.1"
if [ $DOWNLOAD_JQUERY = "true" ]; then
${:curl-location}/bin/curl -lo src/static/js/jquery.js http://code.jquery.com/jquery-$NEEDED_VERSION.js || exit 1
fi
#Remove all minified data to force node creating it new
echo "Clear minfified cache..."
rm -f var/minified*
echo "ensure custom css/js files are created..."
for f in "index" "pad" "timeslider"
do
if [ ! -f "src/static/custom/$f.js" ]; then
cp "src/static/custom/js.template" "src/static/custom/$f.js" || exit 1
fi
if [ ! -f "src/static/custom/$f.css" ]; then
cp "src/static/custom/css.template" "src/static/custom/$f.css" || exit 1
fi
done
exit 0
#!/bin/sh
# Copy repository content from the software release
mkdir ${:etherpad-repository-location}
cp -R ${:etherpad-location}/* ${:etherpad-repository-location}
#Move to the folder where ep-lite is installed
cd ${:etherpad-repository-location}
# XXX Vivien: very ugly, definitely need to find a cleaner way
cp ${:etherpad-conf-location} ${:etherpad-repository-location}/${:etherpad-conf-name}
#Move to the node folder and start
echo "start..."
${:nodejs-location}/bin/node node_modules/ep_etherpad-lite/node/server.js $*
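For clarity, the [etherpad-lite] wrapper defined in the instance profile ends up executing this generated run.sh with the per-partition settings file; roughly (an illustrative invocation, with the instance home and partition id as placeholders):
#!/bin/sh
# Illustrative only: this is effectively what the supervisord-managed wrapper runs.
INSTANCE_HOME=/srv/slapgrid/slappartXX   # placeholder for the real partition directory
"$INSTANCE_HOME/bin/run.sh" -s "settings-slappartXX.json"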
/*
This file must be valid JSON. But comments are allowed
Please edit settings.json, not settings.json.template
*/
{
// Name your instance!
"title": "Etherpad",
// favicon default name
// alternatively, set up a fully specified Url to your own favicon
"favicon": "favicon.ico",
//IP and port which etherpad should bind at
"ip": "${:ip}",
"port" : "${:port}",
// Session Key, used for reconnecting user sessions
// Set this to a secure string at least 10 characters long. Do not share this value.
"sessionKey" : "",
/*
// Node native SSL support
// this is disabled by default
//
// make sure to have the minimum and correct file access permissions set
// so that the Etherpad server can access them
"ssl" : {
"key" : "/path-to-your/epl-server.key",
"cert" : "/path-to-your/epl-server.crt"
},
*/
//The Type of the database. You can choose between dirty, postgres, sqlite and mysql
//You shouldn't use "dirty" for for anything else than testing or development
"dbType" : "dirty",
//the database specific settings
"dbSettings" : {
"filename" : "${:dirtydb-location}"
},
/* An Example of MySQL Configuration
"dbType" : "mysql",
"dbSettings" : {
"user" : "root",
"host" : "localhost",
"password": "",
"database": "store"
},
*/
//the default text of a pad
"defaultPadText" : "Welcome to Etherpad!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nGet involved with Etherpad at http:\/\/etherpad.org\n",
/* Users must have a session to access pads. This effectively allows only group pads to be accessed. */
"requireSession" : false,
/* Users may edit pads but not create new ones. Pad creation is only via the API. This applies both to group pads and regular pads. */
"editOnly" : false,
/* if true, all css & js will be minified before sending to the client. This will improve the loading performance massivly,
but makes it impossible to debug the javascript/css */
"minify" : true,
/* How long may clients use served javascript code (in seconds)? Without versioning this
may cause problems during deployment. Set to 0 to disable caching */
"maxAge" : 21600, // 60 * 60 * 6 = 6 hours
/* This is the path to the Abiword executable. Setting it to null, disables abiword.
Abiword is needed to advanced import/export features of pads*/
"abiword" : null,
/* This setting is used if you require authentication of all users.
Note: /admin always requires authentication. */
"requireAuthentication": false,
/* Require authorization by a module, or a user with is_admin set, see below. */
"requireAuthorization": false,
/* Users for basic authentication. is_admin = true gives access to /admin.
If you do not uncomment this, /admin will not be available! */
/*
"users": {
"admin": {
"password": "changeme1",
"is_admin": true
},
"user": {
"password": "changeme1",
"is_admin": false
}
},
*/
// restrict socket.io transport methods
"socketTransportProtocols" : ["xhr-polling"],
/* The log level we are using, can be: DEBUG, INFO, WARN, ERROR */
"loglevel": "INFO",
//Logging configuration. See log4js documentation for further information
// https://github.com/nomiddlename/log4js-node
// You can add as many appenders as you want here:
"logconfig" :
{ "appenders": [
{ "type": "console",
// "category": "access"// only logs pad access
}
/*
, { "type": "file"
, "filename": "your-log-file-here.log"
, "maxLogSize": 1024
, "backups": 3 // how many log files there're gonna be at max
//, "category": "test" // only log a specific category
}*/
/*{ "type": "logLevelFilter",
"level": "warn" // filters out all log messages that have a lower level than "error"
, "appender":
{ Use whatever appender you want here }
}*/
/*
, { "type": "logLevelFilter"
, "level": "error" // filters out all log messages that have a lower level than "error"
, "appender":
{ "type": "smtp"
, "subject": "An error occured in your EPL instance!"
, "recipients": "bar@blurdybloop.com, baz@blurdybloop.com"
, "sendInterval": 60*5 // in secs -- will buffer log messages; set to 0 to send a mail for every message
, "transport": "SMTP", "SMTP": { // see https://github.com/andris9/Nodemailer#possible-transport-methods
"host": "smtp.example.com", "port": 465,
"secureConnection": true,
"auth": {
"user": "foo@example.com",
"pass": "bar_foo"
}
}
}
}*/
] }
}
TODO:
+ Modify etherpad-lite to read absolute paths correctly, so that we aren't
forced to copy the whole repository out of the software release directory for it to work
...@@ -13,14 +13,13 @@ parts =
gitdaemon gitdaemon
git-http-backend-cgi git-http-backend-cgi
htpasswd htpasswd
pwgen
git-repos git-repos
[publish] [publish]
recipe = slapos.cookbook:publish recipe = slapos.cookbook:publish
url = http://[$${slap-network-information:global-ipv6}]:$${httpd-conf:port}/ url = http://[$${slap-network-information:global-ipv6}]:$${httpd-conf:port}/
user = $${pwgen:user} user = $${pwgen:user}
password = $${pwgen:password} password = $${pwgen:passwd}
[httpd] [httpd]
recipe = slapos.cookbook:wrapper recipe = slapos.cookbook:wrapper
...@@ -79,14 +78,12 @@ output = $${basedirectory:services}/git-daemon
recipe = collective.recipe.cmd recipe = collective.recipe.cmd
output = $${rootdirectory:etc}/httpd.htpasswd output = $${rootdirectory:etc}/httpd.htpasswd
on_install = true on_install = true
on_udptae = true on_update = true
cmds = cmds =
${apache:location}/bin/htpasswd -cb $${:output} $${pwgen:user} $${pwgen:password} ${apache:location}/bin/htpasswd -cb $${:output} $${pwgen:user} $${pwgen:passwd}
[pwgen] [pwgen]
recipe = slapos.cookbook:pwgen recipe = slapos.cookbook:generate.password
file = $${buildout:directory}/.password
pwgen-binary = ${pwgen:location}/bin/pwgen
user = slapos user = slapos
[rootdirectory] [rootdirectory]
......
...@@ -4,7 +4,6 @@ extends =
../../component/apache/buildout.cfg ../../component/apache/buildout.cfg
../../component/perl/buildout.cfg ../../component/perl/buildout.cfg
../../component/git/buildout.cfg ../../component/git/buildout.cfg
../../component/pwgen/buildout.cfg
../../stack/slapos.cfg ../../stack/slapos.cfg
parts = parts =
......
...@@ -45,7 +45,9 @@ configuration.name = John Doe
# Create all needed directories, depending on your needs # Create all needed directories, depending on your needs
[directory] [directory]
recipe = slapos.cookbook:mkdirectory recipe = slapos.cookbook:mkdirectory
etc = $${buildout:directory}/etc home = $${buildout:directory}
etc = $${:home}/etc
var = $${:home}/var
# Executables put here will be started but not monitored (for startup scripts) # Executables put here will be started but not monitored (for startup scripts)
script = $${:etc}/run/ script = $${:etc}/run/
# Executables put here will be started and monitored (for daemons) # Executables put here will be started and monitored (for daemons)
...@@ -53,7 +55,8 @@ service = $${:etc}/service
# Executables put here will be launched after buildout has completed to see # Executables put here will be launched after buildout has completed to see
# if instance is running # if instance is running
promise = $${:etc}/promise/ promise = $${:etc}/promise/
# Path of the log directory used by our service (see [hello-world])
log = $${:var}/log
# Create a simple shell script that will only output your name if you # Create a simple shell script that will only output your name if you
# specified it as instance parameter. # specified it as instance parameter.
...@@ -62,9 +65,11 @@ promise = $${:etc}/promise/
# This recipe will try to "exec" the command-line after separating parameters. # This recipe will try to "exec" the command-line after separating parameters.
recipe = slapos.cookbook:wrapper recipe = slapos.cookbook:wrapper
# Notice that there is only one $ at ${dash:location}, it is because it comes from the Software Release buildout profile. # Notice that there is only one $ at ${dash:location}, it is because it comes from the Software Release buildout profile.
command-line = ${dash:location}/bin/dash -c 'echo "Hello $${instance-parameter:configuration.name}!"; sleep 100000;' command-line = ${dash:location}/bin/dash -c 'echo "Hello $${instance-parameter:configuration.name}, it is $(date)." > $${directory:log}/log.log; sleep 1000000;'
# Put this shell script in the "etc/service" directory. Every executable of this # Put this shell script in the "etc/service" directory. Every executable of this
# repository will be started and monitored by supervisord # repository will be started and monitored by supervisord. If one service
# exits/crashes, it will trigger a "bang" and cause slapgrid to run for the
# instance.
wrapper-path = $${directory:service}/hello-world wrapper-path = $${directory:service}/hello-world
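Once the instance is deployed, a quick way to see the monitored service doing its job is to read the log file it writes (a sketch; the instance home is a placeholder path):
#!/bin/sh
# The hello-world service writes the greeting and the current date to this file when it starts.
INSTANCE_HOME=/srv/slapgrid/slappartXX   # placeholder for the real partition directory
cat "$INSTANCE_HOME/var/log/log.log"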
......
...@@ -16,15 +16,15 @@ parts =
slapos-cookbook slapos-cookbook
# Call creation of instance.cfg file that will be called for deployment of # Call creation of instance.cfg file that will be called for deployment of
# instance # instance
template instance-profile
# Download instance.cfg.in (buildout profile used to deployment of instance), # Download instance.cfg.in (buildout profile used to deployment of instance),
# replace all ${foo:bar} parameters by real values, and change $${foo:bar} to # replace all ${foo:bar} parameters by real values, and change $${foo:bar} to
# ${foo:bar} # ${foo:bar}
[template] [instance-profile]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in url = ${:_profile_base_location_}/instance.cfg.in
output = ${buildout:directory}/instance.cfg output = ${buildout:directory}/instance.cfg
# MD5 checksum can be skipped for development (easier to develop), but must be filled for production # MD5 checksum can be skipped for development (easier to develop), but must be filled for production
md5sum = 1fc461c00e86485bee77a942f39e3c43 md5sum = ed94ac99ae1e596c0da5350da6ab6f52
mode = 0644 mode = 0644
[buildout] [buildout]
extends = extends =
../../component/6tunnel/buildout.cfg
../../component/curl/buildout.cfg ../../component/curl/buildout.cfg
../../component/dash/buildout.cfg ../../component/dash/buildout.cfg
../../component/dcron/buildout.cfg ../../component/dcron/buildout.cfg
...@@ -9,13 +10,18 @@ extends =
../../component/logrotate/buildout.cfg ../../component/logrotate/buildout.cfg
../../component/noVNC/buildout.cfg ../../component/noVNC/buildout.cfg
../../component/openssl/buildout.cfg ../../component/openssl/buildout.cfg
../../component/dcron/buildout.cfg
../../stack/nodejs.cfg ../../stack/nodejs.cfg
../../stack/resilient/buildout.cfg
../../stack/slapos.cfg ../../stack/slapos.cfg
parts = parts =
template template
eggs eggs
# XXX: we have to manually add this for resilience
rdiff-backup
#XXX-Cedric : Currently, one can only access to KVM using noVNC. #XXX-Cedric : Currently, one can only access to KVM using noVNC.
# Ideally one should be able to access KVM by using either NoVNC or VNC. # Ideally one should be able to access KVM by using either NoVNC or VNC.
# Problem is : no native crypto support in web browsers. So we have to disable ssl # Problem is : no native crypto support in web browsers. So we have to disable ssl
...@@ -36,6 +42,7 @@ eggs =
websockify websockify
slapos.cookbook slapos.cookbook
slapos.toolbox slapos.toolbox
erp5.util
[http-proxy] [http-proxy]
# https://github.com/nodejitsu/node-http-proxy # https://github.com/nodejitsu/node-http-proxy
...@@ -67,13 +74,66 @@ command =
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install pkginfo@0.2.3 ${nodejs:location}/bin/node ${nodejs:location}/bin/npm install pkginfo@0.2.3
# Create all templates that will be used to deploy instances
[template]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
#md5sum = bdd0495ef729e7272ec9c97aca919c09
output = ${buildout:directory}/template.cfg
mode = 0644
[template-kvm] [template-kvm]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-kvm.cfg.in url = ${:_profile_base_location_}/instance-kvm.cfg.in
md5sum = 87197471aa93863c310204e8865b5ac1 #md5sum = c3c888c78bbff334135be9e8ad5885a9
output = ${buildout:directory}/template-kvm.cfg output = ${buildout:directory}/template-kvm.cfg
mode = 0644 mode = 0644
[template-kvm-resilient]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/instance-kvm-resilient.cfg.jinja2
mode = 644
md5sum = 6753004b582c0470bd028253ce1964ad
download-only = true
[template-kvm-resilient-test]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/instance-kvm-resilient-test.cfg.jinja2
md5sum = 027d68d9decbc6aec59365fa723975d7
mode = 0644
download-only = true
[template-kvm-import]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-kvm-import.cfg.in
md5sum = 7b36d6c61154b7ec3113a1bfaa25a904
output = ${buildout:directory}/template-kvm-import.cfg
mode = 0644
[template-kvm-import-script]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/template/kvm-import.sh.in
filename = kvm-import.sh.in
md5sum = a731372420dc59c0b5ba7bc5f39a14ad
download-only = true
mode = 0755
[template-kvm-export]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-kvm-export.cfg.in
md5sum = 64a1a505aff9fde52afac46240811047
output = ${buildout:directory}/template-kvm-export.cfg
mode = 0644
[template-kvm-export-script]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/template/kvm-export.sh.in
filename = kvm-export.sh.in
md5sum = 3e878b3343c76f0d6950986fffcb6a8c
download-only = true
mode = 0755
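The md5sum values in the download sections above must stay in sync with the files they point to. A small sketch of how they can be recomputed, assuming it is run from the directory containing this software.cfg:
#!/bin/sh
# Recompute the checksums expected by the [template-kvm-*-script] sections.
md5sum template/kvm-import.sh.in template/kvm-export.sh.in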
[template-nbd] [template-nbd]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-nbd.cfg.in url = ${:_profile_base_location_}/instance-nbd.cfg.in
...@@ -88,9 +148,3 @@ md5sum = cdb690495e9eb007d2b7d2f8e12f5c59
output = ${buildout:directory}/template-frontend.cfg output = ${buildout:directory}/template-frontend.cfg
mode = 0644 mode = 0644
[template]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
md5sum = 0a98e34aaec7097a84066c0665e3a49a
output = ${buildout:directory}/template.cfg
mode = 0644
...@@ -6,11 +6,13 @@ extends =
parts += parts +=
slapos.cookbook-repository slapos.cookbook-repository
slapos.toolbox-repository slapos.toolbox-repository
erp5.util-repository
check-recipe check-recipe
develop = develop =
${:parts-directory}/slapos.cookbook-repository ${:parts-directory}/slapos.cookbook-repository
${:parts-directory}/slapos.toolbox-repository ${:parts-directory}/slapos.toolbox-repository
${:parts-directory}/erp5.util-repository
[slapos.cookbook-repository] [slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone recipe = slapos.recipe.build:gitclone
...@@ -21,7 +23,13 @@ git-executable = ${git:location}/bin/git
[slapos.toolbox-repository] [slapos.toolbox-repository]
recipe = slapos.recipe.build:gitclone recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.toolbox.git repository = http://git.erp5.org/repos/slapos.toolbox.git
branch = master branch = kvmresiliency
git-executable = ${git:location}/bin/git
[erp5.util-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/erp5.git
branch = scalability-master2
git-executable = ${git:location}/bin/git git-executable = ${git:location}/bin/git
[check-recipe] [check-recipe]
...@@ -30,4 +38,6 @@ stop-on-error = true
update-command = ${:command} update-command = ${:command}
command = command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link && grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link &&
grep parts ${buildout:develop-eggs-directory}/erp5.util.egg-link
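The [check-recipe] command above only asserts that buildout really develops the three git checkouts. A hedged standalone equivalent, runnable from the software directory with the egg names listed above:
#!/bin/sh
# Fail loudly if any developed egg does not come from the parts/ checkouts.
for egg in slapos.cookbook slapos.toolbox erp5.util; do
  grep -q parts "develop-eggs/$egg.egg-link" \
    || { echo "$egg is not developed from parts/" >&2; exit 1; }
done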
[buildout]
extends = ${template-kvm:output}
${pbsready-export:output}
parts +=
cron-entry-backup
certificate-authority
publish-connection-information
kvm-promise
websockify-sighandler
novnc-promise
cron
frontend-promise
# Create the exporter executable, which is a simple shell script
[exporter]
recipe = slapos.recipe.template
url = ${template-kvm-export-script:location}/${template-kvm-export-script:filename}
output = $${directory:bin}/$${slap-parameter:namebase}-exporter
mode = 0755
backup-disk-path = $${directory:backup}/virtual.qcow2
# Resilient stack wants a "wrapper" parameter
wrapper = $${:output}
# Extends publish section with resilient parameters
[publish-connection-information]
<= resilient-publish-connection-parameter
[buildout]
# Here, we don't need KVM to run to import data, so we don't
# even extend the kvm instance profile.
extends = ${pbsready-import:output}
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[directory]
recipe = slapos.cookbook:mkdirectory
etc = $${buildout:directory}/etc
bin = $${buildout:directory}/bin
srv = $${buildout:directory}/srv
var = $${buildout:directory}/var
log = $${:var}/log
scripts = $${:etc}/run
services = $${:etc}/service
promises = $${:etc}/promise
novnc-conf = $${:etc}/novnc
run = $${:var}/run
ca-dir = $${:srv}/ssl
cron-entries = $${:etc}/cron.d
crontabs = $${:etc}/crontabs
cronstamps = $${:etc}/cronstamps
[importer]
recipe = slapos.recipe.template
url = ${template-kvm-import-script:location}/${template-kvm-import-script:filename}
output = $${directory:bin}/$${slap-parameter:namebase}-importer
mode = 0755
backup-disk-path = $${directory:backup}/virtual.qcow2
disk-path = $${directory:srv}/virtual.qcow2
# Resilient stack wants a "wrapper" parameter
wrapper = $${:output}
{ {
"name": "Input Parameters", "type": "object",
"$schema": "http://json-schema.org/draft-04/schema",
"title": "Input Parameters",
"properties": { "properties": {
"ram-size": { "ram-size": {
"title": "RAM size", "title": "RAM size",
...@@ -7,7 +10,7 @@
"type": "integer", "type": "integer",
"default": 1024, "default": 1024,
"minimum": 128, "minimum": 128,
"divisibleBy": 128, "multipleOf": 128,
"maximum": 16384 "maximum": 16384
}, },
"disk-size": { "disk-size": {
...@@ -34,7 +37,6 @@
"maximum": 8 "maximum": 8
}, },
"nbd-host": { "nbd-host": {
"title": "NBD hostname", "title": "NBD hostname",
"description": "hostname (or IP) of the NBD server containing the boot image.", "description": "hostname (or IP) of the NBD server containing the boot image.",
...@@ -65,6 +67,31 @@
"maximum": 65535 "maximum": 65535
}, },
"virtual-hard-drive-url": {
"title": "Existing disk image URL",
"description": "If specified, will download an existing disk image (qcow2, raw, ...), and will use it as main virtual hard drive. Can be used to download and use an already installed and customized virtual hard drive.",
"format": "uri",
"type": "string",
},
"virtual-hard-drive-md5sum": {
"title": "Checksum of virtual hard drive",
"description": "MD5 checksum of virtual hard drive, used if virtual-hard-drive-url is specified.",
"type": "string",
},
virtual-hard-drive-md5sum
"use-tap": {
"title": "Use QEMU TAP network interface",
"description": "Use QEMU TAP network interface, requires a bridge on SlapOS Node. If false, use user-mode network stack (NAT).",
"type": "boolean",
"default": false
},
"nat-rules": {
"title": "List of rules for NAT of QEMU user mode network stack.",
"description": "List of rules for NAT of QEMU user mode network stack, as comma-separated list of ports. For each port specified, it will redirect port x of the VM (example: 80) to the port x + 10000 of the public IPv6 (example: 10080). Defaults to \"22 80 443\". Ignored if \"use-tap\" parameter is enabled.",
"type": "string",
},
"frontend-instance-guid": { "frontend-instance-guid": {
"title": "Frontend Instance ID", "title": "Frontend Instance ID",
......
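As the nat-rules description above states, each listed VM port is exposed on the partition's public IPv6 at that port plus 10000. A tiny sketch of the resulting mapping for the default value:
#!/bin/sh
# Print the host-side port corresponding to each VM port of the default "22 80 443" nat-rules.
for port in 22 80 443; do
  echo "VM port $port -> IPv6 port $((port + 10000))"
done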
...@@ -13,13 +13,6 @@
"description": "URL used to connect to the service.", "description": "URL used to connect to the service.",
"type": "uri", "type": "uri",
"required": false "required": false
},
"password": {
"title": "Password",
"description": "Password used to authenticate in the service webpage.",
"type": "uri",
"required": true
} }
} }
} }
[buildout]
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
parts =
deploy-resiliency-test
request-resilient-kvm
deploy-standalone-resiliency-test
[directory]
recipe = slapos.cookbook:mkdirectory
home = ${buildout:directory}
etc = ${:home}/etc/
var = ${:home}/var/
srv = ${:home}/srv/
bin = ${:home}/bin/
tmp = ${:home}/tmp/
log = ${:var}/log/
services = ${:etc}/service/
scripts = ${:etc}/run/
[deploy-resiliency-test]
recipe = slapos.cookbook:wrapper
wrapper-path = ${directory:scripts}/runKVMResiliencyTestSuite
testnode-parameters = --test-result-path={{ slapparameter_dict.get('test-result-path') }} --revision={{ slapparameter_dict.get('test-suite-revision') }} --node-title={{ slapparameter_dict.get('scalability-launcher-title') }} --test-suite={{ slapparameter_dict.get('test-suite') }} --test-suite-master-url={{ slapparameter_dict.get('test-suite-master-url') }} --log-path=${directory:log}
kvm-test-parameters = server_url=${slap-connection:server-url} key_file=${slap-connection:key-file} cert_file=${slap-connection:cert-file} computer_id=${slap-connection:computer-id} partition_id=${slap-connection:partition-id} software=${slap-connection:software-release-url} namebase=kvm kvm_rootinstance_name='${request-resilient-kvm:name}'
command-line = {{ bin_directory }}/runResiliencyTest ${:testnode-parameters} ${:kvm-test-parameters}
[deploy-standalone-resiliency-test]
# Used to manually run the KVM test if we don't have a running testnode.
recipe = slapos.cookbook:wrapper
wrapper-path = ${directory:bin}/runStandaloneResiliencyTestSuite
command-line = {{ bin_directory }}/runStandaloneResiliencyTest --test-suite-title=kvm ${deploy-resiliency-test:kvm-test-parameters}
[request-resilient-kvm]
<= slap-connection
recipe = slapos.cookbook:request
software-url = ${slap-connection:software-release-url}
software-type = kvm-resilient
name = Resilient KVM (Root Instance)
config = virtual-hard-drive-url virtual-hard-drive-md5sum resiliency-backup-periodicity
config-virtual-hard-drive-url = ${slap-parameter:virtual-hard-drive-url}
config-virtual-hard-drive-md5sum = ${slap-parameter:virtual-hard-drive-md5sum}
config-resiliency-backup-periodicity = */5
# We don't use url parameter, but we want it to be there to make sure root instance is ready.
return = url
# XXX What to do?
#sla = instance_guid
#sla-instance_guid = ${slap-parameter:frontend-instance-guid}
[slap-parameter]
virtual-hard-drive-url = https://softinst43236.host.vifib.net/data/public/8e2138.php?dl=true
virtual-hard-drive-md5sum = de0f10c7c6538e9928879332afd9be7a
# vim: set ft=cfg:
{% import 'parts' as parts %}
{% import 'replicated' as replicated with context %}
[buildout]
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
# += because we need to pick up parts (like instance-custom, slapmonitor, etc.) from the profile we extended
parts +=
{{ parts.replicate("kvm", "3") }}
publish-connection-informations
{{ replicated.replicate("kvm", "3", "kvm-export", "kvm-import") }}
# Bubble down the parameters of the requested instance to the user
[request-kvm]
# Note: += doesn't work.
return =
# Resilient related parameters
url ssh-public-key ssh-url notification-id ip
# KVM related parameters
backend-url url ipv6
[publish-connection-informations]
recipe = slapos.cookbook:publish
backend-url = ${request-kvm:connection-backend-url}
url = ${request-kvm:connection-url}
ipv6 = ${request-kvm:connection-ipv6}
...@@ -6,10 +6,13 @@
[buildout] [buildout]
parts = parts =
certificate-authority certificate-authority
publish-kvm-connection-information publish-connection-information
kvm-promise kvm-promise
websockify-sighandler websockify-sighandler
novnc-promise novnc-promise
# kvm-monitor
cron
# cron-entry-monitor
frontend-promise frontend-promise
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
...@@ -22,12 +25,16 @@ etc = $${buildout:directory}/etc
bin = $${buildout:directory}/bin bin = $${buildout:directory}/bin
srv = $${buildout:directory}/srv srv = $${buildout:directory}/srv
var = $${buildout:directory}/var var = $${buildout:directory}/var
log = $${:var}/log
scripts = $${:etc}/run scripts = $${:etc}/run
services = $${:etc}/service services = $${:etc}/service
promises = $${:etc}/promise promises = $${:etc}/promise
novnc-conf = $${:etc}/novnc novnc-conf = $${:etc}/novnc
run = $${:var}/run run = $${:var}/run
ca-dir = $${:srv}/ssl ca-dir = $${:srv}/ssl
cron-entries = $${:etc}/cron.d
crontabs = $${:etc}/crontabs
cronstamps = $${:etc}/cronstamps
[create-mac] [create-mac]
recipe = slapos.cookbook:generate.mac recipe = slapos.cookbook:generate.mac
...@@ -38,31 +45,56 @@ recipe = slapos.cookbook:generate.password
storage-path = $${directory:srv}/passwd storage-path = $${directory:srv}/passwd
bytes = 8 bytes = 8
[kvm-instance] [kvm-instance]
# XXX-Cedric: change "KVM" recipe to simple "create wrappers". No need for this # XXX-Cedric: change "KVM" recipe to simple "create wrappers". No need for this
# Specific code # Specific code. It needs Jinja.
recipe = slapos.cookbook:kvm recipe = slapos.cookbook:kvm
vnc-ip = $${slap-network-information:local-ipv4}
vnc-passwd = $${gen-passwd:passwd}
ipv4 = $${slap-network-information:local-ipv4}
ipv6 = $${slap-network-information:global-ipv6}
vnc-ip = $${:ipv4}
vnc-port = 5901 vnc-port = 5901
# XXX-Cedric: should be named "default-cdrom-iso"
default-disk-image = ${debian-amd64-netinst.iso:location}/${debian-amd64-netinst.iso:filename}
nbd-host = $${slap-parameter:nbd-host} nbd-host = $${slap-parameter:nbd-host}
nbd-port = $${slap-parameter:nbd-port} nbd-port = $${slap-parameter:nbd-port}
nbd2-host = $${slap-parameter:nbd2-host} nbd2-host = $${slap-parameter:nbd2-host}
nbd2-port = $${slap-parameter:nbd2-port} nbd2-port = $${slap-parameter:nbd2-port}
tap = $${slap-network-information:network-interface}
tap-interface = $${slap-network-information:network-interface}
disk-path = $${directory:srv}/virtual.qcow2 disk-path = $${directory:srv}/virtual.qcow2
disk-size = $${slap-parameter:disk-size} disk-size = $${slap-parameter:disk-size}
disk-type = $${slap-parameter:disk-type} disk-type = $${slap-parameter:disk-type}
socket-path = $${directory:var}/qmp_socket socket-path = $${directory:var}/qmp_socket
pid-path = $${directory:run}/pid_file pid-file-path = $${directory:run}/pid_file
smp-count = $${slap-parameter:cpu-count} smp-count = $${slap-parameter:cpu-count}
ram-size = $${slap-parameter:ram-size} ram-size = $${slap-parameter:ram-size}
mac-address = $${create-mac:mac-address} mac-address = $${create-mac:mac-address}
# XXX-Cedric: should be named runner-wrapper-path and controller-wrapper-path
runner-path = $${directory:services}/kvm runner-path = $${directory:services}/kvm
controller-path = $${directory:scripts}/kvm_controller controller-path = $${directory:scripts}/kvm_controller
use-tap = $${slap-parameter:use-tap}
nat-rules = $${slap-parameter:nat-rules}
6tunnel-wrapper-path = $${directory:services}/6tunnel
virtual-hard-drive-url = $${slap-parameter:virtual-hard-drive-url}
virtual-hard-drive-md5sum = $${slap-parameter:virtual-hard-drive-md5sum}
shell-path = ${dash:location}/bin/dash shell-path = ${dash:location}/bin/dash
qemu-path = ${kvm:location}/bin/qemu-system-x86_64 qemu-path = ${kvm:location}/bin/qemu-system-x86_64
qemu-img-path = ${kvm:location}/bin/qemu-img qemu-img-path = ${kvm:location}/bin/qemu-img
passwd = $${gen-passwd:passwd} 6tunnel-path = ${6tunnel:location}/bin/6tunnel
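With use-tap disabled, the recipe combines QEMU user-mode port redirection with 6tunnel to expose the redirected ports on IPv6. A rough hand-written equivalent for a single port, assuming 6tunnel's usual -6/-4/-l options; the addresses and the 22 -> 10022 mapping are illustrative, not values produced by the recipe:
#!/bin/sh
# Listen on the partition's global IPv6 and forward to the local IPv4 port that the
# user-mode network stack redirects into the VM (6tunnel daemonizes by default).
GLOBAL_IPV6=2001:db8::1   # placeholder
LOCAL_IPV4=10.0.1.2       # placeholder
/path/to/6tunnel -6 -4 -l "$GLOBAL_IPV6" 10022 "$LOCAL_IPV4" 10022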
[kvm-promise] [kvm-promise]
recipe = slapos.cookbook:check_port_listening recipe = slapos.cookbook:check_port_listening
...@@ -122,10 +154,42 @@ hostname = $${novnc-instance:ip}
port = $${novnc-instance:port} port = $${novnc-instance:port}
[kvm-monitor] #[kvm-monitor]
recipe = slapos.cookbook:generic.slapmonitor #recipe = slapos.cookbook:wrapper
db-path = $${directory:srv}/slapmonitor_database #wrapper-path = $${directory:services}/kvm_monitor
#command-line = ${buildout:bin-directory}/kvm.monitor.test
# $${buildout:directory}/buildout-switch-softwaretype.cfg
# $${buildout:directory}/report.xml
# -s slap-parameter
# -opts disk-size ram-size cpu-count
#----------------
#--
#-- Deploy cron.
[cron]
recipe = slapos.cookbook:cron
dcrond-binary = ${dcron:location}/sbin/crond
cron-entries = $${directory:cron-entries}
crontabs = $${directory:crontabs}
cronstamps = $${directory:cronstamps}
catcher = $${cron-simplelogger:wrapper}
binary = $${directory:services}/crond
[cron-simplelogger]
recipe = slapos.cookbook:simplelogger
wrapper = $${directory:bin}/cron_simplelogger
log = $${directory:log}/crond.log
#[cron-entry-monitor]
#<= cron
#recipe = slapos.cookbook:cron.d
#name = kvm_monitor
#frequency = 0 0 * * *
#command = $${kvm-monitor:wrapper-path}
[request-slave-frontend] [request-slave-frontend]
recipe = slapos.cookbook:requestoptional recipe = slapos.cookbook:requestoptional
...@@ -146,17 +210,16 @@ sla = instance_guid
sla-instance_guid = $${slap-parameter:frontend-instance-guid} sla-instance_guid = $${slap-parameter:frontend-instance-guid}
[publish-kvm-connection-information] [publish-connection-information]
recipe = slapos.cookbook:publish recipe = slapos.cookbook:publish
backend-url = https://[$${novnc-instance:ip}]:$${novnc-instance:port}/vnc_auto.html?host=[$${novnc-instance:ip}]&port=$${novnc-instance:port}&encrypt=1 backend-url = https://[$${novnc-instance:ip}]:$${novnc-instance:port}/vnc_auto.html?host=[$${novnc-instance:ip}]&port=$${novnc-instance:port}&encrypt=1&password=$${kvm-instance:vnc-passwd}
password = $${kvm-instance:passwd} url = $${request-slave-frontend:connection-url}/vnc_auto.html?host=$${request-slave-frontend:connection-domainname}&port=$${request-slave-frontend:connection-port}&encrypt=1&path=$${request-slave-frontend:connection-resource}&password=$${kvm-instance:vnc-passwd}
url = $${request-slave-frontend:connection-url}/vnc_auto.html?host=$${request-slave-frontend:connection-domainname}&port=$${request-slave-frontend:connection-port}&encrypt=1&path=$${request-slave-frontend:connection-resource} ipv6 = $${slap-network-information:global-ipv6}
[frontend-promise] [frontend-promise]
recipe = slapos.cookbook:check_url_available recipe = slapos.cookbook:check_url_available
path = $${directory:promises}/frontend_promise path = $${directory:promises}/frontend_promise
url = $${publish-kvm-connection-information:url} url = $${publish-connection-information:url}
dash_path = ${dash:location}/bin/dash dash_path = ${dash:location}/bin/dash
curl_path = ${curl:location}/bin/curl curl_path = ${curl:location}/bin/curl
...@@ -164,9 +227,9 @@ curl_path = ${curl:location}/bin/curl
# Default values if not specified # Default values if not specified
frontend-software-type = frontend frontend-software-type = frontend
frontend-software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.92:/software/kvm/software.cfg frontend-software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.92:/software/kvm/software.cfg
frontend-instance-guid =
nbd-port = 1024 nbd-port = 1024
nbd-host = debian.nbd.vifib.net nbd-host =
nbd2-port = 1024 nbd2-port = 1024
nbd2-host = nbd2-host =
...@@ -175,3 +238,9 @@ disk-size = 10
disk-type = virtio disk-type = virtio
cpu-count = 1 cpu-count = 1
nat-rules = 22 80 443
use-tap = False
virtual-hard-drive-url =
virtual-hard-drive-md5sum =
...@@ -4,7 +4,6 @@ parts =
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory} develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[switch-softwaretype] [switch-softwaretype]
recipe = slapos.cookbook:softwaretype recipe = slapos.cookbook:softwaretype
...@@ -13,11 +12,46 @@ kvm = ${template-kvm:output}
nbd = ${template-nbd:output} nbd = ${template-nbd:output}
frontend = ${template-frontend:output} frontend = ${template-frontend:output}
[slap-connection] kvm-resilient = $${dynamic-template-kvm-resilient:rendered}
# part to migrate to new - separated words kvm-import = ${template-kvm-import:output}
computer-id = $${slap_connection:computer_id} kvm-export = ${template-kvm-export:output}
partition-id = $${slap_connection:partition_id}
server-url = $${slap_connection:server_url} # Used for the test of resiliency. The system wants a "test" software_type.
software-release-url = $${slap_connection:software_release_url} test = $${dynamic-template-kvm-resilient-test:rendered}
key-file = $${slap_connection:key_file}
cert-file = $${slap_connection:cert_file} frozen = ${instance-frozen:output}
pull-backup = ${template-pull-backup:output}
[slap-configuration]
recipe = slapos.cookbook:slapconfiguration.serialised
computer = $${slap-connection:computer-id}
partition = $${slap-connection:partition-id}
url = $${slap-connection:server-url}
key = $${slap-connection:key-file}
cert = $${slap-connection:cert-file}
[dynamic-template-kvm-resilient]
recipe = slapos.recipe.template:jinja2
template = ${template-kvm-resilient:location}/instance-kvm-resilient.cfg.jinja2
rendered = $${buildout:directory}/template-kvm-resilient.cfg
context =
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
key slapparameter_dict slap-configuration:configuration
template-parts-destination = ${template-parts:destination}
template-replicated-destination = ${template-replicated:destination}
import-list = file parts :template-parts-destination
file replicated :template-replicated-destination
mode = 0644
[dynamic-template-kvm-resilient-test]
recipe = slapos.recipe.template:jinja2
template = ${template-kvm-resilient-test:location}/instance-kvm-resilient-test.cfg.jinja2
rendered = $${buildout:directory}/template-kvm-resilient-test.cfg
bin-directory = ${buildout:bin-directory}
context =
key bin_directory dynamic-template-kvm-resilient-test:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
key slapparameter_dict slap-configuration:configuration
mode = 0644
...@@ -47,98 +47,96 @@ signature-certificate-list =
x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI= x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI=
-----END CERTIFICATE----- -----END CERTIFICATE-----
[versions] [versions]
numpy = 1.6.2 Werkzeug = 0.9.3
Jinja2 = 2.6 apache-libcloud = 0.13.0
Werkzeug = 0.8.3
apache-libcloud = 0.12.1
async = 0.6.1 async = 0.6.1
buildout-versions = 1.7 buildout-versions = 1.7
gitdb = 0.5.4 gitdb = 0.5.4
hexagonit.recipe.cmmi = 1.6 itsdangerous = 0.22
lxml = 3.1.0 lxml = 3.2.3
meld3 = 0.6.10 meld3 = 0.6.10
plone.recipe.command = 1.1 plone.recipe.command = 1.1
pycrypto = 2.6 pycrypto = 2.6
slapos.cookbook = 0.73.1 rdiff-backup = 1.0.5
slapos.cookbook = 0.79
slapos.recipe.cmmi = 0.2
slapos.recipe.download = 1.0.dev-r4053
slapos.recipe.template = 2.4.2 slapos.recipe.template = 2.4.2
slapos.toolbox = 0.33.1 slapos.toolbox = 0.35.0
smmap = 0.8.2 smmap = 0.8.2
websockify = 0.3.0 websockify = 0.5.1
z3c.recipe.scripts = 1.0.1 z3c.recipe.scripts = 1.0.1
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.core==0.35.1
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
Flask = 0.9 Flask = 0.10.1
# Required by: # Required by:
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
GitPython = 0.3.2.RC1 GitPython = 0.3.2.RC1
# Required by: # Required by:
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
atomize = 0.1.1 atomize = 0.1.1
# Required by: # Required by:
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
feedparser = 5.1.3 feedparser = 5.1.3
# Required by: # Required by:
# hexagonit.recipe.cmmi==1.6 # slapos.cookbook==0.79
hexagonit.recipe.download = 1.6nxd002 inotifyx = 0.2.0-1
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.cookbook==0.79
inotifyx = 0.2.0 lock-file = 2.0
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.cookbook==0.79
netaddr = 0.7.10 netaddr = 0.7.10
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.core==0.35.1
netifaces = 0.8 netifaces = 0.8-1
# Required by: # Required by:
# slapos.toolbox==0.33.1 # websockify==0.5.1
paramiko = 1.10.0 numpy = 1.7.1
# Required by: # Required by:
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
psutil = 0.6.1 paramiko = 1.11.0
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.toolbox==0.35.0
pyflakes = 0.6.1 psutil = 1.0.1
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.core==0.35.1
pytz = 2012j pyflakes = 0.7.3
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.cookbook==0.79
# slapos.core==0.35.1 pytz = 2013b
# slapos.toolbox==0.33.1
setuptools = 0.6c12dev-r88846
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.cookbook==0.79
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
slapos.core = 0.35.1 slapos.core = 0.35.1
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.core==0.35.1
supervisor = 3.0b1 supervisor = 3.0b2
# Required by: # Required by:
# slapos.core==0.35.1 # slapos.core==0.35.1
unittest2 = 0.5.1 unittest2 = 0.5.1
# Required by: # Required by:
# slapos.cookbook==0.73.1 # slapos.cookbook==0.79
# slapos.toolbox==0.33.1 # slapos.toolbox==0.35.0
xml-marshaller = 0.9.7 xml-marshaller = 0.9.7
# Required by: # Required by:
......
#!/bin/bash
# Create a backup of the disk image of the virtual machine
QEMU_IMG=${kvm-instance:qemu-img-path}
SNAPSHOT_NAME=$(date +%s)
DISK_PATH=${kvm-instance:disk-path}
BACKUP_PATH=${:backup-disk-path}
if [ ! -f $DISK_PATH ]; then
echo "Nothing to backup, disk image doesn't exist yet."
exit 0;
fi
$QEMU_IMG snapshot -c $SNAPSHOT_NAME $DISK_PATH
if [ -f $BACKUP_PATH ]; then
rm $BACKUP_PATH
fi
$QEMU_IMG convert -f qcow2 -O qcow2 -s $SNAPSHOT_NAME $DISK_PATH $BACKUP_PATH && \
$QEMU_IMG snapshot -d $SNAPSHOT_NAME $DISK_PATH
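After the exporter has run, the resulting backup image can be sanity-checked with qemu-img before the resilient stack relies on it (a sketch; both paths are placeholders for the values defined above):
#!/bin/sh
# Verify that the exported qcow2 backup is readable and internally consistent.
QEMU_IMG=/path/to/parts/kvm/bin/qemu-img       # same binary as ${kvm:location}/bin/qemu-img
BACKUP_PATH=/path/to/srv/backup/virtual.qcow2  # stands for ${:backup-disk-path}
"$QEMU_IMG" info "$BACKUP_PATH"
"$QEMU_IMG" check "$BACKUP_PATH"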
#!/bin/bash
DISK_PATH=${:disk-path}
BACKUP_PATH=${:backup-disk-path}
# TODO: Use rdiff
rm $DISK_PATH && \
cp $BACKUP_PATH $DISK_PATH
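The TODO above hints at transferring only the changed blocks instead of copying the whole image back. One possible shape of that, sketched with the librsync rdiff tool (this is not what the recipe currently does; file names are illustrative):
#!/bin/sh
# Hypothetical delta-based restore: sign the current disk, compute a delta against the
# backup, then patch the disk, so only differing blocks have to be transferred.
rdiff signature virtual.qcow2 virtual.sig
rdiff delta virtual.sig backup/virtual.qcow2 virtual.delta
rdiff patch virtual.qcow2 virtual.delta virtual.restored.qcow2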
...@@ -68,9 +68,8 @@ bridge = !!BRIDGE_NAME!!
interface = lxc$${slap-network-information:network-interface} interface = lxc$${slap-network-information:network-interface}
[passwd] [passwd]
recipe = slapos.cookbook:pwgen recipe = slapos.cookbook:generate.password
file = $${buildout:directory}/.password storage-path = $${buildout:directory}/.password
pwgen-binary = ${pwgen:location}/bin/pwgen
[shellinabox] [shellinabox]
recipe = slapos.cookbook:shellinabox recipe = slapos.cookbook:shellinabox
...@@ -79,7 +78,7 @@ port = 8080 ...@@ -79,7 +78,7 @@ port = 8080
shell = ${lxc:location}/bin/lxc-console -n $${uuid:uuid} shell = ${lxc:location}/bin/lxc-console -n $${uuid:uuid}
wrapper = $${rootdirectory:bin}/shellinaboxd_raw wrapper = $${rootdirectory:bin}/shellinaboxd_raw
shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd
password = $${passwd:password} password = $${passwd:passwd}
directory = $${buildout:directory}/ directory = $${buildout:directory}/
login-shell = $${rootdirectory:bin}/login login-shell = $${rootdirectory:bin}/login
certificate-directory = $${directory:shellinabox} certificate-directory = $${directory:shellinabox}
......
...@@ -10,7 +10,6 @@ extends = ...@@ -10,7 +10,6 @@ extends =
../../component/xz-utils/buildout.cfg ../../component/xz-utils/buildout.cfg
../../component/tar/buildout.cfg ../../component/tar/buildout.cfg
../../component/shellinabox/buildout.cfg ../../component/shellinabox/buildout.cfg
../../component/pwgen/buildout.cfg
../../component/bash/buildout.cfg ../../component/bash/buildout.cfg
../../component/coreutils/buildout.cfg ../../component/coreutils/buildout.cfg
...@@ -23,7 +22,6 @@ parts = ...@@ -23,7 +22,6 @@ parts =
slapos-toolbox slapos-toolbox
lxc lxc
shellinabox shellinabox
pwgen
[template] [template]
recipe = slapos.recipe.template recipe = slapos.recipe.template
......
[buildout]
extends =
../../component/apache/buildout.cfg
../../component/bash/buildout.cfg
../../component/dcron/buildout.cfg
../../component/dropbear/buildout.cfg
../../component/gzip/buildout.cfg
../../component/logrotate/buildout.cfg
../../stack/slapos.cfg
parts =
instance-profile
slapos-cookbook
eggs
# Add hosting location of testing version of slapos.core
find-links +=
http://www.nexedi.org/static/packages/source/slapos.core-testing/
[environment]
recipe = collective.recipe.environment
[instance-profile]
# 3 advantages of using jinja2 for ALL templates:
# 1/ Explicit scope (pythonic style: we explicitly list what we want in the scope)
# 2/ No confusion between $ and $$ (simpler)
# 3/ We can explicitly define the path of executables (e.g.
#    in software, define httpd-executable = ${apache:location}/bin/httpd
#    and in instance, just use httpd-executable without bothering about where it
#    actually is; the location can change inside the component, from bin to sbin
#    for example). See for instance [httpd-wrapper] below, which consumes
#    {{ httpd_executable }} directly.
recipe = slapos.recipe.template:jinja2
template = ${:_profile_base_location_}/instance.cfg.jinja2
rendered = ${buildout:directory}/instance.cfg
#md5sum = 4861be4a581686feef9f9edea865d7ee
mode = 0644
context =
key bin_directory buildout:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
key path environment:PATH
raw httpd_executable ${apache:location}/bin/httpd
raw bash_executable ${bash:location}/bin/bash
raw dcron_executable ${dcron:location}/sbin/crond
raw dropbear_executable ${dropbear:location}/sbin/dropbear
raw dropbearkey_executable ${dropbear:location}/bin/dropbearkey
raw gzip_executable ${gzip:location}/bin/gzip
raw gunzip_executable ${gzip:location}/bin/gunzip
raw logrotate_executable ${logrotate:location}/sbin/logrotate
raw slapos_configuration_file_template_path ${slapos-configuration-file-template:target}
raw httpd_configuration_file_template_path ${httpd-configuration-file-template:target}
[slapos-configuration-file-template]
# Download the template of slapos.cfg
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/template/slapos.cfg.in
#md5sum =
target = ${buildout:directory}/slapos.cfg.in
mode = 0644
[httpd-configuration-file-template]
# Download the template of httpd.conf
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/template/httpd.conf.in
mode = 0644
#md5sum =
#target = ${
[eggs]
recipe = zc.recipe.egg
eggs =
collective.recipe.template
# Add slapos.libnetworkcache to path of slapos.core.
[slapos-cookbook]
eggs =
${lxml-python:egg}
slapos.cookbook
cliff
hexagonit.recipe.download
inotifyx
netaddr
netifaces
requests
slapos.core
supervisor
xml_marshaller
pytz
slapos.libnetworkcache
[buildout]
parts =
slapos-configuration-file
cron-entry-slapos
slapos-node-status-wrapper
slapos-node-format-wrapper-script
httpd-wrapper
cron
logrotate
logrotate-entry-httpd
logrotate-entry-slapos
sshkeys-dropbear
dropbear-server-add-authorized-key
sshkeys-authority
publish-connection-informations
dropbear-promise
httpd-promise
slapos-promise
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
[instance-parameter]
recipe = slapos.cookbook:slapconfiguration
computer = ${slap_connection:computer_id}
partition = ${slap_connection:partition_id}
url = ${slap_connection:server_url}
key = ${slap_connection:key_file}
cert = ${slap_connection:cert_file}
configuration.master-url = https://slap.vifib.com
configuration.authorized-key =
# Create all needed directories
[directory]
recipe = slapos.cookbook:mkdirectory
mode = 0750
etc = ${buildout:directory}/etc/
var = ${buildout:directory}/var/
srv = ${buildout:directory}/srv/
bin = ${buildout:directory}/bin/
sshkeys = ${:srv}/sshkeys
service = ${:etc}/service/
script = ${:etc}/run/
ssh = ${:etc}/ssh/
log = ${:var}/log/
run = ${:var}/run/
backup = ${:srv}/backup/
promises = ${:etc}/promise/
slapos-partitions-certificate-repository = ${:var}/pki
software-root = ${:srv}/slapos-software
instance-root = ${:srv}/slapos-instance
slapos-log = ${:log}/slapos
{% for i in range(0,10) %}
slappart{{i}} = ${:instance-root}/slappart{{i}}
{% endfor %}
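# (the loop above expands to slappart0 ... slappart9, one directory per future SlapOS partition)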
cron-entries = ${:etc}/cron.d
crontabs = ${:etc}/crontabs
cronstamps = ${:etc}/cronstamps
logrotate-entries = ${:etc}/logrotate.d
logrotate-backup = ${:backup}/logrotate
httpd-log = ${:log}/httpd
########
# Deploy slapos.cfg, computer certificates and slapos node wrapper
########
[slapos-computer-certificate-file]
recipe = collective.recipe.template
input = inline:${instance-parameter:configuration.computer-certificate}
output = ${directory:var}/slapos-computer.crt
[slapos-computer-key-file]
recipe = collective.recipe.template
input = inline:${instance-parameter:configuration.computer-key}
output = ${directory:var}/slapos-computer.key
[computer-definition-file]
recipe = collective.recipe.template
input = inline:
[computer]
{% for i in range(0,10|int) %}
[partition_{{i}}]
address = ${instance-parameter:ipv4-random}/255.255.255.0 ${instance-parameter:ipv6-random}/64
pathname = slappart{{i}}
user = dummy
network_interface = dummy
{% endfor %}
output = ${directory:etc}/slapos-computer-definition.cfg
[slapos-configuration-file]
recipe = slapos.recipe.template
url = {{ slapos_configuration_file_template_path }}
output = ${directory:etc}/slapos.cfg
#md5sum = 4861be4a581686feef9f9edea865d7ee
software-root = ${directory:software-root}
instance-root = ${directory:instance-root}
master-url = ${instance-parameter:configuration.master-url}
computer-id = ${instance-parameter:configuration.computer-id}
# XXX should be a parameter
partition-amount = 10
computer-definition-file = ${computer-definition-file:output}
computer-xml = ${directory:var}/slapos.xml
computer-key-file = ${slapos-computer-key-file:output}
computer-certificate-file = ${slapos-computer-certificate-file:output}
certificate-repository-path = ${directory:slapos-partitions-certificate-repository}
[slapos-node-instance-wrapper]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/slapos node instance --cfg ${slapos-configuration-file:output} --pidfile ${directory:run}/slapos-instance.pid --logfile ${directory:slapos-log}/slapos-instance.log
wrapper-path = ${directory:bin}/slapos-node-instance
parameters-extra = true
[slapos-node-software-wrapper]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/slapos node software --cfg ${slapos-configuration-file:output} --pidfile ${directory:run}/slapos-software.pid --logfile ${directory:slapos-log}/slapos-software.log
wrapper-path = ${directory:bin}/slapos-node-software
parameters-extra = true
[slapos-node-report-wrapper]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/slapos node report --cfg ${slapos-configuration-file:output} --pidfile ${directory:run}/slapos-report.pid --logfile ${directory:slapos-log}/slapos-report.log
wrapper-path = ${directory:bin}/slapos-node-report
parameters-extra = true
[slapos-node-status-wrapper]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/slapos node status --cfg ${slapos-configuration-file:output}
wrapper-path = ${directory:bin}/slapos-node-status
parameters-extra = true
[slapos-node-format-wrapper]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/slapos node format --cfg ${slapos-configuration-file:output} --logfile=${directory:slapos-log}/slapos-node-format.log --now
wrapper-path = ${directory:bin}/slapos-node-format
parameters-extra = true
[slapos-node-format-wrapper-script]
# Create a wrapper of the wrapper in etc/run
recipe = collective.recipe.template
input = inline:#!{{ bash_executable }}
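# "false" seeds $? with a non-zero status, so the loop below always runs the format
# wrapper at least once and keeps retrying it until it exits successfully.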
false
while [ ! $? -eq 0 ]; do
${slapos-node-format-wrapper:wrapper-path}
done
output = ${directory:script}/slapos-node-format
mode = 700
#########
# Deploy some http server to see logs online
#########
# XXX could it be something lighter?
[httpd-configuration-file]
recipe = slapos.recipe.template
url = {{ httpd_configuration_file_template_path }}
output = ${directory:etc}/httpd.conf
# md5sum =
listening-ip = ${instance-parameter:ipv6-random}
listening-port = 8080
htdocs = ${directory:log}
pid-file = ${directory:run}/httpd.pid
access-log = ${directory:httpd-log}/access-log
error-log = ${directory:httpd-log}/error-log
document-root = ${directory:log}
# XXX logrotate for httpd
[httpd-wrapper]
recipe = slapos.cookbook:wrapper
apache-executable = {{ httpd_executable }}
command-line = ${:apache-executable} -f ${httpd-configuration-file:output} -DFOREGROUND
wrapper-path = ${directory:service}/httpd
# generated parameter containing the URL for other sections to use
url = http://[${httpd-configuration-file:listening-ip}]:${httpd-configuration-file:listening-port}/
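# Illustrative manual check of the log viewer (not part of the profile): the IPv6
# literal needs curl's -g option, e.g.
#   curl -g "http://[<partition-ipv6>]:8080/"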
#########
# Deploy logrotate
#########
[logrotate]
recipe = slapos.cookbook:logrotate
# Binaries
logrotate-binary = {{ logrotate_executable }}
gzip-binary = {{ gzip_executable }}
gunzip-binary = {{ gunzip_executable }}
# Directories
wrapper = ${directory:bin}/logrotate
conf = ${directory:etc}/logrotate.conf
logrotate-entries = ${directory:logrotate-entries}
backup = ${directory:logrotate-backup}
state-file = ${directory:srv}/logrotate.status
[logrotate-entry-httpd]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = httpd
log = ${httpd-configuration-file:access-log} ${httpd-configuration-file:error-log}
frequency = daily
rotate-num = 30
post = {{ bin_directory }}/killpidfromfile ${httpd-configuration-file:pid-file} SIGUSR1
sharedscripts = true
notifempty = true
create = true
[logrotate-entry-slapos]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = slapos
log = ${directory:slapos-log}/*.log
frequency = daily
rotate-num = 30
#post = {{ bin_directory }}/killpidfromfile ${nginx-configuration:pid-file} SIGUSR1
sharedscripts = true
notifempty = true
create = true
###########
# Deploy cron and configure it
###########
[cron-simplelogger]
recipe = slapos.cookbook:simplelogger
wrapper = ${directory:bin}/cron_simplelogger
log = ${directory:log}/crond.log
[cron]
recipe = slapos.cookbook:cron
dcrond-binary = {{ dcron_executable }}
cron-entries = ${directory:cron-entries}
crontabs = ${directory:crontabs}
cronstamps = ${directory:cronstamps}
catcher = ${cron-simplelogger:wrapper}
binary = ${directory:service}/crond
[cron-entry-slapos]
recipe = collective.recipe.template
# Add the current PATH to the environment, otherwise gcc is not able to find its own cc1.
# We don't add it at the top of the script, because dcron disallows it.
# XXX: maybe it would work if we took PATH from the instance, not the software.
input = inline:
* * * * * PATH={{ path }} ${slapos-node-instance-wrapper:wrapper-path} > /dev/null 2>&1
* * * * * PATH={{ path }} ${slapos-node-software-wrapper:wrapper-path} > /dev/null 2>&1
* * * * * PATH={{ path }} ${slapos-node-report-wrapper:wrapper-path} > /dev/null 2>&1
output = ${directory:cron-entries}/slapos
[cron-entry-logrotate]
<= cron
recipe = slapos.cookbook:cron.d
name = logrotate
frequency = 0 0 * * *
command = ${logrotate:wrapper}
# XXX what to do for slapformat?
#########
# Deploy dropbear (minimalist SSH server)
#########
[sshkeys-directory]
recipe = slapos.cookbook:mkdirectory
requests = ${directory:sshkeys}/requests/
keys = ${directory:sshkeys}/keys/
[sshkeys-authority]
recipe = slapos.cookbook:sshkeys_authority
request-directory = ${sshkeys-directory:requests}
keys-directory = ${sshkeys-directory:keys}
wrapper = ${directory:service}/sshkeys_authority
keygen-binary = {{ dropbearkey_executable }}
[dropbear-server]
recipe = slapos.cookbook:dropbear
host = ${instance-parameter:ipv6-random}
port = 2222
home = ${directory:ssh}
wrapper = ${directory:bin}/raw_sshd
shell = {{ bash_executable }}
rsa-keyfile = ${directory:ssh}/server_key.rsa
dropbear-binary = {{ dropbear_executable }}
[sshkeys-dropbear]
<= sshkeys-authority
recipe = slapos.cookbook:sshkeys_authority.request
name = dropbear
type = rsa
executable = ${dropbear-server:wrapper}
public-key = ${dropbear-server:rsa-keyfile}.pub
private-key = ${dropbear-server:rsa-keyfile}
wrapper = ${directory:service}/sshd
[dropbear-server-add-authorized-key]
<= dropbear-server
recipe = slapos.cookbook:dropbear.add_authorized_key
key = ${instance-parameter:configuration.authorized-key}
#########
# Send information to SlapOS Master
#########
[publish-connection-informations]
recipe = slapos.cookbook:publish
log-viewer-url = http://[${httpd-configuration-file:listening-ip}]:${httpd-configuration-file:listening-port}
ssh_command = ssh ${dropbear-server:host} -p ${dropbear-server:port}
#########
# Deploy promises scripts
#########
[dropbear-promise]
recipe = slapos.cookbook:check_port_listening
path = ${directory:promises}/dropbear
hostname = ${dropbear-server:host}
port = ${dropbear-server:port}
[httpd-promise]
recipe = slapos.cookbook:check_port_listening
path = ${directory:promises}/httpd
hostname = ${httpd-configuration-file:listening-ip}
port = ${httpd-configuration-file:listening-port}
[slapos-promise]
recipe = collective.recipe.template
input = inline:#!{{ bash_executable }}
{{ bin_directory }}/slapgrid-supervisorctl ${slapos-configuration-file:output} status watchdog | grep RUNNING
output = ${directory:promises}/slapos
mode = 0700
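# Promises like the one above are plain executables: slapgrid only looks at the exit
# code. They can also be exercised by hand from inside the partition, e.g.
#   etc/promise/slapos && echo OK || echo FAILED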
# Production profile of slapos-in-partition
# Exactly the same as common.cfg, but:
# 1/ Use a defined set of Python eggs instead of using the latest available
# ones from Pypi, to ensure stability;
# 2/ Define list of trusted certificates for the cache.
[buildout]
extends = common.cfg
[networkcache]
# signature certificates of the following uploaders.
# Romain Courteaud
# Sebastien Robin
# Kazuhiko Shiozaki
# Cedric de Saint Martin
# Yingjie Xu
# Gabriel Monnerat
# Łukasz Nowak
# Test Agent (Automatic update from tests)
# Aurélien Calonne
signature-certificate-list =
-----BEGIN CERTIFICATE-----
MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE
CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5
MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl
ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw
boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX
Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA
ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX
mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC
q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g
QUUGLQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8jCCAVugAwIBAgIJAPu2zchZ2BxoMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV
BAMMB3RzeGRldjMwHhcNMTExMDE0MTIxNjIzWhcNMTIxMDEzMTIxNjIzWjASMRAw
DgYDVQQDDAd0c3hkZXYzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrPbh+
YGmo6mWmhVb1vTqX0BbeU0jCTB8TK3i6ep3tzSw2rkUGSx3niXn9LNTFNcIn3MZN
XHqbb4AS2Zxyk/2tr3939qqOrS4YRCtXBwTCuFY6r+a7pZsjiTNddPsEhuj4lEnR
L8Ax5mmzoi9nE+hiPSwqjRwWRU1+182rzXmN4QIDAQABo1AwTjAdBgNVHQ4EFgQU
/4XXREzqBbBNJvX5gU8tLWxZaeQwHwYDVR0jBBgwFoAU/4XXREzqBbBNJvX5gU8t
LWxZaeQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQA07q/rKoE7fAda
FED57/SR00OvY9wLlFEF2QJ5OLu+O33YUXDDbGpfUSF9R8l0g9dix1JbWK9nQ6Yd
R/KCo6D0sw0ZgeQv1aUXbl/xJ9k4jlTxmWbPeiiPZEqU1W9wN5lkGuLxV4CEGTKU
hJA/yXa1wbwIPGvX3tVKdOEWPRXZLg==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB7jCCAVegAwIBAgIJAJWA0jQ4o9DGMA0GCSqGSIb3DQEBBQUAMA8xDTALBgNV
BAMMBHg2MXMwIBcNMTExMTI0MTAyNDQzWhgPMjExMTEwMzExMDI0NDNaMA8xDTAL
BgNVBAMMBHg2MXMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANdJNiFsRlkH
vq2kHP2zdxEyzPAWZH3CQ3Myb3F8hERXTIFSUqntPXDKXDb7Y/laqjMXdj+vptKk
3Q36J+8VnJbSwjGwmEG6tym9qMSGIPPNw1JXY1R29eF3o4aj21o7DHAkhuNc5Tso
67fUSKgvyVnyH4G6ShQUAtghPaAwS0KvAgMBAAGjUDBOMB0GA1UdDgQWBBSjxFUE
RfnTvABRLAa34Ytkhz5vPzAfBgNVHSMEGDAWgBSjxFUERfnTvABRLAa34Ytkhz5v
PzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFLDS7zNhlrQYSQO5KIj
z2RJe3fj4rLPklo3TmP5KLvendG+LErE2cbKPqnhQ2oVoj6u9tWVwo/g03PMrrnL
KrDm39slYD/1KoE5kB4l/p6KVOdeJ4I6xcgu9rnkqqHzDwI4v7e8/D3WZbpiFUsY
vaZhjNYKWQf79l6zXfOvphzJ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT
MREwDwYDVQQDDAhDT01QLTIzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
wi/3Z8W9pUiegUXIk/AiFDQ0UJ4JFAwjqr+HSRUirlUsHHT+8DzH/hfcTDX1I5BB
D1ADk+ydXjMm3OZrQcXjn29OUfM5C+g+oqeMnYQImN0DDQIOcUyr7AJc4xhvuXQ1
P2pJ5NOd3tbd0kexETa1LVhR6EgBC25LyRBRae76qosCAwEAAaNQME4wHQYDVR0O
BBYEFMDmW9aFy1sKTfCpcRkYnP6zUd1cMB8GA1UdIwQYMBaAFMDmW9aFy1sKTfCp
cRkYnP6zUd1cMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAskbFizHr
b6d3iIyN+wffxz/V9epbKIZVEGJd/6LrTdLiUfJPec7FaxVCWNyKBlCpINBM7cEV
Gn9t8mdVQflNqOlAMkOlUv1ZugCt9rXYQOV7rrEYJBWirn43BOMn9Flp2nibblby
If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAIlBksrZVkK8MA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMzU3MCAXDTEyMDEyNjEwNTUyOFoYDzIxMTIwMTAyMTA1NTI4WjAT
MREwDwYDVQQDDAhDT01QLTM1NzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ts+iGUwi44vtIfwXR8DCnLtHV4ydl0YTK2joJflj0/Ws7mz5BYkxIU4fea/6+VF3
i11nwBgYgxQyjNztgc9u9O71k1W5tU95yO7U7bFdYd5uxYA9/22fjObaTQoC4Nc9
mTu6r/VHyJ1yRsunBZXvnk/XaKp7gGE9vNEyJvPn2bkCAwEAAaNQME4wHQYDVR0O
BBYEFKuGIYu8+6aEkTVg62BRYaD11PILMB8GA1UdIwQYMBaAFKuGIYu8+6aEkTVg
62BRYaD11PILMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAMoTRpBxK
YLEZJbofF7gSrRIcrlUJYXfTfw1QUBOKkGFFDsiJpEg4y5pUk1s5Jq9K3SDzNq/W
it1oYjOhuGg3al8OOeKFrU6nvNTF1BAvJCl0tr3POai5yXyN5jlK/zPfypmQYxE+
TaqQSGBJPVXYt6lrq/PRD9ciZgKLOwEqK8w=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAPHoWu90gbsgMA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCXZpZmlibm9kZTAeFw0xMjAzMTkyMzIwNTVaFw0xMzAzMTkyMzIwNTVaMBQx
EjAQBgNVBAMMCXZpZmlibm9kZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ozBijpO8PS5RTeKTzA90vi9ezvv4vVjNaguqT4UwP9+O1+i6yq1Y2W5zZxw/Klbn
oudyNzie3/wqs9VfPmcyU9ajFzBv/Tobm3obmOqBN0GSYs5fyGw+O9G3//6ZEhf0
NinwdKmrRX+d0P5bHewadZWIvlmOupcnVJmkks852BECAwEAAaNQME4wHQYDVR0O
BBYEFF9EtgfZZs8L2ZxBJxSiY6eTsTEwMB8GA1UdIwQYMBaAFF9EtgfZZs8L2ZxB
JxSiY6eTsTEwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAc43YTfc6
baSemaMAc/jz8LNLhRE5dLfLOcRSoHda8y0lOrfe4lHT6yP5l8uyWAzLW+g6s3DA
Yme/bhX0g51BmI6gjKJo5DoPtiXk/Y9lxwD3p7PWi+RhN+AZQ5rpo8UfwnnN059n
yDuimQfvJjBFMVrdn9iP6SfMjxKaGk6gVmI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAMNZBmoIOXPBMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMTMyMCAXDTEyMDUwMjEyMDQyNloYDzIxMTIwNDA4MTIwNDI2WjAT
MREwDwYDVQQDDAhDT01QLTEzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
6peZQt1sAmMAmSG9BVxxcXm8x15kE9iAplmANYNQ7z2YO57c10jDtlYlwVfi/rct
xNUOKQtc8UQtV/fJWP0QT0GITdRz5X/TkWiojiFgkopza9/b1hXs5rltYByUGLhg
7JZ9dZGBihzPfn6U8ESAKiJzQP8Hyz/o81FPfuHCftsCAwEAAaNQME4wHQYDVR0O
BBYEFNuxsc77Z6/JSKPoyloHNm9zF9yqMB8GA1UdIwQYMBaAFNuxsc77Z6/JSKPo
yloHNm9zF9yqMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAl4hBaJy1
cgiNV2+Z5oNTrHgmzWvSY4duECOTBxeuIOnhql3vLlaQmo0p8Z4c13kTZq2s3nhd
Loe5mIHsjRVKvzB6SvIaFUYq/EzmHnqNdpIGkT/Mj7r/iUs61btTcGUCLsUiUeci
Vd0Ozh79JSRpkrdI8R/NRQ2XPHAo+29TT70=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtNzcyMCAXDTEyMDgxMDE1NDI1MVoYDzIxMTIwNzE3MTU0MjUxWjAT
MREwDwYDVQQDDAhDT01QLTc3MjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
o7aipd6MbnuGDeR1UJUjuMLQUariAyQ2l2ZDS6TfOwjHiPw/mhzkielgk73kqN7A
sUREx41eTcYCXzTq3WP3xCLE4LxLg1eIhd4nwNHj8H18xR9aP0AGjo4UFl5BOMa1
mwoyBt3VtfGtUmb8whpeJgHhqrPPxLoON+i6fIbXDaUCAwEAAaNQME4wHQYDVR0O
BBYEFEfjy3OopT2lOksKmKBNHTJE2hFlMB8GA1UdIwQYMBaAFEfjy3OopT2lOksK
mKBNHTJE2hFlMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAaNRx6YN2
M/p3R8/xS6zvH1EqJ3FFD7XeAQ52WuQnKSREzuw0dsw12ClxjcHiQEFioyTiTtjs
5pW18Ry5Ie7iFK4cQMerZwWPxBodEbAteYlRsI6kePV7Gf735Y1RpuN8qZ2sYL6e
x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB+DCCAWGgAwIBAgIJAKGd0vpks6T/MA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCUNPTVAtMTU4NDAgFw0xMzA2MjAxMjE5MjBaGA8yMTEzMDUyNzEyMTkyMFow
FDESMBAGA1UEAwwJQ09NUC0xNTg0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB
gQDZTH9etPUC+wMZQ3UIiOwyyCfHsJ+7duCFYjuo1uZrhtDt/fp8qb8qK9ob+df3
EEYgA0IgI2j/9jNUEnKbc5+OrfKznzXjrlrH7zU8lKBVNCLzQuqBKRNajZ+UvO8R
nlqK2jZCXP/p3HXDYUTEwIR5W3tVCEn/Vda4upTLcPVE5wIDAQABo1AwTjAdBgNV
HQ4EFgQU7KXaNDheQWoy5uOU01tn1M5vNkEwHwYDVR0jBBgwFoAU7KXaNDheQWoy
5uOU01tn1M5vNkEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQASmqCU
Znbvu6izdicvjuE3aKnBa7G++Fdp2bdne5VCwVbVLYCQWatB+n4crKqGdnVply/u
+uZ16u1DbO9rYoKgWqjLk1GfiLw5v86pd5+wZd5I9QJ0/Sbz2vZk5S4ciMIGwArc
m711+GzlW5xe6GyH9SZaGOPAdUbI6JTDwLzEgA==
-----END CERTIFICATE-----
[versions]
slapos.libnetworkcache = 0.13.4
Jinja2 = 2.7.1
MarkupSafe = 0.18
Pygments = 1.6
Werkzeug = 0.9.4
buildout-versions = 1.7
cliff = 1.4.4
cmd2 = 0.6.5.1
collective.recipe.environment = 0.2.0
collective.recipe.template = 1.10
inotifyx = 0.2.0-1
itsdangerous = 0.23
lxml = 3.2.3
meld3 = 0.6.10
netaddr = 0.7.10
netifaces = 0.8-1
pytz = 2013d
requests = 1.2.3
slapos.cookbook = 0.83.1
slapos.core = 1.0.0rc5
slapos.recipe.cmmi = 0.2
slapos.recipe.download = 1.0.dev-r4053
slapos.recipe.template = 2.5
supervisor = 3.0
xml-marshaller = 0.9.7
# Required by:
# slapos.core==1.0.0rc5
Flask = 0.10.1
# Required by:
# slapos.core==1.0.0rc5
bpython = 0.12
# Required by:
# slapos.core==1.0.0rc5
ipython = 1.0.0
# Required by:
# slapos.cookbook==0.83.1
lock-file = 2.0
# Required by:
# slapos.core==1.0.0rc5
zope.interface = 4.0.5
# Apache static configuration
# Automatically generated
# Basic server configuration
PidFile "${:pid-file}"
Listen [${:listening-ip}]:${:listening-port}
ServerAdmin someone@email
DefaultType text/plain
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
DocumentRoot "${:document-root}"
# Log configuration
ErrorLog "${:error-log}"
LogLevel warn
LogFormat "%h %{REMOTE_USER}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %{REMOTE_USER}i %l %u %t \"%r\" %>s %b" common
CustomLog "${:access-log}" common
# Allow cross-origin requests (CORS)
Header set Access-Control-Allow-Origin "*"
# List of modules
LoadModule unixd_module modules/mod_unixd.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule mime_module modules/mod_mime.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule headers_module modules/mod_headers.so
LoadModule dir_module modules/mod_dir.so
LoadModule alias_module modules/mod_alias.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule autoindex_module modules/mod_autoindex.so
<Directory />
Options Indexes FollowSymLinks
IndexOptions FancyIndexing
order allow,deny
Allow from All
</Directory>
[slapos]
software_root = ${:software-root}
instance_root = ${:instance-root}
master_url = ${:master-url}
key_file = ${:computer-key-file}
cert_file = ${:computer-certificate-file}
certificate_repository_path = ${:certificate-repository-path}
computer_id = ${:computer-id}
maximal_delay = 0
# Don't check if we are using root
root_check = false
[slapformat]
alter_user = false
alter_network = false
input_definition_file = ${:computer-definition-file}
computer_xml = ${:computer-xml}
partition_amount = ${:partition-amount}
create_tap = false
...@@ -7,6 +7,7 @@ extends = ...@@ -7,6 +7,7 @@ extends =
../../component/dropbear/buildout.cfg ../../component/dropbear/buildout.cfg
../../component/git/buildout.cfg ../../component/git/buildout.cfg
../../component/lxml-python/buildout.cfg ../../component/lxml-python/buildout.cfg
../../component/nginx/buildout.cfg
../../component/rsync/buildout.cfg ../../component/rsync/buildout.cfg
../../stack/flask.cfg ../../stack/flask.cfg
../../stack/shacache-client.cfg ../../stack/shacache-client.cfg
...@@ -14,61 +15,148 @@ extends = ...@@ -14,61 +15,148 @@ extends =
../../stack/slapos.cfg ../../stack/slapos.cfg
parts = parts =
slapos.cookbook-repository
rdiff-backup rdiff-backup
template template
eggs eggs
nginx
simple-proxy
node-frontend-template
http-proxy
npm-modules
instance-runner-import instance-runner-import
instance-runner-export instance-runner-export
slapos-cookbook
####################
## Node JS proxy
####################
[simple-proxy]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/simple-proxy.js
location = ${buildout:parts-directory}/${:_buildout_section_name_}
md5sum = 86e2231b3f65587b56d9be63e21a4e05
filename = simple-proxy.js
mode = 0644
[node-frontend-template]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/node-frontend.in
location = ${buildout:parts-directory}/${:_buildout_section_name_}
filename = node-frontend.in
md5sum = 72904152860dddb30ca936dac5bbf4cd
mode = 0644
[http-proxy]
# https://github.com/nodejitsu/node-http-proxy
recipe = slapos.recipe.build:download-unpacked
#XXX-Cedric : use upstream when merged
url = https://github.com/desaintmartin/node-http-proxy/archive/20120621.zip
md5sum = 621e5fca448cbea137c5d847d780d84d
[npm-modules]
recipe = plone.recipe.command
destination = ${buildout:parts-directory}/${:_buildout_section_name_}
location = ${buildout:parts-directory}/${:_buildout_section_name_}
command =
export HOME=${:location};
rm -fr ${:destination} &&
mkdir -p ${:destination} &&
cd ${:destination} &&
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install colors@0.6.0-1 &&
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install socket.io@0.8.7 &&
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install socket.io-client@0.8.7 &&
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install optimist@0.3.1 &&
${nodejs:location}/bin/node ${nodejs:location}/bin/npm install pkginfo@0.2.3
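# Quick manual check that the pinned modules landed where the launcher's NODE_PATH
# points (illustrative only, run from the software release directory):
#   ls parts/npm-modules/node_modules
#   # expected: colors  optimist  pkginfo  socket.io  socket.io-client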
# slapos-cookbook
[template] [template]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg url = ${:_profile_base_location_}/instance.cfg
output = ${buildout:directory}/template.cfg output = ${buildout:directory}/template.cfg
md5sum = 92a2f3bcd5ff79e3b61ca4a8bacb73ec
mode = 0644 mode = 0644
#md5sum = 5307e4200f044ae57b504ad68444491c
[template-runner] [template-runner]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-runner.cfg url = ${:_profile_base_location_}/instance-runner.cfg
output = ${buildout:directory}/template-runner.cfg output = ${buildout:directory}/template-runner.cfg
#md5sum = 91d6550c43b7a43a999724af4650ae40 md5sum = bcd1ee4dd126d2c6e9461f7753fc83b7
mode = 0644
[instance-resilient]
recipe = slapos.recipe.template:jinja2
template = ${:_profile_base_location_}/instance-resilient.cfg.jinja2
rendered = ${buildout:directory}/instance-resilient.cfg
context = key buildout buildout:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
import-list = file parts template-parts:destination
file replicated template-replicated:destination
mode = 0644 mode = 0644
[instance-runner-import] [instance-runner-import]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-runner-import.cfg.in url = ${:_profile_base_location_}/instance-runner-import.cfg.in
output = ${buildout:directory}/instance-runner-import.cfg output = ${buildout:directory}/instance-runner-import.cfg
md5sum = f16cb60bb16632e652bea69cd5cdd9b7
mode = 0644 mode = 0644
[instance-runner-export] [instance-runner-export]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-runner-export.cfg.in url = ${:_profile_base_location_}/instance-runner-export.cfg.in
output = ${buildout:directory}/instance-runner-export.cfg output = ${buildout:directory}/instance-runner-export.cfg
md5sum = 7e71622c09271790b5cef21c8613b8ac
mode = 0644
[template-resilient]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/instance-resilient.cfg.jinja2
md5sum = 2562a6ac6893cc71a7328d7064aff599
filename = instance-resilient.cfg.jinja2
mode = 0644 mode = 0644
[template-resilient-test]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/instance-resilient-test.cfg.jinja2
#md5sum = 0ee2cea5239278a8c1572d7a04798fdc
filename = instance-resilient-test.cfg.jinja2
mode = 0644
[template_nginx_conf]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/nginx_conf.in
md5sum = 09b7677dfc6b23c1f58e67fd06a7625e
filename = nginx_conf.in
mode = 0644
[template_launcher]
recipe = slapos.recipe.download
url = ${:_profile_base_location_}/launcher.in
md5sum = c7f8b6e9ae84aa94686a9cbaaa3dd693
filename = launcher.in
mode = 0644
location = ${buildout:parts-directory}/${:_buildout_section_name_}
[eggs] [eggs]
recipe = z3c.recipe.scripts recipe = z3c.recipe.scripts
eggs = eggs =
${lxml-python:egg} ${lxml-python:egg}
cns.recipe.symlink
erp5.util
hexagonit.recipe.download
inotifyx
lock-file
netaddr
slapos.cookbook
slapos.libnetworkcache slapos.libnetworkcache
slapos.toolbox[flask_auth] slapos.toolbox[flask_auth]
slapos.core slapos.core
cns.recipe.symlink xml_marshaller
pytz
# Add slapos.libnetworkcache to path of slapos.core so that slaprunner can build SRs using cache
[slapos-cookbook]
eggs =
${lxml-python:egg}
slapos.cookbook
cliff
hexagonit.recipe.download
inotifyx
netaddr
netifaces
requests
slapos.core
supervisor
xml_marshaller
pytz
slapos.libnetworkcache
...@@ -10,29 +10,37 @@ extends = common.cfg ...@@ -10,29 +10,37 @@ extends = common.cfg
parts += parts +=
slapos.cookbook-repository slapos.cookbook-repository
slapos.toolbox-repository
erp5.util-repository
# slapos.toolbox-repository check-recipe
# slapos.core-repository # slapos.core-repository
# check-recipe
develop = develop =
${:parts-directory}/slapos.toolbox-repository
${:parts-directory}/slapos.cookbook-repository ${:parts-directory}/slapos.cookbook-repository
# ${:parts-directory}/slapos.toolbox-repository ${:parts-directory}/erp5.util-repository
# ${:parts-directory}/slapos.core-repository # ${:parts-directory}/slapos.core-repository
#[slapos.toolbox-repository] [slapos.toolbox-repository]
#recipe = slapos.recipe.build:gitclone recipe = slapos.recipe.build:gitclone
#repository = http://git.erp5.org/repos/slapos.toolbox.git repository = http://git.erp5.org/repos/slapos.toolbox.git
#branch = slaprunner-resiliency branch = kvmresiliency
#git-executable = ${git:location}/bin/git git-executable = ${git:location}/bin/git
[slapos.cookbook-repository] [slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.git repository = http://git.erp5.org/repos/slapos.git
branch = slaprunner-resiliency branch = slaprunner
git-executable = ${git:location}/bin/git
# Used for resiliency tests only
[erp5.util-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/erp5.git
#branch = scalability-master2
revision = f10da882ab5e1dc03a812f3d0e7390dc8da2b59
git-executable = ${git:location}/bin/git git-executable = ${git:location}/bin/git
#[slapos.core-repository] #[slapos.core-repository]
...@@ -47,6 +55,7 @@ stop-on-error = true ...@@ -47,6 +55,7 @@ stop-on-error = true
update-command = ${:command} update-command = ${:command}
command = command =
grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link && grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link &&
grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link grep parts ${buildout:develop-eggs-directory}/slapos.toolbox.egg-link &&
grep parts ${buildout:develop-eggs-directory}/erp5.util.egg-link
# grep parts ${buildout:develop-eggs-directory}/slapos.core.egg-link && # grep parts ${buildout:develop-eggs-directory}/slapos.core.egg-link &&
[buildout]
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
parts =
deploy-resiliency-test
request-resilient-instance
deploy-standalone-resiliency-test
[directory]
recipe = slapos.cookbook:mkdirectory
home = ${buildout:directory}
etc = ${:home}/etc/
var = ${:home}/var/
srv = ${:home}/srv/
bin = ${:home}/bin/
tmp = ${:home}/tmp/
log = ${:var}/log/
services = ${:etc}/service/
scripts = ${:etc}/run/
[deploy-resiliency-test]
recipe = slapos.cookbook:wrapper
testnode-parameters = --test-result-path={{ slapparameter_dict.get('test-result-path') }} --revision={{ slapparameter_dict.get('test-suite-revision') }} --node-title={{ slapparameter_dict.get('scalability-launcher-title') }} --test-suite={{ slapparameter_dict.get('test-suite') }} --test-suite-master-url={{ slapparameter_dict.get('test-suite-master-url') }} --log-path=${directory:log}
test-parameters = server_url=${slap-connection:server-url} key_file=${slap-connection:key-file} cert_file=${slap-connection:cert-file} computer_id=${slap-connection:computer-id} partition_id=${slap-connection:partition-id} software=${slap-connection:software-release-url} namebase=runner slaprunner_rootinstance_name='${request-resilient-instance:name}'
command-line = {{ bin_directory }}/runResiliencyTest ${:testnode-parameters} ${:test-parameters}
wrapper-path = ${directory:scripts}/runResiliencyTestSuite
[deploy-standalone-resiliency-test]
# Used to manually run the slaprunner resiliency test if we don't have a running testnode.
recipe = slapos.cookbook:wrapper
test-suite-title = slaprunner
command-line = {{ bin_directory }}/runStandaloneResiliencyTest --test-suite-title=${:test-suite-title} ${deploy-resiliency-test:test-parameters}
wrapper-path = ${directory:bin}/runStandaloneResiliencyTestSuite
[request-resilient-instance]
<= slap-connection
recipe = slapos.cookbook:request
software-url = ${slap-connection:software-release-url}
software-type = resilient
name = Resilient Instance (Root Instance)
config = resiliency-backup-periodicity frontend-domain cloud9-frontend-domain
config-resiliency-backup-periodicity = *
# XXX hardcoded
config-frontend-domain = google.com
config-cloud9-frontend-domain = google.com
return = backend_url
[slap-parameter]
...@@ -13,30 +13,19 @@ parts += ...@@ -13,30 +13,19 @@ parts +=
{{ parts.replicate("runner", "3") }} {{ parts.replicate("runner", "3") }}
publish-connection-informations publish-connection-informations
{{ replicated.replicate("runner", "3", "runner-export", "runner-import") }} {{ replicated.replicate("runner", "3", "runner-export", "runner-import", slapparameter_dict=slapparameter_dict) }}
# Bubble up the parameters # Bubble up the parameters
[request-runner] [request-runner]
return = url ssh-public-key ssh-url notification-id ip backend_url url cloud9_url ssh_command password_recovery_code return = url ssh-public-key ssh-url notification-id ip backend_url url cloud9_url ssh_command password_recovery_code cloud9_backend_url
config = instance-amount debug domain number authorized-key notify ip-list namebase runner1-computer-guid pbs-runner1-computer-guid runner2-computer-guid pbs-runner2-computer-guid runner3-computer-guid pbs-runner3-computer-guid
# XXX Cedric LN Ugly hack, resilient stack and slaprunner stack sharing too much ssh sections
config-authorized-key = ${request-pbs-runner-1:connection-ssh-key} ${request-pbs-runner-2:connection-ssh-key} ${slap-parameter:authorized-key}
config-instance-amount = ${slap-parameter:instance-amount}
config-debug = ${slap-parameter:debug}
config-runner1-computer-guid = ${slap-parameter:runner1-computer-guid}
config-pbs-runner1-computer-guid = ${slap-parameter:pbs-runner1-computer-guid}
config-runner2-computer-guid = ${slap-parameter:runner2-computer-guid}
config-pbs-runner2-computer-guid = ${slap-parameter:pbs-runner2-computer-guid}
config-runner3-computer-guid = ${slap-parameter:runner3-computer-guid}
config-pbs-runner3-computer-guid = ${slap-parameter:pbs-runner3-computer-guid}
config-domain = ${slap-parameter:domain}
[publish-connection-informations] [publish-connection-informations]
recipe = slapos.cookbook:publish recipe = slapos.cookbook:publish
1_info = Set your password in slaprunner in order to access cloud9
backend_url = ${request-runner:connection-backend_url} backend_url = ${request-runner:connection-backend_url}
url = ${request-runner:connection-url} url = ${request-runner:connection-url}
cloud9_url = ${request-runner:connection-cloud9_url} cloud9_url = ${request-runner:connection-cloud9_url}
cloud9_backend_url = ${request-runner:connection-cloud9_backend_url}
ssh_command = ${request-runner:connection-ssh_command} ssh_command = ${request-runner:connection-ssh_command}
password_recovery_code = ${request-runner:connection-password_recovery_code} password_recovery_code = ${request-runner:connection-password_recovery_code}
......
...@@ -3,9 +3,27 @@ extends = ${template-runner:output} ...@@ -3,9 +3,27 @@ extends = ${template-runner:output}
${pbsready-export:output} ${pbsready-export:output}
parts += parts +=
urls nginx_conf
slaprunner nginx-launcher
cron-entry-backup cloud9
certificate-authority
ca-nginx
ca-node-frontend
slaprunner
test-runner
sshkeys-dropbear-runner
dropbear-server-add-authorized-key
sshkeys-authority
slaprunner-promise
slaprunner-frontend-promise
cloud9-promise
cloud9-frontend-promise
dropbear-promise
symlinks
node-frontend-promise
nginx-promise
urls
cron-entry-backup
[exporter] [exporter]
recipe = slapos.cookbook:slaprunner.export recipe = slapos.cookbook:slaprunner.export
...@@ -21,7 +39,8 @@ rsync-binary = ${rsync:location}/bin/rsync ...@@ -21,7 +39,8 @@ rsync-binary = ${rsync:location}/bin/rsync
[urls] [urls]
<= resilient-publish-connection-parameter <= resilient-publish-connection-parameter
backend_url = $${slaprunner:access-url} backend_url = $${slaprunner:access-url}
url = $${request-frontend:connection-site_url} url = https://$${request-frontend:connection-domain}
cloud9_url = $${cloud9:access-url} cloud9_backend_url = $${node-frontend:access-url}
cloud9_url = https://$${request-cloud9-frontend:connection-domain}
ssh_command = ssh $${dropbear-runner-server:host} -p $${dropbear-runner-server:port} ssh_command = ssh $${dropbear-runner-server:host} -p $${dropbear-runner-server:port}
password_recovery_code = $${recovery-code:passwd} password_recovery_code = $${recovery-code:passwd}
...@@ -2,11 +2,25 @@ ...@@ -2,11 +2,25 @@
extends = ${template-runner:output} extends = ${template-runner:output}
${pbsready-import:output} ${pbsready-import:output}
parts += parts +=
slaprunner nginx_conf
nginx-launcher
cloud9
certificate-authority
ca-nginx
ca-node-frontend
slaprunner
test-runner
sshkeys-dropbear-runner
dropbear-server-add-authorized-key
sshkeys-authority
slaprunner-promise
cloud9-promise
dropbear-promise
symlinks
nginx-promise
# have to repeat the next one, as it's not inherited from pbsready-import # have to repeat the next one, as it's not inherited from pbsready-import
import-on-notification import-on-notification
[importer] [importer]
recipe = slapos.cookbook:slaprunner.import recipe = slapos.cookbook:slaprunner.import
......
[buildout] [buildout]
parts = parts =
nginx_conf
nginx-launcher
cloud9 cloud9
certificate-authority
ca-nginx
ca-node-frontend
slaprunner slaprunner
test-runner test-runner
sshkeys-dropbear-runner sshkeys-dropbear-runner
...@@ -10,8 +15,12 @@ parts = ...@@ -10,8 +15,12 @@ parts =
slaprunner-promise slaprunner-promise
slaprunner-frontend-promise slaprunner-frontend-promise
cloud9-promise cloud9-promise
cloud9-frontend-promise
dropbear-promise dropbear-promise
symlinks symlinks
request-cloud9-frontend
node-frontend-promise
nginx-promise
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory} develop-eggs-directory = ${buildout:develop-eggs-directory}
...@@ -25,6 +34,7 @@ etc = $${buildout:directory}/etc/ ...@@ -25,6 +34,7 @@ etc = $${buildout:directory}/etc/
var = $${buildout:directory}/var/ var = $${buildout:directory}/var/
srv = $${buildout:directory}/srv/ srv = $${buildout:directory}/srv/
bin = $${buildout:directory}/bin/ bin = $${buildout:directory}/bin/
tmp = $${buildout:directory}/tmp/
sshkeys = $${:srv}/sshkeys sshkeys = $${:srv}/sshkeys
services = $${:etc}/service/ services = $${:etc}/service/
...@@ -35,6 +45,9 @@ run = $${:var}/run/ ...@@ -35,6 +45,9 @@ run = $${:var}/run/
backup = $${:srv}/backup/ backup = $${:srv}/backup/
promises = $${:etc}/promise/ promises = $${:etc}/promise/
test = $${:etc}/test/ test = $${:etc}/test/
nginx-data = $${directory:srv}/nginx
ca-dir = $${:srv}/ssl
[runnerdirectory] [runnerdirectory]
recipe = slapos.cookbook:mkdirectory recipe = slapos.cookbook:mkdirectory
...@@ -57,8 +70,8 @@ bytes = 4 ...@@ -57,8 +70,8 @@ bytes = 4
# Deploy cloud9 and slaprunner # Deploy cloud9 and slaprunner
[cloud9] [cloud9]
recipe = slapos.cookbook:cloud9 recipe = slapos.cookbook:cloud9
ip = $${slap-network-information:global-ipv6} ip = $${slap-network-information:local-ipv4}
port = 30000 port = 4443
wrapper = $${directory:services}/cloud9 wrapper = $${directory:services}/cloud9
working-directory = $${runnerdirectory:home} working-directory = $${runnerdirectory:home}
git-binary = ${git:location}/bin/git git-binary = ${git:location}/bin/git
...@@ -87,7 +100,7 @@ private_key = $${sshkeys-dropbear-runner:private-key} ...@@ -87,7 +100,7 @@ private_key = $${sshkeys-dropbear-runner:private-key}
ipv4 = $${slap-network-information:local-ipv4} ipv4 = $${slap-network-information:local-ipv4}
ipv6 = $${slap-network-information:global-ipv6} ipv6 = $${slap-network-information:global-ipv6}
proxy_port = 50000 proxy_port = 50000
runner_port = 50000 runner_port = 50005
partition-amount = $${slap-parameter:instance-amount} partition-amount = $${slap-parameter:instance-amount}
cloud9-url = $${cloud9:access-url} cloud9-url = $${cloud9:access-url}
wrapper = $${directory:services}/slaprunner wrapper = $${directory:services}/slaprunner
...@@ -145,31 +158,182 @@ wrapper = $${directory:services}/runner_sshd ...@@ -145,31 +158,182 @@ wrapper = $${directory:services}/runner_sshd
recipe = slapos.cookbook:dropbear.add_authorized_key recipe = slapos.cookbook:dropbear.add_authorized_key
key = $${slap-parameter:authorized-key} key = $${slap-parameter:authorized-key}
#---------------------
#--
#-- Set node frontend
[node-frontend]
launcher = $${directory:bin}/node-frontend
ip = $${slap-network-information:global-ipv6}
port = $${cloud9:port}
access-url = https://[$${:ip}]:$${:port}
[node-frontend-launcher]
recipe = slapos.recipe.template:jinja2
template = ${node-frontend-template:location}/${node-frontend-template:filename}
rendered = $${node-frontend:launcher}
mode = 700
context =
key ip node-frontend:ip
key port node-frontend:port
key key ca-node-frontend:key-file
key certificate ca-node-frontend:cert-file
key backend_ip nginx-frontend:local-ip
key backend_port nginx-frontend:port
raw shell_path ${bash:location}/bin/bash
raw node_env ${buildout:parts-directory}:${npm-modules:location}/node_modules
raw node_path ${nodejs:location}/bin/node
raw conf_path ${simple-proxy:location}/${simple-proxy:filename}
#---------------------------
#--
#-- Set nginx frontend
[tempdirectory]
recipe = slapos.cookbook:mkdirectory
client_body_temp_path = $${directory:tmp}/client_body_temp_path
proxy_temp_path = $${directory:tmp}/proxy_temp_path
fastcgi_temp_path = $${directory:tmp}/fastcgi_temp_path
uwsgi_temp_path = $${directory:tmp}/uwsgi_temp_path
scgi_temp_path = $${directory:tmp}/scgi_temp_path
[nginx-frontend]
# Options
nb_workers = 2
# Network
local-ip = $${slap-network-information:local-ipv4}
port = 30001
global-ip = $${slap-network-information:global-ipv6}
global-port = $${slaprunner:runner_port}
# Backend
cloud9-ip = $${cloud9:ip}
cloud9-port = $${cloud9:port}
runner-ip = $${slaprunner:ipv4}
runner-port = $${slaprunner:runner_port}
# SSL
ssl-certificate = $${ca-nginx:cert-file}
ssl-key = $${ca-nginx:key-file}
# Log
path_pid = $${directory:run}/nginx.pid
path_log = $${directory:log}/nginx.log
path_access_log = $${directory:log}/nginx.access.log
path_error_log = $${directory:log}/nginx.error.log
path_tmp = $${buildout:directory}/tmp
# Config files
path_nginx_conf = $${directory:etc}/nginx.conf
# Executables
bin_nginx = ${nginx:location}/sbin/nginx
bin_launcher = $${directory:bin}/launcher
# Utils
path_shell = ${dash:location}/bin/dash
# Misc.
etc_dir = $${directory:etc}
[nginx_conf]
recipe = slapos.recipe.template:jinja2
template = ${template_nginx_conf:location}/${template_nginx_conf:filename}
rendered = $${nginx-frontend:path_nginx_conf}
context =
section param_nginx_frontend nginx-frontend
section param_tempdir tempdirectory
[nginx-launcher]
recipe = slapos.recipe.template:jinja2
template = ${template_launcher:location}/${template_launcher:filename}
rendered = $${nginx-frontend:bin_launcher}
mode = 700
context =
section param_nginx_frontend nginx-frontend
#--------------------
#--
#-- ssl certificates
[certificate-authority]
recipe = slapos.cookbook:certificate_authority
openssl-binary = ${openssl:location}/bin/openssl
ca-dir = $${directory:ca-dir}
requests-directory = $${cadirectory:requests}
wrapper = $${directory:services}/certificate_authority
ca-private = $${cadirectory:private}
ca-certs = $${cadirectory:certs}
ca-newcerts = $${cadirectory:newcerts}
ca-crl = $${cadirectory:crl}
[cadirectory]
recipe = slapos.cookbook:mkdirectory
requests = $${directory:ca-dir}/requests/
private = $${directory:ca-dir}/private/
certs = $${directory:ca-dir}/certs/
newcerts = $${directory:ca-dir}/newcerts/
crl = $${directory:ca-dir}/crl/
[ca-nginx]
<= certificate-authority
recipe = slapos.cookbook:certificate_authority.request
key-file = $${cadirectory:certs}/nginx_frontend.key
cert-file = $${cadirectory:certs}/nginx_frontend.crt
executable = $${nginx-launcher:rendered}
wrapper = $${directory:services}/nginx-frontend
# Put domain name
name = example.com
[ca-node-frontend]
<= certificate-authority
recipe = slapos.cookbook:certificate_authority.request
key-file = $${cadirectory:certs}/nodejs.key
cert-file = $${cadirectory:certs}/nodejs.crt
executable = $${node-frontend-launcher:rendered}
wrapper = $${directory:services}/node-frontend
# Put domain name
name = example.com
#--------------------
#--
#-- Request frontend
# Request frontend
[request-frontend] [request-frontend]
<= slap-connection <= slap-connection
recipe = slapos.cookbook:requestoptional recipe = slapos.cookbook:requestoptional
name = Frontend name = SlapRunner Frontend
# XXX We have hardcoded SR URL here. # XXX We have hardcoded SR URL here.
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true slave = true
config = url config = url domain
config-url = $${slaprunner:access-url} config-url = $${node-frontend:access-url}
return = site_url config-domain = $${slap-parameter:frontend-domain}
return = site_url domain
[request-cloud9-frontend]
<= slap-connection
recipe = slapos.cookbook:requestoptional
name = Cloud9 Frontend
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true
config = url domain
config-url = $${node-frontend:access-url}
config-domain = $${slap-parameter:cloud9-frontend-domain}
return = site_url domain
#--------------------------------------
#--
#-- Send information to SlapOS Master
# Send information to SlapOS Master
[publish-connection-informations] [publish-connection-informations]
recipe = slapos.cookbook:publish recipe = slapos.cookbook:publish
1_info = Set your password in slaprunner in order to access cloud9
backend_url = $${slaprunner:access-url} backend_url = $${slaprunner:access-url}
url = $${request-frontend:connection-site_url} url = https://$${request-frontend:connection-domain}
cloud9_url = $${cloud9:access-url} cloud9_backend_url = $${node-frontend:access-url}
cloud9_url = https://$${request-cloud9-frontend:connection-domain}
ssh_command = ssh $${dropbear-runner-server:host} -p $${dropbear-runner-server:port} ssh_command = ssh $${dropbear-runner-server:host} -p $${dropbear-runner-server:port}
password_recovery_code = $${recovery-code:passwd} password_recovery_code = $${recovery-code:passwd}
#---------------------------
#--
#-- Deploy promises scripts
# Deploy promises scripts
[slaprunner-promise] [slaprunner-promise]
recipe = slapos.cookbook:check_port_listening recipe = slapos.cookbook:check_port_listening
path = $${directory:promises}/slaprunner path = $${directory:promises}/slaprunner
...@@ -179,7 +343,7 @@ port = $${slaprunner:runner_port} ...@@ -179,7 +343,7 @@ port = $${slaprunner:runner_port}
[slaprunner-frontend-promise] [slaprunner-frontend-promise]
recipe = slapos.cookbook:check_url_available recipe = slapos.cookbook:check_url_available
path = $${directory:promises}/slaprunner_frontend path = $${directory:promises}/slaprunner_frontend
url = $${request-frontend:connection-site_url} url = https://$${request-frontend:connection-domain}
dash_path = ${dash:location}/bin/dash dash_path = ${dash:location}/bin/dash
curl_path = ${curl:location}/bin/curl curl_path = ${curl:location}/bin/curl
...@@ -190,6 +354,26 @@ url = http://$${cloud9:ip}:$${cloud9:port} ...@@ -190,6 +354,26 @@ url = http://$${cloud9:ip}:$${cloud9:port}
dash_path = ${dash:location}/bin/dash dash_path = ${dash:location}/bin/dash
curl_path = ${curl:location}/bin/curl curl_path = ${curl:location}/bin/curl
[cloud9-frontend-promise]
recipe = slapos.cookbook:check_url_available
path = $${directory:promises}/cloud9-frontend-promise
url = $${publish-connection-informations:cloud9_url}
check-secure = 1
dash_path = ${dash:location}/bin/dash
curl_path = ${curl:location}/bin/curl
[node-frontend-promise]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promises}/node-frontend
hostname = $${node-frontend:ip}
port = $${node-frontend:port}
[nginx-promise]
recipe = slapos.cookbook:check_port_listening
path = $${directory:promises}/nginx
hostname = $${nginx-frontend:local-ip}
port = $${nginx-frontend:port}
[dropbear-promise] [dropbear-promise]
recipe = slapos.cookbook:check_port_listening recipe = slapos.cookbook:check_port_listening
path = $${directory:promises}/dropbear path = $${directory:promises}/dropbear
...@@ -207,3 +391,6 @@ authorized-key = ...@@ -207,3 +391,6 @@ authorized-key =
# Default value of instances number in slaprunner # Default value of instances number in slaprunner
instance-amount = 10 instance-amount = 10
debug = false debug = false
cloud9-frontend-domain =
frontend-domain =
...@@ -4,15 +4,50 @@ parts = ...@@ -4,15 +4,50 @@ parts =
eggs-directory = ${buildout:eggs-directory} eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory} develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[switch_softwaretype] [switch_softwaretype]
recipe = slapos.cookbook:softwaretype recipe = slapos.cookbook:softwaretype
default = ${template-runner:output} default = ${template-runner:output}
resilient = ${instance-resilient:rendered} resilient = $${instance-resilient:rendered}
test = $${instance-resilient-test:rendered}
runner = ${template-runner:output} runner = ${template-runner:output}
runner-import = ${instance-runner-import:output} runner-import = ${instance-runner-import:output}
runner-export = ${instance-runner-export:output} runner-export = ${instance-runner-export:output}
frozen = ${instance-frozen:output} frozen = ${instance-frozen:output}
pull-backup = ${template-pull-backup:output} pull-backup = ${template-pull-backup:output}
[instance-resilient]
recipe = slapos.recipe.template:jinja2
template = ${template-resilient:target}
rendered = $${buildout:directory}/instance-resilient.cfg
extensions = jinja2.ext.do
context = key buildout buildout:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
key slapparameter_dict slap-configuration:configuration
template-parts-destination = ${template-parts:destination}
template-replicated-destination = ${template-replicated:destination}
import-list = file parts :template-parts-destination
file replicated :template-replicated-destination
mode = 0644
[instance-resilient-test]
recipe = slapos.recipe.template:jinja2
template = ${template-resilient-test:location}/instance-resilient-test.cfg.jinja2
rendered = $${buildout:directory}/template-resilient-test.cfg
bin-directory = ${buildout:bin-directory}
context =
key bin_directory instance-resilient-test:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
key slapparameter_dict slap-configuration:configuration
mode = 0644
[slap-configuration]
recipe = slapos.cookbook:slapconfiguration.serialised
computer = $${slap-connection:computer-id}
partition = $${slap-connection:partition-id}
url = $${slap-connection:server-url}
key = $${slap-connection:key-file}
cert = $${slap-connection:cert-file}
\ No newline at end of file
#! {{ param_nginx_frontend['path_shell'] }}
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
# Run nginx
exec {{ param_nginx_frontend['bin_nginx'] }} -c {{ param_nginx_frontend['path_nginx_conf'] }}
worker_processes {{ param_nginx_frontend['nb_workers'] }};
pid {{ param_nginx_frontend['path_pid'] }};
error_log {{ param_nginx_frontend['path_error_log'] }};
daemon off;
events {
worker_connections 1024;
accept_mutex off;
}
http {
default_type application/octet-stream;
access_log {{ param_nginx_frontend['path_access_log'] }} combined;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen {{ param_nginx_frontend['local-ip'] }}:{{ param_nginx_frontend['port'] }};
server_name _;
keepalive_timeout 90s;
client_body_temp_path {{ param_tempdir['client_body_temp_path'] }};
proxy_temp_path {{ param_tempdir['proxy_temp_path'] }};
fastcgi_temp_path {{ param_tempdir['fastcgi_temp_path'] }};
uwsgi_temp_path {{ param_tempdir['uwsgi_temp_path'] }};
scgi_temp_path {{ param_tempdir['scgi_temp_path'] }};
location / {
auth_basic "Restricted";
auth_basic_user_file {{ param_nginx_frontend['etc_dir'] }}/.htpasswd;
proxy_pass http://{{ param_nginx_frontend['cloud9-ip'] }}:{{ param_nginx_frontend['cloud9-port'] }};
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen [{{ param_nginx_frontend['global-ip'] }}]:{{ param_nginx_frontend['global-port'] }} ssl;
server_name _;
ssl_certificate {{ param_nginx_frontend['ssl-certificate'] }};
ssl_certificate_key {{ param_nginx_frontend['ssl-key'] }};
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
keepalive_timeout 90s;
client_body_temp_path {{ param_tempdir['client_body_temp_path'] }};
proxy_temp_path {{ param_tempdir['proxy_temp_path'] }};
fastcgi_temp_path {{ param_tempdir['fastcgi_temp_path'] }};
uwsgi_temp_path {{ param_tempdir['uwsgi_temp_path'] }};
scgi_temp_path {{ param_tempdir['scgi_temp_path'] }};
location / {
proxy_pass http://{{ param_nginx_frontend['runner-ip'] }}:{{ param_nginx_frontend['runner-port'] }};
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $http_host;
}
}
}
#!{{ shell_path }}
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
export NODE_PATH={{ node_env }}
exec {{ node_path }} {{ conf_path }} {{ ip }} {{ port }} {{ key }} {{ certificate }} {{ backend_ip }} {{ backend_port }}
\ No newline at end of file
/*****************************************************************************
*
* Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
*
* WARNING: This program as such is intended to be used by professional
* programmers who take the whole responsibility of assessing all potential
 * consequences resulting from its eventual inadequacies and bugs.
 * End users who are looking for a ready-to-use solution with commercial
 * guarantees and support are strongly advised to contract a Free Software
 * Service Company.
*
* This program is Free Software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 3
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
*****************************************************************************/
var fs = require('fs'),
util = require('util'),
colors = require('colors'),
http = require('http'),
httpProxy = require('http-proxy');
var listenInterface = process.argv[2],
port = process.argv[3],
sslKeyFile = process.argv[4],
sslCertFile = process.argv[5],
backendIp = process.argv[6],
backendPort = process.argv[7];
if (process.argv.length < 8) {
console.error("Too few arguments. Exiting.");
process.exit(1);
}
var middleware = function (req, res, proxy) {
return proxy.proxyRequest(req, res,{
host: backendIp,
port: backendPort
});
};
middleware.proxyWebSocketRequest = function (req, socket, head, proxy) {
return proxy.proxyWebSocketRequest(req, socket, head,{
host: backendIp,
port: backendPort
});
};
/**
* Create server
*/
var proxyServer = httpProxy.createServer(
middleware,
{
https: {
key: fs.readFileSync(
sslKeyFile,
'utf8'
),
cert: fs.readFileSync(
sslCertFile,
'utf8'
)
},
source: {
host: listenInterface,
port: port
}}
);
console.log('HTTPS server starting and trying to listen on ' +
listenInterface + ':' + port);
// Release the beast.
proxyServer.listen(port, listenInterface);
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
# 2/ Define list of trusted certificates for the cache. # 2/ Define list of trusted certificates for the cache.
[buildout] [buildout]
extends = development.cfg extends = common.cfg
[networkcache] [networkcache]
# signature certificates of the following uploaders. # signature certificates of the following uploaders.
...@@ -55,7 +55,7 @@ meld3 = 0.6.10 ...@@ -55,7 +55,7 @@ meld3 = 0.6.10
netaddr = 0.7.10 netaddr = 0.7.10
plone.recipe.command = 1.1 plone.recipe.command = 1.1
pycrypto = 2.6 pycrypto = 2.6
pytz = 2012j pytz = 2013b
#slapos.cookbook = 0.71.1 #slapos.cookbook = 0.71.1
slapos.core = 0.35.1 slapos.core = 0.35.1
slapos.libnetworkcache = 0.13.4 slapos.libnetworkcache = 0.13.4
...@@ -65,7 +65,16 @@ slapos.recipe.template = 2.4.2 ...@@ -65,7 +65,16 @@ slapos.recipe.template = 2.4.2
smmap = 0.8.2 smmap = 0.8.2
xml-marshaller = 0.9.7 xml-marshaller = 0.9.7
z3c.recipe.scripts = 1.0.1 z3c.recipe.scripts = 1.0.1
lock-file = 2.0
rdiff-backup = 1.0.5
slapos.recipe.cmmi = 0.2
slapos.recipe.download = 1.0.dev-r4053
slapos.toolbox = 0.35.1
slapos.cookbook = 0.78.5
cliff = 1.4
cmd2 = 0.6.6
prettytable = 0.7.2
requests = 1.2.3
# Required by: # Required by:
# slapos.core==0.34 # slapos.core==0.34
# slapos.toolbox==0.34.0 # slapos.toolbox==0.34.0
......
...@@ -308,9 +308,8 @@ githttpbackend = ${git:location}/libexec/git-core/git-http-backend ...@@ -308,9 +308,8 @@ githttpbackend = ${git:location}/libexec/git-core/git-http-backend
base-directory = $${trac-config:project_dir}/git base-directory = $${trac-config:project_dir}/git
[trac-admin] [trac-admin]
recipe = slapos.cookbook:pwgen recipe = slapos.cookbook:generate.password
file = $${buildout:directory}/.password storage-path = $${buildout:directory}/.password
pwgen-binary = ${pwgen:location}/bin/pwgen
user = TracAdmin user = TracAdmin
#--------------------- #---------------------
...@@ -330,7 +329,7 @@ eggs-dirs = ...@@ -330,7 +329,7 @@ eggs-dirs =
python-lib = ${python2.7:location}/lib python-lib = ${python2.7:location}/lib
trac-admin = ${buildout:bin-directory}/trac-admin trac-admin = ${buildout:bin-directory}/trac-admin
admin-user = $${trac-admin:user} admin-user = $${trac-admin:user}
admin-password = $${trac-admin:password} admin-password = $${trac-admin:passwd}
#MySQL informations #MySQL informations
mysql-username = $${mariadb-urlparse:username} mysql-username = $${mariadb-urlparse:username}
mysql-password = $${mariadb-urlparse:password} mysql-password = $${mariadb-urlparse:password}
...@@ -401,7 +400,7 @@ port = 9000 ...@@ -401,7 +400,7 @@ port = 9000
shell = $${shell:wrapper} shell = $${shell:wrapper}
wrapper = $${rootdirectory:bin}/shellinaboxd_raw wrapper = $${rootdirectory:bin}/shellinaboxd_raw
shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd shellinabox-binary = ${shellinabox:location}/bin/shellinaboxd
password = $${trac-admin:password} password = $${trac-admin:passwd}
directory = $${inittrac:site-dir} directory = $${inittrac:site-dir}
login-shell = $${rootdirectory:bin}/login login-shell = $${rootdirectory:bin}/login
certificate-directory = $${directory:shellinabox} certificate-directory = $${directory:shellinabox}
...@@ -454,7 +453,7 @@ frontend_url = $${request-frontend:connection-site_url} ...@@ -454,7 +453,7 @@ frontend_url = $${request-frontend:connection-site_url}
git = $${request-frontend:connection-site_url}git/ git = $${request-frontend:connection-site_url}git/
svn = $${request-frontend:connection-site_url}svn/ svn = $${request-frontend:connection-site_url}svn/
admin_user = $${trac-admin:user} admin_user = $${trac-admin:user}
admin_password = $${trac-admin:password} admin_password = $${trac-admin:passwd}
admin_shell = https://[$${shellinabox:ipv6}]:$${shellinabox:port}/ admin_shell = https://[$${shellinabox:ipv6}]:$${shellinabox:port}/
#---------------- #----------------
......
...@@ -41,7 +41,6 @@ extends = ...@@ -41,7 +41,6 @@ extends =
../../component/lxml-python/buildout.cfg ../../component/lxml-python/buildout.cfg
../../component/mysql-python/buildout.cfg ../../component/mysql-python/buildout.cfg
../../component/git/buildout.cfg ../../component/git/buildout.cfg
../../component/pwgen/buildout.cfg
../../component/shellinabox/buildout.cfg ../../component/shellinabox/buildout.cfg
../../component/perl/buildout.cfg ../../component/perl/buildout.cfg
......
{
"$schema": "http://json-schema.org/draft-04/schema#",
"extends": "./schema-definitions.json#",
"properties": {
"tcpv4-port": {
"allOf": [{
"$ref": "#/definitions/tcpv4port"
}, {
"description": "Start allocating ports at this value, going upward",
"default": 6001
}]
},
"backend-url": {
"description": "The backend url that varnish will cache",
"type": "string"
},
"web-checker": {
"description": "Controls automated cache checker, disabled if null or empty"
"properties": {
"frontend-url": {
"description": "Override entry-point-url web checker will check the HTTP headers of all links in the web site, '%(ip)s' and '%(port)s' being substituted with varnish's listening ip and port, respectively",
"default": "http://%(ip)s:%(port)s/",
"type": "string"
},
"mail-address": {
"description": "Email address to which web checker result is sent",
"type": "string"
},
"smtp-host": {
"description": "The smtp server to be used to send the web checker result",
"type": "string"
}
},
"type": "object"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "Values returned by Varnish instanciation",
"properties": {
"url": {
"description": "Varnish HTTP service access information",
"type": "string"
}
},
"type": "object"
}
{% set tcpv4_port = slapparameter_dict.get('tcpv4_port', 6001) -%}
{% set ip = (ipv4_set | list)[0] %}
[buildout]
parts =
publish-varnish-connection-information
varnish-instance
cron
cron-entry-logrotate
{# When a web_checker related parameter is given, web_checker will be enabled. -#}
{% set web_checker_dict = slapparameter_dict.get('web-checker', {}) -%}
{% if web_checker_dict -%}
web-checker
cron-entry-web-checker
logrotate-entry-web-checker
[cron-entry-web-checker]
<= cron
recipe = slapos.cookbook:cron.d
name = web-checker
frequency = 0 0 * * *
command = ${varnish-instance:web-checker} ${web-checker:web-checker-config}
[web-checker]
recipe = slapos.cookbook:webchecker
web-checker-config = ${rootdirectory:etc}/web_checker.cfg
web-checker-working-directory = ${directory:web-checker}
frontend-url = {{ web_checker_dict.get('frontend-url', 'http://%(ip)s:%(port)s/') % {
'ip': ip,
'port': tcpv4_port,
} }}
mail-address = {{ web_checker_dict['mail-address'] }}
smtp-host = {{ web_checker_dict['smtp-host'] }}
wget-binary-path = {{ parameter_dict['wget'] }}/bin/wget
varnishlog-binary-path = ${varnish-instance:varnishlog-wrapper}
web-checker-log = ${basedirectory:log}/web-checker.log
[logrotate-entry-web-checker]
<= logrotate
recipe = slapos.cookbook:logrotate.d
name = web-checker
log = ${web-checker:web-checker-log}
frequency = daily
rotate-num = 30
sharedscripts = true
notifempty = true
create = true
{%- endif %}
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
[publish-varnish-connection-information]
recipe = slapos.cookbook:publish.serialised
url = http://${varnish-instance:ip}:${varnish-instance:server-port}/
[varnish-instance]
recipe = slapos.cookbook:generic.varnish
backend-url = {{ slapparameter_dict['backend-url'] }}
# Network options
ip = {{ ip }}
server-port = {{ tcpv4_port }}
manager-port = {{ tcpv4_port + 1 }}
stunnel-port = {{ tcpv4_port + 2 }}
# Paths: Running wrappers
varnishd-wrapper = ${basedirectory:services}/varnishd
varnishlog-wrapper = ${rootdirectory:bin}/varnishlog
stunnel-wrapper = ${basedirectory:services}/stunnel
# Binary information
varnishd-binary = {{ parameter_dict['varnish'] }}/sbin/varnishd
varnishlog-binary = {{ parameter_dict['varnish'] }}/bin/varnishlog
shell-path = {{ parameter_dict['dash'] }}/bin/dash
stunnel-binary = {{ parameter_dict['stunnel'] }}/bin/stunnel
gcc-location = {{ parameter_dict['gcc'] }}/bin
# Configuration by VCL
vcl-file = ${rootdirectory:etc}/default.vcl
pid-file = ${basedirectory:run}/varnishd.pid
stunnel-conf-file = ${rootdirectory:etc}/stunnel.conf
stunnel-pid-file = ${basedirectory:run}/stunnel.pid
varnish-data = ${directory:varnish-data}
# this will be passed as the -n option
varnish-instance-name = ${directory:varnish-instance}
web-checker = {{ parameter_dict['buildout-bin-directory'] }}/web_checker_utility
[cron]
recipe = slapos.cookbook:cron
dcrond-binary = {{ parameter_dict['dcron'] }}/sbin/crond
cron-entries = ${directory:cron-entries}
crontabs = ${directory:crontabs}
cronstamps = ${directory:cronstamps}
binary = ${basedirectory:services}/crond
catcher = ${cron-simplelogger:wrapper}
[cron-simplelogger]
recipe = slapos.cookbook:simplelogger
wrapper = ${rootdirectory:bin}/cron_simplelogger
log = ${basedirectory:log}/cron.log
[cron-entry-logrotate]
<= cron
recipe = slapos.cookbook:cron.d
name = logrotate
frequency = 0 0 * * *
command = ${logrotate:wrapper}
[logrotate]
recipe = slapos.cookbook:logrotate
# Binaries
logrotate-binary = {{ parameter_dict['logrotate'] }}/usr/sbin/logrotate
gzip-binary = {{ parameter_dict['gzip'] }}/bin/gzip
gunzip-binary = {{ parameter_dict['gzip'] }}/bin/gunzip
# Directories
wrapper = ${rootdirectory:bin}/logrotate
conf = ${rootdirectory:etc}/logrotate.conf
logrotate-entries = ${directory:logrotate-entries}
backup = ${directory:logrotate-backup}
state-file = ${rootdirectory:srv}/logrotate.status
[basedirectory]
recipe = slapos.cookbook:mkdirectory
services = ${rootdirectory:etc}/run
run = ${rootdirectory:var}/run
backup = ${rootdirectory:srv}/backup
log = ${rootdirectory:var}/log
[directory]
recipe = slapos.cookbook:mkdirectory
varnish-data = ${rootdirectory:srv}/varnish
varnish-instance = ${directory:varnish-data}/instance
cron-entries = ${rootdirectory:etc}/cron.d
crontabs = ${rootdirectory:etc}/crontabs
cronstamps = ${rootdirectory:etc}/cronstamps
logrotate-backup = ${basedirectory:backup}/logrotate
logrotate-entries = ${rootdirectory:etc}/logrotate.d
web-checker = ${rootdirectory:srv}/web-checker
[rootdirectory]
recipe = slapos.cookbook:mkdirectory
etc = ${buildout:directory}/etc
var = ${buildout:directory}/var
srv = ${buildout:directory}/srv
bin = ${buildout:directory}/bin
[buildout]
parts =
switch-softwaretype
eggs-directory = {{ eggs_directory }}
develop-eggs-directory = {{ develop_eggs_directory }}
offline = true
[slap-configuration]
recipe = slapos.cookbook:slapconfiguration.serialised
computer = ${slap-connection:computer-id}
partition = ${slap-connection:partition-id}
url = ${slap-connection:server-url}
key = ${slap-connection:key-file}
cert = ${slap-connection:cert-file}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
rendered = ${buildout:parts-directory}/${:_buildout_section_name_}/${:filename}
extra-context =
context =
key eggs_directory buildout:eggs-directory
key develop_eggs_directory buildout:develop-eggs-directory
key slapparameter_dict slap-configuration:configuration
key ipv4_set slap-configuration:ipv4
${:extra-context}
[dynamic-template-varnish-parameters]
dash = {{ dash_location }}
dcron = {{ dcron_location }}
gcc = {{ gcc_location }}
gzip = {{ gzip_location }}
logrotate = {{ logrotate_location }}
stunnel = {{ stunnel_location }}
varnish = {{ varnish_location }}
wget = {{ wget_location }}
buildout-bin-directory = {{ buildout_bin_directory }}
[dynamic-template-varnish]
< = jinja2-template-base
template = {{ template_varnish }}
filename = instance-varnish.cfg
extra-context =
section parameter_dict dynamic-template-varnish-parameters
[switch-softwaretype]
recipe = slapos.cookbook:softwaretype
default = ${dynamic-template-varnish:rendered}
[slap-connection]
# part to migrate to new - separated words
computer-id = ${slap_connection:computer_id}
partition-id = ${slap_connection:partition_id}
server-url = ${slap_connection:server_url}
software-release-url = ${slap_connection:software_release_url}
key-file = ${slap_connection:key_file}
cert-file = ${slap_connection:cert_file}
[buildout]
# Local development
develop =
${:parts-directory}/slapos.cookbook-repository
extensions =
slapos.zcbworkarounds
mr.developer
find-links =
http://www.nexedi.org/static/packages/source/slapos.buildout/
http://www.nexedi.org/static/packages/source/hexagonit.recipe.download/
http://dist.repoze.org
http://www.nexedi.org/static/packages/source/
extends =
../../stack/slapos.cfg
../../component/dash/buildout.cfg
../../component/dcron/buildout.cfg
../../component/gcc/buildout.cfg
../../component/git/buildout.cfg
../../component/gzip/buildout.cfg
../../component/logrotate/buildout.cfg
../../component/python-2.7/buildout.cfg
../../component/stunnel/buildout.cfg
../../component/varnish/buildout.cfg
../../component/wget/buildout.cfg
parts =
dash
dcron
gcc-minimal
slapos-toolbox
stunnel
varnish-3.0
wget
# Local development
slapos.cookbook-repository
check-recipe
# Create instance template
template
# Local development
[slapos.cookbook-repository]
recipe = slapos.recipe.build:gitclone
repository = http://git.erp5.org/repos/slapos.git
branch = master
git-executable = ${git:location}/bin/git
[check-recipe]
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link
[template-jinja2-base]
recipe = slapos.recipe.template:jinja2
template = ${:_profile_base_location_}/${:filename}.in
rendered = ${buildout:directory}/${:filename}
# XXX: extra-context is needed because we cannot append to a key of an extended
# section.
extra-context =
context =
key bin_directory buildout:bin-directory
key develop_eggs_directory buildout:develop-eggs-directory
key eggs_directory buildout:eggs-directory
${:extra-context}
[template]
< = template-jinja2-base
# XXX: "template.cfg" is hardcoded in instanciation recipe
filename = template.cfg
template = ${:_profile_base_location_}/instance.cfg.in
md5sum = 8e906d749e19ee13fe5b7f4d9bfcf896
extra-context =
key buildout_bin_directory buildout:bin-directory
key dash_location dash:location
key dcron_location dcron:location
key gcc_location gcc-minimal:location
key gzip_location gzip:location
key logrotate_location logrotate:location
key stunnel_location stunnel:location
key template_varnish template-varnish:target
key varnish_location varnish-3.0:location
key wget_location wget:location
[template-varnish]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/instance-varnish.cfg.in
md5sum = 4334d900f212d170fd0ca35865879bdf
mode = 640
[eggs]
recipe = zc.recipe.egg
python = python2.7
eggs =
${lxml-python:egg}
erp5.util
pytz
lock_file
inotifyx
scripts =
web_checker_utility = erp5.util.webchecker:web_checker_utility
[lxml-python]
python = python2.7
[slapos-toolbox]
recipe = zc.recipe.egg
python = ${eggs:python}
eggs =
${lxml-python:egg}
slapos.toolbox
scripts =
killpidfromfile
[versions]
erp5.util = 0.4.34
lxml = 2.3.6
slapos.toolbox = 0.33.1
[networkcache]
# signature certificates of the following uploaders.
# Romain Courteaud
# Sebastien Robin
# Kazuhiko Shiozaki
# Cedric de Saint Martin
# Yingjie Xu
# Gabriel Monnerat
# Łukasz Nowak
# Test Agent (Automatic update from tests)
# Aurélien Calonne
signature-certificate-list =
-----BEGIN CERTIFICATE-----
MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE
CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5
MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl
ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw
boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX
Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA
ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX
mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC
q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g
QUUGLQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8jCCAVugAwIBAgIJAPu2zchZ2BxoMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV
BAMMB3RzeGRldjMwHhcNMTExMDE0MTIxNjIzWhcNMTIxMDEzMTIxNjIzWjASMRAw
DgYDVQQDDAd0c3hkZXYzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrPbh+
YGmo6mWmhVb1vTqX0BbeU0jCTB8TK3i6ep3tzSw2rkUGSx3niXn9LNTFNcIn3MZN
XHqbb4AS2Zxyk/2tr3939qqOrS4YRCtXBwTCuFY6r+a7pZsjiTNddPsEhuj4lEnR
L8Ax5mmzoi9nE+hiPSwqjRwWRU1+182rzXmN4QIDAQABo1AwTjAdBgNVHQ4EFgQU
/4XXREzqBbBNJvX5gU8tLWxZaeQwHwYDVR0jBBgwFoAU/4XXREzqBbBNJvX5gU8t
LWxZaeQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQA07q/rKoE7fAda
FED57/SR00OvY9wLlFEF2QJ5OLu+O33YUXDDbGpfUSF9R8l0g9dix1JbWK9nQ6Yd
R/KCo6D0sw0ZgeQv1aUXbl/xJ9k4jlTxmWbPeiiPZEqU1W9wN5lkGuLxV4CEGTKU
hJA/yXa1wbwIPGvX3tVKdOEWPRXZLg==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB7jCCAVegAwIBAgIJAJWA0jQ4o9DGMA0GCSqGSIb3DQEBBQUAMA8xDTALBgNV
BAMMBHg2MXMwIBcNMTExMTI0MTAyNDQzWhgPMjExMTEwMzExMDI0NDNaMA8xDTAL
BgNVBAMMBHg2MXMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANdJNiFsRlkH
vq2kHP2zdxEyzPAWZH3CQ3Myb3F8hERXTIFSUqntPXDKXDb7Y/laqjMXdj+vptKk
3Q36J+8VnJbSwjGwmEG6tym9qMSGIPPNw1JXY1R29eF3o4aj21o7DHAkhuNc5Tso
67fUSKgvyVnyH4G6ShQUAtghPaAwS0KvAgMBAAGjUDBOMB0GA1UdDgQWBBSjxFUE
RfnTvABRLAa34Ytkhz5vPzAfBgNVHSMEGDAWgBSjxFUERfnTvABRLAa34Ytkhz5v
PzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFLDS7zNhlrQYSQO5KIj
z2RJe3fj4rLPklo3TmP5KLvendG+LErE2cbKPqnhQ2oVoj6u9tWVwo/g03PMrrnL
KrDm39slYD/1KoE5kB4l/p6KVOdeJ4I6xcgu9rnkqqHzDwI4v7e8/D3WZbpiFUsY
vaZhjNYKWQf79l6zXfOvphzJ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT
MREwDwYDVQQDDAhDT01QLTIzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
wi/3Z8W9pUiegUXIk/AiFDQ0UJ4JFAwjqr+HSRUirlUsHHT+8DzH/hfcTDX1I5BB
D1ADk+ydXjMm3OZrQcXjn29OUfM5C+g+oqeMnYQImN0DDQIOcUyr7AJc4xhvuXQ1
P2pJ5NOd3tbd0kexETa1LVhR6EgBC25LyRBRae76qosCAwEAAaNQME4wHQYDVR0O
BBYEFMDmW9aFy1sKTfCpcRkYnP6zUd1cMB8GA1UdIwQYMBaAFMDmW9aFy1sKTfCp
cRkYnP6zUd1cMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAskbFizHr
b6d3iIyN+wffxz/V9epbKIZVEGJd/6LrTdLiUfJPec7FaxVCWNyKBlCpINBM7cEV
Gn9t8mdVQflNqOlAMkOlUv1ZugCt9rXYQOV7rrEYJBWirn43BOMn9Flp2nibblby
If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAIlBksrZVkK8MA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMzU3MCAXDTEyMDEyNjEwNTUyOFoYDzIxMTIwMTAyMTA1NTI4WjAT
MREwDwYDVQQDDAhDT01QLTM1NzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ts+iGUwi44vtIfwXR8DCnLtHV4ydl0YTK2joJflj0/Ws7mz5BYkxIU4fea/6+VF3
i11nwBgYgxQyjNztgc9u9O71k1W5tU95yO7U7bFdYd5uxYA9/22fjObaTQoC4Nc9
mTu6r/VHyJ1yRsunBZXvnk/XaKp7gGE9vNEyJvPn2bkCAwEAAaNQME4wHQYDVR0O
BBYEFKuGIYu8+6aEkTVg62BRYaD11PILMB8GA1UdIwQYMBaAFKuGIYu8+6aEkTVg
62BRYaD11PILMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAMoTRpBxK
YLEZJbofF7gSrRIcrlUJYXfTfw1QUBOKkGFFDsiJpEg4y5pUk1s5Jq9K3SDzNq/W
it1oYjOhuGg3al8OOeKFrU6nvNTF1BAvJCl0tr3POai5yXyN5jlK/zPfypmQYxE+
TaqQSGBJPVXYt6lrq/PRD9ciZgKLOwEqK8w=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAPHoWu90gbsgMA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCXZpZmlibm9kZTAeFw0xMjAzMTkyMzIwNTVaFw0xMzAzMTkyMzIwNTVaMBQx
EjAQBgNVBAMMCXZpZmlibm9kZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
ozBijpO8PS5RTeKTzA90vi9ezvv4vVjNaguqT4UwP9+O1+i6yq1Y2W5zZxw/Klbn
oudyNzie3/wqs9VfPmcyU9ajFzBv/Tobm3obmOqBN0GSYs5fyGw+O9G3//6ZEhf0
NinwdKmrRX+d0P5bHewadZWIvlmOupcnVJmkks852BECAwEAAaNQME4wHQYDVR0O
BBYEFF9EtgfZZs8L2ZxBJxSiY6eTsTEwMB8GA1UdIwQYMBaAFF9EtgfZZs8L2ZxB
JxSiY6eTsTEwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAc43YTfc6
baSemaMAc/jz8LNLhRE5dLfLOcRSoHda8y0lOrfe4lHT6yP5l8uyWAzLW+g6s3DA
Yme/bhX0g51BmI6gjKJo5DoPtiXk/Y9lxwD3p7PWi+RhN+AZQ5rpo8UfwnnN059n
yDuimQfvJjBFMVrdn9iP6SfMjxKaGk6gVmI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAMNZBmoIOXPBMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtMTMyMCAXDTEyMDUwMjEyMDQyNloYDzIxMTIwNDA4MTIwNDI2WjAT
MREwDwYDVQQDDAhDT01QLTEzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
6peZQt1sAmMAmSG9BVxxcXm8x15kE9iAplmANYNQ7z2YO57c10jDtlYlwVfi/rct
xNUOKQtc8UQtV/fJWP0QT0GITdRz5X/TkWiojiFgkopza9/b1hXs5rltYByUGLhg
7JZ9dZGBihzPfn6U8ESAKiJzQP8Hyz/o81FPfuHCftsCAwEAAaNQME4wHQYDVR0O
BBYEFNuxsc77Z6/JSKPoyloHNm9zF9yqMB8GA1UdIwQYMBaAFNuxsc77Z6/JSKPo
yloHNm9zF9yqMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAl4hBaJy1
cgiNV2+Z5oNTrHgmzWvSY4duECOTBxeuIOnhql3vLlaQmo0p8Z4c13kTZq2s3nhd
Loe5mIHsjRVKvzB6SvIaFUYq/EzmHnqNdpIGkT/Mj7r/iUs61btTcGUCLsUiUeci
Vd0Ozh79JSRpkrdI8R/NRQ2XPHAo+29TT70=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV
BAMMCENPTVAtNzcyMCAXDTEyMDgxMDE1NDI1MVoYDzIxMTIwNzE3MTU0MjUxWjAT
MREwDwYDVQQDDAhDT01QLTc3MjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
o7aipd6MbnuGDeR1UJUjuMLQUariAyQ2l2ZDS6TfOwjHiPw/mhzkielgk73kqN7A
sUREx41eTcYCXzTq3WP3xCLE4LxLg1eIhd4nwNHj8H18xR9aP0AGjo4UFl5BOMa1
mwoyBt3VtfGtUmb8whpeJgHhqrPPxLoON+i6fIbXDaUCAwEAAaNQME4wHQYDVR0O
BBYEFEfjy3OopT2lOksKmKBNHTJE2hFlMB8GA1UdIwQYMBaAFEfjy3OopT2lOksK
mKBNHTJE2hFlMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAaNRx6YN2
M/p3R8/xS6zvH1EqJ3FFD7XeAQ52WuQnKSREzuw0dsw12ClxjcHiQEFioyTiTtjs
5pW18Ry5Ie7iFK4cQMerZwWPxBodEbAteYlRsI6kePV7Gf735Y1RpuN8qZ2sYL6e
x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB+DCCAWGgAwIBAgIJAKGd0vpks6T/MA0GCSqGSIb3DQEBBQUAMBQxEjAQBgNV
BAMMCUNPTVAtMTU4NDAgFw0xMzA2MjAxMjE5MjBaGA8yMTEzMDUyNzEyMTkyMFow
FDESMBAGA1UEAwwJQ09NUC0xNTg0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB
gQDZTH9etPUC+wMZQ3UIiOwyyCfHsJ+7duCFYjuo1uZrhtDt/fp8qb8qK9ob+df3
EEYgA0IgI2j/9jNUEnKbc5+OrfKznzXjrlrH7zU8lKBVNCLzQuqBKRNajZ+UvO8R
nlqK2jZCXP/p3HXDYUTEwIR5W3tVCEn/Vda4upTLcPVE5wIDAQABo1AwTjAdBgNV
HQ4EFgQU7KXaNDheQWoy5uOU01tn1M5vNkEwHwYDVR0jBBgwFoAU7KXaNDheQWoy
5uOU01tn1M5vNkEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQASmqCU
Znbvu6izdicvjuE3aKnBa7G++Fdp2bdne5VCwVbVLYCQWatB+n4crKqGdnVply/u
+uZ16u1DbO9rYoKgWqjLk1GfiLw5v86pd5+wZd5I9QJ0/Sbz2vZk5S4ciMIGwArc
m711+GzlW5xe6GyH9SZaGOPAdUbI6JTDwLzEgA==
-----END CERTIFICATE-----
...@@ -36,7 +36,6 @@ extends = ...@@ -36,7 +36,6 @@ extends =
../component/libreoffice-bin/buildout.cfg ../component/libreoffice-bin/buildout.cfg
../component/libpng/buildout.cfg ../component/libpng/buildout.cfg
../component/lxml-python/buildout.cfg ../component/lxml-python/buildout.cfg
../component/python-2.6/buildout.cfg
../component/python-2.7/buildout.cfg ../component/python-2.7/buildout.cfg
../component/xorg/buildout.cfg ../component/xorg/buildout.cfg
../component/fonts/buildout.cfg ../component/fonts/buildout.cfg
...@@ -50,6 +49,7 @@ extends = ...@@ -50,6 +49,7 @@ extends =
../component/dcron/buildout.cfg ../component/dcron/buildout.cfg
../component/coreutils/buildout.cfg ../component/coreutils/buildout.cfg
../component/cloudooo/buildout.cfg ../component/cloudooo/buildout.cfg
../component/haproxy/buildout.cfg
versions = versions versions = versions
...@@ -77,17 +77,14 @@ parts = ...@@ -77,17 +77,14 @@ parts =
poppler poppler
ffmpeg ffmpeg
bootstrap2.6
rdiff-backup rdiff-backup
haproxy
cloudooo cloudooo
# Local development # Local development
develop += develop +=
${:parts-directory}/cloudooo ${:parts-directory}/cloudooo
[bootstrap2.6]
python = python2.6
[versions] [versions]
# Use SlapOS patched zc.buildout # Use SlapOS patched zc.buildout
zc.buildout = 1.6.0-dev-SlapOS-006 zc.buildout = 1.6.0-dev-SlapOS-006
...@@ -19,7 +19,7 @@ allow-hosts += pybrary.net ...@@ -19,7 +19,7 @@ allow-hosts += pybrary.net
extends = extends =
# Exact version of Zope # Exact version of Zope
http://svn.zope.org/repos/main/Zope/tags/2.12.26/versions.cfg https://raw.github.com/zopefoundation/Zope/2.13.21/versions.cfg
../../stack/slapos.cfg ../../stack/slapos.cfg
../../component/logrotate/buildout.cfg ../../component/logrotate/buildout.cfg
../../component/dcron/buildout.cfg ../../component/dcron/buildout.cfg
...@@ -44,7 +44,6 @@ extends = ...@@ -44,7 +44,6 @@ extends =
../../component/pil-python/buildout.cfg ../../component/pil-python/buildout.cfg
../../component/pycrypto-python/buildout.cfg ../../component/pycrypto-python/buildout.cfg
../../component/pysvn-python/buildout.cfg ../../component/pysvn-python/buildout.cfg
../../component/python-2.6/buildout.cfg
../../component/python-2.7/buildout.cfg ../../component/python-2.7/buildout.cfg
../../component/python-ldap-python/buildout.cfg ../../component/python-ldap-python/buildout.cfg
../../component/rdiff-backup/buildout.cfg ../../component/rdiff-backup/buildout.cfg
...@@ -93,7 +92,6 @@ parts = ...@@ -93,7 +92,6 @@ parts =
w3-validator w3-validator
tesseract tesseract
hookbox hookbox
bootstrap2.6
perl-DBD-mariadb perl-DBD-mariadb
perl-DBI perl-DBI
percona-toolkit percona-toolkit
...@@ -136,8 +134,6 @@ parts = ...@@ -136,8 +134,6 @@ parts =
# Local development # Local development
slapos.cookbook-repository slapos.cookbook-repository
check-recipe check-recipe
slapos.cookbook-python2.6
slapos.recipe.template-python2.6
# Create instance template # Create instance template
template template
...@@ -154,19 +150,6 @@ stop-on-error = true ...@@ -154,19 +150,6 @@ stop-on-error = true
update-command = ${:command} update-command = ${:command}
command = grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link command = grep parts ${buildout:develop-eggs-directory}/slapos.cookbook.egg-link
[slapos.cookbook-python2.6]
recipe = zc.recipe.egg
eggs = slapos.cookbook
scripts =
python = python2.6
ugly-depend-on = ${slapos.cookbook-repository:repository} ${slapos.cookbook-repository:branch}
[slapos.recipe.template-python2.6]
recipe = zc.recipe.egg
eggs = slapos.recipe.template
scripts =
python = python2.6
[template-jinja2-base] [template-jinja2-base]
recipe = slapos.recipe.template:jinja2 recipe = slapos.recipe.template:jinja2
template = ${:_profile_base_location_}/${:filename}.in template = ${:_profile_base_location_}/${:filename}.in
...@@ -343,9 +326,6 @@ command = ...@@ -343,9 +326,6 @@ command =
${buildout:executable} ${:genbt5list} ${local-bt5-repository:list} ${buildout:executable} ${:genbt5list} ${local-bt5-repository:list}
update-command = ${:command} update-command = ${:command}
[bootstrap2.6]
python = python2.6
[erp5_repository_list] [erp5_repository_list]
repository_id_list = erp5 repository_id_list = erp5
...@@ -367,7 +347,7 @@ update-command = ${:command} ...@@ -367,7 +347,7 @@ update-command = ${:command}
# XXX: Workaround for fact ERP5Type is not an distribution and does not # XXX: Workaround for fact ERP5Type is not an distribution and does not
# expose entry point for test runner # expose entry point for test runner
recipe = zc.recipe.egg recipe = zc.recipe.egg
python = python2.6 python = python2.7
eggs = ${eggs:eggs} eggs = ${eggs:eggs}
extra-paths = ${eggs:extra-paths} extra-paths = ${eggs:extra-paths}
entry-points = entry-points =
...@@ -405,7 +385,7 @@ initialization = ...@@ -405,7 +385,7 @@ initialization =
# XXX: Workaround for fact ERP5Type is not an distribution and does not # XXX: Workaround for fact ERP5Type is not an distribution and does not
# expose entry point for test runner # expose entry point for test runner
recipe = zc.recipe.egg recipe = zc.recipe.egg
python = python2.6 python = python2.7
eggs = ${eggs:eggs} eggs = ${eggs:eggs}
extra-paths = ${eggs:extra-paths} extra-paths = ${eggs:extra-paths}
entry-points = entry-points =
...@@ -424,7 +404,7 @@ initialization = ...@@ -424,7 +404,7 @@ initialization =
[eggs] [eggs]
recipe = zc.recipe.egg recipe = zc.recipe.egg
python = python2.6 python = python2.7
eggs = eggs =
${mysql-python:egg} ${mysql-python:egg}
${lxml-python:egg} ${lxml-python:egg}
...@@ -432,6 +412,7 @@ eggs = ...@@ -432,6 +412,7 @@ eggs =
${python-ldap-python:egg} ${python-ldap-python:egg}
${pysvn-python:egg} ${pysvn-python:egg}
${pycrypto-python:egg} ${pycrypto-python:egg}
lock_file
PyXML PyXML
SOAPpy SOAPpy
cElementTree cElementTree
...@@ -441,6 +422,7 @@ eggs = ...@@ -441,6 +422,7 @@ eggs =
erp5diff erp5diff
inotifyx inotifyx
ipdb ipdb
Jinja2
mechanize mechanize
numpy numpy
ordereddict ordereddict
...@@ -465,6 +447,8 @@ eggs = ...@@ -465,6 +447,8 @@ eggs =
huBarcode huBarcode
qrcode qrcode
spyne spyne
# Needed for checking ZODB Components source code
pylint
# Zope # Zope
ZODB3 ZODB3
...@@ -494,6 +478,10 @@ eggs = ...@@ -494,6 +478,10 @@ eggs =
Products.TIDStorage Products.TIDStorage
Products.LongRequestLogger Products.LongRequestLogger
# BBB: Temporarily keep zope.app.testing awaiting we use newer version of CMF
# (for tests like testCookieCrumbler).
zope.app.testing
# Currently forked in our repository # Currently forked in our repository
# Products.PortalTransforms # Products.PortalTransforms
# Dependency for our fork of PortalTransforms # Dependency for our fork of PortalTransforms
...@@ -505,7 +493,7 @@ eggs = ...@@ -505,7 +493,7 @@ eggs =
# parameterizing the version of the generated python interpreter name by the # parameterizing the version of the generated python interpreter name by the
# python section version causes dependency between this egg section and the # python section version causes dependency between this egg section and the
# installation of python, which we don't want on an instance # installation of python, which we don't want on an instance
interpreter = python2.6 interpreter = python2.7
scripts = scripts =
repozo repozo
runzope runzope
...@@ -519,7 +507,7 @@ extra-paths = ...@@ -519,7 +507,7 @@ extra-paths =
[zodbanalyze] [zodbanalyze]
recipe = zc.recipe.egg recipe = zc.recipe.egg
python = python2.6 python = python2.7
eggs = eggs =
ZODB3 ZODB3
erp5.util erp5.util
...@@ -532,19 +520,19 @@ branch = master ...@@ -532,19 +520,19 @@ branch = master
revision = 5c67568c403239bd8e25993602d03c553236fcec revision = 5c67568c403239bd8e25993602d03c553236fcec
[mysql-python] [mysql-python]
python = python2.6 python = python2.7
[lxml-python] [lxml-python]
python = python2.6 python = python2.7
[pil-python] [pil-python]
python = python2.6 python = python2.7
[python-ldap-python] [python-ldap-python]
python = python2.6 python = python2.7
[pysvn-python] [pysvn-python]
python = python2.6 python = python2.7
[slapos-toolbox] [slapos-toolbox]
recipe = zc.recipe.egg recipe = zc.recipe.egg
...@@ -608,6 +596,16 @@ Products.CMFDefault = 2.2.2 ...@@ -608,6 +596,16 @@ Products.CMFDefault = 2.2.2
Products.CMFTopic = 2.2.1 Products.CMFTopic = 2.2.1
Products.CMFUid = 2.2.1 Products.CMFUid = 2.2.1
# newer version requires zope.traversing>=4.0.0a2.
zope.app.appsetup = 3.16.0
# newer version requires zope.i18n>=4.0.0a3
zope.app.publication = 3.14.0
# BBB: Temporarily keep zope.app.testing awaiting we use newer version of CMF
# (for tests like testCookieCrumbler).
zope.app.testing = 3.8.1
# Pinned versions # Pinned versions
Flask = 0.9 Flask = 0.9
GitPython = 0.3.2.RC1 GitPython = 0.3.2.RC1
...@@ -625,6 +623,7 @@ Products.LongRequestLogger = 1.1.0 ...@@ -625,6 +623,7 @@ Products.LongRequestLogger = 1.1.0
Products.MimetypesRegistry = 2.0.4 Products.MimetypesRegistry = 2.0.4
Products.PluginRegistry = 1.3 Products.PluginRegistry = 1.3
Products.TIDStorage = 5.4.8 Products.TIDStorage = 5.4.8
Products.ZSQLMethods = 2.13.4
Pygments = 1.6 Pygments = 1.6
StructuredText = 2.11.1 StructuredText = 2.11.1
WSGIUtils = 0.7 WSGIUtils = 0.7
...@@ -643,6 +642,7 @@ erp5.util = 0.4.34 ...@@ -643,6 +642,7 @@ erp5.util = 0.4.34
erp5diff = 0.8.1.5 erp5diff = 0.8.1.5
eventlet = 0.12.1 eventlet = 0.12.1
feedparser = 5.1.3 feedparser = 5.1.3
five.formlib = 1.0.4
five.localsitemanager = 2.0.5 five.localsitemanager = 2.0.5
fpconst = 0.7.2 fpconst = 0.7.2
gitdb = 0.5.4 gitdb = 0.5.4
...@@ -686,3 +686,7 @@ uuid = 1.30 ...@@ -686,3 +686,7 @@ uuid = 1.30
validictory = 0.9.1 validictory = 0.9.1
xml-marshaller = 0.9.7 xml-marshaller = 0.9.7
xupdate-processor = 0.4 xupdate-processor = 0.4
zope.app.debug = 3.4.1
zope.app.dependable = 3.5.1
zope.app.form = 4.0.2
pylint = 1.0.0
...@@ -331,7 +331,7 @@ runzope-binary = {{ bin_directory }}/runzope ...@@ -331,7 +331,7 @@ runzope-binary = {{ bin_directory }}/runzope
bt5-repository-list = bt5-repository-list =
[deadlock-debugger-password] [deadlock-debugger-password]
recipe = slapos.cookbook:pwgen.stable recipe = slapos.cookbook:generate.password
[zope-conf-parameter-base] [zope-conf-parameter-base]
ip = {{ ipv4 }} ip = {{ ipv4 }}
...@@ -346,7 +346,7 @@ context = ...@@ -346,7 +346,7 @@ context =
key instance directory:instance key instance directory:instance
key instance_products directory:instance-products key instance_products directory:instance-products
raw deadlock_path /manage_debug_threads raw deadlock_path /manage_debug_threads
key deadlock_debugger_password deadlock-debugger-password:password key deadlock_debugger_password deadlock-debugger-password:passwd
key tidstorage_ip tidstorage:ip key tidstorage_ip tidstorage:ip
key tidstorage_port tidstorage:port key tidstorage_port tidstorage:port
key promise_path erp5-promise:promise-path key promise_path erp5-promise:promise-path
......
...@@ -214,12 +214,12 @@ parts += ...@@ -214,12 +214,12 @@ parts +=
{{ replicated.replicate("Name", "3", {{ replicated.replicate("Name", "3",
"mysoftware-export", "mysoftware-import", "mysoftware-export", "mysoftware-import",
"ArgLeader","ArgBackup") }} "ArgLeader","ArgBackup", slapparameter_dict=slapparameter_dict) }}
and it'll expand into the sections required to request Name0, Name1 and Name2, and it'll expand into the sections required to request Name0, Name1 and Name2,
backed up and resilient. The leader will extend the section [ArgLeader], backups backed up and resilient. The leader will extend the section [ArgLeader], backups
will extend [ArgBackup]. If you don't need to specify any options, you can will extend [ArgBackup]. slapparameter_dict is the dict containing the parameters given to the instance. If you don't need to specify any options, you can
omit the last two arguments in replicate(). omit the last three arguments in replicate().
Since you will compile your template with jinja2, there should be no $${}, Since you will compile your template with jinja2, there should be no $${},
because it is not yet possible to use jinja2 -> buildout template. because it is not yet possible to use jinja2 -> buildout template.
...@@ -227,3 +227,36 @@ because it is not yet possible to use jinja2 -> buildout template. ...@@ -227,3 +227,36 @@ because it is not yet possible to use jinja2 -> buildout template.
To compile with jinja2, see jinja2's recipe. To compile with jinja2, see jinja2's recipe.
Deploying your resilient software
---------------------------------
You can provide sla parameters to each request that is made (there are many: for export, import and pbs).
Here is a small example of the parameters you can provide to control the deployment (in the case of a runner):
<?xml version='1.0' encoding='utf-8'?>
<instance>
<parameter id="-sla-1-computer_guid">COMP-GRP1</parameter>
<parameter id="-sla-pbs1-computer_guid">COMP-PBS1</parameter>
<parameter id="-sla-2-computer_guid">COMP-GROUP2</parameter>
<parameter id="-sla-runner2-computer_guid">COMP-RUN2</parameter>
<parameter id="-sla-2-network_guid">NET-2</parameter>
<parameter id="-sla-runner0-computer_guid">COMP-RUN0</parameter>
</instance>
Consequences on sla parameters, per request:
* runner0: computer_guid = COMP-RUN0 (provided directly)
* runner1: computer_guid = COMP-GRP1 (provided by group 1)
* runner2: computer_guid = COMP-RUN2 (provided by group 2 but overridden directly)
           network_guid = NET-2 (provided by group 2)
* PBS 1: computer_guid = COMP-PBS1 (provided by group 1 but overridden directly)
* PBS 2: computer_guid = COMP-GROUP2 (provided by group 2)
         network_guid = NET-2 (provided by group 2)
Parameters are analysed this way:
* If a parameter starts with "-sla-", it is not transmitted to the requested instance; it is used as an sla parameter when making the request.
* -sla-foo-bar=example (foo being a magic key) will be used for every request matching the "foo" key, with "bar" as the sla parameter name. So for each group using the "foo" key, the sla parameter "bar" is set to "example".
About magic keys:
There are 2 kinds of magic keys:
* id: for example, in "-sla-2-foo", 2 is the magic key and the parameter will be used for each request with id 2 (in the case of kvm: kvm2 and PBS 2).
* nameid: for example, in "-sla-kvm2-foo", foo will be used for the kvm2 request. The name for pbs is "pbs" -> "-sla-pbs2-foo".
IMPORTANT NOTE: if the same foo parameter is provided both ways for a request, the nameid key prevails (see the sketch below).
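
As an illustration of the resolution order described above, here is a minimal Python sketch (the function name resolve_sla and the sample values are hypothetical; the authoritative logic is the Jinja2 macro in template-replicated.cfg.in):

# Minimal sketch, assuming the "-sla-" conventions described above.
def resolve_sla(parameter_dict, name, number):
    # '-sla-<name><number>-<key>' (nameid) prevails over '-sla-<number>-<key>' (id).
    nameid_prefix = '-sla-%s%s-' % (name, number)   # e.g. '-sla-runner2-'
    id_prefix = '-sla-%s-' % (number,)              # e.g. '-sla-2-'
    sla = {}
    for key, value in parameter_dict.items():
        if key.startswith(nameid_prefix):
            sla[key[len(nameid_prefix):]] = value        # nameid always wins
        elif key.startswith(id_prefix):
            sla.setdefault(key[len(id_prefix):], value)  # id only fills gaps
    return sla

parameters = {
    '-sla-2-computer_guid': 'COMP-GROUP2',
    '-sla-runner2-computer_guid': 'COMP-RUN2',
    '-sla-2-network_guid': 'NET-2',
}
# prints {'computer_guid': 'COMP-RUN2', 'network_guid': 'NET-2'}
print(resolve_sla(parameters, 'runner', 2))
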
* PBSs and mirrors should monitor/replace themselves
* Report errors from backup
* If a PBS master is down and then back again, it might want to participate in the ongoing election; what happens then?
* If the network is partitioned (the two backups don't see each other, but each can see the slapos master), there will be two concurrent elections taking place, with two winners and two renames.
* How to know that a backup is working? Define "check that it works". Does it deploy? But then, how to ensure data integrity? By application?
* How to ensure "synchronization" between two main instances? Example: Wordpress: mysql is down, then replaced, leading to an inconsistency between apache and the new mysql.
* How to deal with big data? I.e. how to have a working backup/restore system for 1TB of data over a slow connection?
* How to be sure that the elected importer a/ contains the latest data and b/ has finished pulling? An importer lacking a/ or b/ should be prevented from becoming the main.
* How to say "I don't want this instance to be here" + be able to define "here". This would allow automating the deployment of PBS and backup instances.
* Should we encrypt backed-up data?
* If a PBS is lost, a new PBS should be created from another one, in order to keep history
* If an election takes place and asks for a rename, but no slapgrid is running (or takes too long), then another election will take place
and ask to rename the previously selected winner, thus breaking the resiliency system.
[buildout] [buildout]
extends = extends =
../../component/dropbear/buildout.cfg
../../component/gzip/buildout.cfg ../../component/gzip/buildout.cfg
../../component/rdiff-backup/buildout.cfg ../../component/rdiff-backup/buildout.cfg
../../component/rsync/buildout.cfg
parts = parts =
rdiff-backup rdiff-backup
...@@ -11,7 +13,6 @@ parts = ...@@ -11,7 +13,6 @@ parts =
template-replicated template-replicated
template-parts template-parts
instance-frozen instance-frozen
template-resilient
# needed tools for resiliency # needed tools for resiliency
gzip gzip
...@@ -38,7 +39,7 @@ mode = 0644 ...@@ -38,7 +39,7 @@ mode = 0644
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/pbsready-import.cfg.in url = ${:_profile_base_location_}/pbsready-import.cfg.in
output = ${buildout:directory}/pbsready-import.cfg output = ${buildout:directory}/pbsready-import.cfg
md5sum = 1b1308fd39476d48b5ca13db48ea6dc9 md5sum = 3c2e73f49abdc52282fc045e6d91f3e9
mode = 0644 mode = 0644
[pbsready-export] [pbsready-export]
...@@ -47,20 +48,20 @@ mode = 0644 ...@@ -47,20 +48,20 @@ mode = 0644
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/pbsready-export.cfg.in url = ${:_profile_base_location_}/pbsready-export.cfg.in
output = ${buildout:directory}/pbsready-export.cfg output = ${buildout:directory}/pbsready-export.cfg
md5sum = 5d9e20c436fd307e8e4ab224a9a65792 md5sum = 5e27c391ceafb6a58032f1f87fba7826
mode = 0644 mode = 0644
[template-pull-backup] [template-pull-backup]
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-pull-backup.cfg.in url = ${:_profile_base_location_}/instance-pull-backup.cfg.in
output = ${buildout:directory}/instance-pull-backup.cfg output = ${buildout:directory}/instance-pull-backup.cfg
md5sum = 453d96f5a6c1230c01c878cc7640bae6 md5sum = c67a9dad66490ae264f9e7003521bf59
mode = 0644 mode = 0644
[template-replicated] [template-replicated]
recipe = slapos.recipe.download recipe = slapos.recipe.download
url = ${:_profile_base_location_}/template-replicated.cfg.in url = ${:_profile_base_location_}/template-replicated.cfg.in
md5sum = 9e20f283bf709c63c9c6692d5e1f8972 #md5sum = 9e20f283bf709c63c9c6692d5e1f8972
mode = 0644 mode = 0644
destination = ${buildout:directory}/template-replicated.cfg.in destination = ${buildout:directory}/template-replicated.cfg.in
...@@ -77,12 +78,12 @@ destination = ${buildout:directory}/template-parts.cfg.in ...@@ -77,12 +78,12 @@ destination = ${buildout:directory}/template-parts.cfg.in
# which will run without removing any content because it raises an error. # which will run without removing any content because it raises an error.
recipe = slapos.recipe.template recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-frozen.cfg.in url = ${:_profile_base_location_}/instance-frozen.cfg.in
md5sum = d21472f0e58f928fb827f2cbf22c4d4a
output = ${buildout:directory}/instance-frozen.cfg output = ${buildout:directory}/instance-frozen.cfg
[template-resilient] [versions]
recipe = slapos.recipe.template # Pin Jinja2 to 2.6, as 2.7 breaks current code
url = ${:_profile_base_location_}/resilient.cfg.in Jinja2 = 2.6
output = ${buildout:directory}/resilient.cfg # ... And newer s.r.template requires Jinja2 >= 2.7
md5sum = 59e74d290d623de2c1e147e48f284fba slapos.recipe.template = 2.4.2
mode = 0644
[buildout] [buildout]
parts = eggs-directory = ${buildout:eggs-directory}
\ No newline at end of file develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
parts =
...@@ -105,7 +105,8 @@ promises-directory = $${basedirectory:promises} ...@@ -105,7 +105,8 @@ promises-directory = $${basedirectory:promises}
directory = $${directory:pbs-backup} directory = $${directory:pbs-backup}
cron-entries = $${cron:cron-entries} cron-entries = $${cron:cron-entries}
wrappers-directory = $${directory:pbs-wrappers} wrappers-directory = $${directory:pbs-wrappers}
notifier-url = http://[$${notifier:host}]:$${notifier:port}/ # XXX: this should be named "notifier-host"
notifier-url = http://[$${notifier:host}]:$${notifier:port}
slave-instance-list = $${slap-parameter:slave_instance_list} slave-instance-list = $${slap-parameter:slave_instance_list}
......
...@@ -2,9 +2,29 @@ ...@@ -2,9 +2,29 @@
extends = ${pbsready:output} extends = ${pbsready:output}
parts += # Explicitly define extended parts from pbsready
# then add local parts
parts =
resiliency
logrotate
logrotate-entry-cron
logrotate-entry-equeue
cron
cron-entry-logrotate
sshkeys-authority
dropbear-server
sshkeys-dropbear
dropbear-server-pbs-authorized-key
notifier
cron-entry-backup cron-entry-backup
[resilient-directory]
recipe = slapos.cookbook:mkdirectory
home = $${buildout:directory}
var = $${:home}/var
pid = $${:var}/pid
[resilient-publish-connection-parameter] [resilient-publish-connection-parameter]
notification-id = http://[$${notifier:host}]:$${notifier:port}/get/$${notifier-exporter:name} notification-id = http://[$${notifier:host}]:$${notifier:port}/get/$${notifier-exporter:name}
...@@ -18,6 +38,7 @@ title = Dumping $${slap-parameter:namebase} ...@@ -18,6 +38,7 @@ title = Dumping $${slap-parameter:namebase}
executable = $${exporter:wrapper} executable = $${exporter:wrapper}
wrapper = $${rootdirectory:bin}/exporter wrapper = $${rootdirectory:bin}/exporter
notify = $${slap-parameter:notify} notify = $${slap-parameter:notify}
pidfile = $${resilient-directory:pid}/$${:name}.pid
[cron-entry-backup] [cron-entry-backup]
# Schedule the periodic database dump. # Schedule the periodic database dump.
...@@ -25,5 +46,10 @@ notify = $${slap-parameter:notify} ...@@ -25,5 +46,10 @@ notify = $${slap-parameter:notify}
<= cron <= cron
recipe = slapos.cookbook:cron.d recipe = slapos.cookbook:cron.d
name = backup name = backup
frequency = 0 * * * * frequency = $${slap-parameter:resiliency-backup-periodicity} * * * *
command = $${notifier-exporter:wrapper} command = $${notifier-exporter:wrapper}
[slap-parameter]
# in minutes, modulo 60, in cron.d format (i.e. */15 is accepted).
resiliency-backup-periodicity = 0
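# Illustrative example (not part of the original profile): setting
#   resiliency-backup-periodicity = */15
# makes the cron-entry-backup frequency above expand to "*/15 * * * *",
# i.e. a backup is triggered every 15 minutes.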
...@@ -2,7 +2,21 @@ ...@@ -2,7 +2,21 @@
extends = ${pbsready:output} extends = ${pbsready:output}
parts += # Explicitly define extended parts from pbsready
# then add local parts
parts =
resiliency
logrotate
logrotate-entry-cron
logrotate-entry-equeue
cron
cron-entry-logrotate
sshkeys-authority
dropbear-server
sshkeys-dropbear
dropbear-server-pbs-authorized-key
notifier
import-on-notification import-on-notification
resilient-publish-connection-parameter resilient-publish-connection-parameter
......
## not used at the moment
[buildout]
parts =
request-pull-backup-server
[request-pull-backup-server]
<= slap-connection
recipe = slapos.cookbook:request
name = PBS (Pull Backup Server)
software-url = $${slap-connection:software-release-url}
software-type = pull-backup
return = ssh-key notification-url feeds-url
slave = false
\ No newline at end of file
{% macro replicate(namebase, nbbackup, typeexport, typeimport, heriteLeader='', heriteBackup='') %} {% macro replicate(namebase, nbbackup, typeexport, typeimport, heriteLeader='', heriteBackup='', slapparameter_dict={}) %}
{% set sla_parameter_dict = {} -%}
# prepare sla-parameters
{% if slapparameter_dict is defined -%}
{% for key in slapparameter_dict.keys() -%}
{% if key.startswith('-sla-') -%}
{% do sla_parameter_dict.__setitem__(key, slapparameter_dict.pop(key)) -%}
{% endif -%}
{% endfor -%}
{% endif -%}
## Tells the Backupable recipe that we want a backup ## Tells the Backupable recipe that we want a backup
[resilient] [resilient]
...@@ -18,12 +29,42 @@ software-type = {{typeexport}} ...@@ -18,12 +29,42 @@ software-type = {{typeexport}}
name = {{namebase}}0 name = {{namebase}}0
return = ssh-public-key ssh-url notification-id ip return = ssh-public-key ssh-url notification-id ip
config = number authorized-key notify ip-list namebase config =
# Resilient related parameters
number authorized-key notify ip-list namebase
{% if slapparameter_dict is defined %}
# Software Instance related parameters
{% for parameter_name in slapparameter_dict.keys() %}{{parameter_name}} {% endfor %}
{% endif %}
config-number = 0 config-number = 0
config-authorized-key = {% for id in range(1,nbbackup|int) %} ${request-pbs-{{namebase}}-{{id}}:connection-ssh-key}{% endfor %} config-authorized-key = {% for id in range(1,nbbackup|int) %} ${request-pbs-{{namebase}}-{{id}}:connection-ssh-key}{% endfor %}
config-notify = {% for id in range(1,nbbackup|int) %} ${request-pbs-{{namebase}}-{{id}}:connection-notification-url}{% endfor %} config-notify = {% for id in range(1,nbbackup|int) %} ${request-pbs-{{namebase}}-{{id}}:connection-notification-url}{% endfor %}
config-ip-list = config-ip-list =
# Bubble up all the instance parameters to the requested export instance.
{% if slapparameter_dict is defined %}
{% for parameter_name, parameter_value in slapparameter_dict.items() %}config-{{parameter_name}} = {{parameter_value}}
{% endfor %}
{% endif %}
{% if sla_parameter_dict -%}
{% set sla_key_main = "-sla-%s%s-" % (namebase, 0) -%}
{% set sla_key_secondary = "-sla-%s-" % (0) -%}
{% set sla_key_main_length = sla_key_main | length -%}
{% set sla_key_secondary_length = sla_key_secondary | length -%}
{% set sla_dict = {} -%}
{% for key in sla_parameter_dict.keys() -%}
{% if key.startswith(sla_key_main) -%}
{% do sla_dict.__setitem__(key[sla_key_main_length:], sla_parameter_dict.get(key)) -%}
{% elif key.startswith(sla_key_secondary) and not sla_dict.has_key(key[sla_key_secondary_length:]) -%}
{% do sla_dict.__setitem__(key[sla_key_secondary_length:], sla_parameter_dict.get(key)) -%}
{% endif -%}
{% endfor -%}
{% if sla_dict %}
sla = {{ ' '.join(sla_dict.keys()) }}
{% for key, value in sla_dict.iteritems() -%}
sla-{{ key }} = {{ value }}
{% endfor -%}
{% endif -%}
{% endif -%}
{% for id in range(1,nbbackup|int) %} {% for id in range(1,nbbackup|int) %}
...@@ -45,12 +86,29 @@ config-number = {{id}} ...@@ -45,12 +86,29 @@ config-number = {{id}}
config-authorized-key = ${request-pbs-{{namebase}}-{{id}}:connection-ssh-key} config-authorized-key = ${request-pbs-{{namebase}}-{{id}}:connection-ssh-key}
config-on-notification = ${request-pbs-{{namebase}}-{{id}}:connection-feeds-url}${:pbs-notification-id} config-on-notification = ${request-pbs-{{namebase}}-{{id}}:connection-feeds-url}${:pbs-notification-id}
config-ip-list = config-ip-list =
{% if sla_parameter_dict -%}
sla = computer_guid {% set sla_key_main = "-sla-%s%s-" % (namebase, id) -%}
sla-computer_guid = ${slap-parameter:{{namebase}}{{id}}-computer-guid} {% set sla_key_secondary = "-sla-%s-" % (id) -%}
{% set sla_key_main_length = sla_key_main | length -%}
{% set sla_key_secondary_length = sla_key_secondary | length -%}
{% endfor %} {% set sla_dict = {} -%}
{% for key in sla_parameter_dict.keys() -%}
{% if key.startswith(sla_key_main) -%}
{% do sla_dict.__setitem__(key[sla_key_main_length:], sla_parameter_dict.get(key)) -%}
{% elif key.startswith(sla_key_secondary) and not sla_dict.has_key(key[sla_key_secondary_length:]) -%}
{% do sla_dict.__setitem__(key[sla_key_secondary_length:], sla_parameter_dict.get(key)) -%}
{% endif -%}
{% endfor -%}
{% if sla_dict %}
sla = {{ ' '.join(sla_dict.keys()) }}
{% for key, value in sla_dict.iteritems() -%}
sla-{{ key }} = {{ value }}
{% endfor -%}
{% endif %}
{% endif %}
{% endfor -%}
[iplist] [iplist]
config-ip-list = ${request-{{namebase}}:connection-ip}{% for j in range(1,nbbackup|int) %} ${request-{{namebase}}-pseudo-replicating-{{j}}:connection-ip}{% endfor %} config-ip-list = ${request-{{namebase}}:connection-ip}{% for j in range(1,nbbackup|int) %} ${request-{{namebase}}-pseudo-replicating-{{j}}:connection-ip}{% endfor %}
...@@ -90,8 +148,27 @@ software-type = pull-backup ...@@ -90,8 +148,27 @@ software-type = pull-backup
name = PBS ({{namebase}} / {{id}}) name = PBS ({{namebase}} / {{id}})
return = ssh-key notification-url feeds-url return = ssh-key notification-url feeds-url
slave = false slave = false
sla = computer_guid {% if sla_parameter_dict -%}
sla-computer_guid = ${slap-parameter:pbs-{{namebase}}{{id}}-computer-guid} {% set sla_key_main = "-sla-%s%s-" % ("pbs", id) -%}
{% set sla_key_secondary = "-sla-%s-" % (id) -%}
{% set sla_key_main_length = sla_key_main | length -%}
{% set sla_key_secondary_length = sla_key_secondary | length -%}
{% set sla_dict = {} -%}
{% for key in sla_parameter_dict.keys() -%}
{% if key.startswith(sla_key_main) -%}
{% do sla_dict.__setitem__(key[sla_key_main_length:], sla_parameter_dict.get(key)) -%}
{% elif key.startswith(sla_key_secondary) and not sla_dict.has_key(key[sla_key_secondary_length:]) -%}
{% do sla_dict.__setitem__(key[sla_key_secondary_length:], sla_parameter_dict.get(key)) -%}
{% endif -%}
{% endfor -%}
{% if sla_dict %}
sla = {{ ' '.join(sla_dict.keys()) }}
{% for key, value in sla_dict.iteritems() -%}
sla-{{ key }} = {{ value }}
{% endfor %}
{% endif %}
{% endif %}
[request-pull-backup-server-{{namebase}}-{{id}}] [request-pull-backup-server-{{namebase}}-{{id}}]
<= request-pbs-common <= request-pbs-common
...@@ -135,3 +212,4 @@ pbs-{{namebase}}{{id}}-computer-guid = ...@@ -135,3 +212,4 @@ pbs-{{namebase}}{{id}}-computer-guid =
{% endfor %} {% endfor %}
{% endmacro %} {% endmacro %}
...@@ -73,11 +73,16 @@ eggs = ...@@ -73,11 +73,16 @@ eggs =
[versions] [versions]
# Use SlapOS patched zc.buildout # Use SlapOS patched zc.buildout
zc.buildout = 1.6.0-dev-SlapOS-010 zc.buildout = 1.6.0-dev-SlapOS-012
# zc.recipe.egg 2.x is for Buildout 2 # zc.recipe.egg 2.x is for Buildout 2
zc.recipe.egg = 1.3.2 zc.recipe.egg = 1.3.2
# Use own version of h.r.download to be able to open xz-like archives # Use own version of h.r.download to be able to open xz-like archives
hexagonit.recipe.download = 1.6nxd002 hexagonit.recipe.download = 1.7nxd002
# Use pinned version of setuptools. Other versions work, but changing
# version makes buildout recompile everything. Developers' nightmare.
setuptools = 0.9.8
# Official egg of prettytable has permission problems in EGG-INFO.
prettytable = 0.7.3-nxd001
[networkcache] [networkcache]
download-cache-url = http://www.shacache.org/shacache download-cache-url = http://www.shacache.org/shacache
......