Commit bc915678 authored by Alain Takoudjou's avatar Alain Takoudjou

Update release candidate

parents 58f2c3a6 803411df
Changes
=======
1.0.58 (2018-03-14)
-------------------
* generic.mysql: unregister UDFs before (re)adding UDFs
* Remove obsolete/unused recipes.
* neoppod: add support for new --dedup storage option.
* Use inotify-simple instead of inotifyx.
* erp5.test: remove duplicated code.
* librecipe: bugfixes found by pylint, performance improvements, and major
refactoring of executable wrappers.
* GenericBaseRecipe.createWrapper: remove 'comments' parameter.
* Drop the 'parameters-extra' option and always forward extra parameters.
* wrapper: new 'private-dev-shm' option (useful for wendelin.core).
* generic.cloudooo: OnlyOffice converter supports odf.
* erp5testnode: don't tell git to ignore SSL errors.
1.0.53 (2017-09-13)
-------------------
* check_port_listening: workaround for shebang limitation, reduce to a single file
* erp5.test: pass new --conversion_server_url option to runUnitTest
1.0.52 (2017-07-04)
-------------------
* wrapper: Add option to reserve CPU core
* slapconfiguration: Recipe reads partitions resource file
* neoppod: add support for new --disable-drop-partitions storage option
* random: Fix the monkeypatch in random.py to incorporate the recent changes in buildout 'get' function
* random: Add Integer recipe.
* librecipe.execute: Notify on file moved
* zero_knowledge: allow to set destination folder of configuration file
1.0.50 (2017-04-18)
-------------------
* pbs: Do not parallelize calculus when the heaviest task is IO
* re6st-registry: Refactor integration with re6st registry
* erp5testnode: make shellinabox reuse the password file of pwgen
1.0.48 (2017-01-31)
-------------------
* random-recipe: add option create-once to prevent storage file deletion by buildout
1.0.45 (2017-01-09)
-------------------
* recipe: set default timeout of check url promise to 20 seconds
1.0.44 (2016-12-30)
-------------------
* pbs: handle the fact that some parameters are not present when slaves are down
* recipe: allow usage of pidfile in wrapper recipe
* sshd: fix generation of authorized_keys
1.0.43 (2016-11-24)
-------------------
* pbs: fix trap command for dash interpreter
* pbs: remove infinite loops from pbs scripts.
* random.py: new file containing recipes generating random values.
* testnode: disallow frontend access to all folders, avoiding publishing private repositories
1.0.41 (2016-10-26)
-------------------
* dcron: new parameter to get a random time, with a frequency of once a day
* softwaretype: fix parse error on '+ =' when using buildout 2
* pbs: General improvements and fixes.
1.0.35 (2016-09-19)
-------------------
* pbs: fix/accelerate deployment of resilient instances
* recipe: new recipe to get a free network port
* Remove url-list parameter to download fonts from fontconfig instance
1.0.31 (2016-05-30)
-------------------
* Implement cross recipe cache for registerComputerPartition
* Fix workaround for long shebang (place script in bin)
1.0.30 (2016-05-23)
-------------------
* Implement a workaround for long shebang
* Implement validation for user input SSL certificates
1.0.25 (2016-04-15)
-------------------
* fixup slap configuration: provide instance and root instance title
1.0.22 (2016-04-01)
-------------------
* slap configuration: provide instance and root instance title
1.0.16 (2015-10-27)
-------------------
* kvm recipe: fix bugs in download image and disk creation
1.0.14 (2015-10-26)
-------------------
* kvm recipe: Allow to set keyboard layout language used by qemu and VNC
* simplehttpserver-recipe: fix encoding error
0.103 (2015-07-24)
------------------
* kvm: fix issues with boolean parameters and add 'qed' in external disk format list.
* simplehttpserver-recipe: Add support for POST method which only get content and save into specified file.
0.102 (2015-05-22)
------------------
* kvm-recipe: vm of kvm-cluster can get ipv4/hostname of all other vm in the same cluster
* simplehttpserver-recipe: simple http server to serve files
0.101 (2015-04-29)
------------------
* kvm recipe: new parameters: external-disk-format, numa and cpu-options.
* kvm recipe: allow guest VM to connect to host http service via a local predefined ipv4 address (guestfwd).
0.100 (2015-04-20)
------------------
* re6stnet recipe: re6st-registry log can now be reopened with SIGUSR1
* re6stnet recipe: re6st certificate generation is improved.
0.99 (2015-04-10)
-----------------
* re6stnet: new recipe to deploy re6st registry (re6st master) with slapos.
0.98 (2015-04-09)
-----------------
* shellinabox: do not run in debug mode, it is much slower!
0.97 (2015-03-26)
-----------------
* switch softwaretype recipe: the recipe is backward compatible with old slapos node packages.
* kvm recipe: Avoid getting wrong storage path when creating kvm external disk
0.96 (2015-03-20)
-----------------
* slap configuration: recipe can read from master network information related to a tap interface
* slap configuration: recipe will set up data folder in DATA directory of computer partition if disk is mounted
* switch softwaretype recipe: also generate tap network information when they exist
* switch softwaretype recipe: generate configuration for DATA directory when disk is mounted
0.95 (2015-02-14)
-----------------
* resiliency stack: allow web takeover to work inside of webrunner/erp5testnode.
* resiliency takeover script: create lock file stating that takeover has been done.
0.94 (2015-02-06)
-----------------
* kvm: allow to configure tap and nat interface at the same time with use-nat and use-tap [d3d65916]
* kvm: use -netdev to configure network interface as -net is now obsolete [27baa9d4]
0.85 (2013-12-03)
-----------------
* Slaprunner: recipe replaced by a buildout profile [14fbcd92]
* Slaprunner: import instances can automatically deploy Software Releases [64c48388]
* Slaprunner: backup script passes basic authentication [8877615]
* Slaprunner: backup doesn't destroy symlinks for Software Releases [f519a078]
* Shellinabox: now uses uid and gid to start [e9349c65]
* Shellinabox: can do autoconnection [516e772]
* Librecipe-generic: correction of bash code for /bin/sh compatibility [bee8c9c8]
0.84.2 (2013-10-04)
-------------------
* sshkeys_authority: don't allow to return None as parameter. [9e340a0]
0.84.1 (2013-10-03)
-------------------
* Resiliency: PBS: promise should NOT bang. [64886cd]
0.84 (2013-09-30)
-----------------
* Request.py: improve instance-state handling. [ba5f160]
* Resilient recipe: remove hashing of urls/names. [ee2aec8]
* Resilient pbs recipe: recover from rdiff-backup failures. [be7f2fc, 92ee0c3]
* Resilience: add pidfiles in PBS. [0b3ad5c]
* Resilient: don't hide exception, print it. [05b3d64, d2b0494]
* Resiliency: Only keep 10 increments of backup. [4e89e33]
* KVM SR: add fallback in case of download exception. [de8d796]
* slaprunner: don't check certificate for importer. [53dc772]
0.83.1 (2013-09-10)
-------------------
* slapconfiguration: fixes previous release (don't encode tap_set because it's not a string). [Cedric de Saint Martin]
0.83 (2013-09-10)
-----------------
* slaprunner recipe: remove trailing / from master_url. [Cedric de Saint Martin]
* librecipe: add pidfile option for singletons. [Cedric de Saint Martin]
* Resiliency: Use new pidfile option. [Cedric de Saint Martin]
* Fix request.py for slave instances. [Cedric de Saint Martin]
* slapconfiguration recipe: cast some parameters from unicode to str. [Cedric de Saint Martin]
0.82 (2013-08-30)
-----------------
* Certificate Authority: Can receive certificate to install. [Cedric Le Ninivin]
* Squid: Add squid recipe. [Romain Courteaud]
* Request: Transmit instance state to requested instances. [Benjamin Blanc / Cédric Le Ninivin]
* Slapconfiguration: Now returns instance state. [Cédric Le Ninivin]
* Apache Frontend: Remove recipe.
0.81 (2013-08-12)
-----------------
* KVM SR: implement resiliency test. [Cedric de Saint Martin]
0.80 (2013-08-06)
-----------------
* Add a simple readline recipe. [f4fce7e]
0.79 (2013-08-06)
-----------------
* KVM SR: Add support for NAT based networking (User Mode Network). [627895fe35]
* KVM SR: add virtual-hard-drive-url support. [aeb5df40cd, 8ce5a9aa1d0, a5034801aa9]
* Fix regression in GenericBaseRecipe.generatePassword. [3333b07d33c]
0.78.5 (2013-08-06)
-------------------
* check_url_available: add option to check secure links [6cbce4d8231]
0.78.4 (2013-08-06)
-------------------
* slapos.cookbook:slaprunner: Update to use https. [Cedric Le Ninivin]
0.78.3 (2013-07-18)
-------------------
* slapos.cookbook:publish: Add support to publish information for slaves. [Cedric Le Ninivin]
0.78.2 (2013-07-18)
-------------------
* Fix slapos.cookbook:request: Add backward compatibility about getInstanceGuid(). [Cedric de Saint Martin]
* slapos.cookbook:check_* promises: Add timeout to curl that is not otherwise killed by slapos promise subsystem. [Cedric de Saint Martin]
* Cloudooo: Allow any environment variables. [Yusei Tahara]
* ERP5: disable MariaDB query cache completely by 'query_cache_type = 0' for ERP5. [Kazuhiko Shiozaki]
* ERP5: enable haproxy admin socket and install haproxyctl script. [Kazuhiko Shiozaki]
* ERP5: increase the maximum number of open file descriptors before starting mysqld. [Kazuhiko Shiozaki]
* python 2.7: updated to 2.7.5 [Cedric de Saint Martin]
0.78.1 (2013-05-31)
-------------------
* Add boinc recipe: Allow to deploy an empty BOINC project. [Alain Takoudjou]
* Add boinc.app recipe: Allow to deploy and update a BOINC application into existing BOINC server instance . [Alain Takoudjou]
* Add boinc.client recipe: Allow to deploy a BOINC Client instance on SlapOS. [Alain Takoudjou]
* Add condor recipe: Allow to deploy Condor Manager or Condor worker instance on SlapOS. [Alain Takoudjou]
* Add condor.submit recipe: Allow to deploy or update application into existing Condor Manager instance. [Alain Takoudjou]
* Add redis.server recipe: Allow to deploy Redis server. [Alain Takoudjou]
* Add trac recipe: for deploying Trac and manage project with support of SVN and GIT. [Alain Takoudjou]
* Add bonjourgrid recipe: for deploying BonjourGrid Master and submit BOINC or Condor project. [Alain Takoudjou]
* Add bonjourgrid.client recipe: for deploying BonjourGrid Worker instance and execute BOINC or Condor Jobs. [Alain Takoudjou]
0.78.0 (2013-04-28)
-------------------
* LAMP stack: Allow to give application-dependent parameters to application configuration file. [Cedric de Saint Martin]
* zabbix-agent: Allow user to pass zabbix parameter. [Cedric de Saint Martin]
* kvm frontend: listen to ipv6 and ipv4. [Jean-Baptiste Petre]
0.77.1 (2013-04-18)
-------------------
* Re-release of 0.77.0.
0.77.0 (2013-04-18)
-------------------
* Allow to pass extra parameters when creating simple wrapper. [Sebastien Robin]
* Apache frontend: Append all rewrite module options to http as well. [Cedric de Saint Martin]
* Apache frontend: Add https-only support. [Cedric de Saint Martin]
* Apache frontend: make logrotate work by using "generic" component. [Cedric de Saint Martin]
0.76.0 (2013-04-03)
-------------------
* Add 'generic' phpconfigure recipe, allowing to configure any PHP-based app. [Cedric de Saint Martin]
* apache_frontend: Have more useful access_log in apache frontend. [Cedric de Saint Martin]
* apache_frontend: Add "SSLProxyEngine On" to http apache frontend vhost to be able to proxy https -> http. [Cedric de Saint Martin]
* Add first preliminary version of nginx-based reverse proxy. [Cedric de Saint Martin]
* Request-optional is not verbose anymore (again) if it failed. [Cedric de Saint Martin]
* Add possibility to fetch web ip and port from apache recipe. [Cedric de Saint Martin]
0.75.0 (2013-03-26)
-------------------
* Add backward compatibility about Partition.getInstanceGuid() in request.py. [Cedric de Saint Martin]
* request.py: Don't crash if resource is not ready. [Cedric de Saint Martin]
* Use memory-based kumofs instead of memcached to have no limitation for key length and data size. [Kazuhiko Shiozaki]
* Postgres: allow slapuser# to connect as postgres user. [Marco Mariani]
* apache_frontend: Sanitize inputs, disable Varnish cache, don't touch to custom file if already present. [Cedric de Saint Martin]
* Resiliency: simpler, more robust PBS recipe and promise. [Marco Mariani]
* Add helper method to set "location" parameter in librecipe. [Cedric de Saint Martin]
* Add download helper function in librecipe. [Cedric de Saint Martin]
* Update wrapper recipe to make it simpler and more dev-friendly. [Cedric de Saint Martin]
* Add configurationfile recipe. [Cedric de Saint Martin]
* Add request-edge recipe. [Cedric de Saint Martin]
* Add publishsection recipe. [Cedric de Saint Martin]
* Add match support for promise check_page_content. [Cedric de Saint Martin]
0.74.0 (2013-03-05)
-------------------
* Generate mysql password randomly in LAMP stack. [Alain Takoudjou]
* Add support for apache and haproxy to have more than one listening port. [Vincent Pelletier]
* Use a more consistent parameter naming in 6tunnel recipe. [Vincent Pelletier]
* Provide an SR-transparent way to (de)serialise master data. [Vincent Pelletier]
* Initial version of neoppod recipe. [Vincent Pelletier]
* Initial version of clusterized erp5 recipes. [Vincent Pelletier]
* General cleanup of the request recipe (simpler parsing, less calls to master). [Vincent Pelletier, Cedric de Saint Martin]
0.73.1 (2013-02-19)
-------------------
* softwaretype recipe: all falsy parameter values are now ignored. [Cedric de Saint Martin]
0.73.0 (2013-02-18)
-------------------
* Add mioga and apacheperl recipes. [Viktor Horvath]
* request.py: Properly fetch instance_guid of instance. [Cedric de Saint Martin]
* request.py: Only append SLA parameter to the list if the key actually exists and is not empty string. [Cedric de Saint Martin]
0.72.0 (2013-02-11)
-------------------
* librecipe: correctly handle newline and missing file in addLineToFile(). [Marco Mariani]
* LAMP: Copy php application even if directory exists but is empty. This handles the new resilient LAMP stack. [Cedric de Saint Martin]
* LAMP: Don't even try to restart/reload/graceful Apache. This fixes the "Apache hangs" problem. [Cedric de Saint Martin]
0.71.4 (2013-02-01)
-------------------
* Enable IPv6 support in KumoFS. [Vincent Pelletier]
* Use new connection and get result when try to create new erp5 site. [Rafael Monnerat]
* Set up timezone database in mariadb's mysql table so that we can use timezone conversion function. [Kazuhiko Shiozaki]
* Make erp5_bootstrap wait for manage_addERP5Site response [Rafael Monnerat]
0.71.3 (2013-01-31)
-------------------
* Add mysql_ip and mysql_port parameters in apachephp recipe [Cedric de Saint
Martin]
* Random password for postgres in standalone SR and lapp stack; accept
connections from the world. [Marco Mariani]
0.71.2 (2013-01-29)
-------------------
* revised postgres/lapp recipe. [Marco Mariani]
0.71.1 (2013-01-04)
-------------------
* Frontend: Sort instances by reference to avoid attacks. [Cedric de Saint
Martin]
* Frontend: Add public_ipv4 parameter support to ease deployment of slave
frontend. [Cedric de Saint Martin]
* Frontend: Move apache_frontend wrappers to watched directory (etc/service).
[Cedric de Saint Martin]
* Frontend: Add native path to varnish environment. [Cedric de Saint Martin]
0.71 (2012-12-20)
-----------------
* frontend: Add "path" parameter for Zope instances. [Cedric de Saint Martin]
0.70 (2012-11-05)
-----------------
* KVM: Add support for disk-type, second nbd and cpu-count. [Cedric de Saint
Martin]
0.69 (2012-10-30)
-----------------
* handle multiple notification_url values in notifier recipe [Marco Mariani]
* createWrapper() sh alternative to execute.execute() for simple cases
[Marco Mariani]
* fixed secret key generation in apachephp config [Marco Mariani]
0.68.1 (2012-10-03)
-------------------
* slaprunner: fix "logfile" parameter to "log_file"
0.68 (2012-10-02)
-----------------
* request.py: Remove useless calls to master, fix "update" method. [Cedric
de Saint Martin]
* Add webrunner test recipe. [Alain Takoudjou]
* Add logfile for slaprunner. [Cedric de Saint Martin]
* Fix check_url_available promise (syntax + checks + IPv6 support). [Cedric
de Saint Martin]
0.67 (2012-09-26)
-----------------
* Add check_page_content promise generator. [Cedric Le Ninivin]
* Fix check_url_available recipe. [Cedric de Saint Martin]
* Set up timezone database in mariadb's mysql table so that we can use
timezone conversion function. [Kazuhiko Shiozaki]
* Add many resiliency-based recipes [Timothée Lacroix]
* Fix and unify request and requestoptional recipes [Cedric de Saint Martin]
* Fix Dropbear. [Antoine Catton]
0.66 (2012-09-10)
-----------------
* Add check_page_content promise generator. [Cedric Le Ninivin]
0.65 (2012-09-07)
-----------------
* Add egg_test, recipe allowing to do "python setup.py test" on a list of
eggs. [Rafael Monnerat, Cedric de Saint Martin]
0.64.2 (2012-08-28)
-------------------
* Specify description on gitinit recipe. [Antoine Catton]
0.64.1 (2012-08-28)
-------------------
* Fix: minor fix on downloader recipe in order to allow cross-device renaming.
[Antoine Catton]
0.64 (2012-08-27)
-----------------
* Fix: remove "template" recipe which was collinding with slapos.recipe.template.
[Antoine Catton]
0.63 (2012-08-22)
-----------------
* Add the ability to run command line in shellinabox. [Antoine Catton]
* Add the ability to run shellinabox as root. (for LXC purpose) [Antoine Catton]
* Add "uuid" recipe. [Antoine Catton]
* Add "downloader" recipe. [Antoine Catton]
0.62 (2012-08-21)
-----------------
* Add "wrapper" recipe. [Antoine Catton]
* Add "gitinit" recipe. [Antoine Catton]
* librecipe.execute code clean up and factorization. [Antoine Catton]
* Add "template" recipe. [Antoine Catton]
0.61 (2012-08-17)
-----------------
* Add "debug" option for slaprunner. [Alain Takoudjou]
0.60 (2012-08-13)
-----------------
* New recipe: requestoptional, like "request", but won't fail if instance is
not ready. [Cedric de Saint Martin]
* Update zabbix to return strings as parameters. [Cedric de Saint Martin]
* Add check in check_url_promise in case of empty URL. [Cedric de Saint
Martin]
* Upgrade slaprunner recipe to be compatible with newest version. [Alain
Takoudjou]
0.59 (2012-07-12)
-----------------
* Zabbix: add temperature monitoring using custom commands.
0.58 (2012-07-06)
-----------------
* Agent rewrite. [Vincent Pelletier]
0.57 (2012-06-22)
-----------------
* Do not use system curl. [Romain Courteaud]
0.56 (2012-06-18)
-----------------
* Add signalwrapper, generate.mac, generate.password recipes. [Romain
Courteaud]
0.55 (2012-06-18)
-----------------
* Add slapmonitor and slapreport recipes. [Mohamadou Mbengue]
0.54.1 (2012-06-18)
-------------------
* Fix 0.54 release containing wrong code in request.py.
0.54 (2012-06-18)
-----------------
* Apache frontend: won't block sending slave information to SlapOS Master
in case of problem from one slave instance. [Cedric de Saint Martin]
* Apache frontend will send IP information for slaves in case the slave uses a
custom domain. [Cedric de Saint Martin]
* Ability to use LAMP applications without configuration. [Cedric de Saint
Martin]
* Users can specify custom domain in LAMP applications. [Cedric de Saint
Martin]
0.53 (2012-06-07)
-----------------
* Switch slaprunner into generic recipe, and add cloud9 recipe. [Cedric de
Saint Martin]
0.52 (2012-05-16)
-----------------
* Request bugfix: Correct default software_type (was: RootInstanceSoftware).
[Cedric de Saint Martin]
* Request will raise again if requested instance is not ready
[Romain Courteaud]
* Apache Frontend: assume apache is available from standard ports.
Consequence: url connection parameter of slave instance doesn't contain
port. [Cedric de Saint Martin]
* Apache Frontend bugfix: correctly detect slave instance type (zope).
[Cedric de Saint Martin]
* Apache Frontend: "default" slave instances are available through http
in addition to https. [Cedric de Saint Martin]
* Apache Frontend: Configuration: Add mod_deflate and set ProxyPreserveHost
[Cedric de Saint Martin]
0.51 (2012-05-14)
-----------------
* LAMP stack bugfix: Users were losing data when slapgrid is ran (Don't
erase htdocs if it already exist). [Cedric de Saint Martin]
0.50 (2012-05-12)
-----------------
* LAMP stack bugfix: fix a crash where recipe was trying to restart
non-existent httpd process. [Cedric de Saint Martin]
* LAMP stack bugfix: don't erase htdocs at update [Cedric de Saint Martin]
* Apache Frontend: Improve Apache configuration, inspired by Nexedi
production frontend. [Cedric de Saint Martin]
* Allow sysadmin of node to customize frontend instance.
[Cedric de Saint Martin]
* Apache Frontend: Change 'zope=true' option to 'type=zope'.
[Cedric de Saint Martin]
* Apache Frontend: listens to plain http port as well to redirect to https.
[Cedric de Saint Martin]
0.49 (2012-05-10)
-----------------
* Apache Frontend supports Zope and Varnish. [Cedric de Saint Martin]
0.48 (2012-04-26)
-----------------
* New utility recipe: slapos.recipe.generate_output_if_input_not_null.
[Cedric de Saint Martin]
* New promise recipe: slapos.recipe.url_available: check if url returns http
code 200. [Cedric de Saint Martin]
* Fix: slapos.recipe.request won't raise anymore if instance is not ready.
[Cedric de Saint Martin]
* Fix: slapos.recipe.request won't assume instance reference if not
specified. [Cedric de Saint Martin]
0.47 (2012-04-19)
-----------------
* Slap Test Agent [Yingjie Xu]
0.46 (2012-04-12)
-----------------
* xvfb and firefox initial release [Romain Courteaud]
0.45 (2012-03-29)
-----------------
* slaprunner: change number of available partitions to 7 [Alain Takoudjou]
0.44 (2012-03-28)
-----------------
* minor: apachephp: update apache configuration to work with Apache2.4
0.43 (2012-03-28)
-----------------
* minor: erp5: add missing .zcml files into egg. [Cedric de Saint Martin]
0.42 (2012-03-26)
-----------------
* erp5: Add web_checker recipe. [Tatuya Kamada]
* erp5: Add generic_varnish recipe. [Tatuya Kamada]
* erp5: Simplify erp5_update to only create the ERP5 site. [Romain Courteaud]
* erp5: Allow to pass CA parameters from section. [Łukasz Nowak]
0.41 (2012-03-21)
-----------------
* Release new "generic" version of KVM, includes frontend.
[Cedric de Saint Martin]
0.40.1 (2012-03-01)
-------------------
* Fix manifest to include files needed for apache. [Cedric de Saint Martin]
0.40 (2012-03-01)
-----------------
* apache_frontend initial release. [Cedric de Saint Martin]
0.39 (2012-02-20)
-----------------
* seleniumrunner initial release. [Cedric de Saint Martin]
0.38 (2011-12-05)
-----------------
* erp5: Switch to percona, as maatkit is obsolete. [Sebastien Robin]
* erp5: Improve haproxy configuration. [Sebastien Robin]
* erp5: Support sphinxd. [Kazuhiko Shiozaki]
* erp5: Improve and make logging more usual. [Sebastien Robin]
* erp5: Allow mysql connection from localhost. [Romain Courteaud]
* erp5: Allow to control Zope/Zeo cache [Arnaud Fontaine]
* erp5: Increase precision in logs [Julien Muchembled]
* erp5: Improve erp5 update [Arnaud Fontaine, Rafael Monnerat]
0.37 (2011-11-24)
-----------------
* KVM : allow access to several KVM instances without SSL certificate duplicate
problem. [Cedric de Saint Martin]
0.36 (2011-11-16)
-----------------
* erp5testnode : the code of testnode is not in slapos repository anymore
0.35 (2011-11-10)
-----------------
* KVM : Promises are now working properly. [Łukasz Nowak]
* KVM : Use NoVNC with automatic login. [Cedric de Saint Martin]
* KVM : Use websockify egg and remove numpy hack. [Cedric de Saint Martin]
0.34 (2011-11-08)
-----------------
* Any LAMP software can specify its own php.ini [Alain Takoudjou]
* LAMP : Fix bug where buildout does not have sufficient rights to update
application parts. [Alain Takoudjou]
* LAMP : Update formatting when returning list of renamed files.
[Alain Takoudjou]
0.33 (2011-10-31)
-----------------
* erp5 : use percona toolkit instead of maatkit [Sebastien Robin]
0.32 (2011-10-28)
-----------------
* LAMP : Recipe can now call lampconfigure from slapos.toolbox which will
configure PHP application instance when needed. [Alain Takoudjou Kamdem]
0.31 (2011-10-16)
-----------------
* Split big redundant recipes into small ones, in order to factorize the code
and have everything in the buildout file. [Antoine Catton, Romain Courteaud,
Łukasz Nowak]
* LAMP : Update apache and php configuration files to work with a lot of different
PHP software. [Alain Takoudjou Kamdem]
* LAMP : Recipe can launch scripts, move or remove files or directories
when a given condition is filled. Useful when PHP apps require you to
remove "admin" directory after configuration for example.
[Alain Takoudjou Kamdem]
0.30 (2011-10-06)
-----------------
* LAMP : Update apache and php configuration files to work with a lot of different
PHP software. [Alain Takoudjou Kamdem]
0.29 (2011-09-28)
-----------------
* mysql: bug fix on database recovering (avoid importing dump two times). [Antoine Catton]
0.28 (2011-09-27)
-----------------
* lamp.request: requesting the mariadb software release instead of itself. [Antoine Catton]
* lamp.request: adding support of remote backup repo (using a different
software type). The default remote backup is a davstorage. [Antoine Catton]
0.27 (2011-09-27)
-----------------
* mysql: add backup and backup recovering using different software type. [Antoine Catton]
0.26 (2011-09-27)
-----------------
* Davstorage: returning more explicit url (using webdav scheme). [Antoine Catton]
* Other mysql minor fixes. [Antoine Catton]
0.25 (2011-09-21)
-----------------
* mysql: Restore to default behaviour. [Antoine Catton]
* mysql: Use mysqldump instead of non trustable backup system. [Antoine Catton]
0.24 (2011-09-19)
-----------------
* mysql: Unhardcode the requested url. [Antoine Catton]
0.23 (2011-09-19)
-----------------
* Clean code in mysql recipe [Cedric de Saint Martin]
* librecipe: Provide createPromiseWrapper method. [Antoine Catton]
* kvm: Expose promise checks to slapgrid. [Antoine Catton]
* davstorage: Initial version. [Antoine Catton]
* mysql: Support DAV backup. [Antoine Catton]
0.22 (2011-09-12)
-----------------
* Fix haproxy setup for erp5 [Sebastien Robin]
0.21 (2011-09-12)
-----------------
* Update PHP configuration to set session and date options.
[Alain Takoudjou Kamdem]
* Improve logrotate policy and haproxy config for erp5
[Sebastien Robin]
0.20 (2011-09-07)
-----------------
* Update and fix KVM/noVNC installation to be compatible with new WebSocket
protocol (HyBi-10) required by Chrome >= 14 and Firefox >= 7.
[Cedric de Saint Martin]
0.19 (2011-09-06)
-----------------
* Update PHP configuration to disable debug logging. [Cedric de Saint Martin]
0.18 (2011-08-25)
-----------------
* Repackage egg to include needed .bin files. [Cedric de Saint Martin]
0.17 (2011-08-25)
-----------------
* Add XWiki software release [Cedric de Saint Martin]
0.16 (2011-07-15)
-----------------
* Improve Vifib and pure ERP5 instantiation [Rafael Monnerat]
* Use configurator for Vifib [Rafael Monnerat]

0.15 (2011-07-13)
-----------------
* Encrypt connection by default. [Vivien Alger]

0.14 (2011-07-13)
-----------------
* Provide new way to instantiate kvm. [Cedric de Saint Martin, Vivien Alger]

0.13 (2011-07-13)
-----------------
* Implement generic execute_wait wrapper, which allows to wait for some files
  to appear before starting service depending on it. [Łukasz Nowak]

0.12 (2011-07-11)
-----------------
* Fix slaprunner, phpmyadmin software releases, added
  wordpress software release. [Cedric de Saint Martin]

0.11 (2011-07-07)
-----------------
* Enable test suite runner for vifib.

0.10 (2011-07-01)
-----------------
* Add PHPMyAdmin software release used in SlapOS tutorials
  [Cedric de Saint Martin]
* Add slaprunner software release [Cedric de Saint Martin]

0.9 (2011-06-24)
----------------
* mysql recipe: Change slapos.recipe.erp5.execute to
  slapos.recipe.librecipe.execute [Cedric de Saint Martin]

0.8 (2011-06-15)
----------------
* Add MySQL and MariaDB standalone software release and recipe
  [Cedric de Saint Martin]
* Fixed slapos.recipe.erp5testnode instantiation [Sebastien Robin]

0.7 (2011-06-14)
----------------
* Fix slapos.recipe.erp5 package by providing site.zcml in it. [Łukasz Nowak]
* Improve slapos.recipe.erp5testnode partition instantiation error reporting
  [Sebastien Robin]

0.6 (2011-06-13)
----------------
* Fixed slapos.recipe.erp5 instantiation. [Łukasz Nowak]

0.5 (2011-06-13)
----------------
* Implement zabbix agent instantiation. [Łukasz Nowak]
* Drop dependency on Zope2. [Łukasz Nowak]
* Share more in slapos.recipe.librecipe module. [Łukasz Nowak]

0.4 (2011-06-09)
----------------
* Remove reference to slapos.tool.networkcache as it was removed from pypi. [Łukasz Nowak]
* Add Kumofs standalone software release and recipe [Cedric de Saint Martin]
* Add Memcached standalone software release and recipe [Cedric de Saint Martin]

0.3 (2011-06-09)
----------------
* Moved out template and build to separate distributions [Łukasz Nowak]
* Depend on slapos.core instead of deprecated slapos.slap [Romain Courteaud]
* Fix apache module configuration [Kazuhiko Shiozaki]
* Allow to control full environment in erp5 module [Łukasz Nowak]

0.2 (2011-05-30)
----------------
* Allow to pass zope_environment in erp5 entry point [Łukasz Nowak]

0.1 (2011-05-27)
----------------
* All slapos.recipe.* became slapos.cookbook:* [Łukasz Nowak]

For older entries, see https://lab.nexedi.com/nexedi/slapos/blob/a662db75cc840df9d4664a9d048ef28ebfff4d50/CHANGES.rst
@@ -6,6 +6,13 @@ parts = file
 extends =
   ../zlib/buildout.cfg
 
+[file-msooxml]
+recipe = slapos.recipe.build:download
+url = ${:_profile_base_location_}/msooxml
+md5sum = c889ad135cbfb343db36b729a3897432
+location = ${buildout:parts-directory}/${:_buildout_section_name_}
+filename = msooxml
+
 [file]
 recipe = slapos.recipe.cmmi
 url = http://ftp.icm.edu.pl/packages/file/file-5.32.tar.gz
@@ -15,3 +22,7 @@ configure-options =
 environment =
   CPPFLAGS=-I${zlib:location}/include
   LDFLAGS=-L${zlib:location}/lib -Wl,-rpath=${zlib:location}/lib
+pre-configure =
+# patch so that msooxml files are detected correctly
+  test -f ./magic/Magdir/msooxml
+  cp ${file-msooxml:location}/msooxml ./magic/Magdir/
#------------------------------------------------------------------------------
# $File: msooxml,v 1.5 2014/08/05 07:38:45 christos Exp $
# msooxml: file(1) magic for Microsoft Office XML
# From: Ralf Brown <ralf.brown@gmail.com>
# .docx, .pptx, and .xlsx are XML plus other files inside a ZIP
# archive. The first member file is normally "[Content_Types].xml",
# but some libreoffice generated files put this later. Perhaps skip
# the "[Content_Types].xml" test?
# Since MSOOXML doesn't have anything like the uncompressed "mimetype"
# file of ePub or OpenDocument, we'll have to scan for a filename
# which can distinguish between the three types
# start by checking for ZIP local file header signature
0 string PK\003\004
!:strength +10
# make sure the first file is correct
>0x1E regex \\[Content_Types\\]\\.xml|_rels/\\.rels
# skip to the second local file header
# since some documents include a 520-byte extra field following the file
# header, we need to scan for the next header
>>(18.l+49) search/2000 PK\003\004
# now skip to the *third* local file header; again, we need to scan due to a
# 520-byte extra field following the file header
>>>&26 search/1000 PK\003\004
# and check the subdirectory name to determine which type of OOXML
# file we have. Correct the mimetype with the registered ones:
# http://technet.microsoft.com/en-us/library/cc179224.aspx
>>>>&26 string word/ Microsoft Word 2007+
!:mime application/vnd.openxmlformats-officedocument.wordprocessingml.document
>>>>&26 string ppt/ Microsoft PowerPoint 2007+
!:mime application/vnd.openxmlformats-officedocument.presentationml.presentation
>>>>&26 string xl/ Microsoft Excel 2007+
!:mime application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
>>1104 search/300 PK\003\004
# and check the subdirectory name to determine which type of OOXML
# file we have. Correct the mimetype with the registered ones:
# http://technet.microsoft.com/en-us/library/cc179224.aspx
>>>&26 string word/ Microsoft Word 2007+
!:mime application/vnd.openxmlformats-officedocument.wordprocessingml.document
>>>&26 string ppt/ Microsoft PowerPoint 2007+
!:mime application/vnd.openxmlformats-officedocument.presentationml.presentation
>>>&26 string xl/ Microsoft Excel 2007+
!:mime application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
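The subdirectory scan above can be mirrored in plain Python: OOXML documents are ZIP archives, and the member-name prefix (word/, ppt/ or xl/) reveals the subtype. This is a sketch, not what file(1) does internally — the magic rules scan raw local file headers, while zipfile reads the central directory:

```python
import zipfile

# MIME types registered for the three OOXML flavours, as in the magic
# rules above.
OOXML_TYPES = {
    'word/': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
    'ppt/': 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
    'xl/': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
}

def guess_ooxml_mimetype(path_or_file):
    """Classify an OOXML document by the prefix of its ZIP member names."""
    with zipfile.ZipFile(path_or_file) as archive:
        for name in archive.namelist():
            for prefix, mimetype in OOXML_TYPES.items():
                if name.startswith(prefix):
                    return mimetype
    return None
```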
@@ -32,7 +32,7 @@ configure-options =
   --with-png=${libpng:location}
   --with-static-proj4=${proj4:location}
   --with-sqlite3=${sqlite3:location}
-  --with-wepb=${webp:location}
+  --with-webp=${webp:location}
   --with-xml2=${libxml2:location}/bin/xml2-config
 environment =
   PATH=${xz-utils:location}/bin:%(PATH)s
 [buildout]
+extends =
+  ../gcc/buildout.cfg
 parts = icu4c
 
 [icu4c]
+# needed for couchdb
 recipe = slapos.recipe.cmmi
 location = ${buildout:parts-directory}/${:_buildout_section_name_}
 url = http://download.icu-project.org/files/icu4c/58.2/icu4c-58_2-src.tgz
@@ -13,6 +15,14 @@ configure-options =
   --disable-static
   --enable-rpath
 
+[icu4c-slaposgcc]
+# needed for onlyoffice-core
+<= icu4c
+environment =
+  PATH=${gcc:location}/bin:%(PATH)s
+  LD_LIBRARY_PATH=${gcc:location}/lib:${gcc:location}/lib64
+  LDFLAGS=-Wl,-rpath=${gcc:location}/lib -Wl,-rpath=${gcc:location}/lib64
+
 [icu]
 <= icu4c
@@ -36,11 +36,20 @@ slapos_promisee =
   directory:plugin
   file:lib/rt.jar
   file:bin/java
-# http://java.com/en/download/manual_java7.jsp
-x86 = http://javadl.sun.com/webapps/download/AutoDL?BundleId=97358 22d970566c418499d331a2099d77c548
-x86-64 = http://javadl.sun.com/webapps/download/AutoDL?BundleId=97360 f4f7f7335eaf2e7b5ff455abece9d5ed
+# https://www.java.com/en/download/manual.jsp
+# Update 161
+x86 = http://javadl.oracle.com/webapps/download/AutoDL?BundleId=230530_2f38c3b165be4555a1fa6e98c45e0808 32db95dd417fd7949922206b2a61aa19
+x86-64 = http://javadl.oracle.com/webapps/download/AutoDL?BundleId=230532_2f38c3b165be4555a1fa6e98c45e0808 4385bc121b085862be623f4a31e7e0b4
 script =
   if not self.options.get('url'): self.options['url'], self.options['md5sum'] = self.options[guessPlatform()].split(' ')
   extract_dir = self.extract(self.download(self.options['url'], self.options.get('md5sum')))
   workdir = guessworkdir(extract_dir)
   self.copyTree(workdir, "%(location)s")
+
+[java-re-8-output]
+# Shared binary location to ease migration
+recipe = plone.recipe.command
+stop-on-error = true
+update-command = ${:command}
+command = ${coreutils-output:test} -x ${:keytool}
+keytool = ${java-re-8:location}/bin/keytool
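The x86 / x86-64 options above each pack a URL and its md5sum into a single value; the recipe's script splits the pair selected by guessPlatform(). A hypothetical sketch of that selection — guess_platform and the option values here are illustrative stand-ins, not the real slapos helper:

```python
import platform

# Hypothetical stand-in for the guessPlatform() helper used by the recipe.
def guess_platform():
    return 'x86-64' if platform.machine() in ('x86_64', 'amd64') else 'x86'

# Illustrative option values in the same "URL MD5SUM" format as above.
options = {
    'x86': 'http://example.invalid/jre-i586.tar.gz d41d8cd98f00b204e9800998ecf8427e',
    'x86-64': 'http://example.invalid/jre-x64.tar.gz d41d8cd98f00b204e9800998ecf8427e',
}

# Same split the recipe performs before downloading.
url, md5sum = options[guess_platform()].split(' ')
```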
@@ -11,7 +11,7 @@ url = https://github.com/logrotate/logrotate/releases/download/3.11.0/logrotate-
 md5sum = 3a9280e4caeb837427a2d54518fbcdac
 # BBB this is only for backward-compatibility.
 post-install =
-  ln -s . ${buildout:parts-directory}/${:_buildout_section_name_}/usr
+  ln -sf . ${buildout:parts-directory}/${:_buildout_section_name_}/usr
 environment =
   PATH=${xz-utils:location}/bin:%(PATH)s
   CPPFLAGS=-I${popt:location}/include
@@ -16,6 +16,7 @@ extends =
   ../readline/buildout.cfg
   ../xz-utils/buildout.cfg
   ../zlib/buildout.cfg
+  ../unixodbc/buildout.cfg
 # The following lines are only for mariarocks.cfg
 # to be extended last without touching 'parts'.
   ../gcc/buildout.cfg
@@ -62,16 +63,18 @@ configure-options =
   -DCMAKE_C_FLAGS="${:CMAKE_CFLAGS}"
   -DCMAKE_CXX_FLAGS="${:CMAKE_CFLAGS}"
   -DCMAKE_INSTALL_RPATH=${:CMAKE_LIBRARY_PATH}
+  -DCMAKE_INCLUDE_PATH=${unixodbc:location}/include
+  -DCMAKE_LIBRARY_PATH=${unixodbc:location}/lib
-CMAKE_CFLAGS = -I${bzip2:location}/include -I${jemalloc:location}/include -I${libaio:location}/include -I${libxml2:location}/include -I${ncurses:location}/include -I${openssl:location}/include -I${readline5:location}/include -I${xz-utils:location}/include -I${zlib:location}/include ${:extra_cflags}
+CMAKE_CFLAGS = -I${bzip2:location}/include -I${jemalloc:location}/include -I${libaio:location}/include -I${libxml2:location}/include -I${ncurses:location}/include -I${openssl:location}/include -I${readline5:location}/include -I${xz-utils:location}/include -I${zlib:location}/include -I${unixodbc:location}/include ${:extra_cflags}
-CMAKE_LIBRARY_PATH = ${bzip2:location}/lib:${jemalloc:location}/lib:${libaio:location}/lib:${libxml2:location}/lib:${ncurses:location}/lib:${openssl:location}/lib:${readline5:location}/lib:${xz-utils:location}/lib:${zlib:location}/lib${:extra_library_path}
+CMAKE_LIBRARY_PATH = ${bzip2:location}/lib:${jemalloc:location}/lib:${libaio:location}/lib:${libxml2:location}/lib:${ncurses:location}/lib:${openssl:location}/lib:${readline5:location}/lib:${xz-utils:location}/lib:${zlib:location}/lib:${unixodbc:location}/lib:${:extra_library_path}
 extra_cflags =
 extra_include_path =
 extra_library_path =
 environment =
   CMAKE_PROGRAM_PATH=${cmake:location}/bin
-  CMAKE_INCLUDE_PATH=${bzip2:location}/include:${libaio:location}/include:${libaio:location}/include:${libxml2:location}/include:${ncurses:location}/include:${openssl:location}/include:${readline5:location}/include:${xz-utils:location}/include:${zlib:location}/include${:extra_include_path}
+  CMAKE_INCLUDE_PATH=${bzip2:location}/include:${libaio:location}/include:${libaio:location}/include:${libxml2:location}/include:${ncurses:location}/include:${openssl:location}/include:${readline5:location}/include:${xz-utils:location}/include:${zlib:location}/include:${unixodbc:location}/include:${:extra_include_path}
   CMAKE_LIBRARY_PATH=${:CMAKE_LIBRARY_PATH}
-  LDFLAGS=-L${bzip2:location}/lib -L${jemalloc:location}/lib -L${libaio:location}/lib -L${xz-utils:location}/lib -L${zlib:location}/lib
+  LDFLAGS=-L${bzip2:location}/lib -L${jemalloc:location}/lib -L${libaio:location}/lib -L${xz-utils:location}/lib -L${zlib:location}/lib -L${unixodbc:location}/lib
   PATH=${patch:location}/bin:%(PATH)s
 post-install =
   mkdir -p ${:location}/include/wsrep &&
 # Do not extend any file that touch buildout:parts.
 [mariadb]
-version = 10.2.11
+version = 10.2.13
-md5sum = 954088299fe5f11b4fda3b540558adbd
+md5sum = 20c61bd4059ba287e54cfb2862bae81d
 stable-patches =
 configure-options +=
-# force build of TokuDB due to a regression in 10.2.11
-  -DTOKUDB_OK=1
   -DCMAKE_C_COMPILER=${gcc:location}/bin/gcc
   -DCMAKE_CXX_COMPILER=${gcc:location}/bin/g++
 extra_cflags = -I${zstd:location}/include
@@ -5,6 +5,7 @@ extends =
   ../pkgconfig/buildout.cfg
   ../openssl/buildout.cfg
   ../zlib/buildout.cfg
+  ../python-2.7/buildout.cfg
 
 parts =
   nodejs
@@ -12,6 +13,24 @@ parts =
 [nodejs]
 <= nodejs-0.12
 
+[nodejs-8.6.0]
+# Server-side Javascript.
+recipe = slapos.recipe.cmmi
+version = v8.6.0
+url = https://nodejs.org/dist/${:version}/node-${:version}.tar.gz
+md5sum = 0c95e08220667d8a18b97ecec8218ac6
+configure-options =
+  --shared-openssl
+  --shared-openssl-includes=${openssl:location}/include
+  --shared-openssl-libpath=${openssl:location}/lib
+environment =
+  HOME=${buildout:parts-directory}/${:_buildout_section_name_}
+  PATH=${pkgconfig:location}/bin:${python2.7:location}/bin/:%(PATH)s
+  PKG_CONFIG_PATH=${openssl:location}/lib/pkgconfig/
+  CPPFLAGS=-I${zlib:location}/include
+  LDFLAGS=-Wl,-rpath=${openssl:location}/lib -L${zlib:location}/lib -Wl,-rpath=${zlib:location}/lib
+  LD_LIBRARY_PATH=${openssl:location}/lib
+
 [nodejs-5]
 # Server-side Javascript.
 recipe = slapos.recipe.cmmi
[buildout]
extends =
../gcc/buildout.cfg
../libxml2/buildout.cfg
../zlib/buildout.cfg
../icu/buildout.cfg
# for qmake
../qt/buildout.cfg
parts +=
onlyoffice-core
[onlyoffice-core]
recipe = slapos.recipe.cmmi
location = ${buildout:parts-directory}/${:_buildout_section_name_}
# This url points at the commit hash of the DocumentServer core submodule.
# https://github.com/ONLYOFFICE/DocumentServer/
url = https://lab.nexedi.com/bk/onlyoffice_core/repository/archive.tar.bz2?ref=b051e75b179b3599c09937668fbbd2d7e2c50683
md5sum = b2713373d687dd1c7121c286fa156626
configure-command = true
make-targets = lib bin
environment =
PATH=${gcc:location}/bin:${qt5-qmake:location}/bin:%(PATH)s
CXXFLAGS=-I${libxml2:location}/include -I${zlib:location}/include -I${icu4c-slaposgcc:location}/include -I${boost-lib:location}/include -Wno-comment -Wno-deprecated-declarations -Wno-endif-labels -Wno-parentheses -Wno-reorder -Wno-sign-compare -Wno-switch -Wno-unknown-pragmas -Wno-unused
LDFLAGS=-L${gcc:location}/lib -Wl,-rpath=${gcc:location}/lib -L${gcc:location}/lib64 -Wl,-rpath=${gcc:location}/lib64 -L${libxml2:location}/lib -Wl,-rpath=${libxml2:location}/lib -L${zlib:location}/lib -Wl,-rpath=${zlib:location}/lib -L${icu4c-slaposgcc:location}/lib -Wl,-rpath=${icu4c-slaposgcc:location}/lib -L${boost-lib:location}/lib -Wl,-rpath=${boost-lib:location}/lib -Wl,-rpath=${:location}/lib
post-install =
set -e -x
mkdir -p ${:location}/bin ${:location}/lib
mv -t ${:location}/lib build/lib/*/*.so
mv -t ${:location}/bin build/bin/*/*
# the binary linux_64 in build/bin/AllFontsGen is renamed AllFontsGen here.
# mv build/bin/AllFontsGen/* ${:location}/bin/AllFontsGen
[buildout]
extends =
../dash/buildout.cfg
../curl/buildout.cfg
parts +=
onlyoffice-x2t
[onlyoffice-x2t]
recipe = slapos.recipe.build
url = https://lab.nexedi.com/tc/bin/raw/fc3af671d3b19e9d25b40326373222b601f23edc/onlyoffice-x2t-part.tar.gz
md5sum = 3e08a8b1345c301078cdce3a7f7360b2
# script to install.
script =
location = %(location)r
self.failIfPathExists(location)
import sys
extract_dir = self.extract(self.download(self.options['url'], self.options.get('md5sum')))
shutil.move(extract_dir, location)
wrapper_location = os.path.join("%(location)s", "x2t")
wrapper = open(wrapper_location, 'w')
wrapper.write('''#!${dash:location}/bin/dash
export LD_LIBRARY_PATH=%(location)s/lib:${curl:location}/lib
exec %(location)s/bin/x2t "$@"''')
wrapper.close()
os.chmod(wrapper_location, 0755)
os.chmod(location, 0750)
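The script above generates a tiny shell wrapper that extends LD_LIBRARY_PATH and execs the real x2t binary. A generic sketch of that pattern — make_wrapper and the paths are illustrative, and the recipe itself hardcodes the dash and curl locations:

```python
import os
import stat

# Illustrative helper: write an executable shell wrapper that extends the
# library search path before exec'ing the wrapped binary, as the
# [onlyoffice-x2t] script does for x2t.
def make_wrapper(path, target, lib_dir, shell='/bin/sh'):
    with open(path, 'w') as f:
        f.write('#!%s\nexport LD_LIBRARY_PATH=%s\nexec %s "$@"\n'
                % (shell, lib_dir, target))
    # Mark the wrapper executable, keeping existing permission bits.
    st = os.stat(path)
    os.chmod(path, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return path
```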
@@ -57,22 +57,27 @@ location = ${buildout:parts-directory}/${:_buildout_section_name_}
 <= debian-netinst-base
 arch = amd64
 
+[debian-amd64-wheezy-netinst.iso]
+<= debian-amd64-netinst-base
+version = 7.11.0
+md5sum = 096c1c18b44c269808bd815d58c53c8f
+
 [debian-amd64-jessie-netinst.iso]
+# Download the installer of Debian 8 (Jessie)
 <= debian-amd64-netinst-base
 release = archive
 version = 8.10.0
 md5sum = 19dcfc381bd3e609c6056216d203f5bc
 
 [debian-amd64-netinst.iso]
+# Download the installer of Debian 9 (Stretch)
 <= debian-amd64-netinst-base
 release = release/current
-version = 9.3.0
+version = 9.4.0
-md5sum = db8ab7871bc2b7d456c4746e706fb5d3
+md5sum = 73bd8aaaeb843745ec939f6ae3906e48
 
 [debian-amd64-testing-netinst.iso]
-# Download the installer of Debian Stretch
+# Download the installer of Debian Buster
 <= debian-amd64-netinst-base
-release = stretch_di_alpha7
+release = buster_di_alpha2
-version = stretch-DI-alpha7
+version = buster-DI-alpha2
-md5sum = 3fe53635b904553b26588491e1473e99
+md5sum = fbdc192f8857e2bd884e41481ed0fc09
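Every download above is pinned by an md5sum, and buildout rejects the artifact when the digest differs. A minimal sketch of that integrity check (the function name is illustrative):

```python
import hashlib

# Illustrative version of the digest check download recipes apply to
# fetched archives such as the netinst ISOs above.
def md5_matches(path, expected):
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        # Hash in 64 KiB chunks so large ISOs are not read into memory.
        for chunk in iter(lambda: f.read(1 << 16), b''):
            digest.update(chunk)
    return digest.hexdigest() == expected
```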
 [buildout]
+extends =
+  ../xorg/buildout.cfg
+  ../gcc/buildout.cfg
 parts =
-  qt
+  qt4-qmake
 
-[qt]
-recipe = slapos.recipe.build
-slapos_promisee =
-  file:plop
-# Online installer
-x86 = http://get.qt.nokia.com/qtsdk/Qt_SDK_Lin32_online_v1_1_3_en.run eae2e2a1396fec1369b66c71d7df6eab
-x86-64 = http://get.qt.nokia.com/qtsdk/Qt_SDK_Lin64_online_v1_1_3_en.run a4d929bc4d6511290c07c3745477b77b
-# Offline installer
-#x86 = http://get.qt.nokia.com/qtsdk/Qt_SDK_Lin32_offline_v1_1_3_en.run 106fdae4ec8947c491ab0a827a02da12
-#x86-64 = http://get.qt.nokia.com/qtsdk/Qt_SDK_Lin64_offline_v1_1_3_en.run 8c280beb11ee763840464572ed80e8b8
-# Needs many dependencies.
-script =
-  if not self.options.get('url'): self.options['url'], self.options['md5sum'] = self.options[guessPlatform()].split(' ')
-  download_file = self.download(self.options['url'], self.options.get('md5sum'))
-  extract_dir = tempfile.mkdtemp(self.name)
-  os.chdir(extract_dir)
-  (download_dir, filename) = os.path.split(download_file)
-  auto_extract_bin = os.path.join(extract_dir, filename)
-  shutil.move(download_file, auto_extract_bin)
-  os.chmod(auto_extract_bin, 0755)
-  subprocess.call([auto_extract_bin])
-  self.cleanup_list.append(extract_dir)
-  workdir = guessworkdir(extract_dir)
-  import pdb; pdb.set_trace()
-  self.copyTree(os.path.join(workdir, "jre1.6.0_27"), "%(location)s")
-#ldd qt.run
-# linux-gate.so.1 => (0xb7827000)
-# libutil.so.1 => /lib/i686/cmov/libutil.so.1 (0xb781c000)
-# libgobject-2.0.so.0 => not found
-# libSM.so.6 => not found
-# libICE.so.6 => not found
-# libXrender.so.1 => not found
-# libfontconfig.so.1 => not found
-# libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0xb77a4000)
-# libz.so.1 => /usr/lib/libz.so.1 (0xb778f000)
-# libXext.so.6 => /usr/lib/libXext.so.6 (0xb7780000)
-# libX11.so.6 => /usr/lib/libX11.so.6 (0xb7663000)
-# libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb765f000)
-# libgthread-2.0.so.0 => not found
-# librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7655000)
-# libglib-2.0.so.0 => not found
-# libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb763c000)
-# libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb754d000)
-# libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7527000)
-# libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7508000)
-# libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb73c2000)
-# libxcb.so.1 => /usr/lib/libxcb.so.1 (0xb73a9000)
-# /lib/ld-linux.so.2 (0xb7828000)
-# libXau.so.6 => /usr/lib/libXau.so.6 (0xb73a6000)
-# libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0xb73a1000)
+[qt5-qmake]
+# XXX work on all systems needs check
+recipe = slapos.recipe.cmmi
+location = ${buildout:parts-directory}/${:_buildout_section_name_}
+url = http://download.qt.io/official_releases/qt/5.6/5.6.2/submodules/qtbase-opensource-src-5.6.2.tar.gz
+md5sum = 7aa5841b50c411e23e31e8a6cc1c6981
+configure-command = ./configure
+configure-options =
+  --prefix=${:location}
+  -v
+  -no-separate-debug-info
+  -release
+  -confirm-license
+  -opensource
+  -no-opengl
+  -nomake examples
+environment =
+  PATH=${gcc:location}/bin:%(PATH)s
+  CPPFLAGS=-I${libX11:location}/include -I${xproto:location}/include -I${libXext:location}/include
+  LDFLAGS=-L${gcc:location}/lib -Wl,-rpath=${gcc:location}/lib -L${gcc:location}/lib64 -Wl,-rpath=${gcc:location}/lib64 -L${libX11:location}/lib -Wl,-rpath=${libX11:location}/lib -L${xproto:location}/lib -Wl,-rpath=${xproto:location}/lib -L${libXext:location}/lib -Wl,-rpath=${libXext:location}/lib
+make-binary = true
+post-install =
+  mkdir -p ${:location}/bin
+  mv -t ${:location}/bin bin/qmake
+  mv -t ${:location} mkspecs
+
+[qt5.6-qmake]
+<= qt5-qmake
+
+[qt5.6.2-qmake]
+<= qt5.6-qmake
+
+[qt4-qmake]
+# building [qmake] will download the full qt source anyway ~200MB
+# qmake binary can be reached directly from ${qt:location}/bin/qmake if [qt] is fully built
+recipe = slapos.recipe.cmmi
+location = ${buildout:parts-directory}/${:_buildout_section_name_}
+url = http://download.qt.io/official_releases/qt/4.8/4.8.7/qt-everywhere-opensource-src-4.8.7.tar.gz
+md5sum = d990ee66bf7ab0c785589776f35ba6ad
+# see https://github.com/NixOS/nixpkgs/blob/3e387c3e005c87566b5403d24c86f71f4945a79b/pkgs/development/libraries/qt-4.x/4.8/default.nix#L101
+pre-configure =
+  set -e -x
+  sed 's,/usr/X11R6/lib64,${libX11:location}/lib64 ${xproto:location}/lib64 ${libXext:location}/lib64,g' -i mkspecs/*/*.conf
+  sed 's,/usr/X11R6/lib,${libX11:location}/lib ${xproto:location}/lib ${libXext:location}/lib,g' -i mkspecs/*/*.conf
+  sed 's,/usr/X11R6/include,${libX11:location}/include ${xproto:location}/include ${libXext:location}/include,g' -i mkspecs/*/*.conf
+configure-command = ./configure --prefix=${:location} -v -no-separate-debug-info -release -no-fast -confirm-license -opensource
+make-targets = qmake
+post-install =
+  cp -rt ${:location} *
+
+[qt4.8-qmake]
+<= qt4-qmake
+
+[qt4.8.7-qmake]
+<= qt4.8-qmake
\ No newline at end of file
@@ -20,5 +20,13 @@ md5sum = 3dde098fd0b3a08d3f2867e4a95591ba
 recipe = hexagonit.recipe.download
 ignore-existing = true
 strip-top-level-dir = true
-url = http://apache.multidist.com/tomcat/tomcat-7/v7.0.34/bin/apache-tomcat-7.0.34.tar.gz
+url = http://www-us.apache.org/dist/tomcat/tomcat-7/v7.0.84/bin/apache-tomcat-7.0.84.tar.gz
-md5sum = 0f50494425c24450b4f66dfd4d2aecca
+md5sum = 1c6f2c06a90bd7d8a19522749c219a2a
+
+[tomcat7-output]
+# Shared binary location to ease migration
+recipe = plone.recipe.command
+stop-on-error = true
+update-command = ${:command}
+command = ${coreutils-output:test} -x ${:catalina}
+catalina = ${tomcat7:location}/bin/catalina.sh
\ No newline at end of file
[buildout]
parts=
unixodbc
[unixodbc]
recipe = slapos.recipe.cmmi
url = http://www.unixodbc.org/unixODBC-2.3.5.tar.gz
md5sum = abf14cf943f1f8c5e63a24cb26d54fd9
# nu - The Nu Html Checker (v.Nu) is an ongoing experiment in better HTML checking.
# https://validator.w3.org/nu/
[buildout]
parts =
vnu
[vnu]
recipe = hexagonit.recipe.download
ignore-existing = true
strip-top-level-dir = true
url = https://github.com/validator/validator/releases/download/17.11.1/vnu.war_17.11.1.zip
md5sum = 2af6dec153a5011cd6fcc85ce5fb599d
[vnu-output]
# Shared binary location to ease migration
recipe = plone.recipe.command
stop-on-error = true
update-command = ${:command}
command = ${coreutils-output:test} -r ${:war}
war = ${vnu:location}/vnu.war
@@ -5,7 +5,7 @@ parts =
 [zstd]
 recipe = slapos.recipe.cmmi
 location = ${buildout:parts-directory}/${:_buildout_section_name_}
-url = https://github.com/facebook/zstd/archive/v1.3.1.tar.gz
+url = https://github.com/facebook/zstd/archive/v1.3.3.tar.gz
-md5sum = e849ceef2f090240f690c13fba6ca70b
+md5sum = 187f8df17a75a74f78a23ea4806ac65f
 configure-command = :
 make-options = PREFIX=${:location}
@@ -28,7 +28,7 @@ from setuptools import setup, find_packages
 import glob
 import os
 
-version = '1.0.54.dev0'
+version = '1.0.58'
 name = 'slapos.cookbook'
 long_description = open("README.rst").read() + "\n" + \
     open("CHANGES.rst").read() + "\n"
@@ -72,7 +72,6 @@ setup(name=name,
       'zc.buildout': [
         'addresiliency = slapos.recipe.addresiliency:Recipe',
         'accords = slapos.recipe.accords:Recipe',
-        'apache.zope.backend = slapos.recipe.apache_zope_backend:Recipe',
         'apacheperl = slapos.recipe.apacheperl:Recipe',
         'apachephp = slapos.recipe.apachephp:Recipe',
         'apachephpconfigure = slapos.recipe.apachephpconfigure:Recipe',
@@ -118,14 +117,11 @@ setup(name=name,
         'generic.mysql.wrap_update_mysql = slapos.recipe.generic_mysql:WrapUpdateMySQL',
         'generic.mysql.wrap_mysqld = slapos.recipe.generic_mysql:WrapMySQLd',
         'generic.varnish = slapos.recipe.generic_varnish:Recipe',
-        'generic.zope = slapos.recipe.generic_zope:Recipe',
-        'generic.zope.zeo.client = slapos.recipe.generic_zope_zeo_client:Recipe',
         'gitinit = slapos.recipe.gitinit:Recipe',
         'haproxy = slapos.recipe.haproxy:Recipe',
         'ipv4toipv6 = slapos.recipe.6tunnel:FourToSix',
         'ipv6toipv4 = slapos.recipe.6tunnel:SixToFour',
         'jsondump = slapos.recipe.jsondump:Recipe',
-        'kumofs = slapos.recipe.kumofs:Recipe',
         'kvm.frontend = slapos.recipe.kvm_frontend:Recipe',
         'lamp = slapos.recipe.lamp:Request',
         'lamp.generic = slapos.recipe.lampgeneric:Recipe',
@@ -136,7 +132,6 @@ setup(name=name,
         'libcloudrequest = slapos.recipe.libcloudrequest:Recipe',
         'logrotate = slapos.recipe.logrotate:Recipe',
         'logrotate.d = slapos.recipe.logrotate:Part',
-        'memcached = slapos.recipe.memcached:Recipe',
         'mkdirectory = slapos.recipe.mkdirectory:Recipe',
         'mioga.instantiate = slapos.recipe.mioga.instantiate:Recipe',
         'mydumper = slapos.recipe.mydumper:Recipe',
@@ -196,7 +191,6 @@ setup(name=name,
         'urlparse = slapos.recipe._urlparse:Recipe',
         'uuid = slapos.recipe._uuid:Recipe',
         'userinfo = slapos.recipe.userinfo:Recipe',
-        'waitfor = slapos.recipe.waitfor:Recipe',
         'webchecker = slapos.recipe.web_checker:Recipe',
         'wrapper = slapos.recipe.wrapper:Recipe',
         'xvfb = slapos.recipe.xvfb:Recipe',
@@ -91,8 +91,8 @@ class Recipe(GenericSlapRecipe):
     # Generate wrapper
     wrapper_location = self.createPythonScript(self.options['accords-wrapper'],
-        '%s.accords.runAccords' % __name__,
+        __name__ + '.accords.runAccords',
-        parameter_dict)
+        (parameter_dict,))
     path_list.append(wrapper_location)
 
     # Generate helper for debug
@@ -38,26 +38,18 @@ class Recipe(GenericSlapRecipe):
   """
   def _install(self):
-    path_list = []
     slap_connection = self.buildout['slap-connection']
-    takeover_wrapper = self.createPythonScript(
-      name=self.options['wrapper-takeover'],
-      absolute_function='slapos.recipe.addresiliency.takeover.run',
-      arguments={
+    return self.createPythonScript(
+      self.options['wrapper-takeover'],
+      __name__ + '.takeover.takeover',
+      kw={
         'server_url': slap_connection['server-url'],
         'key_file': slap_connection.get('key-file'),
         'cert_file': slap_connection.get('cert-file'),
-        'computer_id': slap_connection['computer-id'],
+        'computer_guid': slap_connection['computer-id'],
         'partition_id': slap_connection['partition-id'],
-        'software': slap_connection['software-release-url'],
+        'software_release': slap_connection['software-release-url'],
         'namebase': self.parameter_dict['namebase'],
         'takeover_triggered_file_path': self.options['takeover-triggered-file-path'],
       })
-    path_list.append(takeover_wrapper)
-    return path_list
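The refactored call above hands createPythonScript a dotted function name plus a kw dict; the generated wrapper then simply imports the module and calls the function with those keywords. A minimal sketch of that dispatch — call_dotted is an illustrative helper, not the real wrapper that librecipe/zc.buildout generates:

```python
import importlib

# Illustrative reproduction of what the generated wrapper script does with
# the (dotted_name, kw) pair handed to createPythonScript.
def call_dotted(dotted_name, kw):
    module_name, func_name = dotted_name.rsplit('.', 1)
    func = getattr(importlib.import_module(module_name), func_name)
    return func(**kw)
```

This is why the dict keys had to be renamed to match the real keyword names of takeover() (computer_guid, software_release): they are passed straight through as keyword arguments.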
@@ -78,14 +78,3 @@ def takeover(server_url, key_file, cert_file, computer_guid,
     # Create "lock" file preventing equeue to run import scripts
     # XXX hardcoded
     open(takeover_triggered_file_path, 'w').write('')
-
-def run(args):
-    slapos.recipe.addresiliency.takeover.takeover(server_url = args.pop('server_url'),
-        key_file = args.pop('key_file'),
-        cert_file = args.pop('cert_file'),
-        computer_guid = args.pop('computer_id'),
-        partition_id = args.pop('partition_id'),
-        software_release = args.pop('software'),
-        namebase = args.pop('namebase'),
-        takeover_triggered_file_path = args.pop('takeover_triggered_file_path'))
##############################################################################
#
# Copyright (c) 2011 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import GenericBaseRecipe
import pkg_resources


class Recipe(GenericBaseRecipe):
  def install(self):
try:
backend_list = self.options['backend-list']
except KeyError:
backend_list = [(self.options['port'], self.options['backend'])]
scheme = self.options['scheme']
if scheme == 'http':
required_path_list = []
ssl_enable = ssl_snippet = ''
elif scheme == 'https':
key = self.options['key-file']
certificate = self.options['cert-file']
required_path_list = [key, certificate]
ssl_snippet = self.substituteTemplate(self.getTemplateFilename('snippet.ssl.in'), {
'key': key,
'certificate': certificate,
'ssl_session_cache': self.options['ssl-session-cache'],
})
if 'ssl-authentication' in self.options and self.optionIsTrue(
'ssl-authentication'):
ssl_snippet += self.substituteTemplate(self.getTemplateFilename('snippet.ssl.ca.in'), {
'ca_certificate': self.options['ssl-authentication-certificate'],
'ca_crl': self.options['ssl-authentication-crl'],
})
ssl_enable = 'SSLEngine on'
else:
raise ValueError('Unsupported scheme %s' % scheme)
ip_list = self.options['ip']
if isinstance(ip_list, basestring):
ip_list = [ip_list]
backend_path = self.options.get('backend-path', '/')
vhost_template_name = self.getTemplateFilename('vhost.in')
apache_config_file = self.createFile(
self.options['configuration-file'],
self.substituteTemplate(
self.getTemplateFilename('apache.zope.conf.in'),
{
'path': '/',
'server_admin': 'admin@',
'pid_file': self.options['pid-file'],
'lock_file': self.options['lock-file'],
'error_log': self.options['error-log'],
'access_log': self.options['access-log'],
'access_control_string': self.options['access-control-string'],
'ssl_snippet': ssl_snippet,
'vhosts': ''.join(self.substituteTemplate(vhost_template_name, {
'ip': ip,
'port': port,
'backend': ('%s/%s' % (backend.rstrip('/'), backend_path.strip('/'))).rstrip('/'),
'ssl_enable': ssl_enable,
}) for (port, backend) in backend_list for ip in ip_list),
},
)
)
return [
apache_config_file,
self.createPythonScript(
self.options['wrapper'],
__name__ + '.apache.runApache',
[
{
'required_path_list': required_path_list,
'binary': self.options['apache-binary'],
'config': apache_config_file,
},
],
),
]
import os
import sys
import time
def runApache(args):
sleep = 60
conf = args[0]
while True:
ready = True
for f in conf.get('required_path_list', []):
if not os.path.exists(f):
        print 'File %r does not exist, sleeping for %s' % (f, sleep)
ready = False
if ready:
break
time.sleep(sleep)
apache_wrapper_list = [conf['binary'], '-f', conf['config'], '-DFOREGROUND']
apache_wrapper_list.extend(sys.argv[1:])
sys.stdout.flush()
sys.stderr.flush()
os.execl(apache_wrapper_list[0], *apache_wrapper_list)
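runApache above polls until the required key/certificate paths exist, then replaces the current process with Apache via os.execl. A minimal, testable sketch of just the polling step (the helper name and the `max_tries` escape hatch are hypothetical additions, not part of the recipe):

```python
import os
import time

def wait_for_paths(path_list, sleep=60, max_tries=None):
    """Poll until every path in path_list exists on disk.

    max_tries is a hypothetical safety valve so the loop can be tested;
    the real launcher simply loops forever until the files appear."""
    tries = 0
    while True:
        missing = [p for p in path_list if not os.path.exists(p)]
        if not missing:
            return True
        tries += 1
        if max_tries is not None and tries >= max_tries:
            return False
        time.sleep(sleep)
```

Once this returns, exec'ing the server binary replaces the wrapper process, so Apache inherits the wrapper's PID and stdio.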
# Apache configuration file for Zope
# Automatically generated
# List of modules
LoadModule unixd_module modules/mod_unixd.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule headers_module modules/mod_headers.so
# Basic server configuration
PidFile "%(pid_file)s"
ServerAdmin %(server_admin)s
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
ServerTokens Prod
ServerSignature Off
TraceEnable Off
# Apache 2.4's default value (60 seconds) can be a bit too short
TimeOut 300
# As backend is trusting REMOTE_USER header unset it always
RequestHeader unset REMOTE_USER
%(ssl_snippet)s
# Log configuration
ErrorLog "%(error_log)s"
# Default apache log format with request time in microsecond at the end
LogFormat "%%h %%l %%u %%t \"%%r\" %%>s %%b \"%%{Referer}i\" \"%%{User-Agent}i\" %%D" combined
CustomLog "%(access_log)s" combined
# Directory protection
<Directory />
Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
</Directory>
# Path protected
<Location %(path)s>
Order Deny,Allow
Deny from all
Allow from %(access_control_string)s
</Location>
# Magic of Zope related rewrite
RewriteEngine On
%(vhosts)s
SSLVerifyClient require
RequestHeader set REMOTE_USER %%{SSL_CLIENT_S_DN_CN}s
SSLCACertificateFile %(ca_certificate)s
SSLCARevocationPath %(ca_crl)s
SSLCertificateFile %(certificate)s
SSLCertificateKeyFile %(key)s
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:HIGH:!aNULL:!MD5
SSLHonorCipherOrder on
SSLSessionCache shmcb:%(ssl_session_cache)s(512000)
SSLProxyEngine On
Listen %(ip)s:%(port)s
<VirtualHost *:%(port)s>
%(ssl_enable)s
RewriteRule ^/(.*) %(backend)s/$1 [L,P]
</VirtualHost>
@@ -57,10 +57,9 @@ class Recipe(GenericBaseRecipe):
     )
     path_list.append(httpd_conf)
-    wrapper = self.createPythonScript(self.options['wrapper'],
-        'slapos.recipe.librecipe.execute.execute',
-        [self.options['httpd-binary'], '-f', self.options['httpd-conf'],
-        '-DFOREGROUND']
+    wrapper = self.createWrapper(self.options['wrapper'],
+        (self.options['httpd-binary'], '-f', self.options['httpd-conf'],
+        '-DFOREGROUND'),
     )
     path_list.append(wrapper)
...
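The recurring change across these hunks replaces createPythonScript(path, 'slapos.recipe.librecipe.execute.execute', [cmd...]) with createWrapper(path, (cmd...)[, environment]). The sketch below only illustrates the general idea of such a wrapper factory — an executable script that execs a command with optional extra environment variables. It is an assumption-laden illustration, not the actual librecipe implementation (which handles quoting and other details differently):

```python
import os
import stat

def create_exec_wrapper(path, args, environ=None):
    """Hypothetical sketch: write an executable shell script that exports
    the given environment variables and execs the argument tuple.
    No shell quoting is done, so this only works for simple arguments."""
    lines = ['#!/bin/sh']
    for key, value in (environ or {}).items():
        lines.append('export %s=%s' % (key, value))
    lines.append('exec ' + ' '.join(args))
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
    # Mark the script executable for its owner.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path
```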
@@ -92,14 +92,13 @@ class Recipe(GenericBaseRecipe):
     )
     path_list.append(httpd_conf)
-    wrapper = self.createWrapper(name=self.options['wrapper'],
-        command=self.options['httpd-binary'],
-        parameters=[
+    wrapper = self.createWrapper(self.options['wrapper'],
+        (self.options['httpd-binary'],
         '-f',
         self.options['httpd-conf'],
         '-DFOREGROUND'
-        ],
-        environment=self.environ)
+        ),
+        self.environ)
     path_list.append(wrapper)
     secret_key_filename = os.path.join(self.buildout['buildout']['directory'],
...
@@ -118,7 +118,7 @@ class Recipe(GenericBaseRecipe):
     configureinstall_wrapper_path = self.createPythonScript(
         self.options['configureinstall-location'],
         __name__ + '.runner.executeRunner',
-        [argument, delete, rename, chmod, data]
+        (argument, delete, rename, chmod, data)
     )
     #TODO finish to port this and remove upper one
...
 import subprocess

-def executeRunner(args):
+def executeRunner(arguments, delete, rename, chmod, data):
   """Start the instance configure. this may run a python script, move or/and rename
   file or directory when dondition is filled. the condition may be when file exist or when an entry
   exist into database.
   """
-  arguments, delete, rename, chmod, data = args
-  if delete != []:
+  if delete:
     print "Calling lampconfigure with 'delete' arguments"
-    result = subprocess.Popen(arguments + delete)
-    result.wait()
-  if rename != []:
+    subprocess.call(arguments + delete)
+  if rename:
     for parameters in rename:
       print "Calling lampconfigure with 'rename' arguments"
-      result = subprocess.Popen(arguments + parameters)
-      result.wait()
-  if chmod != []:
+      subprocess.call(arguments + parameters)
+  if chmod:
     print "Calling lampconfigure with 'chmod' arguments"
-    result = subprocess.Popen(arguments + chmod)
-    result.wait()
-  if data != []:
+    subprocess.call(arguments + chmod)
+  if data:
     print "Calling lampconfigure with 'run' arguments"
     print arguments + data
-    result = subprocess.Popen(arguments + data)
-    result.wait()
-  return
+    subprocess.call(arguments + data)
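The executeRunner refactoring collapses each subprocess.Popen(...) followed by wait() into a single subprocess.call(...), which spawns the command, waits for it, and returns its exit status in one step:

```python
import subprocess
import sys

# Before: result = subprocess.Popen(cmd); result.wait()
# After:  subprocess.call(cmd) does both and returns the exit status.
# Here a throwaway Python child exits with status 3 to demonstrate.
returncode = subprocess.call(
    [sys.executable, '-c', 'import sys; sys.exit(3)'])
```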
@@ -49,13 +49,12 @@ class Recipe(GenericBaseRecipe):
     )
     path_list.append(httpd_conf)
-    wrapper = self.createWrapper(name=self.options['wrapper'],
-        command=self.options['httpd-binary'],
-        parameters=[
+    wrapper = self.createWrapper(self.options['wrapper'],
+        (self.options['httpd-binary'],
         '-f',
         self.options['httpd-conf'],
         '-DFOREGROUND',
-        ])
+        ))
     path_list.append(wrapper)
...
@@ -124,12 +124,15 @@ class Recipe(GenericBaseRecipe):
     #Generate wrapper for php
     wrapperphp = os.path.join(self.home, 'bin/php')
-    php_wrapper = self.createPythonScript(wrapperphp,
-        'slapos.recipe.librecipe.execute.executee',
-        ([self.phpbin, '-c', self.phpini], os.environ)
+    php_wrapper = self.createWrapper(wrapperphp,
+        (self.phpbin, '-c', self.phpini),
     )
     path_list.append(php_wrapper)
+    mysql_dict = dict(db=self.database,
+        host=self.mysqlhost, port=self.mysqlport,
+        user=self.username, passwd=self.password)
     #Generate python script for MySQL database test (starting)
     file_status = os.path.join(self.home, '.boinc_config')
     if os.path.exists(file_status):
@@ -137,11 +140,7 @@ class Recipe(GenericBaseRecipe):
     mysql_wrapper = self.createPythonScript(
         os.path.join(self.wrapperdir, 'start_config'),
         '%s.configure.checkMysql' % __name__,
-        dict(mysql_port=self.mysqlport, mysql_host=self.mysqlhost,
-            mysql_user=self.username, mysql_password=self.password,
-            database=self.database,
-            file_status=file_status, environment=environment
-        )
+        (environment, mysql_dict, file_status)
     )
     # Generate make project wrapper file
@@ -164,8 +163,7 @@ class Recipe(GenericBaseRecipe):
     install_wrapper = self.createPythonScript(
         os.path.join(self.wrapperdir, 'make_project'),
         '%s.configure.makeProject' % __name__,
-        dict(launch_args=launch_args, request_file=request_make_boinc,
-            make_sig=file_status, env=environment)
+        (file_status, launch_args, request_make_boinc, environment)
     )
     path_list.append(install_wrapper)
@@ -197,7 +195,7 @@ class Recipe(GenericBaseRecipe):
     )
     start_service = self.createPythonScript(
         os.path.join(self.wrapperdir, 'config_project'),
-        '%s.configure.services' % __name__, parameter
+        '%s.configure.services' % __name__, (parameter,)
     )
     path_list.append(start_service)
@@ -208,14 +206,12 @@ class Recipe(GenericBaseRecipe):
       os.unlink(start_boinc)
     boinc_parameter = dict(service_status=service_status,
         installroot=self.installroot, drop_install=drop_install,
-        mysql_port=self.mysqlport, mysql_host=self.mysqlhost,
-        mysql_user=self.username, mysql_password=self.password,
-        database=self.database, environment=environment,
+        mysql_dict=mysql_dict, environment=environment,
         start_boinc=start_boinc)
     start_wrapper = self.createPythonScript(os.path.join(self.wrapperdir,
         'start_boinc'),
         '%s.configure.restart_boinc' % __name__,
-        boinc_parameter
+        (boinc_parameter,)
     )
     path_list.append(start_wrapper)
@@ -362,7 +358,7 @@ class App(GenericBaseRecipe):
     )
     deploy_app = self.createPythonScript(
         os.path.join(wrapperdir, 'boinc_%s' % appname),
-        '%s.configure.deployApp' % __name__, parameter
+        '%s.configure.deployApp' % __name__, (parameter,)
     )
     path_list.append(deploy_app)
@@ -404,17 +400,15 @@ class Client(GenericBaseRecipe):
       cc_cmd = '--read_cc_config'
     cmd = self.createPythonScript(cmd_wrapper,
         '%s.configure.runCmd' % __name__,
-        dict(base_cmd=base_cmd, cc_cmd=cc_cmd, installdir=installdir,
-            project_url=url, key=key)
+        (base_cmd, cc_cmd, installdir, url, key)
     )
     path_list.append(cmd)
     #Generate BOINC client wrapper
-    boinc = self.createPythonScript(boinc_wrapper,
-        'slapos.recipe.librecipe.execute.execute',
-        [boincbin, '--allow_multiple_clients', '--gui_rpc_port',
+    boinc = self.createWrapper(boinc_wrapper,
+        (boincbin, '--allow_multiple_clients', '--gui_rpc_port',
         str(self.options['rpc-port']), '--allow_remote_gui_rpc',
-        '--dir', installdir, '--redirectio', '--check_all_logins']
+        '--dir', installdir, '--redirectio', '--check_all_logins'),
     )
     path_list.append(boinc)
...
@@ -35,27 +35,21 @@ import filecmp
 from lock_file import LockFile

-def checkMysql(args):
-  sys.path += args['environment']['PYTHONPATH'].split(':')
+def checkMysql(environment, connect_kw, file_status=None):
+  sys.path += environment['PYTHONPATH'].split(':')
   import MySQLdb
   #Sleep until mysql server becomes available
   while True:
     try:
-      conn = MySQLdb.connect(host = args['mysql_host'],
-          user = args['mysql_user'],
-          port = int(args['mysql_port']),
-          passwd = args['mysql_password'],
-          db = args['database'])
-      conn.close()
-      print "Successfully connect to MySQL database... "
-      if args.has_key('file_status'):
-        writeFile(args['file_status'], "starting")
+      MySQLdb.connect(**connect_kw).close()
       break
     except Exception, ex:
       print "The result is: \n" + ex.message
       print "Could not connect to MySQL database... sleep for 2 secondes"
       time.sleep(2)
+  print "Successfully connect to MySQL database... "
+  if file_status:
+    writeFile(file_status, "starting")

 def checkFile(file, stime):
   """Loop until 'file' is created (exist)"""
@@ -70,18 +64,16 @@ def checkFile(file, stime):
 def restart_boinc(args):
   """Stop (if currently is running state) and start all Boinc service"""
+  environment = args['environment']
   if args['drop_install']:
     checkFile(args['service_status'], 3)
   else:
-    checkMysql(args)
+    checkMysql(environment, args['mysql_dict'], args.get('file_status'))
   print "Restart Boinc..."
-  env = os.environ
-  env['PATH'] = args['environment']['PATH']
-  env['PYTHONPATH'] = args['environment']['PYTHONPATH']
-  binstart = os.path.join(args['installroot'], 'bin/start')
-  binstop = os.path.join(args['installroot'], 'bin/stop')
-  os.system(binstop)
-  os.system(binstart)
+  env = os.environ.copy()
+  env.update(environment)
+  subprocess.call((os.path.join(args['installroot'], 'bin', 'stop'),), env=env)
+  subprocess.call((os.path.join(args['installroot'], 'bin', 'start'),), env=env)
   writeFile(args['start_boinc'], "started")
   print "Done."
@@ -122,17 +114,16 @@ def startProcess(launch_args, env=None, cwd=None, stdout=subprocess.PIPE):
     return False
   return True

-def makeProject(args):
+def makeProject(make_sig, launch_args, request_file, extra_environ):
   """Run BOINC make_project script but once only"""
   #Wait for DateBase initialization...
-  checkFile(args['make_sig'], 3)
+  checkFile(make_sig, 3)
   print "Cheking if needed to run BOINC make_project..."
-  if os.path.exists(args['request_file']):
-    env = os.environ
-    env['PATH'] = args['env']['PATH']
-    env['PYTHONPATH'] = args['env']['PYTHONPATH']
-    if startProcess(args['launch_args'], env=env):
-      os.unlink(args['request_file'])
+  if os.path.exists(request_file):
+    env = os.environ.copy()
+    env.update(extra_environ)
+    if startProcess(launch_args, env=env):
+      os.unlink(request_file)
     print "Finished running BOINC make_projet...Ending"
   else:
     print "No new request for make_project. Exiting..."
@@ -155,9 +146,8 @@ def services(args):
     return
   print "execute script xadd..."
-  env = os.environ
-  env['PATH'] = args['environment']['PATH']
-  env['PYTHONPATH'] = args['environment']['PYTHONPATH']
+  env = os.environ.copy()
+  env.update(args['environment'])
   if not startProcess([os.path.join(args['installroot'], 'bin/xadd')], env):
     return
   print "Update files and directories permissions..."
@@ -212,9 +202,8 @@ def deployManagement(args):
   newInstall = True
   #Sleep until file .start_boinc exist (File indicate that BOINC has been started)
   checkFile(args['start_boinc'], 3)
-  env = os.environ
-  env['PATH'] = args['environment']['PATH']
-  env['PYTHONPATH'] = args['environment']['PYTHONPATH']
+  env = os.environ.copy()
+  env.update(args['environment'])
   print "setup directories..."
   numversion = args['version'].replace('.', '')
@@ -263,7 +252,7 @@ def deployManagement(args):
   privateKeyFile = os.path.join(args['installroot'], 'keys/code_sign_private')
   output = open(binary + '.sig', 'w')
   p_sign = subprocess.Popen([sign, binary, privateKeyFile], stdout=output,
-      stderr=subprocess.STDOUT)
+      stderr=subprocess.STDOUT, env=env)
   result = p_sign.communicate()[0]
   if p_sign.returncode is None or p_sign.returncode != 0:
     print "Failed to execute bin/sign_executable.\nThe error was: %s" % result
@@ -290,10 +279,8 @@ def deployManagement(args):
   create_wu(args, env)
   print "Restart Boinc..."
-  binstart = os.path.join(args['installroot'], 'bin/start')
-  binstop = os.path.join(args['installroot'], 'bin/stop')
-  os.system(binstop)
-  os.system(binstart)
+  subprocess.call((os.path.join(args['installroot'], 'bin', 'stop'),), env=env)
+  subprocess.call((os.path.join(args['installroot'], 'bin', 'start'),), env=env)
   print "Boinc Application deployment is done... writing end signal file..."
   writeFile(token, str(args['wu_number']))
@@ -315,22 +302,21 @@ def create_wu(args, env):
   startProcess(launch_args, env, args['installroot'])

-def runCmd(args):
+def runCmd(base_cmd, cc_cmd, installdir, url, key):
   """Wait for Boinc Client started and run boinc cmd"""
-  client_config = os.path.join(args['installdir'], 'client_state.xml')
+  client_config = os.path.join(installdir, 'client_state.xml')
   checkFile(client_config, 5)
   time.sleep(10)
   #Scan client state xml to find client ipv4 adress
   host = re.search("<ip_addr>([\w\d\.:]+)</ip_addr>",
       open(client_config, 'r').read()).group(1)
-  args['base_cmd'][2] = host + ':' + args['base_cmd'][2]
-  print "Run boinccmd with host at %s " % args['base_cmd'][2]
-  project_args = args['base_cmd'] + ['--project_attach', args['project_url'],
-      args['key']]
-  startProcess(project_args, cwd=args['installdir'])
-  if args['cc_cmd'] != '':
+  base_cmd[2] = host + ':' + base_cmd[2]
+  print "Run boinccmd with host at %s " % base_cmd[2]
+  project_args = base_cmd + ['--project_attach', url, key]
+  startProcess(project_args, cwd=installdir)
+  if cc_cmd:
     #Load or reload cc_config file
-    startProcess(args['base_cmd'] + [args['cc_cmd']], cwd=args['installdir'])
+    startProcess(base_cmd + [cc_cmd], cwd=installdir)

 def writeFile(file, content):
...
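Several hunks above replace in-place mutation of os.environ with a copied-and-updated environment, so the recipe's PATH/PYTHONPATH overrides apply only to the spawned child and do not leak into the parent process. The pattern in isolation (function name is hypothetical):

```python
import os

def make_child_env(extra_environ):
    """Return a copy of the parent environment with recipe-provided
    variables overlaid, leaving os.environ itself untouched."""
    env = os.environ.copy()
    env.update(extra_environ)
    return env
```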
@@ -62,8 +62,8 @@ class Recipe(GenericBaseRecipe):
         condor_wrapper_list=condor_wrapper_list,
         boinc_wrapper_list=boinc_wrapper_list)
     bonjourGrid_wrapper = self.createPythonScript(grid_wrapper,
-        '%s.configure.launchScript' % __name__,
-        parameters
+        __name__ + '.configure.launchScript',
+        (parameters,)
     )
     path_list.append(bonjourGrid_wrapper)
@@ -73,16 +73,15 @@ class Recipe(GenericBaseRecipe):
     bg_wrapper = self.options['wrapper'].strip()
     log = self.options['log_file'].strip()
     pid_file = self.options['pid_file'].strip()
-    wrapper = self.createPythonScript(bg_wrapper,
-        'slapos.recipe.librecipe.execute.execute',
-        ([python, bonjourgrid_master, '--log_file', log,
+    wrapper = self.createWrapper(bg_wrapper,
+        (python, bonjourgrid_master, '--log_file', log,
         '--pid_file', pid_file,
         '--master_wrapper', grid_wrapper,
         '--directory', self.options['work_dir'].strip(),
         '--server', self.options['redis-url'].strip(),
         '--port', self.options['redis-port'].strip(),
         '--num_workers', self.options['nworkers'].strip(),
-        ])
+        ),
     )
     path_list.append(wrapper)
@@ -113,9 +112,8 @@ class Client(GenericBaseRecipe):
     bg_wrapper = self.options['wrapper'].strip()
     log = self.options['log_file'].strip()
     pid_file = self.options['pid_file'].strip()
-    wrapper = self.createPythonScript(bg_wrapper,
-        'slapos.recipe.librecipe.execute.execute',
-        ([python, bonjourgrid_client, '--log_file', log,
+    wrapper = self.createWrapper(bg_wrapper,
+        (python, bonjourgrid_client, '--log_file', log,
         '--pid_file', pid_file,
         '--boinc_wrapper', boinc_script,
         '--condor_wrapper', condor_script,
@@ -123,7 +121,7 @@ class Client(GenericBaseRecipe):
         '--install_directory', self.options['install_dir'].strip(),
         '--server', self.options['redis-url'].strip(),
         '--port', self.options['redis-port'].strip(),
-        ])
+        ),
     )
     path_list.append(wrapper)
...
@@ -40,13 +40,10 @@ class Recipe(GenericBaseRecipe):
     self.ca_private = self.options['ca-private']
     self.ca_certs = self.options['ca-certs']
     self.ca_newcerts = self.options['ca-newcerts']
-    self.ca_crl = self.options['ca-crl']
     self.ca_key_ext = '.key'
     self.ca_crt_ext = '.crt'

   def install(self):
-    path_list = []
     ca_country_code = self.options.get('country-code', 'XX')
     ca_email = self.options.get('email', 'xx@example.com')
     # XXX-BBB: State by mistake has been configured as string "('State',)"
@@ -77,21 +74,15 @@ class Recipe(GenericBaseRecipe):
     self.createFile(openssl_configuration, self.substituteTemplate(
         self.getTemplateFilename('openssl.cnf.ca.in'), config))
-    ca_wrapper = self.createPythonScript(
+    return self.createPythonScript(
         self.options['wrapper'],
-        '%s.certificate_authority.runCertificateAuthority' % __name__,
-        dict(
-            openssl_configuration=openssl_configuration,
-            openssl_binary=self.options['openssl-binary'],
-            certificate=os.path.join(self.ca_dir, 'cacert.pem'),
-            key=os.path.join(self.ca_private, 'cakey.pem'),
-            crl=self.ca_crl,
-            request_dir=self.request_directory
-        )
+        __name__ + '.certificate_authority.runCertificateAuthority',
+        (os.path.join(self.ca_private, 'cakey.pem'),
+         os.path.join(self.ca_dir, 'cacert.pem'),
+         self.options['openssl-binary'],
+         openssl_configuration,
+         self.request_directory)
     )
-    path_list.append(ca_wrapper)
-    return path_list

 class Request(Recipe):
@@ -146,11 +137,10 @@ class Request(Recipe):
     path_list = [key_file, cert_file]
     if request_needed:
-      wrapper = self.createPythonScript(
+      wrapper = self.createWrapper(
           self.options['wrapper'],
-          'slapos.recipe.librecipe.execute.execute_wait',
-          [ [self.options['executable']],
-            [certificate, key] ],
+          (self.options['executable'],),
+          wait_list=(certificate, key),
       )
       path_list.append(wrapper)
...
@@ -102,10 +102,8 @@ class CertificateAuthority:
         'certificate_file')):
       print 'Created certificate %r' % parser.get('certificate', 'name')

-def runCertificateAuthority(ca_conf):
-  ca = CertificateAuthority(ca_conf['key'], ca_conf['certificate'],
-      ca_conf['openssl_binary'], ca_conf['openssl_configuration'],
-      ca_conf['request_dir'])
+def runCertificateAuthority(*args):
+  ca = CertificateAuthority(*args)
   while True:
     ca.checkAuthority()
     ca.checkRequestDir()
...
@@ -42,8 +42,6 @@ class Recipe(GenericBaseRecipe):
     options['access-url'] = 'http://[%s]:%s' % (self.ip, self.port)

   def install(self):
-    path_list = []
     environment = {
         'PATH': os.path.dirname(self.git) + ':' + os.environ['PATH'],
     }
@@ -51,10 +49,4 @@ class Recipe(GenericBaseRecipe):
     cloud9_args = [self.node_executable, self.cloud9, '-l', self.ip, '-p',
         self.port, '-w', self.workdir]
-    wrapper = self.createPythonScript(self.wrapper,
-        'slapos.recipe.librecipe.execute.executee',
-        (cloud9_args, environment)
-    )
-    path_list.append(wrapper)
-    return path_list
+    return self.createWrapper(self.wrapper, cloud9_args, environment)
@@ -178,12 +178,11 @@ class Recipe(GenericBaseRecipe):
     os.chmod(wrapper_location, 0744)
     #generate script for start condor
-    start_condor = os.path.join(self.wrapperdir, 'start_condor')
-    start_bin = os.path.join(self.wrapper_sbin, 'condor_master')
-    condor_reconfig = os.path.join(self.wrapper_sbin, 'condor_reconfig')
-    wrapper = self.createPythonScript(start_condor,
-        '%s.configure.condorStart' % __name__,
-        dict(start_bin=start_bin, condor_reconfig=condor_reconfig)
+    wrapper = self.createPythonScript(
+        os.path.join(self.wrapperdir, 'start_condor'),
+        __name__ + '.configure.condorStart',
+        (os.path.join(self.wrapper_sbin, 'condor_reconfig'),
+         os.path.join(self.wrapper_sbin, 'condor_master'))
     )
     path_list.append(wrapper)
     return path_list
@@ -276,13 +275,11 @@ class AppSubmit(GenericBaseRecipe):
         os.unlink(destination)
       os.symlink(app_list[appname]['files'][file], destination)
     #generate wrapper for submitting job
-    condor_submit = os.path.join(self.options['bin'].strip(), 'condor_submit')
-    parameter = dict(submit=condor_submit, sig_install=sig_install,
-        submit_file='submit',
-        appname=appname, appdir=appdir)
     submit_job = self.createPythonScript(
         os.path.join(self.options['wrapper-dir'].strip(), appname),
-        '%s.configure.submitJob' % __name__, parameter
+        __name__ + '.configure.submitJob',
+        (os.path.join(self.options['bin'].strip(), 'condor_submit'),
+         'submit', appdir, appname, sig_install)
     )
     path_list.append(submit_job)
     return path_list
\ No newline at end of file
...@@ -29,27 +29,25 @@ import os ...@@ -29,27 +29,25 @@ import os
import subprocess import subprocess
import time import time
def submitJob(args): def submitJob(submit, submit_file, appdir, appname, sig_install):
"""Run condor_submit (if needed) for job deployment""" """Run condor_submit (if needed) for job deployment"""
time.sleep(10) time.sleep(10)
print "Check if needed to submit %s job's" % args['appname'] print "Check if needed to submit %s job's" % appname
if not os.path.exists(args['sig_install']): if not os.path.exists(sig_install):
print "Nothing to install or update... exiting" print "Nothing to install or update... exiting"
return return
# '-a', "log = out.log", '-a', "error = error.log", # '-a', "log = out.log", '-a', "error = error.log",
launch_args = [args['submit'], '-verbose', args['submit_file']] launch_args = submit, '-verbose', submit_file
process = subprocess.Popen(launch_args, stdout=subprocess.PIPE, process = subprocess.Popen(launch_args, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, cwd=args['appdir']) stderr=subprocess.STDOUT, cwd=appdir)
result = process.communicate()[0] result = process.communicate()[0]
if process.returncode is None or process.returncode != 0: if process.returncode is None or process.returncode != 0:
print "Failed to execute condor_submit.\nThe error was: %s" % result print "Failed to execute condor_submit.\nThe error was: %s" % result
else: else:
os.unlink(args['sig_install']) os.unlink(sig_install)
def condorStart(args): def condorStart(condor_reconfig, start_bin):
"""Start Condor if its daemons are currently stopped""" """Start Condor if its daemons are currently stopped"""
result = os.system(args['condor_reconfig']) if subprocess.call(condor_reconfig):
if result != 0:
# process failed to reconfigure condor, which means the condor daemons are not currently started # process failed to reconfigure condor, which means the condor daemons are not currently started
os.system(args['start_bin']) subprocess.call(start_bin)
\ No newline at end of file
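The rewritten condorStart above replaces os.system with subprocess.call. The control flow can be sketched as follows (a minimal Python 3 sketch; condor_reconfig and start_bin stand for the argv of the two Condor binaries passed in by the recipe):

```python
import subprocess

def condor_start(condor_reconfig, start_bin):
    """Try a live reconfig first; a non-zero exit status means the
    daemons are not running, so fall back to starting condor_master."""
    if subprocess.call(condor_reconfig):
        return subprocess.call(start_bin)
    return 0
```

Using subprocess.call instead of os.system avoids shell interpretation of the wrapper paths and gives the exit status directly.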
...@@ -98,26 +98,19 @@ class Recipe(GenericBaseRecipe): ...@@ -98,26 +98,19 @@ class Recipe(GenericBaseRecipe):
) )
path_list.append(config_file) path_list.append(config_file)
wrapper = self.createPythonScript(self.options['wrapper'], wrapper = self.createWrapper(self.options['wrapper'],
'slapos.recipe.librecipe.execute.execute', (self.options['apache-binary'], '-f', config_file, '-DFOREGROUND'))
[self.options['apache-binary'], '-f', config_file, '-DFOREGROUND'])
path_list.append(wrapper) path_list.append(wrapper)
promise = self.createPythonScript(self.options['promise'], promise = self.createPythonScript(self.options['promise'],
__name__ + '.promise', __name__ + '.promise',
dict(host=self.options['ip'], port=int(self.options['port_webdav']), (self.options['ip'], int(self.options['port_webdav']),
user=self.options['user'], password=self.options['password']) self.options['user'], self.options['password']))
)
path_list.append(promise) path_list.append(promise)
return path_list return path_list
def promise(args): def promise(host, port, user, password):
host = args['host']
port = args['port']
user = args['user']
password = args['password']
connection = httplib.HTTPSConnection(host, port) connection = httplib.HTTPSConnection(host, port)
auth = base64.b64encode('%s:%s' % (user, password)) auth = base64.b64encode('%s:%s' % (user, password))
connection.request('OPTIONS', '/', connection.request('OPTIONS', '/',
......
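The promise builds an HTTP Basic auth value with base64.b64encode before sending the OPTIONS request. The same construction, as a small Python 3 helper (the helper name and header-dict shape are illustrative, not part of the recipe):

```python
import base64

def basic_auth_header(user, password):
    # Same 'user:password' base64 construction as the promise's auth value,
    # wrapped in a ready-to-send header dict.
    auth = base64.b64encode(('%s:%s' % (user, password)).encode()).decode()
    return {'Authorization': 'Basic ' + auth}
```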
...@@ -35,15 +35,14 @@ class Recipe(GenericBaseRecipe): ...@@ -35,15 +35,14 @@ class Recipe(GenericBaseRecipe):
self.logger.info("Installing dcron...") self.logger.info("Installing dcron...")
options = self.options options = self.options
script = self.createWrapper(name=options['binary'], script = self.createWrapper(options['binary'],
command=options['dcrond-binary'].strip(), (options['dcrond-binary'].strip(),
parameters=[
'-s', options['cron-entries'], '-s', options['cron-entries'],
'-c', options['crontabs'], '-c', options['crontabs'],
'-t', options['cronstamps'], '-t', options['cronstamps'],
'-f', '-l', '5', '-f', '-l', '5',
'-M', options['catcher'] '-M', options['catcher']
]) ))
self.logger.debug('Main cron executable created at %r', script) self.logger.debug('Main cron executable created at %r', script)
......
...@@ -57,8 +57,6 @@ class KnownHostsFile(dict): ...@@ -57,8 +57,6 @@ class KnownHostsFile(dict):
class Recipe(GenericBaseRecipe): class Recipe(GenericBaseRecipe):
def install(self): def install(self):
path_list = []
dropbear_cmd = [self.options['dropbear-binary']] dropbear_cmd = [self.options['dropbear-binary']]
# Don't fork into background # Don't fork into background
dropbear_cmd.append('-F') dropbear_cmd.append('-F')
...@@ -95,19 +93,12 @@ class Recipe(GenericBaseRecipe): ...@@ -95,19 +93,12 @@ class Recipe(GenericBaseRecipe):
if 'shell' in self.options: if 'shell' in self.options:
env['DROPBEAR_OVERRIDE_SHELL'] = self.options['shell'] env['DROPBEAR_OVERRIDE_SHELL'] = self.options['shell']
wrapper = self.createPythonScript( return self.createWrapper(self.options['wrapper'], dropbear_cmd, env)
self.options['wrapper'],
'slapos.recipe.librecipe.execute.executee',
(dropbear_cmd, env, )
)
path_list.append(wrapper)
return path_list
class Client(GenericBaseRecipe): class Client(GenericBaseRecipe):
def install(self): def install(self):
env = dict() env = {}
if 'home' in self.options: if 'home' in self.options:
env['HOME'] = self.options['home'] env['HOME'] = self.options['home']
...@@ -120,13 +111,7 @@ class Client(GenericBaseRecipe): ...@@ -120,13 +111,7 @@ class Client(GenericBaseRecipe):
if 'identity-file' in self.options: if 'identity-file' in self.options:
dropbear_cmd.extend(['-i', self.options['identity-file']]) dropbear_cmd.extend(['-i', self.options['identity-file']])
wrapper = self.createPythonScript( return self.createWrapper(self.options['wrapper'], dropbear_cmd, env)
self.options['wrapper'],
'slapos.recipe.librecipe.execute.executee',
(dropbear_cmd, env, )
)
return [wrapper]
class AddAuthorizedKey(GenericBaseRecipe): class AddAuthorizedKey(GenericBaseRecipe):
......
...@@ -46,7 +46,4 @@ class Recipe(GenericBaseRecipe): ...@@ -46,7 +46,4 @@ class Recipe(GenericBaseRecipe):
cmd.extend(options) cmd.extend(options)
cmd.extend([backup_directory, remote_url]) cmd.extend([backup_directory, remote_url])
wrapper = self.createPythonScript(self.options['wrapper'], return self.createWrapper(self.options['wrapper'], cmd)
'slapos.recipe.librecipe.execute.execute', cmd)
return [wrapper]
...@@ -30,23 +30,20 @@ class Recipe(GenericBaseRecipe): ...@@ -30,23 +30,20 @@ class Recipe(GenericBaseRecipe):
def install(self): def install(self):
parameters = [ args = [
self.options['equeue-binary'],
'--database', self.options['database'], '--database', self.options['database'],
'--logfile', self.options['log'], '--logfile', self.options['log'],
'--lockfile', self.options['lockfile'] '--lockfile', self.options['lockfile']
] ]
if 'takeover-triggered-file-path' in self.options: if 'takeover-triggered-file-path' in self.options:
parameters.extend(['--takeover-triggered-file-path', self.options['takeover-triggered-file-path']]) args += ('--takeover-triggered-file-path',
self.options['takeover-triggered-file-path'])
if 'loglevel' in self.options: if 'loglevel' in self.options:
parameters.extend(['--loglevel', self.options['loglevel']]) args += '--loglevel', self.options['loglevel']
parameters.append(self.options['socket']) args.append(self.options['socket'])
wrapper = self.createWrapper(name=self.options['wrapper'],
command=self.options['equeue-binary'],
parameters=parameters)
return [wrapper]
return self.createWrapper(self.options['wrapper'], args)
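The equeue hunk above assembles the wrapper command line from recipe options, with two optional flags. A standalone Python 3 sketch of that assembly (a plain dict stands in for the buildout options mapping; the function name is illustrative):

```python
def build_equeue_args(options):
    """Rebuild the equeue command line: fixed options first, optional
    flags when present, and the socket path as the final argument."""
    args = [options['equeue-binary'],
            '--database', options['database'],
            '--logfile', options['log'],
            '--lockfile', options['lockfile']]
    if 'takeover-triggered-file-path' in options:
        args += ['--takeover-triggered-file-path',
                 options['takeover-triggered-file-path']]
    if 'loglevel' in options:
        args += ['--loglevel', options['loglevel']]
    args.append(options['socket'])
    return args
```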
...@@ -67,7 +67,7 @@ class Recipe(GenericBaseRecipe): ...@@ -67,7 +67,7 @@ class Recipe(GenericBaseRecipe):
openssl_binary=self.options['openssl-binary'], openssl_binary=self.options['openssl-binary'],
test_ca_path=self.options['certificate-authority-path'], test_ca_path=self.options['certificate-authority-path'],
) )
common_list = [ common_list = (
'--conversion_server_url=' + cloudooo_url, '--conversion_server_url=' + cloudooo_url,
# BBB: We still have test suites that only accept the following 2 options. # BBB: We still have test suites that only accept the following 2 options.
'--conversion_server_hostname=%s' % cloudooo_parsed.hostname, '--conversion_server_hostname=%s' % cloudooo_parsed.hostname,
...@@ -76,19 +76,19 @@ class Recipe(GenericBaseRecipe): ...@@ -76,19 +76,19 @@ class Recipe(GenericBaseRecipe):
'--volatile_memcached_server_port=%s' % memcached_parsed.port, '--volatile_memcached_server_port=%s' % memcached_parsed.port,
'--persistent_memcached_server_hostname=%s' % kumofs_parsed.hostname, '--persistent_memcached_server_hostname=%s' % kumofs_parsed.hostname,
'--persistent_memcached_server_port=%s' % kumofs_parsed.port, '--persistent_memcached_server_port=%s' % kumofs_parsed.port,
] )
path_list.append(self.createPythonScript(self.options['run-unit-test'], path_list.append(self.createPythonScript(self.options['run-unit-test'],
__name__ + '.test.runUnitTest', [dict( __name__ + '.test.runUnitTest',
call_list=[self.options['run-unit-test-binary'], ((self.options['run-unit-test-binary'],
'--erp5_sql_connection_string', mysql_connection_string, '--erp5_sql_connection_string', mysql_connection_string,
'--extra_sql_connection_string_list', ','.join( '--extra_sql_connection_string_list', ','.join(
mysql_connection_string_list), mysql_connection_string_list),
] + common_list, **common_dict)])) ) + common_list, common_dict)))
path_list.append(self.createPythonScript(self.options['run-test-suite'], path_list.append(self.createPythonScript(self.options['run-test-suite'],
__name__ + '.test.runTestSuite', [dict( __name__ + '.test.runTestSuite',
call_list=[self.options['run-test-suite-binary'], ((self.options['run-test-suite-binary'],
'--db_list', ','.join(mysql_connection_string_list), '--db_list', ','.join(mysql_connection_string_list),
] + common_list, **common_dict)])) ) + common_list, common_dict)))
return path_list return path_list
...@@ -98,20 +98,18 @@ class CloudoooRecipe(GenericBaseRecipe): ...@@ -98,20 +98,18 @@ class CloudoooRecipe(GenericBaseRecipe):
common_dict = dict( common_dict = dict(
prepend_path=self.options['prepend-path'], prepend_path=self.options['prepend-path'],
) )
common_list = [ common_list = (
"--paster_path", self.options['ooo-paster'], "--paster_path", self.options['ooo-paster'],
self.options['configuration-file'] self.options['configuration-file']
] )
run_unit_test_path = self.createPythonScript(self.options['run-unit-test'], path_list.append(self.createPythonScript(self.options['run-unit-test'],
__name__ + '.test.runUnitTest', [dict( __name__ + '.test.runUnitTest',
call_list=[self.options['run-unit-test-binary'], ((self.options['run-unit-test-binary'],
] + common_list, **common_dict)]) ) + common_list, common_dict)))
path_list.append(run_unit_test_path)
path_list.append(self.createPythonScript(self.options['run-test-suite'], path_list.append(self.createPythonScript(self.options['run-test-suite'],
__name__ + '.test.runTestSuite', [dict( __name__ + '.test.runTestSuite',
call_list=[self.options['run-test-suite-binary'], ((self.options['run-test-suite-binary'],
], **common_dict)])) ), common_dict)))
return path_list return path_list
...@@ -121,32 +119,20 @@ class EggTestRecipe(GenericBaseRecipe): ...@@ -121,32 +119,20 @@ class EggTestRecipe(GenericBaseRecipe):
off a list of Python eggs. off a list of Python eggs.
""" """
def install(self): def install(self):
path_list = []
test_list = self.options['test-list'].strip().replace('\n', ',') test_list = self.options['test-list'].strip().replace('\n', ',')
common_dict = {}
environment_dict = {} common_dict = {}
if self.options.get('environment'): if self.options.get('environment'):
environment_part = self.buildout.get(self.options['environment']) environment_part = self.buildout.get(self.options['environment'])
if environment_part: if environment_part:
for key, value in environment_part.iteritems(): common_dict['environment'] = dict(environment_part)
environment_dict[key] = value
common_list = [ "--source_code_path_list", test_list]
argument_dict = dict(
call_list=[self.options['run-test-suite-binary'],] + common_list,
environment=environment_dict,
**common_dict
)
if 'prepend-path' in self.options: if 'prepend-path' in self.options:
argument_dict['prepend_path'] = self.options['prepend-path'] common_dict['prepend_path'] = self.options['prepend-path']
run_test_suite_script = self.createPythonScript( return self.createPythonScript(
self.options['run-test-suite'], __name__ + '.test.runTestSuite', self.options['run-test-suite'], __name__ + '.test.runTestSuite',
[argument_dict] ((self.options['run-test-suite-binary'],
"--source_code_path_list", test_list),
common_dict)
) )
path_list.append(run_test_suite_script)
return path_list
...@@ -26,9 +26,9 @@ ...@@ -26,9 +26,9 @@
############################################################################## ##############################################################################
import os import os
import sys import sys
def runTestSuite(args):
def runTestSuite(args, d):
env = os.environ.copy() env = os.environ.copy()
d = args[0]
if 'openssl_binary' in d: if 'openssl_binary' in d:
env['OPENSSL_BINARY'] = d['openssl_binary'] env['OPENSSL_BINARY'] = d['openssl_binary']
if 'test_ca_path' in d: if 'test_ca_path' in d:
...@@ -47,17 +47,15 @@ def runTestSuite(args): ...@@ -47,17 +47,15 @@ def runTestSuite(args):
env.update(d['environment']) env.update(d['environment'])
# Deal with Shebang size limitation # Deal with Shebang size limitation
executable_filepath = d['call_list'][0] executable_filepath = args[0]
file_object = open(executable_filepath, 'r') with open(executable_filepath, 'r') as f:
line = file_object.readline() line = f.readline()
file_object.close()
argument_list = [] argument_list = []
if line[:2] == '#!': if line[:2] == '#!':
executable_filepath = line[2:].strip() executable_filepath = line[2:].strip()
argument_list.append(executable_filepath) argument_list.append(executable_filepath)
argument_list.extend(d['call_list']) argument_list += args
argument_list.extend(sys.argv[1:]) argument_list += sys.argv[1:]
argument_list.append(env) os.execve(executable_filepath, argument_list, env)
os.execle(executable_filepath, *argument_list)
runUnitTest = runTestSuite runUnitTest = runTestSuite
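The shebang workaround above reads the script's first line and re-execs the interpreter directly, sidestepping the kernel's shebang length limit. The argv construction in isolation (a Python 3 sketch; resolve_shebang is a hypothetical name, and call_list is the original command line including the script path):

```python
def resolve_shebang(executable_filepath, call_list):
    """If the target script starts with '#!', prepend its interpreter
    to the argv so the interpreter can be exec'ed directly."""
    with open(executable_filepath) as f:
        line = f.readline()
    argv = []
    if line[:2] == '#!':
        argv.append(line[2:].strip())
    argv += call_list
    return argv
```

The real code then calls os.execve(argv[0], argv + sys.argv[1:], env) with the prepared environment.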
...@@ -68,17 +68,9 @@ class Recipe(GenericBaseRecipe): ...@@ -68,17 +68,9 @@ class Recipe(GenericBaseRecipe):
) )
self.path_list.append(configuration_file) self.path_list.append(configuration_file)
self.path_list.append( self.path_list.append(
self.createPythonScript( self.createWrapper(self.options['wrapper'],
self.options['wrapper'], ( self.options['testnode'], '-l', self.options['log-file'],
'slapos.recipe.librecipe.execute.executee', configuration_file)
[ # Executable
[ self.options['testnode'], '-l', self.options['log-file'],
configuration_file],
# Environment
{
'GIT_SSL_NO_VERIFY': '1',
}
],
) )
) )
self.installApache() self.installApache()
...@@ -106,9 +98,8 @@ class Recipe(GenericBaseRecipe): ...@@ -106,9 +98,8 @@ class Recipe(GenericBaseRecipe):
apache_config) apache_config)
) )
self.path_list.append(config_file) self.path_list.append(config_file)
wrapper = self.createPythonScript(self.options['httpd-wrapper'], wrapper = self.createWrapper(self.options['httpd-wrapper'],
'slapos.recipe.librecipe.execute.execute', (self.options['apache-binary'], '-f', config_file, '-DFOREGROUND'))
[self.options['apache-binary'], '-f', config_file, '-DFOREGROUND'])
self.path_list.append(wrapper) self.path_list.append(wrapper)
# create empty html page to not allow listing of / # create empty html page to not allow listing of /
page = open(os.path.join(self.options['log-directory'], "index.html"), "w") page = open(os.path.join(self.options['log-directory'], "index.html"), "w")
......
...@@ -76,6 +76,12 @@ default_mimetype_entry_list = [ ...@@ -76,6 +76,12 @@ default_mimetype_entry_list = [
"application/x-asc-text application/vnd.openxmlformats-officedocument.wordprocessingml.document x2t", "application/x-asc-text application/vnd.openxmlformats-officedocument.wordprocessingml.document x2t",
"application/x-asc-spreadsheet application/vnd.openxmlformats-officedocument.spreadsheetml.sheet x2t", "application/x-asc-spreadsheet application/vnd.openxmlformats-officedocument.spreadsheetml.sheet x2t",
"application/x-asc-presentation application/vnd.openxmlformats-officedocument.presentationml.presentation x2t", "application/x-asc-presentation application/vnd.openxmlformats-officedocument.presentationml.presentation x2t",
"application/vnd.oasis.opendocument.text application/x-asc-text x2t",
"application/vnd.oasis.opendocument.spreadsheet application/x-asc-spreadsheet x2t",
"application/vnd.oasis.opendocument.presentation application/x-asc-presentation x2t",
"application/x-asc-text application/vnd.oasis.opendocument.text x2t",
"application/x-asc-spreadsheet application/vnd.oasis.opendocument.spreadsheet x2t",
"application/x-asc-presentation application/vnd.oasis.opendocument.presentation x2t",
] ]
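Each entry added above is three space-separated tokens: source mimetype, target mimetype, converter handler. A sketch of how such an entry splits into its parts (illustrative only; this is not cloudooo's actual parser):

```python
def parse_mimetype_entry(entry):
    """Split one mimetype entry of the form
    '<source mimetype> <destination mimetype> <converter>'."""
    source, destination, converter = entry.split()
    return source, destination, converter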
class Recipe(GenericBaseRecipe): class Recipe(GenericBaseRecipe):
...@@ -118,5 +124,5 @@ class Recipe(GenericBaseRecipe): ...@@ -118,5 +124,5 @@ class Recipe(GenericBaseRecipe):
path_list.append(config_file) path_list.append(config_file)
path_list.append(self.createPythonScript(self.options['wrapper'], path_list.append(self.createPythonScript(self.options['wrapper'],
'slapos.recipe.librecipe.execute.execute_with_signal_translation', 'slapos.recipe.librecipe.execute.execute_with_signal_translation',
[self.options['ooo-paster'].strip(), 'serve', config_file])) ((self.options['ooo-paster'].strip(), 'serve', config_file),)))
return path_list return path_list
...@@ -154,7 +154,7 @@ class Recipe(GenericBaseRecipe): ...@@ -154,7 +154,7 @@ class Recipe(GenericBaseRecipe):
)] )]
) )
path_list.append(mysqld) path_list.append(mysqld)
environment = dict(PATH='%s' % self.options['bin-directory']) environment = {'PATH': self.options['bin-directory']}
# TODO: move to a separate recipe (ack'ed by Cedric) # TODO: move to a separate recipe (ack'ed by Cedric)
if 'backup-script' in self.options: if 'backup-script' in self.options:
# backup configuration # backup configuration
...@@ -165,9 +165,13 @@ class Recipe(GenericBaseRecipe): ...@@ -165,9 +165,13 @@ class Recipe(GenericBaseRecipe):
'--defaults-file=%s' % mysql_conf_file, '--defaults-file=%s' % mysql_conf_file,
'--socket=%s' % socket.strip(), '--user=root', '--socket=%s' % socket.strip(), '--user=root',
'--ibbackup=%s'% self.options['xtrabackup-binary']] '--ibbackup=%s'% self.options['xtrabackup-binary']]
innobackupex_incremental = self.createPythonScript(self.options['innobackupex-incremental'], 'slapos.recipe.librecipe.execute.executee', [innobackupex_argument_list + ['--incremental'], environment]) innobackupex_incremental = self.createWrapper(
self.options['innobackupex-incremental'],
innobackupex_argument_list + ['--incremental'], environment)
path_list.append(innobackupex_incremental) path_list.append(innobackupex_incremental)
innobackupex_full = self.createPythonScript(self.options['innobackupex-full'], 'slapos.recipe.librecipe.execute.executee', [innobackupex_argument_list, environment]) innobackupex_full = self.createWrapper(
self.options['innobackupex-full'],
innobackupex_argument_list, environment)
path_list.append(innobackupex_full) path_list.append(innobackupex_full)
backup_controller = self.createPythonScript(self.options['backup-script'], __name__ + '.innobackupex.controller', [innobackupex_incremental, innobackupex_full, full_backup, incremental_backup]) backup_controller = self.createPythonScript(self.options['backup-script'], __name__ + '.innobackupex.controller', [innobackupex_incremental, innobackupex_full, full_backup, incremental_backup])
path_list.append(backup_controller) path_list.append(backup_controller)
...@@ -215,7 +219,9 @@ class Recipe(GenericBaseRecipe): ...@@ -215,7 +219,9 @@ class Recipe(GenericBaseRecipe):
'--defaults-file=%s' % mysql_conf_file, '--defaults-file=%s' % mysql_conf_file,
'--socket=%s' % socket.strip(), '--user=root', '--socket=%s' % socket.strip(), '--user=root',
] ]
pt_exe = self.createPythonScript(os.path.join(self.options['bin-directory'], pt_script_name), 'slapos.recipe.librecipe.execute.executee', [pt_argument_list, environment]) pt_exe = self.createWrapper(
os.path.join(self.options['bin-directory'], pt_script_name),
pt_argument_list, environment)
path_list.append(pt_exe) path_list.append(pt_exe)
return path_list return path_list
......
import os import os
import glob import glob
def controller(args): def controller(innobackupex_incremental, innobackupex_full,
full_backup, incremental_backup):
"""Creates full or incremental backup """Creates full or incremental backup
If no full backup exists yet, one is created If no full backup exists yet, one is created
...@@ -9,8 +10,6 @@ def controller(args): ...@@ -9,8 +10,6 @@ def controller(args):
base is the newest (according to date) full or incremental backup base is the newest (according to date) full or incremental backup
""" """
innobackupex_incremental, innobackupex_full, full_backup, incremental_backup \
= args
if len(os.listdir(full_backup)) == 0: if len(os.listdir(full_backup)) == 0:
print 'Doing full backup in %r' % full_backup print 'Doing full backup in %r' % full_backup
os.execv(innobackupex_full, [innobackupex_full, full_backup]) os.execv(innobackupex_full, [innobackupex_full, full_backup])
......
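The controller above decides between a full and an incremental innobackupex run by checking whether the full-backup directory is empty. That dispatch rule in isolation (Python 3 sketch; the function name is illustrative):

```python
import os

def choose_backup_mode(full_backup_dir):
    """Full backup when the full-backup directory is still empty,
    incremental otherwise (mirrors controller()'s first branch)."""
    return 'full' if not os.listdir(full_backup_dir) else 'incremental'
```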
...@@ -5,9 +5,8 @@ import sys ...@@ -5,9 +5,8 @@ import sys
import pytz import pytz
def runMysql(args): def runMysql(conf):
sleep = 60 sleep = 60
conf = args[0]
mysqld_wrapper_list = [conf['mysqld_binary'], '--defaults-file=%s' % mysqld_wrapper_list = [conf['mysqld_binary'], '--defaults-file=%s' %
conf['configuration_file']] conf['configuration_file']]
# we trust mysql_install that if mysql directory is available mysql was # we trust mysql_install that if mysql directory is available mysql was
...@@ -54,8 +53,7 @@ def runMysql(args): ...@@ -54,8 +53,7 @@ def runMysql(args):
os.execl(mysqld_wrapper_list[0], *mysqld_wrapper_list) os.execl(mysqld_wrapper_list[0], *mysqld_wrapper_list)
def updateMysql(args): def updateMysql(conf):
conf = args[0]
sleep = 30 sleep = 30
is_succeed = False is_succeed = False
try: try:
......
##############################################################################
#
# Copyright (c) 2011 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import GenericBaseRecipe
import binascii
import hashlib
import os
import re
import zc.buildout
_isurl = re.compile('([a-zA-Z0-9+.-]+)://').match
# based on Zope2.utilities.mkzopeinstance.write_inituser
def Zope2InitUser(path, username, password):
# Set password only once
# Currently, rely on existence of a simple file:
# Create it the first time, then next time, detect this file and do no-op.
inituser_done_path = '%s_done' % path
if os.path.exists(inituser_done_path):
return
if os.path.exists(path):
return
open(path, 'w').write('')
os.chmod(path, 0600)
open(path, 'w').write('%s:{SHA}%s\n' % (
username,binascii.b2a_base64(hashlib.sha1(password).digest())[:-1]))
open(inituser_done_path, 'w').write('"inituser" file already created once.')
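Zope2InitUser writes a single `user:{SHA}<base64 digest>` line, the format expected by Zope's inituser mechanism. The same construction spelled in Python 3 (the source is Python 2; inituser_line is a hypothetical helper name):

```python
import binascii
import hashlib

def inituser_line(username, password):
    """Build the 'user:{SHA}<base64-of-sha1-digest>' credential line,
    trailing newline included, as written to the inituser file."""
    digest = binascii.b2a_base64(
        hashlib.sha1(password.encode()).digest())[:-1].decode()
    return '%s:{SHA}%s\n' % (username, digest)
```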
class Recipe(GenericBaseRecipe):
def _options(self, options):
options['password'] = self.generatePassword()
options['deadlock-password'] = self.generatePassword()
def install(self):
"""
All Zope instances have to share the files created by portal_classes
(until everything is integrated into the ZODB).
So do not create multiple Zope instances in the same partition.
"""
path_list = []
Zope2InitUser(self.options['inituser'], self.options['user'],
self.options['password'])
# Symlink to BT5 repositories defined in instance config.
# Those paths will eventually end up in the ZODB, and having symlinks
# inside the XXX makes it possible to reuse such ZODB with another software
# release[ version].
# Note: this path cannot be used for development, it's really just a
# read-only repository.
repository_path = self.options['bt5-repository']
self.bt5_repository_list = []
append = self.bt5_repository_list.append
for repository in self.options.get('bt5-repository-list', '').split():
repository = repository.strip()
if not repository:
continue
if _isurl(repository) and not repository.startswith("file://"):
# XXX: assume it's a valid URL
append(repository)
continue
if repository.startswith('file://'):
repository = repository.replace('file://', '', 1)
if os.path.isabs(repository):
repo_id = hashlib.sha1(repository).hexdigest()
link = os.path.join(repository_path, repo_id)
if os.path.lexists(link):
if not os.path.islink(link):
raise zc.buildout.UserError(
'Target link %r already exists but it is not a link' % link)
os.unlink(link)
os.symlink(repository, link)
self.logger.debug('Created link %r -> %r' % (link, repository_path))
# Always provide a URL-Type
append("file://" + link)
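The loop above gives each local bt5 repository a stable symlink name: the sha1 hexdigest of its absolute path, placed under the shared repository directory. The path computation in isolation (Python 3 sketch; bt5_link_path is a hypothetical helper name):

```python
import hashlib
import os

def bt5_link_path(repository_path, repository):
    """Where a local bt5 repository gets symlinked: sha1(abs path)
    as the link name, under the shared repository directory."""
    repo_id = hashlib.sha1(repository.encode()).hexdigest()
    return os.path.join(repository_path, repo_id)
```

Hashing the path keeps the link name stable across runs, so re-running install() can detect and refresh an existing link.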
# Create zope configuration file
zope_config = dict(
thread_amount=self.options['thread-amount'],
zodb_root_path=self.options['zodb-path'],
zodb_cache_size=int(self.options['zodb-cache-size']),
)
zope_environment = dict(
TMP=self.options['tmp-path'],
TMPDIR=self.options['tmp-path'],
HOME=self.options['tmp-path'],
PATH=self.options['bin-path']
)
# configure default Zope2 zcml
open(self.options['site-zcml'], 'w').write(open(self.getTemplateFilename(
'site.zcml')).read())
zope_config['instance'] = self.options['instance-path']
zope_config['event_log'] = self.options['event-log']
zope_config['z2_log'] = self.options['z2-log']
zope_config['pid-filename'] = self.options['pid-file']
zope_config['lock-filename'] = self.options['lock-file']
zope_config['products'] = 'products %s' % self.options['instance-products']
zope_config['address'] = '%s:%s' % (self.options['ip'], self.options['port'])
zope_config.update(dump_url=self.options['deadlock-path'],
secret=self.options['deadlock-password'])
zope_wrapper_template_location = self.getTemplateFilename('zope.conf.in')
zope_conf_content = self.substituteTemplate(zope_wrapper_template_location,
zope_config)
if ('promise-path' in self.options) and ('site-id' in self.options):
zope_conf_content += self.substituteTemplate(self.getTemplateFilename(
'zope.conf.promise.in'), {
'site-id': self.options['site-id'],
'promise-path': self.options['promise-path'],
})
zope_conf_path = self.createFile(self.options['configuration-file'], zope_conf_content)
path_list.append(zope_conf_path)
# Create init script
path_list.append(self.createPythonScript(self.options['wrapper'], 'slapos.recipe.librecipe.execute.executee', [[self.options['runzope-binary'].strip(), '-C', zope_conf_path], zope_environment]))
return path_list
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:meta="http://namespaces.zope.org/meta"
xmlns:five="http://namespaces.zope.org/five">
<include package="Products.Five" />
<meta:redefinePermission from="zope2.Public" to="zope.Public" />
<!-- Load the meta -->
<include files="package-includes/*-meta.zcml" />
<five:loadProducts file="meta.zcml"/>
<!-- Load the configuration -->
<include files="package-includes/*-configure.zcml" />
<five:loadProducts />
<!-- Load the configuration overrides-->
<includeOverrides files="package-includes/*-overrides.zcml" />
<five:loadProductsOverrides />
<securityPolicy
component="Products.Five.security.FiveSecurityPolicy" />
</configure>
## Zope 2 configuration file generated by SlapOS
# Some defines
%%define INSTANCE %(instance)s
instancehome $INSTANCE
# Used products
%(products)s
# Environment is setup in running wrapper script
# Reason: zope.conf is read too late for some components
# No need to debug
debug-mode off
# Number of ZServer threads
zserver-threads %(thread_amount)s
# File location
pid-filename %(pid-filename)s
lock-filename %(lock-filename)s
# Encoding
rest-input-encoding utf-8
rest-output-encoding utf-8
default-zpublisher-encoding utf-8
# Disable ownership checking to execute codes generated by alarm
skip-ownership-checking on
# Temporary storage database (for sessions)
<zodb_db temporary>
<temporarystorage>
name temporary storage for sessioning
</temporarystorage>
mount-point /temp_folder
container-class Products.TemporaryFolder.TemporaryContainer
</zodb_db>
# Logging configuration
<eventlog>
<logfile>
dateformat
path %(event_log)s
</logfile>
</eventlog>
<logger access>
<logfile>
dateformat
path %(z2_log)s
</logfile>
</logger>
# Serving configuration
<http-server>
address %(address)s
</http-server>
# ZODB configuration
<zodb_db root>
cache-size %(zodb_cache_size)d
<filestorage>
path %(zodb_root_path)s
</filestorage>
mount-point /
</zodb_db>
<zoperunner>
program $INSTANCE/bin/runzope
</zoperunner>
# DeadlockDebugger configuration
<product-config DeadlockDebugger>
dump_url %(dump_url)s
secret %(secret)s
</product-config>
# ERP5 Timer Service
%%import timerserver
<timer-server>
interval 5
</timer-server>
# ERP5 promise
<product-config /%(site-id)s>
promise_path %(promise-path)s
</product-config>
##############################################################################
#
# Copyright (c) 2011 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import GenericBaseRecipe
import binascii
import hashlib
import os
import re
import zc.buildout
_isurl = re.compile('([a-zA-Z0-9+.-]+)://').match
# based on Zope2.utilities.mkzopeinstance.write_inituser
def Zope2InitUser(path, username, password):
# Set password only once
# Currently, rely on existence of a simple file:
# Create it the first time, then next time, detect this file and do no-op.
inituser_done_path = '%s_done' % path
if os.path.exists(inituser_done_path):
return
if os.path.exists(path):
return
open(path, 'w').write('')
os.chmod(path, 0600)
open(path, 'w').write('%s:{SHA}%s\n' % (
username,binascii.b2a_base64(hashlib.sha1(password).digest())[:-1]))
open(inituser_done_path, 'w').write('"inituser" file already created once.')
class Recipe(GenericBaseRecipe):
def _options(self, options):
if 'password' not in options:
options['password'] = self.generatePassword()
def install(self):
"""
All Zope instances have to share the files created by portal_classes
(until everything is integrated into the ZODB).
So do not create multiple Zope instances in the same partition.
"""
path_list = []
Zope2InitUser(self.options['inituser'], self.options['user'],
self.options['password'])
# Symlink to BT5 repositories defined in instance config.
# Those paths will eventually end up in the ZODB, and having symlinks
# inside the XXX makes it possible to reuse such ZODB with another software
# release[ version].
# Note: this path cannot be used for development, it's really just a
# read-only repository.
repository_path = self.options['bt5-repository']
self.bt5_repository_list = []
append = self.bt5_repository_list.append
for repository in self.options.get('bt5-repository-list', '').split():
repository = repository.strip()
if not repository:
continue
if _isurl(repository) and not repository.startswith("file://"):
# XXX: assume it's a valid URL
append(repository)
continue
if repository.startswith('file://'):
        repository = repository.replace('file://', '', 1)
if os.path.isabs(repository):
repo_id = hashlib.sha1(repository).hexdigest()
link = os.path.join(repository_path, repo_id)
if os.path.lexists(link):
if not os.path.islink(link):
          raise zc.buildout.UserError(
            'Target link %r already exists but it is not a link' % link)
os.unlink(link)
os.symlink(repository, link)
        self.logger.debug('Created link %r -> %r' % (link, repository))
# Always provide a URL-Type
append("file://" + link)
zope_environment = {
'TMP': self.options['tmp-path'],
'TMPDIR': self.options['tmp-path'],
'HOME': self.options.get('home-path', self.options.get('tmp-path')),
'PATH': self.options['bin-path'],
'TZ': self.options['timezone'],
}
instance_home = self.options.get("instancehome-path", None)
if instance_home:
zope_environment["INSTANCE_HOME"] = instance_home
# configure default Zope2 zcml
open(self.options['site-zcml'], 'w').write(open(self.getTemplateFilename(
'site.zcml')).read())
    # Create init script
    path_list.append(self.createPythonScript(
      self.options['wrapper'],
      'slapos.recipe.librecipe.execute.executee',
      [[self.options['runzope-binary'].strip(), '-C',
        self.options['configuration-file']],
       zope_environment]))
return path_list
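The BT5 symlinking above keys each repository by the SHA-1 of its absolute path, so the `file://` URL stored in the ZODB stays stable even when the software release directory moves. A small sketch of just the naming step (both paths are hypothetical, chosen for illustration):

```python
import hashlib
import os

def bt5_link_url(repository, repository_path):
    # Hypothetical helper mirroring the recipe's naming logic: the symlink
    # name is the sha1 hexdigest of the absolute repository path, and the
    # URL handed to the ZODB always points at the link, not the target.
    repo_id = hashlib.sha1(repository.encode('utf-8')).hexdigest()
    link = os.path.join(repository_path, repo_id)
    return 'file://' + link

url = bt5_link_url('/opt/slapgrid/soft/bt5', '/srv/partition/bt5_repository')
print(url)
```

Because the digest depends only on the target path, re-running the recipe is idempotent: the same repository always maps to the same link name.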
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:meta="http://namespaces.zope.org/meta"
xmlns:five="http://namespaces.zope.org/five">
<include package="Products.Five" />
<meta:redefinePermission from="zope2.Public" to="zope.Public" />
<!-- Load the meta -->
<include files="package-includes/*-meta.zcml" />
<five:loadProducts file="meta.zcml"/>
<!-- Load the configuration -->
<include files="package-includes/*-configure.zcml" />
<five:loadProducts />
<!-- Load the configuration overrides-->
<includeOverrides files="package-includes/*-overrides.zcml" />
<five:loadProductsOverrides />
<securityPolicy
component="Products.Five.security.FiveSecurityPolicy" />
</configure>
@@ -120,12 +120,11 @@ class Recipe(GenericBaseRecipe):
           'server_text': server_snippet},
         )
       )
-    wrapper_path = self.createPythonScript(
+    wrapper_path = self.createWrapper(
       self.options['wrapper-path'],
-      'slapos.recipe.librecipe.execute.execute',
-      arguments=[self.options['binary-path'].strip(), '-f', configuration_path],)
+      (self.options['binary-path'].strip(), '-f', configuration_path))
     ctl_path = self.createPythonScript(
       self.options['ctl-path'],
-      '%s.haproxy.haproxyctl' % __name__,
-      {'socket_path':self.options['socket-path']})
+      __name__ + '.haproxy.haproxyctl',
+      (self.options['socket-path'],))
     return [configuration_path, wrapper_path, ctl_path]
@@ -4,7 +4,7 @@ try:
 except ImportError:
   pass

-def haproxyctl(conf):
+def haproxyctl(socket_path):
   while True:
     try:
       l = raw_input('> ')
@@ -14,7 +14,7 @@ def haproxyctl(conf):
     if l == 'quit':
       break
     s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
-    s.connect(conf['socket_path'])
+    s.connect(socket_path)
     s.send('%s\n' % l)
     while True:
       r = s.recv(1024)
...
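`haproxyctl` is essentially a line-oriented client for HAProxy's UNIX admin socket: send one command terminated by a newline, then read until the server closes the connection. The same pattern can be exercised against any AF_UNIX server; the tiny echo server below merely stands in for HAProxy (the socket path and command are made up for the demo):

```python
import os
import socket
import tempfile
import threading

def query_unix_socket(path, command):
    # Mirror of haproxyctl's request pattern: one command per connection,
    # then read until the peer closes the stream.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.sendall((command + '\n').encode('ascii'))
    chunks = []
    while True:
        r = s.recv(1024)
        if not r:
            break
        chunks.append(r)
    s.close()
    return b''.join(chunks).decode('ascii')

# Stand-in server: echoes the received command back, prefixed, then closes.
sock_path = os.path.join(tempfile.mkdtemp(), 'admin.sock')
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    data = conn.recv(1024)
    conn.sendall(b'echo: ' + data)
    conn.close()

t = threading.Thread(target=serve_once)
t.start()
reply = query_unix_socket(sock_path, 'show stat')
t.join()
server.close()
print(reply)
```

Reading until EOF (an empty `recv`) is what makes the client robust to replies longer than one buffer, which is why the loop above matches the `while True: r = s.recv(1024)` loop in haproxyctl.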
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import BaseSlapRecipe
import hashlib
import os
import pkg_resources
import sys
import zc.buildout
import ConfigParser
class Recipe(BaseSlapRecipe):
def getTemplateFilename(self, template_name):
return pkg_resources.resource_filename(__name__,
'template/%s' % template_name)
def _install(self):
self.path_list = []
self.requirements, self.ws = self.egg.working_set()
# XXX-Cedric : add logrotate?
self.cron_d = self.installCrond()
kumo_conf = self.installKumo(self.getLocalIPv4Address())
ca_conf = self.installCertificateAuthority()
key, certificate = self.requestCertificate('Login Based Access')
stunnel_conf = self.installStunnel(self.getGlobalIPv6Address(),
self.getLocalIPv4Address(), 12345, kumo_conf['kumo_gateway_port'],
certificate, key, ca_conf['ca_crl'],
ca_conf['certificate_authority_path'])
self.linkBinary()
self.setConnectionDict(dict(
stunnel_ip = stunnel_conf['public_ip'],
stunnel_port = stunnel_conf['public_port'],
))
return self.path_list
def linkBinary(self):
"""Links binaries to instance's bin directory for easier exposal"""
for linkline in self.options.get('link_binary_list', '').splitlines():
if not linkline:
continue
target = linkline.split()
if len(target) == 1:
target = target[0]
path, linkname = os.path.split(target)
else:
linkname = target[1]
target = target[0]
link = os.path.join(self.bin_directory, linkname)
if os.path.lexists(link):
if not os.path.islink(link):
        raise zc.buildout.UserError(
          'Target link %r already exists but it is not a link' % link)
os.unlink(link)
os.symlink(target, link)
self.logger.debug('Created link %r -> %r' % (link, target))
self.path_list.append(link)
def installCrond(self):
timestamps = self.createDataDirectory('cronstamps')
cron_output = os.path.join(self.log_directory, 'cron-output')
self._createDirectory(cron_output)
catcher = zc.buildout.easy_install.scripts([('catchcron',
__name__ + '.catdatefile', 'catdatefile')], self.ws, sys.executable,
self.bin_directory, arguments=[cron_output])[0]
self.path_list.append(catcher)
cron_d = os.path.join(self.etc_directory, 'cron.d')
crontabs = os.path.join(self.etc_directory, 'crontabs')
self._createDirectory(cron_d)
self._createDirectory(crontabs)
# Use execute from erp5.
wrapper = zc.buildout.easy_install.scripts([('crond',
'slapos.recipe.librecipe.execute', 'execute')], self.ws, sys.executable,
self.wrapper_directory, arguments=[
self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
'-t', timestamps, '-f', '-l', '5', '-M', catcher]
)[0]
self.path_list.append(wrapper)
return cron_d
def installLogrotate(self):
"""Installs logortate main configuration file and registers its to cron"""
logrotate_d = os.path.abspath(os.path.join(self.etc_directory,
'logrotate.d'))
self._createDirectory(logrotate_d)
logrotate_backup = self.createBackupDirectory('logrotate')
logrotate_conf = self.createConfigurationFile("logrotate.conf",
"include %s" % logrotate_d)
logrotate_cron = os.path.join(self.cron_d, 'logrotate')
state_file = os.path.join(self.data_root_directory, 'logrotate.status')
open(logrotate_cron, 'w').write('0 0 * * * %s -s %s %s' %
(self.options['logrotate_binary'], state_file, logrotate_conf))
self.path_list.extend([logrotate_d, logrotate_conf, logrotate_cron])
return logrotate_d, logrotate_backup
def registerLogRotation(self, name, log_file_list, postrotate_script):
"""Register new log rotation requirement"""
open(os.path.join(self.logrotate_d, name), 'w').write(
self.substituteTemplate(self.getTemplateFilename(
'logrotate_entry.in'),
dict(file_list=' '.join(['"'+q+'"' for q in log_file_list]),
postrotate=postrotate_script, olddir=self.logrotate_backup)))
def installCertificateAuthority(self, ca_country_code='XX',
ca_email='xx@example.com', ca_state='State', ca_city='City',
ca_company='Company'):
backup_path = self.createBackupDirectory('ca')
self.ca_dir = os.path.join(self.data_root_directory, 'ca')
self._createDirectory(self.ca_dir)
self.ca_request_dir = os.path.join(self.ca_dir, 'requests')
self._createDirectory(self.ca_request_dir)
config = dict(ca_dir=self.ca_dir, request_dir=self.ca_request_dir)
self.ca_private = os.path.join(self.ca_dir, 'private')
self.ca_certs = os.path.join(self.ca_dir, 'certs')
self.ca_crl = os.path.join(self.ca_dir, 'crl')
self.ca_newcerts = os.path.join(self.ca_dir, 'newcerts')
self.ca_key_ext = '.key'
self.ca_crt_ext = '.crt'
for d in [self.ca_private, self.ca_crl, self.ca_newcerts, self.ca_certs]:
self._createDirectory(d)
for f in ['crlnumber', 'serial']:
if not os.path.exists(os.path.join(self.ca_dir, f)):
open(os.path.join(self.ca_dir, f), 'w').write('01')
if not os.path.exists(os.path.join(self.ca_dir, 'index.txt')):
open(os.path.join(self.ca_dir, 'index.txt'), 'w').write('')
openssl_configuration = os.path.join(self.ca_dir, 'openssl.cnf')
config.update(
working_directory=self.ca_dir,
country_code=ca_country_code,
state=ca_state,
city=ca_city,
company=ca_company,
email_address=ca_email,
)
self._writeFile(openssl_configuration, pkg_resources.resource_string(
__name__, 'template/openssl.cnf.ca.in') % config)
self.path_list.extend(zc.buildout.easy_install.scripts([
('certificate_authority',
__name__ + '.certificate_authority', 'runCertificateAuthority')],
self.ws, sys.executable, self.wrapper_directory, arguments=[dict(
openssl_configuration=openssl_configuration,
openssl_binary=self.options['openssl_binary'],
certificate=os.path.join(self.ca_dir, 'cacert.pem'),
key=os.path.join(self.ca_private, 'cakey.pem'),
        crl=self.ca_crl,
request_dir=self.ca_request_dir
)]))
# configure backup
backup_cron = os.path.join(self.cron_d, 'ca_rdiff_backup')
open(backup_cron, 'w').write(
'''0 0 * * * %(rdiff_backup)s %(source)s %(destination)s'''%dict(
rdiff_backup=self.options['rdiff_backup_binary'],
source=self.ca_dir,
destination=backup_path))
self.path_list.append(backup_cron)
return dict(
ca_certificate=os.path.join(config['ca_dir'], 'cacert.pem'),
ca_crl=os.path.join(config['ca_dir'], 'crl'),
certificate_authority_path=config['ca_dir']
)
def requestCertificate(self, name):
hash = hashlib.sha512(name).hexdigest()
key = os.path.join(self.ca_private, hash + self.ca_key_ext)
certificate = os.path.join(self.ca_certs, hash + self.ca_crt_ext)
parser = ConfigParser.RawConfigParser()
parser.add_section('certificate')
parser.set('certificate', 'name', name)
parser.set('certificate', 'key_file', key)
parser.set('certificate', 'certificate_file', certificate)
parser.write(open(os.path.join(self.ca_request_dir, hash), 'w'))
return key, certificate
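`requestCertificate` communicates with the `certificate_authority` daemon through INI request files named after the SHA-512 of the certificate name; `checkRequestDir` later reads them back. A sketch of that round trip, compatible with both Python 2's `ConfigParser` and Python 3's `configparser` (the `/srv/ca/...` paths are stand-ins for the recipe's directories):

```python
import hashlib
import os
import tempfile

try:
    import configparser                   # Python 3
except ImportError:
    import ConfigParser as configparser   # Python 2

request_dir = tempfile.mkdtemp()
name = 'Login Based Access'
digest = hashlib.sha512(name.encode('utf-8')).hexdigest()

# Write the request, as requestCertificate does.
writer = configparser.RawConfigParser()
writer.add_section('certificate')
writer.set('certificate', 'name', name)
writer.set('certificate', 'key_file',
           os.path.join('/srv/ca/private', digest + '.key'))
writer.set('certificate', 'certificate_file',
           os.path.join('/srv/ca/certs', digest + '.crt'))
with open(os.path.join(request_dir, digest), 'w') as f:
    writer.write(f)

# Read it back, as checkRequestDir does before signing.
reader = configparser.RawConfigParser()
reader.read(os.path.join(request_dir, digest))
print(reader.get('certificate', 'name'))
```

Using the digest as the file name keeps requests collision-free and lets the daemon process the directory without any index.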
def installStunnel(self, public_ip, private_ip, public_port, private_port,
ca_certificate, key, ca_crl, ca_path):
"""Installs stunnel"""
template_filename = self.getTemplateFilename('stunnel.conf.in')
log = os.path.join(self.log_directory, 'stunnel.log')
pid_file = os.path.join(self.run_directory, 'stunnel.pid')
stunnel_conf = dict(
public_ip=public_ip,
private_ip=private_ip,
public_port=public_port,
pid_file=pid_file,
log=log,
cert = ca_certificate,
key = key,
ca_crl = ca_crl,
ca_path = ca_path,
private_port = private_port,
)
stunnel_conf_path = self.createConfigurationFile("stunnel.conf",
self.substituteTemplate(template_filename,
stunnel_conf))
wrapper = zc.buildout.easy_install.scripts([('stunnel',
'slapos.recipe.librecipe.execute', 'execute_wait')], self.ws,
sys.executable, self.wrapper_directory, arguments=[
[self.options['stunnel_binary'].strip(), stunnel_conf_path],
[ca_certificate, key]]
)[0]
self.path_list.append(wrapper)
return stunnel_conf
def installKumo(self, ip, kumo_manager_port=13101, kumo_server_port=13201,
kumo_server_listen_port=13202, kumo_gateway_port=13301):
    # XXX: kumo does not store its pid in a file unless it runs as a daemon,
    # but running daemons is incompatible with SlapOS, so there is currently
    # no way to get Kumo's pid files in order to rotate logs and send signals
config = dict(
kumo_gateway_binary=self.options['kumo_gateway_binary'],
kumo_gateway_ip=ip,
kumo_gateway_log=os.path.join(self.log_directory, "kumo-gateway.log"),
kumo_manager_binary=self.options['kumo_manager_binary'],
kumo_manager_ip=ip,
kumo_manager_log=os.path.join(self.log_directory, "kumo-manager.log"),
kumo_server_binary=self.options['kumo_server_binary'],
kumo_server_ip=ip,
kumo_server_log=os.path.join(self.log_directory, "kumo-server.log"),
kumo_server_storage=os.path.join(self.data_root_directory, "kumodb.tch"),
kumo_manager_port=kumo_manager_port,
kumo_server_port=kumo_server_port,
kumo_server_listen_port=kumo_server_listen_port,
kumo_gateway_port=kumo_gateway_port
)
self.path_list.append(self.createRunningWrapper('kumo_gateway',
self.substituteTemplate(self.getTemplateFilename('kumo_gateway.in'),
config)))
self.path_list.append(self.createRunningWrapper('kumo_manager',
self.substituteTemplate(self.getTemplateFilename('kumo_manager.in'),
config)))
self.path_list.append(self.createRunningWrapper('kumo_server',
self.substituteTemplate(self.getTemplateFilename('kumo_server.in'),
config)))
return dict(
kumo_address = '%s:%s' % (config['kumo_gateway_ip'],
config['kumo_gateway_port']),
kumo_gateway_ip=config['kumo_gateway_ip'],
kumo_gateway_port=config['kumo_gateway_port'],
)
import os
import subprocess
import time
import ConfigParser
def popenCommunicate(command_list, input=None):
subprocess_kw = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if input is not None:
subprocess_kw.update(stdin=subprocess.PIPE)
popen = subprocess.Popen(command_list, **subprocess_kw)
result = popen.communicate(input)[0]
if popen.returncode is None:
popen.kill()
if popen.returncode != 0:
raise ValueError('Issue during calling %r, result was:\n%s' % (
command_list, result))
return result
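`popenCommunicate` above merges stderr into stdout, feeds optional input to stdin, and raises on a non-zero exit status. The original is Python 2 and passes raw strings; an equivalent sketch in text mode (so it also runs on Python 3), exercised with `cat`, which simply echoes its stdin:

```python
import subprocess

def popen_communicate(command_list, input_data=None):
    # Same contract as popenCommunicate: capture stdout with stderr merged
    # in, optionally feed stdin, and raise ValueError on non-zero exit.
    # universal_newlines=True gives text-mode pipes on both Python 2 and 3.
    kw = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
              universal_newlines=True)
    if input_data is not None:
        kw['stdin'] = subprocess.PIPE
    popen = subprocess.Popen(command_list, **kw)
    result = popen.communicate(input_data)[0]
    if popen.returncode != 0:
        raise ValueError('Issue during calling %r, result was:\n%s'
                         % (command_list, result))
    return result

out = popen_communicate(['cat'], 'hello')
```

Note that after `communicate()` returns, the child has always exited, so no extra kill step is needed in this variant.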
class CertificateAuthority:
def __init__(self, key, certificate, openssl_binary,
openssl_configuration, request_dir):
self.key = key
self.certificate = certificate
self.openssl_binary = openssl_binary
self.openssl_configuration = openssl_configuration
self.request_dir = request_dir
def checkAuthority(self):
file_list = [ self.key, self.certificate ]
ca_ready = True
for f in file_list:
if not os.path.exists(f):
ca_ready = False
break
if ca_ready:
return
for f in file_list:
if os.path.exists(f):
os.unlink(f)
try:
# no CA, let us create new one
popenCommunicate([self.openssl_binary, 'req', '-nodes', '-config',
self.openssl_configuration, '-new', '-x509', '-extensions',
'v3_ca', '-keyout', self.key, '-out', self.certificate,
'-days', '10950'], 'Automatic Certificate Authority\n')
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
def _checkCertificate(self, common_name, key, certificate):
file_list = [key, certificate]
ready = True
for f in file_list:
if not os.path.exists(f):
ready = False
break
if ready:
return False
for f in file_list:
if os.path.exists(f):
os.unlink(f)
csr = certificate + '.csr'
try:
popenCommunicate([self.openssl_binary, 'req', '-config',
self.openssl_configuration, '-nodes', '-new', '-keyout',
key, '-out', csr, '-days', '3650'],
common_name + '\n')
try:
popenCommunicate([self.openssl_binary, 'ca', '-batch', '-config',
self.openssl_configuration, '-out', certificate,
'-infiles', csr])
finally:
if os.path.exists(csr):
os.unlink(csr)
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
else:
return True
def checkRequestDir(self):
for request_file in os.listdir(self.request_dir):
parser = ConfigParser.RawConfigParser()
parser.readfp(open(os.path.join(self.request_dir, request_file), 'r'))
if self._checkCertificate(parser.get('certificate', 'name'),
parser.get('certificate', 'key_file'), parser.get('certificate',
'certificate_file')):
print 'Created certificate %r' % parser.get('certificate', 'name')
def runCertificateAuthority(args):
ca_conf = args[0]
ca = CertificateAuthority(ca_conf['key'], ca_conf['certificate'],
ca_conf['openssl_binary'], ca_conf['openssl_configuration'],
ca_conf['request_dir'])
while True:
ca.checkAuthority()
ca.checkRequestDir()
time.sleep(60)
#!/bin/sh
exec %(kumo_gateway_binary)s -F -E -m %(kumo_manager_ip)s:%(kumo_manager_port)s -t %(kumo_gateway_ip)s:%(kumo_gateway_port)s -o %(kumo_gateway_log)s
#!/bin/sh
exec %(kumo_manager_binary)s -a -l %(kumo_manager_ip)s:%(kumo_manager_port)s -o %(kumo_manager_log)s
#!/bin/sh
exec %(kumo_server_binary)s -l %(kumo_server_ip)s:%(kumo_server_port)s -L %(kumo_server_listen_port)s -m %(kumo_manager_ip)s:%(kumo_manager_port)s -s %(kumo_server_storage)s -o %(kumo_server_log)s
#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#
# This definition stops the following lines choking if HOME isn't
# defined.
HOME = .
RANDFILE = $ENV::HOME/.rnd
# Extra OBJECT IDENTIFIER info:
#oid_file = $ENV::HOME/.oid
oid_section = new_oids
# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)
[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6
# Policies used by the TSA examples.
tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7
####################################################################
[ ca ]
default_ca = CA_default # The default ca section
####################################################################
[ CA_default ]
dir = %(working_directory)s # Where everything is kept
certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
					# several certificates with the same subject.
new_certs_dir = $dir/newcerts # default place for new certs.
certificate = $dir/cacert.pem # The CA certificate
serial = $dir/serial # The current serial number
crlnumber = $dir/crlnumber # the current crl number
# must be commented out to leave a V1 CRL
crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem # The private key
RANDFILE = $dir/private/.rand # private random number file
x509_extensions	= usr_cert		# The extensions to add to the cert
# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt = ca_default # Subject Name options
cert_opt = ca_default # Certificate field options
# Extension copying option: use with caution.
# copy_extensions = copy
# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions = crl_ext
default_days = 3650 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = default # use public key default MD
preserve = no # keep passed DN ordering
# A few different ways of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy = policy_match
# For the CA policy
[ policy_match ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
####################################################################
[ req ]
default_bits = 2048
default_md = sha1
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
#attributes = req_attributes
x509_extensions	= v3_ca	# The extensions to add to the self signed cert
# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret
# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only
# req_extensions = v3_req # The extensions to add to a certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = %(country_code)s
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = %(state)s
localityName = Locality Name (eg, city)
localityName_value = %(city)s
0.organizationName = Organization Name (eg, company)
0.organizationName_value = %(company)s
# we can do this but it is not needed normally :-)
#1.organizationName = Second Organization Name (eg, company)
#1.organizationName_default = World Wide Web Pty Ltd
commonName = Common Name (eg, your name or your server\'s hostname)
commonName_max = 64
emailAddress = Email Address
emailAddress_value = %(email_address)s
emailAddress_max = 64
# SET-ex3 = SET extension number 3
#[ req_attributes ]
#challengePassword = A challenge password
#challengePassword_min = 4
#challengePassword_max = 20
#
#unstructuredName = An optional company name
[ usr_cert ]
# These extensions are added when 'ca' signs a request.
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This is required for TSA certificates.
# extendedKeyUsage = critical,timeStamping
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
[ v3_ca ]
# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true
# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as a test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign
# Some might want this also
# nsCertType = sslCA, emailCA
# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy
# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF
[ crl_ext ]
# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.
# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo
####################################################################
[ tsa ]
default_tsa = tsa_config1 # the default TSA section
[ tsa_config1 ]
# These are used by the TSA reply generation only.
dir = /etc/pki/tls # TSA root directory
serial = $dir/tsaserial # The current serial number (mandatory)
crypto_device = builtin # OpenSSL engine to use for signing
signer_cert = $dir/tsacert.pem # The TSA signing certificate
# (optional)
certs = $dir/cacert.pem # Certificate chain to include in reply
# (optional)
signer_key = $dir/private/tsakey.pem # The TSA private key (optional)
default_policy = tsa_policy1 # Policy if request did not specify it
# (optional)
other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional)
digests = md5, sha1 # Acceptable message digests (mandatory)
accuracy = secs:1, millisecs:500, microsecs:100 # (optional)
clock_precision_digits = 0 # number of digits after dot. (optional)
ordering = yes # Is ordering defined for timestamps?
# (optional, default: no)
tsa_name = yes # Must the TSA name be included in the reply?
# (optional, default: no)
ess_cert_id_chain = no # Must the ESS cert id chain be included?
# (optional, default: no)
foreground = yes
output = %(log)s
pid = %(pid_file)s
syslog = no
CApath = %(ca_path)s
key = %(key)s
CRLpath = %(ca_crl)s
cert = %(cert)s
[service]
accept = %(public_ip)s:%(public_port)s
connect = %(private_ip)s:%(private_port)s
@@ -298,9 +298,9 @@ class Request(BaseRecipe):
       'local_host': local_host, 'local_port': local_port,
       }))
     wrapper = zc.buildout.easy_install.scripts([('stunnel',
-      'slapos.recipe.librecipe.execute', 'execute')], self.ws,
-      sys.executable, self.wrapper_directory, arguments=[
-        self.options['stunnel_binary'].strip(), stunnel_conf_path]
+      'slapos.recipe.librecipe.execute', 'generic_exec')], self.ws,
+      sys.executable, self.wrapper_directory, arguments='%r, %r'
+        % (self.options['stunnel_binary'].strip(), stunnel_conf_path)
       )[0]
     self.path_list.append(wrapper)
     return (local_host, local_port,)
@@ -71,13 +71,12 @@ class Recipe(GenericBaseRecipe):
       )
     path_list.append(httpd_conf)

-    wrapper = self.createWrapper(name=self.options['wrapper'],
-      command=self.options['httpd-binary'],
-      parameters=[
+    wrapper = self.createWrapper(self.options['wrapper'],
+      (self.options['httpd-binary'],
         '-f',
         self.options['httpd-conf'],
         '-DFOREGROUND'
-      ])
+      ))
     path_list.append(wrapper)
...
@@ -2,7 +2,6 @@ import sys
 import os
 import signal
 import subprocess
-import time
 from collections import defaultdict
 from inotify_simple import INotify, flags
@@ -29,65 +28,96 @@ def _wait_files_creation(file_list):
       if event.name in directory:
         directory[event.name] = event.mask & (flags.CREATE | flags.MOVED_TO)
-def execute(args):
-    """Portable execution with process replacement"""
-    # XXX: Kept for backward compatibility
-    generic_exec([args, None, None])
-
-def execute_wait(args):
-    """Execution but after all files in args[1] exists"""
-    # XXX: Kept for backward compatibility
-    generic_exec([args[0], args[1], None])
+def _libc():
+    from ctypes import CDLL, get_errno, c_char_p, c_int, c_ulong, util
+    libc = CDLL(util.find_library('c'), use_errno=True)
+    libc_mount = libc.mount
+    libc_mount.argtypes = c_char_p, c_char_p, c_char_p, c_ulong, c_char_p
+    def mount(source, target, filesystemtype, mountflags, data):
+        if libc_mount(source, target, filesystemtype, mountflags, data):
+            e = get_errno()
+            raise OSError(e, os.strerror(e))
+    libc_unshare = libc.unshare
+    libc_unshare.argtypes = c_int,
+    def unshare(flags):
+        if libc_unshare(flags):
+            e = get_errno()
+            raise OSError(e, os.strerror(e))
+    return mount, unshare
+
+def generic_exec(args, extra_environ=None, wait_list=None,
+                 pidfile=None, reserve_cpu=False, private_dev_shm=None,
+                 #shebang_workaround=False, # XXX: still needed ?
+                 ):
+    args = list(args)
+
+    if pidfile:
+        import psutil
+        try:
+            with open(pidfile) as f:
+                pid = int(f.read())
+            running = psutil.Process(pid).cmdline()
+        except Exception:
+            pass
+        else:
+            # With chained shebangs, several paths may be inserted
+            # at the beginning.
+            n = len(args)
+            for i in xrange(1 + len(running) - n):
+                if args == running[i:n+i]:
+                    sys.exit("Already running with pid %s." % pid)
+        with open(pidfile, 'w') as f:
+            f.write(str(os.getpid()))
+
+    args += sys.argv[1:]
+
+    if reserve_cpu:
+        # If the CGROUPS cpuset is available (and prepared by slap format),
+        # request an exclusive CPU core for this process.
+        with open(os.path.expanduser('~/.slapos-cpu-exclusive'), 'a') as f:
+            f.write('%s\n' % os.getpid())
+
+    if wait_list:
+        _wait_files_creation(wait_list)
+
+    if private_dev_shm:
+        mount, unshare = _libc()
+        CLONE_NEWNS = 0x00020000
+        CLONE_NEWUSER = 0x10000000
+        uid = os.getuid()
+        gid = os.getgid()
+        unshare(CLONE_NEWUSER | CLONE_NEWNS)
+        with open('/proc/self/setgroups', 'wb') as f: f.write('deny')
+        with open('/proc/self/uid_map', 'wb') as f: f.write('%s %s 1' % (uid, uid))
+        with open('/proc/self/gid_map', 'wb') as f: f.write('%s %s 1' % (gid, gid))
+        mount('tmpfs', '/dev/shm', 'tmpfs', 0, 'size=' + private_dev_shm)
+
+    if extra_environ:
+        env = os.environ.copy()
+        env.update(extra_environ)
+        os.execve(args[0], args, env)
+    else:
+        os.execv(args[0], args)
 
 child_pg = None
-
-def executee(args):
-    """Portable execution with process replacement and environment manipulation"""
-    # XXX: Kept for backward compatibility
-    generic_exec([args[0], None, args[1]])
-
-def executee_wait(args):
-    """Portable execution with process replacement and environment manipulation"""
-    # XXX: Kept for backward compatibility
-    generic_exec(args)
-
-def generic_exec(args):
-    exec_list = list(args[0])
-    file_list = args[1]
-    environment_overriding = args[2]
-    exec_env = os.environ.copy()
-    if environment_overriding is not None:
-        exec_env.update(environment_overriding)
-    if file_list is not None:
-        _wait_files_creation(file_list)
-    os.execve(exec_list[0], exec_list + sys.argv[1:], exec_env)
 
 def sig_handler(sig, frame):
     print 'Received signal %r, killing children and exiting' % sig
     if child_pg is not None:
         os.killpg(child_pg, signal.SIGHUP)
         os.killpg(child_pg, signal.SIGTERM)
-    sys.exit(0)
-
-signal.signal(signal.SIGINT, sig_handler)
-signal.signal(signal.SIGQUIT, sig_handler)
-signal.signal(signal.SIGTERM, sig_handler)
+    sys.exit()
 
 def execute_with_signal_translation(args):
     """Run process as children and translate from SIGTERM to another signal"""
     global child_pg
+    signal.signal(signal.SIGINT, sig_handler)
+    signal.signal(signal.SIGQUIT, sig_handler)
+    signal.signal(signal.SIGTERM, sig_handler)
     child = subprocess.Popen(args, close_fds=True, preexec_fn=os.setsid)
     child_pg = child.pid
     try:
         print 'Process %r started' % args
-        while True:
-            time.sleep(10)
+        signal.pause()
     finally:
         os.killpg(child_pg, signal.SIGHUP)
         os.killpg(child_pg, signal.SIGTERM)
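The subtlest part of the new `generic_exec` is the pidfile check: because generated wrappers may be invoked through a chain of shebangs, the recorded command line can have extra interpreter paths prepended, so a plain equality test is not enough. A minimal, standalone sketch of that window comparison (the function name is mine, not part of the codebase):

```python
def already_running(args, running):
    # Mirrors the pidfile check in generic_exec: with chained shebangs,
    # several interpreter paths may be inserted at the beginning of the
    # running process's command line, so compare `args` against every
    # len(args)-sized window starting at offsets 0..len(running)-len(args).
    n = len(args)
    for i in range(1 + len(running) - n):
        if args == running[i:n + i]:
            return True
    return False
```

In `generic_exec` itself a match calls `sys.exit("Already running with pid %s." % pid)` instead of returning True.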
@@ -33,7 +33,6 @@ import sys
 import inspect
 import re
 import shutil
-from textwrap import dedent
 import urllib
 import urlparse
@@ -116,92 +115,60 @@ class GenericBaseRecipe(object):
     with io.open(filepath, 'w+', encoding=encoding) as f:
       f.write(u'\n'.join(lines))
-  def createPythonScript(self, name, absolute_function, arguments=''):
+  def createPythonScript(self, name, absolute_function, args=(), kw={}):
     """Create a python script using zc.buildout.easy_install.scripts

     * function should look like 'module.function', or only 'function'
       if it is a builtin function."""
-    absolute_function = tuple(absolute_function.rsplit('.', 1))
-    if len(absolute_function) == 1:
-      absolute_function = ('__builtin__',) + absolute_function
-    if len(absolute_function) != 2:
-      raise ValueError("A non valid function was given")
-
-    module, function = absolute_function
+    function = absolute_function.rsplit('.', 1)
+    if len(function) == 1:
+      module = '__builtin__'
+      function, = function
+    else:
+      module, function = function
     path, filename = os.path.split(os.path.abspath(name))
-
-    script = zc.buildout.easy_install.scripts(
+    assert not isinstance(args, (basestring, dict)), args
+    args = map(repr, args)
+    args += map('%s=%r'.__mod__, kw.iteritems())
+    return zc.buildout.easy_install.scripts(
       [(filename, module, function)], self._ws, sys.executable,
-      path, arguments=arguments)[0]
-    return script
+      path, arguments=', '.join(args))[0]
-  def createWrapper(self, name, command, parameters, comments=[],
-                    parameters_extra=False, environment=None,
-                    pidfile=None, reserve_cpu=False
-                   ):
-    """
-    Creates a shell script for process replacement.
-    Takes care of quoting.
-    Takes care of #! line limitation when the wrapped command is a script.
-    if pidfile parameter is specified, then it will make the wrapper a singleton,
-    accepting to run only if no other instance is running.
-
-    :param reserve_cpu: bool, try to reserve one core for the `command`
-    """
-
-    lines = [ '#!/bin/sh' ]
-
-    if comments:
-      lines += '# ', '\n# '.join(comments), '\n'
-
-    lines.append('COMMAND=' + shlex.quote(command))
-
-    for key in environment or ():
-      lines.append('export %s=%s' % (key, environment[key]))
-
-    if pidfile:
-      lines.append(dedent("""
-          # Check for other instances
-          pidfile=%s
-          if [ -s $pidfile ]; then
-            if pid=`pgrep -F $pidfile -f "$COMMAND" 2>/dev/null`; then
-              echo "Already running with pid $pid."
-              exit 1
-            fi
-          fi
-          echo $$ > $pidfile""" % shlex.quote(pidfile)))
-
-    if reserve_cpu:
-      # if the CGROUPS cpuset is available (and prepared by slap format)
-      # request an exclusive CPU core for this process
-      lines.append(dedent("""
-          # put own PID into waiting list for exclusive CPU-core access
-          echo $$ >> ~/.slapos-cpu-exclusive
-          """))
-
-    lines.append(dedent('''
-        # If the wrapped command uses a shebang, execute the referenced
-        # executable passing the script path as first argument.
-        # This is to workaround the limitation of 127 characters in #!
-        [ ! -f "$COMMAND" ] || {
-          [ "`head -c2`" != "#!" ] || read -r EXE ARG
-        } < "$COMMAND"
-
-        exec $EXE ${ARG:+"$ARG"} "$COMMAND"'''))
-
-    parameters = map(shlex.quote, parameters)
-    if parameters_extra:
-      # pass-through further parameters
-      parameters.append('"$@"')
-
-    for param in parameters:
+  def createWrapper(self, path, args, env=None, **kw):
+    """Create a wrapper script for process replacement"""
+    assert args
+    if kw:
+      return self.createPythonScript(path,
+        'slapos.recipe.librecipe.execute.generic_exec',
+        (args, env) if env else (args,), kw)
+
+    # Simple case: creates a basic shell script for process replacement.
+    # This must be kept minimal to avoid code duplication with generic_exec.
+    # In particular, do not implement workaround for shebang size limitation
+    # here (note that this can't be done correctly with a POSIX shell, because
+    # the process can't be given a name).
+    lines = ['#!/bin/sh']
+    if env:
+      for k, v in sorted(env.iteritems()):
+        lines.append('export %s=%s' % (k, shlex.quote(v)))
+    lines.append('exec')
+    args = map(shlex.quote, args)
+    args.append('"$@"')
+    for arg in args:
       if len(lines[-1]) < 40:
-        lines[-1] += ' ' + param
+        lines[-1] += ' ' + arg
       else:
         lines[-1] += ' \\'
-        lines.append('\t' + param)
+        lines.append('\t' + arg)
     lines.append('')
-    return self.createFile(name, '\n'.join(lines), 0700)
+    return self.createFile(path, '\n'.join(lines), 0700)
  def createDirectory(self, parent, name, mode=0700):
    path = os.path.join(parent, name)
...
@@ -46,10 +46,10 @@ class Recipe(GenericBaseRecipe):
     state_file = self.options['state-file']
-    logrotate = self.createPythonScript(
+    logrotate = self.createWrapper(
       self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['logrotate-binary'], '-s', state_file, logrotate_conf_file, ]
+      (self.options['logrotate-binary'],
+       '-s', state_file, logrotate_conf_file),
     )
     return [logrotate, logrotate_conf_file]
...
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.recipe.librecipe import BaseSlapRecipe
import hashlib
import os
import pkg_resources
import sys
import zc.buildout
import ConfigParser
class Recipe(BaseSlapRecipe):
def getTemplateFilename(self, template_name):
return pkg_resources.resource_filename(__name__,
'template/%s' % template_name)
def _install(self):
self.path_list = []
self.requirements, self.ws = self.egg.working_set()
# XXX-Cedric : add logrotate?
self.cron_d = self.installCrond()
memcached_conf = self.installMemcached(ip=self.getLocalIPv4Address(),
port=11000)
ca_conf = self.installCertificateAuthority()
key, certificate = self.requestCertificate('Memcached')
stunnel_conf = self.installStunnel(self.getGlobalIPv6Address(),
self.getLocalIPv4Address(), 12345, memcached_conf['memcached_port'],
certificate, key, ca_conf['ca_crl'],
ca_conf['certificate_authority_path'])
self.linkBinary()
self.setConnectionDict(dict(
stunnel_ip = stunnel_conf['public_ip'],
stunnel_port = stunnel_conf['public_port'],
))
return self.path_list
def linkBinary(self):
"""Links binaries to instance's bin directory for easier exposal"""
for linkline in self.options.get('link_binary_list', '').splitlines():
if not linkline:
continue
target = linkline.split()
if len(target) == 1:
target = target[0]
path, linkname = os.path.split(target)
else:
linkname = target[1]
target = target[0]
link = os.path.join(self.bin_directory, linkname)
if os.path.lexists(link):
if not os.path.islink(link):
raise zc.buildout.UserError(
            'Target link %r already exists but it is not a link' % link)
os.unlink(link)
os.symlink(target, link)
self.logger.debug('Created link %r -> %r' % (link, target))
self.path_list.append(link)
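Each line of `link_binary_list` is either a bare `target` (the link name is taken from the target's basename) or a `target linkname` pair. The parsing done above can be sketched as a standalone helper (the function name is mine, for illustration):

```python
import os

def parse_link_line(linkline):
    # "target"          -> link named after the target's last component
    # "target linkname" -> explicit link name
    parts = linkline.split()
    if len(parts) == 1:
        target = parts[0]
        linkname = os.path.basename(target)
    else:
        target, linkname = parts[0], parts[1]
    return target, linkname
```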
def installCrond(self):
timestamps = self.createDataDirectory('cronstamps')
cron_output = os.path.join(self.log_directory, 'cron-output')
self._createDirectory(cron_output)
catcher = zc.buildout.easy_install.scripts([('catchcron',
__name__ + '.catdatefile', 'catdatefile')], self.ws, sys.executable,
self.bin_directory, arguments=[cron_output])[0]
self.path_list.append(catcher)
cron_d = os.path.join(self.etc_directory, 'cron.d')
crontabs = os.path.join(self.etc_directory, 'crontabs')
self._createDirectory(cron_d)
self._createDirectory(crontabs)
# Use execute from erp5.
wrapper = zc.buildout.easy_install.scripts([('crond',
'slapos.recipe.librecipe.execute', 'execute')], self.ws, sys.executable,
self.wrapper_directory, arguments=[
self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
'-t', timestamps, '-f', '-l', '5', '-M', catcher]
)[0]
self.path_list.append(wrapper)
return cron_d
def installLogrotate(self):
"""Installs logortate main configuration file and registers its to cron"""
logrotate_d = os.path.abspath(os.path.join(self.etc_directory,
'logrotate.d'))
self._createDirectory(logrotate_d)
logrotate_backup = self.createBackupDirectory('logrotate')
logrotate_conf = self.createConfigurationFile("logrotate.conf",
"include %s" % logrotate_d)
logrotate_cron = os.path.join(self.cron_d, 'logrotate')
state_file = os.path.join(self.data_root_directory, 'logrotate.status')
open(logrotate_cron, 'w').write('0 0 * * * %s -s %s %s' %
(self.options['logrotate_binary'], state_file, logrotate_conf))
self.path_list.extend([logrotate_d, logrotate_conf, logrotate_cron])
return logrotate_d, logrotate_backup
def registerLogRotation(self, name, log_file_list, postrotate_script):
"""Register new log rotation requirement"""
open(os.path.join(self.logrotate_d, name), 'w').write(
self.substituteTemplate(self.getTemplateFilename(
'logrotate_entry.in'),
dict(file_list=' '.join(['"'+q+'"' for q in log_file_list]),
postrotate=postrotate_script, olddir=self.logrotate_backup)))
def installCertificateAuthority(self, ca_country_code='XX',
ca_email='xx@example.com', ca_state='State', ca_city='City',
ca_company='Company'):
backup_path = self.createBackupDirectory('ca')
self.ca_dir = os.path.join(self.data_root_directory, 'ca')
self._createDirectory(self.ca_dir)
self.ca_request_dir = os.path.join(self.ca_dir, 'requests')
self._createDirectory(self.ca_request_dir)
config = dict(ca_dir=self.ca_dir, request_dir=self.ca_request_dir)
self.ca_private = os.path.join(self.ca_dir, 'private')
self.ca_certs = os.path.join(self.ca_dir, 'certs')
self.ca_crl = os.path.join(self.ca_dir, 'crl')
self.ca_newcerts = os.path.join(self.ca_dir, 'newcerts')
self.ca_key_ext = '.key'
self.ca_crt_ext = '.crt'
for d in [self.ca_private, self.ca_crl, self.ca_newcerts, self.ca_certs]:
self._createDirectory(d)
for f in ['crlnumber', 'serial']:
if not os.path.exists(os.path.join(self.ca_dir, f)):
open(os.path.join(self.ca_dir, f), 'w').write('01')
if not os.path.exists(os.path.join(self.ca_dir, 'index.txt')):
open(os.path.join(self.ca_dir, 'index.txt'), 'w').write('')
openssl_configuration = os.path.join(self.ca_dir, 'openssl.cnf')
config.update(
working_directory=self.ca_dir,
country_code=ca_country_code,
state=ca_state,
city=ca_city,
company=ca_company,
email_address=ca_email,
)
self._writeFile(openssl_configuration, pkg_resources.resource_string(
__name__, 'template/openssl.cnf.ca.in') % config)
self.path_list.extend(zc.buildout.easy_install.scripts([
('certificate_authority',
__name__ + '.certificate_authority', 'runCertificateAuthority')],
self.ws, sys.executable, self.wrapper_directory, arguments=[dict(
openssl_configuration=openssl_configuration,
openssl_binary=self.options['openssl_binary'],
certificate=os.path.join(self.ca_dir, 'cacert.pem'),
key=os.path.join(self.ca_private, 'cakey.pem'),
crl=os.path.join(self.ca_crl),
request_dir=self.ca_request_dir
)]))
# configure backup
backup_cron = os.path.join(self.cron_d, 'ca_rdiff_backup')
open(backup_cron, 'w').write(
'''0 0 * * * %(rdiff_backup)s %(source)s %(destination)s'''%dict(
rdiff_backup=self.options['rdiff_backup_binary'],
source=self.ca_dir,
destination=backup_path))
self.path_list.append(backup_cron)
return dict(
ca_certificate=os.path.join(config['ca_dir'], 'cacert.pem'),
ca_crl=os.path.join(config['ca_dir'], 'crl'),
certificate_authority_path=config['ca_dir']
)
def requestCertificate(self, name):
hash = hashlib.sha512(name).hexdigest()
key = os.path.join(self.ca_private, hash + self.ca_key_ext)
certificate = os.path.join(self.ca_certs, hash + self.ca_crt_ext)
parser = ConfigParser.RawConfigParser()
parser.add_section('certificate')
parser.set('certificate', 'name', name)
parser.set('certificate', 'key_file', key)
parser.set('certificate', 'certificate_file', certificate)
parser.write(open(os.path.join(self.ca_request_dir, hash), 'w'))
return key, certificate
def installStunnel(self, public_ip, private_ip, public_port, private_port,
ca_certificate, key, ca_crl, ca_path):
"""Installs stunnel"""
template_filename = self.getTemplateFilename('stunnel.conf.in')
log = os.path.join(self.log_directory, 'stunnel.log')
pid_file = os.path.join(self.run_directory, 'stunnel.pid')
stunnel_conf = dict(
public_ip=public_ip,
private_ip=private_ip,
public_port=public_port,
pid_file=pid_file,
log=log,
cert = ca_certificate,
key = key,
ca_crl = ca_crl,
ca_path = ca_path,
private_port = private_port,
)
stunnel_conf_path = self.createConfigurationFile("stunnel.conf",
self.substituteTemplate(template_filename,
stunnel_conf))
wrapper = zc.buildout.easy_install.scripts([('stunnel',
'slapos.recipe.librecipe.execute', 'execute_wait')], self.ws,
sys.executable, self.wrapper_directory, arguments=[
[self.options['stunnel_binary'].strip(), stunnel_conf_path],
[ca_certificate, key]]
)[0]
self.path_list.append(wrapper)
return stunnel_conf
def installMemcached(self, ip, port):
config = dict(
memcached_binary=self.options['memcached_binary'],
memcached_ip=ip,
memcached_port=port,
)
self.path_list.append(self.createRunningWrapper('memcached',
self.substituteTemplate(self.getTemplateFilename('memcached.in'),
config)))
return dict(memcached_url='%s:%s' %
(config['memcached_ip'], config['memcached_port']),
memcached_ip=config['memcached_ip'],
memcached_port=config['memcached_port'])
import os
import subprocess
import time
import ConfigParser
def popenCommunicate(command_list, input=None):
subprocess_kw = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if input is not None:
subprocess_kw.update(stdin=subprocess.PIPE)
popen = subprocess.Popen(command_list, **subprocess_kw)
result = popen.communicate(input)[0]
if popen.returncode is None:
popen.kill()
if popen.returncode != 0:
raise ValueError('Issue during calling %r, result was:\n%s' % (
command_list, result))
return result
class CertificateAuthority:
def __init__(self, key, certificate, openssl_binary,
openssl_configuration, request_dir):
self.key = key
self.certificate = certificate
self.openssl_binary = openssl_binary
self.openssl_configuration = openssl_configuration
self.request_dir = request_dir
def checkAuthority(self):
file_list = [ self.key, self.certificate ]
ca_ready = True
for f in file_list:
if not os.path.exists(f):
ca_ready = False
break
if ca_ready:
return
for f in file_list:
if os.path.exists(f):
os.unlink(f)
try:
# no CA, let us create new one
popenCommunicate([self.openssl_binary, 'req', '-nodes', '-config',
self.openssl_configuration, '-new', '-x509', '-extensions',
'v3_ca', '-keyout', self.key, '-out', self.certificate,
'-days', '10950'], 'Automatic Certificate Authority\n')
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
def _checkCertificate(self, common_name, key, certificate):
file_list = [key, certificate]
ready = True
for f in file_list:
if not os.path.exists(f):
ready = False
break
if ready:
return False
for f in file_list:
if os.path.exists(f):
os.unlink(f)
csr = certificate + '.csr'
try:
popenCommunicate([self.openssl_binary, 'req', '-config',
self.openssl_configuration, '-nodes', '-new', '-keyout',
key, '-out', csr, '-days', '3650'],
common_name + '\n')
try:
popenCommunicate([self.openssl_binary, 'ca', '-batch', '-config',
self.openssl_configuration, '-out', certificate,
'-infiles', csr])
finally:
if os.path.exists(csr):
os.unlink(csr)
except:
try:
for f in file_list:
if os.path.exists(f):
os.unlink(f)
except:
# do not raise during cleanup
pass
raise
else:
return True
def checkRequestDir(self):
for request_file in os.listdir(self.request_dir):
parser = ConfigParser.RawConfigParser()
parser.readfp(open(os.path.join(self.request_dir, request_file), 'r'))
if self._checkCertificate(parser.get('certificate', 'name'),
parser.get('certificate', 'key_file'), parser.get('certificate',
'certificate_file')):
print 'Created certificate %r' % parser.get('certificate', 'name')
def runCertificateAuthority(args):
ca_conf = args[0]
ca = CertificateAuthority(ca_conf['key'], ca_conf['certificate'],
ca_conf['openssl_binary'], ca_conf['openssl_configuration'],
ca_conf['request_dir'])
while True:
ca.checkAuthority()
ca.checkRequestDir()
time.sleep(60)
#!/bin/sh
exec %(memcached_binary)s -p %(memcached_port)s -U %(memcached_port)s -l %(memcached_ip)s
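The memcached.in template above is rendered by `substituteTemplate` with plain `%`-style substitution of the `config` dict built in `installMemcached`. For illustration (the binary path below is hypothetical):

```python
template = ('#!/bin/sh\n'
            'exec %(memcached_binary)s -p %(memcached_port)s'
            ' -U %(memcached_port)s -l %(memcached_ip)s')
# hypothetical values, as installMemcached would supply them
config = dict(memcached_binary='/opt/parts/memcached/bin/memcached',
              memcached_ip='127.0.0.1',
              memcached_port=11000)
wrapper_text = template % config
```

The same port is passed to both `-p` (TCP) and `-U` (UDP), matching the single `memcached_port` key in the config.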
#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#
# This definition stops the following lines choking if HOME isn't
# defined.
HOME = .
RANDFILE = $ENV::HOME/.rnd
# Extra OBJECT IDENTIFIER info:
#oid_file = $ENV::HOME/.oid
oid_section = new_oids
# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)
[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6
# Policies used by the TSA examples.
tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7
####################################################################
[ ca ]
default_ca = CA_default # The default ca section
####################################################################
[ CA_default ]
dir = %(working_directory)s # Where everything is kept
certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
					# several certificates with the same subject.
new_certs_dir = $dir/newcerts # default place for new certs.
certificate = $dir/cacert.pem # The CA certificate
serial = $dir/serial # The current serial number
crlnumber = $dir/crlnumber # the current crl number
# must be commented out to leave a V1 CRL
crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem # The private key
RANDFILE = $dir/private/.rand # private random number file
x509_extensions	= usr_cert		# The extensions to add to the cert
# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt = ca_default # Subject Name options
cert_opt = ca_default # Certificate field options
# Extension copying option: use with caution.
# copy_extensions = copy
# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions = crl_ext
default_days = 3650 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = default # use public key default MD
preserve = no # keep passed DN ordering
# A few difference way of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy = policy_match
# For the CA policy
[ policy_match ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
####################################################################
[ req ]
default_bits = 2048
default_md = sha1
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
#attributes = req_attributes
x509_extensions	= v3_ca	# The extensions to add to the self signed cert
# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret
# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only
# req_extensions = v3_req # The extensions to add to a certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = %(country_code)s
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = %(state)s
localityName = Locality Name (eg, city)
localityName_value = %(city)s
0.organizationName = Organization Name (eg, company)
0.organizationName_value = %(company)s
# we can do this but it is not needed normally :-)
#1.organizationName = Second Organization Name (eg, company)
#1.organizationName_default = World Wide Web Pty Ltd
commonName = Common Name (eg, your name or your server\'s hostname)
commonName_max = 64
emailAddress = Email Address
emailAddress_value = %(email_address)s
emailAddress_max = 64
# SET-ex3 = SET extension number 3
#[ req_attributes ]
#challengePassword = A challenge password
#challengePassword_min = 4
#challengePassword_max = 20
#
#unstructuredName = An optional company name
[ usr_cert ]
# These extensions are added when 'ca' signs a request.
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This is required for TSA certificates.
# extendedKeyUsage = critical,timeStamping
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
[ v3_ca ]
# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true
# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as a test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign
# Some might want this also
# nsCertType = sslCA, emailCA
# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy
# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF
[ crl_ext ]
# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.
# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo
####################################################################
[ tsa ]
default_tsa = tsa_config1 # the default TSA section
[ tsa_config1 ]
# These are used by the TSA reply generation only.
dir = /etc/pki/tls # TSA root directory
serial = $dir/tsaserial # The current serial number (mandatory)
crypto_device = builtin # OpenSSL engine to use for signing
signer_cert = $dir/tsacert.pem # The TSA signing certificate
# (optional)
certs = $dir/cacert.pem # Certificate chain to include in reply
# (optional)
signer_key = $dir/private/tsakey.pem # The TSA private key (optional)
default_policy = tsa_policy1 # Policy if request did not specify it
# (optional)
other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional)
digests = md5, sha1 # Acceptable message digests (mandatory)
accuracy = secs:1, millisecs:500, microsecs:100 # (optional)
clock_precision_digits = 0 # number of digits after dot. (optional)
ordering = yes # Is ordering defined for timestamps?
# (optional, default: no)
tsa_name = yes # Must the TSA name be included in the reply?
# (optional, default: no)
ess_cert_id_chain = no # Must the ESS cert id chain be included?
# (optional, default: no)
foreground = yes
output = %(log)s
pid = %(pid_file)s
syslog = no
CApath = %(ca_path)s
key = %(key)s
CRLpath = %(ca_crl)s
cert = %(cert)s
[service]
accept = %(public_ip)s:%(public_port)s
connect = %(private_ip)s:%(private_port)s
@@ -203,11 +203,10 @@ Include conf/extra/httpd-autoindex.conf
     services_dir = self.options['services_dir']
-    httpd_wrapper = self.createPythonScript(
+    httpd_wrapper = self.createWrapper(
       os.path.join(services_dir, 'httpd_wrapper'),
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['httpd_binary'], '-f', self.options['httpd_conf'],
-       '-DFOREGROUND']
+      (self.options['httpd_binary'],
+       '-f', self.options['httpd_conf'], '-DFOREGROUND'),
     )
     path_list.append(httpd_wrapper)
@@ -220,19 +219,17 @@ Include conf/extra/httpd-autoindex.conf
     site_perl_bin = os.path.join(self.options['site_perl'], 'bin')
     mioga_conf_path = os.path.join(mioga_base, 'conf', 'Mioga.conf')
-    notifier_wrapper = self.createPythonScript(
+    notifier_wrapper = self.createWrapper(
       os.path.join(services_dir, 'notifier'),
-      'slapos.recipe.librecipe.execute.execute',
-      [ os.path.join(site_perl_bin, 'notifier.pl'),
-        mioga_conf_path ]
+      (os.path.join(site_perl_bin, 'notifier.pl'),
+       mioga_conf_path),
     )
     path_list.append(notifier_wrapper)
-    searchengine_wrapper = self.createPythonScript(
+    searchengine_wrapper = self.createWrapper(
       os.path.join(services_dir, 'searchengine'),
-      'slapos.recipe.librecipe.execute.execute',
-      [ os.path.join(site_perl_bin, 'searchengine.pl'),
-        mioga_conf_path ]
+      (os.path.join(site_perl_bin, 'searchengine.pl'),
+       mioga_conf_path),
     )
     path_list.append(searchengine_wrapper)
...
@@ -24,7 +24,7 @@
 # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 #
 ##############################################################################
-import subprocess
+import os
 from slapos.recipe.librecipe import GenericBaseRecipe
@@ -58,14 +58,14 @@ def do_export(args):
   cmd.extend(['-o', args['directory']])
-  subprocess.check_call(cmd)
+  os.execv(cmd[0], cmd)
 def do_import(args):
   cmd = _mydumper_base_cmd(**args)
   cmd.append('--overwrite-tables')
   cmd.extend(['-d', args['directory']])
-  subprocess.check_call(cmd)
+  os.execv(cmd[0], cmd)
 class Recipe(GenericBaseRecipe):
@@ -95,9 +95,5 @@ class Recipe(GenericBaseRecipe):
     config['compression'] = self.optionIsTrue('compression', default=False)
     config['rows'] = self.options.get('rows')
-    wrapper = self.createPythonScript(name=self.options['wrapper'],
-        absolute_function = '%s.%s' % (__name__, function.func_name),
-        arguments=config)
-    return [wrapper]
+    return self.createPythonScript(self.options['wrapper'],
+        '%s.%s' % (function.__module__, function.__name__), (config,))
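The switch above from `subprocess.check_call` to `os.execv` means the generated script is replaced by the mydumper command itself rather than lingering as a parent process, so signals and the exit status pass through directly. A minimal sketch of that exec semantic (forking first only so the demonstration survives; the fork is illustrative, not part of the recipe):

```python
import os
import sys

def run_as_exec(cmd):
    # os.execv replaces the current process image: nothing after this
    # call runs, and the command's exit status becomes the process's own.
    os.execv(cmd[0], cmd)

# Observe the effect without losing this interpreter: fork, exec in the child.
pid = os.fork()
if pid == 0:
    run_as_exec([sys.executable, '-c', 'raise SystemExit(0)'])
_, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
```

In the actual wrapper there is no fork: the wrapper process simply becomes mydumper.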
@@ -53,11 +53,10 @@ class Recipe(GenericBaseRecipe):
     mysql_binary = self.options['mysql-binary']
     socket = self.options['socket'],
-    post_rotate = self.createPythonScript(
+    post_rotate = self.createWrapper(
       self.options['logrotate-post'],
-      'slapos.recipe.librecipe.execute.execute',
-      [mysql_binary, '--no-defaults', '-B', '-u', 'root',
-       '--socket=%s' % socket, '-e', 'FLUSH LOGS']
+      (mysql_binary, '--no-defaults', '-B', '-u', 'root',
+       '--socket=%s' % socket, '-e', 'FLUSH LOGS'),
     )
     path_list.append(post_rotate)
@@ -85,12 +84,12 @@ class Recipe(GenericBaseRecipe):
     mysql_update = self.createPythonScript(
       self.options['update-wrapper'],
       '%s.mysql.updateMysql' % __name__,
-      dict(
+      (dict(
         mysql_script=mysql_script,
         mysql_binary=mysql_binary,
         mysql_upgrade_binary=mysql_upgrade_binary,
         socket=socket,
-      )
+      ),)
     )
     path_list.append(mysql_update)
@@ -98,7 +97,7 @@ class Recipe(GenericBaseRecipe):
     mysqld = self.createPythonScript(
       self.options['wrapper'],
       '%s.mysql.runMysql' % __name__,
-      dict(
+      (dict(
         mysql_install_binary=self.options['mysql-install-binary'],
         mysqld_binary=mysqld_binary,
         data_directory=mysql_conf['data_directory'],
@@ -106,7 +105,7 @@ class Recipe(GenericBaseRecipe):
         socket=socket,
         configuration_file=mysql_conf_file,
         cwd=self.options['mysql-base-directory'],
-      )
+      ),)
     )
     path_list.append(mysqld)
@@ -115,12 +114,12 @@ class Recipe(GenericBaseRecipe):
     backup_script = self.createPythonScript(
       self.options['backup-script'],
       '%s.do_backup' % __name__,
-      dict(
+      (dict(
         mydumper_binary=self.options['mydumper-binary'],
         database=mysql_conf['mysql_database'],
         socket=mysql_conf['socket'],
         backup_directory=self.options['backup-directory']
-      ),
+      ),)
     )
     path_list.append(backup_script)
@@ -129,7 +128,7 @@ class Recipe(GenericBaseRecipe):
     recovering_script = self.createPythonScript(
       self.options['recovering-wrapper'],
       '%s.import_dump' % __name__,
-      {
+      ({
         'lock_file': os.path.join(self.work_directory,
                                   'import_done'),
         'database': mysql_conf['mysql_database'],
@@ -140,7 +139,7 @@ class Recipe(GenericBaseRecipe):
         'local_directory': self.mysql_backup_directory,
         'dump_name': dump_filename,
         'zcat_binary': self.options['zcat-binary'],
-      }
+      },)
     )
     path_list.append(recovering_script)
...
@@ -42,7 +42,8 @@ class NeoBaseRecipe(GenericBaseRecipe):
       # Only then can this recipe start succeeding and actually doing anything
       # useful, as per NEO deploying constraints.
       raise UserError('"masters" parameter is mandatory')
-    option_list = [
+    args = [
+      options['binary'],
       # Keep the -l option first, as expected by logrotate snippets.
       '-l', options['logfile'],
       '-m', options['masters'],
@@ -53,17 +54,13 @@ class NeoBaseRecipe(GenericBaseRecipe):
     ]
     if options['ssl']:
       etc = os.path.join(self.buildout['buildout']['directory'], 'etc', '')
-      option_list += (
+      args += (
         '--ca', etc + 'ca.crt',
         '--cert', etc + 'neo.crt',
         '--key', etc + 'neo.key',
       )
-    option_list.extend(self._getOptionList())
-    return [self.createWrapper(
-      options['wrapper'],
-      options['binary'],
-      option_list
-    )]
+    args += self._getOptionList()
+    return self.createWrapper(options['wrapper'], args)
   def _getBindingAddress(self):
     options = self.options
...
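The recurring pattern in these hunks is the move to `createWrapper(path, args)`, where the executable is the first element of the argument sequence and a single path is returned instead of a list. A rough sketch of what such a wrapper generator could look like; `create_wrapper` here is a hypothetical stand-in, not the actual `librecipe` implementation:

```python
import os

def create_wrapper(path, args):
    # Emit an executable shell script that exec's the command, so the
    # wrapper process is replaced instead of lingering as a parent.
    with open(path, 'w') as f:
        f.write('#!/bin/sh\nexec'
                + ''.join(" '%s'" % a for a in args)
                + ' "$@"\n')
    os.chmod(path, 0o755)
    return path
```

A recipe following this shape would call `create_wrapper(options['wrapper'], args)` and return that one path directly, which is exactly the simplification the NEO hunk above performs.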
@@ -32,19 +32,15 @@ class Recipe(GenericBaseRecipe):
   def install(self):
     options = self.options
-    script = self.createWrapper(name=options['wrapper'],
-      command=options['server-binary'],
-      parameters=[
+    # Script that execute the callback(s) upon receiving a notification.
+    return self.createWrapper(options['wrapper'],
+      (options['server-binary'],
       '--callbacks', options['callbacks'],
       '--feeds', options['feeds'],
       '--equeue-socket', options['equeue-socket'],
       options['host'], options['port']
-      ],
-      comments=[
-        '',
-        'Upon receiving a notification, execute the callback(s).',
-        ''])
-    return [script]
+      ),
+    )
 class Callback(GenericBaseRecipe):
@@ -80,35 +76,32 @@ class Notify(GenericBaseRecipe):
     # Just a touch
     open(log, 'w').close()
-    parameters = [
+    cmd = [notifier_binary,
       '-l', log,
       '--title', title,
       '--feed', feed_url,
       '--max-run', str(max_run),
       '--notification-url',
     ]
-    parameters.extend(notification_url.split(' '))
-    parameters.extend(['--executable', executable])
+    cmd += notification_url.split(' ')
+    cmd += '--executable', executable
     # For a more verbose mode, writing feed items for any action
     instance_root_name = instance_root_name or self.options.get('instance-root-name', None)
     log_url = log_url or self.options.get('log-url', None)
     status_item_directory = status_item_directory or self.options.get('status-item-directory', None)
     if instance_root_name and log_url and status_item_directory:
-      parameters.extend([
+      cmd += (
         '--instance-root-name', instance_root_name,
         '--log-url', log_url,
         '--status-item-directory', status_item_directory,
-      ])
+      )
-    return self.createWrapper(name=wrapper,
-      command=notifier_binary,
-      parameters=parameters,
-      pidfile=pidfile,
-      parameters_extra=True,
-      comments=[
-        '',
-        'Call an executable and send notification(s).',
-        ''])
+    kw = {}
+    if pidfile:
+      kw['pidfile'] = pidfile
+    # Script that call an executable and send notification(s).
+    return self.createWrapper(wrapper, cmd, **kw)
   def install(self):
...
@@ -35,10 +35,9 @@ class Recipe(GenericBaseRecipe):
   """
   def install(self):
-    runner_path = self.createPythonScript(
+    return self.createWrapper(
       self.options['path'],
-      'slapos.recipe.librecipe.execute.execute_wait',
-      [[
+      (
         self.options['websockify-path'],
         '--web',
         self.options['novnc-location'],
@@ -47,8 +46,7 @@ class Recipe(GenericBaseRecipe):
         '--ssl-only',
         '%s:%s' % (self.options['ip'], self.options['port']),
         '%s:%s' % (self.options['vnc-ip'], self.options['vnc-port']),
-      ],
-      [self.options['ssl-key-path'], self.options['ssl-cert-path']]],
+      ),
+      wait_list=(self.options['ssl-key-path'],
+                 self.options['ssl-cert-path']),
     )
-    return [runner_path]
@@ -39,30 +39,18 @@ from slapos.recipe.notifier import Callback
 from slapos.recipe.librecipe import shlex
-def promise(args):
+def promise(ssh_client, user, host, port):
   # Redirect output to /dev/null
-  with open("/dev/null") as _dev_null:
+  with open(os.devnull) as _dev_null:
     ssh = subprocess.Popen(
-      [args['ssh_client'], '%(user)s@%(host)s' % args, '-p', '%(port)s' % args],
-      stdin=subprocess.PIPE, stdout=_dev_null, stderr=None
-    )
-  # Rdiff Backup protocol quit command
-  quitcommand = 'q' + chr(255) + chr(0) * 7
-  ssh.stdin.write(quitcommand)
-  ssh.stdin.flush()
-  ssh.stdin.close()
-  ssh.wait()
-  if ssh.poll() is None:
-    return 1
-  if ssh.returncode != 0:
+      (ssh_client, '%s@%s' % (user, host), '-p', str(port)),
+      stdin=subprocess.PIPE, stdout=_dev_null)
+  # Rdiff Backup protocol quit command
+  ssh.communicate('q' + chr(255) + chr(0) * 7)
+  if ssh.returncode:
     sys.stderr.write("SSH Connection failed\n")
   return ssh.returncode
 class Recipe(GenericSlapRecipe, Notify, Callback):
   def _options(self, options):
     options['rdiff-backup-data-folder'] = ""
@@ -244,15 +232,11 @@ class Recipe(GenericSlapRecipe, Notify, Callback):
     print 'Processing PBS slave %s with type %s' % (slave_id, slave_type)
-    promise_path = os.path.join(self.options['promises-directory'], "ssh-to-%s" % slave_id)
-    promise_dict = dict(ssh_client=self.options['sshclient-binary'],
-                        user=parsed_url.username,
-                        host=parsed_url.hostname,
-                        port=parsed_url.port)
-    promise = self.createPythonScript(promise_path,
+    path_list.append(self.createPythonScript(
+      os.path.join(self.options['promises-directory'], "ssh-to-%s" % slave_id),
       __name__ + '.promise',
-      promise_dict)
-    path_list.append(promise)
+      (self.options['sshclient-binary'],
+       parsed_url.username, parsed_url.hostname, parsed_url.port)))
     # Create known_hosts file by default.
     # In some case, we don't want to create it (case where we share IP mong partitions)
@@ -336,12 +320,11 @@ class Recipe(GenericSlapRecipe, Notify, Callback):
     else:
       self.logger.info("Server mode")
-      wrapper = self.createWrapper(name=self.options['wrapper'],
-        command=self.options['rdiffbackup-binary'],
-        parameters=[
+      wrapper = self.createWrapper(self.options['wrapper'],
+        (self.options['rdiffbackup-binary'],
         '--restrict', self.options['path'],
         '--server'
-        ])
+        ))
       path_list.append(wrapper)
     return path_list
@@ -162,7 +162,6 @@ class Recipe(GenericBaseRecipe):
     return hash_string
   def install(self):
-    path_list = []
     token_save_path = os.path.join(self.options['conf-dir'], 'token.json')
     token_list_path = self.options['token-dir']
@@ -190,20 +189,14 @@ class Recipe(GenericBaseRecipe):
     self.createFile(token_save_path, json.dumps(token_dict))
-    service_dict = dict(token_base_path=token_list_path,
-                        token_json=token_save_path,
-                        partition_id=self.computer_partition_id,
-                        computer_id=self.computer_id,
-                        registry_url=registry_url,
-                        server_url=self.server_url,
-                        cert_file=self.cert_file,
-                        key_file=self.key_file)
-    request_add = self.createPythonScript(
+    computer_dict = dict(partition_id=self.computer_partition_id,
+                         computer_guid=self.computer_id,
+                         master_url=self.server_url,
+                         cert_file=self.cert_file,
+                         key_file=self.key_file)
+    return self.createPythonScript(
       self.options['manager-wrapper'].strip(),
-      '%s.re6stnet.manage' % __name__, service_dict
+      __name__ + '.re6stnet.manage',
+      (registry_url, token_list_path, token_save_path, computer_dict)
     )
-    path_list.append(request_add)
-    return path_list
@@ -17,9 +17,7 @@ logging.trace = logging.debug
 def loadJsonFile(path):
   if os.path.exists(path):
     with open(path, 'r') as f:
-      content = f.read()
-      return json.loads(content)
-  else:
-    return {}
+      return json.load(f)
+  return {}
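The simplification above works because `json.load` parses straight from a file object, making the intermediate `f.read()` string and the dangling `else:` unnecessary. The same pattern in isolation (file name here is a temporary one created just for the demonstration):

```python
import json
import os
import tempfile

def load_json_file(path):
    # Parse directly from the file object; a missing file yields {}.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

# Round-trip check against a throwaway file.
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    json.dump({'token': 'abc'}, f)
```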
 def writeFile(path, data):
@@ -39,29 +37,25 @@ def updateFile(file_path, value):
     return True
   return False
-def getComputerPartition(server_url, key_file, cert_file, computer_guid, partition_id):
+def getComputerPartition(master_url, key_file, cert_file,
+                         computer_guid, partition_id):
   slap = slapos.slap.slap()
   # Redeploy instance to update published information
-  slap.initializeConnection(server_url,
-                            key_file,
-                            cert_file)
-  return slap.registerComputerPartition(computer_guid=computer_guid,
-                                        partition_id=partition_id)
+  slap.initializeConnection(master_url, key_file, cert_file)
+  return slap.registerComputerPartition(computer_guid, partition_id)
-def requestAddToken(client, base_token_path):
+def requestAddToken(client, token_base_path):
   time.sleep(3)
-  path_list = [x for x in os.listdir(base_token_path) if x.endswith('.add')]
-  log.info("Searching tokens to add at %s and found %s." % (base_token_path, path_list))
+  path_list = [x for x in os.listdir(token_base_path) if x.endswith('.add')]
+  log.info("Searching tokens to add at %s and found %s." % (token_base_path, path_list))
   if not path_list:
     log.info("No new token to add. Exiting...")
     return
   for reference_key in path_list:
-    request_file = os.path.join(base_token_path, reference_key)
+    request_file = os.path.join(token_base_path, reference_key)
     token = readFile(request_file)
     log.info("Including token %s for %s" % (token, reference_key))
     if token :
@@ -79,21 +73,21 @@ def requestAddToken(client, base_token_path):
       # update information
       log.info("New token added for slave instance %s. Updating file status..." %
                reference)
-      status_file = os.path.join(base_token_path, '%s.status' % reference)
+      status_file = os.path.join(token_base_path, '%s.status' % reference)
       updateFile(status_file, 'TOKEN_ADDED')
       os.unlink(request_file)
     else:
       log.debug('Bad token. Request add token fail for %s...' % request_file)
-def requestRemoveToken(client, base_token_path):
-  path_list = [x for x in os.listdir(base_token_path) if x.endswith('.remove')]
+def requestRemoveToken(client, token_base_path):
+  path_list = [x for x in os.listdir(token_base_path) if x.endswith('.remove')]
   if not path_list:
     log.info("No token to delete. Exiting...")
     return
   for reference_key in path_list:
-    request_file = os.path.join(base_token_path, reference_key)
+    request_file = os.path.join(token_base_path, reference_key)
     token = readFile(request_file)
     if token :
       reference = reference_key.split('.')[0]
@@ -108,7 +102,7 @@ def requestRemoveToken(client, base_token_path):
         continue
       else:
         # certificate is invalidated, it will be revoked
-        writeFile(os.path.join(base_token_path, '%s.revoke' % reference), '')
+        writeFile(os.path.join(token_base_path, '%s.revoke' % reference), '')
       if result in (True, 'True'):
         # update information
@@ -117,33 +111,17 @@ def requestRemoveToken(client, base_token_path):
       if result in ['True', 'False']:
         os.unlink(request_file)
-        status_file = os.path.join(base_token_path, '%s.status' % reference)
+        status_file = os.path.join(token_base_path, '%s.status' % reference)
         if os.path.exists(status_file):
           os.unlink(status_file)
-        ipv6_file = os.path.join(base_token_path, '%s.ipv6' % reference)
+        ipv6_file = os.path.join(token_base_path, '%s.ipv6' % reference)
         if os.path.exists(ipv6_file):
           os.unlink(ipv6_file)
     else:
       log.debug('Bad token. Request add token fail for %s...' % request_file)
-def requestRevoqueCertificate(args):
-  base_token_path = args['token_base_path']
-  path_list = [x for x in os.listdir(base_token_path) if x.endswith('.revoke')]
-  for reference_key in path_list:
-    reference = reference_key.split('.')[0]
-    if revokeByMail(args['registry_url'],
-                    '%s@slapos' % reference.lower(),
-                    args['db']):
-      os.unlink(os.path.join(base_token_path, reference_key))
-      log.info("Certificate revoked for slave instance %s." % reference)
-      return
-    log.info("Failed to revoke email for %s" % reference)
-def checkService(client, base_token_path, token_json, computer_partition):
+def checkService(client, token_base_path, token_json, computer_partition):
   token_dict = loadJsonFile(token_json)
   updated = False
   if not token_dict:
@@ -152,7 +130,7 @@ def checkService(client, base_token_path, token_json, computer_partition):
   # Check token status
   for slave_reference, token in token_dict.iteritems():
     log.info("%s %s" % (slave_reference, token))
-    status_file = os.path.join(base_token_path, '%s.status' % slave_reference)
+    status_file = os.path.join(token_base_path, '%s.status' % slave_reference)
     if not os.path.exists(status_file):
       # This token is not added yet!
       log.info("Token %s dont exist yet." % status_file)
@@ -206,31 +184,22 @@ def checkService(client, base_token_path, token_json, computer_partition):
         slave_reference, traceback.format_exc())
-def manage(args, can_bang=True):
-  computer_guid = args['computer_id']
-  partition_id = args['partition_id']
-  server_url = args['server_url']
-  key_file = args['key_file']
-  cert_file = args['cert_file']
-  client = registry.RegistryClient(args['registry_url'])
-  base_token_path = args['token_base_path']
-  token_json = args['token_json']
+def manage(registry_url, token_base_path, token_json,
+           computer_dict, can_bang=True):
+  client = registry.RegistryClient(registry_url)
   log.info("ADD TOKEN")
   # Request Add new tokens
-  requestAddToken(client, base_token_path)
+  requestAddToken(client, token_base_path)
   log.info("Remove TOKEN")
   # Request delete removed token
-  requestRemoveToken(client, base_token_path)
+  requestRemoveToken(client, token_base_path)
-  computer_partition = getComputerPartition(server_url, key_file,
-      cert_file, computer_guid, partition_id)
+  computer_partition = getComputerPartition(**computer_dict)
   log.info("Update Services")
   # check status of all token
-  checkService(client, base_token_path,
-      token_json, computer_partition)
+  checkService(client, token_base_path, token_json, computer_partition)
@@ -25,7 +25,7 @@
 #
 ##############################################################################
 import os
+import sys
 from slapos.recipe.librecipe import GenericBaseRecipe
 class Recipe(GenericBaseRecipe):
@@ -56,10 +56,9 @@ class Recipe(GenericBaseRecipe):
       configuration))
     path_list.append(config)
-    redis = self.createPythonScript(
+    redis = self.createWrapper(
       self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['server_bin'], config_file]
+      (self.options['server_bin'], config_file),
     )
     path_list.append(redis)
@@ -67,11 +66,20 @@ class Recipe(GenericBaseRecipe):
     if promise_script:
       promise = self.createPythonScript(
         promise_script,
-        '%s.promise.main' % __name__,
-        dict(host=self.options['ipv6'], port=self.options['port'],
-             unixsocket = self.options.get('unixsocket') )
+        __name__ + '.promise',
+        (self.options['ipv6'], int(self.options['port']),
+         self.options.get('unixsocket'))
       )
       path_list.append(promise)
     return path_list
+def promise(host, port, unixsocket):
+  from .MyRedis2410 import Redis
+  try:
+    r = Redis(host=host, port=port, unix_socket_path=unixsocket, db=0)
+    r.publish("Promise-Service","SlapOS Promise")
+    r.connection_pool.disconnect()
+  except Exception, e:
+    sys.exit(e)
-#! /usr/bin/env python
-# -*- coding: utf-8 -*-
-import slapos.recipe.redis.MyRedis2410 as redis
-import sys
-def main(args):
-  host = args['host']
-  port = int(args['port'])
-  unixsocket = args['unixsocket']
-  try:
-    r = redis.Redis(host=host, port=port, unix_socket_path=unixsocket, db=0)
-    r.publish("Promise-Service","SlapOS Promise")
-    r.connection_pool.disconnect()
-    sys.exit(0)
-  except Exception, e:
-    print str(e)
-    sys.exit(1)
\ No newline at end of file
@@ -70,14 +70,12 @@ class Recipe(GenericSlapRecipe):
     path_list.append(nginx_configuration_file)
     # Generate Nginx wrapper
-    wrapper = self.createWrapper(
-      name=self.options['wrapper'],
-      command=self.options['nginx-executable'],
-      parameters=[
+    path_list.append(self.createWrapper(
+      self.options['wrapper'],
+      (self.options['nginx-executable'],
       '-c', self.options['configuration-file'],
       '-p', self.options['home-directory']
-      ]
-    )
+      )))
     # TODO: reload configuration or have feature like apache_map
...
@@ -25,23 +25,15 @@
 #
 #############################################################################
-import os
-import sys
-import zc.buildout
-from slapos.recipe.librecipe import BaseSlapRecipe
 from slapos.recipe.librecipe import GenericBaseRecipe
 class Recipe(GenericBaseRecipe):
-  def install(self):
-    runner = self.createPythonScript(
-      self.options['runner-path'],
-      __name__+'.testrunner.run',
-      arguments=[self.options['suite-url'],
-                 self.options['report-url'],
-                 self.options['report-project'],
-                 self.options['browser'],
-                 ])
-    return [runner]
+  def install(self):
+    return self.createPythonScript(
+      self.options['runner-path'],
+      __name__+'.testrunner.run',
+      (self.options['suite-url'],
+       self.options['report-url'],
+       self.options['report-project'],
+       self.options['browser']))
@@ -36,12 +36,7 @@ import urlparse
 from subprocess import Popen, PIPE
 import signal
-def run(args):
-  suite_url = args[0]
-  report_url = args[1]
-  project = args[2]
-  browser_binary = args[3]
+def run(suite_url, report_url, project, browser_binary):
   suite_parsed = urlparse.urlparse(suite_url)
   config = {
...
@@ -31,27 +31,12 @@ from slapos.recipe.librecipe import GenericBaseRecipe
 class Recipe(GenericBaseRecipe):
   def install(self):
-    env = os.environ.copy()
-    path_list = self.options['path'].split('\n')
-    env.update(PATH=':'.join(path_list))
-    env.update(SHELL=self.options['shell'])
-    env.update(HOME=self.options['home'])
     ps1 = self.options.get('ps1')
-    if ps1 is not None:
-      env.update(PS1=str(json.loads(ps1)))
-    else:
-      env.update(PS1=env.get('PS1', '> '))
-    wrapper = self.createPythonScript(
-      self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.executee',
-      [ # Executable
-        [self.options['shell']],
-        # Environment
-        env
-      ]
-    )
-    return [wrapper]
+    shell = self.options['shell']
+    env = {
+      'HOME': self.options['home'],
+      'PATH': ':'.join(self.options['path'].split('\n')),
+      'PS1': str(json.loads(ps1)) if ps1 else os.getenv('PS1', '> '),
+      'SHELL': shell,
+    }
+    return self.createWrapper(self.options['wrapper'], (shell,), env)
@@ -33,24 +33,14 @@ import shlex
 from slapos.recipe.librecipe import GenericBaseRecipe
-def login_shell(args):
-  password_file = args['password-file']
+def login_shell(password_file, shell):
   if password_file:
     with open(password_file, 'r') as password_file:
       password = password_file.read()
-    if (password != ''):
-      entered_password = getpass()
-    else:
-      entered_password = ''
-    if not hmac.compare_digest(entered_password, password):
-      return 1
-    else:
-      commandline = shlex.split(args['shell'])
-      path = commandline[0]
-      os.execv(path, commandline)
-  else:
+    if not password or hmac.compare_digest(getpass(), password):
+      commandline = shlex.split(shell)
+      os.execv(commandline[0], commandline)
   return 1
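The rewritten `login_shell` leans on `hmac.compare_digest`, whose running time does not depend on where the two strings first differ, so response timing cannot leak a password prefix; the empty-password case now short-circuits with `not password`. The check in isolation (`password_ok` is an illustrative name, not part of the recipe):

```python
import hmac

def password_ok(entered, expected):
    # Constant-time comparison; an empty expected password disables the check,
    # mirroring the "not password or ..." short-circuit above.
    return not expected or hmac.compare_digest(entered, expected)
```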
 def shellinabox(args):
@@ -95,22 +85,16 @@ def shellinabox(args):
 class Recipe(GenericBaseRecipe):
   def install(self):
-    path_list = []
-    login_shell = self.createPythonScript(
+    login_shell_wrapper = self.createPythonScript(
       self.options['login-shell'],
-      '%s.login_shell' % __name__,
-      {
-        'password-file': self.options['password-file'],
-        'shell': self.options['shell']
-      }
+      __name__ + '.login_shell',
+      (self.options['password-file'], self.options['shell'])
     )
-    path_list.append(login_shell)
-    wrapper = self.createPythonScript(
+    shellinabox_wrapper = self.createPythonScript(
       self.options['wrapper'],
-      '%s.shellinabox' % __name__,
-      dict(
+      __name__ + '.shellinabox',
+      (dict(
         certificate_dir=self.options['certificate-directory'],
         ssl_key=self.options['key-file'],
         ssl_certificate=self.options['cert-file'],
@@ -118,9 +102,8 @@ class Recipe(GenericBaseRecipe):
         directory=self.options['directory'],
         ipv6=self.options['ipv6'],
         port=self.options['port'],
-        login_shell=login_shell,
-      )
+        login_shell=login_shell_wrapper,
+      ),)
     )
-    path_list.append(wrapper)
-    return [wrapper]
+    return login_shell_wrapper, shellinabox_wrapper
@@ -32,6 +32,6 @@ class Recipe(GenericBaseRecipe):
       self.createPythonScript(
         self.options['wrapper-path'],
         'slapos.recipe.librecipe.execute.execute_with_signal_translation',
-        [self.options['wrapped-path']]
+        ((self.options['wrapped-path'],),)
       )
     ]
@@ -68,9 +68,8 @@ class Recipe(GenericBaseRecipe):
       'root-dir': self.options['root-dir']
     }
-    server = self.createPythonScript(
+    return self.createPythonScript(
       self.options['wrapper'].strip(),
-      '%s.simplehttpserver.run' % __name__, parameters
+      __name__ + '.simplehttpserver.run',
+      (parameters,)
     )
-    return [server]
@@ -29,9 +29,9 @@ import time
 from slapos.recipe.librecipe import GenericBaseRecipe
-def log(args):
+def log(filename):
   prefix = time.strftime('%Y-%m-%d.%H:%M.%s:')
-  with open(args['filename'], 'aw') as logfile:
+  with open(filename, 'a') as logfile:
     for line in sys.stdin:
       print >> logfile, prefix, line,
     print >> logfile, prefix, '------------------------'
@@ -39,10 +39,7 @@ def log(args):
 class Recipe(GenericBaseRecipe):
   def install(self):
-    wrapper = self.options['wrapper']
-    log = self.options['log']
-    script = self.createPythonScript(wrapper,
+    return self.createPythonScript(
+      self.options['wrapper'],
       __name__ + '.log',
-      arguments=dict(filename=log))
-    return [script]
+      (self.options['log'],))
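The `log()` filter above prefixes each stdin line with a timestamp using Python 2 print statements; a Python 3 equivalent of the same filter (with `%H:%M:%S` substituted for the original's glibc-specific `%s` epoch field):

```python
import sys
import time

def log(filename):
    # Prefix every line read from stdin with a timestamp and append it,
    # followed by a trailing separator line, to the log file.
    prefix = time.strftime('%Y-%m-%d.%H:%M:%S:')
    with open(filename, 'a') as logfile:
        for line in sys.stdin:
            logfile.write('%s %s' % (prefix, line))
        logfile.write('%s ------------------------\n' % prefix)
```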
@@ -48,11 +48,10 @@ class Recipe(GenericBaseRecipe):
     )
     # Create init script
-    wrapper = self.createPythonScript(
+    wrapper = self.createWrapper(
       self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['sphinx-searchd-binary'].strip(), '-c',
-       sphinx_conf_path, '--nodetach'],
+      (self.options['sphinx-searchd-binary'].strip(), '-c',
+       sphinx_conf_path, '--nodetach'),
     )
     return [wrapper, sphinx_conf_path]
@@ -78,21 +78,15 @@ class Recipe(GenericBaseRecipe):
       self.substituteTemplate(template_filename, config))
     # Prepare directories
-    prepare_path = self.createPythonScript(
+    prepare_path = self.createWrapper(
       self.options['prepare-path'],
-      'slapos.recipe.librecipe.execute.execute',
-      arguments=[self.options['binary-path'].strip(),
-                 '-z',
-                 '-f', configuration_path,
-                ],)
+      (self.options['binary-path'].strip(),
+       '-z', '-f', configuration_path))
     # Create running wrapper
-    wrapper_path = self.createPythonScript(
+    wrapper_path = self.createWrapper(
       self.options['wrapper-path'],
-      'slapos.recipe.librecipe.execute.execute',
-      arguments=[self.options['binary-path'].strip(),
-                 '-N',
-                 '-f', configuration_path,
-                ],)
+      (self.options['binary-path'].strip(),
+       '-N', '-f', configuration_path))
     return [configuration_path, wrapper_path, prepare_path]
@@ -34,17 +34,14 @@ from slapos.recipe.librecipe import GenericBaseRecipe
 from slapos.recipe.librecipe.inotify import subfiles
 # This authority only works with dropbear or openssh sshkey generators
-def sshkeys_authority(args):
-  requests_directory = args['requests']
-  keygen_binary = args['sshkeygen']
+def sshkeys_authority(request_directory, keygen_binary):
   if 'openssh' in keygen_binary:
     authority_type = 'openssh'
   else:
     # Keep dropbear for compatibility
     authority_type = 'dropbear'
-  for request_filename in subfiles(requests_directory):
+  for request_filename in subfiles(request_directory):
     with open(request_filename) as request_file:
       request = json.load(request_file)
@@ -98,18 +95,13 @@ def sshkeys_authority(args):
       public_key_file.write(public_key_value)
 class Recipe(GenericBaseRecipe):
   def install(self):
-    args = dict(
-      requests=self.options['request-directory'],
-      sshkeygen=self.options['keygen-binary'],
-    )
-    wrapper = self.createPythonScript(self.options['wrapper'],
-      __name__ + '.sshkeys_authority', args)
-    return [wrapper]
+    return self.createPythonScript(self.options['wrapper'],
+      __name__ + '.sshkeys_authority',
+      (self.options['request-directory'],
+       self.options['keygen-binary']))
 class Request(GenericBaseRecipe):
@@ -160,11 +152,9 @@ class Request(GenericBaseRecipe):
       os.symlink(self.private_key, private_key_link)
     # end-XXX
-    wrapper = self.createPythonScript(
+    wrapper = self.createWrapper(
       self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.execute_wait',
-      [ [self.options['executable']],
-        [self.private_key, self.public_key] ])
+      (self.options['executable'],),
+      wait_list=(self.private_key, self.public_key))
     return [request_file, wrapper, public_key_link, private_key_link]
@@ -30,13 +30,16 @@ import errno
 from slapos.recipe.librecipe import GenericBaseRecipe
-def post_rotate(args):
-  pid_file = args['pid_file']
-
-  if os.path.exist(pid_file):
-    with open(pid_file, 'r') as file_:
-      pid = file_.read().strip()
-    os.kill(pid, signal.SIGUSR1)
+def kill(pid_file, sig=signal.SIGUSR1):
+  if os.path.exists(pid_file):
+    with open(pid_file) as f:
+      pid = int(f.read().strip())
+    try:
+      os.kill(pid, sig)
+    except OSError, e:
+      if e.errno != errno.ESRCH: # No such process
+        raise e
+      os.unlink(pid_file)
 class Recipe(GenericBaseRecipe):
@@ -76,28 +79,18 @@ class Recipe(GenericBaseRecipe):
       self.substituteTemplate(template, conf))
     path_list.append(conf_file)
-    wrapper = self.createPythonScript(
+    wrapper = self.createWrapper(
       self.options['wrapper'],
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['stunnel-binary'], conf_file]
+      (self.options['stunnel-binary'], conf_file),
     )
     path_list.append(wrapper)
-    if os.path.exists(pid_file):
-      with open(pid_file, 'r') as file_:
-        pid = file_.read().strip()
-      # Reload configuration
-      try:
-        os.kill(int(pid, 10), signal.SIGHUP)
-      except OSError, e:
-        if e.errno == errno.ESRCH: # No such process
-          os.unlink(pid_file)
-        else:
-          raise e
+    # Reload configuration
+    kill(pid_file, signal.SIGHUP)
     if 'post-rotate-script' in self.options:
-      self.createPythonScript(self.options['post-rotate-script'],
-        __name__ + 'post_rotate',
-        dict(pid_file=pid_file))
+      path_list.append(self.createPythonScript(
+        self.options['post-rotate-script'],
+        __name__ + '.kill', (pid_file,)))
     return path_list
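The new `kill()` helper generalises the old inline reload logic: read the pid file, send a signal, and treat a stale pid file (`ESRCH`, no such process) as a cue to delete the file rather than as an error. A Python 3 sketch of the same pattern (in Python 3, `ProcessLookupError` is an `OSError` with errno `ESRCH`, so the check still works):

```python
import errno
import os
import signal

def kill(pid_file, sig=signal.SIGUSR1):
    # Signal the process recorded in pid_file; remove the file if the
    # process no longer exists (stale pid file left by a dead daemon).
    if os.path.exists(pid_file):
        with open(pid_file) as f:
            pid = int(f.read().strip())
        try:
            os.kill(pid, sig)
        except OSError as e:
            if e.errno != errno.ESRCH:
                raise
            os.unlink(pid_file)
```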
@@ -36,19 +36,17 @@ class Recipe(GenericBaseRecipe):
     r = [configuration_file]
     wrapper = self.options.get('tidstorage-wrapper')
-    wrapper and r.append(self.createPythonScript(wrapper,
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['tidstoraged-binary'], '--nofork', '--config',
-       configuration_file]))
+    wrapper and r.append(self.createWrapper(wrapper,
+      (self.options['tidstoraged-binary'],
+       '--nofork', '--config', configuration_file)))
-    r.append(self.createPythonScript(
+    r.append(self.createWrapper(
       self.options['repozo-wrapper'],
-      'slapos.recipe.librecipe.execute.execute',
-      [self.options['tidstorage-repozo-binary'],
+      (self.options['tidstorage-repozo-binary'],
        '--config', configuration_file,
        '--repozo', self.options['repozo-binary'],
        '--gzip',
        '--quick',
-      ]))
+      )))
     return r
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import shlex
from slapos.recipe.librecipe import GenericBaseRecipe
class Recipe(GenericBaseRecipe):
def install(self):
files = [f for f in self.options['files'].split('\n') if f]
command_line = shlex.split(self.options['command-line'])
wrapper = self.createPythonScript(
self.options['wrapper'],
'slapos.recipe.librecipe.execute.execute_wait',
[ command_line,
files ],
)
return [wrapper]
@@ -26,7 +26,6 @@
 ##############################################################################
 import shlex
-import os
 from slapos.recipe.librecipe import GenericBaseRecipe
@@ -38,56 +37,31 @@ class Recipe(GenericBaseRecipe):
   :param lines wait-for-files: list of files to wait for
   :param str pidfile: path to pidfile ensure exclusivity for the process
-  :param bool parameters-extra: whether wrapper parameters are passed onto command
+  :param str private-dev-shm: size of private /dev/shm, using user namespaces
   :param bool reserve-cpu: command will ask for an exclusive CPU core
   """
   def install(self):
-    command_line = shlex.split(self.options['command-line'])
+    args = shlex.split(self.options['command-line'])
     wrapper_path = self.options['wrapper-path']
     wait_files = self.options.get('wait-for-files')
-    environment = self.options.get('environment')
-    parameters_extra = self.options.get('parameters-extra')
     pidfile = self.options.get('pidfile')
-    reserve_cpu = self.options.get('reserve-cpu', False)
-
-    if not wait_files and not environment:
-      # Create a simple wrapper as shell script
-      return [self.createWrapper(
-        name=wrapper_path,
-        command=command_line[0],
-        parameters=command_line[1:],
-        parameters_extra=parameters_extra,
-        pidfile=pidfile,
-        reserve_cpu=reserve_cpu
-      )]
-
-    # More complex needs: create a Python script as wrapper
-    if wait_files is not None:
-      wait_files = [filename.strip() for filename in wait_files.split()
-                    if filename.strip()]
-    if environment is not None:
-      environment = dict((k.strip(), v.strip()) for k, v in [
-        line.split('=') for line in environment.splitlines() if line.strip() ])
-
-    # We create a python script and a wrapper around the python
-    # script because the python script might have a too long #! line
-    if os.path.exists(os.path.join(self.buildout['buildout']['directory'], "bin")):
-      base_script_path = os.path.join(
-        self.buildout['buildout']['directory'], "bin/" + wrapper_path.split("/")[-1])
-    else:
-      base_script_path = os.path.join(
-        self.buildout['buildout']['directory'], wrapper_path.split("/")[-1])
-    python_script = self.createPythonScript(
-      base_script_path +'.py',
-      'slapos.recipe.librecipe.execute.generic_exec',
-      (command_line, wait_files, environment,), )
-    return [python_script, self.createWrapper(
-      name=wrapper_path,
-      command=python_script,
-      parameters=[],
-      parameters_extra=parameters_extra,
-      pidfile=pidfile,
-      reserve_cpu=reserve_cpu
-    )]
+    private_dev_shm = self.options.get('private-dev-shm')
+
+    environment = {}
+    for line in (self.options.get('environment') or '').splitlines():
+      line = line.strip()
+      if line:
+        k, v = line.split('=')
+        environment[k.rstrip()] = v.lstrip()
+
+    kw = {}
+    if wait_files:
+      kw['wait_list'] = wait_files.split()
+    if pidfile:
+      kw['pidfile'] = pidfile
+    if private_dev_shm:
+      kw['private_dev_shm'] = private_dev_shm
+    if self.isTrueValue(self.options.get('reserve-cpu')):
+      kw['reserve_cpu'] = True
+
+    return self.createWrapper(wrapper_path, args, environment, **kw)
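The rewritten recipe parses the `environment` option, one `KEY=value` assignment per line, into a dict before handing it to `createWrapper`. The parsing in isolation (it raises `ValueError` on lines with more than one `=`, matching the recipe's `line.split('=')`):

```python
def parse_environment(text):
    # One "KEY=value" assignment per line; blank lines are ignored and
    # whitespace around the key and the value is stripped.
    environment = {}
    for line in (text or '').splitlines():
        line = line.strip()
        if line:
            k, v = line.split('=')
            environment[k.rstrip()] = v.lstrip()
    return environment
```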
@@ -71,10 +71,10 @@ class Recipe(BaseSlapRecipe):
     self._createDirectory(cron_d)
     self._createDirectory(crontabs)
     wrapper = zc.buildout.easy_install.scripts([('crond',
-      'slapos.recipe.librecipe.execute', 'execute')], self.ws, sys.executable,
-      self.wrapper_directory, arguments=[
-        self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
-        '-t', timestamps, '-f', '-l', '5', '-M', catcher]
+      'slapos.recipe.librecipe.execute', 'generic_exec')], self.ws,
+      sys.executable, self.wrapper_directory, arguments=repr((
+        self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
+        '-t', timestamps, '-f', '-l', '5', '-M', catcher))[1:-1]
       )[0]
     self.path_list.append(wrapper)
     return cron_d
@@ -101,10 +101,10 @@ class Recipe(BaseSlapRecipe):
     self.path_list.append(zabbix_agentd_path)
     wrapper = zc.buildout.easy_install.scripts([('zabbixagentd',
-      'slapos.recipe.librecipe.execute', 'execute')], self.ws, sys.executable,
-      self.bin_directory, arguments=[
-        self.options['zabbix_agentd_binary'].strip(), '-c',
-        zabbix_agentd_path])[0]
+      'slapos.recipe.librecipe.execute', 'generic_exec')], self.ws,
+      sys.executable, self.bin_directory, arguments=repr((
+        self.options['zabbix_agentd_binary'].strip(), '-c',
+        zabbix_agentd_path))[1:-1])[0]
     self.path_list.extend(zc.buildout.easy_install.scripts([
       ('zabbixagentd', __name__ + '.svcdaemon', 'svcdaemon')],
 ......
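Both hunks above switch to `generic_exec` and serialise the argument tuple with `repr(...)[1:-1]`: `zc.buildout.easy_install.scripts` pastes the `arguments` string verbatim between the parentheses of the generated function call, so stripping the outer parentheses of the tuple's repr yields exactly the comma-separated literal it needs. The trick in isolation (the paths are made up):

```python
# A tuple of command-line arguments, as the recipes build it.
args = ('/opt/dcrond/bin/crond', '-s', '/srv/cron.d', '-f')

# repr() gives "('/opt/dcrond/bin/crond', '-s', '/srv/cron.d', '-f')";
# slicing off the outer parentheses leaves a valid argument list that
# buildout pastes into the generated generic_exec(...) call.
arguments = repr(args)[1:-1]
```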
@@ -89,10 +89,9 @@ class Recipe(GenericBaseRecipe):
       self.substituteTemplate(template_filename, config))
     # Create running wrapper
-    wrapper_path = self.createPythonScript(
+    wrapper_path = self.createWrapper(
       self.options['wrapper-path'],
-      'slapos.recipe.librecipe.execute.execute',
-      arguments=[self.options['binary-path'].strip(), '-C',
-                 self.options['conf-path']],)
+      (self.options['binary-path'].strip(),
+       '-C', self.options['conf-path']))
     return [configuration_path, wrapper_path]
@@ -69,8 +69,11 @@ class TestGenericCloudooo(unittest.TestCase):
       ]), data)
     self.assertIn("\n".join([
       "",
+      " application/x-asc-presentation application/vnd.oasis.opendocument.presentation x2t",
       " application/x-asc-presentation application/vnd.openxmlformats-officedocument.presentationml.presentation x2t",
+      " application/x-asc-spreadsheet application/vnd.oasis.opendocument.spreadsheet x2t",
       " application/x-asc-spreadsheet application/vnd.openxmlformats-officedocument.spreadsheetml.sheet x2t",
+      " application/x-asc-text application/vnd.oasis.opendocument.text x2t",
       " application/x-asc-text application/vnd.openxmlformats-officedocument.wordprocessingml.document x2t",
       "",
     ]), data)
@@ -79,15 +79,14 @@ class Re6stnetTest(unittest.TestCase):
       token_file = os.path.join(self.options['conf-dir'], 'token.json')
       with open(path, 'r') as f:
         content = f.read()
-        self.assertIn("'token_json': '%s'" % token_file, content)
+        self.assertIn("('http://%s:%s/', %r, %r," % (
+          self.options['ipv4'], self.options['port'], self.token_dir, token_file),
+          content)
         self.assertIn("'partition_id': 'slappart0'", content)
-        self.assertIn("'computer_id': 'comp-test'", content)
+        self.assertIn("'computer_guid': 'comp-test'", content)
         self.assertIn("'key_file': '/path/to/key'", content)
         self.assertIn("'cert_file': '/path/to/cert'", content)
-        self.assertIn("'server_url': 'http://server.com'", content)
-        self.assertIn("'token_base_path': '%s'" % self.token_dir, content)
-        self.assertIn("'registry_url': 'http://%s:%s/'" % (self.options['ipv4'],
-          self.options['port']), content)
+        self.assertIn("'master_url': 'http://server.com'", content)
   def fake_generateCertificates(self):
     return
 ......
@@ -175,6 +175,16 @@
       "default": "",
       "textarea": true,
       "type": "string"
+    },
+    "virtualhostroot-http-port": {
+      "description": "Port where http requests to frontend will be redirected.",
+      "default": 80,
+      "type": "integer"
+    },
+    "virtualhostroot-https-port": {
+      "description": "Port where https requests to frontend will be redirected.",
+      "default": 443,
+      "type": "integer"
     }
   }
 }
@@ -80,7 +80,7 @@
   # First, we check if we have a zope backend server
   # If so, let's use Virtual Host Monster rewrite
   # We suppose that Apache listens to 443 (even indirectly thanks to things like iptables)
-  RewriteRule ^/(.*)$ {{ slave_parameter.get('https-url', slave_parameter.get('url', '')) }}/VirtualHostBase/https//%{SERVER_NAME}:443/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
+  RewriteRule ^/(.*)$ {{ slave_parameter.get('https-url', slave_parameter.get('url', '')) }}/VirtualHostBase/https//%{SERVER_NAME}:{{ slave_parameter.get('virtualhostroot-https-port', '443') }}/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
 {% elif slave_type == 'redirect' -%}
   RewriteRule (.*) {{ slave_parameter.get('https-url', slave_parameter.get('url', ''))}}$1 [R,L]
 {% else -%}
@@ -159,7 +159,7 @@
   # First, we check if we have a zope backend server
   # If so, let's use Virtual Host Daemon rewrite
   # We suppose that Apache listens to 80 (even indirectly thanks to things like iptables)
-  RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/VirtualHostBase/http/%{SERVER_NAME}:80/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
+  RewriteRule ^/(.*)$ {{ slave_parameter.get('url', '') }}/VirtualHostBase/http/%{SERVER_NAME}:{{ slave_parameter.get('virtualhostroot-http-port', '80') }}/{{ slave_parameter.get('path', '') }}/VirtualHostRoot/$1 [L,P]
 {% else -%}
 {% if 'default-path' in slave_parameter %}
   RewriteRule ^/?$ {{ slave_parameter.get('default-path') }} [R=301,L]
 ......
@@ -19,4 +19,4 @@ md5sum = 6e4431cf4b0a0d034402604b1e2844c0
 [template-cloudooo-instance]
 filename = instance-cloudooo.cfg.in
-md5sum = afbfed1d762e5cdf7c6fd1292e7b28e7
+md5sum = b1e0c365b5cfffd86278daf39fb6de9f
@@ -247,7 +247,7 @@ link-binary =
   {{ parameter_dict['poppler'] }}/bin/pdfinfo
   {{ parameter_dict['poppler'] }}/bin/pdftotext
   {{ parameter_dict['poppler'] }}/bin/pdftohtml
-  {{ parameter_dict['onlyoffice-x2t'] }}/x2t
+  {{ parameter_dict['onlyoffice-core'] }}/bin/x2t
 # rest of parts are candidates for some generic stuff
 [directory]
 ......
@@ -68,7 +68,7 @@ libreoffice-bin = ${libreoffice-bin:location}
 libxcb = ${libxcb:location}
 mesa = ${mesa:location}
 openssl = ${openssl:location}
-onlyoffice-x2t = ${onlyoffice-x2t:location}
+onlyoffice-core = ${onlyoffice-core:location}
 poppler = ${poppler:location}
 pixman = ${pixman:location}
 wkhtmltopdf = ${wkhtmltopdf:location}
 ......
@@ -88,6 +88,16 @@
       "description": "Request a front-end slave instance of this software type.",
       "default": "RootSoftwareInstance",
       "type": "object"
+    },
+    "virtualhostroot-http-port": {
+      "description": "Front-end slave http port. Port where http requests to frontend will be redirected.",
+      "default": 80,
+      "type": "integer"
+    },
+    "virtualhostroot-https-port": {
+      "description": "Front-end slave https port. Port where https requests to frontend will be redirected.",
+      "default": 443,
+      "type": "integer"
     }
   },
   "type": "object"
@@ -119,6 +129,10 @@
       "default": 5,
       "type": "number"
     },
+    "private-dev-shm": {
+      "description": "Size of private /dev/shm for wendelin.core. If sysctl kernel.unprivileged_userns_clone exists, it must be set to 1.",
+      "type": "string"
+    },
     "ssl-authentication": {
       "title": "Enable SSL Client authentication on this zope instance.",
       "description": "If set to true, will set SSL Client verification to required on apache VirtualHost which allow to access this zope instance.",
@@ -140,6 +154,11 @@
       "default": 1,
       "type": "integer"
     },
+    "large-file-threshold": {
+      "description": "Requests bigger than this size get saved into a temporary file instead of being read completely into memory, in bytes",
+      "default": "10MB",
+      "type": "string"
+    },
     "port-base": {
       "allOf": [{
         "$ref": "#/definitions/tcpv4port"
 ......
@@ -122,6 +122,12 @@
         }
       },
       "type": "object"
+    },
+    "odbc-ini": {
+      "description": "Contents of odbc.ini file, see unixodbc document",
+      "default": "",
+      "type": "string"
     }
   }
 }
@@ -15,3 +15,6 @@ branch = erp5-component
 [cloudooo-repository]
 branch = master
 revision =
+
+[versions]
+cloudooo =
@@ -19,4 +19,4 @@ md5sum = 307663d73ef3ef94b02567ecd322252e
 [template-default]
 filename = instance-default.cfg
-md5sum = 76f63d443c5fdcea6fac68791c6bb65b
+md5sum = 555700e5d216ff32a981f4066791bdab
@@ -21,7 +21,7 @@ parts =
 [monitor-publish]
 recipe = slapos.cookbook:publish
-url = http://[$${shellinabox:ipv6}]:$${shellinabox:port}/
+url = https://[$${shellinabox:ipv6}]:$${shellinabox:port}/
 password = $${pwgen:passwd}
 frontend-url = $${testnode-frontend:connection-secure_access}
 ......
@@ -64,11 +64,21 @@ function add_log ()
   done
 }
+function add_checks ()
+{
+  LOG_FILE=$1
+  echo 'lsof -Pni' >> $LOG_FILE 2>&1
+  lsof -Pni >> $LOG_FILE 2>&1
+  echo 'iptables-save' >> $LOG_FILE 2>&1
+  iptables-save >> $LOG_FILE 2>&1
+  for f in /tmp/playbook-* ; do echo $f ; cat $f; echo; done >> $LOG_FILE 2>&1
+}
 function upload ()
 {
   try=$1
   LOG_FILE=$2
   add_log $LOG_FILE
+  add_checks $LOG_FILE
   t=`date '+%Y%m%d%H%S'`
   mv $LOG_FILE ${LOG_FILE}.$t
   # just to be sure flush all disk operations before uploading
 ......
@@ -64,7 +64,7 @@ def waitForSite(partition_path):
         finished = False
         status_dict['stdout'] = try_info + 'Build not yet successful.'
         print(try_info + '%r: Found not yet finished run.' % (result_file,))
-      elif "\"msg\": \"[u'Build successful, connect to:', u'" in result:
+      elif "Build successful, connect to:" in result:
         # success
         status_dict.update(
           success=True
 ......
 [buildout]
-extends = ../../../kvm/software.cfg
+extends = https://lab.nexedi.com/nexedi/slapos/raw/8deeab28fb7d4fd527c2df05f74fcaf27c8df218/software/kvm/software.cfg
 parts =
   eggs
@@ -55,7 +55,7 @@ output = ${buildout:directory}/template-original.kvm.cfg
 [deploy-script-controller-script]
 filename = deploy-script-controller
 location = ${:_profile_base_location_}/${:filename}
-md5sum = 31aadc895acf9fc2fc6e1cbe815339c6
+md5sum = f0f5dd379361eb37f84e0bc7639f645f
 # configuration
 waittime = 360
 tries = 80
 ......
# grafana / telegraf / influxdb
## Custom telegraf plugins
See https://github.com/influxdata/telegraf to learn about plugins.
Useful plugins in this context are probably
[exec](https://github.com/influxdata/telegraf/tree/1.5.1/plugins/inputs/exec)
or
[httpjson](https://github.com/influxdata/telegraf/tree/1.5.1/plugins/inputs/httpjson).
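As an illustration, a minimal `exec` input section for the telegraf configuration (the probe path is made up; telegraf runs the command periodically and parses its stdout):

```toml
[[inputs.exec]]
  # Hypothetical probe script printing metrics in influx line protocol.
  commands = ["/usr/local/bin/my-probe"]
  timeout = "5s"
  data_format = "influx"
```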
Telegraf saves metrics in the `telegraf` database of the embedded influxdb server.
## Grafana
You will have to add the influxdb data source to grafana yourself, using the
parameters published by the slapos instance. See
http://docs.grafana.org/features/datasources/influxdb/ for details.
When adding the data source, use the *proxy* option; otherwise Grafana makes your
browser query influxdb directly, which also uses a self-signed certificate.
A workaround is to configure your browser to accept the influxdb certificate
before using grafana, but using the proxy option is easier.
## Influxdb
Influxdb backups are not done automatically by this software release.
One important thing to note is that the backup protocol is enabled on the ipv4
address provided by slapos, so make sure this ip is not reachable from untrusted
sources.
## TODO
* influxdb and telegraf run with very low priority; this could become an option
* make one partition for each service and use switch software type
* make it easier to add custom configuration (how ?)
# THIS IS NOT A BUILDOUT FILE, despite purposely using a compatible syntax.
# The only allowed lines here are (regexes):
# - "^#" comments, copied verbatim
# - "^[" section beginnings, copied verbatim
# - lines containing an "=" sign which must fit in the following categories.
#   - "^\s*filename\s*=\s*path\s*$" where "path" is relative to this file
#     But avoid directories, they are not portable.
#     Copied verbatim.
#   - "^\s*hashtype\s*=.*" where "hashtype" is one of the values supported
#     by the re-generation script.
#     Re-generated.
#   - other lines are copied verbatim
# Substitution (${...:...}), extension ([buildout] extends = ...) and
# section inheritance (< = ...) are NOT supported (but you should really
# not need these here).
[instance-profile]
filename = instance.cfg.in
md5sum = 7fb6806b139b3a8d0054308397be1dd9
[influxdb-config-file]
filename = influxdb-config-file.cfg.in
md5sum = 7ce85159c0664b251e249eac4b37bea4
[telegraf-config-file]
filename = telegraf-config-file.cfg.in
md5sum = a1a9c22c2a7829c66a49fc2504604d21
[grafana-config-file]
filename = grafana-config-file.cfg.in
md5sum = 8244d430905b968795c7946049bed9e3
# Code generated by gowork-snapshot; DO NOT EDIT.
# list of go git repositories to fetch
[gowork.goinstall]
depends_gitfetch =
${go_collectd.org:recipe}
${go_github.com_BurntSushi_toml:recipe}
${go_github.com_Microsoft_go-winio:recipe}
${go_github.com_RoaringBitmap_roaring:recipe}
${go_github.com_Shopify_sarama:recipe}
${go_github.com_Sirupsen_logrus:recipe}
${go_github.com_StackExchange_wmi:recipe}
${go_github.com_aerospike_aerospike-client-go:recipe}
${go_github.com_amir_raidman:recipe}
${go_github.com_apache_thrift:recipe}
${go_github.com_aws_aws-sdk-go:recipe}
${go_github.com_beorn7_perks:recipe}
${go_github.com_bmizerany_pat:recipe}
${go_github.com_boltdb_bolt:recipe}
${go_github.com_bsm_sarama-cluster:recipe}
${go_github.com_cenkalti_backoff:recipe}
${go_github.com_cespare_xxhash:recipe}
${go_github.com_couchbase_go-couchbase:recipe}
${go_github.com_couchbase_gomemcached:recipe}
${go_github.com_couchbase_goutils:recipe}
${go_github.com_davecgh_go-spew:recipe}
${go_github.com_dgrijalva_jwt-go:recipe}
${go_github.com_dgryski_go-bitstream:recipe}
${go_github.com_docker_docker:recipe}
${go_github.com_docker_go-connections:recipe}
${go_github.com_eapache_go-resiliency:recipe}
${go_github.com_eapache_go-xerial-snappy:recipe}
${go_github.com_eapache_queue:recipe}
${go_github.com_eclipse_paho.mqtt.golang:recipe}
${go_github.com_glycerine_go-unsnap-stream:recipe}
${go_github.com_go-ini_ini:recipe}
${go_github.com_go-logfmt_logfmt:recipe}
${go_github.com_go-ole_go-ole:recipe}
${go_github.com_go-sql-driver_mysql:recipe}
${go_github.com_gobwas_glob:recipe}
${go_github.com_gogo_protobuf:recipe}
${go_github.com_golang_protobuf:recipe}
${go_github.com_golang_snappy:recipe}
${go_github.com_google_go-cmp:recipe}
${go_github.com_gorilla_mux:recipe}
${go_github.com_grafana_grafana:recipe}
${go_github.com_hailocab_go-hostpool:recipe}
${go_github.com_hashicorp_consul:recipe}
${go_github.com_influxdata_influxdb:recipe}
${go_github.com_influxdata_influxql:recipe}
${go_github.com_influxdata_tail:recipe}
${go_github.com_influxdata_telegraf:recipe}
${go_github.com_influxdata_toml:recipe}
${go_github.com_influxdata_usage-client:recipe}
${go_github.com_influxdata_wlog:recipe}
${go_github.com_influxdata_yamux:recipe}
${go_github.com_influxdata_yarpc:recipe}
${go_github.com_jackc_pgx:recipe}
${go_github.com_jmespath_go-jmespath:recipe}
${go_github.com_jwilder_encoding:recipe}
${go_github.com_kardianos_govendor:recipe}
${go_github.com_kardianos_osext:recipe}
${go_github.com_kardianos_service:recipe}
${go_github.com_kballard_go-shellquote:recipe}
${go_github.com_matttproud_golang_protobuf_extensions:recipe}
${go_github.com_miekg_dns:recipe}
${go_github.com_mitchellh_mapstructure:recipe}
${go_github.com_multiplay_go-ts3:recipe}
${go_github.com_naoina_go-stringutil:recipe}
${go_github.com_nats-io_go-nats:recipe}
${go_github.com_nats-io_nats:recipe}
${go_github.com_nats-io_nuid:recipe}
${go_github.com_nsqio_go-nsq:recipe}
${go_github.com_opencontainers_runc:recipe}
${go_github.com_opentracing-contrib_go-observer:recipe}
${go_github.com_opentracing_opentracing-go:recipe}
${go_github.com_openzipkin_zipkin-go-opentracing:recipe}
${go_github.com_peterh_liner:recipe}
${go_github.com_philhofer_fwd:recipe}
${go_github.com_pierrec_lz4:recipe}
${go_github.com_pierrec_xxHash:recipe}
${go_github.com_pkg_errors:recipe}
${go_github.com_pmezard_go-difflib:recipe}
${go_github.com_prometheus_client_golang:recipe}
${go_github.com_prometheus_client_model:recipe}
${go_github.com_prometheus_common:recipe}
${go_github.com_prometheus_procfs:recipe}
${go_github.com_rcrowley_go-metrics:recipe}
${go_github.com_retailnext_hllpp:recipe}
${go_github.com_samuel_go-zookeeper:recipe}
${go_github.com_satori_go.uuid:recipe}
${go_github.com_shirou_gopsutil:recipe}
${go_github.com_shirou_w32:recipe}
${go_github.com_soniah_gosnmp:recipe}
${go_github.com_sparrc_gdm:recipe}
${go_github.com_streadway_amqp:recipe}
${go_github.com_stretchr_objx:recipe}
${go_github.com_stretchr_testify:recipe}
${go_github.com_tidwall_gjson:recipe}
${go_github.com_tidwall_match:recipe}
${go_github.com_tinylib_msgp:recipe}
${go_github.com_vjeantet_grok:recipe}
${go_github.com_wvanbergen_kafka:recipe}
${go_github.com_wvanbergen_kazoo-go:recipe}
${go_github.com_xlab_treeprint:recipe}
${go_github.com_yuin_gopher-lua:recipe}
${go_github.com_zensqlmonitor_go-mssqldb:recipe}
${go_go.uber.org_atomic:recipe}
${go_go.uber.org_multierr:recipe}
${go_go.uber.org_zap:recipe}
${go_golang.org_x_crypto:recipe}
${go_golang.org_x_net:recipe}
${go_golang.org_x_sys:recipe}
${go_golang.org_x_text:recipe}
${go_golang.org_x_time:recipe}
${go_golang.org_x_tools:recipe}
${go_gopkg.in_asn1-ber.v1:recipe}
${go_gopkg.in_fatih_pool.v2:recipe}
${go_gopkg.in_fsnotify.v1:recipe}
${go_gopkg.in_gorethink_gorethink.v3:recipe}
${go_gopkg.in_ldap.v2:recipe}
${go_gopkg.in_mgo.v2:recipe}
${go_gopkg.in_olivere_elastic.v5:recipe}
${go_gopkg.in_tomb.v1:recipe}
${go_gopkg.in_yaml.v2:recipe}
[go_collectd.org]
<= go-git-package
go.importpath = collectd.org
repository = https://github.com/collectd/go-collectd
revision = v0.3.0-17-g606bd390f3
[go_github.com_BurntSushi_toml]
<= go-git-package
go.importpath = github.com/BurntSushi/toml
repository = https://github.com/BurntSushi/toml
revision = v0.2.0-45-ga368813c5e
[go_github.com_Microsoft_go-winio]
<= go-git-package
go.importpath = github.com/Microsoft/go-winio
repository = https://github.com/Microsoft/go-winio
revision = ce2922f643
[go_github.com_RoaringBitmap_roaring]
<= go-git-package
go.importpath = github.com/RoaringBitmap/roaring
repository = https://github.com/RoaringBitmap/roaring
revision = v0.2.8-174-g0a6691af7c
[go_github.com_Shopify_sarama]
<= go-git-package
go.importpath = github.com/Shopify/sarama
repository = https://github.com/Shopify/sarama
revision = 3b1b38866a
[go_github.com_Sirupsen_logrus]
<= go-git-package
go.importpath = github.com/Sirupsen/logrus
repository = https://github.com/Sirupsen/logrus
revision = 61e43dc76f
[go_github.com_StackExchange_wmi]
<= go-git-package
go.importpath = github.com/StackExchange/wmi
repository = https://github.com/StackExchange/wmi
revision = f3e2bae1e0
[go_github.com_aerospike_aerospike-client-go]
<= go-git-package
go.importpath = github.com/aerospike/aerospike-client-go
repository = https://github.com/aerospike/aerospike-client-go
revision = v1.6.4-277-g95e1ad7791
[go_github.com_amir_raidman]
<= go-git-package
go.importpath = github.com/amir/raidman
repository = https://github.com/amir/raidman
revision = c74861fe6a
[go_github.com_apache_thrift]
<= go-git-package
go.importpath = github.com/apache/thrift
repository = https://github.com/apache/thrift
revision = 4aaa92ece8
[go_github.com_aws_aws-sdk-go]
<= go-git-package
go.importpath = github.com/aws/aws-sdk-go
repository = https://github.com/aws/aws-sdk-go
revision = c861d27d03
[go_github.com_beorn7_perks]
<= go-git-package
go.importpath = github.com/beorn7/perks
repository = https://github.com/beorn7/perks
revision = 4c0e84591b
[go_github.com_bmizerany_pat]
<= go-git-package
go.importpath = github.com/bmizerany/pat
repository = https://github.com/bmizerany/pat
revision = 6226ea591a
[go_github.com_boltdb_bolt]
<= go-git-package
go.importpath = github.com/boltdb/bolt
repository = https://github.com/boltdb/bolt
revision = 9da3174536
[go_github.com_bsm_sarama-cluster]
<= go-git-package
go.importpath = github.com/bsm/sarama-cluster
repository = https://github.com/bsm/sarama-cluster
revision = v1.0.0-164-gabf039439f
[go_github.com_cenkalti_backoff]
<= go-git-package
go.importpath = github.com/cenkalti/backoff
repository = https://github.com/cenkalti/backoff
revision = b02f2bbce1
[go_github.com_cespare_xxhash]
<= go-git-package
go.importpath = github.com/cespare/xxhash
repository = https://github.com/cespare/xxhash
revision = e4e2bd419c
[go_github.com_couchbase_go-couchbase]
<= go-git-package
go.importpath = github.com/couchbase/go-couchbase
repository = https://github.com/couchbase/go-couchbase
revision = bfe555a140
[go_github.com_couchbase_gomemcached]
<= go-git-package
go.importpath = github.com/couchbase/gomemcached
repository = https://github.com/couchbase/gomemcached
revision = 4a25d2f4e1
[go_github.com_couchbase_goutils]
<= go-git-package
go.importpath = github.com/couchbase/goutils
repository = https://github.com/couchbase/goutils
revision = 5823a0cbaa
[go_github.com_davecgh_go-spew]
<= go-git-package
go.importpath = github.com/davecgh/go-spew
repository = https://github.com/davecgh/go-spew
revision = v1.1.0-9-gecdeabc654
[go_github.com_dgrijalva_jwt-go]
<= go-git-package
go.importpath = github.com/dgrijalva/jwt-go
repository = https://github.com/dgrijalva/jwt-go
revision = dbeaa9332f
[go_github.com_dgryski_go-bitstream]
<= go-git-package
go.importpath = github.com/dgryski/go-bitstream
repository = https://github.com/dgryski/go-bitstream
revision = 7d46cd22db
[go_github.com_docker_docker]
<= go-git-package
go.importpath = github.com/docker/docker
repository = https://github.com/docker/docker
revision = v17.03.2-ce-0-gf5ec1e2936
[go_github.com_docker_go-connections]
<= go-git-package
go.importpath = github.com/docker/go-connections
repository = https://github.com/docker/go-connections
revision = 990a1a1a70
[go_github.com_eapache_go-resiliency]
<= go-git-package
go.importpath = github.com/eapache/go-resiliency
repository = https://github.com/eapache/go-resiliency
revision = b86b1ec0dd
[go_github.com_eapache_go-xerial-snappy]
<= go-git-package
go.importpath = github.com/eapache/go-xerial-snappy
repository = https://github.com/eapache/go-xerial-snappy
revision = bb955e01b9
[go_github.com_eapache_queue]
<= go-git-package
go.importpath = github.com/eapache/queue
repository = https://github.com/eapache/queue
revision = 44cc805cf1
[go_github.com_eclipse_paho.mqtt.golang]
<= go-git-package
go.importpath = github.com/eclipse/paho.mqtt.golang
repository = https://github.com/eclipse/paho.mqtt.golang
revision = d4f545eb10
[go_github.com_glycerine_go-unsnap-stream]
<= go-git-package
go.importpath = github.com/glycerine/go-unsnap-stream
repository = https://github.com/glycerine/go-unsnap-stream
revision = 62a9a9eb44
[go_github.com_go-ini_ini]
<= go-git-package
go.importpath = github.com/go-ini/ini
repository = https://github.com/go-ini/ini
revision = 9144852efb
[go_github.com_go-logfmt_logfmt]
<= go-git-package
go.importpath = github.com/go-logfmt/logfmt
repository = https://github.com/go-logfmt/logfmt
revision = v0.3.0-0-g390ab7935e
[go_github.com_go-ole_go-ole]
<= go-git-package
go.importpath = github.com/go-ole/go-ole
repository = https://github.com/go-ole/go-ole
revision = be49f7c077
[go_github.com_go-sql-driver_mysql]
<= go-git-package
go.importpath = github.com/go-sql-driver/mysql
repository = https://github.com/go-sql-driver/mysql
revision = v1.0-470-g2e00b5cd70
[go_github.com_gobwas_glob]
<= go-git-package
go.importpath = github.com/gobwas/glob
repository = https://github.com/gobwas/glob
revision = v0.2.2-0-gbea32b9cd2
[go_github.com_gogo_protobuf]
<= go-git-package
go.importpath = github.com/gogo/protobuf
repository = https://github.com/gogo/protobuf
revision = 160de10b25
[go_github.com_golang_protobuf]
<= go-git-package
go.importpath = github.com/golang/protobuf
repository = https://github.com/golang/protobuf
revision = 1e59b77b52
[go_github.com_golang_snappy]
<= go-git-package
go.importpath = github.com/golang/snappy
repository = https://github.com/golang/snappy
revision = 553a641470
[go_github.com_google_go-cmp]
<= go-git-package
go.importpath = github.com/google/go-cmp
repository = https://github.com/google/go-cmp
revision = f94e52cad9
[go_github.com_gorilla_mux]
<= go-git-package
go.importpath = github.com/gorilla/mux
repository = https://github.com/gorilla/mux
revision = 392c28fe23
[go_github.com_grafana_grafana]
<= go-git-package
go.importpath = github.com/grafana/grafana
repository = https://github.com/grafana/grafana
revision = v4.6.0-beta1-1360-g9606a34e0a
[go_github.com_hailocab_go-hostpool]
<= go-git-package
go.importpath = github.com/hailocab/go-hostpool
repository = https://github.com/hailocab/go-hostpool
revision = e80d13ce29
[go_github.com_hashicorp_consul]
<= go-git-package
go.importpath = github.com/hashicorp/consul
repository = https://github.com/hashicorp/consul
revision = v0.7.3-35-g63d2fc6823
[go_github.com_influxdata_influxdb]
<= go-git-package
go.importpath = github.com/influxdata/influxdb
repository = https://github.com/influxdata/influxdb
revision = v1.4.0rc0-328-g938db68198
[go_github.com_influxdata_influxql]
<= go-git-package
go.importpath = github.com/influxdata/influxql
repository = https://github.com/influxdata/influxql
revision = 851636b092
[go_github.com_influxdata_tail]
<= go-git-package
go.importpath = github.com/influxdata/tail
repository = https://github.com/influxdata/tail
revision = v0-95-ga395bf99fe
[go_github.com_influxdata_telegraf]
<= go-git-package
go.importpath = github.com/influxdata/telegraf
repository = https://github.com/influxdata/telegraf
revision = 1.5.0-rc1-73-g90b6b760d1
[go_github.com_influxdata_toml]
<= go-git-package
go.importpath = github.com/influxdata/toml
repository = https://github.com/influxdata/toml
revision = 5d1d907f22
[go_github.com_influxdata_usage-client]
<= go-git-package
go.importpath = github.com/influxdata/usage-client
repository = https://github.com/influxdata/usage-client
revision = 6d38953763
[go_github.com_influxdata_wlog]
<= go-git-package
go.importpath = github.com/influxdata/wlog
repository = https://github.com/influxdata/wlog
revision = 7c63b0a71e
[go_github.com_influxdata_yamux]
<= go-git-package
go.importpath = github.com/influxdata/yamux
repository = https://github.com/influxdata/yamux
revision = 1f58ded512
[go_github.com_influxdata_yarpc]
<= go-git-package
go.importpath = github.com/influxdata/yarpc
repository = https://github.com/influxdata/yarpc
revision = fdd7e84bf3
[go_github.com_jackc_pgx]
<= go-git-package
go.importpath = github.com/jackc/pgx
repository = https://github.com/jackc/pgx
revision = 63f58fd32e
[go_github.com_jmespath_go-jmespath]
<= go-git-package
go.importpath = github.com/jmespath/go-jmespath
repository = https://github.com/jmespath/go-jmespath
revision = 0.2.2-14-gbd40a432e4
[go_github.com_jwilder_encoding]
<= go-git-package
go.importpath = github.com/jwilder/encoding
repository = https://github.com/jwilder/encoding
revision = b4e1701a28
[go_github.com_kardianos_govendor]
<= go-git-package
go.importpath = github.com/kardianos/govendor
repository = https://github.com/kardianos/govendor
revision = 274337c49c
[go_github.com_kardianos_osext]
<= go-git-package
go.importpath = github.com/kardianos/osext
repository = https://github.com/kardianos/osext
revision = c2c54e542f
[go_github.com_kardianos_service]
<= go-git-package
go.importpath = github.com/kardianos/service
repository = https://github.com/kardianos/service
revision = 6d3a0ee7d3
[go_github.com_kballard_go-shellquote]
<= go-git-package
go.importpath = github.com/kballard/go-shellquote
repository = https://github.com/kballard/go-shellquote
revision = d8ec1a69a2
[go_github.com_matttproud_golang_protobuf_extensions]
<= go-git-package
go.importpath = github.com/matttproud/golang_protobuf_extensions
repository = https://github.com/matttproud/golang_protobuf_extensions
revision = v1.0.0-2-gc12348ce28
[go_github.com_miekg_dns]
<= go-git-package
go.importpath = github.com/miekg/dns
repository = https://github.com/miekg/dns
revision = 99f84ae56e
[go_github.com_mitchellh_mapstructure]
<= go-git-package
go.importpath = github.com/mitchellh/mapstructure
repository = https://github.com/mitchellh/mapstructure
revision = d0303fe809
[go_github.com_multiplay_go-ts3]
<= go-git-package
go.importpath = github.com/multiplay/go-ts3
repository = https://github.com/multiplay/go-ts3
revision = 07477f49b8
[go_github.com_naoina_go-stringutil]
<= go-git-package
go.importpath = github.com/naoina/go-stringutil
repository = https://github.com/naoina/go-stringutil
revision = 6b638e95a3
[go_github.com_nats-io_go-nats]
<= go-git-package
go.importpath = github.com/nats-io/go-nats
repository = https://github.com/nats-io/go-nats
revision = v1.2.0-73-gea9585611a
[go_github.com_nats-io_nats]
<= go-git-package
go.importpath = github.com/nats-io/nats
repository = https://github.com/nats-io/nats
revision = v1.2.0-73-gea9585611a
[go_github.com_nats-io_nuid]
<= go-git-package
go.importpath = github.com/nats-io/nuid
repository = https://github.com/nats-io/nuid
revision = 289cccf02c
[go_github.com_nsqio_go-nsq]
<= go-git-package
go.importpath = github.com/nsqio/go-nsq
repository = https://github.com/nsqio/go-nsq
revision = v1.0.7-0-geee57a3ac4
[go_github.com_opencontainers_runc]
<= go-git-package
go.importpath = github.com/opencontainers/runc
repository = https://github.com/opencontainers/runc
revision = v0.0.5-426-g89ab7f2ccc
[go_github.com_opentracing-contrib_go-observer]
<= go-git-package
go.importpath = github.com/opentracing-contrib/go-observer
repository = https://github.com/opentracing-contrib/go-observer
revision = a52f234244
[go_github.com_opentracing_opentracing-go]
<= go-git-package
go.importpath = github.com/opentracing/opentracing-go
repository = https://github.com/opentracing/opentracing-go
revision = 1361b9cd60
[go_github.com_openzipkin_zipkin-go-opentracing]
<= go-git-package
go.importpath = github.com/openzipkin/zipkin-go-opentracing
repository = https://github.com/openzipkin/zipkin-go-opentracing
revision = 1cafbdfde9
[go_github.com_peterh_liner]
<= go-git-package
go.importpath = github.com/peterh/liner
repository = https://github.com/peterh/liner
revision = 3681c2a912
[go_github.com_philhofer_fwd]
<= go-git-package
go.importpath = github.com/philhofer/fwd
repository = https://github.com/philhofer/fwd
revision = bb6d471dc9
[go_github.com_pierrec_lz4]
<= go-git-package
go.importpath = github.com/pierrec/lz4
repository = https://github.com/pierrec/lz4
revision = v1.0-0-g5c9560bfa9
[go_github.com_pierrec_xxHash]
<= go-git-package
go.importpath = github.com/pierrec/xxHash
repository = https://github.com/pierrec/xxHash
revision = 5a004441f8
[go_github.com_pkg_errors]
<= go-git-package
go.importpath = github.com/pkg/errors
repository = https://github.com/pkg/errors
revision = v0.8.0-0-g645ef00459
[go_github.com_pmezard_go-difflib]
<= go-git-package
go.importpath = github.com/pmezard/go-difflib
repository = https://github.com/pmezard/go-difflib
revision = v1.0.0-0-g792786c740
[go_github.com_prometheus_client_golang]
<= go-git-package
go.importpath = github.com/prometheus/client_golang
repository = https://github.com/prometheus/client_golang
revision = v0.9.0-pre1-21-g180b8fdc22
[go_github.com_prometheus_client_model]
<= go-git-package
go.importpath = github.com/prometheus/client_model
repository = https://github.com/prometheus/client_model
revision = model-0.0.2-16-g99fa1f4be8
[go_github.com_prometheus_common]
<= go-git-package
go.importpath = github.com/prometheus/common
repository = https://github.com/prometheus/common
revision = 89604d1970
[go_github.com_prometheus_procfs]
<= go-git-package
go.importpath = github.com/prometheus/procfs
repository = https://github.com/prometheus/procfs
revision = b15cd069a8
[go_github.com_rcrowley_go-metrics]
<= go-git-package
go.importpath = github.com/rcrowley/go-metrics
repository = https://github.com/rcrowley/go-metrics
revision = 1f30fe9094
[go_github.com_retailnext_hllpp]
<= go-git-package
go.importpath = github.com/retailnext/hllpp
repository = https://github.com/retailnext/hllpp
revision = v1.0.0-2-g6e8b6d3944
[go_github.com_samuel_go-zookeeper]
<= go-git-package
go.importpath = github.com/samuel/go-zookeeper
repository = https://github.com/samuel/go-zookeeper
revision = 1d7be4effb
[go_github.com_satori_go.uuid]
<= go-git-package
go.importpath = github.com/satori/go.uuid
repository = https://github.com/satori/go.uuid
revision = v1.1.0-8-g5bf94b69c6
[go_github.com_shirou_gopsutil]
<= go-git-package
go.importpath = github.com/shirou/gopsutil
repository = https://github.com/shirou/gopsutil
revision = v2.16.10-243-g384a55110a
[go_github.com_shirou_w32]
<= go-git-package
go.importpath = github.com/shirou/w32
repository = https://github.com/shirou/w32
revision = 3c9377fc67
[go_github.com_soniah_gosnmp]
<= go-git-package
go.importpath = github.com/soniah/gosnmp
repository = https://github.com/soniah/gosnmp
revision = v1.0-204-g5ad50dc75a
[go_github.com_sparrc_gdm]
<= go-git-package
go.importpath = github.com/sparrc/gdm
repository = https://github.com/sparrc/gdm
revision = 1.4-4-g81089dabfa
[go_github.com_streadway_amqp]
<= go-git-package
go.importpath = github.com/streadway/amqp
repository = https://github.com/streadway/amqp
revision = 63795daa9a
[go_github.com_stretchr_objx]
<= go-git-package
go.importpath = github.com/stretchr/objx
repository = https://github.com/stretchr/objx
revision = 1a9d0bb9f5
[go_github.com_stretchr_testify]
<= go-git-package
go.importpath = github.com/stretchr/testify
repository = https://github.com/stretchr/testify
revision = v1.0-187-g4d4bfba8f1
[go_github.com_tidwall_gjson]
<= go-git-package
go.importpath = github.com/tidwall/gjson
repository = https://github.com/tidwall/gjson
revision = 0623bd8fbd
[go_github.com_tidwall_match]
<= go-git-package
go.importpath = github.com/tidwall/match
repository = https://github.com/tidwall/match
revision = 173748da73
[go_github.com_tinylib_msgp]
<= go-git-package
go.importpath = github.com/tinylib/msgp
repository = https://github.com/tinylib/msgp
revision = 428e467e72
[go_github.com_vjeantet_grok]
<= go-git-package
go.importpath = github.com/vjeantet/grok
repository = https://github.com/vjeantet/grok
revision = d73e972b60
[go_github.com_wvanbergen_kafka]
<= go-git-package
go.importpath = github.com/wvanbergen/kafka
repository = https://github.com/wvanbergen/kafka
revision = bc265fedb9
[go_github.com_wvanbergen_kazoo-go]
<= go-git-package
go.importpath = github.com/wvanbergen/kazoo-go
repository = https://github.com/wvanbergen/kazoo-go
revision = 9689573521
[go_github.com_xlab_treeprint]
<= go-git-package
go.importpath = github.com/xlab/treeprint
repository = https://github.com/xlab/treeprint
revision = 06dfc6fa17
[go_github.com_yuin_gopher-lua]
<= go-git-package
go.importpath = github.com/yuin/gopher-lua
repository = https://github.com/yuin/gopher-lua
revision = 66c871e454
[go_github.com_zensqlmonitor_go-mssqldb]
<= go-git-package
go.importpath = github.com/zensqlmonitor/go-mssqldb
repository = https://github.com/zensqlmonitor/go-mssqldb
revision = ffe5510c6f
[go_go.uber.org_atomic]
<= go-git-package
go.importpath = go.uber.org/atomic
repository = https://github.com/uber-go/atomic
revision = v1.3.1-0-g8474b86a5a
[go_go.uber.org_multierr]
<= go-git-package
go.importpath = go.uber.org/multierr
repository = https://github.com/uber-go/multierr
revision = v1.1.0-1-gfb7d312c2c
[go_go.uber.org_zap]
<= go-git-package
go.importpath = go.uber.org/zap
repository = https://github.com/uber-go/zap
revision = v1.7.1-11-gf85c78b1dd
[go_golang.org_x_crypto]
<= go-git-package
go.importpath = golang.org/x/crypto
repository = https://go.googlesource.com/crypto
revision = b3c9a1d25c
[go_golang.org_x_net]
<= go-git-package
go.importpath = golang.org/x/net
repository = https://go.googlesource.com/net
revision = ab555f366c
[go_golang.org_x_sys]
<= go-git-package
go.importpath = golang.org/x/sys
repository = https://go.googlesource.com/sys
revision = 810d700034
[go_golang.org_x_text]
<= go-git-package
go.importpath = golang.org/x/text
repository = https://go.googlesource.com/text
revision = e19ae14969
[go_golang.org_x_time]
<= go-git-package
go.importpath = golang.org/x/time
repository = https://go.googlesource.com/time
revision = 6dc17368e0
[go_golang.org_x_tools]
<= go-git-package
go.importpath = golang.org/x/tools
repository = https://go.googlesource.com/tools
revision = fbec762f83
[go_gopkg.in_asn1-ber.v1]
<= go-git-package
go.importpath = gopkg.in/asn1-ber.v1
repository = https://gopkg.in/asn1-ber.v1
revision = 4e86f43671
[go_gopkg.in_fatih_pool.v2]
<= go-git-package
go.importpath = gopkg.in/fatih/pool.v2
repository = https://gopkg.in/fatih/pool.v2
revision = 6e328e6789
[go_gopkg.in_fsnotify.v1]
<= go-git-package
go.importpath = gopkg.in/fsnotify.v1
repository = https://gopkg.in/fsnotify.v1
revision = a8a77c9133
[go_gopkg.in_gorethink_gorethink.v3]
<= go-git-package
go.importpath = gopkg.in/gorethink/gorethink.v3
repository = https://gopkg.in/gorethink/gorethink.v3
revision = v2.2.1-35-g7ab832f7b6
[go_gopkg.in_ldap.v2]
<= go-git-package
go.importpath = gopkg.in/ldap.v2
repository = https://gopkg.in/ldap.v2
revision = 8168ee085e
[go_gopkg.in_mgo.v2]
<= go-git-package
go.importpath = gopkg.in/mgo.v2
repository = https://gopkg.in/mgo.v2
revision = 3f83fa5005
[go_gopkg.in_olivere_elastic.v5]
<= go-git-package
go.importpath = gopkg.in/olivere/elastic.v5
repository = https://gopkg.in/olivere/elastic.v5
revision = v5.0.41-0-g3113f9b9ad
[go_gopkg.in_tomb.v1]
<= go-git-package
go.importpath = gopkg.in/tomb.v1
repository = https://gopkg.in/tomb.v1
revision = dd632973f1
[go_gopkg.in_yaml.v2]
<= go-git-package
go.importpath = gopkg.in/yaml.v2
repository = https://gopkg.in/yaml.v2
revision = 4c78c975fe
##################### Grafana Configuration Defaults #####################
#
# Do not modify this file in grafana installs
#
# possible values : production, development
app_mode = production
# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
instance_name = ${HOSTNAME}
#################################### Paths ###############################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
#data = data
data = {{ grafana['data-dir'] }}
# Directory where grafana can store logs
#logs = data/log
logs = {{ grafana['logs-dir'] }}
# Directory where grafana will automatically scan and look for plugins
#plugins = data/plugins
plugins = {{ grafana['plugins-dir'] }}
# folder that contains provisioning config files that grafana will apply on startup and while running.
#provisioning = conf/provisioning
provisioning = {{ grafana['provisioning-config-dir'] }}
#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
# The ip address to bind to, empty will bind to all interfaces
#http_addr =
http_addr = [{{ grafana['ipv6'] }}]
# The http port to use
#http_port = 3000
http_port = {{ grafana['port'] }}
# The public facing domain name used to access grafana from a browser
domain = {{ apache_frontend['connection-domain'] }}
# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
enforce_domain = false
# The full public facing url
root_url = {{ apache_frontend['connection-secure_access'] }}
# Log web requests
router_logging = false
# the path relative working path
static_root_path = public
# enable gzip
#enable_gzip = false
enable_gzip = true
# https certs & key file
#cert_file =
cert_file = {{ grafana['ssl-cert-file'] }}
#cert_key =
cert_key = {{ grafana['ssl-key-file'] }}
# Unix socket path
#socket = /tmp/grafana.sock
#################################### Database ############################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =
# Max idle conn setting default is 2
max_idle_conn = 2
# Max conn setting default is 0 (0 means not set)
max_open_conn =
# Set to true to log the sql calls and execution times.
log_queries =
# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable
ca_cert_path =
client_key_path =
client_cert_path =
server_cert_name =
# For "sqlite3" only, path relative to data_path setting
path = grafana.db
#################################### Session #############################
[session]
# Either "memory", "file", "redis", "mysql", "postgres", "memcache", default is "file"
provider = file
# Provider config options
# memory: does not have any config yet
# file: session dir path, is relative to grafana data_path
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`
# postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable
# mysql: go-sql-driver/mysql dsn config string, examples:
# `user:password@tcp(127.0.0.1:3306)/database_name`
# `user:password@unix(/var/run/mysqld/mysqld.sock)/database_name`
# memcache: 127.0.0.1:11211
provider_config = sessions
# Session cookie name
cookie_name = grafana_sess
# If you use session in https only, default is false
#cookie_secure = false
cookie_secure = true
# Session life time, default is 86400
session_life_time = 86400
gc_interval_time = 86400
#################################### Data proxy ###########################
[dataproxy]
# This enables data proxy logging, default is false
logging = false
#################################### Analytics ###########################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
reporting_enabled = true
# Set to false to disable all checks to https://grafana.com
# for new versions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to https://grafana.com to get latest versions
check_for_updates = true
# Google Analytics universal tracking code, only enabled if you specify an id here
google_analytics_ua_id =
# Google Tag Manager ID, only enabled if you specify an id here
google_tag_manager_id =
#################################### Security ############################
[security]
# default admin user, created on startup
#admin_user = "admin"
admin_user = "{{ grafana['admin-user'] }}"
# default admin password, can be changed before first start of grafana, or in profile settings
#admin_password = admin
admin_password = "{{ grafana['admin-password'] }}"
# used for signing
#secret_key = SW2YcwTIb9zpOOhoPsMm
secret_key = "{{ grafana['secret-key'] }}"
# Auto-login remember days
login_remember_days = 7
cookie_username = grafana_user
cookie_remember_name = grafana_remember
# disable gravatar profile images
disable_gravatar = false
# data source proxy whitelist (ip_or_domain:port separated by spaces)
data_source_proxy_whitelist =
#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
external_enabled = true
external_snapshot_url = https://snapshots-origin.raintank.io
external_snapshot_name = Publish to snapshot.raintank.io
# remove expired snapshot
snapshot_remove_expired = true
# remove snapshots after 90 days
snapshot_TTL_days = 90
#################################### Dashboards ##################
[dashboards]
# Number dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
versions_to_keep = 20
#################################### Users ###############################
[users]
# disable user signup / registration
allow_sign_up = false
# Allow non admin users to create organizations
allow_org_create = false
# Set to true to automatically assign new users to the default organization (id 1)
auto_assign_org = true
# Default role new users will be automatically assigned (if auto_assign_org above is set to true)
auto_assign_org_role = Viewer
# Require email validation before sign up completes
verify_email_enabled = false
# Background text for the user field on the login page
login_hint = email or username
# Default UI theme ("dark" or "light")
default_theme = dark
# External user management
external_manage_link_url =
external_manage_link_name =
external_manage_info =
# Viewers can edit/inspect dashboard settings in the browser, but not save the dashboard.
viewers_can_edit = false
[auth]
# Set to true to disable (hide) the login form, useful if you use OAuth
disable_login_form = false
# Set to true to disable the signout link in the side menu. useful if you use auth.proxy
disable_signout_menu = false
#################################### Anonymous Auth ######################
[auth.anonymous]
# enable anonymous access
enabled = false
# specify organization name that should be used for unauthenticated users
org_name = Main Org.
# specify role for unauthenticated users
org_role = Viewer
#################################### Github Auth #########################
[auth.github]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
auth_url = https://github.com/login/oauth/authorize
token_url = https://github.com/login/oauth/access_token
api_url = https://api.github.com/user
team_ids =
allowed_organizations =
#################################### Google Auth #########################
[auth.google]
enabled = false
allow_sign_up = true
client_id = some_client_id
client_secret = some_client_secret
scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
auth_url = https://accounts.google.com/o/oauth2/auth
token_url = https://accounts.google.com/o/oauth2/token
api_url = https://www.googleapis.com/oauth2/v1/userinfo
allowed_domains =
hosted_domain =
#################################### Grafana.com Auth ####################
# legacy key names (so they work in env variables)
[auth.grafananet]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
allowed_organizations =
[auth.grafana_com]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
allowed_organizations =
#################################### Generic OAuth #######################
[auth.generic_oauth]
name = OAuth
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
auth_url =
token_url =
api_url =
team_ids =
allowed_organizations =
#################################### Basic Auth ##########################
[auth.basic]
enabled = true
#################################### Auth Proxy ##########################
[auth.proxy]
enabled = false
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
ldap_sync_ttl = 60
whitelist =
#################################### Auth LDAP ###########################
[auth.ldap]
enabled = false
config_file = /etc/grafana/ldap.toml
allow_sign_up = true
#################################### SMTP / Emailing #####################
[smtp]
#enabled = false
enabled = {{ slapparameter_dict.get('smtp-server') and 'true' or 'false' }}
#host = localhost:25
host = {{ slapparameter_dict.get('smtp-server', '') }}
#user =
user = {{ slapparameter_dict.get('smtp-username', '') }}
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
#password =
password = {{ slapparameter_dict.get('smtp-password', '') and '"""%s"""' % slapparameter_dict['smtp-password'] or ""}}
cert_file =
key_file =
#skip_verify = false
skip_verify = {{ slapparameter_dict.get('smtp-verify-ssl', True) and 'false' or 'true' }}
#from_address = admin@grafana.localhost
from_address = {{ slapparameter_dict.get('email-from-address', '') }}
#from_name = Grafana
from_name = {{ slapparameter_dict.get('email-from-name', 'Grafana') }}
ehlo_identity =
[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html
#################################### Logging ##########################
[log]
# Either "console", "file", "syslog". Default is console and file
# Use space to separate multiple modes, e.g. "console file"
mode = console file
# Either "debug", "info", "warn", "error", "critical", default is "info"
level = info
# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
filters =
# For "console" mode only
[log.console]
level =
# log line format, valid options are text, console and json
format = console
# For "file" mode only
[log.file]
level =
# log line format, valid options are text, console and json
format = text
# This enables automated log rotation (a switch for the following options), default is true
log_rotate = true
# Max line number of single file, default is 1000000
max_lines = 1000000
# Max size shift of single file, default is 28 means 1 << 28, 256MB
max_size_shift = 28
# Segment log daily, default is true
daily_rotate = true
# Days to keep a log file (deleted after max_days), default is 7
max_days = 7
[log.syslog]
level =
# log line format, valid options are text, console and json
format = text
# Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.
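# example (illustrative): network = udp with address = localhost:514 sends
# log lines to a local syslog daemon over UDP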
network =
address =
# Syslog facility. user, daemon and local0 through local7 are valid.
facility =
# Syslog tag. By default, the process' argv[0] is used.
tag =
#################################### Usage Quotas ########################
[quota]
enabled = false
#### set quotas to -1 to make unlimited. ####
# limit number of users per Org.
org_user = 10
# limit number of dashboards per Org.
org_dashboard = 100
# limit number of data_sources per Org.
org_data_source = 10
# limit number of api_keys per Org.
org_api_key = 10
# limit number of orgs a user can create.
user_org = 10
# Global limit of users.
global_user = -1
# global limit of orgs.
global_org = -1
# global limit of dashboards
global_dashboard = -1
# global limit of api_keys
global_api_key = -1
# global limit on number of logged in users.
global_session = -1
#################################### Alerting ############################
[alerting]
# Disable alerting engine & UI features
enabled = true
# Makes it possible to turn off alert rule execution while the alerting UI stays visible
execute_alerts = true
#################################### Internal Grafana Metrics ############
# Metrics available at HTTP API Url /metrics
[metrics]
enabled = true
interval_seconds = 10
# Send internal Grafana metrics to graphite
[metrics.graphite]
# Enable by setting the address setting (ex localhost:2003)
address =
prefix = prod.grafana.%(instance_name)s.
[grafana_net]
url = https://grafana.com
[grafana_com]
url = https://grafana.com
#################################### Distributed tracing ############
[tracing.jaeger]
# jaeger destination (ex localhost:6831)
address =
# tag that will always be included when creating new spans. ex (tag1:value1,tag2:value2)
always_included_tag =
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
sampler_type = const
# jaeger samplerconfig param
# for "const" sampler, 0 or 1 for always false/true respectively
# for "probabilistic" sampler, a probability between 0 and 1
# for "rateLimiting" sampler, the number of spans per second
# for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
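# examples (illustrative): sampler_type = const with sampler_param = 1 traces
# every request; sampler_type = probabilistic with sampler_param = 0.1 keeps
# roughly 10% of traces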
sampler_param = 1
#################################### External Image Storage ##############
[external_image_storage]
# You can choose between (s3, webdav, gcs, azure_blob)
provider =
[external_image_storage.s3]
bucket_url =
bucket =
region =
path =
access_key =
secret_key =
[external_image_storage.webdav]
url =
username =
password =
public_url =
[external_image_storage.gcs]
key_file =
bucket =
path =
[external_image_storage.azure_blob]
account_name =
account_key =
container_name =
reporting-disabled = false
bind-address = "[{{ influxdb['local-host'] }}]:{{ influxdb['rpc-port'] }}"
[meta]
dir = "{{ influxdb['data-dir'] }}/meta"
retention-autocreate = true
logging-enabled = true
[data]
dir = "{{ influxdb['data-dir'] }}/data"
index-version = "inmem"
wal-dir = "{{ influxdb['data-dir'] }}/wal"
wal-fsync-delay = "0s"
query-log-enabled = true
cache-max-memory-size = 1073741824
cache-snapshot-memory-size = 26214400
cache-snapshot-write-cold-duration = "10m0s"
compact-full-write-cold-duration = "4h0m0s"
max-series-per-database = 1000000
max-values-per-tag = 100000
max-concurrent-compactions = 0
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[subscriber]
enabled = true
http-timeout = "30s"
insecure-skip-verify = false
ca-certs = ""
write-concurrency = 40
write-buffer-size = 1000
[http]
enabled = true
bind-address = "[{{ influxdb['host'] }}]:{{ influxdb['http-port'] }}"
auth-enabled = true
log-enabled = true
write-tracing = false
pprof-enabled = true
https-enabled = true
https-certificate = "{{ influxdb['ssl-cert-file'] }}"
https-private-key = "{{ influxdb['ssl-key-file'] }}"
max-row-limit = 0
max-connection-limit = 0
shared-secret = ""
realm = "InfluxDB"
unix-socket-enabled = true
bind-socket = "{{ influxdb['unix-socket'] }}"
max-body-size = 25000000
[ifql]
enabled = false
log-enabled = true
bind-address = ":8082"
[[graphite]]
enabled = false
bind-address = ":2003"
database = "graphite"
retention-policy = ""
protocol = "tcp"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
consistency-level = "one"
separator = "."
udp-read-buffer = 0
[[collectd]]
enabled = false
bind-address = ":25826"
database = "collectd"
retention-policy = ""
batch-size = 5000
batch-pending = 10
batch-timeout = "10s"
read-buffer = 0
typesdb = "/usr/share/collectd/types.db"
security-level = "none"
auth-file = "/etc/collectd/auth_file"
parse-multivalue-plugin = "split"
[[opentsdb]]
enabled = false
bind-address = ":4242"
database = "opentsdb"
retention-policy = ""
consistency-level = "one"
tls-enabled = false
certificate = "/etc/ssl/influxdb.pem"
batch-size = 1000
batch-pending = 5
batch-timeout = "1s"
log-point-errors = true
[[udp]]
enabled = false
bind-address = ":8089"
database = "udp"
retention-policy = ""
batch-size = 5000
batch-pending = 10
read-buffer = 0
batch-timeout = "1s"
precision = ""
[continuous_queries]
log-enabled = true
enabled = true
query-stats-enabled = false
run-interval = "1s"
{
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "Parameters to instantiate Grafana",
"additionalProperties": false,
"properties": {
"smtp-server": {
"description": "SMTP server used by grafana to send emails (in host:port format). Leaving this empty will disable email sending.",
"type": "string"
},
"smtp-username": {
"description": "Username to connect to SMTP server",
"type": "string"
},
"smtp-password": {
"description": "Password to connect to SMTP server",
"type": "string"
},
"smtp-verify-ssl": {
"description": "Verify SSL certificate of SMTP server",
"type": "boolean",
"default": true
},
"email-from-address": {
"description": "Email address used in From: header of emails",
"type": "string"
},
"email-from-name": {
"description": "Name used in From: header of emails",
"default": "Grafana",
"type": "string"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "Values returned by Grafana instantiation",
"additionalProperties": false,
"properties": {
"url": {
"description": "Shared frontend for this Grafana instance",
"pattern": "^https://",
"type": "string"
},
"grafana-username": {
"description": "Admin user for grafana",
"type": "string"
},
"grafana-password": {
"description": "Password for grafana's admin user",
"type": "string"
},
"grafana-url": {
"description": "IPv6 URL to access grafana",
"pattern": "^https://",
"type": "string"
},
"influxdb-url": {
"description": "IPv6 URL of influxdb HTTP endpoint",
"pattern": "^https://",
"type": "string"
},
"influxdb-database": {
"description": "database created in influxdb",
"type": "string"
},
"influxdb-username": {
"description": "username for influxdb",
"type": "string"
},
"influxdb-password": {
"description": "password for influxdb user",
"type": "string"
},
"telegraf-extra-config-dir": {
"description": "Directory in telegraf partition where extra configuration file will be loaded. These files must match *.conf pattern",
"type": "string"
}
},
"type": "object"
}
[buildout]
parts =
promises
publish-connection-parameter
eggs-directory = {{ buildout['eggs-directory'] }}
develop-eggs-directory = {{ buildout['develop-eggs-directory'] }}
offline = true
[instance-parameter]
recipe = slapos.cookbook:slapconfiguration
computer = ${slap-connection:computer-id}
partition = ${slap-connection:partition-id}
url = ${slap-connection:server-url}
key = ${slap-connection:key-file}
cert = ${slap-connection:cert-file}
[slap-configuration]
# apache-frontend reads from a part named [slap-configuration]
recipe = slapos.cookbook:slapconfiguration.serialised
computer = ${slap-connection:computer-id}
partition = ${slap-connection:partition-id}
url = ${slap-connection:server-url}
key = ${slap-connection:key-file}
cert = ${slap-connection:cert-file}
[directory]
recipe = slapos.cookbook:mkdirectory
home = ${buildout:directory}
etc = ${:home}/etc
var = ${:home}/var
srv = ${:home}/srv
service = ${:etc}/service
promise = ${:etc}/promise
influxdb-data-dir = ${:srv}/influxdb
grafana-dir = ${:srv}/grafana
grafana-data-dir = ${:grafana-dir}/data
grafana-logs-dir = ${:var}/log
grafana-plugins-dir = ${:grafana-dir}/plugins
grafana-provisioning-config-dir = ${:grafana-dir}/provisioning-config
grafana-provisioning-datasources = ${:grafana-provisioning-config-dir}/datasources
grafana-provisioning-dashboards = ${:grafana-provisioning-config-dir}/dashboards
telegraf-dir = ${:srv}/telegraf
telegraf-extra-config-dir = ${:telegraf-dir}/extra-config
# macros
[generate-certificate]
recipe = plone.recipe.command
command =
if [ ! -e ${:key-file} ]
then
{{ openssl_bin }} req -x509 -nodes -days 3650 \
-subj "/C=AA/ST=X/L=X/O=Dis/CN=${:common-name}" \
-newkey rsa:1024 -keyout ${:key-file} \
-out ${:cert-file}
fi
update-command = ${:command}
key-file = ${directory:etc}/${:_buildout_section_name_}.key
cert-file = ${directory:etc}/${:_buildout_section_name_}.crt
common-name = ${:_buildout_section_name_}
[config-file]
recipe = slapos.recipe.template:jinja2
template = {{ buildout['parts-directory'] }}/${:_buildout_section_name_}/${:_buildout_section_name_}.cfg.in
rendered = ${directory:etc}/${:_buildout_section_name_}.cfg
mode = 0644
extensions = jinja2.ext.do
[check-port-listening-promise]
recipe = slapos.cookbook:check_port_listening
path = ${directory:promise}/${:_buildout_section_name_}
[influxdb]
ipv6 = ${instance-parameter:ipv6-random}
ipv4 = ${instance-parameter:ipv4-random}
host = ${:ipv6}
local-host = ${:ipv4}
rpc-port = 8088
http-port = 8086
url = https://[${:host}]:${:http-port}
data-dir = ${directory:influxdb-data-dir}
auth-username = ${influxdb-password:username}
auth-password = ${influxdb-password:passwd}
unix-socket = ${directory:var}/influxdb.socket
ssl-cert-file = ${influxdb-certificate:cert-file}
ssl-key-file = ${influxdb-certificate:key-file}
database = telegraf
recipe = slapos.cookbook:wrapper
command-line =
nice -19 chrt --idle 0 ionice -c3 {{ influxd_bin }} -config ${influxdb-config-file:rendered}
wrapper-path = ${directory:service}/influxdb
[influxdb-config-file]
<= config-file
context =
section influxdb influxdb
[influxdb-password]
recipe = slapos.cookbook:generate.password
username = influxdb
[influxdb-certificate]
<= generate-certificate
[influxdb-listen-promise]
<= check-port-listening-promise
hostname = ${influxdb:ipv6}
port = ${influxdb:http-port}
[influxdb-password-promise]
recipe = slapos.cookbook:wrapper
command-line =
{{ influx_bin }} -username ${influxdb:auth-username} -password ${influxdb:auth-password} -socket ${influxdb:unix-socket} -execute "CREATE USER ${influxdb:auth-username} WITH PASSWORD '${influxdb:auth-password}' WITH ALL PRIVILEGES"
wrapper-path = ${directory:promise}/${:_buildout_section_name_}
[grafana]
ipv6 = ${instance-parameter:ipv6-random}
port = 8080
url = https://[${:ipv6}]:${:port}
data-dir = ${directory:grafana-data-dir}
logs-dir = ${directory:grafana-logs-dir}
plugins-dir = ${directory:grafana-plugins-dir}
provisioning-config-dir = ${directory:grafana-provisioning-config-dir}
admin-user = ${grafana-password:username}
admin-password = ${grafana-password:passwd}
secret-key = ${grafana-secret-key:passwd}
ssl-key-file = ${grafana-certificate:key-file}
ssl-cert-file = ${grafana-certificate:cert-file}
recipe = slapos.cookbook:wrapper
command-line =
{{ grafana_bin }} -config ${grafana-config-file:rendered} -homepath {{ grafana_homepath }}
wrapper-path = ${directory:service}/grafana
[grafana-certificate]
<= generate-certificate
[grafana-password]
recipe = slapos.cookbook:generate.password
username = admin
[grafana-secret-key]
recipe = slapos.cookbook:generate.password
[grafana-config-file]
<= config-file
context =
section grafana grafana
section apache_frontend apache-frontend
key slapparameter_dict slap-configuration:configuration
[grafana-listen-promise]
<= check-port-listening-promise
hostname= ${grafana:ipv6}
port = ${grafana:port}
[telegraf]
recipe = slapos.cookbook:wrapper
extra-config-dir = ${directory:telegraf-extra-config-dir}
# telegraf needs influxdb to be already listening before starting
command-line =
bash -c '${influxdb-listen-promise:path} && nice -19 chrt --idle 0 ionice -c3 {{ telegraf_bin }} --config ${telegraf-config-file:rendered} --config-directory ${:extra-config-dir}'
wrapper-path = ${directory:service}/telegraf
[telegraf-config-file]
<= config-file
context =
section influxdb influxdb
section telegraf telegraf
[apache-frontend]
<= slap-connection
recipe = slapos.cookbook:requestoptional
name = Grafana Frontend
# XXX We have hardcoded SR URL here.
software-url = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg
slave = true
config-url = ${grafana:url}
config-https-only = true
return = domain secure_access
[promises]
recipe =
instance-promises =
${influxdb-listen-promise:path}
${influxdb-password-promise:wrapper-path}
${grafana-listen-promise:path}
[publish-connection-parameter]
recipe = slapos.cookbook:publish
influxdb-url = ${influxdb:url}
influxdb-database = ${influxdb:database}
influxdb-username = ${influxdb:auth-username}
influxdb-password = ${influxdb:auth-password}
telegraf-extra-config-dir = ${telegraf:extra-config-dir}
grafana-url = ${grafana:url}
grafana-username = ${grafana:admin-user}
grafana-password = ${grafana:admin-password}
url = ${apache-frontend:connection-secure_access}
[buildout]
extends =
../../stack/slapos.cfg
../../stack/nodejs.cfg
../../component/make/buildout.cfg
../../component/golang/buildout.cfg
../../component/openssl/buildout.cfg
buildout.hash.cfg
gowork.cfg
versions = versions
parts =
slapos-cookbook
instance-profile
gowork
influxdb-config-file
telegraf-config-file
grafana-config-file
[nodejs]
<= nodejs-8.6.0
[yarn]
# this could become a component, but it needs to be invoked from nodejs explicitly,
# otherwise it uses the system's nodejs
recipe = slapos.recipe.build:download-unpacked
url = https://github.com/yarnpkg/yarn/releases/download/v1.3.2/yarn-v1.3.2.tar.gz
md5sum = db82fa09c996e9318f2f1d2ab99228f9
[gowork]
# All the software installed in the go workspace has "non standard" installation
# methods, so we install each one in a specific part with custom commands.
# They are installed because they are dependencies of ${gowork.goinstall}
install =
telegraf-bin = ${:bin}/telegraf
influx-bin = ${:bin}/influx
influxd-bin = ${:bin}/influxd
grafana-bin = ${:bin}/grafana-server
grafana-homepath = ${go_github.com_grafana_grafana:location}
[gowork.goinstall]
command = :
depends =
${influxdb-install:recipe}
${telegraf-install:recipe}
${grafana-install:recipe}
[influxdb-install]
<= gowork.goinstall
command = bash -c ". ${gowork:env.sh} && \
cd ${gowork:directory}/src/github.com/influxdata/influxdb && \
go install ./cmd/..."
update-command =
[telegraf-install]
<= gowork.goinstall
command = bash -c ". ${gowork:env.sh} && \
cd ${gowork:directory}/src/github.com/influxdata/telegraf && \
${make:location}/bin/make &&
cp telegraf ${gowork:bin}"
update-command =
[grafana-install]
<= gowork.goinstall
# yarn and go run build.go need our nodejs in $PATH
command = bash -c "export PATH=${nodejs:location}/bin/:$PATH && \
. ${gowork:env.sh} && \
cd ${gowork:directory}/src/github.com/grafana/grafana && \
${gowork:golang}/bin/go run build.go setup && \
${gowork:golang}/bin/go run build.go build && \
${yarn:location}/bin/yarn install --pure-lockfile && \
${nodejs:location}/bin/npm run build"
update-command =
[download-file-base]
recipe = slapos.recipe.build:download
url = ${:_profile_base_location_}/${:filename}
download-only = true
mode = 0644
[influxdb-config-file]
<= download-file-base
[telegraf-config-file]
<= download-file-base
[grafana-config-file]
<= download-file-base
[instance-profile]
recipe = slapos.recipe.template:jinja2
template = ${:_profile_base_location_}/${:filename}
rendered = ${buildout:directory}/instance.cfg
mode = 0644
extensions = jinja2.ext.do
context =
section buildout buildout
key openssl_bin openssl-output:openssl
key telegraf_bin gowork:telegraf-bin
key influxd_bin gowork:influxd-bin
key influx_bin gowork:influx-bin
key grafana_bin gowork:grafana-bin
key grafana_homepath gowork:grafana-homepath
[versions]
slapos.recipe.template = 4.2
inotifyx = 0.2.2
{
"name": "Grafana",
"description": "Grafana, Telegraf and Influxdb",
"serialisation": "json-in-xml",
"software-type": {
"default": {
"title": "Default",
"description": "Grafana, Telegraf and Influxdb in same partition",
"request": "instance-input-schema.json",
"response": "instance-output-schema.json",
"index": 0
}
}
}
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins.
# Even if a plugin has no configuration, it must be declared here
# to be active. Declaring a plugin means just specifying the name
# as a section with no variables. To deactivate a plugin, comment
# out the name and any variables.
# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
# file would generate.
# One rule that plugins conform to is that wherever a connection string
# can be passed, the values '' and 'localhost' are treated specially.
# They tell the plugin to use its own builtin configuration to
# connect to the local system.
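# For example (illustrative), in a plugin section a connection string like
# servers = ["localhost"] tells the plugin to use its builtin defaults to
# reach the local service.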
# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.
# Tags can also be specified via a normal map, but only one form at a time:
[tags]
# dc = "us-east-1"
# Configuration for telegraf agent
[agent]
# Default data collection interval for all plugins
interval = "10s"
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
# Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
# Jitter the flush interval by a random amount. This is primarily to avoid
# large write spikes for users running a large number of telegraf instances.
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
# Run telegraf in debug mode
debug = false
# Override default hostname, if empty use os.Hostname()
hostname = ""
###############################################################################
# OUTPUTS #
###############################################################################
[outputs]
# Configuration for influxdb server to send metrics to
[outputs.influxdb]
# The full HTTP or UDP endpoint URL for your InfluxDB instance
# Multiple urls can be specified for InfluxDB cluster support.
# urls = ["udp://localhost:8089"] # UDP endpoint example
# XXX XXX XXX
#urls = ["http://localhost:8086"] # required
urls = ["{{ influxdb['url'] }}"]
insecure_skip_verify = true # because we are using a self signed certificate
# The target database for metrics (telegraf will create it if it does not exist)
database = "{{ influxdb['database'] }}" # required
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
username = "{{ influxdb['auth-username'] }}"
password = "{{ influxdb['auth-password'] }}"
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# udp_payload = 512
###############################################################################
# PLUGINS #
###############################################################################
# Read metrics about cpu usage
[cpu]
# Whether to report per-cpu stats or not
percpu = true
# Whether to report total system cpu stats or not
totalcpu = true
# Comment this line if you want the raw CPU time metrics
drop = ["cpu_time"]
# Read metrics about memory usage
[mem]
# no configuration
[disk]
[io]
[system]
###############################################################################
# ERP5 - PLUGINS #
###############################################################################
#
# Left here as example, don't edit this file directly, but place your config
# files in {{ telegraf['extra-config-dir'] }}
#
#[mysql]
# servers = ["root@unix(/srv/slapgrid/slappart12/srv/runner/instance/slappart1/var/run/mariadb.sock)/erp5"]
#[memcached]
# # XXX kumofs does not support memcached's stat command
# servers = ["10.0.248.233:2013", "10.0.248.233:2003"]
#[haproxy]
# servers = ["http://10.0.121.162:2150/haproxy", "http://10.0.121.162:2152/haproxy"]
#[[inputs.exec]]
# commands = ["/srv/slapgrid/slappart0/bin/slapsensor /srv/slapgrid/slappart0/srv/runner/instance/etc/supervisord.conf"]
# name_suffix = "_slapos"
# interval = "5s"
###############################################################################
# SERVICE PLUGINS #
###############################################################################
###############################
# Instantiate vnu
###############################
[basedirectory]
recipe = slapos.cookbook:mkdirectory
etc = $${buildout:directory}/etc
bin = $${buildout:directory}/bin
srv = $${buildout:directory}/srv
var = $${buildout:directory}/var
run = $${:var}/run
log = $${:var}/log
# scripts = $${:etc}/run
services = $${:etc}/service
promises = $${:etc}/promise
# tomcat directories
catalina_base = $${:var}/vnu
catalina_logs = $${:catalina_base}/logs
catalina_temp = $${:catalina_base}/temp
catalina_webapps = $${:catalina_base}/webapps
catalina_work = $${:catalina_base}/work
catalina_conf = $${:catalina_base}/conf
#################################
# Tomcat service
#################################
[keystore]
recipe = plone.recipe.command
command =
${java-re-8-output:keytool} \
-genkeypair \
-alias "tomcat" \
-keyalg RSA \
-keypass "$${:pass}" \
-dname "CN=Web Server,OU=Unit,O=Organization,L=City,S=State,C=Country" \
-keystore "$${:file}" \
-storepass "$${:pass}"
file = $${basedirectory:catalina_base}/.keystore
pass = insecure
[tomcat-service]
recipe = slapos.recipe.template
url = ${template-tomcat-service:output}
output = $${basedirectory:services}/tomcat
mode = 0700
virtual-depends =
$${tomcat-configuration:ip}
[tomcat-configuration]
recipe = slapos.recipe.template
url = ${template-tomcat-configuration:output}
output = $${basedirectory:catalina_conf}/server.xml
mode = 0600
ip = $${slap-network-information:global-ipv6}
port = 8899
scheme = https
[tomcat-listen-promise]
recipe = slapos.cookbook:check_port_listening
hostname = $${tomcat-configuration:ip}
port = $${tomcat-configuration:port}
path = $${basedirectory:promises}/tomcat_listen
#################################
# Slapos publish
#################################
[publish-url]
recipe = slapos.cookbook:publish
<= monitor-publish
vnu-url = $${tomcat-configuration:scheme}://[$${tomcat-configuration:ip}]:$${tomcat-configuration:port}/
[monitor-instance-parameter]
monitor-httpd-port = 8333
# Add parts generated by template
[buildout]
extends =
${monitor-template:rendered}
parts =
publish-url
tomcat-service
tomcat-listen-promise
monitor-base
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[buildout]
parts =
switch-softwaretype
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
[switch-softwaretype]
recipe = slapos.cookbook:softwaretype
default = $${:validator}
validator = ${template-validator:output}
[slap-connection]
# part to migrate to new - separated words
computer-id = $${slap_connection:computer_id}
partition-id = $${slap_connection:partition_id}
server-url = $${slap_connection:server_url}
software-release-url = $${slap_connection:software_release_url}
key-file = $${slap_connection:key_file}
cert-file = $${slap_connection:cert_file}
# [slap-parameter]
# slave-instance-list = []
[instance-parameter]
# Fetches parameters defined in SlapOS Master for this instance.
# Always the same.
recipe = slapos.cookbook:slapconfiguration.serialised
computer = $${slap_connection:computer_id}
partition = $${slap_connection:partition_id}
url = $${slap_connection:server_url}
key = $${slap_connection:key_file}
cert = $${slap_connection:cert_file}
<?xml version='1.0' encoding='utf-8'?>
<Server port="-1" shutdown="SHUTDOWN">
<Service name="Catalina">
<Connector
protocol="org.apache.coyote.http11.Http11Protocol"
address="$${tomcat-configuration:ip}"
port="$${tomcat-configuration:port}"
maxThreads="10"
scheme="$${tomcat-configuration:scheme}"
secure="true"
clientAuth="false"
SSLEnabled="true"
keystorePass="$${keystore:pass}"
keystoreFile="$${keystore:file}"
/>
<Engine name="Catalina" defaultHost="localhost">
<Valve className="org.apache.catalina.valves.AccessLogValve"
directory="logs" prefix="localhost_access_log." suffix=".log"
pattern="common" resolveHosts="false"/>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true"
xmlValidation="false" xmlNamespaceAware="false">
<Context path="/" docBase="${vnu-output:war}"
privileged="true">
</Context>
</Host>
</Engine>
</Service>
</Server>
[buildout]
extends =
../../component/dash/buildout.cfg
../../component/grep/buildout.cfg
../../component/findutils/buildout.cfg
../../component/java/buildout.cfg
../../component/tomcat/buildout.cfg
../../component/vnu/buildout.cfg
../../stack/slapos.cfg
# Monitoring stack (keep on bottom)
../../stack/monitor/buildout.cfg
parts =
slapos-cookbook
template
##########################################################
# Service startup scripts and configuration files
##########################################################
[template-tomcat-configuration]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/server.xml.in
md5sum = 9978b8b9e567f33cb4c853fee85f1637
output = ${buildout:directory}/server.xml.in
mode = 0644
[template-tomcat-service]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/template-tomcat-service.sh.in
md5sum = 09803fb71404edbccb32c44a0040dae4
output = ${buildout:directory}/template-tomcat-service.sh.in
mode = 0644
##########################################################
# Buildout instance.cfg templates
##########################################################
[template-validator]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance-validator.cfg.in
md5sum = 0275d7a8a021f84a1303e5c8933c07c3
output = ${buildout:directory}/template-validator.cfg
mode = 0644
[template]
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg.in
md5sum = 2b4d33e9ef1082dd4d6a53f55b391772
output = ${buildout:directory}/template.cfg
mode = 0644
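Each `[template*]` part above pins an `md5sum`, and the recipe refuses to use a downloaded profile whose digest differs from the pin. The comparison amounts to the following sketch (content and digest here are made up):

```python
import hashlib

def check_md5(content, expected):
    # The recipe compares the hex digest of the downloaded profile
    # against the md5sum option before rendering it.
    return hashlib.md5(content).hexdigest() == expected

data = b"[buildout]\nparts = template\n"
digest = hashlib.md5(data).hexdigest()
print(check_md5(data, digest))    # matching digest: accepted
print(check_md5(data, "0" * 32))  # stale pin: rejected
```

This is why every content change in the hunks below is paired with an `md5sum` bump in the corresponding hash file.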
[versions]
# 1.3.4nxd2 is an invalid version string, thus the patched version string is not
# '1.3.4nxd2+SlapOSPatched001' but '1.3.4nxd2-SlapOSPatched001'.
gunicorn = 19.1.1
plone.recipe.command = 1.1
slapos.recipe.template = 2.4.2
inotifyx = 0.2.2
apache-libcloud = 2.2.1
gitdb2 = 2.0.3
smmap2 = 2.0.3
# Required by:
# slapos.toolbox==0.73
GitPython = 2.1.8
# Required by:
# slapos.toolbox==0.73
atomize = 0.2.0
# Required by:
# slapos.toolbox==0.73
dnspython = 1.15.0
# Required by:
# slapos.toolbox==0.73
erp5.util = 0.4.50
# Required by:
# slapos.toolbox==0.73
feedparser = 5.2.1
# Required by:
# slapos.toolbox==0.73
lockfile = 0.12.2
# Required by:
# slapos.toolbox==0.73
passlib = 1.7.1
#!${dash-output:dash}
# BEWARE: This file is operated by slapgrid
# BEWARE: It will be overwritten automatically
export JRE_HOME=${java-re-8:location}
export CATALINA_BASE=$${basedirectory:catalina_base}
exec ${tomcat7-output:catalina} run
@@ -36,4 +36,4 @@ md5sum = 33547be93a67530165e079dc3ecfdac3
 [custom-js]
 filename = custom.js
-md5sum = 0bf9e2eb1718b14307265fe05a167018
+md5sum = 40d938bb09261c65421a7725b40f87dc
@@ -81,8 +81,8 @@
  * @static
  */
-$([jupyter.events]).on('notebook_loaded.Notebook', function(){
-var kernelname = jupyter.notebook.kernel_selector.current_selection;
+$([Jupyter.events]).on('notebook_loaded.Notebook', function(){
+var kernelname = Jupyter.notebook.kernel_selector.current_selection;
 var display_text="<div class='output_subarea output_text output_result'>\
 <pre>Follow these steps to customize your notebook with ERP5 kernel :-</br>\
 1. Make sure you have 'erp5_data_notebook' business template installed in your ERP5</br>\
......
[buildout]
parts =
instance
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
[instance]
recipe = ${instance-recipe:egg}:${instance-recipe:module}
kumo_gateway_binary = ${kumo:location}/bin/kumo-gateway
kumo_manager_binary = ${kumo:location}/bin/kumo-manager
kumo_server_binary = ${kumo:location}/bin/kumo-server
dcrond_binary = ${dcron:location}/sbin/crond
openssl_binary = ${openssl:location}/bin/openssl
rdiff_backup_binary = ${buildout:bin-directory}/rdiff-backup
stunnel_binary = ${stunnel:location}/bin/stunnel
[buildout]
extends =
../../component/kumo/buildout.cfg
../../component/dcron/buildout.cfg
../../component/stunnel/buildout.cfg
../../component/rdiff-backup/buildout.cfg
../../component/lxml-python/buildout.cfg
../../stack/slapos.cfg
parts =
# Create instance template
template
rdiff-backup
dcron
kumo
stunnel
eggs
instance-recipe-egg
[instance-recipe]
egg = slapos.cookbook
module = kumofs
[instance-recipe-egg]
recipe = zc.recipe.egg
eggs = ${instance-recipe:egg}
[eggs]
recipe = zc.recipe.egg
eggs =
${lxml-python:egg}
[template]
# Default template for the instance.
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg
md5sum = 056a4af7128fd9e31da42c85cc039420
output = ${buildout:directory}/template.cfg
mode = 0644
[versions]
rdiff-backup = 1.0.5+SlapOSPatched001
slapos.recipe.template = 2.4.2
@@ -99,7 +99,7 @@ recipe = hexagonit.recipe.download
 ignore-existing = true
 url = ${:_profile_base_location_}/instance-kvm.cfg.jinja2
 mode = 644
-md5sum = e3cc9ffe857da1078e321cab65173fb1
+md5sum = 68b66fb3e9020642e57f4a4ee266f2b3
 download-only = true
 on-update = true
@@ -108,7 +108,7 @@ recipe = hexagonit.recipe.download
 ignore-existing = true
 url = ${:_profile_base_location_}/instance-kvm-cluster.cfg.jinja2.in
 mode = 644
-md5sum = 26d931a0279d49eadb3881f440b623bc
+md5sum = ba3337b3678ed9d3578cc88749c5cd13
 download-only = true
 on-update = true
@@ -186,7 +186,7 @@ ignore-existing = true
 url = ${:_profile_base_location_}/template/template-kvm-run.in
 mode = 644
 filename = template-kvm-run.in
-md5sum = 122bf5e8c7a12bceef1bdd6d6b54f4d7
+md5sum = bd238397af6236b6b24b693012feeece
 download-only = true
 on-update = true
@@ -196,7 +196,7 @@ ignore-existing = true
 url = ${:_profile_base_location_}/template/kvm-controller-run.in
 mode = 644
 filename = kvm-controller-run.in
-md5sum = 7e6c79232cc88c15ed21c112ff801b76
+md5sum = c86cd67bbdd26b7b14b7449a1bbd959b
 download-only = true
 on-update = true
......
@@ -51,7 +51,7 @@ config-authorized-key = {{ dumps(slapparameter_dict.get('authorized-keys') | joi
 config-nbd-port = {{ dumps(kvm_parameter_dict.get('nbd-port', 1024)) }}
 config-nbd2-port = {{ dumps(kvm_parameter_dict.get('nbd-port2', 1024)) }}
 config-ram-size = {{ dumps(kvm_parameter_dict.get('ram-size', 1024)) }}
-config-ram-max-size = {{ dumps(kvm_parameter_dict.get('ram-max-size', '50G')) }}
+config-ram-max-size = {{ dumps(kvm_parameter_dict.get('ram-max-size', '51200')) }}
 config-enable-device-hotplug = {{ dumps(kvm_parameter_dict.get('enable-device-hotplug', False)) }}
 config-ram-hotplug-slot-size = {{ dumps(kvm_parameter_dict.get('ram-hotplug-slot-size', 512)) }}
 config-disk-size = {{ dumps(kvm_parameter_dict.get('disk-size', 10)) }}
......
@@ -561,7 +561,7 @@ nbd2-host =
 enable-device-hotplug = False
 ram-size = 1024
-ram-max-size = 50G
+ram-max-size = 51200
 ram-hotplug-slot-size = 512
 disk-size = 10
 disk-type = virtio
......
@@ -28,14 +28,18 @@ def update():
   try:
     init_dict = getInitialQemuResourceDict(pid_file)
+    if os.path.exists(status_path):
+      os.unlink(status_path)
+    if init_dict is None:
+      # qemu process is not OK
+      return
     init_ram_size = int(init_dict['ram'].split('M')[0])
     if cpu_amount < 1:
       raise ValueError("CPU should be at least equal to 1.")
     hotplug_ram = ram_size - init_ram_size
     if hotplug_ram < 0:
       raise ValueError("RAM size cannot be less than the initial value %s MB" % init_ram_size)
-    if os.path.exists(status_path):
-      os.unlink(status_path)
     qemu_wrapper = QemuQMPWrapper(socket_path)
     qemu_wrapper.setVNCPassword(vnc_password)
......
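The hunk above moves the removal of the stale status file before the qemu liveness check, so a leftover status from a previous run never survives when the qemu process is gone. A simplified standalone sketch of that control flow (`update`, `status_path` and the return values here are illustrative, not the recipe's real API):

```python
import os
import tempfile

def update(init_dict, status_path):
    # Remove any stale status from a previous run *before* deciding
    # whether qemu is reachable, as in the reordered hunk.
    if os.path.exists(status_path):
        os.unlink(status_path)
    if init_dict is None:
        # qemu process is not OK: bail out early, stale file already gone
        return 'qemu not running'
    return 'proceed'

path = os.path.join(tempfile.mkdtemp(), 'status')
open(path, 'w').close()           # simulate a leftover status file
print(update(None, path))         # qemu down: early return
print(os.path.exists(path))       # the stale file was still cleaned up
```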
@@ -261,7 +261,7 @@ if use_tap == 'true':
 if enable_device_hotplug != 'true':
   smp = '%s,maxcpus=%s' % (smp_count, smp_max_count)
-  ram = '%sM,slots=128,maxmem=%s' % (ram_size, ram_max_size)
+  ram = '%sM,slots=128,maxmem=%sM' % (ram_size, ram_max_size)
 else:
   smp = '1,maxcpus=%s' % smp_max_count
   ram = '%sM,slots=128,maxmem=%s' % (init_ram_size, ram_max_size)
......
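These hunks fix a unit mismatch: the old default `ram-max-size` was the suffixed string `'50G'`, while the run script formats QEMU's `-m` option with its own `M` suffix, which would have produced the malformed `maxmem=50GM`. With the default normalized to plain megabytes (`51200`), the generated option is consistent. A small sketch of the string construction:

```python
# Mirror of the template logic after the fix: sizes are plain MB numbers
# and the 'M' suffix is appended exactly once when formatting the option.
def make_ram_option(ram_size, ram_max_size):
    return '%sM,slots=128,maxmem=%sM' % (ram_size, ram_max_size)

print(make_ram_option(1024, 51200))  # consistent: both values in MB

# The old '50G' default would have yielded a malformed maxmem value:
print('%sM,slots=128,maxmem=%sM' % (1024, '50G'))
```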
[buildout]
parts =
instance
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
[instance]
recipe = ${instance-recipe:egg}:${instance-recipe:module}
dcrond_binary = ${dcron:location}/sbin/crond
memcached_binary = ${memcached:location}/bin/memcached
openssl_binary = ${openssl:location}/bin/openssl
rdiff_backup_binary = ${buildout:bin-directory}/rdiff-backup
stunnel_binary = ${stunnel:location}/bin/stunnel
[buildout]
extends =
../../component/memcached/buildout.cfg
../../component/dcron/buildout.cfg
../../component/stunnel/buildout.cfg
../../component/rdiff-backup/buildout.cfg
../../component/lxml-python/buildout.cfg
../../stack/slapos.cfg
parts =
# Create instance template
template
eggs
instance-recipe-egg
[instance-recipe]
egg = slapos.cookbook
module = memcached
[instance-recipe-egg]
recipe = zc.recipe.egg
eggs = ${instance-recipe:egg}
[eggs]
recipe = zc.recipe.egg
eggs =
${lxml-python:egg}
[template]
# Default template for the instance.
recipe = slapos.recipe.template
url = ${:_profile_base_location_}/instance.cfg
md5sum = 837caf9897332a5f70c72438f1dc5bae
output = ${buildout:directory}/template.cfg
mode = 0644
[versions]
plone.recipe.command = 1.1
slapos.recipe.template = 2.3
@@ -15,7 +15,7 @@
 # not need these here).
 [template-erp5]
 filename = instance-erp5.cfg.in
-md5sum = 4a77ee4a6367fee27552f8bfe9d87aab
+md5sum = f539acb8da805ce2de0787769283869e
 [template-balancer]
 filename = instance-balancer.cfg.in
......
@@ -206,10 +206,12 @@ name = {{ partition_name }}
 {{ root_common.sla(partition_name) }}
 config-name = {{ dumps(custom_name) }}
 config-instance-count = {{ dumps(zope_parameter_dict.get('instance-count', 1)) }}
+config-private-dev-shm = {{ zope_parameter_dict.get('private-dev-shm', '') }}
 config-thread-amount = {{ dumps(zope_parameter_dict.get('thread-amount', 4)) }}
 config-timerserver-interval = {{ dumps(zope_parameter_dict.get('timerserver-interval', 5)) }}
 config-longrequest-logger-interval = {{ dumps(zope_parameter_dict.get('longrequest-logger-interval', -1)) }}
 config-longrequest-logger-timeout = {{ dumps(zope_parameter_dict.get('longrequest-logger-timeout', 1)) }}
+config-large-file-threshold = {{ dumps(zope_parameter_dict.get('large-file-threshold', '10MB')) }}
 config-port-base = {{ dumps(zope_parameter_dict.get('port-base', 2200)) }}
 config-webdav = {{ dumps(zope_parameter_dict.get('webdav', False)) }}
 {% endfor -%}
@@ -327,7 +329,7 @@ config-backend-path-dict = {{ dumps(zope_backend_path_dict) }}
 config-ssl-authentication-dict = {{ dumps(ssl_authentication_dict) }}
 config-apachedex-promise-threshold = {{ dumps(monitor_dict.get('apachedex-promise-threshold', 70)) }}
 config-apachedex-configuration = {{ dumps(monitor_dict.get('apachedex-configuration',
-  '--erp5-base "/erp5(/|$|/\?)" --skip-user-agent Zabbix --error-detail --js-embed --quiet')) }}
+  '--erp5-base +erp5 .*/VirtualHostRoot/erp5(/|\\?|$) --base +other / --skip-user-agent Zabbix --error-detail --js-embed --quiet')) }}
 [request-frontend-base]
 {% if has_frontend -%}
......
@@ -19,7 +19,7 @@ md5sum = 713db528880282d568278f09458d2aab
 [template-runner]
 filename = instance-runner.cfg
-md5sum = 8b1caca52ab1307343ebada59f5a6c66
+md5sum = 16ff762e71c92f8a8e1062906eb10b9c
 [template-runner-import-script]
 filename = template/runner-import.sh.jinja2
@@ -63,7 +63,7 @@ md5sum = f8446fcf254b4929eb828a9a1d7e5f62
 [template-bash-profile]
 filename = template/bash_profile.in
-md5sum = 712ca70488051f97e7a7b11a02a06bb1
+md5sum = 37eea89042a58127c85e6b3886260e59
 [template-supervisord]
 filename = template/supervisord.conf.in
......
@@ -725,6 +725,7 @@ rendered = $${buildout:directory}/.bash_profile
 context =
   raw path $${shell-environment:path}
   raw shell $${shell-environment:shell}
+  key instance_name slap-parameter:instance-name
   key workdir runnerdirectory:home
 #---------------------------
......
@@ -5,6 +5,9 @@
 cd {{ workdir }}
 export PATH={{- path }}
 export SHELL={{- shell }}
+{%- if instance_name %}
+export PROMPT_COMMAND='echo -en "\033]0;{{-instance_name}}\a"'
+{% endif %}
 export PS1="$ "
 if [ -f "$HOME/.bashrc" ] ; then
......
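The added `PROMPT_COMMAND` emits the xterm control sequence `ESC ] 0 ; <text> BEL`, which most terminal emulators interpret as "set the window title and icon name", so each shell prompt re-labels the terminal with the instance name. Sketch of the sequence being built (the instance name is arbitrary):

```python
def title_sequence(instance_name):
    # \033]0;<text>\a is the xterm control sequence that sets both
    # the icon name and the window title.
    return '\033]0;%s\a' % instance_name

seq = title_sequence('runner-1')
print(repr(seq))  # the raw bytes a terminal would receive at each prompt
```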
@@ -10,12 +10,15 @@
 # $x 8.787 0
 # 0.100036621094
 # 6556
+#
+# The 'start_process' command is similar but by growing a ZBigArray object.
+# The random data has a compression ratio of 10%.
 [buildout]
 extends = test-common.cfg
-parts += start_ingest
+parts += start_ingest start_process
-[start_ingest]
+[start-script-common]
 recipe = slapos.recipe.template:jinja2
 rendered = ${buildout:bin-directory}/${:_buildout_section_name_}
 mode = 0755
@@ -27,20 +30,35 @@ template =
 _('--site-id', default='erp5')
 _('hostport', metavar='host[:port]', help='Zope address')
 _('password', help="'zope' user password")
-_('reference', help='Data Stream reference')
-_('mu', type=float)
-_('sigma', type=float)
-_('chunks_per_transaction', nargs='?', type=int, help='default: 128 (8 MiB)')
+options = []
+def option(name, **kw):
+  _(name, **kw)
+  options.append(name)
+${:options}
 args = parser.parse_args()
 qs = []
-for k in 'reference', 'mu', 'sigma', 'chunks_per_transaction':
+for k in options:
 v = getattr(args, k)
 if v is not None:
 t = type(v)
 qs.append('%s=%s' % (k if t is str else k + ':' + t.__name__, v))
 c = httplib.HTTPConnection(args.hostport)
-c.putrequest('GET', '/%s/ERP5Site_simulateFluentdIngestion?%s'
-  % (args.site_id, '&'.join(qs)))
+c.putrequest('GET', '/%s/${:script}?%s' % (args.site_id, '&'.join(qs)))
 c.putheader('Authorization',
   'Basic ' + base64.b64encode('zope:'+args.password))
 c.endheaders()
+[start_ingest]
+<= start-script-common
+options =
+  option('id', help='Data Stream id')
+  option('mu', type=float)
+  option('sigma', type=float)
+  option('chunks_per_transaction', nargs='?', type=int, help='default: 128 (8 MiB)')
+script = ERP5Site_simulateFluentdIngestion
+[start_process]
+<= start-script-common
+options =
+  option('id', help='Data Stream id')
+script = ERP5Site_dummyZBigArrayProcessing
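The refactored template keeps one generic start script: each `option(...)` call both registers an argparse argument and remembers its name, and the remembered names are then serialized into a Zope-style query string where non-string values carry a `:type` suffix. A standalone sketch of that serialization, with the buildout-interpolated `${:options}` replaced by literal calls:

```python
import argparse

parser = argparse.ArgumentParser()
_ = parser.add_argument
options = []

def option(name, **kw):
    # register the argument and remember its name for serialization
    _(name, **kw)
    options.append(name)

# stands in for the per-section ${:options} block
option('id', help='Data Stream id')
option('mu', type=float)
option('sigma', type=float)

args = parser.parse_args(['foo', '8.787', '0.1'])
qs = []
for k in options:
    v = getattr(args, k)
    if v is not None:
        t = type(v)
        # Zope form marshalling: non-str parameters get a ':<type>' suffix
        qs.append('%s=%s' % (k if t is str else k + ':' + t.__name__, v))
print('&'.join(qs))
```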
@@ -5,7 +5,7 @@ from random import lognormvariate
 bigfile_chunk_size = 65536
-def simulateFluentdIngestion(self, reference, mu, sigma,
+def simulateFluentdIngestion(self, id, mu, sigma,
                              chunks_per_transaction=128):
   from time import time
   import transaction
@@ -13,9 +13,9 @@ def simulateFluentdIngestion(self, reference, mu, sigma,
                   + '/ingest')
   module = self['data_stream_module']
   try:
-    data_stream = module[reference]
+    data_stream = module[id]
   except KeyError:
-    data_stream = module.newContent(reference, 'Data Stream')
+    data_stream = module.newContent(id, 'Data Stream')
   transaction.commit()
   pack = struct.Struct('!d').pack
......
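`simulateFluentdIngestion` streams chunks of big-endian doubles whose sizes follow a log-normal distribution (with `mu = 8.787`, the median chunk is about `exp(8.787) ≈ 6556` values, matching the sample output quoted in the test profile's comments). A condensed sketch of that chunk generation, mirroring the script's use of `struct.Struct('!d')`; `make_chunk` is an illustrative helper, not the script's API:

```python
import struct
from random import lognormvariate, random

pack = struct.Struct('!d').pack  # 8-byte big-endian double, as in the script

def make_chunk(mu, sigma):
    # chunk size (in values) drawn from a log-normal distribution,
    # mirroring how the ingestion script sizes its writes
    n = max(1, int(lognormvariate(mu, sigma)))
    return b''.join(pack(random()) for _ in range(n))

chunk = make_chunk(8.787, 0.1)
print(len(chunk) % 8)  # every chunk is a whole number of packed doubles
```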
def dummyZBigArrayProcessing(self, id):
import numpy as np
from random import randrange, sample
import transaction
module = self['data_array_module']
try:
array = module[id]
except KeyError:
array = module.newContent(id, 'Data Array')
array.initArray(shape=(0, 64), dtype=np.int32)
transaction.commit()
note = array.getPath() + '/new_data'
array = array.getArray()
rows, cols = array.shape
y = xrange(cols)
n = 10 * (2<<20) // (cols*4)
z = np.ndarray(shape=(n, cols), dtype=array.dtype)
for row in z:
for i in sample(y, 8):
row[i] = randrange(0, 1000)
while 1:
txn = transaction.begin()
np.random.shuffle(z)
rows += n
array.resize((rows, cols))
array[-n:] = z
txn.note(note)
txn.commit()
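In `dummyZBigArrayProcessing` above, each transaction appends `n` rows, where `n` is derived from the row width: with the default 64 columns of int32, `n = 10 * (2<<20) // (cols*4)` yields batches of exactly 20 MiB per commit (note that `2<<20` is 2 MiB, not 1 MiB). The arithmetic, spelled out:

```python
cols = 64                      # initArray(shape=(0, 64), dtype=int32)
row_bytes = cols * 4           # int32 -> 4 bytes per cell
n = 10 * (2 << 20) // row_bytes  # rows appended per transaction

print(n)                            # rows per batch
print(n * row_bytes // (1 << 20))   # batch size in MiB
```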
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="Extension Component" module="erp5.portal_type"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_recorded_property_dict</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAI=</string> </persistent>
</value>
</item>
<item>
<key> <string>default_reference</string> </key>
<value> <string>ScalabilityZBigArray</string> </value>
</item>
<item>
<key> <string>description</string> </key>
<value>
<none/>
</value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>extension.erp5.ScalabilityZBigArray</string> </value>
</item>
<item>
<key> <string>portal_type</string> </key>
<value> <string>Extension Component</string> </value>
</item>
<item>
<key> <string>sid</string> </key>
<value>
<none/>
</value>
</item>
<item>
<key> <string>text_content_error_message</string> </key>
<value>
<tuple/>
</value>
</item>
<item>
<key> <string>text_content_warning_message</string> </key>
<value>
<tuple/>
</value>
</item>
<item>
<key> <string>version</string> </key>
<value> <string>erp5</string> </value>
</item>
<item>
<key> <string>workflow_history</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAM=</string> </persistent>
</value>
</item>
</dictionary>
</pickle>
</record>
<record id="2" aka="AAAAAAAAAAI=">
<pickle>
<global name="PersistentMapping" module="Persistence.mapping"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>data</string> </key>
<value>
<dictionary/>
</value>
</item>
</dictionary>
</pickle>
</record>
<record id="3" aka="AAAAAAAAAAM=">
<pickle>
<global name="PersistentMapping" module="Persistence.mapping"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>data</string> </key>
<value>
<dictionary>
<item>
<key> <string>component_validation_workflow</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAQ=</string> </persistent>
</value>
</item>
</dictionary>
</value>
</item>
</dictionary>
</pickle>
</record>
<record id="4" aka="AAAAAAAAAAQ=">
<pickle>
<global name="WorkflowHistoryList" module="Products.ERP5Type.patches.WorkflowTool"/>
</pickle>
<pickle>
<tuple>
<none/>
<list>
<dictionary>
<item>
<key> <string>action</string> </key>
<value> <string>validate</string> </value>
</item>
<item>
<key> <string>validation_state</string> </key>
<value> <string>validated</string> </value>
</item>
</dictionary>
</list>
</tuple>
</pickle>
</record>
</ZopeData>
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="ExternalMethod" module="Products.ExternalMethod.ExternalMethod"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_function</string> </key>
<value> <string>dummyZBigArrayProcessing</string> </value>
</item>
<item>
<key> <string>_module</string> </key>
<value> <string>ScalabilityZBigArray</string> </value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>ERP5Site_dummyZBigArrayProcessing</string> </value>
</item>
<item>
<key> <string>title</string> </key>
<value> <string></string> </value>
</item>
</dictionary>
</pickle>
</record>
</ZopeData>
@@ -54,7 +54,7 @@
 </item>
 <item>
 <key> <string>id</string> </key>
-<value> <string>DataStreamModule_getTotalSize</string> </value>
+<value> <string>Module_getTotalSize</string> </value>
 </item>
 </dictionary>
 </pickle>
......
 extension.erp5.ScalabilityFluentd
+extension.erp5.ScalabilityZBigArray
\ No newline at end of file
 portal_ingestion_policies/scalability_test_*
-portal_skins/custom/DataStreamModule_getTotalSize
+portal_skins/custom/ERP5Site_dummyZBigArrayProcessing
 portal_skins/custom/ERP5Site_handleRawDataFluentdIngestion
 portal_skins/custom/ERP5Site_simulateFluentdIngestion
+portal_skins/custom/Module_getTotalSize
\ No newline at end of file
@@ -22,7 +22,7 @@ extends =
 ../component/logrotate/buildout.cfg
 ../component/lxml-python/buildout.cfg
 ../component/mesa/buildout.cfg
-../component/onlyoffice-x2t/buildout.cfg
+../component/onlyoffice-core/buildout.cfg
 ../component/poppler/buildout.cfg
 ../component/python-2.7/buildout.cfg
 ../component/rdiff-backup/buildout.cfg
......
@@ -31,7 +31,7 @@ extends =
 ../../component/statsmodels/buildout.cfg
 ../../component/h5py/buildout.cfg
 ../../component/ocropy/buildout.cfg
-../../component/onlyoffice-x2t/buildout.cfg
+../../component/onlyoffice-core/buildout.cfg
 ../../component/pandas/buildout.cfg
 ../../component/percona-toolkit/buildout.cfg
 ../../component/patch/buildout.cfg
@@ -284,7 +284,7 @@ context =
 key mariadb_start_clone_from_backup mariadb-start-clone-from-backup:target
 key matplotlibrc_location matplotlibrc:location
 key mesa_location mesa:location
-key onlyoffice_x2t_location onlyoffice-x2t:location
+key onlyoffice_core_location onlyoffice-core:location
 key parts_directory buildout:parts-directory
 key openssl_location openssl:location
 key percona_toolkit_location percona-toolkit:location
@@ -739,6 +739,7 @@ validictory = 1.1.0
 xfw = 0.10
 xupdate-processor = 0.4
 selenium = 3.8.0
+zbarlight = 2.0
 # Required by:
 # Products.CMFCore==2.2.10
......
@@ -27,7 +27,7 @@ md5sum = 1af531c51f575a1d1362f2ca2d61620d
 [template-mariadb]
 filename = instance-mariadb.cfg.in
-md5sum = da7c36ecb490b67360d2afda94b41bff
+md5sum = 705f017e19dc2d1048770284d1d6b31f
 [template-kumofs]
 filename = instance-kumofs.cfg.in
@@ -35,11 +35,11 @@ md5sum = 091d3c3feb2d543d176b0fadb11c07dc
 [template-cloudooo]
 filename = instance-cloudoo.cfg.in
-md5sum = 76f9e8c8cdc352081e34539d8fc17026
+md5sum = a3ca0af4983b3b80d191de3f2cc2146d
 [template-zope-conf]
 filename = zope.conf.in
-md5sum = 3524ef2e14cea4a5bd40fdc9e95cfc0c
+md5sum = adb25a1ab15c8aecf40a3952528f81c2
 [site-zcml]
 filename = site.zcml
@@ -71,7 +71,7 @@ md5sum = 0969fbb25b05c02ef3c2d437b2f4e1a0
 [template]
 filename = instance.cfg.in
-md5sum = 47d09a83d44f38d3ea62743f004e866b
+md5sum = eba0b4596484dcf24b1da29ddeac453d
 [monitor-template-dummy]
 filename = dummy.cfg
@@ -79,7 +79,7 @@ md5sum = d41d8cd98f00b204e9800998ecf8427e
 [template-erp5]
 filename = instance-erp5.cfg.in
-md5sum = 02ed5d9b74c70789004d01dd2ecde7b1
+md5sum = 5eb5ff7491b9e47c647ecfd381a2e143
 [template-zeo]
 filename = instance-zeo.cfg.in
@@ -87,7 +87,7 @@ md5sum = d1f33d406d528ae27d973e2dd0efb1ba
 [template-zope]
 filename = instance-zope.cfg.in
-md5sum = fd7e8c507cef1950e6c0347ce2a01021
+md5sum = 490001726c0dd93cc94960d83a2f08e5
 [template-balancer]
 filename = instance-balancer.cfg.in
......
@@ -132,7 +132,7 @@ link-binary =
 {{ parameter_dict['poppler'] }}/bin/pdfinfo
 {{ parameter_dict['poppler'] }}/bin/pdftotext
 {{ parameter_dict['poppler'] }}/bin/pdftohtml
-{{ parameter_dict['onlyoffice-x2t'] }}/x2t
+{{ parameter_dict['onlyoffice-core'] }}/bin/x2t
 [xvfb-instance]
 recipe = slapos.cookbook:xvfb
......
@@ -205,10 +205,12 @@ name = {{ partition_name }}
 {{ root_common.sla(partition_name) }}
 config-name = {{ dumps(custom_name) }}
 config-instance-count = {{ dumps(zope_parameter_dict.get('instance-count', 1)) }}
+config-private-dev-shm = {{ zope_parameter_dict.get('private-dev-shm', '') }}
 config-thread-amount = {{ dumps(zope_parameter_dict.get('thread-amount', 4)) }}
 config-timerserver-interval = {{ dumps(zope_parameter_dict.get('timerserver-interval', 5)) }}
 config-longrequest-logger-interval = {{ dumps(zope_parameter_dict.get('longrequest-logger-interval', -1)) }}
 config-longrequest-logger-timeout = {{ dumps(zope_parameter_dict.get('longrequest-logger-timeout', 1)) }}
+config-large-file-threshold = {{ dumps(zope_parameter_dict.get('large-file-threshold', "10MB")) }}
 config-port-base = {{ dumps(zope_parameter_dict.get('port-base', 2200)) }}
 config-webdav = {{ dumps(zope_parameter_dict.get('webdav', False)) }}
 {% endfor -%}
@@ -289,6 +291,7 @@ config-apachedex-configuration = {{ dumps(monitor_dict.get('apachedex-configurat
 [request-frontend-base]
 {% if has_frontend -%}
 <= request-common
+recipe = slapos.cookbook:request
 software-url = {{ dumps(frontend_dict['software-url']) }}
 software-type = {{ dumps(frontend_dict.get('software-type', 'RootSoftwareInstance')) }}
 {{ root_common.sla('frontend', True) }}
@@ -299,6 +302,12 @@ slave = true
 {% if frontend_dict.get('domain') -%}
 {% do config_dict.__setitem__('custom_domain', frontend_dict['domain']) -%}
 {% endif -%}
+{% if frontend_dict.get('virtualhostroot-http-port') -%}
+{% do config_dict.__setitem__('virtualhostroot-http-port', frontend_dict['virtualhostroot-http-port']) -%}
+{% endif -%}
+{% if frontend_dict.get('virtualhostroot-https-port') -%}
+{% do config_dict.__setitem__('virtualhostroot-https-port', frontend_dict['virtualhostroot-https-port']) -%}
+{% endif -%}
 {% for name, value in config_dict.items() -%}
 config-{{ name }} = {{ value }}
 {% endfor -%}
......
@@ -128,6 +128,7 @@ port = {{ port }}
socket = ${directory:run}/mariadb.sock
data-directory = ${directory:mariadb-data}
tmp-directory = ${directory:tmp}
etc-directory = ${directory:etc}
pid-file = ${directory:run}/mariadb.pid
error-log = ${directory:log}/mariadb_error.log
slow-query-log = ${directory:log}/mariadb_slowquery.log
@@ -185,11 +186,22 @@ template = inline:#!{{ dash }}
  --skip-name-resolve \
  --datadir='${my-cnf-parameters:data-directory}' \
  --basedir='{{ parameter_dict['mariadb-location'] }}' \
&& ODBCSYSINI='${my-cnf-parameters:etc-directory}' exec '{{ parameter_dict['mariadb-location'] }}/bin/mysqld' \
  --defaults-file='${my-cnf:rendered}' \
  "$@"
rendered = ${directory:services}/mariadb
[odbc-ini-text]
text = {{ dumps(slapparameter_dict.get('odbc-ini', '').encode('base64')) }}
[{{ section('odbc-ini') }}]
< = jinja2-template-base
rendered = ${directory:etc}/odbc.ini
template = inline:{% raw -%}
{{ parameter_dict['text'].decode('base64') }}
{%- endraw %}
context = section parameter_dict odbc-ini-text
[{{ section('logrotate-entry-mariadb') }}]
< = logrotate-entry-base
name = mariadb
...
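The new `[odbc-ini-text]` / `[odbc-ini]` sections above base64-encode the raw `odbc.ini` text taken from `slapparameter_dict` so it passes through buildout option substitution unmangled (newlines, `$`, `%`), then decode it when rendering `${directory:etc}/odbc.ini`; the `ODBCSYSINI` variable exported in the mysqld wrapper points unixODBC at that directory. A sketch of the round-trip, assuming the template's Python 2 `'base64'` codec behaves like `base64.b64encode`/`b64decode`:

```python
import base64

# Equivalent of {{ dumps(slapparameter_dict.get('odbc-ini', '').encode('base64')) }}
def encode_option(text):
    return base64.b64encode(text.encode('utf-8')).decode('ascii')

# Equivalent of {{ parameter_dict['text'].decode('base64') }} at render time
def decode_option(encoded):
    return base64.b64decode(encoded).decode('utf-8')

# hypothetical odbc.ini content with characters buildout would otherwise mangle
sample = "[MyDSN]\nDriver = /opt/lib/libodbc.so\nOption = ${not-a-buildout-ref}\n"
assert decode_option(encode_option(sample)) == sample
```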
@@ -206,6 +206,7 @@ environment =
{% endif %}
parameters-extra = true
command-line = '{{ parameter_dict['userhosts'] }}' '{{ bin_directory }}/runzope' -C '${:configuration-file}'
private-dev-shm = {{ slapparameter_dict['private-dev-shm'] }}
[{{ section('zcml') }}]
recipe = slapos.cookbook:copyfilelist
@@ -268,6 +269,7 @@ tidstorage-ip = {{ dumps(slapparameter_dict['tidstorage-ip']) }}
tidstorage-port = {{ dumps(slapparameter_dict['tidstorage-port']) }}
{% endif -%}
{% set thread_amount = slapparameter_dict['thread-amount'] -%}
{% set large_file_threshold = slapparameter_dict['large-file-threshold'] -%}
thread-amount = {{ thread_amount }}
{% set webdav = slapparameter_dict['webdav'] -%}
webdav = {{ dumps(webdav) }}
@@ -314,6 +316,7 @@ node-id = {{ dumps(node_id_base ~ (node_id_index_format % index)) }}
{% endfor -%}
import-list = {{ dumps(list(import_set)) }}
zodb-dict = {{ dumps(zodb_dict) }}
large-file-threshold = {{ large_file_threshold }}
{% if longrequest_logger_interval > 0 -%}
longrequest-logger-file = {{ longrequest_logger_base_path ~ name ~ ".log" }}
longrequest-logger-timeout = {{ longrequest_logger_timeout }}
...
@@ -30,7 +30,7 @@ libffi = {{ libffi_location }}
libpng12 = {{ libpng12_location }}
libxcb = {{ libxcb_location }}
mesa = {{ mesa_location }}
onlyoffice-core = {{ dumps(onlyoffice_core_location) }}
pixman = {{ pixman_location }}
poppler = {{ dumps(poppler_location) }}
wkhtmltopdf = {{ dumps(wkhtmltopdf_location) }}
...
@@ -49,6 +49,11 @@ products {{ parameter_dict['instance-products'] }}
    interval {{ parameter_dict['longrequest-logger-interval'] }}
</product-config>
{% endif -%}
{% if 'large-file-threshold' in parameter_dict -%}
large-file-threshold {{ parameter_dict['large-file-threshold'] }}
{% endif -%}
{% if 'tidstorage-ip' in parameter_dict -%}
<product-config TIDStorage>
...
@@ -127,7 +127,7 @@ pyparsing = 2.2.0
pytz = 2016.10
requests = 2.13.0
six = 1.10.0
slapos.cookbook = 1.0.58
slapos.core = 1.4.4
slapos.extension.strip = 0.4
slapos.libnetworkcache = 0.15
@@ -156,7 +156,7 @@ functools32 = 3.2.3.post2
ipaddress = 1.0.18
# Required by:
# slapos.cookbook==1.0.58
jsonschema = 2.6.0
# Required by:
...