Commit 43653c40
Authored May 26, 2011 by Łukasz Nowak

    Import some important instantiation recipes.

Parent: ff17fae0
Showing 51 changed files with 2862 additions and 0 deletions (+2862 -0)
slapos/erp5.recipe.testnode/CHANGES.txt  +6
slapos/erp5.recipe.testnode/MANIFEST.in  +2
slapos/erp5.recipe.testnode/README.txt  +1
slapos/erp5.recipe.testnode/setup.py  +40
slapos/erp5.recipe.testnode/src/erp5/__init__.py  +6
slapos/erp5.recipe.testnode/src/erp5/recipe/__init__.py  +6
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/SlapOSControler.py  +90
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/Updater.py  +189
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/__init__.py  +170
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/template/slapos.cfg.in  +10
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/testnode.py  +229
slapos/slapos.recipe.erp5/CHANGES.txt  +7
slapos/slapos.recipe.erp5/MANIFEST.in  +2
slapos/slapos.recipe.erp5/README.txt  +2
slapos/slapos.recipe.erp5/setup.py  +41
slapos/slapos.recipe.erp5/src/slapos/__init__.py  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/__init__.py  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/__init__.py  +912
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/apache.py  +22
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/catdatefile.py  +14
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/certificate_authority.py  +112
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/erp5.py  +107
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/execute.py  +48
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/innobackupex.py  +25
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/killpidfromfile.py  +12
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/mysql.py  +71
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.location-snippet.conf.in  +5
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.ssl-snippet.conf.in  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.in  +55
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.path-protected.in  +7
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.path.in  +5
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/cloudooo.cfg.in  +54
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/haproxy.cfg.in  +26
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/initmysql.sql.in  +2
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_gateway.in  +2
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_manager.in  +2
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_server.in  +2
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/logrotate_entry.in  +13
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/memcached.in  +2
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/my.cnf.in  +52
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/openssl.cnf.ca.in  +350
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/tidstorage.py.in  +15
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zeo.conf.in  +15
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-deadlockdebugger-snippet.conf.in  +7
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-tidstorage-snippet.conf.in  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-zeo-snippet.conf.in  +8
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-zodb-snippet.conf.in  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope.conf.in  +58
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope.conf.timerservice.in  +6
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/test_suite_runner.py  +11
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/testrunner.py  +11
slapos/erp5.recipe.testnode/CHANGES.txt (new file, mode 100644)
Changelog
=========
1.0 (unreleased)
----------------
slapos/erp5.recipe.testnode/MANIFEST.in (new file, mode 100644)
include CHANGES.txt
recursive-include src/erp5/recipe/testnode *.in
slapos/erp5.recipe.testnode/README.txt (new file, mode 100644)

The erp5.recipe.testnode package aims to install a generic erp5 testnode.
slapos/erp5.recipe.testnode/setup.py (new file, mode 100644)

from setuptools import setup, find_packages

name = "erp5.recipe.testnode"
version = '1.0.23'

def read(name):
  return open(name).read()

long_description = (read('README.txt')
                    + '\n' +
                    read('CHANGES.txt'))

setup(name=name,
      version=version,
      description="ZC Buildout recipe to create a testnode instance",
      long_description=long_description,
      license="GPLv3",
      keywords="buildout erp5 test",
      classifiers=[
          "Framework :: Buildout :: Recipe",
          "Programming Language :: Python",
          ],
      packages=find_packages('src'),
      package_dir={'': 'src'},
      include_package_data=True,
      install_requires=[
          'setuptools',
          'slapos.lib.recipe',
          'xml_marshaller',
          'zc.buildout',
          'zc.recipe.egg',
          # below are requirements to provide full blown python interpreter
          'lxml',
          'PyXML',
      ],
      namespace_packages=['erp5', 'erp5.recipe'],
      entry_points={'zc.buildout': ['default = %s:Recipe' % name]},
    )
slapos/erp5.recipe.testnode/src/erp5/__init__.py (new file, mode 100644)

# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
try:
  __import__('pkg_resources').declare_namespace(__name__)
except ImportError:
  from pkgutil import extend_path
  __path__ = extend_path(__path__, __name__)
slapos/erp5.recipe.testnode/src/erp5/recipe/__init__.py (new file, mode 100644)

# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
try:
  __import__('pkg_resources').declare_namespace(__name__)
except ImportError:
  from pkgutil import extend_path
  __path__ = extend_path(__path__, __name__)
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/SlapOSControler.py (new file, mode 100644)

import slapos.slap, subprocess, os, time
from xml_marshaller import xml_marshaller

class SlapOSControler(object):

  def __init__(self, config, process_group_pid_set=None):
    self.config = config
    # By erasing everything, we make sure that we are able to "update"
    # existing profiles. This is quite dirty way to do updates...
    if os.path.exists(config['proxy_database']):
      os.unlink(config['proxy_database'])
    proxy = subprocess.Popen([config['slapproxy_binary'],
      config['slapos_config']], close_fds=True, preexec_fn=os.setsid)
    process_group_pid_set.add(proxy.pid)
    # XXX: dirty, giving some time for proxy to being able to accept
    # connections
    time.sleep(10)
    slap = slapos.slap.slap()
    slap.initializeConnection(config['master_url'])
    # register software profile
    self.software_profile = config['custom_profile_path']
    slap.registerSupply().supply(self.software_profile,
      computer_guid=config['computer_id'])
    computer = slap.registerComputer(config['computer_id'])
    # create partition and configure computer
    partition_reference = config['partition_reference']
    partition_path = os.path.join(config['instance_root'], partition_reference)
    if not os.path.exists(partition_path):
      os.mkdir(partition_path)
    os.chmod(partition_path, 0750)
    computer.updateConfiguration(xml_marshaller.dumps({
      'address': config['ipv4_address'],
      'instance_root': config['instance_root'],
      'netmask': '255.255.255.255',
      'partition_list': [{'address_list': [{'addr': config['ipv4_address'],
                                            'netmask': '255.255.255.255'},
                                           {'addr': config['ipv6_address'],
                                            'netmask': 'ffff:ffff:ffff::'},
                                          ],
                          'path': partition_path,
                          'reference': partition_reference,
                          'tap': {'name': partition_reference},
                         }
                        ],
      'reference': config['computer_id'],
      'software_root': config['software_root']}))

  def runSoftwareRelease(self, config, environment, process_group_pid_set=None):
    print "SlapOSControler.runSoftwareRelease"
    while True:
      cpu_count = os.sysconf("SC_NPROCESSORS_ONLN")
      os.putenv('MAKEFLAGS', '-j%s' % cpu_count)
      os.environ['PATH'] = environment['PATH']
      stdout = open(os.path.join(config['instance_root'],
                                 '.runSoftwareRelease_out'), 'w+')
      stderr = open(os.path.join(config['instance_root'],
                                 '.runSoftwareRelease_err'), 'w+')
      slapgrid = subprocess.Popen([config['slapgrid_software_binary'],
        '-v', '-c',
        #'--buildout-parameter',"'-U -N' -o",
        config['slapos_config']],
        stdout=stdout, stderr=stderr,
        close_fds=True, preexec_fn=os.setsid)
      process_group_pid_set.add(slapgrid.pid)
      slapgrid.wait()
      stdout.seek(0)
      stderr.seek(0)
      process_group_pid_set.remove(slapgrid.pid)
      status_dict = {'status_code': slapgrid.returncode,
                     'stdout': stdout.read(),
                     'stderr': stderr.read()}
      stdout.close()
      stderr.close()
      return status_dict

  def runComputerPartition(self, config, process_group_pid_set=None):
    print "SlapOSControler.runSoftwareRelease"
    slap = slapos.slap.slap()
    slap.registerOpenOrder().request(self.software_profile,
      partition_reference='testing partition',
      partition_parameter_kw=config['instance_dict'])
    slapgrid = subprocess.Popen([config['slapgrid_partition_binary'],
      config['slapos_config'], '-c', '-v'],
      close_fds=True, preexec_fn=os.setsid)
    process_group_pid_set.add(slapgrid.pid)
    slapgrid.wait()
    process_group_pid_set.remove(slapgrid.pid)
    if slapgrid.returncode != 0:
      raise ValueError('Slapgrid instance failed')
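runSoftwareRelease above sizes its build parallelism from the online CPU count before exporting MAKEFLAGS to the slapgrid subprocess. A minimal sketch of that derivation, run standalone (assumes a POSIX system where `SC_NPROCESSORS_ONLN` is available, as on Linux):

```python
import os

# Same computation as runSoftwareRelease: one make job per online CPU.
cpu_count = os.sysconf("SC_NPROCESSORS_ONLN")
makeflags = '-j%s' % cpu_count

print(makeflags)  # e.g. "-j4" on a four-core machine
```
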
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/Updater.py (new file, mode 100644)

import errno, os, sys, subprocess, re, threading
from testnode import SubprocessError

_format_command_search = re.compile("[[\\s $({?*\\`#~';<>&|]").search
_format_command_escape = lambda s: "'%s'" % r"'\''".join(s.split("'"))

def format_command(*args, **kw):
  cmdline = []
  for k, v in sorted(kw.items()):
    if _format_command_search(v):
      v = _format_command_escape(v)
    cmdline.append('%s=%s' % (k, v))
  for v in args:
    if _format_command_search(v):
      v = _format_command_escape(v)
    cmdline.append(v)
  return ' '.join(cmdline)

def subprocess_capture(p, quiet=False):
  def readerthread(input, output, buffer):
    while True:
      data = input.readline()
      if not data:
        break
      output(data)
      buffer.append(data)
  if p.stdout:
    stdout = []
    output = quiet and (lambda data: None) or sys.stdout.write
    stdout_thread = threading.Thread(target=readerthread,
                                     args=(p.stdout, output, stdout))
    stdout_thread.setDaemon(True)
    stdout_thread.start()
  if p.stderr:
    stderr = []
    stderr_thread = threading.Thread(target=readerthread,
                                     args=(p.stderr, sys.stderr.write, stderr))
    stderr_thread.setDaemon(True)
    stderr_thread.start()
  if p.stdout:
    stdout_thread.join()
  if p.stderr:
    stderr_thread.join()
  p.wait()
  return (p.stdout and ''.join(stdout),
          p.stderr and ''.join(stderr))

GIT_TYPE = 'git'
SVN_TYPE = 'svn'

class Updater(object):

  _git_cache = {}
  realtime_output = True
  stdin = file(os.devnull)

  def __init__(self, repository_path, revision=None, git_binary=None):
    self.revision = revision
    self._path_list = []
    self.repository_path = repository_path
    self.git_binary = git_binary

  def getRepositoryPath(self):
    return self.repository_path

  def getRepositoryType(self):
    try:
      return self.repository_type
    except AttributeError:
      # guess the type of repository we have
      if os.path.isdir(os.path.join(self.getRepositoryPath(), '.git')):
        repository_type = GIT_TYPE
      elif os.path.isdir(os.path.join(self.getRepositoryPath(), '.svn')):
        repository_type = SVN_TYPE
      else:
        raise NotImplementedError
      self.repository_type = repository_type
      return repository_type

  def deletePycFiles(self, path):
    """Delete *.pyc files so that deleted/moved files can not be imported"""
    for path, dir_list, file_list in os.walk(path):
      for file in file_list:
        if file[-4:] in ('.pyc', '.pyo'):
          # allow several processes clean the same folder at the same time
          try:
            os.remove(os.path.join(path, file))
          except OSError, e:
            if e.errno != errno.ENOENT:
              raise

  def spawn(self, *args, **kw):
    quiet = kw.pop('quiet', False)
    env = kw and dict(os.environ, **kw) or None
    command = format_command(*args, **kw)
    print '\n$ ' + command
    sys.stdout.flush()
    p = subprocess.Popen(args, stdin=self.stdin, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, env=env,
                         cwd=self.getRepositoryPath())
    if self.realtime_output:
      stdout, stderr = subprocess_capture(p, quiet)
    else:
      stdout, stderr = p.communicate()
      if not quiet:
        sys.stdout.write(stdout)
        sys.stderr.write(stderr)
    result = dict(status_code=p.returncode, command=command,
                  stdout=stdout, stderr=stderr)
    if p.returncode:
      raise SubprocessError(result)
    return result

  def _git(self, *args, **kw):
    return self.spawn(self.git_binary, *args, **kw)['stdout'].strip()

  def _git_find_rev(self, ref):
    try:
      return self._git_cache[ref]
    except KeyError:
      if os.path.exists('.git/svn'):
        r = self._git('svn', 'find-rev', ref)
        assert r
        self._git_cache[ref[0] != 'r' and 'r%u' % int(r) or r] = ref
      else:
        r = self._git('rev-list', '--topo-order', '--count', ref), ref
      self._git_cache[ref] = r
      return r

  def getRevision(self, *path_list):
    if not path_list:
      path_list = self._path_list
    if self.getRepositoryType() == GIT_TYPE:
      h = self._git('log', '-1', '--format=%H', '--', *path_list)
      return self._git_find_rev(h)
    elif self.getRepositoryType() == SVN_TYPE:
      stdout = self.spawn('svn', 'info', *path_list)['stdout']
      return str(max(map(int, SVN_CHANGED_REV.findall(stdout))))
    raise NotImplementedError

  def checkout(self, *path_list):
    if not path_list:
      path_list = '.',
    revision = self.revision
    if self.getRepositoryType() == GIT_TYPE:
      # edit .git/info/sparse-checkout if you want sparse checkout
      if revision:
        if type(revision) is str:
          h = self._git_find_rev('r' + revision)
        else:
          h = revision[1]
        if h != self._git('rev-parse', 'HEAD'):
          self.deletePycFiles('.')
          self._git('reset', '--merge', h)
      else:
        self.deletePycFiles('.')
        if os.path.exists('.git/svn'):
          self._git('svn', 'rebase')
        else:
          self._git('pull', '--ff-only')
        self.revision = self._git_find_rev(self._git('rev-parse', 'HEAD'))
    elif self.getRepositoryType() == SVN_TYPE:
      # following code allows sparse checkout
      def svn_mkdirs(path):
        path = os.path.dirname(path)
        if path and not os.path.isdir(path):
          svn_mkdirs(path)
          self.spawn(*(args + ['--depth=empty', path]))
      for path in path_list:
        args = ['svn', 'up', '--force', '--non-interactive']
        if revision:
          args.append('-r%s' % revision)
        svn_mkdirs(path)
        args += '--set-depth=infinity', path
        self.deletePycFiles(path)
        try:
          status_dict = self.spawn(*args)
        except SubprocessError, e:
          if 'cleanup' not in e.stderr:
            raise
          self.spawn('svn', 'cleanup', path)
          status_dict = self.spawn(*args)
        if not revision:
          self.revision = revision = SVN_UP_REV.findall(
            status_dict['stdout'].splitlines()[-1])[0]
    else:
      raise NotImplementedError
    self._path_list += path_list
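The `_format_command_escape` helper defined at the top of Updater.py single-quotes an argument for shell display, splicing embedded quotes with the usual `'\''` idiom. Copied out of the file, its behaviour is easy to check in isolation:

```python
# Copied verbatim from Updater.py: quote s for shell display, turning
# each embedded single quote into the '\'' splice.
_format_command_escape = lambda s: "'%s'" % r"'\''".join(s.split("'"))

print(_format_command_escape("it's here"))  # 'it'\''s here'
```
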
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/__init__.py (new file, mode 100644)
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.lib.recipe.BaseSlapRecipe import BaseSlapRecipe
import os
import pkg_resources
import zc.buildout
import zc.recipe.egg
import sys

CONFIG = dict(
  proxy_port='5000',
  computer_id='COMPUTER',
  partition_reference='test0',
)

class Recipe(BaseSlapRecipe):
  def __init__(self, buildout, name, options):
    self.egg = zc.recipe.egg.Egg(buildout, options['recipe'], options)
    BaseSlapRecipe.__init__(self, buildout, name, options)

  def installSlapOs(self):
    CONFIG['slapos_directory'] = self.createDataDirectory('slapos')
    CONFIG['working_directory'] = self.createDataDirectory('testnode')
    CONFIG['software_root'] = os.path.join(CONFIG['slapos_directory'],
        'software')
    CONFIG['instance_root'] = os.path.join(CONFIG['slapos_directory'],
        'instance')
    CONFIG['proxy_database'] = os.path.join(CONFIG['slapos_directory'],
        'proxy.db')
    CONFIG['proxy_host'] = self.getLocalIPv4Address()
    CONFIG['master_url'] = 'http://%s:%s' % (CONFIG['proxy_host'],
        CONFIG['proxy_port'])
    self._createDirectory(CONFIG['software_root'])
    self._createDirectory(CONFIG['instance_root'])
    CONFIG['slapos_config'] = self.createConfigurationFile('slapos.cfg',
        self.substituteTemplate(pkg_resources.resource_filename(__name__,
          'template/slapos.cfg.in'), CONFIG))
    self.path_list.append(CONFIG['slapos_config'])

  def setupRunningWrapper(self):
    self.path_list.extend(zc.buildout.easy_install.scripts([(
      'testnode', __name__ + '.testnode', 'run')], self.ws,
      sys.executable, self.wrapper_directory, arguments=[
        dict(
          computer_id=CONFIG['computer_id'],
          instance_dict=eval(self.parameter_dict.get('instance_dict', '{}')),
          instance_root=CONFIG['instance_root'],
          ipv4_address=self.getLocalIPv4Address(),
          ipv6_address=self.getGlobalIPv6Address(),
          master_url=CONFIG['master_url'],
          profile_path=self.parameter_dict['profile_path'],
          proxy_database=CONFIG['proxy_database'],
          proxy_port=CONFIG['proxy_port'],
          slapgrid_partition_binary=self.options['slapgrid_partition_binary'],
          slapgrid_software_binary=self.options['slapgrid_software_binary'],
          slapos_config=CONFIG['slapos_config'],
          slapproxy_binary=self.options['slapproxy_binary'],
          git_binary=self.options['git_binary'],
          software_root=CONFIG['software_root'],
          working_directory=CONFIG['working_directory'],
          vcs_repository=self.parameter_dict.get('vcs_repository'),
          node_quantity=self.parameter_dict.get('node_quantity', '1'),
          branch=self.parameter_dict.get('branch', None),
          test_suite_master_url=self.parameter_dict.get(
            'test_suite_master_url', None),
          test_suite_name=self.parameter_dict.get('test_suite_name'),
          test_suite_title=self.parameter_dict.get('test_suite_title'),
          bin_directory=self.bin_directory,
          foo='bar',
          # bot_environment is a splittable string of key=value pairs to
          # substitute into the environment of the running bot
          bot_environment=self.parameter_dict.get('bot_environment', ''),
          partition_reference=CONFIG['partition_reference'],
          environment=dict(PATH=os.environ['PATH']),
        )
      ]))

  def installLocalSvn(self):
    svn_dict = dict(svn_binary=self.options['svn_binary'])
    svn_dict.update(self.parameter_dict)
    self._writeExecutable(os.path.join(self.bin_directory, 'svn'), """\
#!/bin/sh
%(svn_binary)s --username %(svn_username)s --password %(svn_password)s \
--non-interactive --trust-server-cert --no-auth-cache "$@" """ % svn_dict)
    svnversion = os.path.join(self.bin_directory, 'svnversion')
    if os.path.lexists(svnversion):
      os.unlink(svnversion)
    os.symlink(self.options['svnversion_binary'], svnversion)

  def installLocalGit(self):
    git_dict = dict(git_binary=self.options['git_binary'])
    git_dict.update(self.parameter_dict)
    double_slash_end_position = 1
    # XXX, this should be provided by slapos
    print "bin_directory : %r" % self.bin_directory
    home_directory = os.path.join(*os.path.split(self.bin_directory)[0:-1])
    print "home_directory : %r" % home_directory
    git_dict.setdefault("git_server_name", "git.erp5.org")
    if git_dict.get('vcs_username', None) is not None:
      netrc_file = open(os.path.join(home_directory, '.netrc'), 'w')
      netrc_file.write("""
machine %(git_server_name)s
login %(vcs_username)s
password %(vcs_password)s""" % git_dict)
      netrc_file.close()

  def installLocalRepository(self):
    if self.parameter_dict.get('vcs_repository').endswith('git'):
      self.installLocalGit()
    else:
      self.installLocalSvn()

  def installLocalZip(self):
    zip = os.path.join(self.bin_directory, 'zip')
    if os.path.lexists(zip):
      os.unlink(zip)
    os.symlink(self.options['zip_binary'], zip)

  def installLocalPython(self):
    """Installs local python fully featured with eggs"""
    self.path_list.extend(zc.buildout.easy_install.scripts([], self.ws,
          sys.executable, self.bin_directory, scripts=None,
          interpreter='python'))

  def installLocalRunUnitTest(self):
    link = os.path.join(self.bin_directory, 'runUnitTest')
    destination = os.path.join(CONFIG['instance_root'],
        CONFIG['partition_reference'], 'bin', 'runUnitTest')
    if os.path.lexists(link):
      if not os.readlink(link) != destination:
        os.unlink(link)
    if not os.path.lexists(link):
      os.symlink(destination, link)

  def _install(self):
    self.requirements, self.ws = self.egg.working_set([__name__])
    self.path_list = []
    self.installSlapOs()
    self.setupRunningWrapper()
    self.installLocalRepository()
    self.installLocalZip()
    self.installLocalPython()
    self.installLocalRunUnitTest()
    return self.path_list
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/template/slapos.cfg.in (new file, mode 100644)
[slapos]
software_root = %(software_root)s
instance_root = %(instance_root)s
master_url = %(master_url)s
computer_id = %(computer_id)s
[slapproxy]
host = %(proxy_host)s
port = %(proxy_port)s
database_uri = %(proxy_database)s
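The recipe renders this template with Python %-style interpolation (substituteTemplate in installSlapOs is fed the CONFIG mapping). A standalone sketch of that rendering, with hypothetical values in place of the real CONFIG entries:

```python
# Hypothetical values standing in for the CONFIG dict the recipe builds.
template = """[slapos]
software_root = %(software_root)s
instance_root = %(instance_root)s
master_url = %(master_url)s
computer_id = %(computer_id)s
"""
config = {
    'software_root': '/srv/slapos/software',
    'instance_root': '/srv/slapos/instance',
    'master_url': 'http://10.0.0.1:5000',
    'computer_id': 'COMPUTER',
}

print(template % config)
```
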
slapos/erp5.recipe.testnode/src/erp5/recipe/testnode/testnode.py (new file, mode 100644)
from xml_marshaller import xml_marshaller
import os, xmlrpclib, time, imp
from glob import glob
import signal
import slapos.slap
import subprocess
import sys
import socket
import pprint
from SlapOSControler import SlapOSControler

class SubprocessError(EnvironmentError):
  def __init__(self, status_dict):
    self.status_dict = status_dict
  def __getattr__(self, name):
    return self.status_dict[name]
  def __str__(self):
    return 'Error %i' % self.status_code

from Updater import Updater

process_group_pid_set = set()
process_pid_file_list = []
process_command_list = []

def sigterm_handler(signum, frame):
  for pgpid in process_group_pid_set:
    try:
      os.killpg(pgpid, signal.SIGTERM)
    except:
      pass
  for pid_file in process_pid_file_list:
    try:
      os.kill(int(open(pid_file).read().strip()), signal.SIGTERM)
    except:
      pass
  for p in process_command_list:
    try:
      subprocess.call(p)
    except:
      pass
  sys.exit(1)

signal.signal(signal.SIGTERM, sigterm_handler)

def safeRpcCall(function, *args):
  retry = 64
  while True:
    try:
      return function(*args)
    except (socket.error, xmlrpclib.ProtocolError), e:
      print >>sys.stderr, e
      pprint.pprint(args, file(function._Method__name, 'w'))
      time.sleep(retry)
      retry += retry >> 1

slapos_controler = None

def run(args):
  config = args[0]
  slapgrid = None
  branch = config.get('branch', None)
  supervisord_pid_file = os.path.join(config['instance_root'], 'var', 'run',
      'supervisord.pid')
  subprocess.check_call([config['git_binary'],
      "config", "--global", "http.sslVerify", "false"])
  previous_revision = None
  run_software = True
  # find what will be the path of the repository
  repository_name = config['vcs_repository'].split('/')[-1].split('.')[0]
  repository_path = os.path.join(config['working_directory'], repository_name)
  config['repository_path'] = repository_path
  sys.path.append(repository_path)
  # Write our own software.cfg to use the local repository
  custom_profile_path = os.path.join(config['working_directory'],
      'software.cfg')
  config['custom_profile_path'] = custom_profile_path
  # create a profile in order to use the repository we already have
  custom_profile = open(custom_profile_path, 'w')
  profile_content = """
[buildout]
extends = %(software_config_path)s
[%(repository_name)s]
repository = %(repository_path)s
""" % {'software_config_path': os.path.join(repository_path,
                                            config['profile_path']),
       'repository_name': repository_name,
       'repository_path': repository_path}
  if branch is not None:
    profile_content += "\nbranch = %s" % branch
  custom_profile.write(profile_content)
  custom_profile.close()
  retry_software = False
  try:
    while True:
      # kill processes from previous loop if any
      try:
        for pgpid in process_group_pid_set:
          try:
            os.killpg(pgpid, signal.SIGTERM)
          except:
            pass
        process_group_pid_set.clear()
        # Make sure we have local repository
        if not os.path.exists(repository_path):
          parameter_list = [config['git_binary'], 'clone',
                            config['vcs_repository']]
          if branch is not None:
            parameter_list.extend(['-b', branch])
          parameter_list.append(repository_path)
          subprocess.check_call(parameter_list)
          # XXX this looks like to not wait the end of the command
        # Make sure we have local repository
        updater = Updater(repository_path, git_binary=config['git_binary'])
        updater.checkout()
        revision = updater.getRevision()
        if previous_revision == revision:
          time.sleep(120)
          if not(retry_software):
            continue
        retry_software = False
        previous_revision = revision
        print config
        portal_url = config['test_suite_master_url']
        test_result_path = None
        test_result = (test_result_path, revision)
        if portal_url:
          if portal_url[-1] != '/':
            portal_url += '/'
          portal = xmlrpclib.ServerProxy("%s%s" %
              (portal_url, 'portal_task_distribution'), allow_none=1)
          master = portal.portal_task_distribution
          assert master.getProtocolRevision() == 1
          test_result = safeRpcCall(master.createTestResult,
            config['test_suite_name'], revision, [], False,
            config['test_suite_title'])
        print "testnode, test_result : %r" % (test_result, )
        if test_result:
          test_result_path, test_revision = test_result
          if revision != test_revision:
            # other testnodes on other boxes are already ready to test another
            # revision
            updater = Updater(repository_path,
                              git_binary=config['git_binary'],
                              revision=test_revision)
            updater.checkout()
          # Now prepare the installation of SlapOS
          slapos_controler = SlapOSControler(config,
            process_group_pid_set=process_group_pid_set)
          # this should be always true later, but it is too slow for now
          status_dict = slapos_controler.runSoftwareRelease(config,
            environment=config['environment'],
            process_group_pid_set=process_group_pid_set,
            )
          if status_dict['status_code'] != 0:
            safeRpcCall(master.reportTaskFailure,
              test_result_path, status_dict, config['test_suite_title'])
            retry_software = True
            continue
          # create instances, it should take some seconds only
          slapos_controler.runComputerPartition(config,
            process_group_pid_set=process_group_pid_set)
          # update repositories downloaded by buildout. Later we should get
          # from master a list of repositories
          repository_path_list = glob(os.path.join(config['software_root'],
            '*', 'parts', 'git_repository', '*'))
          assert len(repository_path_list) >= 0
          for repository_path in repository_path_list:
            updater = Updater(repository_path,
                              git_binary=config['git_binary'])
            updater.checkout()
            if os.path.split(repository_path)[-1] == repository_name:
              # redo checkout with good revision, the previous one is used
              # to pull last code
              updater = Updater(repository_path,
                                git_binary=config['git_binary'],
                                revision=revision)
              updater.checkout()
          # calling dist/externals is only there for backward compatibility,
          # the code will be removed soon
          if os.path.exists(os.path.join(repository_path,
                                         'dist/externals.py')):
            process = subprocess.Popen(['dist/externals.py'],
              cwd=repository_path)
            process.wait()
          partition_path = os.path.join(config['instance_root'],
                                        config['partition_reference'])
          run_test_suite_path = os.path.join(partition_path, 'bin',
                                             'runTestSuite')
          if not os.path.exists(run_test_suite_path):
            raise ValueError('No %r provided' % run_test_suite_path)
          run_test_suite_revision = revision
          if isinstance(revision, tuple):
            revision = ','.join(revision)
          run_test_suite = subprocess.Popen([run_test_suite_path,
            '--test_suite', config['test_suite_name'],
            '--revision', revision,
            '--node_quantity', config['node_quantity'],
            '--master_url', config['test_suite_master_url'],
            ],
            )
          process_group_pid_set.add(run_test_suite.pid)
          run_test_suite.wait()
          process_group_pid_set.remove(run_test_suite.pid)
(
run_test_suite
.
pid
)
      except SubprocessError:
        time.sleep(120)
        continue
  finally:
    # Nice way to kill *everything* generated by run process -- process
    # groups working only in POSIX compliant systems
    # Exceptions are swallowed during cleanup phase
    print "going to kill %r" % (process_group_pid_set,)
    for pgpid in process_group_pid_set:
      try:
        os.killpg(pgpid, signal.SIGTERM)
      except:
        pass
    try:
      if os.path.exists(supervisord_pid_file):
        os.kill(int(open(supervisord_pid_file).read().strip()),
            signal.SIGTERM)
    except:
      pass
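The `finally` clause above parses a supervisord-style pid file (read, strip, convert to int) and then signals the process, swallowing any error during cleanup. A minimal Python 3 sketch of that parse-and-signal pattern, with illustrative function names that are not part of the recipe itself:

```python
import os
import signal


def read_pid(pid_file):
    """Parse a daemon pid file the way the cleanup phase does:
    read it, strip surrounding whitespace, convert to int."""
    with open(pid_file) as f:
        return int(f.read().strip())


def terminate_from_pid_file(pid_file):
    """Best-effort SIGTERM: missing files, unparsable content and
    signalling errors are all swallowed, as in the finally block."""
    try:
        if os.path.exists(pid_file):
            os.kill(read_pid(pid_file), signal.SIGTERM)
    except (OSError, ValueError):
        pass
```

Swallowing exceptions is normally bad practice, but during a teardown phase it keeps one dead process from preventing the rest from being killed.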

slapos/slapos.recipe.erp5/CHANGES.txt (new file, mode 100644)

Changelog
=========

1.1 (unreleased)
----------------

* backup enabled for Certificate Authority [Łukasz Nowak]

slapos/slapos.recipe.erp5/MANIFEST.in (new file, mode 100644)

include CHANGES.txt
recursive-include src/slapos/recipe/erp5 *.in

slapos/slapos.recipe.erp5/README.txt (new file, mode 100644)

The slapos.recipe.erp5 recipe aims to instantiate an ERP5 environment
=====================================================================

slapos/slapos.recipe.erp5/setup.py (new file, mode 100644)

from setuptools import setup, find_packages

#def getGitIteration():
#  import subprocess
#  return str(int(subprocess.Popen(["git", "rev-list", "--count", "HEAD", "--",
#    "."], stdout=subprocess.PIPE).communicate()[0]))

name = "slapos.recipe.erp5"
version = '1.1-dev-184'

def read(name):
  return open(name).read()

long_description = (read('README.txt')
                    + '\n' +
                    read('CHANGES.txt'))

setup(
    name=name,
    version=version,
    description="ZC Buildout recipe to create an ERP5 instance",
    long_description=long_description,
    license="GPLv3",
    keywords="buildout slapos erp5",
    classifiers=[
        "Framework :: Buildout :: Recipe",
        "Programming Language :: Python",
    ],
    packages=find_packages('src'),
    package_dir={'': 'src'},
    include_package_data=True,
    install_requires=[
        'zc.recipe.egg',
        'setuptools',
        'slapos.lib.recipe',
        'Zope2',
    ],
    namespace_packages=['slapos', 'slapos.recipe'],
    entry_points={'zc.buildout': ['default = %s:Recipe' % name]},
)

slapos/slapos.recipe.erp5/src/slapos/__init__.py (new file, mode 100644)

# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
try:
  __import__('pkg_resources').declare_namespace(__name__)
except ImportError:
  from pkgutil import extend_path
  __path__ = extend_path(__path__, __name__)

slapos/slapos.recipe.erp5/src/slapos/recipe/__init__.py (new file, mode 100644)

# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
try:
  __import__('pkg_resources').declare_namespace(__name__)
except ImportError:
  from pkgutil import extend_path
  __path__ = extend_path(__path__, __name__)

slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/__init__.py (new file, mode 100644)

##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from slapos.lib.recipe.BaseSlapRecipe import BaseSlapRecipe

import binascii
import os
import pkg_resources
import pprint
import hashlib
import sys
import zc.buildout
import zc.recipe.egg
import ConfigParser

from Zope2.utilities.mkzopeinstance import write_inituser
class Recipe(BaseSlapRecipe):
  def getTemplateFilename(self, template_name):
    return pkg_resources.resource_filename(__name__,
        'template/%s' % template_name)

  site_id = 'erp5'
  def _install(self):
    self.path_list = []
    self.requirements, self.ws = self.egg.working_set([__name__])
    # self.cron_d is a directory, where cron jobs can be registered
    self.cron_d = self.installCrond()
    self.logrotate_d, self.logrotate_backup = self.installLogrotate()
    self.killpidfromfile = zc.buildout.easy_install.scripts(
        [('killpidfromfile', __name__ + '.killpidfromfile',
          'killpidfromfile')],
        self.ws, sys.executable, self.bin_directory)[0]
    self.path_list.append(self.killpidfromfile)
    ca_conf = self.installCertificateAuthority()
    memcached_conf = self.installMemcached(ip=self.getLocalIPv4Address(),
        port=11000)
    kumo_conf = self.installKumo(self.getLocalIPv4Address())
    conversion_server_conf = self.installConversionServer(
        self.getLocalIPv4Address(), 23000, 23060)
    mysql_conf = self.installMysqlServer(self.getLocalIPv4Address(), 45678)
    user, password = self.installERP5()
    zodb_dir = os.path.join(self.data_root_directory, 'zodb')
    self._createDirectory(zodb_dir)
    zodb_root_path = os.path.join(zodb_dir, 'root.fs')
    zope_access = self.installZope(ip=self.getLocalIPv4Address(),
        port=12000 + 1, name='zope_%s' % 1,
        zodb_configuration_string=self.substituteTemplate(
          self.getTemplateFilename('zope-zodb-snippet.conf.in'),
          dict(zodb_root_path=zodb_root_path)),
        with_timerservice=True)
    key, certificate = self.requestCertificate('Login Based Access')
    apache_conf = dict(
        apache_login=self.installBackendApache(ip=self.getGlobalIPv6Address(),
          port=13000, backend=zope_access, key=key, certificate=certificate))
    self.installERP5Site(user, password, zope_access, mysql_conf,
        conversion_server_conf, memcached_conf, kumo_conf, self.site_id)
    self.installTestRunner(ca_conf, mysql_conf, conversion_server_conf,
        memcached_conf, kumo_conf)
    self.installTestSuiteRunner(ca_conf, mysql_conf, conversion_server_conf,
        memcached_conf, kumo_conf)
    self.linkBinary()
    self.setConnectionDict(dict(
        site_url=apache_conf['apache_login'],
        site_user=user,
        site_password=password,
        memcached_url=memcached_conf['memcached_url'],
        kumo_url=kumo_conf['kumo_address']
    ))
    return self.path_list
  def _requestZeoFileStorage(self, server_name, storage_name):
    """Local, slap.request compatible, call to ask for filestorage on Zeo

    filter_kw can be used to select specific Zeo server

    Someday in future it will be possible to invoke:

    self.request(
      software_release=self.computer_partition.getSoftwareRelease().getURI(),
      software_type='Zeo Server',
      partition_reference=storage_name,
      filter_kw={'server_name': server_name},
      shared=True
    )

    Thanks to this it will be possible to select precisely on which server
    which storage will be placed.
    """
    base_port = 35001
    if getattr(self, '_zeo_storage_dict', None) is None:
      self._zeo_storage_dict = {}
    if getattr(self, '_zeo_storage_port_dict', None) is None:
      self._zeo_storage_port_dict = {}
    self._zeo_storage_port_dict.setdefault(server_name,
        base_port + len(self._zeo_storage_port_dict))
    self._zeo_storage_dict[server_name] = self._zeo_storage_dict.get(
        server_name, []) + [storage_name]
    return dict(ip=self.getLocalIPv4Address(),
        port=self._zeo_storage_port_dict[server_name],
        storage_name=storage_name)
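The port bookkeeping in `_requestZeoFileStorage` gives each distinct Zeo server name the next free port counting up from `base_port`, and hands the same port back on repeated requests. A standalone sketch of just that allocation logic (the helper name is illustrative):

```python
BASE_PORT = 35001  # same starting port as _requestZeoFileStorage


def allocate_port(port_dict, server_name, base_port=BASE_PORT):
    """Assign base_port + (number of servers seen so far) to a new
    server name; return the existing port for a known one."""
    # len(port_dict) is evaluated before the insert, so the first
    # server gets base_port, the second base_port + 1, and so on.
    port_dict.setdefault(server_name, base_port + len(port_dict))
    return port_dict[server_name]
```

Because the mapping is kept on the recipe instance, the allocation is stable within one buildout run but not across runs; the docstring above notes that a real `self.request()` call should eventually replace this local bookkeeping.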
  def installLogrotate(self):
    """Installs logrotate main configuration file and registers it to cron"""
    logrotate_d = os.path.abspath(os.path.join(self.etc_directory,
        'logrotate.d'))
    self._createDirectory(logrotate_d)
    logrotate_backup = self.createBackupDirectory('logrotate')
    logrotate_conf = self.createConfigurationFile("logrotate.conf",
        "include %s" % logrotate_d)
    logrotate_cron = os.path.join(self.cron_d, 'logrotate')
    state_file = os.path.join(self.data_root_directory, 'logrotate.status')
    open(logrotate_cron, 'w').write('0 0 * * * %s -s %s %s' %
        (self.options['logrotate_binary'], state_file, logrotate_conf))
    self.path_list.extend([logrotate_d, logrotate_conf, logrotate_cron])
    return logrotate_d, logrotate_backup
  def registerLogRotation(self, name, log_file_list, postrotate_script):
    """Register new log rotation requirement"""
    open(os.path.join(self.logrotate_d, name), 'w').write(
        self.substituteTemplate(
          self.getTemplateFilename('logrotate_entry.in'),
          dict(file_list=' '.join(['"' + q + '"' for q in log_file_list]),
            postrotate=postrotate_script,
            olddir=self.logrotate_backup)))
  def linkBinary(self):
    """Links binaries to instance's bin directory for easier exposal"""
    for linkline in self.options.get('link_binary_list', '').splitlines():
      if not linkline:
        continue
      target = linkline.split()
      if len(target) == 1:
        target = target[0]
        path, linkname = os.path.split(target)
      else:
        linkname = target[1]
        target = target[0]
      link = os.path.join(self.bin_directory, linkname)
      if os.path.lexists(link):
        if not os.path.islink(link):
          raise zc.buildout.UserError(
              'Target link %r already exists but it is not a link' % link)
        os.unlink(link)
      os.symlink(target, link)
      self.logger.debug('Created link %r -> %r' % (link, target))
      self.path_list.append(link)
  def installKumo(self, ip, kumo_manager_port=13101, kumo_server_port=13201,
      kumo_server_listen_port=13202, kumo_gateway_port=13301):
    # XXX: kumo is not storing pid in file, unless it is not running as daemon
    # but running daemons is incompatible with SlapOS, so there is currently
    # no way to have Kumo's pid files to rotate logs and send signals to them
    config = dict(
        kumo_gateway_binary=self.options['kumo_gateway_binary'],
        kumo_gateway_ip=ip,
        kumo_gateway_log=os.path.join(self.log_directory, "kumo-gateway.log"),
        kumo_manager_binary=self.options['kumo_manager_binary'],
        kumo_manager_ip=ip,
        kumo_manager_log=os.path.join(self.log_directory, "kumo-manager.log"),
        kumo_server_binary=self.options['kumo_server_binary'],
        kumo_server_ip=ip,
        kumo_server_log=os.path.join(self.log_directory, "kumo-server.log"),
        kumo_server_storage=os.path.join(self.data_root_directory,
          "kumodb.tch"),
        kumo_manager_port=kumo_manager_port,
        kumo_server_port=kumo_server_port,
        kumo_server_listen_port=kumo_server_listen_port,
        kumo_gateway_port=kumo_gateway_port
    )
    self.path_list.append(self.createRunningWrapper('kumo_gateway',
        self.substituteTemplate(self.getTemplateFilename('kumo_gateway.in'),
          config)))
    self.path_list.append(self.createRunningWrapper('kumo_manager',
        self.substituteTemplate(self.getTemplateFilename('kumo_manager.in'),
          config)))
    self.path_list.append(self.createRunningWrapper('kumo_server',
        self.substituteTemplate(self.getTemplateFilename('kumo_server.in'),
          config)))
    return dict(
        kumo_address='%s:%s' % (config['kumo_gateway_ip'],
          config['kumo_gateway_port']),
        kumo_gateway_ip=config['kumo_gateway_ip'],
        kumo_gateway_port=config['kumo_gateway_port'],
    )
  def installMemcached(self, ip, port):
    config = dict(
        memcached_binary=self.options['memcached_binary'],
        memcached_ip=ip,
        memcached_port=port,
    )
    self.path_list.append(self.createRunningWrapper('memcached',
        self.substituteTemplate(self.getTemplateFilename('memcached.in'),
          config)))
    return dict(
        memcached_url='%s:%s' % (config['memcached_ip'],
          config['memcached_port']),
        memcached_ip=config['memcached_ip'],
        memcached_port=config['memcached_port'])
  def installTestRunner(self, ca_conf, mysql_conf, conversion_server_conf,
      memcached_conf, kumo_conf):
    """Installs bin/runUnitTest executable to run all tests"""
    testinstance = self.createDataDirectory('testinstance')
    # workaround wrong assumptions of ERP5Type.tests.runUnitTest about
    # directory existence
    unit_test = os.path.join(testinstance, 'unit_test')
    if not os.path.isdir(unit_test):
      os.mkdir(unit_test)
    runUnitTest = zc.buildout.easy_install.scripts([
      ('runUnitTest', __name__ + '.testrunner', 'runUnitTest')],
      self.ws, sys.executable, self.bin_directory, arguments=[dict(
        instance_home=testinstance,
        prepend_path=self.bin_directory,
        openssl_binary=self.options['openssl_binary'],
        test_ca_path=ca_conf['certificate_authority_path'],
        call_list=[self.options['runUnitTest_binary'],
          '--erp5_sql_connection_string',
          '%(mysql_test_database)s@%(ip)s:%(tcp_port)s '
          '%(mysql_test_user)s %(mysql_test_password)s' % mysql_conf,
          '--conversion_server_hostname=%(conversion_server_ip)s' % \
              conversion_server_conf,
          '--conversion_server_port=%(conversion_server_port)s' % \
              conversion_server_conf,
          '--volatile_memcached_server_hostname=%(memcached_ip)s' %
              memcached_conf,
          '--volatile_memcached_server_port=%(memcached_port)s' %
              memcached_conf,
          '--persistent_memcached_server_hostname=%(kumo_gateway_ip)s' %
              kumo_conf,
          '--persistent_memcached_server_port=%(kumo_gateway_port)s' %
              kumo_conf,
        ]
      )])[0]
    self.path_list.append(runUnitTest)
  def installTestSuiteRunner(self, ca_conf, mysql_conf,
      conversion_server_conf, memcached_conf, kumo_conf):
    """Installs bin/runTestSuite executable to run all tests"""
    testinstance = self.createDataDirectory('test_suite_instance')
    # workaround wrong assumptions of ERP5Type.tests.runUnitTest about
    # directory existence
    unit_test = os.path.join(testinstance, 'unit_test')
    if not os.path.isdir(unit_test):
      os.mkdir(unit_test)
    connection_string_list = []
    for test_database, test_user, test_password in \
        mysql_conf['mysql_parallel_test_dict']:
      connection_string_list.append('%s@%s:%s %s %s' % (test_database,
        mysql_conf['ip'], mysql_conf['tcp_port'], test_user, test_password))
    command = zc.buildout.easy_install.scripts([
      ('runTestSuite', __name__ + '.test_suite_runner', 'runTestSuite')],
      self.ws, sys.executable, self.bin_directory, arguments=[dict(
        instance_home=testinstance,
        prepend_path=self.bin_directory,
        openssl_binary=self.options['openssl_binary'],
        test_ca_path=ca_conf['certificate_authority_path'],
        call_list=[self.options['runTestSuite_binary'],
          '--db_list', ','.join(connection_string_list),
          '--conversion_server_hostname=%(conversion_server_ip)s' % \
              conversion_server_conf,
          '--conversion_server_port=%(conversion_server_port)s' % \
              conversion_server_conf,
          '--volatile_memcached_server_hostname=%(memcached_ip)s' %
              memcached_conf,
          '--volatile_memcached_server_port=%(memcached_port)s' %
              memcached_conf,
          '--persistent_memcached_server_hostname=%(kumo_gateway_ip)s' %
              kumo_conf,
          '--persistent_memcached_server_port=%(kumo_gateway_port)s' %
              kumo_conf,
        ]
      )])[0]
    self.path_list.append(command)
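The loop above turns each `(database, user, password)` triple from `mysql_conf['mysql_parallel_test_dict']` into a `db@ip:port user password` connection string for `--db_list`. A standalone sketch of just that formatting step, using invented sample values:

```python
def connection_strings(mysql_conf):
    """Build the 'db@ip:port user password' strings fed to --db_list,
    one per parallel-test database (same format as the loop above)."""
    return ['%s@%s:%s %s %s' % (db, mysql_conf['ip'],
                                mysql_conf['tcp_port'], user, password)
            for db, user, password in mysql_conf['mysql_parallel_test_dict']]
```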
  def installCrond(self):
    timestamps = self.createDataDirectory('cronstamps')
    cron_output = os.path.join(self.log_directory, 'cron-output')
    self._createDirectory(cron_output)
    catcher = zc.buildout.easy_install.scripts([('catchcron',
      __name__ + '.catdatefile', 'catdatefile')], self.ws, sys.executable,
      self.bin_directory, arguments=[cron_output])[0]
    self.path_list.append(catcher)
    cron_d = os.path.join(self.etc_directory, 'cron.d')
    crontabs = os.path.join(self.etc_directory, 'crontabs')
    self._createDirectory(cron_d)
    self._createDirectory(crontabs)
    wrapper = zc.buildout.easy_install.scripts([('crond',
      __name__ + '.execute', 'execute')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[
        self.options['dcrond_binary'].strip(), '-s', cron_d, '-c', crontabs,
        '-t', timestamps, '-f', '-l', '5', '-M', catcher]
      )[0]
    self.path_list.append(wrapper)
    return cron_d
  def requestCertificate(self, name):
    hash = hashlib.sha512(name).hexdigest()
    key = os.path.join(self.ca_private, hash + self.ca_key_ext)
    certificate = os.path.join(self.ca_certs, hash + self.ca_crt_ext)
    parser = ConfigParser.RawConfigParser()
    parser.add_section('certificate')
    parser.set('certificate', 'name', name)
    parser.set('certificate', 'key_file', key)
    parser.set('certificate', 'certificate_file', certificate)
    parser.write(open(os.path.join(self.ca_request_dir, hash), 'w'))
    return key, certificate
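`requestCertificate` does not generate a certificate itself: it drops an INI-style request file into `ca_request_dir` and returns the paths where the certificate authority daemon is expected to place the key and certificate later. A Python 3 sketch of the request file round-trip (`configparser` is the Python 3 name of the `ConfigParser` module used above; the paths are made up):

```python
import configparser
import io

# Write a request the way requestCertificate does: one [certificate]
# section carrying the display name and the two target paths.
writer = configparser.RawConfigParser()
writer.add_section('certificate')
writer.set('certificate', 'name', 'Login Based Access')
writer.set('certificate', 'key_file', '/srv/ca/private/abc.key')
writer.set('certificate', 'certificate_file', '/srv/ca/certs/abc.crt')
buf = io.StringIO()
writer.write(buf)

# The certificate_authority daemon would read the request back like this.
reader = configparser.RawConfigParser()
reader.read_string(buf.getvalue())
```

Naming the request file after `sha512(name)` makes repeated requests for the same logical certificate idempotent: they overwrite the same request file instead of piling up.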
  def installCertificateAuthority(self, ca_country_code='XX',
      ca_email='xx@example.com', ca_state='State', ca_city='City',
      ca_company='Company'):
    backup_path = self.createBackupDirectory('ca')
    self.ca_dir = os.path.join(self.data_root_directory, 'ca')
    self._createDirectory(self.ca_dir)
    self.ca_request_dir = os.path.join(self.ca_dir, 'requests')
    self._createDirectory(self.ca_request_dir)
    config = dict(ca_dir=self.ca_dir, request_dir=self.ca_request_dir)
    self.ca_private = os.path.join(self.ca_dir, 'private')
    self.ca_certs = os.path.join(self.ca_dir, 'certs')
    self.ca_crl = os.path.join(self.ca_dir, 'crl')
    self.ca_newcerts = os.path.join(self.ca_dir, 'newcerts')
    self.ca_key_ext = '.key'
    self.ca_crt_ext = '.crt'
    for d in [self.ca_private, self.ca_crl, self.ca_newcerts, self.ca_certs]:
      self._createDirectory(d)
    for f in ['crlnumber', 'serial']:
      if not os.path.exists(os.path.join(self.ca_dir, f)):
        open(os.path.join(self.ca_dir, f), 'w').write('01')
    if not os.path.exists(os.path.join(self.ca_dir, 'index.txt')):
      open(os.path.join(self.ca_dir, 'index.txt'), 'w').write('')
    openssl_configuration = os.path.join(self.ca_dir, 'openssl.cnf')
    config.update(
        working_directory=self.ca_dir,
        country_code=ca_country_code,
        state=ca_state,
        city=ca_city,
        company=ca_company,
        email_address=ca_email,
    )
    self._writeFile(openssl_configuration, pkg_resources.resource_string(
        __name__, 'template/openssl.cnf.ca.in') % config)
    self.path_list.extend(zc.buildout.easy_install.scripts([
      ('certificate_authority', __name__ + '.certificate_authority',
        'runCertificateAuthority')],
      self.ws, sys.executable, self.wrapper_directory, arguments=[dict(
        openssl_configuration=openssl_configuration,
        openssl_binary=self.options['openssl_binary'],
        certificate=os.path.join(self.ca_dir, 'cacert.pem'),
        key=os.path.join(self.ca_private, 'cakey.pem'),
        crl=os.path.join(self.ca_crl),
        request_dir=self.ca_request_dir
      )]))
    # configure backup
    backup_cron = os.path.join(self.cron_d, 'ca_rdiff_backup')
    open(backup_cron, 'w').write(
        '''0 0 * * * %(rdiff_backup)s %(source)s %(destination)s''' % dict(
          rdiff_backup=self.options['rdiff_backup_binary'],
          source=self.ca_dir,
          destination=backup_path))
    self.path_list.append(backup_cron)
    return dict(
      ca_certificate=os.path.join(config['ca_dir'], 'cacert.pem'),
      ca_crl=os.path.join(config['ca_dir'], 'crl'),
      certificate_authority_path=config['ca_dir']
    )
  def installConversionServer(self, ip, port, openoffice_port):
    name = 'conversion_server'
    working_directory = self.createDataDirectory(name)
    conversion_server_dict = dict(
        working_path=working_directory,
        uno_path=self.options['ooo_uno_path'],
        office_binary_path=self.options['ooo_binary_path'],
        ip=ip,
        port=port,
        openoffice_port=openoffice_port,
    )
    for env_line in self.options['environment'].splitlines():
      env_line = env_line.strip()
      if not env_line:
        continue
      if '=' in env_line:
        env_key, env_value = env_line.split('=')
        conversion_server_dict[env_key.strip()] = env_value.strip()
      else:
        raise zc.buildout.UserError('Line %r in environment parameter is '
            'incorrect' % env_line)
    config_file = self.createConfigurationFile(name + '.cfg',
        self.substituteTemplate(self.getTemplateFilename('cloudooo.cfg.in'),
          conversion_server_dict))
    self.path_list.append(config_file)
    self.path_list.extend(zc.buildout.easy_install.scripts([(name,
      __name__ + '.execute', 'execute_with_signal_translation')],
      self.ws, sys.executable, self.wrapper_directory,
      arguments=[self.options['ooo_paster'].strip(), 'serve', config_file]))
    return {
        name + '_port': conversion_server_dict['port'],
        name + '_ip': conversion_server_dict['ip']
    }
  def installHaproxy(self, ip, port, name, server_check_path, url_list):
    server_template = """  server %(name)s %(address)s cookie %(name)s check inter 20s rise 2 fall 4"""
    config = dict(name=name, ip=ip, port=port,
        server_check_path=server_check_path)
    i = 1
    server_list = []
    for url in url_list:
      server_list.append(server_template % dict(name='%s_%s' % (name, i),
        address=url))
      i += 1
    config['server_text'] = '\n'.join(server_list)
    haproxy_conf_path = self.createConfigurationFile('haproxy_%s.cfg' % name,
        self.substituteTemplate(self.getTemplateFilename('haproxy.cfg.in'),
          config))
    self.path_list.append(haproxy_conf_path)
    wrapper = zc.buildout.easy_install.scripts([('haproxy_%s' % name,
      __name__ + '.execute', 'execute')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[
        self.options['haproxy_binary'].strip(), '-f', haproxy_conf_path]
      )[0]
    self.path_list.append(wrapper)
    return '%s:%s' % (ip, port)
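`installHaproxy` expands `server_template` once per backend URL and joins the lines into the `server_text` value substituted into `haproxy.cfg.in`. A standalone sketch of that rendering step, with invented backend addresses (the helper name is illustrative):

```python
# Same template string as installHaproxy: one haproxy "server" line
# per backend, with sticky cookies and health checks.
server_template = ("  server %(name)s %(address)s cookie %(name)s"
                   " check inter 20s rise 2 fall 4")


def render_server_text(name, url_list):
    """Number backends name_1, name_2, ... and render one line each."""
    lines = []
    for i, url in enumerate(url_list, 1):
        lines.append(server_template % dict(name='%s_%s' % (name, i),
                                            address=url))
    return '\n'.join(lines)
```

The `cookie %(name)s` clause pins each client session to the backend that first served it, which matters for Zope workers holding per-session state.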
  def installERP5(self):
    """
    All Zope instances have to share the files created by portal_classes
    (until everything is integrated into the ZODB).
    So, do not request zope instances; create multiple in the same partition.
    """
    # Create instance directories
    self.erp5_directory = self.createDataDirectory('erp5shared')
    # Create init user
    password = self.generatePassword()
    # XXX Unhardcoded me please
    user = 'zope'
    write_inituser(os.path.join(self.erp5_directory, "inituser"),
        user, password)
    self._createDirectory(self.erp5_directory)
    for directory in ('Constraint', 'Document', 'Extensions',
        'PropertySheet', 'import', 'lib', 'tests', 'Products', 'etc'):
      self._createDirectory(os.path.join(self.erp5_directory, directory))
    self._createDirectory(os.path.join(self.erp5_directory, 'etc',
      'package-includes'))
    return user, password
  def installERP5Site(self, user, password, zope_access, mysql_conf,
      conversion_server_conf=None, memcached_conf=None, kumo_conf=None,
      erp5_site_id='erp5'):
    """Create a script controlled by supervisor, which creates an erp5
       site on the currently available zope and mysql environment"""
    conversion_server = None
    if conversion_server_conf is not None:
      conversion_server = "%s:%s" % (
          conversion_server_conf['conversion_server_ip'],
          conversion_server_conf['conversion_server_port'])
    if memcached_conf is None:
      memcached_conf = {}
    if kumo_conf is None:
      kumo_conf = {}
    # XXX Conversion server and memcache server coordinates are not relevant
    # for pure site creation.
    https_connection_url = "http://%s:%s@%s/" % (user, password, zope_access)
    mysql_connection_string = \
        "%(mysql_database)s@%(ip)s:%(tcp_port)s %(mysql_user)s " \
        "%(mysql_password)s" % mysql_conf
    # XXX URL list vs. repository + list of bt5 names?
    bt5_list = self.parameter_dict.get("bt5_list", "").split()
    bt5_repository_list = self.parameter_dict.get("bt5_repository_list",
        "").split()
    self.path_list.extend(zc.buildout.easy_install.scripts([('erp5_update',
      __name__ + '.erp5', 'updateERP5')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[erp5_site_id,
        mysql_connection_string, https_connection_url,
        memcached_conf.get('memcached_url'), conversion_server,
        kumo_conf.get("kumo_address"), bt5_list, bt5_repository_list]))
    return []
  def installZeo(self, ip):
    zodb_dir = os.path.join(self.data_root_directory, 'zodb')
    self._createDirectory(zodb_dir)
    zeo_configuration_dict = {}
    zeo_number = 0
    for zeo_server in sorted(self._zeo_storage_dict.iterkeys()):
      zeo_number += 1
      zeo_event_log = os.path.join(self.log_directory,
          'zeo-%s.log' % zeo_number)
      zeo_pid = os.path.join(self.run_directory, 'zeo-%s.pid' % zeo_number)
      self.registerLogRotation('zeo-%s' % zeo_number, [zeo_event_log],
          self.killpidfromfile + ' ' + zeo_pid + ' SIGUSR2')
      config = dict(
          zeo_ip=ip,
          zeo_port=self._zeo_storage_port_dict[zeo_server],
          zeo_event_log=zeo_event_log,
          zeo_pid=zeo_pid,
      )
      storage_definition_list = []
      for storage_name in sorted(self._zeo_storage_dict[zeo_server]):
        path = os.path.join(zodb_dir, '%s.fs' % storage_name)
        storage_definition_list.append("""<filestorage %(storage_name)s>
  path %(path)s
</filestorage>""" % dict(storage_name=storage_name, path=path))
        zeo_configuration_dict[storage_name] = dict(
            ip=ip,
            port=config['zeo_port'],
            path=path
        )
      config['zeo_filestorage_snippet'] = '\n'.join(storage_definition_list)
      zeo_conf_path = self.createConfigurationFile(
          'zeo-%s.conf' % zeo_number,
          self.substituteTemplate(self.getTemplateFilename('zeo.conf.in'),
            config))
      self.path_list.append(zeo_conf_path)
      wrapper = zc.buildout.easy_install.scripts([('zeo_%s' % zeo_number,
        __name__ + '.execute', 'execute')], self.ws, sys.executable,
        self.wrapper_directory, arguments=[
          self.options['runzeo_binary'].strip(), '-C', zeo_conf_path]
        )[0]
      self.path_list.append(wrapper)
    return zeo_configuration_dict
  def installTidStorage(self, ip, port, known_tid_storage_identifier_dict,
      access_url):
    """Install TidStorage with all required backup tools

    known_tid_storage_identifier_dict is a dictionary of:
      (((ip, port),), storagename): (filestorage path, url for serialize)

    url for serialize will be merged with access_url by internal tidstorage
    """
    backup_base_path = self.createBackupDirectory('zodb')
    # it is time to fill known_tid_storage_identifier_dict with backup
    # destination
    formatted_storage_dict = dict()
    for key, v in known_tid_storage_identifier_dict.copy().iteritems():
      # generate unique name for each backup
      storage_name = key[1]
      destination = os.path.join(backup_base_path, storage_name)
      self._createDirectory(destination)
      formatted_storage_dict[str(key)] = (v[0], destination, v[1])
    logfile = os.path.join(self.log_directory, 'tidstorage.log')
    pidfile = os.path.join(self.run_directory, 'tidstorage.pid')
    statusfile = os.path.join(self.log_directory, 'tidstorage.tid')
    timestamp_file_path = os.path.join(self.log_directory,
        'repozo_tidstorage_timestamp.log')
    # shared configuration file
    tidstorage_config = self.createConfigurationFile('tidstorage.py',
        self.substituteTemplate(self.getTemplateFilename('tidstorage.py.in'),
          dict(
            known_tid_storage_identifier_dict=pprint.pformat(
              formatted_storage_dict),
            base_url='%s/%%s/serialize' % access_url,
            host=ip,
            port=port,
            timestamp_file_path=timestamp_file_path,
            logfile=logfile,
            pidfile=pidfile,
            statusfile=statusfile
          )))
    # TID server
    tidstorage_server = zc.buildout.easy_install.scripts([('tidstoraged',
      __name__ + '.execute', 'execute')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[
        self.options['tidstoraged_binary'], '--nofork', '--config',
        tidstorage_config])[0]
    self.registerLogRotation('tidstorage', [logfile, timestamp_file_path],
        self.killpidfromfile + ' ' + pidfile + ' SIGHUP')
    self.path_list.append(tidstorage_config)
    self.path_list.append(tidstorage_server)
    # repozo wrapper
    tidstorage_repozo = zc.buildout.easy_install.scripts([
      ('tidstorage_repozo', __name__ + '.execute', 'execute')],
      self.ws, sys.executable, self.bin_directory, arguments=[
        self.options['tidstorage_repozo_binary'], '--config',
        tidstorage_config, '--repozo', self.options['repozo_binary'],
        '-z'])[0]
    self.path_list.append(tidstorage_repozo)
    # and backup configuration
    tidstorage_repozo_cron = os.path.join(self.cron_d, 'tidstorage_repozo')
    open(tidstorage_repozo_cron, 'w').write(
        '''0 0 * * * %(tidstorage_repozo)s
0 0 * * * cp -f %(tidstorage_tid)s %(tidstorage_tid_backup)s''' % dict(
          tidstorage_repozo=tidstorage_repozo,
          tidstorage_tid=statusfile,
          tidstorage_tid_backup=os.path.join(backup_base_path,
            'tidstorage.tid')))
    self.path_list.append(tidstorage_repozo_cron)
    return dict(host=ip, port=port)
  def installZope(self, ip, port, name, zodb_configuration_string,
      with_timerservice=False, tidstorage_config=None, thread_amount=1,
      with_deadlockdebugger=True):
    # Create zope configuration file
    zope_config = dict(products=self.options['products'],
      thread_amount=thread_amount)
    # configure default Zope2 zcml
    open(os.path.join(self.erp5_directory, 'etc', 'site.zcml'), 'w').write(
      pkg_resources.resource_string('Zope2', 'utilities/skel/etc/site.zcml'))
    zope_config['zodb_configuration_string'] = zodb_configuration_string
    zope_config['instance'] = self.erp5_directory
    zope_config['event_log'] = os.path.join(self.log_directory,
      '%s-event.log' % name)
    zope_config['z2_log'] = os.path.join(self.log_directory,
      '%s-Z2.log' % name)
    zope_config['pid-filename'] = os.path.join(self.run_directory,
      '%s.pid' % name)
    zope_config['lock-filename'] = os.path.join(self.run_directory,
      '%s.lock' % name)
    self.registerLogRotation(name, [zope_config['event_log'],
      zope_config['z2_log']], self.killpidfromfile + ' ' +
      zope_config['pid-filename'] + ' SIGUSR2')
    prefixed_products = []
    for product in reversed(zope_config['products'].split()):
      product = product.strip()
      if product:
        prefixed_products.append('products %s' % product)
    prefixed_products.insert(0, 'products %s' % os.path.join(
      self.erp5_directory, 'Products'))
    zope_config['products'] = '\n'.join(prefixed_products)
    zope_config['address'] = '%s:%s' % (ip, port)
    zope_config['tmp_directory'] = self.tmp_directory
    zope_config['path'] = self.bin_directory
    zope_wrapper_template_location = self.getTemplateFilename('zope.conf.in')
    zope_conf_content = self.substituteTemplate(
      zope_wrapper_template_location, zope_config)
    if with_timerservice:
      zope_conf_content += self.substituteTemplate(
        self.getTemplateFilename('zope.conf.timerservice.in'), zope_config)
    if tidstorage_config is not None:
      zope_conf_content += self.substituteTemplate(
        self.getTemplateFilename('zope-tidstorage-snippet.conf.in'),
        tidstorage_config)
    if with_deadlockdebugger:
      zope_conf_content += self.substituteTemplate(
        self.getTemplateFilename('zope-deadlockdebugger-snippet.conf.in'),
        dict(dump_url='/manage_debug_threads',
          secret=self.generatePassword()))
    zope_conf_path = self.createConfigurationFile("%s.conf" % name,
      zope_conf_content)
    self.path_list.append(zope_conf_path)
    # Create init script
    wrapper = zc.buildout.easy_install.scripts([(name,
      __name__ + '.execute', 'execute')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[
        self.options['runzope_binary'].strip(), '-C', zope_conf_path])[0]
    self.path_list.append(wrapper)
    return zope_config['address']
  def _getApacheConfigurationDict(self, prefix, ip, port):
    apache_conf = dict()
    apache_conf['pid_file'] = os.path.join(self.run_directory,
      prefix + '.pid')
    apache_conf['lock_file'] = os.path.join(self.run_directory,
      prefix + '.lock')
    apache_conf['ip'] = ip
    apache_conf['port'] = port
    apache_conf['server_admin'] = 'admin@'
    apache_conf['error_log'] = os.path.join(self.log_directory,
      prefix + '-error.log')
    apache_conf['access_log'] = os.path.join(self.log_directory,
      prefix + '-access.log')
    self.registerLogRotation(prefix, [apache_conf['error_log'],
      apache_conf['access_log']], self.killpidfromfile + ' ' +
      apache_conf['pid_file'] + ' SIGUSR1')
    return apache_conf
  def _writeApacheConfiguration(self, prefix, apache_conf, backend,
      access_control_string=None):
    rewrite_rule_template = \
      "RewriteRule (.*) http://%(backend)s$1 [L,P]"
    if access_control_string is None:
      path_template = pkg_resources.resource_string(__name__,
        'template/apache.zope.conf.path.in')
      path = path_template % dict(path='/')
    else:
      path_template = pkg_resources.resource_string(__name__,
        'template/apache.zope.conf.path-protected.in')
      path = path_template % dict(path='/',
        access_control_string=access_control_string)
    d = dict(
      path=path,
      backend=backend,
      backend_path='/',
      port=apache_conf['port'],
      vhname=path.replace('/', ''),
    )
    rewrite_rule = rewrite_rule_template % d
    apache_conf.update(**dict(
      path_enable=path,
      rewrite_rule=rewrite_rule
    ))
    apache_conf_string = pkg_resources.resource_string(__name__,
      'template/apache.zope.conf.in') % apache_conf
    return self.createConfigurationFile(prefix + '.conf', apache_conf_string)
  def installFrontendZopeApache(self, ip, port, name, frontend_path,
      backend_url, backend_path, key, certificate,
      access_control_string=None):
    ident = 'frontend_' + name
    apache_conf = self._getApacheConfigurationDict(ident, ip, port)
    apache_conf['server_name'] = name
    apache_conf['ssl_snippet'] = pkg_resources.resource_string(__name__,
      'template/apache.ssl-snippet.conf.in') % dict(
        login_certificate=certificate, login_key=key)
    rewrite_rule_template = \
      "RewriteRule ^%(path)s($|/.*) %(backend_url)s/VirtualHostBase/https/%(server_name)s:%(port)s%(backend_path)s/VirtualHostRoot/_vh_%(vhname)s$1 [L,P]\n"
    path = pkg_resources.resource_string(__name__,
      'template/apache.zope.conf.path-protected.in') % dict(
        path='/', access_control_string='none')
    if access_control_string is None:
      path_template = pkg_resources.resource_string(__name__,
        'template/apache.zope.conf.path.in')
      path += path_template % dict(path=frontend_path)
    else:
      path_template = pkg_resources.resource_string(__name__,
        'template/apache.zope.conf.path-protected.in')
      path += path_template % dict(path=frontend_path,
        access_control_string=access_control_string)
    d = dict(
      path=frontend_path,
      backend_url=backend_url,
      backend_path=backend_path,
      port=apache_conf['port'],
      vhname=frontend_path.replace('/', ''),
      server_name=name
    )
    rewrite_rule = rewrite_rule_template % d
    apache_conf.update(**dict(
      path_enable=path,
      rewrite_rule=rewrite_rule
    ))
    apache_conf_string = pkg_resources.resource_string(__name__,
      'template/apache.zope.conf.in') % apache_conf
    apache_config_file = self.createConfigurationFile(ident + '.conf',
      apache_conf_string)
    self.path_list.append(apache_config_file)
    self.path_list.extend(zc.buildout.easy_install.scripts([(ident,
      __name__ + '.apache', 'runApache')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[
        dict(required_path_list=[key, certificate],
          binary=self.options['httpd_binary'],
          config=apache_config_file)
      ]))
  def installBackendApache(self, ip, port, backend, key, certificate,
      suffix='', access_control_string=None):
    apache_conf = self._getApacheConfigurationDict('backend_apache' + suffix,
      ip, port)
    apache_conf['server_name'] = '%s' % apache_conf['ip']
    apache_conf['ssl_snippet'] = pkg_resources.resource_string(__name__,
      'template/apache.ssl-snippet.conf.in') % dict(
        login_certificate=certificate, login_key=key)
    apache_config_file = self._writeApacheConfiguration(
      'backend_apache' + suffix, apache_conf, backend,
      access_control_string)
    self.path_list.append(apache_config_file)
    self.path_list.extend(zc.buildout.easy_install.scripts([(
      'backend_apache' + suffix, __name__ + '.apache', 'runApache')],
      self.ws, sys.executable, self.wrapper_directory, arguments=[
        dict(required_path_list=[key, certificate],
          binary=self.options['httpd_binary'],
          config=apache_config_file)
      ]))
    # Note: IPv6 is assumed always
    return 'https://[%(ip)s]:%(port)s' % apache_conf
  def installMysqlServer(self, ip, port, database='erp5', user='user',
      test_database='test_erp5', test_user='test_user',
      template_filename=None, parallel_test_database_amount=100,
      mysql_conf=None):
    if mysql_conf is None:
      mysql_conf = {}
    backup_directory = self.createBackupDirectory('mysql')
    if template_filename is None:
      template_filename = self.getTemplateFilename('my.cnf.in')
    error_log = os.path.join(self.log_directory, 'mysqld.log')
    slow_query_log = os.path.join(self.log_directory, 'mysql-slow.log')
    mysql_conf.update(
      ip=ip,
      data_directory=os.path.join(self.data_root_directory, 'mysql'),
      tcp_port=port,
      pid_file=os.path.join(self.run_directory, 'mysqld.pid'),
      socket=os.path.join(self.run_directory, 'mysqld.sock'),
      error_log=error_log,
      slow_query_log=slow_query_log,
      mysql_database=database,
      mysql_user=user,
      mysql_password=self.generatePassword(),
      mysql_test_password=self.generatePassword(),
      mysql_test_database=test_database,
      mysql_test_user=test_user,
      mysql_parallel_test_dict=[
        ('test_%i' % x,) * 2 + (self.generatePassword(),) \
        for x in xrange(0, parallel_test_database_amount)],
    )
    self.registerLogRotation('mysql', [error_log, slow_query_log],
      '%(mysql_binary)s --no-defaults -B --user=root '
      '--socket=%(mysql_socket)s -e "FLUSH LOGS"' % dict(
        mysql_binary=self.options['mysql_binary'],
        mysql_socket=mysql_conf['socket']))
    self._createDirectory(mysql_conf['data_directory'])
    mysql_conf_path = self.createConfigurationFile("my.cnf",
      self.substituteTemplate(template_filename, mysql_conf))
    mysql_script_list = []
    for x_database, x_user, x_password in \
        [(mysql_conf['mysql_database'], mysql_conf['mysql_user'],
          mysql_conf['mysql_password']),
         (mysql_conf['mysql_test_database'], mysql_conf['mysql_test_user'],
          mysql_conf['mysql_test_password']),
        ] + mysql_conf['mysql_parallel_test_dict']:
      mysql_script_list.append(pkg_resources.resource_string(__name__,
        'template/initmysql.sql.in') % {
          'mysql_database': x_database,
          'mysql_user': x_user,
          'mysql_password': x_password})
    mysql_script_list.append('EXIT')
    mysql_script = '\n'.join(mysql_script_list)
    self.path_list.extend(zc.buildout.easy_install.scripts([('mysql_update',
      __name__ + '.mysql', 'updateMysql')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[dict(
        mysql_script=mysql_script,
        mysql_binary=self.options['mysql_binary'].strip(),
        mysql_upgrade_binary=self.options['mysql_upgrade_binary'].strip(),
        socket=mysql_conf['socket'],
      )]))
    self.path_list.extend(zc.buildout.easy_install.scripts([('mysqld',
      __name__ + '.mysql', 'runMysql')], self.ws, sys.executable,
      self.wrapper_directory, arguments=[dict(
        mysql_install_binary=self.options['mysql_install_binary'].strip(),
        mysqld_binary=self.options['mysqld_binary'].strip(),
        data_directory=mysql_conf['data_directory'].strip(),
        mysql_binary=self.options['mysql_binary'].strip(),
        socket=mysql_conf['socket'].strip(),
        configuration_file=mysql_conf_path,
      )]))
    self.path_list.extend([mysql_conf_path])
    # backup configuration
    backup_directory = self.createBackupDirectory('mysql')
    full_backup = os.path.join(backup_directory, 'full')
    incremental_backup = os.path.join(backup_directory, 'incremental')
    self._createDirectory(full_backup)
    self._createDirectory(incremental_backup)
    innobackupex_argument_list = [self.options['perl_binary'],
      self.options['innobackupex_binary'],
      '--defaults-file=%s' % mysql_conf_path,
      '--socket=%s' % mysql_conf['socket'].strip(), '--user=root']
    environment = dict(PATH='%s' % self.bin_directory)
    innobackupex_incremental = zc.buildout.easy_install.scripts([(
      'innobackupex_incremental', __name__ + '.execute', 'executee')],
      self.ws, sys.executable, self.bin_directory, arguments=[
        innobackupex_argument_list + ['--incremental'], environment])[0]
    self.path_list.append(innobackupex_incremental)
    innobackupex_full = zc.buildout.easy_install.scripts([(
      'innobackupex_full', __name__ + '.execute', 'executee')],
      self.ws, sys.executable, self.bin_directory, arguments=[
        innobackupex_argument_list, environment])[0]
    self.path_list.append(innobackupex_full)
    backup_controller = zc.buildout.easy_install.scripts([
      ('innobackupex_controller', __name__ + '.innobackupex', 'controller')],
      self.ws, sys.executable, self.bin_directory, arguments=[
        innobackupex_incremental, innobackupex_full, full_backup,
        incremental_backup])[0]
    self.path_list.append(backup_controller)
    mysql_backup_cron = os.path.join(self.cron_d, 'mysql_backup')
    open(mysql_backup_cron, 'w').write('0 0 * * * ' + backup_controller)
    self.path_list.append(mysql_backup_cron)
    # The return could be more explicit database, user ...
    return mysql_conf
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/apache.py
0 → 100644
View file @
43653c40
import os
import sys
import time


def runApache(args):
  sleep = 60
  conf = args[0]
  while True:
    ready = True
    for f in conf.get('required_path_list', []):
      if not os.path.exists(f):
        print 'File %r does not exist, sleeping for %s' % (f, sleep)
        ready = False
    if ready:
      break
    time.sleep(sleep)
  apache_wrapper_list = [conf['binary'], '-f', conf['config'],
    '-DFOREGROUND']
  apache_wrapper_list.extend(sys.argv[1:])
  sys.stdout.flush()
  sys.stderr.flush()
  os.execl(apache_wrapper_list[0], *apache_wrapper_list)
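`runApache` blocks until every file in `required_path_list` (the SSL key and certificate) exists before replacing itself with the Apache process. The readiness check alone can be sketched as follows; the helper name is ours, not part of the recipe:

```python
import os

def paths_ready(required_path_list):
    # True only when every required file already exists on disk;
    # runApache keeps sleeping and retrying until this holds.
    return all(os.path.exists(f) for f in required_path_list)
```

Polling like this is deliberately simple: the wrapper is restarted by supervision anyway, so a sleep-and-retry loop is enough.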
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/catdatefile.py
import os
import sys
import time


def catdatefile(args):
  directory = args[0]
  try:
    suffix = args[1]
  except IndexError:
    suffix = '.log'
  f = open(os.path.join(directory,
    time.strftime('%Y-%m-%d.%H:%M.%s') + suffix), 'aw')
  for line in sys.stdin.read():
    f.write(line)
  f.close()
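`catdatefile` captures stdin into a file named after the current time plus a suffix. The naming scheme can be sketched on its own; the helper is an illustration, not part of the recipe, and note that the lowercase `%s` directive is a glibc extension (seconds since the epoch) rather than standard strftime:

```python
import time

def dated_filename(suffix='.log', now=None):
    # Build a '%Y-%m-%d.%H:%M.%s'-style name as catdatefile does; 'now' is
    # a struct_time, defaulting to the current local time.
    return time.strftime('%Y-%m-%d.%H:%M.%s', now or time.localtime()) + suffix
```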
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/certificate_authority.py
import os
import subprocess
import time
import ConfigParser


def popenCommunicate(command_list, input=None):
  subprocess_kw = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
  if input is not None:
    subprocess_kw.update(stdin=subprocess.PIPE)
  popen = subprocess.Popen(command_list, **subprocess_kw)
  result = popen.communicate(input)[0]
  if popen.returncode is None:
    popen.kill()
  if popen.returncode != 0:
    raise ValueError('Issue during calling %r, result was:\n%s' % (
      command_list, result))
  return result


class CertificateAuthority:
  def __init__(self, key, certificate, openssl_binary,
      openssl_configuration, request_dir):
    self.key = key
    self.certificate = certificate
    self.openssl_binary = openssl_binary
    self.openssl_configuration = openssl_configuration
    self.request_dir = request_dir

  def checkAuthority(self):
    file_list = [self.key, self.certificate]
    ca_ready = True
    for f in file_list:
      if not os.path.exists(f):
        ca_ready = False
        break
    if ca_ready:
      return
    for f in file_list:
      if os.path.exists(f):
        os.unlink(f)
    try:
      # no CA, let us create new one
      popenCommunicate([self.openssl_binary, 'req', '-nodes', '-config',
        self.openssl_configuration, '-new', '-x509', '-extensions', 'v3_ca',
        '-keyout', self.key, '-out', self.certificate, '-days', '10950'],
        'Automatic Certificate Authority\n')
    except:
      try:
        for f in file_list:
          if os.path.exists(f):
            os.unlink(f)
      except:
        # do not raise during cleanup
        pass
      raise

  def _checkCertificate(self, common_name, key, certificate):
    file_list = [key, certificate]
    ready = True
    for f in file_list:
      if not os.path.exists(f):
        ready = False
        break
    if ready:
      return False
    for f in file_list:
      if os.path.exists(f):
        os.unlink(f)
    csr = certificate + '.csr'
    try:
      popenCommunicate([self.openssl_binary, 'req', '-config',
        self.openssl_configuration, '-nodes', '-new', '-keyout', key,
        '-out', csr, '-days', '3650'], common_name + '\n')
      try:
        popenCommunicate([self.openssl_binary, 'ca', '-batch', '-config',
          self.openssl_configuration, '-out', certificate, '-infiles', csr])
      finally:
        if os.path.exists(csr):
          os.unlink(csr)
    except:
      try:
        for f in file_list:
          if os.path.exists(f):
            os.unlink(f)
      except:
        # do not raise during cleanup
        pass
      raise
    else:
      return True

  def checkRequestDir(self):
    for request_file in os.listdir(self.request_dir):
      parser = ConfigParser.RawConfigParser()
      parser.readfp(open(os.path.join(self.request_dir, request_file), 'r'))
      if self._checkCertificate(parser.get('certificate', 'name'),
          parser.get('certificate', 'key_file'),
          parser.get('certificate', 'certificate_file')):
        print 'Created certificate %r' % parser.get('certificate', 'name')


def runCertificateAuthority(args):
  ca_conf = args[0]
  ca = CertificateAuthority(ca_conf['key'], ca_conf['certificate'],
    ca_conf['openssl_binary'], ca_conf['openssl_configuration'],
    ca_conf['request_dir'])
  while True:
    ca.checkAuthority()
    ca.checkRequestDir()
    time.sleep(60)
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/erp5.py
##############################################################################
#
# Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import time
import urllib
import xmlrpclib
import socket


def updateERP5(args):
  # FIXME Use a dict
  site_id = args[0]
  mysql_string = args[1]
  base_url = args[2]
  memcached_provider = args[3]
  conversion_server = args[4]
  kumo_provider = args[5]
  bt5_list = args[6]
  bt5_repository_list = []
  if len(args) > 7:
    bt5_repository_list = args[7]
  if len(bt5_list) > 0 and len(bt5_repository_list) == 0:
    bt5_repository_list = ["http://www.erp5.org/dists/snapshot/bt5"]
  erp5_catalog_storage = "erp5_mysql_innodb_catalog"
  business_template_setup_finished = 0
  external_service_assertion = 1
  update_script_id = "ERP5Site_assertExternalServiceList"
  sleep = 120
  while True:
    try:
      proxy = xmlrpclib.ServerProxy(base_url)
      print "Adding site at %r" % base_url
      if proxy.isERP5SitePresent() == False:
        url = '%s/manage_addProduct/ERP5/manage_addERP5Site' % base_url
        result = urllib.urlopen(url, urllib.urlencode({
          "id": site_id,
          "erp5_catalog_storage": erp5_catalog_storage,
          "erp5_sql_connection_string": mysql_string,
          "cmf_activity_sql_connection_string": mysql_string,
        }))
        print "ERP5 Site creation output: %s" % result.read()
      if not business_template_setup_finished:
        if proxy.isERP5SitePresent() == True:
          print "Start to set initial business template setup."
          # Update URL to ERP5 Site
          erp5 = xmlrpclib.ServerProxy("%s/%s" % (base_url, site_id),
            allow_none=1)
          repository_list = erp5.portal_templates.getRepositoryList()
          if len(bt5_repository_list) > 0 and \
              set(bt5_repository_list) != set(repository_list):
            erp5.portal_templates.\
              updateRepositoryBusinessTemplateList(bt5_repository_list, None)
          installed_bt5_list = \
            erp5.portal_templates.getInstalledBusinessTemplateTitleList()
          for bt5 in bt5_list:
            if bt5 not in installed_bt5_list:
              erp5.portal_templates.\
                installBusinessTemplatesFromRepositories([bt5])
          repository_set = set(erp5.portal_templates.getRepositoryList())
          installed_bt5_list = erp5.portal_templates.\
            getInstalledBusinessTemplateTitleList()
          if (set(repository_set) == set(bt5_repository_list)) and \
              len([i for i in bt5_list if i not in installed_bt5_list]) == 0:
            print "Repositories updated and business templates installed."
            business_template_setup_finished = 1
      if external_service_assertion:
        url = "%s/%s/%s" % (base_url, site_id, update_script_id)
        result = urllib.urlopen(url, urllib.urlencode({
          "memcached": memcached_provider,
          "kumo": kumo_provider,
          "conversion_server": conversion_server,
        })).read()
        external_service_assertion = not (result == "True")
    except IOError:
      print "Unable to create the ERP5 Site!"
    except socket.error, e:
      print "Unable to connect to ZOPE! %s" % e
    except xmlrpclib.Fault, e:
      print "XMLRPC Fault: %s" % e
    time.sleep(sleep)
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/execute.py
import sys
import os
import signal
import subprocess
import time


def execute(args):
  """Portable execution with process replacement"""
  # Note: Candidate for slapos.lib.recipe
  os.execv(args[0], args + sys.argv[1:])

child_pg = None


def executee(args):
  """Portable execution with process replacement and environment manipulation"""
  exec_list = list(args[0])
  environment = args[1]
  env = os.environ.copy()
  for k, v in environment.iteritems():
    env[k] = v
  os.execve(exec_list[0], exec_list + sys.argv[1:], env)


def sig_handler(signal_number, frame):
  print 'Received signal %r, killing children and exiting' % signal_number
  if child_pg is not None:
    os.killpg(child_pg, signal.SIGHUP)
    os.killpg(child_pg, signal.SIGTERM)
  sys.exit(0)

signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)


def execute_with_signal_translation(args):
  """Run process as children and translate from SIGTERM to another signal"""
  child = subprocess.Popen(args, close_fds=True, preexec_fn=os.setsid)
  child_pg = child.pid
  try:
    print 'Process %r started' % args
    while True:
      time.sleep(10)
  finally:
    os.killpg(child_pg, signal.SIGHUP)
    os.killpg(child_pg, signal.SIGTERM)
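Before replacing the process, `executee` overlays the caller-supplied variables on a copy of the parent environment and passes the result to `os.execve`. The merge step alone can be sketched and exercised without actually replacing the process; the helper name and the variable used below are illustrative only:

```python
import os

def merged_environment(extra):
    # Copy the current environment and overlay the caller's variables,
    # mirroring what executee builds before calling os.execve.
    env = os.environ.copy()
    for k, v in extra.items():
        env[k] = v
    return env
```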
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/innobackupex.py
import os
import glob


def controller(args):
  """Creates full or incremental backup
  If no full backup is done, it is created
  If full backup exists incremental backup is done starting with base
  base is the newest (according to date) full or incremental backup
  """
  innobackupex_incremental, innobackupex_full, full_backup, \
    incremental_backup = args
  if len(os.listdir(full_backup)) == 0:
    print 'Doing full backup in %r' % full_backup
    os.execv(innobackupex_full, [innobackupex_full, full_backup])
  else:
    backup_list = filter(os.path.isdir, glob.glob(full_backup + "/*") +
      glob.glob(incremental_backup + "/*"))
    backup_list.sort(key=lambda x: os.path.getmtime(x), reverse=True)
    base = backup_list[0]
    print 'Doing incremental backup in %r using %r as a base' % (
      incremental_backup, base)
    os.execv(innobackupex_incremental, [innobackupex_incremental,
      '--incremental-basedir=%s' % base, incremental_backup])
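The base-selection rule from the docstring above (the newest full or incremental backup, by modification time, becomes the `--incremental-basedir`) can be sketched independently of innobackupex; the helper name is ours, not part of the recipe:

```python
import os

def newest_backup(candidate_dirs):
    # Pick the most recently modified directory among existing ones,
    # mirroring how the controller chooses its incremental base.
    dirs = [d for d in candidate_dirs if os.path.isdir(d)]
    return max(dirs, key=os.path.getmtime) if dirs else None
```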
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/killpidfromfile.py
import sys
import os
import signal


def killpidfromfile():
  file = sys.argv[1]
  sig = getattr(signal, sys.argv[2], None)
  if sig is None:
    raise ValueError('Unknown signal name %s' % sys.argv[2])
  if os.path.exists(file):
    pid = int(open(file).read())
    print 'Killing pid %s with signal %s' % (pid, sys.argv[2])
    os.kill(pid, sig)
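The `getattr(signal, name, None)` lookup above turns a command-line signal name such as `SIGHUP` into its numeric value. The resolution step can be sketched on its own; the function name below is an illustration, not part of the recipe:

```python
import signal

def resolve_signal(name):
    # Map a name like 'SIGTERM' to its numeric value, rejecting
    # unknown names just as killpidfromfile does.
    sig = getattr(signal, name, None)
    if sig is None:
        raise ValueError('Unknown signal name %s' % name)
    return int(sig)
```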
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/mysql.py
import os
import subprocess
import time
import sys


def runMysql(args):
  sleep = 60
  conf = args[0]
  mysqld_wrapper_list = [conf['mysqld_binary'], '--defaults-file=%s' %
    conf['configuration_file']]
  # we trust mysql_install that if mysql directory is available mysql was
  # correctly initialised
  if not os.path.isdir(os.path.join(conf['data_directory'], 'mysql')):
    while True:
      # XXX: Protect with proper root password
      # XXX: Follow http://dev.mysql.com/doc/refman/5.0/en/default-privileges.html
      popen = subprocess.Popen([conf['mysql_install_binary'],
        '--skip-name-resolve', '--no-defaults', '--datadir=%s' %
        conf['data_directory']],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
      result = popen.communicate()[0]
      if popen.returncode is None or popen.returncode != 0:
        print "Failed to initialise server.\nThe error was: %s" % result
        print "Waiting for %ss and retrying" % sleep
        time.sleep(sleep)
      else:
        print "MySQL properly initialised"
        break
  else:
    print "MySQL already initialised"
  print "Starting %r" % mysqld_wrapper_list[0]
  sys.stdout.flush()
  sys.stderr.flush()
  os.execl(mysqld_wrapper_list[0], *mysqld_wrapper_list)


def updateMysql(args):
  conf = args[0]
  sleep = 30
  is_succeed = False
  while True:
    if not is_succeed:
      mysql_upgrade_list = [conf['mysql_upgrade_binary'], '--no-defaults',
        '--user=root', '--socket=%s' % conf['socket']]
      mysql_upgrade = subprocess.Popen(mysql_upgrade_list,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
      result = mysql_upgrade.communicate()[0]
      if mysql_upgrade.returncode is None:
        mysql_upgrade.kill()
      if mysql_upgrade.returncode != 0 and \
          not 'is already upgraded' in result:
        print "Command %r failed with result:\n%s" % (mysql_upgrade_list,
          result)
        print 'Sleeping for %ss and retrying' % sleep
      else:
        if mysql_upgrade.returncode == 0:
          print "MySQL database upgraded with result:\n%s" % result
        else:
          print "No need to upgrade MySQL database"
        mysql_list = [conf['mysql_binary'].strip(), '--no-defaults', '-B',
          '--user=root', '--socket=%s' % conf['socket']]
        mysql = subprocess.Popen(mysql_list, stdin=subprocess.PIPE,
          stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        result = mysql.communicate(conf['mysql_script'])[0]
        if mysql.returncode is None:
          mysql.kill()
        if mysql.returncode != 0:
          print 'Command %r failed with:\n%s' % (mysql_list, result)
          print 'Sleeping for %ss and retrying' % sleep
        else:
          is_succeed = True
          print 'SlapOS initialisation script successfully applied on ' \
            'database.'
    sys.stdout.flush()
    sys.stderr.flush()
    time.sleep(sleep)
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.location-snippet.conf.in
<Location %(location)s>
Order Deny,Allow
Deny from all
Allow from %(allow_string)s
</Location>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.ssl-snippet.conf.in
SSLEngine on
SSLCertificateFile %(login_certificate)s
SSLCertificateKeyFile %(login_key)s
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.in
# Apache configuration file for Zope
# Automatically generated
# Basic server configuration
PidFile "%(pid_file)s"
LockFile "%(lock_file)s"
Listen %(ip)s:%(port)s
ServerAdmin %(server_admin)s
DefaultType text/plain
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
# As backend is trusting REMOTE_USER header unset it always
RequestHeader unset REMOTE_USER
# SSL Configuration
%(ssl_snippet)s
# Log configuration
ErrorLog "%(error_log)s"
LogLevel warn
LogFormat "%%h %%{REMOTE_USER}i %%l %%u %%t \"%%r\" %%>s %%b \"%%{Referer}i\" \"%%{User-Agent}i\"" combined
LogFormat "%%h %%{REMOTE_USER}i %%l %%u %%t \"%%r\" %%>s %%b" common
CustomLog "%(access_log)s" common
# Directory protection
<Directory />
Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
</Directory>
%(path_enable)s
# Magic of Zope related rewrite
RewriteEngine On
%(rewrite_rule)s
# List of modules
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule headers_module modules/mod_headers.so
LoadModule antiloris_module modules/mod_antiloris.so
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.path-protected.in
# Path protected
<Location %(path)s>
Order Deny,Allow
Deny from all
Allow from %(access_control_string)s
</Location>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/apache.zope.conf.path.in
# Path enabled
<Location %(path)s>
Order Allow,Deny
Allow from all
</Location>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/cloudooo.cfg.in
[app:main]
use = egg:cloudooo
#
## System config
#
debug_mode = True
# Folder where pid files, lock files and virtual frame buffer mappings
# are stored. A "tmp" subfolder must be created inside it, because it is
# used to create all temporary documents.
working_path = %(working_path)s
# Folder where UNO library is installed
uno_path = %(uno_path)s
# Folder where soffice.bin is installed
office_binary_path = %(office_binary_path)s
#
## Monitor Settings
#
# Request limit per OpenOffice instance. When the limit is exceeded, the
# instance is stopped and another one is started.
limit_number_request = 100
# Interval to check the factory
monitor_interval = 10
timeout_response = 180
enable_memory_monitor = True
# Set the limit in MB
# e.g 1000 = 1 GB, 100 = 100 MB
limit_memory_used = 3000
#
## OOFactory Settings
#
# The pool consist of several OpenOffice.org instances
application_hostname = %(ip)s
# OpenOffice Port
openoffice_port = %(openoffice_port)s
# LD_LIBRARY_PATH passed to OpenOffice
env-LD_LIBRARY_PATH = %(LD_LIBRARY_PATH)s
#
# Mimetype Registry
# It is used to select the handler that will be used in conversion.
# Priority matters, first match take precedence on next lines.
mimetype_registry =
application/pdf * pdf
application/vnd.oasis.opendocument* * ooo
application/vnd.sun.xml* * ooo
text/* * ooo
image/* image/* imagemagick
video/* * ffmpeg
* application/vnd.oasis.opendocument* ooo
[server:main]
use = egg:PasteScript#wsgiutils
host = %(ip)s
port = %(port)s
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/haproxy.cfg.in
global
maxconn 4096
defaults
log global
mode http
option httplog
option dontlognull
retries 1
option redispatch
maxconn 2000
timeout server 3000s
timeout queue 5s
timeout connect 10s
timeout client 3600s
listen %(name)s %(ip)s:%(port)s
cookie SERVERID insert
balance roundrobin
%(server_text)s
option httpchk GET %(server_check_path)s
stats uri /haproxy
stats realm Global\ statistics
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/initmysql.sql.in
CREATE DATABASE IF NOT EXISTS %(mysql_database)s;
GRANT ALL PRIVILEGES ON %(mysql_database)s.* TO %(mysql_user)s@'%%' IDENTIFIED BY '%(mysql_password)s';
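These `.in` templates are plain Python %-substitution strings, filled in by the recipe's `substituteTemplate` helper. Rendering the SQL snippet above can be sketched as follows; the database name and user are the recipe's defaults, while `'secret'` stands in for a generated password:

```python
template = (
    "CREATE DATABASE IF NOT EXISTS %(mysql_database)s;\n"
    "GRANT ALL PRIVILEGES ON %(mysql_database)s.* TO %(mysql_user)s@'%%' "
    "IDENTIFIED BY '%(mysql_password)s';\n"
)

def render(template, **values):
    # Old-style %-substitution; the doubled '%%' collapses to the
    # literal '%' wildcard host in the rendered GRANT statement.
    return template % values

sql = render(template, mysql_database='erp5', mysql_user='user',
             mysql_password='secret')
```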
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_gateway.in
#!/bin/sh
exec %(kumo_gateway_binary)s -F -E -m %(kumo_manager_ip)s:%(kumo_manager_port)s -t %(kumo_gateway_ip)s:%(kumo_gateway_port)s -o %(kumo_gateway_log)s
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_manager.in
#!/bin/sh
exec %(kumo_manager_binary)s -a -l %(kumo_manager_ip)s:%(kumo_manager_port)s -o %(kumo_manager_log)s
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/kumo_server.in
#!/bin/sh
exec %(kumo_server_binary)s -l %(kumo_server_ip)s:%(kumo_server_port)s -L %(kumo_server_listen_port)s -m %(kumo_manager_ip)s:%(kumo_manager_port)s -s %(kumo_server_storage)s -o %(kumo_server_log)s
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/logrotate_entry.in
%(file_list)s {
daily
dateext
rotate 30
compress
notifempty
sharedscripts
create
postrotate
%(postrotate)s
endscript
olddir %(olddir)s
}
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/memcached.in
#!/bin/sh
exec %(memcached_binary)s -p %(memcached_port)s -U %(memcached_port)s -l %(memcached_ip)s
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/my.cnf.in
# ERP5 buildout my.cnf template based on my-huge.cnf shipped with mysql
# The MySQL server
[mysqld]
# ERP5 requires InnoDB storage. MySQL by default falls back to a different
# engine, such as MyISAM, when the requested one is unavailable. That
# behaviour generates problems when tables requested as InnoDB are silently
# created with the MyISAM engine.
#
# Failing loudly is required in such a case.
sql-mode="NO_ENGINE_SUBSTITUTION"
skip-show-database
port = %(tcp_port)s
bind-address = %(ip)s
socket = %(socket)s
datadir = %(data_directory)s
pid-file = %(pid_file)s
log-error = %(error_log)s
log-slow-file = %(slow_query_log)s
long_query_time = 5
max_allowed_packet = 128M
query_cache_size = 32M
plugin-load = ha_innodb_plugin.so
# The following are important to configure and depend a lot on the size of
# your database and the available resources.
#innodb_buffer_pool_size = 4G
#innodb_log_file_size = 256M
#innodb_log_buffer_size = 8M
# Some dangerous settings you may want to uncomment if you only care about
# performance or reduced disk access. Useful for unit tests.
#innodb_flush_log_at_trx_commit = 0
#innodb_flush_method = nosync
#innodb_doublewrite = 0
#sync_frm = 0
# Uncomment the following if you need binary logging, which is recommended
# on production instances (either for replication or incremental backups).
#log-bin=mysql-bin
# Force utf8 usage
collation_server = utf8_unicode_ci
character_set_server = utf8
skip-character-set-client-handshake
[mysql]
no-auto-rehash
socket = %(socket)s
[mysqlhotcopy]
interactive-timeout
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/openssl.cnf.ca.in
#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#
# This definition stops the following lines choking if HOME isn't
# defined.
HOME = .
RANDFILE = $ENV::HOME/.rnd
# Extra OBJECT IDENTIFIER info:
#oid_file = $ENV::HOME/.oid
oid_section = new_oids
# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)
[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6
# Policies used by the TSA examples.
tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7
####################################################################
[ ca ]
default_ca = CA_default # The default ca section
####################################################################
[ CA_default ]
dir = %(working_directory)s # Where everything is kept
certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
# several certificates with the same subject.
new_certs_dir = $dir/newcerts # default place for new certs.
certificate = $dir/cacert.pem # The CA certificate
serial = $dir/serial # The current serial number
crlnumber = $dir/crlnumber # the current crl number
# must be commented out to leave a V1 CRL
crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem # The private key
RANDFILE = $dir/private/.rand # private random number file
x509_extensions = usr_cert # The extensions to add to the cert
# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt = ca_default # Subject Name options
cert_opt = ca_default # Certificate field options
# Extension copying option: use with caution.
# copy_extensions = copy
# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions = crl_ext
default_days = 3650 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = default # use public key default MD
preserve = no # keep passed DN ordering
# A few different ways of specifying how similar the request should look.
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy = policy_match
# For the CA policy
[ policy_match ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
####################################################################
[ req ]
default_bits = 2048
default_md = sha1
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
#attributes = req_attributes
x509_extensions = v3_ca # The extensions to add to the self-signed cert
# Passwords for private keys; if not present, they will be prompted for
# input_password = secret
# output_password = secret
# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only
# req_extensions = v3_req # The extensions to add to a certificate request
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = %(country_code)s
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = %(state)s
localityName = Locality Name (eg, city)
localityName_value = %(city)s
0.organizationName = Organization Name (eg, company)
0.organizationName_value = %(company)s
# we can do this but it is not needed normally :-)
#1.organizationName = Second Organization Name (eg, company)
#1.organizationName_default = World Wide Web Pty Ltd
commonName = Common Name (eg, your name or your server\'s hostname)
commonName_max = 64
emailAddress = Email Address
emailAddress_value = %(email_address)s
emailAddress_max = 64
# SET-ex3 = SET extension number 3
#[ req_attributes ]
#challengePassword = A challenge password
#challengePassword_min = 4
#challengePassword_max = 20
#
#unstructuredName = An optional company name
[ usr_cert ]
# These extensions are added when 'ca' signs a request.
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This is required for TSA certificates.
# extendedKeyUsage = critical,timeStamping
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
[ v3_ca ]
# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true
# Key usage: this is typical for a CA certificate. However, since it will
# prevent the certificate being used as a test self-signed certificate, it is
# best left out by default.
# keyUsage = cRLSign, keyCertSign
# Some might want this also
# nsCertType = sslCA, emailCA
# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy
# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF
[ crl_ext ]
# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.
# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate
# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints=CA:FALSE
# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.
# This is OK for an SSL server.
# nsCertType = server
# For an object signing certificate this would be used.
# nsCertType = objsign
# For normal client use this is typical
# nsCertType = client, email
# and for everything including object signing:
# nsCertType = client, email, objsign
# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# This will be displayed in Netscape's comment listbox.
nsComment = "OpenSSL Generated Certificate"
# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move
# Copy subject details
# issuerAltName=issuer:copy
#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName
# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo
####################################################################
[ tsa ]
default_tsa = tsa_config1 # the default TSA section
[ tsa_config1 ]
# These are used by the TSA reply generation only.
dir = /etc/pki/tls # TSA root directory
serial = $dir/tsaserial # The current serial number (mandatory)
crypto_device = builtin # OpenSSL engine to use for signing
signer_cert = $dir/tsacert.pem # The TSA signing certificate
# (optional)
certs = $dir/cacert.pem # Certificate chain to include in reply
# (optional)
signer_key = $dir/private/tsakey.pem # The TSA private key (optional)
default_policy = tsa_policy1 # Policy if request did not specify it
# (optional)
other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional)
digests = md5, sha1 # Acceptable message digests (mandatory)
accuracy = secs:1, millisecs:500, microsecs:100 # (optional)
clock_precision_digits = 0 # number of digits after dot. (optional)
ordering = yes # Is ordering defined for timestamps?
# (optional, default: no)
tsa_name = yes # Must the TSA name be included in the reply?
# (optional, default: no)
ess_cert_id_chain = no # Must the ESS cert id chain be included?
# (optional, default: no)
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/tidstorage.py.in
known_tid_storage_identifier_dict = %(known_tid_storage_identifier_dict)s
base_url = '%(base_url)s'
address = '%(host)s'
port = %(port)s
#fork = False
#setuid = None
#setgid = None
burst_period = 30
full_dump_period = 300
timestamp_file_path = '%(timestamp_file_path)s'
logfile_name = '%(logfile)s'
pidfile_name = '%(pidfile)s'
status_file = '%(statusfile)s'
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zeo.conf.in
# ZEO configuration file generated by SlapOS
<zeo>
address %(zeo_ip)s:%(zeo_port)s
read-only false
invalidation-queue-size 100
pid-filename %(zeo_pid)s
</zeo>
%(zeo_filestorage_snippet)s
<eventlog>
<logfile>
path %(zeo_event_log)s
</logfile>
</eventlog>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-deadlockdebugger-snippet.conf.in
# DeadlockDebugger configuration
<product-config DeadlockDebugger>
dump_url %(dump_url)s
secret %(secret)s
</product-config>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-tidstorage-snippet.conf.in
# TIDStorage connection
<product-config TIDStorage>
backend-ip %(host)s
backend-port %(port)s
</product-config>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-zeo-snippet.conf.in
<zodb_db %(storage_name)s>
mount-point %(mount_point)s
<zeoclient>
server %(address)s
storage %(storage_name)s
name %(storage_name)s
</zeoclient>
</zodb_db>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope-zodb-snippet.conf.in
<zodb_db root>
<filestorage>
path %(zodb_root_path)s
</filestorage>
mount-point /
</zodb_db>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope.conf.in
## Zope 2 configuration file generated by SlapOS
# Some defines
%%define INSTANCE %(instance)s
instancehome $INSTANCE
# Used products
%(products)s
# Environment override
<environment>
TMP %(tmp_directory)s
TMPDIR %(tmp_directory)s
HOME %(tmp_directory)s
PATH %(path)s
</environment>
# No need to debug
debug-mode off
# One thread is safe enough
zserver-threads %(thread_amount)s
# File location
pid-filename %(pid-filename)s
lock-filename %(lock-filename)s
# Temporary storage database (for sessions)
<zodb_db temporary>
<temporarystorage>
name temporary storage for sessioning
</temporarystorage>
mount-point /temp_folder
container-class Products.TemporaryFolder.TemporaryContainer
</zodb_db>
# Logging configuration
<eventlog>
<logfile>
path %(event_log)s
</logfile>
</eventlog>
<logger access>
<logfile>
path %(z2_log)s
</logfile>
</logger>
# Serving configuration
<http-server>
address %(address)s
</http-server>
# ZODB configuration
%(zodb_configuration_string)s
<zoperunner>
program $INSTANCE/bin/runzope
</zoperunner>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/template/zope.conf.timerservice.in
# ERP5 Timer Service
%%import timerserver
<timer-server>
interval 5
</timer-server>
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/test_suite_runner.py
import os
import sys

def runTestSuite(args):
  env = os.environ.copy()
  d = args[0]
  env['OPENSSL_BINARY'] = d['openssl_binary']
  env['TEST_CA_PATH'] = d['test_ca_path']
  env['PATH'] = ':'.join([d['prepend_path']] + os.environ['PATH'].split(':'))
  env['INSTANCE_HOME'] = d['instance_home']
  env['REAL_INSTANCE_HOME'] = d['instance_home']
  os.execve(d['call_list'][0], d['call_list'] + sys.argv[1:], env)
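runTestSuite ends in os.execve, which replaces the current process with the test command, so nothing runs after that call. A sketch of the environment assembly it performs, using an illustrative argument dict (only the key names come from the code; every value is invented):

```python
import os

# Illustrative argument dict: key names match those read by runTestSuite,
# values are invented for this sketch.
d = {
    'openssl_binary': '/opt/parts/openssl/bin/openssl',
    'test_ca_path': '/srv/ca',
    'prepend_path': '/opt/parts/bin',
    'instance_home': '/srv/instance',
    'call_list': ['/srv/instance/bin/runTestSuite'],
}

env = os.environ.copy()
env['OPENSSL_BINARY'] = d['openssl_binary']
env['TEST_CA_PATH'] = d['test_ca_path']
# Prepend the buildout-provided bin directory to the inherited PATH.
env['PATH'] = ':'.join([d['prepend_path']] + os.environ.get('PATH', '').split(':'))
env['INSTANCE_HOME'] = env['REAL_INSTANCE_HOME'] = d['instance_home']

print(env['PATH'].split(':')[0])  # /opt/parts/bin
# The real runner would now hand over control:
#   os.execve(d['call_list'][0], d['call_list'] + sys.argv[1:], env)
```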
slapos/slapos.recipe.erp5/src/slapos/recipe/erp5/testrunner.py
import os
import sys

def runUnitTest(args):
  env = os.environ.copy()
  d = args[0]
  env['OPENSSL_BINARY'] = d['openssl_binary']
  env['TEST_CA_PATH'] = d['test_ca_path']
  env['PATH'] = ':'.join([d['prepend_path']] + os.environ['PATH'].split(':'))
  env['INSTANCE_HOME'] = d['instance_home']
  env['REAL_INSTANCE_HOME'] = d['instance_home']
  os.execve(d['call_list'][0], d['call_list'] + sys.argv[1:], env)