==> slapos/resiliencytest/README.txt <==
This module is used to test the resiliency stack for different Software Releases.
Specifically, it tests:
* The replication mechanism between the main instance, the PBS and the clone,
  together with the specific behavior of the resilient Software Release.
* The takeover mechanism.
This module is supposed to be launched from the "test" instance of a resilient
service (usually deployed using the "test" software type).
One entry point, used when the service has been deployed from a "scalability"
test node (and thus with special parameters), automatically reports to an ERP5
TestNode Master when the instance is started, then starts the test.
The other entry point, "bin/runStandaloneTest", is meant to be run manually
from a simple "test" instance without special parameters (just request an
instance of your software release using the dedicated "test" software type
made for the occasion). This is quite useful if you simply want to run the
resiliency tests without the whole dedicated test infrastructure.
The third entry point, "bin/runUnitTestTestNode", is basically the same as the
first one, but is meant to be run inside a classical, "Unit Test"-style
erp5testnode.
This module contains:
* The code to start the test from a testnode / manually
* A Resiliency Test Suite framework (in suites/resiliencytestsuite.py), used
  to easily write new test suites
* A list of test suites (in suites/)
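As an illustration of the suite framework, here is a minimal sketch of what a new test suite looks like. The base class below is a hypothetical stand-in written for this example (the real framework class also handles takeover, logging, etc.); only the three scenario methods and the module-level runTestSuite entry point mirror the actual suites:

```python
# Simplified sketch of the test-suite contract used by the suites in
# suites/ (erp5.py, gitlab.py, kvm.py). "StubBase" is a hypothetical
# stand-in for the real framework base class, not the actual API.

class StubBase(object):
  """Drives the resiliency scenario: generate data, push it to the
  main instance, then (after takeover) check it on the clone."""
  def runTestSuite(self):
    self.generateData()
    self.pushDataOnMainInstance()
    # The real framework performs the takeover on a clone here.
    return self.checkDataOnCloneInstance()

class ExampleTestSuite(StubBase):
  """A concrete suite implements the three scenario steps."""
  def generateData(self):
    self.data = 'random payload'

  def pushDataOnMainInstance(self):
    self.pushed_data = self.data

  def checkDataOnCloneInstance(self):
    return self.pushed_data == self.data

def runTestSuite(*args, **kwargs):
  # Module-level entry point, as the launcher expects.
  return ExampleTestSuite(*args, **kwargs).runTestSuite()
```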
TODO:
* Check that each partition is on a different SlapOS node.
* Test for bang calls.
* Allow configuring, from the ERP5 Master (i.e. from instance parameters), the
  number of PBSes/clones, then test several possibilities (the so-called
  "count" of the test suite).
* Use Nexedi ERP5 when in production.
* Put the KVM disk image in a safe place.
------------
For reference: how to deploy the whole test system
1/ Deploy a SlapOS Master.
2/ Deploy an ERP5 and install the erp5_test_result BT with the scalability
feature (currently in the scalability-master2 branch of erp5.git). A few lines
in the scalability extension of the portal_class had to be changed; this
should be committed.
3/ Configure 3 nodes in the new SlapOS Master, and deploy on each a testnode
with the scalability feature (erp5testnode-scalability branch of slapos.git)
with parameters like:
COMP-0-Testnode
https://zope:insecure@softinst43496.host.vifib.net/erp5/portal_task_distribution/1
3bis/ Supply and request http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.92:/software/kvm/software.cfg on a public node (so that VNC frontends work). The "domain" parameter should be the [ipv6] of the partition. ipv4:4443 should be 6tunnelled to ipv6:4443 (Note: here, instead, I just hacked kvm_frontend to listen on IPv6).
3ter/ Supply and request http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg, with any "domain" (it won't be used), on a public node (so that web frontends work).
4/ On the ERP5 instance, create a project, a Task Distribution (in portal_task_distribution, type Scalability Task Distribution)
5/ On the ERP5 instance, create a Test Suite, validate it
Note: the SlapOS nodes are currently deployed using slapos-in-partition.
Note: you have to manually kill -10 the erp5testnode process to start the test
deployment, because it doesn't know when the SR installation is finished.
Note: you have to manually run slapos-node-software --all on the SlapOS nodes
if you are developing the SR you are testing.
------------
STANDALONE TESTS
Here is an example of how to deploy the standalone tests on the webrunner,
that is, without using ERP5.
1/ Deploy a SlapRunner software instance using the "test" type.
2/ In slapos.org, tell on which server you want to deploy your instances; you
can adapt the parameter.xml below to your case. The first time, you can deploy
all the instances on the same node: the tests will run faster and it will be
easier to debug:
{"cluster": {"-sla-0-computer_guid": "COMP-XXXX", "-sla-1-computer_guid": "COMP-XXXX", "-sla-2-computer_guid": "COMP-XXXX"}}
3/ Then go to the root instance folder: it is the one that has only
"runStandaloneResiliencyTestSuite" in its bin folder.
4/ Run ./bin/runStandaloneResiliencyTestSuite and wait: it will print
"success" or "failure".
==> slapos/resiliencytest/__init__.py <==
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from __future__ import print_function
import argparse
import json
import importlib
import logging
import logging.handlers
import os
import sys
import tempfile
import time
import traceback
from erp5.util import taskdistribution
try:
from erp5.util.testnode import Utils
except ImportError:
pass
def importFrom(name):
"""
Import a test suite module (in the suites module) and return it.
"""
return importlib.import_module('.suites.%s' % name, package=__name__)
def parseArguments():
"""
Parse arguments.
"""
parser = argparse.ArgumentParser()
parser.add_argument('--test-result-path', '--test_result_path',
metavar='ERP5_TEST_RESULT_PATH',
help='ERP5 relative path of the test result')
parser.add_argument('--revision',
metavar='REVISION',
help='Revision of the test_suite')
parser.add_argument('--test-suite', '--test_suite',
metavar='TEST_SUITE',
help='Name of the test suite')
parser.add_argument('--test-suite-title', '--test_suite_title',
metavar='TEST_SUITE',
help='The test suite title')
parser.add_argument('--node-title', '--test_node_title',
metavar='NODE_TITLE',
                      help='Title of the testnode which is running this '
                           'launcher')
parser.add_argument('--node_quantity', help='Number of parallel tests to run',
default=1, type=int)
parser.add_argument('--test-suite-master-url', '--master_url',
metavar='TEST_SUITE_MASTER_URL',
                      help='Url to connect to the ERP5 Master testsuite task distributor')
parser.add_argument('--log-path', '--log_path',
metavar='LOG_PATH',
help='Log Path')
parser.add_argument('additional_arguments', nargs=argparse.REMAINDER)
return parser.parse_args()
def setupLogging(name=__name__, log_path=None):
logger_format = '%(asctime)s %(name)-13s: %(levelname)-8s %(message)s'
formatter = logging.Formatter(logger_format)
logging.basicConfig(level=logging.INFO, format=logger_format)
root_logger = logging.getLogger('')
fd, fname = tempfile.mkstemp()
file_handler = logging.FileHandler(fname)
file_handler.setFormatter(formatter)
root_logger.addHandler(file_handler)
if log_path:
file_handler = logging.handlers.RotatingFileHandler(
filename=log_path,
maxBytes=20000000, backupCount=4)
file_handler.setFormatter(formatter)
root_logger.addHandler(file_handler)
logger = logging.getLogger(name)
return logger, fname
def runTestSuite(test_suite_title, test_suite_arguments, logger):
"""
Run a specified test suite, by dynamically loading the module and calling
its "runTestSuite" method.
"""
try:
# Generate the additional arguments that were given using the syntax
# additionalargument1=value1 additionalargument2=value2
parsed_arguments = dict(key.split('=') for key in test_suite_arguments)
test_suite_module = importFrom(test_suite_title)
success = test_suite_module.runTestSuite(**parsed_arguments)
except Exception:
logger.exception('Impossible to run resiliency test:')
success = False
return success
class ScalabilityTest(object):
"""
Simple structure carrying test data.
"""
def __init__(self, data, test_result):
self.__dict__ = {}
self.__dict__.update(data)
self.test_result = test_result
class ScalabilityLauncher(object):
"""
  Core part of the code, responsible for speaking with the ERP5 testnode Master
and running tests.
"""
def __init__(self):
self._argumentNamespace = parseArguments()
if self._argumentNamespace.log_path:
log_path = os.path.join(self._argumentNamespace.log_path,
'runScalabilityTestSuite.log')
else:
log_path = None
logger, fname = setupLogging('runScalabilityTestSuite', log_path)
self.log = logger.info
# Proxy to erp5 master test_result
self.test_result = taskdistribution.TestResultProxy(
self._argumentNamespace.test_suite_master_url,
1.0, logger,
self._argumentNamespace.test_result_path,
self._argumentNamespace.node_title,
self._argumentNamespace.revision
)
def getNextTest(self):
"""
    Return a ScalabilityTest with the currently running test case information,
    or None if no test case is ready.
"""
data = self.test_result.getRunningTestCase()
    if data is None:
return None
decoded_data = Utils.deunicodeData(json.loads(
data
))
next_test = ScalabilityTest(decoded_data, self.test_result)
return next_test
def run(self):
self.log('Resiliency Launcher started, with:')
self.log('Test suite master url: %s' % self._argumentNamespace.test_suite_master_url)
self.log('Test suite: %s' % self._argumentNamespace.test_suite)
self.log('Test result path: %s' % self._argumentNamespace.test_result_path)
self.log('Revision: %s' % self._argumentNamespace.revision)
self.log('Node title: %s' % self._argumentNamespace.node_title)
while True:
time.sleep(5)
current_test = self.getNextTest()
      if current_test is None:
self.log('No Test Case Ready')
else:
start_time = time.time()
error_message_set, exit_status = set(), 0
proxy = taskdistribution.ServerProxy(
self._argumentNamespace.test_suite_master_url,
allow_none=True
).portal_task_distribution
retry_time = 2.0
test_result_line_test = taskdistribution.TestResultLineProxy(
proxy, retry_time, self.log,
current_test.relative_path,
current_test.title
)
success = runTestSuite(
self._argumentNamespace.test_suite,
self._argumentNamespace.additional_arguments,
self.log,
)
if success:
error_count = 0
else:
error_count = 1
test_duration = time.time() - start_time
        test_result_line_test.stop(stdout='Success' if success else 'Failure',
test_count=1,
error_count=error_count,
duration=test_duration)
self.log('Test Case Stopped')
return error_message_set, exit_status
def runResiliencyTest():
"""
  Used for an automated test suite run from the "Scalability" Test Node
  infrastructure. This means the instance running this code should have been
  deployed by a "Scalability" testnode.
"""
error_message_set, exit_status = ScalabilityLauncher().run()
for error_message in error_message_set:
print('ERROR: %s' % error_message, file=sys.stderr)
sys.exit(exit_status)
def runUnitTest():
"""
Function meant to be run by "classical" (a.k.a UnitTest) erp5testnode.
"""
args = parseArguments()
logger, fname = setupLogging('runScalabilityTestSuite', None)
try:
master = taskdistribution.TaskDistributor(args.test_suite_master_url)
test_suite_title = args.test_suite_title or args.test_suite
revision = args.revision
test_result = master.createTestResult(revision, [test_suite_title],
args.node_title, True, test_suite_title, 'foo')
if test_result is None:
# No test to run.
logger.info("There is no test to run. Exiting...")
return
test_line = test_result.start()
start_time = time.time()
args.additional_arguments.append('type=UnitTest')
success = runTestSuite(
args.test_suite,
args.additional_arguments,
logger,
)
if success:
error_count = 0
else:
error_count = 1
test_duration = time.time() - start_time
max_size = 4096
fsize = os.stat(fname).st_size
with open(fname) as f:
if fsize <= max_size:
stdout = f.read()
else:
        # Keep only the first and last 2048 bytes here; the whole
        # file will be available on the server.
stdout = f.read(2048)
stdout += "\n[...] File truncated\n"
f.seek(-2048, os.SEEK_END)
stdout += f.read()
test_line.stop(stdout=stdout,
test_count=1,
error_count=error_count,
duration=test_duration)
finally:
os.remove(fname)
==> slapos/resiliencytest/suites/__init__.py <==
(empty file)

==> slapos/resiliencytest/suites/erp5.py <==
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2014 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from .slaprunner import SlaprunnerTestSuite
import json
import random
import ssl
import string
import time
from six.moves.urllib.parse import quote
from six.moves.urllib.request import HTTPBasicAuthHandler, HTTPSHandler, \
build_opener
class NotHttpOkException(Exception):
pass
class ERP5TestSuite(SlaprunnerTestSuite):
"""
Run ERP5 inside Slaprunner Resiliency Test.
Note: requires specific kernel allowing long shebang paths.
"""
def _setERP5InstanceParameter(self):
"""
    Set, inside the slaprunner, the instance parameter used to deploy the erp5 instance.
"""
p = ' {"zodb-zeo": {"backup-periodicity": "*:1/4"}, "mariadb": {"backup-periodicity": "*:1/4"}} '
parameter = quote(p)
self._connectToSlaprunner(
resource='saveParameterXml',
      data='software_type=default&parameter=%s' % parameter)
def _getERP5Url(self):
"""
Return the backend url of erp5 instance.
    Note: this is not a connection parameter of the slaprunner itself,
    but a connection parameter of what runs inside the webrunner.
"""
data = self._connectToSlaprunner(
resource='getConnectionParameter/slappart0'
)
url = json.loads(json.loads(data)['_'])['family-default-v6']
self.logger.info('Retrieved erp5 url is:\n%s' % url)
return url
def _getERP5Password(self):
data = self._connectToSlaprunner(
resource='getConnectionParameter/slappart0'
)
password = json.loads(json.loads(data)['_'])['inituser-password']
self.logger.info('Retrieved erp5 password is:\n%s' % password)
return password
def _getSlaprunnerServiceInformationList(self):
result = self._connectToSlaprunner(
resource='/inspectInstance',
)
return json.loads(result)
def _editHAProxyconfiguration(self):
"""
    XXX pure hack.
    haproxy does not support long paths for its sockets.
    Edit the haproxy configuration file of erp5 to make it compatible with
    long paths, then restart haproxy.
"""
self.logger.info('Editing HAProxy configuration...')
service_information_list = self._getSlaprunnerServiceInformationList()
# We expect only one service haproxy
haproxy_service, = [
x['service_name'] for x in service_information_list
if 'haproxy' in x['service_name']
]
haproxy_slappart = haproxy_service.split(':', 1)[0]
result = self._connectToSlaprunner(
resource='/getFileContent',
data='file=runner_workdir%2Finstance%2F{slappart}%2Fetc%2Fhaproxy.cfg'.format(slappart=haproxy_slappart)
)
file_content = json.loads(result)['result']
file_content = file_content.replace('var/run/haproxy.sock', 'ha.sock')
self._connectToSlaprunner(
resource='/saveFileContent',
data='file=runner_workdir%%2Finstance%%2F%s%%2Fetc%%2Fhaproxy.cfg&content=%s' % (
haproxy_slappart,
quote(file_content),
)
)
# Restart HAProxy
self._connectToSlaprunner(
resource='/startStopProccess/name/%s:*/cmd/RESTART' % haproxy_slappart
)
def _getCreatedERP5Document(self):
""" Fetch and return content of ERP5 document created above."""
url = "%s/erp5/getTitle" % self._getERP5Url()
return self._connectToERP5(url)
def _getCreatedERP5SiteId(self):
""" Fetch and return id of ERP5 document created above."""
url = "%s/erp5/getId" % self._getERP5Url()
return self._connectToERP5(url)
def _connectToERP5(self, url, data=None, password=None):
if password is None:
password = self._getERP5Password()
auth_handler = HTTPBasicAuthHandler()
auth_handler.add_password(realm='Zope', uri=url, user='zope', passwd=password)
ssl_context = ssl._create_unverified_context()
opener_director = build_opener(
auth_handler,
HTTPSHandler(context=ssl_context)
)
self.logger.info('Calling ERP5 url %s' % url)
if data:
result = opener_director.open(url, data=data)
else:
result = opener_director.open(url)
    if result.getcode() != 200:
raise NotHttpOkException(result.getcode())
return result.read()
def _createRandomERP5Document(self, password=None):
""" Create a document with random content in erp5 site."""
# XXX currently only sets erp5 site title.
# XXX could be simplified to /erp5/setTitle?title=slapos
if password is None:
password = self._getERP5Password()
erp5_site_title = self.slaprunner_user
url = "%s/erp5?__ac_name=zope&__ac_password=%s" % (self._getERP5Url(), password)
form = 'title%%3AUTF-8:string=%s&manage_editProperties%%3Amethod=Save+Changes' % erp5_site_title
self._connectToERP5(url, form)
return erp5_site_title
def generateData(self):
self.slaprunner_password = ''.join(
random.SystemRandom().sample(string.ascii_lowercase, 8)
)
self.slaprunner_user = 'slapos'
self.logger.info('Generated slaprunner user is: %s' % self.slaprunner_user)
self.logger.info('Generated slaprunner password is: %s' % self.slaprunner_password)
def pushDataOnMainInstance(self):
"""
Create a dummy Software Release,
Build it,
Wait for build to be successful,
Deploy instance,
Wait for instance to be started.
Store the main IP of the slaprunner for future use.
"""
self.logger.debug('Getting the backend URL...')
parameter_dict = self._getPartitionParameterDict()
self.slaprunner_backend_url = parameter_dict['backend-url']
self.logger.info('backend_url is %s.' % self.slaprunner_backend_url)
self.slaprunner_user = parameter_dict['init-user']
self.slaprunner_password = parameter_dict['init-password']
self._login()
time.sleep(10)
self._gitClone()
self._openSoftwareRelease('erp5')
self._setERP5InstanceParameter()
self._buildSoftwareRelease()
self._deployInstance()
self._deployInstance()
self._deployInstance()
self._deployInstance()
self._editHAProxyconfiguration()
time.sleep(30)
self.logger.info('Starting all partitions ...')
self._connectToSlaprunner('/startAllPartition')
self.logger.info('Waiting 30 seconds so that erp5 can be bootstrapped...')
for i in range(20):
time.sleep(30)
try:
if "erp5" == self._getCreatedERP5SiteId():
break
except Exception:
        self.logger.info("Failed to connect to erp5... waiting a bit longer.")
pass
self.data = self._createRandomERP5Document()
self.logger.info('Wait half an hour for main instance to have backup of erp5...')
time.sleep(3600 / 2)
# in erp5testnode, we have only one IP, so we can't run at the same time
# erp5 in webrunner of main instance, and mariadb/zope/etc in import script of clone instance
# So we stop main instance processes.
self._connectToSlaprunner('/stopAllPartition')
self.logger.info('Wait half an hour for clone to have compiled ERP5 SR...')
time.sleep(3600 / 2)
def checkDataOnCloneInstance(self):
"""
Check that:
* backend_url is different
* Software Release profile is the same,
* Software Release is built and is the same,
* Instance is deployed and is the same (contains same new data).
"""
old_slaprunner_backend_url = self.slaprunner_backend_url
self.slaprunner_backend_url = self._returnNewInstanceParameter(
parameter_key='backend-url',
old_parameter_value=old_slaprunner_backend_url,
force_new=True,
)
self._login()
self._waitForSoftwareBuild()
self._deployInstance()
time.sleep(60)
self._editHAProxyconfiguration()
new_data = self._getCreatedERP5Document()
if new_data == self.data:
self.logger.info('Data are the same: success.')
return True
else:
self.logger.info('Data are different: failure.')
return False
def runTestSuite(*args, **kwargs):
"""
Run Slaprunner Resiliency Test.
"""
return ERP5TestSuite(*args, **kwargs).runTestSuite()
==> slapos/resiliencytest/suites/gitlab.py <==
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2014 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from .slaprunner import SlaprunnerTestSuite
import json
import random
import string
import time
import requests
class NotHttpOkException(Exception):
pass
class GitlabTestSuite(SlaprunnerTestSuite):
"""
Run Gitlab inside Slaprunner Resiliency Test.
Note: requires specific kernel allowing long shebang paths.
"""
def _setGitlabInstanceParameter(self):
"""
    Set, inside the slaprunner, the instance parameter used to deploy the gitlab instance.
"""
    self.logger.info('Updating the instance parameter used to deploy the gitlab instance...')
self._connectToSlaprunner(
resource='saveParameterXml',
      data='software_type=gitlab-test&parameter=%3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22utf-8%22%3F%3E%0A%3Cinstance%3E%0A%3C%2Finstance%3E'
)
  def _connectToGitlab(self, path='', post_data=None, url='', parameter_dict=None):
    # Avoid a shared mutable default argument for parameter_dict.
    if parameter_dict is None:
      parameter_dict = {}
    request_url = self.backend_url
    if url:
      request_url = url
    if path:
      request_url += '/' + path
    headers = {"PRIVATE-TOKEN": self.private_token}
if post_data is None:
response = requests.get(request_url, params=parameter_dict,
headers=headers, verify=False)
elif post_data == {}:
response = requests.post(request_url, params=parameter_dict,
headers=headers, verify=False)
else:
response = requests.post(request_url, params=parameter_dict,
headers=headers, data=post_data, verify=False)
if not response.ok:
raise Exception("ERROR connecting to Gitlab: %s: %s \n%s" % (
response.status_code, response.reason, response.text))
return response.text
def _createNewProject(self, name, namespace='open'):
uri = 'api/v3/projects'
parameter_dict = {'name': name, 'namespace': namespace}
return self._connectToGitlab(uri, post_data={}, parameter_dict=parameter_dict)
def _listProjects(self):
path = 'api/v3/projects'
return json.loads(self._connectToGitlab(path=path))
def _setGitlabConnectionParameter(self):
"""
    Set the gitlab instance connection parameters.
    Note: these are not connection parameters of the slaprunner itself,
    but connection parameters of what runs inside the webrunner.
"""
data = self._connectToSlaprunner(
resource='getConnectionParameter/slappart0'
)
parameter_dict = json.loads(data)
self.backend_url = parameter_dict['backend_url']
self.password = parameter_dict['password']
self.private_token = parameter_dict['private-token']
self.file_uri = parameter_dict['latest-file-uri']
def _getRootPassword(self):
data = self._connectToSlaprunner(
resource='getConnectionParameter/slappart0'
)
password = json.loads(json.loads(data)['_'])['password']
self.logger.info('Retrieved gitlab root password is:\n%s' % password)
return password
def generateData(self):
self.slaprunner_password = ''.join(
random.SystemRandom().sample(string.ascii_lowercase, 8)
)
self.slaprunner_user = 'slapos'
self.logger.info('Generated slaprunner user is: %s' % self.slaprunner_user)
self.logger.info('Generated slaprunner password is: %s' % self.slaprunner_password)
def pushDataOnMainInstance(self):
"""
Create a dummy Software Release,
Build it,
Wait for build to be successful,
Deploy instance,
Wait for instance to be started.
Store the main IP of the slaprunner for future use.
"""
self.logger.debug('Getting the backend URL...')
parameter_dict = self._getPartitionParameterDict()
self.slaprunner_backend_url = parameter_dict['backend-url']
self.logger.info('backend_url is %s.' % self.slaprunner_backend_url)
self.slaprunner_user = parameter_dict['init-user']
self.slaprunner_password = parameter_dict['init-password']
self._login()
time.sleep(10)
self._gitClone()
self._openSoftwareRelease('gitlab')
self._setGitlabInstanceParameter()
    self.logger.info('Waiting for 1 minute...')
# Debug, remove me.
time.sleep(60)
self._buildSoftwareRelease()
self._deployInstance()
    # Wait for unicorn to start, then run instance again to install the gitlab backup.
    self.logger.info('Waiting 1 minute for unicorn to start before running instance again...')
time.sleep(60)
self._deployInstance()
self._deployInstance()
# Stop all services because we want to restart gitlab (safe)
self.logger.info('Stop all services because we want to restart gitlab..')
self._connectToSlaprunner('/stopAllPartition')
self._deployInstance()
self._setGitlabConnectionParameter()
self.logger.info('Retrieved gitlab url is:\n%s' % self.backend_url)
self.logger.info('Gitlab root password is:\n%s' % self.password)
self.logger.info('Gitlab private token is:\n%s' % self.private_token)
self.logger.info('Waiting 90 seconds so that gitlab can be started...')
time.sleep(90)
self.logger.info('Trying to connect to gitlab backend URL...')
loop = 0
while loop < 3:
try:
self._connectToGitlab(url=self.backend_url)
except Exception as e:
if loop == 2:
raise
self.logger.warning(str(e))
self.logger.info('Retry connection in 60 seconds...')
loop += 1
time.sleep(60)
else:
self.logger.info('success!')
break
self.logger.info(self._createNewProject('sample.test'))
project_list = self._listProjects()
self.default_project_list = []
for project in project_list:
self.default_project_list.append(project['name_with_namespace'])
self.logger.info('Gitlab project list is:\n%s' % self.default_project_list)
self.logger.info('Getting test file at url: %s' % self.file_uri)
self.sample_file = self._connectToGitlab(url=self.file_uri)
self.logger.info('Wait 10 minutes for main instance to have backup of gitlab...')
time.sleep(600)
# in erp5testnode, we have only one IP, so we can't run at the same time
# gitlab in webrunner of main instance, and other services in import script of clone instance
# So we stop main instance processes.
self._connectToSlaprunner('/stopAllPartition')
self.logger.info('Wait half an hour for clone to have compiled Gitlab SR...')
#time.sleep(3600 / 2)
time.sleep(600)
def checkDataOnCloneInstance(self):
"""
Check that:
* backend_url is different
* Software Release profile is the same,
* Software Release is built and is the same,
* Instance is deployed and is the same (contains same new data).
"""
old_slaprunner_backend_url = self.slaprunner_backend_url
self.slaprunner_backend_url = self._returnNewInstanceParameter(
parameter_key='backend-url',
old_parameter_value=old_slaprunner_backend_url,
force_new=True,
)
self._login()
self._waitForSoftwareBuild()
self._deployInstance()
time.sleep(60)
project_list = self._listProjects()
success = True
for project in project_list:
success = success and (project['name_with_namespace'] in self.default_project_list)
if success:
file_content = self._connectToGitlab(url=self.file_uri)
success = success and (file_content == self.sample_file)
if success:
self.logger.info('Data are the same: success.')
return True
else:
self.logger.info('Data are different: failure.')
return False
def runTestSuite(*args, **kwargs):
"""
Run Slaprunner Resiliency Test.
"""
return GitlabTestSuite(*args, **kwargs).runTestSuite()
==> slapos/resiliencytest/suites/kvm.py <==
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from .resiliencytestsuite import ResiliencyTestSuite
import logging
import random
import string
import time
from six.moves.urllib.request import urlopen
logger = logging.getLogger('KVMResiliencyTest')
def fetchKey(ip):
"""
  Fetch the key that had been set on the original virtual hard drive.
  If it doesn't exist (503), fail. On any other error, retry every minute
  and fail after 10 minutes.
"""
new_key = None
for i in range(0, 10):
try:
      new_key = urlopen('http://%s:10080/get' % ip).read().strip().decode()
break
except IOError:
logger.error('Server in new KVM does not answer.')
time.sleep(60)
if not new_key:
raise Exception('Server in new KVM does not answer for too long.')
return new_key
class KVMTestSuite(ResiliencyTestSuite):
"""
Run KVM Resiliency Test.
  Requires a specific KVM environment (virtual hard drive); see the KVM
  Software Release for more information.
  Scenario:
  1/ Boot from a custom image.
  2/ The VM of the main instance starts a simple get/set server, which
     receives a random number sent by the resiliency test. The VM stores
     this number, and the test suite stores it as well.
  3/ Once replication has been done, wait a while.
  4/ For each clone: do a takeover. Check that the IPv6 of the new main
     instance is different. Then fetch the stored random number over HTTP
     from the new VM and check that it matches the number that was sent.
  Note: the disk image is a simple Debian with gunicorn and flask installed:
apt-get install python-setuptools; easy_install gunicorn flask
With the following python code running at boot in /root/number.py:
import os
from flask import Flask, abort, request
app = Flask(__name__)
storage = 'storage.txt'
@app.route("/")
def greeting_list(): # 'cause there are several greetings, and plural is forbidden.
return "Hello World"
@app.route("/get")
def get():
return open(storage, 'r').read()
@app.route("/set")
def set():
#if os.path.exists(storage):
# abort(503)
open(storage, 'w').write(request.args['key'])
return "OK"
if __name__ == "__main__":
app.run(host='0.0.0.0', port=80)
Then create the boot script:
echo "cd /root; /usr/local/bin/gunicorn number:app -b 0.0.0.0:80 -D --error-logfile /root/error_log --access-logfile /root/access_log" > /etc/init.d/gunicorn-number
chmod +x /etc/init.d/gunicorn-number
update-rc.d gunicorn-number defaults
  There is also a script that randomly generates I/O in /root/io.sh:
#!/bin/sh
# Randomly generates high I/O on disk. Goal is to write on disk so that
# it flushes at the same time that snapshot of disk image is done, to check if
# it doesn't corrupt image.
# Ayayo!
while [ 1 ]; do
dd if=/dev/urandom of=random count=2k
sync
sleep 0.2
done
Then create the boot script:
echo "/bin/sh /root/io.sh &" > /etc/init.d/io
chmod +x /etc/init.d/io
update-rc.d io defaults
"""
def _getPartitionParameterDict(self):
"""
Overload default method.
"""
return self.partition.request(
software_release=self.software,
software_type='kvm-resilient',
partition_reference=self.root_instance_name).getConnectionParameterDict()
def generateData(self):
"""
Set a random key that will be stored inside of the virtual hard drive.
"""
self.key = ''.join(random.SystemRandom().sample(string.ascii_lowercase, 20))
self.logger.info('Generated key is: %s' % self.key)
def pushDataOnMainInstance(self):
self.logger.info('Getting the KVM IP...')
self.ip = self._getPartitionParameterDict()['ipv6']
    self.logger.info('KVM IP is %s.' % self.ip)
for i in range(0, 60):
failure = False
try:
connection = urlopen('http://%s:10080/set?key=%s' % (self.ip, self.key))
        if connection.getcode() == 200:
break
else:
failure = True
except IOError:
failure = True
finally:
if failure:
          self.logger.info('Impossible to connect to the virtual machine to set the key. Sleeping...')
time.sleep(60)
          if i == 59:
raise Exception('Bad return code when setting key in main instance, after trying for 60 minutes.')
    self.logger.info('Key uploaded to KVM main instance.')
def checkDataOnCloneInstance(self):
self.ip = self._returnNewInstanceParameter(
parameter_key='ipv6',
old_parameter_value=self.ip
)
new_key = fetchKey(self.ip)
    self.logger.info('Key on this new instance is %s' % new_key)
# Compare with original key. If same: success.
if new_key == self.key:
self.logger.info('Data are the same: success.')
return True
    else:
      self.logger.info('Data are different: failure.')
      return False
def runTestSuite(*args, **kwargs):
"""
Run KVM Resiliency Test.
"""
return KVMTestSuite(*args, **kwargs).runTestSuite()
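The bounded-retry pattern used by fetchKey (and by the key-upload loop in pushDataOnMainInstance) can be sketched on its own. retry_fetch and its fetch callable below are hypothetical stand-ins, not part of this module:

```python
import time


def retry_fetch(fetch, attempts=10, delay=60):
  """Call fetch() up to `attempts` times, sleeping `delay` seconds after
  each IOError; give up with an exception, as fetchKey does."""
  for _ in range(attempts):
    try:
      return fetch()
    except IOError:
      time.sleep(delay)
  raise Exception('Server does not answer for too long.')
```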
resiliencytestsuite.py 0000664 0000000 0000000 00000023627 13616512510 0036751 0 ustar 00root root 0000000 0000000 slapos.toolbox-tomo_find_resilient-slapos-resiliencytest/slapos/resiliencytest/suites # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import slapos.slap
import glob
import logging
import os
import subprocess
import sys
import time
from six.moves.urllib.request import urlopen
UNIT_TEST_ERP5TESTNODE = 'UnitTest'
class ResiliencyTestSuite(object):
"""
Abstract class supposed to be extended by Resiliency Test Suites.
"""
def __init__(self,
server_url, key_file, cert_file,
computer_id, partition_id, software,
namebase,
root_instance_name,
sleep_time_between_test=900,
total_instance_count="2",
type=None):
self.server_url = server_url
self.key_file = key_file
self.cert_file = cert_file
self.computer_id = computer_id
self.partition_id = partition_id
self.software = software
self.namebase = namebase
self.total_instance_count = total_instance_count
self.root_instance_name = root_instance_name
self.sleep_time_between_test = sleep_time_between_test
self.test_type = type
slap = slapos.slap.slap()
slap.initializeConnection(server_url, key_file, cert_file)
self.partition = slap.registerComputerPartition(
computer_guid=computer_id,
partition_id=partition_id
)
self.logger = logging.getLogger('SlaprunnerResiliencyTest')
self.logger.setLevel(logging.DEBUG)
def _doTakeover(self, namebase, target_clone):
"""
Private method.
Make the specified clone instance takeover the main instance.
"""
self.logger.info('Replacing main instance by clone instance %s%s...' % (
self.namebase, target_clone))
root_partition_parameter_dict = self._getPartitionParameterDict()
takeover_url = root_partition_parameter_dict['takeover-%s-%s-url' % (namebase, target_clone)]
takeover_password = root_partition_parameter_dict['takeover-%s-%s-password' % (namebase, target_clone)]
# Do takeover
    takeover_result = urlopen('%s?password=%s' % (takeover_url, takeover_password)).read().decode()
if 'Error' in takeover_result:
raise Exception('Error while doing takeover: %s' % takeover_result)
self.logger.info('Done.')
def generateData(self):
"""
Generate data that will be used by the test.
"""
raise NotImplementedError('Overload me, I am an abstract method.')
def pushDataOnMainInstance(self):
"""
Push our data to the main instance.
"""
raise NotImplementedError('Overload me, I am an abstract method.')
def checkDataOnCloneInstance(self):
"""
Check that, on the ex-clone, now-main instance, data is the same as
what we pushed to the ex-main instance.
"""
raise NotImplementedError('Overload me, I am an abstract method.')
  def deleteTimestamp(self):
    """
    XXX-Nicolas: delete .timestamp in the test partition to force full
    processing by slapgrid, so that the correct parameters are passed to
    the instances of the tree.
    """
home = os.getenv('HOME')
timestamp = os.path.join(home, '.timestamp')
os.remove(timestamp)
def _getPartitionParameterDict(self):
"""
Helper.
Return the partition parameter dict of the main root ("resilient") instance.
"""
return self.partition.request(
software_release=self.software,
software_type='resilient',
partition_reference=self.root_instance_name
).getConnectionParameterDict()
def _returnNewInstanceParameter(self, parameter_key, old_parameter_value, force_new=False):
"""
Helper, can be used inside of checkDataOnCloneInstance.
    Wait for the new parameter (of the old-clone, now-main instance) to
    appear, and check that it differs from the old value.
"""
# if we are inside of a classical erp5testnode: just return the same parameter.
if self.test_type == UNIT_TEST_ERP5TESTNODE and not force_new:
return old_parameter_value
self.logger.info('Waiting for new main instance to be ready...')
new_parameter_value = None
while not new_parameter_value or new_parameter_value == 'None' or new_parameter_value == old_parameter_value:
self.logger.info('Not ready yet. SlapOS says new parameter value is %s' % new_parameter_value)
new_parameter_value = self._getPartitionParameterDict().get(parameter_key, None)
time.sleep(30)
self.logger.info('New parameter value of instance is %s' % new_parameter_value)
return new_parameter_value
def _waitForCloneToBeReadyForTakeover(self, clone):
self.logger.info('Wait for Clone %s to be ready for takeover' % clone)
root_partition_parameter_dict = self._getPartitionParameterDict()
takeover_url = root_partition_parameter_dict['takeover-%s-%s-url' % (self.namebase, clone)]
takeover_password = root_partition_parameter_dict['takeover-%s-%s-password' % (self.namebase, clone)]
# Connect to takeover web interface and wait for importer script to be not running
    takeover_page_content = urlopen(takeover_url).read().decode()
    while "Importer script(s) of backup in progress: True" in takeover_page_content:
      time.sleep(10)
      takeover_page_content = urlopen(takeover_url).read().decode()
return
def _testClone(self, clone):
"""
Private method.
Launch takeover and check for a specific clone.
"""
    # Wait for sleep_time_between_test seconds so that replication is done
self.logger.info(
'Sleeping for %s seconds before testing clone %s.' % (
self.sleep_time_between_test,
clone
))
time.sleep(self.sleep_time_between_test)
self._waitForCloneToBeReadyForTakeover(clone)
# Before doing takeover we expect the instances to be in a stable state
if not self._testPromises():
return False
self.logger.info('Testing %s%s instance.' % (self.namebase, clone))
self._doTakeover(self.namebase, clone)
if self.test_type == UNIT_TEST_ERP5TESTNODE: # Run by classical erp5testnode using slapproxy
# Run manually slapos node instance
# XXX hardcoded path
self.logger.info('Running "slapos node instance"...')
slapos_configuration_file_path = os.path.join(
os.path.dirname(sys.argv[0]),
'..', '..', '..', 'slapos.cfg'
)
# Output is huge and we don't want to store it in memory nor print it
devnull = open('/dev/null', 'w')
command = [os.path.join(os.environ['HOME'], 'software_release', 'bin', 'slapos'), 'node', 'instance',
'--cfg=%s' % slapos_configuration_file_path,
'--pidfile=slapos.pid']
for _ in range(5):
subprocess.Popen(command, stdout=devnull, stderr=devnull).wait()
    success = self.checkDataOnCloneInstance()
    if success:
      return True
    return False
def _testPromises(self):
"""
Run promises in all instances (export, PBS, import(s)) and
check their output. An error at any step of the resilience
should make at least one of the promises complain.
"""
for promise_directory in glob.glob(
os.path.realpath(os.path.join(os.path.dirname(sys.argv[0]), '..', '..', '..', 'inst*', '*', 'etc', 'promise'))):
self.logger.info("Testing promises of instance's directory : %s", promise_directory)
promise_list = os.listdir(promise_directory)
for promise in promise_list:
# XXX: for the moment ignore monitor promises as they can never succeed
# in a testnode environment.
if 'monitor' in promise:
continue
try:
subprocess.check_output(os.path.join(promise_directory, promise),
stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
self.logger.error('ERROR : promise "%s" failed with output :\n%s', promise, e.output)
return False
return True
def runTestSuite(self):
"""
Generate data to send,
Push data on main instance,
Wait for replication to be done,
For each clone: Do a takeover, Check data.
"""
self.generateData()
self.pushDataOnMainInstance()
# In resilient stack, main instance (example with KVM) is named "kvm0",
# clones are named "kvm1", "kvm2", ...
clone_count = int(self.total_instance_count) - 1
# So first clone starts from 1.
current_clone = 1
    # In case we have only one clone, simply test its takeover.
    # XXX: ideally the takeover should be tested twice, so that the
    # reconstruction of a new clone is also tested.
    if clone_count == 1:
      return self._testClone(1)
else:
# Test each clone
while current_clone <= clone_count:
success = self._testClone(current_clone)
if not success:
return False
current_clone = current_clone + 1
# All clones have been successfully tested: success.
return True
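The clone-iteration logic of runTestSuite above can be isolated from SlapOS. DummySuite below is a hypothetical stand-in used only to illustrate the control flow:

```python
class DummySuite(object):
  """Mimics ResiliencyTestSuite.runTestSuite's clone iteration."""

  def __init__(self, total_instance_count="3"):
    # As in the real suite, the count arrives as a string.
    self.total_instance_count = total_instance_count
    self.tested = []

  def _testClone(self, clone):
    # The real method sleeps, takes over and checks data; here we
    # only record which clone was tested.
    self.tested.append(clone)
    return True

  def runTestSuite(self):
    # Main instance is "kvm0"; clones are "kvm1", "kvm2", ...
    clone_count = int(self.total_instance_count) - 1
    current_clone = 1
    while current_clone <= clone_count:
      if not self._testClone(current_clone):
        return False
      current_clone += 1
    return True
```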
slapos.toolbox-tomo_find_resilient-slapos-resiliencytest/slapos/resiliencytest/suites/slaprunner.py
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from .resiliencytestsuite import ResiliencyTestSuite
import base64
from six.moves import http_cookiejar as cookielib
import json
from lxml import etree
import random
import ssl
import string
import time
from six.moves.urllib.request import HTTPCookieProcessor, HTTPSHandler, \
build_opener
from six.moves.urllib.error import HTTPError
class NotHttpOkException(Exception):
pass
class SlaprunnerTestSuite(ResiliencyTestSuite):
"""
Run Slaprunner Resiliency Test.
It is highly suggested to read ResiliencyTestSuite code.
"""
def __init__(self, *args, **kwargs):
# Setup urllib2 with cookie support
cookie_jar = cookielib.CookieJar()
ssl_context = ssl._create_unverified_context()
self._opener_director = build_opener(
HTTPCookieProcessor(cookie_jar),
HTTPSHandler(context=ssl_context)
)
ResiliencyTestSuite.__init__(self, *args, **kwargs)
def _getPartitionParameterDict(self):
"""
Helper.
Return the partition parameter dict of the main root ("resilient") instance.
"""
# XXX Hardcoded parameters, should be obtained dynamically
return self.partition.request(
software_release=self.software,
software_type='resilient',
partition_reference=self.root_instance_name,
partition_parameter_kw={
'resiliency-backup-periodicity': '*/6 * * * *',
'auto-deploy-instance': 'false',
'auto-deploy': 'true',
# XXX HACK!
"slapos-reference": 'slaprunner-erp5-resiliency'
}
).getConnectionParameterDict()
def _connectToSlaprunner(self, resource, data=None):
"""
Utility.
Connect through HTTP to the slaprunner instance.
Require self.slaprunner_backend_url to be set.
"""
    try:
      url = "%s/%s" % (self.slaprunner_backend_url, resource)
      if data:
        result = self._opener_director.open(url, data=data.encode())
      else:
        result = self._opener_director.open(url)
      if result.getcode() != 200:
        raise NotHttpOkException(result.getcode())
      return result.read().decode()
except HTTPError:
self.logger.error('Error when contacting slaprunner at URL: {}'.format(url))
raise
def _login(self):
self.logger.debug('Logging in...')
    b64string = base64.b64encode(
      ('%s:%s' % (self.slaprunner_user, self.slaprunner_password)).encode()).decode()
self._opener_director.addheaders = [
('Authorization', 'Basic %s' % b64string),
# By default we will prefer to receive JSON to simplify
# treatments of the response
("Accept", "application/json"),
]
def _retrieveInstanceLogFile(self):
"""
    Store the logfile (=data) of the instance; check that it is neither
    empty nor HTML.
"""
time.sleep(30)
data = self._connectToSlaprunner(
resource='getFileContent',
data="file=instance_root/slappart0/var/log/log.log"
)
try:
json_data = json.loads(data)
if json_data['code'] == 0:
raise IOError(json_data['result'])
data = json_data['result']
self.logger.info('Retrieved data are:\n%s' % data)
except (ValueError, KeyError):
if data.find('<') != -1:
raise IOError(
'Could not retrieve logfile content: retrieved content is html.'
)
if data.find('Could not load') != -1:
raise IOError(
'Could not retrieve logfile content: server could not load the file.'
)
if data.find('Hello') == -1:
raise IOError(
          'Could not retrieve logfile content: retrieved content does not match "Hello".'
)
return data
def _retrieveSoftwareLogFileTail(self, truncate=100):
"""
Retrieve the tail of the software.log file.
"""
data = self._connectToSlaprunner(
resource='getFileLog',
data="filename=instance_root/../software.log&truncate=%s" % truncate)
try:
data = json.loads(data)['result']
self.logger.info('Tail of software.log:\n%s' % data)
except (ValueError, KeyError):
      self.logger.info("Failed to get software.log")
def _waitForSoftwareBuild(self, limit=5000):
"""
    Wait until the Software Release is built or limit reaches 0.
"""
def getSRStatus():
"""
Return current status (-1 in case of connection problem)
"""
try:
return self._connectToSlaprunner(resource='isSRReady')
except (NotHttpOkException, HTTPError) as error:
# The nginx frontend might timeout before software release is finished.
        self.logger.warning('Problem occurred when contacting the server: %s' % error)
return -1
status = getSRStatus()
while limit > 0 and status != '1':
status = getSRStatus()
limit -= 1
if status == '0':
        self.logger.info('Software release is failing to build. Sleeping...')
else:
self.logger.info('Software release is still building. Sleeping...')
time.sleep(20)
for sleep_wait in range(3):
self._retrieveSoftwareLogFileTail(truncate=100)
time.sleep(10)
def _buildSoftwareRelease(self):
self.logger.info('Building the Software Release...')
try:
self._connectToSlaprunner(resource='runSoftwareProfile')
except (NotHttpOkException, HTTPError):
# The nginx frontend might timeout before software release is finished.
pass
self._waitForSoftwareBuild()
def _deployInstance(self):
self.logger.info('Deploying instance...')
try:
self._connectToSlaprunner(resource='runInstanceProfile')
except (NotHttpOkException, HTTPError):
      # The nginx frontend might timeout before the instance deployment is finished.
pass
while True:
time.sleep(15)
result = json.loads(self._connectToSlaprunner(resource='slapgridResult', data='position=0&log='))
if result['instance']['state'] is False:
break
self.logger.info('Buildout is still running. Sleeping...')
self.logger.info('Instance has been deployed.')
def _gitClone(self):
self.logger.debug('Doing git clone of https://lab.nexedi.com/nexedi/slapos.git..')
try:
data = self._connectToSlaprunner(
resource='cloneRepository',
data='repo=https://lab.nexedi.com/nexedi/slapos.git&name=workspace/slapos&email=slapos@slapos.org&user=slapos'
)
data = json.loads(data)
if data['code'] == 0:
self.logger.warning(data['result'])
except (NotHttpOkException, HTTPError):
# cloning can be very long.
# XXX: quite dirty way to check.
      while self._connectToSlaprunner('getProjectStatus', data='project=workspace/slapos').find('On branch master') == -1:
        self.logger.info('git-cloning ongoing, sleeping...')
        time.sleep(30)
def _openSoftwareRelease(self, software_release='erp5testnode/testsuite/dummy'):
self.logger.debug('Opening %s software release...' % software_release)
data = self._connectToSlaprunner(
resource='setCurrentProject',
data='path=workspace/slapos/software/%s/' % software_release
)
    assert json.loads(data)['code'] != 0, 'Unexpected result in call to setCurrentProject: %s' % data
def generateData(self):
"""
Generate Data for slaprunner
"""
def pushDataOnMainInstance(self):
"""
Create a dummy Software Release,
Build it,
Wait for build to be successful,
Deploy instance,
Wait for instance to be started.
Store the main IP of the slaprunner for future use.
"""
self.logger.debug('Getting the backend URL...')
parameter_dict = self._getPartitionParameterDict()
self.slaprunner_backend_url = parameter_dict['backend-url']
self.logger.info('backend_url is %s.' % self.slaprunner_backend_url)
self.slaprunner_user = parameter_dict['init-user']
self.slaprunner_password = parameter_dict['init-password']
self._login()
self._gitClone()
# XXX should be taken from parameter.
self._openSoftwareRelease()
self._buildSoftwareRelease()
time.sleep(15)
self._deployInstance()
self.data = self._retrieveInstanceLogFile()
def checkDataOnCloneInstance(self):
"""
Check that:
* backend_url is different
* Software Release profile is the same,
* Software Release is built and is the same, (?)
* Instance is deployed and is the same.
"""
# XXX: does the promise wait for the software to be built and the instance to be ready?
old_slaprunner_backend_url = self.slaprunner_backend_url
self.slaprunner_backend_url = self._returnNewInstanceParameter(
parameter_key='backend-url',
old_parameter_value=old_slaprunner_backend_url,
force_new=True,
)
self._login()
self._waitForSoftwareBuild()
time.sleep(15)
new_data = self._retrieveInstanceLogFile()
if new_data.startswith(self.data):
self.logger.info('Data are the same: success.')
return True
    else:
      self.logger.info('Data are different: failure.')
      return False
def runTestSuite(*args, **kwargs):
"""
Run Slaprunner Resiliency Test.
"""
return SlaprunnerTestSuite(*args, **kwargs).runTestSuite()
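_login above boils down to building an HTTP Basic Authorization header. A minimal, Python-3-friendly sketch of the encoding step alone (the credentials are made up):

```python
import base64

user, password = 'slapos', 'secret'  # hypothetical credentials
# "user:password" must be bytes before base64-encoding; decode back to str
# so the value can be placed in a header.
b64string = base64.b64encode(('%s:%s' % (user, password)).encode()).decode()
headers = [
  ('Authorization', 'Basic %s' % b64string),
  # Prefer JSON responses, as _login does.
  ('Accept', 'application/json'),
]
```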