1. 20 Dec, 2021 1 commit
    • Run each testcase with its own /tmp and /dev/shm · a191468f
      Kirill Smelkov authored
      and detect leaked temporary files and mount entries after each test run.
      
      Background
      
      Currently we have several testing-related problems that are
      all connected to /tmp and similar directories:
      
      Problem 1: many tests create temporary files on each run. Usually
      tests are careful to remove them on teardown, but due to bugs, test
      processes being hard-killed (SIGKILL or SIGSEGV) and other reasons,
      in practice this cleanup does not work 100% reliably, and there is
      a steady growth of files leaked into /tmp on testnodes.
      
      Problem 2: due to using a shared /tmp and /dev/shm, the isolation
      between different test runs of potentially different users is weak.
      For example @jerome reports that, due to leakage of faketime's
      shared segments, separate test runs affect each other and fail:
      https://erp5.nexedi.net/bug_module/20211125-1C8FE17
      
      Problem 3: many tests depend on /tmp being a tmpfs instance. These
      are, for example, the wendelin.core tests, which write intensively
      to the database and, if /tmp resides on disk, time out due to disk
      IO stalls in fsync on every commit. The stalls reach >30s and lead
      to a ~2.5x overall slowdown of test runs. However the main problem
      is the spike of increased latency which, with close to 100%
      probability, renders some test as missing its deadline. This topic
      is covered in
      https://erp5.com/group_section/forum/Using-tmpfs-for--tmp-on-testnodes-JTocCtJjOd
      
      --------
      
      There are many ways to try to address each problem separately, but
      they all come with limitations and drawbacks. We discussed things
      with @tomo and @jerome, and it looks like all those problems can be
      addressed in one go if we run tests under user namespaces with
      private mounts for /tmp and /dev/shm.
      
      Even though namespaces are generally a no-go in Nexedi, they seem
      to be OK to use in tests. For example they are already used via the
      private_tmpfs option in SlapOS:
      
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/slapos/recipe/librecipe/execute.py#L87-103
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/software/neoppod/instance-neo-input-schema.json#L121-124
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/software/neoppod/instance-neo.cfg.in#L11-16
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/software/neoppod/instance-neo.cfg.in#L30-34
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/software/neoppod/instance-neo.cfg.in#L170-177
      ...
      https://lab.nexedi.com/nexedi/slapos/blob/1876c150/stack/erp5/instance-zope.cfg.in#L227-230
      
      Thomas says that using a private tmpfs for each test would be a
      better solution than implementing tmpfs for the whole /tmp on
      testnodes. He also reports that @jp is OK with using namespaces for
      tests as long as there is a fallback for when namespaces aren't
      available.
      
      -> So let's do that: teach nxdtest to run each test case in its own
      private environment with privately-mounted /tmp and /dev/shm if we
      can detect that user namespaces are available. In an environment
      where user namespaces are indeed available this addresses all 3
      problems, because isolation and being-tmpfs are there by design,
      and even if some files leak, the kernel frees everything when the
      test terminates and the filesystem is automatically unmounted. We
      also detect such leakage and report a warning so that such problems
      do not go completely unnoticed.
      
      Implementation
      
      We leverage unshare(1) for simplicity. I decided to preserve
      uid/gid instead of becoming uid=0 (= `unshare -Umr`) for better
      traceability, so that it is clear from test output under which real
      slapuser a test is run(*). Not changing uid requires activating
      ambient capabilities so that mounting filesystems, including the
      FUSE-based one needed by wendelin.core, continues to work under a
      regular non-zero uid. Please see
      https://git.kernel.org/linus/58319057b784 for details on this
      topic, and please refer to the added trun.py for details on how the
      per-test namespace is set up.
      
      Using FUSE inside user namespaces requires Linux >= 4.18 (see
      https://git.kernel.org/linus/da315f6e0398 and
      https://git.kernel.org/linus/8cb08329b080), so if we are really to use
      this patch we'll have to upgrade kernel on our testnodes, at least where
      wendelin.core is used in tests.
      
      "no namespaces" detection is implemented via first running `unshare ...
      true` with the same unshare options that are going to be used to create
      and enter new user namespace for real. If that fails, we fallback into
      "no namespaces" mode where no private /tmp and /dev/shm are mounted(%).
      
      (*) for example nxdtest logs information about the system on startup:
      
          date:   Mon, 29 Nov 2021 17:27:04 MSK
          xnode:  slapuserX@test.node
          ...
      
      (%) Here is how nxdtest is run in fallback mode on my Debian 11 with
          user namespaces disabled via `sysctl kernel.unprivileged_userns_clone=0`
      
          (neo) (z-dev) (g.env) kirr@deca:~/src/wendelin/nxdtest$ nxdtest
          date:   Thu, 02 Dec 2021 14:04:30 MSK
          xnode:  kirr@deca.navytux.spb.ru
          uname:  Linux deca 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
          cpu:    Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
      
          >>> pytest
          $ python -m pytest
          # user namespaces not available. isolation and many checks will be deactivated.    <--- NOTE
          ===================== test session starts ======================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
          rootdir: /home/kirr/src/wendelin/nxdtest
          plugins: timeout-1.4.2
          collected 23 items
      
          nxdtest/nxdtest_pylint_test.py ....                      [ 17%]
          nxdtest/nxdtest_pytest_test.py ...                       [ 30%]
          nxdtest/nxdtest_test.py ......xx                         [ 65%]
          nxdtest/nxdtest_unittest_test.py ........                [100%]
      
          ============= 21 passed, 2 xfailed in 2.67 seconds =============
          ok      pytest  3.062s  # 23t 0e 0f 0s
          # ran 1 test case:  1·ok
      
      /helped-by @tomo
      /helped-and-reviewed-by @jerome
      /reviewed-on !13
      a191468f
  2. 09 Dec, 2021 2 commits
    • Log that master is connected and for which test_result this run is · 4fe9ee16
      Kirill Smelkov authored
      Also log whether master told us that we have nothing to do, and
      whether the run mode is local.
      
      This should make it a bit more clear what is going on just by looking at
      nxdtest log. See previous patch for more details and context.
      
      For the reference: here is how updated output looks like in the normal case:
      
          date:   Thu, 09 Dec 2021 04:20:37 CET
          xnode:  slapuser7@rapidspace-testnode-001
          uname:  Linux rapidspace-testnode-001 4.9.0-16-amd64 #1 SMP Debian 4.9.272-1 (2021-06-21) x86_64
          cpu:    Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
          # running for test_result_module/20211209-170FD3998
      
          >>> pytest
          $ python -m pytest
          ============================= test session starts ==============================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
          rootdir: /srv/slapgrid/slappart7/t/dfp/soft/47cc86af27d234f0464630f2a0d22a6f/parts/zodbtools-dev
          collected 46 items
      
          zodbtools/test/test_analyze.py .                                         [  2%]
          zodbtools/test/test_commit.py ..                                         [  6%]
          zodbtools/test/test_dump.py ...                                          [ 13%]
          zodbtools/test/test_restore.py ..                                        [ 17%]
          zodbtools/test/test_tidrange.py .............................            [ 80%]
          zodbtools/test/test_zodb.py .........                                    [100%]
      
          ========================== 46 passed in 9.15 seconds ===========================
          ok      pytest  12.433s # 46t 0e 0f 0s
          # ran 1 test case:  1·ok
      
      /reviewed-by @jerome
      /reviewed-on nexedi/nxdtest!15
      4fe9ee16
    • Always log system info and run summary, even if master tells us to do nothing · f8ec5787
      Kirill Smelkov authored
      Nxdtest logs system information (bd91f6f1 "Include system information
      into log output") and a run summary at the end (9f413221 "Emit run
      summary at the end"). However, all that information is currently
      printed only if master is successfully connected and actually tells
      us to run the tests.

      This behaviour is not very useful, because if the nxdtest log
      output on a testnode is empty, it is not clear whether it was a
      case of "we have nothing to do", or nxdtest got stuck somewhere, or
      something else.
      
      For example
      https://nexedijs.erp5.net/#/test_result_module/20211208-47D165B7/12
      has already been marked as Running for many long hours. And the log
      on the testnode regarding the nxdtest run is just:
      
          2021-12-08 15:42:17,314 INFO     $ PATH=/srv/slapgrid/slappart13/srv/slapos/soft/2956f419073cb2249ed953507fa6b173/bin:/opt/slapos/parts/bison/bin:/opt/slapos/parts/bzip2/bin:/opt/slapos/parts/gettext/bin:/opt/slapos/parts/glib/bin:/opt/slapos/parts/libxml2/bin:/opt/slapos/parts/libxslt/bin:/opt/slapos/parts/m4/bin:/opt/slapos/parts/ncurses/bin:/opt/slapos/parts/openssl/bin:/opt/slapos/parts/pkgconfig/bin:/opt/slapos/parts/python2.7/bin:/opt/slapos/parts/readline/bin:/opt/slapos/parts/sqlite3/bin:/opt/slapos/parts/swig/bin:/opt/slapos/bin:/opt/slapos/parts/patch/bin:/opt/slapos/parts/socat/bin:/usr/bin:/usr/sbin:/sbin:/bin SLAPOS_TEST_LOG_DIRECTORY=/srv/slapgrid/slappart13/var/log/testnode/dgd-xStX9safSG SLAPOS_TEST_SHARED_PART_LIST=/srv/slapgrid/slappart13/srv/shared:/srv/slapgrid/slappart13/t/dgd/shared /bin/sh /srv/slapgrid/slappart13/t/dgd/i/0/bin/runTestSuite --master_url $DISTRIBUTOR_URL --revision slapos=13977-ec686a708633f689382426063c21efbe3b2eab04,slapos.core=8698-91edab77ed36c160da8017cfdc1673fe7a8e10de --test_node_title rapidspace-testnode-008-3Nodes-DEPLOYTASK0 --test_suite SLAPOS-SR-TEST --test_suite_title SlapOS.SoftwareReleases.IntegrationTest-kirr.Python2 --project_title 'Rapid.Space Project'
      
      without anything else.
      
      With this patch nxdtest will print the system information and
      report how many tests it has run, provided its invocation does not
      get stuck.
      
      In this patch we only move the code that calls system_info, and the
      deferred summary log, to before the code that connects to master.
      In the following patch we'll add more logging around connecting to
      master.
      
      /reviewed-by @jerome
      /reviewed-on !15
      f8ec5787
  3. 11 Nov, 2021 1 commit
    • Emit run summary at the end · 9f413221
      Kirill Smelkov authored
      Sometimes there are zero testcases to be executed on a testnode,
      and the log output from nxdtest is just
      
          date:   Wed, 10 Nov 2021 12:31:50 MSK
          xnode:  kirr@deca.navytux.spb.ru
          uname:  Linux deca 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
          cpu:    Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
      
      it is not clear from such output whether the run ended or the test
      got stuck. After this patch it becomes
      
          date:   Wed, 10 Nov 2021 12:31:50 MSK
          xnode:  kirr@deca.navytux.spb.ru
          uname:  Linux deca 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
          cpu:    Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
          # ran 0 test cases.
      
      And in general, when there are several testcases to run, it is
      helpful to indicate the end of such a run and to print a brief
      summary of the result status for all ran test cases. Example output:
      
          wendelin.core$ nxdtest -k test.wcfs
          date:   Wed, 10 Nov 2021 12:35:34 MSK
          xnode:  kirr@deca.navytux.spb.ru
          uname:  Linux deca 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64
          cpu:    Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
          >>> test.wcfs/fs:1
          ...
          ok      test.wcfs/fs:1  25.035s # 35t 0e 0f 0s
      
          >>> test.wcfs/fs:2
          ...
          ok      test.wcfs/fs:2  21.033s # 35t 0e 0f 0s
      
          >>> test.wcfs/fs:
          ...
          ok      test.wcfs/fs:   21.056s # 35t 0e 0f 0s
          # ran 3 test cases:  3·ok
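      The summary line can be sketched with a small helper (hypothetical
      `summary_line`, illustrative formatting; e.g. the real output uses
      the singular "test case" for a single case):

```python
from collections import Counter

# hypothetical sketch of building the end-of-run summary line;
# formatting is illustrative, not nxdtest's actual code
def summary_line(statusv):
    if not statusv:
        return '# ran 0 test cases.'
    counts = Counter(statusv)                     # e.g. {'ok': 3}
    breakdown = '  '.join('%d·%s' % (n, st)
                          for st, n in sorted(counts.items()))
    return '# ran %d test cases:  %s' % (len(statusv), breakdown)
```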
      
      /reviewed-by @jerome
      /reviewed-on nexedi/nxdtest!12
      9f413221
  4. 13 Aug, 2021 2 commits
    • support parsing pylint output · 72e36088
      Jérome Perrin authored
      This parses pylint output with a simple regexp and counts one
      failure per reported message.

      This has not been tested yet, but we decided to apply this commit
      anyway.
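      A minimal sketch of the idea (hypothetical regexp and sample
      output; the actual pattern used by nxdtest may differ):

```python
import re

# hypothetical message pattern "path:line:col: <Code>: text";
# count one failure per reported message
_msg_re = re.compile(r'^.+?:\d+:\d+: [CRWEF]\d{4}: ', re.MULTILINE)

out = (
    "nxdtest/x.py:10:0: C0111: Missing module docstring (missing-docstring)\n"
    "nxdtest/x.py:42:4: W0612: Unused variable 'x' (unused-variable)\n"
)
nfail = len(_msg_re.findall(out))
```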
      
      /acked-by @kirr
      /reviewed-on !11
      72e36088
    • loadNXDTestFile: use `compile` for better tracebacks on errors · 7b5add47
      Jérome Perrin authored
      When using compile with the actual file path, we get better
      tracebacks in case of errors.
      
      before:
      
          Traceback (most recent call last):
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/bin/nxdtest", line 22, in <module>
              sys.exit(nxdtest.main())
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/parts/nxdtest/nxdtest/__init__.py", line 142, in main
              tenv = loadNXDTestFile('.nxdtest')
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/parts/nxdtest/nxdtest/__init__.py", line 75, in loadNXDTestFile
              six.exec_(src, g)
            File "<string>", line 77, in <module>
          NameError: name 'Pylint' is not defined
      
      after:
      
          Traceback (most recent call last):
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/bin/nxdtest", line 22, in <module>
              sys.exit(nxdtest.main())
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/parts/nxdtest/nxdtest/__init__.py", line 142, in main
              tenv = loadNXDTestFile('.nxdtest')
            File "/srv/slapgrid/slappart3/srv/runner/software/9544feb19475590d240ba2d32743c0a0/parts/nxdtest/nxdtest/__init__.py", line 75, in loadNXDTestFile
              six.exec_(compile(src, os.path.realpath(path), 'exec'), g)
            File "/srv/slapgrid/slappart3/srv/runner/instance/slappart8/var/nxdtest/.nxdtest", line 77, in <module>
              summaryf=Pylint.summary,
          NameError: name 'Pylint' is not defined
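      The difference can be reproduced in a few lines (hypothetical
      path; any exception raised from the executed source behaves the
      same way):

```python
import traceback

src = "1/0\n"
path = 'var/nxdtest/.nxdtest'   # hypothetical file path

def run(code):
    # execute and return the formatted traceback of the resulting error
    try:
        exec(code, {})
    except ZeroDivisionError:
        return traceback.format_exc()

tb_plain   = run(src)                          # exec on the raw source
tb_compile = run(compile(src, path, 'exec'))   # exec on compiled code

# plain exec reports the opaque "<string>"; compile preserves the path
```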
      
      /reviewed-by @kirr
      /reviewed-in !10
      7b5add47
  5. 12 Aug, 2021 1 commit
    • Detect if a test leaks processes and terminate them · 0ad45a9c
      Kirill Smelkov authored
      For every TestCase nxdtest spawns a test process with stdout/stderr
      redirected to pipes that nxdtest reads. Nxdtest, in turn, tees
      those pipes to its stdout/stderr until the pipes reach EOF. If the
      test process, in turn, spawns other processes, those other
      processes inherit the opened pipes, and so the pipes won't reach
      EOF until _all_ spawned test processes (the main test process + the
      other processes that it spawns) exit. Thus, if there is any process
      that the main test process spawned but did not terminate upon its
      own exit, nxdtest will get stuck waiting for the pipes to reach
      EOF, which won't happen as long as such a leaked subprocess keeps
      running.
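      This pipe behaviour is easy to demonstrate with a toy example
      (illustration only, not nxdtest code): the shell exits immediately,
      but reading its stdout only reaches EOF once the inherited
      background process exits too.

```python
import subprocess, time

# sh exits at once, but the background `sleep` inherits the write end
# of the stdout pipe, so read() blocks until sleep exits as well
p = subprocess.Popen(['/bin/sh', '-c', 'sleep 1 &'],
                     stdout=subprocess.PIPE)
t0 = time.time()
p.stdout.read()            # EOF only after ~1s, not when sh exits
elapsed = time.time() - t0
p.wait()
```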
      
      I hit this problem for real on a Wendelin.core 2 test - there the
      main test process was segfaulting and so did not instruct the other
      spawned processes (ZEO, WCFS, ...) to terminate. As a result the
      whole test became stuck instead of being promptly reported as
      failed:
      
          runTestSuite: Makefile:175: recipe for target 'test.wcfs' failed
          runTestSuite: make: *** [test.wcfs] Segmentation fault
          runTestSuite: wcfs: 2021/08/09 17:32:09 zlink [::1]:52052 - [::1]:23386: recvPkt: EOF
          runTestSuite: E0809 17:32:09.376800   38082 wcfs.go:2574] zwatch zeo://localhost:23386: zlink [::1]:52052 - [::1]:23386: recvPkt: EOF
          runTestSuite: E0809 17:32:09.377431   38082 wcfs.go:2575] zwatcher failed -> switching filesystem to EIO mode (TODO)
          <LONG WAIT>
          runTestSuite: PROCESS TOO LONG OR DEAD, GOING TO BE TERMINATED
      
      -> Fix it.
      
      /reviewed-by @jerome
      /reviewed-on !9
      0ad45a9c
  6. 01 Dec, 2020 1 commit
    • use re.search to filter tests in --run · b5a74214
      Jérome Perrin authored
      re.match only finds matches where the pattern appears at the
      beginning of the string, whereas re.search matches if the pattern
      appears anywhere in the string. This behavior is consistent with
      pytest, go test and ERP5's runUnitTest.
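      For example, with a test name where the pattern appears in the
      middle:

```python
import re

name = 'test.wcfs/fs:1'
# re.match is implicitly anchored at the beginning of the string
assert re.match('wcfs', name) is None
# re.search finds the pattern anywhere in the string
assert re.search('wcfs', name) is not None
```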
      
      For more details, see the discussion from !6 (comment 121409)
      
      /reviewed-on: !8
      b5a74214
  7. 26 Nov, 2020 1 commit
    • Switch tee from threading.Thread to sync.WorkGroup · 1e6a1cc6
      Kirill Smelkov authored
      The reason is that with threading.Thread, if an exception happens
      in the spawned thread, the error is not propagated to the main
      driver, while with sync.WorkGroup an exception from any spawned
      worker is propagated back to main. For example, with the following
      injected error
      
          --- a/nxdtest/__init__.py
          +++ b/nxdtest/__init__.py
          @@ -267,6 +267,7 @@ def main():
      
           # tee, similar to tee(1) utility, copies data from fin to fout appending them to buf.
           def tee(ctx, fin, fout, buf):
          +    1/0
               while 1:
      
      before this patch nxdtest behaves like ...
      
          (neo) (z4-dev) (g.env) kirr@deco:~/src/wendelin/nxdtest$ nxdtest
          date:   Tue, 24 Nov 2020 14:55:08 MSK
          xnode:  kirr@deco.navytux.spb.ru
          uname:  Linux deco 5.9.0-2-amd64 #1 SMP Debian 5.9.6-1 (2020-11-08) x86_64
          cpu:    Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
      
          >>> pytest
          $ python -m pytest
          Exception in thread Thread-2:
          Traceback (most recent call last):
            File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
              self.run()
            File "/usr/lib/python2.7/threading.py", line 754, in run
              self.__target(*self.__args, **self.__kwargs)
            File "/home/kirr/src/wendelin/nxdtest/nxdtest/__init__.py", line 270, in tee
              1/0
          ZeroDivisionError: integer division or modulo by zero
      
          Exception in thread Thread-1:
          Traceback (most recent call last):
            File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
              self.run()
            File "/usr/lib/python2.7/threading.py", line 754, in run
              self.__target(*self.__args, **self.__kwargs)
            File "/home/kirr/src/wendelin/nxdtest/nxdtest/__init__.py", line 270, in tee
              1/0
          ZeroDivisionError: integer division or modulo by zero
      
          error   pytest  0.583s  # 1t 1e 0f 0s
          (neo) (z4-dev) (g.env) kirr@deco:~/src/wendelin/nxdtest$ echo $?
          0
      
      Here the error in the other thread is only printed, but nxdtest is
      not aborted. Above it reported "error", but e.g. when testing
      pygolang/py3 and raising an error in tee it even reported success
      ( !6 (comment 121393) ):
      
          slapuser34@vifibcloud-rapidspace-hosting-007:~/srv/runner/instance/slappart0$ ./bin/runTestSuite
          date:   Tue, 24 Nov 2020 12:51:23 MSK
          xnode:  slapuser34@vifibcloud-rapidspace-hosting-007
          uname:  Linux vifibcloud-rapidspace-hosting-007 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64
          cpu:    Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
      
          >>> thread
          $ python -m pytest
          Exception in thread Thread-1:
          Traceback (most recent call last):
            File "/srv/slapgrid/slappart34/srv/runner/shared/python3/5497998c60d97cbbf748337ccce21db2/lib/python3.7/threading.py", line 926, in _bootstrap_inner
              self.run()
            File "/srv/slapgrid/slappart34/srv/runner/shared/python3/5497998c60d97cbbf748337ccce21db2/lib/python3.7/threading.py", line 870, in run
              self._target(*self._args, **self._kwargs)
            File "/srv/slapgrid/slappart34/srv/runner/software/44fe7dd3f13ecd100894c6368a35c055/parts/nxdtest/nxdtest/__init__.py", line 268, in tee
              fout.write(data)
          TypeError: write() argument must be str, not bytes
      
          ok      thread  9.145s  # 1t 0e 0f 0s
      
          >>> gevent
          $ gpython -m pytest
          Exception in thread Thread-3:
          Traceback (most recent call last):
            File "/srv/slapgrid/slappart34/srv/runner/shared/python3/5497998c60d97cbbf748337ccce21db2/lib/python3.7/threading.py", line 926, in _bootstrap_inner
              self.run()
            File "/srv/slapgrid/slappart34/srv/runner/shared/python3/5497998c60d97cbbf748337ccce21db2/lib/python3.7/threading.py", line 870, in run
              self._target(*self._args, **self._kwargs)
            File "/srv/slapgrid/slappart34/srv/runner/software/44fe7dd3f13ecd100894c6368a35c055/parts/nxdtest/nxdtest/__init__.py", line 268, in tee
              fout.write(data)
          TypeError: write() argument must be str, not bytes
      
          ok      gevent  21.980s # 1t 0e 0f 0s
      
      After this patch nxdtest correctly handles an error originating in
      a spawned thread and propagates it back to the main driver:
      
          (neo) (z4-dev) (g.env) kirr@deco:~/src/wendelin/nxdtest$ nxdtest
          date:   Tue, 24 Nov 2020 14:54:19 MSK
          xnode:  kirr@deco.navytux.spb.ru
          uname:  Linux deco 5.9.0-2-amd64 #1 SMP Debian 5.9.6-1 (2020-11-08) x86_64
          cpu:    Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
      
          >>> pytest
          $ python -m pytest
          Traceback (most recent call last):
            File "/home/kirr/src/wendelin/venv/z4-dev/bin/nxdtest", line 11, in <module>
              load_entry_point('nxdtest', 'console_scripts', 'nxdtest')()
            File "/home/kirr/src/wendelin/nxdtest/nxdtest/__init__.py", line 230, in main
              wg.wait()
            File "golang/_sync.pyx", line 237, in golang._sync.PyWorkGroup.wait
              pyerr_reraise(pyerr)
            File "golang/_sync.pyx", line 217, in golang._sync.PyWorkGroup.go.pyrunf
              f(pywg._pyctx, *argv, **kw)
            File "/home/kirr/src/wendelin/nxdtest/nxdtest/__init__.py", line 270, in tee
              1/0
          ZeroDivisionError: integer division or modulo by zero
          (neo) (z4-dev) (g.env) kirr@deco:~/src/wendelin/nxdtest$ echo $?
          1
      
      NOTE sync.WorkGroup requires every worker to handle context
      cancellation, so that whenever there is an error, all other workers
      are canceled. We add such cancellation handling to tee, but only
      lightly: before going to block in read/write syscalls, we check
      whether ctx is canceled. The proper handling, however, would be to
      switch the file descriptors into non-blocking mode and, at every IO
      point, to select on both potential IO events and potential
      cancellation. This is left as a TODO for the future.
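      The underlying threading.Thread behaviour is easy to reproduce
      with the stdlib alone (a toy illustration, not nxdtest code):

```python
import threading

threading.excepthook = lambda args: None   # silence the default report

def worker():
    1/0                    # exception inside the spawned thread

t = threading.Thread(target=worker)
t.start()
t.join()                   # returns normally; the error is swallowed
survived = True            # with sync.WorkGroup, wait() would reraise
```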
      
      /reviewed-on !7
      1e6a1cc6
  8. 24 Nov, 2020 5 commits
    • Unittest and Python3 support · 40e2c4ab
      Jérome Perrin authored
      These are the necessary changes to run `SlapOS.Eggs.UnitTest-*` and `SlapOS.SoftwareReleases.IntegrationTest-*` using nxdtest
      
      See merge request !6
      
      /reviewed-by @kirr
      40e2c4ab
    • Flush output right after printing running test name · a129b560
      Jérome Perrin authored
      If the test program outputs on stderr (which is unbuffered), that
      output will appear before the output from nxdtest advertising the
      program that is about to be executed, because nxdtest's stdout is
      buffered (testnode does not set PYTHONUNBUFFERED, and even though
      nxdtest sets PYTHONUNBUFFERED in its own environ, this only applies
      to subprocesses).
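      A minimal sketch of the fix (hypothetical `announce` helper;
      nxdtest's actual code differs):

```python
import io, sys

def announce(name, stream=None):
    # print the banner for the test about to run ...
    stream = stream or sys.stdout
    stream.write('>>> %s\n' % name)
    # ... and flush right away, so a child's unbuffered stderr cannot
    # appear before it in the combined log
    stream.flush()

buf = io.StringIO()        # demonstrate on an in-memory stream
announce('pytest', buf)
```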
      a129b560
    • Also pass stderr output to summary method · 53064e71
      Jérome Perrin authored
      While pytest sends everything to stdout, some other programs send
      output on stderr.
      53064e71
    • Treat program output as binary for python3 support · beb9d47e
      Jérome Perrin authored
      While treating output as text would not really be impossible,
      treating it as bytes seems the better choice because:
       - we don't have to make assumptions about which output encoding
         the test program uses
       - `tee` can read the stream byte by byte without having to worry
         about multi-byte characters
       - the testnode protocol uses xmlrpc.client.Binary, which uses
         bytes.

      Because using bufsize=1 implies reading subprocess output as text,
      we use bufsize=0 instead in the subprocess.Popen call, to prevent
      buffering.
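      A minimal sketch of reading subprocess output as unbuffered bytes
      (illustrative, not nxdtest's actual tee):

```python
import subprocess

# bufsize=0 disables buffering and keeps the stream in binary mode
p = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE, bufsize=0)
data = b''
while True:
    b1 = p.stdout.read(1)  # one byte at a time; no multi-byte worries
    if not b1:
        break              # EOF: all writers have exited
    data += b1
p.wait()
```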
      
      To make manipulation of strings and bytes easier, we add a dependency on
      pygolang, so that we can use its strings utility functions.
      
      Also add a few tests to verify general functionality.
      beb9d47e
  9. 09 Nov, 2020 1 commit
  10. 02 Nov, 2020 1 commit
  11. 30 Oct, 2020 2 commits
  12. 28 Oct, 2020 1 commit
  13. 20 Oct, 2020 1 commit
    • Modularize · f74005e4
      Kirill Smelkov authored
      Create a real nxdtest module, so that things can be imported from
      there. As a result, switch the `nxdtest` program from scripts to an
      entry_point.
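      For reference, the switch corresponds to a setup.py fragment along
      these lines (a sketch; metadata is assumed, apart from nxdtest
      exposing main(), which the console script wraps):

```python
# sketch of the relevant setup.py fragment (names assumed, except that
# nxdtest exposes main(), which the generated console script wraps)
from setuptools import setup, find_packages

setup(
    name='nxdtest',
    packages=find_packages(),
    # entry_points instead of scripts=[...]: pip generates the `nxdtest`
    # wrapper importing nxdtest.main
    entry_points={'console_scripts': ['nxdtest = nxdtest:main']},
)
```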
      f74005e4
  14. 08 Oct, 2020 1 commit
  15. 07 Oct, 2020 1 commit
  16. 05 Oct, 2020 5 commits
    • --list and --run · 6991af9b
      Kirill Smelkov authored
      - To list which tests are in there,
      - To execute only selected tests.
      
      Apply only to local mode.
      Very handy for debugging.
      6991af9b
    • log += test result summary · 39e89cc0
      Kirill Smelkov authored
      We have printed a test result summary line for each testcase run
      since 0153635b (Teach nxdtest to run tests locally). However the
      line is not present in the log when nxdtest is run under master.
      -> Include summary lines everywhere for uniformity, for a reason
      similar to bd1333bb (log += title and argv for ran testcase).
      39e89cc0
    • Revert 'Include spawned command into stderr if Popen fails' · 3016b6be
      Kirill Smelkov authored
      This reverts commit 34e96b1d. The reason for the revert is that
      since bd1333bb (log += title and argv for ran testcase) we always
      emit the details of the to-be-run command to the log at the
      beginning of a testcase run.
      3016b6be
    • Include envadj in report to master as well · 50ebc09d
      Kirill Smelkov authored
      We started to display the command and envadj in the log in the
      previous patch. However only the command - without envadj - was
      reported to master for the test result. -> Make it uniform: include
      envadj in the details everywhere.
      50ebc09d
    • log += title and argv for ran testcase · bd1333bb
      Kirill Smelkov authored
      When there are several or many testcases, it is hard to understand -
      from just the log or console output - which part of the test suite
      was running. It also helps to see the exact command that was
      spawned.
      
      Example output for pygolang. Before:
      
          (neo) (z-dev) (g.env) kirr@deco:~/src/tools/go/pygolang$ nxdtest
          ============================= test session starts ==============================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
          rootdir: /home/kirr/src/tools/go/pygolang
          collected 112 items
      
          golang/_gopath_test.py ..                                                [  1%]
          golang/context_test.py ..                                                [  3%]
          golang/cxx_test.py ..                                                    [  5%]
          golang/errors_test.py ........                                           [ 12%]
          golang/fmt_test.py ...                                                   [ 15%]
          golang/golang_test.py ................................................   [ 58%]
          golang/io_test.py .                                                      [ 58%]
          golang/strconv_test.py ..                                                [ 60%]
          golang/strings_test.py .....                                             [ 65%]
          golang/sync_test.py .............                                        [ 76%]
          golang/time_test.py ........                                             [ 83%]
          golang/pyx/build_test.py ...                                             [ 86%]
          golang/pyx/runtime_test.py .                                             [ 87%]
          gpython/gpython_test.py ssssss.sssssss                                   [100%]
      
          ==================== 99 passed, 13 skipped in 5.42 seconds =====================
          ok      thread  5.656s  # 112t 0e 0f 13s
          ============================= test session starts ==============================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
          rootdir: /home/kirr/src/tools/go/pygolang
          collected 112 items
      
          golang/_gopath_test.py ..                                                [  1%]
          golang/context_test.py ..                                                [  3%]
          golang/cxx_test.py ..                                                    [  5%]
          golang/errors_test.py ........                                           [ 12%]
          golang/fmt_test.py ...                                                   [ 15%]
          golang/golang_test.py ................................................   [ 58%]
          golang/io_test.py .                                                      [ 58%]
          golang/strconv_test.py ..                                                [ 60%]
          golang/strings_test.py .....                                             [ 65%]
          golang/sync_test.py .............                                        [ 76%]
          golang/time_test.py ........                                             [ 83%]
          golang/pyx/build_test.py ...                                             [ 86%]
          golang/pyx/runtime_test.py .                                             [ 87%]
          gpython/gpython_test.py ..............                                   [100%]
      
          ========================= 112 passed in 17.35 seconds ==========================
          ok      gevent  17.768s # 112t 0e 0f 0s
      
      After:
      
      (neo) (z-dev) (g.env) kirr@deco:~/src/tools/go/pygolang$ nxdtest
      
          >>> thread
          $ python -m pytest
          ============================= test session starts ==============================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
          rootdir: /home/kirr/src/tools/go/pygolang
          collected 112 items
      
          golang/_gopath_test.py ..                                                [  1%]
          golang/context_test.py ..                                                [  3%]
          golang/cxx_test.py ..                                                    [  5%]
          golang/errors_test.py ........                                           [ 12%]
          golang/fmt_test.py ...                                                   [ 15%]
          golang/golang_test.py ................................................   [ 58%]
          golang/io_test.py .                                                      [ 58%]
          golang/strconv_test.py ..                                                [ 60%]
          golang/strings_test.py .....                                             [ 65%]
          golang/sync_test.py .............                                        [ 76%]
          golang/time_test.py ........                                             [ 83%]
          golang/pyx/build_test.py ...                                             [ 86%]
          golang/pyx/runtime_test.py .                                             [ 87%]
          gpython/gpython_test.py ssssss.sssssss                                   [100%]
      
          ==================== 99 passed, 13 skipped in 5.27 seconds =====================
          ok      thread  5.508s  # 112t 0e 0f 13s
      
          >>> gevent
          $ gpython -m pytest
          ============================= test session starts ==============================
          platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
          rootdir: /home/kirr/src/tools/go/pygolang
          collected 112 items
      
          golang/_gopath_test.py ..                                                [  1%]
          golang/context_test.py ..                                                [  3%]
          golang/cxx_test.py ..                                                    [  5%]
          golang/errors_test.py ........                                           [ 12%]
          golang/fmt_test.py ...                                                   [ 15%]
          golang/golang_test.py ................................................   [ 58%]
          golang/io_test.py .                                                      [ 58%]
          golang/strconv_test.py ..                                                [ 60%]
          golang/strings_test.py .....                                             [ 65%]
          golang/sync_test.py .............                                        [ 76%]
          golang/time_test.py ........                                             [ 83%]
          golang/pyx/build_test.py ...                                             [ 86%]
          golang/pyx/runtime_test.py .                                             [ 87%]
          gpython/gpython_test.py ..............                                   [100%]
      
          ========================= 112 passed in 17.32 seconds ==========================
          ok      gevent  17.729s # 112t 0e 0f 0s
      bd1333bb
  17. 01 Oct, 2020 1 commit
  18. 29 Sep, 2020 9 commits
  19. 28 Sep, 2020 3 commits