  1. 13 Dec, 2022 5 commits
  2. 12 Dec, 2022 3 commits
  3. 08 Dec, 2022 6 commits
  4. 07 Dec, 2022 4 commits
  5. 06 Dec, 2022 5 commits
  6. 05 Dec, 2022 5 commits
  7. 30 Nov, 2022 3 commits
  8. 28 Nov, 2022 5 commits
  9. 25 Nov, 2022 2 commits
  10. 21 Nov, 2022 2 commits
    • stack/erp5: don't produce HTML report for each test · 2312ed83
      Jérome Perrin authored
      The coverage of an individual test is not useful after all; only the
      coverage of all tests combined is significant.
    • stack/erp5: support coverage when running tests · 3d8deba0
      Jérome Perrin authored
      This replaces the broken --coverage argument from runUnitTest (coverage
      needs to be started earlier) and also introduces a coverage plugin to
      collect coverage data for code in the ZODB; Python Scripts and
      Components are supported.
      
      To use this on test nodes, set up a WebDAV server somewhere, configure
      the instance parameters of the test suite on ERP5 to enable coverage
      and to upload individual results to this WebDAV server, and then
      combine the coverage data and produce a report using the bin/coverage
      script from the software release.
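
      Before running the steps below, the coverage data uploaded by the test
      nodes has to be retrieved locally. A minimal sketch, assuming the
      WebDAV server is reachable at a hypothetical URL and serves a plain
      directory listing (add --user/--password if it requires
      authentication):

          wget --recursive --no-parent --no-directories \
            --accept '*coverage.sqlite3' \
            --directory-prefix /path/to/all/coverage/files/ \
            https://webdav.example.com/erp5-coverage/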
      
      For the steps below, it is necessary to change the working directory to
      the root of the software folder.
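
      For instance, on a SlapOS node the software release is typically
      installed under /opt/slapgrid/<software-hash>/; the exact path below
      is only an assumption and depends on the deployment:

          # hypothetical path to the root of the software folder
          cd /opt/slapgrid/<software-hash>/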
      
      Step 1: combine the coverage data:
      
          ./bin/coverage combine \
            --keep \
            /path/to/all/coverage/files/*coverage.sqlite3
      
      (using --keep is optional, but it helps in case of mistakes)
      
      Step 2: build an HTML report:

          ./bin/coverage html \
            --skip-covered \
            --omit parts/erp5/product/ERP5/Document/UnitTest.py \
            --directory /path/for/html/report/
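
      The entry point of the generated report is index.html, which can then
      be opened in a browser, for example:

          xdg-open /path/for/html/report/index.html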
      
      Note that we omit UnitTest.py: it is created during the test
      (see testBusinessTemplate.TestDocumentTemplateItem) and gets coverage
      results because it is executed in the test, but it does not exist as
      a source file in the repository, so we skip it.
      
      Of course, to produce a correct HTML report from tests that have been
      running on test nodes, the software release used to produce the HTML
      report must be exactly the same version as the one that has been
      running on the test nodes.
      
      Another simpler, but slower, approach is to run all the tests on the
      same machine; then only running step 2 is necessary.