1. 29 Oct, 2021 1 commit
  2. 25 Oct, 2021 1 commit
  3. 19 Oct, 2021 12 commits
    • kunit: Reset suite count after running tests · 17ac23eb
      David Gow authored
      There are some KUnit tests (KFENCE, Thunderbolt) which, for various
      reasons, do not use the kunit_test_suite() macro and end up running
      before the KUnit executor runs its tests. This means that their results
      are printed separately, and they aren't included in the suite count used
      by the executor.
      
      This causes the executor output to be invalid TAP, however, as the suite
      numbers used are no longer 1-based and don't match the test plan.
      kunit_tool therefore prints a large number of warnings.
      
      While it'd be nice to fix the tests to run in the executor, in the
      meantime, reset the suite counter to 1 in __kunit_test_suites_exit.
      Not only does this fix the executor, it means that if there are multiple
      calls to __kunit_test_suites_init() across different tests, they'll each
      get their own numbering.
      
      kunit_tool likes this better: even if it's lacking the results for those
      tests which don't use the executor (due to the lack of TAP header), the
      output for the other tests is valid.
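
      To make the numbering problem concrete, here is an illustrative check
      (not kunit_tool's actual code) of how a strict TAP parser compares
      result numbers against the plan:

        import re

        def check_plan(lines):
            # A plan of "1..N" promises results numbered 1..N, in order.
            count = int(re.match(r'1\.\.(\d+)', lines[0]).group(1))
            for expected, line in enumerate(lines[1:count + 1], start=1):
                got = int(re.match(r'ok (\d+)', line).group(1))
                if got != expected:
                    print(f'warning: expected test number {expected}, got {got}')

        # Suites that ran before the executor already used up numbers 1 and 2:
        check_plan(['1..2', 'ok 3 - suite_a', 'ok 4 - suite_b'])  # warns twice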
      Signed-off-by: David Gow <davidgow@google.com>
      Reviewed-by: Daniel Latypov <dlatypov@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: improve compatibility of kunit_parser with KTAP specification · d65d07cb
      Rae Moar authored
      Update kunit_parser to improve compatibility with the KTAP
      specification, including arbitrarily nested tests. This patch
      accomplishes three major changes:
      
      - Use a general Test object to represent all tests rather than TestCase
      and TestSuite objects. This allows for easier implementation of arbitrary
      levels of nested tests and promotes the idea that both test suites and test
      cases are tests (see the sketch below).
      
      - Print errors incrementally rather than all at once after the
      parsing finishes, both to maximize the information given to the user
      when the parser is given invalid input and to increase the helpfulness
      of the timestamps given during printing. Note that kunit.py parse does
      not print incrementally yet. However, this fix brings us closer to
      this feature.
      
      - Increase compatibility with different input formats. Arbitrary levels
      of nested tests are now supported, and test cases and test suites may
      now appear at the same level of testing.
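
      For illustration, a single recursive type can represent both suites and
      cases at any depth; this is a minimal sketch, not the parser's actual
      class definitions:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Test:
            name: str = ''
            status: str = 'SUCCESS'
            subtests: List['Test'] = field(default_factory=list)

            def walk(self):
                # Yield this test and every nested subtest, depth-first.
                yield self
                for sub in self.subtests:
                    yield from sub.walk()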
      
      This patch now implements the draft KTAP specification here:
      https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
      We'll update the parser as the spec evolves.
      
      This patch adjusts the kunit_tool_test.py file to check for
      the correct outputs from the new parser, and adds a new test to check
      the parsing of a KTAP result log with the correct format for multiple
      nested subtests (test_is_test_passed-all_passed_nested.log).
      
      This patch also alters the kunit_json.py file to allow for arbitrarily
      nested tests.
      Signed-off-by: Rae Moar <rmoar@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: yield output from run_kernel in real time · 7d7c48df
      Daniel Latypov authored
      Currently, `run_kernel()` dumps all the kernel output to a file
      (.kunit/test.log), then reopens that file and yields its contents to
      callers. This makes it easier to respect the requested timeout, if any.
      
      But it means that we can't yield the results in real time, either to the
      parser or to stdout (if --raw_output is set).
      
      This change spins up a background thread to enforce the timeout, which
      allows us to yield the kernel output in real time, while also copying it
      to the .kunit/test.log file.
      It's also careful to ensure that the .kunit/test.log file is complete,
      even if kunit_parser throws an exception or otherwise doesn't consume
      every line; see the new `finally` block and unit test.
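
      A minimal sketch of the approach, assuming illustrative helper names
      (not the tool's actual code):

        import subprocess
        import threading

        def run_kernel_output(cmd, timeout, log_path):
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
            timer = threading.Timer(timeout, proc.terminate)  # enforce timeout in the background
            timer.start()
            with open(log_path, 'w') as log:
                try:
                    for line in proc.stdout:
                        log.write(line)  # tee to .kunit/test.log...
                        yield line       # ...while yielding in real time
                finally:
                    # Even if the consumer raises or stops early, drain the
                    # remaining output so the log file is complete.
                    for line in proc.stdout:
                        log.write(line)
                    timer.cancel()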
      
      For example:
      
      $ ./tools/testing/kunit/kunit.py run --arch=x86_64 --raw_output
      <configure + build steps>
      ...
      <can now see output from QEMU in real time>
      
      This does not currently have a visible effect when --raw_output is not
      passed, as kunit_parser.py currently only outputs everything at the end.
      But that could change, and this patch is a necessary step towards
      showing parsed test results in real time.
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: support running each suite/test separately · ff9e09a3
      Daniel Latypov authored
      The new --run_isolated flag makes the tool boot the kernel once per
      suite or test, preventing leftover state from one suite from impacting
      the others. This can be useful as a starting point for debugging test
      hermeticity issues.
      
      Note: it takes a lot longer, so people should not use it normally.
      
      Consider the following very simplified example:
      
        bool disable_something_for_test = false;
        void function_being_tested() {
          ...
          if (disable_something_for_test) return;
          ...
        }
      
        static void test_before(struct kunit *test)
        {
          disable_something_for_test = true;
          function_being_tested();
          /* oops, we forgot to reset it back to false */
        }
      
        static void test_after(struct kunit *test)
        {
          /* oops, now "fixing" test_before can cause test_after to fail! */
          function_being_tested();
        }
      
      Presented like this, the issues are obvious, but it gets a lot more
      complicated to track down as the amount of test setup and helper
      functions increases.
      
      Another use case is memory corruption. It might not surface as a
      failure/crash in the test case or suite that caused it. I've noticed in
      kunit's own unit tests that the 3rd suite afterwards might be the one to
      finally crash after an out-of-bounds write, for example.
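
      Conceptually, --run_isolated amounts to something like the following
      (helper names are illustrative):

        suites = ['kunit_executor_test', 'kunit-try-catch-test']  # example names

        def boot_kernel(filter_glob):
            print(f'booting kernel with kunit.filter_glob={filter_glob}')

        # One kernel boot per suite, so no state survives between suites.
        for i, suite in enumerate(suites, start=1):
            print(f'Starting KUnit Kernel ({i}/{len(suites)})...')
            boot_kernel(f'{suite}.*')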
      
      Example usage:
      
      Per suite:
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
      ...
      Starting KUnit Kernel (1/7)...
      ============================================================
      ======== [PASSED] kunit_executor_test ========
      ....
      Testing complete. 5 tests run. 0 failed. 0 crashed. 0 skipped.
      Starting KUnit Kernel (2/7)...
      ============================================================
      ======== [PASSED] kunit-try-catch-test ========
      ...
      
      Per test:
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
      Starting KUnit Kernel (1/23)...
      ============================================================
      ======== [PASSED] kunit_executor_test ========
      [PASSED] parse_filter_test
      ============================================================
      Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
      Starting KUnit Kernel (2/23)...
      ============================================================
      ======== [PASSED] kunit_executor_test ========
      [PASSED] filter_subsuite_test
      ...
      
      It works with filters as well:
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite example
      ...
      Starting KUnit Kernel (1/1)...
      ============================================================
      ======== [PASSED] example ========
      ...
      
      It also handles test filters, '*.*skip*' runs these 3 tests:
        kunit_status.kunit_status_mark_skipped_test
        example.example_skip_test
        example.example_mark_skipped_test
      
      Fixed up merge conflict between:
        d8c23ead ("kunit: tool: better handling of quasi-bool args (--json, --raw_output)") and
        6710951ee039 ("kunit: tool: support running each suite/test separately")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Shuah Khan <skhan@linuxfoundation.org>
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: actually track how long it took to run tests · 5f6aa6d8
      Daniel Latypov authored
      This is a long-standing bug in kunit tool.
      Since these files were added, run_kernel() has always yielded lines.
      
      That means, the call to run_kernel() returns before the kernel finishes
      executing tests, potentially before a single line of output is even
      produced.
      
      So code like this
        time_start = time.time()
        result = linux.run_kernel(...)
        time_end = time.time()
      
      would only measure the time taken for python to give back the generator
      object.
      
      From a caller's perspective, the only way to know the kernel has exited
      is for us to consume all the output from the `result` generator object.
      Alternatively, we could change run_kernel() to try to do its own
      bookkeeping and return the total time, but that doesn't seem worth it.
      
      This change makes us record `time_end` after we're done parsing all the
      output (which should mean we've consumed all of it, or errored out).
      That means we're including the parsing time as well, but that should
      be quite small, and it's better than claiming it took 0s to run tests.
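
      A minimal demonstration of the underlying Python behavior (stand-in
      names, not the tool's code):

        import time

        def run_kernel():
            # Stand-in for linux.run_kernel(): a generator of output lines.
            yield from ('line %d\n' % i for i in range(3))

        time_start = time.time()
        result = run_kernel()   # returns a generator immediately; nothing ran yet
        output = list(result)   # consuming the generator is where the time goes
        time_end = time.time()  # only now is it meaningful to stop the clock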
      
      Let's use this as an example:
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example
      
      Before:
      Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running
      
      After:
      Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: factor exec + parse steps into a function · 7ef925ea
      Daniel Latypov authored
      Currently this code is copy-pasted between the normal "run" subcommand
      and the "exec" subcommand.
      
      Given we don't have any interest in just executing the tests without
      giving the user any indication what happened (i.e. parsing the output),
      make a function that does both of these things and can be reused.
      
      This will be useful when we allow more complicated ways of running
      tests, e.g. invoking the kernel multiple times instead of just once,
      etc.
      
      We remove input_data from the ParseRequest so the callers don't have to
      pass in a dummy value for this field. Named tuples are also immutable,
      so if they did pass in a dummy, exec_tests() would need to make a copy
      to call parse_tests().
      
      Removing it also makes KunitParseRequest match the other *Request types,
      as they only contain user arguments/flags, not data.
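
      A sketch of the rationale (field names here are illustrative, not the
      tool's exact fields):

        from typing import Iterable, NamedTuple

        class KunitParseRequest(NamedTuple):
            # Only user arguments/flags live here; NamedTuples are immutable,
            # so carrying per-run data would force dummies or _replace() copies.
            raw_output: bool
            build_dir: str

        def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> None:
            # The data to parse travels as a separate argument instead.
            for line in input_data:
                print(line)

        parse_tests(KunitParseRequest(raw_output=False, build_dir='.kunit'),
                    ['TAP version 14', '1..1', 'ok 1 - example'])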
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Acked-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: add 'kunit.action' param to allow listing out tests · 9c6b0e1d
      Daniel Latypov authored
      Context:
      It's difficult to map a given .kunitconfig => set of enabled tests.
      Letting kunit.py figure that out would be useful.
      
      This patch:
      * is intended to be an implementation detail used only by kunit.py
      * adds a kunit.action module param with one valid non-null value, "list"
      * for the "list" action, it simply prints out "<suite>.<test>"
      * leaves the kunit.py changes to make use of this for another patch.
      
      Note: kunit.filter_glob is respected for this and all future actions.
      
      Hack: we print a TAP header (but no test plan) to allow kunit.py to
      use the same code to pick up KUnit output that it does for normal tests.
      Since this is intended to be an implementation detail, it seems fine for
      now. Maybe in the future we output each test as SKIPPED or the like.
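
      For illustration, a caller could consume the listing like this (a
      hypothetical helper, not kunit.py's actual code):

        def parse_test_list(lines):
            # Skip the TAP preamble and collect "<suite>.<test>" entries.
            tests = []
            for line in lines:
                line = line.strip()
                if line.startswith(('TAP ', '1..')) or '.' not in line:
                    continue
                suite, _, case = line.partition('.')
                tests.append((suite, case))
            return tests

        print(parse_test_list(['TAP version 14', '1..1',
                               'example.example_simple_test',
                               'example.example_skip_test']))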
      
      Go with a more generic "action" param, since it seems like we might
      eventually have more modes besides just running or listing tests, e.g.
      * perhaps a benchmark mode that reruns test cases and reports timing
      * perhaps a deflake mode that reruns test cases that failed
      * perhaps a mode where we randomize test order to try and catch
        hermeticity bugs like "test a only passes if run after test b"
      
      Tested:
      $ ./tools/testing/kunit/kunit.py run --kernel_arg=kunit.action=list --raw_output=kunit
      ...
      TAP version 14
      1..1
      example.example_simple_test
      example.example_skip_test
      example.example_mark_skipped_test
      reboot: System halted
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: show list of valid --arch options when invalid · fe678fed
      Daniel Latypov authored
      Consider this attempt to run KUnit in QEMU:
      $ ./tools/testing/kunit/kunit.py run --arch=x86
      
      Before you'd get this error message:
      kunit_kernel.ConfigError: x86 is not a valid arch
      
      After:
      kunit_kernel.ConfigError: x86 is not a valid arch, options are ['alpha', 'arm', 'arm64', 'i386', 'powerpc', 'riscv', 's390', 'sparc', 'x86_64']
      
      This should make it a bit easier for people to notice when they make
      typos, etc. Previously, one would have to dive into the Python code to
      figure out what the valid set is.
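
      A sketch of the change, with QEMU_ARCHS standing in for the real table
      of QEMU configs:

        QEMU_ARCHS = {'alpha': ..., 'arm': ..., 'i386': ..., 'x86_64': ...}

        class ConfigError(Exception):
            pass

        def get_qemu_config(arch):
            if arch not in QEMU_ARCHS:
                raise ConfigError(f'{arch} is not a valid arch, options are '
                                  f'{sorted(QEMU_ARCHS.keys())}')
            return QEMU_ARCHS[arch]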
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: misc fixes (unused vars, imports, leaked files) · a54ea2e0
      Daniel Latypov authored
      Drop some variables in the unit tests that were unused, and/or add
      assertions based on them.
      
      ExitStack was imported, but the `es` variable was never used, so it did
      nothing and we were leaking the file objects.
      Refactor the code to just use nested `with` statements, which properly
      close them.
      
      And drop the direct use of .close() on file objects in the kunit tool
      unit test, as these can be leaked if test assertions fail.
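
      A sketch of the pattern (paths are illustrative):

        def assert_logs_match(path_a, path_b):
            # Nested `with` closes both files even if the assertion fails,
            # unlike manual .close() calls placed after the assert.
            with open(path_a) as a, open(path_b) as b:
                assert a.read() == b.read()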
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: fix too small allocation when using suite-only kunit.filter_glob · cd94fbc2
      Daniel Latypov authored
      When a user filters by a suite and not a test, e.g.
      $ ./tools/testing/kunit/kunit.py run 'suite_name'
      
      it hits this code
        const int len = strlen(filter_glob);
        ...
        parsed->suite_glob = kmalloc(len, GFP_KERNEL);
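        /* one byte too few: len == strlen(), no room for the trailing '\0' */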
      which fails to allocate space for the terminating NULL.
      
      Somehow, it seems like we can't easily reproduce this under UML, so the
      existing `parse_filter_test()` didn't catch this.
      
      Fix this by allocating `len + 1` and switch to kzalloc() just to be a
      bit more defensive. We're only going to run this code once per kernel
      boot, and it should never be very long.
      
      Also update the unit tests to be a bit more cautious.
      This bug showed up as a NULL pointer dereference here:
      >  KUNIT_EXPECT_STREQ(test, (const char *)filtered.start[0][0]->name, "suite0");
      `filtered.start[0][0]` was NULL, and `name` is at offset 0 in the struct,
      so `...->name` was also NULL.
      
      Fixes: 3b29021ddd10 ("kunit: tool: allow filtering test cases via glob")
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Acked-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: tool: allow filtering test cases via glob · a127b154
      Daniel Latypov authored
      Commit 1d71307a ("kunit: add unit test for filtering suites by
      names") introduced the ability to filter which suites we run via glob.
      
      This change extends it so we can also filter individual test cases
      inside of suites as well.
      
      This is quite useful when, e.g.
      * trying to run just the test cases you've just added or are working on
      * trying to debug issues with test hermeticity
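
      A rough sketch of the suite/test matching (illustrative, not the tool's
      actual implementation):

        import fnmatch

        def matches(filter_glob, suite, test):
            # A "suite.test" glob filters both parts; a bare glob matches suites only.
            if '.' in filter_glob:
                suite_glob, _, test_glob = filter_glob.partition('.')
            else:
                suite_glob, test_glob = filter_glob, '*'
            return (fnmatch.fnmatch(suite, suite_glob)
                    and fnmatch.fnmatch(test, test_glob))

        print(matches('*exec*.parse*', 'kunit_executor_test', 'parse_filter_test'))  # True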
      
      Examples:
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*exec*.parse*'
      ...
      ============================================================
      ======== [PASSED] kunit_executor_test ========
      [PASSED] parse_filter_test
      ============================================================
      Testing complete. 1 tests run. 0 failed. 0 crashed.
      
      $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*.no_matching_tests'
      ...
      [ERROR] no tests run!
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
    • kunit: drop assumption in kunit-log-test about current suite · b7cbaef3
      Daniel Latypov authored
      This test assumes that the declared kunit_suite object is the exact one
      which is being executed, which KUnit will not guarantee [1].
      
      Specifically, `suite->log` is not initialized until a suite object is
      executed. So if KUnit makes a copy of the suite and runs that instead,
      this test dereferences an invalid pointer and (hopefully) segfaults.
      
      N.B. since we no longer assume this, we can no longer verify that
      `suite->log` is *not* allocated during normal execution.
      
      An alternative to this patch that would allow us to test that would
      require exposing an API for the current test to get its current suite.
      Exposing that for one internal kunit test seems like overkill, and
      grants users more footguns (e.g. reusing a test case in multiple suites
      and changing behavior based on the suite name, dynamically modifying the
      setup/cleanup funcs, storing/reading stuff out of the suite->log, etc.).
      
      [1] In a subsequent patch, KUnit will allow running subsets of test
      cases within a suite by making a copy of the suite w/ the filtered test
      list. But there are other reasons KUnit might execute a copy, e.g. if it
      ever wants to support parallel execution of different suites, recovering
      from errors and restarting suites.
      Signed-off-by: Daniel Latypov <dlatypov@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
  4. 18 Oct, 2021 17 commits
  5. 17 Oct, 2021 3 commits
  6. 16 Oct, 2021 6 commits