Commit 915de2ad authored by Masami Hiramatsu, committed by Steven Rostedt

ftracetest: Add POSIX.3 standard and XFAIL result codes

Add XFAIL and the POSIX 1003.3 standard result codes (UNRESOLVED/
UNTESTED/UNSUPPORTED). These are used for tests that are expected
to fail, or that exercise a feature which is not supported (e.g. not
enabled in the kernel config).

To return these result codes, this introduces the exit_unresolved,
exit_untested, exit_unsupported and exit_xfail functions, which use
real-time signals to report the result code back to ftracetest.
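
The mechanism can be sketched standalone as below (an illustration
only, not the ftracetest code itself: the UNSUPPORTED=4 and SIG_BASE=36
values mirror the patch, the rest is simplified). A testcase that wants
to report a special result signals the runner with a reserved real-time
signal and then exits 0; the runner's trap records the code, and
eval_result (see the diff below) prefers that code over the raw exit
status:

  #!/bin/sh
  # Runner side: reserve one realtime signal per result code.
  UNSUPPORTED=4
  SIG_BASE=36                  # realtime signal range, as in the patch
  SIG_PID=$$                   # runner's PID; testcases signal this
  SIG_RESULT=0
  SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
  trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED

  # Testcase side: notify the runner, then exit 0 so the subshell
  # itself does not count as a failure.
  exit_unsupported() {
    kill -s $SIG_UNSUPPORTED $SIG_PID
    exit 0
  }

  ( exit_unsupported )              # a "testcase" runs in a subshell
  echo "result code: $SIG_RESULT"   # prints 4 (UNSUPPORTED)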

This also sets the "errexit" option for the testcases, so that
the tests don't need to exit explicitly on error.
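
For example, under errexit the failure-case sample below (samples/fail.tc
in this patch) stops at its first failing command and is counted as FAIL
with no explicit exit (shown here with an explicit set -e so the snippet
stands alone; in ftracetest the runner sets it before sourcing the test):

  set -e                   # errexit; ftracetest sets this for each test
  cat non-exist-file       # fails -> the test terminates here, non-zero
  echo "this is not executed"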

Note that if a test returns UNRESOLVED, UNSUPPORTED, or FAIL, its
test log, including the executed commands, is shown on the console
and in the main logfile, as below.

  ------
  # ./ftracetest samples/
  === Ftrace unit tests ===
  [1] failure-case example        [FAIL]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
  ++ cat non-exist-file
  cat: non-exist-file: No such file or directory
  [2] pass-case example   [PASS]
  [3] unresolved-case example     [UNRESOLVED]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
  ++ trap exit_unresolved INT
  ++ kill -INT 29324
  +++ exit_unresolved
  +++ kill -s 38 29265
  +++ exit 0
  [4] unsupported-case example    [UNSUPPORTED]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
  ++ exit_unsupported
  ++ kill -s 40 29265
  ++ exit 0
  [5] untested-case example       [UNTESTED]
  [6] xfail-case example  [XFAIL]

  # of passed:  1
  # of failed:  1
  # of unresolved:  1
  # of untested:  1
  # of unsupported:  1
  # of xfailed:  1
  # of undefined(test bug):  0
  ------

Link: http://lkml.kernel.org/p/20140929120211.30203.99510.stgit@kbuild-f20.novalocal
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
parent 2909ef28
@@ -38,6 +38,43 @@ extension) and rewrite the test description line.
 * The test cases should run on dash (busybox shell) for testing on
   minimal cross-build environments.
+* Note that the tests are run with "set -e" (errexit) option. If any
+  command fails, the test will be terminated immediately.
+* The tests can return some result codes instead of pass or fail by
+  using exit_unresolved, exit_untested, exit_unsupported and exit_xfail.
+
+Result code
+===========
+
+Ftracetest supports the following result codes.
+
+* PASS: The test succeeded as expected. A test which exits with 0 is
+  counted as a passed test.
+
+* FAIL: The test failed, but was expected to succeed. A test which exits
+  with a non-zero status is counted as a failed test.
+
+* UNRESOLVED: The test produced unclear or intermediate results; for
+  example, the test was interrupted, the test depends on a previous
+  test which failed, or the test was set up incorrectly. A test in any
+  of these situations must call exit_unresolved.
+
+* UNTESTED: The test was not run; it is currently just a placeholder.
+  In this case, the test must call exit_untested.
+
+* UNSUPPORTED: The test failed because of a lack of feature support.
+  In this case, the test must call exit_unsupported.
+
+* XFAIL: The test failed, and was expected to fail.
+  To return XFAIL, call exit_xfail from the test.
+
+There are some sample test scripts for the result codes under samples/.
+You can also run the samples as below:
+
+  # ./ftracetest samples/
+
 TODO
 ====
@@ -114,22 +114,106 @@ prlog "=== Ftrace unit tests ==="

 # Testcase management

+# Test result codes - Dejagnu extended code
+PASS=0 # The test succeeded.
+FAIL=1 # The test failed, but was expected to succeed.
+UNRESOLVED=2 # The test produced indeterminate results. (e.g. interrupted)
+UNTESTED=3 # The test was not run, currently just a placeholder.
+UNSUPPORTED=4 # The test failed because of lack of feature.
+XFAIL=5 # The test failed, and was expected to fail.
+
+# Accumulations
 PASSED_CASES=
 FAILED_CASES=
+UNRESOLVED_CASES=
+UNTESTED_CASES=
+UNSUPPORTED_CASES=
+XFAILED_CASES=
+UNDEFINED_CASES=
+TOTAL_RESULT=0
+
 CASENO=0
 testcase() { # testfile
   CASENO=$((CASENO+1))
   prlog -n "[$CASENO]"`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
 }
-failed() {
-  prlog " [FAIL]"
-  FAILED_CASES="$FAILED_CASES $CASENO"
-}
-passed() {
-  prlog " [PASS]"
-  PASSED_CASES="$PASSED_CASES $CASENO"
-}
+
+eval_result() { # retval sigval
+  local retval=$2
+  if [ $2 -eq 0 ]; then
+    test $1 -ne 0 && retval=$FAIL
+  fi
+  case $retval in
+    $PASS)
+      prlog " [PASS]"
+      PASSED_CASES="$PASSED_CASES $CASENO"
+      return 0
+    ;;
+    $FAIL)
+      prlog " [FAIL]"
+      FAILED_CASES="$FAILED_CASES $CASENO"
+      return 1 # this is a bug.
+    ;;
+    $UNRESOLVED)
+      prlog " [UNRESOLVED]"
+      UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
+      return 1 # this is a kind of bug.. something happened.
+    ;;
+    $UNTESTED)
+      prlog " [UNTESTED]"
+      UNTESTED_CASES="$UNTESTED_CASES $CASENO"
+      return 0
+    ;;
+    $UNSUPPORTED)
+      prlog " [UNSUPPORTED]"
+      UNSUPPORTED_CASES="$UNSUPPORTED_CASES $CASENO"
+      return 1 # this is not a bug, but the result should be reported.
+    ;;
+    $XFAIL)
+      prlog " [XFAIL]"
+      XFAILED_CASES="$XFAILED_CASES $CASENO"
+      return 0
+    ;;
+    *)
+      prlog " [UNDEFINED]"
+      UNDEFINED_CASES="$UNDEFINED_CASES $CASENO"
+      return 1 # this must be a test bug
+    ;;
+  esac
+}
+
+# Signal handling for result codes
+SIG_RESULT=
+SIG_BASE=36 # Use realtime signals
+SIG_PID=$$
+
+SIG_UNRESOLVED=$((SIG_BASE + UNRESOLVED))
+exit_unresolved () {
+  kill -s $SIG_UNRESOLVED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNRESOLVED' $SIG_UNRESOLVED
+
+SIG_UNTESTED=$((SIG_BASE + UNTESTED))
+exit_untested () {
+  kill -s $SIG_UNTESTED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNTESTED' $SIG_UNTESTED
+
+SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
+exit_unsupported () {
+  kill -s $SIG_UNSUPPORTED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED
+
+SIG_XFAIL=$((SIG_BASE + XFAIL))
+exit_xfail () {
+  kill -s $SIG_XFAIL $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$XFAIL' $SIG_XFAIL

 # Run one test case
 run_test() { # testfile
@@ -137,14 +221,17 @@ run_test() { # testfile
   local testlog=`mktemp --tmpdir=$LOG_DIR ${testname}-XXXXXX.log`
   testcase $1
   echo "execute: "$1 > $testlog
-  (cd $TRACING_DIR; set -x ; . $1) >> $testlog 2>&1
-  ret=$?
-  if [ $ret -ne 0 ]; then
-    failed
-    catlog $testlog
-  else
-    passed
-    [ $KEEP_LOG -eq 0 ] && rm $testlog
+  SIG_RESULT=0
+  # setup PID and PPID, $$ is not updated.
+  (cd $TRACING_DIR; read PID _ < /proc/self/stat ;
+   set -e; set -x; . $1) >> $testlog 2>&1
+  eval_result $? $SIG_RESULT
+  if [ $? -eq 0 ]; then
+    # Remove test log if the test was done as it was expected.
+    [ $KEEP_LOG -eq 0 ] && rm $testlog
+  else
+    catlog $testlog
+    TOTAL_RESULT=1
   fi
 }
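
(A side note on the read from /proc/self/stat above: $$ is not updated
in a subshell, so the testcase cannot learn its own PID from $$; reading
the first field of /proc/self/stat inside the subshell yields it. A
quick illustration, assuming a Linux procfs:)

  echo $$                                      # the parent shell's PID
  ( echo $$ )                                  # same value inside a subshell
  ( read PID _ < /proc/self/stat; echo $PID )  # the subshell's actual PID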
@@ -152,8 +239,15 @@ run_test() { # testfile
 for t in $TEST_CASES; do
   run_test $t
 done
 prlog ""
 prlog "# of passed: " `echo $PASSED_CASES | wc -w`
 prlog "# of failed: " `echo $FAILED_CASES | wc -w`
+prlog "# of unresolved: " `echo $UNRESOLVED_CASES | wc -w`
+prlog "# of untested: " `echo $UNTESTED_CASES | wc -w`
+prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
+prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
+prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`

-test -z "$FAILED_CASES" # if no error, return 0
+# if no error, return 0
+exit $TOTAL_RESULT
samples/fail.tc (new):
+#!/bin/sh
+# description: failure-case example
+cat non-exist-file
+echo "this is not executed"

samples/pass.tc (new):
+#!/bin/sh
+# description: pass-case example
+return 0

samples/unresolved.tc (new):
+#!/bin/sh
+# description: unresolved-case example
+trap exit_unresolved INT
+kill -INT $PID

samples/unsupported.tc (new):
+#!/bin/sh
+# description: unsupported-case example
+exit_unsupported

samples/untested.tc (new):
+#!/bin/sh
+# description: untested-case example
+exit_untested

samples/xfail.tc (new):
+#!/bin/sh
+# description: xfail-case example
+cat non-exist-file || exit_xfail
 #!/bin/sh
 # description: Basic test for tracers
+test -f available_tracers
 for t in `cat available_tracers`; do
-  echo $t > current_tracer || exit 1
+  echo $t > current_tracer
 done
 echo nop > current_tracer
 #!/bin/sh
 # description: Basic trace clock test
-[ -f trace_clock ] || exit 1
+test -f trace_clock
 for c in `cat trace_clock | tr -d \[\]`; do
-  echo $c > trace_clock || exit 1
-  grep '\['$c'\]' trace_clock || exit 1
+  echo $c > trace_clock
+  grep '\['$c'\]' trace_clock
 done
 echo local > trace_clock
 #!/bin/sh
 # description: Kprobe dynamic event - adding and removing
-[ -f kprobe_events ] || exit 1
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-grep myevent kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
+[ -f kprobe_events ] || exit_unsupported # this is configurable
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+grep myevent kprobe_events
+test -d events/kprobes/myevent
 echo > kprobe_events
 #!/bin/sh
 # description: Kprobe dynamic event - busy event check
-[ -f kprobe_events ] || exit 1
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
-echo 1 > events/kprobes/myevent/enable || exit 1
+[ -f kprobe_events ] || exit_unsupported
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+test -d events/kprobes/myevent
+echo 1 > events/kprobes/myevent/enable
 echo > kprobe_events && exit 1 # this must fail
-echo 0 > events/kprobes/myevent/enable || exit 1
+echo 0 > events/kprobes/myevent/enable
 echo > kprobe_events # this must succeed
 #!/bin/sh
 # description: %HERE DESCRIBE WHAT THIS DOES%
 # you have to add ".tc" extension for your testcase file
+# Note that all tests are run with "errexit" option.
 exit 0 # Return 0 if the test is passed, otherwise return !0
+# If the test could not run because of lack of feature, call exit_unsupported
+# If the test returned unclear results, call exit_unresolved
+# If the test is a dummy, or a placeholder, call exit_untested