1. 25 Feb, 2022 6 commits
    • ftrace: Remove unused ftrace_startup_enable() stub · ab2f993c
      Nathan Chancellor authored
      When building with clang + CONFIG_DYNAMIC_FTRACE=n + W=1, there is a
      warning:
      
        kernel/trace/ftrace.c:7194:20: error: unused function 'ftrace_startup_enable' [-Werror,-Wunused-function]
        static inline void ftrace_startup_enable(int command) { }
                           ^
        1 error generated.
      
      Clang warns about unused static inline functions in .c files with
      W=1 after commit 6863f564 ("kbuild: allow Clang to find unused
      static inline functions for W=1 build").
      
      The ftrace_startup_enable() stub has been unused since
      commit e1effa01 ("ftrace: Annotate the ops operation on update"),
      where its use outside of the CONFIG_DYNAMIC_FTRACE section was replaced
      by ftrace_startup_all().  Remove it to resolve the warning.
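
      For reference, here is a minimal user-space reproduction of the
      warning; the file name is made up and this is not kernel code:

      	/* stub.c: compile with `clang -Wunused-function -c stub.c`.
      	 * Clang flags an unused static inline function defined in a .c
      	 * file, while GCC stays silent, which is how such stubs go
      	 * unnoticed until a clang W=1 build. */
      	static inline void ftrace_startup_enable(int command) { }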
      
      Link: https://lkml.kernel.org/r/20220214192847.488166-1-nathan@kernel.org
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Nathan Chancellor <nathan@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Ensure trace buffer is at least 4096 bytes large · 7acf3a12
      Sven Schnelle authored
      Booting the kernel with 'trace_buf_size=1' gives a warning at
      boot during the ftrace selftests:
      
      [    0.892809] Running postponed tracer tests:
      [    0.892893] Testing tracer function:
      [    0.901899] Callback from call_rcu_tasks_trace() invoked.
      [    0.983829] Callback from call_rcu_tasks_rude() invoked.
      [    1.072003] .. bad ring buffer .. corrupted trace buffer ..
      [    1.091944] Callback from call_rcu_tasks() invoked.
      [    1.097695] PASSED
      [    1.097701] Testing dynamic ftrace: .. filter failed count=0 ..FAILED!
      [    1.353474] ------------[ cut here ]------------
      [    1.353478] WARNING: CPU: 0 PID: 1 at kernel/trace/trace.c:1951 run_tracer_selftest+0x13c/0x1b0
      
      Therefore enforce a minimum buffer size of 4096 bytes to make the selftests pass.
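
      A minimal sketch of the idea behind the fix; the macro and helper
      names are illustrative, not the exact kernel code:

      	#define TRACE_BUF_SIZE_MIN 4096	/* smallest size the selftests tolerate */

      	/* Clamp a boot-time trace_buf_size= request to the working minimum. */
      	unsigned long clamp_trace_buf_size(unsigned long requested)
      	{
      		return requested < TRACE_BUF_SIZE_MIN ? TRACE_BUF_SIZE_MIN
      						      : requested;
      	}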
      
      Link: https://lkml.kernel.org/r/20220214134456.1751749-1-svens@linux.ibm.com
      Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Uninline trace_trigger_soft_disabled() partly · bc82c38a
      Christophe Leroy authored
      On a powerpc32 build with CONFIG_CC_OPTIMISE_FOR_SIZE, the inline
      keyword is not honored and trace_trigger_soft_disabled() appears
      approx 50 times in vmlinux.
      
      With -Winline added to the build, the following message appears:
      
      	./include/linux/trace_events.h:712:1: error: inlining failed in call to 'trace_trigger_soft_disabled': call is unlikely and code size would grow [-Werror=inline]
      
      That function is rather big for an inlined function:
      
      	c003df60 <trace_trigger_soft_disabled>:
      	c003df60:	94 21 ff f0 	stwu    r1,-16(r1)
      	c003df64:	7c 08 02 a6 	mflr    r0
      	c003df68:	90 01 00 14 	stw     r0,20(r1)
      	c003df6c:	bf c1 00 08 	stmw    r30,8(r1)
      	c003df70:	83 e3 00 24 	lwz     r31,36(r3)
      	c003df74:	73 e9 01 00 	andi.   r9,r31,256
      	c003df78:	41 82 00 10 	beq     c003df88 <trace_trigger_soft_disabled+0x28>
      	c003df7c:	38 60 00 00 	li      r3,0
      	c003df80:	39 61 00 10 	addi    r11,r1,16
      	c003df84:	4b fd 60 ac 	b       c0014030 <_rest32gpr_30_x>
      	c003df88:	73 e9 00 80 	andi.   r9,r31,128
      	c003df8c:	7c 7e 1b 78 	mr      r30,r3
      	c003df90:	41 a2 00 14 	beq     c003dfa4 <trace_trigger_soft_disabled+0x44>
      	c003df94:	38 c0 00 00 	li      r6,0
      	c003df98:	38 a0 00 00 	li      r5,0
      	c003df9c:	38 80 00 00 	li      r4,0
      	c003dfa0:	48 05 c5 f1 	bl      c009a590 <event_triggers_call>
      	c003dfa4:	73 e9 00 40 	andi.   r9,r31,64
      	c003dfa8:	40 82 00 28 	bne     c003dfd0 <trace_trigger_soft_disabled+0x70>
      	c003dfac:	73 ff 02 00 	andi.   r31,r31,512
      	c003dfb0:	41 82 ff cc 	beq     c003df7c <trace_trigger_soft_disabled+0x1c>
      	c003dfb4:	80 01 00 14 	lwz     r0,20(r1)
      	c003dfb8:	83 e1 00 0c 	lwz     r31,12(r1)
      	c003dfbc:	7f c3 f3 78 	mr      r3,r30
      	c003dfc0:	83 c1 00 08 	lwz     r30,8(r1)
      	c003dfc4:	7c 08 03 a6 	mtlr    r0
      	c003dfc8:	38 21 00 10 	addi    r1,r1,16
      	c003dfcc:	48 05 6f 6c 	b       c0094f38 <trace_event_ignore_this_pid>
      	c003dfd0:	38 60 00 01 	li      r3,1
      	c003dfd4:	4b ff ff ac 	b       c003df80 <trace_trigger_soft_disabled+0x20>
      
      However, it is located in a hot path, so inlining it is important.
      But forcing inlining of the entire function with __always_inline
      increases the text size by approximately 20 kbytes.
      
      Instead, split the function in two parts: the likely fast path,
      flagged __always_inline, and a second part kept out of line, as
      sketched below.
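
      A self-contained sketch of the resulting shape; the struct and flag
      names are simplified stand-ins for the kernel's trace_event_file and
      EVENT_FILE_FL_* definitions, and the real split differs in detail:

      	#include <stdbool.h>

      	#define FL_TRIGGER_MODE		(1UL << 0)
      	#define FL_SOFT_DISABLED	(1UL << 1)
      	#define FL_PID_FILTER		(1UL << 2)

      	struct event_file { unsigned long flags; };

      	/* Out-of-line part: lives in a .c file, handles the rare cases. */
      	extern bool trigger_soft_disabled_slow(struct event_file *file);

      	/* Inline part: one flag test covers the overwhelmingly common case. */
      	static inline __attribute__((__always_inline__)) bool
      	trigger_soft_disabled(struct event_file *file)
      	{
      		unsigned long eflags = file->flags;

      		if (!(eflags & (FL_TRIGGER_MODE | FL_SOFT_DISABLED | FL_PID_FILTER)))
      			return false;			/* fast path: just record */

      		return trigger_soft_disabled_slow(file);	/* rare, out of line */
      	}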
      
      With this change, on powerpc32 with CONFIG_CC_OPTIMISE_FOR_SIZE,
      vmlinux text increases by only 1.4 kbytes, which is partly offset
      by a 7 kbyte decrease in vmlinux data.
      
      On ppc64_defconfig, which has CONFIG_CC_OPTIMISE_FOR_SPEED, this
      change reduces vmlinux text by more than 30 kbytes.
      
      Link: https://lkml.kernel.org/r/69ce0986a52d026d381d612801d978aa4f977460.1644563295.git.christophe.leroy@csgroup.eu
      Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • eprobes: Remove redundant event type information · b61edd57
      Steven Rostedt (Google) authored
      Currently, the event probes save the type of the event they are attached
      to when recording the event. For example:
      
        # echo 'e:switch sched/sched_switch prev_state=$prev_state prev_prio=$prev_prio next_pid=$next_pid next_prio=$next_prio' > dynamic_events
        # cat events/eprobes/switch/format
      
       name: switch
       ID: 1717
       format:
              field:unsigned short common_type;       offset:0;       size:2; signed:0;
              field:unsigned char common_flags;       offset:2;       size:1; signed:0;
              field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
              field:int common_pid;   offset:4;       size:4; signed:1;
      
              field:unsigned int __probe_type;        offset:8;       size:4; signed:0;
              field:u64 prev_state;   offset:12;      size:8; signed:0;
              field:u64 prev_prio;    offset:20;      size:8; signed:0;
              field:u64 next_pid;     offset:28;      size:8; signed:0;
              field:u64 next_prio;    offset:36;      size:8; signed:0;
      
       print fmt: "(%u) prev_state=0x%Lx prev_prio=0x%Lx next_pid=0x%Lx next_prio=0x%Lx", REC->__probe_type, REC->prev_state, REC->prev_prio, REC->next_pid, REC->next_prio
      
      The __probe_type adds 4 bytes to every event.
      
      One of the reasons for creating eprobes is to limit what is traced
      in an event in order to limit what is written into the ring buffer.
      Having this redundant 4 bytes in every event takes away from that.
      
      The event that is recorded can be retrieved from the event probe
      itself, which is available when the trace is happening. User space
      tools can simply read the dynamic_events file to find the event a
      probe is attached to. So there is really no reason to write this
      information into the ring buffer for every event.
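
      To illustrate the saving, compare two hypothetical record layouts
      for the eprobe above; these structs are illustrative only, and the
      ring buffer actually packs fields without natural alignment:

      	/* Old layout: the originating event's type rides along in every
      	 * record, costing 4 bytes per event. */
      	struct eprobe_record_old {
      		unsigned int __probe_type;
      		unsigned long long prev_state, prev_prio, next_pid, next_prio;
      	};

      	/* New layout: the type lives only in the probe definition, so the
      	 * payload starts right after the common header fields. */
      	struct eprobe_record_new {
      		unsigned long long prev_state, prev_prio, next_pid, next_prio;
      	};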
      
      Link: https://lkml.kernel.org/r/20220218190057.2f5a19a8@gandalf.local.home
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Joel Fernandes <joel@joelfernandes.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Have traceon and traceoff trigger honor the instance · 302e9edd
      Steven Rostedt (Google) authored
      If a trigger is set on an event to disable or enable tracing within
      an instance, then tracing should be disabled or enabled in that
      instance and not at the top level, as the current behavior is
      confusing to users.
      
      Link: https://lkml.kernel.org/r/20220223223837.14f94ec3@rorschach.local.home
      
      Cc: stable@vger.kernel.org
      Fixes: ae63b31e ("tracing: Separate out trace events from global variables")
      Tested-by: Daniel Bristot de Oliveira <bristot@kernel.org>
      Reviewed-by: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Dump stacktrace trigger to the corresponding instance · ce33c845
      Daniel Bristot de Oliveira authored
      The stacktrace event trigger is not dumping the stacktrace to the instance
      where it was enabled, but to the global "instance".
      
      Use the private_data, pointing to the trigger file, to figure out the
      corresponding trace instance, and use it in the trigger action, like
      snapshot_trigger does.
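
      A hedged sketch of that pattern; the types and helpers below are
      simplified stand-ins for the kernel's trigger plumbing:

      	/* The trigger's private_data points at the trace event file,
      	 * which in turn knows its owning trace instance. */
      	struct trace_array;
      	struct trace_event_file { struct trace_array *tr; };
      	struct event_trigger_data { void *private_data; };

      	extern void instance_dump_stack(struct trace_array *tr);
      	extern void global_dump_stack(void);

      	void stacktrace_trigger(struct event_trigger_data *data)
      	{
      		struct trace_event_file *file = data->private_data;

      		if (file)
      			instance_dump_stack(file->tr);	/* dump where enabled */
      		else
      			global_dump_stack();		/* fall back to global */
      	}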
      
      Link: https://lkml.kernel.org/r/afbb0b4f18ba92c276865bc97204d438473f4ebc.1645396236.git.bristot@kernel.org
      
      Cc: stable@vger.kernel.org
      Fixes: ae63b31e ("tracing: Separate out trace events from global variables")
      Reviewed-by: Tom Zanussi <zanussi@kernel.org>
      Tested-by: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
  2. 24 Feb, 2022 1 commit
  3. 08 Feb, 2022 3 commits
  4. 04 Feb, 2022 3 commits
  5. 28 Jan, 2022 10 commits
  6. 23 Jan, 2022 6 commits
    • Linux 5.17-rc1 · e783362e
      Linus Torvalds authored
    • Merge tag 'perf-tools-for-v5.17-2022-01-22' of... · 40c84321
      Linus Torvalds authored
      Merge tag 'perf-tools-for-v5.17-2022-01-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
      
      Pull more perf tools updates from Arnaldo Carvalho de Melo:
      
       - Fix printing 'phys_addr' in 'perf script'.
      
       - Fix failure to add events with 'perf probe' in ppc64 due to not
         removing leading dot (ppc64 ABIv1).
      
       - Fix cpu_map__item() python binding building.
      
       - Support event alias in form foo-bar-baz, add pmu-events and
         parse-event tests for it.
      
       - No need to setup affinities when starting a workload or attaching to
         a pid.
      
       - Use path__join() to compose a path instead of ad-hoc snprintf()
         equivalent.
      
       - Override attr->sample_period for non-libpfm4 events.
      
       - Use libperf cpumap APIs instead of accessing the internal state
         directly.
      
       - Sync x86 arch prctl headers and files changed by the new
         set_mempolicy_home_node syscall with the kernel sources.
      
       - Remove duplicate include in cpumap.h.
      
       - Remove redundant err variable.
      
      * tag 'perf-tools-for-v5.17-2022-01-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
        perf tools: Remove redundant err variable
        perf test: Add parse-events test for aliases with hyphens
        perf test: Add pmu-events test for aliases with hyphens
        perf parse-events: Support event alias in form foo-bar-baz
        perf evsel: Override attr->sample_period for non-libpfm4 events
        perf cpumap: Remove duplicate include in cpumap.h
        perf cpumap: Migrate to libperf cpumap api
        perf python: Fix cpu_map__item() building
        perf script: Fix printing 'phys_addr' failure issue
        tools headers UAPI: Sync files changed by new set_mempolicy_home_node syscall
        tools headers UAPI: Sync x86 arch prctl headers with the kernel sources
        perf machine: Use path__join() to compose a path instead of snprintf(dir, '/', filename)
        perf evlist: No need to setup affinities when disabling events for pid targets
        perf evlist: No need to setup affinities when enabling events for pid targets
        perf stat: No need to setup affinities when starting a workload
        perf affinity: Allow passing a NULL arg to affinity__cleanup()
        perf probe: Fix ppc64 'perf probe add events failed' case
    • Merge tag 'trace-v5.17-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace · 67bfce0e
      Linus Torvalds authored
      Pull ftrace fix from Steven Rostedt:
       "Fix s390 breakage from sorting mcount tables.
      
        The latest merge of the tracing tree sorts the mcount table at build
        time. But s390 appears to do things differently (like always) and
         replaces the sorted table with the original unsorted one. As the
        ftrace algorithm depends on it being sorted, bad things happen when it
        is not, and s390 experienced those bad things.
      
         Add a new config to tell the kernel at boot whether the mcount table
         is sorted, and allow s390 to opt out of it"
      
      * tag 'trace-v5.17-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
        ftrace: Fix assuming build time sort works for s390
    • ftrace: Fix assuming build time sort works for s390 · 6b9b6413
      Steven Rostedt (Google) authored
      To speed up the boot process, as mcount_loc needs to be sorted for
      ftrace to work properly, sorting it at build time is more efficient
      than sorting at boot up and can save milliseconds of time.
      Unfortunately, this change broke s390, which modifies the mcount_loc
      locations after the sorting takes place and puts back the unsorted
      values. Since the sorting is skipped at boot up if it is believed
      that the table was sorted at build time, ftrace can crash, as its
      algorithms depend on the list being sorted.
      
      Add a new config BUILDTIME_MCOUNT_SORT that is set when
      BUILDTIME_TABLE_SORT is enabled but not on S390. Use this config to
      determine whether sorting should take place at boot up.
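
      A hedged sketch of how such a config can gate the boot-time sort;
      the helpers below are illustrative, not the actual ftrace code:

      	/* With CONFIG_BUILDTIME_MCOUNT_SORT unset (as on s390), the
      	 * table must still be sorted at boot before ftrace uses it. */
      	extern void sort_mcount_loc(void);

      	void ftrace_init_mcount(void)
      	{
      	#ifndef CONFIG_BUILDTIME_MCOUNT_SORT
      		sort_mcount_loc();	/* build-time sort absent or untrusted */
      	#endif
      		/* ... setup that relies on a sorted mcount_loc ... */
      	}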
      
      Link: https://lore.kernel.org/all/yt9dee51ctfn.fsf@linux.ibm.com/
      
      Fixes: 72b3942a ("scripts: ftrace - move the sort-processing in ftrace_init")
      Reported-by: Sven Schnelle <svens@linux.ibm.com>
      Tested-by: Heiko Carstens <hca@linux.ibm.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • Merge tag 'kbuild-fixes-v5.17' of... · 473aec0e
      Linus Torvalds authored
      Merge tag 'kbuild-fixes-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
      
      Pull Kbuild fixes from Masahiro Yamada:
      
       - Bring include/uapi/linux/nfc.h into the UAPI compile-test coverage
      
       - Revert the workaround of CONFIG_CC_IMPLICIT_FALLTHROUGH
      
       - Fix build errors in certs/Makefile
      
      * tag 'kbuild-fixes-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
        certs: Fix build error when CONFIG_MODULE_SIG_KEY is empty
        certs: Fix build error when CONFIG_MODULE_SIG_KEY is PKCS#11 URI
        Revert "Makefile: Do not quote value for CONFIG_CC_IMPLICIT_FALLTHROUGH"
        usr/include/Makefile: add linux/nfc.h to the compile-test coverage
    • Merge tag 'bitmap-5.17-rc1' of git://github.com/norov/linux · 3689f9f8
      Linus Torvalds authored
      Pull bitmap updates from Yury Norov:
      
       - introduce for_each_set_bitrange()
      
       - use find_first_*_bit() instead of find_next_*_bit() where possible
      
       - unify for_each_bit() macros
      
      * tag 'bitmap-5.17-rc1' of git://github.com/norov/linux:
        vsprintf: rework bitmap_list_string
        lib: bitmap: add performance test for bitmap_print_to_pagebuf
        bitmap: unify find_bit operations
        mm/percpu: micro-optimize pcpu_is_populated()
        Replace for_each_*_bit_from() with for_each_*_bit() where appropriate
        find: micro-optimize for_each_{set,clear}_bit()
        include/linux: move for_each_bit() macros from bitops.h to find.h
        cpumask: replace cpumask_next_* with cpumask_first_* where appropriate
        tools: sync tools/bitmap with mother linux
        all: replace find_next{,_zero}_bit with find_first{,_zero}_bit where appropriate
        cpumask: use find_first_and_bit()
        lib: add find_first_and_bit()
        arch: remove GENERIC_FIND_FIRST_BIT entirely
        include: move find.h from asm_generic to linux
        bitops: move find_bit_*_le functions from le.h to find.h
        bitops: protect find_first_{,zero}_bit properly
  7. 22 Jan, 2022 11 commits