- 06 Nov, 2008 4 commits
-
-
Steven Rostedt authored
Impact: change where tracing is started up and stopped

Currently, when a new tracer is selected via echo'ing a tracer name into the current_tracer file, the startup is only done if tracing_enabled is set to one. If tracing_enabled is changed to zero (by echo'ing 0 into the tracing_enabled file) a full shutdown is performed.

The full startup and shutdown of a tracer can be expensive, and the user can lose traces when echo'ing 0 into the tracing_enabled file because the process takes too long. There may also be places where the user would like to start and stop the tracer several times, and the full startup and shutdown of a tracer might be too expensive.

This patch performs the full startup and shutdown when a tracer is selected. It also adds a way to do a quick start or stop of a tracer. The quick version is just a flag that prevents the tracing from taking place, but the overhead of the code is still there.

For example, the startup of a tracer may enable tracepoints, or enable the function tracer. The stop and start will just set a flag to have the tracer ignore the calls when the tracepoint or function trace is called. The overhead of the tracer may still be present when the tracer is stopped, but no tracing will occur. Setting the tracer to the 'nop' tracer (or any other tracer) will perform the shutdown of the tracer, which will disable the tracepoint or disable the function tracer.

The tracing_enabled file will simply start or stop tracing.

This change is all internal. The end result for the user should be the same as before: if tracing_enabled is not set, no trace will happen; if tracing_enabled is set, the trace will happen. The tracing_enabled setting persists across tracers: enabling tracing_enabled and switching to another tracer will keep tracing_enabled enabled, and the same is true for disabling it.

This patch now provides a fast start/stop method to users for enabling or disabling tracing.

Note: There were two methods in struct tracer that were never used: start and stop. These were meant as hooks for the reading of the trace output, but ended up not being necessary. These two methods are now used to enable the start and stop of each tracer, in case the tracer needs to do more than just not write into the buffer. For example, the irqsoff tracer must stop recording max latencies when tracing is stopped.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
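A minimal sketch of how a plugin might wire those start/stop hooks, using the irqsoff case from the note above; the field placement in struct tracer is assumed from the description, not copied from the patch:

    /* hedged sketch: the hooks let a tracer do more than stop writing,
     * e.g. pause max-latency bookkeeping while tracing is off */
    static void irqsoff_tracer_start(struct trace_array *tr)
    {
            /* resume recording max latencies */
    }

    static void irqsoff_tracer_stop(struct trace_array *tr)
    {
            /* stop recording max latencies while tracing is stopped */
    }

    static struct tracer irqsoff_tracer __read_mostly = {
            .name  = "irqsoff",
            .start = irqsoff_tracer_start,
            .stop  = irqsoff_tracer_stop,
            /* ... init, reset, ... */
    };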
-
Steven Rostedt authored
Impact: add way to quickly start/stop tracing from the kernel

This patch adds a soft stop and start to tracing. It simply disables function tracing via the ftrace_disabled flag, and disables the trace buffers to prevent recording. The tracing code may still be executed, but the trace will not be recorded.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
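A hedged usage sketch, assuming the kernel-internal entry points this commit describes are the tracing_start()/tracing_stop() pair (the helper in the middle is hypothetical):

    /* hedged sketch: bracket a noisy section so nothing is recorded,
     * without paying for a full tracer shutdown and startup */
    tracing_stop();
    do_noisy_work();        /* hypothetical helper */
    tracing_start();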
-
Steven Rostedt authored
Impact: quick start and stop of function tracer

This patch adds a way to disable the function tracer quickly, without the need to run kstop_machine. It adds a new variable called function_trace_stop which will stop the calls to functions from mcount when set. This is just an on/off switch and does not handle recursion like preempt_disable(). Its main purpose is to help other tracers/debuggers start and stop tracing functions without the need to call kstop_machine.

The config option HAVE_FUNCTION_TRACE_MCOUNT_TEST is added for archs that implement the testing of function_trace_stop in the arch-dependent mcount code. Otherwise, the test is done in the C code. x86 is the only arch at the moment that supports this.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
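A sketch of what the C-side fallback test can look like when an arch does not select HAVE_FUNCTION_TRACE_MCOUNT_TEST; the wrapper name and the __ftrace_trace_function indirection are assumptions:

    /* hedged sketch: check the stop flag before calling the real tracer */
    static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
    {
            if (function_trace_stop)
                    return;

            __ftrace_trace_function(ip, parent_ip);
    }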
-
Ingo Molnar authored
-
- 04 Nov, 2008 11 commits
-
-
Frederic Weisbecker authored
Impact: fix boot tracer + sched tracer coupling bug

Fix a bug that made the sched_switch tracer unable to run if set as the current_tracer after the boot tracer.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Frederic Weisbecker authored
Impact: cleanup

This patch applies some corrections suggested by Steven Rostedt. Change the type of sched_ref to int: since it is now protected by a mutex, we no longer need it to be an atomic variable in the sched_switch tracer. Also change the name of the register mutex.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Frederic Weisbecker authored
Impact: enhance boot trace output with scheduling events

Use the sched_switch tracer from the boot tracer, so we can also trace scheduling events inside the initcalls. Sched tracing is disabled after an initcall has finished and then re-enabled before the next one starts.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Frederic Weisbecker authored
Impact: cleanup

When init_sched_switch_trace() is called, it has no reason to start the sched tracer if sched_ref is not zero.

_ If it is non-zero, the tracer is already in use, but we can still register it with the tracing engine. There is already a safeguard that prevents the tracer probes from being registered twice.

_ If it is zero, this block will not be used.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Frederic Weisbecker authored
Impact: fix race condition in sched_switch tracer

This patch fixes a race condition in the sched_switch tracer. If several tasks (e.g. concurrent initcalls) are playing with tracing_start_cmdline_record() and tracing_stop_cmdline_record(), the following situation could happen:

_ Task A and task B are using the same tracepoint probe. Task A holds it; task B is sleeping and doesn't hold it.

_ Task A frees the sched tracer, and sched_ref is decremented to 0.

_ Task A is preempted before it has unregistered its tracepoint probe, and B runs.

_ B increments sched_ref, sees it is 1, and concludes it has to register its probe. But the probe has not yet been unregistered by task A.

_ A lot of bad things can happen after that...

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
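A hedged sketch of the shape of the fix, with the mutex and register/unregister helper names assumed: serialize the refcount and the probe (un)registration under one lock so a preempted unregister cannot race a fresh register:

    static DEFINE_MUTEX(sched_register_mutex);      /* name assumed */

    void tracing_start_cmdline_record(void)
    {
            mutex_lock(&sched_register_mutex);
            if (!sched_ref++)               /* first user: hook the probes */
                    tracing_sched_register();
            mutex_unlock(&sched_register_mutex);
    }

    void tracing_stop_cmdline_record(void)
    {
            mutex_lock(&sched_register_mutex);
            if (!--sched_ref)               /* last user: unhook them */
                    tracing_sched_unregister();
            mutex_unlock(&sched_register_mutex);
    }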
-
Frederic Weisbecker authored
Impact: modify boot tracer

We used to disable the initcall tracing at a specified time (i.e. the end of built-in initcalls). But we don't need that anymore: it will be stopped when initcalls are finished.

However we want two things:

_ Start this tracing only after pre-smp initcalls are finished.

_ Since we plan to trace sched_switches at the same time, enable them only during initcall execution.

For this purpose, this patch introduces two functions to enable/disable the sched_switch tracing during boot.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Impact: fix sysctl name typo

Steve must have needed more coffee ;-)

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Impact: add SysRq-z support to dump trace buffers

Allows one to force an ftrace dump from sysrq.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
Impact: disable interrupts during trace entry creation (as opposed to preempt)

To help with performance, I set the ftracer to not disable interrupts, and only to disable preemption. If an interrupt occurred, it would not be traced, because the function tracer protects itself from recursion. This may be faster, but the trace output might miss some traces.

This patch makes the function trace disable interrupts, but it also adds a runtime feature to disable preemption instead. It does this by having two different tracer functions. When the function tracer is enabled, it will check to see which version is requested (irqs disabled or preemption disabled). Then it will use the corresponding function as the tracer. Irq disabling is the default behaviour, but if the user wants better performance, with the chance of missing traces, then they can choose the preempt-disabled version.

Running hackbench 3 times with the irqs disabled and 3 times with the preempt disabled function tracer yielded:

  tracing type       times      entries recorded
  ------------      --------    ----------------
  irq disabled       43.393       166433066
                     43.282       166172618
                     43.298       166256704

  preempt disabled   38.969       159871710
                     38.943       159972935
                     39.325       161056510

  Average:

  irqs disabled:     43.324       166287462
  preempt disabled:  39.079       160300385

preempt is 10.8 percent faster than irqs disabled.

I wrote a patch to count function trace recursion and reran hackbench. With irqs disabled, the function tracer did not trace due to recursion 1,150 times; with preempt disabled, 5,117,718 times. The thousand times with irqs disabled could be due to NMIs, or simply a case where it called a function that was not protected by notrace. But we also see that a large amount of the trace is lost with the preempt version.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
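A hedged sketch of the two recording variants described above; the function names and the ring-buffer step are assumptions, only the irq-save vs. preempt-disable split comes from the commit:

    static void function_trace_call_irq(unsigned long ip, unsigned long parent_ip)
    {
            unsigned long flags;

            local_irq_save(flags);          /* default: interrupts traced too */
            /* record ip/parent_ip into the trace buffer */
            local_irq_restore(flags);
    }

    static void function_trace_call_preempt(unsigned long ip, unsigned long parent_ip)
    {
            preempt_disable_notrace();      /* faster, but may miss irq traces */
            /* record ip/parent_ip into the trace buffer */
            preempt_enable_notrace();
    }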
-
Steven Rostedt authored
Impact: use new, consolidated APIs in ftrace plugins

This patch replaces the schedule-safe preempt-disable code with the ftrace_preempt_disable() and ftrace_preempt_enable() safe functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Steven Rostedt authored
Impact: add new ftrace-plugin internal APIs

Parts of the tracer need to be careful about schedule recursion. If the NEED_RESCHED flag is set, a preempt_enable will call schedule. Inside the schedule function, the NEED_RESCHED flag is cleared. The problem arises when a trace happens in the schedule function but before NEED_RESCHED is cleared. The race is as follows:

  schedule()
    >> tracer called

      trace_function()
         preempt_disable()
         [ record trace ]
         preempt_enable()  <<- here's the issue.

           [check NEED_RESCHED]
            schedule()
            [ Repeat the above, over and over again ]

The naive approach is simply to use preempt_enable_no_resched instead. The problem with that approach is that, although we solve the schedule recursion issue, we now might lose a preemption check when not in the schedule function.

  trace_function()
      preempt_disable()
      [ record trace ]

      [Interrupt comes in and sets NEED_RESCHED]

      preempt_enable_no_resched()
      [continue without scheduling]

The way ftrace handles this problem is with the following approach:

  int resched;

  resched = need_resched();
  preempt_disable_notrace();

  [record trace]

  if (resched)
          preempt_enable_no_resched_notrace();
  else
          preempt_enable_notrace();

This may seem like the opposite of what we want. If resched is set, then we call the "no_resched" version?? The reason we do this is that if NEED_RESCHED is set before we disable preemption, there are two possible reasons:

  1) we are in an atomic code path
  2) we are already on our way to the schedule function, and maybe even in the schedule function, but have yet to clear the flag.

In both of the above cases we do not want to schedule.

This solution has already been implemented within the ftrace infrastructure. But the problem is that it has been implemented several times. This patch encapsulates this code into two nice functions:

  resched = ftrace_preempt_disable();

  [ record trace ]

  ftrace_preempt_enable(resched);

This way the tracers do not need to worry about getting it right.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
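A sketch of what the two consolidated helpers would look like, following the pattern spelled out above; treat this as an illustration of the technique rather than a verbatim copy of the patch:

    static inline int ftrace_preempt_disable(void)
    {
            int resched = need_resched();

            preempt_disable_notrace();
            return resched;
    }

    static inline void ftrace_preempt_enable(int resched)
    {
            if (resched)
                    preempt_enable_no_resched_notrace();
            else
                    preempt_enable_notrace();
    }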
-
- 03 Nov, 2008 6 commits
-
-
Ingo Molnar authored
Merge branches 'tracing/ftrace', 'tracing/markers', 'tracing/mmiotrace', 'tracing/nmisafe', 'tracing/tracepoints' and 'tracing/urgent' into tracing/core
-
Lai Jiangshan authored
Impact: add new tracepoint APIs to allow the batched registration of probes

The new APIs split tracepoint_probe_register() and tracepoint_probe_unregister() into two steps. The first step only updates the tracepoint_entry; it does not connect or disconnect anything. This patch introduces tracepoint_probe_update_all() to perform the update for all of them. These APIs are very useful for registering lots of probes while updating only once. Another very important point is that the *_noupdate APIs do not require module_mutex.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
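A hedged usage sketch, assuming the two-step register variant is named tracepoint_probe_register_noupdate() per the *_noupdate naming above; the caller, NR_PROBES, tp_names and tp_probes are all hypothetical:

    /* hedged sketch: queue many probe registrations, connect them once */
    static int register_my_probes(void)
    {
            int i, ret;

            for (i = 0; i < NR_PROBES; i++) {
                    ret = tracepoint_probe_register_noupdate(tp_names[i],
                                                             tp_probes[i]);
                    if (ret)
                            return ret;
            }

            tracepoint_probe_update_all();  /* single connect pass */
            return 0;
    }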
-
Lai Jiangshan authored
Impact: simplify implementation

Unused memory is now handled by struct tp_probes. The old code used these three fields to handle unused memory:

  struct tracepoint_entry {
          ...
          struct rcu_head rcu;
          void *oldptr;
          unsigned char rcu_pending:1;
          ...
  };

In that scheme, unused memory was handled by struct tracepoint_entry itself. It brought a re-entrancy bug (since fixed) and filled tracepoint.c with ".*rcu.*" code statements. This patch removes all of that.

In addition: rcu_barrier_sched() is removed; there is no need to re-acquire tracepoints_mutex after tracepoint_update_probes(); and several little cleanups.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
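A sketch of the consolidated bookkeeping the commit describes: the probe array and its RCU head share one allocation, so tracepoint_entry no longer needs the three fields above (the exact union layout is an assumption):

    struct tp_probes {
            union {
                    struct rcu_head rcu;    /* for deferred freeing */
                    struct list_head list;  /* for batching before release */
            } u;
            void *probes[0];                /* NULL-terminated probe array */
    };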
-
Al Viro authored
Impact: build fix on !stacktrace architectures

Only select STACKTRACE on architectures that have STACKTRACE_SUPPORT ... since we also need to ifdef out the guts of ftrace_trace_stack(). We also want to disallow setting TRACE_ITER_STACKTRACE in trace_flags on such configs, but that can wait.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Impact: add new (optional) debug boot option

To facilitate debugging early boot trouble, allow one to specify a tracer on the kernel boot line.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
-
- 02 Nov, 2008 19 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6:
  ide-gd: re-get capacity on revalidate
  tx4938ide: Avoid underflow on calculation of a wait cycle
  tx4938ide: Do not call devm_ioremap for whole 128KB
  tx4938ide: Check minimum cycle time and SHWT range (v2)
  ide: Switch to a common address
  ide-cd: fix DMA alignment regression
-
Borislav Petkov authored
We need to re-get a removable media's capacity when revalidating the disk so that its partitions get rescanned by the block layer. Signed-off-by: Borislav Petkov <petkovbb@gmail.com> Cc: Tejun Heo <tj@kernel.org> Cc: axboe@kernel.dk Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Atsushi Nemoto authored
Make the 'wt' variable signed, since it can be negative during the calculation.

Suggested-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: sshtylyov@ru.mvista.com
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Atsushi Nemoto authored
Call devm_ioremap() for CS0 and CS1 separately. Also some style cleanups.

Suggested-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: ralf@linux-mips.org
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Atsushi Nemoto authored
The SHWT value is used as the address-valid to -CSx assertion and -CSx to -DIOx assertion setup time, and conversely, as the -DIOx to -CSx release and -CSx release to address-invalid hold time. It therefore actually applies 4 times per access and so constitutes the -DIOx recovery time. Check the requirement on the recovery time and the cycle time. Also check the SHWT maximum value.

Suggested-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: ralf@linux-mips.org
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
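A hedged sketch of the arithmetic being enforced; the helper name, the nanosecond units, and the register-field limit are all assumptions for illustration, only the "SHWT counts four times per access" relation comes from the commit:

    /* hedged sketch: SHWT contributes four times per register access,
     * so it bounds the -DIOx recovery time; pick the smallest SHWT
     * satisfying the required recovery time, then range-check it */
    static int tx4938ide_pick_shwt(unsigned int t_rec_ns,
                                   unsigned int gbus_clk_ns)
    {
            unsigned int shwt = DIV_ROUND_UP(t_rec_ns, 4 * gbus_clk_ns);

            if (shwt > 7)           /* hypothetical field maximum */
                    return -1;      /* requested cycle time cannot be met */
            return shwt;
    }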
-
Alan Cox authored
Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Borislav Petkov authored
e5318b53 ("ide: use the dma safe check for REQ_TYPE_ATA_PC") introduced a regression which caused some ATAPI drives to turn off DMA for REQ_TYPE_BLOCK_PC commands while burning, thus degrading performance and ultimately causing an excessive amount of underruns. The issue is also documented in: http://bugzilla.kernel.org/show_bug.cgi?id=11742.

Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Valerio Passini <valerio.passini@unicam.it>
[bart: fixup patch description per comments from Sergei Shtylyov]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Fix PCI resource mapping on sparc64
  sparc64: Kill annoying warning when building compat_binfmt_elf.o
  sparc32: kernel/trace/trace.c wants DIE_OOPS
  sparc64: Fix __copy_{to,from}_user_inatomic defines.
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (33 commits)
  af_unix: netns: fix problem of return value
  IRDA: remove double inclusion of module.h
  udp: multicast packets need to check namespace
  net: add documentation for skb recycling
  key: fix setkey(8) policy set breakage
  bpa10x: free sk_buff with kfree_skb
  xfrm: do not leak ESRCH to user space
  net: Really remove all of LOOPBACK_TSO code.
  netfilter: nf_conntrack_proto_gre: switch to register_pernet_gen_subsys()
  netns: add register_pernet_gen_subsys/unregister_pernet_gen_subsys
  net: delete excess kernel-doc notation
  pppoe: Fix socket leak.
  gianfar: Don't reset TBI<->SerDes link if it's already up
  gianfar: Fix race in TBI/SerDes configuration
  at91_ether: request/free GPIO for PHY interrupt
  amd8111e: fix dma_free_coherent context
  atl1: fix vlan tag regression
  SMC91x: delete unused local variable "lp"
  myri10ge: fix stop/go mmio ordering
  bonding: fix panic when taking bond interface down before removing module
  ...
-
Jeff Garzik authored
s/user/used/ Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Max Dmitrichenko authored
There is a problem discovered in recent versions of the ATI Mach64 driver in X.org on the sparc64 architecture. In short, the driver fails to mmap the MMIO aperture (PCI resource #2). I've found that the kernel's __pci_mmap_make_offset() returns EINVAL. It checks whether the user attempts to mmap more than the resource length, which is 0x1000 bytes in our case. But PAGE_SIZE on SPARC64 is 0x2000, and this is what is actually being mmapped. So __pci_mmap_make_offset() failed for this PCI resource.

Signed-off-by: Max Dmitrichenko <dmitrmax@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
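A hedged sketch of the shape of the fix implied above: validate the request against the page-aligned resource size rather than the raw length (variable names assumed for illustration):

    /* hedged sketch: a 0x1000-byte BAR still occupies one 0x2000-byte
     * page on sparc64, so compare against the page-aligned size */
    unsigned long aligned_size = PAGE_ALIGN(rp->end - rp->start + 1);

    if (user_offset + user_size > aligned_size)
            return -EINVAL;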
-
David S. Miller authored
GCC warns because some tests against 32-bit values never evaluate to true due to how TASK_SIZE is defined. I always wanted to mimic powerpc's definition of TASK_SIZE, which is simply TASK_SIZE_OF(current), and that also fixes the warning.

Signed-off-by: David S. Miller <davem@davemloft.net>
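The change described amounts to a one-liner; a sketch with the surrounding #ifdef plumbing omitted:

    /* hedged sketch: evaluate the limit against the task at hand,
     * as powerpc does, instead of using a compile-time constant */
    #define TASK_SIZE       TASK_SIZE_OF(current)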
-
Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hugh Dickins authored
Alexander Beregalov reports oops in __bzero() called from copy_from_user_fixup() called from iov_iter_copy_from_user_atomic(), when running dbench on tmpfs on sparc64: its __copy_from_user_inatomic and __copy_to_user_inatomic should be avoiding, not calling, the fixups. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jianjun Kong authored
Fix a return-value problem in net/unix/af_unix.c: when an error occurs in unix_net_init(), it should return 'error', not always return 0.

Signed-off-by: Jianjun Kong <jianjun@zeuux.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
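A hedged sketch of the fixed error path; the proc-registration step stands in for the function's real setup work, not a copy of it:

    static int unix_net_init(struct net *net)
    {
            int error = -ENOMEM;

            if (!proc_net_fops_create(net, "unix", 0, &unix_seq_fops))
                    goto out;
            error = 0;
    out:
            return error;   /* previously this returned 0 even on failure */
    }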
-
Alexander Beregalov authored
Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Current UDP multicast delivery is not namespace aware. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Acked-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
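A hedged sketch of the kind of check such a fix implies while scanning the multicast socket chain; the loop context is assumed, with sock_net()/net_eq() being the usual namespace helpers:

    /* hedged sketch: skip sockets that belong to another namespace
     * while walking the multicast hash chain */
    if (!net_eq(sock_net(s), net))
            continue;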
-
Stephen Hemminger authored
Commit 04a4bb55 ("net: add skb_recycle_check() to enable netdriver skb recycling") added a method for network drivers to recycle skbuffs, but while use of this mechanism was documented in the commit message, it should really have been added as a docbook comment as well -- this patch does that. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-