Commit e310396b authored by Linus Torvalds

Merge tag 'trace-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:

 - Added new "bootconfig".

   This looks for a file appended to initrd to add boot config options,
   and has been discussed thoroughly at Linux Plumbers.

   Very useful for adding kprobes at bootup.

   Only enabled if "bootconfig" is on the real kernel command line.

 - Created a dynamic event creation interface.

   Merges common code between creating synthetic events and kprobe
   events.

 - Rename perf "ring_buffer" structure to "perf_buffer"

 - Rename ftrace "ring_buffer" structure to "trace_buffer"

   Had to rename existing "trace_buffer" to "array_buffer"

 - Allow trace_printk() to work within (some) tracing code.

 - Sorted tracing configs to be a little better organized

 - Fixed bug where ftrace_graph hash was not being protected properly

 - Various other small fixes and clean ups

* tag 'trace-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (88 commits)
  bootconfig: Show the number of nodes on boot message
  tools/bootconfig: Show the number of bootconfig nodes
  bootconfig: Add more parse error messages
  bootconfig: Use bootconfig instead of boot config
  ftrace: Protect ftrace_graph_hash with ftrace_sync
  ftrace: Add comment to why rcu_dereference_sched() is open coded
  tracing: Annotate ftrace_graph_notrace_hash pointer with __rcu
  tracing: Annotate ftrace_graph_hash pointer with __rcu
  bootconfig: Only load bootconfig if "bootconfig" is on the kernel cmdline
  tracing: Use seq_buf for building dynevent_cmd string
  tracing: Remove useless code in dynevent_arg_pair_add()
  tracing: Remove check_arg() callbacks from dynevent args
  tracing: Consolidate some synth_event_trace code
  tracing: Fix now invalid var_ref_vals assumption in trace action
  tracing: Change trace_boot to use synth_event interface
  tracing: Move tracing selftests to bottom of menu
  tracing: Move mmio tracer config up with the other tracers
  tracing: Move tracing test module configs together
  tracing: Move all function tracing configs together
  tracing: Documentation for in-kernel synthetic event API
  ...
parents c1ef57a3 a0057403
.. SPDX-License-Identifier: GPL-2.0
.. _bootconfig:
==================
Boot Configuration
==================
:Author: Masami Hiramatsu <mhiramat@kernel.org>
Overview
========
The boot configuration expands the current kernel command line to support
additional key-value data when booting the kernel in an efficient way.
This allows administrators to pass a structured-key config file.
Config File Syntax
==================
The boot config syntax is a simple structured key-value format. Each key
consists of dot-connected words, and the key and value are connected by
``=``. The value has to be terminated by a semicolon (``;``) or a newline
(``\n``). For array values, entries are separated by commas (``,``). ::

   KEY[.WORD[...]] = VALUE[, VALUE2[...]][;]
Unlike the kernel command line syntax, spaces are OK around the comma and ``=``.
Each key word must contain only letters, digits, dashes (``-``) or
underscores (``_``). Each value may contain any printable characters or
spaces, except for delimiters such as semicolon (``;``), newline (``\n``),
comma (``,``), hash (``#``) and closing brace (``}``).
If you want to use those delimiters in a value, you can quote it with
either double quotes (``"VALUE"``) or single quotes (``'VALUE'``). Note
that these quotes cannot be escaped.
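For example, the following entries (the values are illustrative) quote
delimiters inside values::

   path = "dir1;dir2,dir3"
   message = 'say "hello"'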
A key may have no value, or an empty value. Such keys are used for
checking whether the key exists or not (like a boolean flag).
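For example, a key-only entry (the option name is hypothetical)::

   feature.enable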
Key-Value Syntax
----------------
The boot config file syntax allows users to merge keys whose leading
words are the same by using braces. For example::

   foo.bar.baz = value1
   foo.bar.qux.quux = value2
These can also be written as::

   foo.bar {
      baz = value1
      qux.quux = value2
   }
Or, even more compactly::

   foo.bar { baz = value1; qux.quux = value2 }
In both styles, the same key words are automatically merged when parsed
at boot time, so you can append similar trees or key-values.
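For example, the following two fragments are merged into a single
``foo.bar`` tree holding both children::

   foo.bar { baz = value1 }
   foo.bar.qux = value2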
Comments
--------
The config syntax accepts shell-script style comments: everything from a
hash (``#``) to the end of the line (``\n``) is ignored. ::

   # comment line
   foo = value # value is set to foo.
   bar = 1, # 1st element
         2, # 2nd element
         3 # 3rd element
This is parsed as below::

   foo = value
   bar = 1, 2, 3
Note that you cannot put a comment between a value and its delimiter
(``,`` or ``;``). This means the following config has a syntax error ::

   key = 1 # comment
   ,2
/proc/bootconfig
================
/proc/bootconfig is a user-space interface to the boot config.
Unlike /proc/cmdline, this file shows the key-value style list, with
each key-value pair on its own line in the following style::

   KEY[.WORDS...] = "[VALUE]"[,"VALUE2"...]
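For example, the ``foo`` and ``bar`` entries from the comment example
above would be shown as::

   foo = "value"
   bar = "1", "2", "3"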
Boot Kernel With a Boot Config
==============================
Since the boot configuration file is loaded together with the initrd, it
is appended to the end of the initrd (initramfs) image file. The Linux
kernel decodes the last part of the initrd image in memory to get the
boot configuration data.
Because of this "piggyback" method, there is no need to change or
update the boot loader or the kernel image itself.
For this operation, the Linux kernel provides the "bootconfig" command
under tools/bootconfig, which allows an admin to apply the config file to,
or delete it from, an initrd image. You can build it with the following
command::

   # make -C tools/bootconfig
To add your boot config file to an initrd image, run bootconfig as below
(old data is removed automatically if it exists)::

   # tools/bootconfig/bootconfig -a your-config /boot/initrd.img-X.Y.Z
To remove the config from the image, use the -d option::

   # tools/bootconfig/bootconfig -d /boot/initrd.img-X.Y.Z
Then add "bootconfig" on the normal kernel command line to tell the
kernel to look for the bootconfig at the end of the initrd file.
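After rebooting with the updated initrd, you can check that the config was
loaded through the /proc interface described above::

   # cat /proc/bootconfig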
Config File Limitation
======================
Currently the maximum config size is 32KB and the total number of key
words (not key-value entries) must be under 1024 nodes.
Note: this limit counts nodes rather than entries; an entry consumes at
least 2 nodes (a key word and a value). So theoretically it can hold up
to 512 key-value pairs. If keys contain 3 words on average, it can
contain 256 key-value pairs. In most cases the number of config items
will be under 100 entries and smaller than 8KB, so this should be enough.
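For example, ``foo.bar.baz = 1`` consumes 4 nodes: three key-word nodes
(``foo``, ``bar`` and ``baz``) plus one value node.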
If the node number exceeds 1024, the parser returns an error even if the
file size is smaller than 32KB.
In any case, since the bootconfig command verifies the config when
appending it to the initrd image, the user can notice such errors before
boot.
Bootconfig APIs
===============
Users can query or loop over key-value pairs, and can also find a root
(prefix) key node and iterate over the key-values under that node.
If you have a key string, you can query the value directly with the key
using xbc_find_value(). If you want to know what keys exist in the boot
config, you can use xbc_for_each_key_value() to iterate over key-value
pairs. Note that you need to use xbc_array_for_each_value() for accessing
each array value, e.g.::
   vnode = NULL;
   xbc_find_value("key.word", &vnode);
   if (vnode && xbc_node_is_array(vnode))
      xbc_array_for_each_value(vnode, value) {
         printk("%s ", value);
      }
If you want to focus on keys which have a prefix string, you can use
xbc_find_node() to find a node by the prefix string, and iterate over
keys under the prefix node with xbc_node_for_each_key_value().
But the most typical usage is to get the named value or the named array
under a prefix, as below::
   root = xbc_find_node("key.prefix");
   value = xbc_node_find_value(root, "option", &vnode);
   ...
   xbc_node_for_each_array_value(root, "array-option", value, anode) {
      ...
   }
This accesses a value of "key.prefix.option" and an array of
"key.prefix.array-option".
Locking is not needed, since after initialization the config becomes
read-only. All data and keys must be copied if you need to modify them.
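For example, a minimal sketch of taking a modifiable copy of a value (the
key name here is illustrative)::

   struct xbc_node *vnode = NULL;
   const char *val;
   char *copy;

   val = xbc_find_value("key.word", &vnode);
   if (val) {
      /* The returned string is read-only; duplicate it before editing. */
      copy = kstrdup(val, GFP_KERNEL);
      if (!copy)
         return -ENOMEM;
   }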
Functions and structures
========================
.. kernel-doc:: include/linux/bootconfig.h
.. kernel-doc:: lib/bootconfig.c
......@@ -64,6 +64,7 @@ configure specific aspects of kernel behavior to your liking.
binderfs
binfmt-misc
blockdev/index
bootconfig
braille-console
btmrvl
cgroup-v1/index
......
......@@ -437,6 +437,12 @@
no delay (0).
Format: integer
bootconfig [KNL]
Extended command line options can be appended to an initrd
image; this parameter makes the kernel look for them.
See Documentation/admin-guide/bootconfig.rst
bert_disable [ACPI]
Disable BERT OS support on buggy BIOSes.
......
.. SPDX-License-Identifier: GPL-2.0
=================
Boot-time tracing
=================
:Author: Masami Hiramatsu <mhiramat@kernel.org>
Overview
========
Boot-time tracing allows users to trace boot-time processing, including
device initialization, with the full features of ftrace, including
per-event filters and actions, histograms, kprobe-events and
synthetic-events, and trace instances.
Since the kernel command line is not expressive enough to control these
complex features, this uses a bootconfig file to describe the tracing
feature programming.
Options in the Boot Config
==========================
Here is the list of available options for boot-time tracing in the
boot config file [1]_. All options are under the "ftrace." or "kernel."
prefix. See kernel parameters for the options which start
with the "kernel." prefix [2]_.
.. [1] See :ref:`Documentation/admin-guide/bootconfig.rst <bootconfig>`
.. [2] See :ref:`Documentation/admin-guide/kernel-parameters.rst <kernelparameters>`
Ftrace Global Options
---------------------
Ftrace global options have the "kernel." prefix in the boot config, which
means these options are passed as part of the legacy kernel command line
(see the fragment after this list).
kernel.tp_printk
   Output trace-event data to the printk buffer too.
kernel.dump_on_oops [= MODE]
   Dump ftrace on Oops. If MODE = 1 or omitted, dump the trace buffer
   on all CPUs. If MODE = 2, dump the buffer of the CPU that triggered
   the Oops.
kernel.traceoff_on_warning
   Stop tracing if WARN_ON() occurs.
kernel.fgraph_max_depth = MAX_DEPTH
   Set the maximum depth of the fgraph tracer to MAX_DEPTH.
kernel.fgraph_filters = FILTER[, FILTER2...]
   Add fgraph tracing function filters.
kernel.fgraph_notraces = FILTER[, FILTER2...]
   Add fgraph non-tracing function filters.
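For example, a fragment enabling some of these global options (the values
are illustrative)::

   kernel {
      traceoff_on_warning
      fgraph_max_depth = 8
      fgraph_filters = "vfs_read", "vfs_write"
   }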
Ftrace Per-instance Options
---------------------------
These options can be used for each instance, including the global ftrace
node (see the fragment after this list).

ftrace.[instance.INSTANCE.]options = OPT1[, OPT2[...]]
   Enable the given ftrace options.
ftrace.[instance.INSTANCE.]trace_clock = CLOCK
   Set the given CLOCK as ftrace's trace_clock.
ftrace.[instance.INSTANCE.]buffer_size = SIZE
   Configure the ftrace buffer size to SIZE. You can use "KB" or "MB"
   for the SIZE.
ftrace.[instance.INSTANCE.]alloc_snapshot
   Allocate a snapshot buffer.
ftrace.[instance.INSTANCE.]cpumask = CPUMASK
   Set CPUMASK as the trace cpu-mask.
ftrace.[instance.INSTANCE.]events = EVENT[, EVENT2[...]]
   Enable the given events on boot. You can use a wildcard in EVENT.
ftrace.[instance.INSTANCE.]tracer = TRACER
   Set the current tracer to TRACER on boot (e.g. function).
ftrace.[instance.INSTANCE.]ftrace.filters
   This will take an array of tracing function filter rules.
ftrace.[instance.INSTANCE.]ftrace.notraces
   This will take an array of NON-tracing function filter rules.
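For example, a fragment configuring the global ftrace node (the values are
illustrative)::

   ftrace {
      tracer = "function"
      buffer_size = 1MB
      ftrace.filters = "user_*"
   }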
Ftrace Per-Event Options
------------------------
These options set per-event settings.

ftrace.[instance.INSTANCE.]event.GROUP.EVENT.enable
   Enable GROUP:EVENT tracing.
ftrace.[instance.INSTANCE.]event.GROUP.EVENT.filter = FILTER
   Set the FILTER rule for GROUP:EVENT.
ftrace.[instance.INSTANCE.]event.GROUP.EVENT.actions = ACTION[, ACTION2[...]]
   Set the ACTIONs for GROUP:EVENT.
ftrace.[instance.INSTANCE.]event.kprobes.EVENT.probes = PROBE[, PROBE2[...]]
   Defines a new kprobe event based on PROBEs. It is possible to define
   multiple probes on one event, but they must have the same types of
   arguments. This option is available only for events whose group name
   is "kprobes".
ftrace.[instance.INSTANCE.]event.synthetic.EVENT.fields = FIELD[, FIELD2[...]]
   Defines a new synthetic event with FIELDs. Each field should be
   "type varname".

Note that kprobe and synthetic event definitions can be written under an
instance node, but they are also visible from other instances, so take
care to avoid event name conflicts.
Examples
========
For example, to add a filter and actions for each event, define kprobe
events, and define synthetic events with a histogram, write a boot config
like below::

   ftrace.event {
      task.task_newtask {
         filter = "pid < 128"
         enable
      }
      kprobes.vfs_read {
         probes = "vfs_read $arg1 $arg2"
         filter = "common_pid < 200"
         enable
      }
      synthetic.initcall_latency {
         fields = "unsigned long func", "u64 lat"
         actions = "hist:keys=func.sym,lat:vals=lat:sort=lat"
      }
      initcall.initcall_start {
         actions = "hist:keys=func:ts0=common_timestamp.usecs"
      }
      initcall.initcall_finish {
         actions = "hist:keys=func:lat=common_timestamp.usecs-$ts0:onmatch(initcall.initcall_start).initcall_latency(func,$lat)"
      }
   }
Also, boot-time tracing supports an "instance" node, which allows us to
run several tracers for different purposes at once. For example, if one
tracer is for tracing functions starting with "user\_" and another traces
"kernel\_" functions, you can write a boot config as below::

   ftrace.instance {
      foo {
         tracer = "function"
         ftrace.filters = "user_*"
      }
      bar {
         tracer = "function"
         ftrace.filters = "kernel_*"
      }
   }
The instance node also accepts event nodes so that each instance
can customize its event tracing.
This boot-time tracing also supports ftrace kernel parameters via boot
config.
For example, the following kernel parameters::

   trace_options=sym-addr trace_event=initcall:* tp_printk trace_buf_size=1M ftrace=function ftrace_filter="vfs*"

can be written in a boot config as below::

   kernel {
      trace_options = sym-addr
      trace_event = "initcall:*"
      tp_printk
      trace_buf_size = 1M
      ftrace = function
      ftrace_filter = "vfs*"
   }
Note that these parameters start with the "kernel" prefix instead of
"ftrace".
This diff is collapsed.
......@@ -19,6 +19,7 @@ Linux Tracing Technologies
events-msr
mmiotrace
histogram
boottime-trace
hwlat_detector
intel_th
stm
......
......@@ -97,6 +97,7 @@ which shows given pointer in "symbol+offset" style.
For $comm, the default type is "string"; any other type is invalid.
.. _user_mem_access:
User Memory Access
------------------
Kprobe events supports user-space memory access. For that purpose, you can use
......
......@@ -15934,6 +15934,15 @@ S: Supported
F: Documentation/networking/device_drivers/stmicro/
F: drivers/net/ethernet/stmicro/stmmac/
EXTRA BOOT CONFIG
M: Masami Hiramatsu <mhiramat@kernel.org>
S: Maintained
F: lib/bootconfig.c
F: fs/proc/bootconfig.c
F: include/linux/bootconfig.h
F: tools/bootconfig/*
F: Documentation/admin-guide/bootconfig.rst
SUN3/3X
M: Sam Creasey <sammy@sammy.net>
W: http://sammy.net/sun3/
......
......@@ -32,7 +32,7 @@
#define OP_BUFFER_FLAGS 0
static struct ring_buffer *op_ring_buffer;
static struct trace_buffer *op_ring_buffer;
DEFINE_PER_CPU(struct oprofile_cpu_buffer, op_cpu_buffer);
static void wq_sync_buffer(struct work_struct *work);
......
......@@ -33,3 +33,4 @@ proc-$(CONFIG_PROC_KCORE) += kcore.o
proc-$(CONFIG_PROC_VMCORE) += vmcore.o
proc-$(CONFIG_PRINTK) += kmsg.o
proc-$(CONFIG_PROC_PAGE_MONITOR) += page.o
proc-$(CONFIG_BOOT_CONFIG) += bootconfig.o
// SPDX-License-Identifier: GPL-2.0
/*
* /proc/bootconfig - Extra boot configuration
*/
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/bootconfig.h>
#include <linux/slab.h>
static char *saved_boot_config;
static int boot_config_proc_show(struct seq_file *m, void *v)
{
	if (saved_boot_config)
		seq_puts(m, saved_boot_config);
	return 0;
}
/* Rest size of buffer */
#define rest(dst, end) ((end) > (dst) ? (end) - (dst) : 0)
/* Return the needed total length if @size is 0 */
static int __init copy_xbc_key_value_list(char *dst, size_t size)
{
	struct xbc_node *leaf, *vnode;
	const char *val;
	char *key, *end = dst + size;
	int ret = 0;

	key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);

	xbc_for_each_key_value(leaf, val) {
		ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
		if (ret < 0)
			break;
		ret = snprintf(dst, rest(dst, end), "%s = ", key);
		if (ret < 0)
			break;
		dst += ret;
		vnode = xbc_node_get_child(leaf);
		if (vnode && xbc_node_is_array(vnode)) {
			xbc_array_for_each_value(vnode, val) {
				ret = snprintf(dst, rest(dst, end), "\"%s\"%s",
					val, vnode->next ? ", " : "\n");
				if (ret < 0)
					goto out;
				dst += ret;
			}
		} else {
			ret = snprintf(dst, rest(dst, end), "\"%s\"\n", val);
			if (ret < 0)
				break;
			dst += ret;
		}
	}
out:
	kfree(key);

	return ret < 0 ? ret : dst - (end - size);
}
static int __init proc_boot_config_init(void)
{
	int len;

	len = copy_xbc_key_value_list(NULL, 0);
	if (len < 0)
		return len;

	if (len > 0) {
		saved_boot_config = kzalloc(len + 1, GFP_KERNEL);
		if (!saved_boot_config)
			return -ENOMEM;
		len = copy_xbc_key_value_list(saved_boot_config, len + 1);
		if (len < 0) {
			kfree(saved_boot_config);
			return len;
		}
	}

	proc_create_single("bootconfig", 0, NULL, boot_config_proc_show);

	return 0;
}
fs_initcall(proc_boot_config_init);
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_XBC_H
#define _LINUX_XBC_H
/*
* Extra Boot Config
* Copyright (C) 2019 Linaro Ltd.
* Author: Masami Hiramatsu <mhiramat@kernel.org>
*/
#include <linux/kernel.h>
#include <linux/types.h>
/* XBC tree node */
struct xbc_node {
	u16 next;
	u16 child;
	u16 parent;
	u16 data;
} __attribute__ ((__packed__));
#define XBC_KEY 0
#define XBC_VALUE (1 << 15)
/* Maximum size of boot config is 32KB - 1 */
#define XBC_DATA_MAX (XBC_VALUE - 1)
#define XBC_NODE_MAX 1024
#define XBC_KEYLEN_MAX 256
#define XBC_DEPTH_MAX 16
/* Node tree access raw APIs */
struct xbc_node * __init xbc_root_node(void);
int __init xbc_node_index(struct xbc_node *node);
struct xbc_node * __init xbc_node_get_parent(struct xbc_node *node);
struct xbc_node * __init xbc_node_get_child(struct xbc_node *node);
struct xbc_node * __init xbc_node_get_next(struct xbc_node *node);
const char * __init xbc_node_get_data(struct xbc_node *node);
/**
* xbc_node_is_value() - Test the node is a value node
* @node: An XBC node.
*
* Test the @node is a value node and return true if a value node, false if not.
*/
static inline __init bool xbc_node_is_value(struct xbc_node *node)
{
	return node->data & XBC_VALUE;
}
/**
* xbc_node_is_key() - Test the node is a key node
* @node: An XBC node.
*
* Test the @node is a key node and return true if a key node, false if not.
*/
static inline __init bool xbc_node_is_key(struct xbc_node *node)
{
	return !xbc_node_is_value(node);
}
/**
* xbc_node_is_array() - Test the node is an array value node
* @node: An XBC node.
*
* Test the @node is an array value node.
*/
static inline __init bool xbc_node_is_array(struct xbc_node *node)
{
	return xbc_node_is_value(node) && node->next != 0;
}
/**
* xbc_node_is_leaf() - Test the node is a leaf key node
* @node: An XBC node.
*
* Test the @node is a leaf key node which is a key node and has a value node
* or no child. Returns true if it is a leaf node, or false if not.
*/
static inline __init bool xbc_node_is_leaf(struct xbc_node *node)
{
	return xbc_node_is_key(node) &&
		(!node->child || xbc_node_is_value(xbc_node_get_child(node)));
}
/* Tree-based key-value access APIs */
struct xbc_node * __init xbc_node_find_child(struct xbc_node *parent,
					     const char *key);

const char * __init xbc_node_find_value(struct xbc_node *parent,
					const char *key,
					struct xbc_node **vnode);

struct xbc_node * __init xbc_node_find_next_leaf(struct xbc_node *root,
						 struct xbc_node *leaf);

const char * __init xbc_node_find_next_key_value(struct xbc_node *root,
						 struct xbc_node **leaf);
/**
* xbc_find_value() - Find a value which matches the key
* @key: Search key
* @vnode: A container pointer of XBC value node.
*
* Search a value whose key matches @key from the whole XBC tree and return
* the value if found. The found value node is stored in *@vnode.
* Note that this can return a 0-length string and store NULL in *@vnode for
* a key-only (non-value) entry.
*/
static inline const char * __init
xbc_find_value(const char *key, struct xbc_node **vnode)
{
	return xbc_node_find_value(NULL, key, vnode);
}
/**
* xbc_find_node() - Find a node which matches the key
* @key: Search key
*
* Search a (key) node whose key matches @key from the whole XBC tree and
* return the node if found. If not found, returns NULL.
*/
static inline struct xbc_node * __init xbc_find_node(const char *key)
{
	return xbc_node_find_child(NULL, key);
}
/**
* xbc_array_for_each_value() - Iterate value nodes on an array
* @anode: An XBC array value node
* @value: A value
*
* Iterate array value nodes and their values, starting from @anode. This is
* expected to be used with xbc_find_value() and xbc_node_find_value(), so
* that the user can process each array entry node.
*/
#define xbc_array_for_each_value(anode, value)				\
	for (value = xbc_node_get_data(anode); anode != NULL ;		\
	     anode = xbc_node_get_next(anode),				\
	     value = anode ? xbc_node_get_data(anode) : NULL)
/**
* xbc_node_for_each_child() - Iterate child nodes
* @parent: An XBC node.
* @child: Iterated XBC node.
*
* Iterate child nodes of @parent. Each child node is stored in @child.
*/
#define xbc_node_for_each_child(parent, child)				\
	for (child = xbc_node_get_child(parent); child != NULL ;	\
	     child = xbc_node_get_next(child))
/**
* xbc_node_for_each_array_value() - Iterate array entries of given key
* @node: An XBC node.
* @key: A key string searched under @node
* @anode: Iterated XBC node of array entry.
* @value: Iterated value of array entry.
*
* Iterate array entries of the given @key under @node. Each array entry node
* is stored in @anode and @value. If the @node doesn't have a @key node,
* it does nothing.
* Note that even if the found key node has only one value (not an array),
* this executes the block once. However, if the found key node has no value
* (a key-only node), this does nothing. So don't use this for testing the
* key-value pair existence.
*/
#define xbc_node_for_each_array_value(node, key, anode, value)		\
	for (value = xbc_node_find_value(node, key, &anode); value != NULL; \
	     anode = xbc_node_get_next(anode),				\
	     value = anode ? xbc_node_get_data(anode) : NULL)
/**
* xbc_node_for_each_key_value() - Iterate key-value pairs under a node
* @node: An XBC node.
* @knode: Iterated key node
* @value: Iterated value string
*
* Iterate key-value pairs under @node. Each key node and value string are
* stored in @knode and @value respectively.
*/
#define xbc_node_for_each_key_value(node, knode, value)			\
	for (knode = NULL, value = xbc_node_find_next_key_value(node, &knode);\
	     knode != NULL; value = xbc_node_find_next_key_value(node, &knode))
/**
* xbc_for_each_key_value() - Iterate key-value pairs
* @knode: Iterated key node
* @value: Iterated value string
*
* Iterate key-value pairs in whole XBC tree. Each key node and value string
* are stored in @knode and @value respectively.
*/
#define xbc_for_each_key_value(knode, value)				\
	xbc_node_for_each_key_value(NULL, knode, value)
/* Compose partial key */
int __init xbc_node_compose_key_after(struct xbc_node *root,
				      struct xbc_node *node,
				      char *buf, size_t size);
/**
* xbc_node_compose_key() - Compose full key string of the XBC node
* @node: An XBC node.
* @buf: A buffer to store the key.
* @size: The size of the @buf.
*
* Compose the full-length key of the @node into @buf. Returns the total
* length of the key stored in @buf, -EINVAL if @node is NULL, or -ERANGE
* if the key depth is deeper than the max depth.
*/
static inline int __init xbc_node_compose_key(struct xbc_node *node,
					      char *buf, size_t size)
{
	return xbc_node_compose_key_after(NULL, node, buf, size);
}
/* XBC node initializer */
int __init xbc_init(char *buf);
/* XBC cleanup data structures */
void __init xbc_destroy_all(void);
/* Debug dump functions */
void __init xbc_debug_dump(void);
#endif
......@@ -582,7 +582,7 @@ struct swevent_hlist {
#define PERF_ATTACH_ITRACE 0x10
struct perf_cgroup;
struct ring_buffer;
struct perf_buffer;
struct pmu_event_list {
raw_spinlock_t lock;
......@@ -694,7 +694,7 @@ struct perf_event {
struct mutex mmap_mutex;
atomic_t mmap_count;
struct ring_buffer *rb;
struct perf_buffer *rb;
struct list_head rb_entry;
unsigned long rcu_batches;
int rcu_pending;
......@@ -854,7 +854,7 @@ struct perf_cpu_context {
struct perf_output_handle {
struct perf_event *event;
struct ring_buffer *rb;
struct perf_buffer *rb;
unsigned long wakeup;
unsigned long size;
u64 aux_flags;
......
......@@ -6,7 +6,7 @@
#include <linux/seq_file.h>
#include <linux/poll.h>
struct ring_buffer;
struct trace_buffer;
struct ring_buffer_iter;
/*
......@@ -77,13 +77,13 @@ u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event);
* else
* ring_buffer_unlock_commit(buffer, event);
*/
void ring_buffer_discard_commit(struct ring_buffer *buffer,
void ring_buffer_discard_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event);
/*
* size is in bytes for each per CPU buffer.
*/
struct ring_buffer *
struct trace_buffer *
__ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *key);
/*
......@@ -97,38 +97,38 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
__ring_buffer_alloc((size), (flags), &__key); \
})
int ring_buffer_wait(struct ring_buffer *buffer, int cpu, int full);
__poll_t ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu,
int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
__poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
struct file *filp, poll_table *poll_table);
#define RING_BUFFER_ALL_CPUS -1
void ring_buffer_free(struct ring_buffer *buffer);
void ring_buffer_free(struct trace_buffer *buffer);
int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size, int cpu);
int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size, int cpu);
void ring_buffer_change_overwrite(struct ring_buffer *buffer, int val);
void ring_buffer_change_overwrite(struct trace_buffer *buffer, int val);
struct ring_buffer_event *ring_buffer_lock_reserve(struct ring_buffer *buffer,
struct ring_buffer_event *ring_buffer_lock_reserve(struct trace_buffer *buffer,
unsigned long length);
int ring_buffer_unlock_commit(struct ring_buffer *buffer,
int ring_buffer_unlock_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event);
int ring_buffer_write(struct ring_buffer *buffer,
int ring_buffer_write(struct trace_buffer *buffer,
unsigned long length, void *data);
void ring_buffer_nest_start(struct ring_buffer *buffer);
void ring_buffer_nest_end(struct ring_buffer *buffer);
void ring_buffer_nest_start(struct trace_buffer *buffer);
void ring_buffer_nest_end(struct trace_buffer *buffer);
struct ring_buffer_event *
ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts,
ring_buffer_peek(struct trace_buffer *buffer, int cpu, u64 *ts,
unsigned long *lost_events);
struct ring_buffer_event *
ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
ring_buffer_consume(struct trace_buffer *buffer, int cpu, u64 *ts,
unsigned long *lost_events);
struct ring_buffer_iter *
ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags);
ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags);
void ring_buffer_read_prepare_sync(void);
void ring_buffer_read_start(struct ring_buffer_iter *iter);
void ring_buffer_read_finish(struct ring_buffer_iter *iter);
......@@ -140,59 +140,59 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts);
void ring_buffer_iter_reset(struct ring_buffer_iter *iter);
int ring_buffer_iter_empty(struct ring_buffer_iter *iter);
unsigned long ring_buffer_size(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_size(struct trace_buffer *buffer, int cpu);
void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu);
void ring_buffer_reset(struct ring_buffer *buffer);
void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu);
void ring_buffer_reset(struct trace_buffer *buffer);
#ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
struct ring_buffer *buffer_b, int cpu);
int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
struct trace_buffer *buffer_b, int cpu);
#else
static inline int
ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
struct ring_buffer *buffer_b, int cpu)
ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
struct trace_buffer *buffer_b, int cpu)
{
return -ENODEV;
}
#endif
bool ring_buffer_empty(struct ring_buffer *buffer);
bool ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu);
void ring_buffer_record_disable(struct ring_buffer *buffer);
void ring_buffer_record_enable(struct ring_buffer *buffer);
void ring_buffer_record_off(struct ring_buffer *buffer);
void ring_buffer_record_on(struct ring_buffer *buffer);
bool ring_buffer_record_is_on(struct ring_buffer *buffer);
bool ring_buffer_record_is_set_on(struct ring_buffer *buffer);
void ring_buffer_record_disable_cpu(struct ring_buffer *buffer, int cpu);
void ring_buffer_record_enable_cpu(struct ring_buffer *buffer, int cpu);
u64 ring_buffer_oldest_event_ts(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_bytes_cpu(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_entries(struct ring_buffer *buffer);
unsigned long ring_buffer_overruns(struct ring_buffer *buffer);
unsigned long ring_buffer_entries_cpu(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_dropped_events_cpu(struct ring_buffer *buffer, int cpu);
unsigned long ring_buffer_read_events_cpu(struct ring_buffer *buffer, int cpu);
u64 ring_buffer_time_stamp(struct ring_buffer *buffer, int cpu);
void ring_buffer_normalize_time_stamp(struct ring_buffer *buffer,
bool ring_buffer_empty(struct trace_buffer *buffer);
bool ring_buffer_empty_cpu(struct trace_buffer *buffer, int cpu);
void ring_buffer_record_disable(struct trace_buffer *buffer);
void ring_buffer_record_enable(struct trace_buffer *buffer);
void ring_buffer_record_off(struct trace_buffer *buffer);
void ring_buffer_record_on(struct trace_buffer *buffer);
bool ring_buffer_record_is_on(struct trace_buffer *buffer);
bool ring_buffer_record_is_set_on(struct trace_buffer *buffer);
void ring_buffer_record_disable_cpu(struct trace_buffer *buffer, int cpu);
void ring_buffer_record_enable_cpu(struct trace_buffer *buffer, int cpu);
u64 ring_buffer_oldest_event_ts(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_bytes_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_entries(struct trace_buffer *buffer);
unsigned long ring_buffer_overruns(struct trace_buffer *buffer);
unsigned long ring_buffer_entries_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_overrun_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_commit_overrun_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_dropped_events_cpu(struct trace_buffer *buffer, int cpu);
unsigned long ring_buffer_read_events_cpu(struct trace_buffer *buffer, int cpu);
u64 ring_buffer_time_stamp(struct trace_buffer *buffer, int cpu);
void ring_buffer_normalize_time_stamp(struct trace_buffer *buffer,
int cpu, u64 *ts);
void ring_buffer_set_clock(struct ring_buffer *buffer,
void ring_buffer_set_clock(struct trace_buffer *buffer,
u64 (*clock)(void));
void ring_buffer_set_time_stamp_abs(struct ring_buffer *buffer, bool abs);
bool ring_buffer_time_stamp_abs(struct ring_buffer *buffer);
void ring_buffer_set_time_stamp_abs(struct trace_buffer *buffer, bool abs);
bool ring_buffer_time_stamp_abs(struct trace_buffer *buffer);
size_t ring_buffer_nr_pages(struct ring_buffer *buffer, int cpu);
size_t ring_buffer_nr_dirty_pages(struct ring_buffer *buffer, int cpu);
size_t ring_buffer_nr_pages(struct trace_buffer *buffer, int cpu);
size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu);
void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu);
void ring_buffer_free_read_page(struct ring_buffer *buffer, int cpu, void *data);
int ring_buffer_read_page(struct ring_buffer *buffer, void **data_page,
void *ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu);
void ring_buffer_free_read_page(struct trace_buffer *buffer, int cpu, void *data);
int ring_buffer_read_page(struct trace_buffer *buffer, void **data_page,
size_t len, int cpu, int full);
struct trace_seq;
......
......@@ -11,7 +11,7 @@
#include <linux/tracepoint.h>
struct trace_array;
struct trace_buffer;
struct array_buffer;
struct tracer;
struct dentry;
struct bpf_prog;
......@@ -79,7 +79,7 @@ struct trace_entry {
struct trace_iterator {
struct trace_array *tr;
struct tracer *trace;
struct trace_buffer *trace_buffer;
struct array_buffer *array_buffer;
void *private;
int cpu_file;
struct mutex mutex;
......@@ -153,7 +153,7 @@ void tracing_generic_entry_update(struct trace_entry *entry,
struct trace_event_file;
struct ring_buffer_event *
trace_event_buffer_lock_reserve(struct ring_buffer **current_buffer,
trace_event_buffer_lock_reserve(struct trace_buffer **current_buffer,
struct trace_event_file *trace_file,
int type, unsigned long len,
unsigned long flags, int pc);
......@@ -226,12 +226,13 @@ extern int trace_event_reg(struct trace_event_call *event,
enum trace_reg type, void *data);
struct trace_event_buffer {
struct ring_buffer *buffer;
struct trace_buffer *buffer;
struct ring_buffer_event *event;
struct trace_event_file *trace_file;
void *entry;
unsigned long flags;
int pc;
struct pt_regs *regs;
};
void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
......@@ -364,6 +365,128 @@ enum {
EVENT_FILE_FL_WAS_ENABLED_BIT,
};
extern struct trace_event_file *trace_get_event_file(const char *instance,
const char *system,
const char *event);
extern void trace_put_event_file(struct trace_event_file *file);
#define MAX_DYNEVENT_CMD_LEN (2048)
enum dynevent_type {
	DYNEVENT_TYPE_SYNTH = 1,
	DYNEVENT_TYPE_KPROBE,
	DYNEVENT_TYPE_NONE,
};
struct dynevent_cmd;
typedef int (*dynevent_create_fn_t)(struct dynevent_cmd *cmd);
struct dynevent_cmd {
	struct seq_buf		seq;
	const char		*event_name;
	unsigned int		n_fields;
	enum dynevent_type	type;
	dynevent_create_fn_t	run_command;
	void			*private_data;
};
extern int dynevent_create(struct dynevent_cmd *cmd);
extern int synth_event_delete(const char *name);
extern void synth_event_cmd_init(struct dynevent_cmd *cmd,
char *buf, int maxlen);
extern int __synth_event_gen_cmd_start(struct dynevent_cmd *cmd,
const char *name,
struct module *mod, ...);
#define synth_event_gen_cmd_start(cmd, name, mod, ...) \
__synth_event_gen_cmd_start(cmd, name, mod, ## __VA_ARGS__, NULL)
struct synth_field_desc {
	const char *type;
	const char *name;
};
extern int synth_event_gen_cmd_array_start(struct dynevent_cmd *cmd,
const char *name,
struct module *mod,
struct synth_field_desc *fields,
unsigned int n_fields);
extern int synth_event_create(const char *name,
struct synth_field_desc *fields,
unsigned int n_fields, struct module *mod);
extern int synth_event_add_field(struct dynevent_cmd *cmd,
const char *type,
const char *name);
extern int synth_event_add_field_str(struct dynevent_cmd *cmd,
const char *type_name);
extern int synth_event_add_fields(struct dynevent_cmd *cmd,
struct synth_field_desc *fields,
unsigned int n_fields);
#define synth_event_gen_cmd_end(cmd) \
dynevent_create(cmd)
struct synth_event;
struct synth_event_trace_state {
	struct trace_event_buffer fbuffer;
	struct synth_trace_event *entry;
	struct trace_buffer *buffer;
	struct synth_event *event;
	unsigned int cur_field;
	unsigned int n_u64;
	bool enabled;
	bool add_next;
	bool add_name;
};
extern int synth_event_trace(struct trace_event_file *file,
unsigned int n_vals, ...);
extern int synth_event_trace_array(struct trace_event_file *file, u64 *vals,
unsigned int n_vals);
extern int synth_event_trace_start(struct trace_event_file *file,
struct synth_event_trace_state *trace_state);
extern int synth_event_add_next_val(u64 val,
struct synth_event_trace_state *trace_state);
extern int synth_event_add_val(const char *field_name, u64 val,
struct synth_event_trace_state *trace_state);
extern int synth_event_trace_end(struct synth_event_trace_state *trace_state);
extern int kprobe_event_delete(const char *name);
extern void kprobe_event_cmd_init(struct dynevent_cmd *cmd,
char *buf, int maxlen);
#define kprobe_event_gen_cmd_start(cmd, name, loc, ...) \
__kprobe_event_gen_cmd_start(cmd, false, name, loc, ## __VA_ARGS__, NULL)
#define kretprobe_event_gen_cmd_start(cmd, name, loc, ...) \
__kprobe_event_gen_cmd_start(cmd, true, name, loc, ## __VA_ARGS__, NULL)
extern int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd,
bool kretprobe,
const char *name,
const char *loc, ...);
#define kprobe_event_add_fields(cmd, ...) \
__kprobe_event_add_fields(cmd, ## __VA_ARGS__, NULL)
#define kprobe_event_add_field(cmd, field) \
__kprobe_event_add_fields(cmd, field, NULL)
extern int __kprobe_event_add_fields(struct dynevent_cmd *cmd, ...);
#define kprobe_event_gen_cmd_end(cmd) \
dynevent_create(cmd)
#define kretprobe_event_gen_cmd_end(cmd) \
dynevent_create(cmd)
/*
* Event file flags:
* ENABLED - The event is enabled
......
......@@ -2,7 +2,8 @@
/*
* Stage 1 of the trace events.
*
* Override the macros in <trace/trace_events.h> to include the following:
* Override the macros in the event tracepoint header <trace/events/XXX.h>
* to include the following:
*
* struct trace_event_raw_<call> {
* struct trace_entry ent;
......@@ -223,7 +224,8 @@ TRACE_MAKE_SYSTEM_STR();
/*
* Stage 3 of the trace events.
*
* Override the macros in <trace/trace_events.h> to include the following:
* Override the macros in the event tracepoint header <trace/events/XXX.h>
* to include the following:
*
* enum print_line_t
* trace_raw_output_<call>(struct trace_iterator *iter, int flags)
......@@ -533,7 +535,8 @@ static inline notrace int trace_event_get_offsets_##call( \
/*
* Stage 4 of the trace events.
*
* Override the macros in <trace/trace_events.h> to include the following:
* Override the macros in the event tracepoint header <trace/events/XXX.h>
* to include the following:
*
* For those macros defined with TRACE_EVENT:
*
......@@ -548,7 +551,7 @@ static inline notrace int trace_event_get_offsets_##call( \
* enum event_trigger_type __tt = ETT_NONE;
* struct ring_buffer_event *event;
* struct trace_event_raw_<call> *entry; <-- defined in stage 1
* struct ring_buffer *buffer;
* struct trace_buffer *buffer;
* unsigned long irq_flags;
* int __data_size;
* int pc;
......
......@@ -1224,6 +1224,20 @@ source "usr/Kconfig"
endif
config BOOT_CONFIG
bool "Boot config support"
depends on BLK_DEV_INITRD
select LIBXBC
default y
help
Extra boot config allows system admin to pass a config file as
complemental extension of kernel cmdline when booting.
The boot config file must be attached at the end of initramfs
with checksum and size.
See <file:Documentation/admin-guide/bootconfig.rst> for details.
If unsure, say Y.
choice
prompt "Compiler optimization level"
default CC_OPTIMIZE_FOR_PERFORMANCE
......
......@@ -28,6 +28,7 @@
#include <linux/initrd.h>
#include <linux/memblock.h>
#include <linux/acpi.h>
#include <linux/bootconfig.h>
#include <linux/console.h>
#include <linux/nmi.h>
#include <linux/percpu.h>
......@@ -136,8 +137,10 @@ char __initdata boot_command_line[COMMAND_LINE_SIZE];
char *saved_command_line;
/* Command line for parameter parsing */
static char *static_command_line;
/* Command line for per-initcall parameter parsing */
static char *initcall_command_line;
/* Untouched extra command line */
static char *extra_command_line;
/* Extra init arguments */
static char *extra_init_args;
static char *execute_command;
static char *ramdisk_execute_command;
......@@ -245,6 +248,156 @@ static int __init loglevel(char *str)
early_param("loglevel", loglevel);
#ifdef CONFIG_BOOT_CONFIG
char xbc_namebuf[XBC_KEYLEN_MAX] __initdata;
#define rest(dst, end) ((end) > (dst) ? (end) - (dst) : 0)
static int __init xbc_snprint_cmdline(char *buf, size_t size,
struct xbc_node *root)
{
struct xbc_node *knode, *vnode;
char *end = buf + size;
char c = '\"';
const char *val;
int ret;
xbc_node_for_each_key_value(root, knode, val) {
ret = xbc_node_compose_key_after(root, knode,
xbc_namebuf, XBC_KEYLEN_MAX);
if (ret < 0)
return ret;
vnode = xbc_node_get_child(knode);
ret = snprintf(buf, rest(buf, end), "%s%c", xbc_namebuf,
vnode ? '=' : ' ');
if (ret < 0)
return ret;
buf += ret;
if (!vnode)
continue;
c = '\"';
xbc_array_for_each_value(vnode, val) {
ret = snprintf(buf, rest(buf, end), "%c%s", c, val);
if (ret < 0)
return ret;
buf += ret;
c = ',';
}
if (rest(buf, end) > 2)
strcpy(buf, "\" ");
buf += 2;
}
return buf - (end - size);
}
#undef rest
/* Make an extra command line under given key word */
static char * __init xbc_make_cmdline(const char *key)
{
struct xbc_node *root;
char *new_cmdline;
int ret, len = 0;
root = xbc_find_node(key);
if (!root)
return NULL;
/* Count required buffer size */
len = xbc_snprint_cmdline(NULL, 0, root);
if (len <= 0)
return NULL;
new_cmdline = memblock_alloc(len + 1, SMP_CACHE_BYTES);
if (!new_cmdline) {
pr_err("Failed to allocate memory for extra kernel cmdline.\n");
return NULL;
}
ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
if (ret < 0 || ret > len) {
pr_err("Failed to print extra kernel cmdline.\n");
return NULL;
}
return new_cmdline;
}
u32 boot_config_checksum(unsigned char *p, u32 size)
{
u32 ret = 0;
while (size--)
ret += *p++;
return ret;
}
static void __init setup_boot_config(const char *cmdline)
{
u32 size, csum;
char *data, *copy;
const char *p;
u32 *hdr;
int ret;
p = strstr(cmdline, "bootconfig");
if (!p || (p != cmdline && !isspace(*(p-1))) ||
(p[10] && !isspace(p[10])))
return;
if (!initrd_end)
goto not_found;
hdr = (u32 *)(initrd_end - 8);
size = hdr[0];
csum = hdr[1];
if (size >= XBC_DATA_MAX) {
pr_err("bootconfig size %d greater than max size %d\n",
size, XBC_DATA_MAX);
return;
}
data = ((void *)hdr) - size;
if ((unsigned long)data < initrd_start)
goto not_found;
if (boot_config_checksum((unsigned char *)data, size) != csum) {
pr_err("bootconfig checksum failed\n");
return;
}
copy = memblock_alloc(size + 1, SMP_CACHE_BYTES);
if (!copy) {
pr_err("Failed to allocate memory for bootconfig\n");
return;
}
memcpy(copy, data, size);
copy[size] = '\0';
ret = xbc_init(copy);
if (ret < 0)
pr_err("Failed to parse bootconfig\n");
else {
pr_info("Load bootconfig: %d bytes %d nodes\n", size, ret);
/* keys starting with "kernel." are passed via cmdline */
extra_command_line = xbc_make_cmdline("kernel");
/* Also, "init." keys are init arguments */
extra_init_args = xbc_make_cmdline("init");
}
return;
not_found:
pr_err("'bootconfig' found on command line, but no bootconfig found\n");
}
#else
#define setup_boot_config(cmdline) do { } while (0)
#endif
/* Change NUL term back to "=", to make "param" the whole string. */
static void __init repair_env_string(char *param, char *val)
{
......@@ -373,22 +526,50 @@ static inline void smp_prepare_cpus(unsigned int maxcpus) { }
*/
static void __init setup_command_line(char *command_line)
{
size_t len = strlen(boot_command_line) + 1;
size_t len, xlen = 0, ilen = 0;
saved_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
if (!saved_command_line)
panic("%s: Failed to allocate %zu bytes\n", __func__, len);
if (extra_command_line)
xlen = strlen(extra_command_line);
if (extra_init_args)
ilen = strlen(extra_init_args) + 4; /* for " -- " */
initcall_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
if (!initcall_command_line)
panic("%s: Failed to allocate %zu bytes\n", __func__, len);
len = xlen + strlen(boot_command_line) + 1;
saved_command_line = memblock_alloc(len + ilen, SMP_CACHE_BYTES);
if (!saved_command_line)
panic("%s: Failed to allocate %zu bytes\n", __func__, len + ilen);
static_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
if (!static_command_line)
panic("%s: Failed to allocate %zu bytes\n", __func__, len);
strcpy(saved_command_line, boot_command_line);
strcpy(static_command_line, command_line);
if (xlen) {
/*
* We have to put extra_command_line before boot command
* lines because there could be dashes (separator of init
* command line) in the command lines.
*/
strcpy(saved_command_line, extra_command_line);
strcpy(static_command_line, extra_command_line);
}
strcpy(saved_command_line + xlen, boot_command_line);
strcpy(static_command_line + xlen, command_line);
if (ilen) {
/*
* Append supplemental init boot args to saved_command_line
* so that user can check what command line options passed
* to init.
*/
len = strlen(saved_command_line);
if (!strstr(boot_command_line, " -- ")) {
strcpy(saved_command_line + len, " -- ");
len += 4;
} else
saved_command_line[len++] = ' ';
strcpy(saved_command_line + len, extra_init_args);
}
}
/*
......@@ -595,6 +776,7 @@ asmlinkage __visible void __init start_kernel(void)
pr_notice("%s", linux_banner);
early_security_init();
setup_arch(&command_line);
setup_boot_config(command_line);
setup_command_line(command_line);
setup_nr_cpu_ids();
setup_per_cpu_areas();
......@@ -604,7 +786,7 @@ asmlinkage __visible void __init start_kernel(void)
build_all_zonelists(NULL);
page_alloc_init();
pr_notice("Kernel command line: %s\n", boot_command_line);
pr_notice("Kernel command line: %s\n", saved_command_line);
/* parameters may set static keys */
jump_label_init();
parse_early_param();
......@@ -615,6 +797,9 @@ asmlinkage __visible void __init start_kernel(void)
if (!IS_ERR_OR_NULL(after_dashes))
parse_args("Setting init args", after_dashes, NULL, 0, -1, -1,
NULL, set_init_arg);
if (extra_init_args)
parse_args("Setting extra init args", extra_init_args,
NULL, 0, -1, -1, NULL, set_init_arg);
/*
* These use large bootmem allocations and must precede
......@@ -996,13 +1181,12 @@ static int __init ignore_unknown_bootoption(char *param, char *val,
return 0;
}
static void __init do_initcall_level(int level)
static void __init do_initcall_level(int level, char *command_line)
{
initcall_entry_t *fn;
strcpy(initcall_command_line, saved_command_line);
parse_args(initcall_level_names[level],
initcall_command_line, __start___param,
command_line, __start___param,
__stop___param - __start___param,
level, level,
NULL, ignore_unknown_bootoption);
......@@ -1015,9 +1199,20 @@ static void __init do_initcall_level(int level)
static void __init do_initcalls(void)
{
int level;
size_t len = strlen(saved_command_line) + 1;
char *command_line;
command_line = kzalloc(len, GFP_KERNEL);
if (!command_line)
panic("%s: Failed to allocate %zu bytes\n", __func__, len);
for (level = 0; level < ARRAY_SIZE(initcall_levels) - 1; level++) {
/* Parser modifies command_line, restore it each time */
strcpy(command_line, saved_command_line);
do_initcall_level(level, command_line);
}
for (level = 0; level < ARRAY_SIZE(initcall_levels) - 1; level++)
do_initcall_level(level);
kfree(command_line);
}
/*
......
......@@ -4373,7 +4373,7 @@ static void free_event_rcu(struct rcu_head *head)
}
static void ring_buffer_attach(struct perf_event *event,
struct ring_buffer *rb);
struct perf_buffer *rb);
static void detach_sb_event(struct perf_event *event)
{
......@@ -5054,7 +5054,7 @@ perf_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
static __poll_t perf_poll(struct file *file, poll_table *wait)
{
struct perf_event *event = file->private_data;
struct ring_buffer *rb;
struct perf_buffer *rb;
__poll_t events = EPOLLHUP;
poll_wait(file, &event->waitq, wait);
......@@ -5296,7 +5296,7 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
return perf_event_set_bpf_prog(event, arg);
case PERF_EVENT_IOC_PAUSE_OUTPUT: {
struct ring_buffer *rb;
struct perf_buffer *rb;
rcu_read_lock();
rb = rcu_dereference(event->rb);
......@@ -5432,7 +5432,7 @@ static void calc_timer_values(struct perf_event *event,
static void perf_event_init_userpage(struct perf_event *event)
{
struct perf_event_mmap_page *userpg;
struct ring_buffer *rb;
struct perf_buffer *rb;
rcu_read_lock();
rb = rcu_dereference(event->rb);
......@@ -5464,7 +5464,7 @@ void __weak arch_perf_update_userpage(
void perf_event_update_userpage(struct perf_event *event)
{
struct perf_event_mmap_page *userpg;
struct ring_buffer *rb;
struct perf_buffer *rb;
u64 enabled, running, now;
rcu_read_lock();
......@@ -5515,7 +5515,7 @@ EXPORT_SYMBOL_GPL(perf_event_update_userpage);
static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
{
struct perf_event *event = vmf->vma->vm_file->private_data;
struct ring_buffer *rb;
struct perf_buffer *rb;
vm_fault_t ret = VM_FAULT_SIGBUS;
if (vmf->flags & FAULT_FLAG_MKWRITE) {
......@@ -5548,9 +5548,9 @@ static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
}
static void ring_buffer_attach(struct perf_event *event,
struct ring_buffer *rb)
struct perf_buffer *rb)
{
struct ring_buffer *old_rb = NULL;
struct perf_buffer *old_rb = NULL;
unsigned long flags;
if (event->rb) {
......@@ -5608,7 +5608,7 @@ static void ring_buffer_attach(struct perf_event *event,
static void ring_buffer_wakeup(struct perf_event *event)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
rcu_read_lock();
rb = rcu_dereference(event->rb);
......@@ -5619,9 +5619,9 @@ static void ring_buffer_wakeup(struct perf_event *event)
rcu_read_unlock();
}
struct ring_buffer *ring_buffer_get(struct perf_event *event)
struct perf_buffer *ring_buffer_get(struct perf_event *event)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
rcu_read_lock();
rb = rcu_dereference(event->rb);
......@@ -5634,7 +5634,7 @@ struct ring_buffer *ring_buffer_get(struct perf_event *event)
return rb;
}
void ring_buffer_put(struct ring_buffer *rb)
void ring_buffer_put(struct perf_buffer *rb)
{
if (!refcount_dec_and_test(&rb->refcount))
return;
......@@ -5672,7 +5672,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
{
struct perf_event *event = vma->vm_file->private_data;
struct ring_buffer *rb = ring_buffer_get(event);
struct perf_buffer *rb = ring_buffer_get(event);
struct user_struct *mmap_user = rb->mmap_user;
int mmap_locked = rb->mmap_locked;
unsigned long size = perf_data_size(rb);
......@@ -5790,8 +5790,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
struct perf_event *event = file->private_data;
unsigned long user_locked, user_lock_limit;
struct user_struct *user = current_user();
struct perf_buffer *rb = NULL;
unsigned long locked, lock_limit;
struct ring_buffer *rb = NULL;
unsigned long vma_size;
unsigned long nr_pages;
long user_extra = 0, extra = 0;
......@@ -6266,7 +6266,7 @@ static unsigned long perf_prepare_sample_aux(struct perf_event *event,
size_t size)
{
struct perf_event *sampler = event->aux_event;
struct ring_buffer *rb;
struct perf_buffer *rb;
data->aux_size = 0;
......@@ -6299,7 +6299,7 @@ static unsigned long perf_prepare_sample_aux(struct perf_event *event,
return data->aux_size;
}
long perf_pmu_snapshot_aux(struct ring_buffer *rb,
long perf_pmu_snapshot_aux(struct perf_buffer *rb,
struct perf_event *event,
struct perf_output_handle *handle,
unsigned long size)
......@@ -6338,8 +6338,8 @@ static void perf_aux_sample_output(struct perf_event *event,
struct perf_sample_data *data)
{
struct perf_event *sampler = event->aux_event;
struct perf_buffer *rb;
unsigned long pad;
struct ring_buffer *rb;
long size;
if (WARN_ON_ONCE(!sampler || !data->aux_size))
......@@ -6707,7 +6707,7 @@ void perf_output_sample(struct perf_output_handle *handle,
int wakeup_events = event->attr.wakeup_events;
if (wakeup_events) {
struct ring_buffer *rb = handle->rb;
struct perf_buffer *rb = handle->rb;
int events = local_inc_return(&rb->events);
if (events >= wakeup_events) {
......@@ -7150,7 +7150,7 @@ void perf_event_exec(void)
}
struct remote_output {
struct ring_buffer *rb;
struct perf_buffer *rb;
int err;
};
......@@ -7158,7 +7158,7 @@ static void __perf_event_output_stop(struct perf_event *event, void *data)
{
struct perf_event *parent = event->parent;
struct remote_output *ro = data;
struct ring_buffer *rb = ro->rb;
struct perf_buffer *rb = ro->rb;
struct stop_event_data sd = {
.event = event,
};
......@@ -10998,7 +10998,7 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
static int
perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
{
struct ring_buffer *rb = NULL;
struct perf_buffer *rb = NULL;
int ret = -EINVAL;
if (!output_event)
......
......@@ -10,7 +10,7 @@
#define RING_BUFFER_WRITABLE 0x01
struct ring_buffer {
struct perf_buffer {
refcount_t refcount;
struct rcu_head rcu_head;
#ifdef CONFIG_PERF_USE_VMALLOC
......@@ -58,17 +58,17 @@ struct ring_buffer {
void *data_pages[0];
};
extern void rb_free(struct ring_buffer *rb);
extern void rb_free(struct perf_buffer *rb);
static inline void rb_free_rcu(struct rcu_head *rcu_head)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
rb = container_of(rcu_head, struct ring_buffer, rcu_head);
rb = container_of(rcu_head, struct perf_buffer, rcu_head);
rb_free(rb);
}
static inline void rb_toggle_paused(struct ring_buffer *rb, bool pause)
static inline void rb_toggle_paused(struct perf_buffer *rb, bool pause)
{
if (!pause && rb->nr_pages)
rb->paused = 0;
......@@ -76,16 +76,16 @@ static inline void rb_toggle_paused(struct ring_buffer *rb, bool pause)
rb->paused = 1;
}
extern struct ring_buffer *
extern struct perf_buffer *
rb_alloc(int nr_pages, long watermark, int cpu, int flags);
extern void perf_event_wakeup(struct perf_event *event);
extern int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
extern int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
pgoff_t pgoff, int nr_pages, long watermark, int flags);
extern void rb_free_aux(struct ring_buffer *rb);
extern struct ring_buffer *ring_buffer_get(struct perf_event *event);
extern void ring_buffer_put(struct ring_buffer *rb);
extern void rb_free_aux(struct perf_buffer *rb);
extern struct perf_buffer *ring_buffer_get(struct perf_event *event);
extern void ring_buffer_put(struct perf_buffer *rb);
static inline bool rb_has_aux(struct ring_buffer *rb)
static inline bool rb_has_aux(struct perf_buffer *rb)
{
return !!rb->aux_nr_pages;
}
......@@ -94,7 +94,7 @@ void perf_event_aux_event(struct perf_event *event, unsigned long head,
unsigned long size, u64 flags);
extern struct page *
perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff);
perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff);
#ifdef CONFIG_PERF_USE_VMALLOC
/*
......@@ -103,25 +103,25 @@ perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff);
* Required for architectures that have d-cache aliasing issues.
*/
static inline int page_order(struct ring_buffer *rb)
static inline int page_order(struct perf_buffer *rb)
{
return rb->page_order;
}
#else
static inline int page_order(struct ring_buffer *rb)
static inline int page_order(struct perf_buffer *rb)
{
return 0;
}
#endif
static inline unsigned long perf_data_size(struct ring_buffer *rb)
static inline unsigned long perf_data_size(struct perf_buffer *rb)
{
return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
}
static inline unsigned long perf_aux_size(struct ring_buffer *rb)
static inline unsigned long perf_aux_size(struct perf_buffer *rb)
{
return rb->aux_nr_pages << PAGE_SHIFT;
}
......@@ -141,7 +141,7 @@ static inline unsigned long perf_aux_size(struct ring_buffer *rb)
buf += written; \
handle->size -= written; \
if (!handle->size) { \
struct ring_buffer *rb = handle->rb; \
struct perf_buffer *rb = handle->rb; \
\
handle->page++; \
handle->page &= rb->nr_pages - 1; \
......
......@@ -35,7 +35,7 @@ static void perf_output_wakeup(struct perf_output_handle *handle)
*/
static void perf_output_get_handle(struct perf_output_handle *handle)
{
struct ring_buffer *rb = handle->rb;
struct perf_buffer *rb = handle->rb;
preempt_disable();
......@@ -49,7 +49,7 @@ static void perf_output_get_handle(struct perf_output_handle *handle)
static void perf_output_put_handle(struct perf_output_handle *handle)
{
struct ring_buffer *rb = handle->rb;
struct perf_buffer *rb = handle->rb;
unsigned long head;
unsigned int nest;
......@@ -150,7 +150,7 @@ __perf_output_begin(struct perf_output_handle *handle,
struct perf_event *event, unsigned int size,
bool backward)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
unsigned long tail, offset, head;
int have_lost, page_shift;
struct {
......@@ -301,7 +301,7 @@ void perf_output_end(struct perf_output_handle *handle)
}
static void
ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
ring_buffer_init(struct perf_buffer *rb, long watermark, int flags)
{
long max_size = perf_data_size(rb);
......@@ -361,7 +361,7 @@ void *perf_aux_output_begin(struct perf_output_handle *handle,
{
struct perf_event *output_event = event;
unsigned long aux_head, aux_tail;
struct ring_buffer *rb;
struct perf_buffer *rb;
unsigned int nest;
if (output_event->parent)
......@@ -449,7 +449,7 @@ void *perf_aux_output_begin(struct perf_output_handle *handle,
}
EXPORT_SYMBOL_GPL(perf_aux_output_begin);
static __always_inline bool rb_need_aux_wakeup(struct ring_buffer *rb)
static __always_inline bool rb_need_aux_wakeup(struct perf_buffer *rb)
{
if (rb->aux_overwrite)
return false;
......@@ -475,7 +475,7 @@ static __always_inline bool rb_need_aux_wakeup(struct ring_buffer *rb)
void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
{
bool wakeup = !!(handle->aux_flags & PERF_AUX_FLAG_TRUNCATED);
struct ring_buffer *rb = handle->rb;
struct perf_buffer *rb = handle->rb;
unsigned long aux_head;
/* in overwrite mode, driver provides aux_head via handle */
......@@ -532,7 +532,7 @@ EXPORT_SYMBOL_GPL(perf_aux_output_end);
*/
int perf_aux_output_skip(struct perf_output_handle *handle, unsigned long size)
{
struct ring_buffer *rb = handle->rb;
struct perf_buffer *rb = handle->rb;
if (size > handle->size)
return -ENOSPC;
......@@ -569,8 +569,8 @@ long perf_output_copy_aux(struct perf_output_handle *aux_handle,
struct perf_output_handle *handle,
unsigned long from, unsigned long to)
{
struct perf_buffer *rb = aux_handle->rb;
unsigned long tocopy, remainder, len = 0;
struct ring_buffer *rb = aux_handle->rb;
void *addr;
from &= (rb->aux_nr_pages << PAGE_SHIFT) - 1;
......@@ -626,7 +626,7 @@ static struct page *rb_alloc_aux_page(int node, int order)
return page;
}
static void rb_free_aux_page(struct ring_buffer *rb, int idx)
static void rb_free_aux_page(struct perf_buffer *rb, int idx)
{
struct page *page = virt_to_page(rb->aux_pages[idx]);
......@@ -635,7 +635,7 @@ static void rb_free_aux_page(struct ring_buffer *rb, int idx)
__free_page(page);
}
static void __rb_free_aux(struct ring_buffer *rb)
static void __rb_free_aux(struct perf_buffer *rb)
{
int pg;
......@@ -662,7 +662,7 @@ static void __rb_free_aux(struct ring_buffer *rb)
}
}
int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
pgoff_t pgoff, int nr_pages, long watermark, int flags)
{
bool overwrite = !(flags & RING_BUFFER_WRITABLE);
......@@ -753,7 +753,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
return ret;
}
void rb_free_aux(struct ring_buffer *rb)
void rb_free_aux(struct perf_buffer *rb)
{
if (refcount_dec_and_test(&rb->aux_refcount))
__rb_free_aux(rb);
......@@ -766,7 +766,7 @@ void rb_free_aux(struct ring_buffer *rb)
*/
static struct page *
__perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff)
__perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
{
if (pgoff > rb->nr_pages)
return NULL;
......@@ -798,13 +798,13 @@ static void perf_mmap_free_page(void *addr)
__free_page(page);
}
struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
struct perf_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
unsigned long size;
int i;
size = sizeof(struct ring_buffer);
size = sizeof(struct perf_buffer);
size += nr_pages * sizeof(void *);
if (order_base_2(size) >= PAGE_SHIFT+MAX_ORDER)
......@@ -843,7 +843,7 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
return NULL;
}
void rb_free(struct ring_buffer *rb)
void rb_free(struct perf_buffer *rb)
{
int i;
......@@ -854,13 +854,13 @@ void rb_free(struct ring_buffer *rb)
}
#else
static int data_page_nr(struct ring_buffer *rb)
static int data_page_nr(struct perf_buffer *rb)
{
return rb->nr_pages << page_order(rb);
}
static struct page *
__perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff)
__perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
{
/* The '>' counts in the user page. */
if (pgoff > data_page_nr(rb))
......@@ -878,11 +878,11 @@ static void perf_mmap_unmark_page(void *addr)
static void rb_free_work(struct work_struct *work)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
void *base;
int i, nr;
rb = container_of(work, struct ring_buffer, work);
rb = container_of(work, struct perf_buffer, work);
nr = data_page_nr(rb);
base = rb->user_page;
......@@ -894,18 +894,18 @@ static void rb_free_work(struct work_struct *work)
kfree(rb);
}
void rb_free(struct ring_buffer *rb)
void rb_free(struct perf_buffer *rb)
{
schedule_work(&rb->work);
}
struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
struct perf_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
{
struct ring_buffer *rb;
struct perf_buffer *rb;
unsigned long size;
void *all_buf;
size = sizeof(struct ring_buffer);
size = sizeof(struct perf_buffer);
size += sizeof(void *);
rb = kzalloc(size, GFP_KERNEL);
......@@ -939,7 +939,7 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
#endif
struct page *
perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff)
perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
{
if (rb->aux_nr_pages) {
/* above AUX space */
......

......@@ -44,6 +44,8 @@ obj-$(CONFIG_TRACING) += trace_stat.o
obj-$(CONFIG_TRACING) += trace_printk.o
obj-$(CONFIG_TRACING_MAP) += tracing_map.o
obj-$(CONFIG_PREEMPTIRQ_DELAY_TEST) += preemptirq_delay_test.o
obj-$(CONFIG_SYNTH_EVENT_GEN_TEST) += synth_event_gen_test.o
obj-$(CONFIG_KPROBE_EVENT_GEN_TEST) += kprobe_event_gen_test.o
obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
obj-$(CONFIG_PREEMPTIRQ_TRACEPOINTS) += trace_preemptirq.o
......@@ -83,6 +85,7 @@ endif
obj-$(CONFIG_DYNAMIC_EVENTS) += trace_dynevent.o
obj-$(CONFIG_PROBE_EVENTS) += trace_probe.o
obj-$(CONFIG_UPROBE_EVENTS) += trace_uprobe.o
obj-$(CONFIG_BOOTTIME_TRACING) += trace_boot.o
obj-$(CONFIG_TRACEPOINT_BENCHMARK) += trace_benchmark.o
......
......@@ -68,14 +68,14 @@ static void trace_note(struct blk_trace *bt, pid_t pid, int action,
{
struct blk_io_trace *t;
struct ring_buffer_event *event = NULL;
struct ring_buffer *buffer = NULL;
struct trace_buffer *buffer = NULL;
int pc = 0;
int cpu = smp_processor_id();
bool blk_tracer = blk_tracer_enabled;
ssize_t cgid_len = cgid ? sizeof(cgid) : 0;
if (blk_tracer) {
buffer = blk_tr->trace_buffer.buffer;
buffer = blk_tr->array_buffer.buffer;
pc = preempt_count();
event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
sizeof(*t) + len + cgid_len,
......@@ -215,7 +215,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
{
struct task_struct *tsk = current;
struct ring_buffer_event *event = NULL;
struct ring_buffer *buffer = NULL;
struct trace_buffer *buffer = NULL;
struct blk_io_trace *t;
unsigned long flags = 0;
unsigned long *sequence;
......@@ -248,7 +248,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
if (blk_tracer) {
tracing_record_cmdline(current);
buffer = blk_tr->trace_buffer.buffer;
buffer = blk_tr->array_buffer.buffer;
pc = preempt_count();
event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
sizeof(*t) + pdu_len + cgid_len,
......
......@@ -62,8 +62,6 @@
})
/* hash bits for specific function selection */
#define FTRACE_HASH_BITS 7
#define FTRACE_FUNC_HASHSIZE (1 << FTRACE_HASH_BITS)
#define FTRACE_HASH_DEFAULT_BITS 10
#define FTRACE_HASH_MAX_BITS 12
......@@ -146,7 +144,7 @@ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
{
struct trace_array *tr = op->private;
if (tr && this_cpu_read(tr->trace_buffer.data->ftrace_ignore_pid))
if (tr && this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid))
return;
op->saved_func(ip, parent_ip, op, regs);
......@@ -1103,9 +1101,6 @@ struct ftrace_page {
#define ENTRY_SIZE sizeof(struct dyn_ftrace)
#define ENTRIES_PER_PAGE (PAGE_SIZE / ENTRY_SIZE)
/* estimate from running different kernels */
#define NR_TO_INIT 10000
static struct ftrace_page *ftrace_pages_start;
static struct ftrace_page *ftrace_pages;
......@@ -5464,7 +5459,7 @@ static void __init set_ftrace_early_graph(char *buf, int enable)
struct ftrace_hash *hash;
hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
if (WARN_ON(!hash))
if (MEM_FAIL(!hash, "Failed to allocate hash\n"))
return;
while (buf) {
......@@ -5596,8 +5591,8 @@ static const struct file_operations ftrace_notrace_fops = {
static DEFINE_MUTEX(graph_lock);
struct ftrace_hash *ftrace_graph_hash = EMPTY_HASH;
struct ftrace_hash *ftrace_graph_notrace_hash = EMPTY_HASH;
struct ftrace_hash __rcu *ftrace_graph_hash = EMPTY_HASH;
struct ftrace_hash __rcu *ftrace_graph_notrace_hash = EMPTY_HASH;
enum graph_filter_type {
GRAPH_FILTER_NOTRACE = 0,
......@@ -5872,8 +5867,15 @@ ftrace_graph_release(struct inode *inode, struct file *file)
mutex_unlock(&graph_lock);
/* Wait till all users are no longer using the old hash */
synchronize_rcu();
/*
* We need to do a hard force of sched synchronization.
* This is because we use preempt_disable() to do RCU, but
* the function tracers can be called where RCU is not watching
* (like before user_exit()). We can not rely on the RCU
* infrastructure to do the synchronization, thus we must do it
* ourselves.
*/
schedule_on_each_cpu(ftrace_sync);
free_ftrace_hash(old_hash);
}
......@@ -6596,7 +6598,7 @@ static void add_to_clear_hash_list(struct list_head *clear_list,
func = kmalloc(sizeof(*func), GFP_KERNEL);
if (!func) {
WARN_ONCE(1, "alloc failure, ftrace filter could be stale\n");
MEM_FAIL(1, "alloc failure, ftrace filter could be stale\n");
return;
}
......@@ -6922,7 +6924,7 @@ ftrace_filter_pid_sched_switch_probe(void *data, bool preempt,
pid_list = rcu_dereference_sched(tr->function_pids);
this_cpu_write(tr->trace_buffer.data->ftrace_ignore_pid,
this_cpu_write(tr->array_buffer.data->ftrace_ignore_pid,
trace_ignore_this_task(pid_list, next));
}
......@@ -6976,7 +6978,7 @@ static void clear_ftrace_pids(struct trace_array *tr)
unregister_trace_sched_switch(ftrace_filter_pid_sched_switch_probe, tr);
for_each_possible_cpu(cpu)
per_cpu_ptr(tr->trace_buffer.data, cpu)->ftrace_ignore_pid = false;
per_cpu_ptr(tr->array_buffer.data, cpu)->ftrace_ignore_pid = false;
rcu_assign_pointer(tr->function_pids, NULL);
......@@ -7031,9 +7033,10 @@ static void *fpid_next(struct seq_file *m, void *v, loff_t *pos)
struct trace_array *tr = m->private;
struct trace_pid_list *pid_list = rcu_dereference_sched(tr->function_pids);
if (v == FTRACE_NO_PIDS)
if (v == FTRACE_NO_PIDS) {
(*pos)++;
return NULL;
}
return trace_pid_next(pid_list, v, pos);
}
......@@ -7100,7 +7103,7 @@ static void ignore_task_cpu(void *data)
pid_list = rcu_dereference_protected(tr->function_pids,
mutex_is_locked(&ftrace_lock));
this_cpu_write(tr->trace_buffer.data->ftrace_ignore_pid,
this_cpu_write(tr->array_buffer.data->ftrace_ignore_pid,
trace_ignore_this_task(pid_list, current));
}
......
// SPDX-License-Identifier: GPL-2.0
/*
* Test module for in-kernel kprobe event creation and generation.
*
* Copyright (C) 2019 Tom Zanussi <zanussi@kernel.org>
*/
#include <linux/module.h>
#include <linux/trace_events.h>
/*
* This module is a simple test of basic functionality for in-kernel
* kprobe/kretprobe event creation. The first test uses
* kprobe_event_gen_cmd_start(), kprobe_event_add_fields() and
* kprobe_event_gen_cmd_end() to create a kprobe event, which is then
* enabled in order to generate trace output. The second creates a
* kretprobe event using kretprobe_event_gen_cmd_start() and
* kretprobe_event_gen_cmd_end(), and is also then enabled.
*
* To test, select CONFIG_KPROBE_EVENT_GEN_TEST and build the module.
* Then:
*
* # insmod kernel/trace/kprobe_event_gen_test.ko
* # cat /sys/kernel/debug/tracing/trace
*
* You should see many instances of the "gen_kprobe_test" and
* "gen_kretprobe_test" events in the trace buffer.
*
* To remove the events, remove the module:
*
* # rmmod kprobe_event_gen_test
*
*/
static struct trace_event_file *gen_kprobe_test;
static struct trace_event_file *gen_kretprobe_test;
/*
* Test to make sure we can create a kprobe event, then add more
* fields.
*/
static int __init test_gen_kprobe_cmd(void)
{
struct dynevent_cmd cmd;
char *buf;
int ret;
/* Create a buffer to hold the generated command */
buf = kzalloc(MAX_DYNEVENT_CMD_LEN, GFP_KERNEL);
if (!buf)
return -ENOMEM;
/* Before generating the command, initialize the cmd object */
kprobe_event_cmd_init(&cmd, buf, MAX_DYNEVENT_CMD_LEN);
/*
* Define the gen_kprobe_test event with the first 2 kprobe
* fields.
*/
ret = kprobe_event_gen_cmd_start(&cmd, "gen_kprobe_test",
"do_sys_open",
"dfd=%ax", "filename=%dx");
if (ret)
goto free;
/* Use kprobe_event_add_fields to add the rest of the fields */
ret = kprobe_event_add_fields(&cmd, "flags=%cx", "mode=+4($stack)");
if (ret)
goto free;
/*
* This actually creates the event.
*/
ret = kprobe_event_gen_cmd_end(&cmd);
if (ret)
goto free;
/*
* Now get the gen_kprobe_test event file. We need to prevent
* the instance and event from disappearing from underneath
* us, which trace_get_event_file() does (though in this case
* we're using the top-level instance which never goes away).
*/
gen_kprobe_test = trace_get_event_file(NULL, "kprobes",
"gen_kprobe_test");
if (IS_ERR(gen_kprobe_test)) {
ret = PTR_ERR(gen_kprobe_test);
goto delete;
}
/* Enable the event or you won't see anything */
ret = trace_array_set_clr_event(gen_kprobe_test->tr,
"kprobes", "gen_kprobe_test", true);
if (ret) {
trace_put_event_file(gen_kprobe_test);
goto delete;
}
out:
return ret;
delete:
/* We got an error after creating the event, delete it */
ret = kprobe_event_delete("gen_kprobe_test");
free:
kfree(buf);
goto out;
}
/*
* Test to make sure we can create a kretprobe event.
*/
static int __init test_gen_kretprobe_cmd(void)
{
struct dynevent_cmd cmd;
char *buf;
int ret;
/* Create a buffer to hold the generated command */
buf = kzalloc(MAX_DYNEVENT_CMD_LEN, GFP_KERNEL);
if (!buf)
return -ENOMEM;
/* Before generating the command, initialize the cmd object */
kprobe_event_cmd_init(&cmd, buf, MAX_DYNEVENT_CMD_LEN);
/*
* Define the kretprobe event.
*/
ret = kretprobe_event_gen_cmd_start(&cmd, "gen_kretprobe_test",
"do_sys_open",
"$retval");
if (ret)
goto free;
/*
* This actually creates the event.
*/
ret = kretprobe_event_gen_cmd_end(&cmd);
if (ret)
goto free;
/*
* Now get the gen_kretprobe_test event file. We need to
* prevent the instance and event from disappearing from
* underneath us, which trace_get_event_file() does (though in
* this case we're using the top-level instance which never
* goes away).
*/
gen_kretprobe_test = trace_get_event_file(NULL, "kprobes",
"gen_kretprobe_test");
if (IS_ERR(gen_kretprobe_test)) {
ret = PTR_ERR(gen_kretprobe_test);
goto delete;
}
/* Enable the event or you won't see anything */
ret = trace_array_set_clr_event(gen_kretprobe_test->tr,
"kprobes", "gen_kretprobe_test", true);
if (ret) {
trace_put_event_file(gen_kretprobe_test);
goto delete;
}
out:
return ret;
delete:
/* We got an error after creating the event, delete it */
ret = kprobe_event_delete("gen_kretprobe_test");
free:
kfree(buf);
goto out;
}
static int __init kprobe_event_gen_test_init(void)
{
int ret;
ret = test_gen_kprobe_cmd();
if (ret)
return ret;
ret = test_gen_kretprobe_cmd();
if (ret) {
WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
"kprobes",
"gen_kretprobe_test", false));
trace_put_event_file(gen_kretprobe_test);
WARN_ON(kprobe_event_delete("gen_kretprobe_test"));
}
return ret;
}
static void __exit kprobe_event_gen_test_exit(void)
{
/* Disable the event or you can't remove it */
WARN_ON(trace_array_set_clr_event(gen_kprobe_test->tr,
"kprobes",
"gen_kprobe_test", false));
/* Now give the file and instance back */
trace_put_event_file(gen_kprobe_test);
/* Now unregister and free the event */
WARN_ON(kprobe_event_delete("gen_kprobe_test"));
/* Disable the event or you can't remove it */
WARN_ON(trace_array_set_clr_event(gen_kretprobe_test->tr,
"kprobes",
"gen_kretprobe_test", false));
/* Now give the file and instance back */
trace_put_event_file(gen_kretprobe_test);
/* Now unregister and free the event */
WARN_ON(kprobe_event_delete("gen_kretprobe_test"));
}
module_init(kprobe_event_gen_test_init)
module_exit(kprobe_event_gen_test_exit)
MODULE_AUTHOR("Tom Zanussi");
MODULE_DESCRIPTION("kprobe event generation test");
MODULE_LICENSE("GPL v2");
......@@ -29,7 +29,7 @@ static int reader_finish;
static DECLARE_COMPLETION(read_start);
static DECLARE_COMPLETION(read_done);
static struct ring_buffer *buffer;
static struct trace_buffer *buffer;
static struct task_struct *producer;
static struct task_struct *consumer;
static unsigned long read;
......
......@@ -93,6 +93,18 @@ enum trace_type {
#include "trace_entries.h"
/* Use this for memory failure errors */
#define MEM_FAIL(condition, fmt, ...) ({ \
static bool __section(.data.once) __warned; \
int __ret_warn_once = !!(condition); \
\
if (unlikely(__ret_warn_once && !__warned)) { \
__warned = true; \
pr_err("ERROR: " fmt, ##__VA_ARGS__); \
} \
unlikely(__ret_warn_once); \
})
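A usage sketch for MEM_FAIL(), mirroring the ftrace.c hunks above that convert WARN_ON()/WARN_ONCE() call sites; the call site below is hypothetical:

	/* Log the allocation failure once, without a WARN backtrace,
	 * and propagate the error (hypothetical caller). */
	hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
	if (MEM_FAIL(!hash, "Failed to allocate hash\n"))
		return -ENOMEM;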
/*
* syscalls are special, and need special handling, this is why
* they are not included in trace_entries.h
......@@ -175,9 +187,9 @@ struct trace_array_cpu {
struct tracer;
struct trace_option_dentry;
struct trace_buffer {
struct array_buffer {
struct trace_array *tr;
struct ring_buffer *buffer;
struct trace_buffer *buffer;
struct trace_array_cpu __percpu *data;
u64 time_start;
int cpu;
......@@ -248,7 +260,7 @@ struct cond_snapshot {
struct trace_array {
struct list_head list;
char *name;
struct trace_buffer trace_buffer;
struct array_buffer array_buffer;
#ifdef CONFIG_TRACER_MAX_TRACE
/*
* The max_buffer is used to snapshot the trace when a maximum
......@@ -256,12 +268,12 @@ struct trace_array {
* Some tracers will use this to store a maximum trace while
* it continues examining live traces.
*
* The buffers for the max_buffer are set up the same as the trace_buffer
* The buffers for the max_buffer are set up the same as the array_buffer
* When a snapshot is taken, the buffer of the max_buffer is swapped
* with the buffer of the trace_buffer and the buffers are reset for
* the trace_buffer so the tracing can continue.
* with the buffer of the array_buffer and the buffers are reset for
* the array_buffer so the tracing can continue.
*/
struct trace_buffer max_buffer;
struct array_buffer max_buffer;
bool allocated_snapshot;
#endif
#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
......@@ -345,6 +357,8 @@ extern struct mutex trace_types_lock;
extern int trace_array_get(struct trace_array *tr);
extern int tracing_check_open_get_tr(struct trace_array *tr);
extern struct trace_array *trace_array_find(const char *instance);
extern struct trace_array *trace_array_find_get(const char *instance);
extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);
extern int tracing_set_clock(struct trace_array *tr, const char *clockstr);
......@@ -684,7 +698,7 @@ trace_buffer_iter(struct trace_iterator *iter, int cpu)
int tracer_init(struct tracer *t, struct trace_array *tr);
int tracing_is_enabled(void);
void tracing_reset_online_cpus(struct trace_buffer *buf);
void tracing_reset_online_cpus(struct array_buffer *buf);
void tracing_reset_current(int cpu);
void tracing_reset_all_online_cpus(void);
int tracing_open_generic(struct inode *inode, struct file *filp);
......@@ -704,7 +718,7 @@ struct dentry *tracing_init_dentry(void);
struct ring_buffer_event;
struct ring_buffer_event *
trace_buffer_lock_reserve(struct ring_buffer *buffer,
trace_buffer_lock_reserve(struct trace_buffer *buffer,
int type,
unsigned long len,
unsigned long flags,
......@@ -716,7 +730,7 @@ struct trace_entry *tracing_get_trace_entry(struct trace_array *tr,
struct trace_entry *trace_find_next_entry(struct trace_iterator *iter,
int *ent_cpu, u64 *ent_ts);
void trace_buffer_unlock_commit_nostack(struct ring_buffer *buffer,
void trace_buffer_unlock_commit_nostack(struct trace_buffer *buffer,
struct ring_buffer_event *event);
int trace_empty(struct trace_iterator *iter);
......@@ -872,7 +886,7 @@ trace_vprintk(unsigned long ip, const char *fmt, va_list args);
extern int
trace_array_vprintk(struct trace_array *tr,
unsigned long ip, const char *fmt, va_list args);
int trace_array_printk_buf(struct ring_buffer *buffer,
int trace_array_printk_buf(struct trace_buffer *buffer,
unsigned long ip, const char *fmt, ...);
void trace_printk_seq(struct trace_seq *s);
enum print_line_t print_trace_line(struct trace_iterator *iter);
......@@ -949,22 +963,31 @@ extern void __trace_graph_return(struct trace_array *tr,
unsigned long flags, int pc);
#ifdef CONFIG_DYNAMIC_FTRACE
extern struct ftrace_hash *ftrace_graph_hash;
extern struct ftrace_hash *ftrace_graph_notrace_hash;
extern struct ftrace_hash __rcu *ftrace_graph_hash;
extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash;
static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
{
unsigned long addr = trace->func;
int ret = 0;
struct ftrace_hash *hash;
preempt_disable_notrace();
if (ftrace_hash_empty(ftrace_graph_hash)) {
/*
* Have to open code "rcu_dereference_sched()" because the
* function graph tracer can be called when RCU is not
* "watching".
* Protected with schedule_on_each_cpu(ftrace_sync)
*/
hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
if (ftrace_hash_empty(hash)) {
ret = 1;
goto out;
}
if (ftrace_lookup_ip(ftrace_graph_hash, addr)) {
if (ftrace_lookup_ip(hash, addr)) {
/*
* This needs to be cleared on the return functions
......@@ -1000,10 +1023,20 @@ static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
static inline int ftrace_graph_notrace_addr(unsigned long addr)
{
int ret = 0;
struct ftrace_hash *notrace_hash;
preempt_disable_notrace();
if (ftrace_lookup_ip(ftrace_graph_notrace_hash, addr))
/*
* Have to open code "rcu_dereference_sched()" because the
* function graph tracer can be called when RCU is not
* "watching".
* Protected with schedule_on_each_cpu(ftrace_sync)
*/
notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
!preemptible());
if (ftrace_lookup_ip(notrace_hash, addr))
ret = 1;
preempt_enable_notrace();
......@@ -1056,7 +1089,7 @@ struct ftrace_func_command {
extern bool ftrace_filter_param __initdata;
static inline int ftrace_trace_task(struct trace_array *tr)
{
return !this_cpu_read(tr->trace_buffer.data->ftrace_ignore_pid);
return !this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid);
}
extern int ftrace_is_dead(void);
int ftrace_create_function_files(struct trace_array *tr,
......@@ -1144,6 +1177,11 @@ int unregister_ftrace_command(struct ftrace_func_command *cmd);
void ftrace_create_filter_files(struct ftrace_ops *ops,
struct dentry *parent);
void ftrace_destroy_filter_files(struct ftrace_ops *ops);
extern int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
int len, int reset);
extern int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
int len, int reset);
#else
struct ftrace_func_command;
......@@ -1366,17 +1404,17 @@ struct trace_subsystem_dir {
};
extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event);
void trace_buffer_unlock_commit_regs(struct trace_array *tr,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc,
struct pt_regs *regs);
static inline void trace_buffer_unlock_commit(struct trace_array *tr,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event,
unsigned long flags, int pc)
{
......@@ -1389,7 +1427,7 @@ void trace_buffered_event_disable(void);
void trace_buffered_event_enable(void);
static inline void
__trace_event_discard_commit(struct ring_buffer *buffer,
__trace_event_discard_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event)
{
if (this_cpu_read(trace_buffered_event) == event) {
......@@ -1415,7 +1453,7 @@ __trace_event_discard_commit(struct ring_buffer *buffer,
*/
static inline bool
__event_trigger_test_discard(struct trace_event_file *file,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event,
void *entry,
enum event_trigger_type *tt)
......@@ -1450,7 +1488,7 @@ __event_trigger_test_discard(struct trace_event_file *file,
*/
static inline void
event_trigger_unlock_commit(struct trace_event_file *file,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc)
{
......@@ -1481,7 +1519,7 @@ event_trigger_unlock_commit(struct trace_event_file *file,
*/
static inline void
event_trigger_unlock_commit_regs(struct trace_event_file *file,
struct ring_buffer *buffer,
struct trace_buffer *buffer,
struct ring_buffer_event *event,
void *entry, unsigned long irq_flags, int pc,
struct pt_regs *regs)
......@@ -1892,6 +1930,15 @@ void trace_printk_start_comm(void);
int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
/* Used from boot time tracer */
extern int trace_set_options(struct trace_array *tr, char *option);
extern int tracing_set_tracer(struct trace_array *tr, const char *buf);
extern ssize_t tracing_resize_ring_buffer(struct trace_array *tr,
unsigned long size, int cpu_id);
extern int tracing_set_cpumask(struct trace_array *tr,
cpumask_var_t tracing_cpumask_new);
#define MAX_EVENT_NAME_LEN 64
extern int trace_run_command(const char *buf, int (*createfn)(int, char**));
......@@ -1949,6 +1996,9 @@ static inline const char *get_syscall_name(int syscall)
#ifdef CONFIG_EVENT_TRACING
void trace_event_init(void);
void trace_event_eval_update(struct trace_eval_map **map, int len);
/* Used from boot time tracer */
extern int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set);
extern int trigger_process_regex(struct trace_event_file *file, char *buff);
#else
static inline void __init trace_event_init(void) { }
static inline void trace_event_eval_update(struct trace_eval_map **map, int len) { }
......
// SPDX-License-Identifier: GPL-2.0
/*
* trace_boot.c
* Tracing kernel boot-time
*/
#define pr_fmt(fmt) "trace_boot: " fmt
#include <linux/bootconfig.h>
#include <linux/cpumask.h>
#include <linux/ftrace.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/trace.h>
#include <linux/trace_events.h>
#include "trace.h"
#define MAX_BUF_LEN 256
static void __init
trace_boot_set_instance_options(struct trace_array *tr, struct xbc_node *node)
{
struct xbc_node *anode;
const char *p;
char buf[MAX_BUF_LEN];
unsigned long v = 0;
/* Common ftrace options */
xbc_node_for_each_array_value(node, "options", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) {
pr_err("String is too long: %s\n", p);
continue;
}
if (trace_set_options(tr, buf) < 0)
pr_err("Failed to set option: %s\n", buf);
}
p = xbc_node_find_value(node, "trace_clock", NULL);
if (p && *p != '\0') {
if (tracing_set_clock(tr, p) < 0)
pr_err("Failed to set trace clock: %s\n", p);
}
p = xbc_node_find_value(node, "buffer_size", NULL);
if (p && *p != '\0') {
v = memparse(p, NULL);
if (v < PAGE_SIZE)
pr_err("Buffer size is too small: %s\n", p);
if (tracing_resize_ring_buffer(tr, v, RING_BUFFER_ALL_CPUS) < 0)
pr_err("Failed to resize trace buffer to %s\n", p);
}
p = xbc_node_find_value(node, "cpumask", NULL);
if (p && *p != '\0') {
cpumask_var_t new_mask;
if (alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
if (cpumask_parse(p, new_mask) < 0 ||
tracing_set_cpumask(tr, new_mask) < 0)
pr_err("Failed to set new CPU mask %s\n", p);
free_cpumask_var(new_mask);
}
}
}
#ifdef CONFIG_EVENT_TRACING
static void __init
trace_boot_enable_events(struct trace_array *tr, struct xbc_node *node)
{
struct xbc_node *anode;
char buf[MAX_BUF_LEN];
const char *p;
xbc_node_for_each_array_value(node, "events", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) {
pr_err("String is too long: %s\n", p);
continue;
}
if (ftrace_set_clr_event(tr, buf, 1) < 0)
pr_err("Failed to enable event: %s\n", p);
}
}
#ifdef CONFIG_KPROBE_EVENTS
static int __init
trace_boot_add_kprobe_event(struct xbc_node *node, const char *event)
{
struct dynevent_cmd cmd;
struct xbc_node *anode;
char buf[MAX_BUF_LEN];
const char *val;
int ret;
kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);
ret = kprobe_event_gen_cmd_start(&cmd, event, NULL);
if (ret)
return ret;
xbc_node_for_each_array_value(node, "probes", anode, val) {
ret = kprobe_event_add_field(&cmd, val);
if (ret)
return ret;
}
ret = kprobe_event_gen_cmd_end(&cmd);
if (ret)
pr_err("Failed to add probe: %s\n", buf);
return ret;
}
#else
static inline int __init
trace_boot_add_kprobe_event(struct xbc_node *node, const char *event)
{
pr_err("Kprobe event is not supported.\n");
return -ENOTSUPP;
}
#endif
#ifdef CONFIG_HIST_TRIGGERS
static int __init
trace_boot_add_synth_event(struct xbc_node *node, const char *event)
{
struct dynevent_cmd cmd;
struct xbc_node *anode;
char buf[MAX_BUF_LEN];
const char *p;
int ret;
synth_event_cmd_init(&cmd, buf, MAX_BUF_LEN);
ret = synth_event_gen_cmd_start(&cmd, event, NULL);
if (ret)
return ret;
xbc_node_for_each_array_value(node, "fields", anode, p) {
ret = synth_event_add_field_str(&cmd, p);
if (ret)
return ret;
}
ret = synth_event_gen_cmd_end(&cmd);
if (ret < 0)
pr_err("Failed to add synthetic event: %s\n", buf);
return ret;
}
#else
static inline int __init
trace_boot_add_synth_event(struct xbc_node *node, const char *event)
{
pr_err("Synthetic event is not supported.\n");
return -ENOTSUPP;
}
#endif
static void __init
trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode,
struct xbc_node *enode)
{
struct trace_event_file *file;
struct xbc_node *anode;
char buf[MAX_BUF_LEN];
const char *p, *group, *event;
group = xbc_node_get_data(gnode);
event = xbc_node_get_data(enode);
if (!strcmp(group, "kprobes"))
if (trace_boot_add_kprobe_event(enode, event) < 0)
return;
if (!strcmp(group, "synthetic"))
if (trace_boot_add_synth_event(enode, event) < 0)
return;
mutex_lock(&event_mutex);
file = find_event_file(tr, group, event);
if (!file) {
pr_err("Failed to find event: %s:%s\n", group, event);
goto out;
}
p = xbc_node_find_value(enode, "filter", NULL);
if (p && *p != '\0') {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
pr_err("filter string is too long: %s\n", p);
else if (apply_event_filter(file, buf) < 0)
pr_err("Failed to apply filter: %s\n", buf);
}
xbc_node_for_each_array_value(enode, "actions", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
pr_err("action string is too long: %s\n", p);
else if (trigger_process_regex(file, buf) < 0)
pr_err("Failed to apply an action: %s\n", buf);
}
if (xbc_node_find_value(enode, "enable", NULL)) {
if (trace_event_enable_disable(file, 1, 0) < 0)
pr_err("Failed to enable event node: %s:%s\n",
group, event);
}
out:
mutex_unlock(&event_mutex);
}
static void __init
trace_boot_init_events(struct trace_array *tr, struct xbc_node *node)
{
struct xbc_node *gnode, *enode;
node = xbc_node_find_child(node, "event");
if (!node)
return;
/* per-event key starts with "event.GROUP.EVENT" */
xbc_node_for_each_child(node, gnode)
xbc_node_for_each_child(gnode, enode)
trace_boot_init_one_event(tr, gnode, enode);
}
#else
#define trace_boot_enable_events(tr, node) do {} while (0)
#define trace_boot_init_events(tr, node) do {} while (0)
#endif
#ifdef CONFIG_DYNAMIC_FTRACE
static void __init
trace_boot_set_ftrace_filter(struct trace_array *tr, struct xbc_node *node)
{
struct xbc_node *anode;
const char *p;
char *q;
xbc_node_for_each_array_value(node, "ftrace.filters", anode, p) {
q = kstrdup(p, GFP_KERNEL);
if (!q)
return;
if (ftrace_set_filter(tr->ops, q, strlen(q), 0) < 0)
pr_err("Failed to add %s to ftrace filter\n", p);
else
ftrace_filter_param = true;
kfree(q);
}
xbc_node_for_each_array_value(node, "ftrace.notraces", anode, p) {
q = kstrdup(p, GFP_KERNEL);
if (!q)
return;
if (ftrace_set_notrace(tr->ops, q, strlen(q), 0) < 0)
pr_err("Failed to add %s to ftrace filter\n", p);
else
ftrace_filter_param = true;
kfree(q);
}
}
#else
#define trace_boot_set_ftrace_filter(tr, node) do {} while (0)
#endif
static void __init
trace_boot_enable_tracer(struct trace_array *tr, struct xbc_node *node)
{
const char *p;
trace_boot_set_ftrace_filter(tr, node);
p = xbc_node_find_value(node, "tracer", NULL);
if (p && *p != '\0') {
if (tracing_set_tracer(tr, p) < 0)
pr_err("Failed to set given tracer: %s\n", p);
}
}
static void __init
trace_boot_init_one_instance(struct trace_array *tr, struct xbc_node *node)
{
trace_boot_set_instance_options(tr, node);
trace_boot_init_events(tr, node);
trace_boot_enable_events(tr, node);
trace_boot_enable_tracer(tr, node);
}
static void __init
trace_boot_init_instances(struct xbc_node *node)
{
struct xbc_node *inode;
struct trace_array *tr;
const char *p;
node = xbc_node_find_child(node, "instance");
if (!node)
return;
xbc_node_for_each_child(node, inode) {
p = xbc_node_get_data(inode);
if (!p || *p == '\0')
continue;
tr = trace_array_get_by_name(p);
if (!tr) {
pr_err("Failed to get trace instance %s\n", p);
continue;
}
trace_boot_init_one_instance(tr, inode);
trace_array_put(tr);
}
}
static int __init trace_boot_init(void)
{
struct xbc_node *trace_node;
struct trace_array *tr;
trace_node = xbc_find_node("ftrace");
if (!trace_node)
return 0;
tr = top_trace_array();
if (!tr)
return 0;
/* Global trace array is also one instance */
trace_boot_init_one_instance(tr, trace_node);
trace_boot_init_instances(trace_node);
return 0;
}
fs_initcall(trace_boot_init);
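Putting the pieces together, trace_boot_init() only acts when the bootconfig tree has an "ftrace" root node. A sketch of a bootconfig fragment exercising the keys parsed above (names and values are illustrative):

	ftrace {
		options = "sym-addr"
		trace_clock = global
		buffer_size = 1MB
		event.kprobes.myevent {
			probes = "vfs_read $arg1 $arg2"
			enable
		}
		instance.foo.tracer = function
	}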
......@@ -32,10 +32,10 @@ probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
{
struct trace_event_call *call = &event_branch;
struct trace_array *tr = branch_tracer;
struct trace_buffer *buffer;
struct trace_array_cpu *data;
struct ring_buffer_event *event;
struct trace_branch *entry;
struct ring_buffer *buffer;
unsigned long flags;
int pc;
const char *p;
......@@ -55,12 +55,12 @@ probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
raw_local_irq_save(flags);
current->trace_recursion |= TRACE_BRANCH_BIT;
data = this_cpu_ptr(tr->trace_buffer.data);
data = this_cpu_ptr(tr->array_buffer.data);
if (atomic_read(&data->disabled))
goto out;
pc = preempt_count();
buffer = tr->trace_buffer.buffer;
buffer = tr->array_buffer.buffer;
event = trace_buffer_lock_reserve(buffer, TRACE_BRANCH,
sizeof(*entry), flags, pc);
if (!event)
......
......@@ -117,4 +117,36 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type);
#define for_each_dyn_event_safe(pos, n) \
list_for_each_entry_safe(pos, n, &dyn_event_list, list)
extern void dynevent_cmd_init(struct dynevent_cmd *cmd, char *buf, int maxlen,
enum dynevent_type type,
dynevent_create_fn_t run_command);
typedef int (*dynevent_check_arg_fn_t)(void *data);
struct dynevent_arg {
const char *str;
char separator; /* e.g. ';', ',', or nothing */
};
extern void dynevent_arg_init(struct dynevent_arg *arg,
char separator);
extern int dynevent_arg_add(struct dynevent_cmd *cmd,
struct dynevent_arg *arg,
dynevent_check_arg_fn_t check_arg);
struct dynevent_arg_pair {
const char *lhs;
const char *rhs;
char operator; /* e.g. '=' or nothing */
char separator; /* e.g. ';', ',', or nothing */
};
extern void dynevent_arg_pair_init(struct dynevent_arg_pair *arg_pair,
char operator, char separator);
extern int dynevent_arg_pair_add(struct dynevent_cmd *cmd,
struct dynevent_arg_pair *arg_pair,
dynevent_check_arg_fn_t check_arg);
extern int dynevent_str_add(struct dynevent_cmd *cmd, const char *str);
#endif
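A hedged sketch of how the arg-pair helpers compose; the cmd object and the field being added are illustrative, loosely following the synthetic-event generation code elsewhere in this merge:

	/* Append a "u64 lat;" field to an in-progress command string.
	 * 'cmd' is assumed to have been prepared with dynevent_cmd_init();
	 * passing NULL skips the optional per-arg check. Illustrative only. */
	struct dynevent_arg_pair pair;
	int ret;

	dynevent_arg_pair_init(&pair, ' ', ';');
	pair.lhs = "u64";
	pair.rhs = "lat";
	ret = dynevent_arg_pair_add(&cmd, &pair, NULL);
	if (ret)
		pr_err("Failed to append field\n");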
......@@ -164,7 +164,7 @@ FTRACE_ENTRY(kernel_stack, stack_entry,
F_STRUCT(
__field( int, size )
__dynamic_array(unsigned long, caller )
__array( unsigned long, caller, FTRACE_STACK_ENTRIES )
),
F_printk("\t=> %ps\n\t=> %ps\n\t=> %ps\n"
......
......@@ -116,9 +116,10 @@ static void *trigger_next(struct seq_file *m, void *t, loff_t *pos)
{
struct trace_event_file *event_file = event_file_data(m->private);
if (t == SHOW_AVAILABLE_TRIGGERS)
if (t == SHOW_AVAILABLE_TRIGGERS) {
(*pos)++;
return NULL;
}
return seq_list_next(t, &event_file->triggers, pos);
}
......@@ -213,7 +214,7 @@ static int event_trigger_regex_open(struct inode *inode, struct file *file)
return ret;
}
static int trigger_process_regex(struct trace_event_file *file, char *buff)
int trigger_process_regex(struct trace_event_file *file, char *buff)
{
char *command, *next = buff;
struct event_command *p;
......
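The trigger_next() change above, like the fpid_next() change in the ftrace.c diff earlier, fixes the same seq_file pitfall: a ->next() that returns NULL without advancing *pos can make a later read() re-emit the final record. The corrected shape, with hypothetical names:

	/* seq_file ->next(): consume the position even on the
	 * end-of-sequence path (hypothetical iterator). */
	static void *example_next(struct seq_file *m, void *v, loff_t *pos)
	{
		if (v == EXAMPLE_LAST_ENTRY) {
			(*pos)++;
			return NULL;
		}
		return example_get_entry(m->private, pos);
	}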
......@@ -101,7 +101,7 @@ static int function_trace_init(struct trace_array *tr)
ftrace_init_array_ops(tr, func);
tr->trace_buffer.cpu = get_cpu();
tr->array_buffer.cpu = get_cpu();
put_cpu();
tracing_start_cmdline_record();
......@@ -118,7 +118,7 @@ static void function_trace_reset(struct trace_array *tr)
static void function_trace_start(struct trace_array *tr)
{
tracing_reset_online_cpus(&tr->trace_buffer);
tracing_reset_online_cpus(&tr->array_buffer);
}
static void
......@@ -143,7 +143,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
goto out;
cpu = smp_processor_id();
data = per_cpu_ptr(tr->trace_buffer.data, cpu);
data = per_cpu_ptr(tr->array_buffer.data, cpu);
if (!atomic_read(&data->disabled)) {
local_save_flags(flags);
trace_function(tr, ip, parent_ip, flags, pc);
......@@ -192,7 +192,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
*/
local_irq_save(flags);
cpu = raw_smp_processor_id();
data = per_cpu_ptr(tr->trace_buffer.data, cpu);
data = per_cpu_ptr(tr->array_buffer.data, cpu);
disabled = atomic_inc_return(&data->disabled);
if (likely(disabled == 1)) {
......
......@@ -104,7 +104,7 @@ static void trace_hwlat_sample(struct hwlat_sample *sample)
{
struct trace_array *tr = hwlat_trace;
struct trace_event_call *call = &event_hwlat;
struct ring_buffer *buffer = tr->trace_buffer.buffer;
struct trace_buffer *buffer = tr->array_buffer.buffer;
struct ring_buffer_event *event;
struct hwlat_entry *entry;
unsigned long flags;
......
......@@ -538,7 +538,7 @@ lat_print_timestamp(struct trace_iterator *iter, u64 next_ts)
struct trace_array *tr = iter->tr;
unsigned long verbose = tr->trace_flags & TRACE_ITER_VERBOSE;
unsigned long in_ns = iter->iter_flags & TRACE_FILE_TIME_IN_NS;
unsigned long long abs_ts = iter->ts - iter->trace_buffer->time_start;
unsigned long long abs_ts = iter->ts - iter->array_buffer->time_start;
unsigned long long rel_ts = next_ts - iter->ts;
struct trace_seq *s = &iter->seq;
......
# Array must be comma separated.
key = "value1" "value2"
# Wrong boot config: comment only
key_word_is_too_long01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345