Commit 81a84ad3 authored by Linus Torvalds

Merge branch 'docs-next' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
 "After a fair amount of churn in the last couple of cycles, docs are
  taking it easier this time around. Lots of fixes and some new
  documentation, but nothing all that radical. Perhaps the most
  interesting change for many is the scripts/sphinx-pre-install tool
  from Mauro; it will tell you exactly which packages you need to
  install to get a working docs toolchain on your system.

  There are two little patches reaching outside of Documentation/; both
  just tweak kerneldoc comments to eliminate warnings and fix some
  dangling doc pointers"

* 'docs-next' of git://git.lwn.net/linux: (52 commits)
  Documentation/sphinx: fix kernel-doc decode for non-utf-8 locale
  genalloc: Fix an incorrect kerneldoc comment
  doc: Add documentation for the genalloc subsystem
  assoc_array: fix path to assoc_array documentation
  kernel-doc parser mishandles declarations split into lines
  docs: ReSTify table of contents in core.rst
  docs: process: drop git snapshots from applying-patches.rst
  Documentation:input: fix typo
  swap: Remove obsolete sentence
  sphinx.rst: Allow Sphinx version 1.6 at the docs
  docs-rst: fix verbatim font size on tables
  Documentation: stable-kernel-rules: fix broken git urls
  rtmutex: update rt-mutex
  rtmutex: update rt-mutex-design
  docs: fix minimal sphinx version in conf.py
  docs: fix nested numbering in the TOC
  NVMEM documentation fix: A minor typo
  docs-rst: pdf: use same vertical margin on all Sphinx versions
  doc: Makefile: if sphinx is not found, run a check script
  docs: Fix paths in security/keys
  ...
parents fe91f281 86c0f046
......@@ -22,6 +22,8 @@ ifeq ($(HAVE_SPHINX),0)
.DEFAULT:
$(warning The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed and in PATH, or set the SPHINXBUILD make variable to point to the full path of the '$(SPHINXBUILD)' executable.)
@echo
@./scripts/sphinx-pre-install
@echo " SKIP Sphinx $@ target."
else # HAVE_SPHINX
......@@ -95,16 +97,6 @@ endif # HAVE_SPHINX
# The following targets are independent of HAVE_SPHINX, and the rules should
# work or silently pass without Sphinx.
# no-ops for the Sphinx toolchain
sgmldocs:
@:
psdocs:
@:
mandocs:
@:
installmandocs:
@:
cleandocs:
$(Q)rm -rf $(BUILDDIR)
$(Q)$(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/media clean
......
......@@ -60,7 +60,7 @@ Example of using a firmware operation:
/* some platform code, e.g. SMP initialization */
__raw_writel(virt_to_phys(exynos4_secondary_startup),
__raw_writel(__pa_symbol(exynos4_secondary_startup),
CPU1_BOOT_REG);
/* Call Exynos specific smc call */
......
......@@ -29,7 +29,7 @@ from load_config import loadConfig
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.2'
needs_sphinx = '1.3'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
......@@ -344,8 +344,8 @@ if major == 1 and minor > 3:
if major == 1 and minor <= 4:
latex_elements['preamble'] += '\\usepackage[margin=0.5in, top=1in, bottom=1in]{geometry}'
elif major == 1 and (minor > 5 or (minor == 5 and patch >= 3)):
latex_elements['sphinxsetup'] = 'hmargin=0.5in, vmargin=0.5in'
latex_elements['sphinxsetup'] = 'hmargin=0.5in, vmargin=1in'
latex_elements['preamble'] += '\\fvset{fontsize=auto}\n'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
......
The genalloc/genpool subsystem
==============================
There are a number of memory-allocation subsystems in the kernel, each
aimed at a specific need. Sometimes, however, a kernel developer needs to
implement a new allocator for a specific range of special-purpose memory;
often that memory is located on a device somewhere. The author of the
driver for that device can certainly write a little allocator to get the
job done, but that is the way to fill the kernel with dozens of poorly
tested allocators. Back in 2005, Jes Sorensen lifted one of those
allocators from the sym53c8xx_2 driver and posted_ it as a generic module
for the creation of ad hoc memory allocators. This code was merged
for the 2.6.13 release; it has been modified considerably since then.
.. _posted: https://lwn.net/Articles/125842/
Code using this allocator should include <linux/genalloc.h>. The action
begins with the creation of a pool using one of:
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_create
.. kernel-doc:: lib/genalloc.c
:functions: devm_gen_pool_create
A call to :c:func:`gen_pool_create` will create a pool. The granularity of
allocations is set with min_alloc_order; it is a log-base-2 number like
those used by the page allocator, but it refers to bytes rather than pages.
So, if min_alloc_order is passed as 3, then all allocations will be a
multiple of eight bytes. Increasing min_alloc_order decreases the memory
required to track the memory in the pool. The nid parameter specifies
which NUMA node should be used for the allocation of the housekeeping
structures; it can be -1 if the caller doesn't care.
The "managed" interface :c:func:`devm_gen_pool_create` ties the pool to a
specific device. Among other things, it will automatically clean up the
pool when the given device is destroyed.
A pool is shut down with:
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_destroy
It's worth noting that, if there are still allocations outstanding from the
given pool, this function will take the rather extreme step of invoking
BUG(), crashing the entire system. You have been warned.
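As a hedged illustration of the create/destroy lifecycle described above (the
function names are the real genalloc API, but ``foo_pool`` and the error
handling are only a sketch, not taken from any driver)::

    #include <linux/genalloc.h>

    static struct gen_pool *foo_pool;	/* hypothetical driver-private pool */

    static int foo_pool_init(void)
    {
            /* 8-byte granularity (order 3), no NUMA preference (-1) */
            foo_pool = gen_pool_create(3, -1);
            if (!foo_pool)
                    return -ENOMEM;
            return 0;
    }

    static void foo_pool_exit(void)
    {
            /* Every allocation must have been freed, or this will BUG() */
            gen_pool_destroy(foo_pool);
    }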
A freshly created pool has no memory to allocate. It is fairly useless in
that state, so one of the first orders of business is usually to add memory
to the pool. That can be done with one of:
.. kernel-doc:: include/linux/genalloc.h
:functions: gen_pool_add
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_add_virt
A call to :c:func:`gen_pool_add` will place the size bytes of memory
starting at addr (in the kernel's virtual address space) into the given
pool, once again using nid as the node ID for ancillary memory allocations.
The :c:func:`gen_pool_add_virt` variant associates an explicit physical
address with the memory; this is only necessary if the pool will be used
for DMA allocations.
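Continuing the hedged sketch from above, adding memory might look like this;
``virt``, ``phys`` and ``buffer`` stand for addresses the driver obtained
elsewhere (for example from ioremap), and the sizes are arbitrary::

    static int foo_pool_add_mem(void __iomem *virt, phys_addr_t phys,
                                void *buffer)
    {
            int ret;

            /* Device SRAM that may later be used for DMA: record both addresses. */
            ret = gen_pool_add_virt(foo_pool, (unsigned long)virt, phys,
                                    SZ_64K, -1);
            if (ret)
                    return ret;

            /* Plain kernel memory that will never be handed to DMA. */
            return gen_pool_add(foo_pool, (unsigned long)buffer, SZ_4K, -1);
    }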
The functions for allocating memory from the pool (and putting it back)
are:
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_alloc
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_dma_alloc
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_free
As one would expect, :c:func:`gen_pool_alloc` will allocate size bytes
from the given pool. The :c:func:`gen_pool_dma_alloc` variant allocates
memory for use with DMA operations, returning the associated physical
address in the space pointed to by dma. This will only work if the memory
was added with :c:func:`gen_pool_add_virt`. Note that this function
departs from the usual genpool pattern of using unsigned long values to
represent kernel addresses; it returns a void * instead.
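A hedged sketch of the allocation calls, continuing the example pool from
above (the 256-byte size is arbitrary)::

    static int foo_use_pool(void)
    {
            unsigned long vaddr;
            dma_addr_t dma;
            void *cpu_addr;

            /* Plain allocation: returns a kernel virtual address, 0 on failure. */
            vaddr = gen_pool_alloc(foo_pool, 256);
            if (!vaddr)
                    return -ENOMEM;

            /*
             * DMA-style allocation: returns a void * and fills in the physical
             * address; only valid if the chunk was added with gen_pool_add_virt().
             */
            cpu_addr = gen_pool_dma_alloc(foo_pool, 256, &dma);

            /* The caller must remember the size in order to give memory back. */
            gen_pool_free(foo_pool, vaddr, 256);
            if (cpu_addr)
                    gen_pool_free(foo_pool, (unsigned long)cpu_addr, 256);
            return 0;
    }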
That all seems relatively simple; indeed, some developers clearly found it
to be too simple. After all, the interface above provides no control over
how the allocation functions choose which specific piece of memory to
return. If that sort of control is needed, the following functions will be
of interest:
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_alloc_algo
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_set_algo
Allocations with :c:func:`gen_pool_alloc_algo` specify an algorithm to be
used to choose the memory to be allocated; the default algorithm can be set
with :c:func:`gen_pool_set_algo`. The data value is passed to the
algorithm; most ignore it, but it is occasionally needed. One can,
naturally, write a special-purpose algorithm, but there is a fair set
already available:
- gen_pool_first_fit is a simple first-fit allocator; this is the default
algorithm if none other has been specified.
- gen_pool_first_fit_align forces the allocation to have a specific
alignment (passed via data in a genpool_data_align structure).
- gen_pool_first_fit_order_align aligns the allocation to the order of the
size. A 60-byte allocation will thus be 64-byte aligned, for example.
- gen_pool_best_fit, as one would expect, is a simple best-fit allocator.
- gen_pool_fixed_alloc allocates at a specific offset (passed in a
genpool_data_fixed structure via the data parameter) within the pool.
If the indicated memory is not available the allocation fails.
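Building on the list above, a hedged example of requesting an aligned
allocation and of changing the pool's default algorithm; the 64-byte
alignment and 60-byte size are arbitrary::

    struct genpool_data_align align_data = { .align = 64 };
    unsigned long addr;

    /* Ask for a 60-byte region that starts on a 64-byte boundary. */
    addr = gen_pool_alloc_algo(foo_pool, 60,
                               gen_pool_first_fit_align, &align_data);

    /* Or make aligned first-fit the default for later gen_pool_alloc() calls. */
    gen_pool_set_algo(foo_pool, gen_pool_first_fit_align, &align_data);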
There is a handful of other functions, mostly for purposes like querying
the space available in the pool or iterating through chunks of memory.
Most users, however, should not need much beyond what has been described
above. With luck, wider awareness of this module will help to prevent the
writing of special-purpose memory allocators in the future.
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_virt_to_phys
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_for_each_chunk
.. kernel-doc:: lib/genalloc.c
:functions: addr_in_gen_pool
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_avail
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_size
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_get
.. kernel-doc:: lib/genalloc.c
:functions: of_gen_pool_get
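As a small, hedged example of the querying helpers listed above; ``vaddr`` is
assumed to be an address previously returned by gen_pool_alloc(), and the
pr_info() message is only illustrative::

    /* How much of the pool is still free, and how big it is in total. */
    pr_info("pool: %zu of %zu bytes available\n",
            gen_pool_avail(foo_pool), gen_pool_size(foo_pool));

    /* Translate an address handed out by the pool back to a physical one. */
    phys_addr_t phys = gen_pool_virt_to_phys(foo_pool, vaddr);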
......@@ -20,6 +20,7 @@ Core utilities
genericirq
flexible-arrays
librs
genalloc
Interfaces for kernel debugging
===============================
......
......@@ -31,11 +31,13 @@ Setup
CONFIG_DEBUG_INFO_REDUCED off. If your architecture supports
CONFIG_FRAME_POINTER, keep it enabled.
- Install that kernel on the guest.
- Install that kernel on the guest and turn off KASLR if necessary by adding
"nokaslr" to the kernel command line.
Alternatively, QEMU allows to boot the kernel directly using -kernel,
-append, -initrd command line switches. This is generally only useful if
you do not depend on modules. See QEMU documentation for more details on
this mode.
this mode. In this case, you should build the kernel with
CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
- Enable the gdb stub of QEMU/KVM, either
......
......@@ -348,6 +348,15 @@ default behavior is always set to 0.
- ``echo 1 > /sys/module/debug_core/parameters/kgdbreboot``
- Enter the debugger on reboot notify.
Kernel parameter: ``nokaslr``
-----------------------------
If the architecture that you are using enables KASLR by default,
you should consider turning it off. KASLR randomizes the
virtual address at which the kernel image is mapped and confuses
gdb, which resolves kernel symbol addresses from the symbol table
of vmlinux.
Using kdb
=========
......@@ -358,7 +367,7 @@ This is a quick example of how to use kdb.
1. Configure kgdboc at boot using kernel parameters::
console=ttyS0,115200 kgdboc=ttyS0,115200
console=ttyS0,115200 kgdboc=ttyS0,115200 nokaslr
OR
......
......@@ -19,6 +19,110 @@ Finally, there are thousands of plain text documentation files scattered around
``Documentation``. Some of these will likely be converted to reStructuredText
over time, but the bulk of them will remain in plain text.
.. _sphinx_install:
Sphinx Install
==============
The ReST markups currently used by the Documentation/ files are meant to be
built with ``Sphinx`` version 1.3 or higher. If you want to build
PDF output, it is recommended to use version 1.4.6 or higher.
There's a script that checks for the Sphinx requirements. Please see
:ref:`sphinx-pre-install` for further details.
Most distributions ship with Sphinx, but its toolchain is fragile,
and it is not uncommon that upgrading it or some other Python package
on your machine causes the documentation build to break.
A way to avoid that is to use a version different from the one shipped
with your distribution. To do so, it is recommended to install
Sphinx inside a virtual environment, using ``virtualenv-3``
or ``virtualenv``, depending on how your distribution packages Python 3.
.. note::
#) Sphinx versions below 1.5 don't work properly with Python's
docutils version 0.13.1 or higher. So, if you want to use those
Sphinx versions, you should run ``pip install 'docutils==0.12'``.
#) It is recommended to use the RTD theme for html output. Depending
on the Sphinx version, it may need to be installed separately,
with ``pip install sphinx_rtd_theme``.
#) Some ReST pages contain math expressions. Due to the way Sphinx works,
those expressions are written using LaTeX notation. They need texlive
installed with amsfonts and amsmath in order to be evaluated.
In summary, if you want to install Sphinx version 1.4.9, you should do::
$ virtualenv sphinx_1.4
$ . sphinx_1.4/bin/activate
(sphinx_1.4) $ pip install -r Documentation/sphinx/requirements.txt
After running ``. sphinx_1.4/bin/activate``, the prompt will change
to indicate that you're using the new environment. If you
open a new shell, you need to rerun this command to enter the
virtual environment again before building the documentation.
Image output
------------
The kernel documentation build system contains an extension that
handles images in both GraphViz and SVG formats (see
:ref:`sphinx_kfigure`).
For it to work, you need to install both the GraphViz and ImageMagick
packages. If those packages are not installed, the build system will
still build the documentation, but won't include any images in the
output.
PDF and LaTeX builds
--------------------
Such builds are currently supported only with Sphinx versions 1.4 and higher.
For PDF and LaTeX output, you'll also need ``XeLaTeX`` version 3.14159265.
Depending on the distribution, you may also need to install a series of
``texlive`` packages that provide the minimal set of functionalities
required for ``XeLaTeX`` to work.
.. _sphinx-pre-install:
Checking for Sphinx dependencies
--------------------------------
There's a script that automatically checks for Sphinx dependencies. If it can
recognize your distribution, it will also give a hint about the install
command line for your distro::
$ ./scripts/sphinx-pre-install
Checking if the needed tools for Fedora release 26 (Twenty Six) are available
Warning: better to also install "texlive-luatex85".
You should run:
sudo dnf install -y texlive-luatex85
/usr/bin/virtualenv sphinx_1.4
. sphinx_1.4/bin/activate
pip install -r Documentation/sphinx/requirements.txt
Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.
By default, it checks all the requirements for both html and PDF, including
the requirements for images, math expressions and LaTeX build, and assumes
that a virtual Python environment will be used. The ones needed for html
builds are assumed to be mandatory; the others to be optional.
It supports two optional parameters:
``--no-pdf``
Disable checks for PDF;
``--no-virtualenv``
Use OS packaging for Sphinx instead of Python virtual environment.
Sphinx Build
============
......@@ -118,7 +222,7 @@ Here are some specific guidelines for the kernel documentation:
the C domain
------------
The `Sphinx C Domain`_ (name c) is suited for documentation of C API. E.g. a
The **Sphinx C Domain** (name c) is suited for documentation of C API. E.g. a
function prototype:
.. code-block:: rst
......@@ -229,6 +333,7 @@ Rendered as:
- column 3
.. _sphinx_kfigure:
Figures & Images
================
......
......@@ -4,7 +4,7 @@ Driver Basics
Driver Entry and Exit points
----------------------------
.. kernel-doc:: include/linux/init.h
.. kernel-doc:: include/linux/module.h
:internal:
Driver device table
......@@ -103,9 +103,6 @@ Kernel utility functions
.. kernel-doc:: kernel/panic.c
:export:
.. kernel-doc:: kernel/sys.c
:export:
.. kernel-doc:: kernel/rcu/tree.c
:export:
......
......@@ -139,9 +139,6 @@ DMA Fences
Seqno Hardware Fences
~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/seqno-fence.c
:export:
.. kernel-doc:: include/linux/seqno-fence.h
:internal:
......
......@@ -47,4 +47,3 @@ used by one consumer at a time.
.. kernel-doc:: drivers/pwm/core.c
:export:
......@@ -75,7 +75,7 @@ The channel-measurement facility provides a means to collect measurement
data which is made available by the channel subsystem for each channel
attached device.
.. kernel-doc:: arch/s390/include/asm/cmb.h
.. kernel-doc:: arch/s390/include/uapi/asm/cmb.h
:internal:
.. kernel-doc:: drivers/s390/cio/cmf.c
......
......@@ -224,14 +224,6 @@ mid to lowlevel SCSI driver interface
.. kernel-doc:: drivers/scsi/hosts.c
:export:
drivers/scsi/constants.c
~~~~~~~~~~~~~~~~~~~~~~~~
mid to lowlevel SCSI driver interface
.. kernel-doc:: drivers/scsi/constants.c
:export:
Transport classes
-----------------
......
......@@ -25,7 +25,7 @@
| mn10300: | ok |
| nios2: | ok |
| openrisc: | ok |
| parisc: | TODO |
| parisc: | ok |
| powerpc: | ok |
| s390: | ok |
| score: | TODO |
......
......@@ -829,9 +829,7 @@ struct address_space_operations {
swap_activate: Called when swapon is used on a file to allocate
space if necessary and pin the block lookup information in
memory. A return value of zero indicates success,
in which case this file can be used to back swapspace. The
swapspace operations will be proxied to this address space's
->swap_{out,in} methods.
in which case this file can be used to back swapspace.
swap_deactivate: Called during swapoff on files where swap_activate
was successful.
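To make the callback shapes concrete, here is a hypothetical sketch of a
filesystem wiring up these two methods; the signatures are assumed from
include/linux/fs.h, and ``foofs`` as well as the use of
generic_swapfile_activate() are illustrative only::

    static int foofs_swap_activate(struct swap_info_struct *sis,
                                   struct file *file, sector_t *span)
    {
            /*
             * Pin the block lookup information for @file; returning 0 means
             * the file may be used to back swap space.
             */
            return generic_swapfile_activate(sis, file, span);
    }

    static void foofs_swap_deactivate(struct file *file)
    {
            /* Release whatever swap_activate pinned. */
    }

    static const struct address_space_operations foofs_aops = {
            /* ... read/write methods elided ... */
            .swap_activate   = foofs_swap_activate,
            .swap_deactivate = foofs_swap_deactivate,
    };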
......
......@@ -109,7 +109,7 @@ evdev nodes are created with minors starting with 256.
keyboard
~~~~~~~~
``keyboard`` is in-kernel input handler ad is a part of VT code. It
``keyboard`` is in-kernel input handler and is a part of VT code. It
consumes keyboard keystrokes and handles user input for VT consoles.
mousedev
......
......@@ -12,7 +12,6 @@ Linux Joystick support
.. toctree::
:maxdepth: 3
:numbered:
joystick
joystick-api
......@@ -297,9 +297,9 @@ more details, with real examples.
ccflags-y specifies options for compiling with $(CC).
Example:
# drivers/acpi/Makefile
ccflags-y := -Os
ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
# drivers/acpi/acpica/Makefile
ccflags-y := -Os -D_LINUX -DBUILDING_ACPICA
ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
This variable is necessary because the top Makefile owns the
variable $(KBUILD_CFLAGS) and uses it for compilation flags for the
......
......@@ -97,9 +97,9 @@ waiter - A waiter is a struct that is stored on the stack of a blocked
a process being blocked on the mutex, it is fine to allocate
the waiter on the process's stack (local variable). This
structure holds a pointer to the task, as well as the mutex that
the task is blocked on. It also has the plist node structures to
place the task in the waiter_list of a mutex as well as the
pi_list of a mutex owner task (described below).
the task is blocked on. It also has rbtree node structures to
place the task in the waiters rbtree of a mutex as well as the
pi_waiters rbtree of a mutex owner task (described below).
waiter is sometimes used in reference to the task that is waiting
on a mutex. This is the same as waiter->task.
......@@ -179,53 +179,34 @@ again.
|
F->L5-+
If process G has the highest priority in the chain, then all the tasks up
the chain (A and B in this example), must have their priorities increased
to that of G.
Plist
-----
Before I go further and talk about how the PI chain is stored through lists
on both mutexes and processes, I'll explain the plist. This is similar to
the struct list_head functionality that is already in the kernel.
The implementation of plist is out of scope for this document, but it is
very important to understand what it does.
There are a few differences between plist and list, the most important one
being that plist is a priority sorted linked list. This means that the
priorities of the plist are sorted, such that it takes O(1) to retrieve the
highest priority item in the list. Obviously this is useful to store processes
based on their priorities.
Another difference, which is important for implementation, is that, unlike
list, the head of the list is a different element than the nodes of a list.
So the head of the list is declared as struct plist_head and nodes that will
be added to the list are declared as struct plist_node.
Mutex Waiter List
Mutex Waiters Tree
-----------------
Every mutex keeps track of all the waiters that are blocked on itself. The mutex
has a plist to store these waiters by priority. This list is protected by
a spin lock that is located in the struct of the mutex. This lock is called
wait_lock. Since the modification of the waiter list is never done in
interrupt context, the wait_lock can be taken without disabling interrupts.
Every mutex keeps track of all the waiters that are blocked on itself. The
mutex has a rbtree to store these waiters by priority. This tree is protected
by a spin lock that is located in the struct of the mutex. This lock is called
wait_lock.
Task PI List
Task PI Tree
------------
To keep track of the PI chains, each process has its own PI list. This is
a list of all top waiters of the mutexes that are owned by the process.
Note that this list only holds the top waiters and not all waiters that are
To keep track of the PI chains, each process has its own PI rbtree. This is
a tree of all top waiters of the mutexes that are owned by the process.
Note that this tree only holds the top waiters and not all waiters that are
blocked on mutexes owned by the process.
The top of the task's PI list is always the highest priority task that
The top of the task's PI tree is always the highest priority task that
is waiting on a mutex that is owned by the task. So if the task has
inherited a priority, it will always be the priority of the task that is
at the top of this list.
at the top of this tree.
This list is stored in the task structure of a process as a plist called
pi_list. This list is protected by a spin lock also in the task structure,
This tree is stored in the task structure of a process as a rbtree called
pi_waiters. It is protected by a spin lock also in the task structure,
called pi_lock. This lock may also be taken in interrupt context, so when
locking the pi_lock, interrupts must be disabled.
......@@ -312,15 +293,12 @@ Mutex owner and flags
The mutex structure contains a pointer to the owner of the mutex. If the
mutex is not owned, this owner is set to NULL. Since all architectures
have the task structure on at least a four byte alignment (and if this is
not true, the rtmutex.c code will be broken!), this allows for the two
least significant bits to be used as flags. This part is also described
in Documentation/rt-mutex.txt, but will also be briefly described here.
Bit 0 is used as the "Pending Owner" flag. This is described later.
Bit 1 is used as the "Has Waiters" flags. This is also described later
in more detail, but is set whenever there are waiters on a mutex.
have the task structure on at least a two byte alignment (and if this is
not true, the rtmutex.c code will be broken!), this allows for the least
significant bit to be used as a flag. Bit 0 is used as the "Has Waiters"
flag. It's set whenever there are waiters on a mutex.
See Documentation/locking/rt-mutex.txt for further details.
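To make the bit trick concrete, here is an illustrative sketch of how the
owner pointer and the flag can be packed and unpacked; the real helpers live
in kernel/locking/rtmutex_common.h, and the function names used here are made
up::

    #define RT_MUTEX_HAS_WAITERS	1UL

    /* Strip bit 0 to recover the owning task (illustration only). */
    static struct task_struct *owner_of(struct rt_mutex *lock)
    {
            unsigned long val = (unsigned long)READ_ONCE(lock->owner);

            return (struct task_struct *)(val & ~RT_MUTEX_HAS_WAITERS);
    }

    /* Bit 0 set means at least one task is blocked on the lock. */
    static bool lock_has_waiters(struct rt_mutex *lock)
    {
            return (unsigned long)READ_ONCE(lock->owner) & RT_MUTEX_HAS_WAITERS;
    }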
cmpxchg Tricks
--------------
......@@ -359,40 +337,31 @@ Priority adjustments
--------------------
The implementation of the PI code in rtmutex.c has several places that a
process must adjust its priority. With the help of the pi_list of a
process must adjust its priority. With the help of the pi_waiters of a
process this is rather easy to know what needs to be adjusted.
The functions implementing the task adjustments are rt_mutex_adjust_prio,
__rt_mutex_adjust_prio (same as the former, but expects the task pi_lock
to already be taken), rt_mutex_getprio, and rt_mutex_setprio.
The functions implementing the task adjustments are rt_mutex_adjust_prio
and rt_mutex_setprio. rt_mutex_setprio is only used in rt_mutex_adjust_prio.
rt_mutex_getprio and rt_mutex_setprio are only used in __rt_mutex_adjust_prio.
rt_mutex_adjust_prio examines the priority of the task, and the highest
priority process that is waiting on any of the mutexes owned by the task. Since
the pi_waiters tree of a task holds, ordered by priority, all the top waiters
of all the mutexes that the task owns, we simply need to compare the top
pi waiter to its own normal/deadline priority and take the higher one.
Then rt_mutex_setprio is called to adjust the priority of the task to the
new priority. Note that rt_mutex_setprio is defined in kernel/sched/core.c
to implement the actual change in priority.
rt_mutex_getprio returns the priority that the task should have. Either the
task's own normal priority, or if a process of a higher priority is waiting on
a mutex owned by the task, then that higher priority should be returned.
Since the pi_list of a task holds an order by priority list of all the top
waiters of all the mutexes that the task owns, rt_mutex_getprio simply needs
to compare the top pi waiter to its own normal priority, and return the higher
priority back.
(Note: For the "prio" field in task_struct, the lower the number, the
higher the priority. A "prio" of 5 is of higher priority than a
"prio" of 10.)
(Note: if looking at the code, you will notice that the lower number of
prio is returned. This is because the prio field in the task structure
is an inverse order of the actual priority. So a "prio" of 5 is
of higher priority than a "prio" of 10.)
__rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and if the
result does not equal the task's current priority, then rt_mutex_setprio
is called to adjust the priority of the task to the new priority.
Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
actual change in priority.
It is interesting to note that __rt_mutex_adjust_prio can either increase
It is interesting to note that rt_mutex_adjust_prio can either increase
or decrease the priority of the task. In the case that a higher priority
process has just blocked on a mutex owned by the task, __rt_mutex_adjust_prio
process has just blocked on a mutex owned by the task, rt_mutex_adjust_prio
would increase/boost the task's priority. But if a higher priority task
were for some reason to leave the mutex (timeout or signal), this same function
would decrease/unboost the priority of the task. That is because the pi_list
would decrease/unboost the priority of the task. That is because the pi_waiters
always contains the highest priority task that is waiting on a mutex owned
by the task, so we only need to compare the priority of that top pi waiter
to the normal priority of the given task.
......@@ -412,9 +381,10 @@ priorities.
rt_mutex_adjust_prio_chain is called with a task to be checked for PI
(de)boosting (the owner of a mutex that a process is blocking on), a flag to
check for deadlocking, the mutex that the task owns, and a pointer to a waiter
check for deadlocking, the mutex that the task owns, a pointer to a waiter
that is the process's waiter struct that is blocked on the mutex (although this
parameter may be NULL for deboosting).
parameter may be NULL for deboosting), a pointer to the mutex on which the task
is blocked, and a top_task as the top waiter of the mutex.
For this explanation, I will not mention deadlock detection. This explanation
will try to stay at a high level.
......@@ -424,133 +394,14 @@ that the state of the owner and lock can change when entered into this function.
Before this function is called, the task has already had rt_mutex_adjust_prio
performed on it. This means that the task is set to the priority that it
should be at, but the plist nodes of the task's waiter have not been updated
with the new priorities, and that this task may not be in the proper locations
in the pi_lists and wait_lists that the task is blocked on. This function
should be at, but the rbtree nodes of the task's waiter have not been updated
with the new priorities, and this task may not be in the proper locations
in the pi_waiters and waiters trees that the task is blocked on. This function
solves all that.
A loop is entered, where task is the owner to be checked for PI changes that
was passed by parameter (for the first iteration). The pi_lock of this task is
taken to prevent any more changes to the pi_list of the task. This also
prevents new tasks from completing the blocking on a mutex that is owned by this
task.
If the task is not blocked on a mutex then the loop is exited. We are at
the top of the PI chain.
A check is now done to see if the original waiter (the process that is blocked
on the current mutex) is the top pi waiter of the task. That is, is this
waiter on the top of the task's pi_list. If it is not, it either means that
there is another process higher in priority that is blocked on one of the
mutexes that the task owns, or that the waiter has just woken up via a signal
or timeout and has left the PI chain. In either case, the loop is exited, since
we don't need to do any more changes to the priority of the current task, or any
task that owns a mutex that this current task is waiting on. A priority chain
walk is only needed when a new top pi waiter is made to a task.
The next check sees if the task's waiter plist node has the priority equal to
the priority the task is set at. If they are equal, then we are done with
the loop. Remember that the function started with the priority of the
task adjusted, but the plist nodes that hold the task in other processes
pi_lists have not been adjusted.
Next, we look at the mutex that the task is blocked on. The mutex's wait_lock
is taken. This is done by a spin_trylock, because the locking order of the
pi_lock and wait_lock goes in the opposite direction. If we fail to grab the
lock, the pi_lock is released, and we restart the loop.
Now that we have both the pi_lock of the task as well as the wait_lock of
the mutex the task is blocked on, we update the task's waiter's plist node
that is located on the mutex's wait_list.
Now we release the pi_lock of the task.
Next the owner of the mutex has its pi_lock taken, so we can update the
task's entry in the owner's pi_list. If the task is the highest priority
process on the mutex's wait_list, then we remove the previous top waiter
from the owner's pi_list, and replace it with the task.
Note: It is possible that the task was the current top waiter on the mutex,
in which case the task is not yet on the pi_list of the waiter. This
is OK, since plist_del does nothing if the plist node is not on any
list.
If the task was not the top waiter of the mutex, but it was before we
did the priority updates, that means we are deboosting/lowering the
task. In this case, the task is removed from the pi_list of the owner,
and the new top waiter is added.
Lastly, we unlock both the pi_lock of the task, as well as the mutex's
wait_lock, and continue the loop again. On the next iteration of the
loop, the previous owner of the mutex will be the task that will be
processed.
Note: One might think that the owner of this mutex might have changed
since we just grab the mutex's wait_lock. And one could be right.
The important thing to remember is that the owner could not have
become the task that is being processed in the PI chain, since
we have taken that task's pi_lock at the beginning of the loop.
So as long as there is an owner of this mutex that is not the same
process as the tasked being worked on, we are OK.
Looking closely at the code, one might be confused. The check for the
end of the PI chain is when the task isn't blocked on anything or the
task's waiter structure "task" element is NULL. This check is
protected only by the task's pi_lock. But the code to unlock the mutex
sets the task's waiter structure "task" element to NULL with only
the protection of the mutex's wait_lock, which was not taken yet.
Isn't this a race condition if the task becomes the new owner?
The answer is No! The trick is the spin_trylock of the mutex's
wait_lock. If we fail that lock, we release the pi_lock of the
task and continue the loop, doing the end of PI chain check again.
In the code to release the lock, the wait_lock of the mutex is held
the entire time, and it is not let go when we grab the pi_lock of the
new owner of the mutex. So if the switch of a new owner were to happen
after the check for end of the PI chain and the grabbing of the
wait_lock, the unlocking code would spin on the new owner's pi_lock
but never give up the wait_lock. So the PI chain loop is guaranteed to
fail the spin_trylock on the wait_lock, release the pi_lock, and
try again.
If you don't quite understand the above, that's OK. You don't have to,
unless you really want to make a proof out of it ;)
Pending Owners and Lock stealing
--------------------------------
One of the flags in the owner field of the mutex structure is "Pending Owner".
What this means is that an owner was chosen by the process releasing the
mutex, but that owner has yet to wake up and actually take the mutex.
Why is this important? Why can't we just give the mutex to another process
and be done with it?
The PI code is to help with real-time processes, and to let the highest
priority process run as long as possible with little latencies and delays.
If a high priority process owns a mutex that a lower priority process is
blocked on, when the mutex is released it would be given to the lower priority
process. What if the higher priority process wants to take that mutex again.
The high priority process would fail to take that mutex that it just gave up
and it would need to boost the lower priority process to run with full
latency of that critical section (since the low priority process just entered
it).
There's no reason a high priority process that gives up a mutex should be
penalized if it tries to take that mutex again. If the new owner of the
mutex has not woken up yet, there's no reason that the higher priority process
could not take that mutex away.
To solve this, we introduced Pending Ownership and Lock Stealing. When a
new process is given a mutex that it was blocked on, it is only given
pending ownership. This means that it's the new owner, unless a higher
priority process comes in and tries to grab that mutex. If a higher priority
process does come along and wants that mutex, we let the higher priority
process "steal" the mutex from the pending owner (only if it is still pending)
and continue with the mutex.
The main operation of this function is summarized by Thomas Gleixner in
rtmutex.c. See the 'Chain walk basics and protection scope' comment for further
details.
Taking of a mutex (The walk through)
------------------------------------
......@@ -563,14 +414,14 @@ done when we have CMPXCHG enabled (otherwise the fast taking automatically
fails). Only when the owner field of the mutex is NULL can the lock be
taken with the CMPXCHG and nothing else needs to be done.
If there is contention on the lock, whether it is owned or pending owner
we go about the slow path (rt_mutex_slowlock).
If there is contention on the lock, we go about the slow path
(rt_mutex_slowlock).
The slow path function is where the task's waiter structure is created on
the stack. This is because the waiter structure is only needed for the
scope of this function. The waiter structure holds the nodes to store
the task on the wait_list of the mutex, and if need be, the pi_list of
the owner.
the task on the waiters tree of the mutex, and if need be, the pi_waiters
tree of the owner.
The wait_lock of the mutex is taken since the slow path of unlocking the
mutex also takes this lock.
......@@ -581,102 +432,45 @@ contention).
try_to_take_rt_mutex is used every time the task tries to grab a mutex in the
slow path. The first thing that is done here is an atomic setting of
the "Has Waiters" flag of the mutex's owner field. Yes, this could really
be false, because if the mutex has no owner, there are no waiters and
the current task also won't have any waiters. But we don't have the lock
yet, so we assume we are going to be a waiter. The reason for this is to
play nice for those architectures that do have CMPXCHG. By setting this flag
now, the owner of the mutex can't release the mutex without going into the
slow unlock path, and it would then need to grab the wait_lock, which this
code currently holds. So setting the "Has Waiters" flag forces the owner
to synchronize with this code.
Now that we know that we can't have any races with the owner releasing the
mutex, we check to see if we can take the ownership. This is done if the
mutex doesn't have a owner, or if we can steal the mutex from a pending
owner. Let's look at the situations we have here.
1) Has owner that is pending
----------------------------
The mutex has a owner, but it hasn't woken up and the mutex flag
"Pending Owner" is set. The first check is to see if the owner isn't the
current task. This is because this function is also used for the pending
owner to grab the mutex. When a pending owner wakes up, it checks to see
if it can take the mutex, and this is done if the owner is already set to
itself. If so, we succeed and leave the function, clearing the "Pending
Owner" bit.
If the pending owner is not current, we check to see if the current priority is
higher than the pending owner. If not, we fail the function and return.
There's also something special about a pending owner. That is a pending owner
is never blocked on a mutex. So there is no PI chain to worry about. It also
means that if the mutex doesn't have any waiters, there's no accounting needed
to update the pending owner's pi_list, since we only worry about processes
blocked on the current mutex.
If there are waiters on this mutex, and we just stole the ownership, we need
to take the top waiter, remove it from the pi_list of the pending owner, and
add it to the current pi_list. Note that at this moment, the pending owner
is no longer on the list of waiters. This is fine, since the pending owner
would add itself back when it realizes that it had the ownership stolen
from itself. When the pending owner tries to grab the mutex, it will fail
in try_to_take_rt_mutex if the owner field points to another process.
2) No owner
-----------
If there is no owner (or we successfully stole the lock), we set the owner
of the mutex to current, and set the flag of "Has Waiters" if the current
mutex actually has waiters, or we clear the flag if it doesn't. See, it was
OK that we set that flag early, since now it is cleared.
3) Failed to grab ownership
---------------------------
The most interesting case is when we fail to take ownership. This means that
there exists an owner, or there's a pending owner with equal or higher
priority than the current task.
We'll continue on the failed case.
If the mutex has a timeout, we set up a timer to go off to break us out
of this mutex if we failed to get it after a specified amount of time.
Now we enter a loop that will continue to try to take ownership of the mutex, or
fail from a timeout or signal.
Once again we try to take the mutex. This will usually fail the first time
in the loop, since it had just failed to get the mutex. But the second time
in the loop, this would likely succeed, since the task would likely be
the pending owner.
If the mutex is TASK_INTERRUPTIBLE a check for signals and timeout is done
here.
The waiter structure has a "task" field that points to the task that is blocked
on the mutex. This field can be NULL the first time it goes through the loop
or if the task is a pending owner and had its mutex stolen. If the "task"
field is NULL then we need to set up the accounting for it.
the "Has Waiters" flag of the mutex's owner field. By setting this flag
now, the current owner of the mutex being contended for can't release the mutex
without going into the slow unlock path, and it would then need to grab the
wait_lock, which this code currently holds. So setting the "Has Waiters" flag
forces the current owner to synchronize with this code.
The lock is taken if the following are true:
1) The lock has no owner
2) The current task is the highest priority against all other
waiters of the lock
If the task succeeds in acquiring the lock, it is set as the
owner of the lock, and if the lock still has waiters, the top_waiter
(highest priority task waiting on the lock) is added to this task's
pi_waiters tree.
If the lock is not taken by try_to_take_rt_mutex(), then the
task_blocks_on_rt_mutex() function is called. This will add the task to
the lock's waiter tree and propagate the pi chain of the lock as well
as the lock's owner's pi_waiters tree. This is described in the next
section.
Task blocks on mutex
--------------------
The accounting of a mutex and process is done with the waiter structure of
the process. The "task" field is set to the process, and the "lock" field
to the mutex. The plist nodes are initialized to the processes current
priority.
to the mutex. The waiter's rbtree node is initialized to the process's
current priority.
Since the wait_lock was taken at the entry of the slow lock, we can safely
add the waiter to the wait_list. If the current process is the highest
priority process currently waiting on this mutex, then we remove the
previous top waiter process (if it exists) from the pi_list of the owner,
and add the current process to that list. Since the pi_list of the owner
add the waiter to the task waiter tree. If the current process is the
highest priority process currently waiting on this mutex, then we remove the
previous top waiter process (if it exists) from the pi_waiters of the owner,
and add the current process to that tree. Since the pi_waiter of the owner
has changed, we call rt_mutex_adjust_prio on the owner to see if the owner
should adjust its priority accordingly.
If the owner is also blocked on a lock, and had its pi_list changed
If the owner is also blocked on a lock, and had its pi_waiters changed
(or deadlock checking is on), we unlock the wait_lock of the mutex and go ahead
and run rt_mutex_adjust_prio_chain on the owner, as described earlier.
......@@ -686,30 +480,23 @@ mutex (waiter "task" field is not NULL), then we go to sleep (call schedule).
Waking up in the loop
---------------------
The schedule can then wake up for a few reasons.
1) we were given pending ownership of the mutex.
2) we received a signal and was TASK_INTERRUPTIBLE
3) we had a timeout and was TASK_INTERRUPTIBLE
The task can then wake up for a couple of reasons:
1) The previous lock owner released the lock, and the task now is top_waiter
2) we received a signal or timeout
In any of these cases, we continue the loop and once again try to grab the
ownership of the mutex. If we succeed, we exit the loop, otherwise we continue
and on signal and timeout, will exit the loop, or if we had the mutex stolen
we just simply add ourselves back on the lists and go back to sleep.
In both cases, the task will try again to acquire the lock. If it
does, then it will take itself off the waiters tree and set itself back
to the TASK_RUNNING state.
Note: For various reasons, because of timeout and signals, the steal mutex
algorithm needs to be careful. This is because the current process is
still on the wait_list. And because of dynamic changing of priorities,
especially on SCHED_OTHER tasks, the current process can be the
highest priority task on the wait_list.
Failed to get mutex on Timeout or Signal
----------------------------------------
In first case, if the lock was acquired by another task before this task
could get the lock, then it will go back to sleep and wait to be woken again.
If a timeout or signal occurred, the waiter's "task" field would not be
NULL and the task needs to be taken off the wait_list of the mutex and perhaps
pi_list of the owner. If this process was a high priority process, then
the rt_mutex_adjust_prio_chain needs to be executed again on the owner,
but this time it will be lowering the priorities.
The second case is only applicable for tasks that are grabbing a mutex
that can wake up before getting the lock, either due to a signal or
a timeout (i.e. rt_mutex_timed_futex_lock()). When woken, the task will try
to take the lock again; if it succeeds, it will return with the
lock held, otherwise it will return with -EINTR if it was woken
by a signal, or -ETIMEDOUT if it timed out.
Unlocking the Mutex
......@@ -739,25 +526,12 @@ owner still needs to make this check. If there are no waiters then the mutex
owner field is set to NULL, the wait_lock is released and nothing more is
needed.
If there are waiters, then we need to wake one up and give that waiter
pending ownership.
If there are waiters, then we need to wake one up.
On the wake up code, the pi_lock of the current owner is taken. The top
waiter of the lock is found and removed from the wait_list of the mutex
as well as the pi_list of the current owner. The task field of the new
pending owner's waiter structure is set to NULL, and the owner field of the
mutex is set to the new owner with the "Pending Owner" bit set, as well
as the "Has Waiters" bit if there still are other processes blocked on the
mutex.
The pi_lock of the previous owner is released, and the new pending owner's
pi_lock is taken. Remember that this is the trick to prevent the race
condition in rt_mutex_adjust_prio_chain from adding itself as a waiter
on the mutex.
We now clear the "pi_blocked_on" field of the new pending owner, and if
the mutex still has waiters pending, we add the new top waiter to the pi_list
of the pending owner.
waiter of the lock is found and removed from the waiters tree of the mutex
as well as the pi_waiters tree of the current owner. The "Has Waiters" bit is
marked to prevent lower priority tasks from stealing the lock.
Finally we unlock the pi_lock of the pending owner and wake it up.
......@@ -772,10 +546,14 @@ Credits
-------
Author: Steven Rostedt <rostedt@goodmis.org>
Updated: Alex Shi <alex.shi@linaro.org> - 7/6/2017
Reviewers: Ingo Molnar, Thomas Gleixner, Thomas Duetsch, and Randy Dunlap
Original Reviewers: Ingo Molnar, Thomas Gleixner, Thomas Duetsch, and
Randy Dunlap
Update (7/6/2017) Reviewers: Steven Rostedt and Sebastian Siewior
Updates
-------
This document was originally written for 2.6.17-rc3-mm1
and was updated on 4.12
......@@ -28,14 +28,13 @@ magic bullet for poorly designed applications, but it allows
well-designed applications to use userspace locks in critical parts of
an high priority thread, without losing determinism.
The enqueueing of the waiters into the rtmutex waiter list is done in
The enqueueing of the waiters into the rtmutex waiter tree is done in
priority order. For same priorities FIFO order is chosen. For each
rtmutex, only the top priority waiter is enqueued into the owner's
priority waiters list. This list too queues in priority order. Whenever
priority waiters tree. This tree too queues in priority order. Whenever
the top priority waiter of a task changes (for example it timed out or
got a signal), the priority of the owner task is readjusted. [The
priority enqueueing is handled by "plists", see include/linux/plist.h
for more details.]
got a signal), the priority of the owner task is readjusted. The
priority enqueueing is handled by "pi_waiters".
RT-mutexes are optimized for fastpath operations and have no internal
locking overhead when locking an uncontended mutex or unlocking a mutex
......@@ -46,34 +45,29 @@ is used]
The state of the rt-mutex is tracked via the owner field of the rt-mutex
structure:
rt_mutex->owner holds the task_struct pointer of the owner. Bit 0 and 1
are used to keep track of the "owner is pending" and "rtmutex has
waiters" state.
lock->owner holds the task_struct pointer of the owner. Bit 0 is used to
keep track of the "lock has waiters" state.
owner bit1 bit0
NULL 0 0 mutex is free (fast acquire possible)
NULL 0 1 invalid state
NULL 1 0 Transitional state*
NULL 1 1 invalid state
taskpointer 0 0 mutex is held (fast release possible)
taskpointer 0 1 task is pending owner
taskpointer 1 0 mutex is held and has waiters
taskpointer 1 1 task is pending owner and mutex has waiters
owner bit0
NULL 0 lock is free (fast acquire possible)
NULL 1 lock is free and has waiters and the top waiter
is going to take the lock*
taskpointer 0 lock is held (fast release possible)
taskpointer 1 lock is held and has waiters**
Pending-ownership handling is a performance optimization:
pending-ownership is assigned to the first (highest priority) waiter of
the mutex, when the mutex is released. The thread is woken up and once
it starts executing it can acquire the mutex. Until the mutex is taken
by it (bit 0 is cleared) a competing higher priority thread can "steal"
the mutex which puts the woken up thread back on the waiters list.
The fast atomic compare exchange based acquire and release is only
possible when bit 0 of lock->owner is 0.
The pending-ownership optimization is especially important for the
uninterrupted workflow of high-prio tasks which repeatedly
takes/releases locks that have lower-prio waiters. Without this
optimization the higher-prio thread would ping-pong to the lower-prio
task [because at unlock time we always assign a new owner].
(*) It also can be a transitional state when grabbing the lock
with ->wait_lock is held. To prevent any fast path cmpxchg to the lock,
we need to set the bit0 before looking at the lock, and the owner may be
NULL in this small time, hence this can be a transitional state.
(*) The "mutex has waiters" bit gets set to take the lock. If the lock
doesn't already have an owner, this bit is quickly cleared if there are
no waiters. So this is a transitional state to synchronize with looking
at the owner field of the mutex and the mutex owner releasing the lock.
(**) There is a small time when bit 0 is set but there are no
waiters. This can happen when grabbing the lock in the slow path.
To prevent a cmpxchg of the owner releasing the lock, we need to
set this bit before looking at the lock.
BTW, there is still technically a "Pending Owner", it's just not called
that anymore. The pending owner happens to be the top_waiter of a lock
that has no owner and has been woken up to grab the lock.
......@@ -7,7 +7,6 @@ Function Reference
.. toctree::
:maxdepth: 1
:numbered:
cec-func-open
cec-func-close
......
......@@ -84,17 +84,17 @@ Device drivers API
==================
The include/net/mac802154.h defines following functions:
- struct ieee802154_dev *ieee802154_alloc_device
(size_t priv_size, struct ieee802154_ops *ops):
allocation of IEEE 802.15.4 compatible device
- struct ieee802154_hw *
ieee802154_alloc_hw(size_t priv_data_len, const struct ieee802154_ops *ops):
allocation of IEEE 802.15.4 compatible hardware device
- void ieee802154_free_device(struct ieee802154_dev *dev):
freeing allocated device
- void ieee802154_free_hw(struct ieee802154_hw *hw):
freeing allocated hardware device
- int ieee802154_register_device(struct ieee802154_dev *dev):
register PHY in the system
- int ieee802154_register_hw(struct ieee802154_hw *hw):
register PHY which is the allocated hardware device, in the system
- void ieee802154_unregister_device(struct ieee802154_dev *dev):
- void ieee802154_unregister_hw(struct ieee802154_hw *hw):
freeing registered PHY
Moreover IEEE 802.15.4 device operations structure should be filled.
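For orientation, a hedged sketch of how a radio driver might use the renamed
helpers; ``foo_radio_priv``, the SPI bus and the mostly empty ops table are
placeholders, not part of the API::

    struct foo_radio_priv {
            struct spi_device *spi;
    };

    static const struct ieee802154_ops foo_radio_ops = {
            /* .start, .stop, .xmit_async, .ed, .set_channel, ... */
    };

    static int foo_radio_probe(struct spi_device *spi)
    {
            struct ieee802154_hw *hw;
            int ret;

            hw = ieee802154_alloc_hw(sizeof(struct foo_radio_priv),
                                     &foo_radio_ops);
            if (!hw)
                    return -ENOMEM;

            ret = ieee802154_register_hw(hw);
            if (ret) {
                    ieee802154_free_hw(hw);
                    return ret;
            }
            return 0;
    }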
......
......@@ -112,7 +112,7 @@ take nvmem_device as parameter.
5. Releasing a reference to the NVMEM
=====================================
When a consumers no longer needs the NVMEM, it has to release the reference
When a consumer no longer needs the NVMEM, it has to release the reference
to the NVMEM it has obtained using the APIs mentioned in the above section.
The NVMEM framework provides 2 APIs to release a reference to the NVMEM.
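As a hedged example (the cell name "calibration" and the surrounding error
handling are made up; ``dev`` is the consumer's struct device), a consumer
might pair get/read with the release call like this::

    struct nvmem_cell *cell;
    void *buf;
    size_t len;

    cell = nvmem_cell_get(dev, "calibration");
    if (IS_ERR(cell))
            return PTR_ERR(cell);

    buf = nvmem_cell_read(cell, &len);

    /* Done with the NVMEM: drop the reference taken by nvmem_cell_get(). */
    nvmem_cell_put(cell);

    if (!IS_ERR(buf))
            kfree(buf);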
......
......@@ -6,9 +6,6 @@ Applying Patches To The Linux Kernel
Original by:
Jesper Juhl, August 2005
Last update:
2016-09-14
.. note::
This document is obsolete. In most cases, rather than using ``patch``
......@@ -344,7 +341,7 @@ possible.
This is a good branch to run for people who want to help out testing
development kernels but do not want to run some of the really experimental
stuff (such people should see the sections about -git and -mm kernels below).
stuff (such people should see the sections about -next and -mm kernels below).
The -rc patches are not incremental, they apply to a base 4.x kernel, just
like the 4.x.y patches described above. The kernel version before the -rcN
......@@ -380,44 +377,6 @@ Here are 3 examples of how to apply these patches::
$ mv linux-4.7.3 linux-4.8-rc5 # rename the kernel source dir
The -git kernels
================
These are daily snapshots of Linus' kernel tree (managed in a git
repository, hence the name).
These patches are usually released daily and represent the current state of
Linus's tree. They are more experimental than -rc kernels since they are
generated automatically without even a cursory glance to see if they are
sane.
-git patches are not incremental and apply either to a base 4.x kernel or
a base 4.x-rc kernel -- you can see which from their name.
A patch named 4.7-git1 applies to the 4.7 kernel source and a patch
named 4.8-rc3-git2 applies to the source of the 4.8-rc3 kernel.
Here are some examples of how to apply these patches::
# moving from 4.7 to 4.7-git1
$ cd ~/linux-4.7 # change to the kernel source dir
$ patch -p1 < ../patch-4.7-git1 # apply the 4.7-git1 patch
$ cd ..
$ mv linux-4.7 linux-4.7-git1 # rename the kernel source dir
# moving from 4.7-git1 to 4.8-rc2-git3
$ cd ~/linux-4.7-git1 # change to the kernel source dir
$ patch -p1 -R < ../patch-4.7-git1 # revert the 4.7-git1 patch
# we now have a 4.7 kernel
$ patch -p1 < ../patch-4.8-rc2 # apply the 4.8-rc2 patch
# the kernel is now 4.8-rc2
$ patch -p1 < ../patch-4.8-rc2-git3 # apply the 4.8-rc2-git3 patch
# the kernel is now 4.8-rc2-git3
$ cd ..
$ mv linux-4.7-git1 linux-4.8-rc2-git3 # rename source dir
The -mm patches and the linux-next tree
=======================================
......
......@@ -53,7 +53,7 @@ mcelog 0.6 mcelog --version
iptables 1.4.2 iptables -V
openssl & libcrypto 1.0.0 openssl version
bc 1.06.95 bc --version
Sphinx\ [#f1]_ 1.2 sphinx-build --version
Sphinx\ [#f1]_ 1.3 sphinx-build --version
====================== =============== ========================================
.. [#f1] Sphinx is needed only to build the Kernel documentation
......@@ -309,18 +309,8 @@ Kernel documentation
Sphinx
------
The ReST markups currently used by the Documentation/ files are meant to be
built with ``Sphinx`` version 1.2 or upper. If you're desiring to build
PDF outputs, it is recommended to use version 1.4.6.
.. note::
Please notice that, for PDF and LaTeX output, you'll also need ``XeLaTeX``
version 3.14159265. Depending on the distribution, you may also need to
install a series of ``texlive`` packages that provide the minimal set of
functionalities required for ``XeLaTex`` to work. For PDF output you'll also
need ``convert(1)`` from ImageMagick (https://www.imagemagick.org).
Please see :ref:`sphinx_install` in ``Documentation/doc-guide/sphinx.rst``
for details about Sphinx requirements.
Getting updated software
========================
......
......@@ -166,12 +166,12 @@ Trees
- The queues of patches, for both completed versions and in progress
versions can be found at:
http://git.kernel.org/?p=linux/kernel/git/stable/stable-queue.git
https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
- The finalized and tagged releases of all stable kernels can be found
in separate branches per version at:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Review committee
......
......@@ -413,7 +413,7 @@ e-mail discussions.
11) Sign your work the Developer's Certificate of Origin
11) Sign your work - the Developer's Certificate of Origin
----------------------------------------------------------
To improve tracking of who did what, especially with patches that can
......
......@@ -16,17 +16,7 @@ The key service can be configured on by enabling:
This document has the following sections:
- Key overview
- Key service overview
- Key access permissions
- SELinux support
- New procfs files
- Userspace system call interface
- Kernel services
- Notes on accessing payload contents
- Defining a key type
- Request-key callback service
- Garbage collection
.. contents:: :local:
Key Overview
......@@ -443,7 +433,7 @@ The main syscalls are:
/sbin/request-key will be invoked in an attempt to obtain a key. The
callout_info string will be passed as an argument to the program.
See also Documentation/security/keys-request-key.txt.
See also Documentation/security/keys/request-key.rst.
The keyctl syscall functions are:
......@@ -973,7 +963,7 @@ payload contents" for more information.
If successful, the key will have been attached to the default keyring for
implicitly obtained request-key keys, as set by KEYCTL_SET_REQKEY_KEYRING.
See also Documentation/security/keys-request-key.txt.
See also Documentation/security/keys/request-key.rst.
* To search for a key, passing auxiliary data to the upcaller, call::
......
......@@ -3,7 +3,7 @@ Key Request Service
===================
The key request service is part of the key retention service (refer to
Documentation/security/keys.txt). This document explains more fully how
Documentation/security/core.rst). This document explains more fully how
the requesting algorithm works.
The process starts by either the kernel requesting a service by calling
......
......@@ -172,4 +172,4 @@ Other uses for trusted and encrypted keys, such as for disk and file encryption
are anticipated. In particular, the new format 'ecryptfs' has been defined
in order to use encrypted keys to mount an eCryptfs filesystem. More details
about the usage can be found in the file
``Documentation/security/keys-ecryptfs.txt``.
``Documentation/security/keys/ecryptfs.rst``.
......@@ -4,6 +4,17 @@
*
*/
/* Interim: Code-blocks with line nos - lines and line numbers don't line up.
* see: https://github.com/rtfd/sphinx_rtd_theme/issues/419
*/
div[class^="highlight"] pre {
line-height: normal;
}
.rst-content .highlight > pre {
line-height: normal;
}
@media screen {
/* content column
......@@ -56,6 +67,12 @@
font-family: "Courier New", Courier, monospace
}
/* fix bottom margin of lists items */
.rst-content .section ul li:last-child, .rst-content .section ul li p:last-child {
margin-bottom: 12px;
}
/* inline literal: drop the borderbox, padding and red color */
code, .rst-content tt, .rst-content code {
......
......@@ -27,6 +27,7 @@
# Please make sure this works on both python2 and python3.
#
import codecs
import os
import subprocess
import sys
......@@ -88,13 +89,10 @@ class KernelDocDirective(Directive):
try:
env.app.verbose('calling kernel-doc \'%s\'' % (" ".join(cmd)))
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
# python2 needs conversion to unicode.
# python3 with universal_newlines=True returns strings.
if sys.version_info.major < 3:
out, err = unicode(out, 'utf-8'), unicode(err, 'utf-8')
out, err = codecs.decode(out, 'utf-8'), codecs.decode(err, 'utf-8')
if p.returncode != 0:
sys.stderr.write(err)
......
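To see why that hunk decodes explicitly instead of relying on universal_newlines=True, here is a small standalone sketch (the child command and strings are illustrative, not from the patch): without universal_newlines, communicate() hands back raw bytes on Python 3 (and a plain str on Python 2), and codecs.decode() turns them into text using an explicit encoding, so the result no longer depends on whatever locale happens to be set when the docs are built:

    import codecs
    import subprocess

    # Illustrative child command; kerneldoc.py runs scripts/kernel-doc here.
    p = subprocess.Popen(['echo', 'kernel-doc style UTF-8 output'],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()                  # bytes on Python 3, str on Python 2
    out, err = (codecs.decode(out, 'utf-8'),    # decode with a fixed encoding,
                codecs.decode(err, 'utf-8'))    # independent of the current locale
    print(out.strip())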
docutils==0.12
Sphinx==1.4.9
sphinx_rtd_theme
......@@ -149,9 +149,7 @@ The Linux kernel source contains a large amount of documentation. These documents are invaluable for learning how to
Using the following commands in the main directory of the kernel source will generate the
documentation in PDF, Postscript, HTML and man page formats, respectively:
make pdfdocs
make psdocs
make htmldocs
make mandocs
How to become a kernel developer
......
......@@ -1468,7 +1468,7 @@ $(help-board-dirs): help-%:
# Documentation targets
# ---------------------------------------------------------------------------
DOC_TARGETS := xmldocs sgmldocs psdocs latexdocs pdfdocs htmldocs mandocs installmandocs epubdocs cleandocs linkcheckdocs
DOC_TARGETS := xmldocs latexdocs pdfdocs htmldocs epubdocs cleandocs linkcheckdocs
PHONY += $(DOC_TARGETS)
$(DOC_TARGETS): scripts_basic FORCE
$(Q)$(MAKE) $(build)=Documentation $@
......
......@@ -38,12 +38,13 @@ struct device_node;
struct gen_pool;
/**
* Allocation callback function type definition
* typedef genpool_algo_t: Allocation callback function type definition
* @map: Pointer to bitmap
* @size: The bitmap size in bits
* @start: The bitnumber to start searching at
* @nr: The number of zeroed bits we're looking for
* @data: optional additional data used by @genpool_algo_t
* @data: optional additional data used by the callback
* @pool: the pool being allocated from
*/
typedef unsigned long (*genpool_algo_t)(unsigned long *map,
unsigned long size,
......
/* Generic associative array implementation.
*
* See Documentation/assoc_array.txt for information.
* See Documentation/core-api/assoc_array.rst for information.
*
* Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
......
......@@ -2226,6 +2226,7 @@ sub dump_enum($$) {
if ($x =~ /enum\s+(\w+)\s*{(.*)}/) {
$declaration_name = $1;
my $members = $2;
$members =~ s/\s+$//;
foreach my $arg (split ',', $members) {
$arg =~ s/^\s*(\w+).*/$1/;
......@@ -2766,6 +2767,9 @@ sub process_proto_type($$) {
while (1) {
if ( $x =~ /([^{};]*)([{};])(.*)/ ) {
if( length $prototype ) {
$prototype .= " "
}
$prototype .= $1 . $2;
($2 eq '{') && $brcount++;
($2 eq '}') && $brcount--;
......
#!/usr/bin/perl
use strict;
# Copyright (c) 2017 Mauro Carvalho Chehab <mchehab@kernel.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
my $virtenv_dir = "sphinx_1.4";
my $requirement_file = "Documentation/sphinx/requirements.txt";
#
# Static vars
#
my %missing;
my $system_release;
my $need = 0;
my $optional = 0;
my $need_symlink = 0;
my $need_sphinx = 0;
my $install = "";
#
# Command line arguments
#
my $pdf = 1;
my $virtualenv = 1;
#
# List of required texlive packages on Fedora and OpenSuse
#
my %texlive = (
'adjustbox.sty' => 'texlive-adjustbox',
'amsfonts.sty' => 'texlive-amsfonts',
'amsmath.sty' => 'texlive-amsmath',
'amssymb.sty' => 'texlive-amsfonts',
'amsthm.sty' => 'texlive-amscls',
'anyfontsize.sty' => 'texlive-anyfontsize',
'atbegshi.sty' => 'texlive-oberdiek',
'bm.sty' => 'texlive-tools',
'capt-of.sty' => 'texlive-capt-of',
'cmap.sty' => 'texlive-cmap',
'ecrm1000.tfm' => 'texlive-ec',
'eqparbox.sty' => 'texlive-eqparbox',
'eu1enc.def' => 'texlive-euenc',
'fancybox.sty' => 'texlive-fancybox',
'fancyvrb.sty' => 'texlive-fancyvrb',
'float.sty' => 'texlive-float',
'fncychap.sty' => 'texlive-fncychap',
'footnote.sty' => 'texlive-mdwtools',
'framed.sty' => 'texlive-framed',
'luatex85.sty' => 'texlive-luatex85',
'multirow.sty' => 'texlive-multirow',
'needspace.sty' => 'texlive-needspace',
'palatino.sty' => 'texlive-psnfss',
'parskip.sty' => 'texlive-parskip',
'polyglossia.sty' => 'texlive-polyglossia',
'tabulary.sty' => 'texlive-tabulary',
'threeparttable.sty' => 'texlive-threeparttable',
'titlesec.sty' => 'texlive-titlesec',
'ucs.sty' => 'texlive-ucs',
'upquote.sty' => 'texlive-upquote',
'wrapfig.sty' => 'texlive-wrapfig',
);
#
# Subroutines that check whether a feature exists
#
sub check_missing(%)
{
my %map = %{$_[0]};
foreach my $prog (sort keys %missing) {
my $is_optional = $missing{$prog};
if ($is_optional) {
print "Warning: better to also install \"$prog\".\n";
} else {
print "ERROR: please install \"$prog\", otherwise, build won't work.\n";
}
if (defined($map{$prog})) {
$install .= " " . $map{$prog};
} else {
$install .= " " . $prog;
}
}
$install =~ s/^\s//;
}
sub add_package($$)
{
my $package = shift;
my $is_optional = shift;
$missing{$package} = $is_optional;
if ($is_optional) {
$optional++;
} else {
$need++;
}
}
sub check_missing_file($$$)
{
my $file = shift;
my $package = shift;
my $is_optional = shift;
return if(-e $file);
add_package($package, $is_optional);
}
sub findprog($)
{
foreach(split(/:/, $ENV{PATH})) {
return "$_/$_[0]" if(-x "$_/$_[0]");
}
}
sub check_program($$)
{
my $prog = shift;
my $is_optional = shift;
return if findprog($prog);
add_package($prog, $is_optional);
}
sub check_perl_module($$)
{
my $prog = shift;
my $is_optional = shift;
my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null");
return if ($err == 0);
add_package($prog, $is_optional);
}
sub check_python_module($$)
{
my $prog = shift;
my $is_optional = shift;
my $err = system("python3 -c 'import $prog' 2>/dev/null /dev/null");
return if ($err == 0);
my $err = system("python -c 'import $prog' 2>/dev/null /dev/null");
return if ($err == 0);
add_package($prog, $is_optional);
}
sub check_rpm_missing($$)
{
my @pkgs = @{$_[0]};
my $is_optional = $_[1];
foreach my $prog(@pkgs) {
my $err = system("rpm -q '$prog' 2>/dev/null >/dev/null");
add_package($prog, $is_optional) if ($err);
}
}
sub check_pacman_missing($$)
{
my @pkgs = @{$_[0]};
my $is_optional = $_[1];
foreach my $prog(@pkgs) {
my $err = system("pacman -Q '$prog' 2>/dev/null >/dev/null");
add_package($prog, $is_optional) if ($err);
}
}
sub check_missing_tex($)
{
my $is_optional = shift;
my $kpsewhich = findprog("kpsewhich");
foreach my $prog(keys %texlive) {
my $package = $texlive{$prog};
if (!$kpsewhich) {
add_package($package, $is_optional);
next;
}
my $file = qx($kpsewhich $prog);
add_package($package, $is_optional) if ($file =~ /^\s*$/);
}
}
sub check_sphinx()
{
return if findprog("sphinx-build");
if (findprog("sphinx-build-3")) {
$need_symlink = 1;
return;
}
if ($virtualenv) {
my $prog = findprog("virtualenv-3");
$prog = findprog("virtualenv-3.5") if (!$prog);
check_program("virtualenv", 0) if (!$prog);
$need_sphinx = 1;
} else {
add_package("python-sphinx", 0);
}
}
#
# Ancillary subroutines
#
sub catcheck($)
{
my $res = "";
$res = qx(cat $_[0]) if (-r $_[0]);
return $res;
}
sub which($)
{
my $file = shift;
my @path = split ":", $ENV{PATH};
foreach my $dir(@path) {
my $name = $dir.'/'.$file;
return $name if (-x $name );
}
return undef;
}
#
# Subroutines that check distro-specific hints
#
sub give_debian_hints()
{
my %map = (
"python-sphinx" => "python3-sphinx",
"sphinx_rtd_theme" => "python3-sphinx-rtd-theme",
"virtualenv" => "virtualenv",
"dot" => "graphviz",
"convert" => "imagemagick",
"Pod::Usage" => "perl-modules",
"xelatex" => "texlive-xetex",
"rsvg-convert" => "librsvg2-bin",
);
if ($pdf) {
check_missing_file("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
"fonts-dejavu", 1);
}
check_program("dvipng", 1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
printf("You should run:\n\n\tsudo apt-get install $install\n");
}
sub give_redhat_hints()
{
my %map = (
"python-sphinx" => "python3-sphinx",
"sphinx_rtd_theme" => "python3-sphinx_rtd_theme",
"virtualenv" => "python3-virtualenv",
"dot" => "graphviz",
"convert" => "ImageMagick",
"Pod::Usage" => "perl-Pod-Usage",
"xelatex" => "texlive-xetex-bin",
"rsvg-convert" => "librsvg2-tools",
);
my @fedora26_opt_pkgs = (
"graphviz-gd", # Fedora 26: needed for PDF support
);
my @fedora_tex_pkgs = (
"texlive-collection-fontsrecommended",
"texlive-collection-latex",
"dejavu-sans-fonts",
"dejavu-serif-fonts",
"dejavu-sans-mono-fonts",
);
#
# Checks valid for RHEL/CentOS version 7.x.
#
if ($system_release !~ /Fedora/) {
$map{"virtualenv"} = "python-virtualenv";
}
my $release;
$release = $1 if ($system_release =~ /Fedora\s+release\s+(\d+)/);
check_rpm_missing(\@fedora26_opt_pkgs, 1) if ($pdf && $release >= 26);
check_rpm_missing(\@fedora_tex_pkgs, 1) if ($pdf);
check_missing_tex(1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
if ($release >= 18) {
# dnf, for Fedora 18+
printf("You should run:\n\n\tsudo dnf install -y $install\n");
} else {
# yum, for RHEL (and clones) or Fedora version < 18
printf("You should run:\n\n\tsudo yum install -y $install\n");
}
}
sub give_opensuse_hints()
{
my %map = (
"python-sphinx" => "python3-sphinx",
"sphinx_rtd_theme" => "python3-sphinx_rtd_theme",
"virtualenv" => "python3-virtualenv",
"dot" => "graphviz",
"convert" => "ImageMagick",
"Pod::Usage" => "perl-Pod-Usage",
"xelatex" => "texlive-xetex-bin",
"rsvg-convert" => "rsvg-view",
);
my @suse_tex_pkgs = (
"texlive-babel-english",
"texlive-caption",
"texlive-colortbl",
"texlive-courier",
"texlive-dvips",
"texlive-helvetic",
"texlive-makeindex",
"texlive-metafont",
"texlive-metapost",
"texlive-palatino",
"texlive-preview",
"texlive-times",
"texlive-zapfchan",
"texlive-zapfding",
);
check_rpm_missing(\@suse_tex_pkgs, 1) if ($pdf);
check_missing_tex(1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
printf("You should run:\n\n\tsudo zypper install --no-recommends $install\n");
}
sub give_mageia_hints()
{
my %map = (
"python-sphinx" => "python3-sphinx",
"sphinx_rtd_theme" => "python3-sphinx_rtd_theme",
"virtualenv" => "python3-virtualenv",
"dot" => "graphviz",
"convert" => "ImageMagick",
"Pod::Usage" => "perl-Pod-Usage",
"xelatex" => "texlive",
"rsvg-convert" => "librsvg2-tools",
);
my @tex_pkgs = (
"texlive-fontsextra",
);
check_rpm_missing(\@tex_pkgs, 1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
printf("You should run:\n\n\tsudo urpmi $install\n");
}
sub give_arch_linux_hints()
{
my %map = (
"sphinx_rtd_theme" => "python-sphinx_rtd_theme",
"virtualenv" => "python-virtualenv",
"dot" => "graphviz",
"convert" => "imagemagick",
"xelatex" => "texlive-bin",
"rsvg-convert" => "extra/librsvg",
);
my @archlinux_tex_pkgs = (
"texlive-core",
"texlive-latexextra",
"ttf-dejavu",
);
check_pacman_missing(\@archlinux_tex_pkgs, 1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
printf("You should run:\n\n\tsudo pacman -S $install\n");
}
sub give_gentoo_hints()
{
my %map = (
"sphinx_rtd_theme" => "dev-python/sphinx_rtd_theme",
"virtualenv" => "dev-python/virtualenv",
"dot" => "media-gfx/graphviz",
"convert" => "media-gfx/imagemagick",
"xelatex" => "dev-texlive/texlive-xetex media-fonts/dejavu",
"rsvg-convert" => "gnome-base/librsvg",
);
check_missing_file("/usr/share/fonts/dejavu/DejaVuSans.ttf",
"media-fonts/dejavu", 1) if ($pdf);
check_missing(\%map);
return if (!$need && !$optional);
printf("You should run:\n\n");
printf("\tsudo su -c 'echo \"media-gfx/imagemagick svg png\" > /etc/portage/package.use/imagemagick'\n");
printf("\tsudo su -c 'echo \"media-gfx/graphviz cairo pdf\" > /etc/portage/package.use/graphviz'\n");
printf("\tsudo emerge --ask $install\n");
}
sub check_distros()
{
# Distro-specific hints
if ($system_release =~ /Red Hat Enterprise Linux/) {
give_redhat_hints;
return;
}
if ($system_release =~ /CentOS/) {
give_redhat_hints;
return;
}
if ($system_release =~ /Scientific Linux/) {
give_redhat_hints;
return;
}
if ($system_release =~ /Oracle Linux Server/) {
give_redhat_hints;
return;
}
if ($system_release =~ /Fedora/) {
give_redhat_hints;
return;
}
if ($system_release =~ /Ubuntu/) {
give_debian_hints;
return;
}
if ($system_release =~ /Debian/) {
give_debian_hints;
return;
}
if ($system_release =~ /openSUSE/) {
give_opensuse_hints;
return;
}
if ($system_release =~ /Mageia/) {
give_mageia_hints;
return;
}
if ($system_release =~ /Arch Linux/) {
give_arch_linux_hints;
return;
}
if ($system_release =~ /Gentoo/) {
give_gentoo_hints;
return;
}
#
# Fall back to generic hint code for other distros.
# That's far from ideal, especially for LaTeX dependencies.
#
my %map = (
"sphinx-build" => "sphinx"
);
check_missing_tex(1) if ($pdf);
check_missing(\%map);
print "I don't know distro $system_release.\n";
print "So, I can't provide you a hint with the install procedure.\n";
print "There are likely missing dependencies.\n";
}
#
# Common dependencies
#
sub check_needs()
{
if ($system_release) {
print "Detected OS: $system_release.\n";
} else {
print "Unknown OS\n";
}
# RHEL 7.x and clones have Sphinx version 1.1.x and incomplete texlive
if (($system_release =~ /Red Hat Enterprise Linux/) ||
($system_release =~ /CentOS/) ||
($system_release =~ /Scientific Linux/) ||
($system_release =~ /Oracle Linux Server/)) {
$virtualenv = 1;
$pdf = 0;
printf("NOTE: On this distro, Sphinx and TexLive shipped versions are incompatible\n");
printf("with doc build. So, use Sphinx via a Python virtual environment.\n\n");
printf("This script can't install a TexLive version that would provide PDF.\n");
}
# Check for needed programs/tools
check_sphinx();
check_perl_module("Pod::Usage", 0);
check_program("make", 0);
check_program("gcc", 0);
check_python_module("sphinx_rtd_theme", 1) if (!$virtualenv);
check_program("xelatex", 1) if ($pdf);
check_program("dot", 1);
check_program("convert", 1);
check_program("rsvg-convert", 1) if ($pdf);
check_distros();
if ($need_symlink) {
printf "\tsudo ln -sf %s /usr/bin/sphinx-build\n\n",
which("sphinx-build-3");
}
if ($need_sphinx) {
my $activate = "$virtenv_dir/bin/activate";
if (-e "$ENV{'PWD'}/$activate") {
printf "\nNeed to activate virtualenv with:\n";
printf "\t. $activate\n";
} else {
my $virtualenv = findprog("virtualenv-3");
$virtualenv = findprog("virtualenv-3.5") if (!$virtualenv);
$virtualenv = findprog("virtualenv") if (!$virtualenv);
$virtualenv = "virtualenv" if (!$virtualenv);
printf "\t$virtualenv $virtenv_dir\n";
printf "\t. $activate\n";
printf "\tpip install -r $requirement_file\n";
$need++;
}
}
printf "\n";
print "All optional dependenties are met.\n" if (!$optional);
if ($need == 1) {
die "Can't build as $need mandatory dependency is missing";
} elsif ($need) {
die "Can't build as $need mandatory dependencies are missing";
}
print "Needed package dependencies are met.\n";
}
#
# Main
#
while (@ARGV) {
my $arg = shift(@ARGV);
if ($arg eq "--no-virtualenv") {
$virtualenv = 0;
} elsif ($arg eq "--no-pdf"){
$pdf = 0;
} else {
print "Usage:\n\t$0 <--no-virtualenv> <--no-pdf>\n\n";
exit -1;
}
}
#
# Determine the system type. There's no standard unique way that would
# work with all distros with a minimal package install. So, several
# methods are used here.
#
# By default, it uses the lsb_release command. If that is not available, it
# falls back to reading the known places where the distro name is stored.
#
$system_release = qx(lsb_release -d) if which("lsb_release");
$system_release =~ s/Description:\s*// if ($system_release);
$system_release = catcheck("/etc/system-release") if !$system_release;
$system_release = catcheck("/etc/redhat-release") if !$system_release;
$system_release = catcheck("/etc/lsb-release") if !$system_release;
$system_release = catcheck("/etc/gentoo-release") if !$system_release;
$system_release = catcheck("/etc/issue") if !$system_release;
$system_release =~ s/\s+$//;
check_needs;