Commit f6e07d38 authored by Jörg Sommer, committed by Linus Torvalds

Documentation: update cgroupfs mount point

According to commit 676db4af ("cgroupfs: create /sys/fs/cgroup to
mount cgroupfs on") the canonical mountpoint for the cgroup filesystem
is /sys/fs/cgroup.  Hence, this should be used in the documentation.
Signed-off-by: Jörg Sommer <joerg@alea.gnuu.de>
Acked-by: Paul Menage <menage@google.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 06a2c45d
@@ -21,7 +21,7 @@ information will not be available.
 To extract cgroup statistics a utility very similar to getdelays.c
 has been developed, the sample output of the utility is shown below
-~/balbir/cgroupstats # ./getdelays -C "/cgroup/a"
+~/balbir/cgroupstats # ./getdelays -C "/sys/fs/cgroup/a"
 sleeping 1, blocked 0, running 1, stopped 0, uninterruptible 0
-~/balbir/cgroupstats # ./getdelays -C "/cgroup"
+~/balbir/cgroupstats # ./getdelays -C "/sys/fs/cgroup"
 sleeping 155, blocked 0, running 1, stopped 0, uninterruptible 2
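For the path above to exist, a hierarchy must already be mounted at /sys/fs/cgroup with a child group "a"; a minimal sketch (the choice of the cpuacct subsystem is an assumption, any mounted hierarchy works for cgroupstats):

	# mount -t cgroup -o cpuacct none /sys/fs/cgroup
	# mkdir /sys/fs/cgroup/a          (the group queried above)
	# ./getdelays -C "/sys/fs/cgroup/a"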
@@ -28,16 +28,19 @@ cgroups. Here is what you can do.
 - Enable group scheduling in CFQ
 	CONFIG_CFQ_GROUP_IOSCHED=y
-- Compile and boot into kernel and mount IO controller (blkio).
-	mount -t cgroup -o blkio none /cgroup
+- Compile and boot into kernel and mount IO controller (blkio); see
+  cgroups.txt, Why are cgroups needed?.
+	mount -t tmpfs cgroup_root /sys/fs/cgroup
+	mkdir /sys/fs/cgroup/blkio
+	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
 - Create two cgroups
-	mkdir -p /cgroup/test1/ /cgroup/test2
+	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2
 - Set weights of group test1 and test2
-	echo 1000 > /cgroup/test1/blkio.weight
-	echo 500 > /cgroup/test2/blkio.weight
+	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
+	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
 - Create two same size files (say 512MB each) on same disk (file1, file2) and
   launch two dd threads in different cgroup to read those files.
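The test references the two files without showing their creation; a sketch of that step (a scratch filesystem mounted at /mnt/sdb is an assumption carried over from the dd commands in the next hunk):

	dd if=/dev/zero of=/mnt/sdb/zerofile1 bs=1M count=512
	dd if=/dev/zero of=/mnt/sdb/zerofile2 bs=1M count=512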
@@ -46,12 +49,12 @@ cgroups. Here is what you can do.
 echo 3 > /proc/sys/vm/drop_caches
 dd if=/mnt/sdb/zerofile1 of=/dev/null &
-echo $! > /cgroup/test1/tasks
-cat /cgroup/test1/tasks
+echo $! > /sys/fs/cgroup/blkio/test1/tasks
+cat /sys/fs/cgroup/blkio/test1/tasks
 dd if=/mnt/sdb/zerofile2 of=/dev/null &
-echo $! > /cgroup/test2/tasks
-cat /cgroup/test2/tasks
+echo $! > /sys/fs/cgroup/blkio/test2/tasks
+cat /sys/fs/cgroup/blkio/test2/tasks
 - At macro level, first dd should finish first. To get more precise data, keep
   on looking at (with the help of script), at blkio.disk_time and
@@ -68,13 +71,13 @@ Throttling/Upper Limit policy
 - Enable throttling in block layer
 	CONFIG_BLK_DEV_THROTTLING=y
-- Mount blkio controller
-	mount -t cgroup -o blkio none /cgroup/blkio
+- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
+	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
 - Specify a bandwidth rate on particular device for root group. The format
   for policy is "<major>:<minor> <bytes_per_second>".
-	echo "8:16 1048576" > /cgroup/blkio/blkio.read_bps_device
+	echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.read_bps_device
 Above will put a limit of 1MB/second on reads happening for root group
 on device having major/minor number 8:16.
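To see the limit take effect, read from the device in the root group and watch dd's reported rate; a sketch assuming 8:16 is /dev/sdb as in the weight example:

	dd if=/dev/sdb of=/dev/null bs=1M count=10 iflag=direct
	(iflag=direct bypasses the page cache, so dd should report roughly 1 MB/s)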
@@ -149,7 +152,7 @@ Proportional weight policy files
   Following is the format.
-  #echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
+  # echo dev_maj:dev_minor weight > blkio.weight_device
   Configure weight=300 on /dev/sdb (8:16) in this cgroup
   # echo 8:16 300 > blkio.weight_device
   # cat blkio.weight_device
...
@@ -138,7 +138,7 @@ With the ability to classify tasks differently for different resources
 the admin can easily set up a script which receives exec notifications
 and depending on who is launching the browser he can
-    # echo browser_pid > /mnt/<restype>/<userclass>/tasks
+    # echo browser_pid > /sys/fs/cgroup/<restype>/<userclass>/tasks
 With only a single hierarchy, he now would potentially have to create
 a separate cgroup for every browser launched and associate it with
@@ -153,9 +153,9 @@ apps enhanced CPU power,
 With ability to write pids directly to resource classes, it's just a
 matter of :
-    # echo pid > /mnt/network/<new_class>/tasks
+    # echo pid > /sys/fs/cgroup/network/<new_class>/tasks
 (after some time)
-    # echo pid > /mnt/network/<orig_class>/tasks
+    # echo pid > /sys/fs/cgroup/network/<orig_class>/tasks
 Without this ability, he would have to split the cgroup into
 multiple separate ones and then associate the new cgroups with the
@@ -310,21 +310,24 @@ subsystem, this is the case for the cpuset.
 To start a new job that is to be contained within a cgroup, using
 the "cpuset" cgroup subsystem, the steps are something like:
-1) mkdir /dev/cgroup
-2) mount -t cgroup -ocpuset cpuset /dev/cgroup
-3) Create the new cgroup by doing mkdir's and write's (or echo's) in
-   the /dev/cgroup virtual file system.
-4) Start a task that will be the "founding father" of the new job.
-5) Attach that task to the new cgroup by writing its pid to the
-   /dev/cgroup tasks file for that cgroup.
-6) fork, exec or clone the job tasks from this founding father task.
+1) mount -t tmpfs cgroup_root /sys/fs/cgroup
+2) mkdir /sys/fs/cgroup/cpuset
+3) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
+4) Create the new cgroup by doing mkdir's and write's (or echo's) in
+   the /sys/fs/cgroup virtual file system.
+5) Start a task that will be the "founding father" of the new job.
+6) Attach that task to the new cgroup by writing its pid to the
+   /sys/fs/cgroup/cpuset/tasks file for that cgroup.
+7) fork, exec or clone the job tasks from this founding father task.
 For example, the following sequence of commands will setup a cgroup
 named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
 and then start a subshell 'sh' in that cgroup:
-  mount -t cgroup cpuset -ocpuset /dev/cgroup
-  cd /dev/cgroup
+  mount -t tmpfs cgroup_root /sys/fs/cgroup
+  mkdir /sys/fs/cgroup/cpuset
+  mount -t cgroup cpuset -ocpuset /sys/fs/cgroup/cpuset
+  cd /sys/fs/cgroup/cpuset
   mkdir Charlie
   cd Charlie
   /bin/echo 2-3 > cpuset.cpus
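For context, the sequence continues by assigning Memory Node 1 and attaching the subshell; a sketch based on the standard cpuset interface:

  /bin/echo 1 > cpuset.mems
  /bin/echo $$ > tasks
  sh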
@@ -345,7 +348,7 @@ Creating, modifying, using the cgroups can be done through the cgroup
 virtual filesystem.
 To mount a cgroup hierarchy with all available subsystems, type:
-# mount -t cgroup xxx /dev/cgroup
+# mount -t cgroup xxx /sys/fs/cgroup
 The "xxx" is not interpreted by the cgroup code, but will appear in
 /proc/mounts so may be any useful identifying string that you like.
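For example, checking what got mounted (a sketch; the option list in the output depends on the compiled-in subsystems):

# grep cgroup /proc/mounts
xxx /sys/fs/cgroup cgroup rw,relatime,cpuset,cpu,memory 0 0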
@@ -354,23 +357,32 @@ Note: Some subsystems do not work without some user input first. For instance,
 if cpusets are enabled the user will have to populate the cpus and mems files
 for each new cgroup created before that group can be used.
+As explained in section `1.2 Why are cgroups needed?' you should create
+different hierarchies of cgroups for each single resource or group of
+resources you want to control. Therefore, you should mount a tmpfs on
+/sys/fs/cgroup and create directories for each cgroup resource or resource
+group.
+# mount -t tmpfs cgroup_root /sys/fs/cgroup
+# mkdir /sys/fs/cgroup/rg1
 To mount a cgroup hierarchy with just the cpuset and memory
 subsystems, type:
-# mount -t cgroup -o cpuset,memory hier1 /dev/cgroup
+# mount -t cgroup -o cpuset,memory hier1 /sys/fs/cgroup/rg1
 To change the set of subsystems bound to a mounted hierarchy, just
 remount with different options:
-# mount -o remount,cpuset,blkio hier1 /dev/cgroup
+# mount -o remount,cpuset,blkio hier1 /sys/fs/cgroup/rg1
 Now memory is removed from the hierarchy and blkio is added.
 Note this will add blkio to the hierarchy but won't remove memory or
 cpuset, because the new options are appended to the old ones:
-# mount -o remount,blkio /dev/cgroup
+# mount -o remount,blkio /sys/fs/cgroup/rg1
 To specify a hierarchy's release_agent:
 # mount -t cgroup -o cpuset,release_agent="/sbin/cpuset_release_agent" \
-  xxx /dev/cgroup
+  xxx /sys/fs/cgroup/rg1
 Note that specifying 'release_agent' more than once will return failure.
@@ -379,17 +391,17 @@ when the hierarchy consists of a single (root) cgroup. Supporting
 the ability to arbitrarily bind/unbind subsystems from an existing
 cgroup hierarchy is intended to be implemented in the future.
-Then under /dev/cgroup you can find a tree that corresponds to the
-tree of the cgroups in the system. For instance, /dev/cgroup
+Then under /sys/fs/cgroup/rg1 you can find a tree that corresponds to the
+tree of the cgroups in the system. For instance, /sys/fs/cgroup/rg1
 is the cgroup that holds the whole system.
 If you want to change the value of release_agent:
-# echo "/sbin/new_release_agent" > /dev/cgroup/release_agent
+# echo "/sbin/new_release_agent" > /sys/fs/cgroup/rg1/release_agent
 It can also be changed via remount.
-If you want to create a new cgroup under /dev/cgroup:
-# cd /dev/cgroup
+If you want to create a new cgroup under /sys/fs/cgroup/rg1:
+# cd /sys/fs/cgroup/rg1
 # mkdir my_cgroup
 Now you want to do something with this cgroup.
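Typical next steps, sketched from the tasks-file semantics described above:

# echo $$ > my_cgroup/tasks       (move the current shell into the new cgroup)
# cat my_cgroup/tasks             (verify the attachment)
# echo $$ > tasks                 (move the shell back to the root cgroup)
# rmdir my_cgroup                 (removal works once no tasks or children remain)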
...
@@ -10,26 +10,25 @@ directly present in its group.
 Accounting groups can be created by first mounting the cgroup filesystem.
-# mkdir /cgroups
-# mount -t cgroup -ocpuacct none /cgroups
-With the above step, the initial or the parent accounting group
-becomes visible at /cgroups. At bootup, this group includes all the
-tasks in the system. /cgroups/tasks lists the tasks in this cgroup.
-/cgroups/cpuacct.usage gives the CPU time (in nanoseconds) obtained by
-this group which is essentially the CPU time obtained by all the tasks
+# mount -t cgroup -ocpuacct none /sys/fs/cgroup
+With the above step, the initial or the parent accounting group becomes
+visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
+the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
+/sys/fs/cgroup/cpuacct.usage gives the CPU time (in nanoseconds) obtained
+by this group which is essentially the CPU time obtained by all the tasks
 in the system.
-New accounting groups can be created under the parent group /cgroups.
-# cd /cgroups
+New accounting groups can be created under the parent group /sys/fs/cgroup.
+# cd /sys/fs/cgroup
 # mkdir g1
 # echo $$ > g1/tasks
 The above steps create a new group g1 and move the current shell
 process (bash) into it. CPU time consumed by this bash and its children
 can be obtained from g1/cpuacct.usage and the same is accumulated in
-/cgroups/cpuacct.usage also.
+/sys/fs/cgroup/cpuacct.usage also.
 cpuacct.stat file lists a few statistics which further divide the
 CPU time obtained by the cgroup into user and system times. Currently
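A quick way to see the counters move, sketched (the shell is already in g1 after the steps above):

# yes > /dev/null &          (CPU load; the child is created inside g1)
# cat g1/cpuacct.usage       (grows while the load runs, in nanoseconds)
# cat g1/cpuacct.stat        (user/system split, in USER_HZ ticks)
# kill %1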
...
@@ -661,21 +661,21 @@ than stress the kernel.
 To start a new job that is to be contained within a cpuset, the steps are:
- 1) mkdir /dev/cpuset
- 2) mount -t cgroup -ocpuset cpuset /dev/cpuset
+ 1) mkdir /sys/fs/cgroup/cpuset
+ 2) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
  3) Create the new cpuset by doing mkdir's and write's (or echo's) in
-    the /dev/cpuset virtual file system.
+    the /sys/fs/cgroup/cpuset virtual file system.
  4) Start a task that will be the "founding father" of the new job.
  5) Attach that task to the new cpuset by writing its pid to the
-    /dev/cpuset tasks file for that cpuset.
+    /sys/fs/cgroup/cpuset tasks file for that cpuset.
  6) fork, exec or clone the job tasks from this founding father task.
 For example, the following sequence of commands will setup a cpuset
 named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
 and then start a subshell 'sh' in that cpuset:
-  mount -t cgroup -ocpuset cpuset /dev/cpuset
-  cd /dev/cpuset
+  mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
+  cd /sys/fs/cgroup/cpuset
   mkdir Charlie
   cd Charlie
   /bin/echo 2-3 > cpuset.cpus
@@ -710,14 +710,14 @@ Creating, modifying, using the cpusets can be done through the cpuset
 virtual filesystem.
 To mount it, type:
-# mount -t cgroup -o cpuset cpuset /dev/cpuset
-Then under /dev/cpuset you can find a tree that corresponds to the
-tree of the cpusets in the system. For instance, /dev/cpuset
+# mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
+Then under /sys/fs/cgroup/cpuset you can find a tree that corresponds to the
+tree of the cpusets in the system. For instance, /sys/fs/cgroup/cpuset
 is the cpuset that holds the whole system.
-If you want to create a new cpuset under /dev/cpuset:
-# cd /dev/cpuset
+If you want to create a new cpuset under /sys/fs/cgroup/cpuset:
+# cd /sys/fs/cgroup/cpuset
 # mkdir my_cpuset
 Now you want to do something with this cpuset.
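A cpuset cannot accept tasks until its cpus and mems are populated; a minimal sketch for my_cpuset (CPU 0 and node 0 are arbitrary choices):

# cd my_cpuset
# /bin/echo 0 > cpuset.cpus
# /bin/echo 0 > cpuset.mems
# /bin/echo $$ > tasks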
@@ -765,12 +765,12 @@ wrapper around the cgroup filesystem.
 The command
-mount -t cpuset X /dev/cpuset
+mount -t cpuset X /sys/fs/cgroup/cpuset
 is equivalent to
-mount -t cgroup -ocpuset,noprefix X /dev/cpuset
-echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent
+mount -t cgroup -ocpuset,noprefix X /sys/fs/cgroup/cpuset
+echo "/sbin/cpuset_release_agent" > /sys/fs/cgroup/cpuset/release_agent
 2.2 Adding/removing cpus
 ------------------------
...
@@ -22,16 +22,16 @@ removed from the child(ren).
 An entry is added using devices.allow, and removed using
 devices.deny. For instance
-	echo 'c 1:3 mr' > /cgroups/1/devices.allow
+	echo 'c 1:3 mr' > /sys/fs/cgroup/1/devices.allow
 allows cgroup 1 to read and mknod the device usually known as
 /dev/null. Doing
-	echo a > /cgroups/1/devices.deny
+	echo a > /sys/fs/cgroup/1/devices.deny
 will remove the default 'a *:* rwm' entry. Doing
-	echo a > /cgroups/1/devices.allow
+	echo a > /sys/fs/cgroup/1/devices.allow
 will add the 'a *:* rwm' entry to the whitelist.
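A common pattern, sketched: drop the default allow-all entry, then whitelist only what the group needs (1:3 is /dev/null as noted above; 1:5 for /dev/zero is an assumption added here):

	echo a > /sys/fs/cgroup/1/devices.deny
	echo 'c 1:3 mrw' > /sys/fs/cgroup/1/devices.allow
	echo 'c 1:5 mrw' > /sys/fs/cgroup/1/devices.allow
	cat /sys/fs/cgroup/1/devices.list          (shows the resulting whitelist)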
...
@@ -59,28 +59,28 @@ is non-freezable.
 * Examples of usage :
-# mkdir /containers
-# mount -t cgroup -ofreezer freezer /containers
-# mkdir /containers/0
-# echo $some_pid > /containers/0/tasks
+# mkdir /sys/fs/cgroup/freezer
+# mount -t cgroup -ofreezer freezer /sys/fs/cgroup/freezer
+# mkdir /sys/fs/cgroup/freezer/0
+# echo $some_pid > /sys/fs/cgroup/freezer/0/tasks
 to get status of the freezer subsystem :
-# cat /containers/0/freezer.state
+# cat /sys/fs/cgroup/freezer/0/freezer.state
 THAWED
 to freeze all tasks in the container :
-# echo FROZEN > /containers/0/freezer.state
-# cat /containers/0/freezer.state
+# echo FROZEN > /sys/fs/cgroup/freezer/0/freezer.state
+# cat /sys/fs/cgroup/freezer/0/freezer.state
 FREEZING
-# cat /containers/0/freezer.state
+# cat /sys/fs/cgroup/freezer/0/freezer.state
 FROZEN
 to unfreeze all tasks in the container :
-# echo THAWED > /containers/0/freezer.state
-# cat /containers/0/freezer.state
+# echo THAWED > /sys/fs/cgroup/freezer/0/freezer.state
+# cat /sys/fs/cgroup/freezer/0/freezer.state
 THAWED
 This is the basic mechanism which should do the right thing for user space task
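One use this enables, sketched: freeze the group so every task can be signalled with no window for new forks, then thaw (pending signals are acted on as tasks resume):

# echo FROZEN > /sys/fs/cgroup/freezer/0/freezer.state
# for pid in $(cat /sys/fs/cgroup/freezer/0/tasks); do kill -9 $pid; done
# echo THAWED > /sys/fs/cgroup/freezer/0/freezer.state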
...
@@ -264,16 +264,17 @@ b. Enable CONFIG_RESOURCE_COUNTERS
 c. Enable CONFIG_CGROUP_MEM_RES_CTLR
 d. Enable CONFIG_CGROUP_MEM_RES_CTLR_SWAP (to use swap extension)
-1. Prepare the cgroups
-# mkdir -p /cgroups
-# mount -t cgroup none /cgroups -o memory
+1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)
+# mount -t tmpfs none /sys/fs/cgroup
+# mkdir /sys/fs/cgroup/memory
+# mount -t cgroup none /sys/fs/cgroup/memory -o memory
 2. Make the new group and move bash into it
-# mkdir /cgroups/0
-# echo $$ > /cgroups/0/tasks
+# mkdir /sys/fs/cgroup/memory/0
+# echo $$ > /sys/fs/cgroup/memory/0/tasks
 Since now we're in the 0 cgroup, we can alter the memory limit:
-# echo 4M > /cgroups/0/memory.limit_in_bytes
+# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
 NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
 mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)
@@ -281,11 +282,11 @@ mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)
 NOTE: We can write "-1" to reset the *.limit_in_bytes (unlimited).
 NOTE: We cannot set limits on the root cgroup any more.
-# cat /cgroups/0/memory.limit_in_bytes
+# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
 4194304
 We can check the usage:
-# cat /cgroups/0/memory.usage_in_bytes
+# cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
 1216512
 A successful write to this file does not guarantee a successful set of
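To watch the limit bite, two standard memcg counters are handy (a sketch; both files are part of the same interface):

# cat /sys/fs/cgroup/memory/0/memory.failcnt             (times the limit was hit)
# cat /sys/fs/cgroup/memory/0/memory.max_usage_in_bytes  (high-water mark of usage)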
...
@@ -223,9 +223,10 @@ When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
 group created using the pseudo filesystem. See example steps below to create
 task groups and modify their CPU share using the "cgroups" pseudo filesystem.
-# mkdir /dev/cpuctl
-# mount -t cgroup -ocpu none /dev/cpuctl
-# cd /dev/cpuctl
+# mount -t tmpfs cgroup_root /sys/fs/cgroup
+# mkdir /sys/fs/cgroup/cpu
+# mount -t cgroup -ocpu none /sys/fs/cgroup/cpu
+# cd /sys/fs/cgroup/cpu
 # mkdir multimedia	# create "multimedia" group of tasks
 # mkdir browser	# create "browser" group of tasks
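From here the example continues by weighting the two groups and attaching tasks; a sketch (the share values and pid placeholders are illustrative, 1024 is the default share):

# echo 2048 > multimedia/cpu.shares	# twice the CPU weight of browser
# echo 1024 > browser/cpu.shares
# echo <pid_of_movie_player> > multimedia/tasks
# echo <pid_of_browser> > browser/tasks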
...
@@ -129,9 +129,8 @@ priority!
 Enabling CONFIG_RT_GROUP_SCHED lets you explicitly allocate real
 CPU bandwidth to task groups.
-This uses the /cgroup virtual file system and
-"/cgroup/<cgroup>/cpu.rt_runtime_us" to control the CPU time reserved for each
-control group.
+This uses the cgroup virtual file system and "<cgroup>/cpu.rt_runtime_us"
+to control the CPU time reserved for each control group.
 For more information on working with control groups, you should read
 Documentation/cgroups/cgroups.txt as well.
@@ -150,7 +149,7 @@ For now, this can be simplified to just the following (but see Future plans):
 ===============
 There is work in progress to make the scheduling period for each group
-("/cgroup/<cgroup>/cpu.rt_period_us") configurable as well.
+("<cgroup>/cpu.rt_period_us") configurable as well.
 The constraint on the period is that a subgroup must have a smaller or
 equal period to its parent. But realistically it's not very useful _yet_
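Concretely, reserving 0.1s of real-time CPU per 1s period for a group looks like this (a sketch; assumes the cpu controller mounted at /sys/fs/cgroup/cpu as elsewhere in this patch, the default 1000000us period, and a hypothetical group name):

# mkdir /sys/fs/cgroup/cpu/rtgroup
# echo 100000 > /sys/fs/cgroup/cpu/rtgroup/cpu.rt_runtime_us
# echo <rt_task_pid> > /sys/fs/cgroup/cpu/rtgroup/tasks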
...
@@ -129,12 +129,12 @@ Limit injection to pages owned by memgroup. Specified by inode number
 of the memcg.
 Example:
-	mkdir /cgroup/hwpoison
+	mkdir /sys/fs/cgroup/mem/hwpoison
 	usemem -m 100 -s 1000 &
-	echo `jobs -p` > /cgroup/hwpoison/tasks
-	memcg_ino=$(ls -id /cgroup/hwpoison | cut -f1 -d' ')
+	echo `jobs -p` > /sys/fs/cgroup/mem/hwpoison/tasks
+	memcg_ino=$(ls -id /sys/fs/cgroup/mem/hwpoison | cut -f1 -d' ')
 	echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg
 	page-types -p `pidof init` --hwpoison  # shall do nothing
...