Commit 085bec85 authored by Linus Torvalds

Merge branch 'for-4.15-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup

Pull cgroup fixes from Tejun Heo:

 - Prateek posted a couple of patches to fix a deadlock involving cpuset
   and workqueue. They unfortunately caused a different deadlock, and
   the recent workqueue hotplug simplification removed the original
   deadlock anyway, so Prateek's two patches are reverted for now.

 - The new stat code was missing u64_stats initialization. Fixed.

 - Doc and other misc changes

* 'for-4.15-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cgroup: add warning about RT not being supported on cgroup2
  Revert "cgroup/cpuset: remove circular dependency deadlock"
  Revert "cpuset: Make cpuset hotplug synchronous"
  cgroup: properly init u64_stats
  debug cgroup: use task_css_set instead of rcu_dereference
  cpuset: Make cpuset hotplug synchronous
  cgroup/cpuset: remove circular dependency deadlock
parents 72dd379e c2f31b79
@@ -898,6 +898,13 @@ controller implements weight and absolute bandwidth limit models for
 normal scheduling policy and absolute bandwidth allocation model for
 realtime scheduling policy.
 
+WARNING: cgroup2 doesn't yet support control of realtime processes and
+the cpu controller can only be enabled when all RT processes are in
+the root cgroup. Be aware that system management software may already
+have placed RT processes into nonroot cgroups during the system boot
+process, and these processes may need to be moved to the root cgroup
+before the cpu controller can be enabled.
+
 
 CPU Interface Files
 ~~~~~~~~~~~~~~~~~~~
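As a concrete illustration of the documentation warning above (not part of this merge), here is a minimal user-space sketch that tries to enable the cpu controller on the cgroup2 root. It assumes cgroup2 is mounted at /sys/fs/cgroup; while RT processes remain in nonroot cgroups, the write is expected to fail.

/*
 * Illustrative user-space sketch, not part of this merge.
 * Assumes cgroup2 is mounted at /sys/fs/cgroup.  Per the warning above,
 * the write is expected to fail as long as RT processes sit in nonroot
 * cgroups.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/sys/fs/cgroup/cgroup.subtree_control", O_WRONLY);

        if (fd < 0) {
                perror("open cgroup.subtree_control");
                return 1;
        }

        /* ask the root cgroup to enable the cpu controller for its children */
        if (write(fd, "+cpu", strlen("+cpu")) < 0)
                fprintf(stderr, "enabling cpu controller failed: %s\n",
                        strerror(errno));

        close(fd);
        return 0;
}

Once all RT processes have been moved into the root cgroup (for example by the system management software), the same write should succeed.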
@@ -50,7 +50,7 @@ static int current_css_set_read(struct seq_file *seq, void *v)
 	spin_lock_irq(&css_set_lock);
 	rcu_read_lock();
-	cset = rcu_dereference(current->cgroups);
+	cset = task_css_set(current);
 	refcnt = refcount_read(&cset->refcount);
 	seq_printf(seq, "css_set %pK %d", cset, refcnt);
 	if (refcnt > cset->nr_tasks)
@@ -96,7 +96,7 @@ static int current_css_set_cg_links_read(struct seq_file *seq, void *v)
 	spin_lock_irq(&css_set_lock);
 	rcu_read_lock();
-	cset = rcu_dereference(current->cgroups);
+	cset = task_css_set(current);
 	list_for_each_entry(link, &cset->cgrp_links, cgrp_link) {
 		struct cgroup *c = link->cgrp;
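For context on the two debug-cgroup hunks above: task_css_set() is the cgroup accessor for a task's RCU-protected css_set pointer, which is why it replaces the open-coded rcu_dereference(current->cgroups). The kernel-style sketch below is illustrative only and not part of this merge; demo_print_css_set() is a hypothetical helper.

/*
 * Illustrative kernel-style sketch, not part of this merge.
 * demo_print_css_set() is a hypothetical helper showing the usual
 * pattern: resolve the task's css_set via task_css_set() while holding
 * rcu_read_lock() (the debug files above additionally hold css_set_lock).
 */
#include <linux/cgroup.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/sched.h>

static void demo_print_css_set(struct task_struct *task)
{
	struct css_set *cset;

	rcu_read_lock();
	cset = task_css_set(task);	/* instead of rcu_dereference(task->cgroups) */
	pr_info("css_set %p, refcount %u\n",
		cset, refcount_read(&cset->refcount));
	rcu_read_unlock();
}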
@@ -296,8 +296,12 @@ int cgroup_stat_init(struct cgroup *cgrp)
 	}
 
 	/* ->updated_children list is self terminated */
-	for_each_possible_cpu(cpu)
-		cgroup_cpu_stat(cgrp, cpu)->updated_children = cgrp;
+	for_each_possible_cpu(cpu) {
+		struct cgroup_cpu_stat *cstat = cgroup_cpu_stat(cgrp, cpu);
+
+		cstat->updated_children = cgrp;
+		u64_stats_init(&cstat->sync);
+	}
 
 	prev_cputime_init(&cgrp->stat.prev_cputime);
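To show why the added u64_stats_init() call matters: struct u64_stats_sync lets 32-bit kernels read 64-bit counters consistently via a seqcount, and that seqcount has to be initialized before the first update or fetch. The sketch below is illustrative only, not part of this merge, and the demo_* names are hypothetical.

/*
 * Illustrative kernel-style sketch, not part of this merge; the demo_*
 * names are hypothetical.  It shows the u64_stats_sync pattern that the
 * added u64_stats_init() call in cgroup_stat_init() serves: writers
 * bracket updates with update_begin/end, readers retry until they see
 * a consistent 64-bit value.
 */
#include <linux/types.h>
#include <linux/u64_stats_sync.h>

struct demo_counter {
	u64 bytes;
	struct u64_stats_sync sync;	/* must be initialized before use */
};

static void demo_counter_init(struct demo_counter *c)
{
	c->bytes = 0;
	u64_stats_init(&c->sync);	/* the call cgroup_stat_init() was missing */
}

static void demo_counter_add(struct demo_counter *c, u64 delta)
{
	u64_stats_update_begin(&c->sync);
	c->bytes += delta;
	u64_stats_update_end(&c->sync);
}

static u64 demo_counter_read(struct demo_counter *c)
{
	unsigned int start;
	u64 val;

	do {
		start = u64_stats_fetch_begin(&c->sync);
		val = c->bytes;
	} while (u64_stats_fetch_retry(&c->sync, start));

	return val;
}

On 64-bit kernels the sync member is empty and these helpers compile away, which is why the missing initialization mainly matters on 32-bit configurations.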