Commit 5046e4cf authored by Vaibhav Nagarnaik, committed by Kleber Sacilotto de Souza

ring-buffer: Allow for rescheduling when removing pages

BugLink: https://bugs.launchpad.net/bugs/1798617

commit 83f36555 upstream.

When reducing ring buffer size, pages are removed by scheduling a work
item on each CPU for the corresponding CPU ring buffer. After the pages
are removed from ring buffer linked list, the pages are free()d in a
tight loop. The loop does not give up the CPU until all pages are removed.
In the worst case, when many pages must be freed, this can stall the
system.

Once the pages are removed from the list, they can be freed while the
work is allowed to reschedule. Calling cond_resched() in the loop
prevents the system hang.

Link: http://lkml.kernel.org/r/20180907223129.71994-1-vnagarnaik@google.com

Cc: stable@vger.kernel.org
Fixes: 83f40318 ("ring-buffer: Make removal of ring buffer pages atomic")
Reported-by: Jason Behmer <jbehmer@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
parent b94c2899
@@ -1513,6 +1513,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	tmp_iter_page = first_page;
 	do {
+		cond_resched();
+
 		to_remove_page = tmp_iter_page;
 		rb_inc_page(cpu_buffer, &tmp_iter_page);