Commit 002b3436 authored by Davidlohr Bueso, committed by Linus Torvalds

fs/epoll: loosen irq safety in ep_scan_ready_list()

Patch series "fs/epoll: loosen irq safety when possible".

Both patches replace saving+restoring interrupts when taking the ep->lock
(now the waitqueue lock) with simply disabling local irqs.  This shows
immediate performance benefits in patch 1 for an epoll workload running on
Xen.  The main concern we need to have with this sort of change in epoll
is ep_poll_callback(), which is passed to the waitqueue wakeup machinery
and is very often run under irq context; this patch does not touch that
call.
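
For reference, a minimal sketch of the two locking idioms being traded
off (hypothetical helper names, not code from this patch):

#include <linux/spinlock.h>

/*
 * Illustrative helpers only. A caller that may already run with
 * interrupts disabled (e.g. from a waitqueue wakeup in irq context,
 * as ep_poll_callback() can) must save and restore the irq state:
 */
static void lock_from_any_context(spinlock_t *lock)
{
        unsigned long flags;

        spin_lock_irqsave(lock, flags);         /* save current irq state */
        /* ... critical section ... */
        spin_unlock_irqrestore(lock, flags);    /* restore saved state */
}

/*
 * A caller known to always run with interrupts enabled, such as
 * ep_scan_ready_list(), can simply disable and re-enable local irqs,
 * avoiding the (potentially expensive) state save/restore:
 */
static void lock_with_irqs_known_enabled(spinlock_t *lock)
{
        spin_lock_irq(lock);            /* unconditionally disable irqs */
        /* ... critical section ... */
        spin_unlock_irq(lock);          /* unconditionally re-enable irqs */
}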

Patches have been tested pretty heavily with the customer workload,
microbenchmarks, LTP testcases, and two high-level workloads that use
epoll under the hood: nginx and libevent benchmarks.

This patch (of 2):

Saving and restoring interrupts in ep_scan_ready_list() is overkill, as
the function is never called with interrupts disabled.  Loosen this to
simply disabling local irqs, such that archs where managing irqs is
expensive, and/or virtual environments, can benefit.  This patch yields
some throughput improvements on an epoll-intensive workload running on a
single Xen DomU.

1 Job	 7500  -->   8800 enq/s  (+17%)
2 Jobs	14000  -->  15200 enq/s  (+8%)
3 Jobs	20500  -->  22300 enq/s  (+8%)
4 Jobs	25000  -->  28000 enq/s  (+8-12%)

On bare metal:

For a 2-socket, 40-core (HT) IvyBridge, a few workloads were run.
Unfortunately I don't have a Xen environment, and for the Xen results I
do have (the numbers in patch 1) I don't have the actual workload, so the
two sets of numbers cannot be compared directly.

1) Different configurations were used for an epoll_wait (pipes io)
   microbench (http://linux-scalability.org/epoll/epoll-test.c).  It shows
   around a 7-10% improvement in the overall total number of epoll_wait()
   loop iterations, using both regular and nested epolls, so these are
   very raw numbers, but measurable nonetheless.  (A simplified sketch of
   such a loop follows this list.)

# threads	vanilla		dirty
     1		1677717		1805587
     2		1660510		1854064
     4		1610184		1805484
     8		1577696		1751222
     16		1568837		1725299
     32		1291532		1378463
     64		 752584		 787368

   Note that stddev is pretty small.

2) Another pipe test (http://www.xmailserver.org/linux-patches/pipetest.c),
   which shows no real measurable improvement.
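
For illustration, here is a minimal sketch of the kind of epoll_wait()
loop counted by the microbench in (1).  It is a simplified, hypothetical
stand-in, not the epoll-test.c linked above: it keeps one pipe fd always
readable and counts level-triggered wakeups over a fixed interval.

#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <sys/epoll.h>

int main(void)
{
        int pipefd[2];
        struct epoll_event ev = { .events = EPOLLIN };
        struct epoll_event out;
        long loops = 0;
        time_t end;

        if (pipe(pipefd) < 0) {
                perror("pipe");
                return 1;
        }
        /* Write one byte so the read end is always ready. */
        if (write(pipefd[1], "x", 1) != 1) {
                perror("write");
                return 1;
        }

        int epfd = epoll_create1(0);
        if (epfd < 0) {
                perror("epoll_create1");
                return 1;
        }
        ev.data.fd = pipefd[0];
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev) < 0) {
                perror("epoll_ctl");
                return 1;
        }

        end = time(NULL) + 5;   /* run for ~5 seconds */
        while (time(NULL) < end) {
                /* Level-triggered: returns immediately while data is pending. */
                if (epoll_wait(epfd, &out, 1, -1) == 1)
                        loops++;
        }
        printf("epoll_wait() loops: %ld\n", loops);
        return 0;
}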

Link: http://lkml.kernel.org/r/20180720172956.2883-2-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e05a8e4d
@@ -667,7 +667,6 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
 {
        __poll_t res;
        int pwake = 0;
-       unsigned long flags;
        struct epitem *epi, *nepi;
        LIST_HEAD(txlist);

@@ -687,17 +686,17 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
         * because we want the "sproc" callback to be able to do it
         * in a lockless way.
         */
-       spin_lock_irqsave(&ep->wq.lock, flags);
+       spin_lock_irq(&ep->wq.lock);
        list_splice_init(&ep->rdllist, &txlist);
        ep->ovflist = NULL;
-       spin_unlock_irqrestore(&ep->wq.lock, flags);
+       spin_unlock_irq(&ep->wq.lock);

        /*
         * Now call the callback function.
         */
        res = (*sproc)(ep, &txlist, priv);

-       spin_lock_irqsave(&ep->wq.lock, flags);
+       spin_lock_irq(&ep->wq.lock);
        /*
         * During the time we spent inside the "sproc" callback, some
         * other events might have been queued by the poll callback.
@@ -739,7 +738,7 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
                if (waitqueue_active(&ep->poll_wait))
                        pwake++;
        }
-       spin_unlock_irqrestore(&ep->wq.lock, flags);
+       spin_unlock_irq(&ep->wq.lock);

        if (!ep_locked)
                mutex_unlock(&ep->mtx);