Commit e5dc5aff authored by Judy Hsiao's avatar Judy Hsiao Committed by David S. Miller

neighbour: Don't let neigh_forced_gc() disable preemption for long

We are seeing cases where neigh_cleanup_and_release() is called by
neigh_forced_gc() many times in a row with preemption turned off.
When running on a low powered CPU at a low CPU frequency, this has
been measured to keep preemption off for ~10 ms. That's not great on a
system with HZ=1000 which expects tasks to be able to schedule in
with ~1ms latency.
Suggested-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Judy Hsiao <judyhsiao@chromium.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 179a8b51
@@ -253,9 +253,11 @@ static int neigh_forced_gc(struct neigh_table *tbl)
 {
 	int max_clean = atomic_read(&tbl->gc_entries) -
 			READ_ONCE(tbl->gc_thresh2);
+	u64 tmax = ktime_get_ns() + NSEC_PER_MSEC;
 	unsigned long tref = jiffies - 5 * HZ;
 	struct neighbour *n, *tmp;
 	int shrunk = 0;
+	int loop = 0;

 	NEIGH_CACHE_STAT_INC(tbl, forced_gc_runs);
@@ -278,11 +280,16 @@ static int neigh_forced_gc(struct neigh_table *tbl)
 			shrunk++;
 			if (shrunk >= max_clean)
 				break;
+			if (++loop == 16) {
+				if (ktime_get_ns() > tmax)
+					goto unlock;
+				loop = 0;
+			}
 		}
 	}

 	WRITE_ONCE(tbl->last_flush, jiffies);
+unlock:
 	write_unlock_bh(&tbl->lock);

 	return shrunk;