Commit 6b226487 authored by Miroslav Urbanek, committed by Steffen Klassert

flowcache: Increase threshold for refusing new allocations

The threshold for OOM protection is too small for systems with a large
number of CPUs. Applications report ENOBUFS on connect() every 10
minutes.

The problem is that the variable net->xfrm.flow_cache_gc_count is a
global counter while the variable fc->high_watermark is a per-CPU
constant. Take the number of CPUs into account as well.
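To see why the old threshold is hit so easily, it helps to plug in numbers. The following is a minimal userspace sketch, not kernel code; the CPU count and high_watermark values are assumptions chosen only to make the arithmetic concrete, and the variable names mirror the kernel identifiers for readability.

#include <stdio.h>

int main(void)
{
	/* Illustrative assumptions, not values read from a running kernel. */
	unsigned long num_online_cpus = 64;	/* assumed number of online CPUs */
	unsigned long high_watermark = 4096;	/* assumed per-CPU fc->high_watermark */

	/* Old check: the global flow_cache_gc_count was compared against the
	 * per-CPU constant alone, so the cap did not grow with the CPU count. */
	unsigned long old_threshold = high_watermark;

	/* New check: 2 * num_online_cpus() * fc->high_watermark, as in the patch. */
	unsigned long new_threshold = 2 * num_online_cpus * high_watermark;

	printf("old threshold: %lu pending GC entries\n", old_threshold);
	printf("new threshold: %lu pending GC entries\n", new_threshold);
	return 0;
}

Under these assumed values, the cap on pending garbage-collection entries grows from 4096 to 524288 on a 64-CPU machine, which illustrates why connect() stops running into ENOBUFS so quickly.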

Fixes: 6ad3122a ("flowcache: Avoid OOM condition under preasure")
Reported-by: Lukáš Koldrt <lk@excello.cz>
Tested-by: Jan Hejl <jh@excello.cz>
Signed-off-by: Miroslav Urbanek <mu@miroslavurbanek.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
parent 330e832a
@@ -95,7 +95,6 @@ static void flow_cache_gc_task(struct work_struct *work)
 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) {
 		flow_entry_kill(fce, xfrm);
 		atomic_dec(&xfrm->flow_cache_gc_count);
-		WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0);
 	}
 }
 
@@ -236,9 +235,8 @@ flow_cache_lookup(struct net *net, const struct flowi *key, u16 family, u8 dir,
 		if (fcp->hash_count > fc->high_watermark)
 			flow_cache_shrink(fc, fcp);
 
-		if (fcp->hash_count > 2 * fc->high_watermark ||
-		    atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) {
-			atomic_inc(&net->xfrm.flow_cache_genid);
+		if (atomic_read(&net->xfrm.flow_cache_gc_count) >
+		    2 * num_online_cpus() * fc->high_watermark) {
 			flo = ERR_PTR(-ENOBUFS);
 			goto ret_object;
 		}