    rps: selective flow shedding during softnet overflow · 99bbc707
    Willem de Bruijn authored
    A cpu executing the network receive path sheds packets when its input
    queue grows to netdev_max_backlog. A single high rate flow (such as a
    spoofed source DoS) can exceed a single cpu processing rate and will
    degrade throughput of other flows hashed onto the same cpu.
    
    This patch adds a more fine-grained hashtable. If the netdev backlog
    is above a threshold, IRQ cpus track the ratio of total traffic of
    each flow (using 4096 buckets, configurable). The ratio is measured
    by counting the number of packets per flow over the last 256 packets
    from the source cpu. Any flow that occupies a large fraction of this
    window (set at 50%) will see packet drop while above the threshold.
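    The accounting described above can be sketched in plain C as a
    simplified model (this is illustrative, not the kernel code; the
    struct name, function name, and fixed constants here are assumptions
    mirroring only the description: 4096 buckets, a 256-packet history
    window, a 50% drop threshold):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative constants from the commit description. */
    #define FLOW_BUCKETS   4096              /* hash buckets per cpu      */
    #define FLOW_HISTORY   256               /* packets in sliding window */
    #define FLOW_MAX_SHARE (FLOW_HISTORY / 2) /* 50% of the window        */

    struct flow_limit {
        unsigned int count;                  /* total packets seen        */
        unsigned int history[FLOW_HISTORY];  /* ring of recent buckets    */
        unsigned int buckets[FLOW_BUCKETS];  /* per-bucket window counts  */
    };

    /* Return 1 if a packet with this flow hash should be shed. */
    static int flow_limit_should_drop(struct flow_limit *fl, uint32_t hash)
    {
        unsigned int pos = fl->count++ % FLOW_HISTORY;
        unsigned int bucket = hash % FLOW_BUCKETS;

        /* Once the window is full, evict the entry from 256 packets ago. */
        if (fl->count > FLOW_HISTORY) {
            unsigned int old = fl->history[pos];
            if (fl->buckets[old] > 0)
                fl->buckets[old]--;
        }
        fl->history[pos] = bucket;
        fl->buckets[bucket]++;

        /* Shed if this flow fills more than half the recent window. */
        return fl->buckets[bucket] > FLOW_MAX_SHARE;
    }

    int main(void)
    {
        static struct flow_limit fl;
        int dropped = 0, i;

        /* One dominant flow (hash 42, 75% of traffic) mixed with many
         * distinct well-behaved flows: only the antagonist is shed. */
        for (i = 0; i < 1000; i++) {
            uint32_t hash = (i % 4 == 0) ? (uint32_t)i * 2654435761u : 42u;
            if (flow_limit_should_drop(&fl, hash))
                dropped++;
        }
        printf("dropped %d of 1000 packets\n", dropped);
        return 0;
    }
    ```

    In the real patch this accounting only runs while the input queue is
    above the backlog threshold, so well-behaved traffic on an idle cpu
    pays no cost.
    
    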
    
    Tested:
    Setup is a multi-threaded UDP echo server with network rx IRQ on cpu0,
    kernel receive (RPS) on cpu0 and application threads on cpus 2--7
    each handling 20k req/s. Throughput halves when hit with a 400 kpps
    antagonist storm. With this patch applied, antagonist overload is
    dropped and the server processes its complete load.
    
    The patch is effective when kernel receive processing is the
    bottleneck. The above RPS scenario is an extreme case, but the same
    point is reached with RFS and sufficient kernel processing (iptables,
    packet socket tap, ..).
    Signed-off-by: Willem de Bruijn <willemb@google.com>
    Acked-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>