    net: avoid 32 x truesize under-estimation for tiny skbs
    Both virtio net and napi_get_frags() allocate skbs with a very
    small skb->head.
    
    While using page fragments instead of a kmalloc-backed skb->head might give
    a small performance improvement in some cases, there is a huge risk of
    underestimating memory usage.
    
    For both GOOD_COPY_LEN and GRO_MAX_HEAD, we can fit at least 32 allocations
    per page (an order-3 page on x86), or even 64 on PowerPC.
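    As a back-of-the-envelope illustration of that ratio (the 4 KB base page
    and the ~1 KB per-fragment consumption below are assumptions for an x86
    host, not values taken from the kernel sources):
    
        #include <stdio.h>
    
        int main(void)
        {
            /* Illustrative numbers: 4 KB base pages on x86 and roughly
             * 1 KB of page-fragment consumed per tiny skb->head.
             */
            const unsigned int page_size  = 4096;
            const unsigned int order3_len = page_size << 3;   /* 32768 bytes */
            const unsigned int frag_size  = 1024;
    
            unsigned int skbs_per_page = order3_len / frag_size;
    
            printf("tiny skbs carved from one order-3 page: %u\n",
                   skbs_per_page);
            /* A single lingering skb pins the whole compound page while
             * its truesize only reflects one fragment.
             */
            printf("memory pinned if one skb lingers: %u bytes (truesize ~%u)\n",
                   order3_len, frag_size);
            return 0;
        }
    
    With a 64 KB PAGE_SIZE, as commonly configured on PowerPC, the same
    arithmetic gives the 64 allocations per page mentioned above.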
    
    We have been tracking OOM issues on GKE hosts that hit tcp_mem limits
    while consuming far more memory for TCP buffers than instructed by tcp_mem[2].
    
    Even if we forced napi_alloc_skb() to only use order-0 pages, the issue
    would still be present on arches with PAGE_SIZE >= 32768.
    
    This patch makes sure that small skb heads are kmalloc-backed, so that
    other objects in the slab page can be reused instead of being held for as
    long as skbs are sitting in socket queues.
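    A minimal sketch of that idea, modelled on the __napi_alloc_skb() path
    (the helper name, the 1024-byte cutoff and the flag combination below are
    illustrative assumptions, not a verbatim copy of the patch):
    
        #include <linux/netdevice.h>
        #include <linux/numa.h>
        #include <linux/skbuff.h>
    
        /* Hypothetical helper: route small requests through the regular
         * kmalloc-backed __alloc_skb() path instead of carving skb->head
         * out of the shared napi page_frag, so each head can be freed
         * independently of the other heads sharing a compound page.
         */
        static struct sk_buff *tiny_napi_alloc_skb(struct napi_struct *napi,
                                                   unsigned int len,
                                                   gfp_t gfp_mask)
        {
            if (len <= SKB_WITH_OVERHEAD(1024))   /* illustrative cutoff */
                return __alloc_skb(len, gfp_mask,
                                   SKB_ALLOC_RX | SKB_ALLOC_NAPI,
                                   NUMA_NO_NODE);
    
            /* Larger heads keep using the page_frag fast path. */
            return napi_alloc_skb(napi, len);
        }
    
    The actual commit presumably folds such a check directly into
    __napi_alloc_skb(), so existing callers are unaffected.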
    
    Note that we might in the future use the sk_buff napi cache instead of
    going through a more expensive __alloc_skb().
    
    Another idea would be to use separate page sizes depending on the
    allocated length (to never have more than 4 frags per page).
    
    I would like to thank Greg Thelen for his precious help on this matter;
    analysing crash dumps is always a time-consuming task.
    
    Fixes: fd11a83d ("net: Pull out core bits of __netdev_alloc_skb and add __napi_alloc_skb")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Cc: Paolo Abeni <pabeni@redhat.com>
    Cc: Greg Thelen <gthelen@google.com>
    Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Link: https://lore.kernel.org/r/20210113161819.1155526-1-eric.dumazet@gmail.com
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>