    xen-netback: count number of required slots for an skb more carefully · 6e43fc04
    David Vrabel authored
    When a VM is providing an iSCSI target and the LUN is used by the
    backend domain, the generated skbs for direct I/O writes to the disk
    have large, multi-page skb->data but no frags.
    
    With some lengths and starting offsets, xen_netbk_count_skb_slots()
    would be one slot short because the simple calculation of
    DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) did not account for the
    decisions made by start_new_rx_buffer(), which does not guarantee
    that responses are fully packed.
    
    For example, an skb whose length is less than 2 pages but which spans
    3 pages would be counted as requiring 2 slots when it actually uses
    3 slots.
    
    skb->data:
    
        |        1111|222222222222|3333        |
    
    Fully packed, this would need 2 slots:
    
        |111122222222|22223333    |
    
    But because the 2nd page fits wholly into a slot, it is not split
    across slots and instead goes into a slot of its own:
    
        |1111        |222222222222|3333        |
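    
    To make the arithmetic concrete (the 1024-byte head and tail pieces
    below are illustrative choices, not values from a real skb), a
    stand-alone check of the old estimate for this layout:
    
        #include <stdio.h>
    
        #define PAGE_SIZE 4096UL
        #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    
        int main(void)
        {
                /* Linear data 1.5 pages long, starting 1024 bytes before
                 * a page boundary, so it spans three pages as drawn
                 * above. */
                unsigned long headlen = 1024 + PAGE_SIZE + 1024;
    
                /* The old estimate looks only at the length, never at
                 * how start_new_rx_buffer() will pack the copies, so it
                 * reports 2 slots even though the copy needs 3. */
                printf("%lu\n", DIV_ROUND_UP(headlen, PAGE_SIZE));
                return 0;
        }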
    
    Miscounting the number of slots means netback may push more responses
    than there are available requests.  This causes the frontend to get
    very confused and report "Too many frags/slots".  The frontend never
    recovers and will eventually BUG.
    
    Fix this by counting the number of required slots more carefully.  In
    xen_netbk_count_skb_slots(), follow the algorithm used by
    xen_netbk_gop_skb() more closely by introducing
    xen_netbk_count_frag_slots(), the dry-run equivalent of
    netbk_gop_frag_copy().
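    
    A minimal stand-alone sketch of that dry-run idea, assuming a slot
    size equal to PAGE_SIZE and ignoring the head-buffer and GSO special
    cases handled by the real start_new_rx_buffer() and
    netbk_gop_frag_copy(); the skb_desc type and the start_new_slot(),
    count_chunk_slots() and count_skb_slots() helpers are illustrative
    names, not the kernel's or the patch's:
    
        #include <stdio.h>
    
        #define PAGE_SIZE 4096UL
        #define SLOT_SIZE PAGE_SIZE     /* stands in for MAX_BUFFER_OFFSET */
    
        struct frag {
                unsigned long offset;   /* offset within the frag's page */
                unsigned long size;
        };
    
        struct skb_desc {                   /* simplified stand-in for an skb */
                unsigned long head_offset;  /* offset of skb->data in its page */
                unsigned long head_len;     /* skb_headlen() */
                unsigned int nr_frags;
                const struct frag *frags;
        };
    
        /* Packing rule as described above: open a fresh slot when the
         * current one is full, or when the chunk would overflow it but
         * fits wholly into an empty slot and the current slot already
         * holds some data. */
        static int start_new_slot(unsigned long used, unsigned long size)
        {
                return used == SLOT_SIZE ||
                       (used + size > SLOT_SIZE && size <= SLOT_SIZE && used);
        }
    
        /* Dry-run equivalent of copying one chunk: count the slots it
         * would consume, carrying the fill level of the current slot in
         * *slot_used so consecutive chunks pack exactly as the copies
         * would. */
        static unsigned int count_chunk_slots(unsigned long offset,
                                              unsigned long size,
                                              unsigned long *slot_used)
        {
                unsigned int count = 0;
    
                offset %= PAGE_SIZE;
                while (size > 0) {
                        unsigned long bytes = PAGE_SIZE - offset;
    
                        if (bytes > size)
                                bytes = size;
                        if (start_new_slot(*slot_used, bytes)) {
                                count++;
                                *slot_used = 0;
                        }
                        if (*slot_used + bytes > SLOT_SIZE)
                                bytes = SLOT_SIZE - *slot_used;
                        *slot_used += bytes;
                        offset = (offset + bytes) % PAGE_SIZE;
                        size -= bytes;
                }
                return count;
        }
    
        /* Walk the skb the same way the copy path does: the linear area
         * one source page at a time, then each frag, all sharing one
         * slot state. */
        static unsigned int count_skb_slots(const struct skb_desc *skb)
        {
                unsigned long slot_used = 0;
                unsigned int slots = 1;     /* the first slot is open */
                unsigned long offset = skb->head_offset;
                unsigned long remaining = skb->head_len;
                unsigned int i;
    
                while (remaining > 0) {
                        unsigned long len = PAGE_SIZE - (offset % PAGE_SIZE);
    
                        if (len > remaining)
                                len = remaining;
                        slots += count_chunk_slots(offset, len, &slot_used);
                        offset += len;
                        remaining -= len;
                }
    
                for (i = 0; i < skb->nr_frags; i++)
                        slots += count_chunk_slots(skb->frags[i].offset,
                                                   skb->frags[i].size,
                                                   &slot_used);
                return slots;
        }
    
        int main(void)
        {
                /* The example skb: linear data only, 1.5 pages long,
                 * starting 1024 bytes before a page boundary so it spans
                 * three pages. */
                struct skb_desc skb = {
                        .head_offset = PAGE_SIZE - 1024,
                        .head_len = 1024 + PAGE_SIZE + 1024,
                        .nr_frags = 0,
                        .frags = NULL,
                };
    
                printf("%u slots\n", count_skb_slots(&skb));
                return 0;
        }
    
    Fed the example layout above, this walk reports 3 slots, matching the
    number of responses the copy path would actually push.
    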
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>