    Fixed bug#42503 - "Lost connection" errors when using
    compression protocol.
    Authored by Dmitry Shulga.
    
    The loss of connection was caused by a malformed packet
    sent by the server when the query cache was in use.
    When storing data in the query cache, the query cache
    memory allocation algorithm tended to reduce the number
    of memory blocks needed to store a result set, eventually
    storing the entire result set in a single block. For a
    large result set, this memory block could grow quite
    large - 30 to 40 MB or more.
    When such a result set was sent to the client, the entire
    memory block was compressed and written to the network as
    a single network packet. However, the length of a network
    packet is limited to 0xFFFFFF bytes (16MB), since the
    packet format only allows 3 bytes for the packet length.
    As a result, a malformed, overly large packet with a
    truncated length would be sent to the client and break
    the client/server protocol.
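    
    Below is a minimal illustrative sketch (not the actual net_serv.cc
    code; the function name is hypothetical) of how the 3-byte length
    field in the packet header silently truncates any length above
    0xFFFFFF, which is what turned the oversized query cache block into
    a malformed packet:
    
      #include <cstdint>
    
      // Hypothetical helper: write the 4-byte MySQL packet header, i.e.
      // a 3-byte little-endian payload length followed by a 1-byte
      // sequence number.
      static void write_packet_header(unsigned char *buf,
                                      uint32_t payload_len,
                                      unsigned char sequence_no) {
        // Only the low 24 bits of payload_len fit into the header;
        // anything larger is silently truncated.
        buf[0] = static_cast<unsigned char>(payload_len & 0xFF);
        buf[1] = static_cast<unsigned char>((payload_len >> 8) & 0xFF);
        buf[2] = static_cast<unsigned char>((payload_len >> 16) & 0xFF);
        buf[3] = sequence_no;
      }
    
    For example, for a 40MB payload (0x2800000 bytes) the stored length
    becomes 0x800000 (8MB), so the client reads a packet whose declared
    length does not match the data that follows and drops the connection.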
    
    The solution is, when sending result sets from the query
    cache, to ensure that the data is chopped into
    network packets of size <= 16MB, so that there
    is no corruption of packet length. This solution,
    however, has a shortcoming: since the result set
    is still stored in the query cache as a single block,
    at the time of sending we have lost the boundaries of
    individual logical packets (one logical packet = one row
    of the result set) and thus can end up sending a truncated
    logical packet inside a compressed network packet.
    
    As a result, the client may require more memory than
    max_allowed_packet to keep both the truncated last
    logical packet and the next compressed packet.
    This practically never happens without compression,
    since without compression it is very unlikely that
    a) a truncated logical packet would remain on the client
    when it is time to read the next packet, and
    b) the subsequent logical packet being read would be
    so large that size-of-new-packet + size-of-old-packet-tail >
    max_allowed_packet.
    To remedy this issue, we send data in 1MB packets, which
    is well below the current client default of 16MB for
    max_allowed_packet, yet large enough to avoid unnecessary
    overhead from too many syscalls per result set.
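    
    The following is a minimal sketch of that chopping logic under the
    assumptions above (the names send_cached_block, net_write_packet and
    QUERY_CACHE_SEND_CHUNK are illustrative, not the actual net_serv.cc
    implementation):
    
      #include <cstddef>
    
      // Hypothetical low-level writer that emits one network packet
      // (adding the header and, if enabled, compression). Returns true
      // on success; assumed to exist for this sketch.
      bool net_write_packet(const unsigned char *data, size_t len);
    
      static const size_t QUERY_CACHE_SEND_CHUNK = 1024 * 1024; /* 1MB */
    
      // Chop the single large query cache block into network packets of
      // at most QUERY_CACHE_SEND_CHUNK bytes. Logical row boundaries
      // inside the block are not respected, which is the shortcoming
      // described above.
      static bool send_cached_block(const unsigned char *block,
                                    size_t block_len) {
        while (block_len > 0) {
          size_t chunk = block_len < QUERY_CACHE_SEND_CHUNK
                             ? block_len
                             : QUERY_CACHE_SEND_CHUNK;
          if (!net_write_packet(block, chunk))
            return false; /* propagate the write error */
          block += chunk;
          block_len -= chunk;
        }
        return true;
      }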