commit c8156fc7
Author: Ming Lei <ming.lei@redhat.com>

    dm raid: fix updating of max_discard_sectors limit
    The unit of 'chunk_size' is bytes, not sectors, so fix the bug by setting
    the queue_limits' max_discard_sectors to rs->md.chunk_sectors.  Also,
    rename chunk_size to chunk_size_bytes.
    
    Without this fix, an overly large max_discard_sectors is applied to the
    request queue of dm-raid, and the raid code eventually has to split the
    bio again.
    
    This re-split by the raid code causes the following nested clone_endio
    calls:
    
    1) one big bio 'A' is submitted to the dm queue and serves as the
    original bio
    
    2) a new bio 'B' is cloned from the original bio 'A', .map() is run on
    'B', and B's original bio points to 'A'
    
    3) the raid code sees that 'B' is too big, splits 'B', and re-submits
    the remainder of 'B' to the dm-raid queue via generic_make_request()
    
    4) dm now handles 'B' as a new original bio, allocates a new clone bio
    'C', and runs .map() on 'C'. Meanwhile, C's original bio points to 'B'
    
    5) suppose 'C' is now completed by the raid code directly; then
    clone_endio() is called recursively:
    
    	clone_endio(C)
    		->clone_endio(B)		#B is original bio of 'C'
    			->bio_endio(A)
    
    'A' can be big enough to cause hundreds of nested clone_endio() calls,
    so the stack can easily be corrupted.
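    The completion chain in step 5) can be modeled with a minimal
    user-space sketch (struct and function names here only mimic the kernel
    ones; this is not the actual dm code): each re-split adds one link to
    the chain of original bios, and completing the last clone consumes one
    stack frame per link.

    ```c
    #include <assert.h>
    #include <stddef.h>

    struct bio { struct bio *orig; };  /* each clone points to its original */

    static int endio_depth;

    /* Completing a clone recursively completes its original bio,
     * consuming one extra stack frame per re-split. */
    static void clone_endio(struct bio *b)
    {
        endio_depth++;
        if (b->orig)
            clone_endio(b->orig);
    }

    int main(void)
    {
        enum { NSPLITS = 300 };  /* hundreds of re-splits, as described above */
        struct bio bios[NSPLITS + 1];

        bios[0].orig = NULL;  /* 'A', the original bio */
        for (int i = 1; i <= NSPLITS; i++)
            bios[i].orig = &bios[i - 1];  /* each re-split clones the previous */

        clone_endio(&bios[NSPLITS]);
        assert(endio_depth == NSPLITS + 1);  /* recursion depth grows with splits */
        return 0;
    }
    ```

    With the fix, the bio is split once by dm to the correct discard limit,
    so no such recursive completion chain builds up.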
    
    Fixes: 61697a6a ("dm: eliminate 'split_discard_bios' flag from DM target interface")
    Cc: stable@vger.kernel.org
    Signed-off-by: Ming Lei <ming.lei@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>