    dm mpath: add IO affinity path selector · e4d2e82b
    This patch adds a path selector that selects paths based on a
    CPU-to-path mapping the user passes in and the CPU we are executing
    on. The primary user for this PS is a setup where the app is
    optimized to use specific CPUs, so other PSs undo the app's
    handiwork, and where the storage and its transport are not a
    bottleneck.
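
    To illustrate the mapping idea, here is a quick user-space sketch
    (not the code this patch adds; the struct, device names and the
    select_path() helper are made up for illustration). Each path
    carries a hex CPU mask, and the selector hands back the path whose
    mask covers the CPU the IO is being submitted from:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>

    struct demo_path {
            const char *dev;        /* path device, e.g. "8:16" */
            uint64_t cpu_mask;      /* bit N set means CPU N uses this path */
    };

    /* One CPU per path for the first four CPUs, like the table below. */
    static struct demo_path paths[] = {
            { "8:16", 0x1 },
            { "8:32", 0x2 },
            { "8:64", 0x4 },
            { "8:48", 0x8 },
    };

    /* Return the path whose mask covers the submitting CPU. */
    static struct demo_path *select_path(int cpu)
    {
            int i;

            for (i = 0; i < (int)(sizeof(paths) / sizeof(paths[0])); i++)
                    if (cpu >= 0 && cpu < 64 && (paths[i].cpu_mask & (1ULL << cpu)))
                            return &paths[i];
            /* No mask covers this CPU: fall back to the first path. */
            return &paths[0];
    }

    int main(void)
    {
            /* sched_getcpu() stands in for the CPU doing the IO. */
            int cpu = sched_getcpu();

            printf("CPU %d -> path %s\n", cpu, select_path(cpu)->dev);
            return 0;
    }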
    
    For these io-affinity PS setups a path's transport/interconnect
    performance is not going to fluctuate a lot and there are no major
    differences between paths, so the QL/HST smarts do not help and RR
    always messes up what the app is trying to do.
    
    On a system with 16 cores, where you have a job per CPU:
    
    fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=128 --numjobs=16
    
    and a dm-multipath device setup where each CPU is mapped to one path:
    
    // When in mq mode I had to set dm_mq_nr_hw_queues=$NUM_PATHS.
    // Bio mode also showed similar results.
    0 16777216 multipath 0 0 1 1 io-affinity 0 16 1 8:16 1 8:32 2 8:64 4
    8:48 8 8:80 10 8:96 20 8:112 40 8:128 80 8:144 100 8:160 200 8:176
    400 8:192 800 8:208 1000 8:224 2000 8:240 4000 65:0 8000
    
    we can see an IOPs increase of 25%.
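
    For reference, reading the table left to right: 0 features, 0
    hardware handler args, 1 priority group starting with group 1, then
    the group itself: the io-affinity selector with 0 selector args and
    16 paths taking 1 path argument each, where every path is
    <major:minor> followed by a hex CPU mask. A hypothetical helper for
    printing that kind of one-CPU-per-path table (the device names and
    sector count below are placeholders) could look like:

    #include <stdio.h>

    int main(void)
    {
            /* Placeholder path devices, one per CPU (major:minor). */
            const char *devs[] = { "8:16", "8:32", "8:64", "8:48" };
            const int ncpus = (int)(sizeof(devs) / sizeof(devs[0]));
            const unsigned long long sectors = 16777216ULL;
            int cpu;

            /*
             * <start> <len> multipath <#features> <#hw handler args>
             * <#pgs> <first pg> <selector> <#selector args> <#paths>
             * <#path args per path> ...
             */
            printf("0 %llu multipath 0 0 1 1 io-affinity 0 %d 1",
                   sectors, ncpus);

            /* One path per CPU; the path argument is a hex CPU mask. */
            for (cpu = 0; cpu < ncpus; cpu++)
                    printf(" %s %llx", devs[cpu], 1ULL << cpu);
            printf("\n");
            return 0;
    }

    The printed line can then be loaded with something like
    dmsetup create <name> --table "<that line>".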
    
    The percent increase depends on the device and interconnect. For a
    slower/medium speed path/device that can do around 180K IOPs per
    path when you run that fio command against it directly, we saw a 25%
    increase like above. Slower paths/devices that could do around 90K
    IOPs per path showed around a 2 - 5% increase. If you use something
    like null_blk or scsi_debug, which can do multi-million IOPs, and
    hack it up so each device they export shows up as a path, then you
    see 50%+ increases.
    Signed-off-by: Mike Christie <michael.christie@oracle.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
dm-io-affinity.c