Commit f2042605 authored by Khazhismel Kumykov, committed by Mike Snitzer

dm mpath selector: more evenly distribute ties

Move the last used path to the end of the list (least preferred) so that
ties are more evenly distributed.

For example, with three paths where one is slower than the others, the
remaining two would be used unevenly if they tie. This is because the
rotation is not a truly fair distribution.

Illustrated: paths a, b, c, 'c' has 1 outstanding IO, a and b are 'tied'
Three possible rotations:
(a, b, c) -> best path 'a'
(b, c, a) -> best path 'b'
(c, a, b) -> best path 'a'
(a, b, c) -> best path 'a'
(b, c, a) -> best path 'b'
(c, a, b) -> best path 'a'
...

So 'a' is used 2x more than 'b', although they should be used evenly.

With this change, the most recently used path is always the least
preferred, removing this bias and resulting in an even distribution:
(a, b, c) -> best path 'a'
(b, c, a) -> best path 'b'
(c, a, b) -> best path 'a'
(c, b, a) -> best path 'b'
...
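
The rotations above can be reproduced with a small user-space sketch
(illustrative only, not the kernel code; the path names, iteration count
and fixed queue lengths are assumptions of the toy model). Path 'c'
always carries one outstanding IO, so 'a' and 'b' always tie, and the
rotation strategy decides how that tie is broken over time:

/*
 * Minimal user-space sketch (not the kernel code) of the two rotation
 * strategies.  Toy model: path 'c' always has one outstanding IO, so
 * 'a' and 'b' always tie on queue length.
 */
#include <stdio.h>

#define NPATHS 3

int main(void)
{
	char order[NPATHS] = { 'a', 'b', 'c' };	/* preference order */
	int qlen[128] = { 0 };			/* per-path queue length */
	int picks[128] = { 0 };			/* how often each path won */
	int i, iter;

	qlen['c'] = 1;				/* 'c' is the slow path */

	for (iter = 0; iter < 600; iter++) {
		int best = 0;

		/* Pick the lowest queue length; the earlier entry wins ties. */
		for (i = 1; i < NPATHS; i++)
			if (qlen[(int)order[i]] < qlen[(int)order[best]])
				best = i;
		picks[(int)order[best]]++;

		/*
		 * This commit: move the *chosen* path to the tail.
		 * Set move = 0 instead to emulate the old behaviour of
		 * rotating the head, which yields the 2:1 bias above.
		 */
		int move = best;
		char moved = order[move];

		for (i = move; i < NPATHS - 1; i++)
			order[i] = order[i + 1];
		order[NPATHS - 1] = moved;
	}

	printf("a=%d b=%d c=%d\n", picks['a'], picks['b'], picks['c']);
	return 0;
}

With move = best the picks split evenly between 'a' and 'b'; switching
to move = 0 reproduces the 2:1 split described above.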
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Martin Wilck <mwilck@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
parent cc656619
drivers/md/dm-queue-length.c
@@ -195,9 +195,6 @@ static struct dm_path *ql_select_path(struct path_selector *ps, size_t nr_bytes)
 	if (list_empty(&s->valid_paths))
 		goto out;
 
-	/* Change preferred (first in list) path to evenly balance. */
-	list_move_tail(s->valid_paths.next, &s->valid_paths);
-
 	list_for_each_entry(pi, &s->valid_paths, list) {
 		if (!best ||
 		    (atomic_read(&pi->qlen) < atomic_read(&best->qlen)))
@@ -210,6 +207,9 @@ static struct dm_path *ql_select_path(struct path_selector *ps, size_t nr_bytes)
 	if (!best)
 		goto out;
 
+	/* Move most recently used to least preferred to evenly balance. */
+	list_move_tail(&best->list, &s->valid_paths);
+
 	ret = best->path;
 out:
 	spin_unlock_irqrestore(&s->lock, flags);
drivers/md/dm-service-time.c
@@ -282,9 +282,6 @@ static struct dm_path *st_select_path(struct path_selector *ps, size_t nr_bytes)
 	if (list_empty(&s->valid_paths))
 		goto out;
 
-	/* Change preferred (first in list) path to evenly balance. */
-	list_move_tail(s->valid_paths.next, &s->valid_paths);
-
 	list_for_each_entry(pi, &s->valid_paths, list)
 		if (!best || (st_compare_load(pi, best, nr_bytes) < 0))
 			best = pi;
@@ -292,6 +289,9 @@ static struct dm_path *st_select_path(struct path_selector *ps, size_t nr_bytes)
 	if (!best)
 		goto out;
 
+	/* Move most recently used to least preferred to evenly balance. */
+	list_move_tail(&best->list, &s->valid_paths);
+
 	ret = best->path;
 out:
 	spin_unlock_irqrestore(&s->lock, flags);