Commit 2d29c9f8 authored by Federico Motta, committed by Jens Axboe

block, bfq: improve asymmetric scenarios detection

bfq defines as asymmetric a scenario where an active entity, say E
(representing either a single bfq_queue or a group of other entities),
has a higher weight than some other entities.  If the entity E does sync
I/O in such a scenario, then bfq plugs the dispatch of the I/O of the
other entities in the following situation: E is in service but
temporarily has no pending I/O request.  In fact, without this plugging,
every time E stops being temporarily idle, it may find the
internal queues of the storage device already filled with an
out-of-control number of extra requests from other entities. So E may
have to wait for the service of these extra requests before finally
having its own requests served. This may easily break service
guarantees, with E getting less than its fair share of the device
throughput.  Usually, the end result is that E gets the same fraction of
the throughput as the other entities, instead of getting more, according
to its higher weight.
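
As a rough illustration of where this plugging decision sits, here is a
condensed sketch in kernel style. The helper name and the two-condition
body are illustrative only, not the in-tree logic, which weighs several
further factors (weight raising, device-side queueing, rotational
vs. non-rotational media, and so on):

	/*
	 * Illustrative sketch, not the real bfq idling code: plug
	 * dispatch (i.e., keep the device idle) only for sync queues,
	 * and only when the scenario is asymmetric, so that service
	 * guarantees are actually at risk.
	 */
	static bool bfq_should_plug_dispatch(struct bfq_data *bfqd,
					     struct bfq_queue *bfqq)
	{
		return bfq_bfqq_sync(bfqq) && !bfq_symmetric_scenario(bfqd);
	}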

Yet there are two other, more subtle cases where E, even if its weight
is equal to or even lower than the weight of every other active entity,
may get less than its fair share of the throughput if the above I/O
plugging is not performed:
1. other entities issue larger requests than E;
2. other entities contain more active child entities than E (or in
   general tend to have more backlog than E).

In the first case, other entities may get more service than E because
their larger requests get served during E's temporary idle periods; for
example (numbers invented for illustration), if E issues 4KB requests
while an equal-weight entity issues 512KB requests, each request the
other entity slips in during an idle period of E moves 128 times more
data.  In the second case, other entities get more service because, by
having many child entities, they have many requests ready for dispatch
while E is temporarily idle.

This commit addresses this issue by extending the definition of
asymmetric scenario: a scenario is asymmetric when
- active entities representing bfq_queues have differentiated weights,
  as in the original definition
or (inclusive)
- one or more entities representing groups of entities are active.

This broader definition makes sure that I/O plugging will be performed
in all the above cases, provided that there is at least one active
group.  Of course, this definition is very coarse, so it will trigger
I/O plugging also in cases where it is not needed, such as
multiple active entities with just one child each, all with the same
I/O-request size.  The reason for this coarse definition is just that a
finer-grained definition would be rather heavy to compute.
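
A minimal sketch of how the broader test can be computed in O(1),
assuming the data structures this commit touches (the helper name is
illustrative; queue_weights_tree holds one bfq_weight_counter per
distinct weight among the active queues, so weights are differentiated
exactly when the tree root has at least one child):

	/*
	 * Sketch only. Two or more distinct queue weights exist iff
	 * the weights tree contains more than one node, i.e., iff its
	 * root has a left or right child; any active group makes the
	 * scenario asymmetric by the broader definition above.
	 */
	static bool bfq_asymmetric_scenario(struct bfq_data *bfqd)
	{
		bool varied_queue_weights =
			!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
			(bfqd->queue_weights_tree.rb_node->rb_left ||
			 bfqd->queue_weights_tree.rb_node->rb_right);

		return varied_queue_weights || bfqd->num_active_groups > 0;
	}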

On the opposite end, even this new definition never triggers I/O
plugging when there is no active group and all bfq_queues have the same
weight.  So, in these cases, some unfairness may occur if there are
asymmetries in I/O-request sizes.  We made this choice because I/O
plugging may lower throughput, and a user who has not created any group
probably cares more about throughput than about perfect fairness.  At
any rate, as for applications that do care about service guarantees,
bfq already automatically guarantees high responsiveness and low
latency to soft real-time applications.
Signed-off-by: Federico Motta <federico@willer.it>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent a2fa8a19

Note: the diff for block/bfq-iosched.c is collapsed in this view and
not reproduced here. The file names below are inferred from the hunk
contents.

block/bfq-iosched.h:

@@ -108,15 +108,14 @@ struct bfq_sched_data {
 };
 
 /**
- * struct bfq_weight_counter - counter of the number of all active entities
+ * struct bfq_weight_counter - counter of the number of all active queues
  * with a given weight.
  */
 struct bfq_weight_counter {
-	unsigned int weight; /* weight of the entities this counter refers to */
-	unsigned int num_active; /* nr of active entities with this weight */
+	unsigned int weight; /* weight of the queues this counter refers to */
+	unsigned int num_active; /* nr of active queues with this weight */
 	/*
-	 * Weights tree member (see bfq_data's @queue_weights_tree and
-	 * @group_weights_tree)
+	 * Weights tree member (see bfq_data's @queue_weights_tree)
 	 */
 	struct rb_node weights_node;
 };
@@ -151,8 +150,6 @@ struct bfq_weight_counter {
 struct bfq_entity {
 	/* service_tree member */
 	struct rb_node rb_node;
-	/* pointer to the weight counter associated with this entity */
-	struct bfq_weight_counter *weight_counter;
 
 	/*
 	 * Flag, true if the entity is on a tree (either the active or
@@ -266,6 +263,9 @@ struct bfq_queue {
 	/* entity representing this queue in the scheduler */
 	struct bfq_entity entity;
 
+	/* pointer to the weight counter associated with this entity */
+	struct bfq_weight_counter *weight_counter;
+
 	/* maximum budget allowed from the feedback mechanism */
 	int max_budget;
 	/* budget expiration (in jiffies) */
@@ -449,14 +449,9 @@ struct bfq_data {
 	 */
 	struct rb_root queue_weights_tree;
 	/*
-	 * rbtree of non-queue @bfq_entity weight counters, sorted by
-	 * weight. Used to keep track of whether all @bfq_groups have
-	 * the same weight. The tree contains one counter for each
-	 * distinct weight associated to some active @bfq_group (see
-	 * the comments to the functions bfq_weights_tree_[add|remove]
-	 * for further details).
+	 * number of groups with requests still waiting for completion
 	 */
-	struct rb_root group_weights_tree;
+	unsigned int num_active_groups;
 	/*
	 * Number of bfq_queues containing requests (including the
@@ -851,10 +846,10 @@ struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
 void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
-void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_entity *entity,
+void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
			  struct rb_root *root);
 void __bfq_weights_tree_remove(struct bfq_data *bfqd,
-			       struct bfq_entity *entity,
+			       struct bfq_queue *bfqq,
			       struct rb_root *root);
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
			     struct bfq_queue *bfqq);
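
For orientation, the following is a condensed sketch of the per-weight
counter discipline that queue_weights_tree implements; the real
bfq_weights_tree_add() lives in the collapsed bfq-iosched.c diff, so
this is a simplification, not the in-tree code:

	/*
	 * Simplified sketch: find (or create) the counter for this
	 * queue's weight in the rb-tree ordered by weight, and count
	 * the queue in. The real code additionally tolerates a failed
	 * allocation by simply leaving the queue untracked.
	 */
	static void weights_tree_add_sketch(struct bfq_data *bfqd,
					    struct bfq_queue *bfqq,
					    struct rb_root *root)
	{
		struct rb_node **new = &root->rb_node, *parent = NULL;
		struct bfq_weight_counter *wc;

		while (*new) {
			wc = rb_entry(*new, struct bfq_weight_counter,
				      weights_node);
			parent = *new;

			if (bfqq->entity.weight == wc->weight) {
				/* existing weight: just bump its count */
				bfqq->weight_counter = wc;
				wc->num_active++;
				return;
			}
			if (bfqq->entity.weight < wc->weight)
				new = &(*new)->rb_left;
			else
				new = &(*new)->rb_right;
		}

		/* first active queue with this weight: new counter node */
		wc = kzalloc(sizeof(*wc), GFP_ATOMIC);
		if (!wc)
			return;
		wc->weight = bfqq->entity.weight;
		wc->num_active = 1;
		bfqq->weight_counter = wc;
		rb_link_node(&wc->weights_node, parent, new);
		rb_insert_color(&wc->weights_node, root);
	}

The remove side mirrors this: __bfq_weights_tree_remove() decrements
num_active and erases and frees the counter once it reaches zero, so
the tree always holds exactly one node per distinct weight among the
active queues.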

block/bfq-wf2q.c:

@@ -788,25 +788,29 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
 		new_weight = entity->orig_weight *
 			     (bfqq ? bfqq->wr_coeff : 1);
 		/*
-		 * If the weight of the entity changes, remove the entity
-		 * from its old weight counter (if there is a counter
-		 * associated with the entity), and add it to the counter
-		 * associated with its new weight.
+		 * If the weight of the entity changes, and the entity is a
+		 * queue, remove the entity from its old weight counter (if
+		 * there is a counter associated with the entity).
 		 */
 		if (prev_weight != new_weight) {
-			root = bfqq ? &bfqd->queue_weights_tree :
-				      &bfqd->group_weights_tree;
-			__bfq_weights_tree_remove(bfqd, entity, root);
+			if (bfqq) {
+				root = &bfqd->queue_weights_tree;
+				__bfq_weights_tree_remove(bfqd, bfqq, root);
+			} else
+				bfqd->num_active_groups--;
 		}
 		entity->weight = new_weight;
 		/*
-		 * Add the entity to its weights tree only if it is
-		 * not associated with a weight-raised queue.
+		 * Add the entity, if it is not a weight-raised queue,
+		 * to the counter associated with its new weight.
 		 */
-		if (prev_weight != new_weight &&
-		    (bfqq ? bfqq->wr_coeff == 1 : 1))
-			/* If we get here, root has been initialized. */
-			bfq_weights_tree_add(bfqd, entity, root);
+		if (prev_weight != new_weight) {
+			if (bfqq && bfqq->wr_coeff == 1) {
+				/* If we get here, root has been initialized. */
+				bfq_weights_tree_add(bfqd, bfqq, root);
+			} else
+				bfqd->num_active_groups++;
+		}
 
 		new_st->wsum += entity->weight;
@@ -1012,9 +1016,9 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
 		struct bfq_group *bfqg =
 			container_of(entity, struct bfq_group, entity);
+		struct bfq_data *bfqd = bfqg->bfqd;
 
-		bfq_weights_tree_add(bfqg->bfqd, entity,
-				     &bfqd->group_weights_tree);
+		bfqd->num_active_groups++;
 	}
 #endif
@@ -1692,7 +1696,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	if (!bfqq->dispatched)
 		if (bfqq->wr_coeff == 1)
-			bfq_weights_tree_add(bfqd, &bfqq->entity,
+			bfq_weights_tree_add(bfqd, bfqq,
 					     &bfqd->queue_weights_tree);
 
 	if (bfqq->wr_coeff > 1)