Commit 28d721e2
authored Oct 28, 2005 by Linus Torvalds
Merge branch 'generic-dispatch' of git://brick.kernel.dk/data/git/linux-2.6-block
parents 0ee40c66 cb19833d
Showing 9 changed files with 459 additions and 847 deletions
Documentation/block/biodoc.txt		+52	-61
drivers/block/as-iosched.c		+89	-236
drivers/block/cfq-iosched.c		+81	-283
drivers/block/deadline-iosched.c	+18	-105
drivers/block/elevator.c		+170	-96
drivers/block/ll_rw_blk.c		+13	-10
drivers/block/noop-iosched.c		+5	-43
include/linux/blkdev.h			+22	-4
include/linux/elevator.h		+9	-9
Documentation/block/biodoc.txt

@@ -906,9 +906,20 @@ Aside:
 4. The I/O scheduler
-I/O schedulers are now per queue. They should be runtime switchable and modular
-but aren't yet. Jens has most bits to do this, but the sysfs implementation is
-missing.
+I/O scheduler, a.k.a. elevator, is implemented in two layers.  Generic dispatch
+queue and specific I/O schedulers.  Unless stated otherwise, elevator is used
+to refer to both parts and I/O scheduler to specific I/O schedulers.
+
+Block layer implements generic dispatch queue in ll_rw_blk.c and elevator.c.
+The generic dispatch queue is responsible for properly ordering barrier
+requests, requeueing, handling non-fs requests and all other subtleties.
+
+Specific I/O schedulers are responsible for ordering normal filesystem
+requests.  They can also choose to delay certain requests to improve
+throughput or whatever purpose.  As the plural form indicates, there are
+multiple I/O schedulers.  They can be built as modules but at least one should
+be built inside the kernel.  Each queue can choose different one and can also
+change to another one dynamically.
 
 A block layer call to the i/o scheduler follows the convention elv_xxx(). This
 calls elevator_xxx_fn in the elevator switch (drivers/block/elevator.c). Oh,
@@ -921,44 +932,36 @@ keeping work.
 The functions an elevator may implement are: (* are mandatory)
 elevator_merge_fn		called to query requests for merge with a bio
 
-elevator_merge_req_fn		" " " with another request
+elevator_merge_req_fn		called when two requests get merged. the one
+				which gets merged into the other one will be
+				never seen by I/O scheduler again. IOW, after
+				being merged, the request is gone.
 
 elevator_merged_fn		called when a request in the scheduler has been
 				involved in a merge. It is used in the deadline
 				scheduler for example, to reposition the request
 				if its sorting order has changed.
 
-*elevator_next_req_fn		returns the next scheduled request, or NULL
-				if there are none (or none are ready).
+elevator_dispatch_fn		fills the dispatch queue with ready requests.
+				I/O schedulers are free to postpone requests by
+				not filling the dispatch queue unless @force
+				is non-zero.  Once dispatched, I/O schedulers
+				are not allowed to manipulate the requests -
+				they belong to generic dispatch queue.
 
-*elevator_add_req_fn		called to add a new request into the scheduler
+elevator_add_req_fn		called to add a new request into the scheduler
 
 elevator_queue_empty_fn		returns true if the merge queue is empty.
 				Drivers shouldn't use this, but rather check
 				if elv_next_request is NULL (without losing the
 				request if one exists!)
 
-elevator_remove_req_fn		This is called when a driver claims ownership of
-				the target request - it now belongs to the
-				driver. It must not be modified or merged.
-				Drivers must not lose the request! A subsequent
-				call of elevator_next_req_fn must return the
-				_next_ request.
-
-elevator_requeue_req_fn		called to add a request to the scheduler. This
-				is used when the request has alrnadebeen
-				returned by elv_next_request, but hasn't
-				completed. If this is not implemented then
-				elevator_add_req_fn is called instead.
-
 elevator_former_req_fn
 elevator_latter_req_fn		These return the request before or after the
 				one specified in disk sort order. Used by the
 				block layer to find merge possibilities.
 
-elevator_completed_req_fn	called when a request is completed. This might
-				come about due to being merged with another or
-				when the device completes the request.
+elevator_completed_req_fn	called when a request is completed.
 
 elevator_may_queue_fn		returns true if the scheduler wants to allow the
 				current context to queue a new request even if
@@ -967,13 +970,33 @@ elevator_may_queue_fn returns true if the scheduler wants to allow the
 elevator_set_req_fn
 elevator_put_req_fn		Must be used to allocate and free any elevator
-				specific storate for a request.
+				specific storage for a request.
+
+elevator_activate_req_fn	Called when device driver first sees a request.
+				I/O schedulers can use this callback to
+				determine when actual execution of a request
+				starts.
+
+elevator_deactivate_req_fn	Called when device driver decides to delay
+				a request by requeueing it.
 
 elevator_init_fn
 elevator_exit_fn		Allocate and free any elevator specific storage
 				for a queue.
 
-4.2 I/O scheduler implementation
+4.2 Request flows seen by I/O schedulers
+All requests seens by I/O schedulers strictly follow one of the following three
+flows.
+
+ set_req_fn ->
+
+ i. add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
+    (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
+ ii. add_req_fn -> (merged_fn ->)* -> merge_req_fn
+ iii. [none]
+
+ -> put_req_fn
+
+4.3 I/O scheduler implementation
 The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
 optimal disk scan and request servicing performance (based on generic
 principles and device capabilities), optimized for:
@@ -993,18 +1016,7 @@ request in sort order to prevent binary tree lookups.
 This arrangement is not a generic block layer characteristic however, so
 elevators may implement queues as they please.
 
-ii. Last merge hint
-The last merge hint is part of the generic queue layer. I/O schedulers must do
-some management on it. For the most part, the most important thing is to make
-sure q->last_merge is cleared (set to NULL) when the request on it is no longer
-a candidate for merging (for example if it has been sent to the driver).
-
-The last merge performed is cached as a hint for the subsequent request. If
-sequential data is being submitted, the hint is used to perform merges without
-any scanning. This is not sufficient when there are multiple processes doing
-I/O though, so a "merge hash" is used by some schedulers.
-
-iii. Merge hash
+ii. Merge hash
 AS and deadline use a hash table indexed by the last sector of a request. This
 enables merging code to quickly look up "back merge" candidates, even when
 multiple I/O streams are being performed at once on one disk.
@@ -1013,29 +1025,8 @@ multiple I/O streams are being performed at once on one disk.
 are far less common than "back merges" due to the nature of most I/O patterns.
 Front merges are handled by the binary trees in AS and deadline schedulers.
 
-iv. Handling barrier cases
-A request with flags REQ_HARDBARRIER or REQ_SOFTBARRIER must not be ordered
-around. That is, they must be processed after all older requests, and before
-any newer ones. This includes merges!
-
-In AS and deadline schedulers, barriers have the effect of flushing the reorder
-queue. The performance cost of this will vary from nothing to a lot depending
-on i/o patterns and device characteristics. Obviously they won't improve
-performance, so their use should be kept to a minimum.
-
-v. Handling insertion position directives
-A request may be inserted with a position directive. The directives are one of
-ELEVATOR_INSERT_BACK, ELEVATOR_INSERT_FRONT, ELEVATOR_INSERT_SORT.
-
-ELEVATOR_INSERT_SORT is a general directive for non-barrier requests.
-ELEVATOR_INSERT_BACK is used to insert a barrier to the back of the queue.
-ELEVATOR_INSERT_FRONT is used to insert a barrier to the front of the queue, and
-overrides the ordering requested by any previous barriers. In practice this is
-harmless and required, because it is used for SCSI requeueing. This does not
-require flushing the reorder queue, so does not impose a performance penalty.
-
-vi. Plugging the queue to batch requests in anticipation of opportunities for
-merge/sort optimizations
+iii. Plugging the queue to batch requests in anticipation of opportunities for
+merge/sort optimizations
 
 This is just the same as in 2.4 so far, though per-device unplugging
 support is anticipated for 2.5. Also with a priority-based i/o scheduler,
@@ -1069,7 +1060,7 @@ Aside:
   blk_kick_queue() to unplug a specific queue (right away ?)
   or optionally, all queues, is in the plan.
 
-4.3 I/O contexts
+4.4 I/O contexts
 I/O contexts provide a dynamically allocated per process data area. They may
 be used in I/O schedulers, and in the block layer (could be used for IO statis,
 priorities for example). See *io_context in drivers/block/ll_rw_blk.c, and
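For illustration, a minimal driver request_fn consistent with the ownership rules described above: the driver peeks at the dispatch queue with elv_next_request() and claims a request by dequeueing it, rather than consulting elevator_queue_empty_fn directly. example_request_fn() and example_issue() are hypothetical names for this sketch, not part of this commit.

	static void example_request_fn(request_queue_t *q)
	{
		struct request *rq;

		while ((rq = elv_next_request(q)) != NULL) {
			blkdev_dequeue_request(rq);	/* rq now belongs to the driver */
			example_issue(rq);		/* hypothetical hardware submission */
		}
	}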
drivers/block/as-iosched.c
(diff collapsed)

drivers/block/cfq-iosched.c
(diff collapsed)
drivers/block/deadline-iosched.c

@@ -50,7 +50,6 @@ struct deadline_data {
 	 * next in sort order. read, write or both are NULL
 	 */
 	struct deadline_rq *next_drq[2];
-	struct list_head *dispatch;	/* driver dispatch queue */
 	struct list_head *hash;		/* request hash */
 	unsigned int batching;		/* number of sequential requests made */
 	sector_t last_sector;		/* head position */
@@ -113,15 +112,6 @@ static inline void deadline_del_drq_hash(struct deadline_rq *drq)
 		__deadline_del_drq_hash(drq);
 }
 
-static void
-deadline_remove_merge_hints(request_queue_t *q, struct deadline_rq *drq)
-{
-	deadline_del_drq_hash(drq);
-
-	if (q->last_merge == drq->request)
-		q->last_merge = NULL;
-}
-
 static inline void
 deadline_add_drq_hash(struct deadline_data *dd, struct deadline_rq *drq)
 {
@@ -239,10 +229,9 @@ deadline_del_drq_rb(struct deadline_data *dd, struct deadline_rq *drq)
 			dd->next_drq[data_dir] = rb_entry_drq(rbnext);
 	}
 
-	if (ON_RB(&drq->rb_node)) {
-		rb_erase(&drq->rb_node, DRQ_RB_ROOT(dd, drq));
-		RB_CLEAR(&drq->rb_node);
-	}
+	BUG_ON(!ON_RB(&drq->rb_node));
+	rb_erase(&drq->rb_node, DRQ_RB_ROOT(dd, drq));
+	RB_CLEAR(&drq->rb_node);
 }
 
 static struct request *
@@ -286,7 +275,7 @@ deadline_find_first_drq(struct deadline_data *dd, int data_dir)
 /*
  * add drq to rbtree and fifo
  */
-static inline void
+static void
 deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
@@ -301,12 +290,8 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 	drq->expires = jiffies + dd->fifo_expire[data_dir];
 	list_add_tail(&drq->fifo, &dd->fifo_list[data_dir]);
 
-	if (rq_mergeable(rq)) {
+	if (rq_mergeable(rq))
 		deadline_add_drq_hash(dd, drq);
-
-		if (!q->last_merge)
-			q->last_merge = rq;
-	}
 }
 
 /*
@@ -315,14 +300,11 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 static void deadline_remove_request(request_queue_t *q, struct request *rq)
 {
 	struct deadline_rq *drq = RQ_DATA(rq);
+	struct deadline_data *dd = q->elevator->elevator_data;
 
-	if (drq) {
-		struct deadline_data *dd = q->elevator->elevator_data;
-
-		list_del_init(&drq->fifo);
-		deadline_remove_merge_hints(q, drq);
-		deadline_del_drq_rb(dd, drq);
-	}
+	list_del_init(&drq->fifo);
+	deadline_del_drq_rb(dd, drq);
+	deadline_del_drq_hash(drq);
 }
 
 static int
@@ -332,15 +314,6 @@ deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
 	struct request *__rq;
 	int ret;
 
-	/*
-	 * try last_merge to avoid going to hash
-	 */
-	ret = elv_try_last_merge(q, bio);
-	if (ret != ELEVATOR_NO_MERGE) {
-		__rq = q->last_merge;
-		goto out_insert;
-	}
-
 	/*
 	 * see if the merge hash can satisfy a back merge
 	 */
@@ -373,8 +346,6 @@ deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
 	return ELEVATOR_NO_MERGE;
 out:
-	q->last_merge = __rq;
-out_insert:
 	if (ret)
 		deadline_hot_drq_hash(dd, RQ_DATA(__rq));
 	*req = __rq;
@@ -399,8 +370,6 @@ static void deadline_merged_request(request_queue_t *q, struct request *req)
 		deadline_del_drq_rb(dd, drq);
 		deadline_add_drq_rb(dd, drq);
 	}
-
-	q->last_merge = req;
 }
 
 static void
@@ -452,7 +421,7 @@ deadline_move_to_dispatch(struct deadline_data *dd, struct deadline_rq *drq)
 	request_queue_t *q = drq->request->q;
 
 	deadline_remove_request(q, drq->request);
-	list_add_tail(&drq->request->queuelist, dd->dispatch);
+	elv_dispatch_add_tail(q, drq->request);
 }
 
 /*
@@ -502,8 +471,9 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
  * deadline_dispatch_requests selects the best request according to
  * read/write expire, fifo_batch, etc
  */
-static int deadline_dispatch_requests(struct deadline_data *dd)
+static int deadline_dispatch_requests(request_queue_t *q, int force)
 {
+	struct deadline_data *dd = q->elevator->elevator_data;
 	const int reads = !list_empty(&dd->fifo_list[READ]);
 	const int writes = !list_empty(&dd->fifo_list[WRITE]);
 	struct deadline_rq *drq;
@@ -597,65 +567,12 @@ static int deadline_dispatch_requests(struct deadline_data *dd)
 	return 1;
 }
 
-static struct request *deadline_next_request(request_queue_t *q)
-{
-	struct deadline_data *dd = q->elevator->elevator_data;
-	struct request *rq;
-
-	/*
-	 * if there are still requests on the dispatch queue, grab the first one
-	 */
-	if (!list_empty(dd->dispatch)) {
-dispatch:
-		rq = list_entry_rq(dd->dispatch->next);
-		return rq;
-	}
-
-	if (deadline_dispatch_requests(dd))
-		goto dispatch;
-
-	return NULL;
-}
-
-static void
-deadline_insert_request(request_queue_t *q, struct request *rq, int where)
-{
-	struct deadline_data *dd = q->elevator->elevator_data;
-
-	/* barriers must flush the reorder queue */
-	if (unlikely(rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)
-			&& where == ELEVATOR_INSERT_SORT))
-		where = ELEVATOR_INSERT_BACK;
-
-	switch (where) {
-		case ELEVATOR_INSERT_BACK:
-			while (deadline_dispatch_requests(dd))
-				;
-			list_add_tail(&rq->queuelist, dd->dispatch);
-			break;
-		case ELEVATOR_INSERT_FRONT:
-			list_add(&rq->queuelist, dd->dispatch);
-			break;
-		case ELEVATOR_INSERT_SORT:
-			BUG_ON(!blk_fs_request(rq));
-			deadline_add_request(q, rq);
-			break;
-		default:
-			printk("%s: bad insert point %d\n", __FUNCTION__, where);
-			return;
-	}
-}
-
 static int deadline_queue_empty(request_queue_t *q)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 
-	if (!list_empty(&dd->fifo_list[WRITE])
-	    || !list_empty(&dd->fifo_list[READ])
-	    || !list_empty(dd->dispatch))
-		return 0;
-
-	return 1;
+	return list_empty(&dd->fifo_list[WRITE])
+		&& list_empty(&dd->fifo_list[READ]);
 }
 
 static struct request *
@@ -733,7 +650,6 @@ static int deadline_init_queue(request_queue_t *q, elevator_t *e)
 	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
 	dd->sort_list[READ] = RB_ROOT;
 	dd->sort_list[WRITE] = RB_ROOT;
-	dd->dispatch = &q->queue_head;
 	dd->fifo_expire[READ] = read_expire;
 	dd->fifo_expire[WRITE] = write_expire;
 	dd->writes_starved = writes_starved;
@@ -748,10 +664,8 @@ static void deadline_put_request(request_queue_t *q, struct request *rq)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct deadline_rq *drq = RQ_DATA(rq);
 
-	if (drq) {
-		mempool_free(drq, dd->drq_pool);
-		rq->elevator_private = NULL;
-	}
+	mempool_free(drq, dd->drq_pool);
+	rq->elevator_private = NULL;
 }
 
 static int
@@ -917,9 +831,8 @@ static struct elevator_type iosched_deadline = {
 	.elevator_merge_fn =		deadline_merge,
 	.elevator_merged_fn =		deadline_merged_request,
 	.elevator_merge_req_fn =	deadline_merged_requests,
-	.elevator_next_req_fn =		deadline_next_request,
-	.elevator_add_req_fn =		deadline_insert_request,
-	.elevator_remove_req_fn =	deadline_remove_request,
+	.elevator_dispatch_fn =		deadline_dispatch_requests,
+	.elevator_add_req_fn =		deadline_add_request,
 	.elevator_queue_empty_fn =	deadline_queue_empty,
 	.elevator_former_req_fn =	deadline_former_request,
 	.elevator_latter_req_fn =	deadline_latter_request,
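The conversion above captures the new contract: deadline no longer owns a dispatch list; elevator_dispatch_fn moves ready requests onto q->queue_head, and the generic layer only asks for more when that queue runs dry. A simplified sketch of that generic-side loop follows; sketch_next_request() is an illustrative name, and the real elv_next_request() in drivers/block/elevator.c additionally handles activation, requeueing and command preparation.

	static struct request *sketch_next_request(request_queue_t *q)
	{
		elevator_t *e = q->elevator;

		while (list_empty(&q->queue_head)) {
			/* ask the I/O scheduler to refill the dispatch queue */
			if (!e->ops->elevator_dispatch_fn(q, 0))
				return NULL;	/* nothing ready */
		}
		return list_entry_rq(q->queue_head.next);
	}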
drivers/block/elevator.c
(diff collapsed)
drivers/block/ll_rw_blk.c

@@ -353,6 +353,8 @@ static void blk_pre_flush_end_io(struct request *flush_rq)
 	struct request *rq = flush_rq->end_io_data;
 	request_queue_t *q = rq->q;
 
+	elv_completed_request(q, flush_rq);
+
 	rq->flags |= REQ_BAR_PREFLUSH;
 
 	if (!flush_rq->errors)
@@ -369,6 +371,8 @@ static void blk_post_flush_end_io(struct request *flush_rq)
 	struct request *rq = flush_rq->end_io_data;
 	request_queue_t *q = rq->q;
 
+	elv_completed_request(q, flush_rq);
+
 	rq->flags |= REQ_BAR_POSTFLUSH;
 
 	q->end_flush_fn(q, flush_rq);
@@ -408,8 +412,6 @@ struct request *blk_start_pre_flush(request_queue_t *q, struct request *rq)
 	if (!list_empty(&rq->queuelist))
 		blkdev_dequeue_request(rq);
 
-	elv_deactivate_request(q, rq);
-
 	flush_rq->end_io_data = rq;
 	flush_rq->end_io = blk_pre_flush_end_io;
@@ -1040,6 +1042,7 @@ EXPORT_SYMBOL(blk_queue_invalidate_tags);
 static char *rq_flags[] = {
 	"REQ_RW",
 	"REQ_FAILFAST",
+	"REQ_SORTED",
 	"REQ_SOFTBARRIER",
 	"REQ_HARDBARRIER",
 	"REQ_CMD",
@@ -2456,6 +2459,8 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
 	if (unlikely(--req->ref_count))
 		return;
 
+	elv_completed_request(q, req);
+
 	req->rq_status = RQ_INACTIVE;
 	req->rl = NULL;
@@ -2466,8 +2471,6 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
 	if (rl) {
 		int rw = rq_data_dir(req);
 
-		elv_completed_request(q, req);
-
 		BUG_ON(!list_empty(&req->queuelist));
 
 		blk_free_request(q, req);
@@ -2477,14 +2480,14 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
 void blk_put_request(struct request *req)
 {
+	unsigned long flags;
+	request_queue_t *q = req->q;
+
 	/*
-	 * if req->rl isn't set, this request didnt originate from the
-	 * block layer, so it's safe to just disregard it
+	 * Gee, IDE calls in w/ NULL q.  Fix IDE and remove the
+	 * following if (q) test.
 	 */
-	if (req->rl) {
-		unsigned long flags;
-		request_queue_t *q = req->q;
-
+	if (q) {
 		spin_lock_irqsave(q->queue_lock, flags);
 		__blk_put_request(q, req);
 		spin_unlock_irqrestore(q->queue_lock, flags);
drivers/block/noop-iosched.c

@@ -7,57 +7,19 @@
 #include <linux/module.h>
 #include <linux/init.h>
 
-/*
- * See if we can find a request that this buffer can be coalesced with.
- */
-static int elevator_noop_merge(request_queue_t *q, struct request **req,
-			       struct bio *bio)
-{
-	int ret;
-
-	ret = elv_try_last_merge(q, bio);
-	if (ret != ELEVATOR_NO_MERGE)
-		*req = q->last_merge;
-
-	return ret;
-}
-
-static void elevator_noop_merge_requests(request_queue_t *q, struct request *req,
-					 struct request *next)
-{
-	list_del_init(&next->queuelist);
-}
-
-static void elevator_noop_add_request(request_queue_t *q, struct request *rq,
-				      int where)
+static void elevator_noop_add_request(request_queue_t *q, struct request *rq)
 {
-	if (where == ELEVATOR_INSERT_FRONT)
-		list_add(&rq->queuelist, &q->queue_head);
-	else
-		list_add_tail(&rq->queuelist, &q->queue_head);
-
-	/*
-	 * new merges must not precede this barrier
-	 */
-	if (rq->flags & REQ_HARDBARRIER)
-		q->last_merge = NULL;
-	else if (!q->last_merge)
-		q->last_merge = rq;
+	elv_dispatch_add_tail(q, rq);
 }
 
-static struct request *elevator_noop_next_request(request_queue_t *q)
+static int elevator_noop_dispatch(request_queue_t *q, int force)
 {
-	if (!list_empty(&q->queue_head))
-		return list_entry_rq(q->queue_head.next);
-
-	return NULL;
+	return 0;
 }
 
 static struct elevator_type elevator_noop = {
 	.ops = {
-		.elevator_merge_fn =		elevator_noop_merge,
-		.elevator_merge_req_fn =	elevator_noop_merge_requests,
-		.elevator_next_req_fn =		elevator_noop_next_request,
+		.elevator_dispatch_fn =		elevator_noop_dispatch,
 		.elevator_add_req_fn =		elevator_noop_add_request,
 	},
 	.elevator_name = "noop",
include/linux/blkdev.h

@@ -203,6 +203,7 @@ struct request {
 enum rq_flag_bits {
 	__REQ_RW,		/* not set, read. set, write */
 	__REQ_FAILFAST,		/* no low level driver retries */
+	__REQ_SORTED,		/* elevator knows about this request */
 	__REQ_SOFTBARRIER,	/* may not be passed by ioscheduler */
 	__REQ_HARDBARRIER,	/* may not be passed by drive either */
 	__REQ_CMD,		/* is a regular fs rw request */
@@ -235,6 +236,7 @@ enum rq_flag_bits {
 #define REQ_RW		(1 << __REQ_RW)
 #define REQ_FAILFAST	(1 << __REQ_FAILFAST)
+#define REQ_SORTED	(1 << __REQ_SORTED)
 #define REQ_SOFTBARRIER	(1 << __REQ_SOFTBARRIER)
 #define REQ_HARDBARRIER	(1 << __REQ_HARDBARRIER)
 #define REQ_CMD		(1 << __REQ_CMD)
@@ -332,6 +334,12 @@ struct request_queue
 	prepare_flush_fn	*prepare_flush_fn;
 	end_flush_fn		*end_flush_fn;
 
+	/*
+	 * Dispatch queue sorting
+	 */
+	sector_t		end_sector;
+	struct request		*boundary_rq;
+
 	/*
 	 * Auto-unplugging state
 	 */
@@ -454,6 +462,7 @@ enum {
 #define blk_pm_request(rq)	\
 	((rq)->flags & (REQ_PM_SUSPEND | REQ_PM_RESUME))
 
+#define blk_sorted_rq(rq)	((rq)->flags & REQ_SORTED)
 #define blk_barrier_rq(rq)	((rq)->flags & REQ_HARDBARRIER)
 #define blk_barrier_preflush(rq)	((rq)->flags & REQ_BAR_PREFLUSH)
 #define blk_barrier_postflush(rq)	((rq)->flags & REQ_BAR_POSTFLUSH)
@@ -611,12 +620,21 @@ extern void end_request(struct request *req, int uptodate);
 static inline void blkdev_dequeue_request(struct request *req)
 {
-	BUG_ON(list_empty(&req->queuelist));
+	elv_dequeue_request(req->q, req);
+}
 
-	list_del_init(&req->queuelist);
+/*
+ * This should be in elevator.h, but that requires pulling in rq and q
+ */
+static inline void elv_dispatch_add_tail(struct request_queue *q,
+					 struct request *rq)
+{
+	if (q->last_merge == rq)
+		q->last_merge = NULL;
 
-	if (req->rl)
-		elv_remove_request(req->q, req);
+	q->end_sector = rq_end_sector(rq);
+	q->boundary_rq = rq;
+	list_add_tail(&rq->queuelist, &q->queue_head);
 }
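With elv_dispatch_add_tail() in place, a scheduler's dispatch hook reduces to unlinking a request from its private bookkeeping and handing it to the generic queue. A hypothetical single-FIFO scheduler might dispatch as sketched below; struct example_data and example_dispatch() are illustrative names, not from this commit, and the FIFO is assumed to chain requests through rq->queuelist.

	static int example_dispatch(request_queue_t *q, int force)
	{
		struct example_data *ed = q->elevator->elevator_data;
		struct request *rq;

		if (list_empty(&ed->fifo))
			return 0;

		rq = list_entry_rq(ed->fifo.next);
		list_del_init(&rq->queuelist);	/* off the scheduler's FIFO */
		elv_dispatch_add_tail(q, rq);	/* now owned by the dispatch queue */
		return 1;
	}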
include/linux/elevator.h

@@ -8,18 +8,17 @@ typedef void (elevator_merge_req_fn) (request_queue_t *, struct request *, struc
 typedef void (elevator_merged_fn) (request_queue_t *, struct request *);
-typedef struct request *(elevator_next_req_fn) (request_queue_t *);
-typedef void (elevator_add_req_fn) (request_queue_t *, struct request *, int);
+typedef int (elevator_dispatch_fn) (request_queue_t *, int);
+typedef void (elevator_add_req_fn) (request_queue_t *, struct request *);
 typedef int (elevator_queue_empty_fn) (request_queue_t *);
-typedef void (elevator_remove_req_fn) (request_queue_t *, struct request *);
-typedef void (elevator_requeue_req_fn) (request_queue_t *, struct request *);
 typedef struct request *(elevator_request_list_fn) (request_queue_t *, struct request *);
 typedef void (elevator_completed_req_fn) (request_queue_t *, struct request *);
 typedef int (elevator_may_queue_fn) (request_queue_t *, int, struct bio *);
 typedef int (elevator_set_req_fn) (request_queue_t *, struct request *, struct bio *, gfp_t);
 typedef void (elevator_put_req_fn) (request_queue_t *, struct request *);
+typedef void (elevator_activate_req_fn) (request_queue_t *, struct request *);
 typedef void (elevator_deactivate_req_fn) (request_queue_t *, struct request *);
 typedef int (elevator_init_fn) (request_queue_t *, elevator_t *);
@@ -31,10 +30,9 @@ struct elevator_ops
 	elevator_merged_fn *elevator_merged_fn;
 	elevator_merge_req_fn *elevator_merge_req_fn;
 
-	elevator_next_req_fn *elevator_next_req_fn;
+	elevator_dispatch_fn *elevator_dispatch_fn;
 	elevator_add_req_fn *elevator_add_req_fn;
-	elevator_remove_req_fn *elevator_remove_req_fn;
-	elevator_requeue_req_fn *elevator_requeue_req_fn;
+	elevator_activate_req_fn *elevator_activate_req_fn;
 	elevator_deactivate_req_fn *elevator_deactivate_req_fn;
 
 	elevator_queue_empty_fn *elevator_queue_empty_fn;
@@ -81,15 +79,15 @@ struct elevator_queue
 /*
  * block elevator interface
  */
+extern void elv_dispatch_sort(request_queue_t *, struct request *);
 extern void elv_add_request(request_queue_t *, struct request *, int, int);
 extern void __elv_add_request(request_queue_t *, struct request *, int, int);
 extern int elv_merge(request_queue_t *, struct request **, struct bio *);
 extern void elv_merge_requests(request_queue_t *, struct request *,
 			       struct request *);
 extern void elv_merged_request(request_queue_t *, struct request *);
-extern void elv_remove_request(request_queue_t *, struct request *);
+extern void elv_dequeue_request(request_queue_t *, struct request *);
 extern void elv_requeue_request(request_queue_t *, struct request *);
-extern void elv_deactivate_request(request_queue_t *, struct request *);
 extern int elv_queue_empty(request_queue_t *);
 extern struct request *elv_next_request(struct request_queue *q);
 extern struct request *elv_former_request(request_queue_t *, struct request *);
@@ -142,4 +140,6 @@ enum {
 	ELV_MQUEUE_MUST,
 };
 
+#define rq_end_sector(rq)	((rq)->sector + (rq)->nr_sectors)
+
 #endif