Commit c0ea1c22 authored by Tejun Heo, committed by Jens Axboe

bdi: make backing_dev_info->wb.dwork canceling stricter

Canceling of bdi->wb.dwork is currently a bit mushy.
bdi_wb_shutdown() performs cancel_delayed_work_sync() at the end after
shutting down and flushing the delayed_work, and bdi_destroy() tries
yet again after bdi_unregister().

bdi->wb.dwork is queued only after checking BDI_registered while
holding bdi->wb_lock and bdi_wb_shutdown() clears the flag while
holding the same lock and then flushes the delayed_work.  There's no
way the delayed_work can be queued again after that.

Replace the two unnecessary cancel_delayed_work_sync() invocations
with WARNs on pending.  This simplifies and clarifies the code a bit
and will help future changes in further isolating bdi_writeback
handling.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
parent b6875734
@@ -376,13 +376,7 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
 	flush_delayed_work(&bdi->wb.dwork);
 	WARN_ON(!list_empty(&bdi->work_list));
-	/*
-	 * This shouldn't be necessary unless @bdi for some reason has
-	 * unflushed dirty IO after work_list is drained. Do it anyway
-	 * just in case.
-	 */
-	cancel_delayed_work_sync(&bdi->wb.dwork);
+	WARN_ON(delayed_work_pending(&bdi->wb.dwork));
 }
 
 /*
@@ -497,12 +491,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
 	bdi_unregister(bdi);
-	/*
-	 * If bdi_unregister() had already been called earlier, the dwork
-	 * could still be pending because bdi_prune_sb() can race with the
-	 * bdi_wakeup_thread_delayed() calls from __mark_inode_dirty().
-	 */
-	cancel_delayed_work_sync(&bdi->wb.dwork);
+	WARN_ON(delayed_work_pending(&bdi->wb.dwork));
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
 		percpu_counter_destroy(&bdi->bdi_stat[i]);