Commit 33155292 authored by Pete Wyckoff, committed by Roland Dreier

IB/fmr_pool: Flush all dirty FMRs from ib_flush_fmr_pool()

Commit a3cd7d90 ("IB/fmr_pool: ib_fmr_pool_flush() should flush all
dirty FMRs") caused a regression for iSER and was reverted in
e5507736.

This change attempts to redo the original patch so that all used FMR
entries are flushed when ib_flush_fmr_pool() is called without
affecting the normal FMR pool cleaning thread.  Simply move used
entries from the clean list onto the dirty list in ib_flush_fmr_pool()
before letting the cleanup thread do its job.

Signed-off-by: Pete Wyckoff <pw@osc.edu>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
parent 35fb5340
@@ -398,8 +398,23 @@ EXPORT_SYMBOL(ib_destroy_fmr_pool);
  */
 int ib_flush_fmr_pool(struct ib_fmr_pool *pool)
 {
-	int serial = atomic_inc_return(&pool->req_ser);
+	int serial;
+	struct ib_pool_fmr *fmr, *next;
+
+	/*
+	 * The free_list holds FMRs that may have been used
+	 * but have not been remapped enough times to be dirty.
+	 * Put them on the dirty list now so that the cleanup
+	 * thread will reap them too.
+	 */
+	spin_lock_irq(&pool->pool_lock);
+	list_for_each_entry_safe(fmr, next, &pool->free_list, list) {
+		if (fmr->remap_count > 0)
+			list_move(&fmr->list, &pool->dirty_list);
+	}
+	spin_unlock_irq(&pool->pool_lock);
+
+	serial = atomic_inc_return(&pool->req_ser);

 	wake_up_process(pool->thread);

 	if (wait_event_interruptible(pool->force_wait,
...
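The core of the patch is the list walk: any entry on the free list whose remap_count is positive has been used since its last unmap, so it is moved onto the dirty list for the cleanup thread to reap. A minimal userspace sketch of that selection logic, using a hypothetical singly linked list instead of the kernel's list_head API (names like pool_fmr and flush_used_entries are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's struct ib_pool_fmr. */
struct pool_fmr {
	int remap_count;        /* > 0 means used since last unmap */
	struct pool_fmr *next;
};

/*
 * Move every used entry (remap_count > 0) from *free_list onto
 * *dirty_list, mirroring the list_move() loop in the patch above.
 * In the kernel this walk happens under pool->pool_lock.
 */
static void flush_used_entries(struct pool_fmr **free_list,
			       struct pool_fmr **dirty_list)
{
	struct pool_fmr **pp = free_list;

	while (*pp) {
		struct pool_fmr *fmr = *pp;

		if (fmr->remap_count > 0) {
			*pp = fmr->next;         /* unlink from free list */
			fmr->next = *dirty_list; /* push onto dirty list */
			*dirty_list = fmr;
		} else {
			pp = &fmr->next;         /* keep clean entry */
		}
	}
}
```

Entries with remap_count == 0 stay on the free list, which is what keeps this flush from disturbing FMRs that were never used since their last unmap.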