Kirill Smelkov / linux

Commit d4e253bb authored Oct 16, 2019 by David Sterba
btrfs: document extent buffer locking
Signed-off-by: David Sterba <dsterba@suse.com>

parent a4477988
Showing 1 changed file with 158 additions and 14 deletions
fs/btrfs/locking.c  (+158, -14)
...
@@ -13,6 +13,110 @@
 #include "extent_io.h"
 #include "locking.h"
+/*
+ * Extent buffer locking
+ * =====================
+ *
+ * The locks use a custom scheme that allows doing more operations than are
+ * available from current locking primitives. The building blocks are still
+ * rwlock and wait queues.
+ *
+ * Required semantics:
+ *
+ * - reader/writer exclusion
+ * - writer/writer exclusion
+ * - reader/reader sharing
+ * - spinning lock semantics
+ * - blocking lock semantics
+ * - try-lock semantics for readers and writers
+ * - one level nesting, allowing read lock to be taken by the same thread that
+ *   already has write lock
+ *
+ * The extent buffer locks (also called tree locks) manage access to eb data
+ * related to the storage in the b-tree (keys, items, but not the individual
+ * members of eb).
+ * We want concurrency of many readers and safe updates. The underlying locking
+ * is done by read-write spinlock and the blocking part is implemented using
+ * counters and wait queues.
+ *
+ * spinning semantics - the low-level rwlock is held so all other threads that
+ *                      want to take it are spinning on it.
+ *
+ * blocking semantics - the low-level rwlock is not held but the counter
+ *                      denotes how many times the blocking lock was held;
+ *                      sleeping is possible
+ *
+ * Write lock always allows only one thread to access the data.
+ *
+ *
+ * Debugging
+ * ---------
+ *
+ * There are additional state counters that are asserted in various contexts,
+ * removed from non-debug build to reduce extent_buffer size and for
+ * performance reasons.
+ *
+ *
+ * Lock nesting
+ * ------------
+ *
+ * A write operation on a tree might indirectly start a lookup on the same
+ * tree. This can happen when btrfs_cow_block locks the tree and needs to
+ * lookup free extents.
+ *
+ * btrfs_cow_block
+ *   ..
+ *   alloc_tree_block_no_bg_flush
+ *     btrfs_alloc_tree_block
+ *       btrfs_reserve_extent
+ *         ..
+ *         load_free_space_cache
+ *           ..
+ *           btrfs_lookup_file_extent
+ *             btrfs_search_slot
+ *
+ *
+ * Locking pattern - spinning
+ * --------------------------
+ *
+ * The simple locking scenario, the +--+ denotes the spinning section.
+ *
+ * +- btrfs_tree_lock
+ * | - extent_buffer::rwlock is held
+ * | - no heavy operations should happen, e.g. IO, memory allocations, large
+ * |   structure traversals
+ * +- btrfs_tree_unlock
+ *
+ *
+ * Locking pattern - blocking
+ * --------------------------
+ *
+ * The blocking write uses the following scheme. The +--+ denotes the spinning
+ * section.
+ *
+ * +- btrfs_tree_lock
+ * |
+ * +- btrfs_set_lock_blocking_write
+ *
+ *   - allowed: IO, memory allocations, etc.
+ *
+ * -- btrfs_tree_unlock - note, no explicit unblocking necessary
+ *
+ *
+ * Blocking read is similar.
+ *
+ * +- btrfs_tree_read_lock
+ * |
+ * +- btrfs_set_lock_blocking_read
+ *
+ *  - heavy operations allowed
+ *
+ * +- btrfs_tree_read_unlock_blocking
+ * |
+ * +- btrfs_tree_read_unlock
+ *
+ */
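To make the spinning and blocking write patterns above concrete, here is a minimal caller sketch. Only the btrfs_tree_lock(), btrfs_set_lock_blocking_write() and btrfs_tree_unlock() calls are the API documented in this file; the demo function and the elided work are invented for illustration and are not part of the patch.

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static void demo_write_paths(struct extent_buffer *eb)
{
	/* Spinning pattern: keep the section short, the rwlock stays held. */
	btrfs_tree_lock(eb);
	/* ... lightweight updates only: no IO, no memory allocation ... */
	btrfs_tree_unlock(eb);

	/* Blocking pattern: drop the rwlock before any heavy operation. */
	btrfs_tree_lock(eb);
	btrfs_set_lock_blocking_write(eb);
	/* ... IO, memory allocations and other sleeping operations allowed ... */
	btrfs_tree_unlock(eb);	/* no explicit "unblock" call is needed */
}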
 #ifdef CONFIG_BTRFS_DEBUG
 static inline void btrfs_assert_spinning_writers_get(struct extent_buffer *eb)
 {
...
@@ -80,6 +184,15 @@ static void btrfs_assert_tree_write_locks_get(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb) { }
 #endif
+/*
+ * Mark already held read lock as blocking. Can be nested in write lock by the
+ * same thread.
+ *
+ * Use when there are potentially long operations ahead so other threads waiting
+ * on the lock will not actively spin but sleep instead.
+ *
+ * The rwlock is released and blocking reader counter is increased.
+ */
 void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
 {
 	trace_btrfs_set_lock_blocking_read(eb);
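A minimal sketch of the blocking read sequence described above, with a hypothetical caller; the blocking read is released by btrfs_tree_read_unlock_blocking(), documented further below.

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static void demo_blocking_read(struct extent_buffer *eb)
{
	btrfs_tree_read_lock(eb);		/* spinning read lock */
	btrfs_set_lock_blocking_read(eb);	/* rwlock released, blocking reader counted */
	/* ... potentially long operations: IO, allocations, sleeping ... */
	btrfs_tree_read_unlock_blocking(eb);	/* pairs with the blocking conversion */
}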
...
@@ -96,6 +209,14 @@ void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
 	read_unlock(&eb->lock);
 }
+/*
+ * Mark already held write lock as blocking.
+ *
+ * Use when there are potentially long operations ahead so other threads
+ * waiting on the lock will not actively spin but sleep instead.
+ *
+ * The rwlock is released and blocking writers is set.
+ */
 void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 {
 	trace_btrfs_set_lock_blocking_write(eb);
...
@@ -115,8 +236,13 @@ void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 }
 /*
- * take a spinning read lock. This will wait for any blocking
- * writers
+ * Lock the extent buffer for read. Wait for any writers (spinning or blocking).
+ * Can be nested in write lock by the same thread.
+ *
+ * Use when the locked section does only lightweight actions and busy waiting
+ * would be cheaper than making other threads do the wait/wake loop.
+ *
+ * The rwlock is held upon exit.
  */
 void btrfs_tree_read_lock(struct extent_buffer *eb)
 {
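For contrast with the blocking read shown earlier, the plain spinning read section from the comment above would look like this in a hypothetical caller; it pairs with btrfs_tree_read_unlock(), documented later in the file.

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static void demo_spinning_read(struct extent_buffer *eb)
{
	btrfs_tree_read_lock(eb);
	/* ... lightweight read-only access to the eb b-tree data ... */
	btrfs_tree_read_unlock(eb);	/* valid only while still in spinning mode */
}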
...
@@ -154,9 +280,10 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
 }
 /*
- * take a spinning read lock.
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers
+ * Lock extent buffer for read, optimistically expecting that there are no
+ * contending blocking writers. If there are, don't wait.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
 {
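A sketch of how this optimistic variant might be used; the fall back to the full btrfs_tree_read_lock() is an illustrative policy, not something prescribed by the patch.

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static void demo_read_lock_atomic(struct extent_buffer *eb)
{
	if (!btrfs_tree_read_lock_atomic(eb)) {
		/* A blocking writer is in the way; take the waiting path instead. */
		btrfs_tree_read_lock(eb);
	}
	/* ... short read section, spinning read lock held on either path ... */
	btrfs_tree_read_unlock(eb);
}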
...
@@ -176,8 +303,9 @@ int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
 }
 /*
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers
+ * Try-lock for read. Don't block or wait for contending writers.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 {
...
@@ -199,8 +327,10 @@ int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 }
 /*
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers or readers
+ * Try-lock for write. May block until the lock is uncontended, but does not
+ * wait until it is free.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 {
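Both try-lock variants return 1 on success and 0 otherwise, so a hypothetical caller can attempt work opportunistically and bail out when contended; the retry policy below is invented for illustration.

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static int demo_try_locks(struct extent_buffer *eb)
{
	if (btrfs_try_tree_read_lock(eb)) {
		/* ... quick peek at the eb contents ... */
		btrfs_tree_read_unlock(eb);
	}

	if (!btrfs_try_tree_write_lock(eb))
		return 0;	/* contended, let the caller retry later */
	/* ... short update with the write lock held in spinning mode ... */
	btrfs_tree_unlock(eb);
	return 1;
}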
...
@@ -221,7 +351,10 @@ int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 }
 /*
- * drop a spinning read lock
+ * Release read lock. Must be used only if the lock is in spinning mode. If
+ * the read lock is nested, must pair with read lock before the write unlock.
+ *
+ * The rwlock is not held upon exit.
  */
 void btrfs_tree_read_unlock(struct extent_buffer *eb)
 {
...
@@ -243,7 +376,11 @@ void btrfs_tree_read_unlock(struct extent_buffer *eb)
 }
 /*
- * drop a blocking read lock
+ * Release read lock, previously set to blocking by a pairing call to
+ * btrfs_set_lock_blocking_read(). Can be nested in write lock by the same
+ * thread.
+ *
+ * State of rwlock is unchanged, last reader wakes waiting threads.
  */
 void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb)
 {
...
@@ -267,8 +404,10 @@ void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb)
 }
 /*
- * take a spinning write lock. This will wait for both
- * blocking readers or writers
+ * Lock for write. Wait for all blocking and spinning readers and writers. This
+ * starts context where reader lock could be nested by the same thread.
+ *
+ * The rwlock is held for write upon exit.
  */
 void btrfs_tree_lock(struct extent_buffer *eb)
 {
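The nesting context mentioned above (and in the "Lock nesting" section of the top comment) allows the write lock holder to take the read lock on the same eb one level deep. A hypothetical sketch, with the nested read lock released before the write unlock as required by the btrfs_tree_read_unlock() comment:

/* Illustrative sketch, not part of this patch: hypothetical caller. */
static void demo_nested_read_under_write(struct extent_buffer *eb)
{
	btrfs_tree_lock(eb);		/* write lock, nesting context starts */
	btrfs_tree_read_lock(eb);	/* allowed: same thread, one level deep */
	/* ... read-side helper running under the write lock ... */
	btrfs_tree_read_unlock(eb);	/* must come before the write unlock */
	btrfs_tree_unlock(eb);		/* ends the nesting context */
}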
...
@@ -295,7 +434,12 @@ void btrfs_tree_lock(struct extent_buffer *eb)
 }
 /*
- * drop a spinning or a blocking write lock.
+ * Release the write lock, either blocking or spinning (i.e. there's no need
+ * for an explicit blocking unlock, like btrfs_tree_read_unlock_blocking).
+ * This also ends the context for nesting, the read lock must have been
+ * released already.
+ *
+ * Tasks blocked and waiting are woken, rwlock is not held upon exit.
  */
 void btrfs_tree_unlock(struct extent_buffer *eb)
 {
...