- 22 Oct, 2023 40 commits
-
Kent Overstreet authored
- Don't call into bch2_encrypt_bio() when we're not encrypting
- Pull the slowpath out of trans_lock_write()
- Make sure bch2_trans_journal_res_get() gets inlined

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- With BCH_WRITE_SYNC, we no longer need the completion in struct dio_write
- Pull out bch2_dio_write_copy_iov() into a separate non-inline function; it's code that doesn't run in the common case
- Copy mapping and inode pointers into dio_write, avoiding pointer chasing at the start of bch2_dio_write_loop()
- kthread_use_mm() is not needed in the common case; move it into bch2_dio_write_loop_async()
- Factor out various helpers from bch2_dio_write_loop() and rework control flow for better icache utilization

Other small optimizations:
- bch2_keylist_free() is only used in one place, at the end of the bch2_write() path - drop the reinit
- In bch2_disk_reservation_put(), check if res->sectors is nonzero before touching c->online_reserved, since that will likely be a cache miss

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

bcachefs: More DIO write path optimization

Better code prefetching (?)

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
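The disk-reservation tweak in the first of the two commits above (checking res->sectors before touching c->online_reserved) is a general pattern: test a field that is already hot in cache before touching shared state that would likely miss. A minimal userspace sketch of the idea - the struct layouts and names here are invented stand-ins, not the bcachefs ones:

```c
#include <stdint.h>

struct reservation {
	uint64_t sectors;          /* per-caller state, already in cache */
};

struct fs_counters {
	uint64_t online_reserved;  /* shared counter, likely a cache miss */
};

/* Release a reservation: only touch the shared counter when there is
 * actually something to release, so the common empty case stays cheap. */
static void reservation_put(struct fs_counters *c, struct reservation *res)
{
	if (!res->sectors)
		return;            /* fast path: no shared-memory access at all */

	__atomic_sub_fetch(&c->online_reserved, res->sectors, __ATOMIC_RELAXED);
	res->sectors = 0;
}
```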
-
Kent Overstreet authored
This adds a new flag for the write path, BCH_WRITE_SYNC, and switches the O_DIRECT write path to use it when we're not running asynchronously. It runs the btree update after the write in the original thread's context instead of a kworker, cutting context switches in half. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
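A rough sketch of the control-flow change, with invented names (WRITE_SYNC, write_index_update, punt_to_worker) used only to illustrate finishing the index update in the submitting thread instead of a worker:

```c
enum write_flags { WRITE_SYNC = 1 << 0 };

struct write_op {
	unsigned flags;
	/* ... */
};

/* Placeholder for the btree/index update that follows the data write. */
static void write_index_update(struct write_op *op) { /* ... */ }

/* Placeholder for handing the op off to a worker thread. */
static void punt_to_worker(struct write_op *op) { /* ... */ }

static void write_data_done(struct write_op *op)
{
	if (op->flags & WRITE_SYNC)
		write_index_update(op);   /* same thread: no context switch */
	else
		punt_to_worker(op);       /* async path still uses a worker */
}
```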
-
Kent Overstreet authored
Fixes for various checkpatch errors. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Dead code, delete. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This factors out a properly-documented helper for deciding when we want to sort a btree node with MAX_BSETS bsets down to a single bset. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This replaces the sysfs btree_avg_write_size attribute with btree_write_stats, which breaks out statistics by the source of the btree write. Btree writes that are too small are a source of inefficiency and excessive btree resort overhead; this will let us see what's causing them. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Fixes fstests generic/648 Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Per fstests generic/275, on -ENOSPC we're supposed to write until the filesystem is full - i.e. do a partial write instead of failing the full write. This is a partial fix for the buffered write path: we'll still fail on a page boundary. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
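One simple way to get that behaviour is to retry a failed write with less data rather than failing outright; the page-boundary backoff below is my reading of the note above, and do_buffered_write() is a hypothetical low-level helper, not a bcachefs function:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>

#define PAGE_SIZE_ 4096u  /* assumed page size for the sketch */

/* Hypothetical low-level helper: writes exactly `len` bytes or fails. */
extern ssize_t do_buffered_write(void *inode, const char *buf, size_t len);

/*
 * On -ENOSPC, retry with a shorter length (backed off to a page boundary)
 * instead of failing the whole write, so the caller gets a short count
 * once the filesystem is full.
 */
static ssize_t write_allowing_partial(void *inode, const char *buf, size_t len)
{
	for (;;) {
		ssize_t ret = do_buffered_write(inode, buf, len);

		if (ret != -ENOSPC)
			return ret;            /* success, or a different error */

		if (len <= PAGE_SIZE_)
			return -ENOSPC;        /* not even one page fit */

		/* Retry with less: back off to the previous page boundary. */
		len = (len - 1) & ~(size_t)(PAGE_SIZE_ - 1);
	}
}
```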
-
Kent Overstreet authored
- In the btree iterator code that overlays keys from the journal, we were incorrectly specifying level=0 instead of the btree_path's current level in a few places
- When we didn't do journal replay, we shouldn't free the journal keys: this fixes cmd_list and cmd_dump, which run in norecovery mode

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
roundup_pow_of_two() is undefined for 0 - oops. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
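The kernel's roundup_pow_of_two() is built on a bit trick that shifts by the full word width when the argument is 0, so callers must guard that case themselves. A self-contained userspace stand-in showing the guard (not the kernel implementation itself):

```c
#include <limits.h>

/*
 * Userspace stand-in for roundup_pow_of_two(); like the real helper, the
 * bit trick below is undefined for n == 0, hence the explicit guard.
 */
static unsigned long roundup_pow_of_two_safe(unsigned long n)
{
	if (n <= 1)
		return 1;   /* guard: clamp 0 (and 1) to the smallest power of two */

	return 1UL << ((sizeof(unsigned long) * CHAR_BIT) - __builtin_clzl(n - 1));
}
```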
-
Kent Overstreet authored
Use __func__ in error messages that refer to the function name, and do so more uniformly - suggested by checkpatch.pl. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
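A small illustration of the pattern (the function and message below are invented, only the use of the standard __func__ identifier is the point):

```c
#include <stdio.h>

/* Using __func__ keeps the message correct if the function is renamed,
 * which is why checkpatch.pl flags hardcoded function names. */
static int example_read_slot(int slot, int nr_slots)
{
	if (slot >= nr_slots) {
		fprintf(stderr, "%s: slot %d out of range (max %d)\n",
			__func__, slot, nr_slots - 1);
		return -1;
	}
	return 0;
}
```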
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This adds an inlined version of bch2_bkey_cmp_packed(), and uses it in bch2_sort_keys(), where it's part of the inner loop. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
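The win is the usual one of avoiding an indirect call in a sort's inner loop. An illustrative, non-bcachefs sketch: a static inline comparator the compiler can fold into the loop, instead of calling through a function pointer as qsort() would:

```c
#include <stddef.h>

struct key { unsigned long hi, lo; };

/* Comparator the compiler can inline into the sort's inner loop,
 * avoiding an indirect call per comparison. */
static inline int key_cmp(const struct key *l, const struct key *r)
{
	if (l->hi != r->hi)
		return l->hi < r->hi ? -1 : 1;
	return l->lo < r->lo ? -1 : (l->lo > r->lo);
}

/* Simple insertion sort whose hot loop uses the inlined comparator. */
static void sort_keys(struct key *keys, size_t n)
{
	for (size_t i = 1; i < n; i++) {
		struct key tmp = keys[i];
		size_t j = i;

		while (j && key_cmp(&keys[j - 1], &tmp) > 0) {
			keys[j] = keys[j - 1];
			j--;
		}
		keys[j] = tmp;
	}
}
```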
-
Kent Overstreet authored
Long ago, bkey_unpack_key() was added to bset.h instead of bkey.h because bkey.h didn't include btree_types.h, which it needs for the compiled unpack function. This patch finally moves it to the proper location. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This replaces an expensive memmove() call with an open-coded version. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
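For small, fixed-stride shifts, the generic memmove() call overhead can dominate; an open-coded copy lets the compiler see the tiny overlapping move directly. A generic sketch of the technique, not the bcachefs code:

```c
#include <stddef.h>

/* Shift `n` elements one slot to the right inside the same array,
 * i.e. what memmove(dst + 1, dst, n * sizeof(*dst)) would do, but
 * open-coded; copying backwards handles the overlap correctly. */
static void shift_right_one(unsigned long *dst, size_t n)
{
	for (size_t i = n; i > 0; i--)
		dst[i] = dst[i - 1];
}
```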
-
Kent Overstreet authored
This moves the JOURNAL_REPLAY_DONE flag check from bch2_trans_iter_init() to bch2_trans_init(), where we stash a copy in btree_trans - gaining us a small performance improvement. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
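The pattern is hoisting a global-flag test out of a hot per-iterator path into the once-per-transaction setup. A sketch with simplified, made-up struct layouts (only the flag name matches the commit):

```c
#include <stdbool.h>

struct fs {
	unsigned long flags;               /* global filesystem flags */
};

#define JOURNAL_REPLAY_DONE (1UL << 0)

struct btree_trans {
	struct fs *c;
	bool journal_replay_not_finished;  /* cached copy of the global flag */
};

static void trans_init(struct btree_trans *trans, struct fs *c)
{
	trans->c = c;
	/* Test the shared flag once here ... */
	trans->journal_replay_not_finished = !(c->flags & JOURNAL_REPLAY_DONE);
}

static void trans_iter_init(struct btree_trans *trans)
{
	/* ... so each iterator init only reads transaction-local state. */
	if (trans->journal_replay_not_finished) {
		/* overlay keys from the journal ... */
	}
}
```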
-
Kent Overstreet authored
checkpatch.pl gives lots of warnings that we don't want - suggested ignore list:

ASSIGN_IN_IF
UNSPECIFIED_INT - bcachefs coding style prefers single token type names
NEW_TYPEDEFS - typedefs are occasionally good
FUNCTION_ARGUMENTS - we prefer to look at functions in .c files (hopefully with docbook documentation), not .h file prototypes
MULTISTATEMENT_MACRO_USE_DO_WHILE - we have _many_ x-macros and other macros where we can't do this

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- add bch2_dev_usage_read_fast(), which doesn't return by value - bch_dev_usage is big enough that we don't want the silent memcpy
- tweak the allocation path to only call bch2_dev_usage_read() once per bucket allocated

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
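Returning a large struct by value makes the compiler emit a hidden copy into the caller's buffer; an out-parameter variant makes that cost explicit and avoidable. A generic sketch of the two shapes - dev_usage and the field layout here are invented stand-ins for bch_dev_usage:

```c
struct dev_usage {
	unsigned long buckets[8];
	unsigned long sectors[8];
	unsigned long fragmented[8];
};

struct device {
	struct dev_usage usage;
};

/* Fast variant: fill the caller's buffer directly, no struct return. */
static void dev_usage_read_fast(struct device *dev, struct dev_usage *out)
{
	*out = dev->usage;   /* one explicit copy, into storage the caller owns */
}

/* Convenience wrapper: returning by value implies another hidden copy
 * (the ABI passes a pointer to a temporary), which is what the fast
 * variant avoids on the allocation path. */
static struct dev_usage dev_usage_read(struct device *dev)
{
	struct dev_usage ret;

	dev_usage_read_fast(dev, &ret);
	return ret;
}
```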
-
Daniel Hill authored
crc.compression_type and crc.nonce get reset inside bch2_rechecksum_bio(); we set them back to the previously calculated values. This fixes incompressible extents being marked as uncompressed. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Daniel B. Hill authored
Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This shouldn't be needed anymore, since we don't rely on the pointer validity that this was guarding against anymore - we get a new good reference and save it right after this function. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This separates out the slowpath of bch2_trans_update_by_path_trace() into a new non-inlined helper. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
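Splitting the slow path into a noinline helper keeps the hot caller's inlined code small. A schematic of the shape with invented names, not the bcachefs functions:

```c
struct update { int path_idx; /* ... */ };

/* Rare path: kept out of line so it doesn't bloat every inlined caller. */
static __attribute__((noinline)) int update_by_path_slowpath(struct update *u)
{
	/* allocate a new path, emit tracepoints, etc. */
	return 0;
}

/* Hot path: small enough that inlining it everywhere stays cheap. */
static inline int update_by_path(struct update *u, int expected_idx)
{
	if (u->path_idx == expected_idx)
		return 0;                      /* common case, stays inline */

	return update_by_path_slowpath(u);     /* out-of-line cold case */
}
```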
-
Kent Overstreet authored
Delete some code when CONFIG_BCACHEFS_DEBUG=n Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
It's mainly used from bch2_inode_write(), so inline it there. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We don't want to fire the bucket_alloc_fail tracepoint on transaction restart, when we can retry immediately - only when the allocation actually has to block. Also, switch from sched_clock() to local_clock(), as we've been doing elsewhere. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Now we store the transaction's fn idx in a local variable, instead of redoing the lookup every time we call bch2_trans_init(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This breaks up btree_path_up_until_good_node() so that only the fastpath gets inlined. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The shrinker assumes freed key cache items are ordered by age, so that it doesn't have to scan the full list to find items that are old enough (according to the srcu code) to be freed. But percpu freelists broke this ordering; this patch fixes this by ensuring we insert items into the proper position. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
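With the age ordering restored, the shrinker can stop scanning at the first entry that is still too new. A standalone sketch of keeping a freed list sorted by free time - a simple singly linked list with invented names, not the bcachefs key cache structures:

```c
#include <stdint.h>
#include <stddef.h>

/* A freed key-cache entry, stamped with when it was freed. */
struct freed_entry {
	struct freed_entry *next;
	uint64_t freed_at;          /* monotonic timestamp */
};

/*
 * Insert so the list stays ordered oldest-first.  The common case (the
 * new entry is the newest) could be O(1) with a tail pointer; a plain
 * walk is shown here for clarity.
 */
static void freed_list_insert(struct freed_entry **head, struct freed_entry *new)
{
	struct freed_entry **p = head;

	while (*p && (*p)->freed_at <= new->freed_at)
		p = &(*p)->next;

	new->next = *p;
	*p = new;
}

/* The scanner pops from the head and stops at the first too-new entry. */
static struct freed_entry *pop_if_older_than(struct freed_entry **head, uint64_t cutoff)
{
	struct freed_entry *e = *head;

	if (!e || e->freed_at >= cutoff)
		return NULL;

	*head = e->next;
	return e;
}
```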
-
Daniel Hill authored
A single block can't be compressed, so it's incompressible. This stops rebalance repeatedly marking extents as uncompressed. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Daniel Hill authored
Sometimes the user may need to change durability after formatting to match the current hardware setup; this option provides a quick and flexible alternative to removing and then re-adding the device. It is HIGHLY ADVISED TO RUN REREPLICATE after changing this value so the system doesn't remain degraded. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Daniel Hill authored
Appending new nodes to the end of the list means we're more likely to evict old entries when btree_cache_scan() is started. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- We now correctly allow soft limits to be exceeded, instead of always returning -EDQUOT
- Disk quota grace times/warnings can now be set, not just the systemwide defaults

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
local_clock() isn't always completely accurate - e.g. on machines with TSC drift - but ktime_get_ns() overhead is too high, unfortunately. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- In userspace, we don't have real percpu variables; this patch disables the percpu freelists in userspace
- add some error messages for the asserts in bch2_fs_btree_key_cache_exit(); we've been hitting this (only in userspace, oddly), perhaps this will help us track down the error
- bkey_cached_reuse() should likely be taking the key cache lock, and it's a slowpath so it doesn't hurt to

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We were forgetting to count down the number of nodes to prefetch, firing off _way_ more than intended - whoops. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We don't actually allocate memory under the btree key cache lock - so there's no recursion concerns, and the shrinker can just use mutex_lock(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
On journal read, previously we would do full journal entry validation immediately after reading a journal entry. However, this would lead to errors for journal entries we weren't actually going to use, either because they were too old or too new (newer than the most recent flush). We've observed write tearing on journal entries newer than the newest flush - which makes sense, prior to a flush there's no guarantees about write persistence. This patch defers full journal entry validation until the end of the journal read path, when we know which journal entries we'll want to use. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Prep work for the next patch, to defer journal entry validation: we now track for each replica whether we had a good checksum. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-