Commit 720261cf authored by Linus Torvalds

Merge tag 'bcachefs-2024-07-18.2' of https://evilpiepirate.org/git/bcachefs

Pull bcachefs updates from Kent Overstreet:

 - Metadata version 1.8: Stripe sectors accounting, BCH_DATA_unstriped

   This splits out the accounting of dirty sectors and stripe sectors in
   alloc keys, letting us see stripe buckets that still have unstriped
   data in them.

   This is needed to verify that erasure coding is working correctly, and
   to complete stripe creation after a crash (see the sketch below).
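
   A minimal sketch of the split, under stated assumptions: the field
   names mirror the bch_alloc_v4 changes in the alloc_background.h hunks
   further down, but bucket_usage and bucket_has_unstriped_data are
   illustrative names, not actual bcachefs helpers.

       struct bucket_usage {
               u32 stripe_sectors;     /* dirty sectors owned by a stripe */
               u32 dirty_sectors;      /* dirty sectors not (yet) striped */
               u32 cached_sectors;
       };

       /* What BCH_DATA_unstriped flags: a stripe bucket with unstriped data */
       static inline bool bucket_has_unstriped_data(enum bch_data_type type,
                                                    struct bucket_usage u)
       {
               return type == BCH_DATA_stripe && u.dirty_sectors != 0;
       }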

 - Metadata version 1.9: Disk accounting rewrite

   The previous disk accounting scheme relied heavily on percpu counters
   that were also sharded by outstanding journal buffer; it was fast but
   not extensible or scalable, and meant that all accounting counters
   were recorded in every journal entry.

   The new disk accounting scheme stores accounting as normal btree
   keys; updates are deltas until they are flushed by the btree write
   buffer.

   This means there is no practical limit on the number of counters, and
   the new tagged union key format is easy to extend (see the sketch
   below).

   We now have counters for compression type/ratio, per-snapshot-id
   usage, per-btree-id usage, and pending rebalance work.
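
   A hedged sketch of the idea, not the exact on-disk layout: the
   accounting key's position encodes a tagged union selecting the counter
   type, and the value is an array of s64 deltas that the write buffer
   flush folds into the btree. All names below are illustrative.

       enum accounting_type {          /* illustrative subset */
               ACCT_nr_inodes,
               ACCT_compression,       /* compression type/ratio */
               ACCT_snapshot,          /* per-snapshot-id usage */
               ACCT_btree,             /* per-btree-id usage */
               ACCT_rebalance_work,
       };

       struct accounting_pos {
               u8 type;                /* tag: which counter this is */
               union {                 /* per-type payload; easy to extend */
                       struct { u8  compression_type; } compression;
                       struct { u32 snapshot_id; }      snapshot;
                       struct { u32 btree_id; }         btree;
               };
       };

       /* Updates are deltas until flushed: */
       static void accounting_apply_delta(s64 *counters, const s64 *delta,
                                          unsigned nr)
       {
               while (nr--)
                       counters[nr] += delta[nr];
       }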

 - Self healing on read IO/checksum error

   Data is now automatically rewritten if we get a read error followed by
   a successful retry (flow sketched below).
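
   A sketch of the flow, with illustrative names only (read_replica() and
   queue_rewrite() stand in for the real read and data_update paths):

       struct read_ctx {
               unsigned nr_replicas;
               bool     saw_error;     /* some replica read failed */
       };

       static int read_with_self_heal(struct read_ctx *ctx)
       {
               int ret = read_replica(ctx, 0);

               if (ret)
                       ctx->saw_error = true;

               /* retry remaining replicas until one succeeds: */
               for (unsigned r = 1; ret && r < ctx->nr_replicas; r++)
                       ret = read_replica(ctx, r);

               /* heal: rewrite the extent to restore good copies */
               if (!ret && ctx->saw_error)
                       queue_rewrite(ctx);
               return ret;
       }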

 - Mount API conversion (thanks to Thomas Bertschinger)

 - Better lockdep coverage

   Previously, btree node locks were tracked individually by lockdep,
   like any other lock. But we may take _many_ btree node locks
   simultaneously, so we easily blow through the limit of 48 held locks
   that lockdep can track, causing lockdep to turn itself off.

   Tracking each btree node lock individually isn't really necessary,
   since we have our own cycle detector for deadlock avoidance and
   centralized tracking of btree node locks; so we now have a single
   lockdep_map in btree_trans for "any btree nodes are locked" (see the
   sketch below).
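
   A sketch of the single-map idea: lockdep_init_map(), lock_acquire()
   and lock_release() are real lockdep primitives (the btree_iter.c hunk
   below does the lockdep_init_map() part), but the wrapper names and
   struct here are illustrative.

       struct btree_trans_sketch {
               struct lockdep_map dep_map;     /* "any btree nodes locked" */
       };

       static void trans_set_locked_sketch(struct btree_trans_sketch *trans)
       {
               /* one acquisition, however many node locks we then take: */
               lock_acquire_exclusive(&trans->dep_map, 0, 0, NULL, _THIS_IP_);
       }

       static void trans_set_unlocked_sketch(struct btree_trans_sketch *trans)
       {
               lock_release(&trans->dep_map, _THIS_IP_);
       }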

 - Some more small incremental work towards online check_allocations

 - Lots more debugging improvements

 - Fixes, including:
    - undefined behaviour fixes, originally noted as breaking userspace
      LTO builds
    - fix a spurious warning in fsck_err, reported by Marcin
    - fix an integer overflow on trans->nr_updates, also reported by
      Marcin; this broke during deletion of highly fragmented indirect
      extents
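
   The varint commit in the shortlog below is one instance of that UB
   class: left-shifting a negative signed value is undefined in C. A
   generic illustration, not the exact bcachefs patch:

       /* Cast to unsigned before shifting; unsigned shifts are fully
        * defined. (The right shift relies on arithmetic shift of signed
        * values, as kernel code conventionally does.) */
       static inline u64 zigzag_encode(s64 v)
       {
               return (((u64) v) << 1) ^ (u64) (v >> 63);
       }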

* tag 'bcachefs-2024-07-18.2' of https://evilpiepirate.org/git/bcachefs: (120 commits)
  lockdep: Add comments for lockdep_set_no{validate,track}_class()
  bcachefs: Fix integer overflow on trans->nr_updates
  bcachefs: silence silly kdoc warning
  bcachefs: Fix fsck warning about btree_trans not passed to fsck error
  bcachefs: Add an error message for insufficient rw journal devs
  bcachefs: varint: Avoid left-shift of a negative value
  bcachefs: darray: Don't pass NULL to memcpy()
  bcachefs: Kill bch2_assert_btree_nodes_not_locked()
  bcachefs: Rename BCH_WRITE_DONE -> BCH_WRITE_SUBMITTED
  bcachefs: __bch2_read(): call trans_begin() on every loop iter
  bcachefs: show none if label is not set
  bcachefs: drop packed, aligned from bkey_inode_buf
  bcachefs: btree node scan: fall back to comparing by journal seq
  bcachefs: Add lockdep support for btree node locks
  lockdep: lockdep_set_notrack_class()
  bcachefs: Improve copygc_wait_to_text()
  bcachefs: Convert clock code to u64s
  bcachefs: Improve startup message
  bcachefs: Self healing on read IO error
  bcachefs: Make read_only a mount option again, but hidden
  ...
parents 4f40c636 a97b43fa
@@ -3720,7 +3720,6 @@ F:	drivers/md/bcache/
 BCACHEFS
 M:	Kent Overstreet <kent.overstreet@linux.dev>
-R:	Brian Foster <bfoster@redhat.com>
 L:	linux-bcachefs@vger.kernel.org
 S:	Supported
 C:	irc://irc.oftc.net/bcache
......
@@ -29,10 +29,11 @@ bcachefs-y := \
         clock.o \
         compress.o \
         darray.o \
+        data_update.o \
         debug.o \
         dirent.o \
+        disk_accounting.o \
         disk_groups.o \
-        data_update.o \
         ec.o \
         errcode.o \
         error.o \
......
@@ -346,7 +346,6 @@ int bch2_set_acl(struct mnt_idmap *idmap,
 {
         struct bch_inode_info *inode = to_bch_ei(dentry->d_inode);
         struct bch_fs *c = inode->v.i_sb->s_fs_info;
-        struct btree_trans *trans = bch2_trans_get(c);
         struct btree_iter inode_iter = { NULL };
         struct bch_inode_unpacked inode_u;
         struct posix_acl *acl;
@@ -354,6 +353,7 @@ int bch2_set_acl(struct mnt_idmap *idmap,
         int ret;
 
         mutex_lock(&inode->ei_update_lock);
+        struct btree_trans *trans = bch2_trans_get(c);
 retry:
         bch2_trans_begin(trans);
         acl = _acl;
@@ -394,8 +394,8 @@ int bch2_set_acl(struct mnt_idmap *idmap,
         set_cached_acl(&inode->v, type, acl);
 err:
-        mutex_unlock(&inode->ei_update_lock);
         bch2_trans_put(trans);
+        mutex_unlock(&inode->ei_update_lock);
         return ret;
 }
......
@@ -41,6 +41,7 @@ static inline void alloc_to_bucket(struct bucket *dst, struct bch_alloc_v4 src)
 {
         dst->gen = src.gen;
         dst->data_type = src.data_type;
+        dst->stripe_sectors = src.stripe_sectors;
         dst->dirty_sectors = src.dirty_sectors;
         dst->cached_sectors = src.cached_sectors;
         dst->stripe = src.stripe;
@@ -50,6 +51,7 @@ static inline void __bucket_m_to_alloc(struct bch_alloc_v4 *dst, struct bucket s
 {
         dst->gen = src.gen;
         dst->data_type = src.data_type;
+        dst->stripe_sectors = src.stripe_sectors;
         dst->dirty_sectors = src.dirty_sectors;
         dst->cached_sectors = src.cached_sectors;
         dst->stripe = src.stripe;
@@ -80,30 +82,49 @@ static inline bool bucket_data_type_mismatch(enum bch_data_type bucket,
                 bucket_data_type(bucket) != bucket_data_type(ptr);
 }
 
-static inline unsigned bch2_bucket_sectors_total(struct bch_alloc_v4 a)
+static inline s64 bch2_bucket_sectors_total(struct bch_alloc_v4 a)
 {
-        return a.dirty_sectors + a.cached_sectors;
+        return a.stripe_sectors + a.dirty_sectors + a.cached_sectors;
 }
 
-static inline unsigned bch2_bucket_sectors_dirty(struct bch_alloc_v4 a)
+static inline s64 bch2_bucket_sectors_dirty(struct bch_alloc_v4 a)
 {
-        return a.dirty_sectors;
+        return a.stripe_sectors + a.dirty_sectors;
 }
 
-static inline unsigned bch2_bucket_sectors_fragmented(struct bch_dev *ca,
+static inline s64 bch2_bucket_sectors(struct bch_alloc_v4 a)
+{
+        return a.data_type == BCH_DATA_cached
+                ? a.cached_sectors
+                : bch2_bucket_sectors_dirty(a);
+}
+
+static inline s64 bch2_bucket_sectors_fragmented(struct bch_dev *ca,
                                                  struct bch_alloc_v4 a)
 {
-        int d = bch2_bucket_sectors_dirty(a);
+        int d = bch2_bucket_sectors(a);
 
         return d ? max(0, ca->mi.bucket_size - d) : 0;
 }
 
+static inline s64 bch2_gc_bucket_sectors_fragmented(struct bch_dev *ca, struct bucket a)
+{
+        int d = a.stripe_sectors + a.dirty_sectors;
+
+        return d ? max(0, ca->mi.bucket_size - d) : 0;
+}
+
+static inline s64 bch2_bucket_sectors_unstriped(struct bch_alloc_v4 a)
+{
+        return a.data_type == BCH_DATA_stripe ? a.dirty_sectors : 0;
+}
+
 static inline enum bch_data_type alloc_data_type(struct bch_alloc_v4 a,
                                                  enum bch_data_type data_type)
 {
         if (a.stripe)
                 return data_type == BCH_DATA_parity ? data_type : BCH_DATA_stripe;
-        if (a.dirty_sectors)
+        if (bch2_bucket_sectors_dirty(a))
                 return data_type;
         if (a.cached_sectors)
                 return BCH_DATA_cached;
@@ -185,7 +206,8 @@ static inline void set_alloc_v4_u64s(struct bkey_i_alloc_v4 *a)
 struct bkey_i_alloc_v4 *
 bch2_trans_start_alloc_update_noupdate(struct btree_trans *, struct btree_iter *, struct bpos);
 struct bkey_i_alloc_v4 *
-bch2_trans_start_alloc_update(struct btree_trans *, struct bpos);
+bch2_trans_start_alloc_update(struct btree_trans *, struct bpos,
+                              enum btree_iter_update_trigger_flags);
 
 void __bch2_alloc_to_v4(struct bkey_s_c, struct bch_alloc_v4 *);
@@ -270,6 +292,9 @@ static inline bool bkey_is_alloc(const struct bkey *k)
 int bch2_alloc_read(struct bch_fs *);
 
+int bch2_alloc_key_to_dev_counters(struct btree_trans *, struct bch_dev *,
+                                   const struct bch_alloc_v4 *,
+                                   const struct bch_alloc_v4 *, unsigned);
 int bch2_trigger_alloc(struct btree_trans *, enum btree_id, unsigned,
                        struct bkey_s_c, struct bkey_s,
                        enum btree_iter_update_trigger_flags);
......
@@ -70,6 +70,8 @@ struct bch_alloc_v4 {
         __u32                   stripe;
         __u32                   nr_external_backpointers;
         __u64                   fragmentation_lru;
+        __u32                   stripe_sectors;
+        __u32                   pad;
 } __packed __aligned(8);
 
 #define BCH_ALLOC_V4_U64s_V0    6
......
@@ -1589,7 +1589,7 @@ void bch2_fs_allocator_foreground_init(struct bch_fs *c)
         }
 }
 
-static void bch2_open_bucket_to_text(struct printbuf *out, struct bch_fs *c, struct open_bucket *ob)
+void bch2_open_bucket_to_text(struct printbuf *out, struct bch_fs *c, struct open_bucket *ob)
 {
         struct bch_dev *ca = ob_dev(c, ob);
         unsigned data_type = ob->data_type;
@@ -1706,15 +1706,15 @@ void bch2_fs_alloc_debug_to_text(struct printbuf *out, struct bch_fs *c)
         printbuf_tabstops_reset(out);
         printbuf_tabstop_push(out, 24);
 
-        percpu_down_read(&c->mark_lock);
-        prt_printf(out, "hidden\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.hidden));
-        prt_printf(out, "btree\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.btree));
-        prt_printf(out, "data\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.data));
-        prt_printf(out, "cached\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.cached));
-        prt_printf(out, "reserved\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.reserved));
-        prt_printf(out, "online_reserved\t%llu\n", percpu_u64_get(c->online_reserved));
-        prt_printf(out, "nr_inodes\t%llu\n", bch2_fs_usage_read_one(c, &c->usage_base->b.nr_inodes));
-        percpu_up_read(&c->mark_lock);
+        prt_printf(out, "capacity\t%llu\n", c->capacity);
+        prt_printf(out, "reserved\t%llu\n", c->reserved);
+        prt_printf(out, "hidden\t%llu\n", percpu_u64_get(&c->usage->hidden));
+        prt_printf(out, "btree\t%llu\n", percpu_u64_get(&c->usage->btree));
+        prt_printf(out, "data\t%llu\n", percpu_u64_get(&c->usage->data));
+        prt_printf(out, "cached\t%llu\n", percpu_u64_get(&c->usage->cached));
+        prt_printf(out, "reserved\t%llu\n", percpu_u64_get(&c->usage->reserved));
+        prt_printf(out, "online_reserved\t%llu\n", percpu_u64_get(c->online_reserved));
+        prt_printf(out, "nr_inodes\t%llu\n", percpu_u64_get(&c->usage->nr_inodes));
 
         prt_newline(out);
         prt_printf(out, "freelist_wait\t%s\n", c->freelist_wait.list.first ? "waiting" : "empty");
......
@@ -222,6 +222,7 @@ static inline struct write_point_specifier writepoint_ptr(struct write_point *wp
 
 void bch2_fs_allocator_foreground_init(struct bch_fs *);
 
+void bch2_open_bucket_to_text(struct printbuf *, struct bch_fs *, struct open_bucket *);
 void bch2_open_buckets_to_text(struct printbuf *, struct bch_fs *);
 void bch2_open_buckets_partial_to_text(struct printbuf *, struct bch_fs *);
......
@@ -395,7 +395,7 @@ static int bch2_check_btree_backpointer(struct btree_trans *trans, struct btree_
         struct bpos bucket;
         if (!bp_pos_to_bucket_nodev_noerror(c, k.k->p, &bucket)) {
-                if (fsck_err(c, backpointer_to_missing_device,
+                if (fsck_err(trans, backpointer_to_missing_device,
                              "backpointer for missing device:\n%s",
                              (bch2_bkey_val_to_text(&buf, c, k), buf.buf)))
                         ret = bch2_btree_delete_at(trans, bp_iter, 0);
@@ -407,8 +407,8 @@ static int bch2_check_btree_backpointer(struct btree_trans *trans, struct btree_
         if (ret)
                 goto out;
 
-        if (fsck_err_on(alloc_k.k->type != KEY_TYPE_alloc_v4, c,
-                        backpointer_to_missing_alloc,
+        if (fsck_err_on(alloc_k.k->type != KEY_TYPE_alloc_v4,
+                        trans, backpointer_to_missing_alloc,
                         "backpointer for nonexistent alloc key: %llu:%llu:0\n%s",
                         alloc_iter.pos.inode, alloc_iter.pos.offset,
                         (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) {
@@ -505,7 +505,7 @@ static int check_extent_checksum(struct btree_trans *trans,
         struct nonce nonce = extent_nonce(extent.k->version, p.crc);
         struct bch_csum csum = bch2_checksum(c, p.crc.csum_type, nonce, data_buf, bytes);
         if (fsck_err_on(bch2_crc_cmp(csum, p.crc.csum),
-                        c, dup_backpointer_to_bad_csum_extent,
+                        trans, dup_backpointer_to_bad_csum_extent,
                         "%s", buf.buf))
                 ret = drop_dev_and_update(trans, btree, extent, dev) ?: 1;
 fsck_err:
@@ -647,7 +647,7 @@ static int check_bp_exists(struct btree_trans *trans,
         prt_printf(&buf, "\n want: ");
         bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(&n_bp_k.k_i));
 
-        if (fsck_err(c, ptr_to_missing_backpointer, "%s", buf.buf))
+        if (fsck_err(trans, ptr_to_missing_backpointer, "%s", buf.buf))
                 ret = bch2_bucket_backpointer_mod(trans, ca, bucket, bp, orig_k, true);
 
         goto out;
@@ -762,12 +762,12 @@ static int bch2_get_btree_in_memory_pos(struct btree_trans *trans,
         for (enum btree_id btree = start.btree;
              btree < BTREE_ID_NR && !ret;
              btree++) {
-                unsigned depth = ((1U << btree) & btree_leaf_mask) ? 0 : 1;
+                unsigned depth = (BIT_ULL(btree) & btree_leaf_mask) ? 0 : 1;
                 struct btree_iter iter;
                 struct btree *b;
 
-                if (!((1U << btree) & btree_leaf_mask) &&
-                    !((1U << btree) & btree_interior_mask))
+                if (!(BIT_ULL(btree) & btree_leaf_mask) &&
+                    !(BIT_ULL(btree) & btree_interior_mask))
                         continue;
 
                 bch2_trans_begin(trans);
@@ -908,7 +908,7 @@ static int check_one_backpointer(struct btree_trans *trans,
         if (ret)
                 goto out;
 
-        if (fsck_err(c, backpointer_to_missing_ptr,
+        if (fsck_err(trans, backpointer_to_missing_ptr,
                      "backpointer for missing %s\n %s",
                      bp.v->level ? "btree node" : "extent",
                      (bch2_bkey_val_to_text(&buf, c, bp.s_c), buf.buf))) {
@@ -951,8 +951,8 @@ int bch2_check_backpointers_to_extents(struct bch_fs *c)
         while (1) {
                 ret = bch2_get_btree_in_memory_pos(trans,
-                                                   (1U << BTREE_ID_extents)|
-                                                   (1U << BTREE_ID_reflink),
+                                                   BIT_ULL(BTREE_ID_extents)|
+                                                   BIT_ULL(BTREE_ID_reflink),
                                                    ~0,
                                                    start, &end);
                 if (ret)
......
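The BIT_ULL() conversions above matter because the btree masks are 64-bit
and, with the new accounting btree, btree IDs keep growing: "1U << btree"
both truncates the mask width and is undefined behaviour once an ID reaches
32. A hedged sketch of the point (btree_in_mask() is an illustrative
helper, not a bcachefs function):

    #include <linux/bits.h>

    static inline bool btree_in_mask(unsigned btree, u64 mask)
    {
            /* BIT_ULL(n) is 1ULL << n: stays within the u64 mask */
            return (BIT_ULL(btree) & mask) != 0;
    }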
@@ -205,6 +205,7 @@
 #include <linux/zstd.h>
 
 #include "bcachefs_format.h"
+#include "disk_accounting_types.h"
 #include "errcode.h"
 #include "fifo.h"
 #include "nocow_locking_types.h"
@@ -266,6 +267,8 @@ do {
 
 #define bch2_fmt(_c, fmt)       bch2_log_msg(_c, fmt "\n")
 
+void bch2_print_str(struct bch_fs *, const char *);
+
 __printf(2, 3)
 void bch2_print_opts(struct bch_opts *, const char *, ...);
@@ -535,8 +538,8 @@ struct bch_dev {
         /*
          * Buckets:
          * Per-bucket arrays are protected by c->mark_lock, bucket_lock and
-         * gc_lock, for device resize - holding any is sufficient for access:
-         * Or rcu_read_lock(), but only for dev_ptr_stale():
+         * gc_gens_lock, for device resize - holding any is sufficient for
+         * access: Or rcu_read_lock(), but only for dev_ptr_stale():
          */
         struct bucket_array __rcu *buckets_gc;
         struct bucket_gens __rcu *bucket_gens;
@@ -544,9 +547,7 @@ struct bch_dev {
         unsigned long *buckets_nouse;
         struct rw_semaphore bucket_lock;
 
-        struct bch_dev_usage *usage_base;
-        struct bch_dev_usage __percpu *usage[JOURNAL_BUF_NR];
-        struct bch_dev_usage __percpu *usage_gc;
+        struct bch_dev_usage __percpu *usage;
 
         /* Allocator: */
         u64 new_fs_bucket_idx;
@@ -592,6 +593,8 @@ struct bch_dev {
 #define BCH_FS_FLAGS()                  \
         x(new_fs)                       \
         x(started)                      \
+        x(btree_running)                \
+        x(accounting_replay_done)       \
         x(may_go_rw)                    \
         x(rw)                           \
         x(was_rw)                       \
@@ -670,8 +673,6 @@ struct btree_trans_buf {
         struct btree_trans *trans;
 };
 
-#define REPLICAS_DELTA_LIST_MAX (1U << 16)
-
 #define BCACHEFS_ROOT_SUBVOL_INUM \
         ((subvol_inum) { BCACHEFS_ROOT_SUBVOL, BCACHEFS_ROOT_INO })
@@ -741,15 +742,14 @@ struct bch_fs {
         struct bch_dev __rcu *devs[BCH_SB_MEMBERS_MAX];
 
+        struct bch_accounting_mem accounting;
+
         struct bch_replicas_cpu replicas;
         struct bch_replicas_cpu replicas_gc;
         struct mutex replicas_gc_lock;
-        mempool_t replicas_delta_pool;
 
         struct journal_entry_res btree_root_journal_res;
-        struct journal_entry_res replicas_journal_res;
         struct journal_entry_res clock_journal_res;
-        struct journal_entry_res dev_usage_journal_res;
 
         struct bch_disk_groups_cpu __rcu *disk_groups;
@@ -872,6 +872,7 @@ struct bch_fs {
         struct bch_devs_mask rw_devs[BCH_DATA_NR];
 
         u64 capacity; /* sectors */
+        u64 reserved; /* sectors */
 
         /*
          * When capacity _decreases_ (due to a disk being removed), we
@@ -889,15 +890,9 @@ struct bch_fs {
         struct percpu_rw_semaphore mark_lock;
         seqcount_t usage_lock;
-        struct bch_fs_usage *usage_base;
-        struct bch_fs_usage __percpu *usage[JOURNAL_BUF_NR];
-        struct bch_fs_usage __percpu *usage_gc;
+        struct bch_fs_usage_base __percpu *usage;
         u64 __percpu *online_reserved;
 
-        /* single element mempool: */
-        struct mutex usage_scratch_lock;
-        struct bch_fs_usage_online *usage_scratch;
-
         struct io_clock io_clock[2];
 
         /* JOURNAL SEQ BLACKLIST */
......
@@ -417,7 +417,8 @@ static inline void bkey_init(struct bkey *k)
         x(bucket_gens,          30)     \
         x(snapshot_tree,        31)     \
         x(logged_op_truncate,   32)     \
-        x(logged_op_finsert,    33)
+        x(logged_op_finsert,    33)     \
+        x(accounting,           34)
 
 enum bch_bkey_type {
 #define x(name, nr) KEY_TYPE_##name = nr,
@@ -467,18 +468,6 @@ struct bch_backpointer {
         struct bpos             pos;
 } __packed __aligned(8);
 
-/* LRU btree: */
-
-struct bch_lru {
-        struct bch_val          v;
-        __le64                  idx;
-} __packed __aligned(8);
-
-#define LRU_ID_STRIPES          (1U << 16)
-
-#define LRU_TIME_BITS           48
-#define LRU_TIME_MAX            ((1ULL << LRU_TIME_BITS) - 1)
-
 /* Optional/variable size superblock sections: */
 
 struct bch_sb_field {
@@ -505,6 +494,9 @@ struct bch_sb_field {
         x(downgrade,            14)
 
 #include "alloc_background_format.h"
+#include "dirent_format.h"
+#include "disk_accounting_format.h"
+#include "disk_groups_format.h"
 #include "extents_format.h"
 #include "ec_format.h"
-#include "dirent_format.h"
@@ -512,6 +504,7 @@ struct bch_sb_field {
 #include "inode_format.h"
 #include "journal_seq_blacklist_format.h"
 #include "logged_ops_format.h"
+#include "lru_format.h"
 #include "quota_format.h"
 #include "reflink_format.h"
 #include "replicas_format.h"
@@ -602,48 +595,6 @@ LE64_BITMASK(BCH_KDF_SCRYPT_N, struct bch_sb_field_crypt, kdf_flags, 0, 16);
 LE64_BITMASK(BCH_KDF_SCRYPT_R, struct bch_sb_field_crypt, kdf_flags, 16, 32);
 LE64_BITMASK(BCH_KDF_SCRYPT_P, struct bch_sb_field_crypt, kdf_flags, 32, 48);
 
-#define BCH_DATA_TYPES()        \
-        x(free,         0)      \
-        x(sb,           1)      \
-        x(journal,      2)      \
-        x(btree,        3)      \
-        x(user,         4)      \
-        x(cached,       5)      \
-        x(parity,       6)      \
-        x(stripe,       7)      \
-        x(need_gc_gens, 8)      \
-        x(need_discard, 9)
-
-enum bch_data_type {
-#define x(t, n) BCH_DATA_##t,
-        BCH_DATA_TYPES()
-#undef x
-        BCH_DATA_NR
-};
-
-static inline bool data_type_is_empty(enum bch_data_type type)
-{
-        switch (type) {
-        case BCH_DATA_free:
-        case BCH_DATA_need_gc_gens:
-        case BCH_DATA_need_discard:
-                return true;
-        default:
-                return false;
-        }
-}
-
-static inline bool data_type_is_hidden(enum bch_data_type type)
-{
-        switch (type) {
-        case BCH_DATA_sb:
-        case BCH_DATA_journal:
-                return true;
-        default:
-                return false;
-        }
-}
-
 /*
  * On clean shutdown, store btree roots and current journal sequence number in
  * the superblock:
@@ -722,7 +673,9 @@ struct bch_sb_field_ext {
         x(member_seq,                   BCH_VERSION(1, 4))      \
         x(subvolume_fs_parent,          BCH_VERSION(1, 5))      \
         x(btree_subvolume_children,     BCH_VERSION(1, 6))      \
-        x(mi_btree_bitmap,              BCH_VERSION(1, 7))
+        x(mi_btree_bitmap,              BCH_VERSION(1, 7))      \
+        x(bucket_stripe_sectors,        BCH_VERSION(1, 8))      \
+        x(disk_accounting_v2,           BCH_VERSION(1, 9))
 
 enum bcachefs_metadata_version {
         bcachefs_metadata_version_min = 9,
@@ -1174,7 +1127,6 @@ static inline bool jset_entry_is_key(struct jset_entry *e)
         switch (e->type) {
         case BCH_JSET_ENTRY_btree_keys:
         case BCH_JSET_ENTRY_btree_root:
-        case BCH_JSET_ENTRY_overwrite:
         case BCH_JSET_ENTRY_write_buffer_keys:
                 return true;
         }
@@ -1375,7 +1327,9 @@ enum btree_id_flags {
         x(rebalance_work,       18,     BTREE_ID_SNAPSHOT_FIELD,        \
           BIT_ULL(KEY_TYPE_set)|BIT_ULL(KEY_TYPE_cookie))               \
         x(subvolume_children,   19,     0,                              \
-          BIT_ULL(KEY_TYPE_set))
+          BIT_ULL(KEY_TYPE_set))                                        \
+        x(accounting,           20,     BTREE_ID_SNAPSHOT_FIELD,        \
+          BIT_ULL(KEY_TYPE_accounting))                                 \
 
 enum btree_id {
 #define x(name, nr, ...) BTREE_ID_##name = nr,
......
@@ -5,6 +5,7 @@
 #include <linux/uuid.h>
 #include <asm/ioctl.h>
 #include "bcachefs_format.h"
+#include "bkey_types.h"
 
 /*
  * Flags common to multiple ioctls:
@@ -85,6 +86,7 @@ struct bch_ioctl_incremental {
 #define BCH_IOCTL_FSCK_OFFLINE  _IOW(0xbc, 19, struct bch_ioctl_fsck_offline)
 #define BCH_IOCTL_FSCK_ONLINE   _IOW(0xbc, 20, struct bch_ioctl_fsck_online)
+#define BCH_IOCTL_QUERY_ACCOUNTING _IOW(0xbc, 21, struct bch_ioctl_query_accounting)
 
 /* ioctl below act on a particular file, not the filesystem as a whole: */
@@ -251,12 +253,18 @@ struct bch_replicas_usage {
         struct bch_replicas_entry_v1 r;
 } __packed;
 
+static inline unsigned replicas_usage_bytes(struct bch_replicas_usage *u)
+{
+        return offsetof(struct bch_replicas_usage, r) + replicas_entry_bytes(&u->r);
+}
+
 static inline struct bch_replicas_usage *
 replicas_usage_next(struct bch_replicas_usage *u)
 {
-        return (void *) u + replicas_entry_bytes(&u->r) + 8;
+        return (void *) u + replicas_usage_bytes(u);
 }
 
+/* Obsolete */
 /*
  * BCH_IOCTL_FS_USAGE: query filesystem disk space usage
  *
@@ -282,6 +290,7 @@ struct bch_ioctl_fs_usage {
         struct bch_replicas_usage replicas[];
 };
 
+/* Obsolete */
 /*
  * BCH_IOCTL_DEV_USAGE: query device disk space usage
  *
@@ -306,6 +315,7 @@
         }                       d[10];
 };
 
+/* Obsolete */
 struct bch_ioctl_dev_usage_v2 {
         __u64                   dev;
         __u32                   flags;
@@ -409,4 +419,28 @@ struct bch_ioctl_fsck_online {
         __u64                   opts;   /* string */
 };
 
+/*
+ * BCH_IOCTL_QUERY_ACCOUNTING: query filesystem disk accounting
+ *
+ * Returns disk space usage broken out by data type, number of replicas, and
+ * by component device
+ *
+ * @replica_entries_bytes - size, in bytes, allocated for replica usage entries
+ *
+ * On success, @replica_entries_bytes will be changed to indicate the number of
+ * bytes actually used.
+ *
+ * Returns -ERANGE if @replica_entries_bytes was too small
+ */
+struct bch_ioctl_query_accounting {
+        __u64                   capacity;
+        __u64                   used;
+        __u64                   online_reserved;
+
+        __u32                   accounting_u64s;        /* input parameter */
+        __u32                   accounting_types_mask;  /* input parameter */
+
+        struct bkey_i_accounting accounting[];
+};
+
 #endif /* _BCACHEFS_IOCTL_H */
@@ -7,6 +7,7 @@
 #include "btree_types.h"
 #include "alloc_background.h"
 #include "dirent.h"
+#include "disk_accounting.h"
 #include "ec.h"
 #include "error.h"
 #include "extents.h"
......
@@ -602,8 +602,8 @@ int bch2_btree_cache_cannibalize_lock(struct btree_trans *trans, struct closure
         struct btree_cache *bc = &c->btree_cache;
         struct task_struct *old;
 
-        old = cmpxchg(&bc->alloc_lock, NULL, current);
-        if (old == NULL || old == current)
+        old = NULL;
+        if (try_cmpxchg(&bc->alloc_lock, &old, current) || old == current)
                 goto success;
 
         if (!cl) {
@@ -614,8 +614,8 @@ int bch2_btree_cache_cannibalize_lock(struct btree_trans *trans, struct closure
         closure_wait(&bc->alloc_wait, cl);
 
         /* Try again, after adding ourselves to waitlist */
-        old = cmpxchg(&bc->alloc_lock, NULL, current);
-        if (old == NULL || old == current) {
+        old = NULL;
+        if (try_cmpxchg(&bc->alloc_lock, &old, current) || old == current) {
                 /* We raced */
                 closure_wake_up(&bc->alloc_wait);
                 goto success;
@@ -1257,6 +1257,14 @@ const char *bch2_btree_id_str(enum btree_id btree)
         return btree < BTREE_ID_NR ? __bch2_btree_ids[btree] : "(unknown)";
 }
 
+void bch2_btree_id_to_text(struct printbuf *out, enum btree_id btree)
+{
+        if (btree < BTREE_ID_NR)
+                prt_str(out, __bch2_btree_ids[btree]);
+        else
+                prt_printf(out, "(unknown btree %u)", btree);
+}
+
 void bch2_btree_pos_to_text(struct printbuf *out, struct bch_fs *c, const struct btree *b)
 {
         prt_printf(out, "%s level %u/%u\n ",
......
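The cannibalize-lock hunks above show the tree-wide cmpxchg() ->
try_cmpxchg() conversion in this merge; the same idiom appears in the
btree_io.c loops below. The general pattern, sketched on a plain flags word
(set_flag_bit() is an illustrative name): try_cmpxchg() updates 'old' with
the current value on failure, saving a re-read on each loop iteration.

    static void set_flag_bit(unsigned long *flags, unsigned long bit)
    {
            unsigned long old = READ_ONCE(*flags), new;

            do {
                    new = old | bit;
            } while (!try_cmpxchg(flags, &old, new));
    }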
@@ -132,6 +132,8 @@ static inline struct btree *btree_node_root(struct bch_fs *c, struct btree *b)
 }
 
 const char *bch2_btree_id_str(enum btree_id);
+void bch2_btree_id_to_text(struct printbuf *, enum btree_id);
+
 void bch2_btree_pos_to_text(struct printbuf *, struct bch_fs *, const struct btree *);
 void bch2_btree_node_to_text(struct printbuf *, struct bch_fs *, const struct btree *);
 void bch2_btree_cache_to_text(struct printbuf *, const struct btree_cache *);
......
@@ -47,17 +47,10 @@ static inline struct gc_pos gc_pos_btree(enum btree_id btree, unsigned level,
         };
 }
 
-/*
- * GC position of the pointers within a btree node: note, _not_ for &b->key
- * itself, that lives in the parent node:
- */
-static inline struct gc_pos gc_pos_btree_node(struct btree *b)
-{
-        return gc_pos_btree(b->c.btree_id, b->c.level, b->key.k.p);
-}
-
 static inline int gc_btree_order(enum btree_id btree)
 {
+        if (btree == BTREE_ID_alloc)
+                return -2;
         if (btree == BTREE_ID_stripes)
                 return -1;
         return btree;
@@ -65,11 +58,11 @@ static inline int gc_btree_order(enum btree_id btree)
 static inline int gc_pos_cmp(struct gc_pos l, struct gc_pos r)
 {
         return  cmp_int(l.phase, r.phase) ?:
                 cmp_int(gc_btree_order(l.btree),
                         gc_btree_order(r.btree)) ?:
-                -cmp_int(l.level, r.level) ?:
+                cmp_int(l.level, r.level) ?:
                 bpos_cmp(l.pos, r.pos);
 }
 
 static inline bool gc_visited(struct bch_fs *c, struct gc_pos pos)
@@ -85,6 +78,8 @@ static inline bool gc_visited(struct bch_fs *c, struct gc_pos pos)
         return ret;
 }
 
+void bch2_gc_pos_to_text(struct printbuf *, struct gc_pos *);
+
 int bch2_gc_gens(struct bch_fs *);
 void bch2_gc_gens_async(struct bch_fs *);
 void bch2_fs_gc_init(struct bch_fs *);
......
@@ -4,11 +4,16 @@
 
 #include <linux/generic-radix-tree.h>
 
+#define GC_PHASES()             \
+        x(not_running)          \
+        x(start)                \
+        x(sb)                   \
+        x(btree)
+
 enum gc_phase {
-        GC_PHASE_not_running,
-        GC_PHASE_start,
-        GC_PHASE_sb,
-        GC_PHASE_btree,
+#define x(n)    GC_PHASE_##n,
+        GC_PHASES()
+#undef x
 };
 
 struct gc_pos {
......
@@ -46,8 +46,6 @@ void bch2_btree_node_io_unlock(struct btree *b)
 
 void bch2_btree_node_io_lock(struct btree *b)
 {
-        bch2_assert_btree_nodes_not_locked();
-
         wait_on_bit_lock_io(&b->flags, BTREE_NODE_write_in_flight,
                             TASK_UNINTERRUPTIBLE);
 }
@@ -66,16 +64,12 @@ void __bch2_btree_node_wait_on_write(struct btree *b)
 
 void bch2_btree_node_wait_on_read(struct btree *b)
 {
-        bch2_assert_btree_nodes_not_locked();
-
         wait_on_bit_io(&b->flags, BTREE_NODE_read_in_flight,
                        TASK_UNINTERRUPTIBLE);
 }
 
 void bch2_btree_node_wait_on_write(struct btree *b)
 {
-        bch2_assert_btree_nodes_not_locked();
-
         wait_on_bit_io(&b->flags, BTREE_NODE_write_in_flight,
                        TASK_UNINTERRUPTIBLE);
 }
@@ -534,7 +528,7 @@ static void btree_err_msg(struct printbuf *out, struct bch_fs *c,
         printbuf_indent_add(out, 2);
 
         prt_printf(out, "\nnode offset %u/%u",
-                   b->written, btree_ptr_sectors_written(&b->key));
+                   b->written, btree_ptr_sectors_written(bkey_i_to_s_c(&b->key)));
         if (i)
                 prt_printf(out, " bset u64s %u", le16_to_cpu(i->u64s));
         if (k)
@@ -585,7 +579,7 @@ static int __btree_err(int ret,
         switch (ret) {
         case -BCH_ERR_btree_node_read_err_fixable:
                 ret = !silent
-                        ? bch2_fsck_err(c, FSCK_CAN_FIX, err_type, "%s", out.buf)
+                        ? __bch2_fsck_err(c, NULL, FSCK_CAN_FIX, err_type, "%s", out.buf)
                         : -BCH_ERR_fsck_fix;
                 if (ret != -BCH_ERR_fsck_fix &&
                     ret != -BCH_ERR_fsck_ignore)
@@ -689,6 +683,7 @@ static int validate_bset(struct bch_fs *c, struct bch_dev *ca,
                          int write, bool have_retry, bool *saw_error)
 {
         unsigned version = le16_to_cpu(i->version);
+        unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key));
         struct printbuf buf1 = PRINTBUF;
         struct printbuf buf2 = PRINTBUF;
         int ret = 0;
@@ -732,11 +727,13 @@ static int validate_bset(struct bch_fs *c, struct bch_dev *ca,
                          btree_node_unsupported_version,
                          "BSET_SEPARATE_WHITEOUTS no longer supported");
 
-        if (btree_err_on(offset + sectors > btree_sectors(c),
+        if (!write &&
+            btree_err_on(offset + sectors > (ptr_written ?: btree_sectors(c)),
                          -BCH_ERR_btree_node_read_err_fixable,
                          c, ca, b, i, NULL,
                          bset_past_end_of_btree_node,
-                         "bset past end of btree node")) {
+                         "bset past end of btree node (offset %u len %u but written %zu)",
+                         offset, sectors, ptr_written ?: btree_sectors(c))) {
                 i->u64s = 0;
                 ret = 0;
                 goto out;
@@ -1002,7 +999,8 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
         bool updated_range = b->key.k.type == KEY_TYPE_btree_ptr_v2 &&
                 BTREE_PTR_RANGE_UPDATED(&bkey_i_to_btree_ptr_v2(&b->key)->v);
         unsigned u64s;
-        unsigned ptr_written = btree_ptr_sectors_written(&b->key);
+        unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key));
+        u64 max_journal_seq = 0;
         struct printbuf buf = PRINTBUF;
         int ret = 0, retry_read = 0, write = READ;
         u64 start_time = local_clock();
@@ -1178,6 +1176,8 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
                 sort_iter_add(iter,
                               vstruct_idx(i, 0),
                               vstruct_last(i));
+
+                max_journal_seq = max(max_journal_seq, le64_to_cpu(i->journal_seq));
         }
@@ -1214,6 +1214,7 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
         swap(sorted, b->data);
         set_btree_bset(b, b->set, &b->data->keys);
         b->nsets = 1;
+        b->data->keys.journal_seq = cpu_to_le64(max_journal_seq);
 
         BUG_ON(b->nr.live_u64s != u64s);
@@ -1796,15 +1797,16 @@ int bch2_btree_root_read(struct bch_fs *c, enum btree_id id,
 static void bch2_btree_complete_write(struct bch_fs *c, struct btree *b,
                                       struct btree_write *w)
 {
-        unsigned long old, new, v = READ_ONCE(b->will_make_reachable);
+        unsigned long old, new;
 
+        old = READ_ONCE(b->will_make_reachable);
         do {
-                old = new = v;
+                new = old;
                 if (!(old & 1))
                         break;
 
                 new &= ~1UL;
-        } while ((v = cmpxchg(&b->will_make_reachable, old, new)) != old);
+        } while (!try_cmpxchg(&b->will_make_reachable, &old, new));
 
         if (old & 1)
                 closure_put(&((struct btree_update *) new)->cl);
@@ -1815,14 +1817,14 @@ static void bch2_btree_complete_write(struct bch_fs *c, struct btree *b,
 static void __btree_node_write_done(struct bch_fs *c, struct btree *b)
 {
         struct btree_write *w = btree_prev_write(b);
-        unsigned long old, new, v;
+        unsigned long old, new;
         unsigned type = 0;
 
         bch2_btree_complete_write(c, b, w);
 
-        v = READ_ONCE(b->flags);
+        old = READ_ONCE(b->flags);
         do {
-                old = new = v;
+                new = old;
 
                 if ((old & (1U << BTREE_NODE_dirty)) &&
                     (old & (1U << BTREE_NODE_need_write)) &&
@@ -1842,7 +1844,7 @@ static void __btree_node_write_done(struct bch_fs *c, struct btree *b)
                         new &= ~(1U << BTREE_NODE_write_in_flight);
                         new &= ~(1U << BTREE_NODE_write_in_flight_inner);
                 }
-        } while ((v = cmpxchg(&b->flags, old, new)) != old);
+        } while (!try_cmpxchg(&b->flags, &old, new));
 
         if (new & (1U << BTREE_NODE_write_in_flight))
                 __bch2_btree_node_write(c, b, BTREE_WRITE_ALREADY_STARTED|type);
@@ -2014,8 +2016,9 @@ void __bch2_btree_node_write(struct bch_fs *c, struct btree *b, unsigned flags)
          * dirty bit requires a write lock, we can't race with other threads
          * redirtying it:
          */
+        old = READ_ONCE(b->flags);
         do {
-                old = new = READ_ONCE(b->flags);
+                new = old;
 
                 if (!(old & (1 << BTREE_NODE_dirty)))
                         return;
@@ -2046,7 +2049,7 @@ void __bch2_btree_node_write(struct bch_fs *c, struct btree *b, unsigned flags)
                 new |= (1 << BTREE_NODE_write_in_flight_inner);
                 new |= (1 << BTREE_NODE_just_written);
                 new ^= (1 << BTREE_NODE_write_idx);
-        } while (cmpxchg_acquire(&b->flags, old, new) != old);
+        } while (!try_cmpxchg_acquire(&b->flags, &old, new));
 
         if (new & (1U << BTREE_NODE_need_write))
                 return;
@@ -2133,7 +2136,7 @@ void __bch2_btree_node_write(struct bch_fs *c, struct btree *b, unsigned flags)
         if (!b->written &&
             b->key.k.type == KEY_TYPE_btree_ptr_v2)
-                BUG_ON(btree_ptr_sectors_written(&b->key) != sectors_to_write);
+                BUG_ON(btree_ptr_sectors_written(bkey_i_to_s_c(&b->key)) != sectors_to_write);
 
         memset(data + bytes_to_write, 0,
                (sectors_to_write << 9) - bytes_to_write);
......
@@ -27,10 +27,10 @@ static inline void clear_btree_node_dirty_acct(struct bch_fs *c, struct btree *b
         atomic_dec(&c->btree_cache.dirty);
 }
 
-static inline unsigned btree_ptr_sectors_written(struct bkey_i *k)
+static inline unsigned btree_ptr_sectors_written(struct bkey_s_c k)
 {
-        return k->k.type == KEY_TYPE_btree_ptr_v2
-                ? le16_to_cpu(bkey_i_to_btree_ptr_v2(k)->v.sectors_written)
+        return k.k->type == KEY_TYPE_btree_ptr_v2
+                ? le16_to_cpu(bkey_s_c_to_btree_ptr_v2(k).v->sectors_written)
                 : 0;
 }
......
@@ -325,7 +325,7 @@ static int bch2_btree_iter_verify_ret(struct btree_iter *iter, struct bkey_s_c k
 }
 
 void bch2_assert_pos_locked(struct btree_trans *trans, enum btree_id id,
-                            struct bpos pos, bool key_cache)
+                            struct bpos pos)
 {
         bch2_trans_verify_not_unlocked(trans);
@@ -336,19 +336,12 @@ void bch2_assert_pos_locked(struct btree_trans *trans, enum btree_id id,
         btree_trans_sort_paths(trans);
 
         trans_for_each_path_inorder(trans, path, iter) {
-                int cmp = cmp_int(path->btree_id, id) ?:
-                        cmp_int(path->cached, key_cache);
-
-                if (cmp > 0)
-                        break;
-                if (cmp < 0)
-                        continue;
-
-                if (!btree_node_locked(path, 0) ||
+                if (path->btree_id != id ||
+                    !btree_node_locked(path, 0) ||
                     !path->should_be_locked)
                         continue;
 
-                if (!key_cache) {
+                if (!path->cached) {
                         if (bkey_ge(pos, path->l[0].b->data->min_key) &&
                             bkey_le(pos, path->l[0].b->key.k.p))
                                 return;
@@ -361,9 +354,7 @@ void bch2_assert_pos_locked(struct btree_trans *trans, enum btree_id id,
         bch2_dump_trans_paths_updates(trans);
         bch2_bpos_to_text(&buf, pos);
 
-        panic("not locked: %s %s%s\n",
-              bch2_btree_id_str(id), buf.buf,
-              key_cache ? " cached" : "");
+        panic("not locked: %s %s\n", bch2_btree_id_str(id), buf.buf);
 }
 
 #else
@@ -1465,7 +1456,7 @@ void bch2_dump_trans_updates(struct btree_trans *trans)
         struct printbuf buf = PRINTBUF;
 
         bch2_trans_updates_to_text(&buf, trans);
-        bch2_print_string_as_lines(KERN_ERR, buf.buf);
+        bch2_print_str(trans->c, buf.buf);
         printbuf_exit(&buf);
 }
@@ -1482,6 +1473,14 @@ static void bch2_btree_path_to_text_short(struct printbuf *out, struct btree_tra
                    path->level);
         bch2_bpos_to_text(out, path->pos);
 
+        if (!path->cached && btree_node_locked(path, path->level)) {
+                prt_char(out, ' ');
+                struct btree *b = path_l(path)->b;
+                bch2_bpos_to_text(out, b->data->min_key);
+                prt_char(out, '-');
+                bch2_bpos_to_text(out, b->key.k.p);
+        }
+
 #ifdef TRACK_PATH_ALLOCATED
         prt_printf(out, " %pS", (void *) path->ip_allocated);
 #endif
@@ -1557,7 +1556,7 @@ void __bch2_dump_trans_paths_updates(struct btree_trans *trans, bool nosort)
         __bch2_trans_paths_to_text(&buf, trans, nosort);
         bch2_trans_updates_to_text(&buf, trans);
 
-        bch2_print_string_as_lines(KERN_ERR, buf.buf);
+        bch2_print_str(trans->c, buf.buf);
         printbuf_exit(&buf);
 }
@@ -1801,13 +1800,12 @@ struct bkey_s_c bch2_btree_path_peek_slot(struct btree_path *path, struct bkey *
                         goto hole;
         } else {
                 struct bkey_cached *ck = (void *) path->l[0].b;
-
-                EBUG_ON(ck &&
-                        (path->btree_id != ck->key.btree_id ||
-                         !bkey_eq(path->pos, ck->key.pos)));
-                if (!ck || !ck->valid)
+                if (!ck)
                         return bkey_s_c_null;
 
+                EBUG_ON(path->btree_id != ck->key.btree_id ||
+                        !bkey_eq(path->pos, ck->key.pos));
+
                 *u = ck->k->k;
                 k = bkey_i_to_s_c(ck->k);
         }
@@ -3173,6 +3171,9 @@ struct btree_trans *__bch2_trans_get(struct bch_fs *c, unsigned fn_idx)
 
         trans->paths_allocated[0] = 1;
 
+        static struct lock_class_key lockdep_key;
+        lockdep_init_map(&trans->dep_map, "bcachefs_btree", &lockdep_key, 0);
+
         if (fn_idx < BCH_TRANSACTIONS_NR) {
                 trans->fn = bch2_btree_transaction_fns[fn_idx];
@@ -3240,15 +3241,6 @@ void bch2_trans_put(struct btree_trans *trans)
                 srcu_read_unlock(&c->btree_trans_barrier, trans->srcu_idx);
         }
 
-        if (trans->fs_usage_deltas) {
-                if (trans->fs_usage_deltas->size + sizeof(trans->fs_usage_deltas) ==
-                    REPLICAS_DELTA_LIST_MAX)
-                        mempool_free(trans->fs_usage_deltas,
-                                     &c->replicas_delta_pool);
-                else
-                        kfree(trans->fs_usage_deltas);
-        }
-
         if (unlikely(trans->journal_replay_not_finished))
                 bch2_journal_keys_put(c);
@@ -3284,6 +3276,21 @@ void bch2_trans_put(struct btree_trans *trans)
         }
 }
 
+bool bch2_current_has_btree_trans(struct bch_fs *c)
+{
+        seqmutex_lock(&c->btree_trans_lock);
+        struct btree_trans *trans;
+        bool ret = false;
+        list_for_each_entry(trans, &c->btree_trans_list, list)
+                if (trans->locking_wait.task == current &&
+                    trans->locked) {
+                        ret = true;
+                        break;
+                }
+        seqmutex_unlock(&c->btree_trans_lock);
+        return ret;
+}
+
 static void __maybe_unused
 bch2_btree_bkey_cached_common_to_text(struct printbuf *out,
                                       struct btree_bkey_cached_common *b)
@@ -3437,7 +3444,22 @@ int bch2_fs_btree_iter_init(struct bch_fs *c)
                 mempool_init_kmalloc_pool(&c->btree_trans_mem_pool, 1,
                                           BTREE_TRANS_MEM_MAX) ?:
                 init_srcu_struct(&c->btree_trans_barrier);
-        if (!ret)
-                c->btree_trans_barrier_initialized = true;
-        return ret;
+        if (ret)
+                return ret;
+
+        /*
+         * static annotation (hackily done) for lock ordering of reclaim vs.
+         * btree node locks:
+         */
+#ifdef CONFIG_LOCKDEP
+        fs_reclaim_acquire(GFP_KERNEL);
+        struct btree_trans *trans = bch2_trans_get(c);
+        trans_set_locked(trans);
+        bch2_trans_put(trans);
+        fs_reclaim_release(GFP_KERNEL);
+#endif
+
+        c->btree_trans_barrier_initialized = true;
+        return 0;
 }
@@ -268,12 +268,11 @@ static inline int bch2_trans_mutex_lock(struct btree_trans *trans, struct mutex
 #ifdef CONFIG_BCACHEFS_DEBUG
 void bch2_trans_verify_paths(struct btree_trans *);
-void bch2_assert_pos_locked(struct btree_trans *, enum btree_id,
-                            struct bpos, bool);
+void bch2_assert_pos_locked(struct btree_trans *, enum btree_id, struct bpos);
 #else
 static inline void bch2_trans_verify_paths(struct btree_trans *trans) {}
 static inline void bch2_assert_pos_locked(struct btree_trans *trans, enum btree_id id,
-                                          struct bpos pos, bool key_cache) {}
+                                          struct bpos pos) {}
 #endif
 
 void bch2_btree_path_fix_key_modified(struct btree_trans *trans,
@@ -866,6 +865,14 @@ __bch2_btree_iter_peek_and_restart(struct btree_trans *trans,
         _p;                                                             \
 })
 
+#define bch2_trans_run(_c, _do)                                         \
+({                                                                      \
+        struct btree_trans *trans = bch2_trans_get(_c);                 \
+        int _ret = (_do);                                               \
+        bch2_trans_put(trans);                                          \
+        _ret;                                                           \
+})
+
 void bch2_trans_updates_to_text(struct printbuf *, struct btree_trans *);
 void bch2_btree_path_to_text(struct printbuf *, struct btree_trans *, btree_path_idx_t);
 void bch2_trans_paths_to_text(struct printbuf *, struct btree_trans *);
@@ -875,6 +882,8 @@ void bch2_dump_trans_paths_updates(struct btree_trans *);
 struct btree_trans *__bch2_trans_get(struct bch_fs *, unsigned);
 void bch2_trans_put(struct btree_trans *);
 
+bool bch2_current_has_btree_trans(struct bch_fs *);
+
 extern const char *bch2_btree_transaction_fns[BCH_TRANSACTIONS_NR];
 unsigned bch2_trans_get_fn_idx(const char *);
......
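Usage sketch for the bch2_trans_run() macro in the hunk above: it allocates
a transaction, evaluates the expression (which may reference 'trans', since
the macro declares it), puts the transaction, and returns the expression's
value. do_something() is an illustrative callee:

    static int example(struct bch_fs *c)
    {
            return bch2_trans_run(c, do_something(trans));
    }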
@@ -16,21 +16,6 @@
  * operations for the regular btree iter code to use:
  */
 
-static int __journal_key_cmp(enum btree_id l_btree_id,
-                             unsigned l_level,
-                             struct bpos l_pos,
-                             const struct journal_key *r)
-{
-        return (cmp_int(l_btree_id, r->btree_id) ?:
-                cmp_int(l_level, r->level) ?:
-                bpos_cmp(l_pos, r->k->k.p));
-}
-
-static int journal_key_cmp(const struct journal_key *l, const struct journal_key *r)
-{
-        return __journal_key_cmp(l->btree_id, l->level, l->k->k.p, r);
-}
-
 static inline size_t idx_to_pos(struct journal_keys *keys, size_t idx)
 {
         size_t gap_size = keys->size - keys->nr;
@@ -548,7 +533,13 @@ static void __journal_keys_sort(struct journal_keys *keys)
         struct journal_key *dst = keys->data;
 
         darray_for_each(*keys, src) {
-                if (src + 1 < &darray_top(*keys) &&
+                /*
+                 * We don't accumulate accounting keys here because we have to
+                 * compare each individual accounting key against the version in
+                 * the btree during replay:
+                 */
+                if (src->k->k.type != KEY_TYPE_accounting &&
+                    src + 1 < &darray_top(*keys) &&
                     !journal_key_cmp(src, src + 1))
                         continue;
......
@@ -2,6 +2,8 @@
 #ifndef _BCACHEFS_BTREE_JOURNAL_ITER_H
 #define _BCACHEFS_BTREE_JOURNAL_ITER_H
 
+#include "bkey.h"
+
 struct journal_iter {
 	struct list_head	list;
 	enum btree_id		btree_id;
@@ -26,6 +28,21 @@ struct btree_and_journal_iter {
 	bool			prefetch;
 };
 
+static inline int __journal_key_cmp(enum btree_id l_btree_id,
+				    unsigned l_level,
+				    struct bpos l_pos,
+				    const struct journal_key *r)
+{
+	return (cmp_int(l_btree_id, r->btree_id) ?:
+		cmp_int(l_level, r->level) ?:
+		bpos_cmp(l_pos, r->k->k.p));
+}
+
+static inline int journal_key_cmp(const struct journal_key *l, const struct journal_key *r)
+{
+	return __journal_key_cmp(l->btree_id, l->level, l->k->k.p, r);
+}
+
 struct bkey_i *bch2_journal_keys_peek_upto(struct bch_fs *, enum btree_id,
 					   unsigned, struct bpos, struct bpos, size_t *);
 struct bkey_i *bch2_journal_keys_peek_slot(struct bch_fs *, enum btree_id,
...
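The `a ?: b` chaining in __journal_key_cmp() is the kernel's standard way to build a lexicographic comparator: each cmp_int() yields -1/0/1, and the GNU elvis operator only evaluates the next field when the previous one tied. A standalone sketch (GCC/Clang extension; the struct and field names here are invented, cmp_int() is modeled on the kernel macro):

#include <stdio.h>

#define cmp_int(l, r)	(((l) > (r)) - ((l) < (r)))

struct pos { unsigned btree, level, offset; };

static int pos_cmp(struct pos l, struct pos r)
{
	/* Later fields only break ties in earlier ones: */
	return cmp_int(l.btree, r.btree) ?:
	       cmp_int(l.level, r.level) ?:
	       cmp_int(l.offset, r.offset);
}

int main(void)
{
	struct pos a = { 1, 0, 5 }, b = { 1, 0, 9 };

	printf("%d\n", pos_cmp(a, b));	/* -1: first two fields tie, offset decides */
	return 0;
}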
@@ -10,19 +10,9 @@ void bch2_btree_lock_init(struct btree_bkey_cached_common *b,
 			  enum six_lock_init_flags flags)
 {
 	__six_lock_init(&b->lock, "b->c.lock", &bch2_btree_node_lock_key, flags);
-	lockdep_set_novalidate_class(&b->lock);
+	lockdep_set_notrack_class(&b->lock);
 }
 
-#ifdef CONFIG_LOCKDEP
-void bch2_assert_btree_nodes_not_locked(void)
-{
-#if 0
-	//Re-enable when lock_class_is_held() is merged:
-	BUG_ON(lock_class_is_held(&bch2_btree_node_lock_key));
-#endif
-}
-#endif
-
 /* Btree node locking: */
 
 struct six_lock_count bch2_btree_node_lock_counts(struct btree_trans *trans,
...
@@ -15,12 +15,6 @@
 
 void bch2_btree_lock_init(struct btree_bkey_cached_common *, enum six_lock_init_flags);
 
-#ifdef CONFIG_LOCKDEP
-void bch2_assert_btree_nodes_not_locked(void);
-#else
-static inline void bch2_assert_btree_nodes_not_locked(void) {}
-#endif
-
 void bch2_trans_unlock_noassert(struct btree_trans *);
 
 static inline bool is_btree_node(struct btree_path *path, unsigned l)
@@ -136,6 +130,7 @@ static inline void btree_node_unlock(struct btree_trans *trans,
 	int lock_type = btree_node_locked_type(path, level);
 
 	EBUG_ON(level >= BTREE_MAX_DEPTH);
+	EBUG_ON(lock_type == BTREE_NODE_WRITE_LOCKED);
 
 	if (lock_type != BTREE_NODE_UNLOCKED) {
 		six_unlock_type(&path->l[level].b->c.lock, lock_type);
@@ -196,6 +191,7 @@ int bch2_six_check_for_deadlock(struct six_lock *lock, void *p);
 static inline void trans_set_locked(struct btree_trans *trans)
 {
 	if (!trans->locked) {
+		lock_acquire_exclusive(&trans->dep_map, 0, 0, NULL, _THIS_IP_);
 		trans->locked = true;
 		trans->last_unlock_ip = 0;
@@ -207,6 +203,7 @@ static inline void trans_set_locked(struct btree_trans *trans)
 static inline void trans_set_unlocked(struct btree_trans *trans)
 {
 	if (trans->locked) {
+		lock_release(&trans->dep_map, _THIS_IP_);
 		trans->locked = false;
 		trans->last_unlock_ip = _RET_IP_;
...
@@ -22,7 +22,9 @@ struct find_btree_nodes_worker {
 
 static void found_btree_node_to_text(struct printbuf *out, struct bch_fs *c, const struct found_btree_node *n)
 {
-	prt_printf(out, "%s l=%u seq=%u cookie=%llx ", bch2_btree_id_str(n->btree_id), n->level, n->seq, n->cookie);
+	prt_printf(out, "%s l=%u seq=%u journal_seq=%llu cookie=%llx ",
+		   bch2_btree_id_str(n->btree_id), n->level, n->seq,
+		   n->journal_seq, n->cookie);
 	bch2_bpos_to_text(out, n->min_key);
 	prt_str(out, "-");
 	bch2_bpos_to_text(out, n->max_key);
@@ -63,19 +65,37 @@ static void found_btree_node_to_key(struct bkey_i *k, const struct found_btree_n
 	memcpy(bp->v.start, f->ptrs, sizeof(struct bch_extent_ptr) * f->nr_ptrs);
 }
 
+static inline u64 bkey_journal_seq(struct bkey_s_c k)
+{
+	switch (k.k->type) {
+	case KEY_TYPE_inode_v3:
+		return le64_to_cpu(bkey_s_c_to_inode_v3(k).v->bi_journal_seq);
+	default:
+		return 0;
+	}
+}
+
 static bool found_btree_node_is_readable(struct btree_trans *trans,
 					 struct found_btree_node *f)
 {
-	struct { __BKEY_PADDED(k, BKEY_BTREE_PTR_VAL_U64s_MAX); } k;
+	struct { __BKEY_PADDED(k, BKEY_BTREE_PTR_VAL_U64s_MAX); } tmp;
 
-	found_btree_node_to_key(&k.k, f);
+	found_btree_node_to_key(&tmp.k, f);
 
-	struct btree *b = bch2_btree_node_get_noiter(trans, &k.k, f->btree_id, f->level, false);
+	struct btree *b = bch2_btree_node_get_noiter(trans, &tmp.k, f->btree_id, f->level, false);
 	bool ret = !IS_ERR_OR_NULL(b);
 	if (!ret)
 		return ret;
 
 	f->sectors_written = b->written;
+	f->journal_seq = le64_to_cpu(b->data->keys.journal_seq);
+
+	struct bkey_s_c k;
+	struct bkey unpacked;
+	struct btree_node_iter iter;
+	for_each_btree_node_key_unpack(b, k, &iter, &unpacked)
+		f->journal_seq = max(f->journal_seq, bkey_journal_seq(k));
+
 	six_unlock_read(&b->c.lock);
 
 	/*
@@ -84,7 +104,7 @@ static bool found_btree_node_is_readable(struct btree_trans *trans,
 	 * this node
 	 */
 	if (b != btree_node_root(trans->c, b))
-		bch2_btree_node_evict(trans, &k.k);
+		bch2_btree_node_evict(trans, &tmp.k);
 
 	return ret;
 }
@@ -105,7 +125,8 @@ static int found_btree_node_cmp_cookie(const void *_l, const void *_r)
 static int found_btree_node_cmp_time(const struct found_btree_node *l,
 				     const struct found_btree_node *r)
 {
-	return cmp_int(l->seq, r->seq);
+	return cmp_int(l->seq, r->seq) ?:
+		cmp_int(l->journal_seq, r->journal_seq);
 }
 
 static int found_btree_node_cmp_pos(const void *_l, const void *_r)
@@ -309,15 +330,15 @@ static int handle_overwrites(struct bch_fs *c,
 		} else if (n->level) {
 			n->overwritten = true;
 		} else {
-			struct printbuf buf = PRINTBUF;
-
-			prt_str(&buf, "overlapping btree nodes with same seq! halting\n  ");
-			found_btree_node_to_text(&buf, c, start);
-			prt_str(&buf, "\n  ");
-			found_btree_node_to_text(&buf, c, n);
-			bch_err(c, "%s", buf.buf);
-			printbuf_exit(&buf);
-			return -BCH_ERR_fsck_repair_unimplemented;
+			if (bpos_cmp(start->max_key, n->max_key) >= 0)
+				n->overwritten = true;
+			else {
+				n->range_updated = true;
+				n->min_key = bpos_successor(start->max_key);
+				n->range_updated = true;
+				bubble_up(n, end);
+				goto again;
+			}
 		}
 	}
...
@@ -11,6 +11,7 @@ struct found_btree_node {
 	u8			level;
 	unsigned		sectors_written;
 	u32			seq;
+	u64			journal_seq;
 	u64			cookie;
 
 	struct bpos		min_key;
...
@@ -388,7 +388,6 @@ struct bkey_cached {
 	unsigned long		flags;
 	unsigned long		btree_trans_barrier_seq;
 	u16			u64s;
-	bool			valid;
 	struct bkey_cached_key	key;
 
 	struct rhash_head	hash;
@@ -478,8 +477,8 @@ struct btree_trans {
 	btree_path_idx_t	nr_sorted;
 	btree_path_idx_t	nr_paths;
 	btree_path_idx_t	nr_paths_max;
+	btree_path_idx_t	nr_updates;
 	u8			fn_idx;
-	u8			nr_updates;
 	u8			lock_must_abort;
 	bool			lock_may_not_fail:1;
 	bool			srcu_held:1;
@@ -523,8 +522,10 @@ struct btree_trans {
 	unsigned		journal_u64s;
 	unsigned		extra_disk_res; /* XXX kill */
-	struct replicas_delta_list *fs_usage_deltas;
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+#endif
 	/* Entries before this are zeroed out on every bch2_trans_get() call */
 
 	struct list_head	list;
@@ -755,9 +756,19 @@ const char *bch2_btree_node_type_str(enum btree_node_type);
 	(BTREE_NODE_TYPE_HAS_TRANS_TRIGGERS|			\
 	 BTREE_NODE_TYPE_HAS_ATOMIC_TRIGGERS)
 
-static inline bool btree_node_type_needs_gc(enum btree_node_type type)
+static inline bool btree_node_type_has_trans_triggers(enum btree_node_type type)
+{
+	return BIT_ULL(type) & BTREE_NODE_TYPE_HAS_TRANS_TRIGGERS;
+}
+
+static inline bool btree_node_type_has_atomic_triggers(enum btree_node_type type)
+{
+	return BIT_ULL(type) & BTREE_NODE_TYPE_HAS_ATOMIC_TRIGGERS;
+}
+
+static inline bool btree_node_type_has_triggers(enum btree_node_type type)
 {
-	return BTREE_NODE_TYPE_HAS_TRIGGERS & BIT_ULL(type);
+	return BIT_ULL(type) & BTREE_NODE_TYPE_HAS_TRIGGERS;
 }
 
 static inline bool btree_node_type_is_extents(enum btree_node_type type)
...
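These helpers treat a u64 as a set of btree node types: bit n set means enum value n is a member. A minimal standalone model of the idiom, assuming invented enum values and mask:

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(nr)	(1ULL << (nr))

enum node_type { TYPE_A, TYPE_B, TYPE_C };

/* A mask is a set: bit n set <=> enum value n is a member. */
static const uint64_t HAS_TRIGGERS = BIT_ULL(TYPE_A) | BIT_ULL(TYPE_C);

static int type_has_triggers(enum node_type t)
{
	return (BIT_ULL(t) & HAS_TRIGGERS) != 0;
}

int main(void)
{
	printf("%d %d %d\n",
	       type_has_triggers(TYPE_A),	/* 1 */
	       type_has_triggers(TYPE_B),	/* 0 */
	       type_has_triggers(TYPE_C));	/* 1 */
	return 0;
}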
@@ -656,14 +656,16 @@ int bch2_btree_insert_trans(struct btree_trans *trans, enum btree_id id,
  * @disk_res:		must be non-NULL whenever inserting or potentially
  *			splitting data extents
  * @flags:		transaction commit flags
+ * @iter_flags:		btree iter update trigger flags
  *
  * Returns:		0 on success, error code on failure
  */
 int bch2_btree_insert(struct bch_fs *c, enum btree_id id, struct bkey_i *k,
-		      struct disk_reservation *disk_res, int flags)
+		      struct disk_reservation *disk_res, int flags,
+		      enum btree_iter_update_trigger_flags iter_flags)
 {
 	return bch2_trans_do(c, disk_res, NULL, flags,
-			     bch2_btree_insert_trans(trans, id, k, 0));
+			     bch2_btree_insert_trans(trans, id, k, iter_flags));
 }
 
 int bch2_btree_delete_extent_at(struct btree_trans *trans, struct btree_iter *iter,
...
@@ -29,6 +29,7 @@ void bch2_btree_insert_key_leaf(struct btree_trans *, struct btree_path *,
 	  "pin journal entry referred to by trans->journal_res.seq")	\
 	x(journal_reclaim,	"operation required for journal reclaim; may return error" \
 	  "instead of deadlocking if BCH_WATERMARK_reclaim not specified")\
+	x(skip_accounting_apply, "we're in journal replay - accounting updates have already been applied")
 
 enum __bch_trans_commit_flags {
 	/* First bits for bch_watermark: */
@@ -56,8 +57,9 @@ int bch2_btree_insert_nonextent(struct btree_trans *, enum btree_id,
 
 int bch2_btree_insert_trans(struct btree_trans *, enum btree_id, struct bkey_i *,
 			enum btree_iter_update_trigger_flags);
-int bch2_btree_insert(struct bch_fs *, enum btree_id, struct bkey_i *,
-		      struct disk_reservation *, int flags);
+int bch2_btree_insert(struct bch_fs *, enum btree_id, struct bkey_i *,
+		      struct disk_reservation *, int flags,
+		      enum btree_iter_update_trigger_flags iter_flags);
 
 int bch2_btree_delete_range_trans(struct btree_trans *, enum btree_id,
 				  struct bpos, struct bpos, unsigned, u64 *);
@@ -130,7 +132,19 @@ static inline int __must_check bch2_trans_update_buffered(struct btree_trans *tr
 						enum btree_id btree,
 						struct bkey_i *k)
 {
-	if (unlikely(trans->journal_replay_not_finished))
+	/*
+	 * Most updates skip the btree write buffer until journal replay is
+	 * finished because synchronization with journal replay relies on having
+	 * a btree node locked - if we're overwriting a key in the journal that
+	 * journal replay hasn't yet replayed, we have to mark it as
+	 * overwritten.
+	 *
+	 * But accounting updates don't overwrite, they're deltas, and they have
+	 * to be flushed to the btree strictly in order for journal replay to be
+	 * able to tell which updates need to be applied:
+	 */
+	if (k->k.type != KEY_TYPE_accounting &&
+	    unlikely(trans->journal_replay_not_finished))
 		return bch2_btree_insert_clone_trans(trans, btree, k);
 
 	struct jset_entry *e = bch2_trans_jset_entry_alloc(trans, jset_u64s(k->k.u64s));
@@ -178,14 +192,6 @@ static inline int bch2_trans_commit(struct btree_trans *trans,
 	nested_lockrestart_do(_trans, _do ?: bch2_trans_commit(_trans, (_disk_res),\
 					(_journal_seq), (_flags)))
 
-#define bch2_trans_run(_c, _do)						\
-({									\
-	struct btree_trans *trans = bch2_trans_get(_c);			\
-	int _ret = (_do);						\
-	bch2_trans_put(trans);						\
-	_ret;								\
-})
-
 #define bch2_trans_do(_c, _disk_res, _journal_seq, _flags, _do)	\
 	bch2_trans_run(_c, commit_do(trans, _disk_res, _journal_seq, _flags, _do))
@@ -203,14 +209,6 @@ static inline void bch2_trans_reset_updates(struct btree_trans *trans)
 	trans->journal_entries_u64s	= 0;
 	trans->hooks			= NULL;
 	trans->extra_disk_res		= 0;
-
-	if (trans->fs_usage_deltas) {
-		trans->fs_usage_deltas->used = 0;
-		memset((void *) trans->fs_usage_deltas +
-		       offsetof(struct replicas_delta_list, memset_start), 0,
-		       (void *) &trans->fs_usage_deltas->memset_end -
-		       (void *) &trans->fs_usage_deltas->memset_start);
-	}
 }
 
 static inline struct bkey_i *__bch2_bkey_make_mut_noupdate(struct btree_trans *trans, struct bkey_s_c k,
...
@@ -61,7 +61,7 @@ int bch2_btree_node_check_topology(struct btree_trans *trans, struct btree *b)
 	if (!bpos_eq(b->data->min_key, POS_MIN)) {
 		printbuf_reset(&buf);
 		bch2_bpos_to_text(&buf, b->data->min_key);
-		need_fsck_err(c, btree_root_bad_min_key,
+		need_fsck_err(trans, btree_root_bad_min_key,
 			      "btree root with incorrect min_key: %s", buf.buf);
 		goto topology_repair;
 	}
@@ -69,7 +69,7 @@ int bch2_btree_node_check_topology(struct btree_trans *trans, struct btree *b)
 	if (!bpos_eq(b->data->max_key, SPOS_MAX)) {
 		printbuf_reset(&buf);
 		bch2_bpos_to_text(&buf, b->data->max_key);
-		need_fsck_err(c, btree_root_bad_max_key,
+		need_fsck_err(trans, btree_root_bad_max_key,
 			      "btree root with incorrect max_key: %s", buf.buf);
 		goto topology_repair;
 	}
@@ -105,7 +105,7 @@ int bch2_btree_node_check_topology(struct btree_trans *trans, struct btree *b)
 		prt_str(&buf, "\n  next ");
 		bch2_bkey_val_to_text(&buf, c, k);
 
-		need_fsck_err(c, btree_node_topology_bad_min_key, "%s", buf.buf);
+		need_fsck_err(trans, btree_node_topology_bad_min_key, "%s", buf.buf);
 		goto topology_repair;
 	}
@@ -122,7 +122,7 @@ int bch2_btree_node_check_topology(struct btree_trans *trans, struct btree *b)
 			   bch2_btree_id_str(b->c.btree_id), b->c.level);
 		bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(&b->key));
 
-		need_fsck_err(c, btree_node_topology_empty_interior_node, "%s", buf.buf);
+		need_fsck_err(trans, btree_node_topology_empty_interior_node, "%s", buf.buf);
 		goto topology_repair;
 	} else if (!bpos_eq(prev.k->k.p, b->key.k.p)) {
 		bch2_topology_error(c);
@@ -135,7 +135,7 @@ int bch2_btree_node_check_topology(struct btree_trans *trans, struct btree *b)
 		prt_str(&buf, "\n  last key ");
 		bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(prev.k));
 
-		need_fsck_err(c, btree_node_topology_bad_max_key, "%s", buf.buf);
+		need_fsck_err(trans, btree_node_topology_bad_max_key, "%s", buf.buf);
 		goto topology_repair;
 	}
 out:
@@ -1356,10 +1356,10 @@ static void bch2_insert_fixup_btree_ptr(struct btree_update *as,
 	struct bch_fs *c = as->c;
 	struct bkey_packed *k;
 	struct printbuf buf = PRINTBUF;
-	unsigned long old, new, v;
+	unsigned long old, new;
 
 	BUG_ON(insert->k.type == KEY_TYPE_btree_ptr_v2 &&
-	       !btree_ptr_sectors_written(insert));
+	       !btree_ptr_sectors_written(bkey_i_to_s_c(insert)));
 
 	if (unlikely(!test_bit(JOURNAL_replay_done, &c->journal.flags)))
 		bch2_journal_key_overwritten(c, b->c.btree_id, b->c.level, insert->k.p);
@@ -1395,14 +1395,14 @@ static void bch2_insert_fixup_btree_ptr(struct btree_update *as,
 	bch2_btree_bset_insert_key(trans, path, b, node_iter, insert);
 	set_btree_node_dirty_acct(c, b);
 
-	v = READ_ONCE(b->flags);
+	old = READ_ONCE(b->flags);
 	do {
-		old = new = v;
+		new = old;
 
 		new &= ~BTREE_WRITE_TYPE_MASK;
 		new |= BTREE_WRITE_interior;
 		new |= 1 << BTREE_NODE_need_write;
-	} while ((v = cmpxchg(&b->flags, old, new)) != old);
+	} while (!try_cmpxchg(&b->flags, &old, new));
 
 	printbuf_exit(&buf);
 }
@@ -2647,6 +2647,28 @@ bch2_btree_roots_to_journal_entries(struct bch_fs *c,
 	return end;
 }
 
+static void bch2_btree_alloc_to_text(struct printbuf *out,
+				     struct bch_fs *c,
+				     struct btree_alloc *a)
+{
+	printbuf_indent_add(out, 2);
+	bch2_bkey_val_to_text(out, c, bkey_i_to_s_c(&a->k));
+	prt_newline(out);
+
+	struct open_bucket *ob;
+	unsigned i;
+	open_bucket_for_each(c, &a->ob, ob, i)
+		bch2_open_bucket_to_text(out, c, ob);
+
+	printbuf_indent_sub(out, 2);
+}
+
+void bch2_btree_reserve_cache_to_text(struct printbuf *out, struct bch_fs *c)
+{
+	for (unsigned i = 0; i < c->btree_reserve_cache_nr; i++)
+		bch2_btree_alloc_to_text(out, c, &c->btree_reserve_cache[i]);
+}
+
 void bch2_fs_btree_interior_update_exit(struct bch_fs *c)
 {
 	if (c->btree_node_rewrite_worker)
...
@@ -335,6 +335,8 @@ struct jset_entry *bch2_btree_roots_to_journal_entries(struct bch_fs *,
 void bch2_do_pending_node_rewrites(struct bch_fs *);
 void bch2_free_pending_node_rewrites(struct bch_fs *);
 
+void bch2_btree_reserve_cache_to_text(struct printbuf *, struct bch_fs *);
+
 void bch2_fs_btree_interior_update_exit(struct bch_fs *);
 void bch2_fs_btree_interior_update_init_early(struct bch_fs *);
 int bch2_fs_btree_interior_update_init(struct bch_fs *);
...
@@ -6,6 +6,7 @@
 #include "btree_update.h"
 #include "btree_update_interior.h"
 #include "btree_write_buffer.h"
+#include "disk_accounting.h"
 #include "error.h"
 #include "extents.h"
 #include "journal.h"
@@ -134,7 +135,9 @@ static noinline int wb_flush_one_slowpath(struct btree_trans *trans,
 
 static inline int wb_flush_one(struct btree_trans *trans, struct btree_iter *iter,
 			       struct btree_write_buffered_key *wb,
-			       bool *write_locked, size_t *fast)
+			       bool *write_locked,
+			       bool *accounting_accumulated,
+			       size_t *fast)
 {
 	struct btree_path *path;
 	int ret;
@@ -147,6 +150,16 @@ static inline int wb_flush_one(struct btree_trans *trans, struct btree_iter *ite
 	if (ret)
 		return ret;
 
+	if (!*accounting_accumulated && wb->k.k.type == KEY_TYPE_accounting) {
+		struct bkey u;
+		struct bkey_s_c k = bch2_btree_path_peek_slot_exact(btree_iter_path(trans, iter), &u);
+
+		if (k.k->type == KEY_TYPE_accounting)
+			bch2_accounting_accumulate(bkey_i_to_accounting(&wb->k),
+						   bkey_s_c_to_accounting(k));
+	}
+	*accounting_accumulated = true;
+
 	/*
 	 * We can't clone a path that has write locks: unshare it now, before
 	 * set_pos and traverse():
@@ -259,8 +272,9 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
 	struct journal *j = &c->journal;
 	struct btree_write_buffer *wb = &c->btree_write_buffer;
 	struct btree_iter iter = { NULL };
-	size_t skipped = 0, fast = 0, slowpath = 0;
+	size_t overwritten = 0, fast = 0, slowpath = 0, could_not_insert = 0;
 	bool write_locked = false;
+	bool accounting_replay_done = test_bit(BCH_FS_accounting_replay_done, &c->flags);
 	int ret = 0;
 
 	bch2_trans_unlock(trans);
@@ -301,11 +315,22 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
 
 		BUG_ON(!k->journal_seq);
 
+		if (!accounting_replay_done &&
+		    k->k.k.type == KEY_TYPE_accounting) {
+			slowpath++;
+			continue;
+		}
+
 		if (i + 1 < &darray_top(wb->sorted) &&
 		    wb_key_eq(i, i + 1)) {
 			struct btree_write_buffered_key *n = &wb->flushing.keys.data[i[1].idx];
 
-			skipped++;
+			if (k->k.k.type == KEY_TYPE_accounting &&
+			    n->k.k.type == KEY_TYPE_accounting)
+				bch2_accounting_accumulate(bkey_i_to_accounting(&n->k),
+							   bkey_i_to_s_c_accounting(&k->k));
+
+			overwritten++;
 			n->journal_seq = min_t(u64, n->journal_seq, k->journal_seq);
 			k->journal_seq = 0;
 			continue;
@@ -340,13 +365,15 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
 			bch2_btree_iter_set_pos(&iter, k->k.k.p);
 			btree_iter_path(trans, &iter)->preserve = false;
 
+			bool accounting_accumulated = false;
 			do {
 				if (race_fault()) {
 					ret = -BCH_ERR_journal_reclaim_would_deadlock;
 					break;
 				}
 
-				ret = wb_flush_one(trans, &iter, k, &write_locked, &fast);
+				ret = wb_flush_one(trans, &iter, k, &write_locked,
+						   &accounting_accumulated, &fast);
 				if (!write_locked)
 					bch2_trans_begin(trans);
 			} while (bch2_err_matches(ret, BCH_ERR_transaction_restart));
@@ -387,8 +414,15 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
 			if (!i->journal_seq)
 				continue;
 
-			bch2_journal_pin_update(j, i->journal_seq, &wb->flushing.pin,
-						bch2_btree_write_buffer_journal_flush);
+			if (!accounting_replay_done &&
+			    i->k.k.type == KEY_TYPE_accounting) {
+				could_not_insert++;
+				continue;
+			}
+
+			if (!could_not_insert)
+				bch2_journal_pin_update(j, i->journal_seq, &wb->flushing.pin,
+							bch2_btree_write_buffer_journal_flush);
 
 			bch2_trans_begin(trans);
 
@@ -401,13 +435,45 @@ static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans)
 					btree_write_buffered_insert(trans, i));
 			if (ret)
 				goto err;
+
+			i->journal_seq = 0;
+		}
+
+		/*
+		 * If journal replay hasn't finished with accounting keys we
+		 * can't flush accounting keys at all - condense them and leave
+		 * them for next time.
+		 *
+		 * Q: Can the write buffer overflow?
+		 * A: Shouldn't be any actual risk. It's just new accounting
+		 * updates that the write buffer can't flush, and those are only
+		 * going to be generated by interior btree node updates as
+		 * journal replay has to split/rewrite nodes to make room for
+		 * its updates.
+		 *
+		 * And for those new accounting updates, updates to the same
+		 * counters get accumulated as they're flushed from the journal
+		 * to the write buffer - see the eytzinger tree accumulation
+		 * below. So we could only overflow if the number of distinct
+		 * counters touched somehow was very large.
+		 */
+		if (could_not_insert) {
+			struct btree_write_buffered_key *dst = wb->flushing.keys.data;
+
+			darray_for_each(wb->flushing.keys, i)
+				if (i->journal_seq)
+					*dst++ = *i;
+			wb->flushing.keys.nr = dst - wb->flushing.keys.data;
 		}
 	}
 err:
+	if (ret || !could_not_insert) {
+		bch2_journal_pin_drop(j, &wb->flushing.pin);
+		wb->flushing.keys.nr = 0;
+	}
+
 	bch2_fs_fatal_err_on(ret, c, "%s", bch2_err_str(ret));
-	trace_write_buffer_flush(trans, wb->flushing.keys.nr, skipped, fast, 0);
-	bch2_journal_pin_drop(j, &wb->flushing.pin);
-	wb->flushing.keys.nr = 0;
+	trace_write_buffer_flush(trans, wb->flushing.keys.nr, overwritten, fast, 0);
 	return ret;
 }
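The flush path above merges duplicate accounting keys by summing their counters instead of letting the newer delta overwrite the older one, while keeping the earliest journal_seq so the merged entry still pins everything it covers. A toy model of delta accumulation; the struct layout is invented (the real bch2_accounting_accumulate() operates on bkeys):

#include <stdio.h>
#include <stdint.h>

#define NR_COUNTERS 3

struct acct_delta {
	uint64_t journal_seq;
	int64_t d[NR_COUNTERS];
};

/* Accumulate src into dst: deltas add; keep the earlier journal_seq. */
static void acct_accumulate(struct acct_delta *dst, const struct acct_delta *src)
{
	for (int i = 0; i < NR_COUNTERS; i++)
		dst->d[i] += src->d[i];
	if (src->journal_seq < dst->journal_seq)
		dst->journal_seq = src->journal_seq;
}

int main(void)
{
	struct acct_delta a = { .journal_seq = 10, .d = { 4, 0, -2 } };
	struct acct_delta b = { .journal_seq = 12, .d = { 1, 8,  2 } };

	acct_accumulate(&b, &a);	/* the later entry absorbs the earlier one */
	printf("seq=%llu d=[%lld %lld %lld]\n",
	       (unsigned long long)b.journal_seq,
	       (long long)b.d[0], (long long)b.d[1], (long long)b.d[2]);
	return 0;
}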
@@ -494,7 +560,7 @@ int bch2_btree_write_buffer_tryflush(struct btree_trans *trans)
 	return ret;
 }
 
-/**
+/*
  * In check and repair code, when checking references to write buffer btrees we
  * need to issue a flush before we have a definitive error: this issues a flush
  * if this is a key we haven't yet checked.
@@ -544,6 +610,29 @@ static void bch2_btree_write_buffer_flush_work(struct work_struct *work)
 	bch2_write_ref_put(c, BCH_WRITE_REF_btree_write_buffer);
 }
 
+static void wb_accounting_sort(struct btree_write_buffer *wb)
+{
+	eytzinger0_sort(wb->accounting.data, wb->accounting.nr,
+			sizeof(wb->accounting.data[0]),
+			wb_key_cmp, NULL);
+}
+
+int bch2_accounting_key_to_wb_slowpath(struct bch_fs *c, enum btree_id btree,
+				       struct bkey_i_accounting *k)
+{
+	struct btree_write_buffer *wb = &c->btree_write_buffer;
+	struct btree_write_buffered_key new = { .btree = btree };
+
+	bkey_copy(&new.k, &k->k_i);
+
+	int ret = darray_push(&wb->accounting, new);
+	if (ret)
+		return ret;
+
+	wb_accounting_sort(wb);
+	return 0;
+}
+
 int bch2_journal_key_to_wb_slowpath(struct bch_fs *c,
 				    struct journal_keys_to_wb *dst,
 				    enum btree_id btree, struct bkey_i *k)
@@ -613,11 +702,35 @@ void bch2_journal_keys_to_write_buffer_start(struct bch_fs *c, struct journal_ke
 		bch2_journal_pin_add(&c->journal, seq, &dst->wb->pin,
 				     bch2_btree_write_buffer_journal_flush);
+
+	darray_for_each(wb->accounting, i)
+		memset(&i->k.v, 0, bkey_val_bytes(&i->k.k));
 }
 
-void bch2_journal_keys_to_write_buffer_end(struct bch_fs *c, struct journal_keys_to_wb *dst)
+int bch2_journal_keys_to_write_buffer_end(struct bch_fs *c, struct journal_keys_to_wb *dst)
 {
 	struct btree_write_buffer *wb = &c->btree_write_buffer;
+	unsigned live_accounting_keys = 0;
+	int ret = 0;
+
+	darray_for_each(wb->accounting, i)
+		if (!bch2_accounting_key_is_zero(bkey_i_to_s_c_accounting(&i->k))) {
+			i->journal_seq = dst->seq;
+			live_accounting_keys++;
+			ret = __bch2_journal_key_to_wb(c, dst, i->btree, &i->k);
+			if (ret)
+				break;
+		}
+
+	if (live_accounting_keys * 2 < wb->accounting.nr) {
+		struct btree_write_buffered_key *dst = wb->accounting.data;
+
+		darray_for_each(wb->accounting, src)
+			if (!bch2_accounting_key_is_zero(bkey_i_to_s_c_accounting(&src->k)))
+				*dst++ = *src;
+		wb->accounting.nr = dst - wb->accounting.data;
+		wb_accounting_sort(wb);
+	}
 
 	if (!dst->wb->keys.nr)
 		bch2_journal_pin_drop(&c->journal, &dst->wb->pin);
@@ -630,6 +743,8 @@ void bch2_journal_keys_to_write_buffer_end(struct bch_fs *c, struct journal_keys
 	if (dst->wb == &wb->flushing)
 		mutex_unlock(&wb->flushing.lock);
 	mutex_unlock(&wb->inc.lock);
+
+	return ret;
 }
 
 static int bch2_journal_keys_to_write_buffer(struct bch_fs *c, struct journal_buf *buf)
@@ -653,7 +768,7 @@ static int bch2_journal_keys_to_write_buffer(struct bch_fs *c, struct journal_bu
 	buf->need_flush_to_write_buffer = false;
 	spin_unlock(&c->journal.lock);
 out:
-	bch2_journal_keys_to_write_buffer_end(c, &dst);
+	ret = bch2_journal_keys_to_write_buffer_end(c, &dst) ?: ret;
 	return ret;
 }
@@ -685,6 +800,7 @@ void bch2_fs_btree_write_buffer_exit(struct bch_fs *c)
 	BUG_ON((wb->inc.keys.nr || wb->flushing.keys.nr) &&
 	       !bch2_journal_error(&c->journal));
 
+	darray_exit(&wb->accounting);
 	darray_exit(&wb->sorted);
 	darray_exit(&wb->flushing.keys);
 	darray_exit(&wb->inc.keys);
...
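Both condensing passes in this file use the same filter-in-place idiom: walk the array once, copy survivors forward through a dst cursor, then truncate to the cursor. A standalone sketch of the idiom:

#include <stdio.h>

/* Keep only nonzero entries, preserving order; returns the new count.
 * Safe because dst never gets ahead of the read index. */
static size_t compact_nonzero(int *a, size_t nr)
{
	int *dst = a;

	for (size_t i = 0; i < nr; i++)
		if (a[i])
			*dst++ = a[i];
	return dst - a;
}

int main(void)
{
	int a[] = { 3, 0, 7, 0, 0, 9 };
	size_t nr = compact_nonzero(a, 6);

	for (size_t i = 0; i < nr; i++)
		printf("%d ", a[i]);	/* 3 7 9 */
	printf("\n");
	return 0;
}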
@@ -3,6 +3,7 @@
 #define _BCACHEFS_BTREE_WRITE_BUFFER_H
 
 #include "bkey.h"
+#include "disk_accounting.h"
 
 static inline bool bch2_btree_write_buffer_should_flush(struct bch_fs *c)
 {
@@ -32,16 +33,45 @@ struct journal_keys_to_wb {
 	u64			seq;
 };
 
+static inline int wb_key_cmp(const void *_l, const void *_r)
+{
+	const struct btree_write_buffered_key *l = _l;
+	const struct btree_write_buffered_key *r = _r;
+
+	return cmp_int(l->btree, r->btree) ?: bpos_cmp(l->k.k.p, r->k.k.p);
+}
+
+int bch2_accounting_key_to_wb_slowpath(struct bch_fs *,
+				       enum btree_id, struct bkey_i_accounting *);
+
+static inline int bch2_accounting_key_to_wb(struct bch_fs *c,
+					    enum btree_id btree, struct bkey_i_accounting *k)
+{
+	struct btree_write_buffer *wb = &c->btree_write_buffer;
+	struct btree_write_buffered_key search;
+	search.btree = btree;
+	search.k.k.p = k->k.p;
+
+	unsigned idx = eytzinger0_find(wb->accounting.data, wb->accounting.nr,
+				       sizeof(wb->accounting.data[0]),
+				       wb_key_cmp, &search);
+
+	if (idx >= wb->accounting.nr)
+		return bch2_accounting_key_to_wb_slowpath(c, btree, k);
+
+	struct bkey_i_accounting *dst = bkey_i_to_accounting(&wb->accounting.data[idx].k);
+	bch2_accounting_accumulate(dst, accounting_i_to_s_c(k));
+	return 0;
+}
+
 int bch2_journal_key_to_wb_slowpath(struct bch_fs *,
 				    struct journal_keys_to_wb *,
 				    enum btree_id, struct bkey_i *);
 
-static inline int bch2_journal_key_to_wb(struct bch_fs *c,
+static inline int __bch2_journal_key_to_wb(struct bch_fs *c,
 					 struct journal_keys_to_wb *dst,
 					 enum btree_id btree, struct bkey_i *k)
 {
-	EBUG_ON(!dst->seq);
-
 	if (unlikely(!dst->room))
 		return bch2_journal_key_to_wb_slowpath(c, dst, btree, k);
 
@@ -54,8 +84,19 @@ static inline int bch2_journal_key_to_wb(struct bch_fs *c,
 	return 0;
 }
 
+static inline int bch2_journal_key_to_wb(struct bch_fs *c,
+					 struct journal_keys_to_wb *dst,
+					 enum btree_id btree, struct bkey_i *k)
+{
+	EBUG_ON(!dst->seq);
+
+	return k->k.type == KEY_TYPE_accounting
+		? bch2_accounting_key_to_wb(c, btree, bkey_i_to_accounting(k))
+		: __bch2_journal_key_to_wb(c, dst, btree, k);
+}
+
 void bch2_journal_keys_to_write_buffer_start(struct bch_fs *, struct journal_keys_to_wb *, u64);
-void bch2_journal_keys_to_write_buffer_end(struct bch_fs *, struct journal_keys_to_wb *);
+int bch2_journal_keys_to_write_buffer_end(struct bch_fs *, struct journal_keys_to_wb *);
 
 int bch2_btree_write_buffer_resize(struct bch_fs *, size_t);
 void bch2_fs_btree_write_buffer_exit(struct bch_fs *);
...
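eytzinger0_find() above searches an array kept in eytzinger (BFS/heap) order: the root lives at index 0 and the children of index i at 2i+1 and 2i+2, so a binary search becomes a simple top-down descent over a flat, cache-friendly layout. A toy model of the exact-match search, assuming int elements (the kernel version takes an element size and comparator):

#include <stdio.h>

static size_t eytz0_find(const int *a, size_t nr, int search)
{
	size_t i = 0;

	while (i < nr) {
		if (a[i] == search)
			return i;
		/* go left for smaller, right for larger: */
		i = 2 * i + 1 + (a[i] < search);
	}
	return nr;	/* >= nr means not found, matching the caller above */
}

int main(void)
{
	/* Sorted values 1..7 laid out in eytzinger0 order: */
	const int a[] = { 4, 2, 6, 1, 3, 5, 7 };

	printf("%zu %zu\n", eytz0_find(a, 7, 5), eytz0_find(a, 7, 8));	/* 5 7 */
	return 0;
}

The layout trades pointer chasing for predictable index arithmetic, which is why the write buffer can afford a membership test on every accounting key.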
@@ -52,6 +52,8 @@ struct btree_write_buffer {
 	struct btree_write_buffer_keys	inc;
 	struct btree_write_buffer_keys	flushing;
 	struct work_struct		flush_work;
+
+	DARRAY(struct btree_write_buffered_key) accounting;
 };
 
 #endif /* _BCACHEFS_BTREE_WRITE_BUFFER_TYPES_H */
@@ -85,7 +85,7 @@ static inline struct bucket_array *gc_bucket_array(struct bch_dev *ca)
 	return rcu_dereference_check(ca->buckets_gc,
 				     !ca->fs ||
 				     percpu_rwsem_is_held(&ca->fs->mark_lock) ||
-				     lockdep_is_held(&ca->fs->gc_lock) ||
+				     lockdep_is_held(&ca->fs->state_lock) ||
 				     lockdep_is_held(&ca->bucket_lock));
 }
@@ -103,7 +103,7 @@ static inline struct bucket_gens *bucket_gens(struct bch_dev *ca)
 	return rcu_dereference_check(ca->bucket_gens,
 				     !ca->fs ||
 				     percpu_rwsem_is_held(&ca->fs->mark_lock) ||
-				     lockdep_is_held(&ca->fs->gc_lock) ||
+				     lockdep_is_held(&ca->fs->state_lock) ||
 				     lockdep_is_held(&ca->bucket_lock));
 }
@@ -212,7 +212,6 @@ static inline struct bch_dev_usage bch2_dev_usage_read(struct bch_dev *ca)
 	return ret;
 }
 
-void bch2_dev_usage_init(struct bch_dev *);
 void bch2_dev_usage_to_text(struct printbuf *, struct bch_dev_usage *);
 
 static inline u64 bch2_dev_buckets_reserved(struct bch_dev *ca, enum bch_watermark watermark)
@@ -274,73 +273,14 @@ static inline u64 dev_buckets_available(struct bch_dev *ca,
 
 /* Filesystem usage: */
 
-static inline unsigned __fs_usage_u64s(unsigned nr_replicas)
-{
-	return sizeof(struct bch_fs_usage) / sizeof(u64) + nr_replicas;
-}
-
-static inline unsigned fs_usage_u64s(struct bch_fs *c)
-{
-	return __fs_usage_u64s(READ_ONCE(c->replicas.nr));
-}
-
-static inline unsigned __fs_usage_online_u64s(unsigned nr_replicas)
-{
-	return sizeof(struct bch_fs_usage_online) / sizeof(u64) + nr_replicas;
-}
-
-static inline unsigned fs_usage_online_u64s(struct bch_fs *c)
-{
-	return __fs_usage_online_u64s(READ_ONCE(c->replicas.nr));
-}
-
 static inline unsigned dev_usage_u64s(void)
 {
 	return sizeof(struct bch_dev_usage) / sizeof(u64);
 }
 
-u64 bch2_fs_usage_read_one(struct bch_fs *, u64 *);
-
-struct bch_fs_usage_online *bch2_fs_usage_read(struct bch_fs *);
-
-void bch2_fs_usage_acc_to_base(struct bch_fs *, unsigned);
-
-void bch2_fs_usage_to_text(struct printbuf *,
-			   struct bch_fs *, struct bch_fs_usage_online *);
-
-u64 bch2_fs_sectors_used(struct bch_fs *, struct bch_fs_usage_online *);
-
 struct bch_fs_usage_short
 bch2_fs_usage_read_short(struct bch_fs *);
 
-void bch2_dev_usage_update(struct bch_fs *, struct bch_dev *,
-			   const struct bch_alloc_v4 *,
-			   const struct bch_alloc_v4 *, u64, bool);
-
-/* key/bucket marking: */
-
-static inline struct bch_fs_usage *fs_usage_ptr(struct bch_fs *c,
-						unsigned journal_seq,
-						bool gc)
-{
-	percpu_rwsem_assert_held(&c->mark_lock);
-	BUG_ON(!gc && !journal_seq);
-
-	return this_cpu_ptr(gc
-			    ? c->usage_gc
-			    : c->usage[journal_seq & JOURNAL_BUF_MASK]);
-}
-
-int bch2_update_replicas(struct bch_fs *, struct bkey_s_c,
-			 struct bch_replicas_entry_v1 *, s64,
-			 unsigned, bool);
-int bch2_update_replicas_list(struct btree_trans *,
-			      struct bch_replicas_entry_v1 *, s64);
-int bch2_update_cached_sectors_list(struct btree_trans *, unsigned, s64);
-int bch2_replicas_deltas_realloc(struct btree_trans *, unsigned);
-
-void bch2_fs_usage_initialize(struct bch_fs *);
-
 int bch2_bucket_ref_update(struct btree_trans *, struct bch_dev *,
 			   struct bkey_s_c, const struct bch_extent_ptr *,
 			   s64, enum bch_data_type, u8, u8, u32 *);
@@ -369,9 +309,6 @@ int bch2_trigger_reservation(struct btree_trans *, enum btree_id, unsigned,
 
 void bch2_trans_account_disk_usage_change(struct btree_trans *);
 
-void bch2_trans_fs_usage_revert(struct btree_trans *, struct replicas_delta_list *);
-int bch2_trans_fs_usage_apply(struct btree_trans *, struct replicas_delta_list *);
-
 int bch2_trans_mark_metadata_bucket(struct btree_trans *, struct bch_dev *, u64,
 				    enum bch_data_type, unsigned,
 				    enum btree_iter_update_trigger_flags);
#ifdef __KERNEL__ #ifdef __KERNEL__
u64 old, new; u64 old, new;
old = this_cpu_read(c->pcpu->sectors_available);
do { do {
old = this_cpu_read(c->pcpu->sectors_available);
if (sectors > old) if (sectors > old)
return __bch2_disk_reservation_add(c, res, sectors, flags); return __bch2_disk_reservation_add(c, res, sectors, flags);
new = old - sectors; new = old - sectors;
} while (this_cpu_cmpxchg(c->pcpu->sectors_available, old, new) != old); } while (!this_cpu_try_cmpxchg(c->pcpu->sectors_available, &old, new));
this_cpu_add(*c->online_reserved, sectors); this_cpu_add(*c->online_reserved, sectors);
res->sectors += sectors; res->sectors += sectors;
......
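The fast path here claims sectors from a per-CPU pool with a local compare-exchange and only falls into __bch2_disk_reservation_add() when the pool runs dry. A thread-local, single-threaded model of that split; the names, refill size, and refill policy are invented for the sketch (the real code must use this_cpu_try_cmpxchg() because preemption can move the task between CPUs):

#include <stdio.h>
#include <stdint.h>

static uint64_t global_free = 1 << 20;		/* slow path: shared pool */
static _Thread_local uint64_t pcpu_avail;	/* fast path: per-thread cache */

/* Slow path: take the request plus a refill for the local cache.
 * (The real slow path also handles watermarks and -ENOSPC.) */
static int reservation_add_slowpath(uint64_t sectors)
{
	uint64_t want = sectors + 4096;

	if (global_free < want)
		return -1;	/* out of space */
	global_free -= want;
	pcpu_avail += 4096;
	return 0;
}

static int reservation_add(uint64_t sectors)
{
	if (sectors > pcpu_avail)
		return reservation_add_slowpath(sectors);
	pcpu_avail -= sectors;	/* kernel: this_cpu_try_cmpxchg() loop */
	return 0;
}

int main(void)
{
	printf("%d %d\n", reservation_add(100), reservation_add(100));	/* 0 0 */
	return 0;
}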
@@ -16,7 +16,8 @@ struct bucket {
 	u32			stripe;
 	u32			dirty_sectors;
 	u32			cached_sectors;
-};
+	u32			stripe_sectors;
+} __aligned(sizeof(long));
 
 struct bucket_array {
 	struct rcu_head		rcu;
@@ -35,7 +36,7 @@ struct bucket_gens {
 };
 
 struct bch_dev_usage {
-	struct {
+	struct bch_dev_usage_type {
 		u64		buckets;
 		u64		sectors; /* _compressed_ sectors: */
 		/*
@@ -56,18 +57,6 @@ struct bch_fs_usage_base {
 	u64		nr_inodes;
 };
 
-struct bch_fs_usage {
-	/* all fields are in units of 512 byte sectors: */
-	struct bch_fs_usage_base b;
-	u64		persistent_reserved[BCH_REPLICAS_MAX];
-	u64		replicas[];
-};
-
-struct bch_fs_usage_online {
-	u64			online_reserved;
-	struct bch_fs_usage	u;
-};
-
 struct bch_fs_usage_short {
 	u64		capacity;
 	u64		used;
...
@@ -5,6 +5,7 @@
 #include "bcachefs_ioctl.h"
 #include "buckets.h"
 #include "chardev.h"
+#include "disk_accounting.h"
 #include "journal.h"
 #include "move.h"
 #include "recovery_passes.h"
@@ -213,9 +214,8 @@ static long bch2_ioctl_fsck_offline(struct bch_ioctl_fsck_offline __user *user_a
 	if (arg.opts) {
 		char *optstr = strndup_user((char __user *)(unsigned long) arg.opts, 1 << 16);
 
 		ret =   PTR_ERR_OR_ZERO(optstr) ?:
-			bch2_parse_mount_opts(NULL, &thr->opts, optstr);
+			bch2_parse_mount_opts(NULL, &thr->opts, NULL, optstr);
 		if (!IS_ERR(optstr))
 			kfree(optstr);
@@ -224,6 +224,7 @@ static long bch2_ioctl_fsck_offline(struct bch_ioctl_fsck_offline __user *user_a
 	}
 
 	opt_set(thr->opts, stdio, (u64)(unsigned long)&thr->thr.stdio);
+	opt_set(thr->opts, read_only, 1);
 
 	/* We need request_key() to be called before we punt to kthread: */
 	opt_set(thr->opts, nostart, true);
@@ -503,11 +504,9 @@ static long bch2_ioctl_data(struct bch_fs *c,
 static long bch2_ioctl_fs_usage(struct bch_fs *c,
 				struct bch_ioctl_fs_usage __user *user_arg)
 {
-	struct bch_ioctl_fs_usage *arg = NULL;
-	struct bch_replicas_usage *dst_e, *dst_end;
-	struct bch_fs_usage_online *src;
+	struct bch_ioctl_fs_usage arg = {};
+	darray_char replicas = {};
 	u32 replica_entries_bytes;
-	unsigned i;
 	int ret = 0;
 
 	if (!test_bit(BCH_FS_started, &c->flags))
@@ -516,62 +515,60 @@ static long bch2_ioctl_fs_usage(struct bch_fs *c,
 	if (get_user(replica_entries_bytes, &user_arg->replica_entries_bytes))
 		return -EFAULT;
 
-	arg = kzalloc(size_add(sizeof(*arg), replica_entries_bytes), GFP_KERNEL);
-	if (!arg)
-		return -ENOMEM;
-
-	src = bch2_fs_usage_read(c);
-	if (!src) {
-		ret = -ENOMEM;
+	ret =   bch2_fs_replicas_usage_read(c, &replicas) ?:
+		(replica_entries_bytes < replicas.nr ? -ERANGE : 0) ?:
+		copy_to_user_errcode(&user_arg->replicas, replicas.data, replicas.nr);
+	if (ret)
 		goto err;
-	}
 
-	arg->capacity		= c->capacity;
-	arg->used		= bch2_fs_sectors_used(c, src);
-	arg->online_reserved	= src->online_reserved;
+	struct bch_fs_usage_short u = bch2_fs_usage_read_short(c);
+	arg.capacity		= c->capacity;
+	arg.used		= u.used;
+	arg.online_reserved	= percpu_u64_get(c->online_reserved);
+	arg.replica_entries_bytes = replicas.nr;
 
-	for (i = 0; i < BCH_REPLICAS_MAX; i++)
-		arg->persistent_reserved[i] = src->u.persistent_reserved[i];
-
-	dst_e	= arg->replicas;
-	dst_end = (void *) arg->replicas + replica_entries_bytes;
-
-	for (i = 0; i < c->replicas.nr; i++) {
-		struct bch_replicas_entry_v1 *src_e =
-			cpu_replicas_entry(&c->replicas, i);
-
-		/* check that we have enough space for one replicas entry */
-		if (dst_e + 1 > dst_end) {
-			ret = -ERANGE;
-			break;
-		}
-
-		dst_e->sectors		= src->u.replicas[i];
-		dst_e->r		= *src_e;
-
-		/* recheck after setting nr_devs: */
-		if (replicas_usage_next(dst_e) > dst_end) {
-			ret = -ERANGE;
-			break;
-		}
-
-		memcpy(dst_e->r.devs, src_e->devs, src_e->nr_devs);
+	for (unsigned i = 0; i < BCH_REPLICAS_MAX; i++) {
+		struct disk_accounting_pos k = {
+			.type = BCH_DISK_ACCOUNTING_persistent_reserved,
+			.persistent_reserved.nr_replicas = i,
+		};
 
-		dst_e = replicas_usage_next(dst_e);
+		bch2_accounting_mem_read(c,
+					 disk_accounting_pos_to_bpos(&k),
+					 &arg.persistent_reserved[i], 1);
 	}
 
-	arg->replica_entries_bytes = (void *) dst_e - (void *) arg->replicas;
+	ret = copy_to_user_errcode(user_arg, &arg, sizeof(arg));
+err:
+	darray_exit(&replicas);
+	return ret;
+}
 
-	percpu_up_read(&c->mark_lock);
-	kfree(src);
+static long bch2_ioctl_query_accounting(struct bch_fs *c,
+			struct bch_ioctl_query_accounting __user *user_arg)
+{
+	struct bch_ioctl_query_accounting arg;
+	darray_char accounting = {};
+	int ret = 0;
+
+	if (!test_bit(BCH_FS_started, &c->flags))
+		return -EINVAL;
+
+	ret =   copy_from_user_errcode(&arg, user_arg, sizeof(arg)) ?:
+		bch2_fs_accounting_read(c, &accounting, arg.accounting_types_mask) ?:
+		(arg.accounting_u64s * sizeof(u64) < accounting.nr ? -ERANGE : 0) ?:
+		copy_to_user_errcode(&user_arg->accounting, accounting.data, accounting.nr);
 
 	if (ret)
 		goto err;
 
-	ret = copy_to_user_errcode(user_arg, arg,
-			sizeof(*arg) + arg->replica_entries_bytes);
+	arg.capacity		= c->capacity;
+	arg.used		= bch2_fs_usage_read_short(c).used;
+	arg.online_reserved	= percpu_u64_get(c->online_reserved);
+	arg.accounting_u64s	= accounting.nr / sizeof(u64);
+
+	ret = copy_to_user_errcode(user_arg, &arg, sizeof(arg));
err:
-	kfree(arg);
+	darray_exit(&accounting);
 	return ret;
 }
@@ -606,7 +603,7 @@ static long bch2_ioctl_dev_usage(struct bch_fs *c,
 	arg.bucket_size		= ca->mi.bucket_size;
 	arg.nr_buckets		= ca->mi.nbuckets - ca->mi.first_bucket;
 
-	for (i = 0; i < BCH_DATA_NR; i++) {
+	for (i = 0; i < ARRAY_SIZE(arg.d); i++) {
 		arg.d[i].buckets	= src.d[i].buckets;
 		arg.d[i].sectors	= src.d[i].sectors;
 		arg.d[i].fragmented	= src.d[i].fragmented;
@@ -851,7 +848,7 @@ static long bch2_ioctl_fsck_online(struct bch_fs *c,
 		char *optstr = strndup_user((char __user *)(unsigned long) arg.opts, 1 << 16);
 
 		ret =   PTR_ERR_OR_ZERO(optstr) ?:
-			bch2_parse_mount_opts(c, &thr->opts, optstr);
+			bch2_parse_mount_opts(c, &thr->opts, NULL, optstr);
 		if (!IS_ERR(optstr))
 			kfree(optstr);
BCH_IOCTL(disk_resize_journal, struct bch_ioctl_disk_resize_journal); BCH_IOCTL(disk_resize_journal, struct bch_ioctl_disk_resize_journal);
case BCH_IOCTL_FSCK_ONLINE: case BCH_IOCTL_FSCK_ONLINE:
BCH_IOCTL(fsck_online, struct bch_ioctl_fsck_online); BCH_IOCTL(fsck_online, struct bch_ioctl_fsck_online);
case BCH_IOCTL_QUERY_ACCOUNTING:
return bch2_ioctl_query_accounting(c, arg);
default: default:
return -ENOTTY; return -ENOTTY;
} }
......
@@ -10,6 +10,7 @@
 #include <linux/xxhash.h>
 #include <linux/key.h>
 #include <linux/random.h>
+#include <linux/ratelimit.h>
 #include <linux/scatterlist.h>
 #include <crypto/algapi.h>
 #include <crypto/chacha.h>
@@ -436,7 +437,7 @@ int bch2_rechecksum_bio(struct bch_fs *c, struct bio *bio,
 	if (bch2_crc_cmp(merged, crc_old.csum) && !c->opts.no_data_io) {
 		struct printbuf buf = PRINTBUF;
 		prt_printf(&buf, "checksum error in %s() (memory corruption or bug?)\n"
-			   "expected %0llx:%0llx got %0llx:%0llx (old type ",
+			   "  expected %0llx:%0llx got %0llx:%0llx (old type ",
 			   __func__,
 			   crc_old.csum.hi,
 			   crc_old.csum.lo,
@@ -446,7 +447,7 @@ int bch2_rechecksum_bio(struct bch_fs *c, struct bio *bio,
 		prt_str(&buf, " new type ");
 		bch2_prt_csum_type(&buf, new_csum_type);
 		prt_str(&buf, ")");
-		bch_err(c, "%s", buf.buf);
+		WARN_RATELIMIT(1, "%s", buf.buf);
 		printbuf_exit(&buf);
 		return -EIO;
 	}
...
@@ -17,7 +17,8 @@ typedef void (*io_timer_fn)(struct io_timer *);
 
 struct io_timer {
 	io_timer_fn		fn;
-	unsigned long		expire;
+	void			*fn2;
+	u64			expire;
 };
 
 /* Amount to buffer up on a percpu counter */
...