Commit 335f7d4f authored by Brian Foster, committed by Kent Overstreet

bcachefs: clean up post-eof folios on -ENOSPC

The buffered write path batches folio creations in the file mapping
based on the requested size of the write. Under low free space
conditions, it is possible to add a bunch of folios to the mapping
and then return a short write or -ENOSPC due to lack of space. If
this occurs on an extending write, the file size is updated based on
the amount of data successfully written to the file. If folios were
added beyond the final i_size, they may hang around until reclaimed,
truncated or encountered unexpectedly by another operation.

For example, generic/083 reproduces a sequence of events where a
short write leaves around one or more post-EOF folios on an inode, a
subsequent zero range request extends beyond i_size and overlaps
with an aforementioned folio, and __bch2_truncate_folio() happens
across it and complains.

Update __bch2_buffered_write() to keep track of the start offset of
the last folio added to the mapping for a prospective write. After
i_size is updated, check whether this offset starts beyond EOF. If
so, truncate pagecache beyond the latest EOF to clean up any folios
that don't reside at least partially within EOF upon completion of
the write.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
parent 4ad6aa46
@@ -1860,6 +1860,7 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
 	struct folio **fi, *f;
 	unsigned copied = 0, f_offset;
 	loff_t end = pos + len, f_pos;
+	loff_t last_folio_pos = inode->v.i_size;
 	int ret = 0;
 
 	BUG_ON(!len);
@@ -1876,8 +1877,6 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
 
 	BUG_ON(!folios.nr);
 
-	end = min(end, folio_end_pos(darray_last(folios)));
-
 	f = darray_first(folios);
 	if (pos != folio_pos(f) && !folio_test_uptodate(f)) {
 		ret = bch2_read_single_folio(f, mapping);
@@ -1886,6 +1885,8 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
 	}
 
 	f = darray_last(folios);
+	end = min(end, folio_end_pos(f));
+	last_folio_pos = folio_pos(f);
 	if (end != folio_end_pos(f) && !folio_test_uptodate(f)) {
 		if (end >= inode->v.i_size) {
 			folio_zero_range(f, 0, folio_size(f));
@@ -1999,6 +2000,14 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
 		folio_put(*fi);
 	}
 
+	/*
+	 * If the last folio added to the mapping starts beyond current EOF, we
+	 * performed a short write but left around at least one post-EOF folio.
+	 * Clean up the mapping before we return.
+	 */
+	if (last_folio_pos >= inode->v.i_size)
+		truncate_pagecache(&inode->v, inode->v.i_size);
+
 	darray_exit(&folios);
 	bch2_folio_reservation_put(c, inode, &res);