Commit cfbf07e2 authored by Qu Wenruo, committed by David Sterba

btrfs: migrate to use folio private instead of page private

As a cleanup and preparation for the future folio migration, this patch
replaces all page->private usage with the folio equivalents.  This
includes (a short sketch of the conversion pattern follows the list):

- PagePrivate()
  -> folio_test_private()

- page->private
  -> folio_get_private()

- attach_page_private()
  -> folio_attach_private()

- detach_page_private()
  -> folio_detach_private()
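
The following is a hedged sketch of the resulting pattern, not a hunk
from this patch; the function name and the btrfs_subpage usage are only
illustrative:

	/*
	 * Illustrative only: attach, test, fetch and detach a private
	 * pointer through the folio helpers instead of the page ones.
	 */
	static void example_folio_private(struct page *page,
					  struct btrfs_subpage *subpage)
	{
		struct folio *folio = page_folio(page);

		/* was: attach_page_private(page, subpage); */
		folio_attach_private(folio, subpage);

		/* was: if (PagePrivate(page)) */
		if (folio_test_private(folio))
			/* was: subpage = (struct btrfs_subpage *)page->private; */
			subpage = folio_get_private(folio);

		/* was: detach_page_private(page); */
		folio_detach_private(folio);
	}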

Since we're here, also remove the forced cast on the private pointer:
folio_get_private() returns (void *) already, so we don't really need
to do the cast.
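
For example, the pair below mirrors the wait_subpage_spinlock() hunk
further down; the variable names are taken from there:

	/* was: subpage = (struct btrfs_subpage *)page->private; */
	subpage = folio_get_private(folio);

folio_get_private() hands the stored pointer back as (void *), so the
explicit (struct btrfs_subpage *) cast can simply go away.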

For now, even if we missed some call sites, it won't cause any problem
yet, as we're only using order-0 folios (single page), so all those
folio/page flags should stay in sync.

But for the future conversion to higher-order folios, the page <-> folio
flag sync is no longer guaranteed, thus we have to migrate to the folio
flags.
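
A minimal, hypothetical helper (not part of this patch) showing the
folio-first form that keeps working regardless of folio order:

	/*
	 * Hypothetical helper: for an order-0 folio, the folio and its
	 * single page are the same descriptor, so this matches the old
	 * PagePrivate() check; with higher-order folios only the
	 * folio-level check remains reliable.
	 */
	static bool example_has_private(struct page *page)
	{
		struct folio *folio = page_folio(page);

		return folio_test_private(folio);
	}
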
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 4cea422a
@@ -43,10 +43,10 @@ enum {
 };
 
 /*
- * page->private values. Every page that is controlled by the extent
- * map has page->private set to one.
+ * Folio private values. Every page that is controlled by the extent map has
+ * folio private set to this value.
  */
-#define EXTENT_PAGE_PRIVATE 1
+#define EXTENT_FOLIO_PRIVATE 1
 
 /*
  * The extent buffer bitmap operations are done with byte granularity instead of
@@ -869,9 +869,9 @@ static int prepare_uptodate_page(struct inode *inode,
 	 * released.
 	 *
 	 * The private flag check is essential for subpage as we need
-	 * to store extra bitmap using page->private.
+	 * to store extra bitmap using folio private.
 	 */
-	if (page->mapping != inode->i_mapping || !PagePrivate(page)) {
+	if (page->mapping != inode->i_mapping || !folio_test_private(folio)) {
 		unlock_page(page);
 		return -EAGAIN;
 	}
@@ -4725,7 +4725,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	/*
 	 * We unlock the page after the io is completed and then re-lock it
 	 * above. release_folio() could have come in between that and cleared
-	 * PagePrivate(), but left the page in the mapping. Set the page mapped
+	 * folio private, but left the page in the mapping. Set the page mapped
 	 * here to make sure it's properly set for the subpage stuff.
 	 */
 	ret = set_page_extent_mapped(page);
@@ -7851,13 +7851,14 @@ static void btrfs_readahead(struct readahead_control *rac)
 static void wait_subpage_spinlock(struct page *page)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
+	struct folio *folio = page_folio(page);
 	struct btrfs_subpage *subpage;
 
 	if (!btrfs_is_subpage(fs_info, page))
 		return;
 
-	ASSERT(PagePrivate(page) && page->private);
-	subpage = (struct btrfs_subpage *)page->private;
+	ASSERT(folio_test_private(folio) && folio_get_private(folio));
+	subpage = folio_get_private(folio);
 
 	/*
 	 * This may look insane as we just acquire the spinlock and release it,