Commit 348e967a authored by Jan Kara, committed by Ross Zwisler

dax: Make huge page handling depend on CONFIG_BROKEN

Currently the handling of huge pages for DAX is racy. For example the
following can happen:

CPU0 (THP write fault)			CPU1 (normal read fault)

__dax_pmd_fault()			__dax_fault()
  get_block(inode, block, &bh, 0) -> not mapped
					get_block(inode, block, &bh, 0)
					  -> not mapped
  if (!buffer_mapped(&bh) && write)
    get_block(inode, block, &bh, 1) -> allocates blocks
  truncate_pagecache_range(inode, lstart, lend);
					dax_load_hole();

This results in data corruption, since the process on CPU1 won't see the
changes CPU0 made to the file.

The race can happen even when two normal faults race; with THP, however,
the situation is worse because the two faults don't operate on the same
entries in the radix tree, and we want to use those entries for
serialization. So make THP support in the DAX code depend on CONFIG_BROKEN
for now.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
parent b9953536
fs/Kconfig
@@ -52,6 +52,7 @@ config FS_DAX_PMD
 	depends on FS_DAX
 	depends on ZONE_DEVICE
 	depends on TRANSPARENT_HUGEPAGE
+	depends on BROKEN

 endif # BLOCK
fs/dax.c
@@ -675,7 +675,7 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 }
 EXPORT_SYMBOL_GPL(dax_fault);

-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE)
 /*
  * The 'colour' (ie low bits) within a PMD of a page offset.  This comes up
  * more often than one might expect in the below function.
include/linux/dax.h
@@ -29,7 +29,7 @@ static inline int __dax_zero_page_range(struct block_device *bdev,
 }
 #endif

-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE)
 int dax_pmd_fault(struct vm_area_struct *, unsigned long addr, pmd_t *,
 		unsigned int flags, get_block_t);
 int __dax_pmd_fault(struct vm_area_struct *, unsigned long addr, pmd_t *,