Commit b5ba3a64 authored by Suren Baghdasaryan, committed by Andrew Morton

userfaultfd: remove WRITE_ONCE when setting folio->index during UFFDIO_MOVE

When folio is moved with UFFDIO_MOVE it gets locked before the rmap and
index are modified.  Due to the folio lock being already held,
WRITE_ONCE() is not needed when setting the folio index.  Remove it.

Link: https://lkml.kernel.org/r/20240415020821.1152951-1-surenb@google.com
Reported-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Lokesh Gidra <lokeshgidra@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 231f8c71
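
For context, here is a minimal C sketch of the locking pattern this commit relies on. It is an illustration only, not code from the kernel tree; the helper name uffd_move_update_index is invented for this sketch. The point it shows: the UFFDIO_MOVE paths take the folio lock before touching the rmap and index, so no other writer can race with the store and a plain assignment is enough, making WRITE_ONCE() unnecessary.

/*
 * Illustrative sketch only (hypothetical helper, not in the kernel tree):
 * the caller already holds the folio lock, which excludes any concurrent
 * writer, so a plain assignment to folio->index is sufficient.
 */
static void uffd_move_update_index(struct folio *src_folio,
				   struct vm_area_struct *dst_vma,
				   unsigned long dst_addr)
{
	/* The UFFDIO_MOVE paths lock the folio before reaching this point. */
	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(src_folio), src_folio);

	folio_move_anon_rmap(src_folio, dst_vma);
	/* Plain store; the folio lock already serializes writers. */
	src_folio->index = linear_page_index(dst_vma, dst_addr);
}

The actual change, applied to both the huge-PMD and the PTE move paths, follows below.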
@@ -2200,7 +2200,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 	}
 	folio_move_anon_rmap(src_folio, dst_vma);
-	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+	src_folio->index = linear_page_index(dst_vma, dst_addr);
 	_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
 	/* Follow mremap() behavior and treat the entry dirty after the move */

@@ -1026,7 +1026,7 @@ static int move_present_pte(struct mm_struct *mm,
 	}
 	folio_move_anon_rmap(src_folio, dst_vma);
-	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+	src_folio->index = linear_page_index(dst_vma, dst_addr);
 	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
 	/* Follow mremap() behavior and treat the entry dirty after the move */