Commit fd639087 authored by David Hildenbrand, committed by Andrew Morton

mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap()

Patch series "Anon rmap cleanups".

Some cleanups around rmap for anon pages.  I'm also working on more
cleanups around file rmap -- to handle the "compound" parameter
internally only and to let hugetlb use page_add_file_rmap() -- but these
changes make sense separately.


This patch (of 6):

That comment was added in commit 5dbe0af4 ("mm: fix kernel BUG at
mm/rmap.c:1017!") to document why we can see vma->vm_end getting adjusted
concurrently due to a VMA split.
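
For context, splitting a VMA shrinks the original VMA (its vm_end moves
down) and lets a new VMA cover the tail, so an address sampled before the
split can briefly appear to lie in the "next" VMA.  Below is a minimal
userspace sketch of just that shrink step, using a simplified stand-in
struct and helper -- an illustration, not the kernel's actual
__split_vma() code:

#include <assert.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct vm_area_struct. */
struct vma {
	unsigned long vm_start;
	unsigned long vm_end;
};

/*
 * Sketch of a split at 'addr': the original VMA keeps [vm_start, addr);
 * a second VMA (not modeled here) would take over [addr, old vm_end).
 */
static void split_shrinks_original(struct vma *vma, unsigned long addr)
{
	assert(addr > vma->vm_start && addr < vma->vm_end);
	vma->vm_end = addr;
}

int main(void)
{
	struct vma vma = { .vm_start = 0x1000, .vm_end = 0x5000 };
	unsigned long address = 0x4000;	/* sampled before the split */

	split_shrinks_original(&vma, 0x3000);

	/*
	 * The sampled address now lies past vma->vm_end, i.e. in the
	 * "next" VMA -- the situation the stale comment referred to.
	 */
	printf("address 0x%lx is %s [0x%lx, 0x%lx)\n", address,
	       address < vma.vm_end ? "inside" : "outside",
	       vma.vm_start, vma.vm_end);
	return 0;
}

As noted below, the optimized locking changed in bf181b9f, so the
scenario the comment described no longer matches the code.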

However, the optimized locking code was changed again in commit bf181b9f
("mm anon rmap: replace same_anon_vma linked list with an interval tree.").

...  and later, the comment was changed in commit 0503ea8f ("mm/mmap:
remove __vma_adjust()") to talk about "vma_merge" although the original
issue was with VMA splitting.

Let's just remove that comment.  Nowadays, it's outdated, imprecise and
confusing.

Link: https://lkml.kernel.org/r/20230913125113.313322-1-david@redhat.com
Link: https://lkml.kernel.org/r/20230913125113.313322-2-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 811244a5
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1245,7 +1245,6 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
 	if (likely(!folio_test_ksm(folio))) {
-		/* address might be in next vma when migration races vma_merge */
 		if (first)
 			__page_set_anon_rmap(folio, page, vma, address,
 					     !!(flags & RMAP_EXCLUSIVE));
@@ -2549,7 +2548,6 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(!anon_vma);
-	/* address might be in next vma when migration races vma_merge */
 	first = atomic_inc_and_test(&folio->_entire_mapcount);
 	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
 	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);