Commit fadae295 authored by Yang Shi, committed by Linus Torvalds

thp: use mm_counter_file() to determine which rss counter to update

Since commit eca56ff9 ("mm, shmem: add internal shmem resident
memory accounting"), the MM_SHMEMPAGES counter has been used to separate
shmem accounting from regular file accounting.  So all shmem pages should
be accounted to MM_SHMEMPAGES instead of MM_FILEPAGES.

And, normal 4K shmem pages are already accounted to MM_SHMEMPAGES, so
shmem thp pages should not be treated differently.  Account them to
MM_SHMEMPAGES via mm_counter_file(), since shmem pages are swap backed,
to keep them consistent with normal 4K shmem pages.
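
For reference, mm_counter_file() picks the counter from the page's
swap-backed state (shmem/tmpfs pages are swap backed); a sketch of the
helper as it exists in include/linux/mm.h since the eca56ff9 series:

	/*
	 * Sketch: swap-backed (shmem) pages are charged to MM_SHMEMPAGES,
	 * other file-backed pages to MM_FILEPAGES.
	 */
	static inline int mm_counter_file(struct page *page)
	{
		if (PageSwapBacked(page))
			return MM_SHMEMPAGES;
		return MM_FILEPAGES;
	}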

This will not change the rss counter of processes since shmem pages are
still a part of it.

The /proc/pid/status and /proc/pid/statm counters will, however, be more
accurate with respect to shmem usage, as originally intended.  And, as
eca56ff9 ("mm, shmem: add internal shmem resident memory accounting")
mentioned, the OOM killer can also report a more accurate "shmem-rss".
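
As an illustration (not part of this patch), the improved accounting can
be observed via the RssShmem field of /proc/<pid>/status; a minimal
userspace check, assuming a kernel that exposes RssShmem:

	#include <stdio.h>
	#include <string.h>

	/* Print this task's RssShmem line from /proc/self/status. */
	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/self/status", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "RssShmem:", 9))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}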

Link: http://lkml.kernel.org/r/1529442518-17398-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 720e14eb
@@ -1740,7 +1740,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	} else {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-		add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
+		add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);
 	}
 	spin_unlock(ptl);

@@ -2090,7 +2090,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		SetPageReferenced(page);
 		page_remove_rmap(page, true);
 		put_page(page);
-		add_mm_counter(mm, MM_FILEPAGES, -HPAGE_PMD_NR);
+		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
 	} else if (is_huge_zero_pmd(*pmd)) {
 		/*

@@ -3400,7 +3400,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-	add_mm_counter(vma->vm_mm, MM_FILEPAGES, HPAGE_PMD_NR);
+	add_mm_counter(vma->vm_mm, mm_counter_file(page), HPAGE_PMD_NR);
 	page_add_file_rmap(page, true);
 	/*
 	 * deposit and withdraw with pmd lock held