Commit 8958b249 authored by Haitao Shi, committed by Linus Torvalds

mm: fix some spelling mistakes in comments

Fix some spelling mistakes in comments:
	udpate ==> update
	succesful ==> successful
	exmaple ==> example
	unneccessary ==> unnecessary
	stoping ==> stopping
	uknown ==> unknown

Link: https://lkml.kernel.org/r/20201127011747.86005-1-shihaitao1@huawei.com
Signed-off-by: Haitao Shi <shihaitao1@huawei.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ff5c19ed
@@ -1359,7 +1359,7 @@ static int __wait_on_page_locked_async(struct page *page,
 	else
 		ret = PageLocked(page);
 	/*
-	 * If we were succesful now, we know we're still on the
+	 * If we were successful now, we know we're still on the
 	 * waitqueue as we're still under the lock. This means it's
 	 * safe to remove and return success, we know the callback
 	 * isn't going to trigger.
@@ -2391,7 +2391,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * Clone page flags before unfreezing refcount.
 	 *
 	 * After successful get_page_unless_zero() might follow flags change,
-	 * for exmaple lock_page() which set PG_waiters.
+	 * for example lock_page() which set PG_waiters.
 	 */
 	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	page_tail->flags |= (head->flags &
@@ -1275,7 +1275,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 				 * PTEs are armed with uffd write protection.
 				 * Here we can also mark the new huge pmd as
 				 * write protected if any of the small ones is
-				 * marked but that could bring uknown
+				 * marked but that could bring unknown
 				 * userfault messages that falls outside of
 				 * the registered range. So, just be simple.
 				 */
@@ -871,7 +871,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
 * @base: base address of the region
 * @size: size of the region
 * @set: set or clear the flag
- * @flag: the flag to udpate
+ * @flag: the flag to update
 *
 * This function isolates region [@base, @base + @size), and sets/clears flag
 *
@@ -2594,7 +2594,7 @@ static bool migrate_vma_check_page(struct page *page)
 		 * will bump the page reference count. Sadly there is no way to
 		 * differentiate a regular pin from migration wait. Hence to
 		 * avoid 2 racing thread trying to migrate back to CPU to enter
-		 * infinite loop (one stoping migration because the other is
+		 * infinite loop (one stopping migration because the other is
 		 * waiting on pte migration entry). We always return true here.
 		 *
 		 * FIXME proper solution is to rework migration_entry_wait() so
@@ -34,7 +34,7 @@
 *
 * The need callback is used to decide whether extended memory allocation is
 * needed or not. Sometimes users want to deactivate some features in this
- * boot and extra memory would be unneccessary. In this case, to avoid
+ * boot and extra memory would be unnecessary. In this case, to avoid
 * allocating huge chunk of memory, each clients represent their need of
 * extra memory through the need callback. If one of the need callbacks
 * returns true, it means that someone needs extra memory so that