Commit 0e9aa675 authored by Miaohe Lin, committed by Linus Torvalds

mm: fix some broken comments

Fix some broken comments, including typos, grammar errors, and a wrong
function name.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/20200913095456.54873-1-linmiaohe@huawei.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ed017373
@@ -1445,7 +1445,7 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile void *mem
  * unlock_page - unlock a locked page
  * @page: the page
  *
- * Unlocks the page and wakes up sleepers in ___wait_on_page_locked().
+ * Unlocks the page and wakes up sleepers in wait_on_page_locked().
  * Also wakes sleepers in wait_on_page_writeback() because the wakeup
  * mechanism between PageLocked pages and PageWriteback pages is shared.
  * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
@@ -3004,7 +3004,7 @@ static struct page *do_read_cache_page(struct address_space *mapping,
                 goto out;

         /*
-         * Page is not up to date and may be locked due one of the following
+         * Page is not up to date and may be locked due to one of the following
          * case a: Page is being filled and the page lock is held
          * case b: Read/write error clearing the page uptodate status
          * case c: Truncation in progress (page locked)
@@ -246,7 +246,7 @@ int add_to_swap(struct page *page)
                 goto fail;
         /*
          * Normally the page will be dirtied in unmap because its pte should be
-         * dirty. A special case is MADV_FREE page. The page'e pte could have
+         * dirty. A special case is MADV_FREE page. The page's pte could have
          * dirty bit cleared but the page's SwapBacked bit is still set because
          * clearing the dirty bit and SwapBacked bit has no lock protected. For
          * such page, unmap will not set dirty bit for it, so page reclaim will