Commit 9549db1d
Authored Apr 08, 2003 by Andrew Morton
Committed by Linus Torvalds, Apr 08, 2003

[PATCH] rmap comments

From: Hugh Dickins <hugh@veritas.com>

Update a few locking comments in rmap.c.

parent 8e98702b

Showing 1 changed file with 6 additions and 7 deletions.

mm/rmap.c

@@ -14,8 +14,8 @@
 /*
  * Locking:
  * - the page->pte.chain is protected by the PG_chainlock bit,
- *   which nests within the zone->lru_lock, then the
- *   mm->page_table_lock, and then the page lock.
+ *   which nests within the the mm->page_table_lock,
+ *   which nests within the page lock.
  * - because swapout locking is opposite to the locking order
  *   in the page fault path, the swapout path uses trylocks
  *   on the mm->page_table_lock
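
The nesting the updated comment describes runs from the page lock (outermost) to mm->page_table_lock to the pte chain lock (the PG_chainlock bit, innermost). As a rough illustration only, here is a minimal userspace sketch of taking locks in that documented order, using pthread mutexes as stand-ins; the names below are hypothetical and are not the kernel's own locking primitives.

#include <pthread.h>

static pthread_mutex_t page_lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t pte_chain_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Fault-path order: always take the locks outermost-first, matching the
 * documented nesting, so no two paths that follow it can deadlock. */
static void update_pte_chain(void)
{
        pthread_mutex_lock(&page_lock);        /* outermost */
        pthread_mutex_lock(&page_table_lock);  /* nests within the page lock */
        pthread_mutex_lock(&pte_chain_lock);   /* innermost, PG_chainlock analogue */

        /* ... modify the page's pte chain here ... */

        pthread_mutex_unlock(&pte_chain_lock);
        pthread_mutex_unlock(&page_table_lock);
        pthread_mutex_unlock(&page_lock);
}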

@@ -287,9 +287,8 @@ void page_remove_rmap(struct page *page, pte_t *ptep)
  * table entry mapping a page. Because locking order here is opposite
  * to the locking order used by the page fault path, we use trylocks.
  * Locking:
- *     zone->lru_lock                page_launder()
- *         page lock                 page_launder(), trylock
- *             pte_chain_lock        page_launder()
+ *         page lock                 shrink_list(), trylock
+ *             pte_chain_lock        shrink_list()
  *             mm->page_table_lock   try_to_unmap_one(), trylock
  */
 static int FASTCALL(try_to_unmap_one(struct page *, pte_addr_t));
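
The swapout path, as the comment says, meets this hierarchy from the inside out: it already holds the pte chain lock when it needs mm->page_table_lock, so it must not block there. A hedged sketch of that trylock pattern, continuing the hypothetical stand-ins from the sketch above (only SWAP_SUCCESS and SWAP_AGAIN come from the comment; everything else is illustrative):

#define SWAP_SUCCESS   0
#define SWAP_AGAIN     1

/* Called with the pte chain lock (innermost) already held.  Because this is
 * the opposite of the fault-path order, blocking on page_table_lock here
 * could deadlock; a missed trylock simply means "try again later". */
static int try_to_unmap_one_sketch(void)
{
        if (pthread_mutex_trylock(&page_table_lock) != 0)
                return SWAP_AGAIN;

        /* ... clear the page table entry and detach it from the pte chain ... */

        pthread_mutex_unlock(&page_table_lock);
        return SWAP_SUCCESS;
}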

@@ -376,8 +375,8 @@ static int try_to_unmap_one(struct page * page, pte_addr_t paddr)
  * @page: the page to get unmapped
  *
  * Tries to remove all the page table entries which are mapping this
- * page, used in the pageout path. Caller must hold zone->lru_lock
- * and the page lock. Return values are:
+ * page, used in the pageout path. Caller must hold the page lock
+ * and its pte chain lock. Return values are:
  *
  *     SWAP_SUCCESS    - we succeeded in removing all mappings
  *     SWAP_AGAIN      - we missed a trylock, try again later
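
A caller shaped like the shrink_list() path named in the updated locking table would treat SWAP_AGAIN as a transient miss rather than an error. For example, still using the hypothetical sketch above:

static void pageout_one_page(void)
{
        switch (try_to_unmap_one_sketch()) {
        case SWAP_SUCCESS:
                /* all mappings are gone; the page can be written out or freed */
                break;
        case SWAP_AGAIN:
                /* missed a trylock: leave the page on its list, retry on a later pass */
                break;
        }
}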