Commit efbb77b2 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] shmdt() speedup

From: William Lee Irwin III <wli@holomorphy.com>

Micro-optimize sys_shmdt(). Knowledge of the vma's being searched can be
exploited to restrict the search space in several ways:

(1) shm mappings always start their lives at file offset 0, so only
	vma's above shmaddr need be considered. find_vma() can be used
	to seek to the proper position in mm->mmap in O(lg(n)) time.

(2) The search is for a vma which could be a fragment of a broken-up
	shm mapping. Such a mapping would have been created starting at
	shmaddr with vm_pgoff 0 and would extend no further into
	userspace than shmaddr + size. So after having found an initial
	vma, find the size of the shm segment it maps and use it as an
	upper bound on the virtual address space that needs to be
	searched (see the sketch after this list).

(3) If a mapping had been mremap()'d away from its original attach
	address, the original checks would already miss the vma's
	mapping the shm segment when shmdt() is given that original
	address. The new code does no better and no worse than the
	original in that situation.

(4) If the chain of references in vma->vm_file->f_dentry->d_inode->i_size
	is not guaranteed by refcounting and/or the shm code, then this
	is oopsable; AFAICT an inode is always allocated.
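
To make the matching test in (1) and (2) concrete, here is the predicate
the patch open-codes in both of its loops, pulled out as a standalone
helper. This is an illustrative sketch only; the helper name
is_shm_fragment is hypothetical and does not appear in the patch.

	/*
	 * Illustrative sketch, not part of the patch: a vma is a
	 * candidate fragment of an shm mapping attached at addr iff
	 * its distance from addr equals its file offset, i.e. the
	 * original mapping began at addr with vm_pgoff 0.
	 */
	static inline int is_shm_fragment(struct vm_area_struct *vma,
					  unsigned long addr)
	{
		return (vma->vm_ops == &shm_vm_ops || is_vm_hugetlb_page(vma)) &&
		       (vma->vm_start - addr) / PAGE_SIZE == vma->vm_pgoff;
	}

The bound in (2) then follows directly: no fragment of a segment i_size
bytes long attached at addr can end beyond addr + i_size, which is what
the second loop's (loff_t)(vma->vm_end - addr) <= size test enforces.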
parent bb455250
@@ -737,21 +737,66 @@ asmlinkage long sys_shmat (int shmid, char *shmaddr, int shmflg, ulong *raddr)
  * detach and kill segment if marked destroyed.
  * The work is done in shm_close.
  */
-asmlinkage long sys_shmdt (char *shmaddr)
+asmlinkage long sys_shmdt(char *shmaddr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *shmd, *shmdnext;
+	struct vm_area_struct *vma, *next;
+	unsigned long addr = (unsigned long)shmaddr;
+	loff_t size = 0;
 	int retval = -EINVAL;
 
 	down_write(&mm->mmap_sem);
-	for (shmd = mm->mmap; shmd; shmd = shmdnext) {
-		shmdnext = shmd->vm_next;
-		if ((shmd->vm_ops == &shm_vm_ops || (shmd->vm_flags & VM_HUGETLB))
-		    && shmd->vm_start - (shmd->vm_pgoff << PAGE_SHIFT) == (ulong) shmaddr) {
-			do_munmap(mm, shmd->vm_start, shmd->vm_end - shmd->vm_start);
+
+	/*
+	 * If it had been mremap()'d, the starting address would not
+	 * match the usual checks anyway. So assume all vma's are
+	 * above the starting address given.
+	 */
+	vma = find_vma(mm, addr);
+
+	while (vma) {
+		next = vma->vm_next;
+
+		/*
+		 * Check if the starting address would match, i.e. it's
+		 * a fragment created by mprotect() and/or munmap(), or
+		 * it otherwise starts at this address with no hassles.
+		 */
+		if ((vma->vm_ops == &shm_vm_ops || is_vm_hugetlb_page(vma)) &&
+		    (vma->vm_start - addr) / PAGE_SIZE == vma->vm_pgoff) {
+			size = vma->vm_file->f_dentry->d_inode->i_size;
+			do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start);
+			/*
+			 * We discovered the size of the shm segment, so
+			 * break out of here and fall through to the next
+			 * loop that uses the size information to stop
+			 * searching for matching vma's.
+			 */
 			retval = 0;
+			vma = next;
+			break;
 		}
+		vma = next;
 	}
+
+	/*
+	 * We need look no further than the maximum address a fragment
+	 * could possibly have landed at. Also cast things to loff_t to
+	 * prevent overflows and make comparisons vs. equal-width types.
+	 */
+	while (vma && (loff_t)(vma->vm_end - addr) <= size) {
+		next = vma->vm_next;
+
+		/* finding a matching vma now does not alter retval */
+		if ((vma->vm_ops == &shm_vm_ops || is_vm_hugetlb_page(vma)) &&
+		    (vma->vm_start - addr) / PAGE_SIZE == vma->vm_pgoff)
+			do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start);
+		vma = next;
+	}
+
 	up_write(&mm->mmap_sem);
 	return retval;
 }
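
For reference, here is a minimal userspace sequence (illustrative only,
not part of the patch; error checking omitted for brevity) that produces
exactly the fragmented case the second loop handles: mprotect() splits
the attached segment into two vma's, and shmdt() must still find and
unmap both.

	#include <sys/ipc.h>
	#include <sys/shm.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pg = sysconf(_SC_PAGESIZE);

		/* Create and attach a two-page segment. */
		int id = shmget(IPC_PRIVATE, 2 * pg, IPC_CREAT | 0600);
		char *p = shmat(id, NULL, 0);

		/* Split the single shm vma into two fragments. */
		mprotect(p + pg, pg, PROT_READ);

		/* Both fragments must be found and unmapped. */
		shmdt(p);
		shmctl(id, IPC_RMID, NULL);
		return 0;
	}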