Commit 720c2419 authored by Minchan Kim, committed by Greg Kroah-Hartman

ANDROID: binder: change down_write to down_read

binder_update_page_range needs to hold mmap_sem for write because
vm_insert_page has to add VM_MIXEDMAP to vma->vm_flags if it is not
already set. However, profiling binder shows that every binder buffer
is mapped in advance by binder_mmap, so we can set VM_MIXEDMAP at
binder_mmap time, where mmap_sem is already held for write. With that,
binder_update_page_range no longer needs mmap_sem for write and can
use the proper API, down_read. This reduces mmap_sem contention as
well as fixing the down_write abuse.

Ganesh Mahendran ran app-launch and binder throughput tests and found
no problems. At Greg KH's request I also ran a binder latency test
(thanks to Martijn for showing me how) and found no problems either.

Cc: Ganesh Mahendran <opensource.ganesh@gmail.com>
Cc: Joe Perches <joe@perches.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Todd Kjos <tkjos@google.com>
Reviewed-by: Martijn Coenen <maco@android.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 838d5565
@@ -4727,7 +4727,9 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		failure_string = "bad vm_flags";
 		goto err_bad_arg;
 	}
-	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
+	vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
+	vma->vm_flags &= ~VM_MAYWRITE;
 	vma->vm_ops = &binder_vm_ops;
 	vma->vm_private_data = proc;
...
@@ -219,7 +219,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	mm = alloc->vma_vm_mm;
 	if (mm) {
-		down_write(&mm->mmap_sem);
+		down_read(&mm->mmap_sem);
 		vma = alloc->vma;
 	}
@@ -288,7 +288,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		/* vm_insert_page does not seem to increment the refcount */
 	}
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return 0;
@@ -321,7 +321,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	}
 err_no_vma:
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return vma ? -ENOMEM : -ESRCH;
...
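
For reference, the locking pattern this change relies on can be sketched as
below. This is a simplified, hypothetical illustration, not the binder code
itself (example_mmap and example_insert_page are made-up names): the mmap
handler, which already runs with mmap_sem held for write, sets VM_MIXEDMAP
once, so later page insertion only needs mmap_sem for read.

	#include <linux/fs.h>
	#include <linux/mm.h>

	/* Hypothetical sketch of the mmap-time path. */
	static int example_mmap(struct file *filp, struct vm_area_struct *vma)
	{
		/*
		 * mmap() callers already hold mmap_sem for write, so modifying
		 * vma->vm_flags here is safe and happens once per mapping.
		 * With VM_MIXEDMAP set up front, vm_insert_page() never needs
		 * to change vm_flags later.
		 */
		vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
		vma->vm_flags &= ~VM_MAYWRITE;
		return 0;
	}

	/* Hypothetical sketch of the allocation-time path. */
	static int example_insert_page(struct mm_struct *mm,
				       struct vm_area_struct *vma,
				       unsigned long addr, struct page *page)
	{
		int ret;

		/*
		 * VM_MIXEDMAP is already set, so vm_insert_page() only reads
		 * the vma; holding mmap_sem for read is sufficient.
		 */
		down_read(&mm->mmap_sem);
		ret = vm_insert_page(vma, addr, page);
		up_read(&mm->mmap_sem);
		return ret;
	}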