Commit b1cc94ab authored by Mike Rapoport, committed by Linus Torvalds

shmem: shmem_charge: verify max_blocks is not exceeded before inode update

Patch series "userfaultfd: enable zeropage support for shmem".

These patches enable support for UFFDIO_ZEROPAGE for shared memory.

The first two patches are not strictly related to userfaultfd; they are
just minor refactoring to reduce the amount of code duplication.

This patch (of 7):

Currently we update the inode and shmem_inode_info before verifying that
used_blocks will not exceed max_blocks.  If it does, we undo the update.
Let's switch the order and move the verification of the block count
before the inode and shmem_inode_info update.
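
To illustrate the reordering outside the kernel, here is a minimal, self-contained C sketch of the check-before-commit pattern this patch applies.  All names (charge, sb_info, acct_blocks, and so on) are hypothetical stand-ins, the per-CPU counter and locking are replaced by plain fields, and the code is only a simplified model of the new control flow, not the actual shmem implementation: the filesystem-wide limit is verified and the shared counter bumped first, so the failure path only has to undo the initial block accounting instead of rolling back inode state.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the shmem accounting state. */
struct sb_info {
	long max_blocks;	/* 0 means "no limit", as in shmem */
	long used_blocks;
};

struct inode_info {
	long alloced;
};

/* Pretend quota/memory accounting that can be undone on failure. */
static bool acct_blocks(long pages)	{ (void)pages; return true; }
static void unacct_blocks(long pages)	{ (void)pages; }

/*
 * charge(): verify the filesystem-wide limit *before* touching any
 * per-inode state, mirroring the reordered shmem_charge().  On failure
 * only the initial accounting has to be undone.
 */
static bool charge(struct sb_info *sb, struct inode_info *info, long pages)
{
	if (!acct_blocks(pages))
		return false;

	if (sb->max_blocks) {
		if (sb->used_blocks + pages > sb->max_blocks)
			goto unacct;
		sb->used_blocks += pages;
	}

	info->alloced += pages;		/* per-inode update only on success */
	return true;

unacct:
	unacct_blocks(pages);
	return false;
}

int main(void)
{
	struct sb_info sb = { .max_blocks = 8, .used_blocks = 7 };
	struct inode_info info = { .alloced = 0 };

	printf("charge 4 pages: %s\n", charge(&sb, &info, 4) ? "ok" : "rejected");
	printf("charge 1 page:  %s\n", charge(&sb, &info, 1) ? "ok" : "rejected");
	return 0;
}

With the check done up front, the error path never has to re-take the lock or recalculate inode state just to undo work that should not have happened in the first place, which is exactly what the removed rollback branch in the diff below had to do.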

Link: http://lkml.kernel.org/r/1497939652-16528-2-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent fe490cc0
@@ -266,6 +266,14 @@ bool shmem_charge(struct inode *inode, long pages)

 	if (shmem_acct_block(info->flags, pages))
 		return false;
+
+	if (sbinfo->max_blocks) {
+		if (percpu_counter_compare(&sbinfo->used_blocks,
+					   sbinfo->max_blocks - pages) > 0)
+			goto unacct;
+		percpu_counter_add(&sbinfo->used_blocks, pages);
+	}
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced += pages;
 	inode->i_blocks += pages * BLOCKS_PER_PAGE;
@@ -273,20 +281,11 @@ bool shmem_charge(struct inode *inode, long pages)
 	spin_unlock_irqrestore(&info->lock, flags);
 	inode->i_mapping->nrpages += pages;

-	if (!sbinfo->max_blocks)
-		return true;
-	if (percpu_counter_compare(&sbinfo->used_blocks,
-				sbinfo->max_blocks - pages) > 0) {
-		inode->i_mapping->nrpages -= pages;
-		spin_lock_irqsave(&info->lock, flags);
-		info->alloced -= pages;
-		shmem_recalc_inode(inode);
-		spin_unlock_irqrestore(&info->lock, flags);
-		shmem_unacct_blocks(info->flags, pages);
-		return false;
-	}
-	percpu_counter_add(&sbinfo->used_blocks, pages);
 	return true;
+
+unacct:
+	shmem_unacct_blocks(info->flags, pages);
+	return false;
 }

 void shmem_uncharge(struct inode *inode, long pages)