Commit 347202dc authored by David Hildenbrand, committed by Michael S. Tsirkin

virtio-mem: more precise calculation in virtio_mem_mb_state_prepare_next_mb()

We actually need one byte less (next_mb_id is exclusive, first_mb_id is
inclusive). While at it, compact the code.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20201112133815.13332-3-david@redhat.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
parent 6725f211
@@ -257,10 +257,8 @@ static enum virtio_mem_mb_state virtio_mem_mb_get_state(struct virtio_mem *vm,
  */
 static int virtio_mem_mb_state_prepare_next_mb(struct virtio_mem *vm)
 {
-	unsigned long old_bytes = vm->next_mb_id - vm->first_mb_id + 1;
-	unsigned long new_bytes = vm->next_mb_id - vm->first_mb_id + 2;
-	int old_pages = PFN_UP(old_bytes);
-	int new_pages = PFN_UP(new_bytes);
+	int old_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id);
+	int new_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id + 1);
 	uint8_t *new_mb_state;

 	if (vm->mb_state && old_pages == new_pages)