
migration: Count new_dirty instead of real_dirty

real_dirty_pages becomes equal to the total RAM size after the dirty
log sync in ram_init_bitmaps. The reason is that each ramblock's
bitmap is initialized to all set, so the old path counts every page
as "real dirty" at the beginning.

This causes wrong dirty rate and false positive throttling.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Message-Id: <20200622032037.31112-1-zhukeqian1@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Authored by Keqian Zhu, committed by Dr. David Alan Gilbert
fb613580 617a32f5

+6 -7

include/exec/ram_addr.h (+1 -4)

@@ -442,8 +442,7 @@
 static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
-                                               ram_addr_t length,
-                                               uint64_t *real_dirty_pages)
+                                               ram_addr_t length)
 {
     ram_addr_t addr;
     unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
@@ -469,7 +468,6 @@
             if (src[idx][offset]) {
                 unsigned long bits = atomic_xchg(&src[idx][offset], 0);
                 unsigned long new_dirty;
-                *real_dirty_pages += ctpopl(bits);
                 new_dirty = ~dest[k];
                 dest[k] |= bits;
                 new_dirty &= bits;
@@ -502,7 +500,6 @@
                                         start + addr + offset,
                                         TARGET_PAGE_SIZE,
                                         DIRTY_MEMORY_MIGRATION)) {
-                *real_dirty_pages += 1;
                 long k = (start + addr) >> TARGET_PAGE_BITS;
                 if (!test_and_set_bit(k, dest)) {
                     num_dirty++;
migration/ram.c (+5 -3)

@@ -859,9 +859,11 @@
 /* Called with RCU critical section */
 static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
 {
-    rs->migration_dirty_pages +=
-        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length,
-                                              &rs->num_dirty_pages_period);
+    uint64_t new_dirty_pages =
+        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
+
+    rs->migration_dirty_pages += new_dirty_pages;
+    rs->num_dirty_pages_period += new_dirty_pages;
 }

 /**