author | Wei Yang <richardw.yang@linux.intel.com> | 2019-06-04 14:17:27 +0800
committer | Juan Quintela <quintela@redhat.com> | 2019-06-05 12:44:03 +0200
commit | 03158519384f15890d587937bd1b3ea699898e55 (patch)
tree | 8b1c1b98b65a755b8a802eb1fc1278100e196975 /migration/ram.c
parent | 24d5588c86faa245220ea821c9bc7ec685ffa97c (diff)
migration/ram: leave RAMBlock->bmap blank on allocating
During migration, we sync the dirty bitmap from ram_list.dirty_memory into
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
Since both RAMBlock.bmap and ram_list.dirty_memory are initialized to all
ones, the sync in the first round is meaningless, duplicated work.
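As a rough illustration, here is a standalone toy model (not QEMU's actual cpu_physical_memory_sync_dirty_bitmap(); the toy_sync() helper, the TOY_PAGES size, and the array layout are invented for this sketch, and __builtin_popcountl is a GCC/Clang builtin): syncing an all-ones source into an all-ones destination flips no bits and reports zero newly dirtied pages, which is exactly the duplicated work described above.

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* Toy stand-ins: "global" models ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION],
 * "bmap" models RAMBlock.bmap.  256 pages is an arbitrary example size. */
#define TOY_PAGES 256
#define TOY_BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define TOY_LONGS (TOY_PAGES / TOY_BITS_PER_LONG)

/* OR the global dirty log into the block bitmap (fetching and clearing the
 * log, roughly like the real sync) and return how many bits went 0 -> 1. */
static unsigned long toy_sync(unsigned long *bmap, unsigned long *global)
{
    unsigned long newly_dirty = 0;

    for (size_t i = 0; i < TOY_LONGS; i++) {
        unsigned long new_bits = global[i] & ~bmap[i];

        bmap[i] |= global[i];
        global[i] = 0;
        newly_dirty += (unsigned long)__builtin_popcountl(new_bits);
    }
    return newly_dirty;
}

int main(void)
{
    unsigned long bmap[TOY_LONGS], global[TOY_LONGS];

    /* Old scheme: both bitmaps start out as all ones. */
    memset(bmap, 0xff, sizeof(bmap));
    memset(global, 0xff, sizeof(global));

    /* The first-round sync finds nothing new to do. */
    assert(toy_sync(bmap, global) == 0);
    return 0;
}
```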
Leaving RAMBlock->bmap blank on allocation has a side effect on
migration_dirty_pages, since that counter is derived from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it correct, we need to
initialize migration_dirty_pages to 0 in ram_state_init().
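A minimal sketch of the bookkeeping rule at stake (the toy_ram_state struct, toy_account_sync(), and the page count below are invented, not QEMU's API): the counter is maintained incrementally from sync results rather than recomputed, so its initial value has to agree with the number of bits already set in the freshly allocated bitmap, which is zero after this patch.

```c
#include <assert.h>

/* Invented toy state: tracks the invariant the commit preserves. */
struct toy_ram_state {
    unsigned long migration_dirty_pages; /* running counter */
    unsigned long bmap_set_bits;         /* bits currently set in the bmap */
};

/* A sync that newly dirties 'added' pages adjusts both sides together. */
static void toy_account_sync(struct toy_ram_state *rs, unsigned long added)
{
    rs->bmap_set_bits += added;
    rs->migration_dirty_pages += added;
}

int main(void)
{
    const unsigned long total_pages = 4096;   /* arbitrary example size */

    /* Old scheme: bmap all ones, counter pre-set to the total -- consistent. */
    struct toy_ram_state old_rs = { total_pages, total_pages };
    assert(old_rs.migration_dirty_pages == old_rs.bmap_set_bits);

    /* New scheme: bmap left blank, so only a counter of 0 stays consistent;
     * keeping the old "total pages" value here would over-count. */
    struct toy_ram_state new_rs = { 0, 0 };
    assert(new_rs.migration_dirty_pages == new_rs.bmap_set_bits);

    /* The first sync then pulls every page in from the global log and the
     * counter converges to the same total as before the patch. */
    toy_account_sync(&new_rs, total_pages);
    assert(new_rs.migration_dirty_pages == total_pages);
    return 0;
}
```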
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Diffstat (limited to 'migration/ram.c')
-rw-r--r-- | migration/ram.c | 18 |
1 file changed, 13 insertions, 5 deletions
diff --git a/migration/ram.c b/migration/ram.c
index 03a9cce9f9..082aea9d23 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3173,11 +3173,11 @@ static int ram_state_init(RAMState **rsp)
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
     /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
+     * This must match with the initial values of dirty bitmap.
+     * Currently we initialize the dirty bitmap to all zeros so
+     * here the total dirty page count is zero.
      */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3192,8 +3192,16 @@ static void ram_list_init_bitmaps(void)
     if (ram_bytes_total()) {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             pages = block->max_length >> TARGET_PAGE_BITS;
+            /*
+             * The initial dirty bitmap for migration must be set with all
+             * ones to make sure we'll migrate every guest RAM page to
+             * destination.
+             * Here we didn't set RAMBlock.bmap simply because it is already
+             * set in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
+             * ram_block_add, and that's where we'll sync the dirty bitmaps.
+             * Here setting RAMBlock.bmap would be fine too but not necessary.
+             */
             block->bmap = bitmap_new(pages);
-            bitmap_set(block->bmap, 0, pages);
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);
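One detail worth noting about the second hunk: dropping bitmap_set() is enough to leave the bitmap blank because QEMU's bitmap_new() returns zero-initialized memory (a g_malloc0-style allocation). Below is a tiny standalone sketch of that idea using plain calloc(); toy_bitmap_new() and the page count are invented for illustration and are not the real helper.

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Minimal stand-in for bitmap_new(): a zeroed allocation, so every page
 * starts out "not dirty" without any explicit bitmap_set() call. */
static unsigned long *toy_bitmap_new(long nbits)
{
    size_t bits_per_long = sizeof(unsigned long) * CHAR_BIT;
    size_t nlongs = ((size_t)nbits + bits_per_long - 1) / bits_per_long;

    return calloc(nlongs, sizeof(unsigned long));
}

int main(void)
{
    long pages = 1024;                      /* arbitrary example size */
    unsigned long *bmap = toy_bitmap_new(pages);

    assert(bmap != NULL);
    /* A freshly allocated bitmap is all zeros: no pages marked dirty yet. */
    for (long i = 0; i < pages / (long)(sizeof(unsigned long) * CHAR_BIT); i++) {
        assert(bmap[i] == 0);
    }
    free(bmap);
    return 0;
}
```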