From 03bfc2188f061aa8381403f9280555f4e22c35a2 Mon Sep 17 00:00:00 2001
From: Nicholas Piggin
Date: Tue, 12 Mar 2024 21:14:58 +0100
Subject: physmem: Fix migration dirty bitmap coherency with TCG memory access
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The fastpath in cpu_physical_memory_sync_dirty_bitmap() to test large
aligned ranges forgot to bring the TCG TLB up to date after clearing
some of the dirty memory bitmap bits. This can result in stores through
the TCG TLB not setting the dirty memory bitmap, which ultimately causes
memory corruption / lost updates during migration from a TCG host.

Fix this by calling cpu_physical_memory_dirty_bits_cleared() when dirty
bits have been cleared.

Fixes: aa8dc044772 ("migration: synchronize memory bitmap 64bits at a time")
Signed-off-by: Nicholas Piggin
Tested-by: Thomas Huth
Message-ID: <20240219061731.232570-1-npiggin@gmail.com>
[PMD: Split patch in 2: part 2/2, slightly adapt description]
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
Link: https://lore.kernel.org/r/20240312201458.79532-4-philmd@linaro.org
Signed-off-by: Peter Xu
---
 include/exec/ram_addr.h | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'include')

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index b060ea9176..de45ba7bc9 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -513,6 +513,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                 idx++;
             }
         }
+        if (num_dirty) {
+            cpu_physical_memory_dirty_bits_cleared(start, length);
+        }
 
         if (rb->clear_bmap) {
             /*
-- 
cgit v1.2.3
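
Editor's note: the cpu_physical_memory_dirty_bits_cleared() helper used in the
hunk above is the one factored out in part 1/2 of this series; its definition is
not shown in this patch. A minimal sketch of what such a helper is expected to
do, based only on the commit description (the exact body and file location are
assumptions, not taken from this patch), could look like:

    /*
     * Sketch only, not the verbatim QEMU code. When TCG is enabled,
     * clearing bits in the dirty memory bitmap must be paired with a
     * TLB update, so that later guest stores to the range take the
     * slow path again and re-mark the pages dirty in the bitmap.
     */
    void cpu_physical_memory_dirty_bits_cleared(ram_addr_t start,
                                                ram_addr_t length)
    {
        if (tcg_enabled()) {
            /* Re-arm dirty tracking in the TCG TLBs for this range. */
            tlb_reset_dirty_range_all(start, length);
        }
    }

The point of the fix is that the dirty bitmap and the TCG TLB's view of which
pages are already dirty have to be updated together: the existing non-aligned
path already did this, while the word-at-a-time fastpath cleared bitmap bits
without telling the TLB, which is the coherency bug described above.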