author    Emilio G. Cota <cota@braap.org>    2017-08-05 02:04:40 -0400
committer Richard Henderson <richard.henderson@linaro.org>    2018-06-15 07:42:55 -1000
commit    ae5486e273a4e368515a963a6d0076e20453eb72 (patch)
tree      764f8fa694532aa155c9b153ef106fc1441ae50d /accel/tcg/translate-all.c
parent    94da9aec2a50f0c82e6c60939275c0337f03d5fe (diff)
translate-all: work page-by-page in tb_invalidate_phys_range_1
So that we pass a same-page range to tb_invalidate_phys_page_range, instead of always passing an end address that could be on a different page.

As discussed with Peter Maydell on the list [1], tb_invalidate_phys_page_range doesn't actually do much with 'end', which explains why we have never hit a bug despite going against what the comment on top of tb_invalidate_phys_page_range requires:

> * Invalidate all TBs which intersect with the target physical address range
> * [start;end[. NOTE: start and end must refer to the *same* physical page.

The appended patch honours the comment, which avoids confusion. While at it, rework the loop into a for loop, which is less error prone (e.g. "continue" won't result in an infinite loop).

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg09165.html

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Diffstat (limited to 'accel/tcg/translate-all.c')
-rw-r--r--  accel/tcg/translate-all.c  12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index b9c36a3e45..f32904d4a3 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1375,10 +1375,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
  */
 static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
 {
-    while (start < end) {
-        tb_invalidate_phys_page_range(start, end, 0);
-        start &= TARGET_PAGE_MASK;
-        start += TARGET_PAGE_SIZE;
+    tb_page_addr_t next;
+
+    for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
+         start < end;
+         start = next, next += TARGET_PAGE_SIZE) {
+        tb_page_addr_t bound = MIN(next, end);
+
+        tb_invalidate_phys_page_range(start, bound, 0);
     }
 }
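
For reference, below is a minimal standalone sketch (not part of QEMU) of the same page-by-page splitting idea. It assumes a fixed 4 KiB page size; PAGE_SIZE, PAGE_MASK, invalidate_page_range() and invalidate_range() are made-up stand-ins for TARGET_PAGE_SIZE, TARGET_PAGE_MASK, tb_invalidate_phys_page_range() and tb_invalidate_phys_range_1(), with the real invalidation replaced by a printf.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Stand-in for tb_invalidate_phys_page_range(): start and end always
 * lie on the same page here, so this just prints the sub-range. */
static void invalidate_page_range(uint64_t start, uint64_t end)
{
    printf("invalidate [0x%" PRIx64 "; 0x%" PRIx64 "[\n", start, end);
}

/* Same loop shape as the reworked tb_invalidate_phys_range_1():
 * 'next' is the start of the page following 'start', and each call
 * receives a range clamped to the current page. */
static void invalidate_range(uint64_t start, uint64_t end)
{
    uint64_t next;

    for (next = (start & PAGE_MASK) + PAGE_SIZE;
         start < end;
         start = next, next += PAGE_SIZE) {
        uint64_t bound = MIN(next, end);

        invalidate_page_range(start, bound);
    }
}

int main(void)
{
    /* A range that starts mid-page and ends two pages later. */
    invalidate_range(0x1f80, 0x3040);
    return 0;
}

With a 4 KiB page this prints three same-page sub-ranges, [0x1f80; 0x2000[, [0x2000; 0x3000[ and [0x3000; 0x3040[, which is exactly what the comment on tb_invalidate_phys_page_range requires of its callers.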