author     David Gibson <david@gibson.dropbear.id.au>   2019-11-28 16:37:04 +1100
committer  David Gibson <david@gibson.dropbear.id.au>   2020-03-17 09:41:15 +1100
commit     8897ea5a9fc0aafa5ed7eee1e0c49893b91a2d87
tree       a3de19166c0cbb7774f4520389b6d9c77272ffa1 /include
parent     6a84737c80b3febb093e066d451a44e61b54159a
spapr: Don't attempt to clamp RMA to VRMA constraint
The Real Mode Area (RMA) is the part of memory which a guest can access
when in real (MMU off) mode. Of course, for a guest under KVM, the MMU
isn't really turned off; it's just in a special translation mode - Virtual
Real Mode Area (VRMA) - which looks like real mode in guest mode.
The mechanics of how this works when using the hash MMU (HPT) put a
constraint on the size of the RMA, which depends on the size of the
HPT. So, the latter part of spapr_setup_hpt_and_vrma() clamps the RMA
we advertise to the guest based on this VRMA limit.
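For reference, the clamping logic being replaced behaved roughly like this
(a paraphrased sketch of the behaviour described above, not the exact QEMU
code; the field and helper names here are illustrative):

    /* Old tail of spapr_setup_hpt_and_vrma(), paraphrased */
    if (kvm_enabled()) {
        hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);

        /* Takes the minimum of Node 0 size and the VRMA limit, rather
         * than clamping the already-computed rma_size (see point 1) */
        spapr->rma_size = MIN(spapr->node0_size, vrma_limit);
    }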
There are several things wrong with this:
1) spapr_setup_hpt_and_vrma() doesn't actually clamp; it takes the minimum
of the Node 0 memory size and the VRMA limit. That will *often* work out
the same as clamping, but there can be other constraints on RMA size which
supersede the Node 0 memory size. We have real bugs caused by this
(currently worked around in the guest kernel).
2) Some callers of spapr_setup_hpt_and_vrma() are in a situation where
we're past the point where we can actually advertise an RMA limit to the
guest.
3) But most fundamentally, the VRMA limit depends on host configuration
(page size), which shouldn't be visible to the guest, but this partially
exposes it. This can cause problems with migration in certain edge
cases, although we will mostly get away with it.
In practice, this clamping is almost never applied anyway. With 64kiB
pages and the normal rules for sizing of the HPT, the theoretical VRMA
limit will be 4x (guest memory size) and so is never hit. It can be hit
with 4kiB pages, where the limit is (guest memory size)/4. However, all
mainstream distro kernels for POWER have used a 64kiB page size for at
least 10 years.
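The arithmetic behind those numbers: with the default HPT sizing (HPT
bytes = guest RAM / 128, in 128-byte PTEGs) and assuming the VRMA can
cover roughly one page per PTEG, the limit scales with page size as below.
That per-PTEG assumption is illustrative, not quoted from the KVM source:

    /* Hypothetical worked example of the VRMA limit scaling */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ram = 16ULL << 30;             /* 16 GiB guest RAM */
        uint64_t nr_ptegs = (ram / 128) / 128;  /* default HPT sizing */

        /* one page mappable per PTEG (assumed) */
        printf("64kiB pages: %llu GiB (4x RAM)\n",
               (unsigned long long)((nr_ptegs * 65536) >> 30));
        printf("4kiB pages:  %llu GiB (RAM/4)\n",
               (unsigned long long)((nr_ptegs * 4096) >> 30));
        return 0;
    }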
So, simply replace this logic with a check that the RMA we've calculated
based only on guest-visible configuration will fit within the host-implied
VRMA limit (sketched after the list below). This can break when running
HPT guests on a host kernel with a 4kiB page size. As noted, that's very
rare. There also exist several possible workarounds:
* Change the host kernel to use 64kiB pages
* Use radix MMU (RPT) guests instead of HPT
* Use 64kiB hugepages on the host to back guest memory
* Increase the guest memory size so that the RMA hits one of the fixed
limits before the VRMA limit. This is relatively easy on POWER8 which
has a 16GiB limit, harder on POWER9 which has a 1TiB limit.
* Use a guest NUMA configuration which artificially constrains the RMA
within the VRMA limit (the RMA must always fit within Node 0).
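The replacement check amounts to something like the following sketch
(based on the description above; exact error wording and helper names may
differ):

    /* Sketch: fail fast if the guest-visible RMA exceeds the
     * host-implied VRMA limit, instead of silently shrinking it */
    if (kvm_enabled()) {
        hwaddr vrma_limit = kvmppc_vrma_limit(spapr->htab_shift);

        if (spapr->rma_size > vrma_limit) {
            error_report("Unable to create %" HWADDR_PRIu "MiB RMA "
                         "(VRMA only allows %" HWADDR_PRIu "MiB)",
                         spapr->rma_size / MiB, vrma_limit / MiB);
            exit(EXIT_FAILURE);
        }
    }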
Previously, on KVM, we also temporarily reduced the rma_size to 256M so
that we'd load the kernel and initrd safely, regardless of the VRMA
limit. This a) was confusing, b) could significantly limit the size of
images we could load and c) introduced a behavioural difference between
KVM and TCG. So we remove that as well.
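The removed reduction was along these lines (an illustrative sketch of the
behaviour described in this paragraph, not the exact removed code):

    /* Old KVM-only workaround, now removed: shrink the advertised RMA
     * so the kernel and initrd would always load within it */
    if (kvm_enabled()) {
        spapr->rma_size = MIN(spapr->rma_size, 256 * MiB);
    }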
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Greg Kurz <groug@kaod.org>
Diffstat (limited to 'include')

-rw-r--r--  include/hw/ppc/spapr.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index a4216935a1..90dbc55931 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -156,7 +156,6 @@ struct SpaprMachineState {
     SpaprPendingHpt *pending_hpt; /* in-progress resize */
 
     hwaddr rma_size;
-    int vrma_adjust;
     uint32_t fdt_size;
     uint32_t fdt_initial_size;
     void *fdt_blob;
@@ -795,7 +794,7 @@ void *spapr_build_fdt(SpaprMachineState *spapr, bool reset, size_t space);
 void spapr_events_init(SpaprMachineState *sm);
 void spapr_dt_events(SpaprMachineState *sm, void *fdt);
 void close_htab_fd(SpaprMachineState *spapr);
-void spapr_setup_hpt_and_vrma(SpaprMachineState *spapr);
+void spapr_setup_hpt(SpaprMachineState *spapr);
 void spapr_free_hpt(SpaprMachineState *spapr);
 SpaprTceTable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn);
 void spapr_tce_table_enable(SpaprTceTable *tcet,