author     Daniel Henrique Barboza <danielhb413@gmail.com>  2021-01-28 14:42:13 -0300
committer  David Gibson <david@gibson.dropbear.id.au>       2021-02-10 10:43:50 +1100
commit     b01fec3659f7e595d5066fc052fb31a94a8a969b
tree       cc5001ea7c31eba5e479aea81f1b04a4c2fa7ba7
parent     6640706972c50aac4f620d7385d4e228a118e289
spapr_numa.c: fix ibm,max-associativity-domains calculation
The current logic for calculating 'maxdomain' makes it the sum of
numa_state->num_nodes and spapr->gpu_numa_id. spapr->gpu_numa_id is
used as an index to determine the next available NUMA id that a
given NVGPU can use.
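Concretely, this is the pre-patch computation (the line removed in the
diff below):

    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;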
The problem is that the initial value of gpu_numa_id, for any topology
with more than one NUMA node, is equal to numa_state->num_nodes. As a
result, maxdomain is always at least twice the number of existing NUMA
nodes, and a guest with 4 NUMA nodes ends up with the following
max-associativity-domains:
rtas/ibm,max-associativity-domains
00000004 00000008 00000008 00000008 00000008
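In this property the first cell is the number of associativity levels
(the cpu_to_be32(4) written first in the code below) and each following
cell is the maximum domain count at that level. The 8s fall straight
out of the pre-patch arithmetic:

    /* 4-node topology, pre-patch:
     *   gpu_numa_id starts at num_nodes       = 4
     *   maxdomain = num_nodes + gpu_numa_id   = 4 + 4 = 8
     * so every level reports 8 domains instead of 4.
     */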
This overtuning of maxdomain does not go unnoticed in the guest: SLUB
detects it during boot:
dmesg | grep SLUB
[ 0.000000] SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=4, Nodes=8
SLUB detects 8 total nodes, with only 4 of them online.
This patch fixes ibm,max-associativity-domains by considering only the
number of NVGPU NUMA nodes present in the guest, instead of the whole
spapr->gpu_numa_id value.
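A sketch of the post-patch logic (taken from the diff below), assuming,
as the message above implies, that gpu_numa_id is initialized to the
first NUMA id reserved for NVGPUs and bumped once per NVGPU: the
subtraction yields the number of NUMA ids actually handed out to
NVGPUs, which is 0 when no NVGPU is present:

    uint32_t number_nvgpus_nodes = spapr->gpu_numa_id -
                                   spapr_numa_initial_nvgpu_numa_id(ms);
    /* maxdomain now only grows by the NUMA ids assigned to NVGPUs */
    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;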
Reported-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210128174213.1349181-4-danielhb413@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Diffstat (limited to 'hw/ppc')
 hw/ppc/spapr_numa.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index a757dd88b8..779f18b994 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -311,6 +311,8 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
     MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
+    uint32_t number_nvgpus_nodes = spapr->gpu_numa_id -
+                                   spapr_numa_initial_nvgpu_numa_id(ms);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
         cpu_to_be32(0x3),
@@ -318,7 +320,7 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
         cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
+    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;
     uint32_t maxdomains[] = {
         cpu_to_be32(4),
         cpu_to_be32(maxdomain),