author     Michal Privoznik <mprivozn@redhat.com>    2022-12-15 10:55:03 +0100
committer  David Hildenbrand <david@redhat.com>      2022-12-28 14:59:55 +0100
commit     6bb613f0812d1364fc8fcf0846647446884d5148 (patch)
tree       ebd4b3ec1817adccfb206581f30db168e8ee0909 /backends
parent     82ba778e1329f6fb89a23d728c680656698d275a (diff)
hostmem: Honor multiple preferred nodes if possible
If a memory-backend is configured with mode
HOST_MEM_POLICY_PREFERRED, then
host_memory_backend_memory_complete() calls mbind() as:
mbind(..., MPOL_PREFERRED, nodemask, ...);
Here, 'nodemask' is a bitmap of host NUMA nodes and corresponds
to the .host-nodes attribute. Therefore, there can be multiple
nodes specified. However, the documentation for MPOL_PREFERRED
says:
MPOL_PREFERRED
This mode sets the preferred node for allocation. ...
If nodemask specifies more than one node ID, the first node
in the mask will be selected as the preferred node.
Therefore, only the first node is honored and the rest are
silently ignored. Well, with recent changes to the kernel and
numactl, we can do better.
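For illustration only (this is not QEMU code), a minimal standalone
sketch of the behaviour described above, assuming a host with at least
NUMA nodes 0 and 1 and linking against libnuma (-lnuma):

  #include <numaif.h>
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t sz = 2 * 1024 * 1024;
      void *ptr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      /* ask the kernel to prefer host nodes 0 and 1 */
      unsigned long nodemask = (1UL << 0) | (1UL << 1);

      if (ptr == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      /*
       * Succeeds, but only node 0 ends up preferred; node 1 is dropped
       * without any error.  maxnode is the highest node + 2, mirroring
       * the maxnode + 1 that QEMU passes to work around an old kernel
       * off-by-one.
       */
      if (mbind(ptr, sz, MPOL_PREFERRED, &nodemask, 3, 0) != 0) {
          perror("mbind");
          return 1;
      }
      return 0;
  }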
In v5.15, via commit cfcaa66f8032 ("mm/hugetlb: add support for
mempolicy MPOL_PREFERRED_MANY"), the Linux kernel gained support
for MPOL_PREFERRED_MANY, which accepts multiple preferred NUMA
nodes instead.
Then, the numa_has_preferred_many() API was introduced in numactl
(v2.0.15~26), allowing applications to query kernel support.
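As a sketch of that detection pattern (assuming libnuma >= 2.0.15;
MPOL_PREFERRED_MANY may still be missing from older <numaif.h>, hence
the guarded fallback definition):

  #include <numa.h>
  #include <numaif.h>

  #ifndef MPOL_PREFERRED_MANY
  #define MPOL_PREFERRED_MANY 5   /* value used by recent linux/mempolicy.h */
  #endif

  /* Pick the mbind() mode for a "preferred" policy at runtime. */
  static int preferred_mode(void)
  {
      if (numa_has_preferred_many() > 0) {
          return MPOL_PREFERRED_MANY;  /* kernel honors the whole nodemask */
      }
      return MPOL_PREFERRED;           /* kernel uses only the first node */
  }

The hunk below applies the same idea inside
host_memory_backend_memory_complete(), guarded by a build-time
HAVE_NUMA_HAS_PREFERRED_MANY check for the libnuma API.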
Wiring this all together, we can pass MPOL_PREFERRED_MANY to the
mbind() call instead and stop silently ignoring the additional nodes.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <a0b4adce1af5bd2344c2218eb4a04b3ff7bcfdb4.1671097918.git.mprivozn@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Diffstat (limited to 'backends')
-rw-r--r--   backends/hostmem.c | 19
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/backends/hostmem.c b/backends/hostmem.c
index 8640294c10..747e7838c0 100644
--- a/backends/hostmem.c
+++ b/backends/hostmem.c
@@ -23,7 +23,12 @@
 #ifdef CONFIG_NUMA
 #include <numaif.h>
+#include <numa.h>
 QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_DEFAULT != MPOL_DEFAULT);
+/*
+ * HOST_MEM_POLICY_PREFERRED may either translate to MPOL_PREFERRED or
+ * MPOL_PREFERRED_MANY, see comments further below.
+ */
 QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_PREFERRED != MPOL_PREFERRED);
 QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_BIND != MPOL_BIND);
 QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
 #endif
@@ -346,6 +351,7 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
          * before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
          * this doesn't catch hugepage case. */
         unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
+        int mode = backend->policy;
 
         /* check for invalid host-nodes and policies and give more verbose
          * error messages than mbind(). */
@@ -369,9 +375,18 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
                BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
         assert(maxnode <= MAX_NODES);
 
+#ifdef HAVE_NUMA_HAS_PREFERRED_MANY
+        if (mode == MPOL_PREFERRED && numa_has_preferred_many() > 0) {
+            /*
+             * Replace with MPOL_PREFERRED_MANY otherwise the mbind() below
+             * silently picks the first node.
+             */
+            mode = MPOL_PREFERRED_MANY;
+        }
+#endif
+
         if (maxnode &&
-            mbind(ptr, sz, backend->policy, backend->host_nodes, maxnode + 1,
-                  flags)) {
+            mbind(ptr, sz, mode, backend->host_nodes, maxnode + 1, flags)) {
             if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
                 error_setg_errno(errp, errno,
                                  "cannot bind memory to host NUMA nodes");
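With this in place, a backend that lists several preferred host nodes
can actually make use of all of them on a new enough kernel. A
hypothetical invocation (the node numbers and the 0-1 range syntax for
the host-nodes list are illustrative assumptions, not taken from this
patch):

  qemu-system-x86_64 ... \
    -object memory-backend-ram,id=ram0,size=4G,policy=preferred,host-nodes=0-1 \
    -numa node,nodeid=0,memdev=ram0

On kernels without MPOL_PREFERRED_MANY, or when QEMU is built against a
libnuma that lacks numa_has_preferred_many(), the mbind() call keeps the
old MPOL_PREFERRED behaviour and only the first listed node is preferred.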