author:    Andrew Jones <drjones@redhat.com>                  2021-10-20 22:21:20 +0800
committer: Richard Henderson <richard.henderson@linaro.org>   2021-10-20 18:17:54 -0700
commit:    72b0527ff6ab09502e0b320050eb086fc4d61d26
tree:      b785430e45cdb565daff66f55a5575dae61b2782 /hw/arm
parent:    b863f0b75852dfd62b3f31b08eeddd3b03694fc2
hw/arm/virt: Add cpu-map to device tree
Support device tree CPU topology descriptions.
In accordance with the Devicetree Specification, the Linux
binding doc "arm/cpus.yaml" requires the cpus and cpu nodes
to be present in the DT. We already meet that requirement by
generating a /cpus/cpu@* node for each CPU in ms->smp.cpus.
Accordingly, we should also create a cpu-map subnode for each
present CPU, with each subnode referring to a unique cpu node.
The Linux doc "cpu/cpu-topology.txt" states that the hierarchy
of CPUs in an SMP system is defined through four entities:
socket, cluster, core and thread. It also requires that a
socket node's children be one or more cluster nodes. Given
that we currently only have socket/core/thread information,
we generate a single cluster child node in each socket node
when creating cpu-map.
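
For illustration, here is a minimal, self-contained C sketch (not part of
the patch) that reproduces the cpu-map path arithmetic used below in
fdt_add_cpu_nodes(); the -smp topology values (2 sockets, 2 cores per
socket, 2 threads per core) are an arbitrary example, not taken from the
patch:

    #include <stdio.h>

    int main(void)
    {
        /* Example -smp topology: 2 sockets x 2 cores x 2 threads = 8 vCPUs. */
        const int sockets = 2, cores = 2, threads = 2;
        const int smp_cpus = sockets * cores * threads;
        int cpu;

        for (cpu = 0; cpu < smp_cpus; cpu++) {
            if (threads > 1) {
                printf("cpu@%d -> /cpus/cpu-map/socket%d/cluster0/core%d/thread%d\n",
                       cpu,
                       cpu / (cores * threads),   /* socket index */
                       (cpu / threads) % cores,   /* core index within the socket */
                       cpu % threads);            /* thread index within the core */
            } else {
                printf("cpu@%d -> /cpus/cpu-map/socket%d/cluster0/core%d\n",
                       cpu,
                       cpu / cores,               /* socket index */
                       cpu % cores);              /* core index within the socket */
            }
        }
        return 0;
    }

With this example topology, cpu 0 maps to
/cpus/cpu-map/socket0/cluster0/core0/thread0 and cpu 7 maps to
/cpus/cpu-map/socket1/cluster0/core1/thread1.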
Co-developed-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Message-Id: <20211020142125.7516-4-wangyanan55@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Diffstat (limited to 'hw/arm')
-rw-r--r-- | hw/arm/virt.c | 70
1 file changed, 60 insertions, 10 deletions
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 15e8d8cf4a..ca433adb5b 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -351,20 +351,21 @@ static void fdt_add_cpu_nodes(const VirtMachineState *vms)
     int cpu;
     int addr_cells = 1;
     const MachineState *ms = MACHINE(vms);
+    const VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
     int smp_cpus = ms->smp.cpus;
 
     /*
-     * From Documentation/devicetree/bindings/arm/cpus.txt
-     *  On ARM v8 64-bit systems value should be set to 2,
-     *  that corresponds to the MPIDR_EL1 register size.
-     *  If MPIDR_EL1[63:32] value is equal to 0 on all CPUs
-     *  in the system, #address-cells can be set to 1, since
-     *  MPIDR_EL1[63:32] bits are not used for CPUs
-     *  identification.
+     * See Linux Documentation/devicetree/bindings/arm/cpus.yaml
+     * On ARM v8 64-bit systems value should be set to 2,
+     * that corresponds to the MPIDR_EL1 register size.
+     * If MPIDR_EL1[63:32] value is equal to 0 on all CPUs
+     * in the system, #address-cells can be set to 1, since
+     * MPIDR_EL1[63:32] bits are not used for CPUs
+     * identification.
      *
-     *  Here we actually don't know whether our system is 32- or 64-bit one.
-     *  The simplest way to go is to examine affinity IDs of all our CPUs. If
-     *  at least one of them has Aff3 populated, we set #address-cells to 2.
+     * Here we actually don't know whether our system is 32- or 64-bit one.
+     * The simplest way to go is to examine affinity IDs of all our CPUs. If
+     * at least one of them has Aff3 populated, we set #address-cells to 2.
      */
     for (cpu = 0; cpu < smp_cpus; cpu++) {
         ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(cpu));
@@ -407,8 +408,57 @@ static void fdt_add_cpu_nodes(const VirtMachineState *vms)
                 ms->possible_cpus->cpus[cs->cpu_index].props.node_id);
         }
 
+        if (!vmc->no_cpu_topology) {
+            qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle",
+                                  qemu_fdt_alloc_phandle(ms->fdt));
+        }
+
         g_free(nodename);
     }
+
+    if (!vmc->no_cpu_topology) {
+        /*
+         * Add vCPU topology description through fdt node cpu-map.
+         *
+         * See Linux Documentation/devicetree/bindings/cpu/cpu-topology.txt
+         * In a SMP system, the hierarchy of CPUs can be defined through
+         * four entities that are used to describe the layout of CPUs in
+         * the system: socket/cluster/core/thread.
+         *
+         * A socket node represents the boundary of system physical package
+         * and its child nodes must be one or more cluster nodes. A system
+         * can contain several layers of clustering within a single physical
+         * package and cluster nodes can be contained in parent cluster nodes.
+         *
+         * Given that cluster is not yet supported in the vCPU topology,
+         * we currently generate one cluster node within each socket node
+         * by default.
+         */
+        qemu_fdt_add_subnode(ms->fdt, "/cpus/cpu-map");
+
+        for (cpu = smp_cpus - 1; cpu >= 0; cpu--) {
+            char *cpu_path = g_strdup_printf("/cpus/cpu@%d", cpu);
+            char *map_path;
+
+            if (ms->smp.threads > 1) {
+                map_path = g_strdup_printf(
+                    "/cpus/cpu-map/socket%d/cluster0/core%d/thread%d",
+                    cpu / (ms->smp.cores * ms->smp.threads),
+                    (cpu / ms->smp.threads) % ms->smp.cores,
+                    cpu % ms->smp.threads);
+            } else {
+                map_path = g_strdup_printf(
+                    "/cpus/cpu-map/socket%d/cluster0/core%d",
+                    cpu / ms->smp.cores,
+                    cpu % ms->smp.cores);
+            }
+            qemu_fdt_add_path(ms->fdt, map_path);
+            qemu_fdt_setprop_phandle(ms->fdt, map_path, "cpu", cpu_path);
+
+            g_free(map_path);
+            g_free(cpu_path);
+        }
+    }
 }
 
 static void fdt_add_its_gic_node(VirtMachineState *vms)
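
To inspect the result on a build that includes this patch, one possible workflow (an assumption about usage, not taken from the patch) is to dump the generated device tree with -machine virt,dumpdtb=virt.dtb and decompile it with dtc -I dtb -O dts virt.dtb. For a guest started with -smp 8,sockets=2,cores=2,threads=2, the /cpus/cpu-map subtree should then contain two socket nodes, each holding a single cluster0 node with two core nodes of two thread nodes, and each leaf's cpu property should carry the phandle of the corresponding /cpus/cpu@N node.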