path: root/hw/i386
Age | Commit message | Author
2022-11-07Merge tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu ↵Stefan Hajnoczi
into staging pci,pc,virtio: features, tests, fixes, cleanups lots of acpi rework first version of biosbits infrastructure ASID support in vhost-vdpa core_count2 support in smbios PCIe DOE emulation virtio vq reset HMAT support part of infrastructure for viommu support in vhost-vdpa VTD PASID support fixes, tests all over the place Signed-off-by: Michael S. Tsirkin <mst@redhat.com> # -----BEGIN PGP SIGNATURE----- # # iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmNpXDkPHG1zdEByZWRo # YXQuY29tAAoJECgfDbjSjVRpD0AH/2G8ZPrgrxJC9y3uD5/5J6QRzO+TsDYbg5ut # uBf4rKSHHzcu6zdyAfsrhbAKKzyD4HrEGNXZrBjnKM1xCiB/SGBcDIWntwrca2+s # 5Dpbi4xvd4tg6tVD4b47XNDCcn2uUbeI0e2M5QIbtCmzdi/xKbFAfl5G8DQp431X # Kmz79G4CdKWyjVlM0HoYmdCw/4FxkdjD02tE/Uc5YMrePNaEg5Bw4hjCHbx1b6ur # 6gjeXAtncm9s4sO0l+sIdyiqlxiTry9FSr35WaQ0qPU+Og5zaf1EiWfdl8TRo4qU # EAATw5A4hyw11GfOGp7oOVkTGvcNB/H7aIxD7emdWZV8+BMRPKo= # =zTCn # -----END PGP SIGNATURE----- # gpg: Signature made Mon 07 Nov 2022 14:27:53 EST # gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469 # gpg: issuer "mst@redhat.com" # gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full] # gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [full] # Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67 # Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469 * tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (83 commits) checkpatch: better pattern for inline comments hw/virtio: introduce virtio_device_should_start tests/acpi: update tables for new core count test bios-tables-test: add test for number of cores > 255 tests/acpi: allow changes for core_count2 test bios-tables-test: teach test to use smbios 3.0 tables hw/smbios: add core_count2 to smbios table type 4 vhost-user: Support vhost_dev_start vhost: Change the sequence of device start intel-iommu: PASID support intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a function intel-iommu: drop VTDBus intel-iommu: don't warn guest errors when getting rid2pasid entry vfio: move implement of vfio_get_xlat_addr() to memory.c tests: virt: Update expected *.acpihmatvirt tables tests: acpi: aarch64/virt: add a test for hmat nodes with no initiators hw/arm/virt: Enable HMAT on arm virt machine tests: Add HMAT AArch64/virt empty table files tests: acpi: q35: update expected blobs *.hmat-noinitiators expected HMAT: tests: acpi: q35: add test for hmat nodes without initiators ... Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-11-07intel-iommu: PASID supportJason Wang
This patch introduces ECAP_PASID via "x-pasid-mode". Based on the existing support for scalable mode, we need to implement the following missing parts: 1) tag VTDAddressSpace with PASID and support IOMMU/DMA translation with PASID 2) tag IOTLB with PASID 3) PASID cache and its flush 4) PASID based IOTLB invalidation For simplicity the PASID cache is not implemented, so the PASID cache flush is a no-op and is left to be implemented in the future. For PASID based IOTLB invalidation, since we don't have L1 stage support yet, the PASID based IOTLB invalidation is not implemented either. PASID based device IOTLB invalidation requires support from vhost, so enabling device IOTLB is forbidden when PASID is enabled for now. This work could be done in the future. Note that although PASID based IOMMU translation is ready, no device can issue PASID DMA right now. In this case, PCI_NO_PASID is used as the PASID to identify an address space without PASID. vtd_find_add_as() has been extended to provision an address space with a PASID, which could be utilized by a future extension of the PCI core to allow device models to use PASID based DMA translation. This feature would be useful for: 1) prototyping PASID support for devices like virtio 2) future vPASID work 3) future PRS and vSVA work Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221028061436.30093-5-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
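As a rough, hedged sketch of the idea (the exact signature and call sites in hw/i386/intel_iommu.c may differ; treat the details below as assumptions rather than the literal patch): the lookup/creation helper grows a PASID parameter, with PCI_NO_PASID standing in for requests that carry no PASID.

    /* Illustrative sketch: the address-space lookup/creation helper now also
     * takes a PASID, so each (bus, devfn, pasid) tuple can get its own
     * VTDAddressSpace. */
    VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus,
                                     int devfn, uint32_t pasid);

    /* A requester-id-only DMA resolves to the classic per-device space... */
    vtd_as = vtd_find_add_as(s, bus, devfn, PCI_NO_PASID);
    /* ...while a (hypothetical, since no device issues it yet) PASID-tagged
     * DMA would resolve to a PASID-specific space. */
    vtd_as = vtd_find_add_as(s, bus, devfn, pasid);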
2022-11-07intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a functionJason Wang
We used to have a macro for VTD_PE_GET_FPD_ERR(), but it has an internal goto which prevents it from being reused. This patch converts that macro to a dedicated function and lets the caller decide what to do (e.g. using goto or not). This makes sure it can be re-used by other functions that require fault reporting. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221028061436.30093-4-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Yi Liu <yi.l.liu@intel.com>
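The refactoring pattern in the abstract (illustrative names only, including report_fault(); this is not the literal VT-d code): a macro that embeds a goto can only be expanded where a matching label exists, whereas a function that returns the fault status lets each caller choose how to bail out.

    /* Before: usable only where an "error" label is in scope. */
    #define GET_FPD_ERR(cond)      \
        do {                       \
            if (cond) {            \
                report_fault();    \
                goto error;        \
            }                      \
        } while (0)

    /* After: the caller decides whether to goto, return, or continue. */
    static int vtd_get_fpd_err(bool cond)
    {
        if (cond) {
            report_fault();
            return -1;
        }
        return 0;
    }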
2022-11-07intel-iommu: drop VTDBusJason Wang
The VTDBus structure was introduced as an intermediate step for searching the address space. This works well with SID based matching/lookup. But when we want to support SID plus PASID based address space lookup, this intermediate step turns out to be a burden. So the patch simply drops the VTDBus structure and uses the PCIBus and devfn as the key for the g_hash_table(). This simplifies the code and the future PASID extension. To avoid being slower for existing vtd_find_as_from_bus_num() callers, a vtd_as cache indexed by the bus number is introduced to store the most recent search result for a vtd_as belonging to a specific bus. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221028061436.30093-3-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Yi Liu <yi.l.liu@intel.com>
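A hedged sketch of the resulting structure (the key struct, cache field and slow-path helper names are assumptions, not the literal intel_iommu.c code): address spaces are hashed directly by their (PCIBus *, devfn) pair, and a small per-bus-number cache keeps vtd_find_as_from_bus_num() fast.

    /* Illustrative hash key with VTDBus gone. */
    struct vtd_as_key {
        PCIBus *bus;
        uint8_t devfn;
    };

    static VTDAddressSpace *vtd_find_as_from_bus_num(IntelIOMMUState *s,
                                                     uint8_t bus_num)
    {
        /* 256-entry cache, indexed by bus number (assumed field). */
        VTDAddressSpace *vtd_as = s->vtd_as_cache[bus_num];

        if (!vtd_as || pci_bus_num(vtd_as->bus) != bus_num) {
            vtd_as = vtd_lookup_as_slow(s, bus_num);  /* assumed: walks the hash table */
            s->vtd_as_cache[bus_num] = vtd_as;
        }
        return vtd_as;
    }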
2022-11-07intel-iommu: don't warn guest errors when getting rid2pasid entryJason Wang
We used to warn on a wrong rid2pasid entry. But this error can be triggered by the guest and can happen during initialization. So let's not warn in this case. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20221028061436.30093-2-jasowang@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Yi Liu <yi.l.liu@intel.com>
2022-11-07hw/i386/acpi-build: Resolve north rather than south bridgesBernhard Beschow
The code currently assumes Q35 iff ICH9 and i440fx iff PIIX. Now that more AML generation has been moved into the south bridges and since the machines define themselves primarily through their north bridges, let's switch to resolving the north bridges for AML generation instead. This also allows for easier experimentation with different south bridges in the "pc" machine, e.g. with PIIX4 and VT82xx. Signed-off-by: Bernhard Beschow <shentey@gmail.com> Message-Id: <20221028103419.93398-4-shentey@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
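A hedged sketch of what resolving by north bridge can look like (object_resolve_path_type() and the TYPE_ constants are believed to be real QEMU identifiers from hw/pci-host, but the surrounding logic is simplified and illustrative):

    /* Sketch: pick the AML flavour from the north bridge present in the
     * machine, instead of probing for the ICH9/PIIX south bridge. */
    bool is_q35    = object_resolve_path_type("", TYPE_Q35_HOST_DEVICE, NULL) != NULL;
    bool is_i440fx = object_resolve_path_type("", TYPE_I440FX_PCI_HOST_BRIDGE, NULL) != NULL;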
2022-11-07hw/i386/acpi-build: Resolve redundant attributeBernhard Beschow
The is_piix4 attribute is set once in one location and read once in another. Doing both in one location allows for removing the attribute altogether. Signed-off-by: Bernhard Beschow <shentey@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20221026133110.91828-3-shentey@gmail.com> Message-Id: <20221028103419.93398-3-shentey@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-11-07hw/i386/acpi-build: Remove unused structBernhard Beschow
Amends commit b23046abe78f48498a423b802d6d86ba0172d57f 'pc: acpi-build: simplify PCI bus tree generation'. Signed-off-by: Bernhard Beschow <shentey@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20221026133110.91828-2-shentey@gmail.com> Message-Id: <20221028103419.93398-2-shentey@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-11-07hw/i386/pc.c: CXL Fixed Memory Window should not reserve e820 in biosGregory Price
Early-boot e820 records will be inserted by the bios/efi/early boot software and be reported to the kernel via insert_resource. Later, when CXL drivers iterate through the regions again, they will insert another resource and make the RESERVED memory area a child. This RESERVED memory area causes the memory region to become unusable, and as a result attempting to create memory regions with `cxl create-region ...` will fail due to the RESERVED area intersecting with the CXL window. During boot the following traceback is observed: 0xffffffff81101650 in insert_resource_expand_to_fit () 0xffffffff83d964c5 in e820__reserve_resources_late () 0xffffffff83e03210 in pcibios_resource_survey () 0xffffffff83e04f4a in pcibios_init () Which produces a call to reserve the CFMWS area: (gdb) p *new $54 = {start = 0x290000000, end = 0x2cfffffff, name = "Reserved", flags = 0x200, desc = 0x7, parent = 0x0, sibling = 0x0, child = 0x0} Later the kernel parses ACPI tables and reserves the exact same area as the CXL Fixed Memory Window: 0xffffffff811016a4 in insert_resource_conflict () insert_resource () 0xffffffff81a81389 in cxl_parse_cfmws () 0xffffffff818c4a81 in call_handler () acpi_parse_entries_array () (gdb) p/x *new $59 = {start = 0x290000000, end = 0x2cfffffff, name = "CXL Window 0", flags = 0x200, desc = 0x0, parent = 0x0, sibling = 0x0, child = 0x0} This produces the following output in /proc/iomem: 590000000-68fffffff : CXL Window 0 590000000-68fffffff : Reserved This reserved area causes `get_free_mem_region()` to fail due to a check against `__region_intersects()`. Due to this reserved area, the intersect check will only ever return REGION_INTERSECTS, which causes `cxl create-region` to always fail. Signed-off-by: Gregory Price <gregory.price@memverge.com> Message-Id: <20221026205912.8579-1-gregory.price@memverge.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
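On the QEMU side the change boils down to not publishing the CXL fixed memory window through e820 at all; a hedged before/after sketch (e820_add_entry() and E820_RESERVED are the real hw/i386 helpers, while cxl_base/cxl_size are placeholder names):

    /* Before (problematic): the CFMWS range was also marked reserved in e820,
     * so the kernel later found "CXL Window 0" nested under "Reserved". */
    e820_add_entry(cxl_base, cxl_size, E820_RESERVED);

    /* After: no e820 entry for the window.  The range is described only via
     * the ACPI CEDT/CFMWS, so get_free_mem_region() no longer sees an
     * intersecting reserved resource and `cxl create-region` can succeed. */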
2022-11-07acpi: pc/35: sanitize _GPE declaration orderIgor Mammedov
Move the _GPE block declaration before it gets referenced by other hotplug handlers. While at it, move the PCI hotplug (_E01) handler after the PCI tree description to avoid forward references to not-yet-declared methods/devices. PS: Forward 'usage' is usually fine as long as it's hidden within a method, however 'iasl' may print warnings. So be nice to iasl/guest OS and do things in the proper order. PS2: Also, follow-up patches will move some of the hotplug code from the PCI tree to _E01, and that also requires the PCI Device nodes to be built first, before the Scope can reuse them from the global context. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20221017102146.2254096-11-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-11-07acpi: enumerate SMB bridge automatically along with other PCI devicesIgor Mammedov
To make that happen (the bridge sits at _ADR: 0x001F0003), relax the PCI enumeration logic to include devices with *function* > 0 if the device has something to say about itself (i.e. has the build_dev_aml callback set). Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20221017102146.2254096-8-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
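In code terms the relaxed check could look roughly like this (a sketch only; PCI_FUNC(), object_dynamic_cast() and TYPE_ACPI_DEV_AML_IF are real QEMU identifiers, while the loop context and variable names are assumed):

    /* Sketch: skip non-zero functions only when the device cannot
     * describe itself via the AcpiDevAmlIf interface. */
    int func = PCI_FUNC(pdev->devfn);

    if (func > 0 && !object_dynamic_cast(OBJECT(pdev), TYPE_ACPI_DEV_AML_IF)) {
        continue;   /* behaves as before: only function 0 is enumerated */
    }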
2022-11-07acpi: pc/q35: drop ad-hoc PCI-ISA bridge AML routines and let bus ↵Igor Mammedov
enumeration generate AML PCI-ISA bridges that are built into PIIX/Q35 now build their own AML using the AcpiDevAmlIf interface. build_append_pci_bus_devices() has gained AcpiDevAmlIf support to get the AML of devices attached to PCI slots. So drop the ad-hoc build_q35_isa_bridge()/build_piix4_isa_bridge() and let PCI bus enumeration include the PCI-ISA bridge AML when the bridge is enumerated by build_append_pci_bus_devices(). The AML change is mostly contextual: it moves the whole ISA hierarchy directly under the PCI host bridge instead of describing it as a separate \SB.PCI0.ISA block. Note: if the bus/slot that hosts the ISA bridge has BSEL set, it will gain new ASUN and _DSM entries (i.e. acpi-index support, but it should not cause any functional change and is fine from the PCI Firmware spec point of view); potentially it's possible to suppress that by adding a flag to PCIDevice, but I don't see a reason to do that yet; I'd rather treat the bridge just like any other PCI device where possible. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20221017102146.2254096-4-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-11-07acpi: pc: vga: use AcpiDevAmlIf interface to build VGA device descriptorsIgor Mammedov
NB: We do not expect any functional change in any ACPI tables with this change. It's only a refactoring. NB2: Some targets (or1k) do not support acpi and CONFIG_ACPI is off for them. However, modules are reused between all architectures so CONFIG_ACPI is on. For those architectures, dummy stub function definitions help to resolve symbols. This change uses more of these and so it adds a couple of dummy stub definitions so that symbols for those can be resolved. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20221017102146.2254096-2-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Ani Sinha <ani@anisinha.ca> CC: Bernhard Beschow <shentey@gmail.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Message-Id: <20221107152744.868434-1-ani@anisinha.ca>
2022-11-03Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into stagingStefan Hajnoczi
* bug fixes * reduced memory footprint for IPI virtualization on Intel processors * asynchronous teardown support (Linux only) # -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNiVykUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroN0Swf/YxjphCtFgYYSO14WP+7jAnfRZLhm # 0xWChWP8rco5I352OBFeFU64Av5XoLGNn6SZLl8lcg86lQ/G0D27jxu6wOcDDHgw # 0yTDO1gevj51UKsbxoC66OWSZwKTEo398/BHPDcI2W41yOFycSdtrPgspOrFRVvf # 7M3nNjuNPsQorZeuu8NGr3jakqbt99ZDXcyDEWbrEAcmy2JBRMbGgT0Kdnc6aZfW # CvL+1ljxzldNwGeNBbQW2QgODbfHx5cFZcy4Daze35l5Ra7K/FrgAzr6o/HXptya # 9fEs5LJQ1JWI6JtpaWwFy7fcIIOsJ0YW/hWWQZSDt9JdAJFE5/+vF+Kz5Q== # =CgrO # -----END PGP SIGNATURE----- # gpg: Signature made Wed 02 Nov 2022 07:40:25 EDT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: target/i386: Fix test for paging enabled util/log: Close per-thread log file on thread termination target/i386: Set maximum APIC ID to KVM prior to vCPU creation os-posix: asynchronous teardown for shutdown on Linux target/i386: Fix calculation of LOCK NEG eflags Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-11-02hw/i386/e820: remove legacy reserved entries for e820Ani Sinha
e820 reserved entries were used before the dynamic entries with fw config files were introduced. Please see the following change: 7d67110f2d9a6("pc: add etc/e820 fw_cfg file") Identical support was introduced into SeaBIOS as well with the following commit: ce39bd4031820 ("Add support for etc/e820 fw_cfg file") Both the above commits are now quite old. QEMU machines 1.7 and newer no longer use the reserved entries. SeaBIOS uses fw config files and dynamic e820 entries by default and only falls back to using reserved entries when it has to work with old qemu (versions earlier than 1.7). Please see functions qemu_cfg_e820() and qemu_early_e820(). It is safe to remove the legacy FW_CFG_E820_TABLE and associated code now as QEMU 7.0 has deprecated i440fx machines 1.7 and older. It would be incredibly rare to run the latest qemu version with a very old version of SeaBIOS that did not support fw config files for e820. As far as I could see, edk2/OVMF never supported reserved entries and uses fw config files from the beginning. So there should be no incompatibilities with OVMF either. CC: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Ani Sinha <ani@anisinha.ca> Acked-by: Gerd Hoffmann <kraxel@redhat.com> Message-Id: <20220831045311.33083-1-ani@anisinha.ca> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
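For context, the fw_cfg path that remains after this cleanup is the 'etc/e820' file; a minimal sketch of how the dynamically built table is exposed (fw_cfg_add_file() is the real API, the table variables are illustrative):

    /* Sketch: publish the e820 table as a named fw_cfg file.  Guest firmware
     * (SeaBIOS, OVMF) reads "etc/e820" instead of the legacy
     * FW_CFG_E820_TABLE entry removed by this commit. */
    fw_cfg_add_file(fw_cfg, "etc/e820",
                    e820_table, sizeof(struct e820_entry) * e820_entries);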
2022-10-31hw/ide/piix: Introduce TYPE_ macros for PIIX IDE controllersBernhard Beschow
Suggested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: Bernhard Beschow <shentey@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20221022150508.26830-9-shentey@gmail.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2022-10-31hw/i386/pc: Create DMA controllers in south bridgesBernhard Beschow
Just like in the real hardware (and in PIIX4), create the DMA controllers in the south bridges. Signed-off-by: Bernhard Beschow <shentey@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20221022150508.26830-2-shentey@gmail.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2022-10-31target/i386: Set maximum APIC ID to KVM prior to vCPU creationZeng Guang
Specify the maximum possible APIC ID assigned for the current VM session to KVM prior to the creation of vCPUs. With this setting, KVM can set up VM-scoped data structures indexed by the APIC ID, e.g. the Posted-Interrupt Descriptor pointer table used for Intel IPI virtualization, with the most optimal memory footprint. This is achieved by calling KVM_ENABLE_CAP for the KVM_CAP_MAX_VCPU_ID capability once KVM has enabled it. The returned error is ignored if KVM doesn't support this capability yet. Signed-off-by: Zeng Guang <guang.zeng@intel.com> Acked-by: Peter Xu <peterx@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20220825025246.26618-1-guang.zeng@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
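At the KVM level the setting amounts to a single capability enable before any vCPU exists; a hedged sketch (kvm_check_extension(), kvm_vm_enable_cap() and KVM_CAP_MAX_VCPU_ID are real interfaces, max_apic_id is a placeholder for the value derived from the CPU topology, and the error handling is simplified):

    /* Sketch: let KVM size its per-VM, APIC-ID-indexed structures (such as
     * the PID pointer table for IPI virtualization) for the real topology.
     * Failure is ignored on kernels without the capability. */
    if (kvm_check_extension(s, KVM_CAP_MAX_VCPU_ID)) {
        kvm_vm_enable_cap(s, KVM_CAP_MAX_VCPU_ID, 0, max_apic_id);
    }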
2022-10-27x86: do not re-randomize RNG seed on snapshot loadJason A. Donenfeld
Snapshot loading is supposed to be deterministic, so we shouldn't re-randomize the various seeds used. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-id: 20221025004327.568476-4-Jason@zx2c4.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27reset: allow registering handlers that aren't called by snapshot loadingJason A. Donenfeld
Snapshot loading only expects to call deterministic handlers, not non-deterministic ones. So introduce a way of registering handlers that won't be called when resetting for snapshots. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-id: 20221025004327.568476-2-Jason@zx2c4.com [PMM: updated json doc comment with Markus' text; fixed checkpatch style nit] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
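Usage-wise the split looks roughly like the following hedged sketch (the nosnapshotload registration helper name matches what this series appears to introduce, but treat it as an assumption; the handler and opaque names are placeholders):

    /* Sketch: deterministic handlers keep using qemu_register_reset();
     * a handler with non-deterministic side effects (e.g. re-seeding an RNG)
     * registers through the variant that snapshot loading skips. */
    qemu_register_reset(reset_device_state, dev);                    /* runs on loadvm too */
    qemu_register_reset_nosnapshotload(reseed_rng_seed, rng_state);  /* skipped on loadvm */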
2022-10-18hyperv: fix SynIC SINT assertion failure on guest resetMaciej S. Szmigiero
Resetting a guest that has Hyper-V VMBus support enabled triggers a QEMU assertion failure: hw/hyperv/hyperv.c:131: synic_reset: Assertion `QLIST_EMPTY(&synic->sint_routes)' failed. This happens both on a normal guest reboot and when using the "system_reset" HMP command. The failing assertion was introduced by commit 64ddecc88bcf ("hyperv: SControl is optional to enable SynIc") to catch dangling SINT routes on SynIC reset. The root cause of this problem is that the SynIC itself is reset before devices using SINT routes have a chance to clean up these routes. Since there seems to be no existing mechanism to force reset callbacks (or methods) to be executed in a specific order, let's use a similar method that is already used to reset another interrupt controller (APIC) after devices have been reset - by invoking the SynIC reset from the machine reset handler via a new x86_cpu_after_reset() function co-located with the existing x86_cpu_reset() in target/i386/cpu.c. Opportunistically move the APIC reset handler there, too. Fixes: 64ddecc88bcf ("hyperv: SControl is optional to enable SynIc") # exposed the bug Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <cb57cee2e29b20d06f81dce054cbcea8b5d497e8.1664552976.git.maciej.szmigiero@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-09x86: pci: acpi: consolidate PCI slots creationIgor Mammedov
No functional changes nor AML bytecode changes. Consolidate the code that generates empty and populated slot descriptors. Besides eliminating duplication, it helps consolidate the conditions for generating parts of the Device{} descriptor in one place, which makes the code more compact and easier to read. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-18-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09x86: pci: acpi: reorder Device's _DSM methodIgor Mammedov
Align the _DSM method in the empty slot descriptor with its position in a populated slot. Expected change: + Device (SE8) + { + Name (_ADR, 0x001D0000) // _ADR: Address + Name (ASUN, 0x1D) Method (_DSM, 4, Serialized) // _DSM: Device-Specific Method { Local0 = Package (0x02) { BSEL, ASUN } Return (PDSM (Arg0, Arg1, Arg2, Arg3, Local0)) } - } - Device (SE8) - { - Name (_ADR, 0x001D0000) // _ADR: Address - Name (ASUN, 0x1D) Name (_SUN, 0x1D) // _SUN: Slot User Number Method (_EJ0, 1, NotSerialized) // _EJx: Eject Device { PCEJ (BSEL, _SUN) } + } i.e. put _DSM right after ASUN, with _SUN/_EJ0 following it. That will eliminate contextual changes (causing test failures) when follow-up patches merge the code generating populated and empty slot descriptors. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-16-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09x86: pci: acpi: reorder Device's _ADR and _SUN fieldsIgor Mammedov
No functional change; align the order of fields in the empty slot descriptor with the populated slot ordering. Expected diff: - Name (_SUN, 0x0X) // _SUN: Slot User Number Name (_ADR, 0xY) // _ADR: Address ... + Name (_SUN, 0xX) // _SUN: Slot User Number That will eliminate contextual changes (causing test failures) when follow-up patches merge the code generating populated and empty slot descriptors. Put the mandatory _ADR as the 1st field, then ASUN as it can be present for both populated and empty slots, and only then _SUN, which is present only when the slot is hotpluggable. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-13-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09x86: acpi: cleanup PCI device _DSM duplicationIgor Mammedov
Add an ASUN variable to hotpluggable slots and use it instead of _SUN (which has the same value) so the _DSM code can be reused on both branches (hot- and non-hotpluggable). No functional change. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-10-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09x86: acpi: _DSM: use Package to pass parametersIgor Mammedov
The number of possible arguments to pass to a method is limited in ACPI. The following patches will need to pass more parameters to the PDSM method and will hit that limit. Prepare for this by passing a structure (Package) to the method, which lets us work around the argument limitation. Pass to PDSM all the standard arguments of _DSM as is, and pack the custom parameters into a Package that is passed as the last argument to PDSM. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-7-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09acpi: x86: refactor PDSM method to reduce nestingIgor Mammedov
..., it will help with code readability and make it easier to extend the method in follow-up patches. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-6-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09acpi: x86: deduplicate HPET AML buildingIgor Mammedov
HPET AML doesn't depend on piix4 or q35; move the code building it to a common scope to avoid duplication. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220701133515.137890-3-imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-10-09Revert "intel_iommu: Fix irqchip / X2APIC configuration checks"Peter Xu
It's true that when vcpus<=255 we don't require 32-bit APIC IDs. However, since we already have EIM=ON here, the hypervisor will declare the VM as x2apic supported (e.g. the VT-d ECAP register will have EIM bit 4 set), so the guest should assume the APIC IDs are 32 bits wide even if vcpus<=255. In short, commit 77250171bdc breaks any simple cmdline that wants to boot a VM with >=9 but <=255 vcpus with: -device intel-iommu,intremap=on For anyone who does not want to enable x2apic, we can use eim=off in the intel-iommu parameters to skip enabling KVM x2apic. This partly reverts commit 77250171bdc02aee106083fd2a068147befa1a38, keeping the valid bit on checking split irqchip but reverting the other change. One thing to mention is that this patch may break migration compatibility of such a VM, however that's probably the best thing we can do, because the old behavior was simply wrong and not working for >8 vcpus. For <=8 vcpus, there could be a light guest ABI change (by enabling KVM x2apic after this patch), but logically it shouldn't stop migration from working. Also, this is not the 1st commit to change x2apic behavior. Igor provided a full history of how this evolved for the past few years: https://lore.kernel.org/qemu-devel/20220922154617.57d1a1fb@redhat.com/ Relevant commits for reference: fb506e701e ("intel_iommu: reject broken EIM", 2016-10-17) c1bb5418e3 ("target/i386: Support up to 32768 CPUs without IRQ remapping", 2020-12-10) 77250171bd ("intel_iommu: Fix irqchip / X2APIC configuration checks", 2022-05-16) dc89f32d92 ("target/i386: Fix sanity check on max APIC ID / X2APIC enablement", 2022-05-16) We may want to have this for stable too (mostly for 7.1.0 only). Adding a Fixes tag. Cc: David Woodhouse <dwmw2@infradead.org> Cc: Claudio Fontana <cfontana@suse.de> Cc: Igor Mammedov <imammedo@redhat.com> Fixes: 77250171bd ("intel_iommu: Fix irqchip / X2APIC configuration checks") Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20220926153206.10881-1-peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
2022-10-01x86: re-initialize RNG seed when selecting kernelJason A. Donenfeld
We don't want it to be possible to re-read the RNG seed after ingesting it, because this ruins forward secrecy. Currently, however, the setup data section can just be re-read. Since the kernel is always read after the setup data, use the selection of the kernel as a trigger to re-initialize the RNG seed, just like we do on reboot, to preserve forward secrecy. Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-Id: <20220922152847.3670513-1-Jason@zx2c4.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-27x86: re-enable rng seeding via SetupDataJason A. Donenfeld
This reverts 3824e25db1 ("x86: disable rng seeding via setup_data"), but for 7.2 rather than 7.1, now that modifying setup_data is safe to do. Cc: Laurent Vivier <laurent@vivier.eu> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Ard Biesheuvel <ardb@kernel.org> Acked-by: Gerd Hoffmann <kraxel@redhat.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-Id: <20220921093134.2936487-4-Jason@zx2c4.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-27x86: reinitialize RNG seed on system rebootJason A. Donenfeld
Since this is read from fw_cfg on each boot, the kernel zeroing it out alone is insufficient to prevent it from being used twice. And indeed on reboot we always want a new seed, not the old one. So re-fill it in this circumstance. Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-Id: <20220921093134.2936487-3-Jason@zx2c4.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-27x86: use typedef for SetupData structJason A. Donenfeld
The preferred style is SetupData as a typedef, not setup_data as a plain struct. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ard Biesheuvel <ardb@kernel.org> Suggested-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-Id: <20220921093134.2936487-2-Jason@zx2c4.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-27x86: return modified setup_data only if read as memory, not as fileJason A. Donenfeld
If setup_data is being read into a specific memory location, then generally the setup_data address parameter is read first, so that the caller knows where to read it into. In that case, we should return setup_data containing the absolute addresses that are hard coded and determined a priori. This is the case when kernels are loaded by BIOS, for example. In contrast, when setup_data is read as a file, then we shouldn't modify setup_data, since the absolute address will be wrong by definition. This is the case when OVMF loads the image. This allows setup_data to be used like normal, without crashing when EFI tries to use it. (As a small development note, strangely, fw_cfg_add_file_callback() was exported but fw_cfg_add_bytes_callback() wasn't, so this makes that consistent.) Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Laurent Vivier <laurent@vivier.eu> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Message-Id: <20220921093134.2936487-1-Jason@zx2c4.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-22hw/i386/multiboot: Avoid dynamic stack allocationPhilippe Mathieu-Daudé
Use autofree heap allocation instead of variable-length array on the stack. Replace the snprintf() call by g_strdup_printf(). Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20220819153931.3147384-9-peter.maydell@linaro.org
2022-09-02util: accept iova_tree_remove_parameter by valueEugenio Pérez
It's convenient to call iova_tree_remove with a map returned from iova_tree_find or iova_tree_find_iova. With the current code this is not possible, since we will free it, and then we will try to search for it again. Fix it by accepting the map by value, forcing a copy of the argument. Not applying a Fixes tag, since there is no such use at the moment. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
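The usage pattern this enables, sketched (iova_tree_find_iova() and iova_tree_remove() are the real util/iova-tree.h functions; tree and needle are placeholder variables):

    /* Sketch: look a mapping up and remove it directly.  Because the map is
     * now taken by value, the caller's copy stays valid even though the node
     * found inside the tree is freed by the removal. */
    const DMAMap *found = iova_tree_find_iova(tree, &needle);
    if (found) {
        iova_tree_remove(tree, *found);
    }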
2022-08-25hw: Add compat machines for 7.2Cornelia Huck
Add 7.2 machine types for arm/i440fx/m68k/q35/s390x/spapr. Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Message-Id: <20220727121755.395894-1-cohuck@redhat.com> [thuth: fixed conflict with pcmc->legacy_no_rng_seed] Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-08-17x86: disable rng seeding via setup_dataGerd Hoffmann
Causes regressions when doing direct kernel boots with OVMF. At this point in the release cycle the only sensible action is to just disable this for 7.1 and sort it properly in the 7.2 devel cycle. Cc: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Eduardo Habkost <eduardo@habkost.net> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Message-Id: <20220817083940.3174933-1-kraxel@redhat.com>
2022-07-26i386/pc: restrict AMD only enforcing of 1Tb hole to new machine typeJoao Martins
The added enforcement is only relevant in the case of AMD, where the range right before the 1TB boundary is restricted and cannot be DMA mapped by the kernel, consequently leading to IOMMU INVALID_DEVICE_REQUEST or possibly other kinds of IOMMU events in the AMD IOMMU. However, there's a case where it may make sense to disable the IOVA relocation/validation: when migrating from a non-amd-1tb-aware qemu to one that supports it. Relocating RAM regions to after the 1Tb hole has consequences for guest ABI because we are changing the memory mapping, so make sure that only new machine types enforce this, but not older ones. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-12-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26i386/pc: relocate 4g start to 1T where applicableJoao Martins
It is assumed that the whole GPA space is available to be DMA addressable, within a given address space limit, except for a tiny region before the 4G. Since Linux v5.4, VFIO validates whether the selected GPA is indeed valid, i.e. not reserved by the IOMMU on behalf of some specific devices or platform-defined restrictions, and thus fails the ioctl(VFIO_DMA_MAP) with -EINVAL. AMD systems with an IOMMU are examples of such platforms and particularly may only have these ranges as allowed: 0000000000000000 - 00000000fedfffff (0 .. 3.982G) 00000000fef00000 - 000000fcffffffff (3.983G .. 1011.9G) 0000010000000000 - ffffffffffffffff (1Tb .. 16Pb[*]) We already account for the 4G hole, albeit if the guest is big enough we will fail to allocate a guest with >1010G due to the ~12G hole at the 1Tb boundary, reserved for HyperTransport (HT). [*] there is another reserved region unrelated to HT that exists in the 256T boundary in Fam 17h according to Errata #1286, documented also in "Open-Source Register Reference for AMD Family 17h Processors (PUB)" When creating the region above 4G, take into account that on AMD platforms the HyperTransport range is reserved and hence it cannot be used for GPAs either. In those cases, rather than establishing the start of ram-above-4g to be 4G, relocate it instead to 1Tb. See AMD IOMMU spec, section 2.1.2 "IOMMU Logical Topology", for more information on the underlying restriction of IOVAs. After accounting for the 1Tb hole on AMD hosts, mtree should look like: 0000000000000000-000000007fffffff (prio 0, i/o): alias ram-below-4g @pc.ram 0000000000000000-000000007fffffff 0000010000000000-000001ff7fffffff (prio 0, i/o): alias ram-above-4g @pc.ram 0000000080000000-000000ffffffffff If the relocation is done or the address space covers it, we also add the reserved HT e820 range as reserved. Default phys-bits on Qemu is TCG_PHYS_ADDR_BITS (40) which is enough to address 1Tb (0xff ffff ffff). On AMD platforms, if a ram-above-4g relocation is attempted and the CPU wasn't configured with a big enough phys-bits, an error message will be printed due to the maxphysaddr vs maxusedaddr check previously added. Suggested-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-11-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
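Condensed into code, the relocation decision is roughly the following hedged sketch (the constants reflect the HT range quoted above; pc_machine_is_amd() and the above_4g_mem_* field names are illustrative, while e820_add_entry()/E820_RESERVED are real hw/i386 helpers):

    /* Sketch: the HyperTransport-reserved window on AMD spans
     * 0xfd00000000 .. 0xffffffffff (just below 1 TiB, ~12 GiB wide). */
    #define AMD_HT_START        0xfd00000000ULL
    #define AMD_ABOVE_1TB_START 0x10000000000ULL   /* 1 TiB */

    if (pc_machine_is_amd(pcms) &&                  /* illustrative predicate */
        x86ms->above_4g_mem_start + x86ms->above_4g_mem_size > AMD_HT_START) {
        /* Start ram-above-4g at 1 TiB instead, and mark the HT hole reserved. */
        x86ms->above_4g_mem_start = AMD_ABOVE_1TB_START;
        e820_add_entry(AMD_HT_START, AMD_ABOVE_1TB_START - AMD_HT_START,
                       E820_RESERVED);
    }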
2022-07-26i386/pc: bounds check phys-bits against max used GPAJoao Martins
Calculate the max *used* GPA against the CPU's maximum possible address and error out if the former surpasses the latter. This ensures the max used GPA is reachable by the configured phys-bits. Default phys-bits on QEMU is TCG_PHYS_ADDR_BITS (40), which is enough for the CPU to address 1Tb (0xff ffff ffff) or 1010G (0xfc ffff ffff) on AMD hosts with IOMMU. This is preparation for AMD guests with >1010G, where we will want to relocate ram-above-4g to be after 1Tb instead of 4G. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-10-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
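A compressed sketch of the check itself (error_report() is the real API; pc_max_used_gpa() is assumed to be the helper that computes the highest guest-physical address the machine layout will touch):

    /* Sketch: refuse to start if the configured phys-bits cannot address
     * the highest GPA the machine layout will use. */
    uint64_t maxphysaddr = ((uint64_t)1 << cpu->phys_bits) - 1;
    uint64_t maxusedaddr = pc_max_used_gpa(pcms, pci_hole64_size);  /* assumed helper */

    if (maxusedaddr > maxphysaddr) {
        error_report("Address space limit 0x%" PRIx64 " < 0x%" PRIx64
                     " phys-bits too low (%u)",
                     maxphysaddr, maxusedaddr, cpu->phys_bits);
        exit(EXIT_FAILURE);
    }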
2022-07-26i386/pc: factor out device_memory base/size to helperJoao Martins
Move obtaining hole64_start from the device_memory memory region base/size into a helper, alongside corresponding getters in pc_memory_init() for when the hotplug range is uninitialized. While doing that, remove the memory-region-based logic from this newly added helper. This is the final step that allows pc_pci_hole64_start() to be callable at the beginning of pc_memory_init(), before any memory regions are initialized. Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-9-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26i386/pc: handle uninitialized mr in pc_get_cxl_range_end()Joao Martins
Remove the pc_get_cxl_range_end() dependency on the CXL memory region, and replace it with one that does not require the CXL host_mr to determine the start of the CXL range. This is in preparation to allow pc_pci_hole64_start() to be called early in pc_memory_init(), handling the CXL memory region end when its underlying memory region isn't yet initialized. Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Message-Id: <20220719170014.27028-8-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Igor Mammedov <imammedo@redhat.com>
2022-07-26i386/pc: factor out cxl range start to helperJoao Martins
Factor out the calculation of the base address of the memory region. It will be used later on for the cxl range end counterpart calculation and as well in pc_memory_init() CXL memory region initialization, thus avoiding duplication. Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-7-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26i386/pc: factor out cxl range end to helperJoao Martins
Move the calculation of the CXL memory region end to a separate helper. This is in preparation for a future change that removes the CXL range's dependency on the CXL memory region, with the goal of allowing pc_pci_hole64_start() to be called before any memory regions are initialized. Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-6-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26i386/pc: factor out above-4g end to an helperJoao Martins
There are a couple of places that duplicate this calculation of the RAM size above the 4G boundary. Move all of those to a helper function. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-5-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
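The consolidated helper can be as small as this hedged sketch (the function name is illustrative; above_4g_mem_start is the X86MachineState field introduced by the "add 4g boundary start to X86MachineState" patch in this same series, listed further down):

    /* Sketch: one place to compute where RAM above the 4 GiB boundary ends,
     * instead of repeating the sum at every call site. */
    static uint64_t x86_above_4g_mem_end(X86MachineState *x86ms)
    {
        return x86ms->above_4g_mem_start + x86ms->above_4g_mem_size;
    }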
2022-07-26i386/pc: pass pci_hole64_size to pc_memory_init()Joao Martins
Use the pre-initialized pci-host qdev and fetch the pci-hole64-size into pc_memory_init()'s newly added argument. Use the PCI_HOST_PROP_PCI_HOLE64_SIZE pci-host property for fetching pci-hole64-size. This is in preparation for determining that host-phys-bits are enough and for pci-hole64-size to be considered when relocating ram-above-4g to be at 1T (on AMD platforms). Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-4-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
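A hedged sketch of the fetch (object_property_get_uint() and PCI_HOST_PROP_PCI_HOLE64_SIZE are real QEMU identifiers; pci_host stands for the qdev created by the "create pci-host qdev prior to pc_memory_init()" patch below, and the value is then handed to pc_memory_init()):

    /* Sketch: read the user-configurable 64-bit PCI hole size from the
     * pci-host device before laying out memory, then pass it on as the new
     * pc_memory_init() argument. */
    uint64_t pci_hole64_size =
        object_property_get_uint(OBJECT(pci_host),
                                 PCI_HOST_PROP_PCI_HOLE64_SIZE,
                                 &error_abort);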
2022-07-26i386/pc: create pci-host qdev prior to pc_memory_init()Joao Martins
At the start of pc_memory_init() we usually pass a range of 0..UINT64_MAX as pci_memory, when really it's 2G (i440fx) or 32G (q35). To get the real user value, we need the pci-hole64-size property passed to the pci-host. So create the qdev prior to memory init to better estimate the max used/phys address. This is in preparation for determining that host-phys-bits are enough and also for pci-hole64-size to be considered when relocating ram-above-4g to be at 1T (on AMD platforms). Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-3-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26hw/i386: add 4g boundary start to X86MachineStateJoao Martins
Rather than hardcoding the 4G boundary everywhere, introduce a X86MachineState field @above_4g_mem_start and use it accordingly. This is in preparation for relocating ram-above-4g to be dynamically start at 1T on AMD platforms. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <20220719170014.27028-2-joao.m.martins@oracle.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2022-07-26hw/i386/pc: Always place CXL Memory Regions after device_memoryJonathan Cameron
Previously broken_reserved_end was taken into account, but Igor Mammedov identified that this could lead to a clash between potential RAM being mapped in the region and CXL usage. Hence always add the size of the device_memory memory region. This only affects the case where the broken_reserved_end flag was set. Fixes: 6e4e3ae936e6 ("hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)") Reported-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-Id: <20220701132300.2264-3-Jonathan.Cameron@huawei.com> Acked-by: Igor Mammedov <imammedo@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>