path: root/include
2023-10-16  Merge tag 'pull-shadow-2023-10-12' of https://repo.or.cz/qemu/armbru into staging  (Stefan Hajnoczi)

-Wshadow=local patches for 2023-10-12

* tag 'pull-shadow-2023-10-12' of https://repo.or.cz/qemu/armbru:
  target/i386: fix shadowed variable pasto
  contrib/vhost-user-gpu: Fix compiler warning when compiling with -Wshadow
  hw/virtio/virtio-gpu: Fix compiler warning when compiling with -Wshadow
  libvhost-user: Fix compiler warning with -Wshadow=local
  libvduse: Fix compiler warning with -Wshadow=local

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-16  Merge tag 'mem-2023-10-12' of https://github.com/davidhildenbrand/qemu into staging  (Stefan Hajnoczi)

"Host Memory Backends" and "Memory devices" queue ("mem"):
- Support memory devices with multiple memslots
- Support memory devices that dynamically consume memslots
- Support memory devices that can automatically decide on the number of memslots to use
- virtio-mem support for exposing memory dynamically via multiple memslots
- Some required cleanups/refactorings

* tag 'mem-2023-10-12' of https://github.com/davidhildenbrand/qemu:
  virtio-mem: Mark memslot alias memory regions unmergeable
  memory,vhost: Allow for marking memory device memory regions unmergeable
  virtio-mem: Expose device memory dynamically via multiple memslots if enabled
  virtio-mem: Update state to match bitmap as soon as it's been migrated
  virtio-mem: Pass non-const VirtIOMEM via virtio_mem_range_cb
  memory: Clarify mapping requirements for RamDiscardManager
  memory-device,vhost: Support automatic decision on the number of memslots
  vhost: Add vhost_get_max_memslots()
  kvm: Add stub for kvm_get_max_memslots()
  memory-device,vhost: Support memory devices that dynamically consume memslots
  memory-device: Track required and actually used memslots in DeviceMemoryState
  stubs: Rename qmp_memory_device.c to memory_device.c
  memory-device: Support memory devices with multiple memslots
  vhost: Return number of free memslots
  kvm: Return number of free memslots
  softmmu/physmem: Fixup qemu_ram_block_from_host() documentation
  vhost: Remove vhost_backend_can_merge() callback
  vhost: Rework memslot filtering and fix "used_memslot" tracking

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-16  gfxstream + rutabaga prep: add needed definitions, fields, and options  (Gurchetan Singh)

This modifies the common virtio-gpu.h file to have the fields and definitions needed by gfxstream/rutabaga, used by VirtioGpuRutabaga.

Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
2023-10-16  virtio-gpu: blob prep  (Antonio Caggiano)

This adds the preparatory functions needed to:
- decode blob commands
- track iovecs

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
2023-10-16  virtio-gpu: hostmem  (Gerd Hoffmann)

Use VIRTIO_GPU_SHM_ID_HOST_VISIBLE as id for virtio-gpu.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
2023-10-16  virtio-gpu: CONTEXT_INIT feature  (Antonio Caggiano)

The feature can be enabled when a backend wants it.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
2023-10-16  virtio: Add shared memory capability  (Dr. David Alan Gilbert)

Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG' to allow defining shared memory regions with sizes and offsets of 2^32 and more. Multiple instances of the capability are allowed and distinguished by a device-specific 'id'.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Huang Rui <ray.huang@amd.com>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
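The 64-bit reach comes from widening the standard virtio PCI capability with high offset/length words. A minimal sketch of the layout, following the virtio 1.x spec rather than QEMU's actual headers:

    /* Sketch of a virtio PCI vendor capability describing a shared
     * memory region; field layout per the virtio 1.x spec (assumed
     * here, not copied from this patch). */
    #include <stdint.h>

    struct virtio_pci_cap {
        uint8_t  cap_vndr;    /* PCI_CAP_ID_VNDR */
        uint8_t  cap_next;    /* link to next capability */
        uint8_t  cap_len;     /* length of this structure */
        uint8_t  cfg_type;    /* VIRTIO_PCI_CAP_SHARED_MEMORY_CFG */
        uint8_t  bar;         /* BAR holding the region */
        uint8_t  id;          /* device-specific id, telling instances apart */
        uint8_t  padding[2];
        uint32_t offset;      /* low 32 bits of the offset into the BAR */
        uint32_t length;      /* low 32 bits of the region length */
    };

    struct virtio_pci_cap64 {
        struct virtio_pci_cap cap;
        uint32_t offset_hi;   /* high 32 bits: offsets of 2^32 and more */
        uint32_t length_hi;   /* high 32 bits: sizes of 2^32 and more */
    };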
2023-10-13  hw/ufs: Fix incorrect register fields  (Jeuk Kim)

This patch fixes invalid UFS register fields. It fixes an issue reported by Bin Meng that caused UFS to fail on RISC-V.

Fixes: bc4e68d362ec ("hw/ufs: Initial commit for emulated Universal-Flash-Storage")
Signed-off-by: Jeuk Kim <jeuk20.kim@samsung.com>
Reported-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Tested-by: Bin Meng <bmeng@tinylab.org>
2023-10-13  hw/loongarch/virt: Remove unused ISA Bus  (Philippe Mathieu-Daudé)

The LoongArch 'virt' machine doesn't use its ISA I/O region. If an ISA device were to be mapped there, there is no support for ISA IRQs, so this is unlikely to be useful. Simply remove it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20231010135342.40219-3-philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2023-10-12  block: Add assertion for bdrv_graph_wrlock()  (Kevin Wolf)

bdrv_graph_wrlock() can't run in a coroutine (because it polls) and requires holding the BQL. We already have GLOBAL_STATE_CODE() to assert the latter. Assert the former as well and add a no_coroutine_fn marker.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-23-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
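A minimal sketch of what the entry checks could look like; the exact macros and ordering in the patch may differ:

    /* Hedged sketch, not the patch's code: enforce "BQL held, not in
     * coroutine context" when taking the writer lock. */
    void no_coroutine_fn bdrv_graph_wrlock(void)
    {
        GLOBAL_STATE_CODE();            /* asserts we hold the BQL */
        assert(!qemu_in_coroutine());   /* polls, so no coroutines */
        /* ... take the writer lock, polling until readers drain ... */
    }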
2023-10-12  block: Protect bs->children with graph_lock  (Kevin Wolf)

Almost all functions that access the child links already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-22-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Protect bs->parents with graph_lock  (Kevin Wolf)

Almost all functions that access the parent link already take the graph lock now. Add locking to the remaining user in a test case and finally annotate the struct field itself as protected by the graph lock.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-21-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_get_specific_info() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_get_specific_info() need to hold a reader lock for the graph. This removes an assume_graph_lock() call in vmdk's implementation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-20-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
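The same annotation pattern recurs in the commits below. A hedged sketch of the declaration and a typical caller (signatures and attribute placement simplified; some_caller is invented for the example; GRAPH_RDLOCK and GRAPH_RDLOCK_GUARD_MAINLOOP() are existing QEMU graph-lock macros that expand to clang Thread Safety Analysis attributes, so unlocked callers become compile-time errors):

    /* Declaration side: the function now requires the reader lock. */
    ImageInfoSpecific * GRAPH_RDLOCK
    bdrv_get_specific_info(BlockDriverState *bs, Error **errp);

    /* Caller side: take/assert the reader lock for the current scope. */
    void some_caller(BlockDriverState *bs, Error **errp)
    {
        GRAPH_RDLOCK_GUARD_MAINLOOP();
        ImageInfoSpecific *info = bdrv_get_specific_info(bs, errp);
        /* ... */
    }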
2023-10-12  block: Mark bdrv_apply_auto_read_only() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_apply_auto_read_only() need to hold a reader lock for the graph because it calls bdrv_can_set_read_only(), which indirectly accesses the parents list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-19-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_op_is_blocked() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_op_is_blocked() need to hold a reader lock for the graph because it calls bdrv_get_device_or_node_name(), which accesses the parents list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-18-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  qcow2: Mark qcow2_signal_corruption() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of qcow2_signal_corruption() need to hold a reader lock for the graph because it calls bdrv_get_node_name(), which accesses the parents list of a node.

For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-15-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_amend_options() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_amend_options() need to hold a reader lock for the graph. This removes an assume_graph_lock() call in crypto's implementation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-14-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_get_parent_name() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_get_parent_name() need to hold a reader lock for the graph because it accesses the parents list of a node.

For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-13-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_primary_child() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_primary_child() need to hold a reader lock for the graph because it accesses the children list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-12-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_refresh_filename() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_refresh_filename() need to hold a reader lock for the graph because it accesses the children list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-11-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_get_xdbg_block_graph() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_get_xdbg_block_graph() need to hold a reader lock for the graph because it accesses the children list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-10-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Take graph rdlock in parts of reopen  (Kevin Wolf)

Reopen isn't easy with respect to locking because many of its functions need to iterate the graph, some change it, and there are drains in the middle during which no locks can be held. Therefore, just document most of the functions as unlocked, and take locks internally before accessing the graph.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-9-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark bdrv_snapshot_fallback() and callers GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_snapshot_fallback() need to hold a reader lock for the graph because it accesses the children list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-8-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: Mark drain related functions GRAPH_RDLOCK  (Emanuele Giuseppe Esposito)

Draining recursively traverses the graph, so such accesses to the graph also need to be protected by the graph rdlock. There are three different drain callers to consider:

1. Drain in the main loop: no issue at all, the rdlock is a nop.
2. Drain in an iothread: the rdlock only works in the main loop or in coroutines, so disallow it.
3. Drain in a coroutine (regardless of AioContext): the drain mechanism schedules a BH in bs->aio_context that then performs the actual draining. This is wrong because, as pointed out in (2), the rdlock won't work if bs->aio_context is an iothread. Therefore, change bdrv_co_yield_to_drain to schedule the BH in the main loop (see the sketch after this entry).

Caller (2) also implies that we need to modify test-bdrv-drain.c to disallow draining in the iothreads.

For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These places will be removed once everything is properly annotated.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
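For case (3), a hedged sketch of the change (aio_bh_schedule_oneshot() and qemu_get_aio_context() are existing QEMU APIs; the callback name and data variable are assumptions and may differ from the patch):

    /* Schedule the drain BH in the main loop's AioContext rather than
     * bs->aio_context, so the BH can take the graph rdlock, which is
     * only valid in the main loop or in coroutines. */
    aio_bh_schedule_oneshot(qemu_get_aio_context(),  /* was: bs->aio_context */
                            bdrv_co_drain_bh_cb, &data);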
2023-10-12  block: Mark bdrv_first_blk() and bdrv_is_root_node() GRAPH_RDLOCK  (Kevin Wolf)

This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_first_blk() and bdrv_is_root_node() need to hold a reader lock for the graph. These functions are the only functions in block-backend.c that access the parent list of a node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-5-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block-coroutine-wrapper: Add no_co_wrapper_bdrv_rdlock functions  (Kevin Wolf)

Add a new wrapper type for GRAPH_RDLOCK functions that should be called from coroutine context.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: switch to co_wrapper for bdrv_is_allocated_*  (Paolo Bonzini)

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-4-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  block: complete public block status API  (Paolo Bonzini)

Include both coroutine and non-coroutine versions, the latter being co_wrapper_mixed_bdrv_rdlock of the former.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20230904100306.156197-3-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2023-10-12  Merge tag 'pull-riscv-to-apply-20231012-1' of https://github.com/alistair23/qemu into staging  (Stefan Hajnoczi)

Second RISC-V PR for 8.2

* Add support for the max CPU
* Detect user choice in TCG
* Clear CSR values at reset and sync MPSTATE with host
* Fix the typo of inverted order of pmpaddr13 and pmpaddr14
* Split TCG/KVM accelerators from cpu.c
* Add extension properties for all cpus
* Replace GDB exit calls with proper shutdown
* Support KVM_GET_REG_LIST
* Remove RVG warning
* Use env_archcpu for better performance
* Deprecate capital 'Z' CPU properties
* Fix vfwmaccbf16.vf

* tag 'pull-riscv-to-apply-20231012-1' of https://github.com/alistair23/qemu: (54 commits)
  target/riscv: Fix vfwmaccbf16.vf
  target/riscv: deprecate capital 'Z' CPU properties
  target/riscv: Use env_archcpu for better performance
  target/riscv/tcg: remove RVG warning
  target/riscv/kvm: support KVM_GET_REG_LIST
  target/riscv/kvm: improve 'init_multiext_cfg' error msg
  gdbstub: replace exit calls with proper shutdown for softmmu
  hw/char: riscv_htif: replace exit calls with proper shutdown
  hw/misc/sifive_test.c: replace exit calls with proper shutdown
  softmmu: pass the main loop status to gdb "Wxx" packet
  softmmu: add means to pass an exit code when requesting a shutdown
  target/riscv/tcg-cpu.c: add extension properties for all cpus
  target/riscv: add riscv_cpu_get_name()
  target/riscv/cpu: move priv spec functions to tcg-cpu.c
  target/riscv/cpu.c: export isa_edata_arr[]
  target/riscv/tcg: move riscv_cpu_add_misa_properties() to tcg-cpu.c
  target/riscv/cpu.c: make misa_ext_cfgs[] 'const'
  target/riscv/tcg: introduce tcg_cpu_instance_init()
  target/riscv/cpu.c: export set_misa()
  target/riscv/kvm: do not use riscv_cpu_add_misa_properties()
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-12  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging  (Stefan Hajnoczi)

trivial patches for 2023-10-12

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  cpus: Remove unused smp_cores/smp_threads declarations
  scripts/xml-preprocess: Make sure this script is invoked via the right Python
  roms: use PYTHON to invoke python
  MAINTAINERS: Add some unowned files to the SBSA-REF section
  MAINTAINERS: Add section for overall sensors
  MAINTAINERS: add standard-headers to Hosts/LINUX
  MAINTAINERS: Add the CI-related doc files to the CI section
  MAINTAINERS: Add include folder to the hw/char/ section
  MAINTAINERS: Add unowned RISC-V related files to the right sections
  MAINTAINERS: Add g364fb and ds1225y to the Jazz section
  Fix compilation when UFFDIO_REGISTER is not set.
  Update AMD memory encryption document links.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-12  memory,vhost: Allow for marking memory device memory regions unmergeable  (David Hildenbrand)

Let's allow for marking memory regions unmergeable, to teach flatview code and vhost not to merge adjacent aliases to the same memory region into a larger memory section; instead, we want separate aliases to stay separate, such that we can atomically map/unmap aliases without affecting other aliases.

This is desired for virtio-mem mapping device memory located on a RAM memory region via multiple aliases into a memory region container, resulting in separate memslots that can get (un)mapped atomically.

As an example with virtio-mem, the layout would look something like this:

  [...]
  0000000240000000-00000020bfffffff (prio 0, i/o): device-memory
    0000000240000000-000000043fffffff (prio 0, i/o): virtio-mem
      0000000240000000-000000027fffffff (prio 0, ram): alias memslot-0 @mem2 0000000000000000-000000003fffffff
      0000000280000000-00000002bfffffff (prio 0, ram): alias memslot-1 @mem2 0000000040000000-000000007fffffff
      00000002c0000000-00000002ffffffff (prio 0, ram): alias memslot-2 @mem2 0000000080000000-00000000bfffffff
  [...]

Without unmergeable memory regions, all three memslots would get merged into a single memory section. For example, when mapping another alias (e.g., virtio-mem-memslot-3) or when unmapping any of the mapped aliases, memory listeners will first get notified about the removal of the big memory section, to then get notified about re-adding of the new (differently merged) memory section(s).

In an ideal world, memory listeners would be able to deal with that atomically, like KVM nowadays does. However, (a) supporting this for other memory listeners (vhost-user, vfio) is fairly hard: temporary removal can result in all kinds of issues on concurrent access to guest memory; and (b) this handling is undesired, because temporarily removing+readding can consume quite some time on bigger memslots and is not efficient (e.g., vfio unpinning and repinning pages ...).

Let's allow for marking a memory region unmergeable, such that we can atomically (un)map aliases to the same memory region, similar to (un)mapping individual DIMMs.

Similarly, teach vhost code not to redo what flatview core stopped doing: don't merge such sections. Merging in vhost code is really only relevant for handling random holes in boot memory where, without this merging, the vhost-user backend wouldn't be able to mmap() some boot memory backed on hugetlb.

We'll use this for virtio-mem next.

Message-ID: <20230926185738.277351-18-david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
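A hedged sketch of building such a layout (memory_region_init_alias(), memory_region_add_subregion() and GiB are existing QEMU APIs; memory_region_set_unmergeable() is the new call this patch introduces; owner/container/backing setup is elided):

    /* Map two aliases of one RAM region into a container and mark them
     * unmergeable, so flatview and vhost keep them as separate
     * sections that can be (un)mapped atomically. Illustrative only. */
    MemoryRegion slot0, slot1;

    memory_region_init_alias(&slot0, owner, "memslot-0", backing_ram,
                             0 * GiB, 1 * GiB);
    memory_region_set_unmergeable(&slot0, true);
    memory_region_add_subregion(container, 0 * GiB, &slot0);

    memory_region_init_alias(&slot1, owner, "memslot-1", backing_ram,
                             1 * GiB, 1 * GiB);
    memory_region_set_unmergeable(&slot1, true);
    memory_region_add_subregion(container, 1 * GiB, &slot1);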
2023-10-12  virtio-mem: Expose device memory dynamically via multiple memslots if enabled  (David Hildenbrand)

Having large virtio-mem devices that only expose little memory to a VM is currently a problem: we map the whole sparse memory region into the guest using a single memslot, resulting in one gigantic memslot in KVM. KVM allocates metadata for the whole memslot, which can result in quite some memory waste.

Assuming we have a 1 TiB virtio-mem device and only expose little (e.g., 1 GiB) memory, we would create a single 1 TiB memslot and KVM has to allocate metadata for that 1 TiB memslot: on x86, this implies allocating a significant amount of memory for metadata:

(1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB
    -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~2 GiB (0.2 %)
    With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets
    allocated lazily when required for nested VMs
(2) gfn_track: 2 bytes per 4 KiB
    -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)
(3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB
    -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)
(4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page
    -> For 1 TiB: 536870912 = 64 MiB (0.006 %)

So we primarily care about (1) and (2). The bad thing is that the memory consumption *doubles* once SMM is enabled, because we create the memslot once for !SMM and once for SMM.

Having a 1 TiB memslot without the TDP MMU consumes around:
* With SMM: 5 GiB
* Without SMM: 2.5 GiB

Having a 1 TiB memslot with the TDP MMU consumes around:
* With SMM: 1 GiB
* Without SMM: 512 MiB

... and that's really something we want to optimize, to be able to just start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device that can grow very large (e.g., 1 TiB).

Consequently, using multiple memslots and only mapping the memslots we really need can significantly reduce memory waste and speed up memslot-related operations. Let's expose the sparse RAM memory region using multiple memslots, mapping only the memslots we currently need into our device memory region container.

The feature can be enabled using "dynamic-memslots=on" and requires "unplugged-inaccessible=on", which is nowadays the default.

Once enabled, we'll auto-detect the number of memslots to use based on the memslot limit provided by the core. We'll use at most 1 memslot per gigabyte. Note that our global limit of memslots across all memory devices is currently set to 256: even with multiple large virtio-mem devices, we'd still have a sane limit on the number of memslots used.

The default is to not dynamically map memslots for now ("dynamic-memslots=off"). The optimization must be enabled manually, because some vhost setups (e.g., hotplug of vhost-user devices) might be problematic until we support more memslots especially in vhost-user backends.

Note that "dynamic-memslots=on" is just a hint that multiple memslots *may* be used for internal optimizations, not that multiple memslots *must* be used. The actual number of memslots that are used is an internal detail: for example, once memslot metadata is no longer an issue, we could simply stop optimizing for that. Migration source and destination can differ on the setting of "dynamic-memslots".

Message-ID: <20230926185738.277351-17-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
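A hedged usage sketch; only the dynamic-memslots property comes from this patch, while the sizes, ids and device names are illustrative examples:

    # Small boot memory plus a virtio-mem device that can grow to 1 TiB,
    # with the new dynamic memslot behaviour enabled:
    qemu-system-x86_64 -m 4G,maxmem=1028G \
        -object memory-backend-ram,id=vmem0,size=1T \
        -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=1G,dynamic-memslots=on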
2023-10-12  memory: Clarify mapping requirements for RamDiscardManager  (David Hildenbrand)

We really only care about the RAM memory region not being mapped into an address space yet as long as we're still setting up the RamDiscardManager. Once mapped into an address space, memory notifiers would get notified about such a region and any attempts to modify the RamDiscardManager would be wrong.

While "mapped into an address space" is easy to check for RAM regions that are mapped directly (following the ->container links), it's harder to check when such regions are mapped indirectly via aliases. For now, we can only detect that a region is mapped through an alias (->mapped_via_alias), but we don't have a handle on these aliases to follow all their ->container links to test if they are eventually mapped into an address space.

So relax the assertion in memory_region_set_ram_discard_manager(), remove the check in memory_region_get_ram_discard_manager() and clarify the doc.

Message-ID: <20230926185738.277351-14-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  memory-device,vhost: Support automatic decision on the number of memslots  (David Hildenbrand)

We want to support memory devices that can automatically decide how many memslots they will use. In the worst case, they have to use a single memslot. The target use cases are virtio-mem and the hyper-v balloon.

Let's calculate a reasonable limit such a memory device may use, and instruct the device to make a decision based on that limit. Use a simple heuristic that considers:
* a memslot soft-limit of 256 across all memory devices, so as not to consume too many memslots, which could harm performance;
* the actually still free and unreserved memslots;
* the percentage of the remaining device memory region that the memory device will occupy.

Further, while we properly check before plugging a memory device whether there still are free memslots, we have other memslot consumers (such as boot memory and PCI BARs) that don't perform any checks and might dynamically consume memslots without any prior reservation. So we might succeed in plugging a memory device, but once we dynamically map a PCI BAR we would be in trouble. Doing accounting / reservation / checks for all such users is problematic (e.g., sometimes we might temporarily split boot memory into two memslots, triggered by the BIOS).

We use the historic magic memslot number of 509 as orientation to when supporting 256 memory devices -> memslots (leaving 253 for boot memory and other devices) has been proven to work reliably. We'll fall back to suggesting a single memslot if we don't have at least 509 total memslots.

Plugging vhost devices with fewer than 509 memslots available while we have memory devices plugged that consume multiple memslots due to automatic decisions can be problematic. Most configurations might just fail due to "limit < used + reserved"; however, it can also happen that these memory devices would suddenly consume memslots that would actually be required by other memslot consumers (boot, PCI BARs) later. Note that this has always been sketchy with vhost devices that support only a small number of memslots, but we don't want to make it any worse. So let's keep it simple and simply reject plugging such vhost devices in such a configuration.

Eventually, all vhost devices that want to be fully compatible with such memory devices should support a decent number of memslots (>= 509).

Message-ID: <20230926185738.277351-13-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
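A hedged sketch of a heuristic of this shape; the constants 256 and 509 are from the text above, while the exact scaling and names are assumptions, not the patch's code:

    /* Illustrative only: suggest how many memslots a memory device may
     * use, based on free slots and its share of the device region. */
    static unsigned int suggest_memslots(unsigned int total_memslots,
                                         unsigned int free_memslots,
                                         uint64_t device_size,
                                         uint64_t remaining_region_size)
    {
        const unsigned int soft_limit = 256;  /* across all memory devices */

        if (total_memslots < 509) {           /* historic safe total */
            return 1;
        }
        unsigned int budget = soft_limit < free_memslots ? soft_limit
                                                         : free_memslots;
        /* Scale the budget by the share of the remaining device memory
         * region this device will occupy. */
        unsigned int n = budget * device_size / remaining_region_size;
        return n ? n : 1;
    }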
2023-10-12  vhost: Add vhost_get_max_memslots()  (David Hildenbrand)

Let's add vhost_get_max_memslots().

Message-ID: <20230926185738.277351-12-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  kvm: Add stub for kvm_get_max_memslots()  (David Hildenbrand)

We'll need the stub soon from memory device context. While at it, use "unsigned int" as return value and place the declaration next to kvm_get_free_memslots().

Message-ID: <20230926185738.277351-11-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  memory-device,vhost: Support memory devices that dynamically consume memslots  (David Hildenbrand)

We want to support memory devices that have a dynamically managed memory region container as device memory region. This device memory region maps multiple RAM memory subregions (e.g., aliases to the same RAM memory region), whereby these subregions can be (un)mapped on demand. Each RAM subregion will consume a memslot in KVM and vhost, resulting in such a new device consuming memslots dynamically, and initially usually 0.

We already track the number of used vs. required memslots for all memory devices. From that, we can derive the number of reserved memslots that must not be used otherwise.

The target use case is virtio-mem and the hyper-v balloon, which will dynamically map aliases to RAM memory regions into their device memory region container.

Properly document what's supported and what's not and extend the vhost memslot check accordingly.

Message-ID: <20230926185738.277351-10-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  memory-device: Track required and actually used memslots in DeviceMemoryState  (David Hildenbrand)

Let's track how many memslots are required by plugged memory devices and how many are currently actually getting used by plugged memory devices. "required - used" is the number of reserved memslots. For now, the number of used and required memslots is always equal, and there are no reservations.

This is a preparation for memory devices that want to dynamically consume memslots after initially specifying how many they require -- where we'll end up with reserved memslots.

To track the number of used memslots, create a new address space for our device memory and register a memory listener (add/remove) for that address space.

Message-ID: <20230926185738.277351-9-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
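A hedged sketch of the tracking mechanism (memory_listener_register() and the MemoryListener callbacks are existing QEMU APIs; the listener name, counter and address space variable are illustrative):

    /* Count memslots as RAM sections are added to / removed from the
     * device-memory address space. Not the patch's actual code. */
    static unsigned int used_memslots;

    static void md_region_add(MemoryListener *l, MemoryRegionSection *s)
    {
        used_memslots++;
    }

    static void md_region_del(MemoryListener *l, MemoryRegionSection *s)
    {
        used_memslots--;
    }

    static MemoryListener md_listener = {
        .name = "memory-device-memslots",  /* hypothetical name */
        .region_add = md_region_add,
        .region_del = md_region_del,
    };

    /* After creating the device-memory address space: */
    memory_listener_register(&md_listener, &device_memory_as);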
2023-10-12  memory-device: Support memory devices with multiple memslots  (David Hildenbrand)

We want to support memory devices that have a memory region container as device memory region that maps multiple RAM memory regions. Let's start by supporting memory devices that statically map multiple RAM memory regions and thereby consume multiple memslots.

We already have one device that uses a container as device memory region: NVDIMMs. However, an NVDIMM always ends up consuming exactly one memslot.

Let's add support for that by asking the memory device via a new callback how many memslots it requires.

Message-ID: <20230926185738.277351-7-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Return number of free memslots  (David Hildenbrand)

Let's return the number of free slots instead of only checking if there is a free slot. This is a preparation for memory devices that consume multiple memslots.

Message-ID: <20230926185738.277351-6-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  kvm: Return number of free memslots  (David Hildenbrand)

Let's return the number of free slots instead of only checking if there is a free slot. While at it, check all address spaces, which will also consider SMM under x86 correctly.

This is a preparation for memory devices that consume multiple memslots.

Message-ID: <20230926185738.277351-5-david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
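A hedged sketch of the interface change in these two commits (the function names match the pull request's commit list; exact signatures are assumptions):

    /* Before: callers could only ask "is there at least one free slot?" */
    bool kvm_has_free_slot(MachineState *ms);

    /* After: callers get a count, so a device needing N memslots can
     * check free >= N up front. */
    unsigned int kvm_get_free_memslots(void);
    unsigned int vhost_get_free_memslots(void);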
2023-10-12  softmmu/physmem: Fixup qemu_ram_block_from_host() documentation  (David Hildenbrand)

Let's fixup the documentation (e.g., removing traces of the ram_addr parameter that no longer exists) and move it to the header file while at it.

Message-ID: <20230926185738.277351-4-david@redhat.com>
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Remove vhost_backend_can_merge() callback  (David Hildenbrand)

Checking whether the memory regions are equal is sufficient: if they are equal, then most certainly the contained fd is equal.

The whole vhost-user memslot handling is suboptimal and overly complicated. We shouldn't have to look up a RAM memory region we got notified about in vhost_user_get_mr_data() using a host pointer. But that requires a bigger rework -- especially an alternative vhost_set_mem_table() backend call that simply consumes MemoryRegionSections.

For now, let's just drop vhost_backend_can_merge().

Message-ID: <20230926185738.277351-3-david@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  vhost: Rework memslot filtering and fix "used_memslot" tracking  (David Hildenbrand)

Having multiple vhost devices, some filtering out fd-less memslots and some not, can mess up the "used_memslot" accounting. Consequently, our "free memslot" checks become unreliable and we might run out of free memslots at runtime later.

An example sequence which can trigger a potential issue that involves different vhost backends (vhost-kernel and vhost-user) and hotplugged memory devices can be found at [1].

Let's make the filtering mechanism less generic and distinguish between backends that support private memslots (without an fd) and ones that only support shared memslots (with an fd). Track the used_memslots for both cases separately and use the corresponding value when required.

Note: Most probably we should filter out MAP_PRIVATE fd-based RAM regions (for example, via memory-backend-memfd,...,shared=off or as default with memory-backend-file) as well. When not using MAP_SHARED, it might not work as expected. Add a TODO for now.

[1] https://lkml.kernel.org/r/fad9136f-08d3-3fd9-71a1-502069c000cf@redhat.com

Message-ID: <20230926185738.277351-2-david@redhat.com>
Fixes: 988a27754bbb ("vhost: allow backends to filter memory sections")
Cc: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-10-12  hw/virtio/virtio-gpu: Fix compiler warning when compiling with -Wshadow  (Thomas Huth)

Avoid using trivial variable names in macros, otherwise we get the following compiler warning when compiling with -Wshadow=local:

  In file included from ../../qemu/hw/display/virtio-gpu-virgl.c:19:
  ../../home/thuth/devel/qemu/hw/display/virtio-gpu-virgl.c: In function ‘virgl_cmd_submit_3d’:
  ../../qemu/include/hw/virtio/virtio-gpu.h:228:16: error: declaration of ‘s’ shadows a previous local [-Werror=shadow=compatible-local]
    228 |         size_t s;
        |                ^
  ../../qemu/hw/display/virtio-gpu-virgl.c:215:5: note: in expansion of macro ‘VIRTIO_GPU_FILL_CMD’
    215 |     VIRTIO_GPU_FILL_CMD(cs);
        |     ^~~~~~~~~~~~~~~~~~~
  ../../qemu/hw/display/virtio-gpu-virgl.c:213:12: note: shadowed declaration is here
    213 |     size_t s;
        |            ^
  cc1: all warnings being treated as errors

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231009084559.41427-1-thuth@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
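A self-contained illustration of this class of fix; the macro and names are invented for the example, not taken from virtio-gpu.h:

    #include <stddef.h>
    #include <stdio.h>

    /* A macro-local with a trivial name like 's' shadows a caller's
     * local once the macro is expanded inside that caller: */
    #define FILL_BAD(out)                                         \
        do {                                                      \
            size_t s = sizeof(out);  /* shadows the caller's s */ \
            printf("%zu\n", s);                                   \
        } while (0)

    /* Fix: give the macro-local a name unlikely to collide. */
    #define FILL_GOOD(out)                                        \
        do {                                                      \
            size_t fill_size_ = sizeof(out);                      \
            printf("%zu\n", fill_size_);                          \
        } while (0)

    int main(void)
    {
        size_t s = 0;   /* FILL_BAD(cmd) trips -Wshadow=compatible-local */
        int cmd[4];
        FILL_BAD(cmd);
        FILL_GOOD(cmd);
        return (int)s;
    }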
2023-10-12  gdbstub: replace exit calls with proper shutdown for softmmu  (Clément Chigot)

This replaces the exit calls with shutdown requests, ensuring a proper cleanup of QEMU. Features like net/vhost-vdpa.c expect qemu_cleanup to be called to remove their last residuals.

Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231003071427.188697-6-chigot@adacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-10-12  softmmu: pass the main loop status to gdb "Wxx" packet  (Clément Chigot)

The gdb_exit function closes gdb sessions and sends the exit code of the current execution. It's called by qemu_cleanup once the main loop is over. Until now, the exit code sent was always 0. Now that hardware can shut down this main loop with custom exit codes, these codes must be transferred to gdb as well.

Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231003071427.188697-3-chigot@adacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
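In the GDB remote serial protocol, "WXX" reports that the inferior exited with status XX in hex. A hedged sketch of forming the packet (gdb_put_packet() stands in for the stub's real send path):

    /* e.g. exit code 3 becomes the packet "W03". */
    char buf[4];
    snprintf(buf, sizeof(buf), "W%02x", exit_code & 0xff);
    gdb_put_packet(buf);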
2023-10-12  softmmu: add means to pass an exit code when requesting a shutdown  (Clément Chigot)

Until now, the exit code was either EXIT_FAILURE when a panic shutdown was requested or EXIT_SUCCESS otherwise. However, some hardware could want to pass more complex exit codes. Thus, introduce a new shutdown request function allowing that.

Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231003071427.188697-2-chigot@adacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
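A hedged sketch of what such an entry point could look like; the name and signature are assumptions based on the description, not quoted from the patch (ShutdownCause is QEMU's existing enum):

    /* Request a shutdown and record the exit code the main loop should
     * eventually return. Illustrative declaration: */
    void qemu_system_shutdown_request_with_code(ShutdownCause reason,
                                                int exit_code);

    /* e.g. a test-finisher device reporting failure code 3: */
    qemu_system_shutdown_request_with_code(SHUTDOWN_CAUSE_GUEST_SHUTDOWN, 3);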
2023-10-12  cpus: Remove unused smp_cores/smp_threads declarations  (Philippe Mathieu-Daudé)

Commit a5e0b33119 ("vl.c: Replace smp global variables with smp machine properties") removed the last uses of the smp_cores / smp_threads variables but forgot to remove their declarations. Do it now.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-10-11  plugins: Set final instruction count in plugin_gen_tb_end  (Matt Borgerson)

Translation logic may partially decode an instruction, then abort and remove the instruction from the TB. This can happen for example when an instruction spans two pages. In this case, plugins may get an incorrect result when calling qemu_plugin_tb_n_insns to query for the number of instructions in the TB.

This patch updates plugin_gen_tb_end to set the final instruction count.

Signed-off-by: Matt Borgerson <contact@mborgerson.com>
[AJB: added g_assert to defed API]
Message-Id: <CADc=-s5RwGViNTR-h5cq3np673W3RRFfhr4vCGJp0EoDUxvhog@mail.gmail.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-23-alex.bennee@linaro.org>
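A hedged sketch of the shape of the fix; the parameter and field names are assumptions consistent with the description above:

    /* Pass the number of instructions that actually remained in the TB
     * and store it, so qemu_plugin_tb_n_insns() reports the final
     * count even after a partially decoded instruction was dropped. */
    void plugin_gen_tb_end(CPUState *cpu, size_t num_insns)
    {
        struct qemu_plugin_tb *ptb = tcg_ctx->plugin_tb;

        g_assert(num_insns > 0 && num_insns <= ptb->n);
        ptb->n = num_insns;
        /* ... emit the TB-end plugin callbacks as before ... */
    }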