|
This implements the basic migration support in the back end, with unit
tests that give additional confidence in the node-counting already in
the tree.
However, the existing PV back ends like xen-disk don't support migration
yet. They will reset the ring and fail to continue where they left off.
We will fix that in future, but not in time for the 8.0 release.
Since there's also an open question of whether we want to serialize the
full XenStore or only the guest-owned nodes in /local/domain/${domid},
for now just mark the XenStore device as unmigratable.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
Store perms as a GList of strings, check permissions.
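A minimal sketch of the idea in glib terms (illustrative only; these are not
the actual QEMU helpers, and the real check decodes the owner/domain from
each entry rather than matching strings). Perms are strings such as "n0"
(owner dom0, others get nothing) or "r5" (dom5 may read), kept per node as a
GList:

    #include <glib.h>
    #include <stdbool.h>

    static GList *perms_example(void)
    {
        GList *perms = NULL;
        perms = g_list_append(perms, g_strdup("n0"));   /* owner entry */
        perms = g_list_append(perms, g_strdup("r5"));   /* dom5: read  */
        return perms;
    }

    static bool perm_present(GList *perms, const char *wanted)
    {
        for (GList *l = perms; l; l = l->next) {
            if (!g_strcmp0(l->data, wanted)) {
                return true;
            }
        }
        return false;
    }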
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
Firing watches on the nodes that still exist is relatively easy; just
walk the tree and look at the nodes with a refcount of one.
Firing watches on *deleted* nodes is more fun. We add 'modified_in_tx'
and 'deleted_in_tx' flags to each node. Nodes with those flags cannot
be shared, as they will always be unique to the transaction in which
they were created.
When xs_node_walk would need to *create* a node as scaffolding and it
encounters a deleted_in_tx node, it can resurrect it simply by clearing
its deleted_in_tx flag. If that node originally had any *data*, they're
gone, and the modified_in_tx flag will have been set when it was first
deleted.
We then attempt to send appropriate watches when the transaction is
committed, properly delete the deleted_in_tx nodes, and remove the
modified_in_tx flag from the others.
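A rough sketch of the per-node state this implies (struct and field names are
assumptions for illustration, not the exact QEMU definitions):

    #include <glib.h>
    #include <stdbool.h>
    #include <assert.h>

    typedef struct XsNode {
        unsigned int ref;          /* >1 means shared with another tree */
        GByteArray *content;
        GHashTable *children;
        bool modified_in_tx;
        bool deleted_in_tx;        /* tombstone within this transaction */
    } XsNode;

    /* Resurrection while walking: flagged nodes are never shared, so it is
     * enough to clear deleted_in_tx; any old data is already gone and
     * modified_in_tx was set when the node was first deleted. */
    static void xs_node_resurrect(XsNode *n)
    {
        assert(n->ref == 1);
        n->deleted_in_tx = false;
    }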
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
Given that the whole thing supported copy on write from the beginning,
transactions end up being fairly simple. On starting a transaction, just
take a ref of the existing root; swap it back in on a successful commit.
The main tree has a transaction ID too, and we keep a record of the last
transaction ID given out. If the main tree is ever modified when it isn't
the latest, it gets a new transaction ID.
A commit can only succeed if the main tree hasn't moved on since it was
forked. Strictly speaking, the XenStore protocol allows a transaction to
succeed as long as nothing *it* read or wrote has changed in the interim,
but no implementations do that; *any* change is sufficient to abort a
transaction.
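A rough sketch of that begin/commit flow (types, fields and the unref are
assumptions, shown only to make the mechanism concrete):

    #include <errno.h>

    struct XsNode;                     /* the COW tree node */

    typedef struct XsTransaction {
        struct XsNode *root;           /* ref taken on the main root at start */
        unsigned int base_tx;          /* main tree's tx ID when we forked    */
    } XsTransaction;

    typedef struct XsState {
        struct XsNode *root;
        unsigned int root_tx;          /* changes whenever the main tree moves */
    } XsState;

    static int xs_tx_commit(XsState *s, XsTransaction *tx)
    {
        if (s->root_tx != tx->base_tx) {
            return EAGAIN;             /* main tree moved on: abort */
        }
        /* the real code would unref the old main root here */
        s->root = tx->root;            /* swap the forked tree in */
        tx->root = NULL;
        return 0;
    }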
This does not yet fire watches on the changed nodes on a commit. That bit
is more fun and will come in a follow-on commit.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
Starts out fairly simple: a hash table of watches based on the path.
Except there can be multiple watches on the same path, so the watch ends
up being a simple linked list, and the head of that list is in the hash
table. Which makes removal a bit of a PITA but it's not so bad; we just
special-case "I had to remove the head of the list and now I have to
replace it in / remove it from the hash table". And if we don't remove
the head, it's a simple linked-list operation.
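Sketched in glib terms (illustrative; the struct layout and helper are
assumptions, not the QEMU code), the removal special case looks like:

    #include <glib.h>

    typedef struct Watch {
        struct Watch *next;            /* others watching the same path */
        char *token;
    } Watch;

    static void watch_remove(GHashTable *watches, const char *path, Watch *w)
    {
        Watch *head = g_hash_table_lookup(watches, path);

        if (head == w) {
            /* removing the head: replace it in, or remove it from, the table */
            if (w->next) {
                g_hash_table_replace(watches, g_strdup(path), w->next);
            } else {
                g_hash_table_remove(watches, path);
            }
        } else {
            Watch *prev = head;
            while (prev && prev->next != w) {
                prev = prev->next;     /* simple linked-list removal */
            }
            if (prev) {
                prev->next = w->next;
            }
        }
        g_free(w->token);
        g_free(w);
    }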
We do need to fire watches on *deleted* nodes, so instead of just a simple
xs_node_unref() on the topmost victim, we need to recurse down and fire
watches on them all.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
This is a fairly simple implementation of a copy-on-write tree.
The node walk function starts off at the root, with 'inplace == true'.
If it ever encounters a node with a refcount greater than one (including
the root node), then that node is shared with other trees, and cannot
be modified in place, so the inplace flag is cleared and we copy on
write from there on down.
Xenstore write has 'mkdir -p' semantics and will create the intermediate
nodes if they don't already exist, so in that case we flip the inplace
flag back to true as we populate the newly-created nodes.
We put a copy of the absolute path into the buffer in the struct walk_op,
with *two* NUL terminators at the end. As xs_node_walk() goes down the
tree, it replaces the next '/' separator with a NUL so that it can use
the 'child name' in place. The next recursion down then puts the '/'
back and repeats the exercise for the next path element... if it doesn't
hit that *second* NUL termination which indicates the true end of the
path.
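A standalone illustration of the double-NUL trick (not the QEMU walker, which
is recursive and restores the '/' on its way back up):

    #include <stdio.h>
    #include <string.h>

    /* 'path' is a mutable copy ending in *two* NULs.  Each element is
     * NUL-terminated in place; landing on the second NUL means the whole
     * path has been consumed. */
    static void walk_elements(char *path)
    {
        char *elem = path;

        while (*elem) {                     /* second NUL => true end of path */
            char *sep = strchr(elem, '/');
            if (sep) {
                *sep = '\0';                /* use elem as the child name */
            }
            printf("element: %s\n", elem);  /* look up / create the child here */
            elem += strlen(elem) + 1;       /* step past the NUL */
        }
    }

    int main(void)
    {
        char buf[] = "local/domain/1\0";    /* string literal adds the 2nd NUL */
        walk_elements(buf);
        return 0;
    }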
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
This implements the basic wire protocol for the XenStore commands, punting
all the actual implementation to xs_impl_* functions which all just return
errors for now.
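Roughly, each stub has the shape below (the real prototypes differ; this only
shows the punt):

    #include <errno.h>
    #include <stddef.h>

    static int xs_impl_read_stub(const char *path, void *data, size_t *len)
    {
        (void)path; (void)data; (void)len;
        return ENOSYS;                     /* everything fails for now */
    }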
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
|
|
* Fix missing memory barriers
* Fix comments about memory ordering
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmQHIqoUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroPYBwgArUaS0KGrBM1XmRUUpXnJokmA37n8
# ft477na+XW+p9VYi27B0R01P8j+AkCrAO0Ir1MLG7axjn5KiRMnbf2uBgqasEREv
# repJEXsqISoxA6vvAvnehKHAI9zu8b7frRc/30b6EOrrZpn0JKePSNRTyBu2seGO
# NFDXPVA2Wom+xXaNSEGt0dmoJ6AzEVIZKhUIwyvUWOC7MXuuIkRWn9/nySUdvEt0
# RIFPPk7JCjnEc32vb4Xnq/Ncsy20tMIM1hlDxMOVNq3brjeSCzS0PPPSjE/X5OtW
# Yn5YS0nCyD7wjP2dkXI4I1lUPxUUx6LvMz1aGbJCfyjSX41mNES/agoGgA==
# =KEUo
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 07 Mar 2023 11:40:26 GMT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream-mb' of https://gitlab.com/bonzini/qemu:
async: clarify usage of barriers in the polling case
async: update documentation of the memory barriers
physmem: add missing memory barrier
qemu-coroutine-lock: add smp_mb__after_rmw()
aio-wait: switch to smp_mb__after_rmw()
edu: add smp_mb__after_rmw()
qemu-thread-win32: cleanup, fix, document QemuEvent
qemu-thread-posix: cleanup, fix, document QemuEvent
qatomic: add smp_mb__before/after_rmw()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Modified BMC FRU data in yosemite v2 platform.
Tested: Tested and Verified in yosemitev2 platform.
Fixes: 34f73a81e6 ("hw/arm/aspeed: Adding new machine Yosemitev2 in QEMU")
Signed-off-by: Karthikeyan Pasupathi <pkarthikeyan1509@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230307104833.3587947-1-pkarthikeyan1509@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
Added TMP421 type sensor support in tiogapass platform.
Tested: Tested and verified in tiogapass platform.
Signed-off-by: Karthikeyan Pasupathi <pkarthikeyan1509@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230307103334.3586755-1-pkarthikeyan1509@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
Added TMP421 type support in yosemite v2 platform.
Tested: Tested and verified in yosemite V2 platform.
Signed-off-by: Karthikeyan Pasupathi <pkarthikeyan1509@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230307095239.3583613-1-pkarthikeyan1509@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
Commit a4b15a8b introduced a new function blk_pread_nonzeroes(). Instead
of reading directly from the root node of the BlockBackend, it reads
from its 'file' child node. This can happen to mostly work for raw
images (as long as the 'raw' format driver is in use, but not actually
doing anything), but it breaks everything else.
Fix it to read from the root node instead.
Fixes: a4b15a8b9ef25b44fa92a4825312622600c1f37c
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230307140230.59158-1-kwolf@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
Currently, when a block backend is attached to a m25p80 device and the
associated file size does not match the flash model, QEMU complains
with the error message "failed to read the initial flash content".
This is confusing for the user.
Instead, use helper blk_check_size_and_read_all() introduced by commit
06f1521795 ("pflash: Require backend size to match device, improve
errors").
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221115151000.2080833-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
According to the device DMA logging uAPI, IOVA ranges to be logged by
the device must be provided all at once upon DMA logging start.
As preparation for the following patches which will add device dirty
page tracking, keep a record of all DMA mapped IOVA ranges so later they
can be used for DMA logging start.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-10-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
In preparation for use in device dirty tracking, move the code that
calculates an iova/end range from the container/section. This avoids
duplicating the common checks across listener callbacks.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-9-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
The checks are replicated in region_add and region_del,
and will soon be added in another memory listener dedicated
to dirty tracking.
Move these into a new helper to avoid duplication.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Link: https://lore.kernel.org/r/20230307125450.62409-8-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
In preparation to turn more of the memory listener checks into
common functions, one of the affected places is how we trace when
sections are skipped. Right now there is one tracepoint for each. Change it
into a single tracepoint, `vfio_listener_region_skip`, which receives
a name that refers to the callback, i.e. region_add or region_del.
Suggested-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-7-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
Move the code that finds the container host DMA window against an iova
range. This avoids duplication on the common checks across listener
callbacks.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Link: https://lore.kernel.org/r/20230307125450.62409-6-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
There are already two places where dirty page bitmap allocation and
calculations are done in open code.
To avoid code duplication, introduce VFIOBitmap struct and corresponding
alloc function and use them where applicable.
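A hedged sketch of what such a struct and allocator could look like (field
names and the rounding are assumptions, not the exact QEMU code):

    #include <glib.h>
    #include <errno.h>
    #include <stdint.h>

    typedef struct {
        uint64_t pages;                /* pages covered by the bitmap */
        size_t size;                   /* bitmap size in bytes        */
        unsigned long *bitmap;         /* one dirty bit per page      */
    } VFIOBitmapSketch;

    static int vfio_bitmap_alloc_sketch(VFIOBitmapSketch *b, uint64_t bytes,
                                        uint64_t page_size)
    {
        b->pages = (bytes + page_size - 1) / page_size;
        b->size = ((b->pages + 63) / 64) * sizeof(uint64_t);  /* round to u64 */
        b->bitmap = g_try_malloc0(b->size);
        return b->bitmap ? 0 : -ENOMEM;
    }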
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-5-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
If VFIO dirty pages log start/stop/sync fails during migration,
migration should be aborted as pages dirtied by VFIO devices might not
be reported properly.
This is not the case today: in such a scenario, only an error is
printed.
Fix it by aborting migration in the above scenario.
Fixes: 758b96b61d5c ("vfio/migrate: Move switch of dirty tracking into vfio_memory_listener")
Fixes: b6dd6504e303 ("vfio: Add vfio_listener_log_sync to mark dirty pages")
Fixes: 9e7b0442f23a ("vfio: Add ioctl to get dirty pages bitmap during dma unmap")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-4-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
There are several places where the %m conversion is used if one of
vfio_dma_map(), vfio_dma_unmap() or vfio_get_dirty_bitmap() fail.
The %m usage in these places is wrong, since %m relies on the errno value
while the above functions don't report errors via errno.
Fix it by using strerror() with the returned value instead.
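The change is essentially the following (a hedged illustration; the exact
messages in the real code differ):

    /* Wrong: %m decodes errno, which these callers never set. */
    error_report("vfio_dma_unmap() failed: %m");

    /* Right: decode the value the function actually returned. */
    error_report("vfio_dma_unmap() failed: %s", strerror(-ret));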
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-3-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
Return -errno instead of -1 if VFIO_IOMMU_DIRTY_PAGES ioctl fails in
vfio_get_dirty_bitmap().
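Roughly (a sketch, not the exact diff):

    #include <sys/ioctl.h>
    #include <errno.h>
    #include <linux/vfio.h>

    static int dirty_pages_ioctl(int container_fd,
                                 struct vfio_iommu_type1_dirty_bitmap *db)
    {
        int ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, db);
        return ret < 0 ? -errno : 0;   /* real error instead of -1 */
    }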
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/r/20230307125450.62409-2-joao.m.martins@oracle.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
Hardly anybody still uses 32-bit x86 environments for running QEMU with
full system emulation, so let's stop wasting our scarce CI minutes with
this job.
(There are still the 32-bit MinGW and TCI jobs around to keep some
compile-test coverage on 32-bit.)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Message-Id: <20230306084658.29709-4-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
Hardly anybody still uses 32-bit x86 hosts today, so we should start
deprecating them to stop wasting our time and CI minutes here.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Message-Id: <20230306084658.29709-2-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
nmi.h and notify.h are not needed here; drop them to speed up
compilation a little bit.
Message-Id: <20230210111438.1114600-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230301104450.1017-1-quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
Hexagon's idef-parser machinery uses some bison features that are not
available in older versions. The most prominent example (as it can
be used as a sentinel) is "%define parse.error verbose". This was
introduced in version 3.0 of the tool, which is able to compile
qemu-hexagon just fine. However, compilation fails with the previous
minor bison release, v2.7. So let's assert the minimum version in
meson.build to give a more comprehensible error message for those trying
to compile QEMU.
[1]: https://www.gnu.org/software/bison/manual/html_node/_0025define-Summary.html#index-_0025define-parse_002eerror
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alessandro Di Federico <ale@rev.ng>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <a6763f9f7b89ea310ab86f9a2b311a05254a1acd.1675779233.git.quic_mathbern@quicinc.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
For long-term distributions that release a new version only very
seldom, we limit the support to five years after the initial release.
Otherwise, we might need to support distros like openSUSE 15 for
up to 7 or even more years in total due to our "two more years
after the next major release" rule, which is just way too much to
handle in a project like QEMU that only has limited human resources.
Message-Id: <20230223193257.1068205-1-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
https://gitlab.com/palmer-dabbelt/qemu into staging
Sixth RISC-V PR for 8.0
* Support for the Zicbom, Zicboz, and Zicbop extensions.
* OpenSBI has been updated to version 1.2, see
<https://github.com/riscv-software-src/opensbi/releases/tag/v1.2> for
the release notes.
* Support for setting the virtual address width (i.e. sv39/sv48/sv57) on
the command line.
* Support for ACPI on RISC-V.
# -----BEGIN PGP SIGNATURE-----
#
# iQJHBAABCAAxFiEEKzw3R0RoQ7JKlDp6LhMZ81+7GIkFAmQGYGgTHHBhbG1lckBk
# YWJiZWx0LmNvbQAKCRAuExnzX7sYidmyEAC6FEMbbFM5D++qR6w6xM6hXgzcrev6
# s1kyRRNVa45uSA78ti/Zi0hsDLNf7ZsNPndF0OIkkO5iAE0OVm3LU7tV1TqKcT82
# Dd9VXxe93zEmfnuJazHrMa54SXPhhnNdWHtKlZ6vBfZpbxgx0FFs50xkCsrM5LQZ
# hYHxQUqPWQTvF2MdDHrxCuLcdKl+Wg3ysCcgRh2d049KUBrIu6vNaHC2+AGRjCbj
# BkrGCkB82fTmVJjzAcVWQxLoAV12pCbJS4og1GtP8hA7WevtB39tbPin9siBKRZp
# QBeiIsg0nebkpmZGrb+xWVwlIBNe9yYwJa0KmveQk8v7L5RIzjM1mtDL91VrVljC
# KC2tfT570m0Iq2NoFMb3wd/kESHFzVDM/g+XYqRd4KSoiCNP/RbqYNQBwbMc31Tr
# E27xfA1D8w2vem0Rk20x3KgPf1Z5OmGXjq6YObTpnAzG8cZlA37qKBP+ortt5aHX
# GZSg3CAwknHHVajd4aaegkPsHxm1tRvoTfh38MwkPSNxaA9GD0nz0k9xaYDmeZ2L
# olfanNsaQEwcVUId31+7sAENg1TZU0fnj879/nxkMUCazVTdL8/mz+IoTTx0QCST
# 3+9ATWcyJUlmjbDKIs7kr1L+wJdvvHEJggPAbbPI8ekpXaLZvUYOT6ObzYKNAmwY
# wELQBn8QKXcLVA==
# =5gAt
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 Mar 2023 21:51:36 GMT
# gpg: using RSA key 2B3C3747446843B24A943A7A2E1319F35FBB1889
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmerdabbelt@google.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
# Subkey fingerprint: 2B3C 3747 4468 43B2 4A94 3A7A 2E13 19F3 5FBB 1889
* tag 'pull-riscv-to-apply-20230306' of https://gitlab.com/palmer-dabbelt/qemu: (22 commits)
MAINTAINERS: Add entry for RISC-V ACPI
hw/riscv/virt.c: Initialize the ACPI tables
hw/riscv/virt: virt-acpi-build.c: Add RHCT Table
hw/riscv/virt: virt-acpi-build.c: Add RINTC in MADT
hw/riscv/virt: Enable basic ACPI infrastructure
hw/riscv/virt: Add memmap pointer to RiscVVirtState
hw/riscv/virt: Add a switch to disable ACPI
hw/riscv/virt: Add OEM_ID and OEM_TABLE_ID fields
riscv: Correctly set the device-tree entry 'mmu-type'
riscv: Introduce satp mode hw capabilities
riscv: Allow user to set the satp mode
riscv: Change type of valid_vm_1_10_[32|64] to bool
riscv: Pass Object to register_cpu_props instead of DeviceState
roms/opensbi: Upgrade from v1.1 to v1.2
gitlab/opensbi: Move to docker:stable
hw: intc: Use cpu_by_arch_id to fetch CPU state
target/riscv: cpu: Implement get_arch_id callback
disas/riscv Fix ctzw disassemble
hw/riscv/virt.c: add cbo[mz]-block-size fdt properties
target/riscv: add Zicbop cbo.prefetch{i, r, m} placeholder
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Explain that aio_context_notifier_poll() relies on
aio_notify_accept() to catch all the memory writes that were
done before ctx->notified was set to true.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Ever since commit 8c6b0356b539 ("util/async: make bh_aio_poll() O(1)",
2020-02-22), synchronization between qemu_bh_schedule() and aio_bh_poll()
is happening when the bottom half is enqueued in the bh_list; not
when the flags are set. Update the documentation to match.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
mutex->from_push and mutex->handoff in qemu-coroutine-lock implement
the familiar pattern:
    write a                 write b
    smp_mb()                smp_mb()
    read b                  read a
The memory barrier is required by the C memory model even after a
SEQ_CST read-modify-write operation such as QSLIST_INSERT_HEAD_ATOMIC.
Add it and avoid the unclear qatomic_mb_read() operation.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The barrier comes after an atomic increment, so it is enough to use
smp_mb__after_rmw(); this avoids a double barrier on x86 systems.
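i.e. the pattern becomes roughly (a sketch; the field name is taken from
context and is an assumption):

    qatomic_inc(&wait->num_waiters);
    smp_mb__after_rmw();    /* the RMW plus this acts as a full barrier; the
                             * "after" half compiles to nothing on x86 */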
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Ensure ordering between clearing the COMPUTING flag and checking
IRQFACT, and between setting the IRQFACT flag and checking
COMPUTING. This ensures that no wakeups are lost.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*(). Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers that are not really needed and
complicate the functions unnecessarily. Also, it is relying on
a memory barrier in ResetEvent(); the barrier _ought_ to be there
but there is really no documentation about it, so make it explicit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
QemuEvent is currently broken on ARM due to missing memory barriers
after qatomic_*(). Apart from adding the memory barrier, a closer look
reveals some unpaired memory barriers too. Document more clearly what
is going on.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
On ARM, seqcst loads and stores (which QEMU does not use) are compiled
respectively as LDAR and STLR instructions. Even though LDAR is
also used for load-acquire operations, it also waits for all STLRs to
leave the store buffer. Thus, LDAR and STLR alone are load-acquire
and store-release operations, but LDAR also provides store-against-load
ordering as long as the previous store is a STLR.
Compare this to ARMv7, where store-release is DMB+STR and load-acquire
is LDR+DMB, but an additional DMB is needed between store-seqcst and
load-seqcst (e.g. DMB+STR+DMB+LDR+DMB); or with x86, where MOV provides
load-acquire and store-release semantics and the two can be reordered.
Likewise, on ARM sequentially consistent read-modify-write operations only
need to use LDAXR and STLXR respectively for the load and the store, while
on x86 they need to use the stronger LOCK prefix.
In a strange twist of events, however, the _stronger_ semantics
of the ARM instructions can end up causing bugs on ARM, not on x86.
The problems occur when seqcst atomics are mixed with relaxed atomics.
QEMU's atomics try to bridge the Linux API (that most of the developers
are familiar with) and the C11 API, and the two have a substantial
difference:
- in Linux, strongly-ordered atomics such as atomic_add_return() affect
the global ordering of _all_ memory operations, including for example
READ_ONCE()/WRITE_ONCE()
- in C11, sequentially consistent atomics (except for seq-cst fences)
only affect the ordering of sequentially consistent operations.
In particular, since relaxed loads are done with LDR on ARM, they are
not ordered against seqcst stores (which are done with STLR).
QEMU implements high-level synchronization primitives with the idea that
the primitives contain the necessary memory barriers, and the callers can
use relaxed atomics (qatomic_read/qatomic_set) or even regular accesses.
This is very much incompatible with the C11 view that seqcst accesses
are only ordered against other seqcst accesses, and requires using seqcst
fences as in the following example:
    qatomic_set(&y, 1);                  qatomic_set(&x, 1);
    smp_mb();                            smp_mb();
    ... qatomic_read(&x) ...             ... qatomic_read(&y) ...
When a qatomic_*() read-modify-write operation is used instead of one
or both stores, developers who are more familiar with the Linux API may
be tempted to omit the smp_mb(), which will work on x86 but not on ARM.
This nasty difference between Linux and C11 read-modify-write operations
has already caused issues in util/async.c and more are being found.
Provide something similar to Linux smp_mb__before/after_atomic(); this
has the double function of documenting clearly why there is a memory
barrier, and avoiding a double barrier on x86 and s390x systems.
The new macro can already be put to use in qatomic_mb_set().
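Continuing the example above, a hedged sketch of where the new macro slots in
when one of the stores is a read-modify-write:

    qatomic_set(&y, 1);                  qatomic_fetch_inc(&x);
    smp_mb();                            smp_mb__after_rmw();
    ... qatomic_read(&x) ...             ... qatomic_read(&y) ...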
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* allwinner-h3: Fix I2C controller model for Sun6i SoCs
* allwinner-h3: Add missing i2c controllers
* Expose M-profile system registers to gdbstub
* Expose pauth information to gdbstub
* Support direct boot for Linux/arm64 EFI zboot images
* Fix incorrect stage 2 MMU setup validation
# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmQGB+wZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3gdQEACVfgbs77mxbOb6u8yWHKGZ
# tVnQr9KZMv2lmwt5H3ROJPXznchrIIAwdMeRgKnbI+lC5jTq9L+Q8RJch3t/EbAd
# f0VMyiPe3DzCbCrAR9cW6EWzbYnEVo3Ioj4k7qjxK6u1BIKhXz99DLYd1KRdTxnx
# BAYmcl857Uir1q2FrBVMZ/ItCLbk4ejn+YaDIawNue2/s1oGa+we473x9rosCFvp
# L9bzT3R46e0o+Mfkn1OYRmgCmURTalWPpWAxyOUFR9YbrzXleLgAKEB3o3PPcvls
# u26uxztyRMqje1q06VjUzwaLw7zN9XPhmir+NXX7KXp2/x9PZjApOpPtt0kl+6qe
# FbByKfl24O9w/OKewsJw+udCBYdYrRPm6tWv2D71iAwjBUzBJgNGe5VPRdPFtPDn
# uSRO65o34w1nPzRpAheUciZueiabYrVmIgVltFxj0JlrKGfgiYHPLVyU0Uu0K/A7
# F2kUEQIzIcWdo+c8SlvlWOEA2ojVd/KoLVLgndqr40Tk5pbc65TRS08kkVVl4cMT
# jUGscl7Dyxe+yo8+nHdycAJpnKYDllJOh2JbGv3r2FqCy5FMuIqW4hHeuUxwpE+O
# nxm7lzjnaVHSAFHdzhk9x4E4uH/GTcdWzX1EsmpgGqe5oejLJOrCINb+Dj44+Y8h
# 8aGRvE7kxMs11upxc7BcAw==
# =KIMt
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 Mar 2023 15:34:04 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# gpg: aka "Peter Maydell <peter@archaic.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* tag 'pull-target-arm-20230306' of https://git.linaro.org/people/pmaydell/qemu-arm: (21 commits)
hw: arm: allwinner-h3: Fix and complete H3 i2c devices
hw: allwinner-i2c: Fix TWI_CNTR_INT_FLAG on SUN6i SoCs
hw: arm: Support direct boot for Linux/arm64 EFI zboot images
target/arm: Rewrite check_s2_mmu_setup
target/arm: Diagnose incorrect usage of arm_is_secure subroutines
target/arm: Stub arm_hcr_el2_eff for m-profile
target/arm: Handle m-profile in arm_is_secure
target/arm: Implement gdbstub m-profile systemreg and secext
target/arm: Export arm_v7m_get_sp_ptr
target/arm: Export arm_v7m_mrs_control
target/arm: Implement gdbstub pauth extension
target/arm: Create pauth_ptr_mask
target/arm: Simplify iteration over bit widths
target/arm: Add name argument to output_vector_union_type
target/arm: Fix svep width in arm_gen_dynamic_svereg_xml
target/arm: Hoist pred_width in arm_gen_dynamic_svereg_xml
target/arm: Simplify register counting in arm_gen_dynamic_svereg_xml
target/arm: Split out output_vector_union_type
target/arm: Move arm_gen_dynamic_svereg_xml to gdbstub64.c
target/arm: Unexport arm_gen_dynamic_sysreg_xml
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
staging
hw/nvme updates
* basic support for directives
* simple support for endurance groups
* emulation of flexible data placement (tp4146)
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEEUigzqnXi3OaiR2bATeGvMW1PDekFAmQF+doACgkQTeGvMW1P
# DenIEgf+MLsRQ3kKUmsgVNnPuR69M0COfyaz0AnfX6YEIL9ukFJQPsmASfPmHof5
# tCYIFyKEpZt/givmzSI1jdpm0uX2MRwLGLYRdNhEPVjo+TfGda15x7DgpBEduqjq
# mChUS2wrmgP9TZne+kTAU28pUpU7hcfrt1RkDOO86W8oJmpBeIyGe6vikVhQppKW
# fAIKvhNfN3p5Kxq1fhE6I5YzKd2vvKtBvPpZp2uFe6LHXEcVV/FPcTx3Ph+um/o6
# ScmmxowT4Wqk4EgXh1ohephlxB89aWgwLNHLHcfte6UCU9x4eSmTC2T3pf7piBaE
# pGLpzPoYk6BAurwrMuxCxYgStl6SzQ==
# =CNSk
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 Mar 2023 14:34:02 GMT
# gpg: using RSA key 522833AA75E2DCE6A24766C04DE1AF316D4F0DE9
# gpg: Good signature from "Klaus Jensen <its@irrelevant.dk>" [full]
# gpg: aka "Klaus Jensen <k.jensen@samsung.com>" [full]
# Primary key fingerprint: DDCA 4D9C 9EF9 31CC 3468 4272 63D5 6FC5 E55D A838
# Subkey fingerprint: 5228 33AA 75E2 DCE6 A247 66C0 4DE1 AF31 6D4F 0DE9
* tag 'nvme-next-pull-request' of https://gitlab.com/birkelund/qemu:
hw/nvme: flexible data placement emulation
hw/nvme: basic directives support
hw/nvme: add basic endurance group support
hw/nvme: store a pointer to the NvmeSubsystem in the NvmeNamespace
hw/nvme: move adjustment of data_units{read,written}
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging
Xen queue:
- fix for graphic passthrough with 'xenfv' machine
- fix uninitialized variable
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEE+AwAYwjiLP2KkueYDPVXL9f7Va8FAmQF8fgACgkQDPVXL9f7
# Va/nSQf/XVfmhe2W1ailKJxuvGeMLRW/tmY/dsNAZNXXBMjRYEaF4Eps51pjYdb7
# 6UUY/atT1fm9v/AYhxc+k8weIE/mxCDbaRStQUzHlrWPof1NsmEeYZ3NVdVq5w7s
# FmDCR+yiP2tcrBPhPD0aFBB7Lsayfy0P5qLFMMeeerlkZmk1O3fB04EKtus3YD1r
# hVSH+H8i5b8vg0d/5fGGrRzKalh5E2xGGUfz4ukp3+AYWNCl2m65K0JsX42+G79b
# Cg+OpeNp9CEXZSUvkfVoRxH9OJp6GpGZIHA9U3nvH31KR4OnDeCSZuCiPvoUuvZT
# Q0fd8eA4DRTEtt9gJ+ecQEpON5dcSA==
# =kvNV
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 06 Mar 2023 14:00:24 GMT
# gpg: using RSA key F80C006308E22CFD8A92E7980CF5572FD7FB55AF
# gpg: Good signature from "Anthony PERARD <anthony.perard@gmail.com>" [marginal]
# gpg: aka "Anthony PERARD <anthony.perard@citrix.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5379 2F71 024C 600F 778A 7161 D8D5 7199 DF83 42C8
# Subkey fingerprint: F80C 0063 08E2 2CFD 8A92 E798 0CF5 572F D7FB 55AF
* tag 'pull-xen-20230306' of https://xenbits.xen.org/git-http/people/aperard/qemu-dm:
hw/xen/xen_pt: fix uninitialized variable
xen/pt: reserve PCI slot 2 for Intel igd-passthru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The following improvements are made for predicated HVX instructions:
* During gen_commit_hvx, unconditionally move the "new" value into the dest
* Don't set slot_cancelled
* Remove runtime bookkeeping of which registers were updated
* Reduce the cases where gen_log_vreg_write[_pair] is called; it's only
  needed for special operands VxxV and VyV
* Remove gen_log_qreg_write
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-15-tsimpson@quicinc.com>
|
|
We only need to track the slot for predicated stores and predicated HVX
instructions.
Add arguments to the probe helper functions to indicate if the slot
is predicated.
Here is a simple example of the differences in the TCG code generated:
IN:
0x00400094: 0xf900c102 { if (P0) R2 = and(R0,R1) }
BEFORE
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
mov_i32 r2,new_r2
AFTER
---- 00400094
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
set_label $L2
mov_i32 r2,new_r2
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-14-tsimpson@quicinc.com>
|
|
We assign the instruction destination register to hex_new_value[num]
instead of a TCG temp that gets copied back to hex_new_value[num].
We introduce new functions get_result_gpr[_pair] to facilitate getting
the proper destination register.
Since we preload hex_new_value for predicated instructions, we don't
need the check for slot_cancelled. So, we call gen_log_reg_write instead.
We update the helper function generation and gen_tcg.h to maintain the
disable-hexagon-idef-parser configuration.
Here is a simple example of the differences in the TCG code generated:
IN:
0x00400094: 0xf900c102 { if (P0) R2 = and(R0,R1) }
BEFORE
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
mov_i32 loc2,$0x0
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 loc2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
and_i32 tmp0,slot_cancelled,$0x8
movcond_i32 new_r2,tmp0,$0x0,loc2,new_r2,eq
mov_i32 r2,new_r2
AFTER
---- 00400094
mov_i32 slot_cancelled,$0x0
mov_i32 new_r2,r2
and_i32 tmp0,p0,$0x1
brcond_i32 tmp0,$0x0,eq,$L1
and_i32 tmp0,r0,r1
mov_i32 new_r2,tmp0
br $L2
set_label $L1
or_i32 slot_cancelled,slot_cancelled,$0x8
set_label $L2
mov_i32 r2,new_r2
We'll remove the unnecessary manipulation of slot_cancelled in a
subsequent patch.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-13-tsimpson@quicinc.com>
|
|
The F2_sffms instruction [r0 -= sfmpy(r1, r2)] doesn't properly
handle -0. Previously we would negate the input operand by subtracting
from zero. Instead, we negate by changing the sign bit.
Test case added to tests/tcg/hexagon/fpstuff.c
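The idea, in a hedged standalone form (QEMU's softfloat works on the raw
bits; this sketch just toggles the IEEE sign bit directly):

    #include <stdint.h>

    /* Toggling the sign bit maps +0.0 to -0.0 (and vice versa), unlike
     * computing 0.0 - x, which loses the sign of zero. */
    static inline uint32_t sf_negate_bits(uint32_t f)
    {
        return f ^ 0x80000000u;
    }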
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-12-tsimpson@quicinc.com>
|
|
Made possible by the new toolchain container.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-11-tsimpson@quicinc.com>
|
|
* Replace __builtin_* with inline assembly; the __builtin's are subject
  to change with different compiler releases, so they might break
* Mark arrays as aligned when accessed as HVX vectors
* Clean up comments
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-10-tsimpson@quicinc.com>
|
|
Add control registers (c4, c5) to the clobbers list.
Made possible by the new toolchain container.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-9-tsimpson@quicinc.com>
|
|
* Extend the analyze_<tag> functions for HVX vector and predicate writes
* Remove calls to ctx_log_vreg_write[_pair] from gen_tcg_funcs.py
* During gen_start_packet, reload the predicated HVX registers into
  future_VRegs and tmp_VRegs
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-8-tsimpson@quicinc.com>
|
|
The pkt_has_store_s1 field in CPUHexagonState is only needed in generated
helpers for scalar load instructions. See check_noshuf and mem_load[1248]
in op_helper.c.
We add logic in gen_analyze_funcs.py to set need_pkt_has_store_s1 in
DisasContext when it is needed at runtime.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230307025828.1612809-7-tsimpson@quicinc.com>
|