2021-05-26  qemu-config: load modules when instantiating option groups  (Paolo Bonzini)

Right now the SPICE module is special-cased to be loaded while processing the -spice command line option. However, the spice option group can also be brought in via -readconfig, in which case the module is not loaded. Add a generic hook to load modules that provide a QemuOpts group, and use it for the "spice" and "iscsi" groups.

Fixes: #194
Fixes: https://bugs.launchpad.net/qemu/+bug/1910696
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
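For illustration, the -readconfig path that previously missed the module load looks roughly like this (the option values are arbitrary):

    # spice.cfg
    [spice]
      port = "5930"
      addr = "127.0.0.1"

    $ qemu-system-x86_64 -readconfig spice.cfg ...

With the new hook, instantiating the "spice" option group from the config file pulls in the SPICE module just as -spice does.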
2021-05-26  vl: allow not specifying size in -m when using -M memory-backend  (Paolo Bonzini)

Starting with QEMU 6.0's commit f5c9fcb82d ("vl: separate qemu_create_machine", 2020-12-10), the return value of set_memory_options() was replaced by a new function, have_custom_ram_size(). The purpose of the return value was to record the presence of "-m size" and, if it was not there, change the default RAM size to the size of the memory backend passed with "-M memory-backend". With that commit, however, have_custom_ram_size() is queried only after set_memory_options() has stored the fixed-up RAM size in QemuOpts for "future use". This was actually the only future use of the fixed-up RAM size, so remove that code and fix the bug.

Cc: qemu-stable@nongnu.org
Fixes: f5c9fcb82d ("vl: separate qemu_create_machine", 2020-12-10)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
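An example of the invocation this fixes, where the RAM size should default to the backend's size (backend id and size are illustrative):

    $ qemu-system-x86_64 \
        -object memory-backend-ram,id=mem0,size=4G \
        -M pc,memory-backend=mem0

With no -m option given, the guest gets 4G of RAM taken from the backend rather than the machine default.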
2021-05-26  replication: move include out of root directory  (Paolo Bonzini)

The replication.h file is included from migration/colo.c and tests/unit/test-replication.c, so it should be in include/.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  remove qemu-options* from root directory  (Paolo Bonzini)

These headers are also included from softmmu/vl.c, so they should be in include/. Remove qemu-options-wrapper.h, since elsewhere we include "template" headers directly and #define the parameters in the including file; move qemu-options.h to include/.

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  meson: Set implicit_include_directories to false  (Katsuhiro Ueno)

Without this, libvixl cannot be compiled with the macOS 11.3 SDK due to an include file name conflict (usr/include/c++/v1/version conflicts with VERSION).

Signed-off-by: Katsuhiro Ueno <uenobk@gmail.com>
Message-Id: <CA+pCdY09+OQfXq3YmRNuQE59ACOq7Py2q4hqOwgq4PnepCXhTA@mail.gmail.com>
Tested-by: Alexander Graf <agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  tests/qtest/fuzz: Fix build failure  (Philippe Mathieu-Daudé)

On Fedora 32, using clang (version 10.0.1-3.fc32) we get:

  tests/qtest/fuzz/fuzz.c:237:5: error: implicit declaration of function
  'qemu_init' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
      qemu_init(result.we_wordc, result.we_wordv, NULL);
      ^

qemu_init() is declared in "sysemu/sysemu.h", include this header to fix.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210513162008.3922223-1-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Dirty ring support  (Peter Xu)

The KVM dirty ring is a new interface for passing dirty bits from the kernel to userspace. Instead of using a bitmap for each memory region, the dirty ring contains an array of dirtied GPAs to fetch (in the form of offsets within slots). For each vcpu there is one dirty ring bound to it.

kvm_dirty_ring_reap() is the major function to collect dirty rings. It can be called either by a standalone reaper thread that runs in the background, collecting dirty pages for the whole VM, or directly by any thread that holds the BQL.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-11-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
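For orientation, here is a minimal sketch of what harvesting one vcpu's ring looks like. The entry layout and flag values follow the KVM UAPI (struct kvm_dirty_gfn in <linux/kvm.h>); the collection callback and the helper itself are hypothetical, not QEMU's actual code:

    #include <stdint.h>

    #define KVM_DIRTY_GFN_F_DIRTY  (1u << 0)   /* entry published by the kernel */
    #define KVM_DIRTY_GFN_F_RESET  (1u << 1)   /* entry harvested, may be reused */

    struct kvm_dirty_gfn {
        uint32_t flags;
        uint32_t slot;     /* as_id in the upper 16 bits, slot id in the lower 16 */
        uint64_t offset;   /* page offset within that memslot */
    };

    /* Walk the mmap'ed ring of one vcpu, hand every dirtied page to a
     * collection callback, and mark the entries as harvested.  This is
     * roughly the shape of what kvm_dirty_ring_reap() does per vcpu. */
    static int reap_one_ring(struct kvm_dirty_gfn *ring, uint32_t ring_size,
                             uint32_t *fetch_index,
                             void (*collect)(uint32_t slot, uint64_t offset))
    {
        int count = 0;

        while (ring[*fetch_index % ring_size].flags & KVM_DIRTY_GFN_F_DIRTY) {
            struct kvm_dirty_gfn *e = &ring[*fetch_index % ring_size];

            collect(e->slot, e->offset);        /* set dirty bits on the QEMU side */
            e->flags = KVM_DIRTY_GFN_F_RESET;   /* hand the entry back to KVM */
            (*fetch_index)++;
            count++;
        }
        /* The caller then issues the KVM_RESET_DIRTY_RINGS ioctl so the kernel
         * re-protects the pages and reclaims the harvested entries. */
        return count;
    }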
2021-05-26  KVM: Disable manual dirty log when dirty ring enabled  (Peter Xu)

KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 enables KVM_CLEAR_DIRTY_LOG, which is only useful together with KVM_GET_DIRTY_LOG. Skip enabling it when the kvm dirty ring is in use.

More importantly, KVM_DIRTY_LOG_INITIALLY_SET will not write-protect all the pages initially, which is against how the kvm dirty ring is used: there is no way for the dirty ring to re-protect a page before it has first been reported as written via a GFN entry in the ring! So when KVM_DIRTY_LOG_INITIALLY_SET is enabled together with the dirty ring, we'll see silent data loss after migration.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-10-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
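A rough sketch of the guard this describes, as an illustrative fragment rather than the verified hunk (the state-field name is an assumption; kvm_check_extension() and kvm_vm_enable_cap() are existing KVM helpers in QEMU):

    /* Only negotiate KVM_CLEAR_DIRTY_LOG support when the bitmap-based
     * KVM_GET_DIRTY_LOG path is in use; with the dirty ring enabled the
     * manual-protect caps (including KVM_DIRTY_LOG_INITIALLY_SET) stay off,
     * otherwise never-written pages are silently lost on migration. */
    if (!s->kvm_dirty_ring_size) {
        uint64_t caps = kvm_check_extension(s, KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
        if (caps) {
            kvm_vm_enable_cap(s, KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2, 0, caps);
        }
    }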
2021-05-26  KVM: Add dirty-ring-size property  (Peter Xu)

Add a parameter for the dirty gfn count of the dirty rings. If zero, the dirty ring is disabled; otherwise the dirty ring is enabled with the specified per-vcpu gfn count. If the dirty ring cannot be enabled due to an unsupported kernel or an illegal parameter, it will fall back to dirty logging. By default, the dirty ring is not enabled (dirty-gfn-count defaults to 0).

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-9-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
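A usage example with the new accelerator property (the value, i.e. the per-vcpu number of ring entries, is illustrative):

    $ qemu-system-x86_64 -accel kvm,dirty-ring-size=4096 ...

Leaving the property out, or setting it to 0, keeps the traditional dirty-bitmap logging.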
2021-05-26  KVM: Cache kvm slot dirty bitmap size  (Peter Xu)

Cache it too because we'll reference it more frequently in the future.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-8-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Simplify dirty log sync in kvm_set_phys_mem  (Peter Xu)

Calling kvm_physical_sync_dirty_bitmap() on the whole section is inaccurate, because the section can be a superset of the memslot that we're working on. The result is that if the section covers multiple kvm memslots, we could end up doing the synchronization multiple times, once for each kvm memslot in the section. With the two helpers that we just introduced, it is now easy to do this correctly by calling them.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-7-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Provide helper to sync dirty bitmap from slot to ramblock  (Peter Xu)

kvm_physical_sync_dirty_bitmap() calculates the ramblock offset in an awkward way from the MemoryRegionSection passed in by the caller. In fact, for each KVMSlot the ramblock offset never changes over its lifetime, so cache the ramblock offset in the structure when the KVMSlot is created. With that, we can further simplify kvm_physical_sync_dirty_bitmap() with a helper that syncs a specific KVMSlot's dirty bitmap to its ramblock dirty bitmap.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-6-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Provide helper to get kvm dirty log  (Peter Xu)

Provide a helper, kvm_slot_get_dirty_log(), to make kvm_physical_sync_dirty_bitmap() clearer. We can also cache the as_id in the KVMSlot when it is created, so that it does not need to be passed down every time. While at it, remove the return value of kvm_physical_sync_dirty_bitmap(), because it should never fail.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-5-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Create the KVMSlot dirty bitmap on flag changes  (Peter Xu)

Previously there were two places that would create the per-KVMSlot dirty bitmap:

  1. When a newly created KVMSlot has dirty logging enabled,
  2. When the first log_sync() happens for a memory slot.

The second case is lazy initialization, while the first is not (it fixes what the second case missed). To initialize the dirty bitmaps explicitly, what is missing is creating the bitmap when a slot changes from not-dirty-tracked to dirty-tracked. Do that in kvm_slot_update_flags(). With that, we can safely remove the second, lazy initialization.

This change will be needed for the kvm dirty ring, because the dirty ring does not use the log_sync() interface at all. Also move all the pre-checks into kvm_slot_init_dirty_bitmap().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  KVM: Use a big lock to replace per-kml slots_lock  (Peter Xu)

A per-KML slots_lock brings trouble if we want to take the slots_lock of all KMLs, especially in a context where some of the KML slots_locks may already be held: we would then need to figure out which ones we have taken and which ones we still need to take. Make this simple by merging all KML slots_locks into a single slots lock.

A per-KML slots_lock is not that helpful anyway: so far only x86 has two address spaces (and thus two slots_locks). All other architectures always have a single address space, which means there is effectively only one slots_lock already, so for them nothing changes.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-3-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
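A minimal sketch of the shape of the change, not the exact patch (QemuMutex and qemu_mutex_lock/unlock come from "qemu/thread.h"; the lock and helper names are assumptions):

    /* Before: qemu_mutex_lock(&kml->slots_lock) on each KVMMemoryListener.
     * After: one global lock covering the memslots of every listener. */
    static QemuMutex kml_slots_lock;

    static inline void kvm_slots_lock(void)
    {
        qemu_mutex_lock(&kml_slots_lock);
    }

    static inline void kvm_slots_unlock(void)
    {
        qemu_mutex_unlock(&kml_slots_lock);
    }

Taking "all slots_locks" then degenerates to taking a single lock, so callers no longer need to track which per-KML locks they already hold.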
2021-05-26  memory: Introduce log_sync_global() to memory listener  (Peter Xu)

Some memory listeners may want to do log synchronization without being able to specify a range of memory to sync, only globally. Such a memory listener should provide this new method instead of the log_sync() method.

Obviously we could achieve something similar by putting the global sync logic into a log_sync() handler. However, that is not efficient enough, because memory_global_dirty_log_sync() would then do the global sync N times, where N is the number of flat ranges in the address space. Make this new method mutually exclusive with log_sync().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
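As a hedged sketch, a listener that can only sync globally would hook the new callback like this (the callback body is a placeholder; only the log_sync_global field name comes from this change):

    /* Invoked once per global sync, instead of once per flat range. */
    static void my_log_sync_global(MemoryListener *listener)
    {
        /* e.g. reap all dirty rings and update the ramblock dirty
         * bitmaps for the whole VM */
    }

    static MemoryListener my_listener = {
        .log_sync_global = my_log_sync_global,
        /* .log_sync deliberately left unset: the two hooks are
         * mutually exclusive */
    };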
2021-05-26  KVM: do not allow setting properties at runtime  (Paolo Bonzini)

Only allow accelerator properties to be set when the accelerator is being created.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  qtest: add a QOM object for qtest  (Paolo Bonzini)

Right now the qtest server can only be created using the -qtest and -qtest-log options. Allow an alternative way to create it using "-object qtest,chardev=...,log=...". This is part of the long-term plan to make more (or all) of QEMU configurable through QMP and preconfig mode.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
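An example invocation of the new object (the chardev settings, paths, and the use of the qtest accelerator are illustrative):

    $ qemu-system-x86_64 -machine none -accel qtest \
        -chardev socket,id=qt0,path=/tmp/qtest.sock,server=on,wait=off \
        -object qtest,chardev=qt0,log=/tmp/qtest.log

Functionally this mirrors what the existing -qtest and -qtest-log options do.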
2021-05-26  object: add more commands to preconfig mode  (Paolo Bonzini)

Creating and destroying QOM objects does not require a fully constructed machine. Allow running object-add and object-del before machine initialization has concluded.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
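For example, with QEMU started with -preconfig and a QMP monitor attached, something like the following is now accepted before machine initialization is completed (the object type and size are arbitrary):

    { "execute": "object-add",
      "arguments": { "qom-type": "memory-backend-ram",
                     "id": "mem0", "size": 1073741824 } }
    { "execute": "object-del", "arguments": { "id": "mem0" } }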
2021-05-26  i386/cpu: Expose AVX_VNNI instruction to guest  (Yang Zhong)

Expose the AVX (VEX-encoded) versions of the Vector Neural Network Instructions to the guest. The bit definition:

  CPUID.(EAX=7,ECX=1):EAX[bit 4] AVX_VNNI

The following instructions are available when this feature is present in the guest:

  1. VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
  2. VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
  3. VPDPWSSD: Multiply and Add Signed Word Integers
  4. VPDPWSSDS: Multiply and Add Signed Word Integers with Saturation

For the KVM-related code, see Linux commit 1085a6b585d7. The release document is available at:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210407015609.22936-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
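A self-contained check for the bit named above, runnable inside the guest with GCC or Clang:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* CPUID.(EAX=7,ECX=1):EAX[bit 4] is the AVX_VNNI feature bit. */
        if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx) && (eax & (1u << 4))) {
            puts("AVX_VNNI is available");
        } else {
            puts("AVX_VNNI is not available");
        }
        return 0;
    }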
2021-05-26  hw/mem/nvdimm: Use Kconfig 'imply' instead of 'depends on'  (Philippe Mathieu-Daudé)

Per the kconfig.rst:

  A device should be listed [...] ``imply`` if (depending on the QEMU
  command line) the board may or may not be started without it.

This is the case with the NVDIMM device, so use the 'imply' weak reverse dependency to select the symbol.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210511155354.3069141-2-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
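A sketch of the Kconfig pattern in question (the board symbol is chosen for illustration only):

    config SOME_BOARD
        bool
        imply NVDIMM

With 'imply', NVDIMM defaults to enabled for that board but can still be disabled or compiled out, matching the "may or may not be started without it" rule quoted above.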
2021-05-26  configure: simplify assignment to GIT_SUBMODULES  (Paolo Bonzini)

Do not guard each assignment with a check for --with-git-submodules=ignore. To avoid a confusing "GIT" line from the Makefile, guard the git-submodule-update recipe so that it is empty when --with-git-submodules=ignore.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
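Roughly, the configure side of the change (a simplified sketch, not the actual hunk; the submodule name is one of those mentioned nearby in this series):

    # Append unconditionally, with no per-assignment guard for
    # --with-git-submodules=ignore.
    GIT_SUBMODULES="$GIT_SUBMODULES ui/keycodemapdb"

On the Makefile side, the git-submodule-update recipe is wrapped so that it expands to nothing when the submodule action is "ignore", which is what silences the spurious "GIT" status line.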
2021-05-26  configure: check for submodules if --with-git-submodules=ignore  (Paolo Bonzini)

Right now --with-git-submodules=ignore has a subtle difference from just running without a .git directory, in that it does not check that submodule sources actually exist. Move the check for ui/keycodemapdb/README so that it happens even if the user specified --with-git-submodules=ignore, with a customized error message that is more suitable for this situation.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-26  configure: Only clone softfloat-3 repositories if TCG is enabled  (Philippe Mathieu-Daudé)

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210512045821.3257963-1-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-05-25  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210525' into staging  (Peter Maydell)

target-arm queue:
 * Implement SVE2 emulation
 * Implement integer matrix multiply accumulate
 * Implement FEAT_TLBIOS
 * Implement FEAT_TLBRANGE
 * disas/libvixl: Protect C system header for C++ compiler
 * Use correct SP in M-profile exception return
 * AN524, AN547: Correct modelling of internal SRAMs
 * hw/intc/arm_gicv3_cpuif: Fix EOIR write access check logic
 * hw/arm/smmuv3: Another range invalidation fix

# gpg: Signature made Tue 25 May 2021 16:02:25 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20210525: (114 commits)
  target/arm: Enable SVE2 and related extensions
  linux-user/aarch64: Enable hwcap bits for sve2 and related extensions
  target/arm: Implement integer matrix multiply accumulate
  target/arm: Implement aarch32 VSUDOT, VUSDOT
  target/arm: Split decode of VSDOT and VUDOT
  target/arm: Split out do_neon_ddda
  target/arm: Fix decode for VDOT (indexed)
  target/arm: Remove unused fpst from VDOT_scalar
  target/arm: Split out do_neon_ddda_fpst
  target/arm: Implement aarch64 SUDOT, USDOT
  target/arm: Implement SVE2 fp multiply-add long
  target/arm: Move endian adjustment macros to vec_internal.h
  target/arm: Implement SVE2 bitwise shift immediate
  target/arm: Implement 128-bit ZIP, UZP, TRN
  target/arm: Implement SVE2 LD1RO
  target/arm: Tidy do_ldrq
  target/arm: Share table of sve load functions
  target/arm: Implement SVE2 FLOGB
  target/arm: Implement SVE2 FCVTXNT, FCVTX
  target/arm: Implement SVE2 FCVTLT
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Enable SVE2 and related extensions  (Richard Henderson)

Disable I8MM again for !have_neon during realize.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-93-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  linux-user/aarch64: Enable hwcap bits for sve2 and related extensions  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-92-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement integer matrix multiply accumulate  (Richard Henderson)

This is {S,U,US}MMLA for both AArch64 AdvSIMD and SVE, and V{S,U,US}MMLA.S8 for AArch32 NEON.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-91-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement aarch32 VSUDOT, VUSDOT  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-90-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Split decode of VSDOT and VUDOT  (Richard Henderson)

Now that we have a common helper, sharing decode does not save much. Also, this will solve an upcoming naming problem.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-89-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Split out do_neon_ddda  (Richard Henderson)

Split out a helper that can handle the 4-register format for helpers shared with SVE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-88-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Fix decode for VDOT (indexed)  (Richard Henderson)

We were extracting the M register twice, once incorrectly as M:vm and once correctly as rm. Remove the incorrect name and remove the incorrect decode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-87-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Remove unused fpst from VDOT_scalar  (Richard Henderson)

Cut and paste error from another pattern.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-86-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Split out do_neon_ddda_fpst  (Richard Henderson)

Split out a helper that can handle the 4-register format for helpers shared with SVE.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-85-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement aarch64 SUDOT, USDOT  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-84-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 fp multiply-add long  (Stephen Long)

Implements both vectored and indexed FMLALB, FMLALT, FMLSLB, FMLSLT.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-83-richard.henderson@linaro.org
Message-Id: <20200504171240.11220-1-steplong@quicinc.com>
[rth: Rearrange to use float16_to_float32_by_bits.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Move endian adjustment macros to vec_internal.h  (Richard Henderson)

We have two copies of these, one set of which is not complete. Move them to a common header.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-82-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 bitwise shift immediate  (Stephen Long)

Implements SQSHL/UQSHL, SRSHR/URSHR, and SQSHLU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-81-richard.henderson@linaro.org
Message-Id: <20200430194159.24064-1-steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement 128-bit ZIP, UZP, TRN  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-80-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 LD1RO  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-79-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Tidy do_ldrq  (Richard Henderson)

Use tcg_constant_i32 for passing the simd descriptor, as this hashed value does not need to be freed. Rename dofs to doff to match poff.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-78-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Share table of sve load functions  (Richard Henderson)

The table used by do_ldrq is a subset of the table used by do_ld_zpa; we can share them by passing dtype instead of msz to do_ldrq. The lack of MTE handling in do_ldrq was a bug, fixed by this change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-77-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 FLOGB  (Stephen Long)

Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-76-richard.henderson@linaro.org
Message-Id: <20200430191405.21641-1-steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 FCVTXNT, FCVTX  (Stephen Long)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-75-richard.henderson@linaro.org
Message-Id: <20200428174332.17162-4-steplong@quicinc.com>
[rth: Use do_frint_mode, which avoids a specific runtime helper.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 FCVTLT  (Stephen Long)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-74-richard.henderson@linaro.org
Message-Id: <20200428174332.17162-3-steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 FCVTNT  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-73-richard.henderson@linaro.org
Message-Id: <20200428174332.17162-2-steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 TBL, TBX  (Stephen Long)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-72-richard.henderson@linaro.org
Message-Id: <20200428144352.9275-1-steplong@quicinc.com>
[rth: rearrange the macros a little and rebase]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 crypto constructive binary operations  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-71-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 crypto destructive binary operations  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-70-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-25  target/arm: Implement SVE2 crypto unary operations  (Richard Henderson)

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-69-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>