The iwmmxt_msadb helper and its corresponding gen function are unused;
delete them. (This function appears never to have been used, right back
to the initial implementation of iwMMXt; it is identical to iwmmxt_madduq,
and is presumably an accidental remnant from the initial development.)
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401822125-1822-1-git-send-email-peter.maydell@linaro.org
|
|
Bring the 32-bit CRC helper functions into line with the A64 ones,
by masking the high bytes of the value in the calling code rather
than the helper. This is more efficient since we can determine the
mask at translation time.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-7-git-send-email-peter.maydell@linaro.org
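As an illustration of the translate-time masking (a hypothetical sketch,
not the actual QEMU code): the operand size is fixed in the instruction
encoding, so the mask is a constant when the instruction is translated
and can be emitted as a single immediate AND.

  /* 'size' comes from the instruction encoding (0 = byte, 1 = halfword,
   * 2 = word) and is known at translate time, so the mask is an
   * immediate operand rather than being computed inside the helper. */
  static void gen_crc_operand_mask(TCGv_i32 val, int size)
  {
      uint32_t mask = (size == 0) ? 0xff : (size == 1) ? 0xffff : 0xffffffff;

      if (mask != 0xffffffff) {
          tcg_gen_andi_i32(val, val, mask);
      }
  }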
|
|
Add support for the VMULL.P64 polynomial 64x64 to 128 bit multiplication
instruction in the A32/T32 instruction sets; this is part of the v8
Crypto Extensions.
To do this we have to move the neon_pmull_64_{lo,hi} helpers from
helper-a64.c into neon_helper.c so they can be used by the AArch32
translator.
Inspired-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-4-git-send-email-peter.maydell@linaro.org
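For reference, a polynomial (carry-less) 64x64-to-128-bit multiply in
plain C -- an illustrative sketch of the operation, not QEMU's helper:

  #include <stdint.h>

  /* Carry-less multiply: partial products are XORed instead of added,
   * i.e. multiplication of polynomials over GF(2). */
  static void pmull64(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
  {
      uint64_t rl = 0, rh = 0;
      int i;

      for (i = 0; i < 64; i++) {
          if (b & (1ULL << i)) {
              rl ^= a << i;
              rh ^= i ? a >> (64 - i) : 0;  /* avoid undefined shift by 64 */
          }
      }
      *lo = rl;
      *hi = rh;
  }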
|
|
The current undefreq field in the neon_3reg_wide handling allows us
to encode "UNDEF if size != 0" and "UNDEF if size == 0". This is
no longer sufficient with the advent of 64-bit polynomial VMULL,
which means we want to UNDEF if size == 1. Change the undefreq
encoding to use separate bits for all of "UNDEF if size == 0",
"UNDEF if size == 1" and "UNDEF if size == 2".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-3-git-send-email-peter.maydell@linaro.org
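A sketch of what such an encoding can look like (flag names
hypothetical): one bit per size value, so any combination of sizes can
be required to UNDEF.

  #define UNDEF_SIZE_0  (1 << 0)   /* UNDEF if size == 0 */
  #define UNDEF_SIZE_1  (1 << 1)   /* UNDEF if size == 1 */
  #define UNDEF_SIZE_2  (1 << 2)   /* UNDEF if size == 2 */

  /* 64-bit polynomial VMULL can then request UNDEF_SIZE_1 alone. */
  static bool undef_for_size(int undefreq, int size)
  {
      return (undefreq & (1 << size)) != 0;
  }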
|
|
This adds support for the SHA1 and SHA256 instructions that are available
on some v8 implementations of AArch32.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-2-git-send-email-peter.maydell@linaro.org
[PMM:
* rebase
* fix bad indent
* add a missing UNDEF check for Q!=1 in the 3-reg SHA1/SHA256 case
* use g_assert_not_reached()
* don't re-extract bit 6 for the 2-reg-misc encodings
* set the ELF HWCAP2 bits for the new features
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
These will soon require cpu_ldst.h, so move them out of cpu.h.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
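A self-contained illustration of the wrapper-header pattern (all names
hypothetical): the helper list is expanded several times, but only the
small wrappers know how the macros are set up, so the list itself never
needs a rebuild-triggering change.

  /* The shared list, written once (stands in for helper.h): */
  #define HELPER_LIST \
      DEF_HELPER(add) \
      DEF_HELPER(sub)

  /* "helper-proto.h" wrapper: expand the list into prototypes. */
  #define DEF_HELPER(name) void helper_##name(void);
  HELPER_LIST
  #undef DEF_HELPER

  /* "helper-gen.h" wrapper: expand the same list into gen_* stubs;
   * files that emit TCG calls include only this wrapper. */
  #define DEF_HELPER(name) static inline void gen_helper_##name(void) { }
  HELPER_LIST
  #undef DEF_HELPER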
|
|
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-12-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Avoid using IS_USER directly as the MMU-idx to simplify future
changes to the MMU layout.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-5-git-send-email-edgar.iglesias@gmail.com
Message-id: 1400805738-11889-6-git-send-email-edgar.iglesias@gmail.com
[PMM: parts relating to LDRT/STRT moved into earlier patches]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The SRS instruction was using a hardcoded 0 for the memory
accesses. This happens to be OK since the SRS instruction is
UNPREDICTABLE in User and System modes, but is awkward if we
want to rearrange the MMU index uses. Switch to using
get_mem_index() like all the other accesses.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-4-git-send-email-edgar.iglesias@gmail.com
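A minimal sketch of what get_mem_index() centralises (the mapping shown
is hypothetical): translation state maps to an MMU index in exactly one
place, so the index layout can later change without touching every
access site.

  static inline int get_mem_index(DisasContext *s)
  {
      /* hypothetical mapping; the point is that no caller hardcodes 0 */
      return s->user ? MMU_USER_IDX : MMU_KERNEL_IDX;
  }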
|
|
Clean up the mmu index handling for ldrt/strt insns: instead
of a flag 'user' indicating whether to treat the access as user
mode or not, use 'memidx' to indicate the correct memory index to use.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-3-git-send-email-edgar.iglesias@gmail.com
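A sketch of the emitter shape after the change (simplified): the MMU
index is a plain parameter, so LDRT/STRT pass the user-mode index while
everything else passes get_mem_index(s).

  static void gen_aa32_st32(TCGv_i32 val, TCGv_i32 addr, int memidx)
  {
      tcg_gen_qemu_st_i32(val, addr, memidx, MO_TEUL);
  }

  /* ordinary store:        gen_aa32_st32(tmp, addr, get_mem_index(s)); */
  /* LDRT/STRT-style store: gen_aa32_st32(tmp, addr, MMU_USER_IDX);     */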
|
|
The smlald (and probably smlsld) instruction was doing incorrect sign
extension of the operands during the 64-bit result calculation. The
instruction pseudo-code is:
operand2 = if m_swap then ROR(R[m],16) else R[m];
product1 = SInt(R[n]<15:0>) * SInt(operand2<15:0>);
product2 = SInt(R[n]<31:16>) * SInt(operand2<31:16>);
result = product1 + product2 + SInt(R[dHi]:R[dLo]);
R[dHi] = result<63:32>;
R[dLo] = result<31:0>;
The result calculation should be done in 64-bit arithmetic, and hence
product1 and product2 should be sign-extended to 64 bits before the
calculation. The current implementation was adding product1 and product2
together and then sign-extending the intermediate result, leading to
false negatives. E.g. if product1 = product2 = 0x40000000, their sum is
0x80000000, which will be incorrectly interpreted as negative on sign
extension. We fix this by doing the 64-bit extensions on both product1
and product2 before any addition/subtraction happens.
We also fix where we were possibly incorrectly setting the Q saturation
flag for SMLSLD, which the ARM ARM specifically says is not set.
Reported-by: Christina Smith <christina.smith@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 2cddb6f5a15be4ab8d2160f3499d128ae93d304d.1397704570.git.peter.crosthwaite@xilinx.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
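The corrected semantics in plain C (an illustrative sketch, not the TCG
implementation): each product is widened to 64 bits before the
accumulation, so 0x40000000 + 0x40000000 stays positive.

  #include <stdint.h>

  static uint64_t smlald(uint32_t rn, uint32_t operand2, uint64_t acc)
  {
      /* sign-extend each 16x16 product to 64 bits *before* adding */
      int64_t product1 = (int64_t)(int16_t)rn * (int16_t)operand2;
      int64_t product2 = (int64_t)(int16_t)(rn >> 16)
                       * (int16_t)(operand2 >> 16);

      return acc + product1 + product2;  /* SMLALD/SMLSLD never set Q */
  }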
|
|
For system mode, we may have a 64 bit CPU which is currently executing
in AArch32 state; if we're dumping CPU state to the logs we should
therefore show the correct state for the current execution state,
rather than hardwiring it based on the type of the CPU. For consistency
with how we handle translation, we leave the 32 bit dump function
as the default, and have it hand off control to the 64 bit dump code
if we're in AArch64 mode.
Reported-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
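The shape of the handoff, roughly (simplified sketch; the real dump
function also takes an fprintf callback):

  void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
  {
      ARMCPU *cpu = ARM_CPU(cs);

      if (is_a64(&cpu->env)) {
          aarch64_cpu_dump_state(cs, f, flags);  /* 64-bit dump code */
          return;
      }
      /* ... existing 32-bit register dump ... */
  }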
|
|
For ARMv8 there are two changes to the MVFR media feature registers:
* there is a new MVFR2 which is accessible from 32 bit code
* 64 bit code accesses these via the usual sysreg instructions
rather than with a floating-point specific instruction
Implement this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
|
|
The current A32/T32 decoder bases its "is VFP/Neon enabled?" check
on the FPSCR.EN bit. This is correct if EL1 is AArch32, but for
an AArch64 EL1 the logic is different: it must act as if FPSCR.EN
is always set. Instead, trapping must happen according to CPACR
bits for cp10/cp11; these cover all of FP/Neon, including the
FPSCR/FPSID/MVFR register accesses which FPSCR.EN does not affect.
Add support for CPACR checks (which are also required for ARMv7,
but were unimplemented because Linux happens not to use them)
and make sure they generate exceptions with the correct syndrome.
We actually return incorrect syndrome information for cases
where FP is disabled but the specific instruction bit pattern
is unallocated: strictly these should be the Uncategorized
exception, not a "SIMD disabled" exception. This should be
mostly harmless, and the structure of the A32/T32 VFP/Neon
decoder makes it painful to put the 'FP disabled?' checks in
the right places.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
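A simplified sketch of the CPACR test (field positions per the ARM ARM:
cp10 occupies CPACR bits [21:20] and cp11 bits [23:22]):

  static bool cpacr_fp_enabled(uint32_t cpacr, bool is_user)
  {
      int cp10 = (cpacr >> 20) & 3;

      /* FP/Neon needs both fields; this sketch treats mismatched
       * cp10/cp11 settings as disabled for simplicity. */
      if (((cpacr >> 22) & 3) != cp10) {
          return false;
      }
      switch (cp10) {
      case 3:  return true;        /* full access */
      case 1:  return !is_user;    /* privileged access only */
      default: return false;       /* disabled (0) or reserved (2) */
      }
  }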
|
|
Add new helpers exception_with_syndrome (for generating an exception
with syndrome information) and exception_uncategorized (for generating
an exception with "Unknown or Uncategorized Reason", which has a syndrome
register value of zero), and use them to generate the correct syndrome
information for exceptions which are raised directly from generated code.
This patch includes moving the A32/T32 gen_exception_insn functions
further up in the source file; they will be needed for "VFP/Neon disabled"
exception generation later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
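A sketch of the helper pair (names from the description above; bodies
simplified and the env field layout hypothetical):

  void HELPER(exception_with_syndrome)(CPUARMState *env, uint32_t excp,
                                       uint32_t syndrome)
  {
      env->exception.syndrome = syndrome;  /* hypothetical field */
      raise_exception(env, excp);
  }

  void HELPER(exception_uncategorized)(CPUARMState *env)
  {
      /* "Unknown or Uncategorized Reason" has an all-zero syndrome */
      helper_exception_with_syndrome(env, EXCP_UDEF, 0);
  }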
|
|
For exceptions taken to AArch64, if a coprocessor/system register
access fails due to a trap or enable bit then the syndrome information
must include details of the failing instruction (crn/crm/opc1/opc2
fields, etc). Make the decoder construct the syndrome information
at translate time so it can be passed at runtime to the access-check
helper function and used as required.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
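A sketch of the translate-time construction (function name illustrative),
following the ISS layout the ARM ARM gives for MCR/MRC traps: CV[24],
COND[23:20], Opc2[19:17], Opc1[16:14], CRn[13:10], Rt[9:5], CRm[4:1],
Direction[0].

  static uint32_t syn_cp_trap(int cv, int cond, int opc1, int opc2,
                              int crn, int crm, int rt, bool isread)
  {
      /* every field is known while decoding, so the translator can pass
       * the finished word to the runtime access-check helper */
      return (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
           | (crn << 10) | (rt << 5) | (crm << 1) | (isread ? 1 : 0);
  }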
|
|
Currently cpu.h defines a mixture of functions and types needed by
the rest of QEMU and those needed only by files within target-arm/.
Split the latter out into a new header so they aren't needlessly
exposed further than required.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
|
|
This adds support for the [UF]RSQRTE instructions. It utilises the
existing NEON helpers with some changes: fpstatus is now passed
explicitly (so the correct one is used for AArch32 versus AArch64),
denormals are handled, error handling is more correct, and the fraction
going into the estimate is properly scaled.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1394822294-14837-25-git-send-email-peter.maydell@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Implement URECPE and FRECPE instructions in both scalar and vector forms.
The actual reciprocal estimate function is shared with the A32/T32 Neon
code. However in A64 we aren't using the Neon "standard FPSCR value"
so extra checks are necessary to handle non-squashed denormal inputs
which can never happen for A32/T32. Calling conventions for the helpers
are thus modified to pass the fpst directly; we mark the helpers as
TCG_CALL_NO_RWG since we're changing the declarations anyway.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1394822294-14837-21-git-send-email-peter.maydell@linaro.org
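Declaration-level sketch of the new convention (DEF_HELPER_FLAGS_* and
TCG_CALL_NO_RWG are QEMU's helper machinery; the exact lines may differ):

  /* helper.h: fpst is now an explicit 'ptr' argument, and
   * TCG_CALL_NO_RWG notes that the call neither reads nor writes
   * TCG globals, which helps the optimizer. */
  DEF_HELPER_FLAGS_2(recpe_f32, TCG_CALL_NO_RWG, f32, f32, ptr)

  /* call site: the translator picks which float_status to pass */
  /* gen_helper_recpe_f32(dest, src, fpst); */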
|
|
Implement the PMULL instruction; this is the last unimplemented insn
in the three-reg-diff group.
Note that PMULL with size 3 is considered part of the AES part
of the crypto extensions (see the ID_AA64ISAR0_EL1 register definition
in the v8 ARM ARM), so it isn't necessary to burn an extra feature
bit on it, even though we're using more feature bits than a single
"crypto extension present/not present" toggle.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1394822294-14837-2-git-send-email-peter.maydell@linaro.org
|
|
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>
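Illustrative one-liner of the pattern (field name hypothetical):

  /* clear everything up to the first field that must survive reset,
   * instead of anchoring the offset on 'breakpoints' */
  memset(env, 0, offsetof(CPUFooState, field_after_cpu_common));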
|
|
Implement WFE to yield our timeslice to the next CPU.
This avoids slowdowns in multicore configurations caused
by one core busy-waiting on a spinlock which can't possibly
be unlocked until the other core has an opportunity to run.
This speeds up my test case A15 dual-core boot by a factor
of three (though it is still four or five times slower than
a single-core boot).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1393339545-22111-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Rob Herring <rob.herring@linaro.org>
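A minimal sketch of the yield (close to the idea described, though the
exact implementation may differ): the WFE helper just drops out of the
current vCPU's execution loop so the round-robin TCG scheduler can pick
the next one.

  void HELPER(wfe)(CPUARMState *env)
  {
      CPUState *cs = CPU(arm_env_get_cpu(env));

      /* nothing to deliver; exiting the loop is enough to yield */
      cs->exception_index = EXCP_INTERRUPT;
      cpu_loop_exit(cs);
  }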
|
|
Add support for AArch32 CRC32 and CRC32C instructions added in ARMv8
and add a CPU feature flag to enable these instructions.
The CRC32C implementation used is QEMU's built-in one, and the CRC32
implementation is from zlib. This requires adding zlib to LIBS to
ensure it is linked for the linux-user binary.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1393411566-24104-3-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
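A sketch of the CRC32 helper built on zlib (close to what such a helper
looks like; stl_le_p is QEMU's little-endian store): 'bytes' is the
operand width implied by the instruction (1, 2 or 4), and zlib works on
one's-complemented accumulators, hence the XORs.

  #include <zlib.h>   /* crc32() -- the reason zlib lands in LIBS */

  uint32_t helper_crc32(uint32_t acc, uint32_t val, uint32_t bytes)
  {
      uint8_t buf[4];

      stl_le_p(buf, val);   /* operand bytes in little-endian order */
      return crc32(acc ^ 0xffffffff, buf, bytes) ^ 0xffffffff;
  }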
|
|
Now that cpreg read and write functions can't fail and throw an
exception, we can remove the code from the translator that synchronises
the guest PC in case an exception is thrown.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Several of the system registers handled via the ARMCPRegInfo
mechanism have access trap control bits controlling whether the
registers are accessible to lower privilege levels. Replace
the existing mechanism (allowing the read and write functions
to return EXCP_UDEF if access is denied) with a dedicated
"check access rights" function pointer in the ARMCPRegInfo.
This will allow us to simplify some of the register definitions,
which no longer need read/write functions purely to handle
the access checks.
We take the opportunity to define the return value from the
access checking function in a way that allows us to set the
correct exception syndrome information for exceptions taken
to AArch64 (which may need to distinguish access failures due
to a configurable trap or enable from other kinds of access
failure).
This commit defines the new mechanism but does not move any
of the registers across to use it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
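The shape of the new mechanism, roughly (simplified; the enum values
shown illustrate the "distinguish failure kinds" point):

  typedef enum CPAccessResult {
      CP_ACCESS_OK = 0,                  /* access permitted */
      CP_ACCESS_TRAP = 1,                /* configurable trap or enable:
                                          * full syndrome detail needed */
      CP_ACCESS_TRAP_UNCATEGORIZED = 2,  /* report as Uncategorized */
  } CPAccessResult;

  struct ARMCPRegInfo {
      /* ... existing fields ... */
      CPAccessResult (*accessfn)(CPUARMState *env,
                                 const struct ARMCPRegInfo *ri);
  };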
|
|
Log guest attempts to access unimplemented system registers via
the LOG_UNIMP reporting mechanism (for both the 32 bit and 64 bit
instruction sets). This is particularly useful for debugging
problems where the guest is trying to use a system register that
QEMU doesn't implement.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
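The reporting itself is a qemu_log_mask(LOG_UNIMP, ...) call, visible
when QEMU runs with "-d unimp" (message text illustrative; the cp/crn/
crm/opc values come from the decoded instruction):

  qemu_log_mask(LOG_UNIMP,
                "%s access to unsupported system register "
                "cp:%d opc1:%d crn:%d crm:%d opc2:%d\n",
                isread ? "read" : "write", cpnum, opc1, crn, crm, opc2);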
|
|
Add support for the AArch32 floating-point half-precision to double-
precision conversion VCVTB and VCVTT instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: fixed a minor missing-braces style issue]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 Advanced SIMD VCVTA, VCVTN, VCVTP
and VCVTM instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 floating-point VCVTA, VCVTN, VCVTP
and VCVTM instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 Advanced SIMD VRINTA, VRINTN, VRINTP,
VRINTM and VRINTZ instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 Advanced SIMD VRINTX instruction.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 floating-point VRINTX instruction.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 floating-point VRINTZ instruction.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the AArch32 floating-point VRINTR instruction.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for AArch32 ARMv8 FP VRINTA, VRINTN, VRINTP and VRINTM
instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The VFP conversion helpers for A32 round to zero as this is the only
rounding mode supported. Rename these helpers to make it clear that
they round to zero and are not suitable for use in the AArch64 code.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Use the VFP_BINOP macro to provide helpers for min, max, minnum
and maxnum, rather than hand-rolling them. (The float64 max
version is not used by A32 but will be needed for A64.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
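A simplified sketch of the macro pattern (softfloat provides
float32_minnum and friends, so each expansion is a thin wrapper):

  #define VFP_BINOP(name)                                               \
      float32 helper_vfp_##name##s(float32 a, float32 b, void *fpstp)   \
      {                                                                 \
          float_status *fpst = fpstp;                                   \
          return float32_##name(a, b, fpst);                            \
      }                                                                 \
      float64 helper_vfp_##name##d(float64 a, float64 b, void *fpstp)   \
      {                                                                 \
          float_status *fpst = fpstp;                                   \
          return float64_##name(a, b, fpst);                            \
      }

  VFP_BINOP(minnum)   /* expands to wrappers for float32_minnum etc. */
  VFP_BINOP(maxnum)
  #undef VFP_BINOP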
|
|
In preparation for adding support for A64 load/store exclusive instructions,
widen the fields in the CPU state struct that deal with address and data values
for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32
exclusive accesses will be generally separate there are some odd theoretical
corner cases (eg you should be able to do the exclusive load in AArch32, take
an exception to AArch64 and successfully do the store exclusive there), and it's
also easier to reason about.
The changes in semantics for the variables are:
exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
otherwise always < 2^32 for AArch32
exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
use the high half of exclusive_val instead of a separate exclusive_high
exclusive_high -> is no longer used in AArch32; extended to 64 bits as
it will be needed for AArch64's pair-of-64-bit-values exclusives.
exclusive_test -> extended to 64 bits, as it is an address. Since this is
a linux-user-only field, in arm-linux-user it will always have the top
32 bits zero.
exclusive_info -> stays 32 bits, as it is neither data nor address, but
simply holds register indexes etc. AArch64 will be able to fit all its
information into 32 bits as well.
Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
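Sketch of the AArch32 LDREXD handling after the widening (fragment;
temp names hypothetical, tcg_gen_concat_i32_i64 is the real TCG op):
both halves go into the one 64-bit exclusive_val, and the guest
registers are written only once both loads have completed, which is the
bug fix mentioned above.

  TCGv_i64 val64 = tcg_temp_new_i64();

  /* exclusive_val = hi:lo -- the former exclusive_high is unused */
  tcg_gen_concat_i32_i64(val64, lo, hi);
  tcg_gen_mov_i64(cpu_exclusive_val, val64);

  /* commit to Rt/Rt2 only now, so a fault on the second load cannot
   * leave the first destination register already clobbered */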
|
|
The cpregs APIs used by the decoder (get_arm_cp_reginfo() and
cp_access_ok()) currently take either a CPUARMState* or an ARMCPU*.
This is problematic for the A64 decoder, which doesn't pass the
environment pointer around everywhere the way the 32 bit decoder
does. Adjust the parameters these functions take so that we can
copy only the relevant info from the CPUARMState into the
DisasContext and then use that.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
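The adjusted lookup signatures, roughly (simplified): both take plain
data the translator can stash in its DisasContext up front, rather than
a CPUARMState or ARMCPU pointer.

  const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t key);
  bool cp_access_ok(int current_pl, const ARMCPRegInfo *ri, int isread);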
|
|
This patch adds emulation for the conditional branch (b.cond) instruction.
Signed-off-by: Alexander Graf <agraf@suse.de>
[claudio: adapted to new decoder structure,
reused arm infrastructure for checking the flags]
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
The A32/T32 gen_intermediate_code_internal() is complicated because it
has to deal with:
* conditionally executed instructions
* Thumb IT blocks
* kernel helper page
* M profile exception-exit special casing
None of these apply to A64, so putting the "this is A64 so
call the A64 decoder" check in the middle of the A32/T32
loop is confusing and means the A64 decoder's handling of
things like conditional jump and singlestepping has to take
account of the conditional-execution jumps the main loop
might emit.
Refactor the code to give A64 its own gen_intermediate_code_internal
function instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
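The resulting dispatch, roughly (simplified sketch; flag-macro name as
used in that era's translate.c):

  static inline void
  gen_intermediate_code_internal(ARMCPU *cpu, TranslationBlock *tb,
                                 bool search_pc)
  {
      if (ARM_TBFLAG_AARCH64_STATE(tb->flags)) {
          gen_intermediate_code_internal_a64(cpu, tb, search_pc);
          return;
      }
      /* ... A32/T32 loop, now free of is-this-A64 checks ... */
  }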
|
|
This adds support for the AESE/AESD/AESMC/AESIMC instructions that
are available on some v8 implementations of AArch32.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 1386266078-6976-1-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Retain the existing gen_aa32_* inlines, to aid compilation for A64.
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1386628626-21627-1-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This adds support for the ARMv8 Advanced SIMD VMAXNM and VMINNM
instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Message-id: 1386158099-9239-7-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This adds support for the ARMv8 floating point VMAXNM and VMINNM
instructions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1386158099-9239-6-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This adds support for the VSEL floating point selection instruction
which was added in ARMv8.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1386158099-9239-3-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Floating point is an extension to the instruction set rather than
a coprocessor, so call it directly from the ARM and Thumb decode
functions.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1386158099-9239-2-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
No longer needs to be done on a per-target basis.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|