Age | Commit message | Author |
|
All of the core-code usages of this API have the cpu pointer handy, so
pass it in. There are only three architecture-specific usages (two of which
are commented out), which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core-code usage of the CPU env, which brings
us closer to common-obj'ing these core files.
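A minimal sketch of the arch-side pattern (the surrounding function and the
core API call are hypothetical; ENV_GET_CPU() is the per-target macro of
this era):

    /* Hypothetical architecture-specific caller: recover the CPUState
     * locally instead of making the core API take an env pointer. */
    static void example_arch_hook(CPUArchState *env)
    {
        CPUState *cpu = ENV_GET_CPU(env);  /* target-local env -> CPUState */
        some_core_api(cpu);                /* hypothetical core-code API */
    }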
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
disas does not need to access the CPU env for any reason. Change the
APIs to accept CPU pointers instead. The same small change pattern needs
to be applied to each target's translate.c. This brings us closer to making
disas.o a common-obj and less architecture-specific in general.
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Acked-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
In order to do this, stop using the cpu_in*/out* helpers, and instead
access address_space_io directly.
cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and
in Xen.
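For illustration, a hedged sketch of a direct access (port and data value
are made up; the address_space_rw() signature shown is the pre-MemTxAttrs
one of this era and may differ in other versions):

    uint8_t val = 0x42;                 /* made-up data byte */
    /* Write one byte to I/O port 0x80 through the global I/O address
     * space instead of going through the cpu_outb() helper. */
    address_space_rw(&address_space_io, 0x80, &val, 1, true);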
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
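A hedged before/after sketch of the translator-side change (the condition
and operands are made up; the TCG calls are real but their signatures are
quoted from memory):

    /* Before: labels were plain ints, so a label could be silently
     * swapped with an integer operand. */
    int l1 = gen_new_label();
    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_T[0], 0, l1);
    gen_set_label(l1);

    /* After: labels are TCGLabel *, so mixing up a label and an
     * integer argument is a compile-time type error. */
    TCGLabel *l2 = gen_new_label();
    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_T[0], 0, l2);
    gen_set_label(l2);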
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The method by which we count the number of ops emitted
is going to change. Abstract that away into some inlines.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
After the next patch, we will move the high parts of the AVX and AVX512 registers
into the same array as the SSE registers. This will make it impossible to
memcpy an array of 128-bit values in and out of xmm_regs in one swoop.
Use a for loop instead.
Similarly, always use XMM_Q in translate.c. This avoids introducing bugs
such as the one fixed in the previous patch.
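A hedged sketch of the memcpy-to-loop conversion (the destination buffer is
illustrative; XMM_Q is the existing 64-bit accessor):

    /* Before, only valid while xmm_regs was a contiguous array of
     * 128-bit values:
     *     memcpy(buf, env->xmm_regs, sizeof(env->xmm_regs));
     * After: copy register by register through the 64-bit halves, which
     * keeps working once the AVX/AVX512 high parts share the array. */
    uint64_t buf[CPU_NB_REGS * 2];      /* illustrative destination */
    int i;
    for (i = 0; i < CPU_NB_REGS; i++) {
        buf[2 * i]     = env->xmm_regs[i].XMM_Q(0);
        buf[2 * i + 1] = env->xmm_regs[i].XMM_Q(1);
    }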
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This was accessing an XMM register's low half without going through XMM_Q.
Cc: qemu-stable@nongnu.org
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
- Migration and linuxboot fixes for 2.2 regressions
- valgrind/KVM support
- small i386 patches
- PCI SD host controller support
- malloc/free cleanups from Markus (x86/scsi)
- IvyBridge model
- XSAVES support for KVM
- initial patches from record/replay
# gpg: Signature made Mon 15 Dec 2014 16:35:08 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (47 commits)
sdhci: Support SDHCI devices on PCI
sdhci: Define SDHCI PCI ids
sdhci: Add "sysbus" to sdhci QOM types and methods
sdhci: Remove class "virtual" methods
sdhci: Set a default frequency clock
serial: only resample THR interrupt on rising edge of IER.THRI
serial: update LSR on enabling/disabling FIFOs
serial: clean up THRE/TEMT handling
serial: reset thri_pending on IER writes with THRI=0
linuxboot: fix loading old kernels
kvm/apic: fix 2.2->2.1 migration
target-i386: add Ivy Bridge CPU model
target-i386: add f16c and rdrand to Haswell and Broadwell
target-i386: add VME to all CPUs
pc: add 2.3 machine types
i386: do not cross the pages boundaries in replay mode
cpus: make icount warp behave well with respect to stop/cont
timer: introduce new QEMU_CLOCK_VIRTUAL_RT clock
cpu-exec: invalidate nocache translation if they are interrupted
icount: introduce cpu_get_icount_raw
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This patch prevents crossing a page boundary in replay mode,
because it can cause an exception. Do this only when the boundary is
crossed by the first instruction in the block.
If the current instruction has already crossed the boundary, that is fine,
because an exception has not stopped this code.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
TCG generates optimized code for i386 repz instructions in single-step mode.
This means that when ecx becomes 0, execution of the string instruction stops
immediately, without the additional iteration for ecx == 0 (which would only
check ecx and set the flags). Omitting this iteration leads to different
instruction counts in single-step mode and in normal execution.
This patch disables this last-iteration optimization in icount mode,
which should be deterministic.
v2: inverted the condition and reformatted the comment
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This patch fixes instruction counting when execution is stopped at a
breakpoint (e.g. one set from gdb). Without the patch, an extra instruction is
translated and icount is incremented by an invalid value (equal to the
number of executed instructions + 1).
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
|
|
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The function tcg_gen_lshift() is unused; remove it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly; we also include the new header where it will be
needed, in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
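A generic sketch of the technique with hypothetical file and macro names
(QEMU's real wrappers are separate headers; this just shows the pattern):

    /* helpers.h: the single list of helpers, one DEF_HELPER per entry. */
    DEF_HELPER(add)
    DEF_HELPER(sub)

    /* First interpretation of the list: prototypes. */
    #define DEF_HELPER(name) void helper_##name(void);
    #include "helpers.h"
    #undef DEF_HELPER

    /* Second interpretation: a registration table.  Changing how one
     * interpretation expands only rebuilds the files that include that
     * wrapper, not every user of helpers.h. */
    #define DEF_HELPER(name) { #name, helper_##name },
    static const struct { const char *name; void (*fn)(void); } helpers[] = {
    #include "helpers.h"
    };
    #undef DEF_HELPER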
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Older Intel manuals (pre-2010) and current AMD manuals describe Z as
undefined, but newer Intel manuals describe Z as unchanged.
Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
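A standalone sketch of the idiom with a made-up state structure: clear
everything up to the first field that must survive reset, and nothing more.

    #include <stddef.h>
    #include <string.h>

    typedef struct CPUFooState {
        unsigned regs[16];              /* cleared on reset */
        unsigned pc;                    /* cleared on reset */
        int kept_on_reset;              /* first field that must survive */
        void *host_private;             /* survives reset */
    } CPUFooState;

    static void cpu_foo_reset(CPUFooState *env)
    {
        memset(env, 0, offsetof(CPUFooState, kept_on_reset));
    }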
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
We were loading 16 bytes for both single and double-precision
scalar comparisons.
Reported-by: Alexander Bluhm <bluhm@openbsd.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Parity should be set for a zero result.
Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Commit 1b90d56e changed the implementation of in/out imm to not assign
the accessed port number to cpu_T[0] as it appeared unnecessary.
However, currently gen_check_io() makes use of cpu_T[0] to implement the
I/O bitmap checks, so it's in fact still used and the change broke the
check, leading to #GP in legitimate cases (and probably also allowing
access to ports that shouldn't be allowed).
This patch reintroduces the missing assignment for these cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Remove an unnecessary move opcode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
And make the destination argument explicit.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Clean up relics of multiple size domains: since MO_16 == 1, "- MO_16 + 1" => "- 1 + 1" => 0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated. Avoid re-computing the size of the operation across
gen_pop_T0 and gen_pop_update.
Add forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated.
Add forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Unlike the addr32 case, there was no bug, but we can use the same
technique to reduce the number of TCG ops.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
We now only have one domain for size operands inside the translator,
which makes things less confusing all the way around. There are
still a number of helpers that continue to use the log2-1 domain.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Change the domain of the parameter and update all callers.
Which lets us defer completely to gen_op_mov_reg_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Change the domain of the parameter and update all callers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
These functions used the aflags/dflags domain, which is log2-1
of the byte size. Confusingly, they used enumeration values
from the log2 domain.
Change the domain of the parameter and update all callers.
Since we're now in a common domain, defer the deposit/extend/mov
decision to gen_op_mov_reg_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The 'ot' variables (operand type?) hold the log2(byte size) of
the operand being manipulated. This is the same as the MO_SIZE
subset of the TCGMemOp. Indeed, we often pass 'ot' to the
tcg_gen_qemu_ld/st functions.
Changing the type from 'int' makes it easier to see what domain
the variable should be in.
This does require adding some default cases to some switch statements,
to avoid the 'unhandled enumeration value' warning that would result
from the change of type.
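A standalone illustration of the two size domains (the MO_8..MO_64 values
match the TCGMemOp encoding of this era; the rest is illustrative):

    #include <stdio.h>

    /* log2(byte size) domain -- the MO_SIZE subset of TCGMemOp. */
    typedef enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3 } MemOpSize;

    int main(void)
    {
        /* The old aflags/dflags domain is log2(bytes) - 1, so converting
         * it to the TCGMemOp domain is just "+ 1". */
        for (int dflag = 0; dflag <= 2; dflag++) {
            MemOpSize ot = (MemOpSize)(dflag + 1);
            printf("dflag %d -> %d-byte operand -> TCGMemOp %d (MO_%d)\n",
                   dflag, 1 << ot, (int)ot, 8 << ot);
        }
        return 0;
    }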
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with tcg_gen_ext16u_tl, and in two cases merge with a
previous move from cpu_regs.
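An illustrative one-liner for the merged form (the source register is made
up; cpu_T[0] and cpu_regs are the translator's existing temporaries and
register file):

    /* One op instead of a move from the register file followed by the
     * removed 16-bit zero-extension helper. */
    tcg_gen_ext16u_tl(cpu_T[0], cpu_regs[R_EAX]);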
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with tcg_gen_ext16u_tl. In four places we can combine that
with a previous move into cpu_T[0], and in one place we can infer that
the zero-extension has already happened via the previous load.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users. In two cases, this allows
us to share code between the 32-bit and 64-bit immediate moves.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users. The only time that
gen_op_movl_T1_imu was used, the input was type 'unsigned',
so the replacement works identically.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definition of gen_op_movl_T0_im to all users.
The function gen_op_movl_T0_imu was unused.
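For context, a hedged sketch of what propagation looks like at a call site,
assuming the removed wrapper was a thin alias for an immediate move into
cpu_T[0] (the constant is made up):

    /* Before:  gen_op_movl_T0_im(0x1234);   -- wrapper, now removed */
    /* After, assuming the wrapper simply expanded to tcg_gen_movi_tl: */
    tcg_gen_movi_tl(cpu_T[0], 0x1234);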
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|