Age | Commit message | Author |
|
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated. Avoid re-computing the size of the operation across
gen_pop_T0 and gen_pop_update.
Add a forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
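A minimal usage sketch of the resulting pattern, assuming gen_pop_T0 now returns the TCGMemOp it computed so that gen_pop_update can reuse it instead of re-deriving it:

    /* Sketch only: the return-value scheme is an assumption based on the
     * description above. */
    TCGMemOp ot = gen_pop_T0(s);          /* stack top now in cpu_T[0] */
    gen_op_mov_reg_v(ot, reg, cpu_T[0]);  /* consume the popped value */
    gen_pop_update(s, ot);                /* adjust the stack pointer, same size */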
|
|
Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated.
Add a forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Unlike the addr32 case, there was no bug, but we can use the same
technique to reduce the number of TCG ops.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
We now only have one domain for size operands inside the translator,
which makes things less confusing all the way around. There are
still a number of helpers that continue to use the log2-1 domain.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
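For reference, a sketch of how the two size domains relate; the conversion shown is illustrative arithmetic, not a function from the patch:

    /* aflag/dflag value:  0 -> 16-bit, 1 -> 32-bit, 2 -> 64-bit  (log2(bytes) - 1)
     * TCGMemOp constant:  MO_16 = 1,   MO_32 = 2,   MO_64 = 3    (log2(bytes))
     * so moving from the old domain to the new one is a simple offset: */
    TCGMemOp ot = (TCGMemOp)(dflag + 1);  /* equivalently: dflag + MO_16 */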
|
|
Change the domain of the parameter and update all callers,
which lets us defer completely to gen_op_mov_reg_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Change the domain of the parameter and update all callers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
These functions used the aflags/dflags domain, which is log2-1
of the byte size. Confusingly, they used enumeration values
from the log2 domain.
Change the domain of the parameter and update all callers.
Since we're now in a common domain, defer the deposit/extend/mov
decision to gen_op_mov_reg_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
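A condensed sketch of what the deposit/extend/mov dispatch inside gen_op_mov_reg_v looks like; the AH/BH/CH/DH (high-byte) special case is omitted, so treat the body as illustrative rather than the exact code:

    static void gen_op_mov_reg_v(TCGMemOp ot, int reg, TCGv t0)
    {
        switch (ot) {
        case MO_8:
            tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 8);
            break;
        case MO_16:
            tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
            break;
        case MO_32:
            /* a 32-bit write zero-extends into the 64-bit register */
            tcg_gen_ext32u_tl(cpu_regs[reg], t0);
            break;
    #ifdef TARGET_X86_64
        case MO_64:
            tcg_gen_mov_tl(cpu_regs[reg], t0);
            break;
    #endif
        default:
            tcg_abort();
        }
    }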
|
|
The 'ot' variables (operand type?) hold the log2(byte size) of
the operand being manipulated. This is the same as the MO_SIZE
subset of the TCGMemOp. Indeed, we often pass 'ot' to the
tcg_gen_qemu_ld/st functions.
Changing the type from 'int' makes it easier to see what domain
the variable belongs to.
This does require adding some default cases to some switch statements,
to avoid the 'unhandled enumeration value' warning that would result
from the change of type.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
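A trivial illustration of the kind of default case this refers to; the helper name here is hypothetical, the point is only that a TCGMemOp-typed switch must now cover (or default) enumerators the code never expects to see:

    /* Hypothetical helper, for illustration only. */
    static int bytes_for_ot(TCGMemOp ot)
    {
        switch (ot) {
        case MO_8:
            return 1;
        case MO_16:
            return 2;
        case MO_32:
            return 4;
        case MO_64:
            return 8;
        default:
            /* silences the 'enumeration value not handled in switch' warning;
             * flag bits such as MO_SIGN or MO_BSWAP never reach this code */
            tcg_abort();
        }
    }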
|
|
Replace it with tcg_gen_ext16u_tl, and in two cases merge with a
previous move from cpu_regs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
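A before/after sketch of the merge being described; the name of the removed helper is not shown in this log, so the 'before' lines are a hedged reconstruction of a register move followed by a mask to 16 bits:

    /* Before (reconstructed): move from the register file, then mask. */
    tcg_gen_mov_tl(cpu_T[0], cpu_regs[reg]);
    tcg_gen_andi_tl(cpu_T[0], cpu_T[0], 0xffff);

    /* After: one zero-extending move straight from cpu_regs. */
    tcg_gen_ext16u_tl(cpu_T[0], cpu_regs[reg]);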
|
|
Replace it with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with its definition.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace it with tcg_gen_ext16u_tl. In four places we can combine that
with a previous move into cpu_T[0], and in one place we can infer that
the zero-extension has already happened via the previous load.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users. In two cases, this allows
us to share code between the 32-bit and 64-bit immediate moves.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definitions into all users. The only time that
gen_op_movl_T1_imu was used, the input was of type 'unsigned',
so the replacement works identically.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate the definition of gen_op_movl_T0_im to all users.
The function gen_op_movl_T0_imu was unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
For the known MO_32/MO_64 cases, we don't need to extend a 32-bit temp
into a 64-bit temp before storing into the hardware register.
We do need the extension for the MO_8/MO_16 cases, in order for the
deposit_tl operation to work, so leave those alone.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
We can now use tcg_gen_qemu_st_i32 directly to avoid the extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
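Roughly what that looks like at a call site, sketched with the translator's cpu_tmp2_i32/cpu_A0 temporaries; the exact operands depend on the caller:

    /* Before (sketch): widen the 32-bit temp, then do a target_ulong store. */
    tcg_gen_extu_i32_tl(cpu_T[0], cpu_tmp2_i32);
    tcg_gen_qemu_st_tl(cpu_T[0], cpu_A0, s->mem_index, MO_LEUL);

    /* After: store the 32-bit temp directly. */
    tcg_gen_qemu_st_i32(cpu_tmp2_i32, cpu_A0, s->mem_index, MO_LEUL);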
|
|
We can now use tcg_gen_qemu_ld_i32 directly to avoid the truncation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
For the 16 and 32-bit cases, we don't need to truncate via
a temporary register.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Fold the bswap into the memory operation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
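An illustration of the folding, assuming a 32-bit little-endian load that was previously followed by an explicit byte swap (the actual context is not shown in this log):

    /* Before (sketch): load, then swap. */
    tcg_gen_qemu_ld_tl(cpu_T[0], cpu_A0, s->mem_index, MO_LEUL);
    tcg_gen_bswap32_tl(cpu_T[0], cpu_T[0]);

    /* After: ask the memory operation for the reversed byte order directly. */
    tcg_gen_qemu_ld_tl(cpu_T[0], cpu_A0, s->mem_index, MO_BEUL);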
|
|
The reg_ptr and offset_ptr outputs are universally unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Always perform a sign-extending load. In the extremely unlikely
case that we've used a 0x66 prefix, the extension to 64 bits is
unnecessary but not wrong; the store will still examine only 16 bits.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
We can use the MO_SIGN bit to tidy the reg-reg switch statement
as well as pass it on to gen_op_ld_v, eliminating one call.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
By inspection, we should obviously be storing T[1], not T[0].
This could only happen for x86_64 in 64-bit mode with a 0x66
prefix on a call insn -- i.e. never.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Too many places have the same test against OR_TMP0 to indicate
a write back to memory. Hoist that into a subroutine.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
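A sketch of the shape such a subroutine takes; the name and exact signature are assumptions, since the log does not show the new function:

    /* Hypothetical helper: write cpu_T[0] back either to memory (rm == OR_TMP0,
     * address already in cpu_A0) or to a general register. */
    static void gen_op_st_rm_T0_A0(DisasContext *s, TCGMemOp ot, int rm)
    {
        if (rm == OR_TMP0) {
            gen_op_st_v(s, ot, cpu_T[0], cpu_A0);
        } else {
            gen_op_mov_reg_v(ot, rm, cpu_T[0]);
        }
    }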
|
|
Replace its users by gen_op_ld_v with the MO_SIGN bit set.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Propagate its definition into all users.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The MO_8/16/32/64 constants have the same encoding and meaning
as OT_BYTE/WORD/LONG/QUAD. Since the qemu_ld/st helpers rely on
them being the same, standardize on the common names.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
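The shared encoding can be stated as compile-time checks (illustrative; not part of the patch):

    /* Both enumerations are log2 of the operand size in bytes. */
    QEMU_BUILD_BUG_ON(OT_BYTE != MO_8);   /* 0: 1 byte  */
    QEMU_BUILD_BUG_ON(OT_WORD != MO_16);  /* 1: 2 bytes */
    QEMU_BUILD_BUG_ON(OT_LONG != MO_32);  /* 2: 4 bytes */
    QEMU_BUILD_BUG_ON(OT_QUAD != MO_64);  /* 3: 8 bytes */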
|
|
In preference to the older helpers. Stores only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
In preference to the older helpers. Loads only in this patch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
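Typical shape of the conversion for a load; the old-style opcode encodes size and signedness in its name, while the new one carries them in the TCGMemOp argument (sketch):

    /* Before: size/signedness baked into the opcode name. */
    tcg_gen_qemu_ld16u(cpu_T[0], cpu_A0, s->mem_index);

    /* After: size, signedness and endianness in the TCGMemOp argument. */
    tcg_gen_qemu_ld_tl(cpu_T[0], cpu_A0, s->mem_index, MO_LEUW);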
|
|
Now that we don't combine mem_index with operand size info,
we don't need to encode it, which tidies many places that
access it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Rather than add s->mem_index into a combined size+mem_index
argument, pass the context down. This will allow cleaning
up s->mem_index later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
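What the new call shape looks like once the context is threaded through; the wrapper body is an assumption about how it forwards to the memory ops:

    /* Sketch: the helper receives the DisasContext and fetches s->mem_index
     * itself, so callers only pass the operand size. */
    static inline void gen_op_ld_v(DisasContext *s, TCGMemOp ot, TCGv t0, TCGv a0)
    {
        tcg_gen_qemu_ld_tl(t0, a0, s->mem_index, ot | MO_LE);
    }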
|
|
Replace an assert_no_error() usage with the error_abort system.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
|
|
The features family, model, stepping, level and hv_spinlocks are treated
similarly when passed on the command line, so it's not necessary to handle
each of them individually. Collapse them into one catch-all branch which
handles any feature of the form 'foo=val' that is not explicitly handled
elsewhere.
Any unknown feature will be rejected by the property setter, so there is no
need to check for unknown features in cpu_x86_parse_featurestr(); that check
is replaced by the above-mentioned catch-all handler.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
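A condensed sketch of such a catch-all branch; the helper name is hypothetical and the error handling is simplified, but the forwarding to the QOM property setter is the point:

    /* Hedged sketch: forward 'name=val' to the property setter, which
     * rejects unknown property names itself. */
    static void x86_cpu_set_feature_str(X86CPU *cpu, char *featurestr, Error **errp)
    {
        char *val = strchr(featurestr, '=');

        if (val) {
            *val++ = '\0';
            object_property_parse(OBJECT(cpu), val, featurestr, errp);
        }
    }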
|
|
The features check, enforce, hv_relaxed and hv_vapic are treated as booleans
set to 'on' when passed on the command line, so it's not necessary to handle
each of them separately. Collapse them into one catch-all branch which treats
any feature of the form 'foo' as a boolean set to 'on'.
Any unknown feature will be rejected by the CPU property setter, so there is
no need to check for unknown features in cpu_x86_parse_featurestr(); that
check is replaced by the above-mentioned catch-all handler.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
* Additionally convert check_cpuid & enforce_cpuid to bool and make them
  members of X86CPU
* Make the 'enforce' feature independent of 'check'
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
This move prepares for the subsequent refactoring of the vCPU APIC.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
When we're running in non-64-bit mode with qemu-system-x86_64, we can
still end up with virtual addresses that are above the 32-bit boundary
if a segment offset is set up.
GNU Hurd does exactly that: it sets the segment offset to 0x80000000 and
sets its EIP value to 0x8xxxxxxx to access low memory.
This doesn't hit us when paging is enabled, as there we just mask away the
unused bits. But in real mode we assume that vaddr == paddr, which is
wrong in this case. Real hardware wraps the virtual address around at the
32-bit boundary, so let's do the same.
This fixes booting GNU Hurd in qemu-system-x86_64 for me.
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
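The fix amounts to truncating the address to 32 bits when long mode is not active and paging is off; a sketch along those lines (the helper name is hypothetical, and the exact placement in the MMU fault path is not shown in this log):

    /* Sketch: without paging and outside long mode, physical = virtual
     * truncated to 32 bits, matching real hardware's wrap-around. */
    static hwaddr x86_nonpaging_phys_addr(CPUX86State *env, target_ulong addr)
    {
        hwaddr paddr = addr;
    #ifdef TARGET_X86_64
        if (!(env->hflags & HF_LMA_MASK)) {
            paddr = (uint32_t)paddr;
        }
    #endif
        return paddr;
    }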
|
|
If the guest is running in nested mode on system reset, clearing the
feature MSR signals the kernel to leave this mode. Recent kernels process
this properly, but leave the VCPU state behind undefined; it is the job of
userspace to bring it back into proper shape. Therefore, write this
specific MSR first so that no state transfer gets lost.
This allows cleanly resetting a guest with VMX in use.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
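A sketch of how the ordering can be expressed in the KVM put path; MSR_IA32_FEATURE_CONTROL and KVM_PUT_RESET_STATE are real names, while the helper and the CPU state field used here are assumptions:

    /* Hedged sketch: on reset-level writes, push the feature-control MSR on
     * its own before the rest of the vCPU state, so the kernel has already
     * left nested mode when that state arrives. */
    static int kvm_put_feature_control_first(X86CPU *cpu, int level)
    {
        CPUX86State *env = &cpu->env;

        if (level < KVM_PUT_RESET_STATE) {
            return 0;   /* runtime updates don't need this ordering */
        }
        return kvm_put_one_msr(cpu, MSR_IA32_FEATURE_CONTROL,
                               env->msr_ia32_feature_control);
    }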
|