Age | Commit message | Author |
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Do all of group 17 at once, for simplicity.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
As this is the first of the BMI insns to be implemented,
it carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
There are no required uses of these encodings yet.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Avoid duplicating the switch statement between 32-bit and 64-bit modes.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Add another slot in ENV and store two of the three inputs. This lets us
do less work when the carry-out is not needed, and avoids an unpredictable
CC_OP after translating these insns.
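For illustration, a minimal standalone C sketch of the lazy scheme (plain
variables stand in for the env slots, and the slot layout here is a
simplification, not the patch's exact convention):

    #include <stdint.h>
    #include <stdio.h>

    /* Lazy-flags slots: the result, one operand, and the new second
       slot, used here for the carry-in of ADC. */
    static uint32_t cc_dst, cc_src, cc_src2;

    static uint32_t emulate_adc(uint32_t a, uint32_t b, uint32_t cin)
    {
        cc_dst = a + b + cin;
        cc_src = a;
        cc_src2 = cin;
        return cc_dst;
    }

    /* Carry-out, computed only on demand: for r = a + b + cin
       (mod 2^32), CF = cin ? r <= a : r < a. */
    static int carry_out(void)
    {
        return cc_src2 ? cc_dst <= cc_src : cc_dst < cc_src;
    }

    int main(void)
    {
        emulate_adc(0xffffffffu, 0, 1);
        printf("CF=%d\n", carry_out());   /* prints CF=1 */
        return 0;
    }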
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends, because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp-local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe):
    Before                               After
 ----------------------------------------------------------------------
 movi_i64 tmp1,$0x21                     movi_i64 tmp1,$0x21
 movi_i64 cc_src,$0x21                   movi_i64 cc_src,$0x21
 sub_i64 cc_dst,rax,tmp1                 sub_i64 cc_dst,rax,tmp1
 add_i64 tmp7,cc_dst,cc_src
 movi_i32 cc_op,$0x11                    movi_i32 cc_op,$0x11
 brcond_i64 tmp7,cc_src,leu,$0x0         discard loc11
                                         brcond_i64 rax,cc_src,leu,$0x0

    Before                               After
 ----------------------------------------------------------------------
 mov (%r14),%rbp                         mov (%r14),%rbp
 mov %rbp,%rbx                           mov %rbp,%rbx
 sub $0x21,%rbx                          sub $0x21,%rbx
 lea 0x21(%rbx),%r12
 movl $0x11,0xa0(%r14)                   movl $0x11,0xa0(%r14)
 movq $0x21,0x90(%r14)                   movq $0x21,0x90(%r14)
 mov %rbx,0x98(%r14)                     mov %rbx,0x98(%r14)
 cmp $0x21,%r12                        | cmp $0x21,%rbp
 jbe ...                                 jbe ...
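As a plain-C illustration of the same idea (not the TCG code; cc_srcT is
just a name for the saved LHS, mirroring the temp-local described above):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t cc_dst, cc_src, cc_srcT;

    /* cmp a,b: keep the original LHS around instead of
       reconstructing it later with an addition. */
    static void gen_cmp(uint64_t a, uint64_t b)
    {
        cc_srcT = a;
        cc_src = b;
        cc_dst = a - b;
    }

    /* jbe: unsigned below-or-equal, straight from the saved LHS. */
    static int cond_be(void)
    {
        return cc_srcT <= cc_src;   /* no cc_dst + cc_src needed */
    }

    int main(void)
    {
        gen_cmp(0x10, 0x21);
        printf("be=%d\n", cond_be());   /* prints be=1 */
        return 0;
    }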
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Placing CC_OP_DYNAMIC at the join is less effective than placing it
before the branch, as the branch will have forced the global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1);
redo it now, also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
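A standalone sketch of why the higher-level form is natural (illustrative
C, not the translator code; only CF is modelled):

    #include <stdint.h>
    #include <stdio.h>

    /* scasb semantics: cmp %al,(%rdi), then step rdi by the
       direction flag. */
    static int scasb(uint8_t al, const uint8_t **di, int df)
    {
        int cf = al < **di;        /* borrow from the compare */
        *di += df ? -1 : 1;        /* DF chooses the direction */
        return cf;
    }

    int main(void)
    {
        uint8_t buf[] = { 0x41, 0x42 };
        const uint8_t *p = buf;
        printf("CF=%d\n", scasb(0x40, &p, 0));   /* 0x40 < 0x41: CF=1 */
        return 0;
    }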
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This makes the i386 front-end able to create CCPrepare structs for all
conditions, not just those that come from a single flag. In particular,
JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced
to return a result in bit 0 (unlike gen_setcc_slow).
However, for now the slow jcc operations will still go through CC
computation in a single-bit temporary, followed by a brcond if the
temporary is nonzero.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Introduce a struct that describes how to build a *cond operation
that checks for a given x86 condition code. For now, just change
gen_compute_eflags_* to return the new struct, generate code for
the CCPrepare struct, and go on as before.
[rth: Use ctz with the proper width rather than ffs.]
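For flavor, a standalone C sketch of the idea (simplified: plain integers
replace TCG values, only a few conditions are shown, and the fields are a
subset of the struct the patch introduces):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { COND_EQ, COND_NE, COND_LTU, COND_LEU,
                   COND_LT, COND_LE } Cond;

    /* Describe "how to test condition X" without committing to a
       0/1 result: the consumer can branch on it or materialize it. */
    typedef struct CCPrepare {
        Cond cond;
        uint64_t reg;     /* first operand */
        uint64_t imm;     /* second operand */
    } CCPrepare;

    static bool cc_eval(CCPrepare cc)
    {
        switch (cc.cond) {
        case COND_EQ:  return cc.reg == cc.imm;
        case COND_NE:  return cc.reg != cc.imm;
        case COND_LTU: return cc.reg < cc.imm;
        case COND_LEU: return cc.reg <= cc.imm;
        case COND_LT:  return (int64_t)cc.reg < (int64_t)cc.imm;
        case COND_LE:  return (int64_t)cc.reg <= (int64_t)cc.imm;
        }
        return false;
    }

    int main(void)
    {
        /* After "cmp $0x21,%rax", jbe is just this description: */
        CCPrepare jbe = { COND_LEU, /* rax = */ 0x10, 0x21 };
        printf("taken=%d\n", cc_eval(jbe));   /* prints taken=1 */
        return 0;
    }

In the translator the same description can feed either a brcond (for jcc)
or a setcond (for setcc), which is what removes the duplication mentioned
in the patches above.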
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE,
L, LE). For the others, do it via setcond and gen_setcc_slow (which is
not that slow in many cases).
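As a standalone C illustration of the reconstruction (64-bit operands
assumed; the patch emits setcond/brcond rather than evaluating directly):

    #include <stdint.h>
    #include <stdio.h>

    /* After sub/cmp: cc_dst = src1 - src2 and cc_src = src2,
       so src1 = cc_dst + cc_src (modulo the operand width). */
    static int cond_be(uint64_t dst, uint64_t src)   /* CF or ZF */
    {
        uint64_t src1 = dst + src;
        return src1 <= src;                /* unsigned <= */
    }

    static int cond_l(uint64_t dst, uint64_t src)    /* SF != OF */
    {
        int64_t src1 = (int64_t)(dst + src);
        return src1 < (int64_t)src;        /* signed < */
    }

    int main(void)
    {
        /* cmp $0x21,%rax with rax = 0x10 */
        uint64_t dst = 0x10 - 0x21, src = 0x21;
        printf("be=%d l=%d\n", cond_be(dst, src), cond_l(dst, src));
        return 0;
    }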
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
And allow gen_setcc_slow to operate on cpu_cc_src.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This code looks at EFLAGS, but it can do so more efficiently with
setcond.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Do not hard code the destination register.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Do the switch at translation time, converting the helper templates to
TCG opcodes. In some cases CF can be computed with a single setcond,
though in others it may require a little more work.
In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF.
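A standalone sketch of the translation-time switch, in plain C (the real
patch emits TCG setcond ops, and the stored-value conventions here are
simplified to one operand per op):

    #include <stdint.h>
    #include <stdio.h>

    enum { CC_OP_ADD, CC_OP_SUB, CC_OP_LOGIC };

    /* dst is the result; src is the stored operand
       (src1 for ADD, src2 for SUB in this sketch). */
    static int compute_cf(int cc_op, uint32_t dst, uint32_t src)
    {
        switch (cc_op) {
        case CC_OP_ADD:   return dst < src;        /* wrapped past src1 */
        case CC_OP_SUB:   return dst + src < src;  /* src1 < src2 */
        case CC_OP_LOGIC: return 0;                /* and/or/xor clear CF */
        }
        return 0;
    }

    int main(void)
    {
        printf("add: CF=%d\n", compute_cf(CC_OP_ADD, 0, 0xffffffffu));
        printf("sub: CF=%d\n", compute_cf(CC_OP_SUB, 0x10 - 0x21, 0x21));
        return 0;
    }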
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the
inverted condition, and use this in gen_setcc_slow_T0. We cannot do it
yet in gen_compute_eflags_c, but prepare the code for it anyway. It is
not worthwhile for PF, as usual.
shr+and+xor could be replaced by and+setcond. I'm not doing it yet.
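For the last remark, a trivial standalone comparison of the two forms
(PF shown; CC_P is the usual EFLAGS parity bit):

    #include <stdint.h>
    #include <stdio.h>

    #define CC_P 0x0004u   /* parity flag bit in EFLAGS */

    /* Inverted PF the shr+and+xor way... */
    static int npf_shift(uint32_t f) { return ((f >> 2) & 1) ^ 1; }

    /* ...and the and+setcond way suggested above. */
    static int npf_setcond(uint32_t f) { return (f & CC_P) == 0; }

    int main(void)
    {
        printf("%d %d\n", npf_shift(CC_P), npf_setcond(CC_P));
        return 0;
    }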
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
ZF, SF and PF can always be computed from CC_DST except in the
CC_OP_EFLAGS case (and CC_OP_DYNAMIC, which just resolves to CC_OP_EFLAGS
in gen_compute_eflags). Use setcond to compute ZF and SF.
We could also use a table lookup to compute PF.
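A standalone C sketch of the extraction (illustrative only; the patch
uses setcond at translation time, and PF could come from a table):

    #include <stdint.h>
    #include <stdio.h>

    /* All three flags derive from the stored result (CC_DST). */
    static int flag_zf(uint32_t dst) { return dst == 0; }           /* setcond eq */
    static int flag_sf(uint32_t dst) { return (int32_t)dst < 0; }   /* setcond lt */

    /* x86 PF: set when the low byte has an even number of 1 bits.
       A 256-entry table lookup would also do. */
    static int flag_pf(uint32_t dst)
    {
        uint8_t b = (uint8_t)dst;
        b ^= b >> 4;
        b ^= b >> 2;
        b ^= b >> 1;
        return !(b & 1);
    }

    int main(void)
    {
        uint32_t dst = 0x80000003u;   /* negative, nonzero, even parity */
        printf("ZF=%d SF=%d PF=%d\n",
               flag_zf(dst), flag_sf(dst), flag_pf(dst));
        return 0;
    }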
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This gets us universal coverage, rather than scattering discards
around at various places. As a bonus, we do not emit redundant
discards e.g. between sequential logic insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This makes code more similar to the other callers of gen_eob, especially
loopz/loopnz/jcxz.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
After calling gen_compute_eflags, leave the computed value in cc_reg_src
and set cc_op to CC_OP_EFLAGS. The next few patches will remove most
calls to gen_compute_eflags anyway.
As a result of this change it is more natural to remove the register
argument from gen_compute_eflags and change all the callers.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Introduce new functions to extract PF, SF, OF, ZF in addition to CF.
These provide single entry points for optimizing accesses to a single
flag.
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
All of the conditional calls to gen_op_set_cc_op go away, and
gen_op_set_cc_op itself gets inlined into its only remaining caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Use a dirty flag to know whether env->cc_op is up to date,
rather than forcing s->cc_op to DYNAMIC and losing info.
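A minimal standalone sketch of the bookkeeping (names and structure are
illustrative; the real code stores cc_op to env with a TCG op):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int env_cc_op;     /* value visible at run time (in env) */
        int cc_op;         /* the translator's compile-time knowledge */
        bool cc_op_dirty;  /* does env_cc_op lag behind? */
    } DisasContext;

    /* Record a new cc_op without storing it yet. */
    static void set_cc_op(DisasContext *s, int op)
    {
        s->cc_op = op;
        s->cc_op_dirty = true;
    }

    /* Flush before anything that might read env->cc_op. */
    static void gen_update_cc_op(DisasContext *s)
    {
        if (s->cc_op_dirty) {
            s->env_cc_op = s->cc_op;   /* stand-in for the TCG store */
            s->cc_op_dirty = false;
        }
    }

    int main(void)
    {
        DisasContext s = { 0, 0, false };
        set_cc_op(&s, 17);      /* several insns can do this cheaply */
        set_cc_op(&s, 24);
        gen_update_cc_op(&s);   /* one store, and cc_op stays known */
        printf("env.cc_op=%d\n", s.env_cc_op);
        return 0;
    }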
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This will provide a good hook into which we can consolidate
all of the cc variable discards.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Before computing flags we need to store the cc_op to memory. Move this
to gen_compute_eflags_c and gen_compute_eflags rather than doing it all
over the place.
Also, after computing the flags in cpu_cc_src we are in EFLAGS mode.
Set s->cc_op and discard cpu_cc_dst in gen_compute_eflags, rather than
doing it all over the place.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Discard CC_DST and set s->cc_op immediately after computing EFLAGS.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Always compute EFLAGS first since it is needed whenever
the shift is non-zero, i.e. most of the time. This makes it possible
to remove some writes of CC_OP_EFLAGS to cpu_cc_op and more importantly
removes cases where s->cc_op becomes CC_OP_DYNAMIC. Also, we can
remove cc_tmp and just modify cc_src from within the helper.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Always compute EFLAGS first since it is needed whenever the shift is
non-zero, i.e. most of the time. This makes it possible to remove some
writes of CC_OP_EFLAGS to cpu_cc_op and more importantly removes cases
where s->cc_op becomes CC_OP_DYNAMIC. These are slow and we want to
avoid them: CC_OP_EFLAGS is quite efficient once we paid the initial
cost of computing the flags.
Finally, always follow gen_compute_eflags(cpu_cc_src) by setting s->cc_op
and discarding cpu_cc_dst.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This ensures the invariant that cpu_cc_op matches s->cc_op when calling
the helpers. The next patches need this because gen_compute_eflags and
gen_compute_eflags_c will take care of setting cpu_cc_op.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
As in the gen_repz_scas/gen_repz_cmps case, delay setting
CC_OP_DYNAMIC in gen_jcc until after code generation. All of
gen_jcc1/is_fast_jcc/gen_setcc_slow_T0 now work on s->cc_op, which makes
things a bit easier to follow and to patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Set it to the appropriate CC_OP_SUBx constant in gen_scas/gen_cmps.
In the repz case it can be overridden to CC_OP_DYNAMIC after generating
the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Introduce a function that abstracts extracting an 8, 16, 32 or 64-bit value
with or without sign, generalizing gen_extu and gen_exts.
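A plain-C analogue of the generalized extract (the patch does this with
TCG ext ops; the size code here mirrors the usual log2 operand-size
encoding):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Widen an 8/16/32/64-bit value to 64 bits, signed or unsigned. */
    static uint64_t ext_tl(uint64_t val, int ot /* log2 size */, bool sign)
    {
        switch (ot) {
        case 0: return sign ? (uint64_t)(int8_t)val  : (uint8_t)val;
        case 1: return sign ? (uint64_t)(int16_t)val : (uint16_t)val;
        case 2: return sign ? (uint64_t)(int32_t)val : (uint32_t)val;
        default: return val;
        }
    }

    int main(void)
    {
        printf("%llx %llx\n",
               (unsigned long long)ext_tl(0x80, 0, true),    /* ffffffffffffff80 */
               (unsigned long long)ext_tl(0x80, 0, false));  /* 80 */
        return 0;
    }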
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reviewed-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|