|
TILE-Gx was removed in the v6.0 release (see
commit 2cc1a90166 "Remove deprecated target tilegx"),
so there is no need to mention it in the list of "supported targets".
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Motorola treats denormals with the explicit integer bit set as
having unbiased exponent 0, unlike Intel, which treats them as
having unbiased exponent 1 (more like all other IEEE formats,
which have no explicit integer bit).
Add a flag on FloatFmt to differentiate the behaviour.
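A minimal sketch of what such a per-format flag could look like; the
field name (m68k_denormal) and the helper below are illustrative
assumptions, not the exact code:

    #include <stdbool.h>

    /* Illustrative only: FloatFmt grows a flag saying how denormals with
     * an explicit integer bit are interpreted. */
    typedef struct FloatFmt {
        int exp_bias;
        int frac_size;
        /* (other fields omitted) */
        bool m68k_denormal;   /* true: explicit-integer-bit denormals use
                                 unbiased exponent 0 (Motorola behaviour) */
    } FloatFmt;

    static int explicit_bit_denormal_exp(const FloatFmt *fmt)
    {
        /* Motorola: unbiased exponent 0; Intel: unbiased exponent 1. */
        return fmt->m68k_denormal ? 0 : 1;
    }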
Reported-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
We missed these functions when upstreaming the bfloat16 support.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230531065458.2082-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Add versions of float64_to_int* which do not saturate the result.
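For context, a plain-C sketch of the difference in semantics; this
illustrates saturating vs. one plausible non-saturating (wrap-around)
behaviour and is not the QEMU implementation; the helper names are
made up:

    #include <stdint.h>

    /* Saturating: out-of-range inputs clamp to the type's min/max. */
    static int32_t to_i32_saturate(double d)
    {
        if (d >= 2147483648.0)  return INT32_MAX;
        if (d < -2147483648.0)  return INT32_MIN;
        return (int32_t)d;
    }

    /* Non-saturating: keep only the low 32 bits of the integer part
     * (wrap modulo 2^32); assumes |d| < 2^63 for this sketch.  The final
     * unsigned-to-signed conversion is the usual two's-complement wrap. */
    static int32_t to_i32_wrap(double d)
    {
        int64_t wide = (int64_t)d;
        return (int32_t)(uint32_t)wide;
    }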
Reviewed-by: Christoph Muellner <christoph.muellner@vrull.eu>
Tested-by: Christoph Muellner <christoph.muellner@vrull.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230527141910.1885950-2-richard.henderson@linaro.org>
|
|
Balaton discovered that asserts for the extract/deposit calls had a
significant impact on a lame benchmark on qemu-ppc. Replicating with:
./qemu-ppc64 ~/lsrc/tests/lame.git-svn/builds/ppc64/frontend/lame \
-h pts-trondheim-3.wav pts-trondheim-3.mp3
showed that the pack/unpack routines were not eliding the assert checks
as they should have, causing them to figure prominently in the profile:
11.44% qemu-ppc64 qemu-ppc64 [.] unpack_raw64.isra.0
11.03% qemu-ppc64 qemu-ppc64 [.] parts64_uncanon_normal
8.26% qemu-ppc64 qemu-ppc64 [.] helper_compute_fprf_float64
6.75% qemu-ppc64 qemu-ppc64 [.] do_float_check_status
5.34% qemu-ppc64 qemu-ppc64 [.] parts64_muladd
4.75% qemu-ppc64 qemu-ppc64 [.] pack_raw64.isra.0
4.38% qemu-ppc64 qemu-ppc64 [.] parts64_canonicalize
3.62% qemu-ppc64 qemu-ppc64 [.] float64r32_round_pack_canonical
After this patch the same test runs 31 seconds faster with a profile
where the generated code dominates more:
+ 14.12% 0.00% qemu-ppc64 [unknown] [.] 0x0000004000619420
+ 13.30% 0.00% qemu-ppc64 [unknown] [.] 0x0000004000616850
+ 12.58% 12.19% qemu-ppc64 qemu-ppc64 [.] parts64_uncanon_normal
+ 10.62% 0.00% qemu-ppc64 [unknown] [.] 0x000000400061bf70
+ 9.91% 9.73% qemu-ppc64 qemu-ppc64 [.] helper_compute_fprf_float64
+ 7.84% 7.82% qemu-ppc64 qemu-ppc64 [.] do_float_check_status
+ 6.47% 5.78% qemu-ppc64 qemu-ppc64 [.] parts64_canonicalize.constprop.0
+ 6.46% 0.00% qemu-ppc64 [unknown] [.] 0x0000004000620130
+ 6.42% 0.00% qemu-ppc64 [unknown] [.] 0x0000004000619400
+ 6.17% 6.04% qemu-ppc64 qemu-ppc64 [.] parts64_muladd
+ 5.85% 0.00% qemu-ppc64 [unknown] [.] 0x00000040006167e0
+ 5.74% 0.00% qemu-ppc64 [unknown] [.] 0x0000b693fcffffd3
+ 5.45% 4.78% qemu-ppc64 qemu-ppc64 [.] float64r32_round_pack_canonical
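For reference, the pack/unpack routines are built on bit-field helpers
in the style of extract64() below (a simplified sketch of the QEMU
helper); when the start/length arguments are compile-time constants and
the call is fully inlined, the compiler is expected to prove the assert
true and drop it, which is exactly what was not happening here:

    #include <assert.h>
    #include <stdint.h>

    static inline uint64_t extract64(uint64_t value, int start, int length)
    {
        /* This range check should be constant-folded away at call sites
         * with constant arguments; when it is not, it shows up in
         * profiles like the ones above. */
        assert(start >= 0 && length > 0 && length <= 64 - start);
        return (value >> start) & (~0ULL >> (64 - length));
    }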
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <ec9cfe5a-d5f2-466d-34dc-c35817e7e010@linaro.org>
[AJB: Patchified rth's suggestion]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20230523131107.3680641-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
The float32_exp2 function computes an incorrect result for 2^x.
For example, for the set of input values {0.1, 2.0, 2.0, -1.0},
the expected output would be {1.071773, 4.000000, 4.000000, 0.500000},
but the function instead computes {1.119102, 3.382044, 3.382044, -0.191022}.
Looking at the code, float32_exp2() attempts to compute

    e^x = 1 + x/1! + x^2/2! + x^3/3! + x^4/4! + x^5/5! + ... + x^n/n! + ...

but because of the typo it ends up computing

    e^x = 1 + x/1! + x/2! + x/3! + x/4! + x/5! + ... + x/n! + ...
This is because parts_muladd is being passed xp, which is just 'x',
instead of xnp, which holds the running numerator x^n. Commit
'572c4d862ff2' refactored this function and mistakenly used xp
instead of xnp.
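A sketch of the corrected inner loop (simplified from float32_exp2();
details may differ slightly from the tree, but xnp accumulates the
running power x^n while xp stays equal to x):

    FloatParts64 xp, xnp, tp, rp;   /* xp and xnp already hold x on entry */
    int i;

    float64_unpack_canonical(&rp, float64_one, status);
    for (i = 0; i < 15; i++) {
        float64_unpack_canonical(&tp, float32_exp2_coefficients[i], status);
        rp = *parts_muladd(&tp, &xnp, &rp, 0, status);  /* was &xp: the bug */
        xnp = *parts_mul(&xnp, &xp, status);
    }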
Cc: qemu-stable@nongnu.org
Fixes: 572c4d862ff2 "softfloat: Convert float32_exp2 to FloatParts"
Partially-Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1623
Reported-By: Luca Barbato (https://gitlab.com/lu-zero)
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Message-Id: <168304110865.537992.13059030916325018670.stgit@localhost.localdomain>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
logB(0) should raise the divideByZero exception, per IEEE 754-2008 section 7.3.
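For reference, host libm implements the same IEEE rule, which can be
checked with a small standalone program (link with -lm):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double r = logb(0.0);   /* IEEE logB(0): -inf, divideByZero raised */
        printf("logb(0.0) = %f, divByZero = %d\n",
               r, fetestexcept(FE_DIVBYZERO) != 0);
        return 0;
    }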
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220930024510.800005-4-gaosong@loongson.cn>
|
|
Add the possibility of recalculating a result if it overflows or
underflows. If the result overflows and the rebias bool is true, the
intermediate result should have 3/4 of the total exponent range
subtracted from its exponent. The same applies to underflow, except
that the adjustment is added to the exponent of the intermediate
result instead.
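A minimal sketch of the idea, with illustrative names (in the real code
the adjustment happens while rounding/repacking the intermediate
result):

    #include <stdbool.h>

    /* Illustrative only: adjust an out-of-range intermediate exponent by
     * 3/4 of the destination format's total exponent range. */
    static int rebias_exponent(int exp, int exp_range,
                               bool overflowed, bool underflowed)
    {
        int adjust = exp_range * 3 / 4;
        if (overflowed) {
            return exp - adjust;    /* pull the result back into range */
        }
        if (underflowed) {
            return exp + adjust;
        }
        return exp;
    }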
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220805141522.412864-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
|
|
Merge tag 'pull-request-2022-07-20' of https://gitlab.com/thuth/qemu into staging
* Fixes for s390x floating point vector instructions
# gpg: Signature made Wed 20 Jul 2022 08:14:50 BST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-07-20' of https://gitlab.com/thuth/qemu:
tests/tcg/s390x: test signed vfmin/vfmax
target/s390x: fix NaN propagation rules
target/s390x: fix handling of zeroes in vfmin/vfmax
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# fpu/softfloat-specialize.c.inc
|
|
The muladd (inf, zero, nan) case sets InvalidOp and returns the input
value 'c'; NaN selection prefers sNaN over qNaN, in c, a, b order.
Binary operations prefer sNaN over qNaN, in a, b order.
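A generic illustration of the ordering rule described above (not the
actual QEMU pickNaN specialization; the function and parameter names
are made up), returning which operand's NaN is propagated:

    #include <stdbool.h>

    /* 0 = a, 1 = b, 2 = c; *_snan means signaling NaN, *_nan any NaN. */
    static int pick_nan_cab(bool a_snan, bool a_nan,
                            bool b_snan, bool b_nan,
                            bool c_snan, bool c_nan)
    {
        /* Prefer a signaling NaN, in c, a, b order ... */
        if (c_snan) return 2;
        if (a_snan) return 0;
        if (b_snan) return 1;
        /* ... then any (quiet) NaN, in the same order. */
        if (c_nan) return 2;
        if (a_nan) return 0;
        return 1;
    }

For two-operand cases the same scan runs over a, b only.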
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-3-gaosong@loongson.cn>
[rth: Add specialization for pickNaN]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
s390x has the same NaN propagation rules as ARM, and not as x86.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220713182612.3780050-3-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
Since the caller, partsN_compare, is now exclusively
using FloatRelation, it's clearer to use it here too.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220401132240.79730-4-richard.henderson@linaro.org>
|
|
As the return type is FloatRelation, it's clearer to
use the type for 'cmp' within the function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220401132240.79730-3-richard.henderson@linaro.org>
|
|
The declaration used 'int', while the definition used 'FloatRelation'.
This should have resulted in a compiler error, but mysteriously didn't.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220401132240.79730-2-richard.henderson@linaro.org>
|
|
Implements float128_to_int128 based on parts_float_to_int logic.
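A hedged usage sketch, assuming the new helper mirrors the shape of the
existing float128_to_int64() (in-tree headers; rounding mode and
exception flags carried in float_status):

    #include "qemu/osdep.h"
    #include "qemu/int128.h"
    #include "fpu/softfloat.h"

    /* Convert a quad-precision value to a signed 128-bit integer,
     * accumulating exceptions in 'st'. */
    static Int128 quad_to_i128(float128 f, float_status *st)
    {
        return float128_to_int128(f, st);
    }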
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220330175932.6995-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
|
|
Implements float128_to_uint128 based on parts_float_to_uint logic.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220330175932.6995-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
|
|
Based on parts_sint_to_float, implements int128_to_float128 to convert a
signed 128-bit value received through an Int128 argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220330175932.6995-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
|
|
Based on parts_uint_to_float, implements uint128_to_float128 to convert
an unsigned 128-bit value received through an Int128 argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220330175932.6995-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
|
|
These variants take a float64 as input, compute the result to
infinite precision (as we do with FloatParts), round the result
to the precision and dynamic range of float32, and then return
the result in the format of float64.
This is the operation PowerPC requires for its float32 operations.
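A usage sketch under the assumption that the new variants follow the
existing naming pattern (e.g. a float64r32_add taking and returning
float64 operands):

    #include "fpu/softfloat.h"

    /* Single-precision add on double-format register values, as PowerPC's
     * fadds needs: round to float32 precision and range, return float64
     * format. */
    static float64 do_fadds(float64 a, float64 b, float_status *st)
    {
        return float64r32_add(a, b, st);
    }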
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-28-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has this flag, and it's easier to compute it here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-8-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has this flag, and it's easier to compute it here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-7-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has this flag, and it's easier to compute it here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-6-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has these flags, and it's easier to compute them here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-5-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has this flag, and it's easier to compute it here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-4-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
PowerPC has this flag, and it's easier to compute it here
than after the fact.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211119160502.17432-3-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
For "fmax/fmin ft0, ft1, ft2" and if one of the inputs is sNaN,
The original logic:
Return NaN and set invalid flag if ft1 == sNaN || ft2 == sNan.
The alternative path:
Set invalid flag if ft1 == sNaN || ft2 == sNaN.
Return NaN only if ft1 == NaN && ft2 == NaN.
The IEEE 754 spec allows both implementation and some architecture such
as riscv choose different defintions in two spec versions.
(riscv-spec-v2.2 use original version, riscv-spec-20191213 changes to
alternative)
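A plain-C sketch of the two behaviours (illustrative only; is_snan() is
a local helper written for the sketch, and the special handling of
signed zeroes is omitted):

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Helper for the sketch: a signaling NaN has the quiet bit clear. */
    static bool is_snan(double d)
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof(bits));
        return isnan(d) && !(bits & (1ULL << 51));
    }

    /* Original rule (riscv-spec-v2.2): a signaling NaN poisons the result. */
    static double fmax_orig(double a, double b, bool *invalid)
    {
        if (is_snan(a) || is_snan(b)) {
            *invalid = true;
            return NAN;
        }
        if (isnan(a)) return isnan(b) ? NAN : b;
        if (isnan(b)) return a;
        return a > b ? a : b;
    }

    /* Alternative rule (riscv-spec-20191213): sNaN only sets the flag;
     * the result is NaN only when both inputs are NaN. */
    static double fmax_alt(double a, double b, bool *invalid)
    {
        if (is_snan(a) || is_snan(b)) {
            *invalid = true;
        }
        if (isnan(a)) return isnan(b) ? NAN : b;
        if (isnan(b)) return a;
        return a > b ? a : b;
    }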
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211021160847.2748577-2-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
In commit a777d6033447a we added an assertion to parts_silence_nan() that
prohibits calling float*_silence_nan() when in default-NaN mode.
This ties together a property of the output ("do we generate a default
NaN when the result is a NaN?") with an operation on an input ("silence
this input NaN").
It's true that most of the time when in default-NaN mode you won't
need to silence an input NaN, because you can just produce the
default NaN as the result instead. But some functions like
float*_maxnum() are defined to be able to work with quiet NaNs, so
silencing an input SNaN is still reasonable. In particular, the
upcoming implementation of MVE VMAXNMV would fall over this assertion
if we didn't delete it.
Delete the assertion.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210614233143.1221879-3-richard.henderson@linaro.org>
|
|
Typo in the conversion to FloatParts64.
Fixes: 572c4d862ff2
Fixes: Coverity CID 1457457
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210607223812.110596-1-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
For the normal case of no additional scaling, this reduces the
profile contribution of int64_to_float64 to the testcase in the
linked issue from 0.81% to 0.04%.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/134
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Rename to parts$N_modrem. This was the last use of a lot
of the legacy infrastructure, so remove it as required.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Rename to parts$N_log2. Though this is partly a ruse, since I do not
believe the code will succeed for float128 without further work; that
is OK for now, because we do not need this for more than float32 and
float64.
Since berkeley-testfloat-3 doesn't support log2, compare float64_log2
vs the system log2. Fix the errors for inputs near 1.0:
test: 3ff00000000000b0 +0x1.00000000000b0p+0
sf: 3d2fa00000000000 +0x1.fa00000000000p-45
libm: 3d2fbd422b1bd36f +0x1.fbd422b1bd36fp-45
Error in fraction: 32170028290927 ulp
test: 3feec24f6770b100 +0x1.ec24f6770b100p-1
sf: bfad3740d13c9ec0 -0x1.d3740d13c9ec0p-5
libm: bfad3740d13c9e98 -0x1.d3740d13c9e98p-5
Error in fraction: 40 ulp
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Keep the intermediate results in FloatParts instead of
converting back and forth between float64. Use muladd
instead of separate mul+add.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
This is the last use of commonNaNT and of all the routines that use
it, so remove them all, as -Werror would otherwise complain about the
now-unused functions.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Since this is the first such conversion, it includes all of the
packing and unpacking routines as well.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
With floatx80_precision_x, the rounding happens across
the break between words. Notice this case with
frac_lsb = round_mask + 1 -> 0
and check the bits in frac_hi as needed.
In addition, since frac_shift == 0, we won't implicitly clear
round_mask via the right-shift, so explicitly clear those bits.
This fixes rounding for floatx80_precision_[sd].
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Use an enumeration instead of raw 32/64/80 values.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Remove frac_lsb, frac_lsbm1, roundeven_mask. Compute
these from round_mask in parts$N_uncanon_normal.
With floatx80, round_mask will not be tied to frac_shift.
Everything else is easily computable.
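For reference, a sketch of the derivations (relationships as implied by
the old field definitions; the new code may express them differently):

    #include <stdint.h>

    /* Given round_mask (the low frac_shift bits set, e.g. 0x7f for a
     * shift of 7), derive the other rounding masks locally. */
    static void derive_masks(uint64_t round_mask,
                             uint64_t *frac_lsb, uint64_t *frac_lsbm1,
                             uint64_t *roundeven_mask)
    {
        *frac_lsb       = round_mask + 1;         /* lowest kept fraction bit */
        *frac_lsbm1     = (round_mask + 1) >> 1;  /* the rounding ("half") bit */
        *roundeven_mask = (round_mask << 1) | 1;  /* round bits plus the lsb */
    }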
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
We will need to treat the non-normal cases of floatx80 specially,
so split out the normal case that we can reuse.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Rename to parts$N_sqrt.
Reimplement float128_sqrt with FloatParts128.
Reimplement with the inverse-sqrt Newton-Raphson algorithm from musl.
This is significantly faster than even the Berkeley sqrt N-R algorithm,
because it does not use division instructions, only multiplication.
Ordinarily, changing algorithms at the same time as migrating code is
a bad idea, but this is the only way I found that didn't break one of
the routines at the same time.
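The core idea as a plain-C sketch (the real code works in fixed point
on the fraction words, but the iteration has the same shape): refine an
estimate r of 1/sqrt(x) using only multiplications, then recover
sqrt(x) = x * r.

    #include <stdio.h>

    /* Newton-Raphson for 1/sqrt(x): r <- r * (3 - x*r*r) / 2, with no
     * division inside the loop; convergence is quadratic. */
    static double sqrt_via_rsqrt(double x, double r0, int iters)
    {
        double r = r0;
        for (int i = 0; i < iters; i++) {
            r = r * (3.0 - x * r * r) * 0.5;
        }
        return x * r;
    }

    int main(void)
    {
        printf("%.17g\n", sqrt_via_rsqrt(2.0, 0.7, 5));  /* ~1.4142135623731 */
        return 0;
    }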
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|