path: root/include/exec/cpu_ldst.h
Age  Commit message  Author
2020-01-21  cputlb: Make tlb_n_entries private to cputlb.c  (Richard Henderson)
There are no users of this function outside cputlb.c, and its interface will change in the next patch. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
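For context, the helper being made private looked roughly like this at the time (a sketch based on the dynamic-TLB layout; field names may differ slightly from the exact upstream code):

    static inline size_t tlb_n_entries(CPUArchState *env, uintptr_t mmu_idx)
    {
        /* mask is (number of entries - 1) << CPU_TLB_ENTRY_BITS */
        return (env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS) + 1;
    }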
2020-01-15  tcg: Search includes from the project root source directory  (Philippe Mathieu-Daudé)
We currently search both the root and the tcg/ directories for tcg files: $ git grep '#include "tcg/' | wc -l 28 $ git grep '#include "tcg[^/]' | wc -l 94 To simplify the preprocessor search path, unify by making the tcg/ directory explicit. Patch created mechanically by running: $ for x in \ tcg.h tcg-mo.h tcg-op.h tcg-opc.h \ tcg-op-gvec.h tcg-gvec-desc.h; do \ sed -i "s,#include \"$x\",#include \"tcg/$x\"," \ $(git grep -l "#include \"$x\""); \ done Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts) Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200101112303.20724-2-philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Expand cpu_ldst_template.h in cputlb.c  (Richard Henderson)
Reduce the amount of preprocessor obfuscation by expanding the text of each of the functions generated. The result is only slightly smaller than the original. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Remove support for MMU_MODE*_SUFFIX  (Richard Henderson)
All users have now been converted to cpu_*_mmuidx_ra. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Expand cpu_ldst_useronly_template.h in user-exec.c  (Richard Henderson)
With the tracing hooks, the inline functions are no longer so simple. Reduce the amount of preprocessor obfuscation by expanding the text of each of the functions generated. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Provide cpu_(ld,st)*_mmuidx_ra for user-only  (Richard Henderson)
This finishes the new interface begun with the previous patch. Document the interface and deprecate MMU_MODE<N>_SUFFIX. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
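The new interface takes an explicit MMU index and return address instead of encoding the MMU mode in the function name. A representative pair of declarations, as a sketch of the shape rather than the full documented set:

    uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                                int mmu_idx, uintptr_t retaddr);
    void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
                           int mmu_idx, uintptr_t retaddr);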
2020-01-15  cputlb: Rename helper_ret_ld*_cmmu to cpu_ld*_code  (Richard Henderson)
There are no uses of the *_cmmu names other than the bare wrapping within the *_code inlines. Therefore rename the functions so we can drop the inlines. Use abi_ptr instead of target_ulong in preparation for user-only; the two types are identical for softmmu. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
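After the rename, code fetches go through declarations of roughly this shape (a sketch):

    uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
    uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
    uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
    uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr);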
2020-01-15  translator: Use cpu_ld*_code instead of open-coding  (Richard Henderson)
The DO_LOAD macros replicate the distinction already performed by the cpu_ldst.h functions. Use them. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Move body of cpu_ldst_template.h out of line  (Richard Henderson)
With the tracing hooks, the inline functions are no longer so simple. Once out-of-line, the current tlb_entry lookup is redundant with the one in the main load/store_helper. This also begins the introduction of a new target-facing interface, with suffix *_mmuidx_ra. This is not yet official because the interface is not done for user-only. Use abi_ptr instead of target_ulong in preparation for user-only; the two types are identical for softmmu. What remains in cpu_ldst_template.h are the expansions for _code, _data, and MMU_MODE<N>_SUFFIX. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28  translator: add translator_ld{ub,sw,uw,l,q}  (Emilio G. Cota)
We don't bother with replicating the fast path (tlb_hit) of the old cpu_ldst helpers as it has no measurable effect on performance. This probably indicates we should consider flattening the whole set of helpers but that is out of scope for this change. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> [AJB: directly plumb into softmmu/user helpers] Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-07-18  linux-user: check valid address in access_ok()  (Rémi Denis-Courmont)
Fix a crash with LTP testsuite and aarch64: tst_test.c:1015: INFO: Timeout per run is 0h 05m 00s qemu-aarch64: .../qemu/accel/tcg/translate-all.c:2522: page_check_range: Assertion `start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)' failed. qemu:handle_cpu_signal received signal outside vCPU context @ pc=0x60001554 page_check_range() should never be called with address outside the guest address space. This patch adds a guest_addr_valid() check in access_ok() to only call page_check_range() with a valid address. Fixes: f6768aa1b4c6 ("target/arm: fix AArch64 virtual address space size") Signed-off-by: Rémi Denis-Courmont <remi@remlab.net> Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20190704084115.24713-1-lvivier@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
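The shape of the fix is roughly the following (a sketch only; the exact upstream expression may differ):

    static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
    {
        /* reject addresses outside the guest address space up front */
        return guest_addr_valid(addr) &&
               (size == 0 || guest_addr_valid(addr + size - 1)) &&
               page_check_range((target_ulong)addr, size, type) == 0;
    }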
2019-07-14  tcg: Introduce set/clear_helper_retaddr  (Richard Henderson)
At present we have a potential error in that helper_retaddr contains data for handle_cpu_signal, but we have not ensured that those stores will be scheduled properly before the operation that may fault. It might be that these races are not in practice observable, due to our use of -fno-strict-aliasing, but better safe than sorry. Adjust all of the setters of helper_retaddr. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
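The new helpers wrap the assignment with a compiler barrier so the store to helper_retaddr is ordered around the possibly-faulting access; roughly:

    static inline void set_helper_retaddr(uintptr_t ra)
    {
        helper_retaddr = ra;
        /* Make the store visible before the access that may fault. */
        barrier();
    }

    static inline void clear_helper_retaddr(void)
    {
        /* Keep the value live until the preceding access has completed. */
        barrier();
        helper_retaddr = 0;
    }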
2019-06-10  tcg: Create struct CPUTLB  (Richard Henderson)
Move all softmmu tlb data into this structure. Arrange the members so that we are able to place mask+table together and at a smaller absolute offset from ENV. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-10  tcg: Use tlb_fill probe from tlb_vaddr_to_host  (Richard Henderson)
Most of the existing users would continue around a loop which would fault the tlb entry in via a normal load/store. But for AArch64 SVE we have an existing emulation bug wherein we would mark the first element of a no-fault vector load as faulted (within the FFR, not via exception) just because we did not have its address in the TLB. Now we can properly only mark it as faulted if there really is no valid, readable translation, while still not raising an exception. (Note that beyond the first element of the vector, the hardware may report a fault for any reason whatsoever; with at least one element loaded, forward progress is guaranteed.) Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28  cputlb: Remove static tlb sizing  (Richard Henderson)
Now that all tcg backends support TCG_TARGET_IMPLEMENTS_DYN_TLB, remove the define and the old code. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28  tcg: introduce dynamic TLB sizing  (Emilio G. Cota)
Disabled in all TCG backends for now. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <20190116170114.26802-3-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-18  cputlb: read CPUTLBEntry.addr_write atomically  (Emilio G. Cota)
Updates can come from other threads, so readers that do not take tlb_lock must use atomic_read to avoid undefined behaviour (UB). This completes the conversion to tlb_lock. This conversion results on average in no performance loss, as the following experiments (run on an Intel i7-6700K CPU @ 4.00GHz) show. 1. aarch64 bootup+shutdown test: before: 7487.09 msec task-clock, 31,574,905,303 cycles, 57,097,908,812 instructions (1.81 insns per cycle), 173,278,962 branch-misses (1.69% of all branches), 7.504 s elapsed (+- 0.14%); after: 7462.44 msec task-clock, 31,478,476,520 cycles, 57,017,330,084 instructions (1.81 insns per cycle), 173,023,787 branch-misses (1.69% of all branches), 7.475 s elapsed (+- 0.07%). 2. SPEC06int (test set): [ASCII chart of speedup over master for the tlb-lock-v2 and tlb-lock-v3 variants across the SPEC06int benchmarks; rendered png: https://imgur.com/a/BHzpPTW] Notes: - tlb-lock-v2 corresponds to an implementation with a mutex. - tlb-lock-v3 corresponds to the current implementation, i.e. a spinlock and a single lock acquisition in tlb_set_page_with_attrs. Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <20181016153840.25877-1-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
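Readers that run outside tlb_lock therefore go through an accessor of roughly this form (a sketch):

    static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
    {
    #if TCG_OVERSIZED_GUEST
        /* Oversized guests cannot use atomics on the wider-than-host field. */
        return entry->addr_write;
    #else
        return atomic_read(&entry->addr_write);
    #endif
    }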
2018-10-18  tcg: Add tlb_index and tlb_entry helpers  (Richard Henderson)
Isolate the computation of an index from an address into a helper before we change that function. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [ cota: convert tlb_vaddr_to_host; use atomic_read on addr_write ] Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <20181009175129.17888-2-cota@braap.org>
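At the time of this patch the helpers were roughly the following (a sketch; the indexing changed again when dynamic TLB sizing was introduced):

    static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
                                      target_ulong addr)
    {
        return (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    }

    static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
                                         target_ulong addr)
    {
        return &env->tlb_table[mmu_idx][tlb_index(env, mmu_idx, addr)];
    }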
2018-08-17  linux-user: fix 32bit g2h()/h2g()  (Laurent Vivier)
sparc32plus has 64bit long type but only 32bit virtual address space. For instance, "apt-get upgrade" failed because of a mmap()/msync() sequence. mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000) to find the host address. The "(target_ulong)" in g2h() doesn't fix the address because it is 64bit long. This patch introduces an "abi_ptr" that is set to uint32_t if the virtual address space is addressed using 32bit in the linux-user case. It stays set to target_ulong with softmmu case. Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20180814171217.14680-1-laurent@vivier.eu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> [lv: added "%" in TARGET_ABI_FMT_ptr "%"PRIx64]
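The idea is roughly the following (a sketch; the exact configure condition in the upstream patch may differ):

    #if defined(CONFIG_USER_ONLY) && TARGET_VIRT_ADDR_SPACE_BITS <= 32
    /* guest addresses fit in 32 bits even though abi_long may be 64-bit */
    typedef uint32_t abi_ptr;
    #else
    typedef target_ulong abi_ptr;
    #endif

    /* g2h()/h2g() then convert through abi_ptr, so the sparc32plus address
     * above is truncated to 0xff252000 before guest_base is added. */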
2018-07-02  tcg: Define and use new tlb_hit() and tlb_hit_page() functions  (Peter Maydell)
The condition to check whether an address has hit against a particular TLB entry is not completely trivial. We do this in various places, and in fact in one place (get_page_addr_code()) we have got the condition wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline functions (one for a known-page-aligned address and one for an arbitrary address), and use them in all the places where we had the condition correct. This is a no-behaviour-change patch; we leave fixing the buggy code in get_page_addr_code() to a subsequent patch. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
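The two predicates are small inline functions, roughly:

    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong addr)
    {
        /* addr must already be page aligned */
        return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }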
2018-03-09  linux-user: fix mmap/munmap/mprotect/mremap/shmat  (Max Filippov)
In linux-user QEMU that runs for a target with TARGET_ABI_BITS bigger than L1_MAP_ADDR_SPACE_BITS an assertion in page_set_flags fires when mmap, munmap, mprotect, mremap or shmat is called for an address outside the guest address space. mmap and mprotect should return ENOMEM in such case. Change definition of GUEST_ADDR_MAX to always be the last valid guest address. Account for this change in open_self_maps. Add macro guest_addr_valid that verifies if the guest address is valid. Add function guest_range_valid that verifies if address range is within guest address space and does not wrap around. Use that macro in mmap/munmap/mprotect/mremap/shmat for error checking. Cc: qemu-stable@nongnu.org Cc: Riku Voipio <riku.voipio@iki.fi> Cc: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20180307215010.30706-1-jcmvbkbc@gmail.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
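The new checks are roughly the following (a sketch):

    #define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)

    static inline int guest_range_valid(unsigned long start, unsigned long len)
    {
        /* range must stay within the guest space and must not wrap around */
        return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
    }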
2017-11-15  tcg: Record code_gen_buffer address for user-only memory helpers  (Richard Henderson)
When we handle a signal from a fault within a user-only memory helper, we cannot cpu_restore_state with the PC found within the signal frame. Use a TLS variable, helper_retaddr, to record the unwind start point to find the faulting guest insn. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2016-11-22  cpu_ldst.h: use correct guest address parameter  (Bobby Bingham)
In the user emulation code path, tlb_vaddr_to_host erroneously passed vaddr as the guest address to be translated, instead of addr, the parameter which actually contained the guest address. This resulted in incorrect addresses being used when emulating block copy (mvc/mvpg) and block clear (xc) instructions for the s390x target. Signed-off-by: Bobby Bingham <koorogi@koorogi.info> Message-Id: <20161113050523.23909-1-koorogi@koorogi.info> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-11  softmmu: remove now unused functions  (Pavel Dovgalyuk)
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we can drop the old helper_ld/st functions. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20150710095656.13280.7085.stgit@PASHA-ISP> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-09-11  tlb: Add "ifetch" argument to cpu_mmu_index()  (Benjamin Herrenschmidt)
This is set to true when the index is for an instruction fetch translation. The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS accessors. All targets ignore it for now, and all other callers pass "false". This will allow targets that wish to split the mmu index between instruction and data accesses to do so. A subsequent patch will do just that for PowerPC. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Message-Id: <1439796853-4410-2-git-send-email-benh@kernel.crashing.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  linux-user: remove useless macros GUEST_BASE and RESERVED_VA  (Laurent Vivier)
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-17  softmmu: provide tlb_vaddr_to_host function for user mode  (Aurelien Jarno)
To avoid too many #ifdefs in target code, provide a tlb_vaddr_to_host for both user and softmmu modes. In the first case the function always succeeds and just calls the g2h function. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Alexander Graf <agraf@suse.de>
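In the user-mode case the function reduces to a g2h() call, roughly:

    #if defined(CONFIG_USER_ONLY)
    static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
                                          int access_type, int mmu_idx)
    {
        /* user-mode: every guest address has a host mapping */
        return g2h(addr);
    }
    #endif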
2015-06-03  softmmu: support up to 12 MMU modes  (Paolo Bonzini)
At 8k per TLB (for 64-bit host or target), 8 or more modes make the TLBs bigger than 64k, and some RISC TCG backends do not like that. On the affected hosts, cut the TLB size in half---there is still a measurable speedup on PPC with the next patch. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1424436345-37924-3-git-send-email-pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2015-02-05  cpu_ldst.h: Allow NB_MMU_MODES to be 7  (Peter Maydell)
Support guest CPUs which need 7 MMU index values. Add a comment about what would be required to raise the limit further (trivial for 8, TCG backend rework for 9 or more). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Greg Bellows <greg.bellows@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
2015-01-20  cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined  (Peter Maydell)
Not all targets define a full set of suffix strings for the NB_MMU_MODES that they have. In this situation, don't define any helper functions for that mode, rather than defining helper functions with no suffix at all. The MMU mode is still functional; it is merely not directly accessible via cpu_ld*_MODE from target helper functions. Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1 helpers -- some targets only define one MMU mode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1421432008-6786-1-git-send-email-peter.maydell@linaro.org
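As a hypothetical example of the mechanism (this suffix scheme has since been removed, per the 2020 entries above), a target would define something like:

    /* hypothetical target cpu.h */
    #define MMU_MODE0_SUFFIX _kernel
    #define MMU_MODE1_SUFFIX _user
    /* cpu_ldst.h then generates cpu_ldl_kernel(), cpu_stb_user(), etc.;
     * with this patch, a mode lacking a suffix simply gets no helpers. */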
2015-01-20  cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors  (Peter Maydell)
Add documentation of what the cpu_*_* accessors look like. Correct some minor errors in the existing documentation of the direct _p accessor family. Remove the near-duplicate comment on the _p accessors from cpu-all.h and replace it with a reference to the comment in bswap.h. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1421334118-3287-16-git-send-email-peter.maydell@linaro.org
2015-01-20  cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()  (Peter Maydell)
The _raw macros and their helpers saddr() and laddr() are now totally unused -- delete them. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1421334118-3287-14-git-send-email-peter.maydell@linaro.org
2015-01-20  cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors  (Peter Maydell)
Use inline functions rather than macros for cpu_ld/st accessors for the *-user configurations, as we already do for softmmu. This has two advantages: * we can actually typecheck our arguments * we don't need to leak the _raw macros everywhere Since the _kernel functions were only used by target-i386/seg_helper.c, put the definitions for them in that file too. (It already has the similar template include code to define them for the softmmu case, so it makes sense to have it deal with defining them for user-only.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1421334118-3287-12-git-send-email-peter.maydell@linaro.org
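In user-only mode the generated accessors reduce to direct host loads and stores through g2h(), roughly:

    static inline int cpu_ldub_data(CPUArchState *env, target_ulong ptr)
    {
        return ldub_p(g2h(ptr));
    }

    static inline void cpu_stb_data(CPUArchState *env, target_ulong ptr, uint32_t v)
    {
        stb_p(g2h(ptr), v);
    }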
2015-01-20  cpu_ldst.h: Remove unused very short ld*/st* defines  (Peter Maydell)
The very short ld*/st* defines are now not used anywhere; delete them. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1421334118-3287-11-git-send-email-peter.maydell@linaro.org
2015-01-20  cpu_ldst.h: Drop unused ld/st*_kernel defines  (Peter Maydell)
The ld*_kernel and st*_kernel defines are not used anywhere; delete them. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1421334118-3287-10-git-send-email-peter.maydell@linaro.org
2015-01-20  cpu_ldst.h: Remove unused ldul_ macros  (Peter Maydell)
The five ldul_ macros are not used anywhere and are marked up with an XXX comment. "ldul" is a non-standard prefix for our family of load instructions: we don't mark 32-bit accesses for signedness because they return a 32 bit quantity. So just delete them. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1421334118-3287-2-git-send-email-peter.maydell@linaro.org
2014-06-05  softmmu: move all load/store functions to cpu_ldst.h  (Paolo Bonzini)
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h into a single new header file with all helpers. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-06-05  softmmu: introduce cpu_ldst.h  (Paolo Bonzini)
This will collect all load and store helpers soon. For now it is just a replacement for softmmu_exec.h, which this patch stops including directly, but we also include it where this will be necessary in order to simplify the next patch. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>