path: root/accel/tcg/cputlb.c
Age  Commit message  Author
2020-10-20  accel/tcg: Add tlb_flush_page_bits_by_mmuidx*  (Richard Henderson)
On ARM, the Top Byte Ignore feature means that only 56 bits of the address are significant in the virtual address. We are required to give the entire 64-bit address to FAR_ELx on fault, which means that we do not "clean" the top byte early in TCG. This new interface allows us to flush all 256 possible aliases for a given page, currently missed by tlb_flush_page*. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201016210754.818257-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
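As an illustration of the new interface, a minimal sketch of how a target might call it; the exact signature is inferred from the description above and may differ in detail:

    /* Flush every alias of a page when only the low 56 bits of the virtual
     * address are significant (ARM Top Byte Ignore): all 256 top-byte
     * variants of 'addr' are dropped from the given mmu_idx set. */
    static void flush_tbi_page(CPUState *cs, target_ulong addr, uint16_t idxmap)
    {
        /* assumed signature: (cpu, page address, mmu_idx bitmap, significant bits) */
        tlb_flush_page_bits_by_mmuidx(cs, addr, idxmap, 56);
    }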
2020-09-30  exec: Remove MemoryRegion::global_locking field  (Philippe Mathieu-Daudé)
Last uses of memory_region_clear_global_locking() have been removed in commit 7070e085d4 ("acpi: mark PMTIMER as unlocked") and commit 08565552f7 ("cputlb: Move NOTDIRTY handling from I/O path to TLB path"). Remove memory_region_clear_global_locking() and the now unused 'global_locking' field in MemoryRegion. Reported-by: Alexander Bulekov <alxndr@bu.edu> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200806150726.962-1-philmd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-09-23  qemu/atomic.h: rename atomic_ to qatomic_  (Stefan Hajnoczi)
clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when QEMU code calls these functions in a source file that also includes <stdatomic.h> via a system header file:

    $ CC=clang CXX=clang++ ./configure ... && make
    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none. This patch was generated using:

    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
      sort -u >/tmp/changed_identifiers
    $ for identifier in $(</tmp/changed_identifiers); do
          sed -i "s%\<$identifier\>%q$identifier%g" \
              $(git grep -I -l "\<$identifier\>")
      done

I manually fixed line-wrap issues and misaligned rST tables. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
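The effect on a caller, as a small before/after sketch (the variable names are illustrative only):

    #include "qemu/atomic.h"

    static int bump(int *counter)
    {
        /* before the rename:  atomic_fetch_inc(counter); atomic_read(counter); */
        int old = qatomic_fetch_inc(counter);   /* prefixed name, no clash with <stdatomic.h> */
        return old + qatomic_read(counter);
    }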
2020-09-03  cputlb: Make store_helper less fragile to compiler optimizations  (Richard Henderson)
This has no functional change. The current function structure is:

    inline QEMU_ALWAYSINLINE
    store_memop() {
        switch () {
        ...
        default:
            qemu_build_not_reached();
        }
    }
    inline QEMU_ALWAYSINLINE
    store_helper() {
        ...
        if (span_two_pages_or_io) {
            ...
            helper_ret_stb_mmu();
        }
        store_memop();
    }
    helper_ret_stb_mmu() {
        store_helper();
    }

Whereas GCC will generate an error at compile-time when an always_inline function is not inlined, Clang does not. Nor does Clang prioritize the inlining of always_inline functions. Both of these are arguably bugs. Both `store_memop` and `store_helper` need to be inlined and allow constant propagations to eliminate the `qemu_build_not_reached` call. However, if the compiler instead chooses to inline helper_ret_stb_mmu into store_helper, then store_helper is now self-recursive and the compiler is no longer able to propagate the constant in the same way. This does not reproduce at current QEMU head, but was reproducible at v4.2.0 with `clang-10 -O2 -fexperimental-new-pass-manager`. The inline recursion problem can be fixed solely by marking helper_ret_stb_mmu as noinline, so the compiler does not make an incorrect decision about which functions to inline. In addition, extract store_helper_unaligned as a noinline subroutine that can be shared by all of the helpers. This saves about 6k code size in an optimized x86_64 build. Reported-by: Shu-Chun Weng <scw@google.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
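A simplified sketch of the resulting shape (not the exact QEMU code; the slow-path details are elided and ALWAYS_INLINE is the local cputlb.c macro shown under the 2019-09-25 "Disable __always_inline__" entry below):

    /* Shared slow path, deliberately not inlined. */
    static void __attribute__((noinline))
    store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val,
                           uintptr_t retaddr, size_t size, uintptr_t mmu_idx,
                           bool big_endian)
    {
        /* byte-by-byte store across a page boundary, I/O, watchpoints, ... */
    }

    static inline void ALWAYS_INLINE
    store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
                 TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
    {
        /* if (span_two_pages_or_io) { store_helper_unaligned(...); return; } */
        /* store_memop(haddr, val, op);  -- constant 'op' folds the switch    */
    }

    /* noinline keeps clang from inlining this back into store_helper and
     * turning store_helper into a self-recursive function. */
    void __attribute__((noinline))
    helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
                       TCGMemOpIdx oi, uintptr_t retaddr)
    {
        store_helper(env, addr, val, oi, retaddr, MO_UB);
    }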
2020-08-21  meson: rename included C source files to .c.inc  (Paolo Bonzini)
With Makefiles that have automatically generated dependencies, your generated includes are set as dependencies of the Makefile, so that they are built before everything else and they are available when first building the .c files. Alternatively you can use a fine-grained dependency, e.g.

    target/arm/translate.o: target/arm/decode-neon-shared.inc.c

With Meson you have only one choice and it is a third option, namely "build at the beginning of the corresponding target"; the way you express it is to list the includes in the sources of that target. The problem is that Meson decides whether something is a source vs. a generated include by looking at the extension: '.c', '.cc', '.m', '.C' are sources, while everything else is considered an include---including '.inc.c'. Use '.c.inc' to avoid this, as it is consistent with our other convention of using '.rst.inc' for included reStructuredText files. The editorconfig file is adjusted. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21  trace: switch position of headers to what Meson requires  (Paolo Bonzini)
Meson doesn't enjoy the same flexibility we have with Make in choosing the include path. In particular the tracing headers are using $(build_root)/$(<D). In order to keep the include directives unchanged, the simplest solution is to generate headers with patterns like "trace/trace-audio.h" and place forwarding headers in the source tree such that for example "audio/trace.h" includes "trace/trace-audio.h". This patch is too ugly to be applied to the Makefiles now. It's only a way to separate the changes to the tracing header files from the Meson rewrite of the tracing logic. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
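Concretely, a forwarding header is a one-line file in the source tree, for example:

    /* audio/trace.h -- forwarding header so existing '#include "trace.h"'
     * directives keep resolving to the generated tracing header. */
    #include "trace/trace-audio.h"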
2020-07-24  tcg: update comments for save_iotlb_data in cputlb  (Alex Bennée)
I missed Emilio's review comments: Message-ID: <20200718205107.GA994221@sff> and the patch got merged. Correcting the comments now. Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200720122358.26881-1-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-07-15  cputlb: ensure we save the IOTLB data in case of reset  (Alex Bennée)
Any write to a device might cause a re-arrangement of memory, triggering a TLB flush and a potential re-size of the TLB that invalidates previous entries. This would cause users of qemu_plugin_get_hwaddr() to see the warning:

    invalid use of qemu_plugin_get_hwaddr

because the tlb_lookup, which should always succeed, failed. To prevent this we save the IOTLB data in case it is later needed by a plugin doing a lookup. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200713200415.26214-7-alex.bennee@linaro.org>
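A sketch of the idea; the struct and field names here are illustrative of the approach rather than necessarily the exact ones introduced by the patch:

    /* Keep a per-vCPU copy of the last I/O translation so a later
     * qemu_plugin_get_hwaddr() lookup still works even if the TLB has been
     * flushed or resized in the meantime. */
    typedef struct SavedIOTLB {
        hwaddr addr;
        MemoryRegionSection *section;
        hwaddr mr_offset;
    } SavedIOTLB;

    static void save_iotlb_data(CPUState *cs, hwaddr addr,
                                MemoryRegionSection *section, hwaddr mr_offset)
    {
        SavedIOTLB *saved = &cs->saved_iotlb;   /* assumed per-CPU field */

        saved->addr = addr;
        saved->section = section;
        saved->mr_offset = mr_offset;
    }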
2020-06-16  cputlb: destroy CPUTLB with tlb_destroy  (Emilio G. Cota)
I was after adding qemu_spin_destroy calls, but while at it I noticed that we are leaking some memory. Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200609200738.445-5-robert.foley@linaro.org> Message-Id: <20200612190237.30436-8-alex.bennee@linaro.org>
2020-05-11  accel/tcg: Add endian-specific cpu_{ld,st}* operations  (Richard Henderson)
We currently have target-endian versions of these operations, but no easy way to force a specific endianness. This can be helpful if the target has endian-specific operations, or a mode that swaps endianness. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
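Illustrative use of the fixed-endian accessors alongside the existing target-endian one; the names follow the pattern described above:

    static void example(CPUArchState *env, target_ulong addr)
    {
        uint32_t t = cpu_ldl_data(env, addr);      /* target endianness */
        uint32_t l = cpu_ldl_le_data(env, addr);   /* always little-endian */
        uint32_t b = cpu_ldl_be_data(env, addr);   /* always big-endian */

        cpu_stw_be_data(env, addr + 8, (uint16_t)(t + l + b));
    }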
2020-05-11  accel/tcg: Add probe_access_flags  (Richard Henderson)
This new interface will allow targets to probe for a page and then handle watchpoints themselves. This will be most useful for vector predicated memory operations, where one page lookup can be used for many operations, and one test can avoid many watchpoint checks. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
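A sketch of how a predicated-vector helper might use it, assuming the signature implied above (probe once per page, then branch on the returned TLB flags):

    static bool probe_page(CPUArchState *env, target_ulong addr,
                           int mmu_idx, uintptr_t retaddr, void **host)
    {
        int flags = probe_access_flags(env, addr, MMU_DATA_LOAD, mmu_idx,
                                       true /* nonfault */, host, retaddr);

        if (flags & TLB_INVALID_MASK) {
            return false;               /* page not mapped; caller decides */
        }
        if (flags & (TLB_WATCHPOINT | TLB_MMIO)) {
            *host = NULL;               /* force the per-element slow path */
        }
        return true;                    /* otherwise *host points at guest RAM */
    }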
2020-01-21  cputlb: Hoist timestamp outside of loops over tlbs  (Richard Henderson)
Do not call get_clock_realtime() in tlb_mmu_resize_locked, but hoist it outside of any loop over a set of tlbs. There are only two (indirect) callers, tlb_flush_by_mmuidx_async_work and tlb_flush_page_locked, so this is not onerous. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Initialize tlbs as flushed  (Richard Henderson)
There's little point in leaving these data structures half initialized, and relying on a flush to be done during reset. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Partially merge tlb_dyn_init into tlb_init  (Richard Henderson)
Merge into the only caller, but at the same time split out tlb_mmu_init to initialize a single tlb entry. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Split out tlb_mmu_flush_locked  (Richard Henderson)
We will want to be able to flush a tlb without resizing. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Hoist tlb portions in tlb_flush_one_mmuidx_locked  (Richard Henderson)
No functional change, but the smaller expressions make the code easier to read. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Hoist tlb portions in tlb_mmu_resize_locked  (Richard Henderson)
No functional change, but the smaller expressions make the code easier to read. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Pass CPUTLBDescFast to tlb_n_entries and sizeof_tlb  (Richard Henderson)
We do not need the entire CPUArchState to compute these values. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
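The resulting helpers, roughly; this is a sketch based on the description and assumes the existing mask encoding in CPUTLBDescFast:

    static inline size_t tlb_n_entries(CPUTLBDescFast *fast)
    {
        return (fast->mask >> CPU_TLB_ENTRY_BITS) + 1;
    }

    static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
    {
        return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
    }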
2020-01-21  cputlb: Make tlb_n_entries private to cputlb.c  (Richard Henderson)
There are no users of this function outside cputlb.c, and its interface will change in the next patch. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Merge tlb_table_flush_by_mmuidx into tlb_flush_one_mmuidx_locked  (Richard Henderson)
There is only one caller for tlb_table_flush_by_mmuidx. Place the result at the earlier line number, due to an expected user in the near future. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Handle NB_MMU_MODES > TARGET_PAGE_BITS_MIN  (Richard Henderson)
In target/arm we will shortly have "too many" mmu_idx. The current limit is caused by the way in which tlb_flush_page_by_mmuidx is coded. We can remove this limitation by allocating memory for consumption by the worker. Let us assume that this is the unlikely case, as it will be for the majority of targets which have so far satisfied the BUILD_BUG_ON, and only allocate memory when necessary. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
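A sketch of the allocation strategy described above; the struct and the _async_1/_async_2 worker names are illustrative:

    typedef struct {
        target_ulong addr;
        uint16_t idxmap;
    } TLBFlushPageByMMUIdxData;

    static void tlb_flush_page_by_mmuidx_one(CPUState *cpu,
                                             target_ulong addr, uint16_t idxmap)
    {
        if (idxmap < TARGET_PAGE_SIZE) {
            /* Common case: the bitmap fits in the low bits of the page address. */
            async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
                             RUN_ON_CPU_TARGET_PTR(addr | idxmap));
        } else {
            /* NB_MMU_MODES > TARGET_PAGE_BITS_MIN: allocate for the worker. */
            TLBFlushPageByMMUIdxData *d = g_new(TLBFlushPageByMMUIdxData, 1);

            d->addr = addr;
            d->idxmap = idxmap;
            async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
                             RUN_ON_CPU_HOST_PTR(d));
        }
    }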
2020-01-15  cputlb: Expand cpu_ldst_template.h in cputlb.c  (Richard Henderson)
Reduce the amount of preprocessor obfuscation by expanding the text of each of the functions generated. The result is only slightly smaller than the original. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Rename helper_ret_ld*_cmmu to cpu_ld*_code  (Richard Henderson)
There are no uses of the *_cmmu names other than the bare wrapping within the *_code inlines. Therefore rename the functions so we can drop the inlines. Use abi_ptr instead of target_ulong in preparation for user-only; the two types are identical for softmmu. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  cputlb: Move body of cpu_ldst_template.h out of line  (Richard Henderson)
With the tracing hooks, the inline functions are no longer so simple. Once out-of-line, the current tlb_entry lookup is redundant with the one in the main load/store_helper. This also begins the introduction of a new target facing interface, with suffix *_mmuidx_ra. This is not yet official because the interface is not done for user-only. Use abi_ptr instead of target_ulong in preparation for user-only; the two types are identical for softmmu. What remains in cpu_ldst_template.h are the expansions for _code, _data, and MMU_MODE<N>_SUFFIX. Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
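Example of the new target-facing interface (the *_mmuidx_ra variants), used from helper code that knows its mmu index and return address; a minimal sketch:

    static uint32_t read_word(CPUArchState *env, target_ulong addr,
                              int mmu_idx, uintptr_t retaddr)
    {
        /* In a TCG helper, retaddr would typically come from GETPC(). */
        return cpu_ldl_mmuidx_ra(env, addr, mmu_idx, retaddr);
    }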
2019-11-11  Remove unassigned_access CPU hook  (Peter Maydell)
All targets have now migrated away from the old unassigned_access hook to the new do_transaction_failed hook. This means we can remove the core-code infrastructure for that hook and the code that calls it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20191108173732.11816-1-peter.maydell@linaro.org
2019-10-30  Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-plugins-281019-4' into staging  (Peter Maydell)
TCG Plugins initial implementation:

- use --enable-plugins @ configure
- low impact introspection (-plugin empty.so to measure overhead)
- plugins cannot alter guest state
- example plugins included in source tree (tests/plugins)
- -d plugin to enable plugin output in logs
- check-tcg runs extra tests when plugins enabled
- documentation in docs/devel/plugins.rst

# gpg: Signature made Mon 28 Oct 2019 15:13:23 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44

* remotes/stsquad/tags/pull-tcg-plugins-281019-4: (57 commits)
  travis.yml: enable linux-gcc-debug-tcg cache
  MAINTAINERS: add me for the TCG plugins code
  scripts/checkpatch.pl: don't complain about (foo, /* empty */)
  .travis.yml: add --enable-plugins tests
  include/exec: wrap cpu_ldst.h in CONFIG_TCG
  accel/stubs: reduce headers from tcg-stub
  tests/plugin: add hotpages to analyse memory access patterns
  tests/plugin: add instruction execution breakdown
  tests/plugin: add a hotblocks plugin
  tests/tcg: enable plugin testing
  tests/tcg: drop test-i386-fprem from TESTS when not SLOW
  tests/tcg: move "virtual" tests to EXTRA_TESTS
  tests/tcg: set QEMU_OPTS for all cris runs
  tests/tcg/Makefile.target: fix path to config-host.mak
  tests/plugin: add sample plugins
  linux-user: support -plugin option
  vl: support -plugin option
  plugin: add qemu_plugin_outs helper
  plugin: add qemu_plugin_insn_disas helper
  plugin: expand the plugin_init function to include an info block
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-28  cputlb: ensure _cmmu helper functions follow the naming standard  (Alex Bennée)
We document this in docs/devel/load-stores.rst so let's follow it. The 32-bit and 64-bit access functions have historically not included the sign so we leave those as is. We also introduce some signed helpers which are used for loading immediate values in the translator. Fixes: 282dffc8 Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20191021150910.23216-1-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28  plugins: implement helpers for resolving hwaddr  (Alex Bennée)
We need to keep a local per-cpu copy of the data as other threads may be running. Currently we can provide insight as to whether the access was I/O or not and give the offset into a given device (usually the main RAMBlock). We store enough information to get details such as the MemoryRegion, which might be useful in later expansions to the API. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
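From the plugin side, this is roughly what the lookup looks like with the public plugin API; a sketch of a memory callback with error handling elided:

    #include <qemu-plugin.h>

    static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                         uint64_t vaddr, void *udata)
    {
        struct qemu_plugin_hwaddr *hw = qemu_plugin_get_hwaddr(info, vaddr);

        if (hw && qemu_plugin_hwaddr_is_io(hw)) {
            /* the access hit a device region rather than plain RAM */
        }
    }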
2019-10-28  atomic_template: add inline trace/plugin helpers  (Emilio G. Cota)
In preparation for plugin support. Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28  cputlb: introduce get_page_addr_code_hostp  (Emilio G. Cota)
This will be used by plugins to get the host address of instructions. Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
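Assumed shape of the new query, per the description (like get_page_addr_code(), but also reporting the host address):

    static void lookup_insn(CPUArchState *env, target_ulong pc)
    {
        void *host_pc;
        tb_page_addr_t phys = get_page_addr_code_hostp(env, pc, &host_pc);

        /* 'phys' identifies the guest page; 'host_pc' is the host address of
         * the instruction, which plugins can use e.g. for disassembly. */
        (void)phys;
    }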
2019-10-28  trace: add mmu_index to mem_info  (Alex Bennée)
We are going to re-use mem_info later for plugins and will need to track the mmu_idx for softmmu code. Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2019-10-28  cputlb: Fix tlb_vaddr_to_host  (Richard Henderson)
Using uintptr_t instead of target_ulong meant that, for 64-bit guest and 32-bit host, we truncated the guest address comparator and so may not hit the tlb when we should. Fixes: 4811e9095c0 Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28  cputlb: ensure _cmmu helper functions follow the naming standard  (Alex Bennée)
We document this in docs/devel/load-stores.rst so let's follow it. The 32-bit and 64-bit access functions have historically not included the sign so we leave those as is. We also introduce some signed helpers which are used for loading immediate values in the translator. Fixes: 282dffc8 Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20191021150910.23216-1-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Pass retaddr to tb_invalidate_phys_page_fast  (Richard Henderson)
Rather than rely on cpu->mem_io_pc, pass retaddr down directly. Within tb_invalidate_phys_page_range__locked, the is_cpu_write_access parameter is non-zero exactly when retaddr would be non-zero, so that is a simple replacement. Recognize that current_tb_not_found is true only when mem_io_pc (and now retaddr) are also non-zero, so remove a redundant test. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Remove cpu->mem_io_vaddr  (Richard Henderson)
With the merge of notdirty handling into store_helper, the last user of cpu->mem_io_vaddr was removed. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Handle TLB_NOTDIRTY in probe_access  (Richard Henderson)
We can use notdirty_write for the write and return a valid host pointer for this case. Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Merge and move memory_notdirty_write_{prepare,complete}  (Richard Henderson)
Since 9458a9a1df1a, all readers of the dirty bitmaps wait for the rcu lock, which means that they wait until the end of any executing TranslationBlock. As a consequence, there is no need for the actual access to happen in between the _prepare and _complete. Therefore, we can improve things by merging the two functions into notdirty_write and dropping the NotDirtyInfo structure. In addition, the only users of notdirty_write are in cputlb.c, so move the merged function there. Pass in the CPUIOTLBEntry from which the ram_addr_t may be computed. Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Partially inline memory_region_section_get_iotlb  (Richard Henderson)
There is only one caller, tlb_set_page_with_attrs. We cannot inline the entire function because the AddressSpaceDispatch structure is private to exec.c, and cannot easily be moved to include/exec/memory-internal.h. Compute is_ram and is_romd once within tlb_set_page_with_attrs. Fold the number of tests against these predicates. Compute cpu_physical_memory_is_clean outside of the tlb lock region. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Move NOTDIRTY handling from I/O path to TLB path  (Richard Henderson)
Pages that we want to track for NOTDIRTY are RAM. We do not really need to go through the I/O path to handle them. Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Move ROM handling from I/O path to TLB path  (Richard Henderson)
It does not require going through the whole I/O path in order to discard a write. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Introduce TLB_BSWAP  (Richard Henderson)
Handle bswap on ram directly in load/store_helper. This fixes a bug with the previous implementation in that one cannot use the I/O path for RAM. Fixes: a26fc6f5152b47f1 Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
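The core of the idea, as a simplified sketch of the slow path in load_helper; the real code also folds the swap into the I/O case:

    /* inside load_helper(), after the TLB lookup: */
    if (unlikely(tlb_addr & TLB_BSWAP)) {
        op ^= MO_BSWAP;      /* flip the byteswap bit and treat as plain RAM */
    }
    return load_memop(haddr, op);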
2019-09-25  cputlb: Split out load/store_memop  (Richard Henderson)
We will shortly be using these more than once. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Use qemu_build_not_reached in load/store_helpers  (Richard Henderson)
Increase the current runtime assert to a compile-time assert. Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25  cputlb: Disable __always_inline__ without optimization  (Richard Henderson)
This forced inlining can result in missing symbols, which makes a debugging build harder to follow. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
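The mechanism, as described, is likely along these lines (a sketch of a local macro in cputlb.c):

    #ifdef __OPTIMIZE__
    #define ALWAYS_INLINE  __attribute__((always_inline))
    #else
    #define ALWAYS_INLINE  /* plain 'inline': keep symbols for debugging */
    #endif

    /* used as:  static inline uint64_t ALWAYS_INLINE load_helper(...) { ... } */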
2019-09-03  tcg: Factor out probe_write() logic into probe_access()  (David Hildenbrand)
Let's also allow probing other access types. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20190830100959.26615-3-david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
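The generalized probe as described, with probe_write() kept as the write-only case; a sketch of the intended call shape:

    static void *probe_store(CPUArchState *env, target_ulong addr, int size,
                             int mmu_idx, uintptr_t retaddr)
    {
        /* Any MMUAccessType can be probed, not just writes.  Returns a host
         * pointer when the access can go straight to RAM, else NULL. */
        return probe_access(env, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
    }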
2019-09-03  tcg: Make probe_write() return a pointer to the host page  (David Hildenbrand)
... similar to tlb_vaddr_to_host(); however, allow access to the host page except when TLB_NOTDIRTY or TLB_MMIO is set. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20190830100959.26615-2-david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  tcg: Enforce single page access in probe_write()  (David Hildenbrand)
Let's enforce the interface restriction. Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190826075112.25637-5-david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  tcg: Check for watchpoints in probe_write()  (David Hildenbrand)
Let size > 0 indicate a promise to write to those bytes. Check for write watchpoints in the probed range. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20190823100741.9621-10-david@redhat.com> [rth: Recompute index after tlb_fill; check TLB_WATCHPOINT.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  cputlb: Handle watchpoints via TLB_WATCHPOINT  (Richard Henderson)
The raising of exceptions from check_watchpoint, buried inside of the I/O subsystem, is fundamentally broken. We do not have the helper return address with which we can unwind guest state. Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT. Move the call to cpu_check_watchpoint into the cputlb helpers where we do have the helper return address. This allows watchpoints on RAM to bypass the full i/o access path. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
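Roughly what the check looks like inside the cputlb store path once the flag exists (a simplified sketch):

    /* inside store_helper(), with the helper return address in hand: */
    if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
        /* May longjmp out to raise a debug exception with full unwind info. */
        cpu_check_watchpoint(env_cpu(env), addr, size,
                             iotlbentry->attrs, BP_MEM_WRITE, retaddr);
    }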
2019-09-03  cputlb: Remove double-alignment in store_helper  (Richard Henderson)
We have already aligned page2 to the start of the next page. There is no reason to do that a second time. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>