path: root/accel/tcg/cputlb.c
2017-10-20  accel/tcg: allow to invalidate a write TLB entry immediately  (David Hildenbrand)
Background: s390x implements Low-Address Protection (LAP). If LAP is enabled, writing to effective addresses (before any translation) 0-511 and 4096-4607 triggers a protection exception. So we have subpage protection on the first two pages of every address space (where the lowcore, the CPU's private data, resides).

By immediately invalidating the write entry but allowing the caller to continue, we force every write access to these first two pages into the slow path. We then get a TLB fault with the specific accessed address and can evaluate whether protection applies or not.

We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
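The mechanism above, reduced to a standalone sketch (all names, flag values and the page layout are illustrative, not QEMU's actual code): the fill marks the write entry invalid again for the protected pages, so the fast path never matches and every write gets re-checked.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative names and layout only; not QEMU's actual definitions. */
    #define PAGE_MASK    (~UINT64_C(0xfff))
    #define TLB_INVALID  UINT64_C(1)       /* low bit of addr_write used as flag */

    typedef struct {
        uint64_t addr_write;               /* page address | flags */
    } TLBEntry;

    static bool lap_violation(uint64_t addr)
    {
        return addr < 512 || (addr >= 4096 && addr < 4608);
    }

    /* Slow path: per-access protection check, then refill the entry.  For the
     * two low pages the entry is refilled but immediately marked invalid again,
     * so the next write to them also takes the slow path. */
    static bool tlb_fill_write(TLBEntry *e, uint64_t addr)
    {
        if (lap_violation(addr)) {
            printf("protection exception at 0x%" PRIx64 "\n", addr);
            return false;
        }
        e->addr_write = addr & PAGE_MASK;
        if ((addr & PAGE_MASK) == 0 || (addr & PAGE_MASK) == 4096) {
            e->addr_write |= TLB_INVALID;  /* force future writes into the slow path */
        }
        return true;
    }

    static void store_byte(TLBEntry *e, uint64_t addr)
    {
        /* Fast path only matches when the page is right AND the entry is valid. */
        if ((e->addr_write & (PAGE_MASK | TLB_INVALID)) != (addr & PAGE_MASK)) {
            if (!tlb_fill_write(e, addr)) {
                return;                    /* protection exception delivered */
            }
            /* fill succeeded: ignore the invalid bit for this one access */
        }
        /* ... perform the store ... */
    }

    int main(void)
    {
        TLBEntry e = { .addr_write = ~UINT64_C(0) };
        store_byte(&e, 0x1200);            /* page 4096, offset 512: allowed, but slow */
        store_byte(&e, 0x1000);            /* offset 0 of page 4096: protection exception */
        return 0;
    }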
2017-10-10  cputlb: bring back tlb_flush_count under !TLB_DEBUG  (Emilio G. Cota)
Commit f0aff0f124 ("cputlb: add assert_cpu_is_self checks") buried the increment of tlb_flush_count under TLB_DEBUG. This results in "info jit" always (mis)reporting 0 TLB flushes when !TLB_DEBUG.

Besides, under MTTCG tlb_flush_count is updated by several threads, so in order not to lose counts we'd either have to use atomic ops or distribute the counter, which is more scalable.

This patch does the latter by embedding tlb_flush_count in CPUArchState. The global count is then easily obtained by iterating over the CPU list.

Note that this change also requires updating the accessors to tlb_flush_count to use atomic_read/set whenever there may be conflicting accesses (as defined in C11) to it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
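A standalone sketch of the distributed-counter scheme (per-vCPU counters written only by their owning thread with relaxed atomics, summed on readout); the names are illustrative, not the actual QEMU accessors:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NCPUS 4

    /* One counter per vCPU; only the owning thread increments it, so a relaxed
     * atomic load/store pair is enough to avoid torn or lost counts. */
    static _Atomic size_t tlb_flush_count[NCPUS];

    static void *vcpu_thread(void *arg)
    {
        size_t cpu = (size_t)(uintptr_t)arg;
        for (int i = 0; i < 100000; i++) {
            /* increment "my" counter; no cross-thread write contention */
            size_t n = atomic_load_explicit(&tlb_flush_count[cpu],
                                            memory_order_relaxed);
            atomic_store_explicit(&tlb_flush_count[cpu], n + 1,
                                  memory_order_relaxed);
        }
        return NULL;
    }

    /* "info jit"-style readout: sum over all vCPUs. */
    static size_t total_tlb_flush_count(void)
    {
        size_t total = 0;
        for (size_t i = 0; i < NCPUS; i++) {
            total += atomic_load_explicit(&tlb_flush_count[i],
                                          memory_order_relaxed);
        }
        return total;
    }

    int main(void)
    {
        pthread_t th[NCPUS];
        for (size_t i = 0; i < NCPUS; i++) {
            pthread_create(&th[i], NULL, vcpu_thread, (void *)(uintptr_t)i);
        }
        for (size_t i = 0; i < NCPUS; i++) {
            pthread_join(th[i], NULL);
        }
        printf("total TLB flushes: %zu\n", total_tlb_flush_count());
        return 0;
    }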
2017-09-25  accel/tcg/cputlb: avoid recursive BQL (fixes #1706296)  (Alex Bennée)
The mmio path (see exec.c:prepare_mmio_access) already protects itself against recursive locking, and it makes sense to do the same for io_readx/writex. Otherwise any helper running in the BQL context will assert when it attempts to write to device memory, as in the case of the bug report.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
CC: Richard Jones <rjones@redhat.com>
CC: Paolo Bonzini <bonzini@gnu.org>
CC: qemu-stable@nongnu.org
Message-Id: <20170921110625.9500-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
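A standalone model of the locking pattern (take the big lock only if this thread does not already hold it, and release only what was taken here); all names are illustrative, with qemu_mutex_iothread_locked() mentioned only as the real-world counterpart:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static _Thread_local bool big_lock_held;   /* per-thread "I hold it" flag */

    static void io_write_model(void)
    {
        bool locked = false;

        if (!big_lock_held) {              /* cf. qemu_mutex_iothread_locked() */
            pthread_mutex_lock(&big_lock);
            big_lock_held = true;
            locked = true;
        }

        /* ... access the device model under the lock ... */
        printf("device access under the big lock\n");

        if (locked) {                      /* only drop what we acquired here */
            big_lock_held = false;
            pthread_mutex_unlock(&big_lock);
        }
    }

    int main(void)
    {
        /* Called without the lock: takes and drops it. */
        io_write_model();

        /* Called by a helper that already holds the lock: must not re-lock. */
        pthread_mutex_lock(&big_lock);
        big_lock_held = true;
        io_write_model();
        big_lock_held = false;
        pthread_mutex_unlock(&big_lock);
        return 0;
    }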
2017-09-04  cputlb: Support generating CPU exceptions on memory transaction failures  (Peter Maydell)
Call the new cpu_transaction_failed() hook at the places where CPU generated code interacts with the memory system:
  io_readx()
  io_writex()
  get_page_addr_code()

Any access from C code (eg via cpu_physical_memory_rw(), address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions via cpu_transaction_failed(). Handling of transaction failures for this kind of call should be done by using a function which returns a MemTxResult and treating the failure case appropriately in the calling code.

In an ideal world we would not generate CPU exceptions for instruction fetch failures in get_page_addr_code() but instead wait until the code translation process tried a load and it failed; however that change would require too great a restructuring and redesign to attempt at this point.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
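A standalone model of the control flow only (QEMU's real MemTxResult values and the cpu_transaction_failed() hook are richer than this sketch; all names below are illustrative):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { MEMTX_OK, MEMTX_DECODE_ERROR } MemTxResult;

    typedef struct CPU CPU;
    struct CPU {
        /* per-CPU hook, set by the target; may raise a guest exception */
        void (*transaction_failed)(CPU *cpu, uint64_t addr, MemTxResult res);
    };

    /* Device/bus access that reports success or failure instead of silently
     * returning garbage. */
    static MemTxResult bus_read(uint64_t addr, uint64_t *val)
    {
        if (addr >= 0x1000) {              /* nothing mapped there */
            return MEMTX_DECODE_ERROR;
        }
        *val = 0x42;
        return MEMTX_OK;
    }

    /* io_readx()-style wrapper used by generated code: on failure, let the
     * target turn the failed transaction into a CPU exception. */
    static uint64_t io_read_model(CPU *cpu, uint64_t addr)
    {
        uint64_t val = 0;
        MemTxResult res = bus_read(addr, &val);

        if (res != MEMTX_OK && cpu->transaction_failed) {
            cpu->transaction_failed(cpu, addr, res);
        }
        return val;
    }

    static void report_failure(CPU *cpu, uint64_t addr, MemTxResult res)
    {
        (void)cpu;
        printf("transaction failed at 0x%" PRIx64 " (res=%d)\n", addr, (int)res);
    }

    int main(void)
    {
        CPU cpu = { .transaction_failed = report_failure };
        io_read_model(&cpu, 0x10);         /* ok */
        io_read_model(&cpu, 0x2000);       /* triggers the hook */
        return 0;
    }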
2017-06-30  tcg: consistently access cpu->tb_jmp_cache atomically  (Emilio G. Cota)
Some code paths can lead to atomic accesses racing with memset() on cpu->tb_jmp_cache, which can result in torn reads/writes and is undefined behaviour in C11.

These torn accesses are unlikely to show up as bugs, but from code inspection they seem possible. For example, tb_phys_invalidate does:

    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }

Here atomic_set might race with a concurrent memset (such as the ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and therefore we might end up with a torn pointer (or who knows what, because we are under undefined behaviour).

This patch converts parallel accesses to cpu->tb_jmp_cache to use atomic primitives, thereby bringing these accesses back to defined behaviour. The price to pay is to potentially execute more instructions when clearing cpu->tb_jmp_cache, but given how infrequently they happen and the small size of the cache, the performance impact I have measured is within noise range when booting debian-arm.

Note that under "safe async" work (e.g. do_tb_flush) we could use memset because no other vcpus are running. However I'm keeping these accesses atomic as well to keep things simple and to avoid confusing analysis tools such as ThreadSanitizer.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1497486973-25845-1-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
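The same pattern reduced to a standalone C11 sketch (names are illustrative, not QEMU's): element-wise relaxed atomic stores instead of memset(), so a concurrent reader can never observe a torn pointer.

    #include <stdatomic.h>
    #include <stddef.h>

    #define CACHE_SIZE 1024

    typedef struct TranslationBlock { int dummy; } TranslationBlock;

    /* Array of pointers shared between threads. */
    static _Atomic(TranslationBlock *) tb_jmp_cache[CACHE_SIZE];

    /* Lookup/installation from a vCPU thread. */
    static TranslationBlock *cache_lookup(size_t h)
    {
        return atomic_load_explicit(&tb_jmp_cache[h], memory_order_relaxed);
    }

    static void cache_install(size_t h, TranslationBlock *tb)
    {
        atomic_store_explicit(&tb_jmp_cache[h], tb, memory_order_relaxed);
    }

    /* Clearing from another path (e.g. async work scheduled by a TLB flush):
     * no memset(), one relaxed atomic store per slot. */
    static void cache_clear(void)
    {
        for (size_t i = 0; i < CACHE_SIZE; i++) {
            atomic_store_explicit(&tb_jmp_cache[i], NULL, memory_order_relaxed);
        }
    }

    int main(void)
    {
        static TranslationBlock tb;
        cache_install(3, &tb);
        cache_clear();
        return cache_lookup(3) != NULL;    /* 0: slot was cleared */
    }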
2017-06-27  exec: allow to get a pointer for some mmio memory region  (KONRAD Frederic)
This introduces a special callback which allows running code from some MMIO devices. A SysBusDevice with a MemoryRegion which implements the request_ptr callback will be notified when the guest tries to execute code from its offset. It can then, for example, pre-load some code from an SPI device or request a pointer from an external simulator. When the pointer or the data behind it are no longer valid, the device has to invalidate it.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
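A standalone model of the request_ptr idea (the real callback signature and registration in QEMU differ; everything below, including the struct and function names, is illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct MMIORegion MMIORegion;
    struct MMIORegion {
        /* Asked when the guest tries to execute code from this region; the
         * device may hand back a host pointer covering [offset, offset+*size). */
        bool (*request_ptr)(MMIORegion *mr, uint64_t offset,
                            void **ptr, uint64_t *size);
        uint8_t backing[4096];             /* e.g. code pre-loaded from SPI flash */
    };

    static bool spi_request_ptr(MMIORegion *mr, uint64_t offset,
                                void **ptr, uint64_t *size)
    {
        if (offset >= sizeof(mr->backing)) {
            return false;
        }
        memset(mr->backing, 0x90, sizeof(mr->backing));  /* "pre-load" the code */
        *ptr = mr->backing + offset;
        *size = sizeof(mr->backing) - offset;
        return true;
    }

    /* Fetch side: before giving up on executing from MMIO, ask the device for
     * a host pointer it can translate from directly. */
    static void *get_code_ptr(MMIORegion *mr, uint64_t offset)
    {
        void *ptr = NULL;
        uint64_t size = 0;

        if (mr->request_ptr && mr->request_ptr(mr, offset, &ptr, &size)) {
            return ptr;
        }
        return NULL;                       /* fall back to the slow path */
    }

    int main(void)
    {
        MMIORegion flash = { .request_ptr = spi_request_ptr };
        void *p = get_code_ptr(&flash, 0x100);
        printf("code pointer: %p\n", p);
        return p == NULL;
    }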
2017-06-27  cputlb: fix the way get_page_addr_code fills the tlb  (KONRAD Frederic)
get_page_addr_code(..) does a cpu_ldub_code to fill the TLB. This can lead to side effects if a device is mapped at that address, so this patch replaces the cpu_memory_ld with a tlb_fill.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-06-27  cputlb: move get_page_addr_code  (KONRAD Frederic)
This just moves the code before the VICTIM_TLB_HIT macro definition so we can use it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-06-27  cputlb: cleanup get_page_addr_code to use VICTIM_TLB_HIT  (KONRAD Frederic)
This replaces the env1 and page_index variables with env and index so we can use the VICTIM_TLB_HIT macro later.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-06-15  tcg: move tcg related files into accel/tcg/ subdirectory  (Yang Zhong)
Move the TCG execution files cputlb.c, cpu-exec-common.c and cpu-exec.c into the accel/tcg/ subdirectory.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1496383606-18060-3-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>