path: root/include/exec/cputlb.h
2020-11-15  overall/alpha tcg cpus|hppa: Fix Lesser GPL version number  (Chetan Pant)
There is no "version 2" of the "Lesser" General Public License. It is
either "GPL version 2.0" or "Lesser GPL version 2.1". This patch
replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL
version 2.1" in the comment sections.

Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023123353.19796-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2019-08-16  include: Make headers more self-contained  (Markus Armbruster)
Back in 2016, we discussed[1] rules for headers, and these were
generally liked:

1. Have a carefully curated header that's included everywhere first.
   We got that already thanks to Peter: osdep.h.

2. Headers should normally include everything they need beyond
   osdep.h. If exceptions are needed for some reason, they must be
   documented in the header. If all that's needed from a header is
   typedefs, put those into qemu/typedefs.h instead of including the
   header.

3. Cyclic inclusion is forbidden.

This patch gets include/ closer to obeying 2. It's actually extracted
from my "[RFC] Baby steps towards saner headers" series[2], which
demonstrates a possible path towards checking 2 automatically. It
passes the RFC test there.

[1] Message-ID: <87h9g8j57d.fsf@blackfin.pond.sub.org>
    https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg03345.html
[2] Message-Id: <20190711122827.18970-1-armbru@redhat.com>
    https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg02715.html

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-2-armbru@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
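To make rule 2 concrete, here is a minimal sketch of a self-contained
header. The file and its contents (widget.h, Widget, widget_reset) are
hypothetical, not from the QEMU tree; only qemu/queue.h,
qemu/typedefs.h, and osdep.h are real:

    /* widget.h -- hypothetical example of a self-contained header */
    #ifndef WIDGET_H
    #define WIDGET_H

    /* Rule 2: include what we use beyond osdep.h ourselves. */
    #include "qemu/queue.h"          /* for QLIST_ENTRY */

    typedef struct Widget {
        QLIST_ENTRY(Widget) next;
        MemoryRegion *mr;            /* typedef only, so rely on
                                      * qemu/typedefs.h (pulled in via
                                      * osdep.h) instead of including
                                      * the full memory header */
    } Widget;

    void widget_reset(Widget *w);

    #endif /* WIDGET_H */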
2018-10-31  cputlb: Count "partial" and "elided" tlb flushes  (Richard Henderson)
Our only statistic so far was "full" tlb flushes, where all mmu_idx
are flushed at the same time. Now count "partial" tlb flushes where
sets of mmu_idx are flushed, but the set is not maximal. Account one
per mmu_idx flushed, as that is the unit of work performed.

We don't actually count elided flushes yet, but go ahead and change
the interface presented to the monitor all at once.

Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
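A minimal sketch of that accounting rule, assuming each flush request
carries a bitmap of mmu_idx values; every name here (TLBFlushStats,
account_flush, the mode count) is an illustrative stand-in rather than
QEMU's actual code, and __builtin_popcount assumes GCC/Clang:

    #include <stdint.h>

    #define NB_MMU_MODES_SKETCH 8
    #define ALL_MMUIDX_BITS     ((1u << NB_MMU_MODES_SKETCH) - 1)

    struct TLBFlushStats {
        uint64_t full;     /* every mmu_idx flushed at once */
        uint64_t partial;  /* one count per mmu_idx in a smaller set */
    };

    static void account_flush(struct TLBFlushStats *s, uint16_t idxmap)
    {
        if (idxmap == ALL_MMUIDX_BITS) {
            s->full++;
        } else {
            /* the unit of work is one mmu_idx: count each set bit */
            s->partial += __builtin_popcount(idxmap);
        }
    }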
2017-10-10  cputlb: bring back tlb_flush_count under !TLB_DEBUG  (Emilio G. Cota)
Commit f0aff0f124 ("cputlb: add assert_cpu_is_self checks") buried the
increment of tlb_flush_count under TLB_DEBUG. This results in "info
jit" always (mis)reporting 0 TLB flushes when !TLB_DEBUG.

Besides, under MTTCG tlb_flush_count is updated by several threads, so
in order not to lose counts we'd either have to use atomic ops or
distribute the counter, which is more scalable. This patch does the
latter by embedding tlb_flush_count in CPUArchState. The global count
is then easily obtained by iterating over the CPU list.

Note that this change also requires updating the accessors to
tlb_flush_count to use atomic_read/set whenever there may be
conflicting accesses (as defined in C11) to it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
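The distributed-counter idea, sketched with plain C11 atomics; QEMU's
CPU list and its atomic_read/set wrappers are replaced by a fixed
array here, so treat this as an illustration of the scheme, not the
patch itself:

    #include <stdatomic.h>
    #include <stdint.h>

    #define MAX_CPUS 8

    struct CPU {
        /* each vCPU increments only its own counter: no contention */
        _Atomic uint64_t tlb_flush_count;
    };

    static struct CPU cpus[MAX_CPUS];

    static void record_flush(struct CPU *cpu)
    {
        /* single writer per counter; relaxed ordering suffices */
        atomic_fetch_add_explicit(&cpu->tlb_flush_count, 1,
                                  memory_order_relaxed);
    }

    static uint64_t total_flush_count(void)
    {
        uint64_t sum = 0;
        for (int i = 0; i < MAX_CPUS; i++) {
            /* atomic load: there may be concurrent writers */
            sum += atomic_load_explicit(&cpus[i].tlb_flush_count,
                                        memory_order_relaxed);
        }
        return sum;
    }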
2017-02-24  cputlb: atomically update tlb fields used by tlb_reset_dirty  (Alex Bennée)
The main use case for tlb_reset_dirty is to set the TLB_NOTDIRTY flags
in TLB entries to force the slow-path on writes. This is used to mark
page ranges containing code which has been translated so it can be
invalidated if written to. To do this safely we need to ensure the TLB
entries in question for all vCPUs are updated before we attempt to run
the code, otherwise a race could be introduced.

To achieve this we atomically set the flag in tlb_reset_dirty_range
and take care when setting it when the TLB entry is filled.

On 32 bit systems attempting to emulate 64 bit guests we don't even
bother, as we might not have the atomic primitives available. MTTCG is
disabled in this case and can't be forced on.

The copy_tlb_helper function helps keep the atomic semantics in one
place to avoid confusion. The dirty helper function is made static as
it isn't used outside of cputlb.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
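A sketch of the atomic update in isolation, assuming the host can
update the entry's address word in one atomic operation; the field
name and flag value are illustrative, not QEMU's exact TLB layout:

    #include <stdatomic.h>
    #include <stdint.h>

    #define TLB_NOTDIRTY (1u << 4)   /* illustrative bit position */

    typedef struct CPUTLBEntrySketch {
        _Atomic uintptr_t addr_write;   /* address plus flag bits */
    } CPUTLBEntrySketch;

    static void tlb_mark_notdirty(CPUTLBEntrySketch *e)
    {
        /* atomic RMW, so a concurrent TLB fill on another vCPU
         * cannot overwrite and lose the flag */
        atomic_fetch_or(&e->addr_write, TLB_NOTDIRTY);
    }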
2015-09-16  include/exec: Move cputlb exec.c defs out  (Peter Crosthwaite)
Move the architecture agnostic function prototypes for exec.c out of
cputlb.h to exec-all.h. This allows hiding of the arch specific
cputlb.h from exec.c, which should be getting close to having no
architecture specifics.

Prepares support for multi-arch, which will have a minimal cpu.h that
services exec.c but not cputlb.h.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <b4fe754c58c860315e35d44430c26b1c967ce2c9.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-16  cputlb: Change tlb_set_dirty() arg to cpu  (Peter Crosthwaite)
Change tlb_set_dirty() to accept a CPU instead of an env pointer. This
allows for removal of another CPUArchState usage from prototypes that
need to be QOMified.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <d2b1dcbe7945112989861d8ba7369449c11cc273.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
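The shape of this kind of QOMification, as a hypothetical sketch; the
stand-in struct layouts below only gesture at QEMU's real
CPUState/CPUArchState relationship:

    /* Stand-ins for the real types, for illustration only. */
    typedef struct CPUArchState { int placeholder; } CPUArchState;
    typedef struct CPUState { CPUArchState *env_ptr; } CPUState;

    /* Before: void tlb_set_dirty(CPUArchState *env, unsigned long vaddr);
     * After: take the QOM CPU object; reach arch state only inside. */
    static void tlb_set_dirty_sketch(CPUState *cpu, unsigned long vaddr)
    {
        CPUArchState *env = cpu->env_ptr;  /* still reachable if needed */
        (void)env;
        (void)vaddr;                       /* real body elided */
    }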
2015-09-16  cputlb: move CPU_LOOP() for tlb_reset() to exec.c  (Peter Crosthwaite)
To prepare for multi-arch, cputlb.c should only have awareness of one
single architecture. This means it should not have access to the full
CPU lists, which may be heterogeneous. Instead, push the CPU_LOOP() up
to the one and only caller in exec.c.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <db06dc6c49f8970caaf116d0385f00ee10a56f2f.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
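A schematic of the hoisted loop, with illustrative names
(tlb_flush_one, tlb_flush_all) rather than the real functions: the
per-CPU half stays architecture-agnostic while only the caller walks
the list:

    #define MAX_CPUS 4

    typedef struct CPUState { int tlb_dummy; } CPUState;

    static CPUState cpu_list[MAX_CPUS];  /* stand-in for the CPU list */

    /* cputlb side: operates on exactly one CPU, no list awareness */
    static void tlb_flush_one(CPUState *cpu)
    {
        cpu->tlb_dummy = 0;   /* placeholder for the per-CPU flush */
    }

    /* exec.c side: the only place that walks the (possibly
     * heterogeneous) CPU list */
    static void tlb_flush_all(void)
    {
        for (int i = 0; i < MAX_CPUS; i++) {
            tlb_flush_one(&cpu_list[i]);
        }
    }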
2015-06-05  cputlb: remove useless arguments to tlb_unprotect_code_phys, rename  (Paolo Bonzini)
These days modification of the TLB is done in notdirty_mem_write, so
the virtual address and env pointer are unnecessary. The new name of
the function, tlb_unprotect_code, is consistent with tlb_protect_code.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-16  exec: make iotlb RCU-friendly  (Paolo Bonzini)
After the previous patch, TLBs will be flushed on every change to the
memory mapping. This patch augments that with synchronization of the
MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside the BQL.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
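The read-side pattern that makes this safe, sketched with plain C11
atomics standing in for QEMU's RCU primitives
(rcu_dereference/rcu_assign_pointer); the names suffixed "sketch" are
hypothetical:

    #include <stdatomic.h>

    typedef struct SectionSketch { int id; } SectionSketch;

    static _Atomic(SectionSketch *) cur_section;

    static SectionSketch *iotlb_to_region_sketch(void)
    {
        /* acquire load, like rcu_dereference: readers get a
         * consistent snapshot of the current memory map */
        return atomic_load_explicit(&cur_section, memory_order_acquire);
    }

    static void publish_new_map(SectionSketch *s)
    {
        /* release store, like rcu_assign_pointer: the old section
         * may only be freed after readers finish (synchronize_rcu
         * elided from this sketch) */
        atomic_store_explicit(&cur_section, s, memory_order_release);
    }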
2014-03-13  exec: Change memory_region_section_get_iotlb() argument to CPUState  (Andreas Färber)
It no longer needs CPUArchState since moving watchpoints to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cputlb: Change tlb_unprotect_code_phys() argument to CPUState  (Andreas Färber)
Note that the argument is unused.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  translate-all: Change tb_flush_jmp_cache() argument to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-20  exec: Resolve subpages in one step except for IOTLB fills  (Jan Kiszka)
Except for the case of setting the IOTLB entry in TCG mode, we can
avoid the subpage dispatching handlers and do the resolution directly
on address_space_lookup_region.

An IOTLB entry describes a full page, not only the region that the
first access to a sub-divided page may return. This patch therefore
introduces a special translation function,
address_space_translate_for_iotlb, that avoids the subpage resolution.

In contrast, callers of the existing address_space_translate service
will now always receive the terminal memory region section. This will
be important for breaking the BQL and for enabling unaligned memory
regions.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
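A loose sketch of the two resolution depths; the structures and names
here are purely illustrative and far simpler than the real dispatch
tables:

    #include <stdbool.h>

    typedef struct RegionSketch {
        bool is_subpage;
        struct RegionSketch *sub[2];   /* two halves, for the sketch */
    } RegionSketch;

    /* IOTLB fill: stop before subpage dispatch, because an IOTLB
     * entry describes the full page, not one sub-divided slice. */
    static RegionSketch *translate_for_iotlb_sketch(RegionSketch *r)
    {
        return r;
    }

    /* Ordinary access: resolve the subpage in the same step, so the
     * caller always receives the terminal region. */
    static RegionSketch *translate_sketch(RegionSketch *r, bool upper)
    {
        return r->is_subpage ? r->sub[upper ? 1 : 0] : r;
    }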
2013-05-29  memory: add address_space_translate  (Paolo Bonzini)
Using phys_page_find to translate an AddressSpace to a
MemoryRegionSection is unwieldy. It requires passing the page index
rather than the address, and memory_region_section_addr then has to be
called separately.

Replace memory_region_section_addr with a function that does all of
it: call phys_page_find, compute the offset within the region, and
check how big the current mapping is. This way, a large flat region
can be written with a single lookup rather than a page at a time.

address_space_translate will also provide a single point where IOMMU
forwarding is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
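A self-contained mock of the one-step lookup this describes: a single
call returns the region, the offset within it, and how much of the
mapping is contiguous. The names (mock_translate, MockRegion,
flat_map) are invented for the sketch, and the real
address_space_translate signature has evolved across QEMU versions:

    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t hwaddr;

    typedef struct MockRegion {
        hwaddr base;
        hwaddr size;
    } MockRegion;

    static MockRegion flat_map[] = {
        { 0x0000, 0x8000 },   /* e.g. RAM  */
        { 0x8000, 0x8000 },   /* e.g. MMIO */
    };

    /* One lookup yields region + offset (*xlat) + contiguous length
     * (*plen), replacing the old phys_page_find +
     * memory_region_section_addr two-step. */
    static MockRegion *mock_translate(hwaddr addr, hwaddr *xlat,
                                      hwaddr *plen)
    {
        for (size_t i = 0; i < sizeof(flat_map) / sizeof(flat_map[0]);
             i++) {
            MockRegion *mr = &flat_map[i];
            if (addr >= mr->base && addr - mr->base < mr->size) {
                *xlat = addr - mr->base;
                hwaddr avail = mr->size - *xlat;
                if (*plen > avail) {
                    *plen = avail;   /* clamp to the current mapping */
                }
                return mr;
            }
        }
        return NULL;   /* unassigned address */
    }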
2012-12-19  exec: move include files to include/exec/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>