path: root/accel/tcg
Age  Commit message  Author
2020-08-21  meson: accel  (Marc-André Lureau)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21  meson: rename included C source files to .c.inc  (Paolo Bonzini)
With Makefiles that have automatically generated dependencies, your generated includes are set as dependencies of the Makefile, so that they are built before everything else and they are available when first building the .c files.

Alternatively you can use a fine-grained dependency, e.g.

    target/arm/translate.o: target/arm/decode-neon-shared.inc.c

With Meson you have only one choice and it is a third option, namely "build at the beginning of the corresponding target"; the way you express it is to list the includes in the sources of that target.

The problem is that Meson decides if something is a source vs. a generated include by looking at the extension: '.c', '.cc', '.m', '.C' are sources, while everything else is considered an include---including '.inc.c'.

Use '.c.inc' to avoid this, as it is consistent with our other convention of using '.rst.inc' for included reStructuredText files. The editorconfig file is adjusted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21  trace: switch position of headers to what Meson requires  (Paolo Bonzini)
Meson doesn't enjoy the same flexibility we have with Make in choosing the include path. In particular the tracing headers are using $(build_root)/$(<D). In order to keep the include directives unchanged, the simplest solution is to generate headers with patterns like "trace/trace-audio.h" and place forwarding headers in the source tree such that for example "audio/trace.h" includes "trace/trace-audio.h". This patch is too ugly to be applied to the Makefiles now. It's only a way to separate the changes to the tracing header files from the Meson rewrite of the tracing logic. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
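To make the forwarding-header arrangement above concrete, here is a minimal sketch of such a shim; the audio/ example is the one named in the message, and the exact file contents are an assumption:

    /* audio/trace.h: forwarding header kept in the source tree.
     * Existing '#include "trace.h"' directives keep resolving here,
     * while the generated header lives under trace/ in the build tree. */
    #include "trace/trace-audio.h"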
2020-07-27  accel/tcg: better handle memory constrained systems  (Alex Bennée)
It turns out there are some 64 bit systems that have relatively low amounts of physical memory available to them (typically CI systems). Even with swapping available, a 1GB translation buffer that fills up can put the machine under increased memory pressure. Detect these low memory situations and reduce tb_size appropriately. Fixes: 600e17b2615 ("accel/tcg: increase default code gen buffer size for 64 bit") Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Robert Foley <robert.foley@linaro.org> Cc: BALATON Zoltan <balaton@eik.bme.hu> Cc: Christian Ehrhardt <christian.ehrhardt@canonical.com> Message-Id: <20200724064509.331-7-alex.bennee@linaro.org>
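A minimal sketch of the clamping idea described above; the helper name and the exact policy are illustrative assumptions, not the code that was merged:

    /* Sketch: cap the translation-buffer size on memory-constrained hosts
     * so a 1 GiB buffer cannot dominate a small CI machine's RAM. */
    #include <stddef.h>

    static size_t clamp_tb_size(size_t requested, size_t host_phys_mem)
    {
        /* Assumed policy: never claim more than half of physical memory. */
        if (requested > host_phys_mem / 2) {
            requested = host_phys_mem / 2;
        }
        return requested;
    }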
2020-07-24  tcg: update comments for save_iotlb_data in cputlb  (Alex Bennée)
I missed Emilio's review comments: Message-ID: <20200718205107.GA994221@sff> and the patch got merged. Correcting the comments now. Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200720122358.26881-1-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-07-17  tcg/cpu-exec: precise single-stepping after an interrupt  (Richard Henderson)
When single-stepping with a debugger attached to QEMU, and when an interrupt is raised, the debugger misses the first instruction after the interrupt. Tested-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Luc Michel <luc.michel@greensocs.com> Buglink: https://bugs.launchpad.net/qemu/+bug/757702 Message-Id: <20200717163029.2737546-1-richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-07-16  tcg/cpu-exec: precise single-stepping after an exception  (Luc Michel)
When single-stepping with a debugger attached to QEMU, and when an exception is raised, the debugger misses the first instruction after the exception:

    $ qemu-system-aarch64 -M virt -display none -cpu cortex-a53 -s -S
    $ aarch64-linux-gnu-gdb
    GNU gdb (GDB) 9.2
    [...]
    (gdb) tar rem :1234
    Remote debugging using :1234
    warning: No executable has been specified and target does not support
    determining executable automatically.  Try using the "file" command.
    0x0000000000000000 in ?? ()
    (gdb) # writing nop insns to 0x200 and 0x204
    (gdb) set *0x200 = 0xd503201f
    (gdb) set *0x204 = 0xd503201f
    (gdb) # 0x0 address contains 0 which is an invalid opcode.
    (gdb) # The CPU should raise an exception and jump to 0x200
    (gdb) si
    0x0000000000000204 in ?? ()

With this commit, the same run steps correctly on the first instruction of the exception vector:

    (gdb) si
    0x0000000000000200 in ?? ()

Buglink: https://bugs.launchpad.net/qemu/+bug/757702 Signed-off-by: Luc Michel <luc.michel@greensocs.com> Message-Id: <20200716193947.3058389-1-luc.michel@greensocs.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-07-15  cputlb: ensure we save the IOTLB data in case of reset  (Alex Bennée)
Any write to a device might cause a re-arrangement of memory, triggering a TLB flush and a potential re-size of the TLB, invalidating previous entries. This would cause users of qemu_plugin_get_hwaddr() to see the warning:

    invalid use of qemu_plugin_get_hwaddr

because of the failed tlb_lookup, which should always succeed. To prevent this we save the IOTLB data in case it is later needed by a plugin doing a lookup. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200713200415.26214-7-alex.bennee@linaro.org>
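The shape of the fix, as a hedged sketch; the structure and helper below are illustrative stand-ins rather than the exact cputlb.c definitions:

    /* Illustrative only: remember the result of the last I/O TLB lookup so
     * qemu_plugin_get_hwaddr() can still resolve the access even if a later
     * TLB flush or resize has dropped the entry. */
    #include <stdint.h>

    typedef struct SavedIOTLBSketch {
        uint64_t addr;        /* guest virtual address of the access */
        void *section;        /* memory region section the access hit */
        uint64_t mr_offset;   /* offset of the access within that region */
    } SavedIOTLBSketch;

    static inline void save_iotlb_sketch(SavedIOTLBSketch *s, uint64_t addr,
                                         void *section, uint64_t mr_offset)
    {
        s->addr = addr;
        s->section = section;
        s->mr_offset = mr_offset;
    }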
2020-07-10  error: Eliminate error_propagate() with Coccinelle, part 1  (Markus Armbruster)
When all we do with an Error we receive into a local variable is propagating to somewhere else, we can just as well receive it there right away. Convert

    if (!foo(..., &err)) {
        ...
        error_propagate(errp, err);
        ...
        return ...
    }

to

    if (!foo(..., errp)) {
        ...
        ...
        return ...
    }

where nothing else needs @err. Coccinelle script:

    @rule1 forall@
    identifier fun, err, errp, lbl;
    expression list args, args2;
    binary operator op;
    constant c1, c2;
    symbol false;
    @@
     if (
    (
    -    fun(args, &err, args2)
    +    fun(args, errp, args2)
    |
    -    !fun(args, &err, args2)
    +    !fun(args, errp, args2)
    |
    -    fun(args, &err, args2) op c1
    +    fun(args, errp, args2) op c1
    )
       )
     {
         ... when != err
             when != lbl:
             when strict
    -    error_propagate(errp, err);
         ... when != err
    (
         return;
    |
         return c2;
    |
         return false;
    )
     }

    @rule2 forall@
    identifier fun, err, errp, lbl;
    expression list args, args2;
    expression var;
    binary operator op;
    constant c1, c2;
    symbol false;
    @@
    -    var = fun(args, &err, args2);
    +    var = fun(args, errp, args2);
         ... when != err
     if (
    (
         var
    |
         !var
    |
         var op c1
    )
       )
     {
         ... when != err
             when != lbl:
             when strict
    -    error_propagate(errp, err);
         ... when != err
    (
         return;
    |
         return c2;
    |
         return false;
    |
         return var;
    )
     }

    @depends on rule1 || rule2@
    identifier err;
    @@
    -    Error *err = NULL;
         ... when != err

Not exactly elegant, I'm afraid.

The "when != lbl:" is necessary to avoid transforming

    if (fun(args, &err)) {
        goto out;
    }
    ...
    out:
    error_propagate(errp, err);

even though other paths to label out still need the error_propagate(). For an actual example, see sclp_realize().

Without the "when strict", Coccinelle transforms vfio_msix_setup(), incorrectly. I don't know what exactly "when strict" does, only that it helps here.

The match of return is narrower than what I want, but I can't figure out how to express "return where the operand doesn't use @err". For an example where it's too narrow, see vfio_intx_enable().

Silently fails to convert hw/arm/armsse.c, because Coccinelle gets confused by ARMSSE being used both as typedef and function-like macro there. Converted manually.

Line breaks tidied up manually. One nested declaration of @local_err deleted manually. Preexisting unwanted blank line dropped in hw/riscv/sifive_e.c.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20200707160613.848843-35-armbru@redhat.com>
2020-07-10  qapi: Use returned bool to check for failure, Coccinelle part  (Markus Armbruster)
The previous commit enables conversion of

    visit_foo(..., &err);
    if (err) {
        ...
    }

to

    if (!visit_foo(..., errp)) {
        ...
    }

for visitor functions that now return true / false on success / error. Coccinelle script:

    @@
    identifier fun =~ "check_list|input_type_enum|lv_start_struct|lv_type_bool|lv_type_int64|lv_type_str|lv_type_uint64|output_type_enum|parse_type_bool|parse_type_int64|parse_type_null|parse_type_number|parse_type_size|parse_type_str|parse_type_uint64|print_type_bool|print_type_int64|print_type_null|print_type_number|print_type_size|print_type_str|print_type_uint64|qapi_clone_start_alternate|qapi_clone_start_list|qapi_clone_start_struct|qapi_clone_type_bool|qapi_clone_type_int64|qapi_clone_type_null|qapi_clone_type_number|qapi_clone_type_str|qapi_clone_type_uint64|qapi_dealloc_start_list|qapi_dealloc_start_struct|qapi_dealloc_type_anything|qapi_dealloc_type_bool|qapi_dealloc_type_int64|qapi_dealloc_type_null|qapi_dealloc_type_number|qapi_dealloc_type_str|qapi_dealloc_type_uint64|qobject_input_check_list|qobject_input_check_struct|qobject_input_start_alternate|qobject_input_start_list|qobject_input_start_struct|qobject_input_type_any|qobject_input_type_bool|qobject_input_type_bool_keyval|qobject_input_type_int64|qobject_input_type_int64_keyval|qobject_input_type_null|qobject_input_type_number|qobject_input_type_number_keyval|qobject_input_type_size_keyval|qobject_input_type_str|qobject_input_type_str_keyval|qobject_input_type_uint64|qobject_input_type_uint64_keyval|qobject_output_start_list|qobject_output_start_struct|qobject_output_type_any|qobject_output_type_bool|qobject_output_type_int64|qobject_output_type_null|qobject_output_type_number|qobject_output_type_str|qobject_output_type_uint64|start_list|visit_check_list|visit_check_struct|visit_start_alternate|visit_start_list|visit_start_struct|visit_type_.*";
    expression list args;
    typedef Error;
    Error *err;
    @@
    -    fun(args, &err);
    -    if (err)
    +    if (!fun(args, &err))
         {
             ...
         }

A few line breaks tidied up manually.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200707160613.848843-19-armbru@redhat.com>
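For readers unfamiliar with the visitor API, the same conversion on a concrete call; visit_type_uint32() stands in for any of the functions matched by the regular expression above, and the surrounding functions are invented for the example:

    #include "qapi/error.h"
    #include "qapi/visitor.h"

    /* Before: receive the error into a local and test it. */
    static void get_len_old(Visitor *v, uint32_t *len, Error **errp)
    {
        Error *err = NULL;

        visit_type_uint32(v, "len", len, &err);
        if (err) {
            error_propagate(errp, err);
            return;
        }
    }

    /* After: the visitor returns false on failure, so the error can go
     * straight to errp and the return value carries the status. */
    static void get_len_new(Visitor *v, uint32_t *len, Error **errp)
    {
        if (!visit_type_uint32(v, "len", len, errp)) {
            return;
        }
    }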
2020-06-26  osdep: Make MIN/MAX evaluate arguments only once  (Eric Blake)
I'm not aware of any immediate bugs in qemu where a second runtime evaluation of the arguments to MIN() or MAX() causes a problem, but proactively preventing such abuse is easier than falling prey to an unintended case down the road. At any rate, here's the conversation that sparked the current patch: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg05718.html

Update the MIN/MAX macros to only evaluate their argument once at runtime; this uses typeof(1 ? (a) : (b)) to ensure that we are promoting the temporaries to the same type as the final comparison (we have to trigger type promotion, as typeof(bitfield) won't compile; and we can't use typeof((a) + (b)) or even typeof((a) + 0), as some of our uses of MAX are on void* pointers where such addition is undefined).

However, we are unable to work around gcc refusing to compile ({}) in a constant context (such as the array length of a static variable), even when only used in the dead branch of a __builtin_choose_expr(), so we have to provide a second macro pair MIN_CONST and MAX_CONST for use when both arguments are known to be compile-time constants and where the result must also be usable as a constant; this second form evaluates arguments multiple times but that doesn't matter for constants. By using a void expression as the expansion if a non-constant is presented to this second form, we can enlist the compiler to ensure the double evaluation is not attempted on non-constants.

Alas, as both macros now rely on compiler intrinsics, they are no longer usable in preprocessor #if conditions; those will just have to be open-coded or the logic rewritten into #define or runtime 'if' conditions (but where the compiler dead-code-elimination will probably still apply).

I tested that both gcc 10.1.1 and clang 10.0.0 produce errors for all forms of macro mis-use. As the errors can sometimes be cryptic, I'm demonstrating the gcc output:

Use of MIN when MIN_CONST is needed:

    In file included from /home/eblake/qemu/qemu-img.c:25:
    /home/eblake/qemu/include/qemu/osdep.h:249:5: error: braced-group within expression allowed only inside a function
      249 |     ({ \
          |     ^
    /home/eblake/qemu/qemu-img.c:92:12: note: in expansion of macro ‘MIN’
       92 |     char array[MIN(1, 2)] = "";
          |            ^~~

Use of MIN_CONST when MIN is needed:

    /home/eblake/qemu/qemu-img.c: In function ‘is_allocated_sectors’:
    /home/eblake/qemu/qemu-img.c:1225:15: error: void value not ignored as it ought to be
     1225 |     i = MIN_CONST(i, n);
          |       ^

Use of MIN in the preprocessor:

    In file included from /home/eblake/qemu/accel/tcg/translate-all.c:20:
    /home/eblake/qemu/accel/tcg/translate-all.c: In function ‘page_check_range’:
    /home/eblake/qemu/include/qemu/osdep.h:249:6: error: token "{" is not valid in preprocessor expressions
      249 |     ({ \
          |      ^

Fix the resulting callsites that used #if or computed a compile-time constant min or max to use the new macros. cpu-defs.h is interesting, as CPU_TLB_DYN_MAX_BITS is sometimes used as a constant and sometimes dynamic.

It may be worth improving glib's MIN/MAX definitions to be saner, but that is a task for another day.

Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200625162602.700741-1-eblake@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
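A hedged sketch of the single-evaluation trick the message describes; this is not the verbatim osdep.h definition, but it shows the typeof(1 ? (a) : (b)) idea:

    /* Copy each argument into a temporary exactly once.  The temporaries get
     * the type of the conditional expression 1 ? (a) : (b), which applies the
     * usual arithmetic conversions, so bitfields and mixed types behave as
     * they would in the comparison itself. */
    #define MIN_SKETCH(a, b)                                \
        ({                                                  \
            typeof(1 ? (a) : (b)) a_ = (a), b_ = (b);       \
            a_ < b_ ? a_ : b_;                              \
        })

As the message notes, a statement expression like this cannot appear in constant contexts, which is why the separate MIN_CONST/MAX_CONST pair is still needed there.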
2020-06-16  translate-all: call qemu_spin_destroy for PageDesc  (Emilio G. Cota)
The radix tree is append-only, but we can fail to insert a PageDesc if the insertion races with another thread. Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200609200738.445-8-robert.foley@linaro.org> Message-Id: <20200612190237.30436-11-alex.bennee@linaro.org>
2020-06-16  tcg: call qemu_spin_destroy for tb->jmp_lock  (Emilio G. Cota)
Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [RF: minor changes + remove tb_destroy_func] Message-Id: <20200609200738.445-7-robert.foley@linaro.org> Message-Id: <20200612190237.30436-10-alex.bennee@linaro.org>
2020-06-16  cputlb: destroy CPUTLB with tlb_destroy  (Emilio G. Cota)
I was after adding qemu_spin_destroy calls, but while at it I noticed that we are leaking some memory. Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200609200738.445-5-robert.foley@linaro.org> Message-Id: <20200612190237.30436-8-alex.bennee@linaro.org>
2020-06-02  accel/tcg: Provide a NetBSD specific aarch64 cpu_signal_handler  (Nick Hudson)
Fix qemu build on NetBSD/evbarm-aarch64 by providing a NetBSD specific cpu_signal_handler. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Nick Hudson <skrll@netbsd.org> Message-Id: <20200517101529.5367-1-skrll@netbsd.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  accel/tcg: Adjust cpu_signal_handler for NetBSD/arm  (Nick Hudson)
Fix building on NetBSD/arm by extracting the FSR value from the correct siginfo_t field. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Nick Hudson <skrll@netbsd.org> Message-Id: <20200516154147.24842-1-skrll@netbsd.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg: Implement gvec support for rotate by vector  (Richard Henderson)
No host backend support yet, but the interfaces for rotlv and rotrv are in place. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> --- v3: Drop the generic expansion from rot to shift; we can do better for each backend, and then this code becomes unused.
2020-06-02  tcg: Implement gvec support for rotate by immediate  (Richard Henderson)
No host backend support yet, but the interfaces for rotli are in place. Canonicalize immediate rotate to the left, based on a survey of architectures, but provide both left and right shift interfaces to the translators. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-15  translate-all: include guest address in out_asm output  (Alex Bennée)
We already have information about where each guest instruction's representation starts stored in tcg_ctx->gen_insn_data, so we can rectify the PC for faults. We can re-use this information to annotate the out_asm output with the guest instruction address, which makes it a bit easier to work out where you are, especially with longer blocks. A minor wrinkle is that some instructions get optimised away, so we have to scan forward until we find some actual generated code. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200513175134.19619-11-alex.bennee@linaro.org>
2020-05-15  disas: include an optional note for the start of disassembly  (Alex Bennée)
This will become useful shortly for providing more information about output assembly inline. While there fix up the indenting and code formatting in disas(). Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200513175134.19619-9-alex.bennee@linaro.org>
2020-05-15  accel/tcg: don't disable exec_tb trace events  (Alex Bennée)
I doubt the well-predicted trace event check is particularly special in the grand context of TCG code execution. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200513175134.19619-8-alex.bennee@linaro.org>
2020-05-15  accel/tcg: Relax va restrictions on 64-bit guests  (Richard Henderson)
We cannot at present limit a 64-bit guest to a virtual address space smaller than the host. It will mostly work to ignore this limitation, except if the guest uses high bits of the address space for tags. But it will certainly work better, as presently we can wind up failing to allocate the guest stack. Widen our user-only page tree to the host or abi pointer width. Remove the workaround for this problem from target/alpha. Always validate guest addresses vs reserved_va, as there we control allocation ourselves. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200513175134.19619-7-alex.bennee@linaro.org>
2020-05-15  qom: Drop parameter @errp of object_property_add() & friends  (Markus Armbruster)
The only way object_property_add() can fail is when a property with the same name already exists. Since our property names are all hardcoded, failure is a programming error, and the appropriate way to handle it is passing &error_abort.

Same for its variants, except for object_property_add_child(), which additionally fails when the child already has a parent. Parentage is also under program control, so this is a programming error, too.

We have a bit over 500 callers. Almost half of them pass &error_abort, slightly fewer ignore errors, one test case handles errors, and the remaining few callers pass them to their own callers. The previous few commits demonstrated once again that ignoring programming errors is a bad idea.

Of the few ones that pass on errors, several violate the Error API. The Error ** argument must be NULL, &error_abort, &error_fatal, or a pointer to a variable containing NULL. Passing an argument of the latter kind twice without clearing it in between is wrong: if the first call sets an error, it no longer points to NULL for the second call. ich9_pm_add_properties(), sparc32_ledma_realize(), sparc32_dma_realize(), xilinx_axidma_realize(), xilinx_enet_realize() are wrong that way.

When the one appropriate choice of argument is &error_abort, letting users pick the argument is a bad idea. Drop parameter @errp and assert the preconditions instead.

There's one exception to "duplicate property name is a programming error": the way object_property_add() implements the magic (and undocumented) "automatic arrayification". Don't drop @errp there. Instead, rename object_property_add() to object_property_try_add(), and add the obvious wrapper object_property_add().

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200505152926.18877-15-armbru@redhat.com> [Two semantic rebase conflicts resolved]
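The double-pass pitfall called out above, shown as a minimal hedged sketch; the two property-adding helpers are hypothetical stand-ins for object_property_add() callers:

    #include "qapi/error.h"

    /* Hypothetical helpers, each taking an Error ** like the old API did. */
    static void add_prop_a(Error **errp);
    static void add_prop_b(Error **errp);

    /* Wrong: if add_prop_a() sets *errp, errp no longer points to a NULL
     * Error * when add_prop_b() runs, violating the Error API precondition
     * described above. */
    static void add_props_wrong(Error **errp)
    {
        add_prop_a(errp);
        add_prop_b(errp);
    }

Dropping @errp and asserting success inside object_property_add() removes this class of mistake at the call sites.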
2020-05-15  qom: Drop object_property_set_description() parameter @errp  (Markus Armbruster)
object_property_set_description() and object_class_property_set_description() fail only when property @name is not found. There are 85 calls of object_property_set_description() and object_class_property_set_description(). None of them can fail:

* 84 immediately follow the creation of the property.
* The one in spapr_rng_instance_init() refers to a property created in spapr_rng_class_init(), from spapr_rng_properties[].

Every one of them still gets to decide what to pass for @errp. 51 calls pass &error_abort, 32 calls pass NULL, one receives the error and propagates it to &error_abort, and one propagates it to &error_fatal. I'm actually surprised none of them violates the Error API.

What are we gaining by letting callers handle the "property not found" error? Use when the property is not known to exist is simpler: you don't have to guard the call with a check. We haven't found such a use in 5+ years. Until we do, let's make life a bit simpler and drop the @errp parameter.

Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200505152926.18877-8-armbru@redhat.com> [One semantic rebase conflict resolved]
2020-05-11  accel/tcg: Add endian-specific cpu_{ld, st}* operations  (Richard Henderson)
We currently have target-endian versions of these operations, but no easy way to force a specific endianness. This can be helpful if the target has endian-specific operations, or a mode that swaps endianness. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  accel/tcg: Add probe_access_flags  (Richard Henderson)
This new interface will allow targets to probe for a page and then handle watchpoints themselves. This will be most useful for vector predicated memory operations, where one page lookup can be used for many operations, and one test can avoid many watchpoint checks. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-05-11  accel/tcg: Adjust probe_access call to page_check_range  (Richard Henderson)
We have validated that addr+size does not cross a page boundary. Therefore we need to validate exactly one page. We can achieve that by passing any value 1 <= x <= size to page_check_range. Passing 1 will simplify the next patch. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-28  tcg: Remove softmmu code_gen_buffer fixed address  (Richard Henderson)
The commentary talks about "in concert with the addresses assigned in the relevant linker script", except there is no linker script for softmmu, nor has there been for some time. (Do not confuse this with the user-only linker script editing that was removed in the previous patch; user-only does not use this code_gen_buffer allocation method.) Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-03-17  tcg: Remove tcg-runtime-gvec.c DO_CMP0  (Richard Henderson)
Partial cleanup from the CONFIG_VECTOR16 removal. Replace DO_CMP0 with its scalar expansion, a simple negation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-03-17  tcg: Tidy tcg-runtime-gvec.c DUP*  (Richard Henderson)
Partial cleanup from the CONFIG_VECTOR16 removal. Replace the DUP* expansions with the scalar argument. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-03-17  tcg: Tidy tcg-runtime-gvec.c types  (Richard Henderson)
Partial cleanup from the CONFIG_VECTOR16 removal. Replace the vec* types with their scalar expansions. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-03-17  tcg: Remove CONFIG_VECTOR16  (Richard Henderson)
The comment in tcg-runtime-gvec.c about CONFIG_VECTOR16 says that tcg-op-gvec.c has eliminated size 8 vectors, and only passes on multiples of 16. This may have been true of the first few operations, but is not true of all operations. In particular, multiply, shift by scalar, and compare of 8- and 16-bit elements are not expanded inline if host vector operations are not supported. For an x86_64 host that does not support AVX, this means that we will fall back to the helper, which will attempt to use SSE instructions, which will SEGV on an invalid 8-byte aligned memory operation. This patch simply removes the CONFIG_VECTOR16 code and configuration without further simplification. Buglink: https://bugs.launchpad.net/bugs/1863508 Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-02-28  accel/tcg: increase default code gen buffer size for 64 bit  (Alex Bennée)
While 32mb is certainly usable, a full system boot ends up flushing the codegen buffer nearly 100 times. Increase the default on 64 bit hosts to take advantage of all that spare memory. After this change I can boot my test system without any TB flushes. As we usually run more CONFIG_USER binaries at a time in typical usage we aren't quite as profligate for user-mode code generation usage. We also bring the static code gen defines to the same place to keep all the reasoning in the comments together. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com> Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com> Message-Id: <20200228192415.19867-5-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-02-28  accel/tcg: only USE_STATIC_CODE_GEN_BUFFER on 32 bit hosts  (Alex Bennée)
There is no particular reason to use a static codegen buffer on 64 bit hosts as we have address space to burn. Allow the common CONFIG_USER case to use the mmap'ed buffers like SoftMMU. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com> Message-Id: <20200228192415.19867-4-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-02-28  accel/tcg: remove link between guest ram and TCG cache size  (Alex Bennée)
Basing the TB cache size on the ram_size was always a bit of a heuristic, and it was broken by a1b18df9a4 which caused ram_size not to be fully realised at the time we initialise the TCG translation cache. The current DEFAULT_CODE_GEN_BUFFER_SIZE may still be a little small but follow-up patches will address that. Fixes: a1b18df9a4 Cc: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com> Message-Id: <20200228192415.19867-3-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-02-28  accel/tcg: use units.h for defining code gen buffer sizes  (Alex Bennée)
It's easier to read. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200228192415.19867-2-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
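As a small illustration of the readability gain (the macro names below are placeholders; KiB/MiB/GiB themselves come from include/qemu/units.h):

    #include "qemu/units.h"

    /* Before: sizes spelled as raw products the reader has to decode. */
    #define CODE_GEN_BUFFER_MIN_OLD   (2u * 1024 * 1024)
    /* After: the same value expressed with units.h. */
    #define CODE_GEN_BUFFER_MIN_NEW   (2 * MiB)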
2020-02-28  accel/tcg: fix race in cpu_exec_step_atomic (bug 1863025)  (Alex Bennée)
The bug describes a race whereby cpu_exec_step_atomic can acquire a TB which is invalidated by a tb_flush before we execute it. This doesn't affect the other cpu_exec modes as a tb_flush by its nature can only occur on a quiescent system. The race was described as:

    B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
    B3. tcg_tb_alloc obtains a new TB
    C3. TB obtained with tb_lookup__cpu_state or tb_gen_code (same TB as B2)
    A3. start_exclusive critical section entered
    A4. do_tb_flush is called, TB memory freed/re-allocated
    A5. end_exclusive exits critical section
    B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
    B3. tcg_tb_alloc reallocates TB from B2
    C4. start_exclusive critical section entered
    C5. cpu_tb_exec executes the TB code that was free in A4

The simplest fix is to widen the exclusive period to include the TB lookup. As a result we can drop the complication of checking we are in the exclusive region before we end it.

Cc: Yifan <me@yifanlu.com> Buglink: https://bugs.launchpad.net/qemu/+bug/1863025 Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200214144952.15502-1-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
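A hedged sketch of the widened exclusive region; start_exclusive()/end_exclusive() are QEMU's existing primitives, while the lookup and execution calls below are simplified stand-ins for what cpu_exec_step_atomic actually does:

    /* Sketch: enter the exclusive section before the TB lookup/generation,
     * so a concurrent do_tb_flush (which itself runs exclusively) cannot
     * free or reuse the TB between lookup and execution.  Because we always
     * hold the exclusive section here, the old "are we still exclusive?"
     * check before end_exclusive() goes away. */

    /* Stand-ins for illustration only. */
    static TranslationBlock *lookup_or_generate_tb(CPUState *cpu);
    static void execute_tb(CPUState *cpu, TranslationBlock *tb);

    static void step_atomic_sketch(CPUState *cpu)
    {
        start_exclusive();                                  /* now covers the lookup */
        TranslationBlock *tb = lookup_or_generate_tb(cpu);
        execute_tb(cpu, tb);
        end_exclusive();                                    /* unconditional */
    }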
2020-01-27  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* Register qdev properties as class properties (Marc-André)
* Cleanups (Philippe)
* virtio-scsi fix (Pan Nengyuan)
* Tweak Skylake-v3 model id (Kashyap)
* x86 UCODE_REV support and nested live migration fix (myself)
* Advisory mode for pvpanic (Zhenwei)

# gpg: Signature made Fri 24 Jan 2020 20:16:23 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (58 commits)
  build-sys: clean up flags included in the linker command line
  target/i386: Add the 'model-id' for Skylake -v3 CPU models
  qdev: use object_property_help()
  qapi/qmp: add ObjectPropertyInfo.default-value
  qom: introduce object_property_help()
  qom: simplify qmp_device_list_properties()
  vl: print default value in object help
  qdev: register properties as class properties
  qdev: move instance properties to class properties
  qdev: rename DeviceClass.props
  qdev: set properties with device_class_set_props()
  object: return self in object_ref()
  object: release all props
  object: add object_class_property_add_link()
  object: express const link with link property
  object: add direct link flag
  object: rename link "child" to "target"
  object: check strong flag with &
  object: do not free class properties
  object: add object_property_set_default
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-01-24  accel/tcg: Sanitize include path  (Philippe Mathieu-Daudé)
Commit af0440ae852 moved the qemu_tcg_configure() function, but introduced an extraneous 'include/' in the include path. As it is not necessary, remove it. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20200121110349.25842-11-philmd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-01-24  accel: Replace current_machine->accelerator by current_accel() wrapper  (Philippe Mathieu-Daudé)
We actually want to access the accelerator, not the machine, so use the current_accel() wrapper instead. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200121110349.25842-10-philmd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-01-21  cputlb: Hoist timestamp outside of loops over tlbs  (Richard Henderson)
Do not call get_clock_realtime() in tlb_mmu_resize_locked, but hoist it outside of any loop over a set of tlbs. There are only two (indirect) callers, tlb_flush_by_mmuidx_async_work and tlb_flush_page_locked, so this is not onerous. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
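The hoisting pattern in a hedged sketch; get_clock_realtime() and NB_MMU_MODES are the real names used above, while the resize helper's signature here is an assumption:

    /* Read the clock once, outside the loop over MMU indexes, and hand the
     * timestamp down instead of calling get_clock_realtime() per tlb. */
    static void flush_by_mmuidx_sketch(CPUArchState *env, uint16_t idxmap)
    {
        int64_t now = get_clock_realtime();   /* hoisted out of the loop */

        for (int mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
            if (idxmap & (1 << mmu_idx)) {
                tlb_mmu_resize_locked_sketch(env, mmu_idx, now);
            }
        }
    }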
2020-01-21  cputlb: Initialize tlbs as flushed  (Richard Henderson)
There's little point in leaving these data structures half initialized, and relying on a flush to be done during reset. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Partially merge tlb_dyn_init into tlb_init  (Richard Henderson)
Merge into the only caller, but at the same time split out tlb_mmu_init to initialize a single tlb entry. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Split out tlb_mmu_flush_locked  (Richard Henderson)
We will want to be able to flush a tlb without resizing. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Hoist tlb portions in tlb_flush_one_mmuidx_locked  (Richard Henderson)
No functional change, but the smaller expressions make the code easier to read. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Hoist tlb portions in tlb_mmu_resize_locked  (Richard Henderson)
No functional change, but the smaller expressions make the code easier to read. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
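What "hoisting tlb portions" means in practice, as a hedged sketch; the CPUTLBDesc/CPUTLBDescFast field names follow QEMU's layout only approximately:

    /* Before: each statement repeats the full env_tlb(env)->d[mmu_idx] /
     * env_tlb(env)->f[mmu_idx] path.  After: hoist the two sub-structures
     * into locals once, so the later expressions stay short. */
    static void resize_sketch(CPUArchState *env, int mmu_idx)
    {
        CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
        CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];

        desc->n_used_entries = 0;   /* illustrative bookkeeping            */
        fast->mask = 0;             /* illustrative reset of the fast part */
    }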
2020-01-21  cputlb: Pass CPUTLBDescFast to tlb_n_entries and sizeof_tlb  (Richard Henderson)
We do not need the entire CPUArchState to compute these values. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Make tlb_n_entries private to cputlb.c  (Richard Henderson)
There are no users of this function outside cputlb.c, and its interface will change in the next patch. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Merge tlb_table_flush_by_mmuidx into tlb_flush_one_mmuidx_locked  (Richard Henderson)
There is only one caller for tlb_table_flush_by_mmuidx. Place the result at the earlier line number, due to an expected user in the near future. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-21  cputlb: Handle NB_MMU_MODES > TARGET_PAGE_BITS_MIN  (Richard Henderson)
In target/arm we will shortly have "too many" mmu_idx. The current minimum barrier is caused by the way in which tlb_flush_page_by_mmuidx is coded. We can remove this limitation by allocating memory for consumption by the worker. Let us assume that this is the unlikely case, as will be the case for the majority of targets which have so far satisfied the BUILD_BUG_ON, and only allocate memory when necessary. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
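A hedged sketch of the "allocate only when it no longer fits" pattern described above; the queueing helpers are hypothetical stand-ins for QEMU's async_run_on_cpu machinery, and the struct mirrors the idea rather than the exact cputlb.c type:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct FlushPageByMMUIdxSketch {
        uint64_t addr;      /* target page address            */
        uint16_t idxmap;    /* bitmap of MMU indexes to flush */
    } FlushPageByMMUIdxSketch;

    /* Hypothetical stand-ins for posting work to the target vCPU. */
    static void queue_work_encoded(uint64_t encoded);
    static void queue_work_alloc(FlushPageByMMUIdxSketch *d);

    static void flush_page_by_mmuidx_sketch(uint64_t addr, uint16_t idxmap,
                                            unsigned target_page_bits)
    {
        if (idxmap < (1u << target_page_bits)) {
            /* Common case: the bitmap fits in the low bits of the page
             * address, so no allocation is needed. */
            queue_work_encoded(addr | idxmap);
        } else {
            /* Unlikely case (NB_MMU_MODES > TARGET_PAGE_BITS_MIN): allocate
             * a descriptor for the worker, which frees it when done. */
            FlushPageByMMUIdxSketch *d = malloc(sizeof(*d));
            d->addr = addr;
            d->idxmap = idxmap;
            queue_work_alloc(d);
        }
    }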