path: root/target
Age  Commit message  Author
2018-07-03  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging  (Peter Maydell)
ppc patch queue 2018-07-03

Here's a last minute pull request before today's soft freeze. Ideally I would have sent this earlier, but I was waiting for a couple of extra fixes I knew were close. And the freeze crept up on me, like always. Most of the changes here are bugfixes in any case. There are some cleanups as well, which have been in my staging tree for a little while. There are a couple of truly new features (some extensions to the sam460ex platform), but these are low risk, since they only affect a new and not really stabilized machine type anyway.

Highlights are:
 * Mac platform improvements from Mark Cave-Ayland
 * Sam460ex improvements from BALATON Zoltan et al.
 * XICS interrupt handler cleanups from Cédric Le Goater
 * TCG improvements for atomic loads and stores from Richard Henderson
 * Assorted other bugfixes

# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
  ppc: Include vga cirrus card into the compiling process
  target/ppc: Relax reserved bitmask of indexed store instructions
  target/ppc: set is_jmp on ppc_tr_breakpoint_check
  spapr: compute default value of "hpt-max-page-size" later
  target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
  target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
  ppc440_uc: Basic emulation of PPC440 DMA controller
  sam460ex: Add RTC device
  hw/timer: Add basic M41T80 emulation
  ppc4xx_i2c: Rewrite to model hardware more closely
  hw/ppc: Give sam46ex its own config option
  fpu_helper.c: fix setting FPSCR[FI] bit
  target/ppc: Implement the rest of gen_st_atomic
  target/ppc: Implement the rest of gen_ld_atomic
  target/ppc: Use atomic min/max helpers
  target/ppc: Use MO_ALIGN for EXIWX and ECOWX
  target/ppc: Split out gen_st_atomic
  target/ppc: Split out gen_ld_atomic
  target/ppc: Split out gen_load_locked
  target/ppc: Tidy gen_conditional_store
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	hw/ppc/spapr.c
2018-07-03  target/ppc: Relax reserved bitmask of indexed store instructions  (BALATON Zoltan)
The PPC440 User Manual says that if bit 31 is set, the contents of CR[CR0] are undefined for indexed store instructions, but this form is not invalid. Other PPC variants conforming to recent ISA versions, where this bit may be reserved, should ignore reserved bits and not raise an invalid instruction exception. In particular, MorphOS has an stwx instruction with bit 31 set and currently fails to boot because of this. With this patch it gets further. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: set is_jmp on ppc_tr_breakpoint_check  (Emilio G. Cota)
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert to TranslatorOps", 2018-02-16). Fix it by setting is_jmp, so that we break from the translation loop as originally intended. Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
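To make the symptom concrete, here is a small standalone C sketch (not QEMU code; the loop, addresses and flag are made up) of why a breakpoint hook that raises the debug exception but forgets to set is_jmp lets the translator keep going:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in for the translator loop: the per-target breakpoint
     * hook must set is_jmp, otherwise translation continues past the
     * breakpoint instead of ending the block. */
    typedef struct {
        unsigned long pc;
        bool is_jmp;                  /* plays the role of DisasContextBase::is_jmp */
    } ToyDisasContext;

    static void breakpoint_check(ToyDisasContext *ctx, bool fixed)
    {
        printf("raise debug exception at pc=%#lx\n", ctx->pc);
        if (fixed) {
            ctx->is_jmp = true;       /* the one-line fix: leave the loop */
        }
    }

    static void translate_block(bool fixed)
    {
        ToyDisasContext ctx = { .pc = 0x100, .is_jmp = false };

        while (!ctx.is_jmp && ctx.pc < 0x110) {
            if (ctx.pc == 0x104) {    /* pretend a GDB breakpoint sits here */
                breakpoint_check(&ctx, fixed);
                ctx.pc += 4;          /* the hook accounts for the insn size */
                continue;
            }
            printf("translate insn at pc=%#lx\n", ctx.pc);
            ctx.pc += 4;
        }
    }

    int main(void)
    {
        puts("-- without the fix --");
        translate_block(false);       /* keeps translating past the breakpoint */
        puts("-- with the fix --");
        translate_block(true);        /* stops right after the debug exception */
        return 0;
    }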
2018-07-03  target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()  (Greg Kurz)
In a future patch the machine code will need to retrieve the MMU information from KVM during machine initialization, before the CPUs are created. KVM_PPC_GET_SMMU_INFO is actually a VM-class ioctl, so we don't need to have a CPU object around; we just need KVM to be initialized, and we can then use the kvm_state global. This patch just does that. Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()  (Greg Kurz)
Now that we're checking that our MMU configuration is supported by KVM, rather than adjusting it to KVM, it doesn't really make sense to have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy to provide the details, we should rather treat this as an error. This patch thus adds error reporting to kvm_get_smmu_info() and gets rid of the fallback code. QEMU will now terminate if KVM fails to provide MMU details. This may break some very old setups, but the simplification is worth the sacrifice. Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  fpu_helper.c: fix setting FPSCR[FI] bit  (John Arbuckle)
The FPSCR[FI] bit indicates whether the last floating point instruction had a result that was rounded. Each consecutive floating point instruction is supposed to set this bit to the correct value, but currently it is not set as often as it should be. I have verified that this is the behavior of a real PowerPC 950. This patch fixes the problem by setting this bit after each floating point instruction. The description of this bit can be found in table 2-4 on page 63 of https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html Signed-off-by: John Arbuckle <programmingkidx@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
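As an analogy only (using the host's <fenv.h>, not QEMU's softfloat code), FI is essentially "was the last result inexact", so it has to be recomputed per operation rather than treated as sticky:

    #include <fenv.h>
    #include <stdio.h>

    /* Host-side analogy: report whether one FP operation had to round,
     * the way FPSCR[FI] should reflect only the most recent instruction. */
    static int result_was_rounded(double a, double b, double *res)
    {
        volatile double va = a, vb = b;   /* keep the division at run time */

        feclearexcept(FE_INEXACT);
        *res = va / vb;
        return fetestexcept(FE_INEXACT) != 0;
    }

    int main(void)
    {
        double r;

        printf("1.0/2.0 rounded? %d\n", result_was_rounded(1.0, 2.0, &r)); /* 0 */
        printf("1.0/3.0 rounded? %d\n", result_was_rounded(1.0, 3.0, &r)); /* 1 */
        return 0;
    }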
2018-07-03  target/ppc: Implement the rest of gen_st_atomic  (Richard Henderson)
The store twin case was stubbed out. For now, implement it only within a serial context, forcing parallel execution to synchronize. It would be possible to implement with a cmpxchg loop, if we care, but the loose alignment requirements (simply no crossing 32-byte boundary) might send us back to the serial context anyway. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Implement the rest of gen_ld_atomic  (Richard Henderson)
These cases were stubbed out. For now, implement them only within a serial context, forcing parallel execution to synchronize. It would be possible to implement these with cmpxchg loops, if we care. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Use atomic min/max helpers  (Richard Henderson)
These operations were previously unimplemented for ppc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Use MO_ALIGN for EXIWX and ECOWX  (Richard Henderson)
This avoids the need for gen_check_align entirely. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Split out gen_st_atomic  (Richard Henderson)
Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Split out gen_ld_atomic  (Richard Henderson)
Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Split out gen_load_locked  (Richard Henderson)
Leave only the minimal amount of code within the LDAR macro, moving the rest of the code into gen_load_locked. Use MO_ALIGN and remove the explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Tidy gen_conditional_store  (Richard Henderson)
Leave only the minimal amount of code within the STCX macro, moving the rest of the code into gen_conditional_store. Remove the explicit call to gen_check_align; the matching LDAX will have already checked alignment, and we verify the same address. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Remove POWERPC_EXCP_STCX  (Richard Henderson)
Always use the gen_conditional_store implementation that uses atomic_cmpxchg. Make sure and clear reserve_addr across most interrupts crossing the cpu_loop. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Use atomic cmpxchg for STQCX  (Richard Henderson)
When running in a parallel context, we must use a helper in order to perform the 128-bit atomic operation. When running in a serial context, do the compare before the store. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Use atomic store for STQ  (Richard Henderson)
Section 1.4 of the Power ISA v3.0B states that this insn is single-copy atomic. As we cannot (yet) issue 128-bit stores within TCG, use the generic helpers provided. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03  target/ppc: Use atomic load for LQ and LQARX  (Richard Henderson)
Section 1.4 of the Power ISA v3.0B states that both of these instructions are single-copy atomic. As we cannot (yet) issue 128-bit loads within TCG, use the generic helpers provided. Since TCG cannot (yet) return a 128-bit value, add a slot within CPUPPCState for returning the high half of a 128-bit return value. This solution is preferred to the helper assigning to architectural registers directly, as it avoids clobbering all TCG live values. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
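A standalone sketch of the calling convention described above (toy types, not QEMU's CPUPPCState or its real helpers): the helper returns the low 64 bits and parks the high 64 bits in a dedicated CPU-state slot, because the call interface can only return one 64-bit value.

    #include <inttypes.h>
    #include <stdio.h>

    typedef struct {
        uint64_t retxh;               /* slot for the high half of a 128-bit result */
    } ToyCPUState;

    static uint64_t helper_lq(ToyCPUState *env, const uint64_t mem[2])
    {
        env->retxh = mem[1];          /* high half goes via the state slot */
        return mem[0];                /* low half goes via the return value */
    }

    int main(void)
    {
        ToyCPUState env;
        uint64_t quad[2] = { 0x1122334455667788ULL, 0x99aabbccddeeff00ULL };
        uint64_t lo = helper_lq(&env, quad);

        printf("lo=%016" PRIx64 " hi=%016" PRIx64 "\n", lo, env.retxh);
        return 0;
    }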
2018-07-03  target/ppc: Add do_unaligned_access hook  (Richard Henderson)
This allows faults from MO_ALIGN to have the same effect as from gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-02  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* IEC units series (Philippe)
* Hyper-V PV TLB flush (Vitaly)
* git archive detection (Daniel)
* host serial passthrough fix (David)
* NPT support for SVM emulation (Jan)
* x86 "info mem" and "info tlb" fix (Doug)

# gpg: Signature made Mon 02 Jul 2018 16:18:21 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (50 commits)
  tcg: simplify !CONFIG_TCG handling of tb_invalidate_*
  i386/monitor.c: make addresses canonical for "info mem" and "info tlb"
  target-i386: Add NPT support
  serial: Open non-block
  bsd-user: Use the IEC binary prefix definitions
  linux-user: Use the IEC binary prefix definitions
  tests/crypto: Use the IEC binary prefix definitions
  vl: Use the IEC binary prefix definitions
  monitor: Use the IEC binary prefix definitions
  cutils: Do not include "qemu/units.h" directly
  hw/rdma: Use the IEC binary prefix definitions
  hw/virtio: Use the IEC binary prefix definitions
  hw/vfio: Use the IEC binary prefix definitions
  hw/sd: Use the IEC binary prefix definitions
  hw/usb: Use the IEC binary prefix definitions
  hw/net: Use the IEC binary prefix definitions
  hw/i386: Use the IEC binary prefix definitions
  hw/ppc: Use the IEC binary prefix definitions
  hw/mips: Use the IEC binary prefix definitions
  hw/mips/r4k: Constify params_size
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-02  i386/monitor.c: make addresses canonical for "info mem" and "info tlb"  (Doug Gale)
Fix the output of the "info mem" and "info tlb" monitor commands so that it shows canonical addresses. In 48-bit addressing mode, the upper 16 bits of a linear address are equal to bit 47. In 57-bit addressing mode (LA57), the upper 7 bits of a linear address are equal to bit 56. Signed-off-by: Doug Gale <doug16k@gmail.com> Message-Id: <20180617084025.29198-1-doug16k@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
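A minimal standalone sketch of the sign extension described above (illustrative code and example address, not the monitor implementation):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Canonical form: the bits above the implemented virtual address width
     * must all equal the top implemented bit (bit 47 for 48-bit paging,
     * bit 56 for 57-bit/LA57).  Shifting up and arithmetically shifting
     * back down sign-extends the address. */
    static uint64_t canonicalize(uint64_t la, int va_bits)   /* va_bits: 48 or 57 */
    {
        int shift = 64 - va_bits;

        return (uint64_t)((int64_t)(la << shift) >> shift);
    }

    int main(void)
    {
        /* bit 47 set but upper bits zero: printed non-canonically before the fix */
        uint64_t la = 0x0000ffff80001000ULL;

        printf("48-bit: %016" PRIx64 "\n", canonicalize(la, 48)); /* ffffffff80001000 */
        printf("57-bit: %016" PRIx64 "\n", canonicalize(la, 57)); /* 0000ffff80001000 */
        return 0;
    }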
2018-07-02  target-i386: Add NPT support  (Jan Kiszka)
This implements NPT support for SVM by hooking into x86_cpu_handle_mmu_fault where it reads the stage-1 page table. Whether we need to perform this second-stage translation, and how, is decided during vmrun and stored in hflags2, along with nested_cr3 and nested_pg_mode. As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need retaddr in that function. To avoid changing the signature of cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys via the CPU state. This was tested successfully via the Jailhouse hypervisor. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Message-Id: <567473a0-6005-5843-4c73-951f476085ca@web.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02  hw/ppc: Use the IEC binary prefix definitions  (Philippe Mathieu-Daudé)
It eases code review: the unit is explicit. Patch generated using: $ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/ and modified manually. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20180625124238.25339-33-f4bug@amsat.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
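A before/after sketch of what this series does; the KiB/MiB/GiB definitions below mirror what "qemu/units.h" provides, repeated here so the example is standalone:

    #include <inttypes.h>
    #include <stdio.h>

    /* Mirrors the definitions provided by "qemu/units.h". */
    #define KiB (INT64_C(1) << 10)
    #define MiB (INT64_C(1) << 20)
    #define GiB (INT64_C(1) << 30)

    int main(void)
    {
        int64_t ram_old = 512 * 1024 * 1024;   /* before: unit hidden in a magic number */
        int64_t ram_new = 512 * MiB;           /* after: unit explicit, easier to review */

        printf("equal: %d, ram: %" PRId64 " MiB\n", ram_old == ram_new, ram_new / MiB);
        return 0;
    }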
2018-07-02  hw/xtensa: Use the IEC binary prefix definitions  (Philippe Mathieu-Daudé)
It eases code review: the unit is explicit. Patch generated using: $ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/ $ git grep -n '[<>][<>]= ?[1-5]0' and modified manually. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Message-Id: <20180625124238.25339-22-f4bug@amsat.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02  x86/cpu: Use definitions from "qemu/units.h"  (Philippe Mathieu-Daudé)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Acked-by: Eduardo Habkost <ehabkost@redhat.com> Message-Id: <20180625124238.25339-4-f4bug@amsat.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02  i386/kvm: add support for Hyper-V TLB flush  (Vitaly Kuznetsov)
Add support for the Hyper-V TLB flush feature which recently got added to KVM. Just like regular Hyper-V, we announce HV_EX_PROCESSOR_MASKS_RECOMMENDED regardless of how many vCPUs we have. Windows is 'smart' and uses the less expensive non-EX hypercall whenever possible (when it wants to flush the TLB for all vCPUs, or when the maximum vCPU index in the vCPU set that requires flushing is less than 64). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20180610184927.19309-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
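The guest-side choice described above boils down to whether the vCPU set fits in a single 64-bit processor mask; a standalone illustration (not Windows or KVM code):

    #include <stdbool.h>
    #include <stdio.h>

    /* The plain (non-EX) flush hypercall carries one 64-bit processor mask,
     * so it only works when every vCPU to flush has an index below 64, or
     * when the request targets all vCPUs anyway. */
    static const char *pick_flush_hypercall(bool flush_all, int max_vcpu_index)
    {
        if (flush_all || max_vcpu_index < 64) {
            return "non-EX flush (cheaper)";
        }
        return "EX flush (sparse/large vCPU sets)";
    }

    int main(void)
    {
        printf("%s\n", pick_flush_hypercall(false, 12));    /* non-EX */
        printf("%s\n", pick_flush_hypercall(false, 100));   /* EX */
        printf("%s\n", pick_flush_hypercall(true, 200));    /* non-EX */
        return 0;
    }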
2018-07-02  s390x/tcg: fix locking problem with tcg_s390_tod_updated  (David Hildenbrand)
tcg_s390_tod_updated() is always called with the iothread lock held (e.g. from S390TODClass->set(), e.g. via HELPER(sck) or on incoming migration). The helper we call takes the lock itself - bad. Let's change that by factoring out the update of the CKC timer. This now looks much nicer than having to call a helper from another function. While touching it, we also make sure that env->ckc is updated even if the new value is -1ULL; previously it would not have been modified in that case. Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180629170520.13671-1-david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/kvm: indicate alignment in legacy_s390_alloc()  (David Hildenbrand)
Let's do this for completeness reasons, although we don't support e.g. PCDIMM/NVDIMM, which would use the alignment for placing the memory region in guest physical memory. But maybe someday we will want to support something like this - then we won't forget about it if we allow multiple allocations in legacy_s390_alloc(). Use the same alignment as we would set in qemu_anon_ram_alloc(). Our fixed address satisfies this alignment (1MB). This implicitly sets the alignment of the underlying memory region. Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180628113817.30814-3-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/kvm: legacy_s390_alloc() only supports one allocation  (David Hildenbrand)
We always allocate at a fixed address, so a second allocation can of course never work; we would simply overwrite mappings. This can happen e.g. in s390_memory_init() when trying to allocate more than 8TB. Let's just bail out, as there is no need to support it (legacy handling for z/VM). Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180628113817.30814-2-david@redhat.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
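A standalone sketch of the guard this adds in spirit (the function name and the malloc() stand-in are illustrative, not the real legacy_s390_alloc()):

    #include <stdio.h>
    #include <stdlib.h>

    /* Only one allocation can ever work when everything goes to one fixed
     * address; a second request would just overwrite the first mapping,
     * so refuse it outright. */
    static void *fixed_area_alloc(size_t size)
    {
        static int allocated;

        if (allocated) {
            fprintf(stderr, "a second allocation is not supported, aborting\n");
            exit(EXIT_FAILURE);
        }
        allocated = 1;
        return malloc(size);          /* stands in for the fixed-address mapping */
    }

    int main(void)
    {
        printf("first allocation: %p\n", fixed_area_alloc(4096));
        fixed_area_alloc(4096);       /* bails out instead of corrupting the first */
        return 0;
    }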
2018-07-02  s390x/tcg: fix CPU hotplug with single-threaded TCG  (David Hildenbrand)
run_on_cpu() doesn't seem to work reliably until the CPU has been fully created, if the single-threaded TCG main loop is already running. Therefore, hotplugging a CPU under single-threaded TCG does not currently work. We should use a direct call instead of going via run_on_cpu(). So let's use run_on_cpu() for KVM only - KVM requires it due to the initial CPU reset ioctl. As a nice side effect, we get rid of the ifdef. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-10-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: rearm the CKC timer during migration  (David Hildenbrand)
If the CPU data is migrated after the TOD clock, the CKC timer of a CPU is not rearmed. Let's rearm it when loading the CPU state. Introduce tcg-stub.c just like kvm-stub.c for tcg specific stubs. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-9-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: implement SET CLOCK  (David Hildenbrand)
This allows a guest to change its TOD. We already take care of updating all CKC timers from within S390TODClass. Use MO_ALIGN to load the operand manually - this will properly trigger a SPECIFICATION exception. Acked-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-8-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: SET CLOCK COMPARATOR can clear CKC interrupts  (David Hildenbrand)
Let's stop the timer and delete any pending CKC IRQ before doing anything else. While at it, add a comment why the check for ckc == -1ULL is needed. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-7-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: properly implement the TOD  (David Hildenbrand)
Right now, each CPU has its own TOD. In particular, the TOD will differ based on the creation time of a CPU - e.g. when hotplugging a CPU the times will differ quite a lot, resulting in stall warnings in the guest. Let's use a single TOD by implementing our new TOD device. Prepare it for TOD-clock epoch extension. Most importantly, whenever we set the TOD, we have to update the CKC timer. Introduce "tcg_s390x.h" just like "kvm_s390x.h" for tcg-specific function declarations that should not go into cpu.h. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-6-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: drop tod_basetime  (David Hildenbrand)
Never set to anything but 0. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-5-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tod: factor out TOD into separate device  (David Hildenbrand)
Let's treat this like a separate device. TCG will have to store the actual state/time later on. Include cpu-qom.h in kvm_s390x.h (due to S390CPU) to compile tod-kvm.c. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-4-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/kvm: pass values instead of pointers to kvm_s390_set_clock_*()  (David Hildenbrand)
We are going to factor out the TOD into a separate device and use const pointers for device class functions where possible. Right now we are passing ordinary pointers that should never be touched when setting the TOD. Let's just pass the values directly. Note that s390_set_clock() will be removed in a follow-on patch and therefore its calling convention is not changed. Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-3-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  s390x/tcg: avoid overflows in time2tod/tod2time  (David Hildenbrand)
Big values for the TOD/ns clock can result in overflows that can be avoided. Not all overflows can be handled, however, as the conversion either multiplies by 4.096 or divides by 4.096. Apply the trick used in the Linux kernel in arch/s390/include/asm/timex.h for tod_to_ns(), and use the same trick also for the conversion in the other direction. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180627134410.4901-2-david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
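A standalone sketch of the tod_to_ns() direction of that trick: one TOD unit is 1000/4096 ns, so ns = tod * 125 / 512, and splitting the value keeps the intermediate product from overflowing 64 bits.

    #include <inttypes.h>
    #include <stdio.h>

    static uint64_t tod_to_ns_naive(uint64_t tod)
    {
        return tod * 125 / 512;       /* the multiplication wraps for large tod */
    }

    static uint64_t tod_to_ns_split(uint64_t tod)
    {
        /* tod = 512*high + low, so tod*125/512 = high*125 + low*125/512 exactly. */
        return ((tod >> 9) * 125) + (((tod & 0x1ff) * 125) >> 9);
    }

    int main(void)
    {
        uint64_t small = UINT64_C(1) << 40, big = UINT64_C(1) << 60;

        printf("small: naive=%" PRIu64 " split=%" PRIu64 "\n",
               tod_to_ns_naive(small), tod_to_ns_split(small));   /* identical */
        printf("big:   naive=%" PRIu64 " split=%" PRIu64 "\n",
               tod_to_ns_naive(big), tod_to_ns_split(big));       /* naive has wrapped */
        return 0;
    }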
2018-07-02  s390x/cpumodel: default enable bpb and ppa15 for z196 and later  (Christian Borntraeger)
Most systems and host kernels provide the necessary building blocks for bpb and ppa15. We can reverse the logic and enable those features by default, while still allowing them to be disabled via the CPU model. So let us add bpb and ppa15 to the default CPU models of z196 and later (e.g. -cpu z13) for the qemu 3.0 machine. Older machine types (e.g. s390-ccw-virtio-2.12) will retain the old value and not provide those bits in the default model. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <20180626123830.18282-1-borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02  loader: Check access size when calling rom_ptr() to avoid crashes  (Thomas Huth)
The rom_ptr() function allows direct access to the ROM blobs that we load during startup. However, there are currently no checks for the size of the accesses, so it's currently possible to crash QEMU, for example with:

$ echo "Insane in the mainframe" > /tmp/test.txt
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -append xyz
Segmentation fault (core dumped)
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -initrd /tmp/test.txt
Segmentation fault (core dumped)
$ echo -n HdrS > /tmp/hdr.txt
$ sparc64-softmmu/qemu-system-sparc64 -kernel /tmp/hdr.txt -initrd /tmp/hdr.txt
Segmentation fault (core dumped)

We need a way to check the size of the ROM area that we want to access, so let's add a size parameter to the rom_ptr() function to avoid these problems. Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <1530005740-25254-1-git-send-email-thuth@redhat.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
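A standalone illustration of the shape of the fix (toy structure and function name, not QEMU's actual ROM blob list or the real rom_ptr() signature): the caller states how many bytes it will read, and the lookup refuses requests that would run past the blob.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct rom_blob {
        uint8_t *data;
        size_t len;
    };

    /* Return a pointer only if [offset, offset + size) lies inside the blob. */
    static void *rom_ptr_checked(struct rom_blob *rom, size_t offset, size_t size)
    {
        if (offset > rom->len || size > rom->len - offset) {
            return NULL;              /* blob too small: caller must handle it */
        }
        return rom->data + offset;
    }

    int main(void)
    {
        uint8_t buf[32] = "Insane in the mainframe";
        struct rom_blob rom = { buf, sizeof(buf) };

        printf("header ptr: %p\n", rom_ptr_checked(&rom, 0, 16));    /* valid */
        printf("bad access: %p\n", rom_ptr_checked(&rom, 24, 512));  /* NULL, no crash */
        return 0;
    }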
2018-06-30  xtensa: Avoid calling get_page_addr_code() from helper function  (Peter Maydell)
The xtensa frontend calls get_page_addr_code() from its itlb_hit_test helper function. This function is really part of the TCG core's internals, and calling it from a target helper makes it awkward to make changes to that core code. It also means that we don't pass the correct retaddr to tlb_fill(), so we won't correctly handle the case where an exception is generated. The helper is used for the instructions IHI, IHU and IPFL. Change it to call cpu_ldb_code_ra() instead. Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-30  target/xtensa: Convert to TranslatorOps  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30  target/xtensa: Change gen_intermediate_code dc to pointer  (Richard Henderson)
This will reduce the size of the patch in the next patch, where the context will have to be a pointer. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30  target/xtensa: Convert to DisasContextBase  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30  target/xtensa: Replace DISAS_UPDATE with DISAS_NORETURN  (Richard Henderson)
DISAS_UPDATE is only used after noreturn helpers, which makes it indistinguishable from DISAS_NORETURN. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30  target/xtensa: check zero overhead loop alignment  (Max Filippov)
The ISA book documents that the first instruction of a zero overhead loop must fit completely into a naturally aligned region the size of an instruction fetch unit. Check that condition and log a message if it's violated. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
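The condition can be written as "the first and last byte of the instruction fall into the same naturally aligned fetch-unit-sized block"; a standalone sketch (fetch unit size and addresses are made up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool loop_start_ok(uint32_t pc, unsigned insn_len, unsigned fetch_unit)
    {
        uint32_t mask = ~(uint32_t)(fetch_unit - 1);   /* fetch_unit is a power of two */

        return (pc & mask) == ((pc + insn_len - 1) & mask);
    }

    int main(void)
    {
        /* 8-byte fetch unit, 3-byte first instruction of the loop */
        printf("%d\n", loop_start_ok(0x1000, 3, 8));   /* 1: fits in 0x1000..0x1007 */
        printf("%d\n", loop_start_ok(0x1006, 3, 8));   /* 0: crosses into 0x1008 */
        return 0;
    }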
2018-06-29  target/arm: Add ID_ISAR6  (Richard Henderson)
This register was added to aa32 state by ARMv8.2. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180629001538.11415-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29  target/arm: Prune a15 features from max  (Richard Henderson)
There is no need to re-set these 3 features already implied by the call to aarch64_a15_initfn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180629001538.11415-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29  target/arm: Prune a57 features from max  (Richard Henderson)
There is no need to re-set these 9 features already implied by the call to aarch64_a57_initfn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180629001538.11415-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29  target/arm: Fix SVE system register access checks  (Richard Henderson)
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check produced by the flag already includes fp_access_check. If we also check ARM_CP_FPU the double fp_access_check asserts. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20180629001538.11415-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>