|
In user mode Linux, Qemu currently refuses to load ELF files that do not
contain section headers (ehdr->e_shentsize == 0). Since section headers are not
required in order to load an ELF file, simply removing the e_shentsize check in
elf_check_ehdr() allows ELF binaries with no section headers to be run properly
in user mode.
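A minimal sketch of the relaxed check, assuming it follows the shape of
elf_check_ehdr() in linux-user/elfload.c (the exact code may differ):

    static bool elf_check_ehdr(struct elfhdr *ehdr)
    {
        return (elf_check_arch(ehdr->e_machine)
                && ehdr->e_ehsize == sizeof(struct elfhdr)
                && ehdr->e_phentsize == sizeof(struct elf_phdr)
                /* e_shentsize is deliberately no longer checked: section
                 * headers are optional for loading, so a value of 0 is
                 * legal and the binary should still run. */
                && (ehdr->e_type == ET_EXEC || ehdr->e_type == ET_DYN));
    }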
Signed-off-by: Craig Heffner <cheffner@tacnetsol.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
This fixes "Cannot open audit interface - aborting." when the
EAFNOSUPPORT errno differs between the target and host
architectures (e.g. mips target and x86_64 host).
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
If the guest's "long" type is smaller than the host's, then
our sched_getaffinity wrapper needs to round the buffer size
up to a multiple of the host sizeof(long). This means that when
we copy the data back from the host buffer to the guest's
buffer there might be more than we can fit. Rather than
overflowing the guest's buffer, handle this case by returning
EINVAL or ignoring the unused extra space, as appropriate.
Note that only guests using the syscall interface directly might
run into this bug -- the glibc wrappers around it will always
use a buffer whose size is a multiple of 8 regardless of guest
architecture.
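A minimal sketch of the approach, with illustrative names
(do_sched_getaffinity, copy_to_user and the raw sys_sched_getaffinity
wrapper are assumptions about QEMU's helpers, not its exact code); the raw
syscall returns the number of mask bytes it wrote:

    static abi_long do_sched_getaffinity(pid_t pid, abi_ulong guest_size,
                                         abi_ulong guest_addr)
    {
        /* Round up so the host syscall sees a multiple of sizeof(long). */
        unsigned int host_size =
            (guest_size + sizeof(unsigned long) - 1) & ~(sizeof(unsigned long) - 1);
        unsigned long *mask = alloca(host_size);
        abi_long ret = get_errno(sys_sched_getaffinity(pid, host_size, mask));

        if (!is_error(ret) && ret > guest_size) {
            /* The host wrote more bytes than the guest buffer can hold;
             * that is only acceptable if the excess bytes are all zero,
             * otherwise the guest buffer really is too small: EINVAL. */
            for (abi_ulong i = guest_size; i < (abi_ulong)ret; i++) {
                if (((char *)mask)[i] != 0) {
                    return -TARGET_EINVAL;
                }
            }
            ret = guest_size;               /* ignore the zero padding */
        }
        if (!is_error(ret) && copy_to_user(guest_addr, mask, ret)) {
            return -TARGET_EFAULT;          /* copy back only what fits */
        }
        return ret;
    }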
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
We were returning the incorrect uname string (with a hyphen, not
an underscore) for x86_64. Fix this by removing the x86_64 special
case, since the default "just use UNAME_MACHINE" behaviour suffices.
This leaves cpu_to_uname_machine() special cases for only those
architectures which need to vary the string based on runtime CPU
features.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
gcc 4.9 finds an unused operand:
linux-user/syscall.c: In function ‘host_to_target_stat64’:
linux-user/qemu.h:301:19: error: right-hand operand of comma expression
has no effect [-Werror=unused-value]
((hptr), (x)), 0)
Just removing the right-hand operand is no good; it causes an error later:
linux-user/main.c: In function ‘arm_kernel_cmpxchg64_helper’:
linux-user/qemu.h:330:15: error: void value not ignored as it ought to be
__ret = __put_user((x), __hptr); \
Thus, remove the setting of __ret from __get_user and __put_user, and
set the right-hand operand to (void)0 to make it clear that these macros
never return anything.
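For illustration, this is roughly the form the macro takes after the change
(simplified from __put_user_e in linux-user/qemu.h); a sketch rather than the
exact patch:

    /* The right-hand operand of the outer comma expression becomes (void)0:
     * a plain 0 both trips gcc's -Wunused-value and wrongly suggests the
     * macro yields a useful value. */
    #define __put_user_e(x, hptr, e)                                        \
        (__builtin_choose_expr(sizeof(*(hptr)) == 1, stb_p,                 \
         __builtin_choose_expr(sizeof(*(hptr)) == 2, stw_##e##_p,           \
         __builtin_choose_expr(sizeof(*(hptr)) == 4, stl_##e##_p,           \
         __builtin_choose_expr(sizeof(*(hptr)) == 8, stq_##e##_p, abort)))) \
             ((hptr), (x)), (void)0)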
This commit depends on the signal.c cleanup, to ensure bisectable
version history.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
|
|
Remove the last remaining check of the return value of __get_user.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
|
|
Remove checks of __get_user and the err variable
used to control flow with it.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
As __get_user and __put_user do not return errors, remove the
if checks from around them. This allows making the save/restore
functions void.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
|
|
Remove "if(__put_user" checks and their related error paths
for all architecture's setup_frame, setup_rt_frame and similar.
Remove the unlock_user_struct when the only way to end up there is
from failed lock_user_struct.
Remove err variable if there are no users for it in the function
anymore.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Remove "if(__get_user" checks and their related error paths
for all architecture's do_sigreturn. Remove the unlock_user_struct
when the only way to end up there is from failed lock_user_struct.
v3: remove unneccesary sigsegv label as suggested by Peter
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Access has already been checked by the preceding lock_user_struct
call.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This function is never called from anywhere and is obviously half-complete.
Remove it; anyone who wants to complete it can check the old version
out of git history.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Make most implementations of restore_sigcontext void and
remove the checks of its return value from the functions that call
restore_sigcontext.
The exceptions are the x86 version, which is too different from
the others to handle this way, and the ARM version, which keeps
the possibility of erroring out from a failed valid_user_regs.
v3: keep the ARM valid_user_regs check so it can be filled in in the near future.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Make all implementations of setup_sigcontext void and
remove the checks of its return value from the functions that call
setup_sigcontext.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Since copy_siginfo_to_user always returns 0, make it void
and remove any checks of its return value from calling functions.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Remove the remaining check of the __put_user return
value, and all the checks of the err variable, which is
no longer set anywhere.
Now we can only end up in give_sigsegv due to a failed
lock_user_struct - thus we remove the unlock_user_struct
to avoid unlocking a region that was never locked.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Remove all the simple cases of reading the return value
of __get_user and __put_user.
We set err = 0 in the sparc versions of do_sigreturn and
sparc64_set_context to avoid a compile error, but otherwise this patch is
just a general removal of the err |= __get_user(...) idiom.
v2: remove err variable from target_rt_restore_ucontext
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
We tell the guest its page size via AUX vectors. The guest process then uses
this page size as information on which boundaries it can mmap() things.
However, if the host has a bigger page size granularity than the guest, it
cannot fulfill these mmap() requests - which falls apart when MAP_FIXED is
passed to mmap().
So in that case, let the guest know that we're running on a bigger page size
granularity than the target would require.
This fixes running qemu-ppc (TARGET_PAGE_SIZE=4k) on a 64k page size ppc64 host
for me.
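The gist of the change, in the form the NEW_AUX_ENT code in
linux-user/elfload.c uses (shown as a sketch; the surrounding context may
differ):

    /* Advertise the larger of the guest's and the host's page size, so the
     * guest never mmap()s on a boundary the host cannot honour. */
    NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(MAX(TARGET_PAGE_SIZE, getpagesize())));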
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
The size and register information are encoded into the reserve_info field
of CPU state in the store conditional translation code. Specifically, the
size is shifted left by 5 bits (see target-ppc/translate.c gen_conditional_store).
The user-mode store conditional code erroneously extracts the size by ANDing
with a 4 bit mask; this breaks if size >= 16.
Eliminate the mask to make the extraction of size mirror its encoding.
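Illustratively, with the encoding described above (variable names are
placeholders, not the exact code):

    uint32_t reg  = env->reserve_info & 0x1f;  /* low five bits: register number */
    uint32_t size = env->reserve_info >> 5;    /* was "(env->reserve_info >> 5) & 0xf",
                                                  which truncated sizes of 16 and above */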
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The existing code does a check to ensure that a .bss region is properly
mmap'd. When additional mmap is required, the (guest) pages are also
validated. However, this code has a bug: when the host page size is larger
than the target page size, it is possible for the .bss pages to already be
(host) mapped while the guest .bss pages are not yet marked valid.
The check to mmap additional space is separated from the flagging of the
target (guest) pages, thus ensuring that both aspects are done properly.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This allows running PPC64 little-endian in user mode if the target is
configured that way. In PPC64 LE user mode we set MSR.LE during initialization.
Signed-off-by: Doug Kwan <dougkwan@google.com>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Look at the ELF header to determine the ABI version on PPC64. This is required
for executing the first instruction correctly. Also print the correct machine
name in the uname() system call.
Signed-off-by: Doug Kwan <dougkwan@google.com>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Implement the two-register SHA instruction group from the optional
Crypto Extensions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-10-git-send-email-peter.maydell@linaro.org
|
|
Implement the AES instructions from the optional Crypto Extensions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-8-git-send-email-peter.maydell@linaro.org
|
|
Implement the optional A64 CRC instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-6-git-send-email-peter.maydell@linaro.org
|
|
Now that we have a separate ARM_FEATURE_V8_PMULL bit, use it for
the A64 PMULL, not the AES feature bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add support for the VMULL.P64 polynomial 64x64 to 128 bit multiplication
instruction in the A32/T32 instruction sets; this is part of the v8
Crypto Extensions.
To do this we have to move the neon_pmull_64_{lo,hi} helpers from
helper-a64.c into neon_helper.c so they can be used by the AArch32
translator.
Inspired-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-4-git-send-email-peter.maydell@linaro.org
|
|
This adds support for the SHA1 and SHA256 instructions that are available
on some v8 implementations of AArch32.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-2-git-send-email-peter.maydell@linaro.org
[PMM:
* rebase
* fix bad indent
* add a missing UNDEF check for Q!=1 in the 3-reg SHA1/SHA256 case
* use g_assert_not_reached()
* don't re-extract bit 6 for the 2-reg-misc encodings
* set the ELF HWCAP2 bits for the new features
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
* remotes/bonzini/softmmu-smap: (33 commits)
target-i386: cleanup x86_cpu_get_phys_page_debug
target-i386: fix protection bits in the TLB for SMEP
target-i386: support long addresses for 4MB pages (PSE-36)
target-i386: raise page fault for reserved bits in large pages
target-i386: unify reserved bits and NX bit check
target-i386: simplify pte/vaddr calculation
target-i386: raise page fault for reserved physical address bits
target-i386: test reserved PS bit on PML4Es
target-i386: set correct error code for reserved bit access
target-i386: introduce support for 1 GB pages
target-i386: introduce do_check_protect label
target-i386: tweak handling of PG_NX_MASK
target-i386: commonize checks for PAE and non-PAE
target-i386: commonize checks for 4MB and 4KB pages
target-i386: commonize checks for 2MB and 4KB pages
target-i386: fix coding standards in x86_cpu_handle_mmu_fault
target-i386: simplify SMAP handling in MMU_KSMAP_IDX
target-i386: fix kernel accesses with SMAP and CPL = 3
target-i386: move check_io helpers to seg_helper.c
target-i386: rename KSMAP to KNOSMAP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly; we also include the new header wherever it
will be needed, in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL. The flags are basically what the Intel VMX
documentation says is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
accordingly.
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The implementations of the getrusage and wait4 system calls have not
previously handled the case where an invalid struct rusage address is
passed.
This change makes sure the return values are set correctly in these cases.
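A minimal sketch of the idea for the getrusage case, using linux-user
conventions (the real patch may differ in detail; host_to_target_rusage is
assumed to report a bad guest address as -TARGET_EFAULT):

    case TARGET_NR_getrusage:
        {
            struct rusage rusage;
            ret = get_errno(getrusage(arg1, &rusage));
            if (!is_error(ret)) {
                /* Propagate a failed copy-out instead of ignoring it. */
                ret = host_to_target_rusage(arg2, &rusage);
            }
        }
        break;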
Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
The ARM kernel has chosen to spill into the HWCAP2 ELF feature bit flags
early, even though it hasn't yet exhausted all 32 bits of the HWCAP word.
Add support for setting this in the same way we do for HWCAP.
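The generic part is a hook of this shape, mirroring how ELF_HWCAP is already
passed via the aux vector in linux-user/elfload.c (sketch, exact context may
differ):

    #ifdef ELF_HWCAP2
        NEW_AUX_ENT(AT_HWCAP2, (abi_ulong) ELF_HWCAP2);
    #endif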
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
The ARM target-specific code in elfload.c was incorrectly allowing
the 64-bit ARM target to use most of the existing 32-bit definitions:
most noticeably this meant that our HWCAP bits passed to the guest
were wrong, and register handling when dumping core was totally
broken. Fix this by properly separating the 64 and 32 bit code,
since they have more differences than similarities.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
The kernel has added support for a number of new ARM HWCAP bits;
add them to QEMU, including support for setting them where we have
a corresponding CPU feature bit.
We were also incorrectly setting the VFPv3D16 HWCAP -- this means
"only 16 D registers", not "supports 16-bit floating point format";
since QEMU always has 32 D registers for VFPv3, we can just remove
the line that incorrectly set this bit.
The kernel does not set the HWCAP_FPA even if it is providing FPA
emulation via nwfpe, so don't set this bit in QEMU either.
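For illustration, the pattern for setting a HWCAP bit only when QEMU models
the matching CPU feature looks roughly like this (the macro mirrors the
existing GET_FEATURE helper in elfload.c; the feature/HWCAP pairs shown are
examples of the newly added bits):

    #define GET_FEATURE(feat, hwcap)                                       \
        do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)

    GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
    GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);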
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
The ELF HWCAP bits for ARM features THUMBEE, NEON, VFPv3 and VFPv3D16 are
all off by one compared to the kernel definitions. Fix this discrepancy
and add in the missing CRUNCH bit which was the cause of the off-by-one
error. (We don't emulate any of the CPUs which have that weird hardware,
so it's otherwise uninteresting to us.)
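The corrected values, matching the kernel's
arch/arm/include/uapi/asm/hwcap.h (shown as an excerpt of the QEMU-side
enum):

    enum {
        ARM_HWCAP_ARM_CRUNCH   = 1 << 10,  /* the previously missing bit */
        ARM_HWCAP_ARM_THUMBEE  = 1 << 11,
        ARM_HWCAP_ARM_NEON     = 1 << 12,
        ARM_HWCAP_ARM_VFPv3    = 1 << 13,
        ARM_HWCAP_ARM_VFPv3D16 = 1 << 14,
    };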
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
--enable-uname-release was a rather heavyweight hammer, as it allows
providing values less than UNAME_MINIMUM_RELEASE. Also, it affects
all built linux-user targets, which in most cases is not what the user
wants.
Now that we have UNAME_MINIMUM_RELEASE for all linux-user platforms,
we can drop --enable-uname-release and the related CONFIG_UNAME_RELEASE
define.
Users can still override the value with QEMU_UNAME=2.6.32 or the -r
command line option. If distributors need to update the minimum version
for a specific target, it can be done by updating UNAME_MINIMUM_RELEASE.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
Make syscall.c slightly smaller by moving uname-related
functions to uname.c.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
To move more uname-related functions out of syscall.c,
rename cpu-uname.{c,h} to uname.{c,h}.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
Set the fault address correctly in the signal information passed
to a signal handler for AArch64 guests.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
target_sigevent struct
Use the public sigset_t instead of the glibc specific internal
__sigset_t in _syscall.
Calculate the sigevent pad size in a similar way to how the kernel
does it, instead of using the glibc-internal _pad field.
This is needed for building with musl libc.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Recently merged kernel ports (such as OpenRISC and Meta) have an llseek
system call instead of _llseek. This is handled for the host
architecture by defining __NR__llseek as __NR_llseek, but not for the
target architecture.
Handle it in the same way for these architectures, defining
TARGET_NR__llseek as TARGET_NR_llseek.
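The fix amounts to a define of this shape (placement is illustrative):

    #if !defined(TARGET_NR__llseek) && defined(TARGET_NR_llseek)
    #define TARGET_NR__llseek TARGET_NR_llseek
    #endif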
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Jia Liu <proljc@gmail.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
This makes adding more message types cleaner.
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
Assert that the amount of stack space used for auxvec, envp & argv
exactly matches the amount allocated. This catches the case where
DLINFO_ITEMS isn't updated when another NEW_AUX_ENT is added.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
QEMU already supports /proc/self/{maps,stat,auxv} so addition of
/proc/self/exe is rather trivial.
Fixes https://bugs.launchpad.net/qemu/+bug/1299190
Signed-off-by: Maxim Ostapenko <m.ostapenko@partner.samsung.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
For AArch32 exceptions, the only information provided about
the cause of an exception is the individual exception type (data
abort, undef, etc), which we store in cs->exception_index. For
AArch64, the CPU provides much more detail about the cause of
the exception, which can be found in the syndrome register.
Create a set of fields in CPUARMState which must be filled in
whenever an exception is raised, so that exception entry can
correctly fill in the syndrome register for the guest.
This includes the information which in AArch32 appears in
the DFAR and IFAR (fault address registers) and the DFSR
and IFSR (fault status registers) for data aborts and
prefetch aborts, since if we end up taking the MMU fault
to AArch64 rather than AArch32 this will need to end up
in different system registers.
This patch does a refactoring which moves the setting of the
AArch32 DFAR/DFSR/IFAR/IFSR from the point where the exception
is raised to the point where it is taken. (This is no change
for cores with an MMU, retains the existing clearly incorrect
behaviour for ARM946 of trashing the MP access permissions
registers which share the c5_data and c5_insn state fields,
and has no effect for v7M because we don't implement its
MPU fault status or address registers.)
As a side effect of the cleanup we fix a bug in the AArch64
linux-user mode code where we were passing a 64 bit fault
address through the 32 bit c6_data/c6_insn fields: it now
goes via the always-64-bit exception.vaddress.
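A sketch of the new per-exception state (field names as described above;
the exact layout in target-arm/cpu.h may differ slightly):

    struct {
        uint32_t syndrome;   /* AArch64-format syndrome register value */
        uint32_t fsr;        /* AArch32-format fault status register info */
        uint64_t vaddress;   /* virtual address associated with the exception */
    } exception;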
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
|
|
The NONBLOCK and CLOEXEC flags can have different values on the host and the
guest, so set the correct host values before calling accept4().
This fixes several issues with the accept4 system call in QEMU user mode.
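A hedged sketch of the translation, assuming QEMU's existing
target_to_host_bitmask() helper and fcntl_flags_tbl are reused
(SOCK_NONBLOCK shares its value with O_NONBLOCK, so the fcntl table
applies); the actual patch may differ:

    /* Convert guest SOCK_NONBLOCK/SOCK_CLOEXEC bits to host values. */
    int host_flags = target_to_host_bitmask(target_flags, fcntl_flags_tbl);
    ret = get_errno(accept4(fd, addr, &addrlen, host_flags));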
Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
|
|
Signed-off-by: Prasad Joshi <prasadjoshi.linux@gmail.com>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|