Age | Commit message | Author |
|
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Introduce run-time instrumentation support when running under KVM for the
virtio-ccw 2.7 machine and make sure older machines cannot enable it.
The new ri_allowed field in the s390MachineClass serves as an indicator
of whether the feature can be used by the machine and should therefore be
activated if available.
riccb_needed() is used to check whether the riccb is needed during live
migration.
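As a rough illustration, the usual QEMU pattern for such a conditional migration subsection is a VMStateDescription with a .needed callback; the field name, size, and helper below are assumptions for the sketch, not the actual patch:

static bool riccb_needed(void *opaque)
{
    /* only send the subsection when run-time instrumentation is in use */
    return kvm_s390_get_ri();   /* helper name assumed */
}

const VMStateDescription vmstate_riccb = {
    .name = "cpu/riccb",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = riccb_needed,
    .fields = (VMStateField[]) {
        VMSTATE_UINT8_ARRAY(env.riccb, S390CPU, 64),   /* field/size assumed */
        VMSTATE_END_OF_LIST()
    }
};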
Signed-off-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Also add some of the option cascading we were missing.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Merge remote-tracking branch 'remotes/kraxel/tags/pull-ui-20160513-1' into staging
gtk/sdl build tweaks
fix gtk 3.20 warnings
gtk clipboard support
spice-gl monitor config support
fix coverity warnings
# gpg: Signature made Fri 13 May 2016 13:30:39 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/kraxel/tags/pull-ui-20160513-1:
gtk: don't leak the GtkBorder with VTE 0.36
gtk: update grab code for gtk 3.20
spice: fix coverity complains
egl-helpers: fix possible resource leak
Changed malloc to g_malloc, free to g_free in ui/shader.c
spice/gl: add & use qemu_spice_gl_monitor_config
ui/gtk: copy to clipboard support
ui: gtk: Fix some deprecation warnings
ui: gtk: Fix a runtime warning on vte >= 0.37
configure: support vte-2.91
configure: report SDL version
configure: report GTK version
configure: add echo_version helper
configure: error on unknown --with-sdlabi value
configure: build SDL if only SDL2 available
ui: sdl2: Release grab before opening console window
ui: gtk: fix crash when terminal inner-border is NULL
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This field was used to tell cpu_interrupt() to unlink a chain of TBs being
executed back when it worked that way. cpu_interrupt() no longer does this,
so the field is no longer needed.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1462273462-14036-1-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().
Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.
It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly valid linking
of TBs whenever an arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().
If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().
In cpu_exec(), this flag is used to track whether tb_flush() has been called
and has made 'next_tb' (a reference to the last executed TB) invalid
for linking it to directly call the next TB. tb_flush() can be called
during the CPU execution loop from tb_gen_code(), during TB execution, or
by another thread while 'tb_lock' is released. Catch translation buffer
flushes reliably by resetting this flag once before the first TB lookup
and each time we find it set before trying to add a direct jump. Don't
touch it in tb_find_physical().
Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag so that it can reset it together with
its own 'next_tb' without affecting any other vCPU's execution thread. So
make this flag per-vCPU and move it to CPUState.
In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check it afterwards, and combine the saved value
back into the flag.
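In outline, the cpu_exec_nocache() handling reads roughly like this (identifiers are illustrative, not the literal patch):

bool old_tb_flushed = cpu->tb_flushed;

cpu->tb_flushed = false;                 /* arm the flush detector */
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                 max_cycles | CF_NOCACHE);
/* if tb_gen_code() triggered tb_flush(), orig_tb is gone and must not be
 * handed out for later invalidation */
tb->orig_tb = cpu->tb_flushed ? NULL : orig_tb;
cpu->tb_flushed |= old_tb_flushed;       /* fold the saved value back in */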
This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
These fields do not contain pure pointers to a TranslationBlock
structure, so uintptr_t is the most appropriate type for them.
Also add some asserts to ensure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.
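Illustratively, the assertions guard a tagged-pointer scheme of this shape (the exact encoding of the low bits is not spelled out here):

/* TBs are aligned, so the two low bits of the address are free for a tag */
assert(((uintptr_t)tb & 3) == 0);
tb_next->jmp_list_first = (uintptr_t)tb | tag;   /* tag occupies bits 0..1 */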
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the data fields that follow.
Rename some fields in the TranslationBlock and TCGContext structures to
better reflect their purpose (dropping the excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first
Avoid using a magic constant as the invalid offset used to indicate that
no n-th jump has been generated.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1461341333-19646-12-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-8-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Ensure direct jump patching in s390 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-7-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Ensure direct jump patching in i386 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching.
tcg_out_nopn() implementation:
Suggested-by: Richard Henderson <rth@twiddle.net>.
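A sketch of the resulting i386 patching helper, condensed from the description above (not the verbatim patch):

static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
{
    /* jmp_addr points at the naturally aligned 32-bit displacement of the
     * jump instruction, so a single atomic store patches it */
    atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
    /* no explicit icache flush is needed on x86 */
}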
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-6-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Ensure direct jump patching in TCI is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() to load/store the address.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-4-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.
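A self-contained sketch of such helpers (macro names are assumptions for illustration, not necessarily the ones added by this commit; typeof is the GNU C extension used throughout QEMU):

#include <assert.h>
#include <stdint.h>

#define ALIGN_DOWN(n, m)      ((n) / (m) * (m))
#define ALIGN_UP(n, m)        ALIGN_DOWN((n) + (m) - 1, (m))
#define PTR_ALIGN_DOWN(p, n)  ((typeof(p))ALIGN_DOWN((uintptr_t)(p), (n)))
#define PTR_ALIGN_UP(p, n)    ((typeof(p))ALIGN_UP((uintptr_t)(p), (n)))
#define PTR_IS_ALIGNED(p, n)  (((uintptr_t)(p) & ((n) - 1)) == 0)

int main(void)
{
    char buf[64];
    char *p = PTR_ALIGN_UP(buf + 1, 16);      /* round up to 16 bytes */
    assert(PTR_IS_ALIGNED(p, 16));
    assert(PTR_ALIGN_DOWN(p + 3, 16) == p);   /* round back down */
    return 0;
}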
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-3-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Message-Id: <1461341333-19646-2-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle on uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.
Compile-tested for all targets.
Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1460049562-23517-1-git-send-email-cota@braap.org>
|
|
Block layer patches
# gpg: Signature made Thu 12 May 2016 14:37:05 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
* remotes/kevin/tags/for-upstream: (69 commits)
qemu-iotests: iotests: fail hard if not run via "check"
block: enable testing of LUKS driver with block I/O tests
block: add support for encryption secrets in block I/O tests
block: add support for --image-opts in block I/O tests
qemu-io: Add 'write -z -u' to test MAY_UNMAP flag
qemu-io: Add 'write -f' to test FUA flag
qemu-io: Allow unaligned access by default
qemu-io: Use bool for command line flags
qemu-io: Make 'open' subcommand more like command line
qemu-io: Add missing option documentation
qmp: add monitor command to add/remove a child
quorum: implement bdrv_add_child() and bdrv_del_child()
Add new block driver interface to add/delete a BDS's child
qemu-img: check block status of backing file when converting.
iotests: fix the redirection order in 083
block: Inactivate all children
block: Drop superfluous invalidating bs->file from drivers
block: Invalidate all children
nbd: Simplify client FUA handling
block: Honor BDRV_REQ_FUA during write_zeroes
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20160512' into staging
target-arm queue:
* blizzard, omap_lcdc: code cleanup to remove DEPTH != 32 dead code
* QOMify various ARM devices
* bcm2835_property: use cached values when querying framebuffer
* hw/arm/nseries: don't allocate large sized array on the stack
* fix LPAE descriptor address masking (only visible for EL2)
* fix stage 2 exec permission handling for AArch32
* first part of supporting syndrome info for data aborts to EL2
* virt: NUMA support
* work towards i.MX6 support
* avoid unnecessary TLB flush on TCR_EL2, TCR_EL3 writes
# gpg: Signature made Thu 12 May 2016 14:29:14 BST using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
* remotes/pmaydell/tags/pull-target-arm-20160512: (43 commits)
hw/arm: QOM'ify versatilepb.c
hw/arm: QOM'ify strongarm.c
hw/arm: QOM'ify stellaris.c
hw/arm: QOM'ify spitz.c
hw/arm: QOM'ify pxa2xx_pic.c
hw/arm: QOM'ify pxa2xx.c
hw/arm: QOM'ify integratorcp.c
hw/arm: QOM'ify highbank.c
hw/arm: QOM'ify armv7m.c
target-arm: Avoid unnecessary TLB flush on TCR_EL2, TCR_EL3 writes
hw/display/blizzard: Remove blizzard_template.h
hw/display/blizzard: Expand out macros
i.MX: Add sabrelite i.MX6 emulation.
i.MX: Add i.MX6 SOC implementation.
i.MX: Add the Freescale SPI Controller
FIFO: Add a FIFO32 implementation
i.MX: Add i.MX6 System Reset Controller device.
ARM: Factor out ARM on/off PSCI control functions
ACPI: Virt: Generate SRAT table
ACPI: move acpi_build_srat_memory to common place
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
|
|
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Message-id: 1462865799-19402-3-git-send-email-xiecl.fnst@cn.fujitsu.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
In some cases, we want to take a quorum child offline, and take
another child online.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 1462865799-19402-2-git-send-email-xiecl.fnst@cn.fujitsu.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
The block layer has a couple of cases where it can lose
Force Unit Access semantics when writing a large block of
zeroes, such that the request returns before the zeroes
have been guaranteed to land on underlying media.
SCSI does not support FUA during WRITESAME(10/16); FUA is only
supported if it falls back to WRITE(10/16). But where the
underlying device is new enough to not need a fallback, it
means that any upper layer request with FUA semantics was
silently ignoring BDRV_REQ_FUA.
Conversely, NBD has situations where it can support FUA but not
ZERO_WRITE; when that happens, the generic block layer fallback
to bdrv_driver_pwritev() (or the older bdrv_co_writev() in qemu
2.6) was losing the FUA flag.
The problem of losing flags unrelated to ZERO_WRITE has been
latent in bdrv_co_do_write_zeroes() since commit aa7bfbff, but
back then, it did not matter because there was no FUA flag. It
became observable when commit 93f5e6d8 paved the way for flags
that can impact correctness, when we should have been using
bdrv_co_writev_flags() with modified flags. Compare to commit
9eeb6dd, which got flag manipulation right in
bdrv_co_do_zero_pwritev().
Symptoms: I tested with qemu-io with default writethrough cache
(which is supposed to use FUA semantics on every write), and
targeted an NBD client connected to a server that intentionally
did not advertise NBD_FLAG_SEND_FUA. When doing 'write 0 512',
the NBD client sent two operations (NBD_CMD_WRITE then
NBD_CMD_FLUSH) to get the fallback FUA semantics; but when doing
'write -z 0 512', the NBD client sent only NBD_CMD_WRITE.
The fix is to do a cleanup bdrv_co_flush() at the end of the
operation if any step in the middle relied on a BDS that does
not natively support FUA for that step (note that we don't
need to flush after every operation, if the operation is broken
into chunks based on bounce-buffer sizing). Each BDS gains a
new flag .supported_zero_flags, which parallels the use of
.supported_write_flags but only when accessing a zero write
operation (the flags MUST be different, because of SCSI having
different semantics based on WRITE vs. WRITESAME; and also
because BDRV_REQ_MAY_UNMAP only makes sense on zero writes).
Also fix some documentation to describe -ENOTSUP semantics,
particularly since iscsi depends on those semantics.
Down the road, we may want to add a driver where its
.bdrv_co_pwritev() honors all three of BDRV_REQ_FUA,
BDRV_REQ_ZERO_WRITE, and BDRV_REQ_MAY_UNMAP, and advertise
this via bs->supported_write_flags for blocks opened by that
driver; such a driver should NOT supply .bdrv_co_write_zeroes
nor .supported_zero_flags. But none of the drivers touched
in this patch want to do that (the act of writing zeroes is
different enough from normal writes to deserve a second
callback).
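In rough outline, the cleanup flush described above looks like this (condensed; 'need_flush' is an illustrative local that a step sets when its BDS lacked native FUA support):

if (ret == 0 && need_flush && (flags & BDRV_REQ_FUA)) {
    /* one flush at the end of the request restores the FUA guarantee */
    ret = bdrv_co_flush(bs);
}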
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Pre-patch, .supported_write_flags lives at the driver level, which
means we are blindly declaring that all block devices using a
given driver will either equally support FUA, or that we need a
fallback at the block layer. But there are drivers where FUA
support is a per-block decision: the NBD block driver is dependent
on the remote server advertising NBD_FLAG_SEND_FUA (and has
fallback code to duplicate the flush that the block layer would do
if NBD had not set .supported_write_flags); and the iscsi block
driver is dependent on the mode sense bits advertised by the
underlying device (and is currently silently ignoring FUA requests
if the underlying device does not support FUA).
The fix is to make the supported flags a per-BDS option, set during
.bdrv_open(). This patch moves the variable and fixes NBD and iscsi
to set it only conditionally; later patches will then further
simplify the NBD driver to quit duplicating work done at the block
layer, as well as tackle the fact that SCSI does not support FUA
semantics on WRITESAME(10/16) but only on WRITE(10/16).
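An illustrative sketch of the per-BDS decision at open time (the NBD flag name comes from the description; exactly where it is set is an assumption):

/* in the NBD driver's .bdrv_open(), once the server's flags are known */
if (client_flags & NBD_FLAG_SEND_FUA) {
    bs->supported_write_flags = BDRV_REQ_FUA;
    /* otherwise leave it 0 and let the block layer emulate FUA via flush */
}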
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Now that there are no remaining clients, we can drop the
sector-based blk_read(), blk_write(), blk_aio_readv(), and
blk_aio_writev(). Sadly, there are still remaining
sector-based interfaces, such as blk_*discard(), or
blk_write_compressed(); those will have to wait for another
day.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Sector-based blk_aio_readv() and blk_aio_writev() should die; switch
to byte-based blk_aio_preadv() and blk_aio_pwritev() instead.
The patch had to touch multiple files at once, because dma_blk_io()
takes pointers to the functions, and ide_issue_trim() piggybacks on
the same interface (while ignoring offset under the hood).
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
blk_aio_readv() and blk_aio_writev() are annoying in that they
can't access sub-sector granularity, and cannot pass flags.
Also, they require the caller to pass redundant information
about the size of the I/O (qiov->size in bytes must match
nb_sectors in sectors).
Add new blk_aio_preadv() and blk_aio_pwritev() functions to fix
the flaws. The next few patches will upgrade callers, then
finally delete the old interfaces.
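A sketch of the new byte-based prototypes implied by the description (offset in bytes, size taken from qiov, plus a flags argument; exact spelling assumed):

BlockAIOCB *blk_aio_preadv(BlockBackend *blk, int64_t offset,
                           QEMUIOVector *qiov, BdrvRequestFlags flags,
                           BlockCompletionFunc *cb, void *opaque);
BlockAIOCB *blk_aio_pwritev(BlockBackend *blk, int64_t offset,
                            QEMUIOVector *qiov, BdrvRequestFlags flags,
                            BlockCompletionFunc *cb, void *opaque);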
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Sector-based blk_write() should die; convert the one-off
variant blk_write_zeroes() to use an offset/count interface
instead. Likewise for blk_co_write_zeroes() and
blk_aio_write_zeroes().
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Sector-based blk_read() should die; convert the one-off
variant blk_read_unthrottled().
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We have several block drivers that understand BDRV_REQ_FUA,
and emulate it in the block layer for the rest by a full flush.
But without a way to actually request BDRV_REQ_FUA during a
pass-through blk_pwrite(), FUA-aware block drivers like NBD are
forced to repeat the emulation logic of a full flush regardless
of whether the backend they are writing to could do it more
efficiently.
This patch just wires up a flags argument; followup patches
will actually make use of it in the NBD driver and in qemu-io.
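A hedged usage example of the wired-up flag (prototype assumed from the description):

/* write 'bytes' bytes at 'offset' and require them to reach stable media
 * before completion, letting a FUA-aware driver avoid a separate flush */
ret = blk_pwrite(blk, offset, buf, bytes, BDRV_REQ_FUA);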
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
VMDK images have metadata to indicate the VMware virtual
hardware version the image was created/tested to run with.
Allow users to specify that version via a new 'hwversion'
option.
[ kwolf: Adjust qemu-iotests common.filter ]
Signed-off-by: Janne Karhunen <Janne.Karhunen@gmail.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
There are no block drivers left that implement the old .bdrv_read/write
interface, so it can be removed now. This also gets rid of the
corresponding emulation functions.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
|
|
Many parts of the block layer already work at byte granularity. The block
driver interface, however, was still missing a way to make use of this.
This patch introduces a new BlockDriver interface, which is coroutine-based,
vectored, takes flags, and uses byte granularity. This is now the preferred
interface for new drivers.
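A sketch of what such BlockDriver callbacks look like (coroutine-based, vectored, byte offsets, with flags; names follow the existing bdrv_co_* convention and are assumptions here):

int coroutine_fn (*bdrv_co_preadv)(BlockDriverState *bs,
                                   uint64_t offset, uint64_t bytes,
                                   QEMUIOVector *qiov, int flags);
int coroutine_fn (*bdrv_co_pwritev)(BlockDriverState *bs,
                                    uint64_t offset, uint64_t bytes,
                                    QEMUIOVector *qiov, int flags);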
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
|
|
It used to be an internal helper function just for implementing
bdrv_co_do_readv/writev(), but now that it's a public interface, it
deserves a name without "do" in it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
|
|
Extract the handling of io_plug "depth" from linux-aio.c and let the
main bdrv_drain loop do nothing but wait on I/O.
Like the two newly introduced functions, bdrv_io_plug and bdrv_io_unplug
now operate on all children. The visit order is now symmetrical between
plug and unplug, making it possible for formats to implement plug/unplug.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Extract the handling of throttling from bdrv_flush_io_queue. These
new functions will soon become BdrvChildRole callbacks, as they can
be generalized to "beginning of drain" and "end of drain".
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We want to remove throttled_reqs from block/io.c. This is the easy
part: hide the handling of throttled_reqs during disable/enable of
throttling within throttle-groups.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
For now we only support the following devices:
* up to 4 Cortex A9 cores
* A9 MPCORE (SCU, GIC, TWD)
* 5 i.MX UARTs
* 2 EPIT timers
* 1 GPT timer
* 3 I2C controllers
* 7 GPIO controllers
* 6 SDHC controllers
* 5 SPI controllers
* 1 CCM device
* 1 SRC device
* various ROM/RAM areas.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This one is built on top of the existing FIFO8.
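A minimal sketch of how a 32-bit FIFO can be layered on the 8-bit one from qemu/fifo8.h (names and layout here are illustrative assumptions, not necessarily the actual implementation):

typedef struct {
    Fifo8 fifo;                 /* backing byte FIFO, 4 bytes per entry */
} Fifo32;

static inline void fifo32_push(Fifo32 *fifo, uint32_t data)
{
    int i;
    for (i = 0; i < 4; i++) {
        fifo8_push(&fifo->fifo, (uint8_t)(data >> (i * 8)));
    }
}

static inline uint32_t fifo32_pop(Fifo32 *fifo)
{
    uint32_t ret = 0;
    int i;
    for (i = 0; i < 4; i++) {
        ret |= (uint32_t)fifo8_pop(&fifo->fifo) << (i * 8);
    }
    return ret;
}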
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This controller is also present in i.MX5X devices but they are not
yet emulated by QEMU.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Move acpi_build_srat_memory to a common place so that it can be reused
by ARM. Rename it to build_srat_memory.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1461667229-9216-5-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The ACPI spec says that Proximity Domain is an "Integer that represents
the proximity domain to which the processor belongs". So define it as a
uint32_t.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1461667229-9216-4-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1461667229-9216-3-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1461931684-1867-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visitor
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.
The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.
Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).
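A condensed sketch of the generated pattern described above (FOO stands for any pointer-based QAPI type; only the error path is shown):

visit_type_FOO_members(v, *obj, &err);
/* ... */
visit_end_struct(v);
if (err && visit_is_input(v)) {
    /* never leak a partially built object to the caller */
    qapi_free_FOO(*obj);
    *obj = NULL;
}
error_propagate(errp, err);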
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1461879932-9020-25-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
The semantics of the list visit are somewhat baroque, with the
following pseudocode when FooList is used:
start()
for (prev = head; cur = next(prev); prev = &cur) {
visit(&cur->value)
}
Note that these semantics (advance before visit) require that
the first call to next() return the list head, while all other
calls return the next element of the list; that is, every visitor
implementation is required to track extra state to decide whether
to return the input as-is, or to advance. It also requires an
argument of 'GenericList **' to next(), solely because the first
iteration might need to modify the caller's GenericList head, so
that all other calls have to do a layer of dereferencing.
Thankfully, we only have two uses of list visits in the entire
code base: one in spapr_drc (which completely avoids
visit_next_list(), feeding in integers from a different source
than uint8List), and one in qapi-visit.py. That is, all other
list visitors are generated in qapi-visit.c, and share the same
paradigm based on a qapi FooList type, so we can refactor how
lists are laid out with minimal churn among clients.
We can greatly simplify things by hoisting the special case
into the start() routine, and flipping the order in the loop
to visit before advance:
start(head)
for (tail = *head; tail; tail = next(tail)) {
visit(&tail->value)
}
With the simpler semantics, visitors have less state to track,
the argument to next() is reduced to 'GenericList *', and it
also becomes obvious whether an input visitor is allocating a
FooList during visit_start_list() (rather than the old way of
not knowing if an allocation happened until the first
visit_next_list()). As a minor drawback, we now allocate in
two functions instead of one, and have to pass the size to
both functions (unless we were to tweak the input visitors to
cache the size to start_list for reuse during next_list, but
that defeats the goal of less visitor state).
The signature of visit_start_list() is chosen to match
visit_start_struct(), with the new parameters after 'name'.
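For reference, a sketch of the resulting prototype (parameter order as described; exact spelling assumed):

void visit_start_list(Visitor *v, const char *name, GenericList **list,
                      size_t size, Error **errp);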
The spapr_drc case is a virtual visit, done by passing NULL for
list, similarly to how NULL is passed to visit_start_struct()
when a qapi type is not used in those visits. It was easy to
provide these semantics for qmp-output and dealloc visitors,
and a bit harder for qmp-input (several prerequisite patches
refactored things to make this patch straightforward). But it
turned out that the string and opts visitors munge enough other
state during visit_next_list() to make it easier to just
document and require a GenericList visit for now; an assertion
will remind us to adjust things if we need the semantics in the
future.
Several pre-requisite cleanup patches made the reshuffling of
the various visitors easier; particularly the qmp input visitor.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1461879932-9020-24-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
As mentioned in previous patches, we want to call visit_end_struct()
functions unconditionally, so that visitors can release resources
tied up since the matching visit_start_struct() without also having
to worry about error priority if more than one error occurs.
Even though error_propagate() can be safely used to ignore a second
error during cleanup caused by a first error, it is simpler if the
cleanup cannot set an error. So, split out the error checking
portion (basically, input visitors checking for unvisited keys) into
a new function visit_check_struct(), which can be safely skipped if
any earlier errors are encountered, and leave the cleanup portion
(which never fails, but must be called unconditionally if
visit_start_struct() succeeded) in visit_end_struct().
Generated code in qapi-visit.c has diffs resembling:
|@@ -59,10 +59,12 @@ void visit_type_ACPIOSTInfo(Visitor *v,
| goto out_obj;
| }
| visit_type_ACPIOSTInfo_members(v, obj, &err);
|- error_propagate(errp, err);
|- err = NULL;
|+ if (err) {
|+ goto out_obj;
|+ }
|+ visit_check_struct(v, &err);
| out_obj:
|- visit_end_struct(v, &err);
|+ visit_end_struct(v);
| out:
and in qapi-event.c:
@@ -47,7 +47,10 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
| visit_type_q_obj_ACPI_DEVICE_OST_arg_members(v, &param, &err);
|- visit_end_struct(v, err ? NULL : &err);
|+ if (!err) {
|+ visit_check_struct(v, &err);
|+ }
|+ visit_end_struct(v);
| if (err) {
| goto out;
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1461879932-9020-20-git-send-email-eblake@redhat.com>
[Conflict with a doc fixup resolved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
Right now, qmp-output-visitor happens to produce a QNull result
if nothing is actually visited between the creation of the visitor
and the request for the resulting QObject. A stronger protocol
would require that a QMP output visit MUST visit something. But
to still be able to produce a JSON 'null' output, we need a new
visitor function that states our intentions. Yes, we could say
that such a visit must go through visit_type_any(), but that
feels clunky.
So this patch introduces the new visit_type_null() interface and
its no-op implementation in the dealloc visitor, and stubs in the
qmp visitors (the next patch will finish the implementation).
For the visitors that will not implement the callback, document
the situation. The code in qapi-visit-core unconditionally
dereferences the callback pointer, so that a segfault will inform
a developer if they need to implement the callback for their
choice of visitor.
Note that JSON has a primitive null type, with the single value
null; likewise with the QNull type for QObject; but for QAPI,
we just have the 'null' value without a null type. We may
eventually want to add more support in QAPI for null (most likely,
we'd use it via an alternate type that permits 'null' or an
object); but we'll create that usage when we need it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1461879932-9020-15-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|
|
The visitor interface for mapping between QObject/QemuOpts/string
and QAPI is scandalously under-documented, making changes to visitor
core, individual visitors, and users of visitors difficult to
coordinate. Among other questions: when is it safe to pass NULL,
vs. when a string must be provided; which visitors implement which
callbacks; the difference between concrete and virtual visits.
Correct this by retrofitting proper contracts and documenting where some
of the interface warts remain (for example, we may want to modify
visit_end_* to require the same 'obj' as the visit_start counterpart,
so the dealloc visitor can be simplified). Later patches in this
series will tackle some, but not all, of these warts.
Add assertions to (partially) enforce the contract. Some of these
were only made possible by recent cleanup commits.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1461879932-9020-13-git-send-email-eblake@redhat.com>
[Doc fix from Eric squashed in]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
|