Age | Commit message | Author |
|
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
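A minimal sketch of how a caller with no meaningful attributes might use the extended function (the parameter order shown here is an assumption based on the description above, not the literal patch):
/* Caller with nothing useful to pass supplies MEMTXATTRS_UNSPECIFIED. */
static MemoryRegion *translate_no_attrs(AddressSpace *as, hwaddr addr,
                                        hwaddr *xlat, hwaddr *len,
                                        bool is_write)
{
    return address_space_translate(as, addr, xlat, len, is_write,
                                   MEMTXATTRS_UNSPECIFIED);
}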
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
|
|
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
|
|
Add more detail to the documentation for memory_region_init_iommu()
and other IOMMU-related functions and data structures.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
|
|
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.
Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.
This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.
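An illustrative sketch of the shape of such a reset hook (the function name and reset value are assumptions, not the literal patch):
/* Reset CPACR by going through cpacr_write() so the RAO/WI and RAZ/WI
 * handling is applied to the reset value as well. */
static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
{
    cpacr_write(env, ri, 0);
}
/* ...and in the ARMCPRegInfo entry: .resetfn = cpacr_reset, */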
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
|
|
Coverity found that the string returned by 'object_get_canonical_path' was not
being freed at two locations in the model (CID 1391294 and CID 1391293), and
also that memset was being called with a value greater than the maximum of a
byte as its second argument (CID 1391286). This patch corrects these issues by
freeing the strings and by changing the memset to zero instead on
descriptor unaligned errors.
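For illustration, the string fix boils down to freeing the path returned by object_get_canonical_path() (the surrounding logging call and the 's' variable are placeholders, not the model's actual code):
char *path = object_get_canonical_path(OBJECT(s));
qemu_log("%s: descriptor unaligned\n", path);
g_free(path);   /* the returned string is owned by the caller */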
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
cpregs_keys is a uint32_t*, so the allocation should use uint32_t.
g_new is even better because it is type-safe.
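For reference, the difference in allocation style (the variable and count names here are generic placeholders):
uint32_t *cpregs_keys;
/* before: element size spelled out by hand (and originally with the
 * wrong element type): */
cpregs_keys = g_malloc(sizeof(uint32_t) * nregs);
/* after: g_new() derives the size from the type and returns uint32_t*: */
cpregs_keys = g_new(uint32_t, nregs);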
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
When QEMU is started with the following CLI
-machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
it crashes with an abort at
accel/kvm/kvm-all.c:2164:
KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument
This is caused by an implicit dependency of kvm_arm_gicv3_reset() on
arm_gicv3_icc_reset(), where the latter is called by the CPU reset
callback.
However, commit
3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
broke CPU reset callback registration in the case where the
arm_load_kernel()
...
if (!info->kernel_filename || info->firmware_loaded)
branch is taken, i.e. it is sufficient to provide a firmware image or
to omit the kernel on the CLI to skip CPU reset callback registration,
whereas before the offending commit the callback was registered
unconditionally.
Fix it by registering the callback unconditionally right at the
beginning of arm_load_kernel() instead of doing it at the end.
NOTE:
we should probably eliminate that dependency anyway, as well as
separate the arch CPU reset parts from arm_load_kernel() into the CPU
itself, but that is refactoring that I would probably have to do
later anyway for CPU hotplug to work.
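A simplified sketch of the shape of the fix (assumed, not the literal hunk): register the per-CPU reset callback for every CPU before any early-return path in arm_load_kernel() is taken:
CPUState *cs;
for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
    qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
}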
Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Depending on the host ABI, float16 (aka uint16_t) values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.
The TCG code generator has so far been assuming garbage, as that
matches the x86 ABI, but this is incorrect for other host ABIs.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16 bits already clear.
Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.
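The essence of the change, paraphrased (the macro name follows the existing dh_ctype_* convention in the helper headers and is an assumption here):
/* Map the "f16" helper type to uint32_t instead of the 16-bit float16
 * typedef, forcing zero-extension at the ABI boundary. */
#define dh_ctype_f16 uint32_t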
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The code forgot to increase clroffset during the loop, so it only
cleared the first 4 bytes.
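A generic sketch of the bug class (clear_word() and the variable names are placeholders, not the GIC code):
while (len > 0) {
    if (clroffset != 0) {
        clear_word(clroffset, 0);   /* clear one 32-bit word */
        clroffset += 4;             /* the missing increment */
    }
    offset += 4;
    len -= 4;
}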
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.
Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add entries to MAINTAINERS to cover the newer MPS2 boards and
the new devices they use.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
|
|
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).
We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.
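The idea of the fix, sketched (assumed, not the literal hunk): squash input denormals according to the active flush-to-zero setting before the computation, which is what raises FPSR.IDC:
float16 in = float16_squash_input_denormal(a, fpst);
/* ...FRECPX then operates on 'in'; the analogous float32/float64
 * squash functions cover the other precisions. */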
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
|
|
into staging
NUMA queue, 2018-05-30
* New command-line option: --preconfig
This option allows pausing QEMU and configuring it using QMP
commands before the board initialization code runs.
* New QMP command set-numa-node, now made possible by --preconfig
* Small update on -numa error messages
# gpg: Signature made Thu 31 May 2018 00:02:59 BST
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/numa-next-pull-request:
tests: functional tests for QMP command set-numa-node
qmp: add set-numa-node command
qmp: permit query-hotpluggable-cpus in preconfig state
tests: extend qmp test with preconfig checks
cli: add --preconfig option
tests: qapi-schema tests for allow-preconfig
qapi: introduce new cmd option "allow-preconfig"
hmp: disable monitor in preconfig state
qapi: introduce preconfig runstate
numa: split out NumaOptions parsing into set_numa_options()
numa: postpone options post-processing till machine_run_board_init()
numa: clarify error message when node index is out of range in -numa dist, ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Block layer patches:
- Add blockdev-create job
- qcow2: Silence Coverity false positive
# gpg: Signature made Wed 30 May 2018 16:55:31 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
block/create: Mark blockdev-create stable
qemu-iotests: Rewrite 213 for blockdev-create job
qemu-iotests: Rewrite 212 for blockdev-create job
qemu-iotests: Rewrite 211 for blockdev-create job
qemu-iotests: Rewrite 210 for blockdev-create job
qemu-iotests: Rewrite 207 for blockdev-create job
qemu-iotests: Rewrite 206 for blockdev-create job
qemu-iotests: iotests.py helper for non-file protocols
qemu-iotests: Add VM.run_job()
qemu-iotests: Add iotests.img_info_log()
qemu-iotests: Add VM.qmp_log()
qemu-iotests: Add VM.get_qmp_events_filtered()
block/create: Make x-blockdev-create a job
job: Add error message for failing jobs
vhdx: Fix vhdx_co_create() return value
vdi: Fix vdi_co_do_create() return value
qcow2: Fix Coverity warning when calculating the refcount cache size
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
* start QEMU with 2 unmapped cpus,
* while in preconfig state
* add 2 numa nodes
* assign cpus to them
* exit preconfig and in running state check that cpus
are mapped correctly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526556607-268163-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The command is allowed to run only in the preconfig stage and
allows configuring the NUMA mapping for CPUs, based on the
possible CPU layout (query-hotpluggable-cpus) of the given
machine instance.
Example of configuration session:
$QEMU -smp 2 --preconfig ...
QMP:
-> {'execute': 'query-hotpluggable-cpus' }
<- {'return': [
{'props': {'core-id': 0, 'thread-id': 0, 'socket-id': 1}, ... },
{'props': {'core-id': 0, 'thread-id': 0, 'socket-id': 0}, ... }
]}
-> {'execute': 'set-numa-node', 'arguments': { 'type': 'node', 'nodeid': 0 } }
<- {'return': {}}
-> {'execute': 'set-numa-node', 'arguments': { 'type': 'cpu',
'node-id': 0, 'core-id': 0, 'thread-id': 0, 'socket-id': 1, }
}
<- {'return': {}}
-> {'execute': 'set-numa-node', 'arguments': { 'type': 'node', 'nodeid': 1 } }
-> {'execute': 'set-numa-node', 'arguments': { 'type': 'cpu',
'node-id': 1, 'core-id': 0, 'thread-id': 0, 'socket-id': 0 }
}
<- {'return': {}}
-> {'execute': 'query-hotpluggable-cpus' }
<- {'return': [
{'props': {'core-id': 0, 'thread-id': 0, 'node-id': 0, 'socket-id': 1}, ... },
{'props': {'core-id': 0, 'thread-id': 0, 'node-id': 1, 'socket-id': 0}, ... }
]}
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1525423069-61903-11-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[ehabkost: Changed "since 2.13" to "since 3.0"]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This will allow management to query the possible CPUs, which depend on
the machine (version) and -smp options used, without restarting
QEMU, and to use the results to configure the NUMA mapping or to add
CPUs with device_add* later.
PS:
*) device_add is not allowed to run at preconfig in this series,
but later it could be dealt with by injecting -device
in the preconfig state and letting the existing -device handling
actually plug the devices
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1525423069-61903-10-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Add permission checks for commands at the 'preconfig' stage.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526556524-267991-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This option allows pausing QEMU in the new RUN_STATE_PRECONFIG state,
allowing QEMU to be configured from QMP before the machine jumps into
the board initialization code of machine_run_board_init().
The intent is to allow management to query machine state and additionally
configure it using previous query results within one QEMU instance
(i.e. eliminate the need to start QEMU twice, first to query board-specific
parameters and then for the actual VM start using the query results for
additional parameters).
The new option complements the -S option and can be used with or without
it. The difference is that -S pauses QEMU when the machine is completely
initialized, with all devices wired up and ready to execute guest code
(QEMU only needs to unpause VCPUs to let the guest execute its code),
while the "preconfig" option pauses QEMU early, before the board-specific
init callback (machine_run_board_init) is executed, and allows the
configuration of machine parameters which will be used by the board init
code.
When early introspection/configuration is done, the command
'exit-preconfig' should be used to exit RUN_STATE_PRECONFIG and
transition to the next requested state (i.e. if -S is used then QEMU
will pause a second time when board/device initialization is completed,
or start guest execution if -S isn't provided on the CLI).
PS:
Initially 'preconfig' is planned to be used for configuring the NUMA
topology depending on the board-specified possible CPU layout.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1526059483-42847-1-git-send-email-imammedo@redhat.com>
[ehabkost: Changed "since 2.13" to "since 3.0"]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Use the new allow-preconfig parameter in tests and make sure that
QAPISchema can parse allow-preconfig correctly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1526058959-41425-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The new option will be used to allow commands that are prepared to run
(or need to run) during the preconfig state. Other commands that should
be able to run in the preconfig state will have to be amended to not
expect the machine to be in an initialized state, or to deal with it.
For compatibility reasons, commands that don't explicitly use the new
'allow-preconfig' flag are not permitted to run in the preconfig state,
but are allowed in all other states as they used to be.
Within this patch, allow the following commands in the preconfig state:
qmp_capabilities
query-qmp-schema
query-commands
query-command-line-options
query-status
exit-preconfig
to allow the QMP connection, basic introspection and moving to the next
state.
PS:
set-numa-node and query-hotpluggable-cpus will be enabled later in
separate patches.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526057503-39287-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[ehabkost: Changed "since 2.13" to "since 3.0"]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Ban it for now; if someone needs it to work early,
they would have to implement checks for whether an HMP command
is valid in the preconfig state.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1525423069-61903-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The new preconfig runstate will be used in follow-up patches
related to introducing the --preconfig CLI option, and is
intended to replace the prelaunch runstate from QEMU start-up
to the machine_init callback.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1525423069-61903-4-git-send-email-imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[ehabkost: Changed "since 2.13" to "since 3.0"]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This will allow reusing set_numa_options() for parsing
configuration commands received via the QMP interface.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1525423069-61903-3-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
In preparation for NUMA options being handled via QMP before
machine_run_board_init(), move the final NUMA configuration checks
and processing to machine_run_board_init() so that it can take into
account both CLI (via parse_numa_opts()) and QMP input.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1525423069-61903-2-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
When using the following CLI:
-numa dist,src=128,dst=1,val=20
the user gets a rather confusing error message:
"Invalid node 128, max possible could be 128"
where 128 is the number of nodes that QEMU supports (MAX_NODES),
while src/dst is an index up to that limit, so the message should
say MAX_NODES - 1.
Make the error message explicitly state the valid range for the
node index, to be clearer.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526483174-169008-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
We're ready to declare the blockdev-create job stable. This renames the
corresponding QMP command from x-blockdev-create to blockdev-create.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
This rewrites the test case 213 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This rewrites the test case 212 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This rewrites the test case 211 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This rewrites the test case 210 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
This rewrites the test case 207 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
Most of the test cases stay the same as before (the exception being some
improved 'size' options that allow distinguishing which command created
the image), but in order to be able to implement proper job handling,
the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This rewrites the test case 206 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This adds two helper functions that are useful for test cases that make
use of a non-file protocol (specifically ssh).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
Add an iotests.py function that runs a job and only returns when it is
destroyed. An error is logged when the job failed and job-finalize and
job-dismiss commands are issued if necessary.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
This adds a filter function to postprocess 'qemu-img info' input
(similar to what _img_info does), and an img_info_log() function that
calls 'qemu-img info' and logs the filtered output.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
This adds a helper function that logs both the QMP request and the
received response before returning it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
This adds a helper function that returns a list of QMP events that are
already filtered through filter_qmp_event().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
This changes the x-blockdev-create QMP command so that it doesn't block
the monitor and the main loop any more, but starts a background job that
performs the image creation.
The basic job as implemented here is all that is necessary to make image
creation asynchronous and to provide a QMP interface that can be marked
stable, but it still lacks a few features that jobs usually provide: The
job will ignore pause commands and it doesn't publish more than very
basic progress yet (total-progress is 1 and current-progress advances
from 0 to 1 when the driver callback returns). These features can be
added later without breaking compatibility.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
So far we relied on job->ret and strerror() to produce an error message
for failed jobs. Not surprisingly, this tends to result in completely
useless messages.
This adds a Job.error field that can contain an error string for a
failing job, and a parameter to job_completed() that sets the field. As
a default, if NULL is passed, we continue to use strerror(job->ret).
All existing callers are changed to pass NULL. They can be improved in
separate patches.
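A sketch of the described fallback (field and variable names as assumed, and assuming job->ret holds a negative errno):
if (!job->error && job->ret < 0) {
    job->error = g_strdup(strerror(-job->ret));
}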
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
.bdrv_co_create() is supposed to return 0 on success, but vhdx could
return a positive value instead. Fix this.
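A generic sketch of the bug class (do_create_step() is a placeholder, not the vhdx code): a helper that may return a positive value on success must not be propagated directly as the .bdrv_co_create() result:
ret = do_create_step(blk);   /* may legitimately return > 0 on success */
if (ret < 0) {
    goto out;
}
ret = 0;                     /* success must be reported as exactly 0 */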
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
.bdrv_co_create() is supposed to return 0 on success, but vdi could
return a positive value instead. Fix this.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
|
|
OSX 10.13 deprecates the NSFileHandlingPanelOKButton constant, and
would rather you use NSModalResponseOK, which was introduced in OS X 10.9.
Use the recommended new constant name, with a backward compatibility
define if we're building on an older OSX.
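A sketch of a backward-compatibility define of the kind described (the exact guard used in the patch may differ):
#if MAC_OS_X_VERSION_MAX_ALLOWED < MAC_OS_X_VERSION_10_9
#define NSModalResponseOK NSFileHandlingPanelOKButton
#endif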
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: John Arbuckle <programmingkidx@gmail.com>
Message-id: 20180529181523.19185-1-peter.maydell@linaro.org
|
|
MIN_REFCOUNT_CACHE_SIZE is 4 and the cluster size is guaranteed to be
at most 2MB, so the minimum refcount cache size (in bytes) is always
going to fit in a 32-bit integer.
Coverity doesn't know that, and since we're storing the result in a
uint64_t (*refcount_cache_size) it thinks that we need the 64 bits and
that we probably want to do a 64-bit multiplication to prevent the
result from being truncated.
This is a false positive in this case, but it's a fair warning.
We could do a 64-bit multiplication to get rid of it, but since we
know that a 32-bit variable is enough to store this value let's simply
reuse min_refcount_cache, make it a normal int and stop doing casts.
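The arithmetic behind the reasoning, sketched (the surrounding variable names are assumptions): MIN_REFCOUNT_CACHE_SIZE is 4 and the cluster size is at most 2 MB, so the product is at most 8 MB and fits in a plain int:
int min_refcount_cache = MIN_REFCOUNT_CACHE_SIZE * s->cluster_size;
*refcount_cache_size = min_refcount_cache;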
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
'remotes/edgar/tags/edgar/xilinx-next-2018-05-29-v1.for-upstream' into staging
Tag edgar/xilinx-next-2018-05-29-v1.for-upstream
# gpg: Signature made Tue 29 May 2018 09:58:30 BST
# gpg: using RSA key 29C596780F6BCA83
# gpg: Good signature from "Edgar E. Iglesias (Xilinx key) <edgar.iglesias@xilinx.com>"
# gpg: aka "Edgar E. Iglesias <edgar.iglesias@gmail.com>"
# Primary key fingerprint: AC44 FEDC 14F7 F1EB EDBF 4151 29C5 9678 0F6B CA83
* remotes/edgar/tags/edgar/xilinx-next-2018-05-29-v1.for-upstream: (38 commits)
target-microblaze: Consolidate MMU enabled checks
target-microblaze: cpu_mmu_index: Fixup indentation
target-microblaze: Use tcg_gen_movcond in eval_cond_jmp
target-microblaze: Convert env_btarget to i64
target-microblaze: Remove argument b in eval_cc()
target-microblaze: Use table based condition-codes conversion
target-microblaze: mmu: Cleanup debug log messages
target-microblaze: Simplify address computation using tcg_gen_addi_i32()
target-microblaze: Allow address sizes between 32 and 64 bits
target-microblaze: Add support for extended access to TLBLO
target-microblaze: dec_msr: Plug a temp leak
target-microblaze: mmu: Add a configurable output address mask
target-microblaze: mmu: Prepare for 64-bit addresses
target-microblaze: mmu: Remove unused register state
target-microblaze: mmu: Add R_TBLX_MISS macros
target-microblaze: Implement MFSE EAR
target-microblaze: Add Extended Addressing
target-microblaze: Setup for 64bit addressing
target-microblaze: Make special registers 64-bit
target-microblaze: dec_msr: Fix MTS to FSR
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Update references to 2.13 to read 3.0, since that's the
number we're using for the next release.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180522104000.9044-6-peter.maydell@linaro.org
|
|
Rename the 2.13 machines to match the number we're going to
use for the next release.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20180522104000.9044-5-peter.maydell@linaro.org
|
|
Rename the 2.13 machines to match the number we're going to
use for the next release.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180522104000.9044-4-peter.maydell@linaro.org
|
|
Rename the 2.13 machine types to match what we're going to
use as our next release number.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20180522104000.9044-3-peter.maydell@linaro.org
|
|
We're going to make the next release be 3.0, not 2.13; change
the annotations in our json appropriately.
Changes produced with
sed -i -e 's/2\.13/3.0/g' qapi/*.json
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180522104000.9044-2-peter.maydell@linaro.org
|