|
We want to eventually stick request_alignment alongside other
BlockLimits, but first, we must ensure it is populated at the
same time as all other limits, rather than being a special case
that is set only when a block is first opened.
Add a .bdrv_refresh_limits() to all four of our legacy devices
that will always be sector-only (bochs, cloop, dmg, vvfat), in
spite of their recent conversion to expose a byte interface.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
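A minimal sketch of such a callback, assuming QEMU's internal block-driver API of that era (BlockDriverState and BDRV_SECTOR_SIZE from block/block_int.h); the actual patch adds one per driver:
    static void bochs_refresh_limits(BlockDriverState *bs, Error **errp)
    {
        bs->request_alignment = BDRV_SECTOR_SIZE; /* no sub-sector I/O */
    }
Once request_alignment later moves into BlockLimits, the same assignment would target bs->bl.request_alignment instead.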
|
|
We want to eventually stick request_alignment alongside other
BlockLimits, but first, we must ensure it is populated at the
same time as all other limits, rather than being a special case
that is set only when a block is first opened.
In this case, raw_probe_alignment() already did what we needed,
so just fix its signature and wire it in correctly.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We want to eventually stick request_alignment alongside other
BlockLimits, but first, we must ensure it is populated at the
same time as all other limits, rather than being a special case
that is set only when a block is first opened.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We want to eventually stick request_alignment alongside other
BlockLimits, but first, we must ensure it is populated at the
same time as all other limits, rather than being a special case
that is set only when a block is first opened.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We want to eventually stick request_alignment alongside other
BlockLimits, but first, we must ensure it is populated at the
same time as all other limits, rather than being a special case
that is set only when a block is first opened.
Note that when the user does not provide "align", then we were
defaulting to bs->request_alignment - but at this stage in the
initialization, that was always 512. We were also rejecting an
explicit "align":0 from the user; this patch now allows that,
as an explicit request for the default alignment (which may not
always be 512 in the future).
qemu-iotests 77 is particularly sensitive to the fact that we
can specify an artificial alignment override in blkdebug, and
that override must continue to work even when limits are
refreshed on an already open device.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
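A hedged sketch of the intended option handling; the helper name blkdebug_parse_align is made up and the struct field is assumed, only the "align" semantics come from the commit text:
    static int blkdebug_parse_align(BDRVBlkdebugState *s, QemuOpts *opts,
                                    Error **errp)
    {
        /* 0, or an absent "align", means: defer to bs->request_alignment */
        uint64_t align = qemu_opt_get_size(opts, "align", 0);
        if (align && (align > INT_MAX || !is_power_of_2(align))) {
            error_setg(errp, "Invalid alignment");
            return -EINVAL;
        }
        s->align = align;   /* re-applied on every limits refresh */
        return 0;
    }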
|
|
Making all callers special-case 0 as unlimited is awkward,
and we DO have a hard maximum of BDRV_REQUEST_MAX_SECTORS given
our current block layer API limits.
In the case of scsi, this means that we now always advertise a
limit to the guest, even in cases where the underlying layers
previously used 0 to mean no inherent limit beyond the block layer.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
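A sketch of the resulting clamp, assuming the 2016-era accessor and field names (blk_get_max_transfer_length(), bs->bl.max_transfer_length); MIN_NON_ZERO() treats a zero operand as "no limit":
    int blk_get_max_transfer_length(BlockBackend *blk)
    {
        BlockDriverState *bs = blk_bs(blk);
        int max = bs ? bs->bl.max_transfer_length : 0;
        /* 0 used to mean "unlimited"; fold it into the hard cap instead. */
        return MIN_NON_ZERO(max, BDRV_REQUEST_MAX_SECTORS);
    }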
|
|
s->blocksize may be larger than 512, in which case our
tweaks to max_xfer_len and opt_xfer_len must be scaled
appropriately.
CC: qemu-stable@nongnu.org
Reported-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
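An illustration of the unit conversion involved, not the exact patch (the helper is hypothetical): the block layer limit is counted in 512-byte sectors, while the SCSI Block Limits VPD page expects units of the device's logical block size:
    /* Convert a limit expressed in 512-byte sectors into device blocks. */
    static uint32_t sectors_to_blocks(uint32_t max_sectors, uint32_t blocksize)
    {
        return max_sectors / (blocksize / 512);   /* e.g. divide by 8 for 4K */
    }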
|
|
The function sector_limits_lun2qemu() returns a value in units of
the block layer's 512-byte sector, and can be as large as
0x40000000, which is much larger than the block layer's inherent
limit of BDRV_REQUEST_MAX_SECTORS. The block layer already
handles '0' as a synonym to the inherent limit, and it is nicer
to return this value than it is to calculate an arbitrary
maximum, for two reasons: we want to ensure that the block layer
continues to special-case '0' as 'no limit beyond the inherent
limits'; and we want to be able to someday expand the block
layer to allow 64-bit limits, where auditing for uses of
BDRV_REQUEST_MAX_SECTORS will help us make sure we aren't
artificially constraining iscsi to old block layer limits.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We were basing the advertisement of maximum discard and transfer
length off of UINT32_MAX, but since the rest of the block layer
has signed int limits on a transaction, nothing could ever reach
that maximum, and we risk overflowing an int once things are
converted to byte-based rather than sector-based limits. What's
more, we DO have a much smaller limit: both the current kernel
and qemu-nbd have a hard limit of 32M on a read or write
transaction, and while they may also permit up to a full 32 bits
on a discard transaction, the upstream NBD protocol is proposing
wording that without any explicit advertisement otherwise,
clients should limit ALL requests to the same limits as read and
write, even though the other requests do not actually require as
many bytes across the wire. So the better limit to tell the
block layer is 32M for both values.
Behavior doesn't actually change with this patch (the block layer
is currently ignoring the max_transfer advertisements); but when
that problem is fixed in a later series, this patch will prevent
the exposure of a latent bug.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
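A sketch of the advertisement with the 2016-era, sector-based limit fields; the 32M figure comes from the commit text and the exact field names are assumptions:
    #define NBD_MAX_BUF_SIZE  (32 * 1024 * 1024)
    static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
    {
        bs->bl.max_discard         = NBD_MAX_BUF_SIZE >> BDRV_SECTOR_BITS;
        bs->bl.max_transfer_length = NBD_MAX_BUF_SIZE >> BDRV_SECTOR_BITS;
    }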
|
|
The NBD layer was breaking up requests at a limit of 2040 sectors
(just under 1M) to cater to old qemu-nbd. But the server limit
was raised to 32M in commit 2d8214885 to match the kernel, more
than three years ago; and the upstream NBD Protocol is proposing
documentation that without any explicit communication to state
otherwise, a client should be able to safely assume that a 32M
transaction will work. It is time to rely on the larger sizing,
and any downstream distro that cares about maximum
interoperability with older qemu-nbd servers can just tweak the
value of #define NBD_MAX_SECTORS.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
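The tweak point mentioned above, sketched under the assumption that the define counts 512-byte sectors:
    #define NBD_MAX_SECTORS ((32 * 1024 * 1024) / 512)   /* 65536; was 2040 */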
|
|
If the amount of data to read ends exactly at the total size
of the bs, then we were wasting time creating a local qiov
to read the data in preparation for what would normally be
appending zeroes beyond the end, even though in this corner
case there is nothing further to do.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
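A conceptual sketch of the short-circuit, with variable names assumed rather than taken from the QEMU source:
    /* Read fast path: a request that does not extend past EOF -- including
     * one ending exactly at EOF -- goes straight to the driver. */
    if (offset + bytes <= total_bytes) {
        ret = bdrv_driver_preadv(bs, offset, bytes, qiov, 0);
        goto out;
    }
    /* Otherwise read the in-range part via a local qiov and zero the tail. */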
|
|
We don't pass any flags on to drivers to handle. Tighten an
assert to explain why we pass 0 to bdrv_driver_preadv(), and add
some comments on things to be aware of if we want to turn on
per-BDS BDRV_REQ_FUA support during reads in the future. Also,
document that we may want to consider using unmap during
copy-on-read operations where the read is all zeroes.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
For symmetry with bdrv_aligned_preadv(), assert that the caller
really has aligned things properly. This requires adding an align
parameter, which is used now only in the new asserts, but will
come in handy in a later patch that adds auto-fragmentation to the
max transfer size, since that value need not always be a multiple
of the alignment, and therefore must be rounded down.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
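A sketch of the kind of assertions added, assuming is_power_of_2() from qemu/host-utils.h; the exact set and placement are assumptions:
    assert(is_power_of_2(align));
    assert((offset & (align - 1)) == 0);
    assert((bytes & (align - 1)) == 0);
    assert(!qiov || bytes == qiov->size);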
|
|
There are 9 iotests failing on Ubuntu 15.10 at the moment.
The problem is that options parsing in qemu-img is broken by the
following commit:
commit 10985131e337a0c52c5bd1e191fd7867a6ff8d02
Author: Denis V. Lunev <den@openvz.org>
Date: Fri Jun 17 17:44:13 2016 +0300
qemu-img: move common options parsing before commands processing
This strange command line reports an error:
./qemu-img create -f qcow2 TEST_DIR/t.qcow2 -- 1024
qemu-img: Invalid image size specified!
while the original code parsed it successfully.
The problem is that the getopt_long() state needs to be reset before the
second parsing pass. According to the manual, this can be done with the assignment:
optind = 0
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Eric Blake <eblake@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
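A standalone illustration of the reset, assuming glibc's getopt_long(); the subcommand options are made up, only the optind = 0 reset mirrors the fix:
    #include <getopt.h>
    #include <stdio.h>

    static const struct option global_opts[] = {
        { "help", no_argument, NULL, 'h' },
        { NULL, 0, NULL, 0 },
    };
    static const struct option cmd_opts[] = {
        { "format", required_argument, NULL, 'f' },
        { NULL, 0, NULL, 0 },
    };

    int main(int argc, char **argv)
    {
        int c;

        /* Pass 1: global options only ('+' stops at the first non-option). */
        while ((c = getopt_long(argc, argv, "+h", global_opts, NULL)) != -1) {
            if (c == 'h') {
                printf("usage: demo [-h] command [-f fmt] [--] args...\n");
                return 0;
            }
        }
        argc -= optind;
        argv += optind;

        /* Pass 2: per-command options.  Without this reset, getopt_long()
         * keeps its internal state from pass 1 and mis-parses the command. */
        optind = 0;
        while ((c = getopt_long(argc, argv, "f:", cmd_opts, NULL)) != -1) {
            if (c == 'f') {
                printf("format: %s\n", optarg);
            }
        }
        for (; optind < argc; optind++) {
            printf("positional: %s\n", argv[optind]);
        }
        return 0;
    }
Run as, e.g., ./demo create -f qcow2 t.qcow2 -- 1024; without the optind reset, the second pass can mis-handle the trailing "-- 1024" exactly as described above.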
|
|
staging
ipxe: update submodule from 4e03af8ec to 041863191
e1000e+vmxnet3: add boot rom
# gpg: Signature made Mon 04 Jul 2016 07:25:46 BST
# gpg: using RSA key 0x4CB6D8EED3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
# Primary key fingerprint: A032 8CFF B93A 17A7 9901 FE7D 4CB6 D8EE D3E8 7138
* remotes/kraxel/tags/pull-ipxe-20160704-1:
build: add pc-bios to config-host.mak deps
ipxe: add new roms to BLOBS
ipxe: update prebuilt binaries
vmxnet3: add boot rom
e1000e: add boot rom
ipxe: add vmxnet3 rom
ipxe: add e1000e rom
ipxe: update submodule from 4e03af8ec to 041863191
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
into staging
ppc patch queue for 2016-07-05
Here's the current ppc, sPAPR and related drivers patch queue.
* The big addition is dynamic DMA window support (this includes some
core VFIO changes)
* There are also several fixes to the MMU emulation for bugs
introduced with the HV mode patches
* Several other bugfixes and cleanups
Changes in v2:
I messed up and forgot to make a fix in the last patch which BenH
pointed out (introduced by my rebasing). That's fixed in this
version, and I'm replacing the tag in place with the revised
version.
# gpg: Signature made Tue 05 Jul 2016 06:28:58 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160705:
ppc/hash64: Fix support for LPCR:ISL
ppc/hash64: Add proper real mode translation support
target-ppc: Return page shift from PTEG search
target-ppc: Simplify HPTE matching
target-ppc: Correct page size decoding in ppc_hash64_pteg_search()
ppc: simplify ppc_hash64_hpte_page_shift_noslb()
spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)
vfio/spapr: Create DMA window dynamically (SPAPR IOMMU v2)
vfio: Add host side DMA window capabilities
vfio: spapr: Add DMA memory preregistering (SPAPR IOMMU v2)
spapr_iommu: Realloc guest visible TCE table when starting/stopping listening
ppc: simplify max_smt initialization in ppc_cpu_realizefn()
spapr: Ensure thread0 of CPU core is always realized first
ppc: Fix xsrdpi, xvrdpi and xvrspi rounding
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
We need to ignore the segment page size and essentially treat
all pages as coming from a 4K segment.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This adds proper support for translating real mode addresses based
on the combination of HV and LPCR bits. This handles HRMOR offset
for hypervisor real mode, and both RMA and VRMA modes for guest
real mode. PAPR mode adjusts the offsets appropriately to match the
RMA used in TCG, but we need to limit it to the maximum supported by the
implementation (16G).
This includes some fixes by Cédric Le Goater <clg@kaod.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
ppc_hash64_pteg_search() now decodes a PTE's page size encoding, which it
didn't previously do. This means we're now double decoding the page size,
because we check it in the fault path after ppc64_hash64_htab_lookup()
returns.
To avoid this duplication have ppc_hash64_pteg_search() and
ppc_hash64_htab_lookup() return the page size from the PTE and use that in
the callers instead of decoding again.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
ppc_hash64_pteg_search() explicitly checks each HPTE's VALID and
SECONDARY bits, then uses the HPTE64_V_COMPARE() macro to check the B field
and AVPN. However, a small tweak to HPTE64_V_COMPARE() means we can check
all of these bits at once with a suitable ptem value. So, consolidate all
the comparisons for simplicity.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The architecture specifies that when searching a PTEG for PTEs, entries
with a page size encoding that's not valid for the current segment should
be ignored, continuing the search.
The current implementation does this with ppc_hash64_pte_size_decode()
which is a very incomplete implementation of this check. We already have
code to do a full and correct page size decode in hpte_page_shift().
This patch moves hpte_page_shift() so it can be used in
ppc_hash64_pteg_search() and adjusts the latter's parameters to include
a full SLBE instead of just a segment page shift.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The segment page shift parameter is never used. Let's remove it.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This adds support for the Dynamic DMA Windows (DDW) option defined by
the SPAPR specification, which allows additional DMA window(s) to be created.
The "ddw" property is enabled by default on a PHB but for compatibility
the pseries-2.6 machine and older disable it.
This also creates a single DMA window for the older machines to
maintain backward migration.
This implements DDW for a PHB with emulated and VFIO devices. Host
kernel support is required. The advertised IOMMU page sizes are 4K and
64K; 16M pages are supported but not advertised by default. To enable
them, the user has to specify the "pgsz" property for the PHB and
enable huge pages for RAM.
Existing Linux guests try to create one additional huge DMA window
with 64K or 16MB pages and map the entire guest RAM into it. If that
succeeds, the guest switches to dma_direct_ops and never calls TCE hypercalls
(H_PUT_TCE,...) again. This enables VFIO devices to use the entire RAM
and not waste time on map/unmap later. This adds a "dma64_win_addr"
property, which is the bus address for the 64bit window; it defaults
to 0x800.0000.0000.0000, as this is what modern POWER8 hardware
uses, and this allows having emulated and VFIO devices on the same bus.
This adds 4 RTAS handlers:
* ibm,query-pe-dma-window
* ibm,create-pe-dma-window
* ibm,remove-pe-dma-window
* ibm,reset-pe-dma-window
These are registered from type_init() callback.
These RTAS handlers are implemented in a separate file to avoid polluting
spapr_iommu.c with PCI.
This changes sPAPRPHBState::dma_liobn to an array to allow 2 LIOBNs
and updates all references to dma_liobn. However, this does not add
the 64bit LIOBN to the migration stream, as even the 32bit LIOBN is
rather pointless there (it is a PHB property and the management
software can/should pass LIOBNs via the CLI), but we keep it for
backward migration support.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
The new VFIO_SPAPR_TCE_v2_IOMMU type supports dynamic DMA window management.
This adds the ability for VFIO common code to dynamically allocate/remove
DMA windows in the host kernel when a new VFIO container is added/removed.
This adds a helper to vfio_listener_region_add which issues the
VFIO_IOMMU_SPAPR_TCE_CREATE ioctl and adds the just-created IOMMU to
the host IOMMU list; the opposite action is taken in
vfio_listener_region_del.
When creating a new window, a heuristic is used to decide on the number
of TCE table levels.
This should cause no guest-visible change in behavior.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[dwg: Added some casts to prevent printf() warnings on certain targets
where the kernel headers' __u64 doesn't match uint64_t or PRIx64]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
There are going to be multiple IOMMUs per container. This moves
the single host IOMMU parameter set to a list of VFIOHostDMAWindow.
This should cause no behavioral change and will be used later by
the SPAPR TCE IOMMU v2 which will also add a vfio_host_win_del() helper.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. With this information, the host
kernel can pin them all once per user process, do locked-pages
accounting (once), and not spend time doing that in real time, with
possible failures which cannot be handled nicely in some cases.
This adds a prereg memory listener which listens on address_space_memory
and notifies a VFIO container about memory which needs to be
pinned/unpinned. VFIO MMIO regions (i.e. "skip dump" regions) are skipped.
The feature is only enabled for SPAPR IOMMU v2. The host kernel changes
are required. Since v2 does not need/support VFIO_IOMMU_ENABLE, this does
not call it when v2 is detected and enabled.
This enforces guest RAM blocks to be host page size aligned; however
this is not new as KVM already requires memory slots to be host page
size aligned.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[dwg: Fix compile error on 32-bit host]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
The sPAPR TCE tables manage 2 copies when VFIO is using an IOMMU -
a guest view of the table and a hardware TCE table. If there is no VFIO
presence in the address space, then just the guest view is used; in
this case, it is allocated in KVM. However, since there is no
support yet for VFIO in the KVM TCE hypercalls, when we start using VFIO
we need to move the guest view from KVM to userspace, and we need
to do this for every IOMMU on a bus with VFIO devices.
This implements the callbacks for the sPAPR IOMMU - notify_started()
reallocates the guest view to userspace, notify_stopped() does
the opposite.
This removes explicit spapr_tce_set_need_vfio() call from PCI hotplug
path as the new callbacks do this better - they notify IOMMU at
the exact moment when the configuration is changed, and this also
includes the case of PCI hot unplug.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
kvmppc_smt_threads() returns 1 if KVM is not enabled.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
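A sketch of the simplification this guarantee enables (the conditional shown is the shape of the old code, not a verbatim quote):
    int max_smt = kvmppc_smt_threads();   /* already 1 when KVM is off */
    /* ...instead of:  kvm_enabled() ? kvmppc_smt_threads() : 1 */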
|
|
During CPU core realization, we create all the thread objects and parent
them to the core object in a loop. However, the realization of thread
objects is done separately by walking the threads of a core using
object_child_foreach(). With this, there is no guarantee on the order
in which the child thread objects get realized. Since CPU device tree
properties are currently derived from the CPU thread object, we assume
thread0 of the core to be the representative thread of the core when
creating device tree properties for the core. If thread0 is not the
first thread that gets realized, then we would end up having an
incorrect dt_id for the core and this causes hotplug failures from
the guest.
Fix this by realizing each thread object by walking the core's thread
object list thereby ensuring that thread0 and other threads are always
realized in the correct order.
Future TODO: CPU DT nodes are per-core properties and we should
ideally base the creation of CPU DT nodes on core objects rather than
the thread objects.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
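A conceptual sketch of the fix, with field names assumed; only the ordering guarantee comes from the commit text:
    /* Walk the core's own thread list in index order instead of relying on
     * object_child_foreach(), so thread0 is always realized first. */
    for (i = 0; i < core->nr_threads; i++) {
        Object *thread = OBJECT(core->threads[i]);
        object_property_set_bool(thread, true, "realized", &local_err);
        if (local_err) {
            error_propagate(errp, local_err);
            return;
        }
    }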
|
|
xsrdpi, xvrdpi and xvrspi use the round ties away method, not round
nearest even.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
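A hedged sketch in terms of QEMU's softfloat rounding-mode constants; where exactly the mode is set inside the VSX round helpers (and the tstat status copy) is an assumption:
    /* Round to nearest, ties away from zero, as these instructions require. */
    set_float_rounding_mode(float_round_ties_away, &tstat);
    /* previously: set_float_rounding_mode(float_round_nearest_even, &tstat); */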
|
|
into staging
Revert "bios: Add fast variant of SeaBIOS for use with -kernel on x86."
# gpg: Signature made Mon 04 Jul 2016 16:24:55 BST
# gpg: using RSA key 0x4CB6D8EED3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
# Primary key fingerprint: A032 8CFF B93A 17A7 9901 FE7D 4CB6 D8EE D3E8 7138
* remotes/kraxel/tags/pull-seabios-20160704-3:
Revert "bios: Add fast variant of SeaBIOS for use with -kernel on x86."
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
'remotes/berrange/tags/pull-qcrypto-2016-07-04-1' into staging
Merge qcrypto 2016/07/04 v1
# gpg: Signature made Mon 04 Jul 2016 15:54:26 BST
# gpg: using RSA key 0xBE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>"
# gpg: aka "Daniel P. Berrange <berrange@redhat.com>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/pull-qcrypto-2016-07-04-1:
crypto: allow default TLS priority to be chosen at build time
crypto: add support for TLS priority string override
crypto: implement sha224, sha384, sha512 and ripemd160 hashes
crypto: switch hash code to use nettle/gcrypt directly
crypto: rename OUT to out in xts test to avoid clash on MinGW
crypto: fix handling of iv generator hash defaults
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This reverts commit 4e04ab6a63ebe9fb4305e7e8e49cc8b0095db8fb.
Also remove pc-bios/bios-fast.bin.
Commit was merged by mistake.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
Modern gnutls can use a global config file to control the
crypto priority settings for TLS connections. For example
the priority string "@SYSTEM" instructs gnutls to find the
priority setting named "SYSTEM" in the global config file.
The latest gnutls GIT codebase gained the ability to reference
multiple priority strings in the config file, with the first
one that is found to exist winning. This means it is now
possible to configure QEMU out of the box with a default
priority of "@QEMU,SYSTEM", which says to look for the
settings "QEMU" first, and if not found, use the "SYSTEM"
settings.
To make use of this facility, we introduce the ability to
set the QEMU default priority at build time via a new
configure argument. It is anticipated that distro vendors
will set this when building QEMU to a suitable value for
use with the distro crypto policy setup, e.g. current Fedora
would run
./configure --tls-priority=@SYSTEM
while future Fedora would run
./configure --tls-priority=@QEMU,SYSTEM
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
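A hedged sketch of how such a priority string is consumed through the gnutls API; the CONFIG_TLS_PRIORITY macro name is an assumption, and the "@QEMU,SYSTEM" fallback form needs a gnutls new enough to accept multiple keywords:
    #include <gnutls/gnutls.h>
    #include <stdio.h>

    static int apply_priority(gnutls_session_t session, const char *prio)
    {
        const char *errpos = NULL;
        int ret = gnutls_priority_set_direct(session, prio, &errpos);
        if (ret < 0) {
            fprintf(stderr, "bad priority '%s' near '%s': %s\n",
                    prio, errpos ? errpos : "?", gnutls_strerror(ret));
        }
        return ret;
    }
    /* e.g. apply_priority(sess, CONFIG_TLS_PRIORITY), where the macro would
     * be generated by ./configure --tls-priority=@QEMU,SYSTEM */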
|
|
The gnutls default priority is either "NORMAL" (most historical
versions of gnutls) which is a built-in label in gnutls code,
or "@SYSTEM" (latest gnutls on Fedora at least) which refers
to an admin customizable entry in a gnutls config file.
Regardless of which default is used by a distro, they are both
global defaults applying to all applications using gnutls. If
a single application on the system needs to use a weaker set
of crypto priorities, this potentially forces the weakness onto
all applications. Or, conversely, if a single application wants a
stronger default than all others, it can't do this via the global
config file.
This adds an extra parameter to the tls credential object which
allows the mgmt app / user to explicitly provide a priority
string to QEMU when configuring TLS.
For example, to use the "NORMAL" priority, but disable SSL 3.0
one can now configure QEMU thus:
$QEMU -object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\
priority="NORMAL:-VERS-SSL3.0" \
..other args...
If creating tls-creds-anon, whatever priority the user specifies
will always have "+ANON-DH" appended to it, since that's mandatory
to make the anonymous credentials work.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
Wire up the nettle and gcrypt hash backends so that they can
support the sha224, sha384, sha512 and ripemd160 hash algorithms.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
'remotes/pmaydell/tags/pull-target-arm-20160704' into staging
target-arm queue:
* fix semihosting SYS_HEAPINFO call for A64 guests
* fix crash if guest tries to write to ROM on imx boards
* armv7m_nvic: fix crash for debugger reads from some registers
* virt: mark PCIe host controller as dma-coherent in the DT
* add data-driven register API
* Xilinx Zynq: add devcfg device model
* m25p80: fix various bugs
* ast2400: add SMC controllers and SPI flash slaves
# gpg: Signature made Mon 04 Jul 2016 13:17:34 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20160704: (23 commits)
ast2400: create SPI flash slaves
ast2400: add SPI flash slaves
ast2400: add SMC controllers (FMC and SPI)
m25p80: qdev-ify drive property
m25p80: change cur_addr to 32 bit integer
m25p80: avoid out of bounds accesses
m25p80: do not put iovec on the stack
ssi: change ssi_slave_init to be a realize ops
xilinx_zynq: Connect devcfg to the Zynq machine model
dma: Add Xilinx Zynq devcfg device model
register: Add block initialise helper
register: QOMify
register: Define REG and FIELD macros
register: Add Memory API glue
register: Add Register API
bitops: Add MAKE_64BIT_MASK macro
hw/arm/virt: mark the PCIe host controller as DMA coherent in the DT
armv7m_nvic: Use qemu_get_cpu(0) instead of current_cpu
memory: Assert that memory_region_init_rom_device() ops aren't NULL
imx: Use memory_region_init_rom() for ROMs
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
into staging
seabios: update from 1.9.1 to 1.9.3
# gpg: Signature made Mon 04 Jul 2016 10:29:47 BST
# gpg: using RSA key 0x4CB6D8EED3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
# Primary key fingerprint: A032 8CFF B93A 17A7 9901 FE7D 4CB6 D8EE D3E8 7138
* remotes/kraxel/tags/pull-seabios-20160704-1:
seabios: update binaries from 1.9.1 to 1.9.3
seabios: update 128k config
bios: Add fast variant of SeaBIOS for use with -kernel on x86.
seabios: update submodule from 1.9.1 to 1.9.3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
A set of SPI flash slaves is attached under the flash controllers of
the palmetto platform. "n25q256a" flash modules are used for the BMC
and "mx25l25635e" for the host. These types are common in the
OpenPower ecosystem.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-9-git-send-email-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Each controller on the ast2400 has a memory range on which it maps its
flash module slaves. Each slave is assigned a memory segment for its
mapping that can be changed at boot time with the Segment Address
Register. This is not supported in the current implementation so we
are using the defaults provided by the specs.
Each SPI flash slave can then be accessed in two modes: Command and
User. When in User mode, accesses to the memory segment of the slaves
are translated into SPI transfers. When in Command mode, the HW
generates the SPI commands automatically and the memory segment is
accessed as if doing a MMIO. Other SPI controllers call that mode
linear addressing mode.
For this purpose, we add below each controller an array of
structs gathering, for each SPI flash module, a segment rank, a
MemoryRegion to handle the memory accesses and the associated SPI
slave device, which should be an m25p80.
Only the User mode is supported for now but we are preparing ground
for the Command mode. The framework is sufficient to support Linux.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-8-git-send-email-clg@kaod.org
[PMM: Use g_new0() rather than g_malloc0()]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The Aspeed AST2400 soc includes a static memory controller for the BMC
which supports NOR, NAND and SPI flash memory modules. This controller
has two modes: the SMC for the legacy interface, which supports only
one module, and the FMC for the new interface, which supports up to five
modules. The AST2400 also includes a SPI-only controller used for the
host firmware, commonly called BIOS on Intel. It can be used in three
modes: SPI master, SPI slave and SPI pass-through.
Below is the initial framework for the SMC controller (FMC mode only)
and the SPI controller: the sysbus object, MMIO for registers
configuration and controls. Each controller has a SPI bus and a
configurable number of CS lines for SPI flash slaves.
The differences between the controllers are small, so they are
abstracted using indirections on the register numbers.
Only SPI flash modules are supported.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-7-git-send-email-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added one missing error_propagate]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This allows specifying the property via -drive if=none and creating
the flash device with -device.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-6-git-send-email-clg@kaod.org
[clg: added an extra fix for sabrelite_init()
keeping the test on flash_dev did not seem necessary. ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The maximum amount of storage that can be addressed by the m25p80 command
set is 4 GiB. However, cur_addr is currently a 64-bit integer. To avoid
further problems related to sign extension of signed 32-bit integer
expressions, change cur_addr to a 32 bit integer. Preserve migration
format by adding a dummy 4-byte field in place of the (big-endian)
high four bytes in the formerly 64-bit cur_addr field.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-5-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
s->cur_addr can be made to point outside s->storage, either by
writing a value >= 128 to s->ear (because s->ear * MAX_3BYTES_SIZE
is a signed integer and sign-extends into the 64-bit cur_addr),
or just by writing an address beyond the size of the flash being
emulated. Avoid the sign extension to make the code cleaner, and
on top of that mask s->cur_addr to s->size.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-4-git-send-email-clg@kaod.org
Reviewed by: Marcin Krzeminski <marcin.krzeminski@nokia.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
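A standalone illustration of the fix; MAX_3BYTES_SIZE and the variable names mirror the commit text, while the flash size is made up:
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_3BYTES_SIZE (1 << 24)      /* 16M bank selected by the EAR */

    int main(void)
    {
        uint8_t ear = 128;                 /* extended address register */
        uint32_t size = 1u << 25;          /* pretend a 32M flash (power of two) */
        /* Buggy shape (avoided here): ear * MAX_3BYTES_SIZE is evaluated as a
         * signed int, overflows for ear >= 128, and the negative result would
         * sign-extend when stored into a 64-bit cur_addr. */
        uint32_t cur_addr = ((uint32_t)ear * MAX_3BYTES_SIZE) & (size - 1);
        printf("cur_addr = 0x%" PRIx32 "\n", cur_addr);   /* 0x0: wrapped into range */
        return 0;
    }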
|
|
When doing a read-modify-write cycle, QEMU uses the iovec after returning
from blk_aio_pwritev. m25p80 puts the iovec on the stack of blk_aio_pwritev's
caller, which causes trouble in this case. This has been a problem
since commit 243e6f6 ("m25p80: Switch to byte-based block access",
2016-05-12) started doing writes at a smaller granularity than 512 bytes.
In principle however it could have broken before when using -drive
if=mtd,cache=none on a disk with 4K native sectors.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-3-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This enables qemu to handle late inits and report errors. All the SSI
slave routine names were changed accordingly. Code was modified to
handle errors when possible (m25p80 and ssi-sd).
Tested with the m25p80 slave object.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1467138270-32481-2-git-send-email-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 85f39c9a13569b1113dacac3b952b0af54fc1260.1467053537.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add a minimal model for the devcfg device which is part of Zynq.
This model supports DMA capabilities and interrupt generation.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 83df49d8fa2d203a421ca71620809e4b04754e65.1467053537.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add a helper that will scan a static RegisterAccessInfo Array
and populate a container MemoryRegion with registers as defined.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 347b810b2799e413c98d5bbeca97bcb1557946c3.1467053537.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
QOMify registers as a child of TYPE_DEVICE. This allows registers to
define GPIOs.
Define an init helper that will do QOM initialisation.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 2545f71db26bf5586ca0c08a3e3cf1b217450552.1467053537.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|