|
Commit 88c1ee73d3231c74ff90bcfc084a7589670ec244
("char/serial: Fix emptyness check")
still causes extra NULL byte(s) to be sent.
So if the FIFO is empty, do not send an extra NULL byte.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Message-id: 1395160174-16006-1-git-send-email-dslutz@verizon.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
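A minimal sketch of the guard in serial_xmit(), assuming the fifo8
helpers; details illustrative, not the literal patch:
    if (s->fcr & UART_FCR_FE) {
        if (fifo8_is_empty(&s->xmit_fifo)) {
            return FALSE;                    /* FIFO empty: send nothing */
        }
        s->tsr = fifo8_pop(&s->xmit_fifo);   /* only pop real data */
    }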
|
|
spice: monitors_config: check pointer before dereferencing
# gpg: Signature made Mon 07 Apr 2014 11:19:19 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/spice/tags/pull-spice-6:
spice: monitors_config: check pointer before dereferencing
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
gtk: pointer fixes from Takashi Iwai.
# gpg: Signature made Mon 07 Apr 2014 09:51:52 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/kraxel/tags/pull-gtk-4:
ui: Update MAINTAINERS entry.
gtk: Remember the last grabbed pointer position
gtk: Fix the relative pointer tracking mode
gtk: Use gtk generic event signal instead of motion-notify-event
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Reported-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
With Amazon eating Anthony's time, status "Maintained" certainly isn't
true any more. Update the entry accordingly.
Also add myself, so scripts/get_maintainer.pl will Cc: me, to reduce
the chance that ui patches fall through the cracks on our pretty loaded
qemu-devel mailing list.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
It's pretty annoying that the pointer reappears at a random place
after the input is grabbed and ungrabbed. Better to restore it to the
original position where the pointer was grabbed.
Reference: https://bugzilla.novell.com/show_bug.cgi?id=849587
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Tested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
The relative pointer tracking mode was still buggy even after the
previous fix of the motion-notify-event, since the events are filtered
out when the pointer moves outside the drawing window due to the
boundary check that only applies to the absolute mode.
This patch fixes the issue by moving that boundary check into the if
block of the absolute mode, and keeping the coordinate in the relative
mode even if it's outside the drawing area. But since this means the
coordinate (last_x, last_y) can legitimately point to (-1,-1),
introduce a new flag to indicate that the last coordinate has been
updated.
Reference: https://bugzilla.novell.com/show_bug.cgi?id=849587
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Tested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
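A hedged sketch of the resulting gd_motion_event() logic (2.0-era input
API; the last_set flag is the one introduced here):
    if (qemu_input_is_absolute()) {
        if (x < 0 || y < 0 ||
            x >= surface_width(s->ds) || y >= surface_height(s->ds)) {
            return TRUE;            /* boundary check: absolute mode only */
        }
        qemu_input_queue_abs(s->dcl.con, INPUT_AXIS_X, x,
                             surface_width(s->ds));
        qemu_input_queue_abs(s->dcl.con, INPUT_AXIS_Y, y,
                             surface_height(s->ds));
    } else if (s->last_set) {       /* relative mode needs a valid last_x/y */
        qemu_input_queue_rel(s->dcl.con, INPUT_AXIS_X, x - s->last_x);
        qemu_input_queue_rel(s->dcl.con, INPUT_AXIS_Y, y - s->last_y);
    }
    s->last_x = x;
    s->last_y = y;
    s->last_set = true;             /* (-1,-1) can now be a real position */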
|
|
The GDK motion-notify-event isn't generated when the pointer goes out
of the target window, even if the pointer is grabbed, which essentially
means losing pointer tracking in the GTK UI.
Meanwhile the generic "event" signal is still sent while the pointer is
grabbed, so we can use this and pick up the motion notify events
manually there instead.
Reference: https://bugzilla.novell.com/show_bug.cgi?id=849587
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Tested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
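A minimal sketch of the approach; the signal names are standard GTK,
the handler name follows the file's conventions:
    static gboolean gd_event(GtkWidget *widget, GdkEvent *event,
                             void *opaque)
    {
        if (event->type == GDK_MOTION_NOTIFY) {
            /* dispatched manually: this path still fires while grabbed */
            return gd_motion_event(widget, &event->motion, opaque);
        }
        return FALSE;   /* let the remaining handlers run */
    }

    g_signal_connect(s->drawing_area, "event",
                     G_CALLBACK(gd_event), s);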
|
|
The subsection already exists in one well-known enterprise Linux
distribution, but for some strange reason the fields were swapped
when forward-porting the patch to upstream.
Limit headaches for said enterprise Linux distributor when the
time comes to rebase their version of QEMU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1396452782-21473-1-git-send-email-pbonzini@redhat.com
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Block patches for 2.0.0
# gpg: Signature made Fri 04 Apr 2014 20:25:08 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
* remotes/kevin/tags/for-upstream:
dataplane: replace iothread object_add() with embedded instance
iothread: make IOThread struct definition public
dma-helpers: Initialize DMAAIOCB in_cancel flag
block: Check bdrv_getlength() return value in bdrv_append_temp_snapshot()
block: Fix snapshot=on for protocol parsed from filename
qemu-iotests: Remove CR line endings in reference output
block: Don't parse 'filename' option
qcow2: Put cache reference in error case
qcow2: Flush metadata during read-only reopen
iscsi: Don't set error if already set in iscsi_do_inquiry
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Before IOThread was its own object, each virtio-blk device would create
its own internal thread. We need to preserve this behavior for
backwards compatibility when users do not specify -device
virtio-blk-pci,iothread=<id>.
This patch changes how the internal IOThread object is created.
Previously we used the monitor object_add() function, which is really a
layering violation. The problem is that this needs to assign a name but
we don't have a name for this internal object.
Generating names for internal objects is a pain, and even worse, they
may collide with user-defined names.
Paolo Bonzini <pbonzini@redhat.com> suggested that the internal IOThread
object should not be named. This way the conflict cannot happen and we
no longer need object_add().
One gotcha is that internal IOThread objects will not be listed by the
query-iothreads command since they are not named. This is okay though
because query-iothreads is new and the internal IOThread is just for
backwards compatibility. New users should explicitly define IOThread
objects.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
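A hedged sketch of the embedding (field names illustrative,
object_initialize() per the 2.0 QOM API):
    /* embedded in the dataplane state instead of object_add()ed by
     * name, so no user-visible name exists that could collide */
    object_initialize(&s->internal_iothread_obj,
                      sizeof(s->internal_iothread_obj),
                      TYPE_IOTHREAD);
    s->iothread = &s->internal_iothread_obj;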
|
|
Make the IOThread struct definition public so objects can be embedded in
parent structs.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Initialize the dbs->in_cancel flag in dma_bdrv_io(), since qemu_aio_get()
does not return zero-initialized memory. Spotted by the clang sanitizer
(which complained when the value loaded in dma_complete() was not valid
for a bool type); this might have resulted in leaking the AIO block.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
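The fix itself is a one-liner in dma_bdrv_io(), roughly:
    DMAAIOCB *dbs = qemu_aio_get(&dma_aiocb_info, bs, cb, opaque);
    /* qemu_aio_get() does not zero the AIOCB; without this,
     * dma_complete() reads an uninitialized bool and may skip
     * freeing the block */
    dbs->in_cancel = false;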
|
|
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
Since commit 9fd3171a, BDRV_O_SNAPSHOT uses an option QDict to specify
the originally requested image as the backing file of the newly created
temporary snapshot. This means that the filename is stored in
"file.filename", which is an option that is not parsed for protocol
names. Therefore things like -drive file=nbd:localhost:10809 were
broken because it looked for a local file with the literal name
'nbd:localhost:10809'.
This patch changes the way BDRV_O_SNAPSHOT works once again. We now open
the originally requested image as normal, and then do a similar
operation as for live snapshots to put the temporary snapshot on top.
This way, both driver specific options and parsed filenames work.
As a nice side effect, this results in code movement to factor
bdrv_append_temp_snapshot() out. This is a good preparation for moving
its call to drive_init() and friends eventually.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
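A hedged sketch of the reworked flow in bdrv_open();
bdrv_append_temp_snapshot() is the helper this patch factors out, the
surrounding details are illustrative:
    /* the originally requested image was opened as normal above,
     * with BDRV_O_SNAPSHOT masked out of the flags */
    if (snapshot_flags) {
        bdrv_append_temp_snapshot(bs, &local_err);
        if (local_err) {
            error_propagate(errp, local_err);
            goto close_and_fail;
        }
    }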
|
|
If the guest attempts to execute from unreadable memory, this will
cause us to longjmp back to the main loop from inside the
target frontend decoder. For linux-user mode, this means we will
still hold the tb_ctx.tb_lock, and will deadlock when we try to
start executing code again. Unlock the lock in the return-from-longjmp
code path to avoid this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Andrei Warkentin <andrey.warkentin@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
When checking a page range, if we found that a page was
made read-only by QEMU because it contained translated code,
we were incorrectly returning immediately after unprotecting
that page, rather than continuing to check the entire range,
so we might fail to unprotect pages later in the range, or
might incorrectly return a "success" result even if later
pages were not writable.
In particular, this could cause segfaults in a case where
signals are delivered back to back on a target architecture
which uses trampoline code in the stack frame (as AArch64
currently does). The second signal causes a segfault because
the frame cannot be written to (it was protected because
we translated and executed the restorer trampoline, and the
unprotect logic did not unprotect the whole range).
Signed-off-by: Andrei Warkentin <andrey.warkentin@gmail.com>
[PMM: expanded commit message a bit]
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
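A hedged sketch of the corrected loop in page_check_range() (the early
"return 0" after page_unprotect() is what got removed):
    for (addr = start; len != 0;
         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
        PageDesc *p = page_find(addr >> TARGET_PAGE_BITS);
        if (!p) {
            return -1;
        }
        if ((flags & PAGE_WRITE) && !(p->flags & PAGE_WRITE)) {
            if (!(p->flags & PAGE_WRITE_ORG) ||
                !page_unprotect(addr, 0, NULL)) {
                return -1;
            }
            /* keep scanning: later pages may still need unprotecting */
        }
    }
    return 0;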
|
|
For the machine models which can have a Cortex-A15 CPU (vexpress-a15 and
midway), silently continue if the CPU object has no reset-cbar property
rather than failing. This allows these boards to be used under KVM with
the "-cpu host" option, since the 'host' CPU object has no reset-cbar
property.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Rob Herring <rob.herring@linaro.org>
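A minimal sketch using the 2.0-era QOM property API; periphbase stands
in for the board's MPCore peripheral base address:
    if (object_property_find(cpuobj, "reset-cbar", NULL)) {
        /* "-cpu host" has no reset-cbar property: skip silently */
        object_property_set_int(cpuobj, periphbase, "reset-cbar",
                                &error_abort);
    }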
|
|
If the user passes an unknown CPU name via the '-cpu' option, exit
with an error message rather than segfaulting.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Rob Herring <rob.herring@linaro.org>
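The shape of the fix, sketched with the 2.0-era QOM calls (error
wording illustrative):
    ObjectClass *cpu_oc = cpu_class_by_name(TYPE_ARM_CPU, cpu_model);
    if (!cpu_oc) {
        fprintf(stderr, "Unable to find CPU definition '%s'\n", cpu_model);
        exit(1);
    }
    cpuobj = object_new(object_class_get_name(cpu_oc));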
|
|
qemu doesn't print these CRs any more. The test still didn't fail,
because the output comparison ignores line endings, but the change
turns up every time you want to update the reference output.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
When using the QDict option 'filename', it is supposed to be interpreted
literally. The code did correctly avoid guessing the protocol from any
string before the first colon, but it still called bdrv_parse_filename()
which would, for example, incorrectly remove a 'file:' prefix in the
raw-posix driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
|
|
When qcow2_get_cluster_offset() sees a zero cluster in a version 2
image, it (rightfully) returns an error. But in doing so it shouldn't
leak an L2 table cache reference.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
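The error path in qcow2_get_cluster_offset(), sketched with the
2.0-era cache API:
    if (s->qcow_version < 3) {
        /* drop the L2 table cache reference before bailing out */
        qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
        return -EIO;
    }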
|
|
If lazy refcounts are enabled for a backing file, committing to this
backing file may leave it in a dirty state even if the commit succeeds.
The reason is that the bdrv_flush() call in bdrv_commit() doesn't flush
refcount updates with lazy refcounts enabled, and qcow2_reopen_prepare()
doesn't take care to flush metadata.
In order to fix this, this patch also fixes qcow2_mark_clean(), which
contains another ineffective bdrv_flush() call because lazy refcounts are
disabled only afterwards. All existing callers of qcow2_mark_clean()
either don't modify refcounts or already flush manually, so this only
fixes a latent, not yet actually triggerable, bug.
Another instance of the same problem is live snapshots. Again, a real
corruption is prevented by an explicit flush for non-read-only images in
external_snapshot_prepare(), but images using lazy refcounts stay dirty.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
This eliminates the possible assertion failure in error_setg().
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
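A hedged sketch of the guard in iscsi_do_inquiry(); error_setg()
asserts the destination error is still unset, hence the check (helper
use and wording assumed):
    if (task == NULL || task->status != SCSI_STATUS_GOOD) {
        if (!error_is_set(errp)) {   /* don't overwrite an earlier error */
            error_setg(errp, "iSCSI: Inquiry command failed");
        }
        goto fail;
    }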
|
|
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
* remotes/riku/for-2.0:
linux-user: pass correct host flags to accept4()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
FreeBSD 10.0-RELEASE has bswap16() etc. macros defined in sys/endian.h,
which leads to a conflict with our static inline definitions.
Force using the system version of the macros.
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
Tested-by: Ed Maste <emaste@freebsd.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
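A hedged sketch of the shape of the solution in include/qemu/bswap.h
(the exact preprocessor conditions differ):
    #if defined(__FreeBSD__)
    # include <sys/endian.h>   /* use the system bswap16/32/64 macros */
    #else
    static inline uint16_t bswap16(uint16_t x)
    {
        return (x >> 8) | (x << 8);
    }
    /* ... bswap32/bswap64 likewise ... */
    #endif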
|
|
Commit 6f1834a2b exposed a bug in openpic_kvm where we didn't filter
out memory events happening on regions other than the one we actually
want to know about.
Add proper filtering, fixing the e500plat target with KVM.
Signed-off-by: Alexander Graf <agraf@suse.de>
Message-id: 1396431718-14908-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
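The core of the fix, sketched against the MemoryListener callbacks
(struct field names from openpic_kvm.c, partly from memory):
    static void kvm_openpic_region_add(MemoryListener *listener,
                                       MemoryRegionSection *section)
    {
        KVMOpenPICState *opp = container_of(listener, KVMOpenPICState,
                                            mem_listener);

        /* ignore events on regions that are not ours */
        if (section->mr != &opp->mem) {
            return;
        }
        /* ... map the window as before ... */
    }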
|
|
* remotes/bonzini/scsi-next:
iscsi: always query max WRITE SAME length
iscsi: ignore flushes on scsi-generic devices
iscsi: recognize "invalid field" ASCQ from WRITE SAME command
scsi-bus: remove bogus assertion
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Change over to my proper Xilinx email. s/petalogix.com/xilinx.com/.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: cdff0c388c70df06217c467dcfb89267b7911feb.1396506607.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Max WRITE SAME length is also used when the UNMAP bit is zero, so it
should be queried even if LBPWS=0. Same for the optimal transfer
length.
However, the write_zeroes_alignment only matters for UNMAP=1 so we
still restrict it to LBPWS=1.
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Non-block SCSI devices do not support flushing, but we may still send
them requests via bdrv_flush_all. Just ignore them.
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
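A sketch of the early return (bs->sg is the scsi-generic flag):
    static coroutine_fn int iscsi_co_flush(BlockDriverState *bs)
    {
        if (bs->sg) {
            return 0;   /* nothing sensible to flush on an sg LUN */
        }
        /* ... issue SYNCHRONIZE CACHE (10) as before ... */
    }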
|
|
Some targets may return "invalid field" as the ASCQ from WRITE SAME
if they support the command only without the UNMAP field. Recognize
that, and return ENOTSUP just like for "invalid operation code".
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This assertion is invalid, because get_sg_list can return an
empty sg-list for commands that transfer no data (such as
SYNCHRONIZE CACHE).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request'
into staging
Tracing pull request
# gpg: Signature made Tue 01 Apr 2014 19:08:48 BST using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
* remotes/stefanha/tags/tracing-pull-request:
trace: add workaround for SystemTap PR13296
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
SystemTap sdt.h sometimes results in compiled probes without sufficient
information to extract arguments. This can be solved in a slightly
hacky way by encouraging the compiler to place arguments into registers.
This patch fixes the apic_reset_irq_delivered() trace event on Fedora 20
with gcc-4.8.2-7.fc20 and systemtap-sdt-devel-2.4-2.fc20 on x86_64.
Signed-off-by: Frank Ch. Eigler <fche@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
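One way to encourage register placement, shown as an illustrative
pattern only (the actual tracetool change may differ):
    /* force the probe argument through a register so sdt.h can find it */
    #define TRACE_REG_ARG(x) __extension__ ({   \
        __typeof__(x) _v = (x);                 \
        __asm__ __volatile__("" : "+r"(_v));    \
        _v;                                     \
    })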
|
|
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request'
into staging
Block pull request
# gpg: Signature made Tue 01 Apr 2014 18:11:16 BST using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
* remotes/stefanha/tags/block-pull-request: (51 commits)
qcow2: link all L2 meta updates in preallocate()
parallels: Sanity check for s->tracks (CVE-2014-0142)
parallels: Fix catalog size integer overflow (CVE-2014-0143)
qcow2: Limit snapshot table size
qcow2: Check maximum L1 size in qcow2_snapshot_load_tmp() (CVE-2014-0143)
qcow2: Fix L1 allocation size in qcow2_snapshot_load_tmp() (CVE-2014-0145)
qcow2: Fix NULL dereference in qcow2_open() error path (CVE-2014-0146)
qcow2: Fix copy_sectors() with VM state
block: Limit request size (CVE-2014-0143)
block: vdi bounds check qemu-io tests
dmg: prevent chunk buffer overflow (CVE-2014-0145)
dmg: use uint64_t consistently for sectors and lengths
dmg: sanitize chunk length and sectorcount (CVE-2014-0145)
dmg: use appropriate types when reading chunks
dmg: drop broken bdrv_pread() loop
dmg: prevent out-of-bounds array access on terminator
dmg: coding style and indentation cleanup
qcow2: Fix new L1 table size check (CVE-2014-0143)
qcow2: Protect against some integer overflows in bdrv_check
qcow2: Fix types in qcow2_alloc_clusters and alloc_clusters_noref
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
input bugfixes for 2.0
# gpg: Signature made Tue 01 Apr 2014 10:16:43 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/kraxel/tags/pull-input-7:
input: add sanity check
input: mouse_set should check input device type.
input: fix input_event_key_number trace event
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
preallocate() only links the first QCowL2Meta's data clusters into the
L2 table and ignores any chained QCowL2Metas in the linked list.
Chains of QCowL2Meta structs are built up when contiguous clusters span
L2 tables. Each QCowL2Meta describes one L2 table update. This is a
rare case in preallocate() but can happen.
This patch fixes preallocate() by iterating over the whole list of
QCowL2Metas. Compare with the qcow2_co_writev() function's
implementation, which is similar but also handles request
dependencies. preallocate() only performs one allocation at a time so
there can be no dependencies.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
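A sketch of the corrected loop (QCowL2Meta chains via the next pointer,
as in the 2.0-era qcow2 code):
    while (meta) {
        QCowL2Meta *next = meta->next;

        ret = qcow2_alloc_cluster_link_l2(bs, meta);
        if (ret < 0) {
            return ret;
        }
        /* no dependent requests, but drop it from the in-flight list */
        QLIST_REMOVE(meta, next_in_flight);

        g_free(meta);
        meta = next;
    }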
|
|
This avoids a possible division by zero.
Convert s->tracks to unsigned as well, because that feels better than
surviving only because the results of calculations with s->tracks
happen to be converted to unsigned anyway.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
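The check itself, sketched against the parallels header parsing:
    if (le32_to_cpu(ph.tracks) == 0) {
        error_setg(errp, "Invalid image: zero sectors per track");
        ret = -EINVAL;
        goto fail;
    }
    s->tracks = le32_to_cpu(ph.tracks);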
|
|
The first test case would cause a huge memory allocation, leading to a
qemu abort; the second one causes a too-small malloc() for the catalog
(smaller than s->catalog_size), which results in a read-only
out-of-bounds array access and, on big endian hosts, an endianness
conversion for an undefined memory area.
The sample image used here is not an original Parallels image. It was
created using a hex editor on the basis of the struct that qemu uses.
Good enough for trying to crash the driver, but not for ensuring
compatibility.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Even with a limit of 64k snapshots, each snapshot could have a filename
and an ID of up to 64k each, which would still lead to pretty large
allocations and could potentially make qemu abort. Limit the
total size of the snapshot table to an average of 1k per entry when
the limit of 64k snapshots is fully used. This should be plenty for any
reasonable user.
This also fixes potential integer overflows of s->snapshot_size.
Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
This avoids an unbounded allocation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
For the L1 table to be loaded for an internal snapshot, the code allocated
only enough memory to hold the currently active L1 table. If the
snapshot's L1 table is actually larger than the current one, this leads
to a buffer overflow.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
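The gist of the fix, sketched (size taken from the snapshot being
loaded, not from the active table):
    /* allocate for the snapshot's L1 size, not the currently active one */
    new_l1_bytes = sn->l1_size * sizeof(uint64_t);
    new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));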
|
|
The qcow2 code assumes that s->snapshots is non-NULL if s->nb_snapshots
!= 0. By having the initialisation of both fields separated in
qcow2_open(), any error occurring in between would cause the error path
to dereference NULL in qcow2_free_snapshots() if the image had any
snapshots.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
bs->total_sectors is not the highest possible sector number that could
be involved in a copy on write operation: VM state is after the end of
the virtual disk. This resulted in wrong values for the number of
sectors to be copied (n).
The code that checks for the end of the image isn't required any more
because the code hasn't been calling the block layer's bdrv_read() for a
long time; instead, it directly calls qcow2_readv(), which doesn't error
out on VM state sector numbers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Limiting the size of a single request to INT_MAX not only fixes a
direct integer overflow in bdrv_check_request() (which would only
trigger bad behaviour with ridiculously huge images, as in close to
2^64 bytes), but can also prevent overflows in all block drivers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
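The check, roughly as it reads in block.c:
    static int bdrv_check_request(BlockDriverState *bs, int64_t sector_num,
                                  int nb_sectors)
    {
        if (nb_sectors < 0 || nb_sectors > INT_MAX / BDRV_SECTOR_SIZE) {
            return -EIO;   /* caps any single request at INT_MAX bytes */
        }
        return bdrv_check_byte_request(bs, sector_num * BDRV_SECTOR_SIZE,
                                       nb_sectors * BDRV_SECTOR_SIZE);
    }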
|
|
This test checks for proper bounds checking of some VDI input
headers. The following is checked:
1. Max image size (1024TB) with the appropriate Blocks In Image
value (0x3fffffff) is detected as valid.
2. Image size exceeding max (1024TB) is seen as invalid
3. Valid image size but with Blocks In Image value that is too
small fails
4. Blocks In Image size exceeding max (0x3fffffff) is seen as invalid
5. 64MB image, with 64 Blocks In Image, and 1MB Block Size is seen
as valid
6. Block Size < 1MB not supported
7. Block Size > 1MB not supported
[Max Reitz <mreitz@redhat.com> pointed out that "1MB + 1" in the test
case is wrong. Change to "1MB + 64KB" to match the 0x110000 value.
--Stefan]
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Both compressed and uncompressed I/O is buffered. dmg_open() calculates
the maximum buffer size needed from the metadata in the image file.
There is currently a buffer overflow since ->lengths[] is accounted
against the maximum compressed buffer size but actually uses the
uncompressed buffer:
    switch (s->types[chunk]) {
    case 1: /* copy */
        ret = bdrv_pread(bs->file, s->offsets[chunk],
                         s->uncompressed_chunk, s->lengths[chunk]);
We must account against the maximum uncompressed buffer size for type=1
chunks.
This patch fixes the maximum buffer size calculation to take into
account the chunk type. It is critical that we update the correct
maximum since there are two buffers ->compressed_chunk and
->uncompressed_chunk.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
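A hedged sketch of the corrected sizing loop in dmg_open() (details
illustrative; DIV_ROUND_UP is the standard QEMU macro):
    /* size each buffer according to what the chunk type actually uses */
    switch (s->types[i]) {
    case 1: /* copy: bdrv_pread() fills the *uncompressed* buffer */
        uncompressed_sectors = DIV_ROUND_UP(s->lengths[i], 512);
        break;
    case 0x80000005: /* zlib: lengths[] is the compressed input size */
        compressed_sectors = DIV_ROUND_UP(s->lengths[i], 512);
        uncompressed_sectors = s->sectorcounts[i];
        break;
    }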
|
|
The DMG metadata is stored as uint64_t, so use the same type for
sector_num. int was a particularly poor choice since it is only 32-bit
and would truncate large values.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|