Age  Commit message  Author
2016-09-05  block: Accept node-name for drive-mirror  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts drive-mirror to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
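The same lookup pattern recurs through this whole series. A hedged sketch of the idea (bdrv_lookup_bs(), error_setg() and bdrv_has_blk() are existing QEMU helpers; the function name and the exact root-node condition are illustrative, not the literal qmp_get_root_bs() implementation):

    #include "qemu/osdep.h"
    #include "block/block.h"
    #include "sysemu/block-backend.h"
    #include "qapi/error.h"

    /* Resolve either a BlockBackend name or a node-name, then insist that the
     * result is a root node.  Failures are reported via error_setg(), which is
     * why callers now see GenericError instead of DeviceNotFound. */
    static BlockDriverState *get_root_bs_sketch(const char *name, Error **errp)
    {
        BlockDriverState *bs = bdrv_lookup_bs(name, name, errp);

        if (!bs) {
            return NULL;
        }
        if (!bdrv_has_blk(bs)) {    /* assumed root-node check */
            error_setg(errp, "Node '%s' is not a root node", name);
            return NULL;
        }
        return bs;
    }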
2016-09-05  block: Accept node-name for drive-backup  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts drive-backup and the corresponding transaction action to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for change-backing-file  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts change-backing-file to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for blockdev-snapshot-internal-sync  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts blockdev-snapshot-internal-sync to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for blockdev-snapshot-delete-internal-sync  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts blockdev-snapshot-delete-internal-sync to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for blockdev-mirror  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts blockdev-mirror to accept a node-name without lifting the restriction that we're operating at a root node. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for blockdev-backup  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts blockdev-backup and the corresponding transaction action to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-09-05  block: Accept node-name for block-commit  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts block-commit to accept a node-name without lifting the restriction that we're operating at a root node. As libvirt makes use of the DeviceNotFound error class, we must add explicit code to retain this behaviour because qmp_get_root_bs() only returns GenericErrors. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com>
2016-09-05  block: Accept node-name for block-stream  (Kevin Wolf)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts block-stream to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com>
2016-09-05  scsi: scsi-cd without drive property for empty drive  (Kevin Wolf)
This allows the creation of an empty scsi-cd device without manually creating a BlockBackend. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2016-09-05  ide: ide-cd without drive property for empty drive  (Kevin Wolf)
This allows the creation of an empty ide-cd device without manually creating a BlockBackend. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Eric Blake <eblake@redhat.com>
2016-09-05  Open 2.8 development tree  (Peter Maydell)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-09-02  Update version for v2.7.0 release  (tag: v2.7.0)  (Peter Maydell)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-30  Update version for v2.7.0-rc5 release  (tag: v2.7.0-rc5)  (Peter Maydell)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-30  9pfs: handle walk of ".." in the root directory  (Greg Kurz)
The 9P spec at http://man.cat-v.org/plan_9/5/intro says: All directories must support walks to the directory .. (dot-dot) meaning parent directory, although by convention directories contain no explicit entry for .. or . (dot). The parent of the root directory of a server's tree is itself. This means that a client cannot walk further than the root directory exported by the server. In other words, if the client wants to walk "/.." or "/foo/../..", the server should answer like the request was to walk "/". This patch just does that: - we cache the QID of the root directory at attach time - during the walk we compare the QID of each path component with the root QID to detect if we're in a "/.." situation - if so, we skip the current component and go to the next one Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
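A sketch of the clamping described above (wnames, cur_qid and root_qid are illustrative names for the walked components, the QID of the current node and the cached root QID; this is not the literal v9fs_walk() code):

    /* Walk each requested component, but treat ".." at the export root as a
     * no-op: the parent of the root is the root itself. */
    for (i = 0; i < nwnames; i++) {
        if (strcmp(wnames[i], "..") == 0 &&
            memcmp(&cur_qid, &root_qid, sizeof(cur_qid)) == 0) {
            continue;   /* stay at the root instead of walking above it */
        }
        /* ...walk one component and refresh cur_qid from its stat... */
    }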
2016-08-30  9pfs: forbid . and .. in file names  (Greg Kurz)
According to the 9P spec http://man.cat-v.org/plan_9/5/open about the create request: The names . and .. are special; it is illegal to create files with these names. This patch causes the create and lcreate requests to fail with EINVAL if the file name is either "." or "..". Even if it isn't explicitly written in the spec, this patch extends the checking to all requests that may cause a directory entry to be created: - mknod - rename - renameat - mkdir - link - symlink The unlinkat request also gets patched for consistency (even if rmdir("foo/..") is expected to fail according to POSIX.1-2001). The various error values come from the linux manual pages. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
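The check itself is tiny; a self-contained sketch (the helper name is illustrative):

    #include <stdbool.h>
    #include <string.h>

    /* Per the 9P spec, creating an entry named "." or ".." is illegal, so
     * requests that would do so should fail with EINVAL. */
    static bool name_is_dot_or_dotdot(const char *name)
    {
        return strcmp(name, ".") == 0 || strcmp(name, "..") == 0;
    }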
2016-08-30  9pfs: forbid illegal path names  (Greg Kurz)
Empty path components don't make sense for most commands and may cause undefined behavior, depending on the backend. Also, the walk request described in the 9P spec [1] clearly shows that the client is supposed to send individual path components: the official linux client never sends portions of path containing the / character for example. Moreover, the 9P spec [2] also states that a system can decide to restrict the set of supported characters used in path components, with an explicit mention "to remove slashes from name components". This patch introduces a new name_is_illegal() helper that checks the names sent by the client are not empty and don't contain unwanted chars. Since 9pfs is only supported on linux hosts, only the / character is checked at the moment. When support for other hosts (AKA. win32) is added, other chars may need to be blacklisted as well. If a client sends an illegal path component, the request will fail and ENOENT is returned to the client. [1] http://man.cat-v.org/plan_9/5/walk [2] http://man.cat-v.org/plan_9/5/intro Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
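A plausible minimal form of the name_is_illegal() helper described above (a sketch; on Linux hosts only the empty name and the '/' separator need rejecting, as the message says):

    #include <stdbool.h>
    #include <string.h>

    /* A client-supplied path component is illegal if it is empty or contains
     * a path separator.  More characters could be blacklisted for other
     * (e.g. win32) hosts later. */
    static bool name_is_illegal(const char *name)
    {
        return name[0] == '\0' || strchr(name, '/') != NULL;
    }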
2016-08-30  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* pc-bios/optionrom/Makefile fix for -O0 * revert socket_connect change # gpg: Signature made Tue 30 Aug 2016 15:36:59 BST # gpg: using RSA key 0xBFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: optionrom: cope with multiple -O options Revert "Change net/socket.c to use socket_*() functions" Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-30  optionrom: cope with multiple -O options  (Paolo Bonzini)
Reproducer: CFLAGS="-g3 -O0" ./configure --target-list=aarch64-softmmu,arm-softmmu --enable-vhost-net --enable-virtfs Here CFLAGS ends up with "-O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 ... -g3 -O0" and pc-bios/optionrom/Makefile forgets to add the -O2 it needs. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-30Revert "Change net/socket.c to use socket_*() functions"Paolo Bonzini
Since commit 7e8449594c929, the socket connect code is blocking, because calling socket_connect() without callback is blocking. This reverts the commit. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-08-30  translate: early exit in tb_flush if there is no tcg  (Christian Borntraeger)
tb_flush does all kinds of things, which are very TCG specific. As it is called from some places even for KVM (e.g. the gdb server), it is better to detect these cases and do an early exit. This also fixes a crash in the gdb server that was triggered by commit 909eaac9bbc2 ("tb hash: track translated blocks with qht"). Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Reported-by: Richard Henderson <rth@twiddle.net> Reported-by: Brent Baccala <cosine@freesoft.org> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-id: 1472148686-39841-1-git-send-email-borntraeger@de.ibm.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
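The early exit itself amounts to a two-line guard; a sketch of the idea, assuming it sits at the top of tb_flush() (tcg_enabled() is QEMU's existing predicate):

    /* At the start of tb_flush(): */
    if (!tcg_enabled()) {
        return;     /* nothing TCG-specific to flush when running under KVM */
    }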
2016-08-30  ui: fix refresh of VNC server surface  (Daniel P. Berrange)
In the earlier commit c7628bff4138ce906a3620d12e0820c1cf6c140d ("vnc: only alloc server surface with clients connected", Gerd Hoffmann, Fri Oct 30 12:10:09 2015 +0100) the VNC server was changed so that the 'vd->server' pixman image is only allocated when a client is connected. Since then, if a client disconnects and then reconnects to the VNC server, all they will see is a black screen until they do something that triggers a refresh. On a graphical desktop this is rarely noticed since there are many things going on which cause a refresh. On a plain text console it is really obvious since nothing refreshes frequently. The problem is that the VNC server didn't update the guest dirty bitmap, so it still believes its server image is in sync with the guest contents. To fix this we must explicitly mark the entire guest desktop as dirty after re-creating the server surface. Move this logic into vnc_update_server_surface() so it is guaranteed to be called in all code paths that re-create the surface, instead of only in vnc_dpy_switch(). Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Reviewed-by: Peter Lieven <pl@kamp.de> Tested-by: Peter Lieven <pl@kamp.de> Message-id: 1471365032-18096-1-git-send-email-berrange@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-24  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging  (Peter Maydell)
virtio: fixes some bugfixes for virtio balloon is still broken wrt migration Signed-off-by: Michael S. Tsirkin <mst@redhat.com> # gpg: Signature made Tue 23 Aug 2016 17:33:11 BST # gpg: using RSA key 0x281F0DB8D28D5469 # gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" # gpg: aka "Michael S. Tsirkin <mst@redhat.com>" # Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67 # Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469 * remotes/mst/tags/for_upstream: virtio: decrement vq->inuse in virtqueue_discard() virtio: recalculate vq->inuse after migration Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-24  Fix bsd-user build after d915b7bb  (Ed Maste)
Must include "qemu-version.h" for the QEMU_PKGVERSION definition. Signed-off-by: Ed Maste <emaste@freebsd.org> Message-id: 1471877833-52343-1-git-send-email-emaste@freebsd.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-23  virtio: decrement vq->inuse in virtqueue_discard()  (Stefan Hajnoczi)
virtqueue_discard() moves vq->last_avail_idx back so the element can be popped again. It's necessary to decrement vq->inuse to avoid "leaking" the element count. Cc: qemu-stable@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2016-08-23  virtio: recalculate vq->inuse after migration  (Stefan Hajnoczi)
The vq->inuse field is not migrated. Many devices don't hold VirtQueueElements across migration so it doesn't matter that vq->inuse starts at 0 on the destination QEMU. At least virtio-serial, virtio-blk, and virtio-balloon migrate while holding VirtQueueElements. For these devices we need to recalculate vq->inuse upon load so the value is correct. Cc: qemu-stable@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
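A sketch of the recalculation on the destination side, assuming the VirtQueue fields of this era (last_avail_idx, used_idx, inuse); the exact placement inside virtio_load() and the accompanying sanity checks are elided:

    /* Elements still held by the device = popped (last_avail_idx) minus
     * returned (used_idx), modulo 2^16. */
    vq->inuse = (uint16_t)(vq->last_avail_idx - vq->used_idx);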
2016-08-22  Update version for v2.7.0-rc4 release  (tag: v2.7.0-rc4)  (Peter Maydell)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-22  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging  (Peter Maydell)
# gpg: Signature made Mon 22 Aug 2016 09:06:32 BST # gpg: using RSA key 0xEF04965B398D6211 # gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" # gpg: WARNING: This key is not certified with sufficiently trusted signatures! # gpg: It is not certain that the signature belongs to the owner. # Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211 * remotes/jasowang/tags/net-pull-request: e1000e: remove internal interrupt flag slirp: fix segv when init failed Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-22  e1000e: remove internal interrupt flag  (Cao jin)
Commit 66bf7d58 removed the internal MSI state flag E1000E_USE_MSI; E1000E_USE_MSIX is not necessary either, so remove it now. The interrupt flag field intr_state can also be removed. CC: Dmitry Fleytman <dmitry@daynix.com> CC: Jason Wang <jasowang@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Marcel Apfelbaum <marcel@redhat.com> CC: Michael S. Tsirkin <mst@redhat.com> CC: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Acked-by: Dmitry Fleytman <dmitry@daynix.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2016-08-22  slirp: fix segv when init failed  (Marc-André Lureau)
Since commit f6c2e66ae8c8a, slirp uses an exit notifier to call slirp_smb_cleanup. However, if init() failed, the notifier isn't added, and removing it will fail: ==18447== Invalid write of size 8 ==18447== at 0x7EF2B5: notifier_remove (notify.c:32) ==18447== by 0x48E80C: qemu_remove_exit_notifier (vl.c:2661) ==18447== by 0x6A2187: net_slirp_cleanup (slirp.c:134) ==18447== by 0x69419D: qemu_cleanup_net_client (net.c:338) ==18447== by 0x69445B: qemu_del_net_client (net.c:401) ==18447== by 0x6A2B81: net_slirp_init (slirp.c:366) ==18447== by 0x6A4241: net_init_slirp (slirp.c:865) ==18447== by 0x695C6D: net_client_init1 (net.c:1051) ==18447== by 0x695F6E: net_client_init (net.c:1108) ==18447== by 0x696DBA: net_init_netdev (net.c:1498) ==18447== by 0x7F1F99: qemu_opts_foreach (qemu-option.c:1116) ==18447== by 0x696E60: net_init_clients (net.c:1516) ==18447== Address 0x0 is not stack'd, malloc'd or (recently) free'd Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2016-08-19  test-logging: don't hard-code paths in /tmp  (Sascha Silbe)
Since f6880b7f [qemu-log: support simple pid substitution for logs], test-logging creates files with hard-coded names in /tmp. In the best case, this prevents multiple developers from running "make check" on the same machine. In the worst case, it allows for symlink attacks, enabling an attacker to overwrite files that are writable to the developer running "make check". Instead of hard-coding the paths, create a temporary directory using g_dir_make_tmp() and clean it up afterwards. Fixes: f6880b7f ("qemu-log: support simple pid substitution for logs") Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Message-id: 1471545963-11720-3-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
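For reference, g_dir_make_tmp() is stock glib; a minimal sketch of the pattern the test can follow (file names are illustrative):

    #include <glib.h>
    #include <glib/gstdio.h>

    GError *err = NULL;
    gchar *tmp_dir = g_dir_make_tmp("qemu-test_logging.XXXXXX", &err);
    g_assert_no_error(err);

    /* build the log path inside the private directory instead of /tmp */
    gchar *log_path = g_build_filename(tmp_dir, "qemu-log-%d.log", NULL);
    /* ...run the logging test against log_path... */

    /* clean up afterwards */
    g_rmdir(tmp_dir);
    g_free(log_path);
    g_free(tmp_dir);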
2016-08-19  glib: add compatibility implementation for g_dir_make_tmp()  (Sascha Silbe)
We're going to make use of g_dir_make_tmp() in test-logging. Provide a compatibility implementation of it for glib < 2.30. May behave differently in some edge cases (e.g. pattern only at the end of the template, the file name is not part of the error message), but good enough in practice. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Message-id: 1471545963-11720-2-git-send-email-silbe@linux.vnet.ibm.com [PMM: removed variable "template" which caused compilation failures when C++ files include glib-compat.h] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
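A sketch of what such a fallback can look like for glib < 2.30, built on mkdtemp(3); this approximates the idea and is not necessarily the exact compatibility code:

    #include <errno.h>
    #include <stdlib.h>     /* mkdtemp */
    #include <glib.h>

    #if !GLIB_CHECK_VERSION(2, 30, 0)
    /* Create a uniquely named directory under the system tmp dir.  Edge cases
     * (pattern position, error message wording) may differ from real glib. */
    static inline gchar *g_dir_make_tmp(const gchar *tmpl, GError **error)
    {
        gchar *path = g_build_filename(g_get_tmp_dir(),
                                       tmpl ? tmpl : ".XXXXXX", NULL);

        if (mkdtemp(path) == NULL) {
            g_set_error(error, G_FILE_ERROR, g_file_error_from_errno(errno),
                        "Failed to create temporary directory: %s",
                        g_strerror(errno));
            g_free(path);
            return NULL;
        }
        return path;
    }
    #endif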
2016-08-19  syscall.c: Redefine IFLA_* enums  (Michal Privoznik)
In 9c37146782 I've tried to fix a broken build with older linux-headers. However, I didn't do it properly. The solution implemented here is to grab the enums that caused the problem initially, and rename their values so that they are "QEMU_" prefixed. In order to guarantee matching values with actual enums from linux-headers, the enums are seeded with starting values from the original enums. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-id: 75c14d6e8a97c4ff3931d69c13eab7376968d8b4.1471593869.git.mprivozn@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-19Revert "syscall.c: Fix build with older linux-headers"Michal Privoznik
The fix I've made there was wrong. I mean, basically what I did there was equivalent to: #if 0 some code; #endif This reverts commit 9c37146782e7850877d452da47dc451ba73aa62d. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-id: 40d61349e445c1ad5fef795da704bf7ed6e19c86.1471593869.git.mprivozn@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-18  Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging  (Peter Maydell)
# gpg: Signature made Thu 18 Aug 2016 14:39:31 BST # gpg: using RSA key 0x9CA4ABB381AB73C8 # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" # gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" # Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8 * remotes/stefanha/tags/block-pull-request: block: fix possible reorder of flush operations block: fix deadlock in bdrv_co_flush Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-18  block: fix possible reorder of flush operations  (Denis V. Lunev)
This patch reduces the CPU usage of flush operations a bit. When one flush completes we should kick only the next operation; we should not start all pending operations in the hope that they will go back to waiting on the wait queue. With the previous approach there is also a technical possibility that requests get reordered: after wakeup, all requests are removed from the wait queue, become active and are processed one by one, re-adding themselves to the wait queue in the same order, but a new flush can arrive before all of them have been re-queued. Signed-off-by: Denis V. Lunev <den@openvz.org> Tested-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Message-id: 1471457214-3994-3-git-send-email-den@openvz.org CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Fam Zheng <famz@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> CC: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-08-18  block: fix deadlock in bdrv_co_flush  (Evgeny Yakovlev)
The following commit commit 3ff2f67a7c24183fcbcfe1332e5223ac6f96438c Author: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Date: Mon Jul 18 22:39:52 2016 +0300 block: ignore flush requests when storage is clean has introduced a regression. There is a problem that it is still possible for 2 requests to execute in non sequential fashion and sometimes this results in a deadlock when bdrv_drain_one/all are called for BDS with such stalled requests. 1. Current flushed_gen and flush_started_gen is 1. 2. Request 1 enters bdrv_co_flush to with write_gen 1 (i.e. the same as flushed_gen). It gets past flushed_gen != flush_started_gen and sets flush_started_gen to 1 (again, the same it was before). 3. Request 1 yields somewhere before exiting bdrv_co_flush 4. Request 2 enters bdrv_co_flush with write_gen 2. It gets past flushed_gen != flush_started_gen and sets flush_started_gen to 2. 5. Request 2 runs to completion and sets flushed_gen to 2 6. Request 1 is resumed, runs to completion and sets flushed_gen to 1. However flush_started_gen is now 2. From here on out flushed_gen is always != to flush_started_gen and all further requests will wait on flush_queue. This change replaces flush_started_gen with an explicitly tracked active flush request. Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Signed-off-by: Denis V. Lunev <den@openvz.org> Message-id: 1471457214-3994-2-git-send-email-den@openvz.org CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Fam Zheng <famz@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> CC: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
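A sketch of the serialization the fix introduces (active_flush_req and flush_queue are illustrative field names; qemu_co_queue_wait() and qemu_co_queue_next() are the coroutine-queue primitives of that time):

    /* Inside bdrv_co_flush(): explicitly track the in-flight flush instead of
     * comparing generation counters. */
    while (bs->active_flush_req) {
        qemu_co_queue_wait(&bs->flush_queue);   /* queue behind the running flush */
    }
    bs->active_flush_req = true;

    /* ...issue the flush for the current write generation... */

    bs->flushed_gen = current_gen;
    bs->active_flush_req = false;
    qemu_co_queue_next(&bs->flush_queue);       /* wake exactly one waiter */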
2016-08-18  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging  (Peter Maydell)
# gpg: Signature made Thu 18 Aug 2016 06:36:16 BST # gpg: using RSA key 0xEF04965B398D6211 # gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" # gpg: WARNING: This key is not certified with sufficiently trusted signatures! # gpg: It is not certain that the signature belongs to the owner. # Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211 * remotes/jasowang/tags/net-pull-request: net/net: properly handle multiple packets in net_fill_rstate() net: vmxnet: use g_new for pkt initialisation Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-18  Merge remote-tracking branch 'remotes/famz/tags/docker-pull-request' into staging  (Peter Maydell)
Fix 'make docker-test-mingw@fedora' Peter, This is the single patch that stalls patchew's mingw testing. Since it is small and trivial, let's have it in 2.7. Fam # gpg: Signature made Wed 17 Aug 2016 13:13:53 BST # gpg: using RSA key 0xCA35624C6A9171C6 # gpg: Good signature from "Fam Zheng <famz@redhat.com>" # gpg: WARNING: This key is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. # Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6 * remotes/famz/tags/docker-pull-request: curl: Cast fd to int for DPRINTF Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-18  net/net: properly handle multiple packets in net_fill_rstate()  (Zhang Chen)
When network is busy, we will receive multiple packets at one time. In that situation, we should keep trying to do the receiving instead of finalizing only the first packet. Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2016-08-18  net: vmxnet: use g_new for pkt initialisation  (Li Qiang)
When network transport abstraction layer initialises pkt, the maximum fragmentation count is not checked. This could lead to an integer overflow causing a NULL pointer dereference. Replace g_malloc() with g_new() to catch the multiplication overflow. Reported-by: Li Qiang <liqiang6-s@360.cn> Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org> Acked-by: Dmitry Fleytman <dmitry@daynix.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
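The difference that matters, as a self-contained illustration (struct and variable names are made up): g_new() checks the count-times-size multiplication, while an open-coded g_malloc(count * size) can silently wrap:

    #include <glib.h>

    struct frag { void *addr; gsize len; };

    static struct frag *alloc_frags(gsize max_frags)
    {
        /* Overflow-prone: if max_frags is guest-controlled, the multiplication
         * can wrap and g_malloc() returns a buffer smaller than intended:
         *     return g_malloc(max_frags * sizeof(struct frag));
         */

        /* Safe: g_new() aborts if max_frags * sizeof(struct frag) overflows. */
        return g_new(struct frag, max_frags);
    }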
2016-08-17  curl: Cast fd to int for DPRINTF  (Fam Zheng)
Currently "make docker-test-mingw@fedora" has a warning like: /tmp/qemu-test/src/block/curl.c: In function 'curl_sock_cb': /tmp/qemu-test/src/block/curl.c:172:6: warning: format '%d' expects argument of type 'int', but argument 4 has type 'curl_socket_t {aka long long unsigned int}' DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd); ^ cc1: all warnings being treated as errors Cast to int to suppress it. Signed-off-by: Fam Zheng <famz@redhat.com> Message-Id: <1470027888-24381-1-git-send-email-famz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2016-08-16  Update version for v2.7.0-rc3 release  (tag: v2.7.0-rc3)  (Peter Maydell)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-16  linux-user: Fix llseek with high bit of offset_low set  (Peter Maydell)
The llseek syscall takes two 32-bit arguments, offset_high and offset_low, which must be combined to form a single 64-bit offset. Unfortunately we were combining them with ((uint64_t)arg2 << 32) | arg3, and arg3 is a signed type; this meant that when promoting arg3 to a 64-bit type it would be sign-extended. The effect was that if the offset happened to have bit 31 set then this bit would get sign-extended into all of bits 63..32. Explicitly cast arg3 to abi_ulong to avoid the erroneous sign extension. Reported-by: Chanho Park <parkch98@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Chanho Park <parkch98@gmail.com> Message-id: 1470938379-1133-1-git-send-email-peter.maydell@linaro.org
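A self-contained demonstration of the sign-extension pitfall and the fix (abi_long/abi_ulong are stand-ins for the 32-bit guest ABI types):

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t  abi_long;      /* 32-bit guest ABI stand-ins */
    typedef uint32_t abi_ulong;

    int main(void)
    {
        abi_long arg2 = 1;              /* offset_high */
        abi_long arg3 = INT32_MIN;      /* offset_low with bit 31 set */

        /* Buggy: arg3 is signed, so promoting it to 64 bits sign-extends and
         * smears 1s across bits 63..32 of the combined offset. */
        uint64_t bad  = ((uint64_t)arg2 << 32) | arg3;

        /* Fixed: cast to the unsigned ABI type before combining. */
        uint64_t good = ((uint64_t)arg2 << 32) | (abi_ulong)arg3;

        printf("bad  = 0x%016llx\n", (unsigned long long)bad);
        printf("good = 0x%016llx\n", (unsigned long long)good);
        return 0;
    }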
2016-08-16  syscall.c: Fix build with older linux-headers  (Michal Privoznik)
In c5dff280 we tried to make us understand netlink messages more. So we've added a code that does some translation. However, the code assumed linux-headers to be at least version 4.4 of it because most of the symbols there (if not all of them) were added in just that release. This, however, breaks build on systems with older versions of the package. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Message-id: 23806aac6db3baf7e2cdab4c62d6e3468ce6b4dc.1471340849.git.mprivozn@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-16  qmp-commands.hx: remove outdated note  (Marc-André Lureau)
input-send-event is now stable since 6575ccddf4e7c2484bc14b10d5e89f57506c3953. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-id: 20160811112041.18616-1-marcandre.lureau@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-16  Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging  (Peter Maydell)
target-i386: kernel_irqchip=off fix for KVM # gpg: Signature made Tue 16 Aug 2016 12:55:42 BST # gpg: using RSA key 0x2807936F984DC5A6 # gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" # Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6 * remotes/ehabkost/tags/x86-pull-request: target-i386: kvm: Report kvm_pv_unhalt as unsupported w/o kernel_irqchip Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-16  target-i386: kvm: Report kvm_pv_unhalt as unsupported w/o kernel_irqchip  (Eduardo Habkost)
The kvm_pv_unhalt feature doesn't work if kernel_irqchip is disabled, so we need to report it as unsupported. Tested-by: Peter Xu <peterx@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
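The shape of such a fix, as a hedged sketch: kvm_irqchip_in_kernel() and KVM_FEATURE_PV_UNHALT are existing identifiers, while the surrounding CPUID plumbing (here just a feature word named ret) is simplified:

    /* kvm_pv_unhalt depends on the in-kernel irqchip; without it, mask the
     * bit out of the paravirt feature word reported to the guest. */
    if (!kvm_irqchip_in_kernel()) {
        ret &= ~(1U << KVM_FEATURE_PV_UNHALT);
    }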
2016-08-16  slirp: Rename "struct arphdr" to "struct slirp_arphdr"  (Thomas Huth)
struct arphdr is already used by the system headers on OpenBSD and thus QEMU does not compile here anymore. Fix it by renaming our struct to slirp_arphdr instead. Reported-by: Brad Smith Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Signed-off-by: Thomas Huth <thuth@redhat.com> Message-id: 1471249494-17392-1-git-send-email-thuth@redhat.com Buglink: https://bugs.launchpad.net/qemu/+bug/1613133 Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-08-16  char: fix waiting for TLS and telnet connection  (Marc-André Lureau)
Since commit d7a04fd7d5008, tcp_chr_wait_connected() was introduced, so vhost-user could wait until a backend started successfully. In vhost-user case, the chr socket must be plain unix, and the chr+vhost setup happens synchronously during qemu startup. However, with TLS and telnet socket, initial socket setup happens asynchronously, and s->connected is not set after the socket is accepted. In order for tcp_chr_wait_connected() to not keep accepting new connections and proceed with the last accepted socket, it can check for s->ioc instead. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 20160816083332.15088-1-marcandre.lureau@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>