path: root/block
2019-01-11  qemu/queue.h: leave head structs anonymous unless necessary (Paolo Bonzini)
Most list head structs need not be given a name. In most cases the name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV or reverse iteration, but this does not apply to lists of other kinds, and even for QTAILQ in practice this is only rarely needed. In addition, we will soon reimplement those macros completely so that they do not need a name for the head struct. So clean up everything, not giving a name except in the rare case where it is necessary. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
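As an illustration of the two declaration styles (a minimal sketch with made-up Request/RequestList names, using the real queue.h macros):

    #include "qemu/queue.h"

    typedef struct Request {
        int id;
        QTAILQ_ENTRY(Request) next;             /* linkage within the list */
    } Request;

    /* Named head struct: only needed if the name is actually used,
     * e.g. for QTAILQ_LAST()/QTAILQ_PREV() style reverse access. */
    static QTAILQ_HEAD(RequestList, Request) named_head =
        QTAILQ_HEAD_INITIALIZER(named_head);

    /* Anonymous head struct: enough for the common insert/iterate cases. */
    static QTAILQ_HEAD(, Request) anon_head =
        QTAILQ_HEAD_INITIALIZER(anon_head);

    static void enqueue_and_scan(Request *req)
    {
        Request *it;

        QTAILQ_INSERT_TAIL(&anon_head, req, next);
        QTAILQ_FOREACH(it, &anon_head, next) {
            /* ... */
        }
    }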
2019-01-11  block/iscsi: cancel libiscsi task when ABORT TASK TMF completes (Stefan Hajnoczi)
The libiscsi iscsi_task_mgmt_async() API documentation says:

  abort_task will also cancel the scsi task. The callback for the scsi
  task will be invoked with SCSI_STATUS_CANCELLED

The libiscsi implementation does not fulfil this promise. The task's callback is not invoked and its struct iscsi_pdu remains in the internal list (effectively leaked).

This patch invokes the libiscsi iscsi_scsi_cancel_task() API to force the task's callback to be invoked with SCSI_STATUS_CANCELLED when the ABORT TASK TMF completes and the task's callback hasn't been invoked yet.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180215111526.2464-1-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
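A sketch of the idea behind the fix, assuming a simplified IscsiAIOCB with status/task fields; the callback name and the "status == -1 means not yet completed" check are illustrative, only iscsi_scsi_cancel_task() is the API named above:

    /* Hedged sketch of the ABORT TASK TMF completion callback. */
    static void iscsi_abort_task_cb(struct iscsi_context *iscsi, int status,
                                    void *command_data, void *opaque)
    {
        IscsiAIOCB *acb = opaque;

        /* If the ABORT TASK TMF completed but libiscsi never ran the task's
         * own callback, cancel the task explicitly so the callback runs with
         * SCSI_STATUS_CANCELLED and the iscsi_pdu is not leaked. */
        if (acb->status == -1) {
            iscsi_scsi_cancel_task(iscsi, acb->task);
        }
    }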
2019-01-11  block/iscsi: fix ioctl cancel use-after-free (Stefan Hajnoczi)
iscsi_aio_cancel() does not increment the request's reference count, causing a use-after-free when ABORT TASK finishes after the request has already completed.

There are some additional issues with iscsi_aio_cancel():

1. Several ABORT TASKs may be sent for the same task if iscsi_aio_cancel()
   is invoked multiple times. It's better to avoid this just in case the
   command identifier is reused.

2. The iscsilun->mutex protection is missing in iscsi_aio_cancel().

Reported-by: Felipe Franciosi <felipe@nutanix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180203061621.7033-4-stefanha@redhat.com>
Reviewed-by: Felipe Franciosi <felipe@nutanix.com>
Tested-by: Sreejith Mohanan <sreejit.mohanan@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
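The shape of the fix, sketched with assumed field names (cancelled, status) and the usual qemu_aio_ref() pattern for keeping an AIOCB alive across an asynchronous completion; not the literal patch:

    /* Hedged sketch of a safer iscsi_aio_cancel(). */
    static void iscsi_aio_cancel(BlockAIOCB *blockacb)
    {
        IscsiAIOCB *acb = (IscsiAIOCB *)blockacb;
        IscsiLun *iscsilun = acb->iscsilun;

        qemu_mutex_lock(&iscsilun->mutex);

        /* Already cancelled or already completed: nothing to do (this also
         * avoids sending a second ABORT TASK for a reused identifier). */
        if (acb->cancelled || acb->status != -EINPROGRESS) {
            qemu_mutex_unlock(&iscsilun->mutex);
            return;
        }
        acb->cancelled = true;

        /* Keep the request alive until the ABORT TASK TMF callback runs,
         * even if the request itself completes in the meantime. */
        qemu_aio_ref(acb);

        iscsi_task_mgmt_abort_task_async(iscsilun->iscsi, acb->task,
                                         iscsi_abort_task_cb, acb);
        qemu_mutex_unlock(&iscsilun->mutex);
    }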
2019-01-11  block/iscsi: take iscsilun->mutex in iscsi_timed_check_events() (Stefan Hajnoczi)
Commit d045c466d9e62b4321fadf586d024d54ddfd8bd4 ("iscsi: do not use aio_context_acquire/release") introduced iscsilun->mutex but overlooked iscsi_timed_check_events(). iscsi_service() and iscsi_set_events() must be called with iscsilun->mutex held, yet iscsi_timed_check_events() is invoked from the AioContext without taking the mutex.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180203061621.7033-3-stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
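A minimal sketch of the locking the fix adds, assuming the driver's existing iscsi_set_events() helper (rescheduling of the periodic timer elided):

    static void iscsi_timed_check_events(void *opaque)
    {
        IscsiLun *iscsilun = opaque;

        qemu_mutex_lock(&iscsilun->mutex);

        /* let libiscsi detect timed-out requests */
        iscsi_service(iscsilun->iscsi, 0);

        /* rearm the fd handlers in case libiscsi changed what it waits for */
        iscsi_set_events(iscsilun);

        qemu_mutex_unlock(&iscsilun->mutex);

        /* rescheduling of the timer itself elided from this sketch */
    }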
2019-01-11  block/iscsi: drop unused IscsiAIOCB->buf field (Stefan Hajnoczi)
The IscsiAIOCB->buf field has not been used since commit e49ab19fcaa617ad6cdfe1ac401327326b6a2552 ("block/iscsi: bump libiscsi requirement to 1.9.0"). It used to be a linear buffer for old libiscsi versions that didn't support scatter-gather. The minimum libiscsi version supports scatter-gather so we don't linearize buffers anymore. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20180203061621.7033-2-stefanha@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-01-09  block/nvme: optimize the performance of nvme driver based on vfio-pci (Li Feng)
When the IO size is larger than 2 pages, we move the pointer one by one in the pagelist, which is inefficient. This is a simple benchmark result:

Before:
$ qemu-io -c 'write 0 1G' nvme://0000:00:04.0/1
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:02.41 (424.504 MiB/sec and 0.4146 ops/sec)

$ qemu-io -c 'read 0 1G' nvme://0000:00:04.0/1
read 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:02.03 (503.055 MiB/sec and 0.4913 ops/sec)

After:
$ qemu-io -c 'write 0 1G' nvme://0000:00:04.0/1
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:02.17 (471.517 MiB/sec and 0.4605 ops/sec)

$ qemu-io -c 'read 0 1G' nvme://0000:00:04.0/1
read 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:01.94 (526.770 MiB/sec and 0.5144 ops/sec)

Signed-off-by: Li Feng <lifeng1519@gmail.com>
Message-Id: <20181101103807.25862-1-lifeng1519@gmail.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
2019-01-07  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-01-05' into staging (Peter Maydell)

nbd patches for 2019-01-05

Error and trace improvements in NBD code, such as less noise for common disconnect scenarios.

- Vladimir Sementsov-Ogievskiy: 0/3 nbd-client: drop extra error noise
- Eric Blake: portions of 0/22 nbd: add qemu-nbd --list

# gpg: Signature made Sat 05 Jan 2019 13:58:54 GMT
# gpg:                using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg:                 aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg:                 aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A

* remotes/ericb/tags/pull-nbd-2019-01-05:
  nbd/client: Drop pointless buf variable
  qemu-nbd: Fail earlier for -c/-d on non-linux
  nbd/client: More consistent error messages
  nbd: Document timeline of various features
  qemu-nbd: Use program name in error messages
  block/nbd-client: use traces instead of noisy error_report_err
  nbd/client: Trace all server option error messages
  nbd: publish _lookup functions

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-04  block/nbd-client: use traces instead of noisy error_report_err (Vladimir Sementsov-Ogievskiy)
Reduce extra noise of nbd-client, change 083 correspondingly.

In various commits (be41c100 in 2.10, f140e300 in 2.11, 78a33ab in 2.12), we added spots where qemu as an NBD client would report problems communicating with the server to stderr, because there was nowhere else to send the error to. However, this is racy, particularly since the most common source of these errors is when either the client or the server abruptly hangs up, leaving one coroutine to report the error only if it wins (or loses) the race in attempting the read from the server before another thread completes its cleanup of a protocol error that caused the disconnect in the first place. The race is also apparent in the fact that differences in the flush behavior of the server can alter the frequency of encountering the race in the client (see commit 6d39db96).

Rather than polluting stderr, it's better to just trace these situations, for use by developers debugging a flaky connection, particularly since the real error that either triggers the abrupt disconnection in the first place, or that results from the EIO when a request can't receive a reply, DOES make it back to the user in the normal Error propagation channels.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181102151152.288399-4-vsementsov@virtuozzo.com>
[eblake: drop dependence on error hint, enhance commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
2019-01-04  dmg: don't skip zero chunk (yuchenlin)
The dmg file has many tables which describe: "start from sector XXX to sector XXX, the compression method is XXX and where the compressed data resides". Each sector in the expanded file should be covered by a table. The table describes the offset of the compressed data (or raw data, depending on the type) in the dmg. For example:

[-----------The expanded file------------]
[---bzip table ---]/* zeros */[---zlib---]
         ^
         |
   if we want to read this sector,

we will find the bzip table which contains this sector, get the compressed data offset, read it from the dmg, uncompress it, and finally write it to the expanded file.

If we skip the zero chunk (table), some sectors cannot find their table, which will cause search_chunk() to return s->n_chunks, dmg_read_chunk() to return -1, and finally dmg_co_preadv() to return EIO. See:

[-----------The expanded file------------]
[---bzip table ---]/* zeros */[---zlib---]
                        ^
                        |
                if we want to read this sector,

oops, we cannot find the table that contains it...

In the original implementation, we don't have a zero table. When we try to read a sector inside the zero chunk, we get EIO and skip reading.

After this patch, we treat the zero chunk the same as the ignore chunk: it directly writes zeros, which avoids the case where a sector cannot find its table. After this patch:

[-----------The expanded file------------]
[---bzip table ---][--zeros--][---zlib---]

Signed-off-by: yuchenlin <npes87184@gmail.com>
Reviewed-by: Julio Faracco <jcfaracco@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190103114700.9686-4-npes87184@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
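A sketch of the resulting per-chunk handling in the read path, assuming the UDZE/UDIG enum names from the related enum patches; the function and variable names are illustrative, not the literal diff:

    /* Hedged sketch: fill one 512-byte sector from chunk data. */
    static void dmg_fill_sector(BDRVDMGState *s, uint32_t chunk, uint8_t *data)
    {
        switch (s->types[chunk]) {
        case UDZE:              /* zeroes chunk: no data stored in the dmg */
        case UDIG:              /* ignore chunk */
            /* Treat both as holes: return zeroes instead of failing the
             * table lookup, so the chunk must stay in the tables. */
            memset(data, 0, 512);
            break;
        default:
            /* zlib/bzip2/lzfse/raw chunks: locate the compressed data via
             * the table, read it from the dmg file and decompress as before */
            break;
        }
    }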
2019-01-04  dmg: use enumeration type instead of hard coding number (yuchenlin)
Signed-off-by: yuchenlin <npes87184@gmail.com> Reviewed-by: Julio Faracco <jcfaracco@gmail.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20190103114700.9686-3-npes87184@gmail.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-01-04  dmg: fix binary search (yuchenlin)
There is a possible hang in the original binary search implementation: if chunk1 = 4 and chunk2 = 5, then chunk3 = (4 + 5) / 2 = 4, and if we take the else branch, chunk1 stays 4, so the loop never makes progress.

Signed-off-by: yuchenlin <npes87184@gmail.com>
Message-id: 20190103114700.9686-2-npes87184@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
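A sketch of a terminating search over the chunk table, modeled on the driver's search_chunk() (the sectors[], sectorcounts[] and n_chunks fields follow the driver; the exact bounds handling is an approximation):

    /* Hedged sketch: binary search that always narrows [chunk1, chunk2] and
     * therefore cannot loop forever; returns s->n_chunks when no chunk
     * covers sector_num. */
    static uint32_t search_chunk(BDRVDMGState *s, uint64_t sector_num)
    {
        uint32_t chunk1, chunk2, chunk3;

        if (s->n_chunks == 0) {
            return s->n_chunks;
        }
        chunk1 = 0;
        chunk2 = s->n_chunks - 1;

        while (chunk1 <= chunk2) {
            chunk3 = (chunk1 + chunk2) / 2;
            if (s->sectors[chunk3] > sector_num) {
                if (chunk3 == 0) {
                    break;              /* sector lies before the first chunk */
                }
                chunk2 = chunk3 - 1;    /* continue in the lower half */
            } else if (s->sectors[chunk3] + s->sectorcounts[chunk3] > sector_num) {
                return chunk3;          /* chunk3 covers sector_num */
            } else {
                chunk1 = chunk3 + 1;    /* continue in the upper half */
            }
        }
        return s->n_chunks;             /* not found */
    }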
2019-01-04  dmg: Fixing wrong dmg block type value for block terminator. (Julio Faracco)
This is a trivial patch to fix a wrong value for the block terminator. The old value was 0x7fffffff, which is wrong. It was not affecting the code because the QEMU dmg block driver is not handling the block terminator right now. Nevertheless, it should be fixed.

Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Reviewed-by: yuchenlin <yuchenlin@synology.com>
Message-id: 20181228145055.18039-1-jcfaracco@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2018-12-14  block/mirror: add missing coroutine_fn annotations (Stefan Hajnoczi)
Marking a function coroutine_fn currently has no effect on the compiler, but it documents that this function must be called from coroutine context and it may yield. This is important information for the programmer. Also, if we ever transition to a stackless coroutine implementation, then it's likely that the annotation will become mandatory so the compiler can use the correct calling convention for coroutine functions. Cc: Max Reitz <mreitz@redhat.com> Cc: John Snow <jsnow@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
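For illustration, the annotation is just part of the declaration (hypothetical function, real coroutine_fn macro from qemu/coroutine.h):

    #include "qemu/coroutine.h"

    /* Must be called from coroutine context and may yield. */
    static int coroutine_fn example_co_flush(BlockDriverState *bs)
    {
        /* ... may call other coroutine_fn helpers and yield ... */
        return 0;
    }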
2018-12-14  block: Remove flags parameter from bdrv_reopen_queue() (Alberto Garcia)
Now that all callers are passing all flag changes as QDict options, the flags parameter is no longer necessary, so we can get rid of it. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block: Clean up reopen_backing_file() in block/replication.c (Alberto Garcia)
This function is used to put the hidden and secondary disks in read-write mode before launching the backup job, and back in read-only mode afterwards. This patch does the following changes:

- Use an options QDict with the "read-only" option instead of passing the
  changes as flags only.

- Simplify the code (it was unnecessarily complicated and verbose).

- Fix a bug due to which the secondary disk was not being put back in
  read-only mode when writable=false (because in this case
  orig_secondary_flags always had the BDRV_O_RDWR flag set).

- Stop clearing the BDRV_O_INACTIVE flag.

The flags parameter to bdrv_reopen_queue() becomes redundant and we'll be able to get rid of it in a subsequent patch.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block: Use bdrv_reopen_set_read_only() in the mirror driver (Alberto Garcia)
The 'block-commit' QMP command is implemented internally using two different drivers. If the source image is the active layer then the mirror driver is used (commit_active_start()), otherwise the commit driver is used (commit_start()). In both cases the destination image must be put temporarily in read-write mode. This is done correctly in the latter case, but what commit_active_start() does is copy all flags instead. This patch replaces the bdrv_reopen() calls in that function with bdrv_reopen_set_read_only() so that only the read-only status is changed. A similar change is made in mirror_exit(), which is also used by the 'drive-mirror' and 'blockdev-mirror' commands. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block: Use bdrv_reopen_set_read_only() in stream_start/complete() (Alberto Garcia)
This patch replaces the bdrv_reopen() calls that set and remove the BDRV_O_RDWR flag with the new bdrv_reopen_set_read_only() function. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
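For illustration, the shape of the replacement in a hypothetical helper (error handling abbreviated); the same call with read_only=true drops write access again when the job completes:

    /* Hedged sketch: make a node writable for the duration of a job. */
    static int example_make_writable(BlockDriverState *bs, Error **errp)
    {
        /* Before this series, roughly:
         *     return bdrv_reopen(bs, bs->open_flags | BDRV_O_RDWR, errp);
         * Now only the read-only status is toggled; other options are kept. */
        return bdrv_reopen_set_read_only(bs, false, errp);
    }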
2018-12-14  block: Use bdrv_reopen_set_read_only() in bdrv_commit() (Alberto Garcia)
This patch replaces the bdrv_reopen() calls that set and remove the BDRV_O_RDWR flag with the new bdrv_reopen_set_read_only() function. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block: Use bdrv_reopen_set_read_only() in commit_start/complete() (Alberto Garcia)
This patch replaces the bdrv_reopen() calls that set and remove the BDRV_O_RDWR flag with the new bdrv_reopen_set_read_only() function. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_IOCTL (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. This was the last user of aio_worker(), so the function goes away now. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Switch to .bdrv_co_ioctl (Kevin Wolf)
No real reason to keep using the callback based mechanism here when the rest of the file-posix driver is coroutine based. Changing it brings ioctls more in line with how other request types work. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Remove paio_submit_co() (Kevin Wolf)
The function is not used any more, remove it. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_READ/WRITE (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Move read/write operation logic out of aio_worker() (Kevin Wolf)
aio_worker() for reads and writes isn't boring enough yet. It still does some postprocessing for handling short reads and turning the result into the right return value. However, there is no reason why handle_aiocb_rw() couldn't do the same, and even without duplicating code between the read and write path. So move the code there. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_FLUSH (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_DISCARD (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_WRITE_ZEROES (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_COPY_RANGE (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Avoid aio_worker() for QEMU_AIO_TRUNCATE (Kevin Wolf)
aio_worker() doesn't add anything interesting, it's only a useless indirection. Call the handler function directly instead. As we know that this handler function is only called from coroutine context and the coroutine stays around until the worker thread finishes, we can keep RawPosixAIOData on the stack. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  file-posix: Factor out raw_thread_pool_submit() (Kevin Wolf)
Getting the thread pool of the AioContext of a block node and scheduling some work in it is an operation that is already done twice, and we'll get more instances. Factor it out into a separate function. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
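A sketch of the factored-out helper, using the standard thread-pool API (aio_get_thread_pool(), thread_pool_submit_co()); the exact name matches the subject, the body is an approximation:

    static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
                                                   ThreadPoolFunc func,
                                                   void *arg)
    {
        /* Run func(arg) in a worker thread of this node's AioContext and
         * yield until it completes. */
        ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
        return thread_pool_submit_co(pool, func, arg);
    }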
2018-12-14  file-posix: Reorganise RawPosixAIOData (Kevin Wolf)
RawPosixAIOData contains a lot of fields for several separate operations that are to be processed in a worker thread and that need different parameters. The struct is currently rather unorganised, with unions that cover some, but not all operations, and even one #define for field names instead of a union. Clean this up to have some common fields and a single union. As a side effect, on x86_64 the struct shrinks from 72 to 48 bytes. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
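Roughly, the reorganised layout looks like this (a sketch; member names are approximations): a handful of common fields plus one union of per-operation parameters:

    typedef struct RawPosixAIOData {
        BlockDriverState *bs;       /* common to every operation */
        int aio_type;               /* QEMU_AIO_READ, QEMU_AIO_FLUSH, ... */
        int aio_fildes;
        off_t aio_offset;
        uint64_t aio_nbytes;

        union {                     /* operation-specific parameters */
            struct {
                struct iovec *iov;
                int niov;
            } io;
            struct {
                uint64_t cmd;
                void *buf;
            } ioctl;
            struct {
                int aio_fd2;
                off_t aio_offset2;
            } copy_range;
            struct {
                PreallocMode prealloc;
                Error **errp;
            } truncate;
        };
    } RawPosixAIOData;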
2018-12-14  qcow2: do decompression in threads (Vladimir Sementsov-Ogievskiy)
Do decompression in threads, like it is already done for compression. This improves asynchronous compressed reads performance. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  qcow2: aio support for compressed cluster read (Vladimir Sementsov-Ogievskiy)
Allocate buffers locally and release the qcow2 lock. Then, reads inside qcow2_co_preadv_compressed may be done in parallel; however, all decompression is still done synchronously. Let's improve it in the following commit.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
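The gist, sketched with simplified buffer handling: drop s->lock only around the file read, so other coroutines can make progress while the compressed data is fetched (decompress_buffer() stands in for the existing helper; signatures are approximations):

    /* Hedged sketch of reading one compressed cluster without holding the
     * qcow2 lock across the file I/O. */
    static int coroutine_fn
    qcow2_read_compressed_sketch(BlockDriverState *bs, uint64_t coffset,
                                 size_t csize, void *dst, size_t dst_size)
    {
        BDRVQcow2State *s = bs->opaque;
        void *buf = g_try_malloc(csize);
        int ret;

        if (!buf) {
            return -ENOMEM;
        }

        qemu_co_mutex_unlock(&s->lock);     /* the file read runs unlocked */
        ret = bdrv_co_pread(bs->file, coffset, csize, buf, 0);
        qemu_co_mutex_lock(&s->lock);

        if (ret >= 0) {
            /* decompression itself is still synchronous at this point */
            ret = decompress_buffer(dst, dst_size, buf, csize);
        }
        g_free(buf);
        return ret;
    }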
2018-12-14  qcow2: use byte-based read in qcow2_decompress_cluster (Vladimir Sementsov-Ogievskiy)
We are gradually moving away from sector-based interfaces, towards byte-based. Get rid of it here too. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  qcow2: refactor decompress_buffer (Vladimir Sementsov-Ogievskiy)
- make it look more like a pair of qcow2_compress
- rename the function and its parameters
- drop extra out_len variable, check filling of output buffer by strm
  structure itself
- fix code style
- add some documentation

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  qcow2: move decompression from qcow2-cluster.c to qcow2.c (Vladimir Sementsov-Ogievskiy)
Compression is done in threads in qcow2.c. We want to do decompression in the same way, so, firstly, move it to the same file. The only change is braces around if-body in decompress_buffer, to satisfy checkpatch. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  qcow2: make more generic interface for qcow2_compress (Vladimir Sementsov-Ogievskiy)
Give explicit sizes for both the source and destination buffers, to make it similar to the decompression path and then cleanly reuse the parameter structure for decompression threads.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
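The resulting interface looks roughly like this (a sketch; the return convention is an assumption), with explicit sizes on both sides so the same parameter layout can later serve the decompression workers:

    /*
     * qcow2_compress() sketch
     *
     * @dest - destination buffer, @dest_size bytes
     * @src  - source buffer, @src_size bytes
     *
     * Returns the number of bytes written on success and a negative value
     * on failure (the exact error/overflow sentinels are assumptions here).
     */
    static ssize_t qcow2_compress(void *dest, size_t dest_size,
                                  const void *src, size_t src_size);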
2018-12-14  qcow2: use Z_OK instead of 0 for deflateInit2 return code check (Vladimir Sementsov-Ogievskiy)
Use the appropriate macro, corresponding to the deflateInit2 spec. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
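In other words, a small sketch of the pattern (the parameters shown are typical for raw deflate, not necessarily qcow2's exact values):

    #include <zlib.h>
    #include <errno.h>

    static int deflate_init_example(z_stream *strm)
    {
        /* compare against Z_OK (which happens to be 0) rather than a bare 0 */
        if (deflateInit2(strm, -1, Z_DEFLATED, -12, 9,
                         Z_DEFAULT_STRATEGY) != Z_OK) {
            return -EINVAL;
        }
        return 0;
    }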
2018-12-14  block/backup: drop unused synchronization interface (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block/replication: drop extra synchronization (Vladimir Sementsov-Ogievskiy)
After commit f8d59dfb40 "block/backup: fix fleecing scheme: use serialized writes" fleecing (specifically reading from backup target, when backup source is in backing chain of backup target) is safe, because all backup-job writes to target are serialized. Therefore we don't need additional synchronization for these reads. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  dmg: exchanging hardcoded dmg UDIF block types to enum. (Julio Faracco)
This change makes it easier to understand what kind of block type is being handled by the code. Using a syntax similar to the DMG documentation is easier than tracking all the hex values assigned to a block type. Signed-off-by: Julio Faracco <jcfaracco@gmail.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  dmg: including dmg-lzfse module inside dmg block driver. (Julio Faracco)
This commit adds support for the new dmg-lzfse module to the dmg block driver. It includes support for the block type ULFO (0x80000007). Signed-off-by: Julio Faracco <jcfaracco@gmail.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  configure: adding support to lzfse library. (Julio Faracco)
This commit adds support for the open-source lzfse library. With this library, the dmg block driver can decompress images that contain this type of compression. Signed-off-by: Julio Faracco <jcfaracco@gmail.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-14  block: adding lzfse decompressing support as a module. (Julio Faracco)
QEMU dmg support includes zlib and bzip2, but it does not contain lzfse support. This commit adds the source file to extend compression support for new DMGs. Signed-off-by: Julio Faracco <jcfaracco@gmail.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-12-12  crypto: support multiple threads accessing one QCryptoBlock (Vladimir Sementsov-Ogievskiy)
The two things that need to be handled are the cipher and the ivgen. For the ivgen the solution is just a mutex, as iv calculations should not be long in comparison with encryption/decryption. And for the cipher, let's just keep per-thread ciphers.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
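One way to picture the scheme (a hedged sketch, not the actual QCryptoBlock layout; the real code hands ciphers out from a pool): one mutex serialises the ivgen, while each thread gets its own cipher handle so encryption and decryption can run in parallel:

    /* Hedged sketch, field names are illustrative */
    typedef struct QCryptoBlockSketch {
        QCryptoIVGen *ivgen;        /* shared; protected by mutex */
        QemuMutex mutex;            /* serialises short IV calculations */

        QCryptoCipher **ciphers;    /* one handle per worker thread */
        size_t n_ciphers;
    } QCryptoBlockSketch;

    /* IV generation is short, so a plain mutex is enough */
    static int sketch_get_iv(QCryptoBlockSketch *blk, uint64_t sector,
                             uint8_t *iv, size_t niv, Error **errp)
    {
        int ret;

        qemu_mutex_lock(&blk->mutex);
        ret = qcrypto_ivgen_calculate(blk->ivgen, sector, iv, niv, errp);
        qemu_mutex_unlock(&blk->mutex);
        return ret;
    }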
2018-12-03  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-12-03' into staging (Peter Maydell)

nbd patches for 2018-12-03

Improve x-dirty-bitmap handling for experimenting with pull mode incremental backups.

- Eric Blake: 0/3 NBD dirty bitmap cleanups

# gpg: Signature made Mon 03 Dec 2018 15:56:23 GMT
# gpg:                using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg:                 aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg:                 aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A

* remotes/ericb/tags/pull-nbd-2018-12-03:
  nbd/client: Send NBD_CMD_DISC if open fails after connect
  nbd/client: Make x-dirty-bitmap more reliable
  nbd/server: Advertise all contexts in response to bare LIST

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-12-03  mirror: fix dead-lock (Vladimir Sementsov-Ogievskiy)
Let's start from the beginning:

Commit b9e413dd375 (in 2.9) "block: explicitly acquire aiocontext in aio callbacks that need it" added pairs of aio_context_acquire/release to mirror_write_complete and mirror_read_complete, when they were aio callbacks for blk_aio_* calls.

Then, commit 2e1990b26e5 (in 3.0) "block/mirror: Convert to coroutines" dropped these blk_aio_* calls, so mirror_write_complete and mirror_read_complete are no longer callbacks and don't need the additional aiocontext acquiring.

Furthermore, mirror_read_complete calls blk_co_pwritev inside this pair of aio_context_acquire/release, which leads to the following dead-lock with mirror:

(gdb) info thr
  Id   Target Id                               Frame
  3    Thread (LWP 145412) "qemu-system-x86"   syscall ()
  2    Thread (LWP 145416) "qemu-system-x86"   __lll_lock_wait ()
* 1    Thread (LWP 145411) "qemu-system-x86"   __lll_lock_wait ()

(gdb) bt
#0  __lll_lock_wait ()
#1  _L_lock_812 ()
#2  __GI___pthread_mutex_lock
#3  qemu_mutex_lock_impl (mutex=0x561032dce420 <qemu_global_mutex>,
    file=0x5610327d8654 "util/main-loop.c", line=236) at util/qemu-thread-posix.c:66
#4  qemu_mutex_lock_iothread_impl
#5  os_host_main_loop_wait (timeout=480116000) at util/main-loop.c:236
#6  main_loop_wait (nonblocking=0) at util/main-loop.c:497
#7  main_loop () at vl.c:1892
#8  main

Printing the contents of qemu_global_mutex, I see that "__owner = 145416", so thr1 is the main loop, and now it wants the BQL, which is owned by thr2.

(gdb) thr 2
(gdb) bt
#0  __lll_lock_wait ()
#1  _L_lock_870 ()
#2  __GI___pthread_mutex_lock
#3  qemu_mutex_lock_impl (mutex=0x561034d25dc0, ...
#4  aio_context_acquire (ctx=0x561034d25d60)
#5  dma_blk_cb
#6  dma_blk_io
#7  dma_blk_read
#8  ide_dma_cb
#9  bmdma_cmd_writeb
#10 bmdma_write
#11 memory_region_write_accessor
#12 access_with_adjusted_size
#15 flatview_write
#16 address_space_write
#17 address_space_rw
#18 kvm_handle_io
#19 kvm_cpu_exec
#20 qemu_kvm_cpu_thread_fn
#21 qemu_thread_start
#22 start_thread
#23 clone ()

Printing the mutex in fr 2, I see "__owner = 145411", so thr2 wants the aio context mutex, which is owned by thr1. Classic dead-lock.

Then, let's check that the aio context is held by the mirror coroutine: just print the coroutine stack of the first tracked request in the mirror job target:

(gdb) [...]
(gdb) qemu coroutine 0x561035dd0860
#0  qemu_coroutine_switch
#1  qemu_coroutine_yield
#2  qemu_co_mutex_lock_slowpath
#3  qemu_co_mutex_lock
#4  qcow2_co_pwritev
#5  bdrv_driver_pwritev
#6  bdrv_aligned_pwritev
#7  bdrv_co_pwritev
#8  blk_co_pwritev
#9  mirror_read_complete () at block/mirror.c:232
#10 mirror_co_read () at block/mirror.c:370
#11 coroutine_trampoline
#12 __start_context

Yes, it is mirror_read_complete calling blk_co_pwritev after acquiring the aio context.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-11-30  nbd/client: Send NBD_CMD_DISC if open fails after connect (Eric Blake)
If nbd_client_init() fails after we are already connected, then the server will spam logs with:

Disconnect client, due to: Unexpected end-of-file before all bytes were read

unless we gracefully disconnect before closing the connection. Ways to trigger this:

$ opts=driver=nbd,export=foo,server.type=inet,server.host=localhost,server.port=10809
$ qemu-img map --output=json --image-opts $opts,read-only=off
$ qemu-img map --output=json --image-opts $opts,x-dirty-bitmap=nosuch:

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20181130023232.3079982-4-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-11-30  nbd/client: Make x-dirty-bitmap more reliable (Eric Blake)
The implementation of x-dirty-bitmap in qemu 3.0 (commit 216ee365) silently falls back to treating the server as not supporting NBD_CMD_BLOCK_STATUS if a requested meta_context name was not negotiated, which in turn means treating the _entire_ image as data. Since our hack relied on using 'qemu-img map' to view which portions of the image were dirty by seeing what the redirected bdrv_block_status() treats as holes, this means that our fallback treats the entire image as clean. Better would have been to treat the entire image as dirty, or to fail to connect because the user's request for a specific context could not be honored. This patch goes with the latter. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <20181130023232.3079982-3-eblake@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-11-19  file-posix: Fix shared locks on reopen commit (Max Reitz)
s->locked_shared_perm is the set of bits locked in the file, which is the inverse of the permissions actually shared. So we need to pass them as they are to raw_apply_lock_bytes() instead of inverting them again. Reported-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>