author     John Snow <jsnow@redhat.com>    2015-07-04 02:06:04 -0400
committer  John Snow <jsnow@redhat.com>    2015-07-04 02:06:04 -0400
commit     a718978ed58abc1ad92567a9c17525136be02a71
tree       baed83445cbc9819e3b67ab6d0ba7da413dfef06 /hw/ide/pci.c
parent     07a1ee7958cc3433706ab0d3a07c42fdd9d98fe6
ide: add limit to .prepare_buf()
prepare_buf should not always grab as many descriptors
as it can; sometimes it should self-limit.

For example, an NCQ transfer of 1 sector with a PRDT that
describes 4GiB of data should not copy 4GiB of data; it
should transfer just the first 512 bytes.

PIO is not affected, because the dma_buf_rw DMA helpers
already have a byte limit built into them, but DMA/NCQ
will exhaust the entire list regardless of the requested size.
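To make the clamp concrete, here is a standalone sketch of the
arithmetic (illustrative only; clamp_prd, sg_size, prd_len and limit
are made-up names for this sketch, not QEMU APIs):

    #include <stdint.h>

    /* How many bytes of the current PRD may be added to the sglist
     * without exceeding the requested transfer size?  `sg_size` is
     * what the sglist already holds, `limit` is the command's total
     * byte count, `prd_len` is the length of the current PRD entry. */
    static uint64_t clamp_prd(uint64_t sg_size, uint64_t prd_len,
                              uint64_t limit)
    {
        uint64_t remaining = limit > sg_size ? limit - sg_size : 0;
        return remaining < prd_len ? remaining : prd_len;
    }

For the 1-sector NCQ example above, limit is 512, so however many
gigabytes the remaining PRDs describe, at most 512 bytes ever reach
the sglist.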
AHCI 1.3 specifies in section 6.1.6 Command List Underflow that
NCQ is not required to detect underflow conditions. Non-NCQ
pathways signal underflow by writing to the PRDBC field; since we
already write the actual transferred byte count to the PRDBC, the
underflow is signaled there as a matter of course.
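As a sketch of why the non-NCQ case comes for free (the struct below
is a simplified rendering of the AHCI command header from the spec,
not QEMU's actual definition):

    #include <stdint.h>

    /* Simplified AHCI command header: prdbc is the PRD Byte Count,
     * updated by the HBA with the number of bytes actually moved. */
    typedef struct CmdHdr {
        uint32_t opts;
        uint32_t prdbc;
    } CmdHdr;

    /* Writing the real transfer count is itself the underflow
     * signal: if prdbc comes back smaller than what the PRDT
     * described, the guest can observe the shortfall directly. */
    static void commit_transfer(CmdHdr *hdr, uint32_t tx_bytes)
    {
        hdr->prdbc = tx_bytes; /* endianness conversion omitted */
    }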
Our NCQ pathways aren't required to detect underflow, but since our DMA
backend uses the size of the PRDT to determine the size of the transfer,
if our PRDT is bigger than the transaction (the underflow condition) it
costs us nothing to detect it and truncate the PRDT.
This is a recoverable error and is not signaled to the guest, in either
NCQ or normal DMA cases.
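In terms of the new interface, a caller would pass the transaction
size as the limit, roughly like this (a sketch; prepare_buf stands in
for the .prepare_buf hook and sector_count for the command's sector
count):

    int32_t prepared = prepare_buf(dma, sector_count * 512);
    if (prepared < 0) {
        /* failed to build the sglist */
    } else if (prepared < sector_count * 512) {
        /* the PRDT described too few bytes for the command; the
         * opposite of underflow, handled separately */
    }
    /* prepared == sector_count * 512: any PRDT overage was consumed
     * but silently truncated, with no error signaled to the guest. */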
For BMDMA, the existing pathways should see no guest-visible difference,
but any bytes described in the overage will no longer be transferred
before indicating to the guest that there was an underflow.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1435767578-32743-2-git-send-email-jsnow@redhat.com
Diffstat (limited to 'hw/ide/pci.c')
-rw-r--r--  hw/ide/pci.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index 4afd0cfe8c..d31ff885b7 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -53,10 +53,14 @@ static void bmdma_start_dma(IDEDMA *dma, IDEState *s,
 }
 
 /**
- * Return the number of bytes successfully prepared.
- * -1 on error.
+ * Prepare an sglist based on available PRDs.
+ * @limit: How many bytes to prepare total.
+ *
+ * Returns the number of bytes prepared, -1 on error.
+ * IDEState.io_buffer_size will contain the number of bytes described
+ * by the PRDs, whether or not we added them to the sglist.
  */
-static int32_t bmdma_prepare_buf(IDEDMA *dma, int is_write)
+static int32_t bmdma_prepare_buf(IDEDMA *dma, int32_t limit)
 {
     BMDMAState *bm = DO_UPCAST(BMDMAState, dma, dma);
     IDEState *s = bmdma_active_if(bm);
@@ -75,7 +79,7 @@ static int32_t bmdma_prepare_buf(IDEDMA *dma, int is_write)
             /* end of table (with a fail safe of one page) */
             if (bm->cur_prd_last ||
                 (bm->cur_addr - bm->addr) >= BMDMA_PAGE_SIZE) {
-                return s->io_buffer_size;
+                return s->sg.size;
             }
             pci_dma_read(pci_dev, bm->cur_addr, &prd, 8);
             bm->cur_addr += 8;
@@ -90,7 +94,14 @@ static int32_t bmdma_prepare_buf(IDEDMA *dma, int is_write)
         }
         l = bm->cur_prd_len;
         if (l > 0) {
-            qemu_sglist_add(&s->sg, bm->cur_prd_addr, l);
+            uint64_t sg_len;
+
+            /* Don't add extra bytes to the SGList; consume any remaining
+             * PRDs from the guest, but ignore them. */
+            sg_len = MIN(limit - s->sg.size, bm->cur_prd_len);
+            if (sg_len) {
+                qemu_sglist_add(&s->sg, bm->cur_prd_addr, sg_len);
+            }
 
             /* Note: We limit the max transfer to be 2GiB.
              * This should accommodate the largest ATA transaction