path: root/hw/arm/smmuv3.c
2023-07-25  hw/arm/smmu: Handle big-endian hosts correctly  (Peter Maydell)
The implementation of the SMMUv3 has multiple places where it reads a data structure from the guest and directly operates on it without doing a guest-to-host endianness conversion. Since all SMMU data structures are little-endian, this means that the SMMU doesn't work on a big-endian host. In particular, this causes the Avocado test machine_aarch64_virt.py:Aarch64VirtMachine.test_alpine_virt_tcg_gic_max to fail on an s390x host. Add appropriate byte-swapping on reads and writes of guest in-memory data structures so that the device works correctly on big-endian hosts. As part of this we constrain queue_read() to operate only on Cmd structs and queue_write() on Evt structs, because in practice these are the only data structures the two functions are used with, and we need to know what the data structure is to be able to byte-swap its parts correctly. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20230717132641.764660-1-peter.maydell@linaro.org Cc: qemu-stable@nongnu.org
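As a rough illustration of the kind of conversion described above (a sketch only; the struct layout is an assumption, though le32_to_cpu() and ARRAY_SIZE() are standard QEMU helpers), each 32-bit word of a little-endian guest structure is swapped after it has been read from guest memory:

```c
/* Sketch: convert a little-endian guest command to host endianness after
 * reading it from the queue. le32_to_cpu() is a no-op on little-endian
 * hosts and a byte swap on big-endian ones. */
typedef struct Cmd {
    uint32_t word[4];       /* assumed layout: an SMMU command as LE words */
} Cmd;

static void cmd_le_to_cpu(Cmd *cmd)
{
    for (int i = 0; i < ARRAY_SIZE(cmd->word); i++) {
        cmd->word[i] = le32_to_cpu(cmd->word[i]);
    }
}
```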
2023-05-30  hw/arm/smmuv3: Add knob to choose translation stage and enable stage-2  (Mostafa Saleh)
As everything is in place, we can use a new system property to advertise which stage is supported and remove bad_ste from the STE stage-2 config. The added property, arm-smmuv3.stage, can take the values: - "1": Stage-1 only is advertised. - "2": Stage-2 only is advertised. If it is not passed or an unsupported value is passed, it defaults to stage-1. Advertise VMID16. Don't try to decode the CD if stage-2 is configured. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Mostafa Saleh <smostafa@google.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-11-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
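A minimal sketch of the kind of knob described above (the struct member name is an assumption; DEFINE_PROP_STRING comes from hw/qdev-properties.h):

```c
/* Sketch: a string property so the user can pick the advertised stage,
 * e.g. -global arm-smmuv3.stage=2. The SMMUv3State member name is assumed. */
static Property smmuv3_properties[] = {
    DEFINE_PROP_STRING("stage", SMMUv3State, stage),
    DEFINE_PROP_END_OF_LIST(),
};
```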
2023-05-30  hw/arm/smmuv3: Add stage-2 support in iova notifier  (Mostafa Saleh)
In smmuv3_notify_iova, read the granule based on the translation stage and use the VMID if a valid value is provided. Signed-off-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-10-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30  hw/arm/smmuv3: Add CMDs related to stage-2  (Mostafa Saleh)
CMD_TLBI_S2_IPA: as S1+S2 is not enabled, for now this can be the same as CMD_TLBI_NH_VAA. CMD_TLBI_S12_VMALL: add a new function to invalidate the TLB by VMID. For stage-1-only commands, add a check that raises CERROR_ILL if they are used when stage-1 is not supported. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Mostafa Saleh <smostafa@google.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-9-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30  hw/arm/smmuv3: Add VMID to TLB tagging  (Mostafa Saleh)
Allow the TLB to be tagged with a VMID. If only stage-1 is supported, the VMID is set to -1 and is ignored in the STE and the CMD_TLBI_NH* commands. Update the smmu_iotlb_insert trace event to include the vmid. Signed-off-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-8-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
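For illustration, a lookup key along these lines (types and field names assumed, not the exact QEMU definition) lets the IOTLB be tagged and matched by VMID as well as ASID:

```c
/* Sketch: IOTLB hash key carrying both ASID and VMID tags; a tag of -1
 * means "not applicable" for the unsupported stage. */
typedef struct SMMUIOTLBKey {
    uint64_t iova;
    int      asid;   /* -1 when stage-1 is not in use */
    int      vmid;   /* -1 when only stage-1 is supported */
    uint8_t  tg;     /* translation granule encoding */
    uint8_t  level;  /* table level of the cached mapping */
} SMMUIOTLBKey;
```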
2023-05-30  hw/arm/smmuv3: Make TLB lookup work for stage-2  (Mostafa Saleh)
Right now either stage-1 or stage-2 is supported, which simplifies how we can deal with TLBs. This patch makes TLB lookup work if stage-2 is enabled instead of stage-1. The TLB lookup is done before a PTW; if a valid entry is found we won't do the PTW. To be able to do a TLB lookup we need the correct tagging info, such as granularity and input size, so we get this based on the supported translation stage. The TLB entries are added correctly from each stage's PTW. When nested translation is supported this will need to change; for example, if we go with a combined TLB implementation we would need to use the minimum of the granularities in the TLB. As stage-2 shouldn't be tagged by ASID, the ASID will be set to -1 if S1P is not enabled. Signed-off-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-7-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30  hw/arm/smmuv3: Parse STE config for stage-2  (Mostafa Saleh)
Parse the stage-2 configuration from the STE and populate it in SMMUS2Cfg. The validity of field values is checked where possible. Only AA64 tables are supported, and Small Translation Tables (STT) are not supported. According to the SMMUv3 UM (IHI0070E), "5.2 Stream Table Entry": All fields with an S2 prefix (with the exception of S2VMID) are IGNORED when stage-2 bypasses translation (Config[1] == 0). This means that the VMID can be used (for TLB tagging) even if stage-2 is bypassed, so we parse it unconditionally when S2P exists; otherwise it is set to -1 (S1P only). As stall is not supported, if S2S is set the translation aborts. For S2R, we reuse the same code used for stage-1 with the record_faults flag. However, when nested translation is supported we will need to separate stage-1 and stage-2 faults. Fix the wrong shift in STE_S2HD, STE_S2HA, STE_S2S. Signed-off-by: Mostafa Saleh <smostafa@google.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20230516203327.2051088-6-smostafa@google.com [PMM: fixed format string] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-05-30  hw/arm/smmuv3: Refactor stage-1 PTW  (Mostafa Saleh)
In preparation for adding stage-2 support, rename smmu_ptw_64 to smmu_ptw_64_s1 and refactor some of the code so it can be reused in the stage-2 page table walk. Remove the AA64 check from the PTW, as decode_cd already ensures that AA64 is used; otherwise it faults with C_BAD_CD. A stage member is added to SMMUPTWEventInfo to differentiate between stage-1 and stage-2 PTW faults. Add a stage argument to trace_smmu_ptw_level to be consistent with other trace events. Signed-off-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Message-id: 20230516203327.2051088-4-smostafa@google.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-02-16  hw/arm/smmuv3: Add GBPA register  (Mostafa Saleh)
The GBPA register can be used to globally abort all transactions. It is described in the SMMU manual in "6.3.14 SMMU_GBPA". The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to be zero (do not abort incoming transactions). Other fields have the default value of Use Incoming. If UPDATE is not set, the write is ignored. This is the only permitted behavior in SMMUv3.2 and later (6.3.14.1 Update procedure). As this patch adds new state to the SMMU (GBPA), it is added in a new subsection for forward migration compatibility. GBPA is only migrated if its value is different from the reset value, so that migration remains backward compatible if software didn't write the register. Signed-off-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20230214094009.2445653-1-smostafa@google.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
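A sketch of the migration subsection pattern the message describes (field and symbol names are assumptions; the .needed callback is the standard QEMU mechanism for optional subsections):

```c
/* Sketch: only send the GBPA subsection when the register differs from its
 * reset value, so migration to an older QEMU keeps working if the guest
 * never wrote GBPA. */
static bool smmuv3_gbpa_needed(void *opaque)
{
    SMMUv3State *s = opaque;

    return s->gbpa != SMMU_GBPA_RESET_VAL;   /* symbol name assumed */
}

static const VMStateDescription vmstate_gbpa = {
    .name = "smmuv3/gbpa",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = smmuv3_gbpa_needed,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(gbpa, SMMUv3State),
        VMSTATE_END_OF_LIST()
    }
};
```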
2022-12-15  hw/arm: Convert TYPE_ARM_SMMUV3 to 3-phase reset  (Peter Maydell)
Convert the TYPE_ARM_SMMUV3 device to 3-phase reset. The legacy reset method doesn't do anything that's invalid in the hold phase, so the conversion only requires changing it to a hold phase method, and using the 3-phase versions of the "save the parent reset method and chain to it" code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-id: 20221109161444.3397405-3-peter.maydell@linaro.org
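A sketch of the conversion pattern (class, macro and helper names assumed; the hold-phase signature follows QEMU around the time of this commit):

```c
/* Sketch: hold-phase reset that chains to the parent class, replacing the
 * legacy DeviceClass::reset handler. */
static void smmu_reset_hold(Object *obj)
{
    SMMUv3State *s = ARM_SMMUV3(obj);
    SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);

    if (c->parent_phases.hold) {
        c->parent_phases.hold(obj);
    }
    smmuv3_init_regs(s);        /* device-specific register reset, assumed */
}

static void smmuv3_class_init(ObjectClass *klass, void *data)
{
    ResettableClass *rc = RESETTABLE_CLASS(klass);
    SMMUv3Class *c = ARM_SMMUV3_CLASS(klass);

    resettable_class_set_parent_phases(rc, NULL, smmu_reset_hold, NULL,
                                       &c->parent_phases);
}
```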
2022-04-28  hw/arm/smmuv3: Advertise support for SMMUv3.2-BBML2  (Peter Maydell)
The Arm SMMUv3 includes an optional feature equivalent to the CPU FEAT_BBM, which permits an OS to switch a range of memory between "covered by a huge page" and "covered by a sequence of normal pages" without having to engage in the traditional 'break-before-make' dance. (This is particularly important for the SMMU, because devices performing I/O through an SMMU are less likely to be able to cope with the window in the sequence where an access results in a translation fault.) The SMMU spec explicitly notes that one of the valid ways to be a BBM level 2 compliant implementation is: * if there are multiple entries in the TLB for an address, choose one of them and use it, ignoring the others Our SMMU TLB implementation (unlike our CPU TLB) does allow multiple TLB entries for an address, because the translation table level is part of the SMMUIOTLBKey, and so our IOTLB hashtable can include entries for the same address where the leaf was at different levels (i.e. both hugepage and normal page). Our TLB lookup implementation in smmu_iotlb_lookup() will always find the entry with the lowest level (i.e. it prefers the hugepage over the normal page) and ignore any others. TLB invalidation correctly removes all TLB entries matching the specified address or address range (unless the guest specifies the leaf level explicitly, in which case it gets what it asked for). So we can validly advertise support for BBML level 2. Note that we still can't yet advertise ourselves as an SMMU v3.2, because v3.2 requires support for the S2FWB feature, which we don't yet implement. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20220426160422.2353158-4-peter.maydell@linaro.org
2022-04-28  hw/arm/smmuv3: Add space in guest error message  (Jean-Philippe Brucker)
Make the translation error message prettier by adding a missing space before the parenthesis. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20220427111543.124620-2-jean-philippe@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-28  hw/arm/smmuv3: Cache event fault record  (Jean-Philippe Brucker)
The Record bit in the Context Descriptor tells the SMMU to report fault events to the event queue. Since we don't cache the Record bit at the moment, access faults from a cached Context Descriptor are never reported. Store the Record bit in the cached SMMUTransCfg. Fixes: 9bde7f0674fe ("hw/arm/smmuv3: Implement translate callback") Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20220427111543.124620-1-jean-philippe@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-04-22  hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()  (Xiang Chen)
memory_region_iommu_replay() always calls the IOMMU MR translate() callback with flag=IOMMU_NONE. Currently, smmuv3_translate() returns an IOMMUTLBEntry with perm set to IOMMU_NONE even if the translation succeeds, whereas it is expected to return the actual permission set in the table entry. So pass the actual perm from the table entry to the returned IOMMUTLBEntry. Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 1650094695-121918-1-git-send-email-chenxiang66@hisilicon.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-02-08  hw/arm/smmuv3: Fix device reset  (Eric Auger)
We currently miss a bunch of register resets in the device reset function. This sometimes prevents the guest from rebooting after a system_reset (with virtio-blk-pci). For instance, we may get the following errors: invalid STE smmuv3-iommu-memory-region-0-0 translation failed for iova=0x13a9d2000(SMMU_EVT_C_BAD_STE) Invalid read at addr 0x13A9D2000, size 2, region '(null)', reason: rejected invalid STE smmuv3-iommu-memory-region-0-0 translation failed for iova=0x13a9d2000(SMMU_EVT_C_BAD_STE) Invalid write at addr 0x13A9D2000, size 2, region '(null)', reason: rejected invalid STE Signed-off-by: Eric Auger <eric.auger@redhat.com> Message-id: 20220202111602.627429-1-eric.auger@redhat.com Fixes: 10a83cb988 ("hw/arm/smmuv3: Skeleton") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-30  dma: Let dma_memory_read/write() take MemTxAttrs argument  (Philippe Mathieu-Daudé)
Let devices specify transaction attributes when calling dma_memory_read() or dma_memory_write(). Patch created mechanically using spatch with this script: @@ expression E1, E2, E3, E4; @@ ( - dma_memory_read(E1, E2, E3, E4) + dma_memory_read(E1, E2, E3, E4, MEMTXATTRS_UNSPECIFIED) | - dma_memory_write(E1, E2, E3, E4) + dma_memory_write(E1, E2, E3, E4, MEMTXATTRS_UNSPECIFIED) ) Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Li Qiang <liq3ea@gmail.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20211223115554.3155328-6-philmd@redhat.com>
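For reference, a hedged sketch of what a converted call site looks like (the address and buffer here are placeholders):

```c
/* Sketch: post-conversion call, passing MEMTXATTRS_UNSPECIFIED to keep the
 * previous behaviour for callers with no particular transaction attributes. */
uint8_t buf[64];
MemTxResult res = dma_memory_read(&address_space_memory, addr,
                                  buf, sizeof(buf), MEMTXATTRS_UNSPECIFIED);
```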
2021-05-25  hw/arm/smmuv3: Another range invalidation fix  (Eric Auger)
6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range") failed to completely fix the misalignment issues with range invalidation. For instance, invalidation patterns like "invalidate 32 4kB pages starting from 0xff395000" are not correctly handled, because the previous fix only made sure the number of invalidated pages was a power of 2 but did not properly handle a start address that is not aligned with the range. This can be noticed when booting a Fedora 33 with a protected virtio-blk-pci. Signed-off-by: Eric Auger <eric.auger@redhat.com> Fixes: 6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range") Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-05-02  Do not include exec/address-spaces.h if it's not really necessary  (Thomas Huth)
Stop including exec/address-spaces.h in files that don't need it. Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20210416171314.2074665-5-thuth@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-04-30  hw/arm/smmuv3: Support 16K translation granule  (Kunkun Jiang)
The driver can query some bits in the SMMUv3 IDR5 register to learn which translation granules are supported. Arm recommends that SMMUv3 implementations support at least the 4K and 64K granules, but in the vSMMUv3 there seems to be no reason not to support the 16K translation granule as well. In addition, if 16K is not supported, vSVA will fail to be enabled in the future for a 16K guest kernel, so it is better to support it. Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-04-12  hw/arm/smmuv3: Emulate CFGI_STE_RANGE for an aligned range of StreamIDs  (Zenghui Yu)
In the emulation of the CFGI_STE_RANGE command, we currently take StreamID as the start of the invalidation range regardless of what the Range is, whilst the spec clearly states that - "Invalidation is performed for an *aligned* range of 2^(Range+1) StreamIDs." - "The bottom Range+1 bits of the StreamID parameter are IGNORED, aligning the range to its size." Take CFGI_ALL (where Range == 31) as an example: if there are some random bits in the StreamID field, we'll fail to perform the full invalidation and instead get a strange range (e.g., SMMUSIDRange={.start=1, .end=0}). Rework the emulation a bit to get rid of the discrepancy with the spec. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Acked-by: Eric Auger <eric.auger@redhat.com> Message-id: 20210402100449.528-1-yuzenghui@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
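As a standalone sketch of the arithmetic the spec quotes (not the exact QEMU code), the bottom Range+1 bits of StreamID are cleared to obtain an aligned, inclusive range:

```c
#include <stdint.h>

typedef struct {
    uint32_t start;
    uint32_t end;    /* inclusive */
} SIDRange;

static SIDRange cfgi_ste_range(uint32_t sid, unsigned range)
{
    /* 2^(range+1) StreamIDs, aligned to the range size; range == 31
     * (CFGI_ALL) covers the whole 32-bit StreamID space. */
    uint64_t mask = (UINT64_C(1) << (range + 1)) - 1;
    SIDRange r;

    r.start = sid & ~(uint32_t)mask;
    r.end = r.start + (uint32_t)mask;
    return r;
}
```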
2021-03-12  hw/arm/smmuv3: Fix SMMU_CMD_CFGI_STE_RANGE handling  (Eric Auger)
If the whole SID range (32 bits) is invalidated (SMMU_CMD_CFGI_ALL), @end overflows and we fail to handle the command properly. Besides that overflow, the current code is awkward in that it loops over the whole range instead of removing the currently cached configs through a hash table lookup. Fix both the overflow and the lookup. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20210309102742.30442-7-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-03-12  hw/arm/smmuv3: Enforce invalidation on a power of two range  (Eric Auger)
As of today, the driver can invalidate a number of pages that is not a power of 2. However, IOTLB unmap notifications and internal IOTLB invalidations work with masks, leading to erroneous invalidations. In case the range is not a power of 2, split the invalidation into power-of-2 invalidations. When looking for a single-page entry in the vSMMU internal IOTLB, let's make sure that if the entry is not found using g_hash_table_remove() we iterate over all the entries to find a potential range that overlaps it. Signed-off-by: Eric Auger <eric.auger@redhat.com> Message-id: 20210309102742.30442-6-eric.auger@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-02-05  vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support  (Peter Xu)
Previous work on the dev-iotlb message broke vhost on both SMMU and virtio-iommu, since dev-iotlb (or PCIe ATS) is not yet supported for those archs. An initial idea is that we can let the IOMMU export this information to vhost, so that vhost knows whether the vIOMMU supports dev-iotlb and can conditionally register for dev-iotlb or the old iotlb way. We could work based on a previous patch introducing PCIIOMMUOps, as Yi Liu proposed [1]. However it's not as easy as I thought, since vhost_iommu_region_add() does not have a PCIDevice context at all, being completely a backend. It seems non-trivial to pass a PCI device over to the backend during init; e.g. when the IOMMU notifier is registered, hdev->vdev is still NULL. To make the fix smaller and easier, this patch goes the other way and leverages the flag_changed() hook of vIOMMUs so that SMMU and virtio-iommu can trap the dev-iotlb registration and fail it. Then vhost can try the fallback solution of using UNMAP invalidation for its translations. [1] https://lore.kernel.org/qemu-devel/1599735398-6829-4-git-send-email-yi.l.liu@intel.com/ Reported-by: Eric Auger <eric.auger@redhat.com> Fixes: b68ba1ca57677acf870d5ab10579e6105c1f5338 Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20210204191228.187550-1-peterx@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2021-02-02  hw/arm/smmuv3: Fix addr_mask for range-based invalidation  (Zenghui Yu)
When handling guest range-based IOTLB invalidation, we should decode the TG field into the corresponding translation granule size so that we can pass the correct invalidation range to the backend. Set @granule to (tg * 2 + 10) to properly emulate the architecture. Fixes: d52915616c05 ("hw/arm/smmuv3: Get prepared for range invalidation") Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Acked-by: Eric Auger <eric.auger@redhat.com> Message-id: 20210130043220.1345-1-yuzenghui@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
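A small sketch of the decode described above (not the exact QEMU code): TG values of 1, 2 or 3 select the 4K, 16K or 64K granule, and the invalidation mask is derived from it:

```c
#include <stdint.h>

static uint64_t tlbi_addr_mask(uint8_t tg, uint64_t num_pages)
{
    uint8_t granule = tg * 2 + 10;      /* 12, 14 or 16 -> 4K, 16K, 64K */

    return (num_pages << granule) - 1;  /* e.g. one 4K page -> 0xfff */
}
```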
2020-12-08  memory: Add IOMMUTLBEvent  (Eugenio Pérez)
This way we can distinguish between a regular IOMMUTLBEntry (an entry of the IOMMU hardware) and notifications. In the notifications we set explicitly whether it is a MAP or an UNMAP, instead of relying on the entry permissions to differentiate them. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20201116165506.31315-3-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Acked-by: David Gibson <david@gibson.dropbear.id.au>
2020-12-08  memory: Rename memory_region_notify_one to memory_region_notify_iommu_one  (Eugenio Pérez)
The previous name didn't reflect the IOMMU operation. Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20201116165506.31315-2-eperezma@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2020-11-02  hw/arm/smmuv3: Fix potential integer overflow (CID 1432363)  (Philippe Mathieu-Daudé)
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic. This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN): CID 1432363 (#1 of 1): Unintentional integer overflow: overflow_before_widen: Potentially overflowing expression 1 << scale with type int (32 bits, signed) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type hwaddr (64 bits, unsigned). Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Eric Auger <eric.auger@redhat.com> Message-id: 20201030144617.1535064-1-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
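A minimal sketch of the fix (BIT_ULL() comes from qemu/bitops.h; the surrounding computation is elided and the function name is just illustrative):

```c
/* Sketch: computing a mask from a scale field. (1 << scale) is evaluated in
 * 32-bit int arithmetic and can overflow before being widened to hwaddr;
 * BIT_ULL() forces a 64-bit shift. */
static hwaddr scale_mask(unsigned scale)
{
    return BIT_ULL(scale) - 1;      /* was: (1 << scale) - 1 */
}
```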
2020-10-27  hw/arm/smmuv3: Set the restoration priority of the vSMMUv3 explicitly  (Zenghui Yu)
Ensure the vSMMUv3 will be restored before all PCIe devices so that DMA translation can work properly during migration. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Message-id: 20201019091508.197-1-yuzenghui@huawei.com Acked-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
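A sketch of the mechanism (other VMStateDescription fields elided; MIG_PRI_IOMMU comes from migration/vmstate.h):

```c
/* Sketch: a higher restore priority makes the vSMMUv3 state load before the
 * PCIe devices whose DMA translation depends on it. */
static const VMStateDescription vmstate_smmuv3 = {
    .name = "smmuv3",
    .priority = MIG_PRI_IOMMU,
    /* ... version_id, fields, subsections ... */
};
```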
2020-08-24  hw/arm/smmuv3: Advertise SMMUv3.2 range invalidation  (Eric Auger)
Expose the RIL bit so that the guest driver uses range invalidation. Although RIL is a 3.2 feature, we let the AIDR advertise SMMUv3.1 support, as a v3.x implementation is allowed to implement features from v3.(x+1). Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-12-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmuv3: Support HAD and advertise SMMUv3.1 support  (Eric Auger)
HAD is a mandatory feature of SMMUv3.1 if S1P is set, which is our case. The other 3.1 mandatory features come with S2P, which we don't have. So let's support HAD and advertise SMMUv3.1 support in AIDR. HAD support allows the CD to disable hierarchical attributes, i.e. if the HAD0/1 bit is set, the APTable field of table descriptors walked through TTB0/1 is ignored. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-11-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmuv3: Let AIDR advertise SMMUv3.0 support  (Eric Auger)
Add support for the AIDR register. It currently advertises the SMMU v3.0 spec. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-10-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmuv3: Get prepared for range invalidation  (Eric Auger)
Enhance the smmu_iotlb_inv_iova() helper with range invalidation. This uses the new fields passed in the NH_VA and NH_VAA commands: the size of the range, the level and the granule. As NH_VA and NH_VAA both use those fields, their decoding and handling are factored out into a new smmuv3_s1_range_inval() helper. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-8-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmuv3: Introduce smmuv3_s1_range_inval() helper  (Eric Auger)
Let's introduce a helper for S1 IOVA range invalidation. It will be used for the NH_VA and NH_VAA commands: it decodes the same fields, traces, calls the UNMAP notifiers and invalidates the corresponding IOTLB entries. At the moment we do not support 3.2 range invalidation yet, so it reduces to a single IOVA invalidation. Note that the leaf bit is now also decoded for the CMD_TLBI_NH_VAA command; at the moment it is only used for tracing. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-7-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmu-common: Manage IOTLB block entries  (Eric Auger)
At the moment each entry in the IOTLB corresponds to a page-sized mapping (4K, 16K or 64K), even if the page belongs to a mapped block. In case of block mappings this inefficiently consumes IOTLB entries. Change the value of the entry so that it reflects the actual mapping it belongs to (block or page start address and size). Also, the level/tg of the entry is encoded in the key. In subsequent patches we will enable range invalidation, which is able to provide the level/tg of the entry. Encoding the level/tg directly in the key will allow invalidating with g_hash_table_remove() when num_pages equals 1. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-6-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24  hw/arm/smmu: Introduce SMMUTLBEntry for PTW and IOTLB value  (Eric Auger)
Introduce a specialized SMMUTLBEntry to store the result of the PTW and cache it in the IOTLB. This structure extends the generic IOMMUTLBEntry struct with the level of the entry and the granule size, which will be useful when implementing range invalidation. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-5-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
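The idea can be sketched as follows (field names assumed, not the exact QEMU definition):

```c
/* Sketch: extend the generic IOMMU TLB entry with the information needed
 * later to match range invalidations against cached mappings. */
typedef struct SMMUTLBEntry {
    IOMMUTLBEntry entry;    /* iova, translated_addr, addr_mask, perm */
    uint8_t level;          /* table level the mapping was found at */
    uint8_t granule;        /* log2 of the translation granule */
} SMMUTLBEntry;
```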
2020-08-24  hw/arm/smmu-common: Add IOTLB helpers  (Eric Auger)
Add two helpers: one to look up a given IOTLB entry and one to insert a new entry. We also move the tracing there. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20200728150815.11446-3-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-20  hw: Remove unnecessary cast when calling dma_memory_read()  (Philippe Mathieu-Daudé)
Since its introduction in commit d86a77f8abb, dma_memory_read() has always accepted a void pointer argument. Remove the unnecessary casts. This commit was produced with the included Coccinelle script scripts/coccinelle/exec_rw_const. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> --- v4: Drop parenthesis when removing cast (Eric Blake)
2019-12-20  hw/arm/smmuv3: Report F_STE_FETCH fault address in correct word position  (Simon Veith)
The smmuv3_record_event() function that generates the F_STE_FETCH error uses the EVT_SET_ADDR macro to record the fetch address, placing it in 32-bit words 4 and 5. The correct position for this address is in words 6 and 7, per the SMMUv3 Architecture Specification. Update the function to use the EVT_SET_ADDR2 macro instead, which is the macro intended for writing to these words. ref. ARM IHI 0070C, section 7.3.4. Signed-off-by: Simon Veith <sveith@amazon.de> Acked-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Message-id: 1576509312-13083-7-git-send-email-sveith@amazon.de Cc: Eric Auger <eric.auger@redhat.com> Cc: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org Acked-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-12-20  hw/arm/smmuv3: Align stream table base address to table size  (Simon Veith)
Per the specification, and as observed in hardware, the SMMUv3 aligns the SMMU_STRTAB_BASE address to the size of the table by masking out the respective least significant bits in the ADDR field. Apply this masking logic to our smmu_find_ste() lookup function per the specification. ref. ARM IHI 0070C, section 6.3.23. Signed-off-by: Simon Veith <sveith@amazon.de> Acked-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Message-id: 1576509312-13083-5-git-send-email-sveith@amazon.de Cc: Eric Auger <eric.auger@redhat.com> Cc: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
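A standalone sketch of the masking (not the exact QEMU code; it assumes the 64-byte linear STE size, so a table of 2^log2size entries spans 2^(log2size + 6) bytes):

```c
#include <stdint.h>

static uint64_t strtab_base_align(uint64_t addr, unsigned log2size)
{
    uint64_t table_bytes = UINT64_C(64) << log2size;

    /* Mask out the least significant bits so the base is aligned to the
     * table size, as the spec and real hardware do. */
    return addr & ~(table_bytes - 1);
}
```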
2019-12-20  hw/arm/smmuv3: Check stream IDs against actual table LOG2SIZE  (Simon Veith)
When checking whether a stream ID is in range of the stream table, we have so far been only checking it against our implementation limit (SMMU_IDR1_SIDSIZE). However, the guest can program the STRTAB_BASE_CFG.LOG2SIZE field to a size that is smaller than this limit. Check the stream ID against this limit as well to match the hardware behavior of raising C_BAD_STREAMID events in case the limit is exceeded. Also, ensure that we do not go one entry beyond the end of the table by checking that its index is strictly smaller than the table size. ref. ARM IHI 0070C, section 6.3.24. Signed-off-by: Simon Veith <sveith@amazon.de> Acked-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Message-id: 1576509312-13083-4-git-send-email-sveith@amazon.de Cc: Eric Auger <eric.auger@redhat.com> Cc: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
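A standalone sketch of the double bound check (not the exact QEMU code): the stream ID must fit both the implementation limit and the guest-programmed LOG2SIZE, and must index strictly below the end of the table:

```c
#include <stdbool.h>
#include <stdint.h>

static bool sid_in_range(uint32_t sid, unsigned impl_sidsize, unsigned log2size)
{
    unsigned limit = log2size < impl_sidsize ? log2size : impl_sidsize;

    return (uint64_t)sid < (UINT64_C(1) << limit);
}
```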
2019-12-20  hw/arm/smmuv3: Apply address mask to linear strtab base address  (Simon Veith)
In the SMMU_STRTAB_BASE register, the stream table base address only occupies bits [51:6]. Other bits, such as RA (bit [62]), must be masked out to obtain the base address. The branch for 2-level stream tables correctly applies this mask by way of SMMU_BASE_ADDR_MASK, but the one for linear stream tables does not. Apply the missing mask in that case as well so that the correct stream base address is used by guests which configure a linear stream table. Linux guests are unaffected by this change because they choose a 2-level stream table layout for the QEMU SMMUv3, based on the size of its stream ID space. ref. ARM IHI 0070C, section 6.3.23. Signed-off-by: Simon Veith <sveith@amazon.de> Acked-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Message-id: 1576509312-13083-2-git-send-email-sveith@amazon.de Cc: Eric Auger <eric.auger@redhat.com> Cc: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org Acked-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-04  memory: allow memory_region_register_iommu_notifier() to fail  (Eric Auger)
Currently, when we attempt to register a notifier whose flags are not supported by the IOMMU MR (especially the MAP one), we generally abruptly exit in the IOMMU code. The failure could be handled more nicely in the caller, and especially in the VFIO code. So let's allow memory_region_register_iommu_notifier() to fail, as well as the notify_flag_changed() callback. All sites implementing the callback are updated. This patch does not yet remove the exit(1) in the amd_iommu code. In SMMUv3 we turn the warning message into an error message saying that the assigned device would not work properly. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-03  hw/arm/smmuv3: Remove spurious error messages on IOVA invalidations  (Eric Auger)
An IOVA/ASID invalidation is notified to all IOMMU Memory Regions through smmuv3_inv_notifiers_iova/smmuv3_notify_iova. When the notification occurs it is possible that some of the PCIe devices associated with the notified regions do not have a valid stream table entry. In that case we output a LOG_GUEST_ERROR message, for example: invalid sid=<SID> (L1STD span=0) "smmuv3_notify_iova error decoding the configuration for iommu mr=<MR>" This is unfortunate, as the user gets the impression that there are some translation decoding errors whereas there are not. This patch adds a new field in SMMUEventInfo that tells whether the detection of an invalid STE must lead to an error report. invalid_ste_allowed is set before doing the invalidations and kept unset on actual translation. The other configuration decoding error messages are kept, since if the STE is valid then the rest of the config must be correct. Signed-off-by: Eric Auger <eric.auger@redhat.com> Message-id: 20190822172350.12008-6-eric.auger@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  hw/arm/smmuv3: Log a guest error when decoding an invalid STE  (Eric Auger)
Log a guest error when encountering an invalid STE. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190822172350.12008-5-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  Clean up inclusion of sysemu/sysemu.h  (Markus Armbruster)
In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). Almost a third of its inclusions are actually superfluous. Delete them. Downgrade two more to qapi/qapi-types-run-state.h, and move one from char/serial.h to char/serial.c. hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and stubs/semihost.c define variables declared in sysemu/sysemu.h without including it. The compiler is cool with that, but include it anyway. This doesn't reduce actual use much, as it's still included into widely included headers. The next commit will tackle that. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-27-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2019-08-16  Include hw/boards.h a bit less  (Markus Armbruster)
hw/boards.h pulls in almost 60 headers. The less we include it into headers, the better. As a first step, drop superfluous inclusions, and downgrade some more to what's actually needed. Gets rid of just one inclusion into a header. Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-23-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
2019-08-16  Include migration/vmstate.h less  (Markus Armbruster)
In my "build everything" tree, changing migration/vmstate.h triggers a recompile of some 2700 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). hw/hw.h supposedly includes it for convenience. Several other headers include it just to get VMStateDescription. The previous commit made that unnecessary. Include migration/vmstate.h only where it's still needed. Touching it now recompiles only some 1600 objects. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-16-armbru@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  Include hw/irq.h a lot less  (Markus Armbruster)
In my "build everything" tree, changing hw/irq.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). hw/hw.h supposedly includes it for convenience. Several other headers include it just to get qemu_irq and.or qemu_irq_handler. Move the qemu_irq and qemu_irq_handler typedefs from hw/irq.h to qemu/typedefs.h, and then include hw/irq.h only where it's still needed. Touching it now recompiles only some 500 objects. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-13-armbru@redhat.com>
2019-06-13  hw/arm/smmuv3: Fix decoding of ID register range  (Peter Maydell)
The SMMUv3 ID registers cover an area 0x30 bytes in size (12 registers, 4 bytes each). We were incorrectly decoding only the first 0x20 bytes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Message-id: 20190524124829.2589-1-peter.maydell@linaro.org
2019-04-29  hw/arm/smmuv3: Remove SMMUNotifierNode  (Eric Auger)
The SMMUNotifierNode struct is not necessary and brings extra complexity so let's remove it. We now directly track the SMMUDevices which have registered IOMMU MR notifiers. This is inspired from the same transformation on intel-iommu done in commit b4a4ba0d68f50f218ee3957b6638dbee32a5eeef ("intel-iommu: remove IntelIOMMUNotifierNode") Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Message-id: 20190409160219.19026-1-eric.auger@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>