path: root/target/riscv/pmp.c
Age  Commit message  Author
2023-11-07  target/riscv: pmp: Ignore writes when RW=01  (Mayuresh Chitale)
As per the Priv spec: "The R, W, and X fields form a collective WARL field for which the combinations with R=0 and W=1 are reserved." However, such writes are currently not ignored as they ought to be. The combinations with RW=01 are allowed only when the Smepmp extension is enabled and mseccfg.MML is set. Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20231019065705.1431868-1-mchitale@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
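For context, a standalone sketch of the WARL filtering this commit describes; the helper name pmpcfg_write_is_legal and the PMP_READ/PMP_WRITE macros are simplified stand-ins for the QEMU internals, not the actual code:

```c
#include <stdbool.h>
#include <stdint.h>

#define PMP_READ  0x01
#define PMP_WRITE 0x02

/*
 * A pmpcfg byte with R=0 and W=1 is a reserved encoding and the write
 * is ignored, unless the Smepmp extension is enabled and mseccfg.MML
 * is set, in which case the encoding becomes meaningful.
 */
static bool pmpcfg_write_is_legal(uint8_t val, bool smepmp, bool mml)
{
    bool rw01 = !(val & PMP_READ) && (val & PMP_WRITE);

    return !rw01 || (smepmp && mml);
}
```

A write whose value fails such a check would simply leave the old configuration in place.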
2023-11-07  target/riscv: pmp: Clear pmp/smepmp bits on reset  (Mayuresh Chitale)
As per the Priv and Smepmp specifications, certain bits such as the 'L' bit of pmp entries and mseccfg.MML can only be cleared upon reset, and it is necessary to do so to allow 'M' mode firmware to correctly reinitialize the pmp/smepmp state across reboots. As required by the spec, also clear the 'A' field of pmp entries. Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20231019065644.1431798-1-mchitale@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-11-07  Add epmp to extensions list and rename it to smepmp  (Himanshu Chauhan)
Smepmp is a ratified extension which QEMU currently refers to as epmp. Rename epmp to smepmp and add it to the extensions list so that it appears in the ISA string. Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com> Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20231019065546.1431579-1-mchitale@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-09-11  target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes  (Leon Schuermann)
When the rule-lock bypass (RLB) bit is set in the mseccfg CSR, the PMP configuration lock bits must not apply. While this behavior is implemented for the pmpcfgX CSRs, this bit is not respected for changes to the pmpaddrX CSRs. This patch ensures that pmpaddrX CSR writes work even on locked regions when the global rule-lock bypass is enabled. Signed-off-by: Leon Schuermann <leons@opentitan.org> Reviewed-by: Mayuresh Chitale <mchitale@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20230829215046.1430463-1-leon@is.currently.online> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Smepmp: Return error when access permission not allowed in PMP  (Himanshu Chauhan)
On an address match, skip checking for default permissions and return an error based on the access defined in the PMP configuration. v3 Changes: o Removed explicit return of boolean value from comparison of priv/allowed_priv v2 Changes: o Removed goto to return in place when address matches o Call pmp_hart_has_privs_default at the end of the loop Fixes: 90b1fafce06 ("target/riscv: Smepmp: Skip applying default rules when address matches") Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn> Message-Id: <20230605164548.715336-1-hchauhan@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Deny access if access is partially inside the PMP entry  (Weiwei Li)
An access will fail if it is only partially inside the PMP entry. However, only setting ret = false does not by itself mean a pmp violation, since pmp_hart_has_privs_default() may still return true at the end of pmp_hart_has_privs(). Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-13-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write  (Weiwei Li)
Use pmp_update_rule_addr() and pmp_update_rule_nums() separately to update rule nums only once for each pmpcfg_csr_write. Then remove pmp_update_rule() since it becomes unused. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-12-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes  (Weiwei Li)
The TLB needn't be flushed when pmpcfg/pmpaddr doesn't change. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-Id: <20230517091519.34439-11-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Flush TLB when pmpaddr is updated  (Weiwei Li)
TLB should be flushed not only for pmpcfg csr changes, but also for pmpaddr csr changes. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-Id: <20230517091519.34439-10-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Update the next rule addr in pmpaddr_csr_write()  (Weiwei Li)
Currently only the rule addr with the same index as the pmpaddr is updated when a pmpaddr CSR is modified. However, the rule addr of the next PMP entry may also be affected if its A field is PMP_AMATCH_TOR, so we should also update it in this case. Writes to a pmpaddr CSR do not affect the rule count, so we needn't call pmp_update_rule_nums() in pmpaddr_csr_write(). Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-9-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
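A rough sketch of the write path described above, using simplified types; update_rule_addr, cfg_reg and the A-field extraction are stand-ins for the corresponding QEMU helpers, not the actual code:

```c
#include <stdint.h>

#define MAX_RISCV_PMPS 16
#define PMP_AMATCH_TOR 1

typedef struct {
    uint8_t  cfg_reg;     /* R/W/X, A, L bits */
    uint64_t addr_reg;    /* raw pmpaddr value */
    uint64_t sa, ea;      /* decoded start/end of the rule */
} pmp_entry_t;

void update_rule_addr(pmp_entry_t *pmp, uint32_t index);  /* re-decodes sa/ea */

/*
 * After writing pmpaddr[i], re-decode entry i, and also entry i+1 when
 * its A field is TOR, because a TOR rule takes its start address from
 * the previous pmpaddr.  The rule count is untouched, so there is no
 * need to recompute it.
 */
void pmpaddr_write(pmp_entry_t *pmp, uint32_t i, uint64_t val)
{
    pmp[i].addr_reg = val;
    update_rule_addr(pmp, i);

    if (i + 1 < MAX_RISCV_PMPS &&
        ((pmp[i + 1].cfg_reg >> 3) & 0x3) == PMP_AMATCH_TOR) {
        update_rule_addr(pmp, i + 1);
    }
}
```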
2023-06-13  target/riscv: Flush TLB when MMWP or MML bits are changed  (Weiwei Li)
The MMWP and MML bits may affect the allowed privs of PMP entries and the default privs, both of which may change the allowed privs of existing TLB entries. So we need to flush the TLB when they are changed. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-8-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Remove unused parameters in pmp_hart_has_privs_default()  (Weiwei Li)
The addr and size parameters in pmp_hart_has_privs_default() are unused. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-7-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Make RLB/MML/MMWP bits writable only when Smepmp is enabled  (Weiwei Li)
The RLB/MML/MMWP bits in the mseccfg CSR are introduced by the Smepmp extension, so they should only be writable and settable to 1 when cfg.epmp is true. Then we also needn't check epmp in pmp_hart_has_privs_default(). Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-6-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Change the return type of pmp_hart_has_privs() to bool  (Weiwei Li)
We no longer need the pmp_index of the matched PMP entry. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-5-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Make the short cut really work in pmp_hart_has_privs  (Weiwei Li)
Return the result directly for the short cut, since we needn't do the subsequent checks on the PMP entries if there are no PMP rules. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-4-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-06-13  target/riscv: Update pmp_get_tlb_size()  (Weiwei Li)
PMP entries before (and including) the matched PMP entry may only partially cover the TLB page, and this may split the page into regions with different permissions. For example, with PMP0 (0x80000008~0x8000000F, R) and PMP1 (0x80000000~0x80000FFF, RWX), a write access to 0x80000000 will match PMP1. However, we cannot cache the translation result in the TLB since this would let the write access to 0x80000008 bypass the check of PMP0. So we should check all entries instead of only the matched PMP entry in pmp_get_tlb_size() and set tlb_size to 1 in this case. Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP rules. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230517091519.34439-2-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
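A simplified sketch of the idea (pmp_rule_t, pmp_tlb_size and the fields are illustrative stand-ins, not the QEMU implementation): a full-page TLB entry is only safe when every active PMP rule either covers the whole page or misses it entirely.

```c
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096

typedef struct {
    uint64_t sa, ea;   /* decoded inclusive start/end of the rule */
    bool active;       /* A field != OFF */
} pmp_rule_t;

uint64_t pmp_tlb_size(const pmp_rule_t *rules, int n, uint64_t page_sa)
{
    uint64_t page_ea = page_sa + TARGET_PAGE_SIZE - 1;

    for (int i = 0; i < n; i++) {
        if (!rules[i].active) {
            continue;
        }
        bool covers_all = rules[i].sa <= page_sa && rules[i].ea >= page_ea;
        bool disjoint   = rules[i].ea < page_sa || rules[i].sa > page_ea;

        if (!covers_all && !disjoint) {
            return 1;   /* partial coverage: force per-access PMP checks */
        }
    }
    return TARGET_PAGE_SIZE;   /* also the no-PMP / no-rules case (n == 0) */
}
```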
2023-05-05  target/riscv: Fix lines with over 80 characters  (Weiwei Li)
Fix lines with over 80 characters for both code and comments. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Acked-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-Id: <20230405085813.40643-5-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-05-05  target/riscv: Fix format for comments  (Weiwei Li)
Fix the format of multi-line comments. Add spaces around single-line comments (after "/*" and before "*/"). Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Acked-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-Id: <20230405085813.40643-4-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-05-05  target/riscv: Fix format for indentation  (Weiwei Li)
Fix indentation problems, and try to use the same indentation strategy in the same file. Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Acked-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-Id: <20230405085813.40643-3-liweiwei@iscas.ac.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-03-01  target/riscv: remove RISCV_FEATURE_MMU  (Daniel Henrique Barboza)
RISCV_FEATURE_MMU is set whenever cpu->cfg.mmu is set, so let's just use the flag directly instead. With this change the enum is also removed. It is worth noting that this enum, and all the RISCV_FEATURE_* values it contained, predate the existence of the cpu->cfg object. Today, using cpu->cfg is an easier way to retrieve all the features and extensions enabled in the hart. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn> Reviewed-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-ID: <20230222185205.355361-10-dbarboza@ventanamicro.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-03-01  target/riscv: remove RISCV_FEATURE_PMP  (Daniel Henrique Barboza)
RISCV_FEATURE_PMP is being set via riscv_set_feature() by mirroring the cpu->cfg.pmp flag. Use the flag instead. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn> Reviewed-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-ID: <20230222185205.355361-8-dbarboza@ventanamicro.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-03-01  target/riscv: remove RISCV_FEATURE_EPMP  (Daniel Henrique Barboza)
RISCV_FEATURE_EPMP is always set to the same value as the cpu->cfg.epmp flag. Use the flag directly. Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn> Reviewed-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Message-ID: <20230222185205.355361-7-dbarboza@ventanamicro.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-23  target/riscv: Smepmp: Skip applying default rules when address matches  (Himanshu Chauhan)
When MSECCFG.MML is set, if, after the address-range check in PMP, the requested permissions are not the same as those programmed in the PMP, the default permissions are applied. This should only happen when no matching address is found. This patch skips applying the default rules when a matching address range is found, and returns the index of the matched PMP entry. Fixes: 824cac681c3 (target/riscv: Fix PMP propagation for tlb) Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20230209055206.229392-1-hchauhan@ventanamicro.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-01-06  target/riscv: Fix PMP propagation for tlb  (LIU Zhiwei)
Only the pmp index checked by pmp_hart_has_privs can be used by pmp_get_tlb_size, to avoid using a wrong pmp index. Before this modification, we may use a wrong pmp index. For example, we check address 0x4fc with size 0x4 in pmp_hart_has_privs. If there is a pmp rule whose valid range is [0x4fc, 0x500), pmp_hart_has_privs will return true; however, the checked pmp index is discarded because pmp_hart_has_privs returns a bool value. pmp_is_range_in_tlb will then traverse all pmp rules: the tlb_sa will be 0x0, and the tlb_ea will be 0xfff. If there is a pmp rule [0x10, 0x14), it will be misused, as it is legal in pmp_get_tlb_size. As we already know the correct pmp index, just remove pmp_is_range_in_tlb and get the tlb size directly from pmp_get_tlb_size. Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20221012060016.30856-1-zhiwei_liu@linux.alibaba.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
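A sketch of the resulting interface shape (simplified signatures, not the actual QEMU ones): the privilege check reports which rule matched, and the TLB size is then computed for that rule only, never for an unrelated one.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the QEMU functions. */
bool pmp_check_privs(uint64_t addr, uint64_t size, int *matched_index);
uint64_t pmp_tlb_size_for_rule(int rule_index, uint64_t addr);

static uint64_t tlb_fill_size(uint64_t addr, uint64_t size)
{
    int idx = -1;

    if (!pmp_check_privs(addr, size, &idx)) {
        return 0;                            /* access fault */
    }
    return pmp_tlb_size_for_rule(idx, addr); /* use the matched rule */
}
```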
2022-10-14  target/riscv: pmp: Fixup TLB size calculation  (Alistair Francis)
Since commit 4047368938f6 "accel/tcg: Introduce tlb_set_page_full" we have been seeing this assert when running Tock on the OpenTitan machine: ../accel/tcg/cputlb.c:1294: tlb_set_page_with_attrs: Assertion `is_power_of_2(size)' failed. The issue is that pmp_get_tlb_size() could return a TLB size that wasn't a power of 2. The size was also smaller than TARGET_PAGE_SIZE. This patch ensures that any TLB size less than TARGET_PAGE_SIZE is rounded down to 1 to ensure it's a valid size. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221012011449.506928-1-alistair.francis@opensource.wdc.com Message-Id: <20221012011449.506928-1-alistair.francis@opensource.wdc.com>
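A minimal sketch of the rounding described above (simplified, not the exact QEMU code): any PMP-derived size below a full page is clamped to 1 byte, which is a valid power of 2 and forces per-access PMP checks.

```c
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096

static uint64_t clamp_tlb_size(uint64_t pmp_region_size)
{
    /* The TLB only accepts power-of-2 sizes. */
    if (pmp_region_size < TARGET_PAGE_SIZE) {
        return 1;
    }
    return TARGET_PAGE_SIZE;
}
```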
2022-07-03  target/riscv/pmp: guard against PMP ranges with a negative size  (Nicolas Pitre)
For a TOR entry to match, the start address must be lower than the end address. Normally this is always the case, but correct code might still run into the following scenario: Initial state: pmpaddr3 = 0x2000 pmp3cfg = OFF pmpaddr4 = 0x3000 pmp4cfg = TOR Execution: 1. write 0x40ff to pmpaddr3 2. write 0x32ff to pmpaddr4 3. set pmp3cfg to NAPOT with a read-modify-write on pmpcfg0 4. set pmp4cfg to NAPOT with a read-modify-write on pmpcfg1 When (2) is emulated, a call to pmp_update_rule() creates a negative range for pmp4 as pmp4cfg is still set to TOR. And when (3) is emulated, a call to tlb_flush() is performed, causing pmp_get_tlb_size() to return a very creatively large TLB size for pmp4. This, in turn, may result in accesses to non-existent/uninitialized memory regions and a fault, so that (4) ends up never being executed. This is in m-mode with MPRV unset, meaning that unlocked PMP entries should have no effect. Therefore such a behavior based on PMP content is very unexpected. Make sure no negative PMP range can be created, whether explicitly by the emulated code or implicitly like the above. Signed-off-by: Nicolas Pitre <nico@fluxnic.net> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <3oq0sqs1-67o0-145-5n1s-453o118804q@syhkavp.arg> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
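A standalone sketch of TOR decoding with the guard this commit adds (decode_tor is an illustrative stand-in; the real code lives in the rule-update path): a start address above the end address collapses the rule to an empty range.

```c
#include <stdint.h>

static void decode_tor(uint64_t prev_pmpaddr, uint64_t this_pmpaddr,
                       uint64_t *sa, uint64_t *ea)
{
    *sa = prev_pmpaddr << 2;          /* pmpaddr holds address bits [xx:2] */
    *ea = (this_pmpaddr << 2) - 1;

    if (*sa > *ea) {
        /* never let a negative-sized range reach the TLB-size code */
        *sa = 0;
        *ea = 0;
    }
}
```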
2022-04-22  target/riscv/pmp: fix NAPOT range computation overflow  (Nicolas Pitre)
There is an overflow with the current code where a pmpaddr value of 0x1fffffff is decoded as sa=0 and ea=0, whereas it should be sa=0 and ea=0xffffffff. Fix that by simplifying the computation. There is in fact no need for ctz64() nor a special case for -1 to achieve proper results. Signed-off-by: Nicolas Pitre <nico@fluxnic.net> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <rq81o86n-17ps-92no-p65o-79o88476266@syhkavp.arg> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
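A standalone sketch of the simplified NAPOT decoding (it mirrors the idea described above; names and exact shape are illustrative): appending the implicit low bits and using a & (a + 1) and a | (a + 1) yields the region bounds without ctz64() or a special case, and 0x1fffffff decodes to sa=0, ea=0xffffffff as expected.

```c
#include <stdint.h>

static void decode_napot(uint64_t pmpaddr, uint64_t *sa, uint64_t *ea)
{
    uint64_t a = (pmpaddr << 2) | 0x3;   /* pmpaddr holds address bits [xx:2] */

    *sa = a & (a + 1);   /* clear the trailing 1s -> power-of-2 base */
    *ea = a | (a + 1);   /* extend them by one bit -> last byte of range */
}
```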
2022-01-21  target/riscv: Adjust pmpcfg access with mxl  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20220120122050.41546-2-zhiwei_liu@c-sky.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-07-15  target/riscv: pmp: Fix some typos  (Bin Meng)
%s/CSP/CSR %s/thie/the Signed-off-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210627115716.3552-1-bmeng.cn@gmail.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-06-08  target/riscv/pmp: Add assert for ePMP operations  (Alistair Francis)
Although we construct epmp_operation in such a way that it can only be between 0 and 15, Coverity complains that we don't handle the other possible cases. To fix Coverity and make it easier for humans to read, add a default case to the switch statement that calls g_assert_not_reached(). Fixes: CID 1453108 Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Message-id: ec5f225928eec448278c82fcb1f6805ee61dde82.1621550996.git.alistair.francis@wdc.com
2021-05-11  target/riscv/pmp: Remove outdated comment  (Alistair Francis)
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: 10387eec21d2f17c499a78fdba85280cab4dd27f.1618812899.git.alistair.francis@wdc.com
2021-05-11  target/riscv: Implementation of enhanced PMP (ePMP)  (Hou Weiying)
This commit adds support for ePMP v0.9.1. The ePMP spec can be found in: https://docs.google.com/document/d/1Mh_aiHYxemL0umN3GTTw8vsbmzHZ_nxZXgjgOUzbvc8 Signed-off-by: Hongzheng-Li <Ethan.Lee.QNL@gmail.com> Signed-off-by: Hou Weiying <weiying_hou@outlook.com> Signed-off-by: Myriad-Dreamin <camiyoru@gmail.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: fef23b885f9649a4d54e7c98b168bdec5d297bb1.1618812899.git.alistair.francis@wdc.com [ Changes by AF: - Rebase on master - Update to latest spec - Use a switch case to handle ePMP MML permissions - Fix a few bugs ] Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-05-11  target/riscv: Add ePMP CSR access functions  (Hou Weiying)
Signed-off-by: Hongzheng-Li <Ethan.Lee.QNL@gmail.com> Signed-off-by: Hou Weiying <weiying_hou@outlook.com> Signed-off-by: Myriad-Dreamin <camiyoru@gmail.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: 270762cb2507fba6a9eeb99a774cf49f7da9cc32.1618812899.git.alistair.francis@wdc.com [ Changes by AF: - Rebase on master - Fix build errors - Fix some style issues ] Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
2021-05-11  target/riscv: Fix the PMP is locked check when using TOR  (Alistair Francis)
The RISC-V spec says: if PMP entry i is locked and pmp(i)cfg.A is set to TOR, writes to pmpaddr(i-1) are ignored. The current QEMU code ignores accesses to both pmpaddr(i-1) and pmpcfg(i-1), which is incorrect. Update the pmp_is_locked() function to not check the supporting fields and instead enforce the lock functionality in the pmpaddr write operation. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: 2831241458163f445a89bd59c59990247265b0c6.1618812899.git.alistair.francis@wdc.com
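A sketch of the lock rule as described above (pmpaddr_write_ignored is an illustrative helper, not the QEMU function): a pmpaddr[i] write is dropped when entry i is locked, or when entry i+1 is locked and uses TOR, since entry i then supplies its base address.

```c
#include <stdbool.h>
#include <stdint.h>

#define PMP_LOCK       0x80
#define PMP_AMATCH_TOR 1

static bool pmpaddr_write_ignored(const uint8_t *cfg, int n, int i)
{
    if (cfg[i] & PMP_LOCK) {
        return true;
    }
    if (i + 1 < n &&
        (cfg[i + 1] & PMP_LOCK) &&
        ((cfg[i + 1] >> 3) & 0x3) == PMP_AMATCH_TOR) {
        return true;
    }
    return false;
}
```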
2021-03-22  target/riscv: flush TLB pages if PMP permission has been changed  (Jim Shu)
If the PMP permission of any address has been changed by updating a PMP entry, flush all TLB pages to prevent the old permission from being used. Signed-off-by: Jim Shu <cwshu@andestech.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 1613916082-19528-4-git-send-email-cwshu@andestech.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-03-22  target/riscv: propagate PMP permission to TLB page  (Jim Shu)
Currently, PMP permission checking of a TLB page is bypassed if the TLB hits. Fix it by propagating the PMP permission to the TLB page permission. PMP permission checking also uses the MMU-style API to change the TLB permission and size. Signed-off-by: Jim Shu <cwshu@andestech.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 1613916082-19528-2-git-send-email-cwshu@andestech.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-01-16  target/riscv/pmp: Raise exception if no PMP entry is configured  (Atish Patra)
As per the privilege specification, any access from S/U mode should fail if no pmp region is configured. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20201223192553.332508-1-atish.patra@wdc.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-11-03  target/riscv: Add PMP state description  (Yifei Jiang)
When the PMP feature is supported, add a PMP state description to vmstate_riscv_cpu. 'vmstate_pmp_addr' and 'num_rules' could be regenerated by pmp_update_rule(), but that would update num_rules repeatedly. So extract pmp_update_rule_addr() and pmp_update_rule_nums() to update 'vmstate_pmp_addr' and 'num_rules' respectively. Signed-off-by: Yifei Jiang <jiangyifei@huawei.com> Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20201026115530.304-4-jiangyifei@huawei.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-08-21  target/riscv: Change the TLB page size depending on PMP entries  (Zong Li)
The minimum granularity of PMP is 4 bytes, which is smaller than the 4KB page size; therefore, the pmp check would be ignored if its range doesn't start at a page-aligned address. This patch detects the pmp entries and sets a smaller page size for the TLB if there is a PMP entry that covers the page. Signed-off-by: Zong Li <zong.li@sifive.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <6b0bf48662ef26ab4c15381a08e78a74ebd7ca79.1595924470.git.zong.li@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-08-21  riscv: Fix bug in setting pmpcfg CSR for RISCV64  (Hou Weiying)
First, sizeof(target_ulong) equals 4 on riscv32, so this change does not change the function on riscv32. Second, sizeof(target_ulong) equals 8 on riscv64, and 'reg_index * 8 + i' is not a legal pmp_index (we will explain later); it should be 'reg_index * 4 + i'. If the parameter reg_index equals 2 (meaning that we will change the value of pmpcfg2, the second pmpcfg on riscv64), then pmpcfg_csr_write(env, 2, val) will map the writes to pmp_write_cfg(env, 2 * 8 + [0...7], val). However, no cfg csr is indexed by values 16 to 23 on riscv64, so we consider it a bug. We looked for a constant (e.g., defining a new constant named RISCV_WORD_SIZE) in QEMU to help others understand the code better, but none was found. A possible explanation of this literal is that the minimum word length on riscv is 4 bytes (32 bits). Signed-off-by: Hongzheng-Li <Ethan.Lee.QNL@gmail.com> Signed-off-by: Hou Weiying <weiying_hou@outlook.com> Signed-off-by: Myriad-Dreamin <camiyoru@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <SG2PR02MB263420036254AC8841F66CE393460@SG2PR02MB2634.apcprd02.prod.outlook.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
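A standalone sketch of the indexing in question (pmpcfg_write and pmp_write_cfg_byte are simplified stand-ins, not the actual QEMU functions): a pmpcfg CSR packs 4 config bytes on RV32 and 8 on RV64, but because only even pmpcfg numbers exist on RV64, the first PMP entry covered by CSR number reg_index is always reg_index * 4.

```c
#include <stdint.h>

void pmp_write_cfg_byte(int pmp_index, uint8_t cfg);   /* illustrative */

static void pmpcfg_write(int reg_index, uint64_t val, int xlen)
{
    int bytes = xlen / 8;                     /* 4 on RV32, 8 on RV64 */

    for (int i = 0; i < bytes; i++) {
        uint8_t cfg = (val >> (8 * i)) & 0xff;
        pmp_write_cfg_byte(reg_index * 4 + i, cfg);   /* not reg_index * 8 */
    }
}
```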
2020-07-13  target/riscv: Fix pmp NA4 implementation  (Alexandre Mergnat)
The end address calculation for NA4 mode is wrong because the address used isn't shifted. As a result, the entry doesn't guard 4 bytes but a huge range. The solution is to use the shifted address already calculated for the start address variable. The modification was tested with the Zephyr OS userspace test suite, which works on other RISC-V boards (E31 and E34 cores). Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20200706084550.24117-1-amergnat@baylibre.com Message-Id: <20200706084550.24117-1-amergnat@baylibre.com> [ Changes by AF: - Improve the commit title and message ] Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
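A minimal sketch of the corrected NA4 decoding (decode_na4 is an illustrative name, not the QEMU function): both bounds are derived from the shifted pmpaddr, so the rule guards exactly 4 bytes.

```c
#include <stdint.h>

static void decode_na4(uint64_t pmpaddr, uint64_t *sa, uint64_t *ea)
{
    *sa = pmpaddr << 2;   /* pmpaddr holds address bits [xx:2] */
    *ea = *sa + 4 - 1;    /* NA4 covers exactly 4 bytes */
}
```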
2020-06-19  target/riscv: Use a smaller guess size for no-MMU PMP  (Alistair Francis)
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bin.meng@windriver.com>
2019-10-28  target/riscv: PMP violation due to wrong size parameter  (Dayeol Lee)
riscv_cpu_tlb_fill() uses the `size` parameter to check for a PMP violation via pmp_hart_has_privs(). However, if the size is unknown (=0), the ending address becomes `addr - 1`, since it is computed as `addr + size - 1` in `pmp_hart_has_privs()`. This always causes a false PMP violation at the starting address of the range, as `addr - 1` is not in the range. To fix this, we just assume that all bytes from addr to the end of the page will be accessed if the size is unknown. Signed-off-by: Dayeol Lee <dayeol@berkeley.edu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
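A sketch of the workaround (effective_pmp_size is an illustrative helper standing in for the logic in riscv_cpu_tlb_fill()): when the size is unknown, assume the access may touch everything from addr to the end of the page, so addr + size - 1 never falls below addr.

```c
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096

static uint64_t effective_pmp_size(uint64_t addr, uint64_t size)
{
    if (size == 0) {
        return TARGET_PAGE_SIZE - (addr & (TARGET_PAGE_SIZE - 1));
    }
    return size;
}
```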
2019-09-17  target/riscv/pmp: Convert qemu_log_mask(LOG_TRACE) to trace events  (Philippe Mathieu-Daudé)
Use the always-compiled trace events and remove the now unused RISCV_DEBUG_PMP definition. Note that pmpaddr_csr_read() could previously do out-of-bounds accesses when passed addr_index >= MAX_RISCV_PMPS. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17  target/riscv/pmp: Restrict privileged PMP to system-mode emulation  (Philippe Mathieu-Daudé)
The RISC-V Physical Memory Protection is restricted to privileged modes. Restrict its compilation to QEMU system builds. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-06-23  RISC-V: Fix a PMP bug where it succeeds even if PMP entry is off  (Hesham Almatary)
The current implementation returns 1 (PMP check success) if the address is in range even if the PMP entry is off. This is a bug. For example, if there is a PMP check in S-Mode which is in range but whose PMP entry is off, it will succeed when it should not. The patch fixes this bug by only checking the PMP permissions if the address is in range and its corresponding PMP entry is not off. Otherwise, it keeps ret = -1, which is checked and handled correctly at the end of the function. Signed-off-by: Hesham Almatary <Hesham.Almatary@cl.cam.ac.uk> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
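A simplified sketch of the corrected matching loop (types and names are illustrative, not the QEMU code): an entry only produces a match when the access is in range and its A field is not OFF; otherwise ret stays -1 and the default handling runs at the end.

```c
#include <stdbool.h>
#include <stdint.h>

#define PMP_AMATCH_OFF 0

typedef struct {
    uint8_t  cfg_reg;   /* R/W/X, A, L bits */
    uint64_t sa, ea;    /* decoded inclusive range */
} pmp_rule_t;

static int pmp_match(const pmp_rule_t *rules, int n,
                     uint64_t addr, uint64_t size)
{
    int ret = -1;

    for (int i = 0; i < n; i++) {
        bool in_range = addr >= rules[i].sa &&
                        (addr + size - 1) <= rules[i].ea;
        bool is_off = ((rules[i].cfg_reg >> 3) & 0x3) == PMP_AMATCH_OFF;

        if (in_range && !is_off) {
            ret = i;    /* permission bits are checked by the caller */
            break;
        }
    }
    return ret;
}
```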
2019-06-23  RISC-V: Check for the effective memory privilege mode during PMP checks  (Hesham Almatary)
The current PMP check function checks for env->priv which is not the effective memory privilege mode. For example, mstatus.MPRV could be set while executing in M-Mode, and in that case the privilege mode for the PMP check should be S-Mode rather than M-Mode (in env->priv) if mstatus.MPP == PRV_S. This patch passes the effective memory privilege mode to the PMP check. Functions that call the PMP check should pass the correct memory privilege mode after reading mstatus' MPRV/MPP or hstatus.SPRV (if Hypervisor mode exists). Suggested-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Hesham Almatary <Hesham.Almatary@cl.cam.ac.uk> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
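A small sketch of the effective-privilege computation (simplified; hstatus.SPRV and the actual CPU state are left out): for data accesses, the PMP check uses mstatus.MPP when mstatus.MPRV is set, instead of the current privilege level.

```c
#include <stdbool.h>

#define PRV_U 0
#define PRV_S 1
#define PRV_M 3

static int effective_memory_priv(int cur_priv, bool mprv, int mpp,
                                 bool is_instruction_fetch)
{
    /* MPRV only affects loads and stores, never instruction fetches. */
    if (mprv && !is_instruction_fetch) {
        return mpp;   /* e.g. M-mode with MPRV=1, MPP=PRV_S checks as PRV_S */
    }
    return cur_priv;
}
```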
2019-06-23  target/riscv: Fix PMP range boundary address bug  (Dayeol Lee)
A wrong address is passed to `pmp_is_in_range` while checking whether a memory access is within a PMP range. Since the ending address of the pmp range (i.e., pmp_state.addr[i].ea) is set to the last address in the range (i.e., pmp base + pmp size - 1), memory accesses containing the last address in the range will always fail. For example, assume a PMP range is 4KB from 0x87654000, so that the last address within the range is 0x87654fff. A 1-byte access to 0x87654fff should be considered fully inside the PMP range. However, the access now fails and is reported as a partial inclusion because pmp_is_in_range(env, i, addr + size) returns 0 whereas pmp_is_in_range(env, i, addr) returns 1. Signed-off-by: Dayeol Lee <dayeol@berkeley.edu> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Michael Clark <mjc@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
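A minimal sketch of the corrected range check (access_within is an illustrative helper, not the QEMU function): with ea stored as the last address of the region, an access of size bytes must be tested with addr + size - 1.

```c
#include <stdbool.h>
#include <stdint.h>

static bool access_within(uint64_t sa, uint64_t ea,
                          uint64_t addr, uint64_t size)
{
    return addr >= sa && (addr + size - 1) <= ea;
}

/*
 * Example from the commit: sa=0x87654000, ea=0x87654fff; a 1-byte
 * access at 0x87654fff is now correctly reported as fully inside.
 */
```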
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster)
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190523143508.25387-5-armbru@redhat.com> [Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]
2019-03-19  riscv: pmp: Log pmp access errors as guest errors  (Alistair Francis)
Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>