path: root/arch/arm/cpu
* ARM64: let 'end' point after the range in cache functions (Enrico Scholz, 2 days ago; 2 files, -3/+3)

  v8_flush_dcache_range() and v8_inv_dcache_range() are implemented under the
  assumption that their 'end' parameter points *after* the range. Fix callers to
  use it in this way. This fixes e.g. spurious corruption of the last octet when
  sending 129 bytes over ethernet.

  Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
  Link: https://lore.barebox.org/20240412162836.284671-1-enrico.scholz@sigma-chemnitz.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
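The half-open range convention this fix establishes can be sketched in C. This is a toy model only (model_flush_range() is a hypothetical stand-in, not the barebox function): the point is that a caller flushing `size` bytes must pass `end = start + size`, while a caller passing an inclusive end misses the last octet, exactly the 129-byte ethernet symptom described above.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define FRAME_LEN 129

/* Toy model of v8_flush_dcache_range()'s contract: operate on the
 * half-open range [start, end), where 'end' points one past the last
 * byte. Here "flushing" is modeled as copying bytes to 'dst'. */
static void model_flush_range(const uint8_t *src, uint8_t *dst,
                              size_t start, size_t end)
{
        for (size_t i = start; i < end; i++)
                dst[i] = src[i];        /* "flush" byte i */
}
```

With this convention, `model_flush_range(src, dst, 0, sizeof(src))` covers the whole buffer, while the pre-fix style `model_flush_range(src, dst, 0, sizeof(src) - 1)` leaves the final byte stale.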
* commands: add cpuinfo -s option for stacktrace (Ahmad Fatoum, 2024-03-05; 1 file, -2/+23)

  While a call to dump_stack() is easily hacked into the code, it can be useful
  during development to just print the stacktrace from the shell, e.g. to verify
  that kallsyms sharing for EFI works as intended. Add an option to cpuinfo to
  provide this functionality.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20240304190038.3486881-98-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM64: add optional EFI stub (Ahmad Fatoum, 2024-03-05; 8 files, -9/+157)

  While very recent binutils releases have dedicated efi-*-aarch targets, we may
  want to support older toolchains. For this reason, we import the kernel's EFI
  stub PE fakery, so the same barebox-dt-2nd.img may be loaded as if it were a
  "normal" or an EFI-stubbed kernel.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20240304190038.3486881-87-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM64: cpu: setupc: rewrite to be fully PIC (Ahmad Fatoum, 2024-03-05; 1 file, -13/+9)

  The code resulting from building the barebox ARM64 assembly contains
  relocations which could just as well have been position-independent. Make them
  truly position-independent by turning:

      ldr x0, =label

  into:

      adr_l x0, label

  adr_l is position-independent by virtue of being implemented using adrp, so it
  is usable as-is before relocation and requires no manual addition of
  get_runtime_offset().

  With these changes, the only relocation necessary for the ARM64 generic DT 2nd
  stage is the one needed for get_runtime_offset() to find out whether barebox
  has been relocated. This is one step towards supporting mapping the barebox
  PBL text section W^X, which precludes relocation entries emitted for code.
  With this change applied, there is still a data relocation entry in assembly
  code for get_runtime_offset(), but that doesn't bother us because it's in the
  data section.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20240304190038.3486881-57-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu-early: gracefully handle already enabled MMU (Ahmad Fatoum, 2024-03-05; 2 files, -0/+6)

  arm_cpu_lowlevel_init will disable the MMU, but there are valid cases not to
  call it on startup, e.g. when barebox is being run as an EFI payload. To allow
  booting an EFI-stubbed barebox both as primary bootloader and as EFI payload,
  teach mmu_early_enable() to bail out when the MMU is already set up.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20240304190038.3486881-52-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/layerscape' (Sascha Hauer, 2024-01-23; 1 file, -0/+94)
* ARM: atf: add bl31 v2 calling method (Sascha Hauer, 2024-01-08; 1 file, -0/+94)

  Newer bl31 binaries require a slightly different parameter structure. This
  patch adds support for it. The code is taken from U-Boot and changed to static
  initializers for improved readability.

  Link: https://lore.barebox.org/20240104141746.165014-17-s.hauer@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/imx' (Sascha Hauer, 2024-01-23; 1 file, -2/+10)
* ARM64: mmu: add dynamic optee memory mapping support (Marco Felsch, 2024-01-19; 1 file, -2/+10)

  Use the dynamic optee memory base address for the early mapping if possible,
  and fall back to the static mapping if the query fails.

  Signed-off-by: Marco Felsch <m.felsch@pengutronix.de>
  Link: https://lore.barebox.org/20240116170738.209954-13-m.felsch@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: kasan: reserve shadow memory region (Ahmad Fatoum, 2024-01-04; 1 file, -0/+4)

  We did not have any protection in place to ensure that KASAN shadow memory
  isn't overwritten during boot. Add that now to avoid strange effects during
  debugging.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20240104085736.541171-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/misc' (Sascha Hauer, 2023-12-18; 2 files, -3/+35)
* ARM64: mmu: fix mmu_early_enable VA->PA mapping (Lior Weintraub, 2023-12-18; 2 files, -3/+35)

  Fix the mmu_early_enable function to correctly map 40 bits of virtual address
  space to physical addresses with a 1:1 mapping. It uses the init_range
  function to set two table entries at TTB level 0 and then fills level 1 with
  the correct 1:1 mapping.

  Signed-off-by: Lior Weintraub <liorw@pliops.com>
  Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # Qemu ARM64 Virt
  Link: https://lore.barebox.org/PR3P195MB0555FF28C5158FF2A789DC2AC390A@PR3P195MB0555.EURP195.PROD.OUTLOOK.COM
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
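Why two level-0 entries suffice here follows directly from the ARMv8 4 KiB-granule translation layout (assumed here; the helper below is illustrative, not barebox code): each level-0 entry resolves VA bits [47:39] and therefore covers 2^39 bytes (512 GiB), so a 40-bit (1 TiB) 1:1 map needs exactly two of them, each pointing at a level-1 table.

```c
#include <stdint.h>
#include <assert.h>

/* With a 4 KiB translation granule, each level-0 entry covers
 * 1ULL << 39 bytes (512 GiB). Compute how many level-0 entries a
 * 1:1 map of an 'addr_bits'-wide address space needs. */
static unsigned int level0_entries_for(unsigned int addr_bits)
{
        uint64_t space  = 1ULL << addr_bits;
        uint64_t per_l0 = 1ULL << 39;

        return (unsigned int)((space + per_l0 - 1) / per_l0);
}
```

For the 40-bit case in the commit this yields 2 entries; anything up to 39 bits fits in a single one.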
* ARM: mmu: align size of remapped region to page size (Ahmad Fatoum, 2023-12-05; 2 files, -0/+6)

  Currently, the barebox ARM arch_remap_range() will hang in an infinite loop
  when called with a size that is not aligned to a page boundary. Its Linux
  equivalent, ioremap(), just rounds up to page size and works correctly. Adopt
  the Linux behavior to make porting code easier, e.g. when calling
  devm_ioremap().

  The only other arch_remap_range() in barebox is PowerPC's; that one wouldn't
  loop indefinitely if the size isn't page-aligned, so nothing to do there.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20231205081247.4148947-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
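The adopted rounding behavior is the standard power-of-two align-up idiom; a minimal sketch (remap_size() is a hypothetical helper, and PAGE_SIZE of 4096 is assumed):

```c
#include <stddef.h>
#include <assert.h>

#define PAGE_SIZE       4096UL
#define PAGE_ALIGN(x)   (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Linux ioremap()-style behavior: round the requested size up to a
 * whole number of pages, instead of stepping page-by-page toward an
 * end address that an unaligned size never reaches. */
static size_t remap_size(size_t requested)
{
        return PAGE_ALIGN(requested);
}
```

The mask trick only works because PAGE_SIZE is a power of two; any size already on a page boundary is returned unchanged.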
* ARM64: mmu: panic when out of PTEs (Sascha Hauer, 2023-12-04; 1 file, -0/+3)

  When running out of PTEs, panic with an appropriate message instead of
  continuing with NULL pointers.

  Link: https://lore.barebox.org/20231201151044.1648393-2-s.hauer@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM64: mmu: Fix alloc_pte() address calculation (Sascha Hauer, 2023-12-04; 1 file, -1/+1)

  get_ttb() returns a uint64_t *, which means that with
  get_ttb() + idx * GRANULE_SIZE the distance between two PTE tables is wrongly
  calculated as 0x8000 bytes instead of 0x1000 bytes. With this we run out of
  the space allocated for PTEs quite fast, and the available-space check doesn't
  work either. Fix this by explicitly casting to void *.

  Link: https://lore.barebox.org/20231201151044.1648393-1-s.hauer@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
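The underlying pitfall is that C scales pointer arithmetic by the pointee size: adding `idx * GRANULE_SIZE` to a `uint64_t *` advances `idx * GRANULE_SIZE * sizeof(uint64_t)` bytes. A self-contained sketch of both strides (the fix casts to `void *`, whose byte-granular arithmetic is a GNU C extension barebox uses; the portable equivalent below casts through `char *`):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define GRANULE_SIZE 0x1000     /* one 4 KiB page table */

/* Buggy stride: pointer arithmetic on uint64_t * is scaled by 8,
 * so idx = 1 lands 0x8000 bytes in, not 0x1000. */
static ptrdiff_t stride_scaled(uint64_t *ttb, unsigned int idx)
{
        uint64_t *p = ttb + idx * GRANULE_SIZE;
        return (char *)p - (char *)ttb;
}

/* Fixed stride: cast to a byte-sized pointer first, so idx counts
 * whole GRANULE_SIZE-byte tables as intended. */
static ptrdiff_t stride_bytes(uint64_t *ttb, unsigned int idx)
{
        uint64_t *p = (uint64_t *)((char *)ttb + idx * GRANULE_SIZE);
        return (char *)p - (char *)ttb;
}
```

With the buggy stride, eight times less table space is usable than allocated, which is why the available-space check also misfired.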
* ARM: mmu64: setup ttb for EL2 as well (Sascha Hauer, 2023-11-21; 1 file, -0/+2)

  The TF-A is often started before the MMU is initialized. There are some
  exceptions though: on Layerscape the TF-A (or: the PPA in that case) is
  started while the MMU is running. The PPA is then executed in EL3 and returns
  in EL2. For this case, set up the TTB for EL2 as well so that we have a valid
  MMU setup when the PPA returns.

  Link: https://lore.barebox.org/20231120144453.1075740-1-s.hauer@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* memory: remap immediately in reserve_sdram_region() (Sascha Hauer, 2023-10-23; 2 files, -3/+2)

  reserve_sdram_region() has the purpose of reserving SDRAM regions from being
  accessed by the CPU. Right now this remaps the reserved region during MMU
  setup. Instead of doing this, remap the region immediately.

  The MMU may already be enabled by early code. This means that when
  reserve_sdram_region() is called with the MMU enabled, we can't rely on the
  region being mapped non-executable right after the call, but only once
  __mmu_init() is executed. This patch relaxes this constraint. Also,
  reserve_sdram_region() may now be called after __mmu_init() has been executed.

  So far we silently aligned the remapped region to page boundaries, but
  calling reserve_sdram_region() with boundaries that are not page-aligned has
  undesired effects on the regions between the reserved region and the page
  boundaries. Stay with this behaviour, but warn the user when the region to be
  reserved is not page-aligned, as this really shouldn't happen.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/misc' (Sascha Hauer, 2023-09-25; 3 files, -0/+58)
* ARM: add support for booting ELF executables (Ahmad Fatoum, 2023-09-21; 2 files, -0/+57)

  Unlike MIPS and kvx, where ELF is used as the kernel image format, Linux ARM
  support defines its own flattened format. Other kernels may be distributed as
  ELF images though, so it makes sense to enable booting of ELF images on ARM as
  well. This has been tested booting FreeRTOS ELF executables on the ZynqMP.

  Note that this will refuse to boot kernel ELF images, as those have type dyn,
  while the common ELF code in barebox will only boot type exec.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230913125715.2142524-3-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: mark barebox text section executable during early init (Ahmad Fatoum, 2023-09-08; 1 file, -0/+1)

  barebox on ARM64 is usually relocated to DRAM by the time mmu_early_enable()
  is called, but in the future we may want to enable the MMU earlier, and thus
  we need to ensure that the location barebox is currently running from is not
  marked eXecute Never, even if it's outside the initially known RAM bank.

  This is the first part of fixing barebox hanging on i.MX8M when located at an
  address greater than 4G.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230907082126.2326381-2-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu: catch stack overflowing into TTB with stack guard page (Ahmad Fatoum, 2023-09-21; 4 files, -9/+79)

  While the barebox stack is often quite generous due to its default of 32K,
  bugs can make it overflow, and on ARM this clobbers the page tables, leading
  to even harder-to-debug problems than usual. Add a 4K buffer zone between the
  page tables and the stack and configure the MMU to trap all accesses into it.

  Note that hitting the stack guard page can be silent if the exception handler
  places its frame there. Still, a hanging barebox may be better than an
  erratically behaving one.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230911150900.3584523-5-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mark early C setup functions as __prereloc (Ahmad Fatoum, 2023-09-21; 2 files, -3/+3)

  In preparation for adding stack protector support, we need to start marking
  functions that run before the C environment is completely set up. Introduce a
  __prereloc attribute for this use case and an even stronger noinstr (no
  instrumentation) attribute, and start adding them at enough places for
  barebox proper to start up with -fstack-protector-all.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230911150900.3584523-3-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: mark whole early pagetable region as reserved (Ahmad Fatoum, 2023-09-04; 1 file, -1/+2)

  The TTB area allocated for the early MMU is now 64K instead of 16K, yet only
  the old 16K were requested to ensure e.g. memtest doesn't touch them. Fix
  this by requesting the full region.

  Fixes: 407ff71a3b5d ("ARM: mmu: alloc 64k for early page tables")
  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230831090943.3778087-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* treewide: Print device nodes with %pOF (Sascha Hauer, 2023-07-03; 1 file, -1/+1)

  We have the %pOF format specifier for printing device nodes. Use it where
  appropriate.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/misc' (Sascha Hauer, 2023-06-22; 1 file, -1/+1)
* ARM64: cpu: support 64-bit stack top in ENTRY_FUNCTION_WITHSTACK (Ahmad Fatoum, 2023-06-06; 1 file, -1/+1)

  ENTRY_FUNCTION_WITHSTACK was written with the naive assumption that there
  will always be some memory in the first 32 bits of the address space to be
  used as an early stack. There are SoCs out there, though, whose only on-chip
  SRAM is above 4G. Accommodate this by accepting full 64-bit stack pointers in
  ENTRY_FUNCTION_WITHSTACK.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230531175157.1442379-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
  Tested-by: Lior Weintraub <liorw@pliops.com>
* Merge branch 'for-next/dma-streaming-interface' (Sascha Hauer, 2023-06-22; 3 files, -22/+17)
* dma: rework dma_sync_single_for_*() interface (Denis Orlov, 2023-06-06; 3 files, -22/+17)

  Currently, a lot of code handles dma_addr_t values as if they actually held
  CPU addresses. However, this is not always true. For example, the MIPS
  architecture requires an explicit conversion from the physical address space
  to some virtual address space segment to get a valid CPU-side pointer.

  Another issue is that DMA ranges that may be specified in a device tree will
  not work this way. To get from a virtual address to a DMA handle and vice
  versa, we need to add/subtract an offset calculated from the "dma-ranges"
  property. Only dma_map_single() was doing this, but dma_sync_single_for_*()
  should as well.

  Improve the interface by adding 'struct device' as the first argument to
  dma_sync_single_for_*(). This allows doing the cpu_to_dma()/dma_to_cpu()
  conversions in common code and calling into arch-specific code with proper
  CPU-side addresses. To make things clearer, make the virtual address argument
  of those arch-side functions be properly represented with a void * type.

  Apply the required changes in device drivers that use the affected functions,
  making them pass the appropriate device pointer.

  Signed-off-by: Denis Orlov <denorl2009@gmail.com>
  Link: https://lore.barebox.org/20230604215002.20240-2-denorl2009@gmail.com
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
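The cpu_to_dma()/dma_to_cpu() conversion the rework centralizes boils down to applying a per-device offset derived from the "dma-ranges" property. A deliberately simplified model (struct and function names here are hypothetical, not the barebox API; the sign convention assumes the offset is CPU address minus bus address):

```c
#include <stdint.h>
#include <assert.h>

typedef uint64_t dma_addr_t;

/* Hypothetical per-device state: the offset between CPU physical
 * addresses and the addresses the device drives on the bus, as
 * derived from its "dma-ranges" property. */
struct dev_model {
        uint64_t dma_offset;    /* cpu_addr - dma_addr */
};

static dma_addr_t model_cpu_to_dma(const struct dev_model *dev, uint64_t cpu)
{
        return cpu - dev->dma_offset;
}

static uint64_t model_dma_to_cpu(const struct dev_model *dev, dma_addr_t dma)
{
        return dma + dev->dma_offset;
}
```

Passing the device to the sync functions makes this offset available in common code, so the arch-specific cache maintenance only ever sees CPU-side addresses.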
* treewide: add MODULE_DEVICE_TABLE markers (Ahmad Fatoum, 2023-06-13; 1 file, -0/+1)

  Syncing device trees with Linux upstream can lead to breakage when the device
  trees are switched to newer bindings which are not yet supported in barebox.
  To make it easier to spot such issues, we want to start applying some
  heuristics to flag possibly problematic DT changes. One step towards being
  able to do that is to know what nodes barebox actually consumes.

  Most of the nodes have a compatible entry, which is matched by an array of
  of_device_id, so let's have MODULE_DEVICE_TABLE point at it for future
  extraction.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230612125908.1087340-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu_32: Fix zero page faulting (Sascha Hauer, 2023-06-01; 1 file, -2/+6)

  Even with MAP_FAULT we still set PTE_TYPE_SMALL in the 2nd level page table.
  With this, the zero page doesn't fault on ARMv6 and earlier. Fix this by only
  setting PTE_TYPE_SMALL when the map_type is not MAP_FAULT.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu_32: fix setting up zero page when it is in SDRAM (Sascha Hauer, 2023-06-01; 1 file, -14/+9)

  We used to skip setting the zero page to faulting when SDRAM starts at 0x0.
  As the bootm code now explicitly sets the zero page accessible before copying
  ATAGs there, this should no longer be necessary, so unconditionally set the
  zero page to faulting during MMU startup.

  This also moves the zero page and vector table setup after the point the
  SDRAM has been mapped cacheable, because otherwise the zero page and possibly
  the vector table mapping would be overwritten.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
  Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
* ARM: mmu: invalidate when mapping range uncached (Ahmad Fatoum, 2023-05-30; 2 files, -0/+8)

  memtest can call remap_range to map regions being tested as uncached, but
  remap_range did not take care to evict any stale cache lines. Do this now.

  This fixes an issue of SELFTEST_MMU failing on an i.MX8MN when running
  memtest on an uncached region that was previously memtested while being
  cached.

  Fixes: 3100ea146688 ("ARM: rework MMU support")
  Fixes: 7cc98fbb6128 ("arm: cpu: add basic arm64 mmu support")
  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230526063354.1145474-4-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: define early_remap_range for mmu_early_enable usage (Ahmad Fatoum, 2023-05-30; 1 file, -4/+13)

  Like done for ARM64, define early_remap_range, which should always be safe to
  call while the MMU is disabled. This is to prepare doing cache maintenance in
  the regular arch_remap_range. No functional change.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230526063354.1145474-3-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: define early_remap_range for mmu_early_enable usage (Ahmad Fatoum, 2023-05-30; 1 file, -13/+26)

  Adding a new dma_inv_range/dma_flush_range into arch_remap_range before the
  MMU is enabled hangs, so let's define a new early_remap_range that should
  always be safe to call while the MMU is disabled. This is to prepare doing
  cache maintenance in the regular arch_remap_range in a later commit. No
  functional change.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230526063354.1145474-2-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: request TTB region (Ahmad Fatoum, 2023-05-30; 1 file, -0/+12)

  The ARM64 MMU code used to disable the early MMU, reallocate the TTB from the
  malloc area and then reenable it. This has recently been changed, so the MMU
  is left enabled like on ARM32, but unlike ARM32, the SDRAM region used in PBL
  is not requested in barebox proper. Do that now.

  Fixes: b53744ffe333 ("ARM: mmu64: Use two level pagetables in early code")
  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230526063354.1145474-1-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmuinfo: add options for enabling/disabling zero page trapping (Ahmad Fatoum, 2023-05-23; 1 file, -2/+45)

  mmuinfo 0 will most likely trigger a translation fault. To allow decoding the
  zero page or to allow reading BootROM code at address 0, teach mmuinfo -z to
  disable trapping of the zero page and mmuinfo -Z to reinstate the faulting
  zero page.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-12-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM64: mmu: implement ARMv8 mmuinfo command (Ahmad Fatoum, 2023-05-23; 3 files, -1/+218)

  To aid with debugging of MMU code, let's implement mmuinfo for ARMv8, like we
  already support for ARMv7.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-9-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: prepare extending mmuinfo beyond ARMv7 (Ahmad Fatoum, 2023-05-23; 3 files, -66/+95)

  There's no reason to restrict mmuinfo to ARMv7, or to ARM at all for that
  matter. Prepare extending it for ARMv8 support by splitting off the 32-bit
  parts. While at it, make the output available for debugging by exporting a
  mmuinfo() function.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-8-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: support non-1:1 mappings in arch_remap_range (Ahmad Fatoum, 2023-05-23; 1 file, -5/+3)

  This provides an alternative to ARM32's map_io_sections with the added
  benefit of supporting 4K granularity.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-5-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: support non-1:1 mappings in arch_remap_range (Ahmad Fatoum, 2023-05-23; 1 file, -16/+13)

  This makes the function usable as an alternative to map_io_sections, but at
  4K granularity.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-4-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* mmu: add physical address parameter to arch_remap_range (Ahmad Fatoum, 2023-05-23; 2 files, -14/+19)

  ARM32 has map_io_sections for non-1:1 remapping, but it's limited to 1M
  sections. arch_remap_range has newly gained support for 4K-granularity
  remapping, but supports only changing attributes and no non-1:1 remapping
  yet. In preparation for adding this missing feature, adjust the prototype.
  No functional change.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-3-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* treewide: use remap_range instead of arch_remap_range (Ahmad Fatoum, 2023-05-23; 3 files, -17/+17)

  The remapping in arch_remap_range is currently limited to attributes. In a
  later commit, we'll start supporting non-1:1 remappings. We'll keep
  remap_range as is for 1:1, so as preparation, let's switch all
  arch_remap_range users that want 1:1 remappings to remap_range.

  Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
  Link: https://lore.barebox.org/20230522052835.1039143-2-a.fatoum@pengutronix.de
  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: Use two level pagetables in early code (Sascha Hauer, 2023-05-22; 1 file, -75/+20)

  So far we used 1 GiB sized sections in the early MMU setup. This has the
  disadvantage that we can't use the MMU in early code when we require a finer
  granularity. Rockchip, for example, keeps TF-A code in the lower memory, so
  the code just skipped MMU initialization. Also, we can't properly map the
  OP-TEE space at the end of SDRAM non-executable.

  With this patch we now use two-level page tables and can map with 4 KiB
  granularity.

  The MMU setup in barebox proper changes as well. Instead of disabling the MMU
  for reconfiguration, we can now keep the MMU enabled and just add the
  mappings for SDRAM banks not known to the early code.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
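The move from 1 GiB sections to two-level tables means the early code walks per-level indices of the virtual address. As a sketch of the index arithmetic under the standard ARMv8 4 KiB-granule layout (the helper is illustrative, not barebox code): each table holds 512 entries, so the VA contributes 9 bits per level, with level 0 at bits [47:39], level 1 at [38:30], level 2 at [29:21], and level 3 at [20:12].

```c
#include <stdint.h>
#include <assert.h>

/* Extract the table index a virtual address uses at a given
 * translation level, assuming a 4 KiB granule (512 entries per
 * table, 9 index bits per level). */
static unsigned int table_index(uint64_t va, int level)
{
        int shift = 39 - 9 * level;

        return (unsigned int)((va >> shift) & 0x1ff);
}
```

A level-1 entry covers 1 GiB, so with two levels the early code can either install a 1 GiB block directly or point at a finer-grained table, which is what enables 4 KiB mapping granularity here.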
* ARM: mmu32: Skip reserved ranges during initialization (Sascha Hauer, 2023-05-22; 1 file, -3/+18)

  The early MMU code now uses pages to map the OP-TEE area non-executable. This
  mapping is overwritten with sections in barebox proper. Refrain from doing so
  by using arch_remap_range() and bypassing reserved areas.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: Use pages for early MMU setup (Sascha Hauer, 2023-05-22; 1 file, -24/+5)

  Up to now we use 1 MiB sections to set up the page tables in PBL. There are
  two places where this leads to problems.

  First is OP-TEE: we have to map the OP-TEE area with PTE_EXT_XN to prevent
  the instruction prefetcher from speculating into that area. With the current
  section mapping we have to align OPTEE_SIZE to 1 MiB boundaries.

  The second problem comes with the SRAM the PBL might be running in. This SRAM
  has to be mapped executable, but at the same time we should map the
  surrounding areas non-executable, which is not always possible with 1 MiB
  mapping granularity.

  We now have everything in place to use two-level page tables from PBL, so use
  arch_remap_range() for the problematic cases.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: read TTB value from register (Sascha Hauer, 2023-05-22; 1 file, -21/+20)

  Instead of relying on a variable for the location of the TTB, which we have
  to initialize in both PBL and barebox proper, just read the value back from
  the hardware register.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: move functions into c file (Sascha Hauer, 2023-05-22; 2 files, -20/+19)

  Move create_flat_mapping() and create_sections() into the C file rather than
  having them as static inline functions in the header file.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: add get_pte_flags, get_pmd_flags (Sascha Hauer, 2023-05-22; 1 file, -41/+41)

  The MMU code has several variables containing the pte/pmd values for
  different mapping types. These variables only contain correct values after
  they are initialized, which makes the code a bit hard to follow when it is
  used in both PBL and barebox proper. Instead of using variables, calculate
  the values when they are needed.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: Add pte_flags_to_pmd() (Sascha Hauer, 2023-05-22; 1 file, -6/+29)

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu32: Fix pmd_flags_to_pte() for ARMv4/5/6 (Sascha Hauer, 2023-05-22; 1 file, -11/+16)

  pmd_flags_to_pte() assumed the ARMv7 page table format. This has the effect
  that random bit values end up in the access permission bits. It works because
  the domain is configured as manager in the DACR, and thus the access
  permissions are ignored by the MMU. Nevertheless, fix this and take the CPU
  architecture into account when translating the bits. Don't bother translating
  the access permission bits though; just hardcode them as
  PTE_SMALL_AP_UNO_SRW.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>