author	Sascha Hauer <s.hauer@pengutronix.de>	2023-10-23 09:49:34 +0200
committer	Sascha Hauer <s.hauer@pengutronix.de>	2023-10-23 10:43:56 +0200
commit	58e9c930de5babda02097458a2ab0f42424eca50 (patch)
tree	f37cc3e026f6029948a651d11bf07dba5a463507 /arch/arm/cpu
parent	95249ca7a94dd92d61098935d0de25bc92b82ba3 (diff)
memory: remap immediately in reserve_sdram_region()
reserve_sdram_region() has the purpose of reserving SDRAM regions from being accessed by the CPU. Right now this remaps the reserved region during MMU setup. Instead of doing this, remap the region immediately. The MMU may already be enabled by early code, which means that when reserve_sdram_region() is called with the MMU enabled, we cannot rely on the region being mapped non-executable right after the call, but only once __mmu_init() has run. This patch lifts that restriction. Also, reserve_sdram_region() may now be called after __mmu_init() has been executed.

So far we silently aligned the remapped region to page boundaries, but calling reserve_sdram_region() with boundaries that are not page aligned has undesired effects on the memory between the reserved region and the page boundaries. Keep this behaviour, but warn the user when the region to be reserved is not page aligned, as this really shouldn't happen.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
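For illustration, a minimal sketch of what the behaviour described above could look like in reserve_sdram_region() itself. The function lives outside arch/arm/cpu and is therefore not part of the diffstat below; request_sdram_region(), IS_ALIGNED(), PAGE_SIZE and the warning text are assumptions here, only remap_range() and MAP_UNCACHED are taken from the diff.

/*
 * Sketch only: reserve a region and remap it right away instead of
 * deferring the remap to __mmu_init(). Helper names besides remap_range()
 * are assumptions, not the actual barebox patch.
 */
struct resource *reserve_sdram_region(const char *name, resource_size_t start,
				      resource_size_t size)
{
	/*
	 * Non page aligned reservations affect the memory sharing the
	 * boundary pages; keep the old behaviour but complain loudly.
	 */
	if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE))
		pr_warn("reserved region %s is not page aligned\n", name);

	/*
	 * Remap immediately, so the region is protected right after the
	 * call even when the MMU was already enabled by early code.
	 */
	remap_range((void *)start, size, MAP_UNCACHED);

	return request_sdram_region(name, start, size);
}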
Diffstat (limited to 'arch/arm/cpu')
-rw-r--r--	arch/arm/cpu/mmu_32.c	2
-rw-r--r--	arch/arm/cpu/mmu_64.c	3
2 files changed, 2 insertions, 3 deletions
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 07b2250677..d0ada5866f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -558,8 +558,8 @@ void __mmu_init(bool mmu_on)
 		pos = bank->start;
+		/* Skip reserved regions */
 		for_each_reserved_region(bank, rsv) {
-			remap_range((void *)rsv->start, resource_size(rsv), MAP_UNCACHED);
 			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
 			pos = rsv->end + 1;
 		}
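For context, the per-bank loop in __mmu_init() roughly takes this shape after the hunk above; only the inner loop is shown in the diff, so the for_each_memory_bank() framing and the final remap of the bank's tail are assumed from the surrounding code:

	for_each_memory_bank(bank) {
		struct resource *rsv;
		resource_size_t pos;

		pos = bank->start;

		/* Skip reserved regions */
		for_each_reserved_region(bank, rsv) {
			/* Map only the gap below this reservation as cached */
			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
			pos = rsv->end + 1;
		}

		/* Map the remainder between the last reservation and bank end */
		remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
	}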
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index fb57260c90..b718cb1efa 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -243,9 +243,8 @@ void __mmu_init(bool mmu_on)
 		pos = bank->start;
+		/* Skip reserved regions */
 		for_each_reserved_region(bank, rsv) {
-			remap_range((void *)resource_first_page(rsv),
-			            resource_count_pages(rsv), MAP_UNCACHED);
 			remap_range((void *)pos, rsv->start - pos, MAP_CACHED);
 			pos = rsv->end + 1;
 		}
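A hypothetical caller, to show the new guarantee; the region name, address and size below are made up for illustration, only reserve_sdram_region() itself comes from this commit.

	/*
	 * With this patch the reservation takes effect immediately: the
	 * region is remapped as soon as the call returns, even when the MMU
	 * is already running, and the call is also valid after __mmu_init().
	 */
	reserve_sdram_region("optee", 0x9e000000, SZ_2M);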