From 3100ea14668853aeedae85ec83e3536b59ba7728 Mon Sep 17 00:00:00 2001
From: Sascha Hauer
Date: Fri, 29 Jul 2011 11:20:11 +0200
Subject: ARM: rework MMU support

In barebox we used 1MiB sections to map our SDRAM cacheable. This has
the drawback that we have to map our SDRAM twice: cached for normal
accesses and uncached for DMA operations. As address space gets sparse
on newer systems, we are sometimes unable to find a large enough area
for the DMA coherent space.

This patch changes the MMU code to use second-level page tables. With
it we can implement dma_alloc_coherent as a normal malloc; we just have
to remap the allocated area uncached afterwards and map it cached again
after free(). This makes arm_create_section(), setup_dma_coherent() and
mmu_enable() no-ops.

Signed-off-by: Sascha Hauer
---
 include/common.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/common.h b/include/common.h
index f3353c8643..0ce4a70b56 100644
--- a/include/common.h
+++ b/include/common.h
@@ -221,6 +221,7 @@ int run_shell(void);
 #define ULLONG_MAX	(~0ULL)
 
 #define PAGE_SIZE	4096
+#define PAGE_SHIFT	12
 
 int memory_display(char *addr, ulong offs, ulong nbytes, int size);
-- 
cgit v1.2.3