path: root/arch/arm/cpu
Commit message  (Author, Age, Files, Lines)
* ARM: mmu: fix cache flushing when replacing a section with a PTE  (Lucas Stach, 2018-07-27, 1 file, -45/+32)

    When replacing a section with a PTE, we must make sure that the newly
    initialized PTE entries are flushed from the cache before changing the
    entry in the TTB. Otherwise an L1 TLB miss causes the hardware
    pagetable walker to walk into a PTE with undefined content, causing
    exactly that behaviour.

    Move all the necessary cache flushing into arm_create_pte(), to avoid
    any caller getting this wrong in the future.

    Fixes: e3e54c644180 ("ARM: mmu: Implement on-demand PTE allocation")
    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
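The ordering requirement this fix enforces can be sketched in miniature. This is a hypothetical model, not the barebox implementation: the stubs only record which step ran, so the invariant "flush the new table before repointing the TTB" can be checked. Putting the flush inside the PTE-creation helper makes it impossible for a caller to skip.

```c
#include <assert.h>

/* Hypothetical sketch: models the required ordering when splitting a
 * section into a second-level page table. The freshly written PTE
 * entries must reach memory before the first-level entry points at
 * them, or the hardware table walker may read stale cache contents. */

enum op { OP_FILL_PTE, OP_FLUSH_PTE, OP_WRITE_TTB };

static enum op log_buf[8];
static int log_len;

static void record(enum op o)
{
	log_buf[log_len++] = o;
}

/* arm_create_pte()-style helper: fill the new table, then flush it,
 * so no caller can forget the flush step. */
static void create_pte_sketch(void)
{
	record(OP_FILL_PTE);   /* initialize every entry of the new table */
	record(OP_FLUSH_PTE);  /* cache-flush the new table to memory */
}

static void replace_section_with_pte(void)
{
	create_pte_sketch();
	record(OP_WRITE_TTB);  /* only now repoint the 1st-level entry */
}
```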
* ARM: MMU: fix arch_remap_range() across section boundaries  (Sascha Hauer, 2018-07-12, 1 file, -1/+1)

    Fixes: e3e54c6441 ("ARM: mmu: Implement on-demand PTE allocation")

    PGD_FLAGS_WC_V7 lacks the PMD_TYPE_SECT and PMD_SECT_BUFFERABLE
    flags. Without them, dma_alloc_writecombine() creates an invalid
    section when the allocation crosses a section boundary.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
    Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
* Merge branch 'for-next/imx8mq'  (Sascha Hauer, 2018-07-09, 1 file, -0/+1)
|\
| * ARM: Specify HAVE_PBL_IMAGE for CPU_64  (Andrey Smirnov, 2018-06-15, 1 file, -0/+1)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: mmu: psci: Make use of get_ttbr()  (Andrey Smirnov, 2018-06-18, 3 files, -5/+11)
| |
| |     Introduce a simple inline function to get the TTBR and use it in
| |     mmu.c and sm.c.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: psci: Make use of set_ttbr() in armv7_secure_monitor_install()  (Andrey Smirnov, 2018-06-18, 1 file, -2/+1)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: psci: Remove unused code in psci_entry()  (Andrey Smirnov, 2018-06-18, 1 file, -6/+0)
|/
      Remove what looks like leftover code that doesn't appear to be
      referenced anywhere.

      Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
      Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/arm-mmu'  (Sascha Hauer, 2018-06-11, 6 files, -211/+221)
|\
| * ARM: mmu: Make use of dsb() and isb() helpers  (Andrey Smirnov, 2018-06-08, 1 file, -2/+2)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: no-mmu: Disable building for ARMv8  (Andrey Smirnov, 2018-06-08, 1 file, -1/+1)
| |
| |     Disable building for ARMv8, since no-mmu.c only supports ARMv7.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu64: Convert flags in arch_remap_range()  (Andrey Smirnov, 2018-06-08, 1 file, -0/+11)
| |
| |     Flags passed to arch_remap_range() are architecture-independent,
| |     so they can't be passed as-is to map_region(). Add code to do the
| |     proper conversion, to avoid the subtle bugs this confusion brings.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
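The conversion described above can be sketched as an explicit translation table. This is a hypothetical illustration, not the barebox code: the flag names follow barebox's generic MAP_CACHED/MAP_UNCACHED convention, but the attribute values here are made up stand-ins for the real AArch64 descriptor bits.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: generic remap flags are in a different encoding
 * than the page attributes the low-level mapper expects, so they must
 * be translated explicitly rather than passed through unchanged. */

#define MAP_UNCACHED	0
#define MAP_CACHED	1

/* made-up values standing in for real AArch64 attribute bits */
#define ATTR_DEVICE	0x04
#define ATTR_NORMAL	0x44

static uint64_t flags_to_attrs(unsigned flags)
{
	switch (flags) {
	case MAP_CACHED:
		return ATTR_NORMAL;	/* normal, cacheable memory */
	case MAP_UNCACHED:
	default:
		return ATTR_DEVICE;	/* device / uncached memory */
	}
}
```

An explicit switch keeps an unknown flag from silently becoming a bogus descriptor, which is exactly the class of subtle bug the commit mentions.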
| * ARM: mmu64: Make use of create_table()  (Andrey Smirnov, 2018-06-08, 1 file, -4/+1)
| |
| |     Make use of create_table() instead of calling xmemalign() and
| |     memset() explicitly.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu64: Trivial code simplification  (Andrey Smirnov, 2018-06-08, 1 file, -5/+4)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: only create flat mapping when early MMU hasn't done it already  (Sascha Hauer, 2018-05-28, 1 file, -6/+6)
| |
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Introduce ARM_TTB_SIZE  (Andrey Smirnov, 2018-05-22, 1 file, -1/+1)
| |
| |     Commit 1c33aacf8a247ab45814b43ac0ca903677afffae ("ARM: use
| |     memalign to allocate page table") reasonably changed the TTB
| |     allocation size from SZ_32K to SZ_16K (the TTB's real size), but
| |     it also changed the alignment from SZ_16K to SZ_64K for unclear
| |     reasons.
| |
| |     Reading various TTBR-related ARM documentation, it seems that the
| |     worst-case alignment is 16 KiB (bits [0, 13 - N] must be zero),
| |     which also matches the early TTB allocation code.
| |
| |     Since both the early and the regular MMU code have to share this
| |     parameter, introduce ARM_TTB_SIZE and use it in both cases for
| |     both size and alignment.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
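The worst-case rule above is easy to check numerically: with TTBCR.N = 0, TTBR0 bits [13:0] must be zero, i.e. the 16 KiB first-level table must be aligned to its own size. A minimal sketch (the constant name follows the commit; the test addresses are arbitrary examples):

```c
#include <assert.h>
#include <stdint.h>

/* ARM_TTB_SIZE serves as both allocation size and alignment: a 16 KiB
 * first-level translation table whose base must have bits [13:0] clear
 * (the worst case, TTBCR.N = 0). */
#define ARM_TTB_SIZE	(16 * 1024)

static int ttb_is_aligned(uint32_t ttb_base)
{
	return (ttb_base & (ARM_TTB_SIZE - 1)) == 0;
}
```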
| * ARM: mmu: Implement on-demand PTE allocation  (Andrey Smirnov, 2018-05-22, 1 file, -99/+110)
| |
| |     Allocating PTEs upfront for every 4K page of SDRAM costs us quite
| |     a bit of memory: 1KB per 1MB of RAM. This is far from a
| |     deal-breaker for the majority of use-cases, but for builds where
| |     the amount of free memory is in the hundreds of KBs* it becomes a
| |     real hurdle for being able to use the MMU (which also means no L1
| |     cache).
| |
| |     Given that we really only need PTEs for a very few regions of
| |     memory dedicated to DMA buffers (Ethernet, USB, etc.), changing
| |     the MMU code to do on-demand section splitting allows us to save
| |     a significant amount of memory without any loss of functionality.
| |
| |     Below is a trivial comparison of memory usage on start, before
| |     and after this patch is applied.
| |
| |     Before:
| |
| |         barebox@ZII VF610 Development Board, Rev B:/ meminfo
| |         used: 1271584 free: 265553032
| |
| |     After:
| |
| |         barebox@ZII VF610 Development Board, Rev B:/ meminfo
| |         used: 795276 free: 266024448
| |
| |     Tested on:
| |       - VF610 Tower Board
| |       - VF610 ZII Development Board (Rev. C)
| |       - i.MX51 Babbage Board
| |       - i.MX7 SabreSD Board
| |       - i.MX6 ZII RDU2 Board
| |       - AT91SAM9X5-EK Board
| |
| |     * One example of such a use-case is memory testing while running
| |       purely out of SRAM.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
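The "1KB per 1MB of RAM" figure follows directly from the short-descriptor format: each 1 MiB section splits into 256 pages of 4 KiB, and each second-level entry is 4 bytes, so one split section costs a 1 KiB table. A back-of-the-envelope sketch of the upfront cost this commit avoids:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_1K	1024ULL
#define SZ_1M	(1024ULL * 1024)
#define SZ_4K	4096ULL

/* Upfront second-level table cost of mapping ram_size bytes with 4 KiB
 * pages in the ARM short-descriptor format: 256 entries x 4 bytes
 * = 1 KiB of PTEs per 1 MiB section. */
static uint64_t pte_overhead(uint64_t ram_size)
{
	uint64_t entries_per_section = SZ_1M / SZ_4K;	/* 256 */
	uint64_t table_size = entries_per_section * 4;	/* 1 KiB  */

	return (ram_size / SZ_1M) * table_size;
}
```

For the 256 MiB board in the commit message this is 256 KiB of page tables, which on-demand splitting reduces to a handful of tables for the DMA regions actually remapped.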
| * ARM: mmu: Simplify the use of dma_flush_range()  (Andrey Smirnov, 2018-05-22, 1 file, -8/+8)
| |
| |     Simplify the use of dma_flush_range() by changing its signature
| |     to accept a pointer to the start of the data and the data size.
| |     This change allows us to avoid a whole bunch of repetitive
| |     arithmetic currently done by all of the callers.
| |
| |     Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
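The signature change can be sketched as follows. This is an illustrative model, not the barebox cache code: the "old" function stands in for the real cache-maintenance primitive and merely records the range it was asked to flush, so the (start, size) form can be shown delegating to the (start, end) form.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* stand-ins recording what range was flushed */
static uintptr_t flushed_start, flushed_end;

/* old-style interface: every caller computed the end address itself */
static void dma_flush_range_old(uintptr_t start, uintptr_t end)
{
	flushed_start = start;
	flushed_end = end;
}

/* new-style interface: (start, size); the end address is derived once,
 * here, instead of repetitively at every call site */
static void dma_flush_range(uintptr_t start, size_t size)
{
	dma_flush_range_old(start, start + size);
}
```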
| * ARM: mmu: Use dma_inv_range() in dma_sync_single_for_cpu()  (Andrey Smirnov, 2018-05-22, 1 file, -5/+2)
| |
| |     The code in the if () statement is identical to the already
| |     existing dma_inv_range(). Use it instead.
| |
| |     Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use find_pte() to find PTE in create_vector_table()  (Andrey Smirnov, 2018-05-22, 1 file, -5/+4)
| |
| |     There's already a function that implements the necessary
| |     arithmetic to find the offset within the page table for a given
| |     address, so make use of it instead of re-implementing it.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
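The arithmetic a find_pte()-style helper centralizes is fixed by the ARM short-descriptor format: the first-level index is the top 12 bits of the address (one entry per 1 MiB section) and the second-level index is bits [19:12] (256 entries of 4 KiB). A sketch of those two index calculations (helper names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PGDIR_SHIFT	20
#define PAGE_SHIFT	12
/* 1 MiB section / 4 KiB page = 256 second-level entries */
#define PTRS_PER_PTE	(1 << (PGDIR_SHIFT - PAGE_SHIFT))

/* index into the first-level table (TTB) */
static unsigned pgd_index(uint32_t addr)
{
	return addr >> PGDIR_SHIFT;
}

/* index into a coarse second-level page table */
static unsigned pte_index(uint32_t addr)
{
	return (addr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}
```

Re-implementing these shifts and masks at each call site is exactly the duplication the commit removes.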
| * ARM: mmu: Make sure that address is 1M aligned in arm_create_pte()  (Andrey Smirnov, 2018-05-22, 1 file, -2/+3)
| |
| |     If the address passed to arm_create_pte() is not 1M (PGDIR_SIZE)
| |     aligned, the page table that is created will end up having an
| |     unexpected mapping offset, breaking the "1:1 mapping" assumption
| |     and leading to bugs that are not immediately obvious in their
| |     nature.
| |
| |     To prevent this, and because all of the callers already do said
| |     alignment in place, move the alignment code into arm_create_pte().
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
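The alignment being moved into arm_create_pte() is a round-down to the 1 MiB section boundary, so the new table covers exactly the section being split and every entry keeps its 1:1 address. A minimal sketch of that operation:

```c
#include <assert.h>
#include <stdint.h>

#define PGDIR_SIZE	(1024 * 1024)	/* one first-level section: 1 MiB */

/* Round an address down to the start of its section, as arm_create_pte()
 * must do before building the replacement page table. */
static uint32_t section_align_down(uint32_t addr)
{
	return addr & ~(uint32_t)(PGDIR_SIZE - 1);
}
```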
| * ARM: mmu: Pass PTE flags as a parameter to arm_create_pte()  (Andrey Smirnov, 2018-05-18, 1 file, -4/+5)
| |
| |     In order to make it possible to use this function in contexts
| |     where creating a new PTE of uncached pages is not appropriate,
| |     pass the PTE flags as a parameter to arm_create_pte() and fix all
| |     of the current users as necessary.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Share code between dma_alloc_*() functions  (Andrey Smirnov, 2018-05-18, 1 file, -14/+8)
| |
| |     The code of dma_alloc_coherent() and dma_alloc_writecombine() is
| |     almost identical, with the exception of the flags passed to the
| |     underlying call to __remap_range(). Move the common code into a
| |     shared subroutine and convert both functions to use it.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use xmemalign in mmu_init()  (Andrey Smirnov, 2018-05-18, 1 file, -1/+1)
| |
| |     We don't handle the OOM case in that code, so using xmemalign
| |     seems like a better option.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use xmemalign in arm_create_pte()  (Andrey Smirnov, 2018-05-18, 1 file, -2/+2)
| |
| |     We don't handle the OOM case in that code, so using xmemalign
| |     seems like a better option.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use PAGE_SIZE instead of magic right shift by 12  (Andrey Smirnov, 2018-05-18, 1 file, -1/+1)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Define and use PTRS_PER_PTE  (Andrey Smirnov, 2018-05-18, 1 file, -3/+5)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use PAGE_SIZE when specifying size of one page  (Andrey Smirnov, 2018-05-18, 1 file, -2/+2)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Replace various SZ_1M with PGDIR_SIZE  (Andrey Smirnov, 2018-05-18, 2 files, -5/+6)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Trivial simplification in arm_mmu_remap_sdram()  (Andrey Smirnov, 2018-05-18, 1 file, -1/+1)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Replace hardcoded shifts with pgd_index() from Linux  (Andrey Smirnov, 2018-05-18, 2 files, -8/+12)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Drop needless shifting in map_io_sections()  (Andrey Smirnov, 2018-05-18, 1 file, -3/+2)
| |
| |     Instead of shifting phys right by 20 and then again left by the
| |     same amount, just convert the code to expect it to be in units of
| |     bytes all the time.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Share PMD_SECT_DEF_CACHED  (Andrey Smirnov, 2018-05-18, 2 files, -2/+2)
| |
| |     Share PMD_SECT_DEF_CACHED between mmu.c and mmu-early.c.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Share code for initial flat mapping creation  (Andrey Smirnov, 2018-05-18, 3 files, -8/+10)
| |
| |     The code creating the initial 4 GiB flat mapping is identical
| |     between mmu.c and mmu-early.c, so move it to mmu.h and share it.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Specify size in bytes in create_sections()  (Andrey Smirnov, 2018-05-18, 3 files, -9/+9)
| |
| |     Seeing
| |
| |         create_sections(ttb, 0, PAGE_SIZE, ...);
| |
| |     as the code that creates the initial flat 4 GiB mapping is a bit
| |     less intuitive than
| |
| |         create_sections(ttb, 0, SZ_4G - 1, ...);
| |
| |     so, for the sake of clarity, convert create_sections() to accept
| |     the address of the last byte in the region instead of the size in
| |     MiB.
| |
| |     Note: the alternative of converting the size to units of bytes
| |     was not chosen, to avoid turning the third function argument into
| |     a 64-bit number.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
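The last-byte convention above can be sketched with a small loop. This illustrative helper (not the barebox function) counts the 1 MiB sections covered by a [first, last] range; doing the iteration in 64 bits shows why the inclusive end address lets the full 4 GiB mapping be expressed with a 32-bit argument without overflowing the loop itself.

```c
#include <assert.h>
#include <stdint.h>

#define PGDIR_SIZE	(1024ULL * 1024)	/* 1 MiB section */

/* Count first-level sections for the inclusive range [first, last],
 * mirroring a create_sections(ttb, first, last, ...) style interface.
 * 64-bit arithmetic keeps addr + PGDIR_SIZE from wrapping at 4 GiB. */
static uint64_t count_sections(uint64_t first, uint64_t last)
{
	uint64_t n = 0;

	for (uint64_t addr = first; addr <= last; addr += PGDIR_SIZE)
		n++;

	return n;
}
```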
| * ARM: mmu: Separate index and address in create_sections()  (Andrey Smirnov, 2018-05-18, 1 file, -2/+8)
| |
| |     Both the TTB index and the address used to fill that entry are
| |     derived from the same variable 'addr', which requires shifting
| |     right and left by 20 and is somewhat confusing. Split the counter
| |     used to iterate over the elements of the TTB into a separate
| |     variable to make this code a bit easier to read.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Share code for create_sections()  (Andrey Smirnov, 2018-05-18, 3 files, -31/+20)
| |
| |     The regular MMU code never creates anything but a 1:1 mapping,
| |     and barring that plus the call to __mmu_cache_flush(), the early
| |     MMU version of the function is pretty much identical. To avoid
| |     code duplication, move it to mmu.h and convert both the regular
| |     and the early MMU code to use it.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Introduce set_domain()  (Andrey Smirnov, 2018-05-18, 3 files, -11/+11)
| |
| |     Port set_domain() from the Linux kernel and share it between the
| |     regular and the early MMU code to avoid duplication.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Introduce set_ttbr()  (Andrey Smirnov, 2018-05-18, 3 files, -4/+7)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Use ALIGN and ALIGN_DOWN in map_cachable()  (Andrey Smirnov, 2018-05-18, 1 file, -2/+2)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
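The ALIGN()/ALIGN_DOWN() helpers this commit switches to replace open-coded mask arithmetic. A minimal power-of-two version in the style used by the kernel and barebox (shown here as a sketch; the real definitions live in the project headers):

```c
#include <assert.h>
#include <stdint.h>

/* Round x up / down to a multiple of a, where a is a power of two. */
#define ALIGN(x, a)		(((x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((uintptr_t)(a) - 1))
```

Spelling the intent as ALIGN(start, PAGE_SIZE) instead of a raw shift-and-mask is the readability win the commit is after.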
| * ARM: mmu: Make use of IS_ALIGNED in arm_mmu_remap_sdram()  (Andrey Smirnov, 2018-05-18, 1 file, -1/+1)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
| * ARM: mmu: Remove unused ARM_VECTORS_SIZE  (Andrey Smirnov, 2018-05-18, 1 file, -6/+0)
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: cache: Remove unused cache ops struct  (Andrey Smirnov, 2018-06-08, 1 file, -1/+0)
| |
| |     Remove what appears to be a leftover from moving the ARMv8 cache
| |     functions into a separate file in 4b57aae26 ("ARM: Create own
| |     cache.c file for aarch64").
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: interrupts64: Include ESR value in exception traceback  (Andrey Smirnov, 2018-06-08, 1 file, -1/+2)
|/
      Oftentimes knowing the class of the exception is not enough, and
      the full ESR value is needed to decode the specifics. Print the ESR
      as part of the exception traceback to provide that information.

      Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
      Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
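Why the raw value matters: in the ARMv8 ESR_ELx layout, the exception class (EC) is only bits [31:26]; the instruction-specific syndrome (ISS) in bits [24:0] carries the actual details such as the data-fault status. A sketch of extracting both fields (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* ARMv8 ESR_ELx layout: EC = bits [31:26], IL = bit [25],
 * ISS = bits [24:0]. Printing only the EC loses the ISS details. */

static unsigned esr_ec(uint32_t esr)
{
	return (esr >> 26) & 0x3f;
}

static uint32_t esr_iss(uint32_t esr)
{
	return esr & 0x01ffffff;
}
```

For example, ESR 0x96000045 decodes to EC 0x25 (data abort without a change in exception level), and the ISS then pinpoints the fault type, which is exactly the information the bare class name cannot convey.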
* Merge branch 'for-next/arm'  (Sascha Hauer, 2018-05-09, 1 file, -8/+5)
|\
| * ARM: start: Avoid calling arm_mem_barebox_image() twice  (Andrey Smirnov, 2018-04-11, 1 file, -8/+5)
| |
| |     Avoid calling arm_mem_barebox_image() twice by making barebox_base
| |     function-wide in scope.
| |
| |     Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
| |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: arm32: fix realocate_to_curr_addr  (Andreas Schmidt, 2018-04-25, 1 file, -0/+1)
|/
      After aarch64 support was added to relocation (commit
      868df08038a91d674a0c50b0c0a2f70dbc445510), the MLO on the
      BeagleBone Black no longer boots. The issue is that the offset was
      not added in one if-case. This patch fixes that.

      Signed-off-by: Andreas Schmidt <mail@schmidt-andreas.de>
      Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
      Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: mmu64: Add comment  (Sascha Hauer, 2018-04-09, 1 file, -0/+1)

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: Make early MMU support work  (Sascha Hauer, 2018-04-04, 4 files, -129/+182)

    Until now it was not possible to enable the MMU in the PBL, because
    create_sections() needs memory allocations which are not available
    there. With this patch we move the early MMU support to a separate
    file and all necessary helper functions to mmu_64.h. create_sections()
    is reimplemented for the early case to only create first-level page
    tables.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: change mmu_early_enable() prototype  (Sascha Hauer, 2018-04-04, 6 files, -13/+7)

    Change the arguments to type unsigned long, which is suitable for
    both arm32 and arm64. While at it, move the prototype to
    arch/arm/include/.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: create separate mmu_64.h file  (Sascha Hauer, 2018-04-04, 3 files, -48/+38)

    cpu/mmu.h has nothing in common between the 32-bit and 64-bit
    variants. Split it into two separate files.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>