path: root/arch/arm/include
Commit message (Author, Age; Files, Lines)
...
* | ARM: mmu: Introduce ARM_TTB_SIZE (Andrey Smirnov, 2018-05-22; 1 file, -1/+7)
    Commit 1c33aacf8a247ab45814b43ac0ca903677afffae ("ARM: use memalign to allocate page table") reasonably changed the TTB allocation size from SZ_32K to SZ_16K (the TTB's real size), but it also changed the alignment from SZ_16K to SZ_64K for unclear reasons. Reading various TTBR-related ARM documentation, it seems that the worst-case alignment is 16KiB (bits [0, 13 - N] must be zero), which also matches the early TTB allocation code. Since both the early and the regular MMU code have to share this parameter, introduce ARM_TTB_SIZE and use it in both cases for both size and alignment.

    Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
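
    A minimal sketch of the idea in C, assuming standard memalign(); the helper name and include below are illustrative, not quoted from the patch:

        #include <malloc.h>              /* memalign() */

        #define SZ_16K        0x4000
        #define ARM_TTB_SIZE  SZ_16K     /* worst case: TTBR bits [0, 13 - N] must be zero */

        /* both the early and the regular MMU code use the same size and alignment */
        static unsigned int *alloc_ttb(void)
        {
            return memalign(ARM_TTB_SIZE, ARM_TTB_SIZE);
        }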
* ARM: Make use of ALIGN_DOWN macro in barebox-arm.h (Andrey Smirnov, 2018-04-11; 1 file, -6/+3)
    Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: change mmu_early_enable() prototype (Sascha Hauer, 2018-04-04; 1 file, -0/+3)
    Change the arguments to type unsigned long, which is suitable for both arm32 and arm64. While at it, move the prototype to arch/arm/include/.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
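
    The resulting prototype presumably looks roughly like this (parameter names are an assumption):

        /* asm/mmu.h (sketch): unsigned long fits both 32-bit and 64-bit addresses */
        void mmu_early_enable(unsigned long membase, unsigned long memsize,
                              unsigned long ttb);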
* ARM: aarch64: Add barebox head support (Sascha Hauer, 2018-04-04; 1 file, -0/+13)
    Allow aarch64 images to use the same image header as arm32 images.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: Add esr strings (Sascha Hauer, 2018-03-29; 1 file, -0/+117)
    The Exception Syndrome Register (ESR) holds information about an exception. This adds the strings necessary to decode this information. Based on Linux code.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
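
    For illustration, such a table maps the ESR exception class (EC) field to printable strings; an abridged, hypothetical sketch with EC values as defined by the ARMv8 architecture:

        static const char *const esr_class_str[] = {
            [0x00] = "Unknown/Uncategorized",
            [0x15] = "SVC (AArch64)",
            [0x24] = "Data Abort (lower EL)",
            [0x25] = "Data Abort (current EL)",
        };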
* ARM: aarch64: implement stacktraces (Sascha Hauer, 2018-03-29; 2 files, -1/+5)
    Implement stacktraces as a great debugging aid. On aarch64 this is cheap enough to be enabled unconditionally. Unwinding code is taken from the Kernel.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: implement show_regs() (Sascha Hauer, 2018-03-29; 1 file, -0/+19)
    Do something useful in an exception and at least print the current register contents.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: fix exception level mixup (Sascha Hauer, 2018-03-29; 1 file, -0/+21)
    When entering an exception we currently jump to the code handling EL1 when we are actually at EL3, and the other way round. Fix this by introducing and using the switch_el macro from U-Boot.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* dma: Use dma_addr_t as type for DMA addresses (Sascha Hauer, 2018-03-29; 1 file, -2/+2)
    DMA addresses are not necessarily the same as unsigned long. Fix the type for the dma_sync_single_* operations.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: mmu: Fix TCR setting (Sascha Hauer, 2018-03-23; 1 file, -1/+2)
    A BITS_PER_VA value of 33 is a little small. Increase it to 39, which is the maximum size we can do with 3-level page tables. The TCR value depends on the current exception level, so we have to calculate the value at runtime. To do this, use a function derived from U-Boot's get_tcr() function.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
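
    The exception level can be read from CurrentEL; a minimal sketch of that runtime check (not the actual get_tcr() implementation):

        /* CurrentEL[3:2] holds the exception level we are executing at */
        static unsigned int current_el(void)
        {
            unsigned long el;

            asm volatile("mrs %0, CurrentEL" : "=r" (el));
            return (el >> 2) & 0x3;
        }

        /* get_tcr() then assembles the TCR value for that EL, e.g. T0SZ = 64 - 39 = 25 */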
* ARM: aarch64: mmu: use PTE_* definitions from U-Boot (Sascha Hauer, 2018-03-23; 1 file, -45/+22)
    'PMD' (Page Middle Directory) is a Linuxism that is not really helpful in the barebox MMU code. Use the U-Boot definitions, which only use PTE_* and seem to be more consistent for our use case.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: mmu: enable mmu in generic code (Sascha Hauer, 2018-03-23; 1 file, -1/+0)
    Using board code to enable the MMU is not nice. Do it in generic code. Since mmu_enable() is now done in mmu_64.c we no longer have to export it.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: mmu: Fix mair register setting (Sascha Hauer, 2018-03-23; 1 file, -0/+7)
    The memory attribute indirection register (MAIR) contains the memory attribute settings corresponding to the possible AttrIndx values in the page table entries. Passing UNCACHED_MEM makes no sense here; pass the desired attributes instead.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: move linker variable declarations to sections.h (Sascha Hauer, 2018-03-21; 2 files, -2/+2)
    We collected most linker variable declarations in asm/sections.h, so move __exceptions_start/__exceptions_stop there as well.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: remove function prototypes from the past (Sascha Hauer, 2018-03-21; 1 file, -8/+0)
    Several functions have not existed for a long time now. Remove their prototypes.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: Add function to return offset to global variables (Sascha Hauer, 2018-03-21; 1 file, -0/+14)
    ARM and aarch64 differ in the way global variables are addressed. This adds a function which abstracts the differences.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: cache: no need to ifdef prototypes (Sascha Hauer, 2018-03-21; 1 file, -3/+1)
    There's no need to ifdef function prototypes, so remove the ifdefs. While there, also remove the unnecessary "export" for functions.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: cache: Add v8_inv_dcache_range (Sascha Hauer, 2018-03-21; 1 file, -0/+2)
    Implement v8_flush_dcache_range based on v8_inv_dcache_range. While at it, add a prototype for v8_inv_dcache_range.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: fix early cache flushing (Sascha Hauer, 2018-03-21; 1 file, -1/+2)
    v8_dcache_all() should not be used directly, but only called from v8_flush_dcache_all() and v8_invalidate_dcache_all(), which pass the type of operation in x0. While at it, add the missing prototype for v8_invalidate_dcache_all().

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: Do not use 32bit optimized fls (Sascha Hauer, 2018-03-21; 1 file, -1/+1)
    The clz operation only works with 32bit values, so use the generic fls() variants on aarch64. With this, tlsf_malloc works as expected.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: bitops: remove unnecessary #ifdef (Sascha Hauer, 2018-03-21; 1 file, -2/+0)
    __fls() must always be provided, not only for aarch64, so remove the unnecessary #ifdef. This didn't show up because nobody directly uses __fls() in barebox; only aarch64 indirectly uses __fls() for implementing fls64().

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: Use generic ffz() (Sascha Hauer, 2018-03-21; 1 file, -17/+1)
    The generic ffz() from <asm-generic/bitops/ffz.h> works like our ARM-specific variant, except that the generic variant has 64bit word size support. Use the generic variant to fix ffz() for aarch64.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
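
    The generic implementation is a one-liner built on __ffs(), which is word-size agnostic:

        /* <asm-generic/bitops/ffz.h>: the first zero bit is the first set bit of the complement */
        #define ffz(x)  __ffs(~(x))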
* ARM: aarch64: implement get_pc() (Sascha Hauer, 2018-03-21; 1 file, -2/+8)
    The arm32 version can't be used on aarch64, so implement an aarch64-specific version.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
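
    A sketch of an aarch64 get_pc() using the adr instruction; the real code may differ in detail:

        static inline unsigned long get_pc(void)
        {
            unsigned long pc;

            __asm__ __volatile__("adr %0, ." : "=r" (pc));

            return pc;
        }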
* ARM: get_runtime_offset() returns unsigned long (Sascha Hauer, 2018-03-21; 1 file, -1/+1)
    Change the return type from uint32_t to unsigned long, which is suitable for aarch64 as well.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: aarch64: Add dummy naked attribute (Sascha Hauer, 2018-03-21; 1 file, -2/+15)
    The naked attribute is not supported on aarch64. To silence the compiler warning, add a dummy naked attribute.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
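
    The dummy is presumably just an empty definition on aarch64, something along these lines (the config symbol name is an assumption):

        /* gcc has no naked attribute for aarch64, so make it a no-op there */
        #ifdef CONFIG_CPU_64
        #define __naked
        #else
        #define __naked __attribute__((naked))
        #endif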
* ARM: remove ld_var support (Sascha Hauer, 2018-03-21; 1 file, -25/+0)
    Now that ld_var is no longer used it can be removed.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: move away from ld_var (Sascha Hauer, 2018-03-21; 1 file, -0/+5)
    The ld_var mechanism solves the issue that, when compiled with -pie, the linker-provided variables are all 0x0. This mechanism however refuses to compile with aarch64 support. This patch replaces the ld_var mechanism with a nice little trick learned from U-Boot: instead of using linker-provided variables directly with "__bss_start = .", we put a zero-size array into a separate section and use the address of that array instead of the linker variable. This properly works before relocation.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
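
    A sketch of the trick with illustrative names (the real section and symbol names may differ); the array itself is empty, only its address matters:

        /* sections.c (sketch) */
        char __bss_start[0] __attribute__((used, section(".__bss_start")));

        /* the linker script then places that section where the old symbol pointed:
         *   .__bss_start : { KEEP(*(.__bss_start)) }
         * so &__bss_start is correct even in a -pie image before relocation */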
* ARM: mmu: include pgtable header from where it's needed (Sascha Hauer, 2018-03-21; 1 file, -15/+0)
    Instead of #ifdefing the correct pgtable header file to include, include it where it's needed. Also, move the memory type attributes into their consumers, namely the mmu.c files.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: ARM64 doesn't provide the armlinux_ functions (Lucas Stach, 2018-03-01; 1 file, -1/+1)
    Those set parameters specific to the older ARM Linux implementation.

    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: i.MX6ul: Add SoC specific lowlevel_init function (Sascha Hauer, 2017-10-17; 1 file, -0/+1)
    On i.MX6ul(l) (Cortex-A7) we have to set the SMP bit before enabling the caches, otherwise they won't work. Add a SoC-specific lowlevel_init function to be called by the i.MX6ul(l) boards. Since this is a quirk of the Cortex-A7 core, we put the functionality into a separate function to be reused by other Cortex-A7 cores. Change existing i.MX6ul(l) boards to use the new initialisation function. It seems this is only needed when booting from USB; in other boot modes the ROM will already have done the initialisation.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
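
    The quirk boils down to setting the SMP bit (bit 6) in the auxiliary control register before the caches are enabled. A C rendering of the idea; the function name is made up and the actual patch adds this as SoC lowlevel code:

        static inline void cortex_a7_enable_smp_bit(void)
        {
            unsigned int actlr;

            /* read ACTLR, set the SMP bit, write it back */
            asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
            actlr |= 1 << 6;
            asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));
        }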
* ARM: rename flush_icache to icache_invalidate (Sascha Hauer, 2017-09-27; 1 file, -1/+1)
    flush_icache is a misnomer since the icache is invalidated, not flushed. Rename the function accordingly.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* asm-generic: partially sync io.h with linux kernel (Oleksij Rempel, 2017-09-08; 1 file, -2/+2)
    Signed-off-by: Oleksij Rempel <linux@rempel-privat.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: correctly identify ARMv6 K/Z (Lucas Stach, 2017-03-03; 1 file, -0/+8)
    The ARMv6 K/Z derivatives have a v7-compatible MMU, but all other parts (including the cache handling) are still at v6. As we don't make use of the more advanced features of the v7 MMU in barebox, it's okay to just override this to properly identify the CPU as ARMv6.

    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/imx' (Sascha Hauer, 2017-02-13; 7 files, -1/+403)
|\
| * ARM: Add PSCI support (Sascha Hauer, 2017-02-13; 5 files, -1/+293)
    This patch contains the barebox implementation of the ARM "Power State Coordination Interface" (PSCI). The interface is aimed at the generalization of code in the following power management scenarios:
      * Core idle management.
      * Dynamic addition and removal of cores, and secondary core boot.
      * big.LITTLE migration.
      * System shutdown and reset.
    In practice, all that's currently implemented is a way to enable the secondary core on some SoCs.

    With PSCI the Kernel is started either in nonsecure or in hypervisor mode, and PSCI is used to apply power to the secondary cores. The start mode is passed in the global.bootm.secure_state variable. This enum can contain "secure" (Kernel is started in secure mode, meaning no PSCI), "nonsecure" (Kernel is started in nonsecure mode, PSCI available) or "hyp" (Kernel is started in hyp mode, meaning it can support virtualization).

    We currently only support putting the secure monitor code into SDRAM, which means we always steal some amount of memory from the Kernel. To keep things simple, for now we simply keep the whole barebox binary in memory.

    The PSCI support has been tested on i.MX7 only so far. The only supported operations are CPU_ON and CPU_OFF. The PSCI and secure monitor code is based on the corresponding U-Boot code.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
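
    For reference, the two implemented operations correspond to these PSCI 0.2 function IDs (SMC32 calling convention, values from the PSCI specification):

        #define PSCI_0_2_FN_CPU_OFF  0x84000002
        #define PSCI_0_2_FN_CPU_ON   0x84000003

        /* the non-secure side brings a secondary core up roughly like:
         *   smc_call(PSCI_0_2_FN_CPU_ON, target_mpidr, entry_point, context_id);
         * where smc_call() is a wrapper like the one sketched under the next entry */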
| * ARM: Add smc call support (Sascha Hauer, 2017-02-08; 1 file, -0/+104)
    Taken from the Kernel: a wrapper to make an smc call from C.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
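
    A self-contained sketch of such a wrapper for 32-bit ARM (arguments and result travel in r0-r3); this follows the usual SMC calling convention and is not copied from the patch:

        static inline unsigned long smc_call(unsigned long fn, unsigned long arg0,
                                             unsigned long arg1, unsigned long arg2)
        {
            register unsigned long r0 asm("r0") = fn;
            register unsigned long r1 asm("r1") = arg0;
            register unsigned long r2 asm("r2") = arg1;
            register unsigned long r3 asm("r3") = arg2;

            asm volatile(".arch_extension sec\n"
                         "smc #0\n"
                         : "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
                         :
                         : "memory");

            return r0; /* result comes back in r0 */
        }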
| * ARM: Add UNWIND macro (Sascha Hauer, 2017-02-08; 1 file, -0/+6)
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* | ARM: start: Fix image size calculation (Sascha Hauer, 2017-02-08; 1 file, -0/+2)
    In barebox_non_pbl_start() we do not run at the address we are linked at, so we must read linker variables using ld_var(). Since ld_var() is currently not available on arm64, we create two zero-sized arrays, one at the beginning of the image and one at the end. The difference between the two is the image size we are looking for.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: Fix a bug in stack's "top" initialization (Andrey Smirnov, 2017-01-09; 1 file, -1/+7)
    The code paths responsible for initializing the CPU's stack pointer and the variable used in stack memory resource reservation got out of sync, which resulted in the actual stack being 64K off from what the "stack" struct resource registered by arm_request_stack() thought it was. At least one issue resulting from that can easily be triggered by running: memtest -t

    This commit unifies the aforementioned code to a certain degree, which solves the problem and hopefully makes it less likely to become an issue again.

    Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* Merge branch 'for-next/arm' (Sascha Hauer, 2016-10-10; 1 file, -1/+8)
|\
| * ARM: Fix calling of arm_mem_barebox_image() (Sascha Hauer, 2016-09-15; 1 file, -1/+8)
    arm_mem_barebox_image() is used to pick a suitable place to put the final image. It is called from both the PBL uncompression code and from the final image. To make it work properly it is crucial that it is called with the same arguments both times. Currently it is called with the wrong image size from the PBL uncompression code: the size passed to arm_mem_barebox_image() has to be the size of the whole uncompressed image including the BSS segment, but the PBL code calls it with the compressed image size instead, and without the BSS segment.

    This patch fixes this by reading the uncompressed image size from the compressed binary (the uncompressed size is appended to the end of the compressed binary by our compression wrappers). The size of the BSS segment is unknown to the PBL uncompression code, so we introduce a maximum BSS size which is used instead.

    The code before this patch worked by accident because the base address of the final image was aligned down to a 1MiB boundary; that alignment was already sufficient to make enough space. It breaks, though, when the uncompressed image including BSS becomes bigger than 1MiB while the compressed image is smaller.

    Fixes: 65071bd0: arm: Clarify memory layout calculation
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
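
    Schematically, the PBL side now does something like the following; the macro name, its value, and the wrapper function are assumptions, only arm_mem_barebox_image() is named in the message:

        #define MAX_BSS_SIZE  SZ_1M  /* worst-case BSS, unknown to the PBL at this point */

        static unsigned long pbl_pick_barebox_base(unsigned long membase,
                                                   unsigned long endmem,
                                                   unsigned long uncompressed_size)
        {
            /* uncompressed_size was read from the end of the compressed payload */
            return arm_mem_barebox_image(membase, endmem,
                                         uncompressed_size + MAX_BSS_SIZE);
        }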
* | arm(64): don't advertise stack_dumping capabilities for ARM64 (Lucas Stach, 2016-10-04; 1 file, -0/+2)
    The unwind code to support this feature is not there yet.

    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: include: swab: use right assembly for armv8 (Raphael Poggi, 2016-07-06; 1 file, -0/+4)
    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: cpu: add basic arm64 mmu support (Raphael Poggi, 2016-07-06; 2 files, -3/+151)
    This commit adds basic mmu support. Not yet supported are:
      - DMA cache handling
      - remapping memory regions
    The current mmu setting is:
      - 4KB granularity
      - 3 level lookup (skipping L0)
      - 33 bits per VA
    This is based on the coreboot and u-boot mmu configuration.

    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
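
    As a back-of-the-envelope check of that configuration (illustrative constants, not from the patch): with a 4KB granule each table level resolves 9 bits, so 33 VA bits split as 12 page-offset bits plus 9 + 9 + 3 index bits, leaving a small 8-entry top-level table.

        #define GRANULE_SHIFT    12   /* 4KB pages */
        #define BITS_PER_LEVEL    9   /* 512 entries per full table level */
        #define BITS_PER_VA      33

        /* 33 - 12 - 2 * 9 = 3 index bits at the top level -> 8 entries */
        #define TOP_LEVEL_ENTRIES (1 << (BITS_PER_VA - GRANULE_SHIFT - 2 * BITS_PER_LEVEL))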
* arm: include: system_info: add armv8 identification (Raphael Poggi, 2016-07-06; 1 file, -0/+38)
    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: include: bitops: arm64 use generic __fls (Raphael Poggi, 2016-07-06; 1 file, -0/+5)
    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: include: system: add arm64 helper functions (Raphael Poggi, 2016-07-06; 1 file, -1/+45)
    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* arm: cpu: add arm64 specific code (Raphael Poggi, 2016-07-06; 1 file, -0/+9)
    This patch adds arm64 specific code:
      - exception support
      - cache support
      - rework of the Makefile to support arm64

    Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* ARM: start: Fix arm_mem_barebox_image for !CONFIG_RELOCATABLE (Sascha Hauer, 2016-06-20; 1 file, -2/+6)
    Fixes: 65071bd arm: Clarify memory layout calculation

    arm_mem_barebox_image() shall return the beginning of the barebox image (and thus the end of the malloc region). For relocatable images we can return a suitable location, but for non-relocatable images we do not have a choice: we must return TEXT_BASE. If TEXT_BASE happens to be outside the memory region between membase and endmem, we can return the base of the ramoops area.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
    Cc: Markus Pargmann <mpa@pengutronix.de>
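
    A sketch of the decision being described; the ramoops helper name is an assumption, and the real logic lives inside arm_mem_barebox_image() itself:

        static unsigned long non_relocatable_base(unsigned long membase,
                                                  unsigned long endmem)
        {
            /* a non-relocatable barebox can only run at its link address */
            if (TEXT_BASE >= membase && TEXT_BASE < endmem)
                return TEXT_BASE;

            /* TEXT_BASE is outside this bank: fall back to the ramoops area base */
            return arm_mem_ramoops(membase, endmem);
        }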
* whole tree: remove trailing whitespaces (Du Huanpeng, 2016-04-21; 1 file, -2/+2)
    Signed-off-by: Du Huanpeng <u74147@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>