authorAndrey Smirnov <andrew.smirnov@gmail.com>2018-08-23 19:54:21 -0700
committerSascha Hauer <s.hauer@pengutronix.de>2018-08-24 10:12:02 +0200
commit4e0112568c318b666624e51f91073bac03b6006b (patch)
tree90d6e9589e783b3bd13abd3f294c99725bcb142b /arch/arm/cpu
parentdd36eef35fba38972d1fcb358969d5ac5003dcb3 (diff)
ARM: mmu64: Don't flush freshly invalidated region
Current code for dma_sync_single_for_device(), when called with dir set to
DMA_FROM_DEVICE, will first invalidate the given region of memory and then
clean+invalidate it as a second step. While the second step should be
harmless, it seems to be an unnecessary no-op that can be avoided.
Analogous code in the Linux kernel (4.18) in arch/arm64/mm/cache.S:

ENTRY(__dma_map_area)
	cmp	w2, #DMA_FROM_DEVICE
	b.eq	__dma_inv_area
	b	__dma_clean_area
ENDPIPROC(__dma_map_area)

is written to perform only either invalidate or clean, depending on the
direction, so change dma_sync_single_for_device() to behave in the same
vein and perform _either_ invalidate _or_ flush of the given region.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Diffstat (limited to 'arch/arm/cpu')
-rw-r--r--	arch/arm/cpu/mmu_64.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index b6287aec89..69d1b20718 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -297,7 +297,8 @@ void dma_sync_single_for_device(dma_addr_t address, size_t size,
 {
 	if (dir == DMA_FROM_DEVICE)
 		v8_inv_dcache_range(address, address + size - 1);
-	v8_flush_dcache_range(address, address + size - 1);
+	else
+		v8_flush_dcache_range(address, address + size - 1);
 }
 
 dma_addr_t dma_map_single(struct device_d *dev, void *ptr, size_t size,