path: root/common/tlsf.c
Commit message | Author | Age | Files | Lines
* common: malloc: ensure alignment is always at least 8 bytes | Ahmad Fatoum | 2023-09-26 | 1 | -0/+2
We used to have the following alignments:

                  32-bit CPU   64-bit CPU
     dummy         8 bytes      8 bytes
     dlmalloc      8 bytes     16 bytes
     tlsf          4 bytes      8 bytes

With the recent change to TLSF, we now always have at least 8 bytes of
alignment. To make this clearer, define a new CONFIG_MALLOC_ALIGNMENT and
either use it as the alignment (as done for dummy) or add static asserts
to ensure we have at least this alignment.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20230911152433.3640781-6-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
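[Editor's note: a minimal sketch of the static-assert pattern this commit describes. CONFIG_MALLOC_ALIGNMENT is from the commit; TLSF_ALIGN_SIZE is a hypothetical stand-in for the allocator's own alignment constant, which may be named differently in barebox.]

    /* Build-time guarantee that an allocator never provides less than the
     * configured minimum alignment. Names below are illustrative. */
    #define CONFIG_MALLOC_ALIGNMENT 8
    #define TLSF_ALIGN_SIZE         8   /* hypothetical stand-in */

    _Static_assert(TLSF_ALIGN_SIZE >= CONFIG_MALLOC_ALIGNMENT,
                   "allocator must provide at least CONFIG_MALLOC_ALIGNMENT");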
* tlsf: give malloc 8-byte alignment on 32-bit as well | Ahmad Fatoum | 2023-09-26 | 1 | -6/+6
The current alignment of 4 bytes is too low. Access to 64-bit data via
ldrd/strd requires at least an 8-byte alignment:

| Prior to ARMv6, if the memory address is not 64-bit aligned, the
| data read from memory is UNPREDICTABLE. Alignment checking (taking
| a data abort), and support for a big-endian (BE-32) data format are
| implementation options.

We already have at least an 8-byte alignment for dlmalloc, so have TLSF
follow suit by aligning the accounting structures appropriately.

Instead of adding manual padding, we could also enlarge
block_header_t::size to a uint64_t unconditionally and mark
block_header_t __packed. That comes with a runtime cost, though, or with
ugly __builtin_assume_aligned annotations, so we stick with the simpler
version.

Reported-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Link: https://lore.barebox.org/barebox/ly7d1z1qvs.fsf@ensc-pc.intern.sigma-chemnitz.de/
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20230911152433.3640781-5-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
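[Editor's note: a small, hypothetical illustration of why the quoted ARM restriction makes 4-byte alignment insufficient, not code from the patch itself. A 64-bit store through an allocator-returned pointer typically compiles to strd on 32-bit ARM, which needs the 8-byte alignment this commit establishes.]

    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        /* If the heap only guarantees 4-byte alignment, p may land on a
         * 4-but-not-8-aligned address, and this 64-bit store (ldrd/strd
         * on ARM32) is UNPREDICTABLE on pre-ARMv6 cores. */
        uint64_t *p = malloc(sizeof(*p));

        if (p)
            *p = 0x1122334455667788ULL;
        free(p);
        return 0;
    }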
* tlsf: fix sizeof(size_t) == sizeof(void *) assumption | Ahmad Fatoum | 2023-09-26 | 1 | -7/+8
TLSF's struct block_header_t doesn't describe a single block; instead,
its first member covers the previous block:

                              .~~~~~~~~~~~~~~~~~~~.
                              |  prev_phys_block  |
    End of previous block --> |———————————————————| <-- Start of a free block
                              |       size        |
                              |— — — — — — — — — —|
                              | < Start of Data > |
                              '———————————————————'

This works because if the previous block is free, there is no harm in
using its last word to store prev_phys_block.

We thus need pointer arithmetic to:

  - arrive from the start of data at size, i.e. decrement the offset by
    sizeof(size_t)
  - arrive from size at prev_phys_block, i.e. decrement the offset by
    sizeof(struct block_header_t *)

Across the TLSF implementation, though, we conflate the two and use
block_header_shift to mean both. This works as long as sizeof(size_t) ==
sizeof(struct block_header_t *), which is currently true for both 32-bit
and 64-bit configurations.

To facilitate an 8-byte minimum allocation alignment for 32-bit systems
as well, we will increase sizeof(struct block_header_t::size) to 8 bytes,
which will break the implicit assumption. Fix it by adding an additional
const block_header_shift and using it where appropriate. No functional
change just yet.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20230911152433.3640781-4-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
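[Editor's note: a hedged sketch of the two distinct offsets described above, with illustrative helper names rather than the patch's actual code. The two decrements coincide only while sizeof(size_t) == sizeof(struct block_header_t *).]

    #include <stddef.h>

    struct block_header_t {
        struct block_header_t *prev_phys_block; /* overlaps previous block */
        size_t size;                            /* low bits hold free flags */
    };

    /* start of data -> size field: step back over the size member */
    static inline size_t *ptr_to_size(void *ptr)
    {
        return (size_t *)((char *)ptr - sizeof(size_t));
    }

    /* size field -> prev_phys_block: step back over the pointer member */
    static inline struct block_header_t *size_to_block(size_t *size)
    {
        return (struct block_header_t *)
            ((char *)size - sizeof(struct block_header_t *));
    }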
* tlsf: ensure malloc pool is aligned | Ahmad Fatoum | 2023-09-26 | 1 | -2/+2
The struct control_t describing a pool is allocated at the pool's very
start and is directly followed by the first block. To ensure the first
block is suitably aligned, align_up the size in tlsf_size().

So far, TLSF on 32-bit and 64-bit happened to be aligned, so this
introduces no functional change just yet. With the upcoming changes to
the block header that increase alignment on 32-bit systems, this
realignment will become required.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20230911152433.3640781-3-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
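[Editor's note: in sketch form, the fix amounts to the following. align_up, tlsf_size and control_t are named in the commit; the rounding helper, alignment value and the stub control_t member are illustrative assumptions.]

    #include <stddef.h>

    #define ALIGN_SIZE 8u   /* block alignment, 8 bytes per this series */

    typedef struct {
        /* bitmaps and segregated free lists live here in the real code */
        unsigned int fl_bitmap;
    } control_t;

    static inline size_t align_up(size_t x, size_t align)
    {
        return (x + (align - 1)) & ~(align - 1);
    }

    /* Size reserved for the pool's control structure, rounded up so the
     * first block placed right behind it starts suitably aligned. */
    size_t tlsf_size(void)
    {
        return align_up(sizeof(control_t), ALIGN_SIZE);
    }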
* tlsf: turn static const variables into compile-time constant expressions | Ahmad Fatoum | 2023-09-26 | 1 | -8/+6
static const is not a compile-time constant expression in C, unlike in
C++, and treating the two languages the same just restricts where we can
use the constants: e.g. they are not usable in static_assert. Turn them
into proper macros to fix this.

To keep the code easy to sync with other TLSF implementations, we
maintain the same lowercase naming, despite it being at odds with the
general kernel coding style.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20230911152433.3640781-2-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
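[Editor's note: a short example of the C limitation being fixed. The lowercase name mirrors TLSF's style; the value is illustrative.]

    /* Not an integer constant expression in C, so it cannot appear in
     * static_assert, case labels, or array bounds at file scope: */
    static const int block_header_free_bit_old = 1 << 0;

    /* A macro is a genuine compile-time constant expression: */
    #define block_header_free_bit (1 << 0)

    _Static_assert(block_header_free_bit == 1,
                   "usable in static asserts, unlike the static const");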
* tlsf: fix internal overflow trying to allocate big buffers | Ahmad Fatoum | 2022-05-24 | 1 | -3/+16
The function adjust_request_size() has an unhandled failure mode: if
aligning a buffer up overflows SIZE_MAX, it computes a way too short
buffer instead of propagating an error.

Fix this by returning 0 in this case and checking for 0 wherever the
function is called. 0 is a safe choice for an error code, because the
function returns at least block_size_min on success and 0 was already an
error code (that was just never handled).

Reported-by: Jonas Martin <j.martin@pengutronix.de>
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20220523062756.774153-2-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
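[Editor's note: a sketch of the fixed logic under illustrative constants; the overflow test is mine and the real patch may phrase it differently. The key observation: align_up can only yield a value smaller than its input if the addition wrapped past SIZE_MAX, so a shrunken result signals overflow and 0 is returned.]

    #include <stddef.h>

    #define block_size_min  16u                 /* illustrative */
    #define block_size_max  ((size_t)1 << 30)   /* illustrative */

    static inline size_t align_up(size_t x, size_t align)
    {
        return (x + (align - 1)) & ~(align - 1);
    }

    static size_t adjust_request_size(size_t size, size_t align)
    {
        size_t adjust = 0;  /* 0 == error */

        if (size) {
            const size_t aligned = align_up(size, align);

            /* aligned < size means align_up wrapped past SIZE_MAX */
            if (aligned >= size && aligned < block_size_max)
                adjust = aligned > block_size_min ?
                         aligned : block_size_min;
        }
        return adjust;
    }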
* treewide: add SPDX-License-Identifier for files without explicit license | Ahmad Fatoum | 2022-01-05 | 1 | -0/+2
Record GPL-2.0-only as the license for all files lacking an explicit
license statement.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Link: https://lore.barebox.org/20220103120539.1730644-12-a.fatoum@pengutronix.de
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
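[Editor's note: for a C file like tlsf.c, the added header presumably takes the standard SPDX comment form; the exact line added by the patch is not shown in this log.]

    /* SPDX-License-Identifier: GPL-2.0-only */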
* Add KASan support | Sascha Hauer | 2020-09-22 | 1 | -6/+23
KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It
provides a fast and comprehensive solution for finding use-after-free
and out-of-bounds bugs.

This adds support for KASan to barebox. It is basically a stripped-down
version taken from the Linux kernel as of v5.9-rc1.

The initial Linux commit 0b24becc810d ("kasan: add kernel address
sanitizer infrastructure") describes what KASan does:

| KASAN uses compile-time instrumentation for checking every memory access,
| therefore GCC > v4.9.2 required. v4.9.2 almost works, but has issues with
| putting symbol aliases into the wrong section, which breaks kasan
| instrumentation of globals.
|
| Basic idea:
|
| The main idea of KASAN is to use shadow memory to record whether each byte
| of memory is safe to access or not, and use compiler's instrumentation to
| check the shadow memory on each memory access.
|
| Address sanitizer uses 1/8 of the memory addressable in kernel for shadow
| memory and uses direct mapping with a scale and offset to translate a
| memory address to its corresponding shadow address.
|
| For every 8 bytes there is one corresponding byte of shadow memory.
| The following encoding used for each shadow byte: 0 means that all 8 bytes
| of the corresponding memory region are valid for access; k (1 <= k <= 7)
| means that the first k bytes are valid for access, and other (8 - k) bytes
| are not; Any negative value indicates that the entire 8-bytes are
| inaccessible. Different negative values used to distinguish between
| different kinds of inaccessible memory (redzones, freed memory) (see
| mm/kasan/kasan.h).
|
| To be able to detect accesses to bad memory we need a special compiler.
| Such compiler inserts a specific function calls (__asan_load*(addr),
| __asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16.
|
| These functions check whether memory region is valid to access or not by
| checking corresponding shadow memory. If access is not valid an error
| printed.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
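[Editor's note: a compact sketch of the shadow translation and check described in the quote. kasan_shadow_offset stands in for the platform-specific constant; the real barebox/Linux helpers differ in naming and detail.]

    #include <stdbool.h>
    #include <stdint.h>

    #define KASAN_SHADOW_SCALE_SHIFT 3  /* 1 shadow byte per 8 bytes */
    extern uintptr_t kasan_shadow_offset;   /* placeholder for the real offset */

    static inline int8_t *mem_to_shadow(const void *addr)
    {
        return (int8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + kasan_shadow_offset);
    }

    /* Is a 1-byte access at addr valid? Shadow value 0: all 8 bytes OK;
     * 1..7: only the first k bytes OK; negative: redzone/freed memory. */
    static inline bool kasan_byte_accessible(const void *addr)
    {
        int8_t shadow = *mem_to_shadow(addr);

        if (shadow == 0)
            return true;
        if (shadow < 0)
            return false;
        return ((uintptr_t)addr & 7) < (uintptr_t)shadow;
    }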
* tlsf: Update to v3.1 | Sascha Hauer | 2020-06-16 | 1 | -153/+258
This updates the tlsf implementation to v3.1. This is taken from commit
deff9ab509341f264addbd3c8ada533678591905 in
https://github.com/mattconte/tlsf.git.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* whole tree: remove trailing whitespace | Du Huanpeng | 2016-04-21 | 1 | -3/+3
Signed-off-by: Du Huanpeng <u74147@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* drop <stddef.h> includes | Sascha Hauer | 2015-07-23 | 1 | -1/+0
The compiler's stddef.h should not be included; we declare all types
ourselves.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* tlsf: Use NULL instead of 0 for returning NULL pointers | Sascha Hauer | 2012-06-30 | 1 | -6/+6
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
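[Editor's note: a trivial before/after sketch of the style change; the function name is hypothetical.]

    #include <stddef.h>

    void *tlsf_malloc_sketch(size_t bytes)
    {
        if (!bytes)
            return NULL;    /* was: return 0; */
        /* ... */
        return NULL;
    }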
* tlsf: enable assertions | Sascha Hauer | 2011-12-23 | 1 | -10/+7
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* adapt tlsf for barebox | Antony Pavlov | 2011-12-23 | 1 | -0/+11
Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* import TLSF 2.0 from http://tlsf.baisoku.org/tlsf-2.0.zip | Antony Pavlov | 2011-12-23 | 1 | -0/+961
TLSF: Two Level Segregated Fit memory allocator implementation. Written
by Matthew Conte (matt@baisoku.org). Public Domain, no restrictions.

Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
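[Editor's note: the "two level" in TLSF refers to how a request size maps to one of FL x SL segregated free lists in O(1): a first-level index from the position of the highest set bit, and a second-level index from the next few bits below it. A rough sketch with illustrative constants; the real implementation routes small sizes to dedicated lists.]

    #include <stddef.h>

    #define SL_INDEX_COUNT_LOG2 5   /* 32 second-level lists per first level */

    static int fls_sizet(size_t x)  /* index of the highest set bit */
    {
        int r = -1;

        while (x) {
            x >>= 1;
            r++;
        }
        return r;
    }

    /* Valid for size >= (1 << SL_INDEX_COUNT_LOG2); smaller requests are
     * handled by separate small-block lists in the real implementation. */
    static void mapping(size_t size, int *fl, int *sl)
    {
        *fl = fls_sizet(size);
        /* the SL_INDEX_COUNT_LOG2 bits below the top bit, top bit cleared */
        *sl = (int)((size >> (*fl - SL_INDEX_COUNT_LOG2))
                    ^ ((size_t)1 << SL_INDEX_COUNT_LOG2));
    }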