path: root/common/state/backend_bucket_direct.c
Commit message | Author | Age | Files | Lines
* state: Fix lseek error check in state_backend_bucket_direct_write() | Andrey Smirnov | 2019-03-11 | 1 | -4/+3

    Don't use 'int' to store lseek()'s return value, to avoid problems with
    large seek offsets. While at it, make sure to populate the return error
    code from 'errno'.

    Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* state: Fix lseek error check in state_backend_bucket_direct_read() | Andrey Smirnov | 2019-03-11 | 1 | -8/+8

    Don't use 'int' to store lseek()'s return value, to avoid problems with
    large seek offsets. While at it, make sure to populate the return error
    code from 'errno'.

    Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
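For both lseek fixes above the pattern is the same: keep lseek()'s return
value in an off_t sized variable and derive the error code from errno
instead of returning a guessed value. A minimal sketch of that pattern; the
function and parameter names are illustrative, not the exact barebox code:

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Seek to a bucket's offset before reading or writing it. */
    static int bucket_seek(int fd, off_t offset)
    {
            off_t ret;              /* wide enough for large offsets, unlike 'int' */

            ret = lseek(fd, offset, SEEK_SET);
            if (ret < 0)
                    return -errno;  /* propagate the real reason for the failure */

            return 0;
    }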
* common: state: harmonize code with dt-utils | Ulrich Ölmann | 2019-02-11 | 1 | -0/+3

    Insert a helpful size check that is an outcome of the following dt-utils
    commits:

    | commit a6eb5350be0f7a5673162d20f2dd72569d5a4d0c
    | Author: Markus Pargmann <mpa@pengutronix.de>
    | Date:   Fri May 27 13:53:40 2016 +0200
    |
    |     barebox-state: Import updated state code
    |
    |     Signed-off-by: Markus Pargmann <mpa@pengutronix.de>

    | commit 583acea6669550ffa7ffb465301ddb3529206afc
    | Author: Sascha Hauer <s.hauer@pengutronix.de>
    | Date:   Thu Mar 23 11:29:50 2017 +0100
    |
    |     state: backend-direct: Fix max_size
    |
    |     The max_size in the direct backend includes the meta data, so
    |     substract its size when determing the max data size we can store.
    |
    |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>

    | commit dcf781f1b3d15aff5f5ff0b604bff447dee2040c
    | Author: Sascha Hauer <s.hauer@pengutronix.de>
    | Date:   Thu Mar 23 12:59:48 2017 +0100
    |
    |     state: backend_bucket_direct: max_size is always given
    |
    |     max_size is always != 0, so if(direct->max_size) can be skipped.
    |
    |     Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>

    Signed-off-by: Ulrich Ölmann <u.oelmann@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
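The harmonized size check rejects writes that would no longer fit into the
bucket once the on-storage metadata is accounted for. A minimal sketch,
assuming a hypothetical metadata struct and field names (the real barebox
definitions may differ):

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative metadata header stored in front of the payload. */
    struct direct_meta {
            uint32_t magic;
            uint32_t written_length;
    } __attribute__((packed));

    /* Refuse payloads that, together with the metadata, exceed the bucket. */
    static int direct_check_size(size_t len, size_t max_size)
    {
            if (len > max_size - sizeof(struct direct_meta))
                    return -E2BIG;

            return 0;
    }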
* common: state: Add property to protect existing data | Daniel Schultz | 2018-04-16 | 1 | -0/+2

    After an update to a newer barebox version with the state framework
    enabled, existing data in storage memories could be overwritten. Add a
    new property so that, before every write, the meta magic field is
    checked to contain only the magic number, zeros or ones.

    Signed-off-by: Daniel Schultz <d.schultz@phytec.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
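The protection reads the metadata that is already on storage and only allows
the write if its magic field looks like state data (the magic number) or like
erased/never-written storage (all zeros or all ones). A rough sketch under
those assumptions; the magic value and helper name are placeholders:

    #include <stdbool.h>
    #include <stdint.h>

    #define STATE_MAGIC 0x12345678u   /* placeholder, not the real magic value */

    /* Allow overwriting only state data or blank/erased storage. */
    static bool magic_allows_overwrite(uint32_t existing_magic)
    {
            return existing_magic == STATE_MAGIC ||
                   existing_magic == 0x00000000u ||
                   existing_magic == 0xffffffffu;
    }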
* state: keep backward compatibility | Juergen Borleis | 2017-09-06 | 1 | -9/+19

    Previous 'state' variable set variants do not know about and do not use
    metadata. The 'direct' storage backend's read function honors this, but
    its counterpart, the write function, does not, which makes an update of
    the 'state' variable set impossible. This change makes backward
    compatibility explicit; otherwise it now complains in the read function
    as well. With some more debug output it helps the developer to do
    things right.

    Signed-off-by: Juergen Borleis <jbe@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
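One way to picture this fix: the read path already detects a legacy bucket
that carries no metadata, and the write path should honor that same decision
instead of unconditionally prepending metadata. A loose sketch with an
assumed per-bucket flag; the struct layout and names are guesses, not the
actual barebox implementation:

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Illustrative metadata header and bucket state. */
    struct direct_meta {
            uint32_t magic;
            uint32_t written_length;
    } __attribute__((packed));

    struct direct_bucket {
            int  fd;
            bool write_meta;   /* false for legacy 'state' sets without metadata */
    };

    /* Write in the same layout (with or without metadata) that was read. */
    static int direct_write(struct direct_bucket *bucket,
                            const struct direct_meta *meta,
                            const void *buf, size_t len)
    {
            if (bucket->write_meta &&
                write(bucket->fd, meta, sizeof(*meta)) != sizeof(*meta))
                    return -errno;

            if (write(bucket->fd, buf, len) != (ssize_t)len)
                    return -errno;

            return 0;
    }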
* state: storage: direct: do not close file that is not opened | Sascha Hauer | 2017-03-31 | 1 | -1/+0

    When open failed, do not try to close the invalid fd afterwards.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
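The fix is pure error-path hygiene: if open() never returned a valid
descriptor, the error path must not hand that value to close(). A minimal
illustration with a made-up function name:

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int open_backend(const char *path)
    {
            int fd = open(path, O_RDWR);

            if (fd < 0)
                    return -errno;   /* return directly; there is nothing to close */

            /* ... use fd ...; only this successful path ever calls close() */
            close(fd);
            return 0;
    }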
* state: backend_bucket_direct: max_size is always given | Sascha Hauer | 2017-03-31 | 1 | -1/+1

    max_size is always != 0, so if(direct->max_size) can be skipped.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* state: backend-direct: Fix max_size | Sascha Hauer | 2017-03-31 | 1 | -1/+1

    The max_size in the direct backend includes the meta data, so subtract
    its size when determining the max data size we can store.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* state: Convert all bufs to void * | Sascha Hauer | 2017-03-31 | 1 | -3/+3

    A void * is a much better type for a buffer than a u8 *, as it can be
    cast to any other type implicitly. Convert all buffers used by the
    state framework to void *.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
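The benefit is a plain C property: a void * converts implicitly to and from
any object pointer type, so consumers of the buffer need no casts. A tiny
illustration with a made-up struct name:

    #include <stdint.h>
    #include <stdlib.h>

    struct packed_state { uint32_t magic; };

    int main(void)
    {
            void *buf = malloc(sizeof(struct packed_state));
            if (!buf)
                    return 1;

            struct packed_state *s = buf;   /* fine: implicit conversion */
            /* With 'uint8_t *buf' the line above would need an explicit cast. */
            s->magic = 0;
            free(buf);
            return 0;
    }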
* state: replace len_hint logic | Sascha Hauer | 2017-03-31 | 1 | -8/+3

    The len_hint mechanism is rather hard to understand: it is not clear
    from where to where the hint is passed, nor what happens if the hint is
    empty or wrong.

    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* state: use packed attribute for on storage structs | Stefan Lengfeld | 2016-11-03 | 1 | -1/+1

    These structs are used for on-storage data layouts. They should not be
    affected by the different integer precisions and alignment
    optimizations of 32bit or 64bit machines. Using architecture
    independent integer data types, like uint32_t, achieves the former;
    using the packed attribute achieves the latter.

    Signed-off-by: Stefan Lengfeld <s.lengfeld@phytec.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
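In practice that means the on-storage header combines fixed-width types with
the packed attribute, so the byte layout is identical on 32-bit and 64-bit
machines. A sketch of such a struct; the field names are illustrative, not
the exact barebox layout:

    #include <stdint.h>

    /* Fixed-width fields plus packing: no compiler-inserted padding,
     * so the on-storage layout does not depend on the architecture. */
    struct on_storage_meta {
            uint32_t magic;
            uint32_t written_length;
    } __attribute__((packed));

    /* A build-time size check costs nothing at runtime. */
    _Static_assert(sizeof(struct on_storage_meta) == 8,
                   "unexpected padding in on_storage_meta");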
* state: Refactor state framework | Markus Pargmann | 2016-07-08 | 1 | -0/+180

    The state framework grew organically over time. Unfortunately the
    architecture and abstractions disappeared during this period. This
    patch refactors the framework to recreate the abstractions. The main
    focus was the backend with its storage. The main use-case was to offer
    better NAND support with fewer erase cycles and interchangeable data
    formats (dtb, raw).

    The general architecture now has a backend which consists of a data
    format and a storage. The storage consists of multiple storage buckets,
    each holding exactly one copy of the state data.

    A data format describes a data serialization for the state framework.
    This can be either dtb or raw.

    A storage bucket is a storage location which is used to store any data.
    There is a (new) circular type which writes changes behind the last
    written data and therefore reduces the number of erases. The other type
    is a direct bucket which writes directly to a storage offset for all
    non-erase storage.

    Furthermore this patch splits up all classes into different files in a
    subdirectory. This is currently all in one patch as I can't see a good
    way to split the changes up without having a non-working state
    framework in between.

    The following outline shows the new architecture roughly:

    - state uses state_backend:
        + state_load(*state);
        + state_save(*state);
        + state_backend_init(...);

    - state_backend_format <INTERFACE>, describing how the state data is
      serialized:
        + verify(*format, magic, *buf, len);
        + pack(*format, *state, **buf, len);
        + unpack(*format, *state, *buf, len);
        + get_packed_len(*format, *state);
        + free(*format);
      Implementations: backend_format_dtb, backend_format_raw.

    - state_backend_storage, responsible for managing multiple data copies
      and distributing them onto several buckets; read data is verified
      against the given format to ensure that the read data is correct:
        + init(...);
        + free(*storage);
        + read(*storage, *format, magic, **buf, *len, len_hint);
        + write(*storage, *buf, len);
        + restore_consistency(*storage, *buf, len);

    - state_backend_storage_bucket <INTERFACE>; a storage bucket represents
      exactly one data copy at one data location. A circular bucket writes
      any new data to the end of the bucket (for reduced erases on NAND),
      a direct bucket writes directly at one location:
        + init(*bucket);
        + write(*bucket, *buf, len);
        + read(*bucket, **buf, len_hint);
        + free(*bucket);
      Implementations: backend_bucket_direct, backend_bucket_circular,
      backend_bucket_cached.

    A backend_bucket_cached is a transparent bucket that directly uses
    another bucket as backend device and caches all accesses.

    Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
    Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
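The bucket interface from the outline above maps naturally onto a struct of
function pointers. A sketch along those lines, with the method set taken
from the commit message; the concrete barebox definition may carry
additional fields and slightly different signatures:

    #include <stddef.h>

    /* One data copy at one storage location; direct, circular and cached
     * buckets all implement this same set of operations. */
    struct state_backend_storage_bucket {
            int  (*init)(struct state_backend_storage_bucket *bucket);
            int  (*write)(struct state_backend_storage_bucket *bucket,
                          const void *buf, size_t len);
            int  (*read)(struct state_backend_storage_bucket *bucket,
                         void **buf, size_t len_hint);
            void (*free)(struct state_backend_storage_bucket *bucket);
    };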