path: root/mm/slub.c
Commit message  (Author, Date, Files changed, Lines removed/added)
* proc: move /proc/slabinfo boilerplate to mm/slub.c, mm/slab.c  (Alexey Dobriyan, 2008-10-23, 1 file, -9/+20)
* slub: fixed uninitialized counter in struct kmem_cache_node  (Salman Qazi, 2008-09-15, 1 file, -0/+1)
* slub: Disable NUMA remote node defragmentation by default  (Christoph Lameter, 2008-08-20, 1 file, -2/+2)
* SLUB: dynamic per-cache MIN_PARTIAL  (Pekka Enberg, 2008-08-05, 1 file, -7/+19)
* mm: unexport ksize  (Adrian Bunk, 2008-07-29, 1 file, -1/+0)
* SL*B: drop kmem cache argument from constructor  (Alexey Dobriyan, 2008-07-26, 1 file, -7/+6)
* slub: record page flag overlays explicitly  (Andy Whitcroft, 2008-07-24, 1 file, -48/+17)
* slub: dump more data on slab corruption  (Pekka Enberg, 2008-07-19, 1 file, -1/+1)
* SLUB: simplify re on_each_cpu()  (Alexey Dobriyan, 2008-07-16, 1 file, -8/+0)
* Merge branch 'generic-ipi' into generic-ipi-for-linus  (Ingo Molnar, 2008-07-15, 1 file, -1/+1)
|\
| * on_each_cpu(): kill unused 'retry' parameter  (Jens Axboe, 2008-06-26, 1 file, -1/+1)
* | slub: current is always valid  (Alexey Dobriyan, 2008-07-15, 1 file, -1/+1)
* | slub: Add check for kfree() of non slab objects.  (Christoph Lameter, 2008-07-15, 1 file, -0/+1)
* | Start using the new '%pS' infrastructure to print symbols  (Linus Torvalds, 2008-07-14, 1 file, -3/+2)
* | slub: Fix use-after-preempt of per-CPU data structure  (Dmitry Adamushko, 2008-07-10, 1 file, -1/+3)
* | Christoph has moved  (Christoph Lameter, 2008-07-04, 1 file, -1/+1)
* | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte  (Christoph Lameter, 2008-07-03, 1 file, -2/+10)
|/
* slub: ksize() abuse checks  (Pekka Enberg, 2008-05-22, 1 file, -2/+3)
* slub: fix atomic usage in any_slab_objects()  (Benjamin Herrenschmidt, 2008-05-08, 1 file, -1/+1)
* slub: #ifdef simplification  (Christoph Lameter, 2008-05-02, 1 file, -4/+2)
* slub: Whitespace cleanup and use of strict_strtoul  (Christoph Lameter, 2008-05-02, 1 file, -13/+25)
* remove div_long_long_rem  (Roman Zippel, 2008-05-01, 1 file, -5/+4)
* infrastructure to debug (dynamic) objects  (Thomas Gleixner, 2008-04-30, 1 file, -0/+3)
* ipc: define the slab_memory_callback priority as a constant  (Nadia Derbey, 2008-04-29, 1 file, -1/+1)
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/pen...  (Linus Torvalds, 2008-04-28, 1 file, -194/+287)
|\
| * slub: pack objects denser  (Christoph Lameter, 2008-04-27, 1 file, -2/+2)
| * slub: Calculate min_objects based on number of processors.  (Christoph Lameter, 2008-04-27, 1 file, -1/+3)
| * slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS  (Christoph Lameter, 2008-04-27, 1 file, -21/+2)
| * slub: Simplify any_slab_object checks  (Christoph Lameter, 2008-04-27, 1 file, -9/+1)
| * slub: Make the order configurable for each slab cache  (Christoph Lameter, 2008-04-27, 1 file, -7/+22)
| * slub: Drop fallback to page allocator method  (Christoph Lameter, 2008-04-27, 1 file, -41/+2)
| * slub: Fallback to minimal order during slab page allocation  (Christoph Lameter, 2008-04-27, 1 file, -11/+28)
| * slub: Update statistics handling for variable order slabs  (Christoph Lameter, 2008-04-27, 1 file, -53/+97)
| * slub: Add kmem_cache_order_objects struct  (Christoph Lameter, 2008-04-27, 1 file, -25/+51)
| * slub: for_each_object must be passed the number of objects in a slab  (Christoph Lameter, 2008-04-27, 1 file, -6/+18)
| * slub: Store max number of objects in the page struct.  (Christoph Lameter, 2008-04-27, 1 file, -20/+34)
| * slub: Dump list of objects not freed on kmem_cache_close()  (Christoph Lameter, 2008-04-27, 1 file, -1/+31)
| * slub: free_list() cleanup  (Christoph Lameter, 2008-04-27, 1 file, -11/+7)
| * slub: improve kmem_cache_destroy() error message  (Pekka Enberg, 2008-04-27, 1 file, -2/+5)
* | mm: move cache_line_size() to <linux/cache.h>  (Pekka Enberg, 2008-04-28, 1 file, -5/+0)
* | mm: have zonelist contains structs with both a zone pointer and zone_idx  (Mel Gorman, 2008-04-28, 1 file, -1/+1)
* | mm: use two zonelist that are filtered by GFP mask  (Mel Gorman, 2008-04-28, 1 file, -3/+5)
* | mm: introduce node_zonelist() for accessing the zonelist for a GFP mask  (Mel Gorman, 2008-04-28, 1 file, -2/+1)
|/
* slab_err: Pass parameters correctly to slab_bug  (Christoph Lameter, 2008-04-23, 1 file, -2/+2)
* slub: No need for per node slab counters if !SLUB_DEBUG  (Christoph Lameter, 2008-04-14, 1 file, -11/+40)
* slub: Move map/flag clearing to __free_slab  (Christoph Lameter, 2008-04-14, 1 file, -2/+2)
* slub: Fixes to per cpu stat output in sysfs  (Christoph Lameter, 2008-04-14, 1 file, -1/+3)
* slub: Deal with config variable dependencies  (Christoph Lameter, 2008-04-14, 1 file, -15/+15)
* slub: Reduce #ifdef ZONE_DMA by moving kmalloc_caches_dma near dma logic  (Christoph Lameter, 2008-04-14, 1 file, -4/+1)
* slub: Initialize per-cpu stats  (Pekka Enberg, 2008-04-14, 1 file, -0/+3)