path: root/mm/page_alloc.c
Commit message | Author | Date | Files | Lines (-/+)
* Merge tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel... | Linus Torvalds | 2016-10-15 | 1 | -0/+5
|\
| * latent_entropy: Mark functions with __latent_entropy | Emese Revfy | 2016-10-10 | 1 | -1/+1
| * gcc-plugins: Add latent_entropy plugin | Emese Revfy | 2016-10-10 | 1 | -0/+5
* | mm: warn about allocations which stall for too long | Michal Hocko | 2016-10-07 | 1 | -0/+10
* | mm: consolidate warn_alloc_failed users | Michal Hocko | 2016-10-07 | 1 | -15/+12
* | mm, page_alloc: pull no_progress_loops update to should_reclaim_retry() | Vlastimil Babka | 2016-10-07 | 1 | -14/+14
* | mm, compaction: restrict full priority to non-costly orders | Vlastimil Babka | 2016-10-07 | 1 | -1/+4
* | mm, compaction: more reliably increase direct compaction priority | Vlastimil Babka | 2016-10-07 | 1 | -14/+19
* | Revert "mm, oom: prevent premature OOM killer invocation for high order request" | Vlastimil Babka | 2016-10-07 | 1 | -2/+49
* | mm: introduce arch_reserved_kernel_pages() | Srikar Dronamraju | 2016-10-07 | 1 | -0/+12
* | mm: use zonelist name instead of using hardcoded index | Aneesh Kumar K.V | 2016-10-07 | 1 | -4/+4
* | mm/page_ext: support extra space allocation by page_ext user | Joonsoo Kim | 2016-10-07 | 1 | -1/+1
* | mm/debug_pagealloc.c: don't allocate page_ext if we don't use guard page | Joonsoo Kim | 2016-10-07 | 1 | -1/+7
* | mm/debug_pagealloc.c: clean-up guard page handling code | Joonsoo Kim | 2016-10-07 | 1 | -16/+18
* | mm: fix set pageblock migratetype in deferred struct page init | Xishi Qiu | 2016-10-07 | 1 | -7/+13
* | mem-hotplug: fix node spanned pages when we have a movable node | Xishi Qiu | 2016-10-07 | 1 | -31/+23
* | mm, compaction: require only min watermarks for non-costly orders | Vlastimil Babka | 2016-10-07 | 1 | -2/+7
* | mm, compaction: use proper alloc_flags in __compaction_suitable() | Vlastimil Babka | 2016-10-07 | 1 | -1/+1
|/
* mm, vmscan: only allocate and reclaim from zones with pages managed by the bu... | Mel Gorman | 2016-09-01 | 1 | -2/+2
* mm, oom: prevent premature OOM killer invocation for high order request | Michal Hocko | 2016-09-01 | 1 | -49/+2
* proc, meminfo: use correct helpers for calculating LRU sizes in meminfo | Mel Gorman | 2016-08-11 | 1 | -1/+1
* mm/page_alloc.c: recalculate some of node threshold when on/offline memory | Joonsoo Kim | 2016-08-10 | 1 | -15/+35
* mm/page_alloc.c: fix wrong initialization when sysctl_min_unmapped_ratio changes | Joonsoo Kim | 2016-08-10 | 1 | -1/+1
* mm: memcontrol: only mark charged pages with PageKmemcg | Vladimir Davydov | 2016-08-09 | 1 | -9/+5
* mm: initialise per_cpu_nodestats for all online pgdats at boot | Mel Gorman | 2016-08-04 | 1 | -5/+5
* treewide: replace obsolete _refok by __ref | Fabian Frederick | 2016-08-02 | 1 | -2/+2
* mm, compaction: simplify contended compaction handling | Vlastimil Babka | 2016-07-28 | 1 | -27/+1
* mm, compaction: introduce direct compaction priority | Vlastimil Babka | 2016-07-28 | 1 | -14/+14
* mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocations | Vlastimil Babka | 2016-07-28 | 1 | -4/+2
* mm, page_alloc: make THP-specific decisions more generic | Vlastimil Babka | 2016-07-28 | 1 | -13/+9
* mm, page_alloc: restructure direct compaction handling in slowpath | Vlastimil Babka | 2016-07-28 | 1 | -52/+57
* mm, page_alloc: don't retry initial attempt in slowpath | Vlastimil Babka | 2016-07-28 | 1 | -11/+18
* mm, page_alloc: set alloc_flags only once in slowpath | Vlastimil Babka | 2016-07-28 | 1 | -26/+26
* mm: track NR_KERNEL_STACK in KiB instead of number of stacks | Andy Lutomirski | 2016-07-28 | 1 | -2/+1
* mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-28 | 1 | -39/+10
* mm: add per-zone lru list stat | Minchan Kim | 2016-07-28 | 1 | -0/+10
* mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-28 | 1 | -3/+3
* mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-28 | 1 | -12/+43
* mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-28 | 1 | -2/+11
* mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-28 | 1 | -74/+1
* mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-28 | 1 | -8/+16
* mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-28 | 1 | -2/+6
* mm: move most file-based accounting to the node | Mel Gorman | 2016-07-28 | 1 | -42/+32
* mm: move page mapped accounting to the node | Mel Gorman | 2016-07-28 | 1 | -3/+3
* mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-28 | 1 | -15/+11
* mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-28 | 1 | -1/+1
* mm, vmscan: move LRU lists to node | Mel Gorman | 2016-07-28 | 1 | -31/+37