author	Lucas Stach <dev@lynxeye.de>	2016-12-07 17:36:36 +1100
committer	Dave Chinner <david@fromorbit.com>	2016-12-07 17:36:36 +1100
commit	6031e73a5b3f85ec45cac08ef90995b2d3f941c7
tree	7ea855c18580f1b227a8ed8de6379a660c17d5f2	/fs/xfs/xfs_buf.h
parent	cae028df53449905c944603df624ac94bc619661
xfs: use rhashtable to track buffer cache
On filesystems with a lot of metadata and in metadata-intensive workloads, xfs_buf_find() shows up at the top of the CPU cycles trace. Most of the CPU time is spent on CPU cache misses while traversing the rbtree.

As the buffer cache does not need any kind of ordering, but only fast lookups, a hashtable is the natural data structure to use. The rhashtable infrastructure provides a self-scaling hashtable implementation and allows lookups to proceed while the table is going through a resize operation.

This reduces the CPU time spent on the lookups to 1/3 even for small filesystems with a relatively small number of cached buffers, with possibly much larger gains on more heavily loaded filesystems.

[dchinner: reduce minimum hash size to an acceptable size for large filesystems with many AGs with no active use.]
[dchinner: remove stale rbtree asserts.]
[dchinner: use xfs_buf_map for compare function argument.]
[dchinner: make functions static.]
[dchinner: remove redundant comments.]

Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
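The table setup itself lives in xfs_buf.c and is not part of the hunk below. As a rough, illustrative sketch of the rhashtable pattern the commit message describes, the following keys a table on a buffer's starting disk block number; all identifiers here (cache_buf, buf_hash_params, cache_buf_find) are hypothetical and do not claim to match the actual XFS code:

	#include <linux/rhashtable.h>

	/*
	 * Illustrative only: a cache object that embeds a struct rhash_head
	 * and is keyed on its starting disk block number.
	 */
	struct cache_buf {
		struct rhash_head	node;	/* hash chain linkage */
		u64			bno;	/* lookup key: start block */
	};

	static const struct rhashtable_params buf_hash_params = {
		.min_size		= 32,	/* small initial table; it grows on demand */
		.key_len		= sizeof(u64),
		.key_offset		= offsetof(struct cache_buf, bno),
		.head_offset		= offsetof(struct cache_buf, node),
		.automatic_shrinking	= true,
	};

	/*
	 * rhashtable_lookup_fast() takes the RCU read lock internally and the
	 * lookup remains valid while the table is being resized.
	 */
	static struct cache_buf *cache_buf_find(struct rhashtable *ht, u64 bno)
	{
		return rhashtable_lookup_fast(ht, &bno, buf_hash_params);
	}

In this sketch the table would be created once with rhashtable_init(&ht, &buf_hash_params) and torn down with rhashtable_destroy(&ht); the actual patch additionally uses a custom compare function taking an xfs_buf_map, as noted in the commit message.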
Diffstat (limited to 'fs/xfs/xfs_buf.h')
-rw-r--r--	fs/xfs/xfs_buf.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index 38c9a95bda7d..8a9d3a9599f0 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -151,7 +151,7 @@ typedef struct xfs_buf {
* which is the only bit that is touched if we hit the semaphore
* fast-path on locking.
*/
- struct rb_node b_rbnode; /* rbtree node */
+ struct rhash_head b_rhash_head; /* pag buffer hash node */
xfs_daddr_t b_bn; /* block number of buffer */
int b_length; /* size of buffer in BBs */
atomic_t b_hold; /* reference count */
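The replaced field above is exactly that linkage: each buffer embeds a struct rhash_head and the per-AG table chains through it. Continuing the hypothetical sketch from earlier (same assumed cache_buf and buf_hash_params, not the real XFS helpers), insertion and removal would look roughly like this:

	/* Link a buffer into the table; returns 0 on success or a negative errno. */
	static int cache_buf_insert(struct rhashtable *ht, struct cache_buf *bp)
	{
		return rhashtable_insert_fast(ht, &bp->node, buf_hash_params);
	}

	/* Unlink it again when the buffer is torn down. */
	static int cache_buf_remove(struct rhashtable *ht, struct cache_buf *bp)
	{
		return rhashtable_remove_fast(ht, &bp->node, buf_hash_params);
	}

Because the rhash_head is embedded in the object, no separate node allocation is needed on insert, which keeps the buffer lookup path free of extra pointer chasing compared with the old rbtree linkage.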