The "too small to fail" memory-allocation rule

Posted Dec 24, 2014 10:40 UTC (Wed) by yoe (guest, #25743)
In reply to: The "too small to fail" memory-allocation rule by xorbe
Parent article: The "too small to fail" memory-allocation rule

ZFS is not a 'memory offender'; it simply has a different approach to caching. Where other file systems assume they're not the only users of the system, ZFS in its default setting assumes the system is a file server and will hard-allocate most of the available RAM as cache. While that does leave little room for other processes, that is an appropriate thing to do if the system is, indeed, a file server and there aren't many other processes. If there are, however, it's only a default which can be changed.
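For the record, and purely as an illustration: on the ZFS-on-Linux port that default is exposed as a module parameter, so a machine that isn't a dedicated file server can cap the cache with something along these lines (e.g. in /etc/modprobe.d/zfs.conf):

    # Cap the ZFS ARC at roughly 512 MB (value is in bytes; pick what fits the box)
    options zfs zfs_arc_max=536870912

The exact knob and where it lives differ between the various ZFS ports, so treat that as a sketch rather than a recipe.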

I personally run ZFS on a system with 'only' 2G of RAM and a single hard disk. It's not as fast as ZFS can be, but it runs perfectly well, with plenty of RAM to spare for other things.



The "too small to fail" memory-allocation rule

Posted Dec 25, 2014 18:38 UTC (Thu) by quotemstr (subscriber, #45331) [Link] (1 responses)

Why does ZFS need its own cache? Page cache should be sufficient and system-global. If you want a particular installation to prefer file-backed pages to swap-backed pages at reclaim time, fine, but I don't see why this policy should have to be tied to a specific filesystem. It feels like a step backward from the unified cache era.
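(To illustrate the kind of global knob I mean, and only as an example: on Linux the balance between dropping page cache and swapping anonymous pages is already tunable system-wide via vm.swappiness, no filesystem-specific cache required:

    # e.g. in /etc/sysctl.d/99-cache.conf -- a higher value makes the kernel more
    # willing to swap anonymous pages out and keep file-backed pages cached
    vm.swappiness = 80

The right value is workload-dependent; the point is only that this policy can live in the VM layer.)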

The "too small to fail" memory-allocation rule

Posted Dec 25, 2014 22:33 UTC (Thu) by yoe (guest, #25743) [Link]

ZFS doesn't necessarily need its own cache, I suppose, but the algorithms involved can be more efficient if they have knowledge of the actual storage layout (which, taken to the extreme, *does* require that they be part of the ZFS code). E.g., ZFS has the ability to store a second-level cache on SSDs; if the first-level (in-memory) cache needs to drop something, it may prefer to drop something it knows is also stored in the SSD cache over something that isn't. The global page cache doesn't have the intricate knowledge of the storage backend that is required for that sort of thing.
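A minimal sketch of the eviction preference I'm describing (a toy illustration in Python, not ZFS's actual ARC/L2ARC code; the class and names are made up):

    from collections import OrderedDict

    class TwoLevelCache:
        """In-RAM cache that knows which blocks also live in an SSD-backed
        second-level cache, and prefers to evict those first."""

        def __init__(self, capacity, l2_blocks):
            self.capacity = capacity      # max number of blocks held in RAM
            self.ram = OrderedDict()      # block id -> data, kept in LRU order
            self.l2 = l2_blocks           # set of block ids also on the SSD

        def get(self, block_id):
            if block_id in self.ram:
                self.ram.move_to_end(block_id)   # mark as recently used
                return self.ram[block_id]
            return None                          # caller reads from SSD or disk

        def put(self, block_id, data):
            self.ram[block_id] = data
            self.ram.move_to_end(block_id)
            while len(self.ram) > self.capacity:
                self._evict_one()

        def _evict_one(self):
            # Prefer the least-recently-used block that is *also* in the SSD
            # cache (it can still be served quickly); fall back to plain LRU.
            victim = next((b for b in self.ram if b in self.l2), None)
            if victim is None:
                self.ram.popitem(last=False)     # oldest entry
            else:
                del self.ram[victim]

The self.l2 membership test stands in for the layout knowledge I'm talking about; a generic page cache has nothing equivalent to consult.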

I suppose that's a step backwards if what you want is "mediocre cache performance, but similar performance for *all* file systems". That's not what ZFS is about, though; ZFS wants to provide excellent performance at all costs. That does mean it's not the best choice for all workloads, but it does beat the pants off of most other solutions in the workloads that it was meant for.

The "too small to fail" memory-allocation rule

Posted Dec 26, 2014 16:11 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

You're talking about ZFS, but the post was talking about XFS. They are very different things.

The "too small to fail" memory-allocation rule

Posted Dec 27, 2014 13:09 UTC (Sat) by yoe (guest, #25743) [Link]

I know.

The article was about XFS, but the comment that I replied to was very much about ZFS, not XFS.

The "too small to fail" memory-allocation rule

Posted Jan 6, 2015 20:40 UTC (Tue) by Lennie (subscriber, #49641) [Link] (1 responses)

I thought ZFS was a heavy user of RAM mostly when deduplication is enabled.

The "too small to fail" memory-allocation rule

Posted Feb 28, 2015 13:59 UTC (Sat) by yoe (guest, #25743) [Link]

Dedup is a large memory eater, yes, but it doesn't actually change how much memory ZFS uses (it just gets terribly slow if there isn't enough RAM to hold the dedup table...)
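For a rough sense of scale (the ~320 bytes per unique block figure below is a commonly quoted rule of thumb, not an exact number), the in-core dedup-table footprint can be estimated like this:

    # Back-of-the-envelope estimate of ZFS dedup table (DDT) memory needs.
    # 320 bytes per unique block is a widely quoted rule of thumb, not exact;
    # real usage depends on recordsize, dedup ratio and the on-disk format.
    def ddt_ram_estimate(pool_bytes, avg_block_bytes=128 * 1024,
                         bytes_per_entry=320):
        unique_blocks = pool_bytes / avg_block_bytes
        return unique_blocks * bytes_per_entry

    # 4 TiB of unique data at 128 KiB blocks -> roughly 10 GiB of DDT,
    # which is exactly why dedup without enough RAM gets so painfully slow.
    print(ddt_ram_estimate(4 * 1024**4) / 1024**3, "GiB")

If that table doesn't fit in RAM, every write ends up fetching pieces of it from disk, which is where the slowness comes from.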

