Andreas Gruenbacher has posted a patch to the LKML (Linux Kernel Mailing List) implementing an interesting idea: more precisely, the "cache that shrinks under heavy load" concept. It is a dynamically sized cache that reduces its size under high memory pressure, i.e. it shrinks when little physical memory (RAM) is available. This new feature runs somewhat counter to the established, traditional practice of caching everything possible and then, under heavy load, paging that cache out to disk (swapout).
The other camp argues that the sensible approach is to keep whatever we cache in RAM and page as little as possible out to disk, because constant swapping degrades disk access and overall performance. In their view, the best solution under pressure is not to page the existing cache out to disk, but to evict entries from the cache held in physical RAM and reduce the cache's size. This yields a dynamically sized cache.
Viewed from a slightly more technical angle, the solution looks roughly like this:
When system memory comes under heavy "pressure" (i.e. the load gets high), the Linux kswapd calls shrink_caches() from try_to_free_pages(), with a priority level running from 6 (the default, lowest priority) down to 1 (the highest). In effect, kswapd invokes the "cache shrinking" function instead of paging the cache out.
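In simplified C, the mechanism looks roughly like this. The sketch below is a minimal, compilable model of the 2.4-era reclaim loop; shrink_caches() is stubbed out here, and the real kernel code differs in detail:

    #include <stdio.h>

    #define DEF_PRIORITY     6   /* weakest pressure; the scan starts here  */
    #define SWAP_CLUSTER_MAX 32  /* pages we would like to free in one pass */

    /* Stand-in for the kernel's shrink_caches(): each cache is asked to
     * give back pages; the lower the priority value, the harder it is
     * squeezed. Here we just pretend each pass frees 8 pages. */
    static int shrink_caches(int priority, int nr_pages)
    {
        printf("shrink_caches(priority=%d): %d pages still wanted\n",
               priority, nr_pages);
        return nr_pages - 8;
    }

    /* The priority walk of try_to_free_pages(): 6, 5, ..., 1, getting
     * more aggressive until enough pages have been freed. */
    static int try_to_free_pages(void)
    {
        int priority = DEF_PRIORITY;
        int nr_pages = SWAP_CLUSTER_MAX;

        do {
            nr_pages = shrink_caches(priority, nr_pages);
            if (nr_pages <= 0)
                return 1;            /* freed enough, stop shrinking */
        } while (--priority);

        return 0;                    /* still under pressure */
    }

    int main(void) { return !try_to_free_pages(); }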
Those interested in the details should read the full description:

From: Joshua MacDonald
To: linux-kernel
Subject: Re: [PATCH] Caches that shrink automatically
Date: Mon, 5 Aug 2002 11:44:17 +0400
On Sun, Aug 04, 2002 at 11:17:03PM +0400, Hans Reiser wrote:
> Linus Torvalds wrote:
> >True in theory, but I doubt you will see it very much in practice.
> >
> >Most of the time when you want to free dentries, it is because you have a
> >_ton_ of them.
> >
> >The fact that some will look cold even if they aren't should not matter
> >that much statistically.
> >
> >Yah, it's a guess. We can test it.
>
> Josh tested it. He posted on it. I'll have him find his original post
> and repost tomorrow, but summarized in brief, the current dcache
> shrinking/management approach was quite inefficient. Each active dcache
> entry kept a whole lot of dead ones around.
It seems the rmap+slab-LRU patches work to address this problem, but even with those patches applied Ed Tomlinson's test shows a ~50% utilization of dcache slab pages under memory pressure. My original test on 2.4.18 showed a ~25% utilization under memory pressure, so the slab-LRU patches definitely show an improvement. In practice and theory, we see that the slab caches are inefficient at using memory under pressure, so probably there is more work to be done. Ed's posted results:
http://groups.google.com/groups?selm=01041309360100.32757%40oscar
Here is the text of the message that Hans refers to:
http://groups.google.com/groups?selm=20020128091338.D6578%40helen.CS.Berkeley.EDU
--
When memory pressure becomes high, the Linux kswapd begins calling shrink_caches() from try_to_free_pages() with an integer priority from 6 (the default, lowest priority) to 1 (high priority). Looking specifically at the dcache, this results in calls to shrink_dcache_memory() that attempt to free a fraction (1/priority) of the inactive dcache entries. This ultimately leads to prune_dcache() scanning the dcache in least-recently-used order, attempting to call kmem_cache_free() on some number of dcache entries.
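For illustration, here is a minimal compilable model of the shrink_dcache_memory() path just described, showing the 1/priority rule. This is an editorial sketch, not the kernel's actual code; prune_dcache() is reduced to a counter:

    #include <stdio.h>

    static int nr_unused = 10170;   /* unused dentries on the LRU list */

    /* Model of prune_dcache(): the real kernel walks the LRU list and
     * calls kmem_cache_free() on each pruned dentry; here we only
     * count them. */
    static void prune_dcache(int count)
    {
        nr_unused -= count;
        printf("pruned %5d dentries, %5d left\n", count, nr_unused);
    }

    /* The 1/priority rule described above: priority 6 frees one sixth
     * of the unused entries, priority 1 frees all of them. */
    static void shrink_dcache_memory(int priority)
    {
        int count = priority ? nr_unused / priority : 0;
        prune_dcache(count);
    }

    int main(void)
    {
        int priority;
        for (priority = 6; priority >= 1; priority--)
            shrink_dcache_memory(priority);   /* rising pressure */
        return 0;
    }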
Dcache entries are allocated from the kmem_slab_cache, which manages objects in page-size "slabs", but the kmem_slab_cache cannot free a page until every object in a slab becomes unused. The problem is that
freeing dcache entries in LRU-order is effectively freeing entries from randomly-selected slabs, and therefore applying shrink_caches() pressure to the dcache has an undesired result. In the attempt to reduce its size, the dcache must free objects from random slabs in order to actually release full pages. The result is that under high memory pressure the dcache utilization drops dramatically. The prune_dcache() mechanism doesn't just reduce the page utilization as desired, it reduces the intra-page utilization, which is bad.
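The effect is easy to model. In the hypothetical sketch below (an editorial illustration, not kernel code), a page holds ~30 of the 128-byte objects (matching the slabinfo figures quoted further down) and can only be returned to the page allocator once all of them are free; freeing ~70% of the objects at random still leaves most pages pinned:

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGES         339
    #define OBJS_PER_PAGE 30    /* ~30 x 128-byte dentries per 4KB page */

    int main(void)
    {
        int in_use[PAGES];
        int i, freed_pages = 0, live = 0;

        for (i = 0; i < PAGES; i++)
            in_use[i] = OBJS_PER_PAGE;      /* fully populated dcache */

        /* Free ~70% of all objects from randomly chosen pages, the way
         * LRU-order pruning scatters frees across many slabs. */
        srand(42);
        for (i = 0; i < PAGES * OBJS_PER_PAGE * 7 / 10; i++) {
            int p = rand() % PAGES;
            if (in_use[p] > 0)
                in_use[p]--;
        }

        for (i = 0; i < PAGES; i++) {
            if (in_use[i] == 0)
                freed_pages++;              /* only empty pages return */
            else
                live += in_use[i];
        }

        printf("pages freed: %d of %d, utilization of the rest: %d%%\n",
               freed_pages, PAGES,
               100 * live / ((PAGES - freed_pages) * OBJS_PER_PAGE));
        return 0;
    }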
In order to measure this effect (via /proc/slabinfo) I first populated a large dcache and then ran a memory-hog to force swapping to occur. The dcache utilization drops to between 20-35%. For example, before running the memory-hog my dcache reports:
dentry_cache 10170 10170 128 339 339 1 : 252 126
(i.e., 10170 active dentry objects, 10170 available dentry objects @ 128 bytes each, 339 pages with at least one object, and 339 allocated pages, an approximately 1.4MB dcache)
While running the memory-hog program to initiate swapping, the dcache stands at:
dentry_cache 693 3150 128 105 105 1 : 252 126
Meaning, the randomly-applied cache pressure was successful at freeing 234 (= 339-105) pages, leaving a 430KB dcache, but at the same time it reduced the cache utilization to 22%, meaning that although it was able to free nearly 1MB of space, 335KB are now wasted as a result of the high memory-pressure condition.
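Those figures follow directly from the two slabinfo lines; as a quick editorial check of the arithmetic (128-byte objects, 4096-byte pages):

    #include <stdio.h>

    int main(void)
    {
        const int obj_size = 128, page_size = 4096;
        const int pages_before = 339, pages_after = 105;
        const int active_after = 693, total_after = 3150;

        /* 339 - 105 = 234 pages released back to the system */
        printf("pages freed: %d\n", pages_before - pages_after);

        /* 105 * 4096 = 430080 bytes, the ~430KB dcache quoted above */
        printf("dcache size: %d bytes\n", pages_after * page_size);

        /* 693 / 3150 = 22% object utilization */
        printf("utilization: %d%%\n", 100 * active_after / total_after);

        /* 430080 - 693*128 = 341376 bytes, roughly the ~335KB quoted */
        printf("wasted:      %d bytes\n",
               pages_after * page_size - active_after * obj_size);
        return 0;
    }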
So, it would seem that the dcache and kmem_slab_cache memory allocator could benefit from a way to shrink the dcache in a less random way.
Any thoughts??