[Ocfs2-devel] [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview
Nitin Gupta
ngupta at vflare.org
Fri Jun 4 02:36:54 PDT 2010
On 06/03/2010 09:13 PM, Dan Magenheimer wrote:
>> On 06/03/2010 10:23 AM, Andreas Dilger wrote:
>>> On 2010-06-02, at 20:46, Nitin Gupta wrote:
>>
>>> I was thinking it would be quite clever to do compression in, say,
>>> 64kB or 128kB chunks in a mapping (to get decent compression) and
>>> then write these compressed chunks directly from the page cache
>>> to disk in btrfs and/or a revived compressed ext4.
>>
>> Batching of pages to get a good compression ratio seems doable.
>
> Is there evidence that batching a set of random individual 4K
> pages will have a significantly better compression ratio than
> compressing the pages separately? I certainly understand that
> if the pages are from the same file, compression is likely to
> be better, but pages evicted from the page cache (which is
> the source for all cleancache_puts) are likely to be quite a
> bit more random than that, aren't they?
>
Batching pages from random files may not be very effective, but it
would be interesting to collect some data on this; a quick userspace
experiment like the one sketched below would do. Still, per-inode
batching of pages seems doable, and that should help us get around
this problem.
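
For what it's worth, here is a minimal userspace sketch of such an
experiment (my own illustration, not part of this series): it uses
zlib to compare compressing sixteen 4K pages one at a time against
compressing the same pages as a single 64K chunk. The sample file
name, the page count, and the deflate_size() helper are arbitrary
choices for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SIZE 4096
#define NPAGES    16	/* 16 x 4K = one 64K batch */

/* Return the deflated size of src[0..len), or 0 on failure. */
static size_t deflate_size(const unsigned char *src, size_t len)
{
	uLongf dst_len = compressBound(len);
	unsigned char *dst = malloc(dst_len);
	size_t ret = 0;

	if (dst && compress(dst, &dst_len, src, len) == Z_OK)
		ret = dst_len;
	free(dst);
	return ret;
}

int main(int argc, char **argv)
{
	unsigned char buf[NPAGES * PAGE_SIZE];
	size_t per_page = 0, batched;
	FILE *f = fopen(argc > 1 ? argv[1] : "sample.dat", "rb");
	int i;

	if (!f || fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
		perror("read sample data");
		return 1;
	}
	fclose(f);

	/* Compress each 4K page separately, as cleancache does today. */
	for (i = 0; i < NPAGES; i++)
		per_page += deflate_size(buf + i * PAGE_SIZE, PAGE_SIZE);

	/* Compress the same pages as one contiguous 64K chunk. */
	batched = deflate_size(buf, sizeof(buf));

	printf("per-page total: %zu bytes, batched: %zu bytes\n",
	       per_page, batched);
	return 0;
}

Build with "gcc bench.c -lz" and feed it files that resemble real
page-cache contents; the gap between the two numbers shows how much
of the win, if any, comes purely from the larger compression window.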
Thanks,
Nitin