[Btrfs-devel] 3 thoughts about important and outstanding features
myLC at gmx.net
Mon Jan 21 02:18:32 PST 2008
'lo there, =)
having a DVB-S receiver running Linux (PPC) I found myself
wondering how to delete data from the middle of a large file
(stripping a recording of ads, for example - or messing
around in a virtual disk file, etc.). Currently, the common
way of doing this seems to be by copying the file (leaving a
part behind) and then deleting the original. Of course, on a
large file (say 12 GB or more) this can take an eternity;
also you can run into trouble if the filesystem is nearly
full.
Is it just me, or does that make no sense at all?
On a block-oriented filesystem, operations like this
should only take an instant.
So basically I'm looking for functions to:
- insert a chunk into a file
- delete a chunk from a file
- move a chunk from one file into another
All of the above would be very useful when dealing with
large data, such as DVB-recordings (i.e.: video).
This is also interesting for large databases. Currently they
implement "their own" filesystems on top of other
filesystems - which would then become superfluous.
Seeing a large file as a chain of blocks, performing such
operations at block granularity should already be easy to
accomplish. However, for full support it should be possible
to insert "sparse blocks" (holding less content than usual)
within a chain of blocks (including the first).
Is this already possible?
Would it be difficult to implement?
Think about it: instead of copying gigabytes with the
drive's heads clicking around - taking minutes to hours,
such operations could be performed in (milli)seconds.
I think that there should indeed be a standard (POSIX?) for
providing such functionality. (One call could be to
determine if the filesystem supports those operations fast -
it could return a version for instance, 0 meaning that the
operations, although provided, will be slow.)
What is needed in the first place however, is a filesystem
supporting those operations (via fcntl or so) - making it
instantly the first choice for VDRs running Linux. (The other
filesystems would surely follow soon after, at which point
there would be a chance to establish a standard.)
There is a HUGE difference in performance when it comes to
harddrives and their outer versus their inner tracks.
A "self-optimizing" filesystem could make use of this by
allowing the user/administrator to specify preferences about
where to put certain files.
For instance, a particular group of files might best be kept
together; big files important to the system's performance
could declare a preference for the outer zones, while less
critical data could be shoved towards the drive's hub...
Currently this is mostly handled by partitioning. We all
know how inflexible and wasteful this can be.
Since you already have online fsck, online optimization
(including defragmentation) would be the next, and certainly
much appreciated, level.
Not sure if you already have this: ZFS' outstanding feature?
Copy on write links. ;-) VERY powerful.
Looking forward to some enlightenment... :-)
LC (myLC at gmx.net)