[Btrfs-devel] Re: 3 thoughts about important and outstanding features - cont.

myLC at gmx.net
Fri Jan 25 01:59:05 PST 2008

Chris Mason wrote:

 > I think you're saying that a copy-on-write duplicate is not
 > sufficient for video editing. The thing to keep in mind is
 > that with btrfs COW is done on an extent basis, and extents
 > can be small (1MB is small for these files). So, you can do
 > the video editing part via cow, slice out the parts you
 > don't want, and you'll get that storage back. It is just a
 > matter of setting the max extent size on the file.

Maybe I'm missing a point here - the COW copy is still
nothing without the original, right?
If so, then how fast is it when it comes to throwing away
the original? Provided that works reasonably fast, you
would have the inserting-into/removing-from-a-file problem
solved (1 MB is indeed becoming small, even for embedded
systems and certainly for hard drives :-).
You stated before that it wouldn't be "that fast". I'm
somewhat curious about that - it should still be a lot
faster than copying the whole (say 10 GB) thing, or even
half of it...
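To make the question concrete, here is a toy refcounting sketch of extent-based COW - my own illustration, not btrfs's actual data structures or on-disk format. The point it shows: a clone only takes extra references, cutting a range out of the clone drops references, and removing the original is cheap because any extent nobody references anymore simply comes back as free space.

```python
# Toy model of extent-based COW sharing -- NOT btrfs's real
# structures, just an illustration of the refcounting idea.

class ExtentPool:
    def __init__(self):
        self.refs = {}          # extent id -> reference count

    def alloc(self, eid):
        self.refs[eid] = self.refs.get(eid, 0) + 1

    def drop(self, eid):
        self.refs[eid] -= 1
        if self.refs[eid] == 0:  # no file references it: space is freed
            del self.refs[eid]

class File:
    def __init__(self, pool, extents):
        self.pool, self.extents = pool, list(extents)
        for e in self.extents:
            pool.alloc(e)

    def cow_clone(self):
        # A clone just takes extra references: O(number of extents),
        # no data is copied.
        return File(self.pool, self.extents)

    def cut(self, start, end):
        # Slice extents [start:end) out of this file only.
        for e in self.extents[start:end]:
            self.pool.drop(e)
        del self.extents[start:end]

    def remove(self):
        self.cut(0, len(self.extents))

pool = ExtentPool()
original = File(pool, range(10))  # a "10 GB" file as ten 1 GB extents
edit = original.cow_clone()       # instant: shares all ten extents
edit.cut(3, 6)                    # drop extents 3..5 from the edit only
original.remove()                 # cheap: just decrements refcounts
print(sorted(pool.refs))          # -> [0, 1, 2, 6, 7, 8, 9]
```

After the last step only the seven extents the edited file still uses remain allocated; the three cut from the middle were freed the moment the original went away.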

 > I don't think you can conclude moving the hibernation file
 > is the cause of the performance problem.

Trust me, it is (and I oughta kick myself for that one;-).

 > XP probably frees as much file cache as it can before
 > suspend to disk, which means that when you resume you have
 > to seek all over the drive to load files back in...

Nope. XP is rather primitive in that matter (and not only
in that one). The size of the hibernation file always
matches the amount of installed RAM. The memory simply gets
written into the file and read back upon awakening. There
is a simple progress bar, which now shows that the whole
operation takes a lot longer than before. After the memory
has been read back, XP reinitializes a few devices and such
(1-3 seconds)...
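A rough sanity check of why the file's position matters so much (the RAM size and throughput figures below are assumptions for illustration, not measurements): since the whole RAM image is written and read sequentially, the I/O time is simply size divided by the sequential throughput of whatever zone the file lives in.

```python
# Back-of-the-envelope hibernation timing. All numbers are
# assumed/illustrative, not measurements of any real drive.
ram_gb = 1.0        # a typical XP-era machine (assumption)
outer_mb_s = 60.0   # assumed sequential throughput, outer zone
inner_mb_s = 30.0   # assumed sequential throughput, inner zone

t_outer = ram_gb * 1024 / outer_mb_s
t_inner = ram_gb * 1024 / inner_mb_s
print(f"outer zone: {t_outer:.0f} s, inner zone: {t_inner:.0f} s")
```

With these assumed numbers the suspend/resume I/O takes twice as long once the hibernation file migrates from the fast outer tracks to the slow inner ones - which matches the "a lot longer than before" progress bar.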

By coincidence I have the interior of an old hard disk right
next to me (pinned to a wall). The platter measures about
9.5 cm in diameter at the outer rim and 2.5 cm at the inner.
How big the difference between inner and outer rim is, is
usually left up to the manufacturers (the smaller the inner
rim, the more they can fit on the disk without any marketing
problems - i.e.: the bigger the profits).
Now, with that old drive you can easily see that:
- the outer rim measures about 9.5*pi = ~30cm in circumference
- the inner: ~8cm

Since the speed of rotation remains constant, it is
relatively safe to conclude (and computer magazines testing
hard drives confirm it) that there is a HUGE difference in
performance between outer and inner tracks (how much of a
difference is, as previously mentioned, up to the
manufacturer).
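Plugging the measurements above into a quick calculation - assuming constant rotation speed and roughly constant bits per cm of track (zoned recording), so that sequential throughput scales with circumference:

```python
import math

# Diameters measured off the old platter above.
outer_d_cm, inner_d_cm = 9.5, 2.5

outer_circ = math.pi * outer_d_cm   # ~29.8 cm (the "~30cm" above)
inner_circ = math.pi * inner_d_cm   # ~7.9 cm (the "~8cm" above)

# At constant RPM and roughly constant linear bit density,
# sequential throughput scales with track circumference:
ratio = outer_circ / inner_circ
print(f"outer/inner throughput ratio: {ratio:.1f}x")  # -> 3.8x
```

So for that particular old platter the outermost tracks would move data almost four times as fast as the innermost ones - the actual ratio on a modern drive depends on how small a hub the manufacturer chose.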

Why not make use of that and boost performance?
XFS already does a good job of keeping the files within a
directory together. This way, for instance, the include
files get read in one chunk (read-ahead) with a bit of luck.
That feature, combined with a preference indicator
(priority) for files, would already be very powerful. It
would enable you to keep files that are relevant to the
system's performance, and that tend to get read together,
grouped and placed in the faster zones (and much more). As
previously stated, this could well double performance at
very little cost.
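One way such a priority hint could steer allocation - a toy sketch of the proposal only, not an existing XFS or btrfs interface; the function name, block layout, and priority values are all made up here:

```python
# Toy allocator for the "priority -> faster zone" idea. Low block
# numbers stand for the fast outer tracks. Hypothetical sketch,
# not any real filesystem's allocation policy.

def place_by_priority(files, total_blocks):
    """files: list of (name, size_in_blocks, priority).
    Higher priority lands at lower block numbers (outer zone)."""
    layout, cursor = {}, 0
    for name, size, _prio in sorted(files, key=lambda f: -f[2]):
        assert cursor + size <= total_blocks, "disk full"
        layout[name] = (cursor, cursor + size)   # [start, end)
        cursor += size
    return layout

files = [
    ("movie.avi", 8000, 0),   # bulk data: cold, inner tracks are fine
    ("libc.so",     40, 9),   # read at every boot: hot
    ("stdio.h",      1, 5),   # include file that gets read with others
]
layout = place_by_priority(files, total_blocks=10000)
print(layout["libc.so"], layout["stdio.h"], layout["movie.avi"])
# -> (0, 40) (40, 41) (41, 8041)
```

The hot, frequently-co-read files end up contiguous on the outermost (fastest) tracks, while the bulk data is pushed inward - which is exactly the grouping-plus-zoning effect described above.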

Yes, I have slaughtered hard drives - I admit it.  X-|
But I was young and needed no money...
                                            LC (myLC at gmx.net)
