[ale] Ext4 adoption anyone?

Pat Regan thehead at patshead.com
Thu Jan 22 05:03:37 EST 2009


Michael B. Trausch wrote:
> The impression that I get from reading the high-level overview is
> that it will be a true versioning filesystem where you can take any
> file back in time, at least, that's what I think when I read about
> the trees being used to organize the blocks of the file the way it
> does.  I'd hope that it means that you can treat your filesystem as a
> sort of primitive, yet global, version control system.  I expect that
> it would need some special-purpose utilities to do that, too, but
> learning them would be useful.

ZFS, Tux3, and Btrfs all seem to use copy-on-write.  Automatic
versioning would be trivial to implement on top of any of them.  The
trouble would be deciding what to keep and what to throw away.  Do
you want to keep every revision of a log file every time a line is
flushed?  I imagine not, but which revisions do you keep?
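Just to make the "which revisions" question concrete, here is a toy
thin-out policy sketched in Python.  The function name and the tiers
are entirely made up (keep everything from the last hour, one snapshot
per day for a week, one per ISO week for a month); it is not how any of
these filesystems actually decide:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(stamps, now, dailies=7, weeklies=4):
    """Hypothetical retention policy: return the subset of snapshot
    timestamps worth keeping; everything else is a deletion candidate."""
    keep = set()
    days_seen, weeks_seen = set(), set()
    for ts in sorted(stamps, reverse=True):   # walk newest first
        age = now - ts
        if age <= timedelta(hours=1):
            keep.add(ts)                      # hourly tier: keep everything recent
            continue
        if age <= timedelta(days=dailies):    # daily tier: newest snapshot per day
            if ts.date() not in days_seen:
                days_seen.add(ts.date())
                keep.add(ts)
            continue
        week = ts.isocalendar()[:2]           # (ISO year, ISO week)
        if age <= timedelta(weeks=weeklies) and week not in weeks_seen:
            weeks_seen.add(week)              # weekly tier: newest per week
            keep.add(ts)
    return keep
```

Anything versioned automatically at flush time would generate thousands
of candidates a day, so some policy like this has to run constantly.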

I'd be pretty happy if the filesystem kept at least one copy around
after delete, at least for a little while.

I also noticed that btrfs has been merged into one of the 2.6.29 release
candidates.  It sounds like the on-disk format has been stabilized.  I
might get to test this guy out on my home directory sooner than I thought.

> That's kind of the reason that I rsnapshot every couple of hours.  :)

rdiff-backup has usually been my friend in the past.  He's pretty CPU
intensive, but the individual snapshots are about as frugal as you can
hope for.
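That frugality comes from keeping the newest copy whole and storing
reverse deltas back to the older versions.  Here's a minimal sketch of
the idea using Python's standard difflib; the class and method names
are mine, not anything from rdiff-backup itself:

```python
import difflib

class ReverseDeltaStore:
    """Toy model of reverse-delta storage: the latest version is kept
    in full, and each older version is reachable via a stored delta."""

    def __init__(self, lines):
        self.latest = lines
        self.deltas = []  # deltas[i] reconstructs version i (oldest first)

    def update(self, new_lines):
        # Record a delta between the outgoing and incoming versions,
        # then replace the full copy with the newest data.
        self.deltas.append(list(difflib.ndiff(self.latest, new_lines)))
        self.latest = new_lines

    def version(self, i):
        # difflib.restore(delta, 1) recovers the "old" side of a delta.
        return list(difflib.restore(self.deltas[i], 1))
```

Restores of the current version are free; only walking back in time
costs anything, which matches how cheap those daily /etc snapshots are
when the files rarely change.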

I just peeked, and I have a daily backup of my /etc directory on my old,
old web server.  The backups run from August of 2004 until today.  The
total size of the backups is 66 MB; /etc itself is about 5 MB.  It is
a horrible example because the files rarely change, but I was terribly
surprised at how old they were getting.

> I've used FUSE on and off for various things for a while; the only 
> plugin that I have found to be utterly unreliable (though more
> because of the typical configuration of a remote machine) is the
> sshfs plugin. It's great if you control both machines and don't get
> disconnected for inactivity, but when you do, it's a pain.

I haven't had any major FUSE problems yet.  I always have sshfs (via
afuse) and zip-fuse mounts up and running.

> That said, I am actually kind of surprised that more filesystem 
> development isn't happening that way.  I think it'd be interesting to
>  actually move to a model where most filesystem code resides outside
> of the kernel, but only because I think that is probably better since
> FUSE plugins can run on any platform that supports FUSE.  I like the
> idea of being able to use a single filesystem "driver" on multiple
> UNIX-like systems, ensuring compatibility between them.  Today, it's
> still a pain to read media from FreeBSD on Linux, and vice
> versa---FreeBSD supports ext2 only, last I checked, and Linux still
> makes you do some manual tweaking to mount UFS media from any system
> that uses a variant of that filesystem format.  *shrugs*

I rarely share a disk between machines, unless it is FAT32.

My fear of FUSE for my home directory, at least with ZFS, is speed and
memory consumption.  zfs-fuse did a pretty good job of pushing me into
swap when I 'only' had 2 GB of RAM in my laptop :).

> That actually reminds me that I need to get a BD burner so that I can
>  have an easier time of doing a whole-filesystem-data backup.  DVD is
>  just not big enough...

Full filesystem backups are tedious, slow, and use a ton of media.  My
personal systems are pretty well segregated so that I (and my backup
scripts!) can easily tell the difference between stuff that needs to be
backed up regularly and stuff that needs to be backed up every 6+ months.

The sad part is that all the data I truly need to keep, aside from
photos, is only a few dozen MB compressed...  And now my flash drive is
big enough to hold all the photos I've ever taken.

> Ahh.  I thought you meant RCS, the classic version control system.  I
>  use Bazaar (bzr) myself; I used Subversion for a long time before
> that, but I don't think I could look back even for a very large purse
> of money.  I can't imagine _not_ using bzr for VCS tasks any more.

I can't even imagine using RCS.  I try to forget that it ever even
existed :).  I won't give up my distributed VCS, ever.  I don't think I
could manage.

> I use git, but only for pulling things which already use it.  I like
> it as well, though I like some of the ways that bzr does things
> better since the branch model is much more distributed, and you don't
> *have* to work with all of the branches in a given tree when pulling
> it.  My understanding of git and other DVCS tools is that they are
> repositories housing a collection of branches, whereas with bzr you
> just get the branch.  You can have multiple branches in the same
> directory and pool storage between them if you'd like, but it's not
> required.  Not sure how Darcs works at its lower levels, but reading
> on WP leads me to think it is probably closer to git's model than it
> is bzr's.

As far as I can tell, Darcs is pretty unique.  I'm a little bit behind
on the state of the competition, though.  A quick look at my directory
of repositories says I've been using it since at least April 2004.  I
probably stopped caring about other systems a few years later :).

What I do know is that sometimes I talk to friends about how they use
their revision control systems.  From what I can gather, Darcs does a
better job of conflict resolution on merging.  Darcs will attempt to
apply patches in different orders, and it does a very good job of
tracking patch dependencies.
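The reordering trick is usually described as patch commutation.  Here's
a drastically simplified sketch in Python, limited to insert-only
patches (real Darcs handles far more patch types; the function names
and the patch representation here are my own invention):

```python
def apply_patch(doc, patch):
    """Apply an insert-only patch: (pos, lines) inserts before index pos."""
    pos, lines = patch
    return doc[:pos] + lines + doc[pos:]

def commute(p, q):
    """Swap two sequential insert patches (q is expressed in the context
    after p).  Returns (q2, p2) producing the same document, or None if
    q overlaps p's insertion and therefore depends on it."""
    (i, a), (j, b) = p, q
    if j <= i:                      # q lands entirely before p's insertion
        return (j, b), (i + len(b), a)
    if j >= i + len(a):             # q lands entirely after p's insertion
        return (j - len(a), b), (i, a)
    return None                     # overlapping: q depends on p
```

When `commute` returns None, the second patch genuinely depends on the
first, which is the dependency tracking I was talking about; everywhere
else, either order gives the same tree.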

Darcs is likely the slowest, but it seems that it does more work for me.




