[ale] Again with the filesystem recovery SOLUTION?

Dow Hurst dhurst at kennesaw.edu
Mon Jan 27 23:51:07 EST 2003


Since all of you are using Linux in production, I love hearing examples like the one below on neat ways to help yourself.  Since I don't have a truly production-level Linux machine, I admit to very careless handling of my personal data on my home Linux box.  Many Linux users may act this way either through bad habits learned on Windows (expecting to lose data and then attempt undeletions) or through a lack of understanding of the Linux/Unix way of sysadmin practices.  Just taking the time to make ONE backup on a CD every six months can save you a lot of headache down the road.  I just dump the important stuff on a CD before I try out a new distro or do a rebuild.
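For what it's worth, my "backup" is nothing fancier than burning an ISO of the important directories, roughly like this (the paths and the cdrecord device numbers are just examples; yours will differ):

    # Build an ISO image of the stuff worth keeping (Rock Ridge + Joliet
    # so file names survive), then burn it to CD.
    mkisofs -r -J -o /tmp/backup.iso /home/dow/important
    cdrecord -v speed=4 dev=0,0,0 /tmp/backup.iso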

Now here is my real question:
Why not use CVS and the rsync idea together?  Wouldn't that give you the redundancy and source code change tracking that you need?  I have hardly used CVS at all and have never used rsync, so I am looking for some wisdom from the wise.

I love the rsync/cron job idea.  It sounds like a nice way to mirror your home directory among several machines.  How could you combine that with the CVS home directory tracking idea presented many, many posts ago by someone smarter than I am?  What would you do?  An initial checkout from the main repository; as you work, the rsync/cron job backs you up to two or three different drives on staggered schedules; at the end of the day you check in your changes, and the rsync/cron job could duplicate the repository to another machine.  It sounds complicated but is actually trivial scripting in practice.  Is it worth it from a production standpoint, or are there better ways to do it?
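Here is a rough sketch of what I have in mind; the module name, machine names, and paths are made up purely for illustration, and I haven't actually tried any of this:

    # One-time setup: check out a home-directory module from the main
    # repository (hypothetical module "dow-home" on a hypothetical
    # server "cvsbox").
    export CVSROOT=:ext:dow@cvsbox:/var/lib/cvs
    cvs checkout -d ~/work dow-home

    # Crontab entries: mirror the working copy onto two other drives at
    # staggered intervals.  Leaving out --delete means an accidental rm
    # doesn't wipe the backups right away.
    */10 * * * *   rsync -a /home/dow/work/ /mnt/backup1/work/
    */30 * * * *   rsync -a /home/dow/work/ /mnt/backup2/work/

    # End of the day, by hand: check the changes in.
    cd ~/work && cvs commit -m "end of day checkpoint"

    # Nightly crontab entry: duplicate the repository itself to another
    # machine over ssh.
    0 23 * * *   rsync -a -e ssh /var/lib/cvs/ otherbox:/var/lib/cvs-mirror/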
Dow


>>> jkinney at localnetsolutions.com 01/27/03 10:31 PM >>>
On Mon, 2003-01-27 at 20:31, Geoffrey wrote:

> 
> I agree with you, Jonathan; too many folks don't take the correct
> measures ahead of time.  As for my recent exploits with recovery tools,
> I've had one in the recent past: I deleted a source code file after a
> day's worth of work on it, so my backup would do me no good.
> 
I have a "HOT" directory for work-in-progress files. This is where I am
actively writing code. It is backed up automatically by a 2-minute cron
job running rsync that puts the synced files onto a separate hard drive.
It is set not to delete the backup copy when the source file goes
missing. If I goof, I have a backup that is at most two minutes old. If
I goof and overwrite an existing file, I'm hosed. I could probably
archive a copy of any file before a change bigger than some size X, but
I haven't looked into it.
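The crontab entry is nothing more exotic than something like this (the
paths are placeholders, not my actual layout):

    # Every 2 minutes, mirror the HOT directory onto the second drive.
    # Archive mode (-a) keeps permissions and timestamps; leaving out
    # --delete means a file removed from HOT stays in the backup copy.
    */2 * * * *   rsync -a /home/jim/HOT/ /mnt/drive2/HOT-backup/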
> > 
> > 1) "Magic undeletes"
Useful for workstations, where the users have a greater opportunity to
muck things up.

> > 
> > 2) True "bare metal restore" capabilities via boot disk or write protected
> > boot partition.
> 
> I like it.  Quite valuable.

Totally invaluable. Hard drives die, so bare metal recovery is a
mandatory function. There are some really good ones out there that can
make a boot CD that pulls from a tape backup.
> 
> > 
> > 3) Better forensics tools.
> 
> I want them even more.  I think these would be more valuable.

Great to have not just when a file is accidentally deleted, but also
when one is deleted deliberately by a PO'ed user, or by some cracker who
just weaseled their way in through a hole that Bugtraq doesn't know
about yet.
-- 
James P. Kinney III   \Changing the mobile computing world/
President and CEO      \          one Linux user         /
Local Net Solutions,LLC \           at a time.          /
770-493-8244             \.___________________________./

GPG ID: 829C6CA7 James P. Kinney III (M.S. Physics) <jkinney at localnetsolutions.com>
Fingerprint = 3C9E 6366 54FC A3FE BA4D 0659 6190 ADC3 829C 6CA7 


