PICList Thread
'[PIC] More CVS for PIC development - snapshot back'
2006\01\18@083833 by Rolf


This thread has taken a bit of a divergence, but I have something to add
for those interested.

For those of you with Linux (UNIX) systems, and now for those with a
reason to get one... ;-)

rsync is your friend. The UNIX concepts of symbolic and hard linking
make incremental backups very space efficient. For example, I have
33Gig of photographs, 65Meg of CVS repo, 1.2Gig of E-Mail, and about 2
Gig of "other stuff" that I keep on my Linux box. I have a second
physical hard disk in the machine that I use for "snapshots". Every
hour, every day, every week, and every month I take a complete snapshot
of all that data.... I keep 4 iterations of each backup cycle.

In other words, I can go back 1 hour, 2 hours, 3 hours, 4 hours, 1 day,
2 days, etc... 1 week, etc., 1 month, etc. ... 4 months, and get a copy
of any given file at any of those time-points.

I have about 36 Gig of "valuable" data, and it takes only about 36 Gig
of space to keep all the snapshots of it... how? rsync!
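The trick is easiest to see in miniature. Here is a little sketch (all
paths invented for the demonstration) that makes a hard-linked "snapshot"
of a directory with GNU cp and shows both names sharing one inode:

```shell
# Demonstration (invented paths): a hard-linked snapshot of a directory
# costs almost nothing, because the files are not copied, only linked.
rm -rf /tmp/linkdemo
mkdir -p /tmp/linkdemo/live
echo "hello" > /tmp/linkdemo/live/file.txt

# cp -a preserves attributes; -l makes hard links instead of copies
cp -al /tmp/linkdemo/live /tmp/linkdemo/hourly.0

# both names now point at the same inode, so the link count is 2
stat -c '%h' /tmp/linkdemo/live/file.txt
```

Run that and the link count printed is 2: one file on disk, two
directory entries naming it.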

A backup of a directory looks something like:
drwxr-xr-x  18 root root  4096 Jan 18 08:00 .
drwxr-xr-x   9 root root  4096 Jan 13 19:02 ..
drwxr-xr-x   3  509 users 4096 Jan 18 04:15 daily.0
drwxr-xr-x   3  509 users 4096 Jan 17 04:15 daily.1
drwxr-xr-x   3  509 users 4096 Jan 16 04:15 daily.2
drwxr-xr-x   3  509 users 4096 Jan 15 04:15 daily.3
drwxr-xr-x   3  509 users 4096 Jan 18 08:00 hourly.0
drwxr-xr-x   3  509 users 4096 Jan 18 07:00 hourly.1
drwxr-xr-x   3  509 users 4096 Jan 18 06:00 hourly.2
drwxr-xr-x   3  509 users 4096 Jan 18 05:00 hourly.3
lrwxrwxrwx   1 root root    23 Jan 18 08:00 latestsnap ->
drwxr-xr-x   3  509 users 4096 Jan  1 04:45 monthly.0
drwxr-xr-x   3  509 users 4096 Dec  1 04:45 monthly.1
drwxr-xr-x   3  509 users 4096 Nov  1 04:45 monthly.2
drwxr-xr-x   3  509 users 4096 Oct  1 04:45 monthly.3
drwxr-xr-x   3  509 users 4096 Jan 14 04:30 weekly.0
drwxr-xr-x   3  509 users 4096 Jan  7 04:30 weekly.1
drwxr-xr-x   3  509 users 4096 Dec 31 04:30 weekly.2
drwxr-xr-x   3  509 users 4096 Dec 24 04:30 weekly.3

The way the process works is that it takes the latest backup used,
creates a complete directory structure that mimics that backup, then
creates *hard* links to each of the backed up files. This takes very
little additional space. It then uses rsync to synchronize the "live"
data to this backup data. rsync is instructed to delete any files in the
backup, and replace them with the live file if they are not the same. In
UNIX, a "delete" means that the file is "unlinked", so the backup file
only gets deleted from the current backup set, not all of them. The
deleted 'old' version is then replaced with a copy of the live version.
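That unlink behaviour is the heart of the whole scheme, and it is easy
to verify by hand (the file names here are invented):

```shell
# "Deleting" a hard-linked file only removes one name; the data stays
# reachable through every other link, i.e. through the other snapshots.
rm -f /tmp/unlinkdemo_live /tmp/unlinkdemo_snap
echo "v1" > /tmp/unlinkdemo_live
ln /tmp/unlinkdemo_live /tmp/unlinkdemo_snap   # second name, same inode
rm /tmp/unlinkdemo_live                        # unlink the "live" name
cat /tmp/unlinkdemo_snap                       # the snapshot still reads v1
```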

In essence, the space required to keep the complete backup is only
dependent on how much the data changes... or, in simple terms, it is the
space of the live system + all changes made in the last 6 months, plus a
few bytes per directory * 16.
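You can sanity-check that claim with du, which counts each inode only
once no matter how many snapshots link to it (everything below is an
invented example, not my real data):

```shell
# A 1 MiB file plus a hard-linked snapshot of it should cost about
# 1 MiB on disk, not 2 MiB.
rm -rf /tmp/dudemo
mkdir -p /tmp/dudemo/live
dd if=/dev/zero of=/tmp/dudemo/live/big bs=1024 count=1024 2>/dev/null
cp -al /tmp/dudemo/live /tmp/dudemo/daily.0
du -sk /tmp/dudemo    # roughly 1024 KiB plus a little directory overhead
```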

Attached is the script I use on my machine to do the work. It is run
from cron. Credit is due mostly to some guy on the web whose script I
adapted for my purposes. See

I also take (somewhat) regular backups of the critical data as it is
added.... like the photos, and so forth. Fortunately I have not had to
go to off-machine backups, but the snapshots are a great help.
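For the curious, the cron side is just one entry per period. These lines
are a made-up illustration (the script path and the times are not from
my actual crontab) in /etc/crontab format, i.e. minute, hour,
day-of-month, month, day-of-week, user, command:

```
0  *  * * *   root   /usr/local/bin/snapshot hourly  /home
15 4  * * *   root   /usr/local/bin/snapshot daily   /home
30 4  * * 0   root   /usr/local/bin/snapshot weekly  /home
45 4  1 * *   root   /usr/local/bin/snapshot monthly /home
```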

As for my comment on the backup of the live CVS repo, well, I am the
only developer, and I have not had a problem with it, even though I may
well have checked in files during one of the automated snapshots. My
thinking suggests that even if I have a corrupt snapshot, it will only
be for a very few files in the repo, and I could always go back to the
previous version of the Repo.


Bob Axtell wrote:


# ----------------------------------------------------------------------
# adapted from mikes handy rotating-filesystem-snapshot utility
# ----------------------------------------------------------------------
# this needs to be a lot more general, but the basic idea is it makes
# rotating backup-snapshots of /home whenever called
# ----------------------------------------------------------------------

unset PATH        # suggestion from H. Milz: avoid accidental use of $PATH

# ------------- system commands used by this script --------------------
# NOTE: reconstructed; the archive dropped these assignments. Full paths
# are required because $PATH is unset above. Adjust for your system.
ECHO=/bin/echo
ID=/usr/bin/id
MOUNT=/bin/mount
RM=/bin/rm
MV=/bin/mv
CP=/bin/cp
LN=/bin/ln
MKDIR=/bin/mkdir
TOUCH=/bin/touch
BASENAME=/usr/bin/basename
RSYNC=/usr/bin/rsync

# ------------- command-line arguments ---------------------------------
PERIOD=$1        # hourly, daily, weekly, or monthly
SOURCE=$2        # directory to back up, e.g. /home

if [ -z "$PERIOD" -o -z "$SOURCE" ] ; then
       $ECHO 'snapshot: usage: snapshot <period> <sourcedir>' ;
       exit 1;
fi ;

# ------------- file locations -----------------------------------------
# NOTE: also reconstructed; treat these values as placeholders.
SNAPSHOT_RW=/snapshot                  # mount point of the snapshot disk
ARCH=`$BASENAME $SOURCE`               # archive name, e.g. "home"
EXCLUDES=$SNAPSHOT_RW/$ARCH.excludes   # rsync exclude patterns
# ------------- the script itself --------------------------------------

# make sure we're running as root
# if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root.  Exiting..."; exit; } fi

if [ -f $SNAPSHOT_RW/$ARCH/.snapshot.lock ] ; then
       $ECHO It appears that another snapshot is being taken of this archive.
       $ECHO If no other snapshot is running, remove the file $SNAPSHOT_RW/$ARCH/.snapshot.lock
       exit 1
fi ;

$MKDIR -p $SNAPSHOT_RW/$ARCH           # make sure the archive dir exists
$ECHO $$ > $SNAPSHOT_RW/$ARCH/.snapshot.lock

# attempt to remount the RW mount point as RW; else abort
# $MOUNT -o remount,rw $SNAPSHOT_RW ;
# if (( $? )); then
#        $ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
#        exit 1;
# fi ;
if [ ! -f $EXCLUDES ] ; then
       $ECHO "snapshot: Creating excludes file $EXCLUDES"
       $TOUCH $EXCLUDES
fi ;

# rotating snapshots of $SOURCE (fixme: this should be more general)
# step 1: If there was not a "latest" snapshot, then create one.
if [ ! -d $SNAPSHOT_RW/$ARCH/latestsnap ] ; then
       if [ ! -d $SNAPSHOT_RW/$ARCH/$PERIOD.0 ] ; then
               $MKDIR -p $SNAPSHOT_RW/$ARCH/$PERIOD.0
       fi ;
       $LN -s $SNAPSHOT_RW/$ARCH/$PERIOD.0 $SNAPSHOT_RW/$ARCH/latestsnap
fi ;

if [ ! -h $SNAPSHOT_RW/$ARCH/latestsnap ] ; then
       $ECHO snapshot: $SNAPSHOT_RW/$ARCH/latestsnap is not a symbolic link. Exiting.
       exit 1
fi ;

# step 2: make a hard-link-only (except for dirs) copy of the latest snapshot, put it in storage
$CP -alH $SNAPSHOT_RW/$ARCH/latestsnap $SNAPSHOT_RW/$ARCH/in_transit
$RM $SNAPSHOT_RW/$ARCH/latestsnap

# step 3: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/$ARCH/$PERIOD.3 ] ; then                        \
       $RM -rf $SNAPSHOT_RW/$ARCH/$PERIOD.3 ;                                \
fi ;

# step 4: shift the snapshot(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/$ARCH/$PERIOD.2 ] ; then
       $MV $SNAPSHOT_RW/$ARCH/$PERIOD.2 $SNAPSHOT_RW/$ARCH/$PERIOD.3 ;
fi ;
if [ -d $SNAPSHOT_RW/$ARCH/$PERIOD.1 ] ; then
       $MV $SNAPSHOT_RW/$ARCH/$PERIOD.1 $SNAPSHOT_RW/$ARCH/$PERIOD.2 ;
fi ;
if [ -d $SNAPSHOT_RW/$ARCH/$PERIOD.0 ] ; then
       $MV $SNAPSHOT_RW/$ARCH/$PERIOD.0 $SNAPSHOT_RW/$ARCH/$PERIOD.1 ;
fi ;

# step 5: move the hard-link-only (except for dirs) copy of the latest snapshot
# into place as the new $PERIOD.0
$MV $SNAPSHOT_RW/$ARCH/in_transit $SNAPSHOT_RW/$ARCH/$PERIOD.0

# step 6: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first.  If it were not so, this would copy over the other
# snapshot(s) too!)
$RSYNC                                                                \
       -va --delete --delete-excluded                                \
       --exclude-from="$EXCLUDES"                                \
       $SOURCE/ $SNAPSHOT_RW/$ARCH/$PERIOD.0/ ;

# step 7: update the mtime of $PERIOD.0 to reflect the snapshot time
$TOUCH $SNAPSHOT_RW/$ARCH/$PERIOD.0

# step 8: make the latest snapshot the latest...
$LN -s $SNAPSHOT_RW/$ARCH/$PERIOD.0 $SNAPSHOT_RW/$ARCH/latestsnap

# and that's it for home.
$RM $SNAPSHOT_RW/$ARCH/.snapshot.lock


# now remount the RW snapshot mountpoint as readonly

$MOUNT -o remount,ro $SNAPSHOT_RW ;
if (( $? )); then
       $ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
fi ;
