Duply Guide: Backup to S3
This is a very quick and terse guide to backing up to S3 with Duply; I am posting it mostly as a reminder for myself.
Install librsync

This should be done using a package manager if available (see the note after these steps), but here are the steps for building from source:
wget "http://downloads.sourceforge.net/project/librsync/librsync/0.9.7/librsync-0.9.7.tar.gz" tar zxf librsync-0.9.7.tar.gz cd librsync-0.9.7/ ./configure --enable-shared make && make install
Install Boto

Boto is the underlying Python library that Duplicity uses to communicate with S3.
pip install boto
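To confirm the install worked, a quick import check is enough (nothing Duplicity-specific here):

# Exits silently if boto is importable, errors otherwise
python -c "import boto"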
wget "wget http://code.launchpad.net/duplicity/0.6-series/0.6.22/+download/duplicity-0.6.22.tar.gz" tar zxf duplicity-0.6.22.tar.gz cd duplicity-0.6.22/ python setup.py install
Note: the librsync headers and shared libraries must be findable by this installer. This is only an issue if librsync was installed in an arbitrary location (e.g. your home directory). If needed, export the INCLUDE_DIRS and related environment variables in your .bashrc file.
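A minimal sketch, assuming librsync was installed under $HOME/local (the prefix and the exact variable set are illustrative; adjust to wherever you installed it):

# Tell the build where the librsync headers and libraries live
export INCLUDE_DIRS="$HOME/local/include"
export CFLAGS="-I$HOME/local/include"
export LDFLAGS="-L$HOME/local/lib"
# Let the loader find the shared library at run time
export LD_LIBRARY_PATH="$HOME/local/lib:$LD_LIBRARY_PATH"

Afterwards, duplicity --version should run without import errors.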
wget "http://downloads.sourceforge.net/project/ftplicity/duply%20%28simple%20duplicity%29/1.5.x/duply_1.5.11.tgz" tar zxf duply_1.5.11.tgz mkdir -p $HOME/bin mv duply_1.5.11/duply $HOME/bin
This creates a bin directory in your home directory and moves the duply script into it. You can of course move it anywhere you want; however, I am going to assume it is on your $PATH.
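If $HOME/bin is not already on your path, a one-line addition to .bashrc takes care of it:

# Make the duply script reachable from anywhere
export PATH="$HOME/bin:$PATH"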
Create Duply Profile
duply myprofile create
Edit the new profile config:

vim ~/.duply/myprofile/conf

Change TARGET (around line 69) to match the following template:
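A sketch of the TARGET line, assuming Duplicity's s3:// URL scheme with the credentials inline; each placeholder is explained below:

TARGET='s3://access_key_id:secret_access_key@region_host/bucket_name/path'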
access_key_id - your AWS access key ID
secret_access_key - your AWS secret access key
region_host - the hostname of the S3 region endpoint, e.g. s3.amazonaws.com for US East
bucket_name - the name of the bucket to create/use
path - the path in the bucket that will store the files
Also change SOURCE (around line 76) to the directory or file to be backed up.
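Put together, the relevant lines of ~/.duply/myprofile/conf might look like this (all values here are made up):

# Back up /home/me/data to the my-backups bucket in US East
TARGET='s3://AKIAEXAMPLEKEY:examplesecretkey@s3.amazonaws.com/my-backups/server1'
SOURCE='/home/me/data'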
Run Backup

duply myprofile backup
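The first run should produce a full backup; subsequent runs are incremental. A few other duply commands worth knowing, assuming the profile is named myprofile as above:

# Show which backup sets exist in the bucket
duply myprofile status
# Restore the entire latest backup to a local directory
duply myprofile restore /tmp/restored
# Fetch a single file or subdirectory from the backup
duply myprofile fetch path/inside/backup /tmp/file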