I am running Kopia via Docker on Unraid and am backing up to a single HDD via the filesystem repository. I would love to sync that existing repo with one on GDrive. Questions:
Do I first need to create a new repo on Drive?
Is the sync-to command a one-time command I run in the Docker container, or something I need to automate via cron?
Is there any recommendation on using the native GDrive support over rsync? And for rsync, I failed to find documentation on a quick read-over.
I don’t do this anymore (after my GDrive account got capped, making a backup of a backup suddenly didn’t feel as important), but I used to use rclone directly to copy the source (your local Kopia repository) to the destination (a Google Drive “remote”). No Kopia involved whatsoever, as neither Kopia’s rclone nor its GDrive implementation seems particularly reliable or performant.
You first need to install rclone and set up your “remote”, which is just rclone-speak for a connection to your cloud storage. Then you use the sync command to send your data to that remote. Don’t know why they call it “sync” and not “mirror”, but it mirrors the source to the destination.
Once you have a remote set up, you can use it in the sync command like this: rclone sync <path-to-your-local-kopia-repo> <remote-name>:<destination-path-on-remote-storage>
--progress ## gives you info about the transfer while it happens
--create-empty-src-dirs ## copies folders to the destination even if they’re empty on the source
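Put together, it looks roughly like this, assuming you name the remote gdrive and keep the repo in a kopia-repo folder on Drive (the remote name, folder, and local path are just placeholders):
rclone config   ## interactive, one-time setup of the gdrive remote
rclone sync /mnt/user/backups/kopia-repo gdrive:kopia-repo --progress --create-empty-src-dirs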
As a check I can use rclone to connect to the gdrive repo and see all my snapshots, download files, etc…
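For that check, something along these lines works (same placeholder names as above):
rclone lsd gdrive:kopia-repo   ## list the top-level directories of the copied repo
rclone ls gdrive:kopia-repo    ## list the individual files to confirm they arrived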
======
Other thoughts…
If you’ve gotten this far, another method would be mounting the Google Drive remote you’ve already set up and accessing it via Kopia just like you would a local filesystem for backups, or using Kopia’s own sync-to command to copy the files. In my limited testing, though, mounting Google Drive storage locally has choked a bit on complicated file manipulation like this, where there are thousands of files and directories. The only method like that which I have found works flawlessly with Kopia is Google’s native “Google Drive” app on a Windows machine. I’ve backed up many a TB via that app in the past without issue.
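For reference, the mount route would look roughly like this (the remote name, folder, and mount point are placeholders, and as I said, I can’t vouch for how well it holds up):
rclone mount gdrive:kopia-repo /mnt/gdrive-kopia --daemon    ## expose the Drive remote as a local folder
kopia repository create filesystem --path /mnt/gdrive-kopia  ## or 'connect' if a repo already exists there
## ...or keep your local repo and just mirror it to the mount:
kopia repository sync-to filesystem --path /mnt/gdrive-kopia --delete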
I changed my plans and bought a cheap Synology, which I will place far enough away that a fire etc. should not be able to destroy both the server and the Synology. The NAS will provide SMB space and I will use this as secondary storage. The question remains whether to use sync-to or rsync. What’s the benefit of either? And I suppose I will not have to “create” a secondary repository on the NAS first, since sync-to will do this for me?
I have a similar setup with a home server and an office server. They’re connected over ZeroTier and the whole sync-to portion works pretty flawlessly.
What tool to use? I think it’ll just be your personal preference, since you’re literally just copying files. You just want to make sure you use a tool that isn’t gonna mess with the files in any way. Some people are rsync pros and wouldn’t dare use something else. For me, I use Kopia’s built-in sync-to command in a script that runs every other day. sync-to would appear to be the safer option, as you are way less likely to mess something up, and I’ve never had issues with it.
I personally use the following flags. See (hopefully accurate) descriptions in-line:
--list-parallelism 10 ## more=faster reading of what’s on the destination. This was a sweet spot for me as past 10 didn’t really help so I stayed there
--parallel 4 ## concurrent transfer processes. more=faster, but like above, there are diminishing returns and possibly worse performance if you go too high. Test and find your sweet spot.
--delete ## if a file is deleted on the source, it will be deleted on the destination; a “mirror” transfer mode.
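Put together, that’s the same sync-to call you’ll see in the full script further down; with a placeholder destination path it looks like:
kopia repository sync-to filesystem --path /mnt/your-secondary-destination --list-parallelism 10 --parallel 4 --delete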
Correct, it’s a one-time file copy. What I do is schedule it to run at regular intervals. I’m on Debian, so I can use cron or systemd for this. On Unraid you would typically use the User Scripts plugin (I believe, it’s been a while), but as with anything related to Unraid, there are 100 semi-normal, semi-documented ways to get that job done.
In any case, you’d write a script, which is just a text file (typically with a .sh extension) that lists the commands you want it to run, and you use User Scripts (or whatever else) to tell the system when to run it.
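If you went the plain cron route instead, the schedule is a single crontab line; the script path here is just a made-up example:
0 3 */2 * * /boot/config/scripts/kopia-sync.sh >> /var/log/kopia-sync.log 2>&1   ## run at 03:00 roughly every other day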
Here’s a mini version of my script. Most importantly, you’re connecting to your repo, then telling kopia where to copy it to:
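Stripped down, it boils down to these two commands from the full script below (paths and password are placeholders):
kopia repository connect filesystem --path "/path/to/your/local/kopia/repo" --password=<yourpass>
kopia repository sync-to filesystem --path "/path/to/your/secondary/destination" --delete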
And here's the whole script in case you're interested:
#!/bin/bash
##local-client backup to offsite via kopia 'sync-to'
##sync-to cycle: source & dest check > connect > verify > maintenance > sync-to > disconnect
##define source & dest
source_check_path="/mnt/backups/client-data/kopia_client_backups_05/zz_pre-backup-check.txt"
dest_check_path="/mnt/EYE_KOPIA-OFFSITE-CLIENT-BACKUP_01/zz_pre-backup-check.txt"
##check for source
if [ ! -f "$source_check_path" ]; then
echo "Error: The test file '$source_check_path' does not exist."
exit 1
fi
##check for dest
if [ ! -f "$dest_check_path" ]; then
echo "Error: The test file '$dest_check_path' does not exist."
exit 1
fi
echo "source & dest are confirmed to exist. continuing with the backup..."
##connect
/usr/bin/kopia repository connect filesystem \
--path "/mnt/pvehost/backups/client-data/kopia_client_backups_05" \
--password=<yourpass>
##pause...
sleep 5
##verify
/usr/bin/kopia content verify --log-dir=/var/log/kopia/client2office --log-level=warning --file-log-level=debug
##maintenance
/usr/bin/kopia maintenance run --full --log-dir=/var/log/kopia/client2office --log-level=warning --file-log-level=info
##sync-to
/usr/bin/kopia repository sync-to filesystem --path "/mnt/EYE_KOPIA-OFFSITE-CLIENT-BACKUP_01" --list-parallelism 10 --parallel 4 --delete --log-dir=/var/log/kopia/client2office --log-level=warning --file-log-level=info
##disconnect
/usr/bin/kopia repository disconnect
##end##
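To use something like this, save it as e.g. kopia-sync.sh (the name is just an example), make it executable with chmod +x, and point User Scripts or cron at it.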