Stream single source to multiple destinations

Hi there! First up, Kopia seems super cool. It's easy to use, and I'm loving the easy mount/restore directly from cloud repos.

I do have one question though: is there a way to efficiently back up a single source to multiple remotes in one go? My use case is a server with a directory I want to back up nightly, both to B2 offsite and to a local USB drive.

From my understanding, I can currently do one of two things:
1.) Have a single repo and use what is detailed here: Synchronization | Kopia. The downside, as I understand it, is that I would do a series of reads, upload the data to the cloud, and then download that same data again to mirror it to the other device. I understand that ideally the diffs between snapshots are small, but even so this seems like unnecessary network traffic.
2.) Have two separate repos and disconnect/reconnect when doing the nightly backup. This is a bit of a faff and also seems like a waste of reads, as I would be scanning the whole datastore looking for changes twice. (Rough CLI sketches of both options below.)
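
For concreteness, this is roughly what I mean by the two options. The paths, bucket name, and credentials below are just placeholders, and I haven't double-checked every flag against the docs, so treat it as a sketch:

```
# Option 1: single B2 repo, then mirror it to the USB drive with sync-to.
# My concern: sync-to has to read blobs back out of B2 to write them locally.
kopia repository connect b2 --bucket=my-backups --key-id=... --key=...
kopia snapshot create /srv/data
kopia repository sync-to filesystem --path=/mnt/usb/kopia-repo

# Option 2: two independent repos, reconnecting every night.
# Each snapshot run re-scans the source directory, so the scan happens twice.
kopia repository connect filesystem --path=/mnt/usb/kopia-repo
kopia snapshot create /srv/data
kopia repository disconnect

kopia repository connect b2 --bucket=my-backups --key-id=... --key=...
kopia snapshot create /srv/data
kopia repository disconnect
```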

Is there a way to just 'pipe' the snapshot to two locations? Should I instead actually make a local temporary copy of the snapshot and then rsync that to two locations?

Thanks!


Hi,

According to the 3-2-1 backup rule, you don't just "should", you must have a local backup.
You have to have 3 copies of each file:
1 is the live original content, 2 is a local backup copy, 3 is an offsite backup copy.
The 2nd copy (local backup) is the fastest way to restore files. The remote (offsite) copy exists only to cover cases such as fire/theft/armageddon, and it depends on the cloud + internet + good speed + time to restore (or, in the case of a huge backup, on the cloud provider's ability to send your backup to you by snail mail).

The 2nd copy might be an external hard drive, but better yet a dedicated backup server where you can set an append-only rule to prevent the backup from being infected by ransomware-encrypted files; in short, the backup should be immutable.

Also, you don't need rsync; the documentation you referenced describes kopia's "sync-to", which is pretty efficient since it uploads only blobs that don't exist at the remote destination.
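
For example, while connected to the B2 repo, mirroring it to the USB drive is a single command, and re-running it after each snapshot copies only the blobs the destination is missing (the path is just an example; check `kopia repository sync-to --help` for the exact flags on your version):

```
# --delete also removes destination blobs that no longer exist in the source repo
kopia repository sync-to filesystem --path=/mnt/usb/kopia-repo --delete
```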

Hi iBackup

Thanks for the answer, but unfortunately it is a non-answer. Firstly, I was asking about Kopia-specific commands, not general backup strategies.

Second, I already referenced the synchronization option and explained why I have concerns about it, which I still have.

Thanks,
Z


No.

No.

Yes. Not necessarily local (but, as I already said, it is highly recommended). Any repositories can be synced.


What did you end up doing @Zylatis? I'm having the same issue as you: both options seem to have problems supporting concurrent onsite and offsite repositories.

There's another issue with option 1, btw: if repo 1 goes down (becomes unavailable), repo 2 stops getting updates too, i.e. a single point of failure.


This is why I simply have 2 separate backup jobs/repositories running: one makes a local backup to another drive and one uploads to B2.

Depending on the amount of data you are backing up, and given Kopia's apparently pretty efficient scan for changed data, it should be pretty quick.

For example, in my setup, out of 30GB of total data it is backing up (currently) 9GB, and of that only ~50-100MB actually gets uploaded to the backup repositories per backup run (every 12 hours). That whole process takes just a couple of minutes, so a couple of minutes * 2 isn't that big of a deal.
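
One way to run the two jobs side by side without disconnecting/reconnecting is to give each repository its own config file via kopia's global --config-file option. The paths and bucket below are placeholders, and the repository passwords are assumed to come from somewhere like the KOPIA_PASSWORD environment variable:

```
# One-time setup: connect to each repo under its own config file
kopia --config-file=$HOME/.config/kopia/local.config \
  repository connect filesystem --path=/mnt/backup/kopia-repo
kopia --config-file=$HOME/.config/kopia/b2.config \
  repository connect b2 --bucket=my-backups --key-id=... --key=...

# Scheduled job (e.g. every 12 hours): snapshot the same source into both repos
kopia --config-file=$HOME/.config/kopia/local.config snapshot create /srv/data
kopia --config-file=$HOME/.config/kopia/b2.config snapshot create /srv/data
```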

If you do want to sync repos instead of running the backup twice, and you want to save on traffic, then do the local backup first and then sync it to whatever online repo you use.
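
Roughly, and again with placeholder paths and bucket:

```
# Snapshot into the local repo first (no network traffic for the scan/dedup),
# then push only the blobs B2 doesn't already have.
kopia repository connect filesystem --path=/mnt/backup/kopia-repo
kopia snapshot create /srv/data
kopia repository sync-to b2 --bucket=my-backups --key-id=... --key=...
```

That way the change scan and dedup happen against the fast local repo, and only the missing blobs go over the network.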

(Also, iBackup's explanation of the 3-2-1 guideline is incorrect. It's actually: at least 3 copies of your data, on at least 2 different media types, with at least 1 in a different physical location. There's no requirement to have a local copy or anything like that. See: Backup Strategies: Why the 3-2-1 Backup Strategy is the Best)
