I would like to implement a 3-2-1 backup (sort of) using Amazon Glacier Deep Archive.
I know this is somewhat backwards, but my workflow looks like this:
My working directory is a hybrid of my laptop and Google Drive. Infrequently (maybe every six months or yearly) I'll mount an external spinning disk, mount Google Drive, and use Kopia to snapshot the Google Drive mount to that local disk.
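Concretely, the yearly run looks roughly like this (paths are placeholders, and the repository was created once up front with kopia repository create filesystem):

    kopia repository connect filesystem --path=/mnt/backup-disk/kopia-repo   # hypothetical repo location on the external disk
    kopia snapshot create /mnt/google-drive                                   # hypothetical Google Drive mount point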
I am concerned about drive failure, having experienced it in the past. Because of that, once the snapshot is taken I'd like all the blobs compacted; I'm guessing I'd want to run some kind of forced full maintenance on the repository.
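If I'm reading the docs right, that would be something along these lines:

    kopia maintenance run --full    # full maintenance: compacts packs and drops unreferenced blobs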
I only keep one snapshot at a time; I mainly use Kopia for its compression/deduplication.
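To keep it to a single snapshot I'm relying on a retention policy roughly like the one below (the exact flag values are just my guess at the right settings):

    kopia policy set --global --keep-latest 1 --keep-hourly 0 --keep-daily 0 --keep-weekly 0 --keep-monthly 0 --keep-annual 0
    kopia policy set --global --compression=zstd    # compression enabled globally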
Once the snapshot is stored on my local disk, I'll sync-to an S3 remote and then shovel that into Glacier Deep Archive.
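What I have in mind for that step is roughly the following (bucket name and credentials are placeholders, and I'd handle the Deep Archive part with a bucket lifecycle rule):

    kopia repository sync-to s3 --bucket=my-backup-bucket --access-key=AKIA... --secret-access-key=... --delete
    # lifecycle rule to transition everything into Deep Archive immediately:
    aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
      --lifecycle-configuration '{"Rules":[{"ID":"to-deep-archive","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":0,"StorageClass":"DEEP_ARCHIVE"}]}]}'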
From what I've read in the Amazon docs, upload and delete operations are free.
When next year rolls around, I’ll drop all contents from Glacier and perform a new sync-to.
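That is, something like this (bucket name is again a placeholder):

    aws s3 rm s3://my-backup-bucket --recursive    # wipe last year's copy
    kopia repository sync-to s3 --bucket=my-backup-bucket --access-key=AKIA... --secret-access-key=...    # fresh sync of the new snapshot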
Is this a reasonable way to use Deep Archive? I like the peace of mind of having an additional copy in the cloud, but I don't want to pay the comparatively higher cost of keeping it in hot storage like Wasabi or B2. This is long-term storage that I expect to never need, but I want it in case I do. If something happened to both my Google Drive and the local disk holding my snapshot, I could still recover my files from the deep archive.
Am I missing anything, or doing something stupid? Will this be more expensive than I think? (I'm estimating the cost at roughly $1/TB/month.)
Will a full maintenance run compact the repo as much as possible before I run sync-to?
Since I would restore the entire snapshot rather than individual files in the event of a problem, should I just tar up the entire repo and rclone it up to S3? That would mean restoring one large file instead of hundreds of thousands of tiny ones.
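In other words, roughly this instead of sync-to (names are placeholders, and I assume I'd point rclone at the Deep Archive storage class):

    tar -cf kopia-repo.tar -C /mnt/backup-disk kopia-repo                                   # single archive of the whole repo
    rclone copy kopia-repo.tar s3remote:my-backup-bucket --s3-storage-class DEEP_ARCHIVE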
I could use a little guidance here. Apologies in advance if any of these questions are ignorant.