I’m running btrfs and using Timeshift to snapshot my system, including my @home subvolume. With Kopia, I’m backing up /run/timeshift/ to Google Drive via rclone. This works great and happens very fast every time; speed and efficiency are top notch.
This weekend I wanted to test my backup/restore system to make sure it works as intended. I had a mishap previously, where my backups were actually no help in restoring.
So, I made a bare metal image of my machine, booted a live system, and installed a fresh distro with Timeshift.
Then I fired up Kopia, connected to my repo, and began restoring /run/timeshift.
This was taking an incredibly long time to finish: on the order of several hundred hours. I canceled the task after it had restored 3% in 8 hours.
I’ve got something like 195GB to restore, and I was using --parallel=32. I see something called shallow restore, but the docs don’t say much that I could find, so I don’t know what that’s about. I didn’t use it, but maybe it would help?
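For reference, the restore I ran looked roughly like the sketch below. The snapshot ID and paths are placeholders, and the --shallow usage is just my reading of the docs, so treat the flags as assumptions to verify against `kopia restore --help`:

```shell
# Roughly what I ran (snapshot ID "k1234567890abcdef" is a placeholder):
kopia restore k1234567890abcdef /run/timeshift --parallel=32

# The shallow restore I was asking about: if I understand it correctly,
# --shallow=0 restores only placeholder entries at the top level instead
# of pulling down all 195GB, and you can deepen specific paths later.
kopia restore k1234567890abcdef /run/timeshift --shallow=0
```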
Basically, is there anything I can do to speed up the restore procedure?
Maybe change caching, or use some shallow restore, or something else?
As it is now, it’s almost more efficient to run a bare metal backup once a week. That’s a hassle because it isn’t automated, but at least I can pull it off over the course of about 10 hours. I tried uploading these 256GB images using Kopia, but haven’t noticed deduplication working on them (encrypted drive, so dedupe probably can’t help anyway).
One final note: since this is a btrfs Timeshift setup, I think it’s just using hard links all over the place, but Kopia sees the hard links at their full apparent size, which substantially increases the size of these backups.
EDIT:
Would it help to reduce the number of snapshots I keep in history? Or to run maintenance and shrink the overall repo size?
I’m just trying to figure out what would improve the speed at which I can get these files restored.
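The retention/maintenance idea above could be sketched as follows. This is just my guess at the right commands from the Kopia CLI docs, with /run/timeshift standing in for the actual source path, so the flag names should be double-checked:

```shell
# Keep fewer snapshots in history for this source (path is a placeholder):
kopia policy set /run/timeshift --keep-latest=5

# Expire snapshots that now fall outside the retention policy:
kopia snapshot expire --all --delete

# Then run a full maintenance pass to compact the repository:
kopia maintenance run --full
```

I don’t know whether a smaller repo actually speeds up restores, though; that’s part of what I’m asking.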