Slow incremental snapshot of remote sftp drive to local repository


I would like to use Kopia to back up files (<400 GB) from a remote computer to an external hard drive.

I got an initial snapshot by mounting the remote directory to my computer, but it took days. I figured that was fine since incremental backups would be faster, but a week later it's taking a very long time to get another snapshot.

Here's the process I used the first time:
1. Mounted the remote SSH directory to my local computer (using a method I don't remember)
2. Snapshotted the drive (took several days)
3. Took another snapshot (didn't take very long)
4. Unmounted the remote directory
5. Disconnected from the repository
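For reference, that workflow might look roughly like this on the command line. The paths and hostnames are made up, and the sshfs mount is an assumption, since I don't remember the original mount method:

```shell
# Hypothetical paths/hostnames -- adjust to your setup.
# Mount the remote directory locally (assumed sshfs; original method unknown):
sshfs user@remote-host:/data /mnt/remote

# Connect to the repository on the external drive and snapshot the mount:
kopia repository connect filesystem --path /mnt/external/kopia-repo
kopia snapshot create /mnt/remote

# Clean up:
fusermount -u /mnt/remote
kopia repository disconnect
```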

Second time:
1. Connected to the repository
2. Accidentally created an empty snapshot by trying to run an action to mount the drive without enabling actions in the repo configuration (didn't realize that setting was lost when I disconnected)
3. Deleted the empty snapshot
4. Mounted the remote directory using sshfs (definitely a different method than last time, since I had to install it)
5. Started the snapshot

And now it's 18 hours later, even though the files on the drive are mostly the same. Also, Kopia reports zero cached files and zero cached bytes.

Did unmounting and remounting the remote drive (with a different method), or disconnecting from the repository, make things slower? Is there a better way to back up a remote drive to a local one?

Well, for starters, in a case like this I'd always run a Kopia Repository Server (KRS) and run the Kopia client on the remote machine itself. That speeds things up a lot, since you save all the round trips for cache and index files. Kopia is more of a push system than a pull system.
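A minimal sketch of that setup, assuming the repository lives on your external drive. Usernames, addresses, paths, and TLS file names are placeholders; check `kopia server start --help` on your version for the exact flags:

```shell
# On the LOCAL machine (where the external drive is attached):
kopia repository connect filesystem --path /mnt/external/kopia-repo
kopia server user add backup-user@remote-host
kopia server start --address 0.0.0.0:51515 \
  --tls-generate-cert --tls-cert-file ~/kopia.cert --tls-key-file ~/kopia.key

# On the REMOTE machine (the data source), push snapshots to that server:
kopia repository connect server --url https://local-host:51515 \
  --server-cert-fingerprint <fingerprint-printed-when-the-server-started>
kopia snapshot create /data
```

This way the remote client reads its own files locally and only pushes deduplicated blocks over the network.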

Mounting a remote filesystem and snapshotting it locally really is cumbersome. If you do need to go down that route, at least make sure you get the lowest possible latency to the remote storage; I doubt that sshfs is well suited for that.
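If you stay with sshfs anyway, a few mount options can reduce per-file round trips. This is a hedged suggestion rather than a benchmark; the host and paths are placeholders, and availability of the options depends on your sshfs/ssh versions (see `man sshfs`):

```shell
# Hypothetical mount -- these options trade cache consistency for fewer round trips.
# Compression=no: ssh-level compression often costs more than it saves on fast links.
# Ciphers=aes128-ctr: a cheaper cipher, if both ends support it.
# kernel_cache / auto_cache: let the kernel cache file attributes and pages.
sshfs user@remote-host:/data /mnt/remote \
  -o Compression=no -o Ciphers=aes128-ctr \
  -o kernel_cache -o auto_cache
```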

I don't know whether Kopia works the same way, but restic (a similar backup solution) relies heavily on inodes to detect changes. If a file's inode changes (which is usually the case after re-mounting), the file has to be split and re-hashed again, which is slow.
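You can see the inode effect with plain shell and GNU `stat` (throwaway temp files, not tied to any backup tool): a recreated file gets a new inode even when its bytes are identical, so an inode-based change detector would treat it as brand new and re-read it.

```shell
# Recreating a file gives it a new inode even when the bytes are identical.
tmpdir=$(mktemp -d)
echo "same content" > "$tmpdir/a"
cp "$tmpdir/a" "$tmpdir/b"          # byte-identical copy, but a new inode
ino_a=$(stat -c %i "$tmpdir/a")
ino_b=$(stat -c %i "$tmpdir/b")
[ "$ino_a" != "$ino_b" ] && echo "inodes differ"
rm -r "$tmpdir"
```

A network mount like sshfs synthesizes its inode numbers, so they are not guaranteed to be stable across remounts, which would trigger exactly this re-hashing.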