Multiple Kopia instances snapshotting to the same repository simultaneously

Is it possible and safe to have multiple kopia instances snapshotting to the same repository at the same time?

I’m backing up a few different services on a server, but I don’t have a snapshotting filesystem, so I have to take each service down while kopia snapshot runs to guarantee that files aren’t changing mid-snapshot. Because I don’t want one service to hold up all the others, I run each service’s backup in its own systemd unit, so a service is brought back up as soon as its backup completes.
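For context, each unit essentially runs a wrapper like this (a minimal sketch; the service name and source path are placeholders, and it assumes the repository is already connected):

#!/bin/sh
# Hypothetical per-service backup wrapper invoked by a systemd unit.
# Stop the service so its files are quiescent during the snapshot.
systemctl stop myservice
# Restart the service even if the snapshot fails.
trap 'systemctl start myservice' EXIT
kopia snapshot create /srv/myservice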

I’ve seen the --max-parallel-snapshots option. I believe that would be used as:

kopia snapshot create /source1 /source2

but I am effectively splitting these out into separate kopia snapshot create commands.

Kopia supports taking multiple snapshots at the same time - no problem.

The purpose of this setting is to limit resource usage, for cases where your system is too slow, you do not have enough RAM, or your internet connection cannot cope with many snapshots at the same time.
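For example, to cap how many sources are snapshotted concurrently within one invocation, something like this should work (a sketch; I’m assuming the setting is applied through the global policy rather than as a flag on snapshot create):

kopia policy set --global --max-parallel-snapshots=1
kopia snapshot create /source1 /source2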


From the repo perspective this should be exactly as if multiple clients were accessing the repo at the same time, and that is also considered safe. From what I understand, the worst case is that you upload a blob that another process (or another client) recently uploaded as well but hasn’t yet had time to get into the indexes; the later client will figure this out at some point, and the duplicate gets reconciled or garbage-collected at the next maintenance run. So if many clients are OK, then many kopia instances on one machine should be equally safe.


Thanks for the feedback!

In my testing I stumbled on some issues. All of these processes are running on the same machine as the same user. It seems like Kopia is designed with the expectation that one repository will only have one connection from any given user@host combination at any given time.

By default, when you connect to a repository Kopia creates a repository.config file in the config directory ~/.config/kopia and sets up a cache folder in ~/.cache/kopia. This is fine until the first backup completes, at which point it seems to purge the repository.config and the cache folder. Because I’m using an identical configuration file and repository, the {unique-id} for the cache folder is the same for all of my backups. The removal of the cache folder can cause later backups to stall. I’m not actually sure whether they are stalled or just taking a very long time, but the backups go from minutes to hours before I eventually kill them.

I can work around the cache being deleted by other backup runs by specifying --cache-directory in kopia repository connect.
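For example (the repository type and paths are placeholders for my setup; a per-service cache directory keeps the jobs from clobbering each other):

kopia repository connect filesystem --path=/mnt/backups/repo --cache-directory=/var/cache/kopia-myservice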

As far as I can tell, there isn’t a good way to stop Kopia from creating a repository.config. Even if I use from-config --file= or from-config --token=, Kopia will still create a repository.config. If I’m running multiple kopia backup processes at the same time, one of them will finish first and delete the repository.config, causing backup runs that haven’t completed yet to throw an error when running kopia repository disconnect. This doesn’t seem to prevent the backup itself from running, and my plan is to trap the error in my script so that it doesn’t cause the monitoring systems to flag an issue with the backup.
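Something along these lines (a sketch of the trap, with placeholder paths):

kopia snapshot create /srv/myservice
# Another job may have deleted repository.config already, so a failed
# disconnect here is expected noise rather than a failed backup.
kopia repository disconnect || echo "ignoring disconnect error"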

I’m unsure how maintenance will be handled in this setup. According to the docs, an owner is specified for running maintenance on the repository as user@host, but each of the running backup jobs will have an identical user@host. Would this cause maintenance to run multiple times (potentially once per job)? It seems like that could cause issues.
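For reference, the owner can at least be inspected and pinned explicitly (the owner value below is a placeholder):

kopia maintenance info
kopia maintenance set --owner=backup@myserver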

As an alternative, I could

  • combine all the backups into a single kopia snapshot create /source1 /source2 ... command, or
  • force the snapshots to run serially (see the sketch after this list)

and accept that some/all services will be down until the backups complete.
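
A minimal sketch of the second option, using flock(1) so that the per-service units queue on a shared lock instead of snapshotting simultaneously (lock path, service name, and source path are placeholders; a service only goes down once its job actually holds the lock):

#!/bin/sh
# Serialize all backup jobs on a single lock file; each unit blocks
# here until the previous backup finishes.
exec flock /var/lock/kopia-backups.lock /bin/sh -c '
  systemctl stop myservice
  trap "systemctl start myservice" EXIT
  kopia snapshot create /srv/myservice
'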