I was backing up the repo and the disk was full, to the point of not being able to delete snapshots. Removing the few logs did not help. A quick Google search suggests restarting the repo from scratch, since there doesn’t seem to be a way for Kopia to handle this. Now I’m thinking I should create a placeholder file to reserve some space, so that if the issue repeats I can delete the placeholder and hopefully give Kopia enough room to delete snapshots and free up space to proceed. How much space should I reserve? 100 MB? 1 GB? And will Kopia resume gracefully once some space is freed after it runs out?
To be clear: to free up space immediately after snapshots are deleted, is a full maintenance run with `--safety=none` needed?
Unrelated questions:
There’s no straightforward way to tell how much space deleting a particular snapshot will free up? And it is fine to delete any snapshot (i.e. it will not affect the other snapshots or prevent them from being restored), right?
When passing paths to `snapshot create`, do all the paths to back up have to be passed at once? I was thinking of snapshotting certain paths first (e.g. prioritizing paths on disks that are less busy at the time), but `snapshot create <pathB>; snapshot create <pathA>` wouldn’t make sense here: if the policy allows a max of 3 snapshots, this would use up 2 of the 3 even though I consider the state of pathA and pathB together as a single backup. Only `snapshot create <pathB> <pathA>` would make sense, and perhaps even that doesn’t guarantee `<pathA>` is backed up after `<pathB>` finishes?
I think it is hard to answer your question precisely, as it probably depends on many variables: the size of your snapshots, maybe the file structure (big files vs. small files), etc. As a rule of thumb I always keep a 1 GiB placeholder file around for such situations, and it has always been enough.
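Something like this, assuming a filesystem repo; the path is just an example, not anything kopia mandates:

```shell
# Sketch: reserve ~1 GiB as an emergency release valve.
# PLACEHOLDER path is an example; put it on the same volume as the repo.
PLACEHOLDER="${PLACEHOLDER:-/tmp/kopia-placeholder.bin}"

# fallocate is instant where supported; dd is the portable fallback.
fallocate -l 1G "$PLACEHOLDER" 2>/dev/null \
  || dd if=/dev/zero of="$PLACEHOLDER" bs=1M count=1024 status=none

# When the repo disk fills up:
#   rm "$PLACEHOLDER"   # then retry the snapshot delete / maintenance that failed
```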
The best approach is to make sure you always have enough space. You know your data and what to expect. Before running a backup, check the available space and abandon the run if it is less than, say, 3x your typical snapshot delta size.
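A sketch of that pre-flight check; `EXPECTED_DELTA_MB` is a stand-in you would set to your own typical delta, and the actual `kopia snapshot create` call is left commented out:

```shell
# Abort the backup run when free space on the repo volume is below 3x the
# expected snapshot delta. Values here are placeholders, not recommendations.
REPO_DIR="${REPO_DIR:-/tmp}"
EXPECTED_DELTA_MB="${EXPECTED_DELTA_MB:-10}"

free_mb=$(df -Pm "$REPO_DIR" | awk 'NR==2 {print $4}')   # available MB on that volume
need_mb=$(( EXPECTED_DELTA_MB * 3 ))

if [ "$free_mb" -ge "$need_mb" ]; then
  echo "ok: ${free_mb} MB free (need ${need_mb} MB)"
  # kopia snapshot create /path/to/source   # safe to proceed
else
  echo "abort: only ${free_mb} MB free (need ${need_mb} MB)"
fi
```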
Correct. Mainly due to deduplication, it is hard to tell how much data is shared with other snapshots.
Yes. Deleting a snapshot does not impact data stored in other snapshots (see Q1); no shared data will be deleted. It also means that, in the worst case, no free space will be recovered.
Not really. Each path is its own snapshot source with its own retention, so you can have multiple sets of snapshots.
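To spell that out (paths here are examples, and the commands are printed rather than executed since they need a connected repository): retention applies per source, so snapshotting A and B separately does not eat into a shared budget of 3, and passing both paths in one run still records them as two sources.

```shell
# Sketch: per-source retention with two paths captured in one run.
plan=$(cat <<'EOF'
kopia policy set /pathA --keep-latest 3   # retention applies to this source only
kopia policy set /pathB --keep-latest 3   # ...and independently to this one
kopia snapshot create /pathA /pathB       # one run, two sources
EOF
)
echo "$plan"
```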
The issue of how to recover from a volume that became full has been discussed on this forum. If you ever need to do that, deleting snapshots won’t help at all, since snapshots are only virtual representations of the state of your source at a given time.
So you will have to turn your attention to recently written blobs, which hold the actual data, and manually delete some of those. Afterwards it will be a back and forth between `snapshot verify` and `snapshot repair`.
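To find candidates in a filesystem repo, something like this (the demo directory and file are fabricated so the sketch runs standalone; point `REPO_DIR` at your real repo and drop the `mkdir`/`touch` lines):

```shell
# Sketch: list the most recently written pack blobs (.f files), newest
# first, so you can pick deletion candidates by hand.
REPO_DIR="${REPO_DIR:-/tmp/demo-repo}"

mkdir -p "$REPO_DIR/q/ffe"                  # demo data only; skip on a real repo
touch "$REPO_DIR/q/ffe/example-pack.f"

recent=$(find "$REPO_DIR" -name '*.f' -printf '%T@ %p\n' | sort -rn | head -n 20)
echo "$recent"

# After deleting some of the newest packs, loop on:
#   kopia snapshot verify
# and repair/remove whatever it reports as broken.
```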
Sigh, started a new repo and hit the same thing. The source disk has 35 GB of free space; the target disk with the kopia repo is the exact same HDD size formatted (same filesystem and everything, to 1 MB of accuracy). I created a 1 GB placeholder file on the target disk for kopia. The first snapshot (i.e. the initial backup) was successful. Then I removed a ~12 GB file from the source disk and snapshotted again. I then deleted the oldest snapshot and ran maintenance with `--safety=none`, expecting to free up close to 12 GB, but it freed ~2 GB. A subsequent snapshot (without writing new files to the source disk) ran out of space.

I removed the 1 GB placeholder file and re-ran kopia: same thing. I also tried deleting the latest snapshot, keeping only the original, as well as removing files from the snapshot with `kopia snapshot fix remove-files --commit` followed by maintenance with `--safety=none`, but there’s no space to complete these actions without errors. At no point did the source disk have less than 35 GB free, yet for some reason the target disk needs more than 35 GB of additional space.
These are media files on 2 TB disks for archival storage. I realize 35 GB of free space might be too tight for some, but I never had a problem when I used rsync for mirroring (I’ve gotten down to 10 GB free, and I would think 25 GB more for kopia would be enough). I’ve set the policy to 2 snapshots since I am mirroring the disks and don’t really care about reverting to older snapshots, preferring more storage capacity. My intention in using backup software like kopia is that it can handle file renames, whereas rsync treats renamed files as new and resyncs them, which is inefficient. I was also thinking kopia’s block-level deduplication would result in less space used, but in the case above I still don’t understand how it requires at least 35 GB more space than the source disk, given that no new data was written to the source after the initial successful snapshot, nor the original problem of not having enough space for kopia to free up space.
I suppose I can try again with 100 GB or 200 GB free, but that’s a little too arbitrary for my liking, with seemingly no obvious reason why kopia uses so much additional space even though I’ve only deleted files (so at most the repo should stay roughly the same size, if I understand correctly).
```
WARN unable to write diagnostics blob: unable to complete PutBlobInPath:/media/targetdisk/source_backup.kopia/_/log/20250926172233_f9cd_1758921753_1758921779_1_4dd039776a13a9de9f134227a7755045.f despite 10 retries: can’t write temporary file: write /media/targetdisk/source_backup.kopia//log/_20250926172233_f9cd_1758921753_1758921779_1_4dd039776a13a9de9f134227a7755045.f.tmp.d30b2ea878ec96a7: no space left on device
ERROR error flushing writer: error flushing contents: error writing pending content: error writing pack content: error writing pack: can’t save pack data blob qffea7470dcb15b06cf5b5121ce736b5f-s07ab5ba55116f692139: error writing pack file: unable to complete PutBlobInPath:/media/targetdisk/source_backup.kopia/q/ffe/a7470dcb15b06cf5b5121ce736b5f-s07ab5ba55116f692139.f despite 10 retries: can’t write temporary file: write /media/targetdisk/source_backup.kopia/q/ffe/a7470dcb15b06cf5b5121ce736b5f-s07ab5ba55116f692139.f.tmp.dbf783932018a4e3: no space left on device
```
I guess that’s rather impossible for anyone to guess. Kopia’s on-disk format doesn’t simply store complete files in the repo; it breaks them down into blobs, which are in turn saved in packs. So if you delete a 12 GB file, you are asking Kopia to re-arrange a lot of packs, which might cause this issue. Probably start with slightly more free space than the space occupied by the file you want to delete…
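A toy back-of-the-envelope (my numbers, not kopia’s) for why that re-arranging temporarily needs extra space: compaction copies the still-live chunks into fresh packs before the old packs can be deleted, so for a while both copies exist on disk.

```shell
# Toy model of compaction peak usage. All figures are made-up assumptions.
pack_mb=20        # assumed pack size
packs=600         # packs touched by the deleted file (~12 GB worth)
dead_pct=60       # share of each pack that became garbage after the delete

live_mb=$(( packs * pack_mb * (100 - dead_pct) / 100 ))
peak_mb=$(( packs * pack_mb + live_mb ))   # old packs + rewritten packs coexist

echo "after compaction: ${live_mb} MB"
echo "peak during compaction: ${peak_mb} MB"
```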