After losing a lot of files because the snapshot was broken and I didn’t know it, I’ve been rebuilding from what I salvaged. I spent many hours taking a snapshot, then immediately ran content verify --full, which took many more hours.
It ran through the night and found one error:
[ … many successful lines … ]
ERROR error content bd24d7b5100ee1ad014705bf74e2721d is invalid: invalid checksum at p4d86ef908128223b30d9ac96a0c54de0-s3790696fb7e5769412f offset 17584579 length 5257669/5257669: decrypt: unable to decrypt content: cipher: message authentication failed
[… many more successful lines …]
Verified 451026 of 488431 contents (92.3%), 1 errors, remaining 1h15m30s, ETA 2025-04-09 05:16:39 PDT
Finished verifying 488431 contents, found 1 errors.
I haven’t tried “fix invalid-files” yet, because I’d like to avoid re-downloading the entire contents (~850GB, which takes hours). Since the error message gives me a hash, is there a way to fix just that one content? I’m still pretty confused about the various verify and fix commands.
Edit, to clarify: I had this huge backup in a Wasabi S3 bucket before. I restored it earlier this year and found that some of the files were broken. I deleted the broken files and kept the rest, and now I’m backing the survivors up to the same S3 bucket, so it’s possible this bad checksum is the same error from before. (Most of the snapshot data was probably already in the repo; I think it only uploaded a small portion of the 850GB.)
fix invalid-files will not download the entire contents - why would it? It merely checks which files reference contents that have vanished from the repo and rewrites the affected snapshots to resolve that. It might download some contents, since it will likely need to rewrite some blobs, but that should be it.
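For what it’s worth, here’s a minimal sketch of how I’d run it. I’m assuming current CLI behavior, where the fix subcommands are dry runs unless you pass --commit; double-check against kopia snapshot fix invalid-files --help on your version:

```shell
# Preview what would be rewritten (dry run, no changes made):
kopia snapshot fix invalid-files

# If the preview looks right, apply the fix for real:
kopia snapshot fix invalid-files --commit
```

The dry run is cheap, so it should also answer your question about how much it actually needs to download before you commit to anything.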
I vaguely remember your thread about this, but I wonder what you mean by “I deleted the broken files”. If you’ve got the repo available locally - and I seem to remember that you do - I’d fix that repo locally and then do a repository sync-to s3 to get the repo in the S3 bucket in order.
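Roughly like this - the path and bucket name are made up, so substitute your own, and check the sync-to flags on your kopia version before using --delete:

```shell
# Connect to the local copy of the repository (hypothetical path):
kopia repository connect filesystem --path /mnt/backup/kopia-repo

# Verify and repair locally, where reads are fast and free:
kopia content verify
kopia snapshot fix invalid-files --commit

# Mirror the repaired repo to the S3 bucket (hypothetical bucket name);
# --delete removes blobs in the destination that no longer exist locally:
kopia repository sync-to s3 --bucket my-kopia-bucket --delete
```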
I thought that, since the error was only discovered during a full content verify, the fix would need to verify the contents again to know where the error is.
Here’s a timeline of events to clarify my situation, because I didn’t do a good job explaining what’s up:
Used KopiaUI for years, backed up my projects to an S3 bucket but didn’t do any verifications lately
This snapshot is actually a Cryptomator volume, because it was previously backed up to Google drive and I wanted it encrypted
That Windows installation was lost, but I figured I had my files backed up to the S3 bucket, so I formatted the local drive and installed Linux on the machine
Restored the snapshot from Kopia and discovered that some files were now incomplete/corrupt/zero-length
That’s when I realized my mistake (KopiaUI doesn’t verify anything, yet?) and made my previous thread about it
I accepted that those files were probably unrecoverable and just deleted them
I created a new snapshot to the same S3 repo and let it run
Most of the data was unchanged, so a lot of it didn’t have to upload again, I think.
Ran content verify --full and found this one checksum error.
Now I want to fix the checksum on the S3 repo to make sure the data is all solid there.
I’d assume that repo maintenance would then take care of the unreferenced blob and remove it as well. The data within that blob should be useless to you anyway and probably ties into the files that got corrupted.
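If you don’t want to wait for the scheduled cycle, you can kick it off by hand - a sketch, assuming maintenance is enabled for your user on this repo:

```shell
# Trigger a full maintenance cycle; unreferenced contents get marked
# deleted and their blobs compacted away. Note that kopia keeps a
# safety margin, so the space may not be reclaimed immediately.
kopia maintenance run --full
```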