What do I do about “ERROR: failed to rewrite 8484 contents”? And isn’t there a way to get notified? What if I had not checked this info?
Why are the last runs of these sections so long ago? Are they not necessary or not scheduled? => cleanup-epoch-manager, cleanup-logs, full-delete-blobs and full-drop-deleted-content
Owner: root@nas
Quick Cycle:
scheduled: true
interval: 1h0m0s
next run: now
Full Cycle:
scheduled: true
interval: 24h0m0s
next run: 2023-09-22 06:39:33 CEST (in 15h8m37s)
Log Retention:
max count: 10000
max age of logs: 720h0m0s
max total size: 1.1 GB
Object Lock Extension: disabled
Recent Maintenance Runs:
snapshot-gc:
2023-09-21 06:39:46 CEST (5m37s) SUCCESS
2023-09-20 06:07:28 CEST (4m41s) SUCCESS
2023-09-19 06:07:03 CEST (3m22s) SUCCESS
2023-09-18 06:06:35 CEST (2m8s) SUCCESS
2023-09-17 06:06:30 CEST (1m52s) SUCCESS
cleanup-epoch-manager:
2023-05-15 23:15:17 CEST (1m32s) SUCCESS
2023-05-14 21:23:12 CEST (1m25s) SUCCESS
2023-05-13 21:14:37 CEST (1m33s) SUCCESS
2023-05-11 17:59:13 CEST (13m45s) SUCCESS
2023-05-10 19:55:41 CEST (14m56s) SUCCESS
cleanup-logs:
2023-05-15 23:15:12 CEST (3s) SUCCESS
2023-05-14 21:23:09 CEST (3s) SUCCESS
2023-05-13 21:14:29 CEST (6s) SUCCESS
2023-05-11 17:59:10 CEST (3s) SUCCESS
2023-05-10 19:55:36 CEST (3s) SUCCESS
full-delete-blobs:
2023-05-15 22:06:51 CEST (1h8m20s) SUCCESS
2023-05-13 20:06:43 CEST (1h7m45s) SUCCESS
2023-05-10 16:08:07 CEST (3h47m28s) SUCCESS
2023-05-08 12:17:58 CEST (3h45m15s) SUCCESS
2023-05-06 10:14:28 CEST (3h41m53s) SUCCESS
full-drop-deleted-content:
2023-05-15 22:06:50 CEST (0s) SUCCESS
2023-05-14 21:23:08 CEST (0s) SUCCESS
2023-05-13 20:06:42 CEST (0s) SUCCESS
2023-05-11 17:59:09 CEST (0s) SUCCESS
2023-05-10 16:08:06 CEST (1s) SUCCESS
full-rewrite-contents:
2023-09-21 06:45:33 CEST (1h45m0s) ERROR: failed to rewrite 8484 contents
2023-09-20 06:12:14 CEST (2h29m9s) ERROR: failed to rewrite 8484 contents
2023-09-19 06:10:28 CEST (1h37m32s) ERROR: failed to rewrite 8484 contents
2023-09-18 06:08:44 CEST (5m20s) ERROR: failed to rewrite 8484 contents
2023-09-17 06:08:24 CEST (5m10s) ERROR: failed to rewrite 8484 contents
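As far as I know, the Kopia version in use here has no built-in alerting for failed maintenance runs, so a common workaround is to run maintenance from a wrapper script that notifies you on failure. A minimal sketch — the address and the `mail` command are placeholders for whatever notification mechanism you have on the box:

```shell
#!/bin/sh
# Run full maintenance and mail the log if it fails.
# ADMIN and the mail command are placeholders -- adapt to your setup.
ADMIN="you@example.com"
LOG="$(mktemp)"

if ! kopia maintenance run --full >"$LOG" 2>&1; then
    mail -s "kopia maintenance FAILED on $(hostname)" "$ADMIN" <"$LOG"
fi
rm -f "$LOG"
```

Cron this instead of relying on the built-in scheduler if you want the failure to actually reach you.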
My last attempt at fixing things with the above command resulted in a lot of these messages:
unable to rewrite content "440b4227812e229a1f09017036a010cf": unable to get content data and info: error getting cached content: failed to get blob with ID pdb1cdf407964733dd3c294f83656d05f-s0493f16e0f53d02211c: BLOB not found
unable to rewrite content "461f0d1c4b792bbc46921e8ae88007fa": unable to get content data and info: error getting cached content: failed to get blob with ID pdb1cdf407964733dd3c294f83656d05f-s0493f16e0f53d02211c: BLOB not found
unable to rewrite content "46ac8903f0c9ff1a64797c427a1b01a3": unable to get content data and info: error getting cached content: failed to get blob with ID pdb1cdf407964733dd3c294f83656d05f-s0493f16e0f53d02211c: BLOB not found
unable to rewrite content "48a89d927caf04e35ad2c24302f3ec2a": unable to get content data and info: error getting cached content: failed to get blob with ID pdb1cdf407964733dd3c294f83656d05f-s0493f16e0f53d02211c: BLOB not found
unable to rewrite content "44f577dedf30679cbd3e8acfa2da9
What kind of setup is that? Have you checked that the caches get purged when you shut down Kopia? Is this a single-client/KopiaUI or a multi-client/Kopia Server setup?
I think I once had an issue with corrupted caches on either my Kopia client or server, but I can’t remember exactly when. Also, none of my Kopia installations show this behaviour. Maybe try shutting down Kopia altogether and make sure to empty the cache directory.
Otherwise, I’d focus on the storage itself, since BLOBs wouldn’t go anywhere on their own…
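For emptying the caches, Kopia has a built-in command that is safer than deleting directories by hand. A sketch — run it while no snapshot or maintenance task is active, so nothing is holding the cache open:

```shell
# Show where the caches live and how large they currently are
kopia cache info

# Wipe the local content/metadata/index caches; they are rebuilt on demand
kopia cache clear
```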
Thanks. It is an experiment: running Kopia on Debian, backing up via rclone to OneDrive. I am fully aware of the issues with rclone and OneDrive. I was just trying to figure out whether these “problems” could possibly not be related to rclone and OneDrive.
Well… there seems to be an issue with the content cache size rising above the configured value when performing a verify with --verify-files-percent= greater than 10%, which made my Kopia server choke and bail when the volume reached 100% usage… I am sure this is a bug of some kind.
Okay, got that sorted out. If you’re running a Kopia server setup, you might want to put a limit on the content-cache size in the config for that repo; otherwise a kopia snapshot verify --verify-files-percent=XX might fill up your Kopia server’s volume and crash the server.
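Capping the caches can be done with `kopia cache set`; a sketch, with the 5000 MB values being arbitrary examples (flag names may vary slightly between Kopia versions, so check `kopia cache set --help` on yours):

```shell
# Cap the local content cache (value in MB)
kopia cache set --content-cache-size-mb=5000

# The metadata cache can be capped the same way
kopia cache set --metadata-cache-size-mb=5000
```

With a cap in place, a large --verify-files-percent verify should evict old cache entries instead of growing until the volume is full.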