Repairing "is backed by missing blob"

Currently backing up via kopia cli from Debian Linux using a repository backed by Onedrive via rclone.

Everything “seemed” fine until my nightly cronjob threw an error:

0 3 * * * kopia snapshot verify --sources=/path/Documents/ --sources=/path/Pictures/ --verify-files-percent=1 --file-parallelism=4 --parallel=4 --log-level=error

ERROR error processing root@nas:/path/Documents@2022-12-31 21:11:47 CET/Scan2Net/xerocopy06-12-2022-141529.pdf: object Ix2e733e4f507d5d3d42b345d3522a0e92 is backed by missing blob qcd6b3db76463a69fadf9fdd5931900e7-s980bced608a740fe11c
ERROR object Ix2e733e4f507d5d3d42b345d3522a0e92 is backed by missing blob qcd6b3db76463a69fadf9fdd5931900e7-s980bced608a740fe11c

So I manually ran this:

kopia snapshot fix invalid-files --source root@nas:/sixer/Documents

which resulted in a lot of similar-sounding warnings:

WARN removing invalid file Scan2Net/xerocopy06-12-2022-141529.pdf/xerocopy06-12-2022-141529.pdf due to: object Ix2e733e4f507d5d3d42b345d3522a0e92 is backed by missing blob qcd6b3db76463a69fadf9fdd5931900e7-s980bced608a740fe11c
  2023-05-13 00:05:15 CEST replaced manifest from 8fd6b121ba83bf89839db59060afbf03 to 8fd6b121ba83bf89839db59060afbf03
    diff k084fb6800b80a3b42dce707c2c465609 k50ec9de110c8b89d45bf535c2a847316
    delta:-5.8 GB
WARN removing invalid file Email-archive/db/2023-05/1765913171312128002.meta/1765913171312128002.meta due to: verify object: error getting content info for 7b0d7c8e40c7b50c276535dbdda5595d: content not found

And the ending line said:

Fixed 27 snapshots, but snapshot manifests were not updated. Pass --commit to update snapshots.

Do I now have to run the command again with --commit appended?

Looking at:

--commit defaults to false. As I read it, without setting it to true the command is effectively a dry run.
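That reading matches the output above. A minimal sketch of the two-step flow, with the kopia invocations left commented out and the summary line copied from the earlier output (the source path is the one from the post above):

```shell
# Dry run (default): reports what would be fixed but does not update manifests
# kopia snapshot fix invalid-files --source root@nas:/sixer/Documents

# The dry run ends with a summary like this (copied from the output above):
summary="Fixed 27 snapshots, but snapshot manifests were not updated. Pass --commit to update snapshots."

# Only when the summary asks for --commit is a second, committing run needed:
if [[ "$summary" == *"Pass --commit"* ]]; then
  echo "re-run with --commit"
  # kopia snapshot fix invalid-files --source root@nas:/sixer/Documents --commit
fi
```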


I would worry about why a blob is missing from the repo… a stored snapshot should not get corrupted without some reason.

I have noticed that onedrive/rclone with kopia, at least for me, is not entirely stable:

You could check manually whether there is a

qcd6b3db76463a69fadf9fdd5931900e7-s980bced608a740fe11c.f-?????????

file on onedrive - that would point to problems with the move operation.

have a look in folder qcd/6b3
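The folder name follows from how the blobs are sharded in this repository: the blob ID is split into 3-character directory prefixes (hence qcd/6b3). A small sketch; the rclone remote and path "onedrive:kopia-repo" are hypothetical placeholders:

```shell
# Blob ID from the error message above
blob="qcd6b3db76463a69fadf9fdd5931900e7-s980bced608a740fe11c"

# This repository shards blobs into directories named after the first
# two 3-character prefixes of the blob ID
shard="${blob:0:3}/${blob:3:3}"
echo "$shard"   # qcd/6b3

# List that shard on OneDrive to look for the blob itself or a leftover
# un-renamed temporary file ("onedrive:kopia-repo" is an assumed remote/path):
# rclone lsf "onedrive:kopia-repo/$shard/"
```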

that folder is totally empty

Thank you for checking

Thanks for the help, I would not have had any idea where to look. I just googled a bit about kopia maintenance, so let me add more info. It looks like there was some GC problem.

kopia maintenance info
Owner: root@nas
Quick Cycle:
  scheduled: true
  interval: 1h0m0s
  next run: 2023-05-17 13:14:22 CEST (in 22m0s)
Full Cycle:
  scheduled: true
  interval: 24h0m0s
  next run: 2023-05-17 22:08:29 CEST (in 9h16m7s)
Log Retention:
  max count:       10000
  max age of logs: 720h0m0s
  max total size:  1.1 GB
Recent Maintenance Runs:
  cleanup-epoch-manager:
    2023-05-15 23:15:17 CEST (1m32s) SUCCESS
    2023-05-14 21:23:12 CEST (1m25s) SUCCESS
    2023-05-13 21:14:37 CEST (1m33s) SUCCESS
    2023-05-11 17:59:13 CEST (13m45s) SUCCESS
    2023-05-10 19:55:41 CEST (14m56s) SUCCESS
  cleanup-logs:
    2023-05-15 23:15:12 CEST (3s) SUCCESS
    2023-05-14 21:23:09 CEST (3s) SUCCESS
    2023-05-13 21:14:29 CEST (6s) SUCCESS
    2023-05-11 17:59:10 CEST (3s) SUCCESS
    2023-05-10 19:55:36 CEST (3s) SUCCESS
  full-delete-blobs:
    2023-05-15 22:06:51 CEST (1h8m20s) SUCCESS
    2023-05-13 20:06:43 CEST (1h7m45s) SUCCESS
    2023-05-10 16:08:07 CEST (3h47m28s) SUCCESS
    2023-05-08 12:17:58 CEST (3h45m15s) SUCCESS
    2023-05-06 10:14:28 CEST (3h41m53s) SUCCESS
  full-drop-deleted-content:
    2023-05-15 22:06:50 CEST (0s) SUCCESS
    2023-05-14 21:23:08 CEST (0s) SUCCESS
    2023-05-13 20:06:42 CEST (0s) SUCCESS
    2023-05-11 17:59:09 CEST (0s) SUCCESS
    2023-05-10 16:08:06 CEST (1s) SUCCESS
  full-rewrite-contents:
    2023-05-14 20:07:06 CEST (1h16m0s) SUCCESS
    2023-05-11 16:53:16 CEST (1h5m52s) SUCCESS
    2023-05-09 14:21:36 CEST (44m27s) SUCCESS
    2023-05-07 12:10:40 CEST (52m25s) SUCCESS
    2023-05-05 10:08:39 CEST (54m23s) SUCCESS
  snapshot-gc:
    2023-05-16 22:08:31 CEST (17s) ERROR: unable to find in-use content ID: error processing snapshot root: error verifying k11e4f92612e6c55d88fb2c88173db1a2: error getting content info for k11e4f92612e6c55d88fb2c88173db1a2: content not found
    2023-05-15 22:06:18 CEST (31s) SUCCESS
    2023-05-14 20:06:34 CEST (32s) SUCCESS
    2023-05-13 20:06:11 CEST (30s) SUCCESS
    2023-05-12 18:06:25 CEST (17s) ERROR: unable to find in-use content ID: error processing snapshot root: encountered 16 errors
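Once the broken snapshots are repaired, the failing snapshot-gc step can be retried by running maintenance by hand. A sketch; the kopia invocations are left commented out, and the status line is copied (truncated) from the output above:

```shell
# After fixing the snapshots, re-run full maintenance and watch whether
# snapshot-gc still fails (commands not executed here):
# kopia snapshot fix invalid-files --source root@nas:/sixer/Documents --commit
# kopia maintenance run --full

# The failing step shows up in "kopia maintenance info" output as an ERROR line:
line="2023-05-16 22:08:31 CEST (17s) ERROR: unable to find in-use content ID: ..."
case "$line" in
  *ERROR*) echo "snapshot-gc failed" ;;
  *)       echo "snapshot-gc ok" ;;
esac
```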
kopia maintenance --help

usage: kopia maintenance <command> [<args> ...]
Maintenance commands.

Flags:
      --[no-]help             Show context-sensitive help (also try --help-long and --help-man).
      --[no-]version          Show application version.
      --log-file=LOG-FILE     Override log file.
      **--log-dir="/root/.cache/kopia"**  
                              Directory where log files should be written. ($KOPIA_LOG_DIR)

Any clue as to where exactly these maintenance log files are stored? I had a quick look and poked around but didn’t find them. I certainly did not change the default location, so they should be somewhere in there.

So for Linux in:

~/.cache/kopia

in the home folder of the user running kopia, to be precise
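A sketch for finding them under that default directory, assuming $KOPIA_LOG_DIR was never overridden (for a cron job running as root, $HOME is /root):

```shell
# Default CLI log directory on Linux, per the --log-dir help text above
logdir="${KOPIA_LOG_DIR:-$HOME/.cache/kopia}"
echo "$logdir/cli-logs"

# Maintenance runs write log files with "maintenance" in the name:
# ls -t "$logdir"/cli-logs/*maintenance*.log | head
```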

Yes, I saw that; I even highlighted it in red, but where “exactly”?

I had a quick look and poked around but didn’t find them.

###edit###
Found it (I think) but nothing too interesting in there :frowning:

/root/.cache/kopia/cli-logs/kopia-20230517-105205-1668645-maintenance-info.0.log

I think I might give up on this (onedrive via rclone); I now get daily errors like this from the nightly maintenance.

Yeah… that is also my conclusion with kopia/rclone at the moment.

I will give it one more try with the latest rclone (not released yet), as they fixed a WebDAV issue that I suspect might be related to how kopia uses rclone: