Are there limits to the length of path names/file names

Kopia seems to be having issues with some files that it can't grab while snapshotting. The file names are quite long (>64 characters), plus the path name on top, and I am wondering whether there is some kind of limit on the length of file/path names in Kopia?

The error messages go like this:

Error when processing "Allgemeine_Kundeninfos/16_Einreichungen/Effie_2022/material/JvM_Interspar_Lehrlingskampagne_Casefilm_Adaption-2022_ProRes4444_HD_MASTER_CLEAN.mov": object not found

I am not sure that there is one. I whipped up a quick repo and tried the following: create a 5-layer nested directory tree with 200 random characters as each name (mkdir $(hexdump -n 100 -v -e '/1 "%02X"' /dev/urandom), repeated 5 times), and inside the innermost directory, a single empty file with a 200-character name. In all there are about 1200 characters in the path, and the snapshot completed without any errors. Are you sure the file exists and isn't being locked? Does it contain any special characters that might cause issues?
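For anyone who wants to try the same test, this is roughly the script I used. It only builds the deep directory tree in a temp location; you would then point a snapshot at it yourself. The layout (5 levels, 200-character names) matches the description above; everything else is arbitrary.

```shell
#!/bin/sh
# Build a ~1000+ character deep path: 5 nested directories,
# each with a 200-character random hex name.
set -e
base=$(mktemp -d)
cd "$base"
for i in 1 2 3 4 5; do
  # 100 random bytes, hex-encoded -> a 200-character directory name
  name=$(hexdump -n 100 -v -e '/1 "%02X"' /dev/urandom)
  mkdir "$name"
  cd "$name"
done
# Single empty file with a 200-character name in the innermost directory
fname=$(hexdump -n 100 -v -e '/1 "%02X"' /dev/urandom)
touch "$fname"
echo "total path length: $(pwd | wc -c)"
```

Note that individual name components stay under the 255-byte NAME_MAX that BTRFS (and most Linux filesystems) enforce; only the total path exceeds 1000 characters.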

EDIT: I am running BTRFS on Arch.

Yeah, the files might be locked at that time, although it's quite deep into the night by then. It's really non-trivial to check these days, with all this remote work happening.

There should be no inherent file path limits beyond what the OS/filesystem supports.

We have seen Kopia choke on invalid filenames before (ones containing bad Unicode characters), but nothing related to path length.

In fact we run virtually all tests using very long file paths (>260 characters) on Windows, where that requires special handling, and the tests are generally all passing.

Well, I just checked these occurrences again, and the only thing these files have in common is that they are all bigger than 2 GB. Could it be that these files fall victim to the checkpoint interval? Since these files never seem to show up as faulty again on the following runs, that would be my guess.

These jobs run at night via cron, and the connection runs from Vienna to Hamburg, where the Kopia server is located. The site has a 1 GbE MPLS link, but for performance reasons I always run 10 Kopia backups in parallel…

I checked this issue again, and the cause has to be the checkpoint interval, which keeps Kopia from successfully processing large files in one go. This has consequences for large files, and I will have to see if I can get around this limitation, since otherwise a large file that gets updated each day would never fully make it into the repo.

I will probably pause Kopia's maintenance on that repo before starting the job and increase the checkpoint interval.
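For the record, a sketch of what that workaround might look like in the nightly cron script. The flag names and values here are my best recollection of the Kopia CLI and should be verified against `kopia maintenance set --help` and `kopia snapshot create --help` before use; the path is a placeholder.

```shell
#!/bin/sh
# Hypothetical nightly wrapper: pause maintenance, snapshot with a
# longer checkpoint interval, then re-enable maintenance.
# Flag names are assumptions -- verify with `kopia ... --help`.
set -e

# 1. Disable scheduled maintenance for the duration of the run, so it
#    cannot collect checkpointed objects of an in-progress large file:
kopia maintenance set --enable-quick=false --enable-full=false

# 2. Snapshot with a checkpoint interval long enough for a >2 GB file
#    to transfer in one go over the 1 GbE link:
kopia snapshot create /path/to/data --checkpoint-interval=120m

# 3. Re-enable maintenance afterwards:
kopia maintenance set --enable-quick=true --enable-full=true
```

Whether step 2 is needed at all depends on whether the "object not found" errors really come from maintenance racing the checkpoints; pausing maintenance alone (steps 1 and 3) may already be enough.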