Memory consumption with many files or repo size

Everything worked perfectly with 16 GB RAM (10-15 tasks, ~50 TB total), until I started backing up a few new folders with ~100 TB more and probably 50-100+ million files inside.

The Docker container repeatedly crashes, roughly every minute, which looks like a memory leak. I can't get `kopia blob stats` or `kopia content stats` output because the container dies before they finish.

With 100 GB RAM it consumes ~45-50 GB and then levels off, and after maintenance all that memory is freed. With no tasks running, Kopia consumes 2-4 GB. I'm still waiting on `kopia blob stats`; it takes a very long time.
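One knob worth knowing while experimenting with this: you can cap the container's memory so a runaway Kopia process gets OOM-killed at a known limit instead of taking the host down. A minimal sketch, assuming a plain `docker run` deployment; the container name, image tag, and limit values are illustrative, not recommendations:

```shell
# Restart the Kopia container with an explicit memory ceiling.
# --memory caps RAM; setting --memory-swap to the same value disables swap use,
# so the container is killed (and restarted by the restart policy) at the cap
# rather than thrashing. Values here are placeholders for illustration.
docker run -d --name kopia \
  --restart unless-stopped \
  --memory=64g --memory-swap=64g \
  kopia/kopia:latest \
  server start --insecure --address=0.0.0.0:51515
```

This doesn't fix the underlying memory growth, but it makes the failure mode predictable while you tune repository settings.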

What should I increase, and is there a recommended ratio? For example, something like 10 GB of memory per 100 TB of repository data?

I saw recommendations to split backups across multiple repositories. But in the web UI I can only connect to one repo at a time. How do people work with that? Edit repository.config manually?
I found the "1 Docker container = 1 repo" solution, but that's really not a great idea: Multiple Repositories (using Docker)? - #2 by tugdualenligne
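For the CLI side, one Kopia install can talk to several repositories without hand-editing repository.config, by giving each repo its own config file via the global `--config-file` flag. A sketch, assuming two filesystem repositories; all paths and repo names below are made up for illustration:

```shell
# Connect to the first repository, storing its connection info
# in a dedicated config file instead of the default repository.config:
kopia repository connect filesystem \
  --path=/backups/repo-main \
  --config-file="$HOME/.config/kopia/repo-main.config"

# Connect to a second repository with its own config file:
kopia repository connect filesystem \
  --path=/backups/repo-media \
  --config-file="$HOME/.config/kopia/repo-media.config"

# Later commands select the repository via the same flag:
kopia snapshot create /data/photos \
  --config-file="$HOME/.config/kopia/repo-media.config"
```

Note this only helps on the command line; as far as I know, one Kopia server/web UI instance still serves a single repository at a time.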

After this first big maintenance run finished, max memory utilization decreased to 28-30 GB.

`kopia blob stats` failed after 12 hours:

Got 2320000 blobs...
Got 2330000 blobs...
Error response from daemon: No such exec instance: 74dbfae560e20f814fa33885ee2be4f1b96be223c88efa5faba9275904d0e234

`kopia content stats` output:

Count: 113500707
Total Bytes: 96.7 TB
Total Packed: 96.7 TB (compression 0.0%)
By Method:
  (uncompressed)         count: 108336543 size: 96.7 TB
  zstd-fastest           count: 5164164 size: 24.2 GB packed: 6.2 GB compression: 74.3%
Average: 852.1 KB

        0 between 0 B and 10 B (total 0 B)
   731140 between 10 B and 100 B (total 62.3 MB)
 11279518 between 100 B and 1 KB (total 4.4 GB)
 11137845 between 1 KB and 10 KB (total 39.3 GB)
 31376512 between 10 KB and 100 KB (total 1.7 TB)
 26261227 between 100 KB and 1 MB (total 5.8 TB)
 32714465 between 1 MB and 10 MB (total 89.1 TB)
        0 between 10 MB and 100 MB (total 0 B)