Kopia maintenance uses up to 10x the dataset size in bandwidth

Hi there!
We’ve encountered a critical issue where kopia maintenance consumes excessive bandwidth against an S3-compatible backend (Wasabi, in our case).

The total bucket size is around 2 TB, yet a single daily maintenance run consumes upwards of 20 TB of network bandwidth to complete. This may well have been happening all along: we only discovered it by coincidence, and it holds as far back as our log retention policy allows us to confirm, so it doesn’t appear to be a version-specific issue.

It’s worth noting that we ran maintenance on a daily basis until we noticed this, and that the data written between runs is insignificant compared to the total dataset: maybe 100 to 200 GB in the worst case.
Given that, I wouldn’t expect the compaction, purging, deletion, or rewrite work to be heavy enough to justify transferring 10x the full dataset.
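
For anyone wanting to compare notes, here is a minimal sketch of the kopia CLI commands one could use to inspect the maintenance schedule and, as a stopgap, reduce how often full maintenance runs while this is investigated. The 168h interval is just an illustrative value, not a recommendation:

```sh
# Show the current maintenance schedule, owner, and last-run details.
kopia maintenance info

# Stopgap: run full maintenance weekly instead of daily to cap the
# bandwidth cost (168h is an arbitrary example value).
kopia maintenance set --full-interval=168h

# Trigger a full maintenance run manually during an acceptable window.
kopia maintenance run --full
```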

Has anyone else encountered this?