Setting max parallel file reads beyond the number of logical processors

Hi
I’m trying to back up thousands, and sometimes millions, of documents from multiple sources. 99% of my files are below 200 MB, with most under 20-50 MB. I want to take a snapshot at least every 4 hours.

To do this, I’m backing up to Azure block blob storage and running Kopia on an Azure Standard D4s v3 machine (4 cores, 16 GB RAM, 6400 IOPS).

I’ve monitored CPU and RAM usage while running Kopia, and both stay very low no matter what value I pick. I imagine this is because I’m reading and writing so many small files rather than fewer big ones.

I have set max parallel file reads to numbers larger than 4, such as 16 and 32, and my snapshots finish much quicker; I imagine this is because they can use more of the machine’s resources.
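For reference, this is roughly how I’m setting it via the CLI; the flag is part of Kopia’s policy options, --global can be swapped for a specific source path, and 16 is just an example value:

```
# Raise upload parallelism for all sources (swap --global for a
# source path to scope it to one source instead).
kopia policy set --global --max-parallel-file-reads=16

# Confirm the effective policy:
kopia policy show --global
```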

My question is: are there any reasons I shouldn’t do this, or any known consequences?

Any help / responses appreciated.

Many thanks

You mean, consequences other than bogging down your host? I’d guess not. When handling a very large number of small files, the speed of your storage will also play a big part.
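If you want to see where that storage ceiling sits, one rough probe is to read a sample of the files with increasing parallelism and note when wall-clock time stops improving. A minimal bash sketch, assuming a placeholder sample path (note the OS page cache will flatter repeated runs, so use a fresh sample each time):

```
#!/usr/bin/env bash
# Read a sample file set at increasing parallelism; once elapsed time
# stops dropping, storage IOPS (not CPU) is the bottleneck.
for p in 4 16 32 64; do
  echo "parallel readers: $p"
  time find /path/to/sample -type f -print0 | xargs -0 -P "$p" cat > /dev/null
done
```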