For full disclosure: I’ve not yet run a snapshot I’d trust, as I’m still in the evaluation phase, & I’ve never used S3 as a backend. I’m sure it’s just a matter of time before someone with long-term experience comes across your post. In the meantime, & regardless of the backup method/software used: no, I wouldn’t trust it if the old underlying device didn’t remap the drive’s bad sectors & the subsequent snapshot uploads to it were never followed by a validation of the pool. The clone operation may simply have copied the corrupted blocks over to the new storage device.
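By “validate the pool” I mean something along these lines — just a minimal sketch assuming a Kopia repo on the new device; the path is a placeholder & the flag names are as I remember them from the docs, so check `kopia snapshot verify --help` on your version:

```sh
# Connect to the repo on the new device (path is a placeholder).
kopia repository connect filesystem --path=/mnt/new-disk/kopia-repo

# Walk the snapshots & re-download a percentage of file content so it
# actually gets decrypted, decompressed & checksum-verified end to end.
# 100 re-reads everything; drop it for a quicker spot check.
kopia snapshot verify --verify-files-percent=100
```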
I had considered a scenario like the one you’re now experiencing. I intend to ensure the endpoint file system at least uses some form of fs checksumming, combined with Kopia’s ECC (which is stated to be experimental).
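For what it’s worth, this is the rough shape of what I have in mind — again only a sketch; the paths/pool names are placeholders & the ECC flag is as I recall it from the Kopia docs, so verify against `kopia repository create --help`:

```sh
# Endpoint FS that checksums its own blocks (e.g. Btrfs or ZFS),
# scrubbed on a schedule so silent corruption actually gets surfaced:
btrfs scrub start /mnt/backup        # or: zpool scrub backuppool

# Layer Kopia's (experimental) ECC on top at repo-creation time.
# A non-zero overhead percent enables it; --ecc picks the algorithm
# if you want something other than the default.
kopia repository create filesystem \
  --path=/mnt/backup/kopia-repo \
  --ecc-overhead-percent=2
```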
Again, before others who may have more to say on the matter post, I’d perhaps keep this on the back burner: you might … & I do emphasize might … have a basis for trying a method that targets the failed blobs & removes them in a targeted fashion. See the attached link to a thread. I’d then re-download/verify the pool to fully confirm 1:1 src/endpoint consistency. I suggest this as you’ve mentioned you’re not against just a ‘nuke & reset’ of the endpoint’s repo/pool.
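For the re-download/verify step, something like the following is what I’d try — purely a sketch; the bucket & paths are placeholders, & rclone is just the tool I’d personally reach for to compare the two copies:

```sh
# Pull the pool down from S3 into a scratch area (names are placeholders).
rclone sync s3remote:my-kopia-bucket /mnt/scratch/pool-copy

# Compare the two sides by size/hash; point this at whichever src/endpoint
# pair you want to prove 1:1 -- anything it lists means they've diverged.
rclone check s3remote:my-kopia-bucket /mnt/scratch/pool-copy

# Then let Kopia itself confirm the downloaded copy is internally consistent.
kopia repository connect filesystem --path=/mnt/scratch/pool-copy
kopia snapshot verify --verify-files-percent=100
```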
If you’re willing to go that far anyway, you might as well try a surgical strike.
Good luck.