List of S3-Compatible Cloud Storage Providers That Work With Kopia

Kopia’s s3 repository should work with any cloud storage provider that supports the S3 API. However, some S3-compatible providers do slightly different things under the hood, which may break compatibility with Kopia even though the storage is nominally S3-compatible. The purpose of this thread is to build a community list of S3-compatible cloud storage providers that are known to work with Kopia.

If you have used Kopia’s s3 repository to connect to a cloud storage provider, please post the provider’s name in this thread. If possible, please also post the version of Kopia that works with that provider.
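For anyone testing a new provider, connecting is usually just a matter of pointing Kopia’s s3 backend at the provider’s endpoint. A minimal sketch (the bucket name, endpoint, and keys below are placeholders; substitute whatever your provider gives you):

kopia repository create s3 \
  --bucket=my-kopia-bucket \
  --endpoint=s3.example-provider.com \
  --access-key=MY_ACCESS_KEY \
  --secret-access-key=MY_SECRET_KEY

(On any other machine that should use the same repository, run kopia repository connect s3 with the same flags.)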

The following are the S3-compatible cloud storage providers that I know successfully work with Kopia (v0.11.x):

  • Amazon S3 (duh!)
  • Amazon Lightsail
  • Backblaze B2
  • Cloudflare R2
  • DigitalOcean Spaces
  • Google Cloud Storage
  • IDrive E2
  • Linode
  • MinIO
  • Oracle Cloud Infrastructure
  • OVH
  • Scaleway
  • Vultr
  • Wasabi

I have also gotten Storj to work successfully with Kopia, but there are some reports that Storj may not work.

  • Infomaniak Swiss Backup

So far, it seems that Kopia and Storj can work together. I have successfully created the bucket in Storj and can upload and restore snapshots normally. I will continue to use it in order to find out whether there are other compatibility issues.

kopia --version
0.11.3 build: 317cc36892707ab9bdc5f6e4dea567d1e638a070 from: kopia/kopia
uplink ls
CREATED NAME
2022-07-23 09:15:13 kopia
.\storj-kopia
Connected to repository.
NOTICE: Kopia will check for updates on GitHub every 7 days, starting 24 hours after first use.
To disable this behavior, set environment variable KOPIA_CHECK_FOR_UPDATES=false
Alternatively you can remove the file "C:\Users\User\AppData\Roaming\kopia\repository.config.update-info.json".
kopia restore k278f54fc4ff902019fa50f04969d2e6a d:\downloads\456\1.zip
Restoring to a zip file (d:\downloads\456\1.zip)…
Processed 10 (5.6 KB) of 55 (36.8 KB) 3.6 KB/s (15.2%) remaining 8s.
Processed 49 (30.4 KB) of 55 (36.8 KB) 9.1 KB/s (82.7%) remaining 0s.
Processed 52 (33.9 KB) of 55 (36.8 KB) 7.1 KB/s (92.2%) remaining 0s.
Processed 53 (36.4 KB) of 55 (36.8 KB) 5.6 KB/s (98.9%) remaining 0s.
Processed 54 (36.5 KB) of 55 (36.8 KB) 4.5 KB/s (99.1%) remaining 0s.
Processed 56 (36.8 KB) of 55 (36.8 KB) 3.9 KB/s (100.0%) remaining 0s.
Restored 53 files, 3 directories and 0 symbolic links (36.8 KB).


Any service based on Ceph (radosgw) for S3 works.
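For example, pointing Kopia at a self-hosted radosgw endpoint should look something like this (a sketch; hostname, port, and credentials are placeholders, and --disable-tls is only needed if the gateway speaks plain HTTP, as it does on its default port 7480):

kopia repository create s3 \
  --bucket=kopia-backups \
  --endpoint=rgw.example.internal:7480 \
  --disable-tls \
  --access-key=MY_ACCESS_KEY \
  --secret-access-key=MY_SECRET_KEY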


I found that backups to Storj were extremely slow.
And having tried to restore from services like Google Drive, I wonder how slow the restore process from Storj will be. The splitting and reassembly of everything seems like it would add a huge amount of overhead.


It took about 40 minutes to restore 6 GB of content, which seems to be limited by network access speed.
Processed 15670 (6 GB) of 15669 (6 GB) 3.2 MB/s (100.0%) remaining 0s.
Restored 13905 files, 1765 directories and 0 symbolic links (6 GB).

Well, restoring will always be slower than backing up. Perhaps you can get to equal speeds on all-flash storage, but you won’t have that with most cloud storage providers, since it’s simply too expensive. And even then, backing up will be faster anyway, since restoring always comes with access-time and latency penalties.

E.g., on my 1 GbE internet access I was able to upload to Wasabi at approx. 500 Mbit/s, whereas downloading was significantly slower; AFAIR I never got above 80 Mbit/s.

At home, I have one local NVMe/USB3 repo directly attached to my MBP, and restoring from that I am getting a good 200 MB/s. On my home network (Wi-Fi), I am still getting 12 to 20 MB/s from my USB3 2 TB SSD, attached to my VM server, where Kopia server runs in a VM.

But yeah… S3, or any other remote network storage, will be slow when restoring, no matter what.

That’s incredible. I’m not looking for this thing to restore at the same speed that it backs up. It’s just that when I tried to initiate a 50 gigabyte restore from Google Drive, it was going at a pace that would have taken about two weeks. Way too slow. The problem is Google Drive itself, though.

If slowness is an issue, may I recommend using one of the object storage repos (S3, Google Cloud, Azure, etc.)? Google Drive and its comparables (OneDrive, Dropbox, etc.) are not designed to be used as object storage, so they will be slower when saving/restoring snapshots. Some even have explicit API limits. Plus, Kopia uses rclone to connect to Google Drive / OneDrive / Dropbox, so that extra layer slows down the process as well.
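To make the difference concrete, here is roughly what the two setups look like (a sketch; the remote and bucket names are made up). The Google Drive repo goes through an rclone remote, while the object-storage repo speaks the S3 API directly:

# Google Drive: Kopia drives rclone under the hood, so the rclone remote
# (here "gdrive") must already be configured, and Drive API limits apply.
kopia repository create rclone --remote-path=gdrive:kopia-repo

# Object storage: direct S3 API, no intermediate layer.
kopia repository create s3 \
  --bucket=kopia-repo \
  --endpoint=s3.example-provider.com \
  --access-key=MY_ACCESS_KEY \
  --secret-access-key=MY_SECRET_KEY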

For reference, restoring from Cloudflare is at around 300 Mbps for me, which does not max out my download link, but still is pretty fast imo.

If it’s speed you’re concerned about, I am running Kopia this way:

  1. Kopia repo locally on a directly attached NVMe drive via USB3. The enclosure hosts an M.2 1 TB NVMe stick and is quite small and very fast
  2. Kopia repo on my local LAN, fast SSD, but limited by LAN/Wi-Fi speeds
  3. Kopia repo at Wasabi, which is a sync-to repo from my repo on the LAN (see the sketch below)

Whenever I need to restore something quickly, I use my USB3-attached repo. The 2nd repo is just for redundancy, and the 3rd repo at Wasabi is simply insurance, should the house burn down.
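For the curious, a setup like repo no. 3 can be scripted with kopia repository sync-to, run while connected to the source repo. A sketch with placeholder Wasabi bucket and keys (--delete optionally removes destination blobs that no longer exist in the source):

kopia repository sync-to s3 \
  --bucket=my-wasabi-bucket \
  --endpoint=s3.wasabisys.com \
  --access-key=MY_ACCESS_KEY \
  --secret-access-key=MY_SECRET_KEY \
  --delete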

Is IBM Cloud Object Storage supported?

I don’t know if anyone has used this provider, but you can create a repository and run kopia repository validate-provider to see whether it is compatible with Kopia.
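The flow would be something like this (a sketch; the bucket, endpoint, and keys are placeholders for whatever the provider’s console gives you):

kopia repository create s3 \
  --bucket=my-bucket \
  --endpoint=ENDPOINT_FROM_PROVIDER \
  --access-key=MY_ACCESS_KEY \
  --secret-access-key=MY_SECRET_KEY

kopia repository validate-provider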

I’m having trouble determining the endpoints and keys necessary to even create a repo. :confused:

Hello everyone, I’m using Kopia on Cubbit DS3 infrastructure (AWS S3-compatible) and it seems to work well.


I’ve been testing Kopia with Storj using the self-hosted S3 gateway, and I haven’t had any issues with it so far (100 GB repo over 2 weeks). I increased the default max blob size to 55 MB, as Storj likes files around 64 MB in size for sharding.
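In case it helps anyone else, the self-hosted gateway setup is roughly this (from memory, so treat it as a sketch; gateway-st prints its own S3 access/secret keys during setup and listens on 127.0.0.1:7777 by default):

gateway setup    # interactive; paste your Storj access grant
gateway run      # serves an S3 endpoint on 127.0.0.1:7777

kopia repository create s3 \
  --bucket=kopia \
  --endpoint=127.0.0.1:7777 \
  --disable-tls \
  --access-key=GATEWAY_ACCESS_KEY \
  --secret-access-key=GATEWAY_SECRET_KEY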
