How to use different storage tiers in one (or more) retention policy?

First of all, thanks for having me :slight_smile:
I’m very intrigued by Kopia, but it is still early days for me :slight_smile:

One question, after I’ve read the docs and configured my first repo, snapshots, and policies, is:

    • how to use different storage tiers for daily, weekly, monthly, and yearly snapshots?
    • for example, filesystem storage for daily and weekly, but S3 for monthly and yearly?
    • sync-to seems to be meant for mirroring snapshots, not for splitting them across tiers?
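For context, what I’ve found so far is that sync-to copies the whole repository to another storage backend, not a subset by age. A minimal sketch, assuming an S3 backend; the bucket name and credentials are placeholders:

```shell
# Mirror the entire connected repository to S3 -- this copies all
# snapshots, not just monthly/yearly ones. Placeholders below.
kopia repository sync-to s3 \
  --bucket=my-archive-bucket \
  --access-key=AKIA... \
  --secret-access-key=...
```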

The older the snapshots get, the “colder” (i.e. cheaper) the tier can be.
Thanks :slight_smile:

That’s not really how Kopia works: it organizes data from different snapshots together into pack files, so there’s no real distinction based on how recent a piece of data is.

This doc has more info on storage tiers:

I’ve read the storage-tiers doc (and just did so again), which led me to the question of how to align storage tiers with the “aging” of my snapshots, which get “colder” over time, right?
But KopiaUI seems to support only one repository at a time… is that by design, or do I have to use the CLI for more complex setups?

Example:

  • I have a very cold and slow storage tier where only yearly snapshots should go
  • and hourly snapshots should go to “hot” local filesystem storage

How would you configure this, if possible?

This is actually not possible with Kopia. The mentioned article only describes the types of files, identified by filename prefixes, and the kind of storage tier each would be suited for. It doesn’t imply that Kopia will move them around for you.

Running a Kopia repo on tiered storage would require the storage management software to move files around according to their filename prefixes, and I doubt any such system out there does that. They usually move data by creation/access time.
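To make the prefix point concrete: you can inspect the repository’s blobs by prefix yourself. This is just an illustrative sketch; treat the prefix meanings as an assumption based on my reading of the storage-tiers doc (`p` = content packs, `q` = metadata packs), not something I’ve verified in the source:

```shell
# List blobs by filename prefix. The bulky 'p' content packs are the
# candidates the doc suggests for colder storage; 'q' metadata packs
# and index blobs should stay on fast storage.
kopia blob list --prefix=p | wc -l   # number of content pack blobs
kopia blob list --prefix=q | wc -l   # number of metadata pack blobs
```

Note that these prefixes say nothing about the age of the data inside a pack, which is exactly why age-based tiering doesn’t map onto them.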

The snapshots are just lists or indexes that say “the yearly snap wants data blob XYZ for another year”. But that doesn’t mean data blob XYZ itself is necessarily getting “colder”; there may well be tons and tons of hourly/daily/weekly snaps for many clients that also want to retain data blob XYZ.

So while the yearly index file might become “colder”, it is very small compared to the data blobs, which make up the bulk of all the data in the snapshots.

Now and then, some data blob falls out of its last index (and this might not be via the yearly snap; it could easily be part of a file that only existed from Wednesday to Friday and was never seen again). The next maintenance run will then mark it for deletion and remove it, since no index file points to it anymore.
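That cleanup step is part of Kopia’s maintenance cycle, which you can inspect and trigger from the CLI; a quick sketch:

```shell
# Show the current maintenance schedule and owner for this repository.
kopia maintenance info

# Run full maintenance now -- this is the pass that eventually removes
# data blobs no index refers to anymore.
kopia maintenance run --full
```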