Hi all, as I also asked on the restic forum (I’m testing both restic and Kopia): is it possible to use something immutable like object lock / bucket lock on Google Cloud?
I’ve tested a standard Kopia config and, at the end of the snapshot, an error is triggered because a file can’t be updated/deleted due to the lock (honestly, I don’t remember the exact error).
So, what is the best solution to keep snapshots immutable and safe from ransomware?
I’ve seen that it’s possible to create a repository with object lock set in Kopia, but only for S3. Would that not work for Google Cloud Storage?
GCS supports the S3 protocol AFAIK, so the best way is to test. Assuming there are no incompatibility issues (a typical problem with S3 implementations), it should work.
With the S3 backend I’m able to use the bucket, and I can see (from the Google CLI) that each object has a retention policy set. I’ve also enabled versioning, as suggested, BUT I’m still able to delete the snapshot from the Kopia CLI and even from the Google console (I deleted all files with no issues at all).
So I think I did something wrong. I would expect that a successful snapshot would be set as read-only for everyone (Google included, as with a bucket-level retention policy) and, as a bare minimum, that Kopia shouldn’t be able to delete/alter an existing snapshot.
# kopia repo create s3 --endpoint="storage.googleapis.com" --bucket backup-test --access-key="x" --secret-access-key="y" --retention-mode=GOVERNANCE --retention-period=30d
Enter password to create new repository:
Re-enter password for verification:
Initializing repository with:
block hash: BLAKE2B-256-128
encryption: AES256-GCM-HMAC-SHA256
splitter: DYNAMIC-4M-BUZHASH
Connected to repository.
NOTICE: Kopia will check for updates on GitHub every 7 days, starting 24 hours after first use.
To disable this behavior, set environment variable KOPIA_CHECK_FOR_UPDATES=false
Alternatively you can remove the file "/root/.config/kopia/repository.config.update-info.json".
Retention:
Annual snapshots: 3 (defined for this target)
Monthly snapshots: 24 (defined for this target)
Weekly snapshots: 4 (defined for this target)
Daily snapshots: 7 (defined for this target)
Hourly snapshots: 48 (defined for this target)
Latest snapshots: 10 (defined for this target)
Ignore identical snapshots: false (defined for this target)
Compression disabled.
To find more information about default policy run 'kopia policy get'.
To change the policy use 'kopia policy set' command.
NOTE: Kopia will perform quick maintenance of the repository automatically every 1h0m0s
and full maintenance every 24h0m0s when running as root@c.
See https://kopia.io/docs/advanced/maintenance/ for more information.
NOTE: To validate that your provider is compatible with Kopia, please run:
$ kopia repository validate-provider
root@c:~/test# kopia snapshot create testfile
Snapshotting root@c:/root/test/testfile ...
* 0 hashing, 1 hashed (12 B), 0 cached (0 B), uploaded 197 B, estimating...
Created snapshot with root acc3015984e71e6af5b127b149c0cdad and ID cb01272d4ad66dbda2278bbdeb83ca1e in 0s
Running full maintenance...
Looking for active contents...
Looking for unreferenced contents...
GC found 0 unused contents (0 B)
GC found 0 unused contents that are too recent to delete (0 B)
GC found 1 in-use contents (40 B)
GC found 3 in-use system-contents (1.4 KB)
Rewriting contents from short packs...
Total bytes rewritten 0 B
Not enough time has passed since previous successful Snapshot GC. Will try again next time.
Skipping blob deletion because not enough time has passed yet (59m59s left).
Extending retention time for blobs...
Found 9 blobs to extend
Extended total 9 blobs
Cleaned up 0 logs.
Cleaning up old index blobs which have already been compacted...
Finished full maintenance.
root@c:~/test# kopia snapshot list
root@c:/root/test/testfile
2024-01-03 21:10:47 CET acc3015984e71e6af5b127b149c0cdad 12 B -rw-r--r-- (latest-1,hourly-1,daily-1,monthly-1,annual-1)
root@c:~/test# kopia snapshot delete acc3015984e71e6af5b127b149c0cdad --delete
Deleting snapshot cb01272d4ad66dbda2278bbdeb83ca1e of root@c:/root/test/testfile at 2024-01-03 21:10:47 CET...
root@c:~/test# kopia snapshot list
root@c:~/test# kopia snapshot list
root@c:~/test#
but I was able to delete it from the console with no issue at all. The same retention applied at the bucket level prevents any changes inside the bucket, even from the Google console (I have to remove the retention to delete files).
All your tests are very interesting. I would not worry that you can delete: normally it only means that you transitioned the object to a past version. Old versions should not be deletable using the API; of course, using the web console you can. If somebody has access to your console credentials, they can delete buckets or even terminate the whole account.
This is OK. Object lock with versioning is not the same as an “append-only repo”.
Versioning is what really makes an object not deletable: using the API, you should not be able to delete any version that is not old enough (this is where locking comes into play; it locks versions for a set period of time).
Let’s say I got your S3 API keys and deleted not just some snapshot but all files from your bucket. If locking works, it means that all the objects’ previous versions are still there and I cannot do anything about them.
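You can check this from the Google side. A minimal sketch, assuming the `gsutil` CLI and a versioned bucket named `backup-test` (bucket and object names here are placeholders):

```shell
# List all generations of every object, including noncurrent ones.
# An object "deleted" through the S3 API should still appear here
# as a noncurrent generation.
gsutil ls -a gs://backup-test

# Show metadata (including retention/hold info) for one specific
# generation; the #<number> suffix is the generation ID printed
# by "ls -a". The object name below is made up for illustration.
gsutil stat 'gs://backup-test/pXXXXXXXX#1700000000000000'
```

If the deleted snapshot’s blobs still show up under `ls -a` as noncurrent generations, the versioning-based protection is working as intended.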
Now, what you do, as per the docs:
Reconnect the repo in Kopia using the --point-in-time option (ex: --point-in-time=2021-11-29T01:10:00.000Z)
so, e.g., connect to your repo with a point in time of one week ago, and voilà: you see your whole repo as it was one week ago. You can restore your files, or even the whole repo if needed.
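For reference, the reconnect looks roughly like this (bucket and credentials are placeholders; as far as I know, `--point-in-time` wants an RFC 3339 timestamp rather than a phrase like “week ago”):

```shell
# Connect to the repository as it existed at a given moment; Kopia
# then reads the object versions that were current at that time.
kopia repository connect s3 \
  --endpoint="storage.googleapis.com" \
  --bucket=backup-test \
  --access-key="x" \
  --secret-access-key="y" \
  --point-in-time=2024-01-01T00:00:00.000Z
```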
This is OK too. Versioning and locking protect you from rogue usage of your API credentials, not from misuse of your Google account credentials.
All your tests are very interesting. I would not worry that you can delete: normally it only means that you transitioned the object to a past version. Old versions should not be deletable using the API; of course, using the web console you can. If somebody has access to your console credentials, they can delete buckets or even terminate the whole account.
Not exactly: in GCP, if you have set a bucket lock (the same as object lock, but at the bucket level), before deleting the bucket or any object inside it, you have to manually remove the lock. Even an administrator in the console can’t delete an object in a locked bucket without removing the lock first.
I would expect the same with a single object lock.
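For comparison, this is roughly how a bucket-level retention policy is set on the GCS side with `gsutil` (the bucket name is a placeholder; note that locking the policy is irreversible):

```shell
# Require every object to be at least 30 days old before it can be
# deleted or overwritten, regardless of who asks.
gsutil retention set 30d gs://backup-test

# Optionally lock the policy itself, so nobody (including the project
# owner) can remove or shorten it. This step is PERMANENT.
gsutil retention lock gs://backup-test
```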
Honestly, I don’t know how to test whether everything is working as expected with good ransomware protection. If Kopia is able to delete snapshots easily, the same will be possible for ransomware, and when that happens, how can I restore a backup if Kopia doesn’t know anything about it? (snapshot list is empty)
Another question: I’ve seen that the repo is “flat”, with all files in a single directory. Is this correct, or did I do something wrong? For example, restic has a deeper structure.
It is very different for a local repo on an external disk, for example. File systems have limits: with too many files in one directory, things can become very slow. This is where sharding helps. You split many files into many directories using some logic that lets you know where to search for a file with a given name.
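As a toy illustration of that logic (not Kopia’s actual scheme), a sharded layout can derive a directory path from the leading characters of the blob name, so any file can be located without scanning one huge directory:

```shell
# Map a blob name to a two-level sharded path: first character,
# then the next two characters, then the full name
# (uses bash substring syntax).
shard_path() {
  local name="$1"
  printf '%s/%s/%s\n' "${name:0:1}" "${name:1:2}" "$name"
}

shard_path "pabc123def456"
# -> p/ab/pabc123def456
```

On an object store like GCS there are no real directories (the “/” is just part of the key), which is why a flat layout costs nothing there.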
Yes, I know it’s different, and it’s not an issue. A deeper structure with multiple “directories” is just cleaner if you have to access the repo in some way. It’s not a problem; I was just curious.
My real issue is how to check and simulate a ransomware attack. Based on my previous tests, I think something is not working as expected, because I was able to delete everything with just one command.
I’ll try later today. Honestly, I didn’t use the point-in-time option. Any idea how to know the last usable point-in-time date? A corruption could go undetected for days. (I would suggest adding a command that automatically lists the usable dates by trying different points in time until it finds a valid repository.)
Anyway: do you have something like an append mode, where no changes are made to existing files (not even to any lock file, if one is used)? This would allow users to use GCP natively with a bucket lock: you write a file and you never touch it again until the expiry date.
I doubt it will ever be implemented… you can create your own script to do it. The main reason is that it is not something anybody would use often; it is a one-off (maybe never) situation where somebody needs “to go back in time”. It is not such a big deal to try it even manually: yesterday, one week ago, two weeks ago, etc.
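A rough sketch of such a script, assuming a versioned S3-compatible bucket, GNU `date`, and `KOPIA_PASSWORD` exported so `connect` does not prompt (all connection parameters below are placeholders):

```shell
#!/usr/bin/env bash
# Try progressively older points in time and report the first one at
# which the repository connects and its snapshots are listable.
set -u
for days_ago in 1 7 14 21 28; do
  ts=$(date -u -d "${days_ago} days ago" +%Y-%m-%dT%H:%M:%S.000Z)
  if kopia repository connect s3 \
       --endpoint="storage.googleapis.com" \
       --bucket=backup-test \
       --access-key="x" \
       --secret-access-key="y" \
       --point-in-time="$ts" >/dev/null 2>&1 \
     && kopia snapshot list >/dev/null 2>&1; then
    echo "usable point in time: $ts"
    break
  fi
done
```

Listing snapshots is only a shallow check; to be more confident the data is intact, you would also want to run something like `kopia snapshot verify` at that point in time.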
I have never tried append-only to a cloud; I do not see any advantage over a lock. Maybe somebody else can suggest something clever here.
You can also Google for backup products supporting GCS. Nobody claims that Kopia can do everything. Maybe you need something else for your specific requirement.
If you want append-only, then definitely have a look at rustic. It does not have to delete anything (as it is not using lock files), so it can be perfect for what you want.
I’ve tried rustic. It seems good, but it doesn’t support GCP natively, only via rclone, an additional piece of software to install and configure (and I really HATE the rclone configurator). rclone was the first software I tried for backing up some GCP instances, but saving all files directly to the remote wasn’t good for me. Much better to use something like kopia/rustic/restic with a structured repository.