Kopia, malware and object locks

So, let’s try again from scratch, step by step.

Main goal: ransomware protection and 6 months of restores configured like this:

latest 72 hourly backups
30 daily backups
24 weekly backups

In other words, I would like to restore any file from up to 3 days ago with hourly resolution, up to 1 month ago with daily resolution, and up to 6 months ago with weekly resolution (6*4 = 24 weekly backups).

Step by step commands:

# kopia repo create s3 --bucket xxx --access-key=yyy --secret-access-key=zzz+WDc0ZdJce --retention-mode GOVERNANCE --retention-period 180d --endpoint="storage.googleapis.com" --encryption=CHACHA20-POLY1305-HMAC-SHA256
Enter password to create new repository: 
Re-enter password for verification: 
Initializing repository with:
  block hash:          BLAKE2B-256-128
  encryption:          CHACHA20-POLY1305-HMAC-SHA256
  splitter:            DYNAMIC-4M-BUZHASH
Connected to repository.

NOTICE: Kopia will check for updates on GitHub every 7 days, starting 24 hours after first use.
To disable this behavior, set environment variable KOPIA_CHECK_FOR_UPDATES=false
Alternatively you can remove the file "/root/.config/kopia/repository.config.update-info.json".

Retention:
  Annual snapshots:                 3   (defined for this target)
  Monthly snapshots:               24   (defined for this target)
  Weekly snapshots:                 4   (defined for this target)
  Daily snapshots:                  7   (defined for this target)
  Hourly snapshots:                48   (defined for this target)
  Latest snapshots:                10   (defined for this target)
  Ignore identical snapshots:   false   (defined for this target)
Compression disabled.

To find more information about default policy run 'kopia policy get'.
To change the policy use 'kopia policy set' command.

NOTE: Kopia will perform quick maintenance of the repository automatically every 1h0m0s
and full maintenance every 24h0m0s when running as root@x.

See https://kopia.io/docs/advanced/maintenance/ for more information.

NOTE: To validate that your provider is compatible with Kopia, please run:

$ kopia repository validate-provider
# kopia maintenance set --extend-object-locks true
Object Lock extension maintenance enabled.
# kopia policy set --global --compression=pgzip
Setting policy for (global)
 - setting compression algorithm to pgzip
Running full maintenance...
Looking for active contents...
Looking for unreferenced contents...
GC found 0 unused contents (0 B)
GC found 0 unused contents that are too recent to delete (0 B)
GC found 0 in-use contents (0 B)
GC found 3 in-use system-contents (1.5 KB)
Rewriting contents from short packs...
Total bytes rewritten 0 B
Not enough time has passed since previous successful Snapshot GC. Will try again next time.
Skipping blob deletion because not enough time has passed yet (59m59s left).
Extending retention time for blobs...
Found 8 blobs to extend
Extended total 8 blobs
Cleaned up 0 logs.
Cleaning up old index blobs which have already been compacted...
Finished full maintenance.
# kopia snapshot create /
Snapshotting root@crm:/ ...
 \ 2 hashing, 1171 hashed (7.6 MB), 0 cached (0 B), uploaded 196 B, estimating...
 ! Ignored error when processing "etc/ssh/oslogin_trustedca.pub": unknown or unsupported entry type
 * 0 hashing, 181604 hashed (6.7 GB), 0 cached (0 B), uploaded 2.3 GB (1 errors ignored), estimated 6.7 GB (100.0%) 0s left     
Created snapshot with root keb0ecfc7aebdc6c089ea26581d5c9789 and ID c6ae3e458c50d0dfa028bb4778cd1286 in 5m20s
WARN Ignored 1 error(s) while snapshotting root@x:/.

(There is a .kopiaignore file in /.)
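Its contents aren’t shown in this thread, but for snapshotting / a .kopiaignore would typically exclude virtual and volatile paths; a purely hypothetical example:

# cat /.kopiaignore
/proc
/sys
/dev
/run
/tmp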

# kopia snapshot list
root@x:/
  2024-01-05 18:40:51 CET keb0ecfc7aebdc6c089ea26581d5c9789 6.7 GB drwxr-xr-x files:176451 dirs:29070 (latest-1,hourly-1,daily-1,monthly-1,annual-1)

Up to this point, it looks good to me.

So, now

  • which check should I do to see if ransomware protection is working as expected?
  • how can I configure Kopia for the retention described above?
  • for scheduling, is a simple hourly cron (the lowest resolution I need) that calls kopia snapshot create enough, or should I add more commands?

This is not what kopia recommends:

It is strongly recommended to use compliance mode when creating the Kopia repository. Compliance mode ensures that even root users cannot delete files from a bucket, and provides the highest level of security. More information can be found in the S3 documentation

But if you understand what you are doing, you can experiment. In GOVERNANCE mode the lock can be removed - it is your responsibility to manage permissions to make sure it works as intended.

It’s just for testing, as compliance locks can’t be removed.
And as you know, I’m doing a lot of tests these days, so I would like to be able to remove everything and start again. When I’m confident with the backup tool and the configuration, I’ll move to compliance.

That’s why I wrote the step-by-step procedure above, so that I can replicate everything again when I’m confident enough to use compliance mode.
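When that time comes, the procedure above should only need one change: creating the repository with compliance-mode locks instead of governance, roughly:

# kopia repo create s3 --bucket xxx --access-key=yyy --secret-access-key=zzz --retention-mode COMPLIANCE --retention-period 180d --endpoint="storage.googleapis.com" --encryption=CHACHA20-POLY1305-HMAC-SHA256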


Delete all files from your bucket :) and then try to connect to the repo “from the past”.

This is what works for me:

kopia repository connect --point-in-time=2024-01-04T11:10:00.000Z

Please note that this repo will be in read-only mode.
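Once connected read-only at that point in time, you can list and restore as usual; for example (snapshot IDs and target paths here are just placeholders):

# kopia snapshot list
# kopia snapshot restore <snapshot-or-root-object-id> /tmp/restored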

not sure what you mean

Yeap. Make sure that your full maintenance is set to run daily - it is the default, so it should be OK.
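To double-check, you can inspect the current maintenance schedule, and adjust the interval if needed (flag names as per kopia maintenance set --help):

# kopia maintenance info
# kopia maintenance set --full-interval=24h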

I need to get this:

latest 72 hourly backups
30 daily backups
24 weekly backups
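In kopia policy terms that maps roughly onto the per-level retention counters; a sketch (counter names as per kopia policy set --help, and the --keep-latest value here is arbitrary):

# kopia policy set --global --keep-latest 10 --keep-hourly 72 --keep-daily 30 --keep-weekly 24 --keep-monthly 0 --keep-annual 0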

Somebody did a very nice job for this:


This is the current cron:

# cat /etc/cron.d/backup 
3  * * * * root /usr/bin/kopia content verify --log-level=warning
33 * * * * root /usr/bin/kopia snapshot create --all --log-level=warning --no-progress
10 3 * * * root /usr/bin/kopia maintenance run --log-level=warning

It will be interesting to see if it works. At the time the kopia docs were written:

Kopia’s Google Cloud Storage (GCS) engine provides neither restricted access key nor object-lock support.

Google’s S3 compatibility layer does not provide sufficient access controls to use these features, and thus Kopia cannot use the ransomware mitigation discussed on this page with GCS at this time.

But maybe since then Google fixed their S3 implementation. We will see

“We will see”? More like: we’ll see right now.

Let’s destroy everything:

x@cloudshell:~ (x)$ gsutil rm -r gs://x/*
Removing gs://x/_log_20240105183808_84e9_1704476288_1704476289_1_cbb004f50230992600ca39556184e5de#1704476289686809...
Removing gs://x/_log_20240105183924_7403_1704476364_1704476366_1_e4982de1da61cbcf9bbc1f9d773afccf#1704476366204421...
Removing gs://x/_log_20240105183942_4df5_1704476382_1704476385_1_f4d4599b65cb9e5f7d6a5449dac3ee4e#1704476385881920...
Removing gs://x/_log_20240105184050_821f_1704476450_1704476573_1_5ee8f186ec2150a5da71daf044019447#1704476574123904...
/ [4 objects]                                                                   
==> NOTE: You are performing a sequence of gsutil operations that may
run significantly faster if you instead use gsutil -m rm ... Please
see the -m section under "gsutil help options" for further information
about when gsutil -m can be advantageous.

Removing gs://x/_log_20240105184050_821f_1704476573_1704476718_2_d7e222b1b19198daeb5ab307142e576f#1704476719045756...
Removing gs://x/_log_20240105184050_821f_1704476718_1704476774_3_ba6c60e0a234ecd539237e1f7a94095f#1704476774746647...
Removing gs://x/_log_20240105185656_6033_1704477416_1704477417_1_cbb2220bda9697d94c3997f1868ae3e6#1704477417257134...
Removing gs://x/_log_20240105185709_cc56_1704477429_1704477430_1_4098485261f44bbeefd7e65563675d07#1704477430578037...
Removing gs://x/_log_20240105185717_cff7_1704477437_1704477438_1_1147a325f82e0b234e0930c7258bea3f#1704477438336221...
Removing gs://x/_log_20240105185722_a412_1704477442_1704477444_1_afe2790028b4710337ae6dfd73d50a65#1704477444154099...
Removing gs://x/_log_20240105185732_9832_1704477452_1704477453_1_08cac04e10a051307218a490525f351f#1704477454070963...
Removing gs://x/_log_20240105185902_6b25_1704477542_1704477552_1_191273f85b0346b5761476e8513bb612#1704477552358647...
Removing gs://x/_log_20240105192307_f24d_1704478987_1704478989_1_75e0320675e85c5f848f275557ec35e5#1704478989390413...
Removing gs://x/kopia.blobcfg#1704476286583116...                    
AccessDeniedException: 403 Object 'x/kopia.blobcfg' is subject to bucket's retention policy or object retention and cannot be deleted or overwritten until 2024-07-03T10:59:10.636657674-07:00
x@cloudshell:~ (x)$

Uhm, I have many more files in the bucket…

Deleted everything from the web interface. Now the bucket is empty (but I can see the deleted files thanks to versioning).

Kopia is not happy:

# kopia snapshot list
root@x:~# 
# kopia repository connect --point-in-time=2024-01-05T18:00:00.000Z
kopia: error: unknown long flag '--point-in-time', try --help

There isn’t any point-in-time argument in v0.15:

# kopia repository connect --help | grep point
root@x:~# 

It seems to be impossible to reconnect to a saved repository via from-config while setting the point-in-time value. To use point-in-time I have to specify all the connection parameters (endpoint, access key and so on), BUT then there is an error when checking whether versioning is enabled.

I’m trying to find which S3 API call the minio-go library makes (and to which endpoint), to check whether it is supported by Google.


Some progress: the API calls work properly, but I had to create a custom role (not sure which permissions are needed; I have to check).

But, probably due to the custom role (and some missing permission), I’m getting this error:

DEBUG got error could not get version metadata for blob kopia.repository: could not list objects with prefix "kopia.repository": unrecognized option:Marker when GetBlob(kopia.repository,0,-1) (#0), sleeping for 100ms before retrying

Based on versioning, the kopia.repository file (currently deleted like everything else) was stored at 2024-01-05T18:38:06, so I was trying with --point-in-time=2024-01-05T18:38:06.000Z

Not sure if the error above is due to a wrong point-in-time date or to missing permissions.

Not sure what permissions you are talking about…

Connect using the same keys as you used before (disconnect first):

kopia repository connect s3 --bucket test-lock --access-key=XXX --secret-access-key=XXX --endpoint url.com --point-in-time=2024-01-04T11:10:00.000Z

You have to use a point in time from before the deletion, so pick something like 15 minutes before.

On the other hand, Google’s S3 API is still buggy - there was recently an issue with versioning reported against rclone:

https://issuetracker.google.com/issues/312292516

But they seem to be working on it.

For kopia, if things do not work it would mean hard-core debugging to find out exactly what is not working → report to Google → wait until they fix it.

Let’s troubleshoot one step at a time.
Which date should I put in point-in-time? I don’t know the exact time the deletion was done, so where should I get a valid date to use?

I’ve tried to use one of the dates shown for the kopia.repository versions, for example:

$ gcloud storage objects describe gs://x/kopia.repository#1704476286779475
acl:
- entity: project-owners-x
  projectTeam:
    projectNumber: 'x'
    team: owners
  role: OWNER
- entity: project-editors-x
  projectTeam:
    projectNumber: 'x'
    team: editors
  role: OWNER
- entity: project-viewers-x
  projectTeam:
    projectNumber: 'x'
    team: viewers
  role: READER
- email: backup@x.iam.gserviceaccount.com
  entity: user-backup@x.iam.gserviceaccount.com
  role: OWNER
bucket: x
content_type: application/x-kopia
crc32c_hash: 376ALg==
creation_time: 2024-01-05T17:38:06+0000
etag: CNOA6PzkxoMDEAM=
generation: '1704476286779475'
md5_hash: hM2ApNoF6JCeRr7rSOmbuw==
metageneration: 3
name: kopia.repository
noncurrent_time: 2024-01-05T18:29:35+0000
retention_expiration: 2024-07-03T17:59:10+0000
retention_settings:
  mode: Unlocked
  retainUntilTime: '2024-07-03T17:59:10.621000+00:00'
size: 1109
storage_class: COLDLINE
storage_class_update_time: 2024-01-05T17:38:06+0000
storage_url: gs://x/kopia.repository#1704476286779475
update_time: 2024-01-05T17:59:10+0000
x@cloudshell:~ (x)$

noncurrent_time is set to 2024-01-05T18:29:35+0000, so I would think the file was deleted at that time, but using --point-in-time=2024-01-05T18:29:35.000Z gives me the error shown above.

The bucket is totally empty; files are visible only through versioning. There isn’t any live object, but this shouldn’t be an issue.

Any date before - of course, one at which you know you already had your repo. For me it works for any date/time before the deletion.
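If you need candidate timestamps, one way is to list all generations of an object (including noncurrent ones) and pick a time when the repo was still intact; flags as per the gsutil/gcloud documentation:

$ gsutil ls -la gs://x/kopia.repository
$ gcloud storage ls --all-versions gs://x/kopia.repository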

Anyway, I was looking at the Kopia sources; adding native GCS versioning support like the S3 backend has shouldn’t be very hard. Almost everything is self-contained in gcs_storage.go and, based on its S3 counterpart s3_storage.go, there are only a few methods to add/change. If I were a Go programmer I would have done it myself.


It seems to be a bug in Google’s S3 mode as driven by minio-go: the Marker option is not supported (I don’t know what it is or why it’s needed). rclone with its S3 backend pointing to the same Google storage shows all the files as expected.
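For reference, a rough sketch of that rclone check, assuming an s3-type remote named gcs-s3 configured with endpoint storage.googleapis.com and the same HMAC keys:

$ rclone ls gcs-s3:bucket-name --s3-versions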

Still a very good and valid test.

At least now we know that it still does not work with the Google S3 API.

Unfortunately, IMO GCS is not very popular among kopia users - there are much cheaper S3 alternatives that work perfectly fine. My S3 provider charges $30 per year per TB with no egress/transaction charges. It would be something like $300 with GCS, plus egress…

Yes, and I’m still without a safe backup :smiling_face_with_tear:

Trying to add native versioning support to the GCS backend right now…


This is the right attitude! :slight_smile: :muscle: :muscle: :muscle:

Good luck.

Sure, but I’m not a Go programmer, so it’s hard.
