Location of S3 backup settings

Hello. I am trying to get S3 backups working to a non-AWS S3 storage system on two MIAB instances, at two locations. I have it working on one, but not the other. So I am trying to see where the configuration settings (.conf or whatever) are located in the file system. Can anyone help? Thanks :slight_smile:

You should be able to do it all from the admin panel. On the filesystem, I think all backup settings are stored in /home/user-data/backup/custom.yaml
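For example, to peek at what the admin panel has saved there (needs root; the path assumes a standard install):

    sudo cat /home/user-data/backup/custom.yaml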


Hi. Thanks for that, much appreciated; it was just what I was looking for. The error messages in the Backup admin panel are not specific enough to troubleshoot issues. For example, one just says “Enter an S3 bucket name.” even when a bucket name has been entered. I was able to copy the config to the other machine, modify it slightly, and test. It didn’t work, but at least I know the configuration is not the problem. Iz

Just to help, as I got it working, though the panel looks quite different than it did several versions ago when I set it up. The YAML should look like:

target: s3://region.amazonaws.com/bucketname/
target_pass: secret key
target_user: access key

At least that is how my working config looks :wink:

Replace region, bucketname, and the rest with your settings.
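If you are pointing it at a non-AWS provider that speaks the S3 API, the same three keys should work with the provider’s endpoint in the target URL. Here is a rough sketch of writing the file directly from a shell; the endpoint, bucket, and keys below are placeholders, and you may want to back up the existing file first:

    # Writes placeholder settings to the backup config (overwrites the current file).
    printf '%s\n' \
      'target: s3://object-storage.example-provider.io/my-backup-bucket/' \
      'target_user: YOUR_ACCESS_KEY' \
      'target_pass: YOUR_SECRET_KEY' \
      | sudo tee /home/user-data/backup/custom.yaml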

Hi there. Thank you for your comment; that is how the YAML file was structured. Of the two systems I am working with, one is on a VPS and is working OK, albeit with the Duplicity error below. It only differs from the recommended install in that it is running on ARM, not x86, and it’s been going fine. The other is self-hosted at home, on x86 bare metal. The email side works fine: email flows and certs install. But the non-AWS S3 is causing problems. I can connect to my bucket with rclone, HTTPS, etc., but not from MIAB. It could be any number of issues. I’m still experimenting and might get there eventually!
Errors, for example:

WARNING 1
WARNING: Using boto3 >= 1.36.0 with non-amazon s3 services may result in checksum errors. a workaround is to set the following env vars

    export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

see https://gitlab.com/duplicity/duplicity/-/issues/870 for details.

Attempt of list Nr. 1 failed. SSLError: SSL validation failed for https://object-storage.nz-por-1.catalystcloud.io/mailtwo hostname 'object-storage.nz-por-1.catalystcloud.io' doesn't match either of 'autoconfig.mail.abcd.xyz', 'autodiscover.mail.abcd.xyz', 'mail.abcd.xyz', 'mta-sts.mail.abcd.xyz'
WARNING 1

Thanks, Iz

I’ve been getting the same message:

WARNING: Using boto3 >= 1.36.0 with non-amazon s3 services may result in checksum errors. a workaround is to set the following env vars

    export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

see https://gitlab.com/duplicity/duplicity/-/issues/870 for details.

And this is the only remotely relevant thread, so I’m responding here.

As best I can tell, the solution to this problem is to add the environmental variables to /etc/environment. One way to do this is with nano.

First, ssh into your Box. Then:

sudo nano /etc/environment

In nano, scroll down one line, paste the following:

export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

(You may want to add an extra line break at the end, hence the blank line in the blockquote.)

Next, type Ctrl-O to save the file, then Ctrl-X to exit nano.

The environmental variables should take effect the next time the server is rebooted (I think?), so if you want to make sure they’ve been applied, you can manually force a restart with the following:

sudo reboot

Note that there are other, more elegant ways to add these environmental variables, such as with echo and >>, but I’m not sure how to put the sudo in the correct place for that, hence just using nano.
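(For the record, one form that puts the sudo in the right place is piping echo into tee with the append flag; this is a sketch I haven’t tested myself:)

    # sudo tee -a performs the root-owned append that a plain >> redirect cannot.
    echo 'export AWS_REQUEST_CHECKSUM_CALCULATION=when_required' | sudo tee -a /etc/environment
    echo 'export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required' | sudo tee -a /etc/environment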

After rebooting, you can ssh in again and check the variables with echo:

echo "$AWS_REQUEST_CHECKSUM_CALCULATION"

Should print when_required, and

echo "$AWS_RESPONSE_CHECKSUM_VALIDATION"

Should also print when_required.
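Or, to check both at once (just a convenience; the pattern matches the two variable names above):

    # Lists any variables in the current environment whose names contain CHECKSUM.
    env | grep CHECKSUM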

I just cross-posted this as an Issue on the MiaB GitHub, since I feel like these environmental variables could probably be set by the MiaB installer without any detrimental side effects.


Hi. Thanks very much, Elsie(?), I will apply the fix. I’m getting an alert about it every day, so that’s a motivation to attend to it :smile: Thank you for raising it on GitHub; I agree it is a good idea to ask for it to be corrected in the installer.
The issue I’m most struggling with is that odd ‘SSLError’. It’s very perplexing, especially given that all SSL certificates are reporting as correct and up to date on the MIAB Status page. Iz :slight_smile:

You’re welcome!

Though after 24h and another daily backup cycle I got another email with the same error, so the fix I proposed may or may not have actually worked… :grimacing:


If I put line breaks in the error you’re seeing, it looks like this:

Attempt of list Nr. 1 failed.

SSLError: SSL validation failed for https://object-storage.nz-por-1.catalystcloud.io/mailtwo

hostname object-storage.nz-por-1.catalystcloud.io doesn’t match either of

  • autoconfig.mail.abcd.xyz
  • autodiscover.mail.abcd.xyz
  • mail.abcd.xyz
  • mta-sts.mail.abcd.xyz

Does improving the legibility of the error like this help at all?
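What strikes me is that the certificate being presented carries your mail domains, which makes it look like the request to the S3 endpoint is somehow ending up at your own box. A couple of quick checks you could run on the box itself, as a sketch, using the hostname from your error:

    # Does the S3 endpoint resolve to your own server's IP when queried from the box?
    dig +short object-storage.nz-por-1.catalystcloud.io

    # Which certificate does that endpoint actually present to the box?
    echo | openssl s_client -connect object-storage.nz-por-1.catalystcloud.io:443 \
        -servername object-storage.nz-por-1.catalystcloud.io 2>/dev/null \
      | openssl x509 -noout -subject -issuer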

I searched for the error (using Kagi, so I can’t really share the search itself), and I got a handful of results.

You could try searching in each page for the strings “SSLError: SSL validation failed for” and “doesn't match either of”. (Note that doesn't has a non-curly apostrophe, so if you just type it and it autocorrects to doesn’t, i.e. with a curly apostrophe, you might not find a result.)

Hi there. I decided to build a new box, with a different domain name. I have learned that before you reuse the same domain for various experimental Linux installs, it pays to delete the DNSSEC settings at your registrar first, so that was worth finding out. :slight_smile: With the clean install and a different domain, so far 97.8% of things are going fine. The SSL errors went away (I’m sure that was a DNSSEC issue). I didn’t bother trying to use the GUI to set up the S3 backup; I just created the custom.yaml file, added the settings it needs (from above) and rebooted. It backs up fine… except that the

WARNING: Using boto3 >= 1.36.0 with non-amazon s3 services may result…

error message does not go away, even after adding those environment variables, as you found out. It’s a bit annoying, but it pales in comparison to the joy of actually getting it to work with my obscure S3 host. I guess the next step is to check that those backups are in fact restorable.
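(When I get around to that, my rough plan is something like the one below. It is only a sketch based on my reading of the Duplicity docs and the MIAB layout, with placeholder keys and bucket, so treat it with caution.)

    # List the files in the most recent backup set without restoring anything.
    # MIAB keeps the encryption passphrase in /home/user-data/backup/secret_key.txt (first line).
    export PASSPHRASE="$(sudo head -n1 /home/user-data/backup/secret_key.txt)"
    export AWS_ACCESS_KEY_ID='YOUR_ACCESS_KEY'          # placeholder
    export AWS_SECRET_ACCESS_KEY='YOUR_SECRET_KEY'      # placeholder
    duplicity list-current-files s3://object-storage.example-provider.io/my-backup-bucket/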

I do get the impression Duplicity is the MIAB software problem that keeps on giving. On my ARM MIAB VPS at Hetzner, Duplicity was the only thing that had to be modified to make it work :slight_smile:

I replied to the github issue regarding this topic. Didn’t notice until now that it was being discussed here.

Maybe you’ve already tried this, but if not, I’d be curious whether it resolves the noise.

Hi. I have followed the suggestion outlined in the GitHub thread to add those lines to the cron job (rough sketch below), and they do seem to have suppressed the alert. @elsiehupp, have you tried the suggestion?
One final thing I’d like to check is that the backups are in fact still restorable.
Thank you, ‘dms’ :slight_smile:
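In case it helps anyone later, the gist of the suggestion is to put the two variable assignments above the job line in the cron file that runs the nightly tasks (cron tables accept plain NAME=value assignments, which is what makes the variables visible to Duplicity when the backup runs). This is a sketch only: the file path below is an assumption, and MIAB may overwrite its own cron file on update, so check what your install actually uses:

    # Confirm the cron file path and contents on your install first (path is an assumption).
    cat /etc/cron.d/mailinabox-nightly

    # Put the two variable assignments at the top of that file (their order does not matter).
    sudo sed -i '1i AWS_REQUEST_CHECKSUM_CALCULATION=when_required' /etc/cron.d/mailinabox-nightly
    sudo sed -i '1i AWS_RESPONSE_CHECKSUM_VALIDATION=when_required' /etc/cron.d/mailinabox-nightly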

Alas, I’m still getting the warning emails :-/