Double backup? (duplicity/encrypted)

I noticed that there are two subfolders in the /home/user-data/backup folder: duplicity and encrypted.
Probably, the duplicity folder contains the unencrypted data and the encrypted folder an encrypted copy of it, right?
Is it necessary to have both? They tend to get quite big… Or can I delete the encrypted files after I have downloaded them?

It will re-encrypt any backup files missing their encrypted counterpart.

So I really must make sure not to use more than 1/3 of the available space for mails/files (because the other 2/3 are needed for the backup)?
Would it be an option to use EncFS or something similar that only creates an encrypted/decrypted view of a decrypted/encrypted folder? This way, only half of the space would be needed for the backup.
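For reference, EncFS does have a `--reverse` mode that works this way: it presents an on-the-fly encrypted view of an existing plaintext directory, so no second ciphertext copy is stored on disk. A minimal sketch, with hypothetical paths:

```shell
# Present an encrypted view of the plaintext backup data.
# --reverse encrypts on the fly instead of decrypting;
# no ciphertext copy is ever written to local disk.
encfs --reverse /home/user-data/backup/duplicity /mnt/encrypted-view

# The encrypted view can then be synced offsite, e.g.:
rsync -a /mnt/encrypted-view/ backupuser@offsite.example.com:/backups/

# Unmount when done.
fusermount -u /mnt/encrypted-view
```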


I think I need to re-think the entire backup system. I’m not sure what people actually find useful.

To avoid wasting space on the mail server, maybe provide us with an option to back up directly to an offsite server via ssh, sftp, or other means. Having the backups on the same server as the mail server wastes space and provides no help when the server/hard drive fails.
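Duplicity can already target a remote host over SFTP, so the building blocks exist. A hedged sketch, with host, paths, and passphrase as placeholders:

```shell
# GPG passphrase duplicity uses to encrypt archives before upload.
export PASSPHRASE='your-backup-passphrase'

# Back up the mail data straight to an offsite host over SFTP.
duplicity /home/user-data \
  sftp://backupuser@offsite.example.com//var/backups/mailbox

# Verify the remote chain against the local data,
# then prune backup sets older than a month.
duplicity verify \
  sftp://backupuser@offsite.example.com//var/backups/mailbox /home/user-data
duplicity remove-older-than 1M --force \
  sftp://backupuser@offsite.example.com//var/backups/mailbox
```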


Duplicity supports a silly number of backup endpoints. The software is in place; it just needs a way to configure it at this point.

Yeah, I disabled the backups, installed duply, and used it to back up directly to an S3 SOS service. Still testing it, and it’s working fine for now; I’ll see how it goes. It seems to be an easy alternative: it’s simple to configure and you can enable multiple endpoints (i.e. you can back up each domain separately).
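For anyone wanting to reproduce that setup: duply wraps duplicity in per-profile config files. A sketch with placeholder values (endpoint, bucket, and keys are hypothetical):

```shell
# Create a profile skeleton in ~/.duply/mail/
duply mail create

# Then edit ~/.duply/mail/conf, e.g.:
#   GPG_PW='your-backup-passphrase'
#   SOURCE='/home/user-data'
#   TARGET='s3://s3.example-sos-provider.com/my-bucket/mail'
#   TARGET_USER='ACCESS_KEY'
#   TARGET_PASS='SECRET_KEY'
#   MAX_AGE=1M

# Run a backup (full or incremental as duplicity decides)
# and list what is stored on the endpoint.
duply mail backup
duply mail status
```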

I like the backup setup well enough right now… The alternative I can see that’s viable is S3QL… S3QL supports all the things one would probably want, and is independent of what backup method you use. It just exposes a locally mounted ‘backup’ directory that connects to your remote storage…

As for Glacier backups, what you can do is back up via any solution to S3 and then set Amazon to roll over your S3 data to Glacier after 24 hours.
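That rollover is configured with an S3 bucket lifecycle rule; a sketch using the AWS CLI, with a hypothetical bucket name:

```shell
# Transition everything in the bucket to Glacier one day after upload.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "rollover-to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}]
    }]
  }'
```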

S3QL is a filesystem-on-object-store that supports S3, Google Drive, and OpenStack… And it has built-in compression, encryption, deduplication, and copy-on-write, as well as support for live mounting of files over high-latency WAN connections.

Technically, S3QL is meant to work WITH something like Rsync, Duplicity / Duply, Bup, etc. It’s just the remote connectivity layer for filesystem-on-object-store over WAN.
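In practice that pairing looks roughly like this (bucket and mount point are hypothetical, and the exact storage-URL syntax varies between S3QL versions):

```shell
# One-time: create the S3QL filesystem on the object store.
# Compression, encryption, and deduplication are handled by S3QL itself.
mkfs.s3ql s3://my-bucket/s3ql-backup

# Mount it like a local directory...
mount.s3ql s3://my-bucket/s3ql-backup /mnt/backup

# ...and let any ordinary tool write into it, e.g. rsync:
rsync -a /home/user-data/ /mnt/backup/mailbox/

# Flush the cache and detach cleanly.
umount.s3ql /mnt/backup
```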

I’d like a sanctioned way to disable backup – as it is, if you use any serious amount of storage it causes problems, because the backups take up even more space than the original data. This would enable people to set up their own backup methods without having to manually muck around with MIAB.

It would be ‘nice’ to have a method for S3 encrypted backup built-in, and instructions for people to turn that on if they’d like, then Duplicity could back up straight to that. I’m willing to give that a go, but I’d like some feedback from @JoshData before I start tinkering.

An option in the admin panel to set the duplicity target URL would be great.

OK I’ll get on it. Time to figure out the admin ui!

Looks like it should be pretty easy and this guy has good instructions already, including setting policy. I’ve been through this once, but it’s good to have a reference.

I’m working on this now. I’ve got the initial template changes made, no problem. I can also rework the duplicity script. I’m getting in a bit over my head with the JavaScript, though. Currently I’m cargo-culting from the aliases page.

If I can get it most of the way hopefully someone more fluent in JS can get it the rest of the way?

I’m working on it in this branch:

That’s definitely in the right direction.