I’m testing MIAB, and my first impression is very good.
But the current backup implementation is not very well thought through, in my opinion.
So now I’m building my own.
Before diving into the system, I thought I’d ask first.
At the beginning of backup.py it says:
“System services are stopped.”
Which services, exactly?
“An incremental encrypted backup is made using duplicity.”
Of what? Presumably /home/user-data, and that’s it?
In a hypothetical situation where a new installation is made and the backup needs to be restored, is restoring /home/user-data enough, or are there other steps needed?
Thanks.
EDIT:
Actually, the services list was easy to find: php8.0-fpm, postfix, dovecot, postgrey.
But is it really necessary to shut them down? With potentially hundreds of GB to back up, the backup takes some time to complete (even with rsync), and mail is inaccessible during that time.
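For context, since I’m replacing this anyway: the pattern in question is roughly the sketch below (simplified, not MIAB’s actual code; the local target path is made up).

```python
import subprocess

SERVICES = ["php8.0-fpm", "postfix", "dovecot", "postgrey"]

def service(action, name):
    subprocess.run(["systemctl", action, name], check=True)

# Pause mail delivery so files are not written mid-copy.
for s in SERVICES:
    service("stop", s)
try:
    # Copy the data directory while everything is quiesced.
    subprocess.run(
        ["rsync", "-a", "--delete", "/home/user-data/", "/backups/user-data/"],
        check=True,
    )
finally:
    # Bring mail back up even if the copy failed.
    for s in SERVICES:
        service("start", s)
```

The longer the rsync runs, the longer that window where mail is down.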
The backups work fine for what is likely 3,000+ users.
Likely all of the services that need to be shut down to ensure mail gets a consistent backup: Dovecot, PHP, Postfix, Postgrey. But I see that you have figured that out via your edit.
Yes, I believe it is /home/user-data, which is all that is really needed to restore your box. I’ve moved between several boxes and tested backup and restore several times, and that’s all you need!
Correct, pretty much all you need is /home/user-data. You will need to answer the setup questions as if you’re creating a new box, but once you do, all of the other settings, users, and multiple domains will be picked up from the configuration stored in /home/user-data.
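In other words, the restore flow on a fresh box is roughly this sketch (the backup URL and key location are examples, and I’m assuming the usual sudo mailinabox re-run):

```python
import os
import subprocess

# The backup secret key you saved from the old box.
os.environ["PASSPHRASE"] = open("/root/secret_key.txt").read().strip()

# 1. Decrypt and restore the user data into place (--force overwrites
#    whatever the fresh install already put there).
subprocess.run(
    ["duplicity", "restore", "--force",
     "rsync://backupuser@backup-host//backups/mailbox",
     "/home/user-data"],
    check=True,
)

# 2. Re-run setup so /etc and the services are regenerated around the data.
subprocess.run(["sudo", "mailinabox"], check=True)
```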
No, no, the project is very good; I don’t mean to offend anybody. I have searched for a long time for a simple way to set up a modern mail server, and MIAB ticks almost all the boxes. There are just some shortcomings (missing quotas) and “unusual” design choices (partial backup to the local disk), in my opinion. Apparently I take backups more seriously than those 3,000+ people, but that is my problem, and some additional scripting will fix it.
This is based on “NSA-proof your e-mail in 2 hours”, and it is a community-based project, as far as I understand. There is also an element of being accessible to small businesses.
There is a quota implementation on git that is a candidate to be merged.
Can you be more specific about this? Is this more of a robustness question?
I would give option 1: use rsync (local or remote), and option 2: use duplicity.
The second option with separate checkboxes for what to back up: MIAB config, mailboxes, /etc, and so on.
Backups must not involve manual work. In real life, people get lazy and careless over time. Storing keys and suggesting that users copy backups manually from the local disk is not a good idea in the long term.
Most of us do not keep backups in public places and do not need encryption at the file level. But many of us may not remember, two years later, where that particular encryption key is stored.
Backups must contain everything needed to restore the previous state. Right now /etc is not included in the backup (as far as I understand), and many things vital for a functioning system are kept outside /home.
Backups should be in a standard format that is easy to work with. Restoring one file or the whole system should be fast and simple; with the current obscure file format and encryption it is not. Duplicity is not a very common tool compared to tar or bare rsync, for example. It is also complicated to check that a backup was actually made and contains what it should (see the sketch after this list).
From a practical standpoint, making backups (including remote ones) to the local disk is pointless, wasteful, and not reliable. Yes, for a very small system this may be acceptable, but for a 200 GB system it is a waste of space. In the case of rented VMs it can be expensive as well.
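To illustrate the checking point: verifying the current backup means something like the sketch below, and you need the passphrase at hand first. Compare that with simply listing a directory of tar files or an rsync mirror.

```python
import os
import subprocess

TARGET = "file:///home/user-data/backup/encrypted"
os.environ["PASSPHRASE"] = "..."   # step zero: find that key again

# What does duplicity think it has? Lists the full/incremental chain.
subprocess.run(["duplicity", "collection-status", TARGET], check=True)

# Compare the backup contents against the live data.
subprocess.run(["duplicity", "verify", TARGET, "/home/user-data"], check=True)
```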
By that I mean using a special tool nobody has heard of to make incomplete encrypted backups in an obscure format. That is an unusual decision.
Once this “unusual” decision is made, there is no easy fix.
I intend to disable the built-in backup and use my own script (using rsync) instead, the same one that is running on 30+ other servers I have. Also, to be able to restore the whole system in as short a time as possible, a backup of the whole VM is made every night. All backups (about 40 TB) are made directly to a backup server under my control in a different part of the same building. They then get replicated to an encrypted volume on an additional backup server, in rented space in a datacenter and not connected to the internet. The encrypted volume is for the case when a server gets stolen or something; at the same time it is easy to work with. That’s my backup strategy.
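The core of such a script is not much more than the sketch below (trimmed down; host and paths are made up). Each run makes a dated snapshot, hard-linking unchanged files against the previous one, so every snapshot looks complete but only changed data costs space.

```python
import datetime
import subprocess

SOURCE = "/home/user-data/"
HOST = "backupuser@backup-host"          # made-up backup server
BASE = "/backups/mail"
today = datetime.date.today().isoformat()

# Dated snapshot; --link-dest hard-links files unchanged since "latest".
subprocess.run(
    ["rsync", "-a", "--delete", "--link-dest=../latest",
     SOURCE, f"{HOST}:{BASE}/{today}/"],
    check=True,
)
# Repoint "latest" at the new snapshot for the next run.
subprocess.run(
    ["ssh", HOST, f"ln -sfn {today} {BASE}/latest"],
    check=True,
)
```

Plain directories on the other end: restoring one file is a copy, and checking the backup is an ls.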
I’m not saying that MIAB’s backup is totally useless and wrong, but to my almost 30 years of backup experience it feels not very … thought through. At the same time I can understand that when encryption and easy backup to cloud providers are needed, this is a valid choice. In most cases these things are not needed, though.
There’s no manual work as far as I know. Only when you set up the backup do you need to copy the key. And only when you choose local backup is there a need to copy it somewhere else yourself. But there’s also rsync to a remote server (as well as S3 and Backblaze support).
This depends very much on your backup target. Since Mailinabox supports several public places to back up to, it is very nice that it is encrypted by default.
Everything you need is under /home/user-data. The rest (e.g. what’s under /etc) will be recreated by Mailinabox setup or the nightly maintenance jobs.
Duplicity is quite a common tool. It actually uses the rsync algorithm (librsync) for the heavy lifting, but saves you from having to write your own backup scripts (which you would need if you only used tar or rsync).
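And restoring a single file is one command once you have the key; roughly like this sketch (the path inside the backup and the target are just examples):

```python
import os
import subprocess

os.environ["PASSPHRASE"] = "the-key-from-the-admin-panel"

# Pull one mailbox directory out of the backup without touching the rest.
subprocess.run(
    ["duplicity", "restore",
     "--file-to-restore", "mail/mailboxes/example.com/alice",
     "file:///home/user-data/backup/encrypted",
     "/tmp/alice-restored"],
    check=True,
)
```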
You must copy a key, keep it safe, and remember where you put it, etc., etc.
That is manual work.
It is a nice option, agreed, but it is the only option right now. And how many people actually back up to cloud providers and have a real need to encrypt their backups?
For that reason /etc should be included in the backup, so that all the custom settings can be restored after running a MIAB update. A week ago I changed firewall rules and some other things there, and the changes are still present today; maintenance did not remove them.
As far as I understand, in order to make one big encrypted file, duplicity must create it on the local disk first and then rsync it to the remote location. I’m not entirely sure why, but I have an exact copy of all the remote files in the /home/user-data/backup/encrypted folder. What makes me wonder is that even with a remote backup, duplicity keeps all the backup files on the local disk and just rsyncs that directory to the remote server. That’s why I called it wasteful and unreliable.
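If I read the duplicity docs right, it can also write straight to a remote URL and keep only a small metadata cache locally (under ~/.cache/duplicity), something like the sketch below (host and path made up). So the full local copy looks like a choice of the MIAB setup rather than a duplicity requirement.

```python
import os
import subprocess

os.environ["PASSPHRASE"] = "the-backup-secret-key"

# Back up straight to the remote target: full the first time,
# incremental afterwards. No full local copy of the volumes.
subprocess.run(
    ["duplicity", "/home/user-data",
     "rsync://backupuser@backup-host//backups/mailbox"],
    check=True,
)
```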