Multi-server setup with a load balancer

Is it possible to have a load balancer in front of multiple MIAB servers, all serving the same domains?

Option 1
With an NFS server

I would set up five servers: two MIAB servers, two NFS servers, and one load balancer server.

I would store the MIAB data on the NFS server.
Create an NFS mount on both MIAB servers at /mnt/NFS (sketched below).
Install MIAB on both MIAB servers with its data on /mnt/NFS.

At this point both MIAB servers would see the same data from the NFS server and ignore their local MIAB installation data.

Then I would set up real-time replication to the second NFS server, and if one NFS server crashed I would simply swap over to the other. I could use GlusterFS for the replication between the two NFS servers.
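Roughly, the NFS part might look something like this; the hostnames, IPs, and export path are placeholders, and whether MIAB's setup can be pointed at a custom storage root is something I would verify against the current setup scripts first:

```bash
# On the NFS server (10.0.0.10 is a placeholder IP): export a directory for the MIAB data.
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /srv/miab-data
echo '/srv/miab-data 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each MIAB server: mount the export at /mnt/NFS and make it persistent.
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/NFS
echo '10.0.0.10:/srv/miab-data /mnt/NFS nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount /mnt/NFS

# MIAB would then be installed on each MIAB server with its data pointed at
# /mnt/NFS instead of the default /home/user-data (assuming the setup script
# can be told to use a different storage root; check before relying on it).
```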

Option 2
Another option would be to use GlusterFS and no NFS server at all:
a load balancer in front of three MIAB servers sharing a GlusterFS volume mounted on /home/user-data/ (sketched below).
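As a rough sketch of what that could look like (miab1/2/3 are placeholder hostnames and the brick path is arbitrary):

```bash
# On every node (miab1, miab2, miab3): install Gluster and create a brick directory.
# Bricks ideally sit on their own filesystem; 'force' is needed if they live on the root partition.
sudo apt-get install -y glusterfs-server
sudo systemctl enable --now glusterd
sudo mkdir -p /gluster/brick

# On one node only: form the trusted pool and create a 3-way replicated volume.
sudo gluster peer probe miab2
sudo gluster peer probe miab3
sudo gluster volume create userdata replica 3 \
    miab1:/gluster/brick miab2:/gluster/brick miab3:/gluster/brick
sudo gluster volume start userdata

# On each node: mount the volume where MIAB keeps its data, before installing MIAB.
sudo mkdir -p /home/user-data
echo 'localhost:/userdata /home/user-data glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount /home/user-data
```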

Reason
This would allow me to move between servers if a problem arises with my hosting provider. The problem is that with BuyVM I have been having weird downtime, all caused by the hosting company. This downtime has occurred at random times over the past 2-4 years.

BuyVM are great to work with and will fix the problem within 12 hours, but you are still not supposed to have downtime due to your hosting provider. @alento I know you are an advocate of BuyVM, but a MIAB server (mail and primary DNS) needs to be stable and up 99% of the time.

With the multi-server idea above, I would make sure that BuyVM places the servers on different clusters within the same data center. That way, if one VM cluster goes bad, there is still another machine to rely on.

What are your thoughts on that?

I think this is completely uncharted territory. That said, I am intrigued by your idea and wonder whether it would work out in production … let me know if you decide to be the guinea pig. :stuck_out_tongue:

I think I caught the tail end of your conversation with Fran on their Discord. The issue you are experiencing is unfortunate, but it can happen; sadly, it happened to you. Personally, I have had a great experience with Slabs, but again, there is always that exception. :frowning:

I tend to find NY to be the most stable location, so if you are migrating, may I suggest moving from LV to NY. As for those instances of downtime, do you have secondary DNS in place? It is a must, IMHO, no matter how reliable your provider is.

I like the NFS option better because I think the file access is more real-time than GlusterFS, but I'm wondering: if you have two MIABs pointing to the same data, can that data be accessed concurrently? What kind of database is behind MIAB?
I think there is more than one storage location, and some SQLite is involved. SQLite shouldn't be a problem, but how about the rest of the data?
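For what it's worth, a quick way to see which SQLite databases MIAB actually uses on a running box is below; the paths assume a default install, so verify them on your own system:

```bash
# List the SQLite databases under MIAB's data directory (default install path;
# adjust if your storage root differs).
sudo find /home/user-data -name '*.sqlite'

# The mail user database is typically here. Its journal mode hints at how it
# handles concurrent access; SQLite relies on POSIX file locking, which is
# exactly the part that tends to be fragile over NFS or GlusterFS.
sudo sqlite3 /home/user-data/mail/users.sqlite 'PRAGMA journal_mode;'
```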

I do have multiple secondary DNS servers; I went down so many times that I set that up very quickly.

The way I was set up with BuyVM was a small server with a 256 GB volume attached to it. This allowed me to keep a smaller server and still run my mail server with lots of data.
I then installed MIAB on the attached volume, so everything ran from the volume.
But from the beginning my server was very slow. I don't need much speed, so I didn't care that it was slow. Then, every 2-3 months, the server would become so slow that it returned errors and MIAB stopped working.
I would open a ticket and they would do something to fix it each time.

Anyway, now that I'm off the Slab (volume), I will not use that again. They are using actual hard disks instead of NVMe, which is probably why.

I'm thinking that maybe what you can do is this: if you have, for example, three machines, you install Proxmox on them and create a Proxmox cluster. Then you install a virtualized MIAB on one of the machines and configure the High Availability (HA) option in Proxmox. With this setup, the MIAB VM runs on one node at a time, and if that node goes down, HA will restart it on one of the other two nodes. Proxmox also has resource-scheduling options that can balance load across nodes, so I believe this setup could solve your problem fairly easily.
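In Proxmox terms, the moving parts are roughly these; the node IP, cluster name, and VMID are placeholders, and the VM's disk has to live on shared or replicated storage (Ceph, ZFS replication, NFS, etc.) so another node can start it after a failure:

```bash
# On the first Proxmox node: create the cluster.
pvecm create miab-cluster

# On the other two nodes: join it (10.0.0.11 is the first node's IP).
pvecm add 10.0.0.11

# Once the MIAB VM (VMID 100 here) exists on shared/replicated storage,
# register it as an HA resource so a surviving node restarts it after a failure.
ha-manager add vm:100 --state started
ha-manager status
```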

