Speed of the admin menus

Hello. I have a Mail-in-a-Box machine virtualized on our server (we have other virtual machines running on it) using VMware vCenter. We have assigned it 2 cores (we have two 16-core Xeon processors) and 4 GB of RAM (out of a total of 128 GB), but these cores rarely exceed 8% utilization while the machine is running. We currently have 121 mailboxes created on this virtual machine. Yes, it sounds like a lot of mailboxes, but the performance of the machine is very good and the users are very happy with it. They don't experience any lag in normal use of their mailboxes.

BUT

The admin panel is very slow (only the admin panel; Roundcube and access to the mailboxes over POP or IMAP run at high speed). When I access the list of mail users, the interface takes a long time to display it, and I frequently get a message saying "Error, something went wrong, sorry" (this message is more frequent in Firefox than in Google Chrome). I noticed that if I try to access the admin panel in another web browser while I am waiting for the user list, I get a "504 Gateway Time-out" error.

Is there any optimization I can do to make the admin panel faster? I think the problem may be the SQLite database, but I'm not sure. Any ideas to solve this problem?

Thanks.

PS: Sorry for my English. I'm Spanish and it is a bit hard for me to write in English.

The reason it is so slow is that it has to load data in the backend and run system checks like checking spam lists, DNS lookups, etc. There is no optimization to be had here…

OK, I have no problem with that. If it must be slow, then I'll let it be slow. The problem is the error. I am afraid that if the number of users grows, this error will appear more often than it does now, or even make it impossible to enter the user administration panel at all.

I suppose the error appears after a timeout. Can I modify the value of this timeout to set it to a higher value? The user administration panel might take longer to load, but it would load eventually and without errors.

That timeout is client specific, change it in your browser.

I have been trying to resolve this, and I'll post what I have found so far.

First of all: the error "Error, something went wrong, sorry" is a timeout from Python (the system takes more than 60 seconds to show the users, so the timeout error appears). I think that when you click Mail → Users, instead of looking up the users in a database, the system is going through each user's folders to get the user names, and that's why it takes so long. I have a lot of users (and a lot of folders), so the system is very slow at this.
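If the backend really does walk each user's mail folders, the cost grows with the total number of files on disk, which would explain why the page gets slower as mailboxes fill up. A minimal, self-contained Python sketch of such a `du`-style traversal (the function name and directory layout are illustrative, not Mail-in-a-Box's actual code):

```python
import os
import tempfile

def maildir_size(path):
    """Sum the sizes of all files under path, like `du -sb`.

    Every file must be stat()ed, so the cost is proportional to the
    total number of messages, not the number of accounts.
    """
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Demo on a throwaway maildir-like tree with three 100-byte "messages".
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "cur"))
for i in range(3):
    with open(os.path.join(base, "cur", "msg%d" % i), "w") as f:
        f.write("x" * 100)

print(maildir_size(base))  # 300
```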

In Firefox I went into about:config, searched for "timeout", and set the first value (accessibility.typeaheadfind.enabletimeout) to "false". When I did that, the error disappeared (thanks, Murgero, you guided me toward solving this problem). But I still think this should be corrected (it is not normal for the system to take more than 60 seconds to show the users).

OK, I have the problem again. It seems the solution was only temporary. There must be another timeout on the server side that I have to modify.

Perhaps the solution is in this folder:

/etc/nginx/conf.d

Here there is a file called "ssl.conf". If I edit it, I can see this line:

#keepalive_timeout 70; # in Ubuntu 14.04/nginx 1.4.6 the default is 65, so plenty good

So I added a line below it, like this:

keepalive_timeout 100;

but when I restart the machine (sudo shutdown -r now), nginx is not working. The same happens if I just uncomment the existing line.

I tried to add this line to nginx.conf (in the http section):

 fastcgi_read_timeout 300;

as posted in the following page

But the error continues. Is there maybe a setting for the Python execution time? How can I change it?

Does anybody know how to increase this timeout?

Thanks.

Hey, I get an entry in the logs when I activate keepalive_timeout in ssl.conf. I have this:

"keepalive_timeout" directive is duplicate in /etc/nginx/conf.d/ssl.conf:41

So there is another directive with the same setting somewhere, but it isn't in ssl.conf. This is a mystery.
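A likely explanation, assuming the stock Ubuntu nginx layout: nginx.conf already sets keepalive_timeout inside its http block, and the files in /etc/nginx/conf.d/ are included into that same http block, so a second keepalive_timeout in ssl.conf counts as a duplicate and nginx refuses to start. The fix would be to raise the existing directive in nginx.conf rather than adding a new one (the value below is illustrative):

```nginx
# /etc/nginx/nginx.conf -- Ubuntu's default already contains a line like this
http {
    # ...
    keepalive_timeout 100;  # raise the existing value; do not add a second copy in conf.d/ssl.conf
    # ...
    include /etc/nginx/conf.d/*.conf;  # ssl.conf is loaded inside this same http block
}
```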

In the logs, the error message I get when I try to enter the "Users" menu is the following:

2018/08/31 11:43:45 [error] 1891#0: *36 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: box.myserverdomain.org, request: "GET /admin/mail/users?format=json&=1535708564918 HTTP/1.1", upstream: "http://127.0.0.1:10222/mail/users?format=json&=1535708564918", host: "box.myserverdomain.org", referrer: "https://box.myserverdomain.org/admin"

(I used "myserverdomain" to hide the real address.)
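The log itself gives a clue: the upstream is http://127.0.0.1:10222, i.e. nginx forwards /admin to the management daemon with proxy_pass over plain HTTP, not FastCGI. If that is the case, fastcgi_read_timeout has no effect here; the matching directive for a proxied upstream is proxy_read_timeout. A hedged sketch of what the relevant location could look like (the exact location block and value are assumptions, not the actual Mail-in-a-Box config):

```nginx
# inside the server block that serves the admin panel (values are illustrative)
location /admin/ {
    proxy_pass http://127.0.0.1:10222/;
    proxy_read_timeout 300s;  # allow the slow /mail/users response more than the 60 s default
}
```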

Any idea?

OK, I have more data. The slow response in the "Users" tab is caused by checking the size of the mail accounts. When we click this tab, the system checks the size of every mail account. When the mail accounts are very big (one very big account with a lot of small received mails, or a lot of accounts), this error appears. I see I'm not the only one with this problem. See these posts:

So I think this should be fixed in an update. The user "Evgeniy-Bondaren" suggests adding a button that checks the size of the mail accounts only when necessary (or a button on each account to check the size of that one account), or separating the size check from the management of the accounts.

I think there are two more elegant ways to fix it:

  • increasing the timeout allowed for checking the size of the accounts
  • making a script that automatically checks the size of the accounts every night, writes the sizes to a database, and shows the stored size for each account the following day.
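The second idea could be sketched as a nightly cron job that stores each account's size in a small SQLite table, so the Users page only reads precomputed numbers instead of walking the maildirs. This is only an illustration under my own assumptions; the table name, paths, and functions are hypothetical, not Mail-in-a-Box's actual schema:

```python
import os
import sqlite3
import tempfile
import time

def maildir_size(path):
    """Sum the sizes of all files under path (the slow part, done at night)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def refresh_sizes(db, maildirs):
    """Nightly job: recompute and store each account's size with a timestamp."""
    db.execute("CREATE TABLE IF NOT EXISTS mailbox_size "
               "(email TEXT PRIMARY KEY, bytes INTEGER, checked INTEGER)")
    for email, path in maildirs.items():
        db.execute("REPLACE INTO mailbox_size VALUES (?, ?, ?)",
                   (email, maildir_size(path), int(time.time())))
    db.commit()

def cached_size(db, email):
    """Fast lookup the admin page would use instead of walking the maildir."""
    row = db.execute("SELECT bytes FROM mailbox_size WHERE email = ?",
                     (email,)).fetchone()
    return row[0] if row else None

# Demo with one fake 42-byte mailbox and an in-memory database.
box = tempfile.mkdtemp()
with open(os.path.join(box, "msg"), "w") as f:
    f.write("x" * 42)
db = sqlite3.connect(":memory:")
refresh_sizes(db, {"user@example.com": box})
print(cached_size(db, "user@example.com"))  # 42
```

The displayed size would be up to a day old, which seems an acceptable trade-off for an admin page that currently times out.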

Meanwhile, I will try to disable this space check. I'll continue investigating.

OK, I have a little workaround. I edited this file:

/home/myuseraccount/mailinabox/management/templates/users.html

I commented out the following lines:

code

I think this disables the check of the size of the mail accounts. Now I can manage the accounts again (I know the sizes shown are no longer real, but at least I can manage them).

This is only a workaround, but I think it should be treated as an issue to be resolved properly.

Thanks.