Anyone? Being unable to search my mailboxes is a show-stopper for me. I keep all sorts of things in my email archives. I’ve had to resort to grepping through the mailboxes from the shell, and that’s not a viable long-term solution.
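For anyone else stuck the same way, the stopgap looks roughly like this (a sketch; the path is Mail-in-a-Box’s default Maildir tree, so substitute your own domain and user):

# list the message files under a user's Maildir that contain the keyword
grep -ril "keyword" /home/user-data/mail/mailboxes/example.com/user/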
Actually, I think I just fixed it. After some more digging, I found that Mail-in-a-Box sets Roundcube’s imap_timeout to 15 seconds. I increased this (in my case, to 360 seconds, which may be overkill) by editing /usr/local/lib/roundcubemail/config/config.inc.php, and now my searches work!
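For reference, the change is just this one line (360 is my own guess at a generous ceiling, not a recommended value):

// /usr/local/lib/roundcubemail/config/config.inc.php
// Mail-in-a-Box ships this at 15 seconds; raise it so long IMAP
// searches aren't cut off at the socket level
$config['imap_timeout'] = 360;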
So I thought I had fixed this… it worked for a day or so, but now it’s back. It times out with this error right at 60 seconds. The changes I mentioned above in the various config files are still there, so I don’t understand why it’s timing out after 60 seconds. Watching the VM performance indicators shows that disk activity starts thrashing as soon as the search executes, CPU activity ramps up as well, and both continue for 2-3 minutes after the server produces this error (at the 60-second mark) in the web client.
So it would seem that the search is still running on the server, but the web client is timing out too early, at 60 seconds.
Anyone have any thoughts or suggestions on where to look to resolve this?
Here are some more reasons why I think this is a forced-timeout issue on the web client’s side.
If I narrow a full-text search down to “Unread” messages, then the search succeeds. I get 173 emails back, but those are only the unread messages containing that search keyword. (BTW, I only have Body defined for the search, nothing else.) This takes about 40-50 seconds… just under the 60-second timeout that larger searches hit.
Of course, what I’m looking for is in a read message, so this doesn’t help me. So if I revert it to “ALL” messages, it goes back to the “Server Error! (Error)” or “Server Error! (Gateway Timeout)” error states in the web client. This error comes up right at 60 seconds after starting any search. Always.
This is more frustrating for my wife, as she can’t use her old El Capitan-based email client due to the TLS issue. I moved her over to the web client instead until she can afford to buy a new Mac (major purchases are on hold until corona times are over). But she immediately ran into this issue trying to search her emails for some personal business info.
It’s like Mail-in-a-Box / Roundcube isn’t honouring that change to the nginx.conf timeout properly.
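For context, the nginx change I mean was along these lines. This is a sketch, not the literal Mail-in-a-Box config: fastcgi_read_timeout and its 60-second default are standard nginx, but exactly which generated file or block Mail-in-a-Box uses for Roundcube’s PHP handling is my assumption:

# in the nginx location block that hands Roundcube requests to PHP-FPM;
# fastcgi_read_timeout defaults to 60s, which matches the cutoff exactly
fastcgi_read_timeout 180s;

Worth noting that PHP-FPM has its own independent limits (request_terminate_timeout, and PHP’s max_execution_time) that can kill a request regardless of what nginx allows.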
And the crazy thing is, I know we never had these problems with previous versions of Mail-in-a-Box. I’ve been using the web client for all my email for more than a year and have successfully used full-text search on message bodies before (and my mailbox is bigger than hers, at 13GB), so I feel a recent upgrade has introduced this bug.
I actually had this thought initially, before you started making changes to nginx.conf… the problem is that I’m not good on the back end. I think you may get better traction opening this as an issue on the project’s GitHub page.
I’ve been scouring the configuration files, trying to find a timeout of 180 seconds / 3 minutes specified somewhere, but I cannot seem to find it.
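In case anyone wants to retrace my steps, I’ve been hunting with something along these lines (the paths are the usual suspects on a Mail-in-a-Box server; adjust as needed):

# recursively list every timeout-related directive in the likely configs
grep -rni "timeout" /etc/nginx/ /etc/php/ /etc/dovecot/ /usr/local/lib/roundcubemail/config/ 2>/dev/null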
I did a:
tail -f /var/log/mail.log /var/log/syslog /var/log/nginx/access.log /var/log/nginx/error.log
as I was running the searches, and besides connection notifications, there’s nothing in the logs that reveals why the request timed out. But clearly I’m hitting a barrier somewhere.
This is an example of the type of search I’m attempting to perform… I just chose an arbitrary name that I knew would be in my mailbox to test with. If I narrow the scope to just the current Inbox or a single folder, then it completes successfully in under 3 minutes. But it would be nice to have the full search scope working as well.
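One way to tell a genuinely slow search apart from a web-client timeout might be to run the same search server-side with Dovecot’s doveadm, bypassing Roundcube and nginx entirely (a sketch; substitute your own address and keyword):

# time an IMAP body search directly against Dovecot
time doveadm search -u user@example.com mailbox INBOX BODY "keyword"

If that also takes minutes, the bottleneck is likely the search itself (Dovecot grinding through the mail store without a full-text index), and the 60-second web error is just the messenger.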
My Mail-in-a-Box runs in a VM on an i5 (2.7-3.1GHz) CPU, with 4 cores dedicated to the VM and 4GB of RAM. The vDisk resides on a fast Samsung SSD. My mailbox has 60,000 email messages. So I’d imagine anyone with a machine of lesser specs may come up against this barrier as well.
I will continue to plough ahead and try to find the cause, but if anyone has an idea or a guess as to where I should look, it would be appreciated.