500 Internal Server Error for /admin

I just ran the install on a fresh Ubuntu 14.04 VPS. I keep getting a 500 error when I try to access /admin. The Roundcube route at /mail works fine.

I don’t even know where to start debugging this. /var/log/syslog shows nothing helpful when I hit /admin. I also tried /var/log/php5-fpm.log and /var/log/nginx/error.log, and neither of them turns up anything useful.

What have I missed?

Run:

sudo DEBUG=1 management/daemon.py

and let me know what happens.

chris@mail-two:~/mailinabox$ sudo DEBUG=1 management/daemon.py
--------------------------------------------------------------------------------
INFO in daemon [management/daemon.py:469]:
API key: Oz783VYlsd53ySvIwAYEB0TUw5mb3j17Czfhl939ZGI=
--------------------------------------------------------------------------------
 * Running on http://127.0.0.1:10222/
Traceback (most recent call last):
  File "management/daemon.py", line 472, in <module>
    app.run(port=10222)
  File "/usr/lib/python3/dist-packages/flask/app.py", line 772, in run
    run_simple(host, port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 706, in run_simple
    test_socket.bind((hostname, port))
OSError: [Errno 98] Address already in use

Ok. That shows the /admin server can start fine; it only fails because the real daemon is still running in the background and already holds the port. Which is actually a good sign.

Kill that, then run sudo service mailinabox stop, run the above command again, and then load /admin in your browser and tell me what the console output is.

Every time I ran sudo service mailinabox stop, the process wouldn’t die. So I ran netstat to find the process listening on port 10222 and killed it manually. It was a Python 3 process, and killing it freed up the port.
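For reference, the commands I used were roughly these (the PID is whatever netstat reports as holding port 10222 on your box):

sudo netstat -tlnp | grep 10222
sudo kill <PID>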

I tried running sudo service mailinabox start again to see if that fixed it, but I still got the 500 error. So I ran netstat again, killed the Python 3 process, and fired up management/daemon.py once more. Here’s the output from hitting the /admin route:

chris@mail-two:~/mailinabox$ sudo DEBUG=1 management/daemon.py
--------------------------------------------------------------------------------
INFO in daemon [management/daemon.py:469]:
API key: dxqNqjfHuOUHG462JS9PeRDI9jL/0ui7I792z1GDtgU=
--------------------------------------------------------------------------------
 * Running on http://127.0.0.1:10222/
 * Restarting with reloader
--------------------------------------------------------------------------------
INFO in daemon [management/daemon.py:469]:
API key: 49bGbZLKbpBWti1yQAiGAwDNTSKiT4N7VCoxOZRslkQ=
--------------------------------------------------------------------------------
127.0.0.1 - - [24/Nov/2015 17:03:49] "GET / HTTP/1.0" 500 -
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3/dist-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3/dist-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/chris/mailinabox/management/daemon.py", line 97, in index
    import boto.s3
  File "/usr/local/lib/python3.4/dist-packages/boto/__init__.py", line 1216, in <module>
    boto.plugin.load_plugins(config)
  File "/usr/local/lib/python3.4/dist-packages/boto/plugin.py", line 93, in load_plugins
    _import_module(file)
  File "/usr/local/lib/python3.4/dist-packages/boto/plugin.py", line 75, in _import_module
    return imp.load_module(name, file, filename, data)
  File "/usr/lib/python3.4/imp.py", line 235, in load_module
    return load_source(name, filename, file)
  File "/usr/lib/python3.4/imp.py", line 171, in load_source
    module = methods.load()
File "/usr/share/google/boto/boto_plugins/compute_auth.py", line 64
    except (urllib2.URLError, urllib2.HTTPError, IOError), e:

Same issue here… I just tried the install; everything went fine until it tried to create a user… (again):

500 Internal Server Error

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

Running the DEBUG command I get:
~/mailinabox# DEBUG=1 management/daemon.py

INFO in daemon [management/daemon.py:469]:
API key: BN4/vQmwXeNKneawN4JYSPCKLTfH+ckAYZJdHhgL4wc=

 * Running on http://127.0.0.1:10222/
Traceback (most recent call last):
  File "management/daemon.py", line 472, in <module>
    app.run(port=10222)
  File "/usr/lib/python3/dist-packages/flask/app.py", line 772, in run
    run_simple(host, port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 706, in run_simple
    test_socket.bind((hostname, port))
OSError: [Errno 98] Address already in use

So it looks very similar to the previous poster’s issue…

Yep! That’s what I’m dealing with too. I spun up my first Mail-in-a-Box about a year ago and it’s been hassle-free the entire time. I’m currently migrating to a new server, so I figured I’d just spin it up like before… only there appears to be a bug in the admin server :frowning:

@epsilon: Do you know how /usr/share/google/boto/boto_plugins got on your system? (Seems to be from this package.) Are you launching your VM within GCP? This may not be compatible with the version of boto we install.

Also, the stack trace that you pasted looks incomplete so I don’t see what the actual error was.

@joshData I’m running on Google Compute Engine, which should account for the Google plugins.

Here’s a screenshot of the page:

@joshData Here’s a gist of the context for compute_auth.py. Discourse won’t let me post another link, because I’m a “new user”. Baloney. Anyway, I’ll link it as a code block.

https://gist.github.com/deltaepsilon/88dd0a2d7547737b1206#file-compute_auth-py-L64

Yeah, I understand the problem now. It’s a conflict: that library doesn’t work in Python 3, our management server runs on Python 3, and so boto crashes when it tries to load the library. We’ll either need to have Mail-in-a-Box disable boto plugins, or you’ll have to see whether you can disable them on your end.
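For the curious, the line the traceback stops at is the giveaway: compute_auth.py uses the old Python 2 except syntax, which Python 3 rejects as a SyntaxError at compile time, so the plugin can’t even be imported. A minimal illustration of the difference (not the actual plugin code):

# Python 2 only; a SyntaxError under Python 3:
#     except (urllib2.URLError, urllib2.HTTPError, IOError), e:
#         ...
# The Python 3 spelling uses "as":
try:
    raise IOError("example")
except (IOError, OSError) as e:
    print(e)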

Any idea how I can disable boto plugins on my end? Google Cloud uses them at the OS level. I tried sudo pip uninstall boto, but it tells me it can’t uninstall it due to OS ownership. I’m running an Ubuntu instance that appears to be using Python 2.7… but I’m no Python dev, so I could be confused.

Ok I’ve pushed a fix for this. Can you try updating to the latest development version of Mail-in-a-Box:

cd mailinabox
git pull
sudo setup/start.sh

Hopefully everything will work after that.
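If you can’t update right away, the general idea is to keep boto from reading the GCE-supplied config that points it at /usr/share/google/boto/boto_plugins. Since boto honours the BOTO_CONFIG environment variable, pointing it at an empty config before boto is imported should be enough. This is only a sketch of the approach, not necessarily what the pushed fix does:

import os

# Make boto read an empty config instead of /etc/boto.cfg, so it never sees
# the plugin_directory setting that pulls in the Python-2-only GCE plugins.
# This has to happen before boto is first imported.
os.environ["BOTO_CONFIG"] = "/dev/null"

import boto.s3  # should now import without trying to load any plugins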

Worked like a charm. I actually deleted my mailinabox directory and pulled fresh from GitHub. Thanks for your help! I’m not quite up and running yet, but I’ll take a breath and see if DNS propagation doesn’t straighten things out.

A quick note to anyone else attempting to deploy to Google Cloud… Google Cloud has its own firewall system. I tried setting it up initially using UFW, but that failed. Then I found the Network -> Firewall tab in the console and got it running pretty quickly.
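If you prefer the command line, a rule along these lines should do the same thing. The rule name is arbitrary and the port list is just my guess at what a mail server needs (SMTP, DNS, HTTP/HTTPS, submission, IMAPS, POP3S), so check it against your own setup:

gcloud compute firewall-rules create allow-mailinabox \
    --allow tcp:25,tcp:53,udp:53,tcp:80,tcp:443,tcp:587,tcp:993,tcp:995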

Fantastic. Thanks for your help figuring out the issue.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.