I know that this is a very, very old thread, but I thought I’d chip in.
We use SQLite as the main database for our apps; currently the largest database we host is 75GB, and it grows by around 500MB per week. We're by no means experts, but we know a reasonable amount of SQL and how to keep databases going.
For what we use this DB for, SQLite is more than adequate; with 125 users on one server it won't even break a sweat. I looked at the schema and it's trivial. It's also not optimal, as there are no indexes at all, but even if it table-scanned every single entry I can't imagine SQLite being the bottleneck. Postfix and the antivirus software will be the bottleneck on any e-mail.
The other advantage of SQLite is its simplicity: it's easy to set up and run. Don't mistake simplicity for lack of power, though. We do an analytics run through 75GB of data once a day to generate stats, and after tuning the database we can process that 75GB in 90 seconds. To be fair, we make sure all the data the queries need is held in the indexes (covering indexes), so SQLite never actually touches the tables themselves and there is essentially zero table I/O.
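To illustrate the covering-index trick, here's a rough sketch. All the names (file, table, columns) are made-up placeholders, not our actual schema:

```python
import sqlite3

con = sqlite3.connect("stats.db")  # hypothetical database file
con.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY,
        day TEXT NOT NULL,
        user TEXT NOT NULL,
        bytes INTEGER NOT NULL
    )
""")

# A covering index holds every column the stats query reads, so SQLite
# can answer from the index B-tree alone and never visits the table.
con.execute(
    "CREATE INDEX IF NOT EXISTS idx_events_cover ON events (day, user, bytes)"
)

# The daily stats pass: a grouped aggregation served from the index.
for day, user, total in con.execute(
    "SELECT day, user, SUM(bytes) FROM events GROUP BY day, user"
):
    print(day, user, total)

# Sanity check: the plan should say "USING COVERING INDEX idx_events_cover".
for row in con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT day, user, SUM(bytes) FROM events GROUP BY day, user"
):
    print(row)
```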
For a backend database like this, SQLite is quick and easy, with very low call latency; indeed, we have used SQLite in preference to certain file I/O operations.
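As a (hypothetical) example of what I mean by replacing file I/O, reading a small blob out of SQLite instead of opening a file per item looks like this:

```python
import sqlite3

con = sqlite3.connect("blobs.db")  # placeholder store, not our real one
con.execute(
    "CREATE TABLE IF NOT EXISTS blobs (name TEXT PRIMARY KEY, data BLOB)"
)
con.execute(
    "INSERT OR REPLACE INTO blobs VALUES (?, ?)",
    ("config.json", b'{"retries": 3}'),
)
con.commit()

# One indexed lookup replaces an open()/read()/close() cycle per file;
# for lots of small items that saves a filesystem round trip each time.
(data,) = con.execute(
    "SELECT data FROM blobs WHERE name = ?", ("config.json",)
).fetchone()
print(data)
```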
So just in case anybody is thinking about pulling out SQLite and putting in MySQL (or any derivative), I'd be very careful, as it may well be slower. The issues often raised about random 4K writes are not an issue for a database of this size. Putting a simple index on the two tables will give you immediate response times, as the index will all be in memory. Opening and closing a SQLite database file is not a massive overhead.
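For anyone wanting to try that, something like the following would do it. The table and column names are just stand-ins for whatever the real mail tables look like:

```python
import sqlite3

con = sqlite3.connect("mail.db")  # placeholder name
# Stand-in for one of the two tables mentioned above.
con.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "  id INTEGER PRIMARY KEY, recipient TEXT, subject TEXT)"
)

# A plain index turns full-table scans into B-tree probes, and a hot
# index of this size will live entirely in the page cache.
con.execute(
    "CREATE INDEX IF NOT EXISTS idx_messages_recipient "
    "ON messages (recipient)"
)

# Confirm the index is actually used before relying on it.
for row in con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM messages WHERE recipient = ?",
    ("alice@example.com",),
):
    print(row)  # expect: SEARCH messages USING INDEX idx_messages_recipient
```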
Just my 2p worth, and apologies for resurrecting an old thread.