Nothing beats the convenience of doing apt-get install mysql-server or apt-get install postgresql to get an instantly working open source DBMS. MySQL and Postgres are part of the foundations of a lot of open source library software, including ILSs (Koha and Evergreen), discovery interfaces (VuFind), digital repositories (DSpace and Fedora), and content management systems (Drupal, MediaWiki, Moodle, WordPress).
But there’s a catch: the packaged version of an RDBMS is optimized to work out of the box. Since the packager can’t control how big the box is, as it were, the tuning settings that ship with the package are usually sized for a small system. The upside is that you can dust off a five-year-old laptop, install your favorite flavor of Linux on it, and quickly get up and running with an RDBMS-based app.
The downside is that if you do the same thing on a production database server, the default settings can make even a small database perform slowly, sometimes quite slowly indeed. The key to performance with most RDBMSs is using as much RAM as possible and touching disk as little as possible. In the case of MySQL in particular, the default settings can leave gigabytes of system memory unused. You’ve paid for that memory; make it sing for its electrons.
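If you want to see just how modest those defaults are, the servers will tell you. The commands below are a quick sketch, assuming a stock Debian/Ubuntu install and the standard memory knobs (innodb_buffer_pool_size for MySQL's InnoDB engine, shared_buffers for Postgres); the exact defaults vary by version, but they are typically a small fraction of the RAM on a real server.

    # MySQL: how much RAM may the InnoDB buffer pool use? (value is in bytes)
    mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

    # PostgreSQL: how much shared memory will the server use for caching?
    sudo -u postgres psql -c "SHOW shared_buffers;"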
Tip: after installing a DBMS, tune it (although you may need to load your production data first to get the best results from the tuner). In the general case, that can be a very involved process, but the starting point can be quite simple. If you’re using MySQL, run mysqltuner. If you’re using Postgres, run pgtune.
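As a sketch of what that first pass can look like on a Debian-ish system (the package name, config path, and flags below are examples; check each tool's --help for your version):

    # mysqltuner inspects a running MySQL server and prints suggested
    # my.cnf changes; run it after the server has seen some real traffic.
    sudo apt-get install mysqltuner
    mysqltuner

    # The classic pgtune script reads an existing postgresql.conf and writes
    # a copy with suggested memory and planner settings appended.
    pgtune -i /etc/postgresql/9.0/main/postgresql.conf -o postgresql.conf.pgtune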
Want the tao of database tuning in dead tree form? I’ve read and recommend both High Performance MySQL by Jeremy D. Zawodny and PostgreSQL 9.0 High Performance by Gregory Smith.