This was supposed to be a “SceneKids.com the first year” post, but not much has changed since the 8 month post, aside from expanding to additional websites on the same platform. Instead, I’m going to discuss a dilemma I ran into with MySQL’s InnoDB data size and buffer pool. Specifically, dealing with a ballooning data size without throwing money at the problem for more RAM.
At its peak, my InnoDB data size was upwards of 1.6G on a Linode 2048 (2G of RAM). I had allocated 1.2G of RAM to innodb_buffer_pool_size, and that worked well until the data size exceeded 1.4G or so. I could have thrown more RAM at the problem, but I had already done that a few times. Simply put, the data was growing at such a rate that throwing RAM at the problem would have only been a temporary solution.
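For reference, the relevant my.cnf setting looked something like this at the time (the surrounding details are my reconstruction, not a copy of the actual config):

```ini
[mysqld]
# 1.2G of the Linode's 2G went to the buffer pool
innodb_buffer_pool_size = 1200M
```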
The bulk of the data size was a table used to store messages between users. Typical stuff: subject line, message body, timestamps, and user mapping. After taking a closer look at the data and learning more about how the InnoDB buffer pool works, I realized that the TEXT field used for the message body was what was bloating the hell out of the data size. After running some local tests on a copy of the database, it became very apparent that removing the TEXT field entirely would reduce the size by 75%.
So now I knew which data to move, but I was unsure what to do with it. My buddy Sumit suggested looking into running a local IMAP server and just piggybacking on that well-established technology. Unfortunately, I found connecting to a local IMAP server from PHP to be very slow (sometimes seconds), but more than that, I was scared off by the idea of needing to migrate all that data to a brand new system. That’s not to say that IMAP isn’t a good solution. If nothing else, reading up on how IMAP actually works, the way it indexes messages and the Maildir format, is what led me to a solution that was to my liking.
The solution for me was to move the data out of the database and store it in a Maildir-style directory structure. Keep in mind, I didn’t move all of the data out of MySQL, just the TEXT field that contained the message body. The directory structure is simply the user’s UID, what type of message it is (when first implemented it was just “inbox”, but I’ve since added “sent”), and then a text file named for the message’s UID containing the body of the message. This gave me the most flexibility, since the table indexes stay in MySQL (and in RAM, since they all still fit).
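The layout described above can be sketched roughly like this. The site’s actual code was PHP; this is an illustrative Python sketch, and the function names and base path are my assumptions, not the real implementation:

```python
import os

# Hypothetical base directory for the flat-file message store.
MAIL_ROOT = "/var/mail-store"

def message_path(user_uid, folder, message_uid):
    """Build the flat-file path: <root>/<user UID>/<folder>/<message UID>."""
    return os.path.join(MAIL_ROOT, str(user_uid), folder, str(message_uid))

def store_message_body(user_uid, folder, message_uid, body):
    """Write the message body to its flat file, creating directories as needed."""
    path = message_path(user_uid, folder, message_uid)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(body)
    return path

def load_message_body(user_uid, folder, message_uid):
    """Read a message body back; all other metadata still lives in MySQL."""
    with open(message_path(user_uid, folder, message_uid)) as f:
        return f.read()
```

The key design point is that the file name is the same message UID that MySQL indexes, so a row lookup translates directly into a file path with no extra mapping table.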
The migration was broken out into a few steps. First, the system was updated to write new messages to both the database’s TEXT field and the flat file. Second, the existing data (a million and a half rows or so) was converted to flat files while leaving the data in the database. Then the code was migrated to use only the flat files, and the TEXT field was dropped. But wait, there was no recoup of data size?! Oh that’s right, I don’t use innodb_file_per_table at this time. Without it, the freed space just goes back into the shared ibdata tablespace, which never shrinks on disk; you have to dump the database, drop it, and re-import it (which I did to recoup the space). When all was said and done, the InnoDB data size was down below 400MB and at present is at 485MB (with 600MB allocated). Overall, it was quite the success for the site, as swap usage went down dramatically.
But wait, wouldn’t there be more disk I/O because of this?! Well, maybe, but that’s not what I saw. There was no noticeable increase in I/O (remember, MySQL writes to disk too, so it was probably a wash), but hard swapping did cease. I also left the same logic in place as before: write the message data to Memcached at the time of the message send with an expiration of an hour. This eliminates a ton of disk reads, because most messages end up being read within a few minutes of being sent anyway.
Based on how this all went, the same approach will be used when adding blogging to the site, since the blog body would end up being thrown into a TEXT field and could potentially bloat in the exact same way. Anyway, you don’t have to agree with it, but it worked out well and has scaled up nicely thus far :)