I’d like to know what people out there are using to run Rails migrations with MySQL.
Things are slowly improving regarding schema changes in MySQL, but table locking is still an issue when running online schema changes on large databases. I feel it’s something the Rails community hasn’t discussed much, but it’s a known problem many Rails apps on MySQL run into.
So I wonder what people use: LHM, a homebrewed script, manually running Percona’s pt-online-schema-change, GitHub’s new gh-ost, etc.
At Redbooth we’ve implemented Percona Migrator, an Active Record adapter that executes the statements through Percona’s pt-online-schema-change. What we first thought would be very challenging has proven to be an exciting and successful experiment. It’s become an essential part of our toolchain.
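For anyone curious what running migrations through pt-online-schema-change looks like in practice, here’s a minimal sketch of building the command an adapter would shell out to. The method name and structure are mine for illustration, not Percona Migrator’s actual API; the pt-online-schema-change flags (`--alter`, `--execute`, and the `D=database,t=table` DSN) are the tool’s standard usage:

```ruby
require "shellwords"

# Hypothetical helper: turn an ALTER TABLE clause into a
# pt-online-schema-change invocation, instead of letting the database
# adapter run the ALTER directly (which would lock the table).
def pt_osc_command(database:, table:, alter:)
  [
    "pt-online-schema-change",
    "--alter", Shellwords.escape(alter),
    "--execute",
    "D=#{database},t=#{table}"
  ].join(" ")
end

puts pt_osc_command(
  database: "app_production",
  table: "comments",
  alter: "ADD COLUMN archived TINYINT(1) NOT NULL DEFAULT 0"
)
```

The idea is that the migration DSL stays the same and only the execution path changes: pt-online-schema-change copies rows to a shadow table and keeps it in sync with triggers, then swaps the tables.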
On the other hand, I’d also like to hear from those running Postgres with large databases. What’s your experience? Does it really let you run online schema changes without any problems?
Postgres really doesn’t have the same issue in my experience. Migrations are fast and tables don’t get blocked. I have found a trick for schema changes on MySQL databases, though. Granted, this wasn’t in Rails, but I assume the issue is the same.
I found that you can do this: `SET foreign_key_checks = 0;`
Do what you need to do, then turn them back on with `SET foreign_key_checks = 1;`
There is risk involved in terms of data integrity (obviously, you’re turning off foreign key checks), so be absolutely sure you know what you’re doing (and please, research elsewhere as well) before you do this.
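As a sketch of that trick, one way to make sure the checks always get turned back on is to bracket the statements with the two toggles. This is plain Ruby and the helper name is mine; in a real migration you’d execute each statement and re-enable the checks in an `ensure` block so a failed ALTER doesn’t leave them off:

```ruby
# Hypothetical helper: return the given SQL statements wrapped so that
# foreign key checks are disabled before they run and re-enabled after.
def with_foreign_key_checks_disabled(statements)
  [
    "SET foreign_key_checks = 0;",
    *statements,
    "SET foreign_key_checks = 1;"
  ]
end

with_foreign_key_checks_disabled(
  ["ALTER TABLE comments ADD COLUMN archived TINYINT(1);"]
).each { |sql| puts sql }
```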
I’m guessing there are probably more creative solutions as well involving higher normal forms, such as making a new table for the additional columns instead of adding them to an existing one. That way the existing tables shouldn’t get blocked at all (I believe).
Interesting. I will deploy a Rails app running on Postgres soon. We’ll see how things turn out.
As for the MySQL trick you mentioned, unfortunately it’s not something you can do with tables that have a considerable number of rows, which is our case.
All the tools I know of, though (LHM, pt-online-schema-change, gh-ost), do use a new table to apply the schema changes, as you mentioned. Check them out if you ever hit that point.
How considerable is the number of rows? What’s the scale, and how long do migrations take for you?
The last table I had to run a migration on currently has 34.8M rows. We try to run them early in the morning on weekdays, when load is quite low, but sometimes there’s no other way than running them on weekends.
I’ve used this trick for a table of ~50 million rows, and it made ALTER TABLEs take basically no time. You can also use it after you’ve made a copy of the table (with the new schema), just to rename the copy to the old table’s name. You could test the method in development; it might be worth a shot.
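For reference, the copy-and-rename variant boils down to a handful of statements. Below is a sketch that generates them; the helper and the table/column names are made up, and note the big caveat: rows written between the copy and the swap are not captured, which is exactly the gap the online tools close with triggers (pt-online-schema-change) or binlog replay (gh-ost). The columns are listed explicitly so the INSERT still works after the new column is added:

```ruby
# Hypothetical sketch of the copy-table approach: create a shadow table
# with the new schema, copy the existing rows over, then atomically swap
# the names. MySQL's RENAME TABLE can swap both names in one statement.
def copy_and_swap_statements(table:, alter:, columns:)
  cols = columns.join(", ")
  [
    "CREATE TABLE #{table}_new LIKE #{table};",
    "ALTER TABLE #{table}_new #{alter};",
    "INSERT INTO #{table}_new (#{cols}) SELECT #{cols} FROM #{table};",
    "RENAME TABLE #{table} TO #{table}_old, #{table}_new TO #{table};"
  ]
end

copy_and_swap_statements(
  table: "comments",
  alter: "ADD COLUMN archived TINYINT(1) NOT NULL DEFAULT 0",
  columns: %w[id body created_at]
).each { |sql| puts sql }
```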