wgrant | cjwatson, guruprasad: It's just risk mitigation, yeah. It's only come up a handful of times, due to fairly low schema velocity, but we've violated it for simple cases. | 08:46 |
---|---|---|
wgrant | If you're just adding a table here and a column there I've no problem combining them. | 08:47 |
guruprasad | wgrant, thanks for the context. I ended up doing 3 back-to-back schema deployments with the preparation done once, and 3 separate outage windows within a few minutes of each other. | 08:51 |
guruprasad | It was late on a Friday night and I didn't want to find out what might go wrong and then be forced to fix it. | 08:51 |
guruprasad | :) | 08:51 |
wgrant | That is also fine now that the prep process is so much faster. | 08:51 |
wgrant | The latency to prep the new code etc. used to be 40 minutes when that process was written. | 08:51 |
guruprasad | Oh okay, that is useful to know because I hear a lot of suggestions/complaints from new people who want to throw away all the "legacy cruft" and fix it with a "modern" replacement :) | 08:53 |
guruprasad | wgrant, if you have a few minutes to spare, can you check my question in https://irclogs.ubuntu.com/2024/09/23/%23launchpad-dev.txt? | 08:54 |
guruprasad | This is not urgent. | 08:54 |
wgrant | Apologies, I've been out and about the last couple of weeks, let me see. | 08:54 |
wgrant | guruprasad: I almost certainly removed the walblock pipe and added -j8 to the pg_restore. | 08:55 |
guruprasad | After your departure, we managed to restore the qastaging database from the production copy in just over 2 days without any issues. The staging one (with all the known iops issues) is still pending. So your tweaks might be helpful to have. | 08:55 |
wgrant | he says, before reading the third line where he clearly removed walblock | 08:56 |
guruprasad | Thanks, this is very helpful | 08:56 |
wgrant | The full set of changes had fancier stuff to do it in three phases, but that didn't end up being a big win. | 08:56 |
wgrant | But removing walblock and adding -j`nproc` is always a good win. | 08:56 |
guruprasad | Even without any of these fixes the qastaging restore stayed on the rails and succeeded. But these are good to have. | 08:57 |
wgrant | Oh nice. | 08:59 |
wgrant | How long did it take? | 08:59 |
wgrant | Like 6 days? | 08:59 |
guruprasad | No. 2+ days. I started it on a Friday evening and it finished in the middle of Monday. | 09:00 |
guruprasad | The qastaging database is on a more performant, much more lightly loaded aggregate. | 09:00 |
wgrant | Ah right yes | 09:00 |
guruprasad | We checked with IS about moving the staging database to the same agg too, but they mentioned that there is no capacity to do so. | 09:00 |
wgrant | staging primary was on staging AZ1 iirc | 09:00 |
guruprasad | So we are stuck with the status quo | 09:01 |
wgrant | so was absolutely dreadful %st | 09:01 |
wgrant | Is qastaging on the prod agg, or just on a less bad AZ? | 09:01 |
guruprasad | I think it is just in a less bad AZ. | 09:01 |
guruprasad | I also asked IS about moving the staging database to a prod agg, and they pushed back strongly and recommended against doing so. | 09:02 |
guruprasad | Particularly since they were concerned about such a change affecting other production/"critical" services. | 09:03 |
wgrant | LP staging is much more critical than most of them :P | 09:03 |
wgrant | legit | 09:03 |
guruprasad | God, I miss that snark :) | 09:04 |
wgrant | :'( | 09:09 |
guruprasad | And it was right on time for the weekly infrastructure meeting, which is happening now :) | 09:10 |
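For reference, the restore tweak wgrant describes above (dropping the `walblock` pipe and restoring in parallel with `-j`) might look like the sketch below. The dump path and database name are hypothetical placeholders, not taken from the log, and the script only prints the command rather than running it.

```shell
#!/bin/sh
# Sketch of the parallel pg_restore invocation discussed above.
# DUMP and DBNAME are hypothetical placeholders -- adjust for your setup.
DUMP=/srv/dumps/launchpad.dump      # custom-format (-Fc) pg_dump output
DBNAME=launchpad_qastaging
JOBS=$(nproc)                       # one restore worker per CPU

# Old pipeline (illustrative): the dump was piped through walblock.
# New approach: feed the dump straight to pg_restore and parallelise it.
CMD="pg_restore -j $JOBS -d $DBNAME $DUMP"
echo "$CMD"                         # dry sketch: print instead of executing
```

Note that `-j`/`--jobs` only helps with custom-format or directory-format archives; a plain SQL dump cannot be restored in parallel.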