=== pieq_ is now known as pieq
[08:36] I'll take a look
[08:37] It looked like William fixed that overnight but didn't mention it here
[08:37] rabbitmq OOM
[08:39] I saw that, but that looked to be several hours ago, and it looks like there's a bunch of builders that might be stuck cleaning
[08:43] Ah, sure, feel free to dig then :)
[09:08] tsimonq2: Builders reset, should now be much happier!
=== ErichEickmeyer is now known as Eickmeyer
[18:08] I don't know what's going on, but it seems that every day around this time some dummy recipe in a PPA takes up nearly every amd64 builder for a good half-hour. Seems suspicious to me; I don't recognize the name.
[18:10] It also occurs whether or not the person has updated their code. Seems nobody has touched it since 2017?
[18:12] Eickmeyer: I guess you mean the kstars/indi ones?
[18:13] RikMills: Yes, more like mutlaqja/libindi, but yes.
[18:13] it is the kstars developer
[18:14] Ah, why is it hogging every single builder then?
[18:24] I have never looked that closely, but I guess he has a bazillion recipes, and in the recipes the base code is a common 'dummy' repo with each recipe nesting other repos on that
[18:25] so that he can build each indi driver separately, I guess, instead of the whole indi source repo in one big build
[18:29] indi has drivers for over 100 different types of astronomy device IIRC
[18:50] Eesh.
[20:28] Eickmeyer: It's not that user's fault.
[20:28] I mean, apart from having a bazillion recipes.
[20:28] But it's not their fault that we aren't doing a good job of spacing them out.
[20:29] However, until about six hours ago it would have been very hard for us to space them out without risking tipping buildd-manager further over a performance cliff.
[20:29] The situation should be better now, but I'd rather not perturb it in newly-invented ways for a while ...
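[Editor's note: the "common 'dummy' repo with each recipe nesting other repos" layout described at 18:24 can be sketched in Launchpad's source package recipe syntax. Every name below is hypothetical; this only shows the shape of one-recipe-per-driver on a shared base, not the developer's actual recipes.]

```
# git-build-recipe format 0.4 deb-version {debupstream}+{revtime}
lp:~someuser/+git/dummy-base
nest driver-foo lp:~someuser/+git/indi-driver-foo src/driver-foo
```

With one such recipe per driver (each nesting a different driver repo into the same dummy base), every daily recipe-build run dispatches a separate build per driver, which is consistent with many builders being occupied at once.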
[20:30] The recipe build description is misleading - it doesn't show enough information to indicate that all the builds are actually distinct.
[20:30] it certainly sounds more like something to fiddle with near the start of a day than at the end of a day :) heh
[20:31] At the moment it's not very much of a practical problem - it's noticeable if you happen to look at /builders at the right time, but it doesn't typically clog things up for too long.
[20:31] (It was a good bit worse before we redeployed git.l.n on faster hardware.)
[20:33] yay :)
[20:38] As of the rollout earlier today, buildd-manager now does a batched candidate selection query roughly once per group of builders with the same properties per scan cycle (15 seconds), rather than one query per idle builder per 15 seconds, so it's potentially much more viable to complicate the candidate selection query than it was. But as I say I want to make sure that this actually solves the performance problems before changing anything else there.
[20:40] We think this is almost certainly related to why we've had to run two buildd-managers for the last couple of months to avoid catastrophically slow fetching of large build output files, and at least plausibly implicated in the occasional buildd-manager hangs.
[20:41] Turns out that doing lots of synchronous DB queries in your asynchronous event loop is a bad idea. (We're still doing some synchronous work there, just much less of it.)
[22:08] cjwatson: Thanks for the info! Such a complicated topic for sure. I don't envy you your job here.
[22:23] Did my build just get... paused? https://pasteboard.co/JbAxSpW.png
[22:24] Link? Impossible to grep logs for just that fragment
[22:25] cjwatson, https://code.launchpad.net/~kyrofa/+snap/nextcloud-test/+build/986978
[22:25] Looks like it's kinda back... but no log
[22:25] It got requeued after probably some kind of network interruption to the builder I think
[22:26] Ah, so it's starting over?
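[Editor's note: buildd-manager is built on Twisted, and none of its code appears in this log. The following is only an asyncio sketch, with invented names, of the general pitfall cjwatson describes at 20:41: synchronous queries run inline stall the whole event loop, while offloading them to a thread pool keeps the loop responsive and lets the queries overlap.]

```python
import asyncio
import time

def blocking_query(n):
    """Stand-in for a synchronous DB query: blocks its thread for 0.2s."""
    time.sleep(0.2)
    return n * 2

async def scan_cycle_blocking(items):
    # Each call runs inline on the event loop thread: nothing else
    # (timers, network I/O, other builders' scans) can run meanwhile.
    return [blocking_query(i) for i in items]

async def scan_cycle_offloaded(items):
    # Offload each blocking call to the default thread pool; the loop
    # stays free, and the queries run concurrently.
    loop = asyncio.get_running_loop()
    return await asyncio.gather(
        *(loop.run_in_executor(None, blocking_query, i) for i in items)
    )

async def main():
    t0 = time.monotonic()
    r_blocking = await scan_cycle_blocking(range(5))
    t_blocking = time.monotonic() - t0

    t0 = time.monotonic()
    r_offloaded = await scan_cycle_offloaded(range(5))
    t_offloaded = time.monotonic() - t0
    return r_blocking, r_offloaded, t_blocking, t_offloaded

r_blocking, r_offloaded, t_blocking, t_offloaded = asyncio.run(main())
print(f"inline: {t_blocking:.2f}s, offloaded: {t_offloaded:.2f}s")
```

The inline version takes the sum of all query times (about a second here); the offloaded version takes roughly the longest single query. Batching, as in the rollout described at 20:38, attacks the same problem from the other side by issuing far fewer queries per scan cycle.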
[22:26] Yeah
[22:26] Slightly odd, not sure what went on with the cancellation there
[22:26] Not a big deal, just nothing I'd seen happen in the past
[22:27] (Could also be that your build managed to DoS the builder in some way)
[22:27] That would be interesting. Nextcloud has been building on LP for years, but this is my attempt to get it using base: core18
[22:28] Which also means I specified the stable channel for snapcraft as opposed to the default
[22:28] Let's see if it happens again
[22:28] I also changed the distribution series to bionic from xenial. Not sure if that was required or not
[22:29] You should in general just leave it unset
[22:29] i.e. the very misnamed "Ubuntu Core 16"
[22:29] (sorry - we'll fix that soonish I hope)
[22:29] Oh really? So I should set the distro series to "Ubuntu Core 16" and it'll work regardless of base?
[22:30] (what about a lack of a base?)
[22:30] "Ubuntu Core 16" is a really unclear way to say "build for series: 16 (i.e. what everything is) and autodetect everything else from base:"
[22:31] If there's no base: it uses xenial
[22:31] cjwatson, perfect, good to know, thank you! And yeah, that series 16 has thrown me in multiple places
[22:31] curling the store, for example
[22:33] cjwatson, interesting, that removes the arch checkbox as well, so it will depend on my having architectures set in the snapcraft.yaml?
[22:33] Looks happier so far anyway
[22:33] Yep
[22:34] Which you should anyway; that's the modern way to spell it
[22:34] Darn. I wish LP assumed I wanted the old behavior when there wasn't any architecture
[22:34] snapcraft defines things according to base really
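[Editor's note: the advice above boils down to letting snapcraft.yaml drive everything. A hypothetical minimal fragment, not the real nextcloud snap, might look like this: with an explicit base:, Launchpad picks the build series from it (core18 builds on bionic), so the recipe's distro series can stay unset, and the architectures list replaces the per-architecture checkboxes that disappear from the recipe page.]

```yaml
name: example-snap          # placeholder name, not a real snap
base: core18                # build series autodetected from this (bionic)
version: '1.0'
summary: Example of an explicit base and architectures
description: Sketch of the base/architectures discussion above.
grade: stable
confinement: strict

architectures:              # declared here instead of in the LP recipe UI
  - build-on: amd64
  - build-on: arm64

parts:
  example:
    plugin: nil
```

Without a base: line, as noted at 22:31, the build falls back to xenial, which is why snaps predating base: were all implicitly "series 16".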