=== nOOb__ is now known as LargePrime
=== seb128_ is now known as seb128
=== tumbleweed_ is now known as tumbleweed
[12:58] given these test results from bileto, is there a way to trigger another run with different options? https://bileto.ubuntu.com/excuses/3531/disco.html
[12:58] like the retry icon, when it fails, but in this case it didn't fail, so no such icon
[12:58] I need to retry symfony/3.4.20+dfsg-1ubuntu2~ppa2 armhf with disco-proposed
[13:01] ahasenack: retry-autopkgtest-regressions --bileto 3531 --state PASS --series disco
[13:02] thanks
[13:03] probably should get retry-autopkgtest-regressions to not put stable-phone-overlay in there
[13:05] Laney: the "session cookie", is that from bileto.u.c in this case? Or autopkgtest.u.c, even though I'm retrying a bileto run?
[13:07] ahasenack: there's just one, on autopkgtest.ubuntu.com
[13:10] ok
[13:18] Hiho, this built in a PPA just yesterday; now the armhf build for d-proposed says "Chroot problem" and it seems to be a networking issue
[13:18] https://launchpad.net/ubuntu/+source/strongswan/5.7.1-1ubuntu2/+build/15768264
[13:18] is this a "retry and forget" issue or is there something bigger going on?
[13:19] hah, just because I just managed to trigger a retry on armhf? :)
[13:19] my luck :)
[13:19] I don't want to hit rebuild as it would flush the logs, unless someone says that is ok
[13:19] ahasenack: you mean it can only retry for you or build for me?
[13:19] well then let me hit rebuild and break yours :-P
[13:20] I'll know sometime later today
[13:20] this is the log http://paste.ubuntu.com/p/2KytF4zPd9/ so I can hit rebuild ...
[13:21] cpaelzer: hm, I've seen that DNS issue a few times in the past few days
[13:22] ahasenack: on armhf in particular, or in general?
[13:22] can't remember that detail
[13:22] but I saw it in some build log
=== ricab is now known as ricab|lunch
[13:25] the rebuild seems to have passed that stage, so no permanent issue
=== ricab|lunch is now known as ricab
[16:32] remind me how to run an autopkgtest locally in an lxd environment and then enter the environment when it fails so I can better examine what's going on?
[16:32] been a while since I had a regressed autopkgtest :|
[16:33] --shell-fail
[16:37] Laney: thank you :)
[16:46] huh... so that's weird
[16:49] Laney: autopkgtest on the system that runs the tests fails, but local autopkgtest for the same test and package succeeds... should I just retry the tests on the CI system then?
[16:50] teward: dunno, depends on what the failure is and whether it's likely to be flaky tests (which should also be fixed)
[16:51] or if you didn't manage to recreate the conditions that make it happen
[16:52] Laney: it's weird because it looks to me like fcgiwrap was working locally, but not up on CI - a 502 bad gateway would indicate fcgiwrap didn't work properly :|
[16:52] some kind of race?
[16:52] probably.
[16:52] trying to diagnose the nginx autopkgtest 'regression' state on fcgiwrap
[16:53] and 502 gateway timeout is fcgiwrap being stupid and not replying
[16:53] but I can't replicate it in local tests
[16:53] you could maybe run it a bunch of times locally to see if it ever happens
[16:53] true.
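(A minimal sketch of the local LXD workflow discussed above, assuming the stock autopkgtest LXD runner; the image alias is the default one autopkgtest-build-lxd creates, and this exact invocation is illustrative rather than what was actually run:)
    # build a disco test image once (creates the autopkgtest/ubuntu/disco/amd64 alias)
    autopkgtest-build-lxd ubuntu-daily:disco
    # run the nginx tests in LXD and drop into a shell in the testbed on failure
    autopkgtest --shell-fail nginx -- lxd autopkgtest/ubuntu/disco/amd64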
[16:53] i also just restarted the test on CI to see if it happens again
[16:53] right, if it goes green that possibly points to a race condition
[16:53] and if it's still failing then I'll have to find someone to sponsor a modification to the fcgiwrap tests to add a wait period before actually *testing* things, to let fcgi finish its startup
[16:54] running the same test now, 3 simultaneously
[16:54] to see if it breaks
[16:54] all i know is my computer's going to be a bit mad after this lol
[16:58] i'll have to wait a while for the autopkgtest I kicked off again to work
[16:58] Laney: four simultaneous tests, all succeeded
[16:59] so i'm going to *assume* maybe a CI race condition, or something changed in fcgiwrap between the time it ran the autopkgtest with the nginx trigger and now
[16:59] *shrugs*
[17:02] Laney: well now my computer *is* hating me, I have 10 simultaneous autopkgtests of it running now, trying to force a race condition. do we have more details about how autopkgtests are run in the automated CI? and what resources are available/assigned to each test?
[17:03] teward: looks like fcgiwrap -12 might have fixed it
[17:03] https://launchpad.net/ubuntu/+source/fcgiwrap/1.1.0-12 sounds like a test fix to me
[17:03] ah, indeed, looks like it
[17:03] LOL, not being in the test dependencies sounds like a "WTF Are You Doing" moment xD
[17:04] i'll fire off the rest of the requests then so that they'll all succeed, I only fired off amd64 :P
[17:04] sounds like someone didn't run the tests properly
[17:05] Laney: or didn't write them properly, heh
[17:05] Laney: in any case, it does look like -12 may have fixed the failures, which means they should pass now when it reruns for nginx
[17:05] then maybe CI can stop complaining to me about it being stuck >.>
[17:05] about nginx being stuck*
[17:05] yeh, let's see
[19:02] Laney: looks like you were right, redoing the tests against -12 for fcgiwrap shows no regressions against nginx now
[19:03] it's still slower on two of the tests, but once those complete and pass i can stop having the system nag me :P
=== joedborg_ is now known as joedborg
=== blackboxsw_ is now known as blackboxsw
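(A minimal sketch of the "wait before testing" idea floated above, assuming fcgiwrap exposes a Unix socket at a path like /run/fcgiwrap.socket; the path and timeout are illustrative assumptions, not taken from the actual fcgiwrap/nginx tests:)
    # poll for up to 30 seconds until the (hypothetical) fcgiwrap socket exists,
    # so the test only proceeds once fcgiwrap has finished starting up
    for i in $(seq 1 30); do
        [ -S /run/fcgiwrap.socket ] && break
        sleep 1
    done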