[12:58] <ahasenack> given these test results from bileto, is there a way to trigger another run with different options? https://bileto.ubuntu.com/excuses/3531/disco.html
[12:58] <ahasenack> like the retry icon, when it fails, but in this case it didn't fail, so no such icon
[12:58] <ahasenack> I need to retry symfony/3.4.20+dfsg-1ubuntu2~ppa2 armhf with disco-proposed
[13:01] <Laney> ahasenack: retry-autopkgtest-regressions --bileto 3531 --state PASS --series disco
[13:02] <ahasenack> thanks
[13:03] <Laney> probably should get retry-autopkgtest-regressions to not put stable-phone-overlay in there
[13:05] <ahasenack> Laney: the "session cookie", is that from bileto.u.c in this case? Or autopkgtest.u.c, even though I'm retrying a bileto run?
[13:07] <Laney> ahasenack: there's just one, on autopkgtest.ubuntu.com
[13:10] <ahasenack> ok
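
For context: retry-autopkgtest-regressions prints one request.cgi URL per regression, and each URL then has to be fetched with the autopkgtest.ubuntu.com session cookie attached. A rough sketch of the full flow, where the cookie-file path and the wget pattern are assumptions rather than the only way to do it:

    # print retry URLs for the bileto ticket, then request each one,
    # authenticating with the session cookie exported from the browser
    retry-autopkgtest-regressions --bileto 3531 --state PASS --series disco \
        | xargs -rn1 -P4 wget --load-cookies ~/.cache/autopkgtest.cookie -O- >/dev/null
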
[13:18] <cpaelzer> Hiho, this built just yesterday in a PPA, but now the armhf build for d-proposed says "Chroot problem" and it seems to be a networking issue
[13:18] <cpaelzer> https://launchpad.net/ubuntu/+source/strongswan/5.7.1-1ubuntu2/+build/15768264
[13:18] <cpaelzer> is this a "retry and forget" issue or is there something bigger going on?
[13:19] <ahasenack> hah, is that just because I managed to trigger a retry on armhf? :)
[13:19] <ahasenack> my luck :)
[13:19] <cpaelzer> I don't want to hit rebuild as it would flush the logs, unless someone says that is ok
[13:19] <cpaelzer> ahasenack: you mean it can only retry for you, or build for me?
[13:19] <cpaelzer> well then let me hit rebuild and break yours :-P
[13:20] <ahasenack> I'll know sometime later today
[13:20] <cpaelzer> this is the log http://paste.ubuntu.com/p/2KytF4zPd9/ so I can hit rebuild ...
[13:21] <ahasenack> cpaelzer: hm, I've seen that dns issue a few times in the past few days
[13:22] <cpaelzer> ahasenack: on armhf in particular, or in general?
[13:22] <ahasenack> can't remember that detail
[13:22] <ahasenack> but I saw it in some build log
[13:25] <cpaelzer> the rebuild seems to have passed that stage, so no permanent issue
[16:32] <teward> remind me how to run an autopkgtest locally in an lxd environment, then enter the environment when it fails, so I can better examine what's going on?
[16:32] <teward> been a while since I had a regressed autopkgtest :|
[16:33] <Laney> --shell-fail
[16:37] <teward> Laney: thank you :)
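
A minimal sketch of the local run being discussed, assuming an LXD test image already built with autopkgtest-build-lxd; the package name and image alias here are illustrative:

    # build a disco test image once, then run the package's tests in it;
    # --shell-fail drops you into a shell inside the testbed on failure
    autopkgtest-build-lxd images:ubuntu/disco/amd64
    autopkgtest --shell-fail nginx -- lxd autopkgtest/ubuntu/disco/amd64
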
[16:46] <teward> huh... so that's weird
[16:49] <teward> Laney: autopkgtest on the system that runs the tests fails, but local autopkgtest for the same test and package succeeds... should I just retry the tests on the CI system then?
[16:50] <Laney> teward: dunno, depends on what the failure is and whether it's likely to be flaky tests (which should also be fixed)
[16:51] <Laney> or if you didn't manage to recreate the conditions that make it happen
[16:52] <teward> Laney: it's weird because it looks to me like fcgiwrap was working locally, but not up on CI - 502 bad gateway would indicate fcgiwrap didn't work properly :|
[16:52] <Laney> some kind of race?
[16:52] <teward> probably.
[16:52] <teward> trying to diagnose the nginx autopkgtest 'regression' state on fcgiwrap
[16:53] <teward> and 502 bad gateway is fcgiwrap being stupid and not replying
[16:53] <teward> but can't replicate it in local tests
[16:53] <Laney> you could maybe run it a bunch of times locally to see if it ever happens
[16:53] <teward> true.
[16:53] <teward> i also just restarted the test on CI to see if it happens again
[16:53] <Laney> right, if it goes green that possibly points to a race condition
[16:53] <teward> and if it's still failing then I'll have to find someone to sponsor a modification to the fcgiwrap tests to add a wait period before actually *testing* things, to let fcgi finish its startup
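
The kind of tweak being proposed might look roughly like this inside the test script; purely a sketch, with the endpoint and timeout invented for illustration:

    # hypothetical guard: poll until fcgiwrap answers through nginx before
    # running the real assertions, instead of racing its startup
    for i in $(seq 1 30); do
        curl -fs http://localhost/cgi-bin/status >/dev/null && break
        sleep 1
    done
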
[16:54] <teward> running the same test now, 3 simultaneously, to see if it breaks
[16:54] <teward> all i know is my computer's going to be a bit mad after this lol
[16:58] <teward> i'll have to wait a while for the autopkgtest I kicked off again to work
[16:58] <teward> Laney: four simultaneous tests, all succeeded
[16:59] <teward> so i'm going to *assume* maybe a CI race condition, or something changed in fcgiwrap between the time it ran the autopkgtest with the nginx trigger and now
[16:59] <teward> *shrugs*
[17:02] <teward> Laney: well now my computer *is* hating me, I have 10 simultaneous autopkgtests of it now, trying to force a race condition.  do we have more details about how autopkgtests are run in the automated CI?  and what resources are available/assigned for each test?
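
One way to hammer on a suspected race locally, along the lines of what teward is doing; a sketch that reuses the LXD image from the earlier example, with the log file names invented:

    # launch N independent runs in parallel and collect their logs
    for i in $(seq 1 10); do
        autopkgtest fcgiwrap -- lxd autopkgtest/ubuntu/disco/amd64 \
            >"fcgiwrap-run-$i.log" 2>&1 &
    done
    wait    # then grep the logs for failures
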
[17:03] <Laney> teward: looks like fcgiwrap -12 might have fixed it
[17:03] <Laney> https://launchpad.net/ubuntu/+source/fcgiwrap/1.1.0-12 sounds like a test fix to me
[17:03] <teward> ah, indeed, looks like it
[17:03] <teward> LOL not being in the test dependencies sounds like it's a "WTF Are You Doing" moment xD
[17:04] <teward> i'll fire off the rest of the requests then so that they'll all succeed, I only fired off amd64 :P
[17:04] <Laney> sounds like someone didn't run the tests properly
[17:05] <teward> Laney: or didn't write them properly heh
[17:05] <teward> Laney: in any case, it does look like -12 may have fixed the failures, which means they should pass now when it reruns for nginx
[17:05] <teward> then maybe CI can stop complaining to me about nginx being stuck >.>
[17:05] <Laney> yeh, let's see
[19:02] <teward> Laney: looks like you were right, redoing the tests against -12 for fcgiwrap shows no regressions now against nginx
[19:03] <teward> it's slower on two of the tests still, but once those complete and pass I can stop having the system nag me then :P