[07:40] <didrocks> slyon: hey, about the adsys MIR, 0.7 contains vendor/gopkg.in/yaml.v{2,3}/NOTICE. I was wondering how you got the lintian warning about them missing?
[07:49] <didrocks> slyon: ack, I got it in pedantic mode only, I can maybe patch debian/rules to include it. I’m looking for the PIE warning now
[07:53] <slyon> ack
[08:04] <didrocks> slyon: I can’t see the PIE warning on impish and latest lintian, even with pedantic mode, how did you get it?
[08:08] <slyon> didrocks: I get it by running "adsys-0.7$ lintian ../adsys_0.7_amd64.changes" after having it built via "sbuild -dimpish ."
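    (A minimal sketch of the reproduction steps described above, combining both messages; the release name and .changes path follow the conversation, and --pedantic is what surfaces the extra tags:)

        sbuild -d impish .                              # build the package in an impish chroot
        lintian --pedantic ../adsys_0.7_amd64.changes   # lint the result, including pedantic tags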
[08:41] <didrocks> slyon: ack, probably an upstream 1.17 vs distro 1.16 thingy as I didn’t have this warning. However, running with -buildmode=pie explicitly in GOFLAGS works and enables PIE \o/ Thanks for spotting it!
[08:41] <didrocks> sarnold: FYI ^
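    (A minimal sketch of the PIE fix and one way to verify it, assuming a dh-golang style build; the binary name and package path are placeholders, not adsys' actual layout:)

        export GOFLAGS=-buildmode=pie    # honoured by subsequent "go build" invocations
        go build -o adsysd ./cmd/adsysd  # hypothetical main package path
        file adsysd                      # should report "... pie executable ..."
        hardening-check adsysd           # from devscripts; PIE should show as "yes"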
[08:56] <didrocks> slyon: and integration tests fixed (just removing sudo, which was a leftover from zsys), I’ll update 0.7.1 soon, but still, feel free to post your review once ready and I’ll answer :)
[08:59] <slyon> very good! Thank you
[09:24] <slyon> didrocks: I sent my review to https://pad.lv/1936907 I'd like to assign ~ubuntu-security so we can move on with that review as well, but somehow I'm not allowed to change the assignee (probably because I'm not yet part of ~ubuntu-mir). Could you assign ~ubuntu-security for me?
[09:26] <didrocks> slyon: sure, doing
[12:17] <bluca> ddstreet, laney, slyon: we have a problem with the github/autopkgtest integration for the systemd upstream CI and cpaelzer suggested pinging here
[12:17] <bluca> it seems jobs are running fine, but since yesterday there are no reports back on the PRs on github
[12:17] <bluca> eg, top job on https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream is https://github.com/systemd/systemd/pull/20744 but there's no status report
[12:18] <ddstreet> bluca: hmm... laney was running the autopkgtest infrastructure before, but he's left Canonical... not sure who took that over
[12:18] <slyon> juliank: could you help with that ^ ?
[12:18] <bluca> the github interface settings show webhooks are working without errors
[12:18] <bluca> any idea?
[12:19] <bluca> there's an ERROR 500 every now and then
[12:19] <juliank> bluca: we'll have to talk to xnox, I think error reports go to him, as it's his token
[12:19] <bluca> but most http posts work
[12:19] <bluca> thank you
[12:19] <laney> check logs on the web units
[12:19] <laney> if it's failing to post the result back to github it should say something there
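    (A hypothetical sketch of the log check laney suggests; the juju unit and service names here are guesses, not autopkgtest-cloud's actual layout, and the grep pattern matches the 401s seen below:)

        juju ssh autopkgtest-web/0 -- \
            journalctl -u autopkgtest-web --since "1 hour ago" | grep -Ei 'github|401'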
[12:20] <bluca> 10% of webhook calls fail with error 500
[12:20] <bluca> the rest succeed
[12:20] <bluca> there's nothing else other than error 500 in the github webhook interface
[12:21] <juliank> I see multiple 401 Unauthorized per second
[12:21] <laney> after the test is running the webhook is not the problem any more
[12:21] <laney> it's only used to dispatch it
[12:21] <laney> then it's up to autopkgtest-cloud to submit the result back to github to be recorded on the PR
[12:21] <laney> could be a problem with the token being used, or something else, logs should give a clue
[12:21] <laney> (401 sounds like a token issue)
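    (A minimal sketch of the kind of call used to report a result back to a PR, via GitHub's commit status API; the token, SHA, and context values are placeholders, and an invalid or reverted token is exactly what produces 401s like the ones above:)

        curl -sS -X POST \
            -H "Authorization: token $GITHUB_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            -d '{"state":"success","context":"autopkgtest","description":"autopkgtest finished (success)"}' \
            "https://api.github.com/repos/systemd/systemd/statuses/$COMMIT_SHA"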
[12:22] <bluca> is that the same token as the webhook? or a diff one?
[12:22] <juliank> laney: I'm digging, it's complaining about http://10.24.0.23:8080/v1/AUTH_... urls
[12:22] <laney> swift?
[12:22] <juliank> Seems like swift storage is messed up?
[12:23] <laney> like downloading the results on the web side to decide what to post back is failing?
[12:23] <laney> check if they uploaded ok on the cloud worker ...
[12:23] <laney> like see what the job said when it finished, confirm with swift list
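    (A minimal sketch of that confirmation, assuming the standard python-swiftclient CLI with OS_* credentials already in the environment; the container name is a guess:)

        swift list autopkgtest-focal | grep systemd-upstream | tail -n 5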
[12:25] <juliank> It started 10 minutes after I ran mojo run
[12:25] <juliank> :D
[12:26] <laney> D:
[12:28] <juliank> cloud worker seems happy
[12:32] <laney> try curling the url it's complaining about?
[12:32] <juliank> I don't care right now, I see that github-status-credentials.txt has reverted to the old one
[12:34] <laney> so it's not complaining about swift urls then
[12:34] <laney> and the creds weren't updated on wendigo properly, so mojo run reset them
[12:35] <juliank> I updated them in ~proposed-migration
[12:35] <juliank> ~prod-proposed-migration
[12:35] <juliank> seems they need to be updated in /srv/mojo/mojo-prod-proposed-migration/focal/ROOTFS/srv/mojo/mojo-prod-proposed-migration/focal/production/local/github-status-credentials.txt
[12:35] <juliank> aka /srv/mojo/LOCAL/mojo-prod-proposed-migration
[12:36] <juliank> duplicate files are confusing :D
[12:37] <juliank> bluca: should be working again now / for future PRs
[12:37] <bluca> fab, thank you!
[12:37] <bluca> any chance the reports can be retriggered for existing PRs?
[12:37] <bluca> otherwise they are kinda flying blind
[12:39] <bluca> it's ok if it's not possible, I can then ask to force push
[12:39] <juliank> not easily, let's see what happens to the ones running right now, I don't know what is normal
[12:39] <laney> it should sort itself out
[12:40] <laney> (for the ones that got triggered)
[12:40] <juliank> laney: so hooray: despite logging the swift urls, the 401s are gone now, so it logged the wrong url I guess
[12:40] <juliank> bluca: Yeah, so I see feedback on 20744
[12:40] <juliank> bluca: focal-s390x — autopkgtest finished (success)
[12:41] <laney> juliank: I guess it means: I tried to report this result but I got a 401 from github
[12:41] <juliank> probably :D
[12:41] <bluca> fab, thank you for the quick fix
[12:41] <laney> still need a centralised logging place :-)
[12:41] <laney> this incident should have caused alerts
[16:12] <keithc> question regarding bug#1942031
[16:15] <bdmurray> and what's the question?
[16:15] <keithc> gnome-shell crash on impish-indri daily with unreportable reason: invalid core dump: "/tmp/apport_core_whctskr4" is not a core dump: file format not recognized. Is this the same bug#1942031?
[16:18] <bdmurray> keithc: Do you have anything in /var/log/apport.log ?
[16:20] <keithc> yes
[16:24] <keithc> 5 errors
[16:27] <keithc> do you need me to list the errors?
[16:29] <bdmurray> could you pastebin the log somewhere? Are you using a live session?
[16:30] <keithc> yes
[16:43] <keithc> apport.log is at https://pastebin.com/g0pkCyL1
[16:46] <keithc> another potential bug question bdmurray
[16:46] <bdmurray> that looks the same based on the executable path
[16:46] <keithc> ok thanks
[16:50] <keithc> I looked in the Defects summary but didn't see a bug for the cheese application. The problem I encountered on my laptop was that cheese did not show an image when first opened, but would display an image after being reopened on the second or third try.
[16:53] <bdmurray> keithc: you could report a bug about cheese using 'ubuntu-bug cheese'
[16:54] <keithc> Thanks again
[18:40] <keithc> bdmurray: I reported the bug about cheese it is bug#1943752
[18:49] <krytarik> Once deployed, the pattern used above will also be suitable for bug snarfing btw.
[21:16] <dbungert> May I get a retest click?  The test failure is a known issue on casper and is unrelated to squashfs-tools LP: #1938325 https://autopkgtest.ubuntu.com/request.cgi?release=impish&arch=amd64&package=casper&trigger=squashfs-tools/1%3A4.4-2ubuntu2
[21:24] <bdmurray> clicked