[04:12] <rbasak> Unit193: no the current set is correct, thanks
[04:13] <Unit193> Ah, OK.  I just saw the message from slashd and wondered.
[04:15] <rbasak> I think he was trying to ping just the remaining people not at the meeting
[04:16] <rbasak> But specifying them individually made the dmb-ping part moot :)
[08:23] <slyon> Hey! Could any core-dev please trigger this test for me? It seems to be flaky and passes reproducibly in a local autopkgtest VM: https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=amd64&package=ranger&trigger=sensible-utils%2F0.0.13 (same for s390x please)
[08:37] <seb128> slyon, hey, sure, retried
[08:38]  * RikMills aborts
[08:38] <RikMills> slyon: ps, as ranger is in universe, a MOTU could have as well
[08:39] <slyon> thanks seb128!
[08:39] <seb128> np!
[08:39] <slyon> Also, thanks RikMills for this hint!
[09:35] <slyon> RikMills: The ranger/amd64 test passed, but I guess s390x needs another try... Could you trigger that for me? https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=s390x&package=ranger&trigger=sensible-utils/0.0.13
[09:50] <RikMills> slyon: done
[09:55] <slyon> RikMills: thank you!
[10:15] <RikMills> slyon: np, and it passed :)
[10:20] <slyon> yay \o/
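The retry links pasted above follow a fixed request.cgi query pattern (release, arch, package, and one trigger per source/version). A minimal sketch of building such a URL programmatically — `retry_url` is a hypothetical helper, and the parameter names are simply those visible in the URLs above:

```python
from urllib.parse import urlencode

def retry_url(release, arch, package, triggers):
    """Build an autopkgtest.ubuntu.com retry URL like the ones pasted above.

    `triggers` is a list of "source/version" strings; each becomes its own
    trigger= query parameter.
    """
    base = "https://autopkgtest.ubuntu.com/request.cgi"
    params = [("release", release), ("arch", arch), ("package", package)]
    params += [("trigger", t) for t in triggers]
    # urlencode percent-encodes the "/" in "source/version" as %2F
    return base + "?" + urlencode(params)

print(retry_url("groovy", "s390x", "ranger", ["sensible-utils/0.0.13"]))
```

Note that actually opening the URL requires an authenticated session with sufficient privileges (core-dev, or, as RikMills points out, MOTU for universe packages such as ranger).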
[12:07] <ahasenack> good morning
[12:07] <ahasenack> rbalint: hi, did you see my ping yesterday about protobuf?
[13:30] <cpaelzer> rbalint: do you happen to know about recently livecd-rootfs tests all breaking on
[13:30] <cpaelzer> Unexpected seeded snap for ubuntu-cpc:minimized build: lxd=4.0/stable/ubuntu-20.04
[13:30] <cpaelzer> seems quite unrelated to the uploads that it blocks
[13:32] <cpaelzer> There is a good run "in between" that found the very same snap
[13:32] <cpaelzer> https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-focal/focal/amd64/l/livecd-rootfs/20200814_140408_fec2b@/log.gz
[13:32] <cpaelzer> snap: found lxd=4.0/stable/ubuntu-20.04
[13:32] <cpaelzer> but did not complain/crash
[13:33] <cpaelzer> hrm that working one was 2.664.5 and that is in focal-updates - so I was assuming that is used
[13:33] <cpaelzer> but in the failing test I see
[13:33] <cpaelzer> Get:34 http://ftpmaster.internal/ubuntu focal-updates/main amd64 livecd-rootfs amd64 2.664.4 [80.6 kB]
[13:34] <cpaelzer> published yesterday 2020-08-24 19:28:43 CEST
[13:34] <cpaelzer> yeah should be https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1889470 then
[13:35] <cpaelzer> just need to find why it doesn't use the new package yet
[13:36] <cpaelzer> ok the test was on proposed of 664.5 and the others before release
[13:37] <cpaelzer> I'll retrigger all the waiting ones as nothing is in the queue yet
[13:40] <cpaelzer> That leaves me with systemd-fsckd (on focal this time) and not reproducible locally
[13:42] <cpaelzer> rbasak: are we in focal still on "retry until success" for those? Or are these also a symptom of https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/1892358/comments/1 ?
[13:43] <cpaelzer> rbalint: and if so, will you mark it as flaky there as well, or do something else?
[13:43] <seb128> cpaelzer, is systemd-fsckd supposed to be fixed in groovy now? I retried the plymouth tests with the systemd version in proposed yesterday but that ended up still failing
[13:43] <seb128> rbalint, ^
[13:44] <cpaelzer> seb128: no not yet, the link I had waits for an upload to systemd AFAIK
[13:44] <seb128> ah ok, good
[13:44] <seb128> cpaelzer, since I've a reply here, unsure if you saw I asked you on -desktop about bug #1892358
[13:45] <seb128> cpaelzer, is there any work expected from e.g glib?
[13:45] <seb128> or did you just add those for referencing on the by team report?
[13:45] <cpaelzer> seb128: I add packages I see blocked by the systemd tests
[13:46] <cpaelzer> seb128: that way the update-excuse tag will be linked in excuses
[13:46] <seb128> technically those tasks are invalid
[13:46] <seb128> right
[13:46] <cpaelzer> and we can avoid everyone spending hours to find the same issue over and over
[13:46] <seb128> it's slightly annoying because they end up as targeted-but-not-assigned items, which are a red flag in our team reviews
[13:46] <cpaelzer> I tried setting invalid once, then the link in excuses goes away
[13:46] <seb128> right
[13:46] <seb128> hum
[13:46] <seb128> unsure what to do in those cases :/
[13:47] <cpaelzer> if one reads through the bug comments I'm clearly saying that we added them for tracking of the actual systemd bug
[13:47] <cpaelzer> and on that I'm waiting for rbalint to add the mentioned flaky flag for systemd-fsckd
[13:47] <seb128> right, still they are on http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-gg-incoming-bug-tasks.html
[13:47] <seb128> which adds noise to the report
[13:47] <seb128> and we usually try to drive our section to 0 in our weekly meeting
[13:47] <cpaelzer> yeah, because the underlying issue is release-important and got the tag added
[13:48] <seb128> I've no good idea how to get the referencing without the rls report noise
[13:49] <seb128> oh well, hopefully rbalint fixes the systemd issue by next week and we can ignore that problem until next time :-)
[13:49] <cpaelzer> Laney: do you happen to know a state we could set for all but systemd on bug 1892358 that will remove them from other overviews (as they are not really issues), but still get the linking due to the update-excuse tag?
[13:49] <seb128> cpaelzer, he's off this week
[13:50] <seb128> and I'm pretty confident we can't do that today, we would need to add some other way to reference on the proposed report
[13:54] <cpaelzer> seb128: and for plymouth https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/1886886
[13:55] <ahasenack> sil2100: hi, I'm getting a 500 error when trying to login on bileto
[13:56] <cpaelzer> ahasenack: I'm still logged in and things work, can I do anything for you until sorted out?
[13:56] <cpaelzer> I feel like I had that issue in the past /me tries to remember
[13:56] <ahasenack> well, I wanted to create a ticket
[13:57] <cpaelzer> ahasenack: tell me what you need and I'll create it
[13:57] <ahasenack> cpaelzer: create one for nginx dep8 please
[13:57] <seb128> cpaelzer, right, I saw that one and rbalint landed a systemd update the same day he commented so I though that was maybe the version including the fix he mentioned
[13:57] <seb128> sounds like it was not though!
[13:58] <cpaelzer> ahasenack: https://bileto.ubuntu.com/#/ticket/4227
[13:58] <ahasenack> 4227, ack
[13:58] <ahasenack> thanks
[13:59] <cpaelzer> I see the same error if I try to log in with another browser btw
[14:01] <ahasenack> hang on to your cookie! :)
[14:34] <rbalint> seb128, cpaelzer ahasenack sorry, i was out and apparently forgot to set that in my email. the last systemd upload should fix everything except for the livecd-rootfs revert that made fstab in lxd images invalid
[14:35] <ahasenack> rbalint: hi, I wanted to ask you about the golang-goprotobuf
[14:35] <rbalint> this hint should help till livecd-rootfs is fixed, too https://code.launchpad.net/~rbalint/britney/hints-ubuntu/+merge/389791
[14:35] <ahasenack> in the ubuntu-server@ ml thread you said another package needed an update, which means also going ahead of debian there (quite a bit ahead, actually).
[14:35] <ahasenack> is that the plan still?
[14:35] <ahasenack> and would that be the only package where we would have to do that, or is it just the tip of the iceberg? tbd?
[14:36] <rbalint> ahasenack, it's this, or vendoring things, which is against policy
[14:36] <ahasenack> do we need that new protobuf?
[14:37] <ahasenack> I ask because it's my +1maint week, and that package stuck in proposed caught my attention, as many others depend on it
[14:38] <rbalint> ahasenack, well, google's cloud agent started the upgrades: it was rewritten in go and requires fairly fresh golang packages, and this caused a chain of upgrades so as to not vendor things
[14:38] <ahasenack> rbalint: so it's not just about vendoring, but vendoring a different version of an already existing package in the archive
[14:38] <rbalint> i think there are 2-3 more packages to upgrade and I'm staging the changes in Debian's packaging repos, too
[14:38] <rbalint> ahasenack, yes
[14:39] <ahasenack> rbalint: I checked salsa for that package you mentioned in the ML, I didn't see it in a branch there, is it in your personal repo area?
[14:39] <rbalint> ahasenack, which package?
[14:39] <ahasenack> let me get its name
[14:39] <ahasenack> rbalint: golang-github-grpc-ecosystem-grpc-gateway/1.6.4-2
[14:42] <rbalint> ahasenack, yes, i have not yet pushed this one because it broke; i'll push it when the package gets into good shape, i'm testing it in https://bileto.ubuntu.com/#/ticket/4148
[14:42] <ahasenack> rbalint: ok, so, are you back and handling this? :)
[14:42] <rbalint> ahasenack, if you would like to have the wip branch i can push it somewhere
[14:42] <rbalint> ahasenack, yes, definitely :-)
[14:42] <ahasenack> ok, sorry for the poke then, it's just as I said, I saw it in excuses, the ML thread, it's my +1week, "how bad can it be" and so on
[14:43] <ahasenack> I'll slowly step back from it :)
[14:43] <rbalint> ahasenack, no problem, it has been in -proposed for quite some time :-\
[14:44] <ahasenack> it's not alone there :)
[14:45] <cpaelzer> rbalint: " the last systemd upload" means groovy I guess, what about focal?
[14:47] <rbalint> cpaelzer, sru-s are usually prepared by ddstreet, but i guess you mean the flakiness
[14:49] <rbalint> cpaelzer, as i look at the last runs the latest focal systemd upload got flakier without a change in systemd so it may be an infra issue or needs more investigation
[14:50] <cpaelzer> rbalint: do you want a new bug about it for you and ddstreet then?
[14:50] <cpaelzer> to focus the discussion in one place I mean
[14:50] <rbalint> cpaelzer, LP: #1892358 is not open against focal so i think just adding focal would be enough
[14:51] <cpaelzer> ok
[15:39] <ijohnson> hmm does anybody know why there is no reply button on https://discourse.ubuntu.com/t/why-is-snapd-both-a-deb-and-a-snap-in-focal/18022 ? I don't think the question is a support request, it is a question about snapd and I can explain the reason why the OP sees what they do, but erm I can't reply to it :-/
[15:44] <danboid> Been hit by that F^&n GRUB bug :/ Anyone know what date the dodgy GRUB package hit the repos? End of July?
[15:44] <danboid> I use Landscape for updates so stuff gets updated daily
[15:45] <danboid> but just tried rebooting our Azure 18.04 web server and ... nope
[15:47] <danboid> I've just tried using a recovery VM to reinstall GRUB but I obvs don't understand Azure disks and snapshots well enough yet
[15:49] <danboid> The bug was reported on 30th July but I'd like to know when the dodgy GRUB hit the repos
[15:50] <danboid> (for 18.04)
[15:54] <rbasak> danboid: https://launchpad.net/ubuntu/+source/grub2/+publishinghistory
[15:55] <rbasak> "when the dodgy GRUB hit the repos" is ambiguous because I think the bug was already there, and the update revealed it?
[15:55] <rbasak> Anyway, the publishing history there will give you the exact timestamps. The mirrors lag a little behind the publishing timestamps.
[15:56] <danboid> rbasak, Yeah OK, that's part of what I needed to know, it should help anyway
[15:57] <danboid> rbasak, What do you mean "the update revealed it?"
[15:59] <rbasak> danboid: AIUI, the systems that the grub update "broke" were already broken such that a subsequent grub update would fail.
[15:59] <rbasak> Bug 1889556 has some details I think
[16:01] <rbasak> That's why the regression wasn't detected before the update was released - the update itself wasn't buggy. It revealed a problem that had already occurred on some systems that made those systems unbootable.
[16:05] <danboid> rbasak, So the version to revert before would be 2.02~beta2-36ubuntu3.26
[16:06] <rbasak> I'm not sure, sorry.
[16:06] <rbasak> As I say, just reverting isn't necessarily sufficient.
[16:06] <rbasak> Also see https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2SecureBootBypass#Known_issues
[16:06] <rbasak> I don't know the details in all of this - I'm just an observer
[16:07] <rbasak> I think if you follow those steps you don't need to revert
[16:16] <danboid> rbasak, It should be possible for me to fix this by using an Azure recovery VM to reinstall GRUB on this VM but I've just tried that and I'm having issues getting the correct drive to mount and detaching it afterwards
[16:17] <danboid> I don't have much experience with Azure
[16:18] <danboid> Even when my recovery VM is stopped, it's not letting me detach the disk etc
[16:21] <rbasak> danboid: I can't help, sorry. Try #ubuntu for user support
[16:21] <danboid> rbasak, I'm trying in #azure now. Thanks
[16:22] <danboid> I've never had any luck in #ubuntu asking anything btw :)
[16:26] <rbasak> danboid: maybe #ubuntu-server. If the answer isn't widely known then non-realtime places tend to work better, such as askubuntu.com
[16:33] <ahasenack> hi, is there somebody here who can fix bileto login? I'm getting a 500 error when I try to login
[17:26] <sil2100> tseliot: hey! I have a question regarding the libnvidia-compute-* packages - so I was trying to resolve why the new nvidia-cuda-toolkit stuff didn't want to migrate
[17:28] <sil2100> tseliot: and it seems the general problem boils down to pyhst2 not being able to build on ppc64el now, as I think none of the nvidia-graphics-drivers-*-server packages provide a libnvidia-compute-* package for ppc64el (so it can't find libcuda.so anywhere)
[17:28] <sil2100> tseliot: do you know if there's any ppc64el story for those still?
[21:20] <mwhudson> good morning