[11:11] <rbasak> I've been trying to run britney2 locally. It's been going for ~5 hours, mostly fetching test results from swift, one at a time, at a rate of around 2 per second.
[11:12] <rbasak> Apparently all results ever or something.
[11:12] <rbasak> What am I doing wrong?
[11:13] <Laney> you're probably the first person to do that
[11:13] <Laney> in production it benefits from having run before, so there's already state established
[11:13] <Laney> one thing that the autopkgtest policy needs to do is determine if a test has ever passed before, and that happens by going through the results available on swift
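[Note: the "has this test ever passed" check walks the public swift result listings. A minimal sketch of one such listing query, assuming the usual Ubuntu result layout (container autopkgtest-<series>, objects <series>/<arch>/<prefix>/<source>/<run-id>/...) and standard swift listing parameters; the series and package below are only illustrative:
    https://autopkgtest.ubuntu.com/results/autopkgtest-disco/?format=plain&prefix=disco/amd64/libd/libdbd-mariadb-perl/
Each run returned then has to be fetched and inspected individually, which is where a cold britney run spends most of its time.]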
[11:13] <rbasak> I'm doing it because I want to run an MP against it
[11:13] <rbasak> s/run/write/
[11:14] <Laney> even the incremental runs spend a decent amount of time in swift :(
[11:14] <Laney> running it somewhere closer to the DC might be helpful but I think it's still likely to be fairly slow
[11:15] <rbasak> It does seem like a very inefficient workaround for not having a database :-/
[11:15] <rbasak> Maybe I can hack the code to skip the already-passed check.
[11:16] <Laney> depending on what you're trying to do, you could chop the problem up in various ways
[11:16] <Laney> e.g. hack it somewhere to only consider a subset of the archive
[11:23] <Laney> I feel like autopkgtest-cloud could help solve this problem by exposing some kind of database(-like thing), but there's no designed solution yet
[11:23] <Laney> I'd be willing to help mentor someone if they wanted to work on it (which is a nice way of saying 'patches welcome', I understand, so maybe not particularly helpful - sorry)
[11:24] <rbasak> Thanks :)
[11:25] <rbasak> I thought I'd just add an additional "retry" link for a failing dep8 test, triggered against a newer version if one is present.
[11:25] <rbasak> Since I have a ton of those to submit, and I thought it's about time I stopped constructing those URLs manually
[11:25] <rbasak> I think I can see how to write the code, but wanted to be able to test the result.
[11:26] <rbasak> (and about time I understood more about how britney works too)
[11:27] <Laney> The testsuite has access to the excuses HTML, FWIW
[11:29] <Laney> although writing those can be pretty baroque at first
[11:29]  * Laney glances at pi_tti
[11:31] <rbasak> Oh, there's a testsuite.
[11:31]  * rbasak should have looked rather than assume
[11:33] <Laney> :)
[11:33] <Laney> You might want to run your idea past vorlon before spending too much time on it
[11:34] <Laney> He's often got stronger opinions than others ...
[11:35] <Laney> like you might reasonably say that requesting different variations of tests should be a feature of retry-autopkgtest-regressions rather than britney
[11:36] <rbasak> My user story is: I examined a failing dep8 test from excuses; I found that an update to the dep8 test itself is the correct fix; I uploaded that fix; now the retry button on the excuses page is no good
[11:36] <rbasak> And I have to construct a manual retry URL for every architecture
[11:36] <rbasak> AFAICT
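[Note: for context, these are the per-architecture links being constructed by hand - a sketch assuming the usual autopkgtest.ubuntu.com request.cgi format, with illustrative package and version values:
    https://autopkgtest.ubuntu.com/request.cgi?release=disco&arch=amd64&package=libdbd-mariadb-perl&trigger=mysql-8.0/8.0.16-0ubuntu1
    https://autopkgtest.ubuntu.com/request.cgi?release=disco&arch=arm64&package=libdbd-mariadb-perl&trigger=mysql-8.0/8.0.16-0ubuntu1
One URL per architecture; additional &trigger= parameters can reportedly be appended, which is how the newer fixed version would be pulled into the test.]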
[11:38] <rbasak> My plan was to add a second link next to the existing retry URL.
[11:38] <rbasak> An up arrow or something. If a higher version is seen in unstable.
[11:42] <Laney> OK, so the argument against that is that it's an easy way to test situations that aren't what users will be receiving, since the package you're testing will be in -proposed rather than the release proper - and if you've fixed the test then the package should migrate, and then a retry at that point will do the expected thing.
[11:42] <Laney> Maybe we've already conceded this argument by allowing arbitrary triggers at all though, not sure.
[11:45] <rbasak> The package doesn't migrate in this case - it's held up by the same transition
[11:45] <rbasak> mysql-8.0 vs libdbd-mariadb-perl
[11:46] <Laney> nod
[13:11] <rbasak> Laney: thinking about this, it strikes me that the integration point for autopkgtest in britney2 might be better placed elsewhere. Really we want to be testing what britney wants to migrate.
[13:11] <rbasak> Perhaps that's too hard to implement though.
[13:14] <Laney> rbasak: I think ideally it'd be after the update_output.txt phase, when you know the transactions that proposed-migration is attempting to migrate - those would be the units that it makes sense to test together
[13:15] <Laney> but britney's not really designed to facilitate that unfortunately
[13:15] <rbasak> Yes
[13:16] <Laney> I'm probably on video at Debconf 15 saying this to pitt_i :P
[14:26] <ahasenack> tjaalton: hi, do you know if we can build sssd with pcre2 now? That issue you opened is closed, I'm checking if 2.2.0 has that code
[14:29] <ahasenack> I think it's just in master
[14:29] <ahasenack> https://github.com/SSSD/sssd/pull/677
[14:30] <ahasenack> yep
[15:18] <tjaalton> ahasenack: yeah, needs to be patched still
[16:02] <rbasak> Laney: do you think gnome-shell in Disco can be released now or should it wait for mutter aging?
[16:04] <Laney> rbasak: I'll defer to Trevinho if he's (hopefully) around
[16:05] <Trevinho> rbasak: yes, it shouldn't be an issue, let me check a second, but iirc the change didn't depend on mutter
[16:05] <Trevinho> rbasak: mutter should be green to go in disco too, though, isn't it?
[16:05] <Laney> it needs more days
[16:06] <Trevinho> ah, ok
[16:06] <Trevinho> anyways yes.. g-s can go
[16:18] <rbasak> Trevinho: so to be clear, releasing g-s now won't trigger the mutter issue for any user not already affected?
[16:18] <Trevinho> rbasak: nope, shell fixes are touching something completely different
[16:19] <rbasak> OK, thanks.
[16:28] <rbasak> Hmm. sru-release is done, but Launchpad doesn't seem to have updated with the release. I've not seen that before. Perhaps there's a queue?
[16:35] <cjwatson> The first attempt hit a lock trying to update the bug, and backed off for a while
[16:36] <cjwatson> It's copied now
[16:36] <cjwatson> I suspect the copy was unlucky enough to happen at exactly the same time as sru-release itself was posting the bug update
[16:38] <rbasak> Thanks
[16:39] <rbasak> That didn't need any intervention presumably? If it happens again, I can be confident that it'll retry eventually?
[16:40] <ahasenack> juliank: java apps don't seem to be respecting the font scaling we set on my laptop the other day
[16:40] <ahasenack> I guess that's expected?
[16:40] <rbasak> FWIW, the reason I was watching this one is that one of the bug tasks was Won't Fix and I wanted to check it correctly flipped to Fix Released. Looks like it did.
[16:41] <juliank> ahasenack: hmm
[16:42] <ahasenack> well, "a" java app (only one I have)
[16:42] <juliank> ahasenack: my eclipse-based one does
[16:42] <juliank> probably depends on the toolkit in use
[16:42] <ahasenack> yeah
[16:43] <ahasenack> "swing"?
[16:44] <cjwatson> rbasak: No intervention needed.
[16:44] <rbasak> Thanks
[17:20] <vorlon> rbasak, Laney: yeah, I think I've -1ed before the idea of adding different retry links on update_excuses, because that page doesn't scale well when there's a logjam and making the page bigger / more memory-consuming to render is not a good tradeoff IMHO
[18:57] <juliank> $ juju ssh autopkgtest-cloud-worker/0
[18:57] <juliank> ERROR opening environment: Get https://10.33.102.1:8443/1.0: Unable to connect to: 10.33.102.1:8443
[18:58] <juliank> ^ I think I broke my laptop's local juju autopkgtest-cloud
[18:58] <juliank> I rebooted the machine earlier today
[18:58] <juliank> and it seems it did not recover
[18:58] <juliank> Also, how do I prevent the lxd containers juju creates from being started at boot?
[18:58] <juliank> ideas welcome!
[19:01] <sarnold> juliank: that second bit I can help with :) search for 'autostart' on https://ubuntu.com/blog/lxd-5-easy-pieces
[19:04] <juliank> sarnold: oh, that's lxd? It does not autostart my other stopped containers at boot
[19:04] <sarnold> juliank: hmm. I *assumed* it was lxd autostarting things. now that you say it I don't know.
[19:05] <juliank> Hence my thought that it's juju trying to bring up the units
[19:06] <juliank> I probably should move the entire juju thing into a multipass VM so I can shut it down more cleanly
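[Note: if the containers themselves are marked to autostart, lxd's per-container boot.autostart key is the relevant knob - the container name below is illustrative, and whether juju re-sets the key or simply restarts units via its agents is not confirmed here:
    lxc config show juju-d1e2f3-0 | grep boot.autostart
    lxc config set juju-d1e2f3-0 boot.autostart false
]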
[20:34] <rbasak> Is it just me or has Firefox's performance and stability nosedived in recent months?
[21:27] <gQuigs> xnox: new push for DNSSEC to be working..  do we still need the DVE-2018-0001 workaround?  https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501
[21:32] <gQuigs> it seems to clearly cause some DNS failures for lookups that would otherwise work
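[Note: for anyone checking the state on their own machine, systemd-resolved's effective DNSSEC behaviour can be inspected and overridden with the standard knobs (shown as a sketch; this doesn't by itself answer whether the DVE-2018-0001 workaround is still needed):
    systemd-resolve --status | grep -i dnssec
    # to experiment, set DNSSEC=yes (or =allow-downgrade) in /etc/systemd/resolved.conf and restart systemd-resolved
]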
[21:34] <rbasak> Oh, I think it was GNOME leaking something. Sorry Firefox