[11:11] I've been trying to run britney2 locally. It's been going for ~5 hours, mostly fetching test results from swift, one at a time, at a rate of around 2 per second.
[11:12] Apparently all results ever or something.
[11:12] What am I doing wrong?
[11:13] you're probably the first person to do that
[11:13] in production it benefits from having run before, so there's already state established
[11:13] one thing that the autopkgtest policy needs to do is determine if a test has ever passed before, and that happens by going through the results available on swift
[11:13] I'm doing it because I want to run an MP against it
[11:13] s/run/write/
[11:14] even the incremental runs spend a decent amount of time in swift :(
[11:14] running it somewhere closer to the DC might be helpful but I think it's still likely to be fairly slow
[11:15] It does seem like a very inefficient workaround for not having a database :-/
[11:15] Maybe I can hack the code to skip the already-passed check.
[11:16] depending on what you're trying to do, you could chop the problem up in various ways
[11:16] e.g. hack it somewhere to only consider a subset of the archive
[11:23] I feel like autopkgtest-cloud could help solve this problem by exposing some kind of database(-like thing), but there's no designed solution yet
[11:23] I'd be willing to help mentor someone if they wanted to work on it (which is a nice way of saying 'patches welcome', I understand, so maybe not particularly helpful - sorry)
[11:24] Thanks :)
[11:25] I thought I'd just add an additional "retry" link for a newer version of a failing dep8 test, if present.
[11:25] Since I have a ton of those to submit and thought it's about time I stopped constructing those URLs manually
[11:25] I think I can see how to write the code, but wanted to be able to test the result.
[11:26] (and about time I understood more about how britney works too)
[11:27] The testsuite has access to the excuses HTML, FWIW
[11:29] although writing those can be pretty baroque at first
[11:29] * Laney glances at pi_tti
[11:31] Oh, there's a testsuite.
[11:31] * rbasak should have looked rather than assumed
[11:33] :)
[11:33] You might want to run your idea past vorlon before spending too much time on it
[11:34] He's often got stronger opinions than others ...
[11:35] like you might reasonably say that requesting different variations of tests should be a feature of retry-autopkgtest-regressions rather than britney
[11:36] My user story is: I examined a failing dep8 test from excuses; I found that an update to the dep8 test is the correct fix; I uploaded that fix; now the retry button on the excuses page is no good
[11:36] And I have to construct a manual retry URL for every architecture
[11:36] AFAICT
[11:38] My plan was to add a second link next to the existing retry URL.
[11:38] An up arrow or something, shown if a higher version is seen in unstable.
[11:42] OK, so the argument against that is that it's an easy way to test situations that aren't what users will be receiving, since the package you're testing will be in -proposed rather than the release proper - and if you've fixed the test then the package should migrate, and then a retry at this point will do the expected thing.
[11:42] Maybe we've already conceded this argument by allowing arbitrary triggers at all though, not sure.
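
A minimal sketch of building those per-architecture retry links, rather than assembling them by hand. It assumes the autopkgtest.ubuntu.com request.cgi parameter names that retry-autopkgtest-regressions appears to use; the architecture list and the example values are illustrative assumptions, not authoritative.

# Hedged sketch: one retry URL per architecture for a failing test.
# The request.cgi parameter names and the arch list are assumptions to
# verify against a real retry link before relying on them.
from urllib.parse import urlencode

REQUEST_URL = "https://autopkgtest.ubuntu.com/request.cgi"
ARCHES = ["amd64", "arm64", "armhf", "i386", "ppc64el", "s390x"]  # assumed set


def retry_urls(release, package, trigger_source, trigger_version, arches=ARCHES):
    """Yield one retry URL per architecture, triggering the test of `package`
    against trigger_source/trigger_version (e.g. the fixed upload in -proposed)."""
    trigger = f"{trigger_source}/{trigger_version}"
    for arch in arches:
        query = urlencode({"release": release, "arch": arch,
                           "package": package, "trigger": trigger})
        yield f"{REQUEST_URL}?{query}"


if __name__ == "__main__":
    # Hypothetical example values, loosely based on the transition discussed below.
    for url in retry_urls("disco", "libdbd-mariadb-perl", "mysql-8.0", "VERSION"):
        print(url)
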
[11:45] The package doesn't migrate in this case - it's held up by the same transition
[11:45] mysql-8.0 vs libdbd-mariadb-perl
[11:46] nod
[13:11] Laney: thinking about this, it strikes me that the integration point for autopkgtest in britney2 might be better in a different place. Really we want to be testing what britney wants to migrate.
[13:11] Perhaps that's too hard to implement though.
[13:14] rbasak: I think ideally it'd be after the update_output.txt phase, when you know the transactions that proposed-migration is attempting to migrate - those would be the units that it makes sense to test together
[13:15] but britney's not really designed to facilitate that unfortunately
[13:15] Yes
[13:16] I'm probably on video at DebConf 15 saying this to pitt_i :P
[14:26] tjaalton: hi, do you know if we can build sssd with pcre2 now? That issue you opened is closed, I'm checking if 2.2.0 has that code
[14:29] I think it's just in master
[14:29] https://github.com/SSSD/sssd/pull/677
[14:30] yep
[15:18] ahasenack: yeah, it still needs to be patched
[16:02] Laney: do you think gnome-shell in Disco can be released now or should it wait for mutter aging?
[16:04] rbasak: I'll defer to Trevinho if he's (hopefully) around
[16:05] rbasak: yes, it shouldn't be an issue, let me check a second, but iirc the change wasn't depending on mutter
[16:05] rbasak: mutter should be green to go in disco too, though, shouldn't it?
[16:05] it needs more days
[16:06] ah, ok
[16:06] anyway yes.. g-s can go
[16:18] Trevinho: so to be clear, the mutter issue won't be triggered for any user not already affected by just releasing g-s now?
[16:18] rbasak: nope, the shell fixes touch something completely different
[16:19] OK, thanks.
[16:28] Hmm. sru-release is done, but Launchpad doesn't seem to have updated with the release. I've not seen that before. Perhaps there's a queue?
[16:35] The first attempt hit a lock trying to update the bug, and backed off for a while
[16:36] It's copied now
[16:36] I suspect the copy was unlucky enough to happen at exactly the same time as sru-release itself was posting the bug update
[16:38] Thanks
[16:39] That didn't need any intervention presumably? If it happens again, I can be confident that it'll retry eventually?
[16:40] juliank: java apps don't seem to be respecting the font scaling we set on my laptop the other day
[16:40] I guess that's expected?
[16:40] FWIW, the reason I was watching this one is that one of the bug tasks was Won't Fix and I wanted to check it correctly flipped to Fix Released. Looks like it did.
[16:41] ahasenack: hmm
[16:42] well, "a" java app (the only one I have)
[16:42] ahasenack: my eclipse-based one does
[16:42] probably depends on the toolkit in use
[16:42] yeah
[16:43] "swing"?
[16:44] rbasak: No intervention needed.
[16:44] Thanks
[17:20] rbasak, Laney: yeah, I think I've -1ed the idea of adding different retry links on update_excuses before, because that page doesn't scale well when there's a logjam, and making the page bigger / more memory-consuming to render is not a good tradeoff IMHO
[18:57] $ juju ssh autopkgtest-cloud-worker/0
[18:57] ERROR opening environment: Get https://10.33.102.1:8443/1.0: Unable to connect to: 10.33.102.1:8443
[18:58] ^ I think I broke my laptop's local juju autopkgtest-cloud
[18:58] I rebooted the machine earlier today
[18:58] and it seems it did not recover
[18:58] Also, how do I prevent the lxd containers juju creates from being started at boot?
[18:58] ideas welcome!
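
On the earlier Java/Swing exchange: GNOME's font scaling generally reaches GTK and SWT applications (Eclipse is SWT-based), while plain Swing apps pick their own scale. A hedged sketch of one possible workaround, assuming a JDK 9+ runtime where the sun.java2d.uiScale property is honoured; the property name, the 2x factor and the application name are all assumptions to verify against the JDK in use.

# Hedged sketch: launch a Swing app with an explicit UI scale factor,
# since it may not follow the desktop's font scaling on its own.
import os
import subprocess


def run_swing_app(cmd, scale="2"):
    """Launch a Java/Swing application with an explicit UI scale factor."""
    env = dict(os.environ)
    # _JAVA_OPTIONS is read by the JVM at startup; sun.java2d.uiScale is JDK 9+.
    env["_JAVA_OPTIONS"] = f"-Dsun.java2d.uiScale={scale}"
    subprocess.run(cmd, env=env, check=False)


# Usage (hypothetical application name):
# run_swing_app(["some-swing-app"])
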
[19:01] juliank: that second bit I can help with :) search for 'autostart' on https://ubuntu.com/blog/lxd-5-easy-pieces
[19:04] sarnold: oh, that's lxd? It does not autostart my other stopped containers at boot
[19:04] juliank: hmm. I *assumed* it was lxd autostarting things. now that you say it, I don't know.
[19:05] Hence my thought that it's juju trying to bring up the units
[19:06] I probably should move the entire juju thing into a multipass VM so I can shut it down more cleanly
[20:34] Is it just me, or has Firefox's performance and stability nosedived in recent months?
=== connork_ is now known as connor_k
[21:27] xnox: new push for DNSSEC to be working.. do we still need the DVE-2018-0001 workaround? https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501
[21:27] Launchpad bug 1796501 in systemd (Ubuntu Disco) "systemd-resolved tries to mitigate DVE-2018-0001 even if DNSSEC=yes" [Medium,In progress]
[21:32] it seems to clearly cause some DNS failures that would otherwise work
[21:34] Oh, I think it was GNOME leaking something. Sorry, Firefox.
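
Returning to the LXD autostart question above: a minimal sketch of clearing LXD's boot.autostart flag on the juju-created containers, assuming it really is LXD (and not juju itself) bringing them back up. The "juju-" name prefix and the lxc list column flags are assumptions to double-check locally.

# Hedged sketch: stop LXD from autostarting juju-created containers at boot.
import subprocess


def juju_containers():
    """Return names of LXD containers that look juju-created ("juju-" prefix assumed)."""
    out = subprocess.run(["lxc", "list", "--format", "csv", "-c", "n"],
                         capture_output=True, text=True, check=True).stdout
    return [name for name in out.splitlines() if name.startswith("juju-")]


def disable_autostart(name):
    """Ask LXD not to start this container at boot."""
    subprocess.run(["lxc", "config", "set", name, "boot.autostart", "false"],
                   check=True)


if __name__ == "__main__":
    for name in juju_containers():
        disable_autostart(name)
        print(f"disabled boot.autostart for {name}")

If it turns out to be juju's own agents restarting the units rather than LXD, this won't help; moving the whole deployment into a multipass VM, as suggested above, sidesteps the question entirely.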