[00:52] ppa uploads seem to not be reaching the builders?
[00:57] hmm. seems some are being processed now
[01:22] acheronuk: Some background jobs were delayed by DB maintenance.
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
[08:15] rashed
[08:15] oops :)
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
=== JanC_ is now known as JanC
[13:37] ahasenack: Could you try that GPG key import again? I just landed some improved instrumentation.
[13:37] ok
[13:37] cjwatson: it was imported already, I guess it worked eventually
[13:38] bah
[13:38] was hoping to find out why it was slow
[13:39] In case it's related, I earlier did a --send-key to keyserver.ubuntu.com and got an error. Sorry, I didn't save the error. Retrying worked.
[13:39] Are you in a code path where you need to talk to the keyserver? Any chance it's the keyserver behaving badly in a way that LP doesn't handle?
[13:40] Sure. That's why I landed improved instrumentation, so that I could find out this sort of thing.
[13:40] Aha, https://oops.canonical.com/oops/?oopsid=OOPS-c19ccefbed6bcb7b0d094cc14e180842
[13:40] https://oops.canonical.com/?oopsid=OOPS-c19ccefbed6bcb7b0d094cc14e180842
[13:42] I don't think that's the keyserver
[13:43] I mean, maybe a bit of it is, but mostly not
[13:48] Is this really taking three seconds to build a gpgme context, or am I hallucinating?
[13:50] Hmm, it does seem about that slow locally too
[13:52] Oh, I bet it's the stupid ulimit thing
[13:54] In which case I should perhaps just backport the gpgme optimisation I did
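The "stupid ulimit thing" above most likely refers to gpgme's process-spawning path closing every file descriptor up to the RLIMIT_NOFILE soft limit before exec'ing gpg, so a very high limit can make each engine spawn take seconds. Below is a minimal sketch for reproducing the timing locally, assuming the GPGME Python bindings (python3-gpg) are installed; it is illustrative, not Launchpad's actual code.

```python
#!/usr/bin/env python3
# Minimal sketch (not Launchpad code) to reproduce the timing discussed
# above. If context setup scales with RLIMIT_NOFILE, the fd-closing loop
# in the gpg spawn path is the likely culprit.
import resource
import time

import gpg  # GPGME Python bindings (python3-gpg)

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE soft={soft} hard={hard}")

start = time.monotonic()
ctx = gpg.Context()
list(ctx.keylist())  # force gpgme to actually spawn the gpg engine
print(f"context + keylist: {time.monotonic() - start:.2f}s")

# For comparison, re-run with a much smaller soft limit set before the
# context is created, e.g.:
#   resource.setrlimit(resource.RLIMIT_NOFILE, (1024, hard))
```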
=== dgadomski_ is now known as dgadomski
=== askhl_ is now known as askhl
=== shadeslayer_ is now known as shadeslayer
=== chihchun is now known as chihchun_afk
[16:18] Hello, I've been trying to report a bug via Launchpad for a couple of days, and I always get "Timeout error". Latest attempt: (Error ID: OOPS-ad34bc67402921e640c37c5909ecd6a7)
[16:18] https://oops.canonical.com/?oopsid=OOPS-ad34bc67402921e640c37c5909ecd6a7
[16:20] That one has historically always gone away in ten minutes or so, although it recurs every so often.
[16:21] Hm, or is that the same one ...
[16:22] OK, actually different from what I was thinking. Let's see.
[16:23] It's been happening on every attempt since yesterday evening
[16:28] bp0: You could work around this by making the package name field just "nvidia-graphics-drivers-390" rather than "nvidia-graphics-drivers-390 bionic".
[16:29] bp0: I believe the superfluous " bionic" causes a form error, which then means that the failure handling for that form does a search for similar bugs, which in this case times out due to overly complicated search terms.
[16:29] Hmm, alright, I'll try
[16:34] https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-390/+bug/1757202
[16:34] Ubuntu bug 1757202 in nvidia-graphics-drivers-390 (Ubuntu) "xubuntu / bionic / nvidia-driver-390 can only be used by one user at a time" [Undecided,New]
[16:34] Thanks, cjwatson
[16:35] Excellent
[19:50] cjwatson: i believe all but one package in the 10% phasing set is now imported (oxide-qt, which is taking a while)
[19:51] nacc: Nagios says 1010 GB free, which suggests 2% -> 10% took 4 GB
[19:51] cjwatson: ok, that seems sensible to me (this was 240 or so packages, iirc)
[19:53] cjwatson: so, in theory, presuming a similar repository size going forward, all of main (~5000 packages) will take < 100 GB
[19:53] cjwatson: our plan was to go up to 20% next, and see if we're still following the same curve for space
[19:53] cjwatson: then 50% and 100%
[19:54] Right, no immediate concerns about that
[19:55] cjwatson: thanks!
[19:55] We'll want to take some care about universe, but let's see how it goes
[19:56] yeah, universe is long-term at this point :)
[19:56] the goal was to see if we can import main by the start of the 18.10 cycle
[20:03] nacc: are you sure oxide-qt isn't stuck? 284 publications isn't _that_ many.
[20:05] rbasak: it's making progress, just slowly
[20:05] rbasak: afaict (i see as much in top, at least)
[20:05] OK
[20:06] rbasak: i'm not 100% sure why it's so slow, but there are lots of repetitive publishes
[20:06] i think it'd be much faster with one of my branches, i forget which one
[20:06] that doesn't do the branch changes until the end
[20:06] right now the git branches are churning, i think
[20:17] rbasak: lol, each orig tarball is roughly 400M
[20:17] rbasak: that's why it's taking so long, it's hammering both the network and pristine-tar
[20:35] Nice
[20:41] That stuff isn't run in parallel?
[20:44] The importer can run stuff in parallel. But the import of multiple publications of a single source package is linear. It has to be for the commit graph to be able to be created correctly.
[21:01] tsimonq2: e.g., right now we are doing 10 imports at the same time
[21:01] tsimonq2: but each import itself is linearly reading the publishing history, as we care about the order things happen in
[21:02] Ah ok.
[21:02] Right.
[21:03] tsimonq2: we actually care less now than we did before, but we still need to ensure all prior stuff is in before the currently examined publish, so that we can parent as correctly as possible
[21:04] nacc: Right.
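As a sanity check on the figures quoted at 19:51-19:53, the extrapolation works out as follows; the per-package average is derived from the numbers above, not independently measured.

```python
# Back-of-the-envelope version of the space extrapolation above.
packages_imported = 240      # "this was 240 or so packages"
space_used_gb = 4            # "2% -> 10% took 4 GB"
per_package_gb = space_used_gb / packages_imported

main_packages = 5000         # "all of main (~5000 packages)"
estimate_gb = main_packages * per_package_gb
print(f"~{per_package_gb * 1024:.0f} MB/package; main ~= {estimate_gb:.0f} GB")
# Prints roughly 17 MB/package and ~83 GB for all of main, consistent
# with the "< 100 GB" figure quoted above.
```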
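A hypothetical sketch of the parallelism model described at 20:44-21:03: different source packages import concurrently (ten at a time above), while the publications of any one package are replayed oldest-first so each publish can be parented on everything before it. The function names here are illustrative, not git-ubuntu's actual API.

```python
# Hypothetical sketch of the import parallelism described above: packages
# in parallel, publications of a single package strictly in order.
from concurrent.futures import ThreadPoolExecutor

def publications_for(package):
    """Placeholder for a package's publishing history, oldest first."""
    return ["1.0-1", "1.0-2", "1.1-1"]

def import_publication(package, pub):
    """Placeholder for importing one publication; in the real importer
    this creates commits whose parents must already exist."""
    print(f"importing {package} {pub}")

def import_package(package):
    # Linear within a single package: each publish is imported only after
    # all earlier ones, so the commit graph gets correct parents.
    for pub in publications_for(package):
        import_publication(package, pub)

packages = ["oxide-qt", "hello", "coreutils"]
with ThreadPoolExecutor(max_workers=10) as pool:  # "10 imports at the same time"
    for pkg in packages:
        pool.submit(import_package, pkg)
```

This ordering constraint is why a single large package such as oxide-qt (284 publications, ~400M orig tarballs) can dominate wall-clock time even though overall throughput is parallel.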