[08:53] <mapreri> umh, what's up with OOPS-42285e4cd60b7df301fb4e621594cb81 ?
[08:53] <mapreri> (and several other tries with different OOPS ids)
[08:56] <cjwatson> OffsiteFormPostError: https://jenkins.debian.org/manage
[08:57] <mapreri> wut
[08:57] <cjwatson> what on earth are you trying to do?
[08:58] <wgrant> wat
[08:58] <wgrant> It's XHR too
[08:58] <mapreri> oh, that leaked from a addon mangling the headers I used yesterday and forgot to disable
[08:58] <wgrant> A browser shouldn't have allowed that.
[08:58] <mapreri> u.U
[08:58] <wgrant> mapreri: Hm, that addon sounds like a very bad security hole.
[08:59] <wgrant> I don't know how to configure Referer sanely in Chromium, but in Firefox you just set XOriginPolicy=trimmingPolicy=2 and it basically does what Origin will in five years.
[08:59] <mapreri> well, I added Referrer: http://jenkins.debian.org/manage myself, it didn't pick it up, but my plan all along was to remove it once done
[09:00] <wgrant> Ahh
[09:00] <mapreri> I needed to mangle referrer for a test of mine
[09:01] <mapreri> better now :)  Thanks ^^
[16:10] <ricotz> hello, it seems the automatic translation export to a dedicated branch stopped working?
[16:22] <cjwatson> ricotz: Always easier to check logs given an example, if you have one.
[16:30] <cjwatson> Though it looks like it's semi-consistently failing on exporting https://translations.launchpad.net/josm/trunk
[16:31] <cjwatson> Not all the time.  It's failed three days in a row, and the job only runs daily so that blocks everything else.  When it succeeds, that particular series takes on the order of 45 minutes to export.
[16:32] <cjwatson> ~10000 messages ...
[16:35] <cjwatson> wgrant: Do you know of a particular reason we don't run translations-export-to-branch at DEBUG?  Not quite easy to see where this is slow at the moment.
[16:37] <cjwatson> TransactionFreeOperation might help.
[16:38] <cjwatson> Though that's a bit awkward with the _findChangedPOFiles generator.
[16:38] <cjwatson> ricotz: Anyway, this is a bit more than a five-minute job to sort out, so please file a bug.
[16:38] <ricotz> cjwatson, I don't know of a log that I could have looked at ;), just noticed after approving some new translations that they weren't exported in the following days
[16:39] <ricotz> in my case https://translations.launchpad.net/plank and https://code.launchpad.net/~docky-core/plank/l10n
[16:45] <cjwatson> ricotz: You couldn't have looked at it, but it's easier for us to look when we have examples to go from.
[16:45] <cjwatson> ricotz: Anyway, I found it eventually, as above.
[18:27] <theShirbiny> Hello, i'm running LC_ALL=en_US.UTF-8 add-apt-repository -y ppa:ondrej/php
[18:27] <theShirbiny> i'm getting Cannot add PPA: 'ppa:ondrej/php'.
[18:27] <theShirbiny> Please check that the PPA name or format is correct.
[18:35] <theShirbiny> :(
[18:36] <theShirbiny> restarting the docker service fixed it, no idea why
[22:07] <kyrofa> Can launchpad be used to build gadget snaps?
[22:17] <wgrant> kyrofa: Hm, I don't see why not, though I'm not familiar with any snapcraft specialties there.
[22:18] <wgrant> cjwatson: I switch things to DEBUG whenever I see them.
[22:18] <kyrofa> wgrant, good deal, just wanted to see if it was constrained
[22:18] <kyrofa> wgrant, if I run into issues I'll let you know
[23:36] <cjwatson> wgrant: cheers.  MP sent.
[23:37] <wgrant> kyrofa: Could you have a look at https://bugs.launchpad.net/snapcraft/+bug/1675513 at some point? It looks like the stage-packages caching work has seriously regressed performance.
[23:38] <kyrofa> wgrant, yeah we're looking at it now
[23:38] <kyrofa> wgrant, I heard the jump was from 7 minutes to over two hours, which seems... insane
[23:39] <wgrant> kyrofa: Yup. And it only dies at two hours because of the proxy auth timeout.
[23:39] <wgrant> kyrofa: Let me know if you need any more info.
[23:39] <kyrofa> Right, so even _more_. Yeesh
[23:39] <kyrofa> wgrant, will do, thank you
[23:42] <cjwatson> Hopefully you can simulate the situation with tc or similar - just slap some latency in.
[23:43] <cjwatson> Something like https://wiki.linuxfoundation.org/networking/netem#emulating-wide-area-network-delays maybe
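The netem recipe from that wiki page boils down to attaching a delay qdisc to the outgoing interface; a hedged sketch, shown through a dry-run wrapper since the real commands need root (the interface name `eth0` and the delay values are assumptions, not taken from the bug):

```shell
# Sketch of emulating a long fat pipe with tc/netem, per the linked wiki.
# Wrapped in a dry-run echo so it's safe to run as-is; swap the wrapper
# for `sudo "$@"` to actually apply it. eth0 and the delays are examples.
run() { echo "+ $*"; }
run tc qdisc add dev eth0 root netem delay 200ms 40ms   # ~200ms latency, 40ms jitter
# ...reproduce the slow stage-packages fetch here...
run tc qdisc del dev eth0 root netem                    # restore normal latency
```

Bandwidth shaping (tbf) can be layered on top if the Costa Rica scenario needs both high latency and low throughput.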
[23:44] <kyrofa> Thanks cjwatson, that's helpful
[23:45] <kyrofa> cjwatson, my current solution is to have elopio take a look. He's in Costa Rica: a terrible connection to everywhere
[23:45] <cjwatson> Hah
[23:46] <cjwatson> My connection is terrible, but more in terms of bandwidth than latency.
[23:47] <cjwatson> In this case it's the classic long fat pipe.  We have some mitigations in place for most builds, but snapcraft may manage to avoid them due to the way it handles sources.list, and in any case fetching one package per connection without keepalive seems to be pathological.
[23:48] <kyrofa> cjwatson, yeah we hit some shortcomings with the apt API
[23:49] <cjwatson> Yeah, I saw that from the comment.
[23:49] <cjwatson> This workaround is apparently not an unalloyed success though :)
[23:50] <kyrofa> *cough*
[23:51] <kyrofa> Wait... cjwatson why on earth are you awake?
[23:51] <cjwatson> It's not that late here, and I'm not going to be for much longer anyway ...
[23:51] <wgrant> We are strange people. That's why.
[23:52] <cjwatson> Or that.
[23:52] <kyrofa> wgrant, isn't it like 11am for you? :P
[23:52] <wgrant> Yes but I'm also on leave :P
[23:52] <kyrofa> Hahaha
[23:52] <wgrant> cjwatson: This bit of snapcraft uses the proxy, I think.
[23:52] <wgrant> So it bypasses the LFP workaround entirely.
[23:53] <wgrant> s/the proxy/snap-proxy/
[23:53] <cjwatson> I thought I vaguely remembered snap-proxy being included in the workaround, but perhaps not.
[23:54] <wgrant> I don't think it was. It would be difficult to get the reverse route right due to OpenStack.
[23:54] <wgrant> (not to mention that the workaround firewall is in GS2)