[02:31] <StevenK> wgrant: https://code.launchpad.net/~stevenk/launchpad/rbsj-generalise/+merge/112684
[04:01] <StevenK> wgrant: Can haz review?
[04:10] <wgrant> StevenK: Looks good
[04:12] <wgrant> Oh
[04:12] <wgrant> It's docutils
[04:12] <wgrant> That's why WADL generation is so slow.
[04:12] <wgrant> It passes all the docs through docutils
[04:12]  * wgrant headdesks
[04:12] <StevenK> Haha
[04:12] <StevenK> Trying to speed up make?
[04:25] <wgrant> Yeah
[04:25] <wgrant> Removing docutils speeds it up by about 80%
[04:26] <wgrant> But I don't think we can just stop docutilsing :(
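A hedged sketch of the cheapest mitigation for what wgrant describes (nothing here reflects the real WADL generator; the expensive `docutils.core.publish_parts` call is replaced by a stub so the example is self-contained): memoise rendering by docstring content, so repeated builds stop re-rendering unchanged docs.

```python
import functools

# Stand-in for docutils.core.publish_parts(source, writer_name='html'),
# which the chat identifies as the hot spot; a stub keeps this runnable.
CALLS = []

def expensive_render(text):
    CALLS.append(text)
    return '<p>%s</p>' % text

@functools.lru_cache(maxsize=None)
def render_docstring(text):
    # Cache by docstring content: each distinct docstring is rendered at
    # most once per process, instead of once per WADL generation pass.
    return expensive_render(text)
```

Calling `render_docstring` twice with the same text hits `expensive_render` only once; whether that is safe assumes the docutils settings are constant across calls.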
[05:36] <StevenK> wgrant: http://pastebin.ubuntu.com/1065497/
[05:40] <wgrant> StevenK: Good start, although it doesn't look like it actually unsubs yet
[05:44] <StevenK> It does not, no.
[05:45] <StevenK> I need help with that bit, since RASJ.run() seems to use BTF for everything.
[05:45] <wgrant> cjwatson_: I don't quite understand the reason for the divergent paths in lib/lp/soyuz/scripts/packagecopier.py. Shouldn't it just be defaulting series to the one on the SPPH, with no other differences?
[05:47] <wgrant> StevenK: Right. Initially you could just loop through the branches and call Branch._checkBranchVisibleByUser. Once we redefine branch security in terms of the new access schema, we can migrate to something better.
[05:47] <wgrant> Using the slow awful APIs isn't so bad for branches, since there are very few private ones.
[05:51] <StevenK> wgrant: for branch in self.branches: for sub in branch.subscribers: branch._checkBranchVisibleByUser(sub) does not seem full of win and puppies.
[05:51] <StevenK> sub.person, but details
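StevenK's one-liner above, spelled out as a runnable sketch. The class shape and `remove_invisible_subscribers` are hypothetical stand-ins; only the `_checkBranchVisibleByUser` and unsubscribe ideas come from the chat.

```python
class Branch:
    """Minimal stand-in for lp.code's Branch (hypothetical shape)."""

    def __init__(self, visible_to, subscribers):
        self._visible_to = set(visible_to)      # people allowed to see it
        self.subscriptions = list(subscribers)  # current subscribers

    def _checkBranchVisibleByUser(self, person):
        return person in self._visible_to

    def unsubscribe(self, person):
        self.subscriptions.remove(person)

def remove_invisible_subscribers(branches):
    """Drop any subscriber who can no longer see the branch, e.g. after
    a +sharing revocation. Slow but simple, as the chat notes."""
    for branch in branches:
        # Iterate over a copy: we mutate subscriptions while looping.
        for person in list(branch.subscriptions):
            if not branch._checkBranchVisibleByUser(person):
                branch.unsubscribe(person)
```

As wgrant says, this per-subscriber check is tolerable only while private branches are rare; the access-schema query replaces it later.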
[05:53] <wgrant> Hmm
[05:53] <wgrant> So
[05:53] <wgrant> We need this for revocation from +sharing
[05:54] <wgrant> It is perhaps best to do this straight against the new access schema, since we'll hopefully have that populated early next week.
[05:55] <wgrant> StevenK: So you'll want to prepare this branch, using something almost identical to the BugTaskFlat query, but keep it on ice until all the garbo stuff is done next week
[05:55] <StevenK> wgrant: All I'm saying is that it's going to be slow and horrible. If it's going to be replaced after branch.access_grants is populated, then I withdraw my objection, since it should be short-lived.
[05:55] <wgrant> StevenK: But the job isn't actually useful until we're using branch.access_grants, so it can just use that from the start
[05:56] <StevenK> Right
[05:56] <StevenK> So I should land the branch.aag garbo job? :-)
[05:57] <wgrant> StevenK: Not until the unsubscribe fix is deployed everywhere
[05:58] <StevenK> wgrant: Which I wanted to deploy this morning and you told me to wait :-P
[05:59] <wgrant> StevenK: buildbot will be done in half an hour, you can deploy then :)
[05:59] <wgrant> If you had explained your reasoning for wanting to deploy urgently this morning, we could have.
[06:00] <StevenK> wgrant: I had forgotten the branch.unsubscribe fix was blocking the garbo job. :-(
[06:19] <StevenK> Right, buildbot done
[07:32] <StevenK> wgrant: Do we want to wait for Europeans to QA or polish off the QA?
[07:37] <jelmer> StevenK: jam and mgz seem to be quite keen on doing their own, to see what it's like
[07:46] <adeuring> good morning
[07:47] <StevenK> jelmer: Feel free. I'd like to deploy at least up to r15520
[07:51] <gmb> jam: http://oo00.eu/
[07:56] <jam> StevenK: so we qa'd our own, but 15518 is currently blocked because the MPs I've tried to look at are unable to load their diffs from the librarian
[08:11] <StevenK> jam: bzr di -r ? :-)
[08:12] <jam> StevenK: the issue is you can't load a page like: https://code.qastaging.launchpad.net/~gz/bzr/2.4_robust_logging_714449/+merge/84035
[08:12] <jam> because it thinks there was a diff generated for the MP
[08:12] <jam> which should be sitting in the librarian
[08:12] <jam> but the librarian id is for production
[08:13] <jam> and doesn't exist on qastaging
[08:13] <jam> so we're trying to force a new build
[08:13] <jam> (by pushing to an old branch, and then running the scanner jobs)
[08:16] <StevenK> jam: Right, so are you guys happy to ask gmb how to put up a deployment request?
[08:16] <jam> StevenK: that would be good for us, yes
[08:17] <StevenK> jam: I'm happy enough to do it as well -- but if gmb has it covered, that works for me.
[08:18] <jam> StevenK: I'd like to expose our team to it, though we may need a bit more guidance if there are bits that gmb is unclear on
[08:18] <gmb> StevenK, You assume I've ever done one; I haven't. Usually an Aussie has done the work before I come online :)
[08:18] <StevenK> Haha
[08:18] <StevenK> I'm happy to have a G+ hangout or something to walk all of you through it
[08:19] <gmb> StevenK, That sounds like a good idea.
[08:20] <StevenK> gmb: You're there in person, organise it and tell me where to be. :-)
[08:35] <cjwatson> wgrant: It seemed kind of strange to go through series.getSourcePackage(spn).latest_published_component when I could just use source.component.  And there are different numbers of checks needed in each case, too.  I tried consolidating the two branches but it wasn't clear that it actually made things any more readable.
[08:35] <cjwatson> wgrant: Is your concern purely style, or do you think I made a semantic mistake?
[08:51] <jam> StevenK: we've managed to QA up to 15520, so I think we're ready to chat about deployment. I think gmb is setting up the hangout now.
[08:55] <StevenK> jam: Okay, sounds excellent.
[08:55] <StevenK> gmb: Please use my personal G+ account
[08:56] <gmb> StevenK, We'll ping you a link in a sec... techofail happening atm.
[08:58] <gmb> StevenK, https://plus.google.com/hangouts/_/1db0fa7779d1a1038578f306df3aa20ebf61a0be?authuser=0&hl=en-GB#
[09:02] <StevenK> http://pastebin.ubuntu.com/1065642/
[09:04] <StevenK> https://wiki.canonical.com/InformationInfrastructure/OSA/LaunchpadProductionStatus
[09:05] <StevenK> https://devpad.canonical.com/~wgrant/production-revnos
[09:35] <jam> stub: It looks like the last dump of production => staging was back in April (04-20?). Is there a specific process if we want to get newer data?
[09:35] <stub> jam: Yes, it was disabled and never reenabled after the disk space issue was worked around.
[09:36] <stub> jam: I left it disabled, as I was going to try the new rebuild process on the weekend.
[09:36] <stub> jam: I could do it now, but unfortunately the new rebuild needs to happen in place due to disk space limits, so staging will be offline for several hours while it works.
[09:36] <jam> stub: so it is likely to be re-enabled Soon (tm) ? (I don't need it urgently at all, I was just trying to understand the status)
[09:37] <stub> jam: Yes, I will run the script myself tomorrow and schedule it for weekly rebuilds on the weekend.
[09:37] <jam> sounds good
[10:47] <jam> stub: if we do fall back to the production librarian, can you help me debug things a bit?
[10:47] <jam> If I try to go to an old merge proposal, such as: https://code.qastaging.launchpad.net/~gz/bzr/2.4_robust_logging_714449/+merge/84465
[10:47] <jam> I get this oops: https://oops.canonical.com/oops/?oopsid=OOPS-106e5fe837de46e8a46356f148ecd888
[10:47] <jam> which looks like a librarian lookup failure
[10:50] <stub> jam: That is indeed a librarian lookup failure. So we need to work out why that isn't in the Librarian. I'll check the production db
[10:50] <stub> (faster than tracking down the traceback from the librarian)
[10:51] <jam> note, that wasn't the only merge proposal I tried, it happened on at least one other one.
[10:52] <jam> stub: I can see the entry in 'libraryfilealias' in staging at least.
[10:52] <stub> jam: It is there on production too
[10:54] <stub> jam: You need to check qastaging's config to confirm that indeed requests are being passed back to the production librarian, and get the oops from the librarian for that request (grepping for that integer id should work)
[10:54] <stub> This Librarian is oopsing now rather than spewing to a log file.
[10:56] <stub> s/This/I think that the/
[10:56] <jam> stub: would that be in launchpad-lazr.conf under 'librarian_server' ?
[10:56] <jam> because all I see is "launch: True"
[10:56] <stub> upstream_url or something
[10:57] <jam> ah, there is another page to look at (qastaging-lazr.conf)
[10:57] <jam> stub: upstream_host: launchpadlibrarian.net
[10:57] <stub> So it is supposed to work
[10:58] <stub> It might be that restricted resources have a problem, or the qastaging librarian is having trouble with the session database or something, but I would expect more fallout if that were the case.
[10:59] <jam> stub: is there an obvious way to turn a lfa.id into a URL?
[10:59] <jam> I tried poking at the code, but I also get a 404 when I go to the url I expected.
[10:59] <stub> You need the lfa and filename
[11:00] <jam> stub: which you get from 'SELECT id, filename FROM LibraryFileAlias where id = ....'
[11:01] <stub> def get_libraryfilealias_download_path(aliasID, filename):
[11:01] <stub>     """Download path for a given `LibraryFileAlias` id and filename."""
[11:01] <stub>     return '/%d/%s' % (int(aliasID), url_path_quote(filename))
[11:01] <jam> yeah, that is the one I tried
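The pasted helper, made self-contained for illustration (urllib's `quote` stands in for Launchpad's `url_path_quote`; the alias id and filename below are made up, and the resulting path is joined onto the librarian host, e.g. the `upstream_host` from the config):

```python
from urllib.parse import quote as url_path_quote  # stand-in for LP's helper

def get_libraryfilealias_download_path(alias_id, filename):
    """Download path for a given LibraryFileAlias id and filename."""
    return '/%d/%s' % (int(alias_id), url_path_quote(filename))

# SELECT id, filename FROM LibraryFileAlias WHERE id = ... gives the inputs
# (12345 and the filename here are hypothetical):
path = get_libraryfilealias_download_path(12345, 'review diff.txt')
```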
[11:01] <jam> stub: I wonder if the value in qastaging differs from the one in production?
[11:01] <stub> If it does, that would be a bug.
[11:03] <stub> I get a 404 too.
[11:03] <stub> So maybe the file never arrived on disk?
[11:04] <jam> stub: well if you go to the production page, it looks correct: https://code.launchpad.net/~gz/bzr/2.4_robust_logging_714449/+merge/84465
[11:04] <jam> but maybe the page has a different diff now in prod
[11:04] <stub> yer, mebbe
[11:05] <stub> Is this one example of many, or is it a rare case?
[11:05] <jam> stub: I tried 2-3 mp's and all of them failed in a similar way
[11:07] <jam> stub: if I go to: https://code.qastaging.launchpad.net/bzr/+activereviews
[11:07] <jam> all of the links off of there seem to fail with lookup error
[11:08] <jam> stub: having tried 6 of them
[11:08] <jam> stub: and all of them fail for https://code.qastaging.launchpad.net/launchpad/+activereviews as well
[11:09] <jam> (one succeeds, but because it doesn't have a diff to show.)
[11:09] <jam> I'm off to lunch for now.
[11:09] <jam> but it is definitely systematically broken
[11:12] <stub> jam: I'm thinking that the file exists on production, but you don't have access to it there so are getting a 404.
[11:14] <stub> jam: I'm not sure if in this case your permissions on production or your permissions on qastaging are in effect.
[11:15] <stub> jam: If a MP has been superseded, then perhaps the old one is no longer visible to anyone? So a superseded diff on production will always return 404?
[12:24] <jam> https://oops.canonical.com/oops/?oopsid=OOPS-00746fbd7a0bd9d6b5273c8342091655
[12:24] <jam> jelmer: ^^
[12:54] <jelmer> gmb: lp:~jelmer/launchpad/skip-the-skips
[12:57] <jelmer> gmb: lp:~jelmer/launchpad/skip-the-skips
[13:16] <jcsackett> jelmer: saw your updates on the bzr update branch. r=me, and thanks for the answers.
[13:26] <jelmer> jcsackett: thanks!
[13:29] <czajkowski> jcsackett: sorry I didn't put you down for holidays, I go by what's on the admin in case folks are doing a swap day
[13:35] <deryck> adeuring, https://plus.google.com/hangouts/_/355d337a416990bcc886b192858066f0198beb37?authuser=0&hl=en
[13:54] <jcsackett> czajkowski: i forget there are holidays. :-P
[13:54] <czajkowski> well indeed, but some people use them for swap days
[13:54] <jcsackett> so no worries at all. :-)
[13:55] <jcsackett> czajkowski: yeah, what you're doing makes sense. those of us who forget to list the holiday can always reply to the staffing email with corrections.
[13:55] <jcsackett> ...or our bosses can. :-P
[13:55] <czajkowski> jcsackett: indeed
[13:56] <jcsackett> sorry if my forgetfulness causes you any problems.
[14:03] <czajkowski> jcsackett: no none at all
[14:04] <czajkowski> just don't want you thinking I'm slacking or forgetting you either
[14:07] <jelmer> mgz: pqm_email = Canonical PQM <launchpad@pqm.canonical.com>
[14:18] <mgz> wgrant: I am a tool, my apologies
[14:20] <jcsackett> czajkowski: the thought never crossed my mind. :-)
[14:22] <wgrant> cjwatson: I believe it's a semantic mistake. You're using the source component, when the permission check should be on the target.
[14:22] <wgrant> Shouldn't it?
[14:23] <wgrant> jam: Restricted librarian files aren't proxied back from production, so you can't really view old MPs on staging
[14:25] <cjwatson> wgrant: If that's so, then the current code is buggy too, because it gets the most recently published component for the series regardless of which archive it's in
[14:26] <cjwatson> *for the source package in the distroseries
[14:27] <wgrant> cjwatson: Yes, but a lot of code is crap, so it's not inconceivable.
[14:27] <cjwatson> Well, yes, but it makes it hard to tell whether I'm making things any worse.
[14:27] <wgrant> It would never really be noticed, since non-distro archives only publish in main.
[14:27] <wgrant> mgz: What did you do?
[14:28] <cjwatson> So what's the right fix?  Explicitly look up the latest published component in the target archive, but what if it isn't in the target archive at all yet?
[14:29] <wgrant> There are rules for that already in archiveuploader.
[14:29] <wgrant> strict_component exists for that reason
[14:29] <wgrant> IIRC it tries with the source component and strict_component=True
[14:30] <wgrant> If there is no source component, it tries with strict_component=False
[14:30] <wgrant> To allow any component uploader to upload a new package
[14:30] <wgrant> IIRC
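wgrant's IIRC description as a hedged sketch. `check_copy_component` and its `check_upload` callback are hypothetical stand-ins for the real permission machinery in nascentupload/archiveuploader, and `'main'` as the trial component for new packages is an assumption:

```python
def check_copy_component(source_component, check_upload):
    """Two-pass component check as recalled above.

    check_upload(component, strict_component) returns True if the user
    may upload to the target archive under those constraints.
    """
    if source_component is not None:
        # The package already has a component in the target archive:
        # require rights to that specific component.
        return check_upload(source_component, True)
    # New package: relax the restriction so any component uploader can
    # introduce it ('main' as the trial component is an assumption).
    return check_upload('main', False)
```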
[14:30] <cjwatson> Dear nascentupload, thanks for not explicitly tagging when you pass a keyword argument so that I can't grep for it.
[14:31] <cjwatson> But OK, that's similar to what half of check_copy_permissions is doing then.
[14:31] <wgrant> Keyword-only arguments cannot come soon enough :)
[14:32] <wgrant> I haven't actually read check_copy_permissions
[14:32] <wgrant> I possibly should
[14:33] <wgrant> Ah, right.
[14:34] <gmb> jam, jelmer, vila, mgz, gary_poster: Behold: Python 2.7 parallel testing without an unknown worker (all the problem tests have been nuked for now; a proper fix is coming).
[14:34] <wgrant> So the version before your branch is correct, apart from the fact that latest_published_component only considers distro archives, which won't be a problem in practice, but we should probably fix.
[14:34] <gmb> http://ec2-184-72-186-58.compute-1.amazonaws.com:8010/builders/lucid_lp/builds/2
[14:34] <vila> gmb: \o/
[14:34] <gary_poster> sweet, gmb!  congrats and thanks!
[14:35] <gmb> gary_poster, Welcome. Actually, much kudos must go to the Blue Squad and their genetic subunit parsing abilities.
[14:35] <wgrant> 20 workers? Madness.
[14:35] <gmb> Sparta.
[14:35] <gary_poster> heh
[14:35] <gary_poster> yay blue squad! :-)
[14:38] <mgz> wgrant: read bug 692357 then filed bug 1018905 when we were fixing it anyway
[14:38] <_mup_> Bug #692357: lib/canonical/lazr/doc/timeout.txt hangs on Python 2.7 <python-upgrade> <tech-debt> <Launchpad itself:Triaged> < https://launchpad.net/bugs/692357 >
[14:38] <_mup_> Bug #1018905: lp/services/doc/timeout.txt hangs in Python 2.7 <python-upgrade> <qa-untestable> <Launchpad itself:Fix Released by jameinel> < https://launchpad.net/bugs/1018905 >
[14:38] <mgz> and lo, even named the bug (nearly) identically
[14:43] <wgrant> mgz: Ah, yeah, I thought I'd filed that one.
[14:52] <cjwatson> wgrant: OK, I'll see about fixing that today then.  Relatedly, do you think I might be able to try turning soyuz.derived_series.max_synchronous_syncs back down to something smallish and turning on soyuz.copypackageppa.enabled, both on dogfood, so that I can see how it behaves with current code?
[14:53] <cjwatson> I think I know of two bugs right now: private->public copying isn't hooked up on the copy_asynchronous path, and I'm fairly sure that failed copy notification is about 95% implemented but doesn't entirely work.
[14:53] <wgrant> cjwatson: Sure, that's fine on DF
[14:53] <wgrant> cjwatson: It's never been tested to any significant extent, however.
[14:53] <wgrant> So be prepared for spectacular fireworks.
[14:54] <cjwatson> Any particular suggestions on exercising it?  I found the latter bug while experimenting with the test suite ...
[14:54] <cjwatson> The actual copies themselves should be much the same as Archive.copyPackage, it's the UI glue that's probably broken
[14:54] <cjwatson> AFAICS
[14:55] <wgrant> Yeah, probably.
[14:55] <cjwatson> I have a branch that deletes synchronous copies and that passes tests.  So I want to know what I'm missing :-)
[14:57] <cjwatson> I note also that at the moment +copy-packages will copy from private to public PPAs using delayed copies without (AFAICT) any particular explicit confirmation, at least if I'm reading the code correctly.  Is this by design?
[14:57] <cjwatson> (Because if it is then I can just add unembargo=True to fix the first bug I mentioned above.)
[15:02] <jam> cjwatson: I see you are stealing all the low-hanging LoC for yourself :)
[15:03] <cjwatson> I reckoned that if nobody had noticed in four years it couldn't have been *that* low-hanging
[15:05] <cjwatson> wgrant: Did my work on https://code.launchpad.net/~cjwatson/launchpad/custom-uefi/+merge/111626 to use publisher configuration instead of lazr config look sensible to you?
[15:25] <sinzui> rick_h_, deryck, jcsackett: I located the madness in choicesource tabbing. There was a handler setting the close button to have focus when the overlay had focus...the overlay is modal, it always has focus when visible
[15:27] <sinzui> rick_h_, deryck, jcsackett: I have a plan so clever you could stuff it down your pants and call it a weasel. I wrote a handler to watch what was happening, then adapted it to make tabs cycle through the actions...you cannot tab out of the overlay. I can push this down to pretty overlay so that you cannot accidentally tab out of any overlay.
[15:29] <deryck> sinzui, why not just remove the bit that calls focus to the close button?
[15:29] <sinzui> I did
[15:30] <sinzui> deryck, once you tab or shift-tab out of a modal overlay, the focus is lost.
[15:30] <sinzui> chromium changed the focus to the location bar
[15:30] <deryck> sinzui, ok, rick_h_ and I are on a call together now.  we can chat more in a second about it.
[15:30] <deryck> sinzui, ah, that's what I would expect.
[15:38] <mgz> any ideas why I'm missing /var/tmp/mailman after doing the normal launchpad install steps? have some failing tests due to that which aren't 2.7 related
[15:47] <jam> Have a good weekend, everyone!
[15:48] <czajkowski> jam: toodles
[16:07] <gmb> jam, jelmer, mgz, vila: The build failed... with only one failed test. http://ec2-184-72-186-58.compute-1.amazonaws.com:8010/builders/lucid_lp/builds/2/steps/shell_9/logs/summary
[16:08] <gmb> But
[16:08] <gmb> There were only ~2300 tests.
[16:08] <gmb> Which means we're missing ~15,000 tests.
[16:08] <gmb> Somewhere...
[16:09] <sinzui> mgz, are these timeouts on doctests?
[16:10] <sinzui> mgz we disabled the MailmanLayer because a few doctests are fundamentally flawed. They always timeout.
[18:47] <cjwatson> wgrant: https://code.launchpad.net/~cjwatson/launchpad/fix-check-copy-permissions/+merge/112832 - does this look better?
[21:35] <jelmer> gmb: that bug was the one that vincent fixed earlier
[21:35] <jelmer> gmb: s/that bug/that failure/
[21:58] <czajkowski> evening