[00:20] <StevenK> Purple buildbot
[00:21] <wgrant> Exception?
[00:21] <wgrant> Yeah
[00:21] <wgrant> subunit corruption
[00:25] <StevenK> wgrant, wallyworld_: http://instagram.com/p/RYnio1s6gf/
[00:26] <wallyworld_> yeah, saw that. i don't like instagram
[00:26] <wallyworld_> i can't see what all the fuss is about
[00:26] <wallyworld_> why ruin perfectly good photos with crappy filters
[00:27] <StevenK> wallyworld_: Yes, but you don't like anything that's social. And I'm getting off your lawn now.
[00:27] <wallyworld_> how is ruining a good photo being social?
[00:27] <wgrant> We may have a dogfood DB in about 20 minutes, finally...
[00:27] <StevenK> Facebook bought out Instagram, which I thought was your issue.
[00:27] <StevenK> However, I was talking about the content, not the platform hosting the photo. :-)
[00:28] <wgrant> The issue with Instagram is that it's the worst idea ever
[00:28] <wgrant> *and* it's "social"
[00:28] <StevenK> Note to self: Do not share hurricane photos with workmates.
[00:29] <wallyworld_> StevenK: sorry, i saw instagram and my radar went off
[00:29] <wallyworld_> StevenK: did you see the one of the crane?
[00:29] <StevenK> wallyworld_: Nope
[00:30] <wallyworld_> StevenK: it was on this morning's news - a huge apartment building, still under construction - the crane is folded in half and dangling in the wind
[00:30] <StevenK> http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=3
[00:30] <StevenK> That one?
[00:30] <wallyworld_> yeah
[00:31] <StevenK> ... holy ...
[00:31] <wallyworld_> the latest pictures have it swinging in the "breeze"
[00:31] <wallyworld_> those apartments are selling for $90M
[01:50] <enginespot> Hi everyone , I get this channel from here https://help.launchpad.net/
[01:51] <enginespot> currently I have a question: I want to get the list of people via launchpadlib
[01:51] <enginespot> but I only get 50 persons
[01:51] <enginespot> so I do not know what to do
[02:07] <wgrant> StevenK: DF is back, and timing out merrily
[02:44] <wallyworld> wgrant: what's the difference between spr.upload_archive and spph.archive? are these related?
[02:45] <StevenK> Loosely
[02:45] <StevenK> spph.archive == Archive it is published in, spr.upload_archive == Archive it was first published in, so they can be different.
[02:46] <wallyworld> right, ok. thanks
[02:46] <wgrant> s/first published in/first uploaded to or seen in/
[02:46] <wgrant> It wasn't necessarily ever published there.
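The distinction StevenK and wgrant draw here can be sketched in code. These are hypothetical minimal classes, not Launchpad's real models: the point is just that one SPR has a fixed upload archive while its publications can live in many archives.

```python
# Sketch (not Launchpad's actual classes): an SPR's upload_archive can
# differ from the archives its publications (SPPH rows) live in.
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcePackageRelease:
    name: str
    upload_archive: str  # archive it was first uploaded to / seen in

@dataclass(frozen=True)
class Publication:  # stands in for SPPH
    spr: SourcePackageRelease
    archive: str     # archive this row is published in

spr = SourcePackageRelease("hello", upload_archive="ppa:alice")
pubs = [
    Publication(spr, "ppa:alice"),
    Publication(spr, "ubuntu-primary"),  # copied: archive != upload_archive
]

# One SPR, many publications; upload_archive is fixed, spph.archive varies.
assert {p.archive for p in pubs} == {"ppa:alice", "ubuntu-primary"}
```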
[02:46] <wallyworld> too bad spph.archive is not denormalised onto spr
[02:46] <wgrant> wallyworld: That doesn't make sense
[02:46] <StevenK> It can't be.
[02:46] <wgrant> Since an SPR can be published in an unbounded number of different archives
[02:46] <StevenK> Or even different series in the same archive
[02:46] <wallyworld> ok
[02:54] <enginespot> when I use the method PersonSet.find()
[02:54] <enginespot> it told me AttributeError: type object 'PersonSet' has no attribute 'find'
[02:58] <StevenK> wallyworld: http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=1
[02:59] <wallyworld> wgrant: these 2 queries should produce same result, but 2nd one faster. can you try on df for me when you have a moment? https://pastebin.canonical.com/77405/
[02:59] <wallyworld> StevenK:  that's a lot of water
[03:00] <wallyworld> enginespot: find() is exported, so perhaps you can paste your code for us to look at
[03:00] <StevenK> wallyworld: http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=10 is pretty impressive too
[03:00] <wallyworld> yes indeed
[03:05] <enginespot> my code is like the following:
[03:05] <enginespot> people=PersonSet.find()
[03:05] <enginespot>     #    for project in pad.projects:
[03:05] <enginespot>     #        print project.name
[03:05] <enginespot>     people = pad.people
[03:05] <enginespot>     i = 0
[03:05] <enginespot>     j=0
[03:05] <enginespot>     for person in people:
[03:05] <enginespot>         i+=1
[03:05] <enginespot>         ppas = person.ppas
[03:05] <enginespot> #        r.set(person.name, 1)
[03:06] <enginespot>         for ppa in ppas:
[03:06] <enginespot>             j += 1
[03:06] <enginespot>             r.set(j, ppa)
[03:06] <enginespot> I want to get a person list
[03:07] <enginespot> or can launchpadlib give me an iterator, so I can read persons one by one
[03:09] <StevenK> enginespot: You've been told by two different people that what you want isn't possible. Asking different people isn't going to change the answer.
[03:09] <enginespot> yes, I found a different interface in launchpadlib
[03:27] <wallyworld> StevenK: what was the strategy going to be to stop the mixed visibility error oopses? what we do now to show <hidden> seems reasonable? i can't recall what the end game was to be
[03:28] <StevenK> I can't recall off the top of my head, maybe wgrant can.
[03:31] <wgrant> It's complicated™
[03:31] <wgrant> Ignore for now
[03:51] <wallyworld> wgrant: i made a mistake on the sql above, here's a fixed version https://pastebin.canonical.com/77410/
[03:54] <wgrant> wallyworld: Ah, sorry, distracted by secure boot destroying the world
[03:54] <wgrant> Will run in a sec
[03:54] <wallyworld> no hurry
[03:54] <wallyworld> i'm not sure if we need the filter on the spph subquery
[04:00] <wgrant> wallyworld: So removing the persistent cache fixed the ec2 issue?
[04:01] <wallyworld> wgrant: yeah, thanks. my mind was telling me it wasn't persistent when clearly it was
[04:01] <wgrant> Great
[04:01] <wallyworld> if you want to look at my soyuz mp, it's up.
[04:02] <wgrant> I've glanced at it and opted to try to finish my current stuff before diving into it, unless you're in a hurry
[04:03] <wgrant> wallyworld: Is this the +ppa-packages query?
[04:03] <wallyworld> oh, no not at all. just mentioning it in case you hadn't noticed
[04:03] <wallyworld> yes
[04:03] <wgrant> Right, so you've changed the semantics slightly
[04:03] <wgrant> But probably not in a way that anyone cares about
[04:03] <wallyworld> i can't see why it was joining to spph when it didn't need to
[04:03] <wgrant> And it's now much faster
[04:03] <wgrant> It did technically need to, for the old definition of the package
[04:03] <wallyworld> the semantics should be the same
[04:03] <wgrant> s/package/page/
[04:04] <wgrant> ... oh, I see
[04:04] <wgrant> You're right
[04:04] <wallyworld> since there's a clause that says spr.upload_archive = archive.id
[04:04] <wgrant> So it joined Archive against SPR.upload_archive anyway?
[04:04] <wallyworld> yes, so it seems
[04:04] <wgrant> Right, so that SPPH join is completely unused?
[04:04] <wallyworld> as far as i can tell
[04:04] <wallyworld> but it's there to eliminate spr that haven't been published?
[04:05] <wgrant> Ah, yes, of course
[04:05] <wallyworld> so i did it as a subquery
[04:05] <wallyworld> so the semantics should be the same
[04:05] <wgrant> Right, I see the subquery now
[04:05] <wgrant> Missed it because of the indentation
[04:05] <wgrant> Um, well
[04:05] <wgrant> It's slightly different
[04:05] <wallyworld> it was quicker?
[04:05] <wgrant> Heh
[04:05] <wgrant> heh
[04:05] <wgrant> heh
[04:05] <wgrant> The subquery one is still running...
[04:06] <wallyworld> oh :-(
[04:06] <wallyworld> i would have thought postgres would have optimised that
[04:07] <wgrant> I'm not sure why it didn't here.
[04:07] <wgrant> But rewording it as an EXISTS is a bit cleaner and works fine
[04:08] <wallyworld> i did it as an exists originally, but thought it too verbose
[04:08] <wgrant> It's about 30% faster for me
[04:08] <wallyworld> ok, will change it
[04:08] <wgrant> using       AND EXISTS (SELECT 1 FROM sourcepackagepublishinghistory spph WHERE spph.sourcepackagerelease = sourcepackagerelease.id AND spph.archive = sourcepackagerelease.upload_archive)
[04:08] <wallyworld> but is it faster than the original join?
[04:08] <wgrant> Although the spph.archive constraint is new; it's not in the original SPPH-joining query
[04:08] <wgrant> Right
[04:08] <wgrant> 30% faster than the original query for me
[04:08] <wallyworld> how much faster?
[04:08] <wgrant> Trying with pathological cases now
[04:08] <wallyworld> ok
[04:08] <wgrant> 35ms vs 24ms
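The EXISTS rewrite wgrant pasted replaces a join that was only used as an existence filter. The equivalence can be illustrated in-memory; the record shapes below are made up, but the filtering logic mirrors the SQL.

```python
# In-memory sketch of the existence filter under discussion: keep only
# SPRs that have at least one publication in their upload archive.
sprs = [
    {"id": 1, "upload_archive": 10},
    {"id": 2, "upload_archive": 11},  # never published -> filtered out
]
spphs = [
    {"sourcepackagerelease": 1, "archive": 10},
]

# Python analogue of:
#   AND EXISTS (SELECT 1 FROM sourcepackagepublishinghistory spph
#               WHERE spph.sourcepackagerelease = sourcepackagerelease.id
#               AND spph.archive = sourcepackagerelease.upload_archive)
published = [
    spr for spr in sprs
    if any(p["sourcepackagerelease"] == spr["id"]
           and p["archive"] == spr["upload_archive"]
           for p in spphs)
]
assert [spr["id"] for spr in published] == [1]
```

Like SQL's EXISTS, `any()` stops at the first match rather than materialising the full join.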
[04:09] <wallyworld> the oops showed it taking a lot longer
[04:09] <wgrant> Well, I'm sort of the optimal case for the query
[04:09] <wallyworld> try with fta
[04:12] <wgrant> 14s -> 8s with fta
[04:13] <lifeless> too slow :)
[04:14] <wallyworld> well, it's still almost 50% improvement :-)
[04:14] <wallyworld> and thats on DF
[04:15] <wgrant> prod won't be significantly faster for this
[04:15] <wallyworld> might be ok on prod
[04:15] <wallyworld> i'm not sure how else to make it faster
[04:16] <wgrant> Well
[04:16] <lifeless> wallyworld: yes it's a lot better, but prod's cpu isn't /that/ much better than dogfood's; once you're in cache...
[04:16] <wgrant> The general approach for this sort of problem is to not be stupid in the first place
[04:16] <wgrant> Sadly we have history
[04:16] <wgrant> So we can't really decide to not be stupid in the first place
[04:17] <wgrant> But, in general, trying to calculate this sort of stuff out of the live tables is a terrible idea.
[04:17] <wallyworld> agreed
[04:17] <wgrant> We could perhaps redesign the page
[04:17] <wgrant> Such that it returns recent PPA publications created by the user
[04:18] <StevenK> Why do you need to chain through SPR anyway?
[04:18] <wallyworld> i'm not sure of the requirements or history of what's required
[04:18] <wallyworld> StevenK: not sure, that's how it was
[04:18] <wgrant> It's not chaining through SPR
[04:18] <wgrant> It's entirely based on SPR
[04:18] <StevenK> Ew
[04:19] <wgrant> It doesn't care about publications, beyond the "has this been published?" check to get around rejections
[04:21] <wallyworld> with the CPU issue, that assumes the query will be in cache, but i would have thought this page would nearly always be loaded cold
[04:21] <wgrant> The involved tables total only 10GB or so
[04:21] <wgrant> And some of that is probably (hopefully) TOASTed
[04:21] <StevenK> I wonder if SRF and batching will help
[04:22] <wgrant> No
[04:23] <wgrant> SRF helps only if we can calculate an ordered batch substantially more quickly than we can calculate the entire set
[04:23] <wgrant> Which requires sensible schema design and indexing
[04:23] <wallyworld> i may as well land this small change till we figure out what to do next
[04:24] <wallyworld> since it helps as is
[04:25] <wgrant> No
[04:25] <wgrant> It probably doesn't help significantly, and it may regress other cases
[04:25] <wgrant> It's liable to turn the subquery into a 4-6s seqscan if the wind blows the wrong way
[04:26] <wallyworld> what would cause that to happen?
[04:26] <wgrant> If it decides that there will be too many SPRs to efficiently use an index lookup
[04:27] <wallyworld> but don't we batch so that we only ask for 75 or so at once?
[04:28] <wgrant> We only ask for 75 at once, sure
[04:28] <wallyworld> i guess it still needs to know the number
[04:28] <wgrant> But it's an ordered set
[04:28] <wgrant> And the order is sufficiently complex and layered that it can't be indexed
[04:28] <wallyworld> yeah, sadly
[04:28] <wgrant> So it has to calculate the entire set
[04:28] <wgrant> Sort it
[04:28] <wgrant> Then take the first 75
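The cost wgrant describes can be sketched minimally: when the ordering is too complex to be served by an index, producing even one batch of 75 means ranking the whole set first. The data and key here are invented.

```python
# Why an un-indexable ordering forces a full sort: to return the first
# batch of 75 rows you must order the entire set, then slice it.
rows = [{"id": i, "score": (i * 37) % 101} for i in range(1000)]

def batch(rows, key, offset=0, size=75):
    # The whole set is sorted before slicing -- analogous to the planner
    # computing the full result, sorting it, then applying LIMIT 75.
    return sorted(rows, key=key)[offset:offset + size]

first = batch(rows, key=lambda r: (r["score"], r["id"]))
assert len(first) == 75
```

With an index matching the sort key, a database could instead walk the index and stop after 75 rows, which is the case this query cannot hit.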
[04:29] <wallyworld> are you sure it could become a seqscan?
[04:29] <wgrant> Yes, I've seen it happen here with fta once
[04:29] <wallyworld> and the join avoids that?
[04:30] <wgrant> Somehow, yeah
[04:30] <wgrant> The join has to be applied after the conditions
[04:30] <wgrant> So for the join it doesn't have to optimally determine the condition order to work out numbers
[04:31] <wallyworld> above you said for fta it went from 14->8
[04:31] <wallyworld> and he is a bit of a corner case
[04:31] <wgrant> And then it went to 16 or so a couple of times, as it chose a different plan for reasons which are unclear
[04:32] <wallyworld> ah, bollocks, ok
[04:32] <wallyworld> i could use a CTE perhaps
[04:32] <wallyworld> to narrow down the spr records
[04:32] <wallyworld> before checking for publication
[04:45] <wallyworld> wgrant: is this any faster? https://pastebin.canonical.com/77411/   (when you have a moment)
[04:46] <wgrant> I suspect not, but let's see
[04:47] <wgrant> still going...
[04:47] <wallyworld> :-(
[04:49] <StevenK> wallyworld, wgrant: https://code.launchpad.net/~stevenk/launchpad/sensible-superseded-by/+merge/132009
[04:49] <wallyworld> maybe i need to add an array column to spr - "archives_published_in"
[04:49] <wallyworld> that would eliminate the join
[04:50] <wgrant> That wouldn't help significantly.
[04:50] <wgrant> It would eliminate the seqscan, but leave the big performance issue
[04:51] <wallyworld> any other ideas?
[04:51] <wgrant> I'm trying to use window functions to do it
[04:51] <wallyworld> that don't involve a schema redesign
[04:52] <wallyworld> StevenK: typing a blueprint name seems unfortunate.
[04:52] <wallyworld> maybe a picker?
[04:52] <StevenK> wallyworld: So you missed that part of the call when Curtis said don't write a picker?
[04:53] <wallyworld> must have
[04:53] <wallyworld> sometimes words cut out
[04:53] <wallyworld> or whole sentences when curtis speaks
[04:53] <wgrant> Hm
[04:53] <wgrant> Curtis did say that :)
[04:53] <wallyworld> i would hate to type a blueprint name
[04:54] <StevenK> Copy and paste, hit continue
[04:54] <StevenK> Move on
[04:54] <wallyworld> guess so. seems very primitive
[04:56] <StevenK> wallyworld: Better or worse than a dropdown with 6200 items?
[04:56] <wallyworld> that's why a search based solution like a picker is best
[04:57] <wallyworld> not sure why it was rejected
[04:58] <StevenK> Oh good god https://twitter.com/raywert/status/263102070989680640/photo/1/large
[04:58] <wallyworld> StevenK: in validate, fetch the spec and put it in the data map. then do not fetch it again in the submit
[04:59] <StevenK> wallyworld: I wasn't sure if I could do that.
[04:59] <wallyworld> StevenK: yeah, the data dict is just passed around during the submit
[05:00] <wallyworld> run the tests just to be sure your implementation is all good
[05:00] <StevenK> I've not run all the blueprint tests, but xx-superseding{,-within-projects}.txt pass
[05:01] <wallyworld> that should be enough to send to ec2 with
[05:01] <StevenK> Pfft, I was going to play buildbot bingo
[05:02] <StevenK> I already lost once today, though
[05:02] <wallyworld> StevenK: also
[05:02] <wallyworld> 192	+ if result is None:
[05:02] <wallyworld> 193	+ return result
[05:02] <wallyworld> rs.one() should be enough
[05:02] <wallyworld> i think?
[05:03] <StevenK> I thought that would die horribly?
[05:03] <wallyworld> let me check the code
[05:03] <wallyworld> yes, rs.one() works
[05:04] <wallyworld> it returns None if rs is empty
[05:04] <wgrant> It returns None, the sole value, or raises an exception
[05:04] <wgrant> For 0, 1 and >1 results respectively
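The semantics wgrant describes can be sketched as a standalone function. This is modelled on Storm's `ResultSet.one()` behaviour as stated above, not the real implementation:

```python
# Sketch of one(): None for an empty set, the sole row for exactly one,
# and an exception for more than one result.
def one(results):
    if len(results) == 0:
        return None          # 0 rows -> None
    if len(results) == 1:
        return results[0]    # 1 row -> that row
    raise ValueError("one() called on a result set with multiple rows")

assert one([]) is None
assert one(["spec"]) == "spec"
```

So the explicit `if result is None: return result` guard in the diff is redundant; `rs.one()` already returns None for the empty case.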
[05:05] <StevenK> Yeah, I've changed it and checked it in iharness
[05:06] <wallyworld> StevenK: r=me, gotta pick up kid from school
[05:11] <wgrant> blah
[05:11] <wgrant> PARTITION BY seems to always want to traverse the whole set
[05:16] <wgrant> I guess it's not smart enough to realise that the inner and outer sorts match
[05:17] <wgrant> eg. SELECT id, purpose, row_number() OVER (PARTITION BY purpose ORDER BY id) FROM archive ORDER BY id LIMIT 5; should just be able to walk down the id index until the fifth row
[05:17] <wgrant> But it actually grabs the full table due to the PARTITION :/
[05:19] <wgrant> Ah, it wants them sorted by purpose, id
[05:19] <wgrant> Perhaps it doesn't want to have to remember the latest row_number for each purpose
[05:20] <wgrant> So there's no efficient way to do the DISTINCT ON on the server
[05:21] <wgrant> The quickest solution is probably to ask for a reasonable number and do the distinct in Python :/
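The client-side distinct wgrant suggests can be sketched as follows: the server returns rows newest-first, and Python keeps only the first row seen per key. The field names mirror the discussion but the records are invented.

```python
# DISTINCT ON done client-side: rows arrive ordered newest-first
# (e.g. dateuploaded DESC), and we keep the first row per
# (distroseries, archive, sourcepackagename) key.
def distinct_on(rows, key):
    seen = set()
    out = []
    for row in rows:          # rows assumed already newest-first
        k = key(row)
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

rows = [  # newest first
    {"series": "raring", "archive": 1, "spn": "hello", "version": "2.8"},
    {"series": "raring", "archive": 1, "spn": "hello", "version": "2.7"},
    {"series": "quantal", "archive": 1, "spn": "hello", "version": "2.7"},
]
latest = distinct_on(rows, key=lambda r: (r["series"], r["archive"], r["spn"]))
assert [r["version"] for r in latest] == ["2.8", "2.7"]
```

The trade-off is having to fetch "a reasonable number" of extra rows, since duplicates are discarded after transfer rather than on the server.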
[05:39] <wallyworld> wgrant: so removing the distinct on from the query will make it fast?
[05:39] <wgrant> wallyworld: Well, with an index on (creator, dateuploaded DESC, id)
[05:39] <wgrant> You can then do a direct indexed query
[05:40] <wgrant> If you ignore the DISTINCT ON bit
[05:40] <wgrant> The page tries to only show the latest SPR for each (distroseries, archive, sourcepackagename)
[05:40] <wallyworld> i guess i could iterate the result set till i have the requisite number of records
[05:40] <wgrant> And that is the bit that is slow to implement in SQL
[05:40] <StevenK> Bleh, sinzui didn't use rollback
[05:41] <wallyworld> s/in SQL/in Postgres
[05:41] <wgrant> We can do that using DISTINCT ON or a rank() with PARTITION BY, but both of those require that it be sorted by the partition/distinct first
[05:41] <wgrant> s/in SQL/in SQL in postgres/, but yes
[05:41] <wallyworld> so i assume without checking yet that we don't have the index
[05:42] <wgrant> We don't yet, indeed
[05:42] <wallyworld> would adding it make the distinct faster?
[05:42] <wgrant> No
[05:42] <wallyworld> ok, we can add it live too then i think
[05:42] <wgrant> Indeed
[05:43]  * StevenK tries to work out how to QA r16207
[05:43] <wallyworld> oh well, thanks for experimenting with it
[05:43] <wallyworld> too bad it doesn't want to play nice
[05:43] <wgrant> StevenK: librarian, codehosting, regret that you didn't wait until we had builders
[05:43] <wgrant> basically
[05:44] <StevenK> wgrant: So push and pull a branch, but I'm not sure how to fiddle with the librarian
[05:44] <wgrant> click link to librarian file
[05:44] <wgrant> see if link to librarian file works
[05:45] <wgrant> upload file to librarian
[05:45] <wgrant> see if librarian file works
[05:55] <StevenK> Bleh, can't push to lazr.restful on qas due to bzr: ERROR: Server sent an unexpected error: ('error', 'NotBranchError', 'Not a branch: "chroot-67793680:///+branch-id/43364/".')  which I guess is stacking
[05:55] <wgrant> +junk is your friend
[05:57] <StevenK> Excellent, all four look good
[06:19] <enginespot> Hi everyone
[06:19] <enginespot> how to get ppa from a project
[06:28] <nigelb> enginespot: Projects don't have PPAs. People do.
[06:49] <wallyworld> wgrant: do you think that without the distincts, it's best to do the subquery for the spph check or stick to a big join?
[06:51] <wgrant> wallyworld: You'll need to do the subquery
[06:51] <wallyworld> cool, thanks. that's what i've done
[06:51] <wallyworld> just wanted to check
[06:52] <wallyworld> i also need an index on maintainer
[06:52] <wallyworld> since another query filters on that
[08:23] <deryck> rick_h_, you actually on irc?
[08:23] <rick_h_> deryck: rgr
[08:24] <deryck> rick_h_, there's a hacking room all the way at the back of the hall.  right side of the hall as you go back.
[08:24] <deryck> rick_h_, in case you need a place between sessions.
[08:24] <rick_h_> deryck: cool thanks. hacking out in the main area from yesterday atm
[08:24] <deryck> rick_h_, ok, might come say hello when I get coffee and chat through a couple things.
[08:46] <adeuring> good morning
[08:49] <rick_h_> morning adeuring
[08:49] <deryck> morning, adeuring
[08:49] <adeuring> hi ric, deryck
[10:53] <StevenK> dpm: O HAI
[10:53] <StevenK> dpm: Can you look at production in terms of raring translations?
[10:53] <dpm> StevenK, hey people at UDS say hi, you were on the projector
[10:53] <StevenK> Haha
[10:53] <dpm> ajmitch says hi, and go to sleep :)
[10:53] <StevenK> Pft, it's only 10pm
[10:53] <StevenK> He can go to sleep.
[10:54] <nigelb> lol
[11:00] <dpm> StevenK, raring translations look good to me in production - only the contributors column has gone down in terms of number of contributors for each language comparing q to r - any ideas why that could have happened?
[11:02] <StevenK> None, I'm afraid
[11:02] <ajmitch> StevenK: I'm hardly going to go & sleep at noon
[11:02] <StevenK> ajmitch: Pft. You fail at jetlag, then.
[11:09] <dpm> StevenK, so the contributor numbers are the only issue I see, otherwise translations look good. Could you investigate what caused the decrease in the number of contributors?
[14:58] <jtv> sinzui, are you here?
[14:58] <sinzui> I am
[15:01] <jtv> Hi!  Long time no see.
[15:01] <jtv> sinzui: Laura mentioned that you'd been busy opening translations, and were having some trouble with Blender's uploads.
[15:01] <jtv> (As two separate issues)
[15:01] <sinzui> blender was broken before we started opening
[15:02] <jtv> The two things I wanted to ask are:
[15:02] <jtv> (1) Need any urgent help with the blender issue?  I may even be able to meet up physically with them in the coming days or weeks.
[15:02] <jtv> (2) Maybe we should try skipping the whole translations-copying step for S.
[15:04] <sinzui> please help with blender, the users do not understand lp -- they are not configuring translations as I suggest and they do not believe that open source communities will hate them if they set up a separate bug tracker
[15:04] <sinzui> translations for raring are in qa now. We might enable them in 12 hours
[15:05] <sinzui> jtv, this is the first opening where there were no errors in the db or scripts for translations...
[15:05] <sinzui> a non-event that we don't know how to document
[15:14] <jtv> sinzui: that is fantastic — documentation should just show date done, steps taken & time spent.  The way we always hoped it would become.
[15:14] <jtv> Can you give me a quick summary of what's wrong with the blender setup?
[15:15] <sinzui> 1. they did not set up the right series.
[15:15] <sinzui> 2. they did not set up the branch from the series
[15:15] <sinzui> 3. yesterday, they had still not set the sync to import templates
[15:15]  * jtv looks at blender's series
[15:16] <sinzui> yes, we both can do it, but I think the user should do it to show that he knows how to change it
[15:16] <jtv> I agree.
[15:16] <sinzui> and he has set pots and pos I see
[15:16] <jtv> Ouch — a separate “translations” series
[15:16] <sinzui> but still no templates
[15:17] <sinzui> but I don't know when this change was made
[15:18] <jtv> sinzui: this is https://launchpad.net/blender ?  I see a template.
[15:20] <sinzui> jtv https://translations.launchpad.net/blender/2.6x/+templates has no templates, which is what he was trying to do
[15:20] <jtv> Ah!
[15:20] <sinzui> oh, the tree is scons an deep
[15:20] <sinzui> is this intl tools?
[15:20] <jtv> But there's nothing in the upload queue waiting for review.
[15:21] <sinzui> I don't see any pots or pos
[15:24] <jtv> I'm not finding it either.
[15:25] <jtv> Maybe they want to get Launchpad to extract the pots.  But are we still running that?
[15:26] <jtv> And it doesn't look as if strings are even marked for translation…
[15:35] <sinzui> jtv: http://wiki.blender.org/index.php/Dev:Doc/How_to/Translate_Blender
[15:36] <jtv> Ah, I was getting the branch
[15:42] <jtv> sinzui: not having much luck running their tools…  but it doesn't sound as if there's very much I can do to help.  :/
[15:45] <sinzui> yuck
[15:46] <sinzui> I think this is a case where we advise them to stick with the import of translation branch, and work closer with the upstream community
[15:50] <jtv> Wait... these aren't the upstream people?
[17:33] <sinzui> jcsackett, would you have time to review https://code.launchpad.net/~sinzui/launchpad/hide-question-comment/+merge/132177
[18:04] <jcsackett> sinzui: sure.
[18:11] <jcsackett> sinzui: r=me. thanks for the fix. :-)
[19:45] <sinzui> jcsackett, how goes bug 164530
[19:45] <_mup_> Bug #164530: User translations showing broken links <404> <lp-translations> <oops> <Launchpad itself:In Progress by jcsackett> < https://launchpad.net/bugs/164530 >
[19:46] <jcsackett> sinzui: it died in ec2 when i sent it out to land on thursday. i sent it out to ec2 again earlier today when i caught up with email and realized it had never landed and i had no ec2 results.
[19:46] <sinzui> :(
[19:47] <jcsackett> sinzui: it's been out for 3h45m. should get a (hopefully) ok result soon, so barring pqm issues it will be qa-able before i EoD.
[19:47]  * jcsackett knocks on all available wood surfaces
[19:49] <sinzui> jcsackett, barring that, I would run all lp.translations tests and submit if they all pass
[19:49] <sinzui> jcsackett, what bugs are you looking at now?
[19:50] <jcsackett> sinzui: bug 798954
[19:50] <_mup_> Bug #798954: InvalidProductName: Invalid name for product: bookmark:galapagos.  <oops> <Launchpad itself:Triaged> < https://launchpad.net/bugs/798954 >
[19:50] <jcsackett> we discussed it the other day, i'm trying to replicate the error now.
[19:51] <sinzui> hmm
[19:52] <sinzui> jcsackett, Do we have any modern oopses
[19:52] <sinzui> jcsackett, https://oops.canonical.com/oops/?oopsid=OOPS-1df2ed046d05000215e8d42933e44934
[19:53] <jcsackett> sinzui: what method are you using to search the OOPS DB? you seem *much* faster at it than me.
[19:54] <sinzui> jcsackett, I was already 2FA logged in. I went to the lp production page, opened the latest report and searched for InvalidProductName, got nothing, then url hacked to the day earlier, and repeated the search
[19:55] <jcsackett> sinzui: ah.
[19:55] <sinzui> chromium also remembers the search so I only type it once
[19:56] <sinzui> I think I get these often when I forget how to push to lp://qastaging/ , but I don't see the oopses
[20:00] <jcsackett> it's easy to reproduce manually on dev; i just had to spend some time getting codehosting working in my lxc.
[20:04] <sinzui> Maybe I should never switch to lxc
[20:05] <sinzui> this oops might be more interesting. https://oops.canonical.com/oops/?oopsid=OOPS-a3ef3a2868f310958e03998c18758fdb I don't think the user ever figured out how to push a branch
[20:17] <jcsackett> sinzui: does look like they were having problems.
[22:24] <sinzui> wgrant, https://bugs.launchpad.net/launchpad/+bug/408585
[22:24] <_mup_> Bug #408585: choosing blueprint for branch is broken <lp-code> <Launchpad itself:Triaged> < https://launchpad.net/bugs/408585 >
[22:37] <StevenK> wgrant: http://pastebin.ubuntu.com/1319322/
[23:03] <StevenK> wallyworld, sinzui: You two coming back?
[23:04] <sinzui> I am not. Gotta eat
[23:04] <wallyworld> back where?
[23:04] <wallyworld> sorry, i thought call had finished
[23:04] <StevenK> The server kicked everyone but wgrant off