[06:08] <jtv> lifeless: I see you were playing with the idea of sacrificing bug-count accuracy for private bugs.  I sometimes wonder if it makes sense to treat them as the same thing in the first place.
[06:09] <jtv> UI-wise, that is.
[06:11] <jtv> If we had, say, private-bug portlets then we'd get more caching, less low-level decision-making ("which bugs should I count here exactly?") and maybe even less of the narrow-deep/wide-shallow query antipattern as a matter of course.
[06:13] <jtv> In that example (private-bug portlets), I imagine bug counts etc. including only public items, and logged-in users with access to private items getting something extra on the side to show those items.
[06:13] <jtv> In some ways that'd be worse UI, but in some ways it might be better: less "oh you can't access this it's private" and more confidence that private things really are private.
[06:14] <lifeless> jtv: its an interesting discussion to have for sure
[06:14] <lifeless> jtv: doesn't really impact the coding though - its easy enough to union in on search generation to avoid shallow-wide issues
[06:16] <lifeless> jtv: caching of public stuff doesn't really interest me
[06:16] <jtv> shallow is rarely an issue when it comes to db query performance.  :)
[06:16] <lifeless> jtv: users seem to notice stale data
[06:16] <lifeless> so I'm very keen on us ripping out caches that aren't kept fresh
[06:17] <jtv> Of course.
[06:17] <jtv> But how often do we need e.g. the Ubuntu bug count?
[06:17] <lifeless> dunno :)
[06:17] <jtv> In fact...
[06:18] <lifeless> I can tell you of course
[06:18] <jtv> this is way too generic, but
[06:18] <jtv> just a thought:
[06:19] <jtv> imagine we had support for short-lived caches with optional aggressive pre-seeding.
[06:19] <jtv> So basically on-demand caching with short lifetimes, but separately from that imagine we could pick the most popular items and refresh them before they expire.
[06:20] <lifeless> there is a common antipattern that turns up there
[06:20] <jtv> That'd be a way to head off the thundering herd.
[06:20] <lifeless> which is that the generation time is high
[06:20] <lifeless> said generation time is then the lowest latency for updates to the cache
[06:21] <jtv> Why would a high generation time be the lowest latency for updating the cache?
[06:22] <lifeless> either the data is incrementally updatable
[06:23] <lifeless> in which case a cache is a bit of a misleading description (its more a memo that can be edited - like my fact table)
[06:23] <lifeless> or the data has to be regenerated after a change occurs
[06:24] <lifeless> in which case the shortest latency between change and inserting the updated cached item is the time it takes to generate the item from scratch
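An editorial sketch of the idea jtv floats above: a short-TTL cache whose most popular entries are refreshed before they expire, heading off the thundering herd. All names are illustrative, not Launchpad code, and it deliberately keeps lifeless's caveat visible: however early you refresh, a change still takes at least one full generation time to become visible.

```python
import time

class RefreshAheadCache:
    """Short-TTL cache that regenerates entries before they expire.

    Illustrative sketch only.  Note the caveat discussed above: no
    matter how eagerly we refresh, a change still takes at least one
    generation time to become visible to users.
    """

    def __init__(self, generate, ttl=60, refresh_fraction=0.8):
        self.generate = generate    # expensive function: key -> value
        self.ttl = ttl              # seconds before an entry expires
        self.refresh_fraction = refresh_fraction
        self.store = {}             # key -> (value, stored_at)

    def get(self, key):
        now = time.time()
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            if now - stored_at < self.ttl * self.refresh_fraction:
                return value
        # Miss, expired, or due for refresh: regenerate.  In production
        # a background worker would do this pre-emptively for the most
        # popular keys, so user requests rarely pay the generation cost.
        value = self.generate(key)
        self.store[key] = (value, now)
        return value
```

A background pre-seeder would simply call `get` on the popular keys on a schedule shorter than `ttl * refresh_fraction`.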
[06:27] <jtv> So you're really saying the problem there is we lose sight of the time it takes for a change to become visible to the user?
[06:27] <lifeless> yes
[06:28] <lifeless> the key is to build data structures that let us do /all/ the work we need to do efficiently
[06:28] <lifeless> e.g.
[06:28] <lifeless> 'update the bug'
[06:28] <lifeless> 'update the aggregates the bug appeared/now appears in'
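The "update the bug / update the aggregates" pair above can be sketched as incremental counter maintenance: instead of recounting bugs on every read, adjust a counter table as each bug changes. In the real system both updates would run in the same transaction as the bug edit, so readers never see a stale aggregate. Names here are illustrative, not Launchpad code.

```python
# Sketch of incrementally-maintained bug aggregates ("a memo that can
# be edited", in lifeless's phrase, rather than a cache).
class BugAggregates:
    def __init__(self):
        self.counts = {}  # (target, status) -> bug count

    def bug_changed(self, target, old_status, new_status):
        # Decrement the aggregate the bug appeared in...
        if old_status is not None:
            key = (target, old_status)
            self.counts[key] = self.counts.get(key, 0) - 1
        # ...and increment the aggregate it now appears in.
        key = (target, new_status)
        self.counts[key] = self.counts.get(key, 0) + 1
```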
[06:50] <jtv> lifeless: ISWYM
[06:51] <jtv> lifeless: got a moment to discuss something else?  Pre-imp, basically.
[06:52] <lifeless> sure
[06:52] <jtv> Thanks.  It starts at this page: https://wiki.ubuntu.com/NewReleaseCycleProcess
[06:53] <jtv> Lots of manual steps on there, many of which could be automated, when a new Ubuntu distroseries is created.
[06:53] <jtv> Some of those steps apply to other distros as well.
[06:54] <jtv> I've been asked to automate steps 12 & 14.
[06:55] <jtv> Basically, two script runs for a new distroseries.
[06:56] <jtv> AFAICT those script runs can be done right after initialise-from-parent (step 10).
[06:57] <jtv> For Debian series, it basically doesn't matter whether we do that script run or not.
[06:58] <jtv> And I think for the foreseeable future, every series that matters is going to have a parent to initialise from.
[06:58] <jtv> The way I think I'd like to handle that is:
[06:58] <lifeless> man, that whole thing should be automated
[06:58] <jtv> Exactly.
[06:58] <StevenK> We don't follow that process for Debian at all, since we don't publish it
[06:59] <jtv> And I don't want my solution to get in the way of that, but also not go overboard on the design.
[06:59] <jtv> StevenK: no, I'm just using that page as a starting illustration here.
[07:00] <jtv> StevenK: as I said earlier, many of those steps are not Ubuntu-specific on the one hand, and on the other we can take or leave steps 12 & 14 for Debian.
[07:01] <jtv> What I have in mind for a solution when it comes to just steps 12 & 14, is to have a DistributionJob type that is handled by a runner on cocoplum.
[07:01] <jtv> If it fails, I'm thinking it should notify the distro owners who can then sort it out with us LP people.
[07:02] <lifeless> jtv: so, what about the cron job disable/enable?
[07:02] <lifeless> seems you need to fit in with that / make it unneeded
[07:02] <wgrant> jtv: Debian's archive won't have its publish flag set, so you won't try to publish it.
[07:02] <jtv> What cron job disable/enable is that?
[07:03] <lifeless> step 5 and 15
[07:03] <lifeless> that bracket the steps you're automating
[07:03] <jtv> wgrant: AIUI it doesn't matter whether we run this or not for Debian series.
[07:03] <wgrant> jtv: Debian has nowhere to publish to, so it will fail if you try.
[07:04] <jtv> wgrant: you mean it doesn't have a publisher config?
[07:04] <wgrant> jtv: Indeed, that is the case now.
[07:06] <jtv> lifeless: sorry if I left that bit out -- I was thinking to create this job in initialise-from-parent, so the run won't happen before the cron jobs are disabled.  We'll want a notification to the distro owners in the case of success as well then, as one of the preconditions for re-enabling them.
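A rough shape for the job jtv is proposing: created during initialise-from-parent, run later by a runner on cocoplum, and notifying the distro owners on failure and on success (success being one of the preconditions for re-enabling the cron jobs). Every class and callable name below is hypothetical, not the real Launchpad job machinery.

```python
# Hypothetical sketch of the proposed DistributionJob for a new series.
class InitialiseDistroSeriesJob:
    def __init__(self, distroseries, notify):
        self.distroseries = distroseries
        self.notify = notify  # callable(owners, message)

    def run(self, script):
        """Run `script` for the new series and report the outcome."""
        owners = [self.distroseries.owner]
        try:
            script(self.distroseries)
        except Exception as e:
            # Failure: the distro owners sort it out with the LP people.
            self.notify(owners, "failed: %s" % e)
            raise
        # Success is reported too, as a precondition for re-enabling
        # the publisher cron jobs.
        self.notify(owners, "succeeded")
```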
[07:06] <wgrant> jtv: We need to automate the disablement.
[07:06] <jtv> wgrant: you mean "yes it doesn't have a publisher config" or "yes it does have a publisher config"?  (Asking because where I live, people tend to answer negative questions with "yes")
[07:07] <wgrant> jtv: That is the case: it doesn't have a publisher config.
[07:07] <wgrant> A month ago that wasn't the case.
[07:07] <wgrant> Because there were no publisher configs.
[07:07] <jtv> wgrant: we ought to automate the whole bally lot of this.  But that's for another day.  I'm trying to solve a specific problem in such a way that it doesn't stand in the way of a wider solution.
[07:07] <wgrant> So it would have happily published into completely the wrong place.
[07:08] <jtv> That's fine too, as long as it's not a place used by someone else.
[07:08] <jtv> I do wonder though whether a new non-Ubuntu, non-Debian distroseries will have a publisher config at this point.
[07:08] <jtv> If not, there's not much point to any of this.
[07:09] <wgrant> Not right now, no.
[07:09] <wgrant> But as soon as DDs are in use, yes.
[07:09] <lifeless> as a DD, I object to that
[07:09] <lifeless> we don't get 'used'
[07:09] <wgrant> Ha ha.
[07:09] <jtv> Before I make the joke none of us wants to see me make... what's a DD?
[07:09] <lifeless> debian developer
[07:10] <jtv> I don't think that's what wgrant meant...
[07:10] <wgrant> I believe I meant Derivative Distros, but now I am not so sure.
[07:11] <jtv> Well this is _for_ derived distros (if you don't mind changing your words a bit) so will a new distroseries for a new derived distro have a publisher config right from the outset?
[07:11] <jtv> mind *me* changing your words, that is
[07:12] <wgrant> jtv: A new distro had better have a publisher config before its series is initialised, yes.
[07:12] <jtv> If that job is not on the board yet, perhaps it should be.
[07:13] <wgrant> Mm, not sure that's the case.
[07:14] <wgrant> It's one of those things that can/will be done at the end once everything else is worked out.
[07:14] <wgrant> It doesn't affect much else.
[07:14] <wgrant> Unlike the other stuff that people seem to be pushing through excessively quickly :P
[07:15] <jtv> By the way, I suppose that disabling/enabling the cron jobs can in future be done for just the distro.
[07:15] <wgrant> Can't it already?
[07:16] <jtv> Don't think so, no.  All the locking is still per-script.
[07:17] <wgrant> Ah, assuming you're publishing multiple distros on a single machine, yeah.
[07:17] <jtv> Well otherwise there's not much difference, is there?  :)
[07:18] <wgrant> What do you mean?
[07:18] <wgrant> There is one instance of each job for each distribution.
[07:19] <jtv> Yes, so disabling the cron jobs for all distributions on a machine would be pretty much the same thing as disabling the cron jobs for the one distribution on the machine.
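The per-script locking jtv mentions could in principle become per-distribution by keying the lockfile on both the script and the distro, so that publishing one distro does not block another on the same machine. The sketch below is illustrative only; the path layout and names are assumptions, not the real Launchpad lockfile scheme.

```python
import fcntl
import os

def distro_lock(script_name, distro_name, lockdir="/var/lock"):
    """Take an exclusive lock for one (script, distribution) pair.

    Hypothetical sketch: lockfile names include the distribution, so
    e.g. publish-distro runs for ubuntu and debian do not block each
    other, only concurrent runs for the same distro do.
    """
    path = os.path.join(lockdir, "%s-%s.lock" % (script_name, distro_name))
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    # Raises BlockingIOError immediately if another run holds the lock.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return fd
```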
[07:20] <jtv> By the way, we do have a ticket for making the scheduling of publisher jobs db-driven.
[07:20] <jtv> That'd be helpful in managing this.
[07:20] <wgrant> Have you actually discussed how the scheduling is going to work?
[07:20] <wgrant> And what the requirements for setting up a new distro are?
[07:20] <jtv> No, the scheduling thing is a lower-priority task.
[07:21] <jtv> The requirements for setting up a distro are effectively what we're discussing right now.
[07:21] <jtv> I imagine the Ubuntu procedure is about the best template we have for that.
[07:22] <wgrant> The best template for what is necessary for creating a new series in an existing distribution.
[07:22] <wgrant> Not for what we want creating a new series to be like, let alone creating a new distribution.
[07:25] <lifeless> argh
[07:25] <lifeless> whats the bug # for users being confused about login.launchpad.net
[07:29] <lifeless> jtv: I just remembered a great example about cache update latency
[07:29] <lifeless> google used to run their search index in readonly mode
[07:29] <lifeless> they would process newly scanned pages into a new index
[07:29] <lifeless> and then run around their datacentres pivoting the new index into place
[07:29] <lifeless> this was a fortnightly event or something
[07:30] <lifeless> so a new page wasn't findable for weeks - as an expected thing
[07:31] <lifeless> then, they made it transactionally updatable (they did a whitepaper on this, adding transaction abstraction to bigtable); and now scan pages directly into the live index, with sub-minute latency on event driven changes (like pingthesemanticweb) and 1/2-refresh-frequency for polled content
[07:31] <lifeless> all about data structures
[07:32] <jtv> wgrant: yes, that is a big question right there.  I have a feeling we'll either have to do another round of design (which will probably still miss things, I guess) or create a manual list similar to this one as we learn, and automate as a separate process.
[07:36] <jtv> lifeless: that's very much down to investing effort into the specific case though.  There's often _also_ a lot to be gained from more generic partial solutions: partly as a stopgap while you direct the effort elsewhere, and partly as an actual solution for the more mundane problems.
[07:37] <lifeless> jtv: perhaps.
[07:38] <jtv> For instance, caching can be bad in that it hides worst-case latency problems, but it can be good for reducing load-driven latency and a stopgap for average-case latency.
[07:39] <lifeless> so, the latter two cases do make sense for judicious use of caching
[07:40] <lifeless> however
[07:40] <lifeless> identifying them is harder than you might think
[07:40] <jtv> wgrant: have you had a chance to look at the pythonic publish-ftpmaster again by the way?  I'm really quite dependent on your judgment when it comes to Q/A.
[07:41] <lifeless> jtv: at a guess, not till wed
[07:41] <lifeless> jtv: its easter
[07:41] <lifeless> + anzac day
[07:41] <wgrant> jtv: lifeless speaks the truth.
[07:42] <wgrant> When are we going to switch over to it?
[07:42] <wgrant> I don't really have much idea of the timeframe of this whole thing.
[07:42] <jtv> anzac?  aus&nz asomething csomething?
[07:42] <wgrant> Other than suspicions that everyone's being very optimistic about timing.
[07:42] <jtv> wgrant: totally concur there, but I suspect the answer depends largely on "when wgrant is satisfied that it works properly."
[07:43] <lifeless> jtv: australia new zealand armed core
[07:43] <lifeless> jtv: gallipoli remembrance basically
[07:43] <jtv> Some kind of independence movement?
[07:43] <jtv> Joking, joking.
[07:43] <jtv> Perhaps not a good idea to joke about Gallipolli with the antipodeans...
[07:43] <StevenK> I thought it was Army Core?
[07:44] <lifeless> StevenK: probably
[07:44] <wgrant> You mean Corps?
[07:44] <lifeless> wgrant: I do
[07:44] <jtv> That makes slightly more sense.
[07:44] <lifeless> <- tired
[07:44] <StevenK> At 6:45pm?
[07:44] <lifeless> StevenK: late night + early kitty-wake-up
[07:44] <jtv> A full corps would be a sensible unit for those population sizes, I guess.
[07:49] <jtv> lifeless, StevenK, wgrant: thanks for your help with this.  The salient points have been recorded in notes.  Which after an urgently approaching meal I shall start to turn into code.
[07:50] <wgrant> jtv: I wish we had Julian this week :(
[07:50] <jtv> Having Julian is always good.
[09:03] <jtv> wgrant: do you think anyone would mind terribly if I added an --all-suites option to publish-distro?
[09:06] <wgrant> jtv: As opposed to omitting the '-s' option..?
[09:06] <jtv> wgrant: well I would have _expected_ omission to mean "all," but I don't see that in the code.
[09:06] <wgrant> It does.
[09:06] <wgrant> It leaves allowed_suites empty.
[09:06] <jtv> Is it hidden in getPublisher somehow?
[09:07] <wgrant> Which means everything is allowed.
[09:07] <jtv> *somewhere
[09:07] <wgrant>     def isAllowed(self, distroseries, pocket):
[09:07] <wgrant>         return (not self.allowed_suites or
[09:07] <wgrant>                 (distroseries.name, pocket) in self.allowed_suites)
[09:07] <wgrant>                 if (self.allowed_suites and not (distroseries.name, pocket) in
[09:07] <wgrant>                     self.allowed_suites):
[09:07] <wgrant>                     self.log.debug(
[09:07] <wgrant>                         "* Skipping %s/%s" % (distroseries.name, pocket.name))
[09:07] <jtv> That's not in publishdistro.py, is it?
[09:07] <wgrant> etc.
[09:09] <jtv> wgrant: intriguingly, I see different code for isAllowed.
[09:09] <jtv> Oh wait, you've got stuff after the return... what is that?
[09:10] <jtv> And more importantly, don't the publish-distro invocations in the Ubuntu release procedure manually specify all Ubuntu suites?
[09:11] <jtv> That plus the publishdistro.py code made me think that omitting -s couldn't mean "all suites"
[09:12] <wgrant> jtv: The stuff after the return is from another method, doing a similar thing.
[09:12] <wgrant> jtv: NRCP runs publish-distro for all suites in the new series.
[09:12] <wgrant> Not all suites.
[09:13] <jtv> It passes "-s DSN -s DSN-updates -s DSN-security -s DSN-proposed -s DSN-backports" -- what's missing?
[09:13] <wgrant> DSN-1, DSN-1-updates...
[09:13] <jtv> Ah OK.
[09:13] <wgrant> Please don't imbue it with the knowledge of series.
[09:14] <wgrant> It should operate on suites opaquely as far as is practical.
[09:14]  * jtv scribbles note
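Condensed from the isAllowed code wgrant pasted above: when no -s options are given, allowed_suites stays empty, and an empty list means every (series, pocket) combination passes. So omitting -s already means "all suites". Pockets are plain strings here purely for illustration; the real code uses pocket objects.

```python
# Condensed form of the isAllowed check from the publisher code quoted
# above: an empty allowed_suites list allows everything.
def is_allowed(allowed_suites, series_name, pocket):
    return (not allowed_suites or
            (series_name, pocket) in allowed_suites)
```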
[09:14] <jtv> By the way, the distsroot that's passed for the partner archive looks to me like it's exactly what we would now get (thanks to archive configs, I take it) without the -R option.
[09:15] <wgrant> We don't use .new?
[09:15] <jtv> Not there, no.
[11:03] <stub> How to I get from a project page to its PPA? Do we have a link yet?
[11:05] <stub> nm. projects can't specify an official ppa yet
[11:25] <lifeless> and due to w----aacky UI choices it's hard to do it now
[11:59] <deryck> Morning, all.
[12:03] <jtv> hi deryck
[12:36] <bac> morning deryck
[12:49] <danilos> mrevell, hi, we should have all the bits ready on production for feature review to start (a bit late because qa took longer than expected)
[12:50] <mrevell> danilos, Superb. I'm working my way through the feature review notes at the mo. Ursinha should be around soon, too.
[12:51] <danilos> mrevell, cool
[13:30] <benji> gary_poster: two will enter, one will leave
[13:30] <benji> heh, but I will enter the wrong room
[13:31] <gary_poster> lol
[13:35] <magcius> where did jam go? :(
[13:36] <jelmer> magcius: it's a holiday here in .nl
[13:37] <gary_poster> not in US
[13:37] <jelmer> gary_poster: jam moved to Eindhoven in March
[13:37] <gary_poster> oh, cool!
[13:37]  * gary_poster mildly jealous :-)
[13:38] <jelmer> gary_poster: there's no second easter day in the US? Do you at least get judgement day off?
[13:39] <gary_poster> heh, there's no first easter day off in the US.  judgement day is still up in the air.  we'll probably have off for the heat death of the universe.
[13:40] <wgrant> In .au we have Tuesday off this year as well :D
[13:40] <gary_poster> heh :-P
[13:41] <jelmer> wgrant: a third easter day? Don't they only have that in very catholic countries?
[13:41] <wgrant> jelmer: ANZAC Day falls on Easter Monday this year, so its public holiday is shifted to Tuesday.
[13:41] <wgrant> So two Easter days plus ANZAC Day.
[13:41] <jelmer> ah
[13:46] <jtv> gary_poster: I think I want a lib/lp/archivepublisher/configure.zcml, where there currently is none.  Could you help with this?
[13:48] <gary_poster> jtv, sure, 1 sec, trying to tie something up
[13:49] <jtv> Thanks.  I find one shoe at a time is easier.
[13:50] <gary_poster> :-)
[13:51] <jtv> (In this vague and poorly thought-out pun, I am the other shoe)
[13:56] <gary_poster> heh
[13:56] <gary_poster> jtv, ok, what's up
[13:56] <gary_poster> that is to say, what do you need help with?
[13:57] <jtv> Hi.  I'm adding a job class in archivepublisher, and it has its own utility.
[13:57] <gary_poster> ok
[13:57] <jtv> So I reckon there ought to be a configure.zcml.
[13:57] <jtv> There is not.
[13:57] <gary_poster> lol, end of thought?
[13:57] <wgrant> Get your dirty XML away from that innocent code!
[13:58] <jtv> gary_poster: I just discovered the zcml directory though.
[13:59] <jtv> I guess it goes in there now.
[13:59] <gary_poster> I don't think so
[13:59] <gary_poster> but maybe I'm wrong
[13:59] <gary_poster> I know the machinery, but I don't know our policies
[13:59] <jtv> Ah
[14:00] <jtv> Well there's a lib/lp/archivepublisher/zcml/configure.zcml that already has a utility...
[14:00] <gary_poster> oh *that* zcml directory
[14:00] <jtv> So at a minimum I suppose I wouldn't be making things substantially worse.
[14:00] <gary_poster> yeah that sounds fine
[14:00] <gary_poster> I thought you meant the top-level zcml directory
[14:01] <jtv> Whoooah no
[14:01] <gary_poster> :-)
[14:01] <jtv> Funny how these things always get rolling when you're explaining the problem.
[14:01] <gary_poster> :-)
[14:01] <gary_poster> ok, so you have found a happy home for your ZCML snippet, and all is well in this small corner of the world?
[14:01] <jtv> Yup.  So thanks for, er, listening.  :)
[14:02] <gary_poster> heh, yeah, np
[14:38] <gary_poster> benji, very small MP when you have a chance, please: https://code.launchpad.net/~gary/launchpad/bug770217/+merge/58958
[14:38] <benji> gary_poster: will do it now as I'm in the middle of a deep review that will take a while
[14:39] <gary_poster> ok thanks benji
[14:47] <benji> gary_poster: if I only apply the patch to the test (and leave out the change to lib/lp/bugs/javascript/subscription.js), the new test doesn't fail
[14:47] <gary_poster> benji, huh, did for me
[14:47] <gary_poster> that's how I developed it anyway :-)
[14:47] <gary_poster> Lemme see...
[14:47] <benji> I wonder if I did something wrong, double-checking.
[14:51] <benji> gary_poster: my fault: I was running lib/lp/bugs/javascript/tests/test_subscriber.html instead of lib/lp/bugs/javascript/tests/test_subscription.html
[14:51] <gary_poster> benji, great.  Thank you for checking
[14:52] <benji> gary_poster: review done
[14:53] <gary_poster> thanks benji
[14:53] <benji> my pleasure
[16:09] <beuno> hello hello
[16:10] <beuno> Launchpad seems to be doing funny things in code hosting: http://paste.ubuntu.com/598761/
[16:18] <sinzui> beuno: I just confirmed I can push a new branch. Can you try pushing your branch with a letter at the start of the branch name?
[16:19] <beuno> sinzui, I can, but branches in PQM are failing for us now
[16:20] <sinzui> I wonder if this is related to thumper's change to branch ids to permit stable renames
[16:20] <sinzui> beuno: I do not know how to test this kind of issue :(
[18:21] <sinzui> jcsackett: are you available to mumble about question activity and email? My next step is pretty uncertain
[18:43] <jcsackett> sinzui: sorry, was brewing up coffee, missed your ping. i can mumble.
[18:43] <sinzui> fab
[18:45] <sinzui> jcsackett: can you hear me?
[18:45] <jcsackett> sinzui: i can hear you, i believe that you cannot hear me. i am rebooting mumble.
[18:45]  * sinzui too
[19:15] <deryck> has anyone online run windmill tests on natty?
[19:16] <deryck> I know wallyworld has, but anyone else?
[19:18] <gary_poster> benji, have a moment for https://code.launchpad.net/~gary/launchpad/bug770287/+merge/58979 ?  Should be another short one.
[19:18] <gary_poster> (no)
[19:18] <benji> gary_poster: sure
[19:18] <gary_poster> thanks benji
[19:18] <gary_poster> sorry, the "no" was to deryck
[19:19] <gary_poster> ("not me" would be more accurate :-) )
[19:19] <deryck> heh, thanks for answering gary_poster :-)
[19:19] <gary_poster> :-)
[19:19] <deryck> I'm guessing no one has.
[19:19] <gary_poster> I'm guessing they are badly broken?
[19:22] <deryck> gary_poster, a couple reporting on jenkins.  I just want to verify a fix locally, but can't get it to run.
[19:22] <gary_poster> gotcha :-/
[19:23] <deryck> No default or local profile error for windmill running FF 4.  FF 4 doesn't have a profile template anymore.
[19:24] <benji> gary_poster: done
[19:24] <gary_poster> benji, yippee! :-) thanks
[20:05] <sinzui> jcsackett: ping
[20:06] <jcsackett> sinzui: pong.
[20:09] <lifeless> morning
[20:11] <jcsackett> sinzui: hi, what's up?
[20:12] <sinzui> jcsackett: did you update hide-comment.py to work with questions?
[20:13] <jcsackett> sinzui: i did locally, but it's periodically failing for reasons i haven't diagnosed; thanks for the reminder. i will prioritize that for the rest of today.
[20:14] <jcsackett> if you are encountering question spam, just assign to me.
[20:14] <sinzui> I am updating my copy of the script now to close two questions. You may have already done this since I do not see the spam comment on https://answers.launchpad.net/ubuntu/+question/144090
[20:16] <jcsackett> sinzui: yes, i whacked spam from that earlier.
[20:16] <sinzui> How do you know which comment to deactivate?
[20:16] <sinzui> hide
[20:16] <jcsackett> i believe the question that points out the spam for that is already assigned to me, isn't it? i can update.
[20:17] <jcsackett> sinzui: right now? you have to count the messages. which is terrible. the ui work needs to add numbers to the message (a la bug comments).
[20:17] <sinzui> okay. I will run my script
[20:45] <sinzui> jcsackett: This is my updated script. Hiding the question comment silently fails: http://pastebin.ubuntu.com/598898/
[20:47] <jcsackett> sinzui: fails as in you go to the page and the comment is still there?
[20:48] <jcsackett> if that's the case, wait a bit. i've found i was seeing delays between firing off setCommentVisibility and seeing it removed.
[20:49] <sinzui> jcsackett: comment hiding is queued?
[20:49] <jcsackett> sinzui: i don't believe so. i haven't determined the cause of the delay.
[20:50] <sinzui> jcsackett: were you anonymous...if so you get the cached page from the proxy
[20:50] <jcsackett> sinzui: i was not anonymous.
[20:52] <sinzui> I was viewing the page logged in, then using an anon browser with a query string appended to ensure I do not get the cached page
[20:52] <jcsackett> sinzui: ah, i see what you mean.
[20:53] <lifeless> bug 691352
[20:56] <jcsackett> sinzui: so, you are not observing anything happening after a delay, i take it.
[20:59] <sinzui> jcsackett: that is correct
[20:59]  * jcsackett resists the urge to swear.
[21:00] <sinzui> jcsackett: I can also state the comment is number 10000
[21:01]  * jcsackett ceases resisting the urge.
[21:01] <sinzui> I am pretty sure there are no questions with 10,000 comments
[21:01] <sinzui> I already swore.
[21:01] <jcsackett> the latter is especially vexing, as you should get an index error on trying to do anything to comment 10000.
[21:01] <sinzui> I have decided that I need to make questions faster so I will make that my top priority today
[21:02] <sinzui> I have confirmed the script gets the correct question too
[21:03] <jcsackett> sinzui: i'll start a branch in a sec to see if the setCommentVisibility stuff became unbolted by something.
[21:07] <jcsackett> i will probably do my analysis far away from people, perhaps in a cave somewhere.
[21:20] <lifeless> sinzui: hey
[21:20] <sinzui> hi lifeless
[21:21] <lifeless> I'm trying to find a bug
[21:21] <lifeless> it was about how users get confused by login.launchpad.net
[21:21] <lifeless> I think bug 770107 is a dup of it
[21:21] <sinzui> canonical-identity-provider
[21:21] <lifeless> sinzui: thats the project
[21:21] <lifeless> I thought we had a task on LP
[21:21] <lifeless> but I can't find the darn thing
[21:21] <lifeless> I was wondering if you remembered such a bug?
[21:22] <sinzui> I am not seeing it either
[21:22] <sinzui> But I think I can find a question that links to it
[21:28] <sinzui> lifeless: I am not hunting my email. This is infuriating
[21:28] <lifeless> sinzui: s/not/now/ ?
[21:28] <sinzui> yes
[21:29] <lifeless> :) makes more sense
[21:39] <sinzui> lifeless: I give up. I cannot find a login.launchpad.net  bug in c-i-p or a login.ubuntu.com bug in launchpad.
[21:54] <lifeless> thanks for looking
[22:18] <lifeless> flacoste: ping
[22:21] <thumper> sinzui: my change wouldn't have affected beuno's branch like that
[22:22] <sinzui> fab
[22:57] <lifeless> I really wish we had full bug federation
[23:01] <lifeless> sinzui: did you mean to /part #launchpad ?
[23:02] <sinzui> No, empathy is being a complete and utter prick
[23:02] <lifeless> :(
[23:02] <lifeless> its not being empathetic :)
[23:09] <sinzui> Well I cannot work out where to report bugs about that launchpad-bugzilla, so I unconfigured the bugtracker. But since we are the maintainers, shouldn't we be tracking bugs in Lp?
[23:22] <sinzui> Looks like empathy wants me to be in only one freenode channel at a time
[23:23] <sinzui> lifeless: http://pastebin.ubuntu.com/598960/
[23:23] <sinzui> ^ My thoughts are really scattered
[23:26] <lifeless> Seems reasonable
[23:26] <lifeless> the snapshot thing seems overly generic to me
[23:26] <lifeless> its one of those attractive until you use it patterns
[23:29] <sinzui> lifeless: I prefer putting the snapshot/copy of the current question into the event. Let a subscribed function queue a job with the question and the event as metadata.
[23:29] <lifeless> sinzui: sure; I think that the json should be quite targeted though.
[23:30] <sinzui> lifeless: I am unsure now if QuestionNotification needs to change if I build something that implements IQuestion, but is actually using values instead of references.
[23:31] <lifeless> sinzui: I'm confused about the implementing of IQuestion
[23:32] <sinzui> lifeless: I was looking at PersonTransferJob, which requires json-compatible values. Instead of keeping a reference to the optional reviewer, the job has a property that instantiates the user from the id in the metadata
[23:33] <sinzui> lifeless: I do not want to build a schema for the job that creates lots of references to other objects. I think the question plus a dict of values will work most of the time. Some of those values need to be turned back into users, bugs, faqs and messages.
[23:34] <sinzui> lifeless: I cannot put the snapshot into the db, so I need a light way to store the changes
[23:35] <lifeless> sinzui: AIUI other deferred mailers
[23:35] <sinzui> I think create a ro-implementation of question that uses values instead of objects to make the email may work
[23:35] <lifeless> they generate the message
[23:35] <lifeless> or a template for the message
[23:35] <lifeless> and a list of subscribers
[23:35] <lifeless> and they shove that into the queue to be processed on the backend
[23:35] <sinzui> Well I want to put an end to timeouts and those tasks you mentioned are part of the timeout
[23:35] <lifeless> right
[23:36] <lifeless> I'm just saying that /most/ or even /all/ of the objects you're mentioning are not needed
[23:36] <sinzui> PersonMembershipJob and PersonMergeJob do all the email building themselves
[23:36] <lifeless> but perhaps I misunderstand the problem
[23:36] <sinzui> I think you do
[23:36] <sinzui> As I said, I cannot get my thoughts in order
[23:37] <lifeless> AIUI the work required is:
[23:37] <lifeless>  - update the db to show the event
[23:37] <lifeless>  - build a message to send to each of [long list of subscribers], and its customised per subscriber
[23:38] <lifeless>  - spool and send the message
[23:38] <sinzui> How do I queue the event
[23:38] <sinzui> QuestionNotification does the last two points perfectly
[23:38] <lifeless> I was thinking you wouldn't queue the event at all
[23:39] <lifeless> but generate the common message text /once/
[23:39] <lifeless> and then queue a job to do a mail-merge of the subscribers & the mail template
[23:39] <lifeless> but I haven't looked at this code
[23:39] <lifeless> if what I describe sounds harder or worse, you should ignore me :)
[23:45] <sinzui> lifeless: That is doable by extracting the QuestionNotification.getBody() implementation to the subscriber function I envision. I think there are a few nuances though. Headers and footers assume the current question state is correct, but in fact the state of things like assignee or target may have several changes in the same job queue
[23:45] <sinzui> Thus I started to envision a very elaborate way of serialising the existing state.
[23:46] <lifeless> wouldn't the state of things like assignee or target be captured when building the template to use for all subscribers ?
[23:46] <lifeless> not as explicit fields, but as things injected to said template
[23:48] <sinzui> The build op needs to create the common body, common header and common footer. We merge the to-addresses and the header/footer reasons later?
[23:52] <sinzui> lifeless: I think the X-Launchpad-Question and the body must be queued with the question since those describe the current state. Other headers, and the footer, can be created using the current question and its current subscribers
[23:52] <lifeless> sinzui: sounds reasonable to me
[23:53] <sinzui> So I think the schema is (question, body, header) header can be a string at this moment. It could be a serialised dict for extension.
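The (question, body, header) schema sinzui settles on can be sketched as json-compatible job metadata, in the style of PersonTransferJob mentioned above: the X-Launchpad-Question header and the body are captured at enqueue time because they describe the question's state at the moment of the change; everything else is derived from the live question later. Field names here are illustrative, not a real Launchpad schema.

```python
import json

def make_notification_metadata(question_id, body, x_launchpad_question):
    """Build json-compatible metadata for a question notification job."""
    metadata = {
        "question_id": question_id,
        "body": body,
        # A plain string for now; could become a serialised dict later,
        # as suggested above.
        "header": x_launchpad_question,
    }
    # Must survive a JSON round-trip, like other job metadata.
    return json.loads(json.dumps(metadata))
```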