[00:30] <lifeless> wgrant: the PPA quotas are precisely as arbitrary as the recipe volume quota
[00:30] <lifeless> the PPA quota is less of a surrogate
[00:32] <wgrant> lifeless: Well, arbitrary is perhaps not the best word.
[00:32] <wgrant> But the PPA quota does what it's designed to do.
[00:32] <wgrant> Places a hard, known limit.
[00:32] <wgrant> The recipe one does not.
[00:33] <lifeless> I'm having some trouble following your detailed description of whats wrong; perhaps you could give me a summary instead?
[00:33] <wgrant> The PPA disk quota prevents users from using more disk than they should.
[00:34] <wgrant> The recipe build quota is presumably to stop users from using more buildfarm time than they should.
[00:34] <wgrant> But it limits how many times they can use it.
[00:34] <wgrant> Not how much of it they use.
[00:34] <lifeless> right
[00:34] <lifeless> as I said, its a surrogate.
[00:34] <lifeless> I'd welcome a series of bugs that would allow measuring and allocating the actual resource.
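The distinction wgrant is drawing — the recipe quota limits how many builds you run, not how much buildfarm time you consume — could be sketched roughly like this (all names and numbers here are hypothetical, not Launchpad's actual quota code):

```python
from datetime import timedelta

# Hypothetical sketch: a count quota (the surrogate) next to a time
# quota (the actual resource the recipe limit stands in for).
BUILD_COUNT_LIMIT = 5                    # builds per day (surrogate)
BUILD_TIME_BUDGET = timedelta(hours=2)   # buildfarm time per day (actual)

def may_dispatch(builds_today, time_used_today, estimated_duration):
    """Allow a build only if both the surrogate and the real quota permit."""
    if len(builds_today) >= BUILD_COUNT_LIMIT:
        return False
    return time_used_today + estimated_duration <= BUILD_TIME_BUDGET
```

Measuring `time_used_today` is exactly the "measuring and allocating the actual resource" part lifeless says would need new bugs filed.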
[00:38] <poolie> hi there
[00:38] <poolie> jml wrote some code to suck lp bugs into a desktopcouch db
[00:38] <poolie> i was thinking in the shower this could possibly even be offered as a standard facility
[00:39] <poolie> which would be pretty cool: cheap async (at least readonly) access
[00:39] <poolie> this is very blue sky of course
[00:39] <lifeless> maybe
[00:40] <lifeless> I'd want couch to have an OOM less sysadmin time first :)
[00:40] <poolie> :)
[00:40] <lifeless> (and sadly, I'm not joking)
[00:40]  * poolie reinterprets that as "order of magnitude" not "out of memory"
[00:40] <poolie> yeah
[00:41] <poolie> perhaps i'll look at his client code someday
[00:45] <poolie> lifeless: your shoes are now available to be filled <http://webapps.ubuntu.com/employment/canonical_BSE/> - help me find someone good?
[00:46] <lifeless> poolie: certainly.
[00:46] <lifeless> Have you tweeted yet ?
[00:46] <poolie> i can do that
[00:52] <lifeless> http://www.google.com/instant/#utm_campaign=launch&utm_medium=van&utm_source=instant
[01:00] <spm> why the GA tracking codes? (utm_* are google analytics tracking codes. fwiw)
[01:01] <lifeless> spm: it was in my browser bar
[01:01] <lifeless> copy-paste, you think I read these things?
[01:01] <spm> fair enough :-)
[03:08] <lifeless> does LoginToken:+validategpg talk to the gpg servers?
[03:08]  * lifeless is guessing it does
[03:12] <lifeless> I wonder, should oauthnonces go in the sessiondb
[03:13] <cr3> hi folks, I created a couple projects recently without anything in them yet. would it be simpler to ask to rename them, or create another couple projects and ask to remove them?
[03:13] <james_w`> poolie: lp in desktopcouch> standard facility where?
[03:13] <lifeless> cr3: rename should be easy enough
[03:13] <lifeless> cr3: unless you have mailing lists or ppas
[03:13] <poolie> james_w`: well, if i ever get around to it, i would look about writing an apis-couch daemon
[03:14] <poolie> eventually, and this is utter pie-in-the-sky, it would be cool to just have something like bugs.launchpad.net/bzr/+couch
[03:14] <poolie> and through that means get a whole copy of them
[03:14] <james_w`> apis-couch? what would that do?
[03:15] <lifeless> james_w`: map lp into couch?
[03:15] <cr3> lifeless: nothing yet, so can I ask in the channel or is there a preferable avenue?
[03:15] <lifeless> CHR should be able to help in #launchpad
[03:15] <lifeless> failing that follow the instructions in the topic there.
[03:15] <lifeless> :)
[03:16] <james_w`> lifeless: well, I have a project already that doesn't need new daemons etc.
[03:16] <cr3> "CHR"? is that a nick or an acronym I'm not familiar with?
[03:17] <james_w`> I'm interested in whether poolie has other ideas
[03:17] <cr3> lifeless: by the way, might you have a moment to chat about the progress of the results tracker?
[03:17] <poolie> james_w`: i have very few ideas ;) other than that having it in a local db that can sync smartly with the server would be nice
[03:17] <poolie> what's your project?
[03:17] <james_w`> txrestfulclient
[03:18] <james_w`> poolie: it's almost transparent to the app whether it is talking to LP or a couchdb copy of the data. The non-transparent part is that if talking to couch it has extra capabilities to push stuff back to lp
[03:19] <poolie> really, wow
[03:23] <poolie> james_w`: i'll check it out
[03:24] <james_w`> I'd love some help getting it up to standard
[03:24] <james_w`> it's not got full coverage of the capabilities of the API yet
[03:25] <james_w`> rolling it in to my attempt to write a twisted API library wasn't the best thing
[03:27] <lifeless> james_w`: I'd love to see that split out
[03:27] <lifeless> james_w`: there is a use case in LP for a txlaunchpadlib
[03:27] <lifeless> james_w`: but we wouldn't want the couch stuff mixed in
[03:28] <james_w`> lifeless: the code is independent at that level
[03:28] <james_w`> lifeless: but the couch feature relies on the tx code as I have written it, so people can't play with that until they use twisted
[03:29] <james_w`> (and couch is only one possible implementation of the required interface, it just so happens that a JSON-based document store is kind of handy for this)
[03:30] <cr3> lifeless: sleep time, I'll catch up with you another time about the results tracker. cheerio
[03:30] <lifeless> cr3: ciao
[03:30] <poolie> the latency is so high doing it on twisted would be good
[03:32] <poolie> james_w`: so your code can use couch as a cache?
[03:32] <james_w`> poolie: "cache", yes
[03:33] <james_w`> poolie: you talk to LP and it sticks the documents it gets back in to couch before returning them to you
[03:33] <james_w`> poolie: at any time you like you can reconfigure the client to talk directly to couch, and you will get those documents back again.
[03:33] <james_w`> that's the read-only part
[03:34] <james_w`> then you can make changes, and it will store the modifications, and give you the updated information if asked for it again
[03:35] <poolie> sweet
[03:35] <james_w`> then you can ask it to iterate the modifications and send them back to LP, and the collision detection will naturally act to prevent problems there
[03:35] <poolie> so this just all works over the existing restful protocol?
[03:35] <james_w`> there are still a bunch of things that need work, and I'm not sure whether the approach taken will ever get us to 100%, but it does have an elegance
[03:36] <james_w`> poolie: yep
[03:36] <nigelb> isn't this what someone demo'd at last UDS?
[03:37] <james_w`> poolie: with a way to replace that restful protocol with queries in to couch
[03:37] <james_w`> nigelb: yeah, me, very shoddily
[03:37] <nigelb> james_w`: lol, laptop not working et al ;)
[03:37] <james_w`> exactly
[03:38]  * nigelb hugs james_w` :)
[03:44] <mwhudson> james_w`: now do this for notmuch pls
[03:46] <james_w`> mwhudson: one day
[03:47] <james_w`> though I think I should try and add more moving parts next time
[03:57] <jtv> wgrant: something I don't get… I'm to keep a bfj fk in my new TranslationTemplatesBuild table—but where does it come from?  I see build_farm_job being set all over the place, but that all refers to BuildFarmJobOld stuff.
[04:22]  * poolie tries txrestfulclient
[04:22] <poolie> and whacks in to bug 461356
[04:22] <_mup_> Bug #461356: desktopcouch-service crashed with ImportError in <module>() <apport-crash> <i386> <ubuntu-unr> <desktopcouch (Ubuntu):Incomplete by cmiller> <https://launchpad.net/bugs/461356>
[04:35] <lifeless> spm: suppose we could zap the first 8 months of successful-updates.txt?
[04:35] <spm> sure. one sec
[04:35] <lifeless> spm: of this year!
[04:36] <lifeless> [I think it goes back forever, so keep a copy
[04:36] <lifeless> but its getting a tad large]
[04:36] <lifeless> spm: also, whats it up to?
[04:38] <spm> successful-updates-2008.txt (already existed) also now have successful-updates-2009.txt
[04:38] <lifeless> heh
[04:39] <lifeless> hmm
[04:39] <lifeless> is there a system load average python module already, I wonder
[04:40] <spm> import system.load.averages ?
[04:40] <spm> which is spm doing the equivalent of import icanfly
[04:44] <mwhudson> lifeless: os.loadavg()
[04:44] <mwhudson> er no
[04:44] <lifeless> getloadavg
[04:45] <mwhudson> right
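For the record, the standard library call they converge on:

```python
import os

# Unix-only: the 1-, 5- and 15-minute system load averages, as floats.
one, five, fifteen = os.getloadavg()
print(one, five, fifteen)
```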
[04:46] <lifeless> bug https://bugs.edge.launchpad.net/oops-tools/+bug/243554, freshly updated
[04:46] <_mup_> Bug #243554: oops report should record information about the running environment <infrastructure> <oops-tools> <Launchpad Foundations:Triaged> <OOPS Tools:Triaged> <https://launchpad.net/bugs/243554>
[04:48] <lifeless> I wonder if time.clock() is pid wide or pid wide :P
[04:52] <mwhudson> lifeless: one of those was supposed to be thread?
[04:53] <lifeless> mwhudson: being droll about linux clarity in this area
[04:53] <mwhudson> heh
[04:53] <lifeless> we can use clock_gettime(CLOCK_THREAD_CPUTIME_ID) though
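At the time of this log, calling `clock_gettime(CLOCK_THREAD_CPUTIME_ID)` from Python would have meant ctypes; Python 3.3+ exposes the POSIX clock directly:

```python
import time

# Per-thread CPU time: unaffected by other threads in the process,
# unlike wall-clock or process-wide CPU clocks.
start = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
sum(range(100_000))  # some CPU work on this thread
elapsed = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID) - start
```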
[04:54] <jtv> lifeless: would you mind if I just made that Librarian change now?
[04:55] <lifeless> jtv: I don't mind when you do it :)
[04:55] <stub> lifeless: I've just pulled in the information from /proc before
[04:55] <jtv> lifeless: :)
[04:55] <lifeless> jtv: if you mean ...'and sneak it in the release', that would be risky, wouldn't it?
[04:56] <jtv> lifeless: that would be, and it's not what I had in mind.  Thinking more of avoiding being the subject of future "what flaming idiot made this horrible change!?" inquiries
[04:57] <lifeless> jtv: I'd hope noone in the team would take that attitude following something up
[04:57] <lifeless> jtv: and the only risk I know of, is the one I mentioned: the librarian db access is currently very tightly encapsulated; it needs to stay that way.
[04:57] <jtv> Well, so to speak.  The thing is, I'm not 100% happy about making an API where you pass "either an object or its id."
[04:57] <lifeless> jtv: so don't do that.
[04:58] <lifeless> jtv: make a separate pass-the-object API (perhaps on the object :P)
[04:58] <jtv> Ahh
[04:58] <lifeless> and have the current id based function delegate
[04:58] <jtv> Now, what is the reason for the tight encapsulation?
[04:58] <lifeless> because its in twisted
[04:58] <lifeless> so its called via deferToThread
[04:58]  * jtv likes reasons—easier to remember than rules :)
[04:58] <lifeless> it can do DB access in the thread
[04:59] <lifeless> it cannot outside of it, or all other requests in progress will block.
[04:59] <jtv> I thought it ran as a separate process?
[04:59] <lifeless> jtv: if you aren't touching code used in the librarian /server/ this won't matter - but I don't know exactly what you're touching (and be sure to check for imports :))
[04:59] <lifeless> jtv: the librarian is a twistd process, yes.
[05:00] <jtv> lifeless: I'm touching stuff in canonical.launchpad.librarian and canonical.librarian.client, but nothing in server.
[05:00] <lifeless> in the process it has a mainloop, and worker threads; the worker threads do DB access, the mainloop does all the business logic (except DB access)
[05:00] <lifeless> jtv: the server is in canonical.launchpad.librarian
[05:01] <jtv> So client.py is sort of exceptional in there?
[05:01] <jtv> I mean, FileDownloadClient does run client-side, right?
[05:02] <lifeless> it might be nice to have the twisted code more visually distinct (e.g. in a submodule, or move the client to lp.services.librarian.client, or something.
[05:02] <lifeless> I don't want to make the scope bigger on you :)
[05:02] <jtv> Yes.
[05:02] <jtv> I'm not going to do anything along those lines, no.  :)
[05:04] <lifeless> so, to answer your question, I presume so, but I'd need to check.
[05:04] <lifeless> All I'm advising is a little caution and investigation in this area, as we have two dramatically different programming models in play here, and they mix poorly.
[05:04] <lifeless> speaking of which.
[05:05] <jtv> ?
[05:05] <lifeless> spm: are we there yet? it's been an hour.
[05:05] <lifeless> jtv: the speaking of which was a joke.
[05:05] <jtv> :-|
[05:06] <lifeless> jtv: the line before wasn't.
[05:06] <jtv> That much was clear.  The joke I still don't get.
[05:06] <lifeless> oh, it wasn't a very good one
[05:06] <lifeless> it was leading into the spm: line
[05:07] <jtv> ah
[05:07] <jtv> lifeless: still, better to have the other shoe dropped :)
[05:13] <lifeless> taking a breather; I'll be back to heckle spm later
[05:13] <jtv> lifeless: I think there's a better solution for the librarian problem: it's a bad internal distribution of responsibilities.  In _getPathForAlias, the LFA is loaded _only_ to determine that it's visible.  The actual work doesn't involve the object at all.
[05:14] <jtv> nm; take your breather
[05:15] <EdwinGrubbs> poolie: ping
[05:16] <poolie> james_w`: a teeny patch for you
[05:16] <poolie> EdwinGrubbs: hi there
[05:24] <EdwinGrubbs> poolie: I have some questions about the preferred way to use the apport format. The oops currently groups the request variables together, but it seems cleaner to use email.message.Message than to use another ProblemReport to make a hierarchy, so that I don't end up with multiple Date and ProblemType fields.
[05:25] <EdwinGrubbs> poolie: I also wondered if I should use the Stacktrace field for python stacktraces, or if it would be better to only use that for stacktraces created by gdb.
[05:26] <poolie> EdwinGrubbs: you can look at bzrlib.crash to see what we do
[05:26] <poolie> we use Traceback for the python traceback
[05:27] <poolie> which i think is consistent with what other python programs use
[05:27] <poolie> i would probably have one thing RequestVariables containing all the variables
[05:27] <poolie> either just as they are in the url, or decoded
[05:32] <EdwinGrubbs> poolie: oh, request variables in the oops actually is all the cgi variables like HTTP_REFERER, REQUEST_METHOD, etc. and not just the query string. I saw in the apport file format pdf that some of the hierarchical variables are stored as "name=value", but it seems more consistent to me to use "name: value". I'm trying to decide whether to use email.message.Message.as_string(), or ProblemReport().write(StringIO()) to create a hierarchy.
[05:33] <poolie> so you agreed with gary to do it in apport now anyhow?
[05:33] <poolie> i'd probably just pprint a python dict
[05:43] <spm> lifeless: looks like it came back about 5 mins ago
[05:44] <EdwinGrubbs> poolie: well, I spent today determining how easy it would be to do now. I just saw Gary's email that he preferred to do it in the long term. However, my original solution was almost identical to ProblemReport.write_mime() except that I don't base64 encode things and that it handled the special case of the request variables hierarchy. I really think we should use apport now. oops-tools will still be able to process the old oops format, so the differences in the apport format don't cause any implementation problems.
[05:49] <poolie> EdwinGrubbs: so they're not really a hierarchy, are they?
[05:49] <poolie> i mean it's just a dict of strings
[06:01] <EdwinGrubbs> poolie: right, I just meant that the whole problem report is a hierarchy, since the request variables contains multiple values.
[06:05] <poolie> the simplest thing that could work is to pprint an array or put them in urlencoded
[06:06] <poolie> istm that using email formatting would be complicated, might break, and wouldn't help
[06:06] <poolie> ditto nested apport
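poolie's "just pprint a python dict" suggestion is about as small as it sounds; assuming a flat `RequestVariables` field (names illustrative):

```python
from pprint import pformat

request_vars = {
    "HTTP_REFERER": "https://launchpad.net/",
    "REQUEST_METHOD": "GET",
}
# One flat, human-readable field instead of a nested ProblemReport
# or an email.message.Message hierarchy.
report_field = pformat(request_vars)
print(report_field)
```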
[06:10] <jtv> wgrant: seems I needed to create the BuildFarmJob from the factory for my custom job type.  Passing tests again.
[06:22] <lifeless> spm: ok cool
[06:22] <lifeless> spm: so, can we enable profiling, and make the load be good ?
[06:24] <SpamapS> lifeless: hey, what ever happened with SSL improvements? Was just reviewing past threads..
[06:28] <lifeless> SpamapS: theres an RT ticket open to increase the cache length
[06:28] <lifeless> (for idle keys)
[06:29] <lifeless> and theres another open to get me access to the DC apache front end over a VPN + HTTP; I can then test a FE SSL here
[06:29] <SpamapS> ah cool. I have used distcache for mod_ssl in the past to great effect before btw. ;)
[06:30] <lifeless> I'm not sure if we have dual apache or not
[06:30] <lifeless> I suspect not
[06:31] <lifeless> jtv: uhm, doesn't the name come from the LFA too?
[06:31] <SpamapS> wow distcache's last release was in 2004 .. man its been so long since I setup an actual SSL server .. got BigIP's to do it a while back and have just been soft on SSL ever since. ;)
[06:31] <jtv> lifeless: yes, as usual I saw my mistake right after I said it—but no reason to keep you at the time.  ;)
[06:32] <jtv> lifeless: can there be thread/process boundaries in this call chain that I would not see at all?
[06:33] <poolie> well, the txrestfulclient hello world passes again
[06:33] <poolie> that's something
[06:34] <poolie> but also probably enough for now
[06:36] <lifeless> jtv: deferToThread in the librarian is the call boundary
[06:36] <lifeless> jtv: when it returns from the callable supplied to that function, it comes back across the thread.
[06:38] <jtv> lifeless: I don't see that happening anywhere in the call chain from the first fetch of the LFA to the redundant second fetch—I guess that means that it's safe to re-use the same LFA object.
[06:43] <spm> lifeless: is back in profling mode
[06:43] <spm> pro-fling. hrm. maybe not quite. profiling tho....
[06:45] <lifeless> heh
[06:45] <lifeless> spm: and hows the load ?
[06:45] <spm> dropping. 1 3 4 atm
[06:46] <lifeless> spm: is it running with/without the patch ?
[06:46] <spm> good question...
[06:46] <lifeless> spm: and are background jobs still disabled ?
[06:46] <spm> with-out
[06:46] <spm> yup; just nowish.
[06:46] <lifeless> ok thats cool
[06:46] <lifeless> spm: have they been killed :P
[06:47] <spm> oh yes. legit excuse to kill cronjobs off? opportunities like this are rare and to be enjoyed post haste!
[06:47] <lifeless> ok, profiling things now
[06:48] <lifeless> first off, bugtask:+index
[06:49] <lifeless> and now ubuntu/++assignments
[06:50] <lifeless> ok, 7 seconds rather than 16
[06:50] <lifeless> spm: the patch I put up the other day
[06:50] <spm> hm?
[06:51] <lifeless> spm: can we put that on again?
[06:51] <spm> do you have a paste handy?
[06:51] <spm> said file has been overwritten since
[06:51] <lifeless> one sec , finding/making
[06:52] <lifeless> http://paste.ubuntu.com/489589/
[06:56] <lifeless> spm: ^
[06:56] <spm> ta
[06:57] <spm> sorry - was horribly distracted doing the push code for the release; and made the shocking discovery that praseodymium does NOT have sl installed.
[06:57] <spm> naturally this is a critical/urgent problem and moves to rectify needed to be made.
[06:58] <lifeless> indeed
[06:58] <lifeless> sysadmin porn comes first, of course.
[06:58] <spm> hahaha
[06:58] <lifeless> darn, I typed that :(
[06:59] <lifeless> :P
[06:59]  * SpamapS cannot disagree with that
[07:00]  * SpamapS prepares a petition to have sl added to the server cd seed
[07:03] <spm> there. nicely taken wildly out of context.
[07:03] <lifeless> rotfl
[07:04]  * spm bows at the appreciation
[07:04] <spm> the fine art of context free quoting - choosing the title
[07:05] <SpamapS> damn, seems somebody beat me to it. ;)
[07:05] <spm> lifeless: restarting with the patch; give it a few
[07:05] <lifeless> thanks
[07:05] <SpamapS> spm: your title was better than mine. :)
[07:05] <spm> heh
[07:05] <SpamapS> well done
[07:06] <spm> blink. something crashed nicely on the restart
[07:09] <spm> wow. something is really not right here...
[07:09] <lifeless> details?
[07:09] <spm> oh ffs. it's doing a staging rollout AGAIN! aARGH
[07:10]  * spm grumps off and puts in the lock file on sourcherry.
[07:12] <spm> i've killed the crontab entry as a savage "don't do that" for now. I'll see if I can manually get the app server on asuka back to right'n'goodness
[07:15] <wgrant> jtv: Sorry, I'm not completely down with the latest implementation details.
[07:15] <jtv> wgrant: I think I've done all I know I should do… question now is: what next?
[07:16] <wgrant> jtv: You have BuildFarmJob rows now?
[07:16] <jtv> Yes
[07:16] <jtv> I had to create them myself, which from what I see elsewhere seems to be the way.
[07:16] <wgrant> I suggest talking to noodles.
[07:17] <jtv> Yeah
[07:17] <wgrant> How much do you still depend on BranchJob?
[07:18] <jtv> I haven't gone through the details, but I thought it depended mainly on how much the dispatch machinery still depends on TranslationTemplatesBuildJob.
[07:19] <jtv> I mean, there's not much in there other than methods the build farm needs.
[07:19] <wgrant> True.
[07:19] <wgrant> OK, so you've probably done your bit for now, but talk to noodles.
[07:19] <jtv> If the build farm can start dispatching TranslationTemplatesBuilds instead of TranslationTemplatesBuildJobs, I'm either there or very close.
[07:19] <wgrant> That's the next step.
[07:19] <jtv> Yes, I will definitely talk to him once he appears—he also promised me a review this morning.  :)
[07:20] <wgrant> Ah, excellent.
[07:21] <jtv> I'm sort of eager to start cleaning out the old stuff, and sort of not looking forward to it at the same time.  :-)
[07:32] <lifeless> spm: and the conclusion is?
[07:33] <spm> ARGH
[07:33] <spm> am working on getting that to argh <== atm
[07:33] <lifeless> ij
[07:33] <lifeless> ok
[07:33] <lifeless> bbs
[07:33] <lifeless> can you kick the profile rsync in the interim?
[07:34] <spm> so kicked
[07:36] <spm> try to start the patched and profiling "new" version...
[07:36] <spm> trying.
[07:48] <spm> it's still "starting"....
[07:50] <spm> lifeless: wooo. it's started. have at it.
[07:55] <lifeless> spm: load is still low ?
[07:55] <lifeless> spm: and does it have both patches, or only the query changing one?
[07:58] <lifeless> spm: please kick the profile rsync - thanks
[07:59] <spm> kicked and very low, 0.22 0.35 0.71
[07:59] <lifeless> thanks
[07:59] <lifeless> spm: which patch(es) did it have?
[08:00] <spm>  lib/lp/blueprints/model/specification.py and the profiling on
[08:00] <lifeless> uhm, both patches change that file :P
[08:01] <spm> haha
[08:01] <lifeless> have a look
[08:01] <lifeless> does it change the column definitions
[08:01] <spm> one sec. just trying to stop a db from faceplanting
[08:01] <lifeless> or the query
[08:01] <lifeless> kk
[08:07] <spm> lifeless: appears to be this one at a cursory glance at the first few lines: http://paste.ubuntu.com/489589/
[08:08] <lifeless> ok, could you appyly the other as well ?
[08:11] <spm> heh sure, you have a paste handy?
[08:13] <lifeless> yes
[08:13] <lifeless> hang on while I check the backlog
[08:13] <spm> ta
[08:14] <spm> just doing about 17 bazzilions things at once atm.
[08:14] <StevenK> spm: Like notmal
[08:14] <StevenK> Er, normal
[08:14] <spm> ....
[08:14] <lifeless> http://pastebin.com/E7hMnL28
[08:14] <lifeless> spm: ^
[08:14] <spm> ta
[08:14] <spm> gimme 5-10; just need to disable.notify a bunch of things in prep for the release in 45.
[08:15] <lifeless> sure
[08:22] <adeuring> good morning
[08:41] <lifeless> spm: I'll check back in regularly till you say it's done... I can see you're busy
[08:42] <spm> ta
[08:46]  * lifeless starts coming up with ideas to make rollouts even shorter
[08:49] <adeuring> lifeless: did you talk with deryck about my theory of the cause of the still remaining problems of the apport retracer? that the librarian is simply queueing requests from the app server for too long?
[08:52] <spiv> adeuring: it was discussed; IIRC it was diagnosed as missing firewall rules for some app servers
[08:53] <adeuring> spiv: interesting.... do you have any details?
[08:54] <spiv> adeuring: see the #is and/or #launchpad-code logs from about 7 hours ago
[08:54] <adeuring> spiv: thanks!
[08:57] <mrevell> Hi
[08:58] <lifeless> adeuring: hi
[08:59] <adeuring> hi lifeless
[08:59] <lifeless> adeuring: tcp connect timeout default is 30 seconds IIRC (you need to wait for the MSS, again IIRC)
[08:59] <lifeless> if the librarian was doing that with 5 concurrent requests we'd be sunk, also its careful to do lots of incremental bits of work
[09:00] <lifeless> so I'd expect a-diskio * 4 peak slowness, - a second or two tops - not 30
[09:00] <lifeless> adeuring: which is why I looked elsewhere
[09:00] <lifeless> adeuring: now, 4 is the number of threads our appservers have
[09:01] <lifeless> so 5 spilling you into a new appserver was a reasonable assumption :)
[09:01] <adeuring> right
[09:01] <elmo> I'm changing our firewall rules so that we REJECT rather than DROP cross-site requests to make diagnosis of this kind of thing easier, FWIW
[09:02] <lifeless> elmo: thank you!
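The REJECT-vs-DROP change elmo describes is a one-word difference in a netfilter rule; illustrative only (addresses and chain are made up, not the actual DC config):

```shell
# DROP silently discards packets, so a misrouted client hangs until its
# TCP connect timeout and the failure looks like a missing machine.
# REJECT sends an immediate refusal, so the failure is fast and visible.
iptables -A FORWARD -p tcp -s 10.0.1.0/24 -d 10.0.2.10 \
    -j REJECT --reject-with tcp-reset
```

This is exactly why the dropped restricted-upload packets made the librarian machine "look missing" at the network layer.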
[09:02] <adeuring> lifeless: just tried my script with 5 concurrent requests -- looks a bit better
[09:02] <lifeless> adeuring: a bit?
[09:02] <adeuring> no errors
[09:02] <lifeless> thats a lot better then :P
[09:03] <adeuring> lifeless:i think we should do more logging of what happens on the librarian server,
[09:03] <poolie> lifeless: i've been reading a book ian gave me 'prefactoring'
[09:04] <lifeless> adeuring: We should strike a balance between non and too much... note that the librarian does OOPS as of this rollout (or was it last one)
[09:04] <poolie> it's a bit basic but it has some nice suggestions along the lines of your guide there
[09:04] <lifeless> I think QA haven't added it to the daily reportcard yet.
[09:04] <lifeless> poolie: interesting
[09:04] <lifeless> poolie: can I borrow it @ UDS ?
[09:04] <poolie> if you remind me several times closer to the date :)
[09:05] <lifeless> poolie: is this close enough?
[09:05] <lifeless> poolie: how about now?
[09:05] <spm> lifeless: applied that 2nd patch as well; restarting now
[09:05] <StevenK> Haha
[09:05] <poolie> uh
[09:05] <poolie> good night :)
[09:05] <lifeless> poolie: :P
[09:05] <lifeless> poolie: I'll remind you just before we go
[09:05] <adeuring> lifeless: well, I think the issue is not necessarily a bug in code or anything -- just that the librarian can't handle requests fast enough
[09:05] <lifeless> adeuring: I'm not aware of issues like that
[09:06] <lifeless> adeuring: or data suggesting we have them; certainly I agree that we *need to be able to diagnose such things*
[09:06] <bigjools> morning all
[09:06] <lifeless> and if the logs are insufficient, we should increase them till they are.
[09:06] <adeuring> lifeless: right
[09:06] <lifeless> adeuring: we're now logging librarian times in the appserver for downloads; we can add uploads easily
[09:07] <lifeless> hmm
[09:07] <lifeless> for uploads we should also add the size, perhaps in the closing bit of downloads too
[09:07] <adeuring> lifeless: ok, that would help. but i suspect that these connection timeouts are caused by the librarian, not the app server
[09:07] <lifeless> adeuring: which timeouts?
[09:08] <lifeless> adeuring: if you mean the ones apport has been having, that show up as 500 errors from the API with timeout to mizuho in them...
[09:08] <lifeless> adeuring: they were a firewall
[09:08] <spiv> adeuring: the evidence I've seen suggests the librarian server is coping just fine
[09:08] <spiv> adeuring: why do you think otherwise?
[09:08] <adeuring> lifeless: well, my little script causes them just fine even now
[09:08] <lifeless> adeuring: it does?
[09:09] <adeuring> yes, if it starts 8 concurrent requests
[09:09] <lifeless> can you pastebin the appserver trace?
[09:09] <adeuring> 5 seems to be better
[09:09] <lifeless> or does it seem to be identical?
[09:09] <spiv> "just now", you mean during a rollout? ;)
[09:09] <lifeless> adeuring: oh hang on. lollolllollololl
[09:10] <lifeless> adeuring: launchpad is down. Readonly mode. Zer iz no upload possible because the librarian is switched off
[09:10] <lifeless> or meant to be; if you're successfully uploading there is something really wrong.
[09:10] <adeuring> lifeless: yeah, ok, that's another possible cause
[09:10] <adeuring> but... when exactly was the rollout started?
[09:10] <spiv> About 10 minutes ago.
[09:10] <lifeless> 11 minutes ago
[09:11] <adeuring> ok.. hard to be sure then when exactly i ran the script again....
[09:11] <adeuring> ok, let's try again once the rollout is done
[09:11] <lifeless> yes
[09:11] <lifeless> if you can provoke a connection timeout error, its a definite bug.
[09:11] <lifeless> My first reaction is to look elsewhere than the librarian
[09:12] <spiv> adeuring: so, connection timeouts are really unlikely to be a problem in the librarian server in my opinion
[09:12] <lifeless> for a bunch of reasons.
[09:12] <lifeless> but we can't exclude it; lets just keep the net broad.
[09:12] <adeuring> spiv: well, we _could_ see them
[09:12] <lifeless> e.g. today we found a concrete problem with the firewall config
[09:12] <lifeless> adeuring: no, you saw a firewall.
[09:13] <lifeless> adeuring: we know for *some things* it was definitely *a firewall*
[09:13] <adeuring> lifeless: what firewall issue was it?
[09:13] <lifeless> adeuring: 2 appservers are in a different datacentre.
[09:13] <spiv> adeuring: it's a Twisted server that ought to always be accepting connections rapidly; if connections are not being processed by that daemon in a timely fashion the only likely cause is that the librarian's host is totally swamped for disk IO
[09:13] <adeuring> ahhh, so they could not access the librarian?
[09:13] <lifeless> adeuring: the firewall rules for them did not include the restricted upload port, which is what the appservers connect to to upload restricted files.
[09:14] <lifeless> adeuring: the firewall rules dropped the packets as hostile, and so at the network layer it looks like the librarian /machine/ is missing.
[09:14] <adeuring> lifeless: ah, ok, that looks like a real problem....
[09:14] <spiv> adeuring: so a failure to connect() from another machine strongly suggests problems in something other than the librarian daemon.
[09:15] <lifeless> adeuring: but there may be other problems.
[09:15] <lifeless> adeuring: however, we *know* there was *a* problem, that would cause the symptoms seen before
[09:15] <spiv> That doesn't make it impossible, of course, but it does mean that assuming the daemon is the most likely source of the problem is likely to be the wrong approach.
[09:16] <adeuring> yes, i understand
[09:24] <bigjools> lifeless: are you familiar with the distroeries +queue page?
[09:24] <bigjools> distroseries, even
[09:24] <lifeless> I seem to recall it showing up on slow-pages reports
[09:25] <bigjools> indeed
[09:25] <bigjools> it's the bane of my life
[09:25] <lifeless> anyhow, I'm not intimately familiar with it
[09:25] <lifeless> but lets pretend I am
[09:25] <bigjools> well it's mainly intended to let archive admins move uploaded packages into the accepted state
[09:25] <bigjools> if they got held for some reason
[09:25] <lifeless> righto, its the aa review queue
[09:26] <bigjools> this is normally done in zopeless scripts if it's auto-accepted
[09:26] <bigjools> when accepting packages we also close lots of bugs and email people
[09:26] <bigjools> (potentially)
[09:26] <lifeless> meep
[09:27] <bigjools> and this is where the trouble arises with that page if any of the objects are private
[09:27] <bigjools> (not to mention the query load)
[09:27] <lifeless> and email load
[09:27] <lifeless> that really needs to be out of appserver anyway
[09:27] <bigjools> yes
[09:27] <lifeless> I'm so excited
[09:27] <bigjools> anyway, I have a bug where it's OOPSing occasionally for someone when it tries to access private email addresses
[09:27] <lifeless> should be able to point really clearly at email perf tomorrow
[09:28] <lifeless> we'll have failed convertToQuestions, I'm sure.
[09:28] <bigjools> I am wondering if it's acceptable to remove the security proxy in carefully defined situations
[09:28] <lifeless> so, why does it try to access their email address?
[09:28] <lifeless> [clearly its ok to do that in carefully defined situations
[09:28] <lifeless> code exists to serve us, not the other way around; but if we can avoid it its somewhat nicer.]
[09:29] <bigjools> it's trying to email potentially private addresses as part of a) upload notification, b) bug notification
[09:29] <bigjools> all done under the permission of the webapp user
[09:29] <lifeless> now, to avoid disclosure that has to be part of the BCC right ?
[09:29] <lifeless> or a direct mail
[09:29] <bigjools> long term we need to jobify it of course
[09:29] <bigjools> did I just make up a word? :)
[09:30] <wgrant> I don't think that's the private email address problem.
[09:30] <wgrant> IIRC it dies (possibly correctly) when trying to include it in Changed-By or Signed-By in the announcement email.
[09:30] <lifeless> I'd suggest having some method that you pass to the Person asking it to do the bit thats private
[09:30] <wgrant> But I said that in the bug... let's see..
[09:30] <lifeless> yeah, I'd say thats correct.
[09:30] <bigjools> that would be a) as I said above
[09:31] <wgrant> bigjools: There's no problem emailing to them, though.
[09:31] <wgrant> It's including them in the email that's the problem.
[09:31] <wgrant> Oh, no, other way around.
[09:31] <wgrant> This is confusing.
[09:31] <wgrant> So it must already rSP in places.
[09:31] <bigjools> no
[09:31] <bigjools> well,
[09:31] <wgrant> I said in the bug:
[09:31] <wgrant> I believe it only fails if it would send a notification to the private
[09:31] <wgrant> email address; using the private email address in the email (eg. if the
[09:31] <wgrant> person is deactivated) seems to work fine.
[09:32] <bigjools> I would have thought any access of a private email would blow up
[09:32] <wgrant> Yeah, it does.
[09:32] <wgrant> So it's rSPing already.
[09:32] <bigjools> hmmm
[09:32] <lifeless> broad brush strokes
[09:32] <lifeless> here is how I would tackle it.
[09:32] <lifeless> I would ensure that Person is responsible for deciding when to/not to bypass the proxy
[09:33] <bigjools> removeSecurityProxy does not appear in the queue.py file
[09:33] <bigjools> so if it's used it's elsewhere
[09:33] <bigjools> lifeless: I'm not sure it knows enough to do that, does it?
[09:33] <lifeless> bigjools: add methods ;)
[09:33] <bigjools> eugh :(
[09:34] <bigjools> Person is already bloated
[09:34] <lifeless> multiple places may want to be able to send an email
[09:34] <lifeless> *to* someone, with only one recipient
[09:34] <lifeless> thats reasonable to bypass the proxy -in that case-
[09:35] <lifeless> grabbing a private email to put into a template for announcements isn't ok though.
[09:35] <bigjools> that was the point I was going to make
[09:35] <lifeless> as long as folk choosing to use the method won't be confused or guided into making the wrong choice, I think its ok to do it anywhich way..
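The pattern lifeless sketches above, letting Person itself decide when the proxy may be bypassed, can be illustrated offline. In Launchpad the real unwrapping call is zope.security.proxy's removeSecurityProxy; the SecurityProxy, Person, and send_direct_mail below are hypothetical minimal stand-ins, not Launchpad's actual classes.

```python
class ForbiddenAttribute(Exception):
    """Raised when a hidden attribute is accessed through the proxy."""


class SecurityProxy:
    """Toy stand-in for zope.security's proxy: hides listed attributes."""

    def __init__(self, obj, private=()):
        object.__setattr__(self, "_obj", obj)
        object.__setattr__(self, "_private", frozenset(private))

    def __getattr__(self, name):
        if name in object.__getattribute__(self, "_private"):
            raise ForbiddenAttribute(name)
        return getattr(object.__getattribute__(self, "_obj"), name)


def remove_security_proxy(obj):
    """Unwrap the toy proxy; a no-op on unproxied objects, mirroring the
    real removeSecurityProxy."""
    try:
        return object.__getattribute__(obj, "_obj")
    except AttributeError:
        return obj


class Person:
    """Hypothetical minimal Person; only the email is access-controlled."""

    def __init__(self, name, preferred_email):
        self.name = name
        self.preferred_email = preferred_email

    def send_direct_mail(self, subject, body, outbox):
        # The decision to bypass the proxy lives *here*, on Person: the
        # private address is used only as the sole recipient and is never
        # handed back to the caller.
        person = remove_security_proxy(self)
        outbox.append((person.preferred_email, subject, body))


outbox = []
noddy = SecurityProxy(
    Person("noddy", "noddy@example.com"), private={"preferred_email"})

# Reading the address directly still blows up...
try:
    noddy.preferred_email
except ForbiddenAttribute:
    pass

# ...but a single-recipient mail, sent via the Person method, is fine.
noddy.send_direct_mail("Accepted", "Thanks for your upload!", outbox)
```

The point of the design is the one made above: "send this person a mail" is a safe capability to expose, while "give me this person's address" (e.g. for an announcement template) is not.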
[09:36] <bigjools> maybe we should refuse uploads on people who have private email addresses?
[09:36] <bigjools> s/on/by/
[09:36] <lifeless> why are we looking up their email (vs using the one in the changes file itself) ?
[09:37] <bigjools> it uses the preferred email
[09:37] <lifeless> so if I upload as noddy@example.com
[09:37] <lifeless> but my preferred mail is fred@demo.com
[09:37] <lifeless> the .changes file is regenerated with fred@demo.com ?
[09:38] <bigjools> nothing's regenerated
[09:38] <lifeless> ok, so why are we looking up their email?
[09:38] <bigjools> we put email addresses on the email template
[09:38] <lifeless> whats the template file
[09:38] <lifeless> it will be faster than 20 questions :)
[09:38] <bigjools> to, y'know, send it :)
[09:39] <bigjools> man it's been 3 years since I looked at this code, hang on
[09:39] <lifeless> bigjools: no, I don't understand why we need their email
[09:39] <bigjools> lib/canonical/launchpad/emailtemplates/upload-accepted.txt
[09:39] <bigjools> it uses changed-by, maintainer and signer
[09:39] <lifeless> the approver did the approving; we need their Name; the uploader uploaded it, we need their Name. Neither sent the mail, so we don't need their emails, and we send separate copies per recipient, so thats fine too.
[09:40] <bigjools> lifeless: the current format was arrived at after extensive discussion with the ubuntu guys
[09:40] <lifeless> I think we need to involve them then.
[09:41] <bigjools> we need to re-create the problem first
[09:41] <lifeless> I think its incompatible to both have 'you can have your email address private in launchpad' and to be putting it in mails sent to other people.
[09:41] <lifeless> certainly a test case will help
[09:41] <bigjools> yes, exactly
[09:41] <lifeless> o/~
[09:42] <bigjools> I also don't understand why a preferred email address would be private
[09:42] <lifeless> its their only email ?
[09:42] <bigjools> actually it's all or none isn't it?
[09:42] <lifeless> probably
[09:42] <lifeless> I think so
[09:43] <bigjools> so trying to hide your email address while doing public works seems...odd :)
[09:43] <lifeless> folk are very worried about spam
[09:43] <lifeless> changes files go to a public list.
[09:43] <bigjools> we could put <private email> on the template
[09:44] <lifeless> for instance, yes.
[09:44] <lifeless> Or an LP account url, or the SSO persistent url.
[09:44] <bigjools> but the To: can't be hidden
[09:44] <lifeless> bigjools: the To: shouldn't be them anyway ?
[09:44] <bigjools> err From:, sorry
[09:44] <bigjools> actually I can't remember
[09:44] <bigjools> I think the uploader is CCed from memory
[09:44] <lifeless> when I read that template, it doesn't look like it would make sense to be 'from them', because its 'thanking them'
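The placeholder idea floated above (putting `<private email>` in the announcement body instead of the real address) could be as simple as a guard applied when filling the template. The field names, the `hide_email` flag, and the dict-based person records below are hypothetical illustrations, not Launchpad's actual upload-notification code.

```python
def displayable_email(person):
    """Return an address safe to embed in an announcement body.

    When the person hides their address, substitute a placeholder;
    alternatives suggested above include an LP profile URL or the SSO
    persistent URL.
    """
    if person.get("hide_email"):
        return "<private email>"
    return person["email"]


def fill_template(changed_by, maintainer, signer):
    """Fill the body fields used by upload-accepted.txt (per bigjools:
    changed-by, maintainer and signer)."""
    return (
        f"Changed-By: {displayable_email(changed_by)}\n"
        f"Maintainer: {displayable_email(maintainer)}\n"
        f"Signed-By: {displayable_email(signer)}\n"
    )


body = fill_template(
    {"email": "noddy@example.com", "hide_email": True},
    {"email": "maint@example.com"},
    {"email": "signer@example.com"},
)
```

This only addresses the body; as noted above, the To:/CC: headers are a separate question, since mail can be sent to a private address without disclosing it to other recipients.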
[09:45] <lifeless> bigjools: I have a suggestion: register a UDS spec for this.
[09:45] <lifeless> bigjools: it needs it.
[09:45] <bigjools> well, there's quite a few people involved in a package
[09:46] <lifeless> well, maybe it doesn't, but I think the driving forces are complex enough it will benefit from that scale of analysis and discussion.
[09:46] <bigjools> I'm not comfortable with the analysis yet
[09:46] <lifeless> ok
[09:46] <bigjools> once we get a failing test case (and when I can see the oops that might help) then we can decide where to go
[09:46] <lifeless> lets get an automated test
[09:47] <lifeless> agreed
[09:47] <lifeless> if I can help further I'd be delighted to do so,
[09:47] <bigjools> great, thank you
[09:47] <lifeless> but I suspect that its going to run into a definitional problem very early on rather than a code problem; and for that the stakeholders... have to hold their stakes.
[09:48] <bigjools> dipped in silver nitrate?
[09:48] <lifeless> lol
[10:23] <lifeless> spm: hah just saw you put the patch on... trying
[10:23] <lifeless> its gone already...
[10:23] <lifeless> will try tomorrow
[10:38] <lifeless> bigjools: I bet that  https://launchpad.net/ubuntu/+search (Distribution:+search) will be your top timeout this cycle.
[10:39] <bigjools> yay
[10:41] <lifeless> anyhow, I'm going to be looking at when I get up :)
[10:41] <lifeless> https://lpstats.canonical.com/graphs/OopsLpnetHourly
[10:42] <lifeless> adeuring: try now
[10:47] <wgrant> Now, let's see how badly Soyuz breaks on Lucid...
[10:47] <bigjools>  /o\
[10:47] <bigjools> I love your optimism
[10:47] <wgrant> Oh, code upgrade's done already too? Nice.
[10:47] <lifeless> whee things feel sluggish
[10:48] <wgrant> bigjools: Is optimism ever a good idea when Soyuz's fragility is involved? :)
[10:49] <bigjools> hey it's a lot better than it used to be
[10:50] <wgrant> It is.
[10:50] <wgrant> But buildd-manager will still break if you think about it the wrong way.
[10:50] <wgrant> Although I guess that's not technically Soyuz any more.
[10:50] <bigjools> indeed
[10:50] <bigjools> I need to set up a new project for it
[10:50] <bigjools> launchpad-buildmaster has a nice ring to it
[10:52] <lifeless> another project?
[10:52] <wgrant> bigjools: launchpad-buildfarm, surely?
[10:53] <wgrant> bigjools: This reminds me...
[10:53] <bigjools> no, buildmaster, it's specifically for the master sude
[10:53] <bigjools> side
[10:53] <wgrant> Have you ever seen production or DF b-m start failing to dispatch builds, complaining that the XML-RPC build command is being given too many arguments?
[10:53] <bigjools> nope
[10:54] <wgrant> I've seen it locally, and had similar reports from a couple of others' local setups.
[10:54] <wgrant> And I partly tracked down why it happens.
[10:54] <wgrant> But I don't know what triggered it... and I'd never heard about it happening anywhere prod-ish.
[10:55] <spiv> Sweet, codebrowse OOPSes appear to work.
[10:55] <wgrant> (the problem is that BuilderSlave is broken now, but we only use RecordingSlave... except for in some circumstances that appear sometimes locally, which are not entirely clear)
[10:56] <lifeless> spiv: you got one?
[10:56] <spiv> Be one of the first to generate your very own codebrowse OOPS: http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/annotate/head%3A/README2
[10:56] <lifeless> spiv: but can you see it on lp-oops?
[10:56] <wgrant> That's a little more friendly than the old page
[10:56] <spiv> lifeless: I haven't tried yet
[10:56] <lifeless> spiv: acid test man ;)
[10:56] <adeuring> lifeless: one test run with 8 parallel uploads: one failed with a "connection timed out"
[10:56] <spiv> lifeless: but even that page beats "Internal server error"
[10:57] <lifeless> adeuring: whats the oops?
[10:57] <adeuring> wait a second...
[10:57] <spiv> lifeless: so far the OOPS isn't on lp-oops
[10:58] <spiv> (OOPS-1713CB6)
[10:58] <lifeless> spiv: it may need some follow up
[10:58] <lifeless> spiv: with QA
[10:58] <lifeless> specifically:
[10:58] <spiv> What's the typical delay for syncing?
[10:58] <lifeless>  - needs to be added to the lpnet summaries.
[10:58] <lifeless>  - needs to be added to the list of dirs to scan for the oops db scanner
[10:58] <adeuring> lifeless: problem is that we don't get an OOPS
[10:58] <lifeless> spiv: 3m I think
[10:58] <lifeless> adeuring: what do we get ?
[10:59] <lifeless> adeuring: if its apis check the X-Launchpad-OOPS header
[10:59] <adeuring> just the error message "connection time out", my script doesn't print it
[10:59] <lifeless> adeuring: (I think that is where the id goes)
[10:59] <adeuring> ok, I'll try to find it
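Pulling the OOPS id out of a failed API response, as lifeless suggests, could look roughly like the helper below. The X-Launchpad-OOPS header name is the one mentioned above; the headers mapping and the sample OOPS id are hypothetical stand-ins for whatever HTTP library the upload script uses.

```python
def extract_oops_id(headers):
    """Return the OOPS id from response headers, or None if absent.

    Lookup is case-insensitive, since HTTP header casing varies between
    servers and client libraries.
    """
    for key, value in headers.items():
        if key.lower() == "x-launchpad-oops":
            return value
    return None


# Headers as they might be captured from a failed request
# (hypothetical values).
failed_response_headers = {
    "Status": "503 Service Unavailable",
    "X-Launchpad-OOPS": "OOPS-1713A1234",
}
oops_id = extract_oops_id(failed_response_headers)
```

As noted below, the OOPS id is the critical piece: it identifies both the backtrace and the appserver the request landed on.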
[10:59] <lifeless> adeuring: to debug this we need:
[10:59] <lifeless>  the backtrace
[10:59] <lifeless>  the appserver it happened on
[10:59] <adeuring> i know
[11:00] <lifeless> it may be a further firewall issue on the other appservers.
[11:00] <lifeless> or something.
[11:00] <adeuring> so, how can I figure out which app server is used?
[11:01] <lifeless> anyhow... late here. If you can get the appserver + error, ask the GSAs if they can confirm that appserver has access to the restricted upload port
[11:01] <lifeless> adeuring: its in the OOPS :)
[11:01] <adeuring> ah, ok
[11:01] <lifeless> I'm quite sure we generate one, just goes into a header from what gary was saying the other week
[11:02] <lifeless> adeuring: you might like to file a new private bug
[11:02] <lifeless> adeuring: unsubscribe everyone but you
[11:02] <lifeless> and then test on it.
[11:02] <adeuring> lifeless: yeah...
[11:02] <lifeless> gnight
[11:03] <lifeless> if its not a firewall issue I will debug it with your script (if you supply it) and the data you gather overnight; getting the OOPS is critical path to solving it.
[11:03]  * lifeless waves gnight
[11:04] <wgrant> jelmer: Hey, how's the branch? Were you able to reproduce the issues I reported?
[11:04] <bigjools> nn lifeless
[11:04] <wgrant> Night lifeless.
[11:04] <noodles775> Enjoy the rest of your evening lifeless
[11:04] <jelmer> 'night lifeless
[11:05] <jelmer> wgrant, yeah, fixing + qa'ing at the moment
[11:05] <wgrant> jelmer: Great.
[11:05] <bigjools> so, my failure-detecting b-m hasn't failed anything yet.  Is it wrong to want to see that happen? :)
[11:06] <wgrant> Sorry for throwing them at you so late... I wasn't aware until yesterday that the branch was targetted at 10.09.
[11:07] <wgrant> bigjools: Does it manage to distinguish between build and builder failures?
[11:07] <bigjools> that's the plan, yes
[11:08] <jelmer> wgrant: Thanks for bringing it up in the first place. You saved quite a few people the stress and extra time that would've come with a broken rollout.
[11:09] <wgrant> jelmer: I have a few other issues with the branch from a more thorough review today, but I'm sure I've caused you enough trouble for now.
[11:09] <wgrant> None are particularly major, I don't think.
[12:04] <deryck> Morning, all.
[14:22] <wgrant> Was cocoplum upgraded to Lucid?
[14:22] <wgrant> grep-dctrl now chokes on maverick's Sources files for reasons which I cannot entirely determine.
[14:23] <wgrant> Ooh dear.
[14:24] <wgrant> I think cocoplum's a-f cache might be pretty fucked.
[14:25] <wgrant> bigjools: ^^
[14:25] <wgrant> maverick's main Sources is now ~3.4MB. From an out-of-date mirror, it's ~4MB.
[14:25] <wgrant> Some sections are truncated.
[14:25] <bigjools> wgrant: known bug with lucid a-f
[14:26] <bigjools> bug 633967
[14:26] <_mup_> Bug #633967: apt-ftparchive generates corrupt Sources stanzas for .dsc files without Checksums-* fields <apt (Ubuntu):In Progress> <apt (Ubuntu Lucid):In Progress> <apt (Ubuntu Maverick):In Progress> <https://launchpad.net/bugs/633967>
[14:26] <wgrant> Ah, great.
[14:26] <bigjools> life is never too easy is it
[14:27] <wgrant> (now, what were you saying about optimism a few hours ago? :P)
[14:30] <wgrant> bigjools: I bet it's the case-sensitivity stuff in changes file parsing.
[14:31] <wgrant> The code looks for Launchpad-bugs-fixed, but it's normally spelt Launchpad-Bugs-Fixed.
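The pitfall wgrant identifies can be shown in a few lines: deb822-style fields are conventionally matched case-insensitively (python-debian's Deb822 classes behave this way), so a parser keyed on exact strings silently misses `Launchpad-Bugs-Fixed`. The `Deb822Fields` class below is a toy stand-in, not Launchpad's actual changes-file parser.

```python
class Deb822Fields(dict):
    """Toy dict with case-insensitive keys, mimicking deb822-style
    field handling in .changes/.dsc files."""

    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def get(self, key, default=None):
        return super().get(key.lower(), default)


# The field as it is normally spelt in a .changes file.
plain = {"Launchpad-Bugs-Fixed": "1 2 3"}

robust = Deb822Fields()
robust["Launchpad-Bugs-Fixed"] = "1 2 3"

# A plain dict keyed on the exact string silently misses the field when
# the lookup uses the other capitalisation:
missed = plain.get("Launchpad-bugs-fixed")   # plain dict: no match
found = robust.get("Launchpad-bugs-fixed")   # case-insensitive: matches
```

The "silently" part is what makes this nasty: nothing errors, the bugs just never get closed, and a test using the same bogus capitalisation in its fixture (as speculated below) passes anyway.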
[14:31] <bigjools> well remembered
[14:31] <wgrant> The uploads referenced are fine.
[14:32] <wgrant> A ha ha.
[14:33] <wgrant> A test was changed to use the former capitalisation.
[14:33] <bigjools> jelmer!
[14:33] <bigjools> (that's one way of making the tests pass)
[14:33] <wgrant> Heh.
[14:43] <bigjools> wgrant: which test was changed?
[14:44] <wgrant> bigjools: See the last hunk of the rev...
[14:44] <wgrant> Let me find it.
[14:44] <bigjools> which revno?
[14:44] <wgrant> Ah, no.
[14:44] <wgrant> The tests were already broken.
[14:44] <wgrant> sync-source was changed to use the bad capitalisation.
[14:45] <bigjools> it's because of the changed parser we're using now
[14:45] <wgrant> db-devel r9741
[14:45] <bigjools> FML
[14:45] <wgrant> It is, yes.
[14:45]  * bigjools wonders why that didn't break the test
[14:46] <wgrant> The test packages probably use the bogus capitalisation.
[14:46] <bigjools> oh, I see
[14:46]  * wgrant greps.
[14:46] <bigjools> yes it does
[14:46] <wgrant> YAY.
[14:46]  * bigjools files a bug
[15:09] <bigjools> wgrant: btw, isn't it way past your bedtime? :)
[15:12] <wgrant> bigjools: It's barely midnight.
[15:33] <tyarusso> Um, how would I set up an environment of db-stable instead of devel with rocketfuel-setup?  I tried changing all of the lines I could find in the script and still got a "devel" directory under lp-branches.
[15:35] <tyarusso> ...or I could try reading the help output.  *headdesk*
[16:08] <tyarusso> Where is the "etc/zope.conf" for disabling developer mode supposed to go, and how would one do that?
[16:10] <tyarusso> Also, I can log in with the test account, but how do I register new accounts?  I don't see any links for that yet...
[16:27] <leonardr> asking a possibly stupid question rather than wasting more time. i've got a totally up-to-date system and 'make' is failing with a bzrlib import error, "cannot import name SAFE_INSTRUCTIONS". any help?
[16:27] <leonardr> gary, lifeless -^
[16:29] <gary_poster> don't know leonardr but will try to dupe.  maybe try a code team or bzr team member after that.
[16:29] <leonardr> maybe rockstar can help?
[16:30] <gary_poster> tyarusso: register new accounts: not exposed on dev system.  our openid server is responsible for that in the production/staging.
[16:30] <rockstar> leonardr, update download-cache?
[16:30] <tyarusso> gary_poster: Oh.  Well how is someone supposed to use it then?
[16:30] <gary_poster> tyarusso: etc/zope.conf: see configs/development/launchpad.conf
[16:31] <leonardr> rockstar, up to date
[16:31] <gary_poster> tyarusso: sorry, don't understand question.  dev build is for developers, which approximates production just enough to do dev style testing.  We're not in charge of making new users, so we don't expose it
[16:32] <rockstar> leonardr, oh, and sourcecode also needs to be updated.
[16:32] <rockstar> SAFE_INSTRUCTIONS should come from bzr-builder, which can't be eggified.
[16:32] <tyarusso> gary_poster: Okay, my goal here is to set up a Launchpad instance that we could actually use for our company.  What other pieces would I need to get to accomplish that?
[16:33] <leonardr> rockstar: sourcecode/bzr-builder is up to date at revision 63
[16:33] <leonardr> however, revision 63 is from january. could it have moved?
[16:34] <rockstar> leonardr, hm, it should have.  abentley?
[16:34] <gary_poster> tyarusso: oh, eek.  I'm sorry, that's a huge deal that we don't support.  We open source the code to improve our free service.  ...um, you could try asking the dev list and see if anyone there has advice?
[16:35] <abentley> rockstar, I have just landed my permit-commands branch.
[16:35] <leonardr> rockstar: i've got the pqm-managed branch at bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/bzr-builder/trunk/
[16:35] <abentley> rockstar, in that branch, the revno is 66.
[16:35] <rockstar> leonardr, ^^
[16:36] <abentley> leonardr, the revno in that branch is 66, so you aren't up to date.
[16:36] <tyarusso> gary_poster: Hmm, okay.  Meanwhile let's try a different direction:  1) Is the stuff you use for your openid server all open source too?  2)  Do you offer hosted services that could use our own domain names & branding?  If so, what's the fee schedule like for that?
[16:36] <leonardr> abentley: ok, i've got the new stuff
[16:38] <tyarusso> gary_poster: Oh, I should note - we'd be interested in hosting both open-source and proprietary projects.
[16:38] <gary_poster> bac, are you around to address tyarusso's question # 2 above?
[16:39] <leonardr> ok, i think i've got a bunch of out-of-date stuff
[16:39] <bac> mrevell: can you help tyarusso?
[16:40] <tyarusso> haha, support hot potato!
[16:40] <bac> ah, tyarusso, i read through all of your question.  to answer #2, no, we don't offer branded hosting
[16:40] <tyarusso> bac: Hmm, okay.
[16:41] <tyarusso> The normal stuff might be better than what we have now, but wouldn't let us put everything all in one place, which would be ideal.
[16:42] <gary_poster> tyarusso: for 1) the openid server is closed source.  ...I keep revising my answer to cut to the chase sufficiently...in sum, you'd have to branch the code to make it work, and it would be hairy; maybe you could get some community people interested, dunno.
[16:43] <gary_poster> tyarusso: everything in one place: this isn't my part of story, and I have to run now, but (A) you could have a group that collects your projects and (B) I'm almost 100% sure it can contain both proprietary and open-source bits.
[16:44] <gary_poster> bac, mrevell, you can fix my reply if necessary :-)
[16:55] <m4n1sh> is there any way in the Launchpad API for search a bug?
[16:55] <m4n1sh> bugs has only createBug and in bug there is nothing which matches searchBug or something like that
[17:01] <deryck> rockstar, did you ever get that widget moving with help from dav?
[17:01] <rockstar> deryck, nope.  Was hoping to get with you tomorrow about it.
[17:01] <rockstar> deryck, also, I just landed yui 3.2 into lazr-js, and that's got some more debugging happiness in it, so I thought I'd merge.
[17:01] <deryck> ok, let's plan on it.  I'll try to poke at your code in my evening for home work. :-)
[17:01] <deryck> indeed!
[17:10] <gmb> m4n1sh, Find the project you want to search on and then use project.searchTasks()
[17:10] <m4n1sh> gmb: so this means global bug search is not possible
[17:11] <gmb> m4n1sh, Yes, but it's not possible in the Launchpad UI, either.
[17:12] <m4n1sh> gmb: I think you can https://bugs.launchpad.net/
[17:12] <m4n1sh> you can choose "All projects"
[17:12] <gmb> m4n1sh, You're right; my bad.
[17:13] <gmb> I thought we used Google for that.
[17:13] <m4n1sh> me too :)
[17:13] <gmb> (Launchpad Bugs developer doesn't know about Launchpad Bugs)
[17:13] <gmb> m4n1sh, But no, that's not available in the API at the moment, I'm afraid.
[17:13] <gmb> Of that I'm sure.
[17:13] <m4n1sh> I could not find it in the API
[17:14] <m4n1sh> actually I am presenting a talk on PyCon India 2010 on launchpadlib
[17:14] <m4n1sh> so needed to know about this
[17:14] <m4n1sh> gmb: your help is appreciated :)
[17:14] <gmb> m4n1sh, I think that's because the API is basically an export of our underlying object model.
[17:14] <gmb> And the UI sits on top of that object model, so it can do things the API can't.
[17:15] <gmb> m4n1sh, There's probably a bug for it (I'm on the phone now, otherwise I'd check for you) but feel free to file one if there isn't.
[17:15] <m4n1sh> gmb: sure :) thanks
[17:16] <gmb> np
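For reference, the per-project search gmb describes takes roughly this shape with launchpadlib. The call is wrapped in a small helper and exercised against a stub below so the shape is visible without hitting the network; the project name and search criteria are illustrative.

```python
def search_project_bugs(launchpad, project_name, **criteria):
    """Search one project's bug tasks, per gmb's suggestion above.

    `launchpad` is a launchpadlib Launchpad object, e.g.:
        from launchpadlib.launchpad import Launchpad
        launchpad = Launchpad.login_anonymously("demo", "production")
    There is no global cross-project search in the API, so the project
    must be looked up first.
    """
    project = launchpad.projects[project_name]
    return project.searchTasks(**criteria)


# Offline demonstration with minimal stubs in place of a real session.
class _StubProject:
    def searchTasks(self, **criteria):
        # A real call might be searchTasks(status=["New", "Triaged"]).
        return [("Bug #1", criteria)]


class _StubLaunchpad:
    projects = {"launchpad": _StubProject()}


tasks = search_project_bugs(
    _StubLaunchpad(), "launchpad", status=["New", "Triaged"])
```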
[17:40] <leonardr> gary, rockstar: (rockstar, i know this isn't your problem, but maybe you can help anyhow)
[17:40] <gary_poster> :-)
[17:41] <leonardr> actually, let me check the wiki really quick, since the problem is probably that my dev process has gotten out of sync with launchpad's
[17:54] <leonardr> gary, rockstar: no, i'm fairly certain i've done everything right, and i'm getting an import error from shipit
[17:54] <leonardr> from canonical.cachedproperty import cachedproperty -> "No module named cachedproperty"
[17:54] <rockstar> leonardr, oh, shipit? Uh, I have no knowledge in that area.  :(
[17:54] <leonardr> rockstar: yeah, i said it wasn't your problem
[17:55] <rockstar> leonardr, I thought we moved all the shipit import stuff to somewhere under lp though.
[17:55] <gary_poster> I saw that leonardr...I think I ran utilities/update-sourcecode.  You've done that too?
[17:55] <tyarusso> All right, flailing e-mail sent to the mailing list - maybe someone out there has a clue :S
[17:56] <leonardr> gary: does rocketfuel-get not run update-sourcecode? i didn't run it specifically
[17:56] <gary_poster> :-) Good luck.  It won't be a core LP dev, I strongly suspect, and I'll be surprised if someone knows what to do, but who knows.  I'm sorry that the available options don't work for you. :-/
[17:56] <gary_poster> leonardr: yeah I think so
[17:57] <gary_poster> I mean, I think it does run it
[17:57] <gary_poster> it certainly should
[17:57] <gary_poster> I'll look...
[17:58] <gary_poster> it does
[17:58] <leonardr> when i run it manually i get a bzr repository conversion error!
[17:58] <gary_poster> in dulwich I bet
[17:58] <gary_poster> yeah?
[17:58] <leonardr> no, in pygettextpo
[17:59] <gary_poster> oh
[17:59] <leonardr> maybe i should just remove sourcecode and get everything again? i don't think i've run this properly for a _long_ time
[17:59] <gary_poster> yeah, maybe so leonardr.  I did surgery instead, myself
[18:00] <gary_poster> I found the broken ones and did a fresh branch in launchpad/lp-sourcedeps/sourcecode
[18:00] <gary_poster> of the broken ones reported by update-sourcecode
[18:00] <gary_poster> your call
[18:10] <cody-somerville> Who takes care of the svn import stuff? Can someone take a look at http://launchpadlibrarian.net/54208924/vcs-imports-django-trunk.log ?
[18:16] <rockstar> cody-somerville, are there branches from this branch?
[18:17] <cody-somerville> rockstar, it looks like it but most are hundreds of weeks old
[18:18] <rockstar> cody-somerville, I suggest creating a new import, and maybe deleting or Abandoning that branch.
[18:18] <rockstar> It's still using cscvs, which we're basically not maintaining anymore.
[18:21] <cody-somerville> rockstar, https://code.edge.launchpad.net/django <-- do you see how there are two series of the same name associated with lp:django? Is that intentionally possible?
[18:22] <rockstar> cody-somerville, whoa, that's weird.  I don't know if that's intentionally possible.  Maybe sinzui knows?
[18:22] <beuno> cody-somerville, one seems to be off the project group, and the other off of the prohect
[18:22] <beuno> *project
[18:22] <cody-somerville> beuno, both are just normal projects
[18:22] <rockstar> beuno, yeah, djangoproject should probably be deleted.
[18:22] <cody-somerville> yea
[18:22] <rockstar> Er, disabled.
[18:23] <rockstar> cody-somerville, can you file a bug about that, and I'll discuss it with the team this afternoon?
[18:23] <beuno> that is something interesting and new then!
[18:23] <cody-somerville> It seems possible to associate any branch in launchpad with a series
[18:24] <cody-somerville> Or no... if I try to edit the series in djangoproject I get an error
[18:24] <cody-somerville> so I wonder how the value it has now got set... maybe via web services API if it has different error checking?
[18:30] <lifeless> moin
[18:34] <cody-somerville> rockstar, bug #634280
[18:34] <_mup_> Bug #634280: Branch of another project associated with series <Launchpad Bazaar Integration:New> <https://launchpad.net/bugs/634280>
[18:34] <rockstar> cody-somerville, thanks
[18:35] <cody-somerville> rockstar, Is it possible to edit the URL for svn imports yet or does that still require a LOSA?
[18:35] <rockstar> cody-somerville, are you trying to edit the cscvs one?
[18:35] <cody-somerville> no, new one
[18:35] <rockstar> cody-somerville, link?
[18:35] <cody-somerville> rockstar, https://code.edge.launchpad.net/~vcs-imports/django/trunk (I renamed the old import to old-trunk)
[18:36] <cody-somerville> rockstar, It looks like maybe the trailing slash is problematic on the URL
[18:36] <rockstar> cody-somerville, looks like the old branch needs to be deleted...
[18:36] <cody-somerville> rockstar, why?
[18:37] <rockstar> cody-somerville, the urls need to be unique.
[18:37] <cody-somerville> You can't have two imports with the same URL?
[18:37] <rockstar> cody-somerville, nope.
[18:37] <rockstar> I just pointed the old-trunk to django/trunkDISABLED
[18:38] <cr3> in launchpad, an account has a identity url and a person has a an account and a name, does that mean that multiple people can share the same account?
[18:38] <lifeless> no
[18:39] <cr3> lifeless: so why doesn't the person contain the identity url instead?
[18:42] <cody-somerville> rockstar, it looks like it  failed again... :/
[18:42] <lifeless> cr3: account is there for shipit
[18:42] <lifeless> from a long time ago, nothing to do with openid
[18:43] <lifeless> it has grown and changed since then, but we're trying to delete the account table
[18:43] <rockstar> jelmer, see https://code.edge.launchpad.net/~vcs-imports/django/trunk - Does that make any sense?
[18:43] <lifeless> deryck[lunch]: when you return
[18:43] <lifeless> have a look at this:
[18:43] <lifeless> https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1713L718
[18:44] <cr3> lifeless: awesome, makes sense. I was wondering when logging in with openid how it might demultiplex the various person objects in the event there was a one to many relationship
[18:45] <lifeless> 'do not permit one' :P
[18:49] <leonardr> gary, how many packages are in your sourcecode/?
[18:49] <gary_poster> leonardr: 20
[18:50] <leonardr> gary: i've got 17, but i used to have 56. the difference between you and me probably reflects the continuing trend of moving things out of sourcecode
[18:51] <leonardr> i ask because i was still getting an error, but i think 'make' might fix it
[18:51] <lifeless> gary_poster: hi, I'd also like you to eyeball https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1713L718
[18:51] <gary_poster> yes, yay.  on call leonardr, but http://pastebin.ubuntu.com/491137/ fwiw
[18:51] <lifeless> gary_poster: don't interrupt your call; just look when you have a chance.
[18:51] <gary_poster> ack thanks lifeless
[18:52] <lifeless> we are making -huge- numbers of memcache calls - and they appear to be capable of blocking for 20ms
[19:13] <jelmer> rockstar: it works locally; it's hard to say much about it without the error from bzr-svn on lp
[19:13] <rockstar> jelmer, do we need better error logging?  Is that what you're saying?
[19:14] <jelmer> rockstar: it would be nice to have the bzr log output as well
[19:15] <rockstar> jelmer, okay.
[19:15] <jelmer> rockstar: Actually
[19:15] <jelmer> this looks like a launchpad-code bug
[19:15] <jelmer> the exception is being raised from the part of the code that fetches the existing import branch
[19:15] <rockstar> jelmer, I suspected it might be.
[19:16] <jelmer> my initial thought was that it was failing to open the svn branch using bzr-svn
[19:22] <deryck> hi lifeless.  Looking at the OOPS report now....
[19:23] <lifeless> deryck: so basically we're spending - FWICT - about 2 seconds in memcache on that page
[19:23] <lifeless> deryck: that combined with the low hit rate we're seeing makes me strongly suspect that turning off memcache in the template/tales will take 2 seconds off that page.
[19:24] <lifeless> we need to do some improvements to the oops aggregation facilities now that we're putting more data in them
[19:25] <lifeless> (I had to stare rather hard at the page to get a good sense of whats going on)
[19:26] <deryck> yeah, taking me a bit to process it too...
[19:26] <lifeless> so interestingly
[19:26] <lifeless> comments=all
[19:26] <lifeless> is in the memcache key
[19:27] <deryck> yeah, I was noticing that.  Seems like stupid caching in the first place.
[19:27] <lifeless> thats an issue : it means the short and long versions of the messages won't share cache keys.
[19:27] <lifeless> but perhaps on some pages that matters. Filing a bug on -foundations now vis-a-vis that
[19:28] <deryck> yeah, I'm trying to look at the template.  We didn't add this caching, so I need to remind myself what it's doing.
[19:29] <lifeless> deryck: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/634326
[19:29] <_mup_> Bug #634326: memcache cache keys interact poorly with query parameters <Launchpad Foundations:New> <https://launchpad.net/bugs/634326>
[19:30] <lifeless> deryck: Please understand, when I talk about turning memcache off for things, its not because I don't like it : its because I want the best performance for any given page.
[19:30] <deryck> lifeless, I do understand that.  I'm not against turning off bits of it or poor uses *at all* :-)  I was just against ripping it out wholesale.
[19:31] <lifeless> k
[19:31] <deryck> though it's more accurate to say I'm *for* learning to use it correctly and having the ability to use it correctly.
[19:31] <lifeless> right
[19:31] <lifeless> I have the sense though, that we haven't finished putting our house in order in the underlying layers yet :)
[19:32] <deryck> I completely agree
[19:32] <deryck> or have the same sense rather
[19:32] <lifeless> :)
[19:32] <deryck> which is okay if we rapidly iterate on it.
[19:32] <deryck> but as is, it has some issues.
[19:33] <deryck> lifeless, so I don't know this spelling, I guess:  "cache:authenticated,comment/rendered_cache_time,comment/index"  is that part of your logging work?
[19:33] <lifeless> hmm, for clarity by underlying layers I meant - db use; storm; tal rendering; python scheduling
[19:33] <lifeless> my logging work puts this in
[19:33] <deryck> ah, ok.  agree on that, too, though I thought you meant something different
[19:33] <lifeless> category=memcache-get
[19:33] <lifeless> details=$url where $url is the memcached url we're requesting
[19:34] <lifeless> so pt:lpnet:lp/bugs/templates/bugcomment-box.pt,9760:l:1,0:MjE=,https_//bugs.launchpad.net/ubuntu/+bug/1/+index?comments=all4N2mYcTmyKyEC6n0gBJBFG
[19:34] <_mup_> Bug #1: Microsoft has a majority market share <iso-testing> <ubuntu> <Clubdistro:Confirmed> <Computer Science Ubuntu:Invalid by compscibuntu-bugs> <EasyPeasy Overview:Invalid by ramvi> <GNOME Screensaver:Won't Fix> <Ichthux:Invalid by raphink> <JAK LINUX:Invalid> <The Linux OS Project:In Progress> <OpenOffice:In Progress by lh-maviya> <Tabuntu:Invalid by tinarussell> <Tivion:Invalid by shakaran> <Tv-Player:New> <Ubuntu:In Progress by sabdfl> <
[19:34] <lifeless> is the url in memcache
[19:34] <lifeless> the cache: stuff I didn't touch, thats the existing control-memcache-in-templates
[19:35] <deryck> hmmm, so I thought it was just "cache:type,howlong" so I need to refer to updated docs.
[19:36] <lifeless> anyhow
[19:36] <lifeless> if we're caching per url in the webapp that's going to go a long way to explaining the terrible hit rate.
[19:36] <lifeless> many objects appear in many urls.
[19:36] <lifeless> e.g person, branding, bug comments (per distrotask) etc
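The hit-rate problem lifeless describes, one cache entry per URL even when the same object appears on many pages, can be illustrated with a toy in-memory cache. This is a sketch of the general idea only, not Launchpad's actual memcache layer; the names and keys here are hypothetical.

```python
# Toy illustration (not Launchpad's real cache code): keying a rendered
# fragment by full URL stores one copy per page, so a person's branding
# that appears on three pages needs three cache fills; keying by the
# object itself lets every page share one entry.
cache = {}

def render_person(name):
    return f"<div class='person'>{name}</div>"

def get_by_url(url, name):
    # per-URL key: a miss and a fill for every distinct page
    if url not in cache:
        cache[url] = render_person(name)
    return cache[url]

def get_by_object(name):
    # per-object key: all pages showing this person share the entry
    key = f"person/{name}"
    if key not in cache:
        cache[key] = render_person(name)
    return cache[key]

urls = ["/ubuntu/+bug/1", "/ubuntu/+bug/2", "/~mark"]
for u in urls:
    get_by_url(u, "mark")       # fills 3 separate entries
for u in urls:
    get_by_object("mark")       # fills 1 entry, then hits it twice

per_url_entries = sum(1 for k in cache if k.startswith("/"))
per_obj_entries = sum(1 for k in cache if k.startswith("person/"))
```

Here `per_url_entries` is 3 and `per_obj_entries` is 1 for the same rendered content, which is the shape of the hit-rate loss being discussed.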
[19:37] <lifeless> hmmm
[19:38] <lifeless> create-question appears to send email via a different codepath or something. darn.
[19:38] <lifeless> no, thats not it. I wonder
[19:39] <lifeless> is email spooling perhaps deferred to the end of the request?
[19:39] <lifeless> gary_poster: ^
[19:39] <lifeless> https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1713N810
[19:39] <deryck> lifeless, so about this memcache question, are you asking me about ripping this out and seeing if we save 2 seconds?
[19:39] <lifeless> deryck: I'm speculating; I have a lot of data gathering to do now.
[19:40] <lifeless> deryck: but bugtask is one of your pain-pages
[19:40] <lifeless> and turning that off would be a very easy thing to do
[19:40] <deryck> lifeless, yeah, my concern was going to be that we don't know the savings (if any) vs the 2 sec. cost.
[19:40] <deryck> but I'm open to turn it off and see what oops appear.
[19:40] <lifeless> I think it would be worth trying
[19:41] <lifeless> even with the overhead of doing a CP to get it on prod, it will be a pretty cheap experiment
[19:41] <deryck> sure, I'm open to that.  With a close eye on it.  I do however wish we could feature flag it, if that works now.  So if it was a bust, we could turn it on/off easy.
[19:41] <deryck> I didn't put this in, so I don't know the problems it was trying to prevent.
[19:42] <deryck> or the knowledge that underpinned the choice.
[19:42] <lifeless> feature flags do work, I think you'd need to repeat the contained section though
[19:43] <lifeless> which is a bit ugly
[19:43] <deryck> yeah
[19:43] <lifeless> if we had a pageid /scope/ we could disable memcache for a pageid via a feature flag
[19:43] <lifeless> that should be reasonably easy to do, for someone familiar with the page id logic.
[19:43] <lifeless> I'll file a bug requesting that, and note we'd like to CP it to prod
[19:43] <deryck> sure, sounds good.
[19:44] <deryck> leave it New please, so I'll triage it and add it to the board.
[19:44] <deryck> forces me to add a card when I have to toggle it to triaged.
[19:44] <deryck> unless this is a foundations bug ;)
[19:45] <lifeless> its a foundations arena bug, but you might find it easy enough to JDI - sec while I file it
[19:47] <dobey> hmm, is there a known issue with the code scanner in lp right now?
[19:47] <lifeless> deryck: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/634342
[19:47] <_mup_> Bug #634342: need a features 'scope' for page ids <Launchpad Foundations:New> <https://launchpad.net/bugs/634342>
[19:48] <dobey> like it seems to not be scanning only some branches, and sending proposal e-mails without diffs for them
[19:50] <deryck> lifeless, got it.  I'll decide if I can JFDI that later today or not.  which goes a bit against JFDI, I know.
[19:50] <lifeless> deryck: :P
[20:19] <lifeless> deryck: how did abel go last night
[20:20] <deryck> lifeless, he was worn out from all this, so worked on an easy bug. :-)  But glad that we had it fixed for retracers.
[20:21] <lifeless> deryck: I'm glad too
[20:25] <lifeless> deryck: also on performance - https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1713S81
[20:25] <lifeless> I grabbed that for bug 1 on staging yesterday, with profile.
[20:25] <lifeless> I'm going to put it in the
[20:25] <gary_poster> lifeless, today is call day, but wading through backlog.  yes, email spooling is deferred to the end of the request.
[20:26] <lifeless> gary_poster: ahha
[20:26] <lifeless> gary_poster: can you point me at the thing that does that, so I can instrument it ?
[20:26] <gary_poster> end of transaction to be precise
[20:26] <lifeless> gary_poster: really? after request finalisation ?
[20:26] <gary_poster> yes
[20:26] <lifeless> can we change that?
[20:27] <gary_poster> bad idea IMO.  the idea is that if a transaction retries, we don't want to send multiple emails.
[20:27] <lifeless> to be on-part-of-something like that
[20:27] <lifeless> ok
[20:27] <lifeless> uhm, let me describe some symptoms
[20:27] <gary_poster> k
[20:27] <lifeless> https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1713N810
[20:27] <lifeless> 32 second request
[20:28] <gary_poster> need to go in a few; and then will be back on calls, btw
[20:28] <gary_poster> looking
[20:28] <lifeless> it sends 115 emails
[20:28] <lifeless> 2pc would solve this but be a bit nasty.
[20:28] <lifeless> doing the email spooling from a worker thread would solve it and not distort the request time
[20:29] <gary_poster> we're supposed to have the infrastructure to be doing this already. standard zope bit...
[20:29] <lifeless> so, anyhow, 260ms per email is one useful thing to know
[20:29] <lifeless> that seems slow.
[20:29] <gary_poster> it's supposed to put it in a worker thread *in* the 2pc
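The pattern gary_poster describes, spool mail during the request, deliver only after a successful commit so a retried transaction doesn't send duplicates, and do the actual sending on a worker thread off the request path, can be sketched with stdlib pieces. This is an illustrative shape only, assuming nothing about Zope's or Launchpad's real mailer; all names here are invented.

```python
import queue
import threading

outbox = queue.Queue()   # handed to the worker thread at commit time
sent = []                # stand-in for real SMTP delivery

def worker():
    while True:
        msg = outbox.get()
        if msg is None:          # sentinel: shut the worker down
            break
        sent.append(msg)         # real code would talk to SMTP here
        outbox.task_done()

class MailSpool:
    """Buffer messages per transaction; deliver only on commit."""
    def __init__(self):
        self.pending = []
    def send(self, msg):
        self.pending.append(msg)   # nothing leaves during the request
    def commit(self):
        for msg in self.pending:   # hand off; request thread isn't blocked
            outbox.put(msg)
        self.pending = []
    def abort(self):
        self.pending = []          # a retried transaction re-spools cleanly

t = threading.Thread(target=worker, daemon=True)
t.start()

spool = MailSpool()
spool.send("attempt 1 mail")
spool.abort()                 # transaction retried: nothing was sent
spool.send("attempt 2 mail")
spool.commit()                # success: the mail goes out exactly once
outbox.join()                 # wait for the worker to drain the queue
outbox.put(None)
t.join()
```

The request thread only ever appends to an in-memory list and enqueues at commit, so the 26 seconds of synchronous sending seen in the OOPS would move off the request entirely, and the abort path is what keeps a retried transaction from double-sending.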
[20:30] <lifeless> could I perhaps move the bug to foundations ?
[20:31] <gary_poster> I guess...it's either something for foundations or something that foundations would be happy to advise on
[20:31] <lifeless> blue sky, does pgsql let us ask 'will this commit succeed' ?
[20:31] <lifeless> its bug 438116, I'll update the details now we have some instrumentation.
[20:31] <_mup_> Bug #438116: Timeout when converting bug into question (BugTask:+create-question) <timeout> <Launchpad Foundations:Triaged> <https://launchpad.net/bugs/438116>
[20:32] <gary_poster> lifeless, I don't yet see 260ms per email on that OOPS.  give me a hint as to where I should be looking?
[20:32] <lifeless> gary_poster: look for sending emails
[20:33] <lifeless> or 'send' perhaps
[20:33] <lifeless> anyhow, 115 rows (by subtraction)
[20:33] <gary_poster> e.g. 352.	3617	0ms	sendmail	[Question #124730]: No vdpau-va-driver for amd64 in Maverick
[20:33] <_mup_> Bug #124730: Feisty - Sound stops after a few seconds and various apps fail to work properly <Ubuntu:Invalid> <https://launchpad.net/bugs/124730>
[20:33] <lifeless> and at the end
[20:33] <lifeless> the request does its session update @ 32 seconds in
[20:33] <lifeless> and it finished its sql work, which for this code is ~= to finishing completely, ~6 seconds in
[20:34] <lifeless> 26000/115
[20:34] <gary_poster> ah ok
[20:34] <lifeless> approximations-R-us-ly-yrs
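The back-of-envelope above: SQL work finished about 6 seconds into a 32-second request, with 115 emails sent in the remaining time, so roughly:

```python
# Rough figures from the OOPS discussed above (approximations, as noted).
request_ms = 32_000    # session update at ~32 seconds in
sql_done_ms = 6_000    # SQL work finished ~6 seconds in
emails = 115           # rows counted by subtraction

per_email_ms = (request_ms - sql_done_ms) / emails
```

That works out to about 226 ms per email, in the same ballpark as the ~260 ms lifeless quotes, and slow either way for a single outbound message.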
[20:36] <gary_poster> sure. :-) from the user's perspective, the request should have been sent back by 6 seconds, but agreed anyway that sending email ought to be done elsewhere.  as I said, the expected pattern is that you would spool in the commit phase.  well, since that's not happening, then maybe the user didn't get the reply till 6 seconds in either. :-/
[20:36] <gary_poster> didn't get the reply 6 seconds in I meant
[20:36] <gary_poster> I need to go get boys from school
[20:36] <gary_poster> then I have calls
[20:37] <gary_poster> I will look at how we are sending emails
[20:37] <gary_poster> if you happen to know the code path that would be nice
[20:37] <lifeless> thanks, no panic - this like other timeouts is just kanban backlog
[20:37] <gary_poster> (that is, where in LP the email is sent)
[20:38] <gary_poster> right, cool
[20:38] <lifeless> I will dig up some of that for you
[20:38] <gary_poster> thank you
[20:38] <gary_poster> biab
[21:45] <dobey> i guess this is the more active channel now?
[21:55] <lifeless> this is the development channel
[21:56] <EdwinGrubbs> matsubara: ping
[21:56] <matsubara> EdwinGrubbs, on the phone
[21:56] <matsubara> will ping you once I'm done
[21:56] <EdwinGrubbs> ok
[22:53] <wallyworld_> morning
[22:59] <gary_poster> lifeless: need to go, but re bug 633664, when you talk about double-buffering, you mean at the load balancer and in the app server, correct?
[22:59] <_mup_> Bug #633664: API file attachments are done via the appservers <Launchpad Foundations:Invalid> <https://launchpad.net/bugs/633664>
[22:59] <lifeless> gary_poster: the load balancer isn't buffering
[22:59] <gary_poster> apache?
[22:59] <lifeless> gary_poster: the double buffering is in the appserver
[23:00] <gary_poster> buffers in asyncore, then?
[23:00] <lifeless> gary_poster: from what you're telling me, yes.
[23:00] <gary_poster> right but, I mean, where else?
[23:00] <lifeless> gary_poster: there are some explicit things I know, and some things I don't.
[23:01] <lifeless> gary_poster: we're calling addFile in the appserver, and we have a facility to retry the request if the db conflicts -> that implies a buffer
[23:01] <gary_poster> (this isn't to ignore your other concerns, but I understand them)
[23:01] <gary_poster> but why a second buffer?
[23:01] <lifeless> ah
[23:01] <lifeless> I think I've added confusion
[23:02] <lifeless> what I mean is 'we're not sending it directly to where it belongs, but the design is intended to support big blobs by doing that'
[23:02] <gary_poster> ah!
[23:02] <gary_poster> got it
[23:02] <gary_poster> fair enough, understood
[23:02] <gary_poster> thanks
[23:02] <gary_poster> must run :-)
[23:02] <thumper> wallyworld_: morning
[23:02] <lifeless> happy to talk another day
[23:02] <lifeless> ciao
[23:03] <gary_poster> cool, but I understand now and agree that what we have is a problem
[23:03] <wallyworld_> thumper: happy birthday
[23:03] <gary_poster> thumper: happy birthday :-)
[23:03] <lifeless> gary_poster: cool, thanks.
[23:03] <thumper> thanks
[23:53] <jelmer> thumper: hi
[23:53] <jelmer> thumper: did you see the wiki page maxb added with failing bzr-svn imports summary?
[23:53] <thumper> jelmer: hi
[23:53] <thumper> no
[23:53] <thumper> I'm busy chasing a critical branch scanner fubar
[23:53] <thumper> due to bug heat
[23:53] <jelmer> I won't bother you then :-)
[23:53] <thumper> jelmer: good that chicken finally imported :)
[23:54] <thumper> I'm sure peter is happy
[23:54] <mwhudson> jelmer: i saw a lot of code imports marked failed overnight, do you know what that's all about?
[23:55] <jelmer> mwhudson: I retried all of the ones I thought *might* be fixed and some still failed, but a lot are now actually working
[23:55] <mwhudson> jelmer: oh ok
[23:55] <mwhudson> that's good
[23:56] <jelmer> mwhudson, we're down to less than 100 failures in the bzr-svn/bzr-git imports, 59 of which are caused by lack of support for nested trees
[23:57] <mwhudson> jelmer: \o/
[23:59] <wgrant> Are nested trees actually happening at some point?
[23:59] <lifeless> yes
[23:59] <wgrant> Well, more than they were two years ago? :P