[00:01] <lifeless> wgrant: https://code.launchpad.net/~lifeless/python-oops-datedir-repo/bug-891251/+merge/82469
[00:03] <wgrant> lifeless: Why is the umask 077 in the first place?
[00:04] <lifeless> I don't know the backstory
[00:04] <lifeless> we have multiple services like this though, have for ages
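For context on why the umask matters: a daemon running under umask 077 creates its OOPS files mode 0600, so a loader process running as a different user in the same group gets EACCES. A minimal sketch of the fix idea from bug 891251 — the function name and mode are illustrative, not the actual oops-datedir-repo code:

```python
import os

def write_oops(path, data):
    """Write an OOPS report and force group-readable permissions.

    Under umask 077 a plain open() yields mode 0600, which a loader
    running as another user cannot read.  An explicit chmod after
    writing overrides whatever umask the daemon inherited.
    """
    with open(path, 'wb') as f:
        f.write(data)
    os.chmod(path, 0o640)  # rw-r-----: owner writes, group can read
    return os.stat(path).st_mode & 0o777
```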
[00:17] <wallyworld_> StevenK: FYI, in my hide bug comments mp, i've added 2 methods to Bug which i believe may be similar to what you need - userCanSetCommentVisibility and userInProjectRole
[00:18] <StevenK> wallyworld_: I fear we may step on each others toes, then.
[00:18] <StevenK> wallyworld_: Let me finish what I'm in the middle of, and I'll show you what I have.
[00:18] <wallyworld_> ok
[00:19] <wallyworld_> we should try and consolidate
[00:19] <StevenK> Exactly
[00:34] <StevenK> wallyworld_: http://pastebin.ubuntu.com/740741/
[00:37] <wallyworld_> StevenK: so it looks like code under # Otherwise, if you're a member of the pillar owner, drivers..... is similar to my userInProjectRole(), but my method iterates over all affected pillars and also checks for security contact
[00:37]  * StevenK grumbles at Unity
[00:38] <StevenK> If I run indicator-weather, I want you to stay in the notification area!
[00:38] <wallyworld_> StevenK: i've added new methods to IPersonRoles: isBugSupervisor(obj) and isSecurityContact(obj). you could use these, plus the existing isDriber() API instead of your code

[00:38] <StevenK> isDriber()? Sounds interesting.
[00:38]  * StevenK ducks.
[00:38] <wallyworld_> you know i can't type :-P
[00:39] <StevenK> Why all affected pillars?
[00:39] <wallyworld_> because it's to control who can hide a bug comment
[00:40] <StevenK> Ah
[00:40] <lifeless> win
[00:40] <StevenK> Right
[00:40] <lifeless> win win win win win
[00:40] <wallyworld_> ie the user is validated against the bug, not an individual bugtask
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-]     self._oops_datedir_repo = DateDirRepo(config[section_name].error_dir)
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-]   File "/var/launchpad/tmp/eggs/oops_datedir_repo-0.0.12-py2.6.egg/oops_datedir_repo/repository.py", line 93, in __init__
[00:40] <StevenK> wallyworld_: Right.
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-]     self.metadatadir = os.path.join(self.root, 'metadata')
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-]   File "/usr/lib/python2.6/posixpath.py", line 67, in join
[00:40]  * StevenK kicks lifeless 
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-]     elif path == '' or path.endswith('/'):
[00:40] <lifeless> 2011-11-17 05:45:37+0530 [-] AttributeError: 'NoneType' object has no attribute 'endswith'
[00:40] <lifeless> we have an error_dir setting of None somewhere.
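The traceback above is os.path.join choking on a None root. Validating the config up front gives a much clearer failure; a hedged sketch (the function name is made up, and under Python 3 the unguarded join raises TypeError rather than the 2.6-era AttributeError shown above):

```python
import os

def make_metadata_dir(error_dir):
    # Fail fast with a readable error instead of letting os.path.join
    # blow up deep inside DateDirRepo.__init__.
    if error_dir is None:
        raise ValueError("error_dir is not configured for this section")
    return os.path.join(error_dir, 'metadata')
```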
[00:40] <lifeless> StevenK: its 6 lines, deal
[00:41] <StevenK> lifeless: Except with your internets, it takes 20 seconds per line
[00:41] <lifeless> TacException: Unable to start /var/launchpad/test/daemons/poppy-sftp.tac. Content of /tmp/tmp_Nitmg/poppy-sftp.log:
[00:41] <lifeless> this might explain a lot, actually.
[00:39] <wallyworld_> StevenK: not sure if we want to introduce an isUserInProjectRole(pillar). your stuff doesn't require isSecurityContact(). maybe it's sufficient for you to just use the IPersonRoles stuff
[00:44] <wallyworld_> StevenK: https://pastebin.canonical.com/55928/
[00:44] <wallyworld_> plus there's the other existing IPersonRoles helpers
[00:44] <wgrant> lifeless: We have lots of missing errordirs.
[00:44] <wgrant> lifeless: You've not seen that before?
[00:45] <lifeless> wgrant: it wasn't fatal before
[00:46] <lifeless> wgrant: (for some bizarre reason)
[00:46] <wgrant> That traceback was always there.
[00:47] <lifeless> wgrant: poppy is failing to start in ec2
[00:47] <lifeless> wgrant: this is the error
[00:47] <wgrant> Hmmm
[00:47] <wgrant> I see
[00:47] <lifeless> wgrant: I dispute 'always there'
[00:47] <lifeless> wgrant: what may well look similar is the oops-raising code barfing if errordir is None
[00:47] <lifeless> wgrant: this evaluates it for sanity a little earlier, because it was simpler.
[00:47] <wgrant> Right.
[00:48] <lifeless> as such I think it's a net win, but it pushes back landing my oops polish branch
[00:50] <lifeless> poolie: which reminds me, any word on that email oops raising critical?
[01:14] <sinzui> StevenK, do you have time to mumble about goals
[01:15] <StevenK> sinzui: Certainly.
[01:15]  * StevenK fights with Unity on his laptop
[01:15] <lifeless> how does this even work on prod
[01:16] <lifeless> we've no error dir or oops prefix set for poppy.
[01:18] <wgrant> poppy isn't meant to OOPS, is it?
[01:18] <lifeless> hah
[01:18] <wgrant> I didn't know it had anything installed to raise OOPSes.
[01:21] <lifeless> SSHService
[01:21] <lifeless> takes a parameter for the oops config section to use, and installs twisted top level oops glue
[01:21] <lifeless> so yeah, it does
[01:21] <lifeless> but it has never ever worked
[01:21] <wgrant> Hah
[01:22] <wgrant> Well, poppy-sftp is not yet 18 months old.
[01:29] <lifeless> wgrant: can you please do an incremental review on https://code.launchpad.net/~lifeless/launchpad/oops-polish/+merge/82359 for me ?
[01:31] <lifeless> wgrant: if you do, I can lp-land this bad boy (all the tests that failed are fixed by these commits)
[01:34]  * StevenK kicks subunit for being stupid
[01:35]  * subunit kicks StevenK for using me badly
[01:35] <lifeless> StevenK: seriously, whats the issue?
[01:37] <StevenK> lifeless: I have a stream, I want a list of failing tests only.
[02:00] <wgrant> huh
[02:00] <wgrant> MP diff in less than 30 seconds
[02:04] <StevenK> Something must be wrong, then. :-P
[02:06] <StevenK> lifeless: OCR: Me! is about as useful as [rs=me] in a commit log.
[02:09] <lifeless> StevenK: if you can't figure it out you don't need a review
[02:09] <huwshimi> lifeless: I have a review for you :P
[02:09] <huwshimi> ok, ok, enough with the hassling
[02:10] <lifeless> StevenK: | subunit-filter --no-skips | subunit-ls
[02:10] <lifeless> StevenK: I seem to be giving you this recipe weekly, or something
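As a rough stdlib illustration of what that pipeline extracts: the v1 subunit text protocol is line-oriented (`test:`, `success:`, `failure:`, `error:`, `skip:` prefixes), so listing only the failing tests is a small parse. This toy parser ignores the bracketed detail blocks a real stream carries; for real work use subunit-filter and subunit-ls as in the recipe above.

```python
def failing_tests(stream):
    """Return ids of failed/errored tests from a subunit v1 text stream."""
    failed = []
    for line in stream.splitlines():
        for prefix in ('failure:', 'error:'):
            if line.startswith(prefix):
                # Drop a trailing ' [' that introduces a details block.
                failed.append(line[len(prefix):].strip().rstrip('[').strip())
    return failed
```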
[02:13]  * wallyworld_ goes to have lunch with bigjools who is melting in the heat
[02:13] <lifeless> wallyworld_: he's in .au? say hi
[02:14] <lifeless> wgrant: so, incremental on https://code.launchpad.net/~lifeless/launchpad/oops-polish/+merge/82359 ?
[02:14] <wallyworld_> lifeless: he's in brisbane for 2 weeks to check out schools and houses
[02:15] <huwshimi> wallyworld_: Tell him to move somewhere a little further south
[02:15] <lifeless> somewhere nice :P
[02:15] <poolie> lifeless, yeah i might try changing the counters
[02:15] <poolie> ec2 on tmpfses is doing approximately zero io
[02:15] <poolie> that's nice
[02:15] <wgrant> lifeless: Sure
[02:16] <poolie> i haven't measured the 'before' though
[02:16] <wallyworld_> huwshimi: it still gets hot down there too. adelaide for example is often hotter than brisbane
[02:16] <wgrant> Yes, but it's *Adelaide*.
[02:17] <wallyworld_> could be worse - could be sydney or melbourne
[02:17]  * wallyworld_ runs away
[02:18] <wgrant> lifeless: Why are you UTF-8ing the URL in errorlog.py?
[02:20] <lifeless> wgrant: to eliminate the double escaping
[02:20] <wgrant> Mmm, OK
[02:20] <wgrant> Anyway, approved.
[02:21] <lifeless> wgrant: the u1/isd wsgi stack gets de-%coded, de-utf8 coded urls
[02:21] <mwhudson> this whole visiting countries before moving to them is very cautious!
[02:21] <lifeless> wgrant: so oops tools treats a unicode url as probably garbage and immediately utf8 and % encodes it
[02:21] <lifeless> wgrant: that's why our oopses look bung url-wise atm
[02:22] <lifeless> wgrant: I figure the only semi-sane way out is to treat bytestring urls as 'from the wire' and unicode ones as 'from no one knows where'
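A quick Python 3 illustration of the double-escaping lifeless describes (the URL is made up): a unicode URL gets UTF-8 and %-encoded on the assumption it is raw, and if it was in fact already encoded, the percent signs themselves get escaped, which is why the OOPS URLs look mangled.

```python
from urllib.parse import quote, unquote

# A made-up URL with one non-ASCII character.
url = 'https://launchpad.net/~user/+junk/caf\u00e9'

# Encoding once gives the correct wire form; encoding the wire form
# again (treating an already-encoded URL as raw) escapes the '%'s.
wire = quote(url, safe=':/+')        # what should go on the wire
double = quote(wire, safe=':/+')     # the double-escaped garbage
```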
[02:23] <StevenK> lifeless: That might be a hint that subunit needs better docs
[02:23] <poolie> deleting 'import threading' and 'import random' from test code can only be good
[02:24] <StevenK> wallyworld_: How is he melting in the heat? It's like 29 there?
[02:24] <wgrant> Eep
[02:25] <wgrant> regression
[02:25] <wgrant> 1716 InconsistentBuildFarmJobError: Could not find all the related specific jobs.
[02:25] <StevenK> Crap
[02:38] <poolie> review for build failure please? https://code.launchpad.net/~mbp/launchpad/891028-notunique
[02:41] <wgrant> poolie: The critical bit is probably fixed by the rollback. We'll see in half an hour or so.
[02:41] <poolie> oh you rolled back everything?
[02:41] <poolie> all 5 revisions or whatever
[02:41] <wgrant> [Branch ~launchpad-pqm/launchpad/devel] Rev 14304: [testfix][r=bac][rollback=14301]
[02:41] <poolie> it's still a bloody stupid bug, if it is what it seems to be
[02:43] <rick_h_> heh, that's what I was planning on doing last night
[02:43] <rick_h_> poolie: so when I brought that up in our stand up, it got brought up that there's some part of the tests that uses the number for a "number of iterations" or something?
[02:43] <rick_h_> hmm, well you're still using count() though nvm
[02:44] <lifeless> StevenK: well, TBH you're the only person that has this trouble
[02:44] <poolie> if they are counting on getting particular versions they need to be shot
[02:44] <lifeless> StevenK: perhaps the other users are unix diehards
[02:48] <lifeless> StevenK: have you considered using testr ?
[02:53] <lifeless> okies, time to vote, and due to early start, EOD too.
[02:53] <lifeless> enjoy y'all
[03:09] <StevenK> lifeless: Clearly, I am not smart enough to use subunit.
[03:15] <thumper> can anyone give me a one liner to search for critical and high bugs using launchpadlib?
[03:16] <thumper> and I don't suppose I can search in one go over both packages and projects can I?
[03:16] <thumper> wgrant, poolie, StevenK: ^^?
[03:16] <StevenK> I don't think searchTasks() will bend to that particular case.
[03:18] <wgrant> somecontext.searchTasks(importance=['Critical', 'High']), I expect
[03:18] <wgrant> It will only search over one context, or all contexts.
[03:19] <StevenK> So you probably want to fetch your DSPs and then sprinkle in your products and call searchTasks over each of them
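Putting thumper's one-liner together: searchTasks only spans a single context, so covering several projects and packages means looping and merging. FakeTarget below is an offline stand-in for a launchpadlib object so the sketch is runnable; the searchTasks(importance=...) keyword mirrors the real call wgrant quotes above.

```python
class FakeTarget:
    """Offline stand-in for a launchpadlib project or source package."""
    def __init__(self, tasks):
        self._tasks = tasks

    def searchTasks(self, importance):
        # The real method takes the same keyword and returns a lazy
        # collection; a plain list is close enough for illustration.
        return [t for t in self._tasks if t['importance'] in importance]

targets = [
    FakeTarget([{'id': 1, 'importance': 'Critical'},
                {'id': 2, 'importance': 'Low'}]),
    FakeTarget([{'id': 3, 'importance': 'High'}]),
]
# One searchTasks call per context, merged into a single list.
hot = [task for target in targets
       for task in target.searchTasks(importance=['Critical', 'High'])]
```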
[03:19] <wgrant> jtv: Can we have our builders back yet?
[03:20] <thumper> wgrant: thanks
[03:20] <jtv> wgrant: actually I was hoping to get you to try it for ubuntu uploads first.
[03:21] <jtv> Only my mouse pointer went off somewhere and using my system's a bit hard.
[03:25] <wgrant> lifeless: Are the tracebacks not going to ENOSPC us reasonably quickly?
[03:30] <thumper> wgrant: how do I get a source package from a distro (ubuntu) in launchpadlib to use as a bug target?
[03:30] <jtv> wgrant: any chance you could try out a bunch of concurrent ubuntu builds on dogfood?
[03:33] <wgrant> thumper: lp.distributions['ubuntu'].getSourcePackage(name='some-broken-compiz-plugin')
[03:33] <lifeless> wgrant: the new pruner is ready and usable
[03:33] <wgrant> lifeless: Ah!
[03:33] <wgrant> lifeless: That is good news.
[03:34] <lifeless> wgrant: its in oops-datedir-repo 0.0.12
[03:34] <poolie> lifeless, i'm having a bit of trouble understanding why you would want bugs in the help text to just not be tracked at all
[03:34] <thumper> ta
[03:34] <lifeless> wgrant: and I've pinged a few rt's about it :P
[03:34] <wgrant> lifeless: Right, but there are like 100 RTs :)
[03:34] <wgrant> And not much IS this week.
[03:35] <lifeless> poolie: uhm, I may well have knee jerked.
[03:35] <wgrant> jtv: Maybe.
[03:35] <lifeless> poolie: the combination of not-code and critical got to me
[03:35] <wgrant> jtv: Ubuntu in particular, or just non-virtual?
[03:36] <lifeless> poolie: perhaps a better reaction would have been to retarget it to launchpad-moin-help-theme and priority low
[03:36] <poolie> imbw, i thought broken links were critical
[03:36] <poolie> it's not a bug in the theme either
[03:36] <poolie> i will, as you suggest, just not mention the ppa
[03:36] <poolie> people can use the one on oneiric
[03:36] <lifeless> poolie: oopses are critical; broken links generated from within LP oops (by design) and those are thus critical
[03:39] <lifeless> \o/ mirror finally got onto multiverse
[03:39] <lifeless> now I can install locally, in principle.
[03:41] <wgrant> mirror?
[03:42] <lifeless> I've been bootstrapping an ubuntu archive mirror for a while now
[03:42] <thumper> gah
[03:42] <lifeless> slow nets
[03:42] <lifeless> long pipe
[03:42]  * thumper foiled by unicode again
[03:42] <wgrant> Oh.
[03:42] <lifeless> 150kB/s peak rate
[03:42] <wgrant> I was about to say, the mirror package has been around for as long as I can remember.
[03:45] <poolie> jesus
[03:46] <poolie> the use of randomness in tests just gets worse
[03:48] <lifeless> allenap: https://bugs.launchpad.net/txlongpoll/+bug/891251 may interest you
[03:48] <_mup_> Bug #891251: txlongpoll oopses are recorded with the wrong permission, causing oops loader script to fail with a permission denied error <Python OOPS Date-dir repository:Fix Released by lifeless> <txlongpoll:Triaged> < https://launchpad.net/bugs/891251 >
[03:48] <jtv> wgrant: in principle, we want to exercise all the kinds of jobs we have.  I just don't feel we can realistically do it.  I think I'll try TTBJs though.
[03:58] <poolie> lifeless, thanks for the review of the randomness patch
[03:58] <poolie> it turns out the testtools one will not quite fit
[04:05] <lifeless> ah well
[04:05] <poolie> it's good to know it's there though
[04:15] <jtv> wgrant: I think it would make sense to do some ubuntu uploads on dogfood, in case it exercises other code paths that write to the database without having been permitted to do so.  Once my branch lands, there's a good chance that we'll find more of these.  We should minimize that risk though.
[04:18] <wgrant> jtv: You can't throw your 10 zillion hellos at Ubuntu?
[04:18] <jtv> Can I?
[04:19] <wgrant> Why not?
[04:19] <jtv> Indeed, why not?
[04:23] <jtv> wgrant: what location should I feed to dput?  My “incoming” setting is “%(dogfood)s”.
[04:23] <wgrant> jtv: dogfood:ubuntu
[04:23] <jtv> Thanks.
[04:23] <wgrant> Or dogfood:ubuntu/oneiric to override the series
[04:26] <poolie> it's kind of interesting how the lp test suite doesn't use a full cpu even when it's not doing io
[04:26] <poolie> i don't know what it is doing
[04:26] <jtv> poolie: sledgehammer trick — ctrl-C it and look at the backtrace.
[04:27] <poolie> mm i know
[04:27] <jtv> My apologies.
[04:28] <jtv> I guess we're not doing enough context switches (yet) to make that the cause.
[04:28] <poolie> it's ok
[04:28] <jtv> I mean, to make context switches the cause of such symptoms.
[04:28] <poolie> we might be
[04:28] <poolie> there is a lot of interprocess io
[04:29] <poolie> mwhudson, still here? could you read https://code.launchpad.net/~mbp/launchpad/ec2-region/+merge/82487
[04:29] <mwhudson> poolie: looking
[04:31]  * mwhudson goes cross-eyed
[04:31] <mwhudson> time to try meld
[04:31] <poolie> :/
[04:31] <jtv> poolie: I'd be quite interested to learn how you see the interprocess I/O going on, incidentally!
[04:32] <poolie> strace
[04:32] <poolie> lots of reading and writing on sockets
[04:32] <jtv> Ah   :)
[04:32] <poolie> i guess talking to the db
[04:32] <jtv> Does strace show the difference between a socket and a file?
[04:32] <mwhudson> there is also iotop i guess?
[04:32]  * jtv runs that
[04:33] <jtv> Fun, thanks!
[04:33] <poolie> not directly
[04:33] <poolie> though if you see it previously polling on the fd, or things like that, it's a good clue
[04:33] <jtv> True.
[04:34] <jtv> There's also the "wa" column in "top," but that's only a global number.
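jtv's sledgehammer trick has a gentler stdlib cousin in Python 3: faulthandler can dump every thread's current stack on a signal without killing the run (the Python 2.6 suite discussed here had to use ctrl-C or a third-party module). A minimal sketch:

```python
import faulthandler
import signal

# Register SIGUSR1 so that `kill -USR1 <pid>` prints all thread
# stacks to stderr while the test run keeps going (Unix only).
faulthandler.register(signal.SIGUSR1)
```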
[04:34] <StevenK> poolie: Multiverse is required
[04:35] <poolie> StevenK, it's added by the sed line a bit higher up
[04:35] <poolie> in a way that does not hardcode the region
[04:35] <StevenK> Right
[04:36] <jtv> wgrant: where do I observe my uploads to ubuntu?  /ubuntu/+builds?  Or is there more?
[04:37] <StevenK> /ubuntu/+source/<spn>
[04:37] <jtv> Thanks.
[04:37] <poolie> StevenK, is that what you were asking about?
[04:38] <StevenK> poolie: Yes, but your branch still makes me ... uneasy
[04:38] <poolie> cause it's changing something dangerous?
[04:39] <StevenK> Because it's changing something that isn't well tested.
[04:39] <poolie> yeah i know
[04:39] <StevenK> And ec2 issues are hard to debug
[04:39] <poolie> much like testing lplib clients, it's hard to test it without a whole server cluster on the other end
[04:39] <poolie> fwiw this makes several failure modes easier to understand
[04:39] <StevenK> Which isn't helped by the ec2 test runner going "Hrm, can't work out what is going on. sudo halt"
[04:40] <poolie> yeah
[04:40] <poolie> we could easily have an option to not do that
[04:42] <mwhudson> poolie: surely you can delete _convert_instance_type now?
[04:42] <poolie> fwiw i've run it several times here with no unfixed errors
[04:43] <poolie> done
[04:45]  * mwhudson is reminded that, now we don't do the private dependency thing, ec2 test could detach plenty quicker
[04:45] <poolie> that would be nice
[04:45] <poolie> especially when travelling
[04:46] <StevenK> mwhudson: Fix eet?
[04:46] <poolie> StevenK, i'm kind of glad i broke my personal ice of finding this too scary to change
[04:46] <mwhudson> StevenK: not today
[04:46] <mwhudson> poolie: \o/
[04:46] <wgrant> Well, there are several tests or
[04:46] <wgrant> functions reached from tests that reset the random seed, for instance
[04:46] <wgrant> test_token_creation just sets it flat out to zero
[04:46] <wgrant> What
[04:46] <mwhudson> it's not that bad code
[04:46] <wgrant> seriously?
[04:46] <poolie> "wgrant moment"
[04:47] <wgrant> Heh
[04:47] <mwhudson> !!!!!!
[04:47] <poolie> someone from ubuntu will tut me if i write down here what i actually said
[04:47] <poolie> "i want something kinda random but not actually random"
[04:48] <mwhudson> the reason for the randomness iirc was to stop people putting 'person-name15' in doctests
[04:48] <mwhudson> :(
[04:48] <poolie> haha
[04:48] <wgrant> Yeah :(
[04:48] <poolie> it now starts at 100,000 and counts up across all tests
[04:48] <StevenK> Which didn't actually work
[04:49] <poolie> people will realize pretty soon it's not predictable
[04:49] <poolie> :/
[04:49] <StevenK> Since people started doing factory.makePerson(name='fred')
[04:49] <poolie> i hope they don't count on it being 6 digits but we can only do so much
[04:49] <StevenK> TBH, I write my tests to not care. I just want an IPerson.
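A toy version of the change being discussed above: replace the random name suffix (which tests quietly started depending on) with a process-wide counter starting at 100,000, so generated names are deterministic but never look like a hand-written id. Names here are illustrative, not the real LaunchpadObjectFactory code:

```python
import itertools

class ObjectFactory:
    """Toy unique-name factory (illustrative, not the real LP factory)."""
    def __init__(self, start=100000):
        # One counter shared across the whole run: deterministic, and
        # large enough that no one would type it into a doctest.
        self._counter = itertools.count(start)

    def get_unique_string(self, prefix='person-name'):
        return '%s%d' % (prefix, next(self._counter))
```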
[04:50] <mwhudson> StevenK: that's ok
[04:50] <jtv> wgrant: process-upload.py doesn't like “dogfood:ubuntu” as my dput upload path.  It says: Could not find distribution 'dogfood:ubuntu'.
[04:50] <mwhudson> if you care about the name, you can care about the name
[04:50] <mwhudson> you just have to be explicit about it
[04:50] <wgrant> jtv: dogfood:ubuntu, not dogfood:dogfood:ubuntu
[04:50] <poolie> so can i get an upvote?
[04:50] <poolie> https://code.launchpad.net/~mbp/launchpad/891028-notunique/+merge/82480
[04:50] <nigelb> DId you guys fix the random thing not being actually random? :)
[04:50] <jtv> wgrant: whoops, you're right.  Sorry.
[04:50] <poolie> i think doing random.seed(0) at the start of the tests is arguably a good idea
[04:50] <mwhudson> poolie: approved your branch btw
[04:51] <poolie> let's not pretend
[04:51] <poolie> thanks
[04:51] <mwhudson> poolie: yeah, seed dependent failures are bad
[04:51] <poolie> nigelb, see above
[04:51] <nigelb> \o/
[04:51] <nigelb> Nice.
[04:51]  * nigelb reads the mail thread as well
[04:53] <nigelb> How did these tests work until this week.
[04:54] <poolie> there used to be say 2500 getUnique calls before you reached this test
[04:54] <poolie> now there are 2501, and that causes the counters to collide
[04:54] <poolie> or 2512312 and 2512313
[04:54] <rick_h_> lol, damn +1
[04:54] <poolie> not necessarily exactly one more
[04:54] <nigelb> oh. wow.
[04:54] <poolie> but the patterns aligned
[04:54] <poolie> i'm not totally sure but it looks a lot like that
[04:55] <mwhudson> poolie: approved your other branch too
[04:55] <poolie> i have to confess i have not run its own tests locally yet
[04:56] <poolie> for ec2
[04:57] <wallyworld_> lifeless: oops raising question for you. attempting to delete the last bug task raises a CannotDeleteBugtask exception and oops in browser. this doesn't normally happen except if two users are involved and step on each others' toes. i am going to fix the web ui to display a nice message. if i catch the exception in the view code, will that still generate an oops on the server side? do we care?
[04:57] <wgrant> wallyworld_: If you catch and don't reraise, it won't log an oops.
[04:57] <lifeless> the view code is server side
[04:57] <wgrant> Only unhandled exceptions log oopses.
[04:58] <lifeless> if you catch it, its handled, fin.
[04:58] <wallyworld_> lifeless: cool. i was hoping that would be the case, just wanted to double check. thanks
[04:59] <wallyworld_> wgrant: thanks to you too
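The rule wgrant and lifeless state, in miniature: only exceptions that escape the view reach the error-reporting machinery, so catching the exception in the view both shows a friendly message and suppresses the OOPS. Everything below is an illustrative stand-in, not real Launchpad code:

```python
class CannotDeleteBugtask(Exception):
    pass

def delete_bugtask(remaining_tasks):
    # Model layer: refuses to delete the last task on a bug.
    if remaining_tasks <= 1:
        raise CannotDeleteBugtask("A bug must have at least one task.")
    return remaining_tasks - 1

def delete_view(remaining_tasks):
    # View layer: the exception is caught here, so it never reaches
    # the error-reporting glue and no OOPS is logged.
    try:
        return delete_bugtask(remaining_tasks)
    except CannotDeleteBugtask as err:
        return 'notice: %s' % err
```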
[05:01] <jtv> wgrant: found out why my builds aren't getting through — I have no upload rights to Ubuntu Main.
[05:01] <jtv> Anything simple I can poke in the database to change that?
[05:02] <wgrant> Add yourself to ubuntu-core-dev
[05:02]  * jtv fiddles
[05:03] <huwshimi> wallyworld_: Following up on yesterday, I'm just now taking a look, and not quite sure how I override the io request for the test (in https://code.launchpad.net/~huwshimi/launchpad/tag-cloud-removal-709009/+merge/81689)
[05:03]  * wallyworld_ looks
[05:04] <huwshimi> wallyworld_: It's trying to get a URL for the request from the HTML
[05:04] <huwshimi> wallyworld_: I can pass in the config, but what I really need to do is pass it the response
[05:04] <huwshimi> wallyworld_: I might be missing something here
[05:06] <wallyworld_> huwshimi: so you want to force io_provider to be an instance of MockIO so that your test can check the correct url is being used in the get and also to allow you to poke in a return value
[05:06] <wallyworld_> so in the call to setup_taglist(config), you make config = {io_provider: your_mockio_instance}
[05:07] <wallyworld_> your test will call setup_taglist
[05:08] <wallyworld_> once setup_taglist is called by your test, then you poke in the return value
[05:08] <wallyworld_> by calling a method on the mockio instance
[05:09] <poolie> are there any objections to me landing the deletion of lib/canonical/buildd?
[05:09] <poolie> it will then only be an external dependency
[05:09] <wgrant> If the tests pass, go ahead.
[05:09] <poolie> we have already deployed from the external tree to all the buildds
[05:09] <poolie> thanks
[05:10] <wallyworld_> huwshimi: see for an example get_intensity_listing in test_buglisting.js (there are many others also)
[05:11] <wallyworld_> huwshimi: ListingNavigator is constructed with a mockio instance
[05:11] <wallyworld_> huwshimi: then later, mock_io.last_request.successJSON(xxxx) is called to simulate a value sent back from the xhr call
[05:12] <wallyworld_> clear as mud?
[05:13] <huwshimi> wallyworld_: hmmm
[05:14] <wallyworld_> is that a good hmmm or a bad hmmm
[05:15] <huwshimi> wallyworld_: Sorry I'm dealing with too many things at the moment, it's a slightly unsure hmmm, I'll have a proper look in a sec
[05:15] <wallyworld_> np. take your time
[05:24] <jtv> wgrant: I've got some uploads, which process-upload.py said it set to UNAPPROVED.  What comes next?
[05:25] <huwshimi> wallyworld_: Oh, so I need to call the function that would be called on io:success manually?
[05:25] <wgrant> jtv: Approve them :)
[05:25] <jtv> How?
[05:25] <wgrant> Add yourself to ubuntu-archive
[05:25]  * jtv fiddles
[05:26] <wgrant> And go to https://launchpad.net/ubuntu/someseries/+queue
[05:26] <wgrant> +dogfood
[05:26] <wallyworld_> huwshimi: no, that is done when you poke a successful response back into the mock io provider using for example successJSON(xxx)
[05:27] <wallyworld_> huwshimi: you can also pass in data simulating an error response
[05:27] <huwshimi> wallyworld_: Ah I see
[05:28] <wallyworld_> huwshimi: so, if you factor out the success handler to its own method, you can do stuff like test that method separately without any io involved at all
[05:29] <wallyworld_> just call it from your test with data and see that the DOM is manipulated as expected
[05:29] <wallyworld_> there's a few ways to skin the proverbial cat
[05:29] <wallyworld_> huwshimi: more examples in test_bugtask_delete.js
[05:30] <huwshimi> wallyworld_: Yeah, was looking at that
[05:30] <huwshimi> wallyworld_: I think I'm understanding now
[05:30] <wallyworld_> :-)
[05:30] <huwshimi> wallyworld_: I didn't realise that calling mockio.success() would fire "io:success"
[05:31] <wallyworld_> ah, right. it's handy like that.
[05:31] <wallyworld_> it allows you to simulate a server request -> response without getting into all the io glue
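The pattern wallyworld_ is describing, sketched in Python rather than YUI (all names below are illustrative): hand the code under test a mock io provider, inspect what URL it requested, then "poke in" a response and assert on what the success handler did with it.

```python
class MockIO:
    """Toy stand-in for the mockio helper: records the request and
    defers the success callback until the test triggers it."""
    def __init__(self):
        self.last_request = None
        self._on_success = None

    def io(self, url, on_success):
        self.last_request = url
        self._on_success = on_success  # no real I/O happens

    def success(self, response):
        # The test calls this to simulate the server's reply.
        return self._on_success(response)

def setup_taglist(config):
    # Illustrative code under test: fetches tags via whatever io
    # provider it was configured with, then sorts them.
    config['io_provider'].io('/+tags', on_success=lambda tags: sorted(tags))
```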
[05:39] <wgrant> poolie: Did you break lp-buildd?
[05:39] <wgrant> make: dh_lintian: Command not found
[05:39] <wgrant> Trying to build 100 in hardy
[05:48] <huwshimi> wallyworld_: Is there a way to make the log console more verbose? It's telling me about an error, but doesn't tell me where
[05:49] <wallyworld_> huwshimi: you looking at the log in the page or the browser console (F12 in firefox)
[05:49] <wallyworld_> ?
[05:49] <poolie> wgrant, i hope i didn't break it
[05:49] <poolie> where did that happen?
[05:49] <wgrant> https://launchpad.net/~launchpad/+archive/ppa/+build/2932600
[05:49] <wallyworld_> huwshimi: i think firebug's browser log may be more verbose
[05:49] <wgrant> Maybe it was jelmer(?)'s cleanups.
[05:50] <wallyworld_> huwshimi: other than that, it's debug js time or add Y.log()s
[05:50] <huwshimi> wallyworld_: Hmm.. that just shows me an error in yui.js
[05:50] <wallyworld_> huwshimi: ah, you haven't included all dependencies in the html file
[05:51] <wallyworld_> you need to include each lp js file required, one by one
[05:51] <wallyworld_> that's most likely what's wrong
[05:51] <poolie> wgrant, i don't think i've landed any changes to it since the last deployment
[05:51] <wallyworld_> if you look at an existing html file, you'll see a block for including all the yui stuff, then each lp file included under that
[05:51] <huwshimi> wallyworld_: Ah I see
[05:51] <poolie> i have one outstanding change up for review
[05:53] <huwshimi> wallyworld_: Thanks, I was thinking, but I've added it to use(), but we don't have a combo loader :)
[05:53] <wallyworld_> yep :-)
[05:53] <huwshimi> wallyworld_: :(
[05:53] <poolie> wgrant, yes, jelmer changed launchpad-buildd to run dh_lintian during its own build
[05:54] <wallyworld_> not yet anyway. it keeps getting talked about from time to time
[05:54] <poolie> i guess he assumed it was in build essential or something
[05:54] <StevenK> It's in debhelper, just the version in hardy is too old
[05:54] <StevenK> Check if dh_lintian is in $PATH in debian/rules, or backport debhelper
[05:55] <poolie> k
[05:55] <poolie> i have to go out for a bit
[05:55] <poolie> please file a bug or fix it or both
[05:57]  * StevenK is down to 7 test failures, from 132.
[05:57] <poolie> running on tmpfs is 10 minutes, 5% faster
[05:58]  * wallyworld_ goes to get the kid from school
[05:58] <StevenK> At 5pm?
[05:58] <StevenK> Oh, bloody daylight savings.
[05:59] <wgrant> lol qld
[06:10] <nigelb> I thought australia was sane about DST?
[06:11] <wgrant> No DST is sane.
[06:11] <StevenK> No
[06:11] <wgrant> But we are a bit special.
[06:12] <huwshimi> it's getting better
[06:12] <wgrant> The eastern states are all in the same timezone, except that Queensland doesn't do DST.
[06:12] <wgrant> So they stay an hour behind.
[06:12] <StevenK> WA no longer does DST
[06:12] <wgrant> And occasionally one of the other states will decide to change their DST rules one year because why not and what could go wrong.
[06:12] <StevenK> I'm not sure about SA and NT
[06:12] <wgrant> And then the WA government will announce with two weeks' notice that they're doing DST.
[06:13] <wgrant> And then abolish it again the next year.
[06:13] <huwshimi> nigelb: So no, not really :)
[06:14] <StevenK> Currently, New South Wales, Victoria, Tasmania, Australian Capital Territory and South Australia apply DST each year, from the first Sunday in October to the first Sunday in April. The Northern Territory, Queensland and Western Australia do not observe DST.
[06:14] <wgrant> With notable exceptions.
[06:15] <StevenK> In December 2008, the Daylight Saving for South East Queensland (DS4SEQ) political party was officially registered, advocating the implementation of a dual-time zone arrangement for Daylight Saving in South East Queensland while the rest of the state maintains standard time.
[06:15] <StevenK> Oh, sure. That won't be confusing at *all*
[06:16] <wgrant> "On 12 April 2007, New South Wales, Victoria, Tasmania and the Australian Capital Territory agreed to common starting and finishing dates for DST."
[06:16] <wgrant> IIRC there were changes due to the Commonwealth Games in '06
[06:16] <wgrant> And probably the '00 Olympics as well.
[06:16] <StevenK> That sounds right
[06:16] <wgrant> But they were well-announced.
[06:16] <wgrant> Unlike WA's stupidity.
[06:16] <StevenK> There certainly were for the '00 Olympics
[06:17] <StevenK> We had another two weeks of DST
[06:17] <wgrant> I don't recall.
[06:17] <StevenK> I think it was only NSW.
[06:17] <StevenK> Because lolympics
[06:18] <wgrant> Nah, probably applied everywhere, but I don't remember.
[06:18] <StevenK> Well, you were 8
[06:18] <wgrant> 9
[06:18] <StevenK> Damn, I was close
[06:18] <wgrant> Yeah, all the AEDT-observing states and territories were affected.
[06:18] <StevenK> Ah
[06:18] <StevenK> Heck, I was out of high school in '00
[06:19] <wgrant> Yes, but you're prehistoric.
[06:20] <StevenK> Back in my day, people respected their elders.
[06:20] <huwshimi> wgrant: I'm not sure about prehistoric, maybe more like carefully not remembered
[06:29] <nigelb> Fun!
[06:49] <huwshimi> So, I have a js module that works fine except when I load it to be tested it can't access Y.lp.client. Any suggestions for what obvious bit of info I'm overlooking?
[06:49] <huwshimi> I may have to provide some more info....
[06:51] <huwshimi> ugh, nevermind
[07:46] <huwshimi> How is it helpful for NodeList.hasClass('foo') to return a list of booleans?
[07:47] <huwshimi> Frustration.
[07:47] <wgrant> What else would it do?
[07:48] <huwshimi> wgrant: Well it would be nice to provide a way to actually get the nodes
[07:48] <huwshimi> instead of "[false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true, true]"
[07:50] <huwshimi> wgrant: I'm just not sure what situation getting an array like that would be useful for
[07:52] <wgrant> huwshimi: .all(".foo") doesn't work?
[07:53] <huwshimi> wgrant: Well I can just do .filter('.foo')
[07:53] <huwshimi> wgrant: I'm just confused by the usefulness of .hasClass on a nodelist
[07:53] <wgrant> Or that.
[07:53] <huwshimi> or lack of
[07:53] <wgrant> Well, you could conceivably want to check.
[07:53] <wgrant> It's the logical thing for hasClass to do
[07:54] <jtv> wgrant: question about findBuildsByArchitecture… it only returns cases where BPPH.architecturetag == BPB.architecturetag.  Doesn't that exclude arch-all for all but the nominated architecture?  If so, is that intentional?
[07:55] <wgrant> jtv: Where's that?
[07:56] <wgrant> I haven't seen that API before...
[07:56] <jtv> Sorry.  SPR.findBuildsByArchitecture
[07:56] <wgrant> Huh, didn't know that existed.
[07:56] <wgrant> ... and indeed it doesn't.
[07:58] <jtv> Argh.  I meant, getBuildByArch.
[07:58] <wgrant> Ah.
[07:58] <wgrant> I know that well enough to realise that I probably don't want to dig into it just before dinner.
[07:59] <wgrant> But let's see...
[08:00] <wgrant> jtv: It's meant to return the build for that arch.
[08:00] <wgrant> As the name suggests.
[08:00] <wgrant> Arch-indep binaries from the wrong architecture aren't useful in the search for such a build.
[08:00] <jtv> Yes… just wondering if the implication for arch-indep builds is intentional.
[08:00] <wgrant> Because they are published on all arches, from a single build.
[08:00] <jtv> Right.
[08:00] <wgrant> There aren't arch-indep builds.
[08:01] <wgrant> There are i386 builds that only build arch-indep binaries.
[08:01] <wgrant> But they're still i386 builds.
[08:01] <jtv> Right, that's the ones I mean.
[08:01] <jtv> If that's intentional, that answers my question.  Thanks.
[08:01] <jtv> There's an annoying timeout on that query when approving uploads.  Turns out to be ridiculously faster to query all architectures at once.
[08:02] <jtv> Even without re-using the result across architectures.
[08:02] <wgrant> Oh yes, createMissingBuilds is a horrible person.
[08:02] <jtv> Goes from 0.8s to 0.3ms.
[08:04] <jtv> Repeated for every architecture in the series.  The 0.3ms version would make even that repetition easy to eliminate—but why bother?
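The speedup jtv describes is the classic N-queries-to-one rewrite: instead of one round-trip per architecture, fetch builds for every architecture at once and resolve each one from an in-memory index. A sketch with entirely hypothetical names (this is not Launchpad's actual `getBuildByArch` code):

```python
from collections import namedtuple

Build = namedtuple("Build", "architecturetag score")

class FakeStore:
    """Stand-in for a database; each query_* call is one round-trip."""
    def __init__(self, builds):
        self.builds = builds
        self.round_trips = 0

    def query_one(self, architecturetag):
        self.round_trips += 1
        matches = [b for b in self.builds
                   if b.architecturetag == architecturetag]
        return matches[0] if matches else None

    def query_all(self, architecturetags):
        self.round_trips += 1
        return [b for b in self.builds
                if b.architecturetag in architecturetags]

def builds_per_arch_slow(store, arches):
    # One query per architecture: N round-trips, repeated per series.
    return {arch: store.query_one(arch) for arch in arches}

def builds_per_arch_fast(store, arches):
    # One query for all architectures, then dict lookups: 1 round-trip.
    by_arch = {b.architecturetag: b for b in store.query_all(arches)}
    return {arch: by_arch.get(arch) for arch in arches}
```

Same results either way; only the number of round-trips changes, which is where the 0.8s-to-0.3ms difference comes from.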
[08:04] <wgrant> The whole thing needs to be rewritten.
[08:04] <wgrant> It's crap.
[08:05] <wgrant> I never got around to it when I was on maintenance.
[08:05] <wgrant> But it's a clear problem for both copies and queue operations.
[08:05] <wgrant> It got a bit simpler with DDs.
[08:08] <jtv> I'm beginning to understand the appeal of Gentoo.  :-)
[08:08] <wgrant> Heh
[08:09] <jtv> “Yeah, whatever, we'll just exaggerate the speed advantage of a native compile and let the users take care of the building.”
[08:09] <nigelb> Oh dear.
[08:09] <wgrant> Would make things *so* much easier.
[08:09] <jtv> For the distribution side.  Not necessarily for the users.  :-)
[08:10] <wgrant> The archive management side is all I care about :)
[08:11] <nigelb> wgrant: traitor :P
[08:37] <rvba> Hi lifeless, are you around?
[09:07] <adeuring> good morning
[09:14] <jtv> hi adeuring
[09:14] <adeuring> hi jtv!
[09:14] <jtv> wgrant: I have my approved uploads, but how do I set the build farm to work on them?
[09:16] <allenap> lifeless: I'll put that txlongpoll bug on the board. I have too much WIP right now as it is, but I'll try and work on it next.
[09:25] <lifeless> allenap: thanks; it should be trivial, I didn't do the txlongpoll part because I figure it's in active fiddling from you guys atm

[09:25] <lifeless> rvba: no, but whats up ?
[09:26] <wgrant> jtv: They'll build automatically.
[09:26] <jtv> Even on dogfood?
[09:26] <rvba> lifeless: ;) … In the mean time, jtv helped me and I got my problem sorted. We 'discovered' that having a cache size of 100 is a little bit small.
[09:26] <wgrant> jtv: Yes.
[09:26] <lifeless> rvba: the dev cache? I thought wgrant was landing a bump of that to 10k
[09:27] <jtv> wgrant: curses.
[09:27] <lifeless> rvba: we should make it match prod IMO
[09:27] <wgrant> jtv: Oh?
[09:27] <rvba> lifeless: yes, the dev cache, and no
[09:27] <wgrant> lifeless: Mmm, I decided I didn't have enough reason to do it.
[09:27] <wgrant> lifeless: It is somewhat helpful possibly.
[09:27] <jtv> No need to make it match production.  Plenty of objects will be disposable anyway.
[09:27] <wgrant> jtv: Ah, you don't have any non-virt builders...
[09:27] <lifeless> wgrant: future request; please let folk you've discussed things with know of plan changes :)
[09:27] <jtv> wgrant: the company won't give me more hardware!
[09:28] <wgrant> lifeless: I didn't think it was a plan.
[09:28] <wgrant> More of a thought.
[09:28] <lifeless> wgrant: ah, I thought it was a definite
[09:28] <lifeless> we see about one headscratch a week for scaling tests where the dev/testrunner cache size is too small
[09:28] <rvba> true ;)
[09:29] <lifeless> I need to dig around and see how to switch the cache to non-evict mode for appservers.
[09:29] <lifeless> it would save some CPU cycles and avoid this sort of problem
[09:29] <lifeless> but in the mean time - +1 on anyone changing it to match, or approximate, production.
[09:30] <jtv> lifeless: non-evict mode is StupidCache, which is what we used to have.  They required regular restarts.
[09:31] <lifeless> jtv: given we discard the entire cache between requests, I'm at a loss for why that would correlate with restarts being needed
[09:31] <lifeless> jtv: how did we decide it was the storm cache ?
[09:32] <jtv> lifeless: oh, you mean the appserver cache _in tests_?
[09:32] <lifeless> jtv: no, in prod
[09:32] <jtv> lifeless: rvba tried it.
[09:32]  * jtv → otp
[09:32] <lifeless> in prod, we throw away the storm cache between requests
[09:35] <lifeless> What I'm actually worried about is performance tanking when we start evicting and lazy reloading objects.
[09:36] <lifeless> so a cache that does not evict, but still has an upper cap, would address my concerns.
[09:36] <lifeless> AIUI that's not the same as stupidcache
[09:46] <allenap> lifeless: That /could/ have similar problems, just with a different subset of retrieved objects (i.e. the most recently fetched ones). But at least it's not spending time evicting stuff that's already cached.
[09:47] <jtv> lifeless: completely forgot that we discard the cache between requests…
[09:48] <jtv> I don't see how you could usefully get a limit on cache size without eviction.  The StupidCache does neither; GenerationalCache does both but at a very low cost.
[09:49] <rvba> allenap: can you have a "second look" at my MP please ;) https://code.launchpad.net/~rvb/launchpad/activereviews-bug-867941/+merge/82375
[09:49] <allenap> rvba: Sure.
[09:49] <jtv> lifeless: GenerationalCache detects that it gets too large, and drops a single large bucket of up to half the cached objects.
[09:50] <jtv> (The most recently used half-or-more objects stay cached)
[09:50] <lifeless> allenap: I am suggesting that it error when the cache would be full
[09:51] <lifeless> allenap: that would make it fail hard, which we can then review why it exceeded 10K live objects (or whatever)
[09:51] <lifeless> jtv: ^
[09:51] <allenap> lifeless: Okay, that's interesting. I like that.
[09:52] <jtv> lifeless: IIRC GenerationalCache has a separate “I'm full, do something about it” method that you could override.
[09:52] <lifeless> jtv: that might be an easy route to implementation - thanks
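The design being discussed can be sketched in a few lines: a two-generation cache that, when full, drops the older (least recently used) half at once, with the "I'm full" hook jtv mentions overridable to hard-fail as lifeless suggests. Names and details here are illustrative, not Storm's actual `GenerationalCache` API:

```python
class GenerationalCache:
    """Two-generation cache sketch: when the new generation fills up,
    the old generation (the least recently used half or more) is dropped."""

    def __init__(self, size=100):
        self.size = size
        self.new = {}   # most recently used objects
        self.old = {}   # previous generation: still hits, but next to go

    def add(self, key, obj):
        self.new[key] = obj
        if len(self.new) >= self.size:
            self.on_full()

    def on_full(self):
        # Default behaviour: revolve the generations, discarding up to
        # half the cached objects in one cheap bucket drop.
        self.old = self.new
        self.new = {}

    def get(self, key):
        if key in self.new:
            return self.new[key]
        if key in self.old:
            obj = self.old.pop(key)
            self.new[key] = obj   # promote on hit
            return obj
        raise KeyError(key)

class HardFailCache(GenerationalCache):
    """lifeless's variant: a hard cap with no eviction, so exceeding it
    (e.g. 10k live objects) fails loudly and can be investigated."""

    def on_full(self):
        raise AssertionError(
            "cache exceeded %d live objects" % self.size)
```

stub's soft-OOPS alternative would just log from `on_full` instead of raising.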
[09:53] <jtv> Obviously I'd like to see us continue to use the generational cache.  :)
[09:53] <lifeless> why?
[09:54] <lifeless> [not trolling] - it's an implementation detail, right? Is there something particularly good about it other than it choosing to discard things rather than thrash ?
[09:55] <jtv> lifeless: simply because I designed it specifically for our production use-case (as well as the tiny-cache use-case Storm was written for at the time) and I hate to see that work no longer be useful to us.  :)
[09:55] <jtv> At the time we kept all objects in cache forever.
[09:56] <jtv> Of course there's also scripts, where it's still useful.
[09:56] <lifeless> ah :)
[09:57] <lifeless> so, scripts are different, I haven't got a view on their needs yet - not one I could articulate or argue around
[09:57] <jtv> I'm not sure if we cleaned out all the nasty manual cache-clearing code in our scripts, although we did clean out some.
[09:59] <jtv> Some scripts are definitely best off with the generational cache: access too much data for StupidCache, working sets too large for the old LRUCache (not sure Storm even kept that one—I saw one script spend only infinitesimal time outside of the LRU accounting code), and lots of data that can be forgotten soon after use.
[10:03] <jtv> The enormous LRU overhead only happened in an experiment on a test server, of course; the default LRU cache was just tiny.
[10:05] <stub> Should we hard fail if the cache 'fills up', or generate a soft OOPS?
[10:06] <stub> 'fills up' in quotes given the current 10k limit is a number I pulled out of my arse a few years ago
[10:16] <jtv> allenap: rvba: more succinct version of what I said earlier.  :)   ^^^
[10:19] <rvba> jtv: indeed ;)
[10:20] <jtv> wgrant: maybe I do need to mark some builders as nonvirtual for this..?
[10:20] <lifeless> stub: soft oops is another option; something at least, rather than degrading silently and (probably) heavily
[10:20] <wgrant> jtv: Yes.
[10:20] <jtv> Will I kill anyone?
[10:22] <jtv> lifeless: there may even be separate actions you can override for “demote current generation” (i.e. “configured cache size exceeded”) and “retire old generation” (i.e. “this is not the first time I had to revolve generations”).  But maybe that's a bit arbitrary, since there can be any amount of overlap between the generations.
[10:23] <jtv> Think of them as equally-sized L1 and L2 caches, give or take.
[10:28] <jtv> wgrant: and in case you're interested… https://code.launchpad.net/~jtv/launchpad/bug-891493/+merge/82514
[10:29] <wgrant> jtv: Will review after the downtime. Sounds really good, though.
[10:30] <jtv> Thanks.
[10:35] <jtv> wgrant: still no joy on the buildfarm.  :(
[10:36] <jtv> Oh don't tell me it's simply not running again…
[10:36] <jtv> Nope, nope, it's running.
[10:37] <wgrant> I just checked that.
[10:38] <wgrant> Nothing in the logs for nearly 24 hours.
[10:38] <wgrant> Restart it?
[10:38] <jtv> Yeah, I guess
[10:40] <wgrant> That worked.
[10:43] <jtv> Exciting.
[11:08] <rvba> lifeless: maybe you will want to give your opinion about the possible fix suggested on https://bugs.launchpad.net/launchpad/+bug/867941
[11:10] <wgrant> jtv: Approved with a comment.
[11:11] <jtv> Thank you!
[11:11] <wgrant> rvba: I wouldn't optimise this too hard now -- the privacy schema is in the middle of changing entirely.
[11:12] <rvba> wgrant: but the main problem does not come from the private branches (even if I know that it might look like it does)
[11:12] <rvba> The huge number of public branches is the problem here.
[11:14] <rvba> Seq scanning the public branches 4 times in one query without any optimisation is the problem. (see the original query)
[11:15] <wgrant> Hmm.
[11:28] <stub> rvba: You should be able to create temporary tables on the slave dbs just fine.
[11:29] <rvba> stub: oh really?
[11:29] <rvba> Thanks for looking into that stub.
[11:29] <rvba> jtv: ↑
[11:29] <stub> rvba: (at least on production. You might get an error with the test suite in case we are being overly paranoid?)
[11:29] <jtv> stub: I tried creating one in a read-only transaction and that wasn't allowed.
[11:29] <jtv> That is the only basis for my conclusion that it won't work on a slave, so disprove that and I stand corrected.
[11:30] <stub> jtv: Ok. So that is just in our test suite. This might be a use case for turning that behaviour off or adjusting it.
[11:30] <stub> jtv: I didn't think of temporary tables when I implemented it.
[11:30] <jtv> I tried it in psql, actually.
[11:31] <stub> Right. We don't use read only transactions when talking to the slaves. There are triggers on the relevant tables that block updates.
[11:31] <stub> And if we do, we can fix that.
[11:32] <stub> I am surprised that building a temporary table and an index is faster than embedding it as a CTE.
[11:33] <stub> 1.2s is still high, but if it is an improvement it's a step in the right direction.
[11:34] <rvba> jtv's proposal is a serious improvement over the CTE version.
[11:34] <jtv> It's a massive improvement.
[11:34] <jtv> Unfortunately.
[11:35] <stub> yer. I was assuming pg would be smart enough to build any necessary indexes in RAM to make queries faster.
[11:37] <jtv> I didn't check whether the speedup was a consequence of the index or of the plan.
[11:38] <rvba> I /think/ the speedup is explained by lines 20-21 (https://pastebin.canonical.com/55836/) but I'm not sure what it means exactly.
[11:41] <jtv> rvba: do you happen to have the CTE version handy?
[11:42] <stub> I'm braindead now and not looking at query plans :-)
[11:42] <rvba> jtv: https://pastebin.canonical.com/55835/
[11:43] <jtv> rvba: I meant the *plan* for the CTE version, sorry
[11:43] <rvba> jtv: hang on
[11:43]  * jtv hangs on
[11:44] <rvba> jtv: https://pastebin.canonical.com/55780/
[11:44] <rvba> jtv: 2nd result
[11:44] <stub> https://pastebin.canonical.com/55945/
[11:45] <jtv> Argh—where does that formatting come from?
[11:45] <rvba> stub: this is just another run of the CTE version?
[11:45] <stub> https://pastebin.canonical.com/55946/ (had \x on sorry)
[11:46] <stub> rvba: yer, from prod
[11:46] <jtv> I wonder what a CTE Scan node really does.
[11:46] <rvba> Total runtime: 6029.359 ms ("my" run) != Total runtime: 1817.746 ms (your run)
[11:47] <rvba> Crazy.
[11:47] <jtv> That could be those buffer sizes I mentioned.
[11:47] <rvba> stub: can data hotness or something like this explain such a difference?
[11:47] <stub> rvba: Where are you running the query?
[11:48] <stub> rvba: first time was 2s
[11:48] <rvba> I asked Tom H to run it on the prod db.
[11:48] <rick_h_> rvba: thanks for the MP review
[11:49] <rvba> FTR the original query (without a CTE) is ~4.5 s (this is from oops reports)
[11:49] <rvba> rick_h_: you're welcome.
[11:49] <rvba> stub: so the CTE is a serious improvement after all.
[11:49] <stub> rvba: If it is random, could be conflicts with something like the branch scanner making updates
[11:50] <stub> rvba: I expect the CTE will perform the same as a temporary table for a single query, and a temporary table will win if you can reuse if for subsequent queries.
[11:50] <stub> rvba: But I haven't investigated much - this is just my assumption
[11:51] <rvba> stub: Right, but I see very different things in the 2 query plans:
[11:51] <rvba> CTE Scan on visible_branches source_visible_branches  (cost=0.00..9412.66 rows=470633 width=4) (actual time=0.001..971.882 rows=465499 loops=1)
[11:51] <rvba> That's from the CTE version (obviously)
[11:51] <rvba> Index Scan using visible_branches_ids on visible_branches  (cost=0.00..4.00 rows=1 width=4) (actual time=0.017..0.017 rows=1 loops=86)
[11:52] <rvba> And that's from the other version (with a temporary table)
[11:52] <rvba> The difference in the number of returned rows puzzles me… maybe there is an optimization that the index allows somewhere…
[11:53] <stub> rvba: yer. Adding the index is a wild card. I would expect that what you gain from being able to use the index is outweighed by the cost of building the index, but again this is just my assumption.
[11:53] <jtv> I think that's the number of rows it inspects, rather than the number of rows it returns.
[11:53] <rvba> Ah, that might explain the difference… and the speedup.
[11:54] <jtv> I think the CTE Scan is effectively a HOT table scan.
[11:54] <rvba> stub: jtv said the creating the index was instantaneous.
[11:54] <rvba> s/the/that/
[11:55] <jtv> (HOT standing for Heap-Only Tuples, tuple allocations that never need to be written to disk—I /think/ those would be used here but am not sure)
[11:56] <rvba> It would be nice if we could create the index in the CTE.
[11:56] <jtv> Quite.
[11:57] <jtv> I suppose nobody figured a single query would re-use the CTE's contents quite this much.
[11:57] <stub> You could move the join with the CTE outside of the UNION, joining twice instead of four times. I think. Brainfried - I can look at it tomorrow if someone emails me the queries etc.
[11:58] <stub> That might win for temp table too
[11:59] <rvba> I suppose we want to avoid using temp tables if we can use CTE instead.
[11:59] <stub> much of a muchness
[11:59] <jtv> Yes.
[11:59] <jtv> In terms of code maintenance, I think CTEs are much to be preferred.
[12:00] <rvba> stub: I'll see if that's possible… but the code that builds this query is pretty crazy to say the least ;)
[12:00] <stub> It would be nice to rewrite the query so it doesn't need to materialize a big list of all public and private-but-visible branches
[12:01] <stub> ie. select all branchmergeproposals where the branch is not transitively private, UNION ALL branchmergeproposals where the branch is transitively private but visible
[12:02] <stub> Just because slinging around 400k resultsets is always going to be slow
[12:02] <rvba> Right.
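The temp-table approach under discussion, reduced to a toy sqlite3 session: materialise the visibility set once as an indexed temporary table, then join against it repeatedly, instead of re-scanning a CTE's materialised output on every join. The schema and names below are invented for illustration, not Launchpad's actual tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE branch (id INTEGER PRIMARY KEY, private INTEGER);
    CREATE TABLE bmp (id INTEGER PRIMARY KEY,
                      source_branch INTEGER, target_branch INTEGER);
""")
# 100 branches, every tenth one private; 50 merge proposals.
conn.executemany("INSERT INTO branch VALUES (?, ?)",
                 [(i, i % 10 == 0) for i in range(1, 101)])
conn.executemany("INSERT INTO bmp VALUES (?, ?, ?)",
                 [(i, i, 101 - i) for i in range(1, 51)])

# Materialise the visible set once, with an index -- the variant jtv
# found dramatically faster than repeated CTE scans.
conn.executescript("""
    CREATE TEMPORARY TABLE visible_branches AS
        SELECT id FROM branch WHERE private = 0;
    CREATE INDEX visible_branches_ids ON visible_branches (id);
""")
rows = conn.execute("""
    SELECT count(*) FROM bmp
    WHERE source_branch IN (SELECT id FROM visible_branches)
      AND target_branch IN (SELECT id FROM visible_branches)
""").fetchone()
print(rows[0])   # proposals where both branches are visible
```

Each `IN` probe can use `visible_branches_ids`, whereas a CTE scan re-walks the whole materialised set per join, which matches the plan fragments rvba pasted.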
[12:03] <stub> Anyway... done for now. Email me with details if you want me to look at it tomorrow, or have fun :-)
[12:03] <rvba> Again, I'll have to see if that's feasible without a massive refactoring but it's worth having a look.
[12:03] <rvba> stub: jtv thanks a lot for looking into that.
[12:03] <jtv> np
[12:04] <rvba> I was misguided by my unlucky run :(
[12:04] <stub> Also, I suspect we see some timing issues due to branchscanner sometimes locking things for longer than we would like. There are caches on the Branch table it maintains.
[12:07] <rvba> I've added our recent findings to the bug as comments.
[13:09] <timrc> is there an agenda for the Launchpad Thunderdome? This page looks a little neglected, https://wiki.canonical.com/Launchpad/Sprints/ThunderdomeJan2012/Agenda
[13:41] <allenap> matsubara: Hey there. Fancy reviewing a python-oops-tools branch? https://code.launchpad.net/~allenap/python-oops-tools/use-psycopg2-2.4.1/+merge/82537
[13:46] <matsubara> allenap, sure, waiting for the diff to update
[13:46] <allenap> It's taking an age :-/
[13:52] <matsubara> allenap, r=me
[13:52] <matsubara> thanks for the fix!
[13:53] <allenap> matsubara: Thanks :)
[14:01] <sinzui> timrc, you found the correct page. We have not decided what the theme of work will be
[14:02] <timrc> sinzui, there was some off discussion about  moving some of Ubuntu's image building into LP... that topic is of particular interest to us.  If that makes it on the agenda, we'd like to know
[14:02] <timrc> s/off//
[14:04] <sinzui> timrc, no. Our thunderdomes usually focus on tech-debt: fixing common issues that require a lot of contribution.
[14:05] <timrc> sinzui, ok, good to know
[14:07] <sinzui> timrc, We have a poor history of completing feature work that starts in an epic. Well we do, but it takes months because the work was not owned by a team that had a date to deliver it.
[14:07] <deryck> Morning, all.
[14:08] <rick_h_> morning
[14:08] <deryck> The week of buildnot has made what's left of my hair turn even more gray.
[14:09] <rick_h_> hah, it's called "distinguished"
[14:10] <sinzui> I was just thinking I should strip my hair, then when colour grows in I might not look so old
[14:10] <rick_h_> I just shave it off, saves on the hair cuts
[14:11]  * deryck is going for nearly head shaved next round, too
[14:11] <jcsackett> allenap: you waiting for someone specific to look at your precise wiping script, or just haven't found a reviewer yet?
[14:12] <sinzui> I do not like the thought of having more hair on my body than on top of my head.
[14:12] <allenap> jcsackett: No, Aaron had a look yesterday. I'm never going to land it, so he didn't vote. I'll move it out of the review queue.
[14:12] <allenap> Thanks though Jon :)
[14:12] <jcsackett> allenap: no need to thank me for *not* having to review something, but you're welcome. :-P
[14:17] <deryck> So is anyone looking into buildnot?  I think the last email blames flacoste and lifeless.
[14:18] <timrc> sinzui, as a user, I would vote for fixing stability issues and doing tactical things to the UI to make it more enjoyable to use (and if that means fixing tech debt, then we have a win-win)
[14:19] <deryck> rick_h_, you have access to your work email now, right?
[14:19] <rick_h_> deryck: yes
[14:19] <sinzui> timrc, indeed that is what we have done over the last 3 epics
[14:19] <deryck> rick_h_, just making sure, since I CC'ed your work email this morning.
[14:19] <rick_h_> yep, do you have time to chat on that pre stand up?
[14:20] <rick_h_> I wanted to chat on it if you get a sec
[14:22] <deryck> rick_h_, sure.  I'll jump in mumble now.
[14:53] <bac> allenap, jcsackett: could one of you review https://code.launchpad.net/~bac/launchpad/bug-891641/+merge/82544
[14:54] <jcsackett> bac: sure.
[14:54] <jcsackett> bac: looks fine to me. r=me.
[14:55] <bac> thanks
[15:05] <cr3> hi folks, any plans to upgrade YUI to 3.4 in Launchpad?
[15:16] <deryck> flacoste, hi.  The latest build failure has you and lifeless listed as blamelist.  Did you take a peek yet, or do you need someone else to look?
[15:16] <deryck> cr3, hi.
[15:16] <flacoste> deryck: i cannot be responsible for the latest failure, i swear
[15:16] <flacoste> deryck: it's a XSL change that isn't tested
[15:16] <deryck> cr3, yes and no.  I want to skip 3.4 and jump to 3.5 in January.
[15:16] <rick_h_> deryck: I'm getting an issue starting my local env because of the timeline egg needing to update
[15:17] <flacoste> deryck: unless it failed at the build step
[15:17] <rick_h_> where am I looking to pull that? make egg, build_egg and download-cache only seem to check the dir and all report "up to date"
[15:17] <cr3> deryck: hola, good to know. no rush, thanks!
[15:18] <deryck> rick_h_, if you use the rocketfuel script to update devel, it should work.  I think rocketfuel-get.
[15:18] <rick_h_> ah, thanks
[15:18] <deryck> flacoste, ok, I'll look and see what rev introduced an XSL change.
[15:19] <flacoste> deryck: i mean, my change is to the WADL XSL file, and that cannot fail any tests
[15:19] <deryck> rick_h_, but in short, you have a download-cache that needs to have bzr up run in it.
[15:19] <deryck> flacoste, ah, sorry, I follow now.
[15:20] <flacoste> deryck:
[15:20] <flacoste> reference = u'a moment ago by Person-name-359157'
[15:20] <flacoste> actual = '14 seconds ago by Person-name-359157'
[15:20] <flacoste> that looks like a timing issue
[15:20] <rvba> deryck: http://paste.ubuntu.com/741266/
[15:20] <rvba> oops, too late ;)
[15:21] <deryck> flacoste, ah, same one then.  I missed that at first glance.
[15:21] <rick_h_> deryck: yea, I manually did a bzr pull and missed on getting the egg
[15:21] <deryck> flacoste, can we kick the build then?  Since changes should be in to fix that now.
[15:21] <gary_poster> I can't build the tree
[15:21] <deryck> rvba, thanks anyway! :)
[15:21] <rvba> :)
[15:22] <deryck> flacoste, sorry, ignore me again, I see it's not the same.
[15:22] <deryck> too many red lines for me.  I got lost. ;)
[15:22] <gary_poster> I keep on getting complaints that builder.py can't import lpbuildd.slave
[15:22] <deryck> gary_poster, so perhaps you and rick_h_ are seeing someone forgot to commit to the download-cache then?
[15:22] <gary_poster> deryck, no, I don't think so
[15:23] <gary_poster> possible
[15:23] <rvba> gary_poster: ah ! I had this one in ec2 this morning.
[15:23] <gary_poster> rvba, oh!  did you resolve it?  I've tried make clean && make to no avail, and have been trying other random things
[15:23] <rvba> gary_poster: http://paste.ubuntu.com/741269/ ?
[15:23] <deryck> rick_h_, could be sourcecode needs updating too then.  rocketfuel-get should do the right thing for you.  if not, make clean && make can correct these things.
[15:24] <rick_h_> yea, rocketfuel-get got me running
[15:24] <gary_poster> precisely rvba
[15:24] <rick_h_> thanks
[15:24] <rvba> gary_poster: I solved it by merging devel and fixing the stupid missing directories conflicts.
[15:24] <rick_h_> oh that, yea same here
[15:25] <gary_poster> rvba, maybe I fixed the conflicts incorrectly...but when I diff my trunk against the main tree it doesn't report a diff
[15:25] <rick_h_> ooh, wrong one. I had left over directories from lp/buildd dirs
[15:25] <gary_poster> (and this was a problem in my pristine devel branch)
[15:26] <rvba> Right, left over directories masquerading as conflicts.
[15:26] <jml> there's a thing you can do to avoid that
[15:26] <gary_poster> what is it jml?
[15:26] <jml> Add bzr.transform.orphan_policy=move to bazaar.conf
[15:26] <jml> not sure where the documentation for that lives, got the tip directly from vila
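For reference, the tip jml relays would go in Bazaar's global config, something like the fragment below. The section name and the `move` value are hedged from memory of Bazaar's config format rather than from its documentation (the default policy is `conflict`):

```ini
; ~/.bazaar/bazaar.conf
[DEFAULT]
; Move leftover unversioned files aside instead of turning them
; into "missing directory" conflicts during merge/pull.
bzr.transform.orphan_policy = move
```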
[15:27] <rvba> jml: good to know! thanks!
[15:27] <gary_poster> huh, cool
[15:27] <gary_poster> thanks jml
[15:27] <jml> np.
[15:27] <rvba> that's an insider's tip then :)
[15:27] <gary_poster> I guess I'll wipe away my tree and do it all from scratch and hope that does the trick
[15:28] <rvba> That would be the next random thing I would try too ;)
[15:28] <gary_poster> :-)
[15:28] <gary_poster> thanks
[15:35] <rick_h_> gmb: ping
[15:36] <gmb> rick_h_: Hi
[15:36] <rick_h_> gmb: howdy, I wanted to see if you had time to do a pre-implementation chat with me on https://bugs.launchpad.net/launchpad/+bug/723417
[15:36]  * gmb looks
[15:36] <rick_h_> sometime in the not too distant future
[15:38] <gmb> rick_h_: I could do without context switching for the next while... How does ~16:30UTC sound?
[15:38] <rick_h_> sure thing
[15:38] <gmb> Cool.
[15:38] <gmb> rick_h_: I'll ping you when I'm ready.
[15:38] <rick_h_> thanks, appreciate it
[15:38] <gmb> np
[15:56] <deryck> abentley, for our call discussion:  http://people.canonical.com/~deryck/field-state.png
[15:57] <abentley> deryck: okay.
[15:57] <deryck> abentley, this represents my understanding, which may well be wrong.  but I thought it was easier to explain my thinking with a pic.
[15:58] <jml> When I change a milestone on a bug, I get a broken image link
[15:58] <jml> e.g. https://bugs.launchpad.net/testrepository/+bug/613181/undefined
[15:59] <jml> instead of the edit button that I've come to know.
[16:01] <deryck> jml, that was fixed once before already.  seems that gets broken a lot.
[16:02] <jml> deryck: interesting, thanks.
[16:15] <gary_poster> OK this is kinda not cool.  I'm still unable to build LP ("ImportError: No module named lpbuildd.slave"), despite fresh checkouts and lots of random jiggling of things around. :-(
[16:16] <jelmer> gary_poster: python-lpbuildd is a debian package
[16:16] <gary_poster> jelmer, ah!
[16:16] <jelmer> gary_poster: if you update launchpad-developer-dependencies it should be pulled in
[16:17] <gary_poster> right, I should have remembered that thing to jiggle
[16:17] <gary_poster> thanks jelmer
[16:17] <jelmer> gary_poster: it's just one of the gazillion ways in which launchpad pulls in dependencies
[16:17] <gary_poster> yes, I was thinking something similar jelmer. :-(
[16:33] <gmb> rick_h_: I'm free for a chat now if you are.
[16:33] <rick_h_> gmb: sure thing
[16:33] <gmb> Cool.
[16:33] <gmb> rick_h_: Skype or Mumble?
[16:33] <rick_h_> what works for you? mumble, skype?
[16:33] <rick_h_> lol, either here
[16:33] <rick_h_> mumble and jump in orange room with me for a sec then?
[16:34] <gmb> rick_h_: Sure. Bear with me a sec...
[16:38] <timrc> getting timeouts on staging when visiting, https://bugs.staging.launchpad.net/
[16:38] <timrc>  OOPS-20c822b69c6c8fa38258f4a19efebcf6
[16:48] <allenap> timrc: If you keep hitting refresh does it eventually work?
[16:49] <allenap> Although the first query in the page is hideous.
[16:50] <timrc> allenap, it says, "Initiating launch sequence"... JK... I'll try
[16:50] <allenap> :)
[16:50] <timrc> allenap, hitting reload once did work
[16:51] <allenap> timrc: Okay, it's probably a cold disk cache. That happens a lot on staging and qastaging unfortunately.
[16:51] <timrc> allenap, ah, okay
[16:53] <timrc> allenap, is it actually calculating those statistics in real-time?
[16:54] <allenap> timrc: Yes, I believe so.
[16:54] <timrc> yikes
[16:54] <adeuring> allenap, jcsackett: could you please review this mp: https://code.launchpad.net/~adeuring/launchpad/bug-sorting-3/+merge/82562 ?
[16:55] <allenap> timrc: Private bugs make things a little complicated.
[16:55] <jcsackett> adeuring: i'm in the middle of reviewing one right now, if allenap can't get to yours after that i'll take a look.
[16:55] <allenap> adeuring, jcsackett: I'll take it.
[16:55] <adeuring> allenap: thanks!
[16:56] <timrc> seems like a waste of CPU cycles
[17:04] <deryck> allenap, jcsackett -- I have a branch to get us out of testfix if one of you could kindly review it.
[17:05] <deryck> it's short.
[17:05] <jcsackett> jcsackett:
[17:05] <jcsackett> deryck: just looked.
[17:05] <jcsackett> r=me.
[17:05] <deryck> jcsackett, rockin' thanks!
[17:08] <deryck> alright, playing in pqm.  we should be out of testfix shortly.
[17:10] <allenap> adeuring: Have you tried some of those subqueries on production? They look like big red bad performance markers to me.
[17:11] <adeuring> allenap: right, there are problems for some bug targets, but we want to get a first version running (protected by a beta flag)
[17:11] <adeuring> allenap: but for other targets the queries are quite fast
[17:11] <allenap> adeuring: Okay.
[17:19] <abentley> allenap or jcsackett: could you please review https://code.launchpad.net/~abentley/launchpad/view-flags/+merge/82570 ?
[17:20] <allenap> abentley: If it'll take me less than 20 minutes, sure!
[17:20] <abentley> allenap: I think it should.
[17:21] <allenap> Okay, deal.
[17:25] <allenap> abentley: Why "is_beta" rather than, say, "is_default", and/or including the default value?
[17:27] <abentley> allenap: The use case of is_beta is to indicate whether the feature is in beta.  We could change the way we determine whether a feature is in beta, but we would still care about whether it was in beta, not whether it was the default.
[17:27] <GRiD> hey guys, fyi: https://code.launchpad.net/~launchpad-pqm/launchpad/devel/+activereviews seems to break
[17:27] <allenap> abentley: Okay, fair.
[17:28] <allenap> abentley: If related_features is empty the code still loops through flag_info. I guess that's not a big overhead, but it feels wasteful. It would be nice if there was a pre-prepared flag_info_map or similar, then you could loop through related_features instead.
[17:31] <abentley> allenap: It's really not expensive to loop through flag_info, because it's a list of tuples of strings.  I don't know why poolie chose to implement it that way.  If there was a flag_info_map, I don't think we'd need the list.
[17:32] <allenap> abentley: Okay.
[17:41] <allenap> abentley: I have to go now, but I'll complete the review asap when I'm back.
[17:42] <abentley> allenap: okay, thanks.
[17:50] <rick_h_> gmb: ping, heading your way: https://code.launchpad.net/~rharding/launchpad/bugfix_723417/+merge/82577
[18:11] <rick_h_> anyone know how I can work around this gpg issue when trying to use ec2 land? https://pastebin.canonical.com/55983/
[18:11] <rick_h_> my GPGKEY env var is set right, but the email on that is different since I added my GPG pre-starting
[18:19] <rick_h_> ok, that was part file permissions, here's a trace with not finding a gpg key for the work email addr https://pastebin.canonical.com/55984/
[18:25] <rick_h_> ok, nvm, I just changed it up for now and I'll add a new gpg key on the @canonical email address since I can't run different ones in the bazaar.conf it appears
[19:19] <flacoste> we should revert mbp buildd removal
[19:25] <deryck> jcsackett, I have an itsy-bitsy teenie-weenie css change branch.  what say ye?
[19:27] <lifeless> morning y'all
[19:27] <flacoste> morning lifeless
[19:27] <flacoste> you've got stakeholders email!
[19:27] <lifeless> I do?
[19:27] <flacoste> you do!
[19:28] <lifeless> I see a bunch
[19:28] <lifeless> which in particular is for me ?
[19:28] <flacoste> the one you are CC-ed on
[19:28] <flacoste> escalation request from skaet
[19:28] <lifeless> hah. lets see now ;)
[19:29] <flacoste> but i'm sure you'll have interesting opinions on others as well!
[19:29] <jcsackett> deryck: I am just finishing up a very late lunch, and then I am happy to take a look.
[19:33] <deryck> jcsackett, thanks! it's my latest on the review queue.
[19:36] <deryck> rick_h_, hi.  I see you're moving ahead with that expanding idea.... might I suggest you look at YUI plugins?
[19:37] <rick_h_> deryck: sure thing
[19:37] <jcsackett> deryck: R=me.
[19:37] <deryck> rick_h_, it's a nice way to add functionality to existing classes....
[19:38] <deryck> rick_h_, so you could plug anything that is a Y.Node instance, for example.  And then that node would know how to expand as you type.
[19:38] <rick_h_> ok, is there a place I should look at where we'd add the plugin?
[19:38] <deryck> jcsackett, awesome sauce, thanks!
[19:38] <rick_h_> I was starting to look at the stuff in formwidgets and seeing if it should be a type there
[19:39] <deryck> rick_h_, well, you'd create the plugin in one branch, independent of any use in lp.
[19:39] <rick_h_> in talking with gmb and looking at things, wasn't sure how to "apply" it in a nice way across things since there's not a good hook currently
[19:40] <rick_h_> ic
[19:40] <deryck> rick_h_, honestly, I'm not sure how you'd apply it globally.  I'd have to have a think about that myself.
[19:40] <rick_h_> s/branch/repo  ?
[19:40] <deryck> rick_h_, I think I mean branch.  But not sure if we're having bzr-->git break down. :)
[19:41] <deryck> rick_h_, the thing you create and push to lp. :)  You need one for the widget, then a follow up branch to wire it up in lp....
[19:41] <rick_h_> right, right now I started out thinking I'd add it to lp/app/javascript somewhere and add the object and the tests to the existing ones
[19:41] <deryck> rick_h_, you can, of course, do this in one branch, but I think you'll make a better plugin or widget if you do that on its own.
[19:41] <deryck> rick_h_, exactly.
[19:41] <rick_h_> right, ok. That's basically what I'm doing ok
[19:42] <rick_h_> I wasn't sure if you were thinking of making it a more generic YUI item that could be pushed to their gallery or something outside of LP
[19:42] <deryck> ah
[19:42] <deryck> no, not at this point.
[19:42] <rick_h_> ok, sounds good
[19:42] <deryck> rick_h_, might be worth seeing if the gallery has something like this already.
[19:42] <deryck> rick_h_, has to be a common pattern.
[19:43] <rick_h_> yea, I didn't see anything but the gallery is a pain to find things in imo
[19:44] <rick_h_> hah, never mind. just don't use their search ui.
[19:44] <rick_h_> http://yuilibrary.com/gallery/show/text-expander
[19:47] <deryck> rick_h_, that's exactly what you want.
[19:47] <deryck> rick_h_, we're putting stuff from the gallery in:  lib/lp/contrib/javascript/yui3-gallery/
[19:47] <rick_h_> loading it up to test now and will see how it does
[19:48] <rick_h_> ok
[19:48] <deryck> rick_h_, you could probably add that and wire it up in one branch now.  since it's just an import from the gallery.
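The plugin idea deryck describes above — attaching optional behaviour to any existing instance (e.g. any Y.Node) rather than subclassing — is roughly the following pattern. This is a hypothetical Python analogue for illustration only; YUI's actual `Y.Plugin` API and the gallery `text-expander` module work differently, and the `Expander`/`TextArea` names here are made up.

```python
# Sketch of the plugin pattern from the discussion above: bolt optional
# behaviour onto an existing host instance instead of subclassing it.
# (Hypothetical Python analogue; YUI's Y.Plugin.Base API differs.)
class Expander:
    """Grows its host as text is typed, like a text-expander widget."""

    def __init__(self, host):
        self.host = host

    def on_input(self, text):
        # Grow the host whenever the typed text fills its current size.
        if len(text) >= self.host.rows * self.host.cols:
            self.host.rows += 1


class TextArea:
    """Stand-in for a host object that accepts pluggable behaviour."""

    def __init__(self, rows=2, cols=10):
        self.rows, self.cols = rows, cols
        self.plugins = {}

    def plug(self, plugin_cls):
        # Any host with a plug() hook can gain the behaviour, which is
        # the appeal deryck notes: no per-class subclassing required.
        self.plugins[plugin_cls.__name__] = plugin_cls(self)
```

Usage would be along the lines of `ta = TextArea(); ta.plug(Expander)`, after which the plugin reacts to input events on the host.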
[20:07] <lifeless> flacoste: done
[20:29] <flacoste> lifeless: "deep diving into tsearch2 plumbing" -> that put it outside the scope of escalated bug?
[20:32] <lifeless> flacoste: depends - who do we have that knows postgresql internals well enough to dig deep there - probably only jtv, but IMBW. Anyone else will need a longer run up, though I have confidence there are many folk that *can* do it, it would be less like maintenance scope, I suspect.
[20:33] <flacoste> lifeless: and stub?
[20:33] <lifeless> flacoste: AFAIK he hasn't been down into the implementation API. I haven't asked though.
[20:34] <lifeless> flacoste: its not a matter of knowing data model - its C/C++ coding
[20:34] <lifeless> flacoste: *I think*
[20:34] <flacoste> ok
[20:35] <flacoste> i'll leave it out for now
[20:35] <flacoste> not like the escalated queue was empty anyway
[20:35] <lifeless> hah, no.
[21:26] <lifeless> mwhudson: hi; why does bug 881144 have linaro tasks ?
[21:26] <_mup_> Bug #881144: field.tags_combinator=ALL gives same results as with ANY when searching all bug reports <bugs> <qa-ok> <regression> <search> <Launchpad itself:Fix Released by rvb> <Linaro Android:Fix Released> <Linaro-Ubuntu:Fix Released> < https://launchpad.net/bugs/881144 >
[21:28] <mwhudson> lifeless: no idea off the top of my head
[21:29] <mwhudson> lifeless: i guess because of asac's comment
[21:29] <lifeless> I'm extremely tempted to delete them :)
[21:29] <mwhudson> i presume the "Removed milestone. Wait for new TI LT Kernel." comment was misdirected
[21:30] <lifeless> plus toggling on and off the status of the two tasks
[21:46] <deryck> abentley, I *just* used Y.each on an object literal.  hurray for new knowledge.
[21:47] <abentley> deryck: :-)
[22:06] <deryck> lifeless, I replied briefly in the bug about client/server rendering, but for the sake of higher bandwidth, can we chat here about sessions?
[22:07] <lifeless> sure
[22:07] <lifeless> or voice if you want
[22:08] <wgrant> deryck: Could you QA https://bugs.launchpad.net/launchpad/+bug/887646?
[22:08] <_mup_> Bug #887646: Advanced search showing new bug sorting widget <bug-columns> <qa-needstesting> <ui> <Launchpad itself:Fix Committed by deryck> < https://launchpad.net/bugs/887646 >
[22:09] <deryck> wgrant, oh, I thought I did way back.  sorry.
[22:10] <deryck> lifeless, so is the session cookie hashed, or plain strings?
[22:10] <StevenK> deryck: There are two bugs linked to the branch. Both need to be qa-ok
[22:10] <lifeless> deryck: oh, you want me to know whats in the code ?:P
[22:11] <deryck> wgrant, StevenK -- yeah, I'm so sorry.  That's been qa-ok for awhile.
[22:11] <deryck> lifeless, right. when I say "read" I mean the js code needs to see plain english, not a string it needs to decode.
[22:12] <lifeless> deryck: lib/lp/services/session/model.py
[22:12] <wgrant> Huh, what are you doing with sessions?
[22:12] <wgrant> The JS should not know about them.
[22:12] <StevenK> wgrant: I'll sort out a deployment request
[22:12] <deryck> wgrant, I'm not doing anything with sessions.  lifeless wants me to. :)
[22:12] <lifeless> wgrant: a 'because' would help. Also see the bug
[22:12] <deryck> he means js should be able to decipher the hash.  which is what I mean.
[22:13] <wgrant> O_o
[22:13] <wgrant> no
[22:13] <deryck> s/should/should not/
[22:13] <deryck> wgrant, does the replace make that correct now?  do we agree? :)
[22:13] <wgrant> Yes
[22:13] <deryck> ok :)
[22:14] <lifeless> deryck: so server side its a pickle
[22:14] <wgrant> The content is a pickle.
[22:14] <lifeless> deryck: we have state in there, and we can add more state, and so forth
[22:14] <wgrant> The cookie is a hashed reference to the pickle.
[22:14] <deryck> right
[22:14] <lifeless> deryck: I haven't suggested the raw content be sent over the wire at any point
[22:14] <deryck> no, you haven't, sorry, didn't mean to give that impression.
[22:14] <lifeless> deryck: AIUI you need to store a new variable in a clients browser
[22:15] <deryck> yes, it's like a url query string basically.
[22:15] <lifeless> deryck: you also don't care if this is lost when the user clears local state, switches machines etc.
[22:15] <deryck> right
[22:15] <wgrant> We used to just store them in additional cookies.
[22:15] <wgrant> For storing portlet expanded state.
[22:16] <lifeless> so one complication here is that at least for a sort widget, the view has to render differently
[22:16] <lifeless> (in fact it has to render differently for non-js browsers with columns as well)
[22:16] <lifeless> s/columns/selected fields/
[22:17] <lifeless> I don't really care how this is done, the thing that has little alarms going off is the assertion the view doesn't care :)
[22:18] <deryck> lifeless, it doesn't care because we do it all on the client.  we do render a basic view with the same template, but it doesn't get any of the js widgets to change the display of fields....
[22:18] <deryck> so if no js, no widgets, no need to read cookie.
[22:18] <deryck> the cookie is only useful to the js widgets that redraw the list.
[22:19] <lifeless> deryck: so you'll deliver it with content A, and immediately redraw with content B ?
[22:19] <lifeless> deryck: naively, I'd expect that to look a little jerky
[22:20] <lifeless> deryck: I probably don't understand the issue
[22:21] <deryck> lifeless, no, we render based on the model in js.  so think of it like a bunch of widget state being updated.  one widget uses cookies to maintain its state across page loads.
[22:21] <deryck> then after looking into all that state, we render the template on the client.
[22:21] <deryck> it's actually quite sexy ;)
[22:21] <lifeless> deryck: yes, I get that. Whats the *initial* html delivered to the client though ?
[22:22] <lifeless> deryck: doesn't that initial content have a batch, rendered on the server (and mirrored into the client cache) ?
[22:23] <lifeless> deryck: (I agree its sexy, its a very nice thing you've been building!)
[22:24] <deryck> lifeless, not rendered.  there is an initial server rendered batch, but AIUI, we're not showing that on page load.  only if you have js disabled....
[22:25] <lifeless> ah
[22:25] <deryck> lifeless, if js is enabled, we render on the client completely.
[22:25] <lifeless> and I presume the js is lightning fast ;)
[22:26] <lifeless> so, something you could /consider/ is to use this state to render the initial batch on the server exactly as the client needs it, saving a few client side cycles (and not costing server side ones as we render anyway)
[22:26] <deryck> lifeless, it is pretty fast, yes, and we've been mindful of that.  we think we may have a couple ways to speed it up even more.
[22:26] <lifeless> then the client side render would only need to kick in when someone changes something
[22:27] <lifeless> anyhow, I understand a bit better now.
[22:27] <deryck> ok, cool.
[22:27] <lifeless> francis and I both reaching for the session state is possibly just knee-jerk reactions
[22:27] <lifeless> I wouldn't want to see a massive proliferation of cookies though
[22:28] <lifeless> because cookies are sent on *every* request, they can become quite big.
[22:28] <deryck> no, it makes sense for a lot of cases.  and I agree, we don't need a lot of cookies.
[22:28] <deryck> and if we need more client state as our js use grows, and we don't want to get into js apis to poke at the session, we could share a single cookie for all client-side state.
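Packing all client-side state into one shared cookie, as deryck suggests, might look something like the following. This is a hypothetical sketch (the key names and query-string encoding are assumptions, not anything Launchpad actually does); it only illustrates why one combined cookie keeps per-request overhead to a single name/value pair.

```python
from urllib.parse import parse_qsl, urlencode

# Hypothetical: fold several widgets' state into one cookie value,
# query-string encoded so the value stays cookie-safe, instead of
# letting each widget set its own cookie.
def pack_state(state):
    # Sort for a deterministic value, so identical state always
    # produces an identical cookie.
    return urlencode(sorted(state.items()))


def unpack_state(cookie_value):
    return dict(parse_qsl(cookie_value))
```

For example, `pack_state({"fields": "id,title", "sort": "importance"})` yields one compact value that every widget can read its own key out of.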
[22:29] <lifeless> so my inclination would still be to use the session state, emit the knob to js explicitly (e.g. via the cache) only on pages that need it, and use an API call or whatever to tell the server its being set
[22:29] <deryck> yeah, I agree we might need to get to that.  I think that's a bit much for my small cookie use now.
[22:29] <lifeless> IIRC facebook do something similar; they have just enough cookies to id the user, the machine, and then the rest of it is js
[22:30] <lifeless> deryck: the reason for my concern is you only have a few hundred bytes before requests will be getting split into multiple frames
[22:30] <lifeless> deryck: so a 15 byte unique name + value in a cookie is actually a Big Deal.
[22:31] <deryck> I see what you mean.
[22:31] <lifeless> deryck: multiple frames for a request is a huge performance hit, as I'm sure you know
[22:31] <deryck> indeed
[22:31] <wgrant> How do cookies interact with having multiple searches open?
[22:32] <lifeless> if you want to use a cookie, I suggest doing a bit of defensive digging into our current request sizes - we may be already terrible, in which case I won't hold you to a higher standard.
[22:32] <lifeless> or we may be very safe, in which case cool.
[22:32] <lifeless> or we may be right on the border, particularly with POST requests, and I would really push back hard on you at that point
[22:33] <deryck> lifeless, ok, how about this?  The branch I have for this is quite small.  Let me get it working.  Then in the week of polish we do, we can explore this further and make sure we're good.  fair enough?
[22:33] <lifeless> deryck: the risk is that you'll cause a site wide one round trip additional latency for every user.
[22:33] <lifeless> deryck: I think thats a pretty big risk.
[22:33] <lifeless> deryck: and I have no good sense about the probability.
[22:34] <lifeless> deryck: mmm, scratch that, my logic is bong.
[22:34] <lifeless> *users that cause the cookie to be set*...
[22:34] <deryck> right
[22:34] <lifeless> so, sure, go ahead - but lets make sure the t's are crossed and the i's are dotted *before* it is enabled beyond ~launchpad.
[22:34] <deryck> not everyone, I'd guess *most* won't have this cookie.
[22:35] <lifeless> deryck: as soon as they fiddle with the feature they will get it
[22:35] <deryck> correct.  well, fiddle with the display of fields.
[22:35] <deryck> you can sort all day without changing fields.
[22:35] <lifeless> so how does that sound? land now, research and fix if needed before widespread (> ~launchpad) use ?
[22:35] <deryck> lifeless, are you ok with opening it for beta users without running this issue to ground?
[22:36] <lifeless> I'm worried about opening it to beta users
[22:36] <lifeless> deleting the cookie, if we change our mind and want to remove the overhead, will be a nuisance.
[22:37] <deryck> lifeless, ok, fair enough.  I'm fine to land now, and dig into it completely before opening further than ~launchpad.
[22:37] <lifeless> If you're up for making that automatic (e.g. delete-cookie in a reply if we see it in a reqest), then I don't mind ~beta-users getting it
[22:38] <deryck> lifeless, let's just settle the issue before opening to beta.  should be easy enough to do tomorrow, to work out how we're doing on page size.
[22:38] <deryck> lifeless, that's the main issue, right?  the total page size.
[22:38] <lifeless> deryck: request size
[22:38] <deryck> lifeless, right, sorry.
[22:38] <lifeless> deryck: set-cookie in the odd response is nothing, cookie: in every request, is :)
[22:39] <lifeless> s/is :/may be :/
[22:39] <deryck> right, I follow.
[22:39] <deryck> I'll do some follow up tomorrow then.  fair enough?
[22:40] <lifeless> deryck: particular things to look at are the sizes of advanced search requests and comment containing posts I guess.
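The defensive digging lifeless asks for — checking whether an extra cookie pushes request headers past a single frame — could be approximated like this. The ~1500-byte figure is the typical Ethernet MTU and is an assumption; the real threshold depends on the MSS, TCP/IP header overhead, and the path, so treat this as a rough screening tool only.

```python
# Rough check: would adding a cookie push a request's headers past a
# typical ~1500-byte Ethernet frame? (Assumed budget; real limits
# depend on MSS, TCP/IP overhead, and path MTU.)
FRAME_BUDGET = 1500


def header_bytes(headers):
    # Approximate wire size: "Name: value\r\n" per header line.
    return sum(len(name) + 2 + len(value) + 2 for name, value in headers)


def fits_in_one_frame(headers, new_cookie=""):
    """Estimate whether headers plus an extra Cookie line fit the budget."""
    extra = len("Cookie: ") + len(new_cookie) + 2 if new_cookie else 0
    return header_bytes(headers) + extra <= FRAME_BUDGET
```

Running this against captured headers from the advanced-search and comment-posting requests lifeless mentions would show how much slack (if any) a 15-byte name/value pair actually has.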
[22:40] <lifeless> deryck: yes, thats great. thanks for grabbing me to discuss this!
[22:40] <deryck> lifeless, np!  Thank you for the discussion!
[22:50] <deryck> Later on, everyone.
[22:52] <poolie> huwshimi, can't wait for your bug tags to land
[22:52] <poolie> do you want to have that talk about markup today maybe?
[22:52] <huwshimi> poolie: :)
[22:52] <huwshimi> poolie: Sure, maybe do you want to have a chat after lunch, I just need to finish off a few things this morning
[22:52] <poolie> wfm, ping me then
[22:53] <huwshimi> poolie: Thanks :)
[22:58]  * StevenK can't find versions.cfg in lp:txlongpoll
[22:59] <StevenK> How does this even work?
[23:00] <lifeless> I think allenap has a branch proposed for merging that will add that
[23:00] <lifeless> and/or check setup.py - it may have version constraints there
[23:03] <StevenK> No constraints in setup.py for Twisted
[23:05] <lifeless> then its probably grabbing latest-always
[23:40] <rick_h_> is there a webui to PQM? I had to change my .bzr email addy and so not sure I'm going to get anything from it
[23:40] <rick_h_> see my ec2 instance is terminated and wonder if things went ok or failed
[23:40] <lifeless> https://pqm.launchpad.net
[23:40] <wgrant> But it's not very useful :)
[23:41] <rick_h_> heh, ok
[23:49] <wallyworld_> poolie: just curious about the change to restore the prng state as opposed to the original proposed change. is yours more correct?
[23:51] <poolie> wallyworld_, i think it's preferable to have tests not mutate the global state any more than is necessary
[23:52] <wallyworld_> sure. agree with that
[23:52] <StevenK> 287. When did that happen. :-(
[23:53] <wgrant> StevenK: Lots of escalations overnight.