[00:11] <lifeless> wgrant: tells haproxy the service hasn't crashed
[00:16] <wgrant> lifeless: What's the difference?
[00:18] <wgrant> The only conceivable difference I can see is that in the case of a crashed service you might possibly want to kill leftover connections, but I doubt it does that.
[00:20] <lifeless> uhm, I looked into it, I don't recall now.
[00:20] <lifeless> it may have been to do with zope awfulness
[00:20] <wgrant> Right, but that only affects appservers.
[00:20] <lifeless> feel free to dig, I don't want to try and page that in right now.
[00:21] <wgrant> It would be nice to not have to add an HTTP service to every service.
[00:21] <wgrant> Sure.
[00:23] <lifeless> wgrant: persistent connections
[00:23] <lifeless> wgrant: and no alerts generated
[00:27] <wgrant> lifeless: Do we use persistent connections?
[00:29] <lifeless> we were; we should on twisted services; we don't on the appservers
[00:29] <lifeless> s/twisted/non-ultra-tuned/
[00:29] <wgrant> Why would we use them on Twisted services?
[00:30] <lifeless> also non-cpu-bound
[00:30] <wgrant> Why do we care which service it gets dispatched to?
[00:30] <lifeless> less overhead
[00:31] <lifeless> tcp will already be open on the second request
[00:31] <wgrant> Hm? That sounds like HTTP keepalive, not haproxy persistent connections.
[00:32] <lifeless> same same
[00:32] <lifeless> at this point I'm going to say 'shoo, go read' - haproxy is terrifyingly evil in some ways
[00:32] <lifeless> yes we have room to rejigger things
[00:32] <wgrant> AFAICT the maintenance-mode persistence behaviour affects cookie-based persistence.
[00:33] <lifeless> also 'how can we tell the service is really down'
[00:34] <wgrant> It's not listening.
[00:34] <lifeless> I don't have any requirements that all our services use haproxy the same way
[00:34] <lifeless> wgrant: that's not at all the same as really down
[00:34] <lifeless> wgrant: (if we don't have a status page)
[00:34] <wgrant> lifeless: Does that matter?
[00:34] <lifeless> hell yes
[00:36] <wgrant> .. how?
[00:36] <wgrant> Assuming I'm using haproxy as a round-robin load balancer, not as a nagios or cookie-based load balancer.
[00:36] <lifeless> so, again, I don't care if some services are configured differently; the requirements are that it's got a sane no-downtime upgrade mechanism, including all the aspects like telling the difference between crashed (so just start it) and going down (do not touch), and getting metrics out
[00:38] <lifeless> I expect different answers for different stacks and different services
[00:38] <wgrant> And I think our assumptions for appservers and codehosting are flawed.
[00:38] <lifeless> right now we have two services with status pages
[00:38]  * lifeless shrugs
[00:39] <lifeless> it works, it's introspectable, and it aided us in identifying and fixing the root cause of hang-on-restart; preserve the functionality and I'm happy.
[00:40] <wgrant> Huh, how was it relevant to that?
[00:40] <wgrant> The regular requests?
[00:42] <lifeless> I seem to recall you polling it to watch what was going on
[00:43] <wgrant> Oh, for codehosting, not appservers.
[00:43] <lifeless> what has you pulling on this thread ?
[00:43] <lifeless> why is it interesting to talk about now ?
[00:43] <wgrant> Well, I don't want to have to add a pointless HTTP service to poppy.
[00:44] <lifeless> I don't think status pages are pointless even if not used for haproxy
[00:45] <lifeless> they provide a good place to hook useful metrics we (have in the past) wanted to poke at (vs publishing to *statsd which is also a good idea but not as watchable-by-F5)
[00:46] <lifeless> conversely, I don't think everything has to have one.
[00:46] <lifeless> Have you been told you have to add one?
[00:47] <wgrant> No.
[00:48] <wgrant> But if there's a reason that codehosting needs one for haproxy, then poppy needs it too.
[00:48] <wgrant> However, I suspect that neither does, and probably neither do the appservers.
[00:48] <lifeless> poppy and codehosting aren't the same service, so I don't see that that holds
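For reference, the health-check distinction debated above (bare TCP connect vs. HTTP status page) looks roughly like this in haproxy terms. This is a generic sketch, not Launchpad's actual config; the `/+status` path and server addresses are made up:

```
# Generic sketch, not Launchpad's real config. A bare TCP check only
# proves something is listening; 'option httpchk' makes haproxy poll an
# HTTP status page, so a service can distinguish "crashed" from "going
# down for upgrade" by changing what the page returns.
backend appservers
    balance roundrobin
    option httpchk GET /+status
    server app1 10.0.0.1:8080 check inter 2000 rise 2 fall 3
    server app2 10.0.0.2:8080 check inter 2000 rise 2 fall 3
```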
[00:58] <lifeless> hah
[00:58] <wgrant> ?
[00:59] <lifeless> I've filed 6.2% of open LP bugs
[00:59] <wgrant> Timeouts do that :(
[01:01] <wgrant> Bah, I'm just below 3% now.
[01:02] <lifeless> could you add some technical info to bug 528459 please?
[01:02] <_mup_> Bug #528459: PPA does not delete old packages after new build <lp-soyuz> <ppa> <Launchpad itself:Triaged> < https://launchpad.net/bugs/528459 >
[01:03] <wgrant> lifeless: NBS
[01:03] <lifeless> I was hoping for a paragraph, to point future folk straight at the issue
[01:03] <lifeless> 2-3 lines
[01:04] <wgrant> But all Soyuz people will know basic stuff like NBS.
[01:04] <wgrant> Oh wait :(
[01:06] <StevenK> lifeless: NBS == Not Built from Source. If you have a 'user' source package that builds 'libuser1', and then a new version of user is uploaded that now builds 'libuser2', libuser1 is now NBS.
[01:07] <lifeless> StevenK: yes, I am slightly familiar with packaging jargon
[01:07] <lifeless> StevenK: (thanks for chiming in :P)
[01:07] <StevenK> I was providing said paragraph
[01:08] <lifeless> StevenK: mmm; Something like 'ppas do not handle NBS binaries, look at <method> for how they are handled in primary archives' - that would be said paragraph
[01:08] <lifeless> StevenK: though it sounds like wgrant suspects it's not NBS handling
[01:09] <wgrant> Hm?
[01:09] <wgrant> It is NBS.
[01:09] <lifeless> you said 'Oh wait :('
[01:09] <StevenK> But all Soyuz people will know basic stuff like NBS. *Oh wait :(*
[01:09] <wgrant> That was imitating realisation that there is no longer a Soyuz team to know things.
[01:09] <StevenK> That is what wgrant meant
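StevenK's NBS definition above boils down to a set difference; a toy sketch, not Soyuz code:

```python
# Toy sketch of NBS ("Not Built from Source") as a set difference: a
# binary is NBS when it is still published but no current source
# version builds it. Illustrative only, not Soyuz's implementation.
def find_nbs(published_binaries, current_builds):
    """published_binaries: set of published binary package names.
    current_builds: mapping of source name -> set of binaries that
    source's newest version builds."""
    built = set()
    for binaries in current_builds.values():
        built |= binaries
    return published_binaries - built

# StevenK's example: 'user' used to build libuser1, now builds libuser2,
# so libuser1 is NBS.
print(find_nbs({"libuser1", "libuser2"}, {"user": {"libuser2"}}))  # -> {'libuser1'}
```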
[01:16] <lifeless> and there are now no lp medium bugs
[01:16] <lifeless> that are not incomplete
[01:16] <lifeless> + we need a release of loggerhead
[01:16] <wgrant> Just lots of bugspam :P
[01:16] <wgrant> Indeed.
[01:35] <wgrant> erk
[01:35] <lifeless> are we able to use amqplib 1.0.2 ?
[01:35] <wgrant> germanium:process-uploaded failed to run for at least 35 minutes.
[01:35]  * wgrant investigates.
[01:35] <wgrant> lifeless: What are we using now?
[01:35] <lifeless> 0.6.1 is in the downloadcache
[01:37] <wgrant> Seems like 1.0 was released at the end of July.
[01:37] <wgrant> Two years after the previous release.
[01:38] <wgrant> Upgrade away.
[01:46] <nigelb> Morning!
[01:46] <wgrant> Evening nigelb.
[01:49] <nigelb> wgrant: It's already evening out there?
[01:49] <wgrant> No.
[01:50] <wgrant> But correctly timed greetings are silly.
[02:02] <nigelb> Heh, true.
[02:03] <lifeless> bwah, going to have to thread-sanitise errorlog.py
[02:04] <lifeless> unless; do we get one instance per thread? I seem to recall a hateful global in there. <grah>
[02:12] <mwhudson> quick, stash it on the interaction!
[02:26] <lifeless> mwhudson: separate non-threadsafe library
[02:26] <lifeless> mwhudson: giving it a callback to the interaction seems even uglier than a callback to make a connection
[02:26] <mwhudson> ah ok
[02:26] <lifeless> mwhudson: however, it won't have global tls; just per object tls
[02:27] <lifeless> (oops_amqp.Publisher will hold a threading.local() containing the amqplib Connection object for a given thread's oops reporting)
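The per-thread connection idea lifeless describes can be sketched like this; the class shape and the factory callback are illustrative assumptions, not oops_amqp's actual API:

```python
import threading

# Sketch: the publisher holds a threading.local() so each thread lazily
# gets its own connection from a caller-supplied factory, and the
# non-thread-safe library's Connection is never shared across threads.
class Publisher:
    def __init__(self, connection_factory):
        self._factory = connection_factory
        self._local = threading.local()  # one 'connection' slot per thread

    def _connection(self):
        if not hasattr(self._local, "connection"):
            self._local.connection = self._factory()
        return self._local.connection

    def publish(self, report):
        # Runs on whatever thread reported the OOPS, using that
        # thread's private connection.
        self._connection().send(report)
```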
[03:00] <lifeless> can has revu?
[03:01] <wgrant> As long as it doesn't involve oops_amqp and thread-globals, probably.
[03:02] <lifeless> oops-datedir-repo - support to act as a component in oops-tools
[03:02] <lifeless> https://code.launchpad.net/~lifeless/python-oops-datedir-repo/0.0.9/+merge/78793
[03:02] <lifeless> the amqp thing I will want you to read, as one of the more amqp fluent folk around
[03:02] <lifeless> but that's still largely sci-fi.
[03:05] <poolie> hi all
[03:06] <StevenK> wgrant: Can haz mumble?
[03:06] <wgrant> StevenK: Why?
[03:06] <StevenK> wgrant: I'd like a pre-impl about the enum that sinzui was talking about
[03:06] <wgrant> Which enum?
[03:06] <StevenK> The privacy enum
[03:07] <wgrant> The one that probably isn't an enum at all?
[03:07] <wgrant> Oh.
[03:07] <wgrant> Team privacy?
[03:07] <StevenK> Right
[03:08] <wgrant> I think we really need Curtis for this as well.
[03:08] <wgrant> Do you know the history around private and private membership teams?
[03:08] <StevenK> Not really, so we probably want to do it after the stand-up
[03:09] <StevenK> wallyworld_: Your three cards in Deployment-Ready can probably be tossed into Done-Done.
[03:09] <lifeless> enum ?
[03:10] <wallyworld_> StevenK: ok
[03:10] <StevenK> wallyworld_: But check the linked bugs are Fix Released.
[03:10]  * wallyworld_ checks
[03:10] <wgrant> lifeless: A less-private privacy level for private teams.
[03:10] <wgrant> lifeless: To reveal their name.
[03:11] <StevenK> garbo-hourly is *still* crashing?
[03:11] <wgrant> StevenK: Yes.
[03:11] <wgrant> Somebody needs to fix it.
[03:11] <StevenK> Fail.
[03:11]  * StevenK looks
[03:11] <wgrant> It started crashing on Friday night, and will crash for weeks if one of us doesn't fix the code, I expect.
[03:12] <wallyworld_> but isn't that a maintenance squad role?
[03:12] <wgrant> Yes.
[03:12] <wallyworld_> so let them do it
[03:12] <StevenK> 2011-10-10 00:13:10 ERROR   [BugTaskIncompleteMigrator] Unhandled exception
[03:12] <StevenK>  -> http://launchpadlibrarian.net/82446649/vWceIFzREh8RMuyErP2OjLr1n4s.txt (can't compare datetime.datetime to NoneType)
[03:12] <StevenK> wallyworld_: So we can get spammed with script-activity for weeks?
[03:13] <wgrant> Bug #871038
[03:13] <_mup_> Bug #871038: BugTaskIncompleteMigrator blows up on bugs that were filed Incomplete <oops> <Launchpad itself:Triaged> < https://launchpad.net/bugs/871038 >
[03:13] <wallyworld_> StevenK: no, if someone else keeps doing it for them then nothing will change
[03:26] <lifeless> wgrant: so we believe some teams will want to be islands and standalone ?
[03:26] <lifeless> wgrant: (I would question that, particularly for canonical ones)
[03:27] <lifeless> I suspect brad will see it on monday, as he is working that arc
[03:27] <wgrant> lifeless: I assume there was a reason that completely private teams were implemented.
[03:27] <lifeless> wgrant: that's a nice assumption
[03:28] <lifeless> the history is messy
[03:28] <lifeless> rather than design a model that scales up as folk need more, we've special-cased things with the team-type enum, and it's been traumatic
[03:28] <lifeless> I would like to butt into this discussion
[03:31] <StevenK> lifeless: I've found a bunch of lines in BugTaskIncompleteMigrator that are either indented incorrectly, or >77 characters. :-(
[03:34]  * StevenK nails Ursinha to the channel.
[03:34] <Ursinha> StevenK, sorry
[03:35] <Ursinha> setting up my bip server
[03:36] <StevenK> Ursinha: It's okay. Hopefully my nail didn't hit anything vital. :-P
[03:36] <Ursinha> StevenK, haha
[03:36] <poolie> lifeless, hi, i might have a go at the affectsmetoo timeouts later
[03:37] <poolie> i would appreciate some advice or reassurance though
[03:37] <poolie> because with ~20x variations in query speed on production
[03:37] <poolie> i feel a bit pessimistic about being able to write something that will be consistently fast :/
[03:37] <poolie> and trying it out by landing changes has a long latency
[03:45] <StevenK> Can haz review? https://code.launchpad.net/~stevenk/launchpad/fix-bugtask-incomplete-migrator/+merge/78794
[03:50] <lifeless> poolie: its tricky, yes.
[03:51] <lifeless> poolie: basically needs a holistic approach - look at the whole work happening, what it's doing and how, and come up with a way to answer it efficiently
[03:51] <poolie> hm
[03:52] <poolie> it doesn't seem like it should be that hard of a query to answer
[03:52] <poolie> it's not retrieving a large amount of data
[03:52] <lifeless> poolie: I wouldn't underestimate the warm-up cost for doing that; you can easily spend a couple of days on a low hanging fruit timeout, and a week or more on one that has been previously optimised
[03:52] <poolie> mm
[03:52] <poolie> i hate to leave this half baked
[03:52] <poolie> i really like the feature but it's pretty flaky now
[03:52] <lifeless> poolie: so, as a for-instance: its going to be cold data, nearly every time
[03:52] <poolie> i don't suppose we can up the timeout?
[03:53] <lifeless> so the question is 'how can we answer this query when the data is being read off of disk each time'
[03:53] <poolie> oh, and it got ~4 bug dupes about the timeout since it launched
[03:53] <lifeless> poolie: yah, making it visible tends to do that :>
[03:53] <lifeless> poolie: no, if we can't answer this efficiently, we should drop the feature.
[03:53] <poolie> re 'cold' - i had heard that the db fitted in memory
[03:53] <poolie> but, it's not guaranteed to be all in memory, just mostly?
[03:53] <StevenK> Like fun it does
[03:53] <lifeless> poolie: nope, not at all.
[03:54] <StevenK> poolie: The DB is 300GiB, and the DB servers have 128GiB of RAM
[03:54] <StevenK> There's math in there.
[03:54] <lifeless> poolie: the DB exceeds main memory on our prod servers by -lots-
[03:54] <lifeless> StevenK: re line length - meh; sure; shrug.
[03:54] <poolie> 'drop the feature' - doing a bug search at all also generally times out :/
[03:54] <StevenK> lifeless: We have coding standards for a reason. :-(
[03:54] <lifeless> StevenK: if it wasn't in PEP8, I would be arguing that we should expand it
[03:55] <StevenK> It makes me unhappy to see them not used.
[03:55] <lifeless> StevenK: there are standards and standards; amongst other things pragmatism over purity.
[03:55] <poolie> anyhow
[03:55] <lifeless> StevenK: line length violations don't cause crashes, exceptions, unbound variables, etc.
[03:55] <poolie> so this stuff might be cold
[03:56] <lifeless> poolie: it's pretty much guaranteed to be
[03:56] <poolie> ok
[03:56] <lifeless> poolie: it's a separate table with no reason for it to be read until someone consults it for $user-FOO
[03:56] <StevenK> lifeless: Right, they don't, but we still have the standards for a reason, so they should be followed.
[03:56] <poolie> is it reasonable to do a query that may do 'a bit' of hard io, but not much, or will even that often be too slow?
[03:57] <lifeless> poolie: it's about 1 to 2 ms per row of IO (that's a terrible rule of thumb but close enough to be useful)
[03:57] <poolie> thanks
[03:57] <poolie> so it'd be reasonable to expect someone affected by 5 bugs would always be able to see this without timing out?
[03:58] <lifeless> StevenK: so fix them; if you're saying I put them there, 'sorry' - I have my editor configured to wrap at 80 and don't usually break the limit, but OTOH if it's clearer as it is, then I think it's better: standards are meant to help with clarity after all
[03:58] <StevenK> lifeless: I have fixed them -- I'm not complaining that you put them -- I'm just saying that they make me sad.
[03:59] <lifeless> poolie: possibly; once it gets enough use for the table statistics to be hot, then it will probably be able to plan the queries quickly and execute in a reasonable timeframe
[03:59] <lifeless> StevenK: k
[03:59] <lifeless> StevenK: thanks for fixing the migrator
[03:59] <lifeless> StevenK: can I ask that you also comment it out :)
[03:59] <lifeless> StevenK: (see the bug about ProductSeries:+index and migrated bugs)
[04:00] <StevenK> Er?
[04:00] <StevenK> Now I'm confused
[04:00] <lifeless> there were two issues with the branch
[04:00] <poolie> lifeless: ok so if you were going to work on this, how, generally,  would you go about it?
[04:00] <lifeless> one is the migrator going boom
[04:00] <poolie> try different queries in the hope of finding one that's fast?
[04:00] <lifeless> the other is that a migrated bug in a series causes a manually-mapping function on ProductSeries:+index to time out.
[04:01] <poolie> or, do some kind of more architectural change as far as ajax loading, or freshening them for active users, or something?
[04:01] <StevenK> lifeless: So we should revert the migrator wholesale?
[04:01] <poolie> or something about trying to create different indexes - and if so which?
[04:01] <lifeless> StevenK: I don't think the whole branch needs reverting; but the garbo job shouldn't convert any more until the web UI issue is fixed
[04:01] <poolie> s/which/can they just be tested on staging to see how they perform on real data?
[04:02] <lifeless> poolie: those are all valid strategies; doing ajax loading of just one number is something I'm suspicious of (when I started we had ajax things that themselves time out) - I figure if we can't do the whole page sanely in 5s time, splitting it into bits just lets us be inefficient in more places
[04:03] <poolie> i'm talking about loading the whole page, like bugs.l.n/~/+affectingbugs
[04:03] <poolie> not the number
[04:03] <lifeless> ok
[04:03] <poolie> and yes, i share the thinking that doing it in multiple roundtrips just seems more inefficient
[04:03] <poolie> ah
[04:03] <lifeless> so for that, ajax is irrelevant anyhow, you need the batch to be efficient
[04:03] <poolie> well
[04:04] <poolie> we could load most of the page, then have the actual bug rows arrive later
[04:04] <poolie> that would give a bit better of an impression
[04:04] <poolie> but it would be a lot of work
[04:04] <lifeless> the timeout though is coming from the batch
[04:04] <poolie> right, and it would still need a long timeout to actually load the rows
[04:04] <lifeless> which we won't do - long queries affect liveness throughout the system
[04:04] <lifeless> we're driving *everything* down to 5s tops
[04:05] <lifeless> there is a quote request in with IS to examine new hardware
[04:05] <lifeless> which would have more memory
[04:05] <lifeless> I think that that will make a significant difference to the 'this works if I keep hitting refresh' cases
[04:06] <lifeless> StevenK: I think something like removing the job from the garbo list, or putting it in the experimental list or something, is appropriate
[04:06] <wgrant> StevenK: Can't you just add another disjunct to the existing if?
[04:06] <lifeless> StevenK: reverting the whole branch would break existing migrated bugs in more ways
[04:07] <wgrant> StevenK: If either date is None, -> WITHOUT_RESPONSE
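The guard wgrant suggests can be sketched as follows. Function and constant names here are illustrative, not the actual BugTaskIncompleteMigrator code:

```python
from datetime import datetime

# Comparing a datetime to None raises TypeError ("can't compare
# datetime.datetime to NoneType"), which is what crashed garbo-hourly;
# so treat a missing date on either side as "without response" before
# ever comparing.
INCOMPLETE_WITH_RESPONSE = "INCOMPLETE_WITH_RESPONSE"
INCOMPLETE_WITHOUT_RESPONSE = "INCOMPLETE_WITHOUT_RESPONSE"

def migrated_status(date_incomplete, date_last_message):
    if date_incomplete is None or date_last_message is None:
        return INCOMPLETE_WITHOUT_RESPONSE
    if date_last_message >= date_incomplete:
        return INCOMPLETE_WITH_RESPONSE
    return INCOMPLETE_WITHOUT_RESPONSE
```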
[04:07] <poolie> in case you don't know, i landed this with no count next to it, so there is no performance impact unless you actually click through to the new pages
[04:07] <lifeless> poolie: yeah, I know :)
[04:07] <nigelb> Is there a guide somewhere on writing garbo jobs?
[04:07] <lifeless> nigelb: there are docs on it in the system about how to use the API etc
[04:07] <wgrant> lifeless: Product:+series OOPSes, it doesn't time out.
[04:07] <lifeless> nigelb: there isn't a 'for dummies' one AFAIK
[04:07] <lifeless> wgrant: I know
[04:07] <nigelb> lifeless: HA.
[04:08] <nigelb> lifeless: lol :)
[04:08] <lifeless> wgrant: why do you think I didn't ?
[04:08] <wgrant> "causes a manually-mapping function on ProductSeries:+index to time out"
[04:08] <lifeless> wgrant: ah, because I wrote bad words.
[04:08] <lifeless> wgrant: I meant go boom
[04:09] <wgrant> Heh
[04:09] <StevenK> wgrant: Right, fixed.
[04:09] <StevenK> Is there a bug for Product:+series bangness?
[04:10] <nigelb> lifeless: Is it on the wiki?
[04:10] <wgrant> StevenK: Also, rather than using deprecated switchDbUser and testadmin, can you use 'with dbuser' and launchpad?
[04:10] <lifeless> poolie: so the problem here is you need to consult 4 tables to generate the batch, one of which will have a lot of pages loaded
[04:10] <lifeless> poolie: it's not a clustered table IIRC, so take me - 1K rows in the table
[04:10] <wgrant> StevenK: Bug #871076, but that can be left for maintenance people.
[04:10] <_mup_> Bug #871076: Product:+series OOPSes on INCOMPLETE_WITH_RESPONSE and INCOMPLETE_WITHOUT_RESPONSE bug tasks <oops> <regression> <Launchpad itself:Triaged> < https://launchpad.net/bugs/871076 >
[04:10] <wgrant> Since it isn't cronspamming incessantly.
[04:11] <wgrant> LP can choke and die, as long as it's quiet about it.
[04:11] <StevenK> wgrant: But then I'm just spreading the problem by fixing garbo-hourly ...
[04:11] <lifeless> poolie: because they are spread out over the table, and there are 1M rows, it's probably loading 1K pages of table data plus 10 or so pages of index, to calculate the query result
[04:11] <poolie> right
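lifeless's rule of thumb above makes the arithmetic easy to check; the 1.5 ms midpoint is an assumption, not a measured figure:

```python
# Back-of-envelope: roughly 1-2 ms of random IO per cold row or page
# fetched (lifeless's rule of thumb); 1.5 ms is an assumed midpoint.
def cold_io_ms(fetches, ms_per_fetch=1.5):
    return fetches * ms_per_fetch

# ~1K scattered table pages plus ~10 index pages for one batch:
print(cold_io_ms(1010))  # -> 1515.0 ms, a big slice of a 5 s budget
# versus a user affected by only 5 bugs:
print(cold_io_ms(5))     # -> 7.5 ms
```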
[04:11] <wgrant> StevenK: Yeah, but it's likely they'll be deployed together.
[04:12] <StevenK> wgrant: Hopefully.
[04:12] <wgrant> StevenK: As it's not as if we have any big series things happening this week.
[04:12] <StevenK> Hah
[04:12] <StevenK> Hahahahahaha
[04:12] <lifeless> poolie: so, in short, personalisation of the bug tracker is -hard-, this is going to be very tricky.
[04:12] <lifeless> poolie: probably we want a personalisation service on its own hardware that can run in-RAM totally and not get paged out, vs the one big DB which just LRU's everything.
[04:13] <StevenK> wgrant: I'd like to push back on switchDbUser -- I'm following the established pattern, and I'd rather not re-indent the entire function.
[04:13] <poolie> obviously there's going to be strong locality across actually-active users
[04:13] <wgrant> StevenK: You should only need to indent the makeBug.
[04:13] <wgrant> StevenK: It is the established pattern, but I deprecated it weeks ago.
[04:13] <lifeless> poolie: the crazy thing here is that the table is only 40MB packed and 21MB in the index I would predict will be used.
[04:13]  * StevenK looks for 'with dbuser' in the code
[04:13] <wgrant> StevenK: from lp.testing.dbuser import dbuser
[04:13] <wgrant> with dbuser('launchpad'):
[04:14] <poolie> so i guess we *could* cluster on user id, but it's probably also used to count up affected users?
[04:14] <wgrant>   # ahaha I am god
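A minimal stand-in for the `with dbuser(...)` pattern wgrant shows above, illustrating why the context-manager form beats a one-way switchDbUser(): the previous user is restored even if the block raises. The `_current_user` state and helpers are hypothetical, not lp.testing.dbuser's real internals:

```python
from contextlib import contextmanager

# Hypothetical stand-in state; the real implementation switches the
# actual database connection's user.
_current_user = ["testadmin"]

def current_user():
    return _current_user[0]

@contextmanager
def dbuser(name):
    previous = current_user()
    _current_user[0] = name
    try:
        yield
    finally:
        # Restored even if the block raises, unlike a one-way switch.
        _current_user[0] = previous
```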
[04:14] <poolie> or maybe that's cached? otherwise it seems it would always be hot
[04:14] <lifeless> poolie: for instance, I have only 64K of the data in that table
[04:14] <wgrant> I have a suspicion that it'd be faster to do it in a few separate queries.
[04:14] <poolie> have it where?
[04:14] <wgrant> The enforce optimisation boundaries.
[04:14] <lifeless> select * into temporary table lifelessbap from bugaffectsperson where person=2;
[04:14] <lifeless> SELECT
[04:14] <wgrant> s/The/To/
[04:14] <lifeless>  \dt+ lifelessbap
[04:14] <lifeless>                        List of relations
[04:14] <lifeless>    Schema   |    Name     | Type  | Owner | Size  | Description
[04:14] <lifeless> ------------+-------------+-------+-------+-------+-------------
[04:14] <lifeless>  pg_temp_40 | lifelessbap | table | ro    | 64 kB |
[04:15] <poolie> so how can it get so cold?
[04:15] <StevenK> wgrant: Fixed, re-running tests.
[04:15] <lifeless> poolie: we have cron scripts that walk entire tables end to end
[04:15] <lifeless> poolie: for some very big tables
[04:15] <wgrant> StevenK: I'll do a batch fix-up and remove switchDbUser eventually.
[04:15] <wgrant> And then pull some tests back to Functional layers.
[04:15] <StevenK> poolie: Revision, for instance, is nearly a billion rows
[04:16] <lifeless> poolie: the point that that 64K is spread over everything is still valid
[04:16] <lifeless> poolie: however - have you deployed stub's query fix yet ?
[04:16] <poolie> nup
[04:16] <lifeless> poolie: it was just a crazy query initially, and there's no point stressing until that is landed
[04:16] <poolie> my point in talking to you was, whether it was too much of a shot in the dark
[04:16] <poolie> since even the improved one is sometimes quite bad (>3000ms)
[04:16] <poolie> but perhaps the typical case is better
[04:16] <lifeless> the bad query is 9s; the improved one is 3x better to start with
[04:17] <poolie> i'll take that
[04:21] <StevenK> Bring on longpoll
[04:22] <StevenK> wgrant: Changes pushed, diff updated.
[04:25] <wgrant> StevenK: Approved.
[04:26] <StevenK> wgrant: Thanks, tossing through ec2
[04:28] <StevenK> wallyworld_: Your 518 ec2 AMI can be deleted.
[04:28]  * StevenK looks at deleting 519
[04:29] <wgrant> Ah, yes, was going to poke people about that.
[04:29] <wgrant> But related stuff on Saturday didn't quite go smoothly.
[04:29]  * wgrant thwacks buildbot and stuff.
[04:30] <StevenK> Yes, the build of that old revision was lol-worthy
[04:30] <wgrant> And then the one afterwards caused a bzrlib exception.
[04:30] <StevenK> Come on, AWS! Surely you're not slower than SSO!
[04:30] <nigelb> lol
[04:31] <nigelb> You'd be surprised.
[04:31] <StevenK> I would not.
[04:32] <stub> StevenK: Easiest way of handling a garbo job you don't want to run on production but might want to run on staging is to stick it in the 'experimental' list.
[04:34] <nigelb> lib/lp/scripts/garbo.py is where garbo jobs go?
[04:34] <StevenK> Yes
[04:34]  * StevenK kicks AWS and re-loads
[04:35] <stub> nigelb: For now. It is growing a little unwieldy.
[04:35] <nigelb> stub: I'm still trying to figure out how it works :)
[04:37] <StevenK> wgrant: 519 binned
[04:37] <wgrant> StevenK: Great.
[04:49]  * StevenK tries to understand bug 307539
[04:49] <_mup_> Bug #307539: bug attachment HostedFiles refuse to be deleted <lp-foundations> <oops> <tech-debt> <Launchpad itself:Triaged> < https://launchpad.net/bugs/307539 >
[04:53] <StevenK> Ah. HostedFile is in lazr.restfulclient
[05:04] <StevenK> lifeless: You commented on bug 307539 -- writing a quick API script: bug = launchpad.bugs[17] ; bug.attachments[0].data.delete() ; results in a 405, not a 500
[05:04] <_mup_> Bug #307539: bug attachment HostedFiles refuse to be deleted <lp-foundations> <oops> <tech-debt> <Launchpad itself:Triaged> < https://launchpad.net/bugs/307539 >
[05:06] <lifeless> StevenK: sounds like it's Fix Released already then
[05:07] <wgrant> lifeless: Do you have opinions on how to do per-artifact observer modelling?
[05:09] <lifeless> wgrant: some testing required but either partitioned FK columns or a single generic-reference intermediary table (which we may want to move BugTask etc to use as well)
[05:10] <lifeless> the testing basically being tossing variations of existing prod data at both and assessing query performance
[05:10] <wgrant> lifeless: I was thinking it's probably best (for indices) to just have bugtask/branch FKs on the observer table for now.
[05:10] <lifeless> do you observe a task or a bug ?
[05:11] <lifeless> what happens if you observe something thats not in your pillar anymore ?
[05:11] <wgrant> I'm not sure yet, but I think a task.
[05:11] <wgrant> Bugs are horrible and will probably need several triggers anyway.
[05:11] <lifeless> I lean towards a separate table
[05:12] <wgrant> Oh?
[05:12] <lifeless> surrogate keys are a common part of query schemas, and generally good for performance
[05:12] <lifeless> but like I say, needs testing
[05:12] <wgrant> What would this table look like?
[05:13] <lifeless> id, bugtask, branch, blueprint, question, milestone, series, sourcepackagename, distroseries
[05:13] <wgrant> Oh, a universal generic reference table?
[05:14] <lifeless> possibly a little less ambitious than totally generic, but yes.
[05:14] <lifeless> immutable rows
[05:14] <lifeless> (except when we truly are moving the referenced thing around)
[05:15] <lifeless> by which I mean, if the observer changes from observing bug X to bug Y, you would add a row for bug Y and change the observer FK to its PK, rather than editing the dereference table
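The intermediary table lifeless lists above might look something like this in Postgres terms. The DDL is purely illustrative: the table name, the exact referenced tables, and the constraint are assumptions, not a real Launchpad schema:

```sql
-- Illustrative DDL only. One surrogate key per referenced artifact;
-- exactly one reference column is populated per row. Rows are meant to
-- be immutable: repointing an observer means inserting a new row here
-- and updating the observer's FK, not editing this table.
CREATE TABLE artifactreference (
    id serial PRIMARY KEY,
    bugtask integer REFERENCES bugtask,
    branch integer REFERENCES branch,
    blueprint integer REFERENCES specification,
    question integer REFERENCES question,
    milestone integer REFERENCES milestone,
    productseries integer REFERENCES productseries,
    sourcepackagename integer REFERENCES sourcepackagename,
    distroseries integer REFERENCES distroseries,
    CONSTRAINT exactly_one_reference CHECK (
        (bugtask IS NOT NULL)::integer + (branch IS NOT NULL)::integer
        + (blueprint IS NOT NULL)::integer + (question IS NOT NULL)::integer
        + (milestone IS NOT NULL)::integer + (productseries IS NOT NULL)::integer
        + (sourcepackagename IS NOT NULL)::integer
        + (distroseries IS NOT NULL)::integer = 1)
);
```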
[05:15] <wgrant> Also, I can't see any way around using triggers to ensure that an observer exists for every task if there is an observer for one of them.
[05:16] <wgrant> Right.
[05:16] <lifeless> you will probably need a chunk of triggers, yes
[05:16] <wgrant> And I'm not sure how exactly we'll handle task retargeting, but I guess we can sort that out somehow.
[05:17] <lifeless> do you mean observer policy ?
[05:17] <lifeless> '18:15 < wgrant> Also, I can't see any way around using triggers to ensure that an observer exists for every task if there is an observer for one of them.
[05:17] <lifeless> '
[05:17] <wgrant> lifeless: Well, that too, but that's far simpler.
[05:17] <lifeless> ok, so why would you want to ensure observers exist for every task ?
[05:17] <wgrant> lifeless: The issue is that if I'm an observer for project A, and there's also a project B task on one of A's bugs, I need a restricted observer record for the project B task as well.
[05:18] <wgrant> Or disclosure views will have to query across bugtask.
[05:18] <lifeless> or are you suggesting for a 200 task bug, that the special exemption given to let A see the bug, will result in 200 (A, taskN) rules ?
[05:18] <wgrant> Yes.
[05:18] <lifeless> uhm
[05:18] <lifeless> my immediate reaction is to suggest two schemas
[05:19] <lifeless> a single non-duplicate schema
[05:19] <lifeless> and a derived query schema
[05:19] <wgrant> Why?
[05:19] <lifeless> depends on exactly what the page needs to show I guess
[05:20] <lifeless> why? so that web and API transactions to change things write a small amount of data
[05:20] <wgrant> We need to be able to show everyone that has access to the project artifacts.
[05:20] <lifeless> you have two very different statements here though
[05:20] <lifeless> one is a statement that (person or team X) has access to (asset Y)
[05:21] <lifeless> the other is a statement that (anyone that has access to asset Y in project B) has access to (asset Y too)
[05:22] <lifeless> I presume you're not suggesting that you would expand out all the former statements from project B into restricted statements on project A
[05:22] <lifeless> because that would make any write to project B's observer list also write to project A
[05:22] <wgrant> We need to do something along those lines.
[05:22] <wgrant> Or we have to join across bugtask, which would be even worse.
[05:23] <lifeless> so, don't conflate primary storage with query storage
[05:23] <lifeless> you need somewhere to record intentions *once*
[05:23] <lifeless> otherwise you can never audit and tell what things are meant to be where.
[05:23] <lifeless> DRY etc etc
[05:23] <wgrant> Why can we never audit?
[05:24] <lifeless> separately, you can do the math and see whether joining through task.bug to get the observer rules for project B is fast
[05:24] <lifeless> however, unless you're - ELOCAL
[05:24] <wgrant> EPARSE
[05:26] <lifeless> skype?
[05:26] <wgrant> Sure
[05:37] <StevenK> wgrant: Are you still skyping?
[05:40] <wgrant> StevenK: Yes.
[07:48] <poolie> > ImportError: cannot import name OfficialBugTag
[07:48] <poolie> halp?
[07:48] <poolie>     ZopeXMLConfigurationError: File "/home/mbp/launchpad/lp-branches/work/lib/lp/services/messages/configure.zcml", line 44.2-47.6
[07:51] <StevenK> Do you remove it?
[07:51] <StevenK> Or move it?
[08:00] <adeuring> good morning
[08:09] <poolie> StevenK: no
[08:09] <poolie> perhaps it's a knock-on import error, i'll shelve my changes and try again
[08:16] <wgrant> poolie: We have lots of potential circular imports.
[08:16] <wgrant> Because any sense of layering is for the weak.
[08:16] <poolie> yep, it was circularity
[08:16] <poolie> well
[08:16] <poolie> requiring the imports be a strictly acyclic graph even between closely related modules would be a pretty high bar
[08:17] <wgrant> Sure, but we are far worse than we could be :)
[08:17] <poolie> 'we' meaning humans :)
[08:17] <poolie> also launchpad
[08:17] <poolie> i will try not to make it worse
[08:18] <poolie> i would like to clean up the per-person bug views to be not entirely unrelated to the main ones
[08:22] <lifeless> \o/
[08:22] <lifeless> amqp oops publishing working
[08:22] <lifeless> just need the receiver loop and am done
[08:23] <lifeless> -> family for a bit
[08:23] <poolie> lifeless: and then zero latency?
[08:46] <lifeless> gmb: yo
[08:47] <lifeless> poolie: then open source oops-tools, and then add an oops-tools script using the bits to receive oopses over amqplib
[08:48] <lifeless> poolie: somewhere in there I need to add spilling to disk if rabbit is down
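The spill-to-disk fallback lifeless mentions can be sketched as a simple try/except around the AMQP publish; both collaborator objects and their `publish` methods are hypothetical stand-ins, not the real oops_amqp or oops-datedir-repo APIs:

```python
# Sketch: try to publish an OOPS report over AMQP, and fall back to a
# datedir-style disk repository if the broker is unreachable, so the
# report is never lost while rabbit is down.
def publish_oops(report, amqp_publisher, disk_repository):
    try:
        amqp_publisher.publish(report)
    except ConnectionError:
        # Rabbit is down; keep the report locally instead.
        disk_repository.publish(report)
```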
[08:49] <gmb> lifeless 'sup?
[08:51] <lifeless> gmb: wanted to check with you - I retriaged some bugs down to high; but if they were critical due to priority inheritance then I can put them back
[08:52] <gmb> lifeless: I've not seen any yet that I disagreed with. Have you got any particular bugs in mind?
[08:53] <lifeless> just the ajax fallout ones
[08:53] <gmb> Okay. I'm happy with those being High rather than critical.
[09:21] <poolie> short review please? https://code.launchpad.net/~mbp/launchpad/866100-affectsme-timeout/+merge/78806
[09:21] <poolie> do i need a db review for that?
[09:24] <poolie> lifeless, for your interest (I realize it's late) i did do the change to a join, in that mp, and it seems it may help
[09:31] <poolie> stub: could you review it?
[09:47] <stub> k
[12:11] <danilos> benji, oh, you are actually in already, can you please take a look https://code.launchpad.net/~danilo/launchpad/bug-869089/+merge/78819 when you find some time? (mostly mechanical, replacing remaining uses of official_rosetta throughout the code)
[12:11] <benji> danilos: sure
[12:16] <danilos> benji, thanks
[13:01] <deryck> Good morning, all.
[14:23] <bigjools> has anyone seen "UploadFailed: Server said: 500 Internal server error" coming from the librarian when running tests?
[14:37] <jml> Hmm.
[14:37] <jml> bigjools: Yes.
[14:37] <jml> bigjools: But I can't remember what caused it.
[14:37] <bigjools> it's oneiric
[14:37] <bigjools> https://bugs.launchpad.net/launchpad/+bug/871596
[14:37] <_mup_> Bug #871596: Can't run tests involving Librarian <build-infrastructure> <librarian> <Launchpad itself:Triaged> < https://launchpad.net/bugs/871596 >
[14:37] <jml> Hmm.
[14:39] <jml> bigjools: no, I haven't seen that thing from the librarian log
[14:39] <jml> bigjools: but I've only done one or two LP patches in oneiric, and that was weeks ago.
[14:40] <bigjools> jml: I only get it when running more than one test
[14:40] <bigjools> single runs are fine
[14:43] <jml> bigjools: it takes quite a while to get an updated tree with runnable tests.
[14:43] <bigjools> jml: not sure what you mean
[14:44] <jml> bigjools: bzr pull; bzr up download-cache; ./utilities/update-sourcecode; make schema
[14:44] <jml> takes time
[14:46] <bigjools> ah, that :)
[15:30]  * bigjools nails 5-digit bug to the wall
[15:51] <rvba> Thanks for the reviews benji!
[15:52] <benji> rvba: my pleasure
[19:20] <lifeless> flacoste: 'yo'
[19:23] <lifeless> gary_poster: wheee, nice bug there
[19:23] <gary_poster> lifeless, :-)
[20:57] <lifeless> wgrant: medical schedule change; I have to pop out quite a bit sooner; I should be on 3g successfully though, once I'm at the place we're going.
[21:45] <lifeless> \o/ and handling of rabbit down done too. :)
[21:57] <lifeless> wgrant: can you do a review of the first two commits of python-oops-amqp ?  We don't have a sensible review-first-commit thing yet
[22:02] <wgrant> Oh, bah, no Curtis today, I suppose.
[22:03] <lifeless> wgrant: I'm specifically looking for thinkos in my amqp handling
[22:03] <wgrant> Yep, am looking, since it seems we don't have a standup without the US :)
[22:03] <lifeless> thanks
[22:07] <lifeless> just drop me mail; I'll try to get back online ~your 10am
[22:07] <wgrant> k
[22:10] <wgrant> lifeless: Also, http://www.rabbitmq.com/faq.html#which-entity-should-declare-exchanges
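The FAQ wgrant links recommends that every client declare the exchanges it uses, since redeclaring with identical arguments is idempotent on the broker; neither publisher nor consumer then depends on the other starting first. A sketch of that pattern (exchange name and the fake channel are hypothetical; the call mirrors amqplib's `exchange_declare`):

```python
# Both producer and consumer call this on startup: a redeclaration with
# the same arguments is a no-op, so ordering between clients stops mattering.
def declare_oops_exchange(channel, exchange="oopses"):
    channel.exchange_declare(exchange=exchange, type="fanout",
                             durable=True, auto_delete=False)
    return exchange

class FakeChannel:
    # Stand-in for an amqplib channel; just records declarations
    # so the pattern can be shown without a live broker.
    def __init__(self):
        self.declared = []

    def exchange_declare(self, **kwargs):
        self.declared.append(kwargs)

ch = FakeChannel()
name = declare_oops_exchange(ch)
```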
[22:15] <lifeless> thanks
[22:15]  * lifeless goes
[22:53] <wallyworld> wgrant: StevenK: have you had any issues landing stuff via ec2? The last few things I've thrown at it all fail with hung tests
[22:53] <wgrant> wallyworld: Which tests?
[22:54] <wallyworld> wgrant: doesn't specify. just says "A test appears to be hung..."
[22:54] <wgrant> Can you forward me the emails?
[22:54] <wallyworld> the last successful test appears to be lp.archivepublisher.tests.test_publisher.TestPublisher.testReleaseFile
[22:54] <wgrant> Oh.
[22:55] <wgrant> Merge devel.
[22:55] <wgrant> You need my new AMI from Saturday.
[22:55] <wgrant> 522
[22:55] <wallyworld> i think i'm using 521
[22:55] <wallyworld> thanks
[22:56] <StevenK> wallyworld: And 518 can be binned ...
[22:56] <wallyworld> StevenK: how do i do that?
[22:56]  * wallyworld looks at the wiki
[22:56] <StevenK> wallyworld: https://dev.launchpad.net/EC2Test/Image
[22:56] <StevenK> wallyworld: Last section is 'Deleting AMIs'
[22:56] <wallyworld> thanks
[23:12] <lifeless> and on 3g
[23:40] <wgrant> lifeless:   File "/tmp/python-oops-amqp/eggs/testtools-0.9.12_r228-py2.7.egg/testtools/testcase.py", line 134, in gather_details
[23:40] <wgrant>     for name, content_object in source_dict.items():
[23:40] <wgrant> AttributeError: 'RabbitServerRunner' object has no attribute 'items'
[23:40] <lifeless> wgrant: reproduction instructions ?
[23:41] <wgrant> lifeless: bzr branch lp:python-oops-amqp; cd python-oops-amqp; ln -s ~/launchpad/lp-sourcedeps/download-cache; ./bootstrap.py; bin/buildout; bin/py -m testtools.run oops_amqp.tests.test_suite
[23:41] <lifeless> wgrant: on lucid ?
[23:42] <wgrant> lifeless: oneiric
[23:42] <wgrant> I could try lucid, I suppose.
[23:42] <lifeless> that's a mismatch with a changed testtools API
[23:43] <lifeless> between fixtures and testtools
[23:43] <wgrant> lifeless: But both fixtures and testtools are from eggs.
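A minimal reconstruction of the failure mode in the traceback above: newer testtools' `gather_details` iterates a details *dict* (`source_dict.items()`), while an older caller passes the fixture object itself, yielding "'RabbitServerRunner' object has no attribute 'items'". The signatures below are illustrative stand-ins, not the real testtools/fixtures code.

```python
# New-style API: expects a mapping of detail name -> content object.
def gather_details(source_dict, target_dict):
    for name, content_object in source_dict.items():
        target_dict[name] = content_object

class RabbitServerRunner:
    # Old-style fixture: details live behind a method, not a mapping,
    # so passing the object where a dict is expected blows up.
    def getDetails(self):
        return {"server.log": "..."}

target = {}
try:
    gather_details(RabbitServerRunner(), target)  # old caller, new API
    err = None
except AttributeError as e:
    err = str(e)
```

Which is why the opaque AttributeError really means "the pinned fixtures and testtools eggs disagree about the gather_details contract", as mwhudson concludes below.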
[23:43] <lifeless> just reproducing on my laptop (in my lucid lxc)
[23:44] <wgrant> Ah, works.
[23:45] <wgrant> On Lucid.
[23:45] <wgrant> Hmmm.
[23:45] <wgrant> I suppose it could be a minor problem that I don't actually have rabbitmq-server installed on my oneiric host.
[23:45] <wgrant> Still, rather opaque error :)
[23:46] <mwhudson> suggests there's a mismatch in the specified versions then?
[23:47] <lifeless> yes