[00:12] <sinzui> wallyworld: wgrant, I am back
[00:12]  * wallyworld opens mumble
[00:13] <lifeless> win - 1567  OOPS-1983AS138  Bug:EntryResource
[00:19] <lifeless> wgrant: guess what
[00:19] <wgrant> lifeless: What?
[00:20] <lifeless>  http://pastebin.com/ZpWvaAgr
[00:20] <wgrant> lifeless: Heh. Yay
[00:20] <lifeless> that's why list() was around official bug tags before
[00:21] <lifeless> not because it wasn't a list
[00:21] <lifeless> can you suggest any improvements?
[00:22] <wgrant> lifeless: Not really :/
[00:22] <lifeless> are we coming out of rc now ?
[00:23] <wgrant> We can't QA on prod any more. But I guess so.
[00:23] <lifeless> prod ?
[00:23] <wgrant> Well, prod worked well as a QA environment yesterday... ahem.
[00:23] <lifeless> *blink*
[00:24] <wgrant> lifeless: It found a bug that would have really screwed us over on Thursday.
[00:24] <wgrant> Because we wouldn't have been able to roll back.
[00:24] <lifeless> right
[00:25] <lifeless> but we don't have a branch with that rolled back and without db changes
[00:25] <wgrant> No.
[00:25] <lifeless> and we aren't aware of any damage in rev 13168
[00:25] <lifeless> that we haven't already agreed to
[00:26] <wgrant> Correct.
[00:26] <lifeless> so, unless we have some more testing we want to do
[00:26] <lifeless> we should un-rc
[00:26] <wgrant> Indeed.
[00:26] <wgrant> But we said that yesterday too.
[00:26] <wgrant> And then the world melted.
[00:26] <lifeless> yes
[00:26] <lifeless> unknown unknowns, they suck.
[00:27] <lifeless> still
[00:27] <lifeless> I'm very glad I have been conservative about moving the freeze time closer to the downtime
[00:37] <wgrant> Yes...
[00:38] <wgrant> Once we have a faster test suite and no stupid feature deadlines, we can improve.
[00:46] <LPCIBot> Project windmill-db-devel build #371: STILL FAILING in 1 hr 8 min: https://lpci.wedontsleep.org/job/windmill-db-devel/371/
[00:50] <lifeless> allenap: are you still trying to get rabbit stable ?
[00:52] <wgrant> lifeless: I don't think so.
[00:52] <wgrant> StevenK might know.
[00:52] <StevenK> He is not, no.
[01:59] <StevenK> sinzui: I've picked bug 307620 off that list
[01:59] <_mup_> Bug #307620: Confusing message "There is a no package named 'xserver-xorg' in Debian" when filing a bug <confusing-ui> <filebug-package> <lp-bugs> <lp-soyuz> <trivial> <Launchpad itself:Triaged> < https://launchpad.net/bugs/307620 >
[01:59] <abentley> lifeless: ah.  Hysterical raisins.
[01:59] <StevenK> It's so lightly tested it's scary
[02:01] <wgrant> StevenK: How do you propose to fix that?
[02:02] <wgrant> lifeless, abentley: Do we know why PQM is silent about that?
[02:02] <StevenK> wgrant: By doing what sinzui suggested in comment 18
[02:02] <abentley> wgrant: I do not.
[02:02] <wgrant> StevenK: Ah, heh.
[02:02] <StevenK> wgrant: You think that's a good idea or a bad one?
[02:02] <wgrant> StevenK: Mmm. We will have a proper solution to this soon.
[02:02] <wgrant> StevenK: Ensemble needs it.
[02:03] <wgrant> (hi lifeless)
[02:03] <wgrant> StevenK: https://code.launchpad.net/~wgrant/launchpad/fix-sprite-rebuild-rules/+merge/63791
[02:03] <StevenK> But Ensemble is only concerned about SPNs
[02:04] <wgrant> StevenK: It's a similar issue.
[02:04] <wgrant> But true.
[02:04] <StevenK> wgrant: r=me, awesome work
[02:04] <wgrant> Thankyou sir.
[02:07] <LPCIBot> Project windmill-devel build #179: STILL FAILING in 1 hr 9 min: https://lpci.wedontsleep.org/job/windmill-devel/179/
[02:10] <wgrant> Hmm. The OOPSes are creeping back up.
[02:10] <lifeless> wgrant: it blows an exception through to cron
[02:10] <lifeless> comment 18 ?
[02:25] <wgrant> The Internet hasn't ended yet?
[02:25] <wgrant> What with World IPv6 Day and all that.
[02:25] <StevenK> Doesn't seem to have
[02:27] <wgrant> Tomboy needs bzr integration.
[02:51] <LPCIBot> Project windmill-devel build #180: STILL FAILING in 44 min: https://lpci.wedontsleep.org/job/windmill-devel/180/
[02:53] <StevenK> Can I set a distribution that I've just created to use LP as a bug tracker?
[02:54] <wgrant> StevenK: As a mortal? Probably not.
[02:55] <wgrant> Although +edit may work...
[02:55] <wgrant> Some distro stuff is only accessible to admins.
[02:55] <wgrant> Yeah, it's on +edit still.
[02:55] <StevenK> wgrant: In a test case
[02:56] <wgrant> StevenK: Set official_malone = True
[02:58] <wgrant> Grar sampledata.
[03:10] <StevenK> wgrant: Bah, setting offical_malone to True gives me the same message
[03:10] <wgrant> StevenK: Which message?
[03:10] <StevenK> wgrant: u"Distribution-774925 doesn't use Launchpad as its bug tracker."
[03:10] <wgrant> grep may help.
[03:15] <StevenK> wgrant: Spelling correctly would also help. :-(
[03:25] <wgrant> StevenK: Heh.
[03:35] <LPCIBot> Project windmill-devel build #181: STILL FAILING in 44 min: https://lpci.wedontsleep.org/job/windmill-devel/181/
[03:43] <jtv> Good day gentlebeings.
[03:44] <wgrant> Greeting jtv.
[03:44] <jtv> greeting back
[03:55] <lifeless> (yes, its going the wrong way)
[03:56] <wgrant> The graphs are quite depressing.
[04:18] <lifeless> jml: I would like to catch up with you this week; my tonight is doable for me if it works for you.
[04:35]  * wgrant stabs SQLObject
[04:39] <lifeless> oh
[04:39] <wgrant> Oh?
[04:39] <lifeless> what provoked the stabbing ?
[04:40] <wgrant> The old cls.select thingy doesn't seem to behave just like Storm, particularly around using SQL(..., params=...) in orderby.
[04:41] <lifeless> that's a SQLObject result set, it is a little different
[04:41] <wgrant> Yeah.
[04:42] <wgrant> I'm notting stabbing SQLObject for being buggy and inconsistent -- merely for existing at all.
[04:42] <wgrant> Er.
[04:42] <wgrant> s/notting/not/
[04:58] <lifeless> poolie: in canonical wikis 'Bug:1234' will link to lp automatically
[04:59] <poolie> that's good to know
[05:00] <poolie> i guess you realized my edit was fixing a typo, not just inserting the pad bit
[05:03] <wgrant> Changing person vocab sorting is fuuuuun.
[05:03] <StevenK> wgrant: The type of fun that is spelt with no n, and an extra c, k, e and d?
[05:03] <wgrant> Something like that.
[05:04] <StevenK> Like Help in CS is spelt with no p and an extra l
[05:08] <StevenK> jtv: Are you free to review one of my branches?
[05:08] <jtv> StevenK: yes, just finished the last one.
[05:09] <jtv> StevenK:  the also-affects one?
[05:10] <StevenK> jtv: Yes.
[05:14] <StevenK> jtv: I'm in the middle of allenap's MP.
[05:16] <StevenK> rvb's MP looks okay to me, but I'm comfortable about reviewing a JS-only branch
[05:16] <StevenK> Er. Not comfortable
[05:16]  * StevenK goes to inject tea
[05:18] <lifeless> poolie: I didn't realise that, no.
[05:49] <LPCIBot> Project windmill-devel build #182: STILL FAILING in 1 hr 6 min: https://lpci.wedontsleep.org/job/windmill-devel/182/
[05:59] <wgrant> Tests normally run as the 'launchpad' DB user, right?
[06:04] <StevenK> Yes, I think so.
[06:04] <StevenK> jtv: No news is good news?
[06:33] <jtv> StevenK: sorry for the delays — lots of distractions so threw in a spot of lunch as well
[06:40] <lifeless> jtv: do you know anything about the storage-and-performance of arrays vs related tables in pg ?
[06:41] <lifeless> jtv: I'm wondering if we should make subscriptions into an array (or duplicate them into one) on (perhaps just) private bugs
[06:42] <lifeless> jtv: specifically, every time we access a private bug, we want to check the condition (private is false or has-subscription)
[06:42] <jtv> lifeless: I know very little except what's obvious.  This sounds like a good idea; I've been quietly wishing for arrays in several places but not daring to speak out.
[06:43] <jtv> However front-end library support is not a given.
[06:43] <jtv> Generalized array parsing is an annoying little job that I decided not to tackle in libpqxx because it was coming in libpq, and AFAIK it never came.
[06:43] <lifeless> ah
[06:43] <lifeless> so I believe we do array processing in some (limited) places already
[06:44] <jtv> Nice
[06:44] <lifeless> storm etc support is a good question though
[06:44] <lifeless> this may be done via 'raw' sql today.
[06:44] <jtv> I did crib a bit of Storm support from somewhere and put it in the codebase IIRC… let me dig it up
[06:44] <lifeless> no need, I'm not hacking on this just now
[06:44] <jtv> ok
[06:45] <lifeless> was curious about the performance implications more than higher level stuff
[06:45] <jtv> I think you can index and search arrays for specific elements.
[06:46] <jtv> But duplicating into arrays as you say would give us the best of both, at the cost of writing a bit more data.
[06:46] <lifeless> we're massively read heavy
[06:46] <jtv> There you go.
[06:46] <lifeless> other than crazy things like bug heat, I've no issue with us writing more to read more efficiently
[06:46] <lifeless> well
[06:46] <lifeless> once we get the DB server disk upgrades done.
[06:46] <jtv> Although (advocate o/t devil) we've also got scaling headroom for reads that we don't have for writes
[06:46] <lifeless> until then we need to be judicious about it ;)
[06:49] <LPCIBot> Project windmill-db-devel build #372: STILL FAILING in 1 hr 17 min: https://lpci.wedontsleep.org/job/windmill-db-devel/372/
[06:50]  * lifeless makes an 800K row temp table
[06:51] <lifeless> oh -wow-
[06:51] <lifeless> we have a slightly brain dead privacy check for assignee
[06:51] <wgrant> The new one?
[06:51] <lifeless> yes
[06:52] <poolie> well, https://code.launchpad.net/~mbp/launchpad/790902-mail-librarian/+merge/63816 makes it a uuid
[06:52] <lifeless> goes from bugtask -> bug -> bugtask.assignee -> teamparticipation
[06:52] <poolie> the lp typo test to make the initial patch seems fairly consistent at about 45m for me
[06:52] <lifeless> this is accurate but fiee.
[06:56] <lifeless> so, denorming 3 fields onto bugtask and drop the assignee visibility -> 2 second table scan query.
[06:57] <lifeless> still an OOM slower than bugsummary
[06:58] <lifeless> poolie: is that friction, familiarity, not knowing where to find things?
[06:58] <poolie> StevenK, jtv, could you have a look at https://code.launchpad.net/~mbp/launchpad/790902-mail-librarian/+merge/63816 for me please?
[06:58] <Peng> /1/1
[06:58] <jtv> poolie: I'm just reviewing StevenK but shouldn't take long, so I can take it after that
[06:58] <Peng> erk, damn. I hadn't done that all day!
[06:58] <poolie> about 5m getting the environment up to date, eg updating lp-sourcedeps, make schema etc
[06:59] <poolie> ah, recently i've had a run of things where there is redundant code that makes the fix more complicated
[06:59] <poolie> like the "decoy" mail sending code in the ppa notification stuff
[07:03] <poolie> perhaps none of them are truly typo-level fixes
[07:40] <jtv> StevenK: review done — I'm recommending some changes though
[07:43] <jtv> poolie: looking at yours now… why the trans.begin() in incoming.py?
[07:44] <StevenK> jtv: structured() is smart enough to not escape things that are themselves an instance of structured() -- since I wrote that bit
[07:44] <poolie> jtv: rather than the other
[07:44] <jtv> StevenK: ahhh!  But then where does that displayname get escaped?
[07:44] <poolie> it seemed to me only incoming.py really has an opinion about when things should be committed
[07:44] <StevenK> jtv: So, there are two displayname, both are inside a structured() of some form
[07:46] <wgrant> StevenK: You fail.
[07:46] <wgrant> 20	+                binary_tracking = structured(
[07:46] <wgrant> 21	+                    ' Launchpad does not track binary package names '
[07:46] <wgrant> 22	+                    'in %s.' % distribution.displayname)
[07:46] <wgrant> s/ %/,/
[07:47] <jtv> poolie: I've bumped my head into this myself… turns out a transaction is started automatically whenever we access the database while not in a transaction and not in autocommit mode.  So for the general case (and I don't know whether that includes this of course) all that begin() really does for us is increase the length of the running transaction.
[07:47] <jtv> wgrant: see my review and our ongoing discussion.
[07:47] <StevenK> wgrant: Sigh, so I do.
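The escaping problem wgrant flags above (his s/ %/,/ fix) can be sketched with a toy stand-in for structured(); the real Launchpad helper is richer, and the function body here is purely illustrative, but the rule under discussion is the same: values passed as arguments get escaped, while pre-interpolating with % bakes unescaped text straight into the template.

```python
from html import escape

# Toy stand-in for Launchpad's structured() helper (hypothetical
# implementation): escape each argument, never the template itself.
def structured(template, *args):
    return template % tuple(escape(str(a)) for a in args)

name = "<b>Evil Distro</b>"

# Buggy form: % interpolation runs before structured() sees the value,
# so the displayname lands in the template unescaped.
unsafe = structured("Launchpad does not track binary package names in %s." % name)

# Fixed form (wgrant's s/ %/,/): the displayname is escaped as an argument.
safe = structured("Launchpad does not track binary package names in %s.", name)

print("<b>" in unsafe, "<b>" in safe)
```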
[07:48] <jtv> poolie: I may be misunderstanding you though because I'm trying to carry two conversations at once.  :)
[07:48] <poolie> ok
[07:48] <poolie> i didn't add that line, i just moved it
[07:49] <poolie> your logic makes sense
[07:49] <jtv> That's a fine answer AFAIC.
[07:49] <poolie> does "start if not already started" actually take much time?
[07:49] <jtv> If you don't feel comfortable changing it, leaving it as-is is valid.
[07:49] <jtv> No, that's really really quick.
[07:49] <poolie> that's good to know for the future
[07:49] <poolie> later i may try to clean up incoming.py more
[07:49] <jtv> Good call keeping that separate.
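jtv's point about trans.begin() can be seen outside Launchpad too; here sqlite3 stands in for the PostgreSQL stack (an assumption for illustration, not Launchpad's actual driver), but the behaviour is the same idea: the driver opens a transaction implicitly on the first data-modifying statement, so an early explicit begin() mostly just lengthens the running transaction.

```python
import sqlite3

# sqlite3's default (legacy) transaction control: a BEGIN is emitted
# implicitly before INSERT/UPDATE/DELETE, not when you "start" anything.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

before_write = conn.in_transaction   # no transaction pending yet
conn.execute("INSERT INTO t VALUES (1)")
after_write = conn.in_transaction    # the INSERT opened one implicitly
conn.commit()

print(before_write, after_write)
```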
[08:05] <StevenK> jtv: I have another one, if you're free.
[08:06] <jtv> StevenK: that wasn't enough to put you off?  :)  Sure, I'm almost done with Martin's.
[08:07] <StevenK> jtv: This one is 1: much smaller, and 2: needs more thought, but this is a quick solution to the problem.
[08:08] <jtv> StevenK: being talked at from 5 directions; will be with you shortly.
[08:10] <jtv> poolie: you're done
[08:12] <jtv> StevenK: wait… the vocab currently only returns descriptions, and you're changing it not to return descriptions?
[08:14] <StevenK> jtv: How useful do you think it is for people to pick out a source or binary package purely by its description?
[08:14] <StevenK> It's utterly utterly pointless
[08:14] <jtv> I don't think I've ever knowingly read a package description.  What's in them, in practice?
[08:15] <jtv> (Just out of interest)
[08:15] <StevenK> libc6's description is: Embedded GNU C Library: Shared libraries
[08:15] <lifeless> hmm, 150ms stats portlets appear possible
[08:15] <wgrant> StevenK: That's the summary.
[08:15] <wgrant> StevenK: Not the description.
[08:16] <StevenK> Oh, in which case the picker will show "Contains the standard libraries that are used by nearly all programs on"
[08:16] <wgrant> Right.
[08:16] <StevenK> jtv: ^
[08:16] <jtv> Wow, that _does_ sound very helpful.
[08:16] <wgrant> And that's one of the most useful descriptions.
[08:16] <jtv> (irony there)
[08:17] <wgrant>  This package contains debugging files used to investigate problems with
[08:17] <wgrant> That's another example.
[08:17] <jtv> StevenK: I can see how you'd want to think about the proper solution some more… is it still worth setting a title though, or testing for it?
[08:17] <jtv> You can just leave out the third arg to SimpleTerm()
[08:18] <jtv> …and cut it out of the doctest.
[08:18] <jtv> If you're going to replace it very soon with something more sensible, it makes sense to keep it.  Or maybe something in the picker requires it?  Otherwise, I'd just drop it.
[08:19] <StevenK> jtv: The picker does not seem to use the term, only the title, so we need to return it twice
[08:20] <lifeless> hmm, where is the stubster
[08:20] <jtv> I see
[08:23] <jtv> StevenK: approved
[08:25] <StevenK> jtv: Thanks!
[08:27] <jtv> rvba: I see you have a review waiting.  I may get to that soon.
[08:27] <jtv> (There's one more in the queue before you)
[08:27] <rvba> jtv: great, thanks (it's a js fix).
[08:46] <jtv> rvba: that's a nasty little bug you're tackling there.  As they say, only two things are hard in software engineering: naming things and cache invalidation.
[08:46] <jtv> And off-by-one errors.
[08:48] <adeuring> good morning
[08:51] <jtv> hi adeuring!
[08:51] <jtv> need refreshment, brb
[08:52] <poolie> thanks jtv
[08:52] <poolie> it would be cool if lp did ipv6 day next year
[08:52] <wgrant> Hahahah
[08:53] <wgrant> indeed, but...
[08:53] <poolie> in all your spare time :)
[08:53] <wgrant> It's got nothing to do with LP :)
[08:54] <jtv-afk> And then someone in China pops up and says "here's a version of IP that just uses 64-bit addresses, and we NAT it into a hole in the IPv4 space, and we already talked to the equipment manufacturers" and then all that effort is going to be wasted.  :)
[08:55] <wgrant> I think there's like two significant Australian ISPs that provide native v6 :/
[08:55] <poolie> i'm all right jack :)
[08:56] <poolie> wgrant: meaning it's mostly a canonical-network thing?
[08:56] <wgrant> poolie: Entirely.
[08:56] <wgrant> Well. Unless you call the frontends LP.
[08:56] <wgrant> Which I guess you almost could.
[09:00] <lifeless> poolie: I thought we chatted about this on the phone
[09:00] <lifeless> poolie: when I closed that bug
[09:00] <poolie> oh about ipv6?
[09:00] <poolie> yes, we did
[09:01] <lifeless> stub: hola!
[09:01] <stub> Yo
[09:01] <lifeless> stub: I want to add 3 columns to bugsummary :>
[09:02] <stub> Oh joy :)
[09:02] <lifeless> https://bugs.launchpad.net/launchpad/+bug/793848/comments/4
[09:02] <_mup_> Bug #793848: Distribution:+bugtarget-portlet-bugfilters-stats timeouts <dba> <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/793848 >
[09:03] <stub> lifeless: You are quadrupling the number of rows. Not sure if the distro queries are going to survive with their tag counts.
[09:04] <lifeless> stub: https://bugs.launchpad.net/launchpad/+bug/793848/comments/5
[09:05] <lifeless> :P
[09:05] <_mup_> Bug #793848: Distribution:+bugtarget-portlet-bugfilters-stats timeouts <dba> <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/793848 >
[09:05] <lifeless> stub: its about 50ms slower on the tag counting case
[09:07] <stub> lifeless: Will it be exploded by tags and milestones, or can we add CHECK (has_patch IS NULL != tag IS NULL) etc.
[09:07] <mrevell> Hello devers
[09:07] <lifeless> stub: its more dimensions
[09:07] <lifeless> stub: so more cross products
[09:08] <stub> Right. So we have to be careful, as every addition could put us over the top with this model
[09:08] <lifeless> we do:
[09:08] <lifeless>  select count(*) from bugsummary2;
[09:08] <lifeless>  count
[09:08] <lifeless> --------
[09:08] <lifeless>  758533
[09:09] <lifeless> \dt+ bugsummary2;
[09:09] <lifeless>                        List of relations
[09:09] <lifeless>   Schema   |    Name     | Type  | Owner | Size  | Description
[09:09] <lifeless> -----------+-------------+-------+-------+-------+-------------
[09:09] <lifeless>  pg_temp_1 | bugsummary2 | table | ro    | 54 MB |
[09:09] <stub> Did you check the existing queries for distribution and distroseries use cases?
[09:09] <lifeless> distribution yes, haven't checked distroseries
[09:09] <stub> I think we will be ok too, this time....
[09:09] <lifeless> I'd expect distribution to be worst, and its 200ms
[09:11] <lifeless> stub: we could make the booleans and importance nullable and have two separate summaries - one for use in this query, one for the tags one
[09:12] <lifeless> but I don't think the complexity is worth it - this seems pretty straight forward and the performance (hot) is still extremely shiny
[09:13] <stub> lifeless: If we took that approach, I'd just stick them in a separate table.
[09:16] <lifeless> yeah
[09:21] <lifeless> so, whats the right way forward to apply this change
[09:23] <stub> lifeless: The usual. db patch to add the new columns and extend all the triggers, test on staging...
[09:23] <stub> lifeless: Probably one for me.
[09:23] <lifeless> stub: would you?
[09:24] <stub> although it will likely be cut & paste work with the existing code as a template.
[09:24] <stub> sure
[09:24] <lifeless> yah
[09:24] <lifeless> I have a test script in that bug, which is an adapted version of the original patch (but no triggers)
[09:25] <lifeless> oh btw
[09:25] <lifeless> stub: were you aware that copying a template db locks the source ?
[09:25] <lifeless> exclusive-locks
[09:25] <lifeless> anyhow, I put a spinlock-with-random-backoff into the db layer when I found this some months ago
[09:25] <stub> Your template db can't be in use, so not surprising.
[09:25] <lifeless> I'm thinking of making a test-run-specific template
[09:26] <stub> You get an error if there are any open connections to the template, or at least you used to.
[09:26] <lifeless> so launchpad_ftest_template -> launchpad_ftest_template_$PID -> launchpad_ftest_$PID
[09:26] <lifeless> do you see any problem with such a double-indirection ?
[09:27] <stub> This is to avoid contention I guess? No problem except twice as many databases to clean up on a cancelled run. We should probably automatically do that somehow... might need to maintain shared state in some well known location, such as a dedicated PG database for lp test run metadata.
[09:28] <stub> lifeless: Is db setup time worth optimizing yet though? Last I checked, teardown and setup was only 10-15 minutes or so of our total test run time.
[09:28] <lifeless> stub: parallel testing contention...
[09:29] <lifeless> stub: I have no solid figures, but I'd like to be at least theoretically interaction-free.
[09:29] <stub> Yer, it will be better. Just wondering if there is other low hanging fruit that should be tackled first.
[09:30] <lifeless> stub: possibly. Feel free to try parallel testing anytime you like :)
[09:30] <lifeless> StevenK: which reminds me, does jenkins parallel test devel nowadays ?
[09:30] <stub> There used to be some backoff time required *anyway*, because it took a short while after you closed your connection before PG had cleaned up the backend and the connection was really dead.
[09:31] <stub> I think that is better now under 8.4, where it just blocks until the db is available. Might be possible to pull out the retry code (?)
[09:31] <lifeless> definitely isn't possible :)
[09:32] <lifeless> running 8 test threads at once, they all blew up on template copying until I added the random backoff
[09:32] <stub> I mean after a separate template per thread.
[09:32] <lifeless> oh right
[09:32] <lifeless> up
[09:32] <lifeless> uhm
[09:32] <stub> I guess if it isn't needed, the retry will never happen so no need to pull it out yet.
[09:33] <lifeless> if we're using the same helper to copy from l-f-t to l-f-t-$pid and l-f-t-$pid to l-f-$pid then it needs to be in for the first copy
[09:33] <stub> right
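The spinlock-with-random-backoff lifeless mentions putting into the db layer might look something like this sketch; the function name, parameters, and the simulated failure are all illustrative, not the actual Launchpad helper.

```python
import random
import time

def retry_with_backoff(operation, attempts=8, base_delay=0.01):
    """Retry `operation` on transient failure, sleeping a random, growing
    amount between attempts so competing workers de-synchronise."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Random jitter is the key: fixed delays would keep the
            # parallel test workers colliding on the template copy.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))

# Simulated template-db copy that fails twice (as when another worker
# holds the template) before succeeding.
calls = []
def flaky_copy():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("source database is being accessed by other users")
    return "copied"

result = retry_with_backoff(flaky_copy)
print(result, len(calls))
```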
[09:34] <stub> mark had the idea of a background task that created fresh databases for tests to consume, but I never chased it once we saw db teardown/setup was not as significant as we thought.
[09:34] <lifeless> *blink*
[09:34] <lifeless> that would create IO and CPU contention
[09:34] <lifeless> unless you had less test runners than the machines concurrency could support
[09:35] <lifeless> grr 14492 tests
[09:35] <lifeless> 14491 passed
[09:35] <stub> this was singlethreaded tests, so building the db would chew io while the test was chewing cpu
[09:35] <lifeless> whats the bet I failed to push
[09:35] <stub> and most tests don't need new dbs, so it would be idle a lot of the time anyway.
[09:36] <lifeless> yeah
[09:36] <lifeless> I wouldn't pursue the idea :)
[09:37] <StevenK> lifeless: What do you mean?
[09:37] <lifeless> StevenK: there was a jenkins job doing parallel test runs
[09:38] <StevenK> There still is
[09:38] <lifeless> what branch does it run
[09:39] <StevenK> http://bazaar.launchpad.net/~lifeless/launchpad/librarian
[09:39] <lifeless> StevenK: can we switch that to running stable ?
[09:40] <lifeless> and sorry for breaking trunk, I had not pushed :(
[09:40] <StevenK> Done
[09:42] <lifeless> StevenK: thanks!
[09:59] <stub> lifeless: Although extending BugSummary will fix counts, I think there will still be problems elsewhere discovering bugs that have been fixed upstream. If people are interested in how many, they are also interested in a set of links. Is this search also problematic? Perhaps being able to flag a bugtask as 'fixed upstream' will make both the counts and search acceptable. And if the counts still are not acceptable, make the triggers much easier.
[10:00] <lifeless> stub: I'd be fine with denormalising 'fixed_upstream' onto bugtask
[10:01] <stub> garbo task or checkwatches could do that
[10:01] <lifeless> stub: or a trigger could
[10:01] <stub> yer.
[10:02] <lifeless> stub: I did an experiment with a wider bugtask
[10:02] <poolie> thanks for the kind words jtv
[10:02] <lifeless> stub: it can get down to 2 seconds if we can drop the join onto bug
[10:02] <stub> but checkwatches is the only thing that updates the relevant rows I think (?), so that might be simplest.
[10:02] <lifeless> https://bugs.launchpad.net/launchpad/+bug/793848/comments/1
[10:02] <_mup_> Bug #793848: Distribution:+bugtarget-portlet-bugfilters-stats timeouts <dba> <timeout> <Launchpad itself:Triaged by stub> < https://launchpad.net/bugs/793848 >
[10:02] <jtv> poolie: you know me — you earned them.  :)
[10:03] <lifeless> stub: there are 2 cases to fixed upstream, a) bugwatch, b) a product task
[10:03] <lifeless> stub: b) is changed by users.
[10:05] <stub> k
[10:06] <poolie> lifeless: so just briefly i'm merging those two mail->librarian things into just one, that uses a uuid
[10:07] <lifeless> poolie: cool, I saw. Looks good.
[10:08] <lifeless> stub: comment 3 gets a 2 second query with reflexive-join to find fixed_upstream
[10:09] <lifeless> stub: by denormalising duplicateof, latest_patch_uploaded and private onto bugtask
[10:09] <poolie> thanks
[10:09] <lifeless> stub: but we have to join to bug to get assignee
[10:09] <lifeless> stub: so we're still looking at 4.5 seconds minimum
[10:09] <lifeless> stub: -> 3.5 seconds too slow
[10:10] <stub> Unless we use triggers to mush together bug and bugtask into one flat BugSearch table...
[10:11] <stub> maybe too fat for sanity... but I guess fat for a reason.
[10:11] <lifeless> stub: I doubt it would get down to the 100ms that bugsummary does
[10:11] <stub> right, but useful for stuff besides counts
[10:11] <lifeless> stub: the thing about the portlets is that they want multiple criteria shown at once
[10:11] <stub> or is this only a problem with counts and I'm fixing a problem we don't have?
[10:12] <lifeless> stub: indeed, and I'm totally happy to have a bugsearch table too, but its a different scenario
[10:12] <lifeless> stub: so these counts link to searches
[10:12] <lifeless> and bug search is also slow
[10:12] <lifeless> but the one portlet shows 9 different searches summarised
[10:13] <lifeless> so an individual search has rather more headroom than the portlet
[10:13] <stub> Right, so we will need bugsummary for this sort of things in every case, unless we can make bug search super fast (10x more than we are currently aiming at)
[10:14] <stub> Or something similar... we already know BugSummary isn't going to scale to too many more dimensions
[10:14] <stub> But there are a finite number of possibilities that I can't recall of the top of my head.
[10:15] <RawChid> Hey, from Python I try to load a pot from LP (staging). The code should work and I should have the right permission, nevertheless I get HTTP Error 401: Unauthorized  (Unknown access token). If I paste the URL in my browser I DO get the correct response. Any ideas?
[10:16] <lifeless> your python code hasn't logged in via either oauth or openid
[10:16] <wgrant> Or anonymously.
[10:22] <RawChid> It does launchpadlib.login_with()
[10:22] <RawChid> How can I check if it's done right
[10:25] <RawChid> lifeless, this is the output: http://pastebin.com/kZ9wr6BL
[10:45] <adeuring> jtv: could you please have a look at this MP: https://code.launchpad.net/~adeuring/launchpad/bug-735979/+merge/63832 ?
[10:45] <poolie> RawChid: can you paste the code too?
[10:45] <jtv> adeuring: ok, but that's the last one for today.  :)
[10:46] <adeuring> jtv: thanks!
[10:47] <poolie> RawChid: one issue is that the url you're printing is on lpnet, but you say you're trying to load from staging
[10:47] <poolie> i wonder if you are only authenticated on one of them?
[10:49] <RawChid> poolie, I just discovered I can load the pot from production without a problem.
[10:49] <RawChid> You're right! I authenticate to staging, but the URL is production. Thanks a lot
[10:49] <poolie> you're very welcome
[10:52] <lifeless> adeuring: hi
[10:52] <lifeless> adeuring: you know about ++profile++ right ?
[10:52] <adeuring> hi lifeless
[10:52] <adeuring> lifeless: argh, yeah, right!
[10:52] <adeuring> thanks for the reminder
[10:52] <lifeless> no probs!
[10:53] <lifeless> I was debugging a CPU case the other week
[10:53] <lifeless> with code like:
[10:53] <lifeless> foo = list(resultset)
[10:53] <lifeless> for quux in bar:
[10:53] <lifeless>     if quux in foo:
[10:53] <lifeless>         ....
[10:53] <lifeless> that triggered O(N^2) storm object comparisons
[10:53] <lifeless> and __eq__ on storm objects is -slow-
[10:54] <lifeless> showed up very clearly on qastaging with ++profile++show
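The O(N^2) pattern lifeless pasted can be sketched without Storm; the Row class below is a hypothetical stand-in whose __eq__ plays the role of Storm's slow object comparison. The usual fix is to hash the rows into a set once, turning each `in` check from a linear scan into a constant-time lookup.

```python
# Stand-in for a Storm model object; __eq__ here mimics Storm's
# expensive per-object comparison.
class Row:
    def __init__(self, id):
        self.id = id
    def __eq__(self, other):
        return isinstance(other, Row) and self.id == other.id
    def __hash__(self):
        return hash(self.id)

foo = [Row(i) for i in range(1000)]        # foo = list(resultset)
bar = [Row(i) for i in range(500, 1500)]

# O(N^2): list membership calls __eq__ against every element.
slow_hits = sum(1 for quux in bar if quux in foo)

# O(N): build a set once, then each lookup hashes instead of scanning.
foo_set = set(foo)
fast_hits = sum(1 for quux in bar if quux in foo_set)

print(slow_hits, fast_hits)
```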
[10:59] <jtv> adeuring: done
[10:59] <adeuring> jtv: thanks!
[11:02] <adeuring> jtv: thanks for the reminders of the "usual suspects" for "mysterious delays" :) Though GIL contention for more than a second sounds unlikely... or, well, it could be a kind of accumulated shorter delays
[11:02] <jtv> adeuring: you'd be amazed.  I was.
[11:03] <wgrant> The GIL isn't so much of a problem these days.
[11:03] <wgrant> Thanks to lifeless' ruthless campaign.
[11:03] <jtv> He optimized it in the interpreter?
[11:04] <wgrant> He minimised multithreading.
[11:04]  * adeuring wonders if we could record the time a thread waits for the GIL
[11:04] <jtv> wgrant: if you're talking about *in Launchpad*, then see the context: the starting point is "I asked for non-threaded app servers because of the GIL."
[11:05] <jtv> adeuring: someone-or-other (I should have bookmarked!) did really extensive research.
[11:05] <jtv> The results were really stunning.
[11:05] <wgrant> jtv: Sure, but that means that the GIL is not a major concern for our code any more.
[11:06] <jtv> wgrant: see the context.  :)  We were talking about how _now that we have nonthreaded appservers_, we have a better base for evaluating python-side time sinks.
[11:06] <wgrant> jtv: Ahhh, I see.
[11:06] <wgrant> I'm not actually sure where the context is.
[11:06] <wgrant> I guess a review somewhere in backscroll.
[11:06] <wgrant> Anyway, sorry.
[11:07] <jtv> Yes, it's a review.  Hard to follow without that, I know.  :-)
[11:07] <jtv> But since you are aware of the issue, you may want to help me convince abel that there were (are, I guess) serious problems with the GIL in general.  :-)
[11:08] <lifeless> jtv: wgrant: note that we're still 2-threaded
[11:08] <lifeless> xmlrpc-internal is the second thread
[11:08] <adeuring> jtv: well, I know about these problems in general...
[11:08] <wgrant> lifeless: Yeah, but I pretend XML-RPC doesn't exit.
[11:08] <wgrant> +s
[11:08] <jtv> Annnnd there goes my good mood, thanks.  :-)
[11:08] <lifeless> this is a tolerable compromise
[11:08] <wgrant> adeuring: We had 8-second gaps sometimes, and they magically vanished once we went single-threaded.
[11:08] <wgrant> :/
[11:08] <jtv> adeuring: sorry, just meant to point out the part where it's really ridiculously bad sometimes.
[11:09] <jtv> Wow, 8 seconds?
[11:09] <wgrant> There were lots of 5-6. But I saw at least a few 8s.
[11:09] <wgrant> Possibly it was bad code on both sides, I guess.
[11:09] <poolie> is there anyone who specially cares for lazr.restful these days?
[11:09] <wgrant> No.
[11:09] <poolie> and could answer questions
[11:09] <poolie> hm
[11:09] <lifeless> s/specially//
[11:11] <lifeless> jtv: the internal xmlrpc calls are (today) relatively low frequency; if we see more than 1-timeout-a-day its not going to be GIL (and thus we get the better analysis environment as you say)
[11:11] <jtv> lifeless: yeah I figured you wouldn't have done it if it were a likely source of problems.  :)
[11:19] <LPCIBot> Project windmill-devel build #183: STILL FAILING in 1 hr 19 min: https://lpci.wedontsleep.org/job/windmill-devel/183/
[11:31] <poolie> lifeless: i wonder if it would be worth trying 0mq just within a machine or something
[11:31] <poolie> hm
[11:32] <LPCIBot> Project parallel-test build #19: STILL FAILING in 1 hr 20 min: https://lpci.wedontsleep.org/job/parallel-test/19/
[11:36] <jtv> suite == series-pocket?
[11:36] <jtv> (I always forget this after being out of soyuz for a while)
[11:36] <wgrant> jtv: Yes.
[11:36] <bigjools> jtv: yes
[11:37] <jtv> Thanks
[11:37] <wgrant> stub: Can we turn off notifications for parallel-test?
[11:37] <wgrant> Er./
[11:37] <wgrant> StevenK: ^^
[11:50] <bigjools> anyone else who's built an image for ec2 get an email from Amazon saying it's been made private?
[11:50] <bigjools> they don't like having our SSH keys in it
[11:52] <wgrant> There are SSH keys in it? :/
[11:52] <bigjools> yeah, your public key if you built it
[11:52] <bigjools> which means you can log into anyone's instance
[11:52] <wgrant> Oh, right, in authorized_keys?
[11:52] <bigjools> they don't like that :)
[11:52] <wgrant> Yeah, that is less than ideal.
[11:53] <bigjools> it means if they do this with all our ec2 images then we can't use ec2  any more
[11:53] <bigjools> so we'd better prod a maintenance dude to fix this asap
[11:53] <wgrant> Does it affect current images too?
[11:53] <wgrant> some bug about SSH keys was fixed a year or so ago.
[11:54] <bigjools> they only emailed about one that I built recently-ish but I expect so
[11:55] <wgrant> :/
[11:56] <jtv> bigjools: what architecture do you want to show for PCJ uploads?
[11:57] <bigjools> jtv: Source
[11:57] <jtv> thx
[11:57] <bigjools> we're not syncing binaries
[12:14] <LPCIBot> Project devel build #786: FAILURE in 5 hr 39 min: https://lpci.wedontsleep.org/job/devel/786/
[12:35] <jtv> allenap: this baffles.  I create a PlainPackageCopyJob, and its id is None.
[12:35] <jtv> Maybe I still need to add it to the store.
[12:38] <jtv> Yup, that does it
[12:38] <jtv> How unspeakably frustrating.
[12:44] <stub> So we could do that automatically if we created a subclass of Storm that added the object to the correct store automatically in its __init__, and used that instead.
[12:45] <jtv> Sounds good.
[12:45] <jtv> oh what is it _now_?  I'm still getting None ids in one place.
[12:45] <stub> It's not done automatically so projects like Landscape can do sharding. Or Launchpad either, since we have sharding support even if we are no longer using it.
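stub's suggestion above (a Storm subclass whose `__init__` adds the object to the correct store) can be sketched framework-agnostically. Every name here is invented, and note one simplification: real Storm assigns the id lazily at flush, while this toy store assigns it eagerly.

```python
class ToyStore:
    """Invented stand-in for a Storm store; assigns ids eagerly for
    brevity (real Storm waits until flush)."""
    def __init__(self):
        self._next_id = 1

    def add(self, obj):
        obj.id = self._next_id
        self._next_id += 1
        return obj

# Real code would look the store up dynamically; hard-coding it is
# exactly what sharding (Landscape, and Launchpad's unused support)
# prevents Storm from doing on your behalf.
_default_store = ToyStore()

def get_default_store():
    return _default_store

class StoreAttached:
    """Base class implementing stub's idea: self-register on creation,
    so callers never observe an object that isn't in a store."""
    def __init__(self):
        get_default_store().add(self)

class PlainPackageCopyJobish(StoreAttached):
    pass

job = PlainPackageCopyJobish()
assert job.id is not None  # no separate store.add() call needed
```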
[12:45] <wgrant> jtv: Have you flushed?
[12:46] <jtv> No.  I figured that was implicit when I read the attribute!
[12:46] <wgrant> No.
[12:46] <StevenK> Er, so who broke devel? I had two branches fail ec2 with errors in xx-official-bug-tags.txt
[12:46] <wgrant> StevenK: lifeless
[12:46] <stub> Adding it to the store should flag the id column to be refreshed from the db
[12:46] <wgrant> StevenK: Fix is landed already.
[12:46] <stub> I thought
[12:46] <jtv> That's what I expected, yes.
[12:46] <jtv> Or that it would happen somewhere.
[12:46] <wgrant> jtv, stub: I don't think so...
[12:47] <wgrant> That could cause a very early flush.
[12:47] <stub> one of these objects is not like the other. Perhaps not everything got added to the store?
[12:47] <jtv> wgrant: it could, but if the alternative is merrily reading a meaningless value from the attribute..?
[12:47] <stub> wgrant: It delays as late as possible - the id column is set to a magic value, and when you attempt to access a magic value the flush happens.
[12:48] <wgrant> https://storm.canonical.com/Tutorial#References%20and%20subclassing
[12:48] <wgrant>    1 >>> ben = store.add(Employee(u"Ben Bill"))
[12:48] <wgrant>    2
[12:48] <wgrant>    3 >>> print "%r, %r, %r" % (ben.id, ben.name, ben.company_id)
[12:48] <wgrant>    4 None, u'Ben Bill', None
[12:49] <jtv> Grrr
[12:49] <wgrant> If whatever is pending is flushed to the database (implicitly or explicitly), objects will get their ids, and any references are updated as well (before being flushed!).
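The Storm tutorial excerpt wgrant quotes shows `ben.id` staying `None` until a flush. The same "no id until the INSERT actually runs" behaviour can be seen with plain stdlib `sqlite3` (a loose analogy only, not Storm itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
cur = conn.cursor()

# Like an unflushed Storm object: nothing has been written yet, so
# there is no row id to read.
print(cur.lastrowid)  # None until an INSERT runs

cur.execute("INSERT INTO employee (name) VALUES (?)", ("Ben Bill",))
# Only once the INSERT has actually executed is the id known; Storm's
# flush is the moment its pending INSERTs run.
print(cur.lastrowid)  # 1
```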
[12:50]  * stub wonders what he is thinking of
[12:50] <jtv> bigjools: will a PackageUpload with a PCJ have a Component?
[12:51] <bigjools> yes
[12:51] <bigjools> jtv: it'll be the same as any other source
[12:51] <bigjools> jtv: with the exception that there's no changes file
[12:51] <jtv> I guess we'll have to look up the SPR then.
[12:52] <bigjools> jtv: so component, section will come from the job metadata
[12:52] <bigjools> don't use the SPR
[12:52] <jtv> When I looked (yesterday I think) the metadata did not contain component and section yet; I think there's something like an addOverride() that you'd need to call first.
[12:52] <bigjools> jtv: there's an addSourceOverride and getSourceOverride on the PCJ
[12:53] <bigjools> jtv: when the job creates the PU it will guarantee that an override is present
[12:53] <jtv> That's it.  When I looked, we weren't calling addSourceOverride yet.
[12:53] <bigjools> yeah, it's not landed yet; it's in my branch :)
[12:53] <jtv> So I'm testing for the presence of something that's absent.
[12:53] <bigjools> jtv: if you want you can merge my branch and use it as a pre-req
[12:54] <bigjools> we can land both at the same time
[12:54] <jtv> I suppose I'll want that then.
[12:54] <jtv> BTW I'm off db-devel.
[12:54] <bigjools> jtv: use devel
[12:54] <jtv> Bit late!
[12:54] <wgrant> not late.
[12:54] <bigjools> everything you need is in devel
[12:54] <wgrant> devel has all of db-devel.
[12:54] <wgrant> Except for a couple of automerges.
[12:54] <bigjools> jtv: lp:~julian-edwards/launchpad/copies-must-use-queue-bug-789502
[12:55] <bigjools> that branch has one TODO left, which I am landing some pre-req stuff for separately
[12:55] <jtv> OK, will have to look further tomorrow.
[12:55] <bigjools> namely, letting IDistroSeries.createQueueItem() work without a changesfile
[13:02]  * jtv is away, away!
[13:02] <jtv> good night people
[13:27] <bigjools> I wish diff was clever enough to show stuff that was indented or moved
[13:43] <danilos> bigjools, how about "diff -b" (though it's the opposite of what you are asking about)
[13:43] <bigjools> yeah
[13:44] <bigjools> kdiff3 does it nicely
[13:44] <bigjools> not sure that's in the distro any more though :/
[14:07] <flacoste> bigjools: bzr qdiff is your friend
[14:07] <flacoste> bigjools: install qbzr
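As danilos notes, `diff -b` handles the opposite case to what bigjools wants: it treats runs of whitespace as equal, so pure re-indentation produces no hunks. A quick self-contained demonstration (temporary file names are arbitrary):

```shell
#!/bin/sh
set -e
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

# Same code, re-indented from two spaces to eight.
printf 'def f():\n  return 1\n' > "$tmpdir/a.py"
printf 'def f():\n        return 1\n' > "$tmpdir/b.py"

# Plain diff reports the re-indented line:
diff "$tmpdir/a.py" "$tmpdir/b.py" > /dev/null && echo same || echo different

# -b considers the two whitespace runs equal:
diff -b "$tmpdir/a.py" "$tmpdir/b.py" > /dev/null && echo same || echo different
```

(Note `-b` only equates non-empty whitespace runs; to ignore whitespace entirely, including added indentation where there was none, `-w` is the stronger option.)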
[14:11] <deryck> Morning, everyone.
[14:25] <bigjools> flacoste: I already had that installed, had no idea it had improved this much!  Now, can we do the same colouring on the MP diff page? :)
[14:25] <bigjools> morning deryck
[14:28] <flacoste> bigjools: qt plugin for mozilla ;-)
[14:29] <cr3> is there a way to use launchpadlib in a cron and handle openid/oauth automatically somehow?
[14:30] <bigjools> flacoste: I can't tell if you're joking!
[14:30] <flacoste> lol
[14:30] <flacoste> i was
[14:30] <flacoste> cr3: yes, you need to run the script one interactively and save the credentials to a file
[14:31] <flacoste> cr3: you can ask pitti how he does it for apport-retracer
[14:31] <cr3> flacoste: cheers!
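flacoste's advice for cr3 sketched in code: run once interactively so the OAuth dance can happen in a browser, then let cron reuse the saved credentials. This is written against the launchpadlib 1.x API as I understand it; treat the exact signature as an assumption, and the application name and file path are placeholders:

```python
def get_launchpad(credentials_path="lp-credentials.txt"):
    """Log in to Launchpad, reusing saved credentials when present."""
    # Imported inside the function so merely loading this module does
    # not require launchpadlib to be installed.
    from launchpadlib.launchpad import Launchpad

    # First (interactive) run: opens a browser for OAuth and writes the
    # credentials to credentials_path. Later cron runs: the file is
    # read back and no interaction is needed.
    return Launchpad.login_with(
        "my-cron-script",          # application name -- your choice
        "production",
        credentials_file=credentials_path,
    )
```

Usage: invoke the script once by hand to create the credentials file; subsequent cron invocations find it and proceed without a browser.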
[14:57]  * bigjools wonders if the derived distros work on package copy jobs has fixed all the PPA copying timeouts
[15:10] <henninge> Hey deryck! I am back.
[15:13] <deryck> henninge, ok, coming.
[15:14] <henninge> deryck: otp
[15:14] <deryck> ok
[15:22] <bigjools> do we have a revno for the release yet?
[15:30] <rvba> bigjools: https://dev.launchpad.net/DowntimeDeploymentSchedule
[15:32] <flacoste> bigjools: we should have, PQM Is open
[15:32] <bigjools> \o/
[15:32] <bigjools> thought it was closed still
[15:33] <flacoste> bigjools: 13168 is the one
[15:37] <LPCIBot> Project windmill-db-devel build #373: STILL FAILING in 1 hr 6 min: https://lpci.wedontsleep.org/job/windmill-db-devel/373/
[15:50] <LPCIBot> Project parallel-test build #20: STILL FAILING in 1 hr 28 min: https://lpci.wedontsleep.org/job/parallel-test/20/
[16:26] <lifeless> man, this is waaay too early to be awake
[16:29] <bigjools> lifeless: !
[16:29] <bigjools> nobody, ever, has had a better nick than you
[16:31] <sinzui> deryck_: Can you review https://code.launchpad.net/~sinzui/launchpad/webkit-yuitest-love-1/+merge/63884 so that we can discuss the future of JS testing
[16:32] <deryck_> indeed
[16:32] <deryck_> sinzui, looking now....
[16:33] <lifeless> bigjools: IRC nicks, you gotta own them :)
[16:35] <abentley> lifeless: or else pwn them using sniffed credentials :-)
[16:40] <lifeless> bac: if you can qa rev 13170 we can get it deployed today
[16:41] <deryck_> sinzui, I like this very much.  No concerns or comments really.  r=me.
[16:42] <bac> lifeless: looking
[16:42] <sinzui> deryck_: thanks. I will make my first attempt to land. I may need to tune html5browser for lucid because the dependencies are different.
[16:43] <gary_poster> abentley, could you kick https://code.launchpad.net/~chromium-team/chromium-browser/channels , per https://answers.launchpad.net/launchpad/+question/160629 ?  I'm assuming this is related to bug 648075, per your comment in https://answers.launchpad.net/launchpad/+question/159857
[16:43] <_mup_> Bug #648075: Automatic translations export fails intermittently <branch-scanner> <lp-translations> <oops> <qa-untestable> <Launchpad itself:Fix Committed by abentley> < https://launchpad.net/bugs/648075 >
[16:44] <deryck> sinzui, Once it lands, it would be nice to move the test_yuitests files out of the windmill dir, just to make it clear this is separate...
[16:44] <deryck> sinzui, but that's easy and not a huge issue either.
[16:44] <LPCIBot> Project windmill-devel build #184: STILL FAILING in 1 hr 6 min: https://lpci.wedontsleep.org/job/windmill-devel/184/
[16:44] <sinzui> deryck: agreed.
[16:45] <abentley> gary_poster: kicked.
[16:45] <gary_poster> thanks abentley!
[16:59] <jcsackett> sinzui, could you look at https://code.launchpad.net/~jcsackett/launchpad/oh-oh-pick-me-pick-me-3/+merge/63885 for me?
[17:00] <sinzui> okay
[17:29] <LPCIBot> Project windmill-devel build #185: STILL FAILING in 44 min: https://lpci.wedontsleep.org/job/windmill-devel/185/
[17:54] <LPCIBot> Project devel build #787: STILL FAILING in 5 hr 40 min: https://lpci.wedontsleep.org/job/devel/787/
[17:57] <henninge> sinzui: Hi! Do you have a minute?
[18:19] <lifeless> I recall there being an existing bug asking for a strike-out style on closed bugs
[18:19] <lifeless> (not bug 109113)
[18:19] <_mup_> Bug #109113: closed bugs are not clearly marked in the recently filed/touched lists <lp-bugs> <ui> <Launchpad itself:Triaged> < https://launchpad.net/bugs/109113 >
[18:19] <lifeless> I cannot find it
[18:21] <sinzui> jcsackett: I replied with a question. I hope you can reassure me that the comment is wrong
[18:28] <lifeless> found it, it had become a dupe
[18:31] <LPCIBot> Project windmill-devel build #186: STILL FAILING in 45 min: https://lpci.wedontsleep.org/job/windmill-devel/186/
[18:42] <lifeless> sinzui: hi
[18:42] <sinzui> hi lifeless
[18:42] <lifeless> in lib/lp/bugs/templates/bugtarget-portlet-tags-content.pt, to remove memcache I just need to delete the <tal:tags content="cache:public noparam, 60 minutes"> line (and its closing line)?
[18:43] <sinzui> yes
[18:44] <lifeless> like so - http://pastebin.com/UseELrCv ?
[18:47] <sinzui> lifeless: exactly so
[18:49] <lifeless> great
[18:49]  * lifeless does it
[18:49] <lifeless> (we don't need memcache for a 150ms page :)
[18:50] <lifeless> not a 60 minute one anyhow
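For reference, the edit being discussed, reconstructed; the original pastebin is gone, so the surrounding markup here is guessed, and only the `<tal:tags>` line itself is quoted from the conversation:

```xml
<!-- Before: portlet contents cached in memcache for 60 minutes -->
<tal:tags content="cache:public noparam, 60 minutes">
  <!-- ...portlet tag markup... -->
</tal:tags>

<!-- After: the opening tal:tags line and its matching closing line are
     deleted, leaving the inner markup to render on every request -->
```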
[18:53] <jcsackett> sinzui: i see your comment on my MP. do you want confirmation that it's not possible period, or not possible for someone who doesn't own/moderate said private team?
[18:55] <sinzui> jcsackett: register a private team, then try to set the team as the driver of /firefox
[18:55] <jcsackett> sinzui: so, if i do that, as say admin, i can assign the private team. if i log in as name12 afterwards i cannot.
[18:55] <jcsackett> this is, i take it, not ideal.
[18:56] <sinzui> jcsackett: No admin can do this. the picker should not suggest you can
[18:57] <jcsackett> ok.
[19:12] <sinzui> jcsackett: I believe I misspoke. Please read Brad's private-team-roles.txt doctest that clearly states our expectation.
[19:13] <jcsackett> sinzui: ok.
[19:16] <jcsackett> sinzui: so it looks like provided you can see the private team, you should be able to pick it as maintainer?
[19:16] <sinzui> yes
[19:18] <sinzui> This is actually good news because lifeless and I agree that you should be able to place a private team in a public role (and we will tell users who the team is). Then check the bug assignee example to ensure nothing odd happens
[19:19] <jcsackett> okay, i'm looking at that now.
[19:20] <jcsackett> sinzui: it's a little weird when a product has a private team assigned. the display if you can't see the team is "Maintainer: <hidden>"
[19:20] <lifeless> bigjools: package copy
[19:20] <bigjools> lifeless: you're like a rabid dog :)
[19:20] <lifeless> bigjools: does it count the user-ticked-boxes (e.g. sources) or expand to source + binary count and use that ?
[19:20] <lifeless> bigjools: woof!
[19:21] <bigjools> lifeless: I asked for source+bin but let me check.  Although ideally we need *file* count as that's the badly scaling bit
[19:21] <lifeless> bigjools: so if it's source + bin, can I suggest a threshold of 40?
[19:22] <bigjools> lifeless: it's set in a FF so you can suggest wth you like :)
[19:22] <lifeless> :P
[19:22] <lifeless> I mean, do you think that that would be too low?
[19:23] <bigjools> gah, it's only # of sources
[19:24] <bigjools> easy enough to fix
[19:24] <lifeless> I'll leave it with you :)
[19:24] <sinzui> jcsackett: That is correct for the code in the current state
[19:24] <bigjools> lifeless: it was done with syncing in mind for DDs but DDs will always use PCJs now so I am going to leave this for a maintenance team
[19:24] <bigjools> maybe that will still be me :)
[19:24] <jcsackett> sinzui: right, didn't think it was wrong. just peculiar. :-)
[19:25] <lifeless> bigjools: ok, so we can set the threshold to 0 for now to fix unembargo of linux
[19:25] <bigjools> lifeless: yes but I'd like to see that tested on staging first
[19:25] <lifeless> indeed!
[19:25] <bigjools> :)
[19:26] <bigjools> lifeless: the other thing is, most people expect the copies to complete immediately, this will be a culture shock and possibly even break api scripts
[19:26] <lifeless> we can socialise it a little first
[19:27] <bigjools> give it a few beers?
[19:27] <lifeless> take it out to dinner!
[19:27] <bigjools> fnar
[19:38] <jcsackett> sinzui: bug assignee stuff is working on my branch as well.
[19:38] <sinzui> okay. r=me
[19:38] <sinzui> with the fix of the comment
[19:39] <jcsackett> sinzui: dig.
[19:39] <jcsackett> making that fix now.
[19:43] <LPCIBot> Project windmill-db-devel build #374: STILL FAILING in 42 min: https://lpci.wedontsleep.org/job/windmill-db-devel/374/
[19:46] <henninge> sinzui: Hi!
[19:49] <henninge> sinzui: can you please review this branch?
[19:49] <henninge> https://code.launchpad.net/~henninge/launchpad/bug-410864-undefined-in-picker/+merge/63911
[19:50] <henninge> sinzui: It is fixing a bug in the picker widget which I heard your squad is working on, too.
[19:51] <henninge> sinzui: So I hope you can check if this is colliding with any work that is going on
[19:51] <sinzui> henninge: thank you very much
[19:51] <lifeless> gary_poster: while you're here, is there anything you need from me / that I can do for you ?
[19:51] <henninge> sinzui: but it is only a small fix.
[19:51] <lifeless> bigjools: ditto^
[19:52] <gary_poster> lifeless, hi, thanks for offer.  thinking...
[19:52] <gary_poster> lifeless, nothing offhand, but I will keep in mind that you are here unusually early :-)
[19:53] <lifeless> :)
[20:07] <LPCIBot> Project parallel-test build #21: STILL FAILING in 53 min: https://lpci.wedontsleep.org/job/parallel-test/21/
[20:12] <sinzui> henninge: r=me , but your branch conflicts with jcsackett's current branch that is landing
[20:12] <henninge> sinzui: thanks, I'll have a look
[20:13] <sinzui> henninge: jcsacket moved modules and code so it may not be obvious. You may need to paste line 8 from the diff into the new code
[20:25] <LPCIBot> Project windmill-devel build #187: STILL FAILING in 42 min: https://lpci.wedontsleep.org/job/windmill-devel/187/
[20:26] <henninge> sinzui: I'll EOD now but I will look into it in the morning. Thanks for your review and comments.
[20:42] <LPCIBot> Project db-devel build #621: FAILURE in 6 hr 29 min: https://lpci.wedontsleep.org/job/db-devel/621/
[21:08] <bac> hi lifeless, you around and have a minute?
[21:08] <lifeless> sure
[21:08] <lifeless> how can I help ?
[21:08] <bac> lifeless: great.  i'm trying to QA the branch for bug 788874 -- the too big email oops
[21:08] <_mup_> Bug #788874: mail handling stalls when a large message is received and not deliverable <canonical-losa-lp> <qa-needstesting> <Launchpad itself:Fix Committed by bac> < https://launchpad.net/bugs/788874 >
[21:09] <bac> on qastaging i've seen the log message written, and an OOPS report generated but i cannot find the outbound OOPS email to verify it got truncated as intended
[21:09] <lifeless> ok
[21:10] <bac> i triggered it by sending a bug message to qastaging with an 11MB attachment
[21:10] <bac> and had cronscripts/process-mail and send-bug-notifications run on demand
[21:10] <bac> lifeless: so, can you think of something i'm missing?
[21:11] <lifeless> 2011-02-26 06:20:56 ERROR   No mail box is configured. Please see mailbox.txt for info on how to configure one.
[21:11] <lifeless> the logs for qastaging are in carob:/srv/launchpad.net-logs/qastaging/asuka
[21:11] <lifeless> process-mail is logging that output
[21:11] <bac> right, and if you look at 16:10 from today it has the log message i referred to
[21:11] <lifeless> I think this means there is a config issue of some sort in the way its being run
[21:12] <lifeless> oh, thats feb
[21:12] <lifeless> nvm
[21:12] <lifeless> so, I thought that when process-mail generates an oops
[21:12] <lifeless> it reported it to the log
[21:12] <lifeless> so, if thats not happening
[21:13] <bac> lifeless: this is the generated OOPS: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1985QASTAGING40
[21:13] <lifeless> ah, I wasn't looking at the right point in the log
[21:13] <lifeless> *now* I'm more together
[21:14] <lifeless> and you've looked in the staging mailbox?
[21:14] <bac> yes
[21:14] <LPCIBot> Project windmill-devel build #188: STILL FAILING in 43 min: https://lpci.wedontsleep.org/job/windmill-devel/188/
[21:15] <bac> lifeless: but the odd thing abt the staging mbox is it has the same message repeated hundreds of times, unrelated to what i'm doing
[21:17] <lifeless> I think paolo is testing a script
[21:17] <lifeless> there are several different bugs with duplicate messages
[21:17] <lifeless> 718985, 718986, ..
[21:18] <lifeless> I've cleaned those out
[21:19] <lifeless> so, possibilities are:
[21:19] <lifeless>  - the NDR isn't getting captured by qastaging mail rules
[21:19] <lifeless>    (check the account you sent the big message from)
[21:20] <lifeless>  - the patch is not sending the mail (and not causing a log message to be sent)
[21:21] <lifeless>  - something else?
[21:22] <lifeless> the change to delete the email before sending the NDR makes this a bit harder to check
[21:22] <bac> lifeless: i was wondering if there was some other process that needed to be kicked.  but it doesn't sound like it
[21:22] <lifeless> because we can't check that its been deleted as a way to see where the code got to
[21:23] <lifeless> at least on prod this is definitely in-process
[21:40] <gary_poster> lifeless, I was thinking about your perception that all or most of yellow was on the feature well after we switched to maintenance.  FWIW, it is true that danilo and I were the people I expected to be consumed, and we were; but it is also true that gmb and benji each spent a bit of time on the feature.
[21:41] <gary_poster> benji was only about a week in my estimation; gmb is fixing generic critical bugs but one he's tried to fix repeatedly is tied into the whole feature thing.  So I think there's a good explanation for your perception--there's something behind it.
[21:41] <gary_poster> If this clarification means you think it is important to reraise your concern later, I'll be happy to reiterate this (and perhaps get concrete data if you desire).
[21:41] <gary_poster> (I personally think we should simply remember this for the future, and if it happens again, raise the concern again.)
[21:43] <lifeless> gary_poster: thanks!
[21:44] <gary_poster> np
[21:44] <bac> lifeless: i'm going to try another and ask that process-mail be run with INFO level logging...it won't tell us much more but is worth trying
[21:44] <lifeless> gary_poster: I will talk to flacoste about it; I'm not sure I can suggest anything /different/
[21:44] <lifeless> gary_poster: estimation is hard
[21:44] <gary_poster> ack
[21:45] <lifeless> gary_poster: did my reply about the account-merge-feedback thing help you?
[21:50] <gary_poster> lifeless, ah, sorry!  that was yesterday, and so is no longer relevant. ;-)  I forgot.  How about I bring up the question on -dev and include your reply, to see if we have any comments?  The issue is a small thing, but I expect you can see that it is still relevant in CHR-land (and a bit annoying that we ask for feedback without any particular plan on what to do about it).
[21:50] <gary_poster> I generally agree that we should not follow up on each case.  I'd go further and say that we could simply say, in the email, "if you receive many of these, please contact feedback"
[21:50] <gary_poster> That way, if we get a ping on feedback, it's more likely to be actionable
[21:51] <gary_poster> (and if it is not actionable, let's not ask for feedback)
[21:53] <lifeless> gary_poster: +1
[21:55] <LPCIBot> Project windmill-devel build #189: STILL FAILING in 41 min: https://lpci.wedontsleep.org/job/windmill-devel/189/
[22:15] <bac> lifeless: i have this now:  https://pastebin.canonical.com/48325/ with INFO level debugging
[22:15] <bac> lifeless: followed by the results of send-bug-notifications.py -vv at https://pastebin.canonical.com/48327/
[22:17] <bac> lifeless: aha!  the mail did show up in the staging mailbox
[22:17] <lifeless> \o/
[22:18] <bac> and the 11MB attachment is 60KB long
[22:18] <bac> whee
[22:18] <gary_poster> yay!!!!
[22:18] <lifeless> sweet
[22:18] <lifeless> so thats qa-ok
[22:19] <bac> i was afraid i was going to have to roll-back a branch i really thought was good...
[22:20] <lifeless> bac: I'm glad you persevered ! I dunno what went wrong on the first test
[22:23] <bac> dunno.  thanks for your help, though.
[22:26] <lifeless> no probs, anytime
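The QA above confirmed the 11MB attachment came back truncated to about 60KB in the bounce. A hypothetical, stdlib-only illustration of that kind of truncation (this is not the actual fix from bug 788874; the cap and function names are invented to match the observed behaviour):

```python
from email.message import EmailMessage

# Cap loosely matching the QA'd result (~60KB returned).
MAX_RETURNED_BYTES = 60 * 1024

def truncated_copy_for_bounce(raw_message):
    """Return at most MAX_RETURNED_BYTES of the original message, for
    quoting in a non-delivery report, so a huge attachment cannot
    stall outbound mail handling."""
    if len(raw_message) <= MAX_RETURNED_BYTES:
        return raw_message
    return raw_message[:MAX_RETURNED_BYTES] + b"\n[message truncated]\n"

msg = EmailMessage()
msg["Subject"] = "bug report with huge attachment"
msg.set_content("see attached")
msg.add_attachment(b"\0" * (11 * 1024 * 1024),
                   maintype="application", subtype="octet-stream")

bounced = truncated_copy_for_bounce(bytes(msg))
print(len(bounced))  # 61461: the 60KB cap plus the truncation marker
```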
[22:46] <persia> Good day.  I just noticed that I had a workitem from UDS to "rocketfuel setup - dev recommendation in a chroot - http://dev.launchpad.net/Running/Schroot".  Does anyone happen to know what this means?
[22:46] <persia> Am I to clean up the page, with recommendations for automated means to create schroots?  Just test the procedure and file bugs?
[22:46] <lifeless> both of those things sound good
[22:46] <lifeless> you know what would be super awesome
[22:46] <persia> What?
[22:47] <lifeless> instructions to get lp dev environment going with lxc
[22:47] <lifeless> with the container natty/oni and the contained thing lucid
[22:47] <persia> I agree that would be super-awesome.  If I work on that, I'll spend the next three months yak shaving, take care of some of the precedents, and get distracted.
[22:49] <persia> My recommendation there is to wait for LXC support to be added to libvirt and the ubuntu-security-tools stuff (both planned), and then ask someone to document how to do rocketfuel-setup therein at that point.
[22:50] <lifeless> oohhh lixc libvirt sounds shiny
[22:51] <persia> It's getting really close.  Some architectures don't have the necessary HW support for KVM, and qemu is *slow*.
[22:53] <persia> Unless I misread my tea leaves, I suspect we should have nice automation for LXC in oneiric, making the super awesome procedural documentation something that should be able to happen sometime next Ubuntu cycle (which ought to make it easier to cross-test as LP migrates to the new LTS)
[22:55] <lifeless> we're unlikely (due to sheer manpower constraints) to migrate to the next LTS for 6-12 months after its release
[22:55] <lifeless> we'll have the dev environment move
[22:55] <lifeless> but moving to the next LTS means:
[22:55] <lifeless>  - moving to python2.7 on lucid
[22:55] <lifeless>  - then moving LTS
[22:55] <lifeless> neither are trivial
[22:55] <persia> Indeed.
[22:56] <lifeless> and we have no manpower to do them for their own sake; so stakeholder requests || dealing with criticals have to be sacrificed to do them ahead of 'must do them now'
[22:57] <LPCIBot> Project windmill-devel build #190: STILL FAILING in 41 min: https://lpci.wedontsleep.org/job/windmill-devel/190/
[22:58] <persia> Lucid will have another ~36 months of support when oneiric+1 is released, so it's not precisely a critical transition.