/srv/irclogs.ubuntu.com/2011/01/27/#bzr.txt

* vila woke up one hour later than yesterday, one more week and the jet lag will be sorted out :-(00:19
lifelessjam: no worries00:24
jamvila: you should definitely not be up now :)00:24
vilajam: indeed :-/00:25
vilajam: nice summary in bug #589521, thanks for that00:25
ubot5Launchpad bug 589521 in Ubuntu Distributed Development "nagios monitoring of package imports needed" [Critical,Triaged] https://launchpad.net/bugs/58952100:25
fullermdvila: Sorted out?  What's wrong with things like they are?  It sounds like a perfectly sensible schedule to me   :p00:38
vilafullermd: the truth is, the arguments with wife and daughters tend to be easier to manage under this schedule...00:40
fullermdSee?  Come to the dark side...00:40
lifelessmkanat: hey, you said the other day that b.l.n had the wrong theme?02:54
mkanatlifeless: Yeah.02:54
lifelessmkanat: I meant to ask what you mean by that, but got local interrupts02:54
mkanatlifeless: I'll see if I can show you.02:54
mkanatlifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore02:54
mkanatlifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css02:55
lifelesswhat css should it have?02:55
mkanatlifeless: That CSS exists in the actual branch.02:55
mkanatlifeless: But it is a 404 on bln.02:56
lifelesslet me just nab someone02:56
pooliemkanat, i don't suppose you would have any time left to actually measure trunk (without historydb) vs 1.18 performance on a large branch?02:56
mkanatpoolie: I might. I have 2.15 hours left.02:57
mkanatI mean, 2 hrs and 15 minutes.02:57
pooliefacts might bring a merciful death to this thread02:57
mkanatpoolie: I don't think anybody ever said that history_db was slower.02:57
poolierobert thinks that john said it would be slower if the db was not enabled02:57
mkanatpoolie: I didn't see that on the list.02:58
lifelessmkanat: jam said this:02:58
lifelessOld     New     Action02:58
lifeless10s     30s     First request of a branch in a project that has not been02:58
lifeless               previously seen. (slower)02:58
lifeless(amongst other things)02:58
lifelessnote that none of our projects have been previously seen02:58
mkanatlifeless: That's with history_db enabled or without?02:59
lifelessand a 30s request will be killed02:59
lifelessmkanat: with02:59
mkanatlifeless: Okay, we're not talking about that situation, then.02:59
mkanatlifeless: That situation would already be handled by what you'd do to enable history_db on codehosting.02:59
lifelessmkanat: he also said that without the plugin, it uses plain bzr apis02:59
mkanatlifeless: Which it does now.02:59
lifelessso will be slower than before as well02:59
mkanatlifeless: He said that there is some difference there from before history_db? Because the branch you're currently running uses plain bzr apis.03:00
mkanatOh, I may be seeing what he means.03:01
poolie?03:04
poolieis it that it's removed caching code that would have helped in 1.18?03:04
mkanatI'm still reading over the code.03:04
mkanatpoolie: I believe so, yes.03:04
mkanatI'm not sure how much that actually matters, though.03:05
mkanatWhat I don't know is how to turn off history_db to test.03:05
lifelessremove the plugin03:05
lifelessthe historydb change is the following:03:05
mkanatlifeless: It's a part of loggerhead.03:05
lifeless - discard all the caching apis03:05
lifeless - switch to raw bzrlib apis03:05
poolieok, so istm it would be a worthwhile use of time to measure whether trunk-minus-hdb is in fact slow03:05
jamlifeless, mkanat: 30s is slower for the first request on a project when it has to convert the full ancestry to the cached form. Note that it can incrementally build the cache, so even if the thread is killed it will eventually succeed.03:05
lifeless - have a *bzrlib* plugin that makes those apis faster.03:05
jamAlso note, it is only for Emacs sized projects. bzr is a few seconds, IIRC03:06
mkanatYeah, launchpad also seems to be a few seconds, in my testing.03:06
mkanatAnd yeah, even if history_db can't write to the disk, it will create its cache in memory.03:06
jamhistory_db was pulled into the loggerhead codebase03:06
lifelessjam: ah, k03:06
mkanatSo I suppose history_db is always enabled.03:07
jamand 10s would become *every* request if you remove all the caching03:07
lifelessso, this is really a bit of a distraction; unless its a no brainer: < a day of work, including getting up to speed, its not suitable for a maintenance squad at this time.03:07
mkanatjam: That would happen basically if I just don't specify "--cache", right?03:07
jammkanat: I don't remember from the command line, tbh03:07
pooliethe question is: is there anything you could do in ~1 day that will get it at least no worse than 1.18?03:07
mkanatpoolie: I suspect that for the actual average use case, it is already no worse than 1.18.03:08
mkanatpoolie: With the scale of codebrowse, it's likely that current stable will be rebuilding caches quite a bit.03:08
poolieok03:08
mkanatpoolie: This is just my suspicion, though, not a proven fact.03:08
jampoolie: I think the most immediate thing would be to teach Launchpad to share caches between projects03:09
jamat which point you'll hit a couple timeouts, but afterwards everything should be better03:09
pooliemkanat, i'm wondering if we can do some measurements to get more confidence in whether that is true or not03:09
mkanatpoolie: Yeah, I could do that, for sure.03:10
pooliejam, i know, but it seems like that is going to be more than a day of work03:10
mkanatpoolie: I have a loggerhead load tester already.03:10
jampoolie: it *should* just be changing the path to not include the branch name, just the project name03:10
mkanatpoolie: Let me do a test right now.03:10
jamwon't be shared between users, but probably pretty good03:10
jambut I don't fully understand the codehosting remapping, etc.03:10
lifelessjam: would that permit leakage from private projects03:10
jamlifeless: revision id and revno stuff03:10
lifelesse.g. if someone has the revid for a private project, makes that a ghost in their branch of same project03:11
jamno revision information03:11
jamlifeless: the cache is just the graph of revnos and revision_ids, it doesn't cache the actual content03:11
lifelessjam: revids are sensitive too, given they include email addresses03:11
jamso if you have the revision id, then you have all the information you're going to get03:11
jamlifeless: but you just said you *have* it03:11
lifelessright, thats one scenario03:11
jamsince you are making it a ghost03:12
mkanatOkay, so I think that disabling the on-disk caching of historydb involves commenting out two lines.03:12
lifelessthe other one is it likely to expose a revid when it shouldn't.03:12
mkanatThen it should cache in memory.03:12
lifelesse.g. due to bugs03:12
pooliemkanat, let's try it!03:12
mkanatpoolie: Okay, going to try it now.03:12
lifelessremember03:13
lifelessto test it with a stacked branch and HTTP backend03:13
mkanatpoolie, lifeless, jam: Another option is to have codebrowse cache everything in one directory, in one file.03:13
lifelessfor apples to apples comparison03:13
pooliewell03:13
pooliehis hardware and load is undoubtedly not exactly the same03:13
mkanatMy load tester creates significantly more load than codebrowse experiences.03:14
pooliebut perhaps we can get some data03:14
mkanatOkay, I can already tell you that if you disable the on-disk cache, yes, trunk is slow.03:14
mkanatI don't even have to run the tester.03:14
mkanatGetting information for ViewUI: 15.945 secs03:14
poolieon emacs?03:15
mkanatOn launchpad.03:15
poolieok, and you're confident that's representative?03:16
poolieif so, that means we can't just deploy trunk03:16
mkanatjam: I imagine that if we used loggerhead's normal configuration where there's only one history_db.sql file for all branches combined, then it would become unmanageably large, on codehosting?03:16
pooliewe would need to do whatever it is to get historydb in use03:16
mkanatpoolie: Yeah--what I didn't realize was that history_db was always being enabled, in my earlier tests.03:17
mkanatpoolie: Even if you don't specify a cache directory, loggerhead creates a temporary one when you start it.03:17
poolieand the question then is: can that task be made sufficiently small to do it ahead of doing other loggerhead work03:17
mkanatpoolie: It's all infrastructural work on codehosting, AIUI.03:18
poolieso it does cache to disk at the moment? but in a different form03:18
mkanatpoolie: No, your current stable loggerhead does not cache to disk.03:18
mkanatpoolie: But trunk always does.03:18
lifelessuhm03:19
lifelessMy understanding is that it uses sqlite caches still03:19
lifelessmwhudson: ^03:19
mkanatlifeless: It *can*.03:19
mkanatlifeless: I don't think codebrowse has that enabled.03:19
mwhudsoni'm pretty sure it does03:20
lifelessso am I03:20
poolieit seems like changing it to write historydb caches into the same place should not be too traumatic03:20
mkanatYeah.03:20
jammkanat: I don't think it would become unmanageable, but probably bigger, and after that first attempt, probably faster.03:20
mkanatActually, that wouldn't be a problem at all, if you are already using the on-disk cache.03:20
pooliejam, so it would be better to use larger caching scopes, but doing it per branch would be ok?03:20
poolieit won't die on the first access?03:20
mkanatYeah, so you could actually just deploy history_db now.03:20
jammkanat: no, loggerhead always cached to disk, it always created a temp one, even before history_db03:20
mkanatjam: Hmm, except that I could always make loggerhead re-generate its branch caches before history_db, with enough simultaneous access.03:21
mkanatjam: It looked to me like it was only using the lru_cache.03:21
mkanat(Unless you configured it to use the on-disk cache.)03:21
jammkanat: it just happened to regen the cache all the time03:21
jamif you don't specify a cache, one is created that only lasts the lifetime of the process, and then is thrown out03:22
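The behaviour jam describes here -- a temporary cache directory created per process and discarded when the process exits -- looks roughly like the following sketch; the names are made up for illustration and this is not loggerhead's actual code.

    import atexit
    import shutil
    import tempfile

    def get_cache_path(configured_path=None):
        # If a cache directory was configured, reuse it across restarts.
        if configured_path:
            return configured_path
        # Otherwise build a throwaway one that lives only as long as the process.
        tmp = tempfile.mkdtemp(prefix='loggerhead-cache-')
        atexit.register(shutil.rmtree, tmp, True)  # ignore errors on cleanup
        return tmp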
mkanatjam: Oh indeed.03:22
poolie(separately, lifeless, is deletion of the /beta api near enough we should worry?)03:22
jampoolie: I have to double check the code a bit. but I believe it will populate the cache file incrementally, so even if it was killed in the first request, the second one will succeed for any given branch03:23
poolieor maybe the 18th?03:23
lifelesspoolie: in principle we're not adding anything to it03:23
mkanatjam: In that case, history_db sounds like a net win with little infrastructural work required.03:23
lifelessjam: well the second request may be to a different instance03:23
jamlifeless: but the disk cache is in the same location03:23
jamaiui03:23
lifelessI find the hand wave on infrastructure work scary.03:23
jamusing the same cache that loggerhead is using today03:24
mkanatlifeless: Why would you?03:24
mkanatlifeless: It's just a new file in the same directory.03:24
mkanatlifeless: That's the extent of the change that would happen.03:24
pooliejam: is historydb broken or unfinished or dangerous in any regard?03:24
mkanatlifeless: Out of curiosity, do both processes currently use the same on-disk cache directory?03:25
lifelessyes03:25
lifelessthey do03:25
jampoolie: I don't know of anything specifically broken03:25
jamcaveat code has bugs03:25
lifelesswhich is one thing jam highlighted as being a concern03:25
pooliesure03:25
jamprobably nothing worse than the existing code03:25
poolieand rollouts are hard03:25
pooliebut, we still need to get through them03:25
lifelessmkanat: I find it scary because handwaving around a bit of core plumbing is somewhat cavalier03:25
lifelessmkanat: when one considers all the bits that can - and have - gone wrong in the past.03:26
mkanatlifeless: Are you talking about infrastructure or about the internal architecture of loggerhead?03:26
lifelessmkanat: infrastructure03:26
lifelessand architecture03:26
mkanatlifeless: Okay. Are you using (or do you have) sqlite 3.7? If you use a WAL, I bet both processes would be fine accessing the same cache.03:27
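mkanat's WAL suggestion, as a minimal sketch assuming SQLite >= 3.7; the cache file name is hypothetical, not necessarily the one loggerhead uses.

    # Enable SQLite's write-ahead log so concurrent readers don't block on a
    # writer; the setting is persisted in the database file itself.
    import sqlite3

    conn = sqlite3.connect('history_db.sql')
    conn.execute('PRAGMA journal_mode=WAL;')    # needs SQLite >= 3.7
    conn.execute('PRAGMA synchronous=NORMAL;')  # common pairing with WAL
    conn.close()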
lifelessI haven't looked at the risk of disclosure for private branches other than my brief queries here03:27
mkanatlifeless: Risk of disclosure in what way?03:27
mkanatlifeless: history_db, when we're just talking about loggerhead, is only a backend change.03:27
lifelessbugs disclosing revision ids not in the branch being accessed03:27
pooliebut, i think at the moment we're not talking about having a shared cache?03:28
lifelessmkanat: I realise that its a backend change.03:28
lifelesspoolie: one of jams caveats was slower initial scans w/out a shared cache.03:28
mkanatBut we would have a shared cache.03:28
lifelesssigh03:28
mkanatBecause we'd have only one cache.03:28
lifelessI'll come back to this, I feel that the point is ....> over there03:29
lifelessdoing a bunch of work to assess whether its small enough, based on guesses - is at best a guess. Right now, I still wouldn't confidently ask curtis or julian to take this on within maintenance scope03:29
mkanatlifeless: It doesn't sound like any code work is required.03:30
lifelessif I'm over concerned about the risk, its because of a history of problems.03:30
poolieit's like a major version bump in a dependency03:30
poolieperhaps one that has caused trouble in the past03:30
mkanatYeah, sure. I mean, there's still the testing thing that I just talked about on the list.03:30
mkanatI'm fully with you on that.03:30
lifelessmkanat: I don't know that I've claimed code work; the same engineers do code and integration - everything except deploy & server ops.03:30
lifelessmkanat: its an integrated team leading right up to the ops team03:30
mkanatlifeless: Okay. What do you mean by "integration"?03:31
lifelessfor me to say to curtis or julian that its in scope for maintenance, I'd want to be quite confident that there is < 12 hours work assessing, measuring, qaing on staging, deploying & dealing with the fallout.03:31
poolielifeless, how would you handle an upgrade to some other component we depend upon?03:32
pooliethere may well be some that are over 12h03:32
lifelesspoolie: if its small a maintenance team just does it. If its not a feature squad gets assigned it - as will happen to python2.703:32
poolieok03:33
lifelessnow, some devs may scratch an itch and get python2.7 workable on dev machines, but actually upgrading prod etc requires considerable care.03:33
pooliesure03:33
lifeless12h is an arbitrary figure03:33
poolieubuntu upgrades are another case that comes to mind03:33
poolieistm we have the option of trundling along 1.18.x until we get time to safely do a bigger deployment03:33
lifelessright, they were traumatic, and we're doing them differently next time - separating OS and dependencies; we'll go to py 2.7 *before* the next OS upgrade.03:34
lifelessso that we're in place.03:34
pooliebut we'd probably prefer not to do major development that's duplicating what's been done in an upstream later release03:34
lifelesspoolie: I don't see that as an option; I see that as the /only/ option given the info I've got about whats up.03:34
pooliemax has said, rightly or wrongly, that he doesn't think there's any more low hanging fruit for performance03:34
poolieso, i don't think we can usefully schedule small blocks on this03:35
mkanatTalking about actual development of loggerhead, that is.03:35
mkanatWhich would be a separate issue from rolling out this one.03:35
poolieand if we're going to schedule a big block, we might as well start it by getting historydb running03:35
mkanatlifeless: Without tests, I can't say how much time deployment would take. But if you also start changing *any* branch significantly, my answer would be the same.03:35
pooliesince we know that will probably help alot03:35
mkanatlifeless: If everything goes fine, I suspect it will take no longer than a normal rollout, perhaps an hour or two more.03:36
mkanatlifeless: The worst that could happen--that I can predict now--is that the two processes may need separate cache directories so that they don't lock each other out.03:36
lifelessmkanat: so, I'm aiming at 10 minute downtime windows; thats a factor of 12 more than 'ok'03:36
lifelessmkanat: or maybe you don't mean downtime window ?03:37
mkanatlifeless: I don't see any downtime happening.03:37
mkanatlifeless: Yeah, I mean total maintenance work time.03:37
mkanatlifeless: To do the rollout.03:37
poolieit seems like we should easily be able to run it in parallel03:37
pooliesince it's readonly etc03:37
mkanatpoolie: history_db gets written to quite a bit.03:37
lifelessall this is fine03:37
lifelessBUT ITS WORK.03:38
poolielogically readonly03:38
lifelesswho is going to do it? what if the guess is wrong? whats the impact on load? will the machine thrash?03:38
mkanatpoolie: Yeah.03:38
poolieindeed03:38
mkanatlifeless: The machine should use considerably less memory with history_db than it does now.03:38
lifelessI mean, I'm totally fine with whatever plan, my point has *never* been that its crap, its been that *we need to do work to make this usable in prod*03:38
lifelesseffort03:38
lifelesstime * energy.03:38
poolielifeless, the point is, i don't see that there is any other sensible next step other than deploying this03:38
mkanatlifeless: In an ideal situation, the rollout requires no attention from anybody--it would just be a standard merge and push.03:38
lifelesswhich btw, this conversation is sapping for me.03:38
pooliei'm not saying you have to do it right now03:38
pooliebut there's no point discussing how to make it possible to do other things off 1.1803:39
mkanatlifeless: I totally understand. :-)03:39
lifelesspoolie: But thats not what I argued *for*. I argued that other than oops fixing, which there are plenty of sensible things to do in the current lp branch, we're not going to be undertaking major works for 6-9 months, based on my knowledge of jml's pipeline.03:40
poolieok03:40
poolieso again we come back to assumptions03:40
lifelessdo you agree that nearly 20K unhandled exceptions a day is a little bad?03:40
pooliemax thinks it is not easy to fix oopses in the current branch03:40
mkanatAh, I wouldn't exactly say that.03:40
lifelessand do you agree that historydb is an *operational unknown*?03:40
mkanatIt was easy to fix the "no such revision" one.03:40
mkanatlifeless: Actually, now that I understand it, I only agree minimally.03:41
mkanatlifeless: Because codebrowse is already using on-disk caches.03:41
poolielifeless, i'm only spending time on this because i think it needs to be improved and i want to see us take the best course on it03:41
mkanatlifeless: This is just another on-disk cache.03:41
lifelessmkanat: which we have several deployment options for03:41
lifelessa) per branch03:41
lifelessb) shared cross branch03:41
lifelessc) memory only03:41
lifelessd) huge one03:41
lifelesse) ...?03:41
mkanatlifeless: Right, and (d) requires no code work at all.03:42
lifelesswe've seen locking issues with sqlite already03:42
poolieso we have some undelivered work in trunk03:42
pooliewe all agree that actually deploying it will take work03:42
lifelessa) would be equivalent to our current cache story in terms of disk, but higher upfront scans and no benefit cross-branch03:42
lifelessetc03:42
mkanatlifeless: (d) is essentially what codebrowse is doing right now.03:42
lifelessmkanat: I thought a) was03:43
mkanatlifeless: Nope--one file in one directory.03:43
lifelessmwhudson: ^ cache per branch, right ?03:43
mwhudsonthere would be little point in doing (a)03:43
mwhudsonlifeless: currently?03:43
lifelessyes03:43
mwhudsonnot sure actually03:43
* mwhudson checks03:43
mkanatlifeless: At least, AIUI.03:43
pooliemwh, for my education, where are you checking to find out?03:43
mwhudsonpoolie: lib/lp/launchpad_loggerhead/app.py in the launchpad tree03:44
mkanatOh right, launchpad-loggerhead.03:44
pooliemwhudson, i don't have that directory...03:45
mwhudsonlifeless: it's one per branch currently03:45
mwhudsonpoolie: -lp/ sorry03:45
mkanatpoolie: It's a separate project, lp:launchpad-loggerhead03:45
mwhudsonmkanat: not any more it's not03:45
mkanatmwhudson: Oh! didn't know.03:45
lifelessmwhudson: I don't have that directory, for whatever reason03:45
mwhudsonpoolie: lib/launchpad_loggerhead/app.py03:45
lifelessright03:46
lifelessso,  back to facts, we're deployed with (a) today03:46
lifelessshared between two instances03:46
mkanatlifeless: Ahh.03:46
lifelesswith some threads each03:46
mkanatYes, in that case, history_db would be different.03:46
poolieok, and are you saying that historydb is going to be slower in that setup?03:46
mkanatpoolie: No, just different.03:46
mwhudsonlifeless: changing to one cache per <whatever> would be very easy03:47
lifelessjam says initial scan is thrice the time, subsequent operations are faster03:47
lifelessmwhudson: I know, but its all time03:47
lifelessmwhudson: for a maintenance squad without your context03:47
lifelessmwhudson: who will be writing up what they figure out like crazy03:47
mwhudsonlifeless: less time than this conversation has been going on for03:48
mwhudsonlifeless: sure, they are welcome to talk to me :)03:48
lifelessmwhudson: sure feels like it; or are you serious ?03:48
mwhudsonlifeless: sorry, not quite sure i understand03:49
lifelessmwhudson: 'less time than this conversation has been going on for'03:49
mkanatlifeless: I'm pretty sure he was serious.03:49
mwhudsonlifeless: yes, i was serious about that03:49
lifelessok03:49
mwhudsonit's a few lines03:49
lifeless+ a test run :)03:50
lifeless+ qa to make sure it behaves03:50
lifelessanyhow03:50
mwhudsonyeah03:50
mwhudsonsadly, a test run won't make any difference, but yes03:50
lifelesspoolie: so whats your proposal thats different to what I've suggested?03:51
mkanatmwhudson: Where's the on-disk cache setup in app.py? I only see "self.graph_cache = lru_cache.LRUCache(10)".03:51
mwhudsonlifeless: this diff will switch to using a single cache http://pastebin.ubuntu.com/558839/03:51
lifelesspoolie: or are you trying to say I've got a bad axoim or assumption?03:51
poolielifeless: keep both branches as they are03:51
lifelessmkanat: cachepath =03:51
mkanatlifeless: Ah, thanks.03:51
poolieif there's genuinely low fruit in 1.18, do that03:51
pooliefile a bug saying "should upgrade to trunk"03:51
pooliedo that in a safe way, but don't be scared of it03:51
mwhudsonmkanat: the BranchWSGIApp constructor03:51
mkanatmwhudson: Yeah, I found it. :-)03:51
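The choice being weighed in this exchange -- per-branch sqlite caches versus one shared cache for every branch -- can be pictured with a hypothetical sketch; these are illustrative names and paths, not the actual app.py code or the contents of mwhudson's pastebin diff.

    import os

    CACHE_ROOT = '/srv/codebrowse-cache'  # hypothetical location

    def cachepath_per_branch(branch_id):
        # Option (a): one cache directory per branch -- roughly what is
        # deployed today.
        return os.path.join(CACHE_ROOT, str(branch_id))

    def cachepath_shared():
        # Option (d): every branch writes into a single shared cache.
        return CACHE_ROOT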
lifelesspoolie: I'm not scared of it; honestly.03:52
lifelessI'm assessing how long its taken mwhudson to do this in the past03:52
lifelessthe things that have gone wrong in the past03:52
mkanatlifeless: Okay, I'm starting to agree with you more now that I see how codebrowse currently runs.03:52
lifelessand our pipeline of stakeholder requests03:52
mkanatlifeless: I was under the impression that it was doing (d).03:52
poolieit's fine if it stays in the pipeline until it's a priority03:52
lifelessmkanat: god no, it would be flattened in an instant03:52
mkanatlifeless: That's sort of what I thought, yeah. :-)03:53
mwhudsonlifeless: i'm not so sure about that actually03:53
lifelessmwhudson: that was humour03:53
mwhudsonyou might have some locking related fun03:53
mwhudsonok03:53
lifelessI'm pretty sure we do benefit from the caches03:53
mkanatlifeless: That's possibly resolvable by WAL, but that would indeed be infrastructural and integration work for sure.03:53
lifelessbut I'm also sure there is friction in there03:53
lifelesswhich needs time and attention to detail03:54
mkanatlifeless: Okay. Out of curiosity, are you sure because you have tracebacks, or just because you suspect?03:54
lifelesspoolie: I don't think putting it in the pipeline as a given makes sense; saying 'do something to make codebrowse better' is something we should put in the pipeline; one concrete thing we can do there is productionise and deploy history db03:54
lifelessmkanat: have seen the odd thing go by03:55
mkanatlifeless: Okay.03:55
lifelessmkanat: such as sqlite lock contention errors03:55
poolieit's essentially already on the kanban towards the right hand side03:55
lifelesspoolie: I have to disagree.03:55
mkanatlifeless: I think that it would probably be sensible and manageable to do oops fixing on trunk and 1.18 both simultaneously, and then schedule trunk rollout for a few months away.03:55
mkanatOr on ~launchpad-pqm/devel and trunk.03:55
lifelesspoolie: theres a bunch of work been done on it, yes. But no team has been pushing it forward; its a new-situation for whomever comes along.03:56
poolielifeless, because... it's not that far to the right if deployment hasn't been thoroughly considered?03:56
mkanatlifeless: If you limit the changes to just oops-fixing until trunk is ready, I think you'd be fine.03:56
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=fixing03:56
poolieif jam's willing (not too scarred by lp-serve) we could ask him to do it with help from mwh and it would not be new03:56
lifelesspoolie: that, plus what I said, plus the work done hasn't been done with a view to the whole lp story around this; its been somewhat siloed off03:57
lifelesspoolie: if you want to do that, that would be awesome; don't underestimate it though - I mean, you know how gung ho I am about stuff generally, right ?03:57
poolieheh03:58
lifelesspoolie: this is *it can be done and there is a pie on the line collins* being concerned.03:58
poolie:)03:58
mkanatlifeless: Given codebrowse's history, I don't blame you.03:58
poolieso in a way i would be ok with saying "this is really not a good fit, let's throw it out"03:58
mkanatlifeless: Also given the fact that right now, it's far more stable than it's ever been.03:58
poolieideally, if we had a specific reason for that03:58
lifelessso if you want to do that, say so; if we leave trunk as-is, I'll setup a separate project to get code reviews and bugs visible to the lp team until we're converged.03:58
pooliebut, if it's done and we don't think there's anything actually wrong, we just find it hard to deploy03:59
pooliethat just seems like a bad reason03:59
poolieif true, it seems to mean some bugs can just never be fixed :(04:00
lifelesspoolie: we have a similar set of reliability and robustness concerns around soyuz04:00
pooliedo we ultimately want to get historydb running or not?04:01
mkanatpoolie: I would think so.04:01
pooliei'd be fine with deciding that it was just a spike experiment and we want to take another go at it, if that's what we really think04:01
lifelesscompared to loggerhead, which has - I think - one person familiar with it still on the lp team, soyuz has 4 folk familiar with it, so we can actually react quickly to issues04:01
lifelessfrom my perspective, its an experiment until either a) a feature squad is given codebrowse improvements as a card, and agrees that historydb is the way forward, or b) someone else does the work necessary to get it live04:02
mkanatIt's too bad I don't have enough time left to stick around and help with the rollout.04:02
lifelessI can't prejudge what the feature squad engineers will decide makes sense04:02
lifeless[well I can, but I'd hate it if someone did that to me, so I don't do it to them]04:03
pooliemkanat, it seems like organizing rollouts is a pretty hard thing to do unless you are actually full time on the lp team04:03
mkanatlifeless: That's understandable. That might just be re-having a conversation that's already been had, though, from back when jam originally proposed history_db.04:03
lifelessmkanat: it may be a nobrainer04:03
mkanatlifeless: Sure. But I understand that the specific application to LP is still something to discuss.04:03
lifelessI'm just being clear about what I can commit to confidently.04:03
mkanatlifeless: Sure, I totally understand.04:03
mkanatThe primary thing that a designer has to deal with is the fact of uncertainty.04:03
lifelessyup :)04:04
poolieare there any concerns about trunk other than that the deployment may be very bumpy (and i realize that may be a very large concern)04:04
mkanatYeah. It's something that a lot of people don't understand. They try to predict an unpredictable future. :-)04:04
mkanatpoolie: Unpredictable problems that I can't imagine now, based on the fact that there have been a lot of changes.04:04
lifelesspoolie: I haven't done a rev by revision review because I think thats a waste of time.04:04
poolies/bumpy/& laborious whatever04:04
lifelesspoolie: once the card size is assessed as being > maintenance bucket, it goes into the queue.04:04
lifelesspoolie: history db alone is enough to do that for me04:05
poolieok04:05
poolieso i propose to file a bug saying that historydb ought to be tried out and deployed04:05
poolieif in the course of trying it out, it seems that it is not a good solution, or too much work, or whatever04:06
pooliewe can mark the bug wontfix and then pull it out of loggerhead's trunk04:06
mkanatpoolie: Where would it be tried out, though? There's no edge.04:06
lifelesswho will do the trying out ?04:06
lifelessmkanat: we do have bazaar.qastaging.launchpad.net04:06
mkanatlifeless: Oh, okay.04:06
lifelessthats in the qa pipeline04:07
mkanatlifeless: Which...is a redirect loop.04:07
pooliedo you mean 'who will work on the bug' or 'how will we test it'?04:07
lifelesspoolie: who, now how.04:07
lifelesss/now/not/04:07
pooliedoes that matter?04:08
poolieit's a thing we can do to make lp better04:08
poolieeither a squad will do it, or someone else04:08
lifelesswell04:08
lifelessfrom my perspective this is a feature bug as I've said many times before04:08
lifelessthat means it won't get a maintenance squad timeslice04:08
mkanatlifeless: You know...there is another option, here.04:09
mkanatlifeless: You could just live with the OOPSes until you switch to trunk.04:09
lifelessI'm not clear what you want to have happen, given the constraints we have.04:09
pooliewe are getting timeouts on some pages04:09
lifelessmkanat: I'm particularly unhappy with that idea because they generally mean a user has had a bad experience04:09
mkanatlifeless: That's true. BTW, do you have analysis of them yet?04:10
poolieit seems at least possible that the most efficient way to fix them is to move to loggerhead trunk04:10
lifelessmkanat: yes, the connection reset one is 16K a day04:10
poolieor, there may be other fixes04:10
lifelessmkanat: the others are largely below the fold04:10
mkanatlifeless: Oh! That just happens when somebody closes their browser.04:10
lifelessmkanat: yes04:10
mkanatlifeless: Okay. So that would be the primary one to fix, and the others could wait.04:10
pooliereally?04:11
lifelessmkanat: yes04:11
mkanatlifeless: Okay. That would be easy to fix both on trunk and 1.18.04:11
pooliei mean obviously, it can happen there, but is that really happening 16k/day?04:11
lifelessnext highest one is04:11
lifeless 61 error: [Errno 104] Connection reset by peer04:11
lifeless  31 http://0.0.0.0:8081/%7Evcs-imports/busybox/main/changes (Unknown)04:11
lifeless       OOPS-1852CBB1307, OOPS-1852CBB1810, OOPS-1852CBB188, OOPS-1852CBB1990, OOPS-1852CBB262604:11
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB130704:11
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB181004:11
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB18804:11
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB199004:11
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB262604:11
lifelesspoolie: its in the daily reports04:11
mkanatlifeless: That's the stability-checking script that the losas run.04:11
pooliewhat i mean is, there are other things that can generate econnreset04:11
lifelesspoolie: the cause may be misanalysed, but04:12
poolieor eof04:12
lifeless15332 error: [Errno 32] Broken pipe04:12
lifeless   Bug: https://launchpad.net/bugs/70132904:12
lifeless   Other: 15332 Robots: 0  Local: 004:12
lifeless   7604 http://0.0.0.0:8080/%7Evcs-imports/busybox/main/changes (Unknown)04:12
lifeless       OOPS-1852CBA1, OOPS-1852CBA10, OOPS-1852CBA100, OOPS-1852CBA1000, OOPS-1852CBA100104:12
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA104:12
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1004:12
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA10004:12
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA100004:12
ubot5https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA100104:12
mkanatpoolie: It would happen with bots.04:12
lifelessso we see 16K +- 2K a day of errno 32, and < 100 of errno 10404:13
lifelessfixing the 16K one *may* expose some timeout cases04:13
lifelessthis is another reason to just stay focused on stabilisation04:13
* mkanat nods.04:13
lifelesspoolie: so it seems to me you want to push this branch forward04:13
poolieso it seems likely to me that the epipe error is loggerhead seeing the connection closed by haproxy04:13
pooliewe don't really know what caused haproxy to close it04:14
lifelesswhat I'll do then, is make a new project, pull the lp loggerhead stuff into that, which will get changes to that branch under review and bugs per the lp policy a home04:14
mkanatpoolie: Yeah, that sounds likely right.04:14
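One way the broken-pipe OOPSes discussed above could be silenced is to treat EPIPE/ECONNRESET as expected while streaming a response. This is a generic sketch of the idea only, not the actual fix for bug 701329.

    import errno
    import socket

    def write_response(out, chunks):
        try:
            for chunk in chunks:
                out.write(chunk)
        except (socket.error, IOError) as e:
            # The client (or haproxy) went away mid-response; not worth an OOPS.
            if e.errno in (errno.EPIPE, errno.ECONNRESET):
                return
            raise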
poolie(incidentally log correlation stuff would be useful here, but maybe just less(1) is enough)04:14
lifelessif and when trunk gets into lp we can consolidate the projects again04:14
poolielifeless, yeah, i would like to push it forward04:14
lifelessI've spent enough time on this discussion, I'm going to stop now. We're not going forward, and the bits I'm describing don't affect anyone else.04:14
poolie(copious time and all that )04:14
pooliewell, i'm sorry it was frustrating04:15
pooliei think it was actually useful to get a few things explicit where people had different ideas in their head04:15
mkanatlifeless: I think that just fixing the oopses on the LP branch seems a reasonable solution.04:15
lifelessthe frustrating bit is the handwave around resources04:15
lifelessmkanat: thats what I've been asking for! :)04:15
lifelessmkanat: I've been *offering* lp developers doing maintenance and triage on the main loggerhead project04:15
pooliewell, i've watched/tried to help jam spend ~6m getting his work deployed04:15
lifelessbut not with the unknown in trunk04:15
mkanatlifeless: Yeah, that seems fine. I don't even think that most of those fixes need to go into loggerhead trunk.04:16
poolieso i may not be pessimistic _enough_, but i think i'm at least fairly pessimistic04:16
mkanatlifeless: At least, the oops fixes.04:16
lifelessmkanat: I doubt that that will happen with trunk as different as it is :(.04:16
lifelesswe'll see04:16
mkanatlifeless: Yeah. As long as you just stick to OOPS fixing, though, I don't think trunk would be missing out.04:17
lifelessmkanat: it builds up debt04:17
mkanatlifeless: And then when the feature squad has time in six months or so, merging them back in as appropriate could be done.04:17
mkanatlifeless: True.04:17
lifelessmkanat: at a certain point trunk will then be unfeasible to merge04:17
mkanatlifeless: Well, hopefully we'd just be talking primarily about two fixes.04:17
lifelessI don't know if that will happen or not; I hope it doesn't.04:17
mkanatlifeless: Which should be small patches.04:17
lifelessmy bigger concern is community04:18
pooliei will at least have a look at bug 70132904:18
lifelessnoone will be maintaining loggerhead per se04:18
ubot5Launchpad bug 701329 in Launchpad itself "error: [Errno 32] Broken pipe" [Critical,Triaged] https://launchpad.net/bugs/70132904:18
mkanatlifeless: Yeah. And there are outstanding review requests.04:18
lifelessright04:18
lifelesswhich I was aiming to solve by cunning :)04:18
poolieand the other04:18
mkanatlifeless: Ahhh, I see. :-)04:18
lifelessmkanat: by having lp developers maintain the project, the various queues would be driven to zero: bug triage, code review04:18
mkanatlifeless: Right.04:19
lifelessmkanat: but doing that is contingent on it being straight forward, which an unusable trunk is not. (Yes, I know the nuance there)04:19
* mkanat nods.04:19
mkanatlifeless: I think we'll just have to delay that for 6-9 months.04:19
lifelessso, i think this is a bit sad. Shrug.04:19
mkanatlifeless: And then at that point it can happen.04:19
pooliecanonical put time & money into historydb04:20
pooliei really think we should either clearly decide it was wrong, or finish it04:20
mkanatpoolie: I think we're all in agreement about that.04:20
mkanatpoolie: It's just a question of when it will happen.04:20
lifelessare we not allowed to say 'it was a spike, its not finished, put it to the side and perhaps later' ?04:21
lifelessthats what francis was saying04:21
mkanatlifeless: I think unmerging it would revert a lot of good refactoring that happened along with it.04:21
lifelessmkanat: I agree, I think thats a shame. But perhaps better than 6-9 months without maintainer?04:21
mkanatlifeless: Hmm.04:22
poolielifeless, well, yes, we're allowed, but04:22
lifelessbesides, its not lost per se, its mergable if/when someone decides they have the time.04:22
pooliethings being hard to change doesn't seem like a very satisfying reason to do that04:22
lifelessthis is all about resourcing.04:22
mkanatlifeless: I think 6-9 months without a maintainer would be no different than it was in 2010.04:22
pooliei'd be happier to say that if it turned out the design was actually bad04:22
lifelessits a sunk cost04:22
lifelessit wasn't resourced with a budget up front04:23
poolieit's true it's a sunk cost04:23
lifelessspm: 15:55 < mkanat> lifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore04:23
lifeless15:55 < mkanat> lifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css04:23
lifeless15:55 < lifeless> what css should it have?04:23
pooliethe question is, now that the code is there, does the cost of deploying it exceed the benefit we would get from deploying it?04:23
lifeless15:56 < mkanat> lifeless: That CSS exists in the actual branch.04:23
poolieif deployment is very hard that may well be true04:23
lifelessspm: can you look at how the /static/ path is accessed and see if perhaps we have something bong going on?04:23
spmokidoki.04:24
lifelesspoolie: I don't see that thats the question. its a question sure, but *the* question is, where does completing that work fall in the queue for the various folk that might do it.04:24
mkanatlifeless, poolie: My vote is to fix the two OOPSes on ~launchpad-pqm/loggerhead/devel, then wait 6-9 months, stabilize trunk, and then have the LP team maintain trunk.04:24
lifelesspoolie: its 6-9 months away for LP; even if it was the very best thing since sliced bread it would still be 6-9 months away04:25
spmactually... I might even be able to hazard a guess as to what/where/why this fault exists....04:25
lifelesspoolie: *unless* it becomes a stakeholder priority, *or* is small enough for a maintenance squad to handle.04:25
mkanatlifeless: I don't think it's a terrific emergency.04:25
lifelesspoolie: I've not argued its wrong or bad or whatever. But polish and doing it well matters a great deal.04:25
mkanatlifeless: codebrowse sucks far less than it used to, already.04:25
pooliei guess the axiom i have here is that zero timeouts is a priority, and that historydb is said to be the best way to get there04:26
lifelessbecause if we get it wrong the costs are huge04:26
pooliein that case it should not be shelved for so long04:26
poolieperhaps there are less-risky ways to get there04:26
mkanatpoolie: Yeah. I think that "priority" in this case simply means 6-9 months.04:26
lifelesszero timeouts are a policy04:26
mkanatpoolie: I think the least risky way would be to hire somebody to work on loggerhead.04:26
poolieeven just re-doing similar work but with stepwise deployments are a good idea04:26
lifelessand I'm keen on those04:26
lifelessthings that are too big go into the feature queue to do, even timeouts.04:26
poolies/are a good idea/could be a safer course04:26
lifelesswhich this falls into IMNSHO04:27
mkanatlifeless: Are you with me on the "fix the two oopses then maintain trunk later" plan?04:27
lifelessmkanat: I"m with you on fix the two oopses04:28
lifelessmkanat: I hesitate to commit to the maintain trunk later plan, because it presupposes the outcome of a feature squad looking at this stuff04:28
mkanatlifeless: Okay. In that case, "roll out history db in 6-9 months and then see how things go from there".04:29
lifeless'assign a feature squad to work on loggerhead performance in 6-9 months and see what happens'04:29
mkanatlifeless: Sure, sounds great. :-)04:29
lifelesspending flacoste and jml agreeing on priority/timeline04:29
lifelessone obvious tactical move for such a squad is to pick this up (if noone has done it in spare / volunteer time in the interim)04:30
poolieok, so now back to the big one04:34
pooliein that plan are we ok to leave lp on 1.18 and trunk a bit free-floating04:34
mkanatpoolie: Yeah.04:34
poolieor shall we say what's in trunk was an experiment that's semi-abandoned, and move it aside?04:34
lifelessthats to you guys04:34
lifelessif you want maintainers from the lp team, no. If you don't care about that (indefinitely), yes.04:35
lifelessIt *was* an experiment and in the absence of maintainers, sure it must be considered abandoned.04:35
mkanatpoolie: I don't think it should be moved aside.04:37
mkanatpoolie: I suspect that in 6-9 months, the best thing for LP's feature team to do will be to stabilize and roll out trunk.04:38
lifelessI don't have a particular investment in either answer; I have a strong bias towards having things be maintained when possible, and code that noone is looking after be clearly abandoned.04:38
lifelessbut its not 'my' project04:38
mkanatlifeless: It was effectively abandoned before I started working on it, we'd just be putting it back into that state.04:38
mkanatWell, besides the work that jam did on history_db, which was substantial. But nobody was fixing bugs.04:39
lifelessmkanat: which is a shame, no ?04:39
mkanatlifeless: It is, but it's also just a fact, and I'm okay with waiting 6-9 months for it to be resolved.04:39
mkanatlifeless: I would love to keep working on it; that would be the most ideal solution given infinite resources. :-)04:40
lifelessjust for clarity, you think its better for users to have a nonresponsive project with some cool stuff unreleased in trunk than a responsive project with a 'future' branch earmarked for a later examination ?04:40
lifelessmkanat: I'm assuming, perhaps incorrectly, that you're not going to be doing regular maintenance in your spare time.04:41
mkanatlifeless: I think that trunk is probably stable and usable by everybody except LP.04:41
mkanatlifeless: Yeah, unfortunately because I also maintain Bugzilla and some other stuff, I don't have a lot of spare bandwidth to do other free projects.04:41
lifelessmkanat: I didn't say anything about the code quality04:41
lifelessmkanat: or stability.04:41
lifelessmkanat: which is totally understandable... I have the same quandary myself.04:42
mkanatlifeless: I think it honestly wouldn't be that hard, if you're just doing simple OOPS fixes, to apply them both to trunk and the LP branch. But I also suspect that some of those OOPS fixes will not be quite so wanted on trunk.04:43
mkanatlifeless: I could fix the issues and do the backports myself, with one more block of hours.04:43
lifelessmkanat: we really want the lp team to get stuck into all the crannies04:44
mkanatlifeless: Okay. If that's the immovable object, then I think the *simplest* solution is the one that we have proposed and agreed on.04:44
lifelessthe previous silo structure led to the state loggerhead was in 2010; bringing you in had great results but has left loggerhead out in the cold04:44
mkanatlifeless: I would say that loggerhead is a separately-maintained modular dependency, not a silo.04:45
lifelesswhat I want is the following:04:45
lifelessmkanat: 'bugs', 'code', 'soyuz' etc - those were the silos04:45
lifelessmkanat: lp team substructure04:45
lifelessmkanat: we've just reorganised into squads with a project wide remit04:46
mkanatlifeless: Yeah...that's a conversation that you and I already had. :-)04:46
lifelessright ;)04:46
lifelessso04:46
lifelesswhat I want is:04:46
mkanatlifeless: (BTW, I think that having developers flexible is good, but I think the individual components should still have assigned individual designers.)04:46
lifeless - a place for changes to the LP loggerhead to go for code review that code.launchpad.net/launchpad-project/+activereviews will pick up04:46
lifeless - ditto bug reports for bugs.launchpad.net/launchpad-project/+bugs04:47
lifeless - a clear and simple policy for what to do with changes to the project vis-a-vis upstream for developers. One of 'nothing', 'land upstream and do a release', 'file a patch upstream will review it'04:48
lifelessif those three constraints are met, I will be satisfied. It may not be the globally best thing for canonical / bazaar / loggerhead, but I'll be confident that it will get acted on.04:48
lifelessif any of them are not met, I'm confident it will become a spotty mess.04:49
poolieit's already in /bazaar04:49
pooliewe read code reviews etc04:49
poolieif lp developers put up changes i'm sure they will be piloted04:49
mkanatlifeless: That sounds sensible. So I suppose what you could do is develop against launchpad-pqm/devel, and submit merge proposals against the trunk, but not have anybody assigned to do them.04:49
lifelessthere are multiple code branchces in that project that are not landed and approved04:49
mkanatlifeless: Then at least when somebody was interested, they could go and look at the active MPs against trunk.04:49
lifelessmkanat: with noone on the other end, that seems a bit noddy04:50
mkanatlifeless: It's at least a method of making a queue to handle for the future.04:50
poolielifeless,  i have actually read them and they're not easily landable04:50
pooliethey could be bumped to wip04:50
lifelessthen they probably should be04:50
lifelessjam: https://code.launchpad.net/~mnordhoff/loggerhead/history_db/+merge/2538104:50
mkanatlifeless: Also, if you make the target reviewer ~bzr, eventually somebody will come along and do them.04:51
lifelessmkanat: well, I'm assuming (because these are there) that that isn't the case.04:51
mkanatlifeless: We could say that the Bzr team maintains loggerhead's upstream (without anybody actually assigned to do the work) and Launchpad maintains the launchpad-pqm branch.04:51
mkanatlifeless: Oh, the reason those are there is that the default reviewer for loggerhead is wrong.04:51
mkanatlifeless: It should be ~bzr instead of ~loggerhead-team, or whatever it is now.04:51
pooliei think i added canonical-bazaar to that team04:52
poolieor at any rate we can04:52
lifelessso04:52
lifelesscorollaries04:52
lifelessbecause lp is limited04:52
lifelessI need a project in launchpad-project04:52
mkanatlifeless: Ultimately I think we're just talking about what will happen for the next 6-9 months.04:52
lifelessthat can be a new one, launchpad-loggerhead repurposed, or loggerhead itself.04:52
lifelessmkanat: I don't see this as timeboxed, unless we get rid of loggerhead04:52
mkanatlifeless: I can't imagine the feature squad deciding that anything is that valuable besides stabilizing and deploying trunk.04:53
mkanatlifeless: At which point the LP branch and trunk will become the same.04:53
lifelessmkanat: sure04:54
lifelessbut they could still work off of a branch and let trunk be approximately unmaintained04:54
lifelessor they could be doing releases04:54
mkanatlifeless: Sure. But either way, there's no difference between the plans, after the 6-9 month timeframe.04:55
lifelesssure there is04:55
mkanatlifeless: You're positing the possibility that history_db never becomes trunk?04:56
lifelessyes04:56
lifelessand04:56
lifelessthat perhaps bazaar will maintain loggerhead04:56
mkanatlifeless: Okay. So that bzr will maintain loggerhead seems somewhat likely.04:57
mkanatlifeless: That history_db will never be trunk seems very unlikely.04:57
lifelessor that what launchpad needs will be so unsuitable for trunk that trunk maintainers reject it04:57
lifelessmy crystal ball is broken04:57
mkanatlifeless: Sure, but I have a bit more intimate knowledge of loggerhead and the problems it's faced, so I feel fairly confident in these predictions.04:57
mkanatlifeless: Here's another possibility.04:58
mkanatlifeless: You could merge ~launchpad-pqm/devel into 1.18.04:58
mkanatlifeless: You could maintain 1.18 and do regular releases of it.04:58
mkanatlifeless: Then LP would maintain the stable branch and bzr would maintain the trunk.05:00
mkanatlifeless: That seems like a pretty reasonable solution for everybody.05:01
lifelessmkanat: AIUI bzr don't have the cycles or domain experience to maintain loggerhead comfortably. IMBW.05:01
mkanatlifeless: Ah, I think many of the devs have some casual knowledge of it.05:01
pooliei think we're as well placed as lp, aside from being smaller05:02
pooliei don't know05:02
pooliei think you said you felt lp devs would be unwilling to work on it unless trunk was moved away05:02
mkanatYeah, poolie reviewed most of my code, jam did history_db, and some other folks submit patches from time to time and have done reviews.05:02
lifelessI was extrapolating from my third constraint05:02
lifelesspoolie: ^05:02
pooliei'm not sure i understand it05:03
poolieis it "where do i send my patches?" or "what do i do with third-party patches?"?05:03
lifelesshow should I approach doing my changes05:04
poolieunderstand lp's running 1.18; do changes off that; ask someone experienced to review them05:04
poolieit doesn't seem particularly harder than requesting changes in lazr.restful or bzr or whatever05:04
poolies//proposing05:05
lifelesslps review and bug ui lets us down here, but sure05:05
lifelessI don't think you can propose merges cross-project05:06
lifelesspoolie: does patch pilot work off of https://code.launchpad.net/bazaar/+activereviews ?05:07
pooliewow05:07
poolieobviously not :)05:07
pooliebut, i was contemplating that last week05:08
lifelessspm: any joy?05:08
lifelesspoolie: I think you need to decide how you want to hang it together; if you want to maintain loggerhead - great, we'll send you patches, and bugs, and so on.05:10
lifelesspoolie: I understood you to be focusing on udd + treading water05:10
lifelesspoolie: so it seems a bit inconsistent to me for you also to be picking up maintenance of this, which was effectively unmaintained for quite some time.05:10
lifelesspoolie: but its up to you05:10
mkanatlifeless: I think loggerhead is an important part of bzr adoption.05:11
lifelesspoolie: I just don't want to be held captive to this outstanding work, and want a clear answer whether you would like the lp team to pickup maintenance (subject to the constraints I'm operating under)05:11
lifelesspoolie: I feel like every time we reach an agreement, its snatched away one email later05:12
spmlifeless: too many other things exploding atm to get a look at beyond logging into the likely problematic server05:12
lifelessspm: should we rt it ?05:12
lifelessspm: or would that just be throwing hay into a barn fire?05:12
spmwell, aiui (need to confirm) static files are served off crowberry directly. if guava has been updated; yet not crowberry, then this sort of mismatch seems possible.05:13
lifelessspm: yeah, thats what I'd expect needs checking05:13
pooliei just want to get it to improve in the most efficient way possible05:14
pooliei have been doing reviews on it05:14
poolieas have others05:14
lifelesswe've spent 2 hours of three people - 6 man hours discussing the most efficient way.05:14
mkanatpoolie, lifeless: I'll leave you guys to decide this. I think having bzr maintain trunk and LP maintain 1.18 is probably a good idea.05:14
mkanatI'm out for the night, now. :-) Goodnight!05:17
poolienight05:17
poolie:(05:17
mkanatpoolie: Did you want me to stick around?05:17
poolieno, go05:18
pooliei'm just annoyed this is getting stuck05:18
mkanatpoolie: Okay.05:18
mkanatpoolie: It's only stuck if you disagree that bzr should maintain trunk.05:18
mkanatAnyhow, I'm out. :-)05:19
mkanatNight!05:19
poolielifeless, do you think having bzr maintain trunk (as we have been, at a low level) is feasible?05:21
pooliei guess that still leaves launchpad bugs meaning it's hard for lp devs to see the queue05:21
lifelesswell05:24
lifelessas I said05:24
lifelessI need an LP queue05:24
lifelessfor lp devs to review lp changes05:24
lifelessand ditto bugs05:24
pooliehow about if we just leave them where they are05:28
poolieand if lp developers are not getting reasonable turn around on their reviews, we deal with that?05:28
poolieso far, bzr people have been doing reviews, and lp people have not been proposing them05:28
lifelesspoolie: I don't understand05:31
lifelesslp changes to the lp loggerhead will be reviewed by lp devs05:31
lifelessif they go upstream, if thats desired and there is an upstream, then having reasonable turnaround is indeed important05:32
poolieok, so you stated your constraints; what are mine?05:41
pooliei think it's very unfortunate to be letting historydb bitrot because nobody will schedule time to deploy it05:41
pooliebut that is not really a constraint05:42
pooliei am also concerned that we will make these rearrangements and then nobody will actually work on it, therefore leaving things actually worse off05:42
poolieagain, not a constraint05:42
spmlifeless: so yes. /static is served off crowberry directly. both http or https. I'm guessing we have revno X on crowberry for LH, and X + Y on guava - hence the mismatch.06:24
lifelessspm: but loggerhead runs on guava?06:30
spmyup06:31
lifelessI can has fix?06:31
spmsorry, I don't understand?06:31
spmwe fixed it this way to stop codebrowse being overwhelmed serving static content06:31
lifelesscan you not serve it staticly from guava?06:31
spmwe did. it was bad.06:31
lifelessoh06:32
lifelessthat surprises me06:32
lifelessserving it out of loggerhead would be poor06:32
lifelessbut apache on guava?06:32
lifelessspm: are you sure it was static on guava, not being handled by lh ?06:32
spmit was being handled by lh on guava. there is no apache on guava. ?06:33
spmI'd suggest apache on guava be unwise - just to solve a css file woe.06:33
lifelessok06:34
lifelessI had no idea there wasn't one there06:34
spmif you give me a bit I may even be able to enunciate why :-)06:34
lifelesscan you check the revno's ?06:34
lifelessspm: is there an LP tree on crowberry we can point into instead of the one we do now, that would be more up to date?06:34
spmsourcecode/loggerhead/loggerhead/static <== guava is 178, crowberry is 176.06:36
spmrevno 12263 vs 1217706:37
lifelessand is there a view.ss on guava?06:37
lifelessview.css06:37
spmahhh. codebounce is in nodowntime.06:37
lifelessspm: so, we need an additional nodowntime deploy to a dir on crowberry, and apache pointing at that.06:38
lifeless?06:38
spmI wouldn't call codehost a nodowntime deplot?06:38
spmdeply too06:38
spmdeploy 306:38
lifelessspm: (after we confirm a view.css exists on guava)06:38
spm-rw-r--r-- 2 loggerhead loggerhead 519 2011-01-14 11:05 ./css/view.css <== guava06:39
lifelessspm: what I mean is06:40
lifelesscodehosting cannot be nodowntime yet (rt something or other)06:40
lifelesscan we have a separate lp tree on crowberry06:41
lifelesswhich would be*in* the nodowntime set06:41
lifelessand apache would be pointed into that tree06:41
spmjust for static files? Ahhh I see. clever.06:41
lifelessor apache on guava. your call :P06:41
spmheh06:41
spmyeah - I like that idea. bit ugly and heavyweight, but the alts are worse.06:42
spmlifeless: oki. new 'launchpad-static' now on codehost; about to switch the softlinks to point into there....06:52
spmand live06:53
spmhttp://bazaar.launchpad.net/static/css/view.css <== looks good06:55
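The arrangement spm describes -- Apache on crowberry serving /static out of a separately updated 'launchpad-static' tree -- would look something like the sketch below; the filesystem paths are hypothetical.

    # Serve loggerhead's static assets from the nodowntime-updated tree,
    # independent of the loggerhead code running on guava.
    Alias /static /srv/launchpad-static/loggerhead/static
    <Directory /srv/launchpad-static/loggerhead/static>
        Order allow,deny
        Allow from all
    </Directory>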
lifelessspm: thanks!07:49
jelmer'morning08:31
damdhi.  i'm working on a bzr branch here where i have revisions up to 4 and now i've made uncommitted changes to r4.  now i'd like to sort of "throw away" the changes in r4 and make the current state the new r4.  how would i do that?09:23
damdi.e. i already have an r4, but i'd like to replace that r4 with the current uncommitted state which builds on r4.09:31
jelmerdamd: 'bzr uncommit'09:32
jelmerfollowed by a new commit09:32
damdokay, so my changes won't be lost that way?09:32
damdthey weren't, great, thanks09:33
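Spelled out, the sequence jelmer suggests for replacing the tip revision (r4 in damd's case) is:

    $ bzr uncommit             # drops r4 from history; the working tree keeps its changes
    $ bzr commit -m "new r4"   # commits the old r4 content plus the new edits as the new tip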
damdi just did a "bzr pull" and there was a conflict in a file.  now i've fixed that conflict.  so is everything swell now?  i'm worried that maybe i have to take some extra action to prevent e.g. a "merge" commit from appearing in the log once i commit, like the stuff that happens in mercurial unless you rebase.09:39
jelmerdamd: you'll have to run "bzr resolved" to let bzr know you've resolved the conflict (it will tell you this when you try to commit)09:44
damdoh, okay, thanks09:45
jelmerdamd: we only create merge commits if you've actually done a 'bzr merge' invocation. 'bzr pull' will never create a merge commit09:45
damdgreat!09:45
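A rough sketch of that sequence (the file name is hypothetical; `resolved` is an alias for `resolve`):
    $ bzr pull                 # reports e.g. "Text conflict in foo.c"
    $ $EDITOR foo.c            # remove the <<<<<<< / ======= / >>>>>>> markers
    $ bzr resolve foo.c        # tell bzr the conflict has been dealt with
    $ bzr commit -m "..."      # an ordinary commit; no merge revision is created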
maxbThe behaviour isn't unlike Mercurial, for that matter09:56
Takisn't it not unlike mercurial?09:56
maxbI suppose the difference is that "bzr pull" complains, whereas "hg pull" just reports (+1 heads) and leaves you with multiple local heads09:57
damdif i remember correctly, in mercurial if you hg pull you need to make a "merge commit"09:58
damdto avoid that you "hg pull --rebase"09:58
damdbut then again, i never bothered learning it properly09:58
Takyou only need to do a merge commit if you have to do a merge09:58
damdyes, of course09:58
maxbIt sounds to me that you are overly focused on avoiding merge commits09:59
damdnot overly focused, it's just that virtually every project i've worked on urges you not to make merge commits10:00
lifelessin what VCS10:00
maxbdamd: I have noticed this too. Have any of those projects given a decent explanation as to why they urge that?10:00
damdgit (conkeror), hg (work) and now bzr (emacs)10:00
damdmaxb: ugly logs basically10:00
maxbalso git (postgresql)10:00
damdno, i mean i worked on conkeror10:01
Takmost git-using projects hate merge commits ime10:01
TakI haven't had any complaints from projects using other vcss10:01
damdthe only merge commits i find in emacs are the ones where they merge emacs-23 (bug fixes) to emacs-trunk (new features)10:01
maxbSo they don't have feature branches?10:02
damdthey have those too, but those merges are very rare10:02
Takbut that's also because git makes a merge commit for EVERYTHING10:02
maxb?10:03
jelmerTak: it only makes a merge commit if a fast-forward isn't possible10:04
maxbmuch like bzr10:07
* Tak shrug10:09
jelmermaxb: bzr pull never makes a merge commit, bzr merge always makes a merge commit10:09
TakI feel like it happens a lot more often for me with git10:09
jelmermaxb: or rather, sets the working tree up for a merge commit10:10
jelmermaxb: that's different from the way git pull works10:10
maxbOK, perhaps it is more accurate to say that the default way git pull works creates merge commits in much the same scenarios as a user of bzr would10:10
jelmermaxb: I don't think that's true. In a lot of situations it's surprising that "git pull" creates a merge commit.10:11
maxbReally? My understanding was that git pull will create a merge commit in exactly the same situations that bzr pull will say "branches diverged, you need to merge"10:12
jelmermaxb: that assumes that a merge is actually what the user wants, which (at least I found) is often not true.10:13
jelmermaxb: fwiw bzr can do the same thing using 'bzr merge --pull'10:13
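For reference, a rough sketch of what that option does (branch path hypothetical):
    $ bzr merge --pull ../other
    # behaves like 'bzr pull' (no commit needed) when this branch has no
    # revisions the other side lacks; otherwise it falls back to a normal
    # merge and leaves the tree ready for a merge commit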
quicksilverjelmer: bzr merge --pull has the disadvantage of replacing your mainline revision numbering with the other branch's revision numbers, doesn't it?10:19
maxbNo, it doesn't do that10:20
maxbOh10:20
maxbI suppose it could10:20
jelmerquicksilver: yes, but that's a problem with pull in general if the other branch has merged your tip and has additional revisions10:20
glendoes bzr support git like merges, so that original commit is preserved from merge?10:20
quicksilverjelmer: yup.10:20
quicksilverjelmer: just checking I understood :) We go to some lengths to keep monotonically increasing revision numbers on our branches10:21
jelmerglen: yes10:21
glenjelmer: any hints? :)10:21
jelmerglen: how do you mean?10:21
maxbquicksilver: You know about append_revisions_only, right?10:21
quicksilvermaxb: nope.10:21
jelmerquicksilver: in general I think landing feature branches as merges is a good idea. it nicely groups them10:21
quicksilverjelmer: agreed.10:22
maxbquicksilver: It's a per branch setting to say "never change my existing mainline history"10:22
glenjelmer: well. i did bzr merge ../path/to/branch, and when i did bzr commit, i had to type the commit message again, but i'd like the commit messages that i merged to be added automatically; currently my commit after the merge looks like i did the commit myself10:22
quicksilvermaxb: ah, interesting. Would stop a class of stupid error.10:22
maxbThat's the point of it, yes10:23
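For reference, a minimal sketch of enabling it on one branch; the setting lives in the branch's branch.conf:
    $ echo "append_revisions_only = True" >> .bzr/branch/branch.conf
    # from then on, operations that would rewrite the existing mainline
    # (e.g. an overwriting pull) are refused instead of renumbering revisions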
quicksilvermaxb: still, it's never a problem to pivot the branch by repulling the right revision and push --overwriting it.10:23
jelmerglen: The merge has a reference to the revision that was merged10:23
quicksilvermaxb: the worst thing that happens is redmine gets terribly confused10:23
glenjelmer: do you know is that merge rev visible in launchpad too?10:23
quicksilver(it assumes, and caches, monotonically increasing revisions)10:23
jelmerglen: much like in git, which also doesn't copy commit messages10:23
maxbquicksilver: Huh. Sounds like its bzr integration wasn't exactly well thought through :-/10:24
jelmerglen: launchpad's main branch page just shows the mainline revisions (so the merge revisions, not the revisions that were merged by those merge revisions)10:24
quicksilvermaxb: well, let's call it a known deficiency ;)10:24
glenjelmer: ah, i see now. it's only in bazaar browser (http://bazaar.launchpad.net)10:25
quicksilvermaxb: we use loggerhead as well10:25
quicksilvermaxb: but we use redmine for issue tracking and it's convenient for the issue tracker to be able to see the repo10:25
maxbquicksilver: Of course, if redmine auto-set append_revisions_only on all of the branches managed under it, it *could* assume that :-)10:25
maxbjelmer: Hi, can we discuss bug 707170? My thought was that for the branches to be in the state they are, some software component must have done something wrong, and bzr-git seemed like the likely culprit10:27
ubot5Launchpad bug 707170 in Bazaar Git Plugin ""Cannot add revision(s) to repository: missing text keys" branching lp:dulwich and alioth packaging branch into same shared repo" [Undecided,Incomplete] https://launchpad.net/bugs/70717010:27
jelmermaxb: can you explain why though? bzr-git didn't touch the broken commit at all10:33
maxbjelmer: Perhaps I'm misunderstanding, but I was interpreting that error as there being one revision-id that has bzr-style text-revision ids for some files in one branch, but bzr-git ones in the other11:12
maxbSo unless bzr core managed to rewrite the text-rev ids, the problem must be in bzr core?11:12
maxberm11:12
maxb* So unless bzr core managed to rewrite the text-rev ids, the problem must be in bzr-git?11:12
jelmermaxb: that one revision id was created by bzr core though - bzr-git hasn't touched it11:19
maxbThat's interesting. Does this hint at a potential bzr core bug, then?11:19
jelmerit seems more likely to me that e.g. dpush has updated the working tree incorrectly and thus caused bzr core to create a commit with the wrong text revisions11:19
=== Ursinha is now known as Ursinha-afk
=== oubiwann is now known as oubiwann_
catphisham i correct in thinking that the only way to set up hooks in bzr is to write a python plugin?12:24
jelmercatphish: yes12:25
jelmerthere is a simple wrapper plugin that calls out to the shell but it doesn't cover all hooks, and requires per-branch configuration12:25
catphishi might write my own wrapper that calls out to the shell12:26
catphishi need to be able to define hook shell scripts on a per-repository basis12:26
catphishhttp://people.samba.org/bzr/jelmer/bzr-shell-hooks/trunk/12:28
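In case it helps, a plugin like that is usually installed by branching it into the per-user plugin directory (the target directory name here is illustrative; it only needs to be a valid Python module name):
    $ mkdir -p ~/.bazaar/plugins
    $ bzr branch http://people.samba.org/bzr/jelmer/bzr-shell-hooks/trunk/ ~/.bazaar/plugins/shell_hooks
    $ bzr plugins   # should now list it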
=== oubiwann_ is now known as oubiwann
=== oubiwann is now known as oubiwann_
vilajam: good morning !14:55
vilajam: I'm looking at your reset-checkout proposal and I'm wondering what will happen if there are existing conflicts ?14:56
vilajam: I smell bogus edge cases here14:56
=== Ursinha is now known as Ursinha-afk
CardinalFangHi all.  In Ubuntu, I'm interested in updating a package, but the source-package-branch is out of date with the current shipping package.  I heard I should mention it here, and ask for advice.15:02
=== Ursinha-afk is now known as Ursinha
maxbWhat is the source package name?15:04
CardinalFang"couchdb"15:04
CardinalFanglp:ubuntu/natty/couchdb15:05
vilahttps://bugs.launchpad.net/udd/+bug/65331215:06
maxbVia http://package-import.ubuntu.com/status/ and the linked https://bugs.launchpad.net/udd/+bug/653312 I infer that a fix is not likely to be forthcoming in the immediate future, and therefore you would be best advised to do an update using the traditional methods for now15:06
* vila nods15:07
CardinalFangmaxb, vila, thank you.15:07
=== mnepton is now known as mneptok
jamvila: honestly, I don't really care about those edge cases much. for conflicts, it probably leaves them alone15:21
jamthis is more about "my dirstate is corrupted, fix it"15:21
vilajam: my point is not to fix every edge case but to let the user know some may exist (reviewing)15:22
vilajam: you care at least enough to offer the ability to restore the merge parents though, I don't want users to be fooled15:23
vilajam: this is a great addition, so let's publicize what it really does15:23
jamI'm not sure I want it to be particularly public. Since it is a 'repair something broken' tool. I don't want people to think it happens often15:24
jamit has happened just often enough and the workaround is clumsy15:24
vilajam: I'm all for leaving it hidden, what I mean (already written in the not-yet-posted review) is that when the repair succeeds, a warning should tell the user about limitations15:25
vilajam: review posted15:28
vilajam: spoiler: BB:tweak ;)15:28
sobersabrehi.16:02
sobersabreI have a q which may sound a bit trollish. but it's not.16:02
sobersabrewhat was the idea behind STARTING bzr  ?16:02
sobersabreI mean it is python. there was a python project already (hg)16:02
jamsobersabre: technically we were first, I believe16:04
jamcertainly the start date is very close16:04
damdalso, bzr is GNU16:04
jamsobersabre: we talked about joining at one point. I think the primary sticking point was copyright, followed by some disagreements about structure. We wanted to be abstract and support multiple concrete formats, etc.16:05
LeoNerdThere was also a lot of history in baz, the Canonical fork of tla; Tom Lord's Arch..16:07
LeoNerdA C reimplementation of the original GNU arch; a revision control system written in shell scripts (I kid ye not)16:07
LeoNerdThere's no code sharing between baz and bzr, but bzr was started by a lot of the same developers who first worked on baz; shares some of the ideas and concepts... at least initially16:08
LeoNerd.oO( Except for cherrypicking... </bitter> )16:09
sobersabrejam: thanks.16:58
sobersabrenow bzr q.16:58
sobersabre:)16:58
sobersabreI want to take a branch with all its history, and put it into a subfolder of another (unrelated branch), with all the branch's history, and mold them together into 1 branch.16:59
sobersabreI saw a plugin "merge-into",16:59
=== Ursinha is now known as Ursinha-lunch
sobersabreis there a way to do it without giving birth to a hedgehog and avoid using this plugin ?17:00
damd...17:02
maxbsobersabre: You want 'bzr join'17:07
maxbsobersabre: However, first please check the format of the two branches17:07
maxbIf they are both poor-root type, this may be an issue17:07
sobersabremaxb: I am on 2a formats.17:16
sobersabreon both branches.17:16
sobersabrethanks for join.17:16
maxbIn that case, bzr join should just work17:16
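A minimal sketch of the usual recipe (paths hypothetical): bring the other branch into a subdirectory of the target tree, then join it:
    $ cd outer-branch
    $ bzr branch /path/to/inner-branch subdir     # subdir is now a nested standalone branch
    $ bzr join subdir                             # absorb subdir (and its history) into the outer branch
    $ bzr commit -m "Join inner-branch as subdir"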
=== beuno is now known as beuno-lunch
=== vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: jam | 2.2.3 is officially out ! (rm vila)
=== beuno-lunch is now known as beuno
=== Ursinha-lunch is now known as Ursinha
=== r0bby is now known as robbyoconnor
bialixheya GaryvdM21:04
GaryvdMHi bialix!21:04
bialixI have couple of questions to re windows-installers branches21:05
GaryvdMok21:05
bialixI'd like to merge Naoki's patches21:05
bialixbut wonder if we need 2 separate branches for bzr 2.2.x and 2.3.x21:05
bialixNaoki's improvements are definitely for 2.3 only, IMO21:06
bialixGaryvdM: ^21:07
GaryvdMbialix: It should be possible to code it so that you can define different versions for 2.2 and 2.3 in the same branch21:07
vilaI'd say focus on 2.3, but that's just me :)21:07
GaryvdMHi vila21:07
bialixbonsoir vila21:08
vilahey GaryvdM, hello bialix21:08
vilaI'm tweaking qbzr tests so they can be run with --parallel=fork21:08
vilaI just succeeded and will send an mp :)21:08
bialixGaryvdM: for 2.3 the tag bzr-2.3b1 is used; that does not sound right to me21:08
GaryvdMWhether that is the best way to do it or to have different branches - I'm not sure.21:09
GaryvdMyes - that should be 2.3b5!21:09
* bialix wonders how Gary built the latest installers21:10
GaryvdMHmm - let me boot the vm I built on and check. I may have forgotten to push21:12
bialixGaryvdM: on the other hand the changes are straightforward, and I think we may want to upgrade TortoiseOverlays to the latest version for all bzr versions21:12
bialixand removing unused w.exe is not a big deal21:12
bialixGaryvdM: I need your final word21:13
GaryvdMok21:14
GaryvdMbialix: I would rather not change too much in 2.2. If there are bugs that people are complaining about, then yes, but otherwise I'd rather stick to the known.21:16
GaryvdMbrb - nature call.21:16
vila. o O (Strange name for a phone...)21:16
GaryvdMAh - not phone. That's slang for the toilet :-)21:18
vilahehe, I know :)21:19
vila./bzr selftest -s bzrlib.plugins.qbzr --parallel=fork ===> Ran 153 tests in 2.271s21:19
* bialix remembers the walk to Chateau de La-Hulpe21:20
vilainstead of Ran 153 tests in 4.161s21:20
vilahmm, at least windows popping all over the place are more fun21:20
bialixyep21:20
vilathey pop all together with --parallel=fork21:21
bialixjumping jack21:21
vilaanyway, patch is up for review21:21
vilabialix: exactly :)21:21
GaryvdMvila: thanks for the patch. I21:23
bialixGaryvdM: if we don't want to change 2.2 installers then I think we need to create a separate branch for 2.2 installers21:23
bialixGaryvdM: will it be too much trouble for future 2.2 releases?21:24
bialixfor you?21:24
vilakeep in mind that the cadence will slow down for 2.2 once 2.3 is out21:24
GaryvdMvila: I want to review the excepthookwatcher stuff carefully - so I'll review in the morning when I have a fresh brain :-)21:25
bialixI know, but you just released 2.2.321:25
vilaGaryvdM: sure21:25
GaryvdMvila: There is a bug in the qbzr tests where a failing test can cause others to fail, when there is nothing wrong. I suspect your changes may fix that (well, hoping).21:26
vilabialix: indeed and the next planned release is 2.3.0, then 2.3.1 then 2.4b1 and only then and if needed 2.2.421:27
GaryvdMbialix: I don't think that's necessary - I'm going to look now.21:27
* fullermd is bummed that we never had a 1.1.2.3.521:27
vila. o O (CVS lovers....)21:27
bialixoh, I thought we left 1.1.1.1.1.1.1.1 numbers long ago21:28
vilawell, we avoid them so far but who knows, may be someone will find a good use :D21:28
bialixGaryvdM: this change https://code.launchpad.net/~songofacandy/bzr-windows-installers/remove-unused-wexe/+merge/44708 affects Inno Setup script, that's shared between bzr installers21:29
fullermdIt's not CVS lovers, it's math geeks   :p21:29
vilayeah, yeah, stop pretending :-P21:29
GaryvdMbialix: oh - ok I see what you mean now.21:31
bialixGaryvdM: well, it's possible to extend issgen.py to allow using different templates for different bzr releases21:33
bialixif you want I can look into this21:33
GaryvdMbialix: maybe different branches is easier21:34
bialixalthough it increases complexity of the tool with almost no value21:34
bialixso yes different branches might be easier21:34
GaryvdMvila: do you have 1 branch, or different branches for the mac installer?21:35
vila1 branch per series + trunk21:35
vilathis sometimes requires some cherry-pick and always require careful merging for config.py (which defines what version of which plugin is packaged)21:36
vilabut otherwise it works flawlessly ensuring that we don't try to do too much in older releases21:36
bialixIan made it a bit better for windows installers, IMO21:36
GaryvdMvila: Has the merging caused any errors in the past?21:37
vilanot that I know of but I am a noob there having built what... 4 or 5 installers21:38
GaryvdMok - I'm asking to just try get a feel for what works.21:38
vilaThe nice thing with separate branches is that you have a clean history of the releases and what installer modifications went from one branch to the other21:39
vilaweird, -s bp.plugins.qbzr succeeds but running the full test suite still fails some qbzr tests...21:42
vilatest isolation related problem probably21:42
vilahey poolie !21:42
GaryvdMvila: I think that's the problem I was talking about just now. Is it a treewidget test?21:43
vilayup21:43
vilare-running21:44
vilaGaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall21:47
vilaGaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_add_selectall21:48
vilaGaryvdM: ha ha, tricked by a leaking excepthookwatcher.pyc21:56
GaryvdMvila: :-) I've done that before21:57
GaryvdMvila: I've just discovered another reason why it happens (and no surprise, it involves a global variable)21:59
vilaGaryvdM: I'll need to update my submission22:00
vilaGaryvdM: which one ?22:02
GaryvdMvila bp.qbzr.lib.util.loading_queue22:03
vilaoverrideAttr(bp....util, 'loading_queue', None) in the relevant setUp should fix it22:04
GaryvdMYes - but I also want to investigate a bit as it may need a try...finally somewhere.22:05
vilaoverrideAttr is good at avoiding try/finally if you need to purge this queue, addCleanup may help too22:06
GaryvdMvila: The "missing try...finally" is in the code, not the tests, but I will also add your suggestion for an extra level of test isolation protection.22:14
vilaGaryvdM: oh I see22:15
=== pickscrape__ is now known as pickscrape
vilaRuntimeError: cannot join thread before it is started... never saw this one :)22:17
vilaGaryvdM: so, I updated my mp but  the 2 failures are still there...22:18
GaryvdMvila: I should have a fix soon :-)22:19
vilayeah !22:19
vilaI'm pretty close to being able to run bzr selftest *without* --no-plugins or BZR_PLUGIN_PATH tricks22:20
vilaGaryvdM: I added a self.assertEqual(None, util.loading_queue) which didn't trigger22:21
GaryvdMOh22:22
viladouble checking22:22
GaryvdMMerging your mp now...22:22
GaryvdMvila: both your fix + mine are now in lp:qbzr22:27
vilapulled, running22:29
vilaGaryvdM: still 2 failures :-/22:33
GaryvdMvila: the same tests?22:33
vilaexact same ones, yes22:34
GaryvdMAssertionError: not equal:  a = [] b = ['changed'] ?22:34
vilaboth fail because they miss unversioned-with-ignored22:34
vilano22:34
GaryvdMvila: Please pastebin error.22:35
vilahttp://paste.ubuntu.com/559265/22:35
GaryvdMTy22:35
GaryvdMvila: any tips on how to reproduce?22:37
vilabzr selftest --parallel=fork22:37
vila:-/22:37
vilathat is: the whole test suite22:37
GaryvdMok22:38
vilaI can try to isolate a bit more but this is never trivial22:38
GaryvdM:-/22:38
vilaGaryvdM: one possible cause is that the tests pass when run sequentially because another test is doing some init that is missing when run in parallel22:41
* GaryvdM runs grep global . -r22:43
GaryvdMvila: Does it fail if you run bzr selftest -s bp.qbzr --parallel=fork?22:44
vilano22:45
GaryvdMOk - same for me22:45
vilaand neither if I run bzr selftest -s bp.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall22:45
vilaso s/init/something/ above22:46
pooliejam?22:48
jamhi poolie22:48
mathrickjelmer: any plans to add support for darcs repos?22:53
mathrickI can see how that might be tricky, given the completely opposite ways darcs and bzr look at trees and patches22:54
pooliejam, when you said "going on the 26th, or more like the 23rd sounds like it works better for me" which month was that?22:54
jampoolie: The 9th isn't possible, as you know. Going the week ahead would work for me, but you didn't list it as a possibility.22:55
jamgoing the direct week after would be bad22:55
jambecause my wife has to do late-night phone calls22:55
jamso 2 weeks after or 1-2 weeks before is ok22:55
jamOr put in sequence:22:56
pooliei understand now22:56
jamApr 25 good, May 2nd good for me, May 9, 16 bad for me, May 23rd good again22:57
poolieok, and the 30th of May good too?22:58
vilaGaryvdM: I got the failures with 'bzr selftest' even without --parallel=fork, weird23:04
GaryvdMvila: I have a suspicion. I'm making a branch with a whole bunch of trace.mutters to check.23:05
jampoolie: yeah, should be fine.23:05
jamLina says she'll have a definite schedule first week of Feb23:06
vilajam, poolie: 30th of May won't be good at my place, too close to Marie's exams23:07
jampoolie: I'm off for now. I really feel like we should be pushing to get history-db deployed for loggerhead. I can't say there won't be any regressions, but other than initial import speed, every time I hit something, I end up making it 10x faster than the current production code.23:10
jamI realize it shouldn't be my priority to work on it23:11
jamand I'll try to stop poking after today23:11
poolieit's always really disappointing to leave good work undeployed23:11
poolieand looking back on my mail, i think that we did basically have a deal with lp that if you did this, they'd deploy it23:12
pooliefor various reasons, some good some bad, some due to you and me not pushing it, that wasn't followed through23:12
jampoolie: I def see how not having a good test suite hurts being able to maintain the software, though.23:13
jambecause when I make changes, I know that the things *I'm currently testing* are ok, but I don't know about far-reaching implications23:14
* vila nods frantically23:14
jamvila: what are you doing here ?23:15
jam:)23:15
GaryvdMvila: lp:~garyvdm/qbzr/debug_select_all_test - But I think we should stop now.23:16
vilajam: fighting jetlag by hopefully reversing the pattern from these last days, staying up later to sleep between 1 and 6AM23:16
jamI guess yesterday you were waking up at this time?23:18
jamAnyway, i'll lurk for another 15min or so, but then I'm going to be away till tomorrow.23:18
pooliejam, i know what you mean about tests23:18
pooliejam, i'm so happy you got the things merged and the tests passing23:19
pooliei'm fine for you to push it as a 20%-type thing23:19
vilajam: roughly yes but then I couldn't sleep until 6AM...23:19
pooliei do think that as well as getting it technically ready you will need to work on lining it up to be deployed23:19
poolieie: do they have any concerns about the approach; are we able to deploy it on staging (etc) to test it;23:20
lifelessjam: I think history db is really nice23:20
lifelessjam: I'm keen to see it live; we're working within a bunch of constraints that were not really present a year ago - and if present would have given a much stronger focus to moving things forward.23:21
poolieanyhow, so getting a thread (maybe a new thread) about what those constraints are may be good23:22
poolieor maybe a lep23:22
vilaGaryvdM: ok, trying it now but I agree we should stop23:22
pooliethat's kind of more important than the actual speed23:22
poolieit could be 100x faster, but if it will not be deployed that just makes it more sad23:23
lifelessso23:23
lifelessbroadly they are23:23
lifeless - limit work in progress by the lp squads, no more dancing all over the map23:23
GaryvdMvila: ok - Goodnight23:24
lifeless   which implies not accepting handoffs that haven't been slotted in - help folk to self service instead23:24
GaryvdMGoodnight all23:24
vilaGaryvdM: see you, I'll send a mail with the failure result23:24
lifeless - data driven / operational excellence - analyse and fix the oopses we're getting23:24
=== GaryvdM is now known as GaryvdM_away
lifeless   also add loggerhead to the page performance report23:24
lifeless - we need to get timing data in the loggerhead oopses23:25
lifeless - iterate until the picture is clear about /what/ is wrong in terms of coarse user experience23:25
lifelesse.g. it may be that what we're really suffering is sqlite cache file contention cross-thread within single loggerhead processes23:26
lifelessat each point, we broadly want to do the single thing that will solve the most observable issues23:26
lifeless(think pareto analysis)23:26
poolieso: _should_ john push on this, or is it going to be too hard to get it deployed without active help from a squad?23:27
jamlifeless: I don't mind pushing a bit, but things get blackholed so often. I don't really know *how* to push lp-serve, for example. It is in an rt, and at that point... I poke at francis and poolie?23:30
lifelessjam: so its live on qastaging ?23:35
lifelessjam: AIUI ?23:35
jamlifeless: yes23:35
lifelessjam: in which case, hammer it until you're happy it's robust23:35
jamlifeless: done23:35
lifelessjam: look in the qastaging OOPS report summaries for anything related to it23:35
jamit failed at one point23:35
lifelessjam: and then file an RT for production23:35
jamas near as I can tell it was because it wasn't restarted as part of the deploy code23:35
jamI've filed an RT for staging, which is "stuck"23:35
jamhanded off to SPM, I believe, and then ... I don't know23:36
lifelessspm: ping23:36
lifelessjam: staging is irrelevant for getting it on production23:36
poolieso to answer the general question, you just need to ask them, or me23:36
spmlifeless: hola23:36
lifelessspm: ^ jam's rt - do you know where that is at?23:36
spmwaiting for me to get around to it23:37
spmcurrently I've got 3 LS ones (variants on the same theme) and an ISD one ahead in terms of getting things done; as well as the edge redirector excitement.23:38
jamspm: any idea on timeframe for that?23:38
jamIs it possible to just upgrade that to rollout onto production, if it isn't necessary to roll out on staging first?23:39
jamMostly I'm trying to get *some* momentum23:39
spmno idea unf. I spent purty much all of yesterday completely reactive to alerts. got maybe 15-20 mins to spend on RT's.23:39
jamand I thought staging would at least get stuff moving.23:39
lifelessjam: so, for clarity23:39
jamBut if the round-trip-to-get l-osa time is the block23:39
lifelessjam: the pipeline ordering is production<-qastaging<-staging23:39
jamthen I'm happy to skip steps23:39
lifelessjam: if it's been checked on qastaging, it's totally ready to move forward23:40
lifelessjam: we only have to check *db schema changes* on staging23:40
jamlifeless: that wasn't the info I heard23:40
jamsince qastaging was "we'll blow it away often"23:40
jamvs "public people test stuff on staging"23:40
lifelessjam: same for staging23:40
lifelessno23:40
lifelessqastaging and staging have the same lifetime guarantees (none)23:40
jamlifeless: *developers* test on qastaging and 3rd parties use staging (from all the conversations I've overheard)23:40
lifelessjam: where did you get the info, so I can correct it23:40
jamlifeless: the bug import discussions23:41
jametc23:41
lifeless3rd parties are permitted to play with either qastaging or staging23:41
lifelessdevelopers test db schema stuff on staging, and (approximately) all other things on qastaging23:41
lifelessthings that we can, we put behind a feature flag23:41
jamalso comments saying that if qastaging breaks, no big deal, and I haven't heard that for staging23:41
lifelessit's a huge deal23:41
lifelessif qastaging is broken we cannot deploy23:41
lifelessthe entire qa workflow breaks down23:42
pooliebecause there is only one qastaging for the whole site23:42
poolieover the rainbow, it would be modularized enough you could run something representative on a dynamically-allocated domain23:42
jamsee y'all tomorrow (if you're around :)23:44
vilajam: cu !23:44
lifelessmmm23:44
lifelessI don't think that's a great idea poolie23:44
poolieoh?23:45
lifelesswe have many complex parts; it's all too easy to bring up a shiny short-lived instance23:46
lifelessit's the heaviness of the whole thing that lets us learn about issues in qa23:46
lifelessit's only by having a 250GB db behind it that we can see various query change impacts, for instance.23:47
lifelessmkanat: btw the css is fixed23:51
mkanatlifeless: Great!23:52
