[00:19] * vila woke up one hour later than yesterday, one more week and the jet lag will be sorted out :-( [00:24] jam: no worries [00:24] vila: you should definitely not be up now :) [00:25] jam: indeed :-/ [00:25] jam: nice summary in bug #589521, thanks for that [00:25] Launchpad bug 589521 in Ubuntu Distributed Development "nagios monitoring of package imports needed" [Critical,Triaged] https://launchpad.net/bugs/589521 [00:38] vila: Sorted out? What's wrong with things like they are? It sounds like a perfectly sensible schedule to me :p [00:40] fullermd: the truth is, the arguments with wife and daughters tend to be easier to manage under this schedule... [00:40] See? Come to the dark side... [02:54] mkanat: hey, you said the other day that b.l.n had the wrong theme? [02:54] lifeless: Yeah. [02:54] mkanat: I meant to ask what you mean by that, but got local interrupts [02:54] lifeless: I'll see if I can show you. [02:54] lifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore [02:55] lifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css [02:55] what css should it have? [02:55] lifeless: That CSS exists in the actual branch. [02:56] lifeless: But it is a 404 on bln. [02:56] let me just nab someone [02:56] mkanat, i don't suppose you would have any time left to actually measure trunk (without historydb) vs 1.18 performance on a large branch? [02:57] poolie: I might. I have 2.15 hours left. [02:57] I mean, 2 hrs and 15 minutes. [02:57] facts might bring a merciful death to this thread [02:57] poolie: I don't think anybody ever said that history_db was slower. [02:57] robert thinks that john said it would be slower if the db was not enabled [02:58] poolie: I didn't see that on the list. [02:58] mkanat: jam said this: [02:58] Old New Action [02:58] 10s 30s First request of a branch in a project that has not been [02:58] previously seen. (slower) [02:58] (amongst other things) [02:58] note that none of our projects have been previously seen [02:59] lifeless: That's with history_db enabled or without? [02:59] and a 30s request will be killed [02:59] mkanat: with [02:59] lifeless: Okay, we're not talking about that situation, then. [02:59] lifeless: That situation would already be handled by what you'd do to enable history_db on codehosting. [02:59] mkanat: he also said that without the plugin, it uses plain bzr apis [02:59] lifeless: Which it does now. [02:59] so will be slower than before as well [03:00] lifeless: He said that there is some difference there from before history_db? Because the branch you're currently running uses plain bzr apis. [03:01] Oh, I may be seeing what he means. [03:04] ? [03:04] is it that it's removed caching code that would have helped in 1.18? [03:04] I'm still reading over the code. [03:04] poolie: I believe so, yes. [03:05] I'm not sure how much that actually matters, though. [03:05] What I don't know is how to turn off history_db to test. [03:05] remove the plugin [03:05] the historydb change is the following: [03:05] lifeless: It's a part of loggerhead. [03:05] - discard all the caching apis [03:05] - switch to raw bzrlib apis [03:05] ok, so istm it would be a worthwhile use of time to measure whether trunk-minus-hdb is in fact slow [03:05] lifeless, mkanat: 30s is slower for the first request on a project when it has to convert the full ancestry to the cached form. 
Note that it can incrementally build the cache, so even if the thread is killed it will eventually succeed. [03:05] - have a *bzrlib* plugin that makes those apis faster. [03:06] Also note, it is only for Emacs sized projects. bzr is a few seconds, IIRC [03:06] Yeah, launchpad also seems to be a few seconds, in my testing. [03:06] And yeah, even if history_db can't write to the disk, it will create its cache in memory. [03:06] history_db was pulled into the loggerhead codebase [03:06] jam: ah, k [03:07] So I suppose history_db is always enabled. [03:07] and 10s would become *every* request if you remove all the caching [03:07] so, this is really a bit of a distraction; unless its a no brainer: < a day of work, including getting up to speed, its not suitable for a maintenance squad at this time. [03:07] jam: That would happen basically if I just don't specify "--cache", right? [03:07] mkanat: I don't remember from the command line, tbh [03:07] the question is: is there anything you could do in ~1 day that will get it at least no worse than 1.18? [03:08] poolie: I suspect that for the actual average use case, it is already no worse than 1.18. [03:08] poolie: With the scale of codebrowse, it's likely that current stable will be rebuilding caches quite a bit. [03:08] ok [03:08] poolie: This is just my suspicion, though, not a proven fact. [03:09] poolie: I think the most immediate thing would be to teach Launchpad to shared caches between projects [03:09] at which point you'll hit a couple timeouts, but afterwards everything should be better [03:09] mkanat, i'm wondering if we can do some measurements to get more confidence in whether that is true or not [03:10] poolie: Yeah, I could do that, for sure. [03:10] jam, i know, but it seems like that is going to be more than a day of workL [03:10] poolie: I have a loggerhead load tester already. [03:10] poolie: it *should* just be changing the path to not include the branch name, just the project name [03:10] poolie: Let me do a test right now. [03:10] won't be shared between users, but probably pretty good [03:10] but I don't fully understand the codehosting remapping, etc. [03:10] jam: would that permit leakage from private projects [03:10] lifeless: revision id and revno stuff [03:11] e.g. if someone has the revid for a private project, makes that a ghost in their branch of same project [03:11] no revision information [03:11] lifeless: the cache is just the graph of revnos and revision_ids, it doesn't cache the actual content [03:11] jam: revids are sensitive too, given they include email addresses [03:11] so if you have the revision id, then you have all the information you're going to get [03:11] lifeless: but you just said you *have* it [03:11] right, thats one scenario [03:12] since you are making it a ghost [03:12] Okay, so I think that disabling the on-disk caching of historydb involves commenting out two lines. [03:12] the other one is it likely to expose a revid when it shouldn't. [03:12] Then it should cache in memory. [03:12] e.g. due to bugs [03:12] mkanat, let's try it! [03:12] poolie: Okay, going to try it now. [03:13] remember [03:13] to test it with a stacked branch and HTTP backend [03:13] poolie, lifeless, jam: Another option is to have codebrowse cache everything in one directory, in one file. [03:13] for apples to apples comparison [03:13] well [03:13] his hardware and load is undoubtedly not exactly the same [03:14] My load tester creates significantly more load than codebrowse experiences. 
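A minimal sketch of the sort of load/timing check being discussed here, assuming a locally running loggerhead instance; the URL, request count and concurrency are made up for illustration, and this is not mkanat's actual load tester (which is not shown in the log):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target: a 'changes' page on a local loggerhead instance.
URL = "http://localhost:8080/~example-team/example-project/trunk/changes"
REQUESTS = 20
CONCURRENCY = 4

def fetch_once(_):
    start = time.time()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.time() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = list(pool.map(fetch_once, range(REQUESTS)))
    print("min %.3fs  max %.3fs  mean %.3fs"
          % (min(timings), max(timings), sum(timings) / len(timings)))
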
[03:14] but perhaps we can get some data [03:14] Okay, I can already tell you that if you disable the on-disk cache, yes, trunk is slow. [03:14] I don't even have to run the tester. [03:14] Getting information for ViewUI: 15.945 secs [03:15] on emacs? [03:15] On launchpad. [03:16] ok, and you're confident that's representative? [03:16] if so, that means we can't just deploy trunk [03:16] jam: I imagine that if we used loggerhead's normal configuration where there's only one history_db.sql file for all branches combined, then it would become unmanageably large, on codehosting? [03:16] we would need to do whatever it is to get historydb in use [03:17] poolie: Yeah--what I didn't realize was that history_db was always being enabled, in my earlier tests. [03:17] poolie: Even if you don't specify a cache directory, loggerhead creates a temporary one when you start it. [03:17] and the question then is: can that task be made sufficiently small to do it ahead of doing other loggerhead work [03:18] poolie: It's all infrastructural work on codehosting, AIUI. [03:18] so it does cache to disk at the moment? but in a different form [03:18] poolie: No, your current stable loggerhead does not cache to disk. [03:18] poolie: But trunk always does. [03:19] uhm [03:19] My understanding is that it uses sqlite caches still [03:19] mwhudson: ^ [03:19] lifeless: It *can*. [03:19] lifeless: I don't think codebrowse has that enabled. [03:20] i'm pretty sure it does [03:20] so am I [03:20] it seems like changing it to write historydb caches into the same place should not be too traumatic [03:20] Yeah. [03:20] mkanat: I don't think it would become unmanageable, but probably bigger, and after that first attempt, probably faster. [03:20] Actually, that wouldn't be a problem at all, if you are already using the on-disk cache. [03:20] jam, so it would be better to use larger caching scopes, but doing it per branch would be ok? [03:20] it won't die on the first access? [03:20] Yeah, so you could actually just deploy history_db now. [03:20] mkanat: no, loggerhead always cached to disk, it always created a temp one, even before history_db [03:21] jam: Hmm, except that I could always make loggerhead re-generate its branch caches before history_db, with enough simultaneous access. [03:21] jam: It looked to me like it was only using the lru_cache. [03:21] (Unless you configured it to use the on-disk cache.) [03:21] mkanat: it just happened to regen the cache all the time [03:22] if you don't specify a cache, one is created that only lasts the lifetime of the process, and then is thrown out [03:22] jam: Oh indeed. [03:22] (separately, lifeless, is deletion of the /beta api near enough we should worry?) [03:23] poolie: I have to double check the code a bit. but I believe it will populate the cache file incrementaly, so even if it was killed in the first request, the second one will succeed for any given branch [03:23] or maybe the 18th? [03:23] poolie: in principle we're not adding anything to it [03:23] jam: In that case, history_db sounds like a net win with little infrastructural work required. [03:23] jam: well the second request may be to a different instance [03:23] lifeless: but the disk cache is in the same location [03:23] aiui [03:23] I find the hand wave on infrastructure work scary. [03:24] using the same cache that loggerhead is using today [03:24] lifeless: Why would you? [03:24] lifeless: It's just a new file in the same directory. [03:24] lifeless: That's the extent of the change that would happen. 
[03:24] jam: is historydb broken or unfinished or dangerous in any regard? [03:25] lifeless: Out of curiosity, do both processes currently use the same on-disk cache directory? [03:25] yes [03:25] they do [03:25] poolie: I don't know of anything specifically broken [03:25] caveat code has bugs [03:25] which is one thing jam highlighted as being a concern [03:25] sure [03:25] probably nothing worse than the existing code [03:25] and rollouts are hard [03:25] but, we still need to get through them [03:25] mkanat: I find it scary because handwaving around a bit of core plumbing is somewhat cavalier [03:26] mkanat: when one considers all the bits that can - and have - gone wrong in the past. [03:26] lifeless: Are you talking about infrastructure or about the internal architecture of loggerhead? [03:26] mkanat: infrastructure [03:26] and architecture [03:27] lifeless: Okay. Are you using (or do you have) sqlite 3.7? If you use a WAL, I bet both processes would be fine accessing the same cache. [03:27] I haven't looked at the risk of disclosure for private branches other than my brief queries here [03:27] lifeless: Risk of disclosure in what way? [03:27] lifeless: history_db, when we're just talking about loggerhead, is only a backend change. [03:27] bugs disclosing revision ids not in the branch being accessed [03:28] but, i think at the moment we're not talking about having a shared cache? [03:28] mkanat: I realise that its a backend change. [03:28] poolie: one of jams caveats was slower initial scans w/out a shared cache. [03:28] But we would have a shared cache. [03:28] sigh [03:28] Because we'd have only one cache. [03:29] I'll come back to this, I feel that the point is ....> over there [03:29] doing a bunch of work to assess whether its small enough, based on guesses - is at best a guess. Right now, I still wouldn't confidently ask curtis or julian to take this on within maintenance scope [03:30] lifeless: It doesn't sound like any code work is required. [03:30] if I'm over concerned about the risk, its because of a history of problems. [03:30] it's like a major version bump in a dependency [03:30] perhaps one that has caused trouble in the past [03:30] Yeah, sure. I mean, there's still the testing thing that I just talked about on the list. [03:30] I'm fully with you on that. [03:30] mkanat: I don't know that I've claimed code work; the same engineers do code and integration - everything except deploy & server ops. [03:30] mkanat: its an integrated team leading right up to the ops team [03:31] lifeless: Okay. What do you mean by "integration"? [03:31] for me to say to curtis or julian that its in scope for maintenance, I'd want to be quite confident that there is < 12 hours work asessing, measuring, qaing on staging, deploying & dealing with the fallout. [03:32] lifeless, how would you handle an upgrade to some other component we depend upon? [03:32] there may well be some that are over 12h [03:32] poolie: if its small a maintenance team just does it. If its not a feature squad gets assigned it - as will happen to python2.7 [03:33] ok [03:33] now, someone devs may scratch an itch and get python2.7 workable on dev machines, but actually upgrading prod etc requires considerable care. 
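A minimal sketch of the WAL idea mkanat raises at [03:27], assuming sqlite 3.7 or newer and a hypothetical shared cache file; write-ahead logging lets one writer and several readers in separate processes share the same sqlite database with much less lock contention:

import sqlite3

# Hypothetical path; loggerhead's real cache layout is set by its config.
conn = sqlite3.connect("/srv/loggerhead-cache/historydb.sql")
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print("journal mode is now:", mode)   # 'wal' if the linked sqlite supports it
conn.close()
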
[03:33] sure [03:33] 12h is an arbitrary figure [03:33] ubuntu upgrades are another case that comes to mind [03:33] istm we have the option of trundling along 1.18.x until we get time to safely do a bigger deployment [03:34] right, they were traumatic, and we're doing them differently next time - separating OS and dependencies; we'll go to py 2.7 *before* the next OS upgrade. [03:34] so that we're in place. [03:34] but we'd probably prefer not to do major development that's duplicating what's been done in an upstream later release [03:34] poolie: I don't see that as an option; I see that as the /only/ option given the info I've got about whats up. [03:34] max has said, rightly or wrongly, that he doesn't think there's any more low hanging fruit for performanec [03:35] so, i don't think we can usefully schedule small blocks on this [03:35] Talking about actual development of loggerhead, that is. [03:35] Which would be a separate issue from rolling out this one. [03:35] and if we're going to schedule a big block, we might as well start it by getting historydb running [03:35] lifeless: Without tests, I can't say how much time deployment would take. But if you also start changing *any* branch significantly, my answer would be the same. [03:35] since we know that will probably help alot [03:36] lifeless: If everything goes fine, I suspect it will take no longer than a normal rollout, perhaps an hour or two more. [03:36] lifeless: The worst that could happen--that I can predict now--is that the two processes may need separate cache directories so that they don't lock each other out. [03:36] mkanat: so, I'm aiming at 10 minute downtime windows; thats a factor of 12 more than 'ok' [03:37] mkanat: or maybe you don't mean downtime window ? [03:37] lifeless: I don't see any downtime happening. [03:37] lifeless: Yeah, I mean total maintenance work time. [03:37] lifeless: To do the rollout. [03:37] it seems like we should easily be able to run it in parallel [03:37] since it's readonly etc [03:37] poolie: history_db gets written to quite a bit. [03:37] all this is fine [03:38] BUT ITS WORK. [03:38] logically readonly [03:38] who is going to do it? what if the guess is wrong? whats the impact on load? will the machine thrash? [03:38] poolie: Yeah. [03:38] indeed [03:38] lifeless: The machine should use considerably less memory with history_db than it does now. [03:38] I mean, I'm totally fine with whatever plan, my point has *never* been that its crap, its been that *we need to do work to make this usable in prod* [03:38] effort [03:38] time * energy. [03:38] lifeless, the point is, i don't see that there is any other sensible next step other than deploying this [03:38] lifeless: In an ideal situation, the rollout requires no attention from anybody--it would just be a standard merge and push. [03:38] which btw, this conversation is sapping for me. [03:38] i'm not saying you have to do it right now [03:39] but there's no point discussing how to make it possible to do other things off 1.18 [03:39] lifeless: I totally understand. :-) [03:40] poolie: But thats not what I argued *for*. I argued that other than oops fixing, which there are plenty of sensible things to do in the current lp branch, we're not going ot be undertaking major works for 6-9 months, based on my knowledge of jml's pipeline. [03:40] ok [03:40] so again we come back to assumptions [03:40] do you agree that nearly 20K unhandled exceptions a day is a little bad? 
[03:40] max thinks it is not easy to fix oopses in the current branch [03:40] Ah, I wouldn't exactly say that. [03:40] and do you agree that historydb is an *operational unknown*? [03:40] It was easy to fix the "no such revision" one. [03:41] lifeless: Actually, now that I understand it, I only agree minimally. [03:41] lifeless: Because codebrowse is already using on-disk caches. [03:41] lifeless, i'm only spending time on this because i think it needs to be improved and i want to see us take the best course on it [03:41] lifeless: This is just another on-disk cache. [03:41] mkanat: which we have several deployment options for [03:41] a) per branch [03:41] b) shared cross branch [03:41] c) memory only [03:41] d) huge one [03:41] e) ...? [03:42] lifeless: Right, and (d) requires no code work at all. [03:42] we've seen locking issues with sqlite already [03:42] so we have some undelivered work in trunk [03:42] we all agree that actually deploying it will take work [03:42] a) would be equivalent to our current cache story in terms of disk, but higher upfront scans and no benefit cross-branch [03:42] etc [03:42] lifeless: (d) is essentially what codebrowse is doing right now. [03:43] mkanat: I thought a) was [03:43] lifeless: Nope--one file in one directory. [03:43] mwhudson: ^ cache per branch, right ? [03:43] there would be little point in doing (a) [03:43] lifeless: currently? [03:43] yes [03:43] not sure actually [03:43] * mwhudson checks [03:43] lifeless: At least, AIUI. [03:43] mwh, for my education, where are you checking to find out? [03:44] poolie: lib/lp/launchpad_loggerhead/app.py in the launchpad tree [03:44] Oh right, launchpad-loggerhead. [03:45] mwhudson, i don't have that directory... [03:45] lifeless: it's one per branch currently [03:45] poolie: -lp/ sorry [03:45] poolie: It's a separate project, lp:launchpad-loggerhead [03:45] mkanat: not any more it's not [03:45] mwhudson: Oh! didn't know. [03:45] mwhudson: I don't have that directory, for whatever reason [03:45] poolie: lib/launchpad_loggerhead/app.py [03:46] right [03:46] so, back to facts, we're deployed with (a) today [03:46] shared between two instances [03:46] lifeless: Ahh. [03:46] with some threads each [03:46] Yes, in that case, history_db would be different. [03:46] ok, and are you saying that historydb is going to be slower in that setup? [03:46] poolie: No, just different. [03:47] lifeless: changing to one cache per would be very easy [03:47] jam says initial scan is thrice the time, subsequent operations are faster [03:47] mwhudson: I know, but its all time [03:47] mwhudson: for a maintenance squad without your context [03:47] mwhudson: who will be writing up what they figure out like crazy [03:48] lifeless: less time than this conversation has been going on for [03:48] lifeless: sure, they are welcome to talk to me :) [03:48] mwhudson: sure feels like it; or are you serious ? [03:49] lifeless: sorry, not quite sure i understand [03:49] mwhudson: 'less time than this conversation has been going on for' [03:49] lifeless: I'm pretty sure he was serious. [03:49] lifeless: yes, i was serious about that [03:49] ok [03:49] it's a few lines [03:50] + a test run :) [03:50] + qa to make sure it behaves [03:50] anyhow [03:50] yeah [03:50] sadly, a test run won't make any difference, but yes [03:51] poolie: so whats your proposal thats different to what I've suggested? [03:51] mwhudson: Where's the on-disk cache setup in app.py? I only see "self.graph_cache = lru_cache.LRUCache(10)". 
[03:51] lifeless: this diff will switch to using a single cache http://pastebin.ubuntu.com/558839/ [03:51] poolie: or are you trying to say I've got a bad axoim or assumption? [03:51] lifeless: keep both branches as they are [03:51] mkanat: cachepath = [03:51] lifeless: Ah, thanks. [03:51] if there's genuinely low fruit in 1.18, do that [03:51] file a bug saying "should upgrade to trunk" [03:51] do that in a safe way, but don't be scared of it [03:51] mkanat: the BranchWSGIApp constructor [03:51] mwhudson: Yeah, I found it. :-) [03:52] poolie: I'm not scared of it; honestly. [03:52] I'm assessing how long its taken mwhudson to do this in the past [03:52] the things that have gone wrong in the past [03:52] lifeless: Okay, I'm starting to agree with you more now that I see how codebrowse currently runs. [03:52] and our pipeline of stakeholder requests [03:52] lifeless: I was under the impression that it was doing (d). [03:52] it's fine if it stays in the pipeline until it's a priority [03:52] mkanat: god no, it would be flattened in an instant [03:53] lifeless: That's sort of what I thought, yeah. :-) [03:53] lifeless: i'm not so sure about that actually [03:53] mwhudson: that was humour [03:53] you might have some locking related fun [03:53] ok [03:53] I'm pretty sure we do benefit from the caches [03:53] lifeless: That's possibly resolvable by WAL, but that would indeed be infrastructural and integration work for sure. [03:53] but I'm also sure there is friction in there [03:54] which needs time and attention to detail [03:54] lifeless: Okay. Out of curiosity, are you sure because you have tracebacks, or just because you suspect? [03:54] poolie: I don't think putting it in the pipeline as a given makes sense; saying 'do something to make codebrowse better' is something we should put in the pipeline; one concrete thing we can do there is productionise and deploy history db [03:55] mkanat: have seen the odd thing go by [03:55] lifeless: Okay. [03:55] mkanat: such as sqlite lock contention errors [03:55] it's essentially already on the kanban towards the right hand side [03:55] poolie: I have to disagree. [03:55] lifeless: I think that it would probably be sensible and manageable to do oops fixing on trunk and 1.18 both simultaneously, and then schedule trunk rollout for a few months away. [03:55] Or on ~launchpad-pqm/devel and trunk. [03:56] poolie: theres a bunch of work been done on it, yes. But no team has been pushing it forward; its a new-situation for whomever comes along. [03:56] lifeless, because... it's not that far to the right if deployment hasn't been thoroughly considered? [03:56] lifeless: If you limit the changes to just oops-fixing until trunk is ready, I think you'd be fine. [03:56] https://lp-oops.canonical.com/oops.py/?oopsid=fixing [03:56] if jam's willing (not too scarred by lp-serve) we could ask him to do it with help from mwh and it would not be new [03:57] poolie: that, plus what I said, plus the work done hasn't been done with a view to the whole lp story around this; its been somewhat siloed off [03:57] poolie: if you want to do that, that would be awesome; don't underestimate it though - I mean, you know how gung ho I am about stuff generally, right ? [03:58] heh [03:58] poolie: this is *it can be done and there is a pie on the line collins* being concerned. [03:58] :) [03:58] lifeless: Given codebrowse's history, I don't blame you. 
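The pastebin diff lifeless links at [03:51] is not reproduced in the log; the sketch below only illustrates the idea under discussion (one shared cache directory instead of one per branch), with hypothetical paths and function names, and is not the actual launchpad_loggerhead/app.py code:

import os

CACHE_ROOT = "/var/cache/loggerhead"   # hypothetical location

def per_branch_cachepath(branch_id):
    # Status quo described above: one sqlite cache directory per branch.
    return os.path.join(CACHE_ROOT, branch_id.strip("~").replace("/", "_"))

def shared_cachepath(branch_id):
    # Proposed variant: every BranchWSGIApp points at the same directory,
    # so ancestry already scanned for one branch benefits the others.
    return CACHE_ROOT

for fn in (per_branch_cachepath, shared_cachepath):
    print(fn.__name__, "->", fn("~loggerhead-team/loggerhead/trunk-rich"))
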
[03:58] so in a way i would be ok with saying "this is really not a good fit, let's throw it out" [03:58] lifeless: Also given the fact that right now, it's far more stable than it's ever been. [03:58] ideally, if we had a specific reason for that [03:58] so if you want to do that, say so; if we leave trunk as-is, I'll setup a separate project to get code reviews and bugs visible to the lp team until we're converged. [03:59] but, if it's done and we don't think there's anything actually wrong, we just find it hard to deploy [03:59] that just seems like a bad reason [04:00] if true, it seems to mean some bugs can just never be fixed :( [04:00] poolie: we have a similar set of reliability and robustness concerns around soyuz [04:01] do we ultimately want to get historydb running or not? [04:01] poolie: I would think so. [04:01] i'd be fine with deciding that it was just a spike experiment and we want to take another go at it, if that's what we really think [04:01] compared to loggerhead, which has - I think - one person familiar with it still on the lp team, soyuz has 4 folk familiar with it, so we can actually react quickly to issues [04:02] from my perspective, its an experiment until either a) a feature squad is given codebrowse improvements as a card, and agrees that historydb is the way forward, or b) someone else does the work necessary to get it live [04:02] It's too bad I don't have enough time left to stick around and help with the rollout. [04:02] I can't prejudge what the feature squad engineers will decide makes sense [04:03] [well I can, but I'd hate it if someone did that to me, so I don't do it to them] [04:03] mkanat, it seems like organizing rollouts is a pretty hard thing to do unless you are actually full time on the lp team [04:03] lifeless: That's understandable. That might just be re-having a conversation that's already been had, though, from back when jam originally proposed history_db. [04:03] mkanat: it may be a nobrainer [04:03] lifeless: Sure. But I understand that the specific application to LP is still something to discuss. [04:03] I'm just being clear about what I can commit to confidently. [04:03] lifeless: Sure, I totally understand. [04:03] The primary thing that a designer has to deal with is the fact of uncertainty. [04:04] yup :) [04:04] are there any concerns about trunk other than that the deployment may be very bumpy (and i realize that may be a very large conrcern) [04:04] Yeah. It's something that a lot of people don't understand. They try to predict an unpredictable future. :-) [04:04] poolie: Unpredictable problems that I can't imagine now, based on the fact that there have been a lot of changes. [04:04] poolie: I haven't done a rev by revision review because I think thats a waste of time. [04:04] s/bumpy/& laborious whatever [04:04] poolie: once the card size is assessed as being > maintenance bucket, it goes into the queue. [04:05] poolie: history db alone is enough to do that for me [04:05] ok [04:05] so i propose to file a bug saying that historydb ought to be tried out and deployed [04:06] if in the course of trying it out, it seems that it is not a good solution, or too much work, or whatever [04:06] we can mark the bug wontfix and then pull it out of loggerhead's trunk [04:06] poolie: Where would it be tried out, though? There's no edge. [04:06] who will do the trying out ? [04:06] mkanat: we do have bazaar.qastaging.launchpad.net [04:06] lifeless: Oh, okay. [04:07] thats in the qa pipeline [04:07] lifeless: Which...is a redirect loop. 
[04:07] do you mean 'who will work on the bug' or 'how will we test it'? [04:07] poolie: who, now how. [04:07] s/now/not/ [04:08] does that matter? [04:08] it's a thing we can do to make lp better [04:08] either a squad will do it, or someone else [04:08] well [04:08] from my perspective this is a feature bug as I've said many times before [04:08] that means it won't get a maintenance squad timeslice [04:09] lifeless: You know...there is another option, here. [04:09] lifeless: You could just live with the OOPSes until you switch to trunk. [04:09] I'm not clear what you want to have happen, given the constrains we have. [04:09] we are getting timeouts on some pages [04:09] mkanat: I'm particularly unhappy with that idea because they generally mean a user has had a bad experience [04:10] lifeless: That's true. BTW, do you have analysis of them yet? [04:10] it seems at least possible that the most efficient way to fix them is to move to loggerhead trunk [04:10] mkanat: yes, the connection reset one is 16K a day [04:10] or, there may be other fixes [04:10] mkanat: the others are largely below the fold [04:10] lifeless: Oh! That just happens when somebody closes their browser. [04:10] mkanat: yes [04:10] lifeless: Okay. So that would be the primary one to fix, and the others could wait. [04:11] really? [04:11] mkanat: yes [04:11] lifeless: Okay. That would be easy to fix both on trunk and 1.18. [04:11] i mean obviously, it can happen there, but is that really happening 16k/day? [04:11] next highest one is [04:11] 61 error: [Errno 104] Connection reset by peer [04:11] 31 http://0.0.0.0:8081/%7Evcs-imports/busybox/main/changes (Unknown) [04:11] OOPS-1852CBB1307, OOPS-1852CBB1810, OOPS-1852CBB188, OOPS-1852CBB1990, OOPS-1852CBB2626 [04:11] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1307 [04:11] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1810 [04:11] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB188 [04:11] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1990 [04:11] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB2626 [04:11] poolie: its in the daily reports [04:11] lifeless: That's the stability-checking script that the losas run. [04:11] what i mean is, there are other things that can generate econnreset [04:12] poolie: the cause may be misanalysed, but [04:12] or eof [04:12] 15332 error: [Errno 32] Broken pipe [04:12] Bug: https://launchpad.net/bugs/701329 [04:12] Other: 15332 Robots: 0 Local: 0 [04:12] 7604 http://0.0.0.0:8080/%7Evcs-imports/busybox/main/changes (Unknown) [04:12] OOPS-1852CBA1, OOPS-1852CBA10, OOPS-1852CBA100, OOPS-1852CBA1000, OOPS-1852CBA1001 [04:12] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1 [04:12] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA10 [04:12] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA100 [04:12] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1000 [04:12] https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1001 [04:12] poolie: It would happen with bots. [04:13] so we see 16K +- 2K a day of errno 32, and < 100 of errno 104 [04:13] fixing the 16K one *may* expose some timeout cases [04:13] this is another reason to just stay focused on stabilisation [04:13] * mkanat nods. 
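One plausible shape for a fix to the "Broken pipe" OOPSes discussed above (bug 701329) is WSGI middleware that swallows errors caused purely by the client or haproxy closing the connection; this is a hypothetical sketch, not the fix that was actually landed:

import errno
import socket

CLIENT_DISCONNECTS = (errno.EPIPE, errno.ECONNRESET)

class IgnoreClientDisconnects(object):
    # Wrap a WSGI app so client disconnects do not become OOPSes.

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            for chunk in self.app(environ, start_response):
                yield chunk
        except (socket.error, IOError) as e:
            if getattr(e, "errno", None) not in CLIENT_DISCONNECTS:
                raise   # a real error: let the OOPS machinery see it

# Hypothetical usage: application = IgnoreClientDisconnects(application)
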
[04:13] poolie: so it seems to me you want to push this branch forward [04:13] so it seems likely to me that the epipe error is loggerhead seeing the connection closed by haproxy [04:14] we don't really know what caused haproxy to close it [04:14] what I'll do then, is make a new project, pull the lp loggerhead stuff into that, which will get changes to that branch under review and bugs per the lp policy a home [04:14] poolie: Yeah, that sounds likely right. [04:14] (incidentally log correlation stuff would be useful here, but maybe just less(1) is enough) [04:14] if and when trunk gets into lp we can consolidate the projects again [04:14] lifeless, yeah, i would like to push it forward [04:14] I've spent enough time on this discussion, I'm going to stop now. We're not going forward, and the bits I'm describing don't affect anyone else. [04:14] (copious time and all that ) [04:15] well, i'm sorry it was frustrating [04:15] i think it was actually useful to get a few things explicit where people had different ideas in their head [04:15] lifeless: I think that just fixing the oopses on the LP branch seems a reasonable solution. [04:15] the frustrating bit is the handwave around resources [04:15] mkanat: thats what I've been asking for! :) [04:15] mkanat: I've been *offering* lp developers doing maintenance and triage on the main loggerhead project [04:15] well, i've watched/tried to help jam spend ~6m getting his work deployed [04:15] but not with the unknown in trunk [04:16] lifeless: Yeah, that seems fine. I don't even think that most of those fixes need to go into loggerhead trunk. [04:16] so i may not be pessimistic _enough_, but i think i'm at least fairly pessimistic [04:16] lifeless: At least, the oops fixes. [04:16] mkanat: I doubt that that will happen with trunk as different as it is :(. [04:16] we'll see [04:17] lifeless: Yeah. As long as you just stick to OOPS fixing, though, I don't think trunk would be missing out. [04:17] mkanat: it builds up debt [04:17] lifeless: And then when the feature squad has time in six months or so, merging them back in as appropriate could be done. [04:17] lifeless: True. [04:17] mkanat: at a certain point trunk will then be unfeasible to merge [04:17] lifeless: Well, hopefully we'd just be talking primarily about two fixes. [04:17] I don't know if that will happen or not; I hope it doesn't. [04:17] lifeless: Which should be small patches. [04:18] my bigger concern is community [04:18] i will at least have a look at bug 701329 [04:18] noone will be maintaining loggerhead per se [04:18] Launchpad bug 701329 in Launchpad itself "error: [Errno 32] Broken pipe" [Critical,Triaged] https://launchpad.net/bugs/701329 [04:18] lifeless: Yeah. And there are outstanding review requests. [04:18] right [04:18] which I was aiming to solve by cunning :) [04:18] and the other [04:18] lifeless: Ahhh, I see. :-) [04:18] mkanat: by having lp developers maintain the project, the various queues would be driven to zero: bug triage, code review [04:19] lifeless: Right. [04:19] mkanat: but doing that is contingent on it being straight forward, which an unusable trunk is not. (Yes, I know the nuance there) [04:19] * mkanat nods. [04:19] lifeless: I think we'll just have to delay that for 6-9 months. [04:19] so, i think this is a bit sad. Shrug. [04:19] lifeless: And then at that point it can happen. 
[04:20] canonical put time & money into historydb [04:20] i really think we should either clearly decide it was wrong, or finish it [04:20] poolie: I think we're all in agreement about that. [04:20] poolie: It's just a question of when it will happen. [04:21] are we not allowed to say 'it was a spike, its not finished, put it to the side and perhaps later' ? [04:21] thats what francis was saying [04:21] lifeless: I think unmerging it would revert a lot of good refactoring that happened along with it. [04:21] mkanat: I agree, I think thats a shame. But perhaps better than 6-9 months without maintainer? [04:22] lifeless: Hmm. [04:22] lifeless, well, yes, we're allowed, but [04:22] besides, its not lost per se, its mergable if/when someone decides they have the time. [04:22] things being hard to change doesn't seem like a very satisfying reason to do that [04:22] this is all about resourcing. [04:22] lifeless: I think 6-9 months without a maintainer would be no different than it was in 2010. [04:22] i'd be happier to say that if it turned out the design was actually bad [04:22] its a sunk cost [04:23] it wasn't resourced with a budget up front [04:23] it's true it's a sunk cost [04:23] spm: 15:55 < mkanat> lifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore [04:23] 15:55 < mkanat> lifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css [04:23] 15:55 < lifeless> what css should it have? [04:23] the question is, now that the code is there, does the cost of deploying it exceed the benefit we would get from deploying it? [04:23] 15:56 < mkanat> lifeless: That CSS exists in the actual branch. [04:23] if deployment is very hard that may well be true [04:23] spm: can you look at how the /static/ path is accessed and see if perhaps we have something bong going on? [04:24] okidoki. [04:24] poolie: I don't see that thats the question. its a question sure, but *the* question is, where does completing that work fall in the queue for the various folk that might do it. [04:24] lifeless, poolie: My vote is to fix the two OOPSes on ~launchpad-pqm/loggerhead/devel, then wait 6-9 months, stabilize trunk, and then have the LP team maintain trunk. [04:25] poolie: its 6-9 months away for LP; even if it was the very best thing since sliced bread it would still be 6-9 months away [04:25] actually... I might even be able to hazard a guess as to what/where/why this fault exists.... [04:25] poolie: *unless* it becomes a stakeholder priority, *or* is small enough for a maintenance squad to handle. [04:25] lifeless: I don't think it's a terrific emergency. [04:25] poolie: I've not argued its wrong or bad or whatever. But polish and doing it well matters a great deal. [04:25] lifeless: codebrowse sucks far less than it used to, already. [04:26] i guess the axiom i have here is that zero timeouts is a priority, and that historydb is said to be the best way to get there [04:26] because if we get it wrong the costs are huge [04:26] in that case it should not be shelved for so long [04:26] perhaps there are less-risky ways to get there [04:26] poolie: Yeah. I think that "priority" in this case simply means 6-9 months. [04:26] zero timeouts are a policy [04:26] poolie: I think the least risky way would be to hire somebody to work on loggerhead. 
[04:26] even just re-doing similar work but with stepwise deployments are a good idea [04:26] and I'm keen on those [04:26] things that are too big go into the feature queue to do, even timeouts. [04:26] s/are a good idea/could be a safer course [04:27] which this falls into IMNSHO [04:27] lifeless: Are you with me on the "fix the two oopses then maintain trunk later" plan? [04:28] mkanat: I"m with you on fix the two oopses [04:28] mkanat: I hesitate to commit to the maintain trunk later plan, because it presupposes the outcome of a feature squad looking at this stuff [04:29] lifeless: Okay. In that case, "roll out history db in 6-9 months and then see how things go from there". [04:29] 'assign a feature squad to work on loggerhead performance in 6-9 months and see what happens' [04:29] lifeless: Sure, sounds great. :-) [04:29] pending flacoste and jml agreeing on priority/timeline [04:30] one obvious tactical move for such a squad is to pick this up (if noone has done it in spare / volunteer time in the interim) [04:34] ok, so now back to the big one [04:34] in that plan are we ok to leave lp on 1.18 and trunk a bit free-floating [04:34] poolie: Yeah. [04:34] or shall we say what's in trunk was an experiment that's semi-abandoned, and move it aside? [04:34] thats to you guys [04:35] if you want maintainers from the lp team, no. If you don't care about that (indefinitely), yes. [04:35] It *was* an experiment and in the absense of maintainers, sure it must be considered abandoned. [04:37] poolie: I don't think it should be moved aside. [04:38] poolie: I suspect that in 6-9 months, the best thing for LP's feature team to do will be to stabilize and roll out trunk. [04:38] I don't have a particular investment in either answer; I have a strong bias towards having things be maintained when possible, and code that noone is looking after be clearly abandoned. [04:38] but its not 'my' project [04:38] lifeless: It was effectively abandoned before I started working on it, we'd just be putting it back into that state. [04:39] Well, besides the work that jam did on history_db, which was substantial. But nobody was fixing bugs. [04:39] mkanat: which is a shame, no ? [04:39] lifeless: It is, but it's also just a fact, and I'm okay with waiting 6-9 months for it to be resolved. [04:40] lifeless: I would love to keep working on it; that would be the most ideal solution given infinite resources. :-) [04:40] just for clarity, you think its better for users to have a nonresponsive project with some cool stuff unreleased in trunk than a responsive project with a 'future' branch earmarked for a later examination ? [04:41] mkanat: I'm assuming, perhaps incorrectly, that you're not going to be doing regular maintenance in your spare time. [04:41] lifeless: I think that trunk is probably stable and usable by everybody except LP. [04:41] lifeless: Yeah, unfortunately because I also maintain Bugzilla and some other stuff, I don't have a lot of spare bandwidth to do other free projects. [04:41] mkanat: I didn't say anything about the code quality [04:41] mkanat: or stability. [04:42] mkanat: which is totally understandable... I have the same quandry myself/ [04:43] lifeless: I think it honestly wouldn't be that hard, if you're just doing simple OOPS fixes, to apply them both to trunk and the LP branch. But I also suspect that some of those OOPS fixes will not be quite so wanted on trunk. [04:43] lifeless: I could fix the issues and do the backports myself, with one more block of hours. 
[04:44] mkanat: we really want the lp team to get stuck into all the crannies [04:44] lifeless: Okay. If that's the immovable object, then I think the *simplest* solution is the one that we have proposed and agreed on. [04:44] the previous silo structure led to the state loggerhead was in 2010; bringing you in had great results but has left loggerhead out on the cold [04:45] lifeless: I would say that loggerhead is a separately-maintained modular dependency, not a silo. [04:45] what I want is the following: [04:45] mkanat: 'bugs', 'code', 'soyuz' etc - those were the silos [04:45] mkanat: lp team substructure [04:46] mkanat: we've just reorganised into squads with a project wide remit [04:46] lifeless: Yeah...that's a conversation that you and I already had. :-) [04:46] right ;) [04:46] so [04:46] what I want is: [04:46] lifeless: (BTW, I think that having developers flexible is good, but I think the individual components should still have assigned individual designers.) [04:46] - a place for changes to the LP loggerhead to go for code review that code.launchpad.net/launchpad-project/+activereviews will pick up [04:47] - ditto bug reports for bugs.launchpad.net/launchpad-project/+bugs [04:48] - a clear and simple policy for what to do with changes to the project vis-a-vis upstream for developers. One of 'nothing', 'land upstream and do a release', 'file a patch upstream will review it' [04:48] if those three constraints are met, I will be satisifed. It may not be the globally best thing for canonical / bazaar / loggerhead, but I'll be confident that it will get acted on. [04:49] if any of them are not met, I'm confident it will become a spotty mess. [04:49] it's already in /bazaar [04:49] we read code reviews etc [04:49] if lp developers put up changes i'm sure they will be piloted [04:49] lifeless: That sounds sensible. So I suppose what you could do is develop against launchpad-pqm/devel, and submit merge proposals against the trunk, but not have anybody assigned to do them. [04:49] there are multiple code branchces in that project that are not landed and approved [04:49] lifeless: Then at least when somebody was interested, they could go and look at the active MPs against trunk. [04:50] mkanat: with noone on the other end, that seems a bit noddy [04:50] lifeless: It's at least a method of making a queue to handle for the future. [04:50] lifeless, i have actually read them and they're not easily landable [04:50] they could be bumped to wip [04:50] then they probably should be [04:50] jam: https://code.launchpad.net/~mnordhoff/loggerhead/history_db/+merge/25381 [04:51] lifeless: Also, if you make the target reviewer ~bzr, eventually somebody will come along and do them. [04:51] mkanat: well, I'm assuming (because these are there) that that isn't the case. [04:51] lifeless: We could say that the Bzr team maintains loggerhead's upstream (without anybody actually assigned to do the work) and Launchpad maintains the launchpad-pqm branch. [04:51] lifeless: Oh, the reason those are there is that the default reviewer for loggerhead is wrong. [04:51] lifeless: It should be ~bzr instead of ~loggerhead-team, or whatever it is now. [04:52] i think i added canonical-bazaar to that team [04:52] or at any rate we can [04:52] so [04:52] corollaries [04:52] because lp is limited [04:52] I need a project in launchpad-project [04:52] lifeless: Ultimately I think we're just talking about what will happen for the next 6-9 months. 
[04:52] that can be a new one, launchpad-loggerhead repurposed, or loggerhead itself. [04:52] mkanat: I don't see this as timeboxed, unless we get rid of loggerhead [04:53] lifeless: I can't imagine the feature squad deciding that anything is that valuable besides stabilizing and deploying trunk. [04:53] lifeless: At which point the LP branch and trunk will become the same. [04:54] mkanat: sure [04:54] but they could still work off of a branch and let trunk be approximately unmaintained [04:54] or they could be doing releases [04:55] lifeless: Sure. But either way, there's no difference between the plans, after the 6-9 month timeframe. [04:55] sure there is [04:56] lifeless: You're positing the possibility that history_db never becomes trunk? [04:56] yes [04:56] and [04:56] that perhaps bazaar will maintain loggerhead [04:57] lifeless: Okay. So that bzr will maintain loggerhead seems somewhat likely. [04:57] lifeless: That history_db will never be trunk seems very unlikely. [04:57] or that what launchpad needs will be so unsuitable for trunk that trunk maintainers reject it [04:57] my crystal ball is broken [04:57] lifeless: Sure, but I have a bit more intimate knowledge of loggerhead and the problems it's faced, so I feel fairly confident in these predictions. [04:58] lifeless: Here's another possibility. [04:58] lifeless: You could merge ~launchpad-pqm/devel into 1.18. [04:58] lifeless: You could maintain 1.18 and do regular releases of it. [05:00] lifeless: Then LP would maintain the stable branch and bzr would maintain the trunk. [05:01] lifeless: That seems like a pretty reasonable solution for everybody. [05:01] mkanat: AIUI bzr don't have the cycles or domain experience to maintain loggerhead comfortably. IMBW. [05:01] lifeless: Ah, I think many of the devs have some casual knowledge of it. [05:02] i think we're as well placed as lp, aside from being smaller [05:02] i don't know [05:02] i think you said you felt lp devs would be unwilling to work on it unless trunk was moved away [05:02] Yeah, poolie reviewed most of my code, jam did history_db, and some other folks submit patches from time to time and have done reviews. [05:02] I was extrapolating from my third constraint [05:02] poolie: ^ [05:03] i'm not sure i understand it [05:03] is it "where do i send my patches?" or "what do i do with third-party patches?"? [05:04] how should I approach doing my changes [05:04] understand lp's running 1.18; do changes off that; ask someone experienced to review them [05:04] it doesn't seem particularly harder than requesting changes in lazr.restful or bzr or whatever [05:05] s//proposing [05:05] lps review and bug ui lets us down here, but sure [05:06] I don't think you can propose merges cross-project [05:07] poolie: does patch pilot work off of https://code.launchpad.net/bazaar/+activereviews ? [05:07] wow [05:07] obviously not :) [05:08] but, i was contemplating that last week [05:08] spm: any joy? [05:10] poolie: I think you need to decide how you want to hang it together; if you want to maintain loggerhead - great, we'll send you patches, and bugs, and so on. [05:10] poolie: I understood you to be focusing on udd + treading water [05:10] poolie: so it seems a bit inconsistent to me for you also be to be picking up maintenance of this, which was effective unmaintained for quite some time. [05:10] poolie: but its up to you [05:11] lifeless: I think loggerhead is an important part of bzr adoption. 
[05:11] poolie: I just don't want to be held captive to this outstanding work, and want a clear answer whether you would like the lp team to pickup maintenance (subject to the constraints I'm operating under) [05:12] poolie: I feel like every time we reach an agreement, its snatched away one email later [05:12] lifeless: too many other things exploding atm to get a look at beyond logging into the likely problematic server [05:12] spm: should we rt it ? [05:12] spm: or would that just be throwing hay into a barn fire? [05:13] well, aiui (need to confirm) static files are served off crowberry directly. if guava has been updated; yet not crowberry, then this sort of mismatch seems possible. [05:13] spm: yeah, thats what I'd expect needs checking [05:14] i just want to get it to improve in the most efficient way possible [05:14] i have been doing reviews on it [05:14] as have others [05:14] we've spent 2 hours of three people - 6 man hours discussing the most efficient way. [05:14] poolie, lifeless: I'll leave you guys to decide this. I think having bzr maintain trunk and LP maintain 1.18 is probably a good idea. [05:17] I'm out for the night, now. :-) Goodnight! [05:17] night [05:17] :( [05:17] poolie: Did you want me to stick around? [05:18] no, go [05:18] i'm just annoyed this is getting stuck [05:18] poolie: Okay. [05:18] poolie: It's only stuck if you disagree that bzr should maintain trunk. [05:19] Anyhow, I'm out. :-) [05:19] Night! [05:21] lifeless, do you think having bzr maintain trunk (as we have been, at a low level) is feasible? [05:21] i guess that still leaves launchpad bugs meaning it's hard for lp devs to see the queue [05:24] well [05:24] as I said [05:24] I need an LP queue [05:24] for lp devs to review lp changes [05:24] and ditto bugs [05:28] how about if we just leave them where they are [05:28] and if lp developers are not getting reasonable turn around on their reviews, we deal with that? [05:28] so far, bzr people have been doing reviews, and lp people have not been proposing them [05:31] poolie: I don't understand [05:31] lp changes to the lp loggerhead will be reviewed by lp devs [05:32] if they go upstream, if thats desired and there is an upstream, then having reasonable turnaround is indeed important [05:41] ok, so you stated your constraints; what are mine? [05:41] i think it's very unfortunate to be letting historydb bitrot because nobody will schedule time to deploy it [05:42] but that is not really a constraint [05:42] i am also concerned that we will make these rearrangements and then nobody will actually work on it, therefore leaving things actually worse off [05:42] again, not a constraint [06:24] lifeless: so yes. /static is served off crowberry directly. both http or https. I guessing we have revno X on crowberry for LH, and X + Y on guava - hence the mismatch. [06:30] spm: but loggerhead runs on guava? [06:31] yup [06:31] I can has fix? [06:31] sorry, I don't understand? [06:31] we fixed it this way to stop codebrowse being overwhelmed serving static content [06:31] can you not serve it staticly from guava? [06:31] we did. it was bad. [06:32] oh [06:32] that surprises me [06:32] serving it out of loggerhead would be poor [06:32] but apache on guava? [06:32] spm: are you sure it was static on guava, not being handled by lh ? [06:33] it was being handled by lh on guava. there is no apache on guava. ? [06:33] I'd suggest apache on guava be unwise - just to solve a css file woe. 
[06:34] ok [06:34] I had no idea there wasn't one there [06:34] if you give me a bit I may even be able to enunciate why :-) [06:34] can you check the revno's ? [06:34] spm: is there an LP tree on crowberry we can point into instead of the one we do now, that would be more up to date? [06:36] sourcecode/loggerhead/loggerhead/static <== guava is 178, crowberry is 176. [06:37] revno 12263 vs 12177 [06:37] and is there a view.ss on guava? [06:37] view.css [06:37] ahhh. codebounce is in nodowntime. [06:38] spm: so, we need an additional nodowntime deploy to a dir on crowberry, and apache pointing at that. [06:38] ? [06:38] I wouldn't call codehost a nodowntime deplot? [06:38] deply too [06:38] deploy 3 [06:38] spm: (after we confirm a view.css exists on guava) [06:39] -rw-r--r-- 2 loggerhead loggerhead 519 2011-01-14 11:05 ./css/view.css <== guava [06:40] spm: what I mean is [06:40] codehosting cannot be nodowntime yet (rt something or other) [06:41] can we have a separate lp tree on crowberry [06:41] which would be*in* the nodowntime set [06:41] and apache would be pointed into that tree [06:41] just for static files? Ahhh I see. clever. [06:41] or apache on guava. your call :P [06:41] heh [06:42] yeah - I like that idea. bit ugly and heavyweight, but the alts are worse. [06:52] lifeless: oki. new 'launchpad-static' now on codehost; about to switch the softlinks to point into there.... [06:53] and live [06:55] http://bazaar.launchpad.net/static/css/view.css <== looks good [07:49] spm: thanks! [08:31] 'morning [09:23] hi. i'm working on a bzr branch here where i have revisions up to 4 and now i've made uncommitted changes to r4. now i'd like to sort of "throw away" the changes in r4 and make the current state the new r4. how would i do that? [09:31] i.e. i already have an r4, but i'd like to replace that r4 with the current uncommitted state which builds on r4. [09:32] damd: 'bzr uncommit' [09:32] followed by a new commit [09:32] okay, so my changes won't be lost that way? [09:33] they weren't, great, thanks [09:39] i just did a "bzr pull" and there was a conflict in a file. now i've fixed that conflict. so is everything swell now? i'm worried that maybe i have to take some extra action to prevent e.g. a "merge" commit from appearing in the log once i commit, like the stuff that happens in mercurial unless you rebase. [09:44] damd: you'll have to run "bzr resolved" to let bzr know you've resolved the conflict (it will tell you this when you try to commit) [09:45] oh, okay, thanks [09:45] damd: we only create merge commits if you've actually done a 'bzr merge' invocation. 'bzr pull' will never create a merge commit [09:45] great! [09:56] The behaviour isn't unlike Mercurial, for that matter [09:56] isn't it not unlike mercurial? [09:57] I suppose the difference is that "bzr pull" complains, whereas "hg pull" just reports (+1 heads) and leaves you with multiple local heads [09:58] if i remember correctly, in mercurial if you hg pull you need to make a "merge commit" [09:58] to avoid that you "hg pull --rebase" [09:58] but then again, i never bothered learning it properly [09:58] you only need to do a merge commit if you have to do a merge [09:58] yes, of course [09:59] It sounds to me that you are overly focused on avoiding merge commits [10:00] not overly focused, it's just that virtually every project i've worked on urge you to not make merge commits [10:00] in what VCS [10:00] damd: I have noticed this too. Have any of those projects given a decent explanation as to why they urge that? 
[10:00] git (conkeror), hg (work) and now bzr (emacs) [10:00] maxb: ugly logs basically [10:00] also git (postgresql) [10:01] no, i mean i worked on conkeror [10:01] most git-using projects hate merge commits ime [10:01] I haven't had any complaints from projects using other vcss [10:01] the only merge commits i find in emacs are the ones where they merge emacs-23 (bug fixes) to emacs-trunk (new features) [10:02] So they don't have feature branches? [10:02] they have those too, but those merges are very rare [10:02] but that's also because git makes a merge commit for EVERYTHING [10:03] ? [10:04] Tak: it only makes a merge commit if a fast-forward isn't possible [10:07] much like bzr [10:09] * Tak shrug [10:09] maxb: bzr pull never makes a merge commit, bzr merge always makes a merge commit [10:09] I feel like it happens a lot more often for me with git [10:10] maxb: or rather, sets the working tree up for a merge commit [10:10] maxb: that's different from the way git pull works [10:10] OK, perhaps it is more accurate to say that the default way git pull works creates merge commits in much the same scenarios as a user of bzr would [10:11] maxb: I don't think that's true. In a lot of situations it's surprising that "git pull" creates a merge commit. [10:12] Really? My understanding was that git pull will create a merge commit in exactly the same situations that bzr pull will say "branches diverged, you need to merge" [10:13] maxb: that assumes that a merge is actually what the user wants, which (at least I found) is often not true. [10:13] maxb: fwiw bzr can do the same thing using 'bzr merge --pull' [10:19] jelmer: bzr merge --pull has the disadvantage of replacing your mainline revision numbering with the other branches revision number, doesn't it? [10:20] No, it doesn't do that [10:20] Oh [10:20] I suppose it could [10:20] quicksilver: yes, but that's a problem with pull in general if the other branch has merged your tip and has additional revisions [10:20] does bzr support git like merges, so that original commit is preserved from merge? [10:20] jelmer: yup. [10:21] jelmer: just checking I understood :) We go to some lengths to keep monotonically increasing revision numbers on our branches [10:21] glen: yes [10:21] jelmer: any hints? :) [10:21] glen: how do you mean? [10:21] quicksilver: You know about append_revisions_only, right? [10:21] maxb: nope. [10:21] quicksilver: in general I think landing feature branches as merges is a good idea. it nicely groups them [10:22] jelmer: agreed. [10:22] quicksilver: It's a per branch setting to say "never change my existing mainline history" [10:22] jelmer: well. i did bzr merge ../path/to/branch, and when i did bzr commit, i had to type again commit message, but i'd like that commit messages that i merged get auto added, currently my commit after merge looks like i did the commit myself [10:22] maxb: ah, interesting. Would stop a class of stupid error. [10:23] That's the point of it, yes [10:23] maxb: still, it's never a problem to pivot the branch by repulling the right revision and push --overwriting it. [10:23] glen: The merge has a reference to the revison that was merged [10:23] maxb: the worst thing that happens is redmine gets terribly confused [10:23] jelmer: do you know is that merge rev visible in launchpad too? [10:23] (it assumes, and caches, monotonically increasing revisions) [10:23] glen: much like in git, which also doesn't copy commit messages [10:24] quicksilver: Huh. 
Sounds like its bzr integration wasn't exactly well thought through :-/ [10:24] glen: launchpad's main branch page just shows the mainline revisions (so the merge revisions, not the revisions that were merged by those merge revisions) [10:24] maxb: well, let's call it a known deficiency ;) [10:25] jelmer: ah, i see now. it's only in bazaar browser (http://bazaar.launchpad.net) [10:25] maxb: we use loggerhead as well [10:25] maxb: but we use redmine for issue tracking and it's convenient for the issue tracker to be able to see the repo [10:25] quicksilver: Of course, if redmine auto-set append_revisions_only on all of the branches managed under it, it *could* assume that :-) [10:27] jelmer: Hi, can we discuss bug 707170? My thought was that for the branches to be in the state they are, some software component must have done something wrong, and bzr-git seemed like the likely culprit [10:27] Launchpad bug 707170 in Bazaar Git Plugin ""Cannot add revision(s) to repository: missing text keys" branching lp:dulwich and alioth packaging branch into same shared repo" [Undecided,Incomplete] https://launchpad.net/bugs/707170 [10:33] maxb: can you explain why though? bzr-git didn't touch the broken commit at all [11:12] jelmer: Perhaps I'm misunderstanding, but I was interpreting that error as there being one revision-id that has bzr-style text-revision ids for some files in one branch, but bzr-git ones in the other [11:12] So unless bzr core managed to rewrite the text-rev ids, the problem must be in bzr core? [11:12] erm [11:12] * So unless bzr core managed to rewrite the text-rev ids, the problem must be in bzr-git? [11:19] maxb: that one revision id was created by bzr core though - bzr-git hasn't touched it [11:19] That's interesting. Does this hint at a potential bzr core bug, then? [11:19] it seems more likely to me that e.g. dpush has updated the working tree incorrectly and thus caused bzr core to create a commit with the wrong text revisions === Ursinha is now known as Ursinha-afk === oubiwann is now known as oubiwann_ [12:24] am i correct in thinking that the only way to set up hooks in bzr is to write a python plugin? [12:25] catphish: yes [12:25] there is a simple wrapper plugin that calls out to the shell but it doesn't cover all hooks, and requires per-branch configuration [12:26] i might write my own wrapper that calls out to the shell [12:26] i need to be able to define hook shell scripts on a per-repository basis [12:28] http://people.samba.org/bzr/jelmer/bzr-shell-hooks/trunk/ === oubiwann_ is now known as oubiwann === oubiwann is now known as oubiwann_ [14:55] jam: good morning ! [14:56] jam: I'm looking at your reset-checkout proposal and I'm wondering what will happen if they are existing conflicts ? [14:56] jam: I smell bogus edge cases here === Ursinha is now known as Ursinha-afk [15:02] Hi all. In Ubuntu, I'm interested in updating a package, but the source-package-branch is out of date with the current shipping package. I heard I should mention it here, and ask for advice. === Ursinha-afk is now known as Ursinha [15:04] What is the source package name? 
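For the "write a Python plugin" route mentioned above, here is a minimal sketch of a plugin that installs a post_commit hook and hands off to a per-branch shell script. The hook name, its signature and install_named_hook are bzrlib's; the plugin name and the script location (.bzr/post_commit.sh) are assumptions, and this is not the bzr-shell-hooks plugin linked above:

    # Install as ~/.bazaar/plugins/commit_shell_hook/__init__.py (plugin name is hypothetical).
    import os
    import subprocess

    from bzrlib import branch, urlutils

    def post_commit(local, master, old_revno, old_revid, new_revno, new_revid):
        # 'master' is the branch the commit landed in; only act on local
        # branches so the hook script stays a per-branch/per-repository affair.
        if not master.base.startswith('file://'):
            return
        base = urlutils.local_path_from_url(master.base)
        script = os.path.join(base, '.bzr', 'post_commit.sh')  # hypothetical location
        if os.access(script, os.X_OK):
            subprocess.call([script, str(new_revno), new_revid])

    branch.Branch.hooks.install_named_hook(
        'post_commit', post_commit, 'per-branch shell post_commit hook')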
[15:04] "couchdb" [15:05] lp:ubuntu/natty/couchdb [15:06] https://bugs.launchpad.net/udd/+bug/653312 [15:06] Via http://package-import.ubuntu.com/status/ and the linked https://bugs.launchpad.net/udd/+bug/653312 I infer that a fix is not likely to be forthcoming in the immediate future, and therefore you would be best advised to do an update using the traditional methods for now [15:07] * vila nods [15:07] maxb, vila, thank you. === mnepton is now known as mneptok [15:21] vila: honestly, I don't really care about those edge cases much. for conflicts, it probably leaves them alone [15:21] this is more about "my dirstate is corrupted, fix it" [15:22] jam: my point is not to fix every edge case but let the user know some may exist (reviewing) [15:23] jam: you care at least enough to offer the ability to restore the merge parents though, I don't want users to be fooled [15:23] jam: this is a great addition, so let's publicize what it reallt does [15:24] I'm not sure I want it to be particularly public. Since it is a 'repair something broken' tool. I don't want people to think it happens often [15:24] it has happened just often enough and the workaround is clumsy [15:25] jam: I' all for leaving it hidden, what I mean (already written in the not-yet-posted review) is that when the repair succeeds, a warning should tell the user about limitations [15:28] jam: review posted [15:28] jam: spoiler: BB:tweak ;) [16:02] hi. [16:02] I have a q which may sound a bit trollish. but it's not. [16:02] what was the idea behind STARTING bzr ? [16:02] I mean it is python. there was python project already (hg) [16:04] sobersabre: technically we were first, I believe [16:04] certainly the start date is very close [16:04] also, bzr is GNU [16:05] sobersabre: we talked about joining at one point. I think the primary sticking point was copyright, followed by some disagreements about structure. We wanted to be abstract and support multiple concrete formats, etc. [16:07] There was also a lot of history in baz, the Canonical fork of tla; Tom Lord's Arch.. [16:07] A C reimplemetnation of the orignal GNU arch; a revision control system written in shell scripts (I kid ye not) [16:08] There's no code sharing between baz and bzr, but bzr was started by a lot of the same developers who first worked on baz; shares some of the ideas and concepts... at least initially [16:09] .oO( Except for cherrypicking... ) [16:58] jam: thanks. [16:58] now bzr q. [16:58] :) [16:59] I want to take a branch with all its history, and put it into a subfolder of another (unrelated branch), with all the branch's history, and mold them together into 1 branch. [16:59] I saw a plugin "merge-into", === Ursinha is now known as Ursinha-lunch [17:00] is there a way to do it without giving birth to a hedgehog and avoid using this plugin ? [17:02] ... [17:07] sobersabre: You want 'bzr join' [17:07] sobersabre: However, first please check the format of the two branches [17:07] If they are both poor-root type, this may be an issue [17:16] maxb: I am on 2a formats. [17:16] on both branches. [17:16] thanks for join. [17:16] In that case, bzr join should just work === beuno is now known as beuno-lunch === vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: jam | 2.2.3 is officially out ! (rm vila) === beuno-lunch is now known as beuno === Ursinha-lunch is now known as Ursinha === r0bby is now known as robbyoconnor [21:04] heya GaryvdM [21:04] Hi bialix! 
[21:05] I have couple of questions to re windows-installers branches [21:05] ok [21:05] I'd like to merge Naoki's patches [21:05] but wonder if we need 2 separate branches for bzr 2.2.x and 2.3.x [21:06] Naoki's improvements are definitely for 2.3 only, IMO [21:07] GaryvdM: ^ [21:07] bialix: I should be possible to code it so that you can define different versions for 2.2 and 2.3 in the same branch [21:07] I'd say focus on 2.3, but that's just me :) [21:07] Hi vila [21:08] bonsoir vila [21:08] hey GaryvdM, hello bialix [21:08] I'm tweaking qbzr tests so they can be run with --parallel=fork [21:08] I just succeeded and will send an mp :) [21:08] GaryvdM: for 2.3 there is used tag:bzr-2.3b1, it does not sound right for me [21:09] Whether that is the best way to do it or to have different branches - I'm not sure. [21:09] yes - that should be 2.3b5! [21:10] * bialix wonders how Gary built the latest installers [21:12] Hmm - let me boot the vm I built on and check. I may have forgotten to push [21:12] GaryvdM: on the other hand the changes are straightforward, and I think we may want to upgarde TortoiseOverlays to latest version for all bzr versions [21:12] and removing unused w.exe is not big eal [21:12] and removing unused w.exe is not big deal [21:13] GaryvdM: I need you final word [21:14] ok [21:16] bialix: I would rather not change to much in 2.2. If there are bugs that people are complaining about, then yes, but otherwise I think rather stick to the known. [21:16] brb - nature call. [21:16] . o O (Strange name for a phone...) [21:18] Ah - not phone. That's slang for the toilet :-) [21:19] hehe, I know :) [21:19] ./bzr selftest -s bzrlib.plugins.qbzr --parallel=fork ===> Ran 153 tests in 2.271s [21:20] * bialix remember the walk to Chateau de La-Hulpe [21:20] instead of Ran 153 tests in 4.161s [21:20] hmm, at least windows popping all over the place are more fun [21:20] yep [21:21] they pop all together with --parallel=fork [21:21] jumping jack [21:21] anyway, patch is up for review [21:21] bialix: exactly :) [21:23] vila: thanks for the patch. I [21:23] GaryvdM: if we don't want to change 2.2 installers then I think we need to create separate branch for 2.2 installers [21:24] GaryvdM: will it be too much trouble for future 2.2 releases? [21:24] for you? [21:24] keep in mind that the cadence will slow down for 2.2 once 2.3 is out [21:25] vila: I want to review the excepthookwatcher stuff carefully - so I'll review in the morning when I have a fresh brain :-) [21:25] I know, but you just released 2.2.3 [21:25] GaryvdM: sure [21:26] vila: There is a bug in the qbzr test where a failing test can cause others to fail, when there is nothing wrong. I suspect you changes may fix that (well hoping). [21:27] bialix: indeed and the next planned release is 2.3.0, then 2.3.1 then 2.4b1 and only then and if needed 2.2.4 [21:27] bialix: I don't think thats nessary - I'm going to look now. [21:27] * fullermd is bummed that we never had a 1.1.2.3.5 [21:27] . o O (CVS lovers....) [21:28] oh, I thought we left 1.1.1.1.1.1.1.1 numbers long ago [21:28] well, we avoid them so far but who knows, may be someone will find a good use :D [21:29] GaryvdM: this change https://code.launchpad.net/~songofacandy/bzr-windows-installers/remove-unused-wexe/+merge/44708 affects Inno Setup script, that's shared between bzr installers [21:29] It's not CVS lovers, it's math geeks :p [21:29] yeah, yeah, stop pretending :-P [21:31] bialix: oh - ok I see what you mean now. 
[21:33] GaryvdM: well, it's possible to extend issgen.py to allow using different templates for different bzr releases [21:33] if you want I can look into this [21:34] bialix: maybe different branches is easier [21:34] although it increases complexity of the tool with almost no value [21:34] so yes different branches might be easier [21:35] vila: do you have 1 branch, or different branches for the mac installer? [21:35] 1 branch per series + trunk [21:36] this sometimes requires some cherry-pick and always requires careful merging for config.py (which defines what version of which plugin is packaged) [21:36] but otherwise it works flawlessly ensuring that we don't try to do too much in older releases [21:36] Ian made it a bit better for windows installers, IMO [21:37] vila: Has the merging caused any errors in the past? [21:38] not that I know of but I am a noob there having built what... 4 or 5 installers [21:38] ok - I'm asking to just try to get a feel for what works. [21:39] The nice thing with separate branches is that you have a clean history of the releases and what installer modifications went from one branch to the other [21:42] weird, -s bp.plugins.qbzr succeeds but running the full test suite still fails some qbzr tests... [21:42] test isolation related problem probably [21:42] hey poolie ! [21:43] vila: I think that's the problem I was talking about just now. Is it a treewidget test? [21:43] yup [21:44] re-running [21:47] GaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall [21:48] GaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_add_selectall [21:56] GaryvdM: ha ha, tricked by a leaking excepthookwatcher.pyc [21:57] vila: :-) I've done that before [21:59] vila: I've just discovered another reason why it happens (and no surprise, it involves a global variable) [22:00] GaryvdM: I'll need to update my submission [22:02] GaryvdM: which one ? [22:03] vila bp.qbzr.lib.util.loading_queue [22:04] overrideAttr(bp....util, 'loading_queue', None) in the relevant setUp should fix it [22:05] Yes - but I also want to investigate a bit as it may need a try...finally somewhere. [22:06] overrideAttr is good at avoiding try/finally if you need to purge this queue, addCleanup may help too [22:14] vila: The "missing try...finally" is in the code, not the tests, but I will also add your suggestion for an extra level of test isolation protection. [22:15] GaryvdM: oh I see === pickscrape__ is now known as pickscrape [22:17] RuntimeError: cannot join thread before it is started... never saw this one :) [22:18] GaryvdM: so, I updated my mp but the 2 failures are still there... [22:19] vila: I should have a fix soon :-) [22:19] yeah ! [22:20] I'm pretty close being able to run bzr selftest *without* --no-plugins or BZR_PLUGINS_PATH tricks [22:20] I'm pretty close being able to run bzr selftest *without* --no-plugins or BZR_PLUGIN_PATH tricks [22:21] GaryvdM: I added a self.assertEqual(None, util.loading_queue) which didn't trigger [22:22] Oh [22:22] double checking [22:22] Merging your mp now... [22:27] vila: both you fix + mine now in lp:qbzr [22:27] *your [22:29] pulled, running [22:33] GaryvdM: still 2 failures :-/ [22:33] vila: the same tests? [22:34] exact same ones, yes [22:34] AssertionError: not equal: a = [] b = ['changed'] ? [22:34] both fail because they miss unversioned-with-ignored [22:34] no [22:35] vila: Please pastebin error. [22:35] http://paste.ubuntu.com/559265/ [22:35] Ty [22:37] vila: any tips on how to reproduce?
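The overrideAttr suggestion above, spelled out: a minimal sketch of guarding the module-level global bp.qbzr.lib.util.loading_queue in a test's setUp, assuming qbzr is installed as a bzrlib plugin. The class name is hypothetical; real qbzr tests would add the line to their existing setUp:

    from bzrlib import tests
    from bzrlib.plugins.qbzr.lib import util

    class TestWithIsolatedLoadingQueue(tests.TestCase):

        def setUp(self):
            super(TestWithIsolatedLoadingQueue, self).setUp()
            # Reset the global for this test; the original value is restored
            # automatically when the test's cleanups run.
            self.overrideAttr(util, 'loading_queue', None)
            # An equivalent spelling using addCleanup directly:
            # old = util.loading_queue
            # util.loading_queue = None
            # self.addCleanup(setattr, util, 'loading_queue', old)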
[22:37] bzr selftest --parallel=fork [22:37] :-/ [22:37] that is: the whole test suite [22:38] ok [22:38] I can try to isolate a bit more but this is never trivial [22:38] :-/ [22:41] GaryvdM: one possible cause is that the tests pass when run sequentially because another test is doing some init that is missing when run in parallel [22:43] * GaryvdM runs grep global . -r [22:44] vila: Does it fail if you run bzr -s bp.qbzr --parallel fork? [22:45] no [22:45] Ok - same for me [22:45] and meither if I run bzr selftest -s bp.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall [22:46] so s/init/something/ above [22:48] jam? [22:48] hi poolie [22:53] jelmer: any plans to add support for darcs repos? [22:54] I can see how that might be tricky, given the completely opposite ways darcs and bzr look at trees and patches [22:54] jam, when you said "going on the 26th, or more like the 23rd sounds like it works better for me" which month was that? [22:55] poolie: The 9th isn't possible, as you know. Going the week ahead would work for me, but you didn't list it as a possibility. [22:55] going the direct week after would be bad [22:55] because my wife has to do late-night phone calls [22:55] so 2 weeks after or 1-2 weeks before is ok [22:56] Or put in sequence: [22:56] i understand now [22:57] Apr 25 good, May 2nd good for me, May 9, 16 bad for me, May 23rd good again [22:58] ok, and the 30th of May good too? [23:04] GaryvdM: I got the failures wi th'bzr selftest' even without --parallel=fork, weird [23:05] vila: I have a suspicion. I'm making a branch with a whole bunch of trace.mutters to check. [23:05] poolie: yeah, should be fine. [23:06] Lina says she'll have a definite schedule first week of Feb [23:07] jam, poolie: 30th of May won't be good at my place, too close to Marie's exams [23:10] poolie: I'm off for now. I really feel like we should be pushing to get history-db deployed for loggerhead. I can't say there won't be any regressions, but other than initial import speed, everytime I hit something, I end up making it 10x faster than the current production code. [23:11] I realize it shouldn't be my priority to work on it [23:11] and I'll try to stop poking after today [23:11] it's always really disappointing to leave good work undeployed [23:12] and looking back on my mail, i think that we did basically have a deal with lp that if you did this, they'd deploy it [23:12] for various reasons, some good some bad, some due to you and me not pushing it, that wasn't followed through [23:13] poolie: I def see how not having a good test suite hurts being able to maintain the software, though. [23:14] because when I make changes, I know that the things *I'm currently testing* are ok, but I don't know about far-reaching implications [23:14] * vila nods frantically [23:15] vila: what are you doing here ? [23:15] :) [23:16] vila: lp:~garyvdm/qbzr/debug_select_all_test - But I think we should stop now. [23:16] jam: fighting jetlag by hopefully reversing the pattern from these last days, staying up later to sleep between 1 and 6AM [23:18] I guess yesterday you were waking up at this time? [23:18] Anyway, i'll lurk for another 15min or so, but then I'm going to be away till tomorrow. [23:18] jam, i know what you mean about tests [23:19] jam, i'm so happy you got the things merged and the tests passing [23:19] i'm fine for you to push it as a 20%-type thing [23:19] jam: roughly yes but then I couldn't sleep until 6AM... 
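One way to chase the kind of isolation failure being debugged above — a test that passes on its own but fails in a full run — is to grow the selection until it breaks: the smallest failing scope points at the test that leaves state behind (or does the init the failing test silently relies on). This is only a sketch of that idea, not how the problem above was actually tracked down; the test ids are the ones quoted earlier:

    import subprocess

    FAILING = ('bzrlib.plugins.qbzr.lib.tests.test_treewidget.'
               'TestTreeWidgetSelectAll.test_commit_selectall')
    SCOPES = [
        FAILING,                                          # the test on its own
        'bzrlib.plugins.qbzr.lib.tests.test_treewidget',  # its module
        'bzrlib.plugins.qbzr',                            # the whole plugin
    ]

    def selftest_passes(selection, parallel=False):
        # Run a subset of the bzr test suite and report whether it passed.
        cmd = ['bzr', 'selftest', '-s', selection]
        if parallel:
            cmd.append('--parallel=fork')
        return subprocess.call(cmd) == 0

    for scope in SCOPES:
        print('%-55s passed=%s' % (scope, selftest_passes(scope)))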
[23:19] i do think that as well as getting it technically ready you will need to work on lining it up to be deployed [23:20] ie: do they have any concerns about the approach; are we able to deploy it on staging (etc) to test it; [23:20] jam: I think history db is really nice [23:21] jam: I'm keen to see it live; we're working within a bunch of constraints that were not really present a year ago - and if present would have given a much stronger focus to moving things forward. [23:22] anyhow, so getting a thread (maybe a new thread) about what those constraints are may be good [23:22] or maybe a lep [23:22] GaryvdM: ok, trying it now but I agree we should stop [23:22] that's kind of more important than the actual speed [23:23] it could be 100x faster, but if it will not be deployed that just makes it more sad [23:23] so [23:23] broadly they are [23:23] - limit work in progress by the lp squads, no more dancing all over the map [23:24] vila: ok - Goodnight [23:24] which implies not accepting handoffs that haven't been slotted in - help folk to self service instead [23:24] Goodnight all [23:24] GaryvdM: see you, I'll send a mail with the failure result [23:24] - data driven / operational excellence - analyse and fix the oopses we're getting === GaryvdM is now known as GaryvdM_away [23:24] also add loggerhead to the page performance report [23:25] - we need to get timing data in the loggerhead oopses [23:25] - iterate until the picture is clear about /what/ is wrong in terms of coarse user experience [23:26] e.g. it may be that what we're really suffering is sqlite cache file contention cross-thread within single loggerhead processes [23:26] at each point, we broadly want to do the single thing that will solve the most observable issues [23:26] (think pareto analysis) [23:27] so: _should_ john push on this, or is it going to be too hard to get it deployed without active help from a squad? [23:30] lifeless: I don't mind pushing a bit, but things get blackholed so often. I don't really know *how* to push lp-serve, for example. It is in an rt, and at that point... I poke at francis and poolie? [23:35] jam: so its live on qastaging ? [23:35] jam: AIUI ? [23:35] lifeless: yes [23:35] jam: in which case, hammer it until you're happy its robust [23:35] lifeless: done [23:35] jam: look in the qastaging OOPS report summaries for anything related to it [23:35] it faild at one point [23:35] jam: and then file an RT for production [23:35] as near as I can tell it was because it wasn't restarted as part of the deploy code [23:35] I've filed an RT for staging, which is "stuck" [23:36] handed off to SPM, I believe, and then ... I don't knwo [23:36] knw [23:36] know [23:36] spm: ping [23:36] jam: staging is irrelevant for getting it on production [23:36] so to answer the general question, you just need to ask them, or me [23:36] lifeless: hola [23:36] spm: ^ jam's rt - do you know where that is at? [23:37] waiting for me to get around to it [23:38] currently I've got 3 LS ones (variants on the same theme) and an ISD one ahead in terms of getting things done; as well as the edge redirector excitement. [23:38] spm: any idea on timeframe for that? [23:39] Is it possible to just upgrade that to rollout onto production, if it isn't necessary to roll out on staging first? [23:39] Mostly I'm trying to get *some* momentup [23:39] momentum [23:39] no idea unf. I spent purty much all of yesterday completely reactive to alerts. got maybe 15-20 mins to spend on RT's. 
[23:39] and I thought staging would at least get stuff moving. [23:39] jam: so, for clarity [23:39] But if the round-trip-to-get l-osa time is the block [23:39] jam: the pipeline ordering is production<-qastaging<-staging [23:39] then I'm happy to skip steps [23:40] jam: if its been checked on qastaging, its totally ready to move forward [23:40] jam: we only have to check *db schema changes* on staging [23:40] lifeless: that wasn't the info I heard [23:40] since qastaging was "we'll blow it away often" [23:40] vs "public people test stuff on staging" [23:40] jam: same for staging [23:40] no [23:40] qastaging and staging have the same lifetime guarantees (none) [23:40] lifeless: *developers* test on qastaging and 3rd parties use staging (from all the conversations I've overhead) [23:40] jam: where did you get the info, so I can correct it [23:40] heard) [23:41] lifeless: the bug import discussions [23:41] etc [23:41] 3rd parties are permitted to play with either qastaging or staging [23:41] developers test db schema stuff on staging, and (approximately) all other things on qastaging [23:41] things that we can we put behind a feature flag w [23:41] also comments saying that if qastaging breaks, no big deal, and I haven't heard that for staging [23:41] its a huge deal [23:41] if qastaging is broken we cannot deploy [23:42] the entire qa workflow breaks down [23:42] because there is only one qastaging for the whole site [23:42] over the rainbow, it would be modularized enough you could run something representative on a dynamically-allocated domain [23:44] see y'all tomorrow (if you're around :) [23:44] jam: cu ! [23:44] mmm [23:44] I don't think thats a great idea poolie [23:45] oh? [23:46] we have many complex parts, its all to easy to bring up a shiny short lived instance [23:46] its the heavyness of the whole thing that lets us learn about issues in qa [23:47] its only by having a 250GB db behind it that we can see various query change impacts, for instance. [23:51] mkanat: btw the css is fixed [23:52] lifeless: Great!