* vila woke up one hour later than yesterday, one more week and the jet lag will be sorted out :-( | 00:19 | |
lifeless | jam: no worries | 00:24 |
jam | vila: you should definitely not be up now :) | 00:24 |
vila | jam: indeed :-/ | 00:25 |
vila | jam: nice summary in bug #589521, thanks for that | 00:25 |
ubot5 | Launchpad bug 589521 in Ubuntu Distributed Development "nagios monitoring of package imports needed" [Critical,Triaged] https://launchpad.net/bugs/589521 | 00:25 |
fullermd | vila: Sorted out? What's wrong with things like they are? It sounds like a perfectly sensible schedule to me :p | 00:38 |
vila | fullermd: the truth is, the arguments with wife and daughters tend to be easier to manage under this schedule... | 00:40 |
fullermd | See? Come to the dark side... | 00:40 |
lifeless | mkanat: hey, you said the other day that b.l.n had the wrong theme? | 02:54 |
mkanat | lifeless: Yeah. | 02:54 |
lifeless | mkanat: I meant to ask what you mean by that, but got local interrupts | 02:54 |
mkanat | lifeless: I'll see if I can show you. | 02:54 |
mkanat | lifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore | 02:54 |
mkanat | lifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css | 02:55 |
lifeless | what css should it have? | 02:55 |
mkanat | lifeless: That CSS exists in the actual branch. | 02:55 |
mkanat | lifeless: But it is a 404 on bln. | 02:56 |
lifeless | let me just nab someone | 02:56 |
poolie | mkanat, i don't suppose you would have any time left to actually measure trunk (without historydb) vs 1.18 performance on a large branch? | 02:56 |
mkanat | poolie: I might. I have 2.15 hours left. | 02:57 |
mkanat | I mean, 2 hrs and 15 minutes. | 02:57 |
poolie | facts might bring a merciful death to this thread | 02:57 |
mkanat | poolie: I don't think anybody ever said that history_db was slower. | 02:57 |
poolie | robert thinks that john said it would be slower if the db was not enabled | 02:57 |
mkanat | poolie: I didn't see that on the list. | 02:58 |
lifeless | mkanat: jam said this: | 02:58 |
lifeless | Old: 10s, New: 30s - First request of a branch in a project that has not been previously seen. (slower) | 02:58 |
lifeless | (amongst other things) | 02:58 |
lifeless | note that none of our projects have been previously seen | 02:58 |
mkanat | lifeless: That's with history_db enabled or without? | 02:59 |
lifeless | and a 30s request will be killed | 02:59 |
lifeless | mkanat: with | 02:59 |
mkanat | lifeless: Okay, we're not talking about that situation, then. | 02:59 |
mkanat | lifeless: That situation would already be handled by what you'd do to enable history_db on codehosting. | 02:59 |
lifeless | mkanat: he also said that without the plugin, it uses plain bzr apis | 02:59 |
mkanat | lifeless: Which it does now. | 02:59 |
lifeless | so will be slower than before as well | 02:59 |
mkanat | lifeless: He said that there is some difference there from before history_db? Because the branch you're currently running uses plain bzr apis. | 03:00 |
mkanat | Oh, I may be seeing what he means. | 03:01 |
poolie | ? | 03:04 |
poolie | is it that it's removed caching code that would have helped in 1.18? | 03:04 |
mkanat | I'm still reading over the code. | 03:04 |
mkanat | poolie: I believe so, yes. | 03:04 |
mkanat | I'm not sure how much that actually matters, though. | 03:05 |
mkanat | What I don't know is how to turn off history_db to test. | 03:05 |
lifeless | remove the plugin | 03:05 |
lifeless | the historydb change is the following: | 03:05 |
mkanat | lifeless: It's a part of loggerhead. | 03:05 |
lifeless | - discard all the caching apis | 03:05 |
lifeless | - switch to raw bzrlib apis | 03:05 |
poolie | ok, so istm it would be a worthwhile use of time to measure whether trunk-minus-hdb is in fact slow | 03:05 |
jam | lifeless, mkanat: 30s is slower for the first request on a project when it has to convert the full ancestry to the cached form. Note that it can incrementally build the cache, so even if the thread is killed it will eventually succeed. | 03:05 |
lifeless | - have a *bzrlib* plugin that makes those apis faster. | 03:05 |
jam | Also note, it is only for Emacs-sized projects. bzr is a few seconds, IIRC | 03:06 |
mkanat | Yeah, launchpad also seems to be a few seconds, in my testing. | 03:06 |
mkanat | And yeah, even if history_db can't write to the disk, it will create its cache in memory. | 03:06 |
jam | history_db was pulled into the loggerhead codebase | 03:06 |
lifeless | jam: ah, k | 03:06 |
mkanat | So I suppose history_db is always enabled. | 03:07 |
jam | and 10s would become *every* request if you remove all the caching | 03:07 |
lifeless | so, this is really a bit of a distraction; unless it's a no-brainer (< a day of work, including getting up to speed), it's not suitable for a maintenance squad at this time. | 03:07 |
mkanat | jam: That would happen basically if I just don't specify "--cache", right? | 03:07 |
jam | mkanat: I don't remember from the command line, tbh | 03:07 |
poolie | the question is: is there anything you could do in ~1 day that will get it at least no worse than 1.18? | 03:07 |
mkanat | poolie: I suspect that for the actual average use case, it is already no worse than 1.18. | 03:08 |
mkanat | poolie: With the scale of codebrowse, it's likely that current stable will be rebuilding caches quite a bit. | 03:08 |
poolie | ok | 03:08 |
mkanat | poolie: This is just my suspicion, though, not a proven fact. | 03:08 |
jam | poolie: I think the most immediate thing would be to teach Launchpad to share caches between projects | 03:09 |
jam | at which point you'll hit a couple timeouts, but afterwards everything should be better | 03:09 |
poolie | mkanat, i'm wondering if we can do some measurements to get more confidence in whether that is true or not | 03:09 |
mkanat | poolie: Yeah, I could do that, for sure. | 03:10 |
poolie | jam, i know, but it seems like that is going to be more than a day of work | 03:10 |
mkanat | poolie: I have a loggerhead load tester already. | 03:10 |
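mkanat's load tester isn't shown in the log. A minimal sketch of that kind of tool, assuming a thread pool hammering a couple of hypothetical loggerhead URLs and printing per-request timings:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical loggerhead pages to hit; adjust to a real instance.
URLS = [
    "http://localhost:8080/example-branch/changes",
    "http://localhost:8080/example-branch/files",
]

def fetch(url):
    # Time a single request, reading the whole body as a browser would.
    start = time.time()
    with urlopen(url) as resp:
        resp.read()
    return url, time.time() - start

def run(concurrency=10, rounds=50):
    # Issue rounds * len(URLS) requests across a pool of worker threads.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for url, elapsed in pool.map(fetch, URLS * rounds):
            print("%6.3fs  %s" % (elapsed, url))

if __name__ == "__main__":
    run()
```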
jam | poolie: it *should* just be changing the path to not include the branch name, just the project name | 03:10 |
mkanat | poolie: Let me do a test right now. | 03:10 |
jam | won't be shared between users, but probably pretty good | 03:10 |
jam | but I don't fully understand the codehosting remapping, etc. | 03:10 |
lifeless | jam: would that permit leakage from private projects? | 03:10 |
jam | lifeless: revision id and revno stuff | 03:10 |
lifeless | e.g. if someone has the revid for a private project, makes that a ghost in their branch of the same project | 03:11 |
jam | no revision information | 03:11 |
jam | lifeless: the cache is just the graph of revnos and revision_ids, it doesn't cache the actual content | 03:11 |
lifeless | jam: revids are sensitive too, given they include email addresses | 03:11 |
jam | so if you have the revision id, then you have all the information you're going to get | 03:11 |
jam | lifeless: but you just said you *have* it | 03:11 |
lifeless | right, thats one scenario | 03:11 |
jam | since you are making it a ghost | 03:12 |
mkanat | Okay, so I think that disabling the on-disk caching of historydb involves commenting out two lines. | 03:12 |
lifeless | the other one is: is it likely to expose a revid when it shouldn't. | 03:12 |
mkanat | Then it should cache in memory. | 03:12 |
lifeless | e.g. due to bugs | 03:12 |
poolie | mkanat, let's try it! | 03:12 |
mkanat | poolie: Okay, going to try it now. | 03:12 |
lifeless | remember | 03:13 |
lifeless | to test it with a stacked branch and HTTP backend | 03:13 |
mkanat | poolie, lifeless, jam: Another option is to have codebrowse cache everything in one directory, in one file. | 03:13 |
lifeless | for apples to apples comparison | 03:13 |
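The log doesn't show how lifeless's "stacked branch and HTTP backend" test would be set up. One rough way to arrange it, sketched in Python with the bzr CLI step in a comment (paths, names, and the port are assumptions):

```python
# The bzr step is done with the CLI (branch name and URL are made up):
#   bzr branch --stacked <base-branch-url> stacked-test
# Then serve the enclosing directory over plain HTTP so loggerhead (or a
# test script) reads the stacked branch the way codebrowse would:
import http.server
import os
import socketserver

os.chdir("/path/to/repo")  # hypothetical directory containing stacked-test

with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    # Branches are now readable at http://localhost:8000/stacked-test
    httpd.serve_forever()
```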
poolie | well | 03:13 |
poolie | his hardware and load is undoubtedly not exactly the same | 03:13 |
mkanat | My load tester creates significantly more load than codebrowse experiences. | 03:14 |
poolie | but perhaps we can get some data | 03:14 |
mkanat | Okay, I can already tell you that if you disable the on-disk cache, yes, trunk is slow. | 03:14 |
mkanat | I don't even have to run the tester. | 03:14 |
mkanat | Getting information for ViewUI: 15.945 secs | 03:14 |
poolie | on emacs? | 03:15 |
mkanat | On launchpad. | 03:15 |
poolie | ok, and you're confident that's representative? | 03:16 |
poolie | if so, that means we can't just deploy trunk | 03:16 |
mkanat | jam: I imagine that if we used loggerhead's normal configuration where there's only one history_db.sql file for all branches combined, then it would become unmanageably large, on codehosting? | 03:16 |
poolie | we would need to do whatever it is to get historydb in use | 03:16 |
mkanat | poolie: Yeah--what I didn't realize was that history_db was always being enabled, in my earlier tests. | 03:17 |
mkanat | poolie: Even if you don't specify a cache directory, loggerhead creates a temporary one when you start it. | 03:17 |
poolie | and the question then is: can that task be made sufficiently small to do it ahead of doing other loggerhead work | 03:17 |
mkanat | poolie: It's all infrastructural work on codehosting, AIUI. | 03:18 |
poolie | so it does cache to disk at the moment? but in a different form | 03:18 |
mkanat | poolie: No, your current stable loggerhead does not cache to disk. | 03:18 |
mkanat | poolie: But trunk always does. | 03:18 |
lifeless | uhm | 03:19 |
lifeless | My understanding is that it uses sqlite caches still | 03:19 |
lifeless | mwhudson: ^ | 03:19 |
mkanat | lifeless: It *can*. | 03:19 |
mkanat | lifeless: I don't think codebrowse has that enabled. | 03:19 |
mwhudson | i'm pretty sure it does | 03:20 |
lifeless | so am I | 03:20 |
poolie | it seems like changing it to write historydb caches into the same place should not be too traumatic | 03:20 |
mkanat | Yeah. | 03:20 |
jam | mkanat: I don't think it would become unmanageable, but probably bigger, and after that first attempt, probably faster. | 03:20 |
mkanat | Actually, that wouldn't be a problem at all, if you are already using the on-disk cache. | 03:20 |
poolie | jam, so it would be better to use larger caching scopes, but doing it per branch would be ok? | 03:20 |
poolie | it won't die on the first access? | 03:20 |
mkanat | Yeah, so you could actually just deploy history_db now. | 03:20 |
jam | mkanat: no, loggerhead always cached to disk, it always created a temp one, even before history_db | 03:20 |
mkanat | jam: Hmm, except that I could always make loggerhead re-generate its branch caches before history_db, with enough simultaneous access. | 03:21 |
mkanat | jam: It looked to me like it was only using the lru_cache. | 03:21 |
mkanat | (Unless you configured it to use the on-disk cache.) | 03:21 |
jam | mkanat: it just happened to regen the cache all the time | 03:21 |
jam | if you don't specify a cache, one is created that only lasts the lifetime of the process, and then is thrown out | 03:22 |
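A minimal sketch of the lifetime jam describes here - not loggerhead's actual code - where an unconfigured cache becomes a throwaway directory that dies with the process:

```python
import atexit
import shutil
import tempfile

def get_cache_path(configured_path=None):
    if configured_path:
        # Explicitly configured: persistent, survives process restarts.
        return configured_path
    # Nothing configured: make a throwaway directory for this process only.
    tmp = tempfile.mkdtemp(prefix="loggerhead-cache-")
    atexit.register(shutil.rmtree, tmp, ignore_errors=True)
    return tmp
```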
mkanat | jam: Oh indeed. | 03:22 |
poolie | (separately, lifeless, is deletion of the /beta api near enough we should worry?) | 03:22 |
jam | poolie: I have to double check the code a bit, but I believe it will populate the cache file incrementally, so even if it was killed on the first request, the second one will succeed for any given branch | 03:23 |
poolie | or maybe the 18th? | 03:23 |
lifeless | poolie: in principle we're not adding anything to it | 03:23 |
mkanat | jam: In that case, history_db sounds like a net win with little infrastructural work required. | 03:23 |
lifeless | jam: well the second request may be to a different instance | 03:23 |
jam | lifeless: but the disk cache is in the same location | 03:23 |
jam | aiui | 03:23 |
lifeless | I find the hand wave on infrastructure work scary. | 03:23 |
jam | using the same cache that loggerhead is using today | 03:24 |
mkanat | lifeless: Why would you? | 03:24 |
mkanat | lifeless: It's just a new file in the same directory. | 03:24 |
mkanat | lifeless: That's the extent of the change that would happen. | 03:24 |
poolie | jam: is historydb broken or unfinished or dangerous in any regard? | 03:24 |
mkanat | lifeless: Out of curiosity, do both processes currently use the same on-disk cache directory? | 03:25 |
lifeless | yes | 03:25 |
lifeless | they do | 03:25 |
jam | poolie: I don't know of anything specifically broken | 03:25 |
jam | caveat code has bugs | 03:25 |
lifeless | which is one thing jam highlighted as being a concern | 03:25 |
poolie | sure | 03:25 |
jam | probably nothing worse than the existing code | 03:25 |
poolie | and rollouts are hard | 03:25 |
poolie | but, we still need to get through them | 03:25 |
lifeless | mkanat: I find it scary because handwaving around a bit of core plumbing is somewhat cavalier | 03:25 |
lifeless | mkanat: when one considers all the bits that can - and have - gone wrong in the past. | 03:26 |
mkanat | lifeless: Are you talking about infrastructure or about the internal architecture of loggerhead? | 03:26 |
lifeless | mkanat: infrastructure | 03:26 |
lifeless | and architecture | 03:26 |
mkanat | lifeless: Okay. Are you using (or do you have) sqlite 3.7? If you use a WAL, I bet both processes would be fine accessing the same cache. | 03:27 |
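For reference, write-ahead logging arrived in SQLite 3.7.0 and is switched on per-database with a pragma, which is what mkanat is suggesting for two processes sharing one cache. A minimal sketch (the cache filename is an assumption):

```python
import sqlite3

# Filename is illustrative; point this at the shared cache database.
conn = sqlite3.connect("historydb-cache.sql")
# WAL needs SQLite >= 3.7.0; the pragma returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
assert mode == "wal", "WAL unavailable - SQLite too old?"
```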
lifeless | I haven't looked at the risk of disclosure for private branches other than my brief queries here | 03:27 |
mkanat | lifeless: Risk of disclosure in what way? | 03:27 |
mkanat | lifeless: history_db, when we're just talking about loggerhead, is only a backend change. | 03:27 |
lifeless | bugs disclosing revision ids not in the branch being accessed | 03:27 |
poolie | but, i think at the moment we're not talking about having a shared cache? | 03:28 |
lifeless | mkanat: I realise that its a backend change. | 03:28 |
lifeless | poolie: one of jam's caveats was slower initial scans w/out a shared cache. | 03:28 |
mkanat | But we would have a shared cache. | 03:28 |
lifeless | sigh | 03:28 |
mkanat | Because we'd have only one cache. | 03:28 |
lifeless | I'll come back to this, I feel that the point is ....> over there | 03:29 |
lifeless | doing a bunch of work to assess whether it's small enough, based on guesses, is at best a guess. Right now, I still wouldn't confidently ask curtis or julian to take this on within maintenance scope | 03:29 |
mkanat | lifeless: It doesn't sound like any code work is required. | 03:30 |
lifeless | if I'm over-concerned about the risk, it's because of a history of problems. | 03:30 |
poolie | it's like a major version bump in a dependency | 03:30 |
poolie | perhaps one that has caused trouble in the past | 03:30 |
mkanat | Yeah, sure. I mean, there's still the testing thing that I just talked about on the list. | 03:30 |
mkanat | I'm fully with you on that. | 03:30 |
lifeless | mkanat: I don't know that I've claimed code work; the same engineers do code and integration - everything except deploy & server ops. | 03:30 |
lifeless | mkanat: it's an integrated team leading right up to the ops team | 03:30 |
mkanat | lifeless: Okay. What do you mean by "integration"? | 03:31 |
lifeless | for me to say to curtis or julian that it's in scope for maintenance, I'd want to be quite confident that there is < 12 hours work assessing, measuring, qaing on staging, deploying & dealing with the fallout. | 03:31 |
poolie | lifeless, how would you handle an upgrade to some other component we depend upon? | 03:32 |
poolie | there may well be some that are over 12h | 03:32 |
lifeless | poolie: if it's small, a maintenance team just does it. If it's not, a feature squad gets assigned it - as will happen to python2.7 | 03:32 |
poolie | ok | 03:33 |
lifeless | now, some devs may scratch an itch and get python2.7 workable on dev machines, but actually upgrading prod etc requires considerable care. | 03:33 |
poolie | sure | 03:33 |
lifeless | 12h is an arbitrary figure | 03:33 |
poolie | ubuntu upgrades are another case that comes to mind | 03:33 |
poolie | istm we have the option of trundling along 1.18.x until we get time to safely do a bigger deployment | 03:33 |
lifeless | right, they were traumatic, and we're doing them differently next time - separating OS and dependencies; we'll go to py 2.7 *before* the next OS upgrade. | 03:34 |
lifeless | so that we're in place. | 03:34 |
poolie | but we'd probably prefer not to do major development that's duplicating what's been done in an upstream later release | 03:34 |
lifeless | poolie: I don't see that as an option; I see that as the /only/ option given the info I've got about what's up. | 03:34 |
poolie | max has said, rightly or wrongly, that he doesn't think there's any more low hanging fruit for performance | 03:34 |
poolie | so, i don't think we can usefully schedule small blocks on this | 03:35 |
mkanat | Talking about actual development of loggerhead, that is. | 03:35 |
mkanat | Which would be a separate issue from rolling out this one. | 03:35 |
poolie | and if we're going to schedule a big block, we might as well start it by getting historydb running | 03:35 |
mkanat | lifeless: Without tests, I can't say how much time deployment would take. But if you also start changing *any* branch significantly, my answer would be the same. | 03:35 |
poolie | since we know that will probably help alot | 03:35 |
mkanat | lifeless: If everything goes fine, I suspect it will take no longer than a normal rollout, perhaps an hour or two more. | 03:36 |
mkanat | lifeless: The worst that could happen--that I can predict now--is that the two processes may need separate cache directories so that they don't lock each other out. | 03:36 |
lifeless | mkanat: so, I'm aiming at 10 minute downtime windows; that's a factor of 12 more than 'ok' | 03:36 |
lifeless | mkanat: or maybe you don't mean downtime window ? | 03:37 |
mkanat | lifeless: I don't see any downtime happening. | 03:37 |
mkanat | lifeless: Yeah, I mean total maintenance work time. | 03:37 |
mkanat | lifeless: To do the rollout. | 03:37 |
poolie | it seems like we should easily be able to run it in parallel | 03:37 |
poolie | since it's readonly etc | 03:37 |
mkanat | poolie: history_db gets written to quite a bit. | 03:37 |
lifeless | all this is fine | 03:37 |
lifeless | BUT ITS WORK. | 03:38 |
poolie | logically readonly | 03:38 |
lifeless | who is going to do it? what if the guess is wrong? what's the impact on load? will the machine thrash? | 03:38 |
mkanat | poolie: Yeah. | 03:38 |
poolie | indeed | 03:38 |
mkanat | lifeless: The machine should use considerably less memory with history_db than it does now. | 03:38 |
lifeless | I mean, I'm totally fine with whatever plan, my point has *never* been that it's crap, it's been that *we need to do work to make this usable in prod* | 03:38 |
lifeless | effort | 03:38 |
lifeless | time * energy. | 03:38 |
poolie | lifeless, the point is, i don't see that there is any other sensible next step other than deploying this | 03:38 |
mkanat | lifeless: In an ideal situation, the rollout requires no attention from anybody--it would just be a standard merge and push. | 03:38 |
lifeless | which btw, this conversation is sapping for me. | 03:38 |
poolie | i'm not saying you have to do it right now | 03:38 |
poolie | but there's no point discussing how to make it possible to do other things off 1.18 | 03:39 |
mkanat | lifeless: I totally understand. :-) | 03:39 |
lifeless | poolie: But that's not what I argued *for*. I argued that other than oops fixing - and there are plenty of sensible things to do in the current lp branch - we're not going to be undertaking major works for 6-9 months, based on my knowledge of jml's pipeline. | 03:40 |
poolie | ok | 03:40 |
poolie | so again we come back to assumptions | 03:40 |
lifeless | do you agree that nearly 20K unhandled exceptions a day is a little bad? | 03:40 |
poolie | max thinks it is not easy to fix oopses in the current branch | 03:40 |
mkanat | Ah, I wouldn't exactly say that. | 03:40 |
lifeless | and do you agree that historydb is an *operational unknown*? | 03:40 |
mkanat | It was easy to fix the "no such revision" one. | 03:40 |
mkanat | lifeless: Actually, now that I understand it, I only agree minimally. | 03:41 |
mkanat | lifeless: Because codebrowse is already using on-disk caches. | 03:41 |
poolie | lifeless, i'm only spending time on this because i think it needs to be improved and i want to see us take the best course on it | 03:41 |
mkanat | lifeless: This is just another on-disk cache. | 03:41 |
lifeless | mkanat: which we have several deployment options for | 03:41 |
lifeless | a) per branch | 03:41 |
lifeless | b) shared cross branch | 03:41 |
lifeless | c) memory only | 03:41 |
lifeless | d) huge one | 03:41 |
lifeless | e) ...? | 03:41 |
mkanat | lifeless: Right, and (d) requires no code work at all. | 03:42 |
lifeless | we've seen locking issues with sqlite already | 03:42 |
poolie | so we have some undelivered work in trunk | 03:42 |
poolie | we all agree that actually deploying it will take work | 03:42 |
lifeless | a) would be equivalent to our current cache story in terms of disk, but higher upfront scans and no benefit cross-branch | 03:42 |
lifeless | etc | 03:42 |
mkanat | lifeless: (d) is essentially what codebrowse is doing right now. | 03:42 |
lifeless | mkanat: I thought a) was | 03:43 |
mkanat | lifeless: Nope--one file in one directory. | 03:43 |
lifeless | mwhudson: ^ cache per branch, right ? | 03:43 |
mwhudson | there would be little point in doing (a) | 03:43 |
mwhudson | lifeless: currently? | 03:43 |
lifeless | yes | 03:43 |
mwhudson | not sure actually | 03:43 |
* mwhudson checks | 03:43 | |
mkanat | lifeless: At least, AIUI. | 03:43 |
poolie | mwh, for my education, where are you checking to find out? | 03:43 |
mwhudson | poolie: lib/lp/launchpad_loggerhead/app.py in the launchpad tree | 03:44 |
mkanat | Oh right, launchpad-loggerhead. | 03:44 |
poolie | mwhudson, i don't have that directory... | 03:45 |
mwhudson | lifeless: it's one per branch currently | 03:45 |
mwhudson | poolie: -lp/ sorry | 03:45 |
mkanat | poolie: It's a separate project, lp:launchpad-loggerhead | 03:45 |
mwhudson | mkanat: not any more it's not | 03:45 |
mkanat | mwhudson: Oh! didn't know. | 03:45 |
lifeless | mwhudson: I don't have that directory, for whatever reason | 03:45 |
mwhudson | poolie: lib/launchpad_loggerhead/app.py | 03:45 |
lifeless | right | 03:46 |
lifeless | so, back to facts, we're deployed with (a) today | 03:46 |
lifeless | shared between two instances | 03:46 |
mkanat | lifeless: Ahh. | 03:46 |
lifeless | with some threads each | 03:46 |
mkanat | Yes, in that case, history_db would be different. | 03:46 |
poolie | ok, and are you saying that historydb is going to be slower in that setup? | 03:46 |
mkanat | poolie: No, just different. | 03:46 |
mwhudson | lifeless: changing to one cache per <whatever> would be very easy | 03:47 |
lifeless | jam says initial scan is thrice the time, subsequent operations are faster | 03:47 |
lifeless | mwhudson: I know, but it's all time | 03:47 |
lifeless | mwhudson: for a maintenance squad without your context | 03:47 |
lifeless | mwhudson: who will be writing up what they figure out like crazy | 03:47 |
mwhudson | lifeless: less time than this conversation has been going on for | 03:48 |
mwhudson | lifeless: sure, they are welcome to talk to me :) | 03:48 |
lifeless | mwhudson: sure feels like it; or are you serious ? | 03:48 |
mwhudson | lifeless: sorry, not quite sure i understand | 03:49 |
lifeless | mwhudson: 'less time than this conversation has been going on for' | 03:49 |
mkanat | lifeless: I'm pretty sure he was serious. | 03:49 |
mwhudson | lifeless: yes, i was serious about that | 03:49 |
lifeless | ok | 03:49 |
mwhudson | it's a few lines | 03:49 |
lifeless | + a test run :) | 03:50 |
lifeless | + qa to make sure it behaves | 03:50 |
lifeless | anyhow | 03:50 |
mwhudson | yeah | 03:50 |
mwhudson | sadly, a test run won't make any difference, but yes | 03:50 |
lifeless | poolie: so what's your proposal that's different to what I've suggested? | 03:51 |
mkanat | mwhudson: Where's the on-disk cache setup in app.py? I only see "self.graph_cache = lru_cache.LRUCache(10)". | 03:51 |
mwhudson | lifeless: this diff will switch to using a single cache http://pastebin.ubuntu.com/558839/ | 03:51 |
lifeless | poolie: or are you trying to say I've got a bad axiom or assumption? | 03:51 |
poolie | lifeless: keep both branches as they are | 03:51 |
lifeless | mkanat: cachepath = | 03:51 |
mkanat | lifeless: Ah, thanks. | 03:51 |
poolie | if there's genuinely low fruit in 1.18, do that | 03:51 |
poolie | file a bug saying "should upgrade to trunk" | 03:51 |
poolie | do that in a safe way, but don't be scared of it | 03:51 |
mwhudson | mkanat: the BranchWSGIApp constructor | 03:51 |
mkanat | mwhudson: Yeah, I found it. :-) | 03:51 |
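The pastebin diff itself isn't reproduced in the log. A sketch, with assumed names, of the kind of one-line change being discussed in a BranchWSGIApp-like constructor - per-branch cache paths versus the single shared cache mwhudson's diff switches to:

```python
import os

CACHE_ROOT = "/var/tmp/codebrowse-caches"  # hypothetical location

def cache_path(branch_id, shared=False):
    # shared=False: today's layout, one sqlite cache per branch.
    # shared=True: the single shared cache the diff switches to.
    return os.path.join(CACHE_ROOT, "shared" if shared else branch_id)
```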
lifeless | poolie: I'm not scared of it; honestly. | 03:52 |
lifeless | I'm assessing how long it's taken mwhudson to do this in the past | 03:52 |
lifeless | the things that have gone wrong in the past | 03:52 |
mkanat | lifeless: Okay, I'm starting to agree with you more now that I see how codebrowse currently runs. | 03:52 |
lifeless | and our pipeline of stakeholder requests | 03:52 |
mkanat | lifeless: I was under the impression that it was doing (d). | 03:52 |
poolie | it's fine if it stays in the pipeline until it's a priority | 03:52 |
lifeless | mkanat: god no, it would be flattened in an instant | 03:52 |
mkanat | lifeless: That's sort of what I thought, yeah. :-) | 03:53 |
mwhudson | lifeless: i'm not so sure about that actually | 03:53 |
lifeless | mwhudson: that was humour | 03:53 |
mwhudson | you might have some locking related fun | 03:53 |
mwhudson | ok | 03:53 |
lifeless | I'm pretty sure we do benefit from the caches | 03:53 |
mkanat | lifeless: That's possibly resolvable by WAL, but that would indeed be infrastructural and integration work for sure. | 03:53 |
lifeless | but I'm also sure there is friction in there | 03:53 |
lifeless | which needs time and attention to detail | 03:54 |
mkanat | lifeless: Okay. Out of curiosity, are you sure because you have tracebacks, or just because you suspect? | 03:54 |
lifeless | poolie: I don't think putting it in the pipeline as a given makes sense; saying 'do something to make codebrowse better' is something we should put in the pipeline; one concrete thing we can do there is productionise and deploy history db | 03:54 |
lifeless | mkanat: have seen the odd thing go by | 03:55 |
mkanat | lifeless: Okay. | 03:55 |
lifeless | mkanat: such as sqlite lock contention errors | 03:55 |
poolie | it's essentially already on the kanban towards the right hand side | 03:55 |
lifeless | poolie: I have to disagree. | 03:55 |
mkanat | lifeless: I think that it would probably be sensible and manageable to do oops fixing on trunk and 1.18 both simultaneously, and then schedule trunk rollout for a few months away. | 03:55 |
mkanat | Or on ~launchpad-pqm/devel and trunk. | 03:55 |
lifeless | poolie: there's a bunch of work been done on it, yes. But no team has been pushing it forward; it's a new situation for whoever comes along. | 03:56 |
poolie | lifeless, because... it's not that far to the right if deployment hasn't been thoroughly considered? | 03:56 |
mkanat | lifeless: If you limit the changes to just oops-fixing until trunk is ready, I think you'd be fine. | 03:56 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=fixing | 03:56 |
poolie | if jam's willing (not too scarred by lp-serve) we could ask him to do it with help from mwh and it would not be new | 03:56 |
lifeless | poolie: that, plus what I said, plus the work done hasn't been done with a view to the whole lp story around this; it's been somewhat siloed off | 03:57 |
lifeless | poolie: if you want to do that, that would be awesome; don't underestimate it though - I mean, you know how gung ho I am about stuff generally, right ? | 03:57 |
poolie | heh | 03:58 |
lifeless | poolie: this is *it can be done and there is a pie on the line collins* being concerned. | 03:58 |
poolie | :) | 03:58 |
mkanat | lifeless: Given codebrowse's history, I don't blame you. | 03:58 |
poolie | so in a way i would be ok with saying "this is really not a good fit, let's throw it out" | 03:58 |
mkanat | lifeless: Also given the fact that right now, it's far more stable than it's ever been. | 03:58 |
poolie | ideally, if we had a specific reason for that | 03:58 |
lifeless | so if you want to do that, say so; if we leave trunk as-is, I'll set up a separate project to get code reviews and bugs visible to the lp team until we're converged. | 03:58 |
poolie | but, if it's done and we don't think there's anything actually wrong, we just find it hard to deploy | 03:59 |
poolie | that just seems like a bad reason | 03:59 |
poolie | if true, it seems to mean some bugs can just never be fixed :( | 04:00 |
lifeless | poolie: we have a similar set of reliability and robustness concerns around soyuz | 04:00 |
poolie | do we ultimately want to get historydb running or not? | 04:01 |
mkanat | poolie: I would think so. | 04:01 |
poolie | i'd be fine with deciding that it was just a spike experiment and we want to take another go at it, if that's what we really think | 04:01 |
lifeless | compared to loggerhead, which has - I think - one person familiar with it still on the lp team, soyuz has 4 folk familiar with it, so we can actually react quickly to issues | 04:01 |
lifeless | from my perspective, it's an experiment until either a) a feature squad is given codebrowse improvements as a card, and agrees that historydb is the way forward, or b) someone else does the work necessary to get it live | 04:02 |
mkanat | It's too bad I don't have enough time left to stick around and help with the rollout. | 04:02 |
lifeless | I can't prejudge what the feature squad engineers will decide makes sense | 04:02 |
lifeless | [well I can, but I'd hate it if someone did that to me, so I don't do it to them] | 04:03 |
poolie | mkanat, it seems like organizing rollouts is a pretty hard thing to do unless you are actually full time on the lp team | 04:03 |
mkanat | lifeless: That's understandable. That might just be re-having a conversation that's already been had, though, from back when jam originally proposed history_db. | 04:03 |
lifeless | mkanat: it may be a nobrainer | 04:03 |
mkanat | lifeless: Sure. But I understand that the specific application to LP is still something to discuss. | 04:03 |
lifeless | I'm just being clear about what I can commit to confidently. | 04:03 |
mkanat | lifeless: Sure, I totally understand. | 04:03 |
mkanat | The primary thing that a designer has to deal with is the fact of uncertainty. | 04:03 |
lifeless | yup :) | 04:04 |
poolie | are there any concerns about trunk other than that the deployment may be very bumpy (and i realize that may be a very large conrcern) | 04:04 |
mkanat | Yeah. It's something that a lot of people don't understand. They try to predict an unpredictable future. :-) | 04:04 |
mkanat | poolie: Unpredictable problems that I can't imagine now, based on the fact that there have been a lot of changes. | 04:04 |
lifeless | poolie: I haven't done a rev-by-rev review because I think that's a waste of time. | 04:04 |
poolie | s/bumpy/& laborious whatever | 04:04 |
lifeless | poolie: once the card size is assessed as being > maintenance bucket, it goes into the queue. | 04:04 |
lifeless | poolie: history db alone is enough to do that for me | 04:05 |
poolie | ok | 04:05 |
poolie | so i propose to file a bug saying that historydb ought to be tried out and deployed | 04:05 |
poolie | if in the course of trying it out, it seems that it is not a good solution, or too much work, or whatever | 04:06 |
poolie | we can mark the bug wontfix and then pull it out of loggerhead's trunk | 04:06 |
mkanat | poolie: Where would it be tried out, though? There's no edge. | 04:06 |
lifeless | who will do the trying out ? | 04:06 |
lifeless | mkanat: we do have bazaar.qastaging.launchpad.net | 04:06 |
mkanat | lifeless: Oh, okay. | 04:06 |
lifeless | thats in the qa pipeline | 04:07 |
mkanat | lifeless: Which...is a redirect loop. | 04:07 |
poolie | do you mean 'who will work on the bug' or 'how will we test it'? | 04:07 |
lifeless | poolie: who, now how. | 04:07 |
lifeless | s/now/not/ | 04:07 |
poolie | does that matter? | 04:08 |
poolie | it's a thing we can do to make lp better | 04:08 |
poolie | either a squad will do it, or someone else | 04:08 |
lifeless | well | 04:08 |
lifeless | from my perspective this is a feature bug as I've said many times before | 04:08 |
lifeless | that means it won't get a maintenance squad timeslice | 04:08 |
mkanat | lifeless: You know...there is another option, here. | 04:09 |
mkanat | lifeless: You could just live with the OOPSes until you switch to trunk. | 04:09 |
lifeless | I'm not clear what you want to have happen, given the constraints we have. | 04:09 |
poolie | we are getting timeouts on some pages | 04:09 |
lifeless | mkanat: I'm particularly unhappy with that idea because they generally mean a user has had a bad experience | 04:09 |
mkanat | lifeless: That's true. BTW, do you have analysis of them yet? | 04:10 |
poolie | it seems at least possible that the most efficient way to fix them is to move to loggerhead trunk | 04:10 |
lifeless | mkanat: yes, the connection reset one is 16K a day | 04:10 |
poolie | or, there may be other fixes | 04:10 |
lifeless | mkanat: the others are largely below the fold | 04:10 |
mkanat | lifeless: Oh! That just happens when somebody closes their browser. | 04:10 |
lifeless | mkanat: yes | 04:10 |
mkanat | lifeless: Okay. So that would be the primary one to fix, and the others could wait. | 04:10 |
poolie | really? | 04:11 |
lifeless | mkanat: yes | 04:11 |
mkanat | lifeless: Okay. That would be easy to fix both on trunk and 1.18. | 04:11 |
poolie | i mean obviously, it can happen there, but is that really happening 16k/day? | 04:11 |
lifeless | next highest one is | 04:11 |
lifeless | 61 error: [Errno 104] Connection reset by peer | 04:11 |
lifeless | 31 http://0.0.0.0:8081/%7Evcs-imports/busybox/main/changes (Unknown) | 04:11 |
lifeless | OOPS-1852CBB1307, OOPS-1852CBB1810, OOPS-1852CBB188, OOPS-1852CBB1990, OOPS-1852CBB2626 | 04:11 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1307 | 04:11 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1810 | 04:11 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB188 | 04:11 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB1990 | 04:11 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBB2626 | 04:11 |
lifeless | poolie: its in the daily reports | 04:11 |
mkanat | lifeless: That's the stability-checking script that the losas run. | 04:11 |
poolie | what i mean is, there are other things that can generate econnreset | 04:11 |
lifeless | poolie: the cause may be misanalysed, but | 04:12 |
poolie | or eof | 04:12 |
lifeless | 15332 error: [Errno 32] Broken pipe | 04:12 |
lifeless | Bug: https://launchpad.net/bugs/701329 | 04:12 |
lifeless | Other: 15332 Robots: 0 Local: 0 | 04:12 |
lifeless | 7604 http://0.0.0.0:8080/%7Evcs-imports/busybox/main/changes (Unknown) | 04:12 |
lifeless | OOPS-1852CBA1, OOPS-1852CBA10, OOPS-1852CBA100, OOPS-1852CBA1000, OOPS-1852CBA1001 | 04:12 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1 | 04:12 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA10 | 04:12 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA100 | 04:12 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1000 | 04:12 |
ubot5 | https://lp-oops.canonical.com/oops.py/?oopsid=1852CBA1001 | 04:12 |
mkanat | poolie: It would happen with bots. | 04:12 |
lifeless | so we see 16K +- 2K a day of errno 32, and < 100 of errno 104 | 04:13 |
lifeless | fixing the 16K one *may* expose some timeout cases | 04:13 |
lifeless | this is another reason to just stay focused on stabilisation | 04:13 |
* mkanat nods. | 04:13 | |
lifeless | poolie: so it seems to me you want to push this branch forward | 04:13 |
poolie | so it seems likely to me that the epipe error is loggerhead seeing the connection closed by haproxy | 04:13 |
poolie | we don't really know what caused haproxy to close it | 04:14 |
lifeless | what I'll do then is make a new project and pull the lp loggerhead stuff into it, which will give changes to that branch a home for review, and bugs a home per the lp policy | 04:14 |
mkanat | poolie: Yeah, that sounds likely right. | 04:14 |
poolie | (incidentally log correlation stuff would be useful here, but maybe just less(1) is enough) | 04:14 |
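The actual fix for bug 701329 isn't shown in the log. A sketch of one common way to keep client disconnects (EPIPE/ECONNRESET when haproxy or a browser drops the connection mid-response) from being recorded as OOPSes, assuming a WSGI middleware wrapper rather than whatever loggerhead actually did:

```python
import errno
import socket

def swallow_disconnects(app):
    # WSGI middleware: iterate the wrapped app's response and drop the
    # exception if the failure is just the client going away.
    def wrapper(environ, start_response):
        try:
            for chunk in app(environ, start_response):
                yield chunk
        except (socket.error, IOError) as e:
            if getattr(e, "errno", None) not in (errno.EPIPE, errno.ECONNRESET):
                raise  # anything else is a real error and should still OOPS
    return wrapper
```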
lifeless | if and when trunk gets into lp we can consolidate the projects again | 04:14 |
poolie | lifeless, yeah, i would like to push it forward | 04:14 |
lifeless | I've spent enough time on this discussion, I'm going to stop now. We're not going forward, and the bits I'm describing don't affect anyone else. | 04:14 |
poolie | (copious time and all that ) | 04:14 |
poolie | well, i'm sorry it was frustrating | 04:15 |
poolie | i think it was actually useful to get a few things explicit where people had different ideas in their head | 04:15 |
mkanat | lifeless: I think that just fixing the oopses on the LP branch seems a reasonable solution. | 04:15 |
lifeless | the frustrating bit is the handwave around resources | 04:15 |
lifeless | mkanat: thats what I've been asking for! :) | 04:15 |
lifeless | mkanat: I've been *offering* lp developers doing maintenance and triage on the main loggerhead project | 04:15 |
poolie | well, i've watched/tried to help jam spend ~6m getting his work deployed | 04:15 |
lifeless | but not with the unknown in trunk | 04:15 |
mkanat | lifeless: Yeah, that seems fine. I don't even think that most of those fixes need to go into loggerhead trunk. | 04:16 |
poolie | so i may not be pessimistic _enough_, but i think i'm at least fairly pessimistic | 04:16 |
mkanat | lifeless: At least, the oops fixes. | 04:16 |
lifeless | mkanat: I doubt that that will happen with trunk as different as it is :(. | 04:16 |
lifeless | we'll see | 04:16 |
mkanat | lifeless: Yeah. As long as you just stick to OOPS fixing, though, I don't think trunk would be missing out. | 04:17 |
lifeless | mkanat: it builds up debt | 04:17 |
mkanat | lifeless: And then when the feature squad has time in six months or so, merging them back in as appropriate could be done. | 04:17 |
mkanat | lifeless: True. | 04:17 |
lifeless | mkanat: at a certain point trunk will then be unfeasible to merge | 04:17 |
mkanat | lifeless: Well, hopefully we'd just be talking primarily about two fixes. | 04:17 |
lifeless | I don't know if that will happen or not; I hope it doesn't. | 04:17 |
mkanat | lifeless: Which should be small patches. | 04:17 |
lifeless | my bigger concern is community | 04:18 |
poolie | i will at least have a look at bug 701329 | 04:18 |
lifeless | no one will be maintaining loggerhead per se | 04:18 |
ubot5 | Launchpad bug 701329 in Launchpad itself "error: [Errno 32] Broken pipe" [Critical,Triaged] https://launchpad.net/bugs/701329 | 04:18 |
mkanat | lifeless: Yeah. And there are outstanding review requests. | 04:18 |
lifeless | right | 04:18 |
lifeless | which I was aiming to solve by cunning :) | 04:18 |
poolie | and the other | 04:18 |
mkanat | lifeless: Ahhh, I see. :-) | 04:18 |
lifeless | mkanat: by having lp developers maintain the project, the various queues would be driven to zero: bug triage, code review | 04:18 |
mkanat | lifeless: Right. | 04:19 |
lifeless | mkanat: but doing that is contingent on it being straightforward, which an unusable trunk is not. (Yes, I know the nuance there) | 04:19 |
* mkanat nods. | 04:19 | |
mkanat | lifeless: I think we'll just have to delay that for 6-9 months. | 04:19 |
lifeless | so, i think this is a bit sad. Shrug. | 04:19 |
mkanat | lifeless: And then at that point it can happen. | 04:19 |
poolie | canonical put time & money into historydb | 04:20 |
poolie | i really think we should either clearly decide it was wrong, or finish it | 04:20 |
mkanat | poolie: I think we're all in agreement about that. | 04:20 |
mkanat | poolie: It's just a question of when it will happen. | 04:20 |
lifeless | are we not allowed to say 'it was a spike, it's not finished, put it to the side and perhaps later' ? | 04:21 |
lifeless | thats what francis was saying | 04:21 |
mkanat | lifeless: I think unmerging it would revert a lot of good refactoring that happened along with it. | 04:21 |
lifeless | mkanat: I agree, I think thats a shame. But perhaps better than 6-9 months without maintainer? | 04:21 |
mkanat | lifeless: Hmm. | 04:22 |
poolie | lifeless, well, yes, we're allowed, but | 04:22 |
lifeless | besides, it's not lost per se; it's mergeable if/when someone decides they have the time. | 04:22 |
poolie | things being hard to change doesn't seem like a very satisfying reason to do that | 04:22 |
lifeless | this is all about resourcing. | 04:22 |
mkanat | lifeless: I think 6-9 months without a maintainer would be no different than it was in 2010. | 04:22 |
poolie | i'd be happier to say that if it turned out the design was actually bad | 04:22 |
lifeless | it's a sunk cost | 04:22 |
lifeless | it wasn't resourced with a budget up front | 04:23 |
poolie | it's true it's a sunk cost | 04:23 |
lifeless | spm: 15:55 < mkanat> lifeless: This page has the wrong CSS: http://bazaar.launchpad.net/~loggerhead-team/loggerhead/trunk-rich/view/head:/.bzrignore | 04:23 |
lifeless | 15:55 < mkanat> lifeless: Because this file is in fact missing: http://bazaar.launchpad.net/static/css/view.css | 04:23 |
lifeless | 15:55 < lifeless> what css should it have? | 04:23 |
poolie | the question is, now that the code is there, does the cost of deploying it exceed the benefit we would get from deploying it? | 04:23 |
lifeless | 15:56 < mkanat> lifeless: That CSS exists in the actual branch. | 04:23 |
poolie | if deployment is very hard that may well be true | 04:23 |
lifeless | spm: can you look at how the /static/ path is accessed and see if perhaps we have something bong going on? | 04:23 |
spm | okidoki. | 04:24 |
lifeless | poolie: I don't see that that's the question. it's a question, sure, but *the* question is, where does completing that work fall in the queue for the various folk that might do it. | 04:24 |
mkanat | lifeless, poolie: My vote is to fix the two OOPSes on ~launchpad-pqm/loggerhead/devel, then wait 6-9 months, stabilize trunk, and then have the LP team maintain trunk. | 04:24 |
lifeless | poolie: it's 6-9 months away for LP; even if it was the very best thing since sliced bread it would still be 6-9 months away | 04:25 |
spm | actually... I might even be able to hazard a guess as to what/where/why this fault exists.... | 04:25 |
lifeless | poolie: *unless* it becomes a stakeholder priority, *or* is small enough for a maintenance squad to handle. | 04:25 |
mkanat | lifeless: I don't think it's a terrific emergency. | 04:25 |
lifeless | poolie: I've not argued its wrong or bad or whatever. But polish and doing it well matters a great deal. | 04:25 |
mkanat | lifeless: codebrowse sucks far less than it used to, already. | 04:25 |
poolie | i guess the axiom i have here is that zero timeouts is a priority, and that historydb is said to be the best way to get there | 04:26 |
lifeless | because if we get it wrong the costs are huge | 04:26 |
poolie | in that case it should not be shelved for so long | 04:26 |
poolie | perhaps there are less-risky ways to get there | 04:26 |
mkanat | poolie: Yeah. I think that "priority" in this case simply means 6-9 months. | 04:26 |
lifeless | zero timeouts are a policy | 04:26 |
mkanat | poolie: I think the least risky way would be to hire somebody to work on loggerhead. | 04:26 |
poolie | even just re-doing similar work but with stepwise deployments are a good idea | 04:26 |
lifeless | and I'm keen on those | 04:26 |
lifeless | things that are too big go into the feature queue to do, even timeouts. | 04:26 |
poolie | s/are a good idea/could be a safer course | 04:26 |
lifeless | which this falls into IMNSHO | 04:27 |
mkanat | lifeless: Are you with me on the "fix the two oopses then maintain trunk later" plan? | 04:27 |
lifeless | mkanat: I"m with you on fix the two oopses | 04:28 |
lifeless | mkanat: I hesitate to commit to the maintain trunk later plan, because it presupposes the outcome of a feature squad looking at this stuff | 04:28 |
mkanat | lifeless: Okay. In that case, "roll out history db in 6-9 months and then see how things go from there". | 04:29 |
lifeless | 'assign a feature squad to work on loggerhead performance in 6-9 months and see what happens' | 04:29 |
mkanat | lifeless: Sure, sounds great. :-) | 04:29 |
lifeless | pending flacoste and jml agreeing on priority/timeline | 04:29 |
lifeless | one obvious tactical move for such a squad is to pick this up (if no one has done it in spare / volunteer time in the interim) | 04:30 |
poolie | ok, so now back to the big one | 04:34 |
poolie | in that plan are we ok to leave lp on 1.18 and trunk a bit free-floating | 04:34 |
mkanat | poolie: Yeah. | 04:34 |
poolie | or shall we say what's in trunk was an experiment that's semi-abandoned, and move it aside? | 04:34 |
lifeless | that's up to you guys | 04:34 |
lifeless | if you want maintainers from the lp team, no. If you don't care about that (indefinitely), yes. | 04:35 |
lifeless | It *was* an experiment and in the absence of maintainers, sure it must be considered abandoned. | 04:35 |
mkanat | poolie: I don't think it should be moved aside. | 04:37 |
mkanat | poolie: I suspect that in 6-9 months, the best thing for LP's feature team to do will be to stabilize and roll out trunk. | 04:38 |
lifeless | I don't have a particular investment in either answer; I have a strong bias towards having things be maintained when possible, and code that no one is looking after be clearly abandoned. | 04:38 |
lifeless | but its not 'my' project | 04:38 |
mkanat | lifeless: It was effectively abandoned before I started working on it, we'd just be putting it back into that state. | 04:38 |
mkanat | Well, besides the work that jam did on history_db, which was substantial. But nobody was fixing bugs. | 04:39 |
lifeless | mkanat: which is a shame, no ? | 04:39 |
mkanat | lifeless: It is, but it's also just a fact, and I'm okay with waiting 6-9 months for it to be resolved. | 04:39 |
mkanat | lifeless: I would love to keep working on it; that would be the most ideal solution given infinite resources. :-) | 04:40 |
lifeless | just for clarity, you think it's better for users to have a nonresponsive project with some cool stuff unreleased in trunk than a responsive project with a 'future' branch earmarked for a later examination? | 04:40 |
lifeless | mkanat: I'm assuming, perhaps incorrectly, that you're not going to be doing regular maintenance in your spare time. | 04:41 |
mkanat | lifeless: I think that trunk is probably stable and usable by everybody except LP. | 04:41 |
mkanat | lifeless: Yeah, unfortunately because I also maintain Bugzilla and some other stuff, I don't have a lot of spare bandwidth to do other free projects. | 04:41 |
lifeless | mkanat: I didn't say anything about the code quality | 04:41 |
lifeless | mkanat: or stability. | 04:41 |
lifeless | mkanat: which is totally understandable... I have the same quandary myself. | 04:42 |
mkanat | lifeless: I think it honestly wouldn't be that hard, if you're just doing simple OOPS fixes, to apply them both to trunk and the LP branch. But I also suspect that some of those OOPS fixes will not be quite so wanted on trunk. | 04:43 |
mkanat | lifeless: I could fix the issues and do the backports myself, with one more block of hours. | 04:43 |
lifeless | mkanat: we really want the lp team to get stuck into all the crannies | 04:44 |
mkanat | lifeless: Okay. If that's the immovable object, then I think the *simplest* solution is the one that we have proposed and agreed on. | 04:44 |
lifeless | the previous silo structure led to the state loggerhead was in 2010; bringing you in had great results but has left loggerhead out in the cold | 04:44 |
mkanat | lifeless: I would say that loggerhead is a separately-maintained modular dependency, not a silo. | 04:45 |
lifeless | what I want is the following: | 04:45 |
lifeless | mkanat: 'bugs', 'code', 'soyuz' etc - those were the silos | 04:45 |
lifeless | mkanat: lp team substructure | 04:45 |
lifeless | mkanat: we've just reorganised into squads with a project wide remit | 04:46 |
mkanat | lifeless: Yeah...that's a conversation that you and I already had. :-) | 04:46 |
lifeless | right ;) | 04:46 |
lifeless | so | 04:46 |
lifeless | what I want is: | 04:46 |
mkanat | lifeless: (BTW, I think that having developers flexible is good, but I think the individual components should still have assigned individual designers.) | 04:46 |
lifeless | - a place for changes to the LP loggerhead to go for code review that code.launchpad.net/launchpad-project/+activereviews will pick up | 04:46 |
lifeless | - ditto bug reports for bugs.launchpad.net/launchpad-project/+bugs | 04:47 |
lifeless | - a clear and simple policy for developers on what to do with changes to the project vis-a-vis upstream. One of 'nothing', 'land upstream and do a release', 'file a patch, upstream will review it' | 04:48 |
lifeless | if those three constraints are met, I will be satisfied. It may not be the globally best thing for canonical / bazaar / loggerhead, but I'll be confident that it will get acted on. | 04:48 |
lifeless | if any of them are not met, I'm confident it will become a spotty mess. | 04:49 |
poolie | it's already in /bazaar | 04:49 |
poolie | we read code reviews etc | 04:49 |
poolie | if lp developers put up changes i'm sure they will be piloted | 04:49 |
mkanat | lifeless: That sounds sensible. So I suppose what you could do is develop against launchpad-pqm/devel, and submit merge proposals against the trunk, but not have anybody assigned to do them. | 04:49 |
lifeless | there are multiple code branches in that project that are not landed and approved | 04:49 |
mkanat | lifeless: Then at least when somebody was interested, they could go and look at the active MPs against trunk. | 04:49 |
lifeless | mkanat: with no one on the other end, that seems a bit noddy | 04:50 |
mkanat | lifeless: It's at least a method of making a queue to handle for the future. | 04:50 |
poolie | lifeless, i have actually read them and they're not easily landable | 04:50 |
poolie | they could be bumped to wip | 04:50 |
lifeless | then they probably should be | 04:50 |
lifeless | jam: https://code.launchpad.net/~mnordhoff/loggerhead/history_db/+merge/25381 | 04:50 |
mkanat | lifeless: Also, if you make the target reviewer ~bzr, eventually somebody will come along and do them. | 04:51 |
lifeless | mkanat: well, I'm assuming (because these are there) that that isn't the case. | 04:51 |
mkanat | lifeless: We could say that the Bzr team maintains loggerhead's upstream (without anybody actually assigned to do the work) and Launchpad maintains the launchpad-pqm branch. | 04:51 |
mkanat | lifeless: Oh, the reason those are there is that the default reviewer for loggerhead is wrong. | 04:51 |
mkanat | lifeless: It should be ~bzr instead of ~loggerhead-team, or whatever it is now. | 04:51 |
poolie | i think i added canonical-bazaar to that team | 04:52 |
poolie | or at any rate we can | 04:52 |
lifeless | so | 04:52 |
lifeless | corollaries | 04:52 |
lifeless | because lp is limited | 04:52 |
lifeless | I need a project in launchpad-project | 04:52 |
mkanat | lifeless: Ultimately I think we're just talking about what will happen for the next 6-9 months. | 04:52 |
lifeless | that can be a new one, launchpad-loggerhead repurposed, or loggerhead itself. | 04:52 |
lifeless | mkanat: I don't see this as timeboxed, unless we get rid of loggerhead | 04:52 |
mkanat | lifeless: I can't imagine the feature squad deciding that anything is that valuable besides stabilizing and deploying trunk. | 04:53 |
mkanat | lifeless: At which point the LP branch and trunk will become the same. | 04:53 |
lifeless | mkanat: sure | 04:54 |
lifeless | but they could still work off of a branch and let trunk be approximately unmaintained | 04:54 |
lifeless | or they could be doing releases | 04:54 |
mkanat | lifeless: Sure. But either way, there's no difference between the plans, after the 6-9 month timeframe. | 04:55 |
lifeless | sure there is | 04:55 |
mkanat | lifeless: You're positing the possibility that history_db never becomes trunk? | 04:56 |
lifeless | yes | 04:56 |
lifeless | and | 04:56 |
lifeless | that perhaps bazaar will maintain loggerhead | 04:56 |
mkanat | lifeless: Okay. So that bzr will maintain loggerhead seems somewhat likely. | 04:57 |
mkanat | lifeless: That history_db will never be trunk seems very unlikely. | 04:57 |
lifeless | or that what launchpad needs will be so unsuitable for trunk that trunk maintainers reject it | 04:57 |
lifeless | my crystal ball is broken | 04:57 |
mkanat | lifeless: Sure, but I have a bit more intimate knowledge of loggerhead and the problems it's faced, so I feel fairly confident in these predictions. | 04:57 |
mkanat | lifeless: Here's another possibility. | 04:58 |
mkanat | lifeless: You could merge ~launchpad-pqm/devel into 1.18. | 04:58 |
mkanat | lifeless: You could maintain 1.18 and do regular releases of it. | 04:58 |
mkanat | lifeless: Then LP would maintain the stable branch and bzr would maintain the trunk. | 05:00 |
mkanat | lifeless: That seems like a pretty reasonable solution for everybody. | 05:01 |
lifeless | mkanat: AIUI bzr don't have the cycles or domain experience to maintain loggerhead comfortably. IMBW. | 05:01 |
mkanat | lifeless: Ah, I think many of the devs have some casual knowledge of it. | 05:01 |
poolie | i think we're as well placed as lp, aside from being smaller | 05:02 |
poolie | i don't know | 05:02 |
poolie | i think you said you felt lp devs would be unwilling to work on it unless trunk was moved away | 05:02 |
mkanat | Yeah, poolie reviewed most of my code, jam did history_db, and some other folks submit patches from time to time and have done reviews. | 05:02 |
lifeless | I was extrapolating from my third constraint | 05:02 |
lifeless | poolie: ^ | 05:02 |
poolie | i'm not sure i understand it | 05:03 |
poolie | is it "where do i send my patches?" or "what do i do with third-party patches?"? | 05:03 |
lifeless | how should I approach doing my changes | 05:04 |
poolie | understand lp's running 1.18; do changes off that; ask someone experienced to review them | 05:04 |
poolie | it doesn't seem particularly harder than requesting changes in lazr.restful or bzr or whatever | 05:04 |
poolie | s//proposing | 05:05 |
lifeless | lp's review and bug ui let us down here, but sure | 05:05 |
lifeless | I don't think you can propose merges cross-project | 05:06 |
lifeless | poolie: does patch pilot work off of https://code.launchpad.net/bazaar/+activereviews ? | 05:07 |
poolie | wow | 05:07 |
poolie | obviously not :) | 05:07 |
poolie | but, i was contemplating that last week | 05:08 |
lifeless | spm: any joy? | 05:08 |
lifeless | poolie: I think you need to decide how you want to hang it together; if you want to maintain loggerhead - great, we'll send you patches, and bugs, and so on. | 05:10 |
lifeless | poolie: I understood you to be focusing on udd + treading water | 05:10 |
lifeless | poolie: so it seems a bit inconsistent to me for you to also be picking up maintenance of this, which was effectively unmaintained for quite some time. | 05:10 |
lifeless | poolie: but its up to you | 05:10 |
mkanat | lifeless: I think loggerhead is an important part of bzr adoption. | 05:11 |
lifeless | poolie: I just don't want to be held captive to this outstanding work, and want a clear answer whether you would like the lp team to pickup maintenance (subject to the constraints I'm operating under) | 05:11 |
lifeless | poolie: I feel like every time we reach an agreement, it's snatched away one email later | 05:12 |
spm | lifeless: too many other things exploding atm to get a look at beyond logging into the likely problematic server | 05:12 |
lifeless | spm: should we rt it ? | 05:12 |
lifeless | spm: or would that just be throwing hay into a barn fire? | 05:12 |
spm | well, aiui (need to confirm) static files are served off crowberry directly. if guava has been updated but not crowberry, then this sort of mismatch seems possible. | 05:13 |
lifeless | spm: yeah, thats what I'd expect needs checking | 05:13 |
poolie | i just want to get it to improve in the most efficient way possible | 05:14 |
poolie | i have been doing reviews on it | 05:14 |
poolie | as have others | 05:14 |
lifeless | we've spent 2 hours of three people's time (6 man-hours) discussing the most efficient way. | 05:14 |
mkanat | poolie, lifeless: I'll leave you guys to decide this. I think having bzr maintain trunk and LP maintain 1.18 is probably a good idea. | 05:14 |
mkanat | I'm out for the night, now. :-) Goodnight! | 05:17 |
poolie | night | 05:17 |
poolie | :( | 05:17 |
mkanat | poolie: Did you want me to stick around? | 05:17 |
poolie | no, go | 05:18 |
poolie | i'm just annoyed this is getting stuck | 05:18 |
mkanat | poolie: Okay. | 05:18 |
mkanat | poolie: It's only stuck if you disagree that bzr should maintain trunk. | 05:18 |
mkanat | Anyhow, I'm out. :-) | 05:19 |
mkanat | Night! | 05:19 |
poolie | lifeless, do you think having bzr maintain trunk (as we have been, at a low level) is feasible? | 05:21 |
poolie | i guess that still leaves launchpad bugs meaning it's hard for lp devs to see the queue | 05:21 |
lifeless | well | 05:24 |
lifeless | as I said | 05:24 |
lifeless | I need an LP queue | 05:24 |
lifeless | for lp devs to review lp changes | 05:24 |
lifeless | and ditto bugs | 05:24 |
poolie | how about if we just leave them where they are | 05:28 |
poolie | and if lp developers are not getting reasonable turnaround on their reviews, we deal with that? | 05:28 |
poolie | so far, bzr people have been doing reviews, and lp people have not been proposing them | 05:28 |
lifeless | poolie: I don't understand | 05:31 |
lifeless | lp changes to the lp loggerhead will be reviewed by lp devs | 05:31 |
lifeless | if they go upstream, if that's desired and there is an upstream, then having reasonable turnaround is indeed important | 05:32 |
poolie | ok, so you stated your constraints; what are mine? | 05:41 |
poolie | i think it's very unfortunate to be letting historydb bitrot because nobody will schedule time to deploy it | 05:41 |
poolie | but that is not really a constraint | 05:42 |
poolie | i am also concerned that we will make these rearrangements and then nobody will actually work on it, therefore leaving things actually worse off | 05:42 |
poolie | again, not a constraint | 05:42 |
spm | lifeless: so yes. /static is served off crowberry directly. both http or https. I'm guessing we have revno X on crowberry for LH, and X + Y on guava - hence the mismatch. | 06:24 |
lifeless | spm: but loggerhead runs on guava? | 06:30 |
spm | yup | 06:31 |
lifeless | I can has fix? | 06:31 |
spm | sorry, I don't understand? | 06:31 |
spm | we fixed it this way to stop codebrowse being overwhelmed serving static content | 06:31 |
lifeless | can you not serve it statically from guava? | 06:31 |
spm | we did. it was bad. | 06:31 |
lifeless | oh | 06:32 |
lifeless | that surprises me | 06:32 |
lifeless | serving it out of loggerhead would be poor | 06:32 |
lifeless | but apache on guava? | 06:32 |
lifeless | spm: are you sure it was static on guava, not being handled by lh ? | 06:32 |
spm | it was being handled by lh on guava. there is no apache on guava? | 06:33 |
spm | I'd suggest apache on guava would be unwise - just to solve a css file woe. | 06:33 |
lifeless | ok | 06:34 |
lifeless | I had no idea there wasn't one there | 06:34 |
spm | if you give me a bit I may even be able to enunciate why :-) | 06:34 |
lifeless | can you check the revno's ? | 06:34 |
lifeless | spm: is there an LP tree on crowberry we can point into instead of the one we do now, that would be more up to date? | 06:34 |
spm | sourcecode/loggerhead/loggerhead/static <== guava is 178, crowberry is 176. | 06:36 |
spm | revno 12263 vs 12177 | 06:37 |
lifeless | and is there a view.css on guava? | 06:37 |
spm | ahhh. codebounce is in nodowntime. | 06:37 |
lifeless | spm: so, we need an additional nodowntime deploy to a dir on crowberry, and apache pointing at that. | 06:38 |
lifeless | ? | 06:38 |
spm | I wouldn't call codehost a nodowntime deploy? | 06:38 |
lifeless | spm: (after we confirm a view.css exists on guava) | 06:38 |
spm | -rw-r--r-- 2 loggerhead loggerhead 519 2011-01-14 11:05 ./css/view.css <== guava | 06:39 |
lifeless | spm: what I mean is | 06:40 |
lifeless | codehosting cannot be nodowntime yet (rt something or other) | 06:40 |
lifeless | can we have a separate lp tree on crowberry | 06:41 |
lifeless | which would be *in* the nodowntime set | 06:41 |
lifeless | and apache would be pointed into that tree | 06:41 |
spm | just for static files? Ahhh I see. clever. | 06:41 |
lifeless | or apache on guava. your call :P | 06:41 |
spm | heh | 06:41 |
spm | yeah - I like that idea. bit ugly and heavyweight, but the alts are worse. | 06:42 |
spm | lifeless: oki. new 'launchpad-static' now on codehost; about to switch the softlinks to point into there.... | 06:52 |
spm | and live | 06:53 |
spm | http://bazaar.launchpad.net/static/css/view.css <== looks good | 06:55 |
lifeless | spm: thanks! | 07:49 |
jelmer | 'morning | 08:31 |
damd | hi. i'm working on a bzr branch where i have revisions up to r4, and i've now made uncommitted changes on top of r4. i'd like to sort of "throw away" the changes in r4 and make the current state the new r4. how would i do that? | 09:23 |
damd | i.e. i already have an r4, but i'd like to replace that r4 with the current uncommitted state which builds on r4. | 09:31 |
jelmer | damd: 'bzr uncommit' | 09:32 |
jelmer | followed by a new commit | 09:32 |
damd | okay, so my changes won't be lost that way? | 09:32 |
damd | they weren't, great, thanks | 09:33 |
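For reference, a minimal library-level sketch of that flow, assuming bzrlib 2.x APIs and a working tree in the current directory; the commit message is hypothetical. The CLI equivalent is simply 'bzr uncommit' followed by 'bzr commit'.

    # A rough sketch of 'bzr uncommit' + 'bzr commit', assuming bzrlib 2.x.
    from bzrlib import uncommit, workingtree

    wt = workingtree.WorkingTree.open('.')
    # Pop r4 off the branch history; working tree files (including the
    # uncommitted changes) are left untouched.
    uncommit.uncommit(wt.branch, tree=wt)
    # Commit the current state as the new r4.
    wt.commit('replacement for the old r4')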
damd | i just did a "bzr pull" and there was a conflict in a file. now i've fixed that conflict. so is everything swell now? i'm worried that maybe i have to take some extra action to prevent e.g. a "merge" commit from appearing in the log once i commit, like the stuff that happens in mercurial unless you rebase. | 09:39 |
jelmer | damd: you'll have to run "bzr resolved" to let bzr know you've resolved the conflict (it will tell you this when you try to commit) | 09:44 |
damd | oh, okay, thanks | 09:45 |
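As a sketch of what 'bzr resolve' (alias 'bzr resolved') automates, assuming bzrlib 2.x's WorkingTree API: auto_resolve() treats text conflicts whose conflict markers have been edited away as resolved.

    # A minimal sketch, assuming bzrlib 2.x; run after fixing the file.
    from bzrlib import workingtree

    wt = workingtree.WorkingTree.open('.')
    # Text conflicts with no remaining <<<<<<< markers are considered
    # resolved; anything else stays on the conflict list and will still
    # block commit.
    un_resolved, resolved = wt.auto_resolve()
    print 'still conflicted:', [c.path for c in un_resolved]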
jelmer | damd: we only create merge commits if you've actually done a 'bzr merge' invocation. 'bzr pull' will never create a merge commit | 09:45 |
damd | great! | 09:45 |
maxb | The behaviour isn't unlike Mercurial, for that matter | 09:56 |
Tak | isn't it not unlike mercurial? | 09:56 |
maxb | I suppose the difference is that "bzr pull" complains, whereas "hg pull" just reports (+1 heads) and leaves you with multiple local heads | 09:57 |
damd | if i remember correctly, in mercurial if you hg pull you need to make a "merge commit" | 09:58 |
damd | to avoid that you "hg pull --rebase" | 09:58 |
damd | but then again, i never bothered learning it properly | 09:58 |
Tak | you only need to do a merge commit if you have to do a merge | 09:58 |
damd | yes, of course | 09:58 |
maxb | It sounds to me that you are overly focused on avoiding merge commits | 09:59 |
damd | not overly focused, it's just that virtually every project i've worked on urges you not to make merge commits | 10:00 |
lifeless | in what VCS | 10:00 |
maxb | damd: I have noticed this too. Have any of those projects given a decent explanation as to why they urge that? | 10:00 |
damd | git (conkeror), hg (work) and now bzr (emacs) | 10:00 |
damd | maxb: ugly logs basically | 10:00 |
maxb | also git (postgresql) | 10:00 |
damd | no, i mean i worked on conkeror | 10:01 |
Tak | most git-using projects hate merge commits ime | 10:01 |
Tak | I haven't had any complaints from projects using other vcss | 10:01 |
damd | the only merge commits i find in emacs are the ones where they merge emacs-23 (bug fixes) to emacs-trunk (new features) | 10:01 |
maxb | So they don't have feature branches? | 10:02 |
damd | they have those too, but those merges are very rare | 10:02 |
Tak | but that's also because git makes a merge commit for EVERYTHING | 10:02 |
maxb | ? | 10:03 |
jelmer | Tak: it only makes a merge commit if a fast-forward isn't possible | 10:04 |
maxb | much like bzr | 10:07 |
* Tak shrug | 10:09 | |
jelmer | maxb: bzr pull never makes a merge commit, bzr merge always makes a merge commit | 10:09 |
Tak | I feel like it happens a lot more often for me with git | 10:09 |
jelmer | maxb: or rather, sets the working tree up for a merge commit | 10:10 |
jelmer | maxb: that's different from the way git pull works | 10:10 |
maxb | OK, perhaps it is more accurate to say that the default way git pull works creates merge commits in much the same scenarios as a user of bzr would | 10:10 |
jelmer | maxb: I don't think that's true. In a lot of situations it's surprising that "git pull" creates a merge commit. | 10:11 |
maxb | Really? My understanding was that git pull will create a merge commit in exactly the same situations that bzr pull will say "branches diverged, you need to merge" | 10:12 |
jelmer | maxb: that assumes that a merge is actually what the user wants, which (at least I found) is often not true. | 10:13 |
jelmer | maxb: fwiw bzr can do the same thing using 'bzr merge --pull' | 10:13 |
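To make the comparison concrete, a small sketch of the bzr side, assuming bzrlib 2.x: pull refuses diverged branches instead of quietly creating a merge, which is exactly the behaviour being contrasted with 'git pull' here. The branch paths are hypothetical.

    # A minimal sketch, assuming bzrlib 2.x.
    from bzrlib import branch, errors

    mine = branch.Branch.open('mine')
    theirs = branch.Branch.open('theirs')
    try:
        # Mirrors 'bzr pull': fast-forward only, never a merge commit.
        mine.pull(theirs)
    except errors.DivergedBranches:
        # Mirrors the CLI's "branches have diverged" message: the user
        # must explicitly run 'bzr merge' (or 'bzr merge --pull').
        print 'diverged: an explicit merge is needed'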
quicksilver | jelmer: bzr merge --pull has the disadvantage of replacing your mainline revision numbering with the other branch's revision numbers, doesn't it? | 10:19 |
maxb | No, it doesn't do that | 10:20 |
maxb | Oh | 10:20 |
maxb | I suppose it could | 10:20 |
jelmer | quicksilver: yes, but that's a problem with pull in general if the other branch has merged your tip and has additional revisions | 10:20 |
glen | does bzr support git like merges, so that original commit is preserved from merge? | 10:20 |
quicksilver | jelmer: yup. | 10:20 |
quicksilver | jelmer: just checking I understood :) We go to some lengths to keep monotonically increasing revision numbers on our branches | 10:21 |
jelmer | glen: yes | 10:21 |
glen | jelmer: any hints? :) | 10:21 |
jelmer | glen: how do you mean? | 10:21 |
maxb | quicksilver: You know about append_revisions_only, right? | 10:21 |
quicksilver | maxb: nope. | 10:21 |
jelmer | quicksilver: in general I think landing feature branches as merges is a good idea. it nicely groups them | 10:21 |
quicksilver | jelmer: agreed. | 10:22 |
maxb | quicksilver: It's a per branch setting to say "never change my existing mainline history" | 10:22 |
glen | jelmer: well. i did bzr merge ../path/to/branch, and when i did bzr commit i had to type the commit message again. i'd like the commit messages that i merged to get auto-added; currently my commit after the merge looks like i did the commit myself | 10:22 |
quicksilver | maxb: ah, interesting. Would stop a class of stupid error. | 10:22 |
maxb | That's the point of it, yes | 10:23 |
quicksilver | maxb: still, it's never a problem to pivot the branch by repulling the right revision and push --overwriting it. | 10:23 |
jelmer | glen: The merge has a reference to the revision that was merged | 10:23 |
quicksilver | maxb: the worst thing that happens is redmine gets terribly confused | 10:23 |
glen | jelmer: do you know is that merge rev visible in launchpad too? | 10:23 |
quicksilver | (it assumes, and caches, monotonically increasing revisions) | 10:23 |
jelmer | glen: much like in git, which also doesn't copy commit messages | 10:23 |
maxb | quicksilver: Huh. Sounds like its bzr integration wasn't exactly well thought through :-/ | 10:24 |
jelmer | glen: launchpad's main branch page just shows the mainline revisions (so the merge revisions, not the revisions that were merged by those merge revisions) | 10:24 |
quicksilver | maxb: well, let's call it a known deficiency ;) | 10:24 |
glen | jelmer: ah, i see now. it's only in bazaar browser (http://bazaar.launchpad.net) | 10:25 |
quicksilver | maxb: we use loggerhead as well | 10:25 |
quicksilver | maxb: but we use redmine for issue tracking and it's convenient for the issue tracker to be able to see the repo | 10:25 |
maxb | quicksilver: Of course, if redmine auto-set append_revisions_only on all of the branches managed under it, it *could* assume that :-) | 10:25 |
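For completeness, a minimal sketch of turning that setting on, assuming the Branch.set_append_revisions_only API available in bzr 2.x; the config-file equivalent is a line 'append_revisions_only = True' in the branch's .bzr/branch/branch.conf.

    # A minimal sketch, assuming bzrlib 2.x; the path is hypothetical.
    from bzrlib import branch

    b = branch.Branch.open('/path/to/branch')
    # From now on, operations that would rewrite this branch's existing
    # mainline history (e.g. pulling a diverged branch) are refused.
    b.set_append_revisions_only(True)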
maxb | jelmer: Hi, can we discuss bug 707170? My thought was that for the branches to be in the state they are, some software component must have done something wrong, and bzr-git seemed like the likely culprit | 10:27 |
ubot5 | Launchpad bug 707170 in Bazaar Git Plugin ""Cannot add revision(s) to repository: missing text keys" branching lp:dulwich and alioth packaging branch into same shared repo" [Undecided,Incomplete] https://launchpad.net/bugs/707170 | 10:27 |
jelmer | maxb: can you explain why though? bzr-git didn't touch the broken commit at all | 10:33 |
maxb | jelmer: Perhaps I'm misunderstanding, but I was interpreting that error as there being one revision-id that has bzr-style text-revision ids for some files in one branch, but bzr-git ones in the other | 11:12 |
maxb | So unless bzr core managed to rewrite the text-rev ids, the problem must be in bzr-git? | 11:12 |
jelmer | maxb: that one revision id was created by bzr core though - bzr-git hasn't touched it | 11:19 |
maxb | That's interesting. Does this hint at a potential bzr core bug, then? | 11:19 |
jelmer | it seems more likely to me that e.g. dpush has updated the working tree incorrectly and thus caused bzr core to create a commit with the wrong text revisions | 11:19 |
=== Ursinha is now known as Ursinha-afk | ||
=== oubiwann is now known as oubiwann_ | ||
catphish | am i correct in thinking that the only way to set up hooks in bzr is to write a python plugin? | 12:24 |
jelmer | catphish: yes | 12:25 |
jelmer | there is a simple wrapper plugin that calls out to the shell but it doesn't cover all hooks, and requires per-branch configuration | 12:25 |
catphish | i might write my own wrapper that calls out to the shell | 12:26 |
catphish | i need to be able to define hook shell scripts on a per-repository basis | 12:26 |
catphish | http://people.samba.org/bzr/jelmer/bzr-shell-hooks/trunk/ | 12:28 |
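A minimal sketch of what such a wrapper plugin could look like, assuming bzrlib 2.x's documented 'post_commit' branch hook and a local branch; the per-branch script location (.bzr/hooks/post_commit) is purely an invented convention for this example, not something bzr defines. Saved as a module under ~/.bazaar/plugins/ it would run on every commit.

    # shell_post_commit.py - a sketch, assuming bzrlib 2.x hook APIs.
    import os
    import subprocess

    from bzrlib import branch, urlutils


    def run_shell_post_commit(local, master, old_revno, old_revid,
                              new_revno, new_revid):
        # 'post_commit' is called with the local and master branches plus
        # the old and new branch tips; local is None unless the branch is
        # bound, so fall back to master.
        b = local or master
        base = urlutils.local_path_from_url(b.base)
        script = os.path.join(base, '.bzr', 'hooks', 'post_commit')
        if os.access(script, os.X_OK):
            subprocess.call([script, str(new_revno), new_revid])


    branch.Branch.hooks.install_named_hook(
        'post_commit', run_shell_post_commit, 'per-branch shell hook')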
=== oubiwann_ is now known as oubiwann | ||
=== oubiwann is now known as oubiwann_ | ||
vila | jam: good morning ! | 14:55 |
vila | jam: I'm looking at your reset-checkout proposal and I'm wondering what will happen if there are existing conflicts? | 14:56 |
vila | jam: I smell bogus edge cases here | 14:56 |
=== Ursinha is now known as Ursinha-afk | ||
CardinalFang | Hi all. In Ubuntu, I'm interested in updating a package, but the source-package-branch is out of date with the current shipping package. I heard I should mention it here, and ask for advice. | 15:02 |
=== Ursinha-afk is now known as Ursinha | ||
maxb | What is the source package name? | 15:04 |
CardinalFang | "couchdb" | 15:04 |
CardinalFang | lp:ubuntu/natty/couchdb | 15:05 |
vila | https://bugs.launchpad.net/udd/+bug/653312 | 15:06 |
maxb | Via http://package-import.ubuntu.com/status/ and the linked https://bugs.launchpad.net/udd/+bug/653312 I infer that a fix is not likely to be forthcoming in the immediate future, and therefore you would be best advised to do an update using the traditional methods for now | 15:06 |
* vila nods | 15:07 | |
CardinalFang | maxb, vila, thank you. | 15:07 |
=== mnepton is now known as mneptok | ||
jam | vila: honestly, I don't really care about those edge cases much. for conflicts, it probably leaves them alone | 15:21 |
jam | this is more about "my dirstate is corrupted, fix it" | 15:21 |
vila | jam: my point is not to fix every edge case but let the user know some may exist (reviewing) | 15:22 |
vila | jam: you care at least enough to offer the ability to restore the merge parents though, I don't want users to be fooled | 15:23 |
vila | jam: this is a great addition, so let's publicize what it really does | 15:23 |
jam | I'm not sure I want it to be particularly public. Since it is a 'repair something broken' tool. I don't want people to think it happens often | 15:24 |
jam | it has happened just often enough and the workaround is clumsy | 15:24 |
vila | jam: I'm all for leaving it hidden, what I mean (already written in the not-yet-posted review) is that when the repair succeeds, a warning should tell the user about limitations | 15:25 |
vila | jam: review posted | 15:28 |
vila | jam: spoiler: BB:tweak ;) | 15:28 |
sobersabre | hi. | 16:02 |
sobersabre | I have a q which may sound a bit trollish. but it's not. | 16:02 |
sobersabre | what was the idea behind STARTING bzr ? | 16:02 |
sobersabre | I mean it is python. there was a python project already (hg) | 16:02 |
jam | sobersabre: technically we were first, I believe | 16:04 |
jam | certainly the start date is very close | 16:04 |
damd | also, bzr is GNU | 16:04 |
jam | sobersabre: we talked about joining at one point. I think the primary sticking point was copyright, followed by some disagreements about structure. We wanted to be abstract and support multiple concrete formats, etc. | 16:05 |
LeoNerd | There was also a lot of history in baz, the Canonical fork of tla, Tom Lord's Arch... | 16:07 |
LeoNerd | A C reimplementation of the original GNU arch, a revision control system written in shell scripts (I kid ye not) | 16:07 |
LeoNerd | There's no code sharing between baz and bzr, but bzr was started by a lot of the same developers who first worked on baz; shares some of the ideas and concepts... at least initially | 16:08 |
LeoNerd | .oO( Except for cherrypicking... </bitter> ) | 16:09 |
sobersabre | jam: thanks. | 16:58 |
sobersabre | now bzr q. | 16:58 |
sobersabre | :) | 16:58 |
sobersabre | I want to take a branch with all its history, and put it into a subfolder of another (unrelated branch), with all the branch's history, and mold them together into 1 branch. | 16:59 |
sobersabre | I saw a plugin "merge-into", | 16:59 |
=== Ursinha is now known as Ursinha-lunch | ||
sobersabre | is there a way to do it without giving birth to a hedgehog and avoid using this plugin ? | 17:00 |
damd | ... | 17:02 |
maxb | sobersabre: You want 'bzr join' | 17:07 |
maxb | sobersabre: However, first please check the format of the two branches | 17:07 |
maxb | If they are both poor-root type, this may be an issue | 17:07 |
sobersabre | maxb: I am on 2a formats. | 17:16 |
sobersabre | on both branches. | 17:16 |
sobersabre | thanks for join. | 17:16 |
maxb | In that case, bzr join should just work | 17:16 |
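A sketch of that workflow; the paths below are hypothetical, and WorkingTree.subsume is, as far as I can tell, what 'bzr join' calls under the hood.

    # A minimal sketch, assuming bzrlib 2.x and 2a-format branches.
    # CLI equivalent:
    #   bzr branch OTHER project/lib   (put the other branch in a subfolder)
    #   cd project && bzr join lib     (fold its history into this branch)
    from bzrlib import workingtree

    outer = workingtree.WorkingTree.open('project')
    inner = workingtree.WorkingTree.open('project/lib')
    # subsume() turns project/lib into an ordinary versioned subdirectory
    # of project while keeping the subsumed branch's revision history.
    outer.subsume(inner)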
=== beuno is now known as beuno-lunch | ||
=== vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: jam | 2.2.3 is officially out ! (rm vila) | ||
=== beuno-lunch is now known as beuno | ||
=== Ursinha-lunch is now known as Ursinha | ||
=== r0bby is now known as robbyoconnor | ||
bialix | heya GaryvdM | 21:04 |
GaryvdM | Hi bialix! | 21:04 |
bialix | I have couple of questions to re windows-installers branches | 21:05 |
GaryvdM | ok | 21:05 |
bialix | I'd like to merge Naoki's patches | 21:05 |
bialix | but wonder if we need 2 separate branches for bzr 2.2.x and 2.3.x | 21:05 |
bialix | Naoki's improvements are definitely for 2.3 only, IMO | 21:06 |
bialix | GaryvdM: ^ | 21:07 |
GaryvdM | bialix: It should be possible to code it so that you can define different versions for 2.2 and 2.3 in the same branch | 21:07 |
vila | I'd say focus on 2.3, but that's just me :) | 21:07 |
GaryvdM | Hi vila | 21:07 |
bialix | bonsoir vila | 21:08 |
vila | hey GaryvdM, hello bialix | 21:08 |
vila | I'm tweaking qbzr tests so they can be run with --parallel=fork | 21:08 |
vila | I just succeeded and will send an mp :) | 21:08 |
bialix | GaryvdM: for 2.3, tag:bzr-2.3b1 is used; that does not sound right to me | 21:08 |
GaryvdM | Whether that is the best way to do it or to have different branches - I'm not sure. | 21:09 |
GaryvdM | yes - that should be 2.3b5! | 21:09 |
* bialix wonders how Gary built the latest installers | 21:10 | |
GaryvdM | Hmm - let me boot the vm I built on and check. I may have forgotten to push | 21:12 |
bialix | GaryvdM: on the other hand the changes are straightforward, and I think we may want to upgrade TortoiseOverlays to the latest version for all bzr versions | 21:12 |
bialix | and removing unused w.exe is not a big deal | 21:12 |
bialix | GaryvdM: I need your final word | 21:13 |
GaryvdM | ok | 21:14 |
GaryvdM | bialix: I would rather not change too much in 2.2. If there are bugs that people are complaining about, then yes, but otherwise I think we should rather stick to the known. | 21:16 |
GaryvdM | brb - nature call. | 21:16 |
vila | . o O (Strange name for a phone...) | 21:16 |
GaryvdM | Ah - not phone. That's slang for the toilet :-) | 21:18 |
vila | hehe, I know :) | 21:19 |
vila | ./bzr selftest -s bzrlib.plugins.qbzr --parallel=fork ===> Ran 153 tests in 2.271s | 21:19 |
* bialix remembers the walk to Chateau de La-Hulpe | 21:20 |
vila | instead of Ran 153 tests in 4.161s | 21:20 |
vila | hmm, at least windows popping all over the place are more fun | 21:20 |
bialix | yep | 21:20 |
vila | they pop all together with --parallel=fork | 21:21 |
bialix | jumping jack | 21:21 |
vila | anyway, patch is up for review | 21:21 |
vila | bialix: exactly :) | 21:21 |
GaryvdM | vila: thanks for the patch. | 21:23 |
bialix | GaryvdM: if we don't want to change the 2.2 installers then I think we need to create a separate branch for 2.2 installers | 21:23 |
bialix | GaryvdM: will it be too much trouble for future 2.2 releases? | 21:24 |
bialix | for you? | 21:24 |
vila | keep in mind that the cadence will slow down for 2.2 once 2.3 is out | 21:24 |
GaryvdM | vila: I want to review the excepthookwatcher stuff carefully - so I'll review in the morning when I have a fresh brain :-) | 21:25 |
bialix | I know, but you just released 2.2.3 | 21:25 |
vila | GaryvdM: sure | 21:25 |
GaryvdM | vila: There is a bug in the qbzr tests where a failing test can cause others to fail when there is nothing wrong. I suspect your changes may fix that (well, hoping). | 21:26 |
vila | bialix: indeed and the next planned release is 2.3.0, then 2.3.1 then 2.4b1 and only then and if needed 2.2.4 | 21:27 |
GaryvdM | bialix: I don't think that's necessary - I'm going to look now. | 21:27 |
* fullermd is bummed that we never had a 1.1.2.3.5 | 21:27 | |
vila | . o O (CVS lovers....) | 21:27 |
bialix | oh, I thought we left 1.1.1.1.1.1.1.1 numbers long ago | 21:28 |
vila | well, we've avoided them so far but who knows, maybe someone will find a good use :D | 21:28 |
bialix | GaryvdM: this change https://code.launchpad.net/~songofacandy/bzr-windows-installers/remove-unused-wexe/+merge/44708 affects Inno Setup script, that's shared between bzr installers | 21:29 |
fullermd | It's not CVS lovers, it's math geeks :p | 21:29 |
vila | yeah, yeah, stop pretending :-P | 21:29 |
GaryvdM | bialix: oh - ok I see what you mean now. | 21:31 |
bialix | GaryvdM: well, it's possible to extend issgen.py to allow using different templates for different bzr releases | 21:33 |
bialix | if you want I can look into this | 21:33 |
GaryvdM | bialix: maybe different branches is easier | 21:34 |
bialix | although it increases complexity of the tool with almost no value | 21:34 |
bialix | so yes different branches might be easier | 21:34 |
GaryvdM | vila: do you have 1 branch, or different branches for the mac installer? | 21:35 |
vila | 1 branch per series + trunk | 21:35 |
vila | this sometimes requires some cherry-pick and always require careful merging for config.py (which defines what version of which plugin is packaged) | 21:36 |
vila | but otherwise it works flawlessly ensuring that we don't try to do too much in older releases | 21:36 |
bialix | Ian made it a bit better for windows installers, IMO | 21:36 |
GaryvdM | vila: Has the merging caused any errors in the past? | 21:37 |
vila | not that I know of but I am a noob there having built what... 4 or 5 installers | 21:38 |
GaryvdM | ok - I'm asking to just try get a feel for what works. | 21:38 |
vila | The nice thing with separate branches is that you have a clean history of the releases and what installer modifications went from one branch to the other | 21:39 |
vila | weird, -s bp.plugins.qbzr succeeds but running the full test suite still fails some qbzr tests... | 21:42 |
vila | test isolation related problem probably | 21:42 |
vila | hey poolie ! | 21:42 |
GaryvdM | vila: I think that's the problem I was talking about just now. Is it a treewidget test? | 21:43 |
vila | yup | 21:43 |
vila | re-running | 21:44 |
vila | GaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall | 21:47 |
vila | GaryvdM: bzrlib.plugins.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_add_selectall | 21:48 |
vila | GaryvdM: ha ha, tricked by a leaking excepthookwatcher.pyc | 21:56 |
GaryvdM | vila: :-) I've done that before | 21:57 |
GaryvdM | vila: I've just discovered another reason why it happens (and no surprise, it involves a global variable) | 21:59 |
vila | GaryvdM: I'll need to update my submission | 22:00 |
vila | GaryvdM: which one ? | 22:02 |
GaryvdM | vila bp.qbzr.lib.util.loading_queue | 22:03 |
vila | overrideAttr(bp....util, 'loading_queue', None) in the relevant setUp should fix it | 22:04 |
GaryvdM | Yes - but I also want to investigate a bit as it may need a try...finally somewhere. | 22:05 |
vila | overrideAttr is good at avoiding try/finally if you need to purge this queue, addCleanup may help too | 22:06 |
GaryvdM | vila: The "missing try...finally" is in the code, not the tests, but I will also add your suggestion for an extra level of test isolation protection. | 22:14 |
vila | GaryvdM: oh I see | 22:15 |
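A minimal sketch of the isolation fix being discussed, assuming the qbzr tests subclass bzrlib's TestCase; the class name here is hypothetical, and util.loading_queue is the global mentioned above.

    # A sketch, assuming bzrlib 2.x test APIs.
    from bzrlib import tests
    from bzrlib.plugins.qbzr.lib import util


    class TreeWidgetTestCase(tests.TestCase):

        def setUp(self):
            super(TreeWidgetTestCase, self).setUp()
            # overrideAttr records the current value and registers a
            # cleanup that restores it, so a queue leaked by one test can
            # never poison the next - no try...finally in the test itself.
            self.overrideAttr(util, 'loading_queue', None)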
=== pickscrape__ is now known as pickscrape | ||
vila | RuntimeError: cannot join thread before it is started... never saw this one :) | 22:17 |
vila | GaryvdM: so, I updated my mp but the 2 failures are still there... | 22:18 |
GaryvdM | vila: I should have a fix soon :-) | 22:19 |
vila | yeah ! | 22:19 |
vila | I'm pretty close to being able to run bzr selftest *without* --no-plugins or BZR_PLUGIN_PATH tricks | 22:20 |
vila | GaryvdM: I added a self.assertEqual(None, util.loading_queue) which didn't trigger | 22:21 |
GaryvdM | Oh | 22:22 |
vila | double checking | 22:22 |
GaryvdM | Merging your mp now... | 22:22 |
GaryvdM | vila: both your fix + mine now in lp:qbzr | 22:27 |
vila | pulled, running | 22:29 |
vila | GaryvdM: still 2 failures :-/ | 22:33 |
GaryvdM | vila: the same tests? | 22:33 |
vila | exact same ones, yes | 22:34 |
GaryvdM | AssertionError: not equal: a = [] b = ['changed'] ? | 22:34 |
vila | both fail because they miss unversioned-with-ignored | 22:34 |
vila | no | 22:34 |
GaryvdM | vila: Please pastebin error. | 22:35 |
vila | http://paste.ubuntu.com/559265/ | 22:35 |
GaryvdM | Ty | 22:35 |
GaryvdM | vila: any tips on how to reproduce? | 22:37 |
vila | bzr selftest --parallel=fork | 22:37 |
vila | :-/ | 22:37 |
vila | that is: the whole test suite | 22:37 |
GaryvdM | ok | 22:38 |
vila | I can try to isolate a bit more but this is never trivial | 22:38 |
GaryvdM | :-/ | 22:38 |
vila | GaryvdM: one possible cause is that the tests pass when run sequentially because another test is doing some init that is missing when run in parallel | 22:41 |
* GaryvdM runs grep global . -r | 22:43 | |
GaryvdM | vila: Does it fail if you run bzr selftest -s bp.qbzr --parallel=fork? | 22:44 |
vila | no | 22:45 |
GaryvdM | Ok - same for me | 22:45 |
vila | and neither if I run bzr selftest -s bp.qbzr.lib.tests.test_treewidget.TestTreeWidgetSelectAll.test_commit_selectall | 22:45 |
vila | so s/init/something/ above | 22:46 |
poolie | jam? | 22:48 |
jam | hi poolie | 22:48 |
mathrick | jelmer: any plans to add support for darcs repos? | 22:53 |
mathrick | I can see how that might be tricky, given the completely opposite ways darcs and bzr look at trees and patches | 22:54 |
poolie | jam, when you said "going on the 26th, or more like the 23rd sounds like it works better for me" which month was that? | 22:54 |
jam | poolie: The 9th isn't possible, as you know. Going the week ahead would work for me, but you didn't list it as a possibility. | 22:55 |
jam | going the direct week after would be bad | 22:55 |
jam | because my wife has to do late-night phone calls | 22:55 |
jam | so 2 weeks after or 1-2 weeks before is ok | 22:55 |
jam | Or put in sequence: | 22:56 |
poolie | i understand now | 22:56 |
jam | Apr 25 good, May 2nd good for me, May 9, 16 bad for me, May 23rd good again | 22:57 |
poolie | ok, and the 30th of May good too? | 22:58 |
vila | GaryvdM: I got the failures with 'bzr selftest' even without --parallel=fork, weird | 23:04 |
GaryvdM | vila: I have a suspicion. I'm making a branch with a whole bunch of trace.mutters to check. | 23:05 |
jam | poolie: yeah, should be fine. | 23:05 |
jam | Lina says she'll have a definite schedule first week of Feb | 23:06 |
vila | jam, poolie: 30th of May won't be good at my place, too close to Marie's exams | 23:07 |
jam | poolie: I'm off for now. I really feel like we should be pushing to get history-db deployed for loggerhead. I can't say there won't be any regressions, but other than initial import speed, every time I hit something, I end up making it 10x faster than the current production code. | 23:10 |
jam | I realize it shouldn't be my priority to work on it | 23:11 |
jam | and I'll try to stop poking after today | 23:11 |
poolie | it's always really disappointing to leave good work undeployed | 23:11 |
poolie | and looking back on my mail, i think that we did basically have a deal with lp that if you did this, they'd deploy it | 23:12 |
poolie | for various reasons, some good some bad, some due to you and me not pushing it, that wasn't followed through | 23:12 |
jam | poolie: I def see how not having a good test suite hurts being able to maintain the software, though. | 23:13 |
jam | because when I make changes, I know that the things *I'm currently testing* are ok, but I don't know about far-reaching implications | 23:14 |
* vila nods frantically | 23:14 | |
jam | vila: what are you doing here ? | 23:15 |
jam | :) | 23:15 |
GaryvdM | vila: lp:~garyvdm/qbzr/debug_select_all_test - But I think we should stop now. | 23:16 |
vila | jam: fighting jetlag by hopefully reversing the pattern from these last days, staying up later to sleep between 1 and 6AM | 23:16 |
jam | I guess yesterday you were waking up at this time? | 23:18 |
jam | Anyway, i'll lurk for another 15min or so, but then I'm going to be away till tomorrow. | 23:18 |
poolie | jam, i know what you mean about tests | 23:18 |
poolie | jam, i'm so happy you got the things merged and the tests passing | 23:19 |
poolie | i'm fine for you to push it as a 20%-type thing | 23:19 |
vila | jam: roughly yes but then I couldn't sleep until 6AM... | 23:19 |
poolie | i do think that as well as getting it technically ready you will need to work on lining it up to be deployed | 23:19 |
poolie | ie: do they have any concerns about the approach; are we able to deploy it on staging (etc) to test it; | 23:20 |
lifeless | jam: I think history db is really nice | 23:20 |
lifeless | jam: I'm keen to see it live; we're working within a bunch of constraints that were not really present a year ago - and if present would have given a much stronger focus to moving things forward. | 23:21 |
poolie | anyhow, so getting a thread (maybe a new thread) about what those constraints are may be good | 23:22 |
poolie | or maybe a lep | 23:22 |
vila | GaryvdM: ok, trying it now but I agree we should stop | 23:22 |
poolie | that's kind of more important than the actual speed | 23:22 |
poolie | it could be 100x faster, but if it will not be deployed that just makes it more sad | 23:23 |
lifeless | so | 23:23 |
lifeless | broadly they are | 23:23 |
lifeless | - limit work in progress by the lp squads, no more dancing all over the map | 23:23 |
GaryvdM | vila: ok - Goodnight | 23:24 |
lifeless | which implies not accepting handoffs that haven't been slotted in - help folk to self service instead | 23:24 |
GaryvdM | Goodnight all | 23:24 |
vila | GaryvdM: see you, I'll send a mail with the failure result | 23:24 |
lifeless | - data driven / operational excellence - analyse and fix the oopses we're getting | 23:24 |
=== GaryvdM is now known as GaryvdM_away | ||
lifeless | also add loggerhead to the page performance report | 23:24 |
lifeless | - we need to get timing data in the loggerhead oopses | 23:25 |
lifeless | - iterate until the picture is clear about /what/ is wrong in terms of coarse user experience | 23:25 |
lifeless | e.g. it may be that what we're really suffering is sqlite cache file contention cross-thread within single loggerhead processes | 23:26 |
lifeless | at each point, we broadly want to do the single thing that will solve the most observable issues | 23:26 |
lifeless | (think pareto analysis) | 23:26 |
poolie | so: _should_ john push on this, or is it going to be too hard to get it deployed without active help from a squad? | 23:27 |
jam | lifeless: I don't mind pushing a bit, but things get blackholed so often. I don't really know *how* to push lp-serve, for example. It is in an rt, and at that point... I poke at francis and poolie? | 23:30 |
lifeless | jam: so its live on qastaging ? | 23:35 |
lifeless | jam: AIUI ? | 23:35 |
jam | lifeless: yes | 23:35 |
lifeless | jam: in which case, hammer it until you're happy its robust | 23:35 |
jam | lifeless: done | 23:35 |
lifeless | jam: look in the qastaging OOPS report summaries for anything related to it | 23:35 |
jam | it failed at one point | 23:35 |
lifeless | jam: and then file an RT for production | 23:35 |
jam | as near as I can tell it was because it wasn't restarted as part of the deploy code | 23:35 |
jam | I've filed an RT for staging, which is "stuck" | 23:35 |
jam | handed off to SPM, I believe, and then ... I don't know | 23:36 |
lifeless | spm: ping | 23:36 |
lifeless | jam: staging is irrelevant for getting it on production | 23:36 |
poolie | so to answer the general question, you just need to ask them, or me | 23:36 |
spm | lifeless: hola | 23:36 |
lifeless | spm: ^ jam's rt - do you know where that is at? | 23:36 |
spm | waiting for me to get around to it | 23:37 |
spm | currently I've got 3 LS ones (variants on the same theme) and an ISD one ahead in terms of getting things done; as well as the edge redirector excitement. | 23:38 |
jam | spm: any idea on timeframe for that? | 23:38 |
jam | Is it possible to just upgrade that to roll out onto production, if it isn't necessary to roll out on staging first? | 23:39 |
jam | Mostly I'm trying to get *some* momentum | 23:39 |
spm | no idea unfortunately. I spent pretty much all of yesterday completely reactive to alerts. got maybe 15-20 mins to spend on RTs. | 23:39 |
jam | and I thought staging would at least get stuff moving. | 23:39 |
lifeless | jam: so, for clarity | 23:39 |
jam | But if the round-trip-to-get l-osa time is the block | 23:39 |
lifeless | jam: the pipeline ordering is production<-qastaging<-staging | 23:39 |
jam | then I'm happy to skip steps | 23:39 |
lifeless | jam: if its been checked on qastaging, its totally ready to move forward | 23:40 |
lifeless | jam: we only have to check *db schema changes* on staging | 23:40 |
jam | lifeless: that wasn't the info I heard | 23:40 |
jam | since qastaging was "we'll blow it away often" | 23:40 |
jam | vs "public people test stuff on staging" | 23:40 |
lifeless | jam: same for staging | 23:40 |
lifeless | no | 23:40 |
lifeless | qastaging and staging have the same lifetime guarantees (none) | 23:40 |
jam | lifeless: *developers* test on qastaging and 3rd parties use staging (from all the conversations I've overheard) | 23:40 |
lifeless | jam: where did you get the info, so I can correct it | 23:40 |
jam | lifeless: the bug import discussions | 23:41 |
jam | etc | 23:41 |
lifeless | 3rd parties are permitted to play with either qastaging or staging | 23:41 |
lifeless | developers test db schema stuff on staging, and (approximately) all other things on qastaging | 23:41 |
lifeless | things that we can put behind a feature flag | 23:41 |
jam | also comments saying that if qastaging breaks, no big deal, and I haven't heard that for staging | 23:41 |
lifeless | it's a huge deal | 23:41 |
lifeless | if qastaging is broken we cannot deploy | 23:41 |
lifeless | the entire qa workflow breaks down | 23:42 |
poolie | because there is only one qastaging for the whole site | 23:42 |
poolie | over the rainbow, it would be modularized enough you could run something representative on a dynamically-allocated domain | 23:42 |
jam | see y'all tomorrow (if you're around :) | 23:44 |
vila | jam: cu ! | 23:44 |
lifeless | mmm | 23:44 |
lifeless | I don't think thats a great idea poolie | 23:44 |
poolie | oh? | 23:45 |
lifeless | we have many complex parts, it's all too easy to bring up a shiny short-lived instance | 23:46 |
lifeless | it's the heaviness of the whole thing that lets us learn about issues in qa | 23:46 |
lifeless | it's only by having a 250GB db behind it that we can see various query change impacts, for instance. | 23:47 |
lifeless | mkanat: btw the css is fixed | 23:51 |
mkanat | lifeless: Great! | 23:52 |