[00:03] Good morning.
[00:24] is there a good reason why I wouldn't want to have +w for group on .bzr/checkout/dirstate?
[00:25] I got problems pushing because that file belongs to another user; although the group is set correctly, the user seems to need to write there
[00:29] yann2: Odd, I wouldn't have thought that push would need to write to that file at all.
[00:30] it might be the update I just added before actually :)
[00:30] yann2: the reason not to set group +w for that is if you don't want to allow other users in that group to tell bzr that files have been added or deleted, or that merges have been made...
[00:31] (Or that renames have happened, or conflicts resolved, etc, i.e. changes to the state of the checkout)
[00:31] Yeah, the update would make sense :)
[00:31] well users in the group can basically remove the .bzr folder :)
[00:32] I think the problem is that I do an automated update before doing a push... mmmmh, I'll look into that, thanks...
[02:32] how to diff between branches?
[02:37] D'oh, missed RenatoSilva
[02:38] * jelmer waves to spiv
[02:38] For the record: "bzr help diff"
[02:38] Hi jelmer
[02:38] Any progress on the bzr-svn/chk issue?
[02:39] spiv: No, I need to follow up on your email but I haven't had the time to do so yet (travelling at the moment).
[02:39] I was CC'd on part of a thread, but don't know if I'm missing any discussion, and probably the discussion ought to happen on (or at least be summarised on) the bug...
[02:39] Ah, ok.
[02:39] To UDS, I guess?
[02:40] spiv: No, although I am in the US. Google Summer of Code mentor summit and GitTogether
[02:40] Oh, heh.
[02:40] So near and yet so far!
[02:40] (not that I'm at UDS this time either)
[02:42] spiv: Are you going to the epic?
[02:46] I am
[02:48] Cool
=== spiv changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: spiv | Release Manager: vila | 2.3b2 is officially released, test it! | work on bzr: http://webapps.ubuntu.com/employment/canonical_BSE/
[07:50] hi all !
[07:51] Hi vila
[07:52] _o/
=== beaumonta is now known as abeaumont_
[15:10] I see jml has blitzed a bunch of testtools things
[15:10] wonder if he'd prefer follow-up bugs for continuing issues or reopening the current ones
[15:11] mgz: new bugs
[15:14] will do, just found an mp for one of them and added a comment too.
[15:14] I guess I could subscribe to testtools somehow.
[15:23] ...might just put up some branches for these test failures, some of them are shallow.
[15:34] hm, Python doesn't expose raise. I guess kill covers it anyway.
=== Meths_ is now known as Meths
=== mtaylor is now known as 52AACIHR4
=== tchan1 is now known as tchan
=== deryck_ is now known as deryck
[17:11] bzr qlog shows "Failed to decode using utf8, falling back to latin1", why?
[17:12] isn't bzr aware of the encoding of the file?
[17:13] RenatoSilva: bzr does not store any information about file encoding.
[17:13] RenatoSilva: Individual commands can guess the current file encoding, if they choose.
=== mordred_ is now known as mtaylor
[17:14] if they choose? a command needing to print file content ought to do that, doesn't it?
[17:15] what's funny is that the file is latin1 and the diff in bzr qlog shows the file as utf8 (drop-down at the bottom) but the chars are ok
[17:16] RenatoSilva: No, generally just emitting the bytes to the console works quite well.
[17:16] RenatoSilva: And for binaries, it's generally the only thing that can be done.
[17:16] hmm this is a bug, it tries to decode from utf8 and something defaults to latin1 (msg above), but the drop-down is not updated
=== Meths_ is now known as Meths
[17:17] by emitting the bytes to the console you mean relying on Python's guessing, right?
[17:19] RenatoSilva: Actually, it means relying on the console to have an encoding that is compatible with the files' encoding.
[17:19] where console is Python's stdout object
[17:20] RenatoSilva: The console is written to via Python's stdout object, but it is not, itself, anything to do with Python.
[17:20] by console you mean the terminal?
[17:21] RenatoSilva: yes, it is usually a terminal.
=== oubiwann-away is now known as oubiwann
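A minimal sketch of the fallback the qlog message above describes: try UTF-8 first, then fall back to latin-1, which maps every byte and therefore never fails. The function name is illustrative, not qlog's actual code.

    # Illustrative only: decode bytes the way the qlog message suggests.
    def decode_with_fallback(data):
        try:
            return data.decode('utf-8'), 'utf-8'
        except UnicodeDecodeError:
            # latin-1 decodes any byte sequence, so this cannot fail
            return data.decode('latin-1'), 'latin-1'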
[17:46] hey guys
[17:47] is there an easy idiomatic solution to "the g+w problem" yet?
[17:47] i.e., when I bzr push with bzr+ssh to a repo on a shared server, I want it to be group-writable
[17:47] do I still have to use a debian "alternative" and a wrapper script? :(
[17:54] radix: i'm not aware of any new stuff in that area
[17:55] ok
[17:57] it'd be really nice if bzr serve had an option for that :)
=== Ursinha is now known as Ursinha-lunch
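One possible shape for the wrapper-script workaround mentioned above: force a group-writable umask before handing off to the real bzr, so files created by bzr+ssh pushes come out g+w. This is only a sketch, not bzr's own tooling; the hard-coded command name and argument pass-through are assumptions.

    #!/usr/bin/env python
    # Hypothetical wrapper placed ahead of the real bzr on the server's PATH.
    import os
    import sys

    os.umask(0o002)  # new files and directories become group-writable
    os.execvp('bzr', ['bzr'] + sys.argv[1:])  # replace this process with bzr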
[18:30] hi! is there a way in bzrlib to just check status for uncommitted changes? what I've found is http://people.canonical.com/~mwh/bzrlibapi/bzrlib.status.html#show_tree_status . so I guess it would be possible to put something on top of this, but is there a better way?
=== beuno is now known as beuno-lunch
[18:47] Hi. I have got two separate repos, 'a' and 'b'. On 'b' a 'bzr pull' from 'a' was performed at some point, long ago. Now I want to consolidate 'a' and 'b' under the same repo using 'bzr join' and it fails since there are duplicates in the inventory. Any help to work around this problem would be appreciated.
[18:49] arthur_: can't you just pull all from 'a' into 'b'?
[18:49] arthur_: Could you explain a little more about the kind of history these repositories contain? It seems very odd that you would want to 'bzr join' repositories which already share ancestry. Why is 'bzr merge' not the right tool at this point?
[18:50] alright. maybe I should rewind a little and listen to your suggestions. Here is the situation.
[18:50] I just joined a company where they currently have ~20 repositories for various services.
[18:51] We want to consolidate into a single repo.
[18:51] while keeping the history. How should I proceed?
[18:51] well, repos don't contain history so much as revisions :)
=== Ursinha-lunch is now known as Ursinha
[18:52] dash: well, you got my point
[18:53] arthur_: I think you should not proceed. It is natural and expected for various separate services to live in different repositories
[18:54] maxb: they are much more tied together than you think. Think of them as interfaces and implementations rather than services.
[18:54] So it truly makes sense for every developer who works on any of them, to work in a tree containing all of them, and to _always_ branch and tag them as a combined tree?
[18:55] maxb: most changes are across 3-5 services... so with the current layout it is pretty complex.
[18:56] And the problem is that in the past, some services started as branches of other services?
[18:56] maxb: that's right. one of them...
[18:56] hmm. This isn't going to be very nice :-/
[18:57] which prevents me from doing a join. Believe me, I was pissed when the very last join failed :)
[18:57] I cannot think of any workaround which will preserve the ability to follow all files' history back to where it naturally began
[18:57] The only workaround I can think of is to effectively delete and re-add the entire contents of one of the trees with duplicate file-ids
[18:58] maybe I should do that on only the set of files that were common at the time
[18:58] the pull was done pretty early (revno 6)
[19:07] gthorslund: yes, you can layer things on top of that api
[19:07] gthorslund: or the status command object
[19:07] and there is now a hook for that too
=== beuno-lunch is now known as beuno
=== Meths_ is now known as Meths
[19:16] lifeless: since cmd_status is using show_tree_status, that feels like a higher level than I need. looking at show_tree_status, there appears to be nothing lower level that would be useful for me, so I guess show_tree_status would be right for me then.
[19:19] gthorslund: depends on what you're trying to do, I guess
[19:27] lifeless: I'm looking at replacing revert calls in bzr-bisect with update calls. if there have been local changes I kind of end up with a mess of .moved files and other leftovers. (revert just removes local changes). I refuse to start without those changes being handled some way first.
[19:27] hi mtaylor!
[19:27] gthorslund: iter_changes is probably what you want, then.
[19:31] lifeless: so http://people.canonical.com/~mwh/bzrlibapi/bzrlib.tree.InterTree.html#iter_changes would be my homework then, right?
[19:33] yes
[19:33] show_tree_status calls into this
[19:33] if you want a crib sheet
[19:35] hi gthorslund !
[19:36] mtaylor: I'm hacking some python. hope that makes you proud of me ;-)
[19:37] lifeless: looks like it should give me enough clues. thx
[19:37] gthorslund: excellent
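A minimal sketch of detecting uncommitted changes via iter_changes, per the pointer above. It assumes bzrlib's Tree.iter_changes convenience wrapper over InterTree.iter_changes, so check it against the linked API docs before relying on it.

    # Illustrative only: report whether a working tree has uncommitted changes.
    from bzrlib import workingtree

    wt = workingtree.WorkingTree.open('.')
    wt.lock_read()
    try:
        basis = wt.basis_tree()
        basis.lock_read()
        try:
            # iter_changes yields one tuple per changed file; any result
            # at all means there are uncommitted changes.
            has_changes = any(True for _ in wt.iter_changes(basis))
        finally:
            basis.unlock()
    finally:
        wt.unlock()
    print has_changes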
[20:01] hi jam!
[20:01] got a minute to help me with meliae?
[20:01] jam, we have servers blowing up all over the place and we'd like to do some diagnosing
[20:02] beuno: jam is here at uds, so might not be very responsive
[20:02] mwhudson, well, lucky you, I hear you're quite involved in meliae as well :)
[20:03] beuno: eh, a bit
[20:03] beuno: mostly a cheerleader :)
[20:03] mwhudson, what would be your guess if we only get a partial meliae dump?
[20:04] beuno: are you looking at jam's blog posts?
[20:04] mwhudson, I am
[20:04] beuno: missing a flush?
[20:05] beuno: how are you invoking meliae?
[20:05] mwhudson, this is what we have: https://pastebin.canonical.com/39038/
[20:06] beuno: meliae trunk?
[20:06] mwhudson, 0.1.3~1.CAT.8.04
[20:07] which I presume is some sort of monster we backported to hardy
[20:07] seems pretty recent
[20:07] although not tagged in trunk, naughty jam
=== oubiwann is now known as oubiwann-away
=== oubiwann-away is now known as oubiwann
[20:09] mwhudson, does that script look like we'd need to flush anything?
[20:09] no
[20:09] beuno: you get a nice 200 response back?
[20:09] mwhudson, not really, it complains about not being able to write to "+meliae"
[20:10] although it does write the .json file
[20:10] beuno: ??
[20:10] mwhudson, scratch that
[20:10] I do get a 200
[20:10] beuno: i guess it's hard to say, but is the dump massively truncated or a little bit?
[20:10] mwhudson: what needs to be tagged?
[20:11] jam: there's no 0.3.1 tag in lp:meliae
[20:11] 13:23 < beuno> (ami-hardy-i386)ubunet@ip-10-122-34-239:~$ wget http://127.0.0.1:8881/+meliae --no-check-certificate
[20:11] 13:23 < beuno> --18:23:00-- http://127.0.0.1:8881/+meliae
[20:11] 13:23 < beuno> => `+meliae'
[20:11] mwhudson: I don't think I did an official release, but I'm not positive
[20:11] 13:23 < beuno> Connecting to 127.0.0.1:8881... connected.
[20:11] 13:23 < beuno> HTTP request sent, awaiting response... 200 OK
[20:11] 13:23 < beuno> Length: unspecified [text/html]
[20:11] but I think it's something wrong with the return
[20:11] jam: oh ok
[20:11] 13:23 < beuno> +meliae: Permission denied
[20:11] 13:23 < beuno> Cannot write to `+meliae' (Permission denied).
[20:11] is what I get
[20:11] mwhudson: I only have 0.3.0 here
[20:11] mwhudson, the process is ~800mb, and the dump is 13mb
[20:11] beuno: run that in a directory that you have write access to :-)
[20:11] I'd expect a lot to be missing
[20:12] or wget -O- or something
[20:12] mwhudson, heh, of course, wget wanting to write
[20:12] would that truncate the output file? doesn't seem so
[20:12] beuno: dump_all_objects() is what you want to be using, and will try to find everything
[20:12] it does seem unlikely
[20:13] however, being 13MB does not by itself mean the file is truncated
[20:13] jam, https://pastebin.canonical.com/39038/
[20:13] well
[20:13] if you were reading it and it said that there was broken content, *that* would be truncated
[20:13] I say it's truncated because at the end there is half a line
[20:13] beuno: well, that is what I was pointing at.
[20:13] yes, that's exactly what I get
[20:13] :)
[20:14] so there was an issue that os.fdopen().flush() wasn't actually flushing if you used the raw FILE * pointer
[20:14] which was fixed (checking the rev)
[20:14] beuno: you have "0.1.3"; that is the 0.1 series, not the 0.3 series
[20:14] definitely upgrade to trunk
[20:14] aha
[20:15] ok
[20:15] that's a good first step
[20:15] doh!
[20:15] if it is just the truncation thing, that should make a big difference
[20:16] beuno: sorry for missing that :)
[20:16] mwhudson, no worries! this is a lot of progress!
[20:16] beuno: Flush was fixed in the 0.2.1-ish timeframe
[20:16] thanks jam, mwhudson, I'll update the server and see what happens
[20:17] beuno: I believe mwhudson did something similar but returned the dumped content to the url rather than dumping it to disk; I'm curious why you prefer this method.
[20:18] (certainly for a remote user, it would be nice to get the raw content back.)
[20:19] jam, I'm actually locally on the server, but what was there was this django middleware to trigger the dump
=== dash_ is now known as dash
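For reference, the dump call jam recommends above looks like this with meliae trunk; the output path, and the idea of calling it from a request handler like the pastebinned middleware, are illustrative.

    # Illustrative only: dump every live object the scanner can find to JSON.
    from meliae import scanner

    scanner.dump_all_objects('/tmp/process-memory.json')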
[20:53] beuno: any luck?
[20:53] jam, just managed to get trunk on the server
[20:53] waiting for some more tests that are going on to finish
[20:53] and will try again
[20:54] beuno: sounds good. If you need help debugging the dump, let me know.
[20:54] jam, will do, thanks. Hope to try again in 10-15
[20:54] you aren't at UDS, right?
[20:55] no :(
[20:55] missed it this time around
[20:56] np
[20:56] would have been nice to sit over your shoulder while you worked with it
[20:56] yeah, would have been great
[20:57] I'll break more servers next UDS when you are there too
[20:57] beuno: well if necessary, there is always "screen"
[20:58] anyway, switching rooms, bbiab
=== Ursinha is now known as Ursinha-afk
[21:04] jam, so, with the new version, they don't seem to truncate, although they do seem small
[21:05] they are smaller than expected
[21:05] but size may not matter after all
[21:07] beuno: if you have very large strings, we only dump 100 bytes of a given string
[21:07] jam, I am thinking that my algorithms in https://dev.launchpad.net/Code/BranchRevisions are slightly wrong, in that they assume that there is a single mainline_parent entry for each revision.
[21:07] if you have lots of small objects, then the dump is usually about the same size as memory, or larger
[21:08] abentley: IIRC, it could still work, you would just search multiple tips concurrently
[21:08] I'd have to look closely again, though
[21:08] beuno: there is also 'dark' memory that I've run into, which can be a lot more than I would like
[21:08] jam, this affects "Branch page" and "Merge proposal page".
[21:08] jam: oh, that reminds me, thank you for writing meliae
[21:08] jam: it has made my life easier. :)
[21:09] zlib.decompress and zlib.compress tend to have a lot of buffers that aren't particularly accessible, for example
[21:09] dash: I'm very happy it has proven useful to you. It certainly helped me. Feedback always welcome, btw
[21:09] jam: well the only thing i'd change is the license. ;)
[21:10] dash: to?
[21:10] jam: I worry that a given revision might not be the head of a mainline_parent_range. Maybe the simplest thing is to accept ANY mainline_parent_range where the revision is mentioned.
[21:10] something shorter :) gplv2 or apache/mit/etc
[21:10] dash: why v2 vs v3?
[21:11] jam: i.e. not the range table per se, but any entry in mainline_parent_range where the revision id is correct.
[21:11] (v3 is canonical policy unless there is a specific reason to do otherwise, so if you have specific reasons, I'm willing to listen)
[21:11] jam: argh, "mainline_parent"
[21:11] ah, i wasn't aware of that.
[21:11] ok
[21:12] jam: my real preference is permissive licenses, personally. it's just that gplv3 makes people in legal very nervous :)
[21:12] anyway, it's not like i'm distributing it to customers, so whatever :)
[21:13] jam, maybe "SELECT range FROM mainline_parent_range WHERE revision = %s ORDER BY DIST DESC LIMIT 1"?
[21:13] jam, err "SELECT range FROM mainline_parent WHERE revision = %s ORDER BY DIST DESC LIMIT 1"?
[21:14] dash: atm meliae is designed more as a helper than something you would bundle in your package, though
[21:14] so the license doesn't matter as much
[21:15] abentley: well that query doesn't require it to be the head
[21:15] jam: Sure.
[21:15] like i said, not a big deal.
[21:18] jam, right, because there might not be an entry where it's the head.
[21:19] jam, so this way at least you find the entry where it's closest to the head.
[22:13] beuno: any interesting insight ?
[22:13] jam, well, not from the meliae dump
[22:13] it seems normal
[22:13] which is baffling
[22:13] since we see the process's memory grow over 1GB
[22:14] but, we found a piece of code that, when removed, stops the server from dying under a small amount of load
=== Ursinha-afk is now known as Ursinha
[22:14] beuno: 1) are you sure you are dumping while it is at its peak, 2) it could be memory fragmentation
[22:14] 3) ISTR that Thread objects take up a lot of memory, which may not be tracked
[22:14] * beuno nods
[22:14] there are other possible hidden memory allocations
[22:14] right
[22:15] I am slowly getting into this part of the world
[22:15] so no good ideas yet
[22:15] the other problem we have
[22:15] is we can't really reproduce it locally
[22:15] beuno: if the dump is small enough, you could bz2 it and put it somewhere and I'll give it a look
[22:15] only on ec2 instances on staging
[22:15] jam, sure, it's 14mb and it should shrink quite a bit
[22:16] will email
[22:22] jam, sent
[23:35] jml: in case you didn't see it, I can make those SIGINT tests pass; it's just a question of whether or not you think the ctypes hackery is worth it.
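For completeness, inspecting a dump like the one mailed above typically starts along these lines with meliae's loader, per jam's blog posts mentioned earlier; the filename is illustrative, and the exact methods should be checked against the installed version.

    # Illustrative only: load a meliae dump and print a per-type summary.
    from meliae import loader

    om = loader.load('dump.json')
    om.summarize()  # prints object counts and total bytes per type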