=== Mez|OnAir is now known as Mez [00:13] is there a bzr 1.0 backport for gutsy available? (I tried myself, but it doesn't seem to be working right) [00:14] jdstrand: http://bazaar-vcs.org/DistroDownloads [00:14] I just ran dpkg-buildpackage on bzr 1.0~rc2-2, but am getting errors against a 1.0.0.candidate.1 tree [00:15] spiv: thanks! [00:20] jam: I was going to review your Graph.heads optimization, but things seem to be a bit messy-- e.g. it adds preload_parents, which your "Fixes for find_differences()" then takes away. [00:21] hi abentley [00:21] igc: Hi. [00:21] abentley: re pkg_resources, is it easy to include in our code? [00:21] seems to be part of setuptools [00:22] Yes, it is part of setuptools. [00:22] and the pep adding it to python is a future thing I gather [00:22] s/is/in/ [00:22] Yes. [00:22] it looks like the right idea ... [00:22] Considering we want bazaar to be installable as an egg, using setuptools doesn't seem too crazy. [00:22] I'm just not overly comfortable making setuptools a dependency just for the sake of this [00:23] Well, it may make more sense to reinvent the wheel. [00:24] But I thought it was worth discussing at least. [00:24] I'm pleased you mentioned it [00:24] I wasn't aware of it [00:24] I have no idea how hard or easy it would be to include by itself. [00:25] I can't find anything along those lines after a brief search [00:25] it probably isn't a big deal ... [00:26] but it feels a little like a huge hammer to smash a small nut in this case (to me at least) [00:26] I just want to load doc from files and not python strings :-) [00:27] What it does give you is location-independence for resources. [00:27] I was happy with leaving the docs where they were but it doesn't work with the Windows installer apparently [00:27] If your file is in a zip, getting a python string is better than getting a faile. [00:27] s/faile/file [00:29] true [00:30] pkg_resources is 82k, 2.5k lines. [00:30] So it would be a lot to manage ourselves. 
[00:31] What we could do is strive for api compatibility. [00:31] Or we could just solve it a simpler way for this case. [00:31] Hmm [00:32] I would like to get ref material out of the UG for 1.0 so ... [00:32] how about I raise an issue about making an api ... [00:33] that is compatible with pkg_resources [00:33] Sounds good to me. [00:33] for 1.0, I'll make the files py ones with big strings in them [00:33] so things don't break in zip files [00:33] Eh? [00:34] well ... [00:34] you can just have an API that returns the contents of a file as a string. [00:34] your point about loading from files will break [00:35] ok [00:35] pkg_resources can give you filenames if you must have them, but this will mean extracting the files from the zip first. [00:36] So IIRC, it's encouraged to retrieve file-like objects or strings. [00:36] I don't need filenames really [00:36] help_topics._load_from_file() currently takes a topic and returns a string [00:36] that's all I care about at the high level [00:37] internally, I get that content via deriving a filename but ... [00:37] Right. So the resource API should return strings or file-like objects, rather than paths. That's all I meant about zips. [00:37] I could derive a string in py module instead say [00:38] You mean stuffing our existing text into py files? [00:38] Doesn't that defeat the purpose of keeping them in doc/en? [00:39] doc/en is no good for alex [00:39] my original patch had them there and he wanted them moved [00:39] the windows installer doesn't package doc/*, just the generated files I believe [00:40] I don't think that's a good tradeoff. [00:40] ok ... [00:40] so if we leave the files in doc/en ... [00:40] I don't know enough about the windows installer. [00:40] and mark them as User Reference only topics ... [00:41] then the runtime system doesn't need to find them [00:41] But it must have a way of accessing resources. [00:41] and alex's problem goes away [00:41] Probably pkg_resources, in fact. 
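The string-returning resource API being discussed could look something like this minimal sketch. The function name and file layout here are hypothetical, not bzr's actual help_topics code; setuptools' real pkg_resources.resource_string() would fill the same role while also working inside zipped eggs:

```python
import os

def load_topic(topic, base_dir=None):
    """Return the text of a help topic as a string.

    Hypothetical stand-in for pkg_resources.resource_string(): callers
    always get strings, never filesystem paths, so the storage could
    later become a zip file without changing the API.
    """
    if base_dir is None:
        # Default to files shipped next to this module.
        base_dir = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(base_dir, topic + ".txt")
    with open(path, "rb") as f:
        return f.read().decode("utf-8")
```

The key design point from the conversation: callers depend only on "topic in, string out", which is exactly the contract that survives zipped installs.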
[00:41] actually, only generate.doc.py needs them - to build bzr_man.txt during make [00:46] igc, interesting mail about including batteries on windows [00:46] is it really unavoidable that there are two somewhat incompatible windows environments? [00:46] that kinda suck [00:46] s [00:47] poolie: yeah - we have some work to do I think [00:47] I saw a complaint yesterday re hg [00:47] the grumble was that it bundled a dozen plugins but people had to turn them on :-( [00:48] in comparison, git has bisect etc enabled by default [00:48] so, in other words ... [00:48] git was deemed low admin because it just worked out of the box [00:49] we shouldn't try to satisfy everyone but ... [00:50] I'm beginning to think we ought to at least be bundling more, perhaps via meta-packages [00:50] disk space is cheap :-) [00:51] it's bandwidth in downloading the stuff that I think people get concerned about [00:51] i think we should too [00:51] even then, that's pretty small [00:52] except perhaps for dependent libraries like qt [00:52] maybe we could offer a "standard" and "minimal" install [00:52] yes [00:53] Compile time and general system dirtiness are concerns too. [00:53] how do you mean? [00:53] particularly on Windows and OS X [00:53] qt, to take your example, takes AGES to build. Longer than Firefox even, last I looked. [00:54] oh, on BSD or Gentoo? [00:54] ("you build qt?" :-) [00:54] i wouldn't expect most people would build it from source [00:54] Well, I don't, no. And I don't want to start because of some third-level dependency of a package for a VCS I'm going to use the CLI of ;) [00:54] not to neglect those who do have to, but it's probably a minority cas [00:54] e [00:55] but you wouldn't install a binary of it? [00:55] fullermd: I'm good with qbzr, bzr-gtk and bzr-svn being separate installers [00:56] Well, no. 
But even if so, that falls into the 'system cleanliness' side; I don't like installing extra stuff I'm not going to use, particularly big bunches of extra stuff. [00:56] I mean, just installing bzrtools needs graphviz. [00:56] So I'm looking at py2exe, and there are recipes for including data files. [00:57] it's more about what we bundle as "standard" plugins to match the expectations of functionality people have [00:57] And that pulls in half of X. [00:57] http://www.py2exe.org/index.cgi/AddingConfigFiles [00:57] abentley: I'll take a look [00:57] So I really think this is not a tool limitation. [00:57] Hmm. If I get bzr.dev using rsync, the first thing I receive is bzr.dev/.bzr.backup/? [00:58] fullermd: graphviz is supposed to be optional. Please smack anyone who makes it mandatory. [00:59] abentley: I maintain the port; what sort of masochist do you take me for? ;) [01:00] I could OPTIONS-ify it I s'pose. [01:00] hehe [01:01] I think Bazaar is the only hard dependency of bzrtools. [01:02] The rest are used by one command or another. [01:02] The deps in the port are currently bzr, rsync, and dot. I think I'd just leave rsync; it's not too big, and it's self-contained. [01:03] So you're not supporting baz-import? [01:03] Well, I haven't really done much with the port except update the versions; lulf created it originally. [01:03] Ah. [01:03] I just inherited it and bzr from him. [01:03] Do I know lulf? [01:04] Ulf Lilleengen. [01:04] Think he's still listed as maintainer of the baz port. There's a relaxing job ;) [01:05] Lol. [01:05] Is there a good reason for bzr.dev to contain both .bzr and .bzr.backup? [01:05] Uhm. Make that "Is it on purpose that bzr.dev contains both .bzr and .bzr.backup?" [01:06] Sounds like an accident to me, but not a harmful one. [01:06] It just doubles the download volume. [01:06] :) [01:08] ndim, downloaded using what method? [01:08] rsync? 
[01:09] igc, i'd like to talk more about batteries-included, but we should do the release first [01:09] poolie: it's a 1.1 thing IMO [01:10] igc, on the movement into the user reference [01:10] poolie: rsync, yes. my local bzr is a little too old. [01:10] (which i know is a different topic) [01:10] ndim, i'll move the backup [01:11] hm, i can't move it myself [01:12] igc, i share jam's view about the desirability of keeping docs in text files rather than inside py files [01:12] if possible [01:12] poolie: I'd prefer that too [01:21] poolie: Have you considered how to un-merge using annotations? [01:22] i.e. given revisions x and y, where y is descended from x, remove all the changes introduced by x? [01:22] abentley, i have not, but it's an interesting idea [01:23] I've been thinking about it as an approach to reverse-cherrypicking, but it seems complicated. [01:26] reverse-cherrypicking is about the only thing that merge3 can do that annotate cannot. [01:38] spiv: ping [01:39] jam: pong [01:40] spiv: do you know what the per-node overhead is for a set() ? [01:41] jam: very similar to a dict [01:41] which is? [01:41] jam: standard PyObject overhead (the PyObject_HEAD fields) plus a couple more fields, plus a "smalltable" [01:42] jam: depending on the number of entries in the set, it'll put all the entries in the PySetObject if they are under a certain threshold [01:42] jam: otherwise it'll malloc separate space for the set items. [01:43] well, I'm doing a set() of the ancestry of nodes in bzr.dev [01:43] so I have 7.3k sets [01:43] (the threshold seems to be 8) [01:43] and 63M nodes inside those sets [01:44] (gdb) p sizeof(PySetObject) [01:44] $1 = 100 [01:44] I'm seeing 29 bytes per entry in the set [01:44] well, lets try it a different wy [01:44] way [01:45] So is just holding the sets in memory the problem, or is it when you try to do an operation on the sets? 
[01:45] 7.3k * 100 = 730KB [01:45] Well, I just loaded them, and it takes 1.8GB of Ram [01:45] so the PySetObject overhead doesn't seem to be the expense [01:46] But the 28 bytes per entry *in* the set is [01:47] spiv: shouldn't it be something like 10 bytes per node? Something to give you a pointer to the real PyObject [01:47] What are the objects? Strings? [01:47] or am I triggering something funny in the "smalltable" code [01:47] spiv: yes [01:47] but they are bzr.dev revision ids [01:47] so ~60 bytes each [01:47] 15k total [01:47] The "smalltable" code is basically just "if len(set) < 8: store in the struct directly for cache coherence/memory fragmentation goodness" [01:48] hmm... maybe that is 900MB, seems a bit big [01:48] ah, 900 KB [01:48] that makes more sense [01:48] Also, (gdb) p sizeof(PyStringObject) [01:48] $5 = 24 [01:48] spiv: sure, but aren't the PyStringObjects re-used? [01:48] If I have 1 set with 15000 entries [01:49] set1 = set(...15kentries...) [01:49] set2 = set() [01:49] set2.update(set) [01:49] set2.add('a') [01:49] jam: they might be reused, depending on how you construct them. [01:49] jam: there's an easy way to find out [01:49] jam: do a set(id(x) for x in ...) :) [01:51] But if you're building new sets by "set2.update(set1)" like in your example there, then it'd just reference already existing objects, yeah. [01:51] spiv: If I do you set(id(x) ...) and then do a set difference [01:51] I get the same number of nodes [01:51] as just doing a direct set difference [01:54] So "len(my_set)" should always be the same as "len(set(id(x) for x in my_set)))" (assuming the __eq__ of the setitems behave normally). 
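The id()-based check spiv suggests can be sketched as follows (the revision ids here are made-up stand-ins for bzr.dev's real ones); it confirms that sets built with update() reference the very same string objects rather than copies, so the strings themselves are not the source of the bloat:

```python
# Hypothetical revision-id strings standing in for real bzr revision ids.
revids = ["rev-%05d" % i for i in range(15000)]

set1 = set(revids)
set2 = set()
set2.update(set1)       # references the existing objects, no copying
set2.add("extra-rev")

ids1 = set(id(x) for x in set1)
ids2 = set(id(x) for x in set2)
# Every object in set1 appears in set2 with the same identity,
# so the string data is shared between the sets, not duplicated.
assert ids1 <= ids2
assert len(ids2 - ids1) == 1    # only the newly added string is new
```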
[01:54] spiv: so doing: all = set() [01:54] for rev, nodes in self.ancestries.iteritems(): all.add(id(rev)); all.update(id(r) for r in nodes) [01:55] But what's more interesting is "ids = set(id(x) for x in my_set); ids.update(id(x) for x in my_set2); len(ids)" [01:55] (build up a set of the id of every piece I can find) [01:55] (Pdb) len(all) [01:55] 15499 [01:55] So all of the revision id strings seem to be properly shared [01:55] Ok. [01:55] Next question: are you sure it's these sets that are the problem? [01:56] spiv: well, I can say that it is during creation of these sets that my memory bloats [01:56] spiv: the code is here: http://bzr.arbash-meinel.com/plugins/test_graph/ [01:56] Because it sounds to me like 15k strings of ~60 bytes + 7.3k sets isn't adding up... [01:56] jam: http://www.twistedmatrix.com/users/spiv/countrefs.py [01:56] spiv: I completely agree, that is why I'm confused [01:57] jam: use that or something like it to get a count of the number of instances of various types. [01:58] (Pdb) countrefs.mostRefs(10) [01:58] [(152841, ), (8158, ), (7411, ), (3030, ), (1499, ), (1156, ), (985, ), (779, ), (625, ), (566, )] [01:58] 152k tuples is a bit [01:59] but still not 1.8GB worth of data, IMO [01:59] Well, it depends on how big the tuples are... [01:59] You can find out :) [01:59] all_tuples = [t for t in gc.get_objects() if type(t) is tuple] [02:00] Then you can try e.g. max(len(t) for t in all_tuples) [02:00] Pity there's no "histogram" builtin ;) [02:01] spiv: sum(len(obj) for obj in gc.get_objects() if type(obj) == tuple) [02:01] 379583 [02:01] with 152k tuples [02:01] that puts most of them at 2 [02:01] You can get the median with print sorted(all_tuples, key=len)[len(all_tuples)//2] ;) [02:02] Hmm, so no smoking gun there either. [02:02] It's possible the leak is in a type not tracked by gc (not a "heaptype" or whatever the jargon is). 
[02:02] print sorted(all_tuples, key=len)[len(all_tuples)//2] [02:02] (None, False) [02:02] weird [02:03] Oh, that printed the median tuple, rather than the length of the median tuple. Heh. [02:03] Still, that supports your "most of them are len 2" calculation :) [02:03] but it is still 2 [02:04] Ok, so the space isn't consumed by the sets, the strings, or the tuples. [02:04] 1500 lists, of total length 10k [02:04] again, not super long [02:04] spiv: can you try the code and see if you get the same effect? [02:05] (it is a bzr plugin, put it in ~/.bazaar/plugins and just run: "bzr test-graph bzr.dev") [02:05] So, at this point I'd try running without the optional C extensions (in case there's a leak there), and without plugins (in case e.g. the svn plugin is getting involved and thus getting its known memory leaks involved). [02:05] Ok, will do. [02:06] Well, I can do that, but the code is generally really simple [02:11] jam: I don't think I want to run that command :) [02:11] why? [02:11] jam: I only have 1G of RAM on this laptop :) [02:11] don't want to use the ram? [02:11] spiv: you could try it on something smaller [02:11] I will :) [02:11] bzrtools works for me [02:11] but doesn't bloat nearly as much [02:12] spiv: on my machine I have definitely tracked it down to the get_ancestry_for_interesting() [02:12] before the loop I'm at 25MB (with the revision graph loaded) [02:12] after the function, I'm at 1.8GB [02:15] spiv: also, python spends a long time during close, I assume doing garbage cleanup [02:17] Strange. I just "fixed" the BB performance issues by rebooting. [02:18] It was at 45% IO wait with nothing running. [02:20] How do you handle shared code in projects? We have a CVS project where some source is shared between us and another organisation. [02:21] spiv: it seems that sets simply wrap a dict in python2.4... [02:21] * data is a dictionary whose values are all True. 
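A small stand-in for the countrefs.mostRefs() helper used above: count live objects per type via the garbage collector, which is about the best you can do without a per-object size hook. (collections.Counter is Python 2.7+; on the 2.4/2.5 of the era you'd accumulate into a plain dict instead.)

```python
import gc
from collections import Counter

def most_refs(n=10):
    """Return the n most common live object types as (count, name) pairs."""
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return [(count, name) for name, count in counts.most_common(n)]

def total_tuple_len():
    """Sum of len() over every live tuple, as computed in the session."""
    return sum(len(t) for t in gc.get_objects() if type(t) is tuple)
```

As noted in the session, gc.get_objects() only sees gc-tracked objects, so a leak in a type the collector doesn't track will not show up here.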
[02:22] jam: oh, I was assuming you were using 2.5 [02:22] Talden: Generally, you package the shared code separately. [02:22] jam: which has a dedicated set type (based on the dict type's code, but a little leaner) [02:22] spiv: I am, I just have the 2.4 code available :) [02:22] Talden: libraries lend themselves to this treatment especially well. [02:23] spiv: downloading the 2.5 source now [02:23] abentley: We both contribute to the shared components code. Many of our developers would simply end up having to handle two separate projects and shipping code into the dependent project (with a general understanding that it's not maintained there). [02:24] This is achieved in CVS with a little backend hackery to mount the shared code in an appropriate path in the CVS backend. [02:24] jam: I have a bzr-svn import of python trunk and the py3k branch on my laptop :) [02:24] spiv: oooh [02:24] it won't matter for this code [02:24] as it only focuses on merge nodes [02:24] I am NOT promoting backend hackery and am fully aware that this isn't going to fly in DVCS land. [02:25] Talden: okay, I don't understand your requirements yet. [02:25] (1) Neither org can see the other's non-shared code. [02:25] (2) We both contribute to the code on HEAD and frequently make contributions on our own branches that make their way to HEAD via merges. [02:25] (3) History of shared content is visible to all (to help make sense of changes originating from two orgs with very different core-business). [02:25] Why would having a separate tree lead to stuff not being maintained? [02:26] abentley: Maybe I've misunderstood your suggestion [02:26] spiv: so if I'm looking at it correctly, the overhead of a set entry should be just 8 bytes, right? [02:27] a 'long' for the hash, and a PyObject* for the key [02:27] So for the shared code, I'm suggesting that should be one branch, and your non-shared code should be another branch. [02:28] Branches or separate ancestry? [02:28] Totally separate ancestry. 
[02:30] Right... so a developer can't 'mount' the shared code inside the path of their branch of main code right? Or does bzr know to handle nested branches somehow? [02:30] jam: that sounds right to me [02:30] so still, something weird is happening [02:30] 63M keys *8bytes is only 500 MB [02:30] A lot, but I'm seeing almost 4x that [02:32] It's possibly just python's memory management. [02:32] Creating/resizing lots of sets might be leading to memory fragmentation. [02:33] Talden: There has been work on that, but it's not complete. [02:33] There are packages that let you set up a nested tree, but don't keep things up-to-date. [02:40] spiv: well, doing "set1.copy().update(set2)" doesn't seem to improve things [02:40] Talden: if by "mount", you just mean sticking one tree inside the other, that's easy. [02:40] * igc lunch - bbl [02:40] and that hits on 5.6k of 7.3k nodes [02:41] I guess it is actually "s = set1.copy(); s.update(set2); s.add(r)" [02:41] but I guess that involves 2 resizes, and I don't know what sort of internal grow algorithms are used [02:43] spiv: I think you are right about the malloc() issues [02:44] If I change the calls to do: [02:44] jam: so, one interesting point is that sets' internal tables are larger than the number of entries, btw. [02:44] ancestries[revision_id] = ancestry.copy() [02:44] Then it drops to 1.4G [02:44] from 1.8G [02:45] spiv: all of the "dummy" nodes? [02:45] jam: they can be twice to four times larger than the number of entries after a resize, I think; obviously you want some sparseness to avoid too many hash collisions, and provide space for new entries, etc. [02:47] (maybe frozensets are more compact?) 
[02:47] spiv: they seem to always be powers of 2 [02:47] newsize <<= 1 [02:47] mask = newsize - 1; [02:47] So probably when we trip past 8k revisions [02:48] it jumps to 16k for each revision [02:48] and once we get past 16k (in a few revisions) it will jump to 32k [02:49] jam: well, see also "set_table_resize(so, so->used>50000 ? so->used*2 : so->used*4);" that e.g. set_add_key does. [02:50] so grow by 4x if < 50k nodes [02:50] oh, actually have the hash be 1/4 full if < 50k used nodes [02:50] and 1/2 full otherwise [02:50] so with 8k nodes, we have 32k tables [02:51] and above that it goes to 64k tables [02:52] and 4:1 is about what I'm seeing [02:52] 8 bytes per node, but an average overhead of 3 blank nodes per real node [02:55] jam: what about "ancestries[revision_id] = frozenset(ancestry)"? I don't think it's any more compact than set.copy, but maybe... [02:55] spiv: just finished that one [02:55] still 1.3G [02:55] so the same as .copy() [02:55] but better than not doing it (1.8G) [02:57] spiv: thanks for helping me track this down [02:57] jam: you're welcome. [02:57] jam: I'll get back to other stuff now I think [02:57] sorry for distracting you for so long [02:57] jam: It's been fun poking at this stuff [02:57] Even if it is hard to get good answers about python memory consumption! [02:58] spiv: yeah, I wish part of the PyObject protocol was to have a "mem_size" variable [02:58] or function for objects [02:58] so you could loop and figure out what was consuming your ram [02:59] I think a "mem_size" method would be helpful, but also deceptive. [03:00] A bit like how looking at the memory stats for processes in "top" is deceptive. [03:02] Because accounting shared memory is hard, if three processes share the same pages, should you say that each process "costs" 1/3 of that memory? That's often a bit too simplistic an analysis. [03:10] jam: so one idea for you: http://rafb.net/p/SRFLNW38.html [03:10] jam: trading memory for time, basically [03:13] jam: 'area'. 
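Python 2.6 later grew sys.getsizeof(), which is essentially the "mem_size" hook wished for above. A quick sketch makes the resize behaviour discussed here directly visible: a set's internal table only grows at a handful of points, in large power-of-two jumps, and is kept deliberately sparse in between:

```python
import sys

s = set()
sizes = []
for i in range(10000):
    s.add(i)
    sizes.append(sys.getsizeof(s))

# The reported size never shrinks while we only add entries...
assert sizes == sorted(sizes)

# ...and it changes at only a few resize points: most adds reuse the
# existing (mostly empty) table, which is the 4:1 sparseness seen above.
resize_points = sorted(set(sizes))
assert len(resize_points) < 15
```

sys.getsizeof() only reports the object's own allocation (here, the set struct plus its table), not the referenced strings, which matches the "accounting shared memory is hard" caveat in the discussion.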
[03:14] abentley: Thanks for the feedback... I went and looked into the support for nested branches and there doesn't look like much there yet. It looks like some developer handholding will be needed for us to maintain similar code-sharing workflow to what we have now. I guess I'll soon see how much impact this might have on my chances of moving us to bzr. [03:15] jam: (what's this plugin checking, anyway?) [03:17] i386: back in syd ? === hexmode` is now known as hexmode [03:18] lifeless: yep [03:18] Got back this morning [03:18] Feeling pretty good for jetlag [03:18] cool [03:18] etc [03:18] hows things? [03:19] good [03:20] relaxing :) [03:20] Yeah me too [03:20] Im attempting to quit smoking [03:20] so im chewing this special gum [03:22] lifeless: james unplugged the router :/ [03:24] i386_: lol [03:27] See what happens when you quit smoking? :p [03:30] Yeah [03:30] Actually this gum does the trick so far [03:30] except I want to vomit [03:30] eek [03:31] when Lynne quit she just cold-turkeyed [03:31] I didn't exactly _quit_. I just ran out, and haven't bothered to buy another pack in the last 7 years? Something like that... [03:33] I have a friend who advocates a different system. Don't fight it; every time you want a cigarette, just eat one. [03:33] rofl [03:35] hi all [03:36] jam: I mailed poolies details to get the canonical commercial packaging folk to do it for my vacation [03:36] when pushing a new branch up to a server in a new location, (via sftp) should the files transfer as well as the .bzr files? [03:39] lifeless, hi [03:40] RichElswick, no, only the .bzr directory is transferred over sftp atm [03:40] so I would need to transfer the files as well? [03:40] then others would be able to download the version controlled files? [03:41] if others download from that directory, they'll get the whole lot [03:41] igc, re your chapter 7 [03:41] it's good, but it seems a bit out of place in this manual [03:41] ok, which is what I sort of figured. 
[03:41] must be my ftp account setup then. [03:42] what's going wrong? [03:42] hi poolie [03:43] you can test it out if you have a sec... [03:43] http://detroitcreativetalent.com/SSD/RandomGen/ [03:43] is the location I uploaded the files ot. [03:43] to as a test. [03:43] hm, the problem seems to be with your web server [03:44] bzr info http://detroitcreativetalent.com/SSD/RandomGen/ [03:44] bzr: ERROR: Unknown branch format: ' ... [03:44] it's redirecting to an error page or something [03:46] lifeless, can you tell me what arch.ubuntu.com is still used for, if anything? [03:47] ok, thanks [03:52] poolie: in jan if thats ok, I don't really want to page work in [03:52] or should I say, I really don't want to page work in [04:02] yeah, totally fine [04:02] i should have said that [04:07] poolie: thanks for the help... looks like my upload via bzr push sftp... didn't work out as I thought it would, so I uploaded the .bzr directory manually and it works now, thanks! [05:43] Hm, my repo for bzr has fifteen different packs. [05:45] Haha, on my very next pull, it repacked down to three. [05:46] :) [05:47] Why three? It's the old 58 MB one, plus a new 2 MB one and new 360 KB one. [05:48] Does auto-packing do any optimization like "bzr pack" does/will? [05:57] no [05:57] hi lifeless [05:57] autopack tries to minimise the amount of work it does [05:57] pack tries to perform global performance optimisations (well, if the patch I put up the day I went on leave has been merged :)) [05:57] hi thumper [06:01] lifeless: Ordering revision texts in topological order? That just went in. [06:01] lifeless: yep, jam merged it I think. [06:01] lifeless: (just bzr.dev, not 1.0) [06:02] lifeless: Does it do enough that it could be worth running "bzr pack"? What about on a server where the repo is just used for push/pull? 
[06:09] Peng: if you have tens of thousands of revisions, it'll make log a bit faster [06:10] Peng: so autopack exists to prevent latency growing without bounds [06:10] lifeless: Right. [06:10] lifeless: Why did it create two small packs instead of one? [06:10] Peng: thats why 3 packs - it left your largest alone because it wouldn't make any substantial difference to latency to rewrite the big one; but it would take a lot more time to rewrite it. [06:11] (imagine you were pushing into a sftp repo; you don't want that to download and upload the entire repo on a autopack, because auto pack is quite common) [06:11] lifeless: Yeah. I understand why it left the largest one, but why create two smaller ones instead of one? [06:13] Hypothetical situation: In chronological order, I have 1 100 MB pack, 10 100 KB packs, 1 50 MB pack, and 10 more 100 KB packs. Is auto-pack smart enough to leave the 50 and 100 MB packs alone but repack the others? [06:15] Peng: there is a bug open [06:15] Peng: autopack is fairly smart. at the moment its a bit 'clever' too but that should go. === Peng changed the topic of #bzr to: Bazaar Version Control System | http://bazaar-vcs.org/ | please test http://bazaar-vcs.org/releases/src/bzr-1.0rc3.tar.gz [09:09] Hey, 174 KB/sec. [09:09] Wait, that's still only 1400 Kbit/sec. That sucks. [09:09] You realize, of course, that the topic is a hot potato. Having once touched it, you're now responsible for it forever more :p [09:11] Well. [09:11] I'll just change my nick to lifeless_ and everyone will think he did it. [09:25] WTF? [09:25] If I try to push this branch over bzr+ssh, the local bzr freezes sucking 100% of the CPU forever. [09:25] I just tried pushing over sftp, and it finished after 20 seconds. There is a *lot* of data to push. [09:33] Even weirder is that it seems to be intact. [09:34] Now, when I uncommitted and recommitted something, it's been pushing for a few minutes now with lots of network activity! [09:36] Oh. 
Last time I tried to push when I gave it a few minutes, almost all of the CPU time was sys. [09:36] Maybe it wasn't frozen, or something. [09:40] Pushing over SFTP now, it's not using much CPU, but it's using 250 MB of RAM.. [09:40] (Actually, it's using about zero CPU.) [09:40] It's gaining maybe a MB of RAM a second. [09:42] * Peng wanders off. [09:51] abentley: I barely understand the public_branch stuff for `bzr send`, but do you think it could make sense to use as public_branch the parent_branch of the submit_branch, maybe recursively, until it's not local? [09:56] Okay, good, now it's not using so much RAM. [09:56] dato: public_branch is simple: it's the public URL of the branch (http://bazaar-vcs.org/bzr/bzr.dev/ or whatever). [09:57] yes, that'd be included in the "barely" [09:57] Heh. [09:57] Right. [10:07] Hello hello. [10:07] Hello. [10:08] So apparently it's uploading a very large pack. [10:08] The progress bar is completely useless. [10:09] It has 22 packs, so it might be repacking. [10:09] Erk, this would work better over bzr+ssh. [10:09] After this much has been uploaded, I'm not going to kill it and switch, though... === mrevell__ is now known as mrevell [10:19] * fullermd blinks. [10:19] What happened to the ability to 'log' historic files? [10:21] I could swear that worked at one time. [10:22] Whee test coverage! [10:22] Wow, now the uploaded pack is almost 100 MB. Now I think I'm glad I'm not using bzr+ssh.. [10:23] It probably would've gotten killed by now. [10:23] Or is bzr+ssh good at RAM usage? [10:24] Would you expect it to use more than the sftp server-side? [10:25] Yes. [10:25] sftp-server is currently using 3 MB. [10:26] I'd be shocked if I ever caught bzr using less than 15. [10:26] Well, yeah, the constant factor is higher. [10:26] Yeah. [10:26] But writing the file shouldn't add much. [10:26] And, of course, it has to actually calculate stuff. [10:27] fullermd: Does it just write the file verbatim or process it? [10:27] Oh my god. 
[10:27] TTBOMK, push just uses all vfs functionality still. [10:27] It took it 55 minutes and a 120 MB upload for it to realize the branches had diverged. [10:28] Wait. [10:29] I don't know if it was 120 MB. I was misreading another old file in upload/ that's 12 MB. [10:29] But it was definitely over 100 MB. [10:29] Brilliant, bzr. [10:30] Oh, good. It didn't delete it. It's in packs/. [10:30] I guess it updates the repo, then tries to update the branch. [10:32] It repacked the remote side down to 7 packs. [10:32] From 22. [10:32] It still left a couple small old ones. [10:32] While combining a couple new huge ones. [10:46] New bug: #175520 in bzr "log can't see historic filenames" [Undecided,New] https://launchpad.net/bugs/175520 [11:15] New bug: #175524 in bzr "http test server is not 1.1 compliant" [High,Confirmed] https://launchpad.net/bugs/175524 [11:55] hi :-) [11:56] probably a really stupid slightly off topic question, but would i violate the gpl if i write a bsd-licensed application that imports the bzrlib? [12:02] I can't speak for bzrlib specifically, but generally importing gpl code into bsd code is against the gpl [12:03] also importing in the sense of just having somewhere in your code "import otherlib"? [12:04] that is no different than dynamic linking in c applications [12:05] :-( i guess this means i can dump 20% of my application and remove scm altogether with bzr and hg being under gpl v2 :-( thanks for the info :) [12:06] Just out of curiosity, what's your application? [12:08] Peng, it's a simple docutils wrapper that combines docutils with stuff like the jinja template engine and pygments and also simply allows to process multiple documents at a time (and also generates an index for those documents). 
and to get the mtime of a document i wanted to use the last mtime as seen from a scm [12:08] you can write a gpl plugin for bzrlib and run bzr in a separate process :) [12:08] which would defeat the purpose for writing that tool in python in the first place ;-) [12:11] You could do something sucky like running 'python -c "import bzrlib; ..." in a separate process. [12:11] true, but i guess i will just remove that code :-( would have been a neat feature though ;) [12:11] cd [12:11] wrong window ^_^ [12:12] I don't suppose canonical would sue you for importing bzrlib, though :) [12:13] luks, :) i still prefer to be on the safe side :) so perhaps i will just write some kind of plugin infrastructure for that code and post a plugin that does what i want on my homepage as "example" :P [12:13] :) [12:13] that's how gstreamer does it [12:14] so far only mercurial is affected anyway ^_^ [12:14] http://codebrowse.launchpad.net/~zerok/rstsuite/main/annotate/zerok%40zerokspot.com-20071211113206-rnyh73s36xuw7dxr?file_id=__init__.py-20071027132757-ow0jgpc6iiwc1buy-2 ^_^ [12:30] dato: That would be a wild guess. I don't think it's unreasonable to ask people to set public branches for their local mirrors. [12:57] dato: The other thing you can do is just not use a local mirror for "send": bzr send http://bazaar-vcs.org/bzr/bzr.1.0/ [13:09] That would waste some bandwidth though, right? [13:10] hi all [13:12] anyone know who made these t-shirts: http://sinecera.de/DSC03480.JPG ??? [13:15] canonical brought them to pycon last year, but I don't know what company they used [13:17] we should get some more of them... [13:17] I would like one... 
[13:17] same here :) [13:17] yeah, so would I [13:17] the funny thing is that none of the real developers got one [13:18] lol [13:18] we were talking about possible slogans at the last all-hands meeting [13:19] So a new one may be on the way === RichElswick is now known as q3sour [13:20] * mwhudson dances a smug little dance [13:26] jam, mwhudson: hehehe, canonical is trying to figure out who made them so that we can make more :-) [13:27] I thought I recognized your name, but /whois didn't show me anything interesting [13:27] ;-) [13:28] btw, I am making an eps version of the bzr logo for the new advertising agency [13:29] so if anyone knows the pantone colors for the logo, let me know :-) [13:33] oh, duh [13:34] zerok: you could provide a plugin that adds bzr or hg support [13:35] zerok: and distribute that separately [13:35] jelmer, yes, will do that the moment i have an idea how to handle plugins in my little tool :) [13:35] ah, k :-) [13:36] jelmer, it just feels like such an overkill for such a simple tool ;) [13:36] but eventually i will end up doing it anyway since i want to extend rst in the same way [13:39] kwwii: I like that T-Shirt. Can I get one with "bazaar/git/everything-but-SVN-or-CVS" written on it? [13:39] :) [13:41] ndim: hehe, if you point me to the person with the original artwork, I could try :p === doko_ is now known as doko [13:51] spiv: Just to reply to your comment, the code is just stress testing our graph ancestry code, to make sure that I'm giving correct answers before we finally approve it. [13:52] spiv: So basically I find all of the merge nodes in an ancestry (such as bzr.dev), and then test that find_difference() gives the same results as set operations, and that heads() gives reasonable results. [14:10] whats the syntax for bzr commit --fixes with multiple lp bug numbers? [14:16] sabdfl: iirc, --fixes lp:# --fixes lp:# --fixes lp:# [14:17] thanks LarstiQ [14:18] is there a command to show the properties on a revision? 
[14:21] sabdfl: i've resorted to python2.4 -c 'from bzrlib import branch; b=branch.Branch.open("."); print b.revno(), b.repository.get_revision(b.last_revision()).properties' in the past [14:22] sabdfl: cat-revision may also do it, but perhaps not in a format that you prefer === cprov is now known as cprov-lunch [14:48] Apparently when git packs, it can store deltas between any revision, not just related ones. Interesting, no? [14:56] abentley: ok, thanks. I've set up public_branch now. [15:05] New bug: #175569 in bzr "cat-revision gives backtrace when invalid revid is specified" [Undecided,New] https://launchpad.net/bugs/175569 [15:06] Peng: yes, that is something we are looking at. It can also store deltas between arbitrary file texts [15:06] Using a heuristic to determine that these file texts should be similar to those [15:08] wouldn't that make extraction slower? (since you usually need those left-hand parent texts together) [15:10] Peng: IIRC it uses the basename of the file to find similar names [15:10] which works well for the Kernel [15:10] because they have lots of Makefile and other repeated files [15:10] New bug: #175570 in bzr "bzr cat over bzr protocol fails" [Undecided,New] https://launchpad.net/bugs/175570 [15:11] a quick search through bzrlib shows 2 Makefiles (not very similar) [15:11] and a bunch of __init__.py files [15:11] which might compress well because of common Copyright headers [15:11] and no other repeated filenames [15:11] luks: It uses some other heuristics, too [15:12] Specifically, it also sorts by text size [15:12] and tends to put the largest text as a fulltext [15:12] which is often the most recent [15:12] (because people usually *add* and rarely remove) [15:13] luks: anyway, I found a few sub-optimal cases for git's algorithm [15:13] at least, if I was implementing it the way they did [15:13] going by ancestry did a *lot* better than going purely by file size [15:13] especially for merge nodes [15:13] since merge nodes tend to 
have a diff to one parent of the sum of changes on the other side [15:15] anyway, the #1 thing we want to get is arbitrary compression parents [15:15] but that's also size vs. extraction speed, IMO [15:15] so that we can experiment and find out what works best [15:15] because using the right parent would give you smaller diffs, but for extraction it is better if texts have the same base, no? [15:15] New bug: #175572 in bzr "smart server emits traceback while get log via bzr protocol" [Undecided,New] https://launchpad.net/bugs/175572 [15:16] luks: well, at the very least the fewest number of bytes to read to reconstruct all the texts you want [15:16] but some of that gets really hand-wavy [15:18] oh, I also found that for *generating* them, usually the base that makes the smallest diff [15:18] also makes for the *fastest* diff [15:18] luks: the other problem with using right parent is more when you use both parents [15:18] because then you have to extract 2 histories for the file [15:18] rather than just along one direction [15:18] but you rarely do that, especially in bzr [15:19] luks: compare against the non-left parent? Probably true [15:19] yep [15:19] but "build_tree" performance is probably a bigger deal than "diff" [15:19] since you usually only diff a small number of files [15:19] build_tree touches every file [15:24] hmm, actually even if you use non-left parent you usually get the same base, because not many branches have over 100 commits before they get merged (or whatever is the max delta chain constant) === cprov-lunch is now known as cprov === mrevell_ is now known as mrevell [16:30] New bug: #175589 in bzr "suggested update when bound branch is out of date does confusing things" [Undecided,New] https://launchpad.net/bugs/175589 [16:44] does bazaar run under jython? [16:45] Qhestion, I'm pretty sure it doesn't [16:45] any reason for that? 
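[Editor's aside] The size-sorting heuristic jam describes above ([15:12]: sort by text size, store the largest text as the fulltext) can be sketched as a toy in Python. This is an illustration only, not git's or bzr's actual code; `delta_size` is a hypothetical helper that approximates delta cost with difflib:

```python
import difflib

def delta_size(base, target):
    # Approximate the cost of storing target as a delta against base:
    # count the characters of target not covered by matching blocks.
    matcher = difflib.SequenceMatcher(None, base, target)
    matched = sum(size for _, _, size in matcher.get_matching_blocks())
    return len(target) - matched

def pack_cost(texts):
    # Sort largest-first: the biggest text (often the most recent,
    # since people usually *add* and rarely remove) becomes the
    # fulltext; each smaller text is deltaed against its predecessor.
    ordered = sorted(texts, key=len, reverse=True)
    costs = [len(ordered[0])]  # the fulltext is stored whole
    for prev, cur in zip(ordered, ordered[1:]):
        costs.append(delta_size(prev, cur))
    return ordered, costs

# Three "versions" of a file that only ever grows.
versions = ["line1\n", "line1\nline2\n", "line1\nline2\nline3\n"]
ordered, costs = pack_cost(versions)
```

Here the newest, largest version is stored whole and the older versions shrink to near-zero deltas; the discussion above is about the cases (merge nodes, unrelated texts of similar size) where this ordering does worse than following ancestry.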
[16:46] i mean, i would like to build something on bazaar, and i am pretty sure i will use java [16:46] Qhestion, I don't know the specifics, Verterok does, but he usually isn't here at this hour. I think it has something to do with the OS calls, but I might be wrong. It's the lack of Jython's support for some features [16:47] Qhestion: jython was stuck in python <=2.3 ?? and we require 2.4 [16:47] dammit [16:47] I thought Verterok mentioned that jython would be updating to 2.5 compatibility [16:47] Real Soon Now [16:47] there you go, a much nicer explanation :D [16:47] one day bazaar will get me to switch to python... [16:47] As far as running the test-suite on jython [16:47] We use os.chdir() as part of the test-suite [16:47] to ensure that tests have sanitized directories to work in [16:48] (basic sandboxing of each test) [16:48] but AFAIK Java doesn't support chdir() [16:48] Qhestion, Verterok does interact with bzr from Java (eclipse, specifically) using a plugin he wrote to output XML and parse it [16:48] jam: really? [16:48] dato: last I knew, it was part of the java security model [16:48] to sort of hide anything like cwd [16:48] java-ignorant? what is so bad about java? [16:49] I *do* know that jython with python2.3 does not implement os.chdir() [16:49] Qhestion: I was stating that I don't know much myself about java intrinsics [16:49] oh ok [16:49] Qhestion: I can also say that for commandline apps, java's startup overhead is pretty prohibitive [16:49] but for something like an IDE which stays up for a while [16:49] it isn't something you pay for every command [16:50] Anyway, I haven't tried bzrlib in jython for >1 year [16:50] well, java needs only ONE startup [16:50] anyway, this is the only negative thing i see about java... 
[16:50] New bug: #175594 in bzr "UnicodeDecodeError after commiting" [Undecided,New] https://launchpad.net/bugs/175594 [16:51] Things could certainly have changed since then [16:51] Qhestion: if you are using a program which starts and stops a lot [16:51] does java leave the interpreter running in the background? [16:51] thats not what i mean [16:51] jam, its what windows does for you [16:51] of course it needs startup [16:51] but thats within 0.05 seconds [16:52] the REALLY slow thing is applets, but thats not really java [16:52] lets see. i just ran "ant" twice. [16:52] the only Java apps I use take forever to start (eclipse, azureus and openoffice) [16:52] 'ant' is the 'make' of java [16:53] and openoffice rarely uses java [16:53] beuno: is OOo actually java? [16:53] beuno, that has NOTHING todo with java [16:53] NOTHING. [16:53] jam, in some bits, yes [16:53] ok results: first ant run (since computer restart): ca. 1 second [16:53] second run: cant count that fast [16:53] jam, if you disable java in OOo, you lack some features, but it starts up 12x faster [16:53] Qhestion: so there are 2 bits... you can keep the interpreter running, which means all java processes re-use the interpreter. AKA one crashes they all die [16:53] no, jam, no [16:54] Qhestion: or you can spawn new ones and pay for it [16:54] Qhestion: it has to kill the interpreter [16:54] not just a simple exception [16:54] jam, its what all programs do [16:54] they get cached [16:54] anyway, I'm happy to hear that java got better, though [16:54] ok another stupid thing of java: Swing [16:55] the standard user interface [16:55] it looks ugly, feels ugly, and is slow [16:55] on any platform! [16:56] Qhestion, I've heard the java > python migration is pretty painless... :p [16:56] anyway, dont want to start a language war here, especially since bazaar is written in python, which means i am on the wrong side... 
[16:56] beuno, actually i use python pretty much [16:56] for SMALL scripts [16:56] (everything less than 5kB) [16:57] but i definitely love the features eclipse offers me - especially code completion [16:57] ah, I tend to use bash in those scenarios [16:57] .oO(so if it grows more than 5kB, you rewrite in java? :-P) [16:57] btw, what do you bzr guys use to code in python? jam? [16:57] since there is no code completion / intellisense stuff for python anyway, i just use Vim [16:57] beuno: vim [16:57] dato: head of planning? ;) [16:58] * dato vim too. [16:58] oh good. [16:58] so, what about code completion? [16:58] ^P [16:58] ^X ^L [16:58] cheater! [16:58] errm in english please? [16:58] Vim has code completion [16:59] using Ctrl+P [16:59] Qhestion, ^ == control [16:59] for single word completion [16:59] and Ctrl+X Ctrl+L for whole-line completion [16:59] beuno yes, realized that too late [16:59] A bit emacs-ish for my taste, but it generally works [16:59] I tried eclipse... [17:00] The problem is that it really didn't want to work on files that were not in an Eclipse project [17:00] which was okay, I can create one and add them [17:00] oh not THAT sort of code completion [17:00] But then an Eclipse project was fixed [17:00] i meant intelligent code completion [17:00] based on Absolute path [17:01] Rather than saying "the files underneath this directory" [17:01] which means that having 100 different branches of bzr.dev (which I do) [17:01] is out of the question [17:01] like, showing what is in namespace xyz and showing api docs in a popup when selecting one item [17:01] I wasn't going to go through the Eclipse overhead every time I wanted to work on a feature branch [17:01] Qhestion: ctags -R . [17:01] And then in vim [17:01] ^] [17:01] is that a command? [17:02] i am just too new in vim... [17:02] Will follow the tags to the definition under the current position [17:02] how do i enable that? 
[17:02] Qhestion: ctags is a program which reads code and builds up a "tags" file of where the definitions are located [17:02] I don't know if it is bundled into gvim for windows or not [17:02] gvim? [17:02] huh? [17:02] ahh yes i remember [17:02] On Mac, I see a Tools/Build Tags file [17:02] in the menu [17:03] "(18:03:01) jam: in the menu" --> for that you need a menu :/ [17:03] we are talking about vim right? [17:04] Qhestion, gvim, it's a GUI for vim [17:04] "(18:02:52) Qhestion: ahh yes i remember", but i dont use it... [17:04] maybe i should [17:05] Qhestion, in the end, it's whatever you're comfortable with I guess [17:05] * dato idly wonders if he'll still use vim, or at least some modal editor, in 2030. [17:05] although vim seems to win over people eventually [17:05] its not that i dislike vim. its just... well, i am only half through the manual [17:06] Qhestion: if you go through the tutorial, it does a decent job of warming you up to it [17:06] Oh, the other eclipse issue... no proper vim integration for its editor :) [17:06] It had some sort of vim plugin [17:07] but the only good one was like $20 [17:07] And the limited teaser they had was broken [17:07] jam, there is eclipse-vim... [17:07] So I couldn't evaluate whether it would be worth anything [17:07] not that i checked it... [17:07] Qhestion: I'm guessing that is the one I was looking at [17:07] I had similar issues with the W IDE [17:08] on the other hand, there is something that integrates eclipse (headless, no gui, just lib) into vim (ui) [17:08] Wingware [17:08] It *had* a Vim mode, but it didn't support the best Vim command [17:08] '.' [17:08] (To redo the last action, which I use *all* the time) [17:08] And is, IMO, one of the strongest reasons to use vim [17:08] just another question: how often do you use your browser (!) to look at apidocs? [17:09] Once you get used to the idea of applying commands to a movement range [17:09] Qhestion: pretty much never [17:09] ???!?!?!? 
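[Editor's aside] Collecting the scattered vim hints above into one place (a cheat-sheet, not a script; assumes Exuberant Ctags or similar is installed):

```
$ ctags -R .          # build a "tags" file covering the whole tree
$ vim bzrlib/foo.py
    ^]                # jump to the definition under the cursor
    ^P                # complete word from previous match; ^N for next
    ^X ^L             # whole-line completion
```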
[17:09] jam: now that you mention "." [17:09] jam: http://chistera.yi.org/~adeodato/blog/uuu, in case it can amuse you [17:09] Qhestion: I find jumping around source code just a ":e bzrlib/foo.py" away [17:10] dato: interesting [17:10] Yeah, ViM uses "uuu" [17:10] "U" is the confusing one for me :) [17:10] Qhestion: the other thing you have to watch out for with vim.... Getting the Capslock turned on [17:10] Suddenly your editor nukes itself [17:10] hehe [17:10] or missing the right key with the finger... [17:11] k == scroll one line up, K == lookup this function in the man pages [17:11] and suddenly doing something dangerous... [17:11] just make CapsLock an additional control key [17:11] be done with it :) [17:11] j == scroll one line down, J == wrap the next line onto this one [17:11] btw how can i use copy/paste with vim? [17:11] Qhestion: on the plus side ViM has "uuuu" when you really need it :) [17:12] Qhestion: y == "yank" [17:12] p == paste [17:12] P == paste before this location, p == paste after this location [17:12] Qhestion: the other thing you need to get used to, is that Vim has commands that take movement keys [17:13] so "yj" is copy from this to the line above [17:13] "yk" is yank from this to the line below [17:13] "yw" is yank from this word to the next [17:13] jam: backwards [17:13] (above/below) [17:13] yes [17:13] dato: my fingers know the difference [17:13] my mind does not [17:13] hehe [17:13] well, having played nethack a lot, same goes for me [17:13] Qhestion: when you get *used* to it. 
It is very intuitive and powerful [17:14] although nethack was *before* vim for me [17:14] "yt)" copy all characters until you get to an end parenthesis [17:14] "c%" change all of the characters between the current parenthesis and its matching one [17:14] hjkl is intuitive for me, second-class code completion isnt [17:14] I use "c%" a lot to change the arguments to a function [17:15] jam: "t" and "f" are teh awesome, yet many vim users do not know about them [17:15] Also, you can configure both bash and zsh to use "set -o vi" [17:15] ct" anyone? [17:15] and you get to edit your command line [17:15] dato: yep [17:15] Qhestion: also, readline supports vim mode [17:15] the command i like most in vim DEFINITELY is [17:15] "help!" [17:16] so you can put into ~/.inputrc: [17:16] set editing-mode vi [17:16] $if mode=vi [17:16] set editing-mode vi [17:16] set keymap vi [17:16] $endif [17:16] Qhestion: nice [17:16] And then in python [17:16] you get vim mode as well [17:16] (in interactive python shell) [17:16] as well as any other apps that use INPUTRC and readline [17:16] The only problem is it doesn't always support all of the 'c%' craziness [17:17] so once you've been spoiled by ViM, some of the Vi-like implementations fall a bit flat [17:20] hmm if there is no .inputrc in my ~, shall i just create one? [17:20] Qhestion: yeah [17:20] You may also need to set the environment variable INPUTRC [17:20] export INPUTRC=~/.inputrc [17:20] works for me [17:21] (I put it in both ~/.bashrc, and ~/.zshrc) [17:21] on windows! :/ [17:21] Qhestion: I'm not sure there, you could set INPUTRC in environment variables [17:21] i know windows is ugly. i just wait for kde 4 to be released and then switch back [17:21] I can't walk you through the specific dialogs [17:21] to the home path or to the file? [17:22] Qhestion: file [17:22] dialogs? 
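[Editor's aside] For reference, the readline configuration dictated piecemeal above, collected into one ~/.inputrc fragment (it enables vi editing mode in bash, the interactive python shell, and anything else that uses readline):

```
set editing-mode vi
$if mode=vi
    set editing-mode vi
    set keymap vi
$endif
```

As noted in the discussion, you may also need `export INPUTRC=~/.inputrc` in both ~/.bashrc and ~/.zshrc so that readline finds the file.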
there is SET for that ;) [17:22] Qhestion: but SET is only for the current command [17:22] cmd.exe [17:22] Setting it by dialogs sets it from there onward [17:23] Something like Right-click on My Computer, Settings, One page is Environment Variables and Virtual Memory settings [17:23] and you have to click a button to pop up another dialog [17:23] which lets you set All-user env vars [17:23] as well as just env vars for your user [17:24] jam: for current console session, you mean. anyway, i want to test it first [17:26] so now that i have that command [17:26] erm file i meant [17:26] how do i trigger the auto completion? [17:26] Qhestion: for what program? [17:26] vim :) [17:26] Vim has completion inside it [17:26] i made that .inputrc file [17:26] which you told me to make [17:26] now what do i do now? [17:26] AFAIK windows cmd.exe doesn't handle vim-style completion [17:27] you need to be running bash or zsh [17:27] Qhestion: if you are just running vim [17:27] then it should already have it [17:28] yes, but what is the key for it? [17:28] To complete a word? ^p for "previous existing" and "^n" for next [17:29] so you type something like "get_fo^p^p^p" until it completes like you want it to [17:30] if i try for example "raw_in^p" it says not found [17:30] although THIS sort of autocompletion is what i want [17:30] Qhestion: I don't believe Vim has Intellisense yet, though I think it is high in priority [17:30] It has "matching string" completion [17:30] same goes for "os.^p", which i would LIKE to display every thing in os [17:30] so if you have a file open with "raw_input" then it will find it [17:30] Qhestion: vim doesn't do that [17:30] (I haven't really needed it) [17:31] yet? so vim is young? 
:/ [17:31] It is hard to do that for python, because it is a bit dynamic, so systems which try are at best approximating [17:31] jam: I think vim 7 got something, omnicompletion I think they call it [17:31] :help new-omni-completion [17:31] (never tried it) [17:31] oh yeah there was a funny comment: "High dynamic means you dont know what it will do until you run it." [17:35] oh i just remembered one more stupid thing about java: ram usage [17:35] i just thought about using my VERY old notebook for programming... but definitely not with java [17:36] dato: so new-omni-completion is not supported for python?? [17:36] At least, that seems to be what it is saying [17:36] Well, it says it is supported [17:36] but doesn't give you a link to more details [17:37] And doing "^X ^O" gives me "omnifunc not set" [17:39] hmm C can call Python functions right? [17:39] jam: try :set omnifunc=pythoncomplete#Complete [17:40] Qhestion: yes, at least you can embed the python interpreter into a C program [17:40] jam: the distributed ftplugin/python.vim does that, and it... works [17:40] O M G [17:40] with help in the top [17:41] jam: and Java can call C. so Java should be able to call python... [17:41] Qhestion: sure, but that doesn't mean you get nice-to-use objects in Java [17:42] i dont care for that [17:42] all i want is to call the bazaar functions [17:42] as if i would call them from the command line [17:43] Qhestion, Verterok does that from eclipse [17:43] for the bzr-eclipse plugin [17:43] problem: the returned data should be easily parsable [17:43] and easy = programming_effort + cpu_effort [17:44] Qhestion, parsing XML seems to fall into the category, doesn't it? [17:44] Qhestion: well, you can do "bzrlib.commands.main()" but otherwise you probably want to deal in real objects [17:46] why oh why is bazaar written in python? i understand that python is the better language, but javas tools are waaaaaaay more comfortable... 
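[Editor's aside] The arm's-length pattern suggested twice above ([12:11] and [17:42]-[17:44]) is: rather than importing bzrlib into a differently-licensed or non-Python program, run bzr in a separate process and parse its (easily parsable) output. A minimal sketch; `run_tool` is a hypothetical name, and the Python interpreter stands in for the bzr executable so the example is self-contained:

```python
import subprocess
import sys

def run_tool(argv):
    # Run a command-line tool in a separate process and return its
    # stdout as text -- the licensing-safe alternative to importing
    # bzrlib directly.
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.decode("utf-8")

# In real use this might be ["bzr", "version"], or a command that
# emits XML for easy parsing (as the bzr-eclipse plugin does).
output = run_tool([sys.executable, "-c", "print('1.0')"])
```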
[17:46] Because most languages are chosen for their suitability to the problem, not because someone happens to have a little IDE widget they prefer [17:47] that was a rhetorical question. [17:47] ("why do they always answer my rhetorical questions?" -- "i know!") [17:48] oh nice, i will write it in python then. damned. === cprov is now known as cprov-away [19:14] bzr question (0.91 and 0.92) -- is 'bzr diff' supposed to show file timestamps in utc? [19:21] Hey, you're right, it does. [19:23] hmm, there appears to be overlap between bzr-cpick and replay in bzr-rebase [19:24] oh, I never heard of -cpick [20:03] Peng: do you think the utc-ness is intentional so I should get used to it, or patch/file bug? [20:04] thatch: I think we want to accept any timestamp and then show them consistently [20:05] so we should change log & diff and others at the same time; I thought there was a bug around this [20:05] lifeless: you know I'm awful at coming up with good search terms for bzr bugs and/or docs... [20:06] I find bug 121313 [20:06] Launchpad bug 121313 in bzr "bzr info shows me times in unuseful timezones" [Low,Confirmed] https://launchpad.net/bugs/121313 [20:08] thats probably it [20:10] I should just get used to knowing when it is in UTC, eh? :) [20:10] lol no - [20:45] hi :) [20:50] lifeless, thatch: relatedly, bzr log time zone is configurable via --timezone (local, original, or utc) [20:52] d'oh, of course it says that in the bug that thatch pasted :) [22:05] Frustrating: because the current releases are called 1.0rcX, Gentoo is waiting until the next "real" release to package it, and not bothering with the rc's. I tried pointing out that 1.0rc1 == 0.93, but they didn't buy it. [22:06] go debian! 
:-P [22:07] theres me thinking gentoo packaged everything with source *grin* [22:12] hello [22:13] hi andrew - that's a bit annoying [22:13] it would be good to get it in at least behind a mask (if i recall the term correctly) [22:13] poolie: yeah, that's what I was trying to convince them to do, but they sat on it. Alas. [22:14] Did those problems with the bzr:// functionality not working with packs get resolved? [22:32] afc, yes, i believe they are [22:32] That's encouraging to hear. [22:33] [it is faintly disturbing to me that the Bazaar project doesn't use bzr:// in publishing itself] [22:38] we should really set that up [22:38] in fact, i think it either is live on launchpad now, or will be soon [22:38] Seriously? Gentoo won't package it? [22:39] i'll check [22:43] Peng: they _could_ package it, but they didn't feel like it, waiting for an "actual" release instead. [22:44] Well, you *have* been releasing a lot of RCs. [22:44] Peng: (I'm sure if a Gentoo packager had been actively using it, then it'd be a different story, but for a package a person isn't using & testing & caring about, it's not entirely unreasonable) [22:45] Peng: (you mean "they"; I'm not one of the Bazaar hackers) [22:45] Okay, they. [22:46] It's not unreasonable. It's just unfortunate, especially since bzr.dev is almost as good as releases. [23:22] it would be nice someday to get a gentoo ebuild that installs from trunk [23:22] as i believe they do with emacs [23:22] but, let's get debs of that at least, first [23:23] igc, could you add me as a pypi admin for bzr sometime? [23:23] username 'mbp' [23:24] poolie: sure [23:24] thanks [23:26] jam, it varies between compilers on 64 bit platforms whether a long is 32 or 64 bits [23:26] Gentoo not giving users the latest & greatest? How very un-Gentooish. [23:26] iirc, for gcc it's 64 bits, for ms only 32 [23:34] hmm... I don't know that dot will gracefully handle 2068 nodes with text, what do you guys think? [23:35] Hey there. 
I have a patchfile (output from bzr diff). How do I apply those patches? [23:35] bzr diff | patch -p0 ? [23:35] patch patchfile.patch didn't do anything [23:36] dennda: I think you need -p0 [23:36] and usually you pass it in on stdin [23:36] patch -p0 < patchfile.patch [23:36] ah yes [23:36] it seems not to take a file as argument [23:37] good. worked. thank you [23:37] * jam => family time === mw is now known as mw|brb [23:57] dennda: also, bzrtools has 'bzr patch'
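[Editor's aside] To summarize the exchange above: the invocation that worked, plus the bzrtools alternative mentioned at the end (assuming the diff was taken from the branch root, so the -p0 paths line up):

```
$ cd /path/to/branch
$ patch -p0 < patchfile.patch
$ # or, with the bzrtools plugin installed:
$ bzr patch patchfile.patch
```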