/srv/irclogs.ubuntu.com/2010/10/25/#bzr.txt

spivGood morning.00:03
yann2is there a good reason why I wouldn't want to have +w for group on .bzr/checkout/dirstate ?00:24
yann2I got problems by pushing because that file belongs to another user, although the group is set correctly, the user seems to need to write there00:25
spivyann2: Odd, I wouldn't have thought that push would need to write to that file at all.00:29
yann2it might be the update I just added before actually :)00:30
spivyann2: the reason not to set group +w for that is if you don't want to allow other users in that group to tell bzr that files have been added or deleted, or that merges have been made...00:30
spiv(Or that renames have happened, or conflicts resolved, etc, i.e. changes to the state of the checkout)00:31
spivYeah, the update would make sense :)00:31
yann2well users in the group can basically remove the .bzr folder :)00:31
yann2I think the problem is that I do an automated update before doing a push... mmmmh, I'll look into that, thanks...00:32
RenatoSilvahow to diff between branches?02:32
spivD'oh, missed RenatoSilva02:37
* jelmer waves to spiv02:38
spivFor the record: "bzr help diff"02:38
spivHi jelmer02:38
spivAny progress on the bzr-svn/chk issue?02:38
jelmerspiv: No, I need to follow up on your email but I haven't had the time to do so yet (travelling at the moment).02:39
spivI was CC'd on part of a thread, but don't know if I'm missing any discussion, and probably the discussion ought to happen on (or at least summarised on) the bug...02:39
spivAh, ok.02:39
spivTo UDS, I guess?02:39
jelmerspiv: No, although I am in the US. Google Summer of Code mentor summit and GitTogether02:40
spivOh, heh.02:40
spivSo near and yet so far!02:40
spiv(not that I'm at UDS this time either)02:40
jelmerspiv: Are you going to the epic?02:42
spivI am02:46
jelmerCool02:48
=== spiv changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: spiv | Release Manager: vila | 2.3b2 is officially released, test it! | work on bzr: http://webapps.ubuntu.com/employment/canonical_BSE/
vilahi all !07:50
GaryvdMHi vila07:51
vila_o/07:52
=== beaumonta is now known as abeaumont_
mgzI see jml has blitzed a bunch of testtools things15:10
mgzwonder if he'd prefer follow up bugs for continuing issues or reopening the current ones15:10
lifelessmgz: new bugs15:11
mgzwill do, just found a mp for one of them and added a comment too.15:14
mgzI guess I could subscribe to testtools somehow.15:14
mgz...might just put up some branches for these test failures, some of them are shallow.15:23
mgzhm, Python doesn't expose raise. I guess kill covers it anyway.15:34
=== Meths_ is now known as Meths
=== mtaylor is now known as 52AACIHR4
=== tchan1 is now known as tchan
=== deryck_ is now known as deryck
RenatoSilvabzr qlog shows "Failed to decode using utf8, falling back to latin1", why?17:11
RenatoSilvaisn't bzr aware of the encoding of the file?17:12
abentleyRenatoSilva: bzr does not store any information about file encoding.17:13
abentleyRenatoSilva: Individual commands can guess the current file encoding, if they choose.17:13
=== mordred_ is now known as mtaylor
RenatoSilvaif they choose? a command needing to print file content ought to do that, doesn't it?17:14
RenatoSilvawhat's funny is that file is latin1 and diff in bzr qlog shows file as utf8 (drop down in bottom) but the chars are ok17:15
abentleyRenatoSilva: No, generally just emitting the bytes to the console works quite well.17:16
abentleyRenatoSilva: And for binaries, it's generally the only thing that can be done.17:16
RenatoSilvahmm this is a bug, it tries to decode from utf8 and something defaults to latin1 (msg above), but the drop down is not updated17:16
=== Meths_ is now known as Meths
RenatoSilvaby emitting the bytes to console you mean relying on python's guessing right17:17
abentleyRenatoSilva: Actually, it means relying on the console to have an encoding that is compatible with the files' encoding.17:19
RenatoSilvawhere console is the python's stdout object17:19
abentleyRenatoSilva: The console is written to via python's stdout object, but it is not, itself, anything to do with python.17:20
RenatoSilvaby console you mean the terminal?17:20
abentleyRenatoSilva: yes, it is usually a terminal.17:21
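[Editor's note: the "decode as UTF-8, fall back to Latin-1" behaviour that qlog reports can be sketched in plain Python. This is an illustration of the general pattern only, not qbzr's actual code; the function name is made up. Latin-1 maps every byte to a character, so the fallback never raises.]

```python
def decode_best_effort(data):
    """Try UTF-8 first; fall back to Latin-1, which accepts any byte."""
    try:
        return data.decode("utf-8"), "utf-8"
    except UnicodeDecodeError:
        return data.decode("latin-1"), "latin-1"

# A Latin-1 encoded "e-acute" (0xE9) is not valid UTF-8,
# so the fallback kicks in:
text, encoding = decode_best_effort(b"caf\xe9")
```

The bug RenatoSilva describes would then just be the UI drop-down not being updated to reflect which branch of the fallback actually ran.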
=== oubiwann-away is now known as oubiwann
radixhey guys17:46
radixis there an easy idiomatic solution to "the g+w problem" yet?17:47
radixi.e., when I bzr push with bzr+ssh to a repo on a shared server, I want it to be group writeable17:47
radixdo I still have to use a debian "alternative" and a wrapper script? :(17:47
mwhudsonradix: i'm not aware of any new stuff in that area17:54
radixok17:55
radixit'd be really nice if bzr serve had an option for that :)17:57
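[Editor's note: absent built-in support, the usual workarounds for "the g+w problem" are a wrapper script that sets a permissive umask (e.g. `os.umask(0o002)`) before invoking `bzr serve`, or a post-push fixup like the sketch below. The function name is hypothetical; this is not a bzr API.]

```python
import os
import stat
import tempfile

def make_group_writable(root):
    """Recursively add group-write permission to everything under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IWGRP)

# Quick demonstration on a throwaway tree:
root = tempfile.mkdtemp()
path = os.path.join(root, "dirstate")
open(path, "w").close()
os.chmod(path, 0o644)            # rw-r--r--: group cannot write
make_group_writable(root)
group_writable = bool(os.stat(path).st_mode & stat.S_IWGRP)
```

The umask approach is preferable when it is available, since files are created with the right permissions from the start instead of being fixed up afterwards.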
=== Ursinha is now known as Ursinha-lunch
gthorslundhi! is there a way in bzrlib to just check status for uncommitted changes? what I've found is http://people.canonical.com/~mwh/bzrlibapi/bzrlib.status.html#show_tree_status . so I guess it would be possible to put something on top of this, but is there a better way?18:30
=== beuno is now known as beuno-lunch
arthur_Hi. I have got two separate repos 'a' and 'b'. On 'b' a 'bzr pull' from 'a' was performed at some point, long ago. Now I want to consolidate 'a' and 'b' using 'bzr join' under the same repo and it fails since there are duplicates in the inventory. Any help to work around this problem would be appreciated.18:47
gthorslundarthur_: can't you just pull all from 'a' into 'b'?18:49
maxbarthur_: Could you explain a little more about the kind of history these repositories contain? It seems very odd that you would want to 'bzr join' repositories which already share ancestry. Why is 'bzr merge' not the right tool at this point?18:49
arthur_alright. maybe I should rewind a little and listen to your suggestions. Here is the situation.18:50
arthur_I just joined a company where they currently have ~20 repositories for various services.18:50
arthur_We want to consolidate into a single repo.18:51
arthur_while keeping the history. How should I proceed?18:51
dashwell, repos don't contain history so much as revisions :)18:51
=== Ursinha-lunch is now known as Ursinha
arthur_dash: well. you got my point18:52
maxbarthur_: I think you should not proceed. It is natural and expected for various separate services to live in different repositories18:53
arthur_maxb: they are much more tied together than you think. Think of them as interfaces and implementations rather than services.18:54
maxbSo it truly makes sense for every developer who works on any of them, to work in a tree containing all of them, and to _always_ branch and tag them as a combined tree?18:54
arthur_maxb: most changes are across 3-5 services... so with the current layout it is pretty complex.18:55
maxbAnd the problem is that in the past, some services started as branches of other services?18:56
arthur_maxb: that's right. one of them...18:56
maxbhmm. This isn't going to be very nice :-/18:56
arthur_which prevents me from doing a join. Believe me I was pissed when the very last join failed :)18:57
maxbI cannot think of any workaround which will preserve the ability to follow all files' history back to where it naturally began18:57
maxbThe only workaround I can think of is to effectively delete and re-add the entire contents of one of the trees with duplicate file-ids18:57
arthur_maybe I should do that on only the set of files that were common at the time18:58
arthur_the pull was done pretty early (revno 6)18:58
lifelessgthorslund: yes, you can layer things on top of that api19:07
lifelessgthorslund: or the status command object19:07
lifelessand there is now a hook for that too19:07
=== beuno-lunch is now known as beuno
=== Meths_ is now known as Meths
gthorslundlifeless: since cmd_status is using show_tree_status, that feels like a higher level than I need. looking at show_tree_status, there appears to be nothing lower level that would be useful for me, so I guess show_tree_status would be right for me then.19:16
lifelessgthorslund: depends on what you're trying to do, I guess19:19
gthorslundlifeless: I'm looking at replacing revert calls in bzr-bisect with update calls instead. if there have been local changes I kind of end up with a mess of .moved files and other things. (revert just removes local changes). I refuse to start without those changes being handled in some way first.19:27
gthorslundhi mtaylor!19:27
lifelessgthorslund: iter_changes is probably what you want, then.19:27
gthorslundlifeless: so http://people.canonical.com/~mwh/bzrlibapi/bzrlib.tree.InterTree.html#iter_changes would be my homework then, right?19:31
lifelessyes19:33
lifelessshow_tree_status calls into this19:33
lifelessif you want a crib sheet19:33
mtaylorhi gthorslund !19:35
gthorslundmtaylor: I'm hacking some python. hope that makes you proud of me ;-)19:36
gthorslundlifeless: looks like it should give me enough of clues. thx19:37
mtaylorgthorslund: excellent19:37
beunohi jam!20:01
beunogot a minute to help me with meliae?20:01
beunojam, we have servers blowing up all over the place and we'd like to do some diagnosing20:01
mwhudsonbeuno: jam is here at uds, so might not be very responsive20:02
beunomwhudson, well, lucky you, I hear you're quite involved in meliae as well  :)20:02
mwhudsonbeuno: eh, a bit20:03
mwhudsonbeuno: mostly a cheerleader :)20:03
beunomwhudson, what would be your guess if we only get a partial meliae dump?20:03
mwhudsonbeuno: are you looking at jam's blog posts?20:04
beunomwhudson, I am20:04
mwhudsonbeuno: missing a flush?20:04
mwhudsonbeuno: how are you invoking meliae?20:05
beunomwhudson, this is what we have: https://pastebin.canonical.com/39038/20:05
mwhudsonbeuno: meliae trunk?20:06
beunomwhudson, 0.1.3~1.CAT.8.0420:06
beunowhich I presume is some sort of monster we backported to hardy20:07
mwhudsonseems pretty recent20:07
mwhudsonalthough not tagged in trunk, naughty jam20:07
=== oubiwann is now known as oubiwann-away
=== oubiwann-away is now known as oubiwann
beunomwhudson, does that script look like we'd need to flush anything?20:09
mwhudsonno20:09
mwhudsonbeuno: you get a nice 200 response back?20:09
beunomwhudson, not really, it complains about not being able to write to "+meliae"20:09
beunoalthough it does write the .json file20:10
mwhudsonbeuno: ??20:10
beunomwhudson, scratch that20:10
beunoI do get a 20020:10
mwhudsonbeuno: i guess it's hard to say, but is the dump massively truncated or a little bit?20:10
jammwhudson: what needs to be tagged?20:10
mwhudsonjam: there's no 0.3.1 tag in lp:meliae20:11
beuno13:23 < beuno> (ami-hardy-i386)ubunet@ip-10-122-34-239:~$ wget http://127.0.0.1:8881/+meliae --no-check-certificate20:11
beuno13:23 < beuno> --18:23:00--  http://127.0.0.1:8881/+meliae20:11
beuno13:23 < beuno>            => `+meliae'20:11
jammwhudson: I don't think I did an official release, but I'm not positive20:11
beuno13:23 < beuno> Connecting to 127.0.0.1:8881... connected.20:11
beuno13:23 < beuno> HTTP request sent, awaiting response... 200 OK20:11
beuno13:23 < beuno> Length: unspecified [text/html]20:11
beunobut I think it's something wrong with the return20:11
mwhudsonjam: oh ok20:11
beuno13:23 < beuno> +meliae: Permission denied20:11
beuno13:23 < beuno> Cannot write to `+meliae' (Permission denied).20:11
beunois what I get20:11
jammwhudson: I only have 0.3.0 here20:11
beunomwhudson, the process is ~800mb, and the dump is 13mb20:11
mwhudsonbeuno: run that in a directory that you have write access to :-)20:11
beunoI'd expect a lot to be missing20:11
mwhudsonor wget -O- or something20:12
beunomwhudson, heh, of course, wget wanting to write20:12
beunowould that truncate the output file?   doesn't seem so20:12
jambeuno: dump_all_objects() is what you want to be using, and will try to find everything20:12
mwhudsonit does seem unlikely20:12
jamhowever, being 13MB is not strictly a truncated file20:13
beunojam, https://pastebin.canonical.com/39038/20:13
beunowell20:13
jamif you were reading and it said that there was broken content, *that* would be truncated20:13
beunoI say it's truncated because at the end there is half a line20:13
jambeuno: well, that is what I was pointing at.20:13
beunoyes, that's exactly what I get20:13
beuno:)20:13
jamso there was an issue that os.fdopen().flush() wasn't actually flushing if you used the raw FILE * pointer20:14
jamwhich was fixed (checking the rev)20:14
jambeuno: you have "0.1.3" that is 0.1 series, not 0.3 series20:14
jamdefinitely upgrade to trunk20:14
beunoaha20:14
beunook20:15
beunothat's a good first step20:15
mwhudsondoh!20:15
jamif it is just the truncation thing, that should make a big difference20:15
mwhudsonbeuno: sorry for missing that :)20:16
beunomwhudson, no worries!  this is a lot of progress!20:16
jambeuno: Flush was fixed in the 0.2.1 ish timeframe20:16
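[Editor's note: the truncated-dump symptom jam describes is characteristic of buffered writes that never hit the disk. A minimal, library-independent demonstration of why a missing flush truncates output:]

```python
import os
import tempfile

# Buffered writes sit in a userspace buffer until flushed; anything
# reading the file in the meantime sees it truncated (or empty).
f = tempfile.NamedTemporaryFile("w", delete=False)
f.write("x" * 100)                            # well below the default buffer size
size_before_flush = os.path.getsize(f.name)   # nothing on disk yet
f.flush()
size_after_flush = os.path.getsize(f.name)    # now all 100 bytes are there
f.close()
os.remove(f.name)
```

The meliae bug was a variant of this: `os.fdopen().flush()` did not flush data written through the raw `FILE *` pointer, so the tail of the dump was lost.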
beunothanks jam, mwhudson, I'll update the server and see what happens20:16
jambeuno: I believe mwhudson did something similar but returned the dumped content to the url rather than dumping it to disk, I'm curious why you prefer this method.20:17
jam(certainly from a remote user, it would be nice to get the raw content back.)20:18
beunojam, I'm actually locally on the server, but what was there was this django middleware to trigger the dump20:19
=== dash_ is now known as dash
jambeuno: any luck?20:53
beunojam, just managed to get trunk on the server20:53
beunowaiting for some more tests that are going on to finish20:53
beunoand will try again20:53
jambeuno: sounds good. If you need help debugging the dump, let me know.20:54
beunojam, will do, thanks. Hope to try again in 10-1520:54
jamyou aren't in UDS, right?20:54
beunono  :(20:55
beunomissed it this time around20:55
jamnp20:56
jamwould have been nice to sit over your shoulder while you worked with it20:56
beunoyeah, would have been great20:56
beunoI'll break more servers next uds you are there too20:57
jambeuno: well if necessary, there is always "screen"20:57
jamanyway, switching rooms, bbiab20:58
=== Ursinha is now known as Ursinha-afk
beunojam, so, with the new version, they don't seem to truncate, although they do seem small21:04
beunothey are smaller than expected21:05
beunobut size may not matter after all21:05
jambeuno: if you have very large strings, we only dump 100 bytes of a given string21:07
abentleyjam, I am thinking that my algorithms in https://dev.launchpad.net/Code/BranchRevisions are slightly wrong, in that they assume that there is a single mainline_parent entry for each revision.21:07
jamif you have lots of small objects, then the dump is usually about the same to larger than mem21:07
jamabentley: IIRC, it could still work, you would just search multiple tips concurrently21:08
jamI'd have to look closely again, though21:08
jambeuno: there is also 'dark' memory that I've run into, which can be a lot more than I would like21:08
abentleyjam, this affects "Branch page" and "Merge proposal page".21:08
dashjam: oh, that reminds me, thank you for writing meliae21:08
dashjam: it has made my life easier. :)21:08
jamzlib.decompress and zlib.compress tend to have a lot of buffers that aren't particularly accessible, for example21:09
jamdash: I'm very happy it has proven useful to you. It certainly helped me. Feedback always welcome, btw21:09
dashjam: well the only thing i'd change is the license. ;)21:09
jamdash: to?21:10
abentleyjam: I worry that a given revision might not be the head of a mainline_parent_range.  Maybe the simplest thing is to accept ANY maninline_parent_range where the revision is mentioned.21:10
dashsomething shorter :) gplv2 or apache/mit/etc21:10
jamdash: why v2 vs v3?21:10
abentleyjam: i.e. not the range table per se, but any entry in mainline_parent_range where the revision id is correct.21:11
jam(v3 is canonical policy unless there is specific reason to do otherwise, so if you have specific reasons, I'm willing to listen)21:11
abentleyjam: argh, "mainline_parent"21:11
dashah, i wasn't aware of that.21:11
jamok21:11
dashjam: my real preference is permissive licenses, personally. it's just that gplv3 makes people in legal very nervous :)21:12
dashanyway, it's not like i'm distributing it to customers, so whatever :)21:12
abentleyjam, maybe "SELECT range FROM mainline_parent_range WHERE revision = %s ORDER BY DIST DESC LIMIT 1"?21:13
abentleyjam, err "SELECT range FROM mainline_parent WHERE revision = %s ORDER BY DIST DESC LIMIT 1"?21:13
jamdash: atm meliae is designed more as a helper than something you would bundle in your package, though21:14
jamso the license doesn't matter as much21:14
jamabentley: well that query doesn't require it to be the head21:15
dashjam: Sure.21:15
dashlike i said, not a big deal.21:15
abentleyjam, right, because there might not be an entry where it's the head.21:18
abentleyjam, so this way at least you find the entry where it's closest to the head.21:19
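[Editor's note: abentley's proposed query can be tried against a toy schema. The column names and the meaning of `dist` are guesses from the conversation, not Launchpad's actual schema; `"range"` is quoted because it can collide with an SQL keyword.]

```python
import sqlite3

# Toy stand-in for the mainline_parent table under discussion.
conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE mainline_parent ("range" INTEGER, revision TEXT, dist INTEGER)'
)
conn.executemany(
    "INSERT INTO mainline_parent VALUES (?, ?, ?)",
    [
        (1, "rev-a", 0),  # rev-a heads range 1
        (1, "rev-b", 1),  # rev-b appears inside range 1...
        (2, "rev-b", 0),  # ...and also heads range 2
    ],
)

def range_for(revision):
    """Pick one range mentioning the revision, using the
    ORDER BY dist DESC LIMIT 1 heuristic from the conversation."""
    row = conn.execute(
        'SELECT "range" FROM mainline_parent'
        " WHERE revision = ? ORDER BY dist DESC LIMIT 1",
        (revision,),
    ).fetchone()
    return row[0] if row else None
```

This matches any entry mentioning the revision rather than requiring it to be a range head, which is exactly the relaxation abentley suggests.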
jambeuno: any interesting insight ?22:13
beunojam, well, not from the meliae dump22:13
beunoit seems normal22:13
beunowhich is baffling22:13
beunosince we see the process's memory grow over 1gb22:13
beunobut, we found a piece of code that, when removed, stops the server from dying under a small amount of load22:14
=== Ursinha-afk is now known as Ursinha
jambeuno: 1 are you sure you are dumping while it is at its peak, 2 it could be memory fragmentation22:14
jam3) ISTR that Thread objects take up a lot of memory, which may not be tracked22:14
* beuno nods22:14
jamthere are other possible hidden memory allocations22:14
beunoright22:14
beunoI am slowly getting into this part of the world22:15
beunoso no good ideas yet22:15
beunothe other problem we have22:15
beunois we can't really reproduce it locally22:15
jambeuno: if the dump is small enough, you could bz2 it and put it somewhere and I'll give it a look22:15
beunoonly on ec2 instances on staging22:15
beunojam, sure, it's 14mb and it should shrink quite a bit22:15
beunowill email22:16
beunojam, sent22:22
mgzjml: in case you didn't see it, I can make those SIGINT tests pass, just whether or not you think the ctypes hackery is worth it.23:35

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!