/srv/irclogs.ubuntu.com/2007/12/03/#bzr.txt

abentleylifeless: What sort of thing would you like me to put in the developer guide?00:12
abentleyAnd why this, rather than the many other things we're constantly adding to the codebase?00:14
lifelessabentley: because I remembered to think about it.00:25
abentleyWell, it seems like it would be onerous to do this for everything.00:27
abentleyBut where should it go, and what should it say?00:27
abentleyIt doesn't seem to belong under "domain classes".00:28
abentleyAlso, I've just found an optimization that gets it about as fast as the old implementation.00:29
abentleyOn knits.00:30
lifelesscool00:31
lifelessok 1.0rc1 debs are up; including dapper00:35
abentleyOh, cool.00:37
abentleylifeless: Can you give me some guidance on the documentation?  I'm otherwise ready to re-submit.00:38
lifelessabentley: one sec, email frenzy00:39
abentleyOkay.00:39
jelmerlifeless: no bzr-svn debs ? would be nice to have them up, at least for gutsy00:40
lifelessjelmer: hmm, I'll see how far I get; I go on leave in two days00:46
lifelessabentley: ok, with you now.00:46
abentleyFire away.00:47
lifelessso if the answer is 'there are no developer docs about merge at the moment', then you're good - don't worry.00:47
lifelessstarting something that covers the design, motivation, tricks n tips etc for people writing to the collection of apis that combine to make up 'do merge' would be nice at some point.00:48
abentleyI don't remember writing any, so there probably aren't any.00:48
lifelessI can't see any at a glance in doc/developers00:49
* lifeless -> food00:49
abentleyOkay.  Thanks.00:49
=== kiko-afk is now known as kiko-zzz
=== mneptok_ is now known as mneptok
abentleyigc: Hi.01:38
igchi abentley01:39
abentleyWould you be able to review my criss-cross documentation?  http://bundlebuggy.aaronbentley.com/download_patch/%3C47505D5B.9030702@utoronto.ca%3E01:39
abentleyjam was a bit concerned about the style.01:40
igcabentley: sure01:40
abentleyThe bit about merge --weave being slow on packs may be obsolete soon.01:41
igcok01:41
=== doko_ is now known as doko
PengCould someone mark https://bugs.launchpad.net/bzr/+bug/172612 as fixed? The patch is in bzr.dev now.02:31
ubotuLaunchpad bug 172612 in bzr "new commit is overly verbose" [Low,Confirmed]02:31
lifelessdone02:38
PengThanks.02:43
lifelessbbs02:51
jmllifeless: have you written up anything in English describing the scenario stuff in bzrlib.tests?03:00
* igc lunch03:03
lifelessjml: no03:25
pooliejml, i know a bit about it, i can possibly describe it in documentation03:30
pooliein fact, i did write something in HACKING, you could look at that03:30
jmlpoolie: I'll take a look.03:30
lifelessI should finish my in-file tweaks.03:30
pooliequestions welcome03:30
lifelessmight do that tomorrow03:30
=== me_too is now known as turbo0O
=== turbo0O is now known as me_too
lifelesspoolie: patch sent06:18
jmlhow do you actually use DVC to commit?06:29
jmladd a log entry and then hit C-c C-c in the log buffer apparently.06:31
jmlseems kind of backwards.06:31
vilajml: you can hit 'c' in a dvc-diff or dvc-status buffer, it will open a buffer for the commit message and there you can hit C-c C-c06:34
jmlvila: ahh, ok06:35
jmlvila: I guess seeing as I review a diff before I commit anyway, that makes sense.06:35
vilajml: yeah the dvc-diff buffer is handy, since it's based on diff-mode you can also very easily revert any hunk by using C-c C-a or mark the files for selective commit06:36
=== quicksil1er is now known as quicksilver
=== mrevell is now known as mrevell-sprint
glatzorhi, is there a document that covers running a bzr server using sshd?11:55
luksyou don't need to run a bzr server there11:57
luksjust install bzr on the server and bzr will run it on the remote side over ssh11:57
glatzorluks: is there a way to limit the ssh access on the server?11:58
glatzorluks: I don't want to allow full ssh access on my machine.11:59
luksum, I'm not sure12:00
glatzorfor my subversion server I use the command option in the key configuration: command="/usr/bin/svnserve -R -t -r /srv/subversion"12:01
glatzorui, you are releasing 1.0 today? congratulations!12:05
mwhudsonglatzor: you can use something similar for bzr too12:19
mwhudsoni think it's "bzr serve --inet --allow-writes", but i'm not completely sure12:20
Peng--directory=/ too.12:20
Peng(And maybe even something else that gets cut off the edge of the screen.)12:21
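For the record, the forced-command trick glatzor uses for svnserve carries over to bzr. Below is a sketch of an authorized_keys entry built from the flags mwhudson and Peng mention (`bzr serve --inet --allow-writes --directory=...`); the key material, the `/srv/bzr` path, and the extra `no-*` restrictions are illustrative, not from the log:

```
# ~/.ssh/authorized_keys on the server -- one line per client key.
# command= forces every login with this key to run `bzr serve`
# instead of a shell (path and restrictions are placeholders).
command="bzr serve --inet --allow-writes --directory=/srv/bzr",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... user@client
```

Clients would then use plain `bzr+ssh://` URLs; the forced command ignores whatever command the client asks for.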
datojelmer: sorry, I meant the trunk of 0.4, so "bzr-svn-0.4" branch.12:45
abentleyglatzor: You want to allow bzr access, but not normal shell access?  A restricted shell like rsh can do that.12:45
jelmerdato: I would be interested to hear why the trunk version broke12:48
datojelmer: lemme pull and try again. what I saw a couple of days ago is that it didn't do anything after parsing the dump file.12:48
glatzormwhudson: Peng: so I would need a user/key for every repository12:59
Pengglatzor: No.13:10
Pengglatzor: Create as many users as you want. They just all need the proper file permissions on the repo.13:10
=== kiko-zzz is now known as kiko
datojelmer: there must be something weird in my user setup; as my user, svn-import does nothing, as a newly created user it works.14:11
jelmerdato: does removing ~/.bazaar/subversion.conf help/14:13
jelmer?14:13
datolemme check14:14
datoyes14:15
=== cprov is now known as cprov-lunch
=== mw|out is now known as mw
=== kiko is now known as kiko-fud
vilaabentley: ping15:03
datojelmer: so, do you really recommend to use rich-root or rich-root-packs, instead of -with-subtrees ?15:25
jam-laptopdato: I don't know that jelmer specifically does, but it was a compromise between him and Aaron15:25
jam-laptop-subtrees is still considered experimental, because it enables nested tree detection15:26
jelmerdato: yep, I do15:26
datoright. so I have to choose between compatibility with < 1.0, using an experimental feature, or >1.0 with non-experimental stuff.15:26
* dato ponders, as this branch is for other people to use.15:27
datooh.15:30
datoyou cannot `bzr upgrade` -subtrees to rich-root, but you can pull from -subtrees into a rich-root repository :-?15:30
vilajam-laptop: ping, who uses bundle format v4 ? It requires seeking backwards (or force me to wrap http.get() into a StringIO) :-/15:42
jelmervila: are you working on the ability to apply remote bundles ?15:43
vilajelmer: no, on the ability to remove wrapping all http requests into StringIO15:44
jam-laptopvila: I want to say that v4 is the newest form, so we *all* use them.15:44
=== jam-laptop is now known as jam
jamhowever, I haven't paged through it thoroughly15:44
jamI know Andrew was working on making them more of a producer + consumer15:44
jamso he could stream chunks as part of the response15:44
vila    def get_bundle_reader(self, stream_input=True):15:45
vila        self._fileobj.seek(0)15:45
vila        return BundleReader(self._fileobj, stream_input)15:45
vila    Invalid range access in http://localhost:52409/test_bundle at 28: RangeFile: trying to seek backwards to 015:45
vilaThat's the only remaining failing test :-(15:45
jamvila: let me check15:46
jamvila: the only code that seems to be using that is the bundle code itself15:50
jamI'm guessing that BundleInfoV4 reads some of the header, etc to make sure that it really is a bundle15:50
jamand then passes the stream into the BundleReader class15:51
jamand the BR class was written to start from the beginning15:51
jamBut I notice that the first thing it does is chomp the first line off the file15:51
jamSo it is possible that we could just pass a "first line read" flag15:51
jamvila: in general, I don't see why it needs to seek, other than maybe multiple calls to get_bundle_reader?15:52
jamvila: which specific test?15:52
vilaTestReadBundleFromURL.test_read_bundle_from_url(HttpServer_urllib)15:53
vilathe strange thing is that all comments referring to stream_input say stream_input=False is currently faster but everywhere we use stream_input=True...15:56
jamvila: that is because = False has to buffer the whole file in RAM15:56
vilayeah I know ;-)15:57
jamso we chose to be memory conservative15:57
jamI also wonder how efficient IterableFile is15:57
=== cprov-lunch is now known as cprov
jamvila: ok, so it seems we have already read the whole file by the time get_bundle_reader() is being called.15:58
jamI haven't figured out why yet15:58
jambut that is why the seek is needed15:58
vilawell, I will punt on that one and mark the test as KnownFailure so that we can discuss that during review16:00
jamok16:00
vilaI think it occurs because transport.get *seems* to always return seekable files while ftp always returns StringIOs and http did too16:01
jamvila: we always returned seekable files16:01
jambecause gzip required it16:01
jamgzip.GzipFile16:01
jamexpects to be able to tell() and seek()16:01
jamI think tuned_gzip does not16:01
vilaseek in both directions ?16:01
jamseek backwards, yet16:01
jamyes16:01
jamit would read a bit16:01
jamand then put it back if it didn't like what it found16:02
jambut I think that has been worked around in tuned_gzip16:02
jamwhich predictively reads16:02
jambut keeps a small buffer16:02
vilathe test suite triggers nothing related so you should be correct16:02
jamvila: do you know pdb very well?16:02
jamIn gdb there is a way to go until you go up a frame16:02
jam(something like Finish)16:03
jambut I don't see that in pdb16:03
radixr?16:03
vilajam: can't answer that ;-)16:03
jamah, thanks radix16:03
vilayou mean 'r'16:03
vilaradix: I swear I didn't read your reply ;-)16:03
radixhehe :)16:03
vilaup is handy too to inspect callers contexts16:04
jamvila: I think we need it....16:05
jamvila: yeah, I know about up16:05
jamI just needed it for all the times we nest function calls16:05
jamfoo(bar(baz()))16:05
jamI want to step into 'foo'16:05
jambut it steps me into the others first16:05
jamand I don't want to "n" my way out16:05
jamlist16:06
jamsorry16:06
jamvila: so.... the bundle code needs to read at least the start of the file16:06
jamso that it can get the first line16:07
jamto determine the bundle format16:07
jamand return an appropriate serializer16:07
vilayeah, got that, but then doing a seek to the beginning of the file is *rude*16:07
jamwell, in my tiny bit of testing16:07
jamwe do:16:07
jamfor line in f:16:07
jam...16:07
jamwhich we wanted to do so you could embed a bundle in the middle of an email16:08
jamand still have it parsed16:08
jam(so we ignore everything until we get to the end of the file)16:08
jamanyway16:08
jameven though "for line in f" has only yielded a single line16:08
jamf.tell()16:08
jamreturns the end of the file16:08
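jam is describing Python 2's file object, whose tell() reported the raw OS offset at the end of the read-ahead buffer. The same phenomenon can be shown with Python 3's io layer by comparing the buffered reader's logical position with the raw file offset underneath it; this sketch is illustrative, not from the log:

```python
import os
import tempfile

# A file larger than the default read buffer (io.DEFAULT_BUFFER_SIZE,
# typically 8192 bytes), with one short first line.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"x" * 40 + b"\n")   # the line "for line in f" yields first
    f.write(b"y" * 20000)        # plenty of data after it

with open(path, "rb") as f:      # f is an io.BufferedReader
    first = next(f)              # read just one line
    logical = f.tell()           # position as the caller sees it (41)
    raw = f.raw.tell()           # OS offset: a whole buffer was read ahead

print(len(first), logical, raw)
os.remove(path)
```

The raw offset lands a full buffer ahead of the 41 bytes actually consumed, which is exactly why tell() on the old file object looked like it jumped in 8192/10240-byte chunks.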
jamvila: anyway, I have the feeling that you will break merging from bundles if we cannot seek16:10
jamso we won't be able to do "bzr merge http://bundlebuggy...." which is something that *I* do.16:10
jamAnd I believe Aaron does as well16:10
vilayeah, I will break something, sure ;-)16:10
vilaThe question is how will we repair it :)16:10
jamwell, for starters, you are just trying to get rid of the StringIO() wrapper in HTTPTransport.get()16:11
vilaSo far we *pretend* that we stream to be memory conservative *but* we do that by wrapping the url content into a StringIO.... Excuse me ?16:11
jamI'm not sure that is necessary for readv() support16:11
jamI agree it is distasteful16:11
jambut it is probably okay in the short term16:12
jamwe generally don't do get() on big files16:12
jamindexes, bundles, etc.16:12
jamEverything else we use readv() on16:12
vilaNot necessary, you're right, I will rollback my last change and leave http.get() wrapping into a StringIO and be done with it, just a big red flashing FIXME ;-)16:12
jamvila: thanks16:14
jamIf you are doing anything fancy, make a note of it16:14
jambut I imagine you are just removing the StringIO wrapper16:14
jamfrom what I can tell16:14
jamfor line in file:16:14
vilaWell, the FIXME was already there: # FIXME: some callers want an iterable... One step forward, three steps backwards :-/16:14
jambuffers at 8192 bytes16:14
jamI'm curious if readline() in the middle would still work correctly16:15
vilaI will add a better explanation16:15
vilajam: in my case, it's a socket, and yes, read/readline share the same buffer16:15
jamweird, the first real read is 819216:15
jambut the second is 1024016:15
vilacould be a read(-1)16:15
jamand then 10240 from there16:16
vilawhich can of file/socket ?16:16
jamsure, I'm just commenting on the "open()" (file object) behavior16:16
vilas/can/kind/16:16
jamand doing "for line in file"16:16
vilaok16:16
jamstart = 016:16
jamfor line in f:16:17
jam  start += len(line)16:17
jam  print start, f.tell()16:17
jamGives interesting results16:17
jamf.tell() is definitely being read in chunks of (8192, 10240, 10240, ...)16:17
vilafor a random text file ?16:17
jamjust something on disk16:18
jamyes16:18
jama 238k file in this case16:18
jamvila: anyway, if I was to propose a fix16:19
jamit would be to change the bundle reading code16:19
jamso that it reads a minimal amount of data16:19
jamso it can get the header16:19
jamand then it passes the header16:19
jamalong with the file object16:19
jamto the serializer which supports that header16:20
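jam's proposed fix (read just enough to identify the bundle header, then hand the header plus the still-unconsumed stream to the serializer) amounts to keeping a small put-back buffer on a forward-only stream, so no backwards seek is ever needed. A minimal illustrative sketch, not bzrlib code; the header string and class are made up:

```python
import io

class PeekableStream:
    """Forward-only stream with a small put-back buffer: peek at the
    header without consuming it, then read normally. Illustrative of
    the approach discussed, not the actual bzrlib implementation."""

    def __init__(self, raw):
        self._raw = raw
        self._buf = b""

    def peek(self, n):
        # Top up the buffer so the next read() still sees these bytes.
        if len(self._buf) < n:
            self._buf += self._raw.read(n - len(self._buf))
        return self._buf[:n]

    def read(self, n):
        out, self._buf = self._buf[:n], self._buf[n:]
        if len(out) < n:
            out += self._raw.read(n - len(out))
        return out

s = PeekableStream(io.BytesIO(b"# Bazaar revision bundle v4\npayload"))
header = s.peek(2)    # sniff the format marker without consuming it
first = s.read(27)    # the full header line is still available
```

A serializer handed `s` after the peek still sees the complete header, which is the property the non-seekable http case needs.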
vilajam: +1 but I will not be able to do that in the coming days (new project starting in RL)16:20
jamvila: I'll write up a bug on it16:20
vilaI search a bit in that direction but saw nothing obvious and came here for help16:20
vilajam: thanks16:20
vilas/search/searched/16:23
datovila: does http://dpaste.com/26608/ sound good?16:23
datovila: (re our conversation the other day)16:23
vilas/to remote hosts/& with sftp/16:24
vilas/is useful to ensure SSL certificates are always verified/is required to verify SSL certificates/16:24
datovila: didn't you tell me it was also needed for bzr+ssh?16:24
viladato: rats, yes, but that may well be a bug16:24
vilajam: did you read the IRC logs ? It looks like smart/medium.py requires paramiko, aaron and I thought it should be able to use openssh only16:25
jamvila: at one point, we could start a bzr+ssh connection without paramiko16:25
jamif we can't16:25
jamthen it is a regression16:26
jamkind of hard to test for...16:26
vilajam: look at smart/medium.py I didn't test16:26
jamvila: I think it is a bad comment16:27
jamIt used to "from bzrlib.transport import sftp"16:27
jamwhich *does* need paramiko16:27
jam'ssh' should not16:27
jamI'm a little concerned that we use "paramiko.SSHException" in the connect_sftp code16:28
jamsuch that if we don't have paramiko, and we fail to connect16:28
jamwe will get an AttributeError exception16:28
jamvila: but other than that, everything should work without paramiko installed16:28
viladato: as jam said ;-)16:28
viladato: so forget my first comment, but please use the second one, it's the only remaining bit that urllib does not handle today and it looks like for some people it's more a problem than a help16:30
vilaso there are really two kinds of users, those that want certificate verification and those that absolutely don't want them16:31
datovila: ok. re "more a problem than a help" that's what I wanted to reflect in my wording, like "only install it if you're really sure you want such verification"16:31
jamvila: I just removed paramiko from my system16:31
jamand "bzr log bzr+ssh:// " worked fine16:31
vilagreat16:31
datooh, good16:31
jamI'll post a simple patch to the list16:32
jamhmm... it is only in connect_sftp() and our sftp subsystem *does* require paramiko to be present...16:34
=== kiko-fud is now known as kiko
jamdato: the reason why we went with Requires for paramiko (IIRC) was that people expected sftp support out of the box16:38
jamSince we now at least have bzr+ssh, I suppose we could relax it a bit16:38
datoparamiko has always been in Recommends AFAICR, but tools are supposed to install recommends by default. my question the other day was whether to demote pycurl to suggests, which I'm doing right now.16:39
jamdato: for Ubuntu, I'm pretty sure it was set to Required16:40
jambut I could be remembering that incorrectly.16:40
ubotuNew bug: #173689 in bzr "BundleReader cannot stream" [Wishlist,Triaged] https://launchpad.net/bugs/17368916:56
james_wdato: the parallel build check showed that some of bzr refuses to build parallel, at least the docs. I think this is because rules calls 'make docs', when it should be '$(MAKE) docs'.18:01
james_wdato: mind if I correct that? A grep didn't show any other uses.18:02
datojames_w: please do18:02
james_wit might actually cause build failures, or even allow a broken package to be built, if a parallel make is used, but I'm not sure how to test.18:03
datodpkg-buildpackage -j2 ?18:04
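james_w's fix is the standard one: a sub-make invoked as `$(MAKE)` inherits the `-j` flags and jobserver from a parallel parent make, while a bare `make` does not. A debian/rules fragment sketching the change (the target name is illustrative):

```make
# debian/rules fragment -- was `make docs`, which breaks under -jN
build-stamp:
	$(MAKE) docs   # $(MAKE) propagates -j and the jobserver pipe
	touch $@
```

Testing it as dato suggests (`dpkg-buildpackage -j2`) exercises exactly this path.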
mwhudsonhmm18:09
mwhudsonso for a test, i want to lock a branch with a given lock id18:09
mwhudsonthis seems to be pretty hard18:09
datojames_w: if you don't mind, please use dch -t in the future (leaves the footer alone; so it gets added when opening a new version, and when preparing a final upload)18:09
james_wdato: ok, thanks for telling me.18:10
jammwhudson: for any branch format, or just for a specific one?18:40
jelmerjames_w: Hi James, did you get my merge request for bzr-builddeb?18:41
jambecause you could just override18:41
jambranch.control_files._lock18:41
mwhudsonjam: er, no, not really18:41
jam?18:43
jamotherwise, I guess you could override bzrlib.lockdir.rand_chars18:43
jamsince that is the final lock token18:43
james_wjelmer: yeah, the first one is fine, the second I want to think about a little.18:45
james_wjelmer: also is exposing the file properties planned/wanted/not wanted?18:45
jelmerjames_w: planned/wanted. It depends on support for custom file properties in bzr core though18:46
jelmerjames_w: what are your thoughts on the second one?18:47
mwhudsonjam: let me try to explain what i'm trying to do18:48
bialixjelmer: did you see the bzr-svn selftest results?18:48
mwhudsonjam: obviously, branch puller on launchpad locks branches while they are being updated18:49
mwhudsonjam: if things go wrong in a particular way, these branches can be left locked18:50
jamk18:50
jamwith you so far18:50
mwhudsonso what i'm doing is have the puller use a particular lock id18:51
jamand you want to make the branch puller use specially formatted nonces?18:51
mwhudson(by setting BZR_EMAIL)18:51
mwhudsonso for a test, i want to lock a branch with the same id18:51
jam(just realize that pack repositories don't have lock ids at that level)18:51
mwhudsonthen run the puller and check that the puller still runs18:52
mwhudson(it will look for and break such a lock)18:52
mwhudsonjam: ah, that's interesting18:52
mwhudsonjam: so how would you solve this problem? :)18:52
jampack repositories don't take out a physical lock at 'repo.lock_write()'18:52
jamtime18:52
jamthey take a physical lock only when they go to update the 'pack-names' file18:52
james_wjelmer: I guess I'll just wait for that, it's not a massively urgent to implement this. I suppose I could guess in the meantime as well, and there is always the command line option.18:52
lifelessmoin moin18:53
jammorning lifeless18:53
jammwhudson: are you trying to get a custom lock token (aka nonce)18:53
bialixprivet18:53
jammwhudson: or are you trying to set the username inside the lock to that specified by BZR_EMAIL18:53
james_wjelmer: as for the second one, I think it's a fine aim, I just want to work out if it is the most natural way to do it. Your final example command line is great, so I guess it may well be.18:53
jammwhudson: I think you are trying for the latter18:53
jamsuch that peek_lock() will give you back what you want18:53
jamversus having token = repo.lock_write() give you back what you want18:54
jammwhudson: do you understand the difference?18:54
mwhudsonjam: i don't think i care18:55
jelmerbialix: hi18:55
bialixjelmer: evening18:55
mwhudsonjam: i want to be able to say "is this branch locked?  if so, did i lock it?"18:55
mwhudsonthen, "if all that, break the locks"18:56
jelmerbialix: all tests should actually be succeeding, from what I understood from reports18:56
jelmerjames_w: ok18:56
mwhudsonif a custom nonce is an easier way of achieving that, then maybe that's what i want to be doing18:56
jammwhudson: well the string returned by "branch.lock_write()" is only part of the whole info on disk18:56
jambut lock_write() will spin rather than returning right away18:57
jam(default timeout of 5 minutes, etc)18:57
mwhudsonjam: right18:57
jamare you trying to avoid that as well?18:57
jam(you *can* set bzrlib.lockdir._DEFAULT_TIMEOUT_SECONDS if you want)18:57
mwhudsoni'd like to, yes18:57
bialixjelmer: I have a firewall on my machine. do you want me to run the tests one more time with the firewall disabled?18:58
jammwhudson: so... Branch.lock_write() will take out a physical lock right away, so it is probably best to think about starting from there.18:58
mwhudsonmy impression is that lockdir.py has all this configurability and flexibility and branches have lock_write() that takes no arguments18:58
jammwhudson: pretty much18:58
jamand I assume you are trying to do it without too much surgery into bzrlib itself, right?18:59
mwhudsonjam: well, i want to deploy this code within the next two weeks, so yes18:59
lifelessmwhudson: pack repositories have no over-the-wire locks at all19:00
datohm, suddenly log in bzr.dev is very slow?19:01
lifelessmwhudson: if the hpss decides to lock a pack repository itself, it will be because the hpss is performing the write.19:01
mwhudsonlifeless: this is for the mirror puller19:01
lifelessmwhudson: ok; it will take a lock during insertion of the repository for a couple of ms19:01
jamdato: with 1.0rc1 or bzr.dev?19:01
mwhudsonlifeless: so the hpss doesn't come into this19:01
datojam: bzr.dev19:01
mwhudsonlifeless: it does sound like this is not going to be a problem for packs19:02
jambug #172975, bug #17256719:02
datojam: logging in bzr.dev19:02
ubotuLaunchpad bug 172975 in bzr "bzr log slower with packs" [Medium,Triaged] https://launchpad.net/bugs/17297519:02
ubotuLaunchpad bug 172567 in bzr "Per-file log is *very* slow with packs" [Undecided,Incomplete] https://launchpad.net/bugs/17256719:02
mwhudson(or at least, not a very real one)19:02
lifelessmwhudson: the branch object will have a longer lock held open19:02
datojam: thanks19:02
jamdato: I have a workaround in 1.0rc119:02
jamwe are trying to implement the "right fix" for bzr.dev19:02
datook19:02
lifelessjam: ah it did get merged straight to 1.0 ?19:02
mwhudsonlifeless: oh, well, we'll have the same problem then19:02
lifelessjam: good19:02
jamlifeless: I believe martin put it in the 1.0 branch19:02
lifelessjam: I checked bzr.dev; didn't see it there. I don't know if 1.0's code has the fix or not. And he's off today.19:03
jamI'll peek19:03
lifelessjam: anyhow, I have a considerably better idea of how to get my point across :). I might write something up about it19:03
jamhe did a "merge updates for 1.0"19:04
lifelessjam: did he? good. :)19:04
jambut I'm not sure what that included19:04
jam(and, of course, bzr log is slower now :)19:04
jamlifeless: it looks like my workaround is *not* in 1.0rc119:05
mwhudsonjam, lifeless: so now i'm more confused than i was when i first asked in here :)19:05
mwhudson(that i started work 11 hours ago may be related to this)19:06
jammwhudson: so... basically you want to be able to trap when a branch is already locked, peek to see if it was another process that you would have been controlling, and break it automatically when necessary19:07
mwhudsonjam: yes19:07
jelmerbialix: nah, it shouldn't do any network access at all19:07
jaminteresting, we don't even use the per-branch config username, just the global one...19:07
jelmerbialix: what version of windows are you on?19:08
bialixxp sp219:08
bialixxp home actually19:08
lifelessmwhudson: So, you want to depend on bzr's branch locking, but be willing to ignore it :)19:08
lifelessmwhudson: don't we write the pid in the extra metadata?19:08
jamwhen opening a branch19:09
jamI would probably monkey patch .lock_write()19:09
jamso that it used self.control_files.attempt_lock()19:09
jamand if that fails, then you can peek, etc.19:09
mwhudsonlifeless: not really sure how that helps?19:09
lifelessmwhudson: 'when is it safe to break_lock' ?19:09
lifelessmwhudson: so you know that all locked branches are from other pullers right?19:10
jammwhudson: when you find a locked branch, you can check the host name, see if it is your current host, and then check to see if the pid is still alive19:10
lifelessjam: it has to always be the current host19:11
jamif host == 'localhost' and pid is not present (as given by ps, or whatever)19:11
jamthen you know you have a stale lock19:11
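jam's stale-lock heuristic (same host, pid no longer running) is easy to sketch outside bzrlib; sending signal 0 is the usual existence probe on POSIX. This is a hypothetical helper for illustration, not the puller's actual code:

```python
import os
import socket

def lock_is_stale(lock_host, lock_pid):
    """Heuristic from the discussion: a lock is stale if it was taken
    on this host by a process that no longer exists. For other hosts
    we cannot tell, so report not-stale."""
    if lock_host not in (socket.gethostname(), "localhost"):
        return False
    try:
        os.kill(lock_pid, 0)   # signal 0: existence check, no signal sent
    except ProcessLookupError:
        return True            # no such process: safe to break the lock
    except PermissionError:
        return False           # process exists but belongs to someone else
    return False

# Our own pid is certainly alive, so this lock would not be stale.
print(lock_is_stale(socket.gethostname(), os.getpid()))  # -> False
```

As lifeless notes below, this is moot for locks known to come only from other pullers on the same host, where unconditional break_lock is simpler.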
luksumm, is there some kind of semantic difference between RevisionSpec.in_branch, in_store and in_history?19:11
lifelessjam: the public area is not directly writable by users19:11
luksthey are all aliases, but I guess each was meant for something different?19:11
mwhudsonlifeless: well, no not necessarily, sometimes for better or worse operators end up touching branches19:11
lifelessmwhudson: on the public side the most operators should do is delete them :)19:11
mwhudsonlifeless: not going to argue with that19:12
lifelessluks: the idea was in_branch -> in the left side history. in_store -> in the repository, in_history -> in the ancestry of the branch. or something.19:12
mwhudsonlifeless: this complication wasn't my idea :)19:12
lifelessluks: overly engineered we suspect19:12
jamlifeless, luks: probably true, but I think we only implemented in_store... :)19:12
mwhudsonmaybe the code should just break all locks it finds19:13
jamand a lot of code expects in_store ability19:13
jammwhudson: sounds aggressive :)19:13
lifelessmwhudson: do you mutex branches against concurrent pullers?19:13
mwhudsonlifeless: lost my nickserv password i'm afraid, can't pm you here19:13
jamI finds 'em, and I breaks 'em19:13
mwhudson(until it expires :)19:13
mwhudsonlifeless: yes19:13
bialixjelmer: at work I use win2k sp419:14
lifelessmwhudson: if you mutex them yourself, just do:19:18
lifelessif branch.get_physical_status(): branch.break_lock()19:18
lifelessunconditionally.19:18
jammwhudson: "get_physical_lock_status()"19:19
lifelessyou'll need to do something to handle the prompts for y/n that the ui layer will make19:19
jamwhich returns a True or False19:19
lifelessbreak_lock is rather ui centric19:19
mwhudsonah, i wrote code to break the locks already somewhere19:19
mwhudsonthough i'm getting the feeling that it may not work with packs19:20
lifelessit will19:20
lifelessits tested to, *if* you do get a pack repo with a locked pack-names, break-lock will work, as will get_physical_lock_status19:20
mwhudsonhttp://paste.ubuntu-nl.org/46736/19:21
mwhudsonright, what you said will probably work, i was talking about what i wrote last week :)19:22
lifelessmwhudson: so, I think if an operator fiddles with these branches, they get what they deserve; if you add more public mirrors it will be impossible for an op to fiddle them all manually.19:23
lifelessmwhudson: so being complex here to support operators doing the wrong thing; is nutz.19:24
mwhudsonlifeless: that sounds good and simple19:24
mwhudsonand like something to do tomorrow19:24
lifelessthat code will work on packs19:24
lifelessits not long term portable19:25
lifelesspoolie is cleaning up that interface; please file a bug about needing a programmatic break-lock facility19:25
lifelessso that he doesn't shoot you in the feet19:25
jamlifeless: how will that not work with packs?19:29
mwhudsonlifeless: ok19:29
jamThey also go through control_files19:29
jamwhen the go to "lock_names()"19:29
lifelessjam: 06:24 < lifeless> that code will work on packs19:29
jamlifeless: oops, bad read on my part19:29
lifelessjam: :)19:29
mwhudsonthe _lock stuff was a clue that i was breaking the rules :)19:30
jamI'm not sure where I got a "not" in there19:30
mwhudsonlifeless, jam: strange, i misread it the same way too19:30
jammwhudson: well, it probably won't work for SVN branches19:30
mwhudsonanyway, it's REALLY time to stop for today19:30
jammwhudson: have a good evening19:30
mwhudsonjam: i find it moderately hard to care19:30
mwhudson:)19:30
mwhudsonbye all19:30
lifelesssee you in jan mwhudson19:30
mwhudsonlifeless: oh right yes, enjoy your time off19:30
mwhudsonlifeless: after something like one more working week of us both on, we'll be in much closer timezones :)19:31
mwif i initialise a repo with bzr init-repo --trees whatever, and then create a branch inside of it based on, say, "cvs checkout -r something_old", what is the best way to then check out something new inside that same repo?19:32
mwwould it be better to branch that branch i just created, and cvs update to something newer?  or would another direct checkout be the same or better?19:32
mwafter that point i expect to keep those branches untouched, except for occasionally cvs updating and bzr committing inside them19:33
lifelessmw: 'the same' - no need to do any extra or specific work19:33
jammw: I'm not sure I follow completely, but if doing "another direct checkout" would involve doing a "bzr add" of the whole tree, then I would do 'bzr branch' first.19:34
mwmaybe i should explain better:  i created a repo, and then inside of that i did a cvs checkout of firefox 1.5.19:35
mwthat won't be updated much anymore :)19:35
mwbut i'd like to maintain a "parallel" checkout of firefox 2.0, which of course continues to be updated pretty often19:35
mwthey share lots of files, so i'd like a) patches for one to apply relatively easily to the other, and also to conserve space19:36
mwa) being more important than the unlabeled b)19:37
jammw: then I would do a "bzr branch" first19:51
jamactually, since you have a CVS checkout you might try using lightweight checkouts, and switching between branches19:51
jamthen you can leave the CVS checkout in-place, and just update it pointing at different Bazaar branches / Firefox CVS tags19:52
jamthe only problem is accidentally committing something to the wrong branch, but you could always bzr uncommit, and fix it19:52
mwnod19:52
mwi'm not terribly worried about screwing up my pristine branches, since i can uncommit or revert easily enough19:53
jammw: so do you know how to do switch, etc, or do you want some help with that19:53
jamvila: isn't there a '-Dtransport' ?19:56
mwjam: i don't19:56
jammw: so I would start off with a repository with no working trees19:56
jam(bzr init-repo --no-trees repo)19:56
vilajam: don't think so, may be you mean the trace+ decorator ?19:56
jamand then create a branch underneath there19:56
jambzr init repo/branch-name19:57
jamthen you create a lightweight checkout pointing at that branch19:57
mwjam: hold that thought -- i need to run out for an hour or so19:57
jambzr checkout --lightweight repo/branch-name working19:57
jammw: ok19:57
jamvila: does trace+file:/// work?19:58
jamI tried "bzr log trace+file:///PWD/" and it gave me the log output19:59
jambut no extra data in ~/.bzr.log19:59
vilajam: yes, but the collected data are available only in the transport object, what kind of data are you after ?19:59
jamvila: to help the guy on the ML who was saying: Branching bzr.dev 'hanging'... no CPU activity, no net IO,no disk IO...20:00
lifelesscoffeedude: hi there, how are you going?20:00
=== cprov is now known as cprov-out
vilajam: my transportstats plugin may help, but it more love (the decorator approach showed its limits with readv)20:02
vilas/but it/but it needs/20:02
jamlifeless: you scared him away20:03
mrZebywhat's new in 1.0 ?20:06
vilajam: finally tried the _max_readv_combine for pycurl since we can't get the data as it arrives20:06
jamvila: you can, but you need to pass pycurl a custom write function20:07
jamwhich would also mean it doesn't hang forever20:07
jamvila: but anyway, how was it?20:07
vilajam: how passing a write function will allow me to return from readv ? pycurl will not give me the return code of the request until it returns20:08
vilajam: it went nicely but from here the latency is low so I can't really measure the degradation20:09
jamvila: http://people.ubuntu.com/~jameinel/test/20:09
jamhas the branches I gave you20:09
jamthat is at least in London20:10
jambut that may not be far enough for you20:10
vilaargh, still haven't extracted that repo...20:10
jamvila: Also, I believe if you set up a custom write function, you can get the headers20:10
jamwhich means you get the "200 OK" right away.20:10
vila--- people.ubuntu.com ping statistics ---20:11
vila6 packets transmitted, 6 received, 0% packet loss, time 5399ms20:11
vilartt min/avg/max/mdev = 19.597/20.679/21.470/0.613 ms20:11
jamwell, 20ms isn't a lot of overhead20:11
vilaindeed20:11
jamvila: where is your branch, mine is 115.685/145.808/226.214/42.441 ms20:11
vilaI made my tests with http+urllib://bazaar.launchpad.net/~lifeless/bzr/repository should be the same20:12
vilathe one that led to bug #165061 and children20:13
ubotuLaunchpad bug 165061 in bzr "bzr branch http:// with a pack repository takes too long" [High,Fix released] https://launchpad.net/bugs/16506120:13
jamvila: sure, my question is more about where your code is, so I can pull it here and give you some benchmarks20:13
vilajam: oh! Just finishing updating NEWS and I'll mail the list with the patch, I can send it to you privately if you want20:14
jamvila: if you send it to the list, I can just grab it from there, and let you know20:14
vilaok20:15
vilaI just set it pretty arbitrarily to 25 but I'm not in a good place to judge, anyway it *feels* already better, the pb is alive ;)20:17
vilajam: I seem to remember you had a trick under linux to check the peek of memory consumed by a bzr command, I'd like to look at the benefits provided by this new readv in that respect20:20
jamvila: I have a "watch_mem.py" script20:21
jamwhich dumps the /proc/PID/status20:21
jamwell, polls it20:21
jamwhile a child process runs20:21
jamI can send it to you20:21
vilajam: please do ! thanks20:21
jamvila: it should be in the mail20:30
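jam's watch_mem.py script itself isn't in the log; a minimal, Linux-only sketch of what he describes — poll /proc/PID/status while a child process runs and remember the peak — might look like this (the function names are made up):

```python
# Rough sketch of a watch_mem-style helper (not jam's actual script):
# poll the child's /proc/<pid>/status and track the largest VmRSS seen.
import os
import subprocess
import time

def _vm_rss_kb(pid):
    """Return the child's resident set size in kB, or None if gone."""
    try:
        with open('/proc/%d/status' % pid) as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])  # value is in kB
    except OSError:
        pass  # process exited between poll() and open()
    return None

def watch_mem(argv, interval=0.05):
    """Run argv, polling its memory; return (returncode, peak_rss_kb)."""
    child = subprocess.Popen(argv)
    peak = 0
    while child.poll() is None:
        rss = _vm_rss_kb(child.pid)
        if rss is not None and rss > peak:
            peak = rss
        time.sleep(interval)
    return child.returncode, peak
```

With a fine enough interval this gives the before/after peak-memory numbers vila reports later in the discussion (the real comparisons used VmPeak, which /proc/PID/status also exposes).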
jamwow, I just ran into an old project that was in Weave format still20:36
jam(it was my 'scripts' catchall project)20:36
jamgood thing we've kept around at least read support for everything20:37
vilajam: in the worst case you just had to checkout an old bzr version...20:39
jamtrue20:39
jamI wonder if I have any of the flat-file branches anymore20:40
jamI ran into some a couple months back when running "bzr heads"20:40
vilajam: not only true, but also a pretty strong argument against people whining about our multiple formats; bzr now has an impressive track record of smooth migrations from one format to the next20:41
jamvila: you may be interested in http://rafb.net/p/JUvJkI89.html20:44
vilajam: finishing my mail first ;)20:45
lifelessjam: are you going to review vila's patch?20:51
jamlifeless: the one he just sent?20:51
jamI was giving it a look now20:51
lifelesscool20:52
lifelessno need for two sets of eyes20:52
jamlifeless: did we ever get your comments on the reconcile code (the ones I asked for) merged?20:52
lifelessjam: I thought so20:52
jamok20:52
lifelesswasn't spiv proxying me for that ?20:52
datowhat are the chances of getting into core a -u switch to push, that works only for the smart protocol and makes it update a remote tree? I see why it could be rejected (functionality depending on transport), but IMVHO it'd be quite cool to have in core.20:55
vilajam: I forgot to mention: http.readv avoids some caching by yielding directly if possible; that may be applied to sftp as well.20:55
lifelessdato: how should it handle conflicts?20:55
lifelessdato: how will the user resolve them?20:55
lifelessdato: will the push fail if there are conflicts?20:56
datowarn; ssh; no.20:56
lifelessvila: some caching is good if it allows network + python concurrency20:56
lifelessdato: and if there are *already* conflicts, will it make them worse?20:57
datoplus I think most people that'd be using it would not be editing the remote tree.20:57
datocan you pull in a tree with conflicts?20:57
lifelessdato: I'm not against it per se; but I think it needs real careful consideration as to UI impact.20:57
beunolifeless, what dato is proposing is actually *exactly* how we work here20:57
lifelessbeuno: do you use jam's plugin?20:58
beunoeverybody merges locally, then pushes to the main branch20:58
beunolifeless, I used to, now I just have a cron job updating all repo's every 2-3 minutes20:58
beuno~90 branches at this point20:58
lifelessbeuno: ok. Merging locally is orthogonal to this though.20:58
beunowhich is sub-optimal20:59
beunowell, we only push changes without conflicts20:59
lifelessbeuno: if your wt is different to your branch tip; then changes to the branch can conflict with the wt.20:59
beunolifeless, nobody works on the wt, just pushes to it20:59
beunoit's always "clean"20:59
lifelessbeuno: sure. Its still a case for the code to have to handle.20:59
lifelessbeuno: which is why I asked the questions I asked.20:59
beuno:D20:59
beunothat's just my +1 for dato's proposal21:00
jamvila: you only return if the current string fits the whole buffer, which seems like it would rarely hit21:00
jamsince usually you have already combined a couple ranges21:00
lifelessdato: branches can be updated regardless of the tree's state; thats orthogonal to UI21:00
jamoh wait...21:00
jamyou are buffering everything into a file object first21:00
jamand then seeking on that21:01
vilalifeless: judging by my perfmeters the concurrency is increased, the network never starves anymore21:02
lifelesscool21:02
vilajam: RangeFile reads the body of the response on demand, so we process the data as soon as it arrives, which means our code doesn't wait more than needed21:04
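A loose illustration of the on-demand reading vila describes — hand each requested range back as soon as its bytes arrive, instead of buffering the whole response body first. This assumes sorted, disjoint ranges and is not bzrlib's actual RangeFile class:

```python
# Illustrative sketch, not bzrlib's RangeFile: walk a streaming body
# strictly forward, yielding each range as soon as its bytes are read.
import io

def iter_ranges(body, coalesced):
    """Yield (start, data) for sorted, disjoint (start, length) ranges."""
    pos = 0
    for start, length in coalesced:
        if start < pos:
            raise ValueError('ranges must be sorted and disjoint')
        body.read(start - pos)        # skip the gap between ranges
        data = body.read(length)
        if len(data) != length:
            raise ValueError('short read')
        pos = start + length
        yield start, data
```

Because this is a generator, the caller can start work on the first range while later ones are still in flight, which is what keeps the progress bar alive.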
datolifeless: er, but pull finally updates the wt.21:05
datolifeless: (sorry, I was lagged)21:05
lifelessdato: I'm sorry, I don't get your point then.21:05
jamvila: do we know that the offsets will always be in increasing order for _coalesced_offsets... I think we do as we do sorted_offsets = sorted(offsets)... so we should be ok21:06
vilajam: yes we do21:06
datolifeless: you asked what happens if you push -u into a remote tree where conflicts already exist. I'm comparing it to pulling in a local tree where conflicts already exist.21:06
jamwe probably should add a line in the _coalesce_offset function that we expect the (start, length) pairs to be sorted21:06
vilawhat could happen is that the offsets are not in the same order as the coalesced offsets, and only in that case do we have to cache21:06
lifelessdato: I don't think they are equivalent, because bzr:// access does not imply ssh access.21:07
vilajam: true, but I reviewed all actual uses and they all sort the offsets before calling (now that I have said that, we may as well sort inside _coalesced_offsets ;-)21:08
datolifeless: the people who edited the remote tree in the first place do have ssh access, and they'd be the ones responsible for fixing it.21:08
lifelessvila: hmm, not all readv are sorted21:08
vilalifeless: true, I'm talking about _coalesced_offsets callers21:09
jamvila: I wouldn't sort, I would just comment on the api that it expects them to be sorted21:09
lifelessdato: maybe. Maybe a bug caused the conflict.21:09
jam(It won't work anyway if they aren't)21:09
vilajam: indeed, so why not sort them ?21:09
lifelessdato: but should it be made progressively worse if they are not around?21:09
jamvila: because they are already sorted21:09
jamand you don't really need to pay to sort a sorted list21:09
vilalol21:09
lifelessdato: it just seems unfriendly to progressively wedge a tree that the user has no guaranteed access to get at and fix. I don't think it should allow any conflicts ever.21:10
vilayeah, right, we better check they are not sorted before sorting them then21:10
lifelessuhm21:10
lifelessif you mean manually checking, thats slower than sorting anew, in python21:10
lifelessbytecode--21:10
jamvila: right, much more performing :)21:10
vilalifeless: joke man21:11
jamlifeless: I'm pretty sure he was being sarcastic21:11
fullermdOh, that's easy to unwedge.  You just use bzr --pretend-you're-in bzr://[...] revert21:11
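A sketch of the coalescing being debated: merge sorted (start, length) pairs into larger ranges, capping how many raw ranges combine into one (vila's arbitrary 25), and merely checking sortedness rather than paying to re-sort an already-sorted list. The names are invented; bzrlib's real _coalesce_offsets also tolerates small gaps between ranges, which this sketch does not:

```python
# Hypothetical stand-in for _coalesce_offsets with a _max_readv_combine
# style cap.  Only strictly adjacent ranges are merged here.
def coalesce_offsets(offsets, max_combine=25):
    """Merge sorted, adjacent (start, length) pairs into bigger ranges.

    Callers already sort their offsets, so we check instead of sorting.
    """
    coalesced = []
    cur = None          # [start, length, ranges_combined]
    last_start = -1
    for start, length in offsets:
        if start < last_start:
            raise ValueError('offsets must be sorted')
        last_start = start
        if (cur is not None and start == cur[0] + cur[1]
                and cur[2] < max_combine):
            cur[1] += length    # extend the current merged range
            cur[2] += 1
        else:
            if cur is not None:
                coalesced.append((cur[0], cur[1]))
            cur = [start, length, 1]
    if cur is not None:
        coalesced.append((cur[0], cur[1]))
    return coalesced
```

Capping the combine count is what keeps each HTTP range small enough for data to trickle back steadily, at the cost of the contiguous-range waste vila concedes below.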
datolifeless: yeah, if you want to help users not shoot themselves that'd be the thing to do. which I also think makes sense, since personally I'd only recommend -u for remote trees that are never edited by hand. and in such cases there'll never be conflicts, so... qed/whatever.21:11
vilaof course, as soon as I made a joke, who's coming ?21:11
jamvila: http readv of aa3ab464f867aaa3609bc8ba20f1e342.pack  offsets => 14672 collapsed 6211 ...21:11
jam(with my repository, which seems to be significantly less clean than roberts)21:12
fullermdvila: It would be quicker to find the last item in the list, then append a random earlier range to it.  Then you'd know it had to be sorted   :p21:12
lifelessdato: anyhow, I'm not against the idea of push-to-branch-triggering-update. I *am* against the idea of it ever adding a remote conflict.21:12
vilajam: haaa, nice one21:12
lifelessdato: so I think a mail to the list, and discussion about policy is a good next step.21:12
jamhmm... then later we get: http readv of aa3ab464f867aaa3609bc8ba20f1e342.pack  offsets => 14672 collapsed 2021:13
lifelesswe've been around this much before, and the basic issue is always the radical difference between on-my-disk vs remote-disk21:13
lifelessjam: revisions + inventories.21:13
datolifeless: ok, noted in my TODO21:13
jamlifeless: sure, revisions versus inventories versus texts.21:13
jamjust interesting that one section is very unclean21:13
jamthe other is relatively clean21:13
lifelessjam: do you have other projects in there?21:13
jamlifeless: yep21:13
jamjust like I do in real life21:14
lifelessthats likely it21:14
jamsure, I know *why* it is happening, just interesting to see it21:14
lifelesscool21:14
jamhmm. it still says it is copying inventory texts21:14
jamvila: when you were doing "count of data sizes downloaded" was that using your transport stats plugin?21:15
lifelessbrb; fooding21:15
vilajam: yes21:15
jamalso, shouldn't we not log every readv collapse unless we have -Dhttp ?21:15
vilajam: what ?21:16
jamvila: do a plain "bzr get" and it will show you every time it does a readv21:16
jamand how much it collapsed21:16
jam(it was an old bit that I wrote)21:16
jamI'm thinking we should probably only mutter() that if " 'http' in debug.debug_flags"21:16
jamthough I certainly find it useful right now21:17
vilajam: ha, ok, I thought you wanted more mutter but you want less21:17
jamyeah, double negatives are bad21:17
vilain fact there is only one mutter per readv, even if several GET requests are issued21:17
jamhmm... so I currently have 28 pack files, which means that we actually have *more* round trips21:17
jamwell, probably not for all the text data21:18
jambut for the inventories21:18
jamwe have a lot of .rix round trips21:18
jamthen a more reasonable .iix round trips21:18
jamhmm.... it seems vila's code goes in a slightly different pack order than bzr.dev21:26
jamI wonder if he is just missing some of the newer patches21:26
vilaI had to merge during the patch writing when I hit the is_ancestor test failure21:27
vilaI still wonder how I got a version from bazaar.org that fails to pass the test suite...21:27
jamvila: upgrading to packs broke the is_ancestor test21:28
jamis_ancestor just used the default format21:28
jampacks required the test to lock first21:29
jambut other than that, I don't know how you would have gotten a bzr that had the no-lock is_ancestor, but packs set to default21:30
vilahmmm, and I realized it only when I ran the full test suite, could be21:30
jamvila: and I do have to say, I get to see the blinkenlights21:30
vilahaaaaaaaaaaa :-)21:30
jamthat little guy, its spinning just fine21:30
jamit has 60s to beat bzr.dev's time21:31
vilajam: pycurl or urllib ?21:32
jamurllib21:32
jamit just made it21:32
jam20s faster21:32
vilahe he21:32
jam9m40s versus 10m21:32
PengIs it a good idea to run pack reconcile now?21:33
vilajam: can you try pycurl too ?21:34
jamhmm.... 64MB / 10m == 6.4MB/min or 106 kB/s... I should be able to get 160 or so, but my connection might be in use somewhere21:34
jamvila: sure21:34
jamI suppose I can install it21:34
jamI haven't to date, because I prefer urllib21:34
vilareally ? Wow, nice surprise ;)21:35
jamyeah21:35
jam^C during download is a pretty big downer for pycurl21:35
jambut also, I felt I wanted to get away from it anyway21:35
jamso might as well dogfood urllib21:35
jamI don't think I have it on any machines right now21:35
* vila <whisper> you can ^C pycurl now, but don't tell anyone ;)21:36
vilaat least I've successfully done it a couple of times now21:36
jamvila: just because it downloads in smaller chunks at a time?21:36
vilayup21:36
jamor can you actually interrupt the data stream21:36
vilano, just because the probability of breaking at a bad time is lower now21:37
jamwell, I'm doing memory dumps as I go, too21:37
jamso I'll let you know21:37
jaminitial reports look good21:38
jam(just browsing the memory log file, maybe 20M or so less consumed.)21:38
vilaso peek mem before/after patch: pycurl: 173M/95M urllib: 131M/85M21:38
lifelessnice21:38
jamvila: "peak"21:38
vilajam: rats21:39
lifelessPeng: yes, my fix for it landed in bzr.dev about 20 minutes ago21:39
jamI get 120MB versus 99MB VmPeak for urllib21:39
jamvila: is that with robert's repository ?21:40
lifeless32/64 bit maybe?21:40
vilayeah, still the one from bug #165061, by the way lifeless, do you intend to update it ?21:40
jamvila: so pycurl still gives blinkenlights, but more intermittently21:41
ubotuLaunchpad bug 165061 in bzr "bzr branch http:// with a pack repository takes too long" [High,Fix released] https://launchpad.net/bugs/16506121:41
jamso you see nothing for a few seconds, then a bunch of updates21:41
jamwhile urllib gives a steady stream of updates21:41
jamwhich is pretty expected21:41
vilajam: yes, tweak the 25, I was happy with it, but as I said my latency is low so I can't really be the one to judge21:41
jamI think this is about right for my latency21:42
jamI still prefer urllib21:42
lifelesswhats the number mean ?21:42
jambut at least it doesn't hang for minutes21:42
jamlifeless: number of ranges to collapse21:42
lifelesshmm21:42
lifelessmany ranges are small21:42
jamwell, on top of that, we have 200 ranges per request21:42
lifelessI would rather suggest to do it on the total byte count in the collapsed ranges21:42
jamso really it is 200*##21:42
jamlifeless: I agree that would probably be better21:43
lifelessbecause its byte count vs latency that the real world operates on21:43
jam(raw ranges => collapsed ranges => request)21:43
lifelessright21:44
lifelessso I'm saying raw ranges => collapsed ranges => byte count filter => requests21:44
lifelessor something like that21:44
lifelessbecause21:44
lifelessif I ask for a 5MB single text21:44
jamat the moment it is "raw ranges => count filter => collapsed ranges => requests"21:44
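lifeless's "byte count filter" step could be sketched as follows (invented names; note that a single range bigger than the budget, like his 5MB text, still has to go out as one over-sized request, since a coalesced range can't be split here):

```python
# Hypothetical sketch of splitting coalesced ranges into HTTP requests
# by a total-byte budget instead of a fixed 200-ranges-per-request cap.
def split_by_bytes(coalesced, max_bytes=4 * 1024 * 1024):
    """Yield lists of (start, length) ranges, each list one request."""
    request, request_bytes = [], 0
    for start, length in coalesced:
        if request and request_bytes + length > max_bytes:
            yield request            # budget spent: flush this request
            request, request_bytes = [], 0
        request.append((start, length))
        request_bytes += length
    if request:
        yield request
```

Sizing requests in bytes matches the byte-count-versus-latency trade-off lifeless describes: each flush costs a round trip, so the budget directly tunes requests per megabyte.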
mwhudson~/src/bzr/173010/bzr branch http+urllib://bzr.arbash-meinel.com/branches/bzr/0.93-dev/extra_range_collapse_16506121:44
lifelessshould that give no output while it reads ?21:44
mwhudsonis still taking at least a few minutes to show any ui life21:44
jammwhudson: can you wait a bit to let my tests run?21:45
lifelessrofl21:45
mwhudsonjam: uh, sure21:45
jambefore my pipe gets killed21:45
jamI'm just finishing one21:45
vilamwhudson: where the hell did you get that ? :-)21:45
jamit should be done in ~ 5min21:45
mwhudsonvila: your patch on the mailing list21:45
vilahehe21:45
vilalifeless: I concentrated on the readv implementation, but I noticed that indeed the pb pops up a bit late21:46
Penglifeless: 3070, "Fix an ordering insertion issue with reconcile of pack ..."?21:47
lifelessPeng: yes21:47
vilaat least 10 requests are issued before it shows up21:47
PengOhh..21:48
PengI think I already reconciled it a while ago.21:48
lifelessthen it should be trivial21:48
vilabefore I forget: yes, _max_readv_combine generates many coalesced offsets which result in contiguous ranges, so we waste header space there; lifeless is right that it should be reworked21:48
PengIt took 35 minutes to check and 1 minute to reconcile then.21:49
lifelessvila: more than header space is the latency of issuing new requests.21:49
lifelessvila: clocking on the wire is less expensive than going around the world21:49
vilalifeless: sure, one more reason to use the header space efficiently to issue less requests21:50
Pengfewer21:51
Peng</Peng's mom>21:51
lifelesslol21:51
vilaExcusez-moi madame, je ne parle pas tres bien l'anglais, et en plus je n'ai pas les accents pour le francais (ni la cedille pour le c ;)21:52
lifelesscedille ?21:52
lifelessc'est quoi cedille?21:52
lifeless</BAD french>21:52
vilacedilla, and your french is correct :)21:54
vilaby the way, keep correcting me21:55
lifelessoh, ç ?21:55
vilayes21:55
vilaor \,c C-M-" under emacs >-)21:56
lifelesshave you set up composing in your gnome keyboard foo?21:56
lifelessç is left-windows+comma c, for me.21:57
vilacomposing is not available everywhere; I make enough typos by constantly switching keyboards, so I settled on the US layout everywhere and manage to type less and less french outside emacs... :-)21:58
jamvila: maybe it was because of mwhudson, but pycurl with your patch was 14m22s21:58
vilaI have three keyboards here: one Apple, one Dell, one Sun...21:58
jamvila: you are right about the extra header stuff, but you also need some granularity so that you can break at appropriate points21:59
vilajam: yes, blinkenlights needs more requests so more latency.... told you ! ;-)21:59
vilajam: yes, trade off, *my* plan is to drop pycurl ;-)22:00
lifelessvila: I have a US keyboard. gnome lets you set the compose key22:00
jamlifeless: I understand your point about a single 5MB text... but at the moment we don't really have a way to do progress while downloading22:01
lifelessjam: nested_progress_bar().tick()22:01
Pengvila: Which is your favorite keyboard?22:01
vilalifeless: I know, but I prefer to stay with a simple setup that I can use under Ubuntu, Solaris, Mac OS, windows, etc (including VMs which love to break that sort of thing), add synergy in the scheme (to use the same keyboards between different X servers and....)22:02
jammwhudson: you can abuse my net connection now22:02
mwhudsonjam: thanks22:03
jammwhudson: I also have: http://people.ubuntu.com/~jameinel/test/22:03
mwhudsonjam: i'm only testing the abuse you ask launchpad to give it :)22:03
vilaPeng: I code therefore I use software designed by people with US keyboards, so having ZXCV (undo/cut/copy/paste) on the same line is a win22:03
jamup with a bunch of real branches which is a copy of my working repo22:03
jammwhudson: yeah, except LP only does that 1 time, and then at least quiets down afterwards22:03
jamthough I do notice in my server logs22:03
mwhudsonjam: except not at the moment22:03
jamthat I get a whole lot of 404's from LP22:04
mwhudsonhttps://code.edge.launchpad.net/~jameinel/bzr/extra-range-collapse-16506122:04
mwhudsonbecause it takes over 15 minutes for the progress bar to appear22:04
jammwhudson: why is LP failing on that branch?22:04
jamdo you have any idea?22:04
mwhudsonjam: yes22:04
jamoh, the LP branch is locked22:04
vilathen having {} just above [] and () all easily accessible are far better than french layout (and don't talk about windows where { is something like alt-156 or whatever)22:04
mwhudsonjam: that's just a symptom22:05
mwhudsonjam: the real problem is that i landed a branch that kills the puller worker if there is no progress bar activity for 15 minutes22:05
jamah22:05
jamhmm....22:05
jammwhudson: well, that branch is in knit format22:05
mwhudsonbecause we've had problems with pullers getting stuck for days22:05
jamso it is downloading most/all of inventory.knit22:06
jambefore it does much22:06
mwhudsonjam: to some extent22:06
mwhudsonjam: i'm not interested in your excuses :)22:06
* vila calling it a day22:06
vilanight all22:06
lifelessgnight22:06
jamIt is about 25MB at 30KB/s...22:06
jamvila: night22:06
jamwhich is 14 minutes22:07
mwhudsonit's still bad for bzr to show no activity at all for 25 minutes or however long it will take22:07
jam13.922:07
lifelessmwhudson: preaching. Choir.22:07
jammwhudson: oh, I'm not saying it isn't bad form for bazaar to act that way22:07
lifelessmwhudson: hear the angels?22:07
jamvila's change will do a lot for that22:07
mwhudsonlifeless: yes, they're singing "give it up for today"22:07
jammwhudson: I thought you gave it up about 3 hours ago22:07
mwhudsonjam: that's why i'm testing vila's change22:08
mwhudsonjam: i should have done22:08
jampacks will help a lot, too, as they copy in a different fashion that gives better progress anyway22:08
jamI'm just being stodgy on my public repo22:08
jamand forcibly testing pack => knit stuff22:08
jammwhudson: I did notice a lot of 404 in the server logs, every time LP tries to mirror a branch and finds it is in a shared repo22:09
lifelesswhich is good22:09
mwhudsonjam: launchpad just does branch.pull()22:10
jamwell, it might be nice if it could check Branch.last_revision() and see that it has not changed without doing any more work22:10
jammwhudson: sure, I know why22:10
mwhudsonjam: little chance for funny business here22:10
lifelessjam: opening the branch finds the repo22:13
jamlifeless: the way our code is written, yes, it wouldn't *have* to22:13
jamI don't really mind22:13
jamI just don't read the server logs22:13
lifelessjam: I really think it should stay the same22:13
jammwhudson: Did the puller get updated, it seems to mirror 5x per day now22:14
jamlifeless: I think having WT act differently based on what repo it is connected to is a layering violation... and we shouldn't have to poll everything when we only need 1 bit of info22:14
jambut it isn't critical to me22:14
lifelessjam: I agree about the layering violation, that's a compromise (everyone is unhappy); but I think the knowledge that the stack is good is valuable.22:15
mwhudsonjam: 4x is the default22:15
jamI just get to see this in my logs: http://rafb.net/p/gHjqDX47.html22:15
mwhudsonand has been for ages and ages, afaik22:15
lifelessmuch better than operating on something for X time and *then* finding out it's corrupt.22:15
jammwhudson: ok, I just see 5 queries22:15
jamI used to see 1, IIRC22:16
jamI haven't looked at it in quite a while22:16
mwhudsonprobably since before i started :)22:16
jamwhat I really need to do is update all of those to Branch6 format22:18
jamso it doesn't have to download the full revision-history file each time22:18
mwhudsonmy pull against jam's repo is spinning the bar now22:19
mwhudsondidn't notice when it started though22:19
mwhudsonpretty sure it's quicker than without vila's patch though22:20
PengHey, it's been more than 1 minute since I started reconcile.22:20
vilalifeless: one last thing, regarding obsolete-packs, what about replacing [t.remove(f) for f in t.list_dir('obsolete-packs')] by t.remove_tree('obso-packs'); t.mkdir('obso-packs') ?22:20
lifelessvila: race conditions FTW22:20
jamwow, 279 Branch5 branches...22:21
viladon't you have more race conditions with a listdir ?22:21
jamvila: race conditions, and permission issues22:21
jamvila: you don't have anyone who will try to rename a file into a missing directory22:22
jamversus having a file that was deleted for you22:22
jamor missing a file that showed up22:22
jam(which we don't really care about anyway)22:22
vilajam: permission issues I understand but what race conditions, isn't the repo locked ?22:23
jamvila: no it isn't22:23
vilaha, ok22:23
jamyou only have to lock while updating pack-names22:23
jamonce you've updated that22:23
jamnobody will try to access the packs you are moving22:24
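The reasoning jam and vila walk through above — tolerate files vanishing under you rather than deleting and recreating the whole directory — can be illustrated with a hypothetical helper (this is not bzrlib's actual obsolete-packs code, and it uses plain os paths rather than a Transport):

```python
# Illustrative sketch: clear an obsolete-packs directory file by file.
# A concurrent process may delete entries (or rename new ones in) while
# we iterate, so a file that disappeared is not an error.  Removing and
# recreating the directory instead would yank it out from under a
# concurrent rename -- the race vila's remove_tree idea runs into.
import os

def clear_obsolete_packs(path):
    removed = []
    for name in os.listdir(path):
        try:
            os.remove(os.path.join(path, name))
        except FileNotFoundError:
            continue   # someone else cleaned it up first; that's fine
        removed.append(name)
    return removed
```

Keeping the directory itself intact also sidesteps the permission issues jam mentions, since recreating it could change ownership or mode.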
vilaok, thks22:24
vilabye ;-)22:24
jamvila: kthxbye22:25
jamYou really need to work on your spelling22:25
jam:)22:25
jamhmm... upgrading to branch6 saved me about 30MB22:40
=== n[ate]vw is now known as natevw
=== natevw is now known as n[ate]vw
n[ate]vwjam: I'm wondering if any decisions have been made yet regarding handling Unicode normalizations?22:50
jamno new decisions22:51
jamI don't think it was discussed much recently22:51
jamwe've got a lot of other stuff on our plate22:51
jamn[ate]vw: if you want to start up a discussion, you are welcome to22:51
jamI sort of burned out on trying to make everything work22:51
jamas I seemed to be the only person who cared...22:51
jamwell, that, and it would always break for someone22:51
n[ate]vwyeah, understood22:51
jamI felt that penalizing Windows/Linux users because Mac likes to change filenames was a bit sadistic22:52
n[ate]vwjam: I'm still very much just a user at this point, but I have read up on Unicode a bit22:52
n[ate]vwI'm still kinda trying to pick between bzr/hg22:53
jamn[ate]vw: did you try my patch?22:53
jamIt would be pretty easy to get that merged in22:53
jamso at least it would let a single platform stay compatible with itself22:53
jameven if branching from Unix => Mac might break stuff22:53
jam(well, filenames would show up as missing, etc)22:54
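The Mac filename issue under discussion is Unicode normalization: HFS+ stores names in a decomposed form, so the same visible name can be two different byte sequences depending on which platform created it. Python's unicodedata shows the two spellings:

```python
# Demonstrates why a filename created on Linux (precomposed, NFC) can
# "go missing" after a round trip through a Mac filesystem (decomposed,
# NFD-like): the two forms compare unequal byte-for-byte.
import unicodedata

nfc = 'caf\u00e9'                          # 'café' with precomposed é
nfd = unicodedata.normalize('NFD', nfc)    # 'cafe' + combining acute

assert nfc != nfd                          # different byte sequences
assert unicodedata.normalize('NFC', nfd) == nfc   # but the same text
```

A VCS that compares names byte-wise therefore sees the NFD name as a deletion plus an unknown file, which is the missing-files symptom jam describes.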
jamwe recently merged some code which helps with the case insensitivity problems22:54
lifelessspiv, jam, igc call in 522:54
n[ate]vwjam: I haven't tried the patch, not much of a python hacker. do I apply that to the "source" tarball d/l, or can it apply straight to the installed bzr "executable" in bin?22:56
jamn[ate]vw: probably recommended to go for the source22:57
jamyou probably could apply it to the library in22:57
jam /usr/lib/python2.4/site-packages/bzrlib/*22:57
datohm, I'm also noticing pull being much more cpu-intensive now, at least pull -v22:57
jambut I would probably recommend running from source if possible22:57
n[ate]vwnp, the source is fine22:57
jamn[ate]vw: also, you can run bzr directly from source (without installing)22:58
n[ate]vwlooks like there's a new rc out since I was testing it last22:58
jamjust run 'make' so you get extensions built22:58
jam(if you don't, it will still run, but just slower)22:58
igcmorning22:58
n[ate]vwthat's good to know22:58
n[ate]vwjam: I just read an interesting article by Drew Thaler on case sensitivity, and he happened to defend OS X's normalization in the comments: http://drewthaler.blogspot.com/2007/12/case-against-insensitivity.html#comment-227967439933524189622:59
jamn[ate]vw: I understand (in theory) why normalization is the "right thing" but because nobody else does it23:00
jamit just means yours is the only platform which breaks stuff23:00
n[ate]vwyeah, and it looks as though you guys are trying to push 1.0 out the door so I don't want to harass you about this at the moment23:01
n[ate]vwit'd be nice to see a good cross-platform solution eventually, but I understand it's a tough nut to crack23:02
jamlifeless: conference call ?23:03
logankoesterWhat package do I have to install to get the "bzr" command on my system?23:05
lifelessjam: yes, distracted by code.23:05
logankoesterI have bazaar23:05
lifelessbzr23:05
lifelesslogankoester: ^23:05
lifelessigc: conf call time if you are not already in23:06
logankoesterheh23:06
igcin23:06
logankoesterthanks ;)23:06
jamn[ate]vw: well, the small patch I gave you might be a simple thing to get into 1.023:16
jamcode-wise it is nice and small23:16
jampolitics wise, a bit more involved23:16
jamBut as I was the one who spent an irrational amount of time trying to get it all working in the first place.23:17
abentleyvila: pong23:18
n[ate]vwjam: I will try to test that patch tonight after work (unless work continues to be waiting on others, in which case sooner!) and post my results to the ticket23:23
abentleylifeless: Should I be merging my 1.0 patches into bzr.dev?23:29
abentleylifeless: Conflict detection is done before the merge is applied, so it would be possible to error out if push would trigger conflicts.23:34
PengReconcile finished in 20 minutes.23:58
* Peng wanders off.23:58

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!