[00:01] then look at bzrlib/smart/medium.py
[00:01] and tell me how on earth bzr_access could ever work
[00:02] * Kinnison pouts prettily at jelmer
[00:05] Kinnison: Presumably if you're using bzr:// and not bzr+ssh://, IIR this problem correctly.
[00:09] bzr_access is explicitly for bzr+ssh though
[00:09] Then I don't recall this problem correctly. :p
[00:09] Sorry. :(
[00:13] :-(
[00:13] bzr:// uses a TCP connection
[00:13] which is fine if you can trust the network between you and the server
[00:14] sftp:// is slow and lacks remote hooks, but works for untrusted networks, at the expense of needing a Unix user per person who can access trees
[00:14] bzr+ssh with bzr_access had promise, but has turned out to be useless
[00:14] * Kinnison sighs
=== brilliantnu1 is now known as brilliantnut
[05:41] New bug: #182469 in bzr "Bazaar has encountered an internal error: exceptions.MemoryError" [Undecided,New] https://launchpad.net/bugs/182469
[06:51] New bug: #182477 in bzr "bzr crashing when trying to commit to the APT repository" [Undecided,New] https://launchpad.net/bugs/182477
=== pbor|out is now known as pbor
=== GaryvdM_ is now known as GaryvdM
=== cfbolz_ is now known as cfbolz
[20:10] Eeey... How do I enable bzr support in emacs?
[20:12] MWinther: I don't use emacs; did you look at https://launchpad.net/vc-bzr ?
[20:15] Verterok: Oh, thanks... I missed the instructions in the comments.
[20:16] np, glad to help :D
[20:25] Hi all. I'd just like a confirmation: private branches can be rebased, but as soon as they're made public they should only merge and pull, right?
[20:28] yeah
[20:30] Out of curiosity, what happens if I rebase a branch that someone else has already branched?
[20:31] Lo-lan-do: yes; other than for bzr-svn (which can only track lhs history) there shouldn't be much reason to use rebase
[20:31] s/which/because it/
[20:32] Lo-lan-do: Next time they pull, they will get a "branches diverged" error
[20:32] Lo-lan-do: and if they merge afterwards, the rebased revisions end up twice in "bzr log"
[20:32] Oh. Right, I'll refrain then :-)
[20:32] one use case is for right before pushing
[20:33] if you notice you're diverged, you rebase and push
[20:38] I see.
[20:39] Say, is the LCA merge algorithm the default in 1.1?
[20:40] I've read it's the bee's knees, so if it's really better I guess I should replace --weave with it in my scripts?
[20:42] * rjek wonders idly if there's a bzr-cvs.
[20:42] I suppose it's more of a faff due to the lack of changesets.
[20:42] rjek: The website says there isn't
[20:42] (I wondered the same thing yesterday)
[20:43] I just want to import another project's source tree into mine. It'd be nice to have history, but it's not essential.
[20:43] I'd suggest cvs2svn then bzr-svn
[20:43] rjek: it is possible to import CVS to bzr using Launchpad
[20:43] rjek: if it is open source
[20:43] thumper: It is - it's on sf's CVS server.
[20:44] I only want to make a handful of changes to the source stored in CVS for my own purposes; it's more about reproducibility than version control, tbh.
[20:44] rjek: Launchpad uses cscvs to import CVS to bzr
[20:44] Does cscvs have advantages over cvs2svn?
[20:45] rjek: I don't really know that much about it
[20:45] rjek: except cscvs goes from cvs -> bzr
[20:45] rjek: rather than cvs -> svn
[20:45] Well, yes - there's an advantage in removing one of the steps, clearly.
[20:45] I'd be curious to see how cvs2svn, then bzr-svn, works
[20:45] But I know bzr-svn does a good job.
[20:46] So if cvs2svn does a good job, you'd hope the output would be high-quality.
[20:46] rjek: one of the things that cscvs does is that it tracks CVS changes
[20:46] rjek: rather than a one-off conversion
[20:46] That might be more useful, tbh.
[20:46] (ie, that *is* an advantage I would be interested in.)
[20:54] thumper: What's involved in getting Launchpad to track a CVS repository for me?
[20:55] rjek: https://help.launchpad.net/VcsImports
[20:55] Ta
[21:10] * rjek mutters at Launchpad.
[21:10] Apparently, optional fields are compulsory.
[21:18] rjek: instead of muttering, file a bug :)
[21:20] I wish to use Launchpad for the minimum amount of time I can get away with :)
[21:20] rjek: why?
[21:21] rjek: best to move this conversation to #launchpad
[21:21] I generally find it hateful.
[21:31] I'm realizing I'm sort of swimming upstream here... trying to find a nice way to at least convert, if not actively track, CVS to bzr; but we have about 40 projects in CVS on a FreeBSD system, with people using Windows CVS clients to check in and manage Windows projects. The projects contain mostly Windows text but also some binaries.
[21:32] I'm told cscvs is the way Launchpad does it. I'm also told Tailor, once set up, can nicely do incremental imports.
[21:32] I believe bzr *always* checks in/out exactly what you send in: check in text from Windows, check out CRLF no matter where you are. But cvsps-import gets LF only from CVS in all cases I've tried, so cvsps-import can't currently manage the conversion, because all checkouts would be LF-only even under Windows...
[21:33] Does anyone know if cscvs (which I hadn't heard of until now) could help with this at all?
[21:34] dlee: so the limiting factor here is we haven't implemented line ending translations
[21:34] rjek: I've used Tailor to some good effect with CVS (it *will* do CRLF properly, it seems), but in my experience it fails to notice CVS tags, so they don't show up on the bzr side.
[21:35] dlee: the 'right way' to convert your repository is to convert it in binary form, and then use line ending translation to get the text files checked out as text files on Windows
[21:35] lifeless: Could be thought of that way. Could also be said that the limiting factor is that Windows insists on an annoying EOL translation. :P~~~
[21:35] well, older Macs have a different EOL to Unix and Windows
[21:35] so it's a general problem
[21:36] But anyway... I'd be fine with putting CRLF in Bazaar so everything on the bzr side checks in/out with no translation... unless (I) someone knows a reason against that, or (II) I can't find a solution before bzr implements such translation
[21:37] I think you will have to hack up a solution - editing cvsps, for example
[21:37] lifeless: point.
[21:37] lifeless: I tried hacking cvsps-import but not cvsps itself.
[21:38] lifeless: Isn't it true, though, that if I check in text on Windows, it becomes CRLF inside the Bazaar branch?
[21:40] My reason for thinking it would be best to get things in as CRLF and then not need translation is, mostly, that I thought that is what will automatically happen for new projects we do anyway.
[21:41] currently bzr just stores the bits untranslated
[21:41] so if your editor makes CRLF, you get CRLF
[21:41] I meant hacking cvsps-import actually ;)
[21:43] Right... so in my mind at least, adding CRLF translation won't really help us here unless it happens before we kick off the whole CVS-to-Bazaar conversion, because currently all projects either will by default, or must for consistency be made to, be stored as CRLF internally in Bazaar.
[21:45] My knowledge of Python remains limited, but my last experiment there was to try using universal_newlines. I suspect subprocess.Popen.communicate() (if that's the right object hierarchy) of quietly converting CRLF to LF. Cygwin could also be doing it for pipes, though it doesn't for output redirection, I guess.
[21:46] well
[21:46] cvs will have everything stored in LF at the moment
[21:46] there used to be flags for this in the Cygwin environment
[21:46] I suppose the next shot I'd have is to pass all cvs/co output through an output-to-file, read-from-file sequence, and see if that preserves line endings. But I have a better idea that I don't know how to do:
[21:47] (I do have MSDOS endings set in Cygwin, yet all this still happens)
[21:47] Better idea, if it can be done: add a cvsps-import flag that makes it convert to CRLF on CVS files that do not have -kb set. Sound?
[21:48] I like this better because it's OS-independent, and I could mass-convert all 40 projects on the same FreeBSD system where the CVS repos are. :)
[21:48] -kb isn't a historical setting IIRC
[21:48] if it is then sure
[21:48] hmm, actually paging it in - it is
[21:49] Sorry... if you're doing serious work, don't swap on my account. :-) But the help is sure appreciated.
[21:49] I think that's a reasonable short-term strategy
[21:50] the downside is that when bzr gets this natively, all your text files will see a full-file diff
[21:52] If/when bzr includes LF translation, won't we need a way to convert existing branches to an internal standard too? The best I can imagine is svn-like properties, where you flag a file as one ending type or another to make bzr (I) standardize it internally and (II) spit it out according to the given type. Not sure whose project endings are though...
[21:54] Scenario: under Windows, I check in text.txt and word.doc. They are both stored without translation - text.txt as CRLF and word.doc as what it is. If I check out on Unix, I get the same - CRLF and doc. Then native ending support comes in...
[21:54] If what you're saying is true, I'd see a full-file diff on those then too, even though I hacked nothing. And I think it's inevitable, unless we deal with the branch conversion as a sort of upgrade-type thing.
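The flag dlee proposes above would amount to rewriting line endings on import for any CVS file that does not carry the -kb (binary) keyword flag. A minimal sketch of that translation step in Python follows; the convert_to_crlf helper and its is_binary parameter are hypothetical names for illustration, not part of cvsps-import:

```python
def convert_to_crlf(data, is_binary):
    """Normalize text content to CRLF line endings.

    data: raw bytes as cvsps-import currently produces them
    (LF-terminated).
    is_binary: True if the CVS file has -kb set, in which case the
    bytes must pass through untouched.
    """
    if is_binary:
        return data
    # Collapse any existing CRLF to LF first, so we never emit CRCRLF,
    # then expand every LF to CRLF.
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
```

The first replace makes the conversion idempotent, which matters if some files in the repository were already checked in with Windows endings.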
[21:57] those == text.txt
[22:03] Hmm... I guess internal conversion is optional as long as flagging a file as specifically CR, CRLF, or LF forces (I) output (co etc.) translation and (II) input-time translation to the ending type detected in the existing branch copy. I imagine it'll be done, though, as input-time translation to the specified type, and *maybe* output-time also.
[22:04] we have a wiki page about it
[22:04] bzr line ending something or other - google FTW
[22:08] lifeless: looking...
[22:08] morning
[22:16] lifeless: Not finding it... but I'll table all this until I do, so you don't have to say a lot of stuff I should already know.
[22:19] http://bazaar-vcs.org/LineEndings
[22:19] is there a good internal API doc anywhere for bzr?
[22:19] or is poking through the code the best way to get a handle on things?
[22:19] there are api docs
[22:19] on the web
[22:20] http://starship.python.net/crew/mwh/bzrlibapi/bzrlib.html
[22:20] though I think that's an old copy; google isn't finding the newer one for me
[22:21] lifeless: and that's just generated from pydoc stuff, right?
[22:21] I use 'pydoc foo' a lot
[22:21] so the same info as in the docstrings
[22:21] yes, though it is richer than pydoc because it hyperlinks
[22:22] well right... but it doesn't have an overview of, like, "a bundle contains a ..." or whatever
[22:22] mtaylor: there is also doc/developers/
[22:23] lifeless: hm. maybe I'll look in there
[22:23] I'm working on adding bzr support to reviewboard
[22:23] and I'm getting tired of reading source
[22:23] :)
[22:23] e.g.
[22:23] http://doc.bazaar-vcs.org/latest/developers/bundles.html
[22:24] http://doc.bazaar-vcs.org/latest/developers/bundle-format4.html is probably what you are working with
[22:24] lifeless: ah...
yes, this looks more like what I'm looking for
[22:26] lifeless: so, while I'm bugging you - if I have a branch and a bundle - the bundle contains a list of file ids and a list of revisions that go with the file ids
[22:26] lifeless: so I should be able to get a path for the file for a revision, based on revision id and file id
[22:26] have you looked at the bundle buggy code? it has to do everything you are doing
[22:26] lifeless: yeah - I was just about to start walking through that
[22:27] bundles are not yet directly usable as branches; they have to be installed somewhere, then you can get a revision tree of the revision the bundle introduced (or a delta, for cherrypick bundles)
[22:27] I guess my question is... the bundle buggy code seems to merge_directive.install_revisions(branch)
[22:27] ah
[22:28] ok, that makes it much clearer
[22:28] I was trying to think of a bundle as usable like a branch
[22:28] ok
[22:28] sweet. thanks!
[22:29] lifeless: so if I get a Branch, and then install the revisions of the bundle into the branch, I haven't actually applied that bundle yet, right?
[22:31] abentley: hey - I added some patches to a local bundlebuggy to support multiple branches - are you interested in them?
[22:32] Interested? Sure. What do they do?
[22:33] abentley: so that I can configure a local repository containing more than one branch
[22:33] and then bb can match a merge directive to the branch it belongs with
[22:33] so you can manage more than one tree with one bb
[22:33] should I send them to you directly? or to the bzr.dev bundlebuggy?
[22:34] Directly to me, please.
[22:34] ok
[22:35] abentley: oh hey!
[22:35] lifeless: Hi.
[22:35] abentley: I was looking for you a few minutes back; I've just mailed about BreadthFirstSearcher and fetch
[22:35] abentley: how do you feel about me modifying the searcher to not return ghosts (including the start revisions, if they are absent)?
[22:35] I don't see anything yet...
[22:36] literally just sent
[22:36] lifeless: well, my gut says that's the wrong place to be filtering out ghosts.
[22:37] I see it now.
[22:37] I've had a poke around, and couldn't see any use case for it. ParentProvider needs to return ghosts; I don't think breadth-first searching does.
[22:37] because breadth-first searching is already hiding ancestry and topological order details
[22:37] it could, for instance, return (next_revs, next_ghosts) if you like
[22:38] that would be an api break I guess, so next_with_ghosts() or something for api transition
[22:41] lifeless: I think we should be including ghosts in our APIs except where it's clear that they must not be present. Otherwise, we are likely to get bugs related to hiding ghosts, and never know it.
[22:42] Bugs due to using ghosts when they should be ignored will be much more visible.
[22:42] this is one
[22:42] And it's pretty visible, right?
[22:43] I agree with your point; however, bfs is discarding data
[22:43] it knows what rev ids are ghosts
[22:43] it needs to propagate that information
[22:44] abentley: sent
[22:44] or else we end up doing double queries
[22:44] lifeless: Propagating that info is completely reasonable.
[22:44] What do you think of the changed return value I suggest?
[22:44] (or new on a new method)
[22:44] mtaylor: got it.
[22:45] lifeless: That would be fine on a new method, but I think the default method should just return revision_ids, ghost or not.
[22:46] I'd like to deprecate next(), I think, if I add a new method, unless there is a good reason for having two query interfaces
[22:46] Well, I'd like to include ghosts by default.
[22:47] they will be included
[22:47] Won't they be split into a different group?
[22:47] yes
[22:47] I think that ghosts should not be split into a separate group by default.
[22:47] I think that once we have identified the ghosts we should keep them separate
[22:48] I think that operations should only be paying attention to whether a revision-id is a ghost if this data is relevant to the operation.
[22:48] because otherwise we are forcing roundtrips on other parts of the code, or making other parts of the code do filtering when they should not
[22:49] Your proposed API would encourage people to do next()[0] even when ghosts should be included, because 95% of the time there aren't any ghosts.
[22:49] I think you are creating a source of performance / correctness bugs for users of that interface
[22:49] I have not proposed anything that would damage performance.
[22:49] I explained above how it hurts performance
[22:49] Your explanation suggests a misunderstanding of what I'm saying.
[22:50] anyhow, I'll do as you desire, because my need will be met by the new method
[22:50] If you want an API that splits out ghosts, fine.
[22:50] But don't deprecate the old one.
[22:50] Because it's worse for people to ignore ghosts when they should not than to pay attention to ghosts when they should not.
[22:52] I understand your motivation and agree in principle; on this particular API I think you are wrong.
[22:52] anyhow, -> doing the new method now.
[22:53] in particular, returning the start revision ids for a search when they are absent from the repository is really hard to work with
[22:53] and that's just the special case of ghosts
[22:53] abentley: so I have a separate request; I want to modify next() to raise StopIteration if the start revisions are all ghosts
[22:54] abentley: I don't have to, but please think about that
[22:54] Why?
[22:55] because I spent the greater part of an hour, while writing the pack fetch logic, handling that case; it's sufficiently confusing as an api that there is a 5-line paragraph explaining the code that uses it
[22:55] But now you'll have an API that splits out all ghosts.
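The (next_revs, next_ghosts) return value floated above can be illustrated with a toy breadth-first searcher over a parent map. This is a standalone sketch under assumed names, not bzrlib's actual BreadthFirstSearcher; it only shows the shape of the proposed API, where each search step reports present revisions and ghosts separately instead of discarding the distinction:

```python
class GhostAwareSearcher:
    """Toy breadth-first ancestry searcher (illustration only).

    parent_map maps revision-id -> tuple of parent revision-ids.
    A revision-id referenced as a parent but absent from parent_map
    is a "ghost": we know it exists, but its history is missing.
    """

    def __init__(self, start_revisions, parent_map):
        self._parent_map = parent_map
        self._frontier = set(start_revisions)
        self._seen = set()

    def next_with_ghosts(self):
        """Return (present, ghosts) for the current search step."""
        current = self._frontier - self._seen
        if not current:
            raise StopIteration
        present = {r for r in current if r in self._parent_map}
        ghosts = current - present
        self._seen |= current
        # The next frontier is every parent of the present revisions;
        # ghosts have no known parents, so they contribute nothing.
        self._frontier = set()
        for rev in present:
            self._frontier.update(self._parent_map[rev])
        return present, ghosts
```

Note that, as discussed above, a searcher started on nothing but ghosts stops immediately under this scheme, because the very first step has no present revisions to expand; that is exactly the start-revision special case being debated.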
[22:55] So callers that don't want ghosts won't get any.
[22:56] yes, and I'll use that; I'm thinking of the users of next(), if people want to keep using it
[22:57] That seems like an inconsistency in the API.
[22:57] hello
[22:57] If you ask for 5 ghosts and 1 non-ghost, you get all of them listed, but if you ask for 6 ghosts only, you get nothing?
[22:58] you're right, I guess; I'll just write the more usable api I need and leave it at that. It feels wrong to have an api that discards information which is relevant to its callers, is all
[22:59] Btw, one client that wants to keep references to ghosts is Graph.heads
[22:59] poolie: Hi.
[22:59] hello
[22:59] welcome
[23:00] Thanks.
[23:02] abentley: it does?
[23:02] Sure. We can't assume a revision isn't a head if we don't know its derivation.
[23:03] we can't assume it is either; I would expect us to signal that to the caller
[23:05] hmm, this means that we'll generate inconsistent last-changed revisions for baz imports, just in reverse order to how we used to do it
[23:05] If the revision itself is a ghost, but is not reachable from candidate heads, it must be a head.
[23:07] We may wind up with some false heads if some descendant is a ghost. I don't think we can avoid that. But we can know whether the ghost revision itself is a head.
[23:09] then if the revision is /not/ a ghost, but is not reached, and we encountered ghosts, we cannot tell if it is a head or not, unless it reached the other heads
[23:09] if the revision is a ghost, it is not a head if it is reachable from some other head
[23:09] so yes, we have to consider ghosts in heads(), but we need to know within heads() which are ghosts and which aren't, to know whether to say 'head' or 'indeterminate'
[23:10] lifeless: agreed.
[23:10] we can avoid ending up with false heads if our api is allowed to signal that it could not determine the answer.
Which any ghost-handling api must be able to do
[23:11] this is ok; it just means that we're not finished with heads()
[23:11] Which is pretty reasonable, since I implemented heads by accident :-)
[23:13] mtaylor: these key lengths you're proposing don't seem to have any basis in specs.
[23:14] for example, bug ids are URLs, and I don't think there's any length limit on URLs.
[23:15] Even IE can do 2048-char URLs.
[23:16] Apache can limit URLs. I once got a 400 Bad Request error or something for a 13,000-char URL.
[23:18] squid limits to 4K at the moment; we're working on raising that
[23:18] Peng_: Sure. But does the spec say anything?
[23:18] it's not limited
[23:18] abentley: yes... that's one of the things I wanted to talk to you about... is there a place I can find what those lengths should look look?
[23:18] look like, rather?
[23:19] Well, URLs have no length limit.
[23:19] I don't think email addresses do either, but I haven't checked as carefully.
[23:19] ah, I suppose if I read the next line in your response. :)
[23:20] ok. well then, it might be a good idea to remove the length limit and introduce an artificial primary key there, then
[23:20] New bug: #182715 in bzr "Graph.heads() gives false heads sometimes" [Undecided,New] https://launchpad.net/bugs/182715
[23:20] as using a blob as a primary key is usually a really inefficient thing
[23:21] mtaylor: That's why I've been slow responding to your first patch.
[23:21] oh, did I send one already?
[23:21] * mtaylor has a dead brain this week
[23:21] I'll see if I can rework that
[23:21] mtaylor: if the column is indexed, it shouldn't matter, should it?
[23:21] no, it shouldn't
[23:22] it's just that normally secondary indexes will contain a copy of the primary key, so they know how to point to the right row
[23:22] so with a really big primary key, you wind up copying that data too much
[23:23] So with variable-length keys, do you still pay a penalty when the keys are small?
[23:25] not necessarily - that's sort of db-engine specific
[23:26] but in this case, elixir is making that column a blob
[23:26] and most dbs have really bizarre implementations of blobs at best
[23:26] and in this case, mysql doesn't allow a blob as a primary key unless you specify a prefix length on the primary key
[23:27] branch-format: Unable to handle http code 401: Unauthorised
[23:27] Tsk.
[23:28] Well, I'm in favour of using artificial keys - it makes for cleaner URLs. But I can't say I'm looking forward to adding migration code.
[23:29] hehe
[23:29] no
[23:29] I was just thinking that myself
[23:30] I'll see if I can get it working and then send you a new patch
[23:31] Also, renaming dev.cfg is okay, but I would prefer that it continue to specify SQLite.
[23:34] Probably I should roll config.ini into the TG config system.
[23:34] That way we can stick all the local configuration in there,
[23:34] including databases and web paths.
[23:36] Also, I don't think "tree" is the right term for branches.
[23:38] Also, erroring if the target_tree is not set is not going to work on bzr.dev - *lots* of people don't set target_tree correctly.
[23:38] Also, please don't comment out code. We have a VCS, so just delete it.
[23:39] We can always get it back if we want.
[23:42] Also, you've got '/buggy' in a lot of places where it shouldn't be.
[23:43] So I think the idea is an improvement, but there's some work needed on the implementation.
[23:44] * abentley heads out for groceries
[23:58] abentley: ok, that patch is sent in
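The surrogate-key change discussed above (replacing a long URL or blob primary key with an artificial integer id, and indexing the URL separately) can be sketched with SQLite from the Python standard library. The table and column names here are made up for illustration and are not BundleBuggy's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Instead of using the (unbounded-length) bug URL as the primary key,
# give each row a small artificial integer key and put a unique
# secondary index on the URL. Secondary indexes then carry copies of
# the compact integer key rather than the full URL text.
conn.execute("""
    CREATE TABLE bugs (
        id INTEGER PRIMARY KEY,    -- artificial key
        url TEXT NOT NULL UNIQUE,  -- natural identifier, any length
        status TEXT
    )
""")
conn.execute(
    "INSERT INTO bugs (url, status) VALUES (?, ?)",
    ("https://launchpad.net/bugs/182715", "New"),
)
# Lookups by URL still work via the unique index; foreign keys and
# join tables reference the cheap integer id instead.
row = conn.execute(
    "SELECT id, status FROM bugs WHERE url = ?",
    ("https://launchpad.net/bugs/182715",),
).fetchone()
```

This also sidesteps the MySQL restriction mentioned above, since the primary key is a plain integer rather than a blob needing a prefix length.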