[00:10] <poolie> kkrev, url?
[00:11] <poolie> do you mean on the server?
[00:16] <kkrev> right on the server.  I'm assuming shared repositories don't do anything with symlinks to save space or anything like that.
[00:28] <poolie> kkrev, almost all the data will be in the repository directory
[00:28] <poolie> with just one copy
[00:28] <poolie>  the per-branch directories will be very small
[01:27] <maxb> jelmer: Concerning the removal of the bzrlib.util.configobj.configobj from the Debian package - I think we should import the system configobj to that name, as a compatibility measure. It's not unreasonable for people to consider it part of bzrlib's api
[02:04] <poolie> maxb, that sounds reasonable
[03:15] <GHellings> Can anyone tell of success stories with bzr-git bridge? I can only get it to work over raw git: protocols, and I'm not sure if it's me or bzr-git
[03:16] <mwhudson> GHellings: what other protocols have you tried?
[03:17] <GHellings> I have tried http: and git+ssh:
[03:17] <mwhudson> i know there have been bugs accessing branches from github over http
[03:17] <GHellings> I've been accessing gitorious over http unsuccessfully - same repo over git: is fine.
[03:17] <GHellings> Just tried a different, private repo over git+ssh: and it failed with the same error
[03:21] <GHellings> I'd go in to take a look, but I'd be a bull in a China shop, as I know nothing about programming bzr or its plugins
[03:22] <poolie> which error?
[03:22] <GHellings> bzr: ERROR: exceptions.AttributeError: 'RemoteGitRepository' object has no attribute '_git'
[03:23] <poolie> looks like https://bugs.launchpad.net/bugs/706990 which is claimed to be fixed in trunk
[03:24] <GHellings> Ah, I see the fix is since my comment on there. It had also claimed fixed in 0.5.2 of the release on a different report.  I'll try trunk and see how it works out.
[03:25] <poolie> great
[03:36] <GHellings> poolie, looks like it's working now, at least for git-import. Thanks. :)
[04:23] <poolie> GHellings, great
[05:52] <wgrant> spiv: Hi.
[05:53] <lifeless> that thunks down to revisions.get_known_graph_ancestry
[05:54] <wgrant> Yeah.
[05:54] <lifeless> first thing we could do is try without the pyrex.
[05:55] <james_w> are we looking at .texts .chk_bytes or what here?
[05:55] <james_w> i.e. where would we expect to find a revision?
[05:55] <james_w> .revisions I assume?
[05:55] <wgrant> .revisions, isn't it?
[05:55] <wgrant> Yeah.
[05:56] <spiv> wgrant: good afternoon
[05:57] <wgrant> spiv: Bug #715000 is giving Launchpad some issues.
[05:57] <wgrant> In particular, 3/4 of the new wheezy branches crash half way through scanning.
[05:57] <lifeless> by some, he means brick-face.
[05:57] <wgrant> Brick-face?
[05:58] <lifeless> 'lots of pain'
[05:58] <wgrant> Ah, yes.
[05:59] <wgrant> lifeless: still happens without pyrex, FWIW.
[06:00] <lifeless> ok, thats good to know
[06:00] <spiv> wgrant: hmm
[06:00] <wgrant> http://paste.ubuntu.com/564235/
[06:00] <lifeless> groupcompress.py line 1315 certainly looks like it should DTRT
[06:02] <james_w> lifeless, that's a comment here, which line is it?
[06:03] <lifeless>         parent_map, missing_keys = self._index.find_ancestry(keys)
[06:03] <lifeless>         for fallback in self._fallback_vfs:
[06:03] <lifeless>             if not missing_keys:
[06:03] <james_w> right, it would do the right thing
[06:04] <james_w> if self._fallback_vfs was correctly populated
[06:04] <lifeless> so thats a possibility; is it not populated properly?
[06:04] <james_w> my current hunch is that it adds natty as a fallback for maverick, and maverick as a fallback for lucid, when it should add both to lucid
[06:04] <lifeless> vfs is 'versioned files' here.
[06:05] <lifeless> so lucid -> maverick -> natty
[06:05] <lifeless> ?
[06:05] <lifeless> james_w: that chain should work
[06:05] <james_w> right, that's the stacking chain
[06:05] <lifeless> it might not be optimal, but it should work.
[06:05] <lifeless> oh
[06:05] <lifeless> I think I see a bug
[06:05] <james_w> however, <lucid repo>.revisions._fallback_vfs is [<maverick repo>.revisions]
[06:06] <lifeless> james_w: print the _fallback_vfs in maverick_repo.revisions
[06:08] <james_w> [<bzrlib.groupcompress.GroupCompressVersionedFiles object at 0x8f111cc>]
[06:08] <james_w> presumably natty_repo.revisions
[06:08] <lifeless> yes
[06:09] <james_w> and the code you pasted doesn't recurse
[06:09] <lifeless> yes
[06:09] <james_w> so does GroupCompressVersionedFiles._index.find_ancestry recurse()
[06:09] <james_w> ?
[06:09] <lifeless> no
[06:09] <james_w> then there is the bug
[06:10] <lifeless> it shouldn't recurse either
[06:10] <lifeless> the indices are below that layer
[06:10] <james_w> I would have assumed that most of this code didn't recurse at all call-sites, and just collected all the fallbacks when opening in to _fallback_vfs
[06:10] <lifeless> we need to be able to figure out whats *in this repo* vs *accessible via the api*
[06:10] <lifeless> each repo is mutable
[06:10] <lifeless> so open is the wrong time
[06:11] <lifeless> flattening at execution is probably appropriate
[06:11] <lifeless> that or recursing via the public api
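The fix options lifeless describes can be sketched in miniature. This is a toy model (hypothetical class and method names, not bzrlib's real `GroupCompressVersionedFiles`) of the bug and the "flatten at execution" fix: each repo's `find_ancestry` only consults its own index, so the lookup loop must walk the whole fallback chain, not just the direct fallbacks.

```python
# Toy sketch of the stacking-fallback bug discussed above.
# Names are illustrative, not bzrlib's real API.

class VersionedFiles:
    def __init__(self, keys, fallbacks=()):
        self._keys = set(keys)            # revisions stored in this repo only
        self._fallback_vfs = list(fallbacks)

    def find_ancestry(self, keys):
        """Look only at this repo's own index (no recursion, by design)."""
        found = set(keys) & self._keys
        return found, set(keys) - found

    def _flattened_fallbacks(self):
        """Flatten the fallback chain at execution time, depth-first."""
        for vfs in self._fallback_vfs:
            yield vfs
            for deeper in vfs._flattened_fallbacks():
                yield deeper

    def get_known_graph_ancestry(self, keys):
        found, missing = self.find_ancestry(keys)
        # Iterating only self._fallback_vfs here reproduces the bug;
        # flattening makes a two-deep stack (lucid -> maverick -> natty) work.
        for fallback in self._flattened_fallbacks():
            if not missing:
                break
            extra, missing = fallback.find_ancestry(missing)
            found |= extra
        return found, missing

natty = VersionedFiles(['n1'])
maverick = VersionedFiles(['m1'], [natty])
lucid = VersionedFiles(['l1'], [maverick])
# 'n1' lives two fallbacks down; with flattening it is still found:
found, missing = lucid.get_known_graph_ancestry(['l1', 'n1'])
```

With only the direct fallbacks (the unflattened loop), `'n1'` would stay in `missing`, which is the failure mode hit by the three-deep lucid/maverick/natty stack.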
[06:13] <james_w> ok, I'll dump what I know in the bug and someone can take on the fix
[06:13] <wgrant> What should we do for now?
[06:13] <wgrant> Stop the importer?
[06:16] <james_w> probably a good idea
[06:16] <james_w> no need to add more and more branches to clean up later
[06:17] <spiv> So this is breaking the package importer?
[06:17] <wgrant> The importer works fine.
[06:17] <wgrant> But LP breaks when it tries to scan the branch.
[06:18] <spiv> Oh, ok.
[06:18] <james_w> it may break the importer at some later time though
[06:18] <wgrant> Yeah.
[06:18] <james_w> so we should upgrade bzr there too when we have a fix
[06:19] <spiv> *nod*
[06:19] <spiv> I'll upgrade the bug's importance
[06:19] <wgrant> LP will probably cope for now, but we may need to knock out the problematic branches if it can't keep up.
[06:19] <wgrant> Since they stay in the scan queue.
[06:23] <wgrant> james_w: Have you turned off the importer?
[06:27] <lifeless> spiv: loom wants to use your newer api (re tag fetching)
[06:27] <lifeless> spiv: it has many tips to snarf
[06:30] <james_w> wgrant, nope
[06:30] <spiv> lifeless: yeah.
[06:35] <poolie> hi all
[06:45] <poolie> spiv, are you, are you working on that? if you want to talk i'm here
[06:45] <poolie> GaryvdM, hi
[06:45] <GaryvdM> Hi poolie
[06:46] <GaryvdM> poolie: I'm going to look at mysql cases today
[06:46] <poolie> hey
[06:46] <poolie> that's a good idea
[07:01] <poolie> GaryvdM, hi, did you see my pm?
[07:01] <poolie> spiv, hi, are you still here?
[07:01] <spiv> poolie: not yet, just got tests passing for the 'reconfigure --unstacked should copy tagged revs' though
[07:03] <spiv> I'm trying to resist the temptation to start on new things while I still have other things half-done.
[07:05] <poolie> good for you
[07:05] <poolie> this one probably is critical so i'll look at it
[07:06] <lifeless> wallyworld_: ^ btw - you'll want to hand out an rc stamp for including the fix to this in the rollout
[07:11] <wallyworld_> lifeless: you saying that the rollout will require a new version of bzr?
[07:11] <poolie> i'm surprised we haven't seen complaints about similar things from users, or failures in lp, before
[07:11] <poolie> perhaps most people just branch and that doesn't hit it
[07:11] <lifeless> poolie: we may have but not commonly enough to diagnose
[07:12] <lifeless> for > 1 year the code team has been focused on applications not the core
[07:12] <lifeless> (e.g. recipes)
[07:12] <lifeless> wallyworld_: we have an Incident (I assume wgrant is writing it up)
[07:12] <lifeless> wallyworld_: the solution requires a bzr code change; this has been latent forever.
[07:13] <spiv> poolie: I think more than one level of stacking hasn't been very common in the past
[07:14] <wallyworld_> lifeless: ok. we sure are going down to the wire though with last minute changes. right atm the anticipated rollout rev is in buildbot on its way to qastaging
[07:15] <lifeless> wallyworld_: if we bless an earlier rev, thats fine.
[07:15] <wallyworld_> is this change something we could do with a no downtime rollout post 11.02 release?
[07:15]  * spiv logs off for the day.
[07:15] <lifeless> wallyworld_: we can, but what we release should be all the revs that are qa'd at the time - like if you bless N, and 5 more revs are qad by deploy time, we should deploy N+5
[07:16] <lifeless> wallyworld_: the point of the rc stamp is to let this into pqm as soon as this patch is ready rather than after you reopen pqm.
[07:17] <wgrant> It's not a DB change, so it doesn't really affect anything.
[07:17] <lifeless> exactly
[07:17] <wgrant> If it's in stable by the time we need to deploy, we can deploy it.
[07:17] <lifeless> and it will get landed via ec2test :)
[07:17] <wgrant> If not, we deploy a few minutes later :)
[07:18] <wallyworld_> lifeless: ok. it means though that pqm will need to stay in rc mode for a bit longer until this latest change is qaed
[07:18] <lifeless> wallyworld_: why?
[07:18] <lifeless> wallyworld_: it only needs to stay in rc mode until the db rev is deployable
[07:19] <wallyworld_> i thought pqm stayed in rc mode until the rev to rollout was chosen?
[07:19] <lifeless> wallyworld_: I think the process hasn't been fully updated; the critical section is getting a db-change rev thats deployable (which may require more revs than that)
[07:19] <wgrant> Once we are OK to release we are OK to open.
[07:20] <wgrant> If we can add more stuff onto the release, all the better.
[07:20] <wallyworld_> since if there's a qa issue and people have landed subsequent revs, then that may be an issue?
[07:20] <lifeless> the hysteria around what rev to deploy during the downtime was back when we did a months worth of qa at once
[07:20] <wgrant> Since if we don't, I will just request a nodowntime deploy like an hour after the release :)
[07:20] <lifeless> wallyworld_: but *once* we have the db rev ok to deploy, thats not going to be changed by more landings is it ?
[07:22] <wallyworld_> no. but the db rev is merged into devel and it's the devel rev that is qa'ed for rollout, no?
[07:22] <lifeless> yes thats correct
[07:22] <lifeless> and consistent with what I'm saying
[07:23]  * lifeless tries again
[07:23] <lifeless> so qa is a pipeline
[07:23] <lifeless> and the db deploy is a fixed event
[07:23] <lifeless> to decrease the chance of foreign revisions interfering with fixes to the db revision
[07:23] <lifeless> once we merge db-* to devel
[07:23] <lifeless> we enter a critical section
[07:24] <lifeless> that critical section is relaxed once the devel deployment report shows that the db-* merge is deployable
[07:24] <lifeless> which might be the merge itself
[07:24] <lifeless> or require the merge plus a couple of fixup revisions
[07:24] <lifeless> at that point, if we open pqm, there is *nothing* a new landing can do to break the point we had qaed up to.
[07:25] <lifeless> which is why we now open pqm as soon as we have a candidate we *can* deploy.
[07:25] <lifeless> When we *go* to deploy, the pipeline may have advanced.
[07:26] <lifeless> we should deploy the highest deployable rev (again, what the report says is ok)
[07:26] <lifeless> the interaction with this bzr patch is that its sufficiently important we'd be willing to let it land during that critical section
[07:26] <lifeless> [not that the patch is ready]
[07:26] <lifeless> however being willing to land it during that section does not imply that the section should be extended
[07:29] <wallyworld_> i guess that's my point - if a new landing breaks devel and we need to land a test fix due to a qa issue with a prior rev, then it's messy, and better that merges to devel are frozen until after the rollout rev is chosen. i guess i've always worked that way in the past - freeze all new work while the rollout rev is qa'ed and only land test fixes
[07:29] <wallyworld_> but that was then and this is now
[07:29] <lifeless> so what I'm saying is still consistent with that
[07:30] <lifeless> once you have *a* rollout rev, you are safe.
[07:30] <lifeless> *and*
[07:30] <lifeless> you can choose a new rollout rev at T-60.
[07:30] <lifeless> based on the deploy report.
[07:30] <wallyworld_> that last bit (choosing at T-60) i think is where we differ
[07:31] <lifeless> ok; why?
[07:31] <wallyworld_> i guess i've been more conservative in the past - lock down the rollout rev and qa  and consider all change bad and needing justification so any changes not related to test fix for defined features in the release are  verboten
[07:32] <wallyworld_> until after rollout
[07:34] <wallyworld_> what happens if we choose rev 12338 (the one going to qastaging now) and someone lands a bad change? and we find at the last minute a fix is needed for rev 12336? in my scenario, we would just land the fix to 12336 without any hassle of dealing with a later rev checked in on top
[07:34] <lifeless> so
[07:34] <lifeless> the critical section is intended to let us *actually qa* 12336 - which has worked this month, for instance ;)
[07:34] <lifeless> but taking your scenario
[07:35] <lifeless> we'd just rollback 12339 at the same time as landing whatever fix (assuming 12339 also qa failed)
[07:36] <wallyworld_> but doing rollback is an extra step and any extra steps potentially introduce defects/issues
[07:36] <lifeless> the way we reduce the risk of the db deployment is deploying all we can before it - like wgrants nodowntime yesterdayish
[07:36] <lifeless> wallyworld_: in theory yes, in practice its trivial to do a merge -r -1..-2 .
[07:36] <lifeless> wallyworld_: and be confident its rolling it back to a previously known ok state.
[07:36] <wallyworld_> an extra step that would not be required if landings were verboten until the release rev were fully qaed
[07:36] <lifeless> thats true
[07:36] <wallyworld_> but i see your point tooo
[07:37] <lifeless> but blocking landings is hugely expensive
[07:38] <lifeless> we land 6 branches a day on avg
[07:38] <wallyworld_> yes, so one weighs up the pros and cons and perhaps your way is the better approach in terms of team velocity while still managing the risk
[07:38] <lifeless> 10 per [mon-fri]
[07:38] <lifeless> wallyworld_: yes, its a risk assessment problem.
[07:39] <lifeless> wallyworld_: I generally prefer a risk recovery option rather than risk elimination
[07:39] <wallyworld_> so now that today we finally have the db-devel rev sorted (pending a final qa) we can open up pqm again once rev 12338 lands on qastaging
[07:39] <lifeless> wallyworld_: particularly with low-occurence risks
[07:39] <lifeless> *once* 12338 is qa'd
[07:39] <lifeless> but yes.
[07:39] <wallyworld_> yes, i see your point. i think it comes down a lot to organisational culture
[07:39] <lifeless> its been a bad cycle
[07:40] <wallyworld_> yeah, i've been thrown in the snake pit as rm for sure :-)
[07:40] <poolie> good on you for doing it though
[07:41] <wallyworld_> it's been a learning experience and i'm glad i did it actually
[07:42] <wallyworld_> part of the issue was last week where i'm told issues with pqm/ec2/buildbot delayed a db-devel landing, which pushed everything back
[07:42] <wallyworld_> so we really need to deal with those root cause issues
[07:42] <wallyworld_> if they indeed are real issues
[07:44] <lifeless> yes, they did and the pqm swallowing errors bit should be fixed now
[07:44] <vila> hi all
[07:48] <poolie> hi vila
[07:48] <vila> poolie: see my pm ?
[08:01] <poolie> vila, https://wiki.ubuntu.com/DistributedDevelopment/UnderTheHood/Importer/Operational etc
[08:11] <poolie> vila, i'll need to go at some point so if you get a chance please persist on 715000
[08:13] <vila> poolie: don't hold your breath, I'm already working on critical stuff and if I had yet another interrupted task on top of my stack...
[08:13] <vila> s/had/add/
[08:13] <vila> damn tyops
[08:14] <poolie> ah fair enough
[08:14] <poolie> is https://devpad.canonical.com/~mbp/kanban/vila-kanban.html accurate?
[08:14] <poolie> at least in the middle columns?
[08:17] <vila> the in progress yes, the needs review is... not important
[08:18] <vila> and yes, my current branch should address both the in progress ones (and more as I discover them ;-/)
[08:30] <nil1> Hi
[08:30] <nil1> My TortoiseBzr is Polish(?), how do I change that back again?
[08:32] <poolie> nil1, not sure, perhaps setting LANG in your environment then running it?
[08:32] <poolie> hi sidnei
[08:33] <sidnei> hi poolie!
[08:34] <nil1> poolie: no that didn't help
[08:35] <nil1> I opened the command line, "set LANG=C", killed explorer.exe and restarted it on the command line
[08:35] <nil1> err, LANG=de
[08:35] <inada-n> Please use bzr-2.3-setup.exe
[08:36]  * nil1 nods
[08:37] <inada-n> https://bugs.launchpad.net/tortoisebzr/+bug/663258
[08:37] <inada-n> It is fixed but has not shipped with bzr-2.2.x
[08:41] <nil1> okay, thanks
[08:41]  * nil1 reboots
[08:42] <poolie> yay, i might have fixed 715000
[08:43]  * vila hugs poolie
[08:44] <poolie> in some ways this is a dangerous fix
[08:44] <poolie> so it might be better to do it in 2.3 and move lp to that
[08:45] <poolie> or 2.4 even
[08:45] <poolie> not very dangerous, but not utterly obvious that it won't change something else
[08:47] <vila> there are other good reasons to deploy 2.4 on lp (controversial) but I think it's the right path for the coming months
[08:56] <vila> poolie: you're targeting 2.2 ?1/!!
[09:01] <poolie> vila, i did it based off 2.2 to keep our options open
[09:01] <vila> haaa, pfew, you scared me
[09:01] <vila> yeah, good point
[09:02] <vila> and good thing the bzr/2.2 branch was already opened for bugfixes
[09:02] <poolie> is it even proposed yet?
[09:02] <poolie> or did you just click through?
[09:03] <vila> hehe, I saw the bug email and fired 'bzr-review lp:~mbp....' ;)
[09:06] <poolie> ok
[09:07] <poolie> if you can review it that would be nice
[09:07] <poolie> maybe john should too
[09:08] <poolie> ok, i think i should stop
[09:08] <poolie> i'm running the whole test suite to see if this affected anything else
[09:14] <vila> poolie: yup, you (I?) should probably do that after merging up, spiv modified fetch and there may be fallout there
[09:22] <poolie> i think it's important we check for similar problems
[09:22] <poolie> and ideally match them up to bugs
[09:22] <poolie> but i'm running out of steam
[09:25] <spiv> poolie: quick review sent
[12:09] <vila> jelmer: about review: :D and thanks
[12:09] <jelmer> you're welcome, glad I can return the favor(s) :)
[12:28] <vila> jelmer: hmm, what needs to be done to get 2.3.0 on natty ?
[12:39] <jelmer> vila: There's already a sync request in progress
[12:39]  * jelmer looks for the bug #
[12:40] <vila> jelmer: good enough !
[12:40] <vila> thanks !
[12:40] <jelmer> bug 713038
[12:41] <vila> rhaaa ! I read the related mails ! I knew ! What else will I forget next ? :D
[12:41] <jelmer> (:
[12:48] <maxb> jelmer: oh, hi. I think we should put back bzrlib.util.configobj and .elementtree in the debian package, as imports of the system versions. Does that sound sensible to you, or did you explicitly choose not to?
[12:55] <jelmer> maxb: hmm, that's a good question
[12:56] <jelmer> maxb: on the one hand I think we should just have everything do the right thing and patch it to use configobj
[12:56] <jelmer> on the other hand, pragmatism is useful too
[12:57] <maxb> I am mainly working on the basis that it's something included in bzrlib.*, so it's kind-of part of the bzrlib api
[12:57] <jelmer> maxb: the configobj from the system is a different version and has slightly different semantics, etc
[12:57] <maxb> The situation is further confused by bzrlib's in-tree configobj being slightly modified
[12:57] <spiv> jelmer: just add "sys.modules['bzrlib.util.configobj'] = __import__('configobj')" to bzrlib/__init__.py ;)
[12:58] <maxb> spiv: Don't be evil :-)
[12:58]  * spiv wanders off to bed before he can do any real damage
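The `sys.modules` trick spiv proposes (tongue in cheek) really does work for top-level names. Here is an illustration using the stdlib `json` module under a made-up alias; `compat_configobj` is hypothetical, and note that the real proposal targeted a dotted name (`bzrlib.util.configobj`), which only resolves if the parent packages are importable — hence the suggestion to put the one-liner in `bzrlib/__init__.py`.

```python
# Illustration of aliasing one module under another name via sys.modules.
# 'compat_configobj' is a made-up compatibility name; stdlib json stands
# in for the system configobj.
import sys

sys.modules['compat_configobj'] = __import__('json')

# Old code importing the compat name keeps working unchanged,
# because the import system checks sys.modules first:
import compat_configobj
data = compat_configobj.loads('{"answer": 42}')
```

The appeal (and the evil, as maxb notes) is that callers never learn they've been redirected; any semantic differences between the bundled and system versions surface only at runtime.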
[13:01] <maxb> Direct users of bzrlib.util.configobj that I can find are fastimport gtk loggerhead and qbzr
[13:02] <vila> maxb: evil plugins
[13:02] <maxb> who says that bzrlib.util.* is off limits to plugins?
[13:03] <maxb> bzr-explorer conditionally uses bzrlib.util.elementtree on Python 2.4
[13:03] <vila> nobody ;) But I'm curious about what they need there and why bzrlib.config is not enough
[13:04] <jelmer> maxb: some are already patched to support using the system configobj
[13:04] <maxb> indeed they are.
[13:05] <maxb> So, I guess the real question that needs answering is "Is bzrlib.util.* part of the API?"
[13:05] <vila> maxb: yes, except for configobj :-D
[13:05] <vila> maxb: seriously: yes
[13:06] <maxb> bzr mv bzrlib/util bzrlib/__compat__ :-)
[13:06] <vila> but we should file a bug about forbidding it or provide alternatives I think (not sure and not sure I should think about that with a flu :-/)
[13:07] <vila> at least the gtk case I'm looking at shouldn't use configobj
[13:11]  * vila files bug #715143 and resumes
[13:20] <vila> ditto for fastimport and qbzr. It's less clear for loggerhead but probably true too
[13:21] <GaryvdM> I agree. qbzr should not be using it. I used it because at the time, I did not know better.
[13:23] <vila> yeah, I'm not throwing stones, bzrlib.config is still too hard to reuse IMHO
[13:24] <vila> This was a gentle "evil plugins" :D
[14:41] <jelmer> vila: thanks for the review
[14:41] <vila> hehe, I'm PP !
[14:41] <jelmer> vila: get_all() is different from items() in that it returns just the format objects, it doesn't return tuples with keys and values
[14:41] <jelmer> did you mean values() rather than items() perhaps?
[14:41] <vila> yeah
[14:42] <vila> err, you're even more evil than I thougth then :)
[14:42] <vila> use items and throw the keys :)
[14:43] <vila> anyway, you got the point right ?
[14:43] <jelmer> vila: in that case there's a slight disconnect though, as the extra formats don't have keys
[14:43] <jelmer> so they would appear in values() but not in iteritems() or items()
[14:43] <vila> haaa
[14:44] <vila> hmm, key=None won't work either I suspect
[14:44] <jelmer> vila: yeah, as most callers won't care about the non-metadir formats
[14:44] <vila> I want a value that is less than None, guido ?
[14:44] <jelmer> :)
[14:45] <jelmer> vila: I think a separate method, different from values() makes sense here.. perhaps it just needs a better docstring?
[14:45] <vila> jelmer: Is it really necessary that they don't have key ?
[14:45] <vila> what is the key used for ? Can't that be an attribute instead ?
[14:45] <jelmer> vila: even if they had a key, we would try to use them in cases where we couldn't
[14:46] <vila> jelmer: most of the refactorings you've done in this area have been somehow to transform hard-coded references to formats into feature attributes on formats
[14:47] <vila> I think this is really nice and The Right Thing
[14:47] <vila> Can't the same pattern apply here ?
[14:48] <vila> Since you're introducing registries, it would be nice if this was the case and that all formats can be properly registered and only from there diverge in what features they provide
[14:48] <jelmer> vila: the interface here (having a format string) is already something these formats don't really have
[14:49] <jelmer> vila: so I'm reluctant to expose them in the registry everywhere when they can't really be used as such
[14:49] <vila> :-/
[14:49] <vila> Be bold !
[14:49] <jelmer> heh
[14:50] <vila> Nobody is using this registry so far no ? Well, not the new one
[14:50] <jelmer> vila: the class not, but the registry itself is used by lots of things
[14:52] <vila> dev values(with_obsolete_stuff_yes_I_know=False) ? :-/ guido ?
[14:52] <vila> bah
[14:52] <jelmer> vila: that works, but that's why I'd rather add a separate method (probably with a more clearer docstring than I had earlier)
[14:53] <jelmer> get_all = lambda self: self.values() + self.extra()
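The split being negotiated above can be sketched concretely. This is a minimal, hypothetical model (not bzrlib's actual `Registry`): formats with an on-disk format string are registered under that string as a key, "extra" formats with no format string are tracked separately, and a `get_all()` combines both (which is what the `bt.per_repository` test permutations want), while `values()` stays restricted to probeable formats.

```python
# Minimal sketch (hypothetical classes) of a format registry that keeps
# non-probeable "extra" formats out of values() but visible via get_all().

class FormatRegistry:
    def __init__(self):
        self._dict = {}      # format string -> format object
        self._extra = []     # formats without an on-disk format string

    def register(self, key, fmt):
        self._dict[key] = fmt

    def unregister(self, key):
        # named unregister rather than Registry-style remove(),
        # per the rename discussed above
        del self._dict[key]

    def register_extra(self, fmt):
        self._extra.append(fmt)

    def values(self):
        """Only formats that can be probed by format string."""
        return list(self._dict.values())

    def get_all(self):
        """All formats: registered ones plus the extra ones."""
        return self.values() + list(self._extra)

reg = FormatRegistry()
reg.register('Bazaar pack repository format 1\n', 'pack-0.92')
reg.register_extra('weave-repo')
# 'weave-repo' appears in get_all() but not in values()
```

This also shows vila's objection concretely: the extra formats would appear in `get_all()` but never in `values()` or `items()`, so callers that iterate key/value pairs silently miss them.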
[14:53] <vila> jelmer: yeah, will do, and clearly identify who is using it, sounds like a good case for a leading '_' too
[14:53] <jelmer> vila: there's only one user of that method at the moment - bt.per_repository
[14:55] <vila> good
[14:59] <jelmer> vila: so improve the get_all docstring and rename remove -> unregister?
[14:59] <vila> s/get_all/_get_whatever/
[15:00] <vila> and unregister yes,
[15:00] <jelmer> ah, ok
[15:00] <vila> another tech-debt item is why Registry implements remove instead of unregister...
[15:01] <vila> jelmer: these are all mostly suggestions, I'm open to discussion ;)
[15:01] <jelmer> vila: that's why I used remove() as well, since Registry is the base class
[15:03] <vila> yeah, I was *really* surprised to find it there, most of the registry uses I've seen have some sort of unregister and unregister really is more natural to my eyes
[15:22]  * jelmer is tempted to fix some registry tech debt
[15:26]  * vila pushes jelmer still hearing poolie saying the same this morning 
[15:37] <vila> jelmer: now I'm a bit lost... did I miss the extra-repo-formats mp earlier ? Or is it a new one ?
[15:37] <vila> meh already 55 mins old >-/
[15:37] <jelmer> vila: it's a new one, on top of the one we just discussed
[15:38] <jelmer> ah, argh.. it doesn't have the prerequisite bit set
[15:38] <vila> but it still uses remove, did you postpone that ?
[15:38] <jelmer> vila: no, this branch was submitted before our discussion
[15:38] <vila> haa... pfew
[15:38] <jelmer> https://code.launchpad.net/~jelmer/bzr/extra-repo-formats/+merge/48932
[15:40] <vila> (waiting for lp to refresh) it uses get_all  + _get_extra, how about _get_all (not sure _get_extras is really needed unless you already know better)
[15:41] <vila> jelmer: right, so my question still stands, you keep using remove or you plan to rename it later ? (There are arguments both ways...)
[15:44] <jelmer> vila: I plan to rename it later, along with remove in registry itself
[15:45] <vila> ok, good to go then
[15:50] <jelmer> thanks
[16:00] <Daiben> hello
[16:00] <jelmer> hi Daiben
[16:00] <Daiben> is there a tutorial for a bazaar server
[16:01] <Daiben> i'm messing with git for 3 days and it's still not working
[16:01] <Daiben> so i'm moving to bazaar
[16:01] <jelmer> Daiben: what kind of Bazaar server ? just plain anonymous readonly access over TCP/IP ?
[16:01] <Daiben> read write
[16:02] <Daiben> yes
[16:02] <Daiben> and authentication
[16:02] <jelmer> Daiben: any sftp/ssh enabled machine should be sufficient
[16:02] <Daiben> but how does it work
[16:03] <vila> Daiben: pretty well :)
[16:03] <jelmer> Daiben: see http://doc.bazaar.canonical.com/bzr.dev/en/user-guide/server.html
[16:03] <Daiben> thanks
[16:03] <vila> Daiben: as long as a user can connect with ssh and find a bzr in its path, there is nothing more to configure, the bzr client will send the right command
[16:04] <Daiben> oke:)
[16:04] <Daiben> i need to go home now i try tonight
[16:04] <Daiben> ty
[16:04] <vila> Daiben: you're welcome, come back for help if needed
[16:09] <vila> jam: if you're around and it leaves you some time, poolie asked for your help/feedback on bug #715000, there is already a mp with some leads
[16:42] <vila> jelmer: the bzr daily builds failing are unrelated to the sync for 2.3.0 right ?
[16:44] <jelmer> vila: that's a good question
[16:44]  * jelmer looks
[16:44] <jelmer> vila: btw, 2.3.0 just hit natty
[16:44] <vila> jelmer: fails with conflict in __init__ probably because the ~debian-bazaar/bzr.unstable needs to import trunk ?
[16:45] <vila> \o/
[16:45] <vila> err, wait, whatdyoumean just hit ? rmadison doesn't see it yet
[16:50] <jelmer> vila: jriddell just uploaded it, it's probably not been published yet
[16:51] <vila> ha, ok, thanks for the timely feedback ;D
[16:51] <jelmer> vila: ah, right.. I need to fix that conflict
[16:54] <maxb> vila, jelmer: the bzr dailys are failing because of a trivial conflict. We need to merge 2.3 to bzr.dev after any version bump on 2.3, to make the recipe merge cleanly
[16:55] <maxb> And we need to have a recipe-fixing-blitz
[16:55] <jelmer> yeah - either that, or we need to merge it into the unstable branch
[16:55] <jelmer> I've been looking at ways to make it easier to look after the recipes
[16:55] <jelmer> but there aren't any web service API calls atm
[16:56] <maxb> oh, and we need to sync-request qbzr
[16:57] <vila> maxb, jelmer: fwiw 2.3.1dev has been merged to bzr.dev today
[16:58] <jelmer> maxb: which version of qbzr?
[16:58] <maxb> ooh
[16:58] <maxb> 0.20
[16:59]  * maxb requests a recipe build
[16:59] <jelmer> that's already in natty
[16:59] <maxb> ah. people are too efficient for me to keep up :-)
[17:00] <vila> hehe, for once :D
[17:16] <maxb> oh, boo
[17:16] <maxb> now application of patch 01_test_locale fails the recipe build
[17:22] <mgz> what is said patch? sounds interesting.
[17:32] <vila> mgz: already merged in trunk ;)
[17:33] <vila> mgz: you'd know if you weren't running such an old bzr ;-D
[17:34] <vila> by the way, remind me why you're still on 2.1 ? What is missing in trunk to get you back there ?
[17:34] <mgz> ...it's a little funny.
[17:35] <mgz> testtools broke a bunch of things for me, so I'm using a rev before that was merged.
[17:35] <vila> yeah, I kind of remember that, but is there still fallouts ?
[17:35] <mgz> I think I have finally fixed them, and of course to test any branches I'm actually working on I need it anyway.
[17:35] <mgz> so... now it's just inertia.
[17:35] <vila> haaaaaaa
[17:36] <vila> good, no problem with inertia :D
[17:36] <vila> mgz: MOVE ! NOW !
[17:36] <vila> :D
[17:36] <mgz> I'm just going to do a quick review of the ~eurokang branch, I note you beat me to it.
[17:37] <vila> mgz: I was wondering if there was some low hanging fruits around it...
[17:40] <mgz> I'm not certain on the core change, it seems like bug 255687 isn't much of a step further.
[17:40] <mgz> Also various edge cases etc...
[17:40] <vila> yup, probably a good idea for a followup
[17:41] <mgz> but what it's actually doing looks correct, and it's a first contribution,
[17:41] <mgz> so it's right to not move the goalposts.
[17:41] <vila> mgz: you're reading above my shoulder right into my brain again ?
[17:41] <mgz> I do try. :)
[17:43] <vila> . o O (Being transparent or keep secrets... That is the question )
[17:45] <mgz> I'll branch it and fiddle in a minute, the test assertion strikes me as wrong.
[17:45] <mgz> ah, no, it's build_tree_contents so the file really is empty
[17:47] <vila> which is fine for a bb test
[17:48] <mgz> yep, but I don't think the test would actually fail without the fix
[17:48] <vila> hehe, it did, I tried :-D
[17:49] <mgz> ah, at the run_bzr state
[17:49] <mgz> *stage
[17:49] <mgz> forgot that checks the return code.
[20:28] <kkrev> I'm evaluating bazaar.  Our current tool (synergy) spits out individual file history graphically so you can see all the branch revisions and who made them: http://i.imgur.com/0uFoJ.png
[20:28] <kkrev> can bzr do this?
[20:28] <kkrev> it's not make or break, but it's something people are very used to.  the graph-ancestry tool seems to be just about how branches relate to each other?
[20:56] <vila> kkrev: try qannotate from the qbzr plugin, it displays the annotated lines in the main window but also the file graph in a smaller embedded one, while maintaining the link between a line and the revision that modified it
[20:57] <vila> kkrev: from the file graph view you can also request to annotate the file at that revision, or consult the diff for the file itself or the diff of the whole revision (which helps understanding why the file was modified)
[20:59] <vila> kkrev: qlog is also a real gui when it comes to navigating the revision ancestry
[21:08] <kkrev> thanks.  investigating.
[21:10] <kkrev> is qbzr preferred to the "Bazaar Explorer" shown in the "visual tour" linked to from the front page?
[21:15] <luks> kkrev: Bazaar Explorer is built on qbzr
[21:20] <vila> kkrev: both are bundled together in most of the distributions, which OS/version are you using ?
[21:25] <kkrev> People would be running this stuff on windows.
[21:30] <vila> kkrev: then try https://launchpad.net/bzr/+download there are all-in-one installers there
[21:31] <vila> kkrev: 2.3.0 is expected to be released in two days
[21:51] <kkrev> is there a good template or writeup about how to structure a repository with several past release branches and different patch series?  Do people put all the release branches under a parent repo?  I'm guessing I'd do branch per patch series and just tag the patch releases.
[21:51] <kkrev> sorry if i'm spamming.
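kkrev's layout question goes unanswered in the log; one common convention (a sketch, not the only way to do it; `URL` and the branch names are placeholders) is a shared repository holding one branch per release series, with tags for point releases:

```
$ bzr init-repo project        # shared repository: revision data stored once
$ cd project
$ bzr branch URL trunk         # main development branch
$ bzr branch trunk 2.2         # one branch per release / patch series
$ cd 2.2
$ bzr tag 2.2.1                # tag each patch release rather than branching
```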
[22:22] <poolie> jam
[22:22] <poolie> hello jam
[23:00] <spiv> Good morning
[23:01] <poolie> hi spiv
[23:03] <poolie> what's up for you today?
[23:06] <spiv> poolie: first looking over the test changes in lp:~spiv/bzr/reconfigure-unstacked-copies-tagged-revisions-401646 before I submit that fix for review, then see if I can figure out how to reproduce https://bugs.launchpad.net/udd/+bug/653307
[23:07] <poolie> nice
[23:07] <poolie> do you need anything from me?
[23:08] <jelmer> g'morning spiv, poolie
[23:08] <poolie> hi jelmer, how are you?
[23:09] <jelmer> I'm well, thanks :)
[23:09] <jelmer> How is your day?
[23:10] <spiv> poolie: I don't think so
[23:10] <poolie> good
[23:10] <poolie> i'm planning to look for more similar stacking bugs and then update the LEP
[23:11] <poolie> about build from branch into main
[23:11] <spiv> poolie: oh, and I'm happy to see https://code.launchpad.net/~eurokang/bzr/537442-annotate-removed/+merge/48906 — they mailed me off-list during my patch pilot week and I provided some guidance, and they got far enough to submit a reasonable patch :)
[23:11] <poolie> nice
[23:15] <poolie> there are many easy but annoying bugs
[23:15] <poolie> it'd be good to get more community people into them
[23:15] <poolie> jelmer, what's next for you?
[23:16] <jelmer> I'm trying to get my current WIP landed - lazy hooks, non-metadir repository/branch/workingtree tests (useful for the weave plugin and foreign formats) and fixing 80% of the bzr-hg imports
[23:17] <poolie> cool
[23:17] <jelmer> after that, I was hoping to have a look at the bzr whoami warning that the losas were asking about, if nobody else beats me to it :)
[23:17] <poolie> i was wondering if you felt bikeshedded-upon about lazy hooks?
[23:17] <poolie> we should do that
[23:17] <poolie> i was hoping to myself but it keeps getting interrupted
[23:17] <poolie> so if i haven't got code up, please take it
[23:18] <jelmer> I still have some other code to finish first too so perhaps we should see who gets to it first :)
[23:18] <jelmer> I'll make sure to communicate about it if it's me.
[23:19] <jelmer> poolie: I didn't really mind the lazy hooks discussion, there was other WIP I could work on, though I do agree it was bikeshedding
[23:19] <poolie> i don't really understand from john's comment why this is important
[23:19] <poolie> it seems like we'll register ~50 hooks; 100 bytes each is not really important
[23:21] <spiv> poolie: I don't follow why it matters either
[23:22] <jelmer> I don't see the performance problem either, but I also don't feel strongly either way about using tuples or strings.
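The lazy-hooks idea being debated (register a cheap name reference at startup, import the callback only when the hook fires) can be sketched as follows; `LazyHookPoint` and its methods are hypothetical illustrations, not bzrlib's actual API:

```python
import importlib

class LazyHookPoint:
    """Stores (module, attribute) tuples; resolves callables on first fire."""

    def __init__(self):
        self._lazy = []        # (module_name, attr_name) tuples, ~100 bytes each
        self._resolved = None  # cache of resolved callables

    def hook_lazy(self, module_name, attr_name):
        # Registration stores only two short strings; no import happens yet.
        self._lazy.append((module_name, attr_name))
        self._resolved = None

    def fire(self, *args, **kwargs):
        if self._resolved is None:
            # The import cost is paid here, on first use, not at startup.
            self._resolved = [
                getattr(importlib.import_module(mod), attr)
                for mod, attr in self._lazy
            ]
        return [callback(*args, **kwargs) for callback in self._resolved]

point = LazyHookPoint()
point.hook_lazy("os.path", "basename")
print(point.fire("/tmp/example.txt"))  # resolves os.path.basename lazily
```

Whether each entry is a tuple or a single dotted string is the bikeshed in question; either way the per-hook cost at startup is a few tiny objects, which matches poolie's point that ~50 registrations at ~100 bytes each is negligible.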
[23:24] <jelmer> I also noticed somebody was taking on the "bzr cp" bug..
[23:25] <poolie> i saw that too
[23:25] <poolie> i don't think we should block the lazy hooks thing on this
[23:25] <jelmer> Should we be proactive about contacting them to discuss their implementation? I don't want to be discouraging, but I also worry about the merge discussion being disappointing further down the road otherwise.
[23:25] <poolie> let's encourage them to discuss it
[23:26] <poolie> perhaps we can either do it in a plugin, or in a way where it won't paint us into a corner
[23:27] <jelmer> that makes sense
[23:30] <wallyworld_> poolie: do you expect there will be a bzr update for the 11.02 rollout as mooted in yesterday's irc chat?
[23:31] <poolie> re lazy hooks, there are other things i'd like to see before worrying about tuples vs strings
[23:31] <poolie> like getting rid of per-hook-group classes
[23:31] <poolie> (which will save memory and load time)
[23:31] <poolie> wallyworld_, yes, there's a patch in review now
[23:32] <poolie> wallyworld_, would it be easy to just run from a revision off the 2.3 branch?
[23:33] <jelmer> poolie: as a prerequisite for lazy hooks landing you mean? or as an independent fix?
[23:33] <wallyworld_> poolie: i guess that's up to you? afaik, we just check in the egg and update the versions.cfg so lp uses the new version
[23:35] <poolie> jelmer, independently, in the same vein
[23:35] <poolie> i mean, let's unblock this and do more on hooks
[23:36] <jelmer> poolie: wfm :)
[23:37] <poolie> and there are other things we can do that will probably do more to help memory usage and load time