[00:18] <maxb> How do I run the bzr-fastimport testsuite?
[00:18] <lifeless> bzr selftest fastimport
[00:19] <maxb> Ah, there's no such thing as running in the source tree? It only runs as part of a bzr installation?
[00:21] <lifeless> maxb: ./bzr selftest fastimport
[00:21] <lifeless> :)
[00:21] <maxb> mhm. Not if you're in a branch of fastimport itself
[00:22] <lifeless> maxb: so, the plugin has to be loaded to get at the tests
[00:22] <lifeless> you can symlink it into a bzr source tree, if you want to test with an uninstalled bzr
[00:22] <lifeless> (or just run that bzr)
[00:22] <maxb> I guess hacking on it in ~/.bazaar/plugins/fastimport isn't a silly idea, then?
[00:22] <lifeless> but the plugin needs to be discoverable to '$bzr plugins'
[00:22] <lifeless> maxb: I have a symlink from ~/.bazaar/plugins/PLUGIN to ~/source/bzr/plugins/PLUGIN/working
[00:23] <lifeless> maxb: where working is a lightweight checkout of whatever branch of the PLUGIN i'm currently hacking on/using
[00:57] <igc> morning
[01:00] <lifeless> igc: Just!
[01:01] <igc> lifeless: it's still 11am here :-)
[01:35] <sven_oostenbrink> So I use bzr now for one project. I have one central repo that I use as trunk; three developers have branches from that one. Now, first I want to tag a certain version, but that's where bzr eludes me a little. I cannot have a trunk, since that's just another branch, but I can have tags?
[01:35] <sven_oostenbrink> how does this work exactly?
[01:37] <lifeless> you can tag a commit
[01:37] <lifeless> or you can add a new branch
[01:37] <lifeless> both work
[01:37] <sven_oostenbrink> then another question.. I have another project, B, which is based on my current project, A.. so I think, I make a branch of A, called B, and all developers can just sub-branch B to work locally; for them, B will be the trunk of project B... whenever there are changes in B that are interesting for A, I can push them from B to A, and if there are ever interesting changes for B in A, I can just push these changes from A to B.. Is this reasoning
[01:37] <sven_oostenbrink>  correct, or am I just plain silly here?
[01:38] <sven_oostenbrink> lifeless: tag a commit? you mean tag a revision?
[01:38] <lifeless> yes
[01:38] <lifeless> your second questions reasoning is fine
[01:38] <sven_oostenbrink> lifeless: so I could also just do something like bzr push trunk file://tag0.1.0 or something?
[01:39] <sven_oostenbrink> and that then, would be a tag?
[01:39] <sven_oostenbrink> lifeless: But what's the difference? When I bzr tag, the tags are like a revision or something?
[01:39] <lifeless> tags are metadata within a branch
[01:39] <lifeless> branches are branches
[02:02]  * spiv -> food
[02:31]  * igc lunch
[02:48] <poolie> igc, want to talk when you're back?
[03:16] <mneptok>  /m poolie i is sexy hots American booblady. you makes to chats me now!?
[03:17]  * poolie puts on his wizard hat and cloak
[03:18] <ferringb> -.^
[03:51] <igc> poolie: sure
[03:54] <poolie> 1m
[04:21] <MTecknology> How can I make something like lp: ?
[04:24] <spiv> MTecknology: write a plugin that implements a "directory service"
[04:25] <MTecknology> that sounds kinda sucky to do...
[04:25] <spiv> MTecknology: see bzrlib.directory_service, and of course the implementation of bzrlib.plugins.launchpad
[04:26] <spiv> MTecknology: it's not so hard, IIRC
[04:26] <MTecknology> spiv: I just don't want to need to type "bzr+ssh://bzr.server.com/bzr/" all the time
[04:27] <MTecknology> I want to run my own LP server but I don't have the hardware for it...
[04:28] <lifeless> MTecknology: install bzr-bookmarks, create an alias
[04:29] <spiv> MTecknology: echo export MYREPO=bzr+ssh://bzr.server.com/bzr/ >> ~/.bashrc  ;)
[04:31] <lifeless> MTecknology: or write a small directory service plugin as spiv says
[04:31] <lifeless> its really quite easy
[04:31] <MTecknology> thanks :)
[04:32] <MTecknology> I like the export idea except for needing $MYREPO
[04:33] <MTecknology> I'll try out the directory service idea
[04:33] <spiv> Look at bzrlib/tests/test_directory_service.py perhaps, it should show you how to make a pretty simple service, because that's what the unit tests do :)
[04:38] <MTecknology> spiv: aside from no understanding of python and that looking like a lot of code; I also have no idea how to implement it :P
[04:42] <spiv> MTecknology: it's just the lines 26 to 30 of that file, plus the call to bzrlib.directory_service.directories.register
[04:42] <spiv> (that file == http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/annotate/head%3A/bzrlib/tests/test_directory_service.py)
[04:44] <spiv> Put that plus the imports into ~/.bazaar/plugins/my_directory_service.py and you're practically done.
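Putting spiv's pieces together, a minimal sketch of the directory-service pattern. The `mybzr:` prefix is made up, the server URL is the one MTecknology quoted, and a stand-in registry is used here instead of the real `bzrlib.directory_service.directories`, so this only illustrates the shape of the plugin:

```python
# Sketch of a directory service: expand "mybzr:NAME" to a full URL.
# In a real plugin you would call
# bzrlib.directory_service.directories.register instead of using
# this stand-in dict.

class MyDirectoryService(object):
    """Expand "mybzr:branch" to a bzr+ssh URL (hypothetical prefix)."""

    def look_up(self, name, url):
        # `name` is the part after the prefix, `url` the original URL.
        return "bzr+ssh://bzr.server.com/bzr/" + name

directories = {}  # stand-in for bzrlib.directory_service.directories

def register(prefix, factory):
    directories[prefix] = factory

register("mybzr:", MyDirectoryService)

# Resolution, roughly as bzrlib does it:
service = directories["mybzr:"]()
print(service.look_up("myproject", "mybzr:myproject"))
# prints bzr+ssh://bzr.server.com/bzr/myproject
```

With the real registry, `bzr log mybzr:myproject` would then resolve through the plugin.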
[04:45] <MTecknology> spiv: thanks
[05:01] <igc> bbl
[07:44] <vila> hi all
[07:45] <vila> bah, babune failure of the day: doctests are not properly isolated, setting 'debug_flags = static_tuple' in bazaar.conf makes one of them fail....
[07:45] <vila> Failed doctest test for bzrlib.branchbuilder.BranchBuilder
[07:46] <lifeless> vila: doctests; enough said
[07:47] <fullermd> I "corrected" a whole bunch of doctests in that DWIM thing.  I think most of them were totally useless.
[07:47] <fullermd> (I mean, even before, they were pretty lightweight, but after the change....   utterly pointless)
[07:55]  * vila whistles
[07:56] <vila> Hmm, not sure if whistling carries the same meaning as in French, which in this case would be roughly: "Hey, I don't like doctest much myself, but I said nothing...." :-D
[07:57] <vila> I mean, I agree with many intents of doctest, but the implementation is too brittle and this is one more example...
[08:07] <vila> On the other hand, a single slave failed here, the one that is not properly isolated itself (it uses *my* bazaar.conf), and that's what I will fix first :)
[08:18] <fullermd> So, you're saying doctests aren't the problem, vila is?   :p
[08:23] <vila> hehe. no I'm saying there is one problem and at least three fixes, all needed :-D
[09:27] <fullermd> vila: How goes timekeeping?
[09:28] <vila> fullermd: weirdly
[09:28] <vila> freebsd7 did a reset this morning, I dunno if it's related (in fact I didn't know that could happen without me noticing...)
[09:29] <vila> both 7 and 8 say ~10:00AM when it's 10:29...
[09:29] <vila> so time is still drifting, less, but still
[09:30] <fullermd> ntpd give up?
[09:30] <vila> I was about to search a bit for the ntpd update period, but maybe you already know that ? :-D
[09:30] <vila> hmm, let me look at the logs
[09:30] <fullermd> Well, the poll period steps up and down in powers of two.  The default range starts at 16s and goes up to 1024 as it learns the system.
[09:31] <fullermd> But there's no way it should be 29 minutes off unless it gave up on sync'ing.
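The power-of-two scheme fullermd describes can be spelled out: poll intervals are 2^minpoll .. 2^maxpoll seconds, and the 16s..1024s range corresponds to exponents 4 through 10 (a quick illustration, not ntpd code):

```python
# ntpd poll intervals are powers of two seconds; the range quoted
# above (16 s .. 1024 s) corresponds to exponents 4 through 10.
intervals = [2 ** p for p in range(4, 11)]
print(intervals)  # prints [16, 32, 64, 128, 256, 512, 1024]
```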
[09:31] <vila> nothing eye-catching for 7
[09:32] <fullermd> What does the peer list tell you?
[09:32] <fullermd> That should give you the poll frequency, as well as the offset.
[09:33] <vila> ntpq -p doesn't specify units, let me see...
[09:33] <vila> poll is 64 for both 7 and 8
[09:33] <fullermd> Poll frequencies are in seconds, delay and offset in ms.
[09:34] <vila> delays are in 6 to 12 range, offsets are in the 300.000-500.000 range
[09:34] <vila> err, no 5.000.000 for 7 !
[09:34] <vila> gha and 3.000.000 for 8 sry
[09:34] <fullermd> 5 million ms = 5 thousand seconds = 83 minutes.
[09:35] <fullermd> That's a "screwit" sign.
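fullermd's conversion, spelled out (`ntpq -p` offsets are in milliseconds, as he notes above):

```python
# Convert the ntpq offset quoted above from milliseconds to minutes.
offset_ms = 5000000
offset_s = offset_ms / 1000.0   # 5000 seconds
offset_min = offset_s / 60.0    # ~83.3 minutes of clock error
print(round(offset_min, 1))     # prints 83.3
```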
[09:35] <vila> grrr, I can't read these damn things ! 364481. for 8 that's 300.00 the trailing period tricked me
[09:35] <vila> in the 8 logs I have: Nov  4 08:20:10 freebsd8 ntpd[766]: time reset +20146.933261 s
[09:35] <vila> Nov  4 08:20:10 freebsd8 ntpd[766]: kernel time sync status change 6001
[09:35] <vila> Nov  4 08:21:00 freebsd8 ntpd[766]: kernel time sync status change 2001
[09:36] <vila> so, hmm, looks like it doesn't update frequently enough
[09:36] <fullermd> The 6001/2001 messages are flipping between FLL and PLL mode (which ntpd flips on at the 512/1024s polling interval boundary)
[09:37] <fullermd> It sounds like the drift is just more than ntpd will accept.
[09:38] <fullermd> Ah, I'm off, 64s is the default minimal poll frequency.
[09:38] <fullermd> So, yeah, it just never gets past trying to initial sync up.
[09:40] <vila> And what is the option to get around that ?
[09:40] <fullermd> Isn't one, really.  I kinda expected it, from the size of the offsets you were talking about yesterday.
[09:41] <vila> Or should I just call ntpdate in a cron ? :-/
[09:41] <fullermd> Really, the solution is to figure out WTF it's so far off from reality, which is probably something to do with VB.
[09:41] <vila> wow,  4 Nov 10:10:51 ntpdate[1007]: no servers can be used, exiting
[09:41] <fullermd> cron'ing a ntpdate every stupidly often (like 5 minutes) is an ugly brute-force solution, but may be your best shot.
[09:41] <vila> Does that clearly say ntpd gave up ?
[09:42] <fullermd> That's from ntpdate, not ntpd.
[09:42] <vila> ntpdate 0.fr.pool.ntp.org
[09:42] <vila>  4 Nov 10:11:46 ntpdate[1025]: the NTP socket is in use, exiting
[09:43] <fullermd> You can't run ntpdate while you're running ntpd.
[09:43] <bob2> don't cross the streams.
[09:43] <fullermd> Mmm, marshmallows...
[09:45] <vila> Oh, so turn off ntpd and cron ntpdate is what you suggest ?
[09:45] <vila> Otherwise, yes, VB is clearly the culprit, the log file is filled with messages about that; the bug is known and being worked on
[09:45] <fullermd> Well, I'd suggest tracking and solving the problem.  But that takes time and energy, so it's probably not leading your todo list.
[09:46] <vila> some incomplete fix in 3.0.10 made things worse for me, that's why I'm searching for a work-around from *inside* the slaves
[09:46] <fullermd> Brute force should always work, if enough is used   :p
[09:47] <vila> VB is still working on the bug but I dunno when that will come; it could be 3.0.12 or 4.0.... nothing in the coming *month* at least (if past releases are an indication of the schedule)
[09:47] <fullermd> Whether anything will freak out when the clock starts jumping around all the time...  well, we'll find out when we find out.
[09:48] <vila> 23:58:57.284 TM: Giving up catch-up attempt at a 60 001 205 850 ns lag; new total: 20 580 677 930 084 ns
[09:48] <vila> is the kind of message in the VB logs :)
[09:48] <vila> TM probably refers to Time Manag{er,ing,ement}
[09:49] <fullermd> I get creeped out when my time is more than 10ms off, and have it in my long-term plans to get a good disciplined OCXO in the lab with a clean PPS input to the system to cut an order of magnitude or two off that error   :p
[09:49] <fullermd> Obviously, I'm slightly more anal than average...
[09:50]  * vila notes to self: mail forward the VB logs to fullermd every morning
[09:50] <fullermd> (Soekris 4501's are notoriously excellent for that, since you can use the GPIO pins to get the PPS signal into the OS with much less latency and jitter than using a RS232 port like you do on 'normal' systems)
[09:54] <vila> You do that just for fun or you have a valid reason for it ? Hard real-time constraints or ?
[09:55] <fullermd> Well...   I guess you could say "fun".
[09:55] <fullermd> More like "I'm an obsessive person and I can't stand when things are WRONG".  So it's not necessarily _fun_ per se...
[09:56] <vila> But how do you even know you're off by 10ms ?
[09:57] <vila> and off comparing to what ?
[09:57] <vila> may be you're right and *they* are off !
[09:58] <fullermd> Well, now we get into definitions.  But I run a diverse set of peers (many of which I run, and in turn deal with a different diverse set of peers), which track back to CDMA and GPS sources.  All of which, through offsets etc, should track back to TAI.
[09:58] <fullermd> If you don't get all uppity about the difference between GPS time and TAI and UT0 and UT1 and UTC, you probably don't care   :p
[09:58] <vila> wow, wow, not so fast, CDMA and GPS, I roughly understand, TAI is ?
[09:59] <fullermd> Temps Atomique International
[09:59] <vila> Haaaa, now you're talking French :)
[09:59] <fullermd> Hey, I didn't make the acronym   :p
[10:00] <fullermd> (and don't even get me STARTED on the insanity of defining POSIX time_t...)
[10:00] <vila> Right, ok, so you're not *that* obsessive, your reality requires you to be precise :)
[10:00] <fullermd> Well, any time scale is arbitrary, since you have to pick a point in time to declare the second boundary (and that's even before you consider how relativity destroys the concept anyway)
[10:01] <fullermd> But any point is as good as any other for most purposes, as long as it's widely agreed on.  TAI fills that role.
[10:02] <fullermd> Why yes, I DO hang out on mailing lists with people who take one of a matched pair of cesium beam clocks on vacation into the mountains with them to demonstrate relativity; why do you ask?   :p
[10:02] <vila> damn it restarting ntpd fixed time on 7 but not on 8 :-/
[10:02] <fullermd> It probably didn't fix it, it just did its initial step.  Now it'll start slipping into the future.
[10:03] <vila> and it knows about not doing its initial step too often ? Even across reboots ?
[10:03] <vila> Overall, I feel better knowing you *had* to know the ntpd details better than me :-D
[10:04] <fullermd> It only does the initial step as a first sync when it starts.
[10:04] <vila> So starting it should provide me a "correct" time; it doesn't here
[10:04] <fullermd> Past that it slews, or makes tiny steps.  If it drifts faster than some arbitrary amount, ntpd won't touch it (and it sounds like you're in that position)
[10:05] <vila> /etc/rc.d/ntpd restart
[10:05] <vila> is what I did (repeatedly even)
[10:06] <fullermd> Well, don't do it so much; it doesn't happen for some time after it starts.
[10:06] <fullermd> 's one of the reasons ntpd's "sync once and exit" mode is never going to be a replacement for ntpdate; there's a need for a quick, synchronous syncup, even if it lacks quite the precision of a longer baseline.
[10:07] <vila> so, given I'm using a VM with no precision, ntpd is the wrong tool, right ?
[10:08] <bialix> heya all, vila, fullermd
[10:08] <vila> morning bialix
[10:09]  * bialix likes what fullermd wrote about the process of including patches and railroads
[10:09] <bialix> bonjour vila
[10:09] <fullermd> Yeah, without serious hacking (or possibly much more config than I know how offhand to do on it), ntpd won't handle the situation you're in.
[10:10] <bialix> poolie1: I think I deserved your pun about "care about windows". vila is teaching me to shut up
[10:10] <fullermd> It's possible dropping the minpoll interval will give it enough oomph to make things happen.  But that would be real unfriendly unless you're pointing at your own local ntp servers.  And it's still questionable that it would work.
[10:11] <vila> But isn't using ntpdate unfriendly too then ?
[10:11] <fullermd> Well, dropping the minpoll means every 16 seconds.
[10:12] <vila> yeah and I can survive with an ntpdate every hour I think
[10:12] <fullermd> ntpdate every 5 minutes isn't near that bad.  It's only 3x as often as the standard ntpd ceiling of 1024s polling interval.
[10:12] <fullermd> I'd try something like 10 or 15 minutes.  That's rarely enough-ish, and will leave you with smaller steps each time, which is more likely to slide under the radar of running apps.
[10:12] <fullermd> (of course, it'd be as good or better to have a local ntp server, but...)
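A crontab entry along the lines fullermd suggests might look like this (hypothetical; the pool server is the one vila used earlier, and `-b` forces ntpdate to step the clock rather than slew):

```
# step the clock every 10 minutes via cron
*/10 * * * * /usr/sbin/ntpdate -b 0.fr.pool.ntp.org >/dev/null 2>&1
```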
[10:14] <vila> hey, the only app running is bzr selftest and time should always go forward and tests shouldn't be time sensitive, what could go wrong (famous last words :-)
[10:15] <vila> local ntp server may be an option though....
[10:16] <fullermd> Probably wouldn't change anything materially, but is nicer to the net at large, and may let you experiment with more drastic usage of it.
[10:16] <fullermd> I consider a local ntp server to be like a local DNS server; every subnet should have one.
[10:17] <vila> hmm, looks like I *already* have one ntp server... at least ntpdate is happy to use it
[10:22] <vila> fullermd: so, I'll try with 'server saw.local iburst minpoll 4 maxpoll 9' in ntp.conf and see
[10:23] <vila> fullermd: sounds correct ?
[10:24] <Peng> Ooh, time nerding! /me reads backlog.
[10:25] <vila> fullermd: well, 'minpoll 1' seems accepted by a restart even if the doc says 4 is the minimum, I'll try that
[10:30] <fullermd> I'd do burst as well as iburst.  But yah, that sounds like a plan.
[10:30] <fullermd> (burst won't help if it can't keep up without it, it'll just do a bit better if it DOES keep up)
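Combining vila's line with fullermd's burst suggestion gives an ntp.conf entry like this (a sketch; `saw.local` is vila's local server name from the line above):

```
# hypothetical ntp.conf entry for a fast-drifting VM
server saw.local burst iburst minpoll 4 maxpoll 9
```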
[10:30]  * fullermd is a reliable source for nerdery   :p
[11:15] <Mez> hmm, bzr gannotate is telling me a file isn't versioned, but it is.
[11:17] <lifeless> vila: /419776
[11:17] <lifeless> vila: bug 419776
[11:18] <Mez> also, how do I get the nautilus extensions working?
[11:19] <vila> lifeless: I'm waiting for your subunit protocol change to land. Why the ping ?
[11:19] <lifeless> vila: oh thats right
[11:20] <lifeless> don't wait on subunit
[11:20] <vila> should I un-assign myself and revert to confirmed ?
[11:21] <lifeless> what's the remaining defect
[11:22] <lifeless> just 'AutoTimingDecorator degrades ExpectedFailure' ?
[11:22] <lifeless> vila: if you subclass AutoTimingDecorator
[11:22] <vila> not sure exactly about the *cause* but the effect is to produce failures for expected failures IIRC
[11:22] <lifeless> and add an addExpectedFailure method, as per subunit's addFailure method, does it work?
[11:24] <lifeless> ah here it is
[11:24] <lifeless> timing -> hooked -> decorator
[11:25] <lifeless> return self._call_maybe("addExpectedFailure", self.decorated.addFailure, test, err)
[11:25] <lifeless> subunit 0.0.2 doesn't know how to serialise xfail
[11:25] <lifeless> though it can parse it
[11:25] <lifeless> [long story]
[11:26] <lifeless> in 0.0.2 xfail -> failure in the case of a missing method
[11:26] <lifeless> in 0.0.3 xfail -> success
[11:26] <lifeless> so
[11:26] <lifeless> class BzrAutoTimingDecorator
[11:26] <lifeless> def addExpectedFailure(self, test, err):
[11:26] <lifeless>     self._before_event()
[11:27] <lifeless> return self._call_maybe("addExpectedFailure", self._degrade_skip, test, err)
[11:27] <vila> i.e. introducing a new class in bzr until your subunit protocol lands ?
[11:28] <lifeless> should fix it, for subunit 0.0.2, at the cost of not supporting the details API, which is not in 0.0.3 yet anyhow, and which degrades well anyhow.
[11:28] <lifeless> we can cater for that later
[11:34] <vila> lifeless: http://paste.ubuntu.com/309377/ is what you meant ?
[11:35] <vila> it seems to work here and I can understand why, at least
[11:35] <lifeless> the lambda x:x was fine
[11:36] <vila> as a base class ?
[11:36] <lifeless> as the final value
[11:36] <lifeless> do the definition in the try:
[11:37] <lifeless> what you've written won't work if subunit is absent
[11:37] <lifeless> you'd need __new__ not __init__
[11:37] <lifeless> which is uglier than a lambda
[11:39] <vila> oh yes, of course
[11:44] <vila> lifeless: shouldn't HookedTestResultDecorator define _before_event ?
[11:47] <lifeless> no
[11:47] <lifeless> its an ABC
[11:47] <lifeless> thats why you subclass subunit.test_results.Autotiming...
[11:47] <vila> I can see that, but why not being explicit and make it raise NotImplementedError is what I meant
[11:48] <lifeless> 'meh'
[11:51] <lifeless> small contract
[11:51] <lifeless> reasonable docs
[11:51] <lifeless> not like e.g. repository in size
[11:52] <vila> ok, I just thought the idiom was agreed upon, no worries
[11:53] <lifeless> well, for bzr yes :)
[11:53] <lifeless> I'm more laissez-faire in other situations
[11:53] <vila> hehe, yeah, having learned python on the bzr code base doesn't help me there :)
[11:54] <lifeless> so as I say, small contract
[11:54] <lifeless> if it was a bigger API and this was an optional thing that you might not encounter for a bit, it would be another matter
[11:54] <lifeless> but you can't use a subclass at all until it's implemented so it's pretty obvious
[11:55] <vila> yeah, on the other hand, it's only two lines and you can then forget about it whatever happens later :-D
[11:55] <lifeless> 2 lines and a test
[11:55] <vila> but no problem, I was just wondering if it was deliberate or not, I'm happy either way
[11:56] <lifeless> deliberate decision
[11:56] <vila> you meant: a test and 2 lines :-P
[12:00] <lifeless> oh wow
[12:01] <lifeless> http://lab.arc90.com/experiments/readability
[12:01] <lifeless> anyhow, I'm glad you've got it working
[12:01] <lifeless> toss it up for review if you like
[12:01] <lifeless> gnight
[12:01] <vila> lifeless: waiting for lp to update my pushed branch before clicking propose for merging :)
[12:02] <vila> done
[12:02] <vila> https://code.edge.launchpad.net/~vila/bzr/419776-subunit/+merge/14413
[12:20] <johnf1> how do you remove a branch from inside a shared repo? do you just rm it and there is some sort of garbage collection later?
[12:22] <lifeless> rm, gc will be implemented some day
[12:24] <johnf1> lifeless: Am I correct in thinking that a few weeks ago you were suggesting we should upload the beta releases into debian?
[12:26] <lifeless> yes
[12:27] <lifeless> talk to me tomorrow though, ESLEEP
[12:27] <johnf1> ok
[12:27] <johnf1> will upload 2.0.2 for now
[14:22] <jam> morning all
[14:28] <abentley> jam: Good morning.
[14:28] <jam> hey aaron, haven't said Hi to you in a while
[14:28] <jam> Getting ready to fly to AU?
[14:29] <abentley> True.  Not really getting ready yet.  I'll start that tomorrow.
[14:29] <abentley> But yeah, we'll be saying hi in person really soon.
[18:50]  * mtaylor getting oops
[18:50] <mtaylor> AssertionError: second push failed to complete a fetch set([('inventories', 'mordred@inaugust.com-20091103224655-9f5d1vgb4alj11vi'), ('inventories', 'mordred@inaugust.com-20091103213736-nu2owzobdp73t6h8'), ('inventories', 'mordred@inaugust.com-20091103222806-e2ta52lskoqvfhrz'), ('inventories', 'mordred@inaugust.com-20091103214523-pxwopxc4tuasn4ul'), ('inventories', 'mordred@inaugust.com-20091102171023-wm0v26gdzzutxxzg'), ('inventories', 'mordred@inaugust.com-20091103212344-3esgm3d5rnhvia2t'), ('inventories', 'mordred@inaugust.com-20091027224205-4yirn3hveb1zma5k'), ('inventories', 'mordred@inaugust.com-20091103220723-4xhq2frowehfes2n'), ('inventories', 'mordred@inaugust.com-20091027181920-mcf0d5zyf9tptghn')]).
[18:51] <mtaylor> of course - this was during a pull operation, so "second push" seems like an odd message there
[18:51] <mtaylor> bzr 2.0.1 on python 2.4.4 (Solaris-2.10-sun4v-sparc-32bit)
[18:51] <mtaylor> fwiw
[18:54] <mtaylor> ooh. looks like it's fixed in trunk
[18:54]  * mtaylor withdraws all above statements
[20:19] <jam> mtaylor: should be fixed in 2.0.2 as well
[20:20] <mtaylor> jam: awesome
[20:35] <tsmith> I have an SVN project.  I want to commit locally to bzr and then, when I'm ready, pull changes from and commit changes back to the svn.  I want to create several bzr branches against this svn repo. What should i do when initially checking out the svn?
[20:48] <corp186> what is the bzr export --filters option, and how do I use it?
[21:01] <luks> tsmith: nothing special compared to when working with a native bzr project
[21:01] <luks> in this case, it sounds like you want bzr branch/pull/push
[21:15] <lifeless> jelmer: https://edge.launchpad.net/~cjwatson/bzr-cia/server-side/+merge/14057 claims the merge hasn't landed
[21:17] <Tak> tsmith: more specifically than `bzr branch svn://foo/bar` ?
[21:17] <tsmith> Tak, some guy in here was saying how he did bzr init or something
[21:17] <tsmith> bzr init then bzr branch or something
[21:18] <Tak> probably `bzr init --rich-root-pack`
[21:19] <Tak> or similar
[21:21] <jelmer> lifeless: yeah, that's actually correct
[21:21] <jelmer> lifeless:  I should note that in the merge request
[21:21] <jelmer> lifeless: nevermind, seems I already did
[21:22] <lifeless> jelmer: you did?
[21:23] <lifeless> status is still 'needs review'
[21:30] <lifeless> jelmer: ^
[21:37] <jelmer> lifeless: reviewed
[21:45] <lifeless> jelmer: you might like to change the review status too
[21:46] <lifeless> jelmer: I don't have access to do that
[22:04] <bialix> hello jam
[22:05] <bialix> is there any problems with 2.0.2 installer for windows?
[22:06] <jelmer> lifeless: done
[22:06] <jelmer> lifeless: I'm still a bit uncomfortable marking something "reject" if I really mean "resubmit"
[22:07] <lifeless> jelmer: isn't there 'in development'
[22:07] <lifeless> jelmer: work in progress
[22:07] <lifeless> jelmer: is what you should change it to
[22:14] <eydaimon> I've got a conflict where a couple of files got deleted. I want to resolve the conflict so that the files do not get deleted
[22:14] <eydaimon> how can I do that?
[22:15] <spiv> eydaimon: "bzr revert filename1 filename2"
[22:15] <eydaimon> spiv: what about when I merge next time? will those files not get deleted again?
[22:15] <spiv> Correct.
[22:16] <eydaimon> thanks
[22:24] <jam> bialix: I'm waiting on igc to build the chm documentation
[22:24] <jam> otherwise, I don't know of any problems
[22:24] <bialix> ok, thanks
[22:25] <bialix> there are not so many changes since 2.0.1
[22:30] <eydaimon> http://pastie.org/684067  why aren't they merging here? (I was going to try out what spiv said just to verify)
[22:34] <eydaimon> oh, coz it's a checkout, not a clone
[22:44] <poolie> hello
[22:50] <jam> poolie: hi
[22:51] <jam> Hey, it looks like I'm not able to save a snapshot of an EC2 instance to the S3 store.
[22:51] <jam> I'm guessing I need a separate set of S3 credentials to do that.
[22:51] <jam> Which means that when I 'stopped' a running instance, I couldn't start it again
[22:51] <jam> and I had to "launch a new one"
[22:51] <jam> which meant all my state was definitively lost
[22:52] <jam> anyway, I'm still hoping to get EC2 working, but I'm sort of at a stalled point.
[22:52] <jam> Also, I'm waiting on igc to get the new documentation built, so I can build the win32 installers, so I can announce the new release.
[22:53] <poolie> jam, hm, i don't see what other credentials you would need beyond what i had
[22:53] <jam> poolie: If I try to save to "ec2.sourcefrog.net" it says I don't have access
[22:53] <poolie> oh ok
[22:53] <jam> I have an account id but not an email address, etc
[22:54] <poolie> but you could create a new bucket and save it there?
[22:54] <jam> poolie probably, if I signed up for S3 and gave Amazon a CC
[22:54] <jam> I've been avoiding that so far
[22:54] <poolie> no, i mean within this single account
[22:54] <poolie> you can't share this stuff across accounts
[22:54] <jam> So I guess... I don't really know
[22:55] <poolie> ok
[22:55] <jam> I've looked at the S3 stuff, to try and figure out how to create a new bucket
[22:55] <poolie> so
[22:55] <jam> but that looks like I need a *separate* set of credentials
[22:55] <poolie> thanks for letting me know that you were blocked
[22:55] <poolie> i was actually wondering this morning what to do next
[22:55] <poolie> i've felt a bit in the weeds as far as responding to a bunch of little things
[22:56] <jam> Trying to use S3 Firefox Organizer, "Create Bucket" fails with "unable to connect to server"
[22:57] <jam> that may be the plugin issue
[22:57] <jam> or may be any of a bunch of things
[22:57] <jam> I wanted to test if writing stuff to D, then taking a new snapshot
[22:57] <jam> really nukes the D directory on restore
[22:57] <jam> so we know whether we *have* to mount things via the Elastic Block Store
[22:58] <jam> (it is recommended, but the Postgres image has everything installed on C so you are 'up-and-running' easily)
[22:58] <jam> But they have VS 2008 Express, not Standard or Professional
[22:58] <jam> anyway, I switched to other things
[22:59] <jam> so if you want to spend time, #1 is getting the docs built so I can get the release announced
[22:59] <jam> #2 is playing with EC2
[22:59] <poolie> ok
[22:59] <poolie> and vs express is not enough, right?
[22:59] <jam> btw, I worked on Windows glob expansion, and it took about 2 hours to implement.
[22:59] <poolie> oh nice one
[22:59] <jam> poolie: express doesn't have atl which is needed for tbzr dll
[22:59] <poolie> i worked a bit on ec2 last night, and then i think got interrupted
[23:00] <jam> until naoki teaches us the secret
[23:00] <poolie> k :)
[23:00] <poolie> you could ping him?
[23:00] <jam> anyway, I need to get going
[23:00] <poolie> or we could build that separately, if it's rarely used?
[23:00] <jam> I'll try to get ahold of him
[23:00] <poolie> ok, thanks for letting me know
[23:00] <poolie> i'll do the docs first, then poke at ec2 and tell you how it goes, then have a think about what to do next
[23:01] <jam> poolie: I think we are pretty close, and the apis look sufficient that we could probably leave the instance stopped most of the time
[23:01] <jam> and just cron to spin it up, and then run the buildbot tests
[23:02] <jam> and spin up a second instance when I need to build installers manually, etc.
[23:02] <poolie> mm
[23:02] <poolie> it looks like about 10m to spin up
[23:02] <jam> yeah, fairly long
[23:02] <jam> but not something you would notice in the 3-hours it takes to get stuff done
[23:02] <poolie> but it's feasible as "ok today i'm going to do installers" then it'll be ready when you get to it
[23:02] <poolie> right
[23:02] <poolie> i think that ties in too to which disk things are installed on
[23:02] <poolie> anyhow, the basic idea was that we would all share one account
[23:02] <jam> I could spin it up at "gone gold" time, and then spin it down once we actually released
[23:03] <jam> yeah, so far the one-account seems to be working for me
[23:03] <jam> just S3 and buckets are being weird
[23:03] <poolie> you creating your own wouldn't help because there's basically no way to share things between accounts except by making them totally pubilc
[23:03] <jam> ok, really gone now :)
[23:07] <poolie> k
[23:07] <poolie> how about you today, spiv?
[23:09] <CoffeeIV> does "bzr revert -r NNN filename" do anything in the repository, or just in my local copy ?
[23:09] <lifeless> local
[23:10] <CoffeeIV> ok, thanks
[23:12] <spiv> poolie: working on full writeup of those stories, and breaking them down into smaller pieces/goals.
[23:13] <poolie> sounds good
[23:13] <poolie> can you please put some agenda items onto the sprint page too?
[23:13] <spiv> Ok, will take a look
[23:34] <eydaimon> would is a lost word
[23:57] <maxb> Ouchie... 8 minutes just for bzr-fastimport to update 33 branches *after* importing all the revisions