[00:04] <exarkun> And another one, http://buildbot.twistedmatrix.com/builders/lucid32-py2.6-select/builds/1091/steps/bzr/logs/stdio
[00:26] <glyph> poolie: hi, exarkun found some bugs
[00:28] <lifeless> hi nigelb
[00:28] <nigelb> hey lifeless
[00:29] <nigelb> hmm, I thought this was #launchpad-dev for some reason.
[00:29] <lifeless> nigelb: nup :P
[00:30] <nigelb> lifeless: you're everywhere, which confused me :P
[02:38] <poolie> glyph, nigelb, lifeless, hi
[07:17] <nigelb> hello poolie
[07:22] <jam> hey poolie, I'm going to try to be back in time for the stand-up, but we're getting a late start at the house today. So don't wait on it for me.
[07:37] <poolie> hi poolie, hi nigelb
[07:37] <poolie> np
[07:38] <poolie> hi jam
[07:43] <poolie> lifeless, hi
[07:43] <poolie> re bug 819604
[07:45] <lifeless> hi
[07:47] <poolie> your last comment there is correct as far as it goes but it's not really where i thought we ended up
[07:48] <poolie> hm
[07:48] <poolie> maybe i'll just reply
[07:51] <poolie> actually my question is really: why are you unable to deploy to codehosting again?
[07:51] <poolie> how is this any worse than before?
[07:53] <poolie> lifeless: ^^?
[07:56] <lifeless> poolie: we've stopped scheduling monthly downtime
[07:57] <lifeless> poolie: and in terms of being worse than before, we had to kill the service every deploy, before.
[07:57] <poolie> so, if there is a disruption, it will disrupt them more frequently?
[07:57] <lifeless> if we want to be able to experiment effectively we need a graceful option, otherwise each test is downtime.
[07:58] <poolie> but, for other reasons, every deploy is downtime at the moment
[07:58] <poolie> eg the xmlrpc error that comes back during a deploy
[07:58] <poolie> unless that's recently fixed
[07:59] <lifeless> that ticket is in progress with hloeung at the moment
[07:59] <lifeless> should be fixed in a day or two I hope
[07:59] <poolie> that's great
[08:00] <poolie> what will happen to connections that are busy for a long time and not idle?
[08:00] <hloeung> yeah, it should be done within the next couple of days.
[08:00] <hloeung> poolie, I believe the hard timeout is 2mins
[08:01] <poolie> which hard timeout?
[08:03] <lifeless> poolie: appserver connections? or bzr connections?
[08:03] <poolie> bzr
[08:04] <lifeless> so, I would like to interrupt them at the next verb
[08:04] <lifeless> and if we have that, we could set a decent (say 1 hour) period after which we would interrupt anyway
[08:04] <poolie> ok, but what will happen today if we fix this particular bug?
[08:04] <lifeless> we'd reset the timeouts
[08:05] <lifeless> and have a fallback limit of 30minutes after which we would interrupt anyway
[08:06] <lifeless> the lower backend timeouts would cause a timeout after (N, to be determined - say 3) minutes of inactivity, which would likely be low enough to get the CI servers and PQM and the like to migrate over to the new service instance.
[08:07] <lifeless> we will need to address both bugs before we can deploy reliably with minimal interruption to users.
[08:10] <jelmer> vila, do you have a pointer to the testtools-related failure on babune?
[08:11] <poolie> lifeless: so, you'll leave the old back ends running for up to an hour?
[08:12] <vila> jelmer: http://babune.ladeuil.net:24842/view/%20High/job/selftest-freebsd/lastFailedBuild/#showFailuresLink
[08:12] <lifeless> poolie: something like that, once we've got all the friction sorted
[08:13] <lifeless> poolie: at the moment no timeout is high enough - we've still got a back end running from when we cut over to the haproxy setup last week, I believe.
[08:21] <jelmer> vila, poolie, jam: http://bugs.python.org/issue12544
[08:29] <gour> for which date is 2.4 scheduled?
[08:30] <vila> gour: we're talking about it right now, it depends on https://bugs.launchpad.net/bzr/+bugs?search=Search&field.importance=Critical&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
[08:30] <poolie> gour: sometime between this thursday and two weeks after that
[08:30] <vila> gour: mostly bug #771184 and bug #809048
[08:30] <gour> cool...it's on time
[08:32] <gour> what's with colo branches in 2.4? they are in core?
[08:34] <poolie> not in core; very good in bzr-colo
[08:36] <gour> thank you
[08:37] <jelmer> https://bugs.launchpad.net/bzr/+bug/806348
[08:37] <jelmer> jam, vila: ^
[08:37] <gour> thanks to jelmer & {hg,git} plugins, bzr is the only dvcs we use now :-)
[08:37] <vila> gour: \o/
[08:38] <gour> (after spending many years with darcs, played with mtn, fossil, hg...)
[08:49] <gour> i've asked web2py to upgrade repo format to 2a..in order to provide correct info, since when is bzr using 2a as default?
[08:49] <lifeless> since 1.19 or something - it's in NEWS
[08:50] <lifeless> poolie: did you want to talk about these codehosting bugs?
[08:50] <poolie> on the phone at the moment
[08:50] <poolie> which ones?
[08:50] <gour> lifeless: ta
[08:51] <fullermd> Think it only became default in 2.0.  It existed in 1.18...
[08:52] <gour> fullermd: ahh...that's what we need. thanks
[08:53] <fullermd> Sounds like 1.17 has support for it too...
[09:11] <jam> vila: http://babune.ladeuil.net:24842/job/selftest-chroot-oneiric/lastBuild/testReport/junit/bzrlib.tests.test_selftest/TestTestResult/test_verbose_report_known_failure/
[09:12] <jam> can't we just do a getattr() to handle different testtools/python versions?
[09:15] <jelmer> jam: instead of checking the version you mean?
[09:15] <jelmer> vila: is babune perhaps running some other pre-0.9.12 snapshot of testtools?
[09:15] <jam> jelmer: I think checking versions is going to be hard to get right
[09:16] <jelmer> jam: 0.9.12 introduces the fixes, 0.9.11 doesn't have them
[09:16] <jam> jelmer: that specific bug is in bzrlib because of attribute issues with python's runner, right?
[09:16] <jelmer> jam: ah, sorry
[09:16] <jelmer> jam: There are multiple issues here
[09:17] <jelmer> that one is unrelated to testtools, I've just put up a proposal for it that uses getattr
[09:17] <jelmer> jam: https://code.launchpad.net/~jelmer/bzr/2.4-809048-workaround/+merge/70834
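The getattr-based approach jam suggests (checking for a capability rather than comparing version strings) can be sketched as follows. The function and attribute names here are illustrative of the pattern, not the actual code in jelmer's merge proposal:

```python
# Feature detection instead of version checks: probe for the newer API
# and fall back when it is absent. Names are hypothetical examples.
def report_known_failure(result, test, err):
    # Newer test-result objects may expose addExpectedFailure;
    # getattr with a default lets us detect that at runtime.
    add_xfail = getattr(result, 'addExpectedFailure', None)
    if add_xfail is not None:
        add_xfail(test, err)
    else:
        # Older runners have no notion of expected failures,
        # so record the test as a plain success.
        result.addSuccess(test)
```

This avoids the problem jam raises at 09:15: version-number comparisons are hard to get right, especially against snapshot builds that sit between releases.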
[09:33] <lifeless> poolie: the two: gracefully handle tcp disconnects during idle periods of bzr+ssh connections, and secondly a way to tell the smart server 'disconnect after serving the next verb or now if you are already idle'
[09:36] <poolie> i agree they'd be useful
[09:37] <poolie> i see you mentioned them in bug 795025 but they seem a bit distinct from the main bug in the bzr-sftp front end, so i might file two separate bugs
[09:37] <poolie> i'm going to downgrade bug 819604 because it doesn't meet our definition of critical
[09:37] <poolie> ie it's not a release blocker
[09:37] <poolie> and create series tasks
[09:38] <poolie> it should be fixed though
[09:42] <lifeless> ok
[09:42] <lifeless> poolie: so bug 795025 is about not shutting down; the *cause* for that is active sessions having no way to interrupt
[09:43] <lifeless> poolie: I don't particularly care if you want to split out a bug asking for a way to interrupt, but it seems unnecessary
[09:44] <lifeless> poolie: these are critical from LP's side, in the sense of queue jumping bugs; and (as I think we've established) we cannot sensibly move forward on the forking-daemon until we have 795025 addressed in some fashion
[09:45] <poolie> so for 795025 the only change needed to fix it is for bzr to close the connections, either on request or after a 5m timeout?
[09:45] <poolie> (or, whatever idle timeout?)
[09:46] <lifeless> on request
[09:46] <poolie> i guess idleness can be done sufficiently well from haproxy, because the socket will just be idle
[09:46] <poolie> well, as long as the timeout is long enough we're sure it's not just that one side is busy
[09:47] <lifeless> a timeout in the server would be a nice bonus, but unless the work just fell out while looking at on-request, I would do it separately.
[09:47] <poolie> sure
[09:47] <poolie> i only ask because you suggested a timeout
[09:47] <poolie> so as far as deployments
[09:48] <lifeless> ah, I was describing the overall situation before, was probably unclear
[09:49] <lifeless> yes, I think haproxy is reasonable for timeouts at least in the medium term. We will have to fiddle the timeout counter in it to find a sweet spot eventually.
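The haproxy-side idle handling lifeless describes would amount to a timeout stanza like the following. These directives are standard haproxy configuration, but the values and section are purely illustrative, not the actual Launchpad codehosting config:

```
# Illustrative haproxy timeouts for the scenario discussed above.
# "timeout client"/"timeout server" close a connection after that much
# inactivity on the respective side; the value must be long enough that
# a busy-but-quiet bzr+ssh session isn't mistaken for an idle one.
defaults
    timeout client  30m
    timeout server  30m
```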
[09:50] <poolie> the issue is basically that if you upgrade/downgrade one of the back ends, you will need to choose between leaving the back end live indefinitely, or perhaps making long-lived clients error out
[09:50] <poolie> is it intolerable to make them error?
[09:50] <poolie> doesn't tarmac cope with this already?
[09:50] <lifeless> no, tarmac errors out, pqm errors out, bzr-eclipse errors, buildbot errors.
[09:51] <poolie> and doesn't recover?
[09:51] <poolie> how has it possibly coped with our monthly roll outs so far?
[09:51] <lifeless> well
[09:52] <lifeless> I'm sure it does when it opens a new transport
[09:52] <lifeless> users seem to find hour long test runs being aborted a bit of a pain
[09:54] <poolie> ok
[09:54] <poolie> i think i understand the situation
[09:56] <lifeless> perhaps we can touch base tomorrow; its getting late here :)
[09:56] <poolie> here too
[09:56] <poolie> i don't want to have a long discussion about whether to fix it or not
[09:56] <poolie> we should fix it
[09:56] <poolie> i just want to have clear meanings for bug metadata
[09:56] <poolie> as do you
[09:57] <lifeless> yup
[09:57] <poolie> unfortunately the meanings are a bit different :)
[09:57] <lifeless> that's a beer topic I think
[09:57] <gour> is there something one would miss for not using colo-branches when simply using shared repo and each branch in its own dir, iow. should i bother with colo-branches?
[09:59] <poolie> the main difference is that it's more biased towards having one working tree
[10:00] <poolie> you can also use colo-sync-from and colo-sync-to
[10:00] <gour> yeah, but i do not get if having one working tree is so important
[10:00] <poolie> it's good if the tree is large relative to your disk (or other resources) or if it takes a long time to compile
[10:00] <poolie> etc
[10:01] <poolie> or if your tools like staying in one location
[10:01] <poolie> or if your working methods do
[10:01] <gour> i believe that using feature-branches is then good-enough for me
[10:02] <gour> we'll stay with python for all development (qt & web2py), so compile is not really issue here (except some cython)
[10:14] <poolie> jam, could you maybe give francesco a bit more of a tip where to add the test or where to look for inspiration?
[12:36] <jam> vila, your recent mp: https://code.launchpad.net/~vila/bzr/migrate-config-options/+merge/70767
[12:37] <vila> jam: yes ?
[12:37] <jam> You mention taking the register key from the option object
[12:37] <jam> but I don't see that in the patch
[12:37] <vila> jam: different mp
[12:37] <jam> vila: read what you wrote on that one
[12:37] <jam> bullet point 2
[12:38] <jam> I guess you were saying these are things that aren't part of this but would be done next?
[12:38] <jam> I thought it was the list of things in this patch
[12:38] <vila> jam: read what I wrote in introduction: 'follow-up'
[12:38] <vila> jam: yup
[12:38] <jam> vila: I tend to skim and focus on bullet points
[12:38] <vila> no, the introduction was just one line
[12:39] <jam> putting things that aren't part of the patch into the MP... can be confusing
[12:39] <jam> anyway, looks good
[12:39] <jam> just confused
[13:10] <jam> vila: I have a big concern about your config stack work
[13:10] <jam> booleans don't come back as booleans...
[13:10] <jam> http://paste.ubuntu.com/661920/
[13:11] <jam> I'm looking at how Martin is using "repository.fdatasync" and I'm pretty sure it is wrong
[13:11] <jam> he is doing:         self.write_stream.close(
[13:11] <jam>             want_fdatasync=self._pack_collection.config_stack.get('repository.fdatasync'))
[13:11] <jam> which is sort of what I would expect. But you can't set it to "False" or "F" or ... all of those evaluate to True in python
[13:11] <jam> (because they are strings)
[13:12] <jam> I don't see any tests from poolie that the config item can actually disable fdatasync
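The pitfall jam describes can be shown in a couple of lines: config values come back as strings, and any non-empty string is truthy in Python, so a stored "False" still enables the feature. The parsing helper below is a sketch in the spirit of `get_user_option_as_bool`, not bzrlib's actual implementation:

```python
# Any non-empty string is truthy, so bool() on a raw config value is wrong:
assert bool("False") is True

# A minimal boolean-config parser (illustrative, not bzrlib's real code).
def config_bool(value, default=None):
    """Interpret a raw config string as a boolean, or return default."""
    if value is None:
        return default
    text = value.strip().lower()
    if text in ('true', '1', 'yes', 'on'):
        return True
    if text in ('false', '0', 'no', 'off'):
        return False
    return default  # unrecognised values fall back to the default
```

With a helper like this, `config_stack.get('repository.fdatasync')` set to "False" would actually disable fdatasync instead of silently evaluating as true.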
[13:14] <vila> jam: I'm working on the boolean config options just now
[13:14] <vila> jam: you're right that there is a bug right now, my next submission will fix it though
[13:16] <jam> vila: I'm looking to do something with it for 2.4. Should I just use "get_user_option_as_bool" for now?
[13:16] <jam> I was also a bit disturbed that just adding that suddenly made an HPSS ratchet test fail because it went from 14 HPSS calls to *34*
[13:16] <vila> jam: yup
[13:16] <jam> but that could have been my fault.
[13:16] <jam> (somehow)
[13:16] <jam> I haven't worked through all the failing tests yet
[13:17] <vila> jam: what do you mean by "something" ? You're backporting the fdatasync stuff ?
[13:18] <jam> vila: I was using fdatasync to figure out how to do "no fetch tags"
[13:18] <jam> only to find out it didn't work
[13:18] <vila> ha right, adding a new boolean option then ?
[13:18] <jam> (repository.fdatasync as an example to crib from)
[13:18] <jam> right
[13:18] <vila> k
[13:18] <jam> also, oddly the HPSS verbs are different
[13:19] <jam> branch.get_config() calls 'Branch.get_config_file'
[13:19] <jam> BranchStack() calls 'get'
[13:19] <jam> not even 'get_bytes' from what I can tell
[13:21] <vila> Store calls get_bytes, Stack doesn't know about which files are involved
[13:23] <vila> but fdatasync shouldn't care about remote repositories (unless there is a verb for that) or it's a bug no ?
[13:23] <vila> s/care/apply/
[13:23] <vila> s/care about/apply to/
[13:24] <vila> jam: be aware that remote branches don't obey bazaar.conf nor locations.conf, they need a specific stack (TBD)
[13:25] <jam> vila: Remote cares about fetch_tags
[13:25] <jam> I didn't test anything for fdatasync in that area
[13:25] <vila> jam: hehe, indeed :)
[13:27] <blackarchon> someone here involved in qbzr or explorer?
[13:28] <blackarchon> i'm wondering if there is indeed any code to get the window positions and window dimensions from qbzr.conf?
[13:31] <jam> blackarchon: I haven't poked at that code recently, but ISTR that there are window positions that are remembered
[13:31] <jam> I think it involves deriving from a particular class
[13:32] <jam> like QBZRWindow or something
[13:33] <blackarchon> well I'm asking because this doesn't seem to be working at all
[13:34] <blackarchon> ok, I will try to find the exact place for this restoreSize function which is called
[14:23] <vila> blackarchon: AFAICS, there are options for *size* but not for *positions*
[14:42] <hikiko> hello
[14:44] <hikiko> I would like some help: I ve created a branch of a project and I want to commit to it
[14:44] <hikiko> (a launchpad branch)
[14:45] <hikiko> when I use the command: bzr commit -m "blah blah"
[14:45] <hikiko> I get the following error:
[14:45] <hikiko> bzr: ERROR: Cannot lock LockDir(lp-86944400:///%2Bbranch/stellarium/.bzr/branchlock): Transport operation not possible: readonly transport
[14:46] <hikiko> although i am logged in on launchpad
[14:46] <hikiko> do you have any idea on what I am doing wrong?
[14:52] <vila> hikiko: you probably don't have *write* access on this branch (actually, you're probably using a checkout which is bound to the remote branch instead of a local branch that you then push somewhere else)
[14:54] <hikiko> how can I change this? I mean what I did was: checkout the project code
[14:54] <hikiko> then create my launchpad user and branch
[14:54] <hikiko> and then attempt to commit
[14:54] <hikiko> what step is missing?
[14:55] <hikiko> oh
[14:55] <hikiko> I see
[14:55] <hikiko> you mean i have to do sth like:
[14:56] <hikiko> bzr branch lp:~<link to my branch>
[14:56] <hikiko> and then change that code
[14:56] <hikiko> and bzr push lp: ...
[14:56] <hikiko> ?
[14:57] <blackarchon> yes, this would work.
[14:57] <fullermd> If you checked out _before_ creating your LP user, the checkout is connected over a _non-logged-in_ transport.
[14:57] <hikiko> yes I checked before
[15:01] <hikiko> and if I want to update
[15:01] <hikiko> bzr up
[15:01] <hikiko> will give me the new code from the branch or from the project?
[15:04] <hikiko> ok I found it :)
[15:04] <hikiko> thank you very very much for your help all!
[15:11] <blackarchon> um... where can I find the default ignore file, which is installed with bzr?
[15:12] <LeoNerd> ~/.bazaar/ignore  perhaps?
[15:12] <blackarchon> no, I edited it - and left no backup of it :(
[15:13] <fullermd> 'm pretty sure bzr writes it if it doesn't exist...
[15:13] <blackarchon> oh yes? let me try...
[15:13] <LeoNerd> mv it away and start again?
[15:15] <fullermd> Semirelatedly, it's probably rather discouraged to be editing it, as a rule.  Since it would no longer match what anybody else has, you can wind up with surprises in branches multiple people touch.
[15:15] <blackarchon> well no, it doesn't get magically recreated :(
[15:16] <fullermd> If I mv mine out of the way and run a 'bzr stat', it writes out a new one...
[15:17] <blackarchon> ah yes, but only if this bzr command succeeds
[15:18] <blackarchon> my bzr stat wasn't inside a bzr directory, so it hasn't created the file - thx! :)
[15:29] <blackarchon> I want to fix this bug: https://bugs.launchpad.net/qbzr/+bug/578935
[15:30] <blackarchon> so I added a check if the argument isn't a branch or a wt
[15:30] <blackarchon> https://code.launchpad.net/~andrebachmann-dd/qbzr/fix_qbrowse
[15:31] <blackarchon> but I still need an idea of how to proceed, I want to disable all objects in qbrowse
[15:32] <blackarchon> but I don't have the right thought on how to do that
[15:35] <blackarchon> any hints?
[16:44] <vila> more than 3 hours to land on pqm, is it me or is it getting worse ?
[16:48] <jelmer> vila, it's not just you
[16:49] <vila> jelmer: :)
[17:08] <jelmer> vila: can you submit my 2.4 branch? I don't seem to have permission to submit to release branches
[17:09] <vila> jelmer: 2.4-809048-workaround ?
[17:10] <jelmer> yep
[17:10] <vila> done
[17:12] <jelmer> vila: thanks
[17:13] <vila> jelmer: nothing showing up on pqm for now :-/
[17:16] <vila> jelmer: ok, here it is
[17:17] <vila> jelmer: thanks for that ! Merge it into trunk when you can !
[17:18] <jelmer> will do :)
[17:21] <vila> ok, I'm off, see you online on Thursday ;)
[19:51] <santagada> jelmer: dots don't work on bzr 2.1, only none works
[19:52] <santagada> jelmer: and I think that for deployment we can't wait for launchpad so we will just tar everything up to the clients
[23:02] <lifeless> morning poolie
[23:28] <poolie> hi thar