[00:27] okay, that's nearly back to email stability. [00:27] looks like jam fixed a couple of longstanding workingtree test issues. [00:30] If I'm reading the docs correctly bazaar doesn't implement its own network layers but instead wraps itself inside ssh, http(s), (s)ftp, etc... Does bazaar do any password authentication / storing itself, or does it delegate all that to the transport layers it uses? [00:30] hi mgz [00:30] TJNII, you can run plain bzr over tcp [00:30] but it does not do its own encryption or authentication [00:31] so plain bzr is really only for anonymous readonly mode [00:31] poolie: So it would be clear text passwords? [00:31] Will bazaar cache clear text passwords like svn? [00:36] there are no passwords at all [00:36] you should run over ssh [00:37] Got it [00:38] So bazaar itself doesn't use a password, it uses the permissions of the user / underlying authentication methods? [00:40] I've been reading the SVN book. The svn command does take a password which, until recently, it would store in clear text on the disk. That is a huge red flag for me and something I don't want. So now I'm looking at bazaar and trying to transfer my newfound knowledge to it, which includes understanding the differences. [00:41] TJNII: most people run bzr over ssh and so use whatever authentication they use with ssh (public keys mostly) [00:42] So how does bzr authenticate those users against the repository? System username? Filesystem permissions? [00:43] Strictly speaking, it doesn't, it just tries to write files it's told to. Filesystem permissions would prevent that and make things stop, to be sure. [00:43] depends which ssh server you're running [00:43] using openssh, sure, it's the system user [00:43] The ssh or higher-level authentication just determines whether you can talk to bzr in the first place. [00:44] Aah, I think I've found my answer: "Bazaar provides a script called bzr_access that can be used to provide access control based on usernames, with authentication performed by SSH." [00:44] And so, bazaar doesn't handle any passwords itself, so I don't have to worry about how it handles said passwords. Is this correct? [00:45] I know I'm asking the same basic question over and over again, but I want some certainty on this. [00:45] bzr doesn't even handle usernames. [00:45] TJNII: bzr depends on your ssh client to do password management [00:45] TJNII: there are many different ssh clients around, some that use keyrings, some that use GUIs, etc. [00:51] I think I've got it. Thanks. [00:53] o/ fullermd, mwhudson [00:55] * fullermd wave at poolie. [01:06] Good morning. [01:19] hi there spiv [02:13] o/ jelmer [02:20] spiv, so are you going to pilot some more this week? [02:20] hi jelmer [02:20] the queue is still pretty big [02:20] spiv also i'll update by command deprecation patch [02:21] I managed to get the queue back down a bit but then jelmer and vila kept proposing patches! https://code.launchpad.net/bzr/+activereviews [02:21] Er, I meant: http://webnumbr.com/bzr-active-reviews [02:22] yeah i saw [02:22] There are worse problems to have :) [02:23] I think you're on the roster for this week, but I keep can piloting this week if you'd like. [02:24] i'll take the helm, but there's such a large number i think it'd be good if you keep reviewing too [02:24] i'd like to work with vila on his config stuff [02:25] well [02:25] it's a bit fire and forget because i think he's going away from tuesday or wednesday [02:29] Ok. Mainly that leaves jelmer's patches. 
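For context on the authentication discussion at the top of the log, the usual bzr-over-ssh setup looks roughly like the sketch below; the host, user and repository paths are invented for illustration, and the point is that every password or key exchange happens in the ssh layer, never in bzr itself:

    # on the server: a shared repository owned by ordinary system users
    bzr init-repo --no-trees /srv/bzr/project
    bzr init /srv/bzr/project/trunk

    # on a client: authentication is plain ssh (keys, agent, or a password
    # prompt, depending on the ssh client); bzr stores no credentials of its own
    ssh-keygen -t rsa
    ssh-copy-id alice@example.com

    # bzr just runs over the authenticated channel; filesystem permissions on
    # the server decide what the logged-in user may actually write
    bzr checkout bzr+ssh://alice@example.com/srv/bzr/project/trunk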
[03:21] spiv is https://bugs.launchpad.net/bzr/+bug/721710 a dupe of something you already fixed? [03:21] Ubuntu bug 721710 in Bazaar "Not possible to pull or update using smart server in 2.3.0" [Medium,Confirmed] [03:22] to do with locks getting tangled up on bound branches? [03:24] poolie: fixed in 2.3.something, yes [03:24] poolie: https://bugs.launchpad.net/bzr/+bug/733350 I think [03:24] Ubuntu bug 733350 in Bazaar "LockContention error when pushing (with new tag) to a bound branch" [High,Fix released] [03:24] ok will you tell Nicholas or shall i? [03:25] poolie: ah, fixed in lp:bzr/2.3, but not in a released version yet [03:25] (Interesting data point for recent discussion about the meaning of "Fix Released", perhaps?) [03:25] I'll do it [03:29] indeed; thanks [03:29] i might try to put that tweak into the kanban software later [03:38] poolie: hi! about bug 756228 . what sort of information would you like about the bzr repo? i have a copy of it, so hopefully i can bash it in any ways you need [03:38] Launchpad bug 756228 in Bazaar "bzr: ERROR: exceptions.StopIteration" [Undecided,Incomplete] https://launchpad.net/bugs/756228 [03:40] kgoetz, are the filenames secret? [03:41] poolie: nope. anything you need. its a d-i checkout with my breakage added [03:41] then please just attach a tarball of the repo to the bug and reopen it [03:42] sure [03:48] poolie: added, thanks. repair-workingtree doesn't see issues with the repository, so i'll keep watching the ticket in case i need to provide anything else [03:53] ok [06:10] ok so hi [06:10] lifeless, could you give bradm a clue how pqm finds the right chroot to work on [06:10] hi [06:11] amd64 [06:11] erm [06:11] pqm-amd64 [06:11] heh [06:11] to find it from first principles [06:11] start with the pqm crontab [06:11] so mail comes in [06:11] and is queued and run from cron? [06:11] which will tell you the pqm config file [06:11] lifeless: ah so pqm runs via cron? I didn't even know that much :-/ [06:11] the context is we want to switch it to a different chroot which is running lucid [06:11] look in the pqm config file, and it uses paths that point into the chroot + dchroot-run to run commands that DTRT cd wise into the chroot [06:12] all you should need to do to do that is have the chroot prepared and edit the bzr pqm config file [06:12] lifeless: its a dist-upgraded copy of the existing chroot, so it should be ok I hope [06:12] aha, found the path [06:13] there is also a web server [06:13] this is secondary [06:13] it might need to be updated to look at a different path too though [06:13] bzr-pqm.conf [06:14] the webserver doesn't look in the chroot at all [06:15] the log is in a separate directory? [06:15] bound under the chroot or something? [06:18] the progress stuff is done via stdout of the test process [06:18] straight into pqm, and it writes it to its own log area [06:18] okey, thats updated to the new chroot, I can juggle the paths once we're happy with this [06:19] sorry i don't understand [06:20] 'happy with this' in the sense it can now be tested? [06:20] well, assuming there's only one spot to change, yes [06:20] I don't really know pqm very well, so I'm not really sure what else if anything needs to be done [06:20] ok, i'll send something in and we'll see [06:21] cool [06:21] sent... [06:22] ugh, found a typo [06:22] there, fixed [06:23] bradm, ok, it failed because python-docutils was not installed [06:23] i think this means either bzr-build-dependencies (is that the right name?)
is not installed, or it's missing that dependency [06:23] the correct name is in the rt ticket [06:24] poolie: oh, there's a python path pointing to python 2.4 as well [06:26] or that [06:26] poolie: python-docutils is definitely installed in the chroot [06:27] poolie: so I guess update from python2.4 to python2.6? [06:27] update where? [06:27] i mean, where is the thing set that you're going to change? [06:27] pastebin ftw [06:30] there's a bunch of stanzas like https://pastebin.canonical.com/45898/ [06:30] I updated the chroot-amd64 to chroot-amd64-new, its the python2.4 bit I think needs to change to python2.6 [06:31] thats in bzr-pqm.conf [06:32] oh, not a pythonpath as such [06:32] bradm: yes, the python value there needs changing [06:32] i wonder if it would be better just to cut it out and use the chroot's default [06:32] s//i think it would be [06:32] poolie: its set because of bzr's Makefile [06:32] poolie: IIRC [06:33] surely we default to just 'python'? [06:33] $severalyearoldmemory [06:33] i mean, i believe you, but i think bzr will do fine without $PYTHON set [06:33] actually, hrm, I haven't seen where the chroots are defined either [06:33] bradm: pqm doesn't know about the chroots per se [06:33] bradm: it knows about paths that are in the chroot [06:33] lifeless: aha, right. [06:34] bradm: and it calls (per the config) dchroot-run which will chroot + cd + exec [06:34] so it's set in /home/pqm/bin/dchroot-run [06:34] no [06:34] or something reached from there [06:34] yes [06:34] dchroot-run queries dchroot -l [06:34] probably /etc/dchroot/* [06:34] right, and matches it up to the path, cool. [06:34] that PYTHON variable is updated [06:35] time for another test? [06:37] bradm, can you just delete the PYTHON= thing? [06:37] bradm: I wrote that when the logic for pqm to do chroots was discussed and I was ... argh [06:37] avoiding overwriting stuff unnecessarily is better [06:37] poolie: *shrug* sure, if you'd prefer [06:37] i would [06:38] will save fixing it again if we go to natty or whatever [06:38] also, would you mind please running 'dchroot -l' and tell me if you get any warnings about deprecated options? [06:40] poolie: yes, there's warnings about the deprecated options [06:41] thanks [06:41] let's get this going first [06:42] then if you want i can file a separate ticket asking for them to be updated [06:42] okey, try that? [06:42] k [06:43] seems to be running ok [06:44] it will take a fair fraction of an hour to complete [06:44] hm [06:44] would you like me to file a ticket about the warnings? [06:45] um, I guess its probably worthwhile, they seem to be popping up wherever we use lucid and dchroot [06:46] ok, rt 45200 [06:47] ta [06:48] right, so if this works we can change the pqm-amd64 to be pqm-amd64-old and the new one to pqm-amd64 - leave the old one around for a bit, then get rid of it [06:49] yep [06:49] okey, let me know when the test has finished [06:50] we can leave it for a while if you'd prefer more than one test? [06:50] whats the new chroot called ? [06:51] pqm-amd64-new [06:51] heh cool [06:51] are you going to pivot it onto the old name once qa'd / [06:51] ?
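As a rough sketch of the dchroot mechanics lifeless describes above (the chroot names and paths are invented, and the exact config file location differs between the classic dchroot and the schroot compatibility wrapper that emits the deprecation warnings mentioned here):

    # list the chroots the wrapper knows about; this is what dchroot-run queries
    dchroot -l

    # the config maps a short name to a directory, one chroot per line, e.g.
    #   pqm-amd64      /srv/chroot/chroot-amd64
    #   pqm-amd64-new  /srv/chroot/chroot-amd64-new

    # run a command inside a named chroot; the actual command pqm runs is
    # whatever bzr-pqm.conf tells it to
    dchroot -c pqm-amd64-new "make check"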
[06:51] lifeless: yep, thats the plan [06:51] sweet [06:52] lifeless: just didn't want any confusion - wel, any _more_ confusion [06:54] bradm, i wonder if rather than switching it to the be pqm-amd64, it would be better to rename it to something like pqm-bzr [06:54] blah [06:54] pqm-bzr-amd64-lucid [06:55] this would have saved some other questions earlier [06:55] poolie: sure, that'd make it very clear what it is [06:55] ok [06:55] i'll let you know when this job passes or fails [07:35] hi all ! [07:35] poolie: if it helps, I can put my proposals 'work in progress' before leaving [07:36] it would be nice to still review them before I come back though ;) [07:38] hi vila [07:38] i don't think you need to do that [07:38] we can do it if they're cluttering things up [07:38] or someone else may fix them [07:43] bradm, ok, that test concluded [07:43] wow that's pretty slow [07:44] poolie: interesting, I wonder why its slow then [07:44] poolie: maybe a python 2.6 thing? [07:44] it's 7-year-old hardware :} [07:44] it's no slower than it was in the other chroot [07:45] poolie: ah, right. I thought you meant it was slow compared to the other chroot [07:45] there's a separate ticket discussing getting a faster machine [07:46] anyhow, it finished, and failed [07:46] i don't know if that was because of an actual bug in the branch or something in the chroot [07:46] i might try another landing, maybe of a trivial branch [07:47] poolie: sure [07:47] what was the error ? [07:48] lifeless, see https://code.launchpad.net/~jelmer/bzr/mutableinventorytree/+merge/57061 [07:49] hi spiv? [07:51] Hi poolie === Ursula__ is now known as Ursinha-afk === Ursinha-afk is now known as Guest38069 [07:56] spiv, hm, that really is a bit of a trap in lp that you can approve a dependent branch without the predecessor being approved [07:56] i wonder if hydrazine should specifically check it [07:56] it could [07:57] poolie: yeah, I'd really like if LP made that more visible [07:57] poolie: maybe even with a loud “hey are you sure you don't want to review X first?” notification box [07:58] indeed [07:58] And also by not stating it is “ready to land” on +activereviews [07:58] ! indeed [07:59] i think there are bugs for at least some of these [07:59] It probably deserves a new section on that page, like “Approved but has unapproved prerequisites” [08:08] lifeless, so does that failure seem to you like one caused by the chroot? [08:09] it might be failing to import because it failed to build or something [08:15] is _FastPath a C / pyrex module? [08:15] if so, yes a build failure seems likely [08:18] but there is nothing shown earlier in the output [08:19] and it may also just be simply missing [08:19] anyhow, i will look in a bit [08:19] lifeless: no, just a regular class in bzrlib.mutabletree [08:20] an import failure there seems odd [08:20] That patch is reorganising tree code a bit, it might be that jelmer's accidentally introduced a circular import or something like that. [08:21] (i.e. for that error I'd suspect the patch before the chroot upgrade) [08:22] we could flip the chroot back if you wanted? [08:30] hmm, that bzr-landing-dependencies isn't installed, but for some reason that depends on python2.4 ? [08:31] bradm: the reason is the pqm is the only place testing against 2.4 to ensure bzr is still compatible with it [08:31] s/the pqm/that pqm/ [08:31] all of the dependencies are installed anyway [08:31] vila: ah, cool [08:32] but lucid doesn't provide 2.4 anymore right ? [08:32] What, don't we have users for that? 
:p [08:33] yes i suspected something like what spiv said [08:33] it will be easy to test [08:33] vila: erm, its still installed [08:33] bradm: python 2.4 ? [08:33] vila: yes [08:34] bradm: wow, great, then may be we should keep using it then, poolie ? [08:34] vila: obviously just not removed after the dist-upgrade from hardy [08:34] vila: interestingly it'll mean if you ever want to install the bzr-landing-dependencies somewhere it won't work until you find python2.4 debs somewhere [08:35] ha, erm, forget my remark then, I'm not familiar enough with chroots but istm that we shouldn't rely on it if it's just a fallout from the upgrade [08:36] vila what is 'it'? [08:36] bradm: right, I think bzr-landing-dependencies was a work in progress needing validation :) [08:36] vila: well, I think we just proved it wasn't right :) [08:36] i think we should update that package to depend on python>=2.4 <3 or something [08:36] poolie: python 2.4, but it may rather be a bug in bzr-landing-dependencies [08:36] It's important to test against 2.4, but we could do that just as well via babune as in pqm itself. [08:36] bradm: indeed, hence my reamark ;) [08:37] * vila nods [08:38] there are a couple of TODO items for that on babune, didn't find the time to get to them for... sometimes :) [08:40] * fullermd expects vila's support in his campaign for 36 hour days... [08:41] spiv, hm, interesting point about lack of detail towards udd stuf [08:42] fullermd: I'd rather invest in cloning than mucking with time travel *again* [08:44] vila: The beauty is, if you do either one, you have spare time to do the other too :p [08:44] :) === hunger_ is now known as hunger [08:53] poolie: glad you think so! [08:55] poolie: I'm not sure it's a simple spectrum, but it feels a bit like we haven't found the happy medium between "spec it to death so we know exactly what we plan to do to get there" and "just do it without wasting time writing specs doomed to be stale in a month's time" [08:58] right [08:58] feel free to say that outside of reviews [09:00] at the moment, i think improving the udd importer and dealing with our shortlisted bugs is a good path towards i [09:00] *it [09:00] but perhaps i should communicate that more [09:01] poolie: I'm off for the day, let me know what you want to do with the chroot :) [09:02] i think it's good; have a good night [09:02] i'll send a mail about it [09:03] poolie: righto, if you're happy I'll fix up the names tomorrow, we can go with the one you suggested if you want [09:03] great [09:03] then i guess there's no rush to delete the old one [09:03] yes, i still think that name would be good [09:04] nope, not overly, I'd just like to make sure we don't stay using pqm-amd64-new :) [09:04] but I'd like to also not keep the old one around forever if we don't need it, just more stuff filling up the disk once we're happy we don't need it [09:05] right [09:05] i think we should be able to rm it by the end of the week [09:05] excellent, works for me [09:05] even that is being pretty cautious; there really should be no reason to roll back [09:05] thanks mate [09:05] no worries, seeya tomorrow [09:09] poolie: so pqm runs python 2.5 now ? [09:10] 2.6 [09:11] hmpf, way to go ;) [09:11] erm, maybe 2.5 [09:11] lucid default [09:12] Ok, baby meltdown. I think that's my cue to finish my work day… see you folks tomorrow. [09:12] I've lost track of where all is documented :-/ I know some wiki pages were explaining everything at some point... [09:12] spiv: ouch, meltdown ? I hope I missed the joke here... 
[09:13] spiv: is it a way to say he has high fever or something ? [09:13] Either that, or leaking radiation (or other noxious substances) :p [09:16] vila, do we have babune builds on 2.4? [09:16] no [09:16] i looked at babune today but i couldn't work out which ones were expected to be failing and which are expected to work :{ [09:17] all builds use the defaults provided by the distribution [09:17] poolie: you came at a really bad time [09:17] there are a lot a spurious failures these days [09:17] a total mess [09:17] You could build a 2.4 BSD vm pretty easily. May need a slightly older ports tree; not sure just how decommissioned 2.4 is now. [09:18] But since everything just builds against what you've got installed... [09:18] right, I can do that on the existing hardy, jaunty or karmic slaves, but [09:19] many failures seem to derive from using the karmic slave (don't ask me for why running a karmic slave can leak to a lucid one) [09:19] so I'd probably use hardy for that, jaunty has been disabled for quite some time since it's EOL [09:20] even karmic was disabled at some point and got re-enabled... by some jenkins upgrade apparently (I didn't check carefully) [09:21] vila, so the thing is, it's not very actionable [09:21] but the thing is, many failures have re-appeared since karmic is enabled again (it may just be a red herring...) [09:21] what ? [09:21] (i realize you already probably know this, and i understanding it may be a bad day) [09:21] well [09:21] fullermd: http://xkcd.com/320/ [09:21] if i lookd at them, i don't know if i should do anything about it [09:21] hi jam [09:21] but that is only 28 hour days [09:21] hi poolie [09:21] ha, babune, yeah, total crap these days, it's a shame [09:22] the problem is that it's kind of out of control, the bugs seem to be coming from jenkins and/or vbox with no trail to follow :-( [09:22] seriously [09:22] can't we run eg hardy in a chroot? [09:23] vila: yeah, I've been following the failures, but haven't ever found a true positive recently. I guess there was one a month-or-so back [09:23] they have been there for quite some time but triggering only rarely whereas they start triggering daily lately [09:23] jam: yup, the main result is still that we don't regress, but it's becoming harder to verify [09:24] since it's still < 15 minutes a day, I still wait for a new vbox release [09:24] why do we have to use vbox? [09:24] poolie: switching to chroots is an option for Ubuntu only [09:24] how about running ubuntu from chroots [09:25] and, i guess, windows and mac os are the hard parts [09:25] I didn't so far because it's less effort to use vms and I still haven't setup a single chroot [09:25] well, I use real hardware for osx [09:26] really less effort? [09:26] do you know debootstrap? 
[09:27] no, less effort because unknown >> known [09:28] err, that 'no' was for 'do you know debootstrap' [09:28] ah, ok [09:28] anyhow, so if it seems to be a source of noise, and it does, maybe we should reconsider that [09:28] for windows and mac os i don't know what to do [09:29] * maxb has been quite impressed with the ease of use of kvm for linux stuff - not sure for other OSes [09:29] that would be high on my list [09:30] right, kvm is still on my radar too, the main problem being that I can't test it on a host where vbox is installed :-/ [09:30] Especially, kvm + approx + preseeded d-i install == awesomeness for Debian install testing :-) [09:30] s/installed/running/ [09:33] anyhow [09:33] i would kind of rather shunt aside all the builders or jobs that have intermittent failures and just [09:34] have one screen that really is expected to be blue all the time and build from there [09:34] strictly speaking, that would be empty :-( [09:35] even FreeBSD suffers from spurious failures (while it hasn't for months) [09:35] the problem is that *some* spurious failures have been good hints about real problems in the past [09:35] well [09:36] test isolation / lazy import order are just a few examples [09:36] to be more precise, i mean getting a set where a failure is highly likely to be something that we should look into as an actual upstream bug [09:36] sure [09:36] yeah [09:36] for instance if 90% or even 80% of failures were "real" failures, it would be worth investigating them [09:36] one way to address it would be to automatically retry failures [09:36] yeah [09:36] my impression is that it is not so high at the moment [09:36] last time I activated the option it half worked [09:37] ok, so that is one option [09:37] another option is to trim back to a core set plus some non-core builders [09:37] i guess another is to use chroots rather than vbox for some of them, or different virtualization software, if we think that's a cause of many failures [09:38] i guess using chroots would have the benefit of confirming whether vbox is the problem or not [09:38] well, I witnessed some cases where *all* running vms were shut down at once while a single one was asked to [09:39] so I *know* there is a bug in vbox (leading to corrupted ext4 fses, leading to spurious failures) [09:39] there is just no way I know of to 1) reproduce, 2) point a finger to a more precise part of vbox [09:39] and still [09:40] this either happens once a week or twice a day... [09:40] vila: what about trying to run the VMs more sequentially, rather than all at the same time? [09:40] (can't shut down a VM that isn't running) [09:40] well [09:40] given all this, i don't see why we should still use it [09:40] jam: they are limited to 2 indeed, I did limit to 1 in the past [09:40] when you get back maybe we can pair to set up some chroot builders [09:40] then see if they are stable [09:42] sure [09:43] I was using a *single* solution to lower maintenance costs; if the cost of the noise in the results becomes unbearable, that should be re-considered [09:45] anyway, except for lucid, they are all back in blue as of *now* [09:49] vila: sure, but I follow babune via the rss feed, and when I'm at the point that I expect "failure" to be bogus, that means it isn't providing benefit [09:49] (for me) [09:49] jam: what do you propose ? [09:49] yeah, same here [09:50] vila: the stability of babune (just like our test suite) is of primary importance. Better to do less testing but to trust the results.
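For the chroot-builder idea being discussed, bootstrapping one is only a couple of commands; the suite, target directory and mirror below are placeholders:

    # build a minimal Ubuntu tree for the release to be tested
    sudo debootstrap hardy /srv/chroot/hardy-amd64 http://archive.ubuntu.com/ubuntu

    # enter it, install the test dependencies, then run the suite from a branch
    sudo chroot /srv/chroot/hardy-amd64 /bin/bash
    apt-get update && apt-get install python-dev python-docutils
    cd /path/to/bzr.dev && ./bzr selftest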
[09:50] I don't know exactly how to get there. [09:51] having a False Positive rate higher than True Positive rate means it won't be trusted. [09:51] If we actually had more accidental bugs in bzr [09:51] then we could accept a higher false positive right :) [09:51] rate [09:51] I can submit some broken code if that'll help... [09:52] fullermd: thanks. It has to pass on PQM, but then fail on babune, so anything you can do to help [09:52] vila: we've certainly caught some windows bugs [09:52] but for the 2 bugs we caught, we've had like 10+ accidental failures. and I really don't (personally) know how to make the system more reliable. [09:52] vila: IF starting/stopping the VMs is a problem, could we have fewer and just leave them running? [09:53] i was actually just reading something like this [09:53] http://horicky.blogspot.com/2011/03/compare-machine-learning-models-with.html [09:54] fullermd, breaking bsd would be an easy way to do that :) [09:55] ok, so the lucid failure was again a fallout from a crash [09:56] anyhow [09:56] so, actually the only way to get a good picture of babune's result, is not from the rss feed (which is noisy), but to wait for me to cleanup the noise [09:56] vila, you're away from tonight? [09:56] no, tomorrow afternoon [09:56] ok [09:56] poolie: ROC curve is very popular in medical fields. Lots of chances for changing the threshold to get better/worse FPR, etc. [09:56] i look at the web page not the RSS [09:56] in a nutshell my position is: avoid false negatives is more important at this point than coverage [09:56] vila: sure, but then it is synchronous polling, vs async updates [09:57] jam: yeah, I'm just confirming the actual state, which is of course not ideal [09:57] anyhow, i am repeating myself [09:57] and i should stop [09:57] work [09:57] poolie: have a good night [09:57] thanks, you too [09:57] Could do something like "bzr selftest ; echo whee" and watch for the echo to catch VM crash interruptions, presumably. [09:58] Though I have no idea how easy that would be to hook. [09:59] fullermd: that's kind of the problem, jenkins sometimes lose the connection with vms that are still running, so you can't filter between crashes and connection lsot [09:59] lots [09:59] lost ! [10:03] Maybe it just needs the config checked by someone who can tpye ;) [10:03] fullermd: that would help, bu only a bit, everything is version controlled to catch the tyops ;) [10:03] but [10:10] jam: but back to the number of concurrent slaves running, the bug is still there in vbox when *I* run a vm outside of babune regular scheduling, so I've seen cases where running a single vm in babune killed *my* unrelated vm :-/ [10:51] hehe, retweet: Kanter's Law: Everything can look like a failure in the middle - http://j.mp/gadlnP === psynaptic|away is now known as psynaptic === psynaptic is now known as psynaptic|sick [13:01] are PQM submissions failing for anybody else? [13:01] ERROR: bzrlib.tests.test_crash.TestApportReporting.test_apport_report fails for me [13:21] jam, vila: ^ [13:23] jelmer: I fired one this morning let me check [13:24] shudder, no email feedback [13:24] I did see your MP (2.3 -> bzr merge) fail with a single error [13:24] not sure what though [13:25] where did you see that ? [13:25] http://pqm.bazaar-vcs.org/ [13:25] note that it all the commit did was really merging a news entry... [13:25] hmm, seems likely it's the same issue then [13:26] yup, that's why I mention it [13:26] jelmer: pqm wsa upgraded to lucid this morning, you know that ? 
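To put a number on jam's false-positive point from earlier in the morning (the figures are invented, only the shape of the argument matters): if a real regression lands in roughly one run in twenty, babune always catches it, and 15% of clean runs still fail for spurious reasons, then

    P(real | red run) = 0.05 / (0.05 + 0.15 * 0.95) ≈ 0.26

so about three red runs out of four are noise, which is in the same ballpark as the two-real-versus-ten-accidental ratio jam reports, and is why the feed stops being trusted.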
[13:26] vila: yeah, that's why I'm asking. I think it's related. [13:27] ok, just checking we were all the same page [13:27] jelmer: did *you* get emails for the failures / [13:27] ? [13:27] yep [13:29] I guess I'd better have a look at what's causing the error then [13:30] vila: thanks [13:30] jelmer: was just looking briefly, this test requires a feature already, so... weird setup on pqm will be my first guess [13:30] jelmer: may be you have a traceback to refine the diagnosis though [13:33] vila: http://pastebin.ubuntu.com/592599/ [13:34] ha, right in the probe :-( [13:34] vila: do you know if PQM has its own private copy of apport? [13:34] no idea [13:34] I'll talk to a losa [13:35] well, no idea in general that is, looking at the path, I have some doubts though [13:38] this reminds me that __import__ is... trickier than it looks and sometimes returns a different module than the one you're asking him to import (a parent one though, but still I remember at least one case where it picked up the wrong one in pyutils but that's probably unrelated) [13:42] OS locks are exclusive for different processes (Bug #174055) [13:42] Launchpad bug 174055 in Bazaar "can't run bzr info while dirstate is locked (dup-of: 98836)" [Low,In progress] https://launchpad.net/bugs/174055 [13:42] Launchpad bug 98836 in Bazaar "[MASTER] "OS locks must die" - dirstate file write locks exclude readers and limit portability" [High,Confirmed] https://launchpad.net/bugs/98836 [13:42] is there a 2.3.1 version of bzr backported to hardy, I wonder? [13:43] lamont: yes, https://launchpad.net/~bzr/+archive/ppa?field.series_filter=hardy [13:44] yay [14:12] hi [14:13] i have a problem, my diff's and commits are taking way too long [14:13] like half a minute to a minute, that's not so nice :( [14:13] how can i make this faster? [14:14] Are there any plans to i18n bazaar? [14:15] vanguard: vague plans at this point, it's something we want to do in the future but nobody's actively working on it [14:15] cheater: is your tre bond to a remote branch ? [14:15] tree [14:15] jelmer: A first step would be to wrap everything in _(), right? I mean that is something I could do … [14:16] vila: not sure. i am using the "centralized" workflow [14:16] cheater: 'bzr info' should tell you [14:16] vanguard: it's not as simple as that, it needs some thought to make sure we only call gettext on things that actually get displayed [14:16] vila: so i have created a repository on a server and have done bzr checkout sftp:// blah blah [14:17] vila: ok, as soon as it's done checking in those several files -_- [14:17] jelmer: true, translating internal messages might break the program. [14:17] vanguard: that won't really be an issue, it's more about the performance [14:17] cheater: if the server is slow, so are the operations involving it, [14:17] “We” already lost the performance battle to git, so why bother? ;-) [14:18] vila: it's a dedicated server [14:18] vila: i don't think it's slow, it's a nice ubuntu server from slicehost with nothing else on it.. [14:18] cheater: slow includes network latency [14:19] how can i make this faster? [14:19] vanguard: bzr's performance has improved significantly; it's not as quick as git but we have other features and we don't want to go back to being slow. [14:19] it would be best if my commits were like instant [14:19] vanguard: it should be possible to do i18n without noticably impacting performance, so let's do it that way [14:19] jelmer: Personally, even on my 10k Line Project, I do not find bzr to be really slow. 
But git sometimes amazes me on how fast it is. [14:20] jelmer: where could I pour in my motivation to get i18n started? [14:21] cheater: if you work in a centralized workflow, you're *asking* that your commits happen on the server, using a decentralized workflow allows you to commit locally and *then* merge your result on the server at a different times [14:22] cheater: well, there are various intermediates, but roughly that's the main difference [14:22] vanguard: I'd recommend bringing it up on the mailing list; we might also want to invite dpm to join in the discussion [14:24] vila: how do i do this so that i merge stuff manually? [14:24] vila: i am guessing i could just have like a daemon merging stuff every 15 minutes or so [14:25] cheater: merging can crate conflicts, conflict resolution can't be automated [14:25] jelmer: I should be just able to branch lp:bzr and play around with i18n to get to know the material, right? [14:26] vila: i'm the only one using bzr right now, so it's fine for me [14:26] vila: i am currently using bzr as a better rsync. [14:26] ha, then just use a branch locally and push only when you're happy [14:27] well i don't want to bother wondering if i pushed everything so that i can continue working at home [14:27] cheater: i.e. 'bzr branch ' instead of 'bzr co ' [14:27] so i think i'll set up a job to push all the time [14:27] and then 'bzr push ' when you want to... push your changes to the server [14:27] vanguard: yep [14:27] cheater: if you are the only one with the master copy you could even to `bzr push --overwrite` [14:27] vila: that's a great idea thanks [14:27] vanguard: aha [14:27] jelmer: let's see how long it takes to branch it with my internet connection [14:28] cheater: then there are no merge conflicts, since nothing will be merged [14:28] vanguard: aha [14:28] great help, thanks guys [14:29] also i have a separate question [14:30] how can i set up bzr well for big files? [14:30] cheater: how big is big? [14:30] well it's big enough that i don't want to have it in the same repository [14:30] they're just big photos, say 20-50 mb at a time [14:30] but lots of em [14:30] cheater: I guess you just have to create multiple branches/repos [14:30] yeah [14:31] but is it better to have the centralized thing, or to branch out? [14:31] cheater: but I do not really see the point in using version controll on binary files [14:31] well i can track renames, overwrites, etc [14:32] those files get remade sometimes [14:32] cheater: okay, makes sense. And you get a backup in case you save the original and so on. [14:32] say we send something off to get photos of it, the photos come back bad, we need to get new ones, and then we overwrite them [14:32] yeah [14:33] jelmer: the mailing list is this one? https://lists.ubuntu.com/mailman/listinfo/bazaar [14:33] vanguard: yes [14:34] jelmer: I guess I will set up a new email address for this, I guess there are more than a few messages a day on it. [14:55] jelmer: where are all the strings I usually see? I found nothing really in bzrlib/status.py for example :-/ [15:00] vanguard: bzrlib/delta.py ? [15:00] vanguard: e.g. 
bzrlib/errors.py, a lot is in bzrlib/builtins.py [15:02] jelmer: okay, there are a lot of string literals in delta.py, but which of these can be i18n'ed … I guess I grasp how difficult that will get … [15:03] vanguard: on the positive side, 'status' may not be the easiest to start with either [15:03] vanguard: I think simply translating all the strings in the codebase is the wrong way to go about this [15:03] what would you suggest? [15:04] vanguard: A lot of these strings will never be shown to the user, so translating them has an unnecessary performance impact [15:04] vanguard: we need to discuss where the translation calls are going to happen first [15:04] jelmer: planning is never a bad idea :) [15:07] vanguard: I think you're right to investigate to see what the situation is, just mentioning it to prevent patches that would be rejected and waste your time. [15:08] jelmer: sure, I consider what I do right now just to check whether I am not thinking of a way too huge task. I do not think I will submit anything before a discussion week or so [15:14] yeah, I crashed it -> I found the right spot :D === Tak|Work is now known as Tak [15:43] jelmer: I hacked something together that works basically (result: http://pastebin.com/MSxrdcVy). But to implement it in a nice way is the challence I guess [16:02] jelmer: any news about pqm ? [16:02] vila: talking to Michael [16:03] I've tried to reproduce the issue locally, but it works fine here [16:03] ha, right, sorry [16:04] vila: I can't reproduce the actual error but it seems that the problem is that a ImportWarning is being raised rather than an ImportError [16:05] and pqm turns warnings into errors [16:06] vila: how does it do that? [16:06] hmm, nothing changed there... or is it a python thing (we switch from 2.4 to 2.? 5 or 6 ? couldn't get confirmation this morning) [16:06] hmm -Werror ? [16:07] vila: Ah, yep [16:25] vila: any chance of a review of https://code.launchpad.net/~jelmer/bzr/importwarning/+merge/57184 ? [16:28] jelmer: I'm not sure this is correct, but 1) I can't put the finger of why I have this feeling, 2) pqm needs to be unblocked [16:29] jelmer: approved [16:29] vila: thanks [16:29] jelmer: we probably want to investigate *why* this happened on pqm and fix the config there anyway [16:30] jelmer: but blocking pqm is *never* a good idea, so thanks [16:30] vila: the apport/ directory in bzr.dev doesn't have a __init__.py and -Werror triggers this problem [16:30] jelmer: but pqm didn [16:30] t try to import from there (or did it ? I couldn't parse the path properly) [16:31] vila: ImportWarning was introduced in python 2.5 [16:32] I suspect prior to that you would just get a ImportError in all cases. [16:32] aaah [16:32] ooooh [16:32] the sooner we stop supporting 2.4... [16:33] ... which we plan to do starting with 2.4 IIRC, just to confuse everybody ;) === deryck is now known as deryck[lunch] [17:21] what is the difference between cloning and branching? === JasonO_ is now known as JasonO [17:29] bzr doesn't do cloning? 
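The pqm breakage jelmer and vila track down above is easy to reproduce by hand; the directory name below is arbitrary, the mechanism is simply that Python 2.5 introduced ImportWarning for a directory on sys.path that lacks an __init__.py, and running under -W error (as the pqm run does) turns that warning into a hard failure:

    # an empty directory shadowing the module name, like apport/ in a bzr.dev tree
    mkdir /tmp/demo && cd /tmp/demo && mkdir apport

    # Python 2.4 silently skips the non-package directory (ImportWarning is new in 2.5)
    python2.4 -W error -c "import apport"

    # Python 2.5+ warns about the missing __init__.py; -W error makes that fatal
    python2.6 -W error -c "import apport"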
[17:29] ok [17:29] thanks :) [17:32] it can be really difficult to apply concepts across VCSs, because they all have different models and different terminology [17:37] hmm yeah === beuno is now known as beuno-lunch === Ursinha is now known as Ursinha-afk === JasonO_ is now known as JasonO === deryck[lunch] is now known as deryck === luks_ is now known as luks === Ursinha-afk is now known as Ursinha === beuno-lunch is now known as beuno === JasonO_ is now known as JasonO === psynaptic|sick is now known as psynaptic|away === Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha [23:20] hi all [23:29] those were nice numbers, john === Ursinha is now known as Ursinha-afk [23:45] morning. [23:45] poolie: so, hi :) how's the new chroot looking? [23:51] some things seem to have landed overnight [23:51] i didn't see anyone complain [23:51] * poolie checks [23:52] nup, no replies complaining [23:52] awesome. [23:52] thanks for your help [23:53] no worries, I'll flip the chroot name now and then we're all good [23:53] pqm-bzr-amd64-lucid, right? [23:55] yep; unless there's any precedent for putting those in any other order? [23:56] well, the current precedent on balleny is chroot-amd64, so I think we can ignore it :) [23:56] I'm not aware of anywhere else we do anything like that, so its good for me [23:57] I'll move the old one to pqm-bzr-amd64-hardy too [23:58] actually, chroot-pqm-bzr-amd64-lucid, although thats getting a bit long [23:59] thats what I'll call the directory [23:59] dchroot is pqm-bzr-amd64-lucid, filepath is chroot-pqm-bzr-amd64-lucid