[00:10] hazmat, because the reverse diff i did against trunk to recover from lbox submit destroyed history, i'm going to propose branches again as new branches
[00:22] unfortunately recovering from destroyed history is unfortunate
[00:36] jimbaker, huh?
[00:36] destroyed history?
[00:36] where?
[00:36] when you merged trunk back to your local branch
[00:36] you just reapply a reverse to your local branch
[00:36] hazmat, please take a look at trunk r501 then 502
[00:37] jimbaker, you merged and then reverted
[00:37] hazmat, i have not yet merged trunk to the local branch
[00:37] that is, my relation-id branch
[00:38] but i'm not getting proper ancestor diffs against it
[00:38] hazmat, so "destroyed" may be the wrong term, but it's certainly not readily usable
[00:39] jimbaker, generate a diff of your branch to the previous version of trunk 500
[00:39] merge trunk
[00:39] reapply diff
[00:39] to your branch
[00:39] you will lose some of the history on your branch
[00:39] well as you say not lose, but garble
[00:40] but you'll have proper diffs
[00:40] and won't lose any work
[00:41] jimbaker, how's the rel-id work coming outside of that?
[00:41] hazmat, is it reasonable that bzr diff -r ancestor:../../trunk produces an empty result?
[00:41] hazmat, it is done
[00:42] jimbaker, can you push your latest.. i'll play around with it
[00:42] hazmat, thanks
[00:42] hazmat, it is pushed
[00:42] i will push the other 3 branches too. they are also complete
[00:43] in terms of responding to all reviews
[00:44] jimbaker, have you lbox propose -cr them?
[00:44] hazmat, previously
[00:44] hazmat, all branches have been pushed
[00:45] jimbaker, they should get reproposed if they're not approved
[00:45] * hazmat looks at rel-id
[00:45] hazmat, should i use lbox propose -cr? previously we discussed using lbox submit
[00:45] for this purpose
[00:49] hazmat, i will be back in about 1 hour, kids need to eat dinner
[00:49] jimbaker, lbox submit merges
[00:51] jimbaker, when was that?
[00:51] jimbaker, here's a diff of your branch against r500 http://paste.ubuntu.com/906490/
[00:52] you can merge trunk and reapply the diff, there are 4 conflicted files
[00:52] that need manual attention
[00:53] jimbaker, please submit rel-id for review again after you've merged trunk and reapplied the diff, i'd like to double check the merge
[09:23] rogpeppe: morning
[09:24] TheMue: hiya
[09:24] rogpeppe: just proposed the new config watch, looks nice
[09:24] rogpeppe: now i'm off for some hours, family needs me
[09:24] TheMue: i don't see any email
[09:25] TheMue: have you got a codereview link?
[09:25] rogpeppe: strange, google lags ;)
[09:25] TheMue: ok, see you later
[09:25] TheMue: i refreshed too
[09:26] rogpeppe: https://codereview.appspot.com/5885059/
[09:26] TheMue: thanks
[13:28] hazmat, btw, did you ever resolve the can't-actually-bootstrap-with-constraints problem you saw yesterday?
[13:28] lunch
[13:45] fwereade_, no.. i'm not able to bootstrap on ec2 with warned-ignore-constraint branch
[13:46] fwereade_, it looks like some app failures
[13:46] hazmat, bah... and it's definitely neither a juju-origin nor a PYTHONPATH problem?
[13:46] hazmat, sorry, "app failures"?
[13:46] fwereade_, er.. apt
[13:47] hazmat, huh, weird... I don't think I changed anything related to that
[13:48] hazmat, wasn't niemeyer talking to you about having some problem like that himself the other day?
[13:48] fwereade_, i'm able to deploy envs with trunk without issue
[13:49] hazmat, ++weird
[13:49] hmm.. pythonpath..
[13:49] that was in a virtualenv.. indeed
[13:50] * hazmat tries again
[13:50] fwereade_, yeah.. looking over the cloud-init on the instance i don't see the constraints so that is likely the issue.. user error ;-)
[13:51] * fwereade_ looks relieved
[13:51] bbiam
[14:16] fwereade_, hm.. even with the correct branch it's hanging
[14:16] although it's not clear that this problem has anything to do with the branch
[14:16] hazmat, do you know what's hanging?
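The recovery recipe above (diff the branch against the last good trunk revision, merge trunk, then reapply the diff by hand) can be written down as a fixed command sequence. A minimal illustrative sketch in Go; the trunk path, the revision number 500, and the patch filename are assumptions for illustration, not taken from the log:

```go
package main

import "fmt"

// recoverSteps returns the command sequence for the recovery recipe:
// 1. diff the branch against the last good trunk revision (save the output),
// 2. merge trunk,
// 3. reapply the saved diff, fixing any conflicts by hand.
// bzr's "revno:N:branch" revision spec pins the diff to revision N of
// the given branch.
func recoverSteps(trunk string, lastGoodRev int) [][]string {
	return [][]string{
		{"bzr", "diff", "-r", fmt.Sprintf("revno:%d:%s", lastGoodRev, trunk)}, // redirect output to recovered.diff
		{"bzr", "merge", trunk},
		{"patch", "-p0", "-i", "recovered.diff"}, // expect some conflicts needing manual attention
	}
}

func main() {
	for _, cmd := range recoverSteps("../trunk", 500) {
		fmt.Println(cmd)
	}
}
```

The point of the recipe is that the reapplied diff, not the garbled merge history, becomes the reviewable change against the new trunk.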
could it just be a grumpy EC2?
[14:16] vs. an apt or repo issue
[14:17] fwereade_, it's failing doing a package install
[14:17] hazmat, ah ok
[14:17] hazmat, do you know what package?
[14:17] hazmat, (not that that's actually going to tell me anything useful I can think of...)
[14:20] fwereade_, hm. there are two..
[14:20] CalledProcessError: Command '['apt-get', '--option', 'Dpkg::Options::=--force-confold', '--assume-yes', 'update']' returned non-zero exit status 100
[14:21] bzr: ERROR: Invalid url supplied to transport: "bzr+ssh://bazaar.launchpad.net/~fwereade/juju/warn-ignored-constraint": no supported schemes
[14:21] hazmat, huh, never seen anything like those before :(
[14:27] fwereade_, also happens on trunk :-(
[14:27] hazmat, ha, that's one of those things that would be a relief if it weren't an "OH CRAP"
[14:27] fwereade_, indeed :-)
[14:49] fwereade_, the repo issue is being looked into by ben and scott, it's something they're aware of
[14:49] the bzr issue is odd as well
[14:50] hazmat, http://search.dilbert.com/comic/Actively%20Waiting
[14:50] ;)
[14:51] hazmat, bzr is very weird
[14:51] it feels like a regression of some sort
[14:51] fwereade_, oh..
[14:51] your branch might be private
[14:51] that would explain it
[14:52] fwereade_, yeah.. i can bzr branch lp:juju but not your branch..
[14:52] hazmat, gaaah, sorry
[14:53] that's just from the anon perspective.. that's odd though
[14:53] rogpeppe: next watcher version is in the box
[14:53] hazmat, I've merged everything except the doc changes into shadow-trunk-1204; I'll go and peer at that on launchpad and see if I can figure out what to do
[14:53] TheMue: cool
[14:53] hazmat, ...and hey, how did it ever work for me if that's the problem?
[14:54] fwereade_, good question
[14:54] TheMue: i've got the email this time.
previous one must have been lost in transit sometime
[14:54] hazmat, hey, it's "warn-ignored-constraints"
[14:54] hazmat, your error message was missing the terminal s
[14:54] fwereade_, yeah.. i was just reading about the relevant bug
[14:54] bug 854713
[14:55] oh.. no mup here
[14:55] https://bugs.launchpad.net/bzr/+bug/854713
[14:55] hazmat: am I reading the tea leaves correctly that this is the final bit for subordinates: https://code.launchpad.net/~bcsaller/juju/subordinate-control-status/+merge/98088 ?
[14:55] hazmat, icky :)
[14:56] SpamapS, there's one more branch out for subordinates in addition to that one
[14:56] that branch's prerequisites have been merged though
[14:56] so just the sub-status and sub-agent branches
[15:02] rogpeppe: at least the link worked and i got your helpful comments
[15:04] TheMue: LGTM (with a couple of minor comment issues)
[15:05] rogpeppe: thx
[15:06] TheMue: i'm very happy with how this is starting to look
[15:06] rogpeppe: "observices"? oh no! seems like a bad mixture of observes and service.
[15:06] rogpeppe: *lol*
[15:10] rogpeppe: having a native speaker as lector is really helpful.
[15:10] rogpeppe: thx
[15:10] TheMue: np
[15:38] hazmat, did you have a chance to look at the most recent relation id branches in rietveld (there are 4)? also thanks for helping me recover relation-id with respect to trunk!
[15:39] jimbaker, not yet, i haven't seen updated rietveld reviews for them outside of id
[15:40] jimbaker, http://codereview.appspot.com/user/jimbaker i see them here though. it's helpful to actually use a description on these.
[15:41] hazmat, np, i don't know why we have Issue 5900068:
[15:42] it has a description, not just a summary - and i'm quite certain it had one in the editor.
i'll see if i can change that
[15:42] need to work on lbox fu i suppose
[15:53] jimbaker, don't worry about it, i think you're not supposed to edit it
[15:53] lbox may use it for sync with the mp
[15:53] at least the mp always says don't edit it
[15:54] hazmat, ok... we will see how it works out. i hope it uses something besides text to link (since i did edit it!)
[15:54] no idea
[15:54] jimbaker, the src is only a lp:lbox away
[15:55] hazmat, yeah, i will think about that ;)
[16:08] fwereade_, i think it's probably good to merge the shadow branch and deal with the docs merge later
[16:30] fwereade_, docs reviewed
[16:32] hmm.. looks like the sub-status/sub-agent aren't updated in rietveld yet
[16:32] * hazmat moves on to rel-id
[16:32] hazmat, thanks
[16:35] hazmat, sweet
[16:36] hazmat, I'll make sure it's up to date, retest, and do a propose for form's sake
[16:47] fwereade_, sounds good, but are you sure you're not just going for the biggest merge medal :-)
[16:48] hazmat, heh, now I know there is one that's totally what I'm doing :p
[16:56] hazmat, https://code.launchpad.net/~fwereade/juju/shadow-trunk-1204/+merge/100195
[16:57] hazmat, it's only also on rietveld because I wanted to see what it looked like :p
[17:00] jimbaker, i'm seeing a lot of errors running tests against the rel-id branch
[17:01] hazmat, let me check that
[17:01] (i ran test multiple times... but... let's see it run once more)
[17:04] jimbaker, i did a pull.. but it seems to be in the scheduler code..
[17:04] File "/home/kapil/canonical/ensemble/jimbaker/relation-id/juju/hooks/scheduler.py", line 185, in __init__
[17:04] self._relation_ident = relation_ident
[17:04] exceptions.NameError: global name 'relation_ident' is not defined
[17:05] * hazmat checks his branch for previous gymnastics causing foobar
[17:05] eyup, he's got an off-brand file path there
[17:05] jimbaker, yeah..
sorry it was my gymnastics to get a good diff earlier that seem to have caused the issue
[17:06] fwereade__, sadly not, they weren't cleanly relocatable
[17:06] pipelines end up recording full paths
[17:06] i saved the juju one for golang ;-)
[17:06] hazmat, haha, nice
[17:06] hazmat, if you'd give me an ultra-quick form approval on that branch I'll submit before I go out
[17:08] hazmat, it's running cleanly for me. quick check, on your diffstat are you getting 10 files changed, 201 insertions(+), 99 deletions(-) ?
[17:08] hazmat, (not sure why, it feels wrong for me to do it myself)
[17:08] fwereade__, fair enough
[17:09] jimbaker, see above comments, it was caused by me generating the diff
[17:09] tests are running fine now
[17:09] hazmat, that's a relief
[17:10] * hazmat wonders if we can hit 2k tests b4 the release
[17:11] hazmat, certainly getting close
[17:13] fwereade__, that's hilarious re actively waiting comic
[17:16] fwereade__, approved
[17:17] hazmat, many thanks, submitting now
[17:23] jimbaker, rel-id looks good
[17:23] bcsaller, the sub-status and sub-agent need to be lbox proposed again if they're ready for review
[17:23] * hazmat looks for some food, bbiab
[17:24] hazmat: not quite ready
[17:24] hazmat, thanks
[20:17] hazmat, any more feedback on the other relation id branches?
[20:28] jimbaker, not yet
[20:28] hazmat, ok
[21:33] niemeyer: how ok is gocheck with concurrency?
[21:34] wrtp: In which regard?
[21:34] niemeyer: if a test runs a new goroutine that then does an Assert, what happens?
[21:34] (when the assert fails, that is)
[21:35] wrtp: The same thing as with testing
[21:35] wrtp: It'd attempt to panic such a goroutine, which would cause bad things to happen
[21:35] niemeyer: i don't know the situation with testing actually.
[21:35] wrtp: It's ok with concurrency, but the panic-on-unknown-goroutine is a hard one to handle
[21:35] niemeyer: ah, it throws a panic rather than using runtime.Goexit() ?
[21:36] wrtp: It uses Goexit actually, for now, but still it's not great
[21:36] niemeyer: i think that would probably work ok
[21:36] niemeyer: for me anyway
[21:36] wrtp: It continues running, which invalidates the point of using assert
[21:37] niemeyer: i can have a chan in the goroutine that gets closed when the goroutine goexits
[21:37] niemeyer: hmm, what happens if you get two asserts that fail? does it mind?
[21:38] wrtp: it doesn't mind, but I'll probably mind having to maintain that logic :)
[21:38] wrtp: Just do something usual
[21:38] niemeyer: the alternative is to pass errors back down a channel
[21:38] wrtp: Sounds fine
[21:38] niemeyer: which might be the best course. i just wondered if it might be reasonable to use assert.
[21:39] niemeyer: it would read well, i think.
[21:40] niemeyer: by not using Assert, i lose the context for the error, which is a pity.
[21:41] wrtp: You can error there
[21:41] wrtp: Error, I mean
[21:41] wrtp: or Check, in that case
[21:41] niemeyer: yeah, Check would make things more obvious, i guess
[21:43] although: c.Assert(err, IsNil) isn't quite as nice as if !c.Check(err, IsNil) { return }
[21:51] niemeyer: if you think of each new goroutine as a new test, the TestFoo function as a kind of "main", and the result being the boolean OR of all the test results before TestFoo returns, it makes quite a bit of sense actually.
[21:51] niemeyer: it scales to goroutines quite nicely.
[21:51] wrtp: I don't think of each goroutine as a new test
[21:52] niemeyer: i'm thinking of them as sub-tests
[21:53] wrtp: c.Assert == t.Fatal if it fails.. a fatal that isn't fatal is bogus
[21:53] niemeyer: fatal to what?
[21:53] wrtp: t.Fatal
[21:53] what's t.Fatal?
[21:54] wrtp: Please check out the testing package
[21:54] i know that Fatal, ok
[21:55] but again, fatal to what? what's dying?
the test has multiple pieces that can die individually
[21:56] niemeyer: i think it works quite well to think of it that way. it means that tests can scale up to concurrency well.
[21:56] wrtp: Please actually do read the docs
[21:57] wrtp: That's not what Fatal or Assert means, and I'm not keen on changing their meaning
[21:58] niemeyer: yeah, FailNow could do with an updated doc. "FailNow marks the function as having failed and stops its execution. Execution will continue at the next test or benchmark when the original Test function finishes"
[21:58] niemeyer: it would be nice if Fatal/FailNow/Assert could have a useful meaning in a concurrent test
[22:07] niemeyer: here's a flavour of how the two approaches look side by side: http://paste.ubuntu.com/907926/
[22:18] niemeyer: ah! i can define my own assert that calls Check and Goexits if it fails.
[22:57] wrtp: I'd rather have it inlined in the test, and simple
[22:58] niemeyer: it is simpler as an assert - not as much control structure to get right.
[22:58] I'm heading off for dinner.. laters all, or see you all on Monday
[23:01] niemeyer: see you in 10 days!
[23:01] niemeyer: have a good week.
[23:04] have a great Easter, everyone
[23:04] see you in a bit
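The two approaches debated above — passing errors back down a channel versus a custom assert that calls Check and Goexits the failing goroutine — can be sketched with the standard library alone (the original paste is gone, and this deliberately avoids the gocheck API of the era). The `checker`, `run`, and `assert` names are hypothetical illustrations, not anyone's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// checker records whether a sub-goroutine "test" failed; done is closed
// when the goroutine exits, normally or via Goexit.
type checker struct {
	failed bool
	done   chan struct{}
}

// assert mimics a per-goroutine Assert: on failure it records the failure
// and stops only this goroutine with runtime.Goexit. Goexit still runs
// deferred calls, so the done channel is closed either way.
func (c *checker) assert(cond bool, msg string) {
	if !cond {
		c.failed = true
		fmt.Println("check failed:", msg)
		runtime.Goexit()
	}
}

// run starts f in its own goroutine, treating it as a sub-test whose
// result is collected after <-c.done.
func run(f func(*checker)) *checker {
	c := &checker{done: make(chan struct{})}
	go func() {
		defer close(c.done)
		f(c)
	}()
	return c
}

func main() {
	// approach 1: assert helper that Goexits the failing goroutine
	bad := run(func(c *checker) {
		c.assert(1 == 2, "deliberate failure")
		fmt.Println("never reached") // Goexit stopped the goroutine above
	})
	<-bad.done
	fmt.Println("sub-test failed:", bad.failed)

	// approach 2: pass errors back down a channel, assert nothing
	errc := make(chan error, 1)
	go func() {
		if 2+2 != 4 {
			errc <- fmt.Errorf("arithmetic broken")
			return
		}
		errc <- nil
	}()
	fmt.Println("channel error:", <-errc)
}
```

The channel version keeps all failure handling in the main test goroutine, which is why it was deemed the safer course; the Goexit version reads more like a normal assert but, as noted in the log, redefines what "fatal" means for the enclosing test.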