[00:44] davecheney, any comment re juju for armhf? ^
[00:45] * hazmat notices previous nick ref too late
[00:59] hazmat: recap ?
[01:00] davecheney, small. but http://paste.ubuntu.com/5692238/ mwhudson's been asking around
[01:03] TTBOMK go 1.1 is not available in any ubuntu form for armhf
[01:03] 1.1 was imported from upstream into saucy proposed, but it didn't build on armhf
[01:03] that is all I know at the moment
[01:04] If you're happy to use a tarball, http://dave.cheney.net/unofficial-arm-tarballs
[01:22] davecheney: how closed is the goamz group? I have a goamz branch i want to land. should i bother asking for group membership or is it best to get one of the group members to land it?
[01:25] wallyworld_: dunno
[01:25] is this another 'we're all not members of gophers' problem ?
[01:25] i'm not a member of ~goamz
[01:26] which owns the goamz project
[01:26] you and rodger are
[01:27] wallyworld_: congrats on landing all those branches
[01:28] thumper: yeah, finally!
[01:28] wallyworld_: well, that is fucked
[01:28] let me see if I can fix that
[01:28] davecheney: thanks!
[01:28] davecheney, wallyworld_: here is a trivial branch to review: https://codereview.appspot.com/9677044/
[01:28] nope, can't fix
[01:28] gustavo owns it
[01:29] restricted team :-(
[01:29] ok, will ask him
[01:29] davecheney: we can fix it... with the right request :)
[01:29] davecheney: ok thanks
[01:29] i just wasn't sure how protective he is of that project
[01:29] * wallyworld_ looks at thumper's branch
[01:30] * thumper considers a constraint of "machine=0"
[01:31] --force-machine is all well and good, but doesn't capture intent in state
[01:31] I'm trying to work out a way to have state represent the desire for maximum density...
[01:31] a.k.a. cheap solution with everything on one ec2 instance
[01:34] s/ec2/openstack as well :-)
[01:34] wallyworld_: exactly
[01:34] wallyworld_: I've been reading through jam's comments on the wiki page
[01:35] I don't want to go too far down the path of explicitness when we know we need intent
[01:35] thumper: force-machine was always a *very* short term quick fix AFAIUI, so we had better not propagate its use
[01:35] that is my thinking too
[01:36] * thumper considers a multitude of edge cases
[01:36] i think the comments were more along the lines of - we can do the mechanics first (and require explicitness), and once the machinery is there, automate the intent
[01:36] juju deploy -n 2 wordpress --constraints "machine:0"
[01:36] which i agree with at first read
[01:36] should barf
[01:36] what does -n do?
[01:36] yeah, I've been thinking about that too
[01:36] deploy two units
[01:37] of the service
[01:37] for HA?
[01:37] load balancing
[01:37] yes
[01:37] although...
[01:37] would be possible if (and only if) containerised
[01:38] yes, makes sense to allow that scenario for testing, even if on the one physical machine
[01:38] however...
[01:38] what happens if you have the initial environment...
[01:38] where everything was deployed onto machine/0
[01:38] then you want to scale out
[01:38] and add two more units of "foo"
[01:39] you don't want it to use the constraint "machine:0"
[01:39] you want it to make new machines...
[01:39] ish
[01:39] maybe
[01:39] geez
[01:39] so you would just "juju deploy foo" without the constraint bit?
[01:40] then it would allocate a new machine?
[01:40] no, it is deployed
[01:40] you use add-unit
[01:40] right, yes
[01:40] it already has constraints though
[01:40] how about a constraint...
[01:41] so you override those?
[01:41] yeah
[01:41] but I imagine it would get icky
[01:41] perhaps last specified constraint wins
[01:42] or maybe a constraint for "just this one operation"
[01:42] yeah...
[01:42] eg policy is to allocate a new machine, but just for this add-unit, do it here
[01:42] a unit constraint
[01:42] rather than a service constraint
[01:42] yes
[01:42] that _may_ work
[01:42] still complicated for users to internalise the model though
[01:43] juju status would need serious work i reckon
[01:43] yeah, I'd prefer something semantically simple
[01:43] to work with maximum unit density
[01:43] i reckon we need a "tell me what you are about to do but let me confirm first" switch
[01:43] I think the default will soon be to have machines with lots of units on them
[01:44] one machine to do multiple things
[01:44] i've been wanting that for ages
[01:44] didn't understand why we never had it to start with
[01:44] yeah, unit density is the primary outcome for containers I think
[01:44] because it is simpler to start a new machine for everything?
[01:44] maybe
[01:45] the trick is to record intent
[01:45] but to have intent flexible
[01:45] and easily understandable
[01:45] * thumper waves his hands
[01:45] and magic happens
[01:45] that shit
[01:45] USB plugged into the back of users' heads should do the trick
[01:46] wasn't there a movie ages ago called "free jack" ?
[01:46] not sure, perhaps
[01:46] maybe only in NZ
[01:46] thumper: so we still have lots to discuss before we can fire up our IDEs and start hacking methinks
[01:47] wallyworld_: well, there should be some things we can start on soon
[01:47] wallyworld_: but the subtleties are the hard bit
[01:47] and I'd like william's input
[01:47] when he is back next week
[01:48] yes. i fear even the "simple" things could turn nasty unless we fully understand the solution we are aiming for
[01:49] although if we agree containers are machines with parent machines we could start reworking the relevant interfaces
[01:50] yes
[01:50] and having a create-machine command could be useful
[01:50] we already have a destroy-machine
[01:51] as in create-machine just ultimately calls the provider's startInstance() and registers the machine in the env but there's nothing deployed to it?
[01:51] right
[01:52] but "create-machine 0:lxc" could create an lxc container on machine 0
[01:52] if we were to do that, and come back later and do a deploy, that machine should then be used
[01:52] and put a machine agent on it
[01:52] rather than creating a new one
[01:52] maybe...
[01:52] sounds reasonable at least
[01:53] i think your n:lxc thing works also
[01:53] "create-machine :lxc" would create a new machine, and a container
[01:53] this way, we could have nested containers
[01:53] but there's no point doing those things as commands for the user to run manually unless the resulting machine/container were to be used by juju later
[01:53] as the container is represented as a machine
[01:54] wallyworld_: unless we had a constraint "machine x"
[01:54] but I agree, it should be deployed to
[01:54] if it is empty
[01:54] hmmm. that would require the user to keep track of a lot of stuff
[01:54] and fits the requested constraints
[01:54] wallyworld_: not necessarily
[01:54] yes, fits the constraints is essential
[01:54] and it is only a first cut
[01:55] and it gives a very definite "first thing" to do
[01:55] thumper: ok, so i'll create a branch for the start-machine command
[01:55] ok.
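The "0:lxc" / ":lxc" syntax floated above implies a small placement grammar. As a purely illustrative sketch (hypothetical names, not the eventual juju-core code), parsing it might look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// Placement is a hypothetical reading of the proposed argument:
// "0:lxc" = an lxc container on existing machine 0, ":lxc" = a
// container on a brand new machine, "" = a plain new machine.
type Placement struct {
	Machine   string // existing machine id, or "" for a new machine
	Container string // container type such as "lxc", or "" for none
}

func parsePlacement(arg string) Placement {
	if i := strings.Index(arg, ":"); i >= 0 {
		return Placement{Machine: arg[:i], Container: arg[i+1:]}
	}
	return Placement{Machine: arg}
}

func main() {
	fmt.Printf("%+v\n", parsePlacement("0:lxc")) // {Machine:0 Container:lxc}
	fmt.Printf("%+v\n", parsePlacement(":lxc"))  // {Machine: Container:lxc}
}
```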
[01:55] initially take no params
[01:55] initially without :lxc
[01:55] and just create a new machine from the provider
[01:55] ack
[01:55] yes
[01:55] also...
[01:56] since lxc not done yet
[01:56] hmm...
[01:56] we do have a weird case
[01:56] we explicitly don't reuse machines
[01:56] which means, if a machine is sitting there empty
[01:56] we could do if they are marked as empty
[01:56] we explicitly don't use it
[01:56] let's add an "is empty" param
[01:56] attribute
[01:56] well...
[01:56] it is harder than that
[01:57] we effectively have "is empty" by "no principal units"
[01:57] and there is code there already
[01:57] ok
[01:57] perhaps what we need is "clean"
[01:57] a machine is clean if it is new
[01:57] sounds better
[01:58] if a unit is deployed, not in a container
[01:58] and then removed
[01:58] the machine is "unclean"
[01:58] yes
[01:58] if a unit is deployed in a container
[01:58] and removed
[01:58] the machine is clean
[01:58] but if container is nixed, machine is clean
[01:58] we can use clean machines
[01:58] yes
[01:58] not "unclean"
[01:58] +1
[01:58] ok, this is almost making sense
[01:59] i'll start hacking and see where the yellow brick road takes me
[01:59] suspect there will be a few branches
[01:59] wallyworld_: start with a create-machine
[01:59] yes
[01:59] I'll try to document our thought process above
[01:59] ok, thanks
[01:59] add it to your document
[02:00] i can't wait for this stuff to mature, i really want to fire up lots of stuff on my local machine
[02:00] yeah
[02:00] we'll make it fully manured
[02:00] lol
[02:00] * thumper goes to shower
[02:00] bad visual image
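The "clean" rule agreed here boils down to one bit of state per machine. A minimal sketch under that reading (hypothetical types, not juju-core's actual model):

```go
package state

// A machine starts clean; deploying a unit directly on it (not inside
// a container) marks it unclean for good, while container-only units
// leave the host machine clean, since removing the container restores
// a pristine host.
type Machine struct {
	clean bool
}

func NewMachine() *Machine {
	return &Machine{clean: true}
}

// AssignUnit records a unit assignment; only direct (non-container)
// assignments dirty the machine.
func (m *Machine) AssignUnit(inContainer bool) {
	if !inContainer {
		m.clean = false
	}
}

// Clean reports whether juju may pick this machine for new units.
func (m *Machine) Clean() bool {
	return m.clean
}
```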
[02:03] wallyworld_: Not too protective.. just want to give access to people actually doing good work on it.
[02:03] wallyworld_: If you have stuff for it, I'll add you to the team
[02:04] niemeyer: thank you. i just wanted to make sure i wasn't stepping over any boundaries
[02:04] wallyworld_: Not at all, thanks for contributing
[02:04] niemeyer: my branch fixes a user reported bug - it allows for EC2_ env vars to be recognised for credentials
[02:05] wallyworld_: Oh, haven't heard of those
[02:05] wallyworld_: Thanks
[02:05] wallyworld_: That's done
[02:05] niemeyer: is there a landing bot or do we run tests locally and push manually
[02:05] thanks
[02:05] wallyworld_: The latter
[02:06] wallyworld_: "LGTM" + test + lbox submit
[02:06] niemeyer: thanks. bug 1174874 fwiw
[02:06] <_mup_> Bug #1174874: EC2_* style Amazon environment variables not consulted
[02:07] wallyworld_: Huh..
[02:07] wallyworld_: Where's the reference for these environment variables?
[02:07] wallyworld_: I mean, who uses it?
[02:07] not sure. i know i had them set up originally when i got my ec2 account
[02:07] wallyworld_: As John points out, the EC2 tools actually use the AWS_* ones
[02:08] * niemeyer googles
[02:08] i think they may have come from the aws account sign up perhaps
[02:08] not sure
[02:08] but i have . files with them in from ages ago
[02:08] eucatools
[02:08] Okay, +1
[02:08] ok, thanks
[02:09] just gives a nice robustness for the user
[02:09] Yeah
[02:30] niemeyer: when i run the tests, there's an issue with the test http server already being started. from what i can see, a few tests start separate test servers and don't clean up?????
[02:30] this issue is unrelated to my changes but best not to land something with test failures
[02:31] wallyworld: In aws_test.go?
[02:31] var testServer = testutil.NewHTTPServer() is in several test files
[02:32] s3_test, iam_test etc
[02:32] running go test ./... from the root dir exposes the issue
[02:32] example error:
[02:32] PANIC: s3_test.go:28: S.SetUpSuite
[02:32] ... Panic: listen tcp 127.0.0.1:4444: address already in use (PC=0x413D81)
[02:32] /usr/lib/go/src/pkg/runtime/panic.c:229
[02:32] in panic
[02:32] /home/ian/juju/go/src/launchpad.net/goamz/testutil/http.go:47
[02:32] in HTTPServer.Start
[02:32] s3_test.go:29
[02:32] in S.SetUpSuite
[02:33] so several tests call testServer.Start()
[02:34] maybe there needs to be a Stop() method called from TearDown
[02:58] niemeyer: i fixed the issue https://codereview.appspot.com/9545045
[02:58] go test ./... from the root dir now works
[02:59] wallyworld: Hmm
[03:00] wallyworld: I'd prefer to not mix the two sets of variables
[03:00] which variables? the listener and testServer?
[03:00] wallyworld: if auth.AccessKey == "" || auth.SecretKey == "" { ... set both of them again using EC2_* ... }
[03:01] oh, you mean the bug fix
[03:01] ok, i can change that
[03:01] wallyworld: As it is, it's mixing the two sets of variables, which may end up pretty confusing
[03:01] sure, will change it
[03:01] you agree with the test fixes?
[03:02] wallyworld: Yeah
[03:02] with the variables thing, i just did what we did for goose
[03:02] so you won't agree with goose either :-)
[03:03] wallyworld: I suppose :)
[03:03] maybe we'll do a drive by fix sometime
[03:03] wallyworld: On Stop(), might be worth adding listener = nil at the end, although that's really minor
[03:04] ok
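The shape of that fix, sketched: a Stop method that releases the fixed port so the next suite's SetUp can bind it again. This is an illustration of the idea, not the actual goamz testutil code; the handler field is an assumption.

```go
package testutil

import (
	"net"
	"net/http"
)

// HTTPServer sketches a shared test fixture that binds a fixed port;
// if an earlier suite never stops it, the next Start panics with
// "address already in use", exactly the failure pasted above.
type HTTPServer struct {
	URL      string
	listener net.Listener
	handler  http.Handler // illustrative; the real server records requests
}

func (s *HTTPServer) Start() {
	if s.listener != nil {
		return // already running
	}
	l, err := net.Listen("tcp", "127.0.0.1:4444")
	if err != nil {
		panic(err)
	}
	s.listener = l
	s.URL = "http://" + l.Addr().String()
	go http.Serve(l, s.handler)
}

// Stop closes the listener so the next suite can bind the port again;
// call it from the suite's TearDown. Resetting listener to nil (as
// suggested in the review) lets Start be used once more.
func (s *HTTPServer) Stop() {
	if s.listener != nil {
		s.listener.Close()
		s.listener = nil
	}
}
```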
[03:28] * thumper laughs
[03:28] grr
[03:28] I was going to bitch at niemeyer
[03:28] * thumper wonders if he'll come back
[03:41] davecheney, wallyworld: updated that branch https://codereview.appspot.com/9677044/ to follow gustavo's suggestion
[03:42] * wallyworld looks
[03:48] wallyworld: ta
[03:48] np
[03:48] imagine my surprise when I found a test that actually tested the use case, and it passed
[03:48] another Go gotcha
[03:49] I'm pleased it isn't idiomatic to return None in python instead of []
[03:49] 'cause that kinda blows
[03:52] yes indeed
[04:00] thumper: i wonder, should the new machine being created use constraints stored in the env (for mem etc) or allow the user to specify different ones
[04:01] yes
[04:01] wallyworld: both
[04:01] ok, so if none specified, use the env ones
[04:01] wallyworld: specifying new ones overrides the existing env, but becomes a superset
[04:01] not an override, but a merge
[04:02] with specified overriding others
[04:02] make sense?
[04:02] yes
[04:02] aye
[04:07] * thumper looks at the assignment policies
[04:07] again...
[04:08] actually, I'll go pick up the sushi first
[04:55] davecheney: what is our next juju-core milestone?
=== tasdomas` is now known as tasdomas_afk
[06:42] morning all
[07:16] morning
[07:17] TheMue: hiya
[07:17] dimitern: just playing around with the gmail account ;)
[07:18] TheMue: oh, yeah?
[07:19] dimitern: yep, I've used the automatic filtering so far. now I've got a lot of labels for each imap folder and I want to clean it up.
[07:20] TheMue: i have like 60 filters in thunderbird already, didn't want to go through the hassle of migrating to gmail, so I opted out
[07:23] dimitern: hehe
[07:24] dimitern: I dislike client side filters as I access my mail (private as well as work) from multiple devices
[07:24] dimitern, TheMue: my point as well. I did client side for a while, but I access the account from multiple locations (phone, laptop, desktop), and client-side doesn't work so well there.
[07:25] I was trying it back when I ran my own mailserver, but server-side just works a lot better.
[07:26] jam: yes. I run my private mail server (for my family and some friends) too and there I also provide server-side filtering
[07:26] TheMue: gmail does ok. There are bits I'd like to be better, but at least to set up a filter you can say "filter messages like this" and it will detect the list header and set it for you from the start.
[07:26] It doesn't work with custom Launchpad headers.
[07:27] but for lists, it is quite easy to set up.
[07:28] rogpeppe1: ping
[07:29] jam: yes, looks fine so far. will now only clean up the migrated old folder. I kept too much unneeded stuff
[07:29] dimitern: pong
[07:29] rogpeppe1: hey, i've been trying to figure out the correct client-side logic for the LifecycleWatcher loop, in particular when to call next initially
[07:30] * rogpeppe1 goes to look at state.LifecycleWatcher
[07:30] rogpeppe1: I should call it once initially and then after the select (like in entitywatcher), right?
[07:30] thumper: milestone ?
[07:30] you mean release number
[07:30] nfi
[07:31] i hope someone tells me
[07:32] dimitern: that sounds plausible, yes
[07:32] dimitern: actually, no
[07:33] dimitern: i think you should follow the pattern of entitywatcher
[07:33] rogpeppe1: but for that initial call I shouldn't care as much for error handling - I mean return whatever error I got from Next, otherwise for the second call (after the select), I use the same logic as in the entitywatcher loop
[07:34] rogpeppe1: well, I need some changes []string to send on the out channel
[07:34] dimitern: why should the error handling be different for the first call and the second call?
[07:34] rogpeppe1: and the select appears before the Next call
[07:34] dimitern: ah, yes, the logic will need to be a bit different - you'll need to set the out channel to non-nil only when you've got something to send
[07:35] rogpeppe1: ah, right - that tricky case
[07:35] rogpeppe1: so set w.out to nil initially to bypass the select and set it to not nil after i have changes?
[07:36] dimitern: yup
[07:36] dimitern: well, probably have a local "out" variable to do that
[07:36] dimitern: just before the select, set it to non-nil iff there's something to send.
[07:37] rogpeppe1: yes, I have out := w.out; w.out = nil; var lastChanges []string
[07:37] rogpeppe1: and w.out <- lastChanges (both of these are set when relevant)
[07:37] dimitern: i wouldn't ever set w.out to anything different than its original value
[07:38] dimitern: i think it's easier to see correctness when mutable state is in local variables
[07:38] rogpeppe1: so use out <- lastChanges instead of w.out in the select then?
[07:39] rogpeppe1: and have var out chan []string before the for loop
[07:39] dimitern: no, do: for { var out chan []string; if lastChanges != nil { out = w.out }; select {case out <- lastChanges: lastChanges = nil; case lastChanges = <-in: } }
[07:39] or something like that
[07:40] dimitern: you'll probably need to have a loop calling Next in its own goroutine
[07:40] rogpeppe1: I didn't get the last "case" - why that?
[07:41] dimitern: how are you going to receive new changes?
[07:41] rogpeppe1: after the select I have if changes, err := callNext(); err != nil .....
[07:42] dimitern: you can't do that
[07:42] rogpeppe1: why not?
[07:42] dimitern: because the call to Next can block indefinitely
[07:42] dimitern: which means you won't be able to stop the watcher
[07:43] rogpeppe1: hmm - why is it working for the entitywatcher's loop then?
[07:43] rogpeppe1: no goroutine for calling next there
[07:43] dimitern: hmm, yes. let me think a mo
[07:43] dimitern: (the other works because Stop is called in a goroutine)
[07:44] rogpeppe1: i suspected as much
[07:45] rogpeppe1: so with the same goroutine for stop it should work w/o another for next?
[07:45] dimitern: i don't think it would be quite right
[07:45] dimitern: the key is in the "Note that because the change notification contains no information" comment
[07:46] dimitern: if we don't have the Next in a separate goroutine, if the receiver is slow, we'll be trying to send stale information.
[07:47] dimitern: we could decide that we don't care about that, but i suspect that william might not be keen
[07:47] rogpeppe1: hmm.. so how about having another go func() block before the loop for calling next, and moving all the logic & error handling around it there
[07:47] dimitern: i think that sounds reasonable
[07:48] dimitern: if it gets an error, it can kill the tomb and die
[07:49] rogpeppe1: thus inside the loop only the select and out channel switching will remain, and the lastChanges will be a closure used by that goroutine
[07:49] dimitern: i think lastChanges will be a local slice variable
[07:50] rogpeppe1: local to the next() goroutine?
[07:50] dimitern: local to the select loop
[07:50] rogpeppe1: how will the next goroutine report the changes back?
[07:50] dimitern: with a channel (i called it "in" above)
[07:51] rogpeppe1: ah! cool
[07:51] rogpeppe1: does it have to take it as an argument or can it be a closure?
[07:52] dimitern: huh? "it" ?
[07:52] rogpeppe1: the in chan
[07:52] dimitern: how can a channel take an argument or be a closure?
[07:54] rogpeppe1: sorry, i'm not explaining well enough - if I have in := make(<-chan []string); go func() { ... using in directly .. }() will it work or do I have to use go func(in <-chan []string){ ... }
[07:54] dimitern: ah, i see. yeah the former is just fine.
[07:55] rogpeppe1: good
[07:55] guys is the call now or in 1h ?
[07:55] dimitern: i think it's in 5 mins
[07:56] rogpeppe1: ok, I set my calendar correctly this time :)
[07:57] rogpeppe1: and in addition to the in chan, i'll need a done chan error as well in the next goroutine, right?
[07:58] dimitern: i don't think so
[07:58] dimitern: i think you can use the tomb for that
[07:58] rogpeppe1: kill the tomb on error?
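Pulling the pieces of this exchange together, the client-side loop being designed looks roughly like this (a sketch built on rogpeppe's inline pseudocode above; callNext and the field names are assumptions, and tomb is the launchpad.net/tomb package):

```go
package api

import "launchpad.net/tomb"

// lifecycleWatcher sketches the design: a goroutine makes the blocking
// Next calls and feeds results into "in", while the select loop enables
// its send case only when it actually holds changes, by flipping a
// local copy of the out channel between nil and w.out.
type lifecycleWatcher struct {
	tomb     tomb.Tomb
	out      chan []string
	callNext func() ([]string, error) // hypothetical blocking Next call
}

func (w *lifecycleWatcher) loop() error {
	in := make(chan []string)
	go func() {
		for {
			changes, err := w.callNext() // may block indefinitely
			if err != nil {
				w.tomb.Kill(err)
				return
			}
			select {
			case in <- changes:
			case <-w.tomb.Dying():
				return
			}
		}
	}()
	var lastChanges []string
	for {
		var out chan []string
		if lastChanges != nil {
			out = w.out // a nil channel blocks forever, so this enables the send
		}
		select {
		case <-w.tomb.Dying():
			return tomb.ErrDying
		case out <- lastChanges:
			lastChanges = nil
		case lastChanges = <-in:
		}
	}
}
```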
[08:01] * danilos is installing latest hangout plugin :/
[08:25] rogpeppe1: take a look at this please http://paste.ubuntu.com/5692992/
[08:25] rogpeppe1: does it look sane more or less?
[08:26] dimitern: i'll look after the meeting
[08:26] rogpeppe1: sure, np
[08:48] rogpeppe1: so?
[08:48] * rogpeppe1 looks
[08:50] I think if we are going to keep the containerization and API everywhere lanes, we should kill core 1 and core 2 -- the idea there was one lane per feature group anyway
[08:50] rogpeppe1: haven't compiled it yet (just realized I need to add panic("unreachable") in the next goroutine for go1.0x compatibility)
[08:51] dimitern: i'd like to know if we still need to be go1.0.2 compatible
[08:51] mramm: and change the lanes as we progress down the roadmap?
[08:51] (which I think is why thumper created the new lanes)
[08:51] dimitern: well, I did make them generic so we don't have to change them
[08:52] rogpeppe1: i really don't want to care about go1.0[23] anymore, but we need to remove a lot of these panics throughout and possibly some other improvements are in order
[08:52] but I honestly don't care, changing them occasionally is no big deal
[08:52] mramm: if we do that, then add a general bugs lane
[08:52] I did
[08:52] * thumper has another call now
[08:52] (at the top)
[08:52] oh
[08:52] ok
[08:52] good
[08:52] :)
[08:52] dimitern: i was wondering about other stuff. for instance my recent tasks branch uses a go1.1 feature
[08:53] dimitern: (method values)
[08:53] we should've brought that up in the call, i just realized now
[08:54] let's make it official and send a mail to juju-dev "We're now using go1.1 only"
[08:55] TheMue, davecheney, jam, mgz, rogpeppe1, danilos, wallyworld_, mramm: any objections to the above? ^^
[08:55] Go for it :-)
[08:55] dimitern: code looks reasonable
[08:55] the sooner the better
[08:56] dimitern: i might be tempted to factor out the Next error handling logic into a function, so it's not duplicated
[08:56] wallyworld_, dimitern: I believe the original discussion ended in "we'd like to be able to compile with the platform tools". Already it is really hard to backport to precise.
=== rogpeppe1 is now known as rogpeppe
[08:56] because we don't have the tool there.
[08:56] dimitern, if we've got golang packages lined up for people wanting to do development on precise and raring, I am +1
[08:56] But if 1.1 is officially at least in Saucy, we can probably go with it.
[08:56] And then we have to backport 1.1 for at least P, and probably Q and R
[08:56] dimitern: only?
[08:56] can we put 1.1 in backports?
[08:56] mgz: his point was only
[08:56] how do we get that on precise?
[08:56] mgz: he just wants to make the release process harder on you. :)
[08:57] mgz: the same way we'll get != go 1.0 ?
[08:57] mgz: arguably if we have to do something to get go 1.0.3, we might as well get 1.1 instead
[08:57] I'm not *sure* we do yet...
[08:57] mgz: go1.1 works fine on precise - you just won't be able to use the standard golang package
[08:57] mgz: I'm about 75% sure go 1.0 won't pass the test suite.
[08:58] but we could SRU a new minor version, a new major version would be... not sruable
[08:58] (as I have precise and I have to install go from the ppa to work on juju-core)
[08:58] I *know* lbox is broken with go 1.0, I think juju-core might be as well.
[08:58] so the agreement is not yet there - ok then, too early for an announcement on the list
[08:58] mgz: major = .x ?
[08:58] rogpeppe: what is duplicated about the error handling?
[08:58] mgz: so I do believe that go 1.1 is source-compatible with go 1.0
[08:59] dimitern: if stuff being committed breaks with 1.0.3, we should tell people
[08:59] jamespage is shaking his head
[08:59] it is intended to be, at least, because of the "minor" bump
[08:59] dimitern: it's exactly the same logic as in EntityWatcher.loop
[08:59] we can't use backports
[08:59] i feel strongly that we should move to using go 1.1
[08:59] rogpeppe: ah, good point, ok
[08:59] when you do a backport, it doesn't build against backported packages
[08:59] mgz: because of the bug that backports can't build with other backports?
[08:59] jam: indeed
[09:00] but if it's not possible to backport to precise, then perhaps we should stay away from 1.1-specific features
[09:01] wallyworld_: I'm reading "1.1" as major and "1.0.3" as minor, but it's all pretty fuzzy, the policies of the project matter more than the actual numbers
[09:01] that means we should probably gate commits on 1.0.3 compatibility
[09:01] rogpeppe: I'm not sure what the current status is, or what our priorities are
[09:01] yeah :-(
[09:02] rogpeppe: so while I agree we should use 1.1 when available, especially for official binaries. However, is there something in 1.1 which would prevent it working with the 1.0.3 compiler?
[09:02] I thought it was at least supposed to be syntactically the same.
[09:02] maybe lib changes?
[09:02] jam: there are language additions, yes
[09:02] if we want newest juju installable on precise without a ppa, that then does make what version of go we run against more limited, unless we have other workarounds
[09:02] jam: 1.0.3 code is compatible with 1.1 but not vice versa
[09:03] jam: e.g. panic("unreachable") is no longer needed in funcs with a for loop + select only (most of the watchers, etc.) - it's a compile error in go1.0.3, not in go1.1; there are nice features we can use in go1.1
[09:04] and there's the performance improvements, method values, race detection..
[09:05] jam: another nice feature is you can use a method as a value (it becomes a closure like you might expect)
[09:06] jam: the changes are all here: http://golang.org/doc/go1.1
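For reference, the Go 1.1 method values feature mentioned here, in a minimal runnable example:

```go
package main

import "fmt"

type Counter struct{ n int }

func (c *Counter) Inc() { c.n++ }

func main() {
	c := &Counter{}
	inc := c.Inc // a Go 1.1 method value: bound to c, like func() { c.Inc() }
	inc()
	inc()
	fmt.Println(c.n) // prints 2
}
```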
[09:06] dimitern: performance improvements and race detection can be used without changing syntax
[09:07] method values are nice, but you can use closures, and not having panic is also nice but a bit trivial
[09:07] jam: true, but if we're not allowed to use go1.1 due to (stupid?) packaging/backporting issues..
[09:07] so I think stating that for now, 1.0.3 compatible, but use 1.1 when you like.
[09:07] jam: the issue i have is that it's very easy to fall outside 1.0.3 compatibility.
[09:07] rogpeppe: that's what the bot is for
[09:08] If it is running precise (the one I have currently is), then we have that gate in place already.
[09:08] jam: so the bot would test against 1.0.2 and 1.1 ?
[09:09] rogpeppe: I would tend to not do a multi-compiler pre-commit check, though that could be done.
[09:09] right now, goose is still 1.0 compatible (and passes the test suite there)
[09:09] so I didn't have to even try to get 1.0.2/3 there.
[09:09] rogpeppe: how about http://paste.ubuntu.com/5693088/?
[09:11] rogpeppe: the issue is trading off time-to-land-in-trunk with how much you expect to actually benefit from spending the time. A CI system is much better at doing multiple-platform-and-compiler level testing, vs the pre-commit testing
[09:11] jam: right
[09:11] so the "most expected platform/system" is the pre-commit check, and the CI lets you know that you could release this across all your supported platforms.
[09:11] you could certainly do 1.0.3 in CI, but I would put that in "most expected to fail" vs 1.1
[09:13] dimitern: how about this instead? http://paste.ubuntu.com/5693094/
[09:14] dimitern: oops, with a tomb argument too, i guess
[09:14] rogpeppe: yeah
[09:15] rogpeppe: and the call()?
[09:15] dimitern: i think that bundling the error checking with the call itself makes for a slightly more coherent function
[09:15] dimitern: see callWatch in EntityWatcher.loop
[09:16] rogpeppe: so call has to be an arg as well - call func() error
[09:16] dimitern: with one extra argument
[09:16] dimitern: yeah, you'd do: if err := watcherNext(call, &w.tomb); err != nil {w.tomb.Kill(err)}
[09:17] rogpeppe: what if we have a commonWatcher with next() error and pass that instead? or even make this a method of the commonWatcher & embed it in both watchers?
[09:18] rogpeppe: having commonWatcher will allow all this to be handled inside its next() method
[09:18] dimitern: i'm not sure i see a particular advantage in using a commonWatcher type rather than a closure
[09:19] dimitern: but there may be something i'm not seeing
[09:19] rogpeppe: well, there are going to be at least 3 watchers with this logic - entity, life and environconfig
[09:20] dimitern: what do you see going into commonWatcher other than next() ?
[09:20] rogpeppe: right, we need different next calls for each one
[09:20] dimitern: anything else?
[09:21] rogpeppe: so watcherNext(call func(method string, result interface{}), err error, wtomb *tomb.Tomb) error {..} then
[09:22] dimitern: watcherNext(call func(method string, result interface{}) error, wtomb *tomb.Tomb) error {..}
[09:22] dimitern: i think
[09:22] rogpeppe: oh yeah, the err is internal now
[09:22] rogpeppe: sgtm
[09:22] dimitern: i may very well change my position on commonWatcher if we find more stuff that can be factored out
[09:23] dimitern: but for the time being, i think it makes things less obvious and would use more code
[09:24] dimitern: actually, i think i see a way to factor out almost all of the code
[09:25] rogpeppe: except next is different for each watcher - entitywatcher returns nothing but an error, while the lifecyclewatcher returns a slice as well
[09:25] dimitern: i think entitywatcher is different
[09:25] dimitern: i'm thinking about other watchers that all return some actual data
[09:26] rogpeppe: so?
[09:26] dimitern: go with just factoring out the next call for the time being.
[09:27] dimitern: the other factor can be done with the next watcher
[09:27] rogpeppe: you mean factoring out only the error handling, and not calling next() inside it?
[09:28] dimitern: no, i mean call next inside it, as i suggested. i *think* that looks nicer.
[09:28] rogpeppe: what about the results?
[09:28] dimitern: that's just an extra arg
[09:29] rogpeppe: no, i mean we should return error & the results we got, right?
[09:29] dimitern: ha, good point.
[09:30] dimitern: watcherNext(results interface{}, call func(method string, result interface{}) error, wtomb *tomb.Tomb) error
[09:31] rogpeppe: yeah, sgtm
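Fleshing out the agreed signature as a sketch (isStopped stands in for whatever the real code used to recognise the server's "watcher was stopped" error; only the shape is the point):

```go
package api

import (
	"strings"

	"launchpad.net/tomb"
)

// isStopped is a hypothetical stand-in for recognising the API
// server's "watcher was stopped" error.
func isStopped(err error) bool {
	return err != nil && strings.HasSuffix(err.Error(), "watcher was stopped")
}

// watcherNext calls the blocking Next API method via the supplied
// closure and normalises the error: a "stopped" error that arrives
// while our own tomb is dying means we initiated the shutdown, so it
// is reported as a clean tomb death rather than a failure.
func watcherNext(results interface{}, call func(method string, result interface{}) error, wtomb *tomb.Tomb) error {
	err := call("Next", results)
	if err != nil && isStopped(err) {
		select {
		case <-wtomb.Dying():
			err = tomb.ErrDying
		default:
		}
	}
	return err
}
```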
[09:52] rogpeppe: params.EnvironConfigWatcherNextResults :) that starts to look like java
[09:53] dimitern: :-)
[09:53] dimitern: at least you'll only be using it in about two places
[09:54] rogpeppe: yeah
[09:58] rogpeppe: I can think of a couple of cases where commonWatcher is useful
[09:58] rogpeppe: e.g. Stop() and Err() are the same
[09:58] dimitern: i've got a suggestion coming up
[09:58] rogpeppe: in addition we can have the callNext call as a field in commonWatcher as well
[10:01] rogpeppe: and the tomb and wg
[10:02] and st
[10:02] rogpeppe: although "commonWatcher" might be incorrect, as it's just a few bits of common code extracted, but not a complete watcher
[10:05] rogpeppe: maybe "watcherBase" is better - we can have the go func() { .. spin the loop } in newBaseWatcher
[10:05] dimitern: something along these kinds of lines? http://paste.ubuntu.com/5693203/
[10:06] was there a message to list about the code.google.com/p/go.crypto dep?
[10:06] dimitern: it's not *quite* right, but i think it's not too bad
[10:07] mgz: i don't remember one
[10:08] rogpeppe: mostly ok, except for a few things
[10:09] rogpeppe: call needs to handle the case when results is nil
[10:09] dimitern: i think it does that ok, doesn't it?
[10:09] we should probably do such things when adding deps
[10:10] rogpeppe: not sure - shouldn't the last arg of st.call be &results ?
[10:10] dimitern: i don't think so
[10:11] dimitern: result is already a pointer (created by the newResults function)
[10:11] rogpeppe: ah, ok
[10:11] rogpeppe: should work then
[10:12] dimitern: actually, i think there's another way of avoiding a bit more work in each watcher
[10:12] dimitern: one mo
[10:13] rogpeppe: how about overriding init() and setting there the out chan, call and newResult? then we can just spin up the loop goroutine right after that
[10:14] dimitern: i'm not quite sure what you mean
[10:14] rogpeppe: i'd like to factor out this somehow as well: http://paste.ubuntu.com/5693224/
[10:15] rogpeppe: except out's type is different so it won't work probably (damn generics!)
[10:16] dimitern: yeah, i think the wg.Wait can go (suggestion in a moment) but the rest should probably stay
[10:29] dimitern: marginally better, perhaps: http://paste.ubuntu.com/5693269/
[10:33] rogpeppe: nice; only in the case of if !ok { should return some error }
[10:34] dimitern: no need to return an error - the tomb will already have been killed with an appropriate error
[10:34] rogpeppe: and this will work for entitywatcher as well, right?
[10:35] rogpeppe: it won't compile with just "return"
[10:35] dimitern: i think it will. it won't be *quite* as efficient, but i don't think we mind.
[10:35] dimitern: yeah, it should be return nil probably
[10:36] rogpeppe: ok, and i'll put a comment about the tomb there as well
[10:36] rogpeppe: thanks
[10:36] dimitern: np
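The shared scaffolding under discussion, sketched with the fields named in the chat (tomb, wg, st) and the identical Stop/Err methods; the State type is a stand-in:

```go
package api

import (
	"sync"

	"launchpad.net/tomb"
)

type State struct{} // stand-in for the API client connection

// commonWatcher (or watcherBase) sketches the shared pieces, meant to
// be embedded in each concrete watcher. The watcher's loop goroutine
// is expected to call w.tomb.Done() when it exits.
type commonWatcher struct {
	tomb tomb.Tomb
	wg   sync.WaitGroup
	st   *State
}

// Stop kills the watcher cleanly and waits for its loop to finish.
func (w *commonWatcher) Stop() error {
	w.tomb.Kill(nil)
	return w.tomb.Wait()
}

// Err reports the error the watcher died with, if any.
func (w *commonWatcher) Err() error {
	return w.tomb.Err()
}
```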
[10:56] rogpeppe: so if we return nil from the loop, and in commonLoop we already killed the tomb with an error, won't the "return nil" wipe out that original error?
[10:57] dimitern: no
[10:57] dimitern: the first non-nil, non-ErrDying error will stick
[10:59] rogpeppe: i see, ok
[10:59] rogpeppe: wasn't immediately obvious for me looking at tomb.Kill
[11:03] dimitern: rtfm :-)
[11:03] dimitern: http://paste.ubuntu.com/5693339/
[11:05] rogpeppe: isn't w.commonWatcher.init() the same as w.init() ?
[11:05] dimitern: yeah. i guess i prefer the explicitness there.
[11:05] rogpeppe: sure
[11:23] rogpeppe: so either the TestMachineWatch logic is wrong or the commonLoop logic is wrong, because Next never returns an initial event
[11:24] dimitern: interesting. the Next call is being made and doesn't return?
[11:24] rogpeppe: yes - it returns only after the timeout
[11:25] rogpeppe: http://paste.ubuntu.com/5693378/ - added debug logs after next returns and one after sending to in
[11:26] dimitern: ah, this is interesting. it looks like i designed it that way deliberately.
[11:26] rogpeppe: :) so? is the test wrong or the loop?
[11:26] dimitern: look at srvMachine.Watch - it reads the channel
[11:27] rogpeppe: yeah, it does, but shouldn't
[11:28] dimitern: actually, i think it should
[11:28] dimitern: and i think the test is right, as is the current loop
[11:28] dimitern: the suggested implementation should be different though
[11:28] rogpeppe: what then?
[11:29] rogpeppe: it shouldn't call next initially?
[11:29] dimitern: we should return the initial data with the initial watch request
[11:29] dimitern: so that we don't do two round-trips whenever we start a watcher
[11:29] dimitern: that was the reason for doing things this way anyway
[11:30] rogpeppe: saving a roundtrip seems sane
[11:30] danilos, wallyworld_, w7z: poke
[11:31] rogpeppe: but what does this mean for the loop?
[11:32] dimitern: something like this: http://paste.ubuntu.com/5693401/
[11:32] rogpeppe: move the next call after the select and have in := w.in
[11:32] rogpeppe: ah, right
[11:34] rogpeppe: but for the entitywatcher, this should be changes := struct{}{} instead
[11:35] dimitern: no need to have a changes variable for the entity watcher
[11:35] rogpeppe: wait, so the entitywatcher is the one not working, haven't written tests for the others yet
[11:36] dimitern: the entity watcher was working before, right?
[11:36] rogpeppe: yes
[11:36] rogpeppe: for the lifecycle and environconfig watchers what you suggested should work
[11:36] dimitern: so you're just changing it to use the commonWatcher to start with?
[11:38] rogpeppe: yes, but the entitywatcher doesn't have results, so calling etype, eid, "Watch" won't report anything
[11:39] rogpeppe: maybe you meant out = w.out initially for the entitywatcher loop
[11:39] rogpeppe: yeah, it works like this
[11:40] dimitern: the entitywatcher loop will be a bit different
[11:40] rogpeppe: that was the only change needed
[11:40] rogpeppe: the tests now pass
[11:40] dimitern: cool
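One plausible reading of the reworked loop: the Watch reply itself seeds the first event, so no initial Next round-trip is needed (continuing the hypothetical names from the earlier sketch; nextLoop is the Next-calling goroutine shown there):

```go
// lastChanges starts non-nil with the data from the Watch reply, so
// the very first receive on w.out happens without any Next call; Next
// is only issued to fetch subsequent events.
func (w *lifecycleWatcher) loop(initialChanges []string) error {
	in := make(chan []string)
	go w.nextLoop(in) // as in the earlier sketch: calls Next, sends on in
	lastChanges := initialChanges
	for {
		var out chan []string
		if lastChanges != nil {
			out = w.out
		}
		select {
		case <-w.tomb.Dying():
			return tomb.ErrDying
		case out <- lastChanges:
			lastChanges = nil
		case lastChanges = <-in:
		}
	}
}
```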
[11:47] rogpeppe: so the same has to be applied for lifecycle and environconfig watchers? calling Changes in Watch* at server-side?
[11:48] dimitern: yeah, i think so
[11:48] dimitern: it's probably worth checking that all those things always produce an initial event
[11:48] rogpeppe: so the params.*WatcherId should embed params.*WatcherNextResults then
[11:49] jam: you forgot to ask me what i did, but that's ok. one thing for you - i talked to IS about the rt, got the cc list updated, and they have also bumped its priority so hopefully will be done real soon now
[11:50] wallyworld__: yay, sorry I missed you. You had been talking about landing your other branches, so I thought you were done. But good to hear about the rt.
[11:50] dimitern: i don't think you can embed it, but yes, the same value should be there. and i'd probably call it, e.g. params.LifecycleWatchResults
[11:50] rather than WatcherId
[11:50] rogpeppe: ok
[11:51] jam: i also fixed goamz - tests weren't cleaning up and so were failing, and i didn't want to land my changes without fixing even though my changes weren't responsible
[11:51] wallyworld__: so instead of cleaning up at the start of the test we clean up at the end?
[11:51] jam: each test was creating a socket and then just leaving it opened and not closing, so subsequent tests would fail
[11:52] so i added cleanup to the suite teardown
[11:52] rogpeppe: and afaics both lifecycle and environconfig watchers send initial event
[12:01] rogpeppe, anyone else: it seems there is something like a logging loop in machine 0 when you use --force-machine. /var/log/juju/all-machines.log grows about 2 MB per second. and debug-log is unusable
[12:02] frankban: what are all those log messages?
[12:03] rogpeppe: it seems to repeat the same messages again and again (e.g. messages from hooks execution)
[12:03] frankban: what messages?
[12:07] rogpeppe: should I use params.LifecycleWatchResults for both Watch* and Next then ?
[12:07] rogpeppe: http://pastebin.ubuntu.com/5693483/ just an example. the problem is the same logs are repeated in a loop.
[12:07] rogpeppe: and just pass the id in next as well
[12:08] dimitern: i suppose you could. or just leave the id blank for next.
[12:08] rogpeppe: ok
[12:08] frankban: could you paste the log with at least 3 repetitions of the cycle, so i can get an idea for what's going on, please?
[12:09] s/the log/a portion of the log/
[12:11] rogpeppe: sure, grabbing it. however, to dupe: bootstrap a juju-core env, deploy wordpress (or the GUI) using --force-machine 0.
[12:16] rogpeppe: http://pastebin.ubuntu.com/5693508/
[12:17] frankban: thanks
[12:18] np
[12:18] .me lunches
[12:33] rogpeppe: I don't have to add tests like TestServerStopsOutstandingWatchMethod for the other watchers, right?
[12:34] dimitern: looking
[12:35] dimitern: i think that functionality should be tested for each watcher actually
[12:36] dimitern: i think it's probably possible to make the test code generic though
[12:36] rogpeppe: but all of them are resources and handled all the same in stopAll, right?
[12:36] dimitern: that test is more about testing the client-side stop logic actually
[12:37] dimitern: stopAll isn't invoked in that test, i don't think
[12:37] dimitern: oops
[12:37] dimitern: sorry, i'm wrong
[12:38] rogpeppe: it's on the server side when stopping
[12:38] rogpeppe: so I can transform that test into a table-based one with setup funcs for each watcher
[12:39] dimitern: it might be good to make that test something that starts one of every kind of watcher, and checks to see that they're all torn down appropriately when the server is stopped.
[12:40] rogpeppe: by "torn down" you mean returning CodeStopped, right?
[12:41] dimitern: yeah.
[12:41] rogpeppe: ok
[12:41] dimitern: i'd start all the watchers at once, then tear down the state just once.
[12:42] rogpeppe: yeah, so some state setup code first, start all & wait for initial events from everyone, then stop the server and try calling changes on each one and check for CodeStopped
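The agreed test shape, sketched in gocheck style; every helper name here is an assumption, the point is the structure: start one of each watcher, wait for initial events, stop the server, expect CodeStopped everywhere.

```go
// Sketch only: the start* helpers, stopServer, and isCodeStopped are
// hypothetical stand-ins for the real suite's wiring.
type watcher interface {
	waitDead() error // blocks until the watcher's tomb dies
}

func (s *suite) TestStopsWatchers(c *C) {
	watchers := []watcher{
		s.startEntityWatcher(c),        // each helper starts a watcher and
		s.startLifecycleWatcher(c),     // consumes its initial event
		s.startEnvironConfigWatcher(c),
	}
	s.stopServer(c) // tear down the state/API server just once
	for _, w := range watchers {
		c.Check(isCodeStopped(w.waitDead()), Equals, true)
	}
}
```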
[13:08] mgz: we missed you earlier. How's your day going?
[13:10] fighting dpkg a little, otherwise okay
[13:11] (should have mentioned I'd be out over lunch, but you probably saw me discussing that with gavin)
[13:12] mgz: so are there plans to get go-1.0.2 or 3 into precise?
[13:13] I think we have a failure in the test suite from using go-1 in gobot
[13:13] (it fails reliably on gobot, but succeeds on my local machine)
[13:17] not currently, but we could form some
[13:18] we're the only people who'd drive that, juju is the only real user for go in precise
=== wedgwood_away is now known as wedgwood
[14:04] rogpeppe, mramm: kanban?
[14:04] dimitern: ah, good point
[14:12] jam: the go bot kinda hates life it seems
[15:40] mgz: well, it's doing things slightly differently from what the rest of us have all worked around manually.
[16:16] Hi all -- I'm getting an error with the latest updated juju-core: http://paste.ubuntu.com/5694171/ <- What am I missing? (I'm using the 'cstack' environment)
[16:17] Man, I just enabled mouse mode for weechat and byobu. It's working really great. Even cut and paste is working better now.
[16:17] wow, that was the wrong window.
[16:22] dpb1: mgz or jam or dimitern might know what's going on there
[16:25] looks like the container was not marked public...
[16:29] dpb1: can you pastebin `nova endpoints` with your novarc sourced? I'm assuming e8231fcc9d9546c9961caf858676ea4e is your tenant id
[16:31] checking
[16:32] mgz: http://paste.ubuntu.com/5694222/
[16:32] dpb1: for now, you can probably just set public-bucket-url... though I just checked and that seems to not have the streams data
[16:33] dpb1: can you file a bug please?
[16:33] mgz: sure, is what I have in those two pastes enough?
[16:33] mgz: also, what should I set public-bucket-url to?
[16:36] https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60
[16:37] right, I'm off now
[16:37] may be around later tonight
[17:12] umm well we are still on the upswing of this project so we have a support team of 5 and a core team of 7
[17:12] so 12 fulltime
[17:13] plus some floaters if needed for specific things we don't have the experience for already
[17:13] damn
[17:25] * rogpeppe1 is very happy with the new Task abstraction
[18:56] anyone around to review a branch that i think is trivial that unbreaks trunk for go 1.0.* ?
[18:56] if so, it's at https://codereview.appspot.com/9682047
[21:05] morning
[21:11] Yep *chugs coffee*
[23:19] niemeyer: you around?