=== _thumper_ is now known as thumper
[00:58] bugger
[00:59] followed closely by wat?
[00:59] rogpeppe: pong... presumably you're asleep though
[01:00] thumper: office space?
[01:00] axw: yeah, there are now four of us in Dunedin
[01:01] considering options around having an office
[01:01] ah, cool
[01:01] axw: I'm looking at the bug wallyworld_ mentioned in the email
[01:01] thumper: meeting?
[01:01] axw: it was the creation of ~/.juju/ssh
[01:01] wallyworld_:
[01:01] yeah
[01:01] coming
[01:01] oh? :(
[01:02] sorry about that
[01:07] axw: np, working on it
[01:09] brb, restarting
[01:11] * axw loves his new SSD
[02:08] axw: don't forget to add trim support, unless you are running trusty :-)
[02:09] wallyworld_: yup thanks :)
[02:09] not running trusty yet
[02:09] i knew you'd know to do it :-)
[02:09] i want to upgrade to trusty, just a bit scared
[02:09] me too
[02:09] maybe best to wait a few weeks
[02:11] I was going to wait a few months ;)
[02:20] i've found the first beta to be ok in the past
[02:31] yeah maybe I'll get in on the beta
[02:37] I found that very amusing
[02:37] bootstrapped local
[02:37] did juju status
[02:37] it says "agent-state: down"
[02:37] I'm like, bollocks
[02:37] otherwise status wouldn't work
[02:38] axw: can I get you to review the changes to the ~/.juju/ssh work?
[02:38] thumper: yes
[02:39] axw: cheers, just proposing now
[02:44] axw: https://codereview.appspot.com/52950043
[02:46] I wish we didn't have to run with sudo :(
[02:55] me too
[02:56] maybe soon
[03:09] thumper: reviewed
[03:09] ta
[03:22] axw: can we talk about this?
[03:22] thumper: sure
[03:23] https://plus.google.com/hangouts/_/7ecpijqrs52ki6g2aveu0pajlg?hl=en
[04:53] thumper: thanks, looks better now (to me anyway)
[04:53] axw: np
[04:59] wallyworld_: did you add the MachineConfig API, or was that jam?
[04:59] for manual provisioning
[04:59] um
[04:59] i can't recall doing it
[04:59] ok
[05:00] what does bzr annotate say?
[05:00] wallyworld_: says you did it actually :)
[05:00] which file?
[05:00] state/api/client.go
[05:01] wallyworld_: just wondering if there's a particular reason why Series was passed in, when that can be gotten from the state.Machine
[05:01] wallyworld_: and if I'm allowed to break compatibility with 1.17.3
[05:01] err
[05:01] 1.17.0
[05:02] can't recall exactly - may be because the old state based api had it as a param
[05:02] and the new api replicated that behaviour
[05:03] i'd have to look over the code again to remember. maybe add as a topic for discussion in the meeting tonight
[05:03] it would've been like that because it was all in the same function, I think
[05:03] mk
[05:03] thanks
[05:03] if i can get my current wip shit sorted, i'll look over the code
[05:04] ok, ta
[05:06] axw: the old 1.16 api seemed to require series to be passed in
[05:06] wallyworld_: which 1.16 API is that?
[05:06] environs/manual/provisioner.go recordMachineInState1dot16
[05:07] and then the same series is passed to MachineConfig
[05:07] wallyworld_: I'm talking about the bit *after* recording the machine in state
[05:08] but i guess the series is known
[05:08] yeah
[05:08] yeah
[05:08] :)
[05:08] i see what you are talking about now
[05:08] I'll put in an item for the meeting to make sure I can break it
[05:08] so it makes sense to drop it
[05:08] i think it will be ok because 1.17 is dev
[05:09] arch is known in that case too, but I'm not sure I can remove that one. MachineConfig could conceivably be used in other contexts, and Arch isn't required like Series is
[05:09] (right?)
[05:09] yeah I figured it's okay to break
[05:11] although, MachineConfig requires arch, so... maybe it's okay for it to assume it's set on the machine and just error out if not
[05:11] I'll do that
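A minimal sketch of the change axw and wallyworld_ settle on above (05:01-05:11): drop the redundant Series parameter from MachineConfig, since it can always be read from the machine record, and error out if the machine's arch was never recorded rather than falling back silently. The types and names here are illustrative assumptions, not the actual code around state/api/client.go.

    package main

    import (
    	"errors"
    	"fmt"
    )

    // machine stands in for juju's state.Machine; only the fields this
    // sketch needs are shown.
    type machine struct {
    	series string
    	arch   string // may be empty if never recorded
    }

    // machineConfigResult stands in for the API result carrying the
    // provisioning data (tools, addresses, secrets, ...).
    type machineConfigResult struct {
    	Series string
    	Arch   string
    }

    // machineConfig no longer takes series as a parameter: it is derived
    // from the machine itself. Arch is still required by the config, so a
    // machine without one is an error.
    func machineConfig(m *machine) (*machineConfigResult, error) {
    	if m.arch == "" {
    		return nil, errors.New("machine has no arch recorded")
    	}
    	return &machineConfigResult{Series: m.series, Arch: m.arch}, nil
    }

    func main() {
    	cfg, err := machineConfig(&machine{series: "precise", arch: "amd64"})
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Printf("series=%s arch=%s\n", cfg.Series, cfg.Arch)
    }

Dropping the parameter is safe here because, as noted in the conversation, 1.17 is a development series and the client-visible API is allowed to break.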
[07:11] davecheney, +1 on not shipping a fork btw
[07:11] davecheney, just reading your email
[08:12] * fwereade needs to be out for a bit, be back by the meeting
[08:28] mgz: rogpeppe: I'm going to have to miss the weekly meeting today, it is my son's first day of Karate, so I have to be there to get him all sorted out.
[08:35] jam1: ok
=== jam1 is now known as jam
[08:45] axw: i was just pinging you in case you might be up for a review of https://codereview.appspot.com/52850043/
[08:45] rogpeppe: sure, will take a look now
[08:46] axw: thanks
[09:00] rogpeppe: lgtm
[09:00] axw: thanks
[09:03] axw: maybe you're right about logging join/leave at info level
[09:04] axw: it would be easy to do, and "
[09:04] juju
[09:04] deploy service -n 1000
[09:04] " won't spam
[09:04] axw: 'cos it only makes one connection
[09:04] each machine agent will connect later won't it?
[09:05] axw: ah, but i guess all the machines will make a conn, yeah
[09:05] that's my only reservation
[09:05] axw: 2000 lines isn't much though
[09:05] yeah I guess so. I've found that sort of thing useful in the past, debugging when clients unexpectedly exit
[09:06] tho I think we're at debug by default
[09:58] fwereade: I would appreciate a look at this later, if you have time: https://codereview.appspot.com/53040043/
[09:58] it's for hazmat
[10:07] axw, fwereade, dimitern, mgz, wallyworld_: team meeting?
[10:07] rogpeppe, we're all there
[10:07] we're here
[10:07] rogpeppe, we're having it ;p
[10:07] rogpeppe, https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.sbtpoheo4q7i7atbvk9gtnb3cc
[10:08] hmm, i must have clicked on the wrong link. i'm in https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.8sj9smn017584lljvp63djdnn8?authuser=1
[10:08] rogpeppe: https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.sbtpoheo4q7i7atbvk9gtnb3cc?authuser=1
[10:08] weird
[10:51] rogpeppe: this is the CL mentioned before: https://codereview.appspot.com/53040043/
[10:51] axw: thanks, will look
[10:51] I will bbl if you need to ask me anything about it
[10:51] ta
[10:53] fwereade: i think i've addressed the outstanding issues with the juju status work - the revision update functionality and the status command presentation of data. if you get a chance to take a look, great. if not, i'll chase a review tomorrow
[10:55] axw: so the idea behind this is that it means that you can provision machines that you can't ssh to, right?
[10:55] dimitern: have proposed a pretty trivial initial goose branch
[10:56] * fwereade makes no promises to wallyworld_ but thanks him for the reminder
=== sparkieg` is now known as sparkiegeek`
[11:09] mgz, looking
=== sparkiegeek` is now known as sparkiegeek
[11:32] rogpeppe: yes, that is the idea
[11:35] mgz, reviewed
[11:37] mgz, i'm still looking at the networks spec doc and into the lxc/kvm brokers btw
[12:02] axw: reviewed
[12:05] noodles775, you around?
[12:05] dimitern: yep, for a bit.
[12:05] noodles775, re bug 1259925 you reproduced - can you do it reliably?
[12:05] <_mup_> Bug #1259925: juju destroy-environment does not delete the local charm cache
[12:06] noodles775, and if so, can you please leave a comment so I can try it myself and investigate?
[12:09] dimitern: I've not tried since then - should I retry with 1.17.0 or trunk? (I actually tried trunk this morning but couldn't get juju status to return sensible info http://paste.ubuntu.com/6761458/)
[12:10] noodles775, trunk would be preferable; have you made sure you rebuilt cmd/juju and cmd/jujud and then did bootstrap --upload-tools? I was seeing similar status output yesterday before doing that
[12:11] noodles775, also, if you can reproduce it again, can you make a mongo dump please? I'll help to see what's in the db
[12:12] dimitern: no - I didn't do --upload-tools, I'll try that. Thanks.
[12:14] axw, have you tested your branch against 1.16 api server?
[12:24] thanks for the goose review dimitern
[12:43] am I right in saying that there's no API for querying the state of a relation (e.g. hooks have fired and state comes from failures or not)?
[13:15] dimitern: no, I haven't. I can do that tomorrow.
[13:16] thanks rogpeppe
[13:16] axw, I posted some review comments as well
[13:16] cool, thanks dimitern
[13:16] axw, cheers
[13:17] sparkiegeek: you can find out the current status of a unit, which will show you if a relation has failed. what information are you after?
[13:18] rogpeppe: I want positive confirmation of success
[13:18] sparkiegeek: confirmation that a relation has been successfully made?
[13:18] rogpeppe: exactly
[13:19] rogpeppe: where "success" = no hooks have failed :/
[13:19] rogpeppe: and more importantly, that they *have* been run
[13:19] sparkiegeek: ah, yes. you can get the former, but i'm not sure there's any current way to verify that the relation-joined hook has successfully run
[13:20] fwereade: ^ am i right there?
[13:20] dimitern: "I'd like to see a test about using ProvisioningScript against 1.16 API server" -- do you just mean an interactive test, or do we have some way of doing that in unit tests?
[13:20] axw, we have similar tests for other api calls - grep for 1dot16 methods and their associated tests
[13:21] rogpeppe, sparkiegeek: that is correct -- we have work scheduled to flag when a unit's finished doing work, but you can't tell today
[13:21] fwereade: is there a bug number or blueprint you can point me at?
[13:23] fwereade: i'm not sure that that will provide quite the functionality required
[13:23] fwereade: a unit might have successfully joined the relation but still have work to do
[13:24] sparkiegeek, I can't find one immediately, I'm writing one, will close it as a dupe if I can track it down
[13:24] fwereade: (from relation changed events, for example)
[13:24] fwereade: great! Thanks a lot
[13:24] rogpeppe, "finished doing work" ie completed all hooks, nothing scheduled for the future
[13:24] dimitern: ah yep I understand now. I will add a test to environs/manual that tests the compat
[13:24] dimitern: thanks
[13:24] axw, ta!
[13:24] sparkiegeek: i'm wondering if what you really want here is a way for a charm to provide a positive indication that it's in a certain state
[13:25] fwereade, sparkiegeek: i'm thinking this might be another use case for output variables
[13:25] fwereade: the problem is that with juju run particularly, that state might never occur
[13:26] fwereade: a relation's attributes could be constantly changing, so the charm might never reach that steady state
[13:27] fwereade: but i suspect that sparkiegeek doesn't care about steady state as much as "this relation has been successfully joined"
[13:27] rogpeppe: that would work for me too, assuming the states cover the scenario of being deployed vs. being deployed and successfully related to $X
[13:27] rogpeppe: correct
[13:28] sparkiegeek: the idea is that a charm could set an output variable to communicate something to the outside world. so the relation-joined hook could set foorelation-ready=true, for example (the variable name would be arbitrary)
[13:29] rogpeppe, sparkiegeek: hmm, per-relation busy/idle flags?
[13:29] fwereade: again, i don't think that quite hits the mark
[13:30] fwereade: we don't care about busy vs idle but "are you in this state?"
[13:30] rogpeppe, ok, please be very precise about what state we're hoping to expose and why, I may have missed something
[13:31] rogpeppe, define a "successfully joined" relation
[13:31] fwereade: my impression is that output variables would not be very hard to implement. can you think of something that makes them so?
[13:31] fwereade: i don't think we can hope to make a canonical definition of that
[13:32] rogpeppe, well I assumed it was a proxy for "what sparkiegeek really wants"
[13:32] fwereade: hence i think an output variable lets the charm *say* "this thing has successfully happened", whether it's a relation successfully joined (by that charm's definition of success), or a web service being started, or whatever
[13:32] rogpeppe, "all scheduled hooks have been run for this relation, and none of them failed" STM to match the desires stated above
[13:33] fwereade: but that might never happen, in legitimate scenario
[13:33] s/in/in a/
[13:33] rogpeppe, well, I don't see how you can say the relation has been "successfully joined" until that's completed
[13:34] rogpeppe, it might always fall over next hook
[13:34] fwereade: sure. that's why i would define "success" at the juju level at all
[13:34] s/would/wouldn't/
[13:35] fwereade: i *think* that output variables provide sufficient tools for a charm to communicate "success" by whatever definition it chooses
[13:36] rogpeppe, I'm reluctant to abandon the idea that it should be mediated by juju somehow
[13:36] fwereade: how do you mean? you think there might be some general definition of "success" in this context?
[13:38] rogpeppe, I think the one I'm proposing has some utility, yes
[13:39] fwereade: ISTM that it's too low level and may be fragile if people come to rely on it extensively
[13:40] fwereade: it's also going to be hard to implement, i suspect - how can you actually tell when a client has no more hooks to execute?
[13:40] rogpeppe, "has this unit successfully responded to all known changes [in this relation] without errors?" doesn't seem so hard to me
[13:41] fwereade: what does "respond" mean in that context? changes with respect to what base state?
[13:42] rogpeppe, "responded to" = "run all relevant hooks", right?
[13:42] nate_finch: your tweet yesterday about 90% of programming is so damned right. i'm currently trying to find an elegant way to get the log dir, which is different for containers like lxc or kvm. and the information is wonderfully private. *sigh*
[13:43] fwereade: so every time you run a hook, you store that in the state?
[13:44] fwereade: ISTM that output variables would be considerably easier to implement, have less runtime cost, and give access to a very useful range of new capabilities.
[13:46] rogpeppe, no
[13:46] rogpeppe, the GUI team *is* asking for that, and I think I'm ok activating that for specific units, but I don't want a constant spew from all of them
[13:48] fwereade: so you think output variables are a bad idea in general?
[13:48] rogpeppe, no, I just don't think they solve the "wait for a unit to be ready" use case we already know we have
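To make rogpeppe's output-variables proposal above concrete: a relation-joined hook could publish a success flag once its work is done. This is purely speculative — no "output-set" hook tool exists in juju 1.17; the name is hypothesized here by analogy with the real relation-set tool, and foorelation-ready is the arbitrary variable name from the 13:28 message.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // A relation-joined hook, written in Go, that reports success via the
    // *proposed* output-variable mechanism. "output-set" is hypothetical.
    func main() {
    	// ... do the actual work of joining the relation here ...

    	// Tell the outside world this charm considers the relation ready.
    	cmd := exec.Command("output-set", "foorelation-ready=true")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintf(os.Stderr, "output-set failed: %v\n", err)
    		os.Exit(1)
    	}
    }

fwereade's counter-argument, continued below, is that such a free-form channel is hard for the GUI to interpret, whereas juju-mediated busy/idle flags are not.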
[13:49] TheMue: yeah, I was trying to get the mongo port from the machine config into the machine agent. Should be easy, but it's not
[13:49] rogpeppe, but that the report-busy-idle stuff *does* solve the are-we-ready-yet problem
[13:49] rogpeppe, if we do it via output variables we fuck the gui
[13:49] fwereade: there's no way in general to wait for a unit to be ready. a relation between A and B might have been successfully made, but it's quite possible that for a unit in A to be ready, B must have made a relationship with C
[13:50] rogpeppe, yes, you have to wait for the whole system to get to idle before you can be confident the whole thing will remain stable until perturbed
[13:50] fwereade: even then, you can't do it
[13:50] fwereade: units can asynchronously do stuff now, right?
[13:51] rogpeppe, I consider juju-run to be a perturbation
[13:51] fwereade: the system may very well never become idle
[13:51] rogpeppe, I think that's a symptom of a problem with the system that's been constructed then
[13:51] fwereade: not necessarily
[13:52] fwereade: it might be perfectly reasonable for a charm to provide some update on a relation every few seconds
[13:52] fwereade: i don't think we should focus too much on "idleness"
[13:53] fwereade: i think we should be much more interested in "history"
[13:53] rogpeppe, I am having some trouble imagining a use case there
[13:53] nate_finch: sounds familiar ;)
[13:53] rogpeppe, you use the relation settings to set up the channel over which the fast-changing info flows, surely
[13:54] fwereade: for example: a service that's a gateway to another service with a bunch of servers that are changing. it has a relation attribute that contains that set of servers.
[13:54] fwereade: it depends how fast-changing it is.
[13:55] fwereade: if something's changing on the order of once every 10 seconds, i think it may be reasonable to use relation attributes
[13:55] fwereade: and the point is: we've provided the capability, so people will do it *anyway*
[13:55] fwereade: and it would be good to make our tools cope well when it happens
[13:56] rogpeppe, I think that it's perfectly reasonable for us to report such a system as unstable then
[13:56] rogpeppe, that's coping perfectly with a system that's unstable
[13:57] fwereade: unstable is fine - we can still do useful work with an unstable system
[13:57] fwereade, how would that screw over the GUI?
[13:57] fwereade: as an example of another possible approach, we could provide a "hook history", that lets us know which hooks have been executed in a given unit.
[13:58] fwereade: when a hook is run twice, we overwrite that hook's previous entry in the history
[13:58] Beret, if we let people use a completely unstructured channel to report working/not-working, the gui won't be able to show a useful distinction
[13:58] ah, I was assuming the output variable would have to be structured
[13:58] Beret, busy/idle *can* be interpreted by the GUI and drawn as pale-green/solid-green or similar
[13:59] fwereade: each entry in the history could have its own time stamp.
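A minimal sketch of rogpeppe's "hook history" idea above (13:57-13:59), assuming a simple per-unit structure: each hook name maps to its most recent run, so re-running a hook overwrites the earlier entry, and every entry carries its own timestamp. Nothing like this exists in juju at this point; the names are illustrative.

    package main

    import (
    	"fmt"
    	"time"
    )

    // hookRun records the most recent execution of a single hook.
    type hookRun struct {
    	When      time.Time
    	Succeeded bool
    }

    // hookHistory keeps one entry per hook name; running the same hook
    // again overwrites the previous entry, as suggested at 13:58.
    type hookHistory map[string]hookRun

    func (h hookHistory) record(name string, ok bool) {
    	h[name] = hookRun{When: time.Now(), Succeeded: ok}
    }

    func main() {
    	h := hookHistory{}
    	h.record("install", true)
    	h.record("db-relation-joined", true)
    	h.record("db-relation-joined", true) // overwrites the earlier entry
    	for name, run := range h {
    		fmt.Printf("%s: ok=%v at %s\n", name, run.Succeeded, run.When.Format(time.RFC3339))
    	}
    }

The overwrite-on-rerun design keeps the history bounded (one entry per hook) while still answering sparkiegeek's question "have the hooks run, and did they fail?" — at the cost of losing older runs.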
[13:59] or some egg timers/beachballs ;)
[13:59] Beret, the "output variables" idea is that they be minimally structured -- it's the flipside of charm config, which is really a service's "input variables"
[14:00] fwereade: the other thing is that a unit may look idle from a juju point of view, but be very busy internally
[14:00] fwereade: we could easily impose some conventions on output variables
[14:01] fwereade: to let charms show their status to the GUI in a standard way, for example
[14:01] rogpeppe, the same sort of thing that worked so well with haproxy and private-address, for example?
[14:02] fwereade: remind me
[14:02] rogpeppe, haproxy has "host", everything else in the world uses "private-address"
[14:03] rogpeppe, relying on convention is I think inadequate
[14:03] fwereade: if you don't use the conventions, the GUI won't see you. seems reasonable to me.
[14:03] rogpeppe, and if you accidentally use them, the gui acts weird
[14:04] fwereade: you could even have the uniter set output variables actually
[14:04] fwereade: (the history idea above could work that way)
[14:04] fwereade: convention can work well
[14:05] fwereade: i prefer a general mechanism with some conventions to building more and bigger Stuff
[15:00] mgz, fwereade, meeting?
[15:01] we can use this https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[15:01] dimitern: I didn't add a hangout or anything... let's use that
[15:09] fwereade, any thoughts on that suggestion I made re juju letting charms know if units are alive?
=== nate_finch is now known as natefinch
[16:18] jamespage, well, I am still not sure it'll really solve the problem
[16:18] jamespage, I know the "idle" hook I mentioned is not a thing, but I'm still interested to know if it helps you
[16:19] * fwereade maybe missed a followup elsewhere..?
[16:36] rogpeppe: WIP - https://codereview.appspot.com/53220043/ ignore the replicaset.go changes.. just some extraneous changes that got on the wrong branch.
[16:50] fwereade, sorry - idle hook?
[16:50] must have missed that bit
[17:01] natefinch: reviewed
[17:06] rogpeppe: thanks
[17:21] rogpeppe: about restarting mongo every time the machine agent bounces... I had been assuming we'd only call this method when we knew the upstart script was either out of date or missing (i.e. on upgrade from previous versions, or when starting a new state server). I think either we'll know when we need to call this, or if we aren't sure, we'll have to call it every time just in case.
[17:22] natefinch: the idea was that we'd call it if, when starting up, we find we have a ManageState job
[17:22] natefinch: that's kind of the point
[17:23] natefinch: in that case we need to start the mongo server if it's not already started
[17:25] rogpeppe: right, it occurred to me after I said that, that the "Ensure" part of the name means that logic should be encapsulated
[17:25] natefinch: yes please :-)
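A minimal sketch of the "Ensure" semantics natefinch and rogpeppe converge on above (17:21-17:25): a hypothetical EnsureServer is idempotent, so the machine agent can call it on every start without knowing whether mongo is already configured or running. The path, job name, and upstart commands are illustrative assumptions, not juju's actual implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Illustrative path; the real code derives this from agent config.
    const upstartConf = "/etc/init/juju-db.conf"

    // EnsureServer is safe to call on every machine-agent start: it only
    // rewrites the upstart job if it is missing or stale, and only starts
    // mongo if it is not already running.
    func EnsureServer(wantConf string) error {
    	current, err := os.ReadFile(upstartConf)
    	if err != nil || string(current) != wantConf {
    		if werr := os.WriteFile(upstartConf, []byte(wantConf), 0644); werr != nil {
    			return fmt.Errorf("writing upstart job: %v", werr)
    		}
    	}
    	// "start" on a running upstart job fails, so check status first.
    	out, err := exec.Command("status", "juju-db").CombinedOutput()
    	if err == nil && strings.Contains(string(out), "start/running") {
    		return nil // already running, nothing to do
    	}
    	return exec.Command("start", "juju-db").Run()
    }

    func main() {
    	// Needs root; a dry run will simply print the error.
    	if err := EnsureServer("# hypothetical juju-db upstart job\n"); err != nil {
    		fmt.Println("ensure failed:", err)
    	}
    }

Encapsulating the checks this way is what makes the earlier question moot: callers never need to decide whether the script is out of date or missing.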
[17:29] rogpeppe, what happened to that charm lib you were writing in go?
[17:30] mattyw: launchpad.net/juju-utils/gocharm
[17:30] mattyw: um, maybe i got the path wrong there
[17:30] mattyw: launchpad.net/juju-utils/cmd/gocharm
=== bjf[afk] is now known as bjf
[17:30] rogpeppe, in here? lp:juju-utils
[17:31] mattyw: yeah
[17:31] mattyw: for docs, see http://godoc.org/launchpad.net/juju-utils/cmd/gocharm
[17:31] mattyw: and http://godoc.org/launchpad.net/juju-utils/hook
[17:32] mattyw: the latter is the actual charm hook interface that you write code against
[17:32] mattyw: the former just compiles the code and generates hook stubs
[17:33] rogpeppe, looks awesome, will try to have a play with it over the next few days
[17:33] mattyw: please do. i'd love any feedback at all.
[18:20] oh local environment... why do you have to throw a wrench in everything?
[19:08] hey sinzui
[19:09] hi jcastro
[19:09] you still get this right? https://bugs.launchpad.net/golxc/+bug/1238541
[19:09] it's making life annoying, I was wondering maybe we can bug a core dev together today?
[19:09] * natefinch ducks
[19:09] <_mup_> Bug #1238541: Local provider isn't usable after an old environment has been destroyed
[19:10] natefinch, look at how easy it is! :)
[19:11] hmm
[19:11] natefinch, I can also just prod the list if you'd like
[19:12] jcastro, I will get someone to investigate it. I thought bug 1269363 may have indirectly addressed the issue
[19:12] <_mup_> Bug #1269363: local environment broken with root perms
[19:12] jcastro: yjrtr
[19:13] jcastro: there's a branch with a fix already submitted. I can probably just approve the branch
[19:14] that would be swell, we're doing a bunch of bootstraps/teardowns as part of the audit and cleaning up the containers by hand gets old after a while, heh
[19:14] bugger, I cannot manage the bugs in golxc
[19:15] oh, it is not a part of juju-project. Either a mistake, or the project really isn't about juju
[19:17] sinzui: I approved the fix and marked the bug as fix committed
[19:18] thank you natefinch
[19:36] natefinch, I owe you a beer, thanks!
[20:05] morning folks
[20:06] sometimes it feels like I never stop working
[20:06] that is what 11pm meetings do for you :-(
[20:06] thumper: haha... I know the feeling... 5am meeting for me :)
[20:06] * thumper nods
[20:07] natefinch: got a minute to chat?
[20:07] thumper: sure
[20:07] natefinch: I want to bounce an idea off someone
[20:07] thumper: I can be your rubber duck
[20:07] natefinch: looking for more of a teddy bear https://plus.google.com/hangouts/_/7ecpi3me7dl01vto2l2n3368ns?hl=en
[20:07] thumper: not sure if that's better or worse ;)
[20:22] for juju, datadir normally is just /var/lib/juju?
[20:22] * hazmat is trying to interpret some api params
[20:23] hazmat: yeah
[20:42] hmm
[20:54] hazmat: I'm going to assume that hmm means everything is going perfectly. ;)
[20:56] natefinch, magically delicious as always
[21:39] looks like `juju destroy-environment local` removes some files which causes `sudo juju destroy-environment local` to fail looking for those files, putting it into a "corrupt" state
[21:39] has anyone noticed this before?
[21:41] hatch, thumper fixed a similar issue to that yesterday
[21:42] sinzui oh awesome
[21:42] interesting...
[21:42] should fix that...
[21:43] :)
[21:44] my test machine isn't the most pristine environment so I always like to confirm with others before I file bugs haha
[21:53] thumper: there's some stuff i'm keen to land which has been partially reviewed. how many beers/red wine would it take to get you to look and hopefully +1?
[21:56] wallyworld_: how long are they? the longer they are, the more wine it takes
[21:57] not tooooo long
[21:57] https://codereview.appspot.com/48880043/ https://codereview.appspot.com/49510043/ https://codereview.appspot.com/49500043/
[21:57] :-D
[21:58] not too short either
[21:58] * wallyworld_ goes to order a case of the finest French red
[22:38] wallyworld_: https://codereview.appspot.com/49510043/ can you respond to william's comments?
[22:46] thumper: looking
[22:47] thumper: yeah, i implemented that but didn't respond because i talked to him verbally and thought he'd do the last review
[22:47] will comment
[22:50] wallyworld_: ta
[22:51] wallyworld_: I'll take a look again after the gym
[22:51] thumper: ok, appreciate it thanks
[22:52] thumper, wallyworld_, Nate marked this branch approved to merge a few hours ago, but I don't think he realised that it is managed by ~juju, not gobot. Are either of you comfortable doing the merge and push? https://code.launchpad.net/~patrick-hetu/golxc/fix-1238541/+merge/200845
[22:52] sure can do
[22:52] will do now
[22:58] sinzui: that should be done
[22:59] thank you wallyworld_. LP agrees
[22:59] sinzui: how is the streams.c.c stuff going?
[22:59] I learned that Ben was testing the deployment today. I hope for tomorrow to be done
[23:00] \o/