[00:58] <thumper> bugger
[00:59] <thumper> followed closely by wat?
[00:59] <axw> rogpeppe: pong... presumably you're asleep though
[01:00] <axw> thumper: office space?
[01:00] <thumper> axw: yeah, there are now four of us in Dunedin
[01:01] <thumper> considering options around having an office
[01:01] <axw> ah, cool
[01:01] <thumper> axw: I'm looking at the bug wallyworld_ mentioned in the email
[01:01] <wallyworld_> thumper: meeting?
[01:01] <thumper> axw: it was the creation of ~/.juju/ssh
[01:01] <thumper> wallyworld_:
[01:01] <thumper> yeah
[01:01] <thumper> coming
[01:01] <axw> oh? :(
[01:02] <axw> sorry about that
[01:07] <thumper> axw: np, working on it
[01:09] <axw> brb, restarting
[01:11]  * axw loves his new SSD
[02:08] <wallyworld_> axw: don't forget to add trim support, unless you are running trusty :-)
[02:09] <axw> wallyworld_: yup thanks :)
[02:09] <axw> not running trusty yet
[02:09] <wallyworld_> i knew you'd know to do it :-)
[02:09] <wallyworld_> i want to upgrade to trusty, just a bit scared
[02:09] <axw> me too
[02:09] <wallyworld_> maybe best to wait a few weeks
[02:11] <axw> I was going to wait a few months ;)
[02:20] <wallyworld_> i've found the first beta to be ok in the past
[02:31] <axw> yeah maybe I'll get in on the beta
[02:37] <thumper> I found that very amusing
[02:37] <thumper> bootstrapped local
[02:37] <thumper> did juju status
[02:37] <thumper> it says "agent-state: down"
[02:37] <thumper> I'm like, bollocks
[02:37] <thumper> otherwise status wouldn't work
[02:38] <thumper> axw: can I get you to review the changes to the ~/.juju/ssh work?
[02:38] <axw> thumper: yes
[02:39] <thumper> axw: cheers, just proposing now
[02:44] <thumper> axw: https://codereview.appspot.com/52950043
[02:46] <axw> I wish we didn't have to run with sudo :(
[02:55] <thumper> me too
[02:56] <thumper> maybe soon
[03:09] <axw> thumper: reviewed
[03:09] <thumper> ta
[03:22] <thumper> axw: can we talk about this?
[03:22] <axw> thumper: sure
[03:23] <thumper> https://plus.google.com/hangouts/_/7ecpijqrs52ki6g2aveu0pajlg?hl=en
[04:53] <axw> thumper: thanks, looks better now (to me anyway)
[04:53] <thumper> axw: np
[04:59] <axw> wallyworld_: did you add the MachineConfig API, or was that jam?
[04:59] <axw> for manual provisioning
[04:59] <wallyworld_> um
[04:59] <wallyworld_> i can't recall doing it
[04:59] <axw> ok
[05:00] <wallyworld_> what does bzr annotate say?
[05:00] <axw> wallyworld_: says you did it actually :)
[05:00] <wallyworld_> which file?
[05:00] <axw> state/api/client.go
[05:01] <axw> wallyworld_: just wondering if there's a particular reason why Series was passed in, when that can be gotten from the state.Machine
[05:01] <axw> wallyworld_: and if I'm allowed to break compatibility with 1.17.3
[05:01] <axw> err
[05:01] <axw> 1.17.0
[05:02] <wallyworld_> can't recall exactly - may be because the old state based api had it as a param
[05:02] <wallyworld_> and the new api replicated that behaviour
[05:03] <wallyworld_> i'd have to look over the code again to remember. maybe add as a topic for discussion in the meeting tonight
[05:03] <axw> it would've been like that because it was all in the same function, I think
[05:03] <axw> mk
[05:03] <axw> thanks
[05:03] <wallyworld_> if i can get my current wip shit sorted, i'll look over the code
[05:04] <axw> ok, ta
[05:06] <wallyworld_> axw: the old 1.16 api seemed to require series to be passed in
[05:06] <axw> wallyworld_: which 1.16 API is that?
[05:06] <wallyworld_> environs/manual/provisioner.go recordMachineInState1dot16
[05:07] <wallyworld_> and then the same series is passed to MachineConfig
[05:07] <axw> wallyworld_: I'm talking about the bit *after* recording the machine in state
[05:08] <wallyworld_> but i guess the series is known
[05:08] <wallyworld_> yeah
[05:08] <axw> yeah
[05:08] <axw> :)
[05:08] <wallyworld_> i see what you are talking about now
[05:08] <axw> I'll put in an item for the meeting to make sure I can break it
[05:08] <wallyworld_> so it makes sense to drop it
[05:08] <wallyworld_> i think it will be ok because 1.17 is dev
[05:09] <axw> arch is known in that case too, but I'm not sure I can remove that one. MachineConfig could conceivably be used in other contexts, and Arch isn't required like Series is
[05:09] <axw> (right?)
[05:09] <axw> yeah I figured it's okay to break
[05:11] <axw> although, MachineConfig requires arch, so... maybe it's okay for it to assume it's set on the machine and just error out if not
[05:11] <axw> I'll do that
[07:11] <jamespage> davecheney, +1 on not shipping a fork btw
[07:11] <jamespage> davecheney, just reading your email
[08:12]  * fwereade needs to be out for a bit, be back by the meeting
[08:28] <jam1> mgz: rogpeppe: I'm going to have to miss the weekly meeting today, it is my son's first day of Karate, so I have to be there to get him all sorted out.
[08:35] <rogpeppe> jam1: ok
[08:45] <rogpeppe> axw: i was just pinging you in case you might be up for a review of https://codereview.appspot.com/52850043/
[08:45] <axw> rogpeppe: sure, will take a look now
[08:46] <rogpeppe> axw: thanks
[09:00] <axw> rogpeppe: lgtm
[09:00] <rogpeppe> axw: thanks
[09:03] <rogpeppe> axw: maybe you're right about logging join/leave at info level
[09:04] <rogpeppe> axw: it would be easy to do, and "
[09:04] <rogpeppe> juju
[09:04] <rogpeppe> deploy service -n 1000
[09:04] <rogpeppe> " won't spam
[09:04] <rogpeppe> axw: 'cos it only makes one connection
[09:04] <axw> each machine agent will connect later won't it?
[09:05] <rogpeppe> axw: ah, but i guess all the machines will make a conn, yeah
[09:05] <axw> that's my only reservation
[09:05] <rogpeppe> axw: 2000 lines isn't much though
[09:05] <axw> yeah I guess so. I've found that sort of thing useful in the past, debugging when clients unexpectedly exit
[09:06] <axw> tho I think we're at debug by default
[09:58] <axw> fwereade: I would appreciate a look at this later, if you have time: https://codereview.appspot.com/53040043/
[09:58] <axw> it's for hazmat
[10:07] <rogpeppe> axw, fwereade, dimitern, mgz, wallyworld_: team meeting?
[10:07] <dimitern> rogpeppe, we're all there
[10:07] <wallyworld_> we're here
[10:07] <fwereade> rogpeppe, we're having it ;p
[10:07] <dimitern> rogpeppe, https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.sbtpoheo4q7i7atbvk9gtnb3cc
[10:08] <rogpeppe> hmm, i must have clicked on the wrong link. i'm in https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.8sj9smn017584lljvp63djdnn8?authuser=1
[10:08] <axw> rogpeppe: https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.sbtpoheo4q7i7atbvk9gtnb3cc?authuser=1
[10:08] <rogpeppe> weird
[10:51] <axw> rogpeppe: this is the CL mentioned before: https://codereview.appspot.com/53040043/
[10:51] <rogpeppe> axw: thanks, will look
[10:51] <axw> I will bbl if you need to ask me anything about it
[10:51] <axw> ta
[10:53] <wallyworld_> fwereade: i think i've addressed the outstanding issues with the juju status work - the revision update functionality and the status command presentation of data. if you get a chance to take a look, great. if not, i'll chase a review tomorrow
[10:55] <rogpeppe> axw: so the idea behind this is that it means that you can provision machines that you can't ssh to, right?
[10:55] <mgz> dimitern: have proposed a pretty trivial initial goose branch
[10:56]  * fwereade makes no promises to wallyworld_ but thanks him for the reminder
[11:09] <dimitern> mgz, looking
[11:32] <axw> rogpeppe: yes, that is the idea
[11:35] <dimitern> mgz, reviewed
[11:37] <dimitern> mgz, i'm still looking at the networks spec doc and into the lxc/kvm brokers btw
[12:02] <rogpeppe> axw: reviewed
[12:05] <dimitern> noodles775, you around?
[12:05] <noodles775> dimitern: yep, for a bit.
[12:05] <dimitern> noodles775, re bug 1259925 you reproduced - can you do it reliably?
[12:05] <_mup_> Bug #1259925: juju destroy-environment does not delete the local charm cache <destroy-environment> <local-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1259925>
[12:06] <dimitern> noodles775, and if so, can you please leave a comment so I can try it myself and investigate?
[12:09] <noodles775> dimitern: I've not tried since then - should I retry with 1.17.0 or trunk? (I actually tried trunk this morning but couldn't get juju status to return sensible info http://paste.ubuntu.com/6761458/)
[12:10] <dimitern> noodles775, trunk would be preferable; have you made sure you rebuilt cmd/juju and cmd/jujud and then did bootstrap --upload-tools ? I was seeing similar status output yesterday before doing that
[12:11] <dimitern> noodles775, also, if you can reproduce it again, can you make a mongo dump please? I'll help to see what's in the db
[12:12] <noodles775> dimitern: no - I didn't do --upload-tools, I'll try that. Thanks.
[12:14] <dimitern> axw, have you tested your branch against 1.16 api server?
[12:24] <cgz> thanks for the goose review dimitern
[12:43] <sparkiegeek> am I right in saying that there's no API for querying the state of a relation (e.g. hooks have fired and state comes from failures or not)?
[13:15] <axw> dimitern: no, I haven't. I can do that tomorrow.
[13:16] <axw> thanks rogpeppe
[13:16] <dimitern> axw, I posted some review comments as well
[13:16] <axw> cool, thanks dimitern
[13:16] <dimitern> axw, cheers
[13:17] <rogpeppe> sparkiegeek: you can find out the current status of a unit, which will show you if a relation has failed. what information are you after?
[13:18] <sparkiegeek> rogpeppe: I want positive confirmation of success
[13:18] <rogpeppe> sparkiegeek: confirmation that a relation has been successfully made?
[13:18] <sparkiegeek> rogpeppe: exactly
[13:19] <sparkiegeek> rogpeppe: where "success" = no hooks have failed :/
[13:19] <sparkiegeek> rogpeppe: and more importantly, that they *have* been run
[13:19] <rogpeppe> sparkiegeek: ah, yes. you can get the former, but i'm not sure there's any current way to verify that the relation-joined hook has successfully run
[13:20] <rogpeppe> fwereade: ^ am i right there?
[13:20] <axw> dimitern: "I'd like to see a test about using ProvisioningScript against 1.16 API server" -- do you just mean an interactive test, or do we have some way of doing that in unit tests?
[13:20] <dimitern> axw, we have similar tests for other api calls - grep for 1dot16 methods and their associate tests
[13:21] <fwereade> rogpeppe, sparkiegeek: that is correct -- we have work scheduled to flag when a unit's finished doing work, but you can't tell today
[13:21] <sparkiegeek> fwereade: is there a bug number or blueprint you can point me at?
[13:23] <rogpeppe> fwereade: i'm not sure that that will provide quite the functionality required
[13:23] <rogpeppe> fwereade: a unit might have successfully joined the relation but still have work to do
[13:24] <fwereade> sparkiegeek, I can't find one immediately, I'm writing one, will close it as a dupe if I can track it down
[13:24] <rogpeppe> fwereade: (from relation changed events, for example)
[13:24] <sparkiegeek> fwereade: great! Thanks a lot
[13:24] <fwereade> rogpeppe, "finished doing work" ie completed all hooks, nothing scheduled for the future
[13:24] <axw> dimitern: ah yep I understand now. I will add a test to environs/manual that tests the compat
[13:24] <axw> dimitern: thanks
[13:24] <dimitern> axw, ta!
[13:24] <rogpeppe>  sparkiegeek: i'm wondering if what you really want here is a way for a charm to provide a positive indication that it's in a certain state
[13:25] <rogpeppe> fwereade, sparkiegeek: i'm thinking this might be another use case for output variables
[13:25] <rogpeppe> fwereade: the problem is that with juju run particularly, that state might never occur
[13:26] <rogpeppe> fwereade: a relation's attributes could be constantly changing, so the charm might never reach that steady state
[13:27] <rogpeppe> fwereade: but i suspect that sparkiegeek doesn't care about steady state as much as "this relation has been successfully joined"
[13:27] <sparkiegeek> rogpeppe: that would work for me too, assuming the states cover the scenario of being deployed vs. being deployed and successfully related to $X
[13:27] <sparkiegeek> rogpeppe: correct
[13:28] <rogpeppe> sparkiegeek: the idea is that a charm could set an output variable to communicate something to the outside world. so the relation-joined hook could set foorelation-ready=true, for example (the variable name would be arbitrary)
[13:29] <fwereade> rogpeppe, sparkiegeek: hmm, per-relation busy/idle flags?
[13:29] <rogpeppe> fwereade: again, i don't think that quite hits the mark
[13:30] <rogpeppe> fwereade: we don't care about busy vs idle but "are you in this state?"
[13:30] <fwereade> rogpeppe, ok, please be very precise about what state we're hoping to expose and why, I may have missed something
[13:31] <fwereade> rogpeppe, define a "successfully joined" relation
[13:31] <rogpeppe> fwereade: my impression is that output variables would not be very hard to implement. can you think of something that makes them so?
[13:31] <rogpeppe> fwereade: i don't think we can hope to make a canonical definition of that
[13:32] <fwereade> rogpeppe, well I assumed it was a proxy for "what sparkiegeek really wants"
[13:32] <rogpeppe> fwereade: hence i think that an output variable that lets the charm *say* "this thing has successfully happened", whether it's a relation successfully joined (by that charm's definition of success), or a web service being started, or whatever
[13:32] <fwereade> rogpeppe, "all scheduled hooks have been run for this relation, and none of them failed" STM to match the desires stated above
[13:33] <rogpeppe> fwereade: but that might never happen, in legitimate scenario
[13:33] <rogpeppe> s/in/in a/
[13:33] <fwereade> rogpeppe, well, I don't see how you can say the relation has been "successfully joined" until that's completed
[13:34] <fwereade> rogpeppe, it might always fall over next hook
[13:34] <rogpeppe> fwereade: sure. that's why i would define "success" at the juju level at all
[13:34] <rogpeppe> s/would/wouldn't/
[13:35] <rogpeppe> fwereade: i *think* that output variables provide sufficient tools for a charm to communicate "success" by whatever definition it chooses
[13:36] <fwereade> rogpeppe, I'm reluctant to abandon the idea that it should be mediated by juju somehow
[13:36] <rogpeppe> fwereade: how do you mean? you think there might be some general definition of "success" in this context?
[13:38] <fwereade> rogpeppe, I think the one I'm proposing has some utility, yes
[13:39] <rogpeppe> fwereade: ISTM that it's too low level and may be fragile if people come to rely on it extensively
[13:40] <rogpeppe> fwereade: it's also going to be hard to implement, i suspect - how can you actually tell when a client has no more hooks to execute?
[13:40] <fwereade> rogpeppe, "has this unit successfully responded to all known changes [in this relation ]without errors?" doesn't seem so hard to me
[13:41] <rogpeppe> fwereade: what does "respond" mean in that context? changes with respect to what base state?
[13:42] <fwereade> rogpeppe, "responded to" = "run all relevant hooks", right?
[13:42] <TheMue> nate_finch: your tweet yesterday about 90% of programming is so damned right. i'm currently trying to find an elegant way to get the log dir, which is different for containers like lxc or kvm. and the information is wonderfully private. *sigh*
[13:43] <rogpeppe> fwereade: so every time you run a hook, you store that in the state?
[13:44] <rogpeppe> fwereade: ISTM that output variables would be considerably easier to implement, have less runtime costs, and give access to a very useful range of new capabilities.
[13:46] <fwereade> rogpeppe, no
[13:46] <fwereade> rogpeppe, the GUI team *is* asking for that, and I think I'm ok activating that for specific units, but I don;t want a constant spew from all of them
[13:48] <rogpeppe> fwereade: so you think output variables are a bad idea in general?
[13:48] <fwereade> rogpeppe, no, I just don't think they solve the "wait for a unit to be ready" use case we already know we have
[13:49] <nate_finch> TheMue: yeah, I was trying to get the mongo port from the machine config into the machine agent.  Should be easy, but it's not
[13:49] <fwereade> rogpeppe, but that the report-busy-idle stuff *does* solve the are-we-ready-yet problem
[13:49] <fwereade> rogpeppe, if we do it via output variables we fuck the gui
[13:49] <rogpeppe> fwereade: there's no way in general to wait for a unit to be ready. a relation between A and B might have been successfully made, but it's quite possible that for a unit in A to be ready, B must have made a relationship with C
[13:50] <fwereade> rogpeppe, yes, you have to wait for the whole system to get to idle before you can be confident the whole thing will remain stable until perturbed
[13:50] <rogpeppe> fwereade: even then, you can't do it
[13:50] <rogpeppe> fwereade: units can asynchronously do stuff now, right?
[13:51] <fwereade> rogpeppe, I consider juju-run to be a perturbation
[13:51] <rogpeppe> fwereade: the system may very well never become idle
[13:51] <fwereade> rogpeppe, I think that's a symptom of a problem with the system that's been constructed then
[13:51] <rogpeppe> fwereade: not necessarily
[13:52] <rogpeppe> fwereade: it might be perfectly reasonable for a charm to provide some update on a relation every few seconds
[13:52] <rogpeppe> fwereade: i don't think we should focus too much on "idleness"
[13:53] <rogpeppe> fwereade: i think we should be much more interested in "history"
[13:53] <fwereade> rogpeppe, I am having some trouble imagining a use case there
[13:53] <TheMue> nate_finch: sounds familiar ;)
[13:53] <fwereade> rogpeppe, you use the relation settings to set up the channel over which the fast-changing info flows, surely
[13:54] <rogpeppe> fwereade: for example: a service that's a gateway to another service with a bunch of servers that are changing. it has a relation attribute that contains that set of servers.
[13:54] <rogpeppe> fwereade: it depends how fast-changing it is.
[13:55] <rogpeppe> fwereade: if something's changing on the order of once every 10 seconds, i think it may be reasonable to use relation attributes
[13:55] <rogpeppe> fwereade: and the point is: we've provided the capability, so people will do it *anyway*
[13:55] <rogpeppe> fwereade: and it would be good to make our tools cope well when it happens
[13:56] <fwereade> rogpeppe, I think that it's perfectly reasonable for us to report such a system as unstable then
[13:56] <fwereade> rogpeppe, that's coping perfectly with a system that's unstable
[13:57] <rogpeppe> fwereade: unstable is fine - we can still do useful work with an unstable system
[13:57] <Beret> fwereade, how would that screw over the GUI?
[13:57] <rogpeppe> fwereade: as an example of another possible approach, we could provide a "hook history", that lets us know which hooks have been executed in a given unit.
[13:58] <rogpeppe> fwereade: when a hook is run twice, we overwrite that hook's previous entry in the history
[13:58] <fwereade> Beret, if we let people use a completely unstructured channel to report working/not-working, the gui won't be able to show a useful distinction
[13:58] <Beret> ah, I was assuming the output variable would have to be structured
[13:58] <fwereade> Beret, busy/idle *can* be interpreted by the GUI and drawn as pale-green/solid-green or similar
[13:59] <rogpeppe> fwereade: each entry in the history could have its own time stamp.
[13:59] <sparkiegeek> or some egg timers/beachballs ;)
[13:59] <fwereade> Beret, the "output variables" idea is that they be minimally structured -- it's the flipside of charm config, which is really a service's "input variables"
[14:00] <rogpeppe> fwereade: the other thing is that a unit may look idle from a juju point of view, but be very busy internally
[14:00] <rogpeppe> fwereade: we could easily impose some conventions on output variables
[14:01] <rogpeppe> fwereade: to let charms show their status to the GUI in a standard way, for example
[14:01] <fwereade> rogpeppe, the same sort of thing that worked so well with haproxy and private-address, for example?
[14:02] <rogpeppe> fwereade: remind me
[14:02] <fwereade> rogpeppe, haproxy has "host", everything else in the world uses "private-address"
[14:03] <fwereade> rogpeppe, relying on convention is I think inadequate
[14:03] <rogpeppe> fwereade: if you don't use the conventions, the GUI won't see you. seems reasonable to me.
[14:03] <fwereade> rogpeppe, and if you accidentally use them, the gui acts weird
[14:04] <rogpeppe> fwereade: you could even have the uniter set output variables actually
[14:04] <rogpeppe> fwereade: (the history idea above could work that way)
[14:04] <rogpeppe> fwereade: convention can work well
[14:05] <rogpeppe> fwereade: i prefer a general mechanism with some conventions to building more and bigger Stuff
[15:00] <dimitern> mgz, fwereade, meeting?
[15:01] <dimitern> we can use this https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[15:01] <cgz> dimitern: I didn't add a hangout or anything... let's use that
[15:09] <jamespage> fwereade, any thoughts on that suggestion I made re juju letting charms know if units are alive?
[16:18] <fwereade> jamespage, well, I am still not sure it'll really solve the problem
[16:18] <fwereade> jamespage, I know the "idle" hook I mentioned is not a thing, but I'm still interested to know if it helps you
[16:19]  * fwereade maybe missed a followup elsewhere..?
[16:36] <natefinch> rogpeppe: WIP - https://codereview.appspot.com/53220043/   ignore the replicaset.go changes.. just some extraneous changes that got on the wrong branch.
[16:50] <jamespage> fwereade, sorry - idle hook?
[16:50] <jamespage> must have missed that bit
[17:01] <rogpeppe> natefinch: reviewed
[17:06] <natefinch> rogpeppe: thanks
[17:21] <natefinch> rogpeppe: about restarting mongo every time the machine agent bounces... I had been assuming we'd only call this method when we knew the upstart script was either out of date or missing (i.e. on upgrade from previous versions, or when starting a new state server).  I think either we'll know when we need to call this, or if we aren't sure, we'll have to call it every time just in case.
[17:22] <rogpeppe> natefinch: the idea was that we'd call it when starting up, if we find we have a ManageState job
[17:22] <rogpeppe> natefinch: that's kind of the point
[17:23] <rogpeppe> natefinch: in that case we need to start the mongo server if it's not already started
[17:25] <natefinch> rogpeppe: right, it occurred to me after I said that, that the "Ensure" part of the name means that logic should be encapsulated
[17:25] <rogpeppe> natefinch: yes please :-)
[17:29] <mattyw> rogpeppe, what happened to that charm lib you were writing in go?
[17:30] <rogpeppe> mattyw: launchpad.net/juju-utils/gocharm
[17:30] <rogpeppe> mattyw: um, maybe i got the path wrong there
[17:30] <rogpeppe> mattyw: launchpad.net/juju-utils/cmd/gocharm
[17:30] <mattyw> rogpeppe, in here? lp:juju-utils
[17:31] <rogpeppe> mattyw: yeah
[17:31] <rogpeppe> mattyw: for docs, see http://godoc.org/launchpad.net/juju-utils/cmd/gocharm
[17:31] <rogpeppe> mattyw: and http://godoc.org/launchpad.net/juju-utils/hook
[17:32] <rogpeppe> mattyw: the latter is the actual charm hook interface that you write code against
[17:32] <rogpeppe> mattyw: the former just compiles the code and generates hook stubs
[17:33] <mattyw> rogpeppe, looks awesome, will try to have a play with it over the next few days
[17:33] <rogpeppe> mattyw: please do. i'd love any feedback at all.
[18:20] <natefinch> oh local environment... why do you have to throw a wrench in everything?
[19:08] <jcastro> hey sinzui
[19:09] <sinzui> hi jcastro
[19:09] <jcastro> you still get this right? https://bugs.launchpad.net/golxc/+bug/1238541
[19:09] <jcastro> it's making life annoying, I was wondering maybe we can bug a core dev together today?
[19:09]  * natefinch ducks
[19:09] <_mup_> Bug #1238541: Local provider isn't usable after an old environment has been destroyed <intermittent-failure> <local-provider> <golxc:New> <https://launchpad.net/bugs/1238541>
[19:10] <jcastro> natefinch, look at how easy it is! :)
[19:11] <sinzui> hmm
[19:11] <jcastro> natefinch, I can also just prod the list if you'd like
[19:12] <sinzui> jcastro, I will get someone to investigate it. I thought bug 1269363 may have indirectly addressed the issue
[19:12] <_mup_> Bug #1269363: local environment broken with root perms <local-provider> <ssh> <juju-core:Fix Committed by thumper> <https://launchpad.net/bugs/1269363>
[19:12] <natefinch> jcastro: yjrtr
[19:13] <natefinch> jcastro: there's a branch with a fix already submitted.  I can probably just approve the branch
[19:14] <jcastro> that would be swell, we're doing a bunch of bootstraps/teardowns as part of the audit and cleaning up the containers by hand gets old after a while, heh
[19:14] <sinzui> bugger, I cannot manage the bugs in golxc
[19:15] <sinzui> oh, it is not a part of juju-project. Either a mistake, or the project really isn't about juju
[19:17] <natefinch> sinzui: I approved the fix and marked the bug as fix committed
[19:18] <sinzui> thank you natefinch
[19:36] <jcastro> natefinch, I owe you a beer, thanks!
[20:05] <thumper> morning folks
[20:06] <thumper> sometimes it feels like I never stop working
[20:06] <thumper> that is what 11pm meetings do for you :-(
[20:06] <natefinch> thumper: haha... I know the feeling... 5am meeting for me :)
[20:06]  * thumper nods
[20:07] <thumper> natefinch: got a minute to chat?
[20:07] <natefinch> thumper: sure
[20:07] <thumper> natefinch: I want to bounce an idea off someone
[20:07] <natefinch> thumper: I can be your rubber duck
[20:07] <thumper> natefinch: looking for more of a teddy bear https://plus.google.com/hangouts/_/7ecpi3me7dl01vto2l2n3368ns?hl=en
[20:07] <natefinch> thumper: not sure if that's better or worse ;)
[20:22] <hazmat> for juju datadir normally is just /var/lib/juju?
[20:22]  * hazmat is trying to interpret some api params
[20:23] <natefinch> hazmat: yeah
[20:42] <hazmat> hmm
[20:54] <natefinch> hazmat: I'm going to assume that hmm means everything is going perfectly. ;)
[20:56] <hazmat> natefinch, magically delicious as always
[21:39] <hatch> looks like `juju destroy-environment local` removes some files, which causes `sudo juju destroy-environment local` to fail when looking for those files, putting it into a "corrupt" state
[21:39] <hatch> has anyone noticed this before?
[21:41] <sinzui> hatch, thumper fixed a similar issue to that yesterday
[21:42] <hatch> sinzui oh awesome
[21:42] <thumper> interesting...
[21:42] <thumper> should fix that...
[21:43] <hatch> :)
[21:44] <hatch> my test machine isn't the most pristine environment so I always like to confirm with others before I file bugs haha
[21:53] <wallyworld_> thumper: there's some stuff i'm keen to land which has been partially reviewed. how many beers/red wine would it take to get you to look and hopefully +1?
[21:56] <thumper> wallyworld_: how long are they? the longer they are, the more wine it takes
[21:57] <wallyworld_> not tooooo long
[21:57] <wallyworld_> https://codereview.appspot.com/48880043/ https://codereview.appspot.com/49510043/ https://codereview.appspot.com/49500043/
[21:57] <wallyworld_> :-D
[21:58] <wallyworld_> not too short either
[21:58]  * wallyworld_ goes to order a case of the finest French red
[22:38] <thumper> wallyworld_: https://codereview.appspot.com/49510043/ can you respond to williams comments?
[22:46] <wallyworld_> thumper: looking
[22:47] <wallyworld_> thumper: yeah, i implemented that but didn't respond because i talked to him verbally and thought he'd do the last review
[22:47] <wallyworld_> will comment
[22:50] <thumper> wallyworld_: ta
[22:51] <thumper> wallyworld_: I'll take a look again after the gym
[22:51] <wallyworld_> thumper: ok, appreciate it thanks
[22:52] <sinzui> thumper, wallyworld_ , Nate marked this branch approved to merge a few hours ago, but I don't think he realised that it is managed by ~juju, not gobot. Are either of you comfortable doing the merge and push? https://code.launchpad.net/~patrick-hetu/golxc/fix-1238541/+merge/200845
[22:52] <wallyworld_> sure can do
[22:52] <wallyworld_> will do now
[22:58] <wallyworld_> sinzui: that should be done
[22:59] <sinzui> thank you wallyworld_ . Lp agrees
[22:59] <wallyworld_> sinzui: how is the streams.c.c stuff going?
[22:59] <sinzui> I learned that Ben was testing the deployment today. I hope for tomorrow to be done
[23:00] <wallyworld_> \o/