#juju-dev 2012-06-11
<twobottux> aujuju: How do I add and call a helper script in a Juju charm? <http://askubuntu.com/questions/149240/how-do-i-add-and-call-a-helper-script-in-a-juju-charm>
<rogpeppe> davecheney, fwereade_, hazmat: mornin' all
<TheMue> Morning.
<davecheney> morning!
<rogpeppe> TheMue: hiya
<TheMue> rogpeppe: Are you an early bird? Or do you have your notebook beside your breakfast? ;)
<rogpeppe> TheMue: i'm not historically an early bird, but these days i do seem to be waking up earlier.
<rogpeppe> TheMue: keen to get on with all the stuff that's been rumbling round my head all weekend without access to a computer...
<TheMue> rogpeppe: No access? You've been on tour? Or did Carmen shut it away?
<rogpeppe> TheMue: it's boring for Carmen if i spend the weekend hacking :-)
<TheMue> rogpeppe: Understandable.
<davecheney> crap, how did I suddenly get signed up for the intel team spam
<davecheney> OOPS: 19 passed, 1 FAILED, sooo close
<davecheney> rogpeppe: if I call dummy.StopInstances(nil), should it emit OpStopInstances ?
<rogpeppe> davecheney: i guess so
<davecheney> but passing 0 instances to stop, stops nothing
<rogpeppe> davecheney: is it not doing so?
<rogpeppe> davecheney: but the emit is about the StopInstances action; it's not OpStopInstance.
<rogpeppe> davecheney: i can see an argument for changing that, but that's how it is currently
<davecheney> so it's saying I used that method, not that it did anything
<rogpeppe> davecheney: yeah
<rogpeppe> davecheney: i think.
<rogpeppe> davecheney: let me have a look.
<davecheney> rogpeppe: i already know the answer :)
<rogpeppe> davecheney: yes, the idea is that you want to know if the method has been called, even if it, for instance, returns an error.
<davecheney> how could it return an error if you pass no instances to stop
<davecheney> arguably that is a NOOP
<rogpeppe> davecheney: dummy is recording method calls, not actions.
<davecheney> rogpeppe: nm I stuck a comment in to explain, we can haggle about it in the review
<rogpeppe> davecheney: sure
<davecheney> FAIL: provisioning_test.go:288: ProvisioningSuite.TestProvisioningRemovesUnknownMachineAfterRestart
<davecheney> close now
<davecheney> man, that multiple PA's running inside the test process had me going spare this morning
<davecheney> also, i'm not sure that defer c.Assert(p.Stop, IsNil) is safe
<davecheney> TheMue: does gocheck have a way of running a single test ?
<rogpeppe> davecheney: go test -gocheck.f regexp
<rogpeppe> (f for "filter")
<davecheney> nice
<rogpeppe> TheMue: a review for you: https://codereview.appspot.com/6307072
<TheMue> rogpeppe: Thx
<rogpeppe> davecheney: and one for you: https://codereview.appspot.com/6305063
<davecheney> rogpeppe: why thank you good sir
<TheMue> rogpeppe: And a LGTM back.
<rogpeppe> TheMue: thanks
<fwereade_> morning all
<TheMue> fwereade_: Moin.
<rogpeppe> fwereade_: yo!
<rogpeppe> davecheney: there's no point in doing defer c.Assert(p.Stop, IsNil)
<davecheney> rogpeppe: not a lot
<rogpeppe> davecheney: the result will always be the same as not deferring it
<davecheney> rogpeppe: won't the Assert failure be recorded somewhere ?
<rogpeppe> davecheney: because p.Stop is evaluated when you make the defer, not when the defer runs
<rogpeppe> davecheney: so why bother delaying the inevitable?
<davecheney> o_o!
<rogpeppe> davecheney: you might wanna do defer func() { c.Assert(p.Stop(), IsNil) }() though :-)
<davecheney> of course!
<davecheney> anyway, that is enough for me, i'm going downstairs to finish off the last tallboy in the fridge
<fwereade_> TheMue, did you have any comments on my review of https://codereview.appspot.com/6305067/ -- I suspect I may be missing something
<TheMue> fwereade_: The idea behind the issue has just been that in the first draft some checks are made after (!) the first node is created. Now all checks are made before, so that we have no inconsistent state in ZK.
<TheMue> fwereade_: https://bugs.launchpad.net/juju-core/+bug/1007373
<fwereade_> TheMue, understood: my contention is that the inputs to that function can only sanely come from service/charm state, and that we should be able to assume that if they don't contain real values we would have errored out already
<fwereade_> TheMue, it may be that this is slapdash cowboyish thinking of the worst sort
<TheMue> fwereade_: We should always be able to trust that a caller only uses valid values as arguments of a function. But practice shows we can't. So in the case of creating nodes through atomic functions (sadly no transaction for the whole change) we should check.
<fwereade_> TheMue, rogpeppe: the stuff of yours that I know of LGTM
<TheMue> fwereade_: Cheers.
<rogpeppe> fwereade_: thanks
<rogpeppe> fwereade_: am just working on tests for the upgrade stuff, and hoping that gustavo thinks it's ok...
<fwereade_> rogpeppe, I know the feeling :)
<TheMue> fwereade_: Btw, most of my little change is already covered in Rogers proposal.
<rogpeppe> TheMue: which proposal?
<rogpeppe> TheMue: which change?
<TheMue> rogpeppe: My verification of endpoints in AddRelation() before doing any change in ZK.
<rogpeppe> TheMue: oh, i didn't think i had any CL about relation endpoints
<TheMue> rogpeppe: No, but regarding relations in https://codereview.appspot.com/6303060/
<rogpeppe> TheMue: that's fwereade_'s proposal
<TheMue> rogpeppe: Ooops, yep, just seen. :)
<rogpeppe> TheMue: np!
<Aram> morning.
<TheMue> Aram: Moin.
<rogpeppe> Aram: yo!
<rogpeppe> fwereade_: just having a look at https://codereview.appspot.com/6305070
<rogpeppe> fwereade_: i can't quite see what VersionWatcher gives us that ContentWatcher doesn't already have
<rogpeppe> fwereade_: doesn't a ContentWatcher
<rogpeppe> deliver a notification
<rogpeppe> // whenever the node is created, written to, or deleted.
<rogpeppe> ?
<rogpeppe> oops, intended to edit before pasting :-)
<fwereade_> rogpeppe, it's more what it doesn't do
<rogpeppe> fwereade_: ah, what doesn't it do?
<fwereade_> rogpeppe, I don't think there's any need to read and parse the settings node when the actual data is going to be packed up and dumped in a queue somewhere
<fwereade_> rogpeppe, sorry, "actual data" == "only thing we care about, being the fact of a change"
 * fwereade_ suddenly changes his mind, maybe
<fwereade_> rogpeppe, ok, the issue is this
<rogpeppe> fwereade_: ah, i see.
<Aram> eh, I see that I can't do a plain 'bzr pull' in bzr, do I have to specify the branch name every time?
<fwereade_> Aram, --remember
<fwereade_> rogpeppe, I think there is a case to be made that we *do* care about the exact settings of a node, and want to persist them, so that when we execute foo-relation-changed we already have access to the remote unit's settings data *at the time of the change event*, rather than some time after
<fwereade_> rogpeppe, ie when the hook happens to execute
<rogpeppe> fwereade_: aren't we *always* seeing data from some time after?
<rogpeppe> fwereade_: due to the inherent nature of networking
<fwereade_> rogpeppe, yeah, but we're not waiting an arbitrary amount of time and then casually getting settings, just assuming the node is still there
<Aram> fwereade_: thanks
<fwereade_> rogpeppe, which is what we do in python
<fwereade_> rogpeppe, and which bugs me somewhat
<rogpeppe> fwereade_: this seems to me like an argument *for* using ContentWatcher
<fwereade_> rogpeppe, hence the * fwereade_ suddenly changes his mind, maybe
<rogpeppe> fwereade_: but the problem is that we throw away the content we just read
<rogpeppe> fwereade_: because all the State methods go back to the db rather than using the data we just read
<rogpeppe> fwereade_: (that's probably not true of settings nodes though)
<fwereade_> rogpeppe, just keeping the content around as a string is all we need
<fwereade_> rogpeppe, we can load it and use it when we come to actually execute a hook
<rogpeppe> fwereade_: doesn't a ConfigNode keep the original data around? or are you interested in other kinds of node?
<fwereade_> rogpeppe, I'm interested specifically in presenting to a hook an environment that reflects the state that caused the hook to be triggered, rather than an arbitrary unrelated state
<fwereade_> rogpeppe, but then I may be on crack here
<rogpeppe> fwereade_: no, that sounds very reasonable to me.
<rogpeppe> fwereade_: and i've been wittering on about something similar
<rogpeppe> fwereade_: (last time i think you said you found the old model ok to work with)
<fwereade_> rogpeppe, we do already keep track of the "current" members of the relation; I presume we'd actually really also want to keep track of the "current" settings for all members
<fwereade_> rogpeppe, well, it was ok to work with, but only by coincidence, because we never cleaned up after ourselves and never had nodes disappear on us
<rogpeppe> fwereade_: could you briefly explain to me how relation members are stored in zk?
<fwereade_> rogpeppe, ok, peer example feels easiest
<fwereade_> rogpeppe, /relations/relation-XXXXX has 2 children
<fwereade_> rogpeppe,  "settings" and "peer"
<rogpeppe> fwereade_: oh, i thought that "peer" was an attribute rather than a key
<fwereade_> rogpeppe, each of those contains config nodes and presence nodes respectively, each keyed on the unit id responsible for maintaining those nodes
<fwereade_> rogpeppe, in a pro/req case, it'd have 3 children: "provides", "requires", "settings"
<fwereade_> rogpeppe, containing the same data, but with the presence nodes distributed among provides and requires depending on which end of the relation they were on
<fwereade_> rogpeppe, but in general the /path/to/<role> and /path/to/settings structure is the important one
<rogpeppe> fwereade_: ok
<fwereade_> rogpeppe, there is variation in the /path/to for container-scoped relations -- this structure is duplicated for every affected primary unit in the relation
<fwereade_> rogpeppe, so in practice sometimes /path/to is /relations/relation-XXXX and sometimes it's /relations/relation-XXXX/unit-YYY-ZZZ
<rogpeppe> fwereade_: what does the "settings" node contain?
<fwereade_> rogpeppe, the unit's private address, at least; but in general anything that is meant to be communicated over that relation, AIUI
<fwereade_> rogpeppe, the settings nodes are the conduits by which information flows across a relation
<rogpeppe> fwereade_: oh, i'm not sure i understand the distinction between "settings" and "<role>"
<fwereade_> rogpeppe, the children of the <role> node are presence nodes
<fwereade_> rogpeppe, when a unit agent joins a relation, it should set its own role node in that relation to alive
<fwereade_> rogpeppe, when the role node is alive, we watch for settings changes, and stop when the role node is no longer alive
<rogpeppe> fwereade_: all this is as it is in the python?
<fwereade_> rogpeppe, the actual code is very different but I think I have translated what actually happens with reasonable fidelity
<rogpeppe> fwereade_: the zk structure, i meant
<fwereade_> rogpeppe, yes
<rogpeppe> fwereade_: so for relationship between two services each with one unit, the structure might look like this: http://paste.ubuntu.com/1035291/
<rogpeppe> ?
<fwereade_> rogpeppe, yep
<rogpeppe> fwereade_: ok, cool, that's useful thanks.
<fwereade_> rogpeppe, ok, so, I shall dump version-watcher; unpropose unit-relation-watcher, and rework it to send settings content rather than version; and repropose without reqs
<fwereade_> rogpeppe, sound good?
<rogpeppe> fwereade_: it's perhaps interesting that presence *could* be conveyed through the config node too, presumably.
<fwereade_> rogpeppe, it probably could but I'm not entirely sure it should ;)
<fwereade_> rogpeppe, another question
<fwereade_> rogpeppe, ChildrenWatcher errors out when the watched node doesn't exist
 * rogpeppe goes to look at unit-relation-watcher again
<fwereade_> rogpeppe, this feels to me inconsistent with other watchers
<rogpeppe> fwereade_: are you actually using ChildrenWatcher?
<fwereade_> rogpeppe, in an as-yet-unproposed branch, yes
<rogpeppe> fwereade_: to watch what?
<fwereade_> rogpeppe, related units
<rogpeppe> fwereade_: that info isn't held in the topology?
<fwereade_> rogpeppe, not the current set of connected units, which updates as agents respond to topology changes, no
<fwereade_> rogpeppe, the topology is saying "this unit should be in this relation; come on, unit agent, get involved"
<rogpeppe> fwereade_: ah, and then the unit agent creates the nodes under /relations/relation-xxxx ?
<fwereade_> rogpeppe, the presence node is saying "I, the unit agent, am actively participating in this relation"
<fwereade_> rogpeppe, and the units on the other side need to know what agents are actually around, not what units we'd like to be around
<rogpeppe> fwereade_: so... in this case, you are guaranteed that the parent node does exist, right?
<fwereade_> rogpeppe, the way I've written it I'm not
<rogpeppe> fwereade_: because /relations/relation-0000/{provides,requires,settings} is created when the relation is created, no?
<fwereade_> rogpeppe, it could be but I don't see a good reason to do so
<rogpeppe> fwereade_: why *wouldn't* you create those node immediately?
<rogpeppe> s/node/nodes/
<fwereade_> rogpeppe, in container-scoped relations, we create the role node as the unit joins because that's the only point at which we know we need a container-scoped role node under that key
<fwereade_> rogpeppe, it seems neater to me to *always* create the role node just as the unit joins
<fwereade_> rogpeppe, just because it doesn't involve smearing the responsibility for it into two rather separate places
<fwereade_> rogpeppe, so, then, *if* ChildrenWatcher were to treat "node doesn't exist" as "no children, it's cool, I'll wait" rather than "OMGBBQWTF", I could use it very nicely
<rogpeppe> fwereade_: BTW why do we care about the presence of the unit at the other end of a relation, rather than just its relation settings?
<fwereade_> rogpeppe, because a bunch of settings set 3 hours ago on a machine that is now molten slag are of limited value in determining active participation (ie the ability to respond to changes)
<rogpeppe> fwereade_: sorry, what does "responding to changes" mean in this context?
<fwereade_> rogpeppe, seeing that there's a new unit on the other end of the relation and updating its own settings to match
<rogpeppe> fwereade_: still at sea. "updating its own settings to match" ?
<rogpeppe> fwereade_: apologies for my lack of knowledge in this area!
<fwereade_> rogpeppe, I am hadoop-master and I just saw a hadoop-slave come online; I consequently perform whatever hadoopy magic is necessary to start me distributing jobs to that slave in addition to all the others
<fwereade_> rogpeppe, alternatively, I am hadoop-slave and I just saw a hadoop-master appear; I'll set some stuff in my settings so he knows about me and starts sending me jobs
<rogpeppe> fwereade_: ah, so if the presence node isn't active, we assume that the unit has left?
<fwereade_> rogpeppe, exactly
<rogpeppe> fwereade_: but its settings remain around?
<fwereade_> rogpeppe, the way I see it, the validity of a settings node is contingent on an active presence node
<fwereade_> rogpeppe, it should be the UA's responsibility to create its settings node before becoming active
<rogpeppe> fwereade_: but if the unit comes back online, its settings will be as before?
<fwereade_> rogpeppe, ^^ or update its settings node -- whatever
<rogpeppe> fwereade_: for instance, if there was a network outage and then the node gets reconnected
<fwereade_> rogpeppe, it's the UA's responsibility to provide sane settings if it's going to declare itself active
<rogpeppe> fwereade_: ok
<rogpeppe> fwereade_: another question: why store the presence nodes under <role> ?
<fwereade_> rogpeppe, so in that case for example, the UA should surely always update the settings node's private-address when it comes online
<rogpeppe> fwereade_: why not just have a single "presence" directory containing presence nodes for all units participating in that relation?
<fwereade_> rogpeppe, so that provider units can watch the children of the requirer node, and requirer units can watch the chidren of the provider node, without settings on the same side of the relation causing confusion
<fwereade_> rogpeppe, with peer relations it works how you describe, but it filters out just its own changes
<rogpeppe> fwereade_: ah, i understand
<fwereade_> rogpeppe, it's really quite neat once you appreciate the details :)
<fwereade_> rogpeppe, although honestly I had a couple of aha! moments only just this weekend
<fwereade_> rogpeppe, seeing what was really going on through the thick fog of twisted was somewhat challenging ;)
<rogpeppe> fwereade_: for subordinate relations what name is used for the presence node directory?
<fwereade_> rogpeppe, subordinate relations use the unit key of the principal to store the unit relations that exist within that principal unit
<rogpeppe> fwereade_: hazmat suggested that we should copy some of the internal docs to juju-core, and i think i agree.
<fwereade_> rogpeppe, yeah, that makes a lot of sense
<rogpeppe> fwereade_: example full directory name?
<rogpeppe> fwereade_: is it stored under /units?
<fwereade_> rogpeppe, /relations/relation-X/unit-A-B/settings/unit-A-B
<fwereade_> rogpeppe, /relations/relation-X/unit-A-B/settings/unit-C-D
<rogpeppe> fwereade_: ah, so the unit key takes the place of <role?>
<rogpeppe> s/\?//
<fwereade_> rogpeppe, in which unit-A-B is a unit of the principal service, which has its own container
<fwereade_> rogpeppe, no
<fwereade_> rogpeppe, /relations/relation-X/unit-A-B/provider/unit-A-B
<rogpeppe> oh i see
<rogpeppe> gotcha
<fwereade_> rogpeppe, /relations/relation-X/unit-A-B/requirer/unit-C-D
<fwereade_> rogpeppe, ...and unit-C-D is the unit of the subordinate service which shares unit-A-B's container
<rogpeppe> fwereade_: this has been *really* useful BTW. i only had a very very handwavy idea of what was going on before.
<fwereade_> rogpeppe, it's firmed up my own confidence a lot, there's nothing like someone asking questions to test whether you really know something ;)
<fwereade_> rogpeppe, so... how would you feel about me making that change to ChildrenWatcher? IMO it's consistent with stuff like ContentWatcher returning {false, ""} for a node that doesn't exist
<rogpeppe> fwereade_: i'm just thinking about it...
<fwereade_> rogpeppe, and it makes it a lot more useful to me
<rogpeppe> fwereade_: given that this is the first genuine use case for ChildrenWatcher, i'd tend towards +1
<fwereade_> rogpeppe, the other use case is /machines, I think
<fwereade_> rogpeppe, but it doesn't affect that case
<rogpeppe> fwereade_: i don't *think* so. i think that the provisioning agent watches the topology, not /machines
<fwereade_> rogpeppe, goodness me, you're absolutely right
<rogpeppe> fwereade_: but as you say, it doesn't affect that case even so
<fwereade_> rogpeppe, cool, thanks
<rogpeppe> np
<Aram> hi niemeyer
<niemeyer> Hello there!
<niemeyer> Aram: Hey man
<rogpeppe> niemeyer: yo!
<rogpeppe> niemeyer: welcome back.
<niemeyer> rogpeppe: Heya
<niemeyer> Thanks
<niemeyer> It was a great break indeed
<niemeyer> How're you guys doing?
 * Aram is drinking his tea.
<rogpeppe> niemeyer: all good here. in particular i'm eagerly awaiting your reaction to my upgrade proposal
<niemeyer> Aram: Me too, kind of
<niemeyer> Aram: Chimarrão, which is a tea, but we don't call it as such :)
<Aram> heh.
<niemeyer> rogpeppe: How did it change?
<rogpeppe> niemeyer: did you see my post on juju-dev?
<niemeyer> rogpeppe: I haven't read it yet
<niemeyer> rogpeppe: But I was expecting to see it close to what we talked
<rogpeppe> niemeyer: well, yeah, it's changed, but in a good way, i think. i decided that what we'd arrived at wasn't sufficient.
<niemeyer> (s/was/am/)
<niemeyer> rogpeppe: Ok, will read it
<rogpeppe> niemeyer: it's not good to just exit and let upstart deal with that.
<niemeyer> But first let me warm up the chair :)
<rogpeppe> niemeyer: :-)
<niemeyer> rogpeppe: How so?
<niemeyer> Well, you probably explained it in the mail
<rogpeppe> niemeyer: yeah, hopefully sufficiently...
<Aram> niemeyer: a simple branch for you to review: https://codereview.appspot.com/6298062/ I have some more in the pipe, but everything depends on this.
<niemeyer> Aram: Looking
<hazmat> niemeyer, g'morning
<niemeyer> hazmat: Heya
<niemeyer> Aram: Going through the description
<niemeyer> Aram: Can we talk a bit about which issues you've found that required the several attempts to be done
<Aram> well yes, sure. I did 1, 2, and 3; 4 was out of the question because preserving Path uniqueness was too important.
<Aram> usually the most direct way of doing these things is 3, make each node remember its children.
<Aram> but this is not too great.
<niemeyer> Aram: Where's the dot in that sentence? :
<niemeyer> :)
<niemeyer> Aram: I guess you did (1, 2, and 3)?
<Aram> yes
<niemeyer> Ok
<niemeyer> Aram: So what were the issues you've found with 3?
<Aram> You need to update parent when you create/delete a child, this is not optimal.
<niemeyer> Aram: What were the issues you've found with that?
<Aram> it can be unreliable and makes the code significantly more complex. in my experience with namespace-like things, it is usually a design error for a high level call to generate multiple independent exported operations. having a 1:1 API-atomic operation model is useful and easier to debug and understand.
<niemeyer> Aram: Okay, but I'd appreciate understanding more about the decision. You're saying it's unreliable and complex, and breaks a property that you like, but why?
<niemeyer> Why is it unreliable and complex, and why do we need such a property
<Aram> it has more points of failure, e.g. it can create the node but fail to update its parent and the code to deal with such failures is complex.
<niemeyer> Aram: Ok, agreed
<niemeyer> Aram: Why do we need a parent node within a child?
<niemeyer> Erm.. s/node/field/
<Aram> so we can query a node for its children efficiently. if we have a node 'path' and we want its children we query the DB for nodes with parent 'path'. it's very fast.
<Aram> if we don't have a parent field it's more complex, in /a/b/c a query for ^/a yields both b and b/c, and we want only b, so we need to filter direct descendants client side.
<Aram> we could construct a more complex regexp, but it would not use the index, so it would be slow to use.
<Aram> this slight bit of redundancy was the only way I could come up with so that we could use the index at all times.
<niemeyer> Aram: We'd use the index at all times anyway. What you're avoiding is the cost of excluding entries at either side
<Aram> yes.
<niemeyer> Aram: Probably not worth it
<rogpeppe> presumably for getting children of /, the query would return all entries in the tree
<Aram> yes
<niemeyer> Hmm
<rogpeppe> but for our use case, perhaps we don't care too much, as we only do GetChildren on shallow nodes.
<niemeyer> Fair enough.. it'd suck to have to walk through the entire tree for these cases..
<niemeyer> A bit sad that we'll need an extra index, though
<niemeyer> Aram: Did you put some thinking onto what we briefly talked about at London?
<Aram> the watchers thing? yes, working on it.
<niemeyer> Aram: Regarding the fact a parent doesn't have a well-defined number of children for a given version
<Aram> ah, that one, sure. it's in the pipe.
<Aram> I decided to make Create behave like mkdir -p
<niemeyer> Aram: Well, I don't know if we care yet.. I just would like to understand whether we do or not, and what we're ignoring
<niemeyer> Aram: in which sense?
<Aram> to create all necessary nodes, e.g. create /a/b/c creates /a and /a/b if required.
<Aram> I'm not sure if we care that much, but ATM we always return a stat of the parent when querying for children.
<Aram> so it's easy if the parent really exists.
<niemeyer> Aram: It's a bit more involved than that
<niemeyer> Aram: In the design we're leaning towards, the same parent version may have a different number of children
<niemeyer> Aram: Which I don't think can happen on real zk
<niemeyer> Aram: Maybe we don't care about that, but that should be a conscious decision if nothing else
<rogpeppe> i think having Create work like mkdir -p is quite a big semantic change
<Aram> well, if we decide that old parents are immutable in the sense that only new versions may acquire new children, it would be a simple thing to arrange for. I'm not sure if we care about that, but I could look at how we use ZK today.
<Aram> rogpeppe: yes, it's quite a big semantic change, but I think it's for the better, it should make things easier, not harder.
<niemeyer> Aram: Thanks, the key is really to understand which path we're going down towards.. not suggesting we should implement the exact semantics at this point
<rogpeppe> i'm trying to remember the awkward case i came up with before that meant that it wasn't great to make directories implicitly
 * rogpeppe fails
<niemeyer> Aram: Review in
<Aram> thanks
<Aram> reading
<Aram> niemeyer: what race? (probably I have a wrong assumption about mongodb).
<niemeyer> Aram: Duh, sorry, the comment went into the wrong location
<niemeyer> Aram: The race I was alluding to is the creation of nodes without parents
<Aram> niemeyer: I am aware of that race, don't have a good idea on how to solve it yet, have a few ideas though. at the moment you can create nodes without parents, but you can't query the non-existent parents for children.
<Aram> that means you don't get stale data, but an error, which is probably better.
<niemeyer> Aram: Both sound bad
<niemeyer> Aram: and they're both related to how we store children information.. we shouldn't dive in onto a path before we understand how we're handling this
<hazmat> rogpeppe, adding ml to our upgrade discussion, forgot to add it on the initial reply
<rogpeppe> hazmat: oh good catch, i hadn't noticed that
<hazmat> rogpeppe, apparently i fubar'd the second time too. third time's the charm
<TheMue> Hmm, could it be that proposals for juju-core don't lead to a notification in IRC?
<niemeyer> Aram: Any ideas around that?
<Aram> niemeyer: yes, thinking, will email, probably better than IRC for explaining :).
<niemeyer> Aram: Sounds good, although in some cases IRC works better
<Aram> sure
<mramm> So, I'd like to bring up something for the team to discuss
 * fwereade_ listens
 * TheMue listens too
<mramm> currently we are making good progress
<mramm> but it's hard to explain what that progress is or how far along we are to folks outside of the team
<mramm> and I would say a little bit hard for the team itself to know what exactly needs to be done before 12.10
<mramm> I don't want to create any kind of heavyweight process, but I think it would be helpful to have a bit more status information
<mramm> it was suggested to me that we use blueprints and a list of work items for each blueprint to track what needs doing
<mramm> I think that if we keep it at that kind of high level, we should not have to spend much time on it
<rogpeppe> niemeyer: ^
<niemeyer> mramm: This sounds like a nice topic for the mailing list
<niemeyer> mramm: We have members of the team that are not here, and people are not necessarily listening right now
<mramm> https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-arm-deployment would be an example (look at the work items)
<mramm> niemeyer: yea
<niemeyer> mramm: For example, aren't you supposed to be listening to that bad music waiting for a conference call with mark right now? :-)
<niemeyer> robbiew: Is that meeting happening?
<robbiew> niemeyer: man...I suspect not
<robbiew> it is 11:08pm in Taipei
<robbiew> this music is SO awesome though :/
<niemeyer> robbiew: I don't understand why we don't have just a short beep every once in a while or some such to notify about the fact the call is still up
<niemeyer> robbiew: It's so obviously counter productive to make people wait with bad music with absurd audio quality
<robbiew> 10min rule...I'm dropping
<niemeyer> Ditto
<robbiew> mramm: fyi ^^
<robbiew> we dropped
<mramm> I got in in time to drop ;)
<mramm> So, my plan is not to decide what to do here on IRC
<mramm> but just to raise the issue so that you all have time to think about it
<hazmat> the music is getting better ;-)
<mramm> If we use work items on blueprints we can get in on the burn-down chart action here: http://status.ubuntu.com/ubuntu-quantal/group/topic-quantal-servercloud-arm.html
<mramm> and have it rolled up to this: http://status.ubuntu.com/ubuntu-quantal/canonical-server.html
<mramm> So, anyway, food for thought.   I'll raise this on the list for wider discussion
<TheMue> mramm:  Success, thoughts are already running. ;)
<niemeyer> Lunch time.. back later folks
<TheMue> niemeyer:  Enjoy.
 * rogpeppe is off for the evening. see y'all tomorrow.
<mramm> have a good evening!
<rogpeppe> mramm: will do, thanks!
<niemeyer> rogpeppe: Cheers man
<twobottux> aujuju: How do I gracefully shutdown a Juju Charm? <http://askubuntu.com/questions/149550/how-do-i-gracefully-shutdown-a-juju-charm>
<twobottux> aujuju: How do I make a Juju Charm's revision match the Bazaar revision of its repo? <http://askubuntu.com/questions/149553/how-do-i-make-a-juju-charms-revision-match-the-bazaar-revision-of-its-repo>
<robbiew> mramm: ping
<mramm> robbiew: pong
<robbiew> mramm:  1:1 time? (actually about 23min ago, but I assume my private msg was missed)
<mramm> robbiew:  sure
<niemeyer> fwereade_: ping
<fwereade_> niemeyer, pong
<niemeyer> fwereade_: Heya
<niemeyer> fwereade_: Looking over the unit relation watching stuff
<niemeyer> fwereade_: Was just pondering about the choice of merging both watchers onto one
<fwereade_> niemeyer, cool, hopefully not too crackful...
<niemeyer> fwereade_: I don't have much of a feeling, but mostly wanted to learn from you why you felt this was a good direction
<niemeyer> fwereade_: It's not
<fwereade_> niemeyer, it seemed natural, I think, because the validity of the settings node is contingent on the presence node; and so synchronising the watching by putting them together seemed sensible
<fwereade_> niemeyer, we get a single stream of events representing everything we care about for a single relation
<fwereade_> niemeyer, single *unit* relation
<niemeyer> fwereade_: Well, kind of.. there are apparently a disjoint set of methods?
<niemeyer> fwereade_: E.g. settingsChange
<niemeyer> fwereade_: Ah, I see.. but it's routed into a single chan
<fwereade_> niemeyer, are you on https://codereview.appspot.com/6305082/ ?
<niemeyer> fwereade_: Is your idea to expose this watcher as part of the API?
<fwereade_> niemeyer, I thought I deleted the earlier proposals, I'm pretty sure settingsChange is gone
<fwereade_> niemeyer, not this one, no; there will be a RelatedUnitsWatcher returned from UnitRelation.WatchRelated
<niemeyer> fwereade_: There is a single patch set in that CL
<niemeyer> fwereade_: (with settingsChange in)
<fwereade_> niemeyer, it originally depended on a prereq we decided to drop
<niemeyer> fwereade_: Am I reviewing the wrong thing?
<fwereade_> niemeyer, I think you might be... are you sure you're on the one I linked?
<fwereade_> niemeyer, (did I paste the wrong one in the mail..?)
<niemeyer> fwereade_: Have you opened the link you pasted? :-)
<fwereade_> niemeyer, yes; it seems to me to have:
<fwereade_> +// unitRelationChange describes the state of a unit relation.
<fwereade_> +type unitRelationChange struct {
<fwereade_> + Present bool
<fwereade_> + Settings string
<fwereade_> +}
<niemeyer> fwereade_:  501 func (w *unitRelationWatcher) settingsChanges() <-chan watcher.ContentChange {
<fwereade_> niemeyer, ah sorry, that errant "s" had me barking up completely the wrong tree
<fwereade_> niemeyer, an earlier version had a settingsChange type
<fwereade_> sorry :)
<niemeyer> fwereade_: Ah, phew, ok
<niemeyer> fwereade_: So this watcher is the underlying implementation of an aggregated channel that will dispatch on all units?
<fwereade_> niemeyer, yeah, there will be a RelatedUnitsWatcher that starts one of these for each candidate related unit
<niemeyer> fwereade_: Fun
<fwereade_> niemeyer, it was a fun w/e actually, I have half a dozen unpushed branches labelled things like another-abortive-ruw-attempt
<niemeyer> fwereade_: LOL
<fwereade_> niemeyer, but it's definitely an awful lot simpler than the first attempts
<niemeyer> fwereade_: Yeah, looks quite reasonable
<niemeyer> fwereade_: Missing some docs here and there, but nothing major
<fwereade_> niemeyer, cool, the followup is a little more complex, but I hope it won't be too bad -- I'm pretty sure it's an improvement on the twisted stuff which frankly made my brain hurt
<niemeyer> fwereade_: I'm having a feeling that the watcher infra on the mongo+zk will have to be optimized pretty soon
 * niemeyer looks at Aram
<fwereade_> niemeyer, I'm vaguely hoping one of you will take one look and shear off another layer of complexity ;)
<niemeyer> fwereade_: Oh, I had the feeling that the follow up would actually be simpler
<fwereade_> niemeyer, if it were just children changes I had to worry about it probably would be
<niemeyer> fwereade_: Isn't that a sign that the joint watching may not be helping after all?
<fwereade_> niemeyer, a lot of the hassle is in accommodating the fact that a presence node can exist but not actually indicate presence
<niemeyer> fwereade_: Why is that so?
<fwereade_> niemeyer, because that's how presence nodes are designed
<niemeyer> fwereade_: Sure, but that's why we have the abstraction on top of them
<niemeyer> fwereade_: We have a nice on/off answer from it, right?
<fwereade_> niemeyer, yes, and the unitRelationWatcher makes it nicer still
<fwereade_> niemeyer, but
<fwereade_> niemeyer, the RelatedUnitsWatcher also needs to deal with tedious stuff like keeping the key:name mapping going, plus a bit of bookkeeping for the watchers, then some way of tracking state...
<fwereade_> niemeyer, in terms of goroutines and channels it's pretty simple really
<fwereade_> niemeyer, the devil is as always in the details
<niemeyer> fwereade_: Yeah
<niemeyer> fwereade_: It's quite readable to be honest
<fwereade_> niemeyer, cool :)
<niemeyer> Aram: Seriously, let's talk briefly about watches when you have a moment
<niemeyer> Aram: We should have a single watching goroutine per process
<niemeyer> Or we'll explode the server with a ridiculous number of threads
<Aram> niemeyer: hmm... one goroutine per process, hmm. this complicates things a bit.
<Aram> now is a good time as every to discuss this
<Aram> ever
<Aram> well
<Aram> hmm
<niemeyer> Aram: I thought it was late for you
<niemeyer> Aram: We can discuss now if you still have the energy
<Aram> it's late but I think it's fine :).
<niemeyer> Aram: While reviewing fwereade_'s branch, it's apparent we'll have quite a few watches per process
<niemeyer> Aram: If we have a model where each watch we attempt yields a new session with a new watch, we'll end up with a massive number of threads
<niemeyer> Aram: All locked down waiting
<Aram> niemeyer: in doozer you can do watch(path, ver), but from what I can see in zk you can only do watch(path), if that is the case, it's easy to use a single goroutine and zk session to read the oplog. if we also care about versions, it's more complex, still doable.
<niemeyer> Aram: With zk we have watch(path) => ver, in a way
<niemeyer> Aram: The watch will be a delta against ver
<niemeyer> Aram: Or rather, a note stating that ver has changed
<Aram> niemeyer: that is fine, the design seems simple. a single goroutine always loops reading from the oplog, at each step it checks to see if that operation has a path someone is interested in, if so, makes the notification.
<niemeyer> Aram: Yeah.. I don't have real world experience with this, but I suspect it should be cheap, since it's a naturally ordered collection (no index, queries, or anything else really)
<Aram> yes.
<niemeyer> Aram: interestingly, I suspect this will make the design a lot simpler in some ways, since we can start the watch upfront, before any watch requests arrive
<Aram> yes, that's the way it should be done.
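The single-goroutine design Aram describes could be sketched roughly as below. All types and names here (op, dispatcher, the oplog channel) are hypothetical illustrations, not juju-core or MongoDB API:

```go
package main

import "fmt"

// op is a hypothetical oplog entry: a path that changed and its new revision.
type op struct {
	Path string
	Rev  int64
}

// dispatcher sketches the single watching goroutine: one loop reads the
// oplog and fans changes out to the channels of interested watchers.
type dispatcher struct {
	watches map[string][]chan op
}

func newDispatcher() *dispatcher {
	return &dispatcher{watches: make(map[string][]chan op)}
}

// Watch registers interest in a path and returns a notification channel.
func (d *dispatcher) Watch(path string) <-chan op {
	ch := make(chan op, 1)
	d.watches[path] = append(d.watches[path], ch)
	return ch
}

// Run consumes the oplog stream; only this one goroutine (and hence one
// session) ever reads it, so watch requests never spawn new threads.
func (d *dispatcher) Run(oplog <-chan op) {
	for entry := range oplog {
		for _, ch := range d.watches[entry.Path] {
			ch <- entry
		}
	}
}

func main() {
	d := newDispatcher()
	ch := d.Watch("/service/wordpress")
	oplog := make(chan op, 2)
	oplog <- op{"/service/wordpress", 1}
	oplog <- op{"/service/mysql", 1} // no watcher registered; ignored
	close(oplog)
	d.Run(oplog)
	fmt.Println(<-ch)
}
```

As noted above, starting the reader upfront, before any watch requests arrive, keeps the design simple: registration is just a map insert.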
<Aram> niemeyer: btw, I know that it's this way because this is the way zk works, but is there a real reason why we care that create and set are different? (have to do create before set).
<niemeyer> Hmmm
<niemeyer> Aram: If one tries to Create a node, and there's one previously existent, a race was lost.. this is a pretty relevant fact for the distributed nature of the system
<niemeyer> Aram: Without distinction, the algorithm would go on pretending all was good
<Aram> not really, set should be set(path, data, ver), in effect compare-and-swap. you could specify a version for set, if there were already a node, you'd know.
<niemeyer> Aram: Sure.. we can easily call Create as Set(version=-1).. then of course there's no distinction :)
<niemeyer> Actually, not entirely.. Set(version=-1) exists.. it means change whatever the version
<niemeyer> Create means change with non-existing-version
<niemeyer> So, yeah, doesn't work
<Aram> niemeyer: I know that Set(version=-1) changes whatever the version, this idea doesn't exist in doozer, you can't change without caring for the version, and there's no create, only set.
<niemeyer> Aram: Cool, so the primitives are similar..
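The version semantics being compared (create as "set with non-existing version" vs. Set(version=-1) as "change whatever the version") could be sketched as follows. The store type and the anyVersion/noVersion constants are hypothetical illustrations, not the zk or doozer API:

```go
package main

import "fmt"

// Hypothetical conventions: anyVersion means "change whatever the
// version"; noVersion means "create: succeed only if the node does
// not exist yet". Everything else is a compare-and-swap on version.
const (
	anyVersion = -1
	noVersion  = 0
)

type node struct {
	data string
	ver  int
}

type store map[string]*node

// Set is set(path, data, ver): a compare-and-swap keyed on version.
func (s store) Set(path, data string, ver int) error {
	n, ok := s[path]
	switch {
	case !ok && (ver == noVersion || ver == anyVersion):
		s[path] = &node{data, 1} // create
		return nil
	case ok && (ver == anyVersion || ver == n.ver):
		n.data, n.ver = data, n.ver+1
		return nil
	}
	return fmt.Errorf("set %q: version mismatch", path)
}

func main() {
	s := store{}
	fmt.Println(s.Set("/a", "x", noVersion)) // create succeeds
	fmt.Println(s.Set("/a", "y", noVersion)) // lost the create race
	fmt.Println(s.Set("/a", "y", 1))         // CAS against version 1
}
```

This makes niemeyer's point concrete: a lost create race surfaces as an error instead of the algorithm silently pretending all was good.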
<davecheney> niemeyer: i was going to extract shortattempt so I can reuse it in the PA today
<davecheney> so it will need a package, do you have any preferences ?
<niemeyer> davecheney: It's a good question
<niemeyer> davecheney: Good morning, btw
<davecheney> and to you :P
<davecheney> niemeyer: some suggestions are lp/juju-core/juju/util
<davecheney> or even at the top level
<davecheney> lp/juju-core/juju
<davecheney> or in go style
<davecheney> lp/juju-core/juju/jujuutil
<davecheney> none are particularly good
<niemeyer> davecheney: Indeed
<niemeyer> davecheney: I'm always afraid of that kind of package because they end up as a bag of undefined things
<davecheney> niemeyer: oh yeah, com/atlassian/confluence/bucket
<davecheney> which was so named so people would feel ashamed of importing or using it
<niemeyer> davecheney: :)
<davecheney> which didn't work
<davecheney> com/atlassian/confluence/bucket2
<davecheney> no kidding
<niemeyer> davecheney: I suggest environs, for the moment
<davecheney> niemeyer: as good as any
<niemeyer> davecheney: It's not a great fit either, but we can look at it as a utility to be used in the implementation of environments, with aspirations of promotion onto a better home
<davecheney> or I could just copy the bits I need into cmd/jujud and we defer the discussion until we have a better example ?
<niemeyer> davecheney: Nah, the bad taste of a misplaced type is easy to fix.. code duplication tends to grow much uglier legs
<davecheney> niemeyer: understood, i'll make it so
#juju-dev 2012-06-12
<rogpeppe> fwereade: morning guv'nor
<fwereade> heya rogpeppe
<rogpeppe> fwereade: looks like my upgrader proposal was premature. still, it all works; i can drag it out later :-)
<fwereade> rogpeppe, yeah, I think it's a viable long-term direction but maybe not time for it yet
<fwereade> popping out for a mo, bbs
<Aram> morning
<rogpeppe> Aram: hiya
<fwereade> Aram, heyhey
<TheMue> Aram: Moin.
<fwereade> rogpeppe, TheMue: I would very much appreciate your thoughts on https://codereview.appspot.com/6304068
<fwereade> rogpeppe, TheMue: everything important is in the description, the code should be a really quick skim/verify if you agree with the problem statement
 * rogpeppe looks
<rogpeppe> fwereade: in exchange, a few minor cleanups in the same file: https://codereview.appspot.com/6295069/
<fwereade> rogpeppe, LGTM
<fwereade> rogpeppe, I'd call that a trivial, just merge it
<rogpeppe> fwereade: that would certainly make our lives easier w.r.t. your branch
<fwereade> rogpeppe, I think it'd be justified even if that weren't the case ;)
<fwereade> rogpeppe, does the logic seem sound to you?
<fwereade> rogpeppe, it's really just about making sure everything happens on the loop goroutine
<rogpeppe> fwereade: i think so. am still absorbing.
<rogpeppe> fwereade: small thing. i wonder if instead of having those closures all down the code, something like this might read a little better: http://paste.ubuntu.com/1037042/
<fwereade> rogpeppe, I don't think we want to defer a Stop, which is what that effectively seems to do
<fwereade> rogpeppe, we want to be sure that that code is called OAOO
<rogpeppe> fwereade: OAOO?
<fwereade> rogpeppe, Once And Only Once
<rogpeppe> fwereade: i think that code is exactly equivalent to yours, no?
<fwereade> rogpeppe, can you paste an example implementation of the Stop method?
<rogpeppe> fwereade: it's already implemented by all the sub-watchers
<fwereade> rogpeppe, ok, then our code is definitely not doing the same thing
<rogpeppe> fwereade: really?
<fwereade> rogpeppe, yours will wait until the tomb is dead, on the main goroutine, before the main goroutine calls tomb.Done()
<fwereade> rogpeppe, assuming the implementations stay as they are
<fwereade> rogpeppe, my approach ensures the cleanup code is called OAOO after the main loop has exited
<rogpeppe> fwereade: i must be missing something. AFAICS they're trivially identical.
<fwereade> rogpeppe, the problem is in allowing any cleanup code to run on an arbitrary goroutine -- the only stuff that's safe is Kill() and Wait()
<fwereade> rogpeppe, all the Stop implementations return tomb.Wait()
<rogpeppe> fwereade: your deferred closure does exactly the same thing as my three deferred function calls, no?
<fwereade> rogpeppe, if they don't do that then clients won't be able to extract an error (unless we add a Wait method to each watcher)
<fwereade> rogpeppe, but if they do use tomb.Wait(), then it's certainly not OK to call them on the loop goroutine, because that's the goroutine that is meant to signal that cleanup is complete in the first place
<rogpeppe> fwereade: i don't understand. i think both pieces of code are trivially (potentially even by the compiler) transformable into each other.
<fwereade> rogpeppe, stop me when I say something incorrect:
<fwereade> rogpeppe, the Stop methods do the following: tomb.Kill(nil), stop the watcher, and wait on tomb completion before returning
<rogpeppe> fwereade: here are the steps of the transformation: http://paste.ubuntu.com/1037058/
<rogpeppe> fwereade: your code is calling Stop too
<fwereade> rogpeppe, oh really?
<rogpeppe> fwereade: e.g. line 51
<fwereade> rogpeppe, oh balls I totally misread it
<rogpeppe> fwereade: np
<fwereade> rogpeppe, missed a .watcher
<rogpeppe> fwereade: thought so
<rogpeppe> maybe stopWatcher could be stopSubwatcher
<fwereade> rogpeppe, nah, I think it'd be clear anyway
<fwereade> rogpeppe, still trying to figure out whether it feels like a clear improvement... I'll implement it and see how I feel ;p
<rogpeppe> fwereade: yeah, i'm not sure either.
<fwereade> rogpeppe, my crystal ball tells me that I will use it again soon, so on that basis I think it's worth it
<rogpeppe> fwereade: it was just a reaction to seeing the same closure in all of them
<rogpeppe> fwereade: i'm already about to use it
<rogpeppe> fwereade: possible comment on stopWatcher:
<rogpeppe> // stopWatcher stops a watcher and propagates
<rogpeppe> // the error to the given tomb if necessary.
<rogpeppe> f
<fwereade> rogpeppe, perfect
 * rogpeppe likes ad hoc interfaces
<fwereade> rogpeppe,  how should i integrate that if you're implementing it right now?
<rogpeppe> fwereade: i'm not implementing that. i'm implementing MachineUnitsWatcher
<rogpeppe> fwereade: and i'll merge it with your changes before submitting it
<fwereade> rogpeppe, excellent
<fwereade> rogpeppe, cheers
<rogpeppe> fwereade: np. thanks for cleaning this up.
<fwereade> rogpeppe, a pleasure :)
<TheMue> Just jumped in from lunch. Together with the watchStopper/stopWatcher() change, both LGTM, looks good.
<fwereade> TheMue, thanks
<rogpeppe> fwereade: submitted. i feel a little bit naughty.
<fwereade> rogpeppe, I think that one is absolutely legitimate :)
<TheMue> rogpeppe: *lol*
<hazmat> rogpeppe, what's the support like for bzr revisions of tools?
<hazmat> rogpeppe, does that lead major.minor.patch-revno  form?
<rogpeppe> hazmat: the plan is to use major.minor.patch
<rogpeppe> hazmat: but you get a local repo that you can push stuff to
<rogpeppe> hazmat: so you build the binaries locally and push them to the storage for a given juju environment. then it'll use them.
<rogpeppe> hazmat: does that answer your question?
<hazmat> rogpeppe, indeed it does.. but it seems a bit manual for dev cycling on an env, it sounds like it's effectively using the releases url as a namespace for priv dev via a local release repo
<hazmat> but that's pretty minor given the simplicity
<rogpeppe> hazmat: there's a flag to bootstrap which causes it to automatically build and upload the binaries before bootstrapping
<hazmat> rogpeppe, upload the binaries to the env or to the release url?
<hazmat> s/env/provider storage
<rogpeppe> hazmat: to the env. there's no fixed release url - it's environment-configurable.
<hazmat> rogpeppe, sure its configurable, but there not comparable without a release url per se
<hazmat> s/their
<hazmat> rogpeppe, so without that flag it would default to using the release url ?
<rogpeppe> they're? :-)
<rogpeppe> hazmat: currently there's no default. there's a "public-bucket" field in the ec2 config which would to the s3 bucket containing the public release
<rogpeppe> s/would to/would point to/
<rogpeppe> hazmat: i'd guess that when we provide one, we'll change the default to point to it
<hazmat> rogpeppe, how are you determining the version when you upload tools on bootstrap?
<rogpeppe> hazmat: it's taken from Current in the version package.
<rogpeppe> hazmat: but the version is actually irrelevant for bootstrap because the logic will always choose a version from the private storage by preference, and the private storage starts empty.
<hazmat> rogpeppe, ic, won't the priv storage start populated though with the tools uploaded by bootstrap, ie it has the one to be used
<rogpeppe> hazmat: destroy-environment deletes everything from the private storage, including uploaded tools.
<rogpeppe> hazmat: (if i've parsed your sentence correctly)
<hazmat> rogpeppe, sure.. i was referencing bootstrap --tools would upload the current toolset to provider storage before the machine came up, so it would be using the only extant version there
<rogpeppe> hazmat: that's right.
<rogpeppe> hazmat: i think the flag is "--upload-tools"
<rogpeppe> hazmat: "why not support both? ie. machine monitoring global version and local version setting."
<rogpeppe> hazmat: would the local version setting be optional?
<niemeyer> Gooood morning jujuers
<rogpeppe> niemeyer: yo!
<TheMue> niemeyer: Moin
<rogpeppe> niemeyer: any thoughts on https://codereview.appspot.com/6307072/ ?
 * niemeyer looks
<niemeyer> rogpeppe: LGTM
<rogpeppe> niemeyer: lovely, thanks.
<niemeyer> rogpeppe: np, thank you
<hazmat> rogpeppe, yes
<rogpeppe> niemeyer: this could also use a look, while you're here: https://codereview.appspot.com/6305063/
<hazmat> rogpeppe, you could effect a global change via one node change, a local change via one node change. no mass changes necessary.
<rogpeppe> hazmat: it means each machine agent would need to watch two things, but it would probably work ok, yeah.
<niemeyer> rogpeppe: Done
<rogpeppe> niemeyer: ta
<niemeyer> rogpeppe: I'd prefer to have the set(upgrade=true) operation being extremely fast..
<niemeyer> rogpeppe: Enough for us to not worry about updating several entries at once
<rogpeppe> niemeyer: could be thousands if we've got that number of machines
<niemeyer> rogpeppe: Updating thousands of values in a database is extremely fast these days
<rogpeppe> niemeyer: each one incurs a round trip. but i'll do them concurrently, which should amleliorate that.
<rogpeppe> s/aml/am/
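rogpeppe's concurrent-round-trip idea might look roughly like the sketch below; setVersion is a hypothetical stand-in for the per-machine update, not a real State method:

```go
package main

import (
	"fmt"
	"sync"
)

// setVersion stands in for one per-machine round trip to the backend.
func setVersion(machine int, upgrade bool) error {
	return nil // a network call in the real system
}

// setAll issues the updates concurrently, so total latency is roughly
// one round trip rather than one per machine; the first error wins.
func setAll(machines []int) error {
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		first error
	)
	for _, m := range machines {
		wg.Add(1)
		go func(m int) {
			defer wg.Done()
			if err := setVersion(m, true); err != nil {
				mu.Lock()
				if first == nil {
					first = err
				}
				mu.Unlock()
			}
		}(m)
	}
	wg.Wait()
	return first
}

func main() {
	fmt.Println(setAll([]int{0, 1, 2}))
}
```

As niemeyer points out below, this assumes a per-entry interface; a backend that can update all entries in one statement makes the fan-out unnecessary.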
<niemeyer> rogpeppe: You'r assuming an implementation :)
<niemeyer> rogpeppe: I hope we can fix that
<rogpeppe> niemeyer: depends on the interface in State, i think.
<niemeyer> rogpeppe: Well, if it's so painful, let's get started with global only
<rogpeppe> niemeyer: i don't think it's painful. i'm just wary of it...
<niemeyer> rogpeppe: We're not making use of the fact it's per unit right now either way
<niemeyer> rogpeppe: Ok, global only is cool for now then
<rogpeppe> niemeyer: i think it'll be relatively easy to add local machine versions later if we decide to
<niemeyer> rogpeppe: Okay, sounds good.. sorry for diverting your efforts
<rogpeppe> niemeyer: np.
<niemeyer> rogpeppe: I hope by the time we want to have it local, the backend is fast enough for us not to worry about this
<rogpeppe> niemeyer: BTW the upgrader implementation works, and is stashed away for future reference :-)
<niemeyer> rogpeppe: collection.update(upgrade=true) is a blast
<fwereade> niemeyer, are you currently reviewing convenient-relation-interface or add-unit-relation? if so, sorry: I'm about to -wip them, I think I have a better arrangement
<TheMue> niemeyer: In https://codereview.appspot.com/6296064 I concentrated on state.go.
<niemeyer> fwereade: I'm not.. just reviewing mails before stepping out for lunch.. there's still time
<niemeyer> fwereade: thanks for the note
<fwereade> niemeyer, nah, you don't need to look at those for a bit; I should make the s/UnitRelation/RelationUnit/ change at least
<fwereade> niemeyer, and they may change in more interesting ways
<niemeyer> fwereade: Does that make sense?  It felt right yesterday, but I wasn't entirely sure
<fwereade> niemeyer, I *think* it does, but I'm still exploring
<fwereade> niemeyer, if it turns out to be crack I can just repropose the original branches
<niemeyer> fwereade: I think the convention also makes sense for that tiny type we recently introduced: ServiceRelation => RelationService
<niemeyer> fwereade: I recall struggling with the same kind of issue thre
<niemeyer> there
<niemeyer> fwereade: We did reword the docs a bit to make it clear, but didn't realize the ordering made a difference at the time
<fwereade> niemeyer, yeah, that is definitely on the cards
<niemeyer> TheMue: Will have to think about this one some more..
<niemeyer> TheMue: The repetition of logic there obviously needs some love.. still thinking about what would be the best approach
<TheMue> niemeyer: Yes, I dislike it too and played with the idea of nesting it more. But that's also not "beautiful".
<TheMue> niemeyer: The early return is typical for Go, but it looks strange.
<TheMue> niemeyer: Having individual error types, maybe with a specifier like a name plus the included error, would make it look better. But the logic would be the same, and there would be more code.
<niemeyer> TheMue: I don't think that's an issue.. there are multiple ways to preserve that
<niemeyer> TheMue: I'm just pondering about the best one
<TheMue> niemeyer: We very often use value, err := ... and need the value later. This often leads to the structure.
<niemeyer> TheMue: I understand, but the structure is fine
<niemeyer> TheMue: We can avoid the repetition without touching it
<rogpeppe> fwereade: ping
<niemeyer> TheMue: For example:
<fwereade> rogpeppe, pong
<niemeyer> defer errorContext(&err, "can't add service %q: %v", name)
<rogpeppe> fwereade: "cannot assign subordinate units directly to machines"
<fwereade> rogpeppe, yes
<rogpeppe> fwereade: how *do* we assign a subordinate unit to a machine?
<TheMue> niemeyer: Hehe, thought about this too.
<niemeyer> TheMue: >:)
<fwereade> rogpeppe, we assign it to a unit, surely?
<rogpeppe> fwereade: using AssignToMachine?
<fwereade> rogpeppe, no, certainly not
<rogpeppe> fwereade: so... how then?
<fwereade> rogpeppe, we AddUnitSubordinateTo()
<niemeyer> rogpeppe: Subordinate units must necessarily be in the same machine of their principal
<fwereade> rogpeppe, I'm not sure where the responsibility lies in python for handling that though
<rogpeppe> fwereade: ah, so subordinate units don't have a Machine set in their topology node?
<TheMue> niemeyer: And err is a named return value. In case it's nil when the method returns, errorContext() will let it stay nil, otherwise it will "pimp" the error.
<niemeyer> TheMue: Yep
<rogpeppe> fwereade: that means my just-submitted Machine.Units implementation is wrong
<TheMue> niemeyer: OK, will add it.
<fwereade> rogpeppe, sorry, not sure I saw that one; how so?
<niemeyer> rogpeppe: Ah, curious
<niemeyer> rogpeppe, fwereade: Worth looking at both ends in that regard.. I certainly didn't pay attention to that myself
<rogpeppe> fwereade: you LGTM'd it...
<rogpeppe> fwereade: it's wrong because it doesn't return subordinate units
<rogpeppe> fwereade: i assumed that all units on a machine had Machine=that-machine
<rogpeppe> fwereade: but if that's not true for subordinates, i'll have to do some more delving
<rogpeppe> fwereade: i *think* that it would be good if the unit machine was set in the topology regardless of whether it's subordinate or not.
<fwereade> rogpeppe, no doubt it will click the second I see it, but I can't parse its location out of my emails... CL please?
<rogpeppe> fwereade: https://codereview.appspot.com/6307072
<fwereade> rogpeppe, there is probably a case to be made for that though
 * fwereade reads and thinks
<rogpeppe> fwereade: that would mean, i think, that AddUnitSubordinateTo would automatically assign the subordinate unit to the principal's machine if there is one, and that AssignToMachine would also assign any subordinate units to that machine.
<niemeyer> rogpeppe: I find that sensible too
<niemeyer> rogpeppe: Although.. hmm
<niemeyer> Yeah, no, that sounds good indeed
<rogpeppe> niemeyer: ok, sounds like a plan.
<fwereade> rogpeppe, niemeyer: the difficulty is kinda in thinking of Units as meaning Containers
<niemeyer> We have evidence of algorithms being written incorrectly based on lack of a sound assumption
<niemeyer> fwereade: Yeah, but they're not really
<fwereade> niemeyer, indeed so
<niemeyer> fwereade: The fact we're having to think this way is a hint rogpeppe is right
<fwereade> niemeyer, agree Units should actually be all units, not all principal units
<rogpeppe> fwereade: +1
<fwereade> rogpeppe, nice catch, ty
<rogpeppe> fwereade: ok, i'll submit that as a fix first, before submitting machine-units-watcher
<rogpeppe> fwereade: it was your catch - you wrote the error message that stopped my test running!
<fwereade> rogpeppe, ah, I missed that detail
 * fwereade feels smug
<rogpeppe>  "cannot assign subordinate units directly to machines"
<rogpeppe> ... and rightly so :-)
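The rule agreed above (a subordinate unit always lives on its principal's machine, so both assignment paths keep the topology consistent) could be sketched as below; all types and names here are hypothetical, not the juju-core state API:

```go
package main

import "fmt"

// unit is a toy topology node; machine -1 means unassigned.
type unit struct {
	name         string
	machine      int
	subordinates []*unit
}

// assignToMachine assigns a principal unit and also assigns any
// existing subordinates to the same machine.
func assignToMachine(u *unit, machine int) {
	u.machine = machine
	for _, s := range u.subordinates {
		s.machine = machine
	}
}

// addSubordinate adds a subordinate unit, automatically inheriting the
// principal's machine assignment if there is one.
func addSubordinate(principal *unit, name string) *unit {
	s := &unit{name: name, machine: principal.machine}
	principal.subordinates = append(principal.subordinates, s)
	return s
}

func main() {
	p := &unit{name: "wordpress/0", machine: -1}
	assignToMachine(p, 3)
	sub := addSubordinate(p, "logging/0") // added a day later; still lands on 3
	fmt.Println(sub.machine)
}
```

This covers bcsaller's timing concern: a subordinate added long after the principal was deployed still picks up the machine at add time, and a later reassignment drags subordinates along.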
<niemeyer> I'll head to lunch.. may take a bit more time today as it's Valentines here so we'll go to a proper place to have lunch. Back soon for more fun.
<rogpeppe> niemeyer: have a good one!
<rogpeppe> fwereade: i think that topology.AddUnit should fail if the principal unit no longer exists. does that sound right to you?
<fwereade> rogpeppe, yeah, that sounds correct
<rogpeppe> fwereade: ok, i'll fix that
<rogpeppe> fwereade: thanks
<fwereade> anyone:
<fwereade> Rietveld: https://codereview.appspot.com/6300085
<fwereade> error: Failed to run "bzr merge": exit status 3
<fwereade> -----
<fwereade> bzr: ERROR: Cannot lock LockDir(chroot-80383440:///%2Bbranch/juju-core/.bzr/branch/lock): Transport operation not possible: readonly transport
<fwereade> -----
<fwereade> ?
<rogpeppe> fwereade: does it happen repeatedly?
<fwereade> rogpeppe, yeah
<rogpeppe> fwereade:  running lbox submit?
<fwereade> rogpeppe, indeed so
<rogpeppe> fwereade: what does the output of lbox submit -v look like? (i'm wondering if it's something to do with lp:juju-core vs lp:juju-core/juju)
<fwereade> rogpeppe, http://paste.ubuntu.com/1037422/ seems to be the relevant bit
<rogpeppe> fwereade: hmm, odd. let's see the whole thing. i doubt i can help though, i'm afraid.
<fwereade> rogpeppe, http://paste.ubuntu.com/1037428/
<fwereade> rogpeppe, but wait, yes, I see a bare lp:juju/core in there
<fwereade> rogpeppe, lp:juju-core
<rogpeppe> fwereade: that might be your problem. you might want to delete the merge proposal and try again, with an explict --for lp:juju-core/juju
<fwereade> rogpeppe, sounds good, thanks
<rogpeppe> fwereade: if in doubt, reinstall Windows.
<fwereade> rogpeppe, looks like I'll be reinstalling windows :/
<rogpeppe> fwereade: that didn't help then
<fwereade> rogpeppe, I'll try with a new branch from juju-core/juju and merge the original branch into that, see if it helps
<twobottux> aujuju: Is juju specific to ubuntu OS on EC2 <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2>
<fwereade> rogpeppe, bah, doesn't even work when I do that
<rogpeppe>  fwereade: darn. i'd start googling for "bzr readonly transport" and maybe have a delve into the source.
<rogpeppe> fwereade: ... and thereby waste the rest of the day
<fwereade> rogpeppe, yeah, I'm going to do the stuff that's currently right on my mind and figure that out a bit later
<rogpeppe> fwereade: yeah, and niemeyer may be back by then
<TheMue> rogpeppe: I understand your concerns that just adding a prefix string with a "nice description" of the method name doesn't add much new context information.
<TheMue> rogpeppe: But with a chain of those calls the error string may contain more and more information (a kind of error stack).
<rogpeppe> TheMue: that's going to happen either way, AFAICS.
<TheMue> rogpeppe: It's just hard to analyze.
<rogpeppe> TheMue: i can't parse that
<TheMue> rogpeppe: So a kind of struct {msg, err} would maybe help more.
<rogpeppe> TheMue: what i'm saying is orthogonal to analysability
<rogpeppe> TheMue, fwereade: subordinate units machine assignment. https://codereview.appspot.com/6299073
<rogpeppe> niemeyer: ^
<TheMue> rogpeppe: How would your solution look like?
<niemeyer> Yo
<rogpeppe> TheMue: my message described the error messages i'd use in that function
<rogpeppe> niemeyer: yi
<niemeyer> rogpeppe: You have question from bcsaller there
<rogpeppe> niemeyer: yeah, i'm just looking at it
<rogpeppe> niemeyer: i *think* we'd decided that the MA *was* responsible for all unit deployments.
<niemeyer> rogpeppe: That's not what we agreed to, no
<rogpeppe> niemeyer: ok, i must've misunderstood
<bcsaller> rogpeppe: that's hard to pull off anyway, suppose the container is LXC, you don't have easy access to its internals from the MA, only from the UA inside that container
<rogpeppe> bcsaller: very true. hmm.
<bcsaller> there are ways around that, but it's tricky and I didn't think it was worth it
<bcsaller> rogpeppe: that said, understanding the timing issue where the subs are added too late for that call to matter is also important
<TheMue> niemeyer: The defer/error interface message sadly doesn't work, you can't set a new err in that method.
<rogpeppe> bcsaller: i'm not quite sure what you mean by "too late for that call to matter". matter to what?
<bcsaller> to make any change
<bcsaller> the principal is deployed and assigned, then a day later they add a monitoring subordinate
<TheMue> niemeyer: I've got a somewhat scruffy idea using an own helper error type.
<bcsaller> if I read it properly the principal was supposed to tag its sub
<bcsaller> with the machine id, but that won't happen
<bcsaller> rogpeppe: ^
<TheMue> niemeyer: But it's indeed not beautiful.
<Aram> did the email I sent to juju-dev go through?
<rogpeppe> bcsaller: i don't see why not. if the principal is assigned, the subsid will be assigned when it's added
<rogpeppe> bcsaller: but there is a bigger issue here. perhaps Machine.Units should not return subsidiary units, as the MA doesn't care about them.
<rogpeppe> bcsaller: in which case all that branch is crack
<bcsaller> rogpeppe: agreed, it should not, its not the thing managing them
<niemeyer> TheMue: Yes, you actually can set an error in that method
<niemeyer> s/method/function/
<rogpeppe> Aram: i haven't seen anything
<rogpeppe> Aram: if you're referring to a message you've sent just recently
<rogpeppe> [18:16:16] <rogpeppe> niemeyer: i *think* we'd decided that the MA *was* responsible for all unit deployments.
<Aram> rogpeppe: yes, I got a moderator approval needed bounce.
<rogpeppe> oh yeah, we'd actually agreed the precise opposites
<rogpeppe> opposite
<rogpeppe> niemeyer: so it looks like that branch is rubbish - Machine.Units shouldn't return subsidiary units; do you agree? then we can have a Unit.WatchSubsidiaries method later that the unit agent can use, perhaps.
<rogpeppe> Aram: did you send it from your canonical.com address?
<Aram> yes, but I used my alias instead of my real email, gah, resending.
<TheMue> niemeyer: err can be set, then it's passed as reference to the deferred errorContext(). There err (which is *error) can be set, but when returning the old err is still valid. If err would be e.g. a structure I could modify a field, yes, or a slice value or map.
<Aram> rogpeppe: this time it worked
<TheMue> niemeyer: Deferring a func() {} and modify err in there through errorContext() with a return value should help. Wouldn't be so elegant. But less repeating than today.
<niemeyer> TheMue: Sorry, I'm missing what you mean
<niemeyer> TheMue: The exact signature I suggested should work, without any further hacks
<niemeyer> TheMue: Why is it not working?
<niemeyer> rogpeppe: Subordinates
<niemeyer> rogpeppe: Why is it rubbish?
<rogpeppe> niemeyer: because there's no need for the machine agent to see subordinate units. and that's what Machine.Units and Machine.UnitsWatcher is for.
<TheMue> niemeyer: You pass an interface X by reference. You could call the interfaces methods or if it would be a struct you could modify the fields. But if you set the argument the scope is only inside the function.
<TheMue> niemeyer: errorContext(err *error) { err = fmt.Errorf("foo") } runs (ok, you need a temp variable to create a reference), but after the call of errorContext(&e) that e stays the same.
<rogpeppe> niemeyer: i've got to go, i'm afraid. i'll leave the branch out for review, but i'm pretty sure now that it's a wrong direction.
<niemeyer> TheMue: Your implementation seems to be bogus somehow
<niemeyer> TheMue: Can you please replicate your idea here: http://play.golang.org/
<TheMue> niemeyer: A defer func() { err = errorContext(err, "Foo %q", id) } would help.
<TheMue> niemeyer: For sure.
<niemeyer> TheMue: Thanks
<niemeyer> rogpeppe: There is a method in the unit.. IsSubordinate()
<niemeyer> Or IsPrincipal
<rogpeppe> niemeyer: that re-reads the topology each time, and it's unnecessary.
<rogpeppe> niemeyer: the thing watching a machine is never going to want to see subordinates
<rogpeppe> AFAICS
<TheMue> niemeyer: http://play.golang.org/p/s5yfet7UL2
<niemeyer> rogpeppe: We can cache that information
<niemeyer> rogpeppe: A unit is either subordinate or not, for its whole life
<rogpeppe> niemeyer: we could call the methods Machine.PrincipalUnits and Machine.PrincipalUnitsWatcher if we wanted to be clearer
<niemeyer> rogpeppe: Well, we can also rename the methods
<niemeyer> rogpeppe: To suit what we need
<TheMue> niemeyer: Alternative follows.
<rogpeppe> niemeyer: jinx
<niemeyer> rogpeppe: Yeah
<niemeyer> rogpeppe: I'll review the code and provide you some feedback so that we can either move forward or have some better conversation tomorrow
<niemeyer> rogpeppe: Btw, we have to sort out the branch location too
<niemeyer> rogpeppe: I'm fine with renaming it after we get some of those branches in, if we feel like so
<rogpeppe> niemeyer: what's wrong with the branch location?
<niemeyer> rogpeppe: The stuff we talked in the ML
<rogpeppe> niemeyer: ah yeah. i definitely think it should be juju-core/trunk.
<TheMue> niemeyer: http://play.golang.org/p/aFA7LtLhoR works.
<niemeyer> rogpeppe: Okay, I think juju-core/stuff would be nice too..
<rogpeppe> niemeyer: -1
<niemeyer> Hm?
<rogpeppe> niemeyer: "stuff" implies random shit
<niemeyer> rogpeppe: stuff as in {environs,juju,testing,cmd,...}
<rogpeppe> niemeyer: definitely!
<niemeyer> :)
<rogpeppe> niemeyer: i'd expect it to be just the same as goamz etc
<rogpeppe> niemeyer: i was surprised when it wasn't
<Aram> me too
<Aram> that juju seemed redundant
<niemeyer> TheMue: http://play.golang.org/p/aFA7LtLhoR
<rogpeppe> anyway, really gotta go now! happy evening/day all.
<niemeyer> rogpeppe: Thanks for the day, and have a great evening
<Aram> enjoy
<TheMue> niemeyer: Yes, that's my second solution.
<niemeyer> TheMue: Sorry, I fail
<niemeyer> TheMue: http://play.golang.org/p/yg1N6xnJyo
<TheMue> niemeyer: Ah, nice, the dereferencing of the argument has been new to me. Thx, great info.
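The play.golang.org links above are dead now, but the pattern under discussion — annotating a named return error from a deferred call, via a pointer so the assignment is visible to the caller — can be sketched like this. `errorContextf`, `removeService`, and the messages are illustrative stand-ins, not the real juju-core code:

```go
package main

import "fmt"

// errorContextf annotates *err in place when it is non-nil, appending
// the original error after an automatic ": %v" (an illustrative
// stand-in for the errorContext helper being discussed).
func errorContextf(err *error, format string, args ...interface{}) {
	if *err != nil {
		*err = fmt.Errorf(format+": %v", append(args, *err)...)
	}
}

// removeService shows the deferred use: the named return value err is
// rewritten after the function body runs.
func removeService(id string) (err error) {
	defer errorContextf(&err, "can't remove service %q", id)
	return fmt.Errorf("zk delete failed") // simulated zk.Delete() error
}

func main() {
	fmt.Println(removeService("foo"))
	// prints: can't remove service "foo": zk delete failed
}
```

The detail TheMue is thanking niemeyer for is the `&err`: `defer` evaluates its arguments immediately, so passing `err` by value would snapshot `nil` at defer time, while the pointer delays the read until the function actually returns.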
<niemeyer> TheMue: Yeah, pointers..
<niemeyer> TheMue: Java "saves" you from that side of things
<Aram> yeah, everything is a pointer.
<niemeyer> (because *everything* is a pointer!)
<niemeyer> Aram: Jinx :)
<TheMue> niemeyer: Never have used them much. Not only Java, but also Erlang or Smalltalk.
<niemeyer> TheMue: Erlang is a veeeery different beast :)
<TheMue> niemeyer: Erlang indeeeeeeeeeeeeeeeed. But I like it, yeah.
<TheMue> niemeyer: And Smalltalk also passes everything by reference.
<niemeyer> TheMue: Passing by reference has a different meaning than what you imply, I suspect
<TheMue> niemeyer: It's different, yes.
<Aram> ALGOL passes variables not by value or reference, but by name, sort of like a macro.
<niemeyer> TheMue: C++ has by-reference semantics, and it sucks
<TheMue> niemeyer: Only done C++ a bit in the late 80s.
<niemeyer> Aram: Oh, never read about that
<TheMue> niemeyer: But it hasn't been my language. During this time I more used the good old Turbo Pascal. ;)
<Aram> long time ago I introduced a Pascal friend to C, and he loved it because it wasn't really type safe like Pascal.
<Aram> you could do lots of stupid tricks.
<TheMue> Aram: I loved Smalltalk, but it's sadly going deeper and deeper into history. And the implementations are missing good concurrent VMs. There's only one experimental one, by IBM.
<TheMue> So, have to leave for today.
<TheMue> niemeyer: First changed methods look and feel good.
<niemeyer> TheMue: Sweet!
<TheMue> niemeyer: ;)
<niemeyer> bcsaller: ping
<bcsaller> niemeyer: whats up?
<niemeyer> bcsaller: Heya
<niemeyer> bcsaller: I was just looking at that branch from rog
<niemeyer> bcsaller: Have you found an edge case where it would leave data in an inconsistent state?
<niemeyer> Hmm
<bcsaller> niemeyer: thats not what I was pointing out, it was just that the method seemed to be triggered before subordinates were bound to the principal unless I read it wrong
<niemeyer> bcsaller: Is there a way to create a subordinate unit without binding it to a principal?
<niemeyer> bcsaller: (unit, not service)
<bcsaller> the service yes, instances only appear when the relation is satisfied, you can deploy the sub w/o a relation and only ZK is changed
<bcsaller> niemeyer: no, sub units always have a principal, but can be added well after the establishment of the principal
<niemeyer> bcsaller: Right, but his code path also covers that
<niemeyer> bcsaller: In the sense that he's checking to see if the principal is assigned by the time the subordinate is added
<niemeyer> bcsaller: Do you see another angle open?
<bcsaller> the other thing I mentioned is that machine assignment is the watch trigger for deploying a UA.
<niemeyer> Cool
<niemeyer> bcsaller: There's a race there too I suppose
<bcsaller> niemeyer: in the python version we proxy the machine id lookup to the principal for the subs and don't do the assignment
<bcsaller> niemeyer: that might not be the best way, but its a possible path forward
<niemeyer> if the unit is assigned to a machine while concurrently a subordinate unit is added to the principal, the subordinate will be added without a machine, and the principal will not notice the existence of the unit
<niemeyer> Well, or maybe not if it's properly done within a retryChange transaction
<niemeyer> bcsaller: It feels less fiddly for sure
<bcsaller> niemeyer: in this case the principal is processing its relation change events and deciding if it needs to take action (deploy a subordinate)
<bcsaller> but the MA isn't involved
<bcsaller> which in the LXC everywhere future I think is important
<andrewsmedina> why the zookeeper will be replaced?
<davecheney> andrewsmedina: for a number of reasons, one being it won't run on arm
<andrewsmedina> davecheney: hmm
<andrewsmedina> juju will run on arm?
<davecheney> i hope so, arm is a supported ubuntu server platform
<hazmat> SpamapS, +1  rest apis and soa
<hazmat> its in the plans
<hazmat> i believe
<SpamapS> good :)
<andrewsmedina> SpamapS == Clint Byrum ?
<SpamapS> andrewsmedina: indeed :)
<andrewsmedina> SpamapS: :)
#juju-dev 2012-06-13
<davecheney> niemeyer_: i'm actively responding to rog's comments on the stop/start machine CL
<davecheney> and will have another proposal by the end of the day
<davecheney> i'll remove it from review
<niemeyer_> davecheney: Cool, I was just skimming over it actually
<davecheney> it'll be 10-15% smaller by the end of the day
<niemeyer_> davecheney: Sounds good.. I'd like to have a look at it with a fresher brain tomorrow as well, so that will suit well
<davecheney> cool
<niemeyer_> davecheney: Oh, that's nice
<davecheney> niemeyer_: related question: do you know what the status of our hp cloud access is
<davecheney> ie, can we just use it, or is there something left to do ?
<niemeyer_> I have not heard, but I can check in the morning
<davecheney> niemeyer_: thanks
<Aram> morning.
 * davecheney goes back to fighting juju test
<rogpeppe> Aram, davecheney, fwereade_: hiya
<fwereade_> heyhey
<Aram> moin
<davecheney> evening lads
<davecheney> i'm just on my way out
<davecheney> have a nice evening
<rogpeppe> davecheney: what are doing to poor jujutest; she never wanted to fight anyone :-)
<rogpeppe> s/doing/you doing/
<rogpeppe> davecheney: ok, have a good one
<fwereade> rogpeppe, can I ask a favour please? I'm off today, and the rework-stop-watchers branch is still not submitting cleanly; if you would merge it for me I would be most grateful
<fwereade> rogpeppe, I can worry about why I can't submit anything else later ;)
<rogpeppe> fwereade: i will try, np
<fwereade> rogpeppe, tyvm
<niemeyer> Good morning
<rogpeppe> niemeyer: hiya
<rogpeppe> niemeyer: i'm wondering what your feeling on
<rogpeppe> cannot create new
<rogpeppe> machine node
<rogpeppe> oops
<rogpeppe> niemeyer: i'm wondering what your feelings on assign-subordinate-units are now
<rogpeppe> niemeyer: although i think bcsaller's criticism is somewhat misfounded (the change is always made properly within a retryChange transaction), i'm still inclined to drop the change and make Machine.Units (perhaps renamed) return principal units only
<niemeyer> rogpeppe: The data fiddling seems to be unnecessary
<rogpeppe> niemeyer: that's what i'm thinking
<niemeyer> rogpeppe: And it's nice to avoid depending on atomic behavior, even if we can
<rogpeppe> niemeyer: (if i understand what you mean by data fiddling here)
<rogpeppe> niemeyer: that too.
<niemeyer> rogpeppe: The Units API still looks nice, though
<rogpeppe> niemeyer: i'm happy if we document that Machine.Units returns principal units only
<niemeyer> rogpeppe: That sounds bad
<rogpeppe> niemeyer: oh
<niemeyer> rogpeppe: Have you seen the comment in the review?
<rogpeppe> niemeyer: so your suggestion, i think, is that topology.UnitsForMachine should do a search for subordinate units too, rather than just looking a topoUnit.Machine?
<rogpeppe> s/a topo/at topo/
<niemeyer> rogpeppe: Right, it's the same loop..
<rogpeppe> niemeyer: no, i think it needs to be an inner loop. it's n^2 but probably not bad in practice.
<rogpeppe> niemeyer: that is, for each unit we find with the given machine, we have to scan all other units to see if any has that unit as its principal.
<niemeyer> rogpeppe: Only if we do it badly
<rogpeppe> niemeyer: right, we could do two linear loops, building up a map in the first one, i suppose
<niemeyer> Yep
<rogpeppe> still not quite "the same loop" but, yeah, that would work ok.
<rogpeppe> niemeyer: and i have to check if we always know if a unit is principal when we create a *Unit.
<niemeyer> rogpeppe: It can actually be done in the same loop, but it doesn't matter.
<niemeyer> rogpeppe: The Principal is a setting next to the Machine
<rogpeppe> niemeyer: i don't *think* so, because when we get to a subordinate before its principal, we don't know whether to keep it around or not.
<niemeyer> rogpeppe: I'm going to argue.. it doesn't matter..
<niemeyer> I'm not going to argue
<rogpeppe> niemeyer: i'd like to do it in the same loop, which is why i'm asking. i don't mind much though.
<rogpeppe> niemeyer: ok, i'll go with that, thanks.
<rogpeppe> niemeyer: i don't think the tests need to change, which is nice.
<niemeyer> rogpeppe: I'm more concerned that it looks nice.. it doesn't have to be in the same loop
<niemeyer> rogpeppe: The tests need to ensure subordinate units are returned, because clearly they don't right now
<rogpeppe> niemeyer: that CL made it do so, and tested for that
<rogpeppe> niemeyer: but i think Unit.MachineId should look up the principal unit when necessary. that's new.
<niemeyer> rogpeppe: Yeah, indeed
<Aram> hi niemeyer
<niemeyer> Aram: Heya
<rogpeppe> mramm: hiya
<mramm> rogpeppe: hi
<TheMue> Lunchtime
 * rogpeppe *hates* the fact that ^W deletes a tab in Chrome. And that it loses the contents of any text entry box in doing so.
<TheMue> niemeyer: https://codereview.appspot.com/6296064 is just in again. Looks better, definitely. Thx for that great idea.
<Aram> rogpeppe: I hate that ^W is so close to ^Q.
<rogpeppe> Aram: i'd switch off all menu shortcuts if i could (i don't count ^X, ^C and ^V)
<niemeyer> TheMue: Heya, no problem
<niemeyer> rogpeppe: clean up data", but that's not the only thing that zkRemoveTree is used
<niemeyer> for - removing a service is not really "cleaning up data").
<niemeyer> rogpeppe: I'm not following..
<niemeyer> rogpeppe: That's exactly what zkRemoveTree is doing in that context
<niemeyer> rogpeppe: Ah, I think I see what you mean
<TheMue> rogpeppe: zkRemoveTree() only adds this, but it's called in methods adding more context.
<TheMue> rogpeppe: So you've got "can't remove server: can't clean up data: ...".
<TheMue> rogpeppe: The context of where zkRemoveTree() is called has to be added by the caller.
<TheMue> Ooops, 'can't remove service "foo": can't clean up data: <<some error during zk.Delete()>>'
<TheMue> So it looks like.
<niemeyer> TheMue: That was his point too
<niemeyer> Heading off for doc appointment
<niemeyer> TheMue: You have a review on this issue
<TheMue> niemeyer: Yes, I've seen and started the changes.
<TheMue> niemeyer: Is back in again.
 * TheMue really likes errorContext(). It can easily be extended later.
<niemeyer> TheMue: Thanks, checking again
<TheMue> rogpeppe: Doesn't the current errorContext() work with govet? My editor uses govet live (at least I thought so) and it doesn't complain.
<rogpeppe> TheMue: in a meeting, back in a short while
<TheMue> rogpeppe: ok, have a phone call, so np
<niemeyer> TheMue: Another review in
<niemeyer> TheMue: The point of rog's comment is that he suggests go vet would actually catch mistakes with his proposed implementation. I'm not sure about that myself, and would like to see it happening before changing the function to something uglier.
<niemeyer> Is fwereade off today?
<TheMue> niemeyer: reasonable
<TheMue> niemeyer: Yes, he's off today.
 * TheMue has a hot ear from phoning a bit longer.
<niemeyer> There are two branches with similar names and similar descriptions, pushed around the same time
<TheMue> niemeyer: fwereade asked rogpeppe this morning regarding rework-stop-watchers
<niemeyer> TheMue: Oh?
<TheMue> niemeyer: It's not submitting cleanly and so he asked rog to merge it.
<niemeyer> TheMue: The branch is up for review
<niemeyer> Oh, but it's a copy of a third branch?
<niemeyer> Ugh.. messy
<TheMue> niemeyer: I think rog can tell you more about it.
<niemeyer> It seems to be the same branch that was already reviewed
<niemeyer> It just sucks that there are two merge proposals up without any hints about what's going on
<niemeyer> I'll pick the newest one and attempt to merge it
<TheMue> niemeyer: Next propose is in.
 * rogpeppe is out of the meeting
<rogpeppe> niemeyer: william wasn't able to submit the branch
<rogpeppe> niemeyer: he was getting "readonly transport" errors
<niemeyer> rogpeppe: Okay, it's in now.. hopefully I got the intention properly
<niemeyer> TheMue: Review is in
<rogpeppe> niemeyer: the point about the go vet stuff is that i think it's asking for trouble to have a function that takes a printf-like format string and deliberately miss the last argument.
<rogpeppe> niemeyer: and in fact i think the signature is slightly nicer - every single call would end in ": %v" AFAICS
<rogpeppe> niemeyer: so we can lose that
<niemeyer> rogpeppe: How's go vet related to that?
<rogpeppe> niemeyer: go vet can check printf-like format strings
<niemeyer> rogpeppe: I'm happy to have the ": %s" added automatically.. that point wasn't clear from your comment
<niemeyer> rogpeppe: Have you managed to make go vet work with that function?
<rogpeppe> niemeyer: yes
<niemeyer> rogpeppe: Can you please paste an example that fails
<niemeyer> TheMue: Please hold off merging for a moment
<TheMue> niemeyer: Yes, I'm following here.
<rogpeppe> niemeyer: actually i lie. i managed to make go tool vet work with that function.
<niemeyer> rogpeppe: Same thing.. can you please paste an example of go vet complaining?
<rogpeppe> niemeyer: http://paste.ubuntu.com/1039204/
<niemeyer> rogpeppe: Ah, you're calling vet manually with explicit parameters
<rogpeppe> niemeyer: yes, you need to tell govet about custom functions.
<rogpeppe> niemeyer: but it can still do the check
<niemeyer> rogpeppe: Ok, sounds good
<niemeyer> TheMue: That sounds good to me.. what do you think?
<TheMue> niemeyer: Yes, so I'll change it from append()... to +. Also adopt it in the error strings.
<niemeyer> TheMue, rogpeppe: Awesome, thanks
<rogpeppe> TheMue: thanks
<rogpeppe> TheMue: and note that the function name needs to end in "f"
<TheMue> niemeyer: I'll also change the name.
<TheMue> rogpeppe: Yes. ;)
<TheMue> niemeyer: Tests are green with latest trunk. Directly submit it or do you want to have a final look?
<niemeyer> TheMue: If it's all as agreed, +1 on submitting
<TheMue> niemeyer: OK, here it comes.
<niemeyer> TheMue: Woohoo :)
<TheMue> niemeyer: Hmm, not yet. Have a "Cannot lock LockDir" during submit. Any idea what I'm missing?
<rogpeppe> TheMue: i think that's the same error fwereade was getting
<rogpeppe> TheMue: does it say "readonly transport" ?
<TheMue> rogpeppe: Exactly!
<rogpeppe> niemeyer: have you tried submitting fwereade's branch? i wonder if you'll get the same problem.
<niemeyer> rogpeppe: As I mentioned, it's already in
<niemeyer> Hopefully I got the proper end of it
<niemeyer> I wonder if I screwed up the setup somehow
<rogpeppe> niemeyer: ah, i haven't seen a submit message yet.
<niemeyer> There won't be one, but I added a note
<niemeyer> And mentioned above too
<niemeyer> TheMue: What's bzr info telling you?
<rogpeppe> niemeyer: ah, i didn't see anything that said you'd actually submitted anything. my clients probably missed the message somehow.
<niemeyer> Jun 13 12:03:00 <rogpeppe>      niemeyer: he was getting "readonly transport" errors
<niemeyer> Jun 13 12:03:25 <niemeyer>      rogpeppe: Okay, it's in now.. hopefully I got the intention properly
<TheMue> niemeyer: It's a standalone tree format 2a, push branch is  bzr+ssh://bazaar.launchpad.net/~themue/juju-core/go-state-second-error-improvement/.
<TheMue> niemeyer: Branch root is .
<rogpeppe> niemeyer: ah, i didn't realise that's what you were talking about :-)
<niemeyer> rogpeppe: Heh
<TheMue> niemeyer: Parent is ...../trunk
<niemeyer> TheMue: Hmm
<niemeyer> Somehow it's picking up a branch for merging onto that you don't have access to
<niemeyer> Which makes little sense to me
<niemeyer> Oh, it's making more sense now
<niemeyer> Crap
<niemeyer> I've pushed as gophers rather than ~juju
<niemeyer> TheMue: Please try to submit now
<TheMue> niemeyer: Yeah, great, thx!
<TheMue> niemeyer: So I can start the next branch for unit.go.
<rogpeppe> niemeyer: assign-subordinate-units updated as we discussed: https://codereview.appspot.com/6299073
<TheMue> niemeyer: Ah, I'm a gopher now, see the reason. ;)
<niemeyer> TheMue: Worked?
<niemeyer> rogpeppe: Thanks a lot man
<niemeyer> rogpeppe: I'll have a look at that as soon as I'm back from lunch
<TheMue> niemeyer: Yes, it's in.
<rogpeppe> niemeyer: thanks!
<niemeyer> TheMue: Superb
<niemeyer> rogpeppe, TheMue: So it was just me screwing over on the migration
<TheMue> niemeyer: Thank you, I'll continue with unit.go now.
<niemeyer> I'll fix that when I'm back too
<rogpeppe> TheMue: you might want to review that too. and this one also: https://codereview.appspot.com/6299082
<rogpeppe> bcsaller: i've changed https://codereview.appspot.com/6299082/ based on your feedback. i *think* the two are functionally the same, but the new version avoids the need to make changes in several places in the topology at once.
<bcsaller> rogpeppe: thanks, I'll look at it again
<rogpeppe> bcsaller: oops, i meant https://codereview.appspot.com/6299073/
<rogpeppe> bcsaller: thanks a lot for the feedback BTW. it's really good to have more people familiar with the python version looking at the changes going past in the Go version.
<bcsaller> :)
<rogpeppe> bcsaller: can you think of a good reason why the machine agent downloads the charm for the unit agent that it starts?
<bcsaller> rogpeppe: rather than the unit agent you mean?
<rogpeppe> bcsaller: yeah
<rogpeppe> bcsaller: given that the unit agent already deals with downloading charms for its subordinates
<bcsaller> rogpeppe: I cannot, I think it should be the UA for when things are in LXC containers, but that design predates the LXC everywhere model
<rogpeppe> bcsaller: cool, thanks.
<bcsaller> rogpeppe: yeah, the sub stuff is new and we factored out the deployment stuff for that to be reused but didn't touch the orig MA use
<rogpeppe> bcsaller: the other thing i'm thinking about is that if the machine agent crashes but the unit agents carry on, then there doesn't seem to be a way for the MA to detect that they're still around and avoid starting them again. am i missing something?
<TheMue> rogpeppe: Review is in, sadly clicked too fast so one remark and one with comments.
<rogpeppe> TheMue: ok, thanks a lot
<bcsaller> rogpeppe: on restart of the MA it needs to look at the assigned units and their states in ZK, no?
<rogpeppe> bcsaller: yeah, you're right. i must've missed that bit in the python code. thanks.
<rogpeppe> bcsaller: i suppose the difficulty is that there's no real way of telling if the unit has really gone down, or just the unit agent.
<bcsaller> rogpeppe: a real and persistent issue, yeah, knowing if the service is running apart from the agent isn't really handled, it could mean so many things
<rogpeppe> bcsaller: and upstart *should* handle the unit agent restart. so perhaps it's right that the machine agent only checks for unit agent presence when it starts up, rather than watching it continually.
<bcsaller> rogpeppe: I think there is nothing wrong with the idea that the MA should observe the UA presence unless we are sure there is no action it will take (due to upstart for example)
<bcsaller> rogpeppe: but if its always a no-op then sure
<rogpeppe> bcsaller: i'm trying to think of a case where we'd want the MA kill the UA (and its container) and restart it. because there's no way the MA can restart the unit in general without doing that.
<bcsaller> rogpeppe: there would have to be a coordination protocol around the stop/restart/upgrade of units, kapil has been thinking about that stuff for some time now
<rogpeppe> bcsaller: do any of those (stop/restart/upgrade) imply that the unit's container needs to be ripped down, then recreated?
<bcsaller> rogpeppe: upgrade might for some definition of ripped down, but generally no
<bcsaller> rogpeppe: upgrading the agent itself
<rogpeppe> bcsaller: upgrading the agent itself is ok i think, and can be done without affecting the container
<bcsaller> rogpeppe: agreed
<rogpeppe> TheMue: the difference between addLoggingCharm and addDummyCharm is that addDummyCharm is used a lot, so probably justifies the charm variables stored in StateSuite (avoiding the overhead of reading the charm directory each time), but i'm not sure that addLoggingCharm does...
<rogpeppe> TheMue: on the other hand, maybe a more general mechanism might be good. i'll think about that.
<niemeyer> rogpeppe: I can't think of a reason for tha MA to be killing the UA like that
<Aram> gah.
<Aram> managed to lock myself out of the apartment.
<Aram> we have these dumb doors that lock themselves at all times.
<niemeyer> rogpeppe: and +1 on having UAs handling their own charms
<rogpeppe> Aram: so you're outside the door using your wi-fi?
<Aram> rogpeppe: no, I had to wait for my girlfriend to come back.
<rogpeppe> niemeyer: cool.
<niemeyer> Aram: Oh, crap.. that was one of the first things I did when we moved
<niemeyer> Aram: I mean, replacing the door handle by a reasonable one
<Aram> heh, yeah.
<niemeyer> If you want to lock the door from outside, you need the keys.. such a nice way to turn the issue into a non-existent problem
 * rogpeppe has to go. happy evenings all.
<Aram> fare thee well.
<rogpeppe> niemeyer: if you manage to look at those two CLs (https://codereview.appspot.com/6299073 and https://codereview.appspot.com/6299082) that would be marvellous, thanks.
<niemeyer> rogpeppe: Certainly will, thanks
<niemeyer> rogpeppe: Have a good time there
 * TheMue has to leave too
<niemeyer> rogpeppe: If you have a spare moment in your evening by any chance, Dave's https://codereview.appspot.com/6307071/ is waiting for you
<niemeyer> Okay, review queue is in good state finally.. three days to get it cleaned after holidays. Phew.
<niemeyer> I'm having a break.. will come back later to polish some juju slides
#juju-dev 2012-06-14
<rogpeppe> davecheney: mornin'
<davecheney> rogpeppe: hello hello
<rogpeppe> davecheney: review delivered.
<davecheney> thank you good sit
<davecheney> sir
<davecheney> i'll polish that up now
<davecheney> re types
<davecheney> I agree, we should have more of them
<davecheney> i'm getting tired of typing []*state.Machine
<rogpeppe> davecheney: with slices, i don't agree actually.
<rogpeppe> davecheney: i think type MachineList []*state.Machine would be worse.
<rogpeppe> (and it's the same length)
<rogpeppe> ish
<davecheney> Machines
<davecheney> rogpeppe: do I owe you any reviews ?
<davecheney> i don't feel that i'm in the best timezone to assist, gustavo has usually got to them before me
<rogpeppe> davecheney: i'm about to submit https://codereview.appspot.com/6299073/; but you might want to glance over https://codereview.appspot.com/6299082/
<rogpeppe> davecheney: yeah, but another pair of eyes is always good
<davecheney> ok, sam just came home, but i'll check it out after dinner
<rogpeppe> davecheney: np. it's pretty much code-grind anyway.
<Aram> morning
<fwereade> hey Aram
<TheMue> Aram: moin
<TheMue> Somehow my 1.0.2 has troubles with gozk and goyaml.
<Aram> ah, new release
<Aram> works here
<TheMue> Aram: Maybe I just have to clean up the 3rd party libs and re-get.
<TheMue> rogpeppe: ping
<rogpeppe> TheMue: pong
<rogpeppe> i haven't tried 1.0.2 if that's what you're gonna ask :-)
<TheMue> rogpeppe: Ah, is AssignUnit() in unit.go by you?
<TheMue> rogpeppe: No ;)
<rogpeppe> TheMue: i've touched it recently...
<rogpeppe> TheMue: although it was fwereade's originally
<rogpeppe> i think
<TheMue> rogpeppe: OK, thx.
<fwereade> rogpeppe, TheMue: yeah, I think so
<rogpeppe> TheMue: is there a problem with it?
<TheMue> fwereade: I just wondered because it's a func taking a State as first arg.
<TheMue> fwereade: We mostly put this as method to State.
<fwereade> TheMue, that's at niemeyer's request, he didn't like having it on Unit
<fwereade> TheMue, I guess it could go on State, that is true
<TheMue> fwereade: And internally you used u.st, the state of the unit.
<Aram> TheMue: gozk and goyaml use cgo, maybe it's your gcc at fault, or you lack some dumb lib.
<fwereade> TheMue, heh that's just me being stupid
<rogpeppe> +1 on making it a state method
<fwereade> TheMue, I recommend a little CL fixing it on its own and seeing what niemeyer thinks, I can't see a problem
<TheMue> rogpeppe: thx
<TheMue> fwereade: Will you do it or shall I?
<fwereade> TheMue, would you please? I'm still trying to figure out why I can't talk to launchpad properly... :/
<TheMue> fwereade: Check if you're a member of the gophers group.
<TheMue> fwereade: That has been the problem for me.
<TheMue> fwereade: Same kind of error yesterday.
<fwereade> TheMue, looks like that's the trouble
<fwereade> TheMue, tyvm
<TheMue> fwereade: np
<TheMue> Ah, found my missing package/file for ZK. I'm wondering why this happens now. OK, it's the first rebuild after switch to 12.04.
<rogpeppe> TheMue: i think you should hold off on the tags until we get some feedback from niemeyer.
<rogpeppe> TheMue: if we agree it's good to change them, i'll do them all in that CL.
<TheMue> rogpeppe: Great, and it has been a good hint.
<rogpeppe> TheMue: also, about the error message - i'd like to keep that CL as only about the go vet fixes. error message fixing needs to happen more generally in another place.
<rogpeppe> s/another place/another CL/
<TheMue> rogpeppe: I'm doing the state error messages right now file by file.
<TheMue> rogpeppe: And in this context I only found your prefix for ec2
<fwereade> rogpeppe, just looking at presence_test; you added the "+ a little bit for scheduler glitches" to the longEnough var
<fwereade> rogpeppe, can you explain what that's about?
 * rogpeppe looks
<rogpeppe> fwereade: things can happen after they are expected to.
<rogpeppe> fwereade: it's not a real-time language
<fwereade> rogpeppe, what does go have to do with this at all?
<rogpeppe> fwereade: so it's best to allow some slack in timings
<rogpeppe> fwereade: go is firing the watch
<fwereade> rogpeppe, isn't allowing double the pinger period for detecting timeout good enough already?
<fwereade> rogpeppe, ah, hmm, I see what you mean in this context
 * fwereade thinks a bit
<rogpeppe> fwereade: no, because as the comments say, the time out might legitimately occur after 99ms
<rogpeppe> fwereade: it doesn't matter if we add some slack, but it does matter if we get test failures on a heavily loaded system
<TheMue> fwereade, rogpeppe: Move of AssignUnit() is in as https://codereview.appspot.com/6298081
<rogpeppe> TheMue: LGTM
<TheMue> rogpeppe: That's fast. ;)
<rogpeppe> TheMue: quickest review ever!
<TheMue> rogpeppe: Definitely. Cheers!
<rogpeppe> 29s...
<Aram> mramm: 15:00 GMT is fine with you?
<mramm> I believe that is my one not-free hour
<Aram> 16:00 GMT?
<mramm> that works
<Aram> good
<twobottux> aujuju: Invalid SSH key error in juju when using it with MAAS <http://askubuntu.com/questions/147714/invalid-ssh-key-error-in-juju-when-using-it-with-maas>
 * TheMue is out of the office this afternoon, will continue work later.
<rogpeppe> fwereade: is there a reason why in the python version some parameters are passed as env vars not flags?
<rogpeppe> (to agents, that is)
<fwereade> rogpeppe, I don't think so; IIRC niemeyer agreed to drop those
<rogpeppe> fwereade: cool, thanks
<fwereade> rogpeppe, (well the reason, I think, was to make python testing more convenient, so it shouldn't apply any more ;))
<rogpeppe> fwereade: thanks
<rogpeppe> fwereade: i'm just looking in juju/machine/unit.py; it seems a bit odd that the provider type is overloaded to be "subordinate" sometimes. i'm thinking that unit deployment really needs only one argument: whether to put the unit agent in a container.
<rogpeppe> fwereade: does that sound reasonable?
<rogpeppe> when i say "only one argument" obviously there are other args, but i don't think it needs to know the provider type
<fwereade> rogpeppe, sorry, went for a ciggie
 * fwereade reads back
<fwereade> rogpeppe, yeah, that makes sense to me... didn't we determine that SubordinateContainerDeployment was actually identical to UnitMachineDeployment anyway?
<rogpeppe> fwereade: that it is
<fwereade> rogpeppe, it crosses my mind that, long-term, we need to promote the concept of container somehow
<fwereade> rogpeppe, as it is we make inferences about containers in a range of places
<rogpeppe> fwereade: i think that's what i'm doing by making it a bool argument to deploy.
<rogpeppe> fwereade: but i'm probably missing something important
<fwereade> rogpeppe, as it is a unit "is" a container, except when some other unit "is" its container, and it confuses me
<rogpeppe> fwereade: it seems to me that we'd need a bool "containerise" field in the principal unit. but maybe not much more
<fwereade> rogpeppe, hold on, how does this interact with deploying into local containers? a subordinate needs to be deployed into an existing container...
<fwereade> rogpeppe, feels like there's something missing
<fwereade> rogpeppe, but I'm not really up on the context
<rogpeppe> fwereade: subordinates are deployed by the principal's unit agent
<fwereade> rogpeppe, ah, cool
<fwereade> rogpeppe, your perspective seems fine to me then :)
<rogpeppe> fwereade: i *think* that it'll all just fall out naturally, including the local provider stuff
<rogpeppe> fwereade: cool, the crack is not with me today then
<fwereade> rogpeppe, btw, https://codereview.appspot.com/6298082
<niemeyer> Guten tag!
<niemeyer> fwereade: Heya, yeah, sorry for the gophers team issue
<Aram> moin.
<niemeyer> Aram: Heya
<fwereade> niemeyer, no worries :)
<fwereade> niemeyer, that had me *utterly* foxed for a while though :)
<niemeyer> fwereade: I figured that yesterday, but I wasn't sure if I should move the branch to another team and forgot to add you to ~gophers, which should be done no matter what
<rogpeppe> niemeyer, fwereade: here's a sketch for a possible new package (proposed name launchpad.net/juju-core/juju/service) http://paste.ubuntu.com/1040813/
<rogpeppe> niemeyer, fwereade: it will encapsulate much of the logic in juju/machine/unit.py
<fwereade> rogpeppe, in essence, LGTM, but I think we should be careful about Destroy :)
<rogpeppe> fwereade: careful how?
<rogpeppe> fwereade: you mean avoid rm -rf / ?
<fwereade> rogpeppe, IIRC, that's the idea
<rogpeppe> fwereade: yeah, i'll do my best not to trash your machine or mine :-)
<fwereade> rogpeppe, nah, it's the same issue as automatically terminating machines
<fwereade> rogpeppe, we don't do that because we feel people might want a chance to retrieve their data even if they're no longer running the service
<fwereade> rogpeppe, deleting containers falls under the same category I think
<fwereade> niemeyer, is the above broadly accurate?
<niemeyer> rogpeppe: I'm not sure.. in isolation, that interface doesn't seem very appealing
<niemeyer> rogpeppe: We have a bunch of things that "represent a deployed service", and we have a very special and well defined meaning for what a "service" is
<rogpeppe> fwereade: ok. my first thought was a package called "deploy"
<rogpeppe> oops
<rogpeppe> niemeyer: ^
<niemeyer> rogpeppe: The parameters of New() feel like a bag of unrelated attributes
<rogpeppe> yeah, "service" is not a great name
<rogpeppe> niemeyer: another thought was to put those attributes in a struct
<niemeyer> rogpeppe: I don't know.. it's hard to suggest something in isolation in this case. In my mind what we need is a function that creates the container
<rogpeppe> niemeyer: what do you think to the basic division of functionality (leaving all setup of directory structure to the unit agent)?
<niemeyer> rogpeppe: Not an interface, not a struct
<rogpeppe> niemeyer: ok, so how do we then destroy the container?
<niemeyer> rogpeppe: Thinking
<niemeyer> rogpeppe: Ok.. the API is mostly sensible I guess
<niemeyer> rogpeppe: Maybe the proper name for this is Container
<niemeyer> rogpeppe: With something along the lines of Create, Start, and Destroy methods
<rogpeppe> niemeyer: i suppose so. but it's going to be used for starting things that aren't in a container too. at least that was the idea
<niemeyer> rogpeppe: I'm thinking about Container in a more abstract way.. it's a unit container
<niemeyer> rogpeppe: (rather than an *LXC* container)
<rogpeppe> niemeyer: hmm, maybe
<niemeyer> rogpeppe: That's how we've been referring to it all along, I think
<niemeyer> rogpeppe: We have relations with scope of "container" for example
<niemeyer> rogpeppe: Despite the fact they're not deployed via LXC in some cases
<rogpeppe> niemeyer: i'm slightly uncomfortable because in the default case it doesn't "contain" anything
<niemeyer> rogpeppe: Uh.. how so?
<niemeyer> rogpeppe: Maybe I misunderstand what this is about
<rogpeppe> niemeyer: we're just starting a process and adding an upstart entry for it, if container==false
<niemeyer> rogpeppe: It contains a unit
<niemeyer> rogpeppe: That's its reason of existence
<rogpeppe> niemeyer: starts a unit, really.
<niemeyer> rogpeppe: Contains, unless I misunderstand what you mean
<niemeyer> rogpeppe: The Dir there is the root of that container
<rogpeppe> niemeyer: i suppose it comes down to what we mean by "contain"
<niemeyer> rogpeppe: Yes, that's what I've been trying to point out :)
<rogpeppe> niemeyer: we can have several "containers" that all have the same root.
<rogpeppe> niemeyer: (in the subordinate case)
<niemeyer> rogpeppe: By definition, all containers will always be in the same root.. we don't have filesystem management today
<rogpeppe> niemeyer: LXC containers have different roots from one another, no?
<niemeyer> rogpeppe: Ok, hold on, so I really misunderstand what you mean
<niemeyer> rogpeppe: Subordinates all live within *one* container
<rogpeppe> niemeyer: yeah
<niemeyer> rogpeppe: I don't know what we're discussing then
<rogpeppe> niemeyer: but the idea behind this API is that a machine agent or a unit agent can start another unit without worrying too much about if it's contained or not.
<niemeyer> rogpeppe: The machine agent has to create a container (the container I'm talking about)
<niemeyer> rogpeppe: and start a process within that container
<niemeyer> rogpeppe: That's a different procedure from what a unit agent must do
<rogpeppe> niemeyer: yup, and that's what i want this API to do
<niemeyer> rogpeppe: Okay, that's a Container interface
<niemeyer> rogpeppe: That's not what a unit agent must do
<niemeyer> rogpeppe: A unit agent has to simply put a file on the upstart directory, and run "start whatever".. done
<rogpeppe> niemeyer: however, in the future, the MA must be able to do both things, i think
<niemeyer> rogpeppe: Why?
<rogpeppe> niemeyer: because we want it to be able to start a unit both in or out of a container
<niemeyer> rogpeppe: Nope
<rogpeppe> niemeyer: so we can avoid LXC overhead if necessary
<niemeyer> rogpeppe: It's always within the abstract concept of a container that I'm talking about
<niemeyer> rogpeppe: A unit container may use LXC or not, but it's still a unit container
<rogpeppe> niemeyer: ah!
<niemeyer> rogpeppe: We call it "scope: container" even when there's no LXC involved
<rogpeppe> niemeyer: so perhaps we have a global variable in the container package which is "Current"
<rogpeppe> niemeyer: as in the container we're currently in.
<niemeyer> rogpeppe: I don't know.. that's an implementation detail that went over my head at the moment
<rogpeppe> niemeyer: the difficulty i'm having is that when the MA is deploying a unit without LXC, it's not creating a container.
<niemeyer> rogpeppe: It is.. it's just a very trivial container
<niemeyer> rogpeppe: That runs without isolation
<niemeyer> rogpeppe: Container has Create, Start, Destroy: Create the container itself, Start makes it run now and whenever the machine boots; Destroy gets rid of it all
<niemeyer> rogpeppe: That's doable both with and without LXC
<niemeyer> rogpeppe: (and we'll need both)
<niemeyer> rogpeppe: The other problem you mentioned, starting another unit from the point of view of a unit that is already running, is trivial
<niemeyer> rogpeppe: It can literally be done in a very short function that dumps an upstart script and starts it
<niemeyer> rogpeppe: Because we've already agreed that the unit is responsible for itself (downloading charm, etc)
<rogpeppe> niemeyer: here's what i've got currently: http://paste.ubuntu.com/1040866/
<rogpeppe> niemeyer: i'm not sure about the command args to Create though
<rogpeppe> niemeyer: i'm not sure i want the code outside of container to know about how upstart works.
<rogpeppe> niemeyer: a few typos rectified: http://paste.ubuntu.com/1040871/
<rogpeppe> fwereade: about destroying containers, i'm not sure. the current python code destroys containers when their units disappear from the machine. we'd need a new "terminate-unit" command, i guess.
<fwereade> rogpeppe, I don't really have a position on this: I'm not really happy with either approach :)
<rogpeppe> fwereade: if in doubt, follow the existing implementation, right?
<fwereade> rogpeppe, not sure; depends how soon ubiquitous containerisation arrives really
<fwereade> rogpeppe, I'm not sure it's even terminate-unit so much as it is terminate-container
<rogpeppe> fwereade: i don't think the user has any concept of containers,
<niemeyer> rogpeppe: I was thinking about something along these lines: http://paste.ubuntu.com/1040880/
 * rogpeppe thinks.
<niemeyer> rogpeppe: Will actually need an extra "Start() error" method
<niemeyer> Due to LXC.. Create > Deploy > Start
<rogpeppe> niemeyer: i'm thinking that if we're passing in Unit (i was originally trying to build a low level package without state dependency), we can make the package know whether to deploy as LXC or not, because it can tell from the Unit itself.
<niemeyer> rogpeppe: No, it can't
<niemeyer> rogpeppe: This is part of the environment configuration
<niemeyer> rogpeppe: In fact.. there's no point in having Create and Deploy, I believe..
<niemeyer> So it's really just Deploy and Destroy
<rogpeppe> niemeyer: DeployLXC(*Unit) (Container, error)
<rogpeppe> niemeyer: or we could just call the package "deploy"
<niemeyer> rogpeppe: Then you don't have a Container interface to pass around and allow the implementation to vary anymore
<rogpeppe> niemeyer: deploy.LXC(*Unit) (Container, error)
<rogpeppe> niemeyer: ? that's what it's returning
<rogpeppe> niemeyer: what's your "dir" arg?
<niemeyer> rogpeppe: The directory for the container (!?)
<rogpeppe> niemeyer: doesn't the LXC subsystem choose that?
<niemeyer> rogpeppe: I don't know..
<rogpeppe> niemeyer: the python code looked like it did
<rogpeppe> niemeyer: but that may be just the way they've chosen to do it
<niemeyer> This is the direction I would go: http://paste.ubuntu.com/1040898/
<niemeyer> But it sounds like we're simplifying quite a bit, which is great.
<rogpeppe> niemeyer: what can we usefully do between calling LXC and calling Deploy?
<niemeyer> So it may turn out to be a single function :)
<niemeyer> <niemeyer> rogpeppe: Then you don't have a Container interface to pass around and allow the implementation to vary anymore
<rogpeppe> niemeyer: i don't understand that
<rogpeppe> niemeyer: you mean for testing?
<niemeyer> rogpeppe: Ok, don't worry.. it really depends on the implementation.
<niemeyer> I'm happy to see real stuff happening around that, whatever it is
<niemeyer> rogpeppe: Just don't name the method as "deploy" please.. you won't want to have a destroy function within a deploy method
<niemeyer> s/method/package
<rogpeppe> niemeyer: this is what i was thinking: http://paste.ubuntu.com/1040905/
<rogpeppe> oh, ok
<rogpeppe> niemeyer: this, perhaps: http://paste.ubuntu.com/1040908/
<rogpeppe> niemeyer: or even: http://paste.ubuntu.com/1040910/
<niemeyer> rogpeppe: How to destroy a container that wasn't created in the current process run?
<rogpeppe> given that LXC is really an implementation detail
<rogpeppe> niemeyer: i'm not sure we ever want to do that.
<niemeyer> rogpeppe: Yes, we do
<rogpeppe> niemeyer: when?
<niemeyer> rogpeppe: Sorry, I'm missing what you're missing
<niemeyer> rogpeppe: Every time?!
<niemeyer> rogpeppe: Processes are restartable
<rogpeppe> oh yeah
<rogpeppe> niemeyer: i'm not sure how to do it though.
<niemeyer> rogpeppe: The interface I suggested!? :)
<rogpeppe> niemeyer: i'm wondering how the Simple and LXC functions know what they're attaching to
<niemeyer> rogpeppe: What does "attach" mean?
<rogpeppe> niemeyer: if you restart, how do you get a Container that refers to a container created in a previous process run?
<rogpeppe> niemeyer: i'm thinking that they might take a name as argument
<niemeyer> rogpeppe: With LXC(...) or Simple(...)..
<rogpeppe> niemeyer: yes, but how do you tell LXC what container you want it to talk to?
<niemeyer> rogpeppe: With the arguments to LXC(...)
<rogpeppe> niemeyer: i.e. what arguments does LXC have
<niemeyer> rogpeppe: I'd have to implement it to tell you :)
<niemeyer> rogpeppe: There is existing logic in the Python tree to serve as inspiration
<rogpeppe> niemeyer: i'm thinking that we'd probably do it by name; so it'll be the name of the unit, most likely, so perhaps my interface would work after all.
<niemeyer> rogpeppe: I'm sure you can make any interface work
<rogpeppe> niemeyer: i'm not :-)
<niemeyer> rogpeppe: It doesn't mean it will look good, necessarily :)
<rogpeppe> niemeyer: i'm thinking that if the container already exists for a unit, Simple would return it
<rogpeppe> niemeyer: but yeah, maybe we want to distinguish between create and reattach
<niemeyer> rogpeppe: deploying an LXC container as a side-effect isn't a great idea
<rogpeppe> niemeyer: wouldn't LXC() attach to an LXC container, or create it if it didn't exist?
<rogpeppe> niemeyer: or maybe it doesn't do anything until you call Deploy
<niemeyer> rogpeppe: We're starting to go in circles.. my suggested Container interface has a Deploy method
<rogpeppe> niemeyer: i'm wondering what the LXC function actually *does*.
<niemeyer> rogpeppe: Return an LXC Container implementation
<rogpeppe> niemeyer: so it's just a factory. i see.
<rogpeppe> niemeyer: hence the lack of an error return, doh!
<rogpeppe> niemeyer: sorry, i hadn't appreciated that.
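The pastebins in this exchange have expired, so here is a hedged reconstruction of the shape the two converge on: `LXC(...)` and `Simple(...)` are plain factories (no error return) producing a `Container` implementation, and taking a name lets a restarted agent reattach to a container created in a previous process run. Every name and signature below is a guess from the chat, not the code that was eventually merged.

```go
package main

import "fmt"

// Container is the abstract unit container: Deploy creates whatever
// is needed and starts the unit; Destroy stops it and removes its
// container state. (niemeyer's earlier Create/Start split collapsed
// into Deploy during the discussion.)
type Container interface {
	Deploy() error
	Destroy() error
}

// simple deploys without isolation: in the real system, just an
// upstart job for the unit agent.
type simple struct{ unitName string }

func (s *simple) Deploy() error {
	fmt.Println("deploying", s.unitName, "without isolation")
	return nil
}

func (s *simple) Destroy() error {
	fmt.Println("destroying", s.unitName)
	return nil
}

// Simple returns a trivial container for the named unit. Identifying
// the container by unit name is what lets Destroy work after a
// process restart.
func Simple(unitName string) Container { return &simple{unitName} }

// LXC would return an isolation-backed implementation; stubbed here
// with the trivial one since this is only a sketch.
func LXC(unitName string) Container { return &simple{unitName} }

func main() {
	c := Simple("wordpress/0")
	if err := c.Deploy(); err != nil {
		panic(err)
	}
	if err := c.Destroy(); err != nil {
		panic(err)
	}
}
```

The factory returns no error precisely because it does nothing but select an implementation; all fallible work happens inside Deploy and Destroy.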
<niemeyer> np
<fwereade> TheMue, btw, I like errorContextf :)
<rogpeppe> niemeyer: https://codereview.appspot.com/6299082/ should be good to go now, i hope.
<niemeyer> rogpeppe: Done
<fwereade> TheMue, ping
<rogpeppe> niemeyer: thanks a lot.
<rogpeppe> niemeyer: submitted.
<niemeyer> rogpeppe: Thank you! Really glad with how readable these watchers have become
<rogpeppe> niemeyer: i still can't help thinking if there's a way to factor out some common pattern from them... but i agree, it's easier to follow than the python logic.
<niemeyer> rogpeppe: Crosses my mind too, but I also realize that we've already been doing some of that
<rogpeppe> niemeyer: e.g. stopWatcher
<niemeyer> Yeah
<rogpeppe> niemeyer: i don't mind too much. they're similar enough that it's quite easy to look at them side-by-side and see differences.
<niemeyer> rogpeppe: Yeah, don't see as an immediate issue we need to worry about either
<niemeyer> rogpeppe: This may even change significantly if Aram's research turns out well
<niemeyer> Aram: Btw, when do you plan to start pushing some trivial branches?
<niemeyer> Aram: To get us started on the mstate stuff
<Aram> niemeyer: today, I started to work on mstate, let me create the launchpad project so I can have a place to push them
<niemeyer> Aram: I think there are easy first steps to get us going with something concrete and get the momentum going
<niemeyer> Aram: Woah what?
<Aram> yes.
<niemeyer> Aram: The Launchpad project is "juju-core" :-)
<Aram> ah, so I should push mstate there?
<niemeyer> Aram: and it should be pushed in very small steps that feel solid, so we can see the experience flowing
<fwereade> niemeyer, btw, if you have a moment to take a look at https://codereview.appspot.com/6298082/, it should be quick
<Aram> yes, of course.
<Aram> so niemeyer, the mstate directory should be in juju-core/juju? I used an out of tree directory for now.
<fwereade> niemeyer, I'm just updating it now such that it includes the error checks you suggested in the unit-relation-watcher review but that shouldn't affect how good it looks
<niemeyer> Aram: Yeah, side by side with the real state
<Aram> niemeyer: ok, understood.
<Aram> btw niemeyer, we use only a limited subset of the state api: http://paste.ubuntu.com/1040999/ that listing is about 90% accurate.
<Aram> I generated it to prioritize work.
<niemeyer> Aram: Yes, because TheMue has been rocking solidly porting the real API
<niemeyer> Aram: The API is entirely used assuming we finish the port of juju to Go using similar logic to Python
<niemeyer> Aram: I suggest starting with the easy wins, to get our feet wet
<rogpeppe> niemeyer, Aram: i wonder whether we should do some watcher stuff early on, because that's where our state stuff differs most from conventional mongo usage
<niemeyer> fwereade: Reviewed
<niemeyer> rogpeppe: As I just said, I'd prefer to start with some easy stuff first, to get our feet wet
<fwereade> niemeyer, good point, thank you
<niemeyer> rogpeppe: The mstate package doesn't exist, we have zero infrastructure working, we don't know even the basic patterns we want to use
<niemeyer> rogpeppe: Thinking about watches when we miss all of that will be messy
<rogpeppe> niemeyer: definitely. just we shouldn't get too far into implementing juju structures and then realise they're incompatible with the way we need to implement watchers
<niemeyer> rogpeppe: The state package is not too far, in its entirety
<rogpeppe> niemeyer: but gotta start with something!
<niemeyer> Let's just get something going, slowly and solidly
<rogpeppe> niemeyer: BTW sometime it would be great if you could do that goamz resilience stuff, so i can run -amazon tests more reliably.
<rogpeppe> niemeyer: (another "handshake error"...)
<niemeyer> rogpeppe: Sounds good
<niemeyer> First, I'll have lunch!
<niemeyer> :)
<niemeyer> See you guys soon
<rogpeppe> niemeyer: enjoy!
<TheMue> fwereade: pong
<fwereade> TheMue, NoRelationError is starting to feel a little bit redundant
<fwereade> TheMue, I was thinking of replacing it with a bare-faced "relation doesn't exist"
<fwereade> TheMue, and trusting the clients' errorContextf~s to give necessary context
<TheMue> fwereade: Sounds reasonable, now with errorContextf.
<TheMue> fwereade: Btw, it has been niemeyer's idea. The trick with using pointers inside is cool.
<fwereade> TheMue, regardless of initial source, it's cool; thank you for implementing it :)
<TheMue> fwereade: ;)
<rogpeppe> niemeyer: i've gotta go now, but this is what i'm looking at currently: http://paste.ubuntu.com/1041100/
<niemeyer> rogpeppe: Looks nice!
<fwereade> niemeyer, btw, I've been meaning to ask
<fwereade> niemeyer, can you precis your ideas for multi-endpoint relations?
<fwereade> niemeyer, I can see how they make sense for peer relations
<fwereade> niemeyer, but beyond that my brain breaks
<fwereade> niemeyer, (where "multi" means "more than we currently accept for a given relation kind")
<niemeyer> fwereade: Hmm
<niemeyer> fwereade: My brain doesn't break.. it simply stops thinking.. :-)
<niemeyer> fwereade: I'm not foreseeing all kinds of relations we may come up with
<niemeyer> fwereade: It may well turn out that we need none other than what we have
<niemeyer> fwereade: The concerns I raised in terms of the architecture are mainly to leave the door open, since we can, rather than preparing for a specific feature that I have in mind
<fwereade> niemeyer, ok, cool, thanks
<fwereade> niemeyer, as and when we add them I imagine we'll need to take a fine-tooth comb to the existing relation code anyway, so I probably shouldn't worry *too* much about them now
<niemeyer> fwereade: Agreed, not too much
<niemeyer> fwereade: The sensible and simple will likely be a good step no matter what we need in the future
<fwereade> niemeyer, as always :)
<fwereade> niemeyer, I've re-proposed https://codereview.appspot.com/6303060/
<niemeyer> fwereade: +1 :)
<niemeyer> fwereade: Thanks a lot
<fwereade> niemeyer, it's much smaller than it looks really
<fwereade> niemeyer, and I'm not quite sure ServiceRelation is sane, but fixing that is definitely a job for another CL
<niemeyer> fwereade: +1
<niemeyer> fwereade: Wow, 6 days
<niemeyer> fwereade: Was it in the review list?
<fwereade> niemeyer, I took it out
<niemeyer> fwereade: Aha, phew, ok
<niemeyer> I'm not crazy then
<fwereade> niemeyer, that pipeline was going in a bad direction, but I think I've found the right one now
<niemeyer> fwereade: Superb
<fwereade> off for now, gn all
<niemeyer> fwereade: Night man
<TheMue> fwereade: Bye
<niemeyer> robbiew: call time?
<niemeyer> Maybe not.. :)
<robbiew> niemeyer: yep..just running late
<robbiew> niemeyer: still around?
<niemeyer> robbiew: Yo
<niemeyer> Creepy.. the phone rings with the hangout before the laptop sees it
 * niemeyer steps out for a nice coffee break
<niemeyer> Back later for more reviewing
<fwereade> rogpeppe, ping
<fwereade> rogpeppe, if you happen to get this, please let me know what the purpose of the unused state.Unit.isPrincipal field is :)
<niemeyer> fwereade: Tells if the unit is a principal unit or not?
<niemeyer> davecheney: Morning
<davecheney> niemeyer: hello
<niemeyer> davecheney: Sorry for not having gotten through your branches yet
<davecheney> niemeyer: that is ok, i know everyone is busy
<niemeyer> davecheney: I have a meeting right now, but will still try to clean things up a bit tonight
<davecheney> niemeyer: for f in $(seq 1 5); do go test launchpad.net/juju-core/juju/store ; done
<davecheney> ^ fails 1 in 5 times for me
<niemeyer> davecheney: Okay, always the same test I suppose
<davecheney> [LOG] 35.45727 Socket 0xf841bf5900 to 127.0.0.1:50017: killed again: read tcp 127.0.0.1:50017: use of closed network connection (previously: Closed explicitly)
<davecheney> OOPS: 25 passed, 1 FAILED
<davecheney> yup
<davecheney> actually it's,     c.Errorf("counter sum for %#v is %d, want %d", key, sum, expected)
<davecheney> ... Error: counter sum for []string{"charm-info", "oneiric", "wordpress"} is 0, want 1
<niemeyer> davecheney: Okay, that's shaky
<niemeyer> davecheney: Knowingly shaky, that is
<davecheney> yeah, i didn't think it was a big deal, i just rerun it
<niemeyer> davecheney: Should definitely fix it, though
<niemeyer> davecheney: WIll have a look it
#juju-dev 2012-06-15
<niemeyer> davecheney: ping
<davecheney> ack
<davecheney> niemeyer: ack
<niemeyer> davecheney: Yo
<niemeyer> davecheney: Sorry, I actually decided to just note down in the review.. we can talk after that
<davecheney> no worries
<niemeyer> davecheney: Please see what you think of https://codereview.appspot.com/6295067
<niemeyer> davecheney: I'm looking at https://codereview.appspot.com/6307071/diff/3007/environs/jujutest/livetests.go
<niemeyer> davecheney: How come t.Env.AllInstances returns 1 instance, and t.Env.Instances right above returns two?
<davecheney> niemeyer: because the code above asks for id0 twice
<davecheney> AllInstances returns a Set of known instances
<niemeyer> davecheney: ROTFL
<davecheney> maybe I should add a comment there, took me a while to figure that one out, I'm not even sure if it's correct
<davecheney> regarding the retry branch
<davecheney> i'm just going to remove that comment
<davecheney> you are of course correct that calling p.Stop() will return nil
<davecheney> so that is a trap for someone who might add that functionality (via a signal handler or something)
<niemeyer> davecheney: I've added a note on https://codereview.appspot.com/6307071/
<niemeyer> davecheney: There's one assertion there that would be nice, but otherwise please feel free to go ahead an merge it now
<davecheney> niemeyer: thank you
<niemeyer> davecheney: I'll likely move the branch during the day tomorrow to lp.net/juju-core (no /juju), so the fewer branches we have flying around, the better
<niemeyer> Even though this time it should really not be much trouble
<niemeyer> Since reviews will remain valid and all
<davecheney> cool
<niemeyer> Okay, almost clean queue
<niemeyer> davecheney: I see a LGTM from rog on https://codereview.appspot.com/6294066/
<niemeyer> davecheney: I'll have a look at it tomorrow so we can get this in
<davecheney> sure
<niemeyer> Okay, bed time here
<niemeyer> davecheney: Have a good time there
<davecheney> niemeyer: will do, thanks again
<twobottux> aujuju: What is the correct way to set config options to a juju service unit with a file? <http://askubuntu.com/questions/151075/what-is-the-correct-way-to-set-config-options-to-a-juju-service-unit-with-a-file>
<twobottux> aujuju: Can't get juju to deploy, problem with ec2 keypair? <http://askubuntu.com/questions/151088/cant-get-juju-to-deploy-problem-with-ec2-keypair>
<rogpeppe> davecheney: mornin'
 * rogpeppe is a bit hung over.
<davecheney> rogpeppe: hey, just responding to your follow up now
<davecheney> will be but a moment
<davecheney> https://www.youtube.com/watch?v=AJHCsVZGnNY << audio quality is utterly atrocious
<rogpeppe> davecheney: oh yeah, i still have to finish watching that.
<rogpeppe> davecheney: i learned one new thing: hex.Dumper.
<fwereade> rogpeppe, btw, while I remember, what's state.Unit.isPrincipal for?
<fwereade> rogpeppe, doesn't seem to be used; and Unit already has IsPrincipal()...
<rogpeppe> fwereade: i didn't think there is state.Unit.isPrincipal...
<rogpeppe> fwereade: let me check
<rogpeppe> fwereade: nope, i don't see that method
<fwereade> rogpeppe, field
<fwereade> rogpeppe, you added it recently
<rogpeppe> fwereade: ah!
<rogpeppe> fwereade: yes, it's so that when we're watching units for a machine, we can tell which units are principals without having to read the topology again for each one
<rogpeppe> fwereade: i made a mistake actually
<rogpeppe> fwereade: Unit.IsPrincipal should just return u.isPrincipal
<rogpeppe> fwereade: will fix asap
<fwereade> rogpeppe, yeah, indeed
<fwereade> rogpeppe, cool, thanks
<rogpeppe> fwereade: good catch, thanks
<fwereade> rogpeppe, wasn't sure if there might have been something else going on
<fwereade> cheers
<rogpeppe> fwereade: nope, just my forgetfulness
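The fix rogpeppe describes — record whether a unit is a principal when it is read from the topology, and have `IsPrincipal` return the cached field instead of re-reading the topology per unit — might look like this minimal sketch (field and method names follow the chat; everything else is assumed):

```go
package main

import "fmt"

// Unit caches its principal-ness at read time, so watching the units
// of a machine doesn't need one topology read per unit.
type Unit struct {
	name        string
	isPrincipal bool // filled in when the unit is read from the topology
}

// IsPrincipal just returns the cached flag — the fix discussed above.
func (u *Unit) IsPrincipal() bool { return u.isPrincipal }

func main() {
	principal := &Unit{name: "wordpress/0", isPrincipal: true}
	subordinate := &Unit{name: "logging/0", isPrincipal: false}
	fmt.Println(principal.IsPrincipal(), subordinate.IsPrincipal())
}
```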
<rogpeppe> fwereade: BTW did you see my question on #juju?
<fwereade> ah sorry no
<fwereade> rogpeppe, I get to #juju rarely, when I run out of other work to do ;)
<rogpeppe> fwereade: i was playing around with writing some charms this morning and wondered what was the best way to wait until all of several relations had been joined
<rogpeppe> fwereade: like, for instance, if one relation has been joined, can i query the joined status of other relations?
<fwereade> rogpeppe, honestly I'm a bit unclear about the current intent of the various hook commands
<fwereade> rogpeppe, not really up on what jimbaker was doing just before last release
<fwereade> rogpeppe, bu I need to find out in detail soon, so I can implement them ;)
<rogpeppe> fwereade: :-)
<fwereade> rogpeppe, can I run some ideas past you as a sort of crackfulness check please?
<rogpeppe> fwereade: please do
<fwereade> rogpeppe, stop me when you detect the whiff ;)
 * rogpeppe sniffs carefully
<fwereade> rogpeppe, RelatedService is a good name for ServiceRelation; and its role and scope should be taken from the expected endpoint (of the service it's describing)
<fwereade> rogpeppe, RelatedService.RelationName(), however, should return the RelationName of the endpoint at *this* end of the relation, because that's how the related service is seen by the "local" service
<rogpeppe> fwereade: remind me what ServiceRelation represents again...
<fwereade> rogpeppe, ie the one on which we called RelatedServices() to get that RelatedService
<fwereade> rogpeppe, my best guess is that it represents what I just described
<fwereade> rogpeppe, but there's definitely something hinky about the types in play in the python
<fwereade> rogpeppe, I'm trying to come up with something that actually fits what we need to do
<rogpeppe> i haven't looked at those types. i think i had a mental block at "UnitRelation" which gustavo's renaming has helped with
<fwereade> rogpeppe, I'm actually starting to think that UnitRelation was a real misstep, that we don't want at all
<fwereade> rogpeppe, but that's a bit further down the crackfulness list
<rogpeppe> fwereade: we're talking about types that exist to hold the settings, right?
<fwereade> rogpeppe, well, zookeeper holds the settings... and I don't think anybody except the unit agent has any right to be looking directly at those settings any more
<rogpeppe> fwereade: because there's one collection of settings for each service for each relation for each unit
<fwereade> rogpeppe, yes
<fwereade> rogpeppe, but nobody should be looking at them directly IMO
<rogpeppe> fwereade: are these types not there specifically to enable what the unit agent needs to do?
<fwereade> rogpeppe, because they run the risk of destabilising the space-time continuum by violating causality
<fwereade> rogpeppe, *kinda*
<rogpeppe> fwereade: how is the unit agent going to read and change those settings without an API in state?
<fwereade_> gaaah
<rogpeppe> lol
<fwereade_> rogpeppe, did you see any of that, about RelatedGroup?
<rogpeppe> fwereade_: nope
<fwereade_>  rogpeppe, I'm going to return to my sorta-thread of ideas briefly, I think what I'm saying will make more sense with context
<fwereade_>  rogpeppe, next: RelatedGroup is a useful concept that deserves its own (simple) type
<fwereade_>  rogpeppe, a RelatedGroup really just wraps a path that's either /relations/rel-id or /relations/rel-id/container-id
<fwereade_>  rogpeppe, but it's a lot more convenient to pass around one of those and specify things like group.settingsPath(RoleProvider, unitId)
<fwereade_>  rogpeppe, ...than it is to hand-hack those paths everywhere
<rogpeppe> fwereade_: last i saw from you was "*kinda*" BTW
<fwereade_> rogpeppe, cool, I picked the right join point :)
<fwereade_> rogpeppe, so, re RelatedGroup, this is IMO a useful concept that we didn't previously have a name for
<fwereade_> rogpeppe, it's really the set of units that can (transitively) affect one another within a relation
<fwereade_> rogpeppe, in a global relation it's just all units
<fwereade_> rogpeppe, in a container-scoped one there's a RelatedGroup per container
<rogpeppe> fwereade_: so what kind of operations are you imagining on a RelatedGroup?
<fwereade_> rogpeppe, .presencePath() and .settingsPath() mainly
<rogpeppe> fwereade_: is there anywhere in the current code that would be changed to use it?
<fwereade_> rogpeppe, also perhaps .prepareJoin(role), which ensures that nodes have been created
<fwereade_> rogpeppe, relationUnitWatcher would directly take presencePath and settingsPath which had been calculated from a RelatedGroup
<fwereade_> rogpeppe, and finally, maybe, .Watch(...)
<fwereade_> rogpeppe, but it is true that I have not yet figured out exactly what data the various bits of behaviour are best attached to
<rogpeppe> fwereade_: just so i have an idea, what would the type definition look like?
<fwereade_> rogpeppe, it really just wraps a path
<fwereade_> rogpeppe, that's it
<rogpeppe> fwereade_: type RelatedGroup string ?
<fwereade_> rogpeppe, it'll need a zk.Conn, or State, or something too
<fwereade_> rogpeppe, this is what I have atm:
<fwereade_> // RelatedGroup represents the set of units within a specific relation that
<fwereade_> // can (transitively) affect one another. For a globally-scoped relation, this
<fwereade_> // includes all units of all services in the relation; for a container-scoped
<fwereade_> // relation, there will be one RelatedGroup per container, containing all
<fwereade_> // units of the related services that are running within that container.
<fwereade_> type RelatedGroup struct {
<fwereade_>     zk   *zookeeper.Conn
<fwereade_>     path string
<fwereade_> }
<fwereade_> rogpeppe, it's not a lot of data, but it draws together a lot of special-casing that would otherwise be smeared across the codebase
<fwereade_> rogpeppe, there are several places in python that act differently depending on relation scope
<rogpeppe> fwereade_: could you point to a few examples in the python code, so i have an idea?
<fwereade_> rogpeppe, just a sec
<fwereade_> rogpeppe, oddly enough, mainly in state/relation.py -- there are several `if self._relation_scope == "container"`s
<fwereade_> rogpeppe, there's a _self.container_path which seems kinda redundant but is also related
<fwereade_> rogpeppe, basically everything involving relation unit presence or settings, whether reading or writing, needs to care about this concept (implicitly or explicitly)
<rogpeppe> fwereade_: i think i need to be reminded of how relations are laid out in zk
<fwereade_> rogpeppe, I have vocab that will  make things easier :)
<fwereade_> rogpeppe, a "group" has (1) a settings subnode, which contains a unit-key-keyed node for every unit in the relation
<fwereade_> rogpeppe, (2) a role subnode for each role in play in the relation, which contains a unit-key-keyed presence node for every unit actively playing that role in the relation
<fwereade_> rogpeppe, sorry, in the *group*
<fwereade_> rogpeppe, a globally scoped relation contains just one group, which is the relation node in ZK
<fwereade_> rogpeppe, a container-scoped relation has N container-key-keyed groups within the relation node
<fwereade_> rogpeppe, the group is the unit of... er, influence?
<fwereade_> rogpeppe, I am still slightly scrabbling around for vocabulary; let me know if you come up with improvements ;)
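The `.settingsPath()` and `.presencePath()` helpers fwereade mentions could be sketched against the layout just described: the group wraps either `/relations/<rel-id>` (global scope) or `/relations/<rel-id>/<container-id>` (container scope), with a `settings` subnode and one subnode per role. The exact path shapes and key formats are assumptions reconstructed from the chat, not the merged code.

```go
package main

import (
	"fmt"
	"path"
)

// RelatedGroup wraps the ZK path of one group; all the scope-dependent
// path construction hangs off it instead of being hand-hacked at each
// call site.
type RelatedGroup struct {
	path string // "/relations/<rel-id>" or "/relations/<rel-id>/<container-id>"
}

// settingsPath returns the node holding one unit's relation settings
// within this group.
func (g *RelatedGroup) settingsPath(unitKey string) string {
	return path.Join(g.path, "settings", unitKey)
}

// presencePath returns the presence node for a unit actively playing
// the given role ("provider", "requirer", "peer") within this group.
func (g *RelatedGroup) presencePath(role, unitKey string) string {
	return path.Join(g.path, role, unitKey)
}

func main() {
	global := &RelatedGroup{path: "/relations/relation-0000000001"}
	fmt.Println(global.settingsPath("unit-0000000007"))
	fmt.Println(global.presencePath("provider", "unit-0000000007"))

	// A container-scoped relation has one group per container.
	scoped := &RelatedGroup{path: "/relations/relation-0000000002/container-3"}
	fmt.Println(scoped.presencePath("requirer", "unit-0000000009"))
}
```

Because both scopes reduce to "a path plus a connection", everything downstream (watchers, join preparation) can take a group and stay ignorant of whether the relation is container-scoped.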
<rogpeppe> fwereade_: how about RelationScope ?
<fwereade_> rogpeppe, yeah, except the name is taken and I don't know what to call the thing that already has that name
<fwereade_> rogpeppe, and all the existing methods called RelationScope
<rogpeppe> fwereade_: well, we're actually talking about the same thing, right?
<fwereade_> rogpeppe, not really, RelationScope is just "container" or "global"
<fwereade_> rogpeppe, the thing we're talking about is about what units are actually within a given scope of a given relation
<rogpeppe> fwereade_: aren't those the two kinds of things you can have in a RelatedGroup?
<rogpeppe> fwereade_: it seems to me that the path in RelatedGroup is exactly expressing the scope of the relation
<fwereade_> rogpeppe, I strongly agree that RelatedGroup is, intuitively, a "scope"
<rogpeppe> fwereade_: so, can we think of a way to join the two concepts?
<fwereade_> rogpeppe, but I feel somewhat constrained naming-wise
<fwereade_> rogpeppe, I hope so :)
<fwereade_> rogpeppe, one possibility is to rename RelationScope to RelationScopeKind or something
<rogpeppe> fwereade_: vague thought: i wonder if a relation could return a RelationScope which looks like your RelatedGroup, but has a Kind method which returns what is currently RelationScope
<rogpeppe> ha!
<fwereade_> rogpeppe, then I can
<rogpeppe> fwereade_: jinx, kinda
<fwereade_> rogpeppe, the trouble is that I cannot necessarily  just go from a relation to a relation scope, without knowing the container I'm talking about (in the case of a container-scoped relation, anyway)
<fwereade_> rogpeppe, also there's no such type as a Relation ATM
<fwereade_> rogpeppe, I'll add one if it seems worthwhile ofc ;)
<rogpeppe> fwereade_: there is actually
<rogpeppe> fwereade_: see relation.go:/^type Relation
<fwereade_> rogpeppe, the way I currently have to get to a group is RelatedService.group(u *Unit)
<fwereade_> rogpeppe, I deleted it
<rogpeppe> fwereade_: ah.
<fwereade_> rogpeppe, it's entirely useless
<fwereade_> rogpeppe, anyway, can we agree that I'm not *obviously* high, yet, although there certainly are naming issues to be sorted out here?
<rogpeppe> fwereade_: i definitely think there's something to what you're saying, but i don't think it's just naming that's at issue.
<fwereade_> rogpeppe, because the next step is to point out that a RelatedUnitsWatcher could be, in terms of vocabulary we've been discussing, a GroupRoleWatcher
<fwereade_> rogpeppe, ah, ok, let's back up then
<rogpeppe> fwereade_: or rather, naming has semantic implications too
<fwereade_> rogpeppe, oh hell yes
<fwereade_> rogpeppe, I'm kinda hoping that naming fixes will fall out of the discussion
<rogpeppe> fwereade_: not sure about GroupRoleWatcher - that sounds like it's watching for changes in group roles
<fwereade_> rogpeppe, for example with the scope idea that becomes a ScopeRoleWatcher, which feels much closer to what it's really doing
<rogpeppe> fwereade_: but it is actually watching for related units, no?
<fwereade_> rogpeppe, yeah, it could indeed just be called a RelatedUnitsWatcher
<fwereade_> rogpeppe, but it feels slightly important to make it clear that it's scoped
<rogpeppe> fwereade_: just that RelatedUnitsWatcher watches for related units within a particular scope, no?
<fwereade_> rogpeppe, yeah, it's also reasonable to just ignore that entirely -- pass in a group/scope and not mention it at all in the type name
<fwereade_> rogpeppe, probably a better idea all things considered
<rogpeppe> fwereade_: if we create the RelatedUnitsWatcher from a RelationScope, that would be obvious, perhaps
<fwereade_> rogpeppe, yeah
<fwereade_> rogpeppe, ok stepping back again a mo
<rogpeppe> fwereade_: type ScopeKind string?
<fwereade_> rogpeppe, yeah, +1
<fwereade_> rogpeppe, a unit agent has a unit
<fwereade_> rogpeppe, it can find out its service, and from that get some RelatedServices
<fwereade_> rogpeppe, given a unit and a RelatedService we can figure out the Scope
<rogpeppe> func (r *ServiceRelation) Scope(unit *Unit) RelationScope
<fwereade_> rogpeppe, a Scope is useful both for watching related units and for signalling the unit agent's presence within the relation
<fwereade_> rogpeppe, yeah
<rogpeppe> for globally scoped relations, unit could be nil
<fwereade_> rogpeppe, doesn't matter I think
<rogpeppe> fwereade_: that's true.
<rogpeppe> i think
<fwereade_> rogpeppe, I think the question we're asking is "what scope will this unit be in if it joins, or please give me an error if the unit can't join"
<fwereade_> and finally...
<fwereade_> rogpeppe, watching related units, and signalling one's presence in the relation, are profoundly intimately bound up
<fwereade_> rogpeppe, such that we don't want a unit agent ever doing just one of those things
<fwereade_> rogpeppe, by signalling presence, it's advertising that it's also watching and responding to the related units
<rogpeppe> sorry, parcel just arrived
<fwereade_> rogpeppe, and I *think* the right high-level thing for this is `func (u *Unit) AgentJoin(s *RelatedService) (*SOMETHING, error)`
<rogpeppe> back now though
<fwereade_> rogpeppe, where that SOMETHING is... I dunno, I've been calling it a JoinedService, but that's obviously wrong
<fwereade_> rogpeppe, JoinedScope might be better
<fwereade_> rogpeppe, (its purpose is (1) to provide access to a RelatedUnitsWatcher's Changes channel, and (2) to maintain a pinger on the appropriate path so that SOMETHINGs on other unit agents detect it)
<fwereade_> rogpeppe, and if either of those tasks fail both should fail
<rogpeppe> func (u *Unit) AgentJoin(s *RelatedService) (*presence.Pinger, *someWatcherType, error) ?
<fwereade_> rogpeppe, -1
<fwereade_> rogpeppe, if one of those things dies the other should too
<fwereade_> rogpeppe, if anything that type is a RelationUnit
<fwereade_> rogpeppe, but it's something whose mere *existence* has broad impact on other processes
<fwereade_> rogpeppe, so I'm very reluctant to use that name ;)
<fwereade_> rogpeppe, ScopePresence? JoinedRelation? JoinedScope? JoinedService?
<fwereade_> rogpeppe, Joined is IMO a good word
<rogpeppe> fwereade_: i don't mind if we let the unit agent do some work, rather than pushing everything into a state type
<fwereade_> rogpeppe, the unit agent will be doing *plenty* of work
<rogpeppe> fwereade_: it's very easy for the unit agent to do: defer pinger.Stop()
<fwereade_> rogpeppe, and to react to its death as well?
<fwereade_> rogpeppe, to clean up one if the other dies, and vice versa?
<fwereade_> rogpeppe, sounds to me like a job for a type honestly
<rogpeppe> fwereade_: yeah, we'll already have a loop with a select in
<fwereade_> rogpeppe, and it will also be simple to express Depart() and Abandon()?
<rogpeppe> fwereade_: but i'm sure a type could work well too
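fwereade_'s argument here — that the presence pinger and the related-units watcher should live and die together — is the kind of thing a small type can enforce. A minimal sketch, assuming a hypothetical `JoinedScope` name (still under debate above) and a stand-in `stopper` interface in place of the real pinger and watcher types:

```go
package main

import (
	"fmt"
	"sync"
)

// stopper is satisfied by both the presence pinger and the related-units
// watcher (a hypothetical interface for this sketch).
type stopper interface {
	Stop() error
}

// JoinedScope couples the two tasks so a unit agent can never run just one
// of them: stopping it stops both, and the first error wins.
type JoinedScope struct {
	once  sync.Once
	parts []stopper
	err   error
}

func NewJoinedScope(parts ...stopper) *JoinedScope {
	return &JoinedScope{parts: parts}
}

// Stop tears down every part; it is safe to call more than once.
func (j *JoinedScope) Stop() error {
	j.once.Do(func() {
		for _, p := range j.parts {
			if e := p.Stop(); e != nil && j.err == nil {
				j.err = e
			}
		}
	})
	return j.err
}

// fakePart stands in for a pinger or watcher in this sketch.
type fakePart struct{ stopped bool }

func (f *fakePart) Stop() error { f.stopped = true; return nil }

func main() {
	pinger, watcher := &fakePart{}, &fakePart{}
	j := NewJoinedScope(pinger, watcher)
	j.Stop()
	fmt.Println(pinger.stopped, watcher.stopped) // both stop together
}
```

The point of the type, as fwereade_ says, is that the cleanup of one on the death of the other lives in one place rather than being re-derived in every unit agent loop.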
<fwereade_> rogpeppe, ok, I think I'll go with ScopeKind string and RelationScope struct {*zk.Conn, string}
<fwereade_> rogpeppe, see what falls out of that
<rogpeppe> fwereade_: i think that sounds good.
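The naming just agreed — `ScopeKind string` plus a `RelationScope` struct — might look roughly like the sketch below. The `*zk.Conn` field is replaced by a bare path string so the example stays self-contained, and the constant values are assumptions from the "global"/"container" discussion above:

```go
package main

import "fmt"

// ScopeKind is the renamed RelationScope enum: "global" or "container"
// (values assumed from the discussion).
type ScopeKind string

const (
	ScopeGlobal    ScopeKind = "global"
	ScopeContainer ScopeKind = "container"
)

// RelationScope is the concrete container of related units' settings and
// presence nodes. The real struct would hold a *zk.Conn alongside the
// path; a plain string stands in here.
type RelationScope struct {
	path string
	kind ScopeKind
}

// Kind reports whether the scope is global or per-container.
func (s *RelationScope) Kind() ScopeKind { return s.kind }

// Path is the store path under which the scope's nodes live.
func (s *RelationScope) Path() string { return s.path }

func main() {
	s := &RelationScope{path: "/relations/relation-0000000000", kind: ScopeGlobal}
	fmt.Println(s.Kind(), s.Path())
}
```

This keeps rogpeppe's earlier suggestion intact: the scope is the concrete thing, and its Kind method returns what used to be called RelationScope.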
<fwereade_> rogpeppe, TheMue: since I'm making this change, I wonder how you feel about s/RelationRole/RoleKind/ throughout as well
<fwereade_> rogpeppe, TheMue: for consistency's sake if nothing else
<rogpeppe> fwereade_: that sounds good to me too
<fwereade_> rogpeppe, TheMue: (also, for methods/fields currently called RelationRole/RelationScope, name them Role and Scope unless that causes ambiguity?)
<rogpeppe> fwereade_: and i'm thinking that the methods on ServiceRelation don't really benefit from having "Relation" prefixes
<rogpeppe> jinx
<fwereade_> rogpeppe, RelationName kinda does
<rogpeppe> fwereade_: i don't mind ServiceRelation.Name
<fwereade_> rogpeppe, but i want to call that type RelatedService :)
<rogpeppe> ah
<fwereade_> rogpeppe, which IMO has more appropriate connotations
<fwereade_> rogpeppe, but then, hmm.
<rogpeppe> fwereade_: in which case, yeah, RelationName is better
<fwereade_> rogpeppe, similarly, in that case, RelationScope
<fwereade_> rogpeppe, but Role is something that really is on the service not the relation so that should change
<fwereade_> rogpeppe, it's more things like
<rogpeppe> fwereade_: i don't mind Scope on RelatedService actually
<rogpeppe> fwereade_: because services don't have scopes
<fwereade_> rogpeppe, true, and the scope is shared by both ends of the relation
<rogpeppe> fwereade_: so it's obvious that it's about the Related bit of the name
<fwereade_> rogpeppe, cool
<rogpeppe> fwereade_: i vote for a small CL cleaning up these issues, then another one renaming RelationScope to ScopeKind
<fwereade_> rogpeppe, yeah, that sounds sensible
<rogpeppe> fwereade_: as they're orthogonal and the latter is more controversial
<fwereade_> rogpeppe, I'll think on it a touch more though
<rogpeppe> k
<fwereade_> rogpeppe, most of this morning's conversation is *really* about me figuring what order I can make CLs in to maximise my chances of getting all this stuff in ;)
<rogpeppe> fwereade_: ah, but you've now changed *what* you're gonna put in!
<rogpeppe> fwereade_: and i, for one, think it's a really nice step forward
<fwereade_> rogpeppe, the types haven't changed, I think, just the names :)
<TheMue> fwereade_: You know the current Py code best. If there's no conflict with Role and Scope I'm fine with it.
<fwereade_> rogpeppe, (from what I was originally intending, that is; it's definitely changed from the py in some ways ;))
<rogpeppe> fwereade_: having ScopeKind hanging off RelationScope is new, i think
<rogpeppe> fwereade_: yeah. but the names make so much difference. when it was "RelatedGroup" i was, like, "where are the things in the group?". but now it's a "RelationScope", i see it as a container and it makes sense to me.
<fwereade_> rogpeppe, yeah, tyvm for helping out with that
<rogpeppe> fwereade_: even though the underlying representation might be identical
<fwereade_> rogpeppe, that was what I was looking for, and what I got :)
<rogpeppe> fwereade_: np. i would never have got as far as you did!
<fwereade_> rogpeppe, I'm lucky to have undergone the baptism by fire that was the restartable-unit-agent work in the python
<fwereade_> rogpeppe, that was enough that figuring out the real details here was.. possible
<rogpeppe> fwereade_: just had a silly naming idea
<fwereade_> rogpeppe, oo, go on
<rogpeppe> fwereade_: so we've got these "scopes" right? so when we've got a language implementation, what do we call the actual implementation of scopes? "frames".
<rogpeppe> fwereade_: sadly it doesn't really work
<fwereade_> rogpeppe, I don't think it quite does :(
<fwereade_> rogpeppe, I could call them continuations I suppose, if I really want someone to stab me
<rogpeppe> fwereade_: actually, maybe just RelationSettings might be appropriate
<fwereade_> rogpeppe, -1
<rogpeppe> fwereade_: as it's the settings that are scoped
<fwereade_> rogpeppe, (1) they're presence as well (2) the only thing that should actually be reading the settings is the RelatedUnitsWatcher anyway
<rogpeppe> fwereade_: yeah
<rogpeppe> fwereade_: i'm just slightly concerned that the word "scope" implies something abstract rather than concrete
<Aram> moin
<TheMue> Aram: Moin. Hehe, you adopted the usual greeting here in Northern Germany.
<TheMue> Lunchtime
<rogpeppe> fwereade_: ping
<fwereade_> rogpeppe, pong
<rogpeppe> fwereade_: i wonder if you can help me thinking about a good way to test this new package: http://paste.ubuntu.com/1042221/
<rogpeppe> fwereade_: i can mock up start commands like the upstart package does
<rogpeppe> fwereade_: but i don't think it provides any useful assurance of anything
<rogpeppe> fwereade_: i'm considering a "-root" flag to the test, for tests that need to run as root.
<rogpeppe> fwereade_: then we can test it for real.
<fwereade_> rogpeppe, yeah, it's tricky; in the python we had exactly that
<rogpeppe> fwereade_: a "-root" flag?
<fwereade_> rogpeppe, and it always felt a little icky but I never saw any other way to really verify it
<fwereade_> rogpeppe, well, a USE_SUDO env var
<fwereade_> rogpeppe, but basically yes
<rogpeppe> fwereade_: it's particularly an issue for the LXC-manipulating code
<fwereade_> rogpeppe, yeah, that's the main place we had it for
<rogpeppe> fwereade_: ok, that's useful, thanks
<fwereade_> rogpeppe, I think I had it for the upstart stuff in the local provider as well
<rogpeppe> fwereade_: i think i'll go for a "-root" flag and require that the user id be root if it's set
<fwereade_> rogpeppe, if you're going to require that, can't you just check for root and explicitly skip the tests that require it if not?
<rogpeppe> fwereade_: i thought of that, but i'm not sure i want the tests manipulating my global state without me explicitly asking them to do so
<fwereade_> rogpeppe, IMO by running them as root you're explicitly asking them to do so
<rogpeppe> fwereade_: sometimes i'm running a root shell without being aware of it...
<rogpeppe> fwereade_: but maybe it's ok
<fwereade_> rogpeppe, ah, fair enough
<rogpeppe> fwereade_: and it means that i could do sudo go test ./...
<fwereade_> rogpeppe, I *think* I've broken myself of that habit even when I really want to
 * rogpeppe hates sudo
<rogpeppe> rather, i hate the requirement that so many things run as root
<fwereade_> rogpeppe, ha, yeah
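Combining the two positions above — rogpeppe's explicit "-root" flag plus fwereade_'s skip-unless-root check — could look like this sketch; the flag name and helper are assumptions for illustration, not real juju code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// rootTests guards tests that manipulate global state (LXC, upstart).
// Requiring both the flag and a real root euid means neither a stray
// `sudo go test` nor the flag alone can touch the system by accident.
var rootTests = flag.Bool("root", false, "run tests that require root")

// canRunRootTests reports whether root-only tests may run, and why not
// otherwise, so callers can skip with a useful message.
func canRunRootTests() (ok bool, reason string) {
	if !*rootTests {
		return false, "-root flag not set"
	}
	if os.Geteuid() != 0 {
		return false, "-root set but not running as root"
	}
	return true, ""
}

func main() {
	flag.Parse()
	if ok, why := canRunRootTests(); !ok {
		fmt.Println("skipping root tests:", why)
		return
	}
	fmt.Println("running root tests")
}
```

Requiring both signals addresses rogpeppe's "root shell without being aware of it" worry while still allowing `sudo go test -root ./...` when the real LXC/upstart behaviour needs verifying.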
<fwereade_> rogpeppe, TheMue: I have a thought about topoRelationService
 * rogpeppe is all ears
<fwereade_> rogpeppe, at the moment a topoRelation has Services, which holds topoRelationServices keyed on service key
<fwereade_> rogpeppe, tRS has RelationName and Role fields
<fwereade_> rogpeppe, each taken from the appropriate endpoint of the appropriate service
<fwereade_> rogpeppe, I contend that even here, the relation name should be taken from the opposite endpoint
<fwereade_> rogpeppe, because, in the context of a relation, the important thing about a service is what the *other* end thinks it's called
<rogpeppe> fwereade_: what is the relation name BTW? is it a key?
<fwereade_> rogpeppe, *db*-relation-joined
<fwereade_> rogpeppe, *blog*-relation-changed
<fwereade_> rogpeppe, etc
<rogpeppe> fwereade_: ok, thought so, just checking
<fwereade_> rogpeppe, it's the name used within a charm to refer to the other end of the relation
<fwereade_> rogpeppe, so it seems stupid for a service to store (1) what it does and (2) what it thinks the other end's called
<fwereade_> rogpeppe, when it could store (1) what it does and (2) what it's called (from the perspective of the rest of the relation)
<fwereade_> rogpeppe, it's non-obvious but I'm becoming convinced that the obvious solution is crack
 * rogpeppe is looking at the code
<fwereade_> TheMue, when you return, I would also appreciate your opinion of the foregoing
<rogpeppe> fwereade_: hmm. this stuff is barely used at the moment. how do you anticipate it being used?
<rogpeppe> fwereade_: (currently there's just a single call to topology.Relation
<rogpeppe> )
<fwereade_> rogpeppe, well, I have this idea that a RelatedService is {relationKey string, serviceKey string, relater RelationEndpoint}
<rogpeppe> fwereade_: what's an "endpoint" again?
<fwereade_> rogpeppe, where "relater" is the endpoint of the service at the "local" end of the relation
<fwereade_> rogpeppe, relation name, interface, role, scope
<rogpeppe> ok
<fwereade_> rogpeppe,  oh and service name
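Writing down the endpoint tuple just listed (service name, interface, relation name, role, scope) together with the proposed RelatedService shape gives something like the sketch below; these are guesses at the structs under discussion, with plain string fields for simplicity:

```go
package main

import "fmt"

// RelationEndpoint carries one service's view of a relation, per the
// list above (field types assumed).
type RelationEndpoint struct {
	ServiceName  string
	Interface    string
	RelationName string
	Role         string
	Scope        string
}

// RelatedService pins down one remote end of one relation: the relation
// and service keys, plus the "relater" endpoint at the local end.
type RelatedService struct {
	relationKey string
	serviceKey  string
	relater     RelationEndpoint
}

// String shows the convenient, already-collected info fwereade_ argues
// for keeping on the type rather than re-deriving from the topology.
func (rs *RelatedService) String() string {
	return fmt.Sprintf("%s:%s (%s)", rs.relater.ServiceName,
		rs.relater.RelationName, rs.relater.Role)
}

func main() {
	rs := &RelatedService{
		relationKey: "relation-0000000000",
		serviceKey:  "service-0000000000",
		relater: RelationEndpoint{
			ServiceName:  "mysql",
			Interface:    "mysql",
			RelationName: "db",
			Role:         "server",
			Scope:        "global",
		},
	}
	fmt.Println(rs)
}
```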
<rogpeppe> fwereade_: am i wrong that the only thing in that endpoint tuple that can't be derived from relationKey is the role?
<fwereade_> rogpeppe, also the name
<rogpeppe> fwereade_: that comes from the relationKey and role too, doesn't it
<fwereade_> rogpeppe, which needs to come from the topoRelationService, and which I contend is the wrong way round at the moment
<fwereade_> rogpeppe, but, yes; it's pretty trivial to construct the RelatedService above, given a topology and a relation key
<rogpeppe> fwereade_: i'm thinking that a RelatedService is more {relationKey string, role RelationRole}
<fwereade_> rogpeppe, sure, it could be, but the fields I described are the ones that are (1) convenient to access when constructing and (2) used subsequently
<rogpeppe> fwereade_: i'm trying to think fundamentals, not optimisation.
<fwereade_> rogpeppe, if we only stored what you suggest it would be doable but a bloody hassle
<rogpeppe> fwereade_: really?
<rogpeppe> fwereade_: well, of course it'd need to store the topology too
<fwereade_> rogpeppe, rooting around in the topology, looping through relation Services dicts trying to find the data it's looking for
<fwereade_> rogpeppe, I'd really much rather collect all the relevant info than the bare minimum necessary to infer that info
<fwereade_> rogpeppe, (otherwise why have types at all? :p)
<fwereade_> rogpeppe, this would be problematic if relations could change after being created, but the fact of a relation and its endpoints is not something that changes over the lifetime of that relation
<rogpeppe> fwereade_: i guess i was trying to get to the bottom of what is *necessary* for a RelatedService
<fwereade_> rogpeppe, it could equally be relationKey, serviceKey
<fwereade_> rogpeppe, we can infer role from service+relation just as we can infer service from relation+role
<rogpeppe> fwereade_: ah, that seems more logical, given the name
<rogpeppe> fwereade_: ah, except that relationKey isn't visible outside state
<fwereade_> rogpeppe, we don't have to expose it
<fwereade_> ;)
<fwereade_> rogpeppe, I don't think we really want to expose much on a RelatedService at all tbh
<rogpeppe> fwereade_: what do we get from RelatedService that we don't get from Scope?
<fwereade_> rogpeppe, I certainly don't need anything public *yet*
<fwereade_> rogpeppe, scope covers both ends
<fwereade_> rogpeppe, relatedservice is about one end only
<rogpeppe> fwereade_: i'm wondering about: func (svc *Service) Relation(u *Unit) *Scope
<rogpeppe> ah
<fwereade_> rogpeppe, why would I care about that?
<fwereade_> rogpeppe, the only actual use case is for the unit agent to loop over the related services and respond to changes in each
<fwereade_> rogpeppe, for that I just want a list of RelatedServices, each of which describes what I need to know to join a relation with it
 * rogpeppe is starting to see
 * fwereade_ is hugely relieved
<fwereade_> rogpeppe, I've been a bit nervous about all this -- it is definitely different to the python, but I think it is a projection of the same problem into a simpler space
<rogpeppe> :-)
<fwereade_> rogpeppe, and it's far from immediately obvious that that's the case ;)
<rogpeppe> fwereade_: the zk representation is pretty much the same though, right?
<fwereade_> rogpeppe, I need to make another pass over the topology code, but yes, I think so
<fwereade_> rogpeppe, the zk representation of the actual live relations is identical
<fwereade_> rogpeppe, but I think the code representation of that representation is much nicer like this
<rogpeppe> fwereade_: dumb question: why isn't a RelatedService just a Service?
<fwereade_> rogpeppe, I think that it will lead to a much cleaner clearer data flow
<fwereade_> rogpeppe, because a service doesn't in itself have a role/relname/scope
<fwereade_> rogpeppe, it has a whole bunch of relations, only one of which we're interested in at a time
<rogpeppe> fwereade_: but you could have something like: func (svc *Service) Join(*Service) error, though, no?
<fwereade_> rogpeppe, I *think* that it's units which have to join services
<rogpeppe> fwereade_: oh yeah
<fwereade_> rogpeppe, but I may be missing context... what would that do?
<fwereade_> rogpeppe, at first glance that really looks like a different version of AddRelation(endpoints)
<fwereade_> rogpeppe, but not specific enough to do anything, because there may be more than one valid pair of endpoints between the services
<rogpeppe> fwereade_: yeah, i've realised that
<rogpeppe> fwereade_: i'm just wondering if there's some way of doing without RelatedService
<fwereade_> rogpeppe, I thought there was at one stage but the code said it wanted it
<fwereade_> rogpeppe, (I understood the code to be saying it wanted it ;))
<rogpeppe> fwereade_: so i'm just trying to think about what the unit agent is doing
<fwereade_> rogpeppe, I'm expecting pseudocode:
<fwereade_> rogpeppe, for rs in self.unit.service.related_services: self.unit.AgentJoin(rs)
<fwereade_> rogpeppe, and therefrom marshal the changes coming out of the JoinedService, or whatever we call it, into actual hook executions
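A Go rendering of the pseudocode above; `RelatedService`, `AgentJoin`, and the loop shape are all still-hypothetical names from this conversation, with stubbed bodies so only the flow is visible:

```go
package main

import "fmt"

// RelatedService names one remote end the unit agent must join
// (a stub for the type discussed above).
type RelatedService struct{ RelationName string }

type Service struct{ related []*RelatedService }

// RelatedServices lists the remote ends of every relation the service
// participates in.
func (s *Service) RelatedServices() []*RelatedService { return s.related }

type Unit struct {
	Name    string
	service *Service
	joined  []string
}

// AgentJoin would signal presence and start watching related units; the
// stub just records which relations were joined.
func (u *Unit) AgentJoin(rs *RelatedService) error {
	u.joined = append(u.joined, rs.RelationName)
	return nil
}

func main() {
	u := &Unit{
		Name: "wordpress/0",
		service: &Service{related: []*RelatedService{
			{RelationName: "db"}, {RelationName: "cache"},
		}},
	}
	// for rs in self.unit.service.related_services: self.unit.AgentJoin(rs)
	for _, rs := range u.service.RelatedServices() {
		if err := u.AgentJoin(rs); err != nil {
			panic(err)
		}
	}
	fmt.Println(u.joined)
}
```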
<rogpeppe> fwereade_: i was toying with the idea of: func (u *Unit) Join(svc1, svc2 *Service)
<fwereade_> rogpeppe, which is a separate and not entirely trivial thing
<fwereade_> rogpeppe, what are the two services for?
<fwereade_> rogpeppe, surely we can infer one of those from the unit
<rogpeppe> ah, we need a relation name too
<fwereade_> rogpeppe, IMO hence the value of RelatedService
<rogpeppe> fwereade_: func (u *Unit) Join(svc *Service, relationName string)
<fwereade_> rogpeppe, that doesn't uniquely identify the relation
<fwereade_> rogpeppe, I, mysql, might have a bunch of different "db" relations going at the same time
<rogpeppe> fwereade_: really?
<fwereade_> rogpeppe, ah but hmm maybe with service it is unique
<rogpeppe> fwereade_: i thought each relation had a unique name
<rogpeppe> fwereade_: yeah
<fwereade_> rogpeppe, nah, this was something I only recently appreciated
<fwereade_> rogpeppe, as a unit I may be participating in multiple foo relations
<rogpeppe> fwereade_: if a service provides a relation, it can be joined by more than one other service?
<fwereade_> rogpeppe, charm authors handle this by looking at RELATION_IDENT in the hook context when running foo-relation-*
<fwereade_> rogpeppe, yeah
<fwereade_> rogpeppe, but those are still multiple relations
<fwereade_> rogpeppe, (relation means different things in different contexts, unhelpfully)
<rogpeppe> fwereade_: ah, makes sense, if we want to share db connections etc
<rogpeppe> another piece of the puzzle falls into place
<fwereade_> rogpeppe, the prospect of writing a charm that handles all that right makes my head hurt
<fwereade_> rogpeppe, but we do provide all the information required to do so, I think
<fwereade_> ciggie, brb
<rogpeppe> fwereade_: k
<fwereade_> rogpeppe, b
<rogpeppe> fwereade_: but you can't have more than one of the same relation going on with a single other service, right?
<fwereade_> rogpeppe, apart from that being somewhat crazy, I don't think there's anything stopping you from doing so
<rogpeppe> fwereade_: i'm not sure you *can* do that though
<rogpeppe> fwereade_: how would you?
<fwereade_> rogpeppe, I could certainly write a charm that had 2 relations with the same interface and role
<rogpeppe> fwereade_: yeah, but they'd have different relation names
<fwereade_> rogpeppe, but on the *other* end on the charm with just one relation, those two others would have the same name
<rogpeppe> fwereade_: which is why i'm thinking that this could work ok:  func (u *Unit) Join(svc *Service, relationName string)
<rogpeppe> fwereade_: where relationName specifies the name at the other end
<fwereade_> rogpeppe, if I'm mysql, I don't care whether your relation name is "db" or "posts" or what
<rogpeppe> fwereade_: i'm not quite sure how that's relevant.
<fwereade_> rogpeppe, I, as mysql, am calling you "db", regardless
<rogpeppe> fwereade_: ok
<rogpeppe> fwereade_: and?
<fwereade_> rogpeppe, why does one end of the relation ever want to know the internal name the other end is using for the first end?
<fwereade_> rogpeppe, except to do what you suggest
<fwereade_> rogpeppe, but then how does it get that information without smooshing all the RelatedService construction stuff inline?
<rogpeppe> fwereade_: ok, i think i'm *starting* to get there.
<fwereade_> rogpeppe, IMO far easier to have (*Unit)AgentJoin(*RelatedService), which can barf quickly and easily if the unit's service name doesn't match the relater endpoint's service name
<rogpeppe> fwereade_: so you call Service.RelatedServices. then for each one of those, you can do Unit.Join(service)
<fwereade_> rogpeppe, yeah, that's the idea
<fwereade_> rogpeppe, RelatedServices is also I think just the right info for status display
<fwereade_> rogpeppe, again we want to know what the service we're looking at thinks the related services are called, not what the related services think the current service is called
<rogpeppe> fwereade_: or even RelatedService.Add(*Unit)
<fwereade_> rogpeppe, I don't think that fits, I think join is exactly the right word
<rogpeppe> ok, yeah it works
<rogpeppe> 'specially with the usual xxx-relation-joined stuff
<fwereade_> rogpeppe, exactly so
<fwereade_> rogpeppe, we end up with methods like JoinedRelation.Depart(), which will cause -departed hooks to fire at the other end
<fwereade_> rogpeppe, just as the Join causes -joined hooks to fire at the other end
<fwereade_> rogpeppe, I also have a plan to write an Abscond method (ie depart but try to keep it secret for as long as possible) ;)
<rogpeppe> lol
<rogpeppe> fwereade_: maybe "Relation" is a good spelling for "JoinedRelation"
<rogpeppe> fwereade_: given you deleted the Relation type
<fwereade_> rogpeppe, oh yes, I like that
<rogpeppe> fwereade_: cool.
<rogpeppe> fwereade_: so we're looking at something like this: http://paste.ubuntu.com/1042310/ ?
<fwereade_> rogpeppe, close
<fwereade_> rogpeppe, just a sec
<fwereade_> rogpeppe, more like http://paste.ubuntu.com/1042315/ I think
<fwereade_> rogpeppe, sorry crappy type name on Changes
<fwereade_> rogpeppe, RelatedUnitsChange probably better
<rogpeppe> fwereade_: you'll still need Service.WatchRelatedServices though, no?
<fwereade_> rogpeppe, well, yeah, I'll need to pay attention to what relations I'm meant to be part of
<fwereade_> rogpeppe, that feels pretty simple and not really related at this stage
<rogpeppe> fwereade_: yeah, so you can watch 'em.
<rogpeppe> fwereade_: but i think you'll need an Attach method or something because you need some way of making a Relation without creating the join node
<fwereade_> rogpeppe, so, yeah, I'll need a RelatedServicesWatcher
<fwereade_> rogpeppe, why?
<rogpeppe> fwereade_: or maybe not
<rogpeppe> fwereade_: yeah, if the node already exists, we do nothing, that works
<fwereade_> rogpeppe, I will probably want some way to get a list of the units involved, for status display
<rogpeppe> fwereade_: so unless i've missed something, the only change above is to add Relation.Changes ?
<rogpeppe> RelatedService.Units
<fwereade_> rogpeppe, yeah, I think so
<rogpeppe> fwereade_: perhaps ScopeUnitChange would be better spelled RelationSettingsChange
<fwereade_> rogpeppe, also includes departs
<rogpeppe> fwereade_: hmm
<fwereade_> rogpeppe, I think RelatedUnitsChange to go with RelatedUnitsWatcher
<rogpeppe> fwereade_: yeah
<rogpeppe> fwereade_: actually, given that there will be two independent watchers for the relation setting and the unit departure, maybe it makes sense to have two channels
<rogpeppe> fwereade_: rather than trying to stuff 'em both into the same type
<fwereade_> rogpeppe, under the hood, there may well be
<fwereade_> rogpeppe, I'm expecting RelatedUnitsChange to be {Updates map[string]string, Deletes []string}
<fwereade_> rogpeppe, which I *think* is exactly the format we want for a hook scheduler
<fwereade_> rogpeppe, where updates is map[unitKey]settingsData
<fwereade_> rogpeppe, there's an interesting debate to be had on the precise type to use for the settings data
<fwereade_> rogpeppe, but for today it's a derail ;)
<rogpeppe> fwereade_: ConfigNode
<fwereade_> rogpeppe, hell no
<rogpeppe> mebbe
<rogpeppe> no? ok
<fwereade_> rogpeppe, we definitely don't want any other bastard writing to our settings ;)
<rogpeppe> fwereade_: we're in control here, remember?
<rogpeppe> fwereade_: convention is ok
<fwereade_> rogpeppe, another reason: we don't want to see zookeeper state
<fwereade_> rogpeppe, we want to see the state as it was at the time the change was detected
<rogpeppe> fwereade_: why would we? ConfigNode has a cache
<rogpeppe> fwereade_: but yeah, why not pass along the map[string]interface{}
<fwereade_> rogpeppe, so, plausibly, map[string]map[string]interface{}
<rogpeppe> fwereade_: yup
<rogpeppe> fwereade_: so where did you want to put the departed watch info?
<fwereade_> rogpeppe, departs will be in Deletes
<fwereade_> rogpeppe, which should maybe even be called Departs because they do map cleanly onto hook executions, in a way that updates don't
<rogpeppe> {Updates map[*Unit] string; Deletes []*Unit} ?
<rogpeppe> oops
<rogpeppe> i mean
<rogpeppe> map[*Unit]map[string]interface{}
<rogpeppe> except units don't live well in maps, i suppose
<fwereade_> rogpeppe, I'm not really sure why we need the *unit, especially since it might plausibly not exist by the time we try to look at it
<fwereade_> rogpeppe, unit names only I think
<rogpeppe> fwereade_: that's true of MachineUnitsWatcher too though
<fwereade_> rogpeppe, but probably the client of MachineUnitsWatcher actually wants to do something with the *Units though
<rogpeppe> fwereade_: and the unit agent doesn't?
<fwereade_> rogpeppe, I don't think so
<fwereade_> rogpeppe, again it's trying to present (to the hooks it runs) an immutable facade representing the state at the time it was known to have changed
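The change type fwereade_ describes — unit names only, with settings snapshotted at the moment the change was detected — might look like this. The Updates/Departs field names follow the discussion above; everything else is an assumption:

```go
package main

import "fmt"

// RelatedUnitsChange reports one batch of changes within a relation
// scope: settings snapshots keyed by unit name, plus the unit names
// that departed. Departures map cleanly onto -departed hook executions,
// in a way that updates don't.
type RelatedUnitsChange struct {
	Updates map[string]map[string]interface{}
	Departs []string
}

func main() {
	ch := RelatedUnitsChange{
		Updates: map[string]map[string]interface{}{
			"mysql/0": {"host": "10.0.0.1", "user": "wp"},
		},
		Departs: []string{"mysql/1"},
	}
	fmt.Println(len(ch.Updates), ch.Departs)
}
```

Because the maps hold plain names and copied settings rather than *Unit values, the consumer sees an immutable snapshot even if the unit has vanished by the time the hook runs.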
<rogpeppe> fwereade_: BTW a little diversionary sketch i did early this morning: http://paste.ubuntu.com/1042285/
<fwereade_> rogpeppe, I can't think of any Unit methods I'd want or need to call
<rogpeppe> fwereade_: ok, that's cool
<rogpeppe> fwereade_: so the strings would be unit names?
<fwereade_> rogpeppe, yeah, that's what the hooks expect to work with
<fwereade_> rogpeppe, I'm sure I have something kinda similar to that lying around somewhere :)
<rogpeppe> fwereade_: i think it should be quite quick to write.
<fwereade_> rogpeppe, yeah, I think I already wrote most of it a while ago, then got stymied by the lack of relations :)
<rogpeppe> fwereade_: the idea is that in every hook you put '#!/bin/sh\ngohook $0'
<rogpeppe> fwereade_: how do you mean?
<fwereade_> rogpeppe, ok, I suspect what you wrote doesn't do what I thought it did
<fwereade_> rogpeppe, explain please?
<fwereade_> rogpeppe, (I'm not so sure about that HookKind business either)
<rogpeppe> fwereade_: so you've got one go server that runs; the hooks talk to it with rpc.
<fwereade_> rogpeppe, yes
<rogpeppe> fwereade_: when a hook executes, Wait returns a context, which you can then use to do the usual hooky things, until you close it, which terminates the hook execution
<fwereade_> rogpeppe, ah, yes, I misread the stuff at the bottom
<rogpeppe> fwereade_: it's a kind of inversion of the usual callback-driven control flow, but makes sense in the go world i think
<fwereade_> rogpeppe, I am in general suspicious of the idea that we should let people run hooks whenever they want to
<rogpeppe> fwereade_: they can't
<rogpeppe> fwereade_: you can only get a context when a hook is run by juju
<fwereade_> rogpeppe, ok, sorry, who's calling RunHook?
<rogpeppe> fwereade_: in every hook you want to register, you have a shell script that does, as above:
<rogpeppe> #!/bin/shj
<rogpeppe> gohook $0
<rogpeppe> s/shj/sh/
<rogpeppe> otherwise there's no way of knowing what hooks you want to register with juju
<fwereade_> rogpeppe, isn't that hugely complicated compared to "if the file exists, it's a hook"?
<rogpeppe> fwereade_: it uses that mechanism
<rogpeppe> fwereade_: but all the hooks get actually executed in the same go program context.
<fwereade_> rogpeppe, still not seeing what we get out of that
<rogpeppe> fwereade_: i *think* you could write pretty charms with it.
<rogpeppe> fwereade_: 'cos i'm not a big fan of callbacks
<rogpeppe> fwereade_: but yeah it's just an idea
<fwereade_> rogpeppe, oh, I think I see
<fwereade_> rogpeppe, yeah, that is interesting :)
<fwereade_> rogpeppe, maybe one of those ones to do in our copious free time though ;p
<rogpeppe> fwereade_: yeah, definitely. all that copious free time.
<rogpeppe> fwereade_: i suspect it'd only be a hundred or so lines of code though.
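rogpeppe's inversion — every hook is a `gohook $0` shell trampoline that rpc's into one long-running Go process, where Wait returns a context the charm code later closes — can be sketched with channels standing in for the rpc plumbing. All names here come from the pasted sketch and are hypothetical:

```go
package main

import "fmt"

// hookContext is what Wait hands back when juju runs a hook; the charm
// code does its hooky things, then Close ends the hook execution.
type hookContext struct {
	Name string
	done chan struct{}
}

// Close terminates the hook execution, unblocking the trampoline.
func (c *hookContext) Close() { close(c.done) }

// server is the single long-running process every `gohook $0`
// trampoline talks to; channels stand in for the rpc layer here.
type server struct{ hooks chan *hookContext }

// Wait blocks until juju executes a hook, inverting the usual
// callback-driven control flow.
func (s *server) Wait() *hookContext { return <-s.hooks }

// runHook is what an incoming rpc call from a trampoline would invoke;
// it blocks until the charm code closes the context.
func (s *server) runHook(name string) {
	ctx := &hookContext{Name: name, done: make(chan struct{})}
	s.hooks <- ctx
	<-ctx.done
}

func main() {
	s := &server{hooks: make(chan *hookContext)}
	done := make(chan struct{})
	go func() { s.runHook("db-relation-joined"); close(done) }()

	ctx := s.Wait() // charm code resumes here when the hook fires
	fmt.Println("handling", ctx.Name)
	ctx.Close()
	<-done
}
```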
<fwereade_> niemeyer, heyhey
<TheMue> niemeyer: Moin
<niemeyer> Hellos!
 * niemeyer waves
<fwereade_> TheMue, did you have any thoughts on that stuff (some way) above (by now)?
<jimbaker> fwereade_, certainly feel free to ping me with any questions on the relation id stuff
<fwereade_> TheMue, starting at round about 13:24
<TheMue> fwereade_: One moment, will read it quickly.
<fwereade_> jimbaker, cool, thanks -- I'll be getting there before too long I think :)
<jimbaker> fwereade_, also if you have any more thoughts on the merge proposal for format: 2 support that would be great
<fwereade_> TheMue, sorry, it gets pretty wide-ranging
<fwereade_> jimbaker, crap, sorry, I'll take another look sometime today
<fwereade_> TheMue, the bit I'm really concerned about is that IMO we're storing the wrong relation names in the topology
<jimbaker> fwereade_, no worries. thanks for taking a look at this, you had some great comments. i did refactor to use polymorphism so the parallel code paths are all now in one place. lot cleaner code. and a lot more tests. (almost at 2000 for the python version as a whole)
<fwereade_> jimbaker, awesome
 * niemeyer sips some great chimarrão on that shadowing Friday morning
<rogpeppe> niemeyer: hiya!
<rogpeppe> fwereade_: i still don't quite see it. is your concern that the topology isn't storing the right info, or that the info is just a bit harder to get to than it should be?
<fwereade_> rogpeppe, well, all the necessary bits are there, but in a misleading configuration
<fwereade_> rogpeppe, we're storing a service's role together with what it thinks the other service is called
<fwereade_> rogpeppe, considering it from the POV of the relation
<fwereade_> rogpeppe, we should be storing name and role together
<rogpeppe> fwereade_: "storing a service's role together with what it thinks the other service is called" currently? or as you'd like it to be?
<fwereade_> rogpeppe, that's current
<rogpeppe> fwereade_: ah, that seems odd
<fwereade_> rogpeppe, indeed so
<fwereade_> rogpeppe, it's tempting to do what we currently do because we're storing information from a single endpoint together
<TheMue> fwereade_: Just for info, I didn't forget you, but it's a lot to read. ;)
<fwereade_> TheMue, ofc, np
<rogpeppe> fwereade_: actually, i can't see anywhere that topoRelationService.RelationName is set
<fwereade_> TheMue, it's restated succinctly just above though
<fwereade_> rogpeppe, it's in State.AddRelation
<fwereade_> rogpeppe, TheMue: storing the name according to the other end is more work but (1) IMO more correct and (2) also fits better with the RelatedService ideas discussed earlier
<fwereade_> TheMue, if there's anything I can clarify please ask away :)
<rogpeppe> fwereade_: there's a one-to-one mapping between endpoints and topoRelationServices. that seems good to me.
<rogpeppe> fwereade_: i suppose the problem is i can't envisage the code which the current structure would make awkward
<fwereade_> rogpeppe, the point is much more that it's wrong than that it's awkward
<rogpeppe> fwereade_: whether it's wrong depends on how we're gonna use it, no?
<fwereade_> rogpeppe, well, the name is wrong anyway... and calling it NameByWhichWeReferToTheOtherServiceInThisRelationIfItExists is somewhat awkward
<fwereade_> rogpeppe, whereas Name, implying "what other people call us", seems rather saner to me
<fwereade_> rogpeppe, and the fact that it's stored within relation data should make the context clear IMO
<TheMue> fwereade_: I don't know if I got everything right, but as far as I can follow it sounds reasonable.
<rogpeppe> fwereade_: so the RelationName in RelationEndpoint is the name as referred to by whom?
<rogpeppe> i see the awkwardness, because every relation has two names
<rogpeppe> fwereade_: so RelationName is ambiguous, which i hadn't really appreciated before
<fwereade_> rogpeppe, the very term "relation" is ambiguous
<fwereade_> rogpeppe, do we mean "thing in the 'relations' part of charm metadata", or "connection between two services"?
<rogpeppe> fwereade_: because it represents the abstract relation as defined in the metadata, but also ...
<rogpeppe> fwereade_: yeah
<rogpeppe> fwereade_: maybe if we sort out some terminology for that, then everything will become clearer
<fwereade_> rogpeppe, perhaps, but I fear both meanings are (1) user-facing and (2) entrenched already
<rogpeppe> fwereade_: is there anywhere in the current code that we use "relation" to mean "the abstract relation as defined in the metadata" ?
<fwereade_> rogpeppe, I think that's the meaning of RelationEndpoint.RelationName
<TheMue> rogpeppe: At least in the yet ported code not.
<fwereade_> rogpeppe, an endpoint is a perfectly meaningful construct even when not participating in a relation
<rogpeppe> fwereade_: hmm. so it's more PotentialRelationEndpoint than RelationEndpoint.
<fwereade_> rogpeppe, heh, if you like :)
<fwereade_> rogpeppe, I'd say it still exists even if we don't connect to it
<rogpeppe> fwereade_: i'm trying to contrast with the other kind of endpoint, which actually exists and has an id
<rogpeppe> maybe. i'm still fuzzy!
<fwereade_> rogpeppe, sorry, what other kind of endpoint? topoRelationService?
<fwereade_> rogpeppe, I don't really see that as an endpoint
<fwereade_> rogpeppe, although I can certainly see a perspective from which that conception of it makes sense
<fwereade_> rogpeppe, no, can't be, TRSs don't have ids
<fwereade_> rogpeppe, I'm confused
<rogpeppe> fwereade_: what id is given to the hook? the relation key?
 * TheMue would like a whiteboard for the discussion. 
<fwereade_> rogpeppe, yeah, effectively
 * rogpeppe too
<fwereade_> rogpeppe, IIRC we strip unnecessary 0s
 * rogpeppe would like to see a snapshot of a fully populated zk tree
 * fwereade_ sympathises but doesn't think he has anything like that handy
<TheMue> fwereade_: Could you mail a kind of outline of how the entities, structure and topology should look like to juju-dev?
<TheMue> fwereade_: So it's simpler to think about it twice and make annotations.
<fwereade_> TheMue, this is 100% internal to the topology
<fwereade_> TheMue, wait, except that it affects ServiceRelation.relationName
<TheMue> fwereade_: Which makes it a bit simpler. ;)
<fwereade_> TheMue, but if you look at the docs for ServiceRelation it's very clear that it's "from the viewpoint of a participant service"
<fwereade_> TheMue, my point is in essence very simple: that the RelationName stored in a topoRelationService does not actually refer to that service
<fwereade_> TheMue, and that this is a Bad Thing
<rogpeppe> fwereade_, TheMue: i'm writing a quick snapshot of how i think the whole thing will look, with a single wordpress/mysql instance. feel free to edit with me: https://docs.google.com/a/canonical.com/document/d/1z2bIJ097qawOPGtZnHABIqNPVXaQ5ePCRIQVYGfcHsg/edit
<TheMue> rogpeppe: Thx *click*
<TheMue> rogpeppe: Need access, request is sent. ;)
<rogpeppe> TheMue: you should be able to access it with your canonical account
<TheMue> rogpeppe: OK, I mostly use my old private one, also for lp.
<hazmat> huh.. haven't seen this before "Your Amazon EC2 Instance scheduled for retirement "
<hazmat> email notification
<hazmat> i think that's for the jujucharms site
<rogpeppe> hazmat: poor dear. i hope it gets a pension.
<TheMue> rogpeppe: I'm in.
<hazmat> rogpeppe, this ain't no socialist cloud! ;-)
<rogpeppe> hazmat: paid for by its earnings over its lifetime, of course :-)
<fwereade_> rogpeppe, balls, I appear to have the wrong @canonical.com google password stored... and resetting it doesn't appear to have sent me anything
<fwereade_> rogpeppe, grar ok they won't send it to me
<hazmat> rogpeppe, it was unemployed for most of its life sadly, and abused by the state, and  burden to it as we ll apparently. i'm going to appeal to amnesty international for it as a political prisoner ;-)
<fwereade_> rogpeppe, any chance we could use an etherpad for now?
<rogpeppe> fwereade_: just made it available to all
<rogpeppe> fwereade_: etherpad?
<fwereade_> rogpeppe, http://pad.ubuntu.com/
<fwereade_> rogpeppe, but anyway, I'm in
<fwereade_> rogpeppe, and I want to edit where you're sitting :p
<fwereade_> rogpeppe, that's what I'd like to see in the topology
<rogpeppe> fwereade_: can we see what there is *now*.
<fwereade_> rogpeppe, done
<rogpeppe> fwereade_: as i'd like to understand where we are before moving from that...
<fwereade_> rogpeppe, can I delete everything to do with units and machines?
<fwereade_> rogpeppe, services and relations are the only bits relevant right now I think
<rogpeppe> fwereade_: i'd like to keep the whole picture, for the moment
<rogpeppe> fwereade_: Aram will find it useful apart from anything
<fwereade_> rogpeppe, ok, but I think everything outside the topology is a red herring
<fwereade_> rogpeppe, for the purposes of the is-fwereade-on-crack-re-relation-names discussion
<niemeyer> fwereade_: I've been following the conversation, but to be honest I still miss the root
<TheMue> fwereade_: Nice project name.
<niemeyer> fwereade_: Where's the etherpad?
<rogpeppe> fwereade_: i'm sure so. but for the moment, i think this is a useful diversion.
<fwereade_> niemeyer, it's at https://docs.google.com/document/d/1z2bIJ097qawOPGtZnHABIqNPVXaQ5ePCRIQVYGfcHsg/edit
<rogpeppe> fwereade_: i've wanted to see something like this for a while, just to sort it all out in my head
<fwereade_> niemeyer, no etherpad, we'd already started on docs
<niemeyer> Cool, looking
<niemeyer> Is there audio or something?
<niemeyer> What are we doing?
<niemeyer> Besides hacking a topology? :)
<niemeyer> In other words, is there a problem statement somewhere?
<fwereade_> niemeyer, all I'm trying to do is convince people that we're storing relation names in the wrong bits of the topology
<fwereade_> niemeyer, rogpeppe is I think usefully diverted
<fwereade_> niemeyer, but not in the direction I want him to be ;p
<fwereade_> niemeyer, I would state the problem as follows:
<niemeyer> fwereade_: Why? Oh, nevermind, go on please..
<rogpeppe> fwereade_: it's ok, i'm looking back at the relations now :-)
<fwereade_> niemeyer, the topoRelationService has "Role" and "RelationName" fields
<niemeyer> fwereade_: Ok
<niemeyer> fwereade_: and is keyed by the service key on the topology
<fwereade_> niemeyer, a topoRelation has "Scope", "Interface" and "Services"; Services contains a serviceKey:topoRelationService map
<niemeyer> fwereade_: Okay, seems to make sense to me
<fwereade_> niemeyer, the RelationName fields in topoRelationService do not refer to the service by which they are keyed
<niemeyer> fwereade_: What?
<fwereade_> niemeyer, I consider this to be misleading at best
<niemeyer> fwereade_: This is a bug for sure
<fwereade_> niemeyer, they refer to the
<fwereade_> niemeyer, it's a subtle one though
<niemeyer> fwereade_: It seems rather blatant to me
<fwereade_> niemeyer, mainly because "relation" means different things depending on whether we're talking about charms alone, or... actual, er, relations... connections-between-services
<niemeyer> fwereade_: If you asked me I'd say that's not the case in the current implementation..
<fwereade_> niemeyer, from the perspective of a user, we do have the same word meaning two things
<niemeyer> fwereade_: I see them as the same in both contexts
<niemeyer> fwereade_: But even then, I don't see how that's related to what we're discussing
<fwereade_> niemeyer, they are profoundly related
<fwereade_> niemeyer, it's the cause of the problem IMO
<niemeyer> fwereade_: I'll actually see the code right now because I still can't believe the RelationName isn't for the service
<fwereade_> niemeyer, consider a RelationEndpoint
<niemeyer> fwereade_: Okay, with you
<fwereade_> niemeyer, that has a RelationName which matches the relation defined in the charm
<niemeyer> fwereade_: Yes
<fwereade_> niemeyer, however from the perspective of a hook running in a relation, that is effectively the name of the other side of the relation
<niemeyer> fwereade_: Erm, no?
<fwereade_> niemeyer, I run db-relation-joined because I think that someone has joined my db relation
<niemeyer> fwereade_: The local relation name is always the local relation name.. the charm doesn't care about what's the remote relation name
<fwereade_> niemeyer, I am seeing
<niemeyer> fwereade_: Yes, *your* db relation.. local relation name
<niemeyer> fwereade_: Your db relation has a different high-level identification for the remote charm
<niemeyer> fwereade_: and they both don't fight about that
<fwereade_> niemeyer: ok, I am mysql, and I provide "db"; you are wordpress and you require "data" (interfaces, scopes match)
<niemeyer> fwereade_: Cool
<niemeyer> fwereade_: (by the way, I just checked the code, and seems to match my understanding)
<fwereade_> niemeyer, we currently store {mysql-key: {role: provides, name:db}, wp-key: {role: requires, name:data}}
<fwereade_> niemeyer, this representation fits nicely with the charms
<niemeyer> fwereade_: Yes, that sounds correct
<niemeyer> fwereade_: in all senses of that
<fwereade_> niemeyer, but from my perspective as "mysql" I consider myself to be related to a thing called "db"
<niemeyer> fwereade_: Aha!
<niemeyer> fwereade_: That's where the misunderstanding lies
<fwereade_> niemeyer, and I don't consider that to be *my* name at all
<niemeyer> fwereade_: But it is..
<niemeyer> fwereade_: You're not related to a thing called "db", and you have that feeling because that relation name is a poor name
<niemeyer> fwereade_: Imagine you are wordpress, and for whatever reason you support two relations with interface "mysql": "cache", and "data"
<fwereade_> niemeyer, ok
<niemeyer> fwereade_: This is the *wordpress* name for the relations..
<niemeyer> fwereade_: Who is "my cache"?
<fwereade_> yes, and in the context of a relation that's the important thing... isn't it?
<niemeyer> fwereade_: Well, everything is important
 * TheMue listens carefully
<fwereade_> niemeyer, your "cache" is a conduit to a service somewhere running mysql
<fwereade_> niemeyer, or not?
<niemeyer> fwereade_: Yes, but it's the name you gave to it.. it's your way of calling it
<fwereade_> niemeyer, in the context of a relation, what name is important other than the name used by the other people in a relation?
<niemeyer> fwereade_: It's a local identifier for the *relation*
<niemeyer> fwereade_: Sorry, I missed your intention with that sentence
<fwereade_> niemeyer, if I'm in a relation and looking at the other side of it -- rel.Services[key-of-other-service] -- what name do I want to see?
<fwereade_> niemeyer, the name it thinks I have, or the name I think it has?
<niemeyer> fwereade_: This is bogus
<niemeyer> fwereade_: The relation that you have at hand is *a relation*, it's not on either side
<fwereade_> niemeyer, wait, if I'm participating in a relation, there's no "other side"?
<niemeyer> fwereade_: If I say relation.Service[serviceKey].RelationName, I want to see the "relation name" of the charm of serviceKey that is participating in this relation
<niemeyer> fwereade_: The *relation that you have at hand*.. that value that you mentioned as "rel"
<niemeyer> fwereade_: This has no side.. it's the storage for the relation data that all the sides will see
<rogpeppe> niemeyer: +1. that's a good way of putting it.
<fwereade_> niemeyer, if I am a unit of service-X and and I am looking at a relation between service-X and service-Y, then surely service-Y is the other side..?
<fwereade_> niemeyer, and vice versa?
<niemeyer> fwereade_: Yes, but do you understand what I'm trying to point out?  When you have a "rel", that "rel" has no side, because every single side in that relation will get the same data
<niemeyer> fwereade_: This is not "the relation for my side of the service"
<niemeyer> fwereade_: This is "the relation"
<fwereade_> niemeyer, I *think* I understand that just fine
<TheMue> fwereade_: It seems like you're looking for a convenient way to navigate. "Hey, relation, I'm X, so who is my Y."
<niemeyer> fwereade_: relation[serviceKey].RelationName has the relation name from serviceKey that is participating
<niemeyer> fwereade_: and that seems correct to me
<fwereade_> niemeyer, I will concede that that is also a valid way of looking at it ;)
<niemeyer> fwereade_: It's not just a valid way.. it's the only way to look at what we have in place
<niemeyer> fwereade_: You might want to represent it in a different way, and that's certainly possible, but I don't see a reason to change it
<niemeyer> fwereade_: Maybe you do?
<hazmat> smaddock, invite out
<fwereade_> niemeyer, I *think* I do, let me just take a moment to reorder my thoughts
<niemeyer> fwereade_: Sounds good
<niemeyer> fwereade_: As a curiosity, note that we say "juju add-relation service1:rel1 service2:rel2".. that's consistent with how we store it as well.
<fwereade_> niemeyer, ok, when I want to get the relations participated in by a specific service -- the "local" service, for the purposes of discussion -- I get a list of topoRelations out of the topology and I want to turn them into a bunch of instances that encapsulate what the local service needs to know about the various remote services
<niemeyer> fwereade_: The local service needs to know pretty much nothing about the remote relation names being used, I think
<fwereade_> niemeyer, from that perspective, it seems strange that I should get the name of the relation from my own service key, and the role it plays from its service key... but wait, dammit, I can get the name from my own key and trivially derive the role played by inverting my own role
<fwereade_> niemeyer, I'd been thinking about getting the role of the opposite service from the opposite service key
<fwereade_> niemeyer, and having to get the name from my own key, and that seemed insane
<niemeyer> fwereade_: Yeah, I think doing provider => requirer sounds fine :)
<fwereade_> niemeyer, right, I think we can safely forget I said anything :)
<fwereade_> niemeyer, sorry derail
<niemeyer> fwereade_: No worries, I actually find that kind of conversation useful to solidify the concepts we have in place
<rogpeppe> me too
<rogpeppe> and we've got this doc now as a kind of reference point: https://docs.google.com/a/canonical.com/document/d/1z2bIJ097qawOPGtZnHABIqNPVXaQ5ePCRIQVYGfcHsg/edit
<fwereade_> niemeyer, yeah, the time wasn't wasted, just wish I'd seen the other perspective some hours ago
<rogpeppe> which has been helpful for me anyway
 * TheMue appreciates it too, getting a better understanding.
<fwereade_> rogpeppe, slight update to https://codereview.appspot.com/6310046, tRS.RelationName became tRS.Name
<rogpeppe> fwereade_: sounds good
<fwereade_> everyone, I'm going for a quick walk to see whether thinking this through dislodges any more misconceptions, bbiab
<rogpeppe> fwereade_: at some point we should add some comments to the fields in the structs
<fwereade_> rogpeppe, yeah, I think I will be doing that for my next trick :)
 * rogpeppe has just realised the google docs is absolutely terrible for editing a document where indentation matters
<rogpeppe> s/the/that/
<TheMue> rogpeppe: You need a shared online vim.
<rogpeppe> TheMue: noooo!
<rogpeppe> TheMue: just normal text will do fine. i don't really want to go back to vi key commands...
<rogpeppe> TheMue: or monospaced fonts
<TheMue> rogpeppe: Hehe, even if my favorite editor is a different one I still like the good old vim.
<TheMue> rogpeppe: It's always fine for admin tasks.
<rogpeppe> TheMue: ed FTW!
<TheMue> rogpeppe: For Go (and other tasks) I like my Sublime Text. highlighting, code completion, templates, fmt on save, good build support and as an editor many nice features.
<TheMue> rogpeppe: But editors are good for endless discussions. *lol*
<rogpeppe> TheMue: i had a challenge from hazmat to try sublime text for a month... if he tries acme for a month!
<TheMue> rogpeppe: hazmat is using st too? Didn't know that.
<rogpeppe> TheMue: no, i think he's an emacs user
<rogpeppe> TheMue: but there's no way i'm using emacs :-)
<TheMue> rogpeppe: Uhh, tried it for some time, but never get warm with it.
<hazmat> TheMue, actually i'm on emacs user
<hazmat> TheMue, i'd like to use ST  more
<TheMue> hazmat: Just give it a try. ;)
<hazmat> TheMue, i have.. its nice.. the go integration looks sweet
<TheMue> hazmat: Especially when installing GoSublime.
<hazmat> TheMue, indeed.. my use of ST is hindered by my workflow which has evolved into a terminal session with many open editors split across different tmux windows, all on different branches
<hazmat> i guess that translates into multiple ST editor windows
<TheMue> hazmat: Yip.
<niemeyer> Lunch time
<rogpeppe> fwereade_, niemeyer, TheMue, Aram: gotta go soon. have a great weekend all.
<fwereade_> rogpeppe, and you, happy weekend
<rogpeppe> fwereade_: just started world war Z, BTW. will let you know how i get on.
<fwereade_> rogpeppe, cool; I've got the others of yours with me this w/e so hopefully I'll manage another :)
<rogpeppe> fwereade_: can't remember what i left you with now...
<rogpeppe> fwereade_: oh yeah, valentine's castle.
<fwereade_> rogpeppe, er, nor can I now (I already read that one, must get sequels...)
<fwereade_> rogpeppe, axiomatic
<fwereade_> rogpeppe, to hold infinity
<rogpeppe> fwereade_: oh yeah, awesome shorts
<rogpeppe> fwereade_: i think you'll enjoy both
<fwereade_> rogpeppe, you haven't led me wrong yet :)
<TheMue> rogpeppe: Bye, have a nice weekend.
<niemeyer> rogpeppe: Have a good time there man
 * TheMue leaves now too.
<niemeyer> TheMue: Have a good time too
<TheMue> niemeyer: Thx, for you, and all here, a nice weekend too.
<fwereade_> niemeyer, re: redundant-relation-prefixes -- my thinking is that Role and Scope invariably refer to aspects of a relation; but Name and Key may need disambiguation, and where they do have been left alone
<fwereade_> niemeyer, and I'm slightly concerned that we're drifting down the python path of VeryLongRedundantNomenclatureNamesThatEndUpObscuringMeaning
<fwereade_> niemeyer, invalid?
<niemeyer> fwereade_: I'm not sure this is the case on that specific instance
<niemeyer> fwereade_: I find it more awkward to have relationName string; scope RelationScope
<niemeyer> fwereade_: RelationRole is VeryFarFromSuchALongAndUNpleasantNameInMyOpinion :-)
<niemeyer> fwereade_: I'd be happy to take *all* prefixes off, but we can't, because RelationName is ambiguous with ServiceName in certain contexts
<fwereade_> niemeyer, fair enough, I'd really prefer prefixes only for disambiguation, but it's hardly worth arguing over :)
<niemeyer> fwereade_: I find consistency a relevant factor too
<niemeyer> fwereade_: this hurts my eyes somehow: { relationKey string; scope RelationScope }
<fwereade_> niemeyer, "prefixes only for disambiguation" is imo perfectly consistent, but as I say it's no big deal :)
<fwereade_> niemeyer, I do have another thought I should probably run by you before I propose it
<niemeyer> fwereade_: This is not consistent.. this is a statement
<niemeyer> fwereade_: The code is consistent or not, and in the example above it isn't
<niemeyer> fwereade_: type is { relationKey string; relationName string; scope RelationScope; role RelationRole }
<niemeyer> fwereade_: now I have a rel
<fwereade_> niemeyer, as I say, I'm happy to drop it :)
<niemeyer> fwereade_: DO I type rel.role or rel.relationRole?
<niemeyer> fwereade_: Do I type rel.key or rel.relationKey?
<niemeyer> fwereade_: That's what I mean by inconsistency.
<fwereade_> niemeyer, I have no idea, and your position doesn't make it any easier to tell
<niemeyer> fwereade_: I understand, I'm trying to provide reasoning so it doesn't feel like arbitrating a personal opinion
<fwereade_> niemeyer, or should I go and change Unit.key to Unit.unitKey throughout?
<niemeyer> fwereade_: It does.. it's always rel.relation*
<niemeyer> fwereade_: This type, specifically, is named ServiceRelation, not Relation
<niemeyer> fwereade_: If you introduce UnitRelation, then we can have the same convention
<fwereade_> niemeyer, heh, I want to call it RelatedService actually
<niemeyer> fwereade_: Okay, I'll just shut up now. :)
<fwereade_> niemeyer, I'm entirely happy following your own preferred conventions, you don't actually have to convince me they're right every time... you favouring them is a pretty solid heuristic, and if I still don't like them in a few months we can worry about it then ;)
<fwereade_> niemeyer, but if you have a moment... aside from wanting to rename ServiceRelation, which IMO doesn't say enough about what it is or why it exists
<niemeyer> fwereade_: Sounds good.. I will still try to explain why in my opinion it isn't simply arbitrary favoring, though
<fwereade_> niemeyer, that's fine -- but I think we have quite different perspectives on some things, and I can't always be convinced ;)
<niemeyer> fwereade_: in a single type having prefix or not forces people to look it over, and this is obviously bad in my favoritism mechanism
<niemeyer> fwereade_: I won't try to convince you every time, promise :
<niemeyer> :0
<niemeyer> :)
<niemeyer> fwereade_: I'll still explain, though
<fwereade_> niemeyer, don't worry, I don't think you're just being arbitrary :)
<fwereade_> niemeyer, it's worth it often enough that I'm certainly not going to complain
<niemeyer> single type with { relationKey, relationName, relationScope, relationRole } == Good
<niemeyer> single type with { relationKey, relationName, scope, role } == Bad
<fwereade_> niemeyer, the message I failed to convey above was "yeah, I can see that's a sane perspective, I'm happy to go with it"
<niemeyer> fwereade_: Sounds good, we're in sync
<niemeyer> fwereade_: Thanks for being flexible as well, by the way
<fwereade_> niemeyer, anyway, I'd quite like to introduce a RelationScope type that actually represents a specific scope
<niemeyer> fwereade_: That counts highly
<fwereade_> niemeyer, a pleasure :)
<fwereade_> niemeyer, but this introduces naming issues
<fwereade_> niemeyer, renaming RelationScope will be annoying -- something like ScopeKind could work, but then for consistency's sake I'd want to have RoleKind as well
<niemeyer> fwereade_: Hmm, can you explain further what you mean by "actually representing a specific scope" in that sense?
<fwereade_> niemeyer, the group of units within a relation that can actually affect one another
<fwereade_> niemeyer, either all of them, or the set of them that are all in the same container, depending on scope
<fwereade_> niemeyer, this concept is sprinkled through the python code and I think deserves to be promoted
<fwereade_> niemeyer, even if the type itself is just a ZK conn and a path
<fwereade_> niemeyer, so, on the basis that I don't really want to rename the Relation* types, how would you feel about UnitScope for that?
<niemeyer> fwereade_: Ah, interesting
<niemeyer> fwereade_: Can you talk me through the use case(s?) we have for that, just so I can picture it?
<fwereade_> niemeyer, most things to do with unit relations touch it in some way
<fwereade_> niemeyer, the settings paths and presence paths, which are both read and written by different unit agents, are consistent relative to the unitscope base path
<niemeyer> fwereade_: Right
<niemeyer> fwereade_: But how do we actually need to handle these details, code wise?
<fwereade_> niemeyer, and being able to pass it around saves us quite a lot of basePath + "/" + string(r.RelationRole) + "/" + u.key
<fwereade_> niemeyer, for example, with that we can pass it into a RelatedUnitsWatcher, and it becomes positively trivial for that to start a unitRelationWatcher by passing in the settings path and presence path directly
<fwereade_> niemeyer, with scope.presencePath(role, unit) and scope.settingsPath(unit)
<fwereade_> niemeyer, the lazy parent node creation can be tidied away in one place
<fwereade_> niemeyer, the *same* UnitScope we pass into the watcher can also be used to determine the paths the unit agent wants to write to
<niemeyer> fwereade_: Is this a RelationScope, rather than UnitScope?
<fwereade_> niemeyer, it's probably only half a dozen cases in total, but it lets us implement almost everything else without worrying about the concept of scope
<fwereade_> niemeyer, I don't think so, because it only becomes meaningful in the context of a specific unit
<niemeyer> fwereade_: Yeah, I can see how it maps well to the issues we might have
<fwereade_> niemeyer, in a global relation, ok, every unit on either side has the same scope
<fwereade_> niemeyer, in a container relation it's a specific path that depends on the unit
<fwereade_> niemeyer, I'm afraid I am being called to supper :)
<niemeyer> fwereade_: Right, but I'm wondering if we don't simplify code and future use cases by having a representation which is unit-agnostic, if you see what I mean
<fwereade_> niemeyer, hmm, the way I see it we construct it with a unit and never think of it again
<fwereade_> niemeyer, it's just the scope and it does what it should
<niemeyer> fwereade_: That's because I suspect you have a bit of a biased view
<niemeyer> fwereade_: Having to implement that specific use case right now
<niemeyer> fwereade_: But think of a monitor, for example
<niemeyer> fwereade_: Or a debugger
<niemeyer> fwereade_: You're not sitting on a single unit anymore, and still, the concept of a Scope still exists
<fwereade_> niemeyer, it's still a <thing> which binds together N units which (transitively) can affect one another
<niemeyer> fwereade_: Exactly, and that <thing> is a relation scope
<niemeyer> fwereade_: With N units
<fwereade_> niemeyer, that was the name I originally wanted to give it
<niemeyer> fwereade_: As I understand it, your thinking was to have a relation scope which represents N-1 units
<fwereade_> niemeyer, no: all units that can transitively affect one another
<fwereade_> niemeyer, a unit is absolutely part of its own scope
<rogpeppe> niemeyer, fwereade_: new container package for review: https://codereview.appspot.com/6304085
<niemeyer> fwereade_: Oh, ok
<rogpeppe> unfortunately i can't seem to get it off WIP
<niemeyer> fwereade_: Well, sounds like we're aligned then, and just need to bikeshed on names :)
<niemeyer> fwereade_: Supper is calling you :)
<rogpeppe> niemeyer: i have this issue: http://paste.ubuntu.com/1042741/
<fwereade_> niemeyer, cool, cheers -- happy weekend if I decide to be all social tonight :)
<niemeyer> fwereade_: Thanks, you too :)
<niemeyer> rogpeppe: Hmm
<niemeyer> rogpeppe: Is bzr info and all working successfully on /home/rog/src/go/src/launchpad.net/juju-core/juju/.bzr/cobzr/container-package
<niemeyer> ?
<rogpeppe> niemeyer: yeah
<rogpeppe> niemeyer: i did accidentally run bzr commit as root once; i suppose that might have done something bad
<niemeyer> rogpeppe: Hmm, quite possibly
<rogpeppe> niemeyer: but i did a chown -R rog.rog afterwards to try to clean things up
<niemeyer> rogpeppe: You could try fiddling, but I suggest rebranching
<niemeyer> rogpeppe: bzr branch container-package temp-branch
<niemeyer> rogpeppe: bzr branch -d container-package
<niemeyer> rogpeppe: bzr branch -m temp-branch container-package
<rogpeppe> branch -m, yeah
<rogpeppe> niemeyer: still failed: http://paste.ubuntu.com/1042771/
<niemeyer> rogpeppe: Crap
<niemeyer> rogpeppe: See if you have something on ~/.bzr.log
<niemeyer> rogpeppe: "resource not found" is an error coming out of lpad, IIRC
<niemeyer> Yeah, it is
<niemeyer> lpad is looking for something in Launchpad that does not exist
<niemeyer> rogpeppe: Can you please try running with -debug?
<niemeyer> rogpeppe: Also, can you please paste the result of "bzr info" within ~/.bzr/cobzr/container-package
<niemeyer> andrewsmedina: ping
<rogpeppe> niemeyer: last part of $HOME/.bzr.log: http://paste.ubuntu.com/1042796/
<niemeyer> rogpeppe: Ok, that's not the issue
<niemeyer> rogpeppe: bzr info?
<rogpeppe> niemeyer: http://paste.ubuntu.com/1042801/
<niemeyer> rogpeppe: That's probably the issue: /home/rog/src/go/src/launchpad.net/juju-core/juju/.bzr/cobzr/temp
<niemeyer> rogpeppe: I wonder what it was before
<niemeyer> But I'm not sure, to be hoenst
<niemeyer> rogpeppe: The output with -debug?
<rogpeppe> niemeyer: http://paste.ubuntu.com/
<rogpeppe> oops
<rogpeppe> niemeyer: http://paste.ubuntu.com/1042810/
<niemeyer> Hah
<niemeyer> Okay, that's clearly wrong
<niemeyer> Let's see where it's getting the information from
<rogpeppe> it doesn't seem to be remembering the push branch
<niemeyer> rogpeppe: bzr info within juju-core/juju?
<rogpeppe> niemeyer: at the start of that paste
<niemeyer> Very awkward.. why is it not resolving the path
<niemeyer> rogpeppe: Do you have changes in your local lbox?
<rogpeppe> niemeyer: nope
<niemeyer> rogpeppe: When I execute locally, it doesn't look up information for branches starting with "." in Launchpad
<niemeyer> rogpeppe: The code seems to back that up
<niemeyer> rogpeppe: I'm a bit puzzled about how it's doing that there
<niemeyer> Ah, I think I see
<niemeyer> Hmmm.. or maybe not
<niemeyer> rogpeppe: Here is the output of mine: http://paste.ubuntu.com/1042850/
<niemeyer> rogpeppe: Reproducing the same scenario
<niemeyer> with push --remember et al
<niemeyer> rogpeppe: I'm out of ideas really.. the only option is diving in and seeing why it's attempting to load ".bzr/..." from Launchpad
<niemeyer> rogpeppe: It shouldn't ever do that
<rogpeppe> niemeyer: oh now, this is weird. i committed a new change, then did bzr push --remember ...; and it said "No new revisions or tags to push"
<niemeyer> rogpeppe: Huh.. things aren't quite alright there
<niemeyer> rogpeppe: I'm not entirely sure of what that push --remember does with a bound branch
<niemeyer> rogpeppe: Theoretically whenever you commit, it's already being pushed onto the bound branch
<rogpeppe> niemeyer: i think i stuffed up bzr when i ran it as root :-(
<niemeyer> rogpeppe: It'd be nice to see why lbox is misbehaving, though
<niemeyer> rogpeppe: It's being misled in a way it shouldn't
<niemeyer> rogpeppe: If we find out, we can make it have a saner behavior in the future
#juju-dev 2012-06-16
<niemeyer_> Phew
<niemeyer_> Empty review queue again
<niemeyer_> Good way to start the weekend
<niemeyer_> Night all
#juju-dev 2013-06-10
<TheMue> morning
<fwereade__> TheMue, heyhey
<TheMue> fwereade__: heya
<TheMue> fwereade__: any good idea on how to test the cleaner? maybe i should export an access to the size of the cleanup collection ... *loudThink*
<fwereade__> TheMue, yeah, export_test sounds sensible to me
<TheMue> fwereade__: ok, will go this way
<TheMue> *bibber*
 * TheMue switched clothes to summer mode, but right now we have 12°, not very much
<thumper> fwereade__: hey, you around?
<fwereade__> thumper, hey dude
<thumper> fwereade__: I made a calendar appt with you, but perhaps we could do it now?
<fwereade__> thumper, sure, just 5 mins to clean up some tests please?
<thumper> sure, np
<jam> hi danilos
<jam> would you prefer a G+ hangout to mumble today?
<thumper> hi jam, danilos
<danilos> jam: hi
<jam> hey thumper, I'm surprised to see you still around
<danilos> jam: whatever works for you, generally hangouts
<jam> are you still poaching mgz from me this week ?
<danilos> thumper, hey-hey
<thumper> jam: monday nights are a convenient time to catch up
<jam> danilos: https://plus.google.com/hangouts/_/865fe063d65c84b4533952ee66bfefaec8fc18d8
<thumper> grab people at the start of the week
<jam> I added it to the calendar event
<thumper> dinner is done here
<jam> thumper: and throttle them?
<danilos> jam: cool, getting in
<thumper> jam: nothing so violent
<rogpeppe1> mornin' all, BTW
<TheRealMue> rogpeppe1: heya
<thumper> mgz_: you up?
<thumper> hi rogpeppe1
<thumper> rogpeppe1: I have a juju-core branch up for the loggo changes
<rogpeppe1> thumper: yo!
<rogpeppe1> thumper: i LGTM'd it
<thumper> cool, ta
<thumper> I'll land things tomorrow
<rogpeppe1> thumper: great
<rogpeppe1> thumper: any feedback on the loggo CL?
<thumper> rogpeppe1: it's all good, will land unchanged
<rogpeppe1> thumper: lovely, thanks a lot
<mgz_> thumper: yup
<thumper> rogpeppe1: jam also raised the point about the logger and module separateness, and I think it may well be a hangover from C++, where you wouldn't want to hand out pointers you care about
<rogpeppe1> thumper: what do you think of my other suggested changes (lose case insensitivity, always use ";" as a separator... there might've been another one too)
<thumper> because people might delete them
<rogpeppe1> ah
<rogpeppe1> thumper: i think you can probably lose the whole concept of "module"
<thumper> rogpeppe1: actually I may have missed those, but I'm ok with them
<rogpeppe1> thumper: i just mentioned them in the CL description
<thumper> rogpeppe1: yeah, I think go gives more protection there, so could be ok
<thumper> I thought the other day, when jam suggested it, that I had a good reason why it was bad
<thumper> but when I thought again, I couldn't recall
<thumper> so I'll think a bit more, and we may be able to get rid of modules... and just have loggers
<thumper> mgz_: quick hangout?
<rogpeppe1> i think that may end up nicer
<jam> thumper: I would probably say that Logger should be an Interface, rather than a concrete type. What do you think?
<rogpeppe1> jam: why would you want to do that?
<mgz_> thumper: sure
<thumper> hmm.. that is an interesting idea
<thumper> I like interfaces
 * rogpeppe1 likes interfaces too
<jam> rogpeppe1: you never want to muck with the internals anyway
<jam> and it lets you do all sorts of nice tricks in test suites, etc
<rogpeppe> jam: hmm, i'm not sure
<jam> It is a bit of "least information that you should care about" which is the interface.
<rogpeppe> jam: it's a big interface
<rogpeppe> jam: and you can already replace the logger with something test-suite specific
<rogpeppe> jam: and if it's not an interface call, the compiler can potentially work out the arguments don't escape
<jam> rogpeppe: 5 related APIs doesn't seem very wide to me, and global objects that must be concrete types can be a bit clumsy when you really want something else. However, I agree that the compile-time optimization is nice.
<rogpeppe> jam: Logger has 16 methods. i'd say that's a pretty big interface.
<rogpeppe> jam: i'd be more interested in something which gave you more than one possible logging hierarchy.
<jam> thumper: what was your thought on the Is* methods being public? Are you expecting people to do a pre-check before computing something expensive?
<rogpeppe> jam: but actually the Writer interface probably gives you everything you need for the fancy tricks in test suites
<thumper> jam: I thought it could be a possibility
<rogpeppe> thumper: i think it's reasonable to expose them, but perhaps compress them all into IsLevelEnabled(Level) bool
<jam> thumper: well, you do have GetEffectiveLevel which gives you that information, it seems a bit of "you can ask 3-different ways for the same info, because we're not sure what fits best".
<rogpeppe> jam: yeah, i tend to agree with that. given how rare it's likely to be, log.EffectiveLevel() <= level is probably good enough.
<rogpeppe> jam: BTW the compiler does currently know that the variadic argument to Logf, Errorf, etc does not escape, and saves an allocation per call accordingly
<thumper> rogpeppe: btw, if you have log.Errorf("100% failure"), it kinda messes up
<thumper> rogpeppe: as it expects params for a % thingy
<thumper> which is why the special case used to be there
<thumper> anyway, I'm off now
<rogpeppe> thumper: that's a problem with using % in any printf format string
<thumper> I'll leave you two to argue
<jam> thumper: doesn't that need to be "100%% " anyway?
<rogpeppe> jam: +1
<thumper> should have an Error one that only takes a message
<rogpeppe> thumper: govet will warn you about the above
<thumper> not touched govet
<rogpeppe> thumper: yeah. or Error(...interface{})
<thumper> does it do pets?
<rogpeppe> thumper: only pet projects
<thumper> ha
<jam> thumper: govet is the "go compiler doesn't believe in warnings only errors, so things we might want warnings on we put in another tool" :)
<rogpeppe> thumper: there's also golint now
 * thumper off to watch game of thrones
<rogpeppe> thumper: enjoy!
<jam> thumper: enjoy
<rogpeppe> it's interesting seeing what golint says about our code actually
<rogpeppe> the most frequent warning is that it prefers ID to Id.
<jam> rogpeppe: on the same premise as URL vs Url ?
<rogpeppe> jam: i guess.
<rogpeppe> jam: only i'd always reasoned the other way
<jam> though ID isn't an acronym is it?
<rogpeppe> jam: exactly
<rogpeppe> jam: but evidently in google, they've gone with ID
<rogpeppe> jam: github.com/golang/lint/golint if you want to play
<fwereade__> Makyo, ping
 * rogpeppe is going to reboot again. i'm finding raring really quite buggy.
 * TheMue is at lunch
<rogpeppe> sigh.
 * rogpeppe goes off to submit a unity bug
<mgz_> jam: standup today?
<TheMue> *facepalm*
<TheMue> Looking for an error at the right place helps a lot.
<jam> mgz_: I probably missed the standup since you and danilos should be fast, but I'm around for 1:1 whenever you are ready.
<mgz_> we're just wrapping up :)
<mgz_> mumble?
<danilos> jam: you've had your chance to join in as well :)
<jam> danilos: you were doing a g+ I guess I just missed it
<danilos> jam: yeah
 * fwereade__ lunch
<TheMue> ah, bug found
<rogpeppe> not my day today. second forced reboot of the day.
<rogpeppe> at least i only lost 30 minutes of work this time
<mattyw> it looks like under the current version of juju-core (1.11) and possibly tip $HOME isn't set when a hook is running, has anyone else seen or reported this? I couldn't see a bug for it
<rogpeppe> mattyw: hmm, interesting. was it definitely set before?
<mattyw> rogpeppe, 1.10 it seemed to be yes
<rogpeppe> mattyw: fwereade__ would be yer man for knowing about what might have changed there.
<rogpeppe> mattyw: in the meantime, i'd make sure it's reproducible and report a bug
<mattyw> rogpeppe, ok thanks, - while I have your attention: you mentioned in oakland that you'd done some stuff in SBCL yeah?
<rogpeppe> mattyw: yeah
<rogpeppe> mattyw: though my CL skills are somewhat rusty by now... :-)
<mattyw> rogpeppe, have you used it for connecting to secure websockets? I tried to get clojure connected up to the juju-core api over the weekend, the java ssl stuff seems a complete mess
<rogpeppe> mattyw: no; i played a little bit with some web server stuff, but never from client side and not involving ssl
<rogpeppe> mattyw: i'd google around. there are quite a few nice libraries out there
<rogpeppe> mattyw: you're implementing stuff in lisp?
<mattyw> rogpeppe, I have a love hate relationship
<rogpeppe> mattyw: i had fun playing with it, but it really is a baroque language. i imagine clojure is a bit less idiosyncratic though
<mattyw> rogpeppe, clojure has been my favourite, but connecting to wss sockets doesn't just work like in python and go - I get all sorts of crazy exceptions
<rogpeppe> mattyw: i think you're best off persevering with clojure - porting to CL is unlikely to be a breeze
<mattyw> rogpeppe, clojure is the sanest one I've used, but being based on the jvm is a blessing and a curse
<rogpeppe> mattyw: does clojure even do macros in a similar style to CL?
<mattyw> rogpeppe, I've only done clojure macros. I've not done CL macros so I don't know
<rogpeppe> mattyw: are they hygienic?
<mattyw> rogpeppe, I think so yes
<fwereade__> rogpeppe, mattyw: I'm not sure why that would have happened, I'm afraid, although I'm a bit suspicious of needing $HOME in charms
<mattyw> fwereade__, we've got a binary that gets called in a hook. the binary uses $HOME
<rogpeppe> mattyw: sorry, my laptop just crashed again
<rogpeppe> mattyw: last thing i saw was "i think so yes"
<rogpeppe> i think this laptop might be on its way out
<mattyw> rogpeppe, it ended there
<rogpeppe> mattyw: ah, it must've crashed just after that and i hadn't seen it
 * rogpeppe is well annoyed. just lost 500 words of detailed documentation.
<rogpeppe> (at least)
<rogpeppe> mattyw: sbcl macros aren't hygienic
<rogpeppe> aargh
<rogpeppe1> g'night all
<thumper> morning
<hatch> morning
<thumper> perhaps it is time for a second coffee...
#juju-dev 2013-06-11
<thumper> wallyworld_: ping
<wallyworld_> otp, sec
<wallyworld_> thumper: hi
<thumper> wallyworld_: hey
<thumper> wallyworld_: how are things with you today?
<wallyworld_> not too bad. doing some refactoring as per code review comments. but main issue is a failing test i have no farking idea what's wrong
<thumper> wallyworld_: want to hangout? or are you focused?
<wallyworld_> i can do a call
<wallyworld_> you have a url handy?
<thumper> nope, will start one
<thumper> https://plus.google.com/hangouts/_/871ae6be4b5cd1f19de181069ba7255665b21443?hl=en
<wallyworld_> thumper: https://codereview.appspot.com/9886045
 * fwereade__ is up way past his bedtime, and has just proposed a reasonably monstrous pipeline, and would be ever so happy if he woke up tomorrow to some reviews from thumper and/or wallyworld_
 * fwereade__ will probably be late in tomorrow
<thumper> fwereade__: hi there
<wallyworld_> fwereade__: sure
<thumper> fwereade__: saw them incoming
<fwereade__> cheers
<fwereade__> gn :)
<thumper> night
<wallyworld_> fwereade__: i'll ping you tomorrow if i can't get my stupid test to pass
<wallyworld_> talk later
<fwereade__> wallyworld_, sorry, never got round to that -- definitely let me know tomorrow
<wallyworld_> will do. good night
<wallyworld_> thumper: found why the two tests were giving different results - stupid typo. now to fix the failure
<thumper> haha
<wallyworld_> fo
<thumper> wallyworld_: I have something for you
<wallyworld_> yeees?
<thumper> wallyworld_: what does a container instance of a *state.Machine give for the result of a Tag() call ?
<thumper> does it special case anything?
<wallyworld_> not that i have implemented
<thumper> wallyworld_: you'll need to tweak it
<wallyworld_> i haven't changed the Tag() method
<thumper> wallyworld_: as the result of Tag() should be something that works for a filename
<thumper> machine-0/lxc/1 isn't good
<thumper> replace('/', '-')
<wallyworld_> i had ":" as the separator
<thumper> yeah, I know
<thumper> we can blame fwereade__ for that
<wallyworld_> ok, will tweak the Tag() method, thanks for letting me know
<thumper> wallyworld_: can I count on you fixing that for me?
<thumper> np
<wallyworld_> yep
<thumper> wallyworld_: I'm getting my head around all this cloudinit stuff
<thumper> now that I realise that the user-data is the yaml file, things are easier
<thumper> I was trying to find the missing step
<wallyworld_> yeah
<wallyworld_> i knew that much but not too much extra
 * thumper nods
<thumper> I'm starting to feel more comfortable about implementing this lxc-broker now
<thumper> I feel I know more about WTF is going on
<wallyworld_> cool
<thumper> wallyworld_: do you know how to use the api?
<thumper> wallyworld_: I want to start using that instead of *state.State
<thumper> if possible
<thumper> we should have something that returns me a *state.Machine right?
<wallyworld_> thumper: not really. i've not written code that uses it. perhaps look at the tests to see what is done?
<thumper> or should I not bother yet
<thumper> I have too many other things to worry about just now
<thumper> will leave a TODO for now
<wallyworld_> sure. let's make sure we fix that one soon though
<thumper> wallyworld_: yes, agreed
 * thumper stops hacking to do some reviews for fwereade__
<thumper> fwereade__: left the last two as an exercise for some europeans :)
 * thumper is done for today
<jam> morning danilos
<danilos> jam: morning
<rogpeppe1> mornin' all
<TheRealMue> morning
<TheRealMue> somehow my provider dislikes me this morning *hmpf*
<TheMue> fwereade__: ping
<jam> danilos: do you have a hardware switch on your mic
<danilos> jam: no
<danilos> jam: it just died
<TheMue> fwereade__: oh, technician rings at the door, bbiab
<danilos> jam: it's not showing in the "vumeter"
<fwereade__> TheMue, heyhey
<danilos> jam: it seems something got messed up in the USB stack, my mouse stopped working now
<jam> danilos: uh-oh, sounds like reboot is in order
<danilos> jam: yeah :/ brb
<jam> I'll grab a snack, be back in a sec
<jam> danilos: welcome back
<fwereade__> rogpeppe1, jam, TheMue: https://codereview.appspot.com/10083047/ and https://codereview.appspot.com/10166044/ are sizeable but important
<TheMue> fwereade__: so, back again, will take a look
<TheMue> fwereade__: thx for your review
<fwereade__> TheMue, and what can I do for you? you pinged :)
<TheMue> fwereade__: yes, the first note "Should just be a straight rename of JobServeAPI to JobManageState." isn't clear to me in that context.
<fwereade__> TheMue, jobs are somewhat fluid and casual groupings of tasks; in the current intended model, the ManageState tasks include API-serving and txn-resuming (by necessity) and cleaning (by convenience/similarity of use case)
<TheMue> fwereade__: ah, so only JobManageState shall exist in future and start all the according tasks. am i right?
<fwereade__> TheMue, JobManageState should just be a direct replacement of JobServeAPI at this point
<TheMue> fwereade__: ok
<fwereade__> TheMue, fwiw if you were to fix the watcher and just repropose the worker, I would be happy -- integration (and thus the jobs change) doesn't have to come in in this CL
<fwereade__> TheMue, smaller, simpler, quicker to review, I think
<fwereade__> TheMue, (and I'm sorry for dropping the ball on the watcher impl review)
<TheMue> fwereade__: it's ok. i had it this way earlier but changed it after a hint of dimiter. will now change it back again.
<mattyw> fwereade__, is it worth me raising a bug about $HOME not being set when hooks get run, or is it fair to say it's not there for a reason?
<TheMue> fwereade__: "Clients are expected to parse their own damn data ..." *hehe*
<fwereade__> mattyw, I would *prefer* to only guarantee the existence of the env vars we produce ourselves
<fwereade__> mattyw, let me just poke in and see if I can see what happened to it, though, just a sec
<mattyw> fwereade__, it's possible it might not have ever been there, It looked to me like it wasn't set in pyjuju either
<fwereade__> mattyw, ok, it looks like it never was there
<mattyw> fwereade__, ok thanks
<fwereade__> mattyw, I *might* be convinced that we should pass the whole shell environment down rather than constructing it from scratch (apart from $PATH, which we do copy down (with additions)), but I'd need a reasonably significant use case
<fwereade__> mattyw, feels somewhat nicer to run in as controlled an environment as possible though
<fwereade__> Makyo, ping -- I would appreciate your eyes on https://codereview.appspot.com/10166044/ -- it includes the stuff we discussed yesterday
<rogpeppe1> fwereade__: i think we should pass the whole shell environment down
<danilos> jam, all: heya, how do I run the live tests? I had it written down somewhere, but can't find it now
<jam> danilos: you go to each provider, cd environs/openstack; go test -live; cd environs/ec2; go test -amazon
<jam> you need to source your credentials into the environment first
<danilos> jam: it's fine to keep it in ~/.juju/environments.yaml or do I really need to put them into environment variables?
<jam> danilos: I don't think the test suite looks at environments.yaml
<danilos> jam: ack, thanks
 * TheMue is at lunch
<danilos> jam: I am trying with ec2 first, but "go test -amazon" doesn't work (it seems AWS_SECRET_ACCESS_KEY is being unset somewhere where it shouldn't be, since setting both through EC2 envvars makes it go past that point, but then tests fail which expect access keys to be wrong)
<danilos> jam: fwiw, this is what I had: http://pastebin.ubuntu.com/5754428/
<jam> danilos: if you are running the ec2 tests, I know you'll need to also do '-test.timeout=7200' or something like that, as the test suite is quite slow.
<jam> that isn't your specific problem
<jam> but you'll run into that in 5 min
<jam> (the default timeout for the test suite)
<danilos> jam: right
<jam> danilos: so I just sourced my ec2creds which only sets "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY" and go test is happy
<jam> danilos: you might want to check the spelling of your env variables, as the above ones aren't very obvious for me.
<rogpeppe> laptop just crashed for the 4th time in two days
<rogpeppe> and lost another couple of hours' work dammit
<danilos> jam: I did, I'd hope copy-pasting from goamz/aws/aws.go should do it as a check :)
<jam> right
<jam> rogpeppe: just in your editor buffer? wouldn't it at least be on disk?
<rogpeppe> jam: i hadn't saved, stupidly
 * rogpeppe goes to change the editor so that this doesn't happen again
<jam> danilos: I can confirm your bootstrap-verify branch (as when we started) is passing the test suite. "BootstrapAndDeploy" takes 421s to complete, though.
<danilos> rogpeppe, hey, emacs auto-saves stuff for you
<mgz_> eheh, stop trying to convert people to your religion
<danilos> jam: I am having the same problem with trunk
<danilos> mgz_, just sayin' :P
<jam> mgz_: /wave
<rogpeppe> danilos: yeah. this should too - there's a Dump function that saves the current working state, but it's not called automatically
<jam> danilos: well, I do a pretty heavy cycle of running the test suite, which requires saving often
<danilos> jam: me too, I even commit with each new test/function added ;)
<jam> danilos: yeah, I'm often the same
<danilos> jam: any ideas if I am doing something blatantly wrong: http://pastebin.ubuntu.com/5754461/?
<jam> danilos: my goamz was a bit older, I just updated and will see if that breaks my stuff
<danilos> jam: thanks
<danilos> nooo, the drilling on the floor up returns
<jam> danilos: I get a lot of failures just running 'go test ./...' in goamz, does it pass for you?
<jam> TestMultiComplete fails for me.
<jam> a lot of "SpecifiedBucketDoesNotExist" log messages.
<danilos> jam: yeah, for me as well with r37: http://pastebin.ubuntu.com/5754474/
<jam> danilos: on the plus side, it has been failing for a while :)
<jam> r35 also fails for me
<jam> danilos: I see the same failure you had until I source my credentials.
<jam> Are you sure you exported them correctly?
<jam> danilos: when I source my creds again, it starts passing, even if goamz doesn't work.
<danilos> jam: http://pastebin.ubuntu.com/5754519/
<danilos> jam: I am not sharing my keys with you, but even echoing $... does show them
<jam> danilos: sure 'set' sees it, but are they exported?
<jam> I don't know if you are using bash
<jam> but you can do:
<jam> FOO=xxyy
<jam> but you need to do
<jam> export FOO
<jam> for child processes to see it
<danilos> jam: that was it :/ ta
<jam> danilos: I'm glad it helped
<danilos> jam: I thought I tried with passing them right before the "go test", when it's passed down
<jam> danilos: AWS_XXX=YYYY go test ?
<jam> I think that should work
<danilos> jam: it was confusing that it actually failed on ACCESS_SECRET_KEY, but not on ACCESS_KEY_ID which was checked for first
<danilos> jam: yeah, I know
<jam> danilos: hangout time?
<jam> you can at least join to listen in
<wallyworld__> jam:
<wallyworld__> state_test.go:1107:
<wallyworld__>     c.Errorf("mgo.Dial only paused for %v, expected at least %v", t1, 3*retryDelay)
<wallyworld__> ... Error: mgo.Dial only paused for 500.800787ms, expected at least 1.5s
<mgz_> hm, environs/azure seems to break the build?
<mgz_> fwereade__: ^am I missing something or should this get pulled out for now?
<danilos> one question about the live tests: if I interrupt the test run with C-c, does that mean I might have instances that keep on running?
<jam> mramm: are you awake yet? :)
<mgz_> danilos: potentially, tests should be capable of not leaking resources, but do you trust them to be robust like that? :)
<jam> mgz_, fwereade__: I'm pretty sure we need to roll back the azure change because it breaks 'go build'
<danilos> mgz_, tbh, no :)
<mgz_> you'll want the eucatools stuff or similar to inspect left over instances/secgroups etc
<mramm> jam: yep, I'm here
<jam> wallyworld__: I should probably have posted to the mailing list when I updated for the newer mgo, I had run into it the other way (updated mgo, the *other* tests were failing)
<jam> mramm: I'm here: https://plus.google.com/hangouts/_/e6b23e6c7391c8641a3b6660495f8fe7055e1884
<jam> danilos: yes, and in fact, I've seen unclean shutdowns without killing the test suite.
<jam> So generally, check if you have instances running
<wallyworld__> jam: no problem. i wonder why the version (v2) stayed the same even though the changes were not backwards compatible semantically
<jam> wallyworld__: technically the APIs are stable, it just added a Sleep internally
<jam> tests that depend on time.Now are a bit broken anyway
<jam> IMO
<wallyworld__> but the behaviour changed
<danilos> jam: ok, the tests pass with canonistack as well; now, I wonder if anybody knows how to choose lcy02 for https://dashboard.canonistack.canonical.com/
<jam> mgz_: ^^ ?
<mgz_> people use dashboard? :)
<fwereade__> mgz_, huh, was that not a change in response to an mgo change? are you definitely up to date there?
<mgz_> danilos: so, no, not sure, you can ask in the internal support channel, or use python-novaclient commandline tools which pay attention to the env variables you set
<fwereade__> jam, mgz_: grar, I did specifically lgtm it with a "make sure it builds"
<danilos> mgz_, heh, I was taking a look and noticed that none of my instances showed up in there
<mgz_> there may be another endpoint for lcy02
<jam> fwereade__: yep, I saw that
<jam> and I was surprised
<fwereade__> jam, if no more than I have seen has happened, yes, please revert it
<mgz_> fwereade__: sorry, probably crossing converstions here
<mgz_> fwereade__: ian's test failure was a mgo thing, trunk not building is the azure stub
<fwereade__> mgz_, ok, we should indeed revert the azure stub if it doesn't build
<danilos> mgz_, the wiki page talks about choosing a different region in the drop down when logging in, a shame there is no such dropdown :)
<mgz_> :D
<rogpeppe> fwereade__: all reviewed i think
<danilos> mgz_, ah, there's also a "lcy01 is the only available region in the dashboard"
<rogpeppe> now back to try and remember the code i wrote this morning :-(
<fwereade__> rogpeppe, lovely, ty
<danilos> rogpeppe, nothing more joyous than having to rewrite the code from memory (it is still fast, but very annoying)
<fwereade__> mgz_, jam, has anyone proposed a revert?
<mgz_> fwereade__: proposed
<rogpeppe> danilos: yeah. it's the comments in particular i find annoying, when you've spent a while thinking of a good turn of phrase and it's all gone
<fwereade__> mgz_, approved on trust that it does what you say it does ;p
<mgz_> https://codereview.appspot.com/9714046/
<rogpeppe> mgz_: LGTM trivial
<jam> mgz_: just make sure you poke jtv so he knows its happened
<jtv> Argh.  I forgot to compile before landing.  And now I can't propose a fix because something else is broken.
<fwereade__> rogpeppe, sorry, but you seem to have reviewed all the things that already had 2 LGTMs and skipped the ones that didn't
<jtv> I thought I'd built it, and I didn't.  *bash* *bash*
<fwereade__> rogpeppe, good comments, appreciated very much
<rogpeppe> fwereade__: orly? i thought i'd gone through all the ones in activereviews
<rogpeppe> fwereade__: which ones have i missed?
<rogpeppe> fwereade__: (it's really not easy to work forward through reviews in a consistent way - i should have kept notes about which ones i'd done)
<jtv> Does anybody know what the logger.Logf errors are about?
<fwereade__> rogpeppe, https://codereview.appspot.com/10083047/ and https://codereview.appspot.com/10166044/
<fwereade__> rogpeppe, it is not impossible that you were looking at the "approved" section ;p
<fwereade__> jtv, update launchpad.net/loggo
<rogpeppe> fwereade__: ah, i was indeed
<rogpeppe> fwereade__: doh
<jtv> argh thanks
<rogpeppe> fwereade__: and one of them i'd even started and had a pending comment for
<rogpeppe> fwereade__: i started on that one and then realised i needed to go through the rest of the pipeline to get context
<jtv> Will have a build fix up shortly.
<gary_poster> mramm, are you ready for a random GUI person to descend upon the juju core daily yet, or should we wait another day?
<mramm> sure
<gary_poster> thanks
<fwereade__> rogpeppe, no worries
<jtv> Review needed for build fix: https://codereview.appspot.com/10184043
<jtv> Anybody available?
<TheMue> fwereade__: while changing the test, just one question: by coalescing events you mean multiple cleanup transactions lead to only one event?
<fwereade__> TheMue, yes I do, sorry
<TheMue> fwereade__: hmm, so the outlined loop (I already used it before that way) does not coalesce events.
<TheMue> fwereade__: it simply fires each time.
<TheMue> fwereade__: e.g. I have two destroys w/o cleanup. here I get two events.
<fwereade__> TheMue, that's fine if the first event was notified before the second occurred
<fwereade__> TheMue, but if two events happen and then the client is notified, he should not be notified again
<fwereade__> TheMue, I think the structure I showed you is pretty common among the watchers, isn't it?
<fwereade__> TheMue, and apart from anything else
<TheMue> fwereade__: yes, that's why I used it first too.
<TheMue> fwereade__: but there is more effort e.g. when collecting and merging document ids
<fwereade__> TheMue, if you don't read from channels you register with state/watcher, you block *all* watchers
<fwereade__> TheMue, except that there is no effort in this case
<fwereade__> TheMue, any change, flip the send bit on; any send, flip it off
<TheMue> fwereade__: dunno if I got you right, but let me propose the newest version so i can show you where i'm stumbling.
<fwereade__> TheMue, sure, sgtm
<TheMue> fwereade__: https://codereview.appspot.com/10148045 is in, i removed the job integration too
<TheMue> fwereade__: in state_test.go line 1342ff I do the two Destroy() and then get two change events.
<TheMue> fwereade__, rogpeppe, mramm : hangout?
<Makyo> fwereade__, ack, looking.
<fwereade__> TheMue, I don't think you can depend on getting exactly 2 events there; a correct implementation in combination with suitably pathological scheduling decisions could lead to the watcher read only completing once both the add and the remove are processed
<TheMue> fwereade__: hmm, could you please rephrase
<fwereade__> TheMue, there are really 2 events happening, it is true
<fwereade__> TheMue, and almost always, because they are somewhat separated in time and the machine is probably not working too hard on these tests, the watcher's client will have the opportunity to read those 2 events independently
<fwereade__> TheMue, hmm, a thought
<fwereade__> TheMue, how many do you get if you Sync first, rather than StartSync?
<TheMue> fwereade__: i'm listening ...
<TheMue> fwereade__: will test, one sec
<TheMue> fwereade__: yep, then it's only one
<fwereade__> TheMue, ok, and that *could* happen with StartSync, it's just less likely
<TheMue> fwereade__: ah, now I understand, thanks. now I only have to see how to improve the testing for it.
<fwereade__> TheMue, put a couple of notes on https://codereview.appspot.com/10148045/
<TheMue> fwereade__: thank you
<fwereade__> gaah, errors.NotFoundf appends " not found" to its message
<fwereade__> this is not consistent with actual usage in general
<fwereade__> NotFoundNamef("unit %q", unit) -> `unit "foo/2" not found`
<fwereade__> NotFoundf("ping pong %q blah", unit) -> `ping pong "foo/2" blah`
<fwereade__> would both be used
<fwereade__> rogpeppe, do those names seem ok to you?
<fwereade__> rogpeppe, errors.NounPhraseNotFoundf
<fwereade__> ;p
<rogpeppe> fwereade__: i'd just have the latter. the " not found" suffix was only ever a convenience and i think it's no longer so convenient.
<fwereade__> rogpeppe, an awful lot of places include that explicitly
<fwereade__> rogpeppe, and they're either not tested properly or the tests are dumb because they're not catching "foo not found not found"
<fwereade__> rogpeppe, actually that all collapses to "not tested properly" doesn't it
<fwereade__> TheMue, mgz_, jam, danilos: any opinions on the above?
<rogpeppe> fwereade__: i see only three places that use NotFoundf("xxx not found")
<rogpeppe> fwereade__: and they all look like copypasta of the same code
<rogpeppe> fwereade__: i suspect i was probably responsible originally :-)
<fwereade__> rogpeppe, heh, I saw 3 in the first dozen I looked at
<fwereade__> rogpeppe, clearly my data set was too small :)
<rogpeppe> fwereade__: http://paste.ubuntu.com/5755219/
<fwereade__> rogpeppe, the other aspect is charm.NotFoundError
<fwereade__> rogpeppe, they have custom messages that are quite nice
<fwereade__> rogpeppe, and I'd quite like to keep them
<fwereade__> rogpeppe, but the current NotFoundf seems like a good common case so it should probably keep its name
<fwereade__> rogpeppe, errors.NotFoundMessagef()?
<fwereade__> rogpeppe, for the custom cases?
<rogpeppe> fwereade__: errors.NotFoundError{msg} ?
<fwereade__> rogpeppe, the f is useful there too
<fwereade__> rogpeppe, ah sorry mistook you
<rogpeppe> fwereade__: BTW there's an error field in NotFoundError which is never used
<fwereade__> rogpeppe, ha
<fwereade__> rogpeppe, best drop it then
<rogpeppe> fwereade__: same with unauthorized error
<rogpeppe> fwereade__: i think it's a hangover from its goose origins
<fwereade__> rogpeppe, ok, I'll strip those too then, thanks
<rogpeppe> fwereade__: perhaps: have only one NotFoundf helper, same as current. but if you want to create a "not found" error with a custom message, do so.
<fwereade__> rogpeppe, seems reasonable
<rogpeppe> fwereade__: some time i'll have some more time to work up my new errors package. it was going in quite a nice direction, i think.
<rogpeppe> fwereade__: i found that rather than error codes, a useful concept was a "diagnosis"
<fwereade__> rogpeppe, sounds interesting
<fwereade__> rogpeppe, there's definitely a lot of room for advancement in that whole field :)
<rogpeppe> fwereade__: so an error could change but its diagnosis might remain the same. and in particular, a diagnosis could be an error itself. so to find out if something's a not-found error, you might do errors.Diagnosis(err) == somepkg.ErrNotFound
<fwereade__> rogpeppe, or indeed somepkg.ErrorHasSomeObscureProperty(errors.Diagnosis(err))
<fwereade__> rogpeppe, ofc the really hard part will be resisting the temptation to s/err/murder/
<fwereade__> rogpeppe, btw, another thing is really bugging me here
<fwereade__> rogpeppe, NotFound vs Unauthorized
<fwereade__> rogpeppe, can you think of any major consequences to s/Unauthorized/NotAuthorized/?
<fwereade__> rogpeppe1, hey, UnauthorizedError *does* use the error field, but not in a very sane way
<rogpeppe1> fwereade__: ah, i didn't notice
<fwereade__> rogpeppe1, it's in state/open.go
<fwereade__> rogpeppe1, ISTM that the first usage is clear gibberish
<fwereade__> rogpeppe1, and the second one has no reason to keep the error lying around
<rogpeppe1> fwereade__: yeah, they both look dodgy to me
<fwereade__> rogpeppe1, ok, it's used so little I'm going to make it consistent, if you have no objection to "not authorized" supplanting "unauthorized"
<fwereade__> rogpeppe1, the second one looks like it has a purpose tbh
<rogpeppe1> fwereade__: yeah, i was just thinking that
<rogpeppe1> fwereade__: but perhaps it should be: "cannot log in to juju database: not authorized: %v"
<fwereade__> rogpeppe1, NotAuthorizedf("foo %q", "bar") -> `foo "bar" not authorized`
<fwereade__> rogpeppe1, maybeUnauthorized(err error, action string)
<fwereade__> rogpeppe1, NotAuthorizedf(action)
<rogpeppe1> fwereade__: we should use either Unauthorized or NotAuthorized consistently at least
<fwereade__> rogpeppe1, Errorf("%s failed: %v", action, err)
<fwereade__> rogpeppe1, sure, that was just a thinko
<rogpeppe1> fwereade__: i *think* i prefer Unauthorized, for the function names at least
<rogpeppe1> fwereade__: in a sentence, i'm happy to use "not authorized" when it reads better
<fwereade__> rogpeppe1, even considering that Unfound would be an abomination unto the lord?
<rogpeppe1> fwereade__: sure
<rogpeppe1> fwereade__: they don't have to be the same
<fwereade__> rogpeppe1, no, but there's definitely some degree of value in consistency too
<rogpeppe1> fwereade__: i'm not sure there's any particular virtue in that particular consistency. the moment i find myself typing "NotAuthorizedError" by accident, i'll concede though :-)
<fwereade__> rogpeppe1, "<noun> not found", "<action> not authorized" both seem to read just fine to me
<fwereade__> rogpeppe1, that's a matter of personal habit alone, surely
<fwereade__> rogpeppe1, consistency is in itself a virtue
<rogpeppe1> fwereade__: "not authorized" doesn't work so well as an adjective tho
<rogpeppe1> fwereade__: although it's unavoidable with "not found"
<fwereade__> rogpeppe1, it's just harder to appreciate the value of imposing consistency when it's code you already know
<rogpeppe1> fwereade__: if you feel strongly about NotAuthorizedError, go for it
<fwereade__> rogpeppe1, strongly enough, yeah -- thanks :)
<rogpeppe1> fwereade__: though while you're about it, errors.NotAuthorized might be better when errors.NotAuthorizedError
<rogpeppe1> s/when/than
<fwereade__> rogpeppe1, then NotAuthorizedf and NotFoundf echo one another
<fwereade__> rogpeppe1, foo not bar
<fwereade__> rogpeppe1, oh and finally: errors.NotAuthorizedError vs errors.NotAuthorized; errors.IsNotAuthorizedError vs errors.IsNotAuthorized
<fwereade__> rogpeppe1, any point keeping the stuttering?
<rogpeppe1> fwereade__: i don't think so
<fwereade__> rogpeppe1, cool
<rogpeppe1> fwereade__: i thought about mentioning it but i didn't want to add yet more negativity...
<fwereade__> rogpeppe1, funny how a little "this isn't quite consistent, let's just fix that one bit" can balloon
<rogpeppe1> fwereade__: oh yes
 * TheMue is not happy about the cleaner test yet. it needs about 6 seconds.
<jtv> Anyone up for a second review of my branch?  It's just some forgotten imports added to the azure skeleton, and some unqualified class names qualified with their package name: https://codereview.appspot.com/10184043/
<mgz_> jtv, it would seem reasonable to carry over one of the earlier +1s and submit
<jtv> Can I do that?
<mgz_> or get fwereade__ to rubberstamp
<jtv> Just the sort of thing I was hoping to accomplish here, really.  :)
<jtv> Oh never mind.  I'll just submit as-is then.
<jtv> I wonder what this error output from lbox means...
<jtv> error: use of closed network connection
<jtv> Rietveld: https://codereview.appspot.com/10184043
<jtv> The lbox command still returns success (i.e. zero), but I think it just always does that, regardless.
<rogpeppe1> g'night all
<TheMue> rogpeppe1: n8
<TheMue> fwereade__: still available?
<jtv> nn robbiew
<jtv> I mean, rogpeppe1
<jtv> Damn over-eager tab completion...
<robbiew> ;-)
<jtv> Sorry!  :)
<robbiew> np
<mgz_> jtv: not seen that error before myself
<jtv> I've noticed that relatively often, given how rarely I've used lbox: weird errors, different every time, and if you keep retrying it succeeds eventually, but you can't ever really tell because it doesn't report errors properly.
<jtv> FWIW it does look as if lbox failed there, it just doesn't seem to have checked for the error condition.  When I retried, it succeeded.
<jam> jtv: Are you using precise? That looks like the error I used to get when compiling with go 1.0 (updating to go 1.0.3 from ppa:gophers/go fixed it)
<jtv> jam: no, I'm on Raring.
<jtv> Go version is go1.0.2.
<jtv> Do I need to update to 1.0.3?
<jam> jtv: I think danilos is using your same configuration successfully, but I'm not sure
<andreas__> hi guys, has anyone seen this error before? It's from the bootstrap node: http://pastebin.ubuntu.com/5755877/
<andreas__> on aws, using juju-core trunk
<andreas__> juju bootstrap worked, one instance running, then I did a bunch of deploys
<andreas__> juju status says they are all pending, but ec2-describe-instances still only sees the bootstrap instance, no word on the others
<andreas__> I didn't bootstrap with --upload-tools, I wonder if that could be it
<fwereade__> andreas__, I suspect you're using trunk but not running --upload-tools
<fwereade__> andreas__, ha, yes
<andreas__> ok, I destroyed the env and bootstrapped again with --upload-tools, it's working now
<thumper> morning
<thumper> fwereade__: I have some questions around the provisioner change
<thumper> fwereade__: what *should* we do on bad config?
 * fwereade__ has a brief flashback
<fwereade__> thumper, I think the plan was to keep the original environment around and to continue to use it
<thumper> the tests asserted that the old environment bits were saved
<thumper> fwereade__: and regarding failure...
<thumper> fwereade__: when there was invalid config, the "get state addresses" failed
<fwereade__> thumper, ok; so, failure to provision should be communicated back via SetStatus (for now) and SetInstanceError(soon)
<thumper> hence the magic bits
<fwereade__> thumper, huh, why is the bad environ causing that? I thought we got the addresses from state now
<thumper> um...
<thumper> not sure why
<thumper> but it did
 * fwereade__ has a suspicion
<thumper> I didn't really think too hard
<thumper> probably should have
<fwereade__> might be worth another look, yeah
<thumper> but assumed it was expected
<fwereade__> I think we still have issues around that though
<fwereade__> provisioning failure is fine as above
 * thumper nods
<thumper> I'm going to focus on lxc for a bit, perhaps back to the provisioner work when I need a break
<fwereade__> unprovisioning failure should be logged and ignored, we'll try again next time we hit processMachines
<fwereade__> instance-listing failure though is most unhelpful and the best course of action is most unclear
<thumper> fwereade__: so if I set an error on the machine saying "couldn't provision", does it come back in the next watcher list?
<thumper> fwereade__: also, another Q
<fwereade__> at the moment it's handled in the provisioner by checking status itself
<fwereade__> the status path is overloaded
<thumper> fwereade__: if I was to write a small app to test my lxc stuff on machine 0 of my bootstrapped ec2
<thumper> fwereade__: what is the quickest way to get state bits?
<fwereade__> thumper, to get access to a state.State in code?
<fwereade__> thumper, for now, snarf machine 0's state credentials by making an environs/agent.Conf with sane dataDir and tag and reading from there
<fwereade__> thumper, then state.Open() and you're away
<thumper> hmm...
<thumper> there is obviously already an agent.conf file
<thumper> in /var/lib/juju/agents
<thumper> but thanks, will poke a little
<fwereade__> thumper, the Conf type is a bit odd imo
<thumper> ok
<thumper> wallyworld__: ping
<wallyworld__> thumper: hi
<thumper> wallyworld__: how close are you to landing the "make a container in state" work?
<thumper> I'd love to be able to poke it
<wallyworld__> already done
<wallyworld__> just doing a trivial followup for the Tag() change
<thumper> wallyworld__: awesome
<thumper> wallyworld__: I'm writing some code to create a test app to create a container based on that work
<wallyworld__> thumper: i hate mongo
 * thumper isn't a fan either
<wallyworld__> why the f*ck we have to work around its transaction limitations is beyond me
<thumper> the thing is, we are storing structured data
<thumper> highly relational structured data
<thumper> why we are shoving that into mongo and trying to do transactions on it is beyond me
 * wallyworld__ sighs
<wallyworld__> stupid design decision
<wallyworld__> oh look, mongo is new and shiny, let's use that
<thumper> it is good for rapidly evolving structures
 * thumper shrugs and goes back to wokr
<wallyworld__> that can be done with relational dbs too
<wallyworld__> i know, i've done it
<thumper> wallyworld__: does state.Machine have a getParent?
<thumper> or parent machine id getter?
<wallyworld__> thumper: not yet. it has a Containers(). will add Parent() now
<wallyworld__> there is a Id()
<thumper> wallyworld__: cool, because I need it
<thumper> I'm ok with parent returning the machineId
<wallyworld__> thumper: while you are waiting https://codereview.appspot.com/10019049 - very trivial
 * thumper looks
<thumper> trivial, one ack is good enough
<thumper> wallyworld__: so the other bit has landed?
<thumper> wallyworld__: and I can use it now?
<wallyworld__> yes
<thumper> \o/
<wallyworld__> i did it 90 mins ago
<wallyworld__> Parent() won't take long
<wallyworld__> thumper: https://codereview.appspot.com/10203043
<wallyworld> thumper: fwereade__: oh joy. our "unit" tests connect to the real charm store :-(
<thumper> haha
<wallyworld> not hahaha. very sad :-(
<wallyworld> i noticed because my connection dropped out when running the tests
<wallyworld> the tests seem to be taking a looooong time of late
<fwereade__> wallyworld, yeah, I just noticed some of those myself
 * wallyworld sighs again
<fwereade__> wallyworld, the config-7 branch does at least add a MockCharmStore that should be useful in such situations
<wallyworld> \o/
<wallyworld> thumper: there are ~50 calls to Machine.st.Machine() , so any move to use the API will need to address those. adding a new call site shouldn't matter for the Parent() call
<thumper> wallyworld: fair enough
<wallyworld> thumper: i don't follow the other comment though
<thumper> that is the only comment
<wallyworld> btw, there is a parentId(), but it is unexported
<wallyworld> i can export it i guess
<thumper> hmm...
<wallyworld> thumper:
<wallyworld> Unfortunately this is making an assumption that doesn't always hold.
<wallyworld> If I am the provisioner, and I have a state.Machine instance, I won't
<wallyworld> necessarily have access to state.
<wallyworld> </quote>
<thumper> right, from the api, it won't
<thumper> what happens now if someone gets a machine from the api?
<wallyworld> NFI. I think the machine client from the api is different to state/machine - the client just delegates stuff through to the backend
<wallyworld> so maybe I need to add Parent() to the api client as well
<wallyworld> i've not used the api stuff at all yet
<thumper> me neither
<wallyworld> thumper: do you need Parent() exposed via the api yet?
<thumper> fwereade__: Q regarding agent passwords
<wallyworld> thumper: i can add it, but can we wait till it's needed?
<thumper> fwereade__: agent.Conf.OpenState returns a new password
 * fwereade__ sighs
<thumper> fwereade__: if I'm just writing a noddy app to poke stuff, do I need to care?
<wallyworld> thumper:  i'd rather not bloat the current Machine client with methods we don't need
<thumper> wallyworld: ack
<wallyworld> thumper: so if you agree, maybe you want to +1 ?
<thumper> wallyworld: ok
<wallyworld> thumper: so now you have all the goodies you want to experiment i think
<fwereade__> thumper, bleh, it's all done pretty early on
<thumper> fwereade__: yes, the OpenState connects with the old password, generates a new random one, and passes that back
<fwereade__> thumper, maybe best to wait until the machine agent has set itself to started though
<thumper> fwereade__: I'm assuming I can happily ignore the new password?
<thumper> fwereade__: this is my test app
<thumper> not anything that is likely to survive
<fwereade__> thumper, yeah, if it gives you one you should be safe to ignore it
<thumper> fwereade__: effectively I'm writing a small app to call into my container work to see the impact on the bootstrap node
<wallyworld> thumper: the Parent() stuff is trivial also - want me to land so you can use it?
<fwereade__> thumper, and if you start the app again later it'll pick up the fresh value written by the other one
<thumper> wallyworld: funnily, I don't need it quite yet, but I will do for the provisioner work
<wallyworld> ok
<thumper> wallyworld: but I'm still at the container bits
<thumper> fwereade__: can you look at wallyworld's machine parent branch?
<wallyworld> it's way past his bedtime
<thumper> hmm...
<thumper> ec2 still hasn't bootstrapped my node
<fwereade__> I'll see if I remember in 5 mins
<wallyworld> thumper: i think i'll do some work to make juju status output container info
<thumper> wallyworld: sounds good
<fwereade__> thumper, upload-tools?
<thumper> fwereade__: no i forgot...
<thumper> oops
 * wallyworld thinks upload-tools should always happen for dev versions of juju
<fwereade__> thumper, that said... we have actually broken compatibility, haven't we
<thumper> have we?
<fwereade__> thumper, we can no longer bootstrap 1.10
<fwereade__> (except with 1.10 client, that is)
<thumper> fwereade__: what have we done to cause that?
<fwereade__> thumper, added a field to the environment config
<thumper> which field?
<fwereade__> thumper, state-port and api-port I think
<thumper> ah...
<fwereade__> thumper, I'm hearing that siren song of "go on, hardcode it" and I want to slap myself
<thumper> :)
<wallyworld> wouldn't there have been defaults used, so if the configs were missing, stuff would work?
<thumper> fwereade__: will panic cause defers to run?
<fwereade__> thumper, but if the values are *present* in older code
<fwereade__> thumper, yes
<fwereade__> wallyworld, but if the values are present in older code, eg the machine agent running 1.10, it will barf
<wallyworld> ah, bollocks, right
<fwereade__> wallyworld, for that matter, an environments.yaml valid for 1.11 is not necessarily so for 1.10
<wallyworld> cause we barf on unknown config items
<wallyworld> maybe we should be more tolerant of such data
<fwereade__> wallyworld, yeah, we should just drop that madness
<fwereade__> wallyworld, log unknown settings and move on
<wallyworld> tolerant reads, strict writes
<thumper> +1
<wallyworld> yep
<thumper> make it so
 * wallyworld wonders why we didn't do that
<fwereade__> thumper, doesn't fix today's problem though, that 1.10 is already out doing its badness in the wild
<thumper> wallyworld: TODO: add more help for "add-machine"
 * thumper nods
<wallyworld> thumper: i didn't see an obvious example to cargo cult of how to add extensive help
<thumper> wallyworld: how do I add a new container to machine 0?
<thumper> wallyworld: in the description
<thumper> just some examples would be good
<wallyworld> juju add-machine 0/lxc
<wallyworld> will do it as a driveby
<wallyworld> thumper: let me know if above ^^^^ doesn't work for you
<thumper> about to try
 * wallyworld runs away
<wallyworld> lalalalalala
<thumper> wallyworld: how will I know if it succeeds?
<wallyworld> thumper: juju status in a day's time :-P
<wallyworld> actually, if it exits without error
<wallyworld> it will have updated the state
<wallyworld> but nothing acts on it yet
<wallyworld> actually, juju status should show it
<wallyworld> since containers are machines
<wallyworld> but the nesting etc will not be there yet
<wallyworld> make sense?
<wallyworld> thumper: how'd you go?
<thumper> wallyworld: sorry, doggy call of nature
<thumper> $ juju add-machine 0/lxc
<thumper> error: cannot add a new container: transaction aborted
<wallyworld> i guessed :-)
<wallyworld> hmmm.
<thumper> running with debug
<wallyworld> machine 0 exists?
<thumper> nah, that's all I get
<thumper> 0 is the bootstrap node
<wallyworld> sure. you've bootstrapped i guess
<thumper> yep, 0 exists
<thumper> wallyworld: have you actually tried this?
<wallyworld> tests yes
<wallyworld> let me check the tests
<wallyworld> thumper: see TestAddContainerToExistingMachine in addmachine_test.go - that's what you're trying
<wallyworld> the test adds to machine 1 though
<thumper> right
<wallyworld> maybe the bootstrap node state document is missing something due to how it was created
<wallyworld> that test would still pass even with machine 0 since AddMachine() does the right thing
<thumper> wallyworld: but machine 0 hasn't been bootstrapped in the tests
 * thumper tries with just /lxc
<wallyworld> thumper: it wouldn't be that - what i mean is that the container ref record may not have been created
<fwereade__> wallyworld, https://codereview.appspot.com/10203043/ reviewed
<wallyworld> thumper: you could also try add-machine and then add-machine <id>/lxc
<wallyworld> fwereade__: thanks
<thumper> addmachine.go:92 created "lxc" container on machine 1/lxc/0
<thumper> wallyworld: that is from 'add-machine /lxc'
<wallyworld> that's as expected
<wallyworld> fwereade__: what if nil is a legitimate result though?
<thumper> $ juju add-machine 1/lxc --debug
<thumper> running
<thumper> 1/lxc/1 there too now
<thumper> wallyworld: so it looks like a bootstrap issue
<thumper> wallyworld: care to fix?
 * thumper works with machine 1
<wallyworld> thumper: cool, so perhaps the bootstrap machine state entry is created via a different api
<fwereade__> wallyworld, by convention you would handle the specific error that encodes the information that would otherwise be encoded in the nil return
<wallyworld> an api that doesn't update the container ref record properly
<thumper> fwereade__: I see that 'juju scp ' needs more descriptive help too
<thumper> fwereade__: how do I copy something from local machine to machine 1?
<wallyworld> fwereade__: but it's not an error condition - it just means the machine's parent is legitimately nil
<thumper> wallyworld: probably
<wallyworld> nil can be a valid value for things
<fwereade__> wallyworld, oh, hell, I thought InjectMachine used all the same code paths
<fwereade__> wallyworld, think of it on a pure code level
<fwereade__> wallyworld, it's a useful convention because, if it's followed, you don't need to have context on Machine to know that code using Machine.Parent will be valid
<fwereade__> wallyworld, and I think all of us to a greater or lesser extent have that pattern somewhat embedded
<wallyworld> fwereade__: so you would prefer Machine.Parent() to return an err if there is no parent?
<fwereade__> wallyworld, I'd be less bothered if it wasn't exported
<fwereade__> wallyworld, yeah, for convention's sake if nothing else
<thumper> wallyworld: I'd be happy just to have ParentId exported
<thumper> wallyworld: with "" meaning no parent
<fwereade__> wallyworld, that said ParentId() (string, bool) seems perfect to me
<wallyworld> fwereade__: i'll change it, but i think special casing nil like this is wrong
<thumper> wallyworld, fwereade__: why bool?
<thumper> if empty, then obviously not there
<fwereade__> thumper, same logic as the err case really -- having a guarantee of validity on a separate channel is a good convention
<thumper> hmm... whatever
<thumper> not sure I agree
<thumper> but I'll go with it
<thumper> if it is current convention
<fwereade__> thumper, wallyworld: it's debatable, but conventional
<thumper> fwereade__: so, back to 'juju scp'
<thumper> fwereade__: from local to machine 1
<thumper> no examples in the built in help
 * wallyworld changes it to ParentId and goes to the corner to sulk
<thumper> hmm... suppose I could just read the source
<fwereade__> thumper, er, try `juju scp /path/to/file 0:/target/path` or something?
<thumper> trying $ juju scp ~/go/bin/test-lxc 1:~
<thumper> well, it is doing something
<fwereade__> jolly good :)
<thumper> fwereade__: unfortunately I can't create a container on the bootstrap node right now
<thumper> fwereade__: wallyworld will work out why :)
<wallyworld> thumper: yes i will. might include a fix as a driveby in the parent id branch
<thumper> wallyworld: sure, if it is small, if bigger, perhaps a separate branch would be better
<fwereade__> thumper, haha
<wallyworld> thumper: yep
 * thumper sighs
<thumper> wrong arg
<thumper> now wait a minute while I upload my new test app
<thumper> seems my upload starts fast and gets slower
<thumper> NFI why
<thumper> hmm... it is doing something
<thumper> probably downloading the ubuntu-cloud template
#juju-dev 2013-06-12
<thumper> container "machine-1-lxc-0" created
<thumper> well that bit worked...
<wallyworld> thumper: ParentId() committed, looking into bootstrap machine container issue
<thumper> ack
<wallyworld> thumper: i just bootstrapped on canonistack and everything works as expected
<thumper> wallyworld: try ec2
<wallyworld> i tried on ec2 and got an error running status - error: cannot log in to admin database: auth fails
<wallyworld> bootstrap seemed to work
<thumper> which zone?
<wallyworld> us-east-1
<thumper> hmm...
<thumper> my status works
<wallyworld> still, there should be no difference between add-machine on canonistack vs ec2
<wallyworld> i wrote a test for inject machine and containers and will add that test as a driveby
<wallyworld> thumper: did you remember to "go install launchpad.net/juju-core/..."
<thumper> aye
<wallyworld> if not, that would explain your issues perhaps
<thumper> hmm...
<thumper> v.weird
<wallyworld> why does my brightness keep maxing out. frustrating when trying to conserve power
 * thumper grabs the parent bits as needed now
<wallyworld> thumper: status just started working on ec2. go figure
<fwereade__> wallyworld, ec2 sometimes just pretends you have no instances
<wallyworld> :-(
<fwereade__> wallyworld, it's just one of those fun things than brighten your day every now and then
<thumper> ec2 sucks
<wallyworld> would that explain the auth failure on loggin to admin db?
<wallyworld> thumper: add-machine works for me on ec2
<thumper> wallyworld: how about 'juju add-machine 0/lxc'
<thumper> because that is what failed for me
<wallyworld> thumper: yes, that works
<wallyworld> that's what i tried
 * thumper pulls a face
 * thumper tries
<thumper> just merged latest, will rebuild and try
<wallyworld> ian@wallyworld:~$ juju add-machine 0/lxc  -e amazon
<wallyworld> ian@wallyworld:~$ juju status -e amazon
<wallyworld> machines:
<wallyworld>   "0":
<wallyworld>     agent-state: started
<wallyworld>     agent-version: 1.11.1.1
<wallyworld>     dns-name: ec2-107-22-111-226.compute-1.amazonaws.com
<wallyworld>     instance-id: i-e77d5b85
<wallyworld>     series: precise
<wallyworld>   0/lxc/0:
<wallyworld>     instance-id: pending
<wallyworld>     series: precise
<wallyworld> services: {}
<thumper> wallyworld: hmm...
<thumper> wallyworld: you should have quotes around 0/lxc/0
<wallyworld> ah yes. true
<wallyworld> i didn't change status or anything. i wonder why they are missing. will need to look
<wallyworld> i mean, a container is just a machine
<wallyworld> i'll fix it when i update status to indent containers etc
<fwereade__> wallyworld, thumper: no you shouldn't. yay yaml
<thumper> fwereade__: eh?
<thumper> fwereade__: so "0" shouldn't have quotes?
<fwereade__> thumper, "0" needs them lest it be interpreted as an int
<thumper> ew
<thumper> that is orrible
<wallyworld> fwereade__: sure, but when printing to console, shouldn't the output be "nice"?
<wallyworld> i.e. leave off the "
<fwereade__> wallyworld, then it wouldn't be yaml
<wallyworld> sure, but it's just for the user to look at? or are we expecting to pipe it into a file? or?
<fwereade__> wallyworld, you can almost certainly customise the yaml output but I don't consider the benefit to be worth the hassle of finding out how to do so
<wallyworld> any anyway, what are you still doing awake?
<fwereade__> er, no good answer there
<fwereade__> wallyworld, it's yaml because that's meant to be nice to look at
<fwereade__> wallyworld, it's less nice than might be hoped in this case
<wallyworld> yeah
<fwereade__> wallyworld, we're not making --format yaml format output other than as yaml, and I question the cost/benefit of figuring out how to goose yaml into doing a slightly nicer thing ;)
<wallyworld> fwereade__: sure. wasn't seriously suggesting it was The Most Important Thing right now :-)
<wallyworld> but if people see it, they do wonder if something is wrong
<wallyworld> since it's inconsistent
<wallyworld> therefore bad
<fwereade__> wallyworld, I would certainly not complain about a fix for that
<wallyworld> i'll add it to my paper cuts list :-)
<thumper> ok, bootstrapping ec2 again
<wallyworld> good luck :-)
<wallyworld> let me know how you get on
<wallyworld> i only have 30 mins of battery left :-(
<wallyworld> still waiting for tradesman to arrive before i can relocate
<bigjools> I wondered whether you were coming or not wallyworld
<wallyworld> bigjools: yeah. long story. i'm really pissed off right now
<wallyworld> i'm never hiring this tiler ever again
<bigjools> what did I say about builders/tradies? :)
<wallyworld> well, i can't repeat it since there may be kids watching :-)
<bigjools> I would leave a note on your gate telling him to come back another day since he was too late
<bigjools> but I guess you're desperate for a bathroom :)
<thumper> hi bigjools
<thumper> wallyworld: it seemed to work this time
 * thumper hits it again
<bigjools> o/ thumper
<wallyworld> user error!!!
<thumper> wallyworld: bullshit
<wallyworld> pbkac :-P
<thumper> although no idea why last time
 * thumper watches pepper chew on maia's shoe
<bigjools> push it
<thumper> bigjools: push what?
<bigjools> it was an attempt at a salt and pepper joke
<bigjools> push it real good
<thumper> haha
<thumper> I've nicknamed her pepper pots
<thumper> wallyworld: ah ffs
<wallyworld> yeeeees?
<thumper> wallyworld: cloud-init checks for a valid machine id :)
<bigjools> you need an iron man now
<thumper> wallyworld: which 0/lxc/0 fails
<wallyworld> ah balls
<wallyworld> thumper: you want to comment out that check locally and i'll work up a fix?
 * thumper looks for where it fails
<wallyworld> thumper: i can include in my next branch as i don't think you need it urgently (if you do a local hack)
<thumper> state.IsMachineId
 * wallyworld nods
<thumper> wallyworld: is a regex
<wallyworld> yes it is
<thumper> wallyworld: how is your regex?
<wallyworld> ok
<wallyworld> trivial fix
<wallyworld> you want to fix locally and i'll land a fix later? i don't have much power left
<thumper> wallyworld: I'll just comment out locally
<wallyworld> sounds good
<thumper> for now
<wallyworld> yep
<wallyworld> thumper: bigjools: power almost gone. good bye cruel world
<thumper> wallyworld: use the power adapter
<thumper> duh!
<bigjools> wallyworld: adios
<wallyworld> thumper: i said before - we have no electricity
<wallyworld> till later today :-(
<thumper> oh... missed that
<thumper> using the phone internet?
<bigjools> smack your tiler in the chops
<wallyworld> so i'm off to bigjools' house after trademan gets here
<thumper> ack
<wallyworld> thumper: yes, using phone. big data bill :-(
<wallyworld> tradesman just arrived \o/
<bigjools> you get 1.5Gb free!
<wallyworld> yes, but with weekly video calls at soccer training on thursday......
<wallyworld> anyways, back online a bit later
 * bigjools considers an undercut charge for wallyworld
<thumper> ah ffs
<thumper> hmm...
 * thumper sighs
<thumper> bigjools: when wallyworld arrives, tell him that I'm fixing the IsValidMachine as I have to ... for other bits
<thumper> hi wallyworld
<thumper> wallyworld: cloudinit bits need to know if machineId is a container or not
<thumper> wallyworld: as I can't install lxc inside lxc on precise
<thumper> so the cloud-init bits need to be more careful
<wallyworld> thumper: you can look at the container type
<thumper> wallyworld: I don't have a machine, just an id
<thumper> wallyworld: https://codereview.appspot.com/10215043
<thumper> wallyworld: I might expose the ParentId function
<wallyworld> yeah
<wallyworld> or maybe implement an IsContainer?
<wallyworld> i think that would be better
<thumper> wallyworld: yeah...
<wallyworld> thumper: or pass both id and container type?
<bigjools> thumper: I missed your message because he had just arrived and I had to make the whinging Aussie a coffee
<wallyworld> i like that even more
<thumper> bigjools: haha
 * wallyworld is waiting for his backrub from bigjools
<thumper> wallyworld: what do you mean?
<wallyworld> thumper: just a sec, bigjools is snorting coffee
 * wallyworld slaps bigjools
<wallyworld> STFU
<thumper> wallyworld: you don't have to type it to him, he is next to you right?
<wallyworld> thumper: yes, but i wanted to share the hilarity
<bigjools> within spitting distance
<wallyworld> thumper: i mean instead of introspecting an id to see if the machine is a container, pass both the id and container type
<wallyworld> that sounds cleaner to me
<thumper> wallyworld: give me a func prototype
<thumper> I don't get what you are saying
<wallyworld> thumper: you say you only have an id in the place where you need to know if a machine is a container, right?
<thumper> right
<wallyworld> so, why can't we give that place access to the machine's container type as well as its id?
<wallyworld> ie pass the container type to that logic
<wallyworld> as well as the id
<thumper> wallyworld: because...
 * thumper thinks
 * wallyworld is planning for the case where we only have surrogate ids
<thumper> wallyworld: getting close to having a machine agent running inside the container
<wallyworld> \o/
<thumper> that one failed because of the valid machine check
<thumper> and I was using a previously bootstrapped instance
 * thumper destroyed, rebuilt, bootstraps
<wallyworld> thumper: you haven't seen the txn aborted issue again?
<thumper> wallyworld: I've not tried to create many containers
<thumper> just waiting for my new bootstrap node to come up
<thumper> then I'll add a couple
<wallyworld> kk
 * thumper looks at how to add a mount point to lxc config
<thumper> hi mramm
<thumper> mramm: really you, or just a machine?
<mramm> really me
<thumper> mramm: hey hey
<mramm> how goes?
<thumper> oh ffs
<thumper> wallyworld: hey
<wallyworld> hi
<thumper> wallyworld: the reason the machine agent didn't start properly on the bootstrap node
<thumper> wallyworld: is that the state server address it had was localhost :)
 * thumper kicks a new machine up
<wallyworld> thumper: so not my fault?
<thumper> wallyworld: no
<wallyworld> \o/
 * thumper has started machine 1
<thumper> hmm...
 * thumper makes notes of extra bits needed
<thumper> like, specific root file system location
<thumper> mounting /var/log/juju
<thumper> for some reason, when I run our cloud-init bits, I can't log in... damn password
<thumper> wallyworld: what I do need is a machine watcher that watches for changes in containers on the machine
<thumper> wallyworld: parameterised by container type
<thumper> wallyworld: so "get me a watcher for lxc containers on machine x"
<wallyworld> sounds reasonable
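thumper's "get me a watcher for lxc containers on machine x" could be sketched as below; the type, constructor, and channel-of-ids shape are assumptions for illustration, not the watcher juju-core eventually grew:

```go
package main

import "fmt"

// ContainerWatcher reports changes to containers of one type on a
// single machine; a hypothetical sketch of the requested API.
type ContainerWatcher struct {
	machineId string
	ctype     string
	changes   chan []string // ids of containers that changed
}

// WatchContainers returns a watcher parameterised by container type,
// e.g. WatchContainers("0", "lxc").
func WatchContainers(machineId, ctype string) *ContainerWatcher {
	return &ContainerWatcher{machineId, ctype, make(chan []string, 1)}
}

// Changes is the channel a provisioner task would select on.
func (w *ContainerWatcher) Changes() <-chan []string { return w.changes }

func main() {
	w := WatchContainers("0", "lxc")
	w.changes <- []string{"0/lxc/0"} // a new container appeared
	fmt.Println(<-w.Changes())       // [0/lxc/0]
}
```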
<thumper> wallyworld: ok, machine 1 lxc machine agent fails with auth issues
<thumper> so I'm probably doing something wrong there.
<wallyworld> thumper: i had auth issues also just bootstrapping
<wallyworld> and all of a sudden it came good
<wallyworld> ec2 is stupid :-(
<wallyworld> thumper: i have to run away and do the school pickup
<thumper> ack
<TheMue> morning
 * fwereade__ seeks a further review on https://codereview.appspot.com/10166044/
<TheMue> fwereade__: will do (and good morning btw)
<fwereade__> TheMue, good morning, and cheers :)
<TheMue> fwereade__: while I'm looking into your source, could you please add a hint to your review in https://codereview.appspot.com/10148045/diff/10001/state/state_test.go line 1272?
<fwereade__> TheMue, just add two services which each have a peer relation, remove them both, Sync, and check you only get one event.
<fwereade__> TheMue, might be nice as a separate test
<TheMue> fwereade__: ok, good hint, thanks
 * danilos_ steps out, back in ~30
<fwereade__> rogpeppe1, https://codereview.appspot.com/10166044/patch/1/9
<rogpeppe1> fwereade__: looking
<fwereade__> rogpeppe1, you mean to just test that it's hooked up by doing something 100% broken and hoping for a characteristic error?
<rogpeppe1> fwereade__: yes
<fwereade__> rogpeppe1, I'm not sure that's a very solid technique compared to running the operation and verifying success
<fwereade__> rogpeppe1, ah
<fwereade__> rogpeppe1, but it's just the perm test
<rogpeppe1> fwereade__: because we must get the permission-denied error (which is all this test is checking for) before it even tries to attempt the operation
<fwereade__> rogpeppe1, gotcha
<fwereade__> rogpeppe1, thanks
<rogpeppe1> fwereade__: np
<jam> mgz_, wallyworld: https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3  ?
<mgz_> ta
<TheMue> fwereade__: ping
<fwereade__> TheMue, pong
<TheMue> fwereade__: the resumer shall process the ResumeAll() on State.runner?
<fwereade__> TheMue, yeah
<fwereade__> TheMue, you could even just have a state.Resumer type that just happened to implement Worker
<TheMue> fwereade__: ok, and here periodically?
<fwereade__> TheMue, yes please
<fwereade__> TheMue, I think I need to go eat some lunch though, didn't quite get round to it yet
<fwereade__> bbiab
<TheMue> fwereade__: Resumer in state? OK, so I don't need a function state.ResumeAll()
<TheMue> fwereade__: enjoy your meal
<fwereade__> TheMue, I'm not sure
<fwereade__> TheMue, think through the tradeoffs and just precis why what you pick is more sensible ;p
<fwereade__> TheMue, we can chat after kanban if you like
<TheMue> fwereade__: sure, we can do
 * TheMue just fetched an espresso
<rogpeppe1> fwereade__: haven't we decided to lose the stuttering?
<fwereade__> TheMue, https://codereview.appspot.com/10148045/ reviewed, nearly there
<TheMue> fwereade__: thanks
<TheMue> fwereade__: I'm a bit confused now. while the tests with the two Destroy() calls before the last proposal only raised one event, I now get two :(
<TheMue> fwereade__: so the coalescing as designed in the watcher doesn't work
<TheMue> fwereade__: ah, it has to do with the change of Sync() to StartSync() in the asserts
<TheMue> have to leave for today. will comeback to you tomorrow, fwereade__
<ehw> does go-juju upload the tools tarball to MAAS automatically, or does I need to upload it manually?
<ehw> I guess I should phrase that differently: is go-juju actually intended to work with MAAS?
<fwereade__> ehw, --upload-tools will work if you're building from source; the alternative is `juju sync-tools` which is somewhat inelegant -- and currently requires valid ec2 credentials in env vars -- but that'll copy from the same place other environs get theirs
<ehw> fwereade__, building from source, as in it works in unreleased versions?
<ehw> fwereade__ i.e. trunk?
<fwereade__> ehw, with trunk, you need --upload-tools
<fwereade__> ehw, we made a mistake a few revs ago and trunk doesn't currently bootstrap released 1.10 tools, so it's a necessity today
<fwereade__> ehw, in general it should not be
<ehw> fwereade__, I'm running from ppa:juju/devel, so I'm assuming it's broke?
<fwereade__> ehw, I'm not certain what version that implies -- 1.11?
<ehw> 1.11.0-1~1240~precise1
<ehw> against maas 1.2, from maas-maintainers/stable
<fwereade__> ehw, if you have ec2 credentials, you can run `juju sync-tools` which should get you matching versions copied into environment storage
<fwereade__> ehw, this is a sucky story
<fwereade__> ehw, and it is in hand
<fwereade__> ehw, but it's the story we have today
<ehw> fwereade__, will probably need to do some django hacking then; we need to go on the bug warpath over the next couple of weeks
<fwereade__> ehw, sorry, I'm missing context -- what is it you're trying to do exactly?
<ehw> fwereade__, in the larger context, document and debug MAAS, Juju, Openstack, and Landscape in preparation for the 13.09 LDS release
<ehw> fwereade__, right now, I'm just trying to get juju-core to talk to maas ;)
<fwereade__> ehw, if it's tools that are the problem you can do it even more low-tech
<fwereade__> ehw, grabbing this and putting it in your environ's storage under the same path should get you a step further https://juju-dist.s3.amazonaws.com/tools/juju-1.11.0-precise-amd64.tgz
<ehw> fwereade__, I've got the tarball, but I need to put it where juju is expecting it
<ehw> fwereade__, which appears to be under MAAS/api/1.0/files/
<fwereade__> ehw, in that case you *should* just be able to put it in tools/ therein
<fwereade__> ehw, but: what actual error are you seeing?
<ehw> fwereade__, error: no tools available
<ehw> fwereade__, it hits the MAAS api looking for "GET /MAAS/api/1.0/files/?prefix=tools%2Fjuju-&op=list HTTP/1.1"
<fwereade__> ehw, ok, yeah, putting something under tools/ in there should help
<fwereade__> brb, let me know if it helps
<ehw> fwereade__, yeah, but it's not a filesystem, it's binary blobs in the db
<ehw> probably planning on using object storage for that at some point
<fwereade__> ehw, you would possibly want to talk to bigjools about that, then, because I have not actually deployed on maas myself
<fwereade__> ehw, but s3 is not, either, it just kinda looks like one if you squint a bit -- if the blob names can have /s you should be fine
<fwereade__> ehw, sorry, I need to be gone for a while now :(
<ehw> fwereade__, no worries; I think I might be able to feel my way through this
<ehw> ok, had to pop into maas shell and put the file there manually, but it bootstraps now
<thumper> oops
<thumper> forgot to tear down my ec2 instances yesterday
<jcastro> thumper: I suppose that's better than forgetting them from 4 months ago
<thumper> jcastro: haha
<thumper> for sure
 * thumper out for lunch
#juju-dev 2013-06-13
<wallyworld_> thumper: hi, just taking the dog to the vet. i'm just about to propose the status stuff. took ages to get the new tests passing. i ended up exporting ParentId() and adding a couple of other helpers
<wallyworld_> i'll do the watcher stuff next
<thumper> wallyworld_: ok
<wallyworld_> the status stuff supported arbitrary nesting
<wallyworld_> but we don't use it yet but it will be there for when we do
<wallyworld_> bbiab
<wallyworld_> thumper: found another problem - there are docs ordered by machine id which do an Atoi() but container ids fail that since they are not numbers. so i'll have to find a way around that
<thumper> hahaha
<thumper> bummer
<wallyworld_> yeah, i'll need to split the id and process from left to right
<wallyworld_> thumper: we you have a moment https://codereview.appspot.com/10252044
<wallyworld_> when
<wallyworld_> thumper: question about container watchers - i'll first do a lifecycle watcher. does that suit your needs?
<thumper> wallyworld_: hey,
<wallyworld_> ho
<thumper> wallyworld_: I think that is what I need
<wallyworld_> ok, can add an entity watcher later if needed
<thumper> what's the difference?
<wallyworld_> i think lifecycle is created/deleted etc, entity is state changes and such
<wallyworld_> not sure exactly yet
<wallyworld_> i'll know once i write some tests
<thumper> wallyworld_: I want a watcher like the one that the provisioner watches :)
<thumper> don't know what type that is
<wallyworld_> ok
<wallyworld_> thumper: https://codereview.appspot.com/10250044
<thumper> wallyworld_: soon
<wallyworld_> thumper: no problem, just wanted to let you know they were there
<thumper> kk
 * thumper crosses fingers
 * thumper waits for ec2 to bootstrap
<thumper> wallyworld_: http://paste.ubuntu.com/5760381/
<rogpeppe1> mornin' all
<thumper> hi rogpeppe1
<rogpeppe1> thumper: yo!
<thumper> rogpeppe1:  http://paste.ubuntu.com/5760381/
<thumper> rogpeppe1: manual poking to get to this stage
<thumper> but the bits are working
<thumper> well, mostly working
<rogpeppe1> thumper: why the lack of quotes around the container name?
<rogpeppe1> thumper: good stuff  BTW!
<thumper> rogpeppe1: yaml output
<thumper> needed around 0 so not an int
<rogpeppe1> thumper: ah of course. a number needs to be quoted
<thumper> but 1/lxc/0 is obviously a string
<thumper> so no quotes needed
<thumper> still using manual hackery to manually create a provisioner on the machine
<thumper> but it is using the provisioner task and lxc broker
<thumper> with a real cloud-init to start the machine agent
 * thumper jumps into the meeting hangout
<wallyworld_> thumper: saw the pastebin earlier, looks nice, will look even nicer with my juju status branch landed :-)
<jam> mgz: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
<jam> wallyworld_: ^^
<wallyworld_> oh different
<wallyworld_> TheMue: ^^^^^^^^^^
<jam> wallyworld_: where did you get your link from? (This one was on the calendar)
<wallyworld_> jam: the calendar
<fwereade__> wallyworld_, jam, I think I'm in the calendar one too... https://plus.google.com/hangouts/_/bf3f4cfe715017bf60521d59b0628e5873f2a1d3
 * fwereade__ leaves
<wallyworld_> https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
<thumper> https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
<wallyworld_> fwereade__: yes, thumper stuffed up :-)
<wallyworld_> thumper: were you going to chat about containers? perhaps a joint meeting with me too?
<thumper> wallyworld_: I wanted to discuss the provisioner refactoring
<wallyworld_> ok, i have some questions as well
<mramm> hey all,
<mramm> sorry I set my alarm wrong
<mramm> just went off
<mramm> and I realize the team meeting is not starting in 15 min, it started 45 min ago
<mgz> mramm: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
<mramm> jam: did you guys chat about when tarmac is going to "land"?
<jam> today
<jam> mramm: I'll be making it live today. Everyone seemed happy with the plan I proposed.
<mramm> cool
<mramm> sounds good
<jam> thumper: quick poke about the "InParallel" test.
<jam> What if we change it from sleeping, to having one subprocess block until the other one is started?
<thumper> jam: yes...
<jam> so it would wait for a file that the other one creates, for example
<thumper> I don't really care how it is changed as long as it is testing the parallel gathering
<jam> you could have subprocess a creates file a and waits for b
<jam> and vice versa
<jam> so they must both be called at the same time
<TheMue> fwereade__: ping
<fwereade__> TheMue, pong
<fwereade__> TheMue, still eating lunch
<fwereade__> TheMue, I may have failed to communicate that if your tests depend on some specific number of events they're Doing It Wrong
<fwereade__> TheMue, but if the tests just watch for events until there are no more cleanups to perform... then... hmm. there may actually still be a race. bah
<fwereade__> TheMue, yeah, there's a race, but it only affects the followup
<fwereade__> TheMue, so all you need to do is watch for cleanup events until you get one and there aren't any more cleanups required
<fwereade__> TheMue, at that point you can be sure that the cleaner did its job
<jam> fwereade__: https://codereview.appspot.com/10234047 makes a test that occasionally fails due to timing hiccups into a faster and reliable 'event' sort of mode.
<jam> Something I'd like to land before switching to tarmac
<fwereade__> jam, LGTM
<TheMue> fwereade__: Yeah, it does. Only want to talk about the timeout with you.
<fwereade__> TheMue, ah, go on
<TheMue> fwereade__: You wondered about a "typo", but sadly it is none. The watcher in the test gets the second event (where then no more cleanups are needed, I followed it) after such a long time.
<TheMue> fwereade__: Maybe you've any idea for the reason.
<TheMue> fwereade__: Because that definitely should not be.
<fwereade__> TheMue, ah, are you not syncing in your select loop? StartSync at the start of that loop, sorry I missed it
<fwereade__> TheMue, default sync period is 5s
<fwereade__> TheMue, ha
<TheMue> fwereade__: IMHO I've tested it also with StartSync in the loop, but I'll verify it.
<fwereade__> TheMue, no, I know what's going on, sorry
<TheMue> fwereade__: Oh *listening*
<fwereade__> TheMue, StartSync just at the top of the loop is not good enough
<fwereade__> TheMue, because the watcher event could come in after that
<fwereade__> TheMue, you need to keep on independently StartSyncing as you go through
<fwereade__> TheMue, with StartSync at the top, `case <-time.After(50 * time.Millisecond): continue` should do the trick
<jam> mgz: can you look at https://codereview.appspot.com/10234047 so I can land it?
<TheMue> fwereade__: ah, push harder, nice idea
<TheMue> fwereade__: will add it and come back to you
<TheMue> fwereade__: but please don't let me disturb you at lunch, i'm sorry
<fwereade__> jam, fwiw, I think there may still be branches approved that aren't necessarily ready to land, please unapprove them before you start tarmaccing ;)
<mgz> looking at it
<mgz> and ugh, shell
<jam> mgz: :)
<TheMue> fwereade__: just one last thing: fantastic, it works!
<TheMue> fwereade__: thanks
<jam> mgz: so have I turned you off so completely ?
<mgz> jam: commented
<jam> mgz: thanks
 * TheMue => lunchtime
<danilos> mgz, jam: hey, https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3 :)
<mgz> ta
<danilos> mgz, I am back, fwiw (and actually already fixing another annoyance I hit)
<mgz> likewise
<danilos> anybody wants to take a peek at trivial branch: https://codereview.appspot.com/10244044
<TheMue> fwereade__: any good idea on how to test the resumer?
<danilos> jam: you had a fix for StateSuite.TestOpenDelaysRetryBadAddress failure or was that something else?
<fwereade__> TheMue, none whatsoever -- just start it and stop it I reckon
<fwereade__> TheMue, (sorry I mised you)
<fwereade__> TheMue, we'd have to inflict damage to the transaction log on purpose to test it, and spending time on that in particular seems like a bit of a waste of effort
<TheMue> fwereade__: no prob, i'll add this. the rest has been simple. cleaner is also proposed again
<TheMue> fwereade__: yep, think so too. i took a look on how ResumeAll() works, and here we would have to do evil stuff under the hood :)
<jam> mgz or danilos: the juju cross-team meeting is supposed to happen in about 10 minutes, could one of you make it?
<mgz> I can if needed
<jam> mgz: thanks, I can't really make it today
<jam> (today is my wife's b-day)
<jam> mgz: https://plus.google.com/hangouts/_/calendar/YW50b25pby5yb3NhbGVzQGNhbm9uaWNhbC5jb20.2ijeg1lk8l6gq6d4ilo6jot1j4 is the link from email
<mgz> ta
<TheMue> tarmac help needed. i've got an "old" branch now with two lgtms (after a final local change). now i've committed and merged it locally. but i'm not sure about the reproposal for launchpad.
<mgz> see the instructions in jam's post to the list
<TheMue> mgz: i've seen them, but still have problems/questions. if i set my branch to approved now the final changes won't be in there.
<mgz> you just go to the branch webpage, and repropose against lp:juju-core
<TheMue> with "resubmit proposal"?
<mgz> yup
<TheMue> and my still local changes? simple bzr push before resubmit?
<mgz> yes
<TheMue> that sounds good, thanks
<TheMue> so, now i'm curious how it will work ;)
<fwereade__> TheMue, mramm: ready?
<mramm> so, meeting
<mramm> yep
<mramm> hangout?
<TheMue> yep
<fwereade__> mramm, TheMue, I'll start one
<rogpeppe1> just going to pick up van from MOT test. back in 15 mins.
<TheMue> fwereade__: ping => https://codereview.appspot.com/10266043
<fwereade__> TheMue, cheers
<jamespage> mgz, do you have a release tarball for 1.11.0?
<mgz> jamespage: no, but I can make one
<mgz> it just won't be what was actually used (as I haven't switched dave onto the new method yet)
<jamespage> mgz, great - it would be nice to stuff a new release into saucy as we make it the default
<ahasenack> so juju-core trunk now wants a service name in the config yaml file?
<ahasenack> https://code.launchpad.net/~fwereade/juju-core/config-5-state-service-config-yaml/+merge/168579
<ahasenack> just checking
<rogpeppe1> time to stop. g'night all.
<thumper> morning folks
<thumper> fwereade__: ping
 * thumper bootstraps ec2
<thumper> geez this takes a while...
<thumper> mramm2: ping
<thumper> jcastro: ping?
<thumper> m_3: ping?
 * thumper feels the loneliness of being at utc+12
<jcastro> yo thumper
<thumper> jcastro: hey man
<jcastro> I am in the middle of BBQing, so in and out
<jcastro> so keep talking
<thumper> jcastro: what is a nice simple charm to deploy?
<thumper> jcastro: the first one to be in an lxc container in ec2
<m_3> thumper: pong
<m_3> what's up?
<thumper> m_3: just feeling lonely :)
<m_3> ha
<thumper> m_3: wanting an easy charm for testing
<m_3> mysql
<m_3> it's my goto for when I need to precache lxc and stuff like that
<thumper> precache?
<jcastro> thumper: wordpress is our defacto
<jcastro> etherpad-lite has no deps though
<jcastro> if you only want a single container with no relations
<m_3> thumper: yeah, the juju-0.7 workflow was pretty flawed wrt lxc
<m_3> thumper: you'd run juju bootstrap and that'd run zk and prep stuff
<m_3> but the lxc template images wouldn't build out (or download cloudimages) until the first time you actually _deployed_ a service
<m_3> um... without letting the user know what was going on
<thumper> :)
<m_3> so it'd just quietly block while downloading
<m_3> you'd naturally kill it
<m_3> and wind up with a halfway downloaded image in /var/cache/lxc
<jcastro> let's not spoil him with how it sucked
<m_3> ha
<m_3> sorry
<m_3> he asked
<jcastro> that way he can come up with something awesome without being jaded like we are
<m_3> yes, /me will sit on his hands
<jcastro> "and then, we asked 10 people to download 450mb of images on conference wireless for this charm school"
<jcastro> hahahah
<jcastro> <--- what were we thinking
 * thumper has some thoughts around that
<thumper> jcastro: what if we wrote a simple go http server for you
<thumper> jcastro: so at a charm school, you could run this, and it serves just the ubuntu-cloud lxc template
<thumper> people could then run wget to grab it locally
<m_3> we can cache cloud images lots of ways
<m_3> lxc builds nicely from cloud images now
<m_3> I also liked having the download happen during package install... i.e., if the local provider is in its own package, it can get the image templates during package install
<m_3> people are used to waiting on apt
<m_3> thumper: but really we're open to whatever makes sense man
<thumper> well, I can see how to make the local provider now that I've messed around a lot with lxc
<thumper> but containerisation first, then network, then local provider
<thumper> but it is coming
 * m_3 loves lxc
<jcastro> thumper: that sounds like a plan
<wallyworld_> thumper: you happy with the juju status mp now?
<thumper> wallyworld_: sorry, not looked at it
<thumper> trying to work out why what worked yesterday isn't today
<wallyworld_> np. fat chance i'll get a 2nd +1 today :-(
<thumper> at least I have worked out why my programs were hanging
<wallyworld_> reason?
<thumper> hmm...
<thumper> lxc isn't nice
<thumper> golxc runs lxc-start
<thumper> then waits forever for it to be running
<thumper> if there is an error in the config somehow...
<thumper> then it won't start
<thumper> and lxc-start doesn't return non-zero exit
<thumper> also found
<thumper> that if I pass the lxc.conf file to lxc-create, the mount works
<thumper> if I only pass it to lxc-start, the mount fails
<thumper> so it doesn't start
 * thumper quietly rages
<thumper> wallyworld_: http://paste.ubuntu.com/5763044/
<wallyworld_> thumper: niiiiiice
<wallyworld_> would look better with my juju status :-)
<thumper> :)
<wallyworld_> can we wrap lxc-start somehow?
<thumper> that is with --force-machine
<thumper> wallyworld_: I am
<thumper> twice
<wallyworld_> maybe 3 times
<thumper> I'm going to ask for some help around that
 * thumper continues blindly (well with poor vision)
<thumper> wallyworld_: I think part of the reason is that we are asking to start --daemon
<thumper> which appears to be fire and forget
<wallyworld_> sounds about right
<thumper> so, zero rc, all good
<wallyworld_> my lxc foo is not excellent
<thumper> but the issue appears to be "we wait forever"
<wallyworld_> i really wish we had decent progress monitoring
<wallyworld_> baked into the cli
<thumper> wallyworld_: I am using my logging to trace stuff though
<thumper> very nice
<wallyworld_> \o/
<wallyworld_> but we need to productise stuff for a user
 * thumper nods
<thumper> yeah
<thumper> wallyworld_: lxc containers on the bootstrap node are working fine
<wallyworld_> yay
<thumper> wallyworld_: got a machine agent up and running on machine 0
<thumper> 0/lxc/0 that is
<wallyworld_> manually?
<thumper> sure...
<thumper> I need that container watcher
<wallyworld_> thumper: up for review yesterday
<thumper> can't automate the provisioner until I get that
<mramm2> thumper: pong
<thumper> \o/
<thumper> hi mramm2
<thumper> mramm2: was nothing important, was feeling lonely :)
<mramm2> haha
<wallyworld_> thumper: i did tell you it was there :-P
 * thumper sighs
<mramm2> thumper: I don't know when andrew starts
<mramm2> but nate the other new hire is now slated to start on the 29th
<thumper> mramm2: so... using --force-machine, you can deploy into a running container
<thumper> mramm2: but the 29th of july, not june
<mramm2> manual network bridging or what
<thumper> mramm2: just using the lxc default bridge
<mramm2> july
<thumper> so no access in, just out
<mramm2> wow, I missed that
<mramm2> you are of course right
<mramm2> that is a long time
<mramm2> hopefully andrew will start sooner
 * thumper nods
<mramm2> so you are bridging out, and everything is working
<mramm2> rock and roll
#juju-dev 2013-06-14
<wallyworld_> fwereade__: hey, thanks for the reviews now go to bed!
<thumper> fwereade__: are you up again?
<thumper> wallyworld_: want to do an old fashioned review? https://code.launchpad.net/~thumper/golxc/mockable/+merge/169317
<wallyworld_> sure
<thumper> diff still updating
<wallyworld_> i really don't like rietveld
<thumper> for latest additions
<thumper> you're all good now
<thumper> wallyworld_: also, can I have a hangout with you to teddy-bear some design ideas?
<wallyworld_> sure
<thumper> hmm...
 * thumper needs a recursive find and replace
<thumper> wallyworld_: thanks for the golxc review, will get to those
<thumper> just doing a massive move branch
<wallyworld_> np
<wallyworld_> i wish we used interfaces more in juju
<thumper> wallyworld_: fwereade__ asked to have environs.Instance moved to instance.Instance
<thumper> so container.Container doesn't need environs.Instance
<wallyworld_> ok, sounds good
 * thumper goes to make lunch
<thumper> wallyworld_: chat?
<wallyworld_> sure
<thumper> https://plus.google.com/hangouts/_/3fc078d1d8dacd2c85f1f72fb71d3ed67d97b9c1?hl=en
<thumper> wallyworld_: you have two reviews
<thumper> wallyworld_: trivial review for you on Rietveld: https://codereview.appspot.com/10253053
 * wallyworld_ looks
<thumper> wallyworld_: happy for a trivial on that?
<wallyworld_> yep
 * wallyworld_ looks for the other review
 * thumper woners
<thumper> woners
<wallyworld_> try again
<thumper> wonders?
<wallyworld_> wonders perhaps why the D key is broken?
<thumper> I typed woners again first tiem
<wallyworld_> tiem?
<thumper> our tarmac lander checks for two LGTMs
<wallyworld_> :-P
<thumper> time
<thumper> pierce off fuck knuckle
<wallyworld_> now, now. public forum :-P
<wallyworld_> i forgot about the tarmac thing
<wallyworld_> maybe we need to patch it to allow "LGTM trivial" through
<thumper> wallyworld_: yeah, or perhaps we should just get people to review in lp :)
<thumper> to get the two reviews from the merge propose
<wallyworld_> PLEASE YES
<thumper> proposal
<thumper> that way we can check for a trivial tag
<wallyworld_> thumper: i'm sure i didn't merge your branch - only trunk. i last merge late yesterday i think. i normally merge one last time anyway prior to submitting
<thumper> wallyworld_: that change wasn't in trunk
<wallyworld_> hmmmm.
 * wallyworld_ is confused
<thumper> however, I have done things like this with this workflow
<thumper> switch to someone elses branch to look
<thumper> create a new branch with "switch -b new-work" for something new
<wallyworld_> nah, didn't do that
<thumper> however since I wasn't on trunk, it starts with the last revision of the other branch
<wallyworld_> anyways, i'll sort it out
<wallyworld_> after lunch :-)
<thumper> kk
 * thumper off early due to late work last night
<thumper> ciao everyone
<TheMue> morning
<TheMue> fwereade__: thx ;) the resumer is indeed not very complex.
<fwereade__> TheMue, haha, yeah :)
<TheMue> fwereade__: hmm, somehow roger's and my branch merge proposals of yesterday don't go into trunk but they have status Approved
<TheMue> fwereade__: any idea as an old launchpad user?
<jam> TheMue: https://code.launchpad.net/~themue/juju-core/025-cleaner-worker/+merge/169228 ?
<jam> As I warned, it doesn't have a Commit Message
<jam> you can copy the description into "Set Commit Message"
<fwereade__> jam, btw, thank you ever so much for doing all that work
<jam> (its probably the biggest case where things fail for people with Tarmac, I tried to be upfront that it is a failure mode)
<jam> I've had that problem as well
<jam> there is a patch somewhere that has tarmac set it back to Needs Review with a note that it needs a commit message, which at least avoids the black-hole of 'why isn't this working'
<jam> but I haven't tracked it down yet.
<TheMue> jam: Hmm, I thought I have something in the commit message. *wonder*
<jam> TheMue: https://code.launchpad.net/~themue/juju-core/025-cleaner-worker/+merge/169228 you have a Description set, but not a Commit Message
<jam> separate fields
<jam> (Launchpad models it as one is the conversation you want to have, vs the thing you want saved for posterity)
<TheMue> jam: *aaargh* have to learn this first, thanks
<jam> TheMue: as an added bonus, I believe LP's resubmit actually removes a commit message if it was set in the original proposal. And *that* is pretty stupid. :)
<TheMue> *drumroll* will see now
<jam> TheMue: no proposals for your prerequisite branch
<jam> TheMue: I'm cheating by looking at the tarmac log
<jam> your 024-cleanup-watcher is marked as 'this must land before I can land 025' but you don't have '024' proposed for merging (and approved)
<TheMue> jam: that is already merged. do i have to resubmit it too?
<jam> TheMue: I think it is just tarmac confused because of the resubmit
 * TheMue is sometimes really astonished
<jam> the old proposal was against a different branch, so it isn't seeing it.
<TheMue> jam: yes, it has been before the switch
<jam> Because you used 'resubmit' things got copied across that aren't relevant (It is no longer a prerequisite)
<jam> TheMue: fortunately most of this irons itself out naturally with time, I'll unstick it in a sec
<TheMue> jam: thankfully my next branch is one of the post-switch-era
<TheMue> jam: great, thanks. otherwise i would have resubmitted and approved it (with commit message)
<jam> TheMue: fortunately for prerequisite requirements, Tarmac *does* post to the request: https://code.launchpad.net/~themue/juju-core/025-cleaner-worker/+merge/169228/comments/376486
<jam> TheMue: what other branch is affected?
<TheMue> jam: nice guy ;)
<jam> TheMue: pre-commit hook is now running. It failed to merge, though.
<TheMue> jam: aha?
<jam> TheMue: my fault
<jam> I made a change which broke the tarmac config
<jam> will fix now
<TheMue> jam: and i'll change my "Getting started" document with our usage of the tools and the procedures
<jam> TheMue: precommit now firing again
 * jam crosses fingers
<jam> TheMue: it is currently on cmd/juju tests, so looks pretty good. We'll find out in about 15min.
<TheMue> jam: great, fantastic work
<jam> TheMue: sorry it didn't work first time right away, but thanks for being patient
<TheMue> jam: no problem, typical migration time experience
<TheMue> jam: i'm more happy about your effort for tarmac
<TheMue> hmm, sound somehow wrong
<TheMue> sounds
<TheMue> jam: happy about the work you did :)
<jam> thanks
<TheMue> jam: so to be sure: if it fails the status will be set to "Needs review" again, but currently no mail notification?
<jam> TheMue: it should set the status, and post the failure to the Launchpad proposal, which should generate an email for you
<jam> the only thing it doesn't send the email for (but should) is missing a commit message.
<TheMue> jam: ok, thanks
<jam> TheMue: I certainly see emails from earlier, check if you got them as well
 * TheMue now also has an eye on the video stream from Toulouse, where the A350XWB starts for its first flight
<TheMue> Ah, just took off, great!
<jam> TheMue: and it has landed
<TheMue> jam: yep, just seen, fantastic. great job!
 * TheMue is at lunch
<mgz> danilo, laaaging, you take over for a sec :)
<mgz> you can also install a different editor if you  like :P
<danilos> mgz, I just tried '-not -name' on find and it works (though -exec seems harder since it doesn't return the subprocess return code, so I am leaving that as xargs :))
<TheMue> *pinDrop*
<mgz> ping drop? :)
<TheMue> mgz: no, just a pin drop, it's so quiet here
<TheMue> mgz: but ping drop sound better on irc, indeed
<TheMue> sounds
<Makyo> fwereade__, chance I could get another look at https://codereview.appspot.com/10237043/ SetCharm soon?
<fwereade__> Makyo, sorry, ofc
<Makyo> fwereade__, thanks, much appreciated :)
<fwereade__> Makyo, LGTM, someone else review it too please :)
<Makyo> fwereade__, thanks.
<TheMue> Makyo: 2nd LGTM ;)
<Makyo> TheMue, Thanks \o/
<TheMue> Makyo: yw
<rogpeppe1> TheMue: you have a review
<TheMue> rogpeppe1: just seen the mail, thanks
<TheMue> rogpeppe1: the non-fatal error is simply due to the fact that it may happen during a short network problem but shall be automatically retried later (currently each minute).
<rogpeppe1> TheMue: how is that different from any other error we encounter in the other workers when talking to the state?
<TheMue> rogpeppe1: if the resume fails what would we win if we let the worker crash?
<TheMue> rogpeppe1: the cleaner also only logs and doesn't crash
<rogpeppe1> TheMue: if the state is borked because its connection is broken, won't we need to reconnect to it?
 * rogpeppe1 wishes he had a better understanding of mgo failure-modes
<TheMue> rogpeppe1: me too
<TheMue> rogpeppe1: imo both cleaner and resumer don't have their own state that may be confused by an error, besides their connection to mongo
<TheMue> rogpeppe1: so assuming that the connection is able to reconnect they can repeat their work
<TheMue> rogpeppe1: but indeed, right now i'm not sure about this connection
<rogpeppe1> TheMue: yeah, i dunno
<rogpeppe1> niemeyer: ping
<niemeyer> rogpeppe1: Hea
<niemeyer> heya
<rogpeppe1> niemeyer: yo!
<rogpeppe1> niemeyer: how's tricks?
<niemeyer> rogpeppe1: Going :)
<rogpeppe1> niemeyer: a couple of questions
<niemeyer> rogpeppe1: Shoot.. I may not be able to answer them right now because lunch is waiting for me, but perhaps I can think over lunch.
<rogpeppe1> niemeyer: just hoping to avoid delving into the source of mgo and wondering: if you get an error, is it necessary to reconnect to mongo, or will it happen automatically?
<niemeyer> rogpeppe1: It will reconnect by itself, but!
<niemeyer> rogpeppe1: The error on a session, after a connection is abruptly broken, won't go away until you either: 1) discard the session and create a new one; or 2) call Refresh on it
<rogpeppe1> niemeyer: second: how easy would it be to add non-authorized s3 access to public-access buckets to goamz? and would you accept a patch that does that?
<niemeyer> rogpeppe1: Can't parse the second question
<niemeyer> rogpeppe1: I'd be very much against adding non-authorized access to public buckets.. (!? :-)
<rogpeppe1> niemeyer: ok. even though they're open access?
<niemeyer> rogpeppe1: Sorry, I jokingly meant the question makes no sense
<TheMue> niemeyer: open read access, to be more specific
<rogpeppe1> niemeyer: currently you need amazon credentials to access them even though amazon doesn't require them, i think
<rogpeppe1> niemeyer: when using goamz, that is
<rogpeppe1> niemeyer: so... when does mgo reconnect by itself?
<rogpeppe1> niemeyer: i'm not helping here by mixing the two questions up together :-) let's deal with the first one first, shall we?
<niemeyer> rogpeppe1: I think it'd be okay to accept empty credentials for s3, yeah
<niemeyer> rogpeppe1: I haven't thought about the implications of that in the code base, but I'd be happy to analyze with you a patch that does that
<niemeyer> rogpeppe1: mgo reconnects whenever necessary
<rogpeppe1> niemeyer: our existing plan was to just talk directly to s3 and parse the xml ourselves, but i thought that if it's easy to do in goamz we should probably do that
<niemeyer> rogpeppe1: and once every few seconds either way, to keep the cluster state up-to-date
<rogpeppe1> niemeyer: but "after a connection is abruptly broken" it's not necessary?
<rogpeppe1> niemeyer: maybe i'm not getting the terminology - a "session" is a single connection? or a set of connections?
<niemeyer> rogpeppe1: A session is an *mgo.Session
<niemeyer> rogpeppe1: If you see a connection error on a session, that error doesn't go away until (1) or (2), as stated
<niemeyer> rogpeppe1: You don't have to reconnect manually, you just need to ack the error via closing and recreating, or via the Refresh method
<niemeyer> rogpeppe1: This is just because Bad Stuff could happen if a connection was recreated behind your back
<rogpeppe1> niemeyer: i think i need to be refreshed on the terminology here.
<rogpeppe1> niemeyer: a Session corresponds to... a single TCP connection?
<niemeyer> rogpeppe1: http://godoc.org/labix.org/v2/mgo#Session.SetMode
<niemeyer> I'll have lunch while you read this
<niemeyer> biab
<rogpeppe1> niemeyer: ah, perhaps you mean that once you've got a *Session it always automatically reconnects?
<rogpeppe1> niemeyer: enjoy
<TheMue> rogpeppe1: btw, List() and URL() already work off the reader, they were trivial, like Get() will be too ;)
<rogpeppe1> TheMue: ok, cool
<TheMue> rogpeppe1: http get, unmarshalling the two nested structs, iterate over contents, fetch key field, strings.HasPrefix(), simple
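The approach TheMue sketches — unmarshal S3's nested ListBucketResult XML, iterate the Contents entries, and filter keys by prefix — boils down to something like the following. The field names follow S3's listing schema, but `keysWithPrefix` is an illustrative helper, not the actual goamz or juju code:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// listBucketResult is a minimal subset of S3's ListBucketResult
// document: the two nested structs TheMue mentions.
type listBucketResult struct {
	Contents []struct {
		Key string
	}
}

// keysWithPrefix unmarshals a bucket listing body and returns the
// keys matching the given prefix.
func keysWithPrefix(body []byte, prefix string) ([]string, error) {
	var result listBucketResult
	if err := xml.Unmarshal(body, &result); err != nil {
		return nil, err
	}
	var keys []string
	for _, c := range result.Contents {
		if strings.HasPrefix(c.Key, prefix) {
			keys = append(keys, c.Key)
		}
	}
	return keys, nil
}

func main() {
	doc := []byte(`<ListBucketResult>
  <Contents><Key>tools/juju-1.0.tgz</Key></Contents>
  <Contents><Key>other/readme</Key></Contents>
</ListBucketResult>`)
	keys, _ := keysWithPrefix(doc, "tools/")
	fmt.Println(keys)
	// prints [tools/juju-1.0.tgz]
}
```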
<rogpeppe1> niemeyer: for when you come back: when can you see a connection error on a session? only at Dial time? or can you get a connection error at any later point? (in which case, how can you tell if it's a connection error so you know to call Refresh or discard the session?)
<TheMue> so, me has to leave for today, we'll continue on Monday morning
<TheMue> have a nice weekend
<rogpeppe1> niemeyer: i've also got to go unfortunately. i'd like to continue the conversation though, as my understanding of this area is lamentably poor and it affects how we will approach some things.
<rogpeppe1> niemeyer: have a great weekend
<rogpeppe1> and good weekends to all!
<niemeyer> rogpeppe1: You can get a socket error at any point
<niemeyer> rogpeppe1: It's a normal network connection we're talking about
<niemeyer> rogpeppe1: Supposedly, you won't want to continue doing whatever was being done on any unknown error
<niemeyer> rogpeppe1: Re-establishing a connection may happen to a different primary, and a pretty different state from what was being accessed before
<niemeyer> rogpeppe1: That's why we cannot simply transparently re-establish the connection
<FunnyLookinHat> So - I think this might be one of the last few issues barring Rackspace OpenStack support for JUJU: https://bugs.launchpad.net/goose/+bug/1124561
<_mup_> Bug #1124561: the Content-Length header is missing <Go OpenStack Exchange:Invalid> <https://launchpad.net/bugs/1124561>
<FunnyLookinHat> I can't imagine why the requests wouldn't have the correct Content-Length value
<FunnyLookinHat> Even adding Content-Length: 0 seems to be a fix?
#juju-dev 2013-06-16
<jam> fwereade: isn't it your weekend ? :)
<jam> but really, great job on triaging
<wallyworld_> jam: looks like the landing bot doesn't like my branch. all tests pass locally
<wallyworld_> have you seen the failures before with the bot?
<jam> wallyworld_: see earlier bugs, it seems there is a thread that stays alive "sometimes" ,which on the bot is about 3 in 5 test runs
<wallyworld_> ok, i'll try re-approving
<wallyworld_> 3rd time lucky
<jam> wallyworld_: yeah, I'm trying to debug it now, inserting logging in mgo, or something.
<wallyworld_> good luck, doesn't sound straightforward
<jam> wallyworld_: well it looks like what is actually happening is an RPC failure
<jam> which causes the suite to fail to tear down correctly
<jam> [LOG] 16.33282 DEBUG juju rpc/jsoncodec: <- error: read tcp 127.0.0.1:52276: use of closed network connection (closing true) [LOG] 16.33383 DEBUG juju rpc/jsoncodec: <- error: EOF (closing false) ... Panic: Fixture has panicked (see related PANIC)
<wallyworld_> would be nice to make the teardown more robust at least
<jam> wallyworld_: well it looks like a test should be failing given the "use of closed network connection" part
<jam> though that failure *might* be: http://code.google.com/p/go/issues/detail?id=4704
 * wallyworld_ looks
<jam> which is only relevant to the patched version of go used in the packaged go binaries
<jam> but why would *that* be nondeterministic?
<jam> (for danilo's patch, it was exactly 3 submissions to succeed)
<jam> and naturally running it directly.... doesn't fail
<wallyworld_> too bad we aren't running with 1.0.3 packaged
<jam> wallyworld_: I'm using 1.0.3 from the ppa, which are you meaning?
<jam> that is the one tarmac is using
<jam> which may not have the patch, which is why I don't think it is issue 4704 from above
<wallyworld_> i just assumed we were running < 1.0.3 since the bug report linked above seemed to say it was fixed in 1.0.3
<jam> wallyworld_: right, but supposedly there was a patch to 1.0.2 which might still have been applied to this code.
<jam> Alternatively, we are always getting the connection closed, and it only sometimes causes a stale connection to mgo
<wallyworld_> ok. i was just guessing without having all the facts :-)
<jam> wallyworld_: I don't have many more facts than you. Another pair of eyes is appreciated.
<wallyworld_> i'll see if anything jumps out
<jam> wallyworld_: the code under test is doing stuff with timeouts also, which always sounds non-deterministic
<wallyworld_> yeah, could just be the bot vm is slow or something
<jam> wallyworld_: well it is 10 seconds of 1 socket still marked as alive, so it probably isn't tearing down yet, but maybe the connection is closing earlier because the bot is slow
<jam> wallyworld_: yeah, post commit hook fired
<jam> yay
<wallyworld_> \o/
<jam> wallyworld_: so it seems I just need to auto-requeue all requests 3 times. :)
<wallyworld_> lol
<jam> note that when we get this failure, it trips 3 different tests.
<wallyworld_> yes, i noticed that
<jam> I wonder if some other test is leaving it unclean and the rpc stuff is bogus?
<wallyworld_> maybe, would not be surprised
<jam> wallyworld_: of course I *just* triggered it locally, but I'm pretty sure it will succeed next time.
<jam> but it failed and ran only 1 test
<jam> so it is at least local to the test
<wallyworld_> yeah
<jam> wallyworld_: so if I add a last line: c.Fatal("failing this test")
<jam> then it always triggers
<jam> and I see a "rpc/jsoncodec EOF stuff
<jam> the one difference is "(closing false)" in my method vs "(closing true)"
<wallyworld_> which test do you add that to?
<jam> wallyworld_: so I *think* the TestManageEnviron
<jam> in cmd/jujud
<jam> machine_test.go
<jam> wallyworld_: So looking at the test, I think m1.Watch is setting up a watcher which is then polling on the server side, and we kill it at some point (w.Stop() is deferred)
<jam> but on the *server* side we just get a closed connection.
<wallyworld_> a client watcher should be able to be killed and the server should notice at some point and just deal with it
<jam> ERROR juju agent died with no error
<jam> wallyworld_: right, when I add the Fatalf it doesn't panic the fixture
<jam> it doesn't leave a connection alive
<jam> the point was that we end up with a very similar log file
<wallyworld_> jam: weird commit message when i pull trunk and do bzr log - claims the committer was mathew scott and not me
<wallyworld_> for the branch that just merged
<jam> wallyworld_: 'bzr log --long' it has both of you, but MScott happens to come first for some reason.
<wallyworld_> lp also shows m scott
<wallyworld_> why is he associated with my commit?
<jam> wallyworld_: if you look at the log, you merged his changes, Tarmac just includes everyone
<jam> rev 1276.1.8 has you merging his branch
<jam> so your patch brings in his changes
<wallyworld_> i merged "trunk" but it was off ~juju not go-bot
<wallyworld_> i didn't explicitly merge his code
<jam> wallyworld_: did someone 'land' code to ~juju before it moved to go-bot?
<jam> I pulled and pushed right when I switched
<wallyworld_> not that i can recall or am aware of
<jam> but maybe something was landed there after I switched it
<wallyworld_> maybe
<wallyworld_> doesn't matter, was just curious
<jam> if people were using 'lbox submit' it probably would still land to the old location, because it was the known location.
<jam> so I guess
<wallyworld_> makes sense
<jam> wallyworld_: thanks for bringing in accidentally missed changes :)
<wallyworld_> anytime :-)
<jam> wallyworld_: I may have found it.... the connection that is being created is inside mgo itself. Which runs a background pinger against the server (to make sure the server is still alive)
<jam> guess what the pinger delay is?
<wallyworld_> 500ms?
<jam> wallyworld_: 10s
<wallyworld_> ah, right
<jam> which is *exactly* the amount of time we wait for all connections to clean up
<wallyworld_> lol
<jam> so if pinger starts at any time close to when we start tearing down
<jam> we wait 10s
<jam> but it sleeps for 10.xxx seconds
<jam> and lo and behold, that thread never goes away
<wallyworld_> good catch
<jam> wallyworld_: inserting traceback logging
<jam> and then tracking through what address wasn't
<jam> dying
<jam> and then checking its traceback for why it was existing
<jam> I was *very* fortunate to be able to trigger it locally
<wallyworld_> yes indeed
<jam> once-in-a-while
<jam> I'm guessing Tarmac is slow enough
<jam> that the 10s delay
<jam> ends up being 10+s more often
<wallyworld_> yep
<jam> widening the window of failure
<wallyworld_> 10s seems like way too long to wait inside a test
<wallyworld_> too bad we can't tell mgo not to ping
<jam> wallyworld_: well mgo keeps a connection in a sleep loop
<wallyworld_> since for a test we don't care
<jam> wallyworld_: ah, I think there is a goroutine scheduling issue as well. As newServer has fired off a side thread telling it that it wants to ping
<jam> and it has already run pinger(false) # do not loop
<wallyworld_> fun
<jam> right, so it has checked and probably knows that we want to shut down, but fires off the goroutine, which runs at some point in (near) future
<jam> which is just after we started closing
<jam> but the first thing it does is sleep for 10s
<jam> I'm thinking about just inserting a 'check for closed' before we hit sleep
<wallyworld_> that would be the normal way to do these types of things
<jam> doesn't seem to matter
<jam> looks like it is getting closed in the  middle of those 10s
<jam> wallyworld_: 7+ years since I wrote a lot of C++ code, and I still type "for i = 0; i < 10; ++i" :)
<jam> (go doesn't support ++var)
<wallyworld_> yeah, Go frustrates me a lot like that
<jam> so I tried changing the single sleep into a loop of sleeping, so that we can break out earlier. No luck yet.
<jam> wallyworld_: at 0.6376 we get a 'closing server' call, at 0.6378 we see it kill the currently active socket, at 0.648 we see it *create* a new socket which then never dies... why didn't it think it was done...
<wallyworld_> why would it create a new socket after killing the active one?
<jam> (why if it just killed its socket and is trying to shut down, doesn't it think the server is closed and tries to connect again)
<jam> wallyworld_: exactly
<wallyworld_> is this in the mgo code?
<jam> wallyworld_: that's what I'm looking at right now, yes.
<wallyworld_> i guess we need to find a workaround until any upstream patch gets applied
<jam> wallyworld_: well, we can probably just set the loop to 15s and be safe
<jam> but I'd still like to understand this bit.
<jam> (I can easily run patched mgo on Tarmac bot :)
<wallyworld_> great
<jam> note the 15s is *I think* :)
<wallyworld_> but who knows if this will bite us in production
<jam> given it is creating a new connection
<jam> I wonder if server.closed is getting unset somehow
<wallyworld_> plausible
<jam> like, it has been deallocated and thus the default value is nil? (can't really be deallocated if it's still referenced)
<jam> so I could see that if we get the Close() call, but then call AcquireSocket immediately after
<jam> we *won't* call liveSocket.Close()
<jam> because we close each socket in the server.Close() call.
<jam> so possibly this thread will never die
<jam> because it doesn't actually think the server is closed.
<wallyworld_> seems like a stock standard coding error
<jam> wallyworld_: current guess. server is not closed when we call Dial
<jam> server is closed by the time connect returns
<wallyworld_> would explain what is seen i think
<jam> wallyworld_: 85.92296 AcquireSocket, 85.93311 closing server, 85.93321 killing old connection 85.94373 newSocket
<jam> so we ask for a new socket, while that is pending, we call Close
<jam> Close doesn't see the connection we are creating right now
<wallyworld_> cause it's not fully created yet
<jam> yep
<wallyworld_> may just use a mutex
<wallyworld_> maybe
<jam> there are mutexes around
<jam> AcquireSocket releases the mutex just before calling Connect
<jam> presumably to not block waiting for connect
<jam> and grabs it again before appending to live sockets
<jam> probably needs to check for 'self.closed' before appending the new socket
<wallyworld_> or gate acquire and close with a mutex
<wallyworld_> not sure without looking at the code
<jam> wallyworld_: I think just after Connect when we re-acquire the lock to add it to our liveSockets, we need to check that we aren't actually in closing state.
<wallyworld_> sounds ok
<jam> wallyworld_: yeah, I just got "Connect returned after server was closed" but *didn't* get AcquireSocket called with closed server.
<wallyworld_> good
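The fix jam is describing — after Connect returns and the lock is re-acquired, re-check the closed flag before appending to liveSockets — looks roughly like this. The `server` type here is a stripped-down stand-in, not mgo's real struct:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type socket struct{ open bool }

func (s *socket) Close() { s.open = false }

type server struct {
	mu          sync.Mutex
	closed      bool
	liveSockets []*socket
}

// AcquireSocket connects without holding the lock (so Connect can't
// block everyone), then re-checks closed before registering the
// socket. That re-check is what stops a Close racing with Connect
// from leaking a live socket that outlives the server.
func (sv *server) AcquireSocket(connect func() *socket) (*socket, error) {
	s := connect() // slow; done outside the lock
	sv.mu.Lock()
	defer sv.mu.Unlock()
	if sv.closed {
		s.Close() // server shut down while we were connecting
		return nil, errors.New("server closed")
	}
	sv.liveSockets = append(sv.liveSockets, s)
	return s, nil
}

func (sv *server) Close() {
	sv.mu.Lock()
	defer sv.mu.Unlock()
	sv.closed = true
	for _, s := range sv.liveSockets {
		s.Close()
	}
	sv.liveSockets = nil
}

func main() {
	sv := &server{}
	connect := func() *socket {
		sv.Close() // simulate Close landing mid-Connect
		return &socket{open: true}
	}
	s, err := sv.AcquireSocket(connect)
	fmt.Println(s == nil, err, len(sv.liveSockets))
	// prints: true server closed 0
}
```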
<thumper> wallyworld_: morning...
<wallyworld_> hi
<thumper> wallyworld_: can I get a +1 on https://codereview.appspot.com/10235047/
<thumper> wallyworld_: although I should propose again
<thumper> if you wait a few minutes, I'll do that
<wallyworld_> against the new trunk you mean
<thumper> no, to have the test I added in the review
<wallyworld_> ok
 * thumper is busy submitting a later pipe
<thumper> wallyworld_: sync up chat sometime?
<wallyworld_> sure, give me a little time to propose some code
<wallyworld_> just finishing some stuff
<thumper> sure, no urgency
<thumper> I have heaps to get on with
<wallyworld_> thumper: looks like you already have your 2nd +1
<thumper> wallyworld_: yeah, just saw that too
<thumper> approving and merging now
<thumper> dfc hasn't come online yet though :)
<thumper> hi mramm
<mramm> hey hey
<thumper> mramm: working on a sunday?
<mramm> not really
<mramm> just thought I'd pop in and check on IRC
<mramm> check my e-mail
<mramm> while cooking dinner
<thumper> well, IRC is probably where you left it :)
<thumper> on problems
<thumper> all ticking along
<thumper> horrible weather here, cold and wet
<thumper> supposed to get snow on thursday
<wallyworld_> arosales: did you get the meeting invite i sent?
#juju-dev 2014-06-09
<waigani> menn0: can I run something by you?
<menn0> sure
<thumper> davecheney: around?
<davecheney> thumper: i am not, queens birthday today
<thumper> davecheney: ah, that's right
<thumper> davecheney: that'll explain it :)
<thumper> nm
<thumper> see you tomorrow
<davecheney> kk
<thumper> davecheney: you ozzies doing it a week after nz confuses me
<thumper> it's almost as if you don't like doing stuff with us
<thumper> :-)
<waigani> thumper: 1:1 ?
<thumper> yeah
<thumper> waigani: I'm in the hangout
<waigani> thumper: trying to logon - network is being annoying
<thumper> heh
<waigani> is there a way to check if a string contains a substring in the tests? c.Assert(stringVar, gc.MAGIC, subStringVar)
<waigani> jc.Contains :)
 * thumper wrote that...
<thumper> waigani: also jc.HasPrefix and jc.HasSuffix
<waigani> ah nice
<thumper> axw: do you have a different queens birthday too?
<axw> thumper: we do indeed
<axw> 29 September
<thumper> geez, how many birthdays does a queen get?
<waigani> haha
<wwitzel3> man, before my long weekend I had 18 unread .. now 163 (after filters) .. awesome.
<waigani> wwitzel3: yep, just been there
<wwitzel3> waigani: yeah, calling it a night and will have to tackle the rest in the morning
<waigani> wwitzel3: the rest + 100 more ;)
<jam> morning dimitern
<dimitern> morning jam, brt
<rogpeppe> mornin' all
<dimitern> hey rogpeppe
<rogpeppe> dimitern: hiya
<voidspace> morning all
<dimitern> morning voidspace
<voidspace> dimitern: o/
<wwitzel3> hello o/
<natefinch> morning all
<jam> natefinch: morning, be there in just a sec
<natefinch> it's funny, I got a notification that the meeting was now, but on my calendar it still says it's in 2 hours.
<voidspace> natefinch: morning
<voidspace> wwitzel3: morning
<wwitzel3> voidspace: morning, how's it going?
<voidspace> wwitzel3: not bad, you have a good weekend?
<wwitzel3> voidspace: yeah, good, but very busy
<voidspace> wwitzel3: did you go look at the house?
<wwitzel3> voidspace: we did, sadly the old 1914 place is ruled out, the land didn't really work for our needs.
<voidspace> shame
<voidspace> wwitzel3: did you see any other places whilst you were out there?
<wwitzel3> voidspace: yeah, we viewed 12 and of those 2 stayed on the list.
<voidspace> wow
<voidspace> you were busy!
<natefinch> wwitzel3: is this places to buy?
<wwitzel3> natefinch: yep, in Raleigh
<natefinch> wwitzel3: nice
<wwitzel3> "in" Raleigh .. near there the two we still have on our list are 14 and 23 acres.
<natefinch> wwitzel3: awesome
<wwitzel3> natefinch: yeah, we are pretty excited to be getting out of FL, hopefully soon :)
<jam> dimitern: vladk: just finishing up a conversation, will be in standup soon
<dimitern> jam, sure, i'm ready
<natefinch> wwitzel3: what are you going to do with the land?
<dimitern> fwereade, will you join us for standup today?
<natefinch> wwitzel3: besides not need to worry about neighbors :)
<wwitzel3> natefinch: Jessa is going to run a CSA off it, we will probably do occasional farm to table dinner events, and Jessa also does event management / planning so she plans to setup a space for weddings, etc..
<natefinch> wwitzel3: wow!  That is awesome :)
<natefinch> wwitzel3: my 13 acres are mostly forest on a rocky hill, so finding space to plant stuff is a trick.
<wwitzel3> natefinch: yeah, that is actually why we ruled out some properties.
<wwitzel3> natefinch: fortunately we aren't really in a rush so we can be picky
<natefinch> wwitzel3: that's great.  nice to be able to move wherever you want :)
<wwitzel3> natefinch: indeed
<voidspace> natefinch: ping
<natefinch> voidspace: howdy
<voidspace> natefinch: did we come to any conclusions on the backup implementation strategy and division of labour
<voidspace> natefinch: I suggested that instead of returning a URL to a backup we serve backup files through the api
<voidspace> natefinch: which would need a "fetch backup api call / command"
<voidspace> natefinch: there isn't a ticket for this yet
<voidspace> natefinch: I can add one, if we've decided this is what we want to do
<natefinch> voidspace: yeah, that would be good
<voidspace> natefinch: which parts of backup can be parallelised?
<voidspace> natefinch: you were fleshing out the api - do you have a branch with that in progress?
<natefinch> voidspace: yes, trying to get things to build so I can push it up
<voidspace> natefinch: cool
<natefinch> dammit git
<natefinch> made a branch on juju/juju not natefinch/juju  :/
<perrito666> morning folks
<jam> mgz: poke about test suite failures
<mgz> jam: hey
<jam> https://github.com/juju/juju/pull/26 has failed 2 times with Bad Record MAC
<jam> in 2 different test cases
<mgz> hm, I can't actually find an open bug for that
<mgz> but it's not a new one
<jam> mgz: no, bad record MAC is thought to be an issue with upstream mongo + SSL/TLS
<jam> mgz: specifically: https://jira.mongodb.org/browse/SERVER-11807
<mgz> jam: I think I'll reinstate the auto-retry
<jam> mgz: I'm not particularly happy with "just rerun the test suites until they pass" approach to landing code.
<mgz> yeah, it's pretty sucky
<jam> mgz: as a for-example, if this is being a problem, can we just run 99% of the test suite with TLS disabled? We don't actually need to encrypt our data to the local mongo db
<jam> (though I'd rather have tests that just don't need mongo at all for local testing)
<mgz> ideally that
<mgz> but... we've talked but not got as far as de-mongofying tests for a while
<perrito666> could someone re-re-read this and give me a LGTM? it's just docs and already passed 2 rounds of grammatical review, but a final one would be nice
<perrito666> natefinch: did you manage to find out how to write a facade?
 * voidspace lunches
<natefinch> perrito666: ish
<natefinch> rogpeppe1: any idea how to fix this? testing/environ.go:109: undefined: "github.com/juju/testing".FakeHomeSuite
<perrito666> natefinch: I am in the process of bash->go the actual backup logic, which is fairly short, then we can see how to stitch all together, sounds good to you?
<jam> natefinch: frankban just landed something that was talking about removing it
<natefinch> jam: I just needed to update juju/testing
<natefinch> jam: thought I'd just done that, but perhaps not
<jam> natefinch: ah, so it was failing in master, sure
<jam> I thought maybe you needed to crib an example from how he removed it
<jam> removed or moved
<natefinch> jam: nah... just merged from master and then my branch was broken
<jam> natefinch: yeah, I ran into that with the 6+ dependencies I had to go update
<natefinch> voidspace: when you get back, my backup api facade is at github.com/natefinch/juju  on the backup-api branch  under state/apiserver/backup
<voidspace> ok, thanks
 * voidspace really leaves for lunch
<rogpeppe1> natefinch: have you updated dependencies?
<rogpeppe1> what do people think about moving testing/mgo.go into github.com/juju/testing?
<rogpeppe1> the store tests need a mongo server to run against, so we need *some* code that starts mongod outside of core
<rogpeppe1> but i'm swithering a bit about all the stuff that's really closely related to juju-core in there
<rogpeppe1> for example the certificates - they really feel quite core-specific
<rogpeppe1> i wonder if we made the certificates an argument to the testing functions, that might work ok and mean that we wouldn't need to move the cert code
<rogpeppe1> anyway, gotta lunch now
<perrito666> fwereade: tx :)
<natefinch> wwitzel3, perrito666: today is Eric Snow's first day, I think he'll be coming on shortly... I'm unfortunately watching both my kids for a significant period of time this morning since my wife has to go to a doctor's appointment.  If one of you could help him when he gets on to get set up, that would be awesome
<perrito666> natefinch: sure
<bodie_> morning all
<lazyPower> natefinch: have you seen anything wrt 2014-06-09 12:35:57 INFO juju.worker runner.go:260 start "api"
<lazyPower> panic: runtime error: invalid memory address or nil pointer dereference
<kiko> upon startup, am getting a crash
<kiko> 2014-06-09 12:35:57 INFO juju.cmd supercommand.go:301 running juju-1.19.2-precise-amd64 [gc]
<kiko> 2014-06-09 12:35:57 INFO juju.cmd.jujud machine.go:151 machine agent machine-0 start (1.19.2-precise-amd64 [gc])
<kiko> 2014-06-09 12:35:57 DEBUG juju.agent agent.go:375 read agent config, format "1.18"
<kiko> 2014-06-09 12:35:57 INFO juju.worker runner.go:260 start "api"
<kiko> panic: runtime error: invalid memory address or nil pointer dereference
<kiko> [signal 0xb code=0x1 addr=0x0 pc=0x4643b2]
<kiko> can anybody give me a hand figuring out what is wrong?
<natefinch> lazyPower: ouch
<lazyPower> natefinch: yeah, as you can see that originated from Kiko
<lazyPower> I'm at a loss as to why it would be having memory pointer dereferencing issues
<lazyPower> i thought go used garbage collection so memory management wasn't a thing. but I really don't know much about Go so i'm not much help
<natefinch> lazyPower: there's still pointers, which can be nil, and if you try to dereference the pointer, it'll panic like that
<lazyPower> yikes
<jam> lazyPower: go just ensures you get a traceback/panic rather than undefined operation on whatever memory you happen to be pointing to
<natefinch> it's also not as big a problem in Go as in some other languages, because most of the time you return both a pointer and an error, and if there's an error, you don't use the pointer.  it's a lot more obvious than in other languages where you might just do "GetFoo().Method()" or similar, and forget that GetFoo() might occasionally return a null pointer
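The convention natefinch describes — return both a pointer and an error, and never touch the pointer on the error path — in miniature (`getFoo` is a made-up example, not juju code):

```go
package main

import (
	"errors"
	"fmt"
)

type Foo struct{ Name string }

// getFoo returns (pointer, error); on failure the pointer is nil.
func getFoo(ok bool) (*Foo, error) {
	if !ok {
		return nil, errors.New("no foo available")
	}
	return &Foo{Name: "foo"}, nil
}

func main() {
	// Check err first and never use the pointer on the error path,
	// so the nil is never dereferenced.
	f, err := getFoo(false)
	if err != nil {
		fmt.Println("handled:", err)
	} else {
		fmt.Println(f.Name)
	}

	// Skipping the check is how panics like the one pasted above
	// arise: f, _ := getFoo(false); f.Name would dereference nil.
}
```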
<wwitzel3> natefinch: re: Eric's first day, no problem
<natefinch> wwitzel3: thanks
<lazyPower> ok so we have no clue as to why kiko's deployment is panicking after a server reboot?
<bodie_> I find that if I wrote code that has a nil dereference, it means that I need to put in checks for a nil value and they'll tell me where things are going wrong
<kiko> lazyPower, well, actually, it was a server reboot + juju client upgrade from 1.18.3 to 1.18.4
<kiko> unfortunately because of other bugs (in upgrade-juju), our jujuds are all 1.19
<natefinch> so, it seems likely that it's related to upgrading to 1.19 when it wasn't supposed to.... but it's hard to know.  Do you have the full stack trace from the panic?
<natefinch> jam: yeah, that's the nice thing about Go's values always getting initialized to their zero value... pointers start out nil, not pointing at random spots in memory.
<natefinch> So, totally unrelated to anything else.... we just hatched our own chickens for the first time this morning.... like, toss some fertilized eggs under a chicken, wait 3 weeks, bam, baby chicks.  It's kind of amazing.
<perrito666> natefinch: wwitzel3 voidspace what would be really nice is to know eric's irc nickname
<perrito666> :p
<kiko> natefinch, yes, just a second
<voidspace> perrito666: heh, I don't know I'm afraid
<natefinch> perrito666: when he tells us, we'll let you know
<voidspace> perrito666: I thought it was ericsnow though...
<kiko> natefinch, http://paste.ubuntu.com/7618160/
<bodie_> when someone has a spare minute to look at my pseudo-PR here -- I'm thinking about whether we should be using markdown in our doc now
<bodie_> https://github.com/juju/juju/pull/46
<bodie_> just want to get some feedback so I know whether I'm barking up the wrong tree :)
<perrito666> bodie_: I, actually, github picks it up and renders it, which is nice
<bodie_> yup, it's very nice
<bodie_> There's also a [tag:asdf] tag which I think links to another piece of github but I'm not certain how to make it work
<bodie_> but, it could be used for example as [tag:newbies] or something -- linking to a sticky post for newcomers -- etc
<voidspace> grabbing coffee
<voidspace> maybe a coupla minutes late to standup
<perrito666> grabbing a hangout compatible computer
<jam> kiko: so the traceback you pasted says that the agent is missing the "*api.Info" it needs to actually connect to the API
<jam> so you can look for a /var/lib/juju/agents/machine-?/agent.conf file
<jam> offhand, I don't know how that would happen, given it is a pretty vital piece of information
<kiko> jam, well, the machine-0/agent.conf file is in ~/.juju/local
<kiko> jam, it contains apiaddresses, apipassword and apiport
<bodie_> hm... I think my upstream is failing tests too
<natefinch> voidspace, perrito666, wwitzel3: I think I have to miss the standup today. My wife is leaving right now, so I gotta watch the kids.  I'll try to pop back in off and on.  Try to coordinate amongst yourselves if someone doesn't have something to do
<bodie_> take care natefinch
<kiko> jam, the permissions had it originally only readable by root, but I chowned it to juju
<kiko> unfortunately I still get the same traceback
<kiko> jam, what am I missing explicitly?
<jam> kiko: still digging into the traceback, I had to pull out the 1.19.2 source code and find the exact line, because it actually looks like we don't have a config object at all, but I can't figure out from tracing various paths how that object could be nil
<kiko> jam, could it be the file could not be opened?
<kiko> jam, because I remember being told that there was a permissions change
<perrito666> voidspace: whenever you want, wwitzel3 and I are on the call
<jam> kiko: the agent on machine-0 runs as root, so you can't stop it from opening anything
<bodie_> http://paste.ubuntu.com/7618248/
<kiko> jam, I see -- what about the location where the file is?
<kiko> jam, it used to be in /var/lib/juju indeed, but I was told to move it to ~/.juju/local
<bodie_> "not okForStorage"?
<bodie_> "cannot update uploaded charm in state" -- is this because I've altered the contents of my testing/repo/quantal/dummy charm?
<kiko> jam, moving it back makes no difference though either
<jam> kiko: hm. I hadn't heard of that, as for location, I would have thought it would fail far earlier with:
<jam>         if err := a.ReadConfig(a.Tag()); err != nil {
<jam>                 return fmt.Errorf("cannot read agent configuration: %v", err)
<jam>         }
<kiko> jam, yeah, that sounds right
<jam> kiko: if it isn't in /var/lib/juju then I would expect your /etc/init/juju...conf upstart file will be passing a "--data-dir=" that would tell us where that file should be. Not that I think that must be the problem, because I *think* you would see a different error.
<jam> kiko: so... another thought. There is another pointer in there "apiDetails"
<jam> just a sec
<jam> kiko: is there a line in there called "APIAddresses" ?
<jam> in the agent.conf file?
<kiko> jam, it contains apiaddresses, apipassword and apiport
<kiko> one sec
<jam> yeah, I just found that in traceback
<jam>         if len(configParams.APIAddresses) > 0 {
<jam>                 config.apiDetails = &connectionDetails{
<jam>                         addresses: configParams.APIAddresses,
<jam>                 }
<jam>         }
<jam> so it has to have contents as well
<jam> it should be a list
<jam> mine looks like this:
<jam> piaddresses:
<jam> - 10.0.3.1:17070
<jam> well, with an 'a' there
<jam> so it wraps to the next line and has a list that starts with "- "
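The nil-pointer failure kiko is hitting can be sketched with a simplified, hypothetical mirror of the snippet jam pasted (the field names come from that snippet; everything else here is illustrative): `apiDetails` is only populated when `APIAddresses` is non-empty, so an agent.conf with no apiaddresses list leaves it nil, and any later dereference produces exactly this kind of traceback.

```go
package main

import "fmt"

// connectionDetails and apiDetails mirror the names in the pasted
// 1.19.2 snippet; the surrounding code is a simplified sketch.
type connectionDetails struct {
	addresses []string
}

type config struct {
	apiDetails *connectionDetails
}

// newConfig only sets apiDetails when at least one API address is
// given, matching the `if len(configParams.APIAddresses) > 0` guard.
func newConfig(apiAddresses []string) *config {
	c := &config{}
	if len(apiAddresses) > 0 {
		c.apiDetails = &connectionDetails{addresses: apiAddresses}
	}
	return c
}

func main() {
	missing := newConfig(nil)
	fmt.Println(missing.apiDetails == nil) // true: any dereference would panic

	ok := newConfig([]string{"10.0.3.1:17070"})
	fmt.Println(ok.apiDetails.addresses[0])
}
```

So a conf file upgraded without the apiaddresses entry reproduces the panic deterministically.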
<kiko> jam, my file is here: http://paste.ubuntu.com/7618297/
<jam> kiko: there is no apiaddresses in that file
<jam> kiko: my suggestion, is to put one in with "apiaddresses:\n- localhost:17070\n"
<kiko> hmm
<kiko> jam, my file is also missing a sharedsecret entry
<kiko> jam (am comparing it with a freshly bootstrapped 1.18)
<kiko> jam, it runs
<kiko> jam, was that not required pre-1.18.4?
<jam> kiko: both were introduced in 1.19
<kiko> jam, I see
<kiko> jam, and I guess the upgrade isn't being done properly?
<kiko> jam, is the lack of sharedsecret going to be a problem, and if so, how do I get one
<jam> kiko: upgrades *to* dev versions aren't really supported, I'm really unclear how it actually triggered.
<kiko> jam, it's another bug I filed, let me find it
<jam> kiko: it is a random string used for when you're going into HA/--replicaSet mode.
<kiko> jam, I see, so not that important
<kiko> jam, https://bugs.launchpad.net/juju-core/+bug/1325034
<_mup_> Bug #1325034: juju upgrade-juju on 1.18.3 upgraded my agents to 1.19.2 <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1325034>
<jam> kiko: so *if* all upgrade steps were run, then we might need to be passing it to mongo, since *none* of them run, we don't actually need it. If we do, we can just put random data in there.
<jam> kiko: so as I understand it, the upgrade logic is pinned on "stable" versions, so it says "oh you are upgrading from version 1.OLD to 1.NEW, I'll run these steps" but those steps are targeted at the "next stable" release, which would have been 1.20.
<jam> since 1.19.2 isn't 1.20, nothing gets run
<jam> wallyworld knows more about the upgrade logic than I do, though.
<kiko> yeah
<kiko> all my agents are missing apiaddresses entries
<kiko> shucks
<jam> kiko: so the ones that aren't machine-0 need machine-0's address
<kiko> jam, they are all tracebacking identically, yeah
<jam> kiko: sure, I just wanted to be clear that they shouldn't have "localhost" in there
<kiko> ah
<jam> kiko: do you know where you got 1.18.3 from? ppa:juju/stable?  (it doesn't seem to be in cloud-archive:tools or Trusty archive because of problems landing updates in utopic)
<kiko> jam: yes, that's where it came from originally
<kiko> jam: I have the debs here if it helps, because I couldn't find them anywhere today either
<jam> kiko: yeah, ppa's don't keep history so there is only 1.18.4 there now.
<jam> anyway, I'm way past EOD and have to make dinner for my son, hopefully we've moved you forward a bit, and I'll try to follow up with other people.
<jam> natefinch: ^^ can you help kiko if he needs anything else?
<kiko> thanks jam
<natefinch> jam: I'll do what I can.  I'm a little distracted by two kids, but I hope I'll be able to help
<kiko> heh
<natefinch> sinzui: have you seen any problems upgrading from 1.18.3?  looks like kiko got upgraded to 1.19.2 somehow, and it seems to have messed up his environment
<sinzui> natefinch, I have never seen that issue
<kiko> yeah, it messed it up great
<alexisb> welcome jheroux !
<alexisb> fancy seeing you here
<dimitern> jam, vladk, fwereade, others? - a mostly automatic refactoring to move network-related stuff from instance to environ/network package, i'd appreciate a review https://github.com/juju/juju/pull/49
<bodie_> anyone able to grok the difference between json-schema core and json-schema hyperschema?
<bodie_> I'm trying to write a mini validator that validates a hypothetical json-schema against the spec
<bodie_> I'm just not sure if I should be validating against json-schema or json-schema hyperschema
<rogpeppe> anyone else seen this log message "2014/06/09 16:19:30 http: TLS handshake error from 127.0.0.1:58318: tls: first record does not look like a TLS handshake"
<rogpeppe> bodie_: ha, let me just look at the description again. i *thought* i understood the difference once...
<rogpeppe> bodie_: from a naive reading of the description on the home page, json schema is what we need
<bodie_> it would make sense that hyperschema is the schema that defines json-schema itself
<bodie_> however, I don't think that's the case, or I'm just really confused, which is probably the case :)
<rogpeppe> bodie_: yeah, that's what i thought, but it doesn't look like it
<rogpeppe> bodie_: i guess that would actually be a "meta-schema"
<bodie_> some people get those words mixed up :P
<rogpeppe> bodie_: from the json schema home page:
<rogpeppe> JSON Schema	describes your JSON data format
<rogpeppe> JSON Hyper-Schema	turns your JSON data into hyper-text
<bodie_> ahhh
<bodie_> somehow missed that
<bodie_> thanks!
<rogpeppe> bodie_: the specs are not the clearest in the world
<mattyw> natefinch, fwereade do either of you have some work I'm able to pick up?
<natefinch> mattyw: not a ton for me right now.  Wrangling a couple kids this morning, so can't do much right now.  I can try to find something later today if you don't have anything else by then.
<rogpeppe> fwereade: i made a couple of improvements to the filetesting package. review appreciated: https://github.com/juju/testing/pull/14
<rogpeppe> or anyone else, please: ^
<rogpeppe> frankban, natefinch, bodie_, mgz: ^
<jheroux> alexisb: hey, I've been been asked to look at OpenStack Heat & Juju -
<alexisb> jheroux, cool
<alexisb> we should schedule some time to meet
<jheroux> alexisb: sure
<alexisb> are you jheroux@us.ibm.com?
<jheroux> I don't fully understand all of the requirements yet, will be talking to SWG, and other IBM teams shortly
<jheroux> yup, that's me
<alexisb> I will start by inviting you to our weekly interlock
<jheroux> ok, makes sense
<alexisb> and then you and I can meet 1x1 and I can give you some background
<jcw4> marcoceppi: for juju v.next documentation... should I propose against dev branch?
<jheroux> ok, I'll look for the invite
<marcoceppi> jcw4: we're going to be making a juju-docs repo soon, which has master being the dev, then each version of the release by branch (ie 1.18 branch, etc) it's just not there yet. Right now master is the stable version of the docs on the repo and dev is a bit behind
<marcoceppi> leave the mp as it is, I'll chat with nick about how soon we can get the new repo setup
<jcw4> cool thanks marcoceppi
<perrito666> is it possible to change the libjuju path from /var/lib/juju?
<jcastro> My juju seems to only launch containers for the bootstrap, for everything else they just get stuck on "pending", ideas?
<natefinch> jcastro: might be a bit late, but it's usually an LXC problem, rebooting may help, believe it or not
<jcastro> ugh
<natefinch> jcastro: you can try killing all lxc processes
<natefinch> jcastro: sometimes that'll do it
<jcastro> ok I'll give it a shot
<voidspace> right, I'm off to Krav Maga
<voidspace> g'night all
<voidspace> EOD
<wwitzel3> night voidspace
<perrito666> I would really love to take classes of one of these martial arts if it didn't involve social interaction
<natefinch> haha
<natefinch> I don't know, getting punched in the face isn't that social
<perrito666> natefinch: I think that the whole non punching time includes social interaction
<natefinch> perrito666: if you're not constantly punching or getting punched in the face, you're doing it wrong ;)
<perrito666> we call that public transportation here
<natefinch> lol nice
<bodie_> perrito666, the fun thing about most martial arts is that you only have to socially interact with other nerds.
<bodie_> MMA is probably the exception
<perrito666> bodie_: oh no, here nerds keep their place appointed by birth and are as physically unfit as possible
<bodie_> I guess it's the same catch-22 as going to the gym.  I'm puny, so I don't go to the gym to stop being puny... haha
<perrito666> The only reason I get to have some fitness is because I was born in a ruralish town where you are forced to practice sports
<bodie_> but, generally speaking I've had good experiences with martial arts -- a good dojo teaches humility and mentorship -- so if people there aren't humble (especially the sensei), you're at the wrong dojo!
<natefinch> I think you'll find that most people at the gym will look at the puny guy or the fat guy or the old guy and think "Good for him!  He's working to make himself more fit!"
<bodie_> :)
<bodie_> unless he's obama.. then they make fun of him to the whole internet.  lol
<perrito666> I try to exercise on my own, I do a lot of biking
<perrito666> and recently a friend got me this step-counting bracelet for my birthday, which I find very fun and helpful
<rogpeppe> frankban: a slightly more substantial review for you: https://github.com/juju/utils/pull/2
 * lazyPower is always the fat guy at the gym
<rogpeppe> natefinch: fancy reviewing a recursive file copy implementation?
<bodie_> perrito666, you should check out beeminder -- they have an integration with those bracelets
<rogpeppe> natefinch: if so... https://github.com/juju/utils/pull/2
<rogpeppe> + anyone else
<natefinch> rogpeppe: sure
<jcw4> rogpeppe: looks good! :)
<natefinch> rogpeppe: do we care about maintaining stuff like owner, last modified date, etc?
<rogpeppe> natefinch: nope
<rogpeppe> natefinch: i'm just interested in doing a rough substitute for cp -r
 * rogpeppe is done for the day
<natefinch> ericsnow: welcome!
<perrito666> hey ericsnow hi
<ericsnow> hi there
 * perrito666 finds the doc for tar.FileInfoHeader a bit confusing
<ericsnow> so I spammed the CDO list with an intro email, but it was waiting for moderator approval the last I checked
<ericsnow> anyway, glad to be here
<natefinch> Glad to have you here.
<perrito666> ericsnow: I think you need to subscribe to that
<natefinch> perrito666 is Horacio, from Argentina (UTC-3 right now).  mgz is Martin, from the UK.    You evidently know wwitzel3.  fwereade is William, the Juju architect, who is in the UK
<jcw4> ericsnow: welcome!
<natefinch> rogpeppe is Roger, also in the UK (there's a theme here....)
<perrito666> natefinch: interesting random slice of people
<natefinch> these are some of the juju devs that are online during normal daylight hours
<jcw4> I thought fwereade was in Malta :)
<ericsnow> cool
<natefinch> jcw4: yes, I'm dumb
<perrito666> jcw4: I think malta is UK
<perrito666> isnt it?
<jcw4> republic of malta...
<natefinch> it's certainly not NEAR the UK.  The politics may be different
<perrito666> oh I am like 60 year old in my info
<perrito666> lol
<jcw4> :D
<ericsnow> perrito666: I lived in Buenos Aires for 2 years (19 years ago)
<natefinch> topical - countries the UK has NOT invaded: http://i.imgur.com/yKK2yrc.gif
<perrito666> natefinch: seems to be counting unsuccessful invasions too
<natefinch> perrito666: perhaps :)
<perrito666> ericsnow: you managed to escape on time :p
<jcw4> natefinch: and we wonder why engilish is the language of international business...
<jcw4> English even
<jcw4> (American notwithstanding)
<natefinch> indeed
<ericsnow> I think I've signed up for all the right stuff, but if you see me missing from anyway, just let me know
<ericsnow> s/anyway/anywhere/
<perrito666> jcw4: its a reasonable language for that, simple enough and lacks one of the things most people find confusing, which is having a gender for each object
<alexisb> welcome ericsnow !
<ericsnow> thanks
<jcw4> perrito666: +1
<jcw4> mgz: if the tests fail in CI, is it safe to push updates to the PR branch before it officially reports failure?
<alexisb> ericsnow, natefinch: I assume you guys are all set and don't need me fore anything atm
<natefinch> alexisb: as far as I am concerned, yeah'
<ericsnow> alexisb: same here
<alexisb> cool, if that changes just ping me
<ericsnow> what's the typical meeting schedule?
<ericsnow> is there a team calendar somewhere?
<natefinch> yeah, there's a google calendar... can you get on canonical email?
<ericsnow> yep
<natefinch> so.... I'll ping the other team leads, since I don't have rights to add anyone to the team calendar... but there's a team calendar with our standups and weekly team meeting in it
<ericsnow> cool
<ericsnow> does it also include events (like sprints)
<natefinch> we have daily standups with our squads (a squad is just a small team in the larger juju core team)
<ericsnow> got it
<natefinch> the sprints often get added to it by someone on the team, once we know when they are
<natefinch> but they're not added automatically or anything
<perrito666> ericsnow: you will get notified anyway
<ericsnow> cool
<natefinch> we have a weekly meeting on ~thursday (it rotates by 8 hours within the UTC day, which means one of the rotations ends up with the meeting late Wednesday night for people in the US)
<natefinch> we have people all over the globe, so we started rotating the meeting so it wouldn't suck for the same people every week.  But it means some of the time it might be really sucky.... we only really expect people to make 2/3 of the times (more is better, but we understand the meetings might be at like 3am for some people)
<ericsnow> sounds good
<natefinch> having IRC up all day while you're working is a good idea... we use it for asking for reviews, asking for help, helping others that need help (both on juju-core and not)
<natefinch> #juju-dev is generally for people who write code for juju, #juju is for users of juju as well, so it's good to keep an eye out there, though the juju solutions and community guys are more responsible for fielding the easy questions, and will ping us if they have harder questions.
<ericsnow> I'm on both
<natefinch> yep, good
<natefinch> sinzui is our intrepid QA-master who sets up our CI system
<natefinch> oh btw, huge help - at directory.canonical.com you can search for people by irc nickname
<ericsnow> nivr
<ericsnow> nice
<natefinch> so like, when you're thinking "Wait, who the hell is talking to me?"  You can go find out pretty easily :)
<natefinch> we have a kanban board here: https://canonical.leankit.com/Boards/View/103148069#workflow-view    if you can't log in there, we can figure out how to add you
<ericsnow> are there any particular projects I should join on launchpad?
<natefinch> juju-core, definitely
<ericsnow> am I supposed to have an account for that site?
<natefinch> yes
<natefinch> you can set one up yourself for the most part
<natefinch> That's a good thing to do right now if you haven't already
<ericsnow> yeah, navigating that now
<natefinch> you'll want to give your account an ssh key, we actually use that for some stuff
<natefinch> (associated with your canonical email address)
<ericsnow> k
<perrito666> I think most of that is in the handbook and the getting started page, ericsnow, make sure to read that one too as it's more reliable than natefinch's memory
<natefinch> yep
<ericsnow> did that this morning
<perrito666> ericsnow: so most likely you already added your pubkey everywhere
<ericsnow> just about
<ericsnow> am I missing something or is setting up an account for the Kanban site not trivial?
<rick_h_> ericsnow: you have to have it setup for you
<rick_h_> arosales: is a master of it ^
<rick_h_> ericsnow: email him a request with your email addr and wait a bit
<ericsnow> that makes *much* more sense :)
<rick_h_> not sure if alexisb has super powers or not
<arosales> ericsnow: I can get you set up.
<perrito666> ericsnow: either someone does it for you or it was such a traumatic experience that I forgot how I did it
<ericsnow> arosales: sweet!
<alexisb> rick_h_, I do not, I need to get super powers though as this is the second time it has come up and I haven't been able to help
<rick_h_> ericsnow: and welcome to the party
<alexisb> arosales, thank you for helping out
<ericsnow> :)
<rick_h_> alexisb: yea, I just keep pushing it to arosales until he gets fed up one day and makes me figure out how to get super powers
<alexisb> lol
<alexisb> or I can do that :)
<rick_h_> works for me :)
<ericsnow> arosales: you need my email?
<ericsnow> do I also need to get put on any launchpad teams (e.g. "juju hackers")?
<rick_h_> ericsnow: yep, hopefully if you get put into your squad team you'll get into most of what you need, though you'll have a bunch of mailing lists to sign up for
<rick_h_> ericsnow: bug your team lead on that front
<arosales> natefinch:  ^
<natefinch> just did :)
<natefinch> didn't realize I had been granted that power
<arosales> ericsnow: natefinch and alexisb may have a few getting started tasks for you to run though that may fill a lot of your access needs
<ericsnow> cool, thanks!
<arosales> ericsnow: welcome to the Juju team :-)
<natefinch> ericsnow: you're the first person I've onboarded, and I've only been here 11 months, so bear with me ;)
<ericsnow> :)
<ericsnow> no worries
<perrito666> natefinch: you made a great onboarding :)
<natefinch> so, juju just moved from launchpad.net/juju-core to github.com/juju/juju .... and by just, I mean the switchover was.... Thursday.  So things are still in a little bit of flux.  The code will live on github, but we're using launchpad as the canonical place for bug tracking
<ericsnow> so git instead of bzr?
<perrito666> yup
<natefinch> correct... there's still some packages that are on launchpad, which may get moved eventually but are not yet moved
<natefinch> stuff like the wrappers for MaaS, Azure, AWS, etc
<perrito666> ericsnow: the contributing doc on juju's git is quite good explaining the base setup
<ericsnow> k
<natefinch> ericsnow: are you running ubuntu?
<ericsnow> sort of
<ericsnow> :)
<natefinch> heh
<natefinch> VM?
<ericsnow> I have a headless box running 12.04
<natefinch> oh right, you said that
<ericsnow> laptop arriving tomorrow
<natefinch> ahh, ok
<natefinch> what laptop did you end up choosing?
<perrito666> rogpeppe: are you still around?
<ericsnow> XPS 15
<natefinch> moce
<natefinch> nice too
<natefinch> ok, so I think we'll skip setting up repos and stuff today, since we'd have to redo it tomorrow
<ericsnow> :)
<natefinch> I forget, how's your Go?
<perrito666> natefinch: actually tar -cf - /* | ssh newbox tar -xf -
<perrito666> works like a charm
<natefinch> perrito666: yeah, I guess
<natefinch> perrito666: you end up having to set up git and bzr and all that though
<ericsnow> it's new to me, but I've been reading up
<fwereade> ericsnow, welcome aboard
<ericsnow> it's not too bad
<ericsnow> thanks
<fwereade> ericsnow, in a spirit of pedantry I should point out that I live in malta
<natefinch> fwereade: someone else already pointed out my boneheaded mistake
<perrito666> natefinch: nah, I just actually ran that command for all the files on my home and got my whole setup moved :p
<natefinch> perrito666: I guess that's a good point
<fwereade> natefinch, as it happens malta is disturbingly similar to england anyway :)
<natefinch> fwereade: heh.  Are you from there originally, or just like the weather?
<fwereade> natefinch, uk originally, moved here a while ago... will be 5 years in december actually
<natefinch> fwereade: sweet, I was only half wrong :)
<fwereade> :D
<perrito666> natefinch: if we were in 1963 you would have been fully right
<natefinch> ericsnow: have you gone through the tour of Go yet?  It's a really useful intro to the language - http://tour.golang.org/#1
<natefinch> perrito666: lol, finally looked it up on wikipedia huh?
<ericsnow> yeah, did the tour last week
<natefinch> cool cool
<perrito666> natefinch: I looked up when you first said it
<natefinch> ericsnow: sweet, so you know the basics.  And luckily, with Go, that's sorta all there is to it.  Learning the Juju codebase is like 100 times as hard as learning Go :)   Which is not to say the codebase is bad, there's just a lot to it.
 * perrito666 looks at fwereade, that's the cue for your doc/ line
<ericsnow> my kind of code base :)
<fwereade> ericsnow, yeah, there is at least some good stuff in the doc/ directory
<natefinch> do you have Go installed?
<natefinch> we can go get the code so you can look at the docs etc... I guess it can't hurt to set things up a little bit :)
<ericsnow> I'm set on go
<natefinch> cool cool, so just go get github.com/juju/juju/...
<natefinch> the path is actually a pattern match, if you put ... in the path, it's a wildcard.  In this case it means get all the packages under github.com/juju/juju  (you have to do it that way since the root package doesn't actually have code in it that pulls in all the rest of the code)
<natefinch> go get will get the code and all the code that it depends on and dump it in your gopath
<perrito666> natefinch: heh, I wish I had that clarification the first time I tried that, I thought ... was just an abbreviation for the rest of the path
<natefinch> perrito666: yeah..... took me a long time to figure out that you could do stuff like go test ./...
<natefinch> oh yeah, you need a GOPATH.  I recommend $GOPATH=$HOME  but you can put it wherever you want
<natefinch> in theory GOPATH can be a list of paths, but it's way easier just to use one
<ericsnow> go-getting now...
<natefinch> it's a little slow because we imported the whole of juju history into github, so there's quite a bit of code to download
<ericsnow> in the meantime what about reviews and CI?
<perrito666> ericsnow: you mean code reviews?
<ericsnow> yeah
<perrito666> code reviews are explained in CONTRIBUTING in detail, I recommend you read them there, I might omit something if I try to tell you the process here
<natefinch> sure, so we do reviews for github stuff on github.  You fork github.com/juju/juju, make your changes, submit a pull request, and then ping people on here to review it.  Once someone gives you a LGTM, you put a comment with the text $$merge$$  and the merge bot will pull your branch and test it after merging it to main to make sure the tests pass, then it'll merge it automatically into main
 * perrito666 was trying to avoid that ^^ clutter
<ericsnow> got it
<ericsnow> :)
<natefinch> heh
<natefinch> generally the idea with reviews is that the reviewer is careful about when to give LGTM.  They'll only give it if they mean it.  If they give it *and* give you something to fix, it means "these are minor fixes that I trust you can make, and after that, go ahead and merge"
<natefinch> often times people will say something like "this looks ok, but you should have someone else look over it".
<ericsnow> sounds right (good practice in my experience)
<natefinch> yep
<natefinch> we review every last commit.  Tests are important. We don't really check coverage, but you generally need tests for any new functionality, and preferably for fixes to old functionality (if the old tests didn't catch the bug you fix)
<natefinch> ericsnow: CI is at http://juju-ci.vapour.ws:8080/
<natefinch> fyi, I have to run in a half hour on the dot
<natefinch> man, we need to put the dependency management stuff way at the top of CONTRIBUTING
<perrito666> natefinch: you could actually add a "read this to the end"
<natefinch> no, because it should be in chronological order
<perrito666> heh "you might be wondering why the above steps did not work"
<natefinch> I'm fixing it
<perrito666> mm if I want the leading slashes removal from the header names on a tar file created with go's tar I should do it by hand on the Name of the file's header, right?
<ericsnow> okay, I'm going to work my way through the readme and contributing docs
<natefinch> perrito666: I have no idea what you're talking about :)
<natefinch> ericsnow: cool, I just updated the contributing doc a little
<perrito666> natefinch: mmm, I am trying to tar -cf blah.tar /etc/something/file.conf which will remove the leading /
<perrito666> so untar that blah.tar will yield etc/something/file.conf
<perrito666> but I am doing that in go, and i am not sure how the spec of that goes
<natefinch> thumper: this is ericsnow, he's new.  He's a couple hours behind me, so I gotta run, but he may have more questions
<thumper> o/ ericsnow
<thumper> natefinch: ack
<ericsnow> howdy
<jcastro> welcome ericsnow!
<ericsnow> thanks Nate
<natefinch> ericsnow: thumper is one of our team leads, in New Zealand
<thumper> cold wet and rainy NZ at the moment
<natefinch> jcastro is uhh... what is your title?  Cloud Community Liaison?    Man, there's a BS title if I ever heard one ;)
<natefinch> jcastro basically does a little bit of everything and tries to make sure people love us
<natefinch> which is sometimes a tough job ;)
<ericsnow> :)
<natefinch> ok, I gotta run... have fun everyone.   ericsnow, I'm usually on pretty early
<ericsnow> k
<natefinch> but get on whenever is convenient for you
<jcw4> ericsnow: what is your timezone?
<ericsnow> -0600
<jcw4> MST?
<natefinch> mountain time
<jcw4> k
<ericsnow> well, MDT right now :)
<natefinch> mountain time just sounds cool
<jcw4> I *always* get confused with that
<natefinch> ok, really going
<ericsnow> we all do :)
<thumper> ericsnow: FYI, NZ is UTC+12
<ericsnow> got it
<perrito666> ericsnow: you are then 3 hs after me i thin
<perrito666> think
<perrito666> ericsnow: what time is it there :p
<ericsnow> 15:00
<thumper> mramm: hey there, we have a scheduled meeting, are you around?
<mramm> yep
<mramm> on my way
<mramm> just finishing up with alexis/antonio/rick
<thumper> ack
<perrito666> ok, is the standup time something you can reach?
<perrito666> that would be 8AM for you
<ericsnow> should I expect to see errors when running "go install github.com/juju/juju/..."?
<ericsnow> I see a bunch of "not a package file" and a couple "undefined:" in go.crypto
<perrito666> ericsnow: you should not
<perrito666> at least I dont
<jcw4> ericsnow: you should run godeps -u dependencies.tsv
<jcw4> and you may have to go get -u missing repos depending on what godeps tells you
<jcw4> (if you don't have godeps install with: go get -u code.google.com/p/rog-go/cmd/godeps
<jcw4> )
<ericsnow> was just about to ask that :)
<jcw4> hmm, but I actually have one on launchpad too... that might be the one to install
<jcw4> go get -u launchpad.net/godeps
<jcw4> gah... I can't tell which one... rogpeppe which version of godeps should we install?
<jcw4> ericsnow: it's in CONTRIBUTING, sorry
<jcw4> install the launchpad version
<ericsnow> ah, hadn't gotten there yet (ran into the errors in README)
<jcw4> :D
<jcw4> ericsnow: you know... the new guy always gets asked to update the getting started docs...
<ericsnow> jcw4: :)
<perrito666> jcw4: I think natefinch just proposed that fix
<perrito666> sadly
<jcw4> haha
<perrito666> ericsnow: it can be worse, I got asked to fix "a small bug" and right now, a few months later, I am re-writing the whole feature
 * ericsnow didn't see anything
<perrito666> ericsnow: I think that nate did the fix and forgot to propose it
<jcw4> I saw a push directly to juju/juju master...
<jcw4> skipped the review process all together
<jcw4> :)
<perrito666> mm... that does sound like nate confusing origin and upstream again
<jcw4> Isn't there a git config you can do to prevent that?
<jcw4> I was trying to figure out how to prevent pushing to my own master branch this morning
<perrito666> jcw4: a hook perhaps
<jcw4> yeah...
<ericsnow> go is happier after I deleted $GOPATH/pkg/*
<perrito666> ericsnow: yes, go sometimes has this issue
<perrito666> are you kidding me? time.Format takes an... example? as layout?
<jcw4> perrito666: yep.
<jcw4> actually pretty cool when you use it for a while
<jcw4> :)
<perrito666> jcw4: how does it handle things like yyyymmdd?
<jcw4> forget the exact date but --- 20060102??
<jcw4> that is the beauty of the template approach... each number in the template is unique so 02 always refers to the day
<jcw4> 01 always refers to the month
<jcw4> 2006 year
<jcw4> etc.
<ericsnow> how much work should I expect outside https://github.com/juju/juju?
<rick_h_> ericsnow: ? how much work?
<ericsnow> right, it seems like there are a bunch of repos (but I expect that most of my time I'll spend in the main juju repo)
<rick_h_> right now there's some work on splitting the core work into more isolated reusable modules
<rick_h_> so right now there's not a ton, but I'd expect that to grow over time
 * rick_h_ says as someone that's not working on juju core :)
<ericsnow> exactly what I needed to know :)
<perrito666> ericsnow: expect to be around github.com/juju/*
<perrito666> but ymmv
<perrito666> ok, went far enough with my code today, EOD
<perrito666> see you all tomorrow cheers
<ericsnow> perrito666: thanks for your help today
<perrito666> ericsnow: yw, bye
<wallyworld> thumper: have you seen this before - adding a lxc container seems to get stuck in a loop doing lxc-ls http://paste.ubuntu.com/7620472/
<wallyworld> thumper: you around?
<thumper> wallyworld: hi, yes but otp
<wallyworld> sure, ok
<thumper> wallyworld: ok, off calls now
<thumper> wallyworld: it isn't stuck
<wallyworld> thumper: so cjohnston can't start up a container with local provider. logs seem to show a lxc-ls loop? http://paste.ubuntu.com/7620472/
<thumper> wallyworld: that is the machine agent I thought
<thumper> wallyworld: I have found a horrible bug in the local provider
<wallyworld> i would have expected to see lxc-start etc
<thumper> what is the bug cjohnston hits exactly?
<thumper> it may be it
<wallyworld> in the logs
<wallyworld> let me paste the bug
<wallyworld> bug 1319947
<_mup_> Bug #1319947: LXC local provider fails to provision precise instances from a trusty host - take 2 <juju-core:Confirmed> <https://launchpad.net/bugs/1319947>
<thumper> ugh...
 * thumper guesses...
<wallyworld> i don't think it's a precise or trusty thing necessarily
<thumper> no, it is a terrible communication problem
<thumper> there is a lock file created when the precise template is being written
<thumper> it takes quite a while the first time
<thumper> and people have been known to kill juju while it is doing that
<thumper> then they try to start again
<thumper> but the lock file is still there
<thumper> and it is all a bit shit
<wallyworld> ah, that rings a bell
<wallyworld> so he just needs to remove the lock file?
<thumper> easiest solution is to remove the precise template, delete the lock file, and give it 5 minutes the first time
<wallyworld> remind me the lock file path
<thumper> this is why I wanted to do the juju-local plugin to make the template image
 * thumper looks in the code
<wallyworld> sorry, i could have done that too
<wallyworld> thought you might know off the top of your head :-)
<cjohnston> /var/lock/lxc/var/lib/lxc/juju-precise-template ?
<thumper> /var/lib/juju/locks
<waigani> thumper: did you work out notifications for landing PRs?
<thumper> waigani: no
<waigani> okay, I'm keen to know too :)
<wallyworld> cjohnston: so can you try what thumper says above and let us know?
<thumper> cjohnston: I think /var/lib/juju/locks/juju-precise-template
<cjohnston> ack.. removing the lock.. how do I remove the template?
<wallyworld> delete lock file and precise template
<thumper> cjohnston: I apologise for the shitty experience
<thumper> sudo lxc-destroy -n juju-precise-template
<cjohnston> :-)
<thumper> cjohnston: although I'm assuming you aren't using btrfs
<cjohnston> no
<thumper> although that shouldn't matter if you were
<thumper> as the fs should handle the snapshotting IMO
<cjohnston> thumper: http://paste.ubuntu.com/7620671/
<thumper> wtf?
<cjohnston> I try hard :-)
<cjohnston> (to break things)
<rick_h_> cjohnston: btrfs?
<rick_h_> cjohnston: I had that once before due to some btrfs issue
<cjohnston> nope
<thumper> cjohnston: I have no idea why that would fail, I've never had that before
<wallyworld> rm -rf :-)
<thumper> cjohnston: it isn't running is it?
<thumper> although should have had a different error if it was
<thumper> sudo lxc-ls --fancy
<cjohnston> STOPPED
<thumper> hmm
<thumper> and lxc-destroy fails?
<cjohnston> hrm.. the second time it worked
<thumper> \o/
<thumper> kinda
<cjohnston> thumper: just do a juju add-machine now and let it hang out for a while?
<thumper> cjohnston: yep
<thumper> cjohnston: one thing you could do
<thumper> is look at the running cloud init
<thumper> which should be in...
<thumper> /var/lib/juju/containers/juju-precise-template/console.log I think
<thumper> yep, that one
<thumper> although it's owned by and only readable by root
<cjohnston> what am I looking for there?
<thumper> cloud-init boot finished at Wed, 05 Mar 2014 21:37:44 +0000. Up 417935.05 seconds
<thumper> followed shortly after by:
<thumper>  * Will now halt
<thumper> once that happens, the template has been created, and will then be cloned
<cjohnston> ok
<thumper> and expect a few seconds of intense disk I/O
<thumper> as the new container starts up
<thumper> should be much faster after that
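One way to watch for the milestones thumper lists, using the log path he quotes (which he later second-guesses, so treat it as an assumption); needs root:

```shell
# Follow the template's console log until cloud-init reports it is done.
sudo tail -f /var/lib/juju/containers/juju-precise-template/console.log
# wait for a line like:
#   cloud-init boot finished at <date>. Up <n> seconds
# followed shortly by "* Will now halt" - at that point the template is
# built, and cloning (plus a burst of disk I/O) begins
```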
<cjohnston> "1" agent-state: started!
<cjohnston> thumper: fwiw, root root 0 May  9 12:09 /var/lib/juju/containers/juju-precise-template/console.log
<thumper> hmm...
 * thumper thinks
<thumper> I may be looking at an older log file...
<thumper> I recall doing something strange with it, will have to look
<thumper> cjohnston: but it is working?
<cjohnston> I don't know how much it matters, as things appear to be working now
<cjohnston> thanks much for your help thumper and wallyworld !
<cjohnston> jcastro: if you are still having problems, read the backscroll
<wallyworld> thumper did most of it
 * wallyworld will close the bug
 * thumper goes to the gym
 * cjohnston goes to celebrate
#juju-dev 2014-06-10
<jcastro> cjohnston, man, thanks for that
<cjohnston> :-)
<menn0> review please: https://github.com/juju/juju/pull/51
<menn0> really keen to get this one landed
<waigani> thumper: I'm writing the cmd branch, is there an example of mocking out the server api that you can point me to?
<thumper> debug-log mocks out the api
<waigani> thumper: cheers
<thumper> menn0: just one comment on the PR
<menn0> thumper: thanks. I'll add some comments along those lines.
<thumper> menn0: the guts of the problem is that they looked like they weren't needed, and if they are for some reason, there should be a comment explaining why
<wallyworld> axw: morning. do you know if any of the recent mongo related session fixes may address bug 1305014?
<_mup_> Bug #1305014: panic: Session already closed in TestManageEnviron <intermittent-failure> <test-failure> <juju-core:Triaged by rogpeppe> <https://launchpad.net/bugs/1305014>
<axw> wallyworld: don't think so
<menn0> thumper: understood
<wallyworld> ok, ta
<axw> still seeing them in CI
<menn0> thumper: if API versioning lands soon then we might be able to avoid these backwards compatibility shenanigans altogether
 * thumper nods
<davecheney> thumper: https://github.com/juju/names/pull/2
<thumper> davecheney: hmm, weren't we going to squish commits before proposing?
<thumper> axw, wallyworld: was there a definitive answer?
<thumper> also, can someone tell me how to do it again?
<wallyworld> thumper: sort of optional, devs call
<wallyworld> rebase -i
<davecheney> thumper:  i have no idea
<davecheney> is this documented ?
<wallyworld> iow, i don't think there's necessarily definitive agreement
<wallyworld> yes, in the contributing doc
<axw> thumper davecheney: https://github.com/juju/juju/blob/master/CONTRIBUTING.md#proposing
<davecheney> which one do I pick ?
<davecheney> seriously
<davecheney> i don't care about this rebase stuff
<davecheney> i opt to not do it
<wallyworld> there's no one correct answer
<davecheney>  % git rebase -i --autosquash master
<davecheney> Successfully rebased and updated refs/heads/103-introduce-tags-type.
<davecheney> oh, i said don't do it and it did it anyway
<davecheney> c'est la vie
<wallyworld> some people like to rebase stuff out and throw away history, others don't
<axw> use "fixup" for things like "go fmt"
<wallyworld> i didn't think we could get agreement on the list
<davecheney> i'm in the latter category
<wallyworld> me too
<wallyworld> bzr makes it so you don't need to care
<bodie_> thumper, it started making more sense to me when I realized rebase is actually for replaying your commits on top of another branch (such as master) after pulling latest upstream content
<wallyworld> github's lack of support for pipelines really makes me sad
<wallyworld> totally screws up how i work
<thumper> bodie_: sure, with bzr I just used to merge trunk
<thumper> the git way seems to be rebase
<wallyworld> so now i can't propose more than 1 branch at a time
<wallyworld> so have all this stuff left pending
<bodie_> so, when you rebase, if you pass the -i (interactive) flag, it will also allow you to condense your noisy fix commits into a few more meaningful commits, for git log purposes
<bodie_> as well as replaying them on top of master
<wallyworld> thumper: git way = throw away history, no thanks :-(
<davecheney> bodie_: so you have to have a commit, at the bottom of the list
<wallyworld> one person's noise is another person's history
<davecheney> which has the description that you want ?
<davecheney> then you chuck all the others ?
<bodie_> not necessarily at the bottom, it will just take your commit history that deviates from master out of the way, then apply master's latest changes, and stick your changes at the end
<davecheney> that sounds insane
<wallyworld> yep :-(
<wallyworld> welcome to the New World
<axw> davecheney: the rebase commands describe what they do. some discard messages, others squash them together, others you can amend
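The `fixup!` convention axw is referring to can be seen end to end in a throwaway repo. Every file name and commit message below is invented for illustration; `GIT_SEQUENCE_EDITOR=:` just accepts the generated todo list so no editor opens.

```shell
# Sketch: --autosquash melds a "fixup! <msg>" commit into the commit <msg>.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo a > file.txt
git add file.txt
git commit -qm "initial commit"
git tag base                        # remember the fork point
echo b >> file.txt
git commit -qam "add feature"
echo c >> file.txt
# the "fixup!" prefix tells --autosquash to meld this into "add feature"
git commit -qam "fixup! add feature"
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash base
git log --oneline base..HEAD        # only "add feature" remains
```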
<bodie_> I think it has to do with the way Git thinks of history chronologically.  that way, your commits appear when you rebase them, instead of scattered through a pack of other people's commits
<bodie_> also, condensed into a meaningful log entry or two
<davecheney> i don't understand
<davecheney> if I want to 'condense' something
<davecheney> i'll just take a diff
<davecheney> make a new branch
<davecheney> apply it and propose
<rick_h_> menn0: ty for the pull request, landed
<davecheney> this whole rebase nonsense sounds like a solution that was looking for a problem
<bodie_> that's one way
<bodie_> that would accomplish basically the same thing, I suppose
<davecheney> thumper: ty
<thumper> np
<bodie_> if you're a visual person, http://git-scm.com/book/en/Git-Branching-Rebasing has some really good diagrams that explain it
<davecheney> big branch coming on juju/juju
<rick_h_> davecheney: but you can reorder commits, perform work across commits, remove them. imo nice for a lot of things once you get used to it
<bodie_> ^
<davecheney> rick_h_: why do I need that
<bodie_> the --interactive stuff can get interesting
<davecheney> reorder commits ?
<davecheney> change source code, but not inside an editor ?
<rick_h_> but I went from bzr -> hg -> git and have been using it outside for a couple of years so maybe I've learned some brain damage :P
<davecheney> maybe juggle some chainsaws blindfolded at the same time for extra dexterity points
<bodie_> http://git-scm.com/figures/18333fig0339-tn.png speaking of juggling chainsaws
<davecheney> this all sounds like a list of things you "could" do, but not a list of things that you "should" do
<rick_h_> davecheney: you're not willing to admit that perhaps the rest of the world might know a little something and the 'different' is biting some devs around here a bit? :P
<davecheney> rick_h_: i'm sure it's my closed minded attitude
<axw> rick_h_: do you have any suggestions for people used to bzr pipelines?
<bodie_> the thing with Git is, if you just merge over and over, your log history turns into a giant nightmarish sea of mini forks and chronologically meaningless noise
<rick_h_> not enough haskell in the script for you? :)
<davecheney> but I've never come across any reason do any of the things that bodie_ proposed
<rick_h_> axw: I never got into pipelines. I'd just do branches off branches and use cherry-pick to move commits among the branches of work
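The cherry-pick habit rick_h_ describes, sketched in a scratch repo (branch and file names invented):

```shell
# Replay a single commit from one branch onto another with cherry-pick.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > notes.txt; git add notes.txt; git commit -qm "initial commit"
git branch branch2                  # a second line of work forks here
echo fix > fix.txt; git add fix.txt; git commit -qm "small fix"
fix=$(git rev-parse HEAD)           # the one commit we want to move
git checkout -q branch2
git cherry-pick -x "$fix"           # replay just that commit on branch2
git log --oneline                   # branch2 now has "small fix" too
```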
<axw> mk
<bodie_> I bet if you brought it up with the folks in #git they'd be thrilled to have some cultural challenge
<bodie_> ;)
<bodie_> it's actually a strangely kind arena
<wallyworld> rick_h_: that sounds error prone and a pain in the arse
<rick_h_> davecheney: oh, the 90% use case for me is just 'feature branch...work work...commit so I don't lose crap...work work...rebase to make it all clean with nicer commit messages, pull request, make tweaks, rebase that clean, and then land
<rick_h_> davecheney: so reordering commits and such tends to just be when someone keeps a long running branch going and needs a better merge to go down
<bodie_> yep, that's the usual way.  when done properly and with very clear commit messages, it lends itself to a really great log
<wallyworld> with rewritten history :-(
<rick_h_> yea, have to get past that. I find no value in "lint lint lint"
<rick_h_> in  1.5yrs of doing launchpad bzr work I only looked at the nested commits one time in a merge with trunk
<rick_h_> I think that "every little thought the dev had in his head" is supremely over valued
<wallyworld> that's the point, bzr handles that for you
<rick_h_> but I admit that's me
<rick_h_> wallyworld: right, but to what end? I mean really. bzr came out of the box in a way that no one ever used it.
<wallyworld> huh?
<wallyworld> no one?
<rick_h_> and then you had to explain crap like lightweight checkouts and pipelines and colo and all that
<bodie_> speaking of good commit messages -- https://github.com/erlang/otp/wiki/Writing-good-commit-messages
<wallyworld> rick_h_: so how do you do stuff like propose 2 branches where branch 2 depends on branch 1, you have done branch 1, waiting for review, then want to also put up branch 2
<rick_h_> yes, you had to know all kinds of bzr voodoo to actually have it not suck and spend 30min branching
<rick_h_> wallyworld: git co featurebranch1
<wallyworld> i feel the same about git voodoo
<rick_h_> git push origin featurebranch1
<rick_h_> git co -b featurebranch2
<rick_h_> worky worky worky
<rick_h_> "oh, a change for feature branch 1 is there"
<wallyworld> rick_h_: sure but how do you propose branch 2 without the diff for branch 1 being included in the pr
<rick_h_> git co featurebranch1; vim file.py; git push origin...; git co branch2; git rebase branch1
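rick_h_'s stacked-branch flow, expanded into a runnable sketch in a scratch repo (file names and messages invented): branch 2 is built on branch 1, review feedback lands on branch 1, and branch 2 catches up by rebasing.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo trunk > trunk.txt; git add trunk.txt; git commit -qm "trunk"
git checkout -qb featurebranch1
echo one > f1.txt; git add f1.txt; git commit -qm "feature 1"
git checkout -qb featurebranch2            # stacked on branch 1
echo two > f2.txt; git add f2.txt; git commit -qm "feature 2"
# "oh, a change for feature branch 1 is there"
git checkout -q featurebranch1
echo one-fixed > f1.txt; git commit -qam "address review on feature 1"
# branch 2 picks the fix up by rebasing onto the updated branch 1
git checkout -q featurebranch2
git rebase -q featurebranch1
cat f1.txt                                 # branch 2 now sees the fix
```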
<wallyworld> since github has no pre-req support
<rick_h_> wallyworld: that's the issue with github reviews. reviewboard supports that
<bodie_> see also http://git-scm.com/book/ch5-2.html for more ugly merge history diagrams
<wallyworld> is that what gui uses?
<wallyworld> should we switch to review board?
<rick_h_> wallyworld: no, I'm trying it out, but find it doesn't match up well to the github workflow. So it's a fundamental change I'm not sure I want to go down
<axw> I think we should switch to *something* eventually, GitHub review is a bit sucky
<thumper> axw: I looked at stg stacked git briefly
<thumper> and discarded it as something I wanted to do
 * thumper will work manually for a while
<axw> ah yeah
<wallyworld> stacked git is for patches
<rick_h_> https://rbcommons.com/s/jujuui/dashboard/
<rick_h_> wallyworld: happy to invite in a ccouple of people if you want to test it out
<wallyworld> rick_h_: so right now, i'm really hamstrung by how sucky the tooling is compared with bzr + lp
<rick_h_> paying for it for a month to try it out and see if it's something we want to use
<wallyworld> have you tried gerrit?
<thumper> davecheney: do we not handle merges to juju/names through the lander?
<rick_h_> wallyworld: definitely, bzr + lp was built for our workflow and our workflow was built to bzr + lp
<axw> thumper: only juju/juju atm
<axw> more later
<thumper> axw: ok, ta
<rick_h_> wallyworld: but the rest of the world doesn't agree unfortunately. I know hg has some other concepts, but even that is having issues gaining ground
<wallyworld> rick_h_: well, i wouldn't have thought our workflow of several pieces in place at once was unique
<wallyworld> i mean, do git people really only work on one thing at once?
<wallyworld> surely not
<rick_h_> wallyworld: well we're unique in a lot of ways. Others just deal with the tool, or use things like quilt/etc I think.
<thumper> ah fark...
<rick_h_> wallyworld: most of them aren't doing code reviews and landings and gated lands and all that
 * thumper needs to dive deeper
<rick_h_> wallyworld: they put it up and it lands
<wallyworld> rick_h_: wtf? so they just land stuff unreviewed and untested?
<wallyworld> you have to be kidding
<rick_h_> wallyworld: why do you think I work here? :)
<wallyworld> seriously?
<rick_h_> even big projects don't do a CI run post-review and pre-land like we do.
<wallyworld> i'm shocked
<davecheney> rick_h_: none of atlassian's products handle pre commit CI
 * wallyworld is just astonished
<rick_h_> wallyworld: when I brought it up to travisci they didn't feel it was their ball of wax. No one else I could find doing it.
<davecheney> land change, send it to ci
<davecheney> add it to the review queue for someone to ignore
<bodie_> I don't hate the git review
<bodie_> but, that may just be me
<rick_h_> davecheney: yea, we're pretty unique in the overall workflow
<rick_h_> which is kind of sad once you live in our world for a bit
<davecheney> the martini framework does precommit ci with werker
<rick_h_> bodie_: yea, honestly a lot of it is that you don't know what you don't have until you use something better
<axw> bodie_: Rietveld had some nice things for long on-going reviews. you could see the difference between two patchsets for example. I miss that
<rick_h_> but I never got into bzr. I did a happy dance when we got to move to git
<wallyworld> rick_h_: or worse for me, using something great and being forced to use something crap
<bodie_> yeah, I enjoyed what I saw of Rietveld
<rick_h_> maybe it just fits my brain better, I even wrote two bzr articles for python magazine back in the day
<rick_h_> wallyworld: with cherry pick running two branches dependent on each other isn't that bad tbh
<rick_h_> <3 cherry pick and bisect
<bodie_> not familiar w/ bisect
<wallyworld> rick_h_: sure, but it's way more manual than bzr pipelines and you can't get stuff reviewed
<rick_h_> those two are worth their weight in gold
<rick_h_> wallyworld: but you can do a pull request with any parent
 * rick_h_ wonders if you can git pr from another git pr
<wallyworld> rick_h_: i did but it doesnt go against juju/juju
<rick_h_> I think it has to be a real branch though
<wallyworld> it went against the parent
<rick_h_> wallyworld: right, but wtf is the point anyway. If branch 1 isn't ok you can't get signoff on branch 2
<wallyworld> rick_h_: sure you can
<wallyworld> we did it all the time in lp
<rick_h_> is it really really an issue that blocks you for a significant amount of time?
<wallyworld> yes :-(
<wallyworld> cause if it takes a day for branch 1 to get reviewed and landed, i'm blocked on putting up branch 2
<wallyworld> so the reviews are serialised
<wallyworld> and i have to manage it all locally without nice bzr pipeline support
<rick_h_> It seems to me they should be. An ok on branch 2 while changes are still made to branch 1 that could affect branch 2 seems scary to me.
<rick_h_> but yea, your killer feature is a bzr plugin, time to get a git plugin :P
<rick_h_> or use a diff review tool that can take the diff vs a github pull request
<wallyworld> depends right. if branch 1 is api calls used by branch 2, then branch 1 can change and branch 2 doesn't care
<wallyworld> sadly right now we are stuck with what github offers out of the box
<rick_h_> how can the api calls not affect the second one?
<rick_h_> you're assuming there's no bikeshedding around those calls :P
<rick_h_> wallyworld: right, but your beef is with github not git in this case. So move beyond github.
<wallyworld> if branch 1 gets a comment saying fix this implementation bit, then branch 2 doesn't care
<wallyworld> i hate git too for other reasons
<rick_h_> wallyworld: but as the reviewer on branch 2 I don't want to LGTM until I'm sure branch 1 won't change
<rick_h_> wallyworld: :)
<wallyworld> see the canonical tech thread for comments by others that sum up the issues
<rick_h_> I don't doubt it
<wallyworld> by why not lgtm branch 2? it is correct as is
<wallyworld> we did it all the time in lp
<rick_h_> wallyworld: because nothing says the branch I reviewed will actually make it.
<rick_h_> if there is a bikeshed over the api itself, and branch 2 has to change, but I already gave it a LGTM
<wallyworld> the author can ask for another review if stuff changes significantly as a result of branch 1 changing
<rick_h_> wallyworld: yea, I don't say one is better than the other, but that you lose some gain some by the different tools
<wallyworld> we have lost significant velocity
<davecheney> lucky(~/src/github.com/juju/juju) % git rebase -i --autosquash master
<davecheney> Cannot 'squash' without a previous commit
<davecheney> lucky(~/src/github.com/juju/juju) % git rebase -i --autosquash master
<davecheney> It seems that there is already a rebase-merge directory, and
<davecheney> I wonder if you are in the middle of another rebase.  If that is the
<davecheney> case, please try git rebase (--continue | --abort | --skip)
<davecheney> If that is not the case, please rm -fr "/home/dfc/src/github.com/juju/juju/.git/rebase-merge"
<davecheney> and run me again.  I am stopping in case you still have something
<davecheney> valuable there.
<davecheney> well fuck
<davecheney> really hate this
<rick_h_> git works sans plugin management, fast, and gives me multiple remotes, cherry-pick, bisect (priceless for chasing bugs)
<rick_h_> davecheney: git rebase --abort is safe
<rick_h_> then git diff master to see what's up. If master was updated you'll have to rebase it with your feature branch
<rick_h_> git rebase master && git rebase -i
<rick_h_> I tend to leave off the autosquash just so I can see what it's doing.
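rick_h_'s claim that `git rebase --abort` is safe can be checked in a scratch repo: start a rebase that hits a conflict, abort it, and the branch comes back exactly as it was (names invented).

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
trunk=$(git symbolic-ref --short HEAD)   # works whatever the default branch is called
echo v1 > conf.txt; git add conf.txt; git commit -qm "initial"
git checkout -qb feature
echo feature-edit > conf.txt; git commit -qam "feature change"
git checkout -q "$trunk"
echo trunk-edit > conf.txt; git commit -qam "conflicting trunk change"
git checkout -q feature
git rebase -q "$trunk" || echo "rebase stopped on a conflict"
git rebase --abort                       # puts the branch back exactly as it was
cat conf.txt                             # nothing was lost
```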
<davecheney> sorry folks, this is insane
<davecheney> if someone wants to teach the bot to squish things go for it
<davecheney> but i want no part of this
<rick_h_> I thought you all weren't going to use rebase?
<wallyworld> rick_h_: git forces all of these extra mental steps to understand the internals, kinda like outside classes looking into something's implementation. yuk
<rick_h_> wallyworld: I'll admit I didn't care for git until I read the oreilly book that got into internals and it 'clicked' a bit for me
<rick_h_> but then bzr never 'clicked' for me so I admit to being strange
<wallyworld> but why should anyone have to understand the internal implementation details
<rick_h_> if I build a house of wood I should understand how wood works
<wallyworld> i don't have to understand how an engine works to drive my car
<rick_h_> imo
<rick_h_> you need to know enough to debug 'does it have gas or is the oil low'
<rick_h_> so I argue that's not quite right
<bodie_> it helps to know why you shouldn't drive with your foot on the clutch
<wallyworld> looking at some dashlights is not the same as having to understand how a crankshaft works
<wallyworld> bzr allowed the former
<wallyworld> git forces the latter
<wallyworld> way too many footguns
<rick_h_> http://www.reviewboard.org/docs/rbtools/dev/rbt/commands/post/#tracking-branches wallyworld for the reviewboard ability to submit a review that's a diff of another branch
<wallyworld> ta
<rick_h_> wallyworld: the great thing is that there's the reflog so no matter how well you aim the gun you can get your foot back
<bodie_> perhaps we should be doing merge --squash on the bot?
<bodie_> that way it doesn't really matter how the PR's come in.  I don't like it, but it's an option
<wallyworld> rick_h_: good to know
<wallyworld> bodie_: but does that throw away history
<rick_h_> bodie_: the issue is that they often have longer running branches and I'm not sure they're rebasing those feature branches vs merging from trunk so they'll get a lot more conflicts and crappy intermingled commits that are impossible to rebase easily
<bodie_> wallyworld, yes, but things that throw away history are the subject matter, really
<wallyworld> rick_h_: so rebase vs merge. which the f*ck do i use. in bzr it just worked without having to worry
<rick_h_> I think roger has had to update from conflicts with trunk trying to land his last 4 branches in a row. Hard to deal with using auto tools
<rick_h_> wallyworld: rebase will roll back your commits, update the fork point to the current head, and then reapply your commits one at a time, resolving any conflicts
<rick_h_> wallyworld: but it appears in the history that your first commit wasn't until after the latest one in the trunk branch
<bodie_> wallyworld, depends what you're trying to do.  generally speaking, when opening a PR, you want to rebase -i against master to condense your potentially noisy commit log into something more atomic and useful to the master log
<rick_h_> wallyworld: merge just says "take X and get it into Y and interweaving of commits be damned"
<rick_h_> wallyworld: then that gets into why you might want to move commits around and reorder them/etc
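The two history shapes rick_h_ contrasts can be shown side by side in a scratch repo (names invented): the merge route leaves a merge commit where the histories fork and rejoin, while the rebase route replays the same work so the log reads as a straight line.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
trunk=$(git symbolic-ref --short HEAD)
echo t1 > trunk.txt; git add trunk.txt; git commit -qm "trunk 1"
git checkout -qb via-merge
echo f1 > feature.txt; git add feature.txt; git commit -qm "feature work"
git branch via-rebase                    # identical branch for the rebase route
git checkout -q "$trunk"
echo t2 >> trunk.txt; git commit -qam "trunk 2"   # trunk moves on underneath
git checkout -q via-merge
git merge -q -m "merge trunk" "$trunk"   # history forks and rejoins
git checkout -q via-rebase
git rebase -q "$trunk"                   # same content, linear history
git log --oneline --graph via-merge
git log --oneline --graph via-rebase
```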
<wallyworld> whereas bzr handled that nicely by keeping your commits off to the side
<bodie_> (after pulling your latest master from upstream)
<rick_h_> wallyworld: it's a sign of using merge vs rebase
<rick_h_> wallyworld: right, git doesn't have that. Never will. That's the one thing you can't have. :)
<wallyworld> which is why i hate it sooo much
<rick_h_> wallyworld: have to move past that one thing. It's the big blocker I know.
<bodie_> we-e-e-lll... you could always keep your feature dev branches around, and clone them for rebasing.
<wallyworld> i guess rebase is the right workflow then for keeping your feature branch up to date
<rick_h_> just like I can't have C speeds in python and have to use Go, which means I lose exceptions. Have to get over it.
<wallyworld> it's more than just that one thing
<rick_h_> different tools provide different + and different -
<wallyworld> that's just the current topic :-)
<davecheney> lucky(~/src/github.com/juju/juju) % gb ./...
<davecheney> state/apiserver/apiserver.go:16:2: cannot find package "github.com/bmizerany/pat" in any of: /home/dfc/go/src/pkg/github.com/bmizerany/pat (from $GOROOT)
<davecheney> umm
<davecheney> how did this dependency creep into the code ?
<rick_h_> lol
<wallyworld> nfi
<davecheney> not cool
<davecheney> has anyone checked the licence on it ?
<rick_h_> speaking of features...dep management wtf
<wallyworld> you mean in go projects?
<bodie_> oy, don't get me started
<rick_h_> wallyworld: yea, every time I try to get my head into go and start trying to use it I hit that and my brain shuts right down
<wallyworld> rick_h_: if you think i feel strongly about bzr vs git ......
<rick_h_> wallyworld: anyway, sorry to poke the bear and stir up the angst. waaaay past my EOD and past my bed time.
<wallyworld> np, thanks for discussing
<rick_h_> thanks menn0 again for the patch.
<axw> wallyworld: there's a good use-case for a rebase in https://github.com/juju/juju/pull/50
<wallyworld> rick_h_: there'll be another patch coming to improve the initial bot message when a pr is picked up
<axw> wallyworld: do "git rebase -i" and just delete the lines with the commits you don't want in it
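axw's suggestion — delete a commit's line from the `git rebase -i` todo list to drop it — can be demonstrated non-interactively with sed standing in for the editor (GNU sed; all names invented). Here a pair of commits that cancel each other out is removed:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo start > work.txt; git add work.txt; git commit -qm "initial"
git tag base
echo change >> work.txt; git commit -qam "real change"
echo debug > debug.txt; git add debug.txt; git commit -qm "add debug cruft"
git rm -q debug.txt; git commit -qm "remove debug cruft"
# delete both todo lines mentioning the cruft commits; removing a line
# from the todo list drops that commit
GIT_SEQUENCE_EDITOR='sed -i "/debug cruft/d"' git rebase -i base
git log --oneline base..HEAD       # only "real change" survives
```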
<rick_h_> wallyworld: coolio, yea it was definitely a MVP and love seeing it improved by others as well.
<wallyworld> axw: but i want them all - theyre history
<wallyworld> rick_h_: we do appreciate having it, thank you :-)
<axw> wallyworld: eh? you've got a commit in there and a commit that undoes it
<rick_h_> wallyworld: ugh, but the merges to update with trunk and fix conflicts should go and be a rebase :P
<wallyworld> why? it's history
<bodie_> but not necessarily history that someone who wants to understand juju's dev history needs to understand
<wallyworld> history should be read only
<rick_h_> the two merges of master add no value to anyone at all. If you rebase, the branch appears just like you branched master, did 3 commits, and landed it
<bodie_> I agree with that in theory, though.  so I'm actually somewhat conflicted about rebase and never used it much before
<rick_h_> and I don't care that you merged master and fixed conflicts I didn't ever know existed or cared to know
<axw> yeah, once committed. I see no value. difference of opinion I suppose...
<bodie_> however, getting familiar with it right now has made me much comfier with the idea
<rick_h_> seriously, what value is that merge and fixed conflicts in a branch no one else is working on with you?
<wallyworld> i guess if the general consensus is for rebase then i need to do it
<rick_h_> if you want to say it's good because you and axw were both working on that branch at the same time, ok maybe...
<wallyworld> imagine if financial institutions could do that
<axw> wallyworld: IMO it is only confusing. I only want to see commits that I'm expected to review
<wallyworld> rewrite audit logs
<rick_h_> juju core's source is not a financial institution :P
<rick_h_> you work on him axw :)
<bodie_> he has a point, rebasing can nuke useful blame data
<axw> heh :)
<wallyworld> axw: you review the diff, not the commits?
<rick_h_> bodie_: any rebasing will lead to a commit. You can always bisect and find a blame point in a single chunk of diff
<wallyworld> the reviewer looks at the final diff, the commits are the audit log
<axw> wallyworld: depending on the size of the review, that can be quite onerous
<waigani> thumper: I'm blocked on all my branches now. Waiting on final reviews for https://github.com/juju/juju/pull/35 and https://github.com/juju/juju/pull/22
<wallyworld> which is why we keep diffs small
<wallyworld> and use pipelines
<thumper> waigani: ok, will look shortly
<wallyworld> large diffs are very bad, cause if there's one issue, the whole lot gets blocked
<rick_h_> wallyworld: +1000 finally we can agree on something :P
<wallyworld> with pipelines, the downstream ones can still get lgtm
<wallyworld> independent of the upstream
<axw> wallyworld: agree on that. can you explain to me why I care about seeing you do something and then revert it though?
<rick_h_> plus large diffs means long running branches which means lots of conflicts and wasted time and loss of scope, wasted energy, and lack of potential to parallelize work
<wallyworld> yep
<wallyworld> so who cares about the commits then
<wallyworld> just look at the diff
<wallyworld> and let the commits be as they may
<rick_h_> wallyworld: then why do you care if they go anywhere :P
<axw> people may want to cherry pick things
<waigani> wallyworld: any plans for a pipeline workflow with git?
<axw> they should be independent units of work for that purpose
<wallyworld> rick_h_: so i can use them later if needed
<rick_h_> for what? that's what I mean. what use is https://github.com/wallyworld/juju/commit/1a799afdd8133c5d63031df225e7f5f4b63a0218 ?
<wallyworld> axw: in lp+bzr, we tended to cherry pick the final revision merged in
<axw> sure, but we don't have that luxury here
<wallyworld> why not?
<wallyworld> we can cherry pick a single merge?
<axw> well I suppose you can pick out the merge one
<axw> yeah ok, you can do that.
<wallyworld> rick_h_: i may not want to look at that particular one, but it's still history
<wallyworld> and shows the thought process and evolution of the branch
<rick_h_> wallyworld: stop moving your argument on me. First commits aren't important, then you might want the commit, then "it's history"
<wallyworld> rick_h_: they're not important for doing the review per se (necessarily)
<wallyworld> just use the diff for that
<wallyworld> but, they are important for history
<rick_h_> and git bisect can narrow any bug/issue down to a single commit
<rick_h_> which can get you the diff that introduced the issue
<wallyworld> well, i think of the diff that merged to trunk as that
<wallyworld> which may be composed of several working commits
<rick_h_> http://robots.thoughtbot.com/git-bisect
<rick_h_> sure, but if the diff is small, and you keep a couple of the important commits in the history
<rick_h_> it's not an issue at all
<rick_h_> after all, you just said we'd agree to keep the diffs small :)
<wallyworld> the whole review generally should be small
<wallyworld> < 800 lines was the lp guideline
<rick_h_> +1 we use c400 as the initial guide
<wallyworld> if i want to bisect something, i'll do it at the granularity of the dev's final commit to trunk
<rick_h_> over that you need 2 reviews
<axw> wallyworld: there's actually a tool, "git bisect" that runs a command to find a commit that introduces a bug
<axw> I don't think you can choose
<wallyworld> ok, well in that case it can find the commit. i don't care if the commit is for a go fmt or whatever
<wallyworld> it just finds the commit
<rick_h_> wallyworld: well anyway. Try to let go of what's lost, look forward to what's gained, and don't let things make you so cranky :P
<wallyworld> rick_h_: sure, i'm still in mourning and trying to transition
<wallyworld> and discussing these things is healthy cause the new tooling sucks and we do need to fix it or else velocity will suffer
<rick_h_> wallyworld: there are things we can do better, but there's a certain amount of wasted energy in trying to stick too hard to what 'used to work'
<wallyworld> just at the moment i'm having trouble figuring out what's gained, apart from "everyone else uses it"
<rick_h_> http://devopsdays.org/events/2013-telaviv/proposals/Jenkins%20and%20automatic%20multibranch%20GIT%20support/ is interesting
<wallyworld> rick_h_: we just discussed above gated commits etc etc - that's what 'used to work' and we needed to (and need to) get to the same place
<rick_h_> wallyworld: definitely, that's something we needed to find a way to do better.
<menn0> rick_h_: sorry, just saw your messages. Thanks for getting the change in. Do I need to deploy it now or have you done that?
<rick_h_> wallyworld: but the nested commit history isn't coming from anywhere.
<wallyworld> and so i would argue the other stuff like pipelines etc falls into the same category
<rick_h_> menn0: I don't have access to your jenkins install so that's up to someone on your end
<rick_h_> wallyworld: well I said pipelines can be done using a diff review tool
<rick_h_> wallyworld: that's not a git issue
<menn0> rick_h_: ok cool
<axw> menn0: I think mgz has some changes to push up to trunk too, so check with him
<wallyworld> rick_h_: there's disagreement on the list about throwing away history, so it's not just me pushing this line :-)
<rick_h_> wallyworld: definitely, fortunately there's more than one way to do it and it's completely possible to not throw away history
<wallyworld> rick_h_: yeah, i'm not just talking about git, but the whole tooling set up
<rick_h_> wallyworld: I'll not argue that there's more than one way to do the history stuff
<menn0> axw: ok np. I'll wait.
<wallyworld> rick_h_: we need to do this over a beer :-)
<wallyworld> not irc :-)
<rick_h_> wallyworld: another +1000 :P in germany see you there
<wallyworld> looking forward to it
<rick_h_> hmm, maybe I'll learn to like beer if I drink german beer
<rick_h_> it's supposed to be all kinds of awesome right?
 * wallyworld doesn't like beer either :-)
<wallyworld> we'll discuss over a red
<wallyworld> rick_h_: so if i want to do a 3rd branch in my pipeline off branch 2, i should "git checkout branch2; git rebase master" first? and then git checkout -b branch3 ?
<rick_h_> wallyworld: the rebase is just to sync with trunk before you put it up for review.
<wallyworld> what if i want to ensure trunk is all there before i start branch 3
<rick_h_> wallyworld: you can do it whenever, but I just sync my feature branch before review to make sure it's clean before CI takes it and tests it
<rick_h_> wallyworld: you can do it at any point in time
<wallyworld> cool, cause i may need stuff in trunk etc
<wallyworld> or want to avoid conflicts later
<rick_h_> rgr
<wallyworld> actually, i think i need git rebase upstream master
<wallyworld> or origin master
<rick_h_> I find that it works well to create my feature branch from trunk, work on feature, get it all through and working/test passing. Then catch up with trunk, fix stuff, and then put it up for review
<wallyworld> or something
<rick_h_> wallyworld: heh, yea we have a shortcut for that
<rick_h_> wallyworld: https://github.com/juju/juju-gui/blob/develop/HACKING.rst#syncing-your-feature-branch-with-develop-trunk
<wallyworld> rick_h_: i can't create my next feature branch from trunk cause it has to come off branch 2
<rick_h_> wallyworld: it uses the sync-juju alias (we use develop as our master)
<wallyworld> rick_h_: i looked at that but from memory you guys use a develop branch also
<rick_h_> wallyworld: but you can rebase branch 2 from trunk, then create branch 3 from branch 2
<wallyworld> we don't do that
<rick_h_> wallyworld: right, so s/develop/master
<wallyworld> ok, ta
<rick_h_> master is just sacred in our world. No one touches master but on release day
<wallyworld> i'm getting there
<rick_h_> ok, seriously bed time night
<wallyworld> night
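For reference, the flow rick_h_ describes above (rebase the pipeline tip on trunk, then start the next branch off it) can be sketched end-to-end. This is a self-contained demo in a throwaway repo; `shared` stands in for the upstream repo and the branch names are made up:

```shell
#!/bin/sh
set -e
# Self-contained sketch of the pipeline flow discussed above.
# "shared" stands in for the upstream repo; branch names are made up.
tmp=$(mktemp -d) && cd "$tmp"

git init -q shared
(cd shared &&
 git config user.email dev@example.com && git config user.name dev &&
 echo one > file && git add file && git commit -qm 'initial commit' &&
 git branch -M master)

git clone -q shared work && cd work
git config user.email dev@example.com && git config user.name dev
git checkout -qb branch2
echo two >> file && git commit -qam 'branch2 work'

# Meanwhile trunk moves on.
(cd ../shared && echo trunk > other && git add other && git commit -qm 'trunk change')

# Sync branch2 with trunk, then start branch3 from it.
git fetch -q origin
git rebase -q origin/master
git checkout -qb branch3
git log --oneline
```

The rebase step is optional at any point; as rick_h_ says, the main thing is to sync before putting the branch up for review.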
<menn0> thumper: updated agent status branch pushed. LGTY?
 * thumper looks
<thumper> um... pushed where?
<thumper> the PR looks the same
<menn0> look again... it seems like the GH web interface took a little while to see the change (the git push from my machine was done)
<menn0> thumper: ^^
<wallyworld> axw: do you know how i make it so that "git push" will magically know to push to "https://github.com/wallyworld/juju/<foo>" without needing to type in the full path?
<axw> wallyworld: push.default=simple -- see menn0's email from a couple of days ago
<axw> sorry, =current
<wallyworld> axw: ah, i had simple
<wallyworld> i wanted to use the non-deprecated behaviour
<wallyworld> there must be a way of doing it with the new behaviour you would think
<wallyworld> axw: ah balls, that just pushed to github.com/juju/foo :-)
<axw> wallyworld: what's deprecated?
<wallyworld> not github.com/wallyworld/juju/foo
<axw> wallyworld: oops.. is your origin not set correctly?
<wallyworld> [environ-resource-catalogs]ian@wallyworld:~/juju/go/src/github.com/juju/juju$ git remote -v
<wallyworld> origin  https://github.com/wallyworld/juju (fetch)
<wallyworld> origin  https://github.com/wallyworld/juju (push)
<wallyworld> upstream        https://github.com/juju/juju.git (fetch)
<wallyworld> upstream        https://github.com/juju/juju.git (push)
<axw> hm
<wallyworld> looks like it went to upstream not origin
<axw> git branch -vv
<axw> what's that say?
<wallyworld> when i didn't have push.default set, it said i think current was deprecated
<axw> for the current branch
<wallyworld> [environ-resource-catalogs]ian@wallyworld:~/juju/go/src/github.com/juju/juju$ git branch -vv
<wallyworld> * environ-resource-catalogs c2001f6 [upstream/master: ahead 6] Add tests for put/remove races
<wallyworld>   master                    4e7b047 [origin/master] Merge pull request #38 from axw/remove-instance-dnsname
<wallyworld>   new-txn-package           1a799af [environ-resource-catalogs: ahead 5, behind 9] Merge trunk and fix conflicts
<axw> yeah, it's tracking upstream
<wallyworld> sigh
<axw> the bit in []
<wallyworld> so how did it do that? and how do i fix it?
<wallyworld> i didn't ask it to, i don't think
<axw> dunno how you did that... I think you just want "git branch --unset-upstream"
<axw> did you set up branch.autosetupmerge?
<axw> I think that's what does that
<wallyworld> yeah, i did set that. i unset upstream and set it to the right thing and it seems happier
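The tracking mix-up above can be reproduced, inspected with `git branch -vv`, and fixed with `git branch --unset-upstream` plus `git push -u`. A throwaway-repo sketch (all repo and remote names made up; the fork is cloned bare so it accepts pushes):

```shell
#!/bin/sh
set -e
# Sketch of diagnosing and fixing a branch that tracks the wrong remote.
tmp=$(mktemp -d) && cd "$tmp"

git init -q shared
(cd shared &&
 git config user.email dev@example.com && git config user.name dev &&
 echo x > f && git add f && git commit -qm init &&
 git branch -M master)

git clone -q --bare shared fork     # stands in for the personal fork
git clone -q fork work && cd work
git config user.email dev@example.com && git config user.name dev
git remote add upstream ../shared   # second remote, like juju/juju
git fetch -q upstream

# Accidentally track upstream instead of origin:
git branch -q --set-upstream-to=upstream/master
git branch -vv                      # shows [upstream/master]

# The fix: drop the bad tracking info and push -u to the fork.
echo y >> f && git commit -qam 'local work'
git branch --unset-upstream
git push -qu origin master          # now tracks origin/master
git branch -vv                      # shows [origin/master]
```

With tracking set to the fork, a plain `git push` goes where wallyworld wanted it to.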
<waigani> thumper, menn0 thanks :)
<wallyworld> axw: so with branch.autosetupmerge=true, i just did a checkout -b to create a new branch off an upstream one in my pipeline, and branch -vv shows no tracking. i guess it will wait till i've done the first push?
<axw> wallyworld: you can either set it now, or pass "-u" to "git push" to set it to whatever you push to
<axw> wallyworld: e.g. git push -u origin
<wallyworld> ok, ta. so what does autosetupmerge do then?
<axw> like --remember
<axw> umm
<axw> I don't actually know
<wallyworld> i'd push to origin/foo right?
<wallyworld> cause origin is my fork
<axw> yes
<davecheney>    /away lunch
<waigani> wallyworld, axw: I came across git-flow (https://github.com/nvie/gitflow) >9000 stars, don't know anything about it
<wallyworld> looks interesting
<axw> seems master vs. develop isn't necessary if you have gated commits. am I missing something?
<axw> gated merge I mean
<wallyworld> axw: that's what i think too
<wallyworld> hence why for juju we only have master
<wallyworld> i can't see the need for develop. just adds complexity for no good reason afaict
<thumper> menn0, davecheney: will we need to provide explicit json and bson serialisation of the new tags?
 * thumper watched many tests panic
<waigani> thumper: do we only want to get a list of envs from jenvs and discard environments.yaml?
<thumper> no
<waigani> thumper: okay, do we look in both and merge the list?
<thumper> I think so, but give preference to the jenvs
<waigani> will do
<thumper> hmm...
 * thumper thinks
<thumper> we are currently logging the passwords of all the login connections...
<thumper> that seems a bit suspicious
<thumper> I'm at that stage of looking at code where I'm thinking "how could that ever have worked"
 * thumper goes to make a coffee
<jam1> waigani: I think you need to set your membership in the Juju github team to public
<waigani> jam1: oh
<jam1> waigani: https://github.com/orgs/juju/members?page=2
<jam1> set your entry to Public
<jam1> otherwise the Bot won't accept your $$merge$$ requests.
<waigani> jam1: done. thanks for that
<waigani> jam1: do I need to recomment?
<jam1> waigani: I *think* it will see an old request, but give it a sec and see
<jam1> If it doesn't post "queued" then vote again.
<waigani> jam1: okay
<jam1> waigani: just looking through the merge queue, I'm pretty sure fwereade is clear that all API calls should talk in terms of Tags
<jam1> so we don't pass raw Machine Ids, or Unit names, or Users
<jam1> but "user-admin" and "machine-0" and "unit-mysql-1"
<jam1> waigani: so for consistency, I'm pretty sure AddServerAPI method UserInfo should take Entities
<jam1> waigani: you can assert that they are all UserTagKind early on and convert them into a list of usernames
<jam1> but the *API* should be tags
<waigani> jam1: ah right. I'll do a follow up branch to fix that.
<thumper> waigani: wait, I'd like to talk to fwereade about that
<thumper> particularly when we are talking about users and only users
<waigani> thumper: no problem
<waigani> thumper: I've got plenty to go on
<thumper> jam1: I disagree on the premise that we should have ONLY tags for ALL the api
<jam1> thumper: you can do so, but it should be debated, because it *has* been the API design that William has pushed heavily
<thumper> I can see where that makes sense for general connections, or actions
<thumper> but when you are talking to a user end point, and only dealing with users
<thumper> it is asinine to convert to and from user tags.
<jam1> thumper: I disagree on the asinine part. I can see a point about "I'm in a particular context, I shouldn't need to wrap my requests with more context", but I can also agree with consistency across the API.
<jam1> And fwereade did specifically ask for Entity here
<jam1> and so we should have the discussion
<thumper> I'm happy to have the discussion
<thumper> but I hold my asinine position :)
 * wallyworld gets some popcorn
<lifeless> thumper: what flavor :)
<lifeless> bah
<lifeless> wallyworld: ^
<wallyworld> um
<wallyworld> caramel
<thumper> salted caramel?
<thumper> jam1: we have a meeting with fwereade in about three hours anyway
<thumper> I propose claiming some of that time to discuss this
 * thumper is beating the tests into submission
<thumper> wallyworld: you'll like this branch
<wallyworld> yeah?
<thumper> wallyworld: or better put, if you don't like it, it will make me very sad
<thumper> wallyworld: trying to get some object factory concepts into out test suite
 * wallyworld wonders if he should make thumper sad or not
<wallyworld> yay
<thumper> time to go and make dinner
<thumper> before meeting later
<wallyworld> axw: is running rebase again on a branch meant to remember merge resolutions from last time it was run?
<axw> wallyworld: it's stateless. it just takes out your changes, and replays on top of the thing you're rebasing on. if it needs merging again, you have to do it again
<wallyworld> sigh
<wallyworld> so each time i change an upstream branch and rebase, i have to redo all the same conflict resolution :-(
<wallyworld> there has to be a better way
<axw> wallyworld: what are you doing?
<axw> a pipeline-ish thing?
<wallyworld> yeah
<wallyworld> i fixed the defers you mentioned
<wallyworld> and now i want to pump those changes to the downstream branches
<wallyworld> so i checkout to branch B
<wallyworld> and rebase branch A
<axw> ah, you want to pick up the conflict resolution you did in branch A?
<wallyworld> but it comes up with all the same merge conflicts i fixed last time when i did a rebase
<wallyworld> the conflict resolution in branch b
<wallyworld> i was in branch B ( the downstream branch) and wanted to pull in latest changes from branch A
<wallyworld> i did it once and resolved things
<wallyworld> now having changed branch A i need to do it again
<wallyworld> but it is conflicting in the same places as earlier
<axw> wallyworld: I think because it replays the commits in order
<wallyworld> branchB is the one I can't propose yet because github doesn't support dependent branches
<axw> so the fixup you did comes later down the line
<axw> although.. hmm no, it should apply to the commit you're processing at the time
<axw> I dunno. I'm still a novice
<wallyworld> yeah, i'm even more of a noob
<wallyworld> it shouldn't be this hard to be productive
<wallyworld> do git people just do things serially
<axw> github people
<wallyworld> well, git rebase seems broken too
<wallyworld> i ended up having to cherrypick the rev
<wallyworld> not very desirable if there's more than one change to bring forward
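For what it's worth, the repeated-conflict problem described above is exactly what git's `rerere` ("reuse recorded resolution") feature addresses: once enabled, git records how each conflict was resolved and replays that resolution the next time the identical conflict appears in a merge or rebase. A self-contained demo (file contents and branch names made up):

```shell
#!/bin/sh
set -e
# Demo of git rerere: resolve a conflict once, and git replays the
# resolution when the same conflict comes back later.
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.email dev@example.com && git config user.name dev
git config rerere.enabled true      # use --global to enable it everywhere

echo base > f && git add f && git commit -qm base
git branch -M master
git checkout -qb topic
echo topic > f && git commit -qam topic
git checkout -q master
echo master > f && git commit -qam master

# First merge: conflicts; resolve by hand. Committing records the resolution.
git merge topic >/dev/null 2>&1 || true
echo resolved > f
git add f && git commit -qm merged

# Undo the merge and redo it: rerere replays the resolution automatically.
git reset -q --hard HEAD~1
git merge topic >/dev/null 2>&1 || true
cat f                               # "resolved", without manual fixing
```

Setting `rerere.autoupdate true` as well makes git stage the replayed resolutions automatically, which takes most of the pain out of repeatedly rebasing a pipeline.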
<axw> gotta get my daughter and pick up  a cake, bbl
<jam1> TheMue: just a reminder that you're On Call Reviewer today.
<dimitern> morning all
<dimitern> fwereade, jam, https://github.com/juju/juju/pull/49
<jam1> morning dimitern
<vladk> dimitern: morning, https://github.com/juju/juju/pull/54
<jam1> dimitern: first comment on your proposal, do we want it as "environs/network" or just "network/" ?
<jam1> it would be nice to not have stuff like State depend on things in Environment
<jam1> which I thought is why we moved stuff into the top level 'instance/'
<jam1> like 'agent/' shouldn't really depend on 'environs' IMO
<dimitern> jam1, fwereade suggested that originally
<dimitern> jam1, i also thought of network/ at first
<jam1> dimitern, sorry was switching rooms and my laptop went to sleep. was there a "but" there? last I saw was "i also thought of network/ first"
<dimitern> jam, sorry, no
<dimitern> jam, I did make it network/ first, but IIRC fwereade asked me to change it to environ/network/, as it better describes its intent
<jam1> dimitern: k, I think we should run it by him, since instance was intentionally to pull it out of environs, it seems odd to put it back in
<fwereade> dimitern, did I? that sounds kinda like I was blithering at the time, sorry
<dimitern> fwereade, that's what I recall :)
<fwereade> dimitern, network/ is fine by me, I apologise for telling you to do stupid things :)
<dimitern> fwereade, but now's the time to move it to network/ instead of environ/network/
<dimitern> fwereade, sure, np ;)
 * thumper scowls at the code
<thumper> I have no idea why this test is failing...
<voidspace> morning all
<mattyw> morning folks, I was thinking of making a start on this: thoughts/comments? https://bugs.launchpad.net/juju-core/+bug/1216644
<_mup_> Bug #1216644: allow open-port to expose several ports <addressability> <improvement> <strategy> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1216644>
<axw> mattyw: it would be awesome, but it may be a deceptively large task
<axw> there's also status that needs fixing
<axw> condensing port ranges
<axw> and possibly security groups changes needed
<mattyw> axw, I'm wondering if those tasks could be split up (we can allow open-port to operate on ranges) and we change status later - although I guess specifying a large range would make status unreadable
<axw> mattyw: we could yeah, but the concern is that if we allow people to do it, then we're recommending something that's going to immediately lead to sucky UX
<axw> it's sucky either way though I suppose
<TheMue> jam1: just came online, so starting reviews
<TheMue> morning btw
<voidspace> rebooting
<wallyworld> fwereade: i have done the work to extract out a separate txn package if you wanted to look at some time. it's used in a downstream branch but i can't propose that yet because github doesn't support dependent branches https://github.com/juju/juju/pull/50
<jam> dimitern: standup ?
<jam> fwereade: wallyworld: I was thinking about dependent branches, one option is that you could propose the branch against the one it is dependent upon, and then propose it again against trunk but ask to have it reviewed against the other proposal
<jam> that would give you just the increment diff to review
<jam> but still have something targeted to trunk when we are ready to land
<wallyworld> jam: i did propose against the unlanded one in my fork, but then people don't get emails etc and it doesn't show in the juju/juju pull requests
<wallyworld> i guess i could also propose against juju/juju
<jam> wallyworld: right, that is why I suggested *also* proposing against master, and in that PR you say "please review the one over here: http://..."
<wallyworld> a bit clunky. there's got to be a better solution. sure we aren't the only ones wanting to do this
<jam> wallyworld: in the github world, you rebase it all down to individual commits and they review incremental work that way
<jam> so commit 1 is the first feature, commit 2 is using that, etc.
<dimitern> jam, sorry, brt
<wallyworld> what? all in one branch?
<jam> wallyworld: AIUI, yes
<wallyworld> :-(
<wallyworld> that makes me sad
<jam> wallyworld: hence why people like to rewrite their commits because that is how they get reviewed
<wallyworld> that makes me even sadder
<wallyworld> let's just alter history as we see fit
<mgz> it's the way of the world, wallyworld
<wallyworld> it may be. i just don't get how people are comfortable changing history and making all that extra work for themselves in having to do it
<wallyworld> it must kill velocity
<natefinch> talking about rebasing I guess?
<wallyworld> yeah, and that github doesn't support dependent branches
<wallyworld> so you can't propose more than one branch at a time
<wallyworld> if they are in a pipeline
<wallyworld> so you get blocked
<wallyworld> and rebasing i did today kept tripping over the same merge conflicts again and again
<wallyworld> so i had to cherry pick stuff i wanted from my upstream branch
<wallyworld> way too painful compared with bzr pump :-(
<natefinch> I think there has to be a way to do it, we just don't know how yet.  And that's the problem.  We have a ton of bzr experts, but no git experts.
<natefinch> I can't believe that we're the only ones who have ever stumbled on this problem and that no one has ever solved it.
<dimitern> wallyworld, you can propose multiple branches
<jam> natefinch: so the github overview is to use the individual commits to review (at least, you can do inline comments on the step-by-step commits)
<jam> dimitern: you can, but the dependent branch contains the diff of everything
<wallyworld> sure, but if they depend on each other the diffs are screwed
<dimitern> wallyworld, and rebasing into your fork just makes two things much better: 1) linear history, 2) much easier conflict resolution
<jam> dimitern: I strongly question (2)
<dimitern> wallyworld, because you're rebasing your changes on top of upstream
<jam> and a linear, but false history is... something :)
<dimitern> wallyworld, hence, you resolve conflicts as you rebase, rather than the other option - the merge workflow
<wallyworld> dimitern: in my case i had branch A and branch B in a pipeline. i made changes to branch A, then switched to branch B and rebased. conflicts. so i fixed them. but then i had to make more changes and rebase again, and the same conflicts had to be resolved again
<dimitern> wallyworld, which *forces* you to resolve conflicts out-of-context (i.e. a huge merge commit combining a lot of changes will need to be resolved *every time* you merge upstream)
<natefinch> I wonder if the way to do a pipeline is to branch from juju to your branch A, then branch from A to B.  PR A -> juju, PR B -> A
<wallyworld> dimitern: bzr doesn't have the issue
<jam> dimitern: a good merge tool uses a new base when you merge upstream
<dimitern> wallyworld, agreed
<jam> bzr certainly always does, and I definitely expect git to do so as well
<perrito666> good morning everyone
<wallyworld> natefinch: that's what i did. i did a checkout -b when i was in branch A
<dimitern> git is not a DVCS, it's a platform to build your own DVCS on top of :)
<jam> natefinch: you can certainly do that, but mixing that model with rebasing is ... hard
<jam> stacked git is another option, though it *clearly* is favoring 1-commit == 1 feature
<wallyworld> git i find is forcing me to try and understand all the internals and it's got sooo many footguns
<voidspace> perrito666: morning
<wallyworld> jam: i get the impression stacked git is for producing patches for mailing lists
<natefinch> so, again, I'm not a git expert.  It sounds like we need to do some research on how to get this workflow done.
<jam> wallyworld: it is for evolving a series of patches
<jam> that can be viewed as a series of git commits
<perrito666> are we bashing git again?
<jam> and being able to pop them off, and make a change to an earlier one, and then pop them back on again
<dimitern> stacked git is great only locally
<voidspace> perrito666: once you start a backup with mongodump can you cancel it - or is the only way to kill the running process? (And how would mongo cope with that?)
<dimitern> but it doesn't have the nice bzr pump equivalent
<wallyworld> perrito666: no, trying to figure out how to deal with its limitations compared to bzr
<natefinch> Although, it sounds like maybe we just need to get stuff reviewed faster, if wallyworld can create a whole new CL before someone reviews his old one
<voidspace> perrito666: natefinch has defined the backup api with cancellable backups - and I wonder if that is even possible
<wallyworld> natefinch: well, i often have 3 or 4 branches on the go in a pipeline
<jam> natefinch: it often makes a lot of sense to do things as a series of "introduce the API, use the API" rather than comingle those changes
<voidspace> natefinch: ah, hi - I didn't realise you were around
<wallyworld> and can put up 1 or 2 at once
<wallyworld> jam: yep exactly
<jam> but you want to be writing the "use the API" at the same time, because you want to know the API you have is correct
<voidspace> right, I often want to split a feature into a series of branches that depend on one another
<perrito666> voidspace: seems to me that you need to kill the process
<natefinch> jam: right, so you write the API, propose... someone reviews the API.  If reviewing is a bottleneck, we should fix that
<voidspace> perrito666: would mongo be ok with that?
<natefinch> jam: I guess so.
<jam> natefinch: but you develop them concurrently, as I mentioned
<wallyworld> natefinch: no, doesn't work. because the api evolves as you write the next branch to use it
<perrito666> voidspace: absolutely, the new backup way behaves pretty much as a client
<voidspace> natefinch: it's not even so much that - sometimes you want a series of branches, and you want to be able to see the diffs of branch two from branch one
<voidspace> natefinch: not from branch two against master
<wallyworld> yep, that too
<voidspace> natefinch: I've been looking at your backup api
<jam> natefinch: and the reviews from github that I've run across in the past have asked for rebased commits so that they can review each change in isolation, which is what multiple branches and pipeline model is
<voidspace> natefinch: you propose an asynchronous api that returns the name of the new backup immediately
<voidspace> natefinch: with a cancel call to cancel one  that's in progress
<voidspace> natefinch: how do you know whether or not it's finished?
<natefinch> I think the way you do that in git is.... you write it all in one branch, then, once you're satisfied, you split up the branch into multiple PRs.
<voidspace> yuck :-)
<wallyworld> double yuck :-(
<wallyworld> what a lot of extra work
<natefinch> no way, you don't have to juggle branches that way
<voidspace> this is a github problem not a git one though, right?
<wallyworld> a bit of both i reckon
<natefinch> who wants to constantly switch branches?  Just write your code in one branch and THEN decide how it makes sense for it to go in
<jam> voidspace: tools are tools, though. and we're doing code review on github. but yes, it is mostly our code review tool that we're trying to figure out how to use.
<wallyworld> switching branches is easy
<voidspace> that means you *have* to make monolithic changes that are harder to reason about
<natefinch> that's true voidspace
<voidspace> that's the use case for dependent branches
<jam> natefinch: voidspace: well you can write it all together, and then rebase it into incremental changes
<natefinch> wallyworld: switching is easy, but syncing them can get annoying
<wallyworld> not with bzr
<wallyworld> it's so easy
<jam> new history, but you do have "introduce here, use there" as a sequence
<jam> wallyworld: fwiw "git checkout branch" is cheaper than even bzr
<jam> just missing pipelines' pump to correlate them
<wallyworld> yep, that's the whole point
<jam> wallyworld: going further, trying to rebase with multiple branches is going to just bite you all over
<jam> so I certainly wouldn't go there
<jam> but you don't *have* to rebase
<wallyworld> as i am finding out :-(
<wallyworld> well, how else do i pump changes from A into B downstream
<wallyworld> like with bzr pump
<natefinch> what's the difference between pump and merge?
<wallyworld> pump pushes changes all the way down a pipeline
<voidspace> natefinch: I've been reading through your proposed backup API
<wallyworld> from A->B->C->D etc
<voidspace> natefinch: I have some questions *but*
<wallyworld> and merges along the way
<natefinch> wallyworld: and conflicts along the way?
<natefinch> voidspace: yep
<voidspace> natefinch: perrito666 doesn't think we can start work on the actual code behind the api until he's ready to merge his work
<voidspace> natefinch: so we can't really parallelise this yet
<jam> wallyworld: "git checkout A", "git pull upstream", "git checkout B", "git merge A", "git checkout C", "git merge B"
<wallyworld> natefinch: if it finds a conflict it stops, you resolve and repump
<jam> it is what pump is doing for you
<natefinch> voidspace: we can start the work to download the backups
<perrito666> voidspace: I can merge that now :D
<voidspace> ah, cool
<voidspace> natefinch: I'm not sure an asynchronous backup api makes sense
<voidspace> natefinch: how do you tell whether a backup is completed or not
<natefinch> voidspace:  the API tells you it's done
<voidspace> natefinch: where?
<wallyworld> jam: i think we need a git pipeline tool :-)
<natefinch> voidspace: in list backups there's an in-progress backup listed, if one's not there, it's done
<jam> wallyworld: if I actually knew git plugins a bit better, I have the feeling it would be pretty easy
<voidspace> natefinch: ah, ok
<jam> the only trick is figuring out how/where to store what the previous/next pointers are
<voidspace> natefinch: by the way, typo
<voidspace> natefinch: // NewBackupAPI creates a new server-side FirewallerAPI facade.
<wallyworld> if i knew git and plugins better maybe it would be
<perrito666> natefinch: voidspace perhaps a quick hangout to sort this?
<natefinch> see?  It's not a git problem, it's a "we don't know git well enough" problem
<natefinch> so stop complaining about the problem and start figuring out a solution
<wallyworld> jam: and then we just need a better code review tool than github
<voidspace> perrito666: natefinch: sure - I'm not around for standup today so it could be good
<natefinch> voidspace: haha
<wallyworld> maybe gerrit? or reviewboard?
<jam> wallyworld: or Rietveld :)
<wallyworld> noooooooo
<voidspace> let me grab coffee
<perrito666> voidspace: natefinch going to the channel
<natefinch> perrito666, voidspace:  ok, I don't have much time until I have to go help with the kiddos, but I can pop on
<perrito666> and by channel I meant our regular hangout
<davecheney> are /win12
<perrito666> meh, I get no video on the tablet for hangouts
 * dimitern likes fast reviews, we need more of those :P
<perrito666> voidspace: natefinch where do you think this thing should live? I have it in cmd/plugin/juju-backup, but only because in the beginning it was a replacement for the current juju-backup
<natefinch> perrito666: state/backup sounds good to me
<natefinch> backup API for general comments: https://github.com/juju/juju/pull/56      jam, fwereade, anyone else?
<natefinch> gotta run
<TheMue> natefinch-afk: will review it next
<rogpeppe> frankban, dimitern, perrito666, jam: trivial PR to improve isolation a little more: https://github.com/juju/juju/pull/57
<jam> rogpeppe: lgtm
<rogpeppe> jam: thanks
<voidspace> perrito666: I would leave it there for the moment
<voidspace> perrito666: it's only when we actually create the backup commands / api that we are able to move it
<voidspace> ah
<voidspace> perrito666: or do as natefinch-afk says :-)
<natefinch-afk> perrito666, voidspace: no reason to put it in one place then move it to another place later, might as well create it in the correct place. adding a package no one uses yet is perfectly ok.
 * natefinch-afk is not really here ;)
<voidspace> heh
<jam> mgz: you around?
<mgz> jam: yeah
<jam> mgz: just wondering if you realized you're OCR today
<mgz> I'm also reviewer today, so poke...
<jam> clearly you do :)
<axw> fwereade: this may interest you: https://github.com/juju/juju/pull/55
<fwereade> axw, you and wallyworld are distracting me by doing awesome things I want to review ;p
<axw> heh :)
<axw> feel free to ignore it
<mgz> axw: can I land your goose branch btw?
<wallyworld> fwereade: i've replied to some of your comments, thanks for looking :-)
<axw> mgz: if you're fine with the latest changes, then yes please
<mgz> axw: done
<axw> mgz: thanks!
<jam> I see an awful lot of red on the CI dashboard :(
<dimitern> fwereade, ping
<fwereade> dimitern, pong
<dimitern> fwereade, re default public/private networks - they shouldn't be shown in status, right?
<dimitern> fwereade, cause after we create them there will always be at least 2 networks in state right after bootstrap
<fwereade> dimitern, I think we should show them
<fwereade> dimitern, they're part of the model and special-casing those ones will be really odd
<dimitern> fwereade, ok, but for now we don't know what their CIDR is
<dimitern> fwereade, so we'll show something like: networks:\n    private:\n        provider-id: juju-private\n    public:\n        provider-id: juju-public
<fwereade> dimitern, assuming there *is* a public network, yeah, and assuming those are really what the provider calls those networks
<dimitern> fwereade, they are juju-specific, hence the "juju-" prefix of the provider-id
<fwereade> dimitern, not sure I follow there
<dimitern> fwereade, hmm.. i think it goes a bit deeper than I thought
<dimitern> fwereade, so you're saying we should get them from the provider, not just create them in state?
<fwereade> dimitern, isn't the provider-id the equivalent of instance-id? ie it's what the provider would call that network even if juju weren't around
<fwereade> dimitern, yeah
<fwereade> dimitern, it's what you use to identify what's really going on if you know/care about both levels
<fwereade> dimitern, I'd be fine calling those networks "juju-private" and "juju-public" though, it might be a good idea to reserve the juju-* namespace for our own use
<dimitern> fwereade, so then: 1) add Environ.GetDefaultNetworks() - returns the info, 2) change all providers to implement it, 3) call it at bootstrap to save them in state
<fwereade> dimitern, yeah, think so
<dimitern> fwereade, should we also mark them as IsDefault: true in state?
<fwereade> dimitern, sorry, I'm not up to date on that field's meaning
<dimitern> fwereade, that's some amount of special handling, but it might pay off as we progress with the model
<dimitern> fwereade, i'm proposing to add that field to the networkDoc
<fwereade> dimitern, ah ok
<dimitern> fwereade, and similarly expose it as IsDefault() on *state.Network
<fwereade> dimitern, I'm not wholeheartedly +1 on that tbh... convince me?
<dimitern> fwereade, as for the naming
<dimitern> fwereade, we could always name them "public" and "private" (their juju name)
<fwereade> dimitern, I guess my worry is clearer in the public case
<dimitern> fwereade, well, i'm thinking wrt selecting a default network a relation is on when not specified and things like that - knowing which is the default might help
<fwereade> dimitern, it only makes sense to have one default public network, I think
<dimitern> fwereade, and one private at least
<dimitern> fwereade, to model what we currently do anyway
<fwereade> dimitern, yeah, I'm not quibbling about the notion of defaults
<fwereade> dimitern, I'm quibbling about having as IsDefault field
<fwereade> dimitern, once you have 2 docs with IsDefault:true things become confusing
<dimitern> fwereade, well, if we stretch it further, we can just have an Environ.AllNetworks() instead and just call it at bootstrap and record whatever the provider knows
<fwereade> dimitern, that's ideal tbh
<dimitern> fwereade, that way we'll have the info already in state by the time you want to deploy a service and perform better (earlier) sanity checks on names
<fwereade> dimitern, although it constrains us a bit
<fwereade> we need to give them good names
<dimitern> fwereade, the provider will give us its id for each, but we can name them anyway we like in juju
<fwereade> dimitern, we can either accept config at bootstrap time (I want these (provider-id) networks and I want to give them these (juju) names, including mapping a particular network to juju-private etc)
<dimitern> fwereade, we can call them "public#" and "private#" # increasing from 1, and using the CIDR range to determine if it's public or private
<dimitern> fwereade, hmm.. yeah, it'll be nice to be able to remap names as part of the bootstrap config
<fwereade> dimitern, yeah, but I want users to be able to name them
<fwereade> dimitern, (and pick which one *is* juju-private for that matter)
<dimitern> fwereade, but if that's not the case, what should we pick as names?
<fwereade> dimitern, well, the other option is *not* to pick names until the user asks
<fwereade> dimitern, that's the add-network --existing case
<fwereade> dimitern, ok, baseline case is that the user specifies nothing
<dimitern> fwereade, i don't like that so much
<fwereade> dimitern, we have to guess what the default private/public ones should be, and we name them
<dimitern> fwereade, we'll already have added the network in state
<fwereade> dimitern, (assuming there *is* a public one ofc)
<dimitern> fwereade, for ec2 there is a way to get both the private (subnet(s)) and the public one
<fwereade> dimitern, can we plausibly store proto-networks? ie not full network docs, but kinda notifications that there *is* a network that will become available for use once it's named?
<dimitern> fwereade, for local and manual it's easy - either no public or it needs to be explicit
<fwereade> dimitern, then in the super-shiny case the user defines them all in bootstrap config
<fwereade> dimitern, and has them all available and named from the beginning
<dimitern> fwereade, i suppose we can have a pendingnetworks collection for these
<fwereade> dimitern, yeah, something like that
<dimitern> fwereade, identified only by provider id
<dimitern> fwereade, and what other info we can get from the provider
<fwereade> dimitern, if we don't have guidance, we pick a private and (maybe) a public to promote at bootstrap time
<fwereade> dimitern, sgtm
<dimitern> fwereade, then, either using bootstrap config (mapping) or the CLI to "add" them as proper networks
<fwereade> dimitern, yeah, exactly
<fwereade> dimitern, figuring out how to expose them tastefully might be interesting
<rick_h_> bodie_: sorry for the delay but commented on the docs. Great stuff! Let me know if any of my questions don't make sense.
<fwereade> dimitern, the unnamed nets probably shouldn't be part of the default status output
<dimitern> fwereade, ok, so we need 1) Environ.ListNetworks(), which is called at bootstrap and 2) populates the pending networks (hidden for all intents and purposes except for ->), 3) CLI/API add-network --existing to promote them to real networks
<fwereade> dimitern, yeah, sgtm
<fwereade> dimitern, down the road we'll also want to poll the provider for network changes I think
<dimitern> fwereade, and 4) a way to define the mapping before bootstrapping
<fwereade> dimitern, yeah
<fwereade> dimitern, definitely talk to me in detail before starting on (4)
<dimitern> fwereade, right, we can have even a CLI like update-networks to do it manually at first
<fwereade> dimitern, I think that'd be just as hard as a worker that polls them tbh
<dimitern> fwereade, ok, i'll start with proposing the subbed-out ListNetworks and then we should decide which providers should get an implementation
<fwereade> dimitern, cool
<dimitern> s/subbed/stubbed
<dimitern> fwereade, ok, thanks!
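The stubbed-out Environ.ListNetworks() plan agreed above might take a shape like this; the type and field names here (NetworkInfo, NetworkLister, dummyEnviron) are invented for illustration, not juju's real API:

```go
package main

import "fmt"

// NetworkInfo is what a provider can tell us about one of its
// networks before a user has named it in juju (hypothetical fields).
type NetworkInfo struct {
	ProviderId string // the provider's own identifier
	CIDR       string
}

// NetworkLister is the provider-side call: at bootstrap the results
// would be recorded as "pending" networks, hidden from default status
// output until a user promotes one with add-network --existing.
type NetworkLister interface {
	ListNetworks() ([]NetworkInfo, error)
}

// dummyEnviron shows the shape a stub implementation might take.
type dummyEnviron struct{}

func (dummyEnviron) ListNetworks() ([]NetworkInfo, error) {
	return []NetworkInfo{{ProviderId: "net-0", CIDR: "10.0.0.0/8"}}, nil
}

func main() {
	var e NetworkLister = dummyEnviron{}
	nets, _ := e.ListNetworks()
	fmt.Println(len(nets), nets[0].ProviderId)
}
```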
<sinzui> The removal of instance dns-name broke CI. We need to rethink how to make the tests work without it
<mgz> sinzui: what's the breakage exactly?
<sinzui> mgz get_machine_dns_name helper never gets the info it needs to verify the local stack is ready to confirm the charms are in a sane state
<mgz> oh, that's actual breakage
<sinzui> mgz, the tests need to learn the real host name so that we can ssh in even when juju ssh is busted
<mgz> andrew's pr also removed the dns-name field from status?
<mgz> did no one object to that in review?
<sinzui> mgz, it was intentional
 * mgz goes back and looks
<mgz> I'd be surprised if we broke status compat intentionally
<mgz> and it doesn't seem like that...
 * sinzui needs to bootstrap to see what the status really looks like
<sinzui> mgz, the specific call is status['machines'][str(machine)]
<mgz> right, I'm looking at that
<sinzui> we are looking for the bootstrap nodes data
<mgz> the line after is .get('dns-name')
<mgz> which should absolutely still work
<sinzui> the joyent failure was joyent cloud's fault
 * sinzui disables CI
<sinzui> mgz, I recall we use dns-name because aws won't let us use the ip address.
<axw> my change was only removing dead code. there have been some changes to "juju status", but I didn't think they had landed yet
<bodie_> rick_h_, cool
<mgz> sinzui: do you have blame on a specific revision?
<sinzui> mgz Merge pull request #38 from axw/remove-instance-dnsname
<mgz> axw: I'll investigate, go to bed :)
<sinzui> mgz, 3 revs ago 4e7b0471
<perrito666> back
<bodie_> rick_h_, I actually made a separate PR against the juju/juju/docs docs (as opposed to the web docs)
<bodie_> https://github.com/juju/juju/pull/46
<sinzui> only local lxc is affected. kvm is fine
<mgz> o_O
<mgz> no such request "FullStatus" on client - we also broke status compat?
<mgz> or was this a really old deploy I have here...
<bodie_> anyone have any thoughts on this test issue?  I'm trying to hit an external URL to get my schema definition: http://paste.ubuntu.com/7623520/
<bodie_> why would juju testing runs be disallowed external access?
<natefinch-afk> bodie_: tests that depend on the internet are a bad idea
<mgz> bodie_: SERIOUSLY?
<mgz> er, too much caps
<natefinch> haha
<mgz> but, seriously?
<bodie_> *wipes sweat off brow*
<natefinch> man, for a minute, I thought mgz was super mad
<mgz> you want your test runs to require an internet connection to pass?
<bodie_> lol, when you put it that way...
<wwitzel3> natefinch: haha, me too
<mgz> somehow nick-tab-shift-for-colon makes me hit capslock way too often
<bodie_> it's not the test itself that depends on the internet connection -- one of the features of json-schema is to reference schema specs by URL
<bodie_> so, I'm thinking that's not one of the features we want to make use of ;)
<mgz> bodie_: ANYWAY, PRACTICAL ... suggestions, override the url for the tests
<mgz> but having us hit their schema url at all seems like a bad thing
<bodie_> practically every json-schema is supposed to have a $schema key with a URL value
<alexisb> natefinch, call?
<mgz> sinzui: this is something local-specific, but what exactly I'm not sure yet
<mgz> sinzui: can you log the status output perhaps?
<mgz> bodie_: sure, but why are we resolving the url?
<sinzui> mgz, I see a rogue machine left behind that could be the cause
<mgz> this is like the old w3c's issue with dtd urls
<sinzui> mgz, the ppc64 machine was victim of another test
<sinzui> arm64 failed because the machine is too slow
<mgz> sinzui: well, that'd be a relief
<bodie_> mgz, well, the idea was to load the json-schema spec itself, which is json-schema, in order to validate our params schemae
<bodie_> that would be simple enough to inline, I suppose, it just feels ugly.  but I guess it's not as ugly as requiring external HTTP GET-ability :)
<mgz> bodie_: http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dtd_traffic/
<bodie_> Hahaha, that's sort of awesomely bad
<bodie_> nearly all-caps scream-worthy!
<rick_h_> bodie_: oops, comments still valid?
<bodie_> rick_h_, awesome comments, I realized in conversation with jcw4 and mgz yesterday that our docs probably have separate scope -- my PR is against the juju/docs/actions.md file so it's not totally clear how much the scope of our docs overlaps
<bodie_> the public-facing stuff might need to be less technical and more UX oriented, while the ~/docs stuff might need to be more hacker / architecturally oriented
<rick_h_> bodie_: definitely
<rick_h_> bodie_: but I <3 that you've got docs on both ends in mind.
<bodie_> :D
<bodie_> I really want to get a hackers section pushed up to the public zone, I'd have gotten involved much sooner if I'd realized it was participation-friendly
<bodie_> whether that's a high-level overview, or just a link to a highly usable github docs made of readable, interlinked markdown
<bodie_> rick_h_, I do have a few more specific questions oriented to the frontend team that I'd love some input on, in that PR
<bodie_> but take your time, it'll be there for a while probably
<rick_h_> bodie_: sure thing, pointers?
<rick_h_> bodie_: or just want to do a hangout?
<bodie_> just specifically item 3, frontend hackers, it's all laid out there
<bodie_> primary questions are, do you guys want a json-schema getter, and any requests for tech specs or API endpoints for you guys
<bodie_> anything you need to know
<bodie_> not even totally sure all that belongs in the backend /docs, really just wanted to get conversation started for those issues
<rick_h_> bodie_: cool yea. I mean basically we need whatever you give the cli user except in a really nice machine readable form. One thing might be to provide a watcher api to the actions as they will be done async always, like bundles.
<bodie_> so, something like a polling / push mechanism to open a connection so you guys can make a progress bar or some such?
<bodie_> it might be pretty complicated to get insight into the status of the hook (i.e. the actual action on the unit) while it's running
<bodie_> but, perhaps we could set up a hook-env level call so the charm author could add such a thing to his action script
<rick_h_> bodie_: progress bars are evil :P
<bodie_> heh
<rick_h_> bodie_: so it's mainly so that we request an action and get back the UUID, then we request a watcher on that UUID so that when it changes we get that notification over the websocket
<bodie_> most of ours at DO were completely fake... ^_^
<bodie_> hmm, ok
<rick_h_> bodie_: and that way we can make sure to get a notification to the user
<rick_h_> bodie_: that it's completed, here's the status info, or it errored, more info, or it bombed out and crashed
<bodie_> right
<rick_h_> but we want it over the websocket vs polling in the GUI
<bodie_> instead of just polling the status query
<rick_h_> :(
<rick_h_> bodie_: polling makes everyone sad when we've got such a nice always on websocket connection sending us messages we can process async
 * bodie_ socks that away in his ponderment satchel
<rick_h_> bodie_: but we only want to worry about it while the connection is alive and someone's at the browser listening, so we ask for a watcher on it
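The watcher idea rick_h_ describes above could be sketched as a toy in-memory version: request an action, get a UUID back, register a watcher on that UUID, and have the completion pushed to you rather than polling. All names here are illustrative; the real juju API delivers these over the websocket:

```go
package main

import "fmt"

// notifier is a toy stand-in for the server side: it tracks who is
// watching each action UUID (hypothetical type, not juju's).
type notifier struct {
	watchers map[string][]chan string
}

func newNotifier() *notifier {
	return &notifier{watchers: make(map[string][]chan string)}
}

// Watch registers interest in one action UUID and returns a channel
// that will receive the final status.
func (n *notifier) Watch(uuid string) <-chan string {
	ch := make(chan string, 1)
	n.watchers[uuid] = append(n.watchers[uuid], ch)
	return ch
}

// Complete is called when the action finishes (or errors); every
// watcher on that UUID gets the final status pushed to it.
func (n *notifier) Complete(uuid, status string) {
	for _, ch := range n.watchers[uuid] {
		ch <- status
	}
}

func main() {
	n := newNotifier()
	ch := n.Watch("uuid-123") // GUI asks to watch the action it started
	n.Complete("uuid-123", "completed")
	fmt.Println(<-ch)
}
```

The point of the buffered channel is that completion does not block on a browser that has gone away; a real implementation would also drop watchers when the connection closes.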
<bodie_> I'm going to ask for the favor of putting some of that in a brief comment on that doc just so there's something for us to look at and be reminded by
<bodie_> s/doc/pr/
<bodie_> however, it's in my awareness
<rick_h_> bodie_: definitely, I've made a todo to try to spell out in more formal doc form what you've asked for there.
<bodie_> awesome1
<bodie_> !
<rick_h_> bodie_: it'll be a bit for me to get down to writing it out, but will try to get it today/tomorrow
<bodie_> yeah, again this is a longer-term thing I'd like to have people jabbering over, no rush whatsoever
<bodie_> (ideal goal state, anyway)
<rick_h_> bodie_: ok, sounds like a plan. Just feel free to get stabby if you don't have the info you need. Don't want to be a blocker on anything
<bodie_> sure, this is more like a long polling thing
<dimitern> jam, vladk, fwereade, there it is - https://github.com/juju/juju/pull/58 - Environ.ListNetworks() call, PTAL
<bodie_> mgz -- regarding that schema validation, I'd like to validate our param schemas against the actual json-schema schema itself.  do you think it would be best just to inline it in the charm/actions file?
<bodie_> that seems so ugly
<rogpeppe> frankban, dimitern, mgz: another trivial patch, avoiding the need for charm to import environs/config: https://github.com/juju/juju/pull/59
<frankban> rogpeppe: looking
<rogpeppe> frankban: thanks
<dimitern> rogpeppe, what's the IsolationSuite?
<wwitzel3> fwereade: https://github.com/juju/juju/pull/2
<rogpeppe> dimitern: it's what everything is moving towards using - it isolates all environment variables amongst other things
<rogpeppe> dimitern: it's the new BaseSuite
<rogpeppe> dimitern: but with a slightly more useful name
<dimitern> rogpeppe, right, LGTM then
<rogpeppe> s/useful/self-explanatory
<rogpeppe> dimitern: thanks
<TheMue> jam: made a PR for the planning doc, see https://github.com/juju/juju/pull/60. could you LGTM it so that we can merge it and Nick can add it to j.u.c
<dimitern> rogpeppe, I'd appreciate if you look at a similarly trivial https://github.com/juju/juju/pull/58
<TheMue> rogpeppe: will review PR 59
<rogpeppe> dimitern: looking
<TheMue> rogpeppe: oh, already reviewed
<rogpeppe> jeeze, it sometimes takes sooo long to add a github comment
<rogpeppe> ah, finally!
<wwitzel3> ericsnow: ping if you're around for standup
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe, thanks!
<jcw4> bodie_, mgz I think the point of mgz's link to the w3 article is that the uri for the schema is intended as an identifier not as a validation lookup... i.e. that uri refers to a static published schema that's not gonna change... just use your local copy for validation.
<jcw4> I don't know if we literally do store files like this locally, but it may be worth doing so if we *really* want to validate against that schema for every test run
<bodie_> right, I get that, I'm trying to separately validate the schema against the json-schema spec, which is encoded as json-schema
<TheMue> anyone else taking a look at PR 60 for me?
<bodie_> so that we don't accept bad charm schemas that happen to be meaningless but parseable as jsonschemadocuments
<bodie_> (which are basically just required to be json)
<bodie_> maybe I'm barking up the wrong tree, though
<mgz> TheMue: that's big
<mgz> is [TOC}
<mgz> *[TOC] at the top actually meant to do anything?
<jcw4> bodie_: agreed... not sure if 'accepting bad charm schemas' is what we test in the tests though...
<bodie_> I think [TOC] isn't parsed by github markdown
<TheMue> mgz: will be replaced by a table of contents by the processor
<mgz> TheMue: are we also sure we want all this in the tree? natefinch linked something earlier and had to private-ize it after
<TheMue> mgz: nick has a toolchain to generate HTML for juju.ubuntu.com out of it
<bodie_> jcw4, I guess the question I'm asking is whether charms can define bad schemas which we then try to use for validation and never work
<mgz> TheMue: I hit render and github didn't do any magic
<TheMue> mgz: it is sanitized
<mgz> k
<bodie_> I'm not certain this kind of critter exists at all though, since validation is going to be in the state scope
<rogpeppe> bodie_: i wouldn't inline the schema, but i'd suggest putting it into a file and reading it at test time
<TheMue> mgz: nick uses the python markdown
<jcw4> rogpeppe: +1
<rogpeppe> bodie_: if you wanted, you could even set up a trivial local http server and change urls to point to that
<bodie_> hmm, that's not a bad idea
<rogpeppe> bodie_: it's an approach we already use in several places
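rogpeppe's suggestion above (a trivial local HTTP server with the test pointing its URLs at it) is directly supported by the standard library's net/http/httptest; a minimal sketch, with a placeholder schema body:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newSchemaServer stands up a throwaway local server that serves a
// fixed schema body, so tests never need to leave the machine.
func newSchemaServer(body string) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, body)
	}))
}

// fetchSchema retrieves a schema document from the given URL.
func fetchSchema(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	srv := newSchemaServer(`{"type": "object"}`)
	defer srv.Close()
	// tests would substitute srv.URL for the real remote schema URL
	doc, err := fetchSchema(srv.URL)
	fmt.Println(err == nil, doc)
}
```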
<bodie_> rogpeppe, well, it's not for the testing, it's actually a validation step at the time the charm loads its Actions schema from YAML
<rogpeppe> bodie_: surely your code should be able to verify that the schema is valid?
<bodie_> it happened to be breaking the tests because it wanted http access for that method
<rogpeppe> bodie_: or are we allowing charms to reference arbitrary schemas over the internet?
<bodie_> well, that's part of the JSON-Schema spec
<rogpeppe> bodie_: if so, i might suggest that's perhaps not a great idea
<bodie_> yeah, hehe.  got that input from mgz as well, and it makes sense
<sinzui> mgz, local host breaks were NOT juju's fault. each test was broken by something else, and the report of what broke coincidentally matched the test in the commit under test.
<rogpeppe> bodie_: i think it's reasonable to restrict schemas to vanilla schemas that don't require further dereferences
<rogpeppe> bodie_: and... surely... it's possible to generally transform a schema-with-references to a schema-without-references?
<bodie_> so you're saying that if there's a $schema key in the schema, it should be rejected?
<bodie_> or simply stripped
<sinzui> mgz: killing the rogue mongo was hard. I had to delete all the directories it used to get kill to really stop the service
<rogpeppe> bodie_: probably rejected is better
<mgz> sinzui: yeah, it's really painful
<mgz> sinzui: and one of the main issues with using a persistent machine for tests
<bodie_> rogpeppe, I don't think the $schema key is actually loaded, rather used as an identifier for the schema version -- as mgz mentioned, much like the WC3 DTD's referenced in the DOCTYPE
<sinzui> joyent is still ill. I don't want it to curse this revision
<TheMue> mgz: thx for review, will talk to nick how we can handle title and toc better in future
<rogpeppe> bodie_: so schemas can't reference arbitrary other sub-schemas that define parts of the schema?
<bodie_> well, no, they can...
<bodie_> the reason I'm hitting the issue is that I'm actually separately attempting to validate the loaded schema against json-schema itself; making references to the inner json-schema keys is not problematic, and I don't think external references are necessarily loaded (my test cases were passing with $schema URL keys defined)
<bodie_> however, since we haven't gotten to the point where we're actually using the Validate method on any legit JSON objects, it's not totally clear yet since I haven't deeply grokked the spec itself yet
<bodie_> it's possible that when Validate is run, those URLs try to resolve
<bodie_> the gojsonschema stuff I wrote doesn't touch that part, it's more to do with properly breaking apart reference URLs into constituent bits
<bodie_> right now, the reason my code is GETting is because I'm actually attempting to retrieve the remote definition for JSON-Schema itself, which I understand is a bad idea and I should probably store it as a resource in juju if I want to do that
<dimitern> rogpeppe, https://github.com/juju/juju/pull/58#discussion_r13596383
<rogpeppe> dimitern: hmm
<rogpeppe> dimitern: what code does the public/private detection?
<dimitern> rogpeppe, network.SelectPublicAddress
<dimitern> rogpeppe, but it will need improvement
<dimitern> rogpeppe, it's all part of the "we should automatically do amazing things" approach :)
<rogpeppe> dimitern: SelectPublicAddress doesn't look at the IP address itself
<rogpeppe> dimitern: just having any real IP addresses makes me think "potential isolation issue"
<dimitern> rogpeppe, it's hidden in a couple of helpers below - internalAddressMatcher etc.
<dimitern> rogpeppe, this ip range is isolated by design
<dimitern> rogpeppe, and since it won't reach anything, if we happen to use them inadvertently we'll notice at once
<rogpeppe> dimitern: i bet i can route to those addresses on my machine
<bodie_> rogpeppe, thoughts on that?  I'd really like to either ditch the validation or make a final call here so I can move on to the watcher/uniter stuff
<rogpeppe> dimitern: BTW when would anything be legimitately using network.NewAddress on the addresses returned from ListNetworks?
<dimitern> rogpeppe, it won't be used directly
<bodie_> if rogpeppe / mgz / etc think I need to enforce that JSON-Schema specs not include $schema keys, I can do that, but this is like the 5th thing that's pushed me back into this single file that should have been done ages ago -- I'm perfectly happy to make it flawless but I'm also antsy to get a product built here
<dimitern> rogpeppe, it will be stored in state as "pending" or "potential" networks, and the user then can say - i want to use this as my private/public default network
<rogpeppe> dimitern: i don't quite get it then
<dimitern> rogpeppe, or further down the line, we'll use ListNetworks to update what we know about
<rogpeppe> bodie_: so the issue is you need the schema for JSON-Schema in your tests?
<pindonga> hi, anyone around to help troubleshoot some issues with the local provider?
<mgz> pindonga: have you tried turning it off and on again?
<pindonga> several times (per hour)
<mgz> pindonga: (more seriously, try dimitern's script for wiping out left over junk, it's helped before)
<pindonga> mgz, do you have a link to it?
<rogpeppe> dimitern: i think that for testing specific logic that automatically derives public/privateness from network CIDR addresses, it's fine to use addresses that tweak that logic. but for the default networks returned by the dummy provider, i don't think so
<dimitern> pindonga, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/
<dimitern> pindonga, you might need to tweak it a bit
<pindonga> dimitern, thx, will take a look
<bodie_> rogpeppe, negative, I'm attempting to validate the parsed schemas from YAML as useful JSON-Schema, by using gojsonschema's Validate method against the spec doc, which was at the remote URL -- so that's getting triggered by the tests and failing, which makes perfect sense.  I can simply ditch that bit, or keep the doc as a local file; I was under the impression you were saying I should enforce a constraint against references in the schemas themselves
<dimitern> rogpeppe, so how's 0.1.2.0/8 better than 203.0.113.0/24 ?
<cmars> jam, i'm wondering if I should just defer Machine.Destroy in Unit.Destroy, and let the machine's advanceLifecycle checks sort out whether the machine is clean (re: https://github.com/juju/juju/pull/52)
<rogpeppe> dimitern: because it gets immediately rejected by the network stack
<rogpeppe> dimitern: try it
<bodie_> i.e. oneTrueJsonSchemaSpec.Validate(userDefinedSchema)
<cmars> seems a simpler way to go about it, what do you think?
<rogpeppe> dimitern: e.g. try "telnet 203.0.113.3" and "telnet 0.1.2.3"
<bodie_> where oneTrueJsonSchemaSpec is loaded from a URL now, which is causing the issue; rather, it should be loaded from a local resource, which I get
<jam1> rogpeppe: of course the downside is that it fails with a different error
<rogpeppe> jam1: it *should* never be actually used
<rogpeppe> jam1: but better that it fails immediately, regardless of the error
<rogpeppe> jam1: rather than timing out (possibly, depending on external network state) after a minute or so
<jam1> cmars: so… I probably prefer checking before calling Destroy, because otherwise you end up with something like errors/warnings because destroy realizes it can't be destroyed
<rogpeppe> bodie_: i am +1 on avoiding references in the schemas themselves (and i assume that gojsonschema does currently have code in it to go out to fetch stuff from remote URLs)
<dimitern> rogpeppe, ok, i'm not sure i want to push for it now, i'll change both cidrs to 0.1.2.0/8 and 0.4.3.0/24 for example
<rogpeppe> bodie_: but i don't think it needs to be done now
<dimitern> rogpeppe, if i need to use actual public ips in tests, will find a workaround
<bodie_> rogpeppe, I'll know more once we get some Validate() tests written in State
<rogpeppe> dimitern: sgtm. maybe 0.10.0.0 and 0.203.0.0 to be reminiscent of the real things?
<bodie_> but I definitely see the point of what you're saying
<cmars> jam1, ack. i'll try to set it on a reasonable path to success, without duplicating too many checks
<rogpeppe> bodie_: are you still planning to replace gojsonschema?
<bodie_> I'm just getting ridiculously stir-crazy here in charm/actions.go
<rogpeppe> bodie_: ha ha
<jam1> cmars: I'm happy to use a common helper that does the checks
<bodie_> ah
<rogpeppe> dimitern: thanks
<bodie_> rogpeppe, I went through that some with fwereade -- basically, there were some really hackish workarounds in gojsonreference / gojsonpointer
<rogpeppe> bodie_: it's not nice code :-)
<bodie_> it was glossing over some test issues and simply faking good results on others
<bodie_> so, I gutted that stuff and replaced it with much more idiomatic go
<rogpeppe> bodie_: it looks to me as if it's vulnerable to being crashed too
<bodie_> gojsonschema itself looks pretty solid
<bodie_> it was its dependencies that were more half-assed
<bodie_> afaict, anyway
<rogpeppe> bodie_: i was pretty dubious about gojsonschema actually, but i haven't actually tried to implement it myself, so i guess it may be ok really...
<bodie_> we're now using the rewritten deps from my personal github, I haven't opened a PR against xeipuuv's codebase since he was unresponsive to an Issue on the topic
<rogpeppe> bodie_: if we want to just mutate gojsonschema, i'd be happy to send a review pointing out some obvious ways that i think it could look a bit better
<bodie_> I'm definitely open to that, right now I'm just getting anxious about actually pushing product on Actions since we're getting pretty overdue on our expected timeline
<bodie_> but, obviously we also don't want to deliver half-baked code :)
<rogpeppe> bodie_: fair enough
<rogpeppe> bodie_: it should be an easy-enough dependency to change later
<bodie_> yeah, I think so
 * ericsnow starts day 2 of the firehose
<rick_h_> ericsnow: hope you're thirsty!
<bodie_> in the meantime, what's the best place for storing resource type files like jsonschema-draft4.json?
<bodie_> i.e., is there a canon location for resources, or should it just go in the package folder
<rogpeppe> frankban, dimitern: another ultra-trivial PR: https://github.com/juju/juju/pull/62
<dimitern> rogpeppe, No description provided?
<rogpeppe> dimitern: isn't the subject line enough?
<dimitern> rogpeppe, LGTM
<rogpeppe> dimitern: ta!
<natefinch> see, dimitern gets it ;)
<dimitern> rogpeppe, it's nice to know what it is about :)
<dimitern> (i.e. more than a one-liner)
<rogpeppe> dimitern: sorry, thought it was obvious
<rogpeppe> dimitern: will provide a description too
<dimitern> thanks!
<ericsnow> natefinch: were you able to locate that meeting calendar?
<ericsnow> BTW, does this channel have public logs somewhere?
<natefinch> I think freenode logs stuff, but I don't know where
<natefinch> ericsnow: crud, no I forgot about the calendar.  alexisb - do you know how to get eric access to the team calendar?
<perrito666> heh https://github.com/search?l=go&q=if+err+!%3D+nil&type=Code
<jcw4> ericsnow: http://irclogs.ubuntu.com/
<ericsnow> jcw4: thanks
<natefinch> ericsnow: would you like to jump on a hangout? might be helpful for questions and stuff
<ericsnow> sure, if you have a minute
<natefinch> ericsnow: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
<pindonga> mgz, I've run dimitern's cleanup script, and I'm still getting the same error over and over again... when I deploy the charm, I get
<pindonga> ERROR juju runner.go:220 worker: exited "deployer": exec ["start" "--system" "jujud-unit-click-appstore-api-0"]: exit status 1 (start: Unable to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory)
<pindonga> and the unit never finishes starting
<mgz> pindonga: sounds like a privileges issue
<pindonga> well, I shouldn't run juju as root, should I ? :)
<pindonga> and that file is owned by root
<pindonga> mhh, in any case, the juju and lxc processes *are* running as root
<mgz> pindonga: you are on what release, which what kernel and lxc versions?
<pindonga> trusty
<pindonga> 3.13.0-29-generic
<pindonga> 1.0.3-0ubuntu3
<bodie_> PR: Charm now validates YAML-loaded Actions schema https://github.com/juju/juju/pull/63
<pindonga> juju is 1.18.1-trusty-amd64
<natefinch> ericsnow: are you joining?
<bodie_> imo, you guys should have the channel bot call out fresh PR's
<bodie_> :)
<ericsnow> natefinch: I'm on
<alexisb> natefinch, let me see if I can add him, jam1/thumper has always added folks
<natefinch> alexisb: thanks
<natefinch> ericsnow: hmm... me too.  try refreshing?
<mgz> pindonga: http://askubuntu.com/questions/399382/juju-local-failed-with-var-run-dbus-system-bus-socket-no-such-file-or-director
<ericsnow> natefinch: hang on a sec
<rogpeppe> frankban: so, the charm package now has no external dependencies other than inside charm itself (charm/hooks and charm/testing). i'm going to create a new package github.com/juju/charm, and move charm there, then change core to use it.
<rogpeppe> frankban, fwereade, natefinch, mgz, jam1: does that sound reasonable?
<frankban> rogpeppe: sounds good
<rogpeppe> bodie_: this will affect you
<bodie_> cool
<mgz> rogpeppe: yurp
<fwereade> rogpeppe, +1 to that, and thanks for warning bodie_
<rogpeppe> fwereade: thanks
<pindonga> mgz, thx but sorry, dbus was installed :)
<rogpeppe> fwereade: one possibly controversial thing is that it will involve moving the testing charm directory out of core, but i think that should be ok
<fwereade> rogpeppe, one thought, I think there's a subtle interdependency between charm and names
<mgz> pindonga: did you try purging and reinstalling it like that guys did?
<fwereade> rogpeppe, that's fine by me
<rogpeppe> fwereade: names is already external
<fwereade> rogpeppe, ah, fantastic, I hadn't spotted that
<bodie_> rogpeppe, I did just open a PR against charm
<bodie_> it's a pretty simple change...
<pindonga> mgz, purging dbus? no I haven't yet... although this is a clean trusty image... it shouldn't change anything (I will test it anyway)
<rogpeppe> bodie_: cool. i'll merge that in before factoring out everything
<rogpeppe> bodie_: link?
<bodie_> https://github.com/juju/juju/pull/63
<bodie_> ergh, it probably still has some crappy comments and mess left
<bodie_> maybe you should just go ahead and I'll reopen it once you're finished
<bodie_> there, corrections are in
<mgz> pindonga: otherwise you'll need to dive into permissions, this seems like it is mostly outside of juju itself
<pindonga> mgz, am re-running everything now after reinstalling dbus... thx
<pindonga> will keep looking
<bodie_> rogpeppe, fixes are in
<rogpeppe> bodie_: still reviewing, sorry...
<bodie_> oops, didn't see some other comments, heh
<rogpeppe> bodie_: (there's no way to send all the comments at once, unfortunately)
<bodie_> no worries, I can commit as you go I think
<rogpeppe> bodie_: i don't quite see why you want to verify against the schema definition - shouldn't gojsonschema be doing that when it parses it?
<bodie_> rogpeppe, I don't think it is, I think it's just defining a well-typed map (i.e. JSON) and assuming it's json-schema, but I could be wrong
<rogpeppe> bodie_: well, IMO, it *should* verify. but i can see that you might want an additional verification step for the time being.
<rogpeppe> bodie_: i'm sorry, i was under the impression that you needed the json schema doc for testing purposes only
<bodie_> there was some talk Monday with Jeremy (JSON-Schema guy) about how / where we're going to do validation and how we need to think about schemas -- I realized most of the sample schemas we were using weren't really proper json-schema at all
<rogpeppe> bodie_: reading the schema from a file isn't really acceptable in prod
<bodie_> gotcha
<rogpeppe> bodie_: so, i'd suggest just embedding the text
<bodie_> all righty
<rogpeppe> bodie_: in a file
<rogpeppe> bodie_: sorry,
<rogpeppe> bodie_: in a Go file
<rogpeppe> bodie_: also, you don't want to be parsing it every time
<rogpeppe> bodie_: so perhaps have an init-time statement which parses it
<rogpeppe> bodie_: and stores it in a global var
<rogpeppe> bodie_: alternatively (and perhaps better) use a sync.Once to do that once, the first time it's required
<alexisb> ericsnow, I shared the team calendar with you
<rogpeppe> bodie_: i'll add a comment suggesting that last
<ericsnow> alexisb: thanks!
<bodie_> rogpeppe, all righty :)
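rogpeppe's suggestions above (embed the schema text in a Go file, parse it once via sync.Once rather than at every load) combine into something like the following; the meta-schema constant here is a tiny stub standing in for the full draft-4 document:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// Embedding the draft-4 meta-schema as a Go string avoids both the
// network fetch and re-reading a file on every load. The full text
// would go here; this short fragment is only a placeholder.
const metaSchemaJSON = `{"id": "http://json-schema.org/draft-04/schema#", "type": "object"}`

var (
	metaSchemaOnce sync.Once
	metaSchema     map[string]interface{}
	metaSchemaErr  error
)

// loadMetaSchema parses the embedded schema exactly once, on first
// use, and returns the cached result thereafter.
func loadMetaSchema() (map[string]interface{}, error) {
	metaSchemaOnce.Do(func() {
		metaSchemaErr = json.Unmarshal([]byte(metaSchemaJSON), &metaSchema)
	})
	return metaSchema, metaSchemaErr
}

func main() {
	s, err := loadMetaSchema()
	fmt.Println(err == nil, s["type"])
}
```

An init-time parse would also work, as rogpeppe notes; sync.Once just defers the cost until the schema is first needed.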
<bodie_> rogpeppe, brb, taking the girls down to the car
<rogpeppe> bodie_: k
<rogpeppe> bodie_: i've gone ahead with creating the new repo
<rogpeppe> bodie_: you should find it quite easy to patch your changes onto it
<bodie_> great, just finishing up with mgz and jcw4 here
<rogpeppe> frankban, fwereade, bodie_: initial commit of new charm repo: https://github.com/juju/charm/pull/1
<frankban> rogpeppe: done
<frankban> mgz: https://github.com/juju/juju/pull/61 passed CI tests but did not get merged
<mgz> frankban: looking
<frankban> mgz: thanks
<mgz> gah, it should have got reported back...
<frankban> mgz: so, I guess I need to merge trunk, right?
<mgz> frankban: I'm still not sure on that
<mgz> sometimes it just seems to work the second time
<frankban> mgz: should I just try $$merge$$ again?
<mgz> frankban: I'll sort it
<frankban> mgz: thanks
<mgz> does the lander not return a non-zero exit code when it gets an exception or summat...
<mgz> apparently not
<mgz> ret = ... print ret
 * mgz fixes
<rogpeppe> frankban: added testing repo: https://github.com/juju/charm/pull/2
<rogpeppe> pwd
<frankban> rogpeppe: why not preserving history in this case? not worth the effort?
<rogpeppe> frankban: yeah
<rogpeppe> frankban: it's in github.com/juju/juju anyway, if we need it
<rogpeppe> pwd
<frankban> rogpeppe: pwd exit status: LGTM
<rogpeppe> frankban: :-)
<mattyw> fwereade, do you have a moment?
<alexisb> mattyw, I think it is late for fwereade
<alexisb> although it is late for you too :)
<mattyw> alexisb, I was watching some of the UDS sessions to I totally lost track of time
<mattyw> alexisb, but I don't think I need him at the moment - but I don't have a way of cancelling the ping :)
<alexisb> :)
<alexisb> there should be an unping
<jcw4> are the UDS sessions recorded and/or publicly available?
<mgz> jcw4: they are
<alexisb> jcw4, http://summit.ubuntu.com/uos-1406/all/
<jcw4> cool, tx alexisb
<alexisb> yep
<TheMue> mgz: any idea why my branch doesn't merge with $$merge$$?
<fwereade> mattyw, I'm briefly here if yu need me
<voidspace> hey folks, I gotta EOD
<voidspace> g'night all
<ericsnow> voidspace: bye :)
<mattyw> fwereade, it can wait till tomorrow, thanks anyway
<natefinch> ericsnow: how goes?
<ericsnow> natefinch: trying to figure out why your change to CONTRIB. isn't showing up in git log for me
<natefinch> no idea
<natefinch> do you actually have the change locally?
<ericsnow> no...but it shows up if I git show the hash
<ericsnow> I'm sure it's a git workflow thing I'm missing
<natefinch> ericsnow: probably.  Did you pull or fetch?
<ericsnow> natefinch: fetch (and then checkout)
<jcw4> ericsnow: checkout what?
<natefinch> I think it's because you have to fetch and merge, not fetch and checkout.   pull is fetch + merge
<natefinch> if you're not on the same branch as trunk
<natefinch> master... whatever they call it in git
<ericsnow> then I'm misunderstanding merge
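natefinch's point — `git pull` is `git fetch` plus `git merge`, and a `checkout` after `fetch` doesn't advance your local branch — can be demonstrated with two throwaway local repos (all paths and commit messages below are placeholders):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Throwaway "upstream" repo with one commit.
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m one

# Clone it, then add a second upstream commit the clone doesn't have.
git clone -q upstream clone
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m two

cd clone
git fetch -q origin
# fetch alone updates FETCH_HEAD but leaves the local branch behind:
test "$(git rev-parse HEAD)" != "$(git rev-parse FETCH_HEAD)" && echo "fetch did not move HEAD"
# merging the fetched ref is the missing half of "pull":
git merge -q FETCH_HEAD
test "$(git rev-parse HEAD)" = "$(git rev-parse FETCH_HEAD)" && echo "now up to date"
```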
<menn0> morning ppl
<jcw4> menn0: o/
<natefinch> morning menn0
<menn0> jcw4: \o
<bodie_> can someone explain to me what rogpeppe was thinking with the underscore-named var here?  https://github.com/juju/juju/pull/63#discussion_r13601842
<bodie_> I think I've f'd something up with the way I implemented this, because it's going into an unterminated recursion and exploding
<natefinch> bodie_: the underscore variable is a special variable that says "throw away this data"
<ericsnow> yep, it was just the whole merge thing
<natefinch> bodie_: so in this case, whatever data the function returns, he doesn't care, he just wants to see the error
<jcw4> in this case it's an underscore prefix, not just underscore
<natefinch> oh sorry, I was looking at the code not the comment
<natefinch> no, that's horrible, don't do that
<bodie_> natefinch, er, it's not the _ pattern matching variable
<natefinch> bodie_: you mean _jsonMetaSchema
<bodie_> yeah
<bodie_> I'll be back in 60 seconds
<natefinch> I think you need to replace _jsonMetaSchema with metaSchema  and the code makes sense
<natefinch> I think he changed his variable name halfway through and didn't fix it up at the top
<bodie_> I see
<bodie_> natefinch, I'm getting what looks to be a stack explosion and I'm not clear why
<natefinch> does it involve a String() function?
<bodie_> I don't think so, why?
<bodie_> I'm really hoping it's not a recursion caused by this schema definition referencing itself
<natefinch> common mistake is to do fmt.Printf("%s", myVal) inside myVal's String() function, which then calls myVal's String() funciton....
<natefinch> bam, infinite recursion and stack blowupedness
<bodie_> hmmm
<bodie_> I don't think so
<natefinch> it was just a stab in the dark
<bodie_> natefinch, my best thought is that somehow, the Do is triggering itself
<natefinch> is it in the code that you just linked to, or somewhere else?
<jcw4> bodie_: do you have the crash stack?
<bodie_> http://golang.org/pkg/sync/#Once.Do
<bodie_> jcw4, I have
<bodie_> https://github.com/binary132/charm/blob/actions-validation-fixes/json_schema.go
<bodie_> (jcw4, natefinch)
<jcw4> bodie_: panic / stack trace?
<bodie_> http://paste.ubuntu.com/7625313/ fwiw, jcw4
<jcw4> bodie_: it looks to me like you're validating a json doc, which references the json schema at "http://json-schema.org/draft-04/schema#", and your trying to validate that, which references itself
<jcw4> the stack trace shows recursive calls between parseReference and parseSchema
<bodie_> I see, yeah
<bodie_> jcw4, well, I'm not trying to validate the "meta-schema"
<bodie_> just to load it into a JsonSchemaDocument
<bodie_> so, it shouldn't even be referencing itself.... referencing the remote document wasn't ideal, but it shouldn't be recursive
<bodie_> it was working fine as a file:// reference
<bodie_> sigh
<jcw4> :)
<jcw4> or maybe it should be :(
<bodie_> https://github.com/juju/juju/pull/63#discussion_r13601842 -- I think this is my EOD, so.... take care gentlemen.  I'll see you in the morning jcw4
<jcw4> ttyl bodie_
<bodie_> correction, https://github.com/juju/charm/pull/3
<bodie_> (rogpeppe)
 * thumper realises he left irc connected all night
<rick_h_> thumper: isn't that how it works?
<thumper> rick_h_: I try to actually disconnect
 * thumper needs to pull out of this refactoring dive RSN™
<cmars> thumper, i need to restart chrome, just a minute
<thumper> cmars: ack
<perrito666> has anyone used archive/tar?
<ericsnow> perrito666: a bit
<perrito666> I have implemented a tarGz that suits my current needs but I find out that I do not know how to add folders, I add the header but then in the body part I am not sure what goes
<ericsnow> perrito666: using the tar command?
<perrito666> ericsnow: nope, te archive/tar builtin from go
<ericsnow> perrito666: sorry then
<perrito666> heh no prob
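For the record, the answer perrito666 was after: a directory entry in `archive/tar` is header-only — set `Typeflag` to `tar.TypeDir`, leave `Size` at zero, and write no body bytes at all. A minimal round-trip sketch:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// makeArchive builds a tar with one directory entry and one file.
// The directory is header-only: after WriteHeader there is no Write.
func makeArchive() []byte {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)

	// Directory: write the header, then nothing.
	tw.WriteHeader(&tar.Header{
		Name:     "mydir/",
		Mode:     0755,
		Typeflag: tar.TypeDir,
	})

	// Regular file, for contrast: header (with Size) then the body.
	body := []byte("hello")
	tw.WriteHeader(&tar.Header{
		Name:     "mydir/hello.txt",
		Mode:     0644,
		Size:     int64(len(body)),
		Typeflag: tar.TypeReg,
	})
	tw.Write(body)

	tw.Close()
	return buf.Bytes()
}

// listNames reads the archive back and returns the entry names.
func listNames(data []byte) []string {
	tr := tar.NewReader(bytes.NewReader(data))
	var names []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		names = append(names, hdr.Name)
	}
	return names
}

func main() {
	fmt.Println(listNames(makeArchive())) // [mydir/ mydir/hello.txt]
}
```

Wrapping the writer in `gzip.NewWriter` gives the tarGz variant; the directory handling is identical.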
<perrito666> how are you going so far?
<ericsnow> good
<menn0> thumper: what's the easiest way I can set up a HA env to mess with? Canonistack, AWS or is it possible on my own machine?
<ericsnow> working up a patch to CONTRIBUTING
<menn0> thumper: I want to experiment with some mongo backup/restore ideas in relation to schema upgrades
<ericsnow> it's very meta, following the workflow while working on the doc on the workflow :)
<perrito666> menn0: for me aws has been the easiest
<thumper> menn0: otp
<menn0> menn0: no worries. perrito666 is helping and we can discuss further in the standup.
<wallyworld> fwereade: not sure if you're still around - wrt TransactionRunner embedded in State struct. The new storage stuff does indeed maintain an instance of a txn.TransactionRunner interface. That though is orthogonal as to whether State struct embeds the interface or not. It is embedded as of now because the methods that have been moved to the TransactionRunner were previously on State itself, so the refactoring is simplified.
<wallyworld> I can make it an attr though if you prefer
<waigani> http://juju-ci.vapour.ws:8080/job/github-merge-juju/88/console tests pass but my pull request "is not mergeable"?
<waigani> I'll merge upstream and see if there are any conflicts
<davecheney> waigani: that is 'cos you don't have permission
<davecheney> use $$merge$$ and the bot will do it
<davecheney> i think that is the answer
<waigani> davecheney: the bot is doing it
<davecheney> waigani: so everything is ok ?
<waigani> davecheney: no, I mean the bot is TRYING to do it: https://github.com/juju/juju/pull/22
<ericsnow> I've put up a patch for cleaning up the CONTRIBUTING doc (PR #65, Bug #1328716)
<_mup_> Bug #1328716: CONTRIBUTING should be cleaned up a bit <juju-core:New for ericsnowcurrently> <https://launchpad.net/bugs/1328716>
<davecheney> ericsnow: link to PR >
<davecheney> ?
<ericsnow> davecheney: https://github.com/juju/juju/pull/65
<ericsnow> davecheney: thanks for taking a look :)
<ericsnow> doc changes are always such an objective affair <wink>
#juju-dev 2014-06-11
<waigani> thumper: using Entity Tag now: https://github.com/juju/juju/pull/22
<waigani> my branches were all out of sync, had to pull, godeps, make check - back on track now
<wallyworld> axw: morning. would you have any time today or tomorrow to look at bug 1325830?
<_mup_> Bug #1325830: Can't destroy MAAS environment with LXCs <destroy-environment> <landscape> <lxc> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1325830>
<axw> wallyworld: yep, I can take a look
<wallyworld> \o/
<wallyworld> axw: i have to disappear for a couple of hours to have lunch with a friend who has had a death in the family. back a bit later
<axw> wallyworld: :(  no worries, ttyl
<wallyworld> yeah, car crash :-(
<thumper> waigani: ack, will look
<davecheney> sinzui: ping
<davecheney> does anyone know where the recipes for the precise builds of juju are ?
<davecheney> wallyworld: axw  ?
<axw> nope, sorry
<davecheney> damn
<davecheney> wanna get the precise build up to go 1.2
<davecheney> so we can resolve all the dependency issues
<davecheney> does anyone know how juju even gets built for precise
<davecheney> I cannot find anything on the project page
<davecheney> there is no linkage from any milestone/series to a ubuntu series
<thumper> davecheney: I'm guessing *magic*
<rick_h_> thumper: took my answer
 * thumper high fives rick_h_
<davecheney> thumper: fond it
<davecheney> wondering how I can copy the trusty package into this ppa
<davecheney> https://launchpad.net/~juju/+archive/golang
<thumper> I appologise in advance for this massive branch
<thumper> hmm...
 * thumper tries something
<davecheney> gave up, sent email
<thumper> wtf is github.com/bmizerany/pat ?
<rick_h_> heh, someone else brought that up today
<rick_h_> missed what the result of that was
<davecheney> yes
<davecheney> who landed that
<davecheney> 'twas jam in dae6b348
<thumper> what is the command that does fetch and pull ?
<davecheney> git ?
<davecheney> or go ?
<rick_h_> fetch and merge is done with pull
<thumper> git
<thumper> rick_h_: but I don't want to merge
<thumper> I just want to update master
<rick_h_> ok, then don't use pull
<rick_h_> git pull upstream master
<thumper> rick_h_: so what do I use?
 * rick_h_ think you guys are using upstream to refer to the official juju repo
<davecheney> yup
<thumper> rick_h_: ok, new question, interactively merge parts of another branch into the current one
<rick_h_> thumper: git cherry-pick $commit? or you want per code block in the diff?
<thumper> per code block
<rick_h_> interesting question
<thumper> ah... here I was hoping that you'd know
 * thumper falls back on the push the entire massive branch up for review
<rick_h_> thumper: http://stackoverflow.com/questions/449541/how-do-you-merge-selective-files-with-git-merge
<rick_h_> "Git also has great support for doing "reverse" squashes where a single commit is split into multiple patches. Below is an example of how to split a commit that has multiple unrelated changes in the same file. "
<rick_h_> http://magazine.redhat.com/2008/05/02/shipping-quality-code-with-git/
<thumper> fuck that
<rick_h_> lol
<thumper> sorry reviewer
 * thumper needs to rebase
<rick_h_> git cherry-pick --no-commit might also be useful
 * thumper is in conflict resolution hell
<thumper> if this test run has no problem, fair dinkum it'll be a miricle
 * thumper sighs
<thumper> spelling was never a strong suit
<axw> oh fun, we've got an ICE on the bot trying to build the ec2 tests
<davecheney> ice ?
<axw> internal compiler error
<davecheney> go ?
<axw> provider/ec2/live_test.go:1: internal compiler error: dgcsym: off=8589934928, size=589934736, type struct { overflow *struct { overflow *struct { overflow *struct { overflow *struct { overflow *<...>; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id
<davecheney> wow
<davecheney> never seen that one
<davecheney> off and size are AFU
<thumper> wow, that's weird
<thumper> after merging trunk, I get a test failure in the logging worker
<thumper> futz
<thumper> I bet it isn't isolating the env var
<thumper> yup
<sinzui> davecheney, lp recipes don't work with go. I upload source packages tot he ppa. I backported 1.2 the juju-packagers devel ppa a few weeks ago. 1.19.3 precise and saucy was built with it
<thumper> in someones infinite wisdom of test refactoring, they have broken isolation
<sinzui> davecheney, as there are no reports of badness. I intend to build the 1.20 with it. I need to work with foundations though to discuss backporting 1.2 to ctools
<davecheney> sinzui:  right
<davecheney> could you please reply to that thread and tell jamespage not to do anything then
<davecheney> i was obvkoiusly looking in the wrong
<davecheney> place
<sinzui> okay
<davecheney> but that also means that mgz can land his branch to the code which uses go.crypto/ssh
<davecheney> and then juju builds from the trunk of all the source
<davecheney> \o/
<davecheney> https://bugs.launchpad.net/bugs/1312940
<_mup_> Bug #1312940: Update to use gosshnew from go.crypto <ssh> <juju-core:In Progress by gz> <https://launchpad.net/bugs/1312940>
 * thumper running daughter down to hockey
<davecheney> axw: where are the instructions for setting up origin and upstream branches
<davecheney> ?
<axw> CONTRIBUTING.md
<davecheney> i thought they were in CONTRIBUTING.md
<axw> they're not?
<davecheney> i must be blind
<davecheney> maybe not on the branch I have
 * axw looks
<axw> davecheney: https://github.com/juju/juju/blob/master/CONTRIBUTING.md#fork
<davecheney> ta
<davecheney> i must be looking at an old branch
<sinzui> bugger. the arm64 machine fell of the net. I am removing it from the list of packages and tools to build to unblock CI from testing the current revision
<axw> sinzui: is that go 1.2 PPA safe to add to our github merge job's setup already? we've got a compiler error that's likely to disappear if I upgrade it now
<axw> the next question being where is the PPA?
<sinzui> The PPA is not used to test...we cannot wait hours or days to build packages to test
<axw> how will the unit test jobs get the 1.2 compiler then?
<axw> the precise ones anyway
<sinzui> axw jamespage setup a private ppa that we build into. It is private because we need to ensure people do not download from it before I publish tools
<sinzui> axw. We can copy the built packages to a public testing ppa. I can update the run-unit-test job to add the ppa for saucy and precise.
<axw> sinzui: I'm talking about run-unit-tests-precise-amd64 for example, we'll need to update the go compiler on there
<sinzui> axw. I can do that tomorrow when I am truely awake
<axw> ok
<sinzui> axw, good point about that test I need to make the same change to the non-revision test
 * thumper sighs
<thumper> How do I squash my commits?
<thumper> rebased with master already
<axw> thumper: are you doing it interactively?
<mwhudson> you can do it in rebase -i i think
<axw> git rebase -i
<thumper> it says I'm up to date
<rick_h_> cdid you merge master or rebase master, always rebase
<mwhudson> git rebase -i master
<sinzui> thumper, I git rebase -i --autosquash master. Change all the the first commit it s (squash)
<sinzui> thumper, git will let you revise the commit message in the next screen
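The squash sinzui describes can also be driven non-interactively, which shows what `rebase -i` is doing under the hood — a throwaway-repo sketch where everything (branch names, messages) is illustrative:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
g() { git -c user.email=a@b -c user.name=a "$@"; }

g commit -q --allow-empty -m base
g checkout -q -b feature
echo one >f; g add f; g commit -q -m one
echo two >>f; g add f; g commit -q -m two

# Script the "interactive" rebase: rewrite the todo list so every
# commit after the first becomes "squash", and accept the combined
# commit message unchanged.
export GIT_SEQUENCE_EDITOR="sed -i -e '2,\$s/^pick/squash/'"
export GIT_EDITOR=true
g rebase -i HEAD~2

git rev-list --count HEAD   # 2: base plus the single squashed commit
```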
<thumper> sinzui: I tried that, it didn't work
<sinzui> git hates you
<thumper> it does
<sinzui> it knows who you are
<axw> ah hrm, our merge job is running trusty and hence go 1.2.1
 * thumper tries again
<axw> crapola
<sinzui> hack on git-bzr and bzr-git and then only use bzr command
<thumper> how do I set the editor git uses?
<davecheney> $EDITOR
<thumper> it obviously doesn't honour EDITOR
<davecheney> it really does
<mwhudson> unless you've set core.editor
<sinzui> thumper, It honour vi/vim, probably non-gui emacs
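For completeness: git's editor lookup order is `GIT_EDITOR`, then `core.editor`, then `VISUAL`, then `EDITOR` — so a `core.editor` setting silently wins over `$EDITOR`, which is likely what thumper hit. `git var GIT_EDITOR` shows which editor git would actually launch (vim/nano below are just example values):

```shell
set -e
tmp=$(mktemp -d)
export HOME="$tmp"        # ignore any real ~/.gitconfig for this demo
unset VISUAL GIT_EDITOR
cd "$tmp"
git init -q repo && cd repo

EDITOR=nano git var GIT_EDITOR     # nano — falls through to $EDITOR

git config core.editor vim
EDITOR=nano git var GIT_EDITOR     # vim — core.editor beats $EDITOR
```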
 * davecheney jarring chord
<davecheney> axw: thanks for the review
<axw> np
<davecheney> i should be able to land this
<davecheney> and if all the bits are in place
<davecheney> it'll work
<davecheney> if not, it won't
<axw> the landing bot is 1.2
 * davecheney fires forward missiles
<axw> easy enough to back out if we need to
<sinzui> axw, because landing bot is trusty right?
<axw> sinzui: yep
<axw> we did talk about making it precise, but haven't yet
<davecheney> fail
<davecheney> sure that bot is running 1.2 ?
<axw> Get:90 http://us-east-1.archive.ubuntu.com/ubuntu/ trusty/universe golang-go amd64 2:1.2.1-2ubuntu1 [8,104 kB]
<davecheney> hmmm
<sinzui> axw I can start the copy of the packages to https://launchpad.net/~juju/+archive/experimental now. the could be there in minute.
 * sinzui looks at the test that needs it
<axw> sinzui: thanks, no rush. the test failure I've got is failing with 1.2 :(
<axw> davecheney: I think that something maybe running earlier on with go 1
<axw> davecheney: the bit that creates the tarball I guess
<davecheney> maybe
<davecheney> i'm not sure a go 1.3 dependency didn't leak into go.crypto/ssh
<davecheney> axw: i'm investigating
<axw> okey dokey
<davecheney> not quite sure what is going on
<axw> I will continue with my ICE picking
<davecheney> ok
<davecheney> shitload of compiler errors from 1.2 are fixed in 1.3
<sinzui> all hell is breaking loose. I cannot ssh to the CI machines in lcy02
<sinzui> Why couldn't this fail during my work hours
<davecheney> sinzui: you know how these things work
<sinzui> I think fate is pushing me to move all canonistack testing to lcy01... I setup a slave to test kvm there saturday
<sinzui> and the ppc slave just went offline because the gateway is gone
 * davecheney sound of air being let out of a baloon
<thumper> wallyworld, menn0: this is a big one... https://github.com/juju/juju/pull/68
<thumper> sorry about that
<davecheney> thumper: you did all this in one commit :)
<menn0> queue jokes about thumper's big branch...
<menn0> menn0: reviewing now
<thumper> davecheney: squished it
<thumper> I'd prefer to work out my pipeline workflow with git
<thumper> then it would have been much nicer
<thumper> as bit as it is, it should be fairly straigh forward
<menn0> it might be worth looking at Stacked Git. I haven't used it before though.
<thumper> menn0: I did look at it, not really suitable for what I want
<menn0> can someone tell me what Juju uses for the mongo db username and password? I'm trying to connect using mongo shell
<davecheney> menn0: admin user ?
<davecheney> ^ note, guess
<menn0> that's what I figured too but that doesn't seem to work
<davecheney> :sadface:
<davecheney> axw: still investigating
<menn0> here's what I'm doing: mongo 127.0.0.1:37017/juju-db --ssl --username admin --password <env's admin password from .jenv>
<davecheney> my trusty vm is installing the internet
<menn0> I get "login failed"
 * menn0 hunts through the code
<davecheney> login failed tells you it's working
<menn0> ?
<davecheney> sorry, that wasn't helpful
<davecheney> got 660, 92% complete
<davecheney> large update is large
<axw> bleh. changed from using a function literal to a struct+method and it goes away...
 * davecheney sobs
<axw> davecheney: btw yeah, whatever it is is fixed in 1.3 - I'm using rc1
<davecheney> axw: i cannot reproduce that failure on a clean machine
<davecheney> dfc@trusty:~$ go get -u -v code.google.com/p/go.crypto/ssh
<davecheney> code.google.com/p/go.crypto (download)
<davecheney> code.google.com/p/go.crypto/ssh
<davecheney> dfc@trusty:~$ go version
<davecheney> go version go1.2.1 linux/amd64
<axw> davecheney: tried with go1? pretty sure the tarball creation is done on precise
<davecheney> this won't work on go1
<davecheney> axw: the bot should be doing
<davecheney> go get -d -v
<davecheney> is it possible to make it do that ?
 * axw looks what it's doing
<davecheney> (it won't be using -d)
<axw> this part is part of common CI, so I need to check if/how to change it
<davecheney> or, can you apply the go 1.2 deb ?
<davecheney> that will also solve the problem
<davecheney> and we get precise and trusty build coverage to boot
<axw> davecheney: heh, the problem is that it can't build godeps
<axw> re 1.2 on the box, not my call. it's sinzui's machine. he said he'd update it tomorrow I think
<davecheney> ok
<davecheney> thanks
<sinzui> I just copied the packages to the ~juju/experimental ppa. The tests will use them tomorrow
<davecheney> sinzui: what about updating the version of Go on the jenkins host ?
<sinzui> davecheney, We don't compile there or run unit tests on it
<davecheney> sinzui: compilation happens
<davecheney> http://juju-ci.vapour.ws:8080/job/github-merge-juju/93/console
<sinzui> davecheney, not my job
<davecheney> this is building because go get will compile anything it downloads
<sinzui> And last I saw, that job ran in an instance, not in jenkins
<davecheney> Started by remote host 54.86.142.177
<davecheney> Building on master
<davecheney> http://juju-ci.vapour.ws:8080/computer/(master)/
<davecheney> master jenkins node
<axw> sinzui: it's building the tarball
<axw> sinzui: that needs godeps, and it's having problems doing that it seems
<davecheney> sinzui: there are two solutions
<thumper> heh, I'm sure the non-americans here would appreciate that the A0 paper size is by definition one square meter
<davecheney> 1. use go get -d, which will skip building anything it go get's
<davecheney> or upgrade to go 1.2
<axw> hmm actually it mustn't be to do with godeps, it was building that fine before
 * axw looks again
<axw> it's building them on there. we could just change it to do this all on the lander
<sinzui> davecheney, That job builds the tarball locally. the actual building and testing happens in a ubuntu instance provisioned by the test. I think that ami is trusty
<sinzui> mgz, know the details
<axw> sinzui: the tarball gets built on the jenkins host, but we can change that
<axw> and part of building the tarball seems to be to run "go build ./..."
<davecheney> axw: i think it's running go get launchpad.net/juju-core/... to fetch all the dependencies
<davecheney> then runnig godeps -u ... to switch their revisions
<axw> davecheney: I can show you the script if you like. it first does "go get -d <stuff>", then builds/runs godeps, then runs "go build ./..."
<sinzui> Yes, but that those are stripped out of the packaging, and the test rebuilds with the local compiler. The cargo culted test ruins on many series and archs, and the build is redone with the proper compiler
<davecheney> axw: ok, if it's doing go build ./...
<davecheney> then the machine running that needs to run go 1.2
<axw> yeah, we'll just fix the lander script to do all this on the lander
<sinzui> I advised mgz to switch the to run-unit-tests which I can change to use specific golang.
<davecheney> m'kay
<axw> sinzui: no worries, we know what the issues is so we can deal with it now
<davecheney> thanks sinzui
<davecheney> thanks axw
<thumper> menn0: did you want to chat about the review, or is it all good?
<menn0> thumper: still looking
<menn0> thumper: all ok so far. just some minor suggestions so far
<menn0> thumper: one thing. github.com/juju/juju/state/factory is just for testing right?
<thumper> menn0: yes
<thumper> menn0: it imports gocheck
<menn0> the module name doesn't really indicate that
<thumper> menn0: I wanted a name that wasn't "testing", and moved it out of "state/testing" for that reason
<thumper> we have a proliferation of testing packages...
<thumper> could move it to be under a testing package...
<thumper> that may make it more obvious
<thumper> testing/factory maybe
<menn0> that might be worthwile.  github.com/juju/juju/state/testing/factory
<menn0>  github.com/juju/juju/testing/factory is good too
<menn0> thumper: also, given that all the factory does is make stuff what about User() instead of MakeUser(). Some of the places that are now using the factory are much longer then they used to be.
<thumper> menn0: longer yes, but clearer IMO
<thumper> re: User, maybe...
<thumper> I'd like to get wallyworld's input
<thumper> and maybe axw
<thumper> wallyworld because he understands the idiom from launchpad
<menn0> this is a definite win: -	cs.State.AddUser(state.AdminUser, "", "pass")
<menn0>  +	cs.State.AddAdminUser("pass")
<menn0> but i'm not sure that this is:
<menn0> -	_, err = s.State.AddUser("arble", "", "pass")
<menn0>  +	s.factory.MakeUser(factory.UserParams{Username: "arble"})
<thumper> I do kind of like the verb-noun aspect of MakeUser
<menn0> I see that
<thumper> menn0: the clear bit of the second is that you don't have to modify all the call sites when you change how you make users
<thumper> like I did by changing params to state.AddUser
<menn0> completely agree with that
<thumper> also
<thumper> don't need to check err
<thumper> as the factory does that
<menn0> totally agree that that's better too
<thumper> I feel that the slightly longer line is better for clarity of intent
<menn0> my only (minor) gripe is that the lines are bit harder to grok now
<menn0> overall it's a win I suppose
 * menn0 misses keyword args
<thumper> well, I'd prefer: s.factory.MakeUser(username="arable")
<thumper> but we can't do that
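The params-struct idiom being discussed is Go's usual stand-in for keyword arguments: callers set only the fields they care about, and the factory fills in defaults for zero-valued fields. A toy version (the names mirror the chat; the defaults are invented):

```go
package main

import "fmt"

// UserParams plays the role of keyword arguments.
type UserParams struct {
	Username string
	Password string
}

type User struct {
	Name string
}

// MakeUser fills in defaults for any field left at its zero value.
func MakeUser(p UserParams) *User {
	if p.Username == "" {
		p.Username = "user-0" // invented default
	}
	if p.Password == "" {
		p.Password = "sekrit" // invented default
	}
	return &User{Name: p.Username}
}

func main() {
	u := MakeUser(UserParams{Username: "arble"}) // reads like Username="arble"
	fmt.Println(u.Name) // arble
}
```

Adding a field to UserParams later doesn't break existing call sites, which is the point thumper makes about not modifying callers when user creation changes.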
 * menn0 nods
<menn0> keep it the way it is
<thumper> this is my attempt to take a good python testing pattern and use it here
<menn0> it's definitely better than before
<thumper> I think so
<thumper> I think also as the factory grows other methods
<thumper> like MakeMachine, MakeService...
<thumper> etc
<thumper> MakeUnit would MakeService and MakeMachine by default...
<thumper> that type of pattern
<menn0> yep that all sounds great
<menn0> of course the methods could just be Machine, Service, Unit :)
<thumper> but if you have a service called Machine...
<menn0> so you get: s.factory.Unit(...), s.factory.Service(...)
<thumper> it doesn't say what
<thumper> meh...
<menn0> I actually don't care that much
<thumper> just being devil's advocate?
<menn0> yes
<menn0> I usually prefer verb-noun myself
<menn0> idea: you could call the factory field itself "make" so the calls look like: s.make.User(...), s.make.Unit(...) etc :)
<menn0> probably too confusing for the uninitiated
<menn0> but more concise
<thumper> haha
<thumper> hmm...
<thumper> I do expect that we'll end up with some form of DSL for tests, but this may be taking it a step too far :)
 * thumper hasn't got any lxc fixes done yet...
<thumper> bad thumper
<menn0> thumper: review done. I've included the relevant bits from our discussion here.
<thumper> ok, ta
<menn0> thumper: it should have been about PRs right? :)
<menn0> about 3 even
<thumper> yeah...
<thumper> if I was using bzr, it totally would have been
<thumper> but I don't yet know how to do this easily with git
<thumper> menn0: I'm thinking about the MakeAnyUser...
<thumper> may well be worth it
<thumper> for the general use case
<menn0> thumper: it's a minor thing but it's going to get used a lot
<thumper> that way the caller doesn't need to import the package to get a random thing
<thumper> if the factory is in the base suite
 * thumper nods
<thumper> I agree with making the default thing very easy
<menn0> yep that alone makes it worthwhile
 * thumper does that
<menn0> completely unrelated question... how do you figure out which state server is master?
<waigani_> just created a tag from a string in order to get that string from the tag
<waigani_> there's a bit of code I'm pretty proud of!
<thumper> no idea sorry menn0
<menn0> I've found a manual way to do it
<menn0> connect using mongo shell until the prompt says "PRIMARY" :)
<menn0> (trying each machine)
<thumper> wallyworld,axw: is the bot wedged? there seem to be many pending merges
 * wallyworld looks
<axw> it's currently idle
<axw> my build passed not long ago
<wallyworld> last run was 30 mins ago
<thumper> ok
<wallyworld> thumper: i just ran the lander by hand, it claims no pull requests
<thumper> ok
<wallyworld> thumper: JujuConnSuite is kinda evil but convenient
<davecheney> thumper: where is your $$merge$$ comment
<davecheney> ?
<wallyworld> should the factory be a fixture?
<wallyworld> thumper: also, one of the cts guys has volunteered to fix some juju bugs. i could get him to look at local provider usability - improved status etc, or whatever else
<wallyworld> that way you're  not on the hook for it
<waigani_> thumper: your factory looks great :)
<thumper> wallyworld: I think so
<wallyworld> which question?
<thumper> davecheney: almost there...
<thumper> just fixing something
<davecheney> ok
<thumper> wallyworld: what do you mean by "should the factory be a fixture" ?
 * thumper is in and out of the office to help make dinner
<wallyworld> don't add it to JujuConnSuite  - introduce a new Suite with the factory
<wallyworld> like the FakeHome fixtures
<wallyworld> JujuConnSite will hopefully die at some point
<wallyworld> but it's convenient i know
<wallyworld> if you had some specific local provider issues logged as bugs, let me know. the initial template image delay progress would be one
<thumper> wallyworld: it needs a state connection
<thumper> wallyworld: so, ackward
<wallyworld> yes it does, you are right
<wallyworld> i guess JujuConnSuite is the right place then for now
<thumper> wallyworld: I did a bare minimal state thing for the factory_test
<thumper> it would be good to get a base test suite that isn't as bloated as the JujuConnSuite
<thumper> ok, merge is accepted, I'm done for the day
<thumper> later ...
<wallyworld> axw: will your change to stop instances to ignore containers break the local provider?
<axw> wallyworld: no, because they're not containers as far as state is concerned
<wallyworld> np, just thought i'd check :-)
<axw> they're top-level machines that just happen to be implemented as containers
<wallyworld> yup, sounds good. can't e too careful :-)
<dimitern> morning all
<rogpeppe1> ha, you can't add review comments on github unless they're actually part of the diff
<rogpeppe1> that's so bogus
<TheMue> hmm, my branch doesn't get merged, despite the $$merge$$
<axw> TheMue: is your membership public?
<TheMue> axw: eh, where can I check this?
<axw> https://github.com/orgs/juju/members
<axw> the answer is no
<axw> make it public, and your $$merge$$ comment will count
<TheMue> *sniff* I'm not public
<TheMue> axw: thx, will change
<axw> np
<voidspace> morning all
<mattyw> voidspace, morning
<TheMue> axw: now it tells me that the PR is not mergable, any idea? it only contains a new document, nothing else.
<axw> TheMue: that's a strange transient error from GitHub we've seen a bunch
<axw> TheMue: you'll have to $$merge$$ again I'm afraid
<mattyw> axw, have you got a moment to talk about the feedback here: https://github.com/juju/juju/pull/64?
<mattyw> axw, and by that I mean options for making the output better
<axw> mattyw: briefly, I've got a standup to go to in 10m or so
<TheMue> axw: ok, thx again
<axw> mattyw: I made that comment after seeing what the output looks like from the test case you wrote
<axw> it's all sorta glommed together and hard to read
<axw> mattyw: by which I mean the tools in the format a;b;c;d;e;f
<mattyw> axw, ok, I could just split the context.tools up by the different tools and print each on a new line, it might look weird in the log, but to console it might looks ok
<axw> hm, forgot about the log...
<axw> mattyw: maybe just leave it for now, if people are actually bothered we can change it
<mattyw> a tiny part of me wonders if the whole command could output in yaml or json (using cmd.Output) when it's in dry mode, but that feels like overkill - and doesn't really solve our problem either I think
<axw> yeah I certainly wouldn't bother going that far
<mattyw> I could also split the tools up and log each line - there's the chance the tools could get split up in the log
<mattyw> but it would probably be fine
<axw> mattyw: I reckon either leave it as is, or change to ", " separator
<axw> I'm not *that* fussed, it just looked a bit funky
<wallyworld> fwereade: hiya
<fwereade> wallyworld, heyhey
<fwereade> wallyworld, still chugging through that giant review, I'm afraid
<wallyworld> no problem, sorry
<wallyworld> fwereade: i start out to fix some of our intermittent CI bot failures and the branch ended up refactoring a bit of mongo stuff
<fwereade> wallyworld, I guess you could have created the new package before using it, but nbd
<fwereade> wallyworld, despite my cornucopia of nitpicks it's a fantastic change
<wallyworld> fwereade: you mean the txn stuff?
<wallyworld> yes, i could have created it but... git
<fwereade> wallyworld, yeah, I'm not far enough up the page to see what else you did
<wallyworld> too hard to break stuff up
<fwereade> wallyworld, indeed :)
<wallyworld> fwereade: would you hate to take a look at another change referenced above?
<wallyworld> fwereade: i start out to fix our
<wallyworld> bah
<wallyworld> https://github.com/juju/juju/pull/72
<wallyworld> i have a theory as to why we are getting timeout errors in tests
<wallyworld> on the bot for various mongo operations
<mattyw> axw, thanks for the feedback, I'll see if I can come up with something better without changing too much
<fwereade> wallyworld, I would be delighted to, especially if it's fixing that stuff :)
<wallyworld> but it ended up turning into refactoring - getting a bunch of mongo stuff out of state
<fwereade> wallyworld, I think I'll try to finish this one first
<wallyworld> and moved from agent/mongo into top level mongo
<wallyworld> np, just thought you'd want an opinion on it
<axw> mattyw: cheers
<fwereade> wallyworld, that also sounds pretty awesome
<axw> mattyw: btw did you end up looking into the port range bug?
<wallyworld> fwereade: i need someone to double check - i removed the deprecated AddUser and replaced with UpsertUser
<wallyworld> anyways, only when you get time
<fwereade> wallyworld, txn with both insert and update?
<wallyworld> ?
<fwereade> wallyworld, was asking what you meant by upsert
<axw> it's an mgo thing
<wallyworld> oh, that's a mongo api call
<axw> insert or update I think
<wallyworld> yup
<fwereade> wallyworld, axw, I know what it is normally
<fwereade> wallyworld, axw, not clear how it integrates with mgo/txn
<mattyw> axw, I looked into it a little, not done anything yet. I have some "ideas" I'd like to discuss with an adult before I do anything
<wallyworld> fwereade: the mgo driver docs say to use Upsert from 2.4 onwards
<fwereade> wallyworld, axw, and would kinda prefer to keep mongoisms out of the state interface as much as possible
<axw> mattyw: heh :)  okay. I'm interested in it too, would like to hear your thoughts some time
<wallyworld> fwereade: it's a direct replacement mgo.AddUser becomes mgo.UpsertUser
<fwereade> wallyworld, ah!
<wallyworld> state calls it
<wallyworld> sorry
<wallyworld> bad comms
<fwereade> wallyworld, sorry, I see, it's for actual *mongo* users
<fwereade> wallyworld, that sounds fine
<wallyworld> yeah, sorry
<wallyworld> Upsert needs roles defined
<wallyworld> not just readonly=true/false like adduser
<wallyworld> so i need my choice of roles checked
<mattyw> axw, there was a comment on the bug by fwereade that I need to ask about as I don't understand the implication: https://bugs.launchpad.net/juju-core/+bug/1216644/comments/2
<_mup_> Bug #1216644: allow open-port to expose several ports <addressability> <improvement> <strategy> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1216644>
<wallyworld> mgz: axw: quick standup?
<axw> wallyworld: be there in a sec
<axw> mattyw: not entirely sure, but we use a security group for each instance to firewall ports unless explicitly opened ("firewall-mode: global" changes this behaviour)
<axw> mattyw: we'll probably want to change things around to use iptables on each machine, so they manage their own firewalls
<axw> mattyw: then nuke the security groups
<fwereade> mattyw, ah, are you working on that bug?
<fwereade> mattyw, let's chat in 5 mins?
<mattyw> fwereade, I'm not, but I started looking at it yesterday to see if I could work on it
<fwereade> mattyw, it's pretty massive I'm afraid
<mattyw> fwereade, I think I have enough to keep me busy for this week, but it might be good to discuss it so we can get a plan for it?
<fwereade> mattyw, ok, let's have a hangout and I will braindump issues at you until you cry ;p
<mattyw> fwereade, just ping me when you're ready
<rogpeppe1> this PR removes the charm package from juju-core. reviews appreciated (it's all mechanical change) https://github.com/juju/juju/pull/74
<rogpeppe1> dimitern, fwereade, axw: ^
<rogpeppe1> fwereade, jam: just about to create a new repo for the charm store. wondering about names. thinking of "charmstore" rather than "store". what do you think?
<fwereade> rogpeppe1, +1
<jam> rogpeppe1: sgtm
<rogpeppe1> fwereade, jam: ok, will do, thanks
<rogpeppe1> i think it's ok even though it will also be used to store bundles
<dimitern> rogpeppe1, looking
<dimitern> rogpeppe1, reviewed +1 question
<rogpeppe1> dimitern: thanks
<wallyworld> fwereade: thanks for wading through my txn bits and bobs, i'll start addressing the concerns but won't be done till tomorrow. i got sidetracked today on fixing more CI bot failures (mgo timeouts)
<voidspace> rogpeppe1: ping
<rogpeppe1> voidspace: pong
<voidspace> rogpeppe: do you have a minute or two to chat?
<rogpeppe> voidspace: sure.
<jam> fwereade: dimitern: so I'm slowly resolving the conflicts and bringing my API versioning stuff over to git. Unfortunately it touches a lot of stuff, and we're moving code all over the place, so it is a bit slow going. However the first step of having the RPC code support versioned requests has been done.
<jam> Would you like to review it now? I took the opportunity to do it in a "more compatible" way with the existing infrastructure, since I have to work my way up the stack anyway
<jam> It isn't *much* different than before, but it means we do expect to know about the type when we look up what method to call, but we don't expect a concrete object until we actually make the call.
<dimitern> jam, I'll definitely have a look - what's the link?
<jam> dimitern: I haven't created a pull request yet but https://github.com/jameinel/juju/tree/api-versioning
<jam> I wasn't going to do a PR for something in progress that nobody wanted to review. but I can do one now
<jam> dimitern: fwereade: https://github.com/juju/juju/pull/75
<dimitern> jam, cheers
<fwereade> jam, I will try to get to it as I can, I am slowly grinding through the reviews while not otherwise engaged
<jam> fwereade: understood
<jam> you just looked at the code before, so I thought it might be "faster" for you to look at the new version
<jam> vladk: standup ?
<rogpeppe1> fwereade: how would you feel about moving cmd/{charmd,charmload,charm-admin} into github.com/juju/charmstore ?
<rogpeppe1> fwereade: then there's nothing in juju-core that depends on charmstore
<rogpeppe1> jam: ^
<jam> rogpeppe1: aren't those the ones that just exist to build the charm tools ?
<jam> I'm trying to remember what they actually do
<jam> (clearly I'm *very* attached to where they are today :)
<rogpeppe1> jam: they're commands that talk to the store
<rogpeppe1> mgz: ping
<mgz> rogpeppe1: hey
<rogpeppe1> mgz: when you fetch dependencies in the 'bot, do you use "go get -t" ?
<rogpeppe1> mgz: to fetch testing dependencies too
<mgz> nope
<rogpeppe1> mgz: hmm, i wonder if godeps -t should fetch only testing dependencies of the initially mentioned packages, and not recursively
<mgz> we don't have any test-only deps at present right?
<rogpeppe1> mgz: yeah, we do
<rogpeppe1> mgz: github.com/bmizerany/pat imports github.com/bmizerany/assert, but only for its tests
<mgz> ah, I saw those got added, but not the context
<rogpeppe1> mgz: so when i did "godeps -t ./... > dependencies.tsv", that package (and a couple more) showed up
<rogpeppe1> mgz: but the 'bot isn't fetching them, so the PR failed
<mgz> rogpeppe1: can we even use either of those branches?
<rogpeppe1> mgz: how do you mean?
<mgz> oh, I guess he's sort of got it licenced
<mgz> the first one has it at the bottom of the readme and the other says, but does not include "MIT licence"
<mgz> don't seem like the best deps ever though
<rogpeppe1> mgz: i'm not that happy about their inclusion, tbh
<mgz> what's requiring them?
<mgz> assert at least seems totally redundant with what we have anyway
<rogpeppe1> mgz: the new dispatch code in state/apiserver/apiserver.go
<rogpeppe1> mgz: this is only for the tests in bmizerany/pat
<rogpeppe1> mgz: so we don't actually need those deps, as long as we don't run those tests
<rogpeppe1> mgz: i think i'm best off just excluding secondary testing deps from godeps output
<rogpeppe1> mgz: the godep tool does that, it seems
<mgz> why is it the charm splitting pr...
<mgz> ah, it's unrelated cleanup? I'm generally confused
<rogpeppe1> mgz: it has always supposedly been standard practice to generate dependencies.tsv by doing "godeps -t ./... > dependencies.tsv"
<rogpeppe1> mgz: so i did that, and ended up with these new deps
<mgz> yeah, and then you get the blame on unrelated breakage
<rogpeppe1> mgz: yeah
<rogpeppe1> mgz: it's ok, i'm removing the deps explicitly
<mgz> rogpeppe1: can you un-change the dependencies.tsv for now, land your branch, and post to the list about this?
<rogpeppe1> mgz: doing that
<rogpeppe1> mgz: we'll see if it lands ok now
<mgz> rogpeppe1: thanks
<rogpeppe1> mgz: i've fixed the godeps tool
<mgz> I shall pullet
<mgz> just chicken out the branch now
<bodie_> https://github.com/GoogleCloudPlatform/kubernetes
<bodie_> this is going to make some waves
<bodie_> methinks
<bodie_> morning!
<fwereade> rogpeppe1, oops, sorry I missed you: yes, move the store cmds into store
<rogpeppe1> fwereade: cool
<rogpeppe1> fwereade: that will require factoring out cmd too. do you think github.com/juju/cmd is a reasonable name?
<fwereade> rogpeppe1, yes, that's fine by me
<bodie_> oh, fwereade regarding the schema validation -- Jeremy raised the point that proper json-schemas don't look much like the samples we're working with in our tests
<fwereade> rogpeppe1, I'm trying to remember who else was talking about doing that
<bodie_> https://github.com/juju/docs/pull/117#commitcomment-6601483
<fwereade> rogpeppe1, whoever it was, you should talk to them
<bodie_> actually, I'll move this content to skunkworks
<rogpeppe1> bodie_: hiya
 * fwereade congratulates self on providing clear and useful direction :/
<rogpeppe1> bodie_: did you find out what was causing that infinite recursion?
<bodie_> no, I just got up
<rogpeppe1> bodie_: (FWIW, i'm concerned about that - it's a definite possible DOS attack)
<rogpeppe1> bodie_: np
<bodie_> yeah, it's concerning
<rogpeppe1> i'm worried that we're taking on board this whole huge json-schema standard when we really don't need anything nearly so complex
<bodie_> right.  our original PR actually didn't use json-schema at all
<rick_h_> rogpeppe1: just keep in mind that while it starts out in actions, it's on the roadmap to use it for charm/bundle config
<bodie_> mostly because the google doc where Mark was indicating his interest in json-schema hadn't been shared with me
<rogpeppe1> rick_h_: i realise that. it still seems like overkill though.
<rogpeppe1> rick_h_: we could define something really simple that would probably be sufficient and easy enough to implement in a short amount of time.
<rogpeppe1> the json-schema thing seems mostly like "here's a standard that we can point to; then all our problems will go away"
<rick_h_> rogpeppe1: no doubt. However there's merit in getting on board with a published standard, and it would help us with some of the lack of definition we hit with juju models.
<rick_h_> rogpeppe1: but yea, I'd not heard about it until Mark S linked it and got us investigating on it
<bodie_> anyway -- (also fwereade), when I had it parsing out of a file, it wasn't validating these "simplistic" schemas anyway
<rogpeppe1> rick_h_: i'd heard about it, but wrote it off when i actually looked at it
<rick_h_> rogpeppe1: I guess we're also less resistant as we've got a good set of tools on the JS end currently and json fits our life great anyway
<bodie_> wasn't *invalidating* them
<bodie_> honestly, rogpeppe1 I believe that if we run with it a bit, I think it won't be that bad, mostly because we've cleared the hurdles around it
<rogpeppe1> rick_h_: what kind of thing are you thinking of when you say "the lack of definition we hit with juju models" ?
<bodie_> now that it's in State, it should be a simple matter of x := gojsonschema.NewJsonSchemaDocument(Actions()["snapshot"].Params)
<bodie_> x.Validate(incomingJsonBlob)
<rick_h_> rogpeppe1: 'what can I put in here?' in config, actions will need to be very clear, relations are painful for new people to work with as they're not clear.
<bodie_> for errorValue := range x.Errors() or whatnot
<rick_h_> rogpeppe1: it's the 'with great flexibility comes a lack of clear immediate understanding' kind of thing
<bodie_> and, that's pretty much the whole kit n' kaboodle
 * fwereade needs to be away to laura's school for a while, back later
<rogpeppe1> rick_h_: it would help if relations config attributes were at least documented *somewhere* :-)
<rick_h_> rogpeppe1: yea, just playing a bit of devils advocate for the idea of using jsonschema :)
<rogpeppe1> rick_h_: i'm definitely not against using some kind of schema
<rogpeppe1> rick_h_: i just think that json-schema is yet another example of a W3C-style bloated and unnecessarily complex standard.
<rogpeppe1> rick_h_: which is a shame, because there's a possibility for something really nice in this space
<bodie_> on a side note, I think I'm going to discard the meta-validation branch
<rogpeppe1> bodie_: sgtm for the time being
<bodie_> it's not doing anything for us as far as I can tell (it wasn't discarding simplistic schemas) and it's turning into a more complicated problem than I had hoped, it was really more of a simple experiment
<rogpeppe1> bodie_: but i'd really like to get to the bottom of why it went infinitely recursive
<bodie_> yeah
<rogpeppe1> bodie_: it's something to do with the {"$ref": "#"}, i think
<rogpeppe1> bodie_: at least that's one portion of the infinite recursion
<rogpeppe1> bodie_: i don't know what that's supposed to mean though
<bodie_> well # is just a URL fragment with no path
<bodie_> so, it's a json-reference, I think
<rogpeppe1> bodie_: so what's it meant to mean as a reference?
<bodie_> not sure I could tell you
<bodie_> jdorn might know
<bodie_> http://json-schema.org/latest/json-schema-core.html
<bodie_> er
<bodie_> http://json-schema.org/latest/json-schema-core.html#anchor30
<bodie_> I suppose it's a reference to the root document
<bodie_> so, you're probably right that it's where the recursion is coming in
<rogpeppe1> bodie_: yeah, i'd just come to that conclusion
<bodie_> https://github.com/binary132/gojsonschema/blob/master/schemaDocument.go#L639
<rogpeppe1> bodie_: it's actually a desired recursive definition
<bodie_> I guess because it's its own schema
<bodie_> which means, a recursive self-reference like that shouldn't break it :/
<rogpeppe1> bodie_: i don't think that's the reason
<rogpeppe1> bodie_: i think it's because it's actually defining a self-referential type
<rogpeppe1> bodie_: similarly to how you'd define a linked list using struct types
<rogpeppe1> bodie_: or a grammar
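For concreteness, a trimmed-down schema of the shape being discussed (an illustrative fragment, not the actual failing test case):

```json
{
    "title": "schema that describes itself",
    "type": "object",
    "properties": {
        "not": { "$ref": "#" },
        "items": {
            "type": "array",
            "items": { "$ref": "#" }
        }
    }
}
```

`{"$ref": "#"}` resolves to the root of the same document, so the type is self-referential in exactly the linked-list sense: legal per the spec, but fatal for a resolver that expands references eagerly instead of resolving them lazily or tracking already-visited nodes.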
<bodie_> I wonder if the recursion is happening here: "$schema": "http://json-schema.org/draft-04/schema#"
<bodie_> normally, that would be a reference to the same URL as the document itself
<bodie_> but in our case, it's a reference to a remote document
<bodie_> rogpeppe1, I see what you're saying, I think that's right
<bodie_> in other words, schemaArray.items are themselves conformant to json-schema as a whole, i.e. are json-schema documents
<rogpeppe1> bodie_: the $schema thing isn't the problem
<rogpeppe1> bodie_: it still does the same thing without that
<rogpeppe1> bodie_: yup
<bodie_> I just don't see why it would work fine as a file reference or URL reference, but not when loaded as a string
<bodie_> maybe encoding/json is altering something
<rogpeppe1> bodie_: i doubt it
<bodie_> I don't think that makes sense, but it's the only thing I can think of
<bodie_> or there's a different logical chain that gets followed if it's a url vs a map
<rogpeppe1> bodie_: it's something to do with that, i'm pretty sure
<bodie_> https://github.com/binary132/gojsonschema/blob/master/schemaDocument.go#L38
<rogpeppe1> bodie_: there's something called a "pool" which gets looked up in jsonreference
<rogpeppe1> bodie_: and i suspect that's not filled correctly
<bodie_> case string = URL, case map = the one that has a problem, that we're using
<rogpeppe1> bodie_: i bet the pool caches the result of its Get
<bodie_> well
<bodie_> rogpeppe1, I'm seeing that the getFileJson / getHttpJson methods use interface{} for unmarshaling rather than map[string]interface{}
<bodie_> it seems kind of silly, but it's possible there's a wonky type switch happening somewhere
<bodie_> but, I guess that JSON would unmarshal as a map[string]interface{} even if it was getting unmarshaled into an interface{}
<bodie_> so that seems like a dead end
<bodie_> I need to get a shower and eat before my meeting with mark... :/
<bodie_> really stressful to feel bogged down in this point of the project
<bodie_> but, it is what it is
<bodie_> you're right that json-schema caused much more complication than we expected
<rogpeppe1> bodie_: i've reduced the test case to this: http://paste.ubuntu.com/7628481/
<rogpeppe1> bodie_: it's probably not yet minimal, but it's much smaller
<rogpeppe1> bodie_: here's some code i've been using to repro the issue: http://paste.ubuntu.com/7628485/
<bodie_> rogpeppe1, do you have that on a branch in your github?  I'd already thrown out my branch and local copy
<bodie_> ah, I see
<rogpeppe1> bodie_: you can always get your branch back via the reflog
<bodie_> interesting, i've never used reflog
<rogpeppe1> bodie_: i also put a log print somewhere in the recursion so that i can do go run tst.go 2>&1 | head -10000 | wc
<rogpeppe1> bodie_: to avoid the runaway process chewing up all my resources
<rogpeppe1> bodie_: reflog is indispensable sometimes
<rogpeppe1> bodie_: branches only get GC'd after 30 days
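The reflog recovery being described can be sketched like this; the branch name and repo are made up for a self-contained demo in a throwaway directory:

```shell
#!/bin/sh -e
# Demo (in a throwaway repo) of recovering a deleted branch via the reflog.
dir=$(mktemp -d) && cd "$dir"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
git checkout -q -b feature
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "feature commit"
git checkout -q -                          # back to the original branch
git branch -D feature                      # oops: branch deleted

# the commit is still in the reflog; HEAD@{1} is where HEAD pointed before
# the last checkout, i.e. the deleted branch's tip
git branch feature-recovered "HEAD@{1}"
git log -1 --format=%s feature-recovered   # prints "feature commit"
```

In a real repo you'd run `git reflog` first to find the right `HEAD@{n}` entry rather than assuming it is the most recent one.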
<bodie_> awesome
<bodie_> well, that's a relief, it's not something to do with the additions I made to gojsonreference
<bodie_> or rather, reductions / surgical alterations :P
<mattyw> dimitern, ping?
<natefinch> bodie_, rogpeppe1: it seems like we could save a ton of time by just defining our own super simplistic schema.  Name, Description, Type(Bool, Int, Float, Date, String).  Done.  Could be coded up in like 20 minutes, does everything we need it to do.
<rogpeppe1> natefinch: we already had that
<rogpeppe1> natefinch: but we need something that can be deeper
<rogpeppe1> natefinch: i suggested something like: T: int | float | string | bool | struct (field: T, ...) | map (T) | array (T)
<rick_h_> natefinch: right, but for actions we need things like urls, file paths, and such
<natefinch> rick_h_: I hear you saying strings, strings, and such
<rogpeppe1> rick_h_: but do those things need to be verifiable as such in the schema?
<rick_h_> natefinch: and soon extensions like resource locations and the like.
<natefinch> and more strings
<rogpeppe1> i tend to agree with nate here
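rogpeppe1's `T: int | float | string | bool | struct (field: T, ...) | map (T) | array (T)` grammar can be sketched in a few dozen lines of Go. This is a hypothetical illustration of the idea, not anything in the juju tree; names like `T`, `Kind`, and `Check` are invented here:

```go
package main

import "fmt"

// Kind enumerates the primitive and composite types in the mini-schema.
type Kind int

const (
	Int Kind = iota
	Float
	String
	Bool
	Struct // fixed set of named fields, each with its own schema
	Map    // arbitrary string keys, homogeneous element schema
	Array  // homogeneous element schema
)

// T is a recursive schema node, mirroring
// "T: int | float | string | bool | struct (field: T, ...) | map (T) | array (T)".
type T struct {
	Kind   Kind
	Fields map[string]T // used when Kind == Struct
	Elem   *T           // used when Kind == Map or Array
}

// Check reports whether v (as decoded by encoding/json) conforms to the schema.
func (t T) Check(v interface{}) bool {
	switch t.Kind {
	case Int:
		// encoding/json decodes all numbers as float64; accept whole numbers.
		f, ok := v.(float64)
		return ok && f == float64(int64(f))
	case Float:
		_, ok := v.(float64)
		return ok
	case String:
		_, ok := v.(string)
		return ok
	case Bool:
		_, ok := v.(bool)
		return ok
	case Struct:
		m, ok := v.(map[string]interface{})
		if !ok || len(m) != len(t.Fields) {
			return false
		}
		for name, sub := range t.Fields {
			fv, present := m[name]
			if !present || !sub.Check(fv) {
				return false
			}
		}
		return true
	case Map:
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		for _, ev := range m {
			if !t.Elem.Check(ev) {
				return false
			}
		}
		return true
	case Array:
		s, ok := v.([]interface{})
		if !ok {
			return false
		}
		for _, ev := range s {
			if !t.Elem.Check(ev) {
				return false
			}
		}
		return true
	}
	return false
}

func main() {
	schema := T{Kind: Struct, Fields: map[string]T{
		"outfile": {Kind: String},
		"quality": {Kind: Int},
	}}
	params := map[string]interface{}{"outfile": "snap.tgz", "quality": float64(9)}
	fmt.Println(schema.Check(params)) // prints "true"
}
```

The recursive `T` is exactly the linked-list-style self-reference mentioned above, but here the recursion is bounded by the depth of the value being checked.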
<bodie_> so, we're having a face-to-face with mark in about half an hour here, I'm uncomfortable coming to him and presenting "we're arguing about whether we need json-schema on the backend"
<natefinch> bodie_: make rogpeppe1 do it ;)
<bodie_> sigh, he's got a very valid point that if we're permitting users to define charms that stack explosions, that's bad
<bodie_> *cause*
<rogpeppe1> bodie_: it's a bunch of code that we didn't vet, and we don't understand
<bodie_> and then we start getting into constraining json-schema, rooting around for certain keys and certain values on those keys, we don't necessarily have a perfect picture of what's broken, anyway
<rogpeppe1> bodie_: and we don't even understand the json-schema standard ourselves
<bodie_> right
<natefinch> I think it's a really bad idea to do a standard we don't understand with code we don't understand and didn't write and don't trust.  Especially when we already have something workable that we do understand and we do trust.
<natefinch> Just be totally honest with Mark about the reservations people have with jsonschema and the code and the questionable benefits we get from using a standard that likely no one else uses either.
<rick_h_> It feels a bit like "Why use SQL when we've got data access we know and trust because we don't know SQL and the library that you use to interact with it"
<bodie_> um, because SQL is bad.  everyone knows that
<bodie_> ;)
 * rick_h_ goes back to his sql-loving corner
<natefinch> rick_h_: there's a huge difference between SQL and jsonschema
<rogpeppe1> rick_h_: it's a question of value added vs cost incurred
<natefinch> that too
<bodie_> I think the major difference here is that one has a good implementation in Go and the other doesn't
<bodie_> :/
<natefinch> bodie_: tons of people use SQL.... who uses jsonschema?
<rick_h_> and that feels like a poor reason to ditch something when we're talking about writing our own code anyway
<bodie_> don't know, I'm not a frontend programmer
<rick_h_> https://github.com/jdorn/json-editor/pulse/monthly
<rick_h_> just since it's asked.
<bodie_> I think it's probably salvageable with a little elbow grease, but it's not a guaranteed thing that this library will be in suitable shape without an indeterminate amount of monkeying about with it
<bodie_> this coming from someone who is far too heavy handed with timeline estimates
<rogpeppe1> bodie_: i suspect it would be less time to implement it from scratch.
<bodie_> and then it could also be done right
<bodie_> but, then we're diverting that energy away from <whatever the person is working on>
<rogpeppe1> bodie_: especially since we can then prioritise the pieces we need most
 * rogpeppe1 can barely resist going to do it himself.
<bodie_> I'll be glad to play along if I suddenly find myself with a lot more free time on my hands =P
<rogpeppe1> :-)
<bodie_> but, really hoping it doesn't come to that.  I'm really enjoying working with you guys, and I'd love to stay involved :) but I think I need to get some more product rolled out as proof that it would be a rational direction to go -- and this stuff has been a major eddy current
<bodie_> well, que sera, sera
<voidspace> yay, internet back
<voidspace> for now anyway
<natefinch> voidspace: yay!
<voidspace> natefinch: did you get my emails?
<voidspace> natefinch: I finally just sent the [full] one about HA and backup
<natefinch> voidspace: yes, sorry, busy morning or I would have responded.  Had a response half typed out :/
<voidspace> np
<natefinch> so, that's a good point that we can't be assured of getting the same server, especially on Azure
<voidspace> yep
<voidspace> (my connection is 2mbit downstream and 154kbps upstream by the way - so I'm not sure if that's good enough for standup / hangout. I'll try though)
<wwitzel3> natefinch: standup is an hour later today right?
<ericsnow> voidspace: avoiding me, huh? <wink>
<voidspace> wwitzel3: morning
<natefinch> wwitzel3: coreycb
<voidspace> ericsnow: hey, hi
<natefinch> wwitzel3: correct
<voidspace> ericsnow: sorry, I didn't see your messages until I had to leave
 * natefinch tries to auto-complete words that aren't usernames and falls on his face
<voidspace> ericsnow: I thought I'd been very successfully avoiding you though :-)
<ericsnow> voidspace: no worries :)
<voidspace> ericsnow: how is day 3 treating you?
<ericsnow> voidspace: good.  just starting though
<voidspace> ericsnow: welcome to the mad house
<wwitzel3> voidspace: morning :) I've been lurking addressing issues that fwereade pointed out on my pull request
<voidspace> ericsnow: is this your normal start time?
<voidspace> wwitzel3: cool
<voidspace> my internet is ropier-than-usual today
<ericsnow> voidspace: we'll see (trying to start no later than 7 so I can sync better with the squad)
<voidspace> first time it's gone down (during the day) for a while
<voidspace> ericsnow: cool
<voidspace> natefinch: what's the use case for storing backups on the state server - is it just so the juju command can complete immediately?
<voidspace> natefinch: because that doesn't seem like a *real* use case...
<ericsnow> BTW, my new laptop is mostly sorted out, trying to remember all the customizations/apps I had the last time I set up Ubuntu on my desktop
<ericsnow> so what's the story with UDS?
<dimitern> mattyw, hey, i'm back
<ericsnow> natefinch: I got a PR up before EOD for the CONTRIBUTING refresh  and a bit of good feedback from davecheney
<ericsnow> natefinch: (I also filed a bug on the tracker for the task)
<mattyw> dimitern, hey there, your review - n as well as --dry-run. Is there any precedent for that?
<mattyw> dimitern, only I've never seen -n used in that way before
<ericsnow> natefinch: how do I update the pull request with changes in response to the feedback?
<natefinch> voidspace: I don't know that there's a real need to store them on the server.  Possibly so you can easily just fire off a backup from anywhere without needing to worry about where it'll get stored (like from the GUI).
<voidspace> natefinch: the GUI can download it too, just like the CLI
<voidspace> natefinch: is storing them on the server (an asynchronous api) a requirement or our own idea?
<voidspace> natefinch: if it's a requirement we'll have to go with the more complex implementation
<natefinch> ericsnow: I think just committing to the same branch will update the PR automatically
<voidspace> natefinch: whenever a state server is asked to list backups it will have to ask the other state servers what backups they have
<natefinch> voidspace: it was an idea that was stated at the sprint.  The "requirements" are: do backup the right way
<perrito666> how about we discuss this on the meeting we should all be  :p
<dimitern> mattyw, no precedent
<voidspace> natefinch: I think "the right way" is to use cloud storage
<natefinch> perrito666: it's in an hour, I moved the wednesday one
<perrito666> lol
<voidspace> natefinch: and without that, immediate download (not storing on state server) is "good enough"
<natefinch> perrito666: since I have a TOSCA meeting now
<voidspace> my 2c worth
<perrito666> that explains why voidspace and I are the only ones there
<dimitern> mattyw, git uses --dry-run or -n, and as long as it's documented as equivalent it should be fine
<voidspace> hah
<natefinch> voidspace: seems good to me. Let's do it
<rogpeppe1> fwereade: it seems a bit wrong to me that the --version flag is defined inside SuperCommand. do you concur?
<perrito666> voidspace: your 2 instances of you just froze
<voidspace> natefinch: that simplifies the implementation a great deal
<perrito666> I dropped from the call until an h from now
<voidspace> perrito666: haha
<voidspace> ok
<voidspace> perrito666: see you in an hour
<voidspace> natefinch: I think it means your api is unneeded
 * perrito666 wishes his calendar would move meetings instead of just adding more
<voidspace> natefinch: we have a single endpoint, handled in state/apiserver/apiserver.go that the client does a POST to
 * rogpeppe1 gets lunch
<ericsnow> natefinch: so how do I avoid polluting history with multiple commits when I ultimately just want one?
<voidspace> ericsnow: rebase I guess
<ericsnow> voidspace: a git thing, right?
 * ericsnow pines for hg
<voidspace> ericsnow: yeah - rebase is the evil history rewriting that git supports
<voidspace> ericsnow: http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html
<voidspace> ericsnow: the rest of us are pining for bzr which is even better than Hg
<voidspace> ericsnow: with Hg you get multiple commits on mainline too
<perrito666> natefinch: just ftr, you did not move the calendar appt
<ericsnow> voidspace: I guess that's a consequence of the pull request approach
<voidspace> ericsnow: right
<natefinch> perrito666: hrmph... I did something...
<natefinch> sorry guys
<wwitzel3> voidspace: well, if you are only rebasing the upstream into your feature branch AND your feature branch has never been merged upstream .. it really is just a shortcut for stashing your changes, creating a new branch, and then reapplying them.
<voidspace> wwitzel3: right
<wwitzel3> but that said, if you don't do that, it can really mess things up
 * perrito666 ponders a quick implementation of gunzip for a test
<natefinch> perrito666: http://golang.org/pkg/compress/gzip/
<ericsnow> perrito666: system()?
<perrito666> natefinch: yup, that is what I meant, I am creating the backup inside a tgz but I need to gunzip it to make sure it has the proper contents
<perrito666> ericsnow: I am so not doing that in a test :p
<ericsnow> perrito666: oh, like a real test ;)
<perrito666> ericsnow: yup, this code was faster to write than to test :p
<natefinch> perrito666: seems like no reason you can't open up a zipped file to check it's contents.
<natefinch> s/it's/its/
<perrito666> natefinch: me no understands
<perrito666> too many negations in that sentence
<natefinch> perrito666: sorry.... mostly "sure, of course you can write that test"
<natefinch> perrito666: f, err := os.Open(filename);  gz := gzip.NewReader(f);  // read from gz
<mattyw> dimitern, I worry about having -n to mean dry run, as -n is used elsewhere to mean number of
<natefinch> mattyw, dimitern: totally agree ^^
<dimitern> mattyw, it is, but not in the same command - how about "-z" then?
<mattyw> dimitern, I don't have any complaints about that, I'm just searching for what other commands use to see if there is some common usage
<natefinch> dimitern: what's wrong with just --dry-run?  Doesn't seem like it's something that needs a short flag
<mattyw> natefinch, dimitern apt-get uses -s (to mean simulate)
<mattyw> (and also has --dry-run for the same thing)
<dimitern> natefinch, i'm totally fine with --dry-run, but users might complain having to type the whole thing - hence the shortcut
<ericsnow> dimitern: isn't the problem that short options are a limited commodity (so should be assigned rather conservatively)?
<dimitern> ericsnow, that's a fair point yes
<mattyw> dimitern, if you're ok I'll change it to --dry-run but otherwise leave it, if anyone complains we can add a short option
<dimitern> mattyw, sgtm
<natefinch> perrito666, wwitzel3, ericsnow, voidspace:  my wife wants me to bring the kids to a school function... like now.  So I'm going to miss the meeting.  I'll be back in an hour and a half -ish.  Sorry for the late notice
<wwitzel3> natefinch: np
<perrito666> natefinch: np, have fun at the school thingie
<mattyw> dimitern, that change has been pushed
<dimitern> mattyw, ta
<alexisb> natefinch, cmars: do either of you have someone on your team that would be available to help with a field bug today?
<wwitzel3> voidspace, ericsnow: standup
<ericsnow> wwitzel3: coming
<alexisb> wwitzel3, can you ask about my ping above
<wwitzel3> alexisb: yep, soon as he is back
<alexisb> thanks
<wwitzel3> alexisb: is the bug in lp?
<alexisb> yes, see #juju @ canonical
<alexisb>  bug https://bugs.launchpad.net/juju-core/+bug/1089291
<_mup_> Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <iso-testing> <theme-oil> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):New> <https://launchpad.net/bugs/1089291>
<alexisb> potentially related to this bug ^^
<wwitzel3> rgr, ok
<perrito666> voidspace: are you around?
<sinzui> Hi devs, we cannot get any version of juju to work with HP Cloud. This may relate to the region changes. https://bugs.launchpad.net/juju-core/+bug/1328905
<_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
<fwereade> rogpeppe1, yes, --version should be somewhere else
<dimitern> fwereade, jam1, natefinch, others? a trivial PR introducing pending networks in state https://github.com/juju/juju/pull/78
<mattyw> alexisb, did you get any response about your field issue?
<alexisb> not yet
<rogpeppe1> fwereade: ok, thanks. i was wondering though about adding a GetVersion func() string field to SuperCommandParams, so it was easy to plug in
<rogpeppe1> although perhaps a new version of SuperCommand that embeds the other one but adds version info might be a better approach
<ericsnow> the juju core roadmap presentation at UDS starts in about 35 minutes, no?
<rogpeppe1> anyone know what the rationale is for having plugins/local separate from plugins/local/juju-local ?
<ericsnow> wwitzel3: is there a bug open for that failed-tests-leave-mongod-running thing?
<wwitzel3> ericsnow: no, it only happens if the failure is the result of a panic
<wwitzel3> and there is really no way to cleanup anything in that case
<voidspace> perrito666: yes
<ericsnow> wwitzel3: got it
<voidspace> wwitzel3: ah, I thought we were waiting for natefinch, sorry
<mattyw> alexisb, I was about to start working on another feature but if I can be any help let me know
<voidspace> wasn't watching irc
<wwitzel3> voidspace: we just miss you is all
<ericsnow> voidspace, wwitzel3: :)
<voidspace> heh
<voidspace> are you there now?
<voidspace> I just joined the room and it was empty
<rogpeppe1> ha, the --version flag looks like it's totally undocumented
<wwitzel3> rogpeppe1: self documenting?
<wwitzel3> ;)
<rogpeppe1> wwitzel3: only if you know it's there...
<sinzui> natefinch, fwereade, jam, wallyworld, thumper, alexisb: bug 1328905 is super critical. Users are reporting that they cannot use any juju with HP
<_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
<alexisb> yeah more critical bugs!
<sinzui> ^ This may relate to region changes and we need to know how to specify the right regions (and AZs?) or HP has an issue that they aren't reporting
<fwereade> whoo, critical bugs, indeed :/
<alexisb> so jam, fwereade, natefinch, cmars, wallyworld, thumper, we will need to get some focus on critical bugs that have come up the last couple of days, I will send mail with details I am aware of but I leave it to you all to assign/delegate as appropriate
<fwereade> alexisb, cheers, I will be popping on late tonight to chat to thumper anyway
<jcastro> alexisb, I am firing up the hangout now
<alexisb> jcastro, ack, be there in just a moment
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdbxLFl9wnFqKOiHH4O9UeqMAFRS_cJdePa1FtT5DQH4hv1ag?authuser=0&hl=en
<jcastro> rick_h_, ^^^
<perrito666> if anyone has a moment https://github.com/juju/juju/pull/79 review will be appreciated :)
<rick_h_> jcastro: thought it was 2pm?
<mfoord> grrrr
<mfoord> rubbish internet
<mfoord> natefinch: here's a prototype of the backup download handler
<mfoord> natefinch: https://github.com/voidspace/juju/compare/download-backup
<rogpeppe1> can anyone tell me what these lines are supposed to be doing? cmd/supercommand.go:264,268
<rogpeppe1> (starting "if c.subcmd.IsSuperCommand")
<rogpeppe1> it looks to me as if the first body of the if is a complete no-op
 * rogpeppe1 starts to see, and wishes that he hadn't
<mfoord> perrito666: I don't know if you saw my last message, my internet is a bit up and down
<mfoord> perrito666: where does your backup stuff live - I can't find it in trunk but I may just be being dumb
 * rogpeppe1 feels embarrassed that he signed it off in code review
<jam> sinzui: are there other critical bugs, or just that one?
<sinzui> jam, just that one.
<sinzui> jam, I fear that while HP said they were removing old regions on June 1, they were really removed on June 11
<sinzui> jam, and devs, This is what I am reading for evidence that everyone's config is wrong: https://docs.hpcloud.com/api/v13/compute/
<sinzui> I don't know what value really belongs here
<sinzui> region: az-3.region-a.geo-1
<jam> sinzui: so that is what used to work, we strip off the az-3 when we need to, but we need it for some things
<sinzui> jam, I get further with just region-a.geo-1, and I see in horizon that az3 was given (at random?), but bootstrap fails... maybe juju couldn't match images to the region?
<jam> sinzui: so, IIRC, it used to be that either swift or compute required the extra field, and we would strip it off when looking it up in the other one
<jam> mgz: are you still around ?
<mgz> jam: yup
<jam> mgz: can you investigate the HP cloud stuff?
<jam> it sounds like keystone might be returning different data now
<mgz> I also saw that the other day, but didn't dig
<mgz> the switch to 13.5 has happened now though
<mgz> so, we need to switch anything still on the old config
<perrito666> voidspace: sorry I was just having lunch, I just pull requested the backup stuff
<mfoord> perrito666: cool, what's the link?
<perrito666> mfoord:  https://github.com/juju/juju/pull/79
<voidspace> natefinch: prototype backup download endpoint - with skeleton tests
<voidspace> natefinch: https://github.com/voidspace/juju/compare/download-backup
<voidspace> my internet is horrible and I've reached a good place to stop
<voidspace> so I'm calling it a day
<voidspace> EOD folks
<voidspace> g'night
<natefinch> voidspace: cool, just got back
<natefinch> voidspace: g'night
<ericsnow> voidspace: night
<voidspace> natefinch: I would appreciate it if you took a look at the download prototype before I hook it into perrito666's backup work
<voidspace> natefinch: I don't want to go too far down this road if you think there are horrible issues with the basic approach
<voidspace> natefinch: thankfully it's very simple
<voidspace> ericsnow: o/ hopefully chat more tomorrow
<ericsnow> voidspace: :)
<wwitzel3> see ya voidspace
<voidspace> wwitzel3: g'night
<perrito666> natefinch: we need to figure out if we are going with your approach or michael's one so we can integrate backup into either of those :p
<natefinch> perrito666: michael's approach seems a lot simpler and avoids a lot of the problems of connecting to the wrong HA server
<perrito666> natefinch: so, we drop much of your work? if not all?
<natefinch> perrito666: whole damn thing.   It turns into a single synchronous call to the HTTP API and we get back the data from the backup immediately.  Maybe fwereade has an opinion on that
<perrito666> how much is 2 pts in our kanban?
<perrito666> I really need to make a post it for that
<natefinch> perrito666: a point is half a day
<natefinch> half an ideal day, but we only assume like 60-70% efficiency per calendar day
<perrito666> fwereade: if you have a sec, ptal to the note I added to https://github.com/juju/juju/pull/30 I am not sure I properly expressed what you wanted to say there.
<perrito666> bbl Ill go biking while there is sun outside
<wwitzel3> perrito666: ha, read that as bikining
<jam> wwitzel3: well, it is sunny, right?
<wwitzel3> :)
<natefinch> ericsnow: how goes?
<ericsnow> good
<ericsnow> natefinch: I have something broken in my bash config that is resulting in test failures that I'm trying to track down
<ericsnow> natefinch: I also have an update for yesterday's patch that I need to push up
<natefinch> ericsnow: interesting..... I think we've seen similar problems from people trying to run tests in non-bash shells.... which is not really a good reflection on our tests
<ericsnow> natefinch: yeah, I'll let you know what I find--maybe there is something we could change in the tests to help
<natefinch> ericsnow: you have your laptop up and running?
<ericsnow> natefinch: pretty much
<natefinch> cool
<ericsnow> natefinch: installed 14.04 without ever booting into Windows and everything has just worked :)
<natefinch> ericsnow: nice
<alexisb> ericsnow, what type of laptop did you end up getting?
<ericsnow> alexisb: dell XPS 15 (haswell)
<alexisb> ericsnow, nice
<wwitzel3> natefinch: in HA, if a state machine goes down, it will eventually get replaced without user intervention, right?
<natefinch> wwitzel3: nope, you have to manually run ensure-availability
<wwitzel3> well, that was unproductive waiting
<natefinch> sorry
<wwitzel3> yes, I blame you!
<natefinch> heh
<ericsnow> natefinch: I've pushed the updated patch and see that the pull request automatically updated...
<ericsnow> natefinch: but I'm pretty sure I don't want those merge commits showing up there
<ericsnow> natefinch: how do I keep that from happening?
<natefinch> ericsnow: you can rebase, but it can end up removing comments on old revisions
<natefinch> ericsnow: I don't think the old revisions are a big deal.  We had a big discussion about this and I believe we decided to leave all revisions after the initial pull request.
<natefinch> ericsnow: I sorta wonder how much "intro to Github" we need in our contributing doc
<ericsnow> natefinch: I'd say just a link to some external tutorial
<fwereade> perrito666, natefinch: hmm, in-env storage from wallyworld should be HA soon... is it maybe reasonable to save to disk and distribute to other state servers? feel like there might be too much to go wrong
<natefinch> ericsnow: I agree
<ericsnow> natefinch: I'll add something in
<natefinch> fwereade: too much to go wrong with our synchronous approach, or the distributing approach? :)
<fwereade> natefinch, the distributing one :-/
<natefinch> heh
<fwereade> natefinch, which is a shame because I'm not 100% sold on the sync one either
<natefinch> fwereade: me either
<bodie_> woot, got a properly breaking JSON-Schema validation
<natefinch> fwereade: depends too much on your network not failing in the middle... or impatient users hitting control-C
 * fwereade cheers at bodie_
<natefinch> bodie_: was it improperly breaking before?
<fwereade> natefinch, yeah
<natefinch> ahh, so validation failures were broken? :)
<fwereade> natefinch, I haven't properly read back so point and laugh if this is stupid -- can we do both? ie a sync request kicks off a backup, which gets teed both to disk and to env storage, and recorded as an actual backup in state? so the interface is essentially the sync one but we get half a chance of recording it properly as well?
<natefinch> fwereade: we hadn't planned on actually putting it into mongo, if that's what you're talking about
<natefinch> fwereade: but yes, both is possible.... the problem was just getting *to* the ones stored on the state machine.
<natefinch> (when in HA)
<fwereade> natefinch, yeah, I was thinking that once we had env storage in gridfs it would be silly not to use it
<bodie_> natefinch, yeah, I was having a pretty hard time getting Validate to INvalidate anything :P
<fwereade> natefinch, but it would also be silly to depend on it
<natefinch> heh
<fwereade> natefinch, so it's more a distribution mechanism than a storage if iyswim
<bodie_> natefinch, actually hadn't implemented Validate yet, since we were thinking it would be deeper into the State machinery
<bodie_> anyway, it seems to be working now
<bodie_> and, I'm discovering some interesting stuff about JSON-Schema
<bodie_> for one thing..... encoding/json.Unmarshal doesn't explode if you try to unmarshal "something"
<bodie_> it doesn't like `{"something"}`
<bodie_> but `"something"` seems to be fine
<fwereade> bodie_, that seems reasonable
<fwereade> bodie_, {"something"} is a meaningless perversion of an object
<bodie_> I didn't realize bare literals like that were JSON values
<fwereade> bodie_, "something" is just a serialized string
<bodie_> so, fwereade, I think we CAN define "simplistic" schemas
<fwereade> bodie_, fantastic
<fwereade> bodie_, I thought so too but was starting to get nervous :)
<bodie_> fwereade, so... I think if our params looks like -- the example I gave
<bodie_> https://github.com/juju/docs/pull/117#commitcomment-6601692
<bodie_> well, we'd want it to be a little different
<bodie_> let me put something interesting together
<bodie_> anyway, then the action could be called something more like --
<bodie_> snapshot "outfilename.bz2" --compression 5
<bodie_> that would probably be dumb, and maybe you'd only be able to define a single parameter like that
<bodie_> but it might be possible
<fwereade> bodie_, yeah, I'm a little bit -1 on single-parameter schemas
<jcw4> fwereade: just a little bit?
<fwereade> bodie_, your point about params being N keyed schemas seemed like a solid one
<fwereade> jcw4, yeah, more than a little bit
<fwereade> jcw4, I wish this did not apply to me quite so much, but it does: http://www.thepoke.co.uk/2011/05/17/anglo-eu-translation-guide/
<bodie_> heheh, this is gonna be good
<jcw4> yep... I grew up in Zimbabwe (aka Rhodesia)
<jcw4> we consider ourselves more british than the British
<bodie_> "quite good"
<bodie_> "incidentally"
<jcw4> I knew I interpreted your code review comments correctly...
<jcw4> fwereade: I only have a few minor comments on your PR
<jcw4> supposed to be in quotes ^^
 * fwereade looks decidedly shamefaced
<fwereade> jcw4, to be fair, even when I am asking someone to completely change things, I have in mind the worst possible case to compare it to
<bodie_> jcw4, lol
<fwereade> jcw4, so I can always say it's minor ;)
<jcw4> :)
<jcw4> fwereade: I just think it's such a civilized way of communicating
<jcw4> it's a pity non Anglos misinterperet it so much
<jcw4> :)
<fwereade> jcw4, yeah, it seems entirely natural to me :)
<ericsnow> natefinch: okay, it's starting to make sense now
<natefinch> ericsnow: good
 * fwereade disappears again for a bit, probably back soon, slightly less guaranteed than before but still likely
<thumper> fwereade: around?
<jcw4> thumper: 5 minutes ago he said he would probably be back soon
<thumper> jcw4: ok, ta
<waigani> morning all
 * thumper is super happy his branch landed last night
<menn0> waigani, thumper: (belated) hi!
<thumper> o/
 * menn0 is a little zombie-ish this morning. 
<menn0> Youngest up all night and I'm a bit sick
<thumper> menn0: oh no...
<menn0> more coffee and I should be ok... just clearing out the inbox
<wwitzel3> ugh that reminds me I've neglected email all day
 * menn0 is so glad he discovered the Google Labs Gmail Auto Advance feature.
<thumper> menn0: what does that do?
<menn0> It means you can hit # for Delete or y for Archive and it takes you to the next or prev conversation (configurable) instead of back to the inbox.
<menn0> I can churn through emails so much faster now. Almost as efficiently as Mutt.
<wwitzel3> menn0: nice, I will have to enable that
<menn0> It's so good. I'm actually considering going back to Gmail for my personal mail.
<bodie_> menn0, you a zero inboxer? ;)
<ericsnow> what's the rationale on unsetting $HOME during tests?
<thumper> ericsnow: for isolation
<menn0> bodie_: I try to be and actually get pretty close most days with my Canonical inbox.
<thumper> ericsnow: IIRC, it is set to a test directory
<thumper> which is deleted at the end of the test
 * thumper is waiting around for fwereade
 * thumper needs to walk the dog...
 * fwereade is here for thumper
<wwitzel3> yeah same here, I try to zero inbox before I EOD
<wwitzel3> lol
<thumper> fwereade: heh
<thumper> fwereade: was just writing that I'll be back in 30min
<ericsnow> thumper: I'm getting a bunch of failures because some of my bash startup scripts make use of $HOME
<thumper> fwereade: but if you are here now, lets chat
<menn0> fwereade, thumper: that's really sweet :)
<thumper> menn0: we try...
<fwereade> thumper, cool, would you start a hangout while I fix another drink please?
<jcw4> awwww
<thumper> fwereade: ack
<thumper> jcw4, menn0: you guys are just jealous
<jcw4> wha?
<jcw4> me?
<thumper> fwereade: https://plus.google.com/hangouts/_/gte3jtum3i2m4lrjob2lnm72yaa?hl=en
<menn0> I shouldn't have said nice things about Gmail. It just went pop - 500 errors :)
<menn0> (as in HTTP status 500)
<jcw4> whew... I was thinking...
<menn0> :)  ... it was only dead for a minute
<bodie_> there was a fun gmail outage last year
<bodie_> I always thought they were an invulnerable billion-ton giant until that
<bodie_> now I just think they're a mostly untouchable billion-ton giant
<jcw4> :)
 * jcw4 is off for a few hours...
 * thumper takes the dog for a quick walk
<thumper> thinking time
 * ericsnow calls it a day
<sinzui> wallyworld, Do you have any insights into this bug. How can I bootstrap with private IPS https://bugs.launchpad.net/juju-core/+bug/1328905
<_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <hp-cloud> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
<wallyworld> sinzui: one sec otp
<thumper> menn0: coming?
<waigani> menn0: you alive?
<wallyworld> sinzui: off the phone now, did you want a hangout about that bug?
<sinzui> wallyworld, I think I know what has happened
<sinzui> wallyworld, The new version requires manual allocation of floating-ips where as the previous version automatically did http://h30499.www3.hp.com/t5/Grounded-in-the-Cloud/Managing-your-Floating-IPs-in-HP-Cloud-13-5/ba-p/6401527#.U5jkER-cYbw
<wallyworld> oh joy
<sinzui> wallyworld, I suppose the juju config was ignored.
<thumper> hmm...
<thumper> who wrote that code... was it me?
<wallyworld> sinzui: so it failed with use-floating-ip=true?
<thumper> how would I know?
<sinzui> wallyworld, I am not sure that is really true because we only have 5 public addresses, and our project/tenant has had more bootstrapped envs than 5
<thumper> yes
<thumper> yes it was me
<thumper> hmm...
<sinzui> wallyworld, no it works with that... but I take that to mean the 50 of us sharing juju-scale-test have 5 IPs for either 5 envs or 5 instances total
<sinzui> I vaguely recall we always got ip addresses instead of dns for HP envs
<thumper> hmm... it seems that I have already implemented what I think should be there...
<wallyworld> sinzui: so maybe i am being dumb - if we need to manually allocate a floating ip, how do we do that outside the hp console?
<sinzui> wallyworld, there is a hp cli for that
<perrito666> rogpeppe1: I know you most likely are not here, but should I consider your review done?
<sinzui> wallyworld, I need to read up about something called salt that might allow ssh to a private ip on hp
<wallyworld> sinzui: hopefully there's api specs and the transport is http or something
<wallyworld> so we can invoke if needed
<wallyworld> sinzui: so, do we have a feel for how many people are affected? how many juju users are on 13.x?
<wallyworld> is this a blocker for 1.20?
<sinzui> wallyworld, in the last 18 hours, both canonistack regions failed, azure's delete problem escalated and we ran out of resources on azure, then we finally switched over to HP's new regions losing sane ip addresses and AZs. I need to sleep
<sinzui> wallyworld, every juju user is affected by this
<sinzui> who uses hp
<wallyworld> ok, let's talk tomorrow
<wallyworld> well that sucks
<wallyworld> a vendor change breaking juju like that
<sinzui> wallyworld, at the start of the year, it was just the new people who were set up with horizon. A lot of people in Ubuntu engineering couldn't get juju to work because the docs advise AZ prefixes to the region and our boilerplate has use-floating-ip: false
<sinzui> wallyworld, I think the underlying issue is HP ran out of IPs and then failed to solve the network issue with ip6 or dns
<wallyworld> sure. doesn't help us though :-(
<wallyworld> juju's ip6 support is still wip anyway i think
<wallyworld> sinzui: so if we change the boilerplate as per your last bug comment, do you think that will unblock 1.20? pending a better longer term solution
<sinzui> wallyworld, It gets a little uglier https://bugs.launchpad.net/juju-core/+bug/1247500
<_mup_> Bug #1247500: Floating IPs are not recycled in OpenStack Havana <addressability> <ubuntu-openstack> <juju-core:Triaged> <https://launchpad.net/bugs/1247500>
<sinzui> ^ last comment
<wallyworld> :-(
<perrito666> thumper: howbazaar, really?
<wallyworld> sinzui: so short term - use floating ip to true and delete any when env is destroyed will unblock 1.20?
<sinzui> wallyworld, I think so...I am still coming to understand this issue after a whole day of testing
<sinzui> wallyworld, I am about to destroy an env and see what happens to the ip
<wallyworld> sinzui: thanks for doing so much work to explain the problem. really appreciated
<sinzui> well ci just did a destroy...at least CI is a little happier
<sinzui> wallyworld, test 1 did return the IP
 * sinzui does test two
 * wallyworld is hopeful
<sinzui> wallyworld, I am marking that bug  incomplete. my tests don't repeat it
<wallyworld> sinzui: ok. so for now, just the boiler plate change for 1.20
<sinzui> yep
<wallyworld> great :-)
<sinzui> I would like to know how to get the AZ working again
<wallyworld> as in specifying region with az2.region-a.blah
<wallyworld> we are getting stuff in trunk which will handle az spread automatically
<sinzui> yay
<wallyworld> sinzui: should be landed this week - currently in review
<wallyworld> so explicit region less important i guess
<wwitzel3> menn0: thanks for that tip about the gmail labs .. makes email way easier
<davecheney> thumper: https://github.com/juju/names/pull/3#discussion-diff-13674308R164
<menn0> wwitzel3: np. It's a small thing but it makes a big difference.
<davecheney> err, whatever
#juju-dev 2014-06-12
<bodie_> https://github.com/juju/charm/pull/4 (fwereade, mgz, rogpeppe1)
<bodie_> (and jcw4)
<bac> hi bigjools
<sinzui> axw, The unittest jobs and build action in the publish revision job uses golang 1.2 from the ~juju experimental ppa for saucy and precise
<axw> sinzui: cool, thank you
<waigani> thumper: you about?
<axw> sinzui: doesn't build-revision need it too?
<axw> sinzui: seems that juju-ci.vapour.ws still has 1.1.2, and that's where build-revision runs right?
<sinzui> axw no, not really
<sinzui> axw we make a tarball in that step. the tarball only has source code
<thumper> waigani: am now
<davecheney> sinzui: so should I try to land that branch again ?
<sinzui> axw, the build step does give some confidence that it works, but the output is ignored
<waigani> thumper: just wondering about the info cmd
<waigani> thumper: did you catch up with fwereade about it?
<sinzui> axw the tarball is made into a source package for each series, then for each source package a builder of the right series and arch is provisioned. That builder gets the right golang or gccgo.
<sinzui> axw, that is how Ubuntu and LP do it, so we do it too
<axw> sinzui: except it's "set -e", so the "go build" fails the whole script if it fails?
<axw> mmk
<sinzui> oh, well I will remove that now. it is vestigial
<thumper> waigani: yeah...
<thumper> waigani: we should chat, let me get some food first
<axw> sinzui: I think it's nice to have, but if we could drop it for now that'd be great
<sinzui> axw, the build step is gone
<axw> thanks!
<axw> davecheney: ^^   try your merge again
<davecheney> axw: ok
<davecheney> thanks
<wallyworld> axw: morning. will you have time today to look at bug 1240146 ? after the current az stuff is landed?
<axw> wallyworld: sure
<wallyworld> axw: thanks :-) azure provider performance and bugs is becoming a hot topic
<axw> wallyworld: ehm, I think I already did this a while ago :)
<axw> will look deeper, but I'm pretty sure this is resolved
<wallyworld> axw: i wondered about that, i figured there must be something new that we didn't know about
<wallyworld> i didn't look too closely, just added the card
<axw> wallyworld: I believe I added this when I did the availability set revamp
<wallyworld> that was post 1.18 right?
<axw> anyway, will confirm
<axw> yes
<axw> 1.19
<wallyworld> so it must just be people on 1.18 complaining
<wallyworld> and 1.20 is due out next week
<axw> it is? okay, I better fix that local-provider HA thing then
<wallyworld> that's the plan - release 1.19.4 this week and turn into 1.20 if all good
<wallyworld> the current 1.19.4 milestone has no show stopper bugs that are recorded against it
<wallyworld> if there's something to do, the bug needs to be added to the milestone
<axw> I will add it
<wallyworld> ta
<axw> davecheney: what's that error? went too far and got 1.3 changes?
<axw> oh, it's getting vet
<axw> groan
<bodie_> sigh, I really need to set up my dev system in a vagrant box
<bodie_> I want to use mongo for something I'm working on personally but I'm not confident about installing it alongside the "special" version
<bodie_> anyone know if that should be relatively painless?
<bodie_> I'm assuming more on the relatively painful end of the spectrum
<axw> davecheney: I've updated the build config to checkout go.tools as at the 1.2 release, retrying your build now
<sinzui> bodie_, I run in lxc
<bodie_> sinzui, did you see the lxc cluster scheduler google open-sourced today?
<bodie_> hmmmm... so you run mongo itself in lxc if you need your own version?  I've never played w/ lxc much
<sinzui> bodie_, I haven't
<sinzui> bodie_, give me a moment to paste bin what I think is my last container and setup
<bodie_> sinzui, https://github.com/GoogleCloudPlatform/kubernetes
<sinzui> bodie_, http://pastebin.ubuntu.com/7631475/
<axw> you could just use the local provider, too
<bodie_> that's awesome, thanks :)
<bodie_> true
<sinzui> axw, the juju-reports project's makefile always runs the project in the charm, and it sets up a bzr server to serve updates to the branch to the charm. instant updates that always run in production
<waigani> thumper: ready when you are
<thumper> waigani: I have a scheduled call with  wallyworld shortly, how about after that?
<waigani> thumper: okay, I have to do school run at 3
<sinzui> bodie_, I think I installed this the first time I used the container because it was missing something that made apps misbehave
<sinzui> sudo apt-get install language-pack-en avahi-daemon build-essential python-dbus
<bodie_> hmm, I just did a go install ./... from my root juju source directory, and juju bootstrap is asking me to apt-get install juju-local
<bodie_> I don't think I want to do that
<bodie_> do I?
<axw> you do if you want to use the local provider
<bodie_> isn't that part of our repo?
<axw> juju-local just installs prereq packages: mongo, rsyslog, etc.
<bodie_> I see
<bodie_> it also wants to install juju-core
<bodie_> I just don't want to override my $GOPATH/bin juju stuff
<axw> you'll have to ensure that $GOPATH/bin comes before /usr/bin in $PATH
<bodie_> fair enough, that makes sense
<axw> hurrrngh
 * axw has flashbacks to building The Build Guy at his previous job
<axw> being
<bodie_> heh
<bodie_> sorry *pat pat*
<thumper> waigani: let me know when you are back from the school run
<thumper> waigani: and we can talk
<thumper> https://github.com/juju/juju/pull/81 for anyone, simple, found the code lying around...
<thumper> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1329154
<_mup_> Bug #1329154: Make it possible to create the lxc templates without a running environment <local-provider> <plugin> <juju-core:Triaged by niedbalski> <https://launchpad.net/bugs/1329154>
<wallyworld> ta
<thumper> wwitzel3: ta
<wwitzel3> thumper: np
 * thumper goes to make coffee
<waigani> thumper, menn0: ModifyUser - all tests pass.
<menn0> waigani: we just both wrote about the same email at the same time
<waigani> great minds...
<waigani> thumper if it is okay to keep it as is, can I just remove the annotation from Tag? Or is there more to it?
<waigani> thumper: also, did you want to talk about juju info?
 * thumper looks
 * thumper wonders what the hell he was thinking about...
<thumper> I'm sure there was something...
<thumper> waigani: when you said you ran all the tests, did you mean *ALL* or just the apiserver tests?
<waigani> lol thumper all
 * thumper pulls a face
<waigani> thumper: some failed with panics, reran those and they passed
<thumper> I'm sure the api client user manager tests would fail at least one
<waigani> $ cd state/apiserver/usermanager/
<waigani> .../state/apiserver/usermanager$ nolog
<waigani> OK: 10 passed
<waigani> PASS
<waigani> ok  	github.com/juju/juju/state/apiserver/usermanager	2.377s
<waigani> /state/apiserver/usermanager$
<waigani> thumper: ^
 * thumper checks
<thumper> hmm... perhaps the default json serialisation is to lower case the names...
<thumper> that is the only thing that makes sense there...
<thumper> but I seem to recall logs where that wasn't the case...
<thumper> although that may have been the deserialized json
<thumper> although...
<waigani> thumper: I don't follow, where are you looking?
<thumper> based on what menn0 said, and that this wasn't hooked up before...
<thumper> we should just make the api clean
<thumper> which means I should delete my backwards compatability bit
<thumper> because that's dumb
<menn0> thumper: good point
<waigani> okay, actually, do we even need to keep Tag then?
<menn0> I wish I'd remembered that when you made the change
<thumper> heh
<thumper> waigani: I hold to my thought that the two structures are for different things
<thumper> and we should not reuse them just because they are "kinda similar"
<thumper> that way lies pain as code moves on
<thumper> we never pass back password
<thumper> for example
<waigani> thumper: did you see fwereade's comment on this?
 * thumper sighs
<thumper> no
 * thumper looks
<waigani> thumper: If I understand him correctly, he is arguing the opposite
<thumper> waigani: there are a lot of fwereade's comments hidden due to changing code.
 * thumper looks at the email trail
<waigani> thumper: comment below
<waigani> so this looks to me very much like a UserInfo. What's the thinking behind having separate types? The params package really ought to be expressing the language we're using in the api, not just a grab-bag of ad-hoc structs.
<waigani> NOTE: there are certainly reasons to choose to split the types. I'm not sure the ones I can think of are quite good enough to justify it, but I want to be sure we've considered them.
<thumper> yeah
<thumper> reading and thinking
<thumper> I think we do want to split them
<thumper> because we are about to return "date created, creator, and last connection"
<waigani> okay, and the justification?
<thumper> these are not fields you send up
<thumper> make sense?
<waigani> right
<waigani> yep
<thumper> so...
<thumper> ModifyUser should stay distinct from the UserInfo struct
<thumper> ModifyUser is for add/modify user client -> server instructions
<waigani> okay, should we remove Tag from it now?
<thumper> UserInfo is for server-> client info
<thumper> well...
<thumper> that is an interesting question...
<waigani> yep, that is clear
<thumper> when we create a user, we are specifying a username for them
<thumper> this clearly isn't a tag
<thumper> however when we modify them, we may well want to define them with a tag (even though I think it is dumb)
<thumper> as long as it is a real tag
<thumper> bah humbug
<waigani> in which case we should remove the deprecated comment
<thumper> agreed
<thumper> and the behaviour should change...
<thumper> but don't do it all at once
<thumper> we have never ending feature creep
<thumper> we want small defined bits of work
<thumper> with as little extraneous bits of change as possible
<waigani> agreed
<thumper> so it should change, but not in this branch
<waigani> just remove the comment? and add a todo?
<thumper> no, don't touch it
<waigani> okay
<waigani> I'll remove the annotation though
<waigani> actually what am i saying
<thumper> no, it is a different struct
<waigani> it will be like it was before, no annotations
<thumper> right
<waigani> yep
<thumper> just add the new UserInfo
<waigani> yep
<thumper> although add in the extra fields that landed yesterday
<thumper> that way the struct is obviously different
<thumper> to the Modify User one
<waigani> right, cool new fields :)
<thumper> you may also want to update your tests to create the users with the factory
<waigani> oh nice :)
<waigani> is there an example to look at?
<waigani> or at least what package is it?
<thumper> yes
<waigani> suite even?
<thumper> and yes
<thumper> testing/factory
<waigani> cool, I'll get onto it
<thumper> ok
<thumper> wallyworld: here is another bug you could pass on: https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1290920
<_mup_> Bug #1290920: non-default lxc-dir breaks local provider <local> <lxc> <regression> <juju-core:Triaged> <juju (Ubuntu):Confirmed> <https://launchpad.net/bugs/1290920>
<wallyworld> will do
<wallyworld> ta
 * thumper back later tonight for team meeting
<thumper> team leads that is
<dimitern> morning
<dimitern> any reviewers wanna have a look ? https://github.com/juju/juju/pull/78
<dimitern> jam, fwereade, TheMue, vladk|offline, ^^^
<wallyworld> fwereade: i've addressed all your suggestions in that txn branch, not sure if anyone else has looked. off to soccer now, back later for more meetings \o/
<jam> dimitern: so I'm trying to sort out where pending networks helps us. If it is just the list from the provider, can't we just list the provider when we need to?
<dimitern> fwereade, ping
<rogpeppe1> anyone know a decent way to easily fetch someone's pull request branch locally?
<dimitern> rogpeppe1, it's best to add another upstream first
<rogpeppe1> dimitern: that would mean i'd need an upstream for everyone that sends a PR
<rogpeppe1> dimitern: which seems a bit like overkill
<dimitern> rogpeppe1, only if you need to collaborate on another guy's branches in his fork
<rogpeppe1> dimitern: i saw this, but it doesn't seem to correspond to reality: https://help.github.com/articles/checking-out-pull-requests-locally
<rogpeppe1> dimitern: i don't want to do that
<rogpeppe1> dimitern: i just want to fetch it and maybe run the tests
<voidspace> morning all
<dimitern> rogpeppe1, if you're using github/hub, it's as easy as hub clone user/repo dir
<rogpeppe1> voidspace: hiya
<voidspace> o/
<dimitern> voidspace, yo!
<rogpeppe1> dimitern: i don't want to clone, i want to create a new branch in an existing dir
 * rogpeppe1 wishes the hub command had a hub-specific help
<axw> rogpeppe1: https://help.github.com/articles/merging-a-pull-request ?
<dimitern> rogpeppe1, then git fetch
<dimitern> git remote add mislav git://github.com/mislav/REPO.git
<dimitern> git fetch mislav
<dimitern> rogpeppe1, ^^ that's what i'd do
<rogpeppe1> dimitern: unfortunately github doesn't actually make the name of the remote repo clear
<dimitern> rogpeppe1, naming is always user/repo
<rogpeppe1> dimitern: yeah, but the "repo" might be different if the user has renamed it
<dimitern> rogpeppe1, what?! :)
<dimitern> rogpeppe1, why is that a concern if you're fetching something now?
<rogpeppe1> dimitern: like my fork of github.com/juju/utils is github.com/rogpeppe/juju-utils
<rogpeppe1> dimitern: ok, my specific example is this: https://github.com/juju/charm/pull/4/commits
<dimitern> rogpeppe1, so? you can name your fork however you wish
<jam> (11:22:43 AM) jam: dimitern: so I'm trying to sort out where pending networks helps us. If it is just the list from the provider, can't we just list the provider when we need to?
<rogpeppe1> dimitern: where can i see the repo that the pull request is coming from?
<dimitern> rogpeppe1, if you click on the commit (a58c39a in this case) it will take you to the commit, where at the top you can see the repo
<dimitern> binary132/charm
<dimitern> forked from juju/charm
<rogpeppe1> dimitern: ah, i didn't know those hex commit numbers were links
<rogpeppe1> dimitern: thanks
<dimitern> jam, we need to know what networks are there from state
<dimitern> rogpeppe1, np :)
<dimitern> jam, 1) they don't change that often, so once we have them we don't need to ask every time we try to use a network
<rogpeppe1> dimitern: i've also found that there's a link at the bottom "You can also merge branches on the _command line_" and if you click on _command line_, you get all the useful info
<dimitern> jam, 2) it allows us to watch for changes from one place over pendingnetworks and update them from another
<dimitern> rogpeppe1, nice!
<dimitern> jam, the idea is, once we have them to implement add-network --existing <provider-id> ... or even list-networks --existing
<jam> dimitern: sure, but both of those use cases should arguably go query the provider directly rather than using possibly stale data from state
<dimitern> jam, so why store addresses when we can always get them from the provider when we need to?
<dimitern> :)
<jam> dimitern: frequency of use
<jam> dimitern: I don't see us using "unknown" networks frequently
<jam> and the use cases for tracking them seem to actually be "we should refresh before using them" which means we should just probe the provider directly.
<dimitern> jam, we will use them to detect the default private/public networks
<jam> dimitern: but aren't those public and private networks in the model, and not pending networks?
<dimitern> jam, they will be promoted to be the defaults
<dimitern> jam, all this is the result of my discussion with fwereade
<dimitern> (if i understood him correctly, that is)
<jam> dimitern: fwereade: so my thought, at least, was that we would make our best guess as to public and private and then the user could fix them from there.
<jam> which is still we have 2 concrete networks out of the box
<jam> dimitern: I think the code you wrote looks pretty good (I haven't finished reading all of it),
<jam> I'm just trying to understand the specific benefit we get from caching it.
<rogpeppe1> dimitern: ha, it's actually even easier: hub checkout https://github.com/juju/charm/pull/4
<dimitern> jam, re IsVLANTag - originally I wanted to have either a (bool, error) return - returning false, nil for 0 and true, nil for 0 < tag <= 4094, false, err otherwise
<jam> because if they really are *pending* then it isn't something I feel we should be touching except for times when we actually do need to ask the provider.
<jam> dimitern: that could be ok, though it is a rather subtle distinction it does fit the question of "is this Valid, and is this a VLAN"
<dimitern> jam, the idea eventually is to have a way to define a mapping in env config for how to translate provider-id -> juju-name for the default networks, but first we'll have cli commands to do it manually
<jam> dimitern: you still haven't shown me a case for where we need to record the provider network as something we only sort-of know about (pending). Either it is mapped in and we track it, or it is still pending and we don't care.
<jam> when you go to "list networks" we really do need to probe the provider
<dimitern> jam, ok, so I can make it return err for 0 >= tag > 4094
<jam> because they might have just added one
<dimitern> jam, it's pending, because we need the user to give it a name
<dimitern> which is descriptive for them
<dimitern> jam, i can come up with a quick-n-dirty scheme to name any network with a private CIDR range as "private#" (if more than 1)
<dimitern> jam, but fwereade didn't like that and i can concur
<jam> dimitern: so I still don't know why we need to have described a network we haven't named yet
<jam> when the user does something to map a provider ID into a known network, we'll go find the details and put it into the model
<jam> dimitern: is this for listing what networks a machine has access to?
<jam> if so, we *still* need a name for them
<jam> to display them
<jam> unknown-$PROVIDER_ID could work
<dimitern> jam, no
<jam> but then we just need a boolean/empty name field in the actual network collection
<dimitern> jam, these are potentially all the networks, not machine-specific ones
<dimitern> jam, provider-id can be anything, that's why we don't want it as part of the strictly defined juju network name
<dimitern> (unless we transform it to contain only [a-z][a-z0-9-]
<jam> dimitern: so I still go back to "what are the concrete use cases for having a pending-networks" and do we actually benefit from it. Yes we can put the data there, but we can't *use* it yet (because it doesn't have a name), and the use cases *I* can sort out should really go back to the provider on demand.
<jam> Maybe so we have a list of "these are the other networks the machine has access to, but I don't actually know what to call them yet" ?
<jam> Is it so when a machine starts we can use that list to filter out matches and ignore them?
<jam> I'm not trying to fight you, I just haven't actually come up with a useful use case, if you have one, I'm happy to hear about it.
<dimitern> jam, i understand your concern :)
<dimitern> jam, i really wish fwereade could join us on this one
<dimitern> jam, i'm starting to have doubts about the approach
<jam> dimitern: so it seems better (to me) if we just had a table of known provider networks, and some of them are as-yet-unnamed
<mattyw> morning
<dimitern> mattyw, hey
<dimitern> mattyw, something occurred to me yesterday about the --dry-run option of upgrade-juju
<dimitern> mattyw, if you're not uploading the tools with --dry-run, you won't get the same results as without --dry-run, because different tools will be selected
<mattyw> dimitern, sorry - I'm still waking up, what do you mean?
<dimitern> mattyw, well, when you're about to upgrade, a check is performed first to make sure the environ agent-versions are "stable", then when --upload-tools is specified, the tools need to be uploaded to be considered for upgrade
<dimitern> mattyw, without uploading, the effective version picked for upgrade will differ with and without --dry-run
<mattyw> dimitern, this only applies to --upload-tools?
<dimitern> mattyw, i think only then we try to upload new tools, no?
<dimitern> vladk, you've got a review on the networker PR
<dimitern> vladk, you should add a kanban card for the IsVirtual removal, despite being a trivial change btw :)
<dimitern> jam, right?
<jam> dimitern: yes
<jam> well, if it is follow up work, vs do this as a tweak and land it.
<dimitern> jam, it's kinda a follow-up, but the whole PR is about that
<mattyw> dimitern, we make a call to validate() before we print which I believe ensures we output the correct version in the not --upload-tools case
<vladk> dimitern: thanks
<jam> vladk: dimitern: so my statement is "if it is tweaking work that is going to take an afternoon and/or require another review" it is probably worthy of a kanban task.
<jam> if it is "tweak this and land" then that probably isn't a new task
<dimitern> mattyw, right, so I might be wrong then about it differing :) np, it was just a red light i needed to mention
<mattyw> dimitern, thanks for pointing it out, this is an area of the code I'm new to so any pointers on things to watch out for is greatly appreciated
<dimitern> jam, if by "tweak this and land" you refer to a review suggestion on an existing PR, yes this should be part of the work associated with that PR's card in the first place, but i find it's best to have card-per-PR usually
<mattyw> dimitern, do you think I should get someone else to do a review as well?
<dimitern> mattyw, either fwereade or wallyworld should know a great deal wrt upgrades
<mattyw> dimitern, ok great, I'll make sure I get their eyes on it as well
<dimitern> mattyw, i did some stuff around it some time ago, but it seems things have changed since :)
<fwereade> jam, dimitern: I'm totally cool with as-yet-unnamed networks -- but what are we going to use for the _id?
<fwereade> jam, dimitern: if we make sure the _id isn't connected to the name then that's great, we'll be able to rename them
<fwereade> jam, dimitern: I got the impression we weren't planning/expecting to rename networks, so keeping track of the possibilities separately seemed smarter to me
<jam> fwereade: so the other question is what are we actually using the unnamed networks for? (why put them in our local model and have to keep them up to date) just to be consistent with the other ones that we do have names for?
<dimitern> fwereade, the providerId is the _id in pendingnetworks
<dimitern> fwereade, jam's point is we don't need to store unusable networks, which are likely to change, so instead we can fetch them when we call add-network --existing
<fwereade> jam, dimitern may be in a position to correct me on this, but even if we're not *configuring* the networks on the machines I feel we still need to include them in the model so that when we *do* name the networks we don't have to rescan all the machines for whether they have access to these new networks
<jam> fwereade: so I was trying to come up with those use cases, but it means we need to be careful about how we link them
<jam> since we need a way to link them not-by-name
<jam> otherwise aren't we scanning again?
<dimitern> fwereade, ah, so here's the quirk - we get all networks, but no machines associated with them
<fwereade> dimitern, re _id: hmmmmm, this has the hallmarks of the relation id troubles all over again
<dimitern> fwereade, perhaps we should though
<dimitern> fwereade, expand on that please?
<fwereade> dimitern, take a look at the relation doc sometime
<dimitern> fwereade, we have Key as _id and a separate int Id
<fwereade> dimitern, it has Key (_id) which we use in tags, and Id (id) that we use in other places, and we dance back and forth over which to use in which place
<dimitern> fwereade, right, but that's not true for networks - we always use the juju name, not providerId
<dimitern> fwereade, except when talking to the provider - like in StartInstance
<fwereade> dimitern, and pow, we introduce weird ugly races while watching
<jam> dimitern: except if we wanted to link machines to networks that we don't-yet have names for
<fwereade> dimitern, that's the main thing with relations
<fwereade> dimitern, it is fucking stupid that we can't use the int ids when watching (say) service relations
<tasdomas> is there any support for anything similar to prerequisite branches (in LP) in github?
<fwereade> dimitern, but we literally can't do that
<jam> tasdomas: not that we've sorted out
<jam> Its something we want to bring up in a wider conversation (how to do the equivalent on git)
<jam> there are a few ideas out there
<jam> (rebase, create a PR to your prereq branch, etc)
<fwereade> dimitern, because when we get a this-doc-went-away notification, we no longer have the id available to send down the wire, because it's in the doc that just went away
<jam> fwereade: is it reasonable to go to an abstract "network id" that is just the identifier for the doc in the DB ?
<dimitern> fwereade, ok, so should we drop the pending networks altogether?
<dimitern> fwereade, instead, when we need to add them, we always give them a juju name for a given provider id
<fwereade> dimitern, ...maybe? can we shelve this until the hangout, I'm kinda hoping that the relation-addresses stuff will be relatively quick to get everyone on the same page
 * fwereade wants a ciggie
<dimitern> fwereade, sgtm
<dimitern> mattyw, axw, fwereade, hangout?
<axw> brt
<jam> menno is gone, right?
<jam> He linked a doc, but didn't share it with comments enabled.
<perrito666> morning
<voidspace> perrito666: morning
<mattyw> fwereade, dimitern can we make it 15 minute break, just need to run a quick errand, will ping when I'm back
<dimitern> mattyw, sure, we're there when you're back
<voidspace> perrito666: so the main entry point to your new backup work is the BackUp function (which I think should be called Backup by the way)
<voidspace> perrito666: this returns the backup filename and an sha filename
<voidspace> perrito666: we're changing the backup api to be a synchronous one that returns the backup file
<voidspace> perrito666: so we can't return the sha file *as well*
<wallyworld> mgz: 1:1 time
<voidspace> perrito666: maybe we could return the sha (instead of writing a file) and set it as a header on the response
<perrito666> voidspace: could be, I was writing into a file because we had an async api
<perrito666> I could just return the sha
<voidspace> perrito666: yep, I assumed that was the case
<voidspace> perrito666: I'm pretty sure natefinch has agreed to switch to a synchronous one
<perrito666> yup he did
<perrito666> ok, you can assume the sha string is returned there instead of the filename, I'll change it
<voidspace> perrito666: cool - thanks
<voidspace> shame I can't use your branch as a dependent branch for my work :-)
<perrito666> you "can" but the workflow is not pretty
<voidspace> perrito666: and your changes show up as a diff in my branch
<perrito666> ah, true
<voidspace> perrito666: I can add your branch as a remote to my repo and then keep merging
<perrito666> voidspace: or I could have forked from you repo and then pull request to it instead of to upstream
<perrito666> but that is quite a pita
<voidspace> perrito666: right, and then it's harder to just merge your work to trunk
<natefinch> voidspace: I talked with fwereade. he suggested doing both synchronous and saving the backup to wallyworld's gridfs storage that goes across HA servers.
<perrito666> I think that is sort of the kernel way, people send patches to each maintainer who integrates the thing
<voidspace> natefinch: so we have a storage story
<natefinch> voidspace: I think we can do the synchronous approach first, and add in the save to gridfs later (should be trivial)
<natefinch> voidspace: sounds like it
<voidspace> natefinch: is that "done" or "in progress"?
<voidspace> right
<natefinch> voidspace:  in progress
<voidspace> sync is easy
<voidspace> adding to storage (and adding async api) should also be easy
<voidspace> so no problem
<voidspace> natefinch: did you manage to have a look at what I've done so far?
<voidspace> natefinch: should I be using errors.Errorf from juju/errors to be creating the errors?
<voidspace> natefinch: also, should I send the sha of the backup file as a response header?
<natefinch> voidspace: yes, I don't think so, yes
<voidspace> heh, cool - thanks
<natefinch> voidspace: I'll do a proper review, but overall it looks good
<voidspace> natefinch: sorry to bombard you with questions
<voidspace> sure, it's not "done" yet - so long as you're happy with the basic direction
<natefinch> voidspace: I don't think we're quite ready to use the errors package.  Last I heard it was getting tweaked but that was a long time ago and I haven't heard anything since.
<natefinch> (where a long time is a couple weeks)
<voidspace> ok, cool - there was a review comment on perrito666's branch suggesting using it
<perrito666> yup menn0 suggested I change fmt.Errorf for errors.Errorf or errors.Annotate/Annotatef
<natefinch> perrito666: I'd hold off on that for now
<perrito666> we should implement an EOR marker to know when someone has ended a review :p I just wait until the mails stop for a moment, but you never know if the other person is still reading or just finished
<perrito666> :p
<natefinch> That's a really good idea, actually.
<natefinch> post it on the list
 * perrito666 writes
<wallyworld> mgz: around?
<voidspace> natefinch: ping
<perrito666> natefinch: sent, let the bikeshedding begin
<natefinch> voidspace: what's up?
<natefinch> perrito666: cool
<mattyw> dimitern, fwereade, could we organise some time probably tomorrow to start break the open port stuff into tasks?
<dimitern> mattyw, i'm available, we can arrange it
<mattyw> dimitern, now?
<dimitern> mattyw, we're just in our standup now, but it should be short
<mattyw> dimitern, ok, just ping when you are ready
<jam> fwereade: just finishing up, will be at team leads in 1m
<natefinch> oops me too
<jam> natefinch: :)
<dimitern> mattyw, hey, just finished, but i'll need a 15m break as well (meetings for 2h+ straight so far - need to eat:)
<mattyw> dimitern, no problem, I've only just managed to open vim myself, can we start later this afternoon?
<dimitern> mattyw, sure, even better
<dimitern> mattyw, if you can come up with a rough task list breakdown by then, as you see it, we'll polish it quickly
<mattyw> dimitern, sounds good
<voidspace> perrito666: to properly cleanup after a backup, do *I* need to delete the tempdir - or will just deleting the backup file do?
<mattyw> has anyone seen these errors before? I get the same kind of failure in a number of tests http://paste.ubuntu.com/7633251/
<voidspace> mattyw: hmm... no
<voidspace> mattyw: is that trunk?
<voidspace> natefinch: is standup at "normal" time today?
<wwitzel3> natefinch: time for a hangout when you're done with the leads meeting?
<natefinch> voidspace: yep
<mattyw> voidspace, yep
<wwitzel3> voidspace: yes, it should be
<mattyw> voidspace, yes it is trunk
<voidspace> wwitzel3: morning!
<wwitzel3> voidspace: hi ya :)
<mattyw> updating mongodb did the job
<voidspace> client.call calls state.Call which calls conn.Call which delegates to conn.Go which creates a Call
<wwitzel3> obviously
<voidspace> :-)
<voidspace> and on that note
 * voidspace lunches
<natefinch> wwitzel3:  I have to help with the kids, can it wait anhour?
<wwitzel3> natefinch: yep, np
<perrito666> voidspace: I should delete it
<perrito666> I will
<hazmat> why does juju download tools from the internet for every machine, instead of using a consistent version / tools already in the env
<perrito666> rogpeppe1: I noticed the error on the test with the checksums after I pushed it. I am doing something wrong with the sum because it works
<perrito666> I mean the test passes every time, which means I am doing something wrong
<rogpeppe1> perrito666: yes, you're doing a checksum on no bytes at all
<rogpeppe1> perrito666: (as i think i mentioned in another comment)
<bodie_> morning all
<perrito666> rogpeppe1: well making a pull request is like sending a letter, you only realize you missed something once you have done it
<bodie_> so, apparently juju is bootstrapping on my machine at startup?  when I try to control it, I get this: ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: dial tcp 10.0.3.1:17070: connection refused
<bodie_> it tests fine and all that, I never use it outside of testing, so I had no idea I was running it, heh
<bodie_> anyway, I also can't kill the processes (even with kill -9, they just restart on a new pid) and I'm just not sure what to do with this
<bodie_> is this a known thing?
<rogpeppe1> perrito666: definitely!
<rogpeppe1> perrito666: that's why we have code reviews...
<rogpeppe1> perrito666: and i missed it the first time i looked through
<perrito666> rogpeppe1: I just noticed it after I pushed... are we supposed to code review ourselves? because I could have totally added a few comments myself when looking at the diff
<rogpeppe1> perrito666: if i notice something after pushing and it's going to take a little while to fix, i'll sometimes comment on my own review
<rogpeppe1> perrito666: i think it can be useful
<rogpeppe1> perrito666: it's also possible (i think) to retract the PR if you want
<rogpeppe1> perrito666: although you probably don't want to do that after others have commented on it
<perrito666> rogpeppe1: well I see more value on people taking a look before I continue with it
<perrito666> or while I do
<rogpeppe1> perrito666: yeah, that too
<rogpeppe1> perrito666: i hope my comments were helpful
<perrito666> they were actually, I read a few, but I was waiting until you were done or the page gets a bit annoying with the notifications
<rogpeppe1> perrito666: most were trivial stuff - just usual conventions
<rogpeppe1> perrito666: yeah, i really miss Rietveld for that
<rogpeppe1> perrito666: i'm finished now, BTW
<rogpeppe1> perrito666: usually i'll try to finish the review with a general comment that's not attached to a line of code
<perrito666> I saw that
<rogpeppe1> perrito666: ha, just saw your juju-dev post
<perrito666> I will say this in favor of github, the diffs look really stylish
<rogpeppe1> perrito666: i find it difficult to see small changes, as nothing smaller than a line is highlighted
<perrito666> I did not say they were useful, just stylish
<bodie_> there we go
<bodie_> looks like my path was out of order
<perrito666> rogpeppe1: interesting, I was not aware you could do that with multiwriter
<rogpeppe1> perrito666: do what?
<perrito666> the way you included gzuo
<perrito666> gipz
<perrito666> aghh gzip
<wallyworld_> fwereade: heya, that txn branch has been updated. i don't think anyone else has looked at it. but i need to land it. all the tests pass. any chance you could take a look at my updates to it sometime during your day?
<rogpeppe1> perrito666: that's not really a multiwriter thing, but the io.Writer interface is very nice and flexible
<rogpeppe1> perrito666: i particularly like the way that it's easy to use conditional defers
<natefinch> wwitzel3: is 15 minutes enough time, or do you want to wait until after the standup?
<fwereade> wallyworld_, sure, will do
<wwitzel3> natefinch: that's enough
<wallyworld_> awesome thanks
<natefinch> ericsnow: standup?
<voidspace> My connection is back - but I have 344kbits downstream!
<voidspace> natefinch: so not enough to join the hangup I don't think
<voidspace> I'm in, but terrible quality
<sinzui> jamespage, can you look at bug 1329256? I don't know if a dep like distro-info is missing
<_mup_> Bug #1329256: juju-core on openstack failing with trusty <juju-core:New> <https://launchpad.net/bugs/1329256>
<wwitzel3> fwereade: so calling StartSync on only the presence pinger, doesn't work
<fwereade> wwitzel3, ha
<fwereade> wwitzel3, now that is interesting :/
<wwitzel3> fwereade: yeah :/
<fwereade> wwitzel3, Sync on the presence pinger?
<natefinch> fwereade:in a standup, almost done
<wwitzel3> fwereade: Sync instead of StartSync did the trick
<fwereade> wwitzel3, awesome
<wwitzel3> fwereade: well I addressed all the implementation comments from your review at this point, just been futzing around with testing it all.
<fwereade> wwitzel3, great
<wwitzel3> fwereade: thanks again for the hand holding :)
<fwereade> wwitzel3, a pleasure, thank you for doing all the actual work ;)
<bodie_> fwereade, what's your handle on github?
<bodie_> I'
<bodie_> I'll take an answer to that from anyone :P
<jcw4> "fwereade"
<natefinch> bodie_: https://github.com/orgs/juju/members
<bodie_> hmm, it's not offering a completion for fwereade, but it is for natefinch
<natefinch> wwitzel3: you need to set yourself to public here: https://github.com/orgs/juju/members?page=2
<fwereade> bodie_, heyhey, I'm pretty sure I'm fwereade everywhere, and I'm sure I set myself public
<jcw4> I thought it only offered completion for folks already on the thread
<fwereade> cmars, reviewed, coming along nicely
<fwereade> cmars, let me know if anything's unclear :)
<wwitzel3> natefinch: done, was that sent out in an email? I must have missed it :/
<cmars> fwereade, thanks for the feedback
<bodie_> jcw4, I think it's org-wide
<bodie_> could be mistaken about that
<natefinch> wwitzel3: not sure
<fwereade> cmars, btw, heads-up re: https://github.com/juju/juju/pull/50 which I expect to land soon -- will move the transaction hooks stuff to a different package, should be a trivial merge but will affect you
 * fwereade also issues a general call for help re https://github.com/juju/juju/pull/50 -- I'll be going through it with all the focus I can muster, but I'd really like it if someone else also audited the txn changes
<fwereade> natefinch, since you seem to be on call: ^^
<fwereade> natefinch, and I'm well aware you're not super-familiar with state so feel free to talk to me about it
<natefinch> fwereade: I'm on call?
<jcw4> what does this idiom mean:  txnRunner.testHooks = make(chan([]TestHook),1); txnRunner.testHooks <- nil;
<jcw4> specifically.. the last statement?
<jcw4> context ^^^: its in NewRunner() in state/txn/txn.go
<voidspace> rogpeppe1: ping
<jcw4> oh, I see, I think it's filling the channel with nil so that reading it won't block if there are no TestHooks
<jcw4> (filling the channel buffer)
<bodie_> https://github.com/juju/charm/pull/4
<bodie_> should be shipshape
<bodie_> oops, failing a test
<dimitern> mattyw, the call is still on, right?
<mattyw> dimitern, sorry - can we do it tomorrow instead?
<dimitern> mattyw, even better :) np
<mattyw> dimitern, I've not managed to get anything down yet
<mattyw> dimitern, I'll set something up for the morning before we get too busy
<bodie_> there we go
<voidspace> natefinch: ping
<dimitern> mattyw, ok, but please after 7 UTC :)
<mattyw> dimitern, as far as I know nothing exists before 7 UTC
<fwereade> jcw4, exactly right, yes
<jcw4> fwereade: thanks... PR 50 is heavy going, but looks great to me so far
<jcw4> basically a nice abstraction of the transaction retry logic I'd implemented under your direction before
<fwereade> jcw4, yeah, it's making me really happy :)
<dimitern> mattyw, :D
<fwereade> jcw4, but it is indeed heavy going
<fwereade> jcw4, a lot to check
<jcw4> I'll at least try to read the new code and the resulting changes to code I wrote before
<rogpeppe1> voidspace: pong
<voidspace> rogpeppe1: hey, got a minute?
<rogpeppe1> voidspace: sure
 * perrito666 just tried to grep on a piece of paper... my brain is jello
<rogpeppe1> perrito666: lol
<rogpeppe1> fwereade: do you know if we are able to mandate go 1.2 ?
<ericsnow> perrito666: wishful thinking :)
<voidspace> natefinch: unping (just in case)
<mgz> rogpeppe1: we need to for trunk now
<rogpeppe1> mgz: cool
<voidspace> great
<rogpeppe1> that means we can get rid of the NonValidatingHTTPClient security hole
<jcw4> if we already import github.com/juju/errors should we always deprecate fmt.Errorf in favor of errors.Errorf?
<fwereade> rogpeppe1, I'm not sure... I think that we should as soon as we can, but I can't remember what was stopping  us
<mgz> fwereade: largely toolchain things I believe, which are now resolved
<voidspace> state/api/client.go
<voidspace> // Due to issues with go 1.1.2, fixed later, we cannot use a
<voidspace>         // regular TLS client with the CACert here, because we get "x509:
<voidspace>         // cannot validate certificate for 127.0.0.1 because it doesn't
<voidspace>         // contain any IP SANs". Once we use a later go version, this
<voidspace>         // should be changed to connect to the API server with a regular
<voidspace>         // HTTP+TLS enabled client, using the CACert (possily cached, like
<voidspace>         // the tag and password) passed in api.Open()'s info argument.
<voidspace>         r
<bodie_> urk
<rogpeppe1> voidspace: i suggest that you write a method on Client that returns an http client suitable for connecting to the API. it would have only a single allowed root cert (the CACert)
<voidspace> rogpeppe1: ok, I'll look at that
<voidspace> rogpeppe1: I may request some assistance on getting the certs right
<rogpeppe1> voidspace: take a look at the setUpWebsocket function for inspiration
<voidspace> rogpeppe1: cool, thanks
<natefinch> fwereade: the problem with 1.2 is getting it into precise
<natefinch> fwereade: although if we wait a week or two maybe we can skip right to 1.3
<fwereade> natefinch, that'd reduce the effort ;p
<natefinch> fwereade: btw, I think this says I'm not the on call reviewer?  Or is the spreadsheet not what we're going by? https://docs.google.com/a/canonical.com/spreadsheets/d/1iQLLOWrjzxddm5VhYWYi0-2k3xI6wTMlpkvnVNJCYGY/edit#gid=0
<jcw4> sounds like fwereade was trying to pass the buck natefinch
<voidspace> natefinch: just FYI, I finally did my vegas expenses...
<fwereade> natefinch, sorry!
<fwereade> natefinch, you're absolutely right, I think I had it open from a few days before or something :/
<natefinch> voidspace: cool, I'll check them
<rogpeppe1> anyone know why CmdSuite.TestHttpTransport is there?
<rogpeppe1> (in the cmd package)
<rogpeppe1> it doesn't look like it's testing anything to do with the cmd package
<perrito666> natefinch:
<perrito666> unping
<natefinch> perrito666: howdyt
<alexisb> natefinch, be there soon
<perrito666> natefinch: voidspace and I had a doubt but we ended up peer-solving it
<natefinch> alexisb: me too
<natefinch> perrito666: achievement unlocked!
<voidspace> heh
<mgz> sinzui: what's our solution for a newer go version on precise?
<sinzui> mgz: two phases
<sinzui> mgz, 1. I backported trusty's 1.2 to saucy and precise last month and released package built on it. CI adds the ~juju/experimental PPA to saucy and trusty testers/builders. CI and Users don't see issues with 1.2
<sinzui> mgz, 2. The server team, Robbie or James will backport that same package to ctools for precise
<mgz> sinzui: okay, have we bugged them about that yet?
<sinzui> mgz, saucy is in a grey area of not getting the needed golang from Ubuntu.
<sinzui> mgz, an email was sent without response. They are honestly working with 1.18.4 and ubuntu. That is their top priority
<mgz> k
<mgz> means we can't bot on precise for now, right?
<sinzui> mgz, ??
<mgz> sinzui: I can't easily set up a machine which *builds* juju on precise
<sinzui> mgz, you mean Ubuntu cannot build 1.19.4+ on precise or saucy without the golang update?
<mgz> yup
<sinzui> mgz, good news, golang is queued for ctools
<sinzui> https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-next
<mgz> ace
<voidspace> I have to EOD a bit early
<voidspace> going to London to meetup with some ex-colleagues
<voidspace> I'll try and persuade them that they want to work on juju
<voidspace> see you all tomorrow folks
<sinzui> mgz, saucy has 35 days left, I am confident that Ubuntu will delay the backport of juju 1.20.0 to saucy's EOL to avoid the golang issue
<rogpeppe1> if anyone's around and willing, there's a PR here that factors cmd out of core. i haven't updated core to use it yet. https://github.com/juju/cmd/pull/1
<jcw4> rogpeppe1: don't know if you want more than an LGTM from me :)  Looks pretty mechanical except for the Version change and the nice DefaultConfig refactor
<jcw4> I wanted to comment but couldn't find anything to comment on but +1 :)
<natefinch> rogpeppe1: is that the right license?  I thought we were using canonical's special static-linking-ok LGPL
<perrito666> we should start adding "rant git" as part of our agendas :p
<bodie_> git rant -vvv
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1329425
<_mup_> Bug #1329425: Fatal out of memory error when bootstrapping on Azure <juju-core:New> <https://launchpad.net/bugs/1329425>
<jcastro> anyone see this before?
<natefinch> jcastro: nope...  I'd be interested to know if it always fails at approximately the same spot, or if it looks random
<jcastro> found the issue
<natefinch> oh yeah?
<jcastro> sinzui figured it out, I was specifying the .cer file, not the .pem
<jcastro> I am making a note for the docs now
<sinzui> I just updated the bug.
<natefinch> jcastro: oh yeah... the whole .cer .pem BS for azure. Feh.
<jcastro> is it possible to tell which one is which so we can give a smarter error?
<jcastro> "Wrong certificate, idiot" would be nicer I think.
<natefinch> jcastro: they're vastly different formats.  It should be easy to differentiate
<jcastro> natefinch, also: https://bugs.launchpad.net/juju-quickstart/+bug/1329449
<_mup_> Bug #1329449: Don't allow a .cer file to be used in Azure <juju-quickstart:New> <https://launchpad.net/bugs/1329449>
<jcastro> that oughta help too
<natefinch> it's very interesting that it caused an out of memory error... I wonder how that happened
<natefinch> neither cert is very large, but that error was in the YAML code, I wonder if it's a bug in yaml when trying to write binary to a yaml file or something
<sebas5384> Feature request: https://github.com/juju/juju/issues/87
<leftyfb> can someone help me with this issue? https://pastebin.canonical.com/111624/
<natefinch> leftyfb: can you post the contents of /home/leftyfb/.juju/local/cloud-init-output.log   ?  There's probably more useful information in there
<leftyfb> https://pastebin.canonical.com/111626/
<perrito666> can I do an gc.Assert() that outputs a custom error if the assertion fails?
<perrito666> in a test
<natefinch> perrito666: no, you want gc.Log to output custom stuff
<perrito666> ah great
<natefinch> leftyfb: hmm... less than useful
<leftyfb> 1329480
<leftyfb> https://bugs.launchpad.net/juju-core/+bug/1329480
<_mup_> Bug #1329480: ERROR state/api: websocket.Dial wss://10.0.2.1:17070/: dial tcp 10.0.2.1:17070: connection refused <juju-core:New> <https://launchpad.net/bugs/1329480>
<natefinch> leftyfb: hmm... weird, why is it running 1.18.1 .... I think 1.18.4 is the most recent stable
<natefinch> sinzui: ^^
<leftyfb> natefinch: I'm running trusty and installed from main repo's
<leftyfb> natefinch: I did have the stable ppa before following instructions from ubuntu.com, but ran into this issue and decided to limit variables
<natefinch> oh ok
<natefinch> 1.18.4 is probably a much better bet.  We fixed a bunch of bugs in those .3
<leftyfb> ok, I can install from ppa, though I was getting the same or similar issues
<leftyfb> btw, I go through this process when starting over: https://pastebin.canonical.com/111628/
<natefinch> leftyfb: killing any lxc related processes is often helpful... and... I hate to say it, but rebooting often helps with the local provider when it gets wedged
<leftyfb> natefinch: I do that as well
<leftyfb> natefinch: for some reason reinstalling lxc doesn't start the dnsmasq process properly.... can only seem to get it going after a reboot
<leftyfb> tried sudo service lxc-net start  ... says it starts but it doesn't
<bodie_> https://github.com/juju/charm/pull/4
<leftyfb> bodie_: was that for me?
<bodie_> sorry, no
<bodie_> just could use a review on that when someone gets a chance :)
<leftyfb> natefinch: https://pastebin.canonical.com/111630/     after installing 1.18.4 from ppa:juju/stable and trying to bootstrap
<natefinch> leftyfb: that all looks good.  Is it working or?
<leftyfb> negative
<leftyfb> #leftyfb@blanchard[0]:~$ juju status
<leftyfb> ERROR state/api: websocket.Dial wss://10.0.2.1:17070/: dial tcp 10.0.2.1:17070: connection refused
<natefinch> leftyfb: can you paste /home/leftyfb/.juju/local/cloud-init-output.log again?
<leftyfb> natefinch: that's what the pastebin above is
<natefinch> oh right... uh, huh
<thumper> morning
<natefinch> morning thumper
<thumper> hi natefinch
<wwitzel3> morning thumper
<thumper> hi wwitzel3
<thumper> I hate friday mornings
<thumper> so tired
<wwitzel3> you could always wake up later?
<wwitzel3> :D
<leftyfb> natefinch: does this help at all:
<leftyfb> 2014-06-12 20:30:10 ERROR juju.cmd supercommand.go:305 symlink /home/leftyfb/.juju/local/tools/machine-0/jujud /usr/local/bin/juju-run: no such file or directory
<leftyfb> 2014-06-12 20:30:10 INFO juju.cmd supercommand.go:302 running juju-1.18.4.1-trusty-amd64 [gc]
<leftyfb> /home/leftyfb/.juju/local/tools/machine-0/jujud does exist
<leftyfb> /usr/local/bin/juju-run does not exist
<natefinch> weird.... it sounds like  /home/leftyfb/.juju/local/tools/machine-0/jujud didn't exist
<leftyfb> it does
<leftyfb> I just created the link and trying to bootstrap again
<natefinch> maybe it didn't exist when that line was running
<leftyfb> hm
<leftyfb> that was it
<leftyfb> I can get status now
<natefinch> thumper: any idea why leftyfb's jujud wouldn't have existed when we were trying to make the juju-run symlink?
<leftyfb> so after all my purging of packages and deleting of files, rebooting and reinstalling, juju isn't creating that link
<leftyfb> you know what didn't exist?
<leftyfb>  /usr/local/bin
<leftyfb> I had to create that
<natefinch> heh
<natefinch> interesting
<natefinch> can you create a bug for that?
<natefinch> I gotta run
<waigani> fwereade: are you still up?
<waigani> thumper or fwereade: could one of you review this please: https://github.com/juju/juju/pull/22
<thumper> waigani: yeah, in the middle of a thought stream right now, will look soon
<menn0> perrito666: ping?
<perrito666> menn0: oibg
<perrito666> pong
<menn0> :)
<perrito666> sorry I am working in a different kb today
<menn0> off by one on your right hand :)
<menn0> perrito666: I just wanted to confirm a few things with you regarding the upcoming backup system
<perrito666> menn0: shoot
<waigani> thumper: thanks, I've got a mountain of email to get through plus juju info so no rush
<menn0> perrito666: did I get it right that backups will be stored server side with the juju client growing some commands to list, download and remove them?
<perrito666> menn0: while you were sleeping we changed that a bit
<perrito666> and by a bit I mean completely
<menn0> ok :)
<perrito666> menn0: now backup is no longer async
<menn0> perrito666: so backups are downloaded to the client immediately without the option of storage server side?
<perrito666> menn0: that is right
<perrito666> although you should be able to use the backup function which is rather simple
<perrito666> and that will put a backup wherever you ask it to
<menn0> perrito666: ok that's good to know
<menn0> it's a pity that things have changed because the previous proposal worked rather nicely with schema migrations but I can work with the new approach too
 * menn0 goes to update the schema migrations design document
<menn0> perrito666: at a guess, how far away is the backup functionality from landing?
<perrito666> menn0: well I am finishing the tests to re-propose it
<menn0> perrito666: that's the core backup piece right? The client and API changes are still to come?
<menn0> (although I probably don't care much about those now...)
<perrito666> yes, that you should ping voidspace for
<menn0> perrito666: thanks
<menn0> review please: https://github.com/juju/juju/pull/84 (thumper perhaps?)
<perrito666> menn0: sorry if you get a ton of spam, I just commented on the review you did.. a lot
<wallyworld_> fwereade: not sure if you're still awake/coherent - i've addressed all the txn package changes. one main difference remains - i kept the various Refresh() calls to preserve existing behaviour. if that's ok with you, i'll look to land
<menn0> perrito666: no problems. looking now.
 * menn0 misses reitveld
<perrito666> menn0: let me know when you go to sleep again so I can change backup's arch again
<perrito666> :p
<menn0> perrito666: umm sure ;-)
 * perrito666 makes backup now require port knocking
<menn0> perrito666: screw you ;-p
<perrito666> lol
<menn0> perrito666: review done. just tiny things. you probably want to wait to hear back from the others
<perrito666> menn0: yeah I am EOD
<menn0> perrito666: cool, TTYL
<perrito666> but since I have no life I stay on irc ;p
<waigani> thumper: NewUserManagerAPI already only allows client connections to use it: line 41: if !authorizer.AuthClient()
#juju-dev 2014-06-13
<davecheney> thumper: menn0 waigani https://github.com/juju/names/pull/4
<menn0> davecheney: I'm done
<davecheney> menn0: ta
<davecheney> just responding now
<davecheney> won't take long
<davecheney> changes to juju/juju are surprisingly minimal
<davecheney> https://github.com/juju/juju
<davecheney> gah
<davecheney> https://github.com/juju/juju/pull/88/files
<davecheney> if you have time to review those i can get stuck into the final change
<axw> wallyworld_: need to restart, back for 1:1 in a minute
<wallyworld_> ok
<axw> wallyworld_: mind if we defer for 10 mins or so?
<wallyworld_> sure
<axw> wallyworld_: ready when you are
<davecheney> axw: http://juju-ci.vapour.ws:8080/job/github-merge-juju/118/console
<davecheney> ^ this test failed, but the bot marked it as a success
<axw> wallyworld_: sorry did I cut you off?
<axw> sounded like you were saying something when I closed the window
<wallyworld_> not at all :-)
<axw> cool
<wallyworld_> nah, i have to learn to shut up more
<wallyworld_> i talk too much
<thumper> wallyworld_: what? no.
<wallyworld_> go away
 * axw grrs at his graphics stack
<axw> when I change res, sometimes the screen craps itself and flickers
<axw> gotta restart...
<davecheney> ERROR: Failed to merge: {u'documentation_url': u'https://developer.github.com/v3/pulls/#merge-a-pull-request-merge-button', u'message': u'Pull Request is not mergeable'}
<davecheney> + /home/ubuntu/jenkins-github-lander/bin/lander-merge-result --ini /home/ubuntu/jenkins-github-lander/development.ini '--failure=Merging failed' --pr=88 --job-name=github-merge-juju --build-number=119
<davecheney> https://api.github.com/repos/juju/juju/issues/comments/45967167
<davecheney> ++ date
<davecheney> + echo Finished: Fri Jun 13 01:27:54 UTC 2014
<davecheney> Finished: Fri Jun 13 01:27:54 UTC 2014
<davecheney> + exit 0
<davecheney> Description set: davecheney 105-introduce-tags-type-iii
<davecheney> Finished: SUCCESS
<davecheney> failed: SUCCESS
<davecheney> + bash scripts/pre-push.bash
<davecheney> mongo/prealloc.go:141: missing argument for Debugf verb %q: need 1, have 0
<davecheney> + bzr whoami 'Jenkins bot'
<davecheney> how come this didn't fail the build
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1329578
<_mup_> Bug #1329578: build: pre-build.bash failures do not fail the build <juju-core:New> <https://launchpad.net/bugs/1329578>
<axw> davecheney: sorry missed your message before. it failed once, then passed
<axw> huh
<axw> didn't see that
<axw> how come my pre-push script didn't pick it up ...
<axw> eh, because apparently I never set it. oops
<axw> davecheney: apparently "go tool vet" returns rc 0 even if it finds that
<davecheney> :(
<axw> perhaps we should check for empty output?
<davecheney> how are you invoking it
<davecheney> go vet $PKG
<davecheney> gives the right rc
<davecheney> but we have to use go tool vet to tweak the flags
<axw> davecheney: it's just a copy of the old .lbox.check, it calls "go tool vet" directly because it wants to set the print funcs
<davecheney> that is bizarre
<davecheney> how does go vet know that go tool vet failed
<axw> nfi
<axw> oh what
<axw> davecheney: if you do "go tool vet ." it does something different to "go tool vet *.go"
<davecheney> :crying:
<thumper> axw: haha
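axw's suggestion to check for empty output can be captured as a tiny predicate: fail the build on either a non-zero exit code or any printed diagnostics, since "go tool vet *.go" could print findings and still exit 0. A sketch with illustrative names, not the actual CI script:

```go
package main

import (
	"fmt"
	"strings"
)

// vetFailed decides whether a vet run should fail the build: any
// non-empty output counts as a failure, not just a non-zero exit
// code. Illustrative helper, not juju's pre-push script.
func vetFailed(exitCode int, output string) bool {
	return exitCode != 0 || strings.TrimSpace(output) != ""
}

func main() {
	fmt.Println(vetFailed(0, ""))                                            // clean run passes
	fmt.Println(vetFailed(0, "mongo/prealloc.go:141: missing argument ...")) // rc 0 but noisy: fail
}
```

With this rule, the Debugf diagnostic above would have failed the build even though vet returned rc 0.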
<davecheney> ok  github.com/juju/juju/environs/sync  143.376s
<davecheney> ^ why does this test take so long to run ?
<wwitzel3> just a mere 143 seconds
<davecheney> it must be one of the only tests that _doesn't_ spin up several mongodb's
<wwitzel3> oh I see what you did there
<wwitzel3> just to be clear, I have nothing helpful to add .. just peanut gallery
<davecheney> http://juju-ci.vapour.ws:8080/job/github-merge-juju/121/console
<davecheney> yay, i crashed mongo
<davecheney> replica sets, adding instability since 2012
<axw> actually looks related to SSL from here
<axw> maybe both
<waigani> thumper: sweet! thanks
<waigani> thumper: I also thought about the DateCreated AND LastConnection being mockable via the user factory
<davecheney> axw: the results of go test -p 1 ./...
<davecheney> appear to be ignored by the bot ?
<davecheney> is that correct ?
<axw> davecheney: erm, I don't think so? lemme see...
<axw> davecheney: the script just does "go test ./... || go test -p 2 ./..."
<davecheney> so if the build fails
<davecheney> try again faster ?
<axw> no, try again slower
<axw> "go test ./..." uses #CPU, which is 4
<davecheney> oh, indeed
<davecheney> ok
<davecheney> axw: where is the script for the build
<davecheney> i want to turn off all the -v style output
<davecheney> ie, we don't need to spend 2,000 lines on tar xvfz
<axw> davecheney: embedded in jenkins
<davecheney> i don't understand why your build failed
<davecheney> the branch applied and the build worked
<davecheney> but then when the same merge is done for real
<davecheney> it fails ?
<axw> GitHub flaking
<axw> sometimes the merge just mysteriously fails, then you retry and it works
<axw> despite the fact that the lander managed to merge the branches fine
<axw> locally I mean
<davecheney> exactly
<davecheney> o_O!
<davecheney> its happened twice this morning
<davecheney> i don't accept that GitHub is that flaky
<axw> davecheney: I've taken out the -v from tar extraction
<davecheney> i think it's a procedural error
<davecheney> axw: thank you
<axw> entirely possible, I'm not really familiar with the code that does the merge.. .will take a look and see if it looks suss
 * axw shrugs
<axw> it's just doing a PUT to the merge URL
<davecheney> so, there must be something different about the way we make the local branch and apply the PR
<davecheney> and the way that github is trying to merge the PR back onto master
<axw> mgz and I briefly discussed just pushing the merge directly
<axw> mgz pointed out that we could then "go fmt" the code before landing
<axw> we could also add coverage data to the repo in that way
<davecheney> +1 tasty
<axw> github and/or jenkins must hate me
 * thumper EOWs
<fwereade> wallyworld_, couple more comments on the PR -- still not convinced by those refreshes
<wallyworld_> fwereade: do you agree that they were called previously?
<fwereade> wallyworld_, no
<fwereade> wallyworld_, I think they were only called on ErrAborted
<fwereade> wallyworld_, the usual pattern is if err := runtxn(); err != ErrAborted { return err }
<wallyworld_> fwereade: oh, hang on, maybe i'm being totally stupid
<wallyworld_> sigh, i think i am
<wallyworld_> ffs
<fwereade> wallyworld_, no worries, those constructs are a bit tangled
<wallyworld_> sorry
 * wallyworld_ goes to kick the dog
<fwereade> wallyworld_, it just means the code can get a bit simpler
 * fwereade feels like an accessory to animal cruelty
<wallyworld_> yeah, still doesn't absolve my stupidity
<fwereade> wallyworld_, oh bollocks
<fwereade> wallyworld_, the benefit of factoring that package out absolves far worse
<wallyworld_> i am looking forward to being able to use it
<wallyworld_> fwereade: was the bollocks for a bad reason?
<fwereade> wallyworld_, I was expostulating violently at you beating yourself up for imo inadequate reasons
<wallyworld_> well, fsvo inadequate :-)
<wallyworld_> thanks for the review though, was a bigun
<fwereade> wallyworld_, quickly searching through I think we're actually behaving noticeably better now because all the refreshes seem to be in `if attempt != 0` branches now
<fwereade> wallyworld_, (once the others are dropped anyway)
<wallyworld_> yup
<fwereade> wallyworld_, one quick thought (don't do it now)
<fwereade> wallyworld_, we're not really using attempt, just attempt != 0
<wallyworld_> um, i think there's a place where we are
<fwereade> wallyworld_, gut feel on (firstAttempt bool) vs (attempt int)?
<fwereade> wallyworld_, ah ok then
<wallyworld_> attempt ==1 is somewhere
<wallyworld_> only one place
<fwereade> wallyworld_, must have missed it
<fwereade> wallyworld_, disregard :)
<wallyworld_> ok :-)
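The construct fwereade quotes -- `if err := runtxn(); err != ErrAborted { return err }` -- wrapped in a loop that refreshes state on every attempt after the first, can be sketched as follows (a toy stand-in, not juju's actual txn runner):

```go
package main

import (
	"errors"
	"fmt"
)

// errAborted stands in for mgo/txn's ErrAborted.
var errAborted = errors.New("transaction aborted")

// run sketches the retry pattern under discussion: the first attempt
// (attempt == 0) trusts in-memory state; later attempts would Refresh
// the document before rebuilding the ops.
func run(maxAttempts int, runTxn func(attempt int) error) error {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err := runTxn(attempt); err != errAborted {
			return err // nil on success, or a genuine failure
		}
		// ErrAborted: an assertion failed, so loop and retry with fresh state
	}
	return errAborted
}

func main() {
	calls := 0
	err := run(3, func(attempt int) error {
		calls++
		if attempt == 0 {
			return errAborted // simulate contention on the first try
		}
		return nil
	})
	fmt.Println(err, calls) // <nil> 2
}
```

This is also where the `attempt != 0` vs `firstAttempt bool` question lands: the loop only needs to distinguish "first try" from "retry" unless some caller genuinely inspects the count.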
<wallyworld_> fwereade: i'm currently fixing a nasty goose json bug which fucks up our floating ip usage for hp cloud. so i'll land soon
<fwereade> wallyworld_, sweet
<wallyworld_> and i got a meeting soonish as well
<fwereade> wallyworld_, no rush
<wallyworld_> yeah, i'm the only one who needs it right now :-)
<fwereade> wallyworld_, I'm just happily imagining all the nice stuff we'll be able to do more easily in future
<wallyworld_> yep
<fwereade> wallyworld_, backoff from failed txns
<wallyworld_> like internal storage
<wallyworld_> yep, that too
<fwereade> wallyworld_, even a txn type so we can detect and reject ops for the same doc -- or at least combine them all
<wallyworld_> would be nice
<fwereade> wallyworld_, not to mention cleanly adding metrics like txn spread, time to execute, number of retries
<wallyworld_> indeed
<fwereade> axw, ping
<axw> fwereade: pong
<fwereade> axw, maas 1.5 supports zones, we should do that too -- did I miss it?
<axw> fwereade: just haven't gotten to it yet
<axw> wasn't sure if it was ranked as important yet
<fwereade> axw, cool
<fwereade> axw, I think it is
<axw> okey dokey, I shall add a card
<fwereade> axw, maas is pretty much the flagship provider for a whole bunch of use cases
<axw> I'm sure it'll be important to people building OpenStack clouds
<fwereade> axw, yeah
<axw> (on top of MAAS)
<fwereade> axw, another thing, btw -- and this is probably a bit big to charge to the azs work, but we need to think about it
<fwereade> axw, azs, instance types, and networks all affect constraint validity
<fwereade> axw, it would be really good if we could expose an api that lists the choices for those things
<axw> fwereade: heh :)  was talking about this with wallyworld before
<fwereade> axw, so that the gui can have dropdowns/checkboxes/whatever
<axw> there's a card for instance types already
<wallyworld_> fwereade: there's a plan for it
<axw> I figured we'd want it for placement directives like AZs too
<fwereade> axw, wallyworld_: you rock :)
<wallyworld_> fwereade: i've discussed with gui guys
<axw> fwereade: not sure how AZs affect constraints though?
<wallyworld_> fwereade: there will be a new client API call returning a struct containing inst types, availability zones, and possibly env constraints etc
<fwereade> axw, ha, not constraints for azs, you're right
<axw> regardless, it'll be nice to be able to list them
<fwereade> axw, although... hmm... why *shouldn't* we do az constraints?
<axw> I thought you didn't want to :)
<fwereade> axw, I know this is feature creep, I'm not necessarily asking for us to do it now
<axw> fwereade: hazmat wants them though
<fwereade> axw, just trying to figure out if it's a good idea and we should pre-emptively add a lets-do-it-one-day bug for it
<fwereade> axw, ha, ok, that's generally good enough for me
<fwereade> axw, am I right in thinking that joyent doesn't support azs?
<axw> fwereade: IIRC, CloudFoundry has some components that want to be partitioned across AZs
<axw> I have no idea, haven't looked
 * axw looks
<fwereade> axw, I just poked around quickly, couldn't see anything
<axw> okey dokey
<fwereade> axw, re cloudfoundry -- so we're ideally looking for something like 2 services, 4 azs, non-overlapping spread?
<fwereade> axw, ha, now I remember why I don't want zone constraints
<fwereade> axw, I want near/far, not explicit zones
<axw> fwereade: I don't know the specifics. I was under the impression that services were tied to a single AZ
<fwereade> axw, oh ok, that's interesting
 * fwereade probably needs to know more about cloudfoundry :/
<axw> i.e. all units of one service in one AZ, all units of another service in another
<axw> need to chat with hazmat more about it
<davecheney> urgh
<davecheney> FAIL: run_test.go:269: RunSuite.TestAllMachines
<davecheney> run_test.go:293: c.Check(testing.Stdout(context), gc.Equals, string(jsonFormatted)+"\n")
<davecheney> ... obtained string = "[{\"Error\":\"command timed out\",\"MachineId\":\"1\",\"Stdout\":\"\"},{\"MachineId\":\"0\",\"Stdout\":\"megatron\\n\"}]\n"
<davecheney> ... expected string = "[{\"MachineId\":\"0\",\"Stdout\":\"megatron\\n\"},{\"Error\":\"command timed out\",\"MachineId\":\"1\",\"Stdout\":\"\"}]\n"
<davecheney> OOPS: 233 passed, 1 FAILED
<davecheney> --- FAIL: TestPackage (155.82 seconds)
<davecheney> FAIL
<davecheney> FAIL    github.com/juju/juju/cmd/juju   155.991s
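That failure is an ordering flake: the per-machine results arrive concurrently, so comparing raw JSON strings is racy. One way to make the comparison deterministic is to sort by MachineId before marshalling -- a sketch of the idea, not the actual test fix:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// MachineResult mirrors the fields visible in the failing output.
type MachineResult struct {
	Error     string `json:"Error,omitempty"`
	MachineId string
	Stdout    string
}

// sortedJSON orders results by MachineId before marshalling, so a
// string comparison no longer depends on which machine answered first.
func sortedJSON(results []MachineResult) (string, error) {
	sort.Slice(results, func(i, j int) bool {
		return results[i].MachineId < results[j].MachineId
	})
	b, err := json.Marshal(results)
	return string(b), err
}

func main() {
	// Machine 1 "finishing" first, as in the flaky run above.
	out, err := sortedJSON([]MachineResult{
		{Error: "command timed out", MachineId: "1"},
		{MachineId: "0", Stdout: "megatron\n"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Sorted this way, the output matches the test's expected string regardless of completion order.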
<wallyworld_> axw: can i bother you for a review to fix one of the 1.19.4 bugs? i'll update and land the dependencies fix after goose is sorted https://codereview.appspot.com/105180043
<axw> no worries
<axw> wallyworld_: reviewed
<wallyworld_> thanks
<wallyworld_> axw: is there a branch for the azure card?
<axw> wallyworld_: which one?
<wallyworld_> bug 1324910
<_mup_> Bug #1324910: azure destroy-environment does not complete <azure-provider> <destroy-environment> <juju-core:Triaged> <https://launchpad.net/bugs/1324910>
<axw> wallyworld_: no branch, I only just started looking
<wallyworld_> ah ok, it was in the review lane
<axw> oops
<wallyworld_> :-)
<axw> thanks
<voidspace> morning all
<axw> morning
<voidspace> o/
<mattyw> morning all
<TheMue> morning all o/
<wallyworld_> axw: trivial https://github.com/juju/juju/pull/90
<axw> looking
<TheMue> hehe, really trivial
<axw> wallyworld_: done
<wallyworld_> thanks :-)
<wallyworld_> axw: i have no soccer tonight so if mgz  is around we can do a standup in 40 mins if you are free
<axw> wallyworld_: sure, I think I will be
<wallyworld_> ok
 * fwereade off in a minute for a swim and a think, does anyone need me for anything?
<wallyworld_> nah
<TheMue> enjoy swimming
<wallyworld_> axw: looking at the bot, there's a fucktonne of blue dots :-)
<axw> :)
<axw> wallyworld_: I'm getting a fucktonne of spurious "cannot merge" errors though :(
<wallyworld_> oh :-(
<wallyworld_> let's talk to mgz at standup
<voidspace> bloodearnest: morning
<bloodearnest> bloodearnest: heya
<bloodearnest> voidspace: heya, even :)
<mattyw> axw, ping?
<axw> mattyw: yo
<voidspace> bloodearnest: :-)
<mattyw> axw, I've added the charm link https://github.com/juju/juju/pull/80. Shall we land it or think of a better summary?
<axw> mattyw: land it, hopefully someone will think of something better after seeing it ;)
<mattyw> axw, I seem to remember there was a thread on one of the mailing lists 18 months ago that tried to come up with a decent summary, but I couldn't find it
<axw> *shrug* I wasn't here 18 months ago :)
<dimitern> mattyw, hey
<dimitern> mattyw, do we still need to discuss port ranges tasks?
<mattyw> dimitern, sorry totally forgot
<mattyw> dimitern, yeah lets do it
<dimitern> mattyw, ok, i'm joining
<mattyw> fwereade, https://github.com/juju/juju/pull/64 is this what you had in mind?
<TheMue> dimitern: standup
<perrito666> morning
<dimitern> TheMue, brt, sorry
<perrito666> rogpeppe1: tx again :)
<voidspace> perrito666: morning
<rogpeppe1> perrito666: np - it's looking loads better
<rogpeppe1> perrito666: BTW i introduced some folks to backgammon at the pub last night...
<perrito666> axw: btw, I noticed the 2013/2014 issue. I left 2013 because lots of new code seems to have 2013. Perhaps this is something we should address too?
<perrito666> rogpeppe1: and backgammon is not an euphemism for your fist being drunk right? :p
<perrito666> rogpeppe1: you are a bg evangelist
<rogpeppe1> perrito666: i kind of am, but i actually haven't played in months. i just saw that the pub had a set.
<axw> perrito666: if it's code that started in 2013 then that's fine, but this is all new isn't it?
<perrito666> axw: it is
<voidspace> actually enjoying writing tests
<voidspace> it turns out this particular piece of code I'm working on is testable
<perrito666> voidspace: lol
<voidspace> :-)
<voidspace> and I'm getting to use the os / ioutil / path libraries
<voidspace> always a good part of a language's standard library to be familiar with
<perrito666> rogpeppe1: the strip parameter is to emulate -C in gnutar it is stripped from the h.Name
<rogpeppe1> perrito666: but you don't use it, right?
<perrito666> I do, if I don't I screw up the merge
 * perrito666 checks
<perrito666> line 59
<rogpeppe1> perrito666: that's just passing the strip parameter through, though AFAICS
<rogpeppe1> perrito666: but it's always empty
 * perrito666 finds a bit odd that there is path.Join and filepath.Join
<rogpeppe1> perrito666: path is forward-slash-separated paths
<perrito666> rogpeppe1: nope, when building the tar.gz it strips the tempdir path
<rogpeppe1> perrito666: filepath is OS-dependant
<perrito666> mm, that is a useful piece of advice
 * perrito666 tries the hard task of finding a movie theater that broadcasts x-men in English
<rogpeppe1> perrito666: ah, got it. in which case, why not get rid of line 78?
<rogpeppe1> perrito666: and pass "/" in where currently you pass ""
<perrito666> rogpeppe1: yup, makes a lot of sense
<rogpeppe1> perrito666: one other thing - you should probably use filepath.ToSlash for the file name you store in the tar
<rogpeppe1> perrito666: so that even if we're generating a tar file on windows, the files will look normal
<perrito666> rogpeppe1: will that work when untarring on windows?
<rogpeppe1> perrito666: if it didn't, you'd never be able to untar any tar file on windows :-)
<rogpeppe1> perrito666: so, yes, i think so
<perrito666> rogpeppe1: to be honest.. I never tried to untar on windows
<perrito666> :p
<rogpeppe1> perrito666: i probably have :-)
<rogpeppe1> perrito666: tbh none of the /var/lib/juju paths will work in windows, so it's all pretty academic
<perrito666> I have been using either osx or linux since .. well since like a ton of years, I recall using xp as my desktop when it just came out for a few months and then that was it
<voidspace> I think the tests are done for the backup downloading (server side)
<perrito666> rogpeppe1: I know, that is why the path is set up there, I guess that if we try to run in windows we will have some convenience getter for that
<voidspace> now for the client side implementation
<rogpeppe1> perrito666:  yeah, you could have the getFiles implementation forked, one for each platform in two files
<perrito666> rogpeppe1: from the tests of archive/tar on the go source I cannot find a clear way to actually untar those into the hd, specially directories, have you ever done it?
<rogpeppe1> perrito666: you don't have to untar to disk
<rogpeppe1> perrito666: just read the contents (for files)
<rogpeppe1> perrito666: see ioutil.ReadAll
<perrito666> rogpeppe1: well, the restore part which I'll write after I fix your comments says otherwise :p
<rogpeppe1> perrito666: ah, i see
<rogpeppe1> perrito666: i thought you were talking about checking contents
<perrito666> rogpeppe1: nope, that is pretty much clear, I just need to add contents to these files
<perrito666> :p
<rogpeppe1> perrito666: the current restore plugin shows how you can extract files from a tar file in go
<perrito666> rogpeppe1: really?
<rogpeppe1> perrito666: well, it doesn't bother to actually extract them to disk, it's true
<rogpeppe1> perrito666: it should be relatively straightforward to write a tar file extraction function
<rogpeppe1> perrito666: there might be one already around somewhere
<rogpeppe1> perrito666: but it's just a matter of iterating through the contents, creating and writing each file in turn
<perrito666> rogpeppe1: I am not sure how to handle directories, I did google a bit but found nothing, everyone seems to be happy enough with buffer
<rogpeppe1> perrito666: if there's an explicit dir entry, mkdir it, otherwise MkdirAll the file's directory before writing the file
<perrito666> ok, tx
<rogpeppe1> does anyone know anything about the juju local plugin?
<rogpeppe1> i can't see that it does anything
<rogpeppe1> the only substantive logic in there is runAsRoot, but AFAICS that never gets called
<rogpeppe1> i guess it might be a place holder for future stuff
<rogpeppe1> fwereade, dimitern, jam: any idea?
<jam1> rogpeppe1: thumper was working on it as a namespace for the local provider tweaks he wanted to do
<jam1> IIRC he wanted to be able to refresh the LXC templates
<jam1> though now we cache on all providers
<jam1> which means just having it in "local" isn't good enough
<jam1> rogpeppe1: so either it isn't complete yet, or its functionality got moved out
<jam1> (I'm personally not a fan of putting that into a plugin vs core functionality, but thumper likes it)
<rogpeppe1> jam1: ok, thanks
<rogpeppe1> jam1: one other thing: do you know why it was split into two packages rather than just having one main package like all the other commands?
<jam1> no idea
<rogpeppe1> jam1: ta
<fwereade> rogpeppe1, I would guess that it's because he shares my opinion that the single-main-package thing is a horrible antipattern :)
<rogpeppe1> fwereade: i don't see how factoring out the one-line main() function helps
<rogpeppe1> fwereade: if the package being imported was actually useful to be used from Go, i might agree
<rogpeppe1> fwereade: but the difference between func Main and func main seems trivial to me
<fwereade> rogpeppe1, it means that (1) all the code is easily accessible from elsewhere without weirdness, so it's a good habit independent of whether the code in a given command is currently obviously useful from elsewhere; and (2) it turns in-package testing into a shameful fallback, as it should be, rather than a necessity ;p
<rogpeppe1> fwereade: if Main didn't call os.Exit, i might possibly agree to (1). but it does - packages should be written to be useful to call from Go. just separating main from Main doesn't help anyone.
<fwereade> rogpeppe1, fair point there, sure
<fwereade> rogpeppe1, I think that's just an argument that part of it is in the wrong package, rather than an argument against spltting packages, though
<rogpeppe1> fwereade: and about in-package testing... really, who cares? for main packages, all the stuff you're going to be testing is likely to be internal anyway
<rogpeppe1> fwereade: i'm totally up for factoring out useful functionality from main packages
<rogpeppe1> fwereade: but splitting them just "because" just adds obfuscation
<fwereade> rogpeppe1, if you need internal testing, your packages are probably too big ;)
<rogpeppe1> fwereade: i don't think all the grubby implementation details of a package are always worth factoring out. it's nice to hide things. but they may very well be worth testing.
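The split being debated above — a one-line `func main` delegating to an importable `Main` that returns an exit code instead of calling os.Exit itself — can be sketched like this (the names and the `--fail` flag are illustrative, not juju's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// Main holds the real logic and returns an exit code instead of
// calling os.Exit, so other Go code (and tests) can invoke it.
func Main(args []string) int {
	if len(args) > 1 && args[1] == "--fail" {
		fmt.Fprintln(os.Stderr, "something went wrong")
		return 1
	}
	fmt.Println("ok")
	return 0
}

// main is the only piece that must live in package main.
func main() {
	os.Exit(Main(os.Args))
}
```

This is the compromise both sides could accept: because Main returns rather than exiting, it stays callable from Go, whether or not it lives in a separate package.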
<fwereade> rogpeppe1, I dunno, the ideal code is lasagna, but given a choice between spaghetti and ravioli I've generally found the latter to be more palatable
<mattyw> does anyone know why this might be failing? http://juju-ci.vapour.ws:8080/job/github-merge-juju/138/console
<rogpeppe1> fwereade: we can bake lasagna within packages too :-)
<fwereade> rogpeppe1, IME that doesn't work -- without the language helping you to enforce the boundaries, it invariably degrades spaghettiwards
<fwereade> rogpeppe1, even super-strong convention like python's _field members doesn't seem to actually help
<fwereade> rogpeppe1, it does help a *little* but the temptation to violate layers is strong for the programmer in a hurry
<rogpeppe1> fwereade: if people want to violate layers, they'll do it regardless of package boundaries. hiding implementation details inside a well defined package boundary is good for avoiding needless reliance of external code on internal details
<fwereade> rogpeppe1, right, but without package boundaries people are often not aware that the layers even exist to be violated
<rogpeppe1> fwereade: internal types are a pretty good indication
<fwereade> rogpeppe1, even the careful and conscientious programmer-in-a-hurry is susceptible to perpetrating that sort of breakage
<rogpeppe1> fwereade: there are many forms of breakage that we can do at any time :-)
<fwereade> rogpeppe1, sure, I'm arguing from anecdotal evidence: I have spent a lot of my life rolling my eyes at many forms of bad code -- not to mention writing plenty of it myself -- and ISTM that failing to take advantage of the encapsulation constructs available is the root of an awful lot of avoidable evil
<fwereade> rogpeppe1, that go only offers package boundaries is not *necessarily* a strike against go, but it does constrain the forms that long-term-robust code can actually take
<rogpeppe1> fwereade: it's a trade-off, as usual. making packages with no decently explainable reasons for existing is another way to make spaghetti code that's really hard to fathom IME.
<fwereade> rogpeppe1, that's the ravioli code metaphor I think
<fwereade> rogpeppe1, hundreds of tiny things with hardly any meat in
<fwereade> rogpeppe1, long-term, it's *much* easier to deal with than spaghetti is
<fwereade> rogpeppe1, possibly this is just because all the boundaries make it harder to spaghettify
<fwereade> rogpeppe1, so after N years of maintenance there's still *some* structure discernible
<rogpeppe1> fwereade: i think it just makes for spaghetti at another level
<rogpeppe1> fwereade: for me, the key thing is strong, useful abstractions
<rogpeppe1> fwereade: then you can understand why a package exists and how you might change it to do what you want without compromising external clients.
<rogpeppe1> fwereade: in all honesty, i don't think we suffer much from violation of intra-package boundaries in juju. the most egregious violations of boundaries that i've seen have been cross-package.
<bodie_> if I could get a LGTM on this, it would be fantastic! https://github.com/juju/charm/pull/4
<bodie_> also, good morning all
<mattyw> has anyone ever seen the 'Pull Request is not mergeable' error from the github-lander?
<mattyw> I can't see a reason why the request isn't landable
<rogpeppe1> anyway, i strongly disagree that if it's worth testing a function, it's worth exporting it from another package. a package is often built from quite a few functions that make sense only within the context of that package and operate on types and concepts that are really internal implementation details. an example is desiredPeerGroup in worker/peergrouper. it's very useful to be able to test it directly, but it only makes sense in the context of what the rest of the package is doing.
<fwereade> rogpeppe1, I'm not saying internal tests are *never* a good idea
<rogpeppe1> [13:37:39] <fwereade> rogpeppe1, if you need internal testing, your packages are probably too big ;)
<natefinch> it's often way way WAY easier to test a single internal function than it is to test a huge external function.   A failure from a test against a very small internal function is a hell of a lot easier to understand than a failure from a huge external function.
<fwereade> rogpeppe1, I would say they *are* a code smell, though -- an indication that something's probably a bit off
<fwereade> rogpeppe1, closer inspection may reveal that everything's really fine
<rogpeppe1> fwereade: the only thing that's "off" about them, IMO, is that it makes it harder to refactor the code because your tests are dependent on internal implementation details to some extent.
<natefinch> fwereade: I disagree completely.  There are lots of individual bits of logic that you can test in isolation which can help you when you're editing/refactoring later.
<fwereade> natefinch, yeah, agreed there -- I'd just usually rather put a couple of helper packages behind the outward-facing one, and have properly segregated tests against those
<rogpeppe1> fwereade: but that's not made any better by factoring out the code
<rogpeppe1> fwereade: because they're implementation details whether they're in a separate package or not
<natefinch> fwereade: except that then you have exposed logic that should only be an implementation detail, but now other developers will be tempted to use it
<fwereade> natefinch, you've written a package which is the language in which another package is written
<fwereade> natefinch, rather than embedding the implementation language in the implementation
<rogpeppe1> fwereade: that makes sense *if* that language is coherent in itself
<fwereade> rogpeppe1, and so it should be..?
<rogpeppe1> fwereade: but if it's just an implementation detail, it often makes sense only in the package that's doing the implementation
<natefinch> fwereade: it just breaks encapsulation... and many times, the implementation details are not something you want anyone else to get access to, so you can freely refactor later.  If they're exposed, then you have to worry about maintaining compatibility with other packages that use them
<rogpeppe1> natefinch: fwereade is concerned about inappropriate access to parts of that implementation from within the package.
<fwereade> natefinch, I'd rather take the risk code-sharing than giant overgrown packages ;p
<fwereade> natefinch, yes, it demands a bit of extra effort to build the layers in the right places
<rogpeppe1> natefinch: and i can see that p.o.v. too - for instance (to continue with worker/peergrouper) some other part of the package *could* call one of the other functions inside desired.go.
<fwereade> natefinch, but IME the more clients you have the better-factored the code generally is, but the causal relationship is not entirely clear
<rogpeppe1> natefinch: but tbh i don't see that as a significant risk
<bodie_> at the last place I worked, we used to have 4-hour yelling arguments in the conference room.  I much prefer this style ^_^
<fwereade> bodie_, haha
<wwitzel3> my love of corkscrew pasta has doomed me to always writing poor code.
<fwereade> wwitzel3, lol
<rogpeppe1> lol
<rogpeppe1> using Go packages like java name spaces is not a great fit IMO.
<fwereade> natefinch, rogpeppe1: anyway, I'm worried that I'm not making myself clear -- I understand the risks you cite, I'm just saying that I'd rather [err towards small packages and risk inappropriate reuse of incomplete abstractions] than [err towards large packages and risk spaghetti behind clean, but large, boundaries] because I have generally found it easier to unfuck systems that made the former mistake
<fwereade> natefinch, rogpeppe1: I'm certainly not saying my approach can't go wrong
<rogpeppe1> personally i'd prefer to err towards packages with small boundaries that encapsulate useful functionality, even if that internal implementation might get big sometimes
<fwereade> natefinch, rogpeppe1: everything goes wrong
<fwereade> rogpeppe1, natefinch: https://twitter.com/DEVOPS_BORAT/status/224171832771739648
<rogpeppe1> i think one can easily have a package with a significantly sized implementation that nonetheless has a small API
<bodie_> fwereade, on that note, I now have a reasonably good JSON-Schema validator and YAML grinder-into-suitable-format and I'm thinking about how it might be useful for charm Config
<fwereade> bodie_, I heartily endorse that direction, but not at the moment :)
<bodie_> right now, those things are members of charm Actions, the grinder is an unexported method
<rogpeppe1> many things in the stdlib are a good example of that
<bodie_> was thinking about whether that stuff needs to be its own bit in Charm, or where it would belong
<gnuoy> Hi, I'm having my first go at compiling juju (1.18) from src. After "go get -v launchpad.net/juju-core/1.18/..." I'm moving the content of the 1.18 dir up one into juju-core otherwise go install seems unable to find anything. After doing that install fails due to missing src/github.com/juju/testing/logging, so I grabbed those files from old revision. After doing that install fails with "utils/ssh/ssh_gocrypto.go:77: undefined: ssh.ClientConn". I can't help but feel I'm doing something fundamentally wrong here
<fwereade> bodie_, I would personally consider each of those pieces to be a viable small package
<bodie_> gnuoy, I think what you want to do is rm -rf launchpad.net/juju-core and go get -u -v github.com/juju/juju/...
<bodie_> it's moved to github :)
<fwereade> gnuoy, if you need 1.18, moving the branch up a level is right; but you need to run `godeps -u dependencies.tsv` as well to get the right packages
<bodie_> although, I'm not totally certain where to go for the versioned historic branches
<bodie_> pay attention to fwereade
<gnuoy> bodie_, achk :)
<gnuoy> fwereade, thanks
<fwereade> gnuoy, most of what you need to know should be in CONTRIBUTING and README
<mattyw> mgz, ping?
<gnuoy> fwereade, ok, thank you
<bodie_> fwereade, on a side note, I've updated some code in my gojsonschema dependency -- does that mean I need to push out a tweak to dependencies.tsv and tell people they need to re-up on their deps?
<mgz> mattyw: hey
<fwereade> bodie_, yeah, although it probably doesn't demand a specific message, running godeps is just a habit one needs to develop, I think
<mgz> gnuoy: you just need to run godeps on the branch, which is in CONTRIBUTING
<mgz> gnuoy: 1.18 is on launchpad still
<gnuoy> mgz, ta
<mattyw> mgz, for the purposes of this conversation you're responsible for github and jenkins - ok?
<mgz> mattyw: okay :)
<mattyw> mgz, any idea what the pull request not mergable error is all about? http://juju-ci.vapour.ws:8080/job/github-merge-juju/138/console
<mattyw> mgz, only it seems mergable to me - it's only a trivial change to the readme
<mgz> mattyw: as best I can tell it's the github api being weird, just retrying often seems to work
<mattyw> mgz, I'll try again - thanks :)
<mgz> mattyw: the only other thing to check is if it merges cleanly into trunk, but I expect it does
<mattyw> mgz, just in case my git foo isn't good enough - I did a git merge master on my local branch - and master is up to date with trunk - and that all worked
<bodie_> mgz / fwereade -- so, I have some new code in juju/charm that has an updated dependency requirement, but I noticed the dependencies.tsv is in juju/juju
<bodie_> I just became aware that that could pose a problem
<mgz> rogpeppe1: ^
<mgz> what's our solution to new subpackages that have dep requirements?
<bodie_> I'm guessing I just need to put the updated dep in juju/juju
<bodie_> but, it seems a little clunky
<bodie_> then again, dep management in go seems... a little clunky ;)
<mgz> mattyw: the bot did successfully merge your branch into tip, so it's an ongoing mystery why github is failing with that error sometimes
<voidspace> perrito666: how far off merging is your branch?
<rogpeppe1> bodie_: yeah, in general only main specifies deps
<voidspace> perrito666: I think mine (well - the first of mine) is ready to be integrated with yours
<rogpeppe1> bodie_: otherwise you can get conflicting deps
<bodie_> ah
<bodie_> that makes sense
<rogpeppe1> bodie_: the real solution to this particular problem is to export stable APIs
<mattyw> mgz, I can't see it in tip
<mattyw> mgz, and https://github.com/juju/juju the readme is still old
<rogpeppe1> bodie_: then you can go get -u with impunity
<mgz> mattyw: I'm not saying it landed
<mattyw> mgz, oh right
<mgz> I'm saying the *bot* managed to do the merge in order to run the tests, so the merge itself shouldn't be the reason github failed the merge itself at the end
<rogpeppe1> bodie_: that's something we should really do for all our packages under github.com/juju, perhaps excepting juju core itself
<mattyw> mgz, nice, well I tried it again, 3rd time's the charm I guess
<mgz> one option if this continues to be a pain is to drop using the github api here and get the bot to commit/push/etc
<bodie_> rogpeppe1, not sure I'm following -- I made a tweak to gojsonschema to get a subset of functionality working properly, so the API I'm exporting definitely depends on that fix
<bodie_> now I'm worrying about whether there will be more such cases
<bodie_> obviously I don't want to keep forcing dep updates
<rogpeppe1> bodie_: it sounds like this is basically a bug fix to gojsonschema, right?
<rogpeppe1> bodie_: not a backwardly incompatible change
<bodie_> yeah, but tests won't pass on charm without the fix
<bodie_> so... not backward compatible with our test base
<rogpeppe1> bodie_: that's fine. go get -u will fix it, and because it's backwardly compatible, won't break anything else that's importing gojsonschema
<rogpeppe1> bodie_: in general, we should test against the latest version of a package
<gnuoy> mgz, assuming the "no version control" messages are just warnings, the dep process completed without error http://paste.ubuntu.com/7638931/ but the undefined: ssh.ClientConn error is still present when trying to do an install
<bodie_> rogpeppe1, okay, so I can basically update the dependency willy-nilly as long as I'm not changing the API significantly / in such a way that builds will fail?
<rogpeppe1> bodie_: yup
<mgz> gnuoy: you're doing it the wrong way round :)
<bodie_> rogpeppe1, but, shouldn't that be in dependencies.tsv?
<mgz> you *have* a dependencies.tsv in the juju-core branch, you want to make the other branches match that, which is `godeps -u ...`
<mgz> (that's -u, not -t)
<rogpeppe1> bodie_: see http://labix.org/gopkg.in for an explanation of how to maintain backward compatibility
<gnuoy> mgz, ah, ok
<mgz> so, un-overwrite the deps, and just do the -u
<rogpeppe1> bodie_: if every repo has its own dependencies.tsv, which one do you believe?
<bodie_> heh
<mgz> you probably still need a go get first, use the -d flag on that
<rogpeppe1> bodie_: IMO, the only decent place to specify dependencies is at the root (i.e. in main packages)
<perrito666> voidspace: working a re-reproposal
<mgz> gnuoy: so, `cd juju-core && go get -v -d ./... && godeps -u dependencies.tsv`
<voidspace> perrito666: heh, you'll be happy to finally get this in!
<gnuoy> tbh it looked to me like go get was doing all the dependency handling
<bodie_> rogpeppe1, good link re. gopkg :)
<rogpeppe1> bodie_: and if you combine that with maintaining stable APIs, we should find that a) all dependencies should build and test ok against (at least) the latest version of their deps and b) we can have reproducible builds of our binaries
<gnuoy> mgz, all compiled and happy, thank you
 * fwereade needs to be away, on later tonight
<mgz> gnuoy: ace
<voidspace> ericsnow: this is the initial prototype of the api client Backup method
<voidspace> ericsnow: https://github.com/voidspace/juju/compare/backup-client#diff-e3e783960401dd1c43cf368383b4992eR577
<ericsnow> voidspace: yeah, plugging that into the CLI shouldn't be a problem
<voidspace> ericsnow: cool
<voidspace> ericsnow: it occurs to me that I can just leave you to test it... ;-)
<voidspace> only mostly joking...
<ericsnow> voidspace: we'll see :)
<ericsnow> voidspace: such a kidder
<rogpeppe1> frankban, dimitern_, fwereade, voidspace, mgz, perrito666: factor cmd package out of juju-core: https://github.com/juju/juju/pull/93
<dimitern_> rogpeppe1, why?!
<rogpeppe1> dimitern_: because we want stuff outside of juju-core to be able to use it
<rogpeppe1> dimitern_: specifically, the store commands are moving out of juju-core too
<dimitern_> rogpeppe1, oh brother..
<dimitern_> explosion of deps and repos :)
<mgz> dimitern_: most of these pacakge splits are so rog can use 'em from elsewhere
<rogpeppe1> dimitern_: and really, it should not be juju-specific
<rogpeppe1> dimitern_: i thought you liked external repos :-)
<mgz> it's quite a lot of short term pain though...
<mgz> well, and some ongoing pain, given the non-smoothness of dep management
<voidspace> for every new repo I have to create a new mail rule...
<rogpeppe1> voidspace: can't you create one mail rule that matches all of 'em?
<dimitern_> rogpeppe1, I'll like them even more when we stop moving stuff around in 5000+ line diffs :)
<rogpeppe1> dimitern_: most of the diff there is deletion
<rogpeppe1> dimitern_: i'm sure i remember you arguing that more stuff should be factored out of juju-core, ages ago
<voidspace> rogpeppe1: well I could do - so long as I never work with any non-juju git repos
<dimitern_> rogpeppe1, even the agents are not juju-specific?
<voidspace> rogpeppe1: and I don't yet, but I don't want to assume I never will
<natefinch> voidspace: on your canonical account?  Probably safe assumption
<rogpeppe1> dimitern_: the agents haven't moved
<voidspace> natefinch: I only have one github account
<rogpeppe1> dimitern_: it's the cmd package only
<natefinch> voidspace: you can give it multiple email addresses and address notifications on a per-team basis
<rogpeppe1> dimitern_: (and cmd/testing)
<voidspace> natefinch: ah, I didn't know that
<natefinch> voidspace: so my juju notifications all go to canonical, everything else to my gmail
<dimitern_> rogpeppe1, so there will be juju/cmd repo and juju/juju/cmd/jujud ?
<voidspace> natefinch: I will do that immediately - thanks!
<natefinch> voidspace: np
<voidspace> yep, that's exactly what I want
<rogpeppe1> dimitern_: yup
 * voidspace lunches
<dimitern_> rogpeppe1, ah, i see - just the root cmd package then
<rogpeppe1> dimitern_: yup
<rogpeppe1> dimitern_: which is pretty general tbh (and should probably be more so, but i didn't want to break everything and take ages doing that)
<dimitern_> rogpeppe1, lgtm
<rogpeppe1> dimitern_: thanks!
<rogpeppe1> dimitern_: there were a couple of non-mechanical pieces
<rogpeppe1> dimitern_: specifically, the new functions NewSuperCommand and NewSubSuperCommand which are now in github.com/juju/juju/cmd
<rogpeppe1> dimitern_: which wrap cmd.NewSuperCommand in a juju-specific way (to do the expected logging)
<frankban> rogpeppe1: LGTM thanks
<dimitern_> rogpeppe1, right, still lgtm
<bodie_> odd
<bodie_> I tagged my repo as .v1
<bodie_> http://gopkg.in/binary132/gojsonschema.v1
<bodie_> when adding it to imports( ... ) as gopkg.in/binary132/gojsonschema.v1, I think gofix is altering it to point to the repo it was forked from
<bodie_> never mind, I'd had the original repo alongside my forked one
<bodie_> in $GOPATH
<bodie_> removed and it works
<rogpeppe1> bodie_: awesome, good plan
<bodie_> yeah, gofix was messing with my head though, heh.
<bodie_> anyway, hopefully this is the last we have to think about gojsonschema for quite some time.
 * rogpeppe1 hopes so too
<bodie_> rogpeppe1, since I now am using the gopkg.in dep, I need to alter dependencies.tsv too, don't I.
<rogpeppe1> bodie_: yup
<bodie_> very well then.  I think this is prepared for review other than the addition of the dep.  I'll add that in a separate PR momentarily
<bodie_> https://github.com/juju/charm/pull/4
<bodie_> rogpeppe1, mgz, fwereade
<bodie_> might be cool to add tagged releases to github juju
<bodie_> that way if someone wanted a stable build from source, they could check out and build a specific version
<rick_h_> we use master vs develop for that :)
<rick_h_> in UI land
<bodie_> okay, I'm having some confusion here with the dependencies.tsv stuff
<bodie_> I made sure I had the latest juju/juju with go get -u -v, and then ran godeps -u dependencies.tsv in juju/juju
<bodie_> but, it looks like a bunch of my versions are different in my new dependencies.tsv
<bodie_> er, the one I just generated with godeps -t $(go list github.com/juju/juju/...) > dependencies.tsv.new
<bodie_> http://www.diffchecker.com/3en2efa8
<bodie_> okay, I didn't have the correct pinned deps due to an issue with the new ones I got.  looks good now.
<bodie_> THERE we go.
<bodie_> okay, I need to get this PR in so that I can update dependencies.tsv properly.  can anyone vet this?  https://github.com/juju/charm/pull/4
<bodie_> it's not terribly complicated
<bodie_> however, juju/juju/dependencies.tsv has an incorrect dep on a dev branch for charm, so I need to get that merged into master in order to update deps correctly.
<bodie_> fwereade, mgz, rogpeppe1, anyone?  I guess you guys are probably in a meeting
<rogpeppe1> bodie_: oops
<rogpeppe1> i wonder how that got merged
<bodie_> it didn't
<bodie_> rogpeppe1, I'm building against a dev branch right now
<bodie_> the master doesn't have my changes yet
<bodie_> so, my generated dependencies.tsv doesn't have the correct version of charm
<bodie_> slowly going insane
<bodie_> lol
<rogpeppe1> bodie_: i'm not sure exactly what you're asking me to vet
<rogpeppe1> bodie_: unfortunately github doesn't make it easy to see what changes have been made in response to what comments
<bodie_> rogpeppe1, basically I nuked most of that PR, fixed a bunch of code, and it should be thought of as a new PR
<rogpeppe1> bodie_: you rebased?
<bodie_> rebased, amended commit message
<rogpeppe1> bodie_: hmm, it would have been nice to have been able to diff from the comments
<rogpeppe1> bodie_: because now i have no idea what's changed
<bodie_> rogpeppe1, right....  so, when I alter something that you commented on, the history hides the comment
<rogpeppe1> bodie_: it doesn't look totally different
<rogpeppe1> bodie_: i can see the old comments
<rogpeppe1> bodie_: but i want to find out how the code has changed since i made the comment
<bodie_> it's not that different, I mostly addressed your issues and beefed up the tests, stripped out some useless tests
<rogpeppe1> bodie_: so i don't have to continually cross-refer
<bodie_> hmmmm
<bodie_> there must be a way to do that
<rogpeppe1> bodie_: perhaps you could reply to the comments saying which ones you've done or not done
<rogpeppe1> bodie_: it's something i found indispensable in rietveld, and i'm a bit at a loss without it
<bodie_> perhaps the issue is my rebase
<bodie_> rogpeppe1, basically I addressed the concerns exactly, except for a couple where I left comments
<bodie_> https://github.com/juju/charm/pull/4#discussion_r13701371
<bodie_> https://github.com/juju/charm/pull/4#discussion_r13704644
<bodie_> I don't see why they don't have a feature to address exactly your concern, that does seem really silly
<rogpeppe1> bodie_: hmm, looks like i never published one comment
<natefinch> how github gets new features is really quite a mystery given the very obvious deficiencies they've had for a long long time.
<rogpeppe1> bodie_: the cleanse function doesn't properly give an error on all bad keys
<rogpeppe1> bodie_: (AFAICS)
<bodie_> rogpeppe1, in the case of type map[interface{}]interface{}, it just coerces the keys to strings and then runs through it again in the map[string]interface{} case
<rogpeppe1> bodie_: ah, i missed that
<rogpeppe1> bodie_: BTW if prohibitedSchemaKeys was a map[string]bool (or a set.Strings) then the lookup would not require a loop
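The set-style lookup rogpeppe1 suggests — a map[string]bool instead of a slice you loop over — looks like this (the key names are invented for illustration, not taken from the actual charm code):

```go
package main

import "fmt"

// prohibitedSchemaKeys as a set: membership is a single map index
// expression rather than a loop over a slice. A missing key yields
// the zero value, false. (Key names are hypothetical.)
var prohibitedSchemaKeys = map[string]bool{
	"$ref":    true,
	"$schema": true,
}

func main() {
	for _, key := range []string{"$ref", "description"} {
		if prohibitedSchemaKeys[key] {
			fmt.Printf("%s: prohibited\n", key)
		} else {
			fmt.Printf("%s: ok\n", key)
		}
	}
}
```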
<bodie_> ah, good point rogpeppe1
<rogpeppe1> godeps: cannot parse "/mnt/jenkinshome/jobs/github-merge-juju/workspace/tmp.gduuL94JJe/RELEASE/src/github.com/juju/juju/dependencies.tsv": cannot find directory for "github.com/binary132/gojsonpointer": not found in GOPATH
<rogpeppe1> mgz: doesn't the 'bot automatically get new deps?
<rogpeppe1> mgz: if not, then how can i fix it (FWIW, i don't *think* i added that dependency)
<mgz> rogpeppe1: it does, and that's not a new dep
<rogpeppe1> hmm, weird
<mgz> is it not actually imported in the branch?
<mgz> we still rely of go get for fetch
<bodie_> rogpeppe1, I'm actually going to revert the gopkg changes
<rogpeppe1> mgz: ah...
<bodie_> I think mgz had some good points that we're already pinning to a version via dependencies.tsv
<rogpeppe1> mgz: it's quite possible we don't have that as a dep any more
<rogpeppe1> mgz: hmm, no, we must do
<mgz> rogpeppe1: I think it's just gopkg weirdness we don't need to deal with for now
<rogpeppe1> mgz: gopkg?
<mgz> rogpeppe1: gopkg.in
<rogpeppe1> mgz: oh, i see
<rogpeppe1> bodie_: but the branch i'm committing doesn't have any gopkg.in deps
<bodie_> rogpeppe1, come again?
<bodie_> rogpeppe1, I'm going to push a couple of tweaks to the PR, so we don't have to deal with this gopkg stuff right now
<rogpeppe1> bodie_: ok, fair enough.
<rogpeppe1> bodie_: sorry, i thought you were linking it to my 'bot failure
<mgz> rogpeppe1: it is related
<mgz> see the go get higher up in the console log
<mgz> having tip of gojsonschema pointing at gopkg.in borked us
<mgz> I'll requeue the proposals when that gets sorted
<mgz> bodie_: poke me when we're back to normality please :)
<rogpeppe1> ah, i get it
<rogpeppe1> we do a go get, which has no dependency info; then we update the deps, and that means we really need to go get again
<wwitzel3> fwereade: updated https://github.com/juju/juju/pull/2 when you have a moment to review. thanks.
<bodie_> rogpeppe1, mgz https://github.com/juju/charm/pull/4 ready
<rogpeppe1> bodie_: thanks for not rebasing :-)
<rogpeppe1> bodie_: i think a load of my comments must've got lost in the ether
<rogpeppe1> bodie_: did you see a comment about var swap = unmarshaledActions.ActionSpecs[name]
<rogpeppe1> ?
<rogpeppe1> bodie_: i suggested: spec := unmarshaledActions.ActionSpecs[name]
<rogpeppe1> bodie_: because nothing is being actually swapped
<mgz> rogpeppe1: there's a possibly valid error on your 010 merge
<rogpeppe1> mgz: link?
<mgz> job 144, should be on the pr
<mgz> (sorry, can't easily paste url into irc)
<bodie_> rogpeppe1, one moment, looking
<rogpeppe1> mgz: ok, looking
<rogpeppe1> mgz: ah, perhaps i haven't pushed the latest version of cmd
 * rogpeppe1 juggles dependencies
<bodie_> rogpeppe1, https://github.com/binary132/charm/blob/actions-validate/actions.go#L77-L94
<bodie_> I'm not seeing the problem here
<rogpeppe1> bodie_: the "swap" variable isn't actually swapping anything. it's just a temporary mutable store for the struct.
<rogpeppe1> bodie_: it confused me for a few moments
<bodie_> ah, so just the naming
<rogpeppe1> yeah
<mgz> yeah, I'd generally name those kinds of things temp
<bodie_> that syntax is a little kludgy, imo
<mgz> (or more commonly, tempSomething)
<rogpeppe1> bodie_: also, we can do this later, but i'm inclined to think that bundling the bad-key checking inside the cleanse function isn't great
<bodie_> I get why it's there, since map resolution happens at runtime, and things need to be good at compile time
<rogpeppe1> bodie_: why what's there?
<rogpeppe1> i generally name it after the thing that it is
<rogpeppe1> in this case "spec" should work fine
<bodie_> er, the need to use a temp struct
<bodie_> rather than assigning to the struct member via map resolution ActionSpecs[name]
<rogpeppe1> bodie_: there have been murmurings about allowing the assignment of fields inside by-value structs in maps
<rogpeppe1> bodie_: it's a slightly awkward language spec change to make though
<rogpeppe1> bodie_: as values in maps are deliberately not addressable
<bodie_> there we go
<rogpeppe1> bodie_: and that's currently the rule for allowing mutation of struct members
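The restriction being discussed — map values are not addressable, so you can't assign to a field of a struct held by value in a map — and the temporary-variable workaround from the PR look like this (the type and field names are invented for the example):

```go
package main

import "fmt"

// ActionSpec stands in for a struct stored by value in a map;
// the name is illustrative only.
type ActionSpec struct {
	Description string
}

func main() {
	specs := map[string]ActionSpec{"snapshot": {}}

	// specs["snapshot"].Description = "take a snapshot" // compile error:
	// map values are not addressable, so their fields can't be assigned.

	// Workaround: copy out, mutate, and write back.
	spec := specs["snapshot"]
	spec.Description = "take a snapshot"
	specs["snapshot"] = spec

	fmt.Println(specs["snapshot"].Description)
}
```

Note that a map of pointers (map[string]*ActionSpec) sidesteps the restriction entirely, at the cost of an extra allocation per entry.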
<bodie_> new bit up
<bodie_> I think it's nice that Go is oriented towards usefulness without total depth of knowledge ;)
<natefinch> bodie_: I like that total depth of knowledge is not too hard to obtain
<natefinch> (especially compared to other languages)
<bodie_> :)
<bodie_> I always find myself comparing everything to C in my head
<bodie_> it doesn't get much simpler than that, really
<natefinch> bodie_: there's a lot of memory management stuff that you need to know about C and using pointers as arrays & strings
<bodie_> you mean, like chunking for the cpu and caching behavior, heap vs stack?
<natefinch> like malloc etc
<bodie_> yeah, the mechanics can be kind of klutzy, for sure, especially starting out
<natefinch> oh yeah, and data structures getting created with completely random data... that's an awesome feature
<bodie_> makes perfect sense, though
<natefinch> lots of foot guns :)
<bodie_> what's really fun is when you manage to mangle your stored code, and things start blowing up in weird ways that are impossible to debug "the normal way"
<bodie_> because you're actually altering your program's behavior in unexpected, unpredictable ways....
<bodie_> wheee
<bodie_> rogpeppe1 / mgz is that pr good to land?
<mgz> bodie_: is from my perspective
<bodie_> I changed the name of the variable, I'd like to get this dependency stuff over with so I can fix the deps.tsv stuff and get on with my life (heh)
<rogpeppe1> bodie_: yeah, land it. i might fix up a couple of trivials later.
<bodie_> rogpeppe1, it's meant to be landed with $$merge$$ .... right?
<rogpeppe1> bodie_: no, just click merge
<bodie_> ah
<rogpeppe1> bodie_: we haven't got CI set up yet on any of the external repos
<rogpeppe1> bodie_: make sure that the tests pass tho'!
<bodie_> :)
<bodie_> I'm not seeing a merge button.
<bodie_> I'm sure that now that I've said so, it will become immediately obvious
<bodie_> probably just need to rebase on master
<bodie_> nope
<bodie_> rogpeppe1, I'm sorry for all the constant interrupt polling I'm doing here, do I just not have permissions or what's going on here?  I just want to get this finished.
<rogpeppe1> bodie_: hmm, you *should* have perms, assuming you're a member of juju-hackers. i'll just check.
<bodie_> I am a member
<bodie_> I think...?
<jcw4> And you're marked public I think bodie_ right?
<bodie_> I'm a member of juju
<rogpeppe1> bodie_: ah, try again
<rogpeppe1> bodie_: i hadn't added juju-hackers to the collabs
<bodie_> I see
<bodie_> well, that's nice to know, at least I wasn't doing something daft
<bodie_> and merged.
<bodie_> thank god.
<rogpeppe1> yay, merged!
 * rogpeppe1 high fives bodie_
<bodie_> this doesn't look right
<bodie_> https://github.com/binary132/juju/compare/dependencies-charm-gojsonschema?expand=1
<bodie_> ugh
<bodie_> okay, fixing
<bodie_> https://github.com/juju/juju/pull/96
<bodie_> rogpeppe1, mgz.... that should be all
 * rogpeppe1 is done for the day
<voidspace> g'night folks
<voidspace> EOW
<voidspace> see you all on Monday
<ericsnow> voidspace: have a nice one
<natefinch> see ya voidspace
<perrito666> btw, adding this to your ~/.gitconfig might ease your lives http://pastebin.ubuntu.com/7640097/
<jcw4> perrito666: which part in particular, or all of it?
<natefinch> perrito666: looks like there's some duplicate data in there
<natefinch> like color for status
<perrito666> natefinch: might be some dupes, I have been tweaking it for some time
<jcw4> perrito666: I see now.. I just saw a wall of text and didn't read enough to see it was all related to color coding git status info
<perrito666> jcw4: I get colors on status, a few useful shortcuts
<perrito666> jcw4: yup, basically it all makes git display data in better ways
<natefinch> it's funny, my biggest gripe with git is that git difftool won't open more than one diff at a time.  I like being able to go through all the diffs in different tabs and see how they relate to one another
<perrito666> jcw4: looking at it more in depth, I might have two files pasted there :p I just over-catted
<natefinch> heh
<jcw4> :)
<perrito666> but that makes the output of my git very clear :) it just needs a bit of cleaning
<perrito666> jcw4: natefinch http://pastebin.ubuntu.com/7640139/
<perrito666> there better
<jcw4> sweet
<perrito666> I find the aliases at the bottom to be especially life improving
<perrito666> as you can see some of those come from when I used svn :p
<natefinch> heh... yeah... I wish all the vcs people could just agree on a naming scheme. It would make things so much simpler
<jcw4> perrito666: I like git aliases too...http://paste.ubuntu.com/7640151/
<perrito666> natefinch: oh git agrees on the naming, they just have the commands do different stuff
<natefinch> haha, exactly
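[editor's note] The pastebins above have long since expired; a minimal sketch of the kind of ~/.gitconfig being discussed — status colors plus shortcut aliases. The exact entries are assumptions for illustration, not perrito666's or jcw4's actual files:

```ini
[color]
	ui = auto		; colorize status, diff, and branch output
[alias]
	st = status
	co = checkout
	br = branch
	lg = log --oneline --graph --decorate
```

Aliases defined this way are invoked as ordinary subcommands, e.g. `git st` or `git lg`.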
<bodie_> so, any chance I could get that dep update landed?  do I need a LGTM for this?
<bodie_> https://github.com/juju/juju/pull/96
<natefinch> you always need an LGTM :)
<natefinch> not that I can tell at all if you screwed up or not ;)
<bodie_> hehe, yep...  seemed like a pretty simple thing to get landed
<jcw4> bodie_: LGTM
<jcw4> ;0
<jcw4> ;) even
<bodie_> I'm really anxious to move on to the action stuff...
<natefinch> jcw4: you gotta do it for realz in the PR, so it's recorded that it's your head on the line if there's a problem :)
<jcw4> er... umm
<jcw4> well...
<bodie_> what, natefinch, you can't eyeball the difference between 1be916ca1fee152004f27cf83df96b130411ea9f and dbe10f58fcb80669aa3972574f51d92fdd20ee47?
<jcw4> my head is now on the line
<bodie_> from a certain perspective, it's just moved from one line to another
<natefinch> you can actually just go look at the ids of the heads of those repos and eyeball that they're the same as in the dependencies file
<natefinch> LGTM to me too
<bodie_> thanks gents
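[editor's note] Nate's eyeballing step — checking that the revision pinned in the dependencies file matches the repo head — can be mechanized. A sketch assuming a godeps-style tab-separated file of "project, vcs, revision"; the column layout and the `pins` helper are illustrative assumptions, not the real tooling:

```go
package main

import (
	"fmt"
	"strings"
)

// pins parses lines of "project<TAB>vcs<TAB>revision" into a
// project -> revision map.
func pins(deps string) map[string]string {
	out := make(map[string]string)
	for _, line := range strings.Split(strings.TrimSpace(deps), "\n") {
		fields := strings.Split(line, "\t")
		if len(fields) >= 3 {
			out[fields[0]] = fields[2]
		}
	}
	return out
}

func main() {
	deps := "github.com/juju/charm\tgit\tdbe10f58fcb80669aa3972574f51d92fdd20ee47"
	// heads would come from `git ls-remote` or the repo's branch page.
	heads := map[string]string{
		"github.com/juju/charm": "dbe10f58fcb80669aa3972574f51d92fdd20ee47",
	}
	for proj, rev := range pins(deps) {
		fmt.Printf("%s pinned-at-head=%v\n", proj, heads[proj] == rev)
	}
}
```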
<bodie_> and this is where I need to do the $$merge$$ bit?
<jcw4> bodie_: yep $$merge$$
<perrito666> mm, just the kind of mail to get from curtis a friday afternoon
<natefinch> ahh, the weekly "everything is totally screwed up!" email :/
<jcw4> what does PTAL mean?
<natefinch> please take a look
<jcw4> ah... thanks!
<natefinch> took me a while to figure that one out too
<jcw4> I got SGTM, LGTM, WTF (j/k) but I couldn't get that one
<lazyPower> have you guys had a chance to look at https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1329429
<_mup_> Bug #1329429: local bootstrap fails <amd64> <apport-bug> <trusty> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1329429>
<lazyPower> i'm seeing more users show up with this null pointer dereference problem, so far about 3
<lazyPower> however, the user being affected by this particular issue is here now, ali1234  is the filer of 1329429
<natefinch> lazyPower: you shouldn't need to sudo for bootstrap... in fact that may mess things up (obviously a nil pointer panic is not really an acceptable handling of that)
<ali1234> hi
<lazyPower> natefinch: is that the root of the issue, using sudo?
<lazyPower> let me update that AU answer - since juju has evolved beyond requiring sudo to run the bootstrap for local.
<natefinch> I don't know.  juju bootstrap with local will work without sudo (if it needs sudo, it'll prompt you for creds)
<natefinch> so... it may need sudo, but running it preemptively with sudo is neither necessary nor recommended
<lazyPower> natefinch: good point. I've got an edit filed for that AU answer, I didn't realize it was still pointing to using sudo
<lazyPower> that's been corrected for a little over what, 4 months now?
<natefinch> something like that yeah
<natefinch> we wanted it to be completely sudo-less but there were some userspace-lxc things that I think either didn't make it in or weren't available on all platforms
<lazyPower> ali1234: when you run juju bootstrap without prefixing with sudo, do you still get the nil pointer problem?
<ali1234> yes
<ali1234> ~/.juju/environments/ is owned by root, this probably doesn't help
<natefinch> I figured that would be too much to hope for
<ali1234> i only get the null pointer problem when running without sudo
<ali1234> i don't get it when running with sudo, i get a different error telling me not to use sudo :)
<natefinch> oh yeah, sorry, I see that now in the bug description. well, it's good to get the documentation correct anyway
<ali1234> yeah. i expect if i hadn't made that mistake i wouldn't be in this situation
<natefinch> I'm trying to dredge up the 1.18.1 code so I can make sure I'm looking at the right code for the tracebacks in the bug
<lazyPower> Thanks for taking a look natefinch
<ali1234> ~/.juju/environments/ was empty so i deleted it... now i don't get the error
<natefinch> huh
<natefinch> like.... bootstrap worked?
<ali1234> yes
<ali1234> it recreated the dir and now i have a local.jenv inside it
<natefinch> I wonder if that got created with root credentials and then it couldn't be written to
<ali1234> yes, that is exactly what happened
<natefinch> I think we've seen that.... I think we fixed it... there was another bug for it, let me check
<natefinch> Have you tried this with 1.18.4?
<ali1234> no
<ali1234> i just wanted to try out juju, i have never used it before
<ali1234> i have no idea how to do that
<natefinch> apologies for such a bad first impression
<ali1234> np, i've seen worse
<jcw4> [miracle max's wife]:  you think it'll work?
<jcw4> [miracle max]:  it'll take a miracle
<ali1234> so my question now is are there going to be more root owned files? where does it keep all the local stuff?
<natefinch> There should not be any root owned files.  We only need root for local environments because we start lxc containers.
<natefinch> the local stuff is all kept under $HOME/.juju/
<ali1234> yeah, where are those containers stored? in ~/.juju?
<ali1234> ~/.juju/local/ has a load of root owned files
<natefinch> ali1234: the containers are stored in the standard lxc place, I believe, /var/lib/lxc?  something like that.
<ali1234> /var/lib/lxc/ is empty
<ali1234> should i just nuke ~/.juju and start again?
<natefinch> ali1234: that's probably not a bad idea.  I was wrong, it looks like there are some root owned files in .juju, at least under the .juju/local/ directory
<ali1234> now i get "ERROR cannot use 37017 as state port, already in use"
<ali1234> apparently mongodb is running on that port
<natefinch> ali1234: so the mongo server from the first bootstrap is evidently still running.  Assuming you're not running mongod on your local box for anything else, you can just do sudo killall mongod
<ali1234> start: Job is already running: juju-agent-al-local Bootstrap failed, destroying environment
<natefinch> try running    juju destroy-environment local --force
<ali1234> ERROR open /home/al/.juju/environments.yaml: no such file or directory
<natefinch> heh, oops, ok, so we've been blowing stuff away so it's all confused
<ali1234> i did init, switch, destroy and that seems to have worked
<natefinch> ahh, cool
<ali1234> bootstrap appears to have worked
<ali1234> i was under the impression it had to download a lot of stuff, but it ran very quickly
<natefinch> the local provider is super quick
<natefinch> great for demos and proof of concepts
<ali1234> so the askubuntu page says "The first time this runs it might take a bit, as it's doing a netinstall  for the container, it's around a 300 megabyte download. Subsequent  bootstraps should be much quicker. "
<ali1234> it definitely didn't download 300mb...
<ali1234> looks like it's working now, thanks!
<bodie_> does anyone happen to know much about juju run?
<bodie_> I'm trying to figure out where to look for the bits where it builds the hook context
<natefinch> bodie_: thumper wrote that, but it's saturday where he is
<bodie_> :)
<bodie_> on that note, if fwereade sees this -- https://github.com/juju/juju/blob/master/doc/charms-in-action.txt#L80-L92
<bodie_> I think this content is outdated, but I want to be sure
<perrito666> ok, EOD | EOW for me, cheers
#juju-dev 2014-06-15
<waigani> morning all
<waigani> simple review if anyone's interested: https://github.com/juju/juju/pull/91
<menn0> waigani: mornin'. looking at that PR now.
<waigani> menn0: sweet, thanks. I suspect I may have to add more tests - just started with the basic one.
<menn0> waigani: is the description for the PR correct? You say this implements the client side of the UserInfo but this looks to be the server side.
<menn0> waigani: ignore me...
<menn0> waigani: I'm tired. You're right.
<waigani> menn0: hehe, I was just composing my explanation
<thumper> I don't particularly like the "todo, not tested" bits
<menn0> I got my directory paths and file names mixed up
<thumper> can we not work out how to test it?
<menn0> that was going to be my first review comment... you should be able to mock out the API call to test the error case
<waigani> thumper: okay, there are other places in the API with the same todo - I was being lazy and followed suit
<menn0> waigani, thumper: I've just commented with a suggestion on how to do this.
<waigani> menn0: yay thanks. I'll plow through these emails asap and get onto it.
<menn0> waigani: I've added a few more comments and I'm done for now.
<waigani> menn0: Great. Thank you!
<thumper> argh...
<thumper> my head hurts
<thumper> although not boxing this time
<thumper> lots of thinking around identities and users
<waigani> thumper: and lack of wine?
<thumper> not yet
<thumper> had a drink last night
<thumper> but it will be the last for a while
<waigani> the rest of us will have to drink more to put up with you!
<waigani> I'm not sure how to mock out a method, without creating a constructor for the object/struct
<waigani> oh, haha - I just spotted NewUserManagerAPI, looks like a constructor
<waigani> ignore me
<waigani> thumper, menn0: I've made the server side UserInfo mockable. How do I mock it out from the client side? It is called over the wire and initialized separately from the client.
<thumper> waigani: you shouldn't do it that way
<waigani> oh
<thumper> you should mock out the client's user manager
<thumper> not the server side
<waigani> back to the drawing board
<thumper> at least I think...
<thumper> are there examples of this anywhere?
<thumper> I have vague recollections of this before
<waigani> thumper: i get what you're saying, mock out the server endpoint on the client side
<thumper> yeah
<waigani> I'll give it another go after standup
<thumper> api/client_test.go line 75
<waigani> thumper: sweet, thanks
<menn0> waigani, thumper: in this case it's really easy isn't it? All calls are made through a single standalone function (call()) so patch that function for tests where you'd like to create an error condition.
<thumper> not sure how helpful it'll be
<thumper> menn0: that seems the obvious place...
<waigani> connection dropped, we were done right?
<JoshStrobl> Any feedback from Juju documenters? https://lists.ubuntu.com/archives/juju-dev/2014-June/002708.html
<menn0> waigani: yep, we were done
<thumper> JoshStrobl: we agree it needs to be updated :-)
<menn0> bikeshedding required! when a relation hook fails should the agent info in the status output look like:
<menn0> a) hook failed: some-relation-changed for mysql:server
<menn0> b) hook failed: some-relation-changed with mysql:server
<menn0> c) hook failed: some-relation-changed for wordpress:mysql to mysql:server
<menn0> (if the error was being reported for the wordpress charm)
<menn0> currently we get something like: hook failed: some-relation-changed
<waigani> c
<waigani> menn0: call is a method of Client struct. If I was to make the method mockable, I'd have to add a constructor to the struct.
<menn0> waigani: I'm talking about the call function at usermanager/client.go:21
<waigani> menn0: oooohhh
<menn0> thumper: any thoughts on the colour of the bike shed?
<menn0> waigani: to patch you'll probably need to introduce another level of indirection (i.e. a standalone function which takes st and the other args that call() takes.
<waigani> menn0: yep, I just missed the call func in usermanager - I went looking further up the tree in api/client
<menn0> davecheney: any preference on how relation hook failures should be displayed in status output?
#juju-dev 2015-06-08
<menn0> thumper: another intermittent test failure fix: http://reviews.vapour.ws/r/1877/
 * thumper looks
<axw> menn0: can I bother you for a small txn assert fix? http://reviews.vapour.ws/r/1879/
<menn0> axw: sure
<menn0> looking
<axw> thanks
<menn0> axw: is the single worker responsible for block device updates a singular worker?
<axw> menn0: hah, good point. I'll check, but it doesn't actually matter anyway
<menn0> axw: yeah I think I agree, but if it isn't then one of your points isn't accurate :)
<axw> menn0: it's instancepoller, and no it's not (should it be?)
<axw> menn0: I'll update the description ;)
<menn0> axw: not sure if it should be or not
<menn0> axw: i don't know much about instancepoller
<menn0> axw: does setAddresses get called with the complete list of current addresses?
<axw> menn0: it's called with the in-memory ones. hmm, I guess it should really refresh shouldn't it
<menn0> axw: that's what I was thinking
<menn0> axw: otherwise there is a concurrent update problem
<axw> menn0: I'll write a test and fix, and poke you again in a bit
<menn0> axw: sounds good
<menn0> axw: i'll publish the one minor comment I had now
<axw> thanks
<thumper> coffee time
<axw> waigani: thanks for fixing that bug
<waigani> axw: np :) thanks for the review
<axw> menn0: PTAL. I changed it to not use a txn loop anymore, since it'll never leave the state that fails the assertion
<menn0> axw: will do. otp
<menn0> axw: i think your change is probably ok b/c everyone who sets address should be trying to set the same thing
<menn0> axw: the only problem I can see is if one of the address setting processes is slow and so ends up overwriting a more recent (and correct) set of addresses with a stale set
<axw> menn0: that would still happen before. it'd just have gone through the txn loop twice
 * menn0 nods
<menn0> axw: i guess it would
<axw> menn0: it shouldn't be a major issue anyway, since the updates are periodic
<menn0> axw: and i just noticed that the addresses are refreshed before every update as well now too
<menn0> axw: which helps
<axw> yep, thanks for picking that up
<menn0> axw: ship it!
<axw> menn0: thanks
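[editor's note] menn0's stale-writer concern can be sketched with a revision check: a writer that read revision N may only commit while the revision is still N, otherwise it must refresh and retry. This toy in-memory store only stands in for the mongo txn assertion being discussed; all names are invented:

```go
package main

import (
	"fmt"
	"sync"
)

// store stands in for the machine document holding addresses.
type store struct {
	mu    sync.Mutex
	addrs []string
	rev   int
}

// setIfUnchanged writes only if the revision still matches what the
// caller read, mimicking a txn assertion; on failure the caller must
// refresh and decide whether its update is still meaningful.
func (s *store) setIfUnchanged(rev int, addrs []string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.rev != rev {
		return false // concurrent update happened
	}
	s.addrs = addrs
	s.rev++
	return true
}

func main() {
	s := &store{addrs: []string{"10.0.0.1"}, rev: 1}
	s.setIfUnchanged(1, []string{"10.0.0.2"}) // fresh writer wins
	ok := s.setIfUnchanged(1, []string{"10.0.0.1"})
	fmt.Println(ok, s.addrs) // false [10.0.0.2]: stale writer rejected
}
```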
<menn0> thumper: big debuglog refactoring... preparation for upcoming db log changes: http://reviews.vapour.ws/r/1883/
<menn0> thumper: another refactoring branch to come
<thumper> menn0: done
 * thumper heads off
<mup> Bug #1462874 opened: Collapse workload-state and agent-state into state <juju-core:New> <https://launchpad.net/bugs/1462874>
<dimitern> jam, morning
<dimitern> jam, 1:1 ?
<TheMue> dimitern: ping
<dimitern> TheMue, sorry, omw
<dimitern> TheMue, in the mean time http://reviews.vapour.ws/r/1884/ ? :)
<TheMue> dimitern: hehe, ok
<TheMue> dimitern: thx for fixes, looks good
<dimitern> TheMue, thanks for the review :)
<dooferlad> dimitern: hangout?
<voidspace> dimitern: it's pushed http://reviews.vapour.ws/r/1860/
<voidspace> dimitern: when you have time
<dimitern> voidspace, cheers, will have a look shortly
<voidspace> dimitern: you've frozen
<dimitern> voidspace, where?
<voidspace> dimitern: are you still in the hangout?
<dimitern> voidspace, no
<voidspace> heh
<voidspace> ok
<dimitern> voidspace, :)I'm looking at your PR and will join our 1:1 hangout in a bit
<voidspace> dimitern: ok, I'll grab coffee
 * TheMue is afk, bike ride to co-location office
<dimitern> voidspace, LGTM, I'm in the hangout btw
<voidspace> dimitern: omw
<perrito666> Morning all
<voidspace> dimitern: do I need to manually merge my branch into a feature branch?
<voidspace> dimitern: https://github.com/juju/juju/pull/2493
<voidspace> dimitern: I thought the bot could handle feature branches
<voidspace> mgz_: ^^^
<voidspace> dimitern: mgz_: never mind...
<voidspace> my page was just stale
<voidspace> it has already happened...
<voidspace> :-)
<dimitern> voidspace, yeah, all good
<mup> Bug #1461529 changed: juju upgrade-charm has no effect for subordinate charms <subordinate> <upgrade-charm> <juju-core:Invalid> <https://launchpad.net/bugs/1461529>
<dimitern> voidspace, we need to sync up the devices branch with master
<dimitern> voidspace, before it diverges too much
<voidspace> dimitern: ok, I'll create a PR
<dimitern> voidspace, cheers!
<dimitern> dooferlad, omw
<voidspace> dimitern: https://github.com/juju/juju/pull/2520
<mup> Bug #1462966 opened: worker/provisioner: multiple data races <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1462966>
<dimitern> voidspace, it's just a straight merge right?
<dimitern> voidspace, LGTM
<voidspace> dimitern: yup, straight merge
<wallyworld> jam: you around for the system image meeting?
<bac> hi frankban_, now that i'm home i see the results from 'make fcheck' i ran friday afternoon
<frankban_> bac: cool, any problems?
<bac> frankban_: yeah, quite a few failures
<frankban_> bac: weird it works here
<bac> frankban_:  let me paste them
<bac> frankban_: http://paste.ubuntu.com/11648474/
<bac> frankban_: was i supposed to have an environment bootstrapped before running the tests?
<frankban_> bac: so those seem related to the charm hook failing rather than quickstart.
<frankban_> bac: no, no environment should be bootstrapped
<frankban_> bac: I presume this is related to the lp outage
<bac> frankban_: ah, yes.
<frankban_> bac: functional tests use the network
<frankban_> bac: because charm installation requires network access
<frankban_> bac: perhaps if you run them again you'll see them passing
<frankban_> bac: make ftest could be faster
<bac> frankban_: ok, i'll try again. i have no idea if LP is more accessible today than friday
<frankban_> bac: yeah, but I ran ftests this morning for another branch and they passed
<bac> ok
<wallyworld> axw: perrito666: just finishing another meeting, be there in a sec
<katco> wwitzel3: fyi, the cable company is messing with my line. i might go dark for a bit. if i do, just have the standup w/o me
<katco> wwitzel3: 1st day of the iteration, shouldn't be too much
<mup> Bug #1461954 opened: failed to unmarshall 503 <ci> <juju-core:Triaged> <juju-quickstart:Invalid> <https://launchpad.net/bugs/1461954>
<mup> Bug #1463047 opened: TestUpgraderRetryAndChanged fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.24:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463047>
<wwitzel3> katco: standup was as you'd expect :)
<mup> Bug #1463053 opened: NetworkSuite setup fails <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core feature-proc-mgmt:Triaged> <https://launchpad.net/bugs/1463053>
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: so in the container broker we're *already* populating the MAC address
<voidspace> dimitern: using the template
<voidspace> dimitern: const MACAddressTemplate = "00:16:3e:xx:xx:xx"
<voidspace> dimitern: so the current code will put that into the XML for every KVM instance... :-)
<voidspace> dimitern: I seem to recall you wanted to deliberately override any mac address coming from anywhere else
<voidspace> dimitern: can you remember why?
<voidspace> dimitern: this is lxc-broker.go:566
<dimitern> voidspace, that's because the lxc package understands :xx:xx and renders them to be unique
<dimitern> voidspace, we have to do the same for kvm I guess
<dimitern> voidspace, feel free to drop or change the MACAddressTemplate
<voidspace> dimitern: well, I'll replace it with a *specific* mac address
<voidspace> a generated one
<voidspace> still within that template
<voidspace> that template range
<perrito666> mmpf, does anyone else have the problem of the comment button on rb not creating a text box?
<dimitern> voidspace, sgtm
<voidspace> dimitern: I think the rand package on play.golang.org may not be entirely random...
<voidspace> dimitern: this code always generates the same MAC address...
<voidspace> dimitern: http://play.golang.org/p/0wHt2bd9YX
<voidspace> either that or results are cached
<voidspace> dimitern: note that in that code I have to use []interface{} to pass to Printf (or Sprintf in the real code)
<dimitern> voidspace, will look in a bit
<voidspace> dimitern: ok
<voidspace> dimitern: I wondered why, I think that go is doing a shortcut when you unpack a slice in a call and collect them back up in a function definition
<voidspace> and []string can't be converted to []interface{}
<voidspace> natefinch: you'd probably know
<voidspace> natefinch: : http://play.golang.org/p/0wHt2bd9YX
<voidspace> natefinch: change []interface{} to []string and watch it fail
<voidspace> natefinch: that surprised me
<voidspace> (although I know you can't cast an arbitrary slice to []interface{} - but in this case it's Go doing the cast not my code)
<natefinch> voidspace: well, you're passing a []string into something expecting a []interface{}.... so it is you :)
<voidspace> natefinch: well, I'm unpacking it
<voidspace> natefinch: the call is "digits..."
<natefinch> right
<voidspace> so I'm *not* passing the slice
<voidspace> I suspect that as Printf is packing back into a slice there's a shortcut that is a cast
<natefinch> I think that's just syntactic sugar, so that Go knows you mean to pass each value in separately, rather than the whole slice as a single value
<voidspace> which doesn't work
<voidspace> natefinch: right, but it's *not* passing them in separately
<voidspace> casting a string to an interface{} would work, right?
<natefinch> correct
<dimitern> voidspace, generating the same mac address seems to be related to not initializing rand seed (or just a quirk of the playground)
<voidspace> dimitern: yeah, I expect so
<voidspace> was noting it more than asking a question :-)
<voidspace> it was the failed cast that I actually wondered about
<natefinch> voidspace: I do think it's a little surprising, but it is actually documented as simply being passed in that way: If the final argument is assignable to a slice type []T, it may be passed unchanged as the value for a ...T parameter if the argument is followed by .... In this case no new slice is created.
<natefinch> from http://golang.org/ref/spec#Passing_arguments_to_..._parameters
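[editor's note] A sketch of the conversion that spec passage implies — the commented-out line is the call that fails to compile, and `joinHex` is an illustrative helper name, not code from juju:

```go
package main

import "fmt"

// joinHex shows the workaround: a []string cannot be unpacked with
// digits... into a ...interface{} parameter (no new slice is created,
// and []string is not assignable to []interface{}), so convert
// element by element first.
func joinHex(digits []string) string {
	args := make([]interface{}, len(digits))
	for i, d := range digits {
		args[i] = d
	}
	return fmt.Sprintf("%s:%s:%s", args...)
}

func main() {
	digits := []string{"a1", "b2", "c3"}
	// fmt.Sprintf("%s:%s:%s", digits...) // compile error: cannot use
	// digits (type []string) as type []interface{}
	fmt.Println(joinHex(digits)) // a1:b2:c3
}
```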
<dimitern> voidspace, why not generating a slice of uint8 and converting that to a mac address?
<natefinch> voidspace: that snippet of yours was very pythonic, btw
<voidspace> dimitern: would that actually be less code?
<voidspace> natefinch: thank you
<katco> wwitzel3: cool, ty :)
<natefinch> voidspace: it's not a good thing when you're writing go ;)
<voidspace> natefinch: it's *always* a good thing  ;-)
<voidspace> Python has better primitives for generating hex strings... like a hex built-in
<voidspace> although maybe Go has one too, I didn't look very far to be fair
<voidspace> ah, I can use %x
<dimitern> voidspace, I don't know, just a suggestion :)
<voidspace> dimitern: using hex.EncodeToString seems the best way - I still need to generate three of them
<voidspace> dimitern: as I need them colon separated
<voidspace> natefinch: thanks for that reference by the way, useful
<voidspace> natefinch: it is surprising, and has the downside that you can never unpack a slice into a call that accepts []interface{}
<TheMue> dimitern: leaving from colocation now back home, but should be there when net meeting is starting
<voidspace> natefinch: would you still say this is pythonic?
<voidspace> natefinch: http://play.golang.org/p/QfCqUYZ5P3
<voidspace> natefinch: if so, could you explain what a more go-thonic approach would be?
<voidspace> (I could just put three calls to rand.Intn(256) into the Printf call - but that seems a bit gross)
<natefinch> voidspace: haha, I have exactly that code in my playground buffer
<voidspace> natefinch: heh, cool
<voidspace> I just didn't know about %x
<voidspace> but then I didn't look very hard
<voidspace> natefinch: thanks
<natefinch> voidspace: I can't make it 0 pad the result though... so <17 is a single difit
<natefinch> digit
<natefinch> er <16
<voidspace> ah
<voidspace> yes
<natefinch> %0x should work , but doesn't somehow
<voidspace> maybe generating 6 digits is better then
<dimitern> TheMue, no problem
<dimitern> voidspace, EncodeToString sounds reasonable
<voidspace> dimitern: %x does that anyway (under the hood)
<perrito666> natefinch:  :%02x"
<perrito666> voidspace: ^
<voidspace> perrito666: thanks
<natefinch> oh, right...
<natefinch> perrito666: the documentation for that is not good
<perrito666> natefinch: it is not, I just learned all that stuff by trial and error while doing nolog
<natefinch> perrito666: it's the way C/C++ do it... but the actual language in the Go doc is confusing.
<perrito666> it is not the clearest formatting indeed
<lazyPower> Did we make any changes to juju-log in the latest 1.24-beta release?
<lazyPower> i'm not seeing anything in the project milestone changes that would indicate it has  https://launchpad.net/juju-project/+milestone/1.24-beta5
<lazyPower> Does anyone have a moment to help me troubleshoot what info i should be dumping into a bug? i've been able to reproduce the above consistently in the latest juju-1.24-beta6.1 release, with unit commands hanging indefinitely while attached to the unit via debug-hooks.
<lazyPower> heres what i have, and hopefully its got enough information to reproduce: https://bugs.launchpad.net/juju-core/+bug/1463117
<mup> Bug #1463117: Juju 1.24-beta6.1 unit commands hang indefinately <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463117>
<mup> Bug #1350171 changed: windows service needs to implement Exists <juju-core:Fix Released> <https://launchpad.net/bugs/1350171>
<mup> Bug #1449617 changed: service.Service implementations are missing functional tests. <systemd> <upstart> <windows> <juju-core:Fix Released by gabriel-samfira> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1449617>
<mup> Bug #1463117 opened: Juju 1.24-beta6.1 unit commands hang indefinately <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463117>
<perrito666> lazyPower: thanks for the report, I think I bumped into that a couple of weeks ago and then missed it
<lazyPower> perrito666: yeah, it's really confusing how specific and isolated it is
<lazyPower> the fact that it works when you detach and re-execute leads me to believe it's something minor like a variable not making it into the context for auth w/ the socket or something.
<lazyPower> and it only cropped up in this beta 6.1, it was working as expected in beta-5
<perrito666> katco: I swear I am not trolling
<katco> perrito666: lol
<perrito666> worth the clarification
<katco> perrito666: as someone who did not grow up with twitter, the 140char limit is infuriating
<perrito666> katco: it is and it outlived its purpose long ago
<katco> perrito666: what i mean to say is patterns are the canonical way to solve a problem. they have been uncovered by people over the years
<katco> perrito666: if someone uses the wrong tool for the job, that doesn't make the pattern wrong, it makes the developer wrong
<perrito666> katco: patterns are the canonical way to communicate a solution, at least from my pov
<katco> perrito666: yes, and that label implies a way of doing something
<perrito666> katco: yes, but, as any other convention it is only useful if properly adopted
<katco> perrito666: yep
<katco> perrito666: but i disagree with the rally cry of "down with design patterns"
<perrito666> so if the number of people using a convention in the wrong way is larger than the people using it right, it means the convention is not useful, since people cannot adopt it
<katco> perrito666: to me it is tantamount to saying "down with vocabulary"
 * perrito666 will hold a troll comment about english vocabulary :p
<katco> perrito666: but we're not talking about conventions, really. patterns are patterns because no one has discovered a better way.
<katco> perrito666: ignoring past exploration into the problem makes no sense to me.
<natefinch> to me, too often it obscures the actual logic behind the code.  "Visit" should never be a function name in your code (unless you're running AirBNB or something)
<perrito666> natefinch: well if someone implements a pattern verbatim then its cargo culting
<natefinch> also, they tend to overcomplicate the problem for dubious benefit
<perrito666> like, if I tell you "solve this with a producer consumer" you will most likely know what I mean but not implement the example out of wikipedia
<perrito666> the issue is that there are a ton of cutpasters that call themselves developers
<perrito666> aghh why do I keep having to do changes to things that lack tests and end up implementing the whole thing
<natefinch> heh
<katco> sorry, actual work stuff distracting me :) natefinch: if they don't have a benefit, then you shouldn't be using them
<perrito666> katco: did you just tautologize nate?
<perrito666> :p
<katco> perrito666: well that's kind of my point... the argument against patterns seem to be "but when i don't need a pattern and i use it, it doesn't provide value"
<katco> perrito666: i'm trying to point out that the utility is a f(developer) not f(pattern)
<natefinch> katco: that's what I keep telling people who keep using patterns, but they were published in a book by some people known as a gang, so they must be good.
<natefinch> for example, the visitor pattern turns simple iteration into callback hell.  Case in point: http://golang.org/pkg/path/filepath/#Walk
<katco> natefinch: i don't think you understood me. patterns are just code that solve specific problems, and the community has given them names
<katco> natefinch: if the costs outweigh the benefits, then don't use that code
<katco> natefinch: but that doesn't make the code intrinsically useless
<natefinch> katco: but by giving it a name and a default structure, it makes people think they have to use the whole thing, or it's "wrong", rather than just using the idea behind the pattern.  "Hey, you can let the object handle its own iteration, so you don't have to duplicate that code everywhere!"  "Oh, neat idea!"
<katco> natefinch: once again, problem with the person, not the label
<natefinch> katco: but a *lot* of people.
<TheMue> katco: I wouldn't use the term "code" for patterns, they are just an idea on how entities can collaborate elegantly to solve a task, simplifying the communication about it between developers
<katco> TheMue: right
<natefinch> sinzui: I have a bizarre thought.... do we really need anything other than the juju client built via ubuntu's methods?  Jujud just gets downloaded via simplestreams... in theory that jujud could be built however we want it to be, right?
<katco> natefinch: a pattern is a truth. it exists regardless of adoption. saying "don't use it" is like saying "a lot of people get predicate calculus wrong, so no one use it"
<mup> Bug #1449617 opened: service.Service implementations are missing functional tests. <systemd> <upstart> <windows> <juju-core:Triaged by gabriel-samfira> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1449617>
<mup> Bug #1463133 opened: migrateLocalProviderAgentConfigSuite teardown fails <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1463133>
<mup> Bug #1463135 opened: TestUniterHookSynchronisation fails <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1463135>
<natefinch> katco: a lot of the reason I like go is because it takes a lot of existing solutions and says "don't do that".  For example, pointer arithmetic, operator overloading, etc.
<TheMue> natefinch: you're right that many developers think that they have to code and name exactly like in the patterns, that's surely naive.
<katco> natefinch: my talk at gophercon will be attempting to convince people that go is not an excuse to ignore the lessons learned over the past 30, 40 years :)
<katco> natefinch: this is a great example
<TheMue> natefinch: oh, coming to go, even go has "patterns". they are only called idioms here. ;)
<natefinch> katco: Oh, I'm not ignoring them.  I'm actively trying to kill them with fire ;)
<katco> natefinch: that's like trying to kill the number 5, it makes no sense
<natefinch> katco: I hate the number 5.
<natefinch> :D
<sinzui> natefinch: correct
<katco> natefinch: and yet its existence is implicit to the universe, with no regard to your feelings ;)
<natefinch> sinzui: that was not the answer I expected :)
<sinzui> natefinch: We build jujud using LP now. We used to let Ubuntu do it; not any more. CI makes the Win and CentOS agents. We already have the infrastructure to make all jujuds separate from client packaging
<sinzui> QA considered it because we really want to upload just one agent per arch and OS
<natefinch> sinzui: so, in theory, if we really needed to, we could keep the client 1.2.1 compatible, but still use 1.4.2 with Jujud?
<sinzui> natefinch: The only spanner in the works is that local-provider will not use streams. You cannot force it. Ubuntu want their packages to provide both client and server for local-host
<sinzui> natefinch: absolutely. That might also help us get to a smaller client.
<natefinch> sinzui: true, but don't you already need a PPA for juju-local?
<sinzui> natefinch: no. anyone making the source package will get juju-core and juju-local… but we both know the jujud wasn't provided by juju-local. The client package still contains jujud
<natefinch> sinzui: so it sounds like the answer is "no, we can't have jujud require 1.4.2 unless we have 1.4.2 in ubuntu"
<sinzui> natefinch: I don't think that is true. The SRU is all about the client. If core fixes local-provider to work with streams, then we and foundations change the packaging to be just about the client, then a user will get the new juju client, bootstrap, and still get a local env.
<natefinch> sinzui: well, you just gave me my Friday labs project :)
<sinzui> I look forward to testing the local provider with an untainted jujud
<perrito666> wasnt going to be a lxd thing?
<natefinch> perrito666: I'm not holding my breath
<natefinch> it just now occurred to me that charms are basically plugins for juju
<natefinch> ericsnow, katco, wwitzel3: I think I nuked the workload process technology plugins card.... I tried to drag it into "in progress" and when I dropped it, it zooped up and away somewhere
<katco> natefinch: looks like it was in archive. moved it back for ya
<natefinch> katco: oh, thanks, archive was scrolled off my screen, so I forgot it existed
<natefinch> ericsnow: I see now what you were talking about all this time with modules... interesting approach.
<ericsnow> natefinch: yeah :)
<katco> and another lightbulb goes on :)
<natefinch> ericsnow: should this be "args []string"?  https://github.com/ericsnowcurrently/juju/blob/ww3-container-mgmt/procmanager/plugin.go#L11
<ericsnow> natefinch: perhaps
<ericsnow> natefinch: if anything it would probably be a struct
<natefinch> ericsnow: right
<natefinch> ericsnow: also, do you really need to pass the description to launch?
<ericsnow> natefinch: probably not
<natefinch> ericsnow: ok :)
<ericsnow> natefinch: keep in mind that the Plugin (and PluginResource) interfaces were created mostly to sketch out the concept
<ericsnow> natefinch: it may be that those two types are unnecessary
<ericsnow> natefinch: I expect that in the end something like them will exist, though
<natefinch> ericsnow: no problem, was just making sure I wasn't missing important things you guys had already hashed out.
<ericsnow> natefinch: k
<natefinch> ericsnow: what is the ID in "register <options> PROC-NAME ID" ?
<ericsnow> natefinch: the ID that comes back from the plugin
<ericsnow> natefinch: same as UniqueID in LaunchDetails
<katco> natefinch: ericsnow: is the high-level architecture documented anywhere?
<natefinch> ericsnow: so proc-name is the juju-name for it, and id is the plugin's name
<katco> natefinch: ericsnow: to scale this conversation to the team?
<ericsnow> natefinch: pretty much
<ericsnow> katco: not really (yet)
<katco> ericsnow: natefinch: now's a great time :) "make it so"
<natefinch> katco: I'm working off this: https://docs.google.com/document/d/1PcRQXaerlsACro4y1y5LWD-uvhfHya2CkOcoljyFyCU/edit#
<ericsnow> natefinch: that doesn't specify much about the plugins though
<katco> natefinch: that doc doesn't really document the architecture
<natefinch> I know :)
<natefinch> sort of my point ;)
<katco> natefinch: specs != architectural docs
<katco> the questions are coming up naturally now, so it's a good time to write down the answers
<natefinch> katco, ericsnow: I started a doc on the plugin spec: https://docs.google.com/document/d/1qlHneZ7UHoNgGska46XKhqyYOHLKUSob41puQyG80GQ/edit#
<natefinch> katco, ericsnow: I can start one on the high level architecture
<natefinch> architecture/glossary :)
<katco> natefinch: doesn't have to be anything too elaborate. just a high-level component diagram, maybe a sequence diagram
<ericsnow> natefinch: k
 * natefinch didn't sign up for diagrams....
<natefinch> :)
<katco> natefinch: ericsnow can walk you through what we're doing on min version
<natefinch> I can do diagrams.  Not my forte, but I'll make do, and someone who's better will come along and make them look nice
<katco> natefinch: use plantuml. declarative, and very quick
 * natefinch debates drawing stuff on his tablet with his finger
<thumper> katco, natefinch: saw what appeared to be a long twitter conflab about patterns, decided quickly not to get involved...
<thumper> took effort though
<natefinch> thumper: haha
<katco> :p
<natefinch> thumper: twitter is so bad for arguments
<thumper> true
<natefinch> we finished it up (for some value of finished) here
<thumper> although we like to call them discussions :)
<katco> thumper: well at any rate it's good that you are silently agreeing with me. >:D
<natefinch> rofl
<natefinch> I gotta run.  Will be back later to work on specs & arch docs for process mgmt stuff
<thumper> waigani: re bug 1463117 change happened between beta5 and beta6
<mup> Bug #1463117: Juju 1.24-beta6.1 unit commands hang indefinitely <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463117>
<waigani> thumper: cheers
<katco> wwitzel3: ping
<wwitzel3> katco: pong
<katco> wwitzel3: do you know why you handed this to sergey? https://bugs.launchpad.net/juju-core/+bug/1449210
<katco> wwitzel3: https://plus.google.com/hangouts/_/canonical.com/juju-release?authuser=1 if you're interested in verbally giving a report
<wwitzel3> katco: he fixed it last time
<wwitzel3> katco: it just wasn't visible that he did it because he didn't have an LP account at the time
<wwitzel3> katco: I can forward you the email chain, it might have been someone else from the cloudsigma team and not sergey who fixed it, but it isn't a bug with juju, it was fixed in the remote API last time.
<katco> wwitzel3: that would be great. ty for the insight
<wwitzel3> katco: forwarded, yep, np
<perrito666> Wallyworld axw I am a couple of mins late
<wallyworld> ok
<alexisb> ericsnow, you still there?
<alexisb> if you are gsamfira may have questions for you
<ericsnow> alexisb: yep
<alexisb> gsamfira, ^^^
<gsamfira> thanks. Forking && fixing
#juju-dev 2015-06-09
<davecheney> thumper: can I do a 1:1 with you today ?
<davecheney> 'cos it was the actual Queen's Birthday yesterday
 * natefinch wonders if it's worth the effort to propose something he thinks is better, but different.
<perrito666> natefinch: isn't that like the best thing you can propose?
<natefinch> perrito666: people don't like different, even if it's better
<natefinch> perrito666: which means I have to fight for it.
<perrito666> natefinch: are you asking us to tell you it is ok to be lazy? :p
<thumper> davecheney: yep
<natefinch> perrito666: nah
<perrito666> natefinch: too bad I was about to tell you it is ok to be lazy :p
<natefinch> perrito666: heh.  Nah, I'm the type of person that always tries the hard way first.
<natefinch> perrito666: I had an epiphany tonight - I realized our charms are actually plugins for juju that we ask other people to write.  And looking at them from that lens... our plugin architecture is pretty terrible.
<natefinch> which is why we hide it with charm helpers and gocharm and the like.
<perrito666> natefinch: ook
<perrito666> natefinch: It would seem to me you are hitting the wrong plants :p
 * perrito666 wonders if his battery will run out before the tests finish the run
<natefinch> anyway, I'm working on the spec for workload process plugins (i.e. plugins to run docker or rocket or whatever containers etc).... and trying not to make the same mistakes.  At some point I'd love to propose a "new-style charm" architecture.  But for now I'm just wondering if I can get these new plugins to not follow in the footsteps of the old ones.
<axw> wallyworld: would you mind reviewing https://github.com/juju/juju/pull/2524 please? 1.22 and 1.24 differ a little bit
<wallyworld> sure
<wallyworld> axw: looks good. the block device stuff only came in 1.23 right?
<axw> wallyworld: it seems it was in there in 1.22, but in a different form
<axw> wallyworld: one which didn't have the issue I fixed in 1.24
<wallyworld> ok, so this fix isn't relevant
<axw> right
<wallyworld> great
<axw> wallyworld: I'm just going to rename that card to "port to 1.22 and master"
<wallyworld> ok
<axw> not really worth adding another card right?
<wallyworld> nah
<thumper> waigani: how goes the hunt on bug 1463117 ?
<mup> Bug #1463117: Juju 1.24-beta6.1 unit commands hang indefinitely <juju-core:Triaged> <juju-core 1.24:Triaged by waigani> <https://launchpad.net/bugs/1463117>
<waigani> thumper: it came in on this commit: f35a8c6c2d2ca2
<thumper> waigani: when is that commit?
<waigani> thumper: so the commands in the debug-hook context are expecting stdin from the cmd context
<thumper> waigani: and what was the branch fixing / adding?
<waigani> thumper: but that branch changes how it works
<thumper> ahh... I remember that branch
 * thumper looks at axw
<thumper> luckily axw did the debug-hooks originally too :)
<waigani> thumper: it also brought in a new juju/utils dependency, this is the log for that:
<waigani> exec: don't use stdin to send commands to shell
<waigani>     
<waigani>     Using stdin to send commands to the shell means that commands run with this cannot use stdin, or they will consume subsequent script input
 * thumper nods
<waigani> thumper: but I don't think the bug is in the dep
<thumper> by fixing the jujuc commands to support stdin, we broke debug hooks...
<thumper> *sad face*
<waigani> thumper: in summary, yes
<waigani> thumper: I'm poking around cmd/jujud/main.go where we read from stdin and pass it in with the jujuc.Request
<waigani> thumper: if we don't read from stdin and pass it in with the req, and instead set up the cmd.Context.Stdin with a nil bytes.Buffer pointer on the jujuc Server - it works
<thumper> um... ok... I don't understand all those bits very well
<thumper> waigani: best thing would be to run a solution past axw
<thumper> however my gut says "here be dragons"
<thumper> I feel that any change may have repercussions in other unintended areas
<thumper> however, saying that...
<thumper> if we are able to test:
<thumper> jujuc command simply
<thumper> jujuc command piping stdin
<thumper> juju run
<thumper> hooks in scripts
<thumper> and debug hooks
<thumper> if they all work,
<thumper> then it seems like we have probably got things in good order
<waigani> thumper: right, also the solution can't reintroduce the bug the original commit fixed.
<waigani> thumper: which was this guy 1454678
<thumper> waigani: that was 'jujuc command piping stdin' IIRC
<waigani> #1454678
<mup> Bug #1454678: "relation-set --file -" doesn't seem to work <landscape> <relation-set> <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1454678>
<waigani> thumper: yep
<waigani> axw: ping?
<axw> waigani: pong. sorry, was making lunch
<waigani> axw: hey, I think commit f35a8c6 broke debug hooks
<waigani> f35a8c6 made jujuc support stdin
<axw> waigani: oh, what's it doing? eating atm, I'll have a look after
<waigani> axw: I think where we read out the bytes buffer of stdin leaves the cmd.Context stdin empty and the commands in the debug hook context are expecting it
<waigani> axw: I'm currently poking to back up that theory
<waigani> axw: so I'll step back and give you the context
<waigani> axw: I'm looking at this bug #1463117
<mup> Bug #1463117: Juju 1.24-beta6.1 unit commands hang indefinitely <juju-core:Triaged> <juju-core 1.24:Triaged by waigani> <https://launchpad.net/bugs/1463117>
<waigani> If I comment out the lines where we read from cmd.Context.Stdin (around cmd/jujud/main.go:86) and don't set Stdin in the jujuc.Request, the bug is gone
<waigani> i.e. I can debug a hook, call config-get and it returns instantly
<waigani> if I leave the change in, the call hangs indefinitely
<axw> waigani: odd, it should just get EOF?
<axw> hm no, it's going to read from the terminal
<waigani> axw: could we use something like peek to read from Stdin without emptying the buffer?
<axw> waigani: no, I think we need to do it differently: pass the FD to the server process
<axw> going to get messy
<waigani> that was thumper's concern
<axw> waigani: it might be safer to back it out and reopen the bug, defer to 1.25
<waigani> axw: yeah?
<axw> wallyworld: ^^ that change to support passing config via stdin is causing problems
<wallyworld> oh bother
<wallyworld> um
<wallyworld> sounds like breaking debug hooks is worse than relation set --file not working
<wallyworld> we would look to fix in a 1.24.1 though
<wallyworld> there's potentially other work queued up for 1.24.1 also
<axw> sounds fine
<axw> I just don't think it's worth blocking 1.24
<wallyworld> at this stage, we need to get the release out yes
<wallyworld> and the relation-set one seems like the lesser of 2 evils
<wallyworld> unless i'm missing something
<axw> wallyworld: yep, that's my take
<wallyworld> axw: would one need to back out the utils change too?
<axw> waigani: do you want to do that, or shall I take it off your hands?
<axw> wallyworld: I don't think that's necessary
<wallyworld> yeah, that's what i was thinking
 * axw looks
<wallyworld> or hoping
<waigani> wallyworld: I've tested just the utils and it does not cause the bug to happen
<wallyworld> waigani: awesome. ty
<waigani> axw: why would "peeking" at stdin not work?
<waigani> axw: It fixed the bug in my testing
<axw> waigani: can you explain what you mean by that?
<axw> waigani: I mean, in terms of how you would do it. what change did you make in testing?
<axw> waigani: data may come into stdin at any time, you can't peek at what's not there yet
<axw> waigani: until the file is closed, whatever's reading from it will block until there's data
<axw> I mean, block until there's data or it's closed
<waigani> axw: right, what I did was a hack using mock data - not a real solution. But it did show the problem was reading from stdin left it empty. Is there a way of resetting cmd.Context.Stdin to os.Stdin and pre-populate it with the command we just read off?
<axw> waigani: the problem is not that we've read data off stdin and then something else wants it and can't get it. the problem is that there was no stdin data to start with, and the ioutil.ReadAll in cmd/jujud/main.go is waiting for it (forever)
<axw> waigani: it's not a problem when the jujuc commands are run by agents, because they don't have a terminal. but when you use debug-hooks, you're running the hooks from inside a terminal.
<waigani> axw: ah, got it
<davecheney> thumper: I think the LTS animal should be Xuanhuasaurus
<davecheney> because everyone will think it's spelt Xanthusela
<waigani> axw: so your idea was to pass through the file descriptor so the alias command can be executed directly from the terminal?
<axw> waigani: that's one option, though I'm not sure how that'll work on Windows. it might be simpler to ditch net/rpc for juju/rpc, so we can have the server request data from stdin on demand
<axw> waigani: anyway, I don't think we should do that yet
<waigani> axw: yep, not a quick fix. I'll update the bug report, back out that commit and reopen the bug it fixed.
<axw> waigani: thank you
<waigani> davecheney: http://www.teamnameshirts.com/teamshirts.php?name=XUANHUASAURUS
<davecheney> % go test -race -timeout=600m ./state
<davecheney> smoke me a kipper
<davecheney> waigani: nailed it
<wallyworld> axw: can i edit the resources spec so i can add a link to the new delivery schedule doc?
<waigani> axw: http://reviews.vapour.ws/r/1895/
<axw> wallyworld: sure, just a moment
<wallyworld> ta
<axw> wallyworld: you have edit rights now
<wallyworld> ty
<davecheney> thumper: % go test -race -timeout=600m ./state
<davecheney> ok      github.com/juju/juju/state      808.283s
<waigani> \o/
<thumper> heh... 600m
<axw> waigani: thanks, LGTM
<waigani> axw: cheers
<axw> wallyworld: I'm going to have to do the lifecycle card before the persistent volume deletion one. we don't currently trigger lifecycle changes in all hte necessary places
<wallyworld> ok
<wallyworld> do we need to do for 1.24?
<axw> wallyworld: doesn't have to be done for 1.24
<wallyworld> even better
<mup> Bug #1454678 opened: "relation-set --file -" doesn't seem to work <landscape> <relation-set> <juju-core:Triaged by axwalk> <juju-core 1.24:Triaged by axwalk> <https://launchpad.net/bugs/1454678>
<thumper> davechen1y: with go, is it safe to delete entries from a map while iterating through it?
<thumper> my gut says no
<thumper> but perhaps go is "special"
<davechen1y> thumper: your gut is correct
<thumper> kk, ta
<davechen1y> let me look up the specifics
<davechen1y> The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next. If map entries that have not yet been reached are removed during iteration, the corresponding iteration values will not be produced. If map entries are created during iteration, that entry may be produced during the iteration or may be skipped. The choice may vary for each entry created and from one iteration to the next. If 
<davechen1y> hmm
<davechen1y> i guess that is less pessimistic than I suggested
<davechen1y> it certainly won't cause an infinite loop
<davechen1y> thumper: some good news, the reflection crash on ppc64 that I reported last month is fixed
<davechen1y> so I'll roll back the workaround this arvo
<thumper> w00t
<thumper> davechen1y: I guess I was wanting confirmation that removing elements from a map won't cause elements that should be iterated over to be skipped due to some underlying structure rebalance
<dimitern> voidspace, hey there
<voidspace> dimitern: hi
<dimitern> voidspace, I've confirmed that lxc-stop does not trigger a DHCPRELEASE on a normal container, but adding a quick upstart job to call ifdown -a does the trick (as observed using dhcpdump)
<voidspace> dimitern: awesome, that would help for the graceful shutdown
<voidspace> dimitern: but not forcible termination of host
<voidspace> dimitern: do we have to deal with vivid containers yet (systemd)?
<dimitern> voidspace, I suppose for vivid it should be the same, just the job is different
<voidspace> dimitern: yep
<dimitern> voidspace, yeah, not forcible termination of the host
<voidspace> dimitern: anyway, your findings match mine
<voidspace> dimitern: that graceful shutdown doesn't release but an explicit ifdown does
<dimitern> voidspace, yes indeed
<voidspace> dimitern: fixing the graceful shutdown case would be a big step forwards
<dimitern> voidspace, yeah, and it's easy to do I guess
<dimitern> voidspace, that's the job I verified to work: http://paste.ubuntu.com/11667420/
<dimitern> my tests so far were with a normal lxc container, now trying on maas
<voidspace> cool
<voidspace> looks good
<dimitern> voidspace, it works!
<dimitern> just deploying a container on maas and injecting the shutdown job causes it to trigger ifdown on shutdown (e.g. destroy-machine 0/lxc/0 --force)
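The actual job dimitern verified is behind the pastebin link above; the general shape of an upstart job that runs `ifdown -a` as the container shuts down looks roughly like this (filename and description are illustrative, and the exact `start on` trigger is an assumption):

```
# /etc/init/ifdown-on-shutdown.conf  (illustrative name)
description "release DHCP leases before the container halts"

# Run once when the system enters halt/reboot runlevels.
start on starting rc RUNLEVEL=[06]
task

exec ifdown -a
```

Bringing all interfaces down with `ifdown -a` makes the DHCP client send a DHCPRELEASE, which is what `lxc-stop` alone does not do.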
<voidspace> dimitern: awesome
<voidspace> Ship It!
<dimitern> voidspace, I haven't even proposed anything yet :)
<TheMue> fwereade: thx for the hint with the bulk calls, thats a better approach
<dimitern> gsamfira, hey there
<dimitern> gsamfira, are you around for a quick chat?
<perrito666> morning
<gsamfira> dimitern: heya
<gsamfira> dimitern: I am now
<mup> Bug #1463399 opened: TestMachineAgentSymlinkJujuRun fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463399>
<mup> Bug #1463401 opened: FactorySuite teardown fails <ci> <test-failure> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463401>
<wallyworld> gsamfira: hey, so looks like bug 1451626 now has a fix committed? can you update launchpad to reflect the status of the bug?
<mup> Bug #1451626: Erroneous Juju user data on Windows for Juju version 1.23 <1.23> <blocker> <juju> <oil> <regression> <windows> <juju-core:Triaged> <juju-core 1.24:In Progress by gabriel-samfira> <https://launchpad.net/bugs/1451626>
<wallyworld> so we know the beta7 is good to release
<wallyworld> (once CO passes)
<wallyworld> CI
<katco> wallyworld: aren't there still open bugs?
<katco> wallyworld: ah nm, just saw your previous message to gsamfira
<wallyworld> katco: just the cloudsigma one which we agreed could be deferred as it's behind a flag
<mgz_> wallyworld: the issue is juju doesn't build with that fix landed
<wallyworld> oh
<wallyworld> bollocks
<mgz_> http://reports.vapour.ws/releases/rules/239
<wallyworld> mgz_: katco: do we know the issue? is "someone" fixing it?
<mgz_> we need go 1.4 for it, which curtis raised a ml thread about
<katco> wallyworld: ericsnow was working with sergey on a work-around
<katco> wallyworld: i will follow up in our standup
<wallyworld> mgz_: i thought the idea of this bug fix was to remove the need for 1.4 by vendoring some other code
<wallyworld> ty
<wallyworld> i'll go away now :-)
<wallyworld> katco: and let's not mention the soccer :-(
<mgz_> wallyworld: you lost? :P
<wallyworld> not that i know of :-)
<wallyworld> some would say so
<katco> wallyworld: aus was amazing in the first half
<katco> wallyworld: i thought you were going to take the us
<wallyworld> katco: well, the USA goalkeeper was simply brilliant :-(
<katco> wallyworld: she really is... 2 or 3 huge saves
<wallyworld> yup :-(
<katco> wallyworld: the Aussies' right forward is a beast
<mgz_> ah, an actual international match
<katco> wallyworld: she is so fast
<katco> mgz_: woman's world cup
<katco> women's world cup rather
<wallyworld> katco: sadly i didn't see the match, just the highlights on the news
<wallyworld> i was going to ask "which woman" :-)
<mup> Bug #1463399 changed: TestMachineAgentSymlinkJujuRun fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463399>
<mup> Bug #1463401 changed: FactorySuite teardown fails <ci> <test-failure> <juju-core:Triaged> <juju-core 1.24:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463401>
<katco> haha
<mup> Bug #1463399 opened: TestMachineAgentSymlinkJujuRun fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463399>
<mup> Bug #1463401 opened: FactorySuite teardown fails <ci> <test-failure> <juju-core:Triaged> <juju-core 1.24:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463401>
<abentley> mgz_: Could you take a look at this?  It looks like a coding error, but it also looks intermittent: http://reports.vapour.ws/releases/2741/job/run-unit-tests-trusty-ppc64el/attempt/3176
<mup> Bug #1463408 opened: TestRunCommand fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core db-log:Triaged> <https://launchpad.net/bugs/1463408>
<perrito666> https://s-media-cache-ak0.pinimg.com/originals/1e/49/a9/1e49a91eeb15dcd60cf9a935a9b70e0e.jpg <-- google hangout
<mgz_> abentley: looking
<mgz_> abentley: I can't see how a build issue like that could be intermittent
<mgz_> most confusing
<abentley> mgz_: Me neither, but if you click "next", the next attempt got further.
<mgz_> is the clock skew somehow making us not run all the right code?
<mgz_> but the next run doesn't have clock skew...
<mgz_> are they run against different machines?
<mup> Bug #1451626 changed: Erroneous Juju user data on Windows for Juju version 1.23 <1.23> <blocker> <juju> <oil> <regression> <windows> <juju-core:Fix Released> <juju-core 1.24:Fix Released by gabriel-samfira> <https://launchpad.net/bugs/1451626>
<mup> Bug #1463420 opened: Zip archived tools needed for bootstrap on winows <juju-core:New for bteleaga> <https://launchpad.net/bugs/1463420>
<mgz_> nope, both .137
<mgz_> abentley: my best guess is our script cleanup is not good enough
<abentley> mgz_: Yes, that could be it.
<abentley> mgz_: The clock skew can be explained because the second run happened later.
<mgz_> yeah, with the same tarball
 * mgz_ pokes fun at bogdanteleaga for tyoping windows as 'winows'
<mgz_> it's like a small fish
<mgz_> (thanks for filing the bug :)
<bogdanteleaga> mgz_: oops :)
<mup> Bug #1449210 opened: cloudsigma index file has no data for cloud <bootstrap> <cloudsigma-provider> <tech-debt> <juju-core:Fix Committed by s-matyukevich> <juju-core 1.24:Triaged by s-matyukevich> <juju-core db-log:Triaged> <https://launchpad.net/bugs/1449210>
<mup> Bug #1463439 opened: golang.org/x/sys/windows requires go 1.4 <windows> <juju-core:Triaged> <juju-core 1.24:Fix Committed by gabriel-samfira> <https://launchpad.net/bugs/1463439>
<natefinch> ericsnow:  I was thinking of putting the plugin stuff in a directory under procmanager, just to try to keep things contained some. What do you think?
<mup> Bug #1228243 changed: juju provided peer relation leader feature <feature> <landscape> <juju-core:Fix Released> <https://launchpad.net/bugs/1228243>
<mup> Bug #1463455 opened: package github.com/juju/txn has conflicting licences <packaging> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463455>
<ericsnow> natefinch: sounds good
<katco> natefinch: was bug 1424901 fixed with our latest log changes?
<mup> Bug #1424901: Agents don't sent logs to any rsyslogd unless all targets are available <logging> <juju-core:Triaged> <https://launchpad.net/bugs/1424901>
<natefinch> katco: not likely.  I didn't change rsyslog at all
<natefinch> katco: at least functionally... just cleaned up the API of the function
<katco> natefinch: yeah, not the lumberjack stuff... i thought you had coordinated around rsyslog though?
<natefinch> katco: wwitzel3 did most of the rsyslog work.  I don't know the specific details of the code.
<katco> natefinch: k ty. wwitzel3, any comments?
<natefinch> katco: although, this might be the Go syslog stuff, which gsamfira wrote
<natefinch> I think wayne just did the receiving side
<natefinch> IIRC
<wwitzel3> I don't remember what I did for rsyslog
<natefinch> defensive amnesia, it's understandable
<wwitzel3> hah
<katco> wwitzel3: no worries. just wondering if we could close out that bug
<natefinch> katco: I betcha it still exists
<wwitzel3> my guess is no, it looks valid
<natefinch> the package hasn't been updated since way before that bug was filed
<natefinch> wwitzel3, katco: yeah, looks like any error connecting will exit out: https://github.com/juju/juju/blob/master/worker/rsyslog/worker.go#L203
<katco> natefinch: ty
<katco> natefinch: maybe comment on the bug since you went through the trouble to find that line?
<katco> natefinch: make sure you pin the GH link to that revision so its a perma-link
<natefinch> katco: I copied the link, but how do I pin the revision?  I was actually thinking that it would be easy for that link to get stale
<natefinch> katco: nvm I got it
<natefinch> dammit, I wish you could edit bug comments on launchpad :/
<katco> natefinch: can't you?
<katco> natefinch: or do you mean comments which aren't yours?
<natefinch> katco: I mean my own.  I don't see a UI to do anything other than "hide"
<wwitzel3> I just make sure everything I do is perfect the first time.
<natefinch> wwitzel3: ahh, that was my mistake
<wwitzel3> :)
<katco> wwitzel3: "i disagree!" /slap
<wwitzel3> katco: actually, I just shout No .. I disagree is too formal
<katco> haha
<wwitzel3> ericsnow: you are frozen
<mup> Bug #1463480 opened: Failed upgrade, mixed up HA addresses <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1463480>
<voidspace> dammit
<voidspace> ah well
<voidspace> bootstrapped without allocatable addresses on
<sinzui> katco: wwitzel3, natefinch. I see menn0 added the prune files to github.com/juju/txn as AGPL. the project is LGPL. Can you confirm this is a mistake? Would you accept my patch to change them to LGPL?
<natefinch> sinzui: I'm sure it was a mistake
<katco> sinzui: ditto
<sinzui> fab. I will take care of this
<natefinch> I hate it when people make a type that could implement io.Writer or io.Reader, but don't
 * natefinch is looking at you, loggo
<sinzui> katco: natefinch: can either of you review http://reviews.vapour.ws/r/1898/
<natefinch> sinzui: ship it!
<sinzui> thank you natefinch
<natefinch> katco, ericsnow: start of the plugins code here: https://github.com/natefinch/juju/blob/wpt-plugins/procmanager/plugins/plugins.go
<natefinch> afk for a couple hours
<ericsnow> natefinch-afk: cool
<sinzui> ericsnow: katco: do either of you have a few minutes to review http://reviews.vapour.ws/r/1899/
<ericsnow> sinzui: sure
<ericsnow> sinzui: ship-it!
<sinzui> thank you ericsnow
<sinzui> katco: ericsnow: Do either of you have a few minutes to review a change for proposed 1.24.0? http://reviews.vapour.ws/r/1900/
<ericsnow> sinzui: LGTM
<sinzui> thank you ericsnow
<katco> all: we have a new blocking bug (bug 1463480) for 1.22.6 and 1.24.1. please consider this top priority. i need a volunteer to do some triaging
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <juju-core:New> <juju-core 1.22:New> <juju-core 1.24:New> <https://launchpad.net/bugs/1463480>
<katco> perrito666: cherylj: wwitzel3: ericsnow: natefinch-afk: cmars
<katco> perrito666: cherylj: wwitzel3: ericsnow: natefinch-afk: cmars: anyone there?
<ericsnow> katco: I can take a look in a sec
<katco> ericsnow: ty, please remember to assign yourself to the bug (1.22 first) and mark as in progress
<katco> ericsnow: i added a card for you on the kanban
<perrito666> katco: sorry I was not paying attention at the irc monitor
<katco> perrito666: np, happens to me too
 * perrito666 swims in test failures
<sinzui> katco: ericsnow : do either of you have a moment to review http://reviews.vapour.ws/r/1902/
<katco> sinzui: tal
<katco> sinzui: ship it
<sinzui> thank you katco
<katco> sinzui: yw
<wwitzel3> mbruzek: ping
<mbruzek> wwitzel3:  https://plus.google.com/hangouts/_/canonical.com/dockercon?authuser=0
<wwitzel3> ericsnow: in moonstone now :)
<alexisb> thumper, ping
<thumper> alexisb: pong
<alexisb> heya thumper
<alexisb> had a critical bug come in on 1.22.6, I assigned it to waigani
<alexisb> it is a port of an existing fix he has already done
<alexisb> lp 1463480
<alexisb> lp1463480
<alexisb> https://bugs.launchpad.net/juju-core/+bug/1463480
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Incomplete> <juju-core 1.22:Incomplete> <juju-core 1.24:Incomplete> <https://launchpad.net/bugs/1463480>
<alexisb> there we go
<alexisb> crap
<alexisb> wait wrong bug :)
<alexisb> https://bugs.launchpad.net/juju-core/+bug/1441478
<mup> Bug #1441478: state: availability zone upgrade fails if containers are present <canonical-bootstack> <upgrade-juju> <juju-core:Fix Committed by waigani> <juju-core 1.22:New for waigani> <juju-core 1.24:Fix Released by waigani> <https://launchpad.net/bugs/1441478>
<thumper> alexisb: so this is just a backport of an existing fix?
<alexisb> yep
<thumper> kk, I'll bring it up in the standup
<alexisb> if for whatever reason we cannot do a port to 1.22 please update the bug with a summary and mark it as will not fix
<alexisb> so that I can report back
<thumper> k
<natefinch> back in a while folks
<mup> Bug #1463455 changed: package github.com/juju/txn has conflicting licences <packaging> <juju-core:Fix Released by sinzui> <juju-core 1.22:Fix Committed by sinzui> <juju-core 1.24:In Progress by sinzui> <https://launchpad.net/bugs/1463455>
<Syed_A> Hello, I am curious as to how "juju-trusty-lxc-template" is generated and where it resides on the "juju deploy" node
<katco> ericsnow: meeting
<mup> Bug #1463401 changed: FactorySuite teardown fails <ci> <test-failure> <juju-core:Invalid> <juju-core 1.24:Invalid> <juju-core jes-cli:Invalid> <https://launchpad.net/bugs/1463401>
<thumper> cmars: sorry for missing our chat
<thumper> cmars: did you want to catch up?
<mup> Bug #1463401 opened: FactorySuite teardown fails <ci> <test-failure> <juju-core:Invalid> <juju-core 1.24:Invalid> <juju-core jes-cli:Invalid> <https://launchpad.net/bugs/1463401>
<mup> Bug #1463401 changed: FactorySuite teardown fails <ci> <test-failure> <juju-core:Invalid> <juju-core 1.24:Invalid> <juju-core jes-cli:Invalid> <https://launchpad.net/bugs/1463401>
<waigani> alexisb, thumper: fix committed on 1.22.6 for #1441478
<mup> Bug #1441478: state: availability zone upgrade fails if containers are present <canonical-bootstack> <upgrade-juju> <juju-core:Fix Committed by waigani> <juju-core 1.22:Fix Committed by waigani> <juju-core 1.24:Fix Released by waigani> <https://launchpad.net/bugs/1441478>
<alexisb> waigani, thank you!
<waigani> alexisb: np :)
<alexisb> alrighty all, I am out for the day see you all tomorrow!
<mup> Bug #1463608 opened: Deprecate support for 32-bit and PV AMI's for AWS Images <juju-core:New> <https://launchpad.net/bugs/1463608>
#juju-dev 2015-06-10
<natefinch> niemeyer: you around?
<thumper> wallyworld: ping
<wallyworld> yo
<thumper> chat?
<wallyworld> sure
<thumper> 1:1
<perrito666> natefinch: its nearly midnight where he lives
<natefinch> perrito666: yeah, figured
<natefinch> perrito666: same where you live though, right?
<perrito666> natefinch: yup
<mup> Bug #1463641 opened: lease: data race in tests (again) <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1463641>
<mup> Bug #1463643 opened: worker/provisioner: 9 data races in tests <juju-core:Confirmed for dave-cheney> <https://launchpad.net/bugs/1463643>
<davechen1y>  % go test
<davechen1y> OOPS: 91 passed, 7 FAILED
<davechen1y> --- FAIL: Test (79.72s)
<davechen1y> FAIL
<davechen1y> exit status 1
<davechen1y> FAIL    github.com/juju/ju
<davechen1y> well, gocheck, don't keep it to yourself ...
<menn0> davechen1y: just reviewing your PR
<menn0> davechen1y: how did races creep in again?
<davechen1y> menn0: i don't know
<menn0> davechen1y: someone messed up conflict resolution during a merge perhaps
<davechen1y> no idea, but it got unfixed
<menn0> davechen1y: ship it although the test still sucks
<menn0> davechen1y: but that's not your fault
<natefinch> davechen1y: am I right in assuming that if I have an RPC method that doesn't have any return data, I still have to give it an argument, something like this: https://github.com/natefinch/juju/blob/wpt-plugins/procmanager/plugins/api/err.go#L13
<davechen1y> natefinch: sorry
<davechen1y> no idea off hand
<natefinch> davechen1y: np
<davechen1y> i think you can call rpc methods that don't have a return value
<davechen1y> but i cannot think of any off hand
<menn0> davechen1y: the goroutine that's defined in-line in the test seems unnecessary
<menn0> davechen1y: AFAICS the select at the bottom should be reading directly from the subscription channel
<menn0> davechen1y: and the way things are now if the lease is never released that goroutine will hang around forever
<davechen1y> oh no
<davechen1y> gocheck isn't thread safe
<davechen1y> i think i need to go and have a lie down
<davechen1y> menn0: i'm just applying the old fix from PR 2422
<menn0> davechen1y: yeah I know. i don't expect you to fix it.
<davechen1y> i don't want to marry the test
<menn0> davechen1y: i just noticed that it could be better while reviewing your change
<davechen1y> patches accepted :thumbsup:
<davechen1y> in david britton's world, any test importing "time" would be shot on sight
<thumper> davechen1y: http://paste.ubuntu.com/11686336/
<thumper> davechen1y: wondering about the best way to test this
<thumper> davechen1y: fails on windows due to OS error message being different
<thumper> davechen1y: is there a nice OS agnostic way to do this better?
<davechen1y> thumper: does that error fit through os.IsNotExist ? http://golang.org/pkg/os/#IsNotExist
<davechen1y> if it's been gift wrapped, you might have to write a helper to unwrap it
<thumper> kk
<thumper> I'll check
<davechen1y> does anyone remember off the top of their head if it is safe to call c.Fail inside a goroutine ?
<davechen1y> this is the same rule as t.Fatal
<davechen1y> oh no
<davechen1y> actually this is far worse
<davechen1y> c.Assert in a goroutine triggers a race
<axw> wallyworld: I've put up a PR that addresses all the storage relationships I can. there's a bunch more we'll need to do when we support shared storage
<mup> Bug #1463661 opened: worker/provisioner: kvm broker test failure  <juju-core:New> <https://launchpad.net/bugs/1463661>
<axw> wallyworld: and detaching/attaching  volumes  and filesystems from machines
<wallyworld> fair enough, ty, will look
<thumper> alrighty then, I'm done
<thumper> laters
<wallyworld> axw: given in this new pr SetFilesystemInfo ensures any required volume is provisioned, that means that 1.24 could be broken?
<axw> wallyworld: in theory. it's just a safeguard, the worker should be doing the right thing anyway
<axw> wallyworld: storageprovisioner waits until the volume is provisioned before trying to provision the filesystem
<wallyworld> ok
<mup> Bug #1463643 changed: worker/provisioner: 9 data races in tests <juju-core:Confirmed for dave-cheney> <https://launchpad.net/bugs/1463643>
<dimitern> TheMue, standup?
<mup> Bug #1463643 opened: worker/provisioner: 9 data races in tests <juju-core:Confirmed for dave-cheney> <https://launchpad.net/bugs/1463643>
<mup> Bug #1463643 changed: worker/provisioner: 9 data races in tests <juju-core:Confirmed for dave-cheney> <https://launchpad.net/bugs/1463643>
 * TheMue discovered params.ErrorResults.Combine(), nice
<wwitzel3> katco: ping
<jam> fwereade: probably a public issue. I'm trying to run "juju run relation-get" but I'm running into a problem because it is a peer relation
<jam> specifically, trying to do "relation-ids" returns an empty list
<jam> and relation-list et al all tell me I'm using an "unknown relation id"
<jam> though if you pass the name of a known-incorrect relation, it doesn't give an error, so it's entirely possible I'm doing it wrong
<jam> ok it was pebkac. I wasn't specifying the endpoint name correctly
<fwereade> jam, that was not a deliberate http://dilbert.com/strip/1997-11-06 but glad to see it's resolved :)
<jam> fwereade: heh. So one problem is that finding the relation-id means it is a 2 round trip process
<fwereade> jam, if it's inconvenient to jam the whole script into juju-run, you could just make it part of the charm and juju-run that script
<jam> fwereade: I'm trying out "relation-get -r $(relation-ids endpoint)" if you assume there is only one, which seems to work
<fwereade> jam, if it's a peer relation you can be sure
<natefinch> niemeyer: note the difference between https://godoc.org/github.com/natefinch/juju and http://godoc.org/gopkg.in/natefinch/juju.v0
<natefinch> niedbalski: the latter doesn't show the directories (except the one I manually typed into the url at one point)
<natefinch> niemeyer:  ^ (sorry niedbalski)
<perrito666> morning
<natefinch> morning perrito666
 * natefinch found a neat hack to show godoc of his WIP branches... just git tag your branch as v0 and then you can serve it up via godoc: https://godoc.org/gopkg.in/natefinch/juju.v0/procmanager/plugins/api
<katco> dimitern: this looked like it might be a networking bug (bug 1463480). it's a blocker; has anyone in our half of the world looked at this yet?
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Incomplete> <juju-core 1.22:Incomplete> <juju-core 1.24:Incomplete> <https://launchpad.net/bugs/1463480>
<dimitern> katco, I'll have a look
<katco> dimitern: ty sir
<dimitern> katco, it doesn't quite look like a networking issue, more likely something around sorting / picking a private address for the unit
<dimitern> katco, I'll add comments
 * fwereade off collecting laura again, cath's back this evening, normal service will resume shortly
<sinzui> dimitern: katco : We need an engineer to comment on bug 1463608. Does modern Juju use pv (paravirtual) types in ec2? That type is going away along with i386?
<mup> Bug #1463608: Deprecate support for 32-bit and PV AMI's for AWS Images <ec2-provider> <i386> <streams> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1463608>
<mgz_> sinzui: we have instance types with pv, yeah
<mgz_> and have had user requests to keep t1.micro working, even though not all aws regions provide it
<sinzui> mgz_: Yeah. I recalled the request to keep that running, but AWS doesn't agree. I think this raised the priority of my concern that we are losing our ability to retest old juju
 * sinzui want to stop making i386 agents.
<mgz_> sinzui: I think in all current supported jujus we can always force the instance type specifically with a constraint
<mgz_> provided there's at least one instance type that juju knows about that's still available
<mup> Bug #1463826 opened: TestProvisioningDoesNotOccurWithAnInvalidEnvironment fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1463826>
<sinzui> mgz_: http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:aws.json is still dominated by pv. I assume hvm will become the preferred type
<mgz_> sinzui: yes, I think we should have a hvm one for each release though?
<sinzui> mgz_: I think so. Our get_ami prefers pv. I think all AMIs we provision are pv
<mup> Bug #1458721 changed: lease: data races in tests <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1458721>
<mup> Bug #1459085 changed: worker/logger: data race in tests <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1459085>
<mup> Bug #1461385 changed: apiserver/instancepoller: data race in test <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1461385>
<mup> Bug #1458721 opened: lease: data races in tests <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1458721>
<mup> Bug #1459085 opened: worker/logger: data race in tests <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1459085>
<mup> Bug #1461385 opened: apiserver/instancepoller: data race in test <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1461385>
<mup> Bug # changed: 1456315, 1458721, 1459085, 1461385, 1462412
<sinzui> mgz_: can you review http://reviews.vapour.ws/r/1909/
<mgz_> sinzui: lgtm
<sinzui> thank you mgz_
<mup> Bug #1463870 opened: unitSuite teardown fails <ci> <unit-tests> <juju-core:Incomplete> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463870>
<voidspace> dimitern: it's simple
<voidspace> dimitern: we're not populating the NetworkInfo of StartInstanceResult inside the StartInstance method of the brokers
<voidspace> dimitern: easy fix
<dimitern> voidspace, really? I thought we did..
 * dimitern is looking at commit history of lxc-broker.go
<voidspace> dimitern: take a look at the last line of StartInstance in lxc-broker.go
<dimitern> voidspace, aw ffs..
<dimitern> voidspace, of course :) good catch! I was thinking of args.NetworkInfo being updated after PCII()
<voidspace> dimitern: at least it's easy to fix :-)
<voidspace> dimitern: from my debugging, it *will* be propagated all the way up
<dimitern> voidspace, yeah, easy fix indeed, but please make a quick live test to make sure populating it won't break anything
<voidspace> dimitern: sure
<voidspace> dimitern: it *really* shouldn't
<voidspace> dimitern: but you never know I guess...
<dimitern> voidspace, yeah, it might break some tests around juju status, if it expects no networks to be listed
<dimitern> voidspace, as the networks added during SetInstanceInfo will show up there
<voidspace> dimitern: sure, I'll have to fix test failures
<voidspace> dimitern: and I'll check any failures to see if they're significant
<dimitern> voidspace, cheers!
<TheMue> this state.IPAddress <-> network.Address <-> params.Address sometimes drives me crazy.
<TheMue> would have liked a dumb type model package with pure structs for our model and and pure function interface to State (or however the persistency layer is called).
<TheMue> and the api as well as the persistency layer are accepting the dumb model
 * TheMue takes a relaxing bike ride home and continues later
<voidspace> dimitern: ooh, now I have a provisioner error - invalid network name, so it looks like the parameters need a bit of massaging
<voidspace> digging in
<voidspace> empty network name: ""
<dooferlad> TheMue: Could you do me a review of http://reviews.vapour.ws/r/1910/ ?
<dimitern> voidspace, right
<dimitern> voidspace, that's because NetworkName should be set on the InterfaceInfo
<voidspace> dimitern: sure, but I just passed it through from the BridgeNetworkConfig
<dimitern> voidspace, for now I'd suggest just turning the empty name into a warning log instead of an error in state.AddNetwork
<voidspace> or maybe just not add the network
<voidspace> or does every interface need to be associated with a network?
<voidspace> I'll look
<dimitern> voidspace, technically - yes, but it's not used anywhere yet
<voidspace> dimitern: I can still ssh into the container with "juju ssh"
<voidspace> dimitern: so it's still created ok
<dimitern> voidspace, that's good :)
<katco> ericsnow: hey reviewing your patch http://reviews.vapour.ws/r/1905
<ericsnow> katco: thanks
<katco> ericsnow: you lead in saying it's a refactor, but it looks like there's some new stuff? is that right?
<ericsnow> katco: depends on how you look at it :)
<katco> ericsnow: well for instance, i'm not seeing where the leadership stuff came from?
<ericsnow> katco: what leadership stuff?
<ericsnow> katco: ah
<ericsnow> katco: the old test double didn't have those methods explicitly but I added them
<katco> ok, ty, that helps frame things
<dimitern> dooferlad, you have a review btw
<dimitern> dooferlad, good work!
<dimitern> dooferlad, don't forget to move your card ;)
<mup> Bug #1461871 changed: worker/diskmanager sometimes goes into a restart loop due to failing to update state <canonical-bootstack> <storage> <juju-core:Fix Released> <juju-core 1.22:Fix Committed by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1461871>
<mup> Bug #1461871 opened: worker/diskmanager sometimes goes into a restart loop due to failing to update state <canonical-bootstack> <storage> <juju-core:Fix Released> <juju-core 1.22:Fix Committed by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1461871>
<mup> Bug #1461871 changed: worker/diskmanager sometimes goes into a restart loop due to failing to update state <canonical-bootstack> <storage> <juju-core:Fix Released> <juju-core 1.22:Fix Committed by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1461871>
<mup> Bug #1463904 opened: TestReadLeadershipSettings fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1463904>
<natefinch> ericsnow, wwitzel3, katco: I need input on whether my plugin architecture is overkill or not: https://godoc.org/gopkg.in/natefinch/juju.v0/procmanager/plugins/api
<ericsnow> natefinch: k
<natefinch> I think that if we never add anything to it, then it is.  But I think the chances that we never add anything to it are small.
<katco> natefinch: tal
<natefinch> right now it can all be accomplished with simple CLI input and stdout output with return codes.....
<ericsnow> katco: thanks for the review
<katco> ericsnow: ty for the refactor
<katco> natefinch: it doesn't seem like overkill to me
<katco> natefinch: one suggestion: for states: maybe just have 1 "errored" state, and allow juju to query the plugin for more info?
<katco> natefinch: it could allow for plugin-specific detail w/o having to enumerate all possible states
<natefinch> katco: my concern is that I'm making plugin authors write a whole RPC client thingy, when, right now, the requirements are simple enough to allow just printing to stdout and signifying an error via a non-zero exit code.
<katco> natefinch: that is a good point
<katco> natefinch: let's look at it practically... what languages will people most likely be writing plugins in?
<natefinch> katco: no idea.  It would be trivial for us to make a helper packages for Go and python.... but as I state in my doc, it pretty much precludes bash.
<katco> natefinch: well not completely, right? bash can just as easily fork out to something that could create the json
<natefinch> katco: well, yes.  but it's a lot harder than it is in a real programming language where someone's already written all the json-rpc code.
<katco> natefinch: well, what i mean to say is: echo $(python my-rpc-generator --id=1 --err="Couldn't start container") or some such
<katco> natefinch: plugins can look for assistance for the heavy lifting
<natefinch> I guess the question is - how likely is it that the requirements for these plugins is going to get significantly more complicated?  If it's pretty likely, then I think the current architecture is fine.  If it's not likely to change, then we might want to lower the barrier of entry for third parties writing plugins (that being said, I kinda wonder how many third parties we'll really have writing plugins for this).
<katco> natefinch: i think you're right to worry about the way juju interacts with containers as something that will evolve over time.
<katco> natefinch: that's my take. i'd love ericsnow and wwitzel3's
<katco> natefinch: i think the p(the way juju interacts w/ containers changes) > p(need to support more containers)
<mup> Bug #1463910 opened: Upgrade tests timeout on ppc64 <ci> <gccgo> <intermittent-failure> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463910>
<ericsnow> natefinch: it may be too much
<ericsnow> natefinch: we need 3 commands (launch/status/stop)
<ericsnow> natefinch: I don't think we need an Info; launch should return the LaunchDetails
<natefinch> ericsnow: info is status
<ericsnow> natefinch: the status command should return just the status string that makes sense for the plugin
<ericsnow> natefinch: I could see the status command returning extra volatile status-like data
<natefinch> ericsnow: this is what I have for how juju calls the plugins: https://godoc.org/gopkg.in/natefinch/juju.v2/procmanager/plugins
<ericsnow> natefinch: having the status command return ProcessInfo means the plugin would have to store all that information (information Juju already has and would ignore)
<natefinch> ericsnow: I was assuming that you're getting information from the plugin, and storing that in state... otherwise, where are you getting that information from
<ericsnow> natefinch: that is correct
<ericsnow> natefinch: the plugin is not expected to store that information
<natefinch> ericsnow: I didn't think it would.  Info just returns information about the process... whatever information it thinks is appropriate
<ericsnow> natefinch: the only thing we need right now would be status
<ericsnow> natefinch: I'm not sure that extra "Details" field is worth it
<ericsnow> natefinch: is your branch based on our feature branch?
<natefinch> ericsnow: no...  I could make it so, though
<ericsnow> natefinch: that may help you
<natefinch> ericsnow: I was referencing it, and the doc... under "info" it has "technology specific details" which I figured must come from the plugin
<ericsnow> natefinch: those details are a one-time thing at launch time
<natefinch> ericsnow: ok, I didn't realize that
<ericsnow> natefinch: ah, sorry that wasn't more clear
<mup> Bug #1463922 opened: Text file busy <bootstrap> <ci> <intermittent-failure> <juju-core:Incomplete> <juju-core feature-proc-mgmt:Triaged> <https://launchpad.net/bugs/1463922>
<natefinch> katco, ericsnow: ok, so, I think what I'll do is rework the plugins to just be CLI input writing to stdout, and if we later need to make them complicated, we can just create new-style plugins or something.
<ericsnow> natefinch: k
<cholcombe> are there certain things that are not available in the leader context?  when leader-elected is run I tried to gather some context info and nothing worked
<natefinch> cholcombe: should be the same as any other context
<cholcombe> ok then i'm seeing some odd behavior
<cholcombe> in the leader-elected hook JUJU_RELATION_ID and JUJU_RELATION are coming up blank
<natefinch> I'm not 100% up to speed on the leader election stuff.... but those are relation-specific variables that generally only get set during relation hooks (relation changed, relation joined etc).
<cholcombe> ok
<cholcombe> so what should i be doing in the leader-elected hook?
<cholcombe> i'm not sure i have enough info in that hook to do anything
<natefinch> cholcombe: that's possible.  you can always run is-leader or leader-get from other hooks to get info about the leader
<cholcombe> yeah that's what is also strange
<cholcombe> when i run is-leader in the other hooks they always return False
<cholcombe> i'm on juju 1.23.3
<mup> Bug #1461578 opened: TestKVMProvisionerObservesConfigChange fails <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1461578>
<natefinch> cholcombe: The guy who knows most about the leadership stuff is not here right now, though I think he said he'll be on later.  Look for fwereade.  Unfortunately, I don't know very much about it, since it's a new feature and still in the experimental phase.  It should *function* however...
<cholcombe> ok thanks :)
<ericsnow> katco: PTAL: https://github.com/juju/charm/pull/137
 * natefinch just got smacked with a "Pull Requests welcome" when filing a bug against godoc.org :/
<katco> ericsnow: this could use some comments: https://github.com/juju/charm/pull/137/files#diff-c0e084d1eb96781ebe2d307ae3dcebcbR265
<katco> ericsnow: i'm having a little trouble understanding what that code is trying to do
<katco> ericsnow: the public method doesn't have a comment either (drive-by fix)
<ericsnow> katco: k
<ericsnow> katco: feel free to leave comments on the PR
<katco> ericsnow: sorry, doing so
<natefinch> ericsnow: so you said launch is expected to return an id and some details?
 * perrito666 read lunch
<natefinch> haha
<ericsnow> natefinch: yep
<natefinch> ericsnow: where details is... like a map or something?
<ericsnow> natefinch: LaunchDetails in package/plugin.go
<natefinch> ericsnow: did you push?  I don't see it: https://github.com/juju/juju/blob/feature-proc-mgmt/process/plugin.go
<ericsnow> natefinch: oh, it's ProcessDetails
<natefinch> ericsnow: I figured status was what was returned by the Status call
<ericsnow> natefinch: yep
<ericsnow> natefinch: it's in ProcessDetails for convenience
<natefinch> ok, so... launch should return the ID of the process and also whatever status on that id would report?
<katco> i've been neglectful of my duties. someone needs to be looking at this: bug 1463480
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Triaged> <juju-core 1.22:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463480>
<katco> (let ((victims '(wwitzel3 ericsnow perrito666 natefinch cherylj)))
<katco>   (elt victims (random (length victims))))
<katco> cherylj you have been selected by my computer's pseudorandomness
<natefinch> katco: you need a bot to do that
<katco> natefinch: since i use emacs, my irc client is the bot :)
<natefinch> katco: yes, but if there was a bot then other people with lesser editors could use it, too :)
<katco> mup: why can't you do this, ya mup.
<mup> katco: I apologize, but I'm pretty strict about only responding to known commands.
<natefinch> mup: help
<mup> natefinch: Run "help <cmdname>" for details on: bug, contrib, echo, help, infer, login, poke, register, run, sendraw, sms
<cherylj> heh
<katco> mup: help infer
<mup> katco: infer [-all] <query ...> - Queries the WolframAlpha engine.
<mup> katco: If -all is provided, all known information about the query is displayed rather than just the primary results.
<cherylj> katco: yeah, I can take a look...
<katco> mup: infer randomly select between 1 and 4
<mup> katco: 1.
<katco> mup: infer randomly select between these options: natefinch wwitzel3 ericsnow and cherylj
<mup> katco: Cannot infer much out of this. :-(
<katco> mup: infer randomly select natefinch wwitzel3 ericsnow cherylj
<mup> katco: Cannot infer much out of this. :-(
<katco> mup: infer RandomChoice{"natefinch"|"wwitzel3"|"ericsnow"|"cherylj"}
<mup> katco: Cannot infer much out of this. :-(
<katco> mup: infer RandomChoice{"natefinch" | "wwitzel3" | "ericsnow" | "cherylj"}
<mup> katco: Cannot infer much out of this. :-(
<katco> gah
<katco> i thought gustavo was using WA for this
<katco> mup: infer how good is juju?
<mup> katco: Cannot infer much out of this. :-(
<katco> cherylj: sorry, thank you for tal.
<perrito666> katco: it will be a lot easier to go look for the code
<katco> cherylj: i was having too much fun with mup
<katco> perrito666: RandomChoice works on WA's web interface
<perrito666> katco: the code of mup
<alexisb> alright all, I am shutting down to head into town to meet up with canonical pdxers, if you need me while I am not online call my cell
<katco> alexlist: tc
<natefinch>  mup: infer RandomChoice[{'wwitzel', 'ericsnow', 'natefinch', 'cherylj'}]
<katco> lol
<katco> natefinch: mup straight up just doesn't like you
<natefinch> mup: infer RandomChoice[{'wwitzel', 'ericsnow', 'natefinch', 'cherylj'}]
<mup> natefinch: natefinch.
<katco> LOL
<natefinch> lol
<natefinch> mup: infer RandomChoice[{'wwitzel', 'ericsnow', 'natefinch', 'cherylj'}]
<mup> natefinch: cherylj.
<natefinch> there ya go
<katco> confirmed. doesn't like you.
<natefinch> ericsnow: so... when I launch a process, should launch return both the id and whatever status(id) would return?  Or is there special data that is expected to be returned only at launch time?
<ericsnow> natefinch: yeah, there may be extra data
<natefinch> ericsnow: ok, so I'm going to return a string id and a map[string]interface{} that'll be whatever yaml garbage launch wants to spit out
<ericsnow> natefinch: yep
 * natefinch assumes we'll want it to be yaml because that's the Juju way, even though yaml is terrible ;)
<natefinch> katco, ericsnow: new plugins - https://godoc.org/gopkg.in/natefinch/juju.v3/process/plugin
<natefinch> out for dinner, back in a few hours
<thumper> mramm: I'm in our hangout a bit early if you are free
<thumper> davecheney: would https://github.com/go-check/check/pull/35/files fix our problem?
<thumper> trivial review for someone: http://reviews.vapour.ws/r/1911/diff/#
<thumper> if we had a trivial tag, I'd just land it :-)
#juju-dev 2015-06-11
<davecheney> thumper: thats most of the fix
<davecheney> there is still another place they need to put a mutex, in _loopWorker or whatever it's called
<cherylj> I think this bootstack bug #1463480 might be a dup of bug #1416928.  Can anyone take a look and sanity check for me?
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Triaged> <juju-core 1.22:In Progress by cherylj> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463480>
<mup> Bug #1416928: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Fix Released by dooferlad> <juju-core 1.21:Fix Released by dimitern> <juju-core 1.22:Fix Released by dooferlad> <https://launchpad.net/bugs/1416928>
<waigani> wallyworld: PR for #1447899 https://bugs.launchpad.net/juju-core/+bug/1447899 I assume we'll also want to target 1.22?
<mup> Bug #1447899: upgrade fails if no explicit version is specified <upgrade-juju> <juju-core:In Progress by waigani> <https://launchpad.net/bugs/1447899>
<wallyworld> waigani: um, we do have a window for 1.22.6 fixes. i don't know that this bug is worth backporting though
<waigani> wallyworld: okay, that PR targets 1.25 - as specified by the bug report
<wallyworld> sounds good ty
<axw> waigani: have you tested that tools upgrade fix live?
<waigani> axw: yes, sorry forgot to put that in the PR
<waigani> axw: tested on aws
<waigani> axw: ec2 provider
<axw> waigani: nps, thanks. LGTM
<waigani> axw: used same versions as in original bug
<axw> cool
<waigani> axw: awesome, thanks
<davecheney> thumper: panic: rescanned document misses transaction in queue
<davecheney> goroutine 189 [running]:
<davecheney> runtime.panic(0xe6fa60, 0xc210b9e990) /usr/lib/go/src/pkg/runtime/panic.c:266 +0xb6
<davecheney> your build just failed with this error
<davecheney> menn0: this looks very serious
<thumper> ?!
<axw> waigani: with forward/back ports, I don't normally wait for LGTM unless there were significant merge differences/conflicts
<thumper> davecheney: I think this is what William has been looking at fixing recently
<waigani> axw: yep, same. It just automatically pops up in RB. I usually close them straight away.
<menn0> davecheney: https://bugs.launchpad.net/juju-core/+bug/1449054
<mup> Bug #1449054: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Triaged by fwereade> <juju-core 1.22:Fix Released by dimitern> <juju-core 1.23:Won't Fix by fwereade> <juju-core 1.24:Fix Released by fwereade> <https://launchpad.net/bugs/1449054>
<axw> waigani: okey dokey, just thought I'd mention in case you were waiting around
<axw> ship-it'd anyway
<menn0> davecheney: looks like it's been fixed in 1.22, 1.23 and 1.24 but not master
<menn0> actually, not 1.23
<menn0> it's probably just a matter of bumping the mgo version but it hasn't been done for some reason
<waigani> axw: sweet, cheers
<davecheney> waigani: are there any details for this bug ? https://canonical.leankit.com/Boards/View/115065967/115396849
<waigani> davecheney: only happens on my machine. Let me write you an email with details.
<davecheney> ok
<natefinch> davecheney, wallyworld: this was the talk I was thinking of.  The whole thing is very good, but the link brings you to a 2 minute section where he talks about not using assert libraries for testing.  https://www.youtube.com/watch?v=yi5A3cK1LNA&t=9m40s
<natefinch> wallyworld, davecheney:  here's a test file for a package I wrote recently with tests just using the stdlib's testing package.  https://github.com/natefinch/pie/blob/master/pie_test.go    I don't think it's terribly more verbose than using gocheck.
<wallyworld> natefinch: it's terrible IMO :-) all those if statements when a single line assert will do
<wallyworld> 3 lines of syntax vs 1
<wallyworld> and no tooling for things like deep equals, err nil, length checks etc
<natefinch> If statements... yeah, those are rough
<natefinch> especially simple ones like if len(a) != expected {
<wallyworld> boilerplate
<wallyworld> and you've only done simple checks so far
<wallyworld> wait till you need to do deep equals, or same contents etc etc
<natefinch> wallyworld: sure, some helper functions can help... but you don't need thousands of lines of code just to avoid writing an if statement.... line returns aren't harder to type than other characters
<wallyworld> you're missing the point - frameworks do more than just eliminate lots of boilerplate
<natefinch> wallyworld: and honestly, most of my gocheck code looks just like these tests.  I really try hard to make tests simple
<wallyworld> they ensure consistency and common patterns across the code base
<davecheney> natefinch: wallyworld leave me out of this
<natefinch> davecheney: heh will do
<davecheney> i don't want to start a land war in asia
<davecheney> all I want is gocheck to be maintained
<wallyworld> +1
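The stdlib-vs-gocheck trade-off debated above can be sketched in a few lines. `add` and `checkEqual` are invented names for illustration, not juju code, and the gocheck equivalent appears only as a comment since gopkg.in/check.v1 is an external package.

```go
package main

import "fmt"

// add is a trivial function under test (an invented example).
func add(a, b int) int { return a + b }

// checkEqual is the kind of tiny helper natefinch mentions: it keeps
// stdlib-style tests down to one line per assertion without a framework.
// The gocheck equivalent would be: c.Assert(got, gc.Equals, want)
func checkEqual(name string, got, want int) error {
	if got != want {
		return fmt.Errorf("%s: got %d, want %d", name, got, want)
	}
	return nil
}

func main() {
	// stdlib style without a helper: three lines of boilerplate per check
	if got := add(2, 3); got != 5 {
		panic(fmt.Sprintf("add: got %d, want 5", got))
	}
	// with the helper: one line per check
	if err := checkEqual("add", add(2, 3), 5); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}
```

The framework argument is about deep equality, error matching, and common patterns rather than just line count; the helper only covers the simplest case.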
<thumper> wallyworld,axw: ping
<wallyworld> wot
<thumper> looking at fixing the help command
<wallyworld> \o/
<thumper> so we can go:
<thumper> juju help system environments
<thumper> and not have to put the help in the middle
<wallyworld> -e as well?
<thumper> however, storage is doing weird shit
<thumper> -e is harder and out of scope from this change
<wallyworld> storage is 3 levels
<thumper> why do you have your own Command struct in storage for the super command
<thumper> that doesn't matter
<thumper> by wrapping it in another struct
<thumper> it means I can't go:
<wallyworld> i can't give you a good answer off hand
<thumper> juju help storage pool
<thumper> I may just throw that piece away and remove a level of indirection
<thumper> I think it'll still work as expected
<wallyworld> let me go look at the code real quick
<wallyworld> thumper: yeah, i have NFI, that doesn't make sense why it was done
<wallyworld> remove it i say
<thumper> ok, I'll do that in my branch
<wallyworld> ty
<thumper> yep, still works
<wallyworld> axw: would it make sense for resource-get to send its streamed resource data to its stdout and the charm which invoked the get can then suck on stdout to consume? or redirect to file or pipe elsewhere or whatever
<axw> wallyworld: yes, that's what I was thinking
<axw> wallyworld: so you can "resource-get thingy | my-fancy-stream-processing-program" or whatever
<wallyworld> axw: the reason i initially went with an option for consuming the data in the hook was convenience
<wallyworld> yep
<axw> wallyworld: consuming the data?
<axw> wallyworld: you mean unpacking onto disk?
<wallyworld> save the charm author from then having to invoke the hook tool
<axw> or installing deb or whatever
<wallyworld> yeah, consume the data
<axw> wallyworld: yes, that's probably the common scenario, so we should have that as an option/probably default
<wallyworld> ok, will add it back
<wallyworld> i still think we'll need to introduce a usage-type
<axw> wallyworld: sorry, I didn't realise you took it out in favour of this
<wallyworld> np :-)
<axw> wallyworld: I was thinking it'd be a property of the resource type. is that conflating?
<wallyworld> axw: yeah, because some charms will want to just get the deb or zip as a file, others would want the unit agent to install or unpack, say
<wallyworld> IMO
<wallyworld> so having a separate usage-type attribute allows the charm to say it wants not just the file, but some type specific post processing
<axw> wallyworld: the reason that feels awkward to me, is that what you do with the resource depends on the type of the resource. and the type of the resource may change depending on how you deploy the service, right?
<axw> wallyworld: e.g. I might deploy with a deb to install mysql, or with a github repo that's my fork of mysql
<axw> wallyworld: so what does the usage type mean in that case?
<wallyworld> they would be different types though , right?
<wallyworld> this is hard to discuss here, let's defer to tomorrow in 1:1
<axw> wallyworld: sure :)
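The streaming model wallyworld and axw discuss above can be sketched as a pipe shape. The real `resource-get` hook tool only exists inside a hook context, so a `printf` stand-in is used here to keep the example runnable; `thingy.blob` is an invented filename.

```shell
# Stand-in for `resource-get thingy`, which would stream the resource
# bytes to stdout inside a hook.
resource_get() { printf 'resource-bytes'; }

resource_get | wc -c             # stream into a processing program
resource_get > /tmp/thingy.blob  # or redirect to a file to unpack later
```

Either shape leaves the decision of how to consume the data (unpack, install a deb, etc.) to the charm, which is the convenience the default option discussed above would take back over.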
<davecheney>     // TODO(rog) Fix this so it doesn't wait for so long.
<davecheney>     // https://bugs.launchpad.net/juju-core/+bug/1163983
<davecheney>     c.Fatalf("timed out waiting for agent to terminate")
<davecheney> ... Error: timed out waiting for agent to terminate
<mup> Bug #1163983: some tests take an unnecessarily long time waiting for the poll interval <performance> <tech-debt> <testing> <juju-core:Triaged> <https://launchpad.net/bugs/1163983>
<davecheney> .... seriously
<TheMue> dimitern: ping
<dimitern> TheMue, pong
<TheMue> dimitern: one question regarding IPAddress() and the address type
<TheMue> dimitern: in state
<dimitern> TheMue, yeah?
<TheMue> dimitern: the signature is IPAddress(value string) ... and here value is taken as the docID. but the returned address type itself also has a field called Value containing the IP address. how are they all related?
<TheMue> dimitern: does the docID contain the same value as Value and only exists for database reasons?
<TheMue> dimitern: oh, and yes, params.Address contains Value as field (a string) but no ID. ?!?
<dimitern> TheMue, the docID should contain the environment UUID as prefix to the address value
<TheMue> dimitern: hmm, ok, so when retrieving the IP addresses via the API I only have the value, which doesn't allow me to retrieve the IP addresses again (as state instances) for later removal.
<TheMue> dimitern: any troubles with extending the params.Address with the ID?
<TheMue> *aaaargh*
 * TheMue doesn't cry due to addresses
<TheMue> thought I could stay on the veranda, but there's a lawn mower on my left and craftsmen working on a house on my right
<TheMue> too much noise *grmblx*
<dimitern> TheMue, the env uuid is automatically prepended to the id you pass
<dimitern> TheMue, so you don't need anything else than the value
<voidspace> if you use full disk encryption then ubuntu creates an unencrypted /boot partition where the linux image lives
<voidspace> it makes it 237MB - so when an update includes a new image (leaving the current one there too, plus any old ones which aren't auto-uninstalled)
<voidspace> it fills up!
<voidspace> every other time an update includes an updated image I have to do surgery on /boot
<voidspace> and because the other partition is encrypted resizing partitions is non-trivial (can be done by booting a live-cd which I'll do at some point)
<voidspace> *sigh*
<TheMue> dimitern: great
<voidspace> and due to the bug with plymouth after an upgrade (when you have an encrypted filesystem) I'm still doing a recovery boot every time
<dimitern> jam, voidspace, dooferlad, standup?
<voidspace> dimitern: omw
<jam> dimitern: currently with mark, I'll be late if I'm there
<dimitern> jam, np, sure
<voidspace> dimitern: we could then filter juju-private from status if necessary
 * dimitern steps out for a short while
<dimitern> voidspace, if we need to, yes
<dimitern> voidspace, TheMue, dooferlad, I'd appreciate a review on http://reviews.vapour.ws/r/1918/
<TheMue> dimitern: just seen it
<voidspace> dimitern: although this is putting workarounds in place for code that only has imaginary future use cases... *sigh*
<jam> fwereade: I have a relation-get question for you when you have time
<jam> wallyworld: when you are back, we did spend about 10min on resources, and there was a pretty big thing to consider
<voidspace> dimitern: hmmm... a network *also* requires a ProviderId
<voidspace> dimitern: ok to allow that to be empty? They're supposed to be unique per provider I guess.
<voidspace> dimitern: yeah, there's an index on provider id
<voidspace> unique index
<voidspace> dimitern: what we *want* here is an interface that isn't associated with a provider network (except maybe the default one)
<voidspace> dimitern: we need to be clear about our model though
<voidspace> dimitern: so we either need to model the provider network that the nic is associated with *or* we need interfaces not associated with a network
<voidspace> dimitern: the latter seems easier at this stage
<voidspace> dimitern: or we could make provider id a sparse index I guess...
<voidspace> although we'd still have problems with duplicates
 * dimitern is back
<dimitern> voidspace, ProviderId is supposed to be coming from the maas network name
<dimitern> voidspace, but I guess setting it to the same as the network name *for now* should be fine
<voidspace> dimitern: this is not just for maas
<voidspace> dimitern: this is for ec2 (and then openstack too)
<dimitern> voidspace, I know
<dimitern> voidspace, but the model (as implemented in state) will change anyway (e.g. subnets instead of networks, while networks are a different concept)
<voidspace> dimitern: right, but at the time we're creating the network configuration for the container I'm not sure that we *have* the network name
<voidspace> as it's not relevant
<voidspace> I'll look through the code a bit more carefully
<dimitern> voidspace, we *should* have it, as the container's host is already on it
<dimitern> voidspace, but it's not populated I guess
<voidspace> I don't think we have it in configureContainerNetwork I mean
<voidspace> I'll check
<voidspace> the interface info we're working with comes from the PrepareContainerInterfaceInfo call and NetworkName is not populated
<voidspace> I can see if it's possible to populate it there
<dimitern> voidspace, right
<dimitern> voidspace, if the host itself had it populated at provisioning, it should be possible to get it from there and use the same
<dimitern> voidspace, but it won't be populated on the host unless you deploy it e.g. with --networks maas-net-name
<voidspace> right - we do copy the NetworkName into the result, so we're being given an empty name and ProviderId
<dimitern> voidspace, so just hardcoding it to juju-private (with a TODO comment please that we'll need to fix that as the model gets changed) in PCII() should do the trick I guess?
<voidspace> dimitern: for provider Id too? that has to be unique
<voidspace> it's actually ProviderId that's the problem now
<dimitern> voidspace, unique how?
<voidspace> dimitern: there's a unique index on provider id in mongo
<dimitern> voidspace, it's not
<voidspace> dimitern: go look in state/open.go
<dimitern> voidspace, hmm
<dimitern> voidspace, that's correct - providerid + envuuid has to be unique within the networks collection
<voidspace> right, so we need a unique provider id and at the moment it's empty
<dimitern> voidspace, but state.AddNetwork works with existing providerId+envuuid docs and just ignores it
<voidspace> dimitern: yes, just found that
<voidspace> SetInstanceInfo ignores it
<voidspace> not AddNetwork
<voidspace> but same effect
<dimitern> voidspace, so it *should* be fine
<voidspace> dimitern: so set ProviderId to juju-private too? that's actually incorrect - network name is internal to us, provider id is meant to be, well, a provider id :-)
<voidspace> dimitern: I can call it "invalid" to make it explicit that we've faked it - or "juju-private-default"
<voidspace> which is a bit of a less scary name
<voidspace> if it ever leaks
<dimitern> voidspace, how about "not-available" ?
<voidspace> ok
<voidspace> so long as that can never clash with a real one
<dimitern> cool, but please make it a const somewhere (network/ I guess)
<voidspace> which I guess is probably safe...
<dimitern> I don't believe it matters, as it's not used anywhere user-visible
<voidspace> dimitern: the danger with ProviderId is that we actually attempt to fetch it from the provider at some point
<voidspace> and if it accidentally matches a real one then weird things will happen
<voidspace> that's why I suggested a juju prefixed name
<dimitern> voidspace, "juju-not-available" ?
<dimitern> :)
<voidspace> heh
<dimitern> voidspace, let's go with "juju-unknown" ?
<voidspace> ok
<voidspace> bootstrapping with this code...
<voidspace> and going for coffee whilst it's churning
<wwitzel3> hrmm coffee, good idea
<voidspace> wwitzel3: o/
<voidspace> dimitern: and that seems to work, but the network *is* displayed in "juju status"
<dimitern> voidspace, unless tests start failing for status I can live with that for now :)
<dimitern> dooferlad, I've discovered another issue with GetContainerInterfaceInfo (or around it)
<dimitern> dooferlad, after host reboot, no routes are added for KVM instances' IPs (for LXC works)
<dimitern> dooferlad, additionally, ip route add can fail with exit 2 (RTNETLINK: File exists) and that should be handled better (it currently fails the process of the network config on the host altogether)
<dimitern> dooferlad, it should ignore exit 2 and only fail for exit 1 (e.g. Error: an inet prefix is expected rather than "10.15.0.". ; when doing ip route add 10.15.0. dev eth0)
<dimitern> dooferlad, I've added a card for you with details, please ping me if unclear
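The exit-code handling dimitern describes can be sketched as a small classifier. `routeAddFatal` is an invented name for illustration, not the actual juju implementation.

```go
package main

import "fmt"

// routeAddFatal reports whether an `ip route add` exit status should abort
// host network configuration. Exit 2 (RTNETLINK answers: File exists) means
// the route is already present, so it is safe to ignore; other non-zero
// codes, e.g. exit 1 for a malformed prefix like "10.15.0.", are real errors.
func routeAddFatal(exitCode int) bool {
	switch exitCode {
	case 0, 2: // success, or the route already exists
		return false
	default:
		return true
	}
}

func main() {
	fmt.Println(routeAddFatal(2)) // false: ignore "File exists"
	fmt.Println(routeAddFatal(1)) // true: e.g. bad inet prefix
}
```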
<dooferlad> dimitern: just back from lunch. Looking.
<dooferlad> dimitern: any idea why it has HTML tags in the description text? Rather noisy to read!
<TheMue> dimitern: I've seen the constants, and I liked them. I meant those expectScripts in the tests
<dimitern> dooferlad, where are these HTML tags?
<dooferlad> dimitern: don't worry, I got rid of them. They were in the card description. Perhaps some copy/paste thing?
<dimitern> TheMue, ah, I see
<dooferlad> dimitern: The leankit UI is horrible anyway - I don't spend much time looking at it, so it isn't a big deal.
<dimitern> dooferlad, ah ok
<marcoceppi> Can someone with familiarity of the OpenStack provider respond to the email in the Juju mailing list about Rackspace?
<mgz_> marcoceppi: we probably just need a bug filed for firewall-group: none still creating groups
<mgz_> marcoceppi: that setting was only added for scale testing, not for clouds that don't actually have the firewalling calls
<mgz_> but provider could not do the work
<marcoceppi> mgz_: could you let them know that? seems like it's the only blocker for rackspace which would be a huge win fwiu
<mgz_> I doubt it's the *only* blocker...
<mgz_> certainly the first though, and probably wouldn't be hard to make it work
<mgz_> I never bothered because it was a pain to get an account and I wanted the instance-level firewalling support
<mgz_> marcoceppi: I'll reply to the mail
<marcoceppi> mgz_: thank you!
<dimitern> voidspace, hey
<dimitern> voidspace, just a heads up - I've filed a separate bug 1464237 for the devices api work, leaving bug 1348663 only for the latest fix I did
<mup> Bug #1464237: juju should use devices API on MAAS 1.8+ for addressable containers <addressability> <feature> <kvm> <lxc> <maas-provider> <network> <juju-core:In Progress by mfoord> <https://launchpad.net/bugs/1464237>
<mup> Bug #1348663: DHCP addresses for containers should be released on teardown <maas-provider> <network> <oil> <juju-core:In Progress by dimitern> <juju-core 1.24:Fix Committed by dimitern> <MAAS:Invalid> <https://launchpad.net/bugs/1348663>
 * dimitern steps out
<marcoceppi> is there any way to get the environment details from the api? I'm trying to gather the region a deployment is in from API only
<marcoceppi> (if region is set)
<marcoceppi> Environment.info doesn't seem to have those details
<marcoceppi> ah, found it, get_env_config in the python-jujuclient will retrieve this information
<mup> Bug #1464235 opened: failed to create a security group with firewall-mode: none <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1464235>
<mup> Bug #1464237 opened: juju should use devices API on MAAS 1.8+ for addressable containers <addressability> <feature> <kvm> <lxc> <maas-provider> <network> <juju-core:In Progress by mfoord> <https://launchpad.net/bugs/1464237>
<dimitern> reviews appreciated - http://reviews.vapour.ws/r/1919/ - port of the fix for bug 1348663 to master
<mup> Bug #1348663: DHCP addresses for containers should be released on teardown <maas-provider> <network> <oil> <juju-core:In Progress by dimitern> <juju-core 1.24:Fix Committed by dimitern> <MAAS:Invalid> <https://launchpad.net/bugs/1348663>
<mup> Bug #1464254 opened: charmsSuite teardown fails <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core feature-proc-mgmt:Triaged> <https://launchpad.net/bugs/1464254>
<mup> Bug #1464255 opened: statesuite teardown fails <ci> <intermittent-failure> <unit-tests> <juju-core:New> <juju-core feature-proc-mgmt:Triaged> <https://launchpad.net/bugs/1464255>
<katco> wwitzel3: standup
 * TheMue 's rant yesterday: a pure functional persistence layer and a dumb model would make mocking simpler. the model could be reused, and an in-memory persistence mock would be maintained alongside the real persistence layer
<katco> TheMue: agreed. Moonstone is making strides towards such a dream.
<TheMue> katco: aaaaaaaaaaaah, great enlightening news
<TheMue> katco: makes my day
<katco> TheMue: mind you, it will take awhile to get to the end-state, but we are chipping away :)
<TheMue> katco: sure, nothing to be done in a week. *lol* but it's worth it.
 * TheMue with this news definitely feels better while hacking his current mocks
<voidspace> dimitern: so, cmd/juju tests time out
<voidspace> dimitern: I'm guessing there's something that blocks waiting for status to match...
<voidspace> dimitern: I'll find it
<voidspace> dimitern: fixed a handful of other tests
<dimitern> voidspace, right, I've suspected this much, but hopefully it shouldn't be too hard to fix
<voidspace> dimitern: yeah, just a case of finding the right test :-)
<mup> Bug #1464280 opened: storeManagerStateSuite teardown fails <ci> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1464280>
<voidspace> dimitern: the panic traceback is 30211 lines long
<voidspace> dimitern: that's quite a few goroutines...
<dimitern> voidspace, wow :) not unusual
<dimitern> voidspace, 90% of the cases the first few tracebacks are the most useful
<dimitern> voidspace, I'd appreciate a review on http://reviews.vapour.ws/r/1919/ btw - live tests just passed ok on maas and ec2
<voidspace> dimitern: ok
<voidspace> dimitern: I think I found the test in that panic
<voidspace> dimitern: not obvious from reading the test why it should block
<voidspace> but about to try it
<voidspace> dimitern: anyway, looking
<voidspace> dimitern: so you went with reboot doing an "ifdown"
<dimitern> voidspace, try also running suspected tests in isolation
<voidspace> dimitern: that's what I'm about to do...
<dimitern> voidspace, yeah, works well
<voidspace> dimitern: ok
<voidspace> dimitern: I hate tests that just copy large blocks of text from the code into the test
<voidspace> dimitern: just testing that the config is added would be enough, duplicating the script in the test proves nothing
<dimitern> voidspace, it proves the shutdown job for systemd is as expected for vivid
<voidspace> dimitern: but you got expected by copying and pasting
<voidspace> dimitern: so if there's an error in it the error will be in the test too
<voidspace> dimitern: an equally good test would just be checking a match for some unique part of it, and it wouldn't bloat the tests so much
<voidspace> dimitern: and they wouldn't be so fragile
<voidspace> I had tests fail because my editor trimmed whitespace from a script
<voidspace> and the whitespace was duplicated in the test :-/
<dimitern> voidspace, heh :) good point, but I've also live tested that it works
<voidspace> I'm talking about test cleanliness
<voidspace> I'm sure it works, it's just horrible tests
<voidspace> they're a menace
<dimitern> voidspace, since that PR is for master I'm fine with trying to make the tests a bit better if you think it's worth it - I'll check your comments
<voidspace> dimitern: ok, I'll put a comment on the test
<dimitern> voidspace, cheers
<voidspace> dimitern: hmmm, that test passes on its own
<voidspace> dimitern: but it takes >5minutes
<voidspace> dimitern: I suspect they're timing out because they're awfully slow tests
<voidspace> (cmd/juju I mean)
<voidspace> I wonder why these changes would make them slower
<voidspace> there's some extra logging I can pull out
<dimitern> voidspace, try running them with -race - might uncover the issue
<voidspace> dimitern: ok, thanks
<katco> wwitzel3: starting to think it might be on my end.
<katco> wwitzel3: flash doesn't seem to want to work
<wwitzel3> katco: I'm back in
 * dimitern will be back in 1h
<voidspace> dimitern: a ShipIt with an open issue for the worst offender of the tests...
<voidspace> test run with "-race" still going
<voidspace> and killed
<voidspace> trying just the one test
<cherylj> dooferlad,  dimitern,   could you guys take a quick peek at bug 1463480?  I suspect that it's a dup of bug 1416928, but I'm not 100%
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Triaged> <juju-core 1.22:In Progress by cherylj> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463480>
<mup> Bug #1416928: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Fix Released by dooferlad> <juju-core 1.21:Fix Released by dimitern> <juju-core 1.22:Fix Released by dooferlad> <https://launchpad.net/bugs/1416928>
<dooferlad> cherylj: looking
<mup> Bug #1464304 opened: Sending a SIGABRT to jujud process causes jujud to uninstall (wiping /var/lib/juju) <cts> <sts> <juju-core:New> <https://launchpad.net/bugs/1464304>
<natefinch> ^^ what, thats a feature
<mup> Bug #1464304 changed: Sending a SIGABRT to jujud process causes jujud to uninstall (wiping /var/lib/juju) <cts> <sts> <juju-core:New> <https://launchpad.net/bugs/1464304>
<natefinch> mup: infer RandomChoice[{'thats a feature', 'thats a bug'}]
<mup> natefinch: Cannot infer much out of this. :-(
<natefinch> aww
<wwitzel3> ericsnow: just finishing up a quick snack so we can work up until you have to take off
<ericsnow> wwitzel3: wrapping up a review so we should be good
<mup> Bug #1464304 opened: Sending a SIGABRT to jujud process causes jujud to uninstall (wiping /var/lib/juju) <cts> <sts> <juju-core:New> <https://launchpad.net/bugs/1464304>
<voidspace> what's the flag to gocheck to not swallow logging output as it's running?
<voidspace> -gocheck.vv
<wwitzel3> ericsnow: heading to moonstone
<mup> Bug # changed: 1403165, 1441478, 1442257, 1453785, 1454697, 1455158, 1456989, 1463117, 1463439
<mup> Bug #1464335 opened: debug-log does not work with local provider <debug-log> <local-provider> <regression> <vivid> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1464335>
<mup> Bug #1464356 opened: TestCloudInit fails <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1464356>
<natefinch> evidently gc.ErrorMatches can't handle error messages with line returns in them
<natefinch> because even this assert is failing: c.Assert(err, gc.ErrorMatches, `.*`)
<perrito666> well if it uses the regex, it is correct
<perrito666> .* does not include newlines
<perrito666> iirc
<natefinch> and this is why I hate regexes... well, one of a thousand such reasons
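perrito666 has it right: in Go's regexp syntax `.` never matches a newline unless the `s` flag is set. A minimal sketch; `errorMatches` is a simplified stand-in for gocheck's anchored error matching, not its actual implementation.

```go
package main

import (
	"fmt"
	"regexp"
)

// errorMatches anchors the pattern so it must match the whole message,
// roughly what gocheck's ErrorMatches checker does.
func errorMatches(msg, pattern string) bool {
	return regexp.MustCompile("^(" + pattern + ")$").MatchString(msg)
}

func main() {
	multiline := "timed out\nwaiting for agent"
	fmt.Println(errorMatches(multiline, `.*`))     // false: `.` excludes \n by default
	fmt.Println(errorMatches(multiline, `(?s).*`)) // true: (?s) lets `.` match \n too
}
```

So even `c.Assert(err, gc.ErrorMatches, `.*`)` fails on a multi-line error message, while prefixing the pattern with `(?s)` makes it pass.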
<mup> Bug #1464369 opened: Sufficiently sized int config values are converted to floats <oil> <juju-core:New> <https://launchpad.net/bugs/1464369>
<mup> Bug #1464369 changed: Sufficiently sized int config values are converted to floats <config> <oil> <juju-core:New> <https://launchpad.net/bugs/1464369>
<mup> Bug #1464392 opened: mgo iterators not being closed in state/actions.go <juju-core:New> <https://launchpad.net/bugs/1464392>
<natefinch> dinner time!
<natefinch> mup: infer RandomChoice[{'bye!', 'adios!'}]
<mup> natefinch: Cannot infer much out of this. :-(
<natefinch> mup: infer RandomChoice[{'bye', 'adios'}]
<mup> natefinch: bye.
<natefinch> man... picky
<thumper> alexisb: oh hai
<alexisb> heya thumper
<alexisb> I need to chat go with you today before I go
<alexisb> :)
<alexisb> but I am in meetings for a but
<alexisb> bit
<alexisb> still
<alexisb> thumper, I will ping you when I am done with calls
<thumper> kk
 * thumper is about to have a meeting too
<alexisb> cherylj, what is the next step for https://bugs.launchpad.net/juju-core/+bug/1463480
<mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Triaged> <juju-core 1.22:In Progress by cherylj> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1463480>
<cherylj> alexisb: I chatted with thumper about it a few minutes ago, and I think there's not much we can do.  It is a known problem that was fixed in 1.22
<cherylj> we can give them a workaround to fix the problem if they see it again
<alexisb> cherylj, ack, can you please follow-up with pete and make sure they go through the work-around
<cherylj> alexisb: will do
<alexisb> ok thumper
<alexisb> our 1x1 hangout
#juju-dev 2015-06-12
<thumper> wallyworld: I'm going to be out for lunch during our normal call time
<thumper> wallyworld: catch up with you later in the afternoon?
<wallyworld> sure
<wallyworld> just ping me
<thumper> kk
<dpb1> wallyworld: https://pastebin.canonical.com/133053/ -- I tried to write out the environment.yaml as 'agent-version: $(juju --version)'
<wallyworld> looking
<wallyworld> dpb1: i think the agent version is 1.23.3
<wallyworld> not the full juju version which includes the series and arch
<wallyworld> so you'll need to awk or sed it
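The sed/awk step wallyworld suggests can be done with plain parameter expansion. A stand-in string is used here in place of a live juju binary; the exact version is just an example.

```shell
# `juju --version` prints the full string, e.g. 1.23.3-trusty-amd64, while
# agent-version wants only the number, so strip the series and arch.
version="1.23.3-trusty-amd64"   # stand-in for $(juju --version)
agent_version="${version%%-*}"  # drop everything from the first "-"
echo "agent-version: $agent_version"
```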
<axw> wallyworld: you wanted me to ping you when I got back I think? I can't remember why... :)
<axw> wallyworld: no wait, I think you just wanted me to send info to perrito666
<wallyworld> axw: not that i recall, i think i mentioned some 1:1 topics
<wallyworld> yes
<wallyworld> a couple of metadata examples for filesystem storage on ebs and cinder etc to use in CI tests
<axw> yup
<wallyworld> and some notes on what to test if any apart from the stuff we discussed
<dpb1> wallyworld: sure, but in that email thread it was mentioned that $(juju --version) was correct.
<wallyworld> dpb1: we will make it correct for bootstrap, i'm not sure if it's currently supported like that. i'll need to review the emails to be sure of what the context was
<dpb1> ok, I'll reply on thread, just wanted to check if I was way off base.
<wallyworld> dpb1: not offbase, setting the agent-version attribute needs to just be 1.23.4 etc
<dpb1> k
<wallyworld> dpb1: hopefully next week we'll start the --no-auto-upgrade (or whatever we call it) option for bootstrap
<wallyworld> and/or $(juju version)
<dpb1> wallyworld: ok, understood
<wallyworld> dpb1: also, with 1.24, bootstrap will not return until it knows there are no upgrades to be done - so in the case of deployer, we will hopefully avoid the situation where deployer gets disconnected because just as it starts its work, the agent restarts from underneath it
<wallyworld> so even if one of those implicit upgrades happens, deployer won't start and then get a disconnect error
<dpb1> wallyworld: OK, I wasn't aware that made it into 1.24, thanks for letting me know.
<wallyworld> sure, np
<axw> wallyworld: just spilt tea everywhere, will be a bit late
<wallyworld> oh, no
<wallyworld> just ping me
<axw> wallyworld: I'm in
<mup> Bug #1464470 opened: A subordinate charm hook scheduled to run(but it is waiting for the principal charm hook to release the lock) goes to an error state after the principal charm triggers a reboot. <1.24> <1.25> <juju> <juju-core:New> <https://launchpad.net/bugs/1464470>
<menn0> thumper: http://reviews.vapour.ws/r/1921/
<wallyworld> menn0: still online?
<menn0> wallyworld: yep
<wallyworld> menn0: i forgot to ask you about bug 1464335
<mup> Bug #1464335: debug-log does not work with local provider <debug-log> <local-provider> <regression> <vivid> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1464335>
<wallyworld> this has been reported against 1.24
 * menn0 is looking
<wallyworld> looks like all-machines log is missing
<wallyworld> maybe i should have asked thumper
<wallyworld> as it is claimed in the bug he was involved
<menn0> wallyworld: i didn't do it</simpson name=bart>
<thumper> :(
<thumper> wasn't me
<thumper> but I recall it
<thumper> it not working that is
<wallyworld> it was claimed as a regression of 1202682
<wallyworld> bug 1202682
<mup> Bug #1202682: debug-log doesn't work with lxc provider <cts-cloud-review> <debug-log> <landscape> <local-provider> <papercut> <ssh> <ui> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1202682>
<wallyworld> but i haven't dug in
<thumper> wallyworld: someone broke it again
<wallyworld> huzzah
 * wallyworld sighs
<wallyworld> that tells me our tests are fooked
 * menn0 spins up an env to have a quick look
<thumper> wallyworld: file permission problem
<wallyworld> sudo !
<thumper> wallyworld: menno had the same problem and his ~/.juju was 0700
<thumper> mine is 0755
<thumper> and all is good
<wallyworld> hmmm, ok
<thumper> he changed his permissions and an all-machine.log turned up
<wallyworld> huh
<wallyworld> maybe juju init is wrong
<thumper> the syslog user couldn't see the config file
<wallyworld> or there was an old .juju
<thumper> well, the .pem cert actually
<wallyworld> thanks for looking
 * wallyworld goes back to spec writing
<thumper> wallyworld: actually...
<thumper> wallyworld: the rsyslog files were in the log dir, and natefinch moved them recently to the datadir
<thumper> wallyworld: since the log dir is in /var/log/....
<thumper> the syslog user could access
<thumper> also...
 * wallyworld waits with bated breath
<thumper> nah, that's it
<thumper> I changed my mind
<thumper> was thinking of apparmor
<thumper> but menno's one started working
<thumper> so it wasn't that
<wallyworld> ok
<wallyworld> do you recall if juju init is wrong?
<wallyworld> will need to check
<thumper> it was the move of the cert from the logdir to datadir that broke it
<thumper> some people like locking down their home dirs
<thumper> we can't guarantee this
<wallyworld> ah
<thumper> lets just log to the db and fix all this
<thumper> :)
<wallyworld> sure, but for 1.24.1
<wallyworld> so what do you suggest?
<menn0> well thumper, interesting that you mention it....
<thumper> well, we have two choices
<thumper> tell people to change permissions everywhere
<thumper> or move the files back out to the log dir
<thumper> or
<thumper> some other dir
<thumper> perhaps make /var/lib/juju-<namespace>
<thumper> /var/lib/juju/local/<namespace> for config files
<wallyworld> can't we just create log dir with correct permissions
<thumper> it isn't the log dir that is the problem
<thumper> the cert that rsyslog uses is in the local provider datadir
<thumper> which defaults to $JUJU_HOME/<namespace>/
<wallyworld> sorry, i meant the datadir
<thumper> this would mean changing the default datadir for the local provider
<thumper> it is effectively requiring moving the datadir out of the home dir
<thumper> so...
<thumper> two main options
<wallyworld> can't we just change permissions on ~/.juju
<thumper> 1) move the rsyslog conf back to the logdir
<wallyworld> no :-(
<wallyworld> that would be bad
<thumper> 2) move local provider datadir out of $JUJU_HOME
<wallyworld> rm -rf logs and you break everything
<thumper> 3) tell everyone to change the permissions on their home dirs and .juju
<thumper> that's about it
<menn0> wallyworld: changing their perms on ~/.juju isn't enough... their home dirs have to be open too
<wallyworld> can we just change permissions on .juju
<wallyworld> ah
<wallyworld> option 2 maybe
<wallyworld> i can't see option 1 as any good at all
<wallyworld> can be fixed next week
<wallyworld> can you update bug? maybe someone in US timezone will pick it up, we should maybe even assign to nate
<menn0> wallyworld: to be really sure I just confirmed the problem still happens when ~/.juju's perms are open but $HOME is restricted
<wallyworld> yeah, bollocks
<menn0> wallyworld: ticket updated
<wallyworld> tyvm
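The failure mode menn0 and thumper pin down is a directory-traversal one: rsyslog runs as its own user, so it needs execute permission on $HOME *and* ~/.juju to reach the cert. A sketch in a temp dir rather than a real home directory; the paths are illustrative only.

```shell
# Simulate a locked-down home: mode 700 means other users (e.g. syslog)
# cannot traverse into it to reach ~/.juju/<namespace>/ and the cert.
home=$(mktemp -d)
mkdir -p "$home/.juju"
chmod 700 "$home"

# Opening both directories to 755 is option 3 above: the syslog user can
# now traverse the path, at the cost of loosening the user's home dir.
chmod 755 "$home" "$home/.juju"
stat -c '%a' "$home"   # -> 755
```

Which is why options 1 and 2 (moving the rsyslog conf/cert out of $JUJU_HOME entirely) are the ones under real consideration: juju cannot assume users keep their home directories world-traversable.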
<urulama> wallyworld: let me reboot
<wallyworld> ok
<tasdomas> could someone take a look at http://reviews.vapour.ws/r/1888/ ?
<voidspace> dooferlad: discovered that those tests had a failing assert that was stopping the test but not marking it as a failure
<voidspace> dooferlad: and that six other tests had the same issue - and some of them fail if you actually run them!
<voidspace> dooferlad: so we have failing tests on master that aren't being reported...
<dooferlad> voidspace: yikes!
<voidspace> yeah :-/
<perrito666> Voidspace which ones?
<voidspace> perrito666: lxcBrokerSuite in worker/provisioner/lxc-broker_test.go
<voidspace> perrito666: the problem is in startInstance
<voidspace> perrito666: anything that calls that just halts - there's actually an assert failure but the test is marked as a pass
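The failure mode voidspace describes — an assertion aborts the test body, but the runner still records a pass — is easy to reproduce in miniature. This is a hedged sketch, not juju's actual test harness: `assertFail`, `runTestBuggy`, and `runTestFixed` are hypothetical names standing in for an assertion helper that aborts via panic and a runner that recovers from it.

```go
package main

import "fmt"

// failure is the sentinel an assertion helper panics with to abort
// the current test.
type failure struct{ msg string }

// assertFail mimics an assertion that aborts the test body.
func assertFail(msg string) { panic(failure{msg}) }

// runTestBuggy recovers from the abort but never flags a failure, so
// a test that halted mid-way is reported as passing — the situation
// described above.
func runTestBuggy(fn func()) (passed bool) {
	defer func() { recover() }() // swallow the abort: bug!
	passed = true                // set optimistically: also a bug
	fn()
	return passed
}

// runTestFixed checks what it recovered and marks the test failed.
func runTestFixed(fn func()) (passed bool) {
	defer func() {
		if r := recover(); r != nil {
			passed = false
		}
	}()
	passed = true
	fn()
	return passed
}

func main() {
	fmt.Println(runTestBuggy(func() { assertFail("boom") })) // reports pass despite the failed assert
	fmt.Println(runTestFixed(func() { assertFail("boom") })) // correctly reports failure
}
```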
<mattyw> rogpeppe1, ping?
<rogpeppe1> mattyw: pong
<mup> Bug #1464600 opened: Juju 1.22.5: pending LXC containers <cloud-installer> <landscape> <juju-core:New> <https://launchpad.net/bugs/1464600>
<fwereade> sorry, I seem to keep dcing from freenode
<fwereade> canonical irc seems fine though
<mup> Bug #1464616 opened: destroy-machine --force no longer forceably destroys machine <juju-core:New> <https://launchpad.net/bugs/1464616>
<mup> Bug #1464616 changed: destroy-machine --force no longer forceably destroys machine <juju-core:New> <https://launchpad.net/bugs/1464616>
<mup> Bug #1464616 opened: destroy-machine --force no longer forceably destroys machine <juju-core:New> <https://launchpad.net/bugs/1464616>
<mup> Bug #1464633 opened: juju status should show if an upgrade is available <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1464633>
<mup> Bug #1464665 opened: backupsSuite setup failed <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1464665>
<Syed_A> Hello folks, I was wondering how the lxc template "juju-trusty-lxc-template" is generated ?
<Syed_A> jamespage: Hello !
<Syed_A> Please correct me if i am wrong, but the way juju spawns containers is that first it will transfer the "juju-trusty-lxc-template" to the target host
<Syed_A> and then juju will use this template to spawn new containers from it.
<Syed_A> I want to change the "juju-trusty-lxc-template" before it is transferred to the target host for the lxc container. Is it possible to do it ?
<dimitern> Syed_A, the template container is created, when lxc-clone is true (the default) in environments.yaml for the environment, the first time an lxc container is deployed on the machine (of the given series)
<dimitern> Syed_A, it is possible to change the template container, but why do you need that? juju also changes it in some cases
<mup> Bug #1464671 opened: clientSuite setup fails <ci> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1464671>
<Syed_A> dimitern: I want the containers to have 3 nics and take ip addresses from three bridges on the hosts. Right now i have to manually edit the template, but if somehow i can edit the template on the juju machine, then whenever a new machine clones this template it has all the values for the 3 bridges.
<dimitern> Syed_A, it's better to do that by deploying a charm (principal or, better, subordinate) to each container; that charm can contain scripts to add NICs and other things
<Syed_A> dimitern: Right now i have to log inside the machine hosting the container and edit the template manually. I want to edit the template on juju node so that edited template gets transferred whenever i add a new machine to start containers on it.
<dimitern> Syed_A, how about what I suggested above?
<dimitern> Syed_A, with a charm that does the custom config, you can write it once and then use it on every machine
<Syed_A> dimitern: that makes sense. But i have never written a charm before.
<dimitern> Syed_A, it can be very simple, e.g. take the ubuntu charm as a base and just add some scripts in the hooks/install
<dimitern> Syed_A, https://jujucharms.com/ubuntu/trusty/3 - there's a "Download zip" link on the right; then, unzip this in a dir with the following structure: mycharms/trusty/ubuntu
<Syed_A> dimitern: opening it.
<dimitern> Syed_A, then inside the ubuntu dir, add a "hooks" dir and inside it an "install" script (bash or python, but needs to be executable)
<dimitern> Syed_A, in that install hook, you can do things like:
<dimitern> Syed_A, stop a given container, change the lxc.conf it uses to add more networks, restart it
<katco> natefinch: standup
<dimitern> Syed_A, finally, to test this, use: juju deploy local:trusty/ubuntu --repository <path/to/mycharms> --to 1 (or later juju add-unit ubuntu --to X - machine id)
<natefinch> katco: coming
<dimitern> Syed_A, there are lots of folks in #juju which can help you with your charm, and there's the online docs about it - https://jujucharms.com/docs/stable/getting-started
<Syed_A> dimitern: This sounds like a great plan but alternatively if i could just edit the template on juju node itself then i will be good to go.
<dimitern> Syed_A, you can try - it's in /var/lib/lxc/juju-trusty-lxc-template/config (and in the same dir, rootfs/etc/network/interfaces needs to be changed)
<dimitern> Syed_A, changing the template and then deploying another lxc (the first one you deployed, which triggered the template creation you can destroy) will use the changed template
<Syed_A> dimitern: Yes, this is what i was doing.
<dimitern> Syed_A, however, beware this is not supported and your mileage may vary
<Syed_A> dimitern: Changing /var/lib/lxc/juju-trusty-lxc-template/config on every host
<Syed_A> dimitern: This file will only be present on a machine if you start a container on it.
<dimitern> Syed_A, that's the only way manually (still can be automated somewhat by using "juju run" or "juju ssh"/"juju scp"), using a charm allows you to take advantage of juju's automation
<dimitern> Syed_A, hmm, something occurred to me now
<Syed_A> dimitern: My question is from where does this "/var/lib/lxc/juju-trusty-lxc-template/" comes from ?
<dimitern> Syed_A, if all your machines are more or less the same, you could potentially do this once on one machine and then just copy the template to /var/lib/lxc and try if it will work
<dimitern> Syed_A, it's created using lxc-create --debug --userdata <path/to/file/generated/by/juju> --hostid <e.g. juju-trusty-lxc-template> -r trusty
<Syed_A> dimitern: I can do that but then juju might overwrite it ? Probably there could be a semaphore ?
<dimitern> Syed_A, and then once it boots it's shutdown and cloned
<Syed_A> dimitern: lxc-create -n juju-trusty-lxc-template21 \
<Syed_A>   -t ubuntu-cloud \
<Syed_A>   -f /var/lib/juju/containers/juju-trusty-lxc-template/lxc.conf \
<Syed_A>   -- --debug \
<Syed_A>   --userdata /var/lib/juju/containers/juju-trusty-lxc-template/cloud-init \
<Syed_A>   --hostid juju-trusty-lxc-template \
<Syed_A>   -r trusty
<dimitern> Syed_A, juju checks if juju-trusty-lxc-template is present and uses it, it won't overwrite it
<Syed_A> dimitern: Sorry about the paste.
<Syed_A> dimitern: If that's the case then it will be a really easy and good solution.
<dimitern> Syed_A, well, you never know until you try I guess :)
<Syed_A> dimitern: Yep! ;)
<dimitern> Syed_A, anyway, I hope it works and if you have other questions, here or in #juju someone will be able to help you
<Syed_A> dimitern: Thanks!
<mup> Bug #1464679 opened: juju status oneline format missing info <landscape> <status> <juju-core:New> <https://launchpad.net/bugs/1464679>
<natefinch> wwitzel3: let me know when you're around.... not a huge fan of the ContextComponent's use of interface{} everywhere
<wwitzel3> natefinch: I'm around
<wwitzel3> natefinch: by everywhere, you mean in the Get and Set fields?
<wwitzel3> natefinch: or are there other places of concern?
<natefinch> wwitzel3: sorry, hyperbole... but just not a fan of non-type-safe code
<natefinch> wwitzel3: so I'd like to know what this is buying us
<wwitzel3> natefinch: it is type safe because we explicitly check that the type conversion works on the component side and error if it doesn't.
<wwitzel3> natefinch: but what it buys us is the ability to register components without the hook context needing to know implementation details about the components (and therefore, not import them as a package)
<wwitzel3> natefinch: so the ContextComponent interface is our intersection between juju internals and the external component
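The registration pattern wwitzel3 describes can be sketched roughly as below. This is an illustrative sketch, not juju's real jujuc API: `Context`, `StorageComponent`, and `getStorage` are hypothetical names. The point is that the context stores components as `interface{}` so it needs no compile-time knowledge of them, while each component's own accessor does a checked type assertion and returns an error rather than panicking.

```go
package main

import "fmt"

// Context holds registered components without knowing their types.
type Context struct {
	components map[string]interface{}
}

func NewContext() *Context {
	return &Context{components: make(map[string]interface{})}
}

func (c *Context) Register(name string, comp interface{}) {
	c.components[name] = comp
}

func (c *Context) Get(name string) (interface{}, error) {
	comp, ok := c.components[name]
	if !ok {
		return nil, fmt.Errorf("component %q not registered", name)
	}
	return comp, nil
}

// StorageComponent is an example component type (an assumption, not a
// real juju type).
type StorageComponent struct{ Pool string }

// getStorage is the component-side typed accessor: it performs the
// checked assertion so callers get a concrete type or an error.
func getStorage(ctx *Context) (*StorageComponent, error) {
	raw, err := ctx.Get("storage")
	if err != nil {
		return nil, err
	}
	sc, ok := raw.(*StorageComponent)
	if !ok {
		return nil, fmt.Errorf("component %q has unexpected type %T", "storage", raw)
	}
	return sc, nil
}

func main() {
	ctx := NewContext()
	ctx.Register("storage", &StorageComponent{Pool: "default"})
	sc, err := getStorage(ctx)
	fmt.Println(sc.Pool, err)
}
```

The `interface{}` never leaks past the accessor, which is why the design is still type safe in practice even though the registry itself is untyped.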
<natefinch> wwitzel3: do you want to jump in a hangout... I feel like i'm behind on a lot of the decisions made in the spec, so I'd like to understand better about the use cases the spec solves
<wwitzel3> sure, katco if you're interested, we are heading in to moonstone
<katco> wwitzel3: yeah be there in a sec
<wwitzel3> alexisb: you'll be happy to know, there is a ceiling fan in my office now :)
<alexisb> lol
<alexisb> nice wwitzel3
<katco> wwitzel3: i think natefinch needs those tabs for godoc
<wwitzel3> katco: ah, ok, I thought it could be spaces too and i was just going for consistency
<natefinch> spaces or tabs work
<natefinch> tab is fewer keystrokes
<natefinch> https://godoc.org/github.com/natefinch/godocgo#hdr-Formatting
<katco> natefinch: ah that's what i was looking for. didn't know you could use spaces as well
<wwitzel3> I just hate tabs
<wwitzel3> lol
<katco> wwitzel3: tabs in emacs alllll day. come at me!
<wwitzel3> I'll sneakily edit your config at gophercon so your tabstop is 32
<natefinch> I love tabs.  Their whole point is to indent.  And you never have to backspace or hit space multiple times to indent.  Plus, everyone can set their tabs to be whatever width they want, and no one else has to complain they're too big or small
<katco> wwitzel3: good luck with that. my config is currently 1409 lines long and source controlled ;)
<wwitzel3> katco: I wouldn't be able to actually do it anyway, I don't know how to boot emacs
<katco> wwitzel3: quantum computing is coming. i'm hoping those machines will be able to run it
<natefinch> if you don't already have it, I highly recommend getting the hub tool from github.  It wraps git and does a lot of awesome github-specific things from the command line.  Like making it super easy to checkout the code for a PR, or make a PR from your command line, etc
<natefinch> and it's written in Go.... go get github.com/github/hub
<natefinch> it wraps git, so you can actually replace git with it entirely
<natefinch> sinzui: about that "SIGABRT uninstalls jujud" bug - why do we have any signal that tells juju to uninstall?  why is that a feature?
<sinzui> natefinch: I think that is the mechanism that other engineers decided was the right way to clean up. It used to be another more common sig. axw changed it to ABRT last year
<natefinch> fwereade: I don't suppose you're around?
<pmatulis> does 'juju generate-config' work on linux and windows alike, in terms of the dir/file being created?
<natefinch> pmatulis: yes, it works on both.
<natefinch> pmatulis: not sure what the second half of your question was supposed to be
<pmatulis> natefinch: this command creates ~/.juju/environments.yaml on linux (if not present). does it do the same on windows for %LOCALAPPDATA%/Juju ?
<natefinch> pmatulis: yes
<natefinch> pmatulis: the client is totally compatible between linux, windows, and OSX, following all the normal conventions of each OS.
<natefinch> pmatulis: er, when I say compatible, you still need the OS-specific binary, but it all comes from the same code and they are treated as equally important
<cherylj> katco: ping?
<pmatulis> natefinch: thank you
<katco> cherylj: pong
<cherylj> katco: I have a process question / comment for you
<katco> cherylj: sure what
<katco> what's up?
<cherylj> katco: I know we close bugs that have no activity for 60 days
<cherylj> but I saw a bug that got closed because of that with a tag of tech debt
<cherylj> and I'm thinking that maybe tech debt bugs shouldn't be closed because of inactivity
<katco> cherylj: i agree with that... and feature requests
<cherylj> katco: do you know who owns the bot or whatever that closes these bugs?
<cherylj> it may also be the case that this has been fixed.  The bug I'm looking at was closed last year
<katco> cherylj: almost certainly CI (ping sinzui)
<cherylj> katco: will do, thanks!
<sinzui> cherylj: Launchpad doesn't care about tags. LP's 60 day expiration is based on incomplete status with no one assigned and no other task
<sinzui> cherylj: incomplete means there isn't enough information to accept this issue as work.
<sinzui> So tech-debt cannot be incomplete by definition
<cherylj> sinzui: ah, thanks for clarifying.
<fwereade> natefinch, heyhey, have I missed you?
<natefinch> fwereade: nope
<natefinch> fwereade: I have a few questions about why we're doing things in the way we are... which I probably already know the answer to, but want to make sure I understand
<fwereade> natefinch, sure, can you manage a slightly laggy irc conversation about them?
<natefinch> fwereade: that's fine L(
<natefinch> fwereade: er :)
<natefinch> fwereade: what's up with github.com/juju/schema ?  It seems like we're reimplementing what the yaml and json parsers already do to take yaml and shove it in a struct.  Seems like we could strip out a ton of code that uses schema and just deserialize into a struct
 * fwereade is excavating his mind
<fwereade> natefinch, well, you *could* do more sophisticated things than we usually do, but I don't think there's a 100% overlap with "just deserialize into a struct"
 * perrito666 imagines fwereade fiddling with a spoon
<fwereade> natefinch, I would have no objection to replacing schema with something simpler where it's too unwieldy a hammer for the situation
<fwereade> natefinch, but I'm not convinced it's completely unhelpful across the board
<fwereade> natefinch, do you have particularly egregious examples off the top of your head?
<natefinch> fwereade:  looking...
<fwereade> natefinch, I do know that last time I thought I wanted to use the schema package I got tied up in knots trying to make a self-referential one that would turn nested map[interface{}]interface{} into nested map[string]interface{}
<natefinch> yeah,... it seems like you should never need map[interface{}]interface{} ... map[string]interface{} should always be sufficient.  I don't think anything uses non-strings as keys, do they?
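natefinch's "just deserialize into a struct" alternative to the schema package can be sketched like this. The `Config` shape and field names here are invented for illustration; the sketch uses `encoding/json` from the stdlib, and `gopkg.in/yaml.v2` works the same way for YAML via struct tags, on the assumption that the config really does have a fixed shape.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config is a hypothetical fixed-shape config. The decoder enforces
// field types, replacing a hand-built schema.FieldMap; only
// cross-field or "required" rules need explicit checks.
type Config struct {
	Name     string `json:"name"`
	Series   string `json:"series"`
	NumUnits int    `json:"num-units"`
}

func parseConfig(data []byte) (*Config, error) {
	var c Config
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, fmt.Errorf("invalid config: %v", err)
	}
	if c.Name == "" {
		return nil, fmt.Errorf("missing required field %q", "name")
	}
	return &c, nil
}

func main() {
	c, err := parseConfig([]byte(`{"name": "wordpress", "series": "trusty", "num-units": 3}`))
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(c.Name, c.NumUnits)
}
```

Where the shape is dynamic or self-referential (the nested `map[interface{}]interface{}` case fwereade mentions), a decoder into structs stops being a good fit, which is the "not a 100% overlap" caveat above.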
<natefinch> fwereade: anyway, my other question is about the hook context code
<natefinch> fwereade: there's a bunch of interfaces that seem to only be used for testing... so I wanted to make sure that they had no other function other than to make the code easier to test: https://github.com/juju/juju/blob/master/worker/uniter/runner/jujuc/context.go#L45
<natefinch> wwitzel3: btw, when you get back, I think I have a solution to the generics problem with the ContextComponent stuff
<natefinch> fwereade: looking at git blame, the one big interface recently got refactored into a bunch of smaller ones... but it doesn't change the fact that it seems like there's no one using it outside the tests.
<fwereade> natefinch, I am entirely comfortable with exposing functionality behind interfaces by default, for all sorts of reasons
<fwereade> natefinch, ease of testing is certainly one of them
<fwereade> natefinch, but the Context interface is, I think, an important thing in itself
<fwereade> natefinch, it's the bottleneck through which all hook communication needs to happen
<fwereade> natefinch, it keeps the hook tools unaware of more distant layers
<fwereade> natefinch, and all those things push towards it being an interface
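fwereade's bottleneck argument can be sketched in a few lines. These are illustrative names, not juju's real `jujuc` types: a toy hook tool depends only on a narrow `Context` interface, so it can be written and tested with no knowledge of the API layer behind it.

```go
package main

import "fmt"

// Context is the narrow bottleneck through which a hook tool talks to
// the rest of the system (hypothetical subset of the real interface).
type Context interface {
	UnitName() string
	SetActionResult(key string, value interface{}) error
}

// actionSet is a toy hook tool: understandable and maintainable in
// isolation, exactly as argued above.
func actionSet(ctx Context, key string, value interface{}) error {
	return ctx.SetActionResult(key, value)
}

// fakeContext implements Context for tests — the ease-of-testing
// benefit falls out of the same design.
type fakeContext struct {
	unit    string
	results map[string]interface{}
}

func (c *fakeContext) UnitName() string { return c.unit }

func (c *fakeContext) SetActionResult(key string, value interface{}) error {
	c.results[key] = value
	return nil
}

func main() {
	ctx := &fakeContext{unit: "mysql/0", results: make(map[string]interface{})}
	if err := actionSet(ctx, "outcome", "success"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(ctx.UnitName(), ctx.results["outcome"])
}
```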
<natefinch> fwereade: have you seen Eric and Wayne's ideas for Components?
<fwereade> natefinch, not sure I have actually -- hwere were they?
<katco> fwereade: you have. this is the whole intersection of features with layers of architecture discussion
<fwereade> katco, ah, right
<natefinch> fwereade: the reason I brought it up is because of what you said about hook tools not needing to know about distant layers.  While that's true... it's also true that you're likely to write the action-set hook tool at the same time as the action API call, etc.  So it's actually not terrible if the action-set tool knows about the action API... what's actually better is not requiring *everyone else* to know about those things
<natefinch> ... which I think is what the Component idea is trying to push.
<natefinch> I don't know what my point was there, though
<fwereade> natefinch, strongly disagree
<fwereade> natefinch, it is terrible if the hook tools know about the api
<fwereade> natefinch, makes it impossible to understand and maintain them in isolation
<katco> natefinch: the idea is that you have an adapter where features and architectural layers intersect, so that neither are strongly coupled to the other
<fwereade> natefinch, I am all for improving horizontal boundaries, but not at the cost of the vertical ones
<katco> natefinch: these are the "abstraction" cards on our kanban
<natefinch> yes, sorry, saying it wasn't terrible was hyperbole.  Certainly keeping things decoupled is good.
<fwereade> natefinch, katco: so I frequently articulate this poorly
<fwereade> natefinch, katco: but I think a good model *is* to cluster feature-specific code into a package or small closely-connected group of packages
<fwereade> natefinch, katco: those bits define the juju entities and how they act; other model code defines how model entities can/should interact
<katco> fwereade: yep
<fwereade> natefinch, katco: and then, as much as possible, we use those same model representations wherever we can
<fwereade> natefinch, katco: so the business logic doesn't need to be rewritten in subtly different ways at every level
<fwereade> natefinch, katco: the trouble is
<katco> fwereade: yeah, i think we're still on the same page
<fwereade> natefinch, katco: that many of our business rules are encoded at the storage level and aren't likely to move any time soon
<katco> fwereade: right, we've also identified that as the hardest piece
<fwereade> natefinch, because it's really damn difficult to extract the business rules without dragging nasty gobbets of persistence concerns with them
<katco> fwereade: we're not planning on doing a whole lot there atm b/c it's such a large effort
<katco> fwereade: but we are going to introduce a few small bits which will hopefully chip away at the problem
<fwereade> natefinch, katco: so, yeah, I think we're all on the same page. so long as you keep the layers separated as much as you can, I'm happy :)
<katco> fwereade: yes :) we have that in the forefront of our mind for sure :)
<natefinch> yep
<natefinch> eod.  happy weekend all
<wwitzel3> bah
<wwitzel3> just missing him
#juju-dev 2015-06-13
<mup> Bug #1464600 changed: Juju 1.22.5: pending LXC containers <cloud-installer> <landscape> <juju-core:Triaged> <juju-core 1.22:Triaged by cherylj> <https://launchpad.net/bugs/1464600>
#juju-dev 2015-06-14
<davecheney> thumper: here is an example http://paste.ubuntu.com/11716589/
 * thumper looks
<thumper> davecheney: I have been noticing many intermittent failures with cmd/jujud/agent
<thumper> davecheney: running that package tests with the race detector has indicated a number of data races
<thumper> davecheney: just wanted to check you weren't looking at that package now
 * thumper headdesks
<thumper> wallyworld: love a chat when you turn up
<wallyworld> ok, will ping you after another coffee
<wallyworld> thumper: free now in 1:1 if you want
<thumper> coming
<thumper> wallyworld: http://reviews.vapour.ws/r/1925/
<thumper> wallyworld: wasn't able to fix the intermittent failure with this
<thumper> but it does fix the races
<thumper> menn0: if you are curious later, waitForUpgradeToFinish in cmd/jujud/agent/upgrade_test fails intermittently for some weird reason
<menn0> thumper: do you have any details?
<thumper> menn0: nope... only that I had it fail here, then pass 8 times in a row
<thumper> menn0: this test -> UpgradeSuite.TestUpgradeStepsHostMachine
<thumper> menn0: just fails with false is not true
<wallyworld> ok, will look aftr standup
<thumper> wallyworld: cheers
<menn0> thumper: I believe there's a ticket for that one already
<thumper> yeah
<menn0> thumper: that test actually predates me but I've worked a lot with that area so i'll take a look
<menn0> thumper: bug 1444576 is the closest match
<mup> Bug #1444576: Skipped TestUpgradeSteps* in cmd/jujud/agent/upgrade_test.go <skipped-test> <test-failure> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1444576>
<menn0> thumper: i've just grabbed it although it'll be a friday afternoon one
<thumper> kk
<thumper> menn0: re: http://reviews.vapour.ws/r/1924/diff/# you were mentioning that you don't actually have to close the iterators, is that right?
<menn0> thumper: yeah it's not critical, the server kills iterators after 10 mins of inactivity anyway
<thumper> but the code is fine, right?
<menn0> thumper: the code you reviewed is fine
 * thumper -> gym
<menn0> thumper: closing the session to force the blocked next call to unblock is not
<menn0> thumper: depending where mgo is at, it might panic
<davecheney> thumper: menn0 i'll look at jujud/agent next when I'm done with this PR
#juju-dev 2016-06-13
<axw> wallyworld: http://reviews.vapour.ws/r/5045/
<wallyworld> ok, ta
<wallyworld> axw: while i'm looking, here's a small one http://reviews.vapour.ws/r/5046/
<axw> wallyworld: okey dokey
<wallyworld> axw: i wonder if migration should fail if the model credential does not exist in the target controller - should that check be done? i think so?
<axw> wallyworld: yeah, there's a TODO in NewModel to check that
<wallyworld> ah right
<axw> wallyworld: intending to do in a follow up, can do in this branch if you prefer
<wallyworld> follow up fine
<wallyworld> it's all stuff imminently in progress
<thumper> wallyworld: got a few minutes?
<wallyworld> thumper: sure, just finishing a review, give me 5
<thumper> wallyworld: ack, I'll jump in 1:1 and wait
<wallyworld> axw: i left a few questions / suggestions
<axw> ta
<menn0> thumper: found one typo. otherwise LGTM.
<axw> wallyworld: I was copying "CloudRegion() string" above in core/description. I can change both to ...Name() if you like, but IMO it's clear enough as it is
<wallyworld> axw: ok
<thumper> menn0: awesome
<thumper> ta
<axw> wallyworld: please see replies
<wallyworld> ok
<wallyworld> axw: lgtm, we should think about the cloud config issue though
<axw> wallyworld: okey dokey. perhaps we could/should separate clouds.yaml management from the core representation of clouds/regions
<axw> wallyworld: maybe move clouds.yaml management into jujuclient? and then config-free version in cloud
<axw> just a thought... need to think through more
<wallyworld> yep, i was thnking similar
<thumper> bah humbug
<thumper> wallyworld: where has the password gone from agent.Config ?
<wallyworld> thumper: the api password? the one that used to be old-password?
<thumper> there is now no old password, and no other passwords
<wallyworld> it's in accounts.yaml
<thumper> not on machine-0
<wallyworld> no, i think it was removed, but i'd need to check
<wallyworld> i'm late for vet but i'm pretty sure it was removed from agent config because it's just not needed there
<wallyworld> it is stored in the accounts yaml of the person who bootstrapped
<thumper> yeah...
<thumper> it is
<wallyworld> and others get access via register
<thumper> for the agent to connect over the api
<thumper> screw users
<thumper> I'm talking about agents
<wallyworld> agents have the password elsewhere i thought
<wallyworld> but can't recall offhand
<thumper> hmm
<wallyworld> but gotta run, bbiab
<thumper> ah...
<thumper> I should just be using APIInfo()
<mup> Bug #1525030 changed: 'ERROR while removing instance' when destroying lxd environment <2.0-count> <juju-release-support> <juju-core:Expired> <https://launchpad.net/bugs/1525030>
<mup> Bug #1531719 changed: Runaway memory allocation in jujud unit agent <2.0-count> <juju-core:Expired> <https://launchpad.net/bugs/1531719>
<mup> Bug #1525030 opened: 'ERROR while removing instance' when destroying lxd environment <2.0-count> <juju-release-support> <juju-core:Expired> <https://launchpad.net/bugs/1525030>
<mup> Bug #1531719 opened: Runaway memory allocation in jujud unit agent <2.0-count> <juju-core:Expired> <https://launchpad.net/bugs/1531719>
<mup> Bug #1525030 changed: 'ERROR while removing instance' when destroying lxd environment <2.0-count> <juju-release-support> <juju-core:Expired> <https://launchpad.net/bugs/1525030>
<mup> Bug #1531719 changed: Runaway memory allocation in jujud unit agent <2.0-count> <juju-core:Expired> <https://launchpad.net/bugs/1531719>
<wallyworld> axw: been looking at removing ControllerUUID() from environs.Config(). if it's to be done, it will need to involve constructing the Environ with the controller UUID, which means environs.New() and EnvironProvider.Open()  will change
<axw> wallyworld: I think it's only needed by StartInstance and Destroy?
<axw> wallyworld: I was thinking we'd just add it to StartInstanceParams, and add a new DestroyAll(controllerUUID string), or something like that
<wallyworld> axw: AllInstances needs it as well (for openstack at least)
<wallyworld> and likely ControllerInstances() for others
<axw> wallyworld: ControllerInstances shouldn't need it, I'm pretty sure it's expected to be called on the controller Environ anyway
<wallyworld> ControllerInstances() on openstack needs it
<axw> wallyworld: AllInstances shouldn't need the controller UUID either. I think we can change the openstack code
<wallyworld> ok, i'll look to see if openstack is the only one where AllInstances needs it
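The refactoring axw suggests — passing the controller UUID explicitly rather than having every Environ fish it out of its config — could look roughly like this. All names here are illustrative sketches of the discussion, not juju's real environs interfaces.

```go
package main

import "fmt"

// StartInstanceParams carries the controller UUID explicitly, so the
// Environ no longer needs it in its config.
type StartInstanceParams struct {
	ControllerUUID string
	// ... placement, constraints, etc.
}

// Environ is a cut-down sketch: DestroyController takes the UUID as
// an argument, replacing a Destroy() that read it from config.
type Environ interface {
	StartInstance(params StartInstanceParams) (instanceID string, err error)
	DestroyController(controllerUUID string) error
}

// fakeEnviron records what it was asked to do, standing in for a
// provider implementation.
type fakeEnviron struct {
	lastController string
}

func (e *fakeEnviron) StartInstance(p StartInstanceParams) (string, error) {
	if p.ControllerUUID == "" {
		return "", fmt.Errorf("missing controller UUID")
	}
	e.lastController = p.ControllerUUID
	return "i-0", nil
}

func (e *fakeEnviron) DestroyController(controllerUUID string) error {
	e.lastController = controllerUUID
	return nil
}

func main() {
	var env Environ = &fakeEnviron{}
	id, err := env.StartInstance(StartInstanceParams{ControllerUUID: "deadbeef-0000"})
	fmt.Println(id, err)
}
```

The design choice mirrors the thread: the caller that already knows which controller it is acting for supplies the UUID, so nothing depends on controller-uuid surviving in environs.Config.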
<frobware> dimitern: I tried http://reviews.vapour.ws/r/5040/ - LGTM but will test a little more with LXD this morning. If it's all OK I'll dump the distinction between multi and single NIC configs we talked about on Friday
<dimitern> frobware: great! thanks
<dimitern> frobware: I'm going to remove the legacy AC-FF code today as well
<frobware> dimitern: need to sync with CI folks too as tests on their end should vanish
<dimitern> frobware: yeah, there's one affected test IIRC
<wallyworld> axw: i think ec2 provider is broken for ControllerInstances() - it adds a filter based on model uuid not controller uuid. it may be that we only call it with the controller model environ so it works by fluke
<axw> wallyworld: pretty sure that's the only place we ever call it
<axw> wallyworld: I think we could reasonably restrict it to that
<axw> the caller can arrange to open the correct Environ, rather than requiring more of the provider
<wallyworld> axw: yeah, we seem to call it to get an api connection from an env made from bootstrap config
<wallyworld> so i could change openstack to use ModelUUID and it would fix the current issue
<wallyworld> but seems quite fragile
<wallyworld> i could throw an error if cfg doesn't include a controller uuid attribute
<wallyworld> that would stop any inadvertent usage slipping through
<axw> wallyworld: isn't the proposal for all configs to be that? :)
<wallyworld> not model configs
<wallyworld> model configs obtained from state will not include controller uuid
<axw> wallyworld: I thought the goal was to remove controller-uuid from environs.Config altogether
<wallyworld> and i'd like to totally remove controller uuid from front end processing too
<wallyworld> yes, but we still need it at the front end at bootstrap
<wallyworld> for now
<wallyworld> it's a huge change
<axw> ok, I'm just saying if we intend to remove it eventually, it doesn't make sense to add more code dependent on its presence (or lack thereof)
<wallyworld> but for now, if we just call ControllerInstances() to get api info (as a backup) then we need to assume controller uuid comes from config
<wallyworld> i could pass uuid to controller instances
<wallyworld> that would work
<axw> wallyworld: sure, works for me
<wallyworld> yay, more yak shaving
<wallyworld> axw: and maas still appears to use the provider state file for ControllerInstances(), so we may be able to clean that up also
<axw> wallyworld: eep :/
<wallyworld> indeed
<axw> wallyworld: I thought thumper was adding the tags support. I guess he didn't get there yet
<wallyworld> not enough friday afternoons
<axw> heh :)
<wallyworld> but maas only *just* got tags support in 2.0
<wallyworld> maybe that bit was left out
<wallyworld> ie tags are added but that code didn;t get fixed
<axw> yep I know, prereq for shortening the names though I thought
<axw> actually.. no, because we use agent-name
<axw> so just for getting rid of reliance on maas storage
<wallyworld> well maas storage is still used for the provider state file
<wallyworld> so maybe *now* we will be rid of it
<wallyworld> yep, maas is all that uses it
<wallyworld> so there'll be a bit deletion coming up
<wallyworld> big
<voidspace> dimitern: dooferlad: babbageclunk: firefox being slow, omw
<voidspace> dimitern: frobware: dooferlad: babbageclunk: I'm in - I can hear but can't see anyone and no icons so I can't unmute yet...
<dimitern> frobware: standup?
<frobware> omw
<babbageclunk> I should really get a headset with a mike - recommendations?
<frobware> I keep getting kicked out as soon as it connects
<dimitern> frobware: try firefox?
<frobware> start without me
<voidspace> babbageclunk: http://thewirecutter.com/reviews/best-usb-office-headset/
<frobware> dimitern: (et al) can't connect, which is weird since I just had a 1:1 with jam for an hour...
<frobware> dimitern: keep on dropping again... :(
<dimitern> frobware: are you using chrome?
<frobware> dimitern: yep, but tried FF. Just seems anything google-y. gmail and/or hangouts
<dimitern> frobware: I've seen this happen on Mondays usually :/
<frobware> dimitern: first call was fine; for standup call I had to do the auth dance, now I repeatedly get kicked out
<dimitern> frobware: I'd guess signing out completely and removing cookies might help (or try in an incognito window?)
<babbageclunk> voidspace: Thanks, those sound good (ha). Have you got one of them?
<voidspace> babbageclunk: no, I have some crappy old ones that need replacing
<babbageclunk> voidspace: aspirational then
<voidspace> babbageclunk: I'll buy the jabra I think
<voidspace> babbageclunk: I trust the wirecutter though
<babbageclunk> voidspace: I'm probably leaning towards the microsoft ones for better music sound.
<voidspace> babbageclunk: ah right - yeah, I don't use them for music
<voidspace> babbageclunk: I have 7.1 audio in my man cave
<babbageclunk> voidspace: ooh, nice. I assume it's all just ride of the valkyries then.
<babbageclunk> voidspace: And a fan to blow your hair back while you code.
<voidspace> babbageclunk: hehe, nice idea
<mup> Bug #1591939 opened: juju failed unmarshal the /server/{server_id} api  response body <juju-core:New> <https://launchpad.net/bugs/1591939>
<mup> Bug #1591940 opened: juju failed unmarshal the /server/{server_id} api  response body <juju-core:New> <https://launchpad.net/bugs/1591940>
<babbageclunk> voidspace, dimitern, frobware: I'm trying to add a card to the Sapphire board for the application workload version work (well, the first part of it), but I get a permission error (maybe Leankit thinks I'm not on the team?).
<dimitern> babbageclunk: your should have access now
<babbageclunk> dimitern: Thanks! I'm looking at your change btw.
<dimitern> babbageclunk: thank you :)
<babbageclunk> dimitern: Hmm - I can't assign the card I just created to myself.
<dimitern> babbageclunk: try reloading lkk ?
<babbageclunk> dimitern: Yeah, that's it - thanks again!
<dimitern> :)
<dimitern> fwereade: hey, do you mind skipping our 1:1 ?
<mup> Bug #1401423 changed: networker should handle joyent-specific network config better <compatibility> <joyent-provider> <network> <juju-core:Won't Fix> <https://launchpad.net/bugs/1401423>
<mup> Bug #1591962 opened: be able to set juju management network <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1591962>
<fwereade> dimitern, np at all, I'd completely forgotten
<frobware> dimitern: /etc/network/interfaces.d/eth0.cfg --<<< on MAAS images. I thought that had gone away.
<thomnico> Hello team, can someone give #1591488 and/or #1591499 some love .. it is blocking preparing demos for next week's events with partners .. (we are already late as partners sit on top of openstack)
<mup> Bug #1591488: Can not bootstrap on private openstack juju 1.25 or 2.0 <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591488>
<mup> Bug #1591499: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
<dimitern> frobware: is this on trusty?
<mgz> thomnico: those look less like bugs and more like a cry for help
<frobware> dimitern: ah yes
<thomnico> won't "can not bootstrap for days" qualify ??
<mgz> thomnico: he just doesn't have image metadata set properly
<thomnico> it does
<dimitern> frobware: is it missing when you disable cloud-init's networking?
<thomnico> hence the bug
<mgz> thomnico: I can see from the log it's not, falls back to cloud-images.ubuntu.com
<frobware> dimitern: don't know because we never do that for a MAAS image
<thomnico> yes this IS the bug
<mgz> thomnico: so, there's certainly some help needed
<mgz> thomnico: see bug 1591225 etc
<mup> Bug #1591225: Generated image stream is not considered in bootstrap on private cloud <juju-core:Incomplete> <https://launchpad.net/bugs/1591225>
<mgz> thomnico: for this one, it seems like image-metadata-url is just not set in environments.yaml or the streams have been incorrectly referenced
<thomnico> mgz, SO ??
<mgz> which is what cheryl suggested
<thomnico> I have been bootstrapping juju on openstack for 3 years and helped customers do it
<dimitern> frobware: you're testing my PR? does that eth0.cfg mess things up?
<thomnico> except for a recent change it is not a PEBKAC problem as far as I can tell
<mgz> thomnico: we haven't released a new 1.25 recently
<thomnico> in 1591488 the cli tells juju to read the metadata and juju ignored it
<mgz> thomnico: as far as I can see, image-metadata-url is not set
<mgz> it must be.
<thomnico> that is new
<thomnico> and not documented as far as I can tell
<mgz> it's what howto-privatecloud tells you to do
<thomnico> once set and checked I have : 1591499
<frobware> dimitern: no, was going back to http://reviews.vapour.ws/r/4969/
<frobware> dimitern: that it would be painless and quick (ha!)
<dimitern> frobware: ah :)
<mgz> thomnico: does 192.168.16.5 mean the same thing from the machine being brought up in the private cloud?
<mgz> if it can't route there it won't work
<thomnico> yes
<frobware> dimitern: the problem with switching to ip route commands is we have no unit tests - we didn't before, but we were /only/ calling ifup/down
<thomnico> I use a jumpserver and they share the private network
<mgz> thomnico: then you're probably into needing to set custom loglevel with trace for juju.environs.simplestreams then
<thomnico> I checked it can be reached from the started VM
<dimitern> frobware: we can use the PatchExecutableAsEchoArgs (or the newer PatchExecHelper natefinch did recently)
<frobware> dimitern: for?
<thomnico> can you copy paste the way to do that ??
<frobware> dimitern: oh, tests.
<frobware> dimitern: sure
<frobware> dimitern: however, the implementation is python.
<dimitern> frobware: I guess that will make it easier (at the slight expense of introducing non-go tests :)
<frobware> dimitern: we still have the option of going back to ifup/down in light of some understanding around the boot failure
<mgz> something like export JUJU_LOGGING_CONFIG="<root>=DEBUG;juju.environs.simplestreams=TRACE"
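The logging spec mgz shows is a semicolon-separated list of `<module>=<level>` pairs (juju parses it with its loggo library, which also validates level names). A minimal stand-alone sketch of just the syntax visible above — the real parser lives in github.com/juju/loggo:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLoggingSpec parses a spec like
// "<root>=DEBUG;juju.environs.simplestreams=TRACE" into a
// module->level map. Illustrative only; it does not validate
// that the level names are real loggo levels.
func parseLoggingSpec(spec string) (map[string]string, error) {
	levels := make(map[string]string)
	for _, entry := range strings.Split(spec, ";") {
		parts := strings.SplitN(entry, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("malformed entry %q", entry)
		}
		levels[strings.TrimSpace(parts[0])] = strings.ToUpper(strings.TrimSpace(parts[1]))
	}
	return levels, nil
}

func main() {
	levels, err := parseLoggingSpec("<root>=DEBUG;juju.environs.simplestreams=TRACE")
	if err != nil {
		panic(err)
	}
	fmt.Println(levels["<root>"], levels["juju.environs.simplestreams"])
}
```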
<dimitern> frobware: I don't recall the ifup/down was tested?
<frobware> dimitern: it wasn't. my point is that we were doing significantly less
<thomnico> thanks mgz will update the bug asap
<frobware> dimitern: and was there / is there any value in validating that we call a command ifup/down?
<dimitern> frobware: I guess not much
<frobware> dimitern: now we have lots of commands with varying args and we don't have much validation that we haven't changed something
<dimitern> frobware: but why go back to ifup/down when we know the boot slowdowns and issues ?
<mgz> thomnico: remember --keep-broken stops juju taking the bootstrap machine down if it fails
<thomnico> great to know too :)
<frobware> dimitern: separation of concerns. re my email a week ago: why don't we get the root cause fixed
<thomnico> ssh tail -f to local file was working too
<thomnico> FYI if it helps I can provide access to the environment
<dimitern> frobware: ideally, we'll wait for the root cause to get fixed
<dimitern> frobware: however, judging by the comments on that ifupdown bug about restarting networking, it's highly unlikely
<thomnico> will need 1h or so to reproduce and collect traces .. cheers,
 * dimitern whew... all tests pass again, after removing >7700 lines
<redelmann> hi there, anyone from juju resources?, have a short question
<redelmann> is there any way to set a default resource in metadata.yaml?
<redelmann> i'm looking to set a resource from local charmdir on deploy
<redelmann> my python workaround: mypath = resource_get(name) or os.path.join(charm_dir(), name)
<dimitern> mgz: ping
<mgz> dimitern: yo
<dimitern> mgz: hey, I'm about to propose a PR which removes the legacy address allocation code
<mgz> okay
<dimitern> mgz: once it lands, the CI job on AWS that uses the feature flag will no longer pass
<mup> Bug #1591499 opened: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
<mgz> I'll make it 1.25 only
<dimitern> mgz: ok, thanks!
<mgz> so, for now aws containers are just not addressable right?
<dimitern> yeah
<dimitern> with the fan everywhere PoC though, we should get addressable containers by default
<mup> Bug #1591499 changed: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
<mup> Bug #1591499 opened: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
<dimitern> frobware, voidspace, babbageclunk: here it is - http://reviews.vapour.ws/r/5048/, please have a look
<frobware> dimitern: do you think there's any value in having a branch to CI test this change?
<dimitern> frobware: we can do that, but I doubt it's necessary
<frobware> dimitern: sure - just posing the question. And then I read the scrollback with mgz - so seems fine
<mgz> frobware: yeah, I just disabled/version-selected the ci jobs in advance
<dimitern> voidspace: if you can also have a look at http://reviews.vapour.ws/r/5040/ will be great ;)
<babbageclunk> fwereade: The service -> application change - do you know if there's still stuff being done on it?
<babbageclunk> fwereade: My understanding is it was just made to things that were externally visible (to make it vaguely manageable). If I'm making changes to internal things that are still using the old name, is it reasonable to also convert them to the new name at the same time?
<natefinch> dimitern, frobware: where are we with the lxd container networking issues?  my lxc to lxd conversion branch (where we drop lxc support) can't land until we have parity with the old abilities to network lxc containers.
<alexisb> natefinch, we will not achieve complete parity given functional changes in the networking of lxd
<alexisb> natefinch, there was one specific bug we needed to address for CI
<alexisb> natefinch, let me see if I can find it
<fwereade> babbageclunk, yes please, do convert as you can
<babbageclunk> fwereade: ok, thanks
<mgz> alexisb: dimitern has proposed a change that just pulls out the old feature flag path completely
<alexisb> mgz, yep I see it
<voidspace> dimitern: ping
<katco> redir: standup time
<thomnico> hello again folks  Bug #1591499 updated with trace
<mup> Bug #1591499: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
<alexisb> katco, redir is sprinting this week I am pretty sure
<katco> alexisb: ah ok
<dimitern> voidspace: pong
<dimitern> natefinch: we have a few known lxd issues, most of which are in progress of getting fixed
<dimitern> natefinch: apart from those, I believe we have feature parity with lxc
<natefinch> dimitern: awesome :)
<voidspace> dimitern: I'm (still) trying to understand link layer device parent references
<voidspace> dimitern: when generating increment ops we attempt to parse the name as a global key first - to see if it's a reference to another machine
<voidspace> dimitern: as far as I can tell, parentname is only *ever* set to a device name on the same machine
<voidspace> dimitern: am I missing something?
<dimitern> voidspace: ParentName can be allowed to be a global key in only 1 case
<voidspace> dimitern: where is the code for that?
<voidspace> dimitern: (and what is that case)
<voidspace> if it's at all possible I'll have to allow for it I suppose
<dimitern> voidspace: when ParentName = "m#42#d#br-eth0" the child device will be e.g. m#42/lxd/0#d#eth1
<voidspace> dimitern: right, but where in the code is that parent name generated
<voidspace> dimitern: in the machiner it looks like it's only ever set to device name
<mup> Bug #1592031 opened: update-clouds does not create public-clouds.yaml if public-clouds.syaml matches compiled-in values <compatibility> <jujuqa> <update-clouds> <juju-core:Triaged> <https://launchpad.net/bugs/1592031>
<voidspace> ParentDevice
<dimitern> voidspace: assuming machine 42 is the host machine of a container 42/lxd/0, and "eth1" is the container ethernet device connected to the "br-eth0" bridge device on the host machine
<dimitern> voidspace: yeah, the only place that uses the above scenario is SetContainerLinkLayerDevices, called in the provisioner
<dimitern> voidspace: ParentDevice returns the device itself, if it's set
<voidspace> dimitern: ah, the provisioner
<voidspace> dimitern: I did miss that - thanks
<dimitern> voidspace: yeah, but there are state tests as well
<dimitern> (apiserver/provisioner)
<dimitern> natefinch: btw my monstrous branch removing the legacy address allocation (http://reviews.vapour.ws/r/5048/) will conflict with your lxc removal branch, but hopefully not too badly, as it's mostly removals
<natefinch> dimitern: yeah, we'll see... I don't think it'll be a big deal... probably just resolving you changed it vs. I deleted it, and vice versa.
<natefinch> dimitern: between our two branches we deleted like 13,000 lines of code :D
<dimitern> natefinch: yeah, and it feels great! :D
<natefinch> cmars: is your npipe branch ready to go?
<frobware> dimitern: ping, you about? Wanted to catch up regarding fe80:: addrs
<dimitern> frobware: yeah
<dimitern> frobware: it's not a big deal, I remembered reading about it and thought it might be causing some of the issues perhaps
<frobware> dimitern: but does it definitively happen?
<natefinch> cmars: I'm gonna answer that for you and say no it's not :)  at the very least there's some commented out code in there... ping me when you think it's good to go, or if you need me to take it over and get it to a landable state.
<natefinch> alexisb: ^
<frobware> dimitern: http://pastebin.ubuntu.com/17293984/
<natefinch> cmars: we want to try to get that landed today, so if me taking it over is what it'll take, that's fine.  Just wanted to make sure you don't have local changes or anything.
<dimitern> frobware: I haven't tested it specifically
<frobware> dimitern: so that ^^ PB seems to propagate the fe80 addrs
<dimitern> frobware: yeah, looks ok
<dimitern> babbageclunk: thanks for sending that mail btw
<babbageclunk> dimitern: Cheers - wasn't sure what info to put in. It was ok?
<rick_h_> dimitern: frobware dooferlad call?
<frobware> rick_h_: omw
<dimitern> oops, omw
<ericsnow> fwereade: ping
<dimitern> babbageclunk: you've covered all the important bits I think
<babbageclunk> dimitern: cool
<dimitern> frobware: so about http://reviews.vapour.ws/r/5040/ ..
<dimitern> frobware: should I leave it for tomorrow I guess?
<frobware> dimitern: can I sleep on that one; I have a fix for my other review
<frobware> dimitern: it may be that it's sensible to land your patch but only with a subset of my patch
<frobware> dimitern: sound ok?
<dimitern> frobware: ok we can sync tomorrow then; and the other one ? http://reviews.vapour.ws/r/5048/
<frobware> dimitern: I would need to review but no real concerns there; we should update the release notes to say it has gone completely now
<dimitern> frobware: yeah, good point - I'll add a comment on the docs planning doc
<dimitern> ok, I should be going
<bdx> hey has anyone hit this yet, or know a way around it -> https://bugs.launchpad.net/juju-core/+bug/1592101
<mup> Bug #1592101: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
<mup> Bug #1592101 opened: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
<perrito666> bbl
<mup> Bug #1592101 changed: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
<mup> Bug #1592101 opened: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
<mup> Bug #1588574 changed: Session already closed in state/presence <blocker> <ci> <intermittent-failure> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1588574>
<mup> Bug #1590065 changed: container/lxd: One rename too far -> "application", "restart", "lxd-bridge" <regression> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1590065>
<mup> Bug #1588574 opened: Session already closed in state/presence <blocker> <ci> <intermittent-failure> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1588574>
<mup> Bug #1590065 opened: container/lxd: One rename too far -> "application", "restart", "lxd-bridge" <regression> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1590065>
<mup> Bug #1588574 changed: Session already closed in state/presence <blocker> <ci> <intermittent-failure> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1588574>
<mup> Bug #1590065 changed: container/lxd: One rename too far -> "application", "restart", "lxd-bridge" <regression> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1590065>
<alexisb> natefinch, in our 1x1 hangout when you are ready
<natefinch> alexisb: omw
<thumper> fwereade: call?
<fwereade> thumper, joining
<natefinch> ahh, tests running against mongodb on windows.  my favorite
<natefinch> sinzui, mgz: is there a trick to getting the windows tests to run?  I get a million of these errors:   cannot replace tools directory: cannot update tools symlink: rename C:\\Users\\Nate\\AppData\\Local\\Temp\\check-6656273794189705274\\365\\var\\lib\\juju\\tools\\tmpfilea49d086a-99df-452f-8dda-3a7f0bd811b8 C:\\Users\\Nate\\AppData\\Local\\Temp\\check-6656273794189705274\\365\\var\\lib\\juju/tools/machine-0: The system cannot find the file
<natefinch> specified.
<natefinch> mgz, sinzui: (this is the cmd/jujud/agent tests, for reference)
<natefinch> ahh... I think the answer is "run the tests as administrator"
<frobware> dooferlad: if you're about can you take a peek at my updates to http://reviews.vapour.ws/r/4969/
<dooferlad> sorry frobware: not interruptible right now
<mup> Bug #1592155 opened: restore-backup fails when attempting to 'replay oplog'. <juju-core:New> <https://launchpad.net/bugs/1592155>
<thumper> davechen1y: this is the bug you are working on yes? https://bugs.launchpad.net/juju-core/+bug/1588143
<mup> Bug #1588143: cmd/juju/controller: send on a closed channel panic <blocker> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1588143>
<perrito666> this error is completely nonsensical: state changing too quickly; try again soon
<perrito666> what is the user going to do to try that again soon?
<perrito666> and what is more, state is not changing too quickly, the assertion is just plain broken
<perrito666> </rant>
<wallyworld> natefinch: how goes the lxc->lxd branch landing?
 * perrito666 ponders learning to crochet something while running state tests
<thumper> cherylj: the api explicit tagging branch landed
<natefinch> wallyworld: I think I'm still blocked on the networking stuff. It's a little unclear.  I made a fix friday so we'd treat lxc containers defined in bundles as lxd.  That was easy, thankfully.  Worked on the npipe bug today: https://bugs.launchpad.net/juju-core/+bug/1581157  there's a fix proposed, but it needed some major work, so I've been working on that.  will try to get it landed tonight.
<mup> Bug #1581157: github.com/juju/juju/cmd/jujud test timeout on windows <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged by dave-cheney> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1581157>
<wallyworld> natefinch: i'm told this morning that christian had fixed the bug already?
<wallyworld> so we need to land the work if we can if we are sure the bug is fixed
<natefinch> wallyworld: oh, I may have been confused
<wallyworld> it was apparently a network related issue
<wallyworld> and the sapphire guys were onto it, or that's my understanding
<wallyworld> so may just need a retest to confirm and then you can land
<alexisb> natefinch, you have a green light to land
<natefinch> alexisb: thanks
<natefinch> alexisb: rebased and doing some tests now
<natefinch> I guess it works?
<natefinch> are we supposed to be able to get to services deployed inside lxd from the outside?
<natefinch> because, this is not super promising: public-address: 10.0.4.21
<mup> Bug #1592179 opened: Juju2 with MAAS2, log shows errors after having created a controller <juju-core:New> <https://launchpad.net/bugs/1592179>
<natefinch> but I don't think that worked with lxc either, so I guess that's fine
<natefinch> landing now
<perrito666> you shouldn't fly and chat :p
<natefinch> heh
<wallyworld> rick_h_: did we need to catch up about settings?
<natefinch> afk for a bit, putting kids to bed
<bdx> hey whats up everyone? - Quick question on the manual provider .... is `juju enable-ha` supported for manual provider?
<natefinch-afk> bdx: I think you can do it with add-machine and then juju ensure-availability --to
<natefinch-afk> bdx: I haven't tried it though
<natefinch-afk> bdx: no reason it shouldn't work, in theory
<bdx> natefinch-afk, yeah, except ensure-availability isn't a command anymore ... it's now 'enable-ha'
<natefinch-afk> bdx: oh, I was looking at juju1.. for 2, not sure
<bdx> yea ... I don't think the functionality is there ....
<perrito666>   wallyworld anastasiamac axw redir brt
<wallyworld> ok
<wallyworld> bdx: the only difference in 2.0 is that the command has been renamed. enable-ha --to should work for manual
<bdx> wallyworld: yea, even after I add a machine to my 'controller' model, so I have 0 and 1 ... I run 'juju enable-ha --to 1' and get -> ERROR failed to create new controller machines: use "juju add-machine ssh:[user@]<host>" to provision machines
<wallyworld> bdx: you need 3 or 5 or 7 etc machines for ha
<axw> --to 1,2 I think?
<wallyworld> so you need to add at least 2 additional machines
<bdx> ooooh, let me try
<bdx> sick
<bdx> wallyworld: good call
<wallyworld> yay
<bdx> natefinch-afk, wallyworld: thanks!
<rick_h_> wallyworld: sure, got time?
<wallyworld> rick_h_: join us in https://hangouts.google.com/hangouts/_/canonical.com/tanzanite-stand if you want
<rick_h_> omw
#juju-dev 2016-06-14
<menn0> mwhudson, davechen1y: do we already know about the panic inside Go 1.7 that the worker/uniter tests induce?
<mwhudson> menn0: no
<menn0> mwhudson, davechen1y : http://paste.ubuntu.com/17312432/
<menn0> mwhudson, davechen1y : doesn't happen under 1.6
<mwhudson> menn0: exciting
<mwhudson> menn0: that's with golang-1.7 from my ppa?
<menn0> mwhudson: it is indeed
<mwhudson> menn0: what command did you run?
<mwhudson> go test -c github.com/juju/juju/worker/uniter ?
<menn0> go test ./worker/uniter/... (while in the github.com/juju/juju directory)
<mwhudson> heh i don't have my own ppa installed
<mwhudson> enabled
<mwhudson> whatever
<perrito666> axw: you were right, it's --nojournal
<axw> perrito666: cool, hopefully easy to fix then
<perrito666> well, it's a commit to testing then another to master
<menn0> mwhudson: no -c when I did it... just trying that now
<mwhudson> menn0: well it's clearly the compiler crashing
<mwhudson> just wondering which package it's crashing on
<menn0> mwhudson: go test -c for worker/uniter works.
<mwhudson> menn0: hmm
<menn0> i'll track down the subpackage
<mwhudson> thanks
<mwhudson> i'm installing and updating things and building go tip
<menn0> mwhudson: I've got to go for lunch but I have a loop doing go test over each subpackage
<mwhudson> menn0: ok, i'll see if it reproduces for me
<davechen1y> menn0: news to me
<menn0> mwhudson, davechen1y: it's worker/uniter/runner/context
<mwhudson> huh
<mwhudson> works for me
<mwhudson> /usr/lib/go-1.7/bin/go test -c ./worker/uniter/runner/context
<menn0> thumper: you're not the only one who gets to delete stuff: http://reviews.vapour.ws/r/5049/
<mwhudson> menn0: do you have golang-tip installed? maybe you could try with that? (after lunch when a new version should have built)
<menn0> mwhudson: i'll try with current master
<mwhudson> or that, yeah
<mwhudson> there have been some compiler fixes
<menn0> mwhudson: it still happens with master for me
<menn0> mwhudson: i'll wait for the new go version and try again
<mwhudson> menn0: hmm
<mwhudson> menn0: what exact command line?
<mwhudson> menn0: also, are you still on wily?
<menn0> go test -c ./worker/uniter/runner/context
<mwhudson> (just trying to think up differences...)
<menn0> nope on xenial
<mwhudson> ok
<mwhudson> and "which go"?
<menn0> $ which go
<menn0>  /home/menno/bin/go
<menn0> $ readlink -f /home/menno/bin/go
<menn0>  /usr/lib/go-1.7/bin/go
<mwhudson> haha
<thumper> menn0: reviewed
<mwhudson> ok thanks
<menn0> mwhudson:
<menn0> $ dpkg -S /usr/lib/go-1.7/bin/go
<menn0> golang-1.7-go: /usr/lib/go-1.7/bin/go
<mwhudson> menn0: and it happens every time for you?
<menn0> mwhudson: seems to
<menn0> mwhudson: it's not stopping me doing what I need to do. just noticed it when running tests.
<menn0> mwhudson: this is the package version: 1.7~~devel.201605030049.499cd33+ppa1~ubuntu16.04.1
<menn0> back in a bit
<mwhudson> menn0: ooh
<mwhudson> menn0: that's the old ppa
<mwhudson> menn0: use ppa:gophers/archive instead
<menn0> mwhudson: that could be it then. installing the updating package from the newer ppa now
<menn0> mwhudson, davechen1y : well that fixed it. nothing to see here. sorry for the noise
<davechen1y> np
<mup> Bug #1592210 opened: juju2beta9 agent stuck in allocating state <conjure> <juju-core:New> <https://launchpad.net/bugs/1592210>
<stokachu> hey anyone seen http://paste.ubuntu.com/17313966/ line 184 before?
<axw> wallyworld: I'm back, just replying to an email and will ping you to talk about openstack/controller-uuid
<wallyworld> axw: ok, i'm just finishing a review, haven't looked too deeply again yet, will ping you
<axw> wallyworld: ok
<wallyworld> stokachu: ah, that was an upstream bug in godbus
<wallyworld> i thought it had been fixed
<wallyworld> maybe it was not fixed properly
<stokachu> wallyworld: yea i can pretty easily reproduce
<wallyworld> can you file a bug and i'll get someone to look?
<stokachu> wallyworld: that was the bug 1592210
<mup> Bug #1592210: juju2beta9 agent stuck in allocating state <conjure> <juju-core:New> <https://launchpad.net/bugs/1592210>
<stokachu> wallyworld: ^
<wallyworld> ok, ta
<stokachu> thanks! we were going crazy trying to figure out why all but 1 service was failing
<wallyworld> natefinch-afk: ^^^^^ can you see if your previous fix maybe needs tweaking?
<stokachu> wallyworld: i can reproduce it when doing a bunch of single application deploys one right after the other
<stokachu> wallyworld: not so much when doing just a bundle deploy
<wallyworld> damn ok
<wallyworld> i am not entirely familiar with the upstream issue off hand
<stokachu> wallyworld: cool np, just have nate ping me and i can test or answer any other questions
<wallyworld> but i did think it was fixed, maybe it was done in 1.25 not master, will need to check
<wallyworld> ty
<stokachu> ok cool
<wallyworld> axw: i can just use the goose client to get all groups and do filtering in the openstack code, so i'll look at that and see how it pans out
<mup> Bug #1592221 opened: restore-backup empty CloudName not valid <backup-restore> <blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1592221>
<perrito666> oh cmoooon
<mup> Bug #1591940 changed: juju failed unmarshal the /server/{server_id} api  response body <juju-core:New> <https://launchpad.net/bugs/1591940>
<menn0> thumper: i've had 3 attempts at trying to land a PR now and I keep getting random "unexpected message" errors from mongo
<menn0> seems like "unexpected message" is the new "bad record MAC"
<thumper> :(
<davecheney> menn0: linkage
<davecheney> maybe it's something I broke
<menn0> davecheney: http://juju-ci.vapour.ws:8080/job/github-merge-juju/8086/consoleFull
<menn0> davecheney: the failures are in areas not touched by the PR
<menn0> davecheney: these are into the model-migration branch which is a bit behind master
<davecheney> nope, that's not my fault
<davecheney> that's just the usual mongo couldn't actually talk TLS when it said it could
<davecheney> nonsense
<davecheney> followed by it segfaulting or just crapping the bed
<menn0> so i'm just unlucky then
 * menn0 tries again
<davecheney> menn0: don't forget the bug that was introduced that means if mongo is not running with /tmp on a ramdisk
<davecheney> then it will take 20+ seconds for the test to kick off
<menn0> that's a mongo 3.2 thing right?
<davecheney> nope mongo 2.4
<davecheney> which is what CI runs
<menn0> davecheney: hmm ok. I didn't know about that one
 * menn0 does have tmp on a ramdisk though
<davecheney> menn0: have you been following the "ci times increasing at alarming rates" thread ?
<menn0> davecheney: yes
<davecheney> menn0: if you didn't have /tmp on a ramdisk
<davecheney> then you might be upset about the reduction in lifespan of your ssd :)
<menn0> davecheney: yep, I remember seeing that
<davecheney> that'll be hurting CI as it runs mongo 2.4.9
<thumper> how come we have enable-ha but no disable-ha?
<thumper> just curious
<natefinch> thumper: I don't think we have code for disable HA
<thumper> if I want to go from 3 back to 1, can I?
<thumper> does going from 5 -> 3 work?
<natefinch> thumper: not to my knowledge
<natefinch> thumper: you can go up, but you can't go down
<thumper> why?
<natefinch> thumper: there's nothing inherently stopping us from doing it, it was just going to be some significant added work, which we were going to get to later....
<natefinch> thumper: is it later yet? :)
<thumper> heh
<thumper> I think I can get what I want by going :
<thumper> ensure-ha
<thumper> kill machine 2
<thumper> and then ...
<thumper> what?
<thumper> ensure-ha again?
<perrito666> davecheney: menn0 we should not have journal enabled in the test for 2.4, that was a bit of a careless change
<natefinch> I don't know what the commands do in 2.0.  in juju1, ensure-availability would start new machines when you did ensure-ha again
<natefinch> so basically, ensure-ha is "please try to get me into the state I original asked for when I did ensure-availability the first time"
<perrito666> menn0: https://pastebin.canonical.com/158638/ in github.com/juju/testing will make your test be less sad
<menn0> perrito666: ok, but i'm seeing the problems in CI... why was --nojournal removed in the first place?
<perrito666> menn0: someone did changes to testing to enable mongo3 but dropped support for mongo2.4 in the process
<perrito666> and by support I just mean optimization
<wallyworld> natefinch: looks like you forgot to take out the ubuntu-cloudimg-query stuff
<menn0> davecheney, perrito666: the model-migration branch is sufficiently behind master that it doesn't have that change in place
<natefinch> wallyworld: last I'd heard it was still up in the air.... was going to ask you since I saw it failing.  I guess I know what the answer is.
<wallyworld> you're getting mixed up
<menn0> so that's not relevant to what I'm seeing
<davecheney> menn0: thumper https://github.com/juju/juju/pull/5607
<wallyworld> there's a difference between the mechanism to download lxc images and image caching in general
<davecheney> menn0: you probably have it by accident as it is on the juju/testing repository
<menn0> wallyworld, perrito666: but still good to know (and we should fix it)
<menn0> davecheney: nope. I made sure.
<wallyworld> natefinch: we need to remove the bits to download and cache lxc images, but not yet the image caching store
<davecheney> menn0: no idea then
<davecheney> just mongo be awful then
<natefinch> wallyworld: I didn't realize there were different parts of it... kinda thought of it all as one feature.  So I can drop the image downloading stuff, I take it?
<wallyworld> yep
<menn0> davecheney: reviewed
<davecheney> ta
<menn0> davecheney: it just failed - known flaky resumer test
<menn0> not my day
<menn0> thumper: here's WatchMinionReports: http://reviews.vapour.ws/r/5051/
 * thumper looks
<thumper> menn0: I'm about to run out and take Maia to BJJ
<thumper> back soonish
<menn0> thumper: no rush
 * menn0 stacks the PRs
<davecheney> menn0: having the same problem http://juju-ci.vapour.ws:8080/job/github-merge-juju/8088/console
<davecheney> on master branch
<menn0> yeah, it's pretty awful
<axw> wallyworld: I'm probably going to have to tack and do the add-model changes before finishing validation
<axw> wallyworld: otherwise everything except lxd will be broken
<wallyworld> eek, ok
<axw> wallyworld: I'll push what I've got if you want to review what's there
<axw> I don't *think* it will change any more
<wallyworld> i'm stuck fixing tests with this controller uuid change, won't get to look till a little later
<axw> wallyworld: no worries. there's a WIP PR up now, whenever you can
<wallyworld> will do, ty
<cherylj> natefinch: ping?
<natefinch> cherylj: yo
<cherylj> did you land the update for godbus in master?  (I think so)
<cherylj> I see the 1.25 update here: https://github.com/juju/juju/pull/5363
<natefinch> cherylj: I didn't
<cherylj> natefinch: was it because master was blocked?  or some other reason?
<natefinch> cherylj: yes, because master was blocked, and then it fell off my radar
<cherylj> heh, perfectly understandable
<cherylj> natefinch: could you merge it as $$fixes-1592210$$ ?
<cherylj> stokachu: is running into that issue for bug 1592210
<mup> Bug #1592210: juju2beta9 agent stuck in allocating state <blocker> <conjure> <juju-core:Triaged> <https://launchpad.net/bugs/1592210>
<natefinch> yeah, it's an ugly bug, definitely should go in
<cherylj> thanks!!
<mup> Bug #1583893 changed: 1.25.5: goroutine panic launching container on xenial <landscape> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1583893>
<natefinch> gah... helps if I actually commit my changes before rerunning CI :/
<axw> wallyworld: does this sound sensible to you? "juju add-model --credential foo" will automatically upload a credential "foo" from the client to controller if it doesn't exist in the controller already
<wallyworld> i think so - it should print what it does
<axw> wallyworld: yep will add some output
<wallyworld> ok
<dimitern> frobware: sorry, I'll be right there
<frobware> dimitern: 1:1 in 5? Ok?
<dimitern> frobware: ok, sure
<dimitern> jam, dooferlad?: standup?
<frobware> dimitern: https://github.com/frobware/juju/tree/master-lp1566801-strike2
<frobware> dimitern: requires your change
<frobware> babbageclunk: http://pastebin.ubuntu.com/17320468/
<frobware> dimitern: for i in /usr/bin/apt* ; do sudo ln -s /usr/bin/eatmydata /usr/local/bin/$(basename $i); done
<frobware> dimitern: http://pastebin.ubuntu.com/17320499/
<axw> wallyworld: I've just put up a new WIP branched off the other one that updates add-model, tested live with lxd and aws
<voidspace> dimitern: frobware: babbageclunk: ok, I think it's ready for review http://reviews.vapour.ws/r/5019/
<dimitern> voidspace: looking
<frobware> voidspace: adding to my review TODO...
<voidspace> thanks
<voidspace> dimitern: frobware: migration import (and test) is where the fiddly stuff with global keys and references is done
<voidspace> validation of the imported model still to be added as a separate step - in fact that's next as the network entities have all been done now
<dimitern> ok, cheers
<frobware> dimitern: AA-FF reviewed
<voidspace> babbageclunk: you're right about that comment being incorrect - it's from a copy/paste + search & replace
<voidspace> babbageclunk: the filename plurality is consistent with ipaddresses and ipaddress_test - but that's inconsistent with the others
<voidspace> babbageclunk: so I'll change both
<babbageclunk> voidspace: :)
<dimitern> frobware: thanks!
<frobware> dimitern: do you run maas20 in lxd at all?
<dimitern> frobware: I tried some weeks ago, but couldn't get it to work (mounting loop devs was the issue)
<admcleod> any juju devs here?
<frobware> dimitern: I tried many months ago. IIRC, you have to mknod the devices first
<dimitern> frobware: i didn't get that far tbo
<dimitern> tbh even
<dimitern> admcleod: you're at the right place :) what do you need?
<admcleod> dimitern: theres a guy in #juju (brunogirin) with a juju 2.0 / go stack trace, and ive hit the extent of my troubleshooting :}
<dimitern> admcleod: ok, let's have a look then
<admcleod> dimitern: thanks :)
<mup> Bug #1592366 opened: Juju machines failed to die, with added storage <juju-core:New> <https://launchpad.net/bugs/1592366>
<mup> Bug #1592365 opened: Juju 2.0 crashes on bootstrap with datacentred.co.uk OpenStack cloud <juju-core:New> <https://launchpad.net/bugs/1592365>
<frobware> dimitern, voidspace, babbageclunk: backport from master to 1.25 -- http://reviews.vapour.ws/r/5054/
<dimitern> frobware: looking
<aisrael> axw: Hey, are you still the best person to talk to about storage?
<dimitern> frobware: LGTM on that backport btw
<rick_h_> aisrael: +1 to the right person but he's APAC so zzzz atm
<aisrael> rick_h_: ack, thanks. I'll send him an email
<frobware> dimitern: ping
<frobware> dimitern: can you join the networking tests call?
<dimitern> frobware: omw
<frobware> dimitern, voidspace, babbageclunk, dooferlad: FYI (last comment in): https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1513529
<mup> Bug #1513529: cloud images should be built with the same /etc/apt/sources.list as server images <awaiting-canonical> <bot-comment> <cloud-images:Fix Released by daniel-thewatkins> <livecd-rootfs (Ubuntu):Fix Released by daniel-thewatkins> <https://launchpad.net/bugs/1513529>
<frobware> dimitern: ping - have 10 minutes to talk about the removal of "ifup -a || true"?
<dimitern> frobware: yeah
<dimitern> frobware: it wasn't necessary anymore (tests on aws and maas confirmed it)
<frobware> dimitern: this was my conclusion too
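[Editor's note: the "|| true" guard being removed above forces an exit status of 0, so a failing "ifup -a" could never be detected. A minimal demonstration of the masking, with a stand-in command instead of the real ifup:]

```shell
# Stand-in for a failing "ifup -a".
fail() { echo "ifup: simulated failure" >&2; return 1; }

if fail || true; then
    echo "masked: caller sees success"
fi

if ! fail; then
    echo "unmasked: caller sees the real failure"
fi
```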
<dimitern> frobware: hey, no eth0.cfg in the maas trusty cloud image from dailies
<frobware> dimitern: yep, saw that too. I added -proposed and installed curtin-389 this morning
<dimitern> frobware: and trusty-backports is there
<dimitern> sweet!
<frobware> dimitern: didn't look for that, but \o/
<frobware> dimitern: things are beginning to come together
<dimitern> frobware: due to that curtin issue, every other trusty deployment fails, but still, once it's done lxd also works
<frobware> dimitern: we still have the 50-cloud-init.cfg conflation with LXD containers on MAAS
<dimitern> frobware: because of that missing cloud-init fix?
<frobware> dimitern: we need a variation of: https://github.com/frobware/juju/tree/master-lp1566801-strike2
<frobware> dimitern: one that isn't so intrusive and that will work on KVM too
<frobware> dimitern: but with that branch and latest curtin/daily cloud image it should be just peachy!
<dimitern> frobware: yeah! re kvm - we should see what can be done with virt-copy-in
<frobware> dimitern: I really want to look at seeding cloud-init to DWIM & DTRT
<frobware> dimitern: because that's a common interface for both LXD and KVM
<dimitern> frobware: that would be better, indeed
<dimitern> gsamfira, bogdanteleaga: ping?
<dimitern> or anybody from cloudbase? there's a guy in #juju (gennadiy) who asks how to get the password of a windows instance deployed with juju on openstack
<dimitern> frobware: still there?
<alexisb> dimitern, he will be back later
<dimitern> alexisb: ok
<dimitern> natefinch: hey, did you see my comment and wallyworld's comments on the LXC-to-LXD?
<mup> Bug #1592456 opened: juju publish hangs <jujuqa> <publish> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1592456>
<frobware> cherylj: ping - any concerns for landing stuff into 1.25 (e.g., http://reviews.vapour.ws/r/5054/)?
<cherylj> frobware: that's fine, thanks :)
 * katco goes to run errand
<natefinch> jam: just to make sure... there's also an imagemanager facade.... that's ok to stay around?
<natefinch> jam: or is that part of what should be deleted?
<jam> natefinch: looking
<jam> natefinch: so I think that should stay. I think that could be adapted to how we cache LXD images as well, so that you can save space, etc.
<natefinch> jam: ok
<jam> I wouldn't cry if we removed it, because it won't do anything in 2.0
<jam> but when we do bring something like it back, it would be nice for a 2.0 client to be able to ask a 2.1 server to list/delete images.
<natefinch> jam: mit means we have a command that doesn't do anything, juju cached-images
<jam> natefinch: so as stated above, in 2.0 it won't do anything, but likely it will in 2.1 and I don't see a reason why it couldn't look like that
<jam> which would mean a 2.0 client could ask a 2.1 server to list/delete cached images.
<jam> *but* if there is something that makes it hard to keep around
<jam> I wouldn't feel terrible removing something that doesn't do anything right now.
<perrito666> cmars: https://www.youtube.com/watch?list=PLKfWL8IXgKBte4TfD53pLaHONfSYCX0RH&v=ZZTp8NeGbmk
<natefinch> sinzui: my PR had its job cancelled, but now the bot isn't picking it up - https://github.com/juju/juju/pull/5583
<natefinch> mgz: ^
<sinzui> natefinch: I will find it and attempt a rebuild.
<sinzui> natefinch: the lander didn't call back to the PR so it doesn't know it needs to look again
<sinzui> natefinch: Your job is rebuilding. http://juju-ci.vapour.ws:8080/job/github-merge-juju/8111/console
<natefinch> sinzui: thanks
<sinzui> natefinch: once it calls back to the PR you can try again if you need... or you can just rebuild
<natefinch> sinzui: I don't know how to make it rebuild
<sinzui> natefinch: from http://juju-ci.vapour.ws:8080/job/github-merge-juju/ I can see "natefinch lxc-to-lxd" among the jobs. Its build 8104. when I follow that link to see the actual build I also see a link to rebuild on the left.
<sinzui> natefinch: ah, but you also need to be logged in
<natefinch> sinzui: ahh, ok, I see, thanks.
<mup> Bug #1592537 opened: No test for upgrade-juju upload-tools for non-controller model <juju-core:Triaged by macgreagoir> <https://launchpad.net/bugs/1592537>
<alexisb> natefinch, looks like there are some issues with building your branch is it clear from the build log?
<natefinch> alexisb: gah... I had to rebase on top of some changes, and there were conflicts.  I'm sure I can figure it out.
<natefinch> alexisb: I think I lost some changes dimiter made, because I renamed a file
<natefinch> alexisb: should be a trivial fix
<alexisb> natefinch, thanks
<natefinch> alexisb: fixed and resubmitting
<alexisb> natefinch, thank you
<alexisb> natefinch, were are we on the deps update for the dbus package?
<alexisb> that is the last thing for a beta9 release
<alexisb> do you need me to hand that off to someone else?
<natefinch> alexisb: it's super trivial, one sec
<natefinch> alexisb: if someone can just cherry pick this commit and fix the conflicts... 9bdc72a76405b5e247f4561aafbdb4775136afc6
<natefinch> alexisb: sorry, I gotta run, the kids are crazy and need dinner.  I'll be back on in a few hours, but that change is super trivial
<alexisb> natefinch, yep figured it was that time :)
<natefinch> alexisb: doesn't even need to be cherry pick... it's just 2 lines in dependencies.tsv
<alexisb> thanks for the work today natefinch, I will follow-up with someone on the deps issue
<natefinch> alexisb: thanks
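[Editor's note: the handoff natefinch describes, replaying one commit onto another branch and resolving any conflicts, amounts to a plain git cherry-pick. A throwaway-repo sketch of the mechanics; the repo, branch names, and file contents below are invented for illustration (the real commit is the hash he quotes):]

```shell
set -e
rm -rf depdemo && git init -q depdemo && cd depdemo
git config user.email demo@example.com && git config user.name demo

# Base branch: a dependencies.tsv pinning godbus at an old revision.
printf 'github.com/godbus/dbus\tgit\toldrev\n' > dependencies.tsv
git add dependencies.tsv && git commit -qm 'initial deps'

# A fix branch bumps the pin (standing in for the commit natefinch names).
git checkout -qb fix-godbus
printf 'github.com/godbus/dbus\tgit\tnewrev\n' > dependencies.tsv
git commit -qam 'update godbus pin'

# Back on the base branch, replay just that one commit.
git checkout -q -
git cherry-pick fix-godbus >/dev/null
grep newrev dependencies.tsv
```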
<alexisb> ericsnow, katco Is this something that one of you can pick up from natefinch-afk quickly? ^^^
<katco> alexisb: the cherry-picking is easy, but the resolving conflicts always requires context
<alexisb> relates to bug https://bugs.launchpad.net/juju-core/+bug/1592210
<mup> Bug #1592210: goroutine panic launching container on xenial <blocker> <conjure> <juju-core:In Progress by natefinch> <juju-core 1.25:Fix Committed by natefinch> <https://launchpad.net/bugs/1592210>
<katco> alexisb: we are both under tight deadlines per wallyworld; up to him?
<alexisb> katco it should be a couple-line dependency change
<katco> alexisb: ok, let me get to a stopping point and i'll TAL rq
<wallyworld> i can do it also
<alexisb> katco, if you are actively work something continue
<alexisb> wallyworld, I would like to get this in in the next hour if possible as it will allow for a green light for beta9
<wallyworld> yep, sure, figured as much :-)
<alexisb> wallyworld, thanks
<wallyworld> alexisb: as per my email, you know we need to land a PR unless we want to ship with add-model broken right?
<alexisb> ug
<alexisb> wallyworld, I didnt realize it was broken
<alexisb> that changes things
<wallyworld> sorry, i should have been more explicit
<alexisb> where are we with that pr?
<wallyworld> it will land asap this morning
<alexisb> wallyworld, no I could have looked harder not your fault
<alexisb> wallyworld, ok that is fine
<alexisb> we just need to get a CI run started your morning
<alexisb> so that we can release asap tomorrow
<wallyworld> ok
<alexisb> wallyworld, thumper I will need some of your time post the sts call
<thumper> ack
<mup> Bug #1589680 changed: Upgrading to cloud-archive:mitaka breaks lxc creation <canonical-bootstack> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1589680>
<mup> Bug #1592582 opened: Failing unit tests: TestAddStorageMixDistinctAndNonDistinctErrors <juju-core:New> <https://launchpad.net/bugs/1592582>
<mup> Bug #1592583 opened: Unit test failures in github.com/juju/juju/cmd/juju/commands <juju-core:New> <https://launchpad.net/bugs/1592583>
<alexisb> https://hangouts.google.com/hangouts/_/canonical.com/core-leads-call
<alexisb> wallyworld, thumper ^^
<wallyworld> katco: ericsnow: did you guys have a moment? https://hangouts.google.com/hangouts/_/canonical.com/tanzanite-stand
<mup> Bug #1477010 changed: provider/openstack: AttachVolumes does not observe asynchronous attachment failures <openstack-provider> <juju-core:Invalid> <https://launchpad.net/bugs/1477010>
<alexisb_> cherylj, did you want to chat?
<cherylj> alexisb_: I would like to at some point to figure out how we're going to address the failing CI tests before an RC
<cherylj> dare to dream, right?
<katco> cherylj: hold on to those stones
<katco> cherylj: you can dream... forever!
<cherylj> we had a saying at IBM  "the sweet release of the grave"
<katco> cherylj: well, in juju we have tombs, catacombs, undertakers, and reaping.
<katco> cherylj: coincidence?
<perrito666> davecheney: tx a lot for the fix on testing
<cherylj> this team has an obsession with death
<alexisb_> cherylj, ok, we can chat later then
<wallyworld> alexisb_: thumper: there now
<alexisb_> wallyworld, you lie
<wallyworld> alexisb_: the team leads hangout?
<alexisb_> https://hangouts.google.com/hangouts/_/canonical.com/core-leads-call
<mup> Bug #1591962 changed: be able to set juju management network <canonical-bootstack> <juju-core:Fix Released> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1591962>
<wallyworld> axw: perrito666: be there in a sec
<perrito666> wallyworld: k
#juju-dev 2016-06-15
<menn0> thumper: next MM PR: http://reviews.vapour.ws/r/5057/
<menn0> sorry that it's a bit large
 * thumper looks
<davecheney> thumper: https://github.com/juju/juju/pull/5564
<davecheney> ^ pinger fixes from the last two days
<davecheney> minus the removal of the recover() in ping.ping
<davecheney> that'll live another day
<thumper> ok, I'll look shortly
 * thumper afk for a walk
<mup> Bug #1592609 opened: Unit stuck in allocating with juju 1.25.5 <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1592609>
<wallyworld> axw: the CloudCredential attributes on BootstrapParams appear unused in master right now, correct?
<axw> wallyworld: yeah, one of the issues I found when validating :)
<wallyworld> axw: cause restore is broken - i am fixing it by adding cloud and region info to bootstrap params when we restore with -b, but credential info is not at hand
<axw> wallyworld: doh, sorry
<wallyworld> axw: i can propose and land this fix and you can then take alook?
<axw> wallyworld: sure
<wallyworld> ok, will push when i check tests
<axw> wallyworld: it's fixed in the validation branch, but I can't land that until the add-model bits are done
<axw> wallyworld: I can pull it out if it's pressing
<wallyworld> ok, let me propose this and we'll take a view
<mup> Bug #1592613 opened: juju status reports incorrect number of cores for arm64 machines <juju-core:New> <https://launchpad.net/bugs/1592613>
<natefinch> wallyworld: in the 1:1 when you're ready.
<wallyworld> be right there
<axw> wallyworld: I think I'm going to add a cloud facade anyway, need to be able to determine the cloud type for a controller in order to finalize credentials
<axw> we should probably be exposing the credentials schema over the API, rather than relying on the providers, but that will have to come later
<wallyworld> axw: ok, otp, will still aim to push a restore pr fix soon as i can
<thumper> davecheney: just one question on the PR
<wallyworld> axw: a quick restore fix http://reviews.vapour.ws/r/5059/
<axw> wallyworld: responded
<wallyworld> ta
<wallyworld> axw: returning *config.Config and jujuclient.BootstrapConfig is no good because BootstrapConfig doesn't contain a Credential struct.  so i needed to cobble together a new struct that had everything
<wallyworld> maybe i can rework a bit
<axw> wallyworld: I think the second option I suggested would work?
<axw> wallyworld: that would give you enough to construct the *config.Config, and the the names to pass in
<wallyworld> yeah probs, i guess i was trying to avoid return 2 structs instead of one, but the name of the franken struct does suck
<axw> wallyworld: I've gtg out for a little while, will take another look when I return
<wallyworld> np ty
<davecheney> thumper: w.Wait() is not a channel
<davecheney> :(
<natefinch> thumper: heard you're working on the fslock stuff... anything you need help with?
<thumper> natefinch: perhaps when i get to CreateMutexEx :)
<thumper> the docs are pretty clear
<thumper> and I have a win machine here
<thumper> but it isn't set up for go
<natefinch> thumper: setting up for go is pretty easy
<davecheney> thumper: http://stackoverflow.com/a/2332868
<davecheney> ^ name spitballing
<davecheney> mutex fits our use case
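[Editor's note: the thread above is about replacing juju's fslock with OS-level locking (flock on Unix, CreateMutexEx on Windows). The blocking, mutual-exclusion semantics under discussion can be seen from the flock(1) utility, which wraps the same flock(2) call; this is an illustration, not juju code:]

```shell
lockfile=./demo.lock
log=./demo.log
: > "$lockfile"
: > "$log"

# Holder 1 grabs the lock via fd 9 and holds it for a second.
( flock -x 9
  echo "holder 1 acquired" >> "$log"
  sleep 1
  echo "holder 1 done" >> "$log"
) 9> "$lockfile" &

sleep 0.3   # make sure holder 1 wins the race

# Holder 2 blocks here until holder 1's fd closes: mutual exclusion.
flock -x "$lockfile" -c "echo 'holder 2 acquired' >> $log"
wait
cat "$log"
```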
<mup> Bug #1570368 changed: juju commands timeout while a bootstrap is in process <conjure> <juju-core:Expired> <https://launchpad.net/bugs/1570368>
<axw> wallyworld: sorry was out for longer than expected. LGTM
<wallyworld> no worries ty
<cmars> perrito666, rofl
<axw> wallyworld: what incantations do I need to make to register a facade? I've got a RegisterStandardFacade call, updated allfacades, updated the client facadeversions.go...
<thumper> axw: got two minutes?
<axw> thumper: yup?
<thumper> axw: priv msg hangout link
<axw> wallyworld: never mind, I was missing the "restrictedRootNames" bit
<wallyworld> axw: ah sorry, had popped out to get coffee, was an emergency
<axw> no worries
<axw> wallyworld: do you think it would be reasonable to move authorized-keys out of model config? and have it stored separately in state? we already manage them specially
<axw> I'd rather like to not see long lines of authorized keys in model config
<wallyworld> axw: i think so - a lot of stuff in model config schema only exsits there so we can collect it all up when bootstrapping
<axw> wallyworld: I've got a new facade for managing creds now, need to add auth and write tests
<wallyworld> yay
<axw> wallyworld: just live tested with AWS successfully, specifying a cred that doesn't exist in the controller auto uploads
<axw> and tells the user
<wallyworld> awesome
<babbageclunk> If I've got a juju model with a lxd container running on it, how can I ssh into the container? The container doesn't have a public address, so I need to ssh from the host machine, but that doesn't have the juju ssh key on it (I think). Is the right thing just to scp the private key to the host machine and ssh from there to ubuntu@<lxd-ip>? Or is there a better way of doing this?
<dimitern> babbageclunk: I'd use sshuttle or (if just for a simple test), ssh -L2222:<lxd-ip>:22 <host-ip> and then ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@localhost -p 2222
<babbageclunk> dimitern: Oh, I keep forgetting about sshuttle! Thanks dimitern. Copying the key around definitely felt wrong.
<dimitern> babbageclunk: :) it should
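[Editor's note: dimitern's second option above is a plain ssh local port forward through the host machine. A sketch that only builds and prints the two commands; the addresses are placeholders, and the key path is the juju 2.x client layout he quotes:]

```shell
# Build (but don't run) the tunnel commands dimitern describes.
# host_ip / lxd_ip are placeholders for the real machine and container.
host_ip=10.0.0.5
lxd_ip=10.0.3.17
key="$HOME/.local/share/juju/ssh/juju_id_rsa"

# Step 1: forward local port 2222 to the container's sshd via the host.
echo "ssh -L2222:${lxd_ip}:22 ubuntu@${host_ip}"

# Step 2: in another terminal, ssh to the container through the forward.
echo "ssh -i ${key} -p 2222 ubuntu@localhost"
```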
<babbageclunk> fwereade_: ping?
<fwereade_> babbageclunk, heyhey
<babbageclunk> fwereade_: So, about workload version - I think you're right it should be done at unit level.
<babbageclunk> fwereade_: Should I just keep doing that now? I don't really follow what Rick was suggesting - I'm not very familiar with resources. And I'm not very clear on what jam was suggesting - the status stuff is a bit more involved and I'm not sure how to add version info into it.
<babbageclunk> fwereade_: So I've t
<babbageclunk> fwereade_: oops. I've parked my application.SetWorkloadVersion branch and done a unit.SetWorkloadVersion branch instead, but I should probably understand rick_h_ and jam's suggestions better before going too far with it. I didn't see a response from thumper - have you heard anything from him?
<fwereade_> babbageclunk, sorry, processing
<babbageclunk> fwereade_: :)
<axw> wallyworld: problem. we've renamed the "--service" flag to "--application" in the status-set hook tool. that's going to break a bunch of charms
<axw> wallyworld: I think we should probably alias that.
<wallyworld> axw: yep, fair point
<axw> I'll file a bug
<axw> just tried deploying cassandra, and it blew up
<wallyworld> it may be a blocker for beta9
<wallyworld> axw: i had to tweak the dummy provider to not break with no controller id on Open(), running more tests now, can look at bug after that
<babbageclunk> axw, wallyworld: minor but related thing - I see a traceback with "KeyError: 'services'" when tab completing in the juju command (specifically juju scp). Already known? Should I raise a bug?
<wallyworld> babbageclunk: not known to me, raise abug and the tab completion fairies will need to fix it
<babbageclunk> wallyworld: ok cool
<fwereade_> babbageclunk, based on conversation, I'm not entirely certain I understand what problem we're solving; added another comment but I think I too need guidance from rick_h_ and/or jam
<babbageclunk> wallyworld: I dug a bit more, looks like it was already fixed, I just hadn't installed it since the change.
<wallyworld> ah cool
<mup> Bug #1592733 opened: rename of status-set --service flag to --application breaks charms <juju-core:Triaged> <https://launchpad.net/bugs/1592733>
<jam> babbageclunk: I'm heading back to the hangout now. if fwereade_ wants to join us that would be fine with me
<babbageclunk> fwereade_: joooooooooin ussssssssss
 * dimitern needs to step out for ~30m
<wallyworld> axw: http://reviews.vapour.ws/r/5062/
<jam> babbageclunk: you mentioned in your email a while back that "juju deploy foo --to lxd" didn't work
<jam> did you try "juju deploy foo --to lxd:" ?
<jam> or is the point of that email "why do I need to type the colon" ?
<babbageclunk> jam: yeah, I did - that didn't work.
<dimitern> jam: with the colon it fails
<dimitern> somewhat surprising.
<babbageclunk> jam: The point of the email was actually "this works now" - so maybe it didn't work!
<babbageclunk> jam: (The email I mean.)
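[Editor's note: for reference, the container placement forms being compared above, as I read the juju 2.0-era syntax; printed rather than run, and treat the semantics as illustrative:]

```shell
# Container placement directives under discussion (juju 2.0 betas):
echo "juju deploy foo --to lxd"      # new lxd container on a new machine
echo "juju deploy foo --to lxd:4"    # new lxd container on existing machine 4
echo "juju deploy foo --to lxd:"     # the bare-colon form that failed here
```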
<jam> dimitern: are you interested in chatting soon?
<dimitern> jam: let's do it now if you can?
<jam> k, just need a quick stop and brt, see you in standup
<dimitern> ok
<babbageclunk> fwereade_: just to check again having familiarised myself with the status code - you're saying store workload version as a status but with a different global key to the unit global key (probably the unit global key with #wlversion in it somewhere). Sound right?
<fwereade_> babbageclunk, yeah, sgtm
<babbageclunk> fwereade_: <thumbs-up emoji>
<fwereade_> babbageclunk, cheers
<fwereade_> babbageclunk, #<something> tacked on the end, anyway: a disambiguating namespace like '#sat#workload-version' might be a good idea (sat[ellite], bad name; still, dyswim?)
<babbageclunk> fwereade_: yeah, that makes sense.
<fwereade_> babbageclunk, ta
<wallyworld> dimitern: i'd love a small review to allow old charms to work with new set-status for beta 9 http://reviews.vapour.ws/r/5062/
<voidspace> dimitern: ping
<voidspace> dimitern: in order to validate LinkLayerDevices I need to validate ParentName references a real device on a real machine
<voidspace> dimitern: which means I need to parse ParentName as a global key (potentially)
<voidspace> dimitern: do you think I should duplicate or move (or make public) the parsing code?
<voidspace> dimitern: core/description doesn't yet depend on state - so making it public and adding a dependency seems like a bad idea
<voidspace> dimitern: it's only string parsing, and not much code, so duplicating it seems best
<dimitern> voidspace: pong; sorry, otp so will respond as I can
<voidspace> dimitern: ok
<voidspace> dimitern: I've duplicated for the moment - only a few lines of code
<dimitern> voidspace: I don't think we should export the parsing code, as it's an implementation detail
<dimitern> voidspace: and as fwereade_ correctly pointed out, ParentName being a global key should have never be allowed to escape the state package
<voidspace> dimitern: right
<voidspace> dimitern: I still need to parse it for the moment, so I'll duplicate the code until we solve the problem (add ParentMachineID maybe)
<dimitern> voidspace: what would've been better is to have a ParentName() always returning plain names, and add a ParentMachineID() to return a non-empty id in the case currently handled with ParentName as global key
<dimitern> voidspace: +1
<voidspace> :-)
<voidspace> those global keys are pretty ugly
<mup> Bug #1592811 opened: 2.0 beta8: networking broken for dense deployment and vsphere as provider <oil> <vsphere> <juju-core:New> <https://launchpad.net/bugs/1592811>
<wallyworld> fwereade_: i'd love a small review for beta 9 http://reviews.vapour.ws/r/5062/
<katco> fwereade_: could also use a follow-up on http://reviews.vapour.ws/r/5006/ :)
 * katco disappears again for a bit
<fwereade_> ha
<fwereade_> I had not looked at irc for a while, and had somehow psychically gone to look at both those reviews :)
<fwereade_> wallyworld, lgtm; katco, looking
<wallyworld> ty
<frankban> wallyworld: hey, do you know about any recent change in API server facades that would break the GUI? like we are getting "unknown object type "Client"
<rick_h_> gsamfira: ping, around? We've got a user in #juju working with windows and asking about example windows subordinates and such
<rick_h_> gsamfira: curious if you'd be interested in asking someone to chat with him and might have some samples or fodder to help the user along?
<wallyworld> frankban: um, some of the serialisation names for the params was changed but that shouldn't affect the ability to find a facade. what facade version are you using for client?
<frankban> wallyworld: no version sent, so 0
<frankban> wallyworld: ah maybe that's the issue
<wallyworld> should be 1 i think
<perrito666> fwereade_: could you go to your regular nickname so I dont have to open another window? :p
<frankban> wallyworld: yeah Login returns 1
<frankban> wallyworld: so the lower-cased-api-fields-geddon has been merged? will be in beta9?
<wallyworld> frankban: yeah. i hope it won't affect too much
<alexisb_> rick_h_, mail might be good or bogdanteleaga may be able ot help
<rick_h_> alexisb_: rgr ty
<mup> Bug #1592832 opened: enable-ha embeds ModelCommand but should be controller-specific <juju-core:New> <https://launchpad.net/bugs/1592832>
<mup> Bug #1592837 opened: juju upgrade-gui is a model command but operates on controllers only <juju-core:New> <https://launchpad.net/bugs/1592837>
<mup> Bug #1592832 changed: enable-ha embeds ModelCommand but should be controller-specific <juju-core:New> <https://launchpad.net/bugs/1592832>
<mup> Bug #1592837 changed: juju upgrade-gui is a model command but operates on controllers only <juju-core:New> <https://launchpad.net/bugs/1592837>
 * frobware discovers the tp-link switch CLI does NOT save the config by default. Grrr.
<mup> Bug #1592832 opened: enable-ha embeds ModelCommand but should be controller-specific <juju-core:New> <https://launchpad.net/bugs/1592832>
<mup> Bug #1592837 opened: juju upgrade-gui is a model command but operates on controllers only <juju-core:New> <https://launchpad.net/bugs/1592837>
<fwereade> perrito666, sorry
<frankban> wallyworld: we'll fix the gui for that change, thanks for confirming. fyi, the embedded gui will be broken in beta9
<wallyworld> that's ok, we'll have a beta 10 next week
<frankban> wallyworld: cool
<dimitern> alexisb_: ping
<alexisb_> dimitern, heya
<alexisb_> I am on the hangout
<dimitern> alexisb_: sorry, I was chatting with frobware just now - omw
<mup> Bug #1592872 opened: juju status not showing correct output when specifying a sepcific service/application <juju-core:New> <https://launchpad.net/bugs/1592872>
<fwereade> katco, sorry, wall of issues, but I think mostly very minor
<katco> fwereade: tal, and no worries
<katco> fwereade: ta for the review
<fwereade> katco, np :)
<mup> Bug #1592887 opened: juju destroy-service deletes openstack volumes <juju-core:New> <https://launchpad.net/bugs/1592887>
<perrito666> this channel is unusually slient today
<katco> perrito666: oh way to ruin it perrito666. jees... we had a streak going.
<katco> perrito666: this is why we can't have nice quiet things.
<perrito666> I am south american, we are loud
<alexisb_> katco, perrito666: can I get one of you to give a second +1 on this revert so we can merge please: http://reviews.vapour.ws/r/5066/
<perrito666> alexisb_: looking
<katco> alexisb_: ditto
<katco> alexisb_: cherylj: i don't understand the commit message... we are reversing changes made in commit Y by reverting commit X?
<perrito666> alexisb_: looks like a revert :) and gtm, but why is this revert?
<perrito666> ie, code looks good dunno about the reason
<alexisb_> perrito666, there were facade changes without version bump and fair warning to api users
<alexisb_> everyone is horribly broken as a result
<perrito666> do we fair warn in master?
<alexisb_> perrito666, yes we should always update those we know are using our api's before breaking changes like this
<alexisb_> we did not in this case, and we also should always keep to api versioning
<perrito666> alexisb_: lgtm, need an official stamp?
<perrito666> agree re api vers
<alexisb_> perrito666, yes please
<perrito666> done
<alexisb_> katco, not sure why the lxd bug fix is mentioned in the commit message, cherylj will have to provide clarity, my guess is that there were revisions to that bug fix due to it using an updated facade
<katco> alexisb_: ok; pursuant to our recent discussion about commit history, it might be worth clarifying
<alexisb_> :)
<alexisb_> agreed
<alexisb_> thanks for looking katco and perrito666, I will follow-up with cherylj when she is back from lunch
<katco> alexisb_: i know she's busy, and this was probably done in 5m between people bothering her ;)
<alexisb_> yes yes it was
<alexisb_> and it is not like she is running an induction sprint atm
<katco> alexisb_: yeah, seriously. laziest lady i know...
<cherylj> katco: The commit message was just what git spit out.  I think it's just saying that it's reverting the change applied on top of that lxd commit
<cherylj> that lxd commit was the last merge before the one I reverted
<katco> cherylj: ah that makes sense
<katco> cherylj: probably not worth changing if it's idiomatic git
<cherylj> ok
<katco> cherylj: in that case, i just need to update my understanding :)
<cherylj> heh
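[Editor's note: as cherylj says, the confusing message "was just what git spit out": git's autogenerated revert commit quotes the reverted commit's subject and sha. A throwaway-repo demonstration; the commit subjects below are invented to mirror the situation in the thread:]

```shell
set -e
rm -rf revdemo && git init -q revdemo && cd revdemo
git config user.email demo@example.com && git config user.name demo

echo base > file.txt && git add file.txt && git commit -qm 'lxd bug fix'
echo breakage > file.txt && git commit -qam 'facade change without version bump'

# Revert the bad commit; git writes its own message naming it.
git revert --no-edit HEAD >/dev/null
git log -1 --format=%B
```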
<alexisb_> cherylj, you are good to merge with bz2 and perrito666 +1
<cherylj> ok, merging
<alexisb_> thumper, when you have a minute I could use a moment of your time
<thumper> alexisb_: I'm free for the next 8 minutes :)
<alexisb_> good enough
<alexisb_> meet you in our 1x1 HO
<katco> does anyone have an opinion on this?
<katco> we only log information about API requests if the loggo level is <= loggo.DEBUG
<katco> but we log ALL API connection open/close regardless of logging level
<thumper> yes, I have opinions
<katco> thumper: the right man for the job!
<katco> it would make my life much easier if we would consistently log API information (i.e. joins/parts as well as misc. API info) rather than treating joins/parts as something different
<katco> is there a reason to do so?
<thumper> secrets
<thumper> they were the reason
<thumper> there was no consistent way to remove secrets from the details
<thumper> soon we hope there will be when all providers describe config using the more descriptive way
<thumper> but until then
<thumper> this is what we have
<katco> thumper: ah... can i discontinue logging joins/parts if the loggo level is <= DEBUG?
<thumper> joins/parts?
<katco> thumper: sorry... connects/disconnects
<katco> thumper: https://github.com/juju/juju/blob/master/apiserver/apiserver.go#L561-L567
<katco> thumper: i want to get rid of that and say we either log all API events, or none, depending on the loggo level
<thumper> I think we should always have the notifier
<alexisb_> thumper, ping
<katco> thumper: i seem to remember mark complaining about log spam?
<thumper> babbageclunk: with you shortly
<babbageclunk> thumper: cool cool
<thumper> babbageclunk: having many parallel conversations just now
<thumper> alexisb_: ya?
<katco> thumper: better throw a fslock in there
<thumper> katco: I don't even...
<alexisb_> thumper, nevermind
<thumper> we've logged api calls at debug for so long, if we want to only do that on trace, we should tell people on juju-dev
<katco> thumper: i'm good with that, as long as it passes your smell check as something sane to do
<katco> thumper: however it looks like we ALWAYS log requests: https://github.com/juju/juju/blob/master/apiserver/apiserver.go#L541-L543
<katco> thumper: so, sorry: that's really what i'm asking; can the logging of connects/disconnects safely be tucked away under debug rather than always on?
<thumper> yeah, don't see why not
<katco> thumper: you have just made me very happy.
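[Editor's note: the change katco proposes, emitting connect/disconnect lines only at debug, is standard log-level gating. loggo is the Go logging library juju uses; the shell sketch below only illustrates the gate, it is not juju code:]

```shell
LOG_LEVEL=${LOG_LEVEL:-INFO}

log_debug() {
    # Emit only at DEBUG or finer, mirroring the "level <= DEBUG"
    # check katco wants around the connect/disconnect notifier.
    case "$LOG_LEVEL" in
        TRACE|DEBUG) echo "DEBUG: $*" ;;
    esac
}

LOG_LEVEL=INFO
log_debug "API connection from machine-0"     # suppressed
LOG_LEVEL=DEBUG
log_debug "API connection from machine-0"     # printed
```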
<mup> Bug #1531719 opened: Runaway memory allocation in jujud unit agent <2.0-count> <sts-needs-review> <juju-core:Triaged> <https://launchpad.net/bugs/1531719>
<stokachu> thumper: how long do you think before your revert makes it into master?
<thumper> stokachu: it is trying to land now
<stokachu> thumper: ok cool
<mup> Bug #1592981 opened: LXD and manual provider shouldn't require auth type <juju-core:New> <https://launchpad.net/bugs/1592981>
<mup> Bug #1592987 opened: Cannot bootstrap MAAS: missing CloudRegion not valid <blocker> <bootstrap> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1592987>
<thumper> \o/ flock impl works
<perrito666> \o/
<thumper> now on to windows
<perrito666> have fun
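thumper's flock implementation itself isn't shown in the log; as a rough Linux/BSD sketch of the technique (lock-file path and helper names are invented here, and the real juju/mutex package differs in detail), an exclusive non-blocking flock on a per-name lock file gives inter-process mutual exclusion:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// acquire takes an exclusive, non-blocking flock on a lock file
// derived from the mutex name. A second open descriptor on the same
// file is treated independently by flock(2), so a second acquire is
// denied even within the same process.
func acquire(name string) (*os.File, error) {
	path := filepath.Join(os.TempDir(), "juju-"+name+".lock")
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDONLY, 0600)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		f.Close()
		return nil, err // already held elsewhere
	}
	return f, nil
}

// release drops the lock by closing the descriptor that holds it.
func release(f *os.File) { f.Close() }

func main() {
	f1, err1 := acquire("demo")
	_, err2 := acquire("demo")
	fmt.Println(err1 == nil, err2 != nil) // first acquire succeeds, second is denied
	release(f1)
	f3, err3 := acquire("demo")
	fmt.Println(err3 == nil) // after release, the lock can be re-acquired
	release(f3)
}
```

The lock is released automatically if the process dies, which is the property that makes flock attractive for inter-process mutexes.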
<alexisb_> wallyworld, ping
<wallyworld> alexisb_: hey, trying to get in, there's no video link
<alexisb_> https://hangouts.google.com/hangouts/_/canonical.com/core-leads-call
<anastasiamac> axw: wallyworld: m looking at agent config file version bug 1575794
<mup> Bug #1575794: Agent config format version should be changed for 2.0 <juju-release-support> <rc1> <tech-debt> <juju-core:Triaged by anastasia-macmood> <https://launchpad.net/bugs/1575794>
<wallyworld> great ty
<anastasiamac> axw: wallyworld: m tempted to rename our format-1.18 file to just be format :D and change the variable that puts version number to 2.0
<anastasiamac> is there any repercussion m missing?
<wallyworld> not sure, otp, so can't devote brain space to it
<axw> anastasiamac: should be fine
<axw> anastasiamac: I'd call it format-2.0, since it may change again
<anastasiamac> axw: the problem is that this file has already been updated to contain 2.0 changes. it's no longer just 1.18 :)
<axw> anastasiamac: that's a problem regardless of the name. and if we don't include the version in the name, people will be *more* likely to change it in incompatible ways without bumping the version
<anastasiamac> axw: k. I'll rename and change the variable name and its value, adding comments to not alter contents but creating new file for next format version \o/
#juju-dev 2016-06-16
<redir> perrito666: yt?
<perrito666> redir: always
<redir> great
<redir> so looking at bug 1577776
<mup> Bug #1577776: 2.0b6: asks for domain-name, then doesn't know what it is <juju-release-support> <landscape> <rc1> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1577776>
<alexisb_> sorry menn0 wallyworld and I are running over
 * perrito666 clicks
<perrito666> redir: ok
<redir> You added 'domain-name' to the credential config for openstack
<redir> yes?
<redir> I can add it to the openstack environ schema and provider configurator... but is there more that needs to happen?
<perrito666> redir: I believe so, for keystone 3
<redir> Apparently I am getting kicked from the conf room.
<perrito666> redir: not sure really, the underlying code should be there (I am a bit worried that this bug exists)
<redir> OK. I'll find you later tonight or more likely tomorrow
<redir> perrito666: me too since it seems that something got nuked or was never tested
<redir> bbiab
<redir> or tomorrow
<perrito666> redir: k
<alexisb_> menn0, are you still available to meet or did you want to skip this week?
<menn0> alexisb_: I'm available now. I waited around for a while for you.
<menn0> alexisb_: I don't have anything major to discuss except maybe the sprint review workshops.
<alexisb_> menn0, I can join for a bit
<alexisb_> I will hop on
<perrito666> k, eod, CUALL tomorrow
<thumper> bugger
<thumper> CreateMutex with a name doesn't give us the semantics we want on windows for the mutex
 * thumper digs some more into windows world
<thumper> hmm...
<thumper> dog walk time, then I'll try something else
<axw> wallyworld: http://reviews.vapour.ws/r/5072/ -- I'll test with manual now
<wallyworld> ta, looking
<wallyworld> axw: lgtm. works on lxd i assume as well
<axw> wallyworld: will lxd test shortly, pretty certain it will
<wallyworld> yeah, looks like it will
<axw> wallyworld: seems there's a different issue with manual, investigating/fixing in a follow up
<wallyworld> ok
<axw> wallyworld: I'm getting "creating hosted model: model already exists" -- possibly the same thing the grnet folks are getting
<wallyworld> axw: was that grnet issue preceding this work then?
<axw> wallyworld: the issue they found was before you landed my changes, if that's what you mean
<wallyworld> yeah
<wallyworld> so good to fix
<axw> well I don't know if it's what they're seeing, but need to fix it anyway
<axw> we'll see
<wallyworld> axw: is there a bug number?
<axw> wallyworld: no, it was on the mailing list. about the synnefo provider
<axw> I'll create a bug for the issue I'm seeing now tho
<wallyworld> ok, ta
<axw> wallyworld: actually I suspect it's different, I think it's to do with detecting regions
<wallyworld> rightio. we should maybe test all providers we can to be sure
<axw> cloud says it has no regions, but if you bootstrap manual/<IP> then that goes in as a region
<axw> altho hm I think we add that in.. anyway, I'll look into it and stop guessing
<mup> Bug #1593033 opened: manual: bootstrapping fails with "creating hosted model: model already exists" <blocker> <juju-core:In Progress> <https://launchpad.net/bugs/1593033>
<menn0> thumper: simple bit of infrastructure I need for current MM work: http://reviews.vapour.ws/r/5074/
<axw> wallyworld: http://reviews.vapour.ws/r/5075/
<wallyworld> looking
<wallyworld> axw: lgtm, but it seems there's a bit of test coverage missing unless i missed it
<axw> wallyworld: yeah, because it's a PITA to inject/test with custom environ providers. I'll see if I can make an incremental improvement...
<wallyworld> axw: we can land for the beta and do in a follow up, let's get the merge happening, so long as you've tested by hand
<wallyworld> axw: also, could you add a few lines to the release notes about this and the cloud/cred/addmodel changes, eg for add model a user visible change is that the --credential argument doesn't take a cloud name prefix anymore
<axw> wallyworld: yep. I'll mention --region, but it's not fully baked yet because you still have to specify region in --config
<axw> wallyworld: on account of us still storing cloud/cred details in model config
<wallyworld> no worries, just the minimal so folks know what to do and are given info on what's different. i have to do a bit on shared config also
<wallyworld> axw: also, when you're free, i have pushed that controller uuid change, but i'll land after the beta revision is finalised
<axw> wallyworld: ok. still gotta look into the CreateModel compatibility
<wallyworld> yep, that takes priority for sure
<axw> wallyworld: bleh, GUI uses ConfigSkeleton
<wallyworld> fark
<wallyworld> we'll have to put it back for now and let them know to remove it
<axw> I guess I'll put it back in, in some minimal form
<thumper> fuck yeah!!
<thumper> got windows working too
<thumper> can't use a mutex because of stupid windows thread bollocks
<thumper> if the current thread already owns the mutex, any call to wait for that mutex succeeds
<thumper> so I'm using a named semaphore
<thumper> with a max count of 1
<thumper> that works
<thumper> phew
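The Windows behaviour thumper describes — a wait on a named mutex succeeding immediately when the calling thread already owns it — is re-entrancy, which a semaphore with a maximum count of 1 avoids. A portable illustration of the same idea (not the juju/mutex code), using a buffered channel as a binary semaphore:

```go
package main

import "fmt"

// binSem models a named semaphore with a maximum count of 1: unlike
// a Windows mutex, it is not re-entrant, so a second acquire does
// not succeed just because the same goroutine already holds it.
type binSem chan struct{}

func newBinSem() binSem { return make(binSem, 1) }

// tryAcquire reports whether the single slot was taken.
func (s binSem) tryAcquire() bool {
	select {
	case s <- struct{}{}:
		return true
	default:
		return false
	}
}

// release frees the slot for the next acquirer.
func (s binSem) release() { <-s }

func main() {
	s := newBinSem()
	fmt.Println(s.tryAcquire()) // true
	fmt.Println(s.tryAcquire()) // false: no re-entrancy, even from the same goroutine
	s.release()
	fmt.Println(s.tryAcquire()) // true again after release
}
```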
<davecheney> wow, that could have been a buzzkill
<mup> Bug #1593042 opened: Juju GUI cannot create new models <blocker> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1593042>
<thumper> davecheney: but windows mutex now has exactly the same behaviour as macOS and linux
<thumper> I'm busy adding docstrings
<thumper> and a few extra tests
<davecheney> sgtm
<axw> wallyworld: http://reviews.vapour.ws/r/5076/
<wallyworld> looking
<wallyworld> axw: was hoping it would be simple like that. i did something similar for deploy with adding vs not adding charms
<axw> wallyworld: the existing release notes don't say anything about a cloud prefix on --credential
<wallyworld> well, looks like that omission is now fixed :-)
<axw> wallyworld: on second thoughts, I think we should just be silent about --region until it's fully supported
<wallyworld> ok
<menn0> thumper: migration minion now reporting to master: http://reviews.vapour.ws/r/5077/
<thumper> meetingology: ool
<meetingology> thumper: Error: "ool" is not a valid command.
<thumper> oops
<thumper> ugh
<thumper> meetingology: cool
<meetingology> thumper: Error: "cool" is not a valid command.
<thumper> menn0: cool
<thumper> who asked meetingology along anyway
<menn0> thumper: it should have been broken up into smaller PRs, but it's not actually that big overall
<menn0> thumper: what is meetingology ?
<thumper> a bot by the look of it
<menn0> I got that :)
<menn0> meetingology: help
<meetingology> menn0: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
<menn0> meetingology: list
<meetingology> menn0: Admin, Channel, Config, MeetBot, Misc, NickAuth, NickCapture, Owner, and User
<menn0> meetingology: Owner
<meetingology> menn0: Error: "Owner" is not a valid command.
<menn0> meetingology: help Owner
<meetingology> menn0: Error: There is no command "owner". However, "Owner" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Owner'.
<menn0> meetingology: list Owner
<meetingology> menn0: announce, defaultcapability, defaultplugin, disable, enable, flush, ircquote, load, logmark, quit, reload, reloadlocale, rename, unload, unrename, and upkeep
<thumper> davecheney, axw, menn0: https://github.com/juju/mutex/pull/1
<menn0> thumper: will look later. i've got to pick up my daughter from drama class.
<thumper> kk
 * davecheney looks
<cherylj> hey axw - got a sec for a bug question?
<axw> cherylj: yup
<cherylj> I'm looking at bug 1592887, and the behavior the reporter is describing is what I would expect to happen
<mup> Bug #1592887: juju destroy-service deletes openstack volumes <juju-core:New> <https://launchpad.net/bugs/1592887>
<cherylj> wanted to sanity check
<axw> thumper: https://github.com/juju/mutex/blob/192dc9c6d06c8353820d24757b3ed3150ccbae6c/impl_flock.go  <-- it's probably a good idea to say in the docs that a name is scoped to user ... but really I don't think we should do that. that's different behaviour to the other implementations
<axw> thumper: the caller can always put $USER in the name if they want to
<axw> cherylj: looking
<thumper> I think that could be fine
<cherylj> thx
<thumper> axw: there is a PR there
<thumper> it can have comments :)
<axw> cherylj: expected behaviour. there was always an intention of adding support for disassociating volumes from the lifetime of a service, but never fully implemented
<cherylj> thanks, axw!
<cherylj> does anyone know if there's an equivalent json tag for yaml's 'inline'?
<thumper> axw, davecheney, menn0: I'll let you argue for a bit about the minor details, but I'm -1 on adding panics
 * thumper is heading to BJJ
<mup> Bug #1592832 changed: enable-ha embeds ModelCommand but should be controller-specific <juju-core:New> <https://launchpad.net/bugs/1592832>
<mup> Bug #1592887 changed: juju destroy-service deletes openstack volumes <juju-core:Invalid> <https://launchpad.net/bugs/1592887>
<wallyworld> axw: found another problem if i have a clouds.yaml with a custom lxd cloud, validating initialization args: validating cloud credentials: credential "" with auth-type "empty" is not supported (expected one of [])
<axw> le sigh
<axw> wallyworld: we should check for empty AuthTypes list as well as one that includes "empty"
<wallyworld> yeah, that sounds ok doesn't it
<axw> wallyworld: you can have auth-types: ["empty"]
<axw> but we should relax that
<wallyworld> sgtm, cover all cases
<axw> wallyworld: currently knee deep in adding cloud tests, can you please file a bug if you're not looking to fix it
<wallyworld> sure, will probs just chuck up a quick fix
<axw> wallyworld: state/cloudcredentials.go is one place to fix, not sure if there are others
<wallyworld> yup, i'll test manually
<axw> wallyworld: http://reviews.vapour.ws/r/5073/diff/# -- why has bundlechanges changed?
<wallyworld> axw: they merge the service->app rename PR and so we now pull from master and not the feature branch
<wallyworld> same code
<axw> ok
<wallyworld> axw: this fixed immediate issue but now i get a "model already exists" error, need to investigate that http://reviews.vapour.ws/r/5081/
<axw> wallyworld: failed txn assertion when adding a model
<axw> wallyworld: the error is misleading, basically means the assertions are invalid
<wallyworld> rightio, so fix not perfect :-)
<axw> wallyworld: did you pull master?
<wallyworld> yup
<axw> wallyworld: you need to not add empty to the requiredAuthTypes below
<axw> wallyworld: well really it should be an assertion of  len(auth-types)==0 || includes "empty"
<wallyworld> ok, wasn't sure about adding or not
<axw> wallyworld: perhaps it would be simpler if we just require that all clouds have a non-empty AuthTypes
<axw> wallyworld: and when we bootstrap we add ["empty"] if there's nothing
<wallyworld> that sounds like a bit of a pita when defining clouds
<wallyworld> i'll make this quick fix now and we cn take a view
<axw> wallyworld: no, we'll just add one on the way in
<axw> wallyworld: rather than complicating the assertions everywhere else
<wallyworld> ah, right, i see
<wallyworld> i'll try that
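The rule axw states — accept an empty credential when the cloud's auth-types list is empty or explicitly includes "empty" — amounts to this predicate (a sketch of the logic only; in juju it lands as a mongo txn assertion in places like state/cloudcredentials.go):

```go
package main

import "fmt"

// allowsEmptyAuth reports whether a credential of auth-type "empty"
// is acceptable for a cloud declaring the given auth-types. An empty
// list is treated as permissive, per the discussion above.
func allowsEmptyAuth(authTypes []string) bool {
	if len(authTypes) == 0 {
		return true
	}
	for _, t := range authTypes {
		if t == "empty" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowsEmptyAuth(nil))                  // true
	fmt.Println(allowsEmptyAuth([]string{"empty"}))    // true
	fmt.Println(allowsEmptyAuth([]string{"userpass"})) // false
}
```

The alternative the pair settle on — normalising at bootstrap by injecting "empty" when nothing is declared — keeps the assertions elsewhere simple, since every stored cloud then has a non-empty list.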
<wallyworld> axw: pushed fix to add auth types at bootstrap, seems to work
<axw> wallyworld: thanks, still going through your other one
<wallyworld> axw: np, sorry, it's large in scope. this other one though is a few lines - i'd like to get it landed asap if possible
<wallyworld> so we can get CI done for the release
<axw> wallyworld: reviewed
<axw> (the small one)
<wallyworld> ta
<axw> wallyworld: reviewed the other one too
<wallyworld> axw: thanks for reviews,  was otp so missed them. about to head to soccer. the latest commit to master is in CI now (the empty creds one). but the one before which should have had all manual / lxd fixes looks to be failing on manual still http://reports.vapour.ws/releases/4061
<wallyworld> axw: can you take a look and fix any issue, i'll check back in after soccer; needless to say if there is an issue we need a fix asap
<Yash> Hello
<Yash> I'm facing a problem.
<Yash> 2016-06-16 08:36:27 DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused
<Yash> How to solve this?
<Yash> I rebooted my machine many times with no luck
<voidspace> frobware: dimitern: standup
<davecheney> Yash: it looks like your machine is trying to reach that host on ipv6 but either the juju server isn't listening on ipv6 or it's blocked
<Yash> ok
<Yash> I disabled ufw
<Yash> Is that a problem?
<Yash> I can ping the ipv6 address but not that port
<Yash> nmap failed to connect
<Yash> davecheney : Please suggest how to solve?
<axw> wallyworld: the issue is that https://bazaar.launchpad.net/~juju-qa/+junk/cloud-city/view/head:/clouds.yaml contains an entry called "manual"
<axw> which does not have any regions
<axw> wallyworld: it was working before by luck, not by design
<voidspace> babbageclunk: buy this and I'll buy one (or two) off you http://www.ebay.co.uk/itm/252426716669?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT
<axw> wallyworld: we only do the auto-detect thing if there's no cloud with the given name
<babbageclunk> voidspace: They just seem like the ground effect lights that boy-racers put on their car so they can drive around the square in Palmerston North.
<babbageclunk> babbageclunk: But for your living room!
<babbageclunk> Gah, dumb brain.
<dimitern> pimping up your living room? :)
<babbageclunk> dimitern: exactly
<babbageclunk> Yash: What does sudo lxc list show?
<babbageclunk> Yash: Does that container have an ipv4 address as well, or just an ipv6 one?
<babbageclunk> Something I said?
<admcleod> maybe he's from palmerston north
<dimitern> :)
 * babbageclunk lols
<admcleod> babbageclunk: thanks for that
<babbageclunk> admcleod: Well, I don't think I was much help. Hopefully they come back.
<axw> mgz, sinzui, balloons: can one of you please remove the "manual" entry in https://bazaar.launchpad.net/~juju-qa/+junk/cloud-city/view/head:/clouds.yaml  -- it's interfering with the ability to use the manual/<host> syntax in bootstrap
<babbageclunk> dimitern, voidspace: Go testing question - when I run go test ./... at the top level, am I right to think that each of the packages' tests are run as separate processes?
<babbageclunk> If I did something in juju/testing/mgo.go:init(), it would still run multiple times across the whole test suite.
<dimitern> babbageclunk: go test ./... just recurses into each subdir, doing the same it does otherwise - i.e. build a <pkg>.test binary in a subdir then run it
<admcleod> babbageclunk: how did you get your canonical hostname irc cloak?
<babbageclunk> dimitern: Ok, thanks - that's what I thought.
<admcleod> or, anyone
<babbageclunk> admcleod: ? They asked me what irc name I wanted when I was joining.
<admcleod> hrm ok
<babbageclunk> admcleod: Hang on - what do you mean irc cloak?
<dimitern> admcleod: he's in bluefin, but otherwise do a full-text search on wiki.c.c for "cloak"
<dimitern> admcleod: there's a separate process for freenode and canonical IIRC
<admcleod> dimitern: yeah nothing on the wiki that i can find
<admcleod> babbageclunk: when you whois someone, what should be their hostname/ip address is cloaked/hidden
<dimitern> https://wiki.canonical.com/StayingInTouch/IRC/FreeNodeRegistration?highlight=%28cloak%29
<admcleod> clearly i dont know how to use the search box
<admcleod> thanks :)
<dimitern> ;)
<dimitern> babbageclunk: a cloak can mask your source address for /whois on IRC
<admcleod> dimitern: oh right. enter does a title search.
<babbageclunk> admcleod: <reads> I don't think I have one.
<dimitern> babbageclunk: you do :) all bluefin folk have one by default
<admcleod> babbageclunk: [babbageclunk] (user@nat/canonical/x-zxxacbyqsikrawst)
<babbageclunk> admcleod, dimitern: ah. cool!
<admcleod> babbageclunk: i believe kjackal may have a question for you also
<kjackal> hi babbageclunk
<babbageclunk> kjackal: hi!
<kjackal> I am on juju beta8
<kjackal> http://pastebin.ubuntu.com/17393462/
<kjackal> Deployed a machine (with kafka but that does not matter) and the fqdn is not resolvable/pingable
<kjackal> is this expected?
<babbageclunk> kjackal: What are you deploying to?
<kjackal> solly yes, lxd
<kjackal> sorry
<babbageclunk> kjackal: realised that after looking at the pastebin, sorry.
<admcleod> kjackal: nslookup is dns server specific, it doesnt look at /etc/hosts
<kjackal> admcleod: yes true!
<babbageclunk> kjackal: I'm not sure whether we'd expect the container to be resolvable. dimitern?
<kjackal> Actually! After some time it gets resolvable and pingable!
<babbageclunk> kjackal: What's the broader problem?
<kjackal> http://pastebin.ubuntu.com/17393499/
<babbageclunk> Yash: welcome back! Did you see my question above?
<kjackal> So, the fqdn is not resolvable immediately after the container comes up
<kjackal> babbageclunk: ^
<babbageclunk> kjackal: Sure - but is that actually causing a problem?
<kjackal> babbageclunk: Yes, because if i try to start a service (in my case kafka) immediately after I get the machine, it will fail: all the handshakes with other services will fail, and in the case of kafka the fqdn is used internally for figuring out the interfaces to listen to
<Yash> no
<babbageclunk> kjackal: (I mean, it might be, but I just want to know what the context is - what's the problem you're trying to solve?)
<Yash> babbageclunk: Now I'm cleaning the machine. :(
<babbageclunk> 10:32 <babbageclunk> Yash: What does sudo lxc list show?
<babbageclunk> > 10:32 <babbageclunk> Yash: Does that container have an ipv4 address
<babbageclunk> Yash: :(
<Yash> 10 machines with ipv4 and ipv6 ips
<Yash> all are running
<babbageclunk> kjackal: Ah, ok - thanks
<kjackal> my exact problem: starting kafka without its fqdn resolvable fails throwing an unknown host exception
<Yash> I can't ssh ubuntu@publicip
<Yash> connection refused
<kjackal> also I think that two services (eg kafka and zookeeper) trying to communicate using their fqdns will fail
<kjackal> babbageclunk: ^
<babbageclunk> kjackal: Sorry, I don't think I'm the right person to answer this (I'm a newbie here). dimitern, any ideas?
<kjackal> babbageclunk: I appreciate your help
<babbageclunk> kjackal: Wish I could be more helpful!
<babbageclunk> Yash: so you get connection refused from ssh, rather than permission denied?
<dimitern> kjackal, babbageclunk: sorry, was afk for a bit
<dimitern> kjackal, babbageclunk: it looks like either a lxd issue or juju-using-lxd-incorrectly I guess
<dimitern> kjackal: what's inside /etc/resolv.conf inside the container?
<kjackal> just a sec
<dimitern> kjackal: it might be relevant whether this is a trusty or xenial container..
<babbageclunk> Yash: Can you get into the container using "sudo lxc exec <container-name> -- /bin/bash"?
<kjackal> dimitern: http://pastebin.ubuntu.com/17393776/
<dimitern> kjackal: ok, that looks good - do you have another lxd that cannot resolve itself?
<kjackal> I have to spin up a new one
<kjackal> what do you want me to check there?
<kjackal> the resolv.conf?
<dimitern> kjackal: yeah, and also pinging e.g. google.com
<kjackal> dimitern: ^
<kjackal> Ok doing it now, will ping you in a moment
<dimitern> kjackal: btw the /etc/hosts does not have lxd's own hostname.. grr looks like bug 1513165 but on lxd rather than maas
<mup> Bug #1513165: Containers registered with MAAS use wrong name <cdo-qa-blocker> <sts> <sts-needs-review> <juju-core:Fix Released by thumper> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1513165>
<dimitern> kjackal: just to clarify - what version of juju are you using?
<kjackal> 2.0-beta8-xenial-amd64
<dimitern> kjackal: can you please file a bug like the above, so we can track it separately?
<kjackal> yes I will do that
<dimitern> kjackal: please include a paste with /etc/hosts, /etc/resolv.conf, and /etc/network/interfaces, ideally for a container that works ok (like that one above) and a not working one
<dimitern> kjackal: thank you!
<kjackal> the ubuntu lxd container came up fine
<dimitern> kjackal: :/ yeah.. it looks like one of those lxd race conditions..
<kjackal> dimitern: here is what I got this time http://pastebin.ubuntu.com/17393827/
<kjackal> and it is resolvable immediately
<kjackal> cool, let me file this bug and i will try again to repro, to see what the resolv.conf is
<dimitern> kjackal: ok, try deploying the charm that had the issue I guess?
<dimitern> kjackal: awesome! thanks, and sorry for the issues :/
<kjackal> yes, that is the plan. Do not mention it, this is what we are here for
<dimitern> kjackal: hey, I've just "groked" your nick btw :) - how's the provider going?
<kjackal> :) "how is it going" in what sense
<kjackal> ?
<dimitern> kjackal: we should try and help you guys with any issues along the way.. it's a bit of a rough path, but you're trailblazers :)
<dimitern> kjackal: well, I've been looking the ML for progress
<dimitern> just curious
<kjackal> dimitern: I am looking at bug https://bugs.launchpad.net/juju-core/+bug/1513165
<mup> Bug #1513165: Containers registered with MAAS use wrong name <cdo-qa-blocker> <sts> <sts-needs-review> <juju-core:Fix Released by thumper> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1513165>
<kjackal> in the lxd case both juju-095f51-2.lxd and juju-095f51-2 are resolvable
<kjackal> dimitern: Should I still open a bug?
<dimitern> so the dnsmasq instance listening on lxdbr0 is handling DNS requests
<dimitern> kjackal: yes please, esp. if you find a way to reproduce the issue
<kjackal> they are both pingable http://pastebin.ubuntu.com/17393893/ but we also need the /etc/hosts to include the fqdn
<dimitern> kjackal: yeah, sounds like https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1574844
<mup> Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. <conjure> <juju-release-support> <landscape> <lxd-provider> <juju-core:Won't Fix> <rabbitmq-server (Juju Charms Collection):Fix Released by james-page> <https://launchpad.net/bugs/1574844>
<dimitern> kjackal: not quite, but the same issue - not having the lxd's address and hostname in /etc/hosts
<dimitern> kjackal: so filing a bug about this will be appreciated!
<kjackal> here you go dimitern https://bugs.launchpad.net/juju-core/+bug/1593185
<mup> Bug #1593185: In lxd the containers own fqdn is not inclused in /etc/hosts <juju-core:New> <https://launchpad.net/bugs/1593185>
<dimitern> kjackal: thank you!
<kjackal> dimitern: how is deploying a local charm different from fetching it from the store, in terms of name resolution? could this be related
<kjackal> I deployed my test kafka charm from local and it is not resolved...
<dimitern> kjackal: it shouldn't matter where the charm comes from
<kjackal> ah wait, I am deploying in trusty series!
<kjackal> let me try deploying ubuntu from the store trusty series
<kjackal> I think we have found something dimitern, ubuntu trusty series not resolvable
<dimitern> kjackal: with a local charm though, you *could* use a similar workaround jamespage did for rabbitmq-server - i.e. update /etc/hosts to include a line with the IP address (returned by `unit-get private-address` in the charm) and the `hostname` (short and fqdn)
<kjackal> nameserver 10.173.130.1
<dimitern> kjackal: no search?
<kjackal> search lxd
<kjackal> dimitern: this kind of workaround is not good, because the hostname has to be resolvable by both itself and by others!
<dimitern> kjackal: what does `nslookup juju-xxx.lxd 10.173.130.1` return?
<kjackal> http://pastebin.ubuntu.com/17394126/
<dimitern> kjackal: and with the .lxd suffix?
<kjackal> same
<dimitern> kjackal: odd.. can you ping the dns ip?
<kjackal> yes 10.173.130.1 is pingable
<dimitern> kjackal: also, can you paste the output of `ip -d route show` ?
<kjackal> http://pastebin.ubuntu.com/17394152/
<dimitern> kjackal: ok, I suspect there might be a lingering `dhclient` process on eth0 due to /etc/network/interfaces.d/eth0.cfg being present..
<dimitern> kjackal: can you please paste /etc/network/interfaces on the container?
<mup> Bug #1593185 opened: In lxd the containers own fqdn is not inclused in /etc/hosts <juju-core:New> <https://launchpad.net/bugs/1593185>
<mup> Bug #1593188 opened: Include complete information in Client.CharmInfo API call <blocker> <juju-core:In Progress by frankban> <https://launchpad.net/bugs/1593188>
<kjackal> Yes there is a process http://pastebin.ubuntu.com/17394175/
<kjackal> dimitern: ^
<kjackal> http://pastebin.ubuntu.com/17394184/
<kjackal> dimitern: the interfaces ^
<dimitern> kjackal: yeah :/ I suspect you'll have better outcome with a more recent version than beta8, e.g. from the daily ppa
<kjackal> awesome! there is an issue already resolved?
<dimitern> kjackal: that one with eth0.cfg messing up things should be, not the missing /etc/hosts line
<dimitern> kjackal: but unfortunately the daily ppa seems out of date - last build was on June, 8, so it won't have the fix
<kjackal> Beta 9 should have the fix right?
<dimitern> kjackal: would you mind trying to build juju from source?
<dimitern> yeah, once it's out - which, I'm told should be by end of this week
<kjackal> dimitern: Ah I always wanted to do that!!! (compile from sources)
<kjackal> going to grab something to eat and will start after that, ok?
<dimitern> kjackal: ok! always good to have early feedback :)
<dimitern> kjackal: ok, try to follow the steps in the README.md on https://github.com/juju/juju
<dimitern> I'll help if needed
<dimitern> babbageclunk: can you open LKK now?
<dimitern> babbageclunk: ah, sorry - it's back up apparently
<wallyworld> axw: thanks for investigating, i'll send an email
<perrito666> morning
<voidspace> perrito666: morning
<voidspace> perrito666: what country are you in?
<perrito666> voidspace: argentina, why?
<voidspace> perrito666: ah, there's a chance I might be visiting Brasil later this year and I couldn't remember if you were Brasil or Argentina
<voidspace> sorry :-)
<perrito666> I am actually on a bus traveling from my city to a much smaller city
<voidspace> sounds like fun
<voidspace> why?
<perrito666> well some family issues (and also an excelent chance to stress test my lte modem)
<voidspace> perrito666: ah, sorry about the family issues
<fwereade> perrito666, wallyworld, if you're both around: can you talk me through the two collections it looks like we'll be using?
<babbageclunk> perrito666: Want to test your modem with a review? (It's the Mongo version detection.) http://reviews.vapour.ws/r/5084/
<babbageclunk> perrito666: (Hope the family is ok)
<perrito666> babbageclunk: going, family is ok, just annoying :p
<babbageclunk> perrito666: Oh good. Mine too!
<wallyworld> fwereade: modeluser collection is from the initial multi-model work - it is like a sql join table between models and users. the permissions collection records an access privilege for a user on a target (model/controller)
<fwereade> wallyworld, how does the latter not encompass the former?
<fwereade> wallyworld, I'm all for the perms collection, but I think it renders modelusers redundant
<wallyworld> the former is who can access a model at some level. the permissions collection is generic and used to record permissions on various targets (controller, model) for various source entities (users, groups)
<wallyworld> modelusers records things like last access time etc (from memory)
<wallyworld> not permission related
<fwereade> wallyworld, ok, so we're dropping Access from modeluser? and requiring that we create a perms doc for each modeluser?
<wallyworld> yep
<fwereade> wallyworld, ok, thanks, makes sense now, sorry
<fwereade> perrito666, ^^
<wallyworld> separation of concerns and all that
<wallyworld> and separate mongo docs to avoid write locks
<fwereade> wallyworld, yeah, absolutely
<fwereade> wallyworld, tyvm
<perrito666> ok, you saved me the explanation tx :)
<wallyworld> np, glad we got the design right :-)
<fwereade> wallyworld, I had it in my mind that last-connection and stuff were elsewhere
<fwereade> cheers
<wallyworld> thanks for asking the question, always good to be sure
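The split wallyworld describes can be pictured with two illustrative document shapes (field names invented for this sketch, not juju's actual schema): per-user bookkeeping lives in the model-users collection, while each access grant is its own document in the permissions collection, so updating one never write-locks the other.

```go
package main

import "fmt"

// modelUserDoc: the join between a model and a user, holding
// bookkeeping such as last connection time — not permissions.
type modelUserDoc struct {
	ModelUUID      string
	UserName       string
	LastConnection string
}

// permissionDoc: one access grant per (subject, target), where the
// subject may be a user or group and the target a model or controller.
type permissionDoc struct {
	Subject string
	Target  string
	Access  string // e.g. "read", "write", "admin"
}

func main() {
	mu := modelUserDoc{ModelUUID: "uuid-1", UserName: "bob", LastConnection: "2016-06-16"}
	p := permissionDoc{Subject: "bob", Target: "model-uuid-1", Access: "write"}
	// The two docs reference the same user but are updated independently.
	fmt.Println(mu.UserName == p.Subject)
}
```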
<kjackal> Hey dimitern, it seems the problem still exists in juju beta 9
<kjackal> Compiling juju from source still results in unresolvable fqdns in trusty containers
<dimitern> kjackal: how about dhclient - is it still there?
<kjackal> yes it is still here :(
<dimitern> kjackal: and can you paste the /etc/network/interfaces?
<kjackal> dimitern: http://pastebin.ubuntu.com/17395368/ at the very bottom
<kjackal> you were suspecting this would be fixed right? do we have a ticket for me to read so that I understand what is happening? If I understand this correctly the dhcp handshake hangs and therefore the name resolution service on the host is not updating its entries
<dimitern> kjackal: ok, that's not good - did you bootstrap with --upload-tools ?
<kjackal> dimitern: yes http://pastebin.ubuntu.com/17395425/
<sinzui> axw wallyworld Do we still need to remove manual from clouds.yaml to mark bug 1593033 fix released?
<mup> Bug #1593033: manual: bootstrapping fails with "creating hosted model: model already exists" <blocker> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1593033>
<dimitern> kjackal: the stanza 'source etc/network/interfaces.d/*.cfg' pulls in that eth0.cfg causing the dhclient to come up and acquire a DHCP lease, even if there's already a static address for eth0
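A minimal sketch of the situation dimitern describes (addresses and paths illustrative): a static eth0 stanza coexisting with a `source` line that pulls in a dhcp entry for the same interface, which leaves a lingering dhclient running.

```
# /etc/network/interfaces inside the container (illustrative)
auto eth0
iface eth0 inet static
    address 10.20.30.42/24
    gateway 10.20.30.1

# This stanza pulls in interfaces.d/eth0.cfg, whose dhcp entry for
# eth0 starts dhclient even though eth0 is already static above.
source /etc/network/interfaces.d/*.cfg
```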
<wallyworld> sinzui:  that bug is a different root cause, the manual in clouds causes CI failures
<wallyworld> the root cause for that bug should be fixed
<sinzui> wallyworld: okay, I'll report a separate bug for clouds.yaml
<voidspace> dimitern: babbageclunk: actually turned out to be easy to test
<voidspace> dimitern: babbageclunk: http://reviews.vapour.ws/r/5085/
<wallyworld> sinzui: ok, and if we retest the manual deployments, it should work with manual removed
<dimitern> kjackal: hmm, can you check what's the commit hash in $GOPATH/src/github.com/juju/juju/ ?
<dimitern> git log -n1 in there..
<dimitern> voidspace: cheers, will have a look shortly
<kjackal> dimitern: http://pastebin.ubuntu.com/17395483/
<dimitern> kjackal: if you try running `go install -v github.com/juju/juju/...` does it rebuild anything?
<dimitern> kjackal: it's also worth noting the first jujud binary in $PATH (`which jujud`) will be uploaded with --upload-tools, and it might not be the one you built but the system-wide one from the package
<kjackal> dimitern: the jujud binary: /home/jackal/workspace/gogogo/bin/jujud
<kjackal> dimitern: go install -v github.com/juju/juju/... did not build anything
<dimitern> kjackal: ok, just wanted to double check the binary is correct..
 * dimitern *facepalms*
<dimitern> kjackal: sorry, I forgot the issue I was thinking about is fixed, but for containers on maas provider and others, not the lxd provider
<dimitern> kjackal: so, it's not fixed, but we'll get to it soon
<kjackal> dimitern: do we have a ticket we can monitor? We have a couple of charms that we will need to review as soon as this issue is resolved
<dimitern> kjackal: yeah - that one you filed is already triaged as high and affecting 1.25 and master
<perrito666> bbl
<kjackal> So you are saying if I add the hostname in the hosts file all will be well?
<kjackal> dimitern: I do not understand the mechanism, but I trust you
<kjackal> thanks
<dimitern> kjackal: well, assuming inside the container eth0 has IP 10.20.30.42, if you add a line `10.20.30.42  juju-767941-0 juju-767941-0.lxd` in /etc/hosts after the line about 127.0.0.1, resolving should work inside the container
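dimitern's workaround can be sketched as below: give the container's own FQDN a static entry after the loopback line. The address and hostnames are the ones from his example; the script writes to a scratch file rather than the real /etc/hosts.

```shell
# Sketch of the /etc/hosts workaround described above (scratch file used
# here instead of the container's real /etc/hosts).
hosts=/tmp/hosts.sketch
{
    echo '127.0.0.1 localhost'
    # the unit's static address, short name, and FQDN, as in the example
    echo '10.20.30.42 juju-767941-0 juju-767941-0.lxd'
} > "$hosts"
cat "$hosts"
```

This only fixes resolution *inside* the container — as kjackal points out next, other units still depend on the shared DNS server knowing the name.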
<mup> Bug #1593221 opened: remove manual from clouds.yaml <juju-core:Fix Committed by sinzui> <https://launchpad.net/bugs/1593221>
<kjackal> yes, from within the container resolving the juju-767941-0.lxd would work. However, will that fqdn be resolvable by others?
<kjackal> dimitern: Kafka talks to Zookeeper and says "Hey zookeeper, I am a new kafka unit so that you know I am juju-767941-0.lxd", then Zookeeper talks to Kafka and says "Nah, you are nobody, I cannot find you on DNS"
<dimitern> kjackal: so long as both kafka and zookeeper use the same dns server, and the server works, it should be fine
<dimitern> kjackal: the issue is that dnsmasq seems not to recognize the hostnames we set inside the container
<kjackal> dimitern: ok, thank you!
<dimitern> hmm... that's the real problem
<dimitern> kjackal: can you compare what you see as hostname in /etc/hostname and in /var/log/cloud-init-output.log during the initial boot in the container?
<kjackal> what am I searching for in cloud-init-output?
<kjackal> I can see The key fingerprint is:
<kjackal> 35:24:ce:24:69:5b:89:d5:76:35:51:ab:67:1b:88:57 root@juju-767941-2
<dimitern> kjackal: search for 'hostname' - from the top I guess
<kjackal> the "juju-767941-2" is the /etc/hostname
<dimitern> kjackal: either in /v/l/cloud-init-output.log, or in /v/l/cloud-init.log (where it says as it's about to configure each bit)
<dimitern> kjackal: ah, maybe easier - check /var/lib/cloud/ - rgrep for "hostname:" ?
<kjackal> dimitern: cc_set_hostname.py[DEBUG]: Setting the hostname to juju-767941-2.localdomain (juju-767941-2)
<dimitern> kjackal: that's it!
<dimitern> kjackal: but I bet `nslookup juju-767941-2.localdomain` doesn't resolve?
<kjackal> dimitern:  nope, it does not resolve
<dimitern> kjackal: ok, thanks - added a comment about that to the bug as it looks like the real cause
<kjackal> thanks
<dimitern> voidspace: reviewed
<voidspace> dimitern: ta
<mup> Bug #1593221 changed: remove manual from clouds.yaml <juju-core:Fix Released by sinzui> <https://launchpad.net/bugs/1593221>
<mup> Bug #1593221 opened: remove manual from clouds.yaml <juju-core:Fix Released by sinzui> <https://launchpad.net/bugs/1593221>
<fwereade> whoa, why are we erasing status history for removed units?
<fwereade> perrito666, ^^
<fwereade> not even for *removed* units, merely *destroyed* units
<perrito666> fwereade: well.. good question, I think we deemed it proper cleanup
<perrito666> but I am sure you disagree
<fwereade> perrito666, I do think it's a misleading use of the word "history"
<fwereade> perrito666, and apart from anything else the unit will still be setting statuses and recording history as it shuts down, won't it?
<perrito666> fwereade: agree, open a bug pls
<Yash> Hello dimitern
<Yash> I'm facing a problem. 14         pending 10.100.100.131 juju-4667c953-1492-4846-8123-071efb515fe4-machine-14 xenial
<Yash> Machine is in pending state
<mup> Bug #1593221 changed: remove manual from clouds.yaml <juju-core:Fix Released by sinzui> <https://launchpad.net/bugs/1593221>
<Yash> can we start it?
<mup> Bug #1593263 opened: juju deploy openstack-base results in error with ceph-mon <juju-core:New> <https://launchpad.net/bugs/1593263>
<dimitern> Yash: hey, I'd check the maas ui to see what's holding it up? or is this lxd?
<babbageclunk> Yash: terminal only?
<babbageclunk> Yash: It's a way of getting onto the machine that doesn't rely on ssh (handy if the network isn't coming up).
<Yash> I mean I'm inside the running machine
<Yash> virtual machine
<Yash> ok
<Yash> server is down ..I can see no server running
<Yash> ss -t -a
<babbageclunk> so to remind myself - you've bootstrapped to LXD? And the controller is stuck in pending?
<babbageclunk> Or another machine?
<Yash> Hey babbageclunk restarting machine 0 brings server back
<Yash> I can see status now
<babbageclunk> Yash: yay - when you say server do you mean the juju controller?
<Yash> but one unit is showing agent is lost, sorry! See 'juju status-history neutron-api'
<babbageclunk> Yash: sometimes after rebooting it takes a while for things to get restarted and reconnected.
<Yash> Problem is when I try to deploy openstack one by one components.. (openstack bundle never works for me)
<Yash> Suddenly one machine never starts and remain in pending state
<Yash> so I restart / sometimes remove server with force machine removal
<Yash> Now can you please help me with this error
<Yash> agent is lost, sorry! See 'juju status-history neutron-api'
<babbageclunk> Yash: in juju status, does that machine show up with an ipv6 address as its public address?
<babbageclunk> because we had a bug that sounds like this.
<Yash> This time I'm not using ipv6 only ipv4
<Yash> in lxd bridge configuration..I ignored ipv6
<babbageclunk> Can you paste juju status to http://pastebin.ubuntu.com/
<Yash> Ipv6 surely some problem there
<Yash> juju status --debug
<Yash> or ?
<Yash> how
<babbageclunk> Yash: yeah, we definitely had a bug with lxd containers and ipv6
<babbageclunk> Yash: Is juju status working for you now?
<Yash> yes
<Yash> restarting worked..
<babbageclunk> So could you cut and paste it into a pastebin?
<Yash> Sure.. is there any shortcut for doing that... I mean a command-line tool or something like that?
<marcoceppi> katco: did those patches for OSM/manual provider cleanup ever get released in a 1.25 release?
<katco> marcoceppi: not to my knowledge. only in a special branch/binary.
<Yash> http://pastebin.ubuntu.com/17399592/
<Yash> This time it's clean
<Yash> only agent lost
<Yash> How can I fix that error?
<marcoceppi> katco: cool, I'm about to reply to an email about it, testing seems positive, so I'm sure the stakeholders will want to see that in a point release sooner (rather than later). I'll email some folks about it when I get home
<katco> marcoceppi: cool.
<babbageclunk> Yash: awesome, thanks
<mup> Bug #1593274 opened: remove-unit deletes (some) status history <juju-core:Triaged> <https://launchpad.net/bugs/1593274>
<babbageclunk> Ok, so can you ssh onto machine 9?
<Yash> http://pastebin.ubuntu.com/17399701/
<Yash> Here we can see the .106 IP machine
<Yash> but not with juju status
<Yash> yeah.. sorry, we do see it
<babbageclunk> Hmm - you can see the .106 machine in the machines list.
<Yash> juju ssh 9  : failed
<mup> Bug #1593263 changed: juju deploy openstack-base results in error with ceph-mon <juju-core:Invalid> <https://launchpad.net/bugs/1593263>
<Yash> ssh ubuntu@ip  : connection refused
<Yash> same error
<Yash> This command rocks
<Yash> sudo lxc exec juju-4667c953-1492-4846-8123-071efb515fe4-machine-9 -- /bin/bash
<Yash> Please include somewhere in docs
<Yash> it will help everyone
<Yash> I restarted the machine.. let's see
<babbageclunk> No!
<babbageclunk> I mean, you can do that.
<babbageclunk> But then we can't work out what's gone wrong.
<Yash> ok how?
<babbageclunk> Ah well
<Yash> now machine is back..agent lost problem solved
<Yash> you guys rocked :)
<babbageclunk> :)
<babbageclunk> Don't know if we helped much - it's better if the machine comes up properly!
<Yash> That will be awesome
<Yash> one bug is that it's really slow and I have to wait until one machine comes up before trying another
<Yash> deploying two or more at once creates problems
<Yash> I think either it's not a bug or I'm missing something
<Yash> ceph-osd/0        blocked         idle        2.0-beta7 6                10.100.100.130 No block devices detected using current configuration
<Yash> No block devices detected using current configuration
<Yash> What is this?
<Yash> Do I need to configure something inside machine which should be automatic ? not sure though
<babbageclunk> Not sure, but I think ceph needs block devices to provide its storage. https://jujucharms.com/docs/devel/charms-storage
<babbageclunk> Actually, I don't know how it relates to cinder (openstack block storage)
<Yash> ok, some charms are still for trusty, like
<Yash> juju deploy cs:~axwalk/postgresql --storage data=cinder,10G
<Yash> in the devel docs, which are supposed to be for xenial.. Am I right?
<mup> Bug #1593263 opened: juju deploy openstack-base results in error with ceph-mon <juju-core:Invalid> <https://launchpad.net/bugs/1593263>
<babbageclunk> I think that doc was written before xenial was out.
<Yash> ok
<Yash> How to access http://10.100.100.217/gui/uuid from laptop ......when lxd is in desktop within same network
<Yash> I mean 10.100.100.217 is private ip inside desktop
<Yash> Its required for Juju gui and openstack dashboard
<Yash> I can't find a doc on this topic.
<Yash> Please suggest?
<babbageclunk> I use sshuttle: https://github.com/apenwarr/sshuttle
<babbageclunk> on the laptop, run sshuttle -r user@desktop 10.0.0.0/8
<Yash> and from the web browser?
<Yash> That's cool tool
<babbageclunk> Yeah, it's really neat.
<babbageclunk> Once it's running you should be able to see 10.100.100.217 in the browser.
<mup> Bug #1593263 changed: juju deploy openstack-base results in error with ceph-mon <juju-core:Invalid> <https://launchpad.net/bugs/1593263>
<Yash> let me try :)
<babbageclunk> (dimitern only put me on to it a few days ago)
<mup> Bug #1591225 changed: Generated image stream is not considered in bootstrap on private cloud <juju-core:Opinion> <https://launchpad.net/bugs/1591225>
<mup> Bug #1592887 opened: juju destroy-service deletes openstack volumes <juju-core:New> <https://launchpad.net/bugs/1592887>
<mup> Bug #1593299 opened: HA recovery fails in azure <azure-provider> <blocker> <ci> <ha> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1593299>
<mup> Bug #1593303 opened: Google Compute Engine provider often reports wrong IP address as the public address <juju-core:New> <https://launchpad.net/bugs/1593303>
<dooferlad> dimitern, frobware: do you have an update on https://bugs.launchpad.net/maas/+bug/1590689 - I'm just being asked myself
<mup> Bug #1590689: MAAS 1.9.3 + Juju 1.25.5 - on the Juju controller node eth0 and juju-br0 interfaces have the same IP address at the same time <cpec> <juju> <maas> <sts> <juju-core:Fix Committed> <juju-core 1.25:In Progress by dimitern> <MAAS:Invalid> <https://launchpad.net/bugs/1590689>
<dimitern> dooferlad: I have a working fix, but it's not done as it still causes issues for lxcs; on the upside, it's confirmed to work ok with bonds
<dooferlad> dimitern: thanks for the update. If you could put a quick update in the bug then anastasiamac will be very happy. Do you have an ETA?
<dimitern> dooferlad: later tonight, will update the bug as well - need to be afk for a while though..
<dooferlad> dimitern: please ping me for the review.
<dimitern> dooferlad: ok
<mup> Bug # changed: 1577945, 1589353, 1592221, 1592582, 1592981, 1592987
<dimitern> dooferlad: here it is, for review (might need to tweak it here and there after I do more testing): http://reviews.vapour.ws/r/5087/
<dimitern> frobware, voidspace: ^^ if you can have a look as well
<mup> Bug # changed: 1482634, 1504637, 1537585, 1553059, 1571545, 1576318, 1576674, 1577614, 1579633, 1581627, 1581893, 1583412, 1585005, 1586298, 1587788, 1588095, 1588446, 1588559, 1588924, 1589061, 1589066, 1589748, 1590095, 1590205, 1590689, 1590960, 1592210, 1592733, 1593033, 1593042, 1593188
<dooferlad> dimitern: so, not a small change then *sigh*
<anastasiamac> fwereade: format 2.0 PR is pretty much just a rename of an existing file. I'll remove further refs to 1.18 you've identified, but I don't feel like fixing yaml output/tags is a concern of this PR :D
<anastasiamac> fwereade: m happy to be corrected :)
<dimitern> dooferlad: it's not huge - the important parts are in add-juju-bridge.py mostly; the other stuff is mostly backported from master (except for the state changes, which turned out to be needed)
<voidspace> dimitern: looking
<voidspace> dimitern: I seem to have reviewed this before...
<dimitern> voidspace: it's mostly backported
<Yash> Now I can see dashboard login...hurray :)
<Yash> http://10.100.100.197/horizon/auth/login/
<Yash> what is password  and username?
<fwereade> anastasiamac, I would really appreciate it if you would add the tags, I feel we should be eliminating implicit serializations as we touch them
<anastasiamac> fwereade: only because u ask so nicely and will really appreciate it \o/
<anastasiamac> fwereade: tyvm for review :D
<mup> Bug #1593350 opened: Juju register allows users to register alternatively named controllers of the same UUID <juju-core:In Progress by macgreagoir> <https://launchpad.net/bugs/1593350>
<voidspace> dimitern: LGTM
<redir> perrito666: yt?
<redir> katco: yt?
<katco> redir: otp, ttyl
<perrito666> redir: yup
<redir> perrito666: I'm back re: keystone3
<perrito666> redir: tell me
<redir> perrito666: https://goo.gl/d5S5mF says If "domain-name" is present (and
<redir> user/password too) juju will use V3 authentication by default.
<redir> so is that present as a key or present as a non empty string?
<redir> perrito666: ^
<redir> happy to HO if that is easier
<perrito666> as a non empty string
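The rule perrito666 confirms — V3 is selected only when domain-name is a non-empty string, not merely a present key — can be sketched like this (variable name and output are illustrative, not the actual keystone config code):

```shell
# Sketch of the version-selection rule discussed above: a present-but-empty
# domain-name still means V2; only a non-empty value switches to V3.
domain_name=""                        # present but empty -> still V2
if [ -n "$domain_name" ]; then
    auth_version=3
else
    auth_version=2
fi
echo "auth_version=$auth_version" | tee /tmp/auth_version.sketch
```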
<perrito666> redir: I am in a place where I could not ho properly, apologies
<redir> tx
<redir> np perrito666
<perrito666> redir: I did the original of that and katco did an amazing job fixing it because I had done some dumb things
<mup> Bug #1592155 changed: restore-backup fails when attempting to 'replay oplog'. <backup-restore> <blocker> <juju-core:Fix Released> <https://launchpad.net/bugs/1592155>
<mup> Bug #1592155 opened: restore-backup fails when attempting to 'replay oplog'. <backup-restore> <blocker> <juju-core:Fix Released> <https://launchpad.net/bugs/1592155>
<dimitern> voidspace: thanks!
<mup> Bug #1592155 changed: restore-backup fails when attempting to 'replay oplog'. <backup-restore> <blocker> <juju-core:Fix Released> <https://launchpad.net/bugs/1592155>
<mup> Bug #1579750 changed: Wrong hostname added to dnsmasq using LXD <lxd-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1579750>
<dooferlad> cherylj, anastasiamac, macgreag1ir, redir: http://paste.ubuntu.com/17408402/
<redir> perrito666: http://reviews.vapour.ws/r/5088/
<redir> katco: perrito666 ^ can you have a look? That makes the warning go away, but unsure if other bits were lost along the way
<mup> Bug #1593394 opened: model already exists but can't be destroyed because it's not found <v-pil> <juju-core:New> <https://launchpad.net/bugs/1593394>
<mup> Bug #1593395 opened: model already exists but can't be destroyed because it's not found <v-pil> <juju-core:New> <https://launchpad.net/bugs/1593395>
<mup> Bug #1593395 changed: model already exists but can't be destroyed because it's not found <v-pil> <juju-core:New> <https://launchpad.net/bugs/1593395>
<perrito666> this makes no sense, the whole trip coming to this city I had LTE and now returning I only have 3g
<perrito666> oh there it is
<perrito666> wallyworld: wanna?
<wallyworld> sure
 * katco has been forced to retreat to her headless to run the full suite of juju tests
<perrito666> wallyworld: loading
<alexisb_> wallyworld, available whenever you are free
<wallyworld> alexisb_: be there in a minute, just talking to horatio
<alexisb_> perrito666 I love all teh different variations of you name wallyworld comes up with
<perrito666> wallyworld: sorry went from LTE to Edge
<wallyworld> perrito666: np, i think we had finished
<wallyworld> enjoy the rest of the bus trip back home :-)
<perrito666> alexisb_: he usually goes very close to hora[a-z]*o
<perrito666> wallyworld: yes, people seem rather unhappy that I awoke them by speaking english loudly :p
<wallyworld> :-)
<perrito666> I still can't understand how this same trip this morning had LTE coverage and now it hasn't
<perrito666> and honestly, who sleeps in a 6pm trip?
<thumper> axw, davechen1y, menn0: I have updated the juju/mutex PR, keeping the Acquirer interface, but removing Acquire function to have only one way. Renamed Spec to Mutex, removed error return from Release
<thumper> tested on linux, mac and windows
<thumper> also added a few more tests
<thumper> davechen1y: axw asked for the interface
<thumper> can you two duke it out
<thumper> I don't particularly care one way or the other
<thumper> but I'd like to start removing fslock asap
<thumper> menn0: do you have an opinion on Acquirer interface vs. package function?
#juju-dev 2016-06-17
<redir> katco: fwereade: http://reviews.vapour.ws/r/5088/ better?
<mup> Bug #1593492 opened: Failure bootstrap a controller on openstack reports may report misleading error <openstack-provider> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1593492>
<menn0> thumper: I prefer Acquirer interface
<menn0> easier to deal with in tests
<fwereade> thumper, menn0: while I'm awake: there seems to be more getRawCollection usage around than we should have; is this a sign we need a getUnfilteredCollection, that returns a mongo.Collection not a *mgo.Collection?
<thumper> fwereade: while you're up
<fwereade> thumper, menn0: because, *arrrgh* direct mongo access with all the lets-unwittingly-destroy-integrity methods
<thumper> fwereade: I'd like you to take a quick look at the mutex code again
<thumper> davechen1y: sprinkle the Release methods with sync.Mutex?
<thumper> davechen1y: easy enough if valuable
 * thumper fetches his sparkley sprinkle dust
 * thumper pushes
<thumper> fwereade: quick Q
<thumper> the sole remaining argument with the mutex package is this one:
<thumper> Acquirer interface or mutex.Acquire function
<fwereade> Acquire(spec) +100 from me
<fwereade> thumper, have only got more convinced of that through the day
<thumper> fwereade: so what about axw and menn0's reasoning of mocking out in tests?
<menn0> fwereade, thumper: sorry was having lunch
<thumper> personally I'm partial to mutex.Acquire(spec)
<thumper> and have unique specs for tests if needed
<menn0> fwereade: getUnfilteredCollection sounds like a good idea to me (maybe call it getGlobalCollection to match the naming used in allcollections.go)
<fwereade> thumper, where mutex is the package?
<thumper> the system level mutex should be fast and not really a problem
<thumper> fwereade: yeah
<fwereade> menn0, COOL
<fwereade> well, cool, possibly not *quite* that cool
<menn0> haha
<fwereade> thumper, my instinct says `Acquire(Spec) (*Mutex, error)`, which gets wrapped up as a `func(mutex.Spec) (Releaser, error)` by clients
<thumper> menn0: how strongly do you feel about the Acquirer interface?
<fwereade> menn0, axw: horrible? ^^
<menn0> thumper: not hugely... tests that want something that creates a mutex instance can always take a callable.
<menn0> as above
 * thumper will have to go back and bring spec back
 * thumper sighs
<thumper> who can I get to give a blessing to the work?
<menn0> if you have fwereade, axw and me is that enough?
 * menn0 has to pick up his son from preschool
<thumper> menn0: but you all haven't given agreement
<fwereade> I think we're enough
<thumper> :)
<menn0> you have my blessing with either approach
<menn0> they're both workable and I really just want fslock to DIAF
<thumper> I'll go back and change Acquire to a function
<thumper> and rename Mutex back to Spec
<thumper> menn0: me too
 * thumper does one last (hopefully) rename dance
<axw> thumper menn0 fwereade: I'm a bit beyond caring TBH, let's just do *something* and fix it if it's a problem. it's not going to be hard to change.
<thumper> k
 * thumper merges this bad boy
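The machine-wide lock settled on above — a single `mutex.Acquire(spec)` call guarding work like the uniter hook lock, replacing fslock — behaves much like `flock(1)` on a well-known file. This is an analogy under assumed paths, not the juju/mutex implementation (which is a Go package using OS-level mutexes):

```shell
# Analogy only: flock(1) gives machine-wide mutual exclusion keyed by a
# file, similar in spirit to mutex.Acquire(Spec) discussed above.
lockfile=/tmp/uniter-hook.sketch.lock
(
    # -n: fail immediately instead of blocking if another process holds it
    flock -n 9 || { echo "lock held elsewhere"; exit 1; }
    echo "lock acquired; running hook" > /tmp/uniter-hook.sketch.out
) 9>"$lockfile"     # fd 9 is the lock handle; released when the subshell exits
cat /tmp/uniter-hook.sketch.out
```

Unlike fslock, both flock and the new mutex release automatically when the holding process dies, which is the property that motivated the replacement.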
<mup> Bug #1593506 opened: controller won't die <juju-core:New> <https://launchpad.net/bugs/1593506>
<mup> Bug #1593509 opened: Enhance error message when user not logged in <juju-core:New> <https://launchpad.net/bugs/1593509>
<davechen1y> thumper: good stuff
<davechen1y> what's next ?
<thumper> davechen1y: uniter hook lock
 * thumper is looking
<thumper> wallyworld: I'm back now, did you want to chat?
<wallyworld> thumper: can we do it in say 20-30 after another meeting?
<thumper> wallyworld: sure
<wallyworld> ok, will ping
 * thumper grabs the uniter hook execution lock thread and starts pulling
<davechen1y> thumper: can I start replacing juju/utils/filelock ?
 * thumper looks at that
<thumper> davechen1y: yes
<thumper> davechen1y: where is that used?
<thumper> davechen1y: it doesn't look used anywhere
<davechen1y> thumper: famous last words
<wallyworld> thumper: free if you are
<thumper> wallyworld: coming
<thumper> heh
 * thumper just made an amusing typo 
<thumper> stepFuck
<thumper> shoulda been stepFunc
<thumper> guess I type that more than I thought
<thumper> arghh
<thumper> :(
<thumper> Just came across uniter_test:68 again
<thumper> ffs
<davechen1y> thumper: eh ?
<thumper> where it builds jujud in setup suite
 * davechen1y insert hulk rage gif here
<thumper> hmm...
<thumper> I think I can just delete it...
 * thumper tries
<thumper> I'm getting very aware that this thread is getting longer
<thumper> but I'm not done pulling yet
<thumper>  17 files changed, 134 insertions(+), 410 deletions(-)
<thumper> deletions winning...
<axw> wallyworld: set-numa-control-policy should be moved to controller config
<wallyworld> that's not model specific?
<menn0> thumper: bug fix for an issue I discovered during manual testing: http://reviews.vapour.ws/r/5093/
<davecheney> thumper: turns out you were right
<davecheney> nothing uses juju/utils/filelock
<davecheney> PR incoming
<axw> wallyworld: nope, only affects how we set up mongo
<wallyworld> ah, right, will do
<axw> wallyworld: also, does cloudimg-base-url still make sense? I can't remember what the story for lxd is going to be
<wallyworld> axw: no, it doesn't without lxc i am pretty sure, i thought that would have been investigated and removed as part of the lxc cleanup
<axw> wallyworld: I guess we should remove it and whatever needs to be done for lxd can replace it
<wallyworld> yup
 * thumper afk for a bit
<axw> wallyworld: I'd like to remove storage-default-block-source from model config, and just have environ providers register their default. any objections?
<wallyworld> it was in config so users could override though right?
<axw> wallyworld: I think so, but I'm pretty sure nobody is using it, or even knows about it. OTOH, we have been bitten by providers not setting it several times
<axw> wallyworld: we could keep it and default to whatever the provider registers?
<wallyworld> sgtm
<davecheney> thumper: http://reviews.vapour.ws/r/5094/
<axw> wallyworld: as in the default won't be specified in model config
<wallyworld> so can a user specify their own default block source?
<wallyworld> if they don't like the provider default
<axw> wallyworld: yes
<wallyworld> in config as a global thing though?
<wallyworld> i guess we don't need it
<axw> wallyworld: I'm also thinking that while I'm doing this separation of config, I'll remove name, type, and uuid from model config. they're part of the model's identity, not the config
<axw> they'll still be available via environ config of course
<wallyworld> yep. so you also doing the numa thing etc too?
<axw> wallyworld: not atm, just thinking about the myriad things that need to be done
<wallyworld> ok
<menn0> axw: would you mind taking a look at this one? it's tiny. http://reviews.vapour.ws/r/5093/
<axw> menn0: sure, looking
<axw> menn0: LGTM
<menn0> axw: thanks
<thumper> :-)
<thumper> this change is falling out nicely
<thumper> not quite there yet
<thumper> but getting there
 * thumper is done
<thumper> very close to replacing the uniter hook lock with a mutex
<thumper> this includes the uniter, meterstatus, juju-run, container init
<thumper> and reboot
<thumper> \o/
<thumper> will be ready monday I reckon
<thumper> then to backport to 1.25
<thumper> phew
<thumper> laters peeps
<mup> Bug #1593566 opened: Bootstrap reports oath1 not supported with maas 2.0 <bootstrap> <cdo-qa> <cdo-qa-blocker> <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1593566>
<wallyworld> axw: i've pushed some changes to that branch; i reckon there were previously bugs in lxd and/or gce that we didn't know about
<Yash> hello
<Yash> how to solve
<Yash> nova-compute/10         error           idle        2.0-beta7 2                      10.100.100.200 hook failed: "install"
<axw> wallyworld: sorry need to knock off early today, will try to look later on
<wallyworld> np
<admcleod_> when is 2.0 stable expected?
<Yash1> admcleod: Do let me know if you need any logs of my installation attempt. If yes how?
<admcleod> Yash1: the best way would be pastebin.com or pastebin.ubuntu.com or another similar service
<Yash1> ok
<Yash1> https://10.100.100.17:17070/gui/b4579691-8e4e-4892-8640-c0c5a8a758a6/
<Yash1> I can't see xenial option in series
<Yash1> admcleod:  Can you please suggest ?
<admcleod> Yash1: im sorry, i cant access that internal ip address and i need to go afk - this question is probably better for #juju though i think. i will try to help when i get back if you have not resolved it
<Yash1> I manually added xenial in the url and now I'm able to see that, but now
<Yash1> Could not deploy the requested service. Server responded with: no such request - method Client(1).ServiceDeploy is not implemented. Could not add the requested unit. Server responded with: no such request - method Client(1).AddServiceUnits is not implemented
<Yash1> GUI message
<Yash1> babbageclunk: Hey
<Yash1> admcleod: ok c above also
<babbageclunk> Yash1: Sounds like a version mismatch between juju client and the running juju controller - have you just upgraded?
<babbageclunk> Yash1: service was renamed to application in the latest beta release.
<babbageclunk> Yash1: If you've just upgraded juju on the machine you should probably rebootstrap (I think).
<wallyworld> admcleod: we're hoping to get a release candidate out soon (maybe 3 weeks). there's no exact date for a 2.0 final. "when it's ready" really :-)
<wallyworld> dimitern: do we still use ignore-machine-addresses? well it seems we do, but were we going to remove it for 2.0?
<dimitern> wallyworld: it's there as an 'off switch' if it causes trouble
<dimitern> wallyworld: but in 2.0 we're getting closer to being able to drop it (not quite there yet..)
<wallyworld> ok, thanks. just doing some config yak shaving
<frobware> dimitern: do you have time to sync?
<dimitern> frobware: yeah, sure
<frobware> dimitern: 1:1 HO
<dimitern> frobware: omw
<mup> Bug #1593708 opened: Why wait for Lp to make agents when testing made them <juju-core:Triaged> <https://launchpad.net/bugs/1593708>
<frobware> dimitern: how are you creating the VLANs on the bond?
<dimitern> frobware: with the maas ui, why?
<frobware> dimitern: 1.9.3? I don't see any option. I can create the bond but can only create aliases on top of it.
<dimitern> frobware: make sure you're on the right fabric first - then you should see the vlans in the dropdown
<frobware> dimitern: in 1.9?
<dimitern> frobware: yeah (to clarify I'm talking about the node details page's interfaces section)
<frobware> dimitern: well... care to HO...
<dimitern> frobware: ok
<dimitern> frobware: let me dig out my other headset first..
<frobware> dimitern: don't worry... I know exactly why / what I did...
<dimitern> frobware: oh? did you manage to sort it out?
<frobware> dimitern: yeah, I just realised I re-installed that MAAS recently and have no VLANs configured... oops
<dimitern> frobware: ah :) right
<frobware> dimitern: wrong vmaas install ;)
<dimitern> frobware: so I sent another tarball to lcavassa to verify, this time with a change to the bridge script so it omits any source stanzas while rendering the modified version
<frobware> dimitern: it's something I had contemplated. in fact, I think we spoke about this recently
<dimitern> frobware: as that infamous eth0.cfg strikes again :/
<frobware> dimitern: yep
<frobware> dimitern: bbiab (lunch)
<dimitern> frobware: enjoy :)
<mup> Bug #1593730 opened: Network error after reboot agent <juju-core:New> <https://launchpad.net/bugs/1593730>
<babbageclunk> dimitern, voidspace, frobware: feel like a little Friday-afternoon reviewing? You could look at the state/migration part of the workload version change! http://reviews.vapour.ws/r/5095/
<mup> Bug #1593730 changed: Network error after reboot agent <juju-core:New> <https://launchpad.net/bugs/1593730>
<dimitern> babbageclunk: I'll have a look
<babbageclunk> dimitern: Thanks!
<dimitern> frobware: replied to your comment btw
<frobware> dimitern: looking
<frobware> dimitern: ok, so the source stanza is a pain...
<frobware> dimitern: I think this should be an option :)
<dimitern> frobware: an argument you mean?
<mup> Bug #1593761 opened: Cannot bootstrap in gce using jsonfile in credentials <add-credential> <bootstrap> <ci> <gce-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1593761>
<frobware> dimitern: yep
<babbageclunk> dimitern, voidspace: nice easy one! http://reviews.vapour.ws/r/5096/
<frobware> dimitern: ./add-bridge --omit-source-stanza
<dimitern> frobware: sgtm - with 'true' as the default
<dimitern> babbageclunk: almost done with the first one
<dimitern> frobware: how about --keep-source-stanza ? :)
<frobware> dimitern: was experimenting with --ignore-source-stanzas
<frobware> dimitern: default=true
<dimitern> frobware: as long as it omits them by default, I don't mind the name
<dimitern> babbageclunk: both PRs reviewed
<katco> ericsnow: standup time
<babbageclunk> dimitern: yay, thanks
<dimitern> frobware: I think I got it to work, pushing updated diff for http://reviews.vapour.ws/r/5087/ in a moment
<dimitern> frobware: doing a final test now, just in case.. if you're happy with the approach, let's land it?
<frobware> dimitern: I have concerns about all the fixes as one commit
<dimitern> frobware: they're in different commits
<frobware> dimitern: ok, correction as one big PR
<dimitern> frobware: ok, I'll leave it up to you then I guess
<dimitern> frobware: I have confirmation it works on bootstack
<frobware> dimitern: I couldn't bootstrap with your branch
<frobware> dimitern: for reasons I'm unsure of right now
<dimitern> frobware: on a bond or ?
<frobware> dimitern: vlan on a bond
<frobware> dimitern: on real dual-nic h/w
<dimitern> frobware: did you deploy ok on the same node without juju?
<frobware> dimitern: yes with just a bond.
<frobware> dimitern: but that's when I found out I didn't have any VLANs
<frobware> dimitern: so I didn't try deploying from MAAS after that, straight to juju
<dimitern> frobware: and bootstrap worked ok on a bond with no vlan?
<frobware> dimitern: correct
<dimitern> frobware: what did you do next then?
<frobware> dimitern: created some VLANs - tried to bootstrap which is then what I started reviewing your change
<dimitern> frobware: right
<dimitern> frobware: I have a kvm node with 2 nics in a bond, and a single VLAN on it, currently being deployed by juju
<frobware> dimitern: that is essentially my setup s/kvm/hardware/
<frobware> dimitern: let me try again
<dimitern> frobware: it looks like it works.. same as on bootstack (as reported by lorenzo - 3 out of 3, added comment to the bug)
<dimitern> frobware: I do run sshuttle with all subnets on that maas though
 * dimitern is outta here
<dimitern> happy weekends everybody ;)
<babbageclunk> frobware: I can't bootstrap to AWS - I get this error:
<babbageclunk> frobware: ERROR failed to bootstrap model: cannot start bootstrap instance: missing controller UUID
<frobware> babbageclunk: well, a new one for me...
<frobware> babbageclunk: rewind a few commits... perhaps
<babbageclunk> frobware: yeah - checking master.
<frobware> babbageclunk: I can't bootstrap on MAAS so ...
<babbageclunk> frobware: swap you
<frobware> babbageclunk: ah, I wasn't patient enough. this dual-nic celeron ... is just that!
<babbageclunk> frobware: nope, still same failure on master. I wonder if there's some new piece of setup I need?
<frobware> babbageclunk: you're so far ahead... I'm currently testing 1.25.6
<voidspace> frobware: babbageclunk: relatively easy one http://reviews.vapour.ws/r/5098/
<frobware> voidspace: dne
<frobware> done even
<voidspace> frobware: thanks
<frobware> voidspace: how do you do the "fix it, then ship it" lark?
<voidspace> frobware: don't know - I've never been able to do that
<balloons> the uuid change is commit f3cf6b
<frobware> balloons: is this related to bootstrap failure?
<balloons> frobware, yes, it's the cause
<alexisb_> we think
<frobware> balloons: well... so happy to be testing on 1.25 today. :)
<voidspace> frobware: you caught my deliberate typo :-)
<voidspace> frobware: in the gateway address...
<frobware> voidspace: oh! I didn't catch on that it was deliberate.
<voidspace> it's always good to have something to check reviewers are actually reading the code ;-)
<voidspace> frobware: it wasn't
<frobware> voidspace: hehe
<frobware> voidspace: I chuckled because in the 'real world' the configuration of the interface would have failed.
 * frobware thinks celerons are slower than he imagined...
<voidspace> frobware: nice
<voidspace> frobware: all those issues fixed - thanks for the review
<natefinch> gsamfira: you around?
<gsamfira> yup. I am now
<gsamfira> natefinch: what up?
<natefinch> gsamfira: wanted to talk about this PR: https://github.com/natefinch/npipe/pull/20
<gsamfira> natefinch: its a really rough PoC :). Tries to keep track of the connections that get made. It should not be merged as is
<gsamfira> natefinch: and it will probably fail if there is another process using the same named pipe, and decides to close it while we still try to listen on it. So there should probably be a way to test the pipe, and see if its still open
<gsamfira> natefinch: while clients already listening will get the event and disconnect, there is a potential race condition if we assume we are the only process using that named pipe. So starting a wait forever on a named pipe that just got closed, will probably hang the thread.
<natefinch> gsamfira: what's the actual problem that it's trying to fix?   I see there's a race condition on close/accept
<natefinch> gsamfira: ahh
<gsamfira> the best example is the broken test I told you about, that uses both rpc.Listen and implements its own listener. If the named pipe gets closed (by a second goroutine), and you try to wait on it from the first, it will hang forever
<gsamfira> the npipe package only keeps track of the last connection it makes
<gsamfira> it does not care about the rest
<natefinch> I don't think keeping a global map of connections is the way to go... as you said, other processes can still cause that problem.
<gsamfira> yap, you are correct
<gsamfira> that code is something I slapped together to see if that was indeed the problem
<gsamfira> but the solution should be something else
<natefinch> it seems like the answer is to give the caller an option for the wait to time out
<gsamfira> natefinch: having waitForCompletion wait forever might not be the best thing to do
<gsamfira> yeah
<gsamfira> or do some kind of polling
<gsamfira> natefinch: maybe even check between polls if the named pipe is still there
<voidspace> natefinch: maybe you can elucidate something for me
<natefinch> yeah, I'm sort of surprised that closing the pipe doesn't cause WaitForSingleObject to fail
<voidspace> natefinch: state/sshhostkeys.go - SetSSHHostKeys
<gsamfira> natefinch: if you call WaitForSingleObject immediately after closing the pipe, the file descriptor for the named pipe might be allocated to some other process, like notepad....and its going to wait for that :)
<gsamfira> for as long as its active
<voidspace> natefinch: why have both the insert and update ops - is that because if the doc exists the insert will silently fail but the update will work?
<natefinch> voidspace: yeah.  There's no upsert in mongo, so you have to try insert first, and if it exists, then do an update.  It's horrible.
<voidspace> natefinch: weird, thanks
<gsamfira> natefinch: waitForsingleObject takes a file descriptor...it does not care if that FD is a named pipe or an FD held open by MS Paint :). If you tell it to wait for file descriptor 500, and between the time you disconnect the named pipe and the time you call WaitForSingleObject that FD gets repurposed, you are pretty much up the creek :)
<gsamfira> natefinch: so WaitForSingleObject is potentially dangerous
<Yash> admcleod: Hey
<Yash> juju deploy cs:bundle/openstack-base-43
<Yash> ERROR cannot deploy bundle: cannot create machine for holding ceph-mon unit: invalid container type "lxc"
<Yash> With fresh install
<Yash> 2.0-beta9-xenial-amd64
<Yash> What logs are required to log bug?
<natefinch> Yash: I know what that is... I thought I'd checked in the fix Tuesday, but maybe not
<natefinch> Yash: we dropped lxc support, but juju is supposed to seamlessly translate lxc specification in bundles to lxd
<Yash> ok but I just  lxd init and  bootstrap
<Yash> then deploy
<Yash> so I was not using any lxc as my own
<natefinch> Yash: right, but the bundle you're deploying probably specifies putting something in an lxc container
<Yash> natefinch: It's a little crazy for me. I'm trying all this for the past 1-2 weeks and nothing worked
<Yash> Should I use 1.x instead
<Yash> I used beta 7 and now beta 9
<natefinch> Yash: we're kind of in a mad dash to get 2.0 out the door.  We're trying to maintain a working product, but sometimes things slip through the cracks
<Yash> And in beta 9 there are several problems in juju gui also
<natefinch> Yash: beta is especially beta this time around.  We're trying to make it somewhat less beta ASAP.
<Yash> I tried to deploy juju gui unit also and same problem
<Yash> ok
<Yash> Any release date
<Yash> There is no roadmap on site
<Yash> Please put a roadmap and milestone. That will help.
<natefinch> I expect end of next week to see a lot of stability improvements... and especially these "basic things are broken" will be fixed in the next few days
<katco> Yash: there is this, but it is out of date: https://github.com/juju/juju/wiki/Juju-Release-Schedule
<Yash> ok
<katco> cherylj: ^^^
<Yash> Yes already checked. Very outdated :-(
<cherylj> yes, it is
<Yash> You want me to wait or should I try 1.x instead
<Yash> I want to use latest but facing so many problems
<katco> Yash: there is also this, but dates are not updated: https://launchpad.net/juju-core/+milestones
<Yash> This also checked. Only showing date which are released. :)
<Yash> I googled a lot
<alexisb_> Yash, in general we will be releasing betas until we feel we are stable enough to go rc
<alexisb_> yash beta7 is going to be your most "stable" version for now
<Yash> ohh.. That's a scary line
<Yash> beta 7 is not stable
<alexisb_> as there are a lot of big changes in the upcoming betas
<Yash> I worked and can confirm
<alexisb_> well it is a 2.0 beta
<alexisb_> 1.25 is our stable version
<Yash> Yea.:(
<Yash> I will try 1.x now
<katco> Yash: we've been regularly adding new features in betas; i know that's not traditionally done in betas, but that's what we're calling it
<katco> Yash: if you're looking for stability, 1.25 is definitely the way to go; but be aware 2.0 changes a lot of things
<Yash> I'm happy to see all features and that's why interested and trying to use for past 1 week or more
<katco> Yash: really apologize for the inconvenience. we're working diligently to approach a 2.0 release
<Yash> Let me try 1.x for the moment.
<Yash> katco: np and Thank you for great work.
<Yash> only concern it's not worked even with my lots of effort :(
<alexisb_> Yash, thank you for testing things out, we will get on the lxc fix asap
<Yash> If you are looking for external tester I can participate.
<Yash> I'm python developer by profession. :)
<katco> Yash: we do it for people like you :) i wouldn't be alarmed that you haven't been able to get it to work; there are just a number of bugs that, while seem large, don't have a lot of depth to them. they just require intimate knowledge of juju to understand what's happening.
<katco> Yash: i think we're still trying to get a python libjuju going if you feel like contributing :)
<Yash> Yea. right. In starting nothing comes in my mind but now I know a lot atleast basic things
<Yash> it will be paid contributing or community one :-D
<katco> Yash: the biggest difference between 1.x and 2.x is there is only 1 model per controller in 1.x
<Yash> katco: https://github.com/juju/juju-bundlelib ?
<katco> Yash: no, that's not it. marcoceppi do you know where libjuju is?
<marcoceppi> katco: the one we're building now?
<marcoceppi> it's pretty heavy dev still
<katco> marcoceppi: yes. Yash is a python dev and might want to contribute :)
<Yash> I can try atleast
<mup> Bug #1593812 opened: Failed to bootstrap: missing controller UUID <bootstrap> <juju-gui> <juju-core:Triaged> <https://launchpad.net/bugs/1593812>
<Yash> katco: How to start on it? Is it on github?
<katco> Yash: i think marcoceppi is still looking; i don't know where it is unfortunately
<Yash> katco: Is it important or just an idea?
<katco> Yash: it's definitely more than just an idea. it exists; people are working on it
<Yash> ok
<mup> Bug #1593506 changed: juju can't kill a controller that's already dead <juju-core:New> <https://launchpad.net/bugs/1593506>
<mup> Bug #1593509 changed: Enhance error message when user not logged in <juju-core:New> <https://launchpad.net/bugs/1593509>
<mup> Bug #1593828 opened: cannot assign unit E11000 duplicate key error collection: juju.txns.stash <conjure> <juju-core:New> <https://launchpad.net/bugs/1593828>
<Yash> katco: I'm waiting now for that thing... :)
<katco> marcoceppi: don't leave us hanging :) ^^^^
<marcoceppi> katco Yash ask tvansteenburgh1 :)
<marcoceppi> katco Yash https://github.com/juju-solutions/python-libjuju though tvansteenburgh1 has a branch he's working
<tvansteenburgh1> katco, Yash: it's not really ready for contribs yet. basic architecture still be nailed down
<tvansteenburgh1> s/be/being/
<katco> marcoceppi: tvansteenburgh1: ta for the status
<mup> Bug #1593838 opened: juju beta9 does not support "lxc" notation in bundles <blocker> <bundles> <cdo-qa> <cdo-qa-blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1593838>
<natefinch> alexisb_, katco, ericsnow: quick review for a fix for lxc in bundles: http://reviews.vapour.ws/r/5099/
<ericsnow> natefinch: LGTM
<natefinch> ericsnow: thanks!
<mup> Bug #1593850 opened: Deployment stuck in "Pending" for all containers <cdo-qa> <cdo-qa-blocker> <juju-core:New> <https://launchpad.net/bugs/1593850>
<mup> Bug #1593855 opened: agent.config file must always have mongodb version <juju-core:Triaged> <https://launchpad.net/bugs/1593855>
<Yash> tvansteenburgh1: ok
<mup> Bug #1593859 opened: agent config format and cloud-init data test <juju-core:New> <https://launchpad.net/bugs/1593859>
<mup> Bug #1593859 changed: agent config format and cloud-init data test <juju-core:Invalid> <https://launchpad.net/bugs/1593859>
<mup> Bug #1593859 opened: agent config format and cloud-init data test <juju-core:Invalid> <https://launchpad.net/bugs/1593859>
<mup> Bug #1593859 changed: agent config format and cloud-init data test <juju-core:New> <https://launchpad.net/bugs/1593859>
<cherylj> is there anyone around who knows the azure provider?
<cherylj> wallyworld: are you around?
<alexisb> cherylj, andrew is your man
<alexisb> whats up?
<cherylj> alexisb: I fixed the first problem with bootstrapping on azure, but now it's running into a different problem because of some changes wallyworld made
<alexisb> cherylj, ok
<alexisb> do we know what is hosing the builds due to wallyworlds commit?
<wallyworld> cherylj: i can assign bug 1593812 to myself and fix the fallout in one go if you want
<mup> Bug #1593812: Failed to bootstrap: missing controller UUID <blocker> <bootstrap> <juju-gui> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1593812>
<cherylj> wallyworld: sure
<mup> Bug # changed: 811226, 814974, 906008, 1195187
<alexisb> wallyworld, thank you
<wallyworld> alexisb: no need to thank me, just fixing something i broke :-(
<alexisb> heh well, I was being nice ;)
<alexisb> given it is your saturday and all
<mup> Bug # changed: 1187803, 1188167, 1193430, 1194880
<mup> Bug # changed: 1178770, 1182508, 1183571, 1186264
<mup> Bug # changed: 1168154, 1169588, 1176961, 1178306, 1178314
<wallyworld> cherylj: http://reviews.vapour.ws/r/5100/
#juju-dev 2016-06-18
<cherylj> wallyworld: I find it a little confusing that the controllerUUID is effectively passed into environs/bootstrap.Bootstrap two ways.  One is explicit as a Param, and one is through the environ.Config
<cherylj> wallyworld: and the Bootstrap function validates the controllerUUID in the environ.Config
<cherylj> but doesn't look at the one passed in the BootstrapParams
<cherylj> wallyworld: should environs/bootstrap.Bootstrap verify that they're equal and non empty?
<wallyworld> cherylj: it's a little messed up as we transition to passing the uuid explicitly rather than in config, i'll see if i can do some more work to extract it
<cherylj> wallyworld: ok, then my comment would be to perform the non-empty check in Bootstrap on the BootstrapParam.ControllerUUID
<cherylj> rather than what's passed into the environ.Config
<cherylj> (or both as it's transitioning)
<wallyworld> will do, i'll see if i can now even remove it from config
<cherylj> k, sounds good.
<cherylj> wallyworld: it also looks like you had a leftover comment in controller/modelmanager/createmodel.go
<wallyworld> cherylj: i just looked again, that stuff checking for it in config should have been removed, it was a carry over from the old way
<cherylj> ok
<wallyworld> cherylj: changes pushed. there's still more todo - we don't want to have controller uuid on config attrs at all, but it's like untangling spaghetti
<cherylj> thanks, wallyworld.  I'm verifying it on azure right now
<wallyworld> awesome
<mup> Bug #1157022 changed: environs/openstack: openstack.Instance should implement Stringer <logging> <openstack-provider> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1157022>
<mup> Bug #1164220 changed: environs.MongoURL sould check that the fallback option actually exists <tech-debt> <juju-core:Invalid> <https://launchpad.net/bugs/1164220>
<mup> Bug #1155276 changed: support ~/.juju/environments.d <improvement> <juju-core:Fix Released> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1155276>
<wallyworld> cherylj: thanks! will land now
<mup> Bug # changed: 1558223, 1561566, 1569969, 1570175
<mup> Bug #1593978 opened: failed to bootstrap <juju-core:New> <https://launchpad.net/bugs/1593978>
<mup> Bug #1593996 opened: unit agent tests cannot run uniter <juju-core:Triaged> <https://launchpad.net/bugs/1593996>
<mup> Bug #1593996 changed: unit agent tests cannot run uniter <juju-core:Invalid> <https://launchpad.net/bugs/1593996>
<mup> Bug #1593978 changed: failed to bootstrap <juju-core:New> <https://launchpad.net/bugs/1593978>
#juju-dev 2016-06-19
<thumper> what is the godeps flag to look at the current deps?
<thumper> nm
 * thumper is running all the tests
<thumper> exorcise complete in master
<thumper> exorcism
<thumper> 34 files, +370 −691
<thumper> not a bad change
<thumper> http://reviews.vapour.ws/r/5103/
<thumper> hmm
<thumper> I spotted a few things myself in that review
 * thumper goes to tweak...
 * thumper runs up to physio
<thumper> bbs
<thumper> wallyworld: ping
<thumper> davecheney: review above removes fslock from master
 * thumper starts on 1.25 backport
<wallyworld> thumper: hey
<menn0> thumper: this adds access to the minion reports to the migrationmaster apiserver facade: http://reviews.vapour.ws/r/5105/
<thumper> wallyworld: you are reviewer, and I have a review :)
<wallyworld> so i see
 * thumper sighs...
<davecheney> thumper: ta
<wallyworld> thumper: in RunTestSuite, why pass in a wall clock instead of a mock? is the thinking that the delay is "only" 250ms? why not use testing.ShortDelay?
<thumper> wallyworld: I was trying to reduce the work to get the code in
<thumper> really it isn't going to matter because the lock isn't held
<thumper> so the delay never occurs
<thumper> could change to testing.ShortDelay
<thumper> should have zero impact
<wallyworld> yes please
<wallyworld> for consistency
<wallyworld> with out tests around the place
<wallyworld> *other
<thumper> add comments
<thumper> I'll go through them
<wallyworld> if lock is never held, isn't that a gap in the tests?
<thumper> there are some places where we test for held locks
<thumper> in the important places
<thumper> well, most places that actually use the lock
<wallyworld> ok
<davecheney> thumper: looks pretty close to me
<thumper> wallyworld: while you finish looking, I'll head out to lunch and address both davecheney's comments and yours after lunch
<wallyworld> ok
#juju-dev 2017-06-12
<wallyworld> axw: running a minute late, otp with tim
<axw> okey dokey
<axw> babbageclunk: I think you're right, we should invest in a perf testing scaffold
<axw> babbageclunk: I'm thinking we could just write something that would add a bunch of units and spread them over a few lxd containers
<axw> just something that'll cause a bunch of connections and load on mongo
<axw> we could do microtests, but the closer to reality the better
<babbageclunk> axw: yeah, makes sense.
<axw> wallyworld: BTW these might be useful: https://prometheus.io/docs/instrumenting/writing_exporters/, https://prometheus.io/docs/practices/instrumentation/, https://prometheus.io/docs/practices/naming/
<wallyworld> thanks
<blahdeblah> Anyone able to explain why I can't deploy to azure?  http://pastebin.ubuntu.com/24838522/
<blahdeblah> s/deploy to/create a new model on/
<blahdeblah> If I retry with just the cloud name, I get: http://pastebin.ubuntu.com/24838529/
<jam> wallyworld: are we meeting today? I know Tim is traveling
<wallyworld> jam: we could, debrief on next steps on site perhaps
<jam> see you in ~30 then
<babbageclunk> hey jam - can I pick your brains about bug 1696509 quickly?
<mup> Bug #1696509: status-history-pruner fails under load <performance> <pruning> <statuseshistory> <juju:In Progress by 2-xtian> <https://launchpad.net/bugs/1696509>
<wallyworld> blahdeblah: a controller can only support one cloud
<wallyworld> looks like you are trying to add an azure model to an openstack controller
<blahdeblah> wallyworld: orly?
<wallyworld> since forever
<blahdeblah> Wow - OK; never realised that was a limitation
<blahdeblah> thanks
<wallyworld> jaas of course is different
<blahdeblah> o.O
<wallyworld> in juju 1, cloud and creds were part of environments.yaml
<wallyworld> and only single model
<blahdeblah> this is not juju 1
<wallyworld> right
<wallyworld> just giving you background
<blahdeblah> ah, ok
<wallyworld> we evolved to support multi-model, but still only on one cloud
<wallyworld> and multi-creds also
<blahdeblah> I might log a bug about the error message it gives, then
<blahdeblah> Because it gives the impression you can just throw creds, give it the cloud, and expect it to work
<jam> babbageclunk: i'm happy to talk about it, but I haven't really started my day yet
<babbageclunk> jam: oh sorry - let's do it after you've started.
<babbageclunk> jam: I'm just going to try something anyway
<jam> babbageclunk: my quick thought on it was to say "I need to delete everything before X, what is approximately 10,000 records should be Y, lets slowly increment up to X"
<jam> instead of issuing a single db delete that will take 3hrs to complete, and gives no feedback in the meantime
<menn0> axw, wallyworld: you guys have 10 mins to discuss mgo session handling?
<wallyworld> ok
<jam> hey menn0, good to see you around
<babbageclunk> jam: sure - I have something that gets ids back and issues a delete for those, but rereading you're suggesting doing it by time instead.
<menn0> jam: howdy jam
<jam> babbageclunk: more that I don't think we actually want to grab 1M ids and then slowly walk through them
<jam> babbageclunk: we want to be doing big batch deletes, just not everything-all-at-once
<jam> babbageclunk: if what you have doesn't slow it down a lot, then I'm fine with it
<menn0> wallyworld, axw : https://hangouts.google.com/hangouts/_/canonical.com/mgo-sessions
<babbageclunk> jam: ok cool - I'm just checking that now. It shouldn't be getting all of the ids at once.
<jam> babbageclunk: as always, it's more about "how does it handle when things are bad" vs "happy case it's great"
<wallyworld> jam: want to join https://hangouts.google.com/hangouts/_/canonical.com/mgo-sessions
 * jam screams nooooooo! and runs away and hides
<jam> weird, 'my connection was lost', rejoining
<jam> ah, 2fa kicked in
<blahdeblah> Anyone able to explain why deploy via path no longer works on 2.1.3?  Worked for me last time I tried this (2.1.2, IIRC).  http://pastebin.ubuntu.com/24838818/
<veebers> blahdeblah: I'm pretty sure that is should just be the path to the charm, so should be: juju deploy ./ntp
<blahdeblah> veebers: . is the path to the charm. If I deploy with full path, I get the same result
<veebers> blahdeblah: ah, I see
<veebers> blahdeblah: what's the 'ntp' part?
<blahdeblah> service name
<veebers> blahdeblah: ah ok, I'm totally barking up the wrong tree, looking at the error again I see "Bad Gateway" as the request error, I'm not sure about that sorry. I'm sure someone around here would have an idea though
<blahdeblah> veebers: no worries - thanks for looking
<axw> menn0: sorry I went to the shops, eating now... did you already talk?
<menn0> axw: still going
<axw> wallyworld: it looks fine to me, can you try using "juju-introspect metrics" and grep for one? maybe the UI has cached the keys
<wallyworld> ok
<wallyworld> axw: just got to deal with something here so will do it soon
<axw> wallyworld: and also, are you sure it's running your code? did you set the version to 2.2-rc2? if it is, it would have picked up the released binary if you didn't --build-agent
<axw> wallyworld: sure
<wallyworld> it's rc3
<wallyworld> and i added logging
<jam> blahdeblah: IP:17070/model/6dec752c-5d00-4947-8dff-17b66ef22833/charms?revision=0&schema=local&series=xenial: Bad Gateway
<wallyworld> axw: yeah, must have been browser caching, works from cli
<jam> blahdeblah: that isn't a "cannot deploy the local directory"
<jam> blahdeblah: that is "cannot talk to the controller to upload the local charm"
<axw> wallyworld: cool
<blahdeblah> jam: Except that I can talk to the controller just fine
<blahdeblah> If I push exactly that directory to my charmstore account, it deploys OK as well.
<jam> blahdeblah: I can't say for sure. I can say that we do a different operation on upload (instead of connecting and upgrading to a websocket, we just do a POST of the content)
<jam> 502 Bad Gateway seems odd
<jam> as it seems to indicate you have a reverse proxy inbetween you and Juju
<jam> seems 502 is for when you talk to a proxy, and the proxy forwards your request but gets a bad response from the target
<blahdeblah> a forward proxy is possible
<blahdeblah> (i.e. my local squid)
 * blahdeblah looks for errors there
<jam> both 'juju status' and 'juju deploy' will talk to 17070, but maybe the local squid tries to inspect POST in a way that it doesn't try to mess with UPGRADE to websocket
<blahdeblah> seems like https_proxy is being used for one, but not the other
<blahdeblah> anyhow, found the squid acl stopping it; hopefully it will work next time
<blahdeblah> thanks jam
<wallyworld> axw: here's that metrics PR https://github.com/juju/juju/pull/7487
<axw> wallyworld: looking
<axw> wallyworld: left a few comments
<wallyworld> ok, ta
<wallyworld> axw: also, with agents dialing the controller, it seems to me that we don't use last successful connection - we try all the addresses concurrently, first wins. and this will have an effect of picking the least busy controller as it will be likely to respond first
<wallyworld> func dialAPI(info *Info, opts0 DialOpts) (*dialResult, error) {
<axw> wallyworld: yes, but I think there's a delay before we dial the [1:] addrs?
 * axw checks
<axw> wallyworld: yep, see DialAddressInterval
<wallyworld> axw: oh right, i just read the method doc
<wallyworld> it should make that bit clear
<wallyworld> axw: i wonder what happens when a counter rolls over
<wallyworld> i guess it is so far off it won't happen in practice
<axw> wallyworld: it's 64 bit, so not going to happen
<wallyworld> well, not any time soon :-)
<axw> heat death and all that... :)
<wallyworld> axw: could you PTAL?
<axw> looking
<wallyworld> i guess i should make it a uint64
<axw> wallyworld: I meant drop connection_rate in favour of a total. no need for both is there?
<wallyworld> well, i like the idea of not needing to use any extra prometheus config to get the rate
<wallyworld> just having the raw number served seems a good thing if you are using juju-introspect
<wallyworld> you can eyeball it
<axw> wallyworld: gtg get charlotte, bbs then will finish
<wallyworld> ok
<wallyworld> axw: and another when you are back https://github.com/juju/juju/pull/7488
<axw> wallyworld: reviewed
<wallyworld> ta. i was thinking clients would benefit too, but maybe not
<wallyworld> we can restrict for now
<wallyworld> s/clients/CLI
<axw> wallyworld: the controller *might* benefit, but the clients might also be slowed down by connecting to things that aren't known to previously be good
<axw> I think it makes sense to optimise for UX for the client
<axw> CLI
<wallyworld> axw: maybe this needs to be a config option
<axw> wallyworld: what kind of config option?
<wallyworld> in agent.conf or something
<wallyworld> i guess only apiservers have that though
<axw> wallyworld: I don't think it's necessary, it's only going to slow down connections a little bit
<axw> if at all
<wallyworld> ok, i'll move to workers then
<wallyworld> axw: the shuffle code is now in apicaller
<axw> looking
<wallyworld> axw: agree about rand, but was too big a change to set it all up in the manifolds
<wallyworld> i think it will be ok for now
<axw> wallyworld: yeah, no big deal if it's not particularly random anyway
<wallyworld> yep
<wallyworld> axw: going afk for a bit for dinner. if your findings today are worth sharing, can you email tim, me, john etc? even just to add some extra context to what tim might see on site tomorrow
<axw> wallyworld: I don't have anything much to share today
<wallyworld> ok, np
<axw> hard slog through unpicking watcher code atm
<wallyworld> bbiab
<axw> later
<thumper> o/
<pogi> is there any juju hello world for local provider
<wpk> charm?
<wpk> I'd go with mediawiki-single
<rick_h> pogi: just juju deploy ubuntu
<pogi> Can I use Ansible base with local provider any example
<pogi> ansible hook
<babbageclunk> menn0: ping?
 * babbageclunk goes to the doctor
#juju-dev 2017-06-13
<axw> wallyworld: did I miss anything this morning?
<wallyworld> axw: not really, i can fill you in if you wanted a quick HO
<axw> wallyworld: ok, see you at standup
<wallyworld> with site news etc
<menn0> babbageclunk: ping?
<babbageclunk> menn0: hey
<menn0> babbageclunk: so I think your hacked mgoprune takes care of a case which mgopurge doesn't handle (as you found)
<babbageclunk> menn0: ok
<babbageclunk> menn0: I mean, it just reports them, doesn't do any cleanup
<menn0> babbageclunk: you were seeing Insert ops in the txns collection which didn't have a matching entry in stash - is that right?
<babbageclunk> menn0: they had a stash record, but that stash didn't have the txnid_nonce in txn-queue
<menn0> hmmm ok
<babbageclunk> menn0: It was raising the error from flusher.go:475
<menn0> babbageclunk: the existing PurgeMissing function goes through the stash and looks for txn-queue entries there which don't have a matching txns doc
<menn0> babbageclunk: but this is the other way around
<babbageclunk> menn0: right
<menn0> babbageclunk: I guess we should extend mgopurge to handle this
<babbageclunk> menn0: yeah, that was my thinking
<menn0> babbageclunk: how did you fix the issue?
<babbageclunk> menn0: The bad transactions were all leadership lease updates (I think), so blahdeblah thought it made more sense to remove them.
<babbageclunk> menn0: so we removed the txn id from the txn-queue of the *other* record and then removed the txn record.
<babbageclunk> i.e. all of the txns were an update to one record and an insert.
<babbageclunk> menn0: I thought the other way to fix it was to insert the txn id into the start of the stash record that was missing it. That's probably the more general fix.
<babbageclunk> (Although we didn't try it, so I guess we can't be sure it's actually rignt.)
<menn0> babbageclunk: that might work but it seems riskier... I wonder if it's safer to just remove the badness
<babbageclunk> menn0: that was the thinking in this case, although it might not only happen with this specific type of (pretty disposable) transaction
<menn0> babbageclunk: true... i'll have a play around
<blahdeblah> babbageclunk: blame me, why don't you? :-P
<babbageclunk> menn0: maybe the fact that we've only seen it with leadership-related transactions means it's something to do with the way we're handling them? Or maybe it's just that we have lots of them, so that's where it turns up.
<menn0> I think it's the latter
<menn0> babbageclunk: but it would be great to understand why these problems at all
<menn0> happen at all
<menn0> babbageclunk: we've never really managed to figure that out... it's either mongodb screwing up or a bug in mgo
<babbageclunk> blahdeblah: :) I was just trying to lend the decision some weight by adding your name to it!
<blahdeblah> I'm not fat; my chest just slipped a bit
<axw> wallyworld: I've found some nice low hanging fruit for reducing the number of active sessions. every agent (machine & unit) has a session open for the lifetime of the agent's connection, for logging to the db
<wallyworld> yeah, i think it's 2 per agent?
<axw> wallyworld: I'm thinking we might want to apply jam's idea of aggregating writes into a bulk insertion for presence here as well
<axw> wallyworld: just 1 AFAICS
<wallyworld> sgtm
<babbageclunk> wallyworld, axw: seen the message about mongo replication in #juju@irc.canonical.com? Any ideas?
<axw> babbageclunk: I'm afraid not
<axw> babbageclunk: I mean I've seen it now, I don't have any good ideas
<jam> axw: wallyworld: "reducing the number of active sessions", is that closing the session while its active, or doing all of the rights by Cloning the one session per agent?
<jam> axw: wallyworld: I'd also say aggregating presence should certainly be 2.2.1 not delay 2.2rc3/2.2.0
<wallyworld> no, it won't delay
<wallyworld> it will be 2.2.1
<jam> I'm off today for my wife's birthday, so I won't be around much
<wallyworld> have fun
<axw> jam: if we aggregate, then either Copy() for the lifetime of the aggregator, or Copy() just before doing a bulk insert, and close straight after. I'm not sure which is best yet
<wallyworld> site looks ok atm
<jam> axw: I'd copy before insert, given if we bulk updating, we shouldn't have a lot of those active.
<jam> axw: I was trying to find a nice number for the frequency, given its a 30s window, it feels like we could batch all the way up to 1s intervals
 * jam goes to take the dog out
<axw> jam: that sounds reasonable. it could be made adaptive too, i.e. insert more frequently if the incoming rate is slower
 * axw waves
<jam> axw: the other option is that we just aggregate whatever we get until the last update finishes
<jam> so we only have 1 presence-based write at any time
<jam> and just splat down whatever updates we've gotten since then
 * axw nods
<anastasiamac> menn0: standup?
<axw> wallyworld: do you know if it's ok for us to merge things like https://github.com/juju/docs/pull/1910 into master? or should we wait for evilnick or someone else on the docs team to merge?
<axw> not sure if there's some special process, don't want to bork things
<wallyworld> axw: yeah, not sure. for something like that it would be nice to merge
<axw> wallyworld: ok. burton, I'll merge and we can ask for forgiveness... :)
<burton-aus> axw wallyworld GREAT!
<axw> wallyworld: no rush since it won't be landing until >2.2, but https://github.com/juju/juju/pull/7496 is a prereq for a follow-up db logger session reduction PR
<wpk>  netpln--help
<wallyworld> axw: just getting dinner, will look after
<wpk> wallyworld: btw, I got your message, thank you, I'm still looking for a hotel in Brisbane (but there are rooms available, I'll book something in the next few days)
<wallyworld> wpk: great, let me know if you need any help etc. i've booked you an extra 2 nights at the venue and then monday night will be in Brisbane
<wpk> Is there a way to check if goyaml.Unmarshal processed everything?
<wpk> I see an error returned if there's a field of the wrong type, but what about a field that is in the yaml but not in the structure?
<mup> Bug #1697664 opened: Juju 2.X using MAAS, juju status hangs after juju controller reboot <juju-core:New> <https://launchpad.net/bugs/1697664>
<wpk> anyone familiar with yaml.v2 ?
<stub> wpk: Fields in the yaml and not in the structure get dropped. I think if you don't want that, you need to unmarshal to a mapping and handle it yourself.
<wpk> stub: I added an UnmarshalStrict method and PRed
<wpk> Other thing is that yamllint says that a list marshaled by goyaml is badly indented
<babbageclunk> What's the best way to generate a lot of status updates for a model? Can I do it with juju run?
<babbageclunk> Or do we have a charm that will do it?
<babbageclunk> Asking for a friend.
<babbageclunk> wallyworld, menn0: ^
<wallyworld> babbageclunk: you mean the hook execution?
<babbageclunk> wallyworld: Oh, will any hook execution generate updates? So I can just use peer-xplod?
<wallyworld> babbageclunk: there needs to be an actual implementation of that hook to then write values
<wallyworld> and juju only runs the hook once every 5 minutes
<wallyworld> or are you talking about unit status history log?
<babbageclunk> wallyworld: Hmm, that'll take too long. I guess I should make a charm that just sits in a loop updating status?
<wallyworld> rather than unit/machine status values
<babbageclunk> wallyworld: For the pruner it doesn't matter - just want to generate megs of updates in status history, right?
<wallyworld> oh, i thought you were talking about status values
<wallyworld> "status updates"
<wallyworld> you can always hack a charm as you suggest
<wallyworld> or use juju run
<babbageclunk> <squints> I don't understand the difference between status values ands status updates.
<babbageclunk> Oh, you mean the update status hook.
<wallyworld> sorry, my brain was differentiating between what goes into the status collection vs status history collection
<wallyworld> it's all very confusing, similar terminology
<babbageclunk> Gotcha, sorry. Yeah, I want to fill up the history so that I can QA my pruning work.
<wallyworld> you want to test history pruning
<babbageclunk> I'm not really asking for a friend, that was a joke. :)
<wallyworld> lol
<wallyworld> i'd just do a juju run inside a bash loop?
<wallyworld> you can execute the status-set hook tool inside juju run
<babbageclunk> Ah, awesome - playing with that now - thanks!
<menn0> babbageclunk: easy review please: https://github.com/juju/txn/pull/37
<babbageclunk> menn0: looking
<babbageclunk> menn0: Approved
<menn0> babbageclunk: cheers
#juju-dev 2017-06-14
<menn0> babbageclunk: another easy one: https://github.com/juju/juju/pull/7499
<babbageclunk> menn0: approved
<menn0> babbageclunk: thanks. develop one on its way.
<babbageclunk> menn0: I don't think you need that reviewed though.
<menn0> babbageclunk: true
<axw> wallyworld: doing the sort/check in SetAPIHostPorts should be fine, we don't seem to care about the order in the worker, so shouldn't there either
<wallyworld> axw: excellent, thanks for checking
<babbageclunk> menn0: can you please review https://github.com/juju/juju/pull/7501?
<menn0> will do
<babbageclunk> thanks!
 * babbageclunk goes for a run, anyway
<babbageclunk> jam: ping?
<jam> babbageclunk: in standup will ping when done
<babbageclunk> jam: cool cool
<babbageclunk> menn0: ping?
<menn0> babbageclunk: otp
 * babbageclunk sulks
<jam> babbageclunk: what's up
<jam> babbageclunk: we have the same standup :)
<babbageclunk> jam: yeah, sorry - forgot!
<babbageclunk> jam: Here's my change to the status history pruning, if you want to take a look: https://github.com/juju/juju/pull/7501
<jam> babbageclunk: so my concern with the 'while true' stuff, is you probably aren't getting close to actually having 4GB of history
<jam> and that's the point where we need to watch out for how it operates.
<jam> babbageclunk: did you do any big scale testing to know that it performs decently ?
<babbageclunk> jam: I tried it with 2,000,000 records in unit tests, but not any more than that.
<jam> and/or analysis of how much memory we store, how long it takes to get the list of things to be deleted
<babbageclunk> jam: with those it was getting through ~400k rows in each 15s chunk
<jam> babbageclunk: where was this running?
<babbageclunk> jam: but you're right, checking how long the initial query takes is something I'll do
<jam> babbageclunk: yeah, one of the particular concerns is just the 'grab all the ids' never returning before getting killed
<babbageclunk> on my nice fast laptop with ssd, so I'm not sure how it'll scale.
<jam> babbageclunk: ah, I guess you are walking the iterator and deleting, right ?
<jam> so it doesn't have to read all 2M before it starts removing stuff
<babbageclunk> jam: *In theory* it should be streaming the query, but that's definitely something I should confirm for really sure.
<babbageclunk> yup
<jam> babbageclunk: I've seen collection.count() fail
<jam> in some of the logs
<jam> and *that* should be much cheaper
<babbageclunk> jam: yeah, that bit's problematic anyway - the number we get back is only a rough estimate and we frequently exit the loop before deleting all of them (because the size drops below threshold).
<babbageclunk> jam: maybe there's a more stats-y way to get a rough row count (since the scale is really the information we need)
<jam> babbageclunk: so coll.Count() should just be reading the stats on the collection
<jam> as long as we aren't doing query.Count()
<jam> babbageclunk: and it may be that if something like Count() is failing, there just really isn't anything we can do
<babbageclunk> jam: yeah, true
<jam> cause we wouldn't be able to get useful work done
<jam> babbageclunk: so *if* you're looking at this, is there any chance you could try a perf test of doing Bulk().Remove()*1000, vs coll.RemoveAll({_id: $in{1000}}) ?
<babbageclunk> jam: sorry, I was a bit unclear - the count is probably exact but our calculation of how many rows need to be deleted is approximate.
<jam> babbageclunk: no I understood that part, and I think its still worthwhile from a "ok, you've taken 10min, are you going to take another 30s, or another 3hrs"
<babbageclunk> jam: Oh yes - I did that. The bulk call is a bit faster at a batch size of 1000.
<jam> babbageclunk: k. I have a very strong feeling it is much slower on mongo 2.4 (cause it doesn't support actual pipelined operations, so mgo fakes it)
<jam> which *may* impact us on Trusty, where I'm told we're still using 2.4, but I can live with that.
<babbageclunk> jam: RemoveAll is faster with a batch size of 10000, but bulk doesn't support more than 1000
<jam> babbageclunk: do you have numbers?
<babbageclunk> jam: It's easy for me to change back to using RemoveAll - it's all in one commit.
<jam> as in 10% difference, 2x faster, ?
<babbageclunk> 25% faster
<jam> RemoveAll(10k) is 25% faster than Bulk(1k)
<jam> ?
<babbageclunk> yup
<jam> babbageclunk: and RemoveAll(1k) vs Bulk(1k) ?
<babbageclunk> I was getting 400k / 15s block for bulk vs 490k / 15s for RemoveAll(10k)
<babbageclunk> I can't remember the number for RemoveAll(1k) - there wasn't much difference between Bulk and RemoveAll
 * babbageclunk checks scrollback
<jam> babbageclunk: surprisingly historicalStatusDoc also doesn't have any ,omitempty fields
<jam> babbageclunk: can you create a bug against 2.3 about adding them?
<babbageclunk> jam: ok. I'm only selecting ids in this case though, so hopefully that wouldn't change it.
<jam> only one I kind of care about is StatusData as it is very likely to be empty and thats just bytes on the docs we don't need.
<jam> babbageclunk: not about anything you're doing
<babbageclunk> ok
<jam> babbageclunk: its about "oh hey, this isn't correct"
<babbageclunk> sure :)
<babbageclunk> It's like when you lift a rock and see all the creepy crawlies
<jam> babbageclunk: its a change that I'm not 100% comfortable just landing in a 2.2.* series, but also low hanging fruit for a 2.3
<babbageclunk> Yeah, makes sense
<jam> babbageclunk: every status history doc is at least 52 bytes long just from the keyword fields
<jam> given we have Millions of them, we probably should also consider being more frugal
<jam> babbageclunk: so just a general "we should re-evaluate the fields in statuses-history because we are being wasteful with size of a doc we have lots of"
<jam> babbageclunk: my gut (and I'd like menn0 to have a say here), is to go with RemoveAll(10k), because there are fewer total moving parts, and it will do better on 2.4 anyway
<jam> babbageclunk: the other thing to compare against is what is the total time when it was a single query?
<babbageclunk> I guess we can't really shorten the stored field names at this point either.
<jam> as in, RemoveAll(t < X) => 500k/15s average time
<jam> babbageclunk: well that we can do with an upgrade step
<jam> *could*
<wallyworld> axw: i'm confused. somethings use a *Macaroon. other times we pass around a [][]*Macaroon (in fact []macaroon.Slice). do you know if just storing a single macaroon for cmr auth will be sufficient?
<jam> babbageclunk: or is RemoveAll(t < X) 1M/15s
<jam> babbageclunk: do you have that number?
<jam> I'm guessing it may still be worth it to give feedback and be making concrete progress
<babbageclunk> No, sorry, haven't tested that.
<jam> babbageclunk: ok, I'd like you to do that comparison, just to have some information about whether we're making a big impact or not.
<babbageclunk> Would be good to know how much the incremental processing is costing.
<babbageclunk> Ok, I'll compare to that.
<jam> babbageclunk: my experience on Pruning is that Read is *way* cheaper than Remove
<jam> babbageclunk: as in, PruneAll takes seconds to figure out what to do and minutes to remove them
<babbageclunk> Right
<jam> babbageclunk: but I'd like confirmation here
<jam> babbageclunk: also, make sure that you're doing the prunes while the charms are firing
<jam> so that there are active inserts while we're removing
<babbageclunk> Yup
<babbageclunk> jam: ok - I have to go help feed the kids before I get in trouble.
<babbageclunk> jam: Thanks though - I'll compare those.
<jam> babbageclunk: np, I approved the PR conditional on the testing
<babbageclunk> jam: awesome
<axw> wallyworld: sorry, was riding. uhm. IIRC, you pass around a collection if you'll need to discharge. I think in your case you only need to pass around one
<wallyworld> yeah, that's all i was counting on doing
<menn0> jam: did you want to discuss mgo/txn changes?
<jam> menn0: indeed
<menn0> jam: hangout?
<jam> menn0: https://hangouts.google.com/hangouts/_/canonical.com/john-menno?authuser=1
<menn0> jam: sorry, having auth issues
<menn0> bear with me
<jam> menn0: np, I thought I was having the problems, hangouts doesn't like me all the time
<menn0> jam: i'm in the hangout now
<jam> menn0: I'm finally there, are you still connecting?
<menn0> jam: i've been in it for a while
<jam> axw: reviewed your log buffering
<rick_h> hml: ping
<hml> rich_h: hi
<hml> or rick_h even.  :-) hi
#juju-dev 2017-06-15
<rick_h> hml: hi, I just wanted to ping because I'm attempting to rain on a parade I guess.
<hml> rick_h: :-(
<rick_h> hml: and wanted to make sure my comment was coherent and make myself available if there were any questions/etc but please don't land https://github.com/juju/juju/pull/7449
<hml> rick_h: okay - i'll give it a look - we wanted to get something out there for feedback - we expected input - you were on my list to ping.
<rick_h> hml: k, I think the intent is <3 but the way the command is put together is a bad precedent imo
<hml> rick_h: sounds like a chat is a good way to go.  do you have some time now? or tomorrow?
<rick_h> hml: yes I can now
<rick_h> Let me get my headphones
<rick_h> hml: https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1 when you want
<axw> wallyworld: do you know where I can find peer-xplod? jam's charm says unauthorized for me
<axw> wallyworld: ISTR you used it recently
<wallyworld> axw: yeah, i copied the source locally
<wallyworld> axw: i got the source from the juju acceptance tests
<wallyworld> there's a copy of peer-xplod in there that is up to date
<axw> wallyworld: cool, thanks
<wallyworld> menn0: not sure how busy you are - it's ok if you won't get a chance to look at my recent cmr pr today - i'll bug someone else tomorrow. just need to know so that i can plan next steps
<menn0> wallyworld: i'm in calls and usual household crazy period at the moment so am unlikely to be able to review today
<menn0> wallyworld: I could do first thing tomorrow though
<wallyworld> np, all good
<wallyworld> i'll branch off that branch and continue work
<menn0> wallyworld: email me with the link if you want me to take a look
<wallyworld> will do ta
<wallyworld> https://github.com/juju/juju/pull/7504
<wallyworld> i asked you because we discussed a bit yesterday
<wallyworld> the new controller details collection
<menn0> wallyworld: cool cool
<menn0> jam: i'm likely to be a bit late to the standup. have to pick up a child.
<jam> menn0: understood
<axw> wallyworld jam: I've just added some data to https://github.com/juju/juju/pull/7502#issuecomment-308411294, going for a ride - I'll land when I get back if you're happy
<wallyworld> ok
<wallyworld> looking
<mup> Bug #1697175 changed: juju 2.1.2 trusty node put into recovery mode <juju-core:New> <https://launchpad.net/bugs/1697175>
<mup> Bug #1697175 opened: juju 2.1.2 trusty node put into recovery mode <juju-core:New> <https://launchpad.net/bugs/1697175>
<rogpeppe> in case anyone's around, here's a PR that provides a drop-in replacement for context.WithDeadline that uses mockable clocks: https://github.com/juju/utils/pull/282
<tvansteenburgh> thumper: wallyworld: can either of you clarify what, if any, logging changes were made in 2.2? i noticed yesterday, using a 2.2 client, that i was not getting charm logging by default like i used to. is that expected? if it is, we really need to announce that and explain how to turn it back on, because w/o those logs it's much harder to debug charm problems
<tvansteenburgh> this was on a model that i created on an aws jaas controller
<rogpeppe> this PR makes api.Open time out more reliably: https://github.com/juju/juju/pull/7507
<anastasiamac> axw: any objections? https://github.com/juju/juju/pull/7508
<menn0> wallyworld, axw: do you happen to know the prometheus collection interval?
<menn0> typically
<wallyworld_> menn0: 15s i think by default
<wallyworld_> menn0: it depends on the yaml config
<menn0> ok
<wallyworld_> prometheus will poll juju
<menn0> wallyworld_: b/c the state metrics collector is written in a fairly inefficient way
<menn0> every poll it calls ForModel for every model
<menn0> that's going to hurt
<wallyworld_> yeah
<wallyworld_> menn0: we have a todo to fix that, maybe it's a mental todo
<menn0> wallyworld_: ok cool
<menn0> wallyworld_: I'm going through where ForModel is used, looking into problems axino is seeing
<menn0> I've found one bad and unnecessary use in the allwatcher
<menn0> easy to fix too so i'll do that today
<wallyworld_> menn0: yeah, there's no reason to use ForModel anymore - state pool should be it. we just haven't updated everything yet. we've been doing it as a drive-by
<wallyworld_> as we find things
<menn0> wallyworld_: the reason i'm looking is that axino is seeing query storms on txns.log
<menn0> wallyworld_: this is when the txn watcher resyncs
<menn0> he's seeing 300 at a time
<menn0> there shouldn't be that many watcher instances
<menn0> possible cause is lots of new State creation for some reason
<menn0> hence me looking at ForModel calls
<menn0> wallyworld_: I also have an idea for getting rid of all the txn watchers so there's just one
<menn0> wallyworld_: without major code churn (I think)
<menn0> wallyworld_: I'd like to chat about that later
<menn0> wallyworld_: does your PR still need review?
<wallyworld_> menn0: yeah it does :-)
<wallyworld_> menn0: axw is also looking at txn watchers, so let's chat for sure
<axw> anastasiamac: I fear that exposing that charm method will lead to misuse; it's just a convenience. maybe you can show how you would use it?
<axw> wallyworld_: can you please take a look at https://github.com/juju/juju/pull/7505 when you have a few minutes?
<wallyworld_> sure
<axw> menn0: I'd love to know how you plan to have just one txn watcher. I spent some time on it, it got messy... but I was going for a more drastic refactor too
<wallyworld_> menn0: standup?
<axw> menn0: btw I started looking at using StatePool for the statemetrics code yesterday, happy to continue if you have other things you'd rather do
#juju-dev 2017-06-16
<menn0> axw: I wasn't going to tackle the statemetrics bit so if you've already started on that then great
<axw> menn0 wallyworld_: I'm back if you want to chat
<wallyworld_> meeting in today's standup HO?
<menn0> axw, wallyworld_: sounds good
<axw> wallyworld_: what's a sensible upper bound on the flush interval? 10 minutes?
<wallyworld_> i was thinking more like seconds
<wallyworld_> since we need to not lose errors if agent is killed
<wallyworld_> and actually, we should maybe flush immediately if there's an error
<wallyworld_> or a warning or critical
<axw> wallyworld_: it'll flush when the agent shuts down
<wallyworld_> what about when it is killed
<axw> and it'll go to logsink.log immediately anyway
<axw> wallyworld_: what do you mean by killed?
<wallyworld_> hard stopped via kill -9 or something
<wallyworld_> do we catch signals
<axw> wallyworld_: I can't recall if we catch the signal, but we should if we don't
<wallyworld_> yep, and systemd restart etc
<axw> wallyworld_: SIGTERM should lead to graceful shutdown, which would involve flushing
<wallyworld_> ok, all that aside, i think 5 or 10s max for flush interval, IMO
<axw> wallyworld_: I'll make it 10 seconds. if we get to 10 seconds then you've basically got no logging happening, and you shouldn't have the flush interval that high anyway
<wallyworld_> exactly, hence the desire to not allow the user to do something dumb by mistake
<axw> menn0: on the allwatcher frame size thread: doesn't the first response have to have all the records? i.e. like the other watchers, the first response gives you the initial state
<menn0> axw: gah, good point
<menn0> axw: maybe there needs to be some special handling for the allwatcher
<axw> menn0: yeah, it could get pretty huge
<axw> menn0: perhaps some kind of continuation flag
<menn0> axw: exactly
<menn0> wallyworld_: ping
<wallyworld_> hey
<menn0> wallyworld_: I have some questions about your PR. quick hangout?
<wallyworld_> sure
<menn0> wallyworld_: standup again?
<wallyworld_> yep
<wallyworld_> menn0: with where the interface lives, it may be used multiple places so saying it needs to live where it is used doesn't always work. For state, because it's a bit of a ball of mud, we to date have tended to include interfaces for entity foo in the state/foo.go file. For other things like Environ, we have a separate interfaces.go file, but if we were to do that for state as it exists now, it wouldn't be pretty
<menn0> wallyworld_: understood but for this interface it seems like it might be better to be defined at point of use.
<menn0> wallyworld_: happy to defer to your judgement
<wallyworld_> menn0: i guess i'm saying "point of use" could be many places not just one place in the code
<menn0> wallyworld_: fair enough
<wallyworld_> but the other place, i should return a concrete type though
<anastasiamac> axw: ping
<rogpeppe> axw, anastasiamac: looking for a second review of https://github.com/juju/juju/pull/7507 if you fancy it...
<rogpeppe> wallyworld_: ^
<axw> rogpeppe: sure
<rogpeppe> axw: ta!
<axw> rogpeppe: ah, I was wondering what the context of your tweet was :)
<rogpeppe> axw: :)
<axw> rogpeppe: LGTM, thanks!
<rogpeppe> git diff
<rogpeppe> axw: tyvm
<rogpeppe> axw: another (tiny) one - I just noticed I missed out a return statement in the ContextWithDeadline code https://github.com/juju/utils/pull/283
<axw> rogpeppe: LGTM
<rogpeppe> axw: thanks
#juju-dev 2018-06-13
<mup> Bug #1776690 opened: upgrade-charm with --switch cs:charmname does not auto-detect current series of charm <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1776690>
<mup> Bug #1776690 changed: upgrade-charm with --switch cs:charmname does not auto-detect current series of charm <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1776690>
#juju-dev 2018-06-14
<mup> Bug #1776690 changed: upgrade-charm with --switch cs:charmname does not auto-detect current series of charm <canonical-bootstack> <juju-core:Won't Fix> <https://launchpad.net/bugs/1776690>
#juju-dev 2018-06-15
<gman> hi
<gman> I seem to be having an issue with juju etcd 3.2.9; is there a way to install a particular version like etcd 2.3.8?
