[00:27] <davecheney> sinzui: I am having trouble enabling the 'proposed' repository for trusty
[00:28] <davecheney> oh no
[00:28] <davecheney> wait
[00:28] <davecheney> yeah, that got it
[02:14] <wallyworld> http://www.windowsazure.com/en-us/documentation/articles/manage-availability-virtual-machines/
[02:27] <thumper> wallyworld, axw, waigani: https://docs.google.com/a/canonical.com/document/d/1PJiqBiAndIKvT6WYT1fG4rBX2b4sF90j3m7GTLNoZiE/edit
[02:32] <bigjools> I upgraded to trusty and the environment I started under saucy doesn't look great now...
[02:32] <bigjools> http://paste.ubuntu.com/6795052/
[02:32] <bigjools> what's going on?
[02:33] <thumper> bigjools: you probably need to upgrade the environment
[02:33] <thumper> bigjools: but the short answer is "we hate you"
[02:33] <bigjools> thumper: I knew *that* already
[02:34] <wallyworld> bigjools: first mistake - upgrading to alpha software for production purposes
[02:34] <wallyworld> :-P
[02:34] <bigjools> who said it was production purposes? :)
[02:35] <bigjools> anyway you're missing all the fun weather over here
[02:37] <wallyworld> hot?
[02:37] <bigjools> heat index is nearly 47
[02:37] <wallyworld> ah, so cool then
[02:37] <bigjools> your missus will be asking you for a/c when you get back :)
[02:37] <wallyworld> amongst other things
[02:39] <bigjools> thumper: how do I upgrade an environment?
[02:39] <thumper> bigjools: juju upgrade ??
[02:39] <bigjools> you may point me at TFM
[02:39] <bigjools> upgrade-juju?
[02:40] <wallyworld> think so
[02:41] <wallyworld> if it's upgrading to a trunk release from source, you need --upload-tools IIRC
[02:41] <bigjools> I am using 1.17.0-trusty-amd64
[02:41] <bigjools> I think my existing env is 1.16
[02:41] <bigjools> so it should find tools for 1.17 without the --upload I guess?
[02:43] <bigjools> well looks like it didn't work anyway :(
[03:35] <wallyworld> thumper: https://codereview.appspot.com/55300043/
[07:28] <bigjools> folks, I deleted a service and its machine so I can redeploy it on a new series, but when trying to deploy it says the service already exists.  It doesn't appear in "juju status" though..... is this a bug?  v1.17.0.1
[07:36] <dimitern> bigjools, is this a local env?
[07:36] <bigjools> dimitern: canonistack
[07:37] <bigjools> dimitern: http://paste.ubuntu.com/6795916/
[07:38] <dimitern> bigjools, hmm that's weird
[07:38] <bigjools> dimitern: I upgraded the tools from 1.16 earlier.  I suspect that may have something to do with it
[07:38] <dimitern> bigjools, we had issues like that but with the local provider, and they were fixed since
[07:38] <dimitern> bigjools, can you perhaps do a mongo dump and file a bug please?
[07:39] <bigjools> dimitern: if you give me monkey-like instructions to do that, yes :)
[07:40] <dimitern> bigjools, :) just a sec
[07:42] <bigjools> dimitern: np.  There's a massive storm bearing down on me that is likely to cut my power/internet, so no pressure.  :)
[07:42] <dimitern> bigjools, using juju backup
[07:43] <dimitern> bigjools, it's a plugin that does that - it has --help as well
[07:43] <bigjools> where do I get it?
[07:49] <bigjools> dimitern: I have to run out for a bit, can you send me details please :)
[08:25] <fwereade> axw, am I right in thinking bigjools' problem above is that one about the env life field, that's fixed in trunk?
[08:37] <TheMue> fwereade: morning
[08:37] <TheMue> fwereade: any chance to take another look at https://codereview.appspot.com/44540043/ this morning?
[08:37] <fwereade> TheMue, heyhey
[08:37] <fwereade> TheMue, all I can say is "maybe" :/
[08:39] <TheMue> fwereade: ok, thx
[08:39] <TheMue> dimitern: after your help yesterday a deeper look at https://codereview.appspot.com/44540043/ would be appreciated too
[08:40]  * TheMue is getting closer, but so far hasn't found any watcher used via the API in cmd/juju
[08:49] <dimitern> TheMue, definitely looks better now
[08:49] <dimitern> bigjools, it's part of core - in cmd/juju/plugins
[08:51] <rogpeppe1> mornin' all
[08:59] <dimitern> rogpeppe1, morning
[08:59] <dimitern> rogpeppe1, thanks again for all the reviews yesterday
[09:00] <rogpeppe1> dimitern: np
[09:00] <rogpeppe1> dimitern: i'm looking for a review of https://codereview.appspot.com/55150043 BTW
[09:00] <dimitern> rogpeppe1, looking
[09:06] <dimitern> rogpeppe1, quick question - what's the deal with NoVote and VotingMachineIds ?
[09:07] <rogpeppe1> dimitern: that's the way that EnsureAvailability will be able to reduce the number of machines in the peer group without just removing them.
[09:08] <dimitern> rogpeppe1, and a peer group is a set of machines in a replicaset ?
[09:08] <rogpeppe1> dimitern: if a machine goes down, we don't want to just remove it from the peer group, because it may come up again
[09:08] <rogpeppe1> dimitern: yes
[09:08] <dimitern> rogpeppe1, i see
[09:08] <rogpeppe1> dimitern: not all machines in a replica set are necessarily voting
[09:08] <rogpeppe1> dimitern: it also gives us the freedom in the future to be able to deliberately bring up non-voting machines to act as backup
[09:09] <rogpeppe1> dimitern: there will also be another field associated with the machine: IsVoting - indicating the actual voting status of the machine.
[09:09] <rogpeppe1> dimitern: that will be set by the peergrouper worker
[09:10] <dimitern> rogpeppe1, IsVoting or WantVote ?
[09:10] <rogpeppe1> dimitern: IsVoting
[09:10] <dimitern> rogpeppe1, and WantVote ?
[09:10] <rogpeppe1> dimitern: that's just the negation of NoVote
[09:10] <dimitern> rogpeppe1, (looking in the code comments of Ensure..)
[09:10] <dimitern> rogpeppe1, ah, ok
[09:10] <rogpeppe1> dimitern: i didn't want to do WantsVote in the machineDoc because that would be incompatible with previous versions.
[09:11] <rogpeppe1> dimitern: which wouldn't have the field in the schema
[09:14] <dimitern> rogpeppe1, yeah, but can't it be added in a compatible way? Like the default value (when missing) is fine?
[09:15] <rogpeppe1> dimitern: that just makes the logic harder - you'd have to look for "wantsVote==true || wantsVote==nil"
[09:15] <rogpeppe1> dimitern: i think it works pretty well inverted actually
[09:15] <dimitern> rogpeppe1, i see, well this can be hidden behind a machine method, no?
[09:16] <rogpeppe1> dimitern: it is, isn't it?
[09:16] <dimitern> rogpeppe1, sorry, just got there
[09:16] <dimitern> rogpeppe1, I see now
[09:16] <rogpeppe1> dimitern: apart from in the MachineTemplate, where NoVote actually makes sense, because you actually almost always want a vote
[09:17] <rogpeppe1> dimitern: things became simpler when I changed MachineTemplate.WantsVote to MachineTemplate.NoVote
[09:17] <dimitern> rogpeppe1, yeah, i agree
[09:29] <dimitern> rogpeppe1, reviewed
[09:29] <rogpeppe1> dimitern: thanks a lot
[09:37] <jam> fwereade: ping about API version numbering
[09:37] <jam> I've been trying to respond to your email and thinking about it, but I think it comes across rambling in email form, so it might be better as a 1:1 chat
[09:44] <fwereade> jam, pong
[09:45] <jam> fwereade: hey hey
[09:45] <fwereade> jam, was I terribly unclear?
[09:47] <jam> fwereade: there is just a lot that I've thought about it.
[09:47] <jam> fwereade: specifically, what does it actually *mean* to have a global rev vs a local rev, how does stuff look like in code, etc.
[09:48] <fwereade> jam, so, I think the local revs are purely an internal tool to help us manage bloat
[09:49] <jam> fwereade: I sent my long rambly email
[09:49] <jam> fwereade: there are problems with global rev that we should probably explore as well
[09:49] <jam> namely, where /when is that sent on the wire
[09:49] <fwereade> jam, codewise, a request comes in with a version, namespace, method, params; we get the facade by looking up the namespace given the version and get an actual facade on which we can call.Method(Params)
[09:49] <jam> fwereade: codewise *today* it comes in with namespace, ??, method, params
[09:50] <jam> and we can make ?? == version
[09:50] <fwereade> jam, my suggestion earlier was that we demand it on every call
[09:50] <fwereade> jam, yeah
[09:50] <jam> It looks confusing that global version comes after namespace
[09:50] <jam> but supersedes it
[09:50] <jam> fwereade: so the next failure I explored is that we can have a relatively new client
[09:50] <jam> that still calls a really old version
[09:50] <jam> just because we never touched that code
[09:50] <jam> and then we can't actually remove compatibility
[09:51] <jam> so even though the API is 5 years old, and "Machine", "100", "Add" works just fine and is identical to "Machine", "1", "Add"
[09:51] <jam> the very newest 'juju' commandline tool still calls
[09:51] <jam> "Machine", "1", "Add"
[09:52] <fwereade> jam, I think that's the least of our problems because we can at least control that -- it's everybody else calling v1 that's the problem, surely?
[09:52] <fwereade> jam, and also that there will be old server versions that we'll have to fall back to even with new clients, at least for some time
[09:52] <jam> fwereade: we can audit for it, but it is fairly expensive to sort out what version of Juju was calling what version of the API. If we work on making the juju CLI always call "latest" versions, then you don't have 2 pieces that break independently
[09:53] <fwereade> jam, well, we *do* probably want to keep the CLI up to date with API versions, it's true, but we also need to leave fallback code in place, right?
[09:53] <jam> fwereade: so, if you have 1 global rev number, then all call sites get bumped any time you change anything, right?
[09:54] <jam> except the ones where that actually changes the implementation so we use the 'slightly older one' as we transition / (we want to update all our call sites eventually, I'm sure)
[09:54] <fwereade> jam, they could do, yes, but I don't think it's necessarily required or expected that everybody does so
[09:56] <jam> fwereade: so there is a little bit of "what do we do in Juju because we control both ends" and "what do we suggest 3rd parties do"
[09:56] <jam> if we're going with a global bump, and we can remove old ones after we've deprecated it for "long enough"
[09:56] <fwereade> jam, yeah, I think *we* should, but we can't necessarily impose it on everybody else
[09:56] <jam> then we do want some sort of push for people to go to the latest thing they could
[09:57] <jam> for example, how do people get informed that the API they are using is deprecated?
[09:58] <fwereade> jam, I'm not sure what mechanisms we have for encouraging people to update their clients (apart from public warnings that vX will be going away in N months -- and making version removal clear in release notes)
[09:59] <jam> fwereade: well, you can return something in the API that "this is deprecated" but it needs to be done in a way that is actually useful
[10:00] <jam> Users don't actually care
[10:00] <jam> its developers that need to be informed about it
[10:00] <jam> I guess you could have client versions logged, and calling a deprecated version generates an email to juju-core that client X version Y is still using a now deprecated API ? :)
[10:01] <fwereade> jam, ok, you're talking about us here? or everyone else as well?
[10:02] <jam> fwereade: I mean the juju server would notice when a client connects and calls a deprecated API and aggregate that into something useful for other people
[10:02] <jam> then you have some sort of report about "juju-1.16.5 is still very common in the wild and is making lots of calls to X", and "juju-developer 0.8 is still calling Y"
[10:03] <jam> it would be *useful* in the fact that we can then contact the developer directly and let them know we're deprecating something
[10:03] <jam> but it is highly invasive :)
[10:03] <fwereade> jam, well there is some reasonable worry about collecting that data
[10:45] <rvba> Hi guys, I was trying to bootstrap a juju env and I got this: http://paste.ubuntu.com/6796595/.  Is this a known problem?  It does not seem to happen consistently but I don't think that's the first time I've seen this.
[10:45] <jam> rvba: looking
[10:49] <jam> rvba: what version of the client are you using?
[10:49] <jam> (juju version)
[10:49] <jam> mgz: wallyworld: standup?
[10:50] <rvba> jam: apt-cache policy juju-core → 1.16.3-0ubuntu0.13.10.1
[10:50] <mgz> ta jam
[10:50] <jam> rvba: it is *possible* that this was something we fixed in 1.16.5
[10:50] <jam> rvba: there was an EOF bug that we implemented a workaround for
[10:51] <rvba> jam: ah ok, thanks. I guess I'll upgrade using the PPA and test again…
[10:52] <jam> rvba: you'll probably want a 1.16 series, rather than 1.17, I think
[10:53] <rvba> The stable PPA has 1.16.5-0ubuntu1~ubuntu13.10.1~juju1.
[10:53] <jam> rvba: great, I think devel has 1.17.0
[10:57] <rvba> jam: same problem with the new version: http://paste.ubuntu.com/6796653/
[10:57] <jam> rvba: looks like https://bugs.launchpad.net/juju-core/+bug/1239558
[10:58] <jam> but that actually claims it was only fixed in 1.17 :(
[10:58] <rvba> ah, right.
[11:04] <rvba> jam: sorry to bother you again… I'm trying to use juju on Trusty (1.17.0-0ubuntu2) and now I get: http://paste.ubuntu.com/6796670/
[11:04] <rvba> Any idea what might be wrong?
[11:05] <jam> rvba: there you need to use "juju bootstrap --upload-tools"
[11:05] <jam> we're looking for tools on streams.canonical.com
[11:05] <jam> but that hasn't been finished being set up yet
[11:05] <rvba> Ah okay, makes sense.
[11:05] <rvba> Thanks.
[11:23] <jam> mgz: do you have a chance to investigate why "juju-1.17" can't do "status" vs a 1.16 server? (Did it get fixed and I'm missing something?)
[11:25] <jam> (at least here trunk r2239 still fails)
[11:26] <mgz> jam: er, I'll recheck
[11:31] <bigjools> I am trying to bootstrap an environment using 1.17 and it insists that I am already bootstrapped but I am not.  What else is it checking? there are no instances running
[11:32] <mgz> bigjools: probably the bootstrap-verify file in your cloud storage
[11:32] <mgz> bigjools: you can just run `juju destroy-environment` to clear everything out
[11:32] <bigjools> mgz: it fails
[11:32] <bigjools> because I have no instances
[11:33] <mgz> I guess that's a provider bug then
[11:33] <bigjools> Continue [y/N]? y
[11:33] <bigjools> ERROR no instances found
[11:33] <mgz> it should always do all the work, even if some parts of it have already completed
[11:33] <bigjools> this is the openstack provider
[11:33] <mgz> can you list the control bucket externally and see what it contains?
[11:34] <bigjools> how do I do that?  sorry I know bugger all about openstack
[11:34] <mgz> eg, look in ~/.juju/environment/<right one> for the control bucket name
[11:34] <mgz> then `swift list <that name>`
[11:34] <jam> mgz: just have him delete the fixed bucket in environments.yaml ?
[11:34] <bigjools> ok
[11:34] <mgz> python-swiftclient if you don't have it
[11:34] <jam> so we generate a random one
[11:35] <mgz> jam: I'm curious if we actually have a provider bug on destroy
[11:35] <bigjools> yes it lists the files
[11:35] <bigjools> provider bug
[11:36] <mgz> so, now delete those with `swift delete` and bootstrap - if that works, please do file a bug
[11:36] <bigjools> ok cheers
[11:37] <bigjools> mgz: that fixed it indeed.  thanks for the help
[11:37]  * bigjools files a bug
[11:39] <natefinch> fwereade: when you get a chance, I'd like to talk about your comments on the EnsureMongoServer stuff.
[11:47] <rogpeppe1> fwereade: it turns out that Machine.Destroy only fails if the machine has JobManageEnviron, not JobManageState...
[11:48] <rogpeppe1> fwereade: i think that's probably a bug, but it makes me think (yet again) of changing the semantics of those two jobs
[12:37] <rvba> jam: I /think/ the problem we're struggling with in MAAS+Juju might be related to https://bugs.launchpad.net/juju-core/+bug/1257649.  I don't understand the comment on there that says this is irrelevant now… care to enlighten me?
[13:03] <fwereade> rogpeppe1, I'd be +1 on that
[13:03] <fwereade> natefinch, might need to be a bit later
[13:04] <natefinch> fwereade: np, I am currently encumbered anyway
[13:06] <rogpeppe1> fwereade: my thought is to have two manager jobs, one for jobs which can be blown away without probs (e.g. the API server) and one for jobs which require special handling with EnsureAvailability and friends
[13:06] <rogpeppe1> fwereade: for the time being though, i'm wondering about just ditching one of the jobs entirely
[13:06] <fwereade> rogpeppe1, I'd be most happy just ditching one of them
[13:07] <rogpeppe1> fwereade: i don't think the distinction between ManageEnviron and ManageState is currently useful
[13:07] <rogpeppe1> fwereade: cool, i'll do that.
[13:07] <fwereade> rogpeppe1, agreed, it seemed like a good idea at the time
[13:07] <rogpeppe1> fwereade: yeah
[13:07] <fwereade> rogpeppe1, the universal explanation
[13:07] <rogpeppe1> fwereade: at some point in the indefinite future, i'd like to be able to scale up API servers independently of mongo servers
[13:08] <rogpeppe1> fwereade: but i think that's easier to do by adding a new job then
[13:08] <fwereade> rogpeppe1, agreed, but I don't think we're actually getting there any faster by maintaining that somewhat fuzzy distinction at the moment
[13:08] <rogpeppe1> fwereade: yup
[13:08] <fwereade> rogpeppe1, sweet, thanks
[13:10] <jam> rvba: in the current code in trunk, we don't have an internal timeout for the SSH connection. (I think) I'd have to double check what version you're running and what disconnections you're seeing
[13:28] <rvba> jam: we think the bug in question might be the cause of the problem we're seeing in the lab.  It takes more than 10 minutes to bring up a bootstrap node in MAAS because before installing jujud on it, we have to install the OS using d-i.
[13:29] <rvba> jam: we're trying to re-build juju with a different timeout to confirm this.
[13:38] <rvba> jam: what we're seeing is that 'juju bootstrap' gives up after 10 minutes and we think it might have succeeded with a bit more time :)
[14:25] <TheMue> debugging-by-println, sometimes nothing else works :D
[14:30] <TheMue> aaaaargh, apiserver.StringsWatcher requires being called by an agent, a client is not allowed (dunno why). THAT'S why debug-log has a "permission denied" error
[14:30] <TheMue> *sigh*
[14:35] <rick_h_> can anyone verify that a juju 'upgrade' to an older revision is possible on pyjuju please?
[14:50] <rogpeppe> fwereade: re: your comment on EnsureMongoServer, it's not adding anything that isn't already there
[14:51] <rogpeppe> fwereade: and if the state port isn't stored in the agent config, where *would* it be stored?
[14:52] <fwereade> rogpeppe, well it's already in the env config
[14:52] <rogpeppe> fwereade: but the agent doesn't have access to the env config until they've connected to the state, right?
[14:53] <fwereade> rogpeppe, this is all bootstrap time, isn't it? perhaps I'm confused there
[14:54] <rogpeppe> fwereade: it's also used after bootstrap time, when new state servers come up
[14:54] <rogpeppe> fwereade: but...
[14:54] <rogpeppe> fwereade: i'm thinking you may be right, because a second state server will have access to the API, even if they can't connect to mongo
[14:54] <fwereade> rogpeppe, but anything setting up a state server is either doing so at bootstrap, or in response to being told to by the API, right?
[14:54] <fwereade> rogpeppe, yeah
[14:55] <fwereade> rick_h_, pyjuju didn't have juju upgrades at all
[14:55] <TheMue> fwereade: you may help me with a design decision? why do the watcher in apiserver/root.go require to be called by an agent?
[14:56] <rogpeppe> fwereade, natefinch: huh, that call to ensureMongoServer in MachineAgent.Run doesn't look right at all
[14:56] <fwereade> TheMue, that was the only thing that needed it at the time
[14:56] <TheMue> fwereade: would like to use the StringsWatcher from the client
[14:56] <TheMue> fwereade: so changing would be ok?
[14:56] <fwereade> TheMue, let me take a quick look at the code
[14:56] <TheMue> fwereade: yep
[14:57] <fwereade> TheMue, aw hell
[14:57] <fwereade> TheMue, yes it's fine to change it
[14:57] <fwereade> TheMue, but it's not a simple one
[14:58] <TheMue> fwereade: great, that helps a lot for debug-log
[14:58] <fwereade> TheMue, ISTM that there's a pretty serious problem with all the watchers
[14:58] <TheMue> fwereade: oh *listening*
[14:58] <fwereade> TheMue, there's nothing preventing clients from grabbing each other's watchers willy-nilly
[14:59] <TheMue> fwereade: come on, let them have some fun *lol*
[14:59] <fwereade> TheMue, there's presumably something about a given resource that ties it to a particular client, otherwise we couldn't tidy up resources when conns are closed
[14:59] <fwereade> TheMue, haha
[14:59] <rogpeppe> fwereade: clients don't share resources
[14:59] <fwereade> TheMue, so ISTM this is something that can and should (nay must) be fixed for all the watchers
[14:59] <TheMue> rogpeppe: exactly, that's how I understood it too
[15:00] <fwereade> rogpeppe, yeah, I just can't remember the exact mechanism
[15:00] <fwereade> rogpeppe, ohhhhh ok it's maybe ok? individual clients get separate resource idspaces anyway, right?
[15:00] <rogpeppe> fwereade: exactly
[15:00] <fwereade> rogpeppe, cool
[15:00] <fwereade> TheMue, belay that panic then
[15:01] <fwereade> TheMue, you can just open up stringswatcher I think
[15:01] <TheMue> fwereade: yes, will start with that one as I only need it
[15:01] <rogpeppe> fwereade: specifically, newSrvRoot allocates a new resources map, and it's called when a client successfully logged in, and stored in the root object for that client
[15:02] <rogpeppe> TheMue, fwereade: opening up StringsWatcher seems fine to me
[15:03]  * TheMue takes the key to open it
[15:28] <natefinch> fwereade: back, let me know when you'd like to talk
[15:48] <dimitern> rogpeppe, fwereade, natefinch - I have two branches up for review, which complete the firewaller API story: https://codereview.appspot.com/55670043 and -  https://codereview.appspot.com/55680043 I'd really appreciate if someone takes a look
[16:32] <adeuring> could somebody have a look here: https://codereview.appspot.com/55690043 ?
[16:35] <natefinch> adeuring: looking
[16:36] <adeuring> thanks
[16:46] <natefinch> adeuring: reviewed
[16:46] <adeuring> natefinch: thanks!
[16:46] <natefinch> adeuring: thank you :)
[16:57] <natefinch> rogpeppe: what was the problem you had with ensuremongoserver?  I saw a couple problems while reviewing it with William (like, we only need to call it if we have a managestate job, and I need to be more tolerant in case the directory and files already exist).
[16:57] <rogpeppe> natefinch: why is it called unconditionally in the machine agent?
[16:58] <rogpeppe> natefinch: ah, yes, that's "we only need to call it if we have a managestate job"
[16:58] <natefinch> rogpeppe: right, I realized that when I went back to look over the code
[17:01] <rogpeppe> natefinch: i'm just about to delete JobManageState throughout the code, BTW, and just use JobManageEnviron throughout
[17:03] <natefinch> rogpeppe: cool, I'll check for that one then
[17:03] <dimitern> natefinch, rogpeppe, review poke?
[17:03] <rogpeppe> dimitern: will look shortly
[17:03] <dimitern> rogpeppe, cheers
[17:12] <natefinch> rogpeppe: is openApiState the appropriate way to get the list of jobs for the machine agent?  I don't see an easier way to do it
[17:12] <rogpeppe> natefinch: yes
[17:12] <natefinch> rogpeppe: cool
[17:12] <rogpeppe> natefinch: you might not find it that easy
[17:13] <rogpeppe> natefinch: but actually, as long as jobs stay constant, it's probably not too bad
[17:13] <rogpeppe> natefinch: the difficulty comes at bootstrap time
[17:14] <rogpeppe> natefinch: because at bootstrap time (or if a single remaining state server machine reboots) there's no API server to go to to ask for jobs
[17:15] <natefinch> rogpeppe: ahh, right
[17:17] <dimitern> natefinch, can you take a look at this please? https://codereview.appspot.com/55680043
[17:18] <natefinch> dimitern: sure thing
[17:20] <dimitern> natefinch, cheers
[17:45] <rogpeppe> hmm, a recent update (revno 2226) has broken tests for me
[17:58] <dimitern> rogpeppe, natefinch , guys, sorry to poke you again, but i'd like to land these two today if possible
[17:59] <rogpeppe> dimitern: looking, sorry, i've been struggling to get tests to pass
[17:59] <dimitern> rogpeppe, what's wrong?
[18:00] <rogpeppe> dimitern: i've just reported https://bugs.launchpad.net/juju-core/+bug/1271674 and https://bugs.launchpad.net/juju-core/+bug/1271672,
[18:00] <rogpeppe> dimitern: but mostly just crap from removing JobManageState
[18:01] <dimitern> rogpeppe, wow test.invalid resolves in your local net?
[18:01] <rogpeppe> dimitern: yeah. blame my ISP
[18:02] <rogpeppe> dimitern: they resolve everything to their yahoo search page
[18:02] <dimitern> rogpeppe, ah, nasty
[18:02] <rogpeppe> dimitern: it resolves to 92.242.132.16 for me
[18:03] <dimitern> to what lengths some people go for the sake of marketing
[18:06] <natefinch> dimitern: sorry, caught me during lunch.  I'm looking now for real though
[18:07] <dimitern> natefinch, thanks
[18:07] <natefinch> rogpeppe: my ISP does similar things, forwards DNS failures to their own page.  In theory you can disable it at the router, but I've forgotten the password to it :/
[18:08]  * dimitern is afk for a while
[18:24] <natefinch> dimitern: reviewed
[18:50] <arosales> will any folks here be joining the charm writing virt. sprint?
[19:00] <natefinch> arosales: core is pretty slammed right now.  I know I'm pretty behind where I'm supposed to be.... not sure about others.
[19:03] <arosales> natefinch, ack I'll check with some others
[19:19] <lazypower> to whomever fixed the state-watcher bug, i could hug you.
[19:22] <rogpeppe> lazypower: you're welcome :-)
[19:22] <lazypower> rogpeppe, you + me + beer in our future
[19:23] <rogpeppe> lazypower: SGTM!
[20:47] <thumper> arosales: hey there
[20:47] <thumper> arosales: I think you are under the mistaken impression that we know how to write charms :-)
[20:49] <natefinch> thumper: roger does, more or less :)  He wrote that Go charm thing, which looks pretty cool.
[20:49] <thumper> hey natefinch
[20:49]  * natefinch might be biased for anything Go and against anything bash, though
[20:49] <natefinch> thumper: howdy
[20:50] <thumper> natefinch: heh
[20:50] <natefinch> thumper: you're using Sublime Text and GoSublime, right?
[20:50] <thumper> natefinch: I am now
[20:51] <thumper> getting the hang of it
[20:51] <natefinch> does go to definition work for you on methods?  Like if you have s.Foo()  can you go to the definition of the Foo() method?  It doesn't work for me, and it's really annoying
[20:52] <natefinch> I can go to definition of types and top level functions, but not methods on types.
[20:53] <natefinch> I end up having to resort to a full text search like some kind of Neanderthal.
[20:54] <arosales> thumper, you guys are coding kings I am sure you can crank out all sorts of quality code
[20:54] <thumper> bwahaha
[20:54] <fwereade> thumper, heyhey
[20:54] <thumper> arosales: we are kinda focused on several areas that will make others happier
[20:54] <thumper> fwereade: hey dude
[20:55] <arosales> thumper, understood. I thought it wouldn't hurt to ask though
[20:55] <fwereade> thumper, I should have come on earlier and arranged things and so on
[20:55] <arosales> just in case :-)
[20:55] <thumper> arosales: sure
[20:55] <arosales> thumper, thanks for the consideration though
[20:55] <fwereade> thumper, don't suppose you're free to chat about availability sets a bit?
[20:55] <thumper> arosales: if we were just sitting around drinking, then yeah, we'd help
[20:55] <thumper> fwereade: sure
[20:56] <arosales> thumper,  lol
[20:56] <natefinch> thumper: that doesn't sound like you guys....
[20:56] <fwereade> thumper, or have you just been explaining to arosales that you wouldn't do them? ;p
[20:56] <arosales> thumper, I know you aren't doing that given I brought up the availability sets too
[20:56] <thumper> arosales: we aren't working on the availability sets just yet, but we have talked about it...
[20:56] <arosales> fwereade, that was charm tests thumper was breaking my heart on
[20:56] <thumper> arosales: we are working on proxy support, local provider improvements
[20:57] <fwereade> aww
[20:57] <thumper> fwereade: want a hangout?
[20:57] <fwereade> thumper, sgtm
[20:57] <arosales> thumper, its only mid week  :-)
[20:58] <thumper> fwereade: https://plus.google.com/hangouts/_/7acpibmqriabeqpmu77ctb34ns?hl=en
[20:58] <thumper> arosales: for you.. thursday morning for us
[20:59] <arosales> thats 2 full days :-)  I'll let you get back to work
[21:23] <fwereade> natefinch, ping
[21:23] <natefinch> fwereade: pong
[21:33] <thumper> natefinch: we should chat about upgrades
[21:33] <natefinch> thumper: I love upgrades
[21:34] <natefinch> thumper:  shall I join the hangout?
[21:34] <thumper> natefinch: sure
[21:34] <fwereade> natefinch, damn sorry I had a response half typed out
[21:35] <thumper> natefinch: fwereade just left the hangout, but I'm still there waiting for you
[22:19] <hazmat> thumper, runonallmachines .. runs on all units.. or just machines?
[22:19] <thumper> hazmat: it does what it says :-)
[22:19] <thumper> on the machines
[22:19] <thumper> we don't have an --all-units
[22:20] <thumper> if you need it, file a bug :)
[22:20] <hazmat> thumper, so this isn't juju-run.. this is arbitrary script run?
[22:20] <thumper> no...
[22:20] <thumper> only when running on a unit does it get the unit context
[22:20] <thumper> you can also run on the machine outside of a unit context
[22:21] <thumper> however it will get the hook execution flock
[22:21] <thumper> to make sure that it doesn't run in parallel with a hook
[22:21] <hazmat> cool, was just doing my monthly check for new api methods and came across.
[22:21] <thumper> :-)
[22:22] <thumper> it is freaky magic shit
[22:22] <hazmat> orchestration even ;-)
[22:22] <thumper> yeah, that
[22:22] <thumper> ansible-lite
[22:23] <hazmat> thumper, so command strings need a #!/bin/executable prepended?
[22:23] <thumper> no
[22:23] <thumper> we do that
[22:23] <thumper> actually, we send it through /bin/bash -s
[22:23] <hazmat> hmm.. so assumes bash?
[22:23] <thumper> yes
[22:23]  * hazmat files a bug ;-)
[22:24] <hazmat> bcsaller, ^ you might be interested in this
[22:24] <thumper> nah
[22:24] <thumper> why?
[22:24] <thumper> if you want to execute a magic script
[22:24] <thumper> write it to disk and call it
[22:24] <hazmat> i guess exec works.. just feels a little klunky that way
[22:24]  * bcsaller reads
[22:27] <thumper> bcsaller: we have 'juju run' now
[22:27] <thumper> which has an api end point
[22:27] <thumper> so in theory, the gui could run arbitrary code on any machine, service or unit
[22:28] <bcsaller> thumper: good stuff
[22:29]  * thumper considers a button on a machine in the gui called 'reboot'
[22:29]  * thumper chuckles to himself
[22:30] <hazmat> if only there were machines in the gui ;-)
[22:31] <thumper> yeah...
[22:31] <thumper> hazmat: we could reboot services :)
[22:31] <thumper> and it would work as long as the service isn't colocated with the api server :)
[22:31] <hazmat> thumper, ah.. that's no fun.. although the nondeterminism is pretty cool.. i'd prefer rm -Rf /
[22:32] <thumper> hazmat: don't get me wrong, it would reboot
[22:32] <hazmat> the openstack charms w/ ha .. do tons of rebooting.
[23:48] <davecheney> thumper: o/
[23:48] <davecheney> thanks for the good news on proxy support
[23:49] <thumper> hi davecheney
[23:49] <davecheney> i need that super bad for testing on that platform that we can't talk about
[23:49] <thumper> cool