[00:10] <davecheney> hey upgrade-charm --switch works
[00:10] <davecheney> that is awesome
[00:15] <davecheney> wallyworld_: was there a command to print the current environment ?
[00:15] <davecheney> nm, juju switch without a name
[00:15] <wallyworld_> yeah
[00:16] <wallyworld_> davecheney: how did the hp cloud demo go? well i hope
[00:19] <davecheney> wallyworld_: yeah, it's working well
[00:19] <davecheney> marco is demoing juju deployer
[00:19] <wallyworld_> great
[00:19] <davecheney> and about to wow them with an import of a whole openstack HA setup
[00:19] <davecheney> entirely hands off
[00:19] <wallyworld_> good luck :-)
[00:19] <davecheney> wallyworld_: thanks, we've tested this
[00:19] <davecheney> well, at least once
[00:20] <wallyworld_> how many machines do you need for openstack HA?
[00:21] <davecheney> dozen at least
[00:21] <wallyworld_> can't wait till containers are cooked a bit more so we can reduce that
[00:22] <davecheney> man, the lack of the auto layout in the gui is killing us
[00:22] <wallyworld_> i thought they were going to work on that?
[00:22] <wallyworld_> something was said in oakland anyway
[00:23] <davecheney> wallyworld_: yes, i wonder why it hasn't landed
[00:23] <davecheney> wallyworld_: have you used upgrade-charm --switch much ?
[00:24] <wallyworld_> no, not at all sorry
[00:24] <davecheney> fair enough
[00:24] <wallyworld_> is there an issue?
[00:24] <davecheney> i guess that is thumper/rog/fwereade
[00:24] <davecheney> wallyworld_: no, it looks like it works
[00:24] <wallyworld_> great
[00:24] <davecheney> we didn't realise it was done
[00:24] <davecheney> which is good
[00:25] <davecheney> because it helps when you need to fix a charm locally
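The workflow davecheney is describing can be sketched as a shell session (the charm names and paths here are illustrative; the log doesn't show the exact commands used):

```shell
# Deploy from the charm store:
juju deploy cs:precise/mysql

# After fixing a local copy of the charm, point the running service at it:
juju upgrade-charm --switch local:precise/mysql mysql

# And, as noted above, `juju switch` with no argument prints the
# current default environment:
juju switch
```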
[00:54] <sidnei> thumper: just as a heads up, i hope you haven't started on the lxc-clone thing discussed at iom, i'm planning to dive down into it tomorrow as soon as my other branch lands.
[00:58] <wallyworld_> arosales: not sure if you are around - do you have a preference for juju tools location? perhaps "http://juju.canonical.com/tools" afaiui, we want to avoid using ubuntu in the url? i need to ask IS if they can set up that location
[01:02] <axw> wallyworld_: when you have a moment, can you please review this (regarding syslog)? https://codereview.appspot.com/12909044/
[01:03] <axw> rsyslog even - apparently you wrote that
[01:03] <wallyworld_> sure
[01:03] <axw> ta
[01:04] <wallyworld_> axw: it was written with the understanding that we would not deploy services to the bootstrap node unless in a container. clearly now that we have the --to option, that is no longer true :-/
[01:04] <axw> wallyworld_: yeah I figured that was the case
[01:04] <axw> my "solution" is still not perfect even
[01:04] <axw> if you have multiple state servers... they're not going to see each others' logs
[01:04] <davecheney> wallyworld_: that url looks good
[01:05] <wallyworld_> davecheney: ta. afaik, juju.canonical.com doesn't exist yet, so i'll see if IS can set everything up for me
[01:05] <axw> davecheney: heya. I reproduced that line-ending issue on lcy02 yesterday
[01:06] <davecheney> axw: and I just ran into it on ec2
[01:06] <davecheney> it doesn't happen every time .... ?!?
[01:06] <axw> davecheney: seems the rsyslog conf file got mangled, and the "\n" got interpreted
[01:06] <axw> ah
[01:06] <axw> wtf :(
[01:07] <axw> davecheney: if you take a look at /etc/rsyslog.d/25-juju.conf, then you'll see a " on its own line
[01:07] <axw> it should be up a line, with a literal \n before it
[01:08] <axw> I better update the bug that says it works fine on EC2 then
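The mangling axw describes looks roughly like this (schematic only; the template line is a guess at the shape of the file, not copied from juju's source). A literal two-character `\n` escape in the config gets interpreted as a real newline, which splits the line and leaves the closing quote orphaned:

```
# Intended: the \n stays as a literal two-character escape in the template
$template JujuForwardFormat,"%syslogtag%%msg%\n"

# Mangled: the \n was interpreted during generation, leaving the closing
# quote on its own line
$template JujuForwardFormat,"%syslogtag%%msg%
"
```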
[01:15] <wallyworld_> axw: sorry, i marked the merge proposal as not lgtm since there's a cleaner way we can check if a machine is a state server, and we can abstract it behind a method for possible subsequent refactoring when we go to HA
[01:15] <davecheney> urgh, ec2 speed is killing me
[01:15] <axw> wallyworld_: nps, thanks for the review
[01:16] <davecheney> just bootstrapped a machine that had so little io bandwidth it could not unpack the tools
[01:16] <wallyworld_> let me know if you have questions
[01:16] <axw> wallyworld_: will do, ta
[01:17] <wallyworld_> axw: see state/machine.go - it defines the jobs a machine can run and the db document for machine containing the Jobs attr
[01:17] <davecheney> axw: nice diagnosis
[01:18] <thumper> sidnei: it is all yours :-)
[01:18] <axw> wallyworld_: I would've done that actually, but Machine isn't exposed to SimpleContext at the moment. I see now there's only one place that NewSimpleContext is called tho, so it's easy enough to pass one through
[01:19] <wallyworld_> axw: cool, i didn't really look too much at the surrounding code. i assumed a deployer or whatever would have had access to the machine it was operating on
[01:20] <axw> wallyworld_: nope, but it will soon :)  thanks for the comments
[01:20] <wallyworld_> np
[01:24] <thumper> hi wallyworld_
[01:24] <thumper> wallyworld_: how's the world?
[01:24] <wallyworld_> g'day
[01:24] <wallyworld_> busy
[01:24] <wallyworld_> lots of simplestreams stuff swirling around
[01:24] <wallyworld_> how was iom? sounds like it was good
[01:24]  * thumper nods
[01:25] <thumper> wallyworld_: yeah pretty good
[01:25] <thumper> didn't catch as much flak for the api lateness as i thought we might
[01:25] <wallyworld_> :-)
[01:25] <davecheney> thumper: thanks for implementing upgrade-charm --switch
[01:25] <wallyworld_> thumper: i'm on an ease of use mission, so hopefully soon juju will Just Work better
[01:26] <thumper> davecheney: ah... I didn't
[01:26] <thumper> wallyworld_: awesome
[01:26] <wallyworld_> lots to do though
[01:26] <thumper> wallyworld_: got general agreement between me, jam and fwereade__ around -v meaning verbose, not showlog
[01:26] <wallyworld_> \o/
[01:26] <thumper> also, useful output by default
[01:26] <thumper> with -q meaning quiet
[01:27] <wallyworld_> thumper: what about recognising that log != cli
[01:27] <thumper> yeah, we are going to log into $JUJU_HOME/juju.log (or something) for every command
[01:27] <wallyworld_> log should go to file, with different stuff going to stdout for cli feedback
[01:27] <thumper> kinda like .bzrlog
[01:27] <thumper> wallyworld_: ack
[01:27] <wallyworld_> cool, can't wait
[01:28] <davecheney> wallyworld_: +1
[01:28] <davecheney> if it means we don't have to run the command twice to debug it
[01:28] <davecheney> actually +1 million yen
[01:28] <wallyworld_> i would have done it myself if i had the time
[01:28] <thumper> well, we wanted to make sure that we had agreement before starting
[01:28] <thumper> so we wouldn't get rebuffed in reviews
[01:29] <wallyworld_> yeah, that too
[01:29] <davecheney> thumper: wallyworld_ anything that gets most of the debug shit out of the logs
[01:29] <wallyworld_> my initial email didn't get much love way back when
[01:29] <davecheney> it's really hard to demo juju and explain what is happening when 1/2 the line is full of shit like file:line etc
[01:29] <thumper> davecheney: ack
[01:29] <wallyworld_> yep
[01:30] <davecheney> same goes for the agents
[01:30] <wallyworld_> thumper: the ability for a user to define a log config and have that used would be great too
[01:30] <davecheney> which tend to log most things as debug then at info
[01:30] <davecheney> wallyworld_: -1, overengineering
[01:30] <thumper> wallyworld_: I have a branch for that
[01:30] <davecheney> there should be one log level
[01:30] <davecheney> silent or not silent
[01:30] <wallyworld_> davecheney: disagree
[01:30] <davecheney> anything more than that goes into this log file
[01:30] <thumper> davecheney: there are two users here
[01:30] <thumper> developers and users
[01:30] <thumper> developers need configurable logging
[01:31] <thumper> users shouldn't have to look at it
[01:31] <thumper> ever!
[01:31] <wallyworld_> what he said
[01:31] <wallyworld_> also, as a developer, i might want to configure logging so there is a stdout sink
[01:31] <thumper> which is why we want useful output by default
[01:31] <thumper> and verbose meaning "show me more" not "show me the developer log messages"
[01:31] <davecheney> thumper: sounds good to me
[01:31] <davecheney> i never want to know about it
[01:33] <axw> davecheney: "10mbit ethernet is considered high speed in australia"  ;)  I was thinking the same thing, but then remembered there will be other guests
[01:33] <davecheney> axw: honestly, this is the best we get
[01:34] <axw> yeah I know
[01:34] <davecheney> the bigger problem will be their shitful captive wifi portal
[01:47] <davecheney> axw: wallyworld_ thumper is there a way to show the default constraints ?
[01:48] <davecheney> similar to juju get $SERVICE
[01:48] <wallyworld_> davecheney: you mean the constraints specified on bootstrap i assume, which become the so-called environment constraints
[01:49] <davecheney> wallyworld_: yes
[01:49] <wallyworld_> i think there may be a get command of some sort, but i've not used it. i'll see if i can find what it may be
[01:50] <davecheney> juju get-constraints blocks
[01:50] <davecheney> juju get-constraints $SERVICE returns nothing
[01:50] <wallyworld_> get-env constraints  ??
[01:53] <thumper> davecheney: I don't know actually
[01:53] <wallyworld_> i looked at the code, get-constraints should be the thing to use
[01:53] <wallyworld_> not sure why it is blocking
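The commands davecheney is probing have this shape (the service name is illustrative). Per the log above, the no-argument form was hanging rather than printing the environment constraints set at bootstrap:

```shell
# Environment-wide constraints (the ones specified at bootstrap):
juju get-constraints

# Constraints for a single service:
juju get-constraints mysql
```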
[01:56]  * thumper tries to remember which nights in oakland were non-claimable
[01:56] <thumper> putting in expense reports
[01:58] <davecheney> thumper: none of them :)
[01:58] <thumper> davecheney: well, we did have a team dinner didn't we?
[01:59]  * thumper is trying to remember
[01:59] <thumper> I remember the go meetup
[01:59] <thumper> but having trouble thinking of other evenings out
[01:59] <thumper> except for that horrible vegan place
[02:00] <davecheney> thumper: the steak house on thursday ?
[02:00] <thumper> thursday was the go meetup
[02:00] <davecheney> tuesday ?
[02:01] <thumper> sounds right, but I'm having a lot of difficulty remembering it
[02:01] <thumper> ah
[02:01] <thumper> kincaids
[02:01] <thumper> yes, I remember now
[02:01] <davecheney> thumper: i don't think it matters
[02:01] <davecheney> just make sure the number of days add up to the right number
[02:01] <thumper> davecheney: it matters to me :)
[02:01] <davecheney> thumper: it was tuesday
[02:02] <thumper> ta
[02:12] <axw> that was quick, thanks thumper
[02:12] <thumper> np
[02:12] <thumper> I saw it come up
[02:12] <thumper> axw: trivial ones means only one review needed
[02:12] <axw> thumper: yup, thank you
[02:13] <thumper> actually we have an experiment coming up where we are moving to a "one lgtm needed" for a month
[02:13] <thumper> to see how it goes
[02:13] <axw> hmmk
[02:13] <thumper> axw: there have been a number of times where good work is blocked on noone around to do a second review
[02:13] <axw> I had a change last week where I got a "LGTM, this looks solid" and then a "NOT LGTM this is going to break everything catastrophically" ;)
[02:14] <thumper> the idea is that we'll be doing weekly reviews of modules
[02:14] <thumper> axw: which was that on?
[02:14] <axw> thumper: a change to handling of Dying in the uniter/modes code
[02:14] <axw> so that a uniter would die if it hadn't started yet
[02:14] <axw> I can get the number if you're interested
[02:15] <thumper> no, that's fine
[02:16] <thumper> axw: also, technically I should be on-call reviewer today
[02:17] <thumper> this morning was catching up on all the emails
[02:17] <thumper> now expenses etc.
[02:17] <axw> thumper: ah yes, what does that mean exactly?
[02:17] <thumper> generally, that your primary focus should be reviewing others code
[02:17] <thumper> and some bug work when not reviewing
[02:17] <thumper> if all that is done
[02:17] <thumper> then you can code
[02:18] <axw> gotcha
[02:18] <thumper> otherwise people tend to prioritise their own work
[02:18] <thumper> and no one looks at bugs :)
[02:18] <axw> thumper: doesn't mean people are going to call you up out of hours for reviews? :)
[02:18] <axw> heh
[02:18] <thumper> no
[02:18] <thumper> :)
[02:19] <axw> thumper: speaking of reviews, did you see my manual provisioner stuff?
[02:19] <axw> I'd like to know if I'm on the right track...
[02:19] <thumper> axw: I haven't but I want to look at it
[02:19] <axw> thumper: thanks. I have other stuff to carry on with for now
[02:20] <thumper> I need to get my wallet for the IOM expenses, but the dog is sleeping in front of the door and I really don't want to wake her up
[02:20]  * thumper nods
[02:20] <axw> hehe
[02:36] <axw> oops, that change was noise.. state isn't used directly in the code creating the deployer
[02:39] <axw> thumper: turns out I should've added that method to state/api/agent/Entity, if anywhere at all
[02:40] <axw> that code seems much more  minimalist than state
[02:40] <axw> should I be leaving it that way, or go ahead and add it in there too?
[02:40] <thumper> why do you feel it should be moved?
[02:41] <axw> no I made a mistake in the first instance; the code I need to update uses api, not state
[02:42] <axw> I'm updating jujud to check if it's a state server when creating a deployer context
[02:42] <axw> and jujud is working with api/agent Entity objects, rather than state.Machines
[02:56] <thumper> ah..
[02:56] <thumper> makes sense
[02:56] <thumper> why does jujud care if it is a state server for the deployer context?
[03:01] <axw> thumper: because the deployer needs to not install rsyslog forwarding config if it's on a state server, or we get a feedback loop
[03:02] <axw> bug 1211147
[03:02] <_mup_> Bug #1211147: Deploying service to bootstrap node causes debug-log to spew messages <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1211147>
[03:15] <axw> thumper: thanks for the review comments. I realise it's ugly/messy, was more interested in making sure I wasn't way off with how I understood things should work
[03:15]  * thumper nods
[03:15] <thumper> axw: I think you are kinda there
[03:15] <axw> so what's the "manual" provider going to do?
[03:15] <axw> nothing?
[03:15] <thumper> right
[03:15] <thumper> a manual provider isn't really a provider at all
[03:15] <axw> it exists just to be referred to?
[03:15]  * thumper nods
[03:15] <axw> ok
[03:16] <thumper> more importantly, so we can distinguish between environmentally created machines
[03:16] <thumper> and ones you have added manually
[03:16] <thumper> the whole "destroy-machine" thing needs thought
[03:16] <axw> yeah didn't even touch that yet
[03:17] <thumper> it seems that there needs to be a hook inside the machine agent so it can nuke itself
[03:17] <thumper> as in, remove the upstart job
[03:17] <thumper> and die
[03:17] <thumper> right now
[03:17] <thumper> without any way to discriminate between top level machines
[03:17] <thumper> the environ provider will raise an error
[03:17] <thumper> when it notices that a "manual" machine needs to die
[03:17] <thumper> because it can't find it
[03:18] <thumper> at least the process will die and restart
[03:35] <arosales> wallyworld_, +1 on juju.canonical.com/tools
[03:35] <wallyworld_> arosales: great thanks, will get the ball rolling
[03:36] <arosales> wallyworld_, thanks, are you going to file an RT with IS?
[03:36] <wallyworld_> yes
[03:36] <wallyworld_> will do it today
[03:48] <arosales> wallyworld_, thank you
[03:48] <wallyworld_> np, i'm looking forward to getting this all working
[03:48] <arosales> wallyworld_, ben also reports simple streams for HP and Azure are all set with the SRU for Azure inprogress to enable 12.04 on Azure
[03:48] <arosales> +1 on "it just works"
[03:48] <arosales> :-)
[03:49] <wallyworld_> arosales: so we can theoretically delete the hack up simplestream data i did for hp cloud
[03:50] <wallyworld_> i might test that locally
[03:50] <wallyworld_> s/might/will
[03:50] <arosales> +1 on testing it :-)
[03:50] <arosales> wallyworld_, I think you have access to the sub hp account
[03:51] <wallyworld_> yeah, i uploaded new metadata a day or so ago for the demo
[03:51] <arosales> wallyworld_, cool, and thank you
[03:51] <wallyworld_> np
[03:51] <wallyworld_> hope things are going well for you guys over there
[03:51] <arosales> its marco and dave that are in the trenches in japan
[03:51] <arosales> I am just assisting remotely
[03:52] <wallyworld_> ah ok
[03:52] <arosales> but I hear things are going better now
[03:52] <wallyworld_> \o/
[03:52] <arosales> is tim around
[03:52] <arosales> thumper, ?
[03:53] <arosales> thumper, need to confirm your plan/travel for the Brisbane sprint
[03:53] <arosales> email is also out on that subject
[03:54] <bigjools> o/ arosales
[03:54] <arosales> bigjools, hello
[03:54] <arosales> davecheney, all going well with the charm school?
[04:02] <thumper> arosales: here
[04:02] <thumper> arosales: was having a coffee and some fud
[04:03] <arosales> thumper, umm coffee . . .
[04:03] <thumper> arosales: oh yes, as much as davecheney disagrees, I think they'll be fine without me
[04:03] <arosales> thumper, hey I was just going to check if you need to book any travel for the brisbane sprint
[04:03] <thumper> arosales: no, no I don't
[04:03] <arosales> thumper, ah, ok
[04:03] <thumper> :)
[04:03] <thumper> arosales: I should go edit some docs I guess
[04:04] <thumper> if the flights were better, it perhaps could have been an option
[04:04] <thumper> but it is another five days away
[04:04] <arosales> ah
[04:04] <thumper> which doesn't gel with the family
[04:05] <thumper> I had it calculated for me that i have been away for a total around 8 months since I started with canonical
[04:05] <arosales> wow
[04:06] <arosales> thumper, I updated the ss
[04:06] <davecheney> thumper: arosales yup
[04:06] <davecheney> whatever
[04:06] <davecheney> after this week, I don't think there is anything you four can throw at me
[04:07] <thumper> davecheney: don't worry, you get to see me in October
[04:07] <davecheney> thumper: :heart
[04:07] <thumper> davecheney: really, whazzup?
[04:07] <davecheney> thumper: this week has been ... trying
[04:07] <arosales> davecheney, battle hardened
[04:07] <axw> wallyworld_: second attempt: https://codereview.appspot.com/12852044
[04:08] <wallyworld_> axw: will look soon, just finishing a bit of stuff
[04:08] <axw> nps, thanks
[04:08] <bigjools> thumper, damn, I was looking forward to insulting you in person
[04:08] <thumper> bigjools: sorry dude
[04:08] <arosales> bigjools, lol
[04:08] <thumper> bigjools: you'll have to wait until next year, or when you're up to travelling again
[04:08] <arosales> bigjools, I don't know thumper seems like a dude not to mess with in person
[04:08] <arosales> probably safer over IRC
[04:08] <thumper> family has a trip planned for gold coast next winter
[04:09] <bigjools> arosales: yes he's good at body checking
[04:09] <thumper> bigjools: and foot stomping ;-)
[04:09] <bigjools> ah yes :)
[04:09] <bigjools> thumper: cool, we could head there too and ruin it for you :)
[04:09] <thumper> arosales: just realised that SS is spread sheet, not secret service
[04:10] <arosales> ah yes, sorry about that
[04:10] <bigjools> arosales: I just realised I am going to miss the Monday of the sprint
[04:10] <thumper> arosales: did have me wondering why they cared...
[04:10] <bigjools> but I was only planning on being there one day anyway
[04:11] <arosales> thumper, lol
[04:11] <arosales> bigjools, well at least you will be able to make it for a few days.
[04:12] <bigjools> thumper: so anyway yes, I am unlikely to be travelling anywhere for the foreseeable future :/
[04:12] <arosales> ok fellas I am going to get some sleep.  Have a good day!
[04:12] <axw> good night
[04:12] <arosales> davecheney, keep rocking the charm school, and may the juju force be with you.
[04:13] <bigjools> nn arosales
[04:45] <thumper> I was considering leaving the office and coming back later for the meeting
[04:45] <thumper> but I can hear the kids not cleaning the kitchen
[04:45] <thumper> perhaps I'll just stay here for a bit
[05:31] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1212538
[05:31] <_mup_> Bug #1212538: cmd/juju: deploy --to a non existent machine fails too late in the process <papercut> <juju-core:New> <https://launchpad.net/bugs/1212538>
[05:59] <davechen2y> what the heck is --format=constaints ?
[06:00] <davechen2y> what the heck is --format=constraints ?
[06:01] <davechen2y> mmm, love that legacy smell
[07:58] <rogpeppe> mornin' all
[07:59] <axw> morning rogpeppe
[08:41] <mgz> rogpeppe: can you review the trivial https://codereview.appspot.com/12768044/ please?
[09:48] <rogpeppe> mgz: LGTM
[09:53] <mgz> how go-idiomatic would it be to do something like... range over an anon struct, in real code, rather than just for table tests?
[09:55] <mgz> it's the closest thing I have to the python idiom of say `for key in ('thing', 'other'): val = getattr(obj, key, None); if val is not None: ...`
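A direct Go analogue of that Python idiom is to range over a slice of anonymous structs pairing a name with each field's value, which is essentially the table-test pattern used in real code. The Config type and field names below are invented for illustration:

```go
package main

import "fmt"

// Config stands in for whatever struct mgz is inspecting; the fields
// are hypothetical.
type Config struct {
	PublicIP  string
	PrivateIP string
	Hostname  string
}

// nonEmptyAddresses walks an explicit table of (name, value) pairs,
// keeping only the fields that are set -- the analogue of
// `if getattr(obj, key, None) is not None` in the Python version.
func nonEmptyAddresses(c Config) []string {
	var out []string
	for _, f := range []struct {
		name, value string
	}{
		{"public", c.PublicIP},
		{"private", c.PrivateIP},
		{"hostname", c.Hostname},
	} {
		if f.value != "" {
			out = append(out, f.name+"="+f.value)
		}
	}
	return out
}

func main() {
	fmt.Println(nonEmptyAddresses(Config{PublicIP: "1.2.3.4", Hostname: "a.example"}))
	// prints: [public=1.2.3.4 hostname=a.example]
}
```

Unlike reflection-based getattr, the field list here is explicit and type-checked at compile time, which is generally considered idiomatic for small fixed sets of fields.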
[09:58] <fwereade> wallyworld_, ping
[09:58] <wallyworld_> heya
[09:59] <fwereade> wallyworld_, wondering about the deployer syslog change
[09:59] <wallyworld_> ah, i did an initial review but forgot to take another look
[09:59] <fwereade> wallyworld_, it's not immediately apparent how the unit logs will get to all-machines.log if we just make forwarding contingent on non-state-serveriness
[10:00] <fwereade> wallyworld_, but you probably know if/how it will work without me digging ;p
[10:00] <wallyworld_> you mean unit logs on state server machines?
[10:00] <fwereade> wallyworld_, yeah
[10:01] <wallyworld_> i must admit i didn't think that bit through as i was more concerned with stopping the loop
[10:01] <wallyworld_> i think it will just be a tweak to the syslog conf when deploying on state server
[10:02] <wallyworld_> but i'd have to look at the syslog docs
[10:02] <wallyworld_> to see the specs for a "append to file" module
[10:03] <wallyworld_> as opposed to a "forward to this port" module that we use now
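The two rsyslog action shapes wallyworld_ contrasts look roughly like this (schematic only, not juju's actual 25-juju.conf; hostnames and paths are placeholders):

```
# "Forward to this port" action, used on ordinary machines
# (@@ means forward over TCP):
:syslogtag, startswith, "juju-" @@state-server.example:514

# "Append to file" action a state server could use instead, writing its
# own units' logs straight into the aggregated log rather than forwarding
# them back to itself:
:syslogtag, startswith, "juju-" /var/log/juju/all-machines.log
```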
[10:03]  * wallyworld_ still hates that we support deploying directly to state servers
[10:05]  * fwereade sympathises but doesn't think we can kill it yet
[10:06] <fwereade> wallyworld_, hmm, when we get onto the HA work we'll probably have to consider job changes
[10:06] <wallyworld_> yeah, likely
[10:06] <fwereade> wallyworld_, and we could perhaps then default to creating state server machines without JobHostUnits
[10:07] <fwereade> wallyworld_, but then I'm not totally wild about directly exposing machine jobs to the user
[10:07] <fwereade> worth bearing in mind though
[10:07] <wallyworld_> when do we expose machine jobs to the user?
[10:07] <fwereade> wallyworld_, we'd have to if we wanted to keep a path to allow it
[10:08] <wallyworld_> sorry, i must have missed something
[10:08] <fwereade> wallyworld_, ah sorry
[10:09] <fwereade> wallyworld_, I think we have to allow that path, evil and crackful though it may be, to enable cheap operation even without containers
[10:09] <fwereade> wallyworld_, if we get containers everywhere the conflict disappears
[10:09] <fwereade> wallyworld_, but if we have containers in *most* places I think it'd be ok to disallow units on 0 by default
[10:09] <wallyworld_> fwereade: "allow that path" - not quite getting the context
[10:09] <fwereade> wallyworld_, allow JHU on machine 0
[10:09] <wallyworld_> right
[10:10] <fwereade> wallyworld_, I think we might be able to gradually restrict it
[10:10] <fwereade> wallyworld_, but I suspect it will remain a valid (if distasteful) use case for the foreseeable future
[10:10] <wallyworld_> well if we can deploy containers to machine 0 easily, then problem solved
[10:10] <fwereade> wallyworld_, yeah, but that depends on provider capabilities
[10:11] <wallyworld_> yeah, i realise :-(
[10:11] <wallyworld_> it was more wishful thinking
[10:11] <rogpeppe> mgz: i'd need to see the actual code
[10:12] <fwereade> wallyworld_, but if we have "enough" container addressability support we might be able to switch off JHU on 0 by default -- but we'd need some way to re-enable it in annoying contexts... hence some level of user exposure to machine jobs
[10:13] <wallyworld_> well maybe not Jobs per se - just allow users the ability to toggle the capability to host units from a semantic/logical perspective
[10:17] <fwereade> wallyworld_, yeah, indeed, I'm just worried that we'll end up with full control of machine jobs in a really ad hoc way
[10:17] <fwereade> wallyworld_, I'm just fretting uselessly really :)
[10:17] <wallyworld_> i think you are - we would never expose jobs directly :-) just the abstract notion of "capability"
[10:28] <mgz> rogpeppe: in this case, I think I should actually just reuse an existing struct, and just not append if its lacking a value
[10:28] <rogpeppe> mgz: i'm afraid i can't provide any useful input when i don't know the context :-)
[10:34] <rogpeppe> fwereade: chat?
[10:34] <fwereade> rogpeppe, I'm starting to think that this connection just straight-up hates hangouts, can we do it by irc perhaps?
[10:35] <rogpeppe> fwereade: it's just possible that if you *start* a hangout, it might be hosted somewhere more sensible for you
[10:35] <rogpeppe> fwereade: although that's possibly not how they work at all though
[10:35] <mgz> rogpeppe: `bzr diff -c1646 lp:~gz/juju-core/ec2_addresses` then `bzr diff -c1647 lp:~gz/juju-core/ec2_addresses`
[10:53] <mgz> does gocheck have any composable matchery things
[10:53] <mgz> ?
[10:55] <mgz> to do something like... c.Assert(aStruct, DeepEquals, ....actually, this doesn't make any sense in a go context, the types wouldn't match
[10:56] <mgz> that makes checking a list of structs where you don't care about one field more annoying
[11:03] <mgz> all: will miss standup again as it's Thursday in town day
[11:29] <fwereade> so, I'm going to try to join the standup hangout in a mo, but this connection seems to throw a fit whenever I do so
[11:29] <fwereade> so if I'm not there don't wait for me
[11:33] <natefinch> rogpeppe, jam: standup?
[11:33] <rogpeppe> natefinch: good point
[11:44] <dimitern> fwereade, rogpeppe: relation ops https://codereview.appspot.com/12990043
[11:49] <dimitern> fwereade, rogpeppe: I forgot KeyRelation() - i'll be adding it
[11:58] <dimitern> fwereade: shouldn't I do that?
[12:19] <dimitern> rogpeppe, fwereade: poke :)
[12:19] <rogpeppe> dimitern: i'm on it
[12:19] <dimitern> rogpeppe: thanks
[12:29] <rogpeppe> dimitern: reviewed
[12:30] <dimitern> rogpeppe: cheers
[12:30] <dimitern> rogpeppe: didn't want to use the struct by value to avoid copying in stateEndpointToParams
[12:30] <dimitern> rogpeppe: does that apply?
[12:31] <rogpeppe> dimitern: no
[12:31] <dimitern> rogpeppe: ok
[12:31] <rogpeppe> dimitern: i mean, it will be copied, but the cost of that is irrelevant
[12:31] <rogpeppe> dimitern: there's more cost from the allocation
[12:31] <dimitern> rogpeppe: :)
[12:31] <dimitern> rogpeppe: noted
[12:32] <noodles775> natefinch, sidnei: RE bug 1208504, I've got a fix (now that I've got an HP account to test) - it's pretty straight forward. Though unit-testing it is proving a little more difficult (the test double for openstack doesn't seem to have something similar to the ec2 double's SetInitialInstanceState).
[12:32] <_mup_> Bug #1208504: "error: no instances found" Post bootstrap on HP Cloud  <papercut> <juju-core:In Progress by michael.nelson> <https://launchpad.net/bugs/1208504>
[12:33] <natefinch> noodles775: ahh, good.
[12:36] <dimitern> noodles775: there is something more powerful in goose: ControlHooks
[12:36] <dimitern> noodles775: take a look at ProcessFunctionHook and how it's used
[12:37] <dimitern> noodles775: it allows you to return anything from a test double's method
[12:37] <noodles775> dimitern: hrm, I was looking at RegisterControlPoint but couldn't see how it can be used to change the actual state.
[12:37] <noodles775> dimitern: Ah? OK - I could only see how I could return an *error* for any method. Let me look again.
[12:37] <dimitern> noodles775: what do you need? change the state of a started instance?
[12:38] <noodles775> dimitern: yep
[12:38] <noodles775> dimitern: I'm happy to look again though - I thought the ServiceControl was the only option.
[12:38]  * noodles775 looks.
[12:40] <dimitern> noodles775: in addition to returning the error, you have access to the nova object n
[12:42] <noodles775> dimitern: I tried that too (ie. doing sc.(novaservice.Nova) but got impossible type assertion: novaservice.Nova does not implement hook.ServiceControl (RegisterControlPoint method requires pointer receiver)
[12:42] <dimitern> noodles775: try sc.(*novaservice.Nova) perhaps?
[12:44]  * dimitern gtg - be back in a couple of hours
[12:45] <noodles775> Thanks dimitern, I'll try that.
[12:45] <dimitern> fwereade: one last poke for https://codereview.appspot.com/12990043/
[12:47] <noodles775> dimitern: so the conversion works, but there's not much exposed on Nova (like .servers :/). I'll look more closely.
[12:50] <mgz> noodles775: what's your current diff, out of curiosity?
[12:51] <rogpeppe> fwereade: https://docs.google.com/document/d/1NRkTkZiVXcOL7wPQJ8GGxnW3YtPiwRQThTGNNT92HW4/edit?usp=sharing
[12:53] <sidnei> fwereade: good morning! got a few reviews on https://codereview.appspot.com/12859043/ already, but waiting on your ack before merging
[12:53] <noodles775> mgz: https://code.launchpad.net/~michael.nelson/juju-core/1208504-post-bootstrap-hp-no-instances-found-try2/+merge/179899
[12:54] <noodles775> mgz: It is also possible to fix it without increasing the shortAttempt, but it would rely on us checking for HP specific status codes (ie. "BUILD(spawning)")
[12:55] <noodles775> mgz: For the various HP BUILD states, see http://docs.hpcloud.com/api/compute#ServerStates
[12:55] <rogpeppe> fwereade: one thing i didn't mention: i was thinking of moving environs/{all,azure,dummy,ec2,local,maas,openstack} to environs/provider as the environs name space is getting really cluttered these days
[12:56] <mgz> noodles775: either of those approaches seems fine to me
[12:58] <noodles775> mgz: cool, thanks.
[13:30] <fwereade> rogpeppe, +1 to that
[13:30] <cruejones> seems there was a juju upgrade in juju/stable today which removed 'juju' cmdline symlink?  anyone else experience this?
[13:31] <fwereade> sidnei, hmm, root-disk would make sense if we did make sure to allocate a suitably sized ebs volume for the 0 instance-store cases
[13:33] <fwereade> sidnei, but adding that seems likely to be non-trivial
[13:34] <sidnei> fwereade: i don't understand what you're referring to there? we're using ebs instances and those all have an 8G root disk which can be resized on boot by passing the proper block device mapping.
[13:34] <sidnei> i man, i don't understand the '0 instance-store' cases
[13:34] <sidnei> s/man/mean
[13:36] <fwereade> sidnei, ah, I see -- but it seems a bit strange to be claiming 8G for every ec2 instance type
[13:36] <fwereade> sidnei, the implication is that you can't get anything on ec2 with more than 8G
[13:37] <sidnei> fwereade: that's currently the case until we change goamz to allow passing a custom block device mapping yes.
[13:38] <fwereade> sidnei, ...and while I see your point that that's what you get for the root disk in each case, it rather seems that that is actually nothing to do with instance type
[13:38] <fwereade> sidnei, I may be too ec2 focused, but it seems very strange to be presenting (say) an m1.xlarge as having 8G rather than >1T
[13:39] <sidnei> fwereade: as mentioned in my first reply, this is only for the root disk, not the additional ephemeral disks
[13:40] <fwereade> sidnei, OTOH the instance storage available is maybe somewhat irrelevant given that you need to know where it is to make use of it and the average charm does not
[13:41] <fwereade> sidnei, I don't suppose you know offhand whether it's possible to map root-disk cleanly to maas? dimitern, mgz, jam?
[13:41] <fwereade> or azure? jtv, rvba?
[13:41] <rvba> Hi fwereade.
[13:42] <sidnei> fwereade: i already added the azure mapping
[13:42] <fwereade> sidnei, <3
[13:42] <sidnei> fwereade: since the ebs root device (/dev/sda1) is resizable on boot, a proper follow-up branch would be to set the RootDisk to nil on the instance types and pass through the requested constraint via block device mapping to resize it to the requested size
[13:43] <sidnei> so for ec2 all instance types would match, and you would always get the size you requested on the constraint for the root disk
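The follow-up sidnei describes — zeroing RootDisk on the instance types and instead passing a block device mapping that resizes the EBS root volume (/dev/sda1) on boot — could be sketched roughly like this. The types here are hypothetical stand-ins, not the real goamz API; they only illustrate turning a root-disk constraint into a mapping request:

```go
package main

import "fmt"

// BlockDeviceMapping is a hypothetical stand-in for the structure goamz
// would need to grow: it asks EC2 to attach a volume of a given size (GiB)
// at a device name when the instance launches.
type BlockDeviceMapping struct {
	DeviceName string
	VolumeSize int64 // GiB
}

// rootDiskMapping turns a root-disk constraint (in MiB, juju's constraint
// unit) into a mapping that resizes the EBS root volume on boot.
// EC2 volume sizes are whole GiB, so round up.
func rootDiskMapping(rootDiskMiB uint64) BlockDeviceMapping {
	gib := int64((rootDiskMiB + 1023) / 1024)
	if gib < 8 {
		gib = 8 // never shrink below the stock 8G root disk
	}
	return BlockDeviceMapping{DeviceName: "/dev/sda1", VolumeSize: gib}
}

func main() {
	m := rootDiskMapping(20480) // root-disk=20G
	fmt.Printf("%s -> %dGiB\n", m.DeviceName, m.VolumeSize)
}
```

With this shape, every ec2 instance type matches the constraint and the requested size is what you actually get.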
[13:43] <fwereade> sidnei, that would be *fantastic* and would basically negate my quibbles; how realistic is it in the near future?
[13:44] <fwereade> rvba, in fact, you know maas too, right?
[13:44] <sidnei> fwereade: very realistic, the required changes in goamz seem to be fairly simple to implement.
[13:44] <rvba> fwereade: I do, but I confess I'm not entirely sure what you're talking about right now :)
[13:44] <fwereade> rvba, how trivial is it to ask for a machine with a particular amount of root disk space?
[13:45] <rvba> fwereade: the machine is installed using d-i so it's only a matter of passing the right config to d-i.
[13:45] <rvba> I think it's something that smoser talked about a few months ago.
[13:46] <fwereade> sidnei, ok, I'll do a quick pass, but I'll be happy on the condition we do get followups addressing that issue
[13:46] <fwereade> rvba, I don't recall that -- from a juju-core perspective, what would need to be done to hook up a root-disk constraint for maas?
[13:47] <sidnei> fwereade: ack. my priority list is landing this to unblock webops on openstack, then look at quick lxc-clone in local provider, then return and implement the block device mapping.
[13:48] <fwereade> sidnei, that sounds awesome -- would you add a juju-core bug and assign yourself please, so it doesn't get lost?
[13:48] <sidnei> definitely
[13:48] <rvba> fwereade: hum, actually i was talking about a slightly different thing: I was talking about configuring d-i so that the partitions follow a certain plan.
[13:49] <fwereade> rvba, I thought you might be, I didn't quite follow what you said at first :)
[13:50] <rvba> fwereade: okay, so the disk size is something that MAAS collects from the lshw output (same as RAM capacity) so it's probably only a matter of exposing that as another available constraint.
[13:51] <fwereade> rvba, cool, so we can basically make it a straight passthrough
[13:51] <rvba> Yes.
[13:51] <sidnei> https://bugs.launchpad.net/juju-core/+bug/1212688
[13:51] <_mup_> Bug #1212688: ec2: pass root-disk constraint via block device mapping <juju-core:Triaged by sidnei> <https://launchpad.net/bugs/1212688>
[13:51] <fwereade> rvba, fantastic, tyvm
[13:51] <rvba> welcome
[13:51] <fwereade> sidnei, great, thanks
[13:52] <sidnei> i'll file one about maas too
[13:58] <fwereade> sidnei, you read my mind :)
[13:59] <fwereade> sidnei, LGTM
[13:59] <fwereade> sidnei, thanks very much
[13:59] <sidnei> i'll sneak in a change to ignore and log root-disk on maas for now, just like it does for cpu-power
[13:59] <sidnei> i wonder what to do for local provider
[14:00] <mgz> sidnei: ignoring there seems sensible as well
[14:01] <sidnei> ok
[14:06] <sidnei> seems local just passes through all constraints without even looking at them
[14:08] <fwereade> sidnei, yeah, there'll be more to do wrt containers in general too but I don't think that needs to be on your plate
[14:16] <sidnei> fwereade: natefinch had a couple interesting comments on his review, which are mostly stylistic issues, i wonder what is your opinion.
[14:16] <sidnei> https://codereview.appspot.com/12859043/#msg11
[14:17] <mgz> sidnei: one final rename in state/machine.go and lgtm.
[14:17] <fwereade> sidnei, I'm not too bothered by the overwriting of results on error, IMO any error renders the whole object trash anyway
[14:18] <sidnei> mgz: ha! good catch.
[14:18] <fwereade> sidnei, but, whoops, I completely missed the "" case, and I need to reload state to remember
[14:18] <fwereade> just a mo
[14:19] <mgz> fwereade is currently creating a multi-gig swapfile
[14:21] <fwereade> sidnei, I think that ""->0 is correct -- the implication is "I don't care" rather than "fall back to environment constraints"
[14:22] <fwereade> sidnei, since 0 matches everything it's fine -- in arch, for example, it needs to be handled a little differently but again "" implies "I don't care", overriding whatever env defaults may exist
[14:22] <fwereade> sidnei, to fall back you just don't specify
[14:23] <mgz> the old constraint case was "blah=" meaning "use the default" and "blah=any" meaning "unset the constraint"
[14:24] <mgz> so, "mem=" would ask for a machine with 512mb or whatever, and "mem=any" wouldn't check ram at all
[14:29] <fwereade> mgz, yeah; but IIRC there was general agreement that juju-level defaults were madness and horror, and use of a special string was just too unpleasant
[14:29] <fwereade> mgz, so essentially the "default" decision is now just left up to the providers
[14:30] <mgz> fwereade: indeed
[14:31] <sidnei> there were missing tests for rootdisk in findCleanMachineQuery, added those and verified working too.
[14:37] <natefinch> fwereade, mgz, sidnei:  maybe change the code to have "" return early, and make it more obvious that it's purposeful?
[14:38] <fwereade> natefinch, mgz, sidnei: not sure it helps a great deal tbh, but happy to follow consensus
[14:38] <natefinch> possibly even with a comment? :)
[14:39] <fwereade> natefinch, mgz, sidnei: a named "doesntMatter" const might be even better
[14:39] <fwereade> natefinch, mgz, sidnei: not that that's a *good* name
[14:40] <fwereade> natefinch, the trouble with a comment vs a name is the necessity of repeating it
[14:41] <natefinch> fwereade: yes, a name is as good as a comment. Mostly I just wanted it to be clear that this is the intended effect, and not just happenstance from the way it was written
[14:41] <fwereade> natefinch, +1 indeed
[14:42] <natefinch> fwereade: especially because once it's out there, there's going to be 1000 people who have discovered the happenstance and now rely on that functionality
[14:42] <fwereade> natefinch, quite so
[14:42] <natefinch> or just one really loud person
[14:44] <natefinch> fwereade: do you have a suggestion for a bug for me to work on. The one John mentioned to me yesterday seems taken care of.  I got a bunch of suggestions from arosales, but I'm not really sure on the relative priority of them
[14:44] <fwereade> natefinch, I can *certainly* find you a bug if there's nothing addressability-related you can do with mgz
[14:47] <natefinch> fwereade: I'd be happy to help mgz, but it seems it's a little difficult to split off pieces
[14:48] <mgz> natefinch: so, we still need to land the remaining ec2 bits
[14:48] <mgz> then do basically the same for azure/maas, which are even simpler
[14:50] <natefinch> mgz: I like simple
[14:53] <mgz> so, azure is nice and easy, no gwacl changes needed
[14:53] <mgz> the structs built from the xml have some address details in them, that just wants exposing under the Addresses() method as we did for ec2
[14:55] <fwereade> natefinch, mgz: excellent
[14:56] <fwereade> natefinch, I have a couple that aren't so simple if mgz thinks it's likely that he'll be unable to use you for a day or 2 at any stage
[14:57] <fwereade> natefinch, but I'd be interested to know arosales' requests, because he should have a good handle on the ones that are really upsetting charmers, and they are an important audience
[14:58] <natefinch> fwereade: https://bugs.launchpad.net/juju-core/+bug/1210576
[14:58] <_mup_> Bug #1210576: Show exposed URL after exposing and on command 'show' <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1210576>
[14:59] <natefinch> fwereade: https://bugs.launchpad.net/juju-core/+bug/1201628
[14:59] <natefinch> fwereade: https://bugs.launchpad.net/juju-core/+bug/1210593
[14:59] <_mup_> Bug #1210593: Show the user all current running environments <juju-core:Triaged> <https://launchpad.net/bugs/1210593>
[15:00] <arosales> fwereade, that is the one we discussed at the sprint
[15:00] <mgz> natefinch: see RoleInstance in xmlobjects.go in gwacl for starts
[15:00] <fwereade> arosales, I think I can split a simple and useful one out of the first one
[15:01] <natefinch> mgz: cool.. what branch are you working on, should I just check it out and work on that one as well?
[15:01] <fwereade> arosales, `juju <verb-tbd> wordpress/0` => open browser if service exposed/complain if not
[15:01] <fwereade> arosales, the wait-for-some-unit-to-be-exposed functionality is a little tricky
[15:01] <mgz> niemeyer: thanks for the review
[15:02] <arosales> fwereade, ya the first "juju show" I think has more utility
[15:02] <mgz> natefinch: a fresh branch should be the right thing, shouldn't be any conflicts
[15:02] <natefinch> mgz: cool
[15:02] <mgz> natefinch: I'm going to wrap up the ec2 bits and get them landed
[15:02] <fwereade> arosales, listing running environments is a good one, but I think it'll have to wait a bit until rog/ian's work on environments has progressed a little
[15:02] <natefinch> mgz: how do I do that branch switching you were doing?
[15:02] <natefinch> mgz: bzr help switch was not very helpful :)
[15:03] <mgz> `bzr switch trunk` to get you onto trunk again (or create trunk) `bzr pull` for latest code, `bzr switch -b new_feature` to switch to a new feature branch
[15:03] <arosales> fwereade, ack. The thought there was: I am a new juju user, I tried to deploy a few times in AWS, now I am not certain if I have instances spending my money
[15:03] <mgz> where 'new_feature' is an appropriate name of some kind
[15:05] <fwereade> arosales, I think that if I was mistrustful of juju I'd be checking the console directly -- and that environments can be removed from environments.yaml anyway, rendering the functionality incomplete at best -- so I'm not 100% sure of the utility
[15:05] <fwereade> arosales, I understand the intent
[15:06] <natefinch> mgz: bzr switch trunk says bzr: ERROR: Not a branch: "/home/nate/code/src/launchpad.net/trunk/"   when I do it from code/src/launchpad.net/juju-core (which is targeted at trunk)
[15:06] <fwereade> arosales, and hopefully a clear way of listing will land soon
[15:06] <arosales> cool, and I saw you mentioned the "show" command may be a bit tricky, but I have _full_ confidence in juju core devs :-)
[15:08] <mgz> natefinch: you need a new branch if you're changing the workflow you're using probably
[15:08] <mgz> or use -b if you've not switched to that mode yet
[15:09] <mgz> `bzr branches` should list several things
[15:09] <mgz> if it just says (default) you're not using colocation yet
[15:11] <natefinch> mgz: ok, cool, I get it
[15:17] <arosales> mramm, per the cross team meeting seems a lot of folks are hitting https://bugs.launchpad.net/juju-core/+bug/1188126
[15:17] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <openstack> <serverstack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
[15:18] <arosales> fwereade ^
[15:18] <fwereade> arosales, hmm, thanks for the heads up
[15:18] <arosales> fwereade, issue hit by IS, Server, and Landscape
[15:19] <fwereade> arosales, yeah, there's no way that's wishlist
[15:19] <fwereade> arosales, tyvm
[15:19] <arosales> fwereade, could you update the importance?
[15:19] <fwereade> arosales, just made it high
[15:19] <arosales> fwereade, thank you
[15:19] <mgz> that bug is too much of a hydra
[15:19] <fwereade> mgz, I suspect this interacts with what you're doing
[15:19] <mgz> everyone commenting means something else by it
[15:20] <mgz> james' would be fixed by using the neutron api, or fixing openstack to be less insane about how it selects networks, or making cloud-init understand multiple networks
[15:20] <mgz> elmo's they worked around by not doing that in the end
[15:21] <mgz> and I don't even know what the issue the landscape guys have is really
[15:21] <natefinch> the fix is obviously to have juju disable all networks it isn't using ;)
[15:21] <fwereade> haha
[15:23] <arosales> other juju core bugs that were surfaced in the cross team meeting, fwiw
[15:23] <arosales> https://bugs.launchpad.net/juju-core/+bug/1170337
[15:23] <_mup_> Bug #1170337: maas provider: missing support for maas-specific constraints <openstack> <juju-core:Triaged> <https://launchpad.net/bugs/1170337>
[15:23] <arosales> and
[15:23] <arosales> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1200878
[15:23] <_mup_> Bug #1200878: Upgrade breaks existing pyjuju deployment <apport-collected> <papercut> <regression-release> <saucy> <juju-core:New> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1200878>
[15:23] <mgz> if cloud-init doesn't bring up all networks, we're kinda screwed. with the current changes, we can try multiple addresses, or be smarter at selecting which one to use, but given they're selected by neutron UUID it's a little painful (and fragile) to do 'correctly' in juju
[15:24] <mgz> we'd need to annotate the addresses in state with their internal id (after querying that over the neutron api), and sort the list in the same fashion openstack does when selecting a preferred address
[15:25] <fwereade> mgz, ouch, that sounds fragile indeed
[15:26] <fwereade> mgz, how bad would it be to try all of them? it looks like there's a TODO in the code for that already
[15:26] <mgz> we can try all of them, the fun part is then what do we tell charms via the legacy api, which expect one canonical address?
[15:26] <arosales> mgz, (brainstorm) would it be helpful to pass cloud-init a param to init all detected networks?
[15:27] <mgz> we'd need to predetermine which work, and mark some as broken
[15:27] <mgz> yeah, cloud-init should learn how to bring up multiple networks
[15:27] <mgz> then you can just select any
[15:28]  * arosales asks smoser in #juju
[15:29] <fwereade> mgz, or just not record the ones that don't work?
[15:38]  * dimitern is back
[15:38] <dimitern> fwereade: https://codereview.appspot.com/12990043/ please?
[15:38] <fwereade> arosales, lp:1200878 is indeed awful
[15:38] <fwereade> dimitern, sorry; on that now
[15:40] <arosales> fwereade, ya mramm said there would be a story there. juju-core just needs to communicate what that story is with folks doing packaging (james page) and in general. When the story is ready.
[15:40] <fwereade> rogpeppe, dimitern: re the bug above: shouldn't LoadState be detecting pyjuju environments?
[15:40] <rogpeppe> fwereade: it won't get that far
[15:41] <rogpeppe> fwereade: it's failing to create the config
[15:41] <dimitern> fwereade: yeah, it's more than what danilos did it seems
[15:41] <fwereade> rogpeppe, search for "no CA cert"
[15:42] <fwereade> rogpeppe, checks in LoadState would let us catch that problem
[15:42] <rogpeppe> fwereade: hmm, yes, you're absolutely right
[14:42] <fwereade> rogpeppe, knowing exactly how to handle it is maybe trickier -- is it reasonable to assume that someone in this situation *must* have juju 0.7 installed?
[15:43] <rogpeppe> fwereade: pyjuju 0.7?
[15:43] <mramm> fwereade: arosales: here's the release schedule: https://wiki.ubuntu.com/SaucySalamander/ReleaseSchedule
[15:43] <rogpeppe> fwereade: is there anything in the py juju state info file that marks it out as py juju ?
[15:44] <mramm> we need to think about what we do for it.   We will have a new cloud archive tools pocket, so I'm not sure exactly how critical it is -- but what we put into Saucy should be pretty stable if we can manage it
[15:44] <hazmat> rogpeppe, state info  file?
[15:44] <fwereade> rogpeppe, danilos did work to detect exactly this situation
[15:44] <hazmat> rogpeppe, the unit state  files are completely different between the two
[15:44] <fwereade> hazmat, this is the file in provider storage pointing to the state instance
[15:44] <hazmat> fwereade, ah
[15:45] <fwereade> rogpeppe, LoadState having to read two things to determine what sort of env it's talking to is fine by me
[15:45] <mramm> anyway, thie release stuff is something for us to talk through next week
[15:45] <fwereade> rogpeppe, alternatively, just sticking another key in that file would allow us to differentiate going forward
[15:45] <rogpeppe> hazmat: they're both in the same provider-state file aren't they?
[15:46] <hazmat> rogpeppe, they are but they have different keys for state servers
[15:46] <rogpeppe> hazmat: cool
[15:46] <mgz> rogpeppe: can I have a stamp on <https://codereview.appspot.com/12765043/> please? also, I'm assuming the landing process is just lbox submit still?
[15:46] <hazmat> rogpeppe, pyjuju uses 'zookeeper-servers', juju-core uses 'state-instances'
[15:46] <rogpeppe> fwereade: i think it could just look for len(StateInstances) == 0
[15:46] <rogpeppe> fwereade: or, better, StateInstances==nil
[15:46] <rogpeppe> fwereade: and assume that if that's the case, it's a pyjuju-created file
[15:47] <rogpeppe> hazmat: yeah
[15:47] <fwereade> rogpeppe, better yet
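The check being converged on — both jujus write a provider-state file, but with different keys — could be sketched like this, assuming the file has already been unmarshalled from YAML into a map (the function name is hypothetical):

```go
package main

import "fmt"

// jujuFlavour guesses which juju wrote a provider-state file from the keys
// present: pyjuju records "zookeeper-servers", juju-core "state-instances".
func jujuFlavour(doc map[string]interface{}) string {
	if _, ok := doc["zookeeper-servers"]; ok {
		return "pyjuju"
	}
	if _, ok := doc["state-instances"]; ok {
		return "juju-core"
	}
	return "unknown"
}

func main() {
	py := map[string]interface{}{"zookeeper-servers": []string{"10.0.0.1:2181"}}
	core := map[string]interface{}{"state-instances": []string{"i-deadbeef"}}
	fmt.Println(jujuFlavour(py), jujuFlavour(core))
}
```

A LoadState that does this classification can then fail with a clear "this looks like a pyjuju environment" error instead of "no CA cert".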
[15:47] <rogpeppe> mgz: looking
[15:48] <arosales> mramm, ack. I'll follow up with david c. when he returns.
[15:48] <rogpeppe> mgz: reviewed
[15:49] <mgz> rogpeppe: I also updated the other goamz proposal again, sorry for the bother :)
[15:50] <rogpeppe> mgz: np
[15:50] <hazmat> mgz, it's not really about cloud-init understanding the networks afaics, it's about juju understanding the right network to use
[15:50] <hazmat> afaics
[15:51] <mgz> okay, let's go in here :)
[15:51] <mgz> hazmat: in james' case, that's true. but the intention of the IS migration was that both addresses would work for a time
[15:51] <mgz> there's nothing on the juju side that I can see we can do to enable that.
[15:53] <hazmat> hmm.. so there's a net device attached to the instance, with an ip allocated from openstack, but nothing for it on the instance since it's not configured/up there?
[15:55] <hazmat> yeah. cloud-init doing the net dance sounds sane, but then we come back to juju handling the multiple ip addresses/net devs.. and using the right one for its own usage.
[15:55] <mgz> hazmat: yes, that's my understanding from agy's postmortem email to canonistack-announce
[15:57] <fwereade> dimitern, reviewed
[15:58] <hazmat> re merging to juju-core have the instructions changed from just $ lbox submit
[15:59] <hazmat> oh.. tarmac
[16:03] <mgz> hazmat: yup, you can use lp:rvsubmit bzr plugin if you like the submitty workflow
[16:04] <rogpeppe> mgz: reviewed
[16:09] <mramm> arosales: we should also add curtis from orange squad to the release planning discussion
[16:17] <arosales> mramm, ack
[16:17] <dimitern> fwereade: thanks!
[16:25] <rogpeppe> go doc is going :-( https://codereview.appspot.com/12974043/
[16:27] <rogpeppe> fwereade: do you have any idea why, in the local provider, environProvider.Open sets AgentVersion in the config?
[16:28] <smoser> mgz, i'm interested in knowing what the question and solutoin was for networking things.,
[16:30] <fwereade> rogpeppe, I *suspect* that it's part of thumper's hackery wrt upload-tools vs sync-tools, but I'm not directly familiar with the code in question
[16:30] <arosales> mgz, smoser is referencing the question I surfaced in regards to bug 1188126 that you and hazmat were discussing ealier
[16:30] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <openstack> <serverstack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
[16:30] <rogpeppe> fwereade: ah, i thought you were the principal reviewer
[16:31] <rogpeppe> fwereade: what's the specific hackery you're talking about?
[16:31] <mgz> smoser: the solution for one of those cases, was IS just not doing that for the lcy02 network migration
[16:31] <mgz> right, I have to run now I'm afraid
[16:31] <fwereade> rogpeppe, he abuses upload-tools pretty badly to mimic sync-tools -- there's some layer breaking around juju bootstrap
[16:32] <fwereade> rogpeppe, I managed to convince him it was crazy at iom
[16:32]  * rogpeppe should have spent more time looking at that code
[16:32] <fwereade> rogpeppe, but a fix has as usual been put off until it actively hurts us -- it may be you're in that situation now
[16:36] <fwereade> rogpeppe, fwiw the motivation for landing it as-was was, well, we needed the local provider stat, and the effects seemed to be localized
[16:36] <rogpeppe> fwereade: yeah, istr that
[16:37] <rogpeppe> fwereade: i've just realised that if Prepare does all the work, then Bootstrapper is unnecessary. we can just have Prepare(cfg *config.Config) (Environ, error)
[16:38] <fwereade> rogpeppe, awesome, ++simplicity
[16:38]  * rogpeppe undoes a load of stuff
[16:39]  * fwereade sympathises
[16:41]  * rogpeppe is actually very happy about that
[16:52] <rogpeppe> fwereade: how about this as a suggestion: the environment file contains *all* attributes that are not explicitly mentioned in the environments.yaml file (or later, the juju.conf file)?
[16:55] <rogpeppe> fwereade: hmm, that's simple to implement (and results in a nicely predictable user model - stuff in the home directory is only consulted the first time an environment is prepared) but it will result in more attributes than we strictly need.
[16:57] <dimitern> fwereade: updated https://codereview.appspot.com/12990043 - PTAL
[16:57] <fwereade> rogpeppe, I'm not immediately keen tbh, I feel like every attribute we put in there that's not strictly necessary is likely to end up a dependency of some sort
[16:59] <fwereade> dimitern, looking
[16:59] <rogpeppe> fwereade: the way i'm seeing it is that the environment file binds the attributes, so that subsequent operations have a consistent view of them, rather than looking around in the home directory for authorized-keys, for example
[17:00] <fwereade> rogpeppe, will we ever want to look up authkeys except at bootstrap time?
[17:01] <fwereade> rogpeppe, those should be packaged up, sent to the server, and forgotten, I think
[17:01] <rogpeppe> fwereade: no - quite a few attributes are only useful in the interval between preparation and bootstrap
[17:02] <fwereade> rogpeppe, authkeys in particular is not useful *until* bootstrap, is it?
[17:04] <rogpeppe> fwereade: yeah, but it seems a bit odd to have keys that are bound at different times. perhaps that's just me though. i'm still thinking it over.
[17:06] <fwereade> rogpeppe, authkeys is crack anyway
[17:06] <fwereade> rogpeppe, if a key represents anything it's a user
[17:06] <fwereade> rogpeppe, the only reason it was ever in the environment is because we were trying to get away without a user model :/
[17:07] <rogpeppe> fwereade: users are often represented by principals a.k.a. private keys
[17:07] <rogpeppe> fwereade: so authkeys doesn't seem too bad to me
[17:08] <fwereade> dimitern, wrt RelationInfo I was just asking whether RelationResult{RelationInfo, Error} might be cleaner -- it's probably not, if it doesn't make you immediately say "YEAH!" then I wouldn't bother
[17:08] <fwereade> rogpeppe, I'm not saying we shouldn't have authorized keys -- just that they shouldn't be on the environment
[17:08] <dimitern> fwereade: I didn't get what you were referring to as RelationInfo?
[17:09] <fwereade> rogpeppe, the env should have users, and those users should themselves have authkeys
[17:09] <rogpeppe> fwereade: i guess we'll still want to allow ssh access to the bootstrap node
[17:09] <fwereade> dimitern, just struct {Key, Id, Endpoints}
[17:09] <fwereade> rogpeppe, I can well imagine cases in which you don't want any ssh access enabled anywhere by default
[17:09] <rogpeppe> fwereade: do you mean "the *state* should have users" ?
[17:09] <dimitern> fwereade: ah, there is a RelationInfo in params already, and it's just Key and []Endpoint
[17:10] <dimitern> fwereade: used by the allwatcher
[17:10] <rogpeppe> fwereade: well, that's easy enough to arrange - just provide an invalid authorized key :-)
[17:10] <fwereade> rogpeppe, ha, yeah
[17:10] <fwereade> dimitern, hmm
[17:11] <fwereade> dimitern, I reckon that one ought to have id in there as well, really
[17:11] <dimitern> fwereade: I can add it, but I'll need to change a bunch of watcher tests
[17:12] <fwereade> dimitern, let's not actually
[17:13] <dimitern> fwereade: I'm trying it now
[17:13] <fwereade> dimitern, I'm not willing to say "these two things in these two contexts are actually the same" because I think they're probably not -- the uniter has no reason to know the remote endpoint
[17:14] <fwereade> dimitern, and while the allwatcher surely should include the relation id, that's not what we're concerned with here
[17:14] <fwereade> dimitern, my point about LocalEndpoint is just that there's no reason for the uniter to know what service it's having relations with
[17:14] <fwereade> dimitern, so why send that information?
[17:15] <fwereade> dimitern, when we call Relation we can trivially find out the service the connected unit is a member of, and get that endpoint directly and send that down alone
[17:15] <fwereade> dimitern, am I making any sense there?
[17:16] <dimitern> fwereade: sorry, not really
[17:17] <dimitern> fwereade: are you saying we don't need a "all endpoints" field at all?
[17:17] <fwereade> dimitern, yeah
[17:17] <dimitern> fwereade: then, we'll need an Endpoint() call for each service name, right?
[17:17] <fwereade> dimitern, then the Relation type we expose to the uniter literally just has an Endpoint(no args) method
[17:18] <fwereade> dimitern, because any uniter.Relation only needs to know one, and that's the one the API will be guaranteed to send anyway
[17:18] <dimitern> fwereade: how about that case when we call rel.Endpoint(u.unit.ServiceName()) ?
[17:19] <fwereade> dimitern, it's always u.unit.ServiceName -- we always know what that is ahead of time
[17:19] <fwereade> dimitern, so we just call Endpoint(), and that gives us the only endpoint we have a right to know about
[17:19] <rogpeppe> dimitern: reviewed
[17:19] <dimitern> rogpeppe: cheers
[17:19] <dimitern> fwereade: so what do we return on Relation() API call? only endpoints for our unit-tag?
[17:20] <dimitern> fwereade: so we need both rel-tag and unit-tag to call the API Relation() methods
[17:22] <fwereade> dimitern, that would be reasonable, I think, I had been imagining we could figure it out from knowing the connected entity, but that's probably sleazy/lazy/short-sighted
[17:23] <dimitern> fwereade: ok, so I'll define a params.Relations, []params.Relation which has Relation and Unit string fields, expected to be the respective tags
[17:23] <dimitern> fwereade: and then I can use the same params type for Enter and LeaveScope, Settings, etc.
[17:23] <fwereade> dimitern, that sgtm
[17:23] <dimitern> fwereade: ok then
[17:23] <fwereade> dimitern, thanks, sorry for the hassle
[17:25]  * fwereade bbl
[18:06] <dimitern> rogpeppe, fwereade: updated https://codereview.appspot.com/12990043/ again - I hope this time I managed to address all the suggestions
[18:12] <dimitern> rogpeppe, fwereade: if it looks ok, I'll land it
[20:20] <sidnei> uhm, anyone around for some go advice? niemeyer?
[20:20] <niemeyer> sidnei: Here.. just a sec.. finishing a meeting
[20:21] <sidnei> niemeyer: golxc has a Container interface with Clone(name), now i need to change that to Clone(name string, snapshot bool, backingStore BackingStore, templateArgs ...string), but i guess to not break bw-compat i would need to create a new interface?
[20:31] <niemeyer> sidnei: Hmm
[20:31] <niemeyer> sidnei: There are lots of details around such a change.. hard to give good advice without much info
[20:31] <niemeyer> sidnei: In general, yes, if you drop a method that satisfies an interface you don't satisfy the interface anymore
[20:33] <sidnei> niemeyer: and i can't add a new method to the same interface as anyone that should be implementing that interface will suddenly not be implementing it anymore right?
[20:42] <niemeyer> sidnei: Exactly
[20:42] <niemeyer> sidnei: But that's easy to solve
[20:42] <niemeyer> sidnei: You can just define another interface that contains the new method
[20:49] <sidnei> niemeyer: and embed the old interface?
[20:49] <niemeyer> sidnei: Not necessarily
[20:49] <niemeyer> sidnei: If they are alternatives to each other, they can be independent
[20:50] <sidnei> uhm, they are not alternatives, the only difference is the new method, all the other existing methods are still required
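One way this could shake out — a second interface carrying the new method, which callers holding the old interface check for with a type assertion — might look like the sketch below. The names are hypothetical, not the real golxc API; and since (per sidnei) all the old methods are still required, the new interface embeds the old one:

```go
package main

import "fmt"

// Container is the original interface; existing implementations keep
// satisfying it unchanged.
type Container interface {
	Name() string
	Clone(name string) error
}

// SnapshotCloner adds the richer clone without breaking existing Container
// implementers. It embeds Container because the old methods are still needed.
type SnapshotCloner interface {
	Container
	CloneWithOptions(name string, snapshot bool, templateArgs ...string) error
}

// fastContainer is a toy implementation satisfying both interfaces.
type fastContainer struct{ name string }

func (c *fastContainer) Name() string            { return c.name }
func (c *fastContainer) Clone(name string) error { return c.CloneWithOptions(name, false) }
func (c *fastContainer) CloneWithOptions(name string, snapshot bool, args ...string) error {
	fmt.Printf("cloning %s -> %s (snapshot=%v)\n", c.name, name, snapshot)
	return nil
}

func main() {
	var c Container = &fastContainer{name: "template"}
	// Callers holding the old interface upgrade via a type assertion,
	// falling back to the plain Clone for old implementations.
	if sc, ok := c.(SnapshotCloner); ok {
		sc.CloneWithOptions("machine-1", true)
	} else {
		c.Clone("machine-1")
	}
}
```

This keeps backwards compatibility both ways: old implementations still compile, and new callers degrade gracefully against them.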
[21:30] <hallyn_> hm, 'juju bootstrap' claims my ec2 environment is already bootstrapped.  (it's not)
[21:31] <thumper> hallyn_: it could be that the "special bootstrap file" has been written to the storage
[21:31] <thumper> hallyn_: try destroy-environment then try again?
[21:31] <thumper> hallyn_: if it still says it is bootstrapped, then it is probably a bug
[21:31] <thumper> could well be a bug anyway
[21:32] <hallyn_> thumper: aaah!  that worked.  where is that special file?
[21:32] <thumper> hmm...
[21:32]  * thumper looks at the code
[21:32] <hallyn_> thumper: thanks!
[21:32] <niemeyer> sidnei: Sorry, system crashed badly
[21:32] <hallyn_> thumper: no, nm, don't waste your time on that right now :)
[21:32] <niemeyer> sidnei: So the old method is required even for those implementing the new method?
[21:33] <sidnei> niemeyer: nope, but all the other old methods in the interface are.
[21:33] <thumper> hallyn_: "provider-state"
[21:33] <thumper> hallyn_: it writes the state instance id there I think
[21:34] <hallyn_> thumper: in what dir?  that didn't exist in my ~/.juju
[21:34] <thumper> hallyn_: in the root of the private storage I think
[21:34] <hallyn_> 'the private storage'?
[21:36] <thumper> in ec2 speak, the private bucket
[21:36] <thumper> we use "storage" internally
[21:41] <thumper> anyone know where we are re ARM versions?
[22:01] <sidnei> hallyn_: around?
[22:02] <hallyn_> sidnei: yup
[22:02] <sidnei> hallyn_: heya, getting the faster lxc-clone into juju, hitting a small roadblock
[22:02] <sidnei> hallyn_: http://paste.ubuntu.com/5990680/
[22:03] <hallyn_> sidnei: does /var/log/juju exist in the container?
[22:04] <sidnei> hallyn_: if that's meant to be /var/lib/lxc/sidnei-local-machine-1/delta0/var/log/juju, then no
[22:04] <sidnei> i guess i should create that in the base template anyway
[22:04] <hallyn_> sidnei: in delta0, or in the underlying rootfs (whichever container that comes from)
[22:05] <hallyn_> sidnei: what does 'grep rootfs /var/lib/lxc/sidnei-local-machine-1/config' give?
[22:05] <sidnei> lxc.rootfs = overlayfs:/var/lib/lxc/sidnei-local-precise-template/rootfs:/var/lib/lxc/sidnei-local-machine-1/delta0
[22:06] <hallyn_> sidnei: ok, then /var/lib/lxc/sidnei-local-precise-template/rootfs/var/log/juju existing would also suffice
[22:06] <hallyn_> but yeah you can't mount to a nonexisting dir
[22:06] <sidnei> hallyn_: ok, trying again.
[22:06] <hallyn_> \o
[22:06] <hallyn_> haven't had time today, but can't wait to try out the owncloud charm :)
[22:07] <hallyn_> (using lxc)
[22:15] <sidnei> thumper: did you see my question to gustavo above? do you have any ideas?
[22:15] <sidnei> thumper: need to change the signature of golxc Clone() to pass extra args, but i don't want to break bw compat
[22:15] <thumper> sidnei: which one?
[22:16] <thumper> sidnei: well, we don't use the Clone method
[22:16] <thumper> sidnei: so you won't break juju-core
[22:16] <thumper> and I don't know of anyone else using it
[22:16] <thumper> so go for it
[22:17] <sidnei> lol wut
[22:17] <sidnei> ok
[22:19] <sidnei> thumper: https://code.launchpad.net/~sidnei/juju-core/lxc-clone-with-overlayfs/ and https://code.launchpad.net/~sidnei/golxc/clone-with-backing-store/ is what i have so far, it's getting pretty far then blowing up in cloud-init: http://paste.ubuntu.com/5990730/
[22:21] <thumper> sidnei: what are the bits leading up to that?
[22:25] <sidnei> thumper: following the steps from smoser, create a base template with no userdata, then lxc-clone with userdata, then lxc-start
[22:25] <thumper> sidnei: so not running a juju-command
[22:25] <sidnei> thumper: yes, as a juju command with the above two branches
[22:26] <sidnei> so bootstrap then juju deploy ubuntu
[22:26] <thumper> sidnei: well, it looks like the upstart job isn't being created in the userdata
[22:27] <sidnei> it should be, i just moved userdata from lxc-start to lxc-clone, without changing anything
[22:30] <sidnei> thumper: here's the generated userdata: http://paste.ubuntu.com/5990758/
[22:33] <sidnei> oddly /var/lib/lxc/sidnei-local-machine-1/delta0/etc/init/jujud-machine-1.conf exists
[22:34] <thumper> sidnei: hmm, sorry, no idea right now
[22:35] <thumper> hmm...
[22:35] <thumper> overlay problem??
[22:35] <sidnei> mebbe
[22:35] <sidnei> going to try again shortly
[22:40] <sidnei> odd, the file is there inside the container after ssh in
[22:40] <sidnei> but upstart doesn't know about it
[22:47] <sidnei> ha, kill -HUP 1 to the rescue
[22:51] <bigjools> word up
[22:51]  * sidnei waves to bigjools
[22:52] <bigjools> hey sidnei
[22:52] <bigjools> how's the twins?
[22:55] <sidnei> bigjools: they're doing great. though staying at home this week because it's freezing cold outside and they were coughing heavily over the weekend, after 2 weeks of antibiotics. :/
[22:55] <sidnei> so a little more fun than usual on my afternoons
[22:56] <sidnei> bigjools: still at the hospital?
[22:56] <thumper> sidnei: IIRC I overheard some talk about overlayfs not working well with inotify
[22:56] <bigjools> sidnei: thankfully no
[22:57] <sidnei> thumper: indeed, smoser on that email thread that you started. this is bound to make things very interesting. i replied.
[22:57] <thumper> bigjools: can I get you to boot up your maas?
[22:58] <bigjools> thumper: sure.  /me needs to get ipmi working on the server node
[22:58]  * thumper pretends to know what bigjools is talking about
[22:58] <sidnei> bigjools: great to hear you're out
[22:58] <bigjools> thumper: it would mean I could turn it on without getting out my seat :)
[22:58] <bigjools> sidnei: thanks man
[22:59] <sidnei> thumper: so kill -HUP 1 does the trick, booted a couple more units without problem
[23:05] <bigjools> thumper: I need to dist-upgrade my saucy maas server if you're ok with that?
[23:06] <thumper> bigjools: yep
[23:06] <bigjools> ok - it will probably take down the appserver briefly
[23:15] <thumper> bigjools: if you can tell me when it is up again, that be swell :)
[23:15] <sidnei> thumper: https://code.launchpad.net/~sidnei/golxc/clone-with-backing-store/+merge/180444 https://code.launchpad.net/~sidnei/juju-core/lxc-clone-with-overlayfs/+merge/180445 both wip, will do more tests tomorrow
[23:15] <thumper> kk
[23:15] <bigjools> thumper: ok
[23:16] <bigjools> thumper: try in ~15 minutes
[23:17] <bigjools> or more to the point, ping me if I've not poked you by then
[23:19] <thumper> kk
[23:32] <thumper> bigjools: says 102 packages can be updated
[23:32] <thumper> bigjools: want me to update them?
[23:32] <bigjools> thumper: it's upgrading already
[23:32] <thumper> oh
[23:32] <thumper> ok
[23:32] <bigjools> run top and you will see the disk getting hammered
[23:33] <bigjools> ah the joys of a development release, package upgrade failures
[23:38] <thumper> \o/
[23:38] <thumper> system restart required
[23:38] <thumper> wanna bounce it?
[23:39] <thumper> bigjools: ^?
[23:40] <bigjools> thumper: not yet
[23:40] <bigjools> mysql is stuck
[23:41] <thumper> kk
[23:41] <thumper> so I should wait before I boot some maas nodes?
[23:41] <bigjools> thumper: yes, I need to reboot in a moment
[23:41] <thumper> kk
[23:47]  * thumper afk for lunch and haircut
[23:50] <bigjools> thumper: it's up
[23:56] <arosales> any folks here know if juju does a log rotate in 1.13?
[23:56] <arosales> or should I file a bug
[23:57] <davecheney> arosales: i think it *should* log rotate
[23:57] <davecheney> certainly for all-machines.log
[23:57] <davecheney> wallyworld_: committed that
[23:57] <wallyworld_> i didn't do the rotation bit
[23:57] <arosales> okay, that came up in #juju, not sure if it was addressed in 1.x juju
[23:59] <bigjools> hopefully it's just re-opening the log on a sighup?
[23:59] <davecheney> bigjools: doubt it
[23:59] <davecheney> dunno what loggo does