#juju-dev 2013-01-21
<wallyworld__> jam: hello, meeting time?
<jam> wallyworld__: it was, but I have to run to the grocery store for my wife. Will be back in a bit. Sorry I missed my alarm earlier.
<wallyworld__> jam: ok, ping me when you get back
<jam> wallyworld__: back
<wallyworld__> ok
<TheMue> Morning all.
<rogpeppe> TheMue, jam. wallyworld__: mornin'
<jam> morning rogpeppe
<rogpeppe> jam: how're you doing for snow?
<jam> rogpeppe: here in Dubai, the low is about 10deg C. Not much snow here :)
<rogpeppe> jam: ha, i thought you were in the uk...
<jam> (well, weather reports says low of 17 this week)
<jam> I used to be in NL
<jam> mgz is in UK
<rogpeppe> jam: doh, why did i think you were in the UK?
<rogpeppe> jam: i've no idea
<jam> i've been there for conferences and such, but I've never lived in UK
<rogpeppe> jam: anyway, lots of snow here, and more coming... definite advantages to working at home
<jam> rogpeppe: you usually still have to make it out for food. but yeah, not having hours of commuting in the snow is a good thing.
<rogpeppe> jam: still got a few carrots in the garden :-)
<jam> rogpeppe: ready frozen carrots :)
<rogpeppe> jam: mmm
<TheMue> We had a few centimeters of fresh snow this morning, but in the south it seems to be more. The major airport also had freezing rain yesterday.
 * TheMue likes new technology, his boarding pass for his flight tomorrow to MUC is now in passbook on his iphone.
<jam> dimitern: cfx3ac2..
<jam> ugh
<jam> ignore that :)
<dimitern> jam: :) ok
<TheMue> lunchtime
<niemeyer> Good morning!
<dimitern> niemeyer: morning!
<TheMue> niemeyer: Morning.
<rogpeppe> niemeyer, dimitern: yo!
<dimitern> rogpeppe: hiya
<niemeyer> dimitern, TheMue, rogpeppe: Heya!
<niemeyer> Good weekends?
<dimitern> good :) and lazy
<rogpeppe> niemeyer: yes thanks - went to see some old friends in the Lake District
<rogpeppe> niemeyer: i now ache from too much energetic sliding around on an iced over pond
<rogpeppe> niemeyer: quick question about mgo
<niemeyer> rogpeppe: Wow, sounds fun :)
<niemeyer> rogpeppe: Sure
<rogpeppe> niemeyer: if a connection to mgo is broken, will mgo retry by making a new connection, or will that (and subsequent) requests just give an error?
<rogpeppe> niemeyer: it was. their kids loved it, after some initial trepidation (the ice did crack alarmingly!)
<niemeyer> rogpeppe: An ongoing request returns an error, since assumptions about that specific connection may change, but post-Refresh sessions will block until the connection is back
<jam> mgz: poke for great mumbling
<niemeyer> rogpeppe: Ugh, sounds dangerous :-)
<mgz> jam: I'm on
<rogpeppe> niemeyer: the subtext is: i want to test what happens in the machine agent if the different independent parts die, so i'm wondering about a good way to provoke that
<rogpeppe> niemeyer: it would have been dangerous... if the pond was more than 6 inches deep :-)
<niemeyer> rogpeppe: Haha :)
<niemeyer> rogpeppe: Yeah, not a big deal :)
<niemeyer> rogpeppe: So
<niemeyer> rogpeppe: Provoke what, specifically?
<niemeyer> rogpeppe: Break the connection?
<rogpeppe> niemeyer: yeah
<niemeyer> rogpeppe: It's a normal TCP connection, so the usual goes.. mgo tests stop the server for a moment, for example
<rogpeppe> niemeyer: thanks. i might try that (i was thinking setting up a proxy address, but just stopping the server would probably be easier)
<niemeyer> rogpeppe: Yeah. In theory we actually already do that, just at a different time
<niemeyer> rogpeppe: ping
<rogpeppe> niemeyer: pong
<niemeyer> rogpeppe: Have you seen my previous messages, or was my connection broken already?
<rogpeppe> niemeyer: last message i saw was:
<rogpeppe> [11:40:25] <niemeyer> rogpeppe: Yeah. In theory we actually already do that, just at a different time
<niemeyer> rogpeppe: Okay
<niemeyer> rogpeppe: So I sent you a message about a SSL change coming in MongoDB
<niemeyer> rogpeppe: The only thing relevant there is the link, actually
<rogpeppe> niemeyer: ah, sounds useful
<niemeyer> rogpeppe: It says client certificates will be enforced in 2.4, and that we can actually force them with a flag explicitly before that
<niemeyer> rogpeppe: We should probably enable that flag
<niemeyer> rogpeppe: I'm slightly unsure of the consequences, though.. I don't know what OpenSSL does when one asks to verify the peer certificate, without actually providing any client CAs
<rogpeppe> niemeyer: hmm
<rogpeppe> niemeyer: who do we want to sign the client certificate?
<niemeyer> rogpeppe: I don't know.. I suppose the same CA would actually be fine.. but the question is really how validation occurs in this case. I'm unsure
<rogpeppe> niemeyer: i thought we were using passwords to verify the client
<niemeyer> rogpeppe: That's something else..
<rogpeppe> niemeyer: perhaps it's not a good idea to use the same CA, otherwise any of our clients can masquerade as a server, no?
<niemeyer> rogpeppe: It really depends on what approach we used
<rogpeppe> niemeyer: oh, perhaps i'm understanding something different by client-cert verification
<niemeyer> rogpeppe: Exactly.. I don't know what that really means for Mongo either.. the code looks pretty simplistic
<niemeyer> rogpeppe: It just calls SSL_get_verify_result on whatever cert was provided by the peer
<rogpeppe> niemeyer: the peer (as in the mongodb peer server)? or the client?
<dimitern> jam, mgz: so when you can PTAL https://codereview.appspot.com/7098074/ - landing this and the other branch https://codereview.appspot.com/7133043/ will unblock my bootstrap work
<rogpeppe> niemeyer: if it's going to do this by default, we definitely want to enable it early though, to see what effect it has on our stuff.
<niemeyer> rogpeppe: The client
<niemeyer> rogpeppe: I actually think we shouldn't worry by now
<niemeyer> rogpeppe: We should really focus on the API stuff so we can obsolete that stuff on our usage ASAP
<rogpeppe> niemeyer: yeah. i wonder what it does if the client doesn't present a cert
<niemeyer> rogpeppe: It fails
<rogpeppe> niemeyer: yeah, we definitely want to get the API going properly before that's enabled. it makes sense for the way we'll be using mongo after we've got the API (the api server can make its own client cert for connection, if it wants)
<niemeyer> error: cannot upload tools: cannot write file "tools/juju-1.9.7-precise-amd64.tgz" to control bucket: Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.
<niemeyer> I thought we had fixed this
<niemeyer> Oh, hmm
<niemeyer> No.. still broken
<niemeyer> Yeah, for some reason I can't make it work on sa-east-1
<niemeyer> Will ask David
<rogpeppe> niemeyer: i haven't seen that error before
<niemeyer> rogpeppe: Happens only when I try to use the local region
<rogpeppe> niemeyer: ah. i should probably try different regions.
<rogpeppe> niemeyer: BTW TestBootstrapWithDefaultSeries fails every time for me live
<rogpeppe> niemeyer: i've been meaning to get around to looking at it
<rogpeppe> niemeyer: because it's the only thing that consistently stops all the live tests passing
<niemeyer> rogpeppe: Would be great to know what's up with it
<rogpeppe> niemeyer: yeah. do you see the same problem, BTW?
<niemeyer> rogpeppe: No, I haven't run it
<rogpeppe> niemeyer: (i only get it when running all live tests)
<niemeyer> machines:
<niemeyer>   "0":
<niemeyer> That's a backwards incompatible change, I guess
<niemeyer> An unnecessary one at that
<rogpeppe> niemeyer: i guess that happened when we moved to string machine ids. i'd presumed the tests remained the same.
<rogpeppe> niemeyer: hmm, i'm not sure how TestBootstrapWithDefaultSeries can ever have worked in combination with the other tests - it doesn't destroy the environment before bootstrapping
<rogpeppe> niemeyer: actually, it can't do that either. more thought required.
<TheMue> Aaaah, back again, my MAAS server stole my network connection. Sh*t.
<rogpeppe> ultra-trivial CL anyone? (code move only) https://codereview.appspot.com/7175043
<rogpeppe> TheMue, niemeyer: ^
<niemeyer> rogpeppe: Hmm.. -1 in principle..
<niemeyer> rogpeppe: They're close to the only location where the methods are used right now
<niemeyer> rogpeppe: Aren't they?
<rogpeppe> niemeyer: no, they're used by the tests for all agents that use agentSuite
<rogpeppe> niemeyer: that's why i made them methods on agentSuite - i just didn't move them in the original CL so that i minimised code churn
<niemeyer> rogpeppe: I see.. LGTM
<rogpeppe> niemeyer: after checking, they're not *all* used by all agents, but each one is used by at least one test outside upgrade_test.go
<rogpeppe> niemeyer: which i think makes it worth it
<rogpeppe> niemeyer: thanks
<niemeyer> rogpeppe: Any luck with the test?
<rogpeppe> niemeyer: yes, i've got a fix. it's not right, but it's ok for now i think - it's certainly no worse
<rogpeppe> niemeyer: one mo, i'll propose it
<rogpeppe> niemeyer: https://codereview.appspot.com/7179043
<rogpeppe> niemeyer: the comment is a bit hopeful BTW - i have no way of verifying that it works on openstack.
<niemeyer> rogpeppe: This seems crackful
<niemeyer> rogpeppe: The goal of the test was to not affect the real environment from the suite
<niemeyer> rogpeppe: I probably screwed that up
<niemeyer> rogpeppe: I think we should just remove that test for now
<niemeyer> rogpeppe: The amount of hackery going on there and the amount of time we've spent on this outweighs its benefits
<niemeyer> rogpeppe: I was silly to propose this
<niemeyer> rogpeppe: We should have a functional test that runs that kind of verification from outside
<niemeyer> rogpeppe: Instead of trying to build a complete environment within the suite that includes multiple distros, multiple environments, and whatnot
<rogpeppe> niemeyer: i think you're right
<rogpeppe> niemeyer: do you want to redo the test?
<niemeyer> rogpeppe: redo?
<rogpeppe> niemeyer: i'm just going for lunch, BTW
<rogpeppe> niemeyer: oh, sorry, i thought you'd written it
<niemeyer> <niemeyer> rogpeppe: I think we should just remove that test for now
<niemeyer> rogpeppe: Yes, I did
<rogpeppe> niemeyer: i was referring to your "we should have a functional test" remark
<rogpeppe> niemeyer: ok, i'll remove it for now.
<niemeyer> rogpeppe: Thanks
<niemeyer> I'll go for lunch too
<rogpeppe> niemeyer: slightly longer lunch break today if that's ok - back in a couple of hours
<dimitern> fwereade: hey! arrived ok and ready for sprinting? ;)
<mgz> dimitern: is there anything else you need reviewing today to keep moving forward?
<dimitern> mgz: no, until those branches land and I propose my changes
<mgz> dimitern: ace, poke me if you need anything else
<dimitern> mgz: sure, thanks
<rogpeppe> back
<niemeyer> Hmpf
 * niemeyer curious about the sprint activities
<rogpeppe> niemeyer: actually running the API server now (not yet connecting to it though) https://codereview.appspot.com/7178044
<rogpeppe> and that's me for the day
<rogpeppe> g'night all
<niemeyer> rogpeppe: Sweet, have a great night
#juju-dev 2013-01-22
<jam> hey wallyworld_, how's it going?
<wallyworld_> g'day
<wallyworld_> well thanks
<jam> wallyworld_: any reviews, etc, I can help you out with?
<jam> (My review email box sometimes has 400 things in it from just a couple of days. Unfortunately, I'm in too many groups on LP :)
<wallyworld_> i've landed everything, am working on some stuff. i'd be happy to fill you in via mumble since i will not be at the meeting
<jam> wallyworld_: sure
<rogpeppe> if anyone has some space to look at this, a review would be very much appreciated: https://codereview.appspot.com/7178044/
<dimitern> what's the idea with panicWrite() -> writeCertAndKey called unexpectedly in the bootstrap tests?
<dimitern> it seems it'll fail always in the normal case
<dimitern> rogpeppe: can you help here? ^^
<rogpeppe> dimitern: hmm, i haven't seen that. looking.
<dimitern> rogpeppe: so there's that callback in environs.Bootstrap(Env, uploadTools, writeCertAndKey())
<dimitern> rogpeppe: which in the tests is always panicWrite instead
<rogpeppe> dimitern: yeah. that will only happen if the environment config doesn't have a CA cert and key
<rogpeppe> dimitern: so if that's happening in the openstack tests, i'd guess you've got a config without those in
<dimitern> rogpeppe: so for the tests what I'm supposed to do - remove the ca/key from ~/.juju ?
<rogpeppe> dimitern: no, pass the testing CA cert and key in the env config
<dimitern> rogpeppe: missing? ahaa ok
<rogpeppe> dimitern: i *think* that's right. give me a few moments to check.
<rogpeppe> dimitern: yeah, that seems about right.
<rogpeppe> dimitern: grep for CACert in environs/ec2/*.go
<dimitern> rogpeppe: I saw it - authorized-keys as well perhaps?
<rogpeppe> dimitern: maybe
<rogpeppe> dimitern: not for live tests
<rogpeppe> dimitern: you actually want it to use the authorized keys from your home directory for live tests, so you can ssh to the started instances
<dimitern> rogpeppe: I see, ok - now I added these conf keys and it passed
<rogpeppe> dimitern: cool
<aram> hi.
<rogpeppe> aram: hiya
<dimitern> aram: hey
<dimitern> rogpeppe: so after the bootstrap is done, we're trying to open the state and connect to the bootstrap machine - how is this simulated in the local live tests?
<dimitern> and why is mongo on my machine defaulting to 27017 and juju expects it to be at :37017 ?
<rogpeppe> dimitern: we don't simulate that in the local live tests AFAIR
<rogpeppe> dimitern: we decided that trying to simulate the cloud-init stuff was going too far
<dimitern> rogpeppe: but I can see things like: [LOG] 18.27607 JUJU environs/openstack: started instance "8cd71439-c5f6-47bd-94e1-f7693d47f11a"
<dimitern> opening connection
<dimitern> [LOG] 23.11676 JUJU environs/openstack: waiting for DNS name(s) of state server instances [8cd71439-c5f6-47bd-94e1-f7693d47f11a]
<dimitern> [LOG] 23.11794 JUJU state: opening state; mongo addresses: ["localhost:27017"]; entity ""
<dimitern> [LOG] 23.11974 JUJU state: connecting to 127.0.0.1:27017
<rogpeppe> dimitern: does the connection succeed?
<dimitern> I changed 37017 to 27017 to match my running mongo and StateInfo to return localhost
<dimitern> no
<dimitern> I'm lost.. what has to happen at this step in BootstrapAndDeploy ?
<dimitern> it's still a test (live, if local)
<rogpeppe> dimitern: if you look at the ec2 live tests, you'll find that LiveTests.CanOpenState is false for the local "live" tests
<dimitern> rogpeppe: got you! right this should work then :)
<rogpeppe> dimitern: and BootstrapAndDeploy looks at CanOpenState to see if it can connect to the state
<rogpeppe> dimitern: BootstrapAndDeploy can't work unless the relevant agents are started too
<dimitern> rogpeppe: so it's only a truly live test then?
<rogpeppe> dimitern: which we don't do in the local tests - we test them independently
<rogpeppe> dimitern: yes
<rogpeppe> dimitern: although it does test some stuff, i guess
<dimitern> rogpeppe: I see
<rogpeppe> dimitern: (in the local case)
<dimitern> rogpeppe: and HasProvisioner = false as well for local tests?
<rogpeppe> dimitern: yeah
<rogpeppe> dimitern: i did once suggest that we could have the local tests start the agents too, but it was considered not worth it.
<jam> dimitern: I'm pretty sure Mongo gets set up directly by the test infrastructure. Which is why it isn't on the port you expected.
<dimitern> jam: it seems something like that, anyway I'm skipping these now
<jam> dimitern: cloudinit_test.go spawns it with "--port 37017"
<dimitern> jam: yeah, I saw this
<dimitern> jam: but was still confusing at first
<niemeyer> Good morning
<dimitern> niemeyer: morning
<dimitern> jam: shall we start?
<jam> dimitern: I'm on
<jam> mgz: poke
<jam> wallyworld won't be making it, movie night
<mgz> dimitern: the other part (talking to the metadata service) is in juju.unit.address.OpenStackUnitAddress
<dimitern> finally, openstack bootstrap - https://codereview.appspot.com/7181046/
<rogpeppe> niemeyer: hiya
<niemeyer> rogpeppe: Just finished a pass on https://codereview.appspot.com/7178044/
<rogpeppe> niemeyer: thanks
<rogpeppe> niemeyer: i'm not entirely sure of the best approach to the two-error thing
<rogpeppe> niemeyer: if we get ErrDead and some other error, which error should we act on?
<rogpeppe> niemeyer: and which error should we return from Run?
<niemeyer> rogpeppe: Both are good questions.. "whatever" is probably not the right answer. :-)
<niemeyer> rogpeppe: I'll step out to grab some coffee and ponder meanwhile
<rogpeppe> niemeyer: runTasks returns the first error and logs the others
<rogpeppe> niemeyer: ok
<niemeyer> rogpeppe: Think about how both are handled meanwhile
<niemeyer> rogpeppe: So we can talk
<niemeyer> rogpeppe: SO, how do we handle ErrDead vs. other errors?
<rogpeppe> niemeyer: i think we could make an importance hierarchy of errors, perhaps
<rogpeppe> niemeyer: ErrDead > UpgradedError > * > nil
<niemeyer> rogpeppe: How do we handle ErrDead vs. other errors today in that context?
<rogpeppe> niemeyer: we just choose the first one, except that UpgradedError gets priority
<niemeyer> rogpeppe: You mean we never compare an error to ErrDead?
<niemeyer> rogpeppe: Why do we even have it?
<rogpeppe> niemeyer: we do, in order to return nil from Run if the error is ErrDead
<rogpeppe> niemeyer: because if the entity has died, we just exit normally
<niemeyer> rogpeppe: Yes, and that means we actually exit from the process
<rogpeppe> niemeyer: and in that case, we probably don't care if someone wants to upgrade us, or other errors.
<rogpeppe> niemeyer: yup
<niemeyer> rogpeppe: I think we do care we've seen an error, to log it if nothing else.. but indeed ErrDead should not be dropped silently
<rogpeppe> niemeyer: i've been toying with the idea of letting the tomb package have some sort of control over error precedence, rather than just discarding later errors
<rogpeppe> niemeyer: for instance, one might give it an error-precedence comparison function.
<niemeyer> rogpeppe: Seems like unnecessary complexity
<rogpeppe> niemeyer: you're probably right
<niemeyer> rogpeppe: Let's just solve the problem at hand
<rogpeppe> niemeyer: there are two places we'd want to solve it in: runTasks and the machine agent top level. both are really a similar thing.
<rogpeppe> niemeyer: and there's also the issue of the "configuration does not have state server cert/key" fatal error.
<rogpeppe> niemeyer: i wonder if we could define func IsFatal(err error) bool
<rogpeppe> niemeyer: then we  wouldn't have to kill the tomb in the above place
<rogpeppe> niemeyer: which means we don't run the risk of squashing those errors.
<niemeyer> rogpeppe: Isn't that unrelated to the actual problem?
<niemeyer> rogpeppe: ErrDead must not be dropped silently.. there are two possible errors, and one of them is being dropped silently. That sounds rather easy.
<rogpeppe> niemeyer: actually there are n possible errors, one for each task (if you look at runTasks)
<rogpeppe> niemeyer: we don't want to drop UpgradedError either
<niemeyer> rogpeppe: I'm talking about that loop
<niemeyer> rogpeppe: There are only two possible errors there
<rogpeppe> niemeyer: the loop in MachineAgent.Run ?
<niemeyer> rogpeppe: The one where the review comment was made
<rogpeppe> niemeyer: it's possible that we can drop ErrDead silently in runTasks too
<rogpeppe> niemeyer: but maybe we don't care that much.
<niemeyer> rogpeppe: There's already logic there to pick a proper error
<niemeyer> rogpeppe: If it's broken, it should be fixed
<niemeyer> rogpeppe: If it isn't..
<rogpeppe> niemeyer: it just chooses the first error (unless something returned an UpgradedError)
<rogpeppe> niemeyer: tbh dropping ErrDead in favour of UpgradedError isn't a problem
<rogpeppe> niemeyer: it'll just mean it'll restart only to die with ErrDead
<niemeyer> rogpeppe: It's also cheap to do it right, but that's really unrelated to the problem at hand
<rogpeppe> niemeyer: the loop in MachineAgent.Run is the moral equivalent of the loop in runTasks IMHO - both loops are multiplexing more than one task and returning the result
<rogpeppe> niemeyer: which is why i'm thinking about both of them here
<niemeyer> rogpeppe: Sure, whatever works
<rogpeppe> niemeyer: i think i know a decent way to solve the problem.
<rogpeppe> niemeyer: it's kinda funny that we'll end up ignoring the tomb.Err() value, but i think we've gone beyond its capabilities here.
<niemeyer> rogpeppe: tomb.Err should be whatever error took precedence
<rogpeppe> niemeyer: tomb.Err is always going to be the first error that happened to kill the tomb
<niemeyer> rogpeppe: Sure, and that's great
<rogpeppe> niemeyer: that's not necessarily the error we're going to use though
<niemeyer> rogpeppe: If it was killed ahead of time
<niemeyer> rogpeppe: All errors are used
<niemeyer> rogpeppe: Some of them are logged, others returned
<rogpeppe> niemeyer: sure. tomb.Err won't be the error that took precedence though.
<niemeyer> rogpeppe: Yes, nevermind.. I think we both understand the context
<rogpeppe> niemeyer: yeah.
<dimitern> are we having the weekly meeting?
<dimitern> it seems no one but me is there.. at least using the link in the calendar
<rogpeppe> dimitern: i guess so.
<rogpeppe> dimitern: i'll join now
<aram> what is the url?
<rogpeppe> aram: https://plus.google.com/hangouts/_/33acfc2c8af8792568274fa110371db956f9fde7
<aram> thanks
<rogpeppe> niemeyer: meeting?
<niemeyer> Yep, let's do it
<rogpeppe> mramm: meeting?
<aram> mramm: https://plus.google.com/hangouts/_/33acfc2c8af8792568274fa110371db956f9fde7
<rogpeppe> niemeyer: PTAL https://codereview.appspot.com/7178044/
 * rogpeppe goes for some lunch
<niemeyer> rogpeppe: LGTM
<niemeyer> rogpeppe: Thanks for the changes
<rogpeppe> niemeyer: cool, thanks for the prompt review
<rogpeppe> niemeyer: ping
<niemeyer> rogpeppe: Hey
<rogpeppe> niemeyer: i just wanted to run an idea past you
<niemeyer> rogpeppe: Of course
<rogpeppe> niemeyer: i find myself replacing state.Info with a pair of (state.Info, api.Info) everywhere, and i'm wondering if it would be bad to just add a state.Info member to api.Info, to be removed when we can use the API on its own.
<niemeyer> rogpeppe: I think we covered that before
<rogpeppe> niemeyer: that was the connection itself
<rogpeppe> niemeyer: this is just the info, which i think is more reasonable
<rogpeppe> niemeyer: as it's got no methods, and i wouldn't embed it
<niemeyer> rogpeppe: I think the reasoning is the same
<rogpeppe> niemeyer: could you remind me of the reason this is a bad idea
<rogpeppe> ?
<niemeyer> rogpeppe: Sure: http://pastebin.ubuntu.com/1559918/
<rogpeppe> niemeyer: i think there's a significant difference between Info and State here.
<niemeyer> rogpeppe: The idea is exactly the same
<rogpeppe> niemeyer: we're going to need the state.Info around until the very last remnant goes
<rogpeppe> niemeyer: because it's used in setting up the thing that everything talks to
<niemeyer> rogpeppe: Disagree.. you only need it in places where the API doesn't cover
<rogpeppe> niemeyer: you need it in the top-level places where the state is connected to - in juju.Conn and at the top level in jujud. those places are used by everything.
<rogpeppe> niemeyer: so as long as there's something using it, you'll still need it, AFAICS
<niemeyer> rogpeppe: The top level in jujud is "used by everything"?
<rogpeppe> niemeyer: the top level in jujud creates a *state.State, which is then passed to all the tasks.
<rogpeppe> niemeyer: as long as we have one task that needs a *state.State, we'll need the *state.Info.
<niemeyer> rogpeppe: That's totally fine?
<rogpeppe> niemeyer: but at that point we could just remove the StateInfo field from api.Info
<rogpeppe> niemeyer: with considerably less disruption to the APIs
<niemeyer> rogpeppe: That will mean that every single entity in the system that has a StateInfo will have the api.Info, even if it doesn't care about it
<rogpeppe> niemeyer: no
<niemeyer> rogpeppe: It's exactly the same motivation described before
<niemeyer> Dec 12 09:55:31 <niemeyer>	rogpeppe: It's pretty much impossible to tell if things you're passing apiState in actually are ported or not, without investigating all calls
<niemeyer> Dec 12 09:56:09 <niemeyer>	rogpeppe: In general, when something is ported we can actually *port* it
<rogpeppe> niemeyer: every single entity in the system that has a api.Info will have state.Info
<rogpeppe> niemeyer: but not vice versa
<niemeyer> rogpeppe: Sorry, my opinion is the same.. if you want to talk to William and get agreement on something else, please feel free to move on with it
<niemeyer> rogpeppe: I would just be explicit myself
<rogpeppe> niemeyer: ok, fair enough.
<rogpeppe> niemeyer: in which case, does the following change to Environ make sense to you? http://paste.ubuntu.com/1559933/
<rogpeppe> niemeyer: i was considering having two entry points, StateInfo and APIInfo, but they duplicate almost all of their work.
<rogpeppe> niemeyer: and we'll only want one eventually
<niemeyer> rogpeppe: It is kind of weird.. but that method always was
<rogpeppe> niemeyer: StartInstance?
<niemeyer> rogpeppe: I think eventually it should be returning something environ-specific
<niemeyer> rogpeppe: StateInfo
<rogpeppe> niemeyer: then the environ would connect to the state?
<niemeyer> rogpeppe: Same thing about StartInstance
<niemeyer> rogpeppe: But I think this change looks fine for the moment, yeah
<niemeyer> rogpeppe: Those other things are things to think in the future
<niemeyer> rogpeppe: Why does it need both, btw?
<rogpeppe> niemeyer: how else does a new instance know how to connect to the API?
<niemeyer> rogpeppe: Supposedly there's a lot of duplication in them.. what's the information that is unique to each?
<rogpeppe> niemeyer: the port
<rogpeppe> niemeyer: and, currently, api.Info holds a single address, not a list
<niemeyer> rogpeppe: Hmm.. any particular reason?
<rogpeppe> niemeyer: just that it was easy to get going, and i'm not quite sure how we'll make use of an address list as an API client.
<rogpeppe> niemeyer: perhaps just choose one at random
<niemeyer> rogpeppe: I'm not sure either, but it's quite clear that this will happen
<rogpeppe> niemeyer: but given that nothing has more than one address currently, it seems a bit like overkill
<niemeyer> rogpeppe: Yes, probably random
<rogpeppe> niemeyer: it'll be easy enough to change when HA comes in
<niemeyer> rogpeppe: I'd not do that
<niemeyer> rogpeppe: It's easy to overlook it
<niemeyer> rogpeppe: For example, the data file in Amazon holds a list
<niemeyer> rogpeppe: In S3, that is
<niemeyer> rogpeppe: Changing to multiple addresses is just a matter of adding more of them, and that won't break previous clients
<rogpeppe> niemeyer: i'd never intended to change that, just the Addr field in api.Info
<niemeyer> rogpeppe: My point is that it's easy to mess stuff up if you hide the fact this is *intended* to be a list
<rogpeppe> niemeyer: but if you think it should be changed now, i'll do that
<niemeyer> rogpeppe: Using the single item in a list is easy.. backporting data files and making sure previous clients continue to work correctly when you provide a list isn't
<niemeyer> I'm going to step out to take Otávio to his ear check-up
<niemeyer> Back in a bit
<hazmat> niemeyer, ping
 * niemeyer is back
<rogpeppe> niemeyer: i'm presuming the same reasoning applies to juju.Conn too, so i'm making juju.APIConn which is the same as juju.Conn but holds api.State
<niemeyer> rogpeppe: Uh?
<rogpeppe> niemeyer: the alternative is to have juju.Conn hold both *state.State and *api.State
<niemeyer> rogpeppe: I didn't imagine we'd be forking Conn
<rogpeppe> niemeyer: your reasoning applies there too, i think
<niemeyer> rogpeppe: I don't see how..
<niemeyer> rogpeppe: Conn is what we want to port..
<rogpeppe> niemeyer: if we're just passing around juju.Conn, with both connections in it, how can we know easily what's left to port?
<niemeyer> rogpeppe: Whatever uses juju.Conn will continue using juju.Conn
<rogpeppe> niemeyer: ok, what about things that use Conn.State directly?
<niemeyer> rogpeppe: Either return both, or add an APIState method
<rogpeppe> niemeyer: it's a field
<niemeyer> rogpeppe: The latter is probably better
<niemeyer> rogpeppe: Sure, whatever
<rogpeppe> niemeyer: i actually started doing it that way, but thought you'd prefer it separate. and actually, the code *is* quite a bit cleaner when it's kept separate.
<niemeyer> rogpeppe: and then duplicate all the methods of Conn, and all the tests?
<rogpeppe> niemeyer: each method of Conn will need to be ported individually
<bigjools> has go got an x509 library?
<rogpeppe> niemeyer: because most of them have state-related types in the signatures
<niemeyer> rogpeppe: Yes, and it can be ported on the same Conn
<niemeyer> bigjools: Yep
<bigjools> cool, ta
<niemeyer> bigjools: In the standard library.. have a look under crypto
<bigjools> sounds good, ta
<niemeyer> bigjools: http://golang.org/pkg/crypto/
<rogpeppe> bigjools: we do a fair amount of x509 hackery in juju-core - grep for x509
<niemeyer> rogpeppe: Still.. the idea is being able to port one at a time
<niemeyer> rogpeppe: But, I really don't mind too much, to be honest..
<bigjools> ya, we need to use it for Azure
<niemeyer> rogpeppe: It feels like a lot of work to be introducing a second juju.Conn and duplicating all the tests and porting all the client sites
<niemeyer> rogpeppe: If you feel like that's easier..
<rogpeppe> niemeyer: i think it may be
<rogpeppe> niemeyer: i'll see how it goes.
<rogpeppe> niemeyer: currently it certainly seems cleaner than interleaving the code,  but i may well change my mind!
<niemeyer> rogpeppe: If there are two methods, you'll be unable to port one of them at a time, for example..
<niemeyer> rogpeppe: But you have better context than I do, so go for it
<rogpeppe> niemeyer: it would be awkward the other way around too - e.g. AddService returns *state.Service, which AddUnits expects as an argument
<niemeyer> rogpeppe: Yeah, and maybe the API is not that big anyway
<rogpeppe> niemeyer: i think so. we'll see
<rogpeppe> niemeyer: i've gotta go now. good progress made today, BTW.
<rogpeppe> g'night all
<niemeyer> rogpeppe: Great to hear!
<niemeyer> rogpeppe: It looks like things are progressing very well indeed
<niemeyer> rogpeppe: Have a good evening there
<bigjools> What are the requirements of provider storage now?  What needs to be stored and what key is used to access it?
#juju-dev 2013-01-23
<rogpeppe> davecheney: ping
<rogpeppe> mornin' all!
<rogpeppe> wallyworld_: you around?
<aram> hello.
<rogpeppe> aram, dimitern: hiya
<dimitern> rogpeppe, aram: morning :)
<rogpeppe> dimitern: openstack tests are broken in trunk BTW, if you don't have OS_TENANT_NAME set
<dimitern> rogpeppe: hmm
<dimitern> rogpeppe: that's now or the branch I proposed yesterday?
<rogpeppe> dimitern: currently
<rogpeppe> dimitern: it would also be nice if the goose "required environment variable" error message printed the name of the required environment variable rather than the struct field name...
<rogpeppe> "required environment variable not set for credentials attribute: TenantName" makes it look as if the required env var is $TenantName
<dimitern> rogpeppe: tell me about it :) I did it like that originally, but.. anyway I'll take a look
<dimitern> rogpeppe: fixed https://codereview.appspot.com/7196043 - this is trivial, if you think it's ok, I'll land it now
<jam> mgz: poke
<rogpeppe> dimitern: LGTM. agreed trivial.
<dimitern> rogpeppe: 10x, submitting then
<dimitern> rogpeppe: how about looking at my bootstrap branch - https://codereview.appspot.com/7181046/
<rogpeppe> dimitern: will do
<mgz> jam: hey
<jam> mgz: I had some questions about how to setup tarmac, and thought you might be good to bounce off for feedback.
<jam> I think I have it up and running, and a small patch to handle some stuff I didn't realize.
<jam> However, what branch do we want it to actually manage?
<jam> (do we want to add it to ~gophers, and give it write access there, do we want a different team, a branch only accessible by goose-bot, ... ?)
<jam> (I guess the next direct step is to add support for running juju-core tests before landing trunk code, but I'd like to answer the above first)
<mgz> hm, the bzr-pqm way is to have the branch owned by the bot
<jam> mgz: right
<jam> the major downside is the "manual escape hatch"
<mgz> but being able to get around it and commit anyway will probably be useful for now
<jam> especially because of wanting to be able to land goose code and juju-core code concurrently
<mgz> yup.
<mgz> so, maybe we want a new team that the bot can be a member of to own the goose code
<jam> I tend to be cautious, so giving it full '~gophers' access seemed a bit much, but it is also a fair amount of bookkeeping just to do goose.
<mgz> yeah.
<dimitern> jam: but isn't that like the other go projects? all owned by ~gophers?
<jam> dimitern: sure, and all of us have the ability to push whatever we want to them. I'm hesitant to allow a bot the same access.
<mgz> it's easy enough to make ~gophers a member of the group that owns goose trunk, we just don't want the bot to have too much power :)
<jam> dimitern: it also stems from a group that doesn't have a habit of using a bot to control their trunks, vs the launchpad group using that method for a long time.
<dimitern> jam: wouldn't it need to merge and push changes?
<jam> dimitern: goose bot needs to be able to write to goose's trunk
<jam> I'm not sure that it needs rights to everything else owned by ~gophers
<jam> (juju-core, goamz, etc, etc.)
<dimitern> you don't trust the bot? c'mon :D
<dimitern> it's a nice guy
<jam> dimitern: the inverse is actually just as important (to me). I would rather the equivalent of having to run 'sudo' to be able to alter trunk.
<jam> so that: yes anyone on our team can fix trunk if we need to, but we have to do so very explicitly instead of accidentally
<dimitern> jam, makes sense - preventing accidental mistakes, provided it doesn't impede normal workflow
<rogpeppe> dimitern: reviewed
<jam> so mostly I'm trying to find the tasteful point in balancing these desires, adding goose-bot to ~gophers is the most expedient solution, though not one I personally control.
<jam> as is having the branch owned explicitly by goose-bot
<jam> I suppose goose-bot the launchpad account could just have us all know the password, and all have our ssh-keys attached to the account.
<jam> mgz: ^^?
<jam> that would give us "do X to become the goose-bot superuser"
<dimitern> rogpeppe: tyvm
<jam> mgz: ah, but the problem ends up that only goose-bot can approve changes to ~goose-bot/ branches. I'll have to look into the review side of things to figure out how to fix that
<mgz> that would probably be okay
<jam> mgz: looks like you can set a "reviewers" on a branch
<jam> which I can just make ~gophers
<mgz> yeah
<jam> so they can approve branches even though they can't commit directly to it.
<mgz> jam: poke?
<wallyworld_> rogpeppe: i am around now, am in a stand up
<rogpeppe> wallyworld_: np, the issue's been fixed now.
<wallyworld_> ok, sorry i was at soccer training
<rogpeppe> wallyworld_: i only pinged you 'cos you were the only one that showed up on the IRC user list :-)
<wallyworld_> ah ok
<wallyworld_> i didn't log out at my EOD
<rogpeppe> a couple of CLs if anyone wants. the first one is pretty trivial, the second a bit bigger: https://codereview.appspot.com/7197043/ https://codereview.appspot.com/7133063/
<dimitern> rogpeppe: I'm on the first one
<rogpeppe> dimitern: thanks
<dimitern> rogpeppe: reviewed
<rogpeppe> dimitern: thanks
<rogpeppe> anyone else wanna take it to the magic two LGTMs? :-) https://codereview.appspot.com/7197043/
<dimitern> rogpeppe: it's weird though, unlike the usual case the first comment wasn't "Please take a look" from you
<rogpeppe> dimitern: ha, i'd forgotten to lbox propose it without -wip
<rogpeppe> dimitern: it's got that message now :-)
<dimitern> rogpeppe: ok :)
<rogpeppe> niemeyer: yo!
<niemeyer> rogpeppe: Heya!
<dimitern> niemeyer: hey!
<niemeyer> dimitern: Hey there
<dimitern> niemeyer: would you like to take a look at this - https://codereview.appspot.com/7181046/
<jam> mgz, wallyworld_, dimitern: I just sent you guys an email with the security keys, etc you'll need. I used your public GPG keys, but if you need help figuring out how to decode them, just ask.
<dimitern> jam: cheers
<dimitern> jam: yep, I have to replace that PGP key for which I forgot the passphrase a long time ago
<dimitern> jam: I'll do it today and you can resend it with my new key perhaps?
<rogpeppe> niemeyer: small CL to add multiple addrs to the API: https://codereview.appspot.com/7197043/
<niemeyer> dimitern: Checking
<dimitern> thanks
<niemeyer> rogpeppe: Super, thanks
<niemeyer> rogpeppe: Btw, have you had a chance to check that bug Dave filed?
<rogpeppe> niemeyer: which one?
<niemeyer> rogpeppe: The one saying bootstrap is broken
<rogpeppe> niemeyer: i couldn't reproduce it. i suspect he was running without --upload-tools.
<niemeyer> rogpeppe: Have you tried in the regions he suggested?
<niemeyer> rogpeppe: Or at least one of them?
<rogpeppe> niemeyer: i'll do that. the error in the bug report "error: API entity name not found in configuration" shouldn't be possible in the branch he was talking about.
<rogpeppe> niemeyer: but there may well be another bug
<niemeyer> rogpeppe: I'm poking because he asked there, so it'd be good to at least interact to clarify the bug
<niemeyer> Since Dave isn't on the same timezone, if we don't get back to him on async locations, we won't talk
<niemeyer> dimitern: Any significant change to the ec2 logic on that bootstrap function?
<dimitern> niemeyer: how do you mean?
<dimitern> niemeyer: no, it's essentially the same
<dimitern> niemeyer: the Bootstrap() code, that is
<rogpeppe> niemeyer: i'm trying it now
<niemeyer> rogpeppe: Thanks a lot
<niemeyer> dimitern: Cool, I was really just curious if there was anything that had to be done differently there
<niemeyer> dimitern: It's great that there isn't (except for Roger's point in the CL, which I agree with)
<niemeyer> dimitern: Have you seen rogpeppe's comment?
<niemeyer> dimitern: Are there consistency issues there too?
<dimitern> niemeyer: well, it seems not - both tests (svc doubles and live) pass - although on canonistack the lack of floating IPs fails the test, but everything else works
<dimitern> niemeyer: about consistency - not sure how to test that, but I've run into other consistency issues with OS on some API calls
<niemeyer> dimitern: Oh, consistency issues and "seems" don't go well together :-)
<niemeyer> dimitern: The idea, in S3, is that the answer is totally arbitrary for an undefined period of time
<rogpeppe> dimitern: i suggest starting without any of the eventual-consistency loops, and seeing if things still work.
<niemeyer> dimitern: You create, and it's not there.. you delete, and it's still there
<niemeyer> rogpeppe: Please no
<dimitern> niemeyer: I've had the same issues with swift while doing live tests
<niemeyer> rogpeppe: We already have the logic for that.. let's not introduce the bug to see if we can hit it
<dimitern> niemeyer: creating/deleting/modifying containers in quick succession
<niemeyer> rogpeppe: I agree with you cleaner would be better, but let's ask someone that actually knows that stuff if OpenStack is consistent or not
<rogpeppe> niemeyer: i suppose so. but if there are no such issues in openstack, then that code is more complex for no good reason
<rogpeppe> niemeyer: yes, that's a good plan
<niemeyer> rogpeppe: Sure, that's the part I agree.. I just don't think rolling back something we know works well to wait for the bug is great
<niemeyer> dimitern: Oh, then it has the same issues
<dimitern> rogpeppe: well, we can ask yes, but how can it be consistent, when the arch is the same - distributed, multi-tier, API disconnected from the actual service, etc.
<niemeyer> dimitern, rogpeppe: I suggest just leaving the same logic there then
<dimitern> niemeyer: yeah, I agree
<niemeyer> dimitern, rogpeppe: If we start to have a few other backends with the exact same logic of Bootstrap, we should factor it out..
<rogpeppe> niemeyer: agreed
<niemeyer> dimitern, rogpeppe: That said, I think OpenStack is perhaps too similar to EC2.. I suggest seeing how the others look like before going over the trouble of generalizing
<rogpeppe> niemeyer: +1
<dimitern> niemeyer: +1 as well - let's not refactor/optimize prematurely :)
<niemeyer> dimitern: LGTM
<niemeyer> dimitern: Does it work for real already?
<dimitern> niemeyer: great, 10x
<dimitern> niemeyer: yes it does on canonistack, partially
<dimitern> niemeyer: starting the instance works, putting the tools, all ok, except for the floating IP shortage, which fails to add an address to the machine after building
<dimitern> rogpeppe: can I consider your review as LGTM as well, with the points mentioned addressed?
<rogpeppe> dimitern: i'll have a quick once-over, if that's ok
<rogpeppe> niemeyer: i have a suspicion that eventual-consistency issues are fouling upload-tools.
<dimitern> rogpeppe: you mean repropose it after changes before submitting?
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe: sure
<rogpeppe> niemeyer: ah, no that's not it.
<dimitern> rogpeppe: reproposed - https://codereview.appspot.com/7181046/
<rogpeppe> dimitern: looking
<rogpeppe> dimitern: LGTM
<dimitern> rogpeppe: cheers
<rogpeppe> niemeyer: bootstrap against sa-east-1 works fine for me, although it failed for me first time because i used a different juju to the one it was uploading
<rogpeppe> niemeyer: i suspect that was dfc's issue too
<niemeyer> rogpeppe: Curious.. was the symptom the same?
<rogpeppe> niemeyer: similar
<rogpeppe> niemeyer: the problem (for me) was that it was making the cloudinit script with an old version of the agent config data structure.
<niemeyer> rogpeppe: How did that change again?
<niemeyer> rogpeppe: This kind of thing is interesting to observe and keep in mind
<niemeyer> rogpeppe: This is a major breakage
<rogpeppe> niemeyer: in my case, i was using a version that produced JSON, not YAML
<niemeyer> rogpeppe: Ah, that
<rogpeppe> niemeyer: the problem is that we're not incrementing the version numbers when we make incompatible changes
<rogpeppe> niemeyer: which is of course convenient when we're breaking things all the time, but will lead to these kinds of issue
<rogpeppe> niemeyer: actually... i don't think we check that the tools we're uploading are compatible with the version that's running. that should be fixed, if true.
<rogpeppe> niemeyer: (not that that would've helped here though)
<niemeyer> rogpeppe: What I mean is that breaking isn't okay.. the problem is breaking
<niemeyer> rogpeppe: and breaking without even being aware that you're breaking
<rogpeppe> niemeyer: i was very aware that i was making breaking changes
<niemeyer> rogpeppe: You should mention that, and coordinate with Dave an update to the tools
<niemeyer> rogpeppe: Since whatever is up there doesn't work anymore
<niemeyer> rogpeppe: Just being aware isn't quite enough :)
<rogpeppe> niemeyer: i was presuming dave was updating the tools on every release, given that every release is breaking backward compatibility, pretty much
<niemeyer> rogpeppe: Sorry, I'm talking past you
<rogpeppe> niemeyer: the next one is going to too - how big a priority for us is preserving backward compatibility currently?
<niemeyer> rogpeppe: Please coordinate with Dave when you introduce breaking changes.
<niemeyer> rogpeppe: Thanks.
<rogpeppe> niemeyer: will do
<rogpeppe> lunch
<niemeyer> rogpeppe: Please ping once you're back
<rogpeppe> niemeyer: ping
<niemeyer> rogpeppe: Yo :)
<niemeyer> rogpeppe: Do you have some time for a call in.. 10 mins?
<rogpeppe> niemeyer: sure
<niemeyer> rogpeppe: Cool, thanks
<niemeyer> rogpeppe: Will send the invite
<aram> I won't be able to join today's kanban meeting, have to run to some place. mramm, I think this applies to our 1:1 as well (though not 100% yet, I might be back soon enough).
<mramm> I may not be able to make the 1 on 1 either
<mramm> there are sabdfl meetings all day here in Austin
<rogpeppe> is anyone gonna make the kanban meeting?
<rogpeppe> i'll be there if anyone else will
<rogpeppe> fwereade, mramm: hiya
<rogpeppe> fwereade: how's tricks?
<fwereade> rogpeppe, heyhey, not bad thanks
<fwereade> rogpeppe, and yourself?
<rogpeppe> fwereade: not bad at all thanks, pushing on.
<rogpeppe> fwereade: missing your reviews :-)
<fwereade> rogpeppe, awesome
<fwereade> rogpeppe, I came here with the best of intentions but it's been hard to find the energy ;)
<rogpeppe> fwereade: no, i wouldn't expect you to while you're there tbh
<fwereade> rogpeppe, jolly good :)
<dimitern> fwereade: hey :) how's texas?
<niemeyer> Lunch.. biab
<fwereade> dimitern, heyhey
<fwereade> dimitern, texas is... texan :)
<fwereade> dimitern, although that's an unfair thing to say really, this is austin after all
<fwereade> dimitern, and I've barely left the hotel actually
<fwereade> dimitern, when not working, sleeping has been high priority
<fwereade> dimitern, I *am* very happy that my bag was delivered last night
<fwereade> dimitern, that's 3 times in a row now
<fwereade> dimitern, tyvm for reviews btw
<dimitern> fwereade: wow, lost again?
<fwereade> dimitern, yep :/
<fwereade> dimitern, I'm getting used to it now though
<dimitern> fwereade: np, my pleasure
<dimitern> fwereade: hehe, all in there?
<fwereade> dimitern, yeah, nothing lost
<fwereade> dimitern, all I really suffered was a brief blood-pressure spike at houston
<dimitern> fwereade: hmm, that doesn't sound good
<dimitern> fwereade: how was the flight?
<fwereade> dimitern, only metaphorical -- a brief internal GAAAAAAAAAAAAAAAAH
<fwereade> dimitern, flight was way smoother than I expected given the weather at heathrow
<fwereade> dimitern, greenland was covered in clouds, which was a shame, that place is *beautiful* if you have a nice view from 40000ft
<dimitern> fwereade: yeah, they cancelled like 250 flights that day
<fwereade> dimitern, oof, luckier than I thought then :)
<dimitern> luckily you managed to sneak through
<fwereade> dimitern, was able to be pretty relaxed about the prospect though, I could always have stayed the night with my sister, she's pretty close
<dimitern> fwereade: she always seems around when you're somewhere on a trip :)
<fwereade> dimitern, heh, maybe :)
<fwereade> hazmat, ping
<dimitern> fwereade: we have bootstrap working with openstack now, landed today
<fwereade> dimitern, w00t!
<dimitern> :)
<dimitern> yeah, it's taking shape faster now I think
<dimitern> the provider
<hazmat> fwereade, pong
<fwereade> hazmat, I think we have enough people to discuss identities if you can tear yourself away
<fwereade> hazmat, mramm has been collecting interested parties
<rogpeppe> i'd love a review of this branch, if anyone can spare a mo. i'm stacking up stuff on top of it. it shouldn't be too bad, despite the number of files changed: https://codereview.appspot.com/7133063/
<rogpeppe> dimitern: unfortunately the branch you reviewed earlier was WIP for a good reason - i hadn't got tests to pass yet!
<mgz> minor details :)
<rogpeppe> dimitern: i've now updated it so they do, but somehow it was necessary to add some more stuff along the way, and it's acquired a dependency. if you could have another look (and at the dependency too), that'd be great. https://codereview.appspot.com/7197049/
<dimitern> rogpeppe: sorry, just saw this
<rogpeppe> dimitern: np
<dimitern> rogpeppe: I'll take a look now
<rogpeppe> dimitern: thanks
<dimitern> rogpeppe: while I'm on it, would like to look at https://codereview.appspot.com/7195046
<rogpeppe> dimitern: looking
<niemeyer> rogpeppe: Would be interesting to have your opinion on https://codereview.appspot.com/7149043/
<niemeyer> rogpeppe: as you're working right around it
<rogpeppe> niemeyer: i've pinged dfc for a conversation about this
<rogpeppe> niemeyer: hopefully we'll have one tonight
<rogpeppe> niemeyer: i think it's the wrong thing to do, but he might have better plans than me, so i want a chat before commenting too much
<niemeyer> rogpeppe: What seems wrong, specifically?
<niemeyer> rogpeppe: Looks quite straightforward to me
<rogpeppe> niemeyer: i think it should be a machiner job, not a new subcommand
<niemeyer> rogpeppe: Ah, indeed
<rogpeppe> niemeyer: then it works at the same level as the API server, and in the future we'll be able to dynamically adjust servers etc
<niemeyer> rogpeppe: Agreed
<dimitern> rogpeppe: you've got a review
<rogpeppe> dimitern: thanks a lot
<niemeyer> rogpeppe: Would you mind to point that out in the CL?
<niemeyer> rogpeppe: Even if you include "Let's talk live about it."?
<rogpeppe> niemeyer: ok, i will do
<niemeyer> rogpeppe: Thanks!
<niemeyer> rogpeppe: The stater itself looked nice.. I've skimmed over too quickly thinking it was a job as well
<rogpeppe> niemeyer: yes, the stater itself is ok, but i'm not sure it works very well as a task
<rogpeppe> niemeyer: most of the logic is fine though
<niemeyer> rogpeppe: Maybe it needs additional stuff, but what's there seems well structured, and necessary
<rogpeppe> niemeyer: yes. i'm not entirely sure that worker/stater is the right place, but all the unpacking and running logic is good, which makes up the bulk.
<niemeyer> rogpeppe: Seems right as a worker too
<dimitern> mgz: ping
<rogpeppe> niemeyer: i was wondering if the tar unpacking code might live nicely in a standalone package, but it's probably ok where it is until we need it elsewhere.
<niemeyer> rogpeppe: Indeed
<rogpeppe> niemeyer: i'm going to try to catch dave tonight - about when he starts work.
<niemeyer> rogpeppe: Super, cheers
<dimitern> niemeyer: if you have time, take a look at https://codereview.appspot.com/7195046/
 * rogpeppe just started running first live test that connects to the API server... i wonder if it'll work.
<mgz> rogpeppe: did it? :)
<rogpeppe> mgz: still running...
<rogpeppe> mgz: oh, darn
<rogpeppe> mgz: failed
<mgz> ;_;
<dimitern> mgz: ah, you're here :) can you look at this https://codereview.appspot.com/7195046/ - especially the way private and public addresses are discovered (ported the logic from py-juju)
<mgz> dimitern: sure
<dimitern> mgz: thanks
<mgz> you ported the tests too, right? :)
<dimitern> mgz: well, not really, but I test for all paths I think
<mgz> the tests are the bit with value, because they cover the sort of json I actually saw in the wild
<dimitern> mgz: I see, then I could port their essence as well
<mgz> hm... and GetServerAddresses really shouldn't be needed, it's included in servers/{}/detail which juju is likely using in all cases
<dimitern> mgz: unfortunately, it's not the same always
<mgz> that's an ongoing issue though, the ip/name bindings aren't static
<mgz> unless you mean something else?
<dimitern> mgz: sometimes (at startup, but it happened to me multiple times after some uptime) the address fields are empty there, while the /ips returns them correctly
<mgz> right
<mgz> well, ips will only work once a binding exists
<mgz> but that's pretty much the same thing
<dimitern> what binding?
<dimitern> i'm not talking about floating ips
<mgz> when you first start a machine, you get details back before the networking setup is done
<dimitern> the automatic ones nova assigns at boot
<mgz> there's a hack you might notice in the python code that polls for contents of addresses in launch in one case (where you need to assign a floating ip)
<dimitern> mgz: yes, but even after that /servers/detail won't get me the addresses - they're always empty
<mgz> they're not always empty, they're just initially empty
<dimitern> mgz: well, /ips works even at BUILD, while /servers/detail doesn't
<mgz> I think because it reflects current reality rather than intent or something
<mgz> it works at BUILD, just not straight away, only when it's actually bound
<mgz> anyway, this might be a neat trick for the moment
<dimitern> anyway, it's not reliable to use /servers/detail for addresses in my experience
<dimitern> and having it this way simplifies the tests / doubles a bit
<mgz> really juju needs to learn that addresses aren't reliable
<dimitern> :)
<dimitern> especially if they float
<mgz> but if this hacks around that broken assumption for now, it's not the end of the world
<dimitern> yeah
<rogpeppe> mgz: i was listening on localhost!
<rogpeppe> time to stop
<rogpeppe> g'night all
<dimitern> rogpeppe: c u
<mgz> dimitern: commented, fine overall, but I think much of that code should live in juju-core
<dimitern> mgz: thanks, I'll think about it some more tomorrow
<fwereade> davecheney, ping
<fwereade> davecheney, I need to head to meetings in a few minutes but I should be on again in a couple of hours, let me know if I can clarify anything...
<rogpeppe> davecheney: ping
<davecheney> rogpeppe: ack
<rogpeppe> davecheney: yo!
<rogpeppe> davecheney: G+?
<davecheney> sure
<davecheney> ringing ...
<davecheney> this is fucking annoying
<davecheney> can we use Skype like grownups ?
#juju-dev 2013-01-24
<jtv> I'm trying to figure out how to unit-test a non-exported function...  Is that possible?  Is there a knack to it?
<rogpeppe> jtv: there are two techniques for that
<rogpeppe> jtv: you can write a test file that's in the package itself (package foo, not package foo_test)
<rogpeppe> jtv: or you can export the function for the tests only
<rogpeppe> jtv: i'm assuming you're talking about Go here
<jtv> rogpeppe: yes, Go, thanks.
<rogpeppe> jtv: there's a convention for applying the latter technique
<jtv> The test and the module are both in the same package.  Maybe the problem is that we have multiple source files and test files in the same package?
<rogpeppe> jtv: they're in the same directory, yes, but what does the "package" declaration at the top of the test file read?
<rogpeppe> jtv: there's no problem with having multiple source files and test files in the same directory
<jtv> They're in the same package (as well as the same directory)
<rogpeppe> jtv: then there should be no problem using unexported functions
<jtv> Hmmm
<jtv> Maybe the problem is getting the branch into my Go environment.
<rogpeppe> jtv: what error message are you getting?
<jtv> Just having it in $GOPATH/src/launchpad.net/ doesn't seem to do the trick.
 * jtv compiles...
<jtv> undefined: <my lower-case function name>
<rogpeppe> jtv: if you push the branch, i could take a brief look
<jtv> It's based on lp:~maas-maintainers/maas/gomaasapi (which I think already has the problem)
<jtv> Meanwhile, I'll commit & push.
<rogpeppe> jtv: i just ran go test in gomaasapi and it worked ok
<jtv> By the way, how do you get a branch with a URL like that into your Go environment?
<rogpeppe> jtv: it lives inside $GOPATH/src/$packagepath
<rogpeppe> jtv: so in this case, you might have it at $GOPATH/src/launchpad.net/maas
<rogpeppe> jtv: the awkward thing is that you can't have a differently named directory per branch, which doesn't fit well with the usual bzr way of doing things
<jtv> Yeah
<rogpeppe> jtv: because package paths are always absolute
<jtv> Hi there rvba
<rvba> \o jtv
<rogpeppe> jtv: i use cobzr
<jtv> Well current bzr versions supersede that, don't they?
<rogpeppe> jtv: but i know others have different workarounds
<rogpeppe> jtv: tell me more :-)
<jtv> See "Co-located branches with Bazaar" here: https://docs.google.com/a/canonical.com/document/d/1GQ3u7keE3hJH2oaweAm28K78hQZAbqeQqR9xwD9OR4I/edit#
<rogpeppe> jtv: cool!
<jtv> Nowhere near as cool as the title.
<rogpeppe> jtv: nice doc, BTW. i'll have a look through it.
<jtv> The more the merrier!
 * rogpeppe should really upgrade to q
<rogpeppe> "But if you do want pass-by-reference, it may be best to pass a pointer to the interface value instead."
<rogpeppe> i think that's spurious
<jtv> It depends a bit on what the purpose of the interface is, I suspect.
<jtv> But in light of other gotchas that have come up, it's certainly not one of our big worries. I'll take it out.
<rogpeppe> jtv: passing a pointer to the interface doesn't help if you want the methods to be operating on the reference
<jtv> No, of course not.
<jtv> But they do help if you want to see, for instance, changes that are made by other functions you pass it to.
<rogpeppe> jtv: yeah. but that's part of the method contract. a pass-by-value implementation of a method can still have side effects that are reflected in later method calls to the same object.
<rogpeppe> jtv: (for example, it might be operating on a global data structure, or an external file)
<jtv> Yes, I wasn't thinking so much of pointer receiver types as of general parameter-parameter passing.
<jtv> (Okay, I'm stating the obvious because an interface can't be a receiver type :)
<rogpeppe> jtv: yeah
<rogpeppe> jtv: in the section on non-nil interfaces, you might want to insert this link: http://golang.org/doc/faq#nil_error
<jtv> Oh yes, thanks
<jtv> Very nice to have some external feedback.  :)
<jtv> rogpeppe: My "go get launchpad.net/maas/gomaasapi" is taking forever.  I'm going to take a break.  After that, I'll see if my problems go away with the right branch setup.
<rogpeppe> jtv: i recommend using play.golang.org links rather than paste.ubuntu.com links. then people can easily try out the examples
<jtv> Oh, thanks.  I'll add a note.
<rogpeppe> jtv: in the "static polymorphism between structurally identical objects" section, you might want to link to the relevant spec section: http://golang.org/ref/spec#Conversions
<jtv> Thanks.  I'm trying to go off on a break here.  :)
<rogpeppe> jtv: ok, np. enjoy!
<jtv> :)
<rogpeppe> jtv: could you give me permission to add comments?
<rogpeppe> davecheney: hiya
<davecheney> rogpeppe: yo
<rogpeppe> davecheney: i'll do a proper review of worker/stater now we think we know where we're going...
<davecheney> rogpeppe: cool, i'mma about to step out for the evening
<davecheney> http://osakamonaurail.com/en/tour.html
<davecheney> ^ tonight
<davecheney> live
<davecheney> for the sydney festival
<davecheney> i have tix
<rogpeppe> davecheney: enjoy. i'll ping you this evening, 7am your time maybe
<rogpeppe> davecheney: cool. not heard of 'em
<rogpeppe> davecheney: japanese soul, interesting
<davecheney> they are tighter than tight
<rogpeppe> davecheney: :-) when my gran said "tight" she meant "drunk"
<rogpeppe> jtv: i looked for launchpad.net/maas/gomaasapi, but it doesn't seem to exist
<rogpeppe> $ bzr branch lp:maas/gomaasapi
<rogpeppe> bzr: ERROR: Permission denied: "Cannot create 'gomaasapi'. Only Bazaar branches are allowed."
<rvba> jtv: not sure why you would want to use launchpad.net/maas/gomaasapi right now... the code is in lp:~maas-maintainers/maas/gomaasapi while we're still getting the project started. http://paste.ubuntu.com/1565492/
<jtv> Gah.  I was using the wrong invocation of "go test" again.  That's all there is to it.
<rvba> That's what I suspected ;)
<jtv> Lost a lot of stupid time to that, barking up the wrong trees.
<rogpeppe> jtv: almost always i just use plain old "go test"...
<rogpeppe> jtv: an invocation that's hard to get wrong :-)
<jtv> otp
<rogpeppe> dimitern: hiya
<dimitern> rogpeppe: yo!
<dimitern> mgz, jam: standup?
<rogpeppe> aram: fancy doing some reviews at some point?
<niemeyer> Hello all!
<rogpeppe> niemeyer: yo!
<rogpeppe> niemeyer: are you still on reviews?
<rogpeppe> niemeyer: yay! just had first successful live API test.
<rogpeppe> niemeyer: well, "success" did take the form of a failure :-)
<rogpeppe> 3:49.482     c.Assert(apiInstId0, Equals, instId0)
<rogpeppe> 3:49.482 ... obtained string = "i-963dcbe6"
<rogpeppe> 3:49.482 ... expected state.InstanceId = "i-963dcbe6"
<niemeyer> rogpeppe: Woah
<niemeyer> rogpeppe: That's cool :-)
<rogpeppe> niemeyer: the string there came back from the Request method of the API
<niemeyer> rogpeppe: In that specific case, even better than silence, as you know it actually worked :-)
<rogpeppe> niemeyer: indeed
<rogpeppe> niemeyer: i had a good chat with davecheney last night BTW
<rogpeppe> niemeyer: i think we know where we're going with the HA stuff now
<niemeyer> rogpeppe: Any changes?
<rogpeppe> niemeyer: pretty much according to the sketch i showed you
<rogpeppe> all live tests pass
<niemeyer> rogpeppe: How's it different from what is already being pushed by Dave?
<rogpeppe> niemeyer: it doesn't involve stater as a jujud subcommand
<rogpeppe> niemeyer: worker/stater stays pretty much as he's proposing it though
<rogpeppe> niemeyer: although it needs more logic to check if it's already running
<niemeyer> rogpeppe: Cool, that makes sense
<rogpeppe> niemeyer: we run the stater as a job in the machine agent
<niemeyer> rogpeppe: +1
<rogpeppe> niemeyer: i'm happy with the direction. i think i can see it all hanging together now.
<rogpeppe> niemeyer: if you were up for doing some reviews, i've got a sequence of 4 noisy-but-fairly-insubstantial reviews that take us up to having a working API
<rogpeppe> https://codereview.appspot.com/7133063/
<rogpeppe> https://codereview.appspot.com/7197049/
<rogpeppe> https://codereview.appspot.com/7133072/
<rogpeppe> https://codereview.appspot.com/7195051/
<rogpeppe> niemeyer: but i totally understand if you don't want to
<niemeyer> rogpeppe: I'll surely have a look
<rogpeppe> niemeyer: thanks
<rogpeppe> niemeyer: how's 17070 for a port number? it's not on either of the two assigned-number lists i've looked at so far.
<rogpeppe> niemeyer: and 7070 is kinda mnemonic for juju if you squint right
<niemeyer> rogpeppe: Hah, looks good :)
<rogpeppe> niemeyer: i only use random ports in the tests, so we can run tests concurrently on the same machine without clashing
<niemeyer> rogpeppe: The default port for mongo is 27017, btw.. that's where 37017 comes from
<rogpeppe> niemeyer: i suspect that we may not need to use a fixed port number in the future, if an agent is actively publishing server addresses. but we may want to anyway.
<niemeyer> rogpeppe: You mean an agent with a fixed port just to tell us what the dynamic port is? :-)
<rogpeppe> niemeyer: no, i don't think we'd *need* any agent with a fixed port.
<niemeyer> rogpeppe: "if an agent is actively publishing server addresses"
<niemeyer> rogpeppe: How would one connect to said agent?
<rogpeppe> niemeyer: yeah, we need a place to publish, but we already need a place to publish (currently it's the private storage; you were suggesting tags, which seems like a good plan too)
<niemeyer> rogpeppe: Doesn't sound good
<rogpeppe> niemeyer: what i mean is that we've already got a well known place to look, without necessarily needing to have well known ports too
<niemeyer> rogpeppe: One thing is what we do.. something else is requiring that everyone talking to an API implement the full logic to figure out where to talk to
<dimitern> mgz: ping
<niemeyer> rogpeppe: Our "well known place" is not observable without a good amount of logic
<rogpeppe> niemeyer: we already need to find out the IP address. a port isn't much addition to that.
<niemeyer> rogpeppe: The only thing we need is to hand out an IP address
<niemeyer> rogpeppe: and the TCP port
<mgz> dimitern: hey
<niemeyer> rogpeppe: Which is somewhat common for a few decades :)
<rogpeppe> niemeyer: indeed. i'm not sure why the port needs to be well known though. we can hand out an IP-address/port pair if we want.
<niemeyer> rogpeppe: Man.. that's the kind of discussion I'd hope was unnecessary
<rogpeppe> niemeyer: and that might have advantages
<dimitern> mgz: so I found out what I was doing wrong - I assumed accessIPv4 and accessIPv6 in server detail were the addresses to look at, but there is indeed an addresses field with the same structure as server/ips returns
<niemeyer> rogpeppe: Please use a fixed port number for the service
<rogpeppe> niemeyer: for instance, we could potentially run several API servers on the same machine
<rogpeppe> niemeyer: i will, and i am
<niemeyer> rogpeppe: Thanks
<rogpeppe> niemeyer: apart from in tests
<niemeyer> rogpeppe: Sure
<rogpeppe> niemeyer: as for authentication, i don't mind it being open for the moment, as the only thing you can do is request an instance id, which doesn't seem too harmful.
<dimitern> mgz: so instead of landing the CL about /ips, I'll propose a new one, parsing the addresses part of server detail and using that to get the public/private addresses in juju
<mgz> right, it's the 'addresses' key you care about, which has a list of dicts as content
<rogpeppe> niemeyer: authentication is next on the agenda
<niemeyer> rogpeppe: Sounds good then.. please mention that in the CL
<rogpeppe> niemeyer: will do
<niemeyer> rogpeppe: There's no "Destroy" or anything, right?
<rogpeppe> niemeyer: maybe i'll just slip one in for the hell of it. plus a root shell :)
<dimitern> mgz: but the thing is - should I parse and make available all addresses reported, from goose into ServerDetail, so juju can pick which ones to use?
<mgz> dimitern: yes, I'd do that.
<dimitern> mgz: that's what you suggested - move the logic out of goose and into juju, providing all the necessary details
<rogpeppe> niemeyer: thanks v much for the reviews. that's helped immensely.
<dimitern> mgz: ok then
<niemeyer> rogpeppe: My pleasure
<mgz> dimitern: a list of struct that pretty much just corresponds to the json should be fine for now
<dimitern> mgz: what I'm not yet sure about is how to implement the "wait" part of getting DNSName - just keep calling server/detail until I have addresses?
<mgz> sounds reasonable for first stab
<dimitern> mgz: cool, I should have it soon
<mgz> it's pretty much what the python code does
<mgz> see the while loop in juju.providers.openstack.launch
<dimitern> mgz: ok, 10x
<mgz> (it only does it when configured to provide a public ip, but the principle is the same)
 * aram doesn't feel very well.
<dimitern> aram, get well soon!
<aram> trying; thanks.
<mgz> fix internal bugs! or kill them...
<rogpeppe> API is live in trunk. not that it *does* anything yet :-)
<rogpeppe> time for lunch and a walk in the snow, i think
<dimitern> rogpeppe: at least you have snow :)
<rogpeppe> dimitern: yeah it's great. still aching from sledging activity at the weekend :-)
<dimitern> rogpeppe: man, I envy you a bit - my snowboarding skills got all rusty in sunny malta
<dimitern> but the summer's great :)
<rogpeppe> dimitern: i haven't been skiing in years
<rogpeppe> dimitern: i miss it a bit
<dimitern> rogpeppe: at least you don't have to catch a plane to go skiing :)
<rogpeppe> dimitern: i do... the snow only happens here for about a week a year and the hills aren't big enough for skiing unless i go up to scotland where they're still small and the skiing's crap
<rogpeppe> anyway, lunch!
<dimitern> yeah, enjoy
<fss> niemeyer: ping
<niemeyer> fss: Hey
<fss> niemeyer: may we get those iam patches merged? :-)
<niemeyer> fss: Yeah, the end is near
<fss> niemeyer: thanks
<niemeyer> fss: I'm planning some time next week to go over goamz-everything
<fss> niemeyer: nice, we needed some ELB support too, and flaviamissi prepared patches for that too
<niemeyer> fss: Neat
<niemeyer> fss: Are they up for review already?
<fss> niemeyer: not sure, let me ask her
<fss> and there we go :-)
<flaviamissi> niemeyer, https://codereview.appspot.com/7014047/
<flaviamissi> hi, btw
<flaviamissi> x)
<niemeyer> flaviamissi: Cheers!
<niemeyer> flaviamissi: Hi :)
<flaviamissi> niemeyer, I still have to change the regions endpoints and update some stuff... I'll probably do it tomorrow
<flaviamissi> but if you could take a look
<niemeyer> flaviamissi: Sounds good. I'll probably not get to that before next week
<flaviamissi> niemeyer, alright then x)
<flaviamissi> thanks!
<fwereade> niemeyer, rogpeppe, davecheney: https://codereview.appspot.com/7198051 should be very simple: it's basically a straight move (no behaviour changes), leavened by the deletion of a whole bunch of redundant tests
<rogpeppe> fwereade: looking
<rogpeppe> fwereade: morning, BTW!
<fwereade> rogpeppe, heyhey :)
<rogpeppe> fwereade: assuming no test coverage has been lost, LGTM
<fwereade> rogpeppe, I believe that to be the case -- the only lack of redundancy I could find was moved into RUS
<fwereade> rogpeppe, and the intent was always that RUS be the real tests... ORUS was only there out of state-swap paranoia
<rogpeppe> fwereade: we have a "working" API BTW.
<fwereade> rogpeppe, sweeeet!
<rogpeppe> fwereade: it only has one request so far: Machine.InstanceId
<rogpeppe> fwereade: just to check end-to-end sanity
<fwereade> rogpeppe, a black triangle :)
<rogpeppe> fwereade: indeed so
<rogpeppe> fwereade: and now i'm pondering about how much (if any) to resurrect of my old rpc package branch
<fwereade> rogpeppe, oh yes?
<rogpeppe> fwereade: well, i want *something* to take the dogwork out of writing rpc methods
<rogpeppe> fwereade: and the standard rpc package isn't up to it
<rogpeppe> fwereade: because it can't deal with per-connection auth
<rogpeppe> fwereade: i'm wondering about the best way to express permissions without turning the code into a rats' nest
<fwereade> rogpeppe, ha, yes, this has been somewhat on my mind
<rogpeppe> fwereade: three immediate possibilities come to mind
 * fwereade is listening
<rogpeppe> fwereade: 1) map users (entity names) to capabilities; check the requisite capability inside each method
<rogpeppe> fwereade: 2) map capabilities to allowed users; check for the requisite capability inside each method
<rogpeppe> fwereade: 3) dispense with a table and just check for permissions explicitly inside each method
<rogpeppe> fwereade: rather, check for particular entities inside each method
<niemeyer> rogpeppe: It's not just about having a capability or not
<fwereade> rogpeppe, explicit per-method checks feel clearest and simplest to me... the first two smack a little of premature abstraction
<niemeyer> rogpeppe: Unit foo/1 has the see-config capability, but for its *own* config
<rogpeppe> niemeyer: that's the "see own config" capability :-)
<niemeyer> rogpeppe: Heh
<rogpeppe> niemeyer: i think we'll always need *some* explicit per-method checks
<niemeyer> rogpeppe: That was my point
<niemeyer> What fwereade said, though
<niemeyer> I'd get a feeling for the problem first
<rogpeppe> niemeyer: yes
<fwereade> rogpeppe, I'm sure the right abstractions will emerge in the fullness of time ;)
 * rogpeppe is off for the night.
<rogpeppe> see y'all tomorrow
<fwereade> gn rog
<rogpeppe> davecheney: ping
<davecheney> rogpeppe: ack
<davecheney> sorry i'm late
<davecheney> is now too late to call ?
<rogpeppe1> davecheney: sorry, missed your ack and now it *is* too late :-)
<davecheney> that is ok
<davecheney> would 7 tomorrow work for you ?
<davecheney> sorry 19:00
<rogpeppe1> davecheney: 1900 would be 6am for you i think
<davecheney> well, i'll ping you when I'm up
<davecheney> if it works out, all the better
<rogpeppe1> davecheney: sounds good. if you're up for an early saturday morning, it'd be good to chat...
<rogpeppe1> davecheney: otherwise maybe sunday night (monday morning for you). or the day after. if you toggle your away status, i'll know if it's worth pinging you.
<rogpeppe1> davecheney: anyway, things going ok?
<rogpeppe1> davecheney: what was the last thing you saw me say?
<rogpeppe1> ach well
<rogpeppe1> davecheney_: hope to speak some time
#juju-dev 2013-01-25
<mramm> bar/cafe in 15 min?
<wallyworld> davecheney: there's no easy way for me to call unexported functions on a struct from a test in a different package, is there? i want to be able to poke the internal state of something to set it up for a test but not have to use the public api, which is insufficient
<davecheney> wallyworld: var F = f()
<davecheney> put that in export_test.go
<davecheney> if they are methods then you can extend the struct in export_test.go and forward the methods
<davecheney> hang on, i'll find you a good example
<davecheney> wallyworld: juju-core/environs/export_test.go is a good example of the former
<davecheney> juju-core/environs/ec2/export_test.go is an example of the latter
<wallyworld> davecheney: but what about if the test code is in a different package to the unexported stuff
<davecheney> unpossible
<wallyworld> :-(
<wallyworld> that's quite a limitation
<davecheney> it is the way it is
<davecheney> stuff's either exported or not
<wallyworld> so now we have to go and make a whole bunch of stuff public sadly
<davecheney> yup, Go has no concept of Protected or Friend
<wallyworld> np, thanks for clarifying
<wallyworld> I keep running into such limitations with Go
<davecheney> there is a long and boring explanation that justifies it
<davecheney> but the short version is, it is what it is
<wallyworld> fair enough
<davecheney> wallyworld: one possibility
<davecheney> add exported methods to your type
<davecheney> but return an interface to 'filter' those methods
<davecheney> then you can type assert back to the original type
<davecheney> that may or may not be applicable
<wallyworld> davecheney: thanks, will look into that. the use case is that we have service doubles for nova etc, and want tests to be able to poke the internal state
<rogpeppe1> davecheney: mornin'
<rogpeppe1> wallyworld, jam: review of a trivial environs/openstack change? https://codereview.appspot.com/7196054
<TheMue> Morning
<rogpeppe1> TheMue: hiya
<rogpeppe1> TheMue: good conference?
<TheMue> rogpeppe1: Yes, many interesting talks there (mostly not about our business, but still good).
<TheMue> rogpeppe1: And my own talk went well; the people were very interested (many questions).
<rogpeppe1> TheMue: what did you talk about?
<TheMue> rogpeppe1: Additionally my publisher and I had a possible idea for my next book. ;)
<TheMue> rogpeppe1: I talked about world-wide distributed development of open-source software. Guess why. :D
<rogpeppe1> :-)
<rogpeppe1> TheMue: and your book idea?
<TheMue> rogpeppe1: This time it's not technological, but about how to present or give talks: from very small groups like a pre-sales presentation, through things like trainings or lightning talks within your own company, up to talks at conferences.
<rogpeppe1> TheMue: interesting.
<TheMue> rogpeppe1: I've liked this topic for a long time, and we found that many IT people have problems presenting their ideas or products.
<TheMue> rogpeppe1: Many are introverted, but after they've attended a training they're expected to present it to the rest of the team. Or they have to support the sales team. Those are the people we want to address.
<rogpeppe1> TheMue: yeah.
<TheMue> rogpeppe1: Yesterday I had to write my first obituary; Andreas Raab, a Smalltalk developer, died. It was a hard job.
<rogpeppe1> TheMue: yeah, very hard to get right
<TheMue> rogpeppe1: And how has the Juju week been?
<rogpeppe1> TheMue: pretty quiet with william and yourself away
<TheMue> rogpeppe1: OK, right now Dave's timezone is really bad, so there were only three of you.
<rogpeppe1> TheMue: yeah
<rogpeppe1> TheMue: i did manage to have a chat with dave for the first time this year a couple of days ago :-)
<rogpeppe1> TheMue: 10pm for me, 9am for him...
<TheMue> rogpeppe1: Hehe, yes, sometimes when I'm still online I see him starting his day.
<mgz> dimitern: these bootstrap tests taking this long isn't normal, right? <http://paste.ubuntu.com/1569225/>
<mgz> ...no dimitern
<mgz> oh, and rogpeppe has a branch up that looks pertinent
<mgz> rogpeppe1: taking 12 seconds on that test still seems unreasonable, but your branch is an improvement
<rogpeppe1> mgz: it was the first immediate cause that i could see
<rogpeppe1> mgz: starting and stopping mongo is fairly slow and often accounts for quite a bit of time
<rogpeppe1> mgz: 6s is spent compiling the juju source
<rogpeppe1> mgz: that will probably drop when we move to using a single binary (linking is the slowest part of go compilation)
<rogpeppe1> mgz: please respond to the review if you approve of it.
<mgz> rogpeppe1: I'm on it.
<rogpeppe1> mgz: ta
<mgz> ...I hate having to spend 5 minutes just to understand the imports when doing review... why is this in export_test.go but that is package openstack and imported from that name... ;_;
<mgz> shall just trust it makes some kind of sense.
<rogpeppe1> mgz: yeah too many "openstack" identifiers
<rogpeppe1> mgz: openstack := openstack.New() :-)
<rogpeppe1> mgz: things are easier when you have a tool that can reliably show you the definition of any identifier you find.
<rogpeppe1> mgz: i could give you one, but you'd have to tool it into whatever editor you use
<rogpeppe1> jam: second opinion on https://codereview.appspot.com/7196054/ ? it's pretty trivial.
<mgz> rogpeppe1: Fridays aren't work days in UAE, dimitern should be around though?
<rogpeppe1> mgz: doesn't appear to be
<rogpeppe1> mgz: trivial enough to submit anyway?
<mgz> yeah, I would.
<TheMue> rogpeppe1: I've got SMTP problems today with our server (IMAP is OK). Do you have the same troubles?
<rogpeppe1> TheMue: looks ok. last message came in for me an hour ago. i'll try to see if i can send myself a message.
<TheMue> rogpeppe1: Receiving is fine here too, only sending doesn't work due to an SSL error.
<rogpeppe1> TheMue: yes, i'm getting email fine
<rogpeppe1> TheMue: ah, ok, i'll try sending you an email
<rogpeppe1> TheMue: ok, i've sent you an email.
<TheMue> rogpeppe: That worked, hmm, I have to look deeper. Thanks.
<TheMue> rogpeppe: Ha, found it. It was a change in my network configuration from the conference rooms and the hotel. Now it is working again.
<rogpeppe> TheMue: cool
<rogpeppe> TheMue: are you back home?
<TheMue> rogpeppe: Yep
<rogpeppe> fwereade: ping
<fwereade> rogpeppe, pong
<fwereade> rogpeppe, how's it going?
<rogpeppe> fwereade: not bad. i've just made a list of state entry points and the entities that currently use/need to use them
<rogpeppe> fwereade: http://paste.ubuntu.com/1569931/
<fwereade> rogpeppe, excellent
<rogpeppe> fwereade: the fuzzy bits are around uniter
<rogpeppe> fwereade: (every time i say "restrictions?" it means i don't know what restrictions there should be)
<rogpeppe> fwereade: i'd appreciate it if you could go over it and have a check for sanity/omitted bits
<rogpeppe> fwereade: in fact, maybe i'll make it a google doc
<fwereade> rogpeppe, +1
<fwereade> rogpeppe, one thing that leaps out is State.EnvironConfig, which the uniter uses but shouldn't
<rogpeppe> fwereade: https://docs.google.com/a/canonical.com/document/d/16irceDbjcOuQElcRT1pjL2cCbSF_-YSXg7Yte9Qf3RI/edit
<rogpeppe> fwereade: yes, i've got my eye on that
<fwereade> rogpeppe, it just needs State.EnvironType
<rogpeppe> fwereade: indeed
<fwereade> rogpeppe, and it's not clear why machines need charms or relations
<fwereade> rogpeppe, re AssignUnit, would you take a look at https://bugs.launchpad.net/juju-core/+bug/1101139 and let me know your thoughts?
<_mup_> Bug #1101139: units are not assigned transactionally <juju-core:New> < https://launchpad.net/bugs/1101139 >
<rogpeppe> fwereade: ah yes, those were from early in my list making, and i got it wrong.
<fwereade> rogpeppe, cool
<rogpeppe> fwereade: fixed in the google doc
<fwereade> rogpeppe, I seem to be view only on the doc
<rogpeppe> fwereade: are you as your canonical address?
<fwereade> rogpeppe, it seems to think I am
<rogpeppe> fwereade: ah, sorry, i misinterpreted a message. changed.
<rogpeppe> fwereade: you should be able to edit now
<fwereade> rogpeppe, cheers
<rogpeppe> fwereade: we're going to have to think carefully about permissions checking, and how to make sure we don't incur an extra round trip (api server <-> mongo) on many operations just to check permissions.
<rogpeppe> fwereade: i'm just off to lunch. catch you in a little bit, i hope.
<fwereade> rogpeppe, enjoy, please consider that bug I linked
<fwereade> rogpeppe, it impacts that doc quite hard if I'm not on crack
<fwereade> aram, ping
<dimitern> fwereade: hey, aram said yesterday he wasn't feeling well
<fwereade> dimitern, heh, ok, I wish he'd mail us about this sort of thing
<dimitern> fwereade: dunno
<dimitern> fwereade: how's it going there?
<fwereade> dimitern, np, thanks for letting us know
<fwereade> dimitern, pretty good actually, I feel like we know what we're doing
<fwereade> dimitern, how's the rock?
<dimitern> fwereade: that's good! :)
<dimitern> fwereade: raining pretty much every day lately.. but sometimes sunshine
<fwereade> dimitern, cool, looking forward to coming back
<dimitern> fwereade: it seems the weather in the morning is plotting against my idea to go for a run - it always pours early
<fwereade> dimitern, haha
<dimitern> fwereade: yeah, I realized I forgot to give you something from back home
<fwereade> dimitern, yay, bulgarian deliciousness
<fwereade> dimitern, looking forward to it :)
<dimitern> fwereade: not really, this is more to look at :) one of my mother's pastels
<fwereade> dimitern, oh wow!
<dimitern> fwereade: I'll get it next time
<fwereade> dimitern, excellent
<fwereade> dimitern, I'll be back sunday lunchtime, flights permitting, and will probably be trying to stay up until night time, so I'll probably be around a bit
<dimitern> fwereade: great
<fwereade> bbiab
<dimitern> fwereade: btw, if you happen to pass a book store which has Iain M. Banks's The Hydrogen Sonata - it's worth getting one ;) excellent
<rogpeppe> dimitern, fwereade: i've almost finished Gone Away World, which is brilliant too
<dimitern> rogpeppe: who's the author?
<rogpeppe> dimitern: nick harkaway.
<rogpeppe> dimitern: it's his first novel
<fwereade> rogpeppe, oh yes? cool
<fwereade> rogpeppe, dimitern: I enjoyed Blue Remembered Earth (alastair reynolds)
<dimitern> rogpeppe: i'll check it out, 10x
<rogpeppe> fwereade: i'll bring it to the sprint if you haven't got it first
 * TheMue just started Porterhouse Blue by Tom Sharpe. Not easy for a non-native speaker.
<dimitern> fwereade: more reynolds stuff, good
<rogpeppe> fwereade: brilliant twists
<fwereade> rogpeppe, cool, I might be up to date on your books by then :)
<rogpeppe> fwereade: :-)
<fwereade> rogpeppe, excellent, it is always nice to be surprised -- the things in BRE that were, I think, meant to be surprises were... less so
<rogpeppe> fwereade: it's also extremely funny
<fwereade> rogpeppe, and I've got a mental option on a surprise for the next book
<fwereade> rogpeppe, excellent
<rogpeppe> fwereade: BTW are we always guaranteed that there's only one provisioner?
<rogpeppe> fwereade: how does that fit with HA?
<rogpeppe> fwereade: (looking at your description of the bug you mentioned)
<fwereade> rogpeppe, IMO, no we are not, because we will require that the provisioner be HA as well
<rogpeppe> fwereade: so this is spurious, right?
<rogpeppe> we could trust the Provisioner (which also has Environ access) to do exactly the same things... but without races (because there's only one provisioner).
<fwereade> rogpeppe, I think I address that a bit further down, I'm talking about Right Now at that point
<rogpeppe> fwereade: ok
<rogpeppe> fwereade: ha, i thought i'd got to the end
<fwereade> rogpeppe, ie we'll need some sort of magic locking for the provisioner and I'm comfortable handwaving that for now
<rogpeppe> fwereade: yeah. taking a lease on a unit might work for locking.
<fwereade> rogpeppe, I'm partly wondering whether ultra-coarse-grained locking might be acceptable for the forseeable future
<fwereade> rogpeppe, only one provisioner active at once
<rogpeppe> fwereade: doesn't it have to be some kind of lease? otherwise a hung-up provisioner will stop other provisioners from running.
<rogpeppe> fwereade: but coarse-grained is probably fine, yeah
<fwereade> rogpeppe, sure, I'm just talking about the graininess
<rogpeppe> fwereade: big +1 on the suggestion in the bug
<fwereade> rogpeppe, excellent, ty
<rogpeppe> fwereade: i don't think it loses generality, as we can always provide the means for a client to *specify* a machine when adding the unit, even if it doesn't allocate it itself.
<fwereade> rogpeppe, yeah, indeed
<rogpeppe> fwereade: in that case, the unit may remain unassigned, but i think that's ok
<fwereade> rogpeppe, hmm, we will need to be careful about that
<rogpeppe> fwereade: we could treat it as a broken unit
<fwereade> rogpeppe, yeah, but we still need to write new code to handle it anyway
<rogpeppe> fwereade: it's really the responsibility of whoever is adding units, if they're specifying machines that clash.
<fwereade> rogpeppe, a direct assignment from the CLI is not necessarily too horrible txn-wise
<rogpeppe> fwereade: but, yeah, it would be nice to be able to say "this machine should not be taken by a random addunit; save it for explicit assignment instead"
<rogpeppe> fwereade: ah that's true!
<rogpeppe> fwereade: the problem is in searching for an unassigned machine
<fwereade> rogpeppe, it's just the assignment dance that gives me night terrors ;)
<fwereade> yeah
<rogpeppe> fwereade: yeah, it's not great, and there's no recovery.
<rogpeppe> fwereade: and this change would pave the way for set-unit-count too, if we wanted that
<fwereade> rogpeppe, +1
<fwereade> rogpeppe, (I think we do, but there's no time for that this cycle ;))
 * fwereade slopes off for another ciggie
<rogpeppe> fwereade: indeed
<rogpeppe> fwereade: "(not deploy: relations added in AddService)"
<rogpeppe> fwereade: Conn.AddService calls State.AddRelation
<rogpeppe> fwereade: so i still think we have to say State.AddRelation is currently called by juju deploy.
<rogpeppe> fwereade: we may well wish to factor the juju.Conn calls into their own rpc calls though
<rogpeppe> fwereade: or... perhaps Conn.AddService no longer needs to add peer relations?
<fwereade> rogpeppe, I'm about to propose that change
<rogpeppe> fwereade: ah!
<fwereade> rogpeppe, I've actually dropped Conn.AddService -- the only thing it did was to choose a name for the service if empty
<fwereade> rogpeppe, (apart from the relations, that is)
<rogpeppe> fwereade: +1
<fwereade> rogpeppe, actually, can I ask you to take a quick look at the reviews I have up? there are a couple with 1 LGTM
<rogpeppe> fwereade: ok, will do
<rogpeppe> fwereade: which one first?
<fwereade> rogpeppe, cli-destroy-service is easiest
<fwereade> rogpeppe, other ones in order
<fwereade> rogpeppe, I ballsed up the prereqs a little, https://codereview.appspot.com/7205058 doesn't strictly require it, but I'm not changing it now... and the followup does require all the prereqs
<rogpeppe> fwereade: i'm getting "chunk mismatch" on https://codereview.appspot.com/7138062/
<fwereade> rogpeppe, ah balls
<rogpeppe> fwereade: i can review the raw diffs though
<rogpeppe> fwereade: assuming they're not too involved
<fwereade> rogpeppe, it's trivial frankly
<rogpeppe> fwereade: LGTM with a couple of trivial comments
<fwereade> rogpeppe, cheers
<rogpeppe> fwereade: have you run all the tests on https://codereview.appspot.com/7205058/ ?
<fwereade> rogpeppe, yes
<fwereade> rogpeppe, is something obviously screwed?
<rogpeppe> fwereade: ah ha! i believe you haven't run the live tests.
<fwereade> rogpeppe, ah!
<rogpeppe> fwereade: in livetests.go:
<rogpeppe> svc, err := conn.State.AddService("", sch)
<fwereade> rogpeppe, good catch, thank you
 * fwereade hangs head in shame
<rogpeppe> fwereade: it will probably work once you've fixed that, but worth running live tests anyway :-)
<fwereade> rogpeppe, will do
<rogpeppe> fwereade: i think you've got reviews on everything now
<fwereade> rogpeppe, <3
<TheMue> yip
<fwereade> TheMue, <3 also :)
<fwereade> TheMue, would you take a very quick look at https://codereview.appspot.com/7198051/ please?
<TheMue> fwereade: Oh, missed it. Sure. *click*
<TheMue> fwereade: You've got a +1 on it. ;)
<fwereade> TheMue, cheers
<TheMue> fwereade: yw
<rogpeppe> fwereade: if you have some moments, it'd be great to fill out the rest of that api doc. particularly the uniter parts. i'm not sure what restrictions are appropriate for a uniter on accessing other units.
<fwereade> rogpeppe, good point, ty
<rogpeppe> fwereade: have a safe trip back
<fwereade> rogpeppe, cheers, have a good weekend :)
<rogpeppe> and great weekends to all
<rogpeppe> g'night
#juju-dev 2014-01-20
<benonsoftware> Hiya, I'm wondering if there's anyway to make juju launch spot instances?
<thumper> o/ bigjools
<jam> morning dimitern
<dimitern> morning
<jam> dimitern: 1:1?
<dimitern> jam, sorry, coming
<jam> morning fwereade
<jam> rogpeppe: 1:1?
<fwereade> heya jam
<rogpeppe> jam: just going there
<wallyworld> fwereade: hi, i need to add a SetAddresses() api to machiner. most of the other machiner apis are plugged in via common structs since the api is applicable to machine or unit. But SetAddresses appears exclusive to machine, even though unit does have SetPrivateAddress, SetPublicAddress. Are those latter 2 apis sticking around? Should they be refactored to SetAddresses?
<fwereade> wallyworld, those unit methods are due for retirement at some point, because unit addresses *themselves* are due for retirement -- a unit's address will be determined by its machine
<wallyworld> fwereade: that's what i was hoping :-)
<fwereade> wallyworld, (and likely what network it might be on, but that's a distinct issue)
<fwereade> wallyworld, one thing that needs to be considered
<fwereade> wallyworld, is how we handle identifying those networks
<fwereade> wallyworld, (if you look at state.Address, you will see a network field)
<wallyworld> fwereade: it is for the local provider, which currently creates a container and we don't currently record the address
<fwereade> wallyworld, and it's not clear how the machine agent is to infer this
<fwereade> wallyworld, so in that case it's actually easy, it goes on the "private" network
<wallyworld> fwereade: so the use case is initially limited, but when this is fixed, juju ssh etc will work for local provider, since we will know the address
<fwereade> wallyworld, I understand why it's important
<fwereade> wallyworld, but I also want to keep all my machine agents acting the same way
<fwereade> wallyworld, and we need to figure out exactly how this overlaps with addressupdater
<wallyworld> sadly, we can't quite do that fully cause the address updater calls Addresses() on the provider and the local provider cannot get the container's address
<wallyworld> the only way i could see was for the container to write the address info to cloud storage and Addresses() would read it from there, but yuk
<fwereade> wallyworld, nah that's perfectly horrible, we shouldn't do that ;p
<fwereade> wallyworld, (hmm; lxc-attach?)
<wallyworld> maybe, but cannot be done for kvm outside container
<wallyworld> we really, really want to remove "not implemented" from Addresses() for the local provider but i don't see how it is possible :-(
<wallyworld> sux to have to do it different for local provider
<wallyworld> happy to consider viable options we haven't thought of
<fwereade> wallyworld, hold on, given the total control we have over the situation, can we not literally carve out a range on our container network and assign the addresses ourselves
<wallyworld> for local provider do we really have the ability to do that?
<fwereade> wallyworld, I guess not if people are using their own bridge, bah
<wallyworld> and not if we want local provider containers to be exposed externally
<fwereade> wallyworld, we don't want *that*
<wallyworld> ok, so just access from local machine
<fwereade> wallyworld, but the forces there are very similar to the forces involved in exposing non-local-provider containers
<fwereade> wallyworld, so I really think we need to come up with some model that really covers all the cases, at the moment it feels like there's a big fuzzy boundary somewhere around containers
<wallyworld> i think so yeah
<wallyworld> i'd be happy to defer any work to the folks working on container addressibility
<fwereade> wallyworld, haha :)
<fwereade> wallyworld, I'm really keen on you doing this stuff, but it would be good if you could sync up on it with mgz and/or dimitern
<wallyworld> if it overlaps greatly with non-local-provider containers, no point me doing it
<wallyworld> andrew is actually doing it too
<fwereade> wallyworld, well, that's what I want to discover, possibly we can chat about it at standup time?
<dimitern> wallyworld, we haven't started actual work on the containers yet, just preliminary prereqs, like changes to goose (goamz to follow)
<wallyworld> ok, if i'm still awake, i'm in nz :-)
<fwereade> wallyworld, my suspicion is that there is a chunk of work that could be done by either group and that it's important to figure out what that chunk is and who's best placed to do it
<wallyworld> yeah
<fwereade> wallyworld, it might alternatively just be a matter of defining exactly how juju builds up a picture of the network by paying attention to *both* agent- and provider-supplied addresses
<fwereade> wallyworld, that might be best of all, even
<wallyworld> i think andrew was experimenting with doing something with jujud
<fwereade> wallyworld, oh yes?
<fwereade> wallyworld, (btw, I hope the sprint is going nicely)
<wallyworld> jujud runs inside container, gets addresses, calls machiner.SetAddresses()
<wallyworld> yeah it is, lots of talking today, plus hacking
<wallyworld> fwereade: if you wanted to discuss, perhaps we can have a chat before standup with whoever, so i can get some sleep?
<fwereade> wallyworld, dimitern: that sgtm -- I'd like mgz involved though, so please sleep whenever you need: I think I can represent your interests adequately ;)
<wallyworld> ok :-)
<wallyworld> fwereade: if i'm not around, can you pop me a quick note so we can see what the outcome is?
<fwereade> wallyworld, will do
<wallyworld> i'll try and stay
<mgz> morning
<jam> mgz: mumble?
<mgz> it's not happy for some reason....
<jam> mgz: g+ works fine for me, so whatever works for you
<mgz> need new mumble version it seems, updating
<mgz> jam: okay, I'm on
<eagles0513875_> hey fwereade  :D dimitern
<dimitern> eagles0513875_, hey man! :)
<eagles0513875_> dimitern: missed you at the meeting on saturday :)
<dimitern> eagles0513875_, well, it would've been a bit difficult to attend - i'm back home in sofia
<eagles0513875_> dimitern: well we have it recorded. :)
<eagles0513875_> dimitern: hows the weather there
<dimitern> eagles0513875_, getting better despite the forecast - 12-14ish and sunshine
<eagles0513875_> cooler then here for sure
<eagles0513875_> sunny and warm here already
<dimitern> i know :) don't tease me hehe
<eagles0513875_> dimitern: not trying to. if it's this warm now, I'm predicting a super hot summer
<dimitern> eagles0513875_, and diving!
<eagles0513875_> i dont dive but this is off topic for this channel :)
<eagles0513875_> i would like to start contributing to juju and I would like to work to potentially and eventually try my luck at becoming part of the juju dev team
<dimitern> eagles0513875_, yep
<eagles0513875_> what would be a good bug for me to start off with
<dimitern> eagles0513875_, great!
<dimitern> eagles0513875_, well, there are some docs to read, in the source /doc
<dimitern> eagles0513875_, you can pull the source from lp:juju-core with bzr and try setting up so you can run the unit tests locally
<eagles0513875_> already have core installed but don't have my linux laptop atm
<dimitern> eagles0513875_, and before all, you'll need to get golang packages and setup your GOPATH
<eagles0513875_> ok
 * eagles0513875_ taps fingers patiently on the desk waiting for my mom to come back with the car and my laptop lol
<dimitern> eagles0513875_, you can use virtualbox or something similar and run ubuntu in a vm
<eagles0513875_> dimitern: in the case of my laptop i would prefer taking advantage of the full performance it can give me
<eagles0513875_> dimitern: im also thinking of working with my vps provider to allow ubuntu users to easily use juju on their vps's
<dimitern> eagles0513875_, that's a great idea, please do
<eagles0513875_> :)
<jam> rogpeppe: fwereade, mgz: standup ?
<mgz> I joined just now, and there was no one there...
<frankban> core-devs: morning all. I saw "juju switch" in trusty just returns the default environment name. The output is no longer "Current environment: x". Should we consider this the new behavior? Should we expect this change in saucy too?
<TheMue> frankban: it is the change based on https://bugs.launchpad.net/juju-core/+bug/1193244
<_mup_> Bug #1193244: juju env could be friendlier to scripts <improvement> <juju-core:Fix Released by themue> <https://launchpad.net/bugs/1193244>
<TheMue> frankban: is in with 1.17
<frankban> TheMue: yes I know. I saw that juju-core is in trusty/universe. the juju stable ppa for previous series is still on 1.16.5. So I guess this change will also land there, right?
<TheMue> frankban: hmm, here I don't know the planning, sorry
<frankban> TheMue: ok thanks
<dimitern> jam, reviewed https://codereview.appspot.com/54270043/
 * dimitern is afk for a while
<eagles0513875> fwereade: ping
<eagles0513875> hey fwereade
<fwereade> eagles0513875, linked here for the convenience of any other potential lurking contributors: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/CONTRIBUTING
<eagles0513875> :D
<dimitern> i have proposed a long chain of mostly simple CLs which will lead to the firewaller server-side API soon, if anyone can take a look I'd appreciate it: (in order) https://codereview.appspot.com/54620043/, https://codereview.appspot.com/52840044/, https://codereview.appspot.com/54630043/, https://codereview.appspot.com/54640043/, and https://codereview.appspot.com/54650043/
<eagles0513875> dimitern: i have a crazy idea I would like to run by you if you dont mind
<dimitern> eagles0513875, sure
<eagles0513875> dimitern: im thinking of potentially making linux desktop users that work in cloud environments a GUI interface that interfaces with juju to make deployments easier instead of needing command line
<eagles0513875> what do you think
<dimitern> :)
<dimitern> eagles0513875, have you seen Juju GUI ?
<dimitern> eagles0513875, https://jujucharms.com/ for a demo
<eagles0513875> dimitern: yes Its good but lacking i think
<eagles0513875> dimitern: what i was thinking was more of an application one installs on their laptops and can manage deployments that way
<dimitern> eagles0513875, well, it's open source, you can fork it and change it as you need
<eagles0513875> good point
<eagles0513875> dimitern: i want to do work that would be beneficial to juju as a whole as well as potentially allow me to get hired and join the team :)
<dimitern> eagles0513875, and what you're talking about might be achieved with a "web application" that runs the gui in a browser frame, but has an icon and a setup
<eagles0513875> dimitern: there is one problem lets say your internet to the outside is down
<eagles0513875> then what
<dimitern> eagles0513875, sure, i understand
<eagles0513875> very hypothetical situation in a datacenter environment but you are kinda screwed when it comes to deployments
<dimitern> eagles0513875, you'll need internet to interact with a juju environment anyway
<eagles0513875> good point
<dimitern> eagles0513875, even if it's a private cloud, inaccessible from the outside public
<eagles0513875> im having fun with the local provider on my laptop
<dimitern> eagles0513875, you still need a connection to it, and can deploy the gui there
<eagles0513875> is there a way though to shutdown a machine and any instances
<dimitern> eagles0513875, sweet! does it work? :)
<eagles0513875> yes :)
<eagles0513875> i found a corner case regarding ssh though
<eagles0513875> it seems with the local provider (and im not sure about any other cloud providers) that you have to have the .pub ssh key in the .ssh folder
<eagles0513875> without it, it gives an error
<dimitern> well, if you use juju destroy-machine XXX in a local environment, that will shutdown the container
<eagles0513875> then i would need to destroy the environment then
<dimitern> you'll need A ssh key somewhere, if it's not in ~/.ssh/, you can specify it in the environments.yaml
<dimitern> no, you can destroy individual machines, without taking down the whole env
<eagles0513875> dimitern: this is on my laptop what happens if i need to shut it off
<eagles0513875> will my system take down the machine as well and bring it back up on boot up
<dimitern> just so in a local env, any "machine" you can see is actually an lxc container running on your machine (machine 0 is the only "real" one)
<eagles0513875> dimitern: O_o what cropped up for me is this. i dont have my .pub in the .ssh folder on my local machine but on a remote server in its .ssh which tbh i dont see why you really need your .pub key if you are deploying to other things besides the local provider
<eagles0513875> ahh ok :)
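dimitern's pointer above, that a key outside ~/.ssh/ can be named in environments.yaml, might look like the minimal local-provider sketch below. Note the `authorized-keys-path` option name and the paths are assumptions on my part; check the juju configuration docs before relying on them:

```yaml
environments:
  local:
    type: local
    # Assumed option: point juju at a public key that is not in ~/.ssh/.
    authorized-keys-path: /home/me/keys/id_rsa.pub
```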
<dimitern> hmm.. that's a fair question - I think there was some work done to restart any containers at boot time
<dimitern> you can certainly try rebooting without destroying your local env and see if it comes back up after
<dimitern> i think it should (we're using local upstart jobs for the agents)
<eagles0513875> ok :)
<eagles0513875> im guessing fwereade has gone to eat so ill have to continue on the ssh issue i managed to unearth tomorrow with him
<eagles0513875> dimitern: btw mlug has our own irc channel lug-mt
<dimitern> eagles0513875, ah, 10x - i'll try lurking there from time to time :)
<eagles0513875> no problem
<eagles0513875> dimitern: how can i deploy to my local provider using the juju charms site
<eagles0513875> or do i need to deploy juju-gui in a container on my laptop
<dimitern> yeah, you'll need to deploy the juju-gui charm in your local env, and then use it to deploy stuff
<dimitern> i think you need to expose it as well, so you can access it through a browser
<dimitern> check the juju-gui docs if in doubt
<eagles0513875> dimitern: ok :) i think in the example will gave it was exposed to the internet :)
<eagles0513875> dimitern: i have a question though the install of the os in regards to 12.04 is it a full install or a minimal install  of ubuntu installing only features that are needed
<dimitern> which os install?
<dimitern> in a container?
<eagles0513875> yes
<eagles0513875> of precise
<eagles0513875> i think overall deployment speeds can be improved by using a minimal install no?
<dimitern> it just acts like precise, but it uses the same kernel as your host (which appears as precise)
<dimitern> it's a minimal system
<eagles0513875> ahh ok
<eagles0513875> seems a bit slow to setup the containers etc
<eagles0513875> even when william was deploying to hpc or ec2
<dimitern> initially yes, but I think we cache some stuff
<eagles0513875> oh ok :)
<dimitern> but don't take my word for it - try it out (I think there were plans for speeding up local env deployments, but i'm not 100% sure we did it yet)
<dimitern> i'm signing off for today
<dimitern> g'night all
<fwereade> eagles0513875, fwiw, we need the public key to authorize on the instances we deploy, so you can ssh to them if you want to
<wallyworld_> fwereade: hi, you around?
<thumper> o/ fwereade
<thumper> fwereade: we need coffee
<thumper> wallyworld_, axw and thumper afk to get it
<thumper> back
<fwereade> wallyworld_, thumper, hey guys
<thumper> fwereade: hey
<wallyworld_> yello
<fwereade> thumper, wallyworld_, how's it going?
<wallyworld_> did you want to talk about azure?
<wallyworld_> peachy
<wallyworld_> now that we have had coffee
<fwereade> wallyworld_, yeah, we probably should
<fwereade> I'm slowing down pretty badly myself
<wallyworld_> should != want :-)
<wallyworld_> we can do it tomorrow
<wallyworld_> we have enough work to do today
<thumper> yeah, getting the proxy stuff done today hopefully
<fwereade> you know what, I think I'll take that as an answer and go to bed
<wallyworld_> ok :-)
<fwereade> you rock guys :)
<wallyworld_> i might see you for the standup, if not tomorrow :-)
<wallyworld_> nighty night
<thumper> fwereade: ack, sleep well
<thumper> axw, wallyworld_: review plz https://codereview.appspot.com/54720043/
#juju-dev 2014-01-21
<davecheney> lucky(~) % juju status
<davecheney> WARNING discarding API open error: <nil>
<davecheney> wut ?
<davecheney> wallyworld_: http://paste.ubuntu.com/6789000/
<davecheney> i cannot bootstrap
<davecheney> what is going on ?
<thumper> ?!
<thumper> davecheney: wallyworld_ is not back from lunch yet
<thumper> davecheney: that is a shitty error
<thumper> should file a bug about that
<davecheney> looks like it's sending back a 404 error
<davecheney> <hr /> smells of the apache 502 page
<davecheney> or something
<davecheney> what is the URL it tries to hit ?
<davecheney> oh, hang on
<davecheney> i didn't use --upload-tools
<thumper> haha
<thumper> still a shitty error
<davecheney> << PEBKAC
<davecheney> Attempting to connect to ec2-54-253-211-44.ap-southeast-2.compute.amazonaws.com:22
<davecheney> Attempting to connect to ip-10-249-43-61.ap-southeast-2.compute.internal:22
<davecheney> Attempting to connect to 54.253.211.44:22
<davecheney> ^uh, the second will *NEVER* work
<davecheney> Attempting to connect to 10.249.43.61:22
<davecheney> in fact, it could work
<davecheney> but connect you to something totally unexpected
<wallyworld_> davecheney: that xml error thing is fixed in trunk - there was some old s3 bucket code still lurking in the system
<davecheney> cool
<wallyworld_> thumper: when you are bored or need a change from swearing at your keyboard https://codereview.appspot.com/54740043/
<axw> davecheney: bootstrap doesn't know which IP is the right one. it checks each one that the provider tells you about, and verifies the machine connected to is the one it started
<axw> depending on the provider, and where you're bootstrapping from, you may indeed want the private IP
<axw> (think bootstrapping canonistack from within canonistack)
<davecheney> true, but 10/0 may not be remote on your network
<davecheney> imagine what happened if a provider gave out 192.168/16 addresses
<davecheney> that is used by most modems
<davecheney> sorry, private networks
<axw> davecheney: we verify the identity of the machine by confirming the nonce. so the random machine you get would (a) have to have your SSH public key in ~ubuntu/authorized_keys and (b) have the random nonce generated in a specific location
<davecheney> sure, but if juju connects to the private address, say 10.x.x.x
<davecheney> and it gets refused
<davecheney> or more specifically, authn failed
<davecheney> will it try the public address ?
<axw> yes
<axw> it tries them all in parallel
<axw> it blows that we need to check them all, but it covers all scenarios with zero user intervention
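The race axw describes, trying every candidate address in parallel and keeping whichever one passes the identity check, can be sketched roughly as below. This is an illustration, not juju's actual bootstrap code; `tryConnect` and its `verify` callback (which stands in for juju's nonce check over SSH) are invented names:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// tryConnect dials every candidate address in parallel and returns the
// first one whose machine passes an identity check. Unreachable or
// wrong machines simply lose the race, so mixing public and private
// IPs costs nothing but wasted attempts.
func tryConnect(addrs []string, verify func(addr string) bool) (string, error) {
	results := make(chan string, len(addrs))
	var wg sync.WaitGroup
	for _, addr := range addrs {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			if verify(addr) {
				results <- addr
			}
		}(addr)
	}
	go func() {
		wg.Wait()
		close(results)
	}()
	if addr, ok := <-results; ok {
		return addr, nil
	}
	return "", errors.New("no address passed identity verification")
}

func main() {
	addrs := []string{"10.249.43.61:22", "54.253.211.44:22"}
	// Pretend only the public IP reaches the machine with the right nonce.
	addr, err := tryConnect(addrs, func(a string) bool {
		return a == "54.253.211.44:22"
	})
	fmt.Println(addr, err)
}
```

The nonce check is what makes the "random machine on 192.168/16" scenario safe: a stranger's host would have to both accept the SSH key and hold the right nonce to win the race.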
<davecheney> ok
<davecheney> ok
<davecheney> cool
<davecheney> how is the land of hobbits ?
<axw> :)  nice and cool
<axw> better than the ~35 degrees back in Perth
<davecheney> http://paste.ubuntu.com/6789743/
<davecheney> ^ can anyone confirm this gocheck breakage ?
<wallyworld_> waigani: https://pastebin.canonical.com/103270/
<wallyworld_> waigani: https://pastebin.canonical.com/103271/
<dimitern> rogpeppe, I have a bunch of reviews up, if you can take a look at some of them? https://codereview.appspot.com/54620043/ https://codereview.appspot.com/54630043/ https://codereview.appspot.com/54640043/ https://codereview.appspot.com/54650043/
<rogpeppe> dimitern: will do
<dimitern> mgz, jam, others? ^^
<dimitern> rogpeppe, review poke
<rogpeppe> dimitern: sorry, i've been trying to get tests passing cleanly before starting on reviews. hopefully not much more now.
<dimitern> rogpeppe, ok
<eagles0513875> morning dimitern :)
<dimitern> eagles0513875, morning man :)
<eagles0513875> dimitern: :) good news btw on start up the environment i setup on the local provider also came up
<eagles0513875> but im not sure if i should be worried about one thing
<eagles0513875> dimitern: im seeing this when i do juju status instance-state: missing
<dimitern> good news then
<dimitern> hmm .. i'm not sure the local provider supports instance-state
<dimitern> it's a cloud specific value that describes what state the vm is in (i.e. "building", "rebooting", "running")
<dimitern> rogpeppe, and this is the last bit https://codereview.appspot.com/52550045/ (6/6)
<eagles0513875> dimitern: i think it does this for the local provider as well
<dimitern> eagles0513875, maybe it's a bug then - can you look for similar existing bugs and file one if none exists please?
<eagles0513875> dimitern: i do know if i setup a new deployment it does show the pending etc
<eagles0513875> but what i find interesting is that it doesnt seem to preserve or boot up any previous instances on the local provider after reboot
<eagles0513875> fwereade: any idea about the above that im noticing?
<jam> eagles0513875: you should be able to use "sudo lxc-ls --fancy" and see what instances have been set up on your machine
<eagles0513875> let me try that
<jam> eagles0513875: what series are you on ? (Precise, Saucy, T, etc) ?
<jam> and then you can try "apt-cache madison lxc"
<eagles0513875> ok well the laptop im testing out the local provider is 13.10
<eagles0513875> the instances are precise
<eagles0513875> ok the main bootstrapped environment is showing up and that is it
<eagles0513875> let me pastebin
<eagles0513875> http://paste.ubuntu.com/6790813/ jam dimitern
<jam> eagles0513875: I don't think instance-state: missing means something is wrong, as dimitern mentioned, I'm pretty sure it is empty for local instances today
<eagles0513875> isnt 1 an instance of something i might have had setup?
<eagles0513875> i did have an instance of the juju-gui there yesterday
<eagles0513875> which i probably forgot to destroy before shutting down
<eagles0513875> how can you start an instance?
<dimitern> eagles0513875, by deploying something or using add-machine
<eagles0513875> dimitern: ok lets say the host power cycles for some reason
<eagles0513875> how can you bring up the instances you would have had deployed there.
<eagles0513875> thats what happened i shut down my laptop http://paste.ubuntu.com/6790874/ this paste is after a new boot strap so for sure I had something deployed
<eagles0513875> i think this is one serious flaw potentially in juju unless things are handled differently then what im thinking
<dimitern> eagles0513875, i'm not well versed with the local provider, but if you can wait until late in the evening, you might ask thumper, when he's around, or perhaps better, send a question on the juju-dev mailing list
<eagles0513875> where do i subscribe :)
<dimitern> https://lists.ubuntu.com/mailman/listinfo/juju-dev
<eagles0513875> thank you :)
<dimitern> np :)
<jam> fwereade: mgz: standup ?
 * fwereade stands up
<eagles0513875> fwereade: :P you is in trouble :p
<jam> mgz: poke?
<eagles0513875> fwereade: i think i found a very interesting use case this morning after running a test with the local provider that could be a big flaw with juju as it stands right now or im just not aware of the features
 * eagles0513875 pokes jam with a stick and it slides right through him/her
<eagles0513875> dimitern: hope my thinking above isnt too outside the box
<mgz> jam: wandered off, here now
<eagles0513875> mgz: thats what happens when you leave jam out for too long lol it becomes a big sticky puddle :p
<dimitern> eagles0513875, filing bugs or juju-dev discussions are most beneficial - for the record, etc.
 * eagles0513875 goes into lurking mode.
<dimitern> rogpeppe, updated https://codereview.appspot.com/54620043/ as suggested
<rogpeppe> dimitern: the only config.Config field that is secret is admin-secret, and that's not required
<rogpeppe> dimitern: putting "not available" in secret attributes just seems like asking for trouble
<dimitern> rogpeppe, please confirm with fwereade about this behavior
<fwereade> dimitern, rogpeppe, ehhhhhh just a mo
<rogpeppe> fwereade: could you explain the reasoning behind filling out secret attributes with "not available" rather than deleting them?
<fwereade> rogpeppe, the provisioner's all fucked up, and expects an environ config even if it's just a container provisioner
<rogpeppe> fwereade: but what if the environ config expects an integer-valued secret attribute?
<fwereade> rogpeppe, it doesn't
<fwereade> rogpeppe, we changed the interface
<fwereade> rogpeppe, strings only
<rogpeppe> fwereade: the environment config can't contain anything except strings?
<fwereade> rogpeppe, just the secrets
<rogpeppe> fwereade: ah. i didn't see that. where's it documented?
<fwereade> rogpeppe, EnvironProvider.SecretAttrs
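The strings-only constraint fwereade cites is what makes the "not available" masking type-safe: since secret attributes come back as `map[string]string` (per `EnvironProvider.SecretAttrs`), substituting a placeholder string can never collide with an integer-valued attribute like the one rogpeppe worries about. A rough sketch, with `maskSecrets` invented purely for illustration:

```go
package main

import "fmt"

// maskSecrets returns a copy of attrs with every secret key replaced by
// a placeholder string. Because juju constrains secret attributes to
// strings, the placeholder can never violate an attribute's type.
func maskSecrets(attrs map[string]interface{}, secretKeys []string) map[string]interface{} {
	masked := make(map[string]interface{}, len(attrs))
	for k, v := range attrs {
		masked[k] = v
	}
	for _, k := range secretKeys {
		if _, ok := masked[k]; ok {
			masked[k] = "not available"
		}
	}
	return masked
}

func main() {
	cfg := map[string]interface{}{
		"name":         "ec2-env",
		"admin-secret": "s3kr1t",
	}
	fmt.Println(maskSecrets(cfg, []string{"admin-secret"})["admin-secret"])
}
```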
<fwereade> rogpeppe, dimitern: it would ofc be best to unfuck the provisioner rather than further promoting that hack
<rogpeppe> fwereade: and this is just to work around the provisioner bug
<rogpeppe> ?
<dimitern> fwereade, yes, hence the bug 1231384
<_mup_> Bug #1231384: Provisioner API's EnvironConfig() shouldn't allow non-manager nodes to call it <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1231384>
<fwereade> dimitern, rogpeppe: OMG NO WAI it looks like someone may have done that
 * fwereade looks more closely
<rogpeppe> fwereade: given that the bug is in the provisioner, i'd be much happier if the workaround-hack was in the provisioner too
<dimitern> rogpeppe, it is in the provisioner api
<rogpeppe> fwereade: rather than corrupting core interfaces for no good reason
<rogpeppe> dimitern: the bug is in the provisioner agent code
<rogpeppe> dimitern: which could easily add the required dummy secret attributes itself
<dimitern> rogpeppe, yes, but without some less than trivial refactoring, it can't be fixed
<fwereade> rogpeppe, dimitern: take a look at the provisioner, I don't think the container provisioner uses it any more
<rogpeppe> dimitern: sure it can. but even better if the container provisioner doesn't do it
<dimitern> fwereade, I can see it uses it on line 173 in provisioner.go
<dimitern> fwereade, ah, that's just the env. provisioner
<fwereade> dimitern, yeah
<dimitern> fwereade, cool! so we can get rid of the masking and mark the bug fix committed
<fwereade> dimitern, I think we can
<fwereade> dimitern, test it live please though -- you have access to a MAAS, right?
<dimitern> fwereade, i don't think i do
<dimitern> fwereade, even if i do, i've never used it
<dimitern> for live testing
<dimitern> perhaps natefinch can help with that?
<dimitern> fwereade, why maas exactly?
<fwereade> dimitern, that has containers that work
<dimitern> fwereade, so ec2/canonistack won't do?
<fwereade> dimitern, I think probably not for verifying that container provisioners really really work properly
<dimitern> fwereade, i'd appreciate any pointers on how to connect to maas and test it
<fwereade> dimitern, yeah, I'm trying to find the email
<dimitern> fwereade, I'll follow the wiki and ask on #is
<dimitern> fwereade, it seems i'm not a member of any of the groups that have access to maas (iom-maas or qa-lab)
<dimitern> fwereade, it seems I can't use the maas lab - it's booked for the next few weeks
<TheMue> o/ <= I'm looking for help with API and authentication
<dimitern> TheMue, hey, what do you need?
<TheMue> my debug-log command gets a "permission denied"
<dimitern> TheMue, that's the client api
<dimitern> TheMue, run it with --show-log and --debug and paste the output?
<TheMue> dimitern: I'm using juju.NewAPIClientFromName and client has a method for calling my facade
<TheMue> dimitern: ok, will do
<TheMue> dimitern: tia
<dimitern> TheMue, pasting some code can also help to see what you're trying to do
<TheMue> dimitern: will put it all together, just have to bootstrap
<dimitern> TheMue, ta
<dimitern> rogpeppe1, hey
<rogpeppe1> dimitern: yo
<dimitern> rogpeppe1, since live testing on maas of whether bug 1231384 is fixed isn't possible right now
<rogpeppe1> dimitern: sorry, i've been afk for a while 'cos a friend came around
<_mup_> Bug #1231384: Provisioner API's EnvironConfig() shouldn't allow non-manager nodes to call it <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1231384>
<dimitern> rogpeppe1, let's live with it for now, so I can land the firewaller api
<rogpeppe1> dimitern: are you saying that the bug isn't fixed?
<dimitern> rogpeppe1, once it's available, I'll look into it and remove the code as suggested, after successful live testing
<rogpeppe1> dimitern: or that you don't know?
<dimitern> rogpeppe1, i can't confirm it won't work for real without the workaround
<dimitern> s/won't/will/
<rogpeppe1> dimitern: isn't it easy to test by live testing in ec2?
<dimitern> rogpeppe1, ec2 does not use container provisioner apparently
<dimitern> rogpeppe1, only maas does
<rogpeppe1> dimitern: ah, of course.
<rogpeppe1> dimitern: i suppose another way would be to run the container provisioner tests and check that the EnvironConfig API call is never made
<dimitern> rogpeppe1, it doesn't beat live testing
<TheMue> dimitern: log does not tell very much: http://paste.ubuntu.com/6791555/ , the implementation so far at https://codereview.appspot.com/44540043/ (the predecessor branch in review by fwereade)
<rogpeppe1> dimitern: tbh i think a code inspection is good enough here. it's easy to verify that EnvironConfig is only called in an environProvisioner, and that NewEnvironProvisioner is only called if the machine has JobManageEnviron
<rogpeppe1> dimitern: if that's not the case, it's not *too* bad.
<rogpeppe1> dimitern: if it breaks trunk, just blame me :-)
<dimitern> rogpeppe1, how about backwards compatibility?
<dimitern> rogpeppe1, removing EnvironConfig() from the provisioner facade is a breaking change, and needs workarounds for 1.16
<rogpeppe1> dimitern: that's a reasonable point.
<dimitern> rogpeppe1, see - all the more reasons to dedicate a cl + time on it alone
<rogpeppe1> dimitern: yeah - perhaps just put a comment saying "delete when 1.16 compatibility no longer required"
<dimitern> rogpeppe1, sure
<dimitern> rogpeppe1, updated https://codereview.appspot.com/54620043/ again - should be good to land now?
<dimitern> TheMue, sorry, back to you
<dimitern> TheMue, in DebugLogCommand.Init you need to call NewAPIClientFromName in Run(), not Init
<dimitern> s/in Debug../looking at Debug../
<dimitern> TheMue, see the other commands for examples
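The Init/Run split dimitern points at can be sketched like this; everything here is a hypothetical stand-in (including `newAPIClientFromName`), illustrating only the pattern that Init validates arguments while Run is where the API connection is opened:

```go
package main

import (
	"errors"
	"fmt"
)

// apiClient stands in for juju's API client type.
type apiClient struct{ env string }

// newAPIClientFromName mimics an expensive call that needs a live,
// bootstrapped environment, so it must not run during flag parsing.
func newAPIClientFromName(name string) (*apiClient, error) {
	if name == "" {
		return nil, errors.New("no environment name")
	}
	return &apiClient{env: name}, nil
}

type debugLogCommand struct {
	envName string
	lines   int
}

// Init only parses and validates CLI arguments; no connections yet.
func (c *debugLogCommand) Init(args []string) error {
	if len(args) == 0 {
		return errors.New("missing environment name")
	}
	c.envName = args[0]
	c.lines = 10
	return nil
}

// Run opens the API connection, so connection problems surface at run
// time with a proper error instead of breaking argument parsing.
func (c *debugLogCommand) Run() error {
	client, err := newAPIClientFromName(c.envName)
	if err != nil {
		return err
	}
	fmt.Printf("tailing last %d log lines from %s\n", c.lines, client.env)
	return nil
}

func main() {
	cmd := &debugLogCommand{}
	if err := cmd.Init([]string{"local"}); err != nil {
		panic(err)
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```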
<TheMue> dimitern: that's all? would be great. thx, I'll try
<rogpeppe1> dimitern: LGTM
<dimitern> TheMue, well, that bit is fishy, but might be something else
<dimitern> rogpeppe1, ta!
<dimitern> rogpeppe1, did you look at the others?
<rogpeppe1> dimitern: am on them
<TheMue> dimitern: the fishy one showed at least a first step of success http://paste.ubuntu.com/6791768/
<TheMue> dimitern: but still no connection
<TheMue> eh
<TheMue> no permission
<dimitern> TheMue, does any other command work?
<dimitern> TheMue, I see the problem
<dimitern> TheMue, https://codereview.appspot.com/44540043/diff/140001/state/apiserver/debugger/debugger.go - in NewDebuggerAPI you're checking for AuthEnvironManager
<dimitern> TheMue, that's not how it should be for a client api (it's used for the agent api only)
<TheMue> dimitern: ic
<dimitern> TheMue, the whole facade is written as if it's an agent api facade
<TheMue> dimitern: yeah, only tried to follow what I found ;) but it seems I didn't get the intention behind it (client and agent)
<dimitern> TheMue, look at apiserver/client/client.go - you need AuthClient() in the Debugger(id string) call
<TheMue> dimitern: hmm, don't have AuthClient() there, only AuthClient() bool in apiserver/root.go
<dimitern> TheMue, the root is an authorizer
<TheMue> dimitern: the debugger API is intended to be used from the CLI and the UI. we call both "client" opposite to "agent" for the agents on the machines?
<TheMue> dimitern: I've got my standup now, but your hints already showed the way
<TheMue> dimitern: maybe I'll come back with more questions later
<TheMue> dimitern: thx so far
<dimitern> TheMue, np
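The distinction dimitern draws, AuthClient for CLI/GUI connections versus AuthEnvironManager for agents, can be sketched as below. The types are invented for illustration (the real authorizer lives in juju's apiserver package); only the shape of the check is the point:

```go
package main

import (
	"errors"
	"fmt"
)

var errPerm = errors.New("permission denied")

// authorizer mimics the facade authorizer: it reports what kind of
// entity is on the other end of the API connection.
type authorizer struct {
	client         bool
	environManager bool
}

func (a authorizer) AuthClient() bool         { return a.client }
func (a authorizer) AuthEnvironManager() bool { return a.environManager }

type debuggerAPI struct{}

// newDebuggerAPI gates the facade on AuthClient, the fix suggested for
// the "permission denied": checking AuthEnvironManager here would
// reject ordinary CLI connections, which is exactly the bug TheMue hit.
func newDebuggerAPI(auth authorizer) (*debuggerAPI, error) {
	if !auth.AuthClient() {
		return nil, errPerm
	}
	return &debuggerAPI{}, nil
}

func main() {
	if _, err := newDebuggerAPI(authorizer{environManager: true}); err != nil {
		fmt.Println("agent connection:", err)
	}
	if _, err := newDebuggerAPI(authorizer{client: true}); err == nil {
		fmt.Println("client connection: ok")
	}
}
```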
<rogpeppe1> dimitern: all reviewed, i think
<dimitern> rogpeppe1, tyvm
<dimitern> rogpeppe1, except the firewaller api itself?
<rogpeppe1> dimitern: ah, which one is that?
<dimitern> rogpeppe1, https://codereview.appspot.com/52550045/
<rogpeppe1> dimitern: ah, you didn't mention that in the list you pasted this morning...
<rogpeppe1> dimitern: looking
<dimitern> rogpeppe1, that's the last one
<dimitern> rogpeppe1, ta
<rogpeppe1> dimitern: i've got a meeting in 5 minutes - don't think i'll have done it before then, i'm afraid
<dimitern> rogpeppe1, it's ok - later perhaps?
<rogpeppe1> dimitern: yes
<TheMue> dimitern: step by step, calling the command seems to work now, but it looks like I'm using the watcher wrong. do we have an example of using the watcher from the client side?
<dimitern> TheMue, only the AllWatcher is used by the client api so far
<TheMue> dimitern: ok, will look how it is done there
 * fwereade away early, cath's birthday
<mgz> have fun!
<TheMue> fwereade: enjoy and congrats from me
<gary_poster> sinzui: hi.  will 1.17.1 be made from trunk?  specifically, will it have the AddLocalCharm method http://bazaar.launchpad.net/~go-bot/juju-core/trunk/revision/2155 from dimiter?
<sinzui> gary_poster, yes it will
<gary_poster> awesome thanks sinzui
<rogpeppe1> dimitern: reviewed
<dimitern> rogpeppe1, thanks
<jamespage> sinzui, will the 1.17.1 release include using juju-mongodb on trusty+ ?
<sinzui> jamespage, It can. I can make that change and start testing it today
<jamespage> sinzui, please
<sinzui> jamespage, ack
<natefinch> interesting, if I leave saturday night for London, I can get a direct flight for $500 less than a 1 stop flight leaving sunday morning at 5am.  Seems like a no brainer.
<natefinch> man, flight prices are wacky
<rogpeppe1> natefinch, fwereade, mgz: review anyone? https://codereview.appspot.com/55150043
<rogpeppe1> EnsureAvailability finally :-)
<natefinch> rogpeppe1: nice!  I'll check it out.
<rogpeppe1> natefinch: thanks!
<wallyworld> fwereade: you around?
<lazypower> Does a variable exist indicating which node is the dying node? eg unit['status'] = dying?
<axw> waigani: https://juju.ubuntu.com/docs/tools-charm-tools.html
<dpb1> any trick to get relation-get --help text?  I get env variables not defined,  when I try to run it.
<fwereade> wallyworld, hey, I need to be a bit brief, but I *am* here for now
<wallyworld> fwereade: we were only looking to see if you wanted to join in azure or upgrade discussion
<wallyworld> up to you. but we do need to start talking even if you are not available. we were going to talk about azure this afternoon since you were not around earlier
<fwereade> wallyworld, I don't think I've really got time for a deep discussion, I'm afraid :(
<wallyworld> np. we were pinging you cause you asked us to :-)
<fwereade> wallyworld, re azure my gut wants to jam availability sets in as placement directives, but I'm not sure if we can make that fly
<fwereade> wallyworld, and I'd really appreciate an update of what you all discussed when you've discussed it
<wallyworld> i think it needs to support auto generated as well / instead of
<fwereade> wallyworld, yeah
<wallyworld> ie each service is placed in its own availability set
<fwereade> wallyworld, that's fine until we want density
<wallyworld> we can tweak it once density is considered
<fwereade> wallyworld, and don't forget HA, juju itself needs one
<wallyworld> have a config for auto
<wallyworld> yeah
<fwereade> wallyworld, cheers
<wallyworld> we'll keep you in the loop
<wallyworld> i'm thinking the avail set name would be mysql-<env uuid>
<wallyworld> config for auto avail sets would default to true, would be turned off for density perhaps
<wallyworld> or even no option, and if density directives used, don't do auto
<fwereade> wallyworld, hard to switch over though
<fwereade> wallyworld, anyway I must return to my other life
<fwereade> wallyworld, cheers
<wallyworld> sure, see ya
<wallyworld> what, you have a life?
<fwereade> ;p
<marcoceppi> fwereade: are you around?
<marcoceppi> thumper: hey, just installed juju-core from source, bootstrapping local provider, but it's using 1.17.0 tools, --upload-tools doesn't seem to work, how can I get 1.17.1 tools for local from compiled juju?
#juju-dev 2014-01-22
<davecheney> sinzui: I am having trouble enabling the 'proposed' repository for trusty
<davecheney> oh no
<davecheney> wait
<davecheney> yeah, that got it
<wallyworld> http://www.windowsazure.com/en-us/documentation/articles/manage-availability-virtual-machines/
<thumper> wallyworld, axw, waigani: https://docs.google.com/a/canonical.com/document/d/1PJiqBiAndIKvT6WYT1fG4rBX2b4sF90j3m7GTLNoZiE/edit
<bigjools> I upgraded to trusty and the environment I started under saucy doesn't look great now...
<bigjools> http://paste.ubuntu.com/6795052/
<bigjools> what's going on?
<thumper> bigjools: you probably need to upgrade the environment
<thumper> bigjools: but the short answer is "we hate you"
<bigjools> thumper: I knew *that* already
<wallyworld> bigjools: first mistake - upgrading to alpha software for production purposes
<wallyworld> :-P
<bigjools> who said it was production purposes? :)
<bigjools> anyway you're missing all the fun weather over here
<wallyworld> hot?
<bigjools> heat index is nearly 47
<wallyworld> ah, so cool then
<bigjools> your missus will be asking you for a/c when you get back :)
<wallyworld> amongst other things
<bigjools> thumper: how do I upgrade an environment?
<thumper> bigjools: juju upgrade ??
<bigjools> you may point me at TFM
<bigjools> upgrade-juju?
<wallyworld> think so
<wallyworld> if it's upgrading to a trunk release from source, you need --upload-tools IIRC
<bigjools> I am using 1.17.0-trusty-amd64
<bigjools> I think my existing env is 1.16
<bigjools> so it should find tools for 1.17 without the --upload I guess?
<bigjools> well looks like it didn't work anyway :(
<wallyworld> thumper: https://codereview.appspot.com/55300043/
<bigjools> folks, I deleted a service and it's machine so I can redeploy it on a new series, but when trying to deploy it says the service already exists.  It doesn't appear on "juju status" though..... is this a bug?  v1.17.0.1
<bigjools> its*  gah hate that
<dimitern> bigjools, is this a local env?
<bigjools> dimitern: canonistack
<bigjools> dimitern: http://paste.ubuntu.com/6795916/
<dimitern> bigjools, hmm that's weird
<bigjools> dimitern: I upgraded the tools from 1.16 earlier.  I suspect that may have something to do with it
<dimitern> bigjools, we had issues like that but with the local provider, and they were fixed since
<dimitern> bigjools, can you perhaps do a mongo dump and file a bug please?
<bigjools> dimitern: if you give me monkey-like instructions to do that, yes :)
<dimitern> bigjools, :) just a sec
<bigjools> dimitern: np.  There's a massive storm bearing down on me that is likely to cut my power/internet, so no pressure.  :)
<dimitern> bigjools, using juju backup
<dimitern> bigjools, it's a plugin that does that - it has --help as well
<bigjools> where do I get it?
<bigjools> dimitern: I have to run out for a bit, can you send me details please :)
<fwereade> axw, am I right in thinking bigjools' problem above is that one about the env life field, that's fixed in trunk?
<TheMue> fwereade: morning
<TheMue> fwereade: any chance to take another look at https://codereview.appspot.com/44540043/ this morning?
<fwereade> TheMue, heyhey
<fwereade> TheMue, all I can say is "maybe" :/
<TheMue> fwereade: ok, thx
<TheMue> dimitern: after your help yesterday a deeper look at https://codereview.appspot.com/44540043/ would be appreciated too
 * TheMue is getting closer, but so far hasn't found any watcher used via the API in cmd/juju
<dimitern> TheMue, definitely looks better now
<dimitern> bigjools, it's part of core - in cmd/juju/plugins
<rogpeppe1> mornin' all
<dimitern> rogpeppe1, morning
<dimitern> rogpeppe1, thanks again for all the reviews yesterday
<rogpeppe1> dimitern: np
<rogpeppe1> dimitern: i'm looking for a review of https://codereview.appspot.com/55150043 BTW
<dimitern> rogpeppe1, looking
<dimitern> rogpeppe1, quick question - what's the deal with NoVote and VotingMachineIds ?
<rogpeppe1> dimitern: that's the way that EnsureAvailability will be able to reduce the number of machines in the peer group without just removing them.
<dimitern> rogpeppe1, and a peer group is a set of machines in a replicaset ?
<rogpeppe1> dimitern: if a machine goes down, we don't want to just remove it from the peer group, because it may come up again
<rogpeppe1> dimitern: yes
<dimitern> rogpeppe1, i see
<rogpeppe1> dimitern: not all machines in a replica set are necessarily voting
<rogpeppe1> dimitern: it also gives us the freedom in the future to be able to deliberately bring up non-voting machines to act as backup
<rogpeppe1> dimitern: there will also be another field associated with the machine: IsVoting - indicating the actual voting status of the machine.
<rogpeppe1> dimitern: that will be set by the peergrouper worker
<dimitern> rogpeppe1, IsVoting or WantVote ?
<rogpeppe1> dimitern: IsVoting
<dimitern> rogpeppe1, and WantVote ?
<rogpeppe1> dimitern: that's just the negation of NoVote
<dimitern> rogpeppe1, (looking in the code comments of Ensure..)
<dimitern> rogpeppe1, ah, ok
<rogpeppe1> dimitern: i didn't want to do WantsVote in the machineDoc because that would be incompatible with previous versions.
<rogpeppe1> dimitern: which wouldn't have the field in the schema
<dimitern> rogpeppe1, yeah, but can't it be added in a compatible way? Like the default value (when missing) is fine?
<rogpeppe1> dimitern: that just makes the logic harder - you'd have to look for "wantsVote==true || wantsVote==nil"
<rogpeppe1> dimitern: i think it works pretty well inverted actually
<dimitern> rogpeppe1, i see, well this can be hidden behind a machine method, no?
<rogpeppe1> dimitern: it is, isn't it?
<dimitern> rogpeppe1, sorry, just got there
<dimitern> rogpeppe1, I see now
<rogpeppe1> dimitern: apart from in the MachineTemplate, where NoVote actually makes sense, because you actually almost always want a vote
<rogpeppe1> dimitern: things became simpler when I changed MachineTemplate.WantsVote to MachineTemplate.NoVote
<dimitern> rogpeppe1, yeah, i agree
<dimitern> rogpeppe1, reviewed
<rogpeppe1> dimitern: thanks a lot
<jam> fwereade: ping about API version numbering
<jam> I've been trying to respond to your email and thinking about it, but I think it comes across rambling in email form, so it might be better as a 1:1 chat
<fwereade> jam, pong
<jam> fwereade: hey hey
<fwereade> jam, was I terribly unclear?
<jam> fwereade: there is just a lot that I've thought about it.
<jam> fwereade: specifically, what does it actually *mean* to have a global rev vs a local rev, how does stuff look like in code, etc.
<fwereade> jam, so, I think the local revs are purely an internal tool to help us manage bloat
<jam> fwereade: I sent my long rambly email
<jam> fwereade: there are problems with global rev that we should probably explore as well
<jam> namely, where /when is that sent on the wire
<fwereade> jam, codewise, a request comes in with a version, namespace, method, params; we get the facade by looking up the namespace given the version and get an actual facade on which we can call.Method(Params)
<jam> fwereade: codewise *today* it comes in with namespace, ??, method, params
<jam> and we can make ?? == version
<fwereade> jam, my suggestion earlier was that we demand it on every call
<fwereade> jam, yeah
<jam> It looks confusing that global version comes after namespace
<jam> but supersedes it
<jam> fwereade: so the next failure I explored is that we can have a relatively new client
<jam> that still calls a really old version
<jam> just because we never touched that code
<jam> and then we can't actually remove compatibility
<jam> so even though the API is 5 years old, and "Machine", "100", "Add" works just fine and is identical to "Machine", "1", "Add"
<jam> the very newest 'juju' commandline tool still calls
<jam> "Machine", "1", "Add"
<fwereade> jam, I think that's the least of our problems because we can at least control that -- it's everybody else calling v1 that's the problem, surely?
<fwereade> jam, and also that there will be old server versions that we'll have to fall back to even with new clients, at least for some time
<jam> fwereade: we can audit for it, but it is fairly expensive to sort out what version of Juju was calling what version of the API. If we work on making the juju CLI always call "latest" versions, then you don't have 2 pieces that break independently
<fwereade> jam, well, we *do* probably want to keep the CLI up to date with API versions, it's true, but we also need to leave fallback code in place, right?
<jam> fwereade: so, if you have 1 global rev number, then all call sites get bumped any time you change anything, right?
<jam> except the ones where that actually changes the implementation so we use the 'slightly older one' as we transition / (we want to update all our call sites eventually, I'm sure)
<fwereade> jam, they could do, yes, but I don't think it's necessarily required or expected that everybody does so
<jam> fwereade: so there is a little bit of "what do we do in Juju because we control both ends" and "what do we suggest 3rd parties do"
<jam> if we're going with a global bump, and we can remove old ones after we've deprecated it for "long enough"
<fwereade> jam, yeah, I think *we* should, but we can't necessarily impose it on everybody else
<jam> then we do want some sort of push for people to go to the latest thing they could
<jam> for example, how do people get informed that the API they are using is deprecated?
<fwereade> jam, I'm not sure what mechanisms we have for encouraging people to update their clients (apart from public warnings that vX will be going away in N months -- and making version removal clear in release notes)
<jam> fwereade: well, you can return something in the API that "this is deprecated" but it needs to be done in a way that is actually useful
<jam> Users don't actually care
<jam> it's developers that need to be informed about it
<jam> I guess you could have client versions logged, and calling a deprecated version generates an email to juju-core that client X version Y is still using a now deprecated API ? :)
<fwereade> jam, ok, you're talking about us here? or everyone else as well?
<jam> fwereade: I mean the juju server would notice when a client connects and calls a deprecated API and aggregate that into something useful for other people
<jam> then you have some sort of report about "juju-1.16.5 is still very common in the wild and is making lots of calls to X", and "juju-developer 0.8 is still calling Y"
<jam> it would be *useful* in the fact that we can then contact the developer directly and let them know we're deprecating something
<jam> but it is highly invasive :)
<fwereade> jam, well there is some reasonable worry about collecting that data
<rvba> Hi guys, I'm was trying to bootstrap a juju env and I got this: http://paste.ubuntu.com/6796595/.  Is this a known problem?  It does not seem to happen consistently but I don't think that's the first time I saw this.
<jam> rvba: looking
<jam> rvba: what version of the client are you using?
<jam> (juju version)
<jam> mgz: wallyworld: standup?
<rvba> jam: apt-cache policy juju-core → 1.16.3-0ubuntu0.13.10.1
<mgz> ta jam
<jam> rvba: it is *possible* that this was something we fixed in 1.16.5
<jam> rvba: there was an EOF bug that we implemented a workaround for
<rvba> jam: ah ok, thanks. I guess I'll upgrade using the PPA and test again…
<jam> rvba: you'll probably want a 1.16 series, rather than 1.17, I think
<rvba> The stable PPA has 1.16.5-0ubuntu1~ubuntu13.10.1~juju1.
<jam> rvba: great, I think devel has 1.17.0
<rvba> jam: same problem with the new version: http://paste.ubuntu.com/6796653/
<jam> rvba: looks like https://bugs.launchpad.net/juju-core/+bug/1239558
<jam> but that actually claims it was only fixed in 1.17 :(
<rvba> ah, right.
<rvba> jam: sorry to bother you again… I'm trying to use juju on Trusty (1.17.0-0ubuntu2) and now I get: http://paste.ubuntu.com/6796670/
<rvba> Any idea what might be wrong?
<jam> rvba: there you need to use "juju bootstrap --upload-tools"
<jam> we're looking for tools on streams.canonical.com
<jam> but that hasn't been finished being set up yet
<rvba> Ah okay, makes sense.
<rvba> Thanks.
<jam> mgz: do you have a chance to investigate why "juju-1.17" can't do "status" vs a 1.16 server? (Did it get fixed and I'm missing something?)
<jam> (at least here trunk r2239 still fails)
<mgz> jam: er, I'll recheck
<bigjools> I am trying to bootstrap an environment using 1.17 and it insists that I am already bootstrapped but I am not.  What else is it checking? there are no instances running
<mgz> bigjools: probably the bootstrap-verify file in your cloud storage
<mgz> bigjools: you can just run `juju destroy-environment` to clear everything out
<bigjools> mgz: it fails
<bigjools> because I have no instances
<mgz> I guess that's a provider bug then
<bigjools> Continue [y/N]? y
<bigjools> ERROR no instances found
<mgz> it should always do all the work, even if some parts of it have already completed
<bigjools> this is the openstack provider
<mgz> can you list the control bucket externally and see what it contains?
<bigjools> how do I do that?  sorry I know bugger all about openstack
<mgz> eg, look in ~/.juju/environment/<right one> for the control bucket name
<mgz> then `swift list <that name>`
<jam> mgz: just have him delete the fixed bucket in environments.yaml ?
<bigjools> ok
<mgz> python-swiftclient if you don't have it
<jam> so we generate a random one
<mgz> jam: I'm curious if we actually have a provider bug on destroy
<bigjools> yes it lists the files
<bigjools> provider bug
<mgz> so, now delete those with `swift delete` and bootstrap - if that works, please do file a bug
<bigjools> ok cheers
<bigjools> mgz: that fixed it indeed.  thanks for the help
 * bigjools files a bug
<natefinch> fwereade: when you get a chance, I'd like to talk about your comments on the EnsureMongoServer stuff.
<rogpeppe1> fwereade: it turns out that Machine.Destroy only fails if the machine has JobManageEnviron, not JobManageState...
<rogpeppe1> fwereade: i think that's probably a bug, but it makes me think (yet again) of changing the semantics of those two jobs
<rvba> jam: I /think/ the problem we're struggling with in MAAS+Juju might be related to https://bugs.launchpad.net/juju-core/+bug/1257649.  I don't understand the comment on there that says this is irrelevant now… care to enlighten me?
<fwereade> rogpeppe1, I'd be +1 on that
<fwereade> natefinch, might need to be a bit later
<natefinch> fwereade: np, I am currently encumbered anyway
<rogpeppe1> fwereade: my thought is to have two manager jobs, one for jobs which can be blown away without problems (e.g. the API server) and one for jobs which require special handling with EnsureAvailability and friends
<rogpeppe1> fwereade: for the time being though, i'm wondering about just ditching one of the jobs entirely
<fwereade> rogpeppe1, I'd be most happy just ditching one of them
<rogpeppe1> fwereade: i don't think the distinction between ManageEnviron and ManageState is currently useful
<rogpeppe1> fwereade: cool, i'll do that.
<fwereade> rogpeppe1, agreed, it seemed like a good idea at the time
<rogpeppe1> fwereade: yeah
<fwereade> rogpeppe1, the universal explanation
<rogpeppe1> fwereade: at some point in the indefinite future, i'd like to be able to scale up API servers independently of mongo servers
<rogpeppe1> fwereade: but i think that's easier to do by adding a new job then
<fwereade> rogpeppe1, agreed, but I don't think we're actually getting there any faster by maintaining that somewhat fuzzy distinction at the moment
<rogpeppe1> fwereade: yup
<fwereade> rogpeppe1, sweet, thanks
<jam> rvba: in the current code in trunk, we don't have an internal timeout for the SSH connection. (I think) I'd have to double check what version you're running and what disconnections you're seeing
<rvba> jam: we think the bug in question might be the cause of the problem we're seeing in the lab.  It takes more than 10 minutes to bring up a bootstrap node in MAAS because before installing jujud on it, we have to install the node using d-i.
<rvba> jam: we're trying to re-build juju with a different timeout to confirm this.
<rvba> jam: what we're seeing is that 'juju bootstrap' gives up after 10 minutes and we think it might have succeeded with a bit more time :)
<TheMue> debugging-by-println, sometimes nothing else works :D
<TheMue> aaaaargh, apiserver.StringsWatcher requires being called by an agent, a client is not allowed (dunno why). THAT'S why debug-log has a "permission denied" error
<TheMue> *sigh*
<rick_h_> can anyone verify that a juju 'upgrade' to an older revision is possible on pyjuju please?
<rogpeppe> fwereade: re: your comment on EnsureMongoServer, it's not adding anything that isn't already there
<rogpeppe> fwereade: and if the state port isn't stored in the agent config, where *would* it be stored?
<fwereade> rogpeppe, well it's already in the env config
<rogpeppe> fwereade: but the agent doesn't have access to the env config until they've connected to the state, right?
<fwereade> rogpeppe, this is all bootstrap time, isn't it? perhaps I'm confused there
<rogpeppe> fwereade: it's also used after bootstrap time, when new state servers come up
<rogpeppe> fwereade: but...
<rogpeppe> fwereade: i'm thinking you may be right, because a second state server will have access to the API, even if they can't connect to mongo
<fwereade> rogpeppe, but anything setting up a state server is either doing so at bootstrap, or in response to being told to by the API, right?
<fwereade> rogpeppe, yeah
<fwereade> rick_h_, pyjuju didn't have juju upgrades at all
<TheMue> fwereade: you may help me with a design decision? why do the watcher in apiserver/root.go require to be called by an agent?
<rogpeppe> fwereade, natefinch: huh, that call to ensureMongoServer in MachineAgent.Run doesn't look right at all
<fwereade> TheMue, that was the only thing that needed it at the time
<TheMue> fwereade: would like to use the StringsWatcher from the client
<TheMue> fwereade: so changing would be ok?
<fwereade> TheMue, let me take a quick look at the code
<TheMue> fwereade: yep
<fwereade> TheMue, aw hell
<fwereade> TheMue, yes it's fine to change it
<fwereade> TheMue, but it's not a simple one
<TheMue> fwereade: great, that helps a lot for debug-log
<fwereade> TheMue, ISTM that there's a pretty serious problem with all the watchers
<TheMue> fwereade: oh *listening*
<fwereade> TheMue, there's nothing preventing clients from grabbing each other's watchers willy-nilly
<TheMue> fwereade: come on, let them have some fun *lol*
<fwereade> TheMue, there's presumably something about a given resource that ties it to a particular client, otherwise we couldn't tidy up resources when conns are closed
<fwereade> TheMue, haha
<rogpeppe> fwereade: clients don't share resources
<fwereade> TheMue, so ISTM this is something that can and should (nay must) be fixed for all the watchers
<TheMue> rogpeppe: exactly, that's how I understood it too
<fwereade> rogpeppe, yeah, I just can't remember the exact mechanism
<fwereade> rogpeppe, ohhhhh ok it's maybe ok? individual clients get separate resource id spaces anyway, right?
<rogpeppe> fwereade: exactly
<fwereade> rogpeppe, cool
<fwereade> TheMue, belay that panic then
<fwereade> TheMue, you can just open up stringswatcher I think
<TheMue> fwereade: yes, will start with that one as I only need it
<rogpeppe> fwereade: specifically, newSrvRoot allocates a new resources map, and it's called when a client successfully logged in, and stored in the root object for that client
<rogpeppe> TheMue, fwereade: opening up StringsWatcher seems fine to me
 * TheMue takes the key to open it
<natefinch> fwereade: back, let me know when you'd like to talk
<dimitern> rogpeppe, fwereade, natefinch - I have two branches up for review, which complete the firewaller API story: https://codereview.appspot.com/55670043 and -  https://codereview.appspot.com/55680043 I'd really appreciate if someone takes a look
<adeuring> could somebody have a look here: https://codereview.appspot.com/55690043 ?
<natefinch> adeuring: looking
<adeuring> thanks
<natefinch> adeuring: reviewed
<adeuring> natefinch: thanks!
<natefinch> adeuring: thank you :)
<natefinch> rogpeppe: what was the problem you had with ensuremongoserver?  I saw a couple problems while reviewing it with William (like, we only need to call it if we have a managestate job, and I need to be more tolerant in case the directory and files already exist).
<rogpeppe> natefinch: why is it called unconditionally in the machine agent?
<rogpeppe> natefinch: ah, yes, that's "we only need to call it if we have a managestate job"
<natefinch> rogpeppe: right, I realized that when I went back to look over the code
<rogpeppe> natefinch: i'm just about to delete JobManageState throughout the code, BTW, and just use JobManageEnviron throughout
<natefinch> rogpeppe: cool, I'll check for that one then
<dimitern> natefinch, rogpeppe, review poke?
<rogpeppe> dimitern: will look shortly
<dimitern> rogpeppe, cheers
<natefinch> rogpeppe: is openApiState the appropriate way to get the list of jobs for the machine agent?  I don't see an easier way to do it
<rogpeppe> natefinch: yes
<natefinch> rogpeppe: cool
<rogpeppe> natefinch: you might not find it that easy
<rogpeppe> natefinch: but actually, as long as jobs stay constant, it's probably not too bad
<rogpeppe> natefinch: the difficulty comes at bootstrap time
<rogpeppe> natefinch: because at bootstrap time (or if a single remaining state server machine reboots) there's no API server to go to to ask for jobs
<natefinch> rogpeppe: ahh, right
<dimitern> natefinch, can you take a look at this please? https://codereview.appspot.com/55680043
<natefinch> dimitern: sure thing
<dimitern> natefinch, cheers
<rogpeppe> hmm, a recent update (revno 2226) has broken tests for me
<dimitern> rogpeppe, natefinch , guys, sorry to poke you again, but i'd like to land these two today if possible
<rogpeppe> dimitern: looking, sorry, i've been struggling to get tests to pass
<dimitern> rogpeppe, what's wrong?
<rogpeppe> dimitern: i've just reported https://bugs.launchpad.net/juju-core/+bug/1271674 and https://bugs.launchpad.net/juju-core/+bug/1271672,
<rogpeppe> dimitern: but mostly just crap from removing JobManageState
<dimitern> rogpeppe, wow test.invalid resolves in your local net?
<rogpeppe> dimitern: yeah. blame my ISP
<rogpeppe> dimitern: they resolve everything to their yahoo search page
<dimitern> rogpeppe, ah, nasty
<rogpeppe> dimitern: it resolves to 92.242.132.16 for me
<dimitern> to what lengths some people go for the sake of marketing
<natefinch> dimitern: sorry, caught me during lunch.  I'm looking now for real though
<dimitern> natefinch, thanks
<natefinch> rogpeppe: my ISP does similar things, forwards DNS failures to their own page.  In theory you can disable it at the router, but I've forgotten the password to it :/
 * dimitern is afk for a while
<natefinch> dimitern: reviewed
<arosales> will any folks here be joning the charm writing virt. sprint?
<natefinch> arosales: core is pretty slammed right now.  I know I'm pretty behind where I'm supposed to be.... not sure about others.
<arosales> natefinch, ack I'll check with some others
<lazypower> to whomever fixed the state-watcher bug, i could hug you.
<rogpeppe> lazypower: you're welcome :-)
<lazypower> rogpeppe, you + me + beer in our future
<rogpeppe> lazypower: SGTM!
<thumper> arosales: hey there
<thumper> arosales: I think you are under the mistaken impression that we know how to write charms :-)
<natefinch> thumper: roger does, more or less :)  He wrote that Go charm thing, which looks pretty cool.
<thumper> hey natefinch
 * natefinch might be biased for anything Go and against anything bash, though
<natefinch> thumper: howdy
<thumper> natefinch: heh
<natefinch> thumper: you're using Sublime Text and GoSublime, right?
<thumper> natefinch: I am now
<thumper> getting the hang of it
<natefinch> does go to definition work for you on methods?  Like if you have s.Foo()  can you go to the definition of the Foo() method?  It doesn't work for me, and it's really annoying
<natefinch> I can go to definition of types and top level functions, but not methods on types.
<natefinch> I end up having to resort to a full text search like some kind of Neanderthal.
<arosales> thumper, you guys are coding kings I am sure you can crank out all sorts of quality code
<thumper> bwahaha
<fwereade> thumper, heyhey
<thumper> arosales: we are kinda focused on several areas that will make others happier
<thumper> fwereade: hey dude
<arosales> thumper, understood. I thought it wouldn't hurt to ask though
<fwereade> thumper, I should have come on earlier and arranged things and so on
<arosales> just in case :-)
<thumper> arosales: sure
<arosales> thumper, thanks for the consideration though
<fwereade> thumper, don't suppose you're free to chat about availability sets a bit?
<thumper> arosales: if we were just sitting around drinking, then yeah, we'd help
<thumper> fwereade: sure
<arosales> thumper,  lol
<natefinch> thumper: that doesn't sound like you guys....
<fwereade> thumper, or have you just been explaining to arosales that you wouldn't do them? ;p
<arosales> thumper, I know you aren't doing that given I brought up the availability sets too
<thumper> arosales: we aren't working on the availability sets just yet, but we have talked about it...
<arosales> fwereade, that was charm tests thumper was breaking my heart on
<thumper> arosales: we are working on proxy support, local provider improvements
<fwereade> aww
<thumper> fwereade: want a hangout?
<fwereade> thumper, sgtm
<arosales> thumper, it's only midweek  :-)
<thumper> fwereade: https://plus.google.com/hangouts/_/7acpibmqriabeqpmu77ctb34ns?hl=en
<thumper> arosales: for you.. thursday morning for us
<arosales> thats 2 full days :-)  I'll let you get back to work
<fwereade> natefinch, ping
<natefinch> fwereade: pong
<thumper> natefinch: we should chat about upgrades
<natefinch> thumper: I love upgrades
<natefinch> thumper:  shall I join the hangout?
<thumper> natefinch: sure
<fwereade> natefinch, damn sorry I had a response half typed out
<thumper> natefinch: fwereade just left the hangout, but I'm still there waiting for you
<hazmat> thumper, runonallmachines .. runs on all units.. or just machines?
<thumper> hazmat: it does what it says :-)
<thumper> on the machines
<thumper> we don't have an --all-units
<thumper> if you need it, file a bug :)
<hazmat> thumper, so this isn't juju-run.. this is arbitrary script run?
<thumper> no...
<thumper> only when running on a unit does it get the unit context
<thumper> you can also run on the machine outside of a unit context
<thumper> however it will get the hook execution flock
<thumper> to make sure that it doesn't run in parallel with a hook
<hazmat> cool, was just doing my monthly check for new api methods and came across.
<thumper> :-)
<thumper> it is freaky magic shit
<hazmat> orchestration even ;-)
<thumper> yeah, that
<thumper> ansible-lite
<hazmat> thumper, so the command string needs a #!/bin/executable prepended?
<thumper> no
<thumper> we do that
<thumper> actually, we send it through /bin/bash -s
<hazmat> hmm.. so assumes bash?
<thumper> yes
 * hazmat files a bug ;-)
<hazmat> bcsaller, ^ you might be interested in this
<thumper> nah
<thumper> why?
<thumper> if you want to execute a magic script
<thumper> write it to disk and call it
<hazmat> i guess exec works.. just feels a little klunky that way
 * bcsaller reads
<thumper> bcsaller: we have 'juju run' now
<thumper> which has an api end point
<thumper> so in theory, the gui could run arbitrary code on any machine, service or unit
<bcsaller> thumper: good stuff
 * thumper considers a button on a machine in the gui called 'reboot'
 * thumper chuckles to himself
<hazmat> if only there were machines in the gui ;-)
<thumper> yeah...
<thumper> hazmat: we could reboot services :)
<thumper> and it would work as long as the service isn't colocated with the api server :)
<hazmat> thumper, ah.. that's no fun.. although the nondeterminism is pretty cool.. i'd prefer rm -Rf /
<thumper> hazmat: don't get me wrong, it would reboot
<hazmat> the openstack charms w/ ha .. do tons of rebooting.
<davecheney> thumper: o/
<davecheney> thatnks for the good news on proxy support
<thumper> hi davecheney
<davecheney> i need that super bad for testing on that platform that we can't talk about
<thumper> cool
#juju-dev 2014-01-23
<thumper> axw, wallyworld: a review plz? https://codereview.appspot.com/55840043/
<davecheney> can I bootstrap trusty environments yet with juju
<davecheney> serious question
<davecheney> i'm having trouble getting a working debootstrap'd chroot
<waigani> http://www.amazon.com/Haribo-Gummy-Candy-Sugarless-5-Pound/dp/B000EVQWKC
<waigani> axw,wallyworld,thumper http://www.buzzfeed.com/michaelrusch/haribo-gummy-bear-reviews-on-amazon-are-the-most-insane-thin
<bigjools> is it intentional that the parameter for --config= gets appended to `pwd` ?
<bigjools> oh ha, it looks for a leading / otherwise assumes it's relative.  Which fails when I start my path with ~
<dimitern> façadeName string ??
<dimitern> c'mon guys :)
<dimitern> should I start using cyrillic in variable names as well?
<jam> dimitern: I had not seen that. While go is fully UTF-8, I'm pretty sure we want to stick to ascii
<dimitern> jam, i thought so as well
<jamespage> davecheney, I have been bootstrapping trusty with juju
<jamespage> not sure what your schroot problem is - I'm not seeing that (indeed I build the juju-core with gccgo in a sbuild schroot OK)
<rogpeppe> wallyworld_: i just tried to get utils/ssh tests to pass with cgo disabled. they failed because user.Current doesn't work. i wonder whether a better approach than user.Current might be to set the owner to the same as the owner of the parent directory.
<rogpeppe> trivial review, anyone, BTW? https://codereview.appspot.com/55950043
<dimitern> rogpeppe, LGTM
<dimitern> (was looking already :)
<rogpeppe> dimitern: thanks
<rogpeppe> dimitern: i thought that was quick!
<dimitern> rogpeppe, just link the bugs to the branch please
<dimitern> rogpeppe, and mark them as appropriate
<rogpeppe> dimitern: what do you mean by "mark them as appropriate"?
<dimitern> mgz, are you about?
<dimitern> rogpeppe, i mean fix committed and assign them to yourself?
<rogpeppe> dimitern: i'd only mark one of them as fix committed
<dimitern> rogpeppe, right, the other one is just skipped
<rogpeppe> dimitern: i seem to be unable to link the bugs to the branch
<dimitern> rogpeppe, error?
<rogpeppe> dimitern: actually, one just succeeded - the search was failing. it worked eventually.
<dimitern> rogpeppe, good :)
<dimitern> mgz, if you're not working on goamz's vpc support i might pick it up now
<jam> mgz: did you get a chance to look into status? Should I ?
<rogpeppe> dimitern: ping
<dimitern> rogpeppe, hey
<rogpeppe> dimitern: i'm investigating a problem with provisionerSuite.TestLifeAsMachineAgent
<rogpeppe> dimitern: it tries to destroy a state manager machine
<rogpeppe> dimitern: i can't see quite why it wants to do that
<dimitern> rogpeppe, to test life changes?
<rogpeppe> dimitern: (it was succeeding before now because of a bug in Machine.Destroy)
<dimitern> rogpeppe, what bug?
<rogpeppe> dimitern: it allowed destruction of a machine with JobManageState
<dimitern> rogpeppe, well, for the environ provisioner, there should be a way to test Life() and the only way to do that is to call EnsureDead/Destroy on it
<dimitern> rogpeppe, at least in tests
<rogpeppe> dimitern: the odd thing is that we're making the machine we're connecting as dead
<dimitern> rogpeppe, because that's how authentication works, right? you need an existing manager machine to login as
<rogpeppe> dimitern: yes, but why does it need to be dying?
<rogpeppe> dimitern: sorry, dead
<dimitern> rogpeppe, so Life() will return something else than alive
<rogpeppe> dimitern: i don't quite get what exactly we're trying to test here.
<rogpeppe> dimitern: what provisioner functionality are we testing by having the authorized agent dead?
<dimitern> rogpeppe, we're testing the api, not the worker
<rogpeppe> dimitern: because we *can't* test in exactly the same way as currently, because it's not valid to destroy a state server machine. so i'm trying to work out the best way to fix the test.
<dimitern> rogpeppe, you mean the state/api/provisioner tests, right?
<rogpeppe> dimitern: no, state/apiserver/provisioner
<rogpeppe> dimitern: that's the only suite that has TestLifeAsMachineAgent
<dimitern> rogpeppe, ah, it's easier there - there's no requirement to add a JobManageState machine for the test, because we can override the authorizer to think the logged in user is an environ manager
<rogpeppe> dimitern: i don't want to change the tests too much, but i'd like to know what provisioner facade functionality we're testing by setting machines[0] to dead
<dimitern> rogpeppe, the Life() method
<rogpeppe> dimitern: isn't that checked by seeing the return from the other dead machine?
<dimitern> rogpeppe, i think the fix is to just delete line 53 in provisioner_test
<rogpeppe> dimitern: i don't think that works
<dimitern> rogpeppe, we don't really care if machine 0 has JobManageState
<rogpeppe> dimitern: because we want to test:
<rogpeppe> 	// 2. Environment managers can access any machine without
<rogpeppe> 	// a parent.
<dimitern> rogpeppe, yes?
<dimitern> rogpeppe, it won't have a parent
<rogpeppe> dimitern: so if machine 0 isn't an environment manager (which it won't be if we delete line 53) then we won't have an environment manager to call Life as
<dimitern> rogpeppe, why are you concerned with line 53?
<dimitern> rogpeppe, if we add 3 machines with JobHostUnits, and use EnvironManager: true in the authorizer, it should work
<rogpeppe> dimitern: ah yes, i see that now. i thought that we needed to use machines[0] because it was the environ manager
<rogpeppe> dimitern: but we're bypassing all that
<dimitern> rogpeppe, exactly
<dimitern> rogpeppe, i would be more concerned if we had a similar case for the client-side api tests
<dimitern> rogpeppe, because there we can't play with a fake authorizer
<rogpeppe> dimitern: unfortunately that breaks another test, sigh.
<dimitern> rogpeppe, which one?
<rogpeppe> dimitern: TestAPIAddresses and TestStateAddresses, which both require a state server
<rogpeppe> dimitern: i'm changing the tests to add a state server machine and 3 others, then renumbering everything
<dimitern> rogpeppe, is this still in the apiserver tests?
<rogpeppe> dimitern: yes
<dimitern> rogpeppe, ah, I see - yeah for these two the set up will be a bit different
<rogpeppe> oh bugger it, i'll make another test suite
<fwereade> dimitern, mgz: meeting
<jam> fwereade: so my machine is acting *really* choppy, can you try to run the meeting?
 * dimitern lunch
<TheMue> rogpeppe: around?
<rogpeppe> TheMue: yup
<TheMue> rogpeppe: could it be that in case of stopping an application with ctrl-c a defer doesn't run?
<rogpeppe> TheMue: that's definitely true
<rogpeppe> TheMue: unless you specifically arrange for it
<rogpeppe> TheMue: same as calling os.Exit
<TheMue> rogpeppe: ok, then I have to do it
<rogpeppe> TheMue: in the tests?
<TheMue> rogpeppe: how would you do it? by runtime.SetFinalizer?
<rogpeppe> TheMue: no, that won't work
<rogpeppe> TheMue: is this in the tests, or the production code?
<TheMue> rogpeppe: no, debug-log will be stopped by ctrl-c, and in this case the server-side logger isn't stopped
<rogpeppe> TheMue: that's easy to arrange; i don't think you want to rely on explicit shutdown for that
<TheMue> rogpeppe: if there's a nice way I'm happy
<rogpeppe> TheMue: you can register the server side logger as a resource that will be stopped when the connection goes away
<TheMue> rogpeppe: so far I had a defer, but that caused problems
<TheMue> rogpeppe: it is registered
<rogpeppe> TheMue: so why isn't it stopping when the client drops the connection?
<TheMue> rogpeppe: but calling debug-log another time later causes problems (will make a paste next time)
<rogpeppe> TheMue: is this code that's already in trunk?
<TheMue> rogpeppe: the logged content somehow contains itself (older api call and response data nested inside new ones)
<TheMue> rogpeppe: no, it's under development
<TheMue> rogpeppe: let me just make a fix at one of the branches that is already in review, found my mysterious bug there today, and later I can post the successor
<TheMue> rogpeppe: first one is the server side, second one the new debug-log command (at a very early stage)
<TheMue> rogpeppe: but yeah, I now get my debug log filtered via the api :D
<TheMue> rogpeppe: if you wanna see some of the head-aching shit I get: http://paste.ubuntu.com/6802890/ :/ scroll right and you'll see
<rogpeppe> TheMue: wow!
<rogpeppe> TheMue: that's some serious backslash escaping
<TheMue> rogpeppe: yeah, absolutely, the responses contain other responses
<TheMue> rogpeppe: btw, what I wanna change now is really weird too. we've got code like watcher, ok := resources.Get(Id).(state.Watcher)
<TheMue> rogpeppe: if not ok we report that the id is unknown
<rogpeppe> TheMue: BTW I find launchpad.net/rjson/cmd/rjson useful for unpacking json requests into something readable
<TheMue> rogpeppe: but in my case the id has been ok, but the interface hasn't been implemented
<TheMue> rogpeppe: thx for the hint, will take a look at it
<rogpeppe> TheMue: oh, of course i see why you're seeing responses inside responses
<rogpeppe> TheMue: because you're using the API to get the log stream
<rogpeppe> TheMue: I think that's actually not a good idea
<TheMue> rogpeppe: sure, that has been the idea and my task
<rogpeppe> TheMue: i think it would probably be quite a bit better if the log streamed out through a normal http GET request
<rogpeppe> TheMue: in a similar way to the way that charms are currently uploaded.
<rogpeppe> TheMue: does that make sense?
<TheMue> rogpeppe: beside the command the UI guys want the API
<rogpeppe> TheMue: the GUI guys can issue a GET request easily too
<TheMue> rogpeppe: it has been a task set by jam, and william has already started reviewing it
<TheMue> rogpeppe: could you tell me what's wrong with a watcher?
<rogpeppe> TheMue: watchers are not ideal for streaming large quantities of data when you can't drop messages
<rogpeppe> TheMue: they're designed for watching a stateful system (the juju state)
<rogpeppe> TheMue: when streaming the log, we want as much bandwidth as possible, otherwise we're limited by network latency
<TheMue> rogpeppe: where is the limit?
<rogpeppe> TheMue: TCP is ideal for that, and that's what a GET request gives you
<rogpeppe> TheMue: the limit on what?
<TheMue> rogpeppe: using watchers. sure, TCP gives higher bandwidth, but only if that's actually a requirement.
<rogpeppe> TheMue: of course it's a requirement - the log is huge amounts of data. it's really good if clients can keep up.
 * TheMue has trouble with changing a concept and implementation that multiple people have already agreed on, after a short discussion here on irc
<rogpeppe> TheMue: i understand that, sorry
<rogpeppe> TheMue: i have suggested this implementation multiple times in the past
<TheMue> rogpeppe: the log is filtered and limited on server-side, that's the job of the tailer
<rogpeppe> TheMue: that's fine - you can still do that
<TheMue> rogpeppe: so you start with initially let's say 10 lines and get deltas, filtered for the entity (machine/unit) you want
<rogpeppe> TheMue: you'll just send the output to the http.ResponseWriter rather than using the stringswatcher
<TheMue> rogpeppe: that reduces the bandwidth problem
<rogpeppe> only if you're actually filtering
<TheMue> rogpeppe: would you mind discussing this as a design decision with jam and william?
<rogpeppe> TheMue: i would be happy to, if either of them was around
<TheMue> rogpeppe: can do so later
<TheMue> rogpeppe: so I'll continue with the current way, there are right now many stakeholders expecting it that way
<rogpeppe> TheMue: the other advantage of this approach is that you don't have to worry about exponentially repeating log requests
<rogpeppe> TheMue: ok. but you know what i'll say in the review :-)
<TheMue> rogpeppe: you can do so, no prob. then there will be a good discussion. the review has been around for quite a while and william and dimitern took a look at it (only comments by william so far)
<rogpeppe> TheMue: what's the link for the review?
<TheMue> rogpeppe: will post it here again after the next update to avoid interference, have to do one change there before
<TheMue> rogpeppe: it won't take long, I've already tested it in the successor branch
<mgz> sinzui: two things: I retargetted a bunch of bugs, yell at me if you have issues; please review https://codereview.appspot.com/56020044
<sinzui> mgz, I saw. I was going to retarget them to 1.17.2 myself
 * sinzui has to create it first though
<mgz> renaming the milestone would also work as a trick, no?
<mgz> (I don't really like the workflow of targetting at next minor dev release, and bumping and bumping)
<mgz> (but it does work kinda with the landing bot flipping the fix committed bit itself)
<sinzui> mgz: We are getting pressure from users who want to bleed. We need to release more often...I don't think we will ever know how many point releases are needed to get to stable. maybe we target to 18, and pull the few bugs we care about into the point release.
<mgz> sinzui: that would be better I think
<mgz> our bleeding users need to love destroy-environment though, dev releases need to break things
<rogpeppe> fwereade: ping
<fwereade> rogpeppe, meeting, reping in 15 please?
<rogpeppe> fwereade: ok
<rogpeppe> dimitern: i think i've discovered a bug in worker/firewaller; can i run through it with you please?
<dimitern> rogpeppe, sure, what is it?
<rogpeppe> (i'm not sure it was the one i'm looking for that causes the cmd/jujud tests to hang up, but it still looks buggy)
<rogpeppe> dimitern: in Firewaller.startMachine, if we get an error calling fw.unitsChanged, we return immediately
<rogpeppe> dimitern: but we've just added a new machined to the machineds map
<rogpeppe> dimitern: so it hasn't got a running watch loop
<rogpeppe> dimitern: so when the firewaller tries to quit later, it'll wait forever for that machined
<rogpeppe> dimitern: does that make sense to you?
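The hang rogpeppe describes reduces to this pattern (hypothetical types, not the actual firewaller code): the machined is registered in the map before its watch loop is running, so an early error return leaves an entry the shutdown path waits on forever. One fix, sketched here, is to deregister on the error path:

```go
package main

import "fmt"

type machined struct{ done chan struct{} }

type firewaller struct {
	machineds map[string]*machined
}

// startMachine registers the machine, then may fail before its
// watch loop has started. Without the delete on the error path, the
// map entry has no loop to ever close done, and stop() hangs.
func (fw *firewaller) startMachine(id string, fail bool) error {
	md := &machined{done: make(chan struct{})}
	fw.machineds[id] = md // registered before the loop is running
	if fail {
		delete(fw.machineds, id) // fix: undo the registration
		return fmt.Errorf("unitsChanged failed for machine %s", id)
	}
	go func() { close(md.done) }() // stand-in for the watch loop
	return nil
}

// stop waits for every registered machined's loop to finish.
func (fw *firewaller) stop() {
	for id, md := range fw.machineds {
		<-md.done // would block forever on a machined with no loop
		fmt.Println("stopped", id)
	}
}

func main() {
	fw := &firewaller{machineds: make(map[string]*machined)}
	fw.startMachine("0", false)
	fw.startMachine("1", true) // error path; without the delete, stop() hangs
	fw.stop()
}
```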
<dimitern> rogpeppe, is this something i changed with the api introduction or?
<rogpeppe> dimitern: i dunno
<rogpeppe> dimitern: perhaps the API introduced some other error which triggered this issue
<rogpeppe> dimitern: yes, it looks like it's a recent change
<dimitern> rogpeppe, some parts of the code had to be changed, because all api methods return an error as well
<dimitern> rogpeppe, so I had to change a few places to either return the error or kill the tomb and return
<rogpeppe> dimitern: actually it looks like the bug was still there before
<rogpeppe> dimitern: i'll see if fixing it fixes my problem
<dimitern> rogpeppe, whew at least it's not my doing
<rogpeppe> dimitern: :-)
<dimitern> rogpeppe, and i did test it thoroughly on ec2 after merging the api changes
<rogpeppe> dimitern: (i know that feeling very well)
<dimitern> rogpeppe, actually i had to stop myself from tearing most of the firewaller tests apart and getting rid of 50% duplicated code, i'm sure the main code can be improved in such a manner as well
<jamespage> fwereade, sorry to be a pita but can I bring bug 1271941 to your attention
<jamespage> https://bugs.launchpad.net/juju-core/+bug/1271941
<hatch> yesterday I built juju-core from trunk and it doesn't look like it works to deploy locally on precise. After bootstrapping and deploying the GUI I only get a list of two machine names when I do juju status -e local
<hatch> has the functionality of status been changed?
<hatch> version is 1.17.1
<natefinch> hatch: status is mostly the same.... what information is it missing?
<hatch> natefinch https://www.evernote.com/shard/s219/sh/e97b60b5-20e8-4d3e-bded-3c0f62efa179/23d15a7ed70c5bdade557146fc08e377
<hatch> natefinch so...all of it? ;)
<hatch> lxc-ls shows two machines
<hatch> I can tear it down and try again but I wanted to check here first
<natefinch> hatch: heh, that does seem somewhat... spartan
<natefinch> hatch: it definitely should have all the same information
<hatch> ok thanks I'll tear it down and try again, if I get the same issue I'll file a bug
<fwereade> jamespage, I'm going to bounce that one straight over to thumper (I'll mail him) -- last time I saw him he said he was going to look into lxc on trusty, and this is highly likely to be related
<jamespage> fwereade, I confirmed with lxc upstream folks that the way autostart is specified has changed
<jamespage> is now lxc config for the container
<fwereade> jamespage, ok, that makes sense, thanks
<fwereade> jamespage, I see that's already in the bug, double-thanks
<sinzui> fwereade, jam, I was planning on 1.17.1 release today. Are we tracking mramm's request to update goose with a bug?
<mramm> sinzui: I think mgz said he had that fix in, but was working on a follow up.
<fwereade> sinzui, mramm, mgz: sorry, I'm not clear what *updating goose* gives us
<fwereade> sinzui, mramm, mgz: my understanding was that it was a prerequ for the forthcoming branch
<mramm> fwereade: right, it requires both goose and juju changes
<mramm> or at least that's what I understood as well
<mgz> that's it.
<mgz> if we do 1.17.1 today, the networking fix is not making it
<sinzui> mgz, I can delay till tomorrow.
<fwereade> mgz, and your ETA for reviewable code was later today?
 * sinzui wants to stop explaining to people that daily builds won't get them fixes
<fwereade> mgz, sinzui: tomorrow sounds smart then
<mgz> okay, sounds like a plan.
 * rogpeppe goes for lunch
<TheMue> fwereade: any chance to take the next look at my branch?
<fwereade> TheMue, if not by EOD, I'll do it before bed
<TheMue> fwereade: EOD or tomorrow, don't destroy your evening
<TheMue> fwereade: I'm currently on the client side and it looks good so far, only one weird behavior left
<TheMue> fwereade: but rogpeppe has a different idea about the design, not using API watchers but HTTP requests
<TheMue> fwereade: I asked him to discuss it with jam and you
<fwereade> TheMue, added a couple of comments, please let me know your responses, expect mine slow, I'll be meeting in a sec
<rogpeppe> fwereade: dammit, missed you again :-)
<rogpeppe> fwereade: ok, so i think i said this before: i think the logging streaming would be better done outside the main RPC-oriented API interface
<rogpeppe> fwereade: either as a long lived http GET request or a unidirectional websocket connection, or whatever is most appropriate
<rogpeppe> fwereade: the API is well designed for state-oriented watchers, but not so much for streaming large quantities of data
<rogpeppe> fwereade: so I'm suggesting we go for an approach similar to the charm upload stuff (except the other direction, of course)
<TheMue> rogpeppe: btw, where is the log in an HA environment? would be an interesting aspect for the implementation too
<fwereade> rogpeppe, elaborate on the impact please?
<rogpeppe> fwereade: the impact of what?
<rogpeppe> fwereade: as far as implementation goes, i think this makes both server and client simpler
<rogpeppe> fwereade: as far as runtime performance goes, the download speed will not be latency-limited
<rogpeppe> fwereade: (and you'll win quite a bit by losing the JSON encapsulation)
<fwereade> rogpeppe, I guess what I'm asking is at what point you think the current implementation would actually hurt us, weighed against the benefits of providing an implementation that works for the CLI (probably whole-env) and the GUI (almost certainly unit-/machine-specific)
<rogpeppe> fwereade: yes, i think that it will actively affect the efficacy of debug-log in real environments
<rogpeppe> fwereade: GUI connections tend to be very high-latency
<rogpeppe> fwereade: plus i don't think it would actually be at all difficult to implement - there's actually less logic required than when using a watcher
<TheMue> fwereade: thx for review, added notes
<rogpeppe> TheMue, fwereade: just looking at the apiserver/debugger implementation. it's really not right, i'm afraid.
<rogpeppe> TheMue: there is absolutely no need to drop data on the floor
<TheMue> rogpeppe: we already agreed on not doing that
<TheMue> rogpeppe: see comments
<rogpeppe> TheMue: you don't need to return an error either
<rogpeppe> TheMue: we can just wait until the client is ready to read.
<TheMue> rogpeppe: if a client doesn't drop the connection but also doesn't call Next()?
<rogpeppe> TheMue: yes
<TheMue> rogpeppe: but the tailer fills its buffer during this time
<rogpeppe> TheMue: with a separate websocket connection, we can drop all the logic from that file
<rogpeppe> TheMue: why does that matter?
<TheMue> rogpeppe: too much memory consumption?
<rogpeppe> TheMue: what? the data is in a file, no?
<rogpeppe> TheMue: surely all that's needed here is NewTailer(logfile, websocketConnection, ...) ?
<TheMue> rogpeppe: the tailer polls out of it and writes to the writer
<TheMue> rogpeppe: so you now also want to change the tailer?
<rogpeppe> TheMue: there's no need to change the tailer
<rogpeppe> TheMue: the tailer does not use unlimited memory if a call to Write blocks, does it?
<TheMue> rogpeppe: afaik not, yes
<TheMue> rogpeppe: it simply calls Write() of the writer
<rogpeppe> TheMue: exactly
<rogpeppe> TheMue: and that writer can be a websocket
<TheMue> rogpeppe: ok, and what would the client side (cli and ui) look like?
<rogpeppe> TheMue: so in that case you don't need any of the logic in debugger/logtailer.go at all
<rogpeppe> TheMue: the client side would just make a websocket connection exactly the same as it does for the current API
<rogpeppe> TheMue: you would need a little bit of extra logic to allow the client to adjust the filter
<jcastro> natefinch, hah, I was doing "juju status foo"
<jcastro> I forgot the -e!
<rogpeppe> jcastro: i saw that!
<TheMue> rogpeppe: a second, different way beside the current one. and in an HA env, where do they connect to?
<natefinch> jcastro: I was thinking "hey yeah, that would be useful...." and then I was like.... "wait a minute..."
<rogpeppe> TheMue: it's similar to the way that the charm upload currently works
<jcastro> I was like "man that is a great idea let me try it." but it always returned "local" no matter what
<natefinch> jcastro: sounds like the bug is that we don't complain about extra parameters
<jcastro> natefinch, it's nice when you ask for things and someone else has thought of it and it's like nearly finished or done by the time you ask, heh.
<TheMue> rogpeppe: sounds relatively ok so far (only having another way beside the api doesn't convince me). but to change that, john, william, gary (who is waiting for the api access) and curtis have to agree
<rogpeppe> TheMue: i've chatted with gary and the gui team. they think it's a reasonable idea (except they don't want to delay things)
<TheMue> rogpeppe: currently cannot estimate the effort
<rogpeppe> TheMue: i'm quickly putting together an example so you can see what i'm talking about
<TheMue> rogpeppe: thanks
 * TheMue is off for today, dinner. will take a look in again later
<hatch> I'm trying to build juju-core on another machine and after following the steps in the readme, when I get to using `go install` it throws a few errors saying it can't find crypto, openpgp, websocket, clearsign... any ideas on how to resolve this?
<hatch> for example `cannot find package "code.google.com/p/go.crypto/ssh"`
<mgz> argh. arghargh.
<mgz> rogpeppe: have a moment? by what mechanism is coretesting.CACert treated as valid by the default go http stuff?
<mgz> I want a not-valid cert to add a test for an error path
<rogpeppe> mgz: it's added to the valid root certificates in the request
<rogpeppe> pwd
<mgz> where?
<rogpeppe> g
<rogpeppe> one mo
<mgz> :)
<rogpeppe> mgz: look in state/api/apiclient.go:/^Open
<mgz> specifically, if I get http.DefaultClient, that doesn't complain on an http request against a tls-created server
<mgz> I think I'm missing a core idea here
<rogpeppe> mgz: https request, right?
<mgz> ...probably not, that may well be the issue
<mgz> it's pretty confusing that the http: scheme works against tls servers at all
<rogpeppe> mgz: yeah
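For reference, the mechanism rogpeppe points at in Open can be sketched like this (hypothetical helper, not the juju code): the environment's CA certificate is added to a tls.Config's RootCAs pool, so the server's cert validates against it even though no system CA signed it. A plain http:// request never performs that validation, which is part of mgz's confusion:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
)

// newClientWithCA returns an http.Client that trusts only the given
// CA certificate (PEM-encoded). Certificate verification against
// this pool happens only for https:// requests.
func newClientWithCA(caCertPEM []byte) (*http.Client, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCertPEM) {
		return nil, fmt.Errorf("invalid CA certificate")
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	// Garbage PEM is rejected before any request is made.
	_, err := newClientWithCA([]byte("not a cert"))
	fmt.Println(err)
}
```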
<rogpeppe> TheMue: https://codereview.appspot.com/56100043/
<rogpeppe> TheMue: see state/apiserver/log.go which implements all the logic
<rogpeppe> TheMue: at least, it might do - i haven't actually run the code
<rogpeppe> TheMue: but i *think* that's about all that's needed
<mgz> ...this is no longer a simple change
<rogpeppe> TheMue: it allows the client to asynchronously change the filtering, and it uses a websocket with one frame for each line of the log
<rogpeppe> mgz: what error path are you trying to check?
<rogpeppe> fwereade: this is what i was thinking of for the logging interface: https://codereview.appspot.com/56100043/
<mgz> rogpeppe: well, that gets me on to an obscure https error "local error: record overflow"... possibly not from inside our code
<rogpeppe> mgz: ha
<rogpeppe> mgz: what *are* you doing :-)
<rogpeppe> pwd
<mgz> trying to write some tests for LoadStateFromURL after making it do the right thing
<rogpeppe> mgz: can't you just use a different CACert to connect?
<mgz> the code for the server (which may be at fault?) is in environs/httpstorage/backend.go
<mgz> rogpeppe: I could, at the moment I'm using the same one from coretesting both sides and it's not happy
<rogpeppe> mgz: you can't use CACert on the server side
<mgz> can I get a better traceback for this error
<mgz> rogpeppe: ah. that's what some existing testing did.
<rogpeppe> mgz: find out where "record overflow" is generated and add a log print
<mgz> rogpeppe: I'm pretty sure it's from inside net/http
<rogpeppe> mgz: that's ok, just bung a log print in there
<mgz> fair enough
<rogpeppe> mgz: (use the standard log package)
<rogpeppe> mgz: the reason CACert doesn't work on the server side is that it hasn't got the private key in there. you need to use CAKey
<mgz> the server gets setup with both
<rogpeppe> mgz: ah, well that should be fine
<rogpeppe> mgz: BTW for your debug print, i often find this package useful: code.google.com/p/rog-go/exp/runtime/debug
<rogpeppe> mgz: log.Printf("something; callers: %s", debug.Callers(0, 20))
<rogpeppe> mgz: it'll print the current stack trace on a single line (just file/line numbers)
<mgz> ta
<mgz> rogpeppe: proposing what I have in case you have any insight from looking at it
<rogpeppe> mgz: ok. i'm afraid i've got no more time tonight
<rogpeppe> mgz: i'm trying to get a branch proposed before supper happens in 10 minutes :-)
<mgz> :D
<mgz> I think it's just our testing stuff is too naive around tls somewhere. but it's not clear to me how
<rogpeppe> mgz: it's entirely possible
<mgz> proposed, I may have inspiration going back to it later.
<rogpeppe> if anyone cares for a review, there are lot of files in this one, but the change is quite simple: https://codereview.appspot.com/54230044/
<rogpeppe> right, that's me
<rogpeppe> i smell fooood
<rogpeppe> g'night all
<hatch> is there anyone besides dimitern who is familiar with the PutCharm code? In testing the GUI implementation I am getting a 405 (Method not allowed) when trying to post the charm on a trunk build of juju-core
<natefinch> hatch: I don't know anything about putcharm, unfortunately.  However, the log might have more useful information
<hatch> natefinch is there a separate log from the ones in /var/log/juju which may contain more information? (I've already checked those)
<natefinch> hatch: no, that's where I'd look..... 405 sounds like the server isn't liking the request
<hatch> yeah which is odd because that's what it's supposed to do :) oh well no problem I'll chat with Dimiter when he gets in tomorrow thanks anyways
<natefinch> if you overlap with ian, it looks like he's had his fingers in there semi-recently too
<_thumper_> fwereade: around?
<natefinch> o/ thumper
<thumper> hi natefinch
<thumper> waigani: https://bugs.launchpad.net/juju-core/+bug/1271941
<natefinch> thumper: is there an environment variable that describes where the upstart directory is, or is it just always /etc/init?
<thumper> natefinch: always /etc/init
<thumper> pretty sure
<natefinch> dang... trying to mock out upstart stuff so I can test the code I wrote (which interacts with upstart)
<axw> natefinch: I'm pretty sure in the juju-core/upstart package you can specify the init dir
<axw> not sure if that helps you or not
<axw> yeah - the local provider in trunk changes it in testing (I'm about to change all that though)
<natefinch> axw: you can change it per-service that gets created, but that doesn't help from outside a function that I want to test. There's no global override.  Which pretty much means I have to make one.... not the end of the world, but kind of ugly to expose
<axw> ah ok
<axw> yep
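The "global override" natefinch ends up needing is a common Go testability pattern: a package-level variable that defaults to the real path and that tests can redirect. A minimal sketch (hypothetical names, not the juju-core/upstart package itself):

```go
package main

import "fmt"

// initDir is the upstart configuration directory. It is a variable
// rather than a constant precisely so tests can point it at a
// temporary directory; production code leaves the default alone.
var initDir = "/etc/init"

// confPath returns the upstart conf file path for a service.
func confPath(service string) string {
	return initDir + "/" + service + ".conf"
}

func main() {
	fmt.Println(confPath("jujud-machine-0"))
	// In a test: set initDir to a temp dir (e.g. c.MkDir() under
	// gocheck) in SetUpTest and restore it in TearDownTest.
}
```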
<natefinch> ....wow, I can't believe that worked on the first try
<natefinch> thumper: I made a bunch of changes and then realized I was on the wrong branch.  Nothing committed yet. How can I switch branches and keep the local edits?
<thumper> natefinch: I think if you just do 'bzr switch' it should just work
<natefinch> thumper: ahh, cool
<thumper> alternatively, use 'bzr shelve' change, then 'bzr unshelve'
<natefinch> thumper: thanks, bzr switch just worked
<thumper> coolio
<wallyworld> thumper: https://pastebin.canonical.com/103503/
#juju-dev 2014-01-24
<hazmat> thumper, how's the proxy support coming? was specifically curious if the charm store downloads (addcharm via api) were using it yet
<thumper> hazmat: hey
<thumper> ah... wat?
<thumper> how would the charm store stuff use proxies?
<hazmat> thumper, that's the env downloading charms from the store via proxy
<thumper> ah...
<thumper> not tested that
<hazmat> k
<thumper> you know what... not even considered that :-)
<thumper> support has been to set the environment for all the charm hooks
<thumper> and for juju-run
<thumper> but not for juju itself...
<thumper> hazmat: ETA would be early next week
<hazmat> thumper, cool
<waigani> wallyworld: www.youtube.com/watch?v=KTc3PsW5ghQ
<davecheney> shit, status is fast!
<jamespage> davecheney, still awake?
<rogpeppe> TheMue, fwereade: did you see https://codereview.appspot.com/56100043/ ?
<TheMue> rogpeppe: just going through your branch
<rogpeppe> TheMue: ok
<TheMue> rogpeppe: btw, the removed logger package hasn't been written by me, it's for watching the logging configuration. I used it at first without knowing its intention, but then moved my stuff into the debugger package.
<rogpeppe> TheMue: oh, sorry - they seemed to be related
<TheMue> rogpeppe: yeah, had the same wrong thought
<TheMue> rogpeppe: fwereade told me then about its intention
<rogpeppe> TheMue: so why was logTailer in the debugger package?
<TheMue> rogpeppe: because it only is a private helper type for the debugger API
<rogpeppe> TheMue: ah, it's kind of weird that "debugger" was separate from "logger", but anyway, i see now
<TheMue> rogpeppe: thought so too
<TheMue> rogpeppe: some of the changes don't really belong to the logging, do they?
<rogpeppe> TheMue: a couple of changes were forced on me because i couldn't propose without govet passing cleanly
<TheMue> rogpeppe: and the code in api/client.go isn't needed anymore
<rogpeppe> TheMue: right
<TheMue> rogpeppe: so how would a client call that? never worked with websockets, only implemented directly on TCP so far
<rogpeppe> TheMue: the same way a client gets a websocket connection to the regular API
<rogpeppe> TheMue: (except with a different URL path)
<TheMue> rogpeppe: sorry, a bit more practical detail would be nice, so that I can fill my knowledge gaps directly with you
<TheMue> rogpeppe: for the API usage I simply used already existing functionality
<rogpeppe> TheMue: ok, so... look at state/api/apiclient.go
<TheMue> rogpeppe: yep
<rogpeppe> TheMue: the Open function in there makes a new websocket by dialling the API server
<TheMue> rogpeppe: ic, ok
<rogpeppe> TheMue: the Client.AddLocalCharm method is an example that makes its own http request
<rogpeppe> TheMue: so to do the logging stuff, i would factor out some of the logic from Open so it can be reused in another request
<TheMue> rogpeppe: hehe, just had the same idea
<rogpeppe> TheMue: i wouldn't include any of the retry logic in that
<TheMue> rogpeppe: so while the server side gets simpler, the client side gets more complex (in the sense that so far I've been able to simply reuse the API and watcher mechanisms)
<rogpeppe> TheMue: it shouldn't be much more code at all - most of it just code movement
<TheMue> rogpeppe: as long as you're sure what to do and this is no new area for you. don't underestimate the learning curve. you've been doing all this API and websockets stuff for quite a while.
<TheMue> rogpeppe: but gives me the chance to take a deeper look into it too. ;)
<rogpeppe> TheMue: i'll sketch out a little more of the client side if you like
<TheMue> rogpeppe: only if you've got the room for it, a quick outline. the first hints here already helped a lot
<TheMue> rogpeppe: and I need the ok by fwereade
<TheMue> rogpeppe: but I like the approach. I only would have liked the API to be able to handle streams of data by default, too
<rogpeppe> TheMue: most of our API stuff doesn't work with streams of data, but streams of changing state, which is a different thing
<rogpeppe> TheMue: with the latter it's fine to drop intermediate state
<TheMue> rogpeppe: I know, but the current way is that every new requirement for streamed data gets its own extra mechanism
<TheMue> rogpeppe: a single more generic implementation, integrated in the api but different from today's watchers, could then be reused better, yes
<rogpeppe> TheMue: you mean another watcher type?
<TheMue> rogpeppe: maybe, would have to make more thoughts about it
<rogpeppe1> TheMue: something like this for the client side: http://paste.ubuntu.com/6807442/
<TheMue> rogpeppe1: is authentication handled with it?
<rogpeppe1> TheMue: ah, no, you'll need to set the basic-auth info in the config headers
<TheMue> rogpeppe1: could you add it to your branch above, so we can discuss it with fwereade?
<rogpeppe1> TheMue: unfortunately i think that means you'll have to copy the basicAuth function from net/http because it works on a Request not a Header
<rogpeppe1> TheMue: ok, will do
<TheMue> rogpeppe1: thx
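The basicAuth copy rogpeppe1 mentions is small. A sketch of what it might look like (hypothetical helper: net/http's basicAuth function is unexported and Request.SetBasicAuth needs a *Request, while the websocket config only exposes an http.Header):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
)

// basicAuthHeader builds the Authorization header that
// Request.SetBasicAuth would set, for use where only an
// http.Header is available (e.g. a websocket dial config).
func basicAuthHeader(user, password string) http.Header {
	auth := base64.StdEncoding.EncodeToString([]byte(user + ":" + password))
	return http.Header{"Authorization": {"Basic " + auth}}
}

func main() {
	h := basicAuthHeader("machine-0", "secret")
	fmt.Println(h.Get("Authorization"))
}
```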
<rogpeppe1> TheMue: https://codereview.appspot.com/56100043/diff/40001/state/api/client.go
<TheMue> rogpeppe1: already looking
<TheMue> rogpeppe1: you know when fwereade is here again?
<rogpeppe1> TheMue: no, sorry
<fwereade> TheMue, sorry, I have been around, just trying to catch up on some other bits
<TheMue> fwereade: ah, ok
<TheMue> fwereade: roger's solution looks nice, quite different, but nice
<TheMue> fwereade: you should take a look at it
<benonsoftware> Hiya, I'm wondering if there's anyway to make juju launch spot instances?
<hazmat> benonsoftware, nutshell no. its possible though to use external provisioning with the manual provider and just have juju manage the workload
<benonsoftware> hazmat: Ah, okay then. Thanks.
<rogpeppe1> would anyone be able to review this, please? https://codereview.appspot.com/54230044/
<TheMue> rogpeppe1: taking a look
<rogpeppe> TheMue: thanks
<rogpeppe> TheMue: we had a discussion about the debug-log stuff, BTW. we're going to talk about it a bit more this afternoon.
<TheMue> rogpeppe: great
<TheMue> rogpeppe: reviewed
<rogpeppe> TheMue: thanks
<TheMue> rogpeppe: yw
<rogpeppe> dimitern: any chance of a second look at https://codereview.appspot.com/54230044/ please?
<dimitern> rogpeppe, looking
<rogpeppe> dimitern: ta!
<dimitern> rogpeppe, reviewed
<rogpeppe> dimitern: ta!
<dimitern> niemeyer, hey
<niemeyer> dimitern: Yo
<dimitern> niemeyer, when you have time, can you please review and possibly comment on the proposed changes we need in goamz (see the mail I've just sent for a link to the document)
<niemeyer> dimitern: Definitely
<niemeyer> dimitern: Should we have a call next week about it?
<fwereade> niemeyer, dimitern, +1, invite me please
 * fwereade forgot he has to collect laura, bbl
<niemeyer> dimitern: Can you please book it and invite us?
<niemeyer> fwereade: Uh oh :)
<dimitern> niemeyer, sure and preference for day/time?
<niemeyer> dimitern: Wednesday is probably a good day, as it's already filled with meetings
<niemeyer> dimitern: 12:30 UTC?
<dimitern> niemeyer, will do
<dimitern> niemeyer, done - you should receive an invite
<niemeyer> dimitern: Thanks
<dimitern> fwereade, next wednesday, 12:30 utc
<rogpeppe> anyone else see this error occasionally? (this is second time i've seen it today) ... value *net.OpError = &net.OpError{Op:"local error", Net:"", Addr:net.Addr(nil), Err:0x14} ("local error: bad record MAC")
<dimitern> rogpeppe, nope
<mgz> okay, I'm at the "hey, this should all work, but man I lack tests" stage with the network stuff
<rogpeppe> natefinch: i see the replicaset tests fail reasonably often, usually in TestAddRemoveSet
<rogpeppe> natefinch: usually with "panic: no reachable servers" (and a very long run time - 167s in this case)
<natefinch> rogpeppe: hmm
<natefinch> rogpeppe: I think this is just mongo taking forever to come up for some reason
<rogpeppe> natefinch: probably
<rogpeppe> natefinch: it's a bit worrying though, because maybe it's *never* coming up and perhaps this might make juju HA flaky for real
<natefinch> rogpeppe: yeah :(
<rogpeppe> natefinch, fwereade: https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig?authuser=1
<sinzui> mgz, fwereade, which bug should I be watching to know the rev we want to release as 1.17.1?
<mgz> sinzui: sec, I'll attach a couple to the milestone
<mgz> bug 1241674 is mine
<_mup_> Bug #1241674: juju-core broken with OpenStack Havana for tenants with multiple networks <cts-cloud-review> <openstack-provider> <juju-core:In Progress by gz> <https://launchpad.net/bugs/1241674>
<frankban> core devs: using trunk (revno 2253) I am not able to deploy a local charm in a lxc env. unit logs: http://pastebin.ubuntu.com/6808945/, machine logs include permission denied error responses to CharmArchiveURL requests
<sinzui> mgz, oh I didn't realise the issue was that specific bug. I thought the issue was something in a recent change to openstack
<sinzui> thank you mgz
<fwereade> sinzui, mgz's one is the only really important one, I think
<mgz> I'm currently done... apart from I've got some tests losing my new config value and can't work out why
<rogpeppe> anyone reasonably familiar with worker/provisioner?
<rogpeppe> i've seen a sporadic test failure, and i can't work out quite what a particular test is supposed to be testing
<fwereade> rogpeppe, which test?
<rogpeppe> fwereade: TestProvisioningSafeModeChange
<rogpeppe> fwereade: at the end of it, it ensuredead+removes a machine, and expects the provisioner not to remove the instance
<rogpeppe> fwereade: that seems kind of odd to me
<rogpeppe> fwereade: it seems like it's relying on the fact that the provisioner hasn't seen the new machine
<fwereade> rogpeppe, so, the deal is that because the instance got *removed* the provisioner has no basis for taking down the instance
<fwereade> rogpeppe, but ISTM that that's a bad test
<fwereade> rogpeppe, the provisioner should be stopped for those terminal actions on m3, m4, else it could plausibly see m4 when dead, and legitimately remove it
<rogpeppe> aw, frick, *another* sporadic failure
<rogpeppe> fwereade: exactly
<rogpeppe> fwereade: and that's how the test failed
<fwereade> rogpeppe, if that's what you see too, though, at least it's one we've found and can fix, I think
<rogpeppe> fwereade: but if we stop the provisioner, then what are we testing?
<fwereade> rogpeppe, ha
<fwereade> rogpeppe, that just reduces to TestProvisionerSafeMode, doesn't it?
<fwereade> rogpeppe, and there's the advantage of mocking out the api server
<rogpeppe> fwereade: ISTM that a better test would actually add an instance behind the scenes
<fwereade> rogpeppe, that sounds good to me
<fwereade> rogpeppe, I think that's equivalent
<mgz> well, I hate our testing infrastructure
<rogpeppe> mgz: it could be better :-)
<mgz> I have no clue where it's losing the config value, or how I should actually just write a damn test with a different config value
<mgz> okay, finally
<mgz> randomly try things till it works joy
<rogpeppe> mgz: sorry, i would help but i'm up to my eyeballs currently
<mgz> okay, I'm done
<mgz> proposing now
<mgz> meh, need another goose bump but hey
<rogpeppe> oh balls, another sporadic failure. that's four in a row
<mgz> goose bump... I just realise that was a pun
<mgz> part #1: https://codereview.appspot.com/56670043
<mgz> I need that one sort of right now, to land so I can bump dependencies.tsv
<natefinch> mgz: I can review
<mgz> ta! it's a bit of a cop-out proposal for now
<mgz> I'll put the main bit up sans the bump too for reference
<rogpeppe> hmm, i think it's wrong that the local provider should have added an authorized key to my ~/.ssh/authorized_keys
<rogpeppe> fwereade: would you agree ?
<fwereade> rogpeppe, ha! definitely
<fwereade> rogpeppe, well spotted
<rogpeppe> fwereade: also, trunk is currently producing zillions of errors when you've got a badly formatted key (for instance the FakeConfig auth keys)
<fwereade> rogpeppe, and that's causing sporadic failures in jujud tests
<rogpeppe> fwereade: (try running "go test -gocheck.f ManageEnviron'$' -gocheck.vv)
<rogpeppe> fwereade: yes
<fwereade> rogpeppe, thumper came across that last night
<rogpeppe> fwereade: i'm just fixing the fake config to have a valid key
<fwereade> rogpeppe, <3
<rogpeppe> fwereade: but really the authorized keys worker should not be barfing
<rogpeppe> fwereade: in fact, i'll just fix that
<fwereade> rogpeppe, mmmm I'm more inclined to say we should be refusing bad input
<rogpeppe> fwereade: mebbe
<rogpeppe> fwereade: we should probably parse authorized keys on config.New
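rogpeppe's suggestion above — validating keys once, at config.New — comes down to parsing each authorized_keys entry up front. A stdlib-only sketch of that idea (real code would use ParseAuthorizedKey from golang.org/x/crypto/ssh; the helper name here is invented for illustration):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// checkAuthorizedKey is a simplified sketch: an authorized_keys entry needs
// at least a key type and a base64-encoded key body. Production code would
// use ssh.ParseAuthorizedKey from golang.org/x/crypto/ssh instead.
func checkAuthorizedKey(line string) error {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return fmt.Errorf("malformed authorized key: %q", line)
	}
	if _, err := base64.StdEncoding.DecodeString(fields[1]); err != nil {
		return fmt.Errorf("invalid key body: %v", err)
	}
	return nil
}

func main() {
	// A made-up short base64 body stands in for a real key.
	fmt.Println(checkAuthorizedKey("ssh-rsa AAAAB3NzaC1yc2EA user@host")) // <nil>
	fmt.Println(checkAuthorizedKey("not-a-key"))
}
```

Rejecting bad input at config time, as fwereade argues, means the authorized-keys worker never sees a malformed key in the first place.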
<mgz> part #2: https://codereview.appspot.com/52710048
<mgz> fwereade: ^you probably care about this one
<rogpeppe> one more barrier, sigh
<rogpeppe> ha, not only did it change my authorized_keys, it replaced it with a file owned by root and globally readable
<fwereade> mgz, quickly pre-reviewed, trying to finish rog's
<mgz> fwereade: ta
<fwereade> rogpeppe, reviewed, almost there, I think there may be a comment or two you missed from the earlier one?
<fwereade> rogpeppe, maybe actually there
<fwereade> rogpeppe, let me know :)
<rogpeppe> fwereade: entirely possible
<rogpeppe> fwereade: ok
<rogpeppe> fwereade: thanks
<rogpeppe> fwereade: it's going to take at least another 30 minutes before i can re-propose though, because now i have two prereqs
<rogpeppe> fwereade: ah, i hadn't published my reply
<rogpeppe> fwereade: my reply was "
<rogpeppe> you're right - i was surprised when this worked.
<rogpeppe> it's a bug in Machine.Destroy - it doesn't fail
<rogpeppe> when the machine only has JobManageState.
<rogpeppe> "
<rogpeppe> fwereade: and it's the fallout from fixing that that's taken me two days to get over
<rogpeppe> fwereade: so you will see that i've fixed that in the latest version, when i can actually propose it :-\
<fwereade> rogpeppe, ha, ok, cool
<fwereade> rogpeppe, I'm kinda eoding but I'll be in and out
<fwereade> rogpeppe, will be interviewing someone this evening so I'll pick it up then if not before
<mgz> fwereade: I don't follow the config omit/,ok comments, is that a new config thing?
<mgz> in the providers, we've always just used blanks and defaults
<rogpeppe> fwereade: i *think* i've addressed all the issues, (although i kept the bool in effectiveMachineTemplate for reasons i outlined in the comments i've just published)
<mgz> okay, goose bits landed, juju-core fix ready to go
<fwereade> mgz, I don't think we always have, but I'm pretty unsure about those suggestions
<fwereade> mgz, not gonna fight them, just tabling the option in case you feel it fits better
<mgz> it may well make sense tweaking some of these things after 1.17.1 - but that should be safe
<mgz> I'd not be surprised if something's borked and needs fixing once I've had some people test this better anyway
<mgz> if I have a lgtm I'll land and curtis can get started
<fwereade> mgz, for reasons too complicated to explain quickly I can't do this right now, but I'd LGTM it with a more explicit test for that one with .*s in the error message
<rogpeppe> ha, these two strings are different... where? http://paste.ubuntu.com/6809355/
<mgz> fwereade: okay, I'll tweak that one a bit
<rogpeppe> ha, got it
<mgz> okay, I need to go out till later on, feel free to flip that branch to approved in my absence
<rogpeppe> ha, some of the tests actually rely on the fact that the authorized_keys used in testing are invalid
<rogpeppe> sigh
<rogpeppe> fwereade, mgz, natefinch: this fixes the testing auth keys issue: https://codereview.appspot.com/56680043
<rogpeppe> i'd very much appreciate a review
<rogpeppe> oh well, guess it's not gonna happen
<natefinch> rogpeppe: I can do it, was just having lunch
<rogpeppe> natefinch: that would be great, thanks - i have to stop now, but if that's approved, i should be able to get other branches in
<natefinch> rogpeppe: cool
<rogpeppe> g'night all
<natefinch> g'night rog
<hazmat> no upper case chars in services names?
<hazmat> seems strange
<natefinch> hazmat: keeping them all lowercase does simplify things somewhat, so you don't have to type out people's wEiRd nAMes... though you'd think that just making them case insensitive would work as well
<hazmat> true.. though there's some other oddity to the naming.. i.e. no wierd-1, but okay on wierd-1s
<hazmat> not a big deal.. i'm autogenerating service names at the moment was just a surprise
<natefinch> hazmat: yeah, I remember seeing that they need to start and end with a letter... don't know offhand why
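The naming behaviour hazmat and natefinch are poking at can be approximated with a single regexp — a sketch of what the log describes, not necessarily the exact pattern juju uses:

```go
package main

import (
	"fmt"
	"regexp"
)

// validService approximates the rules discussed above: lowercase only, must
// start with a letter, and each hyphen-separated part after the first must
// contain at least one letter — so "wierd-1" fails but "wierd-1s" passes.
// This is an illustration, not juju's actual validation code.
var validService = regexp.MustCompile(`^[a-z][a-z0-9]*(-[a-z0-9]*[a-z][a-z0-9]*)*$`)

func main() {
	for _, name := range []string{"mysql", "wierd-1", "wierd-1s", "Wierd"} {
		fmt.Printf("%s: %v\n", name, validService.MatchString(name))
	}
}
```

Requiring a letter in every hyphenated part keeps names like "mysql-2" from colliding with juju's own unit numbering (service/2 style identifiers), which is one plausible reason for the oddity.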
<mgz> sinzui: where are we at?
<sinzui> mgz, cursing canonistack because we have run out of resources to test
<mgz> urg
<sinzui> mgz, once your work lands, CI will start the tests. I may need to run some manually to cleanup secgroups. If all pass, I will do the release. I can do it in a few hours. I can do the release in the mornings of the weekend too
<mgz> natefinch: can I have a lgtm on my second part if you have a moment?
<natefinch> mgz: oh yeah, I thought someone else was looking, but I'll look now
<mgz> thanks!
<natefinch> mgz: done
<mgz> natefinch: thanks!
<mgz> sinzui: sending to land now, you should get a notification when it's in
<sinzui> mgz: thank you.
<sinzui> mgz, I think gobot hates you
<mgz> looking
<mgz> have tried resubmitting a couple of times, it may need more unsticking somehow
<mgz> sinzui: landed!
<sinzui> thank you!
#juju-dev 2014-01-26
<thumper> waigani: hey, I'm just about to get on a call for a while
<waigani> ah cool, I've got a few small questions for you, but they can wait
<waigani> I've ended up shacking up in the study center on Castle St btw - quite nice!
<waigani> does anyone know how to get the lxc version?
<thumper> yes
<thumper> waigani: what are you wanting to know the version for?
<waigani> thumper: https://codereview.appspot.com/56370043/
 * thumper looks
<waigani> I've found the command: lxc-version
<waigani> just wondering the correct way to parse that from within Juju
<thumper> don't
<thumper> I'll respond to fwereade
<waigani> oh, okay
<thumper> waigani: lp doesn't think the branch is pushed to...
<thumper> for some reason...
#juju-dev 2015-01-19
 * thumper sighs
<thumper> tests failed because other people landed code...
 * thumper merges and updates
<axw> wallyworld__: actually, before tmpfs it would be more useful to have "loop"
<axw> wallyworld__: so we have the fallback case covered
<wallyworld__> axw: yes, i was hoping we'd churn out tmpfs, loop more or less together
<wallyworld__> but loop first, i agree
<menn0> thumper, wallyworld__ : holy crap there's lots of crazy duplication of test helpers in state
<thumper> :)
<menn0> thumper, wallyworld__ : fixing a bunch of it now
<thumper> cheers
<wallyworld__> \o/
<menn0> thumper, wallyworld__ : it's tricky because some tests in state are internal, but I see a path to making a lot of it better
<axw> wallyworld__: http://reviews.vapour.ws/r/757/
<wallyworld__> looking
<thumper> menn0: I'm getting there with the tests but I have one leaving a dirty socket around
<thumper> a mongo connection
<thumper> it is definitely the test where we have a different connection from state
<thumper> but I can see that my resource cleanup is getting called
<thumper> so not sure where
<thumper> if you are good with it, perhaps we could go through it in the morning to see WTF is going on?
<menn0> thumper: happy to
<thumper> menn0: good, I think I've looked at it too long this afternoon, pushing up current
<menn0> thumper: this state testing refactoring is going really well btw
<menn0> thumper: tons of lines being removed
<thumper> cool
<thumper> night folks
<axw> wallyworld__: mind seeing my comments before I land?
<wallyworld__> axw: yah, looking now
<wallyworld__> axw: commented, not sure if you want to discuss
<axw> wallyworld__: LVM is at a higher level. there's nothing stopping you from creating a volume group across different providers. I'm not sure if we should be using quite the same machinery for them. how do you tell it which device(s) to operate on?
<axw> wallyworld__: I could well see an LVM storage provider that creates "disks" as logical volumes for you, but I think the relationship between the LV and the disks in its volume group is outside this model
<axw> wallyworld: not sure where you saw up to...
<axw> wallyworld: LVM is at a higher level. there's nothing stopping you from creating a volume group across different providers. I'm not sure if we should be using quite the same machinery for them. how do you tell it which device(s) to operate on?
<axw> wallyworld: I could well see an LVM storage provider that creates "disks" as logical volumes for you, but I think the relationship between the LV and the disks in its volume group is outside this model
<wallyworld> axw: sigh, freenode hates me, keeps kicking me off :-(
<wallyworld> axw: instead of LV, what about loopback? i guess the point for me is that the provisioner etc sees a block device and it's outside of its interest how that is provided, physical disk or otherwise
<wallyworld> axw: afk, bbiab
<axw> wallyworld: the name Disk is probably just a poor one. what I mean is "block device that the provider manages", as opposed to "block device that a machine sees"
<axw> wallyworld: let me know when you're back
<axw> -_-
<wallyworld> axw: yeah, that's how i think of it too. Disk though gives the opposite connotation. gad i hate naming things
<axw> wallyworld: it's really "volume" and "attachment", but that doesn't marry with discovered devices
<wallyworld> should or could we go with Volume instead of Disk for what we have currently? that would still work with discovered devices i think?
<wallyworld> i *think* that may remove the current ambiguity?
<wallyworld> anastasiamac: it's clear there's a mix of approaches wrt api param naming :-( let's go with what you have so we can get it landed
<axw> wallyworld: I'm fine with calling it Volume, though I don't see much of a difference
<axw> except that it might be a little clearer that it's not necessarily a physical or dedicated disk
<wallyworld> axw: perhaps i'm being too pedantic wrt naming. or confused. or both. I think of Disk as a physical device, whereas Volume is an abstraction that better applies to various device types loopback vs VG vs disk etc
<wallyworld> yes
<wallyworld> that's where i'm coming from
<axw> wallyworld: fair enough. I'll change it. I'll need to update juju/names too, but I'll do that later
<axw> wallyworld: PTAL
<wallyworld> sure
<wallyworld> axw: thanks, i think the naming is clearer now. well to me it is :-)
<axw> wallyworld: cool
<menn0> wallyworld: you might like this: http://reviews.vapour.ws/r/759/
<wallyworld> looking
<wallyworld> oooh refactoring
<axw> wallyworld: in case you missed it, I added a TODO to rename the results of {Create,Describe}Volumes to return something other than BlockDevice, since it will want to convey information that isn't pertinent to the machine level
<axw> such as detachability, persistence, etc.
<menn0> wallyworld: see what happens when you make a single review comment about extracting some test helpers out
<wallyworld> axw: yep, sounds good
<menn0> wallyworld: I then went and noticed all the awful duplication and felt the need to fix it
<wallyworld> menn0: \o/ thank you
<wallyworld> axw: short term goal is to get collaboration possible, i think this branch enables that
<menn0> wallyworld: dealing with kids now so no rush
<axw> wallyworld: yep. I will land and get back to juju deploy
<wallyworld> menn0: ok, will finish a few things and then look, really appreciate the work to do this
<wallyworld> axw: we still need some glue to plug it all in, but at least the storage sources for eg loopback can be written and tested
<dimitern> morning all
<anastasiamac> dimitern: morning :D
<dimitern> anastasiamac, o/
<dimitern> anastasiamac, nice new facade btw :)
<menn0> wallyworld_: thanks for the review. i'll add that comment and then get it in.
<wallyworld_> awesome
<anastasiamac> dimitern: thnx :)
<wallyworld__> axw: got a few minutes? hangout in our 1:1?
<axw> wallyworld__: sure
<jam1> morning dimitern
<dimitern> jam1, morning, sorry - omw :)
<jam1> just grabbing a coffee, but I joined the hangout
<axw> wallyworld__: are you there still?
<axw> can't hear you on hangouts...
<TheMue> dimitern: so, system is started
<dimitern> TheMue, great! i'll be in the hangout shortly
<TheMue> dimitern: fine
<voidspace> dimitern: I'll be back in 5 minutes as well
<dimitern> voidspace, ok
<voidspace> dimitern: and back...
<dimitern> voidspace, I've reviewed your subnets PR btw
<voidspace> dimitern: you've suggested using a string set, which I would *love*
<voidspace> dimitern: which packge is that from
<voidspace> dimitern: import "set" does not work...
<dimitern> voidspace, it's in juju/utils/set IIRC
<voidspace> ah....
<voidspace> *great*
<voidspace> dimitern: ah, I can't use the set, because I'm using the underlying value (true/false) to carry meaning
<voidspace> dimitern: (did we actually find the subnet)
<perrito666> morning to the few of you who are here
<voidspace> perrito666: morning
<dimitern> voidspace, well, you could construct the set initially with all ids, then remove ones that are not found..
<dimitern> voidspace, but i'll leave it up to you
<voidspace> well, I *can't* remove the ones not found
<voidspace> I use the set to track *found* ones :-)
<voidspace> knowing which ones I haven't found is the problem I'm trying to solve :-)
<voidspace> but I've switched the map to map[string]bool as it involves fewer casts
<voidspace> one instead of three
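voidspace's map[string]bool pattern — seed the map with every requested id, mark the ones found, then report the rest — looks roughly like this (names here are illustrative, not juju's):

```go
package main

import "fmt"

// missingSubnets sketches the pattern discussed above: the bool value
// carries meaning (was this subnet found?), which a plain string set
// can't express without extra bookkeeping.
func missingSubnets(requested, found []string) []string {
	seen := make(map[string]bool, len(requested))
	for _, id := range requested {
		seen[id] = false
	}
	for _, id := range found {
		if _, ok := seen[id]; ok {
			seen[id] = true
		}
	}
	var missing []string
	for _, id := range requested { // iterate the slice for stable order
		if !seen[id] {
			missing = append(missing, id)
		}
	}
	return missing
}

func main() {
	fmt.Println(missingSubnets([]string{"a", "b", "c"}, []string{"a", "c"})) // [b]
}
```

dimitern's alternative — start a set with all ids and remove the found ones — is equivalent; the map version just avoids the type conversions voidspace mentions.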
<voidspace> dimitern: I've hit $$merge$$ and am onto the next task
<dimitern> voidspace, ok, fair enough
<dimitern> voidspace, cheers! :)
<anastasiamac> voidspace: merge failed :(
 * dimitern needs to step out for ~45m
 * perrito666 is still drinking mate trying to swallow the news from his govt
<perrito666> fwereade: sweeeeeeet 666 additions  :p
<perrito666> that is your last patch
<fwereade> perrito666, lol, awesome
 * perrito666 imagines fwereade adding blank lines to get there
<fwereade> perrito666, if I'd noticed I was near I probably would have done, yeah ;p
<voidspace> anastasiamac: thanks, fixed
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: should I work on (first) either a) NetworkInterfaces for MaaS
<voidspace> dimitern: or b) filling in the missing bits of the ec2 implementation?
<voidspace> or c) it doesn't matter which order...
<voidspace> if b) I have some follow up questions :-D
<dimitern> voidspace, well :)
<dimitern> voidspace, i'm not quite done with the ec2test filtering for NICs, so I suppose a) is best for now
<voidspace> ...
<voidspace> ah, cool
<voidspace> that's what I wanted to hear...
<voidspace> dimitern: thanks
<dimitern> voidspace, :)
<dimitern> voidspace, and I think splitting up the ec2 and maas implementations in separate PRs is good
<voidspace> dimitern: yeah
<voidspace> parsing lshw, yay!
<dimitern> voidspace, :/ I sympathize
<voidspace> dimitern: I created separate cards for EC2/MaaS implementation
<voidspace> dimitern: and assigned MaaS to me and EC2 to you
<voidspace> doesn't have to stay like that but will do for now
<dimitern> voidspace, thanks!
<voidspace> although, extractInterfaces pretty much does it all already
<voidspace> might need to extend it a bit, but not much
<dimitern> voidspace, while doing that would you mind adding ProviderSubnetId network.Id to InterfaceInfo ?
<voidspace> dimitern: ok
<voidspace> dimitern: ProviderSubnetId instead of SubnetId ?
<dimitern> voidspace, and rephrase ProviderId's doc comment to clarify it's the NIC id
<voidspace> ok
<dimitern> voidspace, yeah, let's make it obvious
<voidspace> okeydokey
<dimitern> voidspace, cheers
<voidspace> dimitern: looks like I'll be visiting friends on the Monday and Tuesday of our sprint week
<voidspace> dimitern: so Wednesday for team meal would be ideal please :-)
<dimitern> voidspace, sure, will keep that in mind
<voidspace> dimitern: thanks, sorry to be a pain
<dimitern> voidspace, no trouble at all :)
<voidspace> dimitern: I don't get down to London often, so have a couple of friends keen to see me
<voidspace> dimitern: and I was already booked for a meal there on Thursday!
<dimitern> voidspace, of course, sgtm
 * TheMue will also visit a friend, but no date yet fixed ;)
<voidspace> TheMue: don't make it Wednesday please :-)
<TheMue> voidspace: hehe, sure, just seen
<voidspace> cool
<voidspace> otherwise we have a dilemma
<voidspace> and dimitern has to decide who he loves more ;-)
<TheMue> *rofl*
<dimitern> :D
<TheMue> dimitern: do we have a kind of definition how we extend the provider interface for networking? or is it currently just a loose coupled set of functions?
<voidspace> TheMue: we add new Environ methods ad-hoc as we need them...
<voidspace> TheMue: the current "entry point" for new networking stuff is the SupportAddressAllocation method
<voidspace> TheMue: if a provider returns True for this then it must support all the new networking methods
<voidspace> TheMue: this is only true for MaaS and EC2 currently
<TheMue> voidspace: ok
<voidspace> TheMue: the other providers return false for SupportAddressAllocation and errors.NotImplemented for the other new networking methods
<TheMue> voidspace: so do those methods have their own interface and where needed we do a type assertion?
<voidspace> TheMue: they *don't*, they *could*
<TheMue> voidspace: or do the others have to implement "empty" methods?
<voidspace> TheMue: they're just part of the Environ interface
<voidspace> so yes, at the moment all providers implement them
<voidspace> a separate interface that extends Environ would probably be clearer
<TheMue> voidspace: so all providers have to provide them, even w/o address allocation?
<voidspace> yep
<voidspace> you can see that they do
<voidspace> e.g. for NetworkInterfaces Subnets (etc)
<TheMue> voidspace: yeah, then I would prefer one or more extra interfaces
<dimitern> TheMue, I had a sort of plan 9 months ago - everything changed since then, so now it's ad-hoc until we get it done :)
<voidspace> TheMue: that would be a great initial contribution :-)
<TheMue> voidspace: hah
<voidspace> TheMue: or did you mean you would prefer it if someone else did it?
<voidspace> in which case, me too! :-)
<voidspace> TheMue: just teasing, but yes I agree in principle
<TheMue> voidspace: no, would be fine to me, so I'm forced to take a look at each networking method
<voidspace> go for it
<voidspace> would be quite easy, and then you'd see the new methods
<TheMue> dimitern: how is p9 doing it?
<voidspace> AllocateAddress and ReleaseAddress too
<voidspace> I believe
<dimitern> TheMue, I have no idea frankly :)
<TheMue> dimitern: *rofl*
<TheMue> dimitern: what I once liked is the idea of "everything is a device"
<dimitern> TheMue, I do like this - nice and polymorphic, but alas..
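The "extra interface plus type assertion" approach TheMue prefers — versus every provider stubbing methods it doesn't support — can be sketched like this (interface and method names are invented for illustration, not juju's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Environ stands in for the provider interface; Networker is the kind of
// split-out interface TheMue suggests for the networking methods.
type Environ interface {
	Name() string
}

type Networker interface {
	SupportsAddressAllocation() bool
}

type maasEnviron struct{}

func (maasEnviron) Name() string                    { return "maas" }
func (maasEnviron) SupportsAddressAllocation() bool { return true }

type dummyEnviron struct{}

func (dummyEnviron) Name() string { return "dummy" }

// allocate uses a type assertion, so providers without networking support
// simply don't implement Networker instead of returning NotImplemented
// from empty stubs.
func allocate(e Environ) error {
	n, ok := e.(Networker)
	if !ok || !n.SupportsAddressAllocation() {
		return errors.New("address allocation not supported")
	}
	return nil
}

func main() {
	fmt.Println(allocate(maasEnviron{}))  // <nil>
	fmt.Println(allocate(dummyEnviron{})) // address allocation not supported
}
```

The trade-off is discoverability: with one big Environ interface the compiler forces every provider to acknowledge every method, while the assertion style keeps providers small but moves the "is this supported?" check to runtime.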
<dimitern> voidspace, ping
<dimitern> voidspace, have a look at https://github.com/go-amz/amz/pull/20 later if you have time
<voidspace> dimitern: ok
<voidspace> dimitern: not long before I leave, but I can take a look
<dimitern> wallyworld___, axw, katco , rogpeppe, can any of you please review the above PR for goamz that improves ec2test server? ^^
<voidspace> dimitern: cool, nice work
<dimitern> voidspace, cheers! the important part is that now we can filter NICs by "attachment.instance-id"
<rogpeppe> dimitern: am finished for the day now and doing a sprint this week, so it might be a little while until i get around to it
<voidspace> rogpeppe: hey, I don't suppose you can remember the name of that restaurant we went to in London?
<voidspace> rogpeppe: we're back in London next week...
<dimitern> rogpeppe, np, I'm just mass-pinging in the hope someone might have a look :)
<rogpeppe> voidspace: hmm, it was nice, wasn't it?
<rogpeppe> voidspace: i'm in london now...
<voidspace> rogpeppe: yep...
<rogpeppe> voidspace: no, sorry, i can't remember at all
<voidspace> rogpeppe: heh, cool :-)
<voidspace> never mind, shame
<rogpeppe> voidspace: we probably found it through tripadvisor
<voidspace> ah, I'll try searching for restaurants near Blue Fin
<voidspace> there's probably only about 200 to check in that radius...
<rogpeppe> dimitern: from a quick skim, it looks reasonable
<rogpeppe> voidspace: istr it might've been north of the river
<voidspace> ah yes, I vaguely remember crossing the river
<voidspace> I'm signing off for the day
<voidspace> time to go jogging
<voidspace> g'night all
<dimitern> rogpeppe, cheers
<thumper> jw4: are you working or taking the day off?
<jw4> thumper: working, but I was just about to break for the gym
<thumper> jw4: just wanted to make sure you were aware: https://bugs.launchpad.net/juju-core/+bug/1412292
<mup> Bug #1412292: Intermittent test failure in ActionSuite.TestActionsWatcherEmitsInitialChanges <action> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1412292>
<jw4> thumper: yeah I saw that.  Thanks for filing it.  I need to figure out what causes it.  I thought that test was isolated well enough but obviously not.
<thumper> coolio
<hazmat> how do we set feature flags ?
<hazmat> looks like via env var on client
<hazmat> thumper, re env manager, how does one get access to it? its not showing up in the client api facades on logged in conn
<thumper> hazmat: no, it isn't hooked up
<thumper> and yes, feature flags are set by env var
<hazmat> thumper, ah, that would explain it
<hazmat> JUJU_DEV_FEATURE_FLAGS valid values storage,actions
<hazmat> yeah
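The flag mechanism hazmat describes boils down to an environment variable the client reads at startup; which flag names exist depends on the juju version, so treat this as illustrative:

```shell
# Flag names ("storage", "actions") come from the conversation above.
export JUJU_DEV_FEATURE_FLAGS=storage,actions
echo "$JUJU_DEV_FEATURE_FLAGS"
```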
<thumper> hazmat: I have a branch, but got derailed
<thumper> found two kinda critical things that were more important
<thumper> and now I'm banging my head against the fucking dummy provider
<hazmat> thumper, no worries was just looking at restructuring the jujuclient to automatically expose facades as objects off the conn, corresponding to those actually present
<thumper> I do feel like we are getting close though
<hazmat> thumper, there's a couple of interesting integrations around envmanager(obviously) mostly looking to get those rolling end of feb
 * thumper nods
<thumper> should be well hooked up before then :)
<thumper> ah mah gaad
<thumper> finally have the test failing the way I expect
<thumper> what a freaking mission
 * thumper must remember to reset the branch before proposing to remove expletives
<thumper> so... apart from that panic, we're good
 * thumper grumbles
<thumper> meh
 * thumper head desks
<thumper> fark!!!!!
 * thumper wonders if he has broken anything else
<perrito666> thumper: feeling like colombian revolution?
<thumper> perrito666: I don't get it
<perrito666> thumper: FARC is the name the Colombian "revolutionary" group that runs narcotrafficking calls themselves
<thumper> ah
<perrito666> iirc Fuerzas Armadas Revolucionarias de Colombia (Revolutionary Armed Forces of Colombia)
<thumper> no I was mentally bitching about why the common.Resources didn't stop things in the reverse order they were registered
<thumper> but rather randomly
<thumper> which meant my pinger died because the state connection was closed already
<perrito666> by design?
<thumper> by not caring
<thumper> now they do work that way
<thumper> before there wasn't any dependency between resources
<thumper> now there can be
<thumper> before it didn't matter
<thumper> now it does
<perrito666> changes, i hate those
<thumper> menn0: I think I have this ready to go now
<thumper> menn0: all the tests are running and passing except for one "bad record MAC"
<menn0> thumper: orrsum
<thumper> menn0: although to get it fully working, there is more work needed in the apiserver code
<thumper> but for the other handlers
<thumper> like tools, charms, images, debuglog
<menn0> wallyworld_: i've updated http://reviews.vapour.ws/r/754/ in response to your review comments.
<anastasiamac> thumper: what's wrong with charms handlers?..
<wallyworld_> menn0: ty, looking
<menn0> wallyworld_: there's a couple of things that i need your feedback on
<thumper> anastasiamac: they don't support the correct state connection for non-state server environments
<anastasiamac> thumper: :(
<anastasiamac> thumper: what's correct state connection look like?
<thumper> I'll fix it.   All the workers will need the same fix
<menn0> wallyworld_: no rush. I'm about to have lunch and take Amelia to preschool.
<anastasiamac> thumper: can't wait :D
<wallyworld_> ok
<thumper> menn0: http://reviews.vapour.ws/r/763/
<thumper> 13 changed files with 250 additions and 93 deletions.
<thumper> never freakin easy, is it?
<menn0> thumper: at least the diff still fits on one RB page :)
<thumper> :)
<thumper> I think I hit the same problem waigani did before
<thumper> where I was wanting to add machines to an environment that isn't the state server one
<thumper> that required the provider.Prepare in the factory
<waigani> thumper: how did you solve that?
<thumper> waigani: take a look at the review above
<thumper> wallyworld_: nested errors.Trace call are preferable
<thumper> otherwise you don't get a stack
<thumper> that's the whole point
<thumper> wallyworld_: there is no harm in returning 'return errors.Trace(err)' over 'return err'
<thumper> wallyworld_: and I would say it is preferable to a bare return err
<anastasiamac> thumper: would
<wallyworld_> thumper: ok, wasn't sure of the preferred usage
<anastasiamac> wouldn't u want to trace once and then annotate as u come up?
<thumper> anastasiamac: you either trace or annotate or wrap
<thumper> anastasiamac: if there is no useful context to add with an annotation, tracing is fine
<wallyworld_> why not trace and then annotate at layer boundary
<anastasiamac> thumper: what wallyworld_ said^^
<thumper> wallyworld_: it depends where the useful context is
<thumper> context isn't always at the boundary
<wallyworld_> and i maintain tracing is not always necessary
<thumper> wallyworld_: it is *never* necessary
<thumper> wallyworld_: but it is very useful for debugging
<thumper> and it doesn't hurt
<thumper> so why not?
<wallyworld_> can be, but not always needed
<wallyworld_> code clutter
<thumper> dude, it is *never* needed
 * thumper challenges that
<wallyworld_> you know what i mean
<menn0> thumper: so, I'm fairly sure i've added machines to the non-initial env using the factory before
<thumper> bah humbug, actions test failure  - the intermittent one I filed that bug for yesterday
<menn0> thumper: so why is the prepare stuff req'd
<menn0> ?
<thumper> menn0: ah... because I'm using the dummy provider
<thumper> not an unregistered 'someprovider'
<jw4> thumper: sorry :)
<menn0> thumper: cool.
<thumper> dummy needs to be registered
<thumper> sorry, prepared
<menn0> thumper: i've had a quick look over your PR and it looks good. but I need to have a closer look once i've done the school run etc
#juju-dev 2015-01-20
<thumper> I added a constraint that a new environment should have the same provider type as the state server environment
<thumper> well, default
<thumper> not constraint
<jw4> thumper: I've not been able to reproduce the bug 1412292 - are those errors happening consistently for you?
<mup> Bug #1412292: Intermittent test failure in ActionSuite.TestActionsWatcherEmitsInitialChanges <action> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1412292>
<thumper> jw4: about one time in three
<jw4> thumper: hmm; do you have a fast machine, or a slow one - or is it PPC - or anything like that?
<thumper> two year old fast i7, lots of ram and fast ssd
<jw4> thumper: kk - I'll try picking apart the logic to see what I can find
<thumper> ok
<menn0> thumper: Fix it, then Ship it
<jw4> thumper: two quick questions - 1) I assume the defect report has the full output of the test run. Is that true? 2) what version of go are you using?
<jw4> thumper: fwiw I suspect the problem is the watcher isn't getting updated in time to see that the two actions were removed
<thumper> menn0: ta
<thumper> jw4: using what ever version is in trusty
<thumper> jw4: I'm guessing it is a race condition
<thumper> jw4: but I've not looked at the code
<thumper> man...
<thumper> getting endorsements on LinkedIn by folk I don't remember linking to
<thumper> oh... he is a juju gui peep
<axw> wallyworld_: about to send an email to fwereade for some advice on the state model, I've CC'd you in case you have ideas on it
<wallyworld_> ok
<axw> the relations to units and services are feeling a bit messy
 * thumper finishes the apiserver httphandler before enfixorating the branch menno reviewed
<thumper> menn0: I have another branch that fixes the rest of the apiserver
<thumper> menn0: I just need to write tests for each of the api end points to exercise the additional environment
<menn0> thumper: sweet
<menn0> thumper: i'm deep into integrating envWorkerManager into the machine agent
<thumper> awesome
<thumper> I feel the pieces are slowly coming together
<menn0> thumper: lots of awkward details to figure out
<thumper> menn0: yep, I know
<thumper> menn0: so many fiddly bits
<menn0> thumper: e.g the singular runner
<thumper> boo... hiss...
<thumper> damn
<thumper> I bet that two line tweak might break something
<thumper> I'm done
<thumper> these tests are taking ages to write
<thumper> laters
<dimitern> morning all
<anastasiamac> dimitern: morning :) 5pm here tho
<dimitern> anastasiamac, afternoon then :)
<anastasiamac> dimitern: thnx for consideration :)
<anastasiamac> dimitern: one thing i miss about Northern hemisphere - r long dusks in summer
<dimitern> anastasiamac, oh, they're not so long there?
<dimitern> anastasiamac, I'd have a hard time adjusting to summer in january :)
<anastasiamac> dimitern: with +40 and an awful humidity, u will not mistake this for winter!
<anastasiamac> dimitern: there is no dusk :(
<dimitern> anastasiamac, *lol*
<anastasiamac> dimitern: one minute it's day, the next night
<anastasiamac> dimitern: there r millions of other advantages tho :)
<dimitern> anastasiamac, that is a bit weird
<anastasiamac> dimitern: k... to be fair it takes about 20 minutes for day to turn to night :D
<dimitern> anastasiamac, I bet there are, although with all the creepy crawlers in AU..
<anastasiamac> dimitern: and don't forget flying cockroaches :P
<dimitern> anastasiamac, oh we have those as well - nasty creatures
<anastasiamac> dimitern: certainly very exotic
<wallyworld_> axw: i've made some changes; i was taking some shortcuts previously to try and get it done more quickly; tests should have better coverage now
<dimitern> wallyworld_, axw, hey guys - can one of you review this goamz PR please? https://github.com/go-amz/amz/pull/20
<axw> wallyworld_: ok cool
<axw> wallyworld_: looking
<axw> dimitern: will take a look in a bit
<wallyworld_> dimitern: i'm about to get ready for soccer, but can look when i get back if not done
<dimitern> axw, thank you
<dimitern> wallyworld_, cheers
<dimitern> wallyworld_, isn't it too hot for soccer? :)
<wallyworld_> dimitern: only if you're soft :-)
<dimitern> wallyworld_, :D
<wallyworld_> axw: i gotta dash, but i might next add some basic framework for registering storage provider types eg loop with a skeleton implementation. then the blanks can be filled in while work is done on making the loop storage available to env providers, and fallback logic implemented etc
<wallyworld_> axw: tnks for review, i just used state in tests to get end-to-end testing for now, can mock out later
<axw> wallyworld_: that SGTM
<axw> thanks
<axw> dimitern: reviewed
<dimitern> axw, ta!
<dimitern> voidspace, standup?
<voidspace> dimitern: omw
<axw> dimitern: when you're free, can you please review https://github.com/juju/names/pull/37 ?
<dimitern> axw, sure thing
<dimitern> axw, lgtm
<axw> dimitern: do you mean vary the storage name? i.e. something other than "store"?
<axw> thanks
<dimitern> axw, yeah
<axw> dimitern: okey dokey, will do
<dimitern> axw, I only later read the description of what it is :)
<perrito666> morning
<anastasiamac> perrito666: morning...
<axw> dimitern: another small one http://reviews.vapour.ws/r/766/
<dimitern> axw, ship it! :)
<axw> dimitern: thanks
<perrito666> bogdanteleaga: there is something wrong with http://reviews.vapour.ws/r/632/
<perrito666> it lacks code
<perrito666> jam2: inet problems?
<jam2> perrito666: it's been happening for a couple of days now. They "upgraded" my local router in preparation for changing something about the networking to the building, and now every 15-30min I get an SSL session negotiation failure.
<bogdanteleaga> perrito666: hmm, it hasn't been created automatically so I tried to do it manually, apparently it didn't work
<bogdanteleaga> perrito666: the code is on github however can I update it from the web interface?
<perrito666> bogdanteleaga: if you did the post manually probably it worked but you need to go to reviewboard and publish it
<dimitern> voidspace, maas meeting?
 * perrito666 actually had to prepare a brew to review fwereade's branches
<fwereade> perrito666, oh dear, is it that bad?
<perrito666> nope, I just checked the length and decided that i better prepare my brew before and not after
<fwereade> perrito666, good thinking :)
<fwereade> perrito666, if it helps, I think the test changes outweigh the code in both of them
<perrito666> abentley: this was the failing test of the last cursed email I got http://juju-ci.vapour.ws:8080/job/win-client-deploy/1319/console that is the test right?
<abentley> perrito666: That looks like a problem with the machine running the test.
 * perrito666 stops peeking at the local newspaper before getting an ulcer
<natefinch> Morning everybody
<dimitern> hey natefinch!
<natefinch> dimitern: howdy :)
<perrito666> hey natefinch back?
<natefinch> perrito666: back, yes
<natefinch> 058524
<perrito666> natefinch: we weren't expecting you for a couple of weeks
<perrito666> natefinch: tx nice pass
<natefinch> lol
<natefinch> just my google authenticator number
<natefinch> yeah, this is about when I expected to be back.... 2 weeks after the company shutdown
<voidspace> dimitern: damn!
<voidspace> dimitern: sorry
<dimitern> voidspace, :) np
<voidspace> dimitern: just forgot :-(
<voidspace> dimitern: I need to tattoo it to my hand or something
<dimitern> voidspace, *lol*
<dimitern> voidspace, nothing major - non-installable nodes can be tested very soon - check the minutes doc if you like
<sinzui> natefinch, perrito666 dimitern : I don't see any 1.21-beta5 specific issues in play in this bug. I don't think it should block the release of 1.21-rc1 https://bugs.launchpad.net/juju-core/+bug/1410876
<mup> Bug #1410876: Error executing lxc-clone: lxc_container: utils.c: mkdir_p 220 Not a directory - Could not destroy  snapshot %s - failed to allocate a pty; Insufficent privileges to control  juju-trusty-lxc-template <lxc> <oil> <trusty> <juju-core:Incomplete> <juju-core 1.21:Incomplete> <juju-core
<mup> 1.22:Incomplete> <https://launchpad.net/bugs/1410876>
<voidspace> dimitern: I will do, cool
<voidspace> dimitern: sounds good
<dimitern> sinzui, looking
<voidspace> dimitern: we need an api to assign resources (like ip addresses) to non-installable nodes for them to be directly useful to us
<perrito666> sinzui: I agree, thumper and I documented our findings there and it seems it is outside juju
<sinzui> thank you perrito666 . I will try to remove the bug from rc1
<dimitern> sinzui, yep, I was just checking the code - it seems 1.21-rc1 shouldn't be affected
<sinzui> fab
<wwitzel3> perrito666: ping
<sinzui> natefinch, dimitern:do you have a moment to review this branch http://reviews.vapour.ws/r/768/
<natefinch> sinzui: ship it!
<sinzui> thank you natefinch
<jw4> OCR PTAL http://reviews.vapour.ws/r/769/ - fix for bug 1412292
<mup> Bug #1412292: Intermittent test failure in ActionSuite.TestActionsWatcherEmitsInitialChanges <action> <intermittent-failure> <test-failure> <juju-core:Triaged by johnweldon4> <https://launchpad.net/bugs/1412292>
<dimitern> jw4, looking
<jw4> dimitern: thanks
<dimitern> jw4, if the issue is with the watcher, why not fix it there?
<jw4> dimitern: 1) I'm not sure it's a problem with the watcher 2) making a change there would have widespread effect
<jw4> dimitern: the issue in the watcher is that it waits up to 10 ms every time after a change to see if there are more changes before returning
<jw4> dimitern: that's probably not bad in real life, but in the tests we were expecting it to be a bit more deterministic
<dimitern> jw4, right, there might be an issue lurking in there, but the fix seems fine - almost ready with the review
<jw4> dimitern: tx
<dimitern> jw4, done
<jw4> dimitern: ta
<voidspace> just spent about an hour trying to work out if I can get the MACAddress out of a network definition in lshw
<voidspace> only to realise a few minutes ago that it's there, it's just called "serial" instead of mac address
<voidspace> d'oh...
<voidspace> at least I have it
<hazmat> is usermanagement supposed to be functional?
<hazmat> nevermind
<voidspace> right
<voidspace> g'night all
<voidspace> EOD
<katco> when we create our lxc templates for cloning, we start them and then wait for them to stop. what causes them to eventually stop so cloning can continue?
<natefinch> katco: not sure
<katco> for reference: https://github.com/juju/juju/blob/1.20/container/lxc/clonetemplate.go#L162
<katco> trying to dx https://bugs.launchpad.net/juju-core/+bug/1348386
<mup> Bug #1348386: lxc template fails to stop <clone> <lxc> <oil> <juju-core:Fix Released by cox-katherine-e> <juju-core 1.20:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1348386>
<hazmat> are tags supposed to leak through the api?
<katco> hazmat: they are not
<hazmat> cause it seems like we started off without doing that, but now they're leaking all over the place
<hazmat> actions, usermanager, etc
<hazmat> ssh key manager.. basically anything using Entities is expecting tags.
<hazmat> param.Entities that is
<katco> hazmat: in the api/* packages? or apiserver/*?
<hazmat> katco, in apiserver
<katco> hazmat: ah. that is designed to take tags
<katco> hazmat: tags are OtW format, api/* is what is client-facing
<hazmat> well that is the exposed api
<katco> hazmat: as i understand it anyway
<hazmat> api/* is an implementation of a client.. the server is the api, so we have a rather unfortunate situation on the server where things are a mix of tags and not tags.
<jw4> hazmat, katco as I recall from discussion w/fwereade the apiserver takes string representations of tags
<katco> hazmat: this could be wrong, but how it was explained to me is that tags are OtW format and so as jw4 said we pass the string-representation of tags into apiserver/*
<jw4> hazmat: there was some confusion about tags vs. string representation of tags - are you referring to the string representation or the type names.Tag ?
<katco> hazmat: i.e.: tags are how clients communicate with the RPC server
<hazmat> none of the core api does (deploy/add-machine/remove-machine/add-units/remove-units/setcharm/etc), but all the things that have been added recently are using tags heavily
<hazmat> so we now have a proper mishmash of different ids in the api
<jw4> hazmat: as I recall it that is because of a shift in focus too - the intent being that all ids are internal only and string representation of tags used ubiquitously
<jw4> hazmat: we just haven't migrated everything yet
<hazmat> or maybe its just client facade shouldn't use tags
<hazmat> and other facades are all tag centric
<jw4> hazmat: but I could be wrong - hoping fwereade notices and weighs in
<fwereade> hazmat, all facades should be tag centric -- but the client ones were mostly written before tags were a thing, and many have not been (needed to be) updated yet
<fwereade> hazmat, tags were introduced as a way to encode id and type for things like the annotations api
<hazmat> tags have been around a while, we just never exposed them through to client facade consumers
<hazmat> fwereade, which means they were in the first release of the api really.. ie ever since gui supported juju-core
<fwereade> hazmat, so we didn't need to ensure that every entity namespace was disjoint from every other
<fwereade> hazmat, near enough, yes, it was specifically for annotations that we added them iirc
<hazmat> okay.. i guess i'm going to just keep with other client impl not forcing users to care about tags, just automatically append per appropriate call
<hazmat> the only place there was ever a need for tag disambiguation was the annotations (and perhaps not there), i'd rather not have api consumers actually have to concern themselves with it unless by  choice.
<natefinch> morning thumper
<natefinch> wwitzel3, ericsnow: can you guys post a review for the GCE stuff so I can get started on it?
<ericsnow> natefinch: almost there
<natefinch> ericsnow: cool... feel free to make a PR with a big WIP on the front of it if that's helpful.
<aznashwan> ericsnow: have a moment?
<ericsnow> aznashwan: almost :)
<ericsnow> aznashwan: gimme a minute
 * thumper headdesks
<thumper> damn... damn... damn....
<katco> thumper: good morning
<thumper> I've just realised that although the tests pass, this will fail under real load
<thumper> hi katco
<ericsnow> aznashwan: what's up?
<aznashwan> ericsnow: i was the loonie with the systemd service management PR from Christmas Eve (namely this one: http://reviews.vapour.ws/r/671/)
<ericsnow> aznashwan: yep :)
<aznashwan> ericsnow: the thing is it's done exactly as upstart (namely I used exec calls to systemctl for managing the services), and since have been told that a more native approach would be more favorable...
<aznashwan> I've recently been shown this package from the coreOS guys: https://github.com/coreos/go-systemd which might be exactly what we're looking for.
<ericsnow> aznashwan: yeah, that is what we were planning on using
<aznashwan> It would not take too long to adapt my PR to use that, however i'm in the middle of exam season right now, and am not able to get to it sooner than this weekend
<aznashwan> plus however much time responding to review requests and etc... would take
<aznashwan> is that timeline ok for you guys?
<ericsnow> aznashwan: it depends on what we end up getting done this week with regards to systemd
<ericsnow> aznashwan: I expect that we will already have taken care of things by this weekend
<ericsnow> aznashwan: we were planning on incorporating changes from your patch in whatever we did
<wwitzel3> natefinch: http://reviews.vapour.ws/r/771/
<aznashwan> ericsnow: as you will, apart from changing the exec calls and (sadly) having to redo the tests, my patch is pretty much on par with the upstart one
<ericsnow> aznashwan: cool
<aznashwan> ericsnow: are you guys also planning to do anything about how the service management system is selected in the (very) near future?
<ericsnow> aznashwan: yeah, that will be a part of what we are working on
<ericsnow> aznashwan: we also have to refactor various bits of upstart-specific code throughout juju core to be init-system agnostic :(
<aznashwan> ericsnow: yes, the places where services were used seemed very upstart-specific in the first place...
 * ericsnow gears up for paying off some technical debt
<ericsnow> natefinch: for fun take a look at the list of commits on that PR
<aznashwan> ericsnow: well, best of luck to you guys. as i've said, I would be more than happy to do it on Saturday if you don't get around to it by then
<ericsnow> aznashwan: sounds great :)
<aznashwan> ericsnow: keep in touch. godspeed :D
<natefinch> ericsnow, wwitzel3: cool, looking
<menn0> axw: i'm currently reviewing your charm storage PR
<thumper> menn0: where is the statesuite that you created?
<thumper> state/testing
<thumper> nm
<menn0> thumper: :)
<jw4> thumper: I'm pretty sure I've fixed the watcher test - fyi :)
<wwitzel3> ericsnow: ping
<ericsnow> wwitzel3: present
<wwitzel3> ericsnow: going to jump in to moonstone for a bit
<ericsnow> k
<anastasiamac> thumper: super command of super command..
<anastasiamac> thumper: for eg. juju storage pool list....
<anastasiamac> thumper: is there a precedent?..
#juju-dev 2015-01-21
<thumper> anastasiamac: yes
<thumper> anastasiamac: we are moving everything that way
<anastasiamac> thumper: i know of one level of super commands
<anastasiamac> thumper: e.g. juju machine add
<anastasiamac> thumper: could u point me to an example with 2 level super
<anastasiamac> thumper: ?
<thumper> well, juju is a super, so juju machine is super of super
<thumper> which is what I thought you meant
<anastasiamac> thumper: i want (! :-D) `juju <1-super command> <2-super command> subcommand`
<anastasiamac> thumper: juju storage volume list
<anastasiamac> thumper: params and options after `list`
<anastasiamac> thumper: possible?
<thumper> sure, don't see why not
<thumper> menn0: is there a reason that the StateSuite doesn't have a factory?
 * thumper wants one there
 * thumper adds one and runs all the tests
<axw> menn0: thanks very much for wading through all that code
<menn0> axw: no worries. It was all pretty easy to follow so thanks for writing clear code.
<menn0> thumper: there was a reason I didn't add the factory to state suite but that reason might not be valid now
<menn0> thumper: I think it was due to circular imports in some places
<thumper> menn0: everything passes with me adding one
<thumper> so I'll keep it there :)
<menn0> thumper: yeah, I think that i rearranged things so it wasn't a problem
<thumper> menn0: slowly working my way through all this crap
<menn0> thumper: certainly happy to have the factory there
<thumper> wallyworld_: hey
<wallyworld_> yo
<thumper> wallyworld_: I have some questions about the image cache thing
<thumper> got 5 minutes for a hangout?
<wallyworld_> shoot
<wallyworld_> sure
<menn0> thumper: maybe change the various StateSuite users that are creating their own factory to use the one from StateSuite?
<thumper> menn0: maybe... :)
<thumper> wallyworld_: just use our 11! hangout
<thumper> 1:1
<menn0> thumper: there aren't many :)
<katco> wallyworld_: i'm in the hangout early if you want to chat
<wallyworld_> katco: just talking to tim. be there soon
<katco> wallyworld_: no worries
<thumper> wallyworld_: although...
<thumper> wallyworld_: any reason we should server /images from the root of the API?
<thumper> oh...
<thumper> s'ok
<thumper> I know
<wallyworld_> ok
<thumper> before the patch I landed just the other day
<thumper> all agents would come in to the root
<thumper> nm
<thumper> all good
<thumper> um...
<thumper> wallyworld_:
<thumper> wallyworld_: the image stuff isn't being used yet is it?
<wallyworld_> thumper: otp. sec
<thumper> wallyworld_: because before two days ago, none of the agents would have been able to access the api
<wallyworld_> thumper: it worked for me
<wallyworld_> i tested live with local and ec2
<thumper> ??
<wallyworld_> it doesn't work on upgraded environments because of the cert issue
<wallyworld_> but a fresh trunk install worked
<thumper> oh... I know why... but it is truly horrid
<thumper> you were serving the following url:
<wallyworld_> thumper: what happens from the node is that wget is used
<thumper> /environment//images/...
<wallyworld_> it makes an https get call
<thumper> with an empty :envuuid
<wallyworld_> ah could be, can't recall now
<thumper> it is
 * thumper tweaks
<wallyworld_> ok :-)
<axw> wallyworld_: can you please review my responses to your comments when you're free?
<wallyworld_> axw: sure, i'll just finish this current review
<thumper> menn0: that's the rest of the apiserver fixed
 * thumper writes the merge description
<thumper> menn0: http://reviews.vapour.ws/r/776/
<menn0> thumper: looking
<wallyworld> axw: just a question about machine vs machineid for index
<wallyworld> i'm not sure why not machineid
<axw> wallyworld_: I can change both to machineid now, I was thinking of doing that anyway
<wallyworld_> stupid freenode :0(
<wallyworld_> ok, machineid sounds good
<axw> wallyworld_: I'll change storageinstance to storageinstanceid too ?
<wallyworld_> i think so please, is consistent and arguably more accurate
<axw> wallyworld_: fixed
<wallyworld_> ty
<wallyworld_> land that sucker
<axw> cool
<menn0> axw: man! every time wallyworld_ reviews something that i've already reviewed he always finds a bunch of good stuff I didn't notice :)
<wallyworld_> menn0: well, this is an area where our team knows the domain a bit better
<axw> menn0: that's cool, you raised good points too :)
<menn0> wallyworld_: I suspect domain knowledge is part of it, but also that you're really good at reviewing
<wallyworld_> sometimes, i miss stuff myself
<wallyworld_> especially when i'm trying to race through 3 big reviews this morning :-)
<wallyworld_> axw: when you had time, could we have a hangout about the registry?
<axw> wallyworld_: sure, now's fine
<axw> see you in 1:1
<katco> wallyworld_: axw: would i be intruding if i joined?
<wallyworld_> course not
<axw> katco: fine by me, let's make it standup then
<katco> cool, tanzanite?
<axw> yep, I'm there
<menn0> thumper: so you didn't end up using StateSuite in this PR, even though you added Factory to it?
<thumper> menn0: I forgot to add the utils_test.go file
<thumper> just pushed it up
<thumper> sorry
<menn0> thumper: no worries
<menn0> thumper: review done
<thumper> menn0: ta
 * thumper waits to see if this branch will merge or not
<thumper> ...
<thumper> it has been about 13 minutes...
<thumper> should be done soon
 * thumper sighs
<thumper> another intermittent test failure
<thumper> bug 1413067 if anyone wants to fix it...
<mup> Bug #1413067: Intermittent test failure FilterSuite.TestConfigAndAddressEventsDiscarded <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1413067>
<wallyworld_> axw: sadly, doing this: VolumeSource(environs.Environ, *Config) (VolumeSource, error) - creates an import loop as apiserver/params references back to storage
<axw> wallyworld_: how about just *environs/config.Config
<axw> wallyworld_: alternatively we could duplicate the storage structures in apiserver/params ...
<wallyworld_> i was leaning towards the latter
<wallyworld_> but hadn't considered the former
<wallyworld_> less work right now to do the former
<axw> wallyworld_: I'm fine with either. we'll probably want to duplicate eventually, but it can wait
<wallyworld_> yeah
<anastasiamac> wallyworld_: axw: if u r putting some things in apiserver/params
<anastasiamac> axw: wallyworld_: i'd like to know...
<anastasiamac> axw: wallyworld_::D
<wallyworld_> anastasiamac: nothing at the moment
<anastasiamac> wallyworld_: \o/
<wallyworld_> axw: no rush. when you get a chance, http://reviews.vapour.ws/r/777/.
<dimitern> morning all
<wallyworld_> o/
<dimitern> hey everyone, fwereade just texted me to let you know he has problems with the internet connection
<dimitern> he's not sure when he'll be back online, but working on it
<wallyworld_> axw: i have another branch ready to land once the registry one goes in - it sets up registration of default pools and provides the skeleton loop volume source for katherine
<axw> wallyworld_: sweet. sorry, I'll take a look now. been trying to plough through the formatter
<wallyworld_> np
<axw> wallyworld_: why do we need to record a default storage provider if we're also going to have default pools?
<axw> pool tells you what the provider type is, so... seems redundant
<wallyworld_> axw: good question, i don't think we do
<axw> ok
<wallyworld_> i'll remove the default business
<wallyworld_> i'll make it record pool name instead
<axw> wallyworld_: reviewed
<wallyworld_> ty
<axw> ah, I think you updated while I was looking...
<wallyworld_> yeah
<wallyworld_> axw: IsProviderSupported - use case is when specifying pool name when deploying, get provider type from pool and then see if it is a valid type for the environment
<axw> wallyworld_: ok
<axw> wallyworld_: hmm. not sure it should be possible to create a pool that is invalid for the environment though...
<axw> that seems like asking for trouble
<wallyworld_> axw: true, so you'd use the check when creating
<axw> wallyworld_: true :)
<wallyworld_> sorry, i'm speaing conceptually :-)
<wallyworld_> speaking
<axw> sure, I get it - we still need it but in another spot
<wallyworld_> yeah. ok, so landing now, then i can propose the next one
<dimitern> wallyworld_, axw, PTAL http://reviews.vapour.ws/r/778/ - updates deps for goamz
<axw> dimitern: looking
<axw> dimitern: LGTM
<axw> that was a hard one
<dimitern> axw, thanks! :)
<axw> woop
<axw> wallyworld_: 2015-01-21 10:18:43 INFO juju.worker.diskformatter diskformatter.go:157 created filesystem on "/dev/xvdf1"
<wallyworld_> whoot :-)
<axw> no tests yet, but should be ready soonish
<wallyworld_> great
<wallyworld_> axw: this hopefully will allow katherine to get stuck into loop provider, plus there's the ability to register out of the box pools http://reviews.vapour.ws/r/779/. maybe take a look after you finish your fs one
<axw> wallyworld_: cool. I need to go get dinner on, should be able to take a look later on
<wallyworld_> ok, ty
<perrito666> morning
<voidspace> interesting
<voidspace> MaaS has an api call for getting mac addresses - it takes a mac address as a parameter
<voidspace> so given a mac address you can get the mac address back!
<voidspace> useful
<perrito666> lol
<voidspace> perrito666: o/ morning
<dimitern> voidspace, or for fun and games you can implement "guess my mac" game via the maas api :D
<voidspace> dimitern: hah, yeah - I hadn't thought of that
<voidspace> dimitern: I should implement that
<dimitern> *lol*
 * perrito666 hates mornings
<dimitern> holy sh*t! need to reboot - uptime: 13:25:03 up  3:08,  2 users,  load average: 21,34, 21,33, 21,13
 * dimitern needs to step out - back in ~1h
<voidspace> dimitern: ping
<voidspace> dimitern: question, if a network interface is *not* associated with a network should I leave those details blank in the returned InterfaceInfo
<voidspace> dimitern: or should I skip that interface?
<dimitern> voidspace, hmm.. let me think
<dimitern> voidspace, leave them out - it's a possibly valid case for maas
<voidspace> dimitern: when you say "leave them out"
<voidspace> dimitern: which of my two options do you mean?
<voidspace> dimitern: as both options involve leaving something out...
<voidspace> and my mind reading skills are sadly lacking today I'm afraid :-(
<voidspace> although I could deduce that the use of "them" (plural) indicates leaving out the missing details, but leaving the interface
<voidspace> did I get it right?
<voidspace> *leaving the interface in
<dimitern> voidspace, :) ah, sorry - I meant keep the interfaces in the result, but leave the fields empty
<voidspace> dimitern: cool, will do
<voidspace> that's how I've coded it anyway as it happens :-)
<dimitern> voidspace, thanks! :)
<mgz> gsamfira: bug 1398415
<mup> Bug #1398415: TestAddRemoveSet fails intermittently <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1398415>
<gsamfira> mgz: thanks! :)
<mgz> try it again, maybe... fifth time... lucky?
<voidspace> right, lunch
<voidspace> so NetworkInterfaces is kinda done, just need to test it
<voidspace> which probably means extending the gomaasapi test server I bet...
<voidspace> :-/
<voidspace> maybe not though
<dimitern> voidspace, enjoy! I'm mostly done with mine, so I'll help you if I can
<voidspace> dimitern: will do
<voidspace> dimitern: I'm wondering if the test server will support the "list_connected_macs" op and I'm assuming it probably won't...
<voidspace> dimitern: hopefully I'm wrong
<gsamfira> mgz: worked on the 3rd :)
<voidspace> anyway, really going on lunch
<dimitern> voidspace, thankfully it does - I added it at some point - ConnectNodeToNetworkWithMACAddress() :)
<dimitern> voidspace, menn0, katco (if around, as OCRs): please take a look http://reviews.vapour.ws/r/780/
<perrito666> is anyone already using vivid here?
<jamespage> perrito666, I'm trying to
<jamespage> invalid series "vivid"
<jamespage> hmm
<perrito666> jamespage: I meant as a desktop :p
<jamespage> perrito666, well I'm doing that as well
<perrito666> I am thinking of moving to it and fear breaking my work env
 * perrito666 tries to trigger update
<jamespage> perrito666, well I've been running vivid since a week into the cycle with systemd as default :-)
<jamespage> only minor niggles so far
<perrito666> well in 10mins 7s I'll tell you how well it works on my laptop :p
<jamespage> perrito666, awesome
 * perrito666 prepares the tablet for the standup just in case it's not all that awesome :p
<wwitzel3> ericsnow: I just finished the base tests for GetDisks and GetHardwareCharacteristics
<ericsnow> wwitzel3: sweet!
<ericsnow> katco: PTAL https://github.com/juju/testing/pull/48
<katco> ericsnow: ty, will try and get to it. just preparing for a meeting
<wwitzel3> ericsnow: just wrapping up the last base test now
<ericsnow> katco: no worries :)
<ericsnow> wwitzel3: you rock!
<katco> ericsnow: wwitzel3: looking forward to reading through the GCE code
<ericsnow> katco: best consumed one bite at a time ;)
 * katco secretly uses Amazon's mechanical turk to farm out code reviews
<ericsnow> katco: lol great idea
<wwitzel3> ericsnow: pushed
<voidspace> dimitern: have you actually done the live tests?
<dimitern> voidspace, still doing them :) and it turned out more stuff will be needed
<voidspace> dimitern: shall I review this PR as is, or wait?
<dimitern> voidspace, but even if it lands as is there won't be any harm
<voidspace> right, as it's not used yet...
<dimitern> voidspace, please do, i'll prepare follow-ups for what I found
<dimitern> voidspace, yes - not used yet, but once we start using it... I found out some curious interactions between lxc-create, lxc-clone, and lxc-start wrt the generated lxc.conf
<voidspace> dimitern: ah right
<voidspace> sounds like fun
 * perrito666 has an epiphany and suddenly fixes his computer's external microphone
<dimitern> voidspace, :D that's an understatement
<voidspace> :-)
<dimitern> voidspace, so.. live tests on local at least show no regressions - bootstrapping on ec2 and maas now
<dimitern> voidspace, unfortunately all this is *not* enough to make the generated lxc conf take effect always
<dimitern> voidspace, e.g. with pre-existing lxc template (which is then cloned), with no template - getting/caching the template works fine, the config is there, but once you clone off that - no joy
<voidspace> yeah right, I wondered about cloning
<dimitern> voidspace, I was actually *quite* surprised lxc-clone does not allow you to pass new config
<voidspace> dimitern: doesn't it mean we have to stop cloning
<dimitern> voidspace, oh, no :) i'm not giving up so easy
<voidspace> haha, ok
<dimitern> voidspace, already have a couple of pointers how it might work
<voidspace> I just assumed we'd have to stop cloning
<dimitern> voidspace, the benefits of cloning are too great - mostly performance-wise, but also disk footprint (with snapshots)
<voidspace> right
<voidspace> so clone and then reconfigure
<dimitern> voidspace, yep, something like that - I have to consult the lxc guys though
<perrito666> wwitzel3: you laughed so you either were paying attention or became very good at the simulation thing
<wwitzel3> perrito666: :)
<voidspace> dimitern: container/lxc/lxc.go lines 605-618
<voidspace> dimitern: why wouldn't the cidr already be in the expected format?
<voidspace> dimitern: and why couldn't you use ipNet.String() to produce it?
<dimitern> voidspace, it might be empty or invalid - I'm being extra cautious there
<voidspace> dimitern: can you adjust the comment to make it clear that you're just validating rather than actually changing format as the current comment implies
<dimitern> voidspace, good point about ipNet.String() - I can parse the cidr, then replace the ip part
<voidspace> yep
<dimitern> voidspace, ah, sure
<voidspace> ah, the IPV4Address needs to be a CIDR
<voidspace> so you want to take the mask from the cidr and append it to the address
<dimitern> voidspace, yeah
<voidspace> so you really are extracting the mask
<voidspace> now I understand
<voidspace> seems odd that you go from a CIDR to a CIDR, seemed unnecessary
<voidspace> makes more sense now
<dimitern> voidspace, I'd appreciate suggestions for better comments there :)
<voidspace> slightly more anyway
<voidspace> maybe - LXC expects the IPv4 address formatted as a CIDR (e.g. 1.2.3.4/5)
<dimitern> voidspace, i'm not sure whether the cidr will be like x.y.z.0/M or x.y.z.t/M
<voidspace> and then the rest of your comment makes sense (so extract the mask from the cidr
<voidspace> right
<voidspace> and you need the actual address, not the 0
<dimitern> voidspace, I'd *expect* it to have 0-s at the end, but then that's not really a hard rule it seems
<dimitern> voidspace, yeah
<voidspace> dimitern: it all looks good, including the tests
<voidspace> dimitern: modulo the fact that it doesn't actually work...
<dimitern> voidspace, not yet at least :/
<voidspace> sure
<voidspace> a good basis to iterate on
<dimitern> voidspace, cheers! live tests just finished ok on maas, ec2, and local
<voidspace> cool
<dimitern> voidspace, thanks! no suggestions at all? :)
<voidspace> dimitern: well only the comment change
<voidspace> dimitern: you're taking a new parameter and rendering text templates from them
<voidspace> dimitern: it's not rocket science
<dimitern> voidspace, ok, will change to clarify about the cidr
<dimitern> voidspace, i'll step away for a while now, but will have a look a bit later
<voidspace> dimitern: ok chap, see you later
<voidspace> dimitern: have a good evening.
<dimitern> voidspace, you too :)
<bodie_> I have a very redundant method which I've now defined three times in different packages.  (it inserts new values in maps at arbitrary keys.)  Is there a common place such things should go?
<voidspace> g'night all
<perrito666> nite
<voidspace> EOD
<voidspace> bodie_: juju/utils
<sinzui> natefinch, can you ask someone to look into bug 1413245 found in rc1? I think I identified the issue with vivid
<mup> Bug #1413245: 1.21-rc1 does not support vivid <juju-core:Triaged> <juju-core 1.21:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1413245>
<natefinch> sinzui: ok
<alexisb> sinzui, is systemd on by default on vivid?
 * sinzui looks at his instance
 * perrito666 notices he would not know how to figure that out
<sinzui> alexisb, yes it is there
<perrito666> sinzui: how do you tell/
<perrito666> ?
<alexisb> then juju wont support it without tweaking until 1.23
<sinzui> perrito666, apt-cache policy systemd shows it was installed
<sinzui> perrito666, and I can see most of the system is using upstart still
<perrito666> sinzui: says the same for upstart
<sinzui> perrito666, I can see a lot installed /etc/systemd on my instance has only a few core services in it
<thumper> o/
<perrito666> thumper: morning
<katco> hey hey thumper
<freyes> hi there
<menn0> thumper: so to open an API connection for another env I take the agent's APIInfo, set a new environ UUID in it and pass that to api.Open?
<thumper> menn0: not quite
<thumper> well... hmm...
<thumper> ghit
<thumper> shit
<thumper> menn0: we need to talk
<menn0> thumper: yeah we do. i'm seeing other things that worry me.
<menn0> standup hangout?
<thumper> standup hangout?
<thumper> snap
<thumper> menn0: heh, just noticed that we pretty much ignore bad credentials
<thumper> menn0: that is pretty bad
 * thumper writes a test...
<thumper> oh good, it does fail
 * thumper reads more to work out why he thought it didn't
 * thumper misread != as ==
<menn0> thumper: :)
<anastasiamac> should I worry too much? godeps: cannot update "...github.com/juju/names": fatal: reference is not a tree: f2761fa05db34fb5b4ceb44721a067ffd22d5956
<katco> anastasiamac: i've never looked too much into why that happens; i just do a go get -u <package> and re-run godeps
<thumper> menn0: do you know off the top of your head if the JujuConnSuite has a machine in it by default?
<anastasiamac> katco: thnx!
<anastasiamac> katco:  i've ran go get -u github.com/juju/names
<anastasiamac> katco: then godeps
<anastasiamac> katco: but it still has the same issue
<thumper> make godeps
<thumper> anastasiamac: do this...
<anastasiamac> thumper: make godeps?
<thumper> in the juju/names tree
<thumper> git checkout master
<jw4> anastasiamac: I ran into that when my origin was my own fork of the repo
<thumper> git branch -u upstream/master
<katco> ah
<thumper> anastasiamac: then the default will work
<katco> ty thumper
<thumper> anastasiamac: I added make godeps when I got sick of it not working
<thumper> you need to make sure you have an environment variable set
<katco> nice
<thumper> export JUJU_MAKE_GODEPS=true
<thumper> then 'make check' does godeps first, then runs all the tests
<menn0> thumper: sorry I don't know ... haven't used JujuConnSuite much
<thumper> anastasiamac: the git commands are assuming you have origin set to your repo, and upstream set to the juju repo
<thumper> menn0: nm
<thumper> menn0: I'll find out :)
<anastasiamac> katco: jw4: thnx for ur help!!!
<jw4> anastasiamac: yw :)
<menn0> thumper: I have the provisioner worker under EnvWorkerManager
<anastasiamac> thumper: u r awesome -all works :D
<menn0> thumper: but man do I have a lot of XXX's in this branch
<thumper> menn0: heh
<thumper> menn0: I have the check creds bit done, just writing extra tests now
<menn0> thumper: cool
<anastasiamac> jw4: katco: u r awesome 2 - m astonished with fast response.
<jw4> anastasiamac: lol
<anastasiamac> menn0: u have XXX's in the branch?..
<wallyworld_> thumper: 3 line review please for 1.21 release http://reviews.vapour.ws/r/783/
 * thumper looks
<thumper> done
<wallyworld_> tyvm
 * thumper interrupts current branch for a different one
<thumper> menn0: I have a branch I'm just proposing that has factory.MakeMachine return the password so we don't have to keep making known ones in tests when we need to know what it is
<thumper> just running all the tests now
<thumper> menn0: http://reviews.vapour.ws/r/784/
<thumper> or wallyworld_
 * thumper goes to lunch
<alexisb> thumper, wallyworld_ is busy for the next hour
<menn0> thumper: looking
<sinzui> wallyworld_, alexisb, thumper, oil reported bug 1413424 we need to understand why they cannot use rc1
<mup> Bug #1413424: ERROR failed to bootstrap environment: cannot start bootstrap instance: no matching tools available <oil> <juju-core:New> <https://launchpad.net/bugs/1413424>
<wallyworld_> sinzui: on a call with alexisb, will talk soon
<thumper> menn0: all tests pass
<menn0> thumper: review done
<thumper> menn0: ta
<menn0> thumper: I've suggested another approach but I'm not wedded to it.
<thumper> hmm...
#juju-dev 2015-01-22
<thumper> yeah, I think two methods could work
<menn0> my suggested name sucks :)
<anastasiamac> thumper: how can i test the command behind a feature flag?
<anastasiamac> thumper: how do i set feture flag for tests*?
<thumper> anastasiamac: set the feature flag in the test
<thumper> anastasiamac: I have something in my current branch
<thumper> anastasiamac: that makes it trivial
<thumper> anastasiamac: can you wait?
<thumper> where current means hopefully by the eod
<anastasiamac> thumper: i can wait for the final land, but I need a way while developing now...
<anastasiamac> thumper: i'll look at what's currently done..
<thumper> anastasiamac: look at the actions code
<anastasiamac> thumper: unfortunately, the test is not mine, m just adding to it...
<anastasiamac> thumper: thnx!
<jw4> anastasiamac: cmd/juju/action/action.go :: const FeatureFlag string = "action"
<jw4> anastasiamac: and then just grep usages of FeatureFlag in cmd/juju/action/...
<anastasiamac> jw4: yep. i got this far. but it's never set in any of the tests....
<jw4> anastasiamac: ah; gimme a sec
<thumper> jw4: btw, my branch does change that... hopefully it'll land by EOD
<thumper> if all goes well
<menn0> thumper: right. i have all the per-env workers started under the envworkermanager
<menn0> thumper: but there's no way it'll work for a non-state server env just yet
<thumper> yep
<jw4> anastasiamac: cmd/juju/main_test.go
<thumper> menn0: getting there :)
<menn0> thumper: yep :)
<menn0> thumper: the remaining bits aren't hard, but I do need to change the envworkermanager a little
<menn0> thumper: doing that first in a separate branch
<jw4> anastasiamac: setFeatureFlags("...")
<anastasiamac> jw4: AHHH! AWESOME :D
<anastasiamac> jw4: thnx!
<jw4> anastasiamac: yw - I'm looking forward to seeing thumpers branch too :)
<anastasiamac> jw4: u r removing "action" for one of the runs... mind if I modify it to be a bit more generic to remove any features that are behind the flags?
<jw4> anastasiamac: not at all - please do
<anastasiamac> jw4: gr8 :D
<thumper> menn0: branch done, just waiting for the factory one to land
<thumper> before proposing
 * thumper pops the stack again
<menn0> thumper: kk
<menn0> thumper: here's a little change to envWorkerManager that I need to get things working correctly in the machine agent
<menn0> http://reviews.vapour.ws/r/785/
<thumper> menn0: http://reviews.vapour.ws/r/786/
<anastasiamac> jw4: m thinkn http://pastebin.ubuntu.com/9812212/
<jw4> anastasiamac: that looks good to me
<anastasiamac> jw4: excellent! i'll keep it for now until thumper's branch lands :P
<jw4> coolio
<jw4> :)
<katco> wallyworld_: got time for a chat about leadership settings before the standup?
<wallyworld_> katco: maybe, i'm in the middle or trawling log files for a potential 1.21 issue, give me a few minutes and i'll ping you
<katco> wallyworld_: np at all, whenever you're ready
 * thumper has popped back up to the start of the day
<thumper> just one branch on the stack now
<menn0> thumper: looking
<thumper> ah poo
<thumper> can't wrap the facade registration in the init method because the init methods are called before any other code, like mocking out the environment
<thumper> so we have to intercept calls
<thumper> bah humbug
<menn0> thumper: review done
<menn0> looks good
<perrito666> sometimes embedding feels like being back with html and iframes
 * thumper pushes two items on to the stack
<menn0> thumper: can we have a hangout? I have some stuff to show you.
<thumper> menn0: give me five minutes?
<menn0> thumper: sure
<thumper> menn0: now?
<thumper> hangout again?
<thumper> wow, exactly five minutes
<thumper> amigood or amigood?
<thumper> menn0, axw, wallyworld_: https://github.com/juju/utils/pull/106
<wallyworld_> thumper: we're in standup,
<thumper> kk
<menn0> thumper: sorry... I missed your message
<menn0> thumper: hangout now?
<thumper> I'm there
 * perrito666 remaps his brain for go instead of python where said map was still missing
<perrito666> it is amazing how python finds ways to crawl back
 * thumper headdesks
<thumper> why isn't this working...
<thumper> ah fark...
<thumper> WTF?  why is it looking for facade version 0?
<anastasiamac> thumper: version 0 is for facades that existed before versioning was done...
<thumper> but I have registered it with version 1
<thumper> but the apiserver is doing a lookup for 0
<thumper> version 1 is returned in the login results
<anastasiamac> thumper: m sure it is doing what u asked it to :D
<anastasiamac> thumper: if u r in the tests, i saw hardcoding to 0...
<anastasiamac> thumper: testing/apicaller.go
<anastasiamac> thumper: in juju/juju/api/base
<thumper> caller.BestFacadeVersion is returning 0
<anastasiamac> thumper: yep :(
<thumper> WTF?
<thumper> damn,... this code is terrible
<thumper> I needed to add to the map in api/facadeversions.go
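The symptom thumper hit falls out of Go map semantics: if the client-side map of best-known facade versions has no entry for a facade, the lookup returns the zero value, 0. An illustrative sketch of the shape of that map (not juju's actual `api/facadeversions.go`):

```go
package main

import "fmt"

// facadeVersions plays the role of the client's table of best-known
// facade versions. A facade registered on the server but never added
// here silently looks up as version 0.
var facadeVersions = map[string]int{
	"Client": 1,
	"Uniter": 2,
}

// bestFacadeVersion returns 0 for any facade missing from the map -
// the zero value of int - which is the "looking for facade version 0"
// behaviour seen above.
func bestFacadeVersion(name string) int {
	return facadeVersions[name]
}

func main() {
	fmt.Println(bestFacadeVersion("Uniter"))    // 2
	fmt.Println(bestFacadeVersion("NewFacade")) // 0: forgot to add it
}
```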
<dimitern> morning o/
<perrito666> dimitern: I see your morning and raise you a gnight
<dimitern> perrito666, good night to you then ;)
<axw> that moment when you think you've finished a review, only to see you're on page 1 of 3
<anastasiamac> axw: m glad that fun is equally shared within the team :D
<axw> for some value of fun
<menn0> thumper or axw: an easy review: http://reviews.vapour.ws/r/788/
<thumper> menn0: a not so easy review http://reviews.vapour.ws/r/789/
<menn0> thumper: that seems like an unfair trade :)
<thumper> :)
<menn0> thumper: looking :)
<menn0> thumper: done. a thing of beauty :)
<thumper> menn0: fixed and submitted for merge
<axw> wallyworld: I'm just going to change the diskformatter apiserver to filter out unattached block devices
<axw> should've done that in the first place
<wallyworld> ok
<axw> wallyworld: published my reply so you can see my comments while I'm doing that
<wallyworld> looking
<wallyworld> axw: disagree about the singular comment. singular tends to apply for Set operations; but we also have ListKeys(), AddKeys(), MachinesWithTransientErrors(), WatchEnvironMachines() etc
<wallyworld> the plural reflects that we get multiple results
<axw> wallyworld: you can argue either way. in apiserver/provisioner we have Machine, DistributionGroup, ProvisioningInfo
<wallyworld> Machine() get a single machine only
<wallyworld> for a single given tag
<wallyworld> whereas here we are getting multiple BlockDevices for many tags
<axw> wallyworld: uniter AssignedMachine, GetOwnerTag ... these take params.Entities
<axw> whatever, I can change it
<wallyworld> if that's ok it would be good. AssignedMachine is a little unfortunate but i can see why it was done - it gets the single AssignedMachine for each of the specified units
<wallyworld> axw: when done, can you ping me as i'd like to discuss filesystem providers etc
<axw> wallyworld: just testing live, won't be long
<wallyworld> sure
<axw> le sigh, now I've broken something
<axw> wallyworld: let's chat, I'll fix this later
<wallyworld> ok
<wallyworld> axw: i imagine we will be adding a new hook context value for the storage attached hook - "storageInstanceId". this will then be used in storage-get. agree?
<axw> wallyworld: yes
<wallyworld> rightio
<axw> wallyworld: although, I've been wondering whether it shouldn't just be storage-changed; collapse multiple attachments into one event
<axw> in which case there would be no ID
<axw> and instead we'd need a storage-list
<axw> which we may well want anyway
<wallyworld> hmmm, that might work
<wallyworld> since the spec calls for storage-get, perhaps we should still do that
<wallyworld> and storage-attached
<axw> wallyworld: we would need storage-get anyway, but it may need a parameter
<wallyworld> i could make storage-get take a list
<wallyworld> well, it would need the storageInstanceId parameter
<wallyworld> i could make that a slice of ids
<wallyworld> i guess
<wallyworld> i assume hook tool parameters can be slices, never written one before
<axw> wallyworld: they're just commands, they can take whatever you want them to
<wallyworld> wasn't sure about the marshalling implications
<wallyworld> of passing around the parameters
<wallyworld> oh i see, there's a Context
<wallyworld> never written a hook or hook command before  :-)
<dimitern> voidspace, axw, wallyworld, others? PTAL http://reviews.vapour.ws/r/790/
<TheMue> dimitern: core meeting?
<dimitern> TheMue, omw
<voidspace> dimitern: oops, got distracted omw
<dimitern> axw, perrito666, wallyworld, team meeting?
<voidspace> dimitern: TheMue: are we waiting for 11am for our standup or doing it now?
<TheMue> voidspace: dimitern: np with doing it now
<dimitern> voidspace, TheMue, just give me 3 mins and I'll be there
<voidspace> dimitern: ok
<TheMue> ok
<dimitern> voidspace, i'm in the hangout
<voidspace> omw
<perrito666> morning
<voidspace> perrito666: o/
<dimitern> voidspace, I've reviewed your NetworkInterfaces() implementation for MAAS - it's awesome, please propose it :)
 * dimitern steps out for a while
<voidspace> TheMue: thanks for the review
<voidspace> TheMue: listConnectedMacs calls the MAAS API list_connected_macs
<TheMue> voidspace: yw, only minor notes
<voidspace> TheMue: so I don't think the suggested name change is actually helpful
<voidspace> TheMue: but a comment is a good idea
<TheMue> voidspace: I know, but inside a code I like to use its conventions instead of the one of the API it calls ;)
<voidspace> TheMue: well, the name should make it clear what it's doing - and I don't think it's unclear in usage or naming
<voidspace> TheMue: and the parallel with the underlying API is useful for understanding
<voidspace> I'll happily add the comment though
<voidspace> doing it now
<TheMue> voidspace: also sometimes more explicit naming is helpful for the poor guy maintaining this code in three years :D
<voidspace> TheMue: comment pushed, should be clear now http://reviews.vapour.ws/r/791/diff/#
<mfoord> I somehow managed to crash my machine
<mfoord> dropped the keyboard, knocking the usb hub, and whoops...
<perrito666> wow, that is amazing
<mfoord> perrito666: yeah, not sure what happened :-)
<perrito666> I would love to see the bug report :p
<perrito666> mfoord: https://bugs.kde.org/show_bug.cgi?id=108312
<perrito666> it used to be bundled with a picture of the ferrent
<perrito666> ferret*
<mfoord> Hah
<mfoord> nice
<perrito666> https://bugsfiles.kde.org/attachment.cgi?id=11622
<dimitern> mfoord, you've got a review
<dimitern> mfoord, and thanks for reviewing mine - btw can you clarify a bit about replaceContainerConfig?
<dimitern> mfoord, ah, sorry I got you
<dimitern> mfoord, so I didn't want to couple replaceContainerConfig too much with the networking config, as like this it's also useful for storage config
<mfoord> dimitern: ah, ok
<mfoord> dimitern: at least instead of a list of lines it could take a string and do the split itself
<mfoord> dimitern: have to call split manually before every call (all one of them, right?) seems pointless
<mfoord> dimitern: not tying it to network is fine though
<dimitern> mfoord, fair point
<dimitern> mfoord, will do the split internally
<mfoord> TheMue: dimitern: thanks for the reviews guys
<mfoord> dimitern: I'm still looking through that PR
<mfoord> dimitern: but going on lunch now, will return to it
<TheMue> mfoord: yw
<dimitern> mfoord, sure, enjoy!
<TheMue> dimitern: would you mind taking a look at https://github.com/TheMue/juju/tree/networking-interfaces?
<TheMue> dimitern: it hasn't yet merged Michael's latest changes
<TheMue> dimitern: but so far it compiles and tests fine
<dimitern> TheMue, sure, I'll have a look in a bit
<TheMue> dimitern: thx
<wwitzel3> ericsnow: standup?
<ericsnow> wwitzel3: coming
<sinzui> natefinch, master (1.23) has a regression in the last commit. I reported bug 1413652
<mup> Bug #1413652: TestNetworkInterfaces fails on ppv64el unit tests <ci> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1413652>
<alexisb> dimitern, you still around?
<perrito666> natefinch: ?
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1413424
<dimitern> alexisb, yes
<alexisb> dimitern, I was just wondering if you had insight on bug 1413652 mentioned above
<dimitern> TheMue, mfoord, I've updated the PR - can I get an LGTM please? :)
<mgz> alexisb: it's just using DeepEquals where it shouldn't
<TheMue> dimitern: looking
<dimitern> alexisb, hmm.. I might have, let me have a look
<dimitern> alexisb, mgz, oh this looks like a map ordering ppc64 issue
<mgz> dimitern: yup
<mfoord> dimitern: yep, reading
<dimitern> mfoord, yeah - definitely; sorry I should've thought of that - can you propose a fix for that bug?
<mfoord> dimitern: I meant I was reading your PR...
<dimitern> mfoord, basically the results should not rely on map traversal order but be sorted (e.g. by DeviceIndex)
<mfoord> dimitern: ah, is this my fault?
<mfoord> dimitern: ok, easy
<dimitern> mfoord, i'm afraid :)
<mfoord> dimitern: I'll do it now
<dimitern> mfoord, cheers!
<dimitern> mgz, thanks for bringing this up so quickly btw
<mgz> sinzui was on the ball
<alexisb> yes sinzui thank you!
<dimitern> thank you sinzui
<mfoord> dimitern: do we have anything in utils for sorting structs by field name?
<mfoord> dimitern: or should I wrap them?
<dimitern> mfoord, I don't believe so
<mfoord> dimitern: ok
<alexisb> mfoord, where is voidspace?
<mfoord> alexisb: heh, not sure
<dimitern> mfoord, if you check network.SortAddresses - it should be similar
<mgz> mfoord: I think what we did last time was just add the sort methods
<mgz> like dimitern points at there
<mfoord> alexisb: I hard crashed a while back and I guess "voidspace" was still logged in when I resurfaced...
<dimitern> TheMue, sorry, I'm looking at your branch now - got distracted
<mfoord> mgz: dimitern: we're adding sort methods to production structs just to be able to test them?
<mfoord> mgz: dimitern: ah, SortAddresses is used in production too
<dimitern> mfoord, :) well isn't it nice to have a deterministic result?
<mgz> mfoord: yeah, I prefer having any api visible stuff in stable order
<dimitern> mfoord, so not just for testing
<mgz> for backend things, just sorting in the test or doing contentsequals instead of deepequals is fine
<mfoord> mgz: dimitern: ah, SortAddresses is used in production too
<mfoord> hah, oops already said that
<dimitern> :)
<mfoord> mgz: this isn't API visible stuff - I think this is just arbitrarily ordered test input data
<mfoord> I *think*, checking
<TheMue> dimitern: np
<dimitern> TheMue, first look - environs.Networking should also include SupportsAddressAllocation
<TheMue> dimitern: thought about, but on the other hand it's a capability
<TheMue> dimitern: can move it
<dimitern> TheMue, well, SupportNetworks was also a capability IIRC
<TheMue> dimitern: got me :D
<dimitern> TheMue, :) so let's try moving SAA() into the interface and make it a regular method
<TheMue> dimitern: will do so
<dimitern> TheMue, cheers
<dimitern> TheMue, also, for ec2 and MAAS which already support networking, ISTM we should do the SupportsNetworking check once somewhere - like in SetUpTest or e.g. setUpInstanceWithDefaultVpc
<dimitern> TheMue, to save a few lines and avoid calling it in each test
<mfoord> dimitern: it would be relatively easy to write a generic version of that sort taking a field name and using reflection...
<dimitern> mfoord, I agree, would you prefer doing this in utils and then proposing the fix?
<mfoord> dimitern: heh, that will take me longer
<mfoord> dimitern: I've nearly finished the "standard" fix
<dimitern> mfoord, :) exactly my point
<mfoord> dimitern: I'd be interested in doing the general one afterwards
<mfoord> dimitern: I would much *prefer* doing a general one
<dimitern> mfoord, but we should add a generic sort like this to utils nevertheless - after that
<dimitern> mfoord, +1
<mfoord> dimitern: cool, sounds like fun :-)
<TheMue> dimitern: yes, that's my planning. only wanted to show the mechanism to check if an env supports networking and the return of the combined interface
<dimitern> TheMue, it looks solid enough for proposing - minus those few suggestions
<dimitern> TheMue, and I like how many lines get removed
<TheMue> dimitern: fine, will change those then and then do the proposal
<TheMue> dimitern: yeah, that's one of the nice features, remove code
<dimitern> TheMue, cheers! the cmd/juju/action/queue.go seems it doesn't belong there though
<dimitern> TheMue, I'll review it more carefully when you propose it
<TheMue> dimitern: ouch, added it by accident :/
<dimitern> TheMue, np :)
<mfoord> dimitern:
<mfoord> dimitern: http://reviews.vapour.ws/r/792/
<dimitern> mfoord, looking
<dimitern> mfoord, reviewed
<mfoord> dimitern: see my last commit message...
<mfoord> dimitern: https://github.com/juju/juju/pull/1470
<mfoord> dimitern: and thanks :-)
<dimitern> mfoord, :D
<dimitern> mfoord, thank you for the quick fix
<dimitern> mfoord, can I get LGTM on mine now please? ;)
<mfoord> dimitern: going back to reading it
<dimitern> mfoord, cheers
<mfoord> dimitern: so when updating the config we overwrite the file in place?
<mfoord> dimitern: a common pattern is to write the new file as a temp one and then only move it into place when writing is complete
<mfoord> dimitern: avoids creating partial files / invalid files due to a bug
<mfoord> dimitern: probably not an issue for config changing though, because if we crashed we wouldn't use the config file anyway?
<dimitern> mfoord, fair point, and I was itching to use AtomicWriteFile, but since none of the other config updating calls do it I decided not to for now
<mfoord> dimitern: ok
<mfoord> dimitern: if you don't think it's worth it then just drop the issue I created
<mfoord> dimitern: ah wait, you write to it in one go at the end anyway - you're "writing into" a bytes buffer anyway
<mfoord> dimitern: and then you seek, truncate and write
<mfoord> seems like a lot of boilerplate just to write a file... ah well.
<mfoord> dimitern: there's never a need for duplicate prefixes in lxc?
<dimitern> mfoord, ok, I'll look into changing it to use AtomicWriteFile instead
<dimitern> mfoord, there are valid cases for duplicate prefixes - e.g. each lxc.network.type defines a new NIC config
<mfoord> dimitern: ah, but you'd want the line multiple times in the input lines then?
<mfoord> dimitern: it's just that popping it from the parsedLines values dict means multiple replacements need multiple entries
<mfoord> sounds like that's desirable though
<mfoord> "lxc.network.type = foo" in the input line will only replace the first occurrence
<mfoord> heh, testing templates makes for long tests
<mfoord> dimitern: LGTM
<dimitern> mfoord, yes - I tried to explain the way it works, but I'm bad at explaining complex things :)
<dimitern> mfoord, thanks!
<mfoord> "occurrences of any setting in lines will be replaced with the value of that setting."
<mfoord> hmm, is that true? Let me read the code again.
<mfoord> parsedLines is a map of prefixes to new values to use
<mfoord> once a prefix has been encountered, the first value from parsedLines[prefix] is used - and the used value is *removed*
<dimitern> mfoord, it actually got more difficult to explain with a string rather than []string lines :)
<dimitern> mfoord, yes
<mfoord> so next time it is encountered the prefix *won't* be replaced with the new value
<mfoord> so it isn't "any setting in lines" - it's the first occurrence of that setting that is used. If the same prefix is specified multiple times they are used in order.
<dimitern> mfoord, no, so if "lxc.foo" > []string{"bar","baz"} and you have lxc.foo=1\nlxc.foo=2\nlxc.foo=3\n it becomes lxc.foo=bar\nlxc.foo=baz\nlxc.foo=3\n
<mfoord> right, but if I have "lxc.foo" > []string{"bar"}
<mfoord> the documentation implies that "any occurrence" of "lxc.foo" will become bar.
<dimitern> mfoord, then it will be lxc.foo=bar\nlxc.foo=2\nlxc.foo=3\n
<mfoord> Whereas it's actually only the first.
<dimitern> mfoord, right, right - I'll change this
<mfoord> "occurrences of any setting"
<mfoord> dimitern: it's fiddly to explain as what it is doing is fiddly
<dimitern> mfoord, how would you explain it now that you know what the code does?
<mfoord> but "occurrences of any setting" implies to me multiple replacements from a single value
<mfoord> hehe
<dimitern> :)
<dimitern> mfoord, maybe just with a few examples?
<mfoord> you've pointed people to the tests and there are some examples there
<mfoord> so I don't think more examples are needed in the docstring
<mfoord> just maybe a wording tweak for that one line
<mfoord> I'm thinking
<mfoord> Then the occurrence of a setting in a line of the config file will be replaced by values from newConfig. Values in newConfig are only used once (in the order provided), so multiple replacements must be supplied as multiple input values in newConfig.
<mfoord> Replacing "Then any..."
<mfoord> It's not so concise but a bit more precise I think.
<mfoord> and I need more coffee
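The semantics mfoord and dimitern settle on - each occurrence of a key consumes the next unused replacement, in order, and later occurrences are left alone once the list is exhausted - can be sketched in a few lines (illustrative names, not the juju container code):

```go
package main

import (
	"fmt"
	"strings"
)

// replaceSettings rewrites config lines: for each line whose key has
// pending replacement values, the next unused value is consumed in
// order; once the values for a key run out, further occurrences of
// that key pass through unchanged. Note: it mutates newConfig.
func replaceSettings(lines []string, newConfig map[string][]string) []string {
	out := make([]string, 0, len(lines))
	for _, line := range lines {
		key := strings.TrimSpace(strings.SplitN(line, "=", 2)[0])
		if vals := newConfig[key]; len(vals) > 0 {
			line = key + " = " + vals[0]
			newConfig[key] = vals[1:]
		}
		out = append(out, line)
	}
	return out
}

func main() {
	lines := []string{"lxc.foo = 1", "lxc.foo = 2", "lxc.foo = 3"}
	fmt.Println(replaceSettings(lines, map[string][]string{"lxc.foo": {"bar", "baz"}}))
	// [lxc.foo = bar lxc.foo = baz lxc.foo = 3]
}
```

This reproduces dimitern's example: two supplied values rewrite the first two occurrences and the third is untouched.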
<mfoord> dimitern: mgz: the bug fix for 1413652 landed so I changed the issue to fix committed
<mgz> mfoord: ace
<dimitern> mfoord, nice, I'll take it :)
<dimitern> mfoord, thanks
<mfoord> mgz:  ace is such an eighties word :-)
<mgz> it's when I was born :)
<mfoord> :-)
<mfoord> my Dad still uses it because we used it when we were kids
<mfoord> he thinks it's still cool to be ace
<mfoord> and I guess he's right...
 * mfoord goes on a coffee hunt...
<wwitzel3> ericsnow: ok, grabbing some food then we can take a look at that review
<ericsnow> wwitzel3: great
<hazmat> perrito666, ping
<perrito666> hazmat: pong
<hazmat> perrito666, just trying to understand some of the backups api
<hazmat> perrito666, afaics info method doesn't have a purpose, given all content is in list
<perrito666> hazmat: you will have to talk to ericsnow about it, I just wrote restore, there is little left of the backup logic I once did.
<perrito666> it might be redundant though, since backup+restore lacks a spec it was mostly played "by ear"
<hazmat> perrito666, ok.. i think i got most of it from the code, i'm just looking at trunk
<perrito666> hazmat: hacking something into backups?
<hazmat> perrito666, exposing all the facades via jujuclient atm
<ericsnow> hazmat, perrito666: yeah, Info is effectively just a special case List (where you don't have to look up the one you care about in the result)
<hazmat> and exercising all the apis while i'm at it.
<hazmat> sort of wishing that agent version was in login result
<hazmat> its a bit hard to specialize some of the /environment/:uuid/backups|charms etc urls without knowing the version
<ericsnow> hazmat: keep in mind that the backups API methods related to restore probably should be avoided (once they land)
<wwitzel3> ericsnow: headed in to moonstone
<ericsnow> hazmat: they have to work together and require some set up (the restore command will cover all that)
<hazmat> ericsnow, cause they're not stable?
<hazmat> cool
<hazmat> its just upload  & restore i would hope, but fair enough
<ericsnow> hazmat: each of the restore-related methods makes assumptions (e.g. that bootstrap has already happened in a new env)
<perrito666> hazmat: restore is a big piece of twisted machinery and it's landed in small bits since no one seems to be able to review it as a whole so I am entering the bits as they are approved
<ericsnow> hazmat, perrito666: I suppose it could make sense to roll that logic out of the command and into the API client, but bindings other than our API client would have to know those details to incorporate them
<hazmat> bootstrap already seems reasonable, i'll bake what i can into my functional tests.
<ericsnow> perrito666: perhaps you could give hazmat a link to the cmd and API client code so he can see what's going on
<hazmat> i don't see restore in trunk looking at api server source.
<ericsnow> hazmat, perrito666: really it would be nice if restore could be handled on the server side by a single API call, but I'm not sure that's feasible (or worth it)
<ericsnow> hazmat: http://reviews.vapour.ws/r/732/ (just the API server part)
<hazmat> ericsnow, thanks
<hazmat> so three calls prep/restore/finish
<perrito666> hazmat: yup, finish is not really required, you see prep sets the server in a status that prevents anything else from being done (to avoid data loss from whatever you asked the server to do between deciding to restore and restore actually taking effect)
<perrito666> restore does well, restore
<perrito666> and finish just finishes ok if restore finished properly (restore will kill the server so your connection will be dropped)
<hazmat> got it, thanks
<perrito666> somehow each time I explain this part of restore I think of Wile E. Coyote cutting the branch he is standing on
<mfoord> g'night all
 * perrito666 listens to the afternoon news on the radio and wonders why did he decide to do that in the first place
<mwhudson> waigani, cherylj, menn0: morning
<mwhudson> oops
<perrito666> mwhudson: ?
<mwhudson> forgot stand up was now :)
<natefinch> Good god, bank websites blow
<perrito666> natefinch: mine works well although the bank might have been forbidden from doing certain operations by the govt bc they were helping in money laundering schemes
<natefinch> perrito666: heh... this is just stupid crap like not giving you sane options anyone with half a brain would want, or making things unclear when all you need is a tiny bit more status text to make it clear.
<natefinch> the problem today was that I wanted to set up automatic payments for my credit card, but they won't let you do that when you have an outstanding payment due.... which is like... why do you think I want to set up automatic payments?!  So I schedule a payment for today, see it's not actually quite enough, so I schedule a second one... but they won't let you schedule two payments for the same day.
<perrito666> lol
<perrito666> we do that the other way round
<perrito666> we set the bank to accept charges from the card
<perrito666> and the rest is not my problem
<katco> sanity check: does DeepEquals resolve the map ordering issue in tests?
<natefinch> katco: I think so.... I think the problem we had was converting maps to slices of values and then comparing the slices
<katco> natefinch: gotcha thank you!
 * perrito666 stumbles upon a function half using new errors half using old style and cries
<perrito666> omg state/unit.go:1462 is a PITA, it composes this error in bits (a word per line) making it so hard to find
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs:  1413652
<sinzui> wallyworld_, Can you review this branch when you have time http://reviews.vapour.ws/r/795/
<wallyworld_> sure
<wallyworld_> sinzui: done, plus 1.22-beta1 going through landing tests
<sinzui> \o/
#juju-dev 2015-01-23
<perrito666> wallyworld_: THE amount of things my pr breaks that I had not noticed :p
<wallyworld_> perrito666: wot? be with you soon, in a meeting
<wallyworld_> perrito666: so what's broken?
<anastasiamac> axw: wallyworld_: PTAL http://reviews.vapour.ws/r/796/
<wallyworld_> soon, knee deep in writing tests :-(
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: None
<anastasiamac> wallyworld_: thnx :D
<axw> anastasiamac: reviewed
<anastasiamac> axw: thnx :) will look
<anastasiamac> axw: thnx for review!
<axw> np
<anastasiamac> axw: from a brief look at the comments it's not far off :)
<axw> anastasiamac: yes I think so, a few key assumptions to fix but mostly looking good
<anastasiamac> r u happy for me to change unit name/storage name pair to storage id?
<anastasiamac> or.... shall we allow show to be a version of list for a unit/storage pair
<axw> anastasiamac: not sure, it's not exactly a very friendly interface
<axw> anastasiamac: I guess it does double up on list if that's the case
<axw> anastasiamac: list should allow you to filter on unit+storage
<axw> anastasiamac: maybe just have it take storage ID for now
<anastasiamac> axw: k.... :D
<anastasiamac> axw: id or tag?...
<axw> anastasiamac: ID at the user level, tag at the API level
<axw> i.e. convert user-input ID to a StorageTag
<axw> and pass that to backend
<anastasiamac> axw: awesome! will do
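The ID-at-the-user-level, tag-at-the-API-level convention can be sketched like this (a toy conversion; the real one lives in juju's names package, and the exact tag format used here is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// storageTag is a toy stand-in for the real conversion in juju's names
// package: users type storage IDs like "data/0", the CLI converts them once
// at the boundary, and everything behind the API speaks tags.
func storageTag(id string) (string, error) {
	if !strings.Contains(id, "/") {
		return "", fmt.Errorf("%q is not a valid storage ID", id)
	}
	return "storage-" + strings.Replace(id, "/", "-", -1), nil
}

func main() {
	tag, err := storageTag("data/0")
	if err != nil {
		panic(err)
	}
	fmt.Println(tag) // storage-data-0
}
```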
<menn0> wallyworld_ or axw: tiny review please: http://reviews.vapour.ws/r/797/
<axw> looking
<menn0> axw: this fixes something that bit me while writing tests around the new envWorkerManager
<axw> menn0: reviewed
<menn0> axw: cheers (agree with the comment. will fix)
<axw> cool, nps
<anastasiamac> axw: made the changes...
<axw> looking
<axw> anastasiamac: replied
<anastasiamac> axw: thnx :D
<anastasiamac> axw: m so gald to know that I ams ooo close :D
<anastasiamac> am sooo*
<axw> :)
<anastasiamac> axw: i'll address them now-ish... kids r coming home soon :(
<axw> no worries
<wallyworld_> axw: it's all very rushed as i have to disappear for a while, but here's storage-get implementation. the uniter worker factory tests use jujuconnsuite so the implementation works end-end http://reviews.vapour.ws/r/798/
<axw> wallyworld_: cool, I'm just trying to get storage-attached working atm, will take a look in a bit
<wallyworld_> no hurry as i'm gone for a few hours
<menn0> axw: sorry to bug you again. another micro-review pls? http://reviews.vapour.ws/r/799/
<axw> sure
<axw> menn0: LGTM
<menn0> axw: tyvm
<axw> woo
<axw> wallyworld_: 2015-01-23 08:19:15 INFO juju.worker.uniter.context runner.go:149 skipped "data-storage-attached" hook (not implemented)
<axw> fwereade: can you please cast your eyes over http://reviews.vapour.ws/r/800/, specifically the changes in the worker/uniter package. I mostly want feedback on the refactoring of HookSource/HookSender
<fwereade> axw, certainly
<fwereade> axw, from the summary, that's almost completely awesome, because I was planning to extract them myself at some point :)
<fwereade> axw, but I'll take a look at the details ;)
<axw> fwereade: great, hopefully the implementation lives up to it ;)
<dimitern> hey axw
<axw> heya dimitern
<dimitern> my new lxc.updateContainerConfig helper landed just now, please have a look when you can, as I designed it to be useful in general (not just networking) and should also help with configuring storage settings for lxc
<axw> dimitern: awesome, thanks. I'll take a look now
<dimitern> axw, cheers
<fwereade> axw, remind me, when exactly do the storage instances get put on the unit?
<fwereade> axw, after the machine's set them up and everything's in place?
<axw> fwereade: storage instances get created and assigned to a unit when the unit is created (the watcher logic is wrong atm)
<axw> fwereade: it should be checking to see that the storage instances are provisioned
<TheMue> morning
<axw> ahoy
<fwereade> TheMue, o/
<TheMue> ;)
<dimitern> hey TheMue, you've got a review btw :)
<fwereade> axw, is the final intent that the unit only react once they are provisioned? or will the unit be responsible for any part of that?
<TheMue> dimitern: ah, great, thx. will take a look
<dimitern> TheMue, I haven't said :ship it: as I'd like to have another look when you implement the suggestions
<axw> fwereade: the *uniter* will only react to provisioned ones, but there will be another worker in the unit agent that takes care of provisioning *some* storage instances (e.g. tmpfs)
<TheMue> dimitern: sure
<fwereade> axw, would it be reasonable to have that on the machine agent instead?
<axw> fwereade: there will also be a storage provisioner on the state server (for IaaS volumes), and one on the machine agent (for disks)
<axw> fwereade: not for filesystems-type storage instances I think, since they're owned by the unit
<fwereade> axw, hm, I think something may have just crystallized for me
<fwereade> axw, it feels like there's maybe a risk of repeating the mistakes we made with subordinates?
<axw> fwereade: (btw, I'm not at all happy with the storage model atm, if you have suggestions I'm very happy to hear)
<axw> fwereade: I'm not sure I know what those mistakes are
<fwereade> axw, so the big one is making subordinates specific to units, not to machines
<axw> fwereade: ok. so, in fact, storage instances can be owned by services or units, this could be extended to machines
 * dimitern is sick of typing "juju" instead of "git"
<dimitern> it will be nice to be able to do "juju commit -m 'Changes after review'" or "juju checkout master"
<axw> fwereade: in which case I can look at putting all provisioning on the machine agent
<fwereade> axw, it means the relation scoping rules are weird, and if you have two colocated units of X with subordinate Y you *also* get two colocated Ys
<dimitern> :)
<axw> fwereade: for storage, I think that would be expected (placement is disabled for storage right now, though)
<fwereade> axw, agreed on that front
<fwereade> axw, ok, from another angle, I think my primary motivation
<fwereade> axw, is that having more workers in the uniter makes it harder to unify the agents
<axw> fwereade: yep, fair enough. I think I can deal with having it on the machine agent
<fwereade> axw, there's a strong dose of "the physical disks are definitely the machine's responsibility, and it's weird to have certain storage types handled elsewhere"
<fwereade> axw, but to try to explore the original feeling
<fwereade> axw, machines have units
<fwereade> axw, units have subordinate units
<fwereade> axw, all of those things run together on the same machine
<fwereade> axw, so there's a mismatch
<fwereade> axw, it led to a horrible watcher
<fwereade> axw, because the machine wants to know about the complete set of units that need to be deployed
<fwereade> axw, and we need to look all over the db to find them ;p
<axw> ah, I see
<fwereade> axw, I'm thinking along the lines of "creating a unit/service adds a record for appropriate storage, somewhere it'll be seen by a suitable provisioner"
<fwereade> axw, similarly, adding storage later does the same
<fwereade> axw, neither of these things need to touch the unit/service docs
<axw> fwereade: so we have a "storageinstances" collection that holds them
<axw> fwereade: the docs have an "owner" field which is either a unit or service tag
<axw> that's currently how the watcher filters
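The shape being described might look like this (field names and the filter are illustrative stand-ins, not the actual juju schema): storage instance docs live in their own collection, carry an owner tag, and a watcher filters by owner without touching the unit or service documents.

```go
package main

import "fmt"

// storageInstanceDoc sketches a doc in a hypothetical "storageinstances"
// collection; the bson tags mirror how mgo would map the fields.
type storageInstanceDoc struct {
	Id    string `bson:"_id"`
	Owner string `bson:"owner"` // unit or service tag, e.g. "unit-mysql-0"
}

// ownedBy mimics the watcher's filter: keep only docs owned by the given tag.
func ownedBy(docs []storageInstanceDoc, owner string) []storageInstanceDoc {
	var out []storageInstanceDoc
	for _, d := range docs {
		if d.Owner == owner {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	docs := []storageInstanceDoc{
		{Id: "data/0", Owner: "unit-mysql-0"},
		{Id: "logs/1", Owner: "service-mysql"},
	}
	fmt.Println(len(ownedBy(docs, "unit-mysql-0"))) // 1
}
```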
<axw> fwereade: possibly I'm doing something dumb, but they do touch the unit/service to add a reference (see my latest email)
<fwereade> axw, right, indeed, that is necessary
<fwereade> axw, yeah, I've been thinking about that and have yet to usefully formulate a response, but part of that is why I'm talking to you now
<axw> ok :)
<fwereade> axw, so, exactly, it's the backreferences that make me uncomfortable
<fwereade> axw, assume everything we have today but without the backreferences
<fwereade> axw, but also associate storageinstances explicitly with the machine (and?, hmm, remote storage provider?)
<fwereade> axw, such that in either case we can have a simple watcher that connects easily with the right storage provisioner worker
<axw> fwereade: ok. so it's okay to destroy a unit that has active storage instances?
<fwereade> axw, and then (analogously to how we set instancedata once a machine's provisioned) we can put the actual info the unit agent needs to know in another collection somewhere, such that it's simple for the unit agent to watch it and respond once it's there
<fwereade> axw, and, yes, I think it is
<fwereade> axw, so long as they do get cleaned up, it doesn't have to be the unit's responsibility
<fwereade> axw, you could just have a cleanup step in state that finds all storage tagged with the removed unit and sets it to dying
<fwereade> axw, in fact it's quite nice to have it on the machine for that reason as well
<axw> fwereade: we *would* want to block machine destruction until that's done though right?
<fwereade> axw, machine destruction, yes, I think so
<axw> ok, sounds reasonable
<fwereade> axw, which would maybe imply backreferences on the machine
<fwereade> axw, but I would also say, ideally, *not* on the machine *document*
<axw> fwereade: right, I will try to avoid doing that
<fwereade> axw, we need to be really careful about adding fields to those docs without carefully considering the impact on existing watchers
<fwereade> axw, basically given the current watcher implementation it's usually sensible to group fields according to their read/write patterns
<axw> fwereade: I'm going to have to rework the storage model a bit too. because of shared storage, we'll need to have a separate entity that models a unit's (machine's?) attachment to that storage instance
<axw> likewise for volumes/block devices
<axw> (i.e. for multi-attach volumes)
<fwereade> axw, got you
<fwereade> axw, and in fact
<fwereade> axw, that document containing the instancedata-equivalent
<fwereade> axw, which is what the unit needs to see and watch
 * axw nods
<fwereade> axw, maps pretty much perfectly onto that entity
<fwereade> axw, there may still be a question about shared storage
<fwereade> axw, ah-ha, maybe
<axw> ?
<fwereade> axw, *provisioning* and *accessing* a given storage are fundamentally different responsibilities
<axw> right
<fwereade> axw, so all the provisioning happens on the machine agent
<fwereade> axw, ie creation/destruction of actual substrate
<fwereade> axw, and once the storage exists, the minimum necessary info to access it is handed over to the unit agent
<axw> fwereade: I think that sounds sensible. the machine agent will provision it and send that info to the API server, which will update the storage instance *attachment* doc with the location, etc., which the unit will then see
<fwereade> axw, the unit agent is then responsible for turning it into the form the actual unit expects -- possibly even including things like "mount the device" and "create the filesystem"? -- but, crucially, this becomes synchronised with the unit agent, because we can do that work inside operations, and queue suitable hooks for afterwards
<fwereade> axw, the other thing
<fwereade> axw, is that the storage-watching seems like it's targeted at the dynamic storage model and I think the first use case to satisfy is the "run all the storage hooks before the install hook" one
<fwereade> axw, that's not to say it won't become useful
<axw> fwereade: I was intending to do that next, because it's of less demo value :)
<axw> fwereade: right now we just want to see the hooks firing when storage is assigned
<axw> fwereade: so what I was going to do is after the deploy operation has finished, but before the "install" or "upgrade-charm" hooks are run, wait for the required storage to be attached
<axw> is that the right place to interpose?
<fwereade> axw, yes, I think so
<fwereade> axw, so instead of queueing an install hook post-deploy we instead record and run some attaching-storage mode
<fwereade> axw, forgive dumb questions, haven't read it all yet
<fwereade> axw, local persistent what's-attached storage?
<axw> fwereade: attached storage is storage that's assigned to the unit and ready for use. i.e. the block device is visible and/or the filesystem is formatted and mounted
<fwereade> axw, we need to bear in mind that we could get bounced at any time, and we'd really like to be able to pick up seamlessly
<fwereade> axw, how do we know we've already run the appropriate storage-attached hook?
<fwereade> axw, I don't think we want to attach all the storage every time we start the agent
<fwereade> axw, it's bad enough doing that with config-changed ;)
<axw> fwereade: we don't (yet), I will be adding local state persistence
<axw> a la the relationer
<fwereade> axw, cool
<fwereade> axw, but it brings me to another thing
<fwereade> axw, that storage interface in uniter basically looks good, and the correspondence with relations is good and sane
<fwereade> axw, the trouble is that I'm pretty sure the relationers are in the wrong place
<fwereade> axw, and that they need to be associated more closely with operation state, and not with the uniter itself
<fwereade> axw, are you roughly familiar with any of the uniter/runner code?
<axw> fwereade: very roughly :)
<fwereade> axw, there's a yucky callback into the uniter when we set up a hook context to find out what relation state currently looks like
<fwereade> axw, but operations themselves are composed of methods that accept a state and return a new state
<fwereade> axw, we could and should be feeding that info along an explicit path
<axw> I see, so we're going uniter -> operation -> runner and then back to uniter via a callback?
<fwereade> axw, so the state we pass into the operation methods includes what storage there is, what relations there are, etc etc
<fwereade> axw, yeah exactly
<fwereade> axw, I have done a reasonable job of splitting out the types that needed to exist
<fwereade> axw, but now the borders are in place there's still a bunch of moving things to the right place
<axw> fwereade: ok, I'll bear it in mind when getting to the context bit... which will be very soon
<fwereade> axw, ofc, good point
<fwereade> axw, yes, please make sure it's part of the state you make accessible via the operation executor
<fwereade> axw, (and yes it's a *bit* weird that *that* is responsible for operation state persistence, feel free to massage it towards sanity)
<fwereade> axw, and if you're doing that...
<fwereade> axw, would you just move relations in the same way at the same time please?
<fwereade> axw, you don't have to fix the callback, although it would be cool if you did
<fwereade> axw, just, if you're moving one field please move the other because the same forces apply to both
<fwereade> axw, making sense?
<axw> fwereade: nope. moving which field?
<fwereade> sorry
<fwereade> axw, uniter.relations
<fwereade> axw, I think is completely analogous to uniter.storage
<axw> right
<fwereade> axw, that storage info needs to get to the execution contexts
<fwereade> axw, as does relation info
<fwereade> axw, which currently uses a scary/evil callback
<fwereade> axw, we should not repeat the relations mistake
<fwereade> axw, we should make sure storage state is passed into operations in the same way the other local state is
<axw> ok
<fwereade> axw, (see the uniter/operation.Operation interface)
<fwereade> axw, (and Executor which uses it)
<voidspace> TheMue: dimitern: sorry guys - I'll be a few minutes late to standu
<voidspace> *standup
<fwereade> axw, this likely implies changing something about the Operation/Executor interface (or possibly the operation.State type??)
<fwereade> axw, whatever you do to it to add storage, please also do to add relations
<axw> fwereade: ok, understood. I'll need to figure that bit out first, but I'll do both at the same time
<fwereade> axw, you can keep the field on the uniter and the callback to it -- just make sure the relations do get to the runner context via an explicit chain of calls, and then it'll be easy to remove the callback and the uniter's reference to relations
<axw> fwereade: ok. sorry for being dumb, but where would relations and storage live if not on uniter?
<axw> modes feeds them...
<fwereade> axw, I think they're part of operation state
<fwereade> axw, not sure you've had a chance to keep up with the most recent uniter stuff?
<fwereade> axw, particularly the operation stuff
<fwereade> axw, wait, you did review 760 I think
<fwereade> axw, so the direction we're currently moving in is
<fwereade> axw, strip uniter down as far as we can, it's got too many responsibilities, and the only one that's *clearly* the uniter's problem is creating (and maintaining) the other components that need to work together
<fwereade> axw, the filter, the juju-run handler, the actual main loop
<axw> fwereade: ok. so uniter will be responsible for taking things from the filter and passing them to the operation state
<axw> operation state will encapsulate the relations and storage bits
<axw> and ... will have a hook output channel?
<fwereade> axw, to support this, the various tangled skeins of "what actually happens when X changes" are being extracted into operations
<fwereade> axw, which will ideally operate purely against the state they're given, but are currently supported by alarming numbers of callbacks
<fwereade> axw, which is still better than before because at least we can test them in some detail
<fwereade> axw, I'm not sure about the output channel
<fwereade> axw, did you see relation.Peeker? I think it was 761
<axw> I didn't look at that one
<fwereade> axw, so the core idea there is that relying on one big select with a bunch of inputs is not actually good enough to get any sane sort of hook ordering behaviour
<fwereade> axw, so relation.Peeker is an alternative to a HookSender for using a HookSource
<fwereade> axw, it has a Peeks channel, which has something to give you if the source is not empty
<fwereade> axw, when you read from the peeks channel you get a Peek with a hook.Info, and Consume/Reject methods
<fwereade> axw, you have to consume or reject the peek before it'll give you another, or read from the watcher again
<fwereade> axw, but basically you can leave them to run in the background and only interrupt them when you know there's no higher-priority hook to run
<fwereade> axw, so the "make it a HookSource" thing is great
<fwereade> axw, but please make sure it works with Peeker as well as HookSender
<fwereade> axw, all that CL has is the addition of the type, I don't use it yet
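The Peeker contract described above can be sketched as follows (type and method names are simplified stand-ins for the real relation.Peeker API): a background goroutine offers one Peek at a time; the caller must Consume or Reject it before the next is offered, so a deferred hook is never lost.

```go
package main

import "fmt"

// Peek offers one pending hook; the holder must call exactly one of
// Consume or Reject before another Peek will be offered.
type Peek struct {
	Info string    // stand-in for hook.Info
	done chan bool // true = consume, false = reject
}

func (p *Peek) Consume() { p.done <- true }
func (p *Peek) Reject()  { p.done <- false }

// peeker drains a queue of pending hook names, offering one Peek at a time
// and closing the channel once everything has been consumed.
func peeker(queue []string) <-chan *Peek {
	peeks := make(chan *Peek)
	go func() {
		defer close(peeks)
		for len(queue) > 0 {
			p := &Peek{Info: queue[0], done: make(chan bool)}
			peeks <- p
			if <-p.done {
				queue = queue[1:] // consumed: drop from the source
			} // rejected: left at the head for the next peek
		}
	}()
	return peeks
}

func main() {
	peeks := peeker([]string{"joined", "changed"})
	p := <-peeks
	p.Reject() // higher-priority work first; the hook stays queued
	p = <-peeks
	fmt.Println(p.Info) // still "joined": rejecting did not lose it
	p.Consume()
	p = <-peeks
	fmt.Println(p.Info) // "changed"
	p.Consume()
}
```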
<axw> fwereade: I did update relation/peeker.go, I just didn't look into what it does
<axw> thanks for the explanation
<fwereade> axw, perfect then :)
<fwereade> axw, maybe non-obvious: this means I intend to do away with the whole starthooks/stophooks malarkey in modeabide
<fwereade> axw, when we know about the relations we can just watch them forever and only check the queues when we need them
<axw> fwereade: ahhh I see, so they'll just be running watchers (in a goroutine) all the time
<fwereade> axw, yeah, I think it's much less complex
<axw> yep
<axw> I thought it was a bit awkward
<fwereade> axw, since you're here and in this area though, another thing to be aware of and feel free to do if I haven't and it helps you
<fwereade> axw, when passing storage state around
<fwereade> axw, the current interface for Operation is wrong : it accepts a State, and returns *State, error, and the executor is responsible for writing out the new state
<fwereade> axw, this is already awkward to some degree and will not make sense at all with relations and storage
<fwereade> axw, it ought to be accepting a State type that it can modify and then write if it wants to
<fwereade> axw, because certain operations will absolutely want to change that state
<fwereade> axw, attaching storage is absolutely something you want to record having done
<fwereade> axw, as is recording relation hook state
<axw> fwereade: how does the current interface not handle that?
<fwereade> axw, magic callbacks into the Uniter! if you want a catalogue of my sins, see uniter/op_callbacks.go
<fwereade> axw, hopefully I will get that down to (almost) nothing over time
<fwereade> axw, and it's already paid off because now I can at least test what individual operations actually do in some detail
<axw> fwereade: oh right, it modifies state but not the relation state
<fwereade> axw, yeah
<fwereade> axw, and the relation state modification happens in CommitHook iirc
<axw> ok
<fwereade> axw, as you have on the Storage interface
<axw> yup
<fwereade> axw, when it would be much nicer to have the state-to-modify passed in directly in both instances
<axw> :q
<axw> oops
<fwereade> axw, anyway, sorry my code led you astray
<axw> fwereade: no worries, thanks for the lesson :)  this has been very informative
<fwereade> axw, cool
<axw> not sure how I'm going to do all that by Tuesday though.. maybe not sleep
<fwereade> axw, as always I am giving you high-level advice, we can't realise the vision all in one go
<axw> :)
<fwereade> axw, so, for example, so long as you do promise to fix it later
<fwereade> axw, it would be very reasonable to start off by just tacking the storage state onto the existing operation.State
<axw> fwereade: noted. I will probably have to use that line
<axw> ok
<fwereade> axw, because I think you *do* need persistence of what you've done
<fwereade> axw, and you can at least keep using an existing mechanism (ie executor writing it out for you)
<fwereade> axw, (by you returning a state with whatever changes)
 * axw nods
<fwereade> axw, fwiw, the other thing I need to do with uniter.relations
<fwereade> axw, is to turn Update into an operation
<fwereade> axw, so there will be storage ops, and relations ops -- neither of those will run hooks, but they will write out state that directly or indirectly queues other hooks
<fwereade> axw, and uniter will hopefully shrink to almost nothing
<fwereade> sorry bbs
<axw> ok
<fwereade> axw, ok back
<fwereade> axw, so do you feel usefully guided? :)
<axw> fwereade: mostly. I'm not really seeing how I can get rid of PrepareHook/CommitHook yet
<axw> in the callbacks
<axw> fwereade: I mean, apart from coding relation/storage logic into the operations package
<fwereade> axw, I think that's exactly what we need
<axw> ok
<fwereade> axw, we currently have deploy ops, and these are analogous, I think
<fwereade> axw, I need to do a RelationsChanged operation soon
<fwereade> axw, I still don't have the full picture because of the starthooks/stophooks thing
<axw> I see, so I'd be adding a StorageAttached operation which queues a <name>-storage-attached hook if it's committed
<fwereade> axw, exactly
<fwereade> axw, (what I might do with start/stop hooks is just make them run in every mode and shut down when the uniter shuts down)
<fwereade> axw, and that feels sort of correct
<fwereade> axw, in that it's essentially just another kind of operation filter
<fwereade> axw, and we don't start/stop all that every time uniter state changes
<axw> sounds logical
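Putting the pieces together, here is a minimal sketch of the accept-State/return-*State operation shape discussed above, with the executor doing the persisting and a storage-attach op recording the hook it would queue (all names are simplified stand-ins for juju's uniter/operation package):

```go
package main

import "fmt"

// State is a toy stand-in for operation.State: what has been done, and
// which hook should run next.
type State struct {
	Kind            string
	StorageAttached []string
}

// Operation mirrors the accept-State, return-*State shape; a nil result
// means "no state change to record".
type Operation interface {
	Execute(state State) (*State, error)
}

// attachStorage records an attachment and, via the returned state,
// queues a <name>-storage-attached hook.
type attachStorage struct{ id string }

func (op attachStorage) Execute(state State) (*State, error) {
	state.StorageAttached = append(state.StorageAttached, op.id)
	state.Kind = "run-hook:" + op.id + "-storage-attached"
	return &state, nil
}

// executor threads state through operations and "persists" each returned
// state, standing in for the Executor's write-out responsibility.
func executor(initial State, ops ...Operation) (State, error) {
	state := initial
	for _, op := range ops {
		newState, err := op.Execute(state)
		if err != nil {
			return state, err
		}
		if newState != nil {
			state = *newState // the executor, not the op, records the change
		}
	}
	return state, nil
}

func main() {
	final, _ := executor(State{}, attachStorage{id: "data"})
	fmt.Println(final.Kind) // run-hook:data-storage-attached
}
```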
<fwereade> axw, I'm wondering whether uniter is eventually going to collapse down into a single worker.Runner constructor :)
<axw> fwereade: heh, I was thinking that earlier :)
<fwereade> axw, I think there's enough of that stuff going on that it's what we need somewhere
<fwereade> axw, it's a single non-trivial responsibility and it deserves some isolation
<fwereade> axw, we'll get there one day :)
 * dimitern steps out for ~1h
<fwereade> axw, reviewed in some detail, hope it's helpful
<perrito666> morning
<axw> thanks fwereade, about to eat, I'll take a look later
<wallyworld_> axw: i was thinking the storage-attached hook might reasonably pass in multiple storageids if several are attached before the hook fires
<anastasiamac> axw: updated PR. plugged in bare minimum :D
<axw> wallyworld_: what's the point though? if you can't guarantee that?
<axw> anastasiamac: awesome
<wallyworld_> axw: similar to bulk calls - sure it may only be 1 most times, but why not allow for > 1 at minimal extra cost
<axw> wallyworld_: I think it's more important to be consistent with other hooks. the most similar we have to compare to is relation hooks, and we don't combine hook calls for different relations
<wallyworld_> ok, fair point, i'll change it. i'm not overly familiar with the design philosophy behind hooks
<axw> wallyworld_: I think this is something that would be good to get fwereade's input on, maybe when you're at the sprint
<wallyworld_> will do, i'll make it one for now - that will also simplify the output and remove the need for parsing
<wallyworld_> if we add an attribute key as you suggested
<fwereade> wallyworld_, yeah, I think we'd prefer to go for granular and stay consistent
<wallyworld_> fwereade: the cost is that it will translate into multiple api calls to get the instance details
<wallyworld_> one api call per hook invocation
<fwereade> wallyworld_, axw: bulk hooks are absolutely a good idea but one we always held off from in the past (and as you know I favour a something-changed approach for charms that reach a certain threshold of complexity)
<fwereade> wallyworld_, axw: so I'd prefer us to stay consistent with the existing approach for now and have one invocation per storage instance as we have one invocation per remote unit
<wallyworld_> ok, will do
<wallyworld_> it's not per remote unit that i'm talking about - it's per storage instance
<wallyworld_> a unit can require > 1 storage instances
<wallyworld_> these may become attached and available on the host machine prior to the attached hook firing
<wallyworld_> hence if we know of the many, we could pass them all into the hook for that unit, and a single api call made to get the storage instance details
<wallyworld_> but, the preference is for one id per hook
<wallyworld_> so i'll change it
<anastasiamac> axw: the RB shows 1 issue against my PR but all issues are resolved and 1 dropped :P
<anastasiamac> axw: RB lies?... :D
<axw> anastasiamac: *shrug*  I am reviewing now
<anastasiamac> axw: oh Awesome!! thnx :) didn't want to pull u from dina...
<axw> nah, finished
 * dimitern is back
<axw> anastasiamac: done. almost there, a couple more things (sorry)
<anastasiamac> axw: thnx for review. will address it tomorrow..
<axw> anastasiamac: no worries, get some sleep :)
<anastasiamac> axw: sleep? what's that? :P
<axw> fwereade: thanks for the comments. I'm looking into changing the storage stuff to an operation now, which I think will clean things up a bit in my code. if I do go ahead with that, I'll probably do a followup for the relation code when time's not so tight
<fwereade> axw, yeah, you don't have to do that for relations -- it's just if you're moving the one field please send relations via the same path if and when it's ~trivial to do so
<jw4> I've noticed a couple review comments in the last week or so about copyright headers - do we have a documented standard way to update copyright headers?
<jw4> 2013,2014,2015?  2013-2015?  replace all with 2015? etc.
<perrito666> jw4: new files get the current year
<jw4> perrito666: makes sense - how do we update existing files though
<perrito666> jw4: no clue
<perrito666> I would say you are right n-m
<perrito666> meh, vivid broke my skype
<axw> jw4: first year of publication
<axw> documented... no. there was a juju-dev thread on this a while ago
<jw4> axw: what is the first year of publication?  the year the file was created, or the year a new function was added, etc. etc. -
<jw4> axw: was there a conclusion on the mailing list? I'll have to go back and find it I guess
<axw> jw4: good question. this is the point I say IANAL ;)
<axw> jw4: *I* just put the year the file was created
<axw> others put the range
<axw> some put all the years with commas, but I think that is discouraged
<jw4> axw: heh - I see.  I only ask because I've noticed an uptick on review comments giving different guidance
<jw4> expected in January I guess
<hazmat> perrito666, ericsnow minor ux feature bug 1414021 for backup downloads
<mup> Bug #1414021: Send size in download backup to allow for progress indicator <juju-core:New> <https://launchpad.net/bugs/1414021>
<perrito666> hazmat: tx
<hazmat> what version of juju is it that starts having the /environment/:uuid/endpoints .. trying to do some auto negotiation in the client
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1414016
<sinzui> dimitern, sorry I think your last commit broke local provider lxc jobs for trusty, vivid, and utopic.
<sinzui> dimitern, bug 1414016
<mup> Bug #1414016: Local-provider lxc Failed to create lxc_container <ci> <local-provider> <lxc> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1414016>
<dimitern> sinzui, oh boy, looking - thanks
<TheMue> dimitern: thx for review
<dimitern> sinzui, can I get some more info for that bug? /var/lib/juju/containers/* and /var/lib/juju/removed-containers/*, as well as /var/lib/lxc/juju-*/config and /var/lib/lxc/jenkins-*/config ?
<dimitern> TheMue, np
<sinzui> dimitern, not at moment, 1.22-beta1 has started testing
<sinzui> I will visit a machine and try to run what was left in place
<dimitern> sinzui, also these lxc jobs should all be running with logging-config: juju.container=TRACE
<dimitern> sinzui, ok, that'll work
<sinzui> dimitern, I sent a email with the logs
<dimitern> sinzui, thanks!
<dimitern> sinzui, ok I think I understood the problem and started to work on a fix
<sinzui> thank you dimitern
<hazmat> hmm.. the charms facade is enabled in trunk, and is returned in login info, but invoking any of the Charms facade methods gets 'Error': u'unknown object type "Charms"'
<hazmat> anastasiamac, is the charms facade active? feel like i'm missing something
<sinzui> perrito666, katco voidspace : I need help triaging this bug. It isn't clear why juju failed or if the issue is in the cloud image or network. https://bugs.launchpad.net/juju-core/+bug/1413752
<mup> Bug #1413752: Bootstrapping to Openstack Environment fails with "no instances found" <hyperscale> <openstack-provider> <juju-core:New> <https://launchpad.net/bugs/1413752>
<alexisb> sinzui, that bug looks like it may need to be looked at by the onyx team who are gone for the weekend
<alexisb> and hazmat anastasiamac should also be on her saturday :)
<sinzui> alexisb, :/
<alexisb> sinzui, I can send tim and team a note
<sinzui> alexisb, I will subscribe tim and ask for him to give an opinion
<alexisb> sinzui, and of course if perrito666 katco voidspace and team can feel free to chime in
<cherylj> Is there anyone around who can help me with a problem with Review Board?
<katco> ericsnow: ^^
<ericsnow> cherylj: I'd be glad to help :)
<cherylj> thanks, ericsnow
<cherylj> I just pushed some changes to address some review comments, but I'm not seeing that Review Board picked them up
<cherylj> I see that it thinks there's another revision
<cherylj> But it shows nothing in the diff from this and the previous revision
<cherylj> http://reviews.vapour.ws/r/770/
<cherylj> if I go to the PR on github, I see the correct changes
 * ericsnow takes a look
<katco> cherylj: hm that should have happened automatically i think, but you can probably do "rbt post -u" to force the issue
<cherylj> Well, I saw that it created a new revision
<ericsnow> cherylj: did you refresh the reviewboard page? :)
<cherylj> But it didn't have the changes
<cherylj> Yes :)
<ericsnow> cherylj: ah, "revision" 4 is the same as 3, right?
<cherylj> yes
<katco> cherylj: this was the change? https://github.com/cherylj/juju/commit/b9404581ae177ab7fcd9d3a12505c97405e3cc28
<ericsnow> cherylj: hmm, as far as I can tell, the blank line you removed in your most recent commit is also gone in reviewboard
<katco> ericsnow: yeah i was thinking that maybe RB's diff just is incorrectly hiding that
<katco> i.e. it doesn't show it if i toggle the whitespace flag on the diff page
<ericsnow> katco, cherylj: ah, right; I believe there is a setting in RB to ignore whitespace changes (or something like that)
<katco> ericsnow: yeah it's in the bottom right of the control panel on the diff page
<cherylj> Hmm, the change was more than that.  Let me see if I can find the commit
<katco> ericsnow: but it doesn't seem to be working
<katco> cherylj: possibly? https://github.com/cherylj/juju/commit/03a14eff116a4edc28dcaa398c50141b090788f7
<ericsnow> katco: yeah, I noticed that too.
<cherylj> katco:  yes, that's the one
<katco> cherylj: ok so you just want to look at 2-4 then
<katco> cherylj: it picked up that latest revision too which is "3"
<katco> i mean 4
<katco> "3" is the one you're interested in
<cherylj> katco, 3 doesn't have the changes from the last commit
<cherylj> (the last one you linked to)
<katco> cherylj: did you make the changes in rapid succession?
<katco> those last 2 commits?
<cherylj> Yes
<katco> ericsnow: perhaps the github hook only picks the last commit? and if you do many quick ones it won't pick up some?
<katco> cherylj: you will want to install "review board tools"
<katco> cherylj: and rub rbt post -u
<katco> run even
<ericsnow> katco: I'll check
<cherylj> katco, thanks, I'll do that
<katco> cherylj: https://www.reviewboard.org/downloads/rbtools/
<katco> cd $GOPATH/src/github.com/juju/juju && rbt post -u
<ericsnow> katco: well, the hook didn't fail so something else is afoot
<katco> ericsnow: odd; can you tell if the hook grabbed all PRs, or only the last one?
<ericsnow> katco: it simply records the PR event (in this case "synchronize") with all its info
<katco> ericsnow: but in this case there were multiple PR events, right?
<ericsnow> katco: the hook then (on the RB side) makes a GH API request to pull the current diff for the PR
<ericsnow> katco: each PR event would trigger the hook separately
<ericsnow> katco: which is what happened
<katco> hrm. interesting.
<ericsnow> katco: and each time the hook did not fail
<katco> ericsnow: well, the lesson is clear. don't be too productive. ;)
<ericsnow> cherylj: when did you push those changes?
<cherylj> ericsnow, about an hour ago
<ericsnow> cherylj: okay, I see one "synchronize" event for you about an hour ago (not two or more)
<ericsnow> cherylj: so if you did two pushes in rapid succession then it may be that github merged those into one hook event
<ericsnow> cherylj: and it's conceivable that that situation resulted in some weird behavior on RB
<ericsnow> cherylj: could you try refreshing your branch somehow (to trigger the GH hook again)?
<cherylj> ericsnow, just did, and just got notification from RB...  let me look at the diff
<cherylj> ericsnow, and it worked!
<cherylj> thanks ericsnow, katco
<katco> cherylj: happy coding! ericsnow is the RB master :)
<ericsnow> cherylj: I'll chalk it up to github doing something weird that RB couldn't handle :)
<ericsnow> katco: I wouldn't go that far :P
<cherylj> hehe :)
<katco> ericsnow: hehe :)
<cherylj> I just got a call from day care and my daughter is not feeling well :(
<cherylj> so, off I go
<katco> doh
<katco> hope she feels better cherylj
<cherylj> me too, thanks katco
<mbruzek1> kateco what is the flag for the one-line juju status output again?
<katco> mbruzek1: whoops sorry, your ping didn't register. you can do short, line, or oneline
<mbruzek1> kateco this is not in juju help status
<katco> mbruzek1: hm what version? it is on mine
<voidspace> right, EOW
<voidspace> g'night all
<katco> mbruzek1: (also, no e in katco if you want to ping me)
<mbruzek1> oh sorry
<katco> mbruzek1: no worries! :)
<mbruzek1> 1.20.14-trusty-amd64
<mbruzek1> http://paste.ubuntu.com/9839141/
<katco> https://github.com/juju/juju/blame/master/cmd/juju/status.go#L34-L35
<katco> mbruzek1: i think it will be in 1.21, let me check
<katco> mbruzek1: the "oneline" flavor is in 1.21
<katco> mbruzek1: 1.22 will support all the flavors
<mbruzek1> katco: Ok I will have to update to get that.  Thanks.
<katco> mbruzek1: yeah looks like. sorry about that
<katco> alexisb: ping
<katco> sinzui: ping
<sinzui> hi katco
<katco> sinzui: hey, happy fri :)
<katco> sinzui: just a heads up, it looks like all of the status related work got missed for the release notes
<katco> sinzui: i'm adding it in, but i thought i'd let you know in case it's a process issue
<sinzui> katco, thank you. dimitern also gave me some revisions. Just update the gdoc. I also made change for the official release next week
<katco> sinzui: thanks, will do. just didn't know if there was a process issue that we might want to look at for next time. i honestly don't remember what all i worked on for v1.21
<sinzui> katco, I read my history of merged branches.
<katco> sinzui: weird, is it the status stuff not in there?
<sinzui> katco, I see your changes that you demoed in Brussels.
<sinzui> katco, http://pastebin.ubuntu.com/9839406/
<katco> sinzui: ah ok, yup that's it :)
<katco> sinzui: in addition to new filtering methods
<katco> sinzui: thanks, this is my first involvement with our release notes, so i wasn't sure how this all worked. i've updated the document.
<alexisb> katco, pong
<alexisb> whats up?
<katco> alexisb: hey it was just a question about the release docs
<katco> alexisb: i think sinzui got me straightened away (see backlog)
<alexisb> ah I see your and sinzui's conversation below, that's cool
<katco> alexisb: ty for responding :) and happy friday to you!
<alexisb> katco, cool, sorry for the delay I just got back
<katco> alexisb: no worries at all
<alexisb> and happy friday to you too!
<sinzui> alexisb, I am editing the 1.22-beta1 release notes. I thought MESS would be more complete for 1.22 https://docs.google.com/a/canonical.com/document/d/1IPmXCwujtq8zZs9mlfps7wGcBYYv4MLC_8w2bN2CetQ/edit
<sinzui> alexisb, I am wondering if the info in this doc is stale
<alexisb> sinzui, let me go look
<alexisb> sinzui, that info is correct, lots of it is there but is hidden behind feature flags and/or not exposed to the user
<alexisb> MESS is definitely not targeted for 1.22
<sinzui> alexisb, rock. then once I finish these notes, I will continue with the release of 1.22-beta1, which is already queued for release
<alexisb> sinzui, you da' man! thank you
#juju-dev 2015-01-24
<axw> wallyworld: are you around?
<wallyworld> axw: hey
<axw> wallyworld: ahoy. just letting you know I'm around if you wanted to chat
<axw> I just replied to your mail
<wallyworld> axw: i think i'm about to take dog out, i'll look at reply, ta
<axw> wallyworld: no worries. ttyl
<wallyworld> axw: we may be able to take shortcut and assign an added disk to the only unit on a machine
<wallyworld> just for demo purposes
<axw> wallyworld: hmm maybe, I'm hesitant to demo something like that tho, because then the audience may expect it to become reality
<axw> wallyworld: since it's added OOB, I think it's reasonable to expect people to take manual action to bring it into the environment
<wallyworld> ok, the manual bit was a stop gap since we won't have a storage provisioner
<axw> I see
<wallyworld> was just to demo how it could work
<axw> wallyworld: I'll see if I can implement a minimal provisioner
<wallyworld> ok, see how we go
<wallyworld> a few stretch goals in our todo list
<hazmat> is an actionreceiver always a unit?
<hazmat> ah.. atm ic. atm yes. with some notion of services supported with leader election
<anastasiamac> hazmat: ping
<hazmat> anastasiamac, pong
<anastasiamac> hazmat: it's a weekend!
<hazmat> anastasiamac, and miles to go before i sleep ;-)
<anastasiamac> hazmat: r u k with Charms client or still an issue? :D
<hazmat> anastasiamac still an issue
<anastasiamac> i've seen similar behaviour on my machine after multi-environment commits went in...
<hazmat> anastasiamac, it's weird that it shows up in the facade list, but invoking it still errors
<anastasiamac> i *think* the way facade calls are invoked has changed...
<hazmat> anastasiamac, i've tried using the multi-env api endpoint ala /environment/:uuid/api  and the default one
<hazmat> anastasiamac, i filed a bug http://pad.lv/1414086
<hazmat> anastasiamac, i wonder if its the stateshim  stuff
<anastasiamac> without onyx (and probably more specifically tim) online...
<anastasiamac> i only know how to reproduce this error if I write "juju charms list" command...
<anastasiamac> i'll probably do that now (and c if I can give u some pointers)
<anastasiamac> m not too sure if I could help before Tuesday tho
<hazmat> anastasiamac, well any facade api usage should suffice
<hazmat> k
<anastasiamac> it's long weekend here
<anastasiamac> i'll let u know of my progress either way :D
<hazmat> i'll watch the bug, i've moved on to fleshing out the other facades, it's only annotations and charms with the issue, the only delta of note i could find was the state shim.. debugging further sans debugger means print statements in registry lookup etc.
<hazmat> anastasiamac, thanks
<anastasiamac> hazmat: thnx for headsup!!!
#juju-dev 2015-01-25
<wallyworld_> axw: hey, you seen this issue before? I'm trying to land dimiter's fix to unblock trunk
<wallyworld_> Extant directories unknown:
<wallyworld_>  labix.org/v2/mgo
<axw> errm
<axw> nope
<wallyworld_> seems a bot scripting issue
<axw> we shouldn't be using labix.org/v2/mgo anymore
<wallyworld_> yup
<axw> it's not snuck into dependencies.tsv is it?
<wallyworld_> hmm, don't think so, i'll check
<axw> nope
<wallyworld_> i'll have to grep the release tools scripts, i think they print that message
<wallyworld_> i really want to get our stuff landed, otherwise will need to make a feature branch, but if i do it off trunk now, local lxc will be broken until the fix lands
<anastasiamac> menn0: waigani:ping
<menn0> anastasiamac: pong
<menn0> anastasiamac: pong (again)
<menn0> waigani: just reviewed the FindId/RemoveId change.
<waigani> menn0: thanks
<anastasiamac> menn0: sorry - day off... kids running around... m missing ur answers :D
<anastasiamac> menn0: do u know what tim changed in facade creation/calling?
<anastasiamac> menn0: related bug 1414027
<mup> Bug #1414027: Increment client facade or return agent version in login <juju-core:Triaged> <https://launchpad.net/bugs/1414027>
<menn0> anastasiamac: not off the top of my head
<anastasiamac> menn0: i think its smth in dependencies...
<menn0> anastasiamac: but I can try and figure it out if it's urgent
<anastasiamac> menn0: is tim not around today?
<menn0> anastasiamac: he's back tomorrow
<menn0> anastasiamac: has the day off today
<anastasiamac> menn0: k. i'll ask tomorrow
<anastasiamac> menn0: it's not urgent 4 me :D
<menn0> anastasiamac: ok
<anastasiamac> menn0: but hazmat has troubles calling all new clients
<anastasiamac> menn0: he can see them but when he calls them he get errors
<menn0> anastasiamac: ok. i'll try and have a look and see if this is related to Tim's changes
<menn0> anastasiamac: stop working on your day off!
<anastasiamac> menn0: m nt working :P m talking to u :D
<menn0> anastasiamac: that's work :)
<anastasiamac> menn0: just an fyi, all feature tests run fine
<anastasiamac> menn0: they all work when u do a full go install ../... from juju root...
<anastasiamac> menn0: only new facades are affected
<anastasiamac> menn0: axw saw the same behaviour on his environment (i did too)
<anastasiamac> menn0: and had to re-build from scratch...
<anastasiamac> menn0: as per above :D
<menn0> anastasiamac: ok. good to know.
<anastasiamac> menn0: thnx! that's what at least 2 cases of scotch now?.. :D
<menn0> anastasiamac: umm... I haven't done anything yet :)
<menn0> anastasiamac: but yes please :-p
<anastasiamac> menn0: 1 for CI blockers (well for unblocking them) and 2nd in anticipation :D
<menn0> anastasiamac: i'll see what I can do. onyx is in crunch mode at the moment, trying to get MESS working before the mid-cycle sprint.
<anastasiamac> menn0: understood... tanzanite's storage has a new client for sprint demo too...
<anastasiamac> menn0: just making sure that there will b no hiccups for us either :D
<menn0> anastasiamac: I will look at that bug later today
<menn0> anastasiamac: I even just assigned it to me and everything
<anastasiamac> menn0: thnx :D but really m happy to wait til tim and tomorrow
<menn0> anastasiamac: well I'll see if I can figure it out but if it turns into an epic then Tim can look tomorrow.
<anastasiamac> menn0: thnx :D
<hazmat> menn0, dropped a note on 1414027
<menn0> hazmat: just read it. I'm pretty sure that the idea is if the client knows about /environment/:uuid/api it should use that.
<menn0> hazmat: I believe that much of the API available at the root is going to go away at some point
<hazmat> menn0, you mean it should keep trying different endpoints till one works?
<hazmat> ie. does a 1.20.x env respond to that?
<menn0> hazmat: not sure. thumper has a clearer picture of how this is supposed to work.
 * menn0 checks the code
<hazmat> menn0, i'm writing an app/client that's autonegotiating from 1.18 to 1.23.. on facades that's easy
<hazmat> changing the endpoint is something else
 * hazmat waits up for the australian open to start
<menn0> hazmat: ok, I'm not exactly sure what the plan is. thumper will know more and he's back tomorrow.
<menn0> hazmat: AFAIK the existing env-specific functionality will continue to work from the root for some time (but probably only for the initial environment I guess)
<menn0> hazmat: actually, thinking some more, the env functionality at root should work for any env, as long as you log in to that env.
<hazmat> menn0, sure but without telling clients that they may be talking to a multi-state-server env, they don't know till they Info() on the env to get the version.. it should go in the jenv so they know where to connect. connecting to root only works for legacy envs (env 0) in multi-tenant state servers, and connecting to multiple state servers. juju should add some info to disambiguate that in the client-side info it stores, and avoid apps having to hit multiple endpoints (legacy, multi-style) to discover the correct one.
<hazmat> menn0, you login by username, password, not by env uuid
<hazmat> on the root
<menn0> hazmat: i'm also pretty sure you can specify the environment tag at login time
 * menn0 checks
<menn0> hazmat: actually, it's before login
<menn0> hazmat: api.Info has an EnvironTag field. if you set that you get a connection for that environment
<menn0> hazmat: if it's blank you get the initial / state server env
<hazmat> re before login, was there it weaved together prior to login in the src?
<menn0> never mind
<menn0> i've just looked further
<menn0> setting that field changed the endpoint URL used by the juju API client
<menn0> it's not passed through
 * menn0 goes to check what 1.20 does with that...
<menn0> and 1.18 even
<hazmat> s/was/where
<hazmat> k
<hazmat> menn0, so yeah.. client api compatibility across multiple versions is the question.. i think dropping it in the jenv is appropriate. a multi-tenant root would be useful, but there's no initial request outside of login, which would be a fine place to pass it as older clients would ignore it, but it's also keeping compatibility with the older endpoints
<hazmat> which is nice, but given apps want to be compatible with both, it also enshrines it as an endpoint for a little while... and of course assuming it does take env-uuid as a param on login
<hazmat> environ tag is already in the jenv
<hazmat> at least for 1.23.. dunno how far back that goes.
<menn0> still digging... code for 1.18 was in LP so it's a little harder to check
<hazmat> i'm currently doing conditional on that.. but that's a rather subtle distinction which would be better explicit, given the variety of tooling around the api.
<hazmat> ie. juju client compatibility is great but there are other api clients that need to do the same dance..  at least  js and multiple py clients wrt to common usage... so some explicit delineation and email/doc would be useful..
<menn0> hazmat: i've almost got this figured out. bear with me.
<menn0> hazmat: so environ-uuid has been in the .jenv since 1.20.0. 1.18 didn't have it.
<menn0> hazmat: and if you have environ-uuid in the .jenv then you should be using a endpoint of /environment/:uuid/api
<menn0> hazmat: otherwise you use the root
<menn0> hazmat: basically, if the jenv includes environ-uuid then the server supports it
<menn0> hazmat: thumper will probably have some more detail but the above is what I'm seeing from the code at various points in time
<menn0> hazmat: agreed that this needs to be documented better
<hazmat> menn0, cool, thanks.. although i'll note.. not all clients use jenv (landscape and gui both collect info via forms)
<menn0> hazmat: ok. and that's been fine up until now because there's only been one env and if you connect at root you just get the initial env.
<hazmat> for client-side py tools that should handle things, thanks again
<menn0> hazmat: those forms are going to need to take the env uuid as well i guess
<menn0> hazmat: i'll follow up with thumper tomorrow and confirm all this and get the docs updated
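The endpoint-selection rule menn0 lays out above (use /environment/:uuid/api when the .jenv records an environ-uuid, otherwise fall back to the root) can be sketched in Go. This is an illustrative sketch of the scheme as described in the conversation, not code from the juju client; the function name and sample UUID are made up:

```go
package main

import "fmt"

// apiEndpoint picks the API path per the scheme discussed above:
// .jenv files written by 1.20.0 and later contain environ-uuid,
// signalling that the server supports the env-specific endpoint;
// older (1.18) servers only expose the root.
func apiEndpoint(envUUID string) string {
	if envUUID == "" {
		return "/" // pre-1.20 server: root endpoint only
	}
	return fmt.Sprintf("/environment/%s/api", envUUID)
}

func main() {
	fmt.Println(apiEndpoint(""))              // prints /
	fmt.Println(apiEndpoint("some-env-uuid")) // prints /environment/some-env-uuid/api
}
```

Non-juju clients (the py and js tools hazmat mentions) would apply the same conditional when they only have form-collected connection info rather than a .jenv.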
#juju-dev 2016-01-25
<davecheney> thumper: time for a chat ?
<thumper> yep, 2 minutes?
<davecheney> mkay
<davecheney> does anyone know the way to bootstrap with --upload-tools if i am running trusty but need to bootstrap a precise environment
<davecheney> because reasons
<davecheney> going to try juju bootstrap --bootstrap-series=precise --upload-tools
<davecheney> looks good !
<davecheney> thumper: this machine just decided to die
<davecheney> http://paste.ubuntu.com/14659482/
<thumper> awesome
<davecheney> just didn't feel like getting an IP address, ¯\_(ツ)_/¯
<davecheney> 2016-01-25 01:44:35 ERROR juju.worker runner.go:226 exited "machiner": setting machine addresses: cannot set machine addresses of machine 3: state changing too quickly; try again soon
<davecheney> fucking great
<davecheney> who's changing state so quickly ?
<davecheney> this is the only agent left running ?
<thumper> ah
<thumper> this normally means an assertion is failing
<thumper> poor error message
<davecheney> i shall raise a bug
<davecheney> thumper: https://bugs.launchpad.net/juju-core/+bug/1537585
<mup> Bug #1537585: machine agent failed to register IP addresses, borks agent <juju-core:New> <https://launchpad.net/bugs/1537585>
<mup> Bug #1537585 opened: machine agent failed to register IP addresses, borks agent <juju-core:New> <https://launchpad.net/bugs/1537585>
<mup> Bug #1537586 opened: tools version checker spams public release bucket for unreleased tools <juju-core:New> <https://launchpad.net/bugs/1537586>
<davecheney> lucky(~/devel/swift-test) % juju status --format short
<davecheney> - glance/0:  (error)
<davecheney> - keystone/0: 54.253.213.57 (started)
<davecheney> - mysql/0: 54.206.217.205 (started)
<davecheney> - swift-proxy/0: 54.79.202.139 (started)
<davecheney> - swift-proxy/1: 54.79.198.107 (started) 8080/tcp
<davecheney> - swift-proxy/2: 54.253.216.163 (started)
<davecheney> - swift-storage-z1/0: 54.206.230.31 (started)
<davecheney> - swift-storage-z2/0: 54.66.53.57 (started)
<davecheney> - swift-storage-z3/0: 54.206.156.78 (pending)
<davecheney> ffs
<davecheney> failed again
<davecheney> see glance/0 has no ip address
<thumper> ugh
<thumper> that blows
<davecheney> oh, and the you cannot delete the env
<davecheney> because it goes apeshit that security groups are in use
<davecheney> and then locks your AWS account because of rate limiting
<davecheney> juju deploy, http://scorpiondagger.tumblr.com/post/100098695703/check-out-the-good-book
<davecheney> thumper: that "too fast" error
<davecheney> is very worrying
<davecheney> its coming from the txn package
<davecheney> the environment is idle
<davecheney> nothing deployed because one of the machines could not register its ip
<davecheney> but the txn package has fucked up completely
<davecheney> to be fair
<davecheney> this was on a precise instance
<davecheney> and maybe the version of mongo installed is buggered
<davecheney> but this environment is idle
<davecheney> there should be nobody adding txns to the queue, and all the participants should be attempting to clear the txn queue
<axw> wallyworld: starting on your 9-page diff now... :/
<wallyworld> sorry ;-(
<wallyworld> it's not as bad as the 17 page one i did for anastasia :-)
<axw> wallyworld: heh :p
<davecheney> OH MY GOD
<davecheney> the config-changed hook of swift does the whole apt-get update/upgrade/install dance
<davecheney> over and over again
<thumper> davecheney: the "changing too quickly" error is almost certainly a bad assertion that keeps failing
 * thumper is heading off for the day
<davecheney> it's coming directly from juju/txn.Run
<davecheney> ahh
<davecheney> ok, so it's not that the txn is not committing
<davecheney> but it fails ?
<davecheney> and txn cannot tell the difference between an assertion failure, and a general failure ?
<davecheney> \o/
<mup> Bug #1537620 opened: environs/ec2: a stopped machine will cause kill-controller to fail and blow the rate limit <juju-core:New> <https://launchpad.net/bugs/1537620>
<wallyworld> axw: thanks for review, will fix that method. got a fark load of conflicts with previous work to resolve first, sigh
<axw> wallyworld: nps
<axw> wallyworld: I'm going to assume your changes are just changing env->model, as requested, rather than going through another 7 pages of diff. I see you changed the method name.
<wallyworld> axw: yeah, correct. the method name plus more things like envUUID -> modelUUID etc
<wallyworld> and a file rename or two
<wallyworld> environdestroy.go -> modeldestroy.go
<wallyworld> axw: if i can land this one, plus the latest api alias one, plus possibly 1 more i'll do after soccer tonight, we should be good for a CI run using the 2.0 scripts
<axw> wallyworld: sorry, went afk for a while
<axw> will review the last one now
<axw> wallyworld: shipit
 * axw goes back to debugging quassel on his laptop
<voidspace> dimitern: dooferlad: frobware: grabbing coffee
<voidspace> sorry
<dimitern> oops omw as well
<voidspace> dimitern: space "name translation" has now landed on maas-spaces, it's easy code to understand but you may want to look at it to check you're happy with the algorithm
<voidspace> dimitern: very easy to change if you're not (in the discoverspaces worker)
<dimitern> will do
<frobware> voidspace, I'm back...
<voidspace> frobware: let me grab coffee :-)
<voidspace> frobware: right, omw
<mup> Bug #1537717 opened: status format should default to tabular in 2.0 <juju-core:New> <https://launchpad.net/bugs/1537717>
<mup> Bug #1537731 opened: Repeated log handling getting recursive <landscape> <juju-core:New> <https://launchpad.net/bugs/1537731>
<mup> Bug #1537731 changed: Repeated log handling getting recursive <landscape> <juju-core:New> <https://launchpad.net/bugs/1537731>
<mup> Bug #1537740 opened: sudo error on bootstrap and working on units <juju-core:New> <https://launchpad.net/bugs/1537740>
<mup> Bug #1537740 changed: sudo error on bootstrap and working on units <juju-core:New> <https://launchpad.net/bugs/1537740>
<perrito666> morning all
<abentley> sinzui: You've seen ian's mail?
<perrito666> wow lxd provider didnt even flinch at a power outage during a deploy, that is awesome
<rick_h_> perrito666: <3
<voidspace> dimitern: frobware: dooferlad: merge master onto maas-spaces again
<voidspace> dimitern: frobware: dooferlad: http://reviews.vapour.ws/r/3624/
<frobware> voidspace, is this at all different when we tried earlier today?
<voidspace> frobware: no, just updated and nothing new came in
<voidspace> frobware: but I omitted to actually turn it into a PR earlier today...
<frobware> voidspace, ok. LGTM based on my brief experiment with the merge earlier. However, ... my change into maas-spaces end of last week generated CI based unit test failures. did you get a clean unit test run?
<frobware> voidspace, the failures were transient as my next $$merge$$ was OK
<voidspace> frobware: yep, clean run - and I saw those failures last week
<voidspace> I've had the same problem before unfortunately :-/
<dimitern> voidspace, cheers - looking
<voidspace> frobware: thanks
<frobware> voidspace, locally or always in the CI run?
<dimitern> voidspace, lgtm
<voidspace> frobware: I've only ever seen it on CI run, never locally
<voidspace> dimitern: thanks
<frobware> voidspace, ditto
<mgz> blue rock guys, assess_container_networking fails on master and also doesn't clean up security groups neatly, anyone free to have a look?
<perrito666> mgz: I actually had to google that
<perrito666> mgz: that would be me?
<mgz> ...I actually don't know, you're on that team?
<mgz> I thought you were on a different rock colour
<perrito666> mgz: google says that it could be: sapphire, tanzanite, topaz, zircon, spine, aquamarine, apatite, lapis lazuli, tourmaline and a few others
<mgz> ehehhee
<mgz> too many blue
<mgz> tanzanite is more yellow
<perrito666> http://www.gemselect.com/gem-info/gems-by-color.php
<cherylj> natefinch: can you take another look at http://reviews.vapour.ws/r/3599/ ?  I took a different approach.
<mgz> okay, purple. I do not know my rocks.
<dooferlad> frobware: going to be late to our meeting. Need to pick up daughter. She may be watching for a bit!
<natefinch> cherylj: nice, really like where that ended up. Ship it.
<cherylj> thanks, natefinch!
<mgz> cherylj: looks good, thanks!
<mgz> katco: can I bug you for a small goamz review? github.com/go-amz/amz/pull/65
<katco> mgz: sure... will be after my next meeting
<katco> mgz: (hour or so)
<mgz> ta
<katco> mgz: or you know, it can be a 30 line diff and i can do it quickly :)
<katco> mgz: review up
<mgz> :D
<cherylj> mgz, natefinch - can you guys look at my master change for the x-javascript bug.  I couldn't do a cherry-pick since the code's changed, but it's pretty much exactly the same:  http://reviews.vapour.ws/r/3627/
<mup> Bug #1491688 changed: all-machine logging stopped, x509: certificate signed by unknown authority <bug-squad> <landscape> <logging> <rsyslog> <sts> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1491688>
<mup> Bug #1491688 opened: all-machine logging stopped, x509: certificate signed by unknown authority <bug-squad> <landscape> <logging> <rsyslog> <sts> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1491688>
<natefinch> cherylj: ship it!
<cherylj> thanks, natefinch!
<mup> Bug #1537880 opened: Juju missed a MAAS node transitioning to "deployed", shows it as pending <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1537880>
<natefinch> fwereade: you around for a quick question not related to tests? ;)
<natefinch> Could someone online with knowledge of linux comment on this PR?  https://github.com/natefinch/lumberjack/pull/20  Guy wants to set umask to 0002 in one of my tests, because his umask is set to 0022, which makes the test fail.  I understand what umask is and everything, but this just doesn't quite sound like the right fix, but I'm not familiar enough with Linux to know if there's a better way, or what..
<perrito666> uf, syscall umask, big nono unless you know what you are doing
<perrito666> natefinch: no, if this fails inside a temp dir you create for this purpose with the defaults of the system, it is most likely a misconfiguration on their system
<perrito666> natefinch: where is maketempdir defined?
<natefinch> perrito666: https://github.com/natefinch/lumberjack/blob/v2.0/lumberjack_test.go#L635
<natefinch> perrito666: maybe that mkdir should be 0770
<natefinch> ?
<perrito666> 700 actually
<natefinch> perrito666: right, that makes sense
<perrito666> or even 600
 * natefinch is from windows, I just keep typing 7s until it works
<perrito666> natefinch: dont make me go there and hit you with a unix manual
<natefinch> lol
<natefinch> I'm mostly joking :)
<natefinch> perrito666: thanks, I'll set it to 0600 tonight and ask the guy to retry that way.  I was pretty sure setting umask was not the answer.
<natefinch> I guess I could also set my umask to 0022 and test it :)
<katco> natefinch: what was the type you were talking about returning from the API layer now for listing resources?
<natefinch> katco: still a ResourcesResults, which has a []ResourcesResult per service, but now each ResourcesResult will also have a []UnitResources, which will have the unit tag and a []Resource
<katco> natefinch: do you have a branch up i can look at?
<natefinch> katco: I will in just a bit... I had to undo a bunch of my work from Friday, and getting something that'll compile is taking a little bit of time.
<katco> natefinch: k, please ping me when it's up
<natefinch> katco: will do
<natefinch> katco: it's get-unit-resources, the key change to the api is adding the UnitResources field here: https://github.com/natefinch/juju/blob/get-unit-resources/resource/api/data.go#L53
<natefinch> katco: I still have to fix the tests, but that shouldn't be too bad.
<katco> natefinch: ty
<natefinch> whoever made params.Entities have a field called Entities of type params.Entity should be ashamed of themselves.
<natefinch> er []params.Entity for that last bit.  gah
<perrito666> hehehe
<perrito666> that is a common pattern in params
<natefinch> it's just awful in struct literals
<natefinch> and elsewhere: args.Entities.Entities = append(args.Entities.Entities, params.Entity{
<natefinch> anyway... back in a bit
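The naming gripe above is easier to see in a stripped-down mirror of the types involved; these definitions are simplified stand-ins for illustration, not the real juju/apiserver/params package:

```go
package main

import "fmt"

// Simplified stand-ins: a struct named Entities whose only field is
// also named Entities, holding a slice of Entity. This is the shape
// that makes struct literals and appends read so awkwardly.
type Entity struct {
	Tag string
}

type Entities struct {
	Entities []Entity
}

func main() {
	// The repetition quoted above: args.Entities.Entities = append(...)
	var args struct{ Entities Entities }
	args.Entities.Entities = append(args.Entities.Entities, Entity{Tag: "unit-glance-0"})
	fmt.Println(len(args.Entities.Entities), args.Entities.Entities[0].Tag) // prints: 1 unit-glance-0
}
```

A field name like Items or a type name like EntityArgs would avoid the stutter, but changing either in the real params package would break the wire-format compatibility the API depends on.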
<mup> Bug #1537937 opened: localHTTPSServerSuite no trusty arm64 images <arm64> <go1.5> <test-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1537937>
<mup> Bug #1537937 changed: localHTTPSServerSuite no trusty arm64 images <arm64> <go1.5> <test-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1537937>
#juju-dev 2016-01-26
<cherylj> sinzui, katco, alexisb - here's the PR for reverting the rename of environments.yaml:  http://reviews.vapour.ws/r/3629/
<cherylj> rb, not pr.  whatever :P
<thumper> cherylj, sinzui: failure on machine-dep-engine appears to be bad-record-mac
<cherylj> thumper: yeah, I saw that.
<cherylj> If other tests pass, I'd say merge away
<cherylj> looks like the maas tests (and a few others) are still running / queued
<thumper> yeah... not final yet
<voidspace> dimitern: ping
<voidspace> dimitern: do you know the root cause of the trusty deploy CI failure we're seeing
<voidspace> dimitern: "dpkg-deb: error: --extract needs a target directory"
<voidspace> dimitern: it looks like a build failure
<voidspace> mgz: ping
<voidspace> we have two failures the same on xenial
<voidspace> a lxd-deploy and aws-deploy-trusty
<dimitern> voidspace, hmm let me have a look
<voidspace> dimitern: http://reports.vapour.ws/releases/3533
<dimitern> voidspace, mgz, it looks like it failed to build xenial deb package, but the job still succeeded: http://data.vapour.ws/juju-ci/products/version-3533/build-binary-xenial-amd64/build-130/consoleText
<voidspace> debian/rules:26: recipe for target 'binary' failed
<voidspace> dh_install: juju-core missing files: src/github.com/juju/juju/cmd/plugins/juju-backup/juju-backup
<voidspace> dimitern: this doesn't fail for master though it seems
<voidspace> cmd/plugins/juju-backup is also missing on master
<voidspace> maybe a CI rule has been updated for master
<dimitern> voidspace, it seems the debian/rules for master and other branches are different
<dimitern> which tells me they fixed it for master, but apparently each branch uses its own rules
<voidspace> dimitern: ok, cool
<voidspace> dimitern: so it's their fault :-)
<voidspace> mgz: ^^^
<dimitern> :)
<voidspace> dimitern: where did you check the debian rules?
<dimitern> voidspace, in the log - check for error 2
<dimitern> at the end of that "successful" job log I pasted earlier
<voidspace> dimitern: thanks
<jam> hey guys, sorry I missed the standup. I am around, just spaced the actual event.
<jam> Anyone remember who set up reviewboard? I'm noticing that Tycho has submitted stuff to Github but it doesn't seem to end up in RB as well.
<jam> https://github.com/juju/juju/pulls/4191
<voidspace> jam: it was ericsnow who set up reviewboard
<voidspace> we're still talking FWIW
<perrito666> Jam: i believe review board requires the user to link their github account for it to work
<perrito666> Btw did you see axw email regarding xdg?
<frobware> dimitern, voidspace, dooferlad: please could you take a look at my (mostly complete) bridge script that allows an explicit interface to be specified: http://reviews.vapour.ws/r/3630/
<frobware> dimitern, voidspace, dooferlad: if we're happy with the approach I'll take maas-spaces script to 1.25 and test accordingly.
<dimitern> frobware, sure, looking
<frobware> dimitern, what's the right answer for this? http://pastebin.ubuntu.com/14671696/
<dimitern> frobware, reviewed
<dimitern> frobware, re the paste - the only difference is no auto eth1 in initial, right?
<dimitern> frobware, I guess the answer should be "if it worked before the script shouldn't break it" :)
<frobware> dimitern, correct.
<dimitern> s/script/script, running it/
<frobware> dimitern, I think we should mimic what we got, hence I just pushed an additional test to cover this case.
<dimitern> frobware, +1
<frobware> dimitern, I haven't seen that in the wild, but some time ago I put a check in to only add the 'auto' stanza if there was one before. That all happened in the _bridge() functions but in the main loop we skip over interfaces and this now gets missed, hence the additional test.
<frobware> dimitern, bridge-prefix is a default so it will always have a value even if bridge-name && interface-to-bridge are specified.
<dimitern> frobware, right - the more cases we cover in tests, the better I think
<dimitern> frobware, ah, well I missed that
<frobware> dimitern, yeah, the new functionality is a little at odds with the existing behaviour.
<dimitern> frobware, feel free to ignore the second issue then :)
<dimitern> wow go 1.6b2 got smarter error messages :)
<dimitern> ../../state/watcher.go:2395: doc.MachineId undefined (type networkInterfaceDoc has no field or method MachineId, but does have MachineID)
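For context, Go convention capitalises initialisms whole, so the doc field is MachineID rather than MachineId, and go 1.6-era compilers suggest the correctly-cased near miss. A minimal sketch (the struct here is a hypothetical stand-in for the real networkInterfaceDoc):

```go
package main

import "fmt"

// Hypothetical stand-in for the networkInterfaceDoc in the message above.
// The field is MachineID; writing doc.MachineId fails to compile, and the
// compiler error now names the near-miss field it does have.
type networkInterfaceDoc struct {
	MachineID string
}

func main() {
	doc := networkInterfaceDoc{MachineID: "0"}
	fmt.Println(doc.MachineID) // prints "0"
}
```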
<frobware> voidspace, ping
<perrito666> mgz: ping
<voidspace> frobware: is maas meeting on?
<dooferlad> frobware, dimitern, voidspace: https://plus.google.com/hangouts/_/canonical.com/maas-juju-net
<voidspace> dooferlad: just grabbing coffee
<voidspace> be there in 2mins
<perrito666> does anyone know if we have specs to add relations to tabular format?
<voidspace>  destroy-environment --force is gone and destroy-controller has no force option but just hangs... :-(
<voidspace> mgz: sinzui: ping
<sinzui> hi voidspace
<voidspace> sinzui: hey, hi
<voidspace> sinzui: maas-spaces CI jobs are failing the xenial tests for us because they're looking for juju-backups
<voidspace> sinzui: and it looks like this is configured in debian rules per CI task
<voidspace> sinzui: as the same doesn't happen on master
<sinzui> voidspace: when you need to use the force in that situation, use kill-controller, (a safe force)
<voidspace> sinzui: ah, thanks
<sinzui> voidspace: I fixed packaging last Friday to not look for a plugin
<voidspace> sinzui: that worked
<sinzui> voidspace: but let me see if xenial has a different rule
<voidspace> sinzui: we have a fail on monday: http://reports.vapour.ws/releases/3533
<voidspace> sinzui: aws-deploy-trusty...xenial and lxd-deploy-xenial both failed for that reason
<sinzui> voidspace: the lxd-deploy-xenial issue was a temporary network failure
<voidspace> ah, my apologies then
<voidspace> I'm fairly sure about the other one :-)
<sinzui> voidspace: we can see next test of the branch did have a good network http://reports.vapour.ws/releases/3542
<voidspace> sinzui: ah, awesome - I hadn't seen that
<voidspace> sinzui: appreciated
<voidspace> still cursed though
<voidspace> *sigh*
<voidspace> frobware: dimitern: dooferlad: new CI run for maas-spaces http://reports.vapour.ws/releases/3542
<voidspace> frobware: dimitern: dooferlad: those tests that failed now pass (rules fixed), but we have a wily unit test failure
<voidspace> I'm running wily and I don't think that test fails for me
<voidspace> investigating
<voidspace> frobware: dimitern: dooferlad: http://reports.vapour.ws/releases/3542/job/run-unit-tests-wily-amd64/attempt/1210
<dimitern> voidspace, that looks like flakiness
<dimitern> voidspace, hmm maybe not - I can see it here on wily
<voidspace> dimitern: cool, we'll see in the next test run then
<voidspace> it's running for me
<voidspace> slowly...
<voidspace> heh
<voidspace> dimitern: so with my code that blocks logins until space discovery is complete
<voidspace> dimitern: bootstrap completes - but shows that error
<voidspace> dimitern: so bootstrap doesn't wait, it just appears to error out
<voidspace> so more work needed
<voidspace> frobware: ^^
<voidspace> dimitern: that test passes for me on wily
<voidspace> hmm, I may not be up to date though
<voidspace> dimitern: right, I've pulled in the merge from master now
<voidspace> I wonder if that is the cause
<mup> Bug #1526072 changed: Juju-deployer 0.6.0 and juju-core 1.25 - Build doesn't time out and keeps running until aborted <oil> <juju-core:New> <juju-deployer:New> <https://launchpad.net/bugs/1526072>
<dimitern> voidspace, I've run cmd/jujud/agent tests a few times - it fails some times, but no more than usual
<voidspace> dimitern: right, passes for me every time so far
<alexisb> I am seeing random failures while running tests locally due to mgo not cleaning up (on several branches)
<alexisb> is there something I need to do locally to make sure things are getting cleaned up?
<perrito666> Nope, the errors you see might be the cleanup errors after a panic from something else
<perrito666> Tmp tends to fill out with mgo stuff though so you might want to flush that
<voidspace> alexisb: there really shouldn't be!
<voidspace> I gotta go pick up the daughter from school, sorry
<alexisb> ok, just seems test on master have gotten really flaky
<alexisb> around cmd/jujud
<perrito666> Alexisb which are failing?
<alexisb> agent and unit
<perrito666> Pastebin?
<alexisb> github.com/juju/juju/cmd/jujud/agent/...
<alexisb> fails consistently
<perrito666> I did stumble upon agent failing lately
<dimitern> alexisb, try GOMAXPROCS=1 go test -check.v github.com/juju/juju/cmd/jujud/agent/... to see if it helps
<natefinch> GOMAXPROCS is not the cause.  I run with GOMAXPROCS=8 all the time, and don't have many consistent failures.
<natefinch> alexisb: what version of Go?  (run go version)
<dimitern> voidspace, dooferlad, frobware, care to review a -3080 line diff :) ? http://reviews.vapour.ws/r/3631/
<dimitern> frobware, voidspace, dooferlad, any of you still there?
<frobware> dimitern, otp
<dimitern> ok
<dimitern> just pestering for a review btw
<dooferlad> dimitern: can take a look in a moment
<dimitern> dooferlad, cheers
<frobware> dimitern, one question I have is the "risk" of landing your change in light of merging back to master. just curious...
<frobware> dimitern, I don't want to derail the opportunity to be back in master.
<alexisb> natefinch, I am running 1.5
<frobware> dimitern, ooh. Now that I've actually looked at your PR it's all deletes.. \o/
<dimitern> frobware, yeah - for a change :)
<dooferlad> dimitern: +1 delete all the old things!
<frobware> dimitern, so my question still stands though: do you see any risk for CI tests, stopping a merge into master. It's likely that master may be blocked for a few days so that we can land.
<dimitern> frobware, I'm not sure about that PR causing CI tests to fail - not that I know of, and since it's dropping code which was not running for quite a while, I think we're safe
<frobware> dimitern, k
<frobware> dimitern, just trying to avoid another 24 hour cycle on getting maas-spaces into master
<natefinch> alexisb: 1.4 is more reliable.... 1.5 has some known problems with juju still, last I heard
<frobware> alexisb, I have pre-built deb of go 1.4.3 if it helps. http://178.62.20.154/~aim/go1.4.3_1.0.0-14_amd64.deb - it installs into /usr/local/go1.4.3
<alexisb> natefinch, perrito666 this is the failure I am seeing consistently with latest master and golang 1.5:
<alexisb> https://pastebin.canonical.com/148393/
<natefinch> alexisb: yeah, pretty sure I was seeing the sockets thing with go 1.5 too
<perrito666> yup, that is go 1.5
<perrito666> I have the same issues, only on my go1.5 machine
<alexisb> yay!
<natefinch> sort of :/
<alexisb> any idea if it is addressed with 1.5.2+?
<natefinch> alexisb: I don't think it's a Go bug, I think it's a juju bug, but I haven't looked super deep into it.
<alexisb> natefinch, ok
<cmars> ashipika, were you seeing that sockets error as well? ^^ (https://pastebin.canonical.com/148393/)
<ashipika> cmars: yes, i was
<ashipika> cmars: in cmd/jujud/agent package
<ashipika> cmars: and golang 1.5.3
<perrito666> how would you call both ends of a relation ?
<perrito666> I am trying to add relations to tabular status and finding this problem
<ashipika> cmars, alexisb: https://pastebin.canonical.com/148394/
<alexisb> ashipika, that is the same failure I am seeing
<alexisb> trying with go 1.4 now just to see if it is different
<natefinch> ericsnow: is your branch pushed in a good state for me to rebase?  I just got my branch all building, tests passing, etc.
<ericsnow> natefinch: working on it
<ericsnow> natefinch: should be good now
<natefinch> ericsnow: cool
<alexisb> so it looks like go 1.4 is allowing the agent tests to pass but I am still seeing failures on master for the status test
<alexisb> anyone else seeing failures on master for the status tests?
<mup> Bug #1538241 opened: 2.0-alpha2 stabilization <blocker> <juju-core:Triaged> <https://launchpad.net/bugs/1538241>
<natefinch> dammit
<perrito666> natefinch: ?
<natefinch> machine was frozen when I came back and now it's doing that "running in low graphics mode" crap
<perrito666> natefinch: something is very wrong with some part of your computer
<perrito666> did you ever replace the battery?
<natefinch> yes I did
<natefinch> bad when I can't even get to a command prompt
<natefinch> ahh Ctrl alt f1 works at least
<katco> ericsnow: natefinch: finally finished revising the user-stories. i posted comments in the doc, but i'd like to review the diff with you two after i do lunch
<ericsnow> katco: k
<natefinch> katco - hopefully I can figure out my laptop problems before then
<katco> natefinch: if not, do you have any other machine/phone you could attend on? i need you there
<katco> natefinch: also, is this going to affect your ability to finish your card?
<natefinch> I can attend on my tablet if needed.  yes it'll screw up getting the card done, but I'm hopeful I can fix it
<katco> natefinch: k, keep me posted please.
 * katco lunches
<natefinch> hmmm less hopeful now
<perrito666> natefinch: need a hand?
<natefinch> perrito666 yes
<perrito666> natefinch: lets privmsg
<natefinch> k
<frobware> cherylj, ping; I'm around for 30 mins. anything I can do or help with regard to maas-spaces CI builds... ?
<natefinch_> it would be really nice if linux would notice that all possible video drivers have been blacklisted and give me some kind of option to de-blacklist one of them
<natefinch_> perrito666: thanks for the help... I figured it out.  I foolishly tried to enable the nvidia driver, but it evidently didn't actually take until I rebooted after ubuntu froze... but when it did that, it blacklisted the nouveau driver...
<perrito666> but why would the nvidia kernel fail?
<natefinch> perrito666: because I'm special like that
<perrito666> dont worry, this is the year of linux on the desktop :p
<cherylj> frobware: I haven't looked at any of the failures, do you guys need me to?
<cherylj> (sorry, was out for a walk before the rain starts)
<perrito666> ok, something weird just happened
<perrito666> I ran a whole test suite and didn't get spammed by mongo
<katco> natefinch: meeting time
<natefinch> katco: oops, sorry, coming
<natefinch> ericsnow: did you say you had a charm that utilized resource-get?
<ericsnow> natefinch: yeah, yours :)
<ericsnow> natefinch: check it out in my branch in test-charms/.../starsay
<natefinch> ericsnow: ahh, I see it.  linux's default directory listing order always messes me up
<perrito666> yeah, yeah, blame the OS
<natefinch> perrito666: it's not my fault they decided to go top to bottom then left to right, rather than left to right, top to bottom
<mup> Bug #1538303 opened: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
<natefinch> katco, ericsnow: trying to debug why the unit resources aren't getting returned when I call show-service-resources.  I'm not sure resource-get is working correctly.
<natefinch> katco, ericsnow: but I have to run for a while for dinner.
<ericsnow> natefinch: k
<ericsnow> katco: FYI, I have 6 patches up on RB now for the resource-get stuff
<katco> ericsnow: awesome! in a meeting, i'll try and review tonight
<ericsnow> katco: note that tests bloat the line count substantially on a couple of those patches :)
<katco> ericsnow: lol
<ericsnow> katco: I'm going to land them all in 1 PR once they all pass review
<katco> ericsnow: that's clever. good idea
<ericsnow> katco: yeah, rbt post --parent <parent> was my friend here
<ericsnow> katco: so we'll revisit our iteration backlog after standup tomorrow?
<katco> ericsnow: yes
<ericsnow> katco: sounds good
<katco> ericsnow: sorry not sure i can get to the reviews tonight. need to start dinner and the family gets home soon
<ericsnow> katco: np
<ericsnow> katco: you wrote a decent chunk of it :)
<wallyworld> perrito666: we will end standup, can't hear you
<perrito666> wallyworld: yes, apparently my 3g network sucks at upload
<wallyworld> perrito666: mongo3 will be next week's focus
<wallyworld> after we get the next alpha out
<perrito666> wallyworld: my question was, regarding master
<perrito666> my last branch, merged with master, when running the test suite
<perrito666> it doesn't spam syslog
<perrito666> that is new
<wallyworld> perrito666: rsyslog is removed from juju core, you talking about mongo syslog spam?
<perrito666> wallyworld: that is a problem from mongo not syslog
<perrito666> it was at least :p
<perrito666> you know, when running the full suite, there was a spam to all the terminals at one point
<wallyworld> sure, but what i meant was, there will be no juju core syslog spam now
<perrito666> I assumed that spam was being done by mongo, not juju
<wallyworld> i thought it was core, but not sure now. anyway, mongo2 still being used for now
<perrito666> wallyworld: k
<perrito666> bbl, I'll go try to resurrect internet
#juju-dev 2016-01-27
<thumper> it is right about now I want a debugger with local variable introspection
<wallyworld> thumper: use PyCharm :-D
<thumper> for golang?
<wallyworld> yup
<thumper> has debugging?
<wallyworld> yup
<thumper> since when?
<wallyworld> ages
<wallyworld> you use a golang plugin
<thumper> but... but... PyCharm
<wallyworld> any of the IDEs work
<wallyworld> IntelliJ etc
<wallyworld> but i like PyCharm so I can also open the Python juju client etc
<wallyworld> they all use the same plugin
<wallyworld> i don't think the official plugin has all the features, i use one that's in development which will become official
<wallyworld> jetbrains took it over
<wallyworld> axw: i have to head out to take kid to movie theatre, bbiab
<axw> wallyworld: sure, I'm half way through your review
<wallyworld> i owe you a beer :-(
<natefinch> ericsnow: you around?
<davechen1y> thumper: https://github.com/juju/juju/pull/4212
<davechen1y> ugly ass backport to 1.25
<davechen1y> needs a lot of testing as I had to bump the version of juju/utils to pick up dependencies of the juju/juju/testing package which the state/leadership tests use
<thumper> ugh
<thumper> reviewed
<thumper> fuck-a-doodle
 * thumper bangs his head on the desk
<thumper> just figuratively
<thumper> oh FFS
 * thumper takes a deep breath and dives into the code again
<davechen1y> um
<davechen1y> % juju bootstrap --bootstrap-series=precise --upload-tools -v
<davechen1y> error: flag provided but not defined: --bootstrap-series
<davechen1y> how am I supposed to test 1.25 when it doesn't understand this flag ...
<davechen1y> what's olde english for --bootstrap-series
<natefinch> --series?
<davechen1y> i'll try it ?
<davechen1y> i'll try it
<natefinch> also, juju help bootstrap ;)
<davechen1y> lucky(~/src/github.com/juju/juju) % juju bootstrap --series=precise --upload-tools -v
<davechen1y> Use of --series is obsolete. --upload-tools now expands to all supported series of the same operating system.
<davechen1y> WARNING ignoring environments.yaml: using bootstrap config in file ""
<davechen1y> WARNING This juju environment is already bootstrapped. If you want to start a new Juju
<davechen1y> environment, first run juju destroy-environment to clean up, or switch to an
<davechen1y> alternative environment.
<davechen1y> ERROR environment is already bootstrapped
<davechen1y> ^ that is a lie
<natefinch> juju destroy-environment foo --force -y
<natefinch> already bootstrapped sometimes means "last bootstrapped failed and we didn't clean up enough" :/
<davechen1y> thumper: did you find the details about the juju/txn bug we talked about at standup
<davechen1y> i'd like to look into that as a priority
<davechen1y> 'cos that is horrible
<thumper> no, sorry
<davechen1y> :(
<davechen1y> wow, 1.25 is old
<davechen1y> no --format short
<thumper> davechen1y: juju/txn ErrExcessiveContention
<axw> davechen1y: there was never a way to specify series for just the bootstrap machine before. you had to set preferred-series in environments.yaml
<davechen1y> that's the one
<davechen1y> axw: ahh
<thumper> txn.go line 151
<davechen1y> yeah, i know that
<davechen1y> so we said at standup that error X is happening, which doesn't match any of the preconditions, so the error is replaced by ErrExcessiveContention
<davechen1y> ok, i was wrong juju status --format short is implemented
<davechen1y> but returns nothing if you have no machines deployed
<davechen1y> which makes sense in a "thanks, that didn't help" way
<thumper> heh
<davechen1y> thumper: can you find the fix that you landed last time this happened ?
<davechen1y> which i imagine probably jiggled a transaction operation to not do something
<thumper> let me take a quick look
<davechen1y> ta
<davechen1y> thumper: fix confirmed, submitting http://reviews.vapour.ws/r/3645/
<thumper> ok
<thumper> quick look doesn't find it I'm afraid
<thumper> davechen1y: my test tear down is failing saying I have dirty sockets...
<thumper> davechen1y: after more looking, I can't see what's going on
<thumper> any hints on how to debug this?
<davechen1y> thumper: linkage
<thumper> wat?
<davechen1y> link for some text of what you are seeing
<davechen1y> please
<thumper> davechen1y: http://paste.ubuntu.com/14676889/
<davechen1y> ta
<davechen1y> thumper: normally when that happens mongo has crapped itself way up in the piece
<thumper> yeah, but only running one test
<thumper> and it is recurring
<thumper> so I'm guessing it is me
<davechen1y> off
<davechen1y> odd
<thumper> normally this is when you forget to close something
<davechen1y> i've always seen that in big test runs when mongo poops itself and this is just the aftershock of every suite setup and tear down
<thumper> but I have the defer close
<davechen1y> failing
<thumper> hmm... logging is getting me places... slowly
<davechen1y> thumper: this merge gets worse and worse
<davechen1y> 'cos juju/juju/testing needs a newer version of juju/utils, had to update that
<davechen1y> and juju/utils needs yaml.v2
<davechen1y> this is like the story about needing a nail to fix a horse shoe
<davechen1y> to get a penny
<davechen1y> to buy some milk
<davechen1y> to feed a pig
<davechen1y> to ride on sunday
<thumper> well, at least you can see your journey...
<thumper> my test is being torn down while running
<thumper> beat that
<thumper> I mean, WTF?
<thumper> davechen1y: right now I have *no* clue as to what's going on
<davechen1y> push it to a branch
<davechen1y> i'll take a look
<thumper> pushing
<natefinch> this is (one of the reasons) why dumping a bunch of unrelated crap into juju/utils is a terrible idea.  If it was all separate repos, you could just update the one you need, and not update everything else.
<thumper> davechen1y: https://github.com/howbazaar/juju/tree/migrate-machines
<thumper> davechen1y: although I'm pretty much done...
<thumper> davechen1y: let me point you at a few things:
<thumper> state/migration_import_test.go line 185
<thumper> that line is never reached in the test
<thumper> state/migration_import.go line 142, both the possible return places have logging, neither happens
<thumper> I get: [LOG] 0:00.255 DEBUG juju.state.import-model importing machine 0
<thumper> START: <autogenerated>:784: MigrationImportSuite.TearDownTest
<thumper> [LOG] 0:00.255 DEBUG juju.state closed state without error
<thumper> no test assertion failed AFAICT
<thumper> just start of tear down
<thumper> it would be great if you figured this out, but I'm going to sleep on it
<thumper> laters
<frobware> dooferlad, running 10 mins late...
<dooferlad> frobware: ack
<frobware> dooferlad, back...
<dooferlad> frobware: see you in a sec then
<jam> wallyworld: I just submitted bug #1538462 about "juju bootstrap --debug" containing a lot of simplestreams "garbage"
<mup> Bug #1538462: simplestreams debug content is useless (juju bootstrap --debug) <logging> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1538462>
<jam> specifically, it is serializing something and putting just the pointer content into the log
<jam> 20130404:0xc8201f1e00 20130423:0xc8201f1ec0 20131123:0xc82040c0c0 20130313:0xc8201f16e0 20131022:0xc8201f18c0 20130928:0xc8201f1980
<jam> is not particularly useful for anyone
<mup> Bug #1538462 opened: simplestreams debug content is useless (juju bootstrap --debug) <logging> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1538462>
<jam> frobware: dimitern: I got roped into a phone call for a client, so I won't be able to make standup today.
<frobware> jam: ack
<frobware> dimitern, do you have an /e/n/i I could look at from your IPv6 setup?
<dimitern> jam, np
<dimitern> frobware, not right now, but I can get one
<frobware> dimitern, not desperately urgent
<frobware> dimitern, voidspace: standup
<dimitern> sorry, omw
<dimitern> frobware, I'm typing in the HO chat
<voidspace> dimitern: we said don't do the hack with a yaml setting and take a couple of days to do it properly
<frobware> dimitern, I'm confused about the proposed PR/change for default.
<frobware> dimitern, are we going for a change that does NOT require a yaml addition?
<dimitern> frobware, voidspace, ok, let me think for a bit to see what are the options
<dimitern> and then we can decide?
<frobware> dimitern, thanks. we need to convey our plans to cheryl too
<axw> wallyworld: https://github.com/juju/juju/compare/cloud-credentials...axw:cloud-credentials-prepareforbootstrapparams  -- all tests pass, but I want to do some live tests and audit code before I propose it (tomorrow)
<axw> wallyworld: if you want to look before then, I don't expect too much will change
<wallyworld> axw: ok, will do
<dimitern> frobware, I think I have a good plan for a "proper" fix
<frobware> dimitern, great!
<wallyworld> jam: hey, you still online? what are your issues with splitting config and dat directories?
<dimitern> here's the gist of it: http://paste.ubuntu.com/14678495/
<frobware> dimitern, want to HO to discuss? I can act as a sounding board.
<jam> wallyworld: I don't have a huge problem with it, I don't know that it actually provides clarity rather than just complexity, though.
<dimitern> frobware, I need to go out now, but let's chat when I'm back in <1h
<jam> wallyworld: the data we have I wouldn't put into /usr/share, for example
<frobware> dimitern, ok, but probably after that as I have a couple of back-2-back meetings.
<jam> it's not that kind of data
<dimitern> frobware, np
<wallyworld> jam: it keeps pure config data that the user can and should look at and edit separate from stuff they should not edit lest they stuff things up. and it conforms to a standard
<wallyworld> sure not in /usr/share but ~/.config/juju for clouds and credentials
<wallyworld> and the data dir for controller cache and other runtime stuff users should not see
<wallyworld> we don't want to put in their faces files they are not expected to look at
<wallyworld> we tell them - anything in ~/.config/juju you can edit and look at
<wallyworld> that makes it easy for them
<jam> wallyworld: so can you link the actual "this is where you should put file X" stuff? The actual XDG spec just says what the variables are, but not how to pick which one
<wallyworld> jam: i don't have the link, but perrito666 probably does
<jam> wallyworld: so there are a couple of articles on "ploum.net" that talk about this, but actually his notes seem to say everything should be in DATA
<jam> as they aren't just "preferences that can be reset"
<wallyworld> everything in data makes no sense
<wallyworld> clouds and credentials etc are definitely config
<jam> https://ploum.net/184-cleaning-user-preferences-keeping-user-data
<jam> wallyworld: credentials are critical information about remote systems
<jam> not just simple config
<jam> as in, if I lose them, I'm hosed
<jam> vs "I want tabular by default" is config
<jam> the ploum article at least says that you should be able to wipe config and start fresh with default settings
<jam> and that is *not* true of credentials
<wallyworld> so perhaps the new jenv type stuff should go in "<runtime>" not data
<wallyworld> i'll discuss it some more
<jam> wallyworld: runtime is for "non-essential runtime files such as sockets and named-pipes"
<jam> and the lifetime of RUNTIME is bound to the logged in user
<jam> definitely not what we want
<jam> logging out shouldn't reset your credentials :)
<wallyworld> so there should be some way of separating user editable stuff from juju stuff
<jam> wallyworld: if users are running vim on ~/*/juju we've probably failed our CLI UX
<jam> as in, there may be thing they *could* edit, but we'd really be better off not suggesting they edit anything
<jam> (vim is not typesafe)
<wallyworld> they need to potentially edit their personal clouds.yaml
<jam> wallyworld: we should provide CLI ways to describe a cloud instead
<jam> wallyworld: vim won't do syntax checking that the information they type is valid
<wallyworld> the spec has an add-cloud command that assumes that a clouds.yaml exists already. we could allow it to be edited using a schema like we will for credentials
<wallyworld> that article does make a point that i hadn't considered
<jam> I can't say ploum is authoritative, but the XDG spec tells you *what* the variables are, but not why to use them (at least that I've found)
<wallyworld> +1 to that
<wallyworld> ok, we'll look to see how using <data> will turn out, and treat stuff there as needing to be persistent
<alexisb> fwereade, ping
<perrito666> morning
<rogpeppe1> dimitern: you might like this new proposed addition to juju/utils: https://github.com/juju/utils/pull/193/files
<dimitern> rogpeppe1, looks nice :)
<rogpeppe1> dimitern: thanks :)
<dimitern> rogpeppe1, I have a growing number of things to add to juju/utils .. in my abundant free time :)
<rogpeppe1> dimitern: :)
<rogpeppe1> dimitern: tbh i think we should split it up
<dimitern> rogpeppe1, subpackages or subrepos?
<rogpeppe1> dimitern: subpackages
<dimitern> rogpeppe1, +100
<rogpeppe1> dimitern: and it's somewhat criminal that importing juju/utils side-effects http.DefaultClient
<dimitern> rogpeppe1, oh.. GetNonValidating.. ?
<rogpeppe1> dimitern: no, grep 'func init'
<dimitern> rogpeppe1, ah, right :/
<frobware> dimitern, dooferlad, voidspace: http://reviews.vapour.ws/r/3648/ -- will do more manual testing, and testing with stakeholders before merging. but a review would help move that forward.
<dimitern> frobware, is this the same script as in master?
<frobware> dimitern, yep
<frobware> dimitern, should be 100% identical. the control happens via cli args.
<dimitern> frobware, this line: self.is_active = self.method == "dhcp" or self.method == "static"
<dimitern> frobware, reminded me of that case where we have an auto eth0 but manual, which has active, static vlan children
<dimitern> frobware, did we do anything about that?
<frobware> dimitern, do you have a node to hand that boots with that config?
<dimitern> frobware, not now, but IIRC thedac had to deal with something similar in dellstack
<frobware> dimitern, that case was covered by https://github.com/juju/juju/pull/4148
<mup> Bug #1538583 opened: manual provider add-machine failed in api-command-rename <add-machine> <ci> <manual-provider> <regression> <juju-core:Incomplete> <juju-core api-command-rename:Triaged> <https://launchpad.net/bugs/1538583>
<mup> Bug #1538589 opened: Container has different subnet <ci> <lxc> <network> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1538589>
<dimitern> frobware, right! so we're good then - LGTM I think
<frobware> cherylj, can we get a CI build for #1532167  https://github.com/frobware/juju/tree/1.25-lp1532167
<mup> Bug #1532167: maas bridge script handles VLAN NICs incorrectly <addressability> <maas-provider> <network> <juju-core:Triaged> <juju-core 1.25:In Progress by frobware> <https://launchpad.net/bugs/1532167>
<cherylj> abentley, mgz, can we get a CI run on frobware's branch ^^?
<cherylj> this is the one that backports the maas-spaces script to 1.25
<mgz> cherylj: I'll push it into the juju namespace
<cherylj> mgz: thanks!
<dimitern> frobware, can you have a look at this org draft - http://paste.ubuntu.com/14679880/
<dimitern> frobware, so it came down to a few more things than I originally thought
<frobware> dimitern, certainly looks like it.
<dimitern> frobware, I can still probably do it by standup time tomorrow, but not today
<frobware> dimitern, sure. it's important we get it right.
 * voidspace reading
<dimitern> voidspace, dooferlad, please have a look as well ^^
<voidspace> dimitern: so for maas now the juju user has to tell us what space they want as default (effectively)?
<voidspace> dimitern: what about blake's suggestion - bootstrap without constraint and then pick as default the space that we bootstrap to?
<voidspace> to me this looks more complex than we *need*, but I'm digesting it properly
<voidspace> *still* digesting it properly
<dimitern> voidspace, the idea is to use environ constraints for the default, but also (in maas) discover and save the name in state for use in unspec. bindings later
<dimitern> voidspace, so it includes what we discussed with blake
<dimitern> voidspace, i.e. I think this is the "proper way" to do it, and it's not maas specific
<voidspace> dimitern: why do we *need* a different default space and controller space specifications
<voidspace> dimitern: can't we initially have just a default one
<voidspace> dimitern: or is that just for where controller and specified default *are* different
<dimitern> voidspace, they need to be distinct in the general case, as the controller "space" is more like a set of addresses from possibly different subnets in different spaces
<voidspace> hmm, doesn't sound like a space at all
<dimitern> voidspace, but in the default path, we'll set them to be the same in maas (when you don't give us anything specific)
<dimitern> voidspace, it's a space with no subnets, just addresses
<dimitern> voidspace, well, we can "invent" subnets like 10.20.30.44/32 strictly speaking
<voidspace> dimitern: according to our network model that *isn't* a space, and it would probably be less confusing if we didn't call it a space
<dimitern> but that's only internally in juju
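[editor's note] dimitern's aside above — that juju can internally "invent" a one-host subnet like 10.20.30.44/32 for a bare controller address — can be illustrated with the standard library. A minimal sketch (the `hostMaskSize` helper is illustrative, not juju code):

```go
package main

import (
	"fmt"
	"net"
)

// hostMaskSize parses a CIDR such as the "invented" 10.20.30.44/32 and
// returns the mask's ones/bits; a /32 mask leaves exactly one host address.
func hostMaskSize(cidr string) (ones, bits int, err error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return 0, 0, err
	}
	ones, bits = ipnet.Mask.Size()
	return ones, bits, nil
}

func main() {
	ones, bits, err := hostMaskSize("10.20.30.44/32")
	if err != nil {
		panic(err)
	}
	fmt.Println(ones, bits) // 32 32: a single-address "subnet"
}
```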
<dooferlad> dimitern: my problem with it is that you are asking the user for information they may not have, and don't need.
<dooferlad> you can, at bootstrap, just get a machine and see what spaces it is in once it has been provisioned (as Blake suggested)
<dimitern> voidspace, no, we're not :) we're doing what the user said, if they did - otherwise we pick a sane default (at bootstrap) for both names
<voidspace> for maas, network.DefaultSpace is not necessarily a sane default
<voidspace> dimitern: if controller space isn't a real space, what's the use case for setting it to a real space? (and your doc says it must be a real space)
<voidspace> dimitern: what I'm really trying to say is that I don't understand why we need controller space and what it's for
<jcastro> does anyone know where the new account/creds stuff is documented? We want to show juju 2.0 to people at the charm summit but we can't seem to find the docs about like credentials.yaml and friends
<voidspace> dimitern: and I still don't think we need it
<dimitern> voidspace, because it may very well be a real space, dedicated for juju controllers only, and the user has configured it so it can be accessed from all other spaces
<voidspace> dimitern: the only place your doc mentions it being used is for selecting API endpooints
<dimitern> and know what they are doing
<dimitern> voidspace, that's what we need it for
<voidspace> your implementation as described *requires* it to be a real space (we always set it to something), whereas above you stated that it was really just a list of endpoints possibly in different spaces
<voidspace> dimitern: the implementation you describe in your doc doesn't permit that - it *must* be a real space
<voidspace> surely the only thing we *need* to overcome the existing problem is as follows (I may be wrong):
<voidspace> set the juju "default space" to the space from the primary nic of machine-0
<voidspace> if the user wants to specify a default space they can, for the moment, specify that space as a bootstrap constraint
<voidspace> later on we can add a default space setting
<dimitern> the setting is needed to store it
<dimitern> but it can be just in state, not in env.yaml
<voidspace> dimitern: right
<voidspace> dimitern: we need to store it
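[editor's note] The fallback voidspace proposes above — honour a user-supplied space constraint if given, otherwise default to the space of machine-0's primary NIC discovered at bootstrap — reduces to a simple preference rule. A sketch under those assumptions (types and names are illustrative, not juju's real API):

```go
package main

import "fmt"

// spaceForBootstrap picks the default space: the user's bootstrap constraint
// wins if set; otherwise fall back to the space discovered on the bootstrap
// machine's primary NIC, which is then stored in state for later bindings.
func spaceForBootstrap(constraintSpace, primaryNICSpace string) string {
	if constraintSpace != "" {
		return constraintSpace
	}
	return primaryNICSpace
}

func main() {
	fmt.Println(spaceForBootstrap("", "admin-api"))    // discovered default
	fmt.Println(spaceForBootstrap("dmz", "admin-api")) // user constraint wins
}
```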
<mup> Bug #1538589 changed: Container has different subnet <ci> <lxc> <network> <juju-core:Invalid> <https://launchpad.net/bugs/1538589>
<perrito666> could anyone ptal http://reviews.vapour.ws/r/3634/ ?
<natefinch> katco, ericsnow: I have to pick up my daughter from school right when our meeting is starting today... can we start 15 minutes late?  We could meet for a while beforehand, if you think we might run out of time.
<katco> natefinch: as long as you're available until the meeting is over, that's fine
<katco> natefinch: i don't anticipate this one running long
<natefinch> katco: ok
<natefinch> katco: oh, uh, I just noticed it runs through 5:30... I really kind of have to stop at 5 so I can make dinner for the kids.
<katco> natefinch: then we should start on time
<natefinch> katco: ok, I'll work it out.
<katco> natefinch: i can move it up a little as well if that works for ericsnow
<ericsnow> katco: fine with me
<katco> natefinch: but that puts you leaving in the middle of it?
<katco> natefinch: well, just let me know what works for you.
<mup> Bug #1538652 opened: MAAS doesn't assign DHCP IP to new Node and doesn't update DNS table <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1538652>
<natefinch> katco: I'll just pick up my daughter early, if I can be done at 5.
<katco> natefinch: moved it up. we should be done by 5
<mup> Bug #1538652 changed: MAAS doesn't assign DHCP IP to new Node and doesn't update DNS table <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1538652>
<natefinch> katco: sorry, I think I misunderstood.  I was going to pick her up early to make the 2:30 time.... is there a way we can do an hour now-ish and 2 hours from 3-5 my time?  I'm trying to avoid having to pick her up 45 minutes early from school.
<katco> natefinch: have you guys figured out resource-get?
<natefinch> katco: yeah, lemme give it one last test
<natefinch> is there a way to force an update-status hook to get fired?
<natefinch> hey! it worked
<katco> \o/
<natefinch> it works better when you don't forget you git stashed your previous bugfix
<natefinch> katco: can we meet for an hour now?
<katco> natefinch: sorry, i need to run some errands over lunch for the trip
<katco> natefinch: this is why things go on the calendar
<katco> natefinch: bbiab
<voidspace> frobware: dooferlad: the good news is that *only* the maas deploy tests are failing for maas-spaces now: http://reports.vapour.ws/releases/3546
<katco> natefinch: ok back. so do you want me to put the meeting back to the original time?
<katco> ericsnow: i'm going to push the iteration planning back to its original time. i think natefinch is probably picking up his kiddo atm.
<ericsnow> katco: k
<natefinch> katco: I'm here
<katco> natefinch: k, let's do this
<katco> ericsnow: going now after all
<ericsnow> katco: k
<perrito666> cherylj: ping
<cherylj> perrito666: pong
<perrito666> cherylj: I have been assigned to https://bugs.launchpad.net/juju-core/+bug/1472711 and there is mention of work you are doing, and I recall Ian telling me too that you were doing work on this and that I was to help you
<mup> Bug #1472711: MAAS node has "failed deployment", juju just says "pending" when using juju add-machine <cpec> <feature> <landscape> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1472711>
<cherylj> perrito666: give me a minute, on a stand up.  I'll let you know what's up with that :)
<perrito666> thank you
<thumper> sinzui: why do pull requests for machine-dep-engine keep getting opened and quickly closed?
<perrito666> thumper: because this is the bot https://www.youtube.com/watch?v=Z86V_ICUCD4
<mup> Bug #1538735 opened: default-series breaks --bootstrap-series <bootstrap> <ci> <juju-core:Triaged> <https://launchpad.net/bugs/1538735>
<sinzui> thumper: I open them, then github points to conflicts :(. We are blocking master so that the version the machine-dep-engine tests ran against remains valid
<cherylj> perrito666: the WIP I have for the machine provisioning status is here:   https://github.com/cherylj/juju/tree/new_provisioning_status
 * perrito666 compares to master
<perrito666> cherylj:  so, if I understand this correctly I should continue that work? (I am not all that informed as you can see :p )
<cherylj> perrito666: let me talk to wallyworld
<perrito666> cherylj: there should be no wallyworld so early :p
<cherylj> perrito666: if you are going to take it, I'd want to set up some time to do a walkthrough of the code
<cherylj> perrito666: some of it is no longer relevant as we won't be using lxc for containers
<cherylj> although, it would need to be translated into lxd since we're going to use the same broker approach
<cherylj> so n/m
<perrito666> hehe. yeah, so I see the task has some important amount of time assigned and the description is, and I am being literal: "finish observability"
<perrito666> I am sure Ian was more descriptive on our meeting but this AM I was not feeling very well and my head dropped a few packages
<cherylj> I know that feeling
<mup> Bug #1538742 opened: lxc containers stay pending when using maas provider and juju 2.0-alpha1 <cpec> <juju-core:New> <https://launchpad.net/bugs/1538742>
<jw4> hi juju peeps - is it a dumb question to ask if I can use juju to deploy a group of applications and databases through mesos/marathon?  Seems like juju and marathon/mesos are subtly different enough that there could be good 'synergy' there?
<jw4> thanks lazypower
<perrito666> cherylj: so I just synced with wallyworld lemme know when you have a moment
<lazypower> np o/ :D
<cherylj> perrito666: sure, give me a couple minutes to finish a review
<marcoceppi> helloooooooooooooo core
<marcoceppi> cherylj: is virtual services on the roadmap for 2.0?
<alexisb> marcoceppi, no
<marcoceppi> ok :(
<alexisb> thumper, ping
<cherylj> waigani: ping?
<waigani> cherylj: pong
<cherylj> hey waigani, would you be able to rebase machine-dep-engine?  This should be the last time as we've blocked master
<perrito666> nice table tennis game going on here
<waigani> cherylj: sure
<cherylj> thanks, waigani!
<cherylj> waigani: also let me know if it ends up being trivial.  we may be able to skip the test run if it was
<waigani> cherylj: okay, will do
#juju-dev 2016-01-28
<davecheney> suddenly, https://github.com/juju/juju/pull/4212
<waigani> yeah, just hit the same :/
<davecheney> i guess it's time to apply all those security updates i'd been avoiding
<thumper> fark...
<thumper> who would have thought that the design on state/status.go would come back and bite?
<thumper> // statusDoc represents a entity status in Mongodb.  The implicit
<thumper> // _id field is explicitly set to the global key of the associated
<thumper> // entity in the document's creation transaction, but omitted to allow
<thumper> // direct use of the document in both create and update transactions.
<thumper> gee, we omit a vital piece of information to make a few other places of code easier...
<thumper> I can't read all the status docs at once because we can't link the doc to the source
<thumper> stupid
<thumper> I'll have to read out as maps
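[editor's note] The workaround thumper lands on — reading the status documents out as generic maps so the `_id` (the owning entity's global key) survives, since `statusDoc` itself omits the field — looks roughly like this. Illustrative only; the real code reads mgo/bson documents, not plain maps:

```go
package main

import "fmt"

// globalKeys extracts the _id (entity global key) from status documents read
// as raw maps. Unmarshalling into the statusDoc struct would lose this link,
// because the struct deliberately omits _id to allow reuse of the document in
// both create and update transactions.
func globalKeys(rawDocs []map[string]interface{}) []string {
	keys := make([]string, 0, len(rawDocs))
	for _, doc := range rawDocs {
		if id, ok := doc["_id"].(string); ok {
			keys = append(keys, id)
		}
	}
	return keys
}

func main() {
	docs := []map[string]interface{}{
		{"_id": "m#0", "status": "started"},
		{"_id": "u#mysql/0", "status": "active"},
	}
	fmt.Println(globalKeys(docs))
}
```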
 * thumper looks for that pic he saw davecheney use
<davecheney> google "burning bag of shit gif"
<thumper> https://twitter.com/davecheney/status/691479334867988480
 * thumper throws in the towel and eats lunch
<axw> wallyworld: when you can, can you please take a look at http://reviews.vapour.ws/r/3654/
<axw> working on updating the bootstrap code now
<wallyworld> sure, i'll just finish something
<davecheney> is github still down ?
<thumper> it was
<thumper> I'm finding it up again
<davecheney> thumper: https://github.com/juju/juju/pull/4219
<davecheney> attempt number 2 to backport the leadership worked fix
<davecheney> i modified the tests to work with the testing utility packages that exist on 1.25
<davecheney> which limited the scope of the backport
<thumper> k
<thumper> davecheney: what was the change between this backport and the other?
<thumper> wow... first clean run through all the tests for a while with no intermittent failures
<mup> Bug #1538868 opened: kill-controller failed: sudo: switch: command not found <ci> <kill-controller> <local-provider> <regression> <juju-core:Incomplete> <juju-core api-command-rename:Triaged> <https://launchpad.net/bugs/1538868>
<waigani_> cherylj: merge done. Very non trivial, sent you an email.
<cherylj> waigani_: thanks.  If you can merge your changes into the machine-dep-engine branch, it'll be queued for another CI run.
<waigani_> cherylj: okay will do
<cherylj> thanks, waigani_,  I appreciate the help.  We've been trying to get that branch merged for days!
<waigani_> cherylj: Yep it's the syncing hell. Not a nice place to be.
<cherylj> I feel bad for everyone who has to rebase after the api-command-rename branch lands
<cherylj> that's going to be absolutely awful
<waigani_> yeah true ... I should get a move on with these workers!
<waigani_> ;)
<natefinch> damn I hate that we sling around IDs as strings everywhere.  Just wasted multiple hours because I swapped a serviceID and a unitID.  We even *have* type-specific Tags to avoid that :/
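[editor's note] natefinch's complaint — a service ID and a unit ID swapped silently because both are bare strings — is exactly what distinct tag types (juju has type-specific tags in the `names` package) prevent at compile time. A minimal self-contained sketch of the idea, with illustrative types rather than juju's real ones:

```go
package main

import "fmt"

// Distinct types for each kind of identifier: passing a ServiceTag where a
// UnitTag is expected becomes a compile error instead of a multi-hour bug.
type ServiceTag struct{ Name string }
type UnitTag struct{ Name string }

// unitStatus accepts only a UnitTag, so a service ID can't sneak in.
func unitStatus(u UnitTag) string { return "status of " + u.Name }

func main() {
	svc := ServiceTag{Name: "mysql"}
	unit := UnitTag{Name: "mysql/0"}
	_ = svc
	// unitStatus(svc) would not compile: the type system catches the swap.
	fmt.Println(unitStatus(unit))
}
```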
 * thumper is about to bootstrap for the first time in freaking ages
<thumper> will try lxd first
 * thumper wonders what he needs
<natefinch> thumper: go 1.3+, the lxd apt package.... and there's a couple manual steps before bootstrap
<thumper> got go
<thumper> got lxd
<thumper> what are the manual steps?
 * thumper wonders why we need manual steps
<davecheney> thumper: i modified leadership/fixture_test.go to work with the older version of the juju/testing.Clock type
<thumper> davecheney: is that it?
<davecheney> I have to modify testing.Clock as well to fit the test
<davecheney> yup
<thumper> shipit (when unfrozen)
<davecheney> someone refactored testing.Clock after 1.25
<thumper> hazaah
<davecheney> so I just applied as much of the refactoring to the 1.25 testing.Clock type to get the tests to pass
<natefinch> thumper: the steps are spelled out in the environments.yml if you juju init.... but assuming you already have an environments.yml.... if you haven't logged out since installing lxd, you have to run newgrp lxd ... that's just lxd setup
<natefinch> thumper: and then -
<natefinch> lxc remote add images images.linuxcontainers.org
<thumper> I've read the help
<natefinch> lxd-images import ubuntu --alias ubuntu-trusty
<thumper> I'm wondering why the lxd provider doesn't import the images by itself
<davecheney> hold your breath people, i'm hitting merge
<natefinch> thumper: *shrug*  it's kinda slow, but yeah, it would be nice to remove any manual steps, especially if we expect this to replace the local provider
<thumper> which I thought we were doing for 2.0
<natefinch> I have not heard anything concrete about removing local for 2.0
 * thumper bootstraps his first lxd env since 2.0
<cherylj> natefinch: yes, we are going to be removing the local provider for 2.0
<natefinch> cherylj: awesome.  I hope moonstone isn't on the hook to do it.
<cherylj> ha, I think the pleasure is going to tanzanite
<natefinch> as much as I'd love to rip out its beating heart, we're overcommitted just with resources
 * davecheney cue dramatic chipmunk
<natefinch> we were overcommitted on resources BEFORE we lost wayne :/
<thumper> natefinch: FYI, everyone is overcommitted
<thumper> sometimes I think the powers that be think we work best that way
<thumper> when the opposite is in fact true
<natefinch> thumper: I'll let you know when I find a software company that doesn't continually overcommit.
<thumper> hmm...
<thumper> my attempt at getting a new environment with lxd is struggling
<thumper> particularly adding machines to another environment
<thumper> bah humbug, recreating with debug
<axw> thumper wallyworld: do you recall the outcome of discussion at oakland about removing the live tests?
<wallyworld> i think many were in facor
<wallyworld> favour
<wallyworld> since we have CI
<wallyworld> and they are only for aws and openstack
<axw> but no actual decision
<wallyworld> um, not that i recall, but i reckon jfdi
<axw> I'll send an email to juju-dev saying I'm doing it unless there's strong objections
<thumper> hmm...
<thumper> this time I am getting new machines
<thumper> I wonder what happened last time
<wallyworld> axw: keep the openstack test doubles though, just remove the actual live tests against a real cloud
<axw> wallyworld: what does that mean? there's "local live", which runs the live tests against doubles
<wallyworld> there's 3 types of test: local, local live, and live
<wallyworld> i think we can remove the live bits
<wallyworld> the setup that runs the tests against an actual cloud
<axw> wallyworld: so you want to keep local live? I don't think there's anything to gain from removing just the live ones then
<wallyworld> from memory, the suites use a fixture which embed the same tests to be run either locally against doubles or live
<wallyworld> if we remove the local live ones, there's nothing to replace them afaik (for openstack say)
<thumper> hehehe
<thumper> debug-log is so nice now
<davecheney> boom
<davecheney> merges pull request; drops mic
<wallyworld> axw: so other providers would have been written with more unit test coverage, but openstack relied on the local / live test fixture i think
<wallyworld> but i could be wrong, would need to check
<axw> wallyworld: bleh, I'll leave it. I'm trying to avoid having to touch the live tests with credentials bits, unavoidable if they have to stay for local live
<axw> wallyworld: the live tests aren't particularly exhaustive, and openstack at least has a bunch more local-only ones... but I cbf right now
<wallyworld> axw: ok, i am just being cautious about removing test coverage. the actual live tests against a real cloud aren't run by default, only if a flag is passed to go test. but it would be nice to clean that stuff up at some point
<davecheney> really 1.25 is going to live on for another 2 years
<wallyworld> i know the openstack tests use hooks to test various failure scenarios
<davecheney> it's a tire fire
<wallyworld> and deleting the tests will result in a loss of that coverage
<thumper> and here is an exported model http://paste.ubuntu.com/14686150/
<thumper> for two machines...
<thumper> I know it doesn't look a lot
 * thumper is done for the day
<mup> Bug #1538742 changed: lxc containers stay pending when using maas provider and juju 2.0-alpha1 <cpec> <juju-core:Invalid> <https://launchpad.net/bugs/1538742>
<wallyworld> axw: hey, if you get a chance at some stage today, there's a couple of PRs related to the api-command-rename branch. one small, one large. i hope that's the last major work that's needed besides smaller cleanup. the good news is that on what's landed to date, CI is happy
<axw> wallyworld: ok. trying to get tests for bootstrap finished atm, will see how I go
<wallyworld> sure, np. ty
<wallyworld> i have soccer in about an hour anyway
<voidspace> frobware: dooferlad: dimitern: another master merge http://reviews.vapour.ws/r/3661/
<dimitern> voidspace, cheers - looking
<dimitern> voidspace, reviewed
<voidspace> dimitern: thanks
<dimitern> voidspace, jam, fwereade, standup?
<voidspace> dimitern: sorry, omw
<mup> Bug #1417526 changed: environs/bootstrap: EnsureNotBootstrapped should not rely on StateServerInstances <azure-provider> <bootstrap> <juju-core:Won't Fix> <https://launchpad.net/bugs/1417526>
<axw> wallyworld: I'm not likely to be able to finish updating the bootstrap tests, but if you want to see what's there already, it's at https://github.com/juju/juju/compare/cloud-credentials...axw:cloud-credentials-bootstrap2
<axw> wallyworld: I've updated ec2, openstack, gce, and lxd so far
<axw> wallyworld: other providers will be done in a followup
<wallyworld> axw: ty, we can use what works to put the demo together and follow up with proper tests
<perrito666> morning
<voidspace> dooferlad: ping
<voidspace> dimitern: ping
<voidspace> dimitern: dooferlad: unping
<dimitern> :)
<mup> Bug #1539116 opened: TestManageEnvironRunsPeergrouper panics <intermittent-failure> <panic> <juju-core:Triaged> <https://launchpad.net/bugs/1539116>
<mup> Bug #1458585 opened: TestCommand fails <ci> <gccgo> <go1.5> <intermittent-failure> <regression> <test-failure> <wily> <xenial> <juju-core:Incomplete> <juju-core 1.23:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1458585>
<cherylj> hey frobware, I see you got some feedback on your changes for bug 1532167
<mup> Bug #1532167: maas bridge script handles VLAN NICs incorrectly <addressability> <maas-provider> <network> <juju-core:Triaged> <juju-core 1.25:In Progress by frobware> <https://launchpad.net/bugs/1532167>
<cherylj> think it will be hard to address?
<cherylj> hey thomnico - were you able to get everything up and running for your manual provider?
<thomnico> cherylj, yes I figured it ou
<thomnico> out
<cherylj> great!
<thomnico> but this is buggy
<thomnico> and I had another alert
<cherylj> thomnico: I'm going to try to bootstrap with a trusty machine and see if I run into the same issue you mentioned
<thomnico> they use 10.0.3.0 as an actual network .... which conflict with default lxc
<cherylj> I was using wily
<thomnico> the issue is really cleaning after destroy
<cherylj> ah, yeah
<thomnico> even with systemctl I had to clean up manually for being able to restart
<thomnico> but I got unblocked in the morning
<cherylj> ok, good
<thomnico> logs are not really telling what check failed .. so I had to dig into the code
<thomnico> I hope you have enough to reproduce .. I am not blocked .. many thanks for helping
<cherylj> thomnico: you can also specify a different subnet for the lxc bridge
<thomnico> in the environment ?
<thomnico> got lost looking for it :)
<cherylj> thomnico, no just on the machines
<thomnico> indeed that is how I fix it
<cherylj> you can specify a different bridge device, but you'd still need to make sure it was using the right subnet
<thomnico> didn't want to bother you if I could find out how to do things myself
<thomnico> ok so it is the expected behavior, great
<thomnico> in a nutshell, if a destroy-environment run (not --force) can clean the manual bootstrap machine, that will solve a lot of headaches
<cherylj> yes, I've run into that myself and I saw you mentioned the bug in your email
<thomnico> and a CI test for manual: bootstrap, destroy, bootstrap again
<thomnico> with the NFV realm we'll see the need for this more often
<thomnico> so it worth it imho
<thomnico> and try a machine without systemd :)
<thomnico> please
<thomnico> should I add the info in mail in the bug report ?
<thomnico> have to go back to lab.. respond to mail
<thomnico> if needed
<thomnico> thanks for all
<mup> Bug #1539156 opened: juju does not ignore 'bootstrap-host' when bootstrap --to specified <bootstrap> <ci> <juju-core:Triaged> <https://launchpad.net/bugs/1539156>
<mup> Bug #1539167 opened: Intermittent test failure "Test left sockets in a dirty state" (github.com/juju/juju/cmd/jujud/agent)  <juju-core:New> <https://launchpad.net/bugs/1539167>
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: I'm really struggling to test the MachineAgent changes that wait for space discovery to complete
<voidspace> dimitern: ideally I'd like to start a machine agent with the discoverspaces worker patched out
<voidspace> dimitern: assert that the discoverspaces worker was created
<voidspace> dimitern: connect to state as a client and confirm that the login fails
<voidspace> dimitern: then calll the "setDiscoverSpacesComplete" function that is given to the worker
<voidspace> dimitern: confirm that login now succeeds
<voidspace> dimitern: as far as I can tell in cmd/jujud/agent/machine_test.go the creation of all singular workers is patched out with the singularRecord
<voidspace> dimitern: so I'm failing at the first hurdle - I can't create a MachineAgent with job ManageEnviron that will actually attempt to create a discoverspaces worker
<dimitern> voidspace, yeah, well that part of the code changed a lot I think when the dep-engine stuff landed
<dimitern> voidspace, which was supposed to make testing things like this easier
<voidspace> dimitern: as far as I can tell the login limiting (for upgrades or restore) isn't tested at the MachineAgent level
<dimitern> voidspace, hmm
<dimitern> voidspace, well, can you reasonably test it in isolation?
<voidspace> dimitern: at the moment my conclusion is that I can't
<voidspace> dimitern: maybe I can test through the api
<voidspace> dimitern: I'll look and see - maybe that's how it's tested for upgrade/restore
<dimitern> voidspace, possibly
<dimitern> voidspace, it should be easier to test that client logins are disabled once you start the worker?
<voidspace> dimitern: only if it's started from the MachineAgent, which is where the disabling is done
<voidspace> dimitern: and I can't yet create a MachineAgent that will actually start the worker
<voidspace> it's the MachineAgent that limits logins, not the worker
<voidspace> dimitern: commonMachineSuite, which all the machine agent tests use patches out the SingularRunner
<voidspace> dimitern: so none of the workers started that way are actually created
<dimitern> voidspace, can you test it in a feature test?
<voidspace> dimitern: yes, probably
<dimitern> voidspace, e.g. the way we test that end-to-end "juju space list" works
<voidspace> dimitern: I'll look at that
<voidspace> dimitern: thanks
<dimitern> in featuretests
<voidspace> hmmm... the singularRunnerRecord that has a fakeSingularRunner looks like it might have a real runner underlying it
<voidspace> which is odd, because my start function *isn't* being called
<voidspace> dimitern: I'll do a bit more investigation before I give in and go to a feature test
<dimitern> voidspace, cheers
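[editor's note] The behaviour voidspace is trying to test above — the machine agent refusing client logins until the discoverspaces worker calls its completion callback — boils down to a gate flipped by `setDiscoverSpacesComplete`. A minimal sketch of that gating; all names are illustrative, not juju's actual API (and the real agent coordinates this across workers, not a bare struct):

```go
package main

import "fmt"

// loginGate models the login limiting done by the machine agent (not the
// worker): client logins stay disabled until space discovery reports done.
type loginGate struct{ discoveryDone bool }

func (g *loginGate) allowClientLogin() bool { return g.discoveryDone }

// setDiscoverSpacesComplete is the callback handed to the discoverspaces
// worker; invoking it re-enables client logins.
func (g *loginGate) setDiscoverSpacesComplete() { g.discoveryDone = true }

func main() {
	g := &loginGate{}
	fmt.Println(g.allowClientLogin()) // false: discovery still running
	g.setDiscoverSpacesComplete()
	fmt.Println(g.allowClientLogin()) // true: logins re-enabled
}
```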
<mup> Bug #1539167 changed: Intermittent test failure "Test left sockets in a dirty state" (github.com/juju/juju/cmd/jujud/agent)  <juju-core:New> <https://launchpad.net/bugs/1539167>
<mup> Bug #1539190 opened: manual provider requires bootstrap-host, even when --to specified <bootstrap> <manual-provider> <placement> <juju-core:Triaged> <https://launchpad.net/bugs/1539190>
<mup> Bug #1539216 opened: bootstrap rootdisk full - need to compress rotated logs <juju-core:New> <https://launchpad.net/bugs/1539216>
<natefinch> heh, I've had a couple requests for compressing the rotated logs in Lumberjack.  Guess I should spend some time on it ;)
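[editor's note] The feature natefinch mentions — compressing rotated logs in lumberjack — is at heart a gzip round trip over the rotated file. A self-contained sketch of that core step, using in-memory byte slices instead of files on disk so it stands alone (lumberjack itself would operate on the rotated file and then remove the original):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// compressLog gzips a rotated log's contents.
func compressLog(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// decompressLog reverses compressLog, to show the data survives intact.
func decompressLog(gz []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(gz))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}

func main() {
	logData := []byte("2016-01-28 INFO juju.worker started\n")
	gz, err := compressLog(logData)
	if err != nil {
		panic(err)
	}
	out, err := decompressLog(gz)
	if err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(out, logData))
}
```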
 * thumper afk for ~30 min
<perrito666> wallyworld: brt
<wallyworld> axw: just you and me for standup, wanna do it now a bit early?
<axw> wallyworld: omw
<cherylj> wallyworld: the fix we talked about for bug 1538868 is wrong
<mup> Bug #1538868: kill-controller failed: sudo: switch: command not found <ci> <kill-controller> <local-provider> <regression> <juju-core:Incomplete> <juju-core api-command-rename:Triaged> <https://launchpad.net/bugs/1538868>
<cherylj> I'll fix it
<wallyworld> cherylj: only if you have time, i am happy to do it as it will need to go into the api-command-rename branch
<cherylj> It'll just take me a minute.  I'll do it now
<cherylj> wallyworld: http://reviews.vapour.ws/r/3673/
<wallyworld> cherylj: awesome, ty, will look after current meeting
<cherylj> wallyworld: k, I'm going to have dinner, so merge it for me if you ship it!
<wallyworld> cherylj: you awesome ty
#juju-dev 2016-01-29
<thumper> oh shit
 * thumper sadface
<thumper> wallyworld: juju run seems to be broken for hosted environments
<wallyworld> yay
<wallyworld> in master?
<thumper> I recall having a different system identity created for the hosted environments
<thumper> yes master
<thumper> I have a hosted env called to-move
<thumper> juju ssh -e to-move 0
<thumper> works
<thumper> juju run -e to-move --machine 0 'ls'
<thumper> does not
<wallyworld> damn
<axw> wallyworld: probably not the most thorough review I ever did, but your PR is good to go
<wallyworld> axw: tyvm, a lot of it is boilerplate - moving stuff around rather than too many renames. we can fix things if we find them
<thumper> first migration demo is a success
<thumper> all seems to work exactly as designed
<thumper> as long as you only have simple machines and no services or storage
<thumper> or networks
<thumper> :)
<thumper> and manual hackery to update agents config
 * thumper wonders why the 2UTC call is at 3UTC
<axw> wallyworld: how would you feel about me adding the -o flag at least temporarily? existing bootstrap tests depend on certain config set (e.g. default-series, agent-stream)
<wallyworld> axw: sure, go for it
<axw> wallyworld: I can back it out once we have a --config
<wallyworld> np
<wallyworld> axw: we'll just let rick know it's a stop gap
<axw> wallyworld: I wasn't planning on documenting it, but can do
<wallyworld> axw: if it's not needed for the demo, then don't :-)
<axw> wallyworld: no, just for the unit tests
<wallyworld> ok, np, no doco needed
<cherylj> can I get a quick review?  http://reviews.vapour.ws/r/3676/
<cherylj> It's adding some logging in an attempt to track down some recent CI failures
 * thumper is done done
<thumper> laters
<cherylj> wallyworld: fwiw, you'll probably want to pull that PR into the api-command-rename branch as well, as it is also seeing the connection is shutdown failures.
<wallyworld> cherylj: i do need to merge master, will do that after the latest stuff lands
<cherylj> wallyworld: could you take a quick look at that RB?  http://reviews.vapour.ws/r/3676/
<cherylj> oh
<cherylj> n/m
<wallyworld> cherylj: i just hit merge :-)
<cherylj> You'll need to JFDI it
<cherylj> I can do it
<wallyworld> ah yeah
<axw> wallyworld: finally, http://reviews.vapour.ws/r/3677
<wallyworld> ok
<wallyworld> axw: one possibly stupid question to answer
<axw> wallyworld: ta
<wallyworld> np
<axw> wallyworld: which one's the possibly stupid question?
<wallyworld> axw: so many to choose from :-) the one about why region is set to cloud name
<axw> wallyworld: comment about setting region?
<axw> ok
<wallyworld> yup
<axw> wallyworld: that's a hack for lxd
<axw> wallyworld: I think we want to detect the region name instead
<wallyworld> region should be local in that case
<wallyworld> localhost
<wallyworld> and only done for lxd and not all clouds
<axw> wallyworld: sure, the lxd provider would return a region of "localhost" when asked to detect. I'll write a TODO to fix this.
<wallyworld> ta
<axw> wallyworld: well, I don't want to go special casing things like we did for the local provider
<wallyworld> yep, sorry, was just using a general statement
<axw> wallyworld: I think there's value in having detection for other things too, e.g. to pick up OS_REGION... I'll add a TODO and we can go over it in more detail later
<wallyworld> implementation tba sort of thing
<axw> yep
<wallyworld> yeah, todo is good, allows readers of code to grok what is wip etc
<axw> wallyworld: I was running tests and found out why the cmd/juju/status tests are so insanely slow. the status filtering code is doing IP resolution (and way more than it even needs to)
<wallyworld> oh dear
<axw> wallyworld: it shouldn't need to at all, since we store IPs
<wallyworld> let's add a card to fix that
<axw> yep
<wallyworld> ty
<mup> Bug #1539428 opened: cmd/juju/status: status filtering performs IP resolution of patterns <status> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1539428>
<rogpeppe1> fwereade_: any idea how a unit might not have a CharmURL (i.e. why does state.Unit.CharmURL return a bool) ?
<icey> axw: does the open stack provider have the storage hook support?
<axw> icey: yes, it supports block storage via cinder
<icey> awesome, thanks axw!
<axw> nps
<icey> axw: what version is required for that?
<axw> icey: 1.25
<axw> icey: something going wrong?
<icey> nope, just discussing what we can do with testing for the ceph charms, wanting to get some more advanced storage stuff tested axw
<axw> icey: okey dokey. FYI, we have storage support in AWS, MAAS, OpenStack, Azure, and GCE
<axw> all from 1.25 onwards
<icey> great, thanks axw!
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: soooo...
<voidspace> dimitern: I'm still working on a test in MachineSuite, in cmd/jujud/agent/machine_test.go
<voidspace> dimitern: and PatchValue isn't working!
<voidspace> dimitern: I'm doing exactly what other tests are doing (as far as I can see)
<voidspace> but with the patch in place the original function is still called (if I put a panic in the original function and run just the new test it panics)
<dimitern> voidspace, hmm - are you doing it in SetUpSuite?
<voidspace> dimitern: not in SetUpSuite, in the test
<voidspace> dimitern: https://github.com/juju/juju/compare/maas-spaces...voidspace:maas-spaces-networking-api10#diff-918e88c5445d929c38db9bf4f0a85cc8R1014
<dimitern> voidspace, looking
<dimitern> +	s.PatchValue(&newDiscoverSpaces, newDiscoverSpaces)
<voidspace> it's called from startEnvWorkers in machine.go
<voidspace> https://github.com/juju/juju/compare/maas-spaces...voidspace:maas-spaces-networking-api10#diff-f5ec9ed405cc8f3a833355afdc629bd3R1257
<dimitern> what are you patching?
<voidspace> dimitern: that's an alias for discoverspaces.NewWorker
<voidspace> dimitern: this is exactly what many other tests do for patching out worker creation
<voidspace> as far as I can tell identical anyway, I'm obviously missing something
<dimitern> voidspace, well, the func you're patching it with has the same name, so aren't you doing a no-op?
<voidspace> hah
<voidspace> dimitern: yep
<voidspace> dimitern: thank you
<dimitern> voidspace, ;)
<voidspace> the patch is applying locally, I'm shadowing it
<dimitern> voidspace, np
<voidspace> dimitern: needed another set of eyes
<voidspace> I burned more than an hour on that yesterday
<voidspace> dimitern: thanks
<dimitern> voidspace, I'll do that later today for some reviews btw
<voidspace> dimitern: heh, no problem
<dimitern> voidspace, I know I said I could use some help yesterday, but it turned out it's simpler to do as one PR than to split it artificially
<voidspace> hmmmm, no still doesn't seem like my func is being called
<voidspace> let me put that panic back in
<voidspace> ok, it doesn't panic so the fake is in place at least
<dimitern> voidspace, why not pass a channel to newDiscoverSpaces and closed it when done, instead of a mutex?
<voidspace> dimitern: I only need to wait for discovery if discovery was actually started
<voidspace> dimitern: so with a bool, it defaults to false and we only set it to true when discovery starts
<voidspace> dimitern: not sure there's a clean way of doing the same with a channel
<dimitern> voidspace, I'm trying to find a good example..
<voidspace> dimitern: I'd need a way to tell if a channel has been started but not yet closed
<dimitern> http://dave.cheney.net/2013/04/30/curious-channels
<voidspace> dimitern: frobware: dooferlad: stdup?
<voidspace> Friday meeting room
<dimitern> voidspace, you can make it buffered, so you write true to it once it starts and wait to read from it in a select later
<dimitern> voidspace, omw
<voidspace> dimitern: that doesn't sound any simpler than a bool and a mutex
<fwereade_> rogpeppe1, sorry I missed you
<rogpeppe1> fwereade_: np, i think i worked it out anyway
<fwereade_> rogpeppe1, a unit doesn't have a charm url until it knows what charm it's actually going to run because it's downloaded and verified
<rogpeppe1> fwereade_: yeah, i figured that
<perrito666> morning all
 * perrito666 visits an office and fixes their network
<voidspace> dimitern: hey, is this meeting still happening this afternoon?
<dimitern> voidspace, the one in ~15m ?
<dimitern> I think so
<voidspace> dimitern: I thought frobware and dooferlad would be invited, but they're not on the list so I assume so too :-)
<voidspace> dimitern: my test is complete and passes
<voidspace> dimitern: changing it to use a channel now, which does change the test - but shouldn't take long
<voidspace> ah, no it doesn't pass
<voidspace> grrr, it did a minute ago
<voidspace> nearly there anyway
<voidspace> ah, admin user can't log in
<voidspace> how odd
<voidspace> "invalid entity name or password"
<voidspace> have to find someone who can login
<dimitern> voidspace, right - it's more about the plan for merging into master - it's not necessary to go if you don't want to
<dimitern> voidspace, ah :) you'd likely need to also create the user you're trying to use
<dimitern> and set password
<voidspace> grrr
<voidspace> this is from s.AdminUserTag(c)
<voidspace> I assumed that user existed
<voidspace> it's probably the wrong password
<voidspace> dimitern: hah, the password is "dummy-secret"!!!
<voidspace> now it passes...
<dimitern> voidspace, \o/ sweet!
<frobware> dimitern, is there a merge meeting taking place that I should attend?
<voidspace> frobware: https://plus.google.com/hangouts/_/canonical.com/maas-spaces?authuser=0&hceid=YWxleGlzLmJydWVtbWVyQGNhbm9uaWNhbC5jb20.5fqsbuut5c9fh29v70q11o7vt4
<dimitern> frobware, it's scheduled, but I don't know more - I'm joining now
<voidspace> dimitern: frobware: PR for merge of latest master onto maas-spaces
<voidspace> http://reviews.vapour.ws/r/3681/
<voidspace> dimitern: the NetworkManager job has gone now too, right?
<voidspace> that came back across in the merge from master and I deleted it
<dimitern> voidspace, it's not gone, but it's no longer used to decide whether the networker should run or not (as it's gone as well)
<voidspace> dimitern: interestingly on maas-spaces JobManageNetworking isn't in cmd/jujud/agent/machine.go
<voidspace> ok
<dimitern> voidspace, that will take some time to read btw ... will have a look through the 15+ pages a bit later
<voidspace> dimitern: I deleted it from machine.go after the merge from master which had added it back
<voidspace> heh
<voidspace> dimitern: most of it was automatic - except cmd/jujud/agent/machine.go
<voidspace> and the networker changes which I had to redelete
<dimitern> voidspace, ok, it should be easier then
<voidspace> dimitern: if we're dropping support for maas < 1.9 then we can drop some of the legacy codepaths
<voidspace> and just fail if maas < 1.9
<frobware> dimitern, voidspace: part of me says we should understand why 1.8 doesn't work - CI-wise.
<dimitern> frobware, agreed
<dimitern> voidspace, it's too early to do that properly I think
<voidspace> understanding is good
<voidspace> but we have until monday to merge...
<frobware> voidspace, this could be just ... take maas-spaces and bootstrap on 1.8, even if its just to say "aha..."
<frobware> voidspace, were there any conflicts?
<ericsnow> natefinch: standup?
<voidspace> frobware: changes to the networker worker and api, which we've deleted (so easy to resolve)
<natefinch> ericsnow: coming
<voidspace> frobware: and changes around starting the networker in cmd/jujud/agent/machine.go
<voidspace> frobware: I just deleted all references, dimitern can check if I did it right.
<voidspace> frobware: everything alright at home - you still able to go to FOSDEM and charm conf?
<dimitern> voidspace, have you seen the changes around the machine agent?
<dimitern> voidspace, it looks like with them it's much simpler to check the order of started workers, as well as add a "discover-spaces-started-gate" like the "upgrade-steps-gate" to signal the import started
<voidspace> dimitern: I haven't
<voidspace> dimitern: I guess I should look and I'll have to change my test :-)
<voidspace> dimitern: maybe have to change how this worker is started
<voidspace> dimitern: let's get the merge in
<voidspace> dimitern: I'm taking a late lunch
<dimitern> voidspace, well, I think it will be worth it - it does look much nicer and easier
<voidspace> dimitern: great
<dimitern> voidspace, *whew*.. finished with that monstrous diff - it looks good
<voidspace> dimitern: thanks
<mup> Bug #1539656 opened: leadership dependency failures in CI deploy tests <juju-core:Incomplete> <juju-core api-command-rename:Triaged> <https://launchpad.net/bugs/1539656>
<mup> Bug #1539684 opened: storage-get unable to access previously attached devices <juju-core:New> <https://launchpad.net/bugs/1539684>
<ericsnow> natefinch: looks like I've addressed everything except the context patch review (and I'm working on that now)
<natefinch> ericsnow: cool
<natefinch> ericsnow: btw, thanks for the pointer on the new doc comments in the names package. That helps clarify things a lot... and it sounds like we agree entirely about how they should be used. I really wish there was only one string representation of a tag... having two just seems like it's asking for trouble, especially when the two functions are called Id() and String().
<natefinch> ericsnow: "new" (march of last year ;)
<ericsnow> natefinch:  :)
<natefinch> ericsnow: in the persistence layer, I was using fmt.Sprintf("resource#%s#%s#%s", serviceID, id, unitID) for the unitresource's id... but do you think we really need the service ID?  maybe just fmt.Sprintf("resource#%s#%s", unitID, id)?
<ericsnow> natefinch: given the dependence of the unit ID on the service ID, yeah, we probably don't need it
<natefinch> ericsnow: would it be evil to consolidate SetUnitResource and SetResource into a single SetResource(id, ownerID string, res resource.Resource)?
<natefinch> ericsnow: since the only difference now is the name of the function and the name of the argument
<ericsnow> natefinch: that would make it too easy to get it wrong
<natefinch> ericsnow: fair enough, I can make two exported functions, but send them to the same internal functions
<ericsnow> natefinch: the distinction still matters even if they do the same thing
<ericsnow> natefinch: sounds good
<ericsnow> natefinch: I've responded to all the context-related review comments and addressed nearly all of them
<ericsnow> natefinch: would you mind running through them real quick?
<natefinch> ericsnow: sure, I'll go look right now
<ericsnow> natefinch: I should be able to land the branch now
<natefinch> ericsnow: sweet
<ericsnow> natefinch: we can resolve the more philosophical differences once we have time to breath
<perrito666> marcoceppi: ping
<ericsnow> natefinch: BTW, just ran through a manual test and it worked correctly on the first try :)
<alexisb> ericsnow, that sounds encouraging :)
<natefinch> heh, I've been doing manual tests for a few days... other than those two bugs we found earlier in the week, yeah, it all looks good
<ericsnow> alexisb: it was working already, but we've since cleaned up the code we'd hastily written and added tests :)
<natefinch> nice that it's surviving refactoring and edits, though :)
<ericsnow> yep
<cherylj> fwereade_: do you happen to be around?
<ericsnow> natefinch: I'm going to land the patch and leave those extra issues open on the context review as a reminder
<perrito666> mm, at a status level we seem to lack information regarding if a relation is a peer or normal relation
<natefinch> ericsnow: that's fine
<natefinch> ericsnow: I'll be back tonight and will rebase my patches off of the feature branch and finish up the review changes
<ericsnow> natefinch: k
<mup> Bug #1539785 opened: lxd provider leaks resources <landscape> <juju-core:New> <https://launchpad.net/bugs/1539785>
<wallyworld> cherylj: thanks for creating that doc. the latest run has the restore tests passing. as you found, the CI scripts need tweaking and that should address some of the other failures, so that branch is looking good for next week
<cherylj> wallyworld: I am worried about this bug, though:  https://bugs.launchpad.net/juju-core/+bug/1539656
<mup> Bug #1539656: leadership dependency failures in CI deploy tests <juju-core:Incomplete> <juju-core api-command-rename:Triaged> <https://launchpad.net/bugs/1539656>
<wallyworld> cherylj: that issue is only seen in the feature branch?
<cherylj> god I've looked at too many bugs today.  Let me check
<wallyworld> there was a note in the doc that said it has been seen elsewhere
<wallyworld> if that's the case, and it's in master, then we shouldn't block the feature branch on that
<cherylj> ah, glad I wrote it down at some point, then
<wallyworld> :-) you have so much to keep track of
<wallyworld> your brain must be full
<wallyworld> cherylj: there were some leadership changes in master just recently to address another bug i can't recall right now, might be related to that
<cherylj> wallyworld: I checked that master passed with dave's changes
<cherylj> and it did
<cherylj> now I'm wondering if I was mistaken about this leadership one...  the other failure with similar symptoms have different logs
<wallyworld> cherylj: ok, will need to look closer then
<wallyworld> cherylj: so it seems that bug above is only on the maas 1.9 deploy according to the doc; the other maas failures are due to the replicaset issue
<cherylj> wallyworld: the 1.7 maas deploy also fails with that leadership problem
<wallyworld> regardless, it's weird it's only on one substrate. sort of indicates it's a timing issue
<cherylj> wallyworld: and I'm thinking the replicaset issue may be an unfortunate "working as designed"
<cherylj> wallyworld: I think that's the only place this charm is deployed in CI?  maybe?
<wallyworld> not sure, will need to check
<cherylj> sinzui, abentley mgz - is the dummy-sink-0 charm deployed to other substrates in CI tests?  or is that just a MAAS test?
<sinzui> cherylj: every substrate for every series
<cherylj> hmm
<cherylj> interesting
<sinzui> cherylj: the dummies are trivial and consistent for every series, though the windows version is definitely different
<cherylj> sinzui: what would the other tests be named?
<cherylj> I want to look at their test logs
<cherylj> gce-deployer-bundle?
<sinzui> cherylj: almost every job that is xxx-<deploy|upgrade>-*
<cherylj> ok
<wallyworld> cherylj: on the surface, i can't see how this is related to the branch per se, but it could be. the fact it's only on one substrate indicates a more general timing or other issue. will need to look closer into it
<sinzui> cherylj: deployer and quickstart and functional are not dummies
 * wallyworld afk for a bit
<cherylj> wallyworld:  here's an example of a successful deploy of that charm:  http://data.vapour.ws/juju-ci/products/version-3555/aws-deploy-trusty-amd64-on-wily-amd64/build-588/unit-dummy-sink-0.log.gz
<cherylj> for when you get back :)
<sinzui> cherylj: http://reports.vapour.ws/releases/3557 The CPC and Substrates sections are all using dummy source and sink to verify hooks trade information
<cherylj> and sinzui, wallyworld, for that manual provider connection is shut down bug, I think the changing of the replicaset is valid.  It's switching to use the cloud-local address rather than the public address.
<cherylj> it's unfortunate that it makes mongo drop the connections
<cherylj> anyways, I need to go pick up the kid
<cherylj> bbl
<mup> Bug #1539806 opened: [ARM64][LXD Provider][ 2.0-alpha1-0ubuntu1~16.04.1~juju1] kibana 'hook failed: "install"' <juju-core:New> <https://launchpad.net/bugs/1539806>
<wallyworld> cherylj: with the successful deploy referenced above in build 3555, it also works in build 3557 on maas 1.8 trusty amd64 and all other substrates, but fails only on maas 1.7. and it works on maas 1.9 whereas it failed in a previous run where the only change was to add the restore-backup alias. on the surface, I cannot see how this issue is specific to the feature branch
#juju-dev 2016-01-30
<mup> Bug #1539806 changed: [ARM64][LXD Provider][ 2.0-alpha1-0ubuntu1~16.04.1~juju1] kibana 'hook failed: "install"' <juju-core:Invalid> <kibana (Juju Charms Collection):New> <https://launchpad.net/bugs/1539806>
<cherylj> wallyworld: I am able to recreate that leadership failure on my maas
<cherylj> using the api-command-rename branch
<mup> Bug #1539873 opened: debug-log non-functional with xenial bootstrap unit <juju-core:New> <https://launchpad.net/bugs/1539873>
#juju-dev 2016-01-31
<mup> Bug #1499356 changed: all units have false hook errors after reboot <sts> <juju-core:Expired> <https://launchpad.net/bugs/1499356>
<mup> Bug #1499356 opened: all units have false hook errors after reboot <sts> <juju-core:Expired> <https://launchpad.net/bugs/1499356>
<mup> Bug #1499356 changed: all units have false hook errors after reboot <sts> <juju-core:Expired> <https://launchpad.net/bugs/1499356>
<lazypower|travel> do we have 3 instances of mup running somewhere? seems odd that mup is spamming the same state changes 3 times over a course of 9 minutes.
<mup> Bug #1540047 opened: Error in Jujuclient with Python3 <juju-core:New> <https://launchpad.net/bugs/1540047>
<mup> Bug #1540061 opened: juju 2.0 alpha + local LXD provider: dns hostname inconsistency <juju-core:New> <https://launchpad.net/bugs/1540061>
<mup> Bug #1540076 opened: Storage Hooks before Install <juju-core:New> <https://launchpad.net/bugs/1540076>
<axw> wallyworld: we can remove the "show-cloud" card from backlog right?
<wallyworld> axw: yep
#juju-dev 2017-01-23
<axw> nurfet: ok, cool. at the moment, the ubuntu user is hard-coded throughout the codebase. I think it would be difficult to change that now, so your provider would need to create the ubuntu user
<axw> nurfet: will your provider not rely on cloud-init?
<nurfet> I see. no my provider does not support cloud-init at the moment
<axw> nurfet: I have to go, will be back in an hour or so if you still have questions. otherwise thumper or wallyworld may be able to assist
<nurfet> axw: thanks
<thumper> well... fuck
<perrito666> Good morning to you too thumper
<thumper> o/
<redir> is it morning already
<perrito666> Not for me, I just found funny the first thing I read from thumper
<redir> I'm gonna give up soon
<redir> for the day.
<perrito666> Ahhggg i hate phone kb
<thumper> :)
<perrito666> I was just passing by
<redir> now I can't deploy because the maas can't resolve the simplestreams host.
 * redir eods
<redir> looks like it is still having trouble fetching the image from simplestreams
<thumper> wallyworld: https://github.com/juju/juju/pull/6854
<wallyworld> you want a review? ok
<thumper> please
<wallyworld> thumper: +1, jeez why is arm so sloooooow
<thumper> reasons
<veebers> wallyworld: Is there an easy way to grab what series a controller is running? Or will it be the ol' "juju ssh lsb-release..."?
<wallyworld> it might be in status, can't recall ottomh
<veebers> wallyworld: ack I'll poke. Cheers
<veebers> ah it totally is
<wallyworld> axw: i'm having an issue with some azure unit tests. but maybe i should also validate the approach i'm using. did you have time for a HO?
<axw> wallyworld: just finishing my lunch, in a short while
<wallyworld> ok
<axw> wallyworld: ready, see you in 1:1
<wallyworld> axw: here's the WIP  https://github.com/juju/juju/compare/develop...wallyworld:azure-ingress-rules?expand=1
<axw> wallyworld: should be fine. limit is 80 characters: https://docs.microsoft.com/en-us/azure/guidance/guidance-naming-conventions
<wallyworld> great ty
<wallyworld> i have tests passing now, just need to add some more
<wallyworld> axw: that PR is now up. much of it is cleanup https://github.com/juju/juju/pull/6855
<axw> wallyworld: OK, will look shortly
<wallyworld> no rush ty
<axw> wallyworld: what are the rules for grouping ingress rules, if any? as it is, there's no guarantee that IngressRules is going to return the rules with the same grouping as they were opened
<axw> wallyworld: e.g. I might open port range 1000-2000, and later 2000-3000; then IngressRules will report them as one
<axw> oh wait
<axw> never mind...
<wallyworld> axw: i am sure i read somewhere in our code that we do not allow overlapping port ranges
<wallyworld> i think if we do need to cater for that sort of thing, it needs to be done in the firewaller worker
<axw> wallyworld: 2001-3000 then. what I mean is this: I could open 1000-2000 for 192.168.0.1/24, and then later 192.168.0.2/24. the azure implementation of IngressRules turns that into one IngressRule. is that the expectation of the interface? does it *matter* that they're combined?
<axw> (or not combined)
<axw> wallyworld: if it matters, it should be specified on the interface method
<axw> if not, then I'm wondering why we bother
<axw> particularly in the provider. if the firewaller cares about grouping them, it should do it - relieve the providers of the burden
<wallyworld> axw: the idea was that we OpenPorts() with an ingress rule that has grouped CIDRs for a given port range, so we should mirror that on the way back out when IngressRules() is called
<axw> wallyworld: I understand. but what about when they're *not* grouped on the way in?
<axw> why favour the one over the other?
<wallyworld> the network.IngressRule struct contains a slice of CIDRs, so we try and stick to that where possible
<wallyworld> otherwise why bother having a slice of CIDRs in the IngressRule
<axw> wallyworld: it makes sense coming in, as an optimisation
<axw> wallyworld: on the way out, I don't think it matters at all
<axw> wallyworld: at least not at the provider level
<wallyworld> i guess i also see it as an optimisation on the way out
<wallyworld> but it can be changed if we think that's what needs to be done
<axw> wallyworld: an optimisation for what?
<wallyworld> the caller
<wallyworld> who would otherwise need to do the grouping
<axw> wallyworld: my point is this: either the caller needs to do the grouping, the provider needs to, or neither needs to; please document or remove
<axw> (if the caller *needs* them grouped, and the provider doesn't do it, then it's breaking its contract. that contract needs to be specified, so people don't write dodgy providers)
<wallyworld> i'll add some doc to the IngressRules() interface method. we can see how it all plays out then when the firewaller is refactored to use this new stuff properly. we can always change the providers not to group if that turns out to be best
<axw> wallyworld: I'm just going to give it a quick test, looks good tho
<wallyworld> ty
<wallyworld> axw: eating dinner soon, but thanks for review, will fix issues. let me know if any testing shows any issues
<axw> wallyworld: sure, I'll let you know either way. still waiting on bootstrap...
<wallyworld> azure is fast :-/
<axw> wallyworld: QA OK
<wallyworld> awesome, ty
<perrito666> Morning
<hoenir> Good morning comrades !
<frankban> axw: ping
<perrito666> morning again
<frankban> perrito666: do you know anything about the azure provider?
<rick_h> frankban: what's up?
<perrito666> frankban: very little sadly, the person to ask is axw, but shoot the question, we might sort it out
<frankban> perrito666: ok thanks
<frankban> perrito666: I'll send an email
<perrito666> frankban: k
<mbruzek> rick_h: Who works on packaging the juju bits that are installed on the deployed VMs?
<rick_h> mbruzek: the juju agents? they're pulled directly from streams so not really a package?
<perrito666> mbruzek: you are experiencing problems with agents?
<mbruzek> rick_h: I was running a security tool and found jujud was not owned by a user on the deployed system. This set off a security issue, I filed a bug, but want to follow up with someone about it.
<mbruzek> https://bugs.launchpad.net/juju-core/+bug/1658549
<rick_h> mbruzek: ah, yea saw that bug.
<mbruzek> ok
<rick_h> mbruzek: not sure there's any "one" person on that.
<mbruzek> I can be the "one" just point me at the code that does that.
<rick_h> mbruzek: hmm, so I'd imagine that's done during the agent install process as part of the agent setup bits https://github.com/juju/juju/blob/staging/agent/agent.go
<rick_h> mbruzek: but yea, might need to get someone like perrito666 or core folks to help sanity check where the agents are installed from/setup.
<perrito666> mbruzek: rick_h I am not sure where the exact code is but typically its a tar.gz downloaded from our streams
<rick_h> mbruzek: wallyworld is probably the best 'expert' on the whole agent setup bits
<perrito666> the user on them must be the one from jenkins on the machine that builds them
<perrito666> so a chown is in order
<rick_h> perrito666: right, but this is when the jujud is unpacked on the managed machine it needs to be chown there
<rick_h> perrito666: at least that's how I read the bug
<perrito666> rick_h: agreed
<perrito666> I find it a bit odd that juju does not have its own user
<rick_h> perrito666: hmm isn't it root?
<mbruzek> The other files in that directory were symbolic links to something else. This was the only file not owned by root and not a symbolic link.
<mbruzek> iirc
<perrito666> rick_h: yeah I would prefer us to be owned by a juju user
<perrito666> mbruzek: looking for the relevant code
<mbruzek> Thanks perrito666
<mbruzek> perrito666: I have to relocate, if you find anything please leave it in the bug I opened and I can take action on it. I am happy to help on this issue, just need a hint on where to look and/or get started.
<perrito666> so ideally this should be done inside agent/tools/toolsdir.go -> UnpackTools
<mbruzek> Ok
<mbruzek> writing that down now
<perrito666> mbruzek: ping me when you are back
<perrito666> :) ill have more info for you
<perrito666> happy relocation
<mbruzek> brb
<mbruzek> perrito666: back
<perrito666> mbruzek: hi, I was saying "so ideally this should be done inside agent/tools/toolsdir.go -> UnpackTools"
<mbruzek> perrito666: I will take a look and propose a fix
<perrito666> buut, that is only called by upgrades so cloud-init most likely has some repeated code doing that
<perrito666> mbruzek: looking for the relevant part
<perrito666> brb lunch
<redir> morning
<perrito666> redir: hello :D
<redir> :)
<alexisb> morning redir, perrito666
<perrito666> alexisb: morning
<redir> alexisb: 0/
<perrito666> k, EOD until standup
#juju-dev 2017-01-24
<axw> thumper babbageclunk: so, I'm a bit nervous about having gaps in tagging that would be introduced by Unset/Set. did you discuss my suggestion of just sending the version of the controller to UpdateController?
<axw> thumper babbageclunk: only disadvantage I see to that is that the provider needs to remember all the old methods of tagging. but I think that's preferable to possible resource leakage
<axw> thumper babbageclunk: finally (the clincher for me), some resources are tracked with things other than tags. e.g. OpenStack, security groups are tracked by formatting their names in a specific way. AFAIK that's immutable. so UpdateController would need to recreate the security group
<babbageclunk> axw, thumper: so we'd need to create a new security group, move everything over to it and then get rid of the old one?
<axw> babbageclunk: yep
<axw> (unless you can rename things in openstack?)
<babbageclunk> axw, thumper: yeah, that does seem like a pretty big problem for the unset/set approach
<menn0> thumper: can you have another quick look at this one pls: https://github.com/juju/juju/pull/6839
<axw> babbageclunk: ah you can rename instances, so maybe security groups too. but I'm a bit wary of expecting we can do that in all cases for every provider
<axw> thumper: ready to chat whenever
<babbageclunk> axw: Well really anything where we tag by naming doesn't work in the unset/set case anyway.
<babbageclunk> axw: thumper made the point that it's possible we'd change how resources were tagged with the model as well - that makes update with a version harder too, doesn't it? Environ.AllInstances might not find the migrated machines because it would be looking for the model tag by the wrong key (or whatever).
<babbageclunk> I need to go for a run to think about this, then I'll definitely need to pick people's brains some more.
<babbageclunk> :(
<axw> babbageclunk: I don't see how that's a problem. since you can only migrate to a controller >= version (right?), then as long as we keep a history in the provider of how tagging was done, we can determine how to identify the instances based on the source controller's version
<babbageclunk> axw: Yeah, I think you're right - and I guess we only need to introduce that handling inside the provider once  we actually need to make that change in a provider.
<axw> babbageclunk: right
<babbageclunk> axw: ok, sold. Thanks! thumper I'm doing this ^
<axw> babbageclunk: cool, no worries
<thumper> axw, babbageclunk: re openstack security groups, are they created in the account, or controller?
<thumper> if it is the account, then nothing changes
<thumper> axw: still good to chat?
<axw> thumper: not sure what you mean by "in the account or the controller". they're created by the controller for a model, and should be destroyed when the model or controller is destroyed
<axw> thumper: yep
<axw> thumper: https://hangouts.google.com/hangouts/_/canonical.com/andrew-tim?authuser=1
<thumper> axw: the model's machines aren't moving, so any security group created for a model and the model's credentials are still equally valid on the target controller
<axw> thumper: yeah, it's about ownership tho. what to do if you kill-controller on the source/target controller
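axw's suggestion above — keep a provider-side history of how tagging was done, keyed by the controller version that wrote the tags — could be sketched like this in Go (all names, keys, and versions here are illustrative; none of this exists in juju):

```go
package main

// Illustrative only: a history of tagging schemes so that a migration
// target can identify resources written by an older source controller.
// Neither tagSchemes nor TagKeyFor is real juju code.

type scheme struct {
	minVersion string // first juju version that used this tag key
	key        string
}

// Ordered oldest to newest. Versions are plain strings compared
// lexically, which is good enough for this sketch.
var tagSchemes = []scheme{
	{"2.0.0", "juju-controller-uuid"},
	{"2.2.0", "juju-controller-id"}, // hypothetical future rename
}

// TagKeyFor returns the tag key a controller of the given version
// would have used when tagging provider resources.
func TagKeyFor(v string) string {
	key := tagSchemes[0].key
	for _, s := range tagSchemes {
		if v >= s.minVersion {
			key = s.key
		}
	}
	return key
}
```

Because migration only targets controllers of equal or newer version, the target can always look up how the source tagged things.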
<thumper> menn0: got a few minutes?
<menn0> thumper: yep
<thumper> 1:1
<redir> fu... our docs say you have to create the pflash drive, but it looks like xenial's version of libvirt-bin now supports creating one by default:|
<thumper> redir: bugger... :-|
<redir> I guess that would be good, less (code) is more
<redir> not a huge deal just painful to live test
<menn0> babbageclunk: could you take a quick look at this one please? https://github.com/juju/juju/pull/6839
<babbageclunk> menn0: Sure
<menn0> babbageclunk: thanks
<rick_h> wallyworld: ping around?
<wallyworld> hey
<rick_h> wallyworld: hey, have a sec to give me a hand?
<wallyworld> sure
<wallyworld> a hand what?
<rick_h> wallyworld: https://pastebin.canonical.com/176845/ trying to redo my maas setup here and bootstrap is stuck here for 20+min
<rick_h> wallyworld: can't ssh to the machine with ubuntu@IP, but log seems to say it did work?
<rick_h> wallyworld: so can't get at the log and I'm confused on wtf
<wallyworld> rick_h: looks like the controller can't see streams.canonical.com
<rick_h> hmmm, k the GUI fetch didn't error so assumed it worked
 * rick_h figures out to test that out
<wallyworld> i'm just guessing on the "fetching agent" bit
<wallyworld> maybe that did succeed
<wallyworld> but after that it also needs to connect to the archives
<wallyworld> to get mongo
<wallyworld> IIANM
<rick_h> IIANM?
<wallyworld> if i am not mistaken
<wallyworld> maybe turning on debug would help. the info logging is a bit terse
<rick_h> yea, that's with --debug on bootstrap
<wallyworld> doh, of course
<wallyworld> i *think* the fetching juju agent messages correlate with getting the tools binaries and getting mongo
<wallyworld> the last step in getting jujud running
 * rick_h tries to deploy a maas node w/o Juju and see if I can ssh to it and check the connectivity
<wallyworld> that's a reasonable next step
<wallyworld> each time there's been similar issues, from memory, it's been network/routing related
<rick_h> k, thanks
<rick_h> wallyworld: ty, looks like I needed an iptables rules to enable nat on the maas machine.
<rick_h> wallyworld: at least ping now works so hopefully it'll bootstrap now successfully
<wallyworld> ah, right
<wallyworld> the symptoms seemed to fit not having outbound traffic
<rick_h> yea
<rick_h> at least it works now on a manually started maas node, so checking to see if the juju node works now
 * wallyworld crosses fingers
 * rick_h wishes there was a clear success/fail message on these things (like fetching the gui, agents, etc)
<wallyworld> maybe it hadn't timed out yet, not sure
<wallyworld> but it does need to surface the connectivity issue better
<babbageclunk> menn0: LGTM
<menn0> babbageclunk: tyvm
<rick_h> wallyworld: "Installing Juju machine agent"
<rick_h> seems like we're in business woot
<wallyworld> yay
<rick_h> ty wallyworld
<wallyworld> didn't do much :-)
<rick_h> well, you poked me to double check that egress. I thought I had it working but I had rebooted and I guess the settings didn't keep
<redir> this one should be an easy review... https://github.com/juju/juju/pull/6860/files
 * redir eods
<redir> see you tomorrow juju-dev
<menn0> axw: could you pls take a look at this one? https://github.com/juju/juju/pull/6859
<axw> menn0: looking
<menn0> axw: thanks
<axw> menn0: done
<menn0> axw: thanks
<jam> balloons: it looks like !!build!! is blocked again
<mup> Bug #1658549 opened: Security issue: jujud is not owned by a user on the system <juju-core:New> <https://launchpad.net/bugs/1658549>
<perrito666> Morning
<rogpeppe1> if I want environment isolation and a fake home, which testing suite should I be using?
<rogpeppe1> perrito666: any idea?
<perrito666> rogpeppe1: sorry I dont know which one has a home
<rogpeppe1> perrito666: i think i worked out the answer: use IsolationSuite and call MakeFakeHome
<rogpeppe1> perrito666: FakeHomeSuite doesn't provide env isolation unfortunately
 * rogpeppe1 wishes that all the fixture suites were orthogonal to one another
<rogpeppe1> here's a feature-branch PR to add support for preferred authentication domains to the juju client:  https://github.com/juju/juju/pull/6862
<rogpeppe1> jam: i've replied to your comments on https://github.com/juju/juju/pull/6862
<jam> rogpeppe1: is there a discussion response or just a code response?
<rogpeppe1> jam: both
<hoenir> Morning.
<perrito666> hoenir: morning
<ahasenack> hi juju devs, do you have commit rights on go-yaml?
<ahasenack> we have an issue with it, and apparently there is a PR for it too
<ahasenack> specifically, https://github.com/go-yaml/yaml/pull/223
<ahasenack> which affects juju
<ahasenack> issue is https://github.com/go-yaml/yaml/issues/157
<rick_h> ahasenack: no, I think we have to work to get the PR looked at and landed
<ahasenack> rick_h: you don't have commit rights, you mean? Your team
<rick_h> ahasenack: I don't think anyone in juju land does other than gustavo
<ahasenack> ok, thanks
<rick_h> ahasenack: so to push this you'll need to reach out to him
<niemeyer> ahasenack, rick_h: go-yaml is long due some maintenance
<niemeyer> ahasenack: Not sure if that fix is the right one..
<ahasenack> niemeyer: tbh I haven't looked at the PR itself
<ahasenack> I was going to point out just the issue and someone whispered to me there was a PR for it already :)
<redir> perrito666: https://github.com/juju/juju/pull/6860 PTAL if you have a minute
<alexisb> redir, ping
<perrito666> redir: sure
<redir> alexisb: pong
<redir> perrito666: tx
<alexisb> perrito666, redir I need someone to pick up the OSX regression
<redir> issue?
<alexisb> https://bugs.launchpad.net/juju/+bug/1659016
<alexisb> ^^^
<alexisb> do either of you have an OSX workstation?
<perrito666> alexisb: nope
<redir> alexisb: https://bugs.launchpad.net/juju/+bug/1659016 this one?
<redir> alexisb: I have an ancient macbook pro
<alexisb> redir, yes that one
<redir> might take a while to bring it into this decade
<redir> brb
<redir> back
<natefinch> heh
<natefinch> oops wrong window
<redir> heh
<perrito666> a bless on develop \o/
<alexisb> \o/
<perrito666> followed by a curse, this parallel testing thing doesn't give much room for happiness
<alexisb> perrito666, I know you played a big part in that bless, so thank you
<perrito666> alexisb: I haven't submitted code in the past week or so, so I don't think so :p
<perrito666> I have a couple of PRs dancing around develop
<alexisb> perrito666, this was way more than a weeks worth of work
<alexisb> dont sell yourself short
<perrito666> alexisb: I never do, I do rent myself short :p
 * perrito666 is a leasing person
<rick_h> lol, yea way more than a week
<perrito666> I wish that bless would have come at an hour where I wouldnt feel guilty about opening a beer
<perrito666> :p
<rick_h> perrito666: lol
<rick_h> perrito666: there is no guilt
<perrito666> rick_h: its 3pm and I have to be up and running until standup at 20:45 so there is a little guilt involved
<rick_h> perrito666: 3pm for you?
<perrito666> yup
<rick_h> I don't know why but I thought you were EST or Central
<perrito666> rick_h: GMT-3, whatever that means in non standard TZs :p
<alexisb> perrito666, I expect to see a celebratory beer at standup tonight
<perrito666> alexisb: well if you force my hand...
<rick_h> heh, I forget how far over mexico kicks east
<alexisb> perrito666, :)
 * rick_h thinks of things more in a straight line which is false brokenness
<perrito666> rick_h: yes, you have mercator to blame for that
<rick_h> redir: ping
<perrito666> also my country is in the wrong timezone for dumb political reasons :p
<redir> rick_h: pong
<rick_h> redir: what's your LP username?
<redir> reedobrien
<redir> i think
<rick_h> redir: k, that worked ty
<redir> yeah
<rick_h> redir: heads up that guimaas is rebuilt on newer maas, a few nodes aren't running atm
<redir> I used to also have redir but had it merged
<rick_h> redir: but have it working with 5 of the 8 nodes with juju 2.1beta4
<rick_h> redir: so if you ever need to tinker again it should work like before, same machine/url/etc
 * redir does peanutbutter and jelly banana dance
<redir> thanks rick_h
<rick_h> redir: I've got a new nuc3 coming tomorrow and a video cable to get 7 & 8 back going so by end of tomorrow should have all 8 working fine with modern tool stack
<redir> awesome
<redir> rick_h: in the mean time I've set up vmaas on my workstation and on an old box, which works great except it has only one nic, so I can only test it from itself.
<redir> well without a lot of twiddling
<rick_h> redir: yea, always good to have around, but also if we need real hardware there's some sitting free-ish
<redir> rick_h: yes, tyvm for letting me know it is back.
<redir> it is faster than my vm
<redir> :)
<redir> lunch
<perrito666> redir: k
<thumper> morning
<redir> thumper: o/
<wallyworld> thumper: quick! merge develop to staging, we have a bless!
<thumper> wallyworld: that is done automatically
<wallyworld> well, i should check then to see if it has happened
<wallyworld> looks like it has
<redir> thumper: wallyworld easy review https://github.com/juju/juju/pull/6863
<thumper> redir: shipit
<wallyworld> lgtm
<redir> tx
<redir> someone needs to enable their pre-push hook
<redir> staging fails pre-push too
<redir> but for unreachable code
<redir> looks like a fine merge problem
<natefinch> Anyone understand this merge error?  http://juju-ci.vapour.ws:8080/job/github-merge-juju/10041/artifact/artifacts/windows-err.log
<natefinch> sinzui, abentley ^
<sinzui> natefinch: remember to ignore the -err., use the -out http://juju-ci.vapour.ws:8080/job/github-merge-juju/10041/artifact/artifacts/windows-out.log
<natefinch> sinzui: sorry, the console output said to see the -err :)
<natefinch> sinzui: so, looks like mongo just bit the dust.  retry I suppose?
<sinzui> natefinch: yeah, that will be fixed one day
<sinzui> natefinch: yes, I agree to retry.
<abentley> redir: It failed: http://juju-ci.vapour.ws/job/github-merge-juju/10100/console
<redir> abentley: ?
<abentley> redir: Your fix/prepush-failures
<redir> oh, yes, but because mongo not because the prepush hook failed
<redir> I understand that if someone fails to setup their pre-push hook or uses the --no-verify flag that things can make it into a PR,
<redir> but I would expect that the CI pipeline would run the verify script before bothering to run tests...
<abentley> redir: Well, you know what happens when you assume...
<redir> abentley: I was expecting not assuming:)
<redir> poor expectations on my part
<abentley> redir: Maybe the verify script can become part of make test?
<redir> abentley: seems reasonable that there should be some target that runs verify and builds all together
<redir> abentley: looks like sinzui found waldo
<redir> this is the staging thing I see when updating my fork https://github.com/juju/juju/compare/staging...reedobrien:fix/prepush-on-staging?expand=1
<sinzui> redir: the fix is in place
<redir> sinzui: thanks!
<babbageclunk> thumper: the azure provider tags at least 8 different kinds of things with the controller id.
<thumper> hazaah
<babbageclunk> thumper: the destroy controller handles it by just destroying the resource group to kill everything in one fell swoop.
<thumper> sigh
<babbageclunk> thumper: do you know whether there's something similar to update lots of things?
<babbageclunk> thumper: all at once?
<thumper> no, but axw might
<babbageclunk> axw: ^^
<babbageclunk> oh, probably a bit early for him yet
<perrito666> babbageclunk: sounds about the time he gets up, our previous standup used to be in 15 mins from now
<axw> babbageclunk: no, I don't think there is
<axw> babbageclunk: the tags are for informational purposes as well, FYI (i.e. so people navigating through the portal can tie things back to juju land)
<babbageclunk> axw: stink, I didn't think so. What I'm doing at the moment is listing all of the things in the resource group and updating the tag for each, which requires dispatching on the GenericResource.Type
<babbageclunk> axw: That way at least we'll be able to log an error if a new resource type gets tagged but the code to update it isn't there.
<axw> babbageclunk: you can use https://github.com/Azure/azure-sdk-for-go/blob/master/arm/resources/resources/resources.go#L191 to update arbitrary resources
<axw> babbageclunk: is that what you're doing?
<babbageclunk> axw: Ooh, yes, that's what I was hoping to find - thanks! Removes the need to dispatch by the type.
<axw> babbageclunk: I'm not sure how it handles the non-common resource properties
<axw> babbageclunk: the azure API supports PATCH, but I don't think it's exposed by the SDK at all
<babbageclunk> axw: It looks like you pass in a generic resource - I hope it'd only update the things that are on there and leaves everything else alone? Otherwise how could you use it?
<axw> babbageclunk: I hope so too. dunno otherwise
<axw> babbageclunk: I can imagine how it'd work when making the HTTP request: you would just add the properties free-form
<axw> babbageclunk: but not using the Go SDK
<axw> babbageclunk: oh! never mind, GenericResource has a "properties" attribute
<axw> babbageclunk: so if you just preserve that, you're good
<babbageclunk> axw, awesome. Oh man, that documentation is terrible. "CreateOrUpdate creates a resource." Is that all it does?
<axw> babbageclunk: and now that I think of it, you *could* do it all in one shot using a template deployment. I think you should see how it goes without that first though
<babbageclunk> axw
<babbageclunk> oops
<axw> babbageclunk: heh :/  it's auto-generated from swagger
<babbageclunk> axw: ok, will do - I don't think it'll be too bad like this.
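The update babbageclunk and axw converge on — rewrite the controller tag on each resource while preserving all other tags and the opaque properties, so a full CreateOrUpdate doesn't clobber anything — could be sketched like this (genericResource is a stand-in for the SDK's GenericResource; the tag key is what juju uses elsewhere but treat it as an assumption here):

```go
package main

// Sketch only: rewrite the controller tag on an Azure resource while
// round-tripping everything else untouched.

type genericResource struct {
	Tags       map[string]string
	Properties interface{} // opaque, must be preserved verbatim
}

const controllerTagKey = "juju-controller-uuid" // assumed tag key

// retagged returns a copy of res with the controller tag replaced and
// all other tags and the properties preserved. The input is not mutated.
func retagged(res genericResource, newControllerUUID string) genericResource {
	tags := make(map[string]string, len(res.Tags))
	for k, v := range res.Tags {
		tags[k] = v
	}
	tags[controllerTagKey] = newControllerUUID
	return genericResource{Tags: tags, Properties: res.Properties}
}
```

Copying the tag map rather than mutating in place keeps the original resource usable if the subsequent CreateOrUpdate call fails and needs retrying.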
#juju-dev 2017-01-25
<blahdeblah> axw: Welcome back!  How do you feel about advising on how to handle some juju2 controller unresponsive type of bugs? :-)
<axw> blahdeblah: thanks. absolutely miserable ;p  just kidding, what can I help with?
<blahdeblah> So, I've got a juju2 controller not responding, meaning juju status doesn't work on either the controller model, or the single model it's supporting.
<axw> blahdeblah: ok, got logs for the controller?
<blahdeblah> Yep, but there's been nothing in machine-0.log since 2017-01-20 10:24:10
<blahdeblah> (UTC)
<blahdeblah> Oh, looks like logsink.log is the new hotness
<blahdeblah> Nothing in that for over half an hour, though
<axw> blahdeblah: is CPU pegged? memory? disk?
<blahdeblah> axw: None of the above? :-)
<axw> hmmk
<blahdeblah> axw: Our normal process for this is just to restart jujud-machine-0, and it usually works, but I'm keen to get it narrowed down to which bug this is, and get you folks something useful to make progress on.
<blahdeblah> Oh, just got some log activity
<blahdeblah> Lots of API terminations
<axw> blahdeblah: what ports is jujud listening on?
<blahdeblah> tcp6       0      0 :::17070                :::*                    LISTEN      53852/jujud
<blahdeblah> after that flurry of log messages, juju status is now responsive
<axw> blahdeblah: huh :/
<axw> blahdeblah: can you share the logs? maybe there's something buried that'll jump out
<blahdeblah> axw: So it turns out someone else started working on this too, and just restarted the agents
<axw> blahdeblah: doh
<blahdeblah> So I guess this will have to wait until next time
<blahdeblah> It's been happening reasonably regularly, though, so hopefully it will be something we can gather again soon.
<menn0> axw, wallyworld or thumper: https://github.com/juju/juju/pull/6859
<wallyworld> says it's merged
<axw> menn0: wrong PR?
<menn0> axw: right you are: https://github.com/juju/juju/pull/6866
<axw> ok, looking
<wallyworld> menn0: should we also be changing the cleanup job to now remove dead local charm docs, since we don't need them for revnum checking anymore
<menn0> wallyworld: yeah, I almost did that but left it out for now to save time
<menn0> wallyworld: i'll do that tomorrow
<wallyworld> ok
<thumper> axw: do you remember if we create an ubuntu user for centos?
<thumper> axw: and whether centos has sudo?
<axw> thumper: cloud-init creates the ubuntu user
<axw> thumper: and yes it does
<thumper> axw: can lxd provider on ubuntu deploy a centos container?
<axw> menn0: LGTM
<axw> oh, I see wallyworld has some comments tho
<menn0> axw: thank you
<wallyworld> a few yeah
<axw> thumper: not at the moment. there's probably not *too* much to do to get it to work
<thumper> hmm... ok
<axw> thumper: I did try using manual with the centos LXD image at one point, IIRC it didn't work because they didn't have sshd running by default?
<axw> thumper: or something else was missing, I forget
 * thumper nods
<menn0> thumper: lxc image list lcorg: | grep -i centos
<menn0> thumper, axw: if sshd isn't running you can always lxc exec in and sort that out
 * redir is eod
<wallyworld> menn0: if there are existing tests for removing dead charm docs and checking new rev numbers, how did they pass without sequence support?
<menn0> wallyworld: previously charm revs were calculated by keeping all charm docs around, including dead ones
<axw> menn0: yeah, it was just that it wasn't working OOTB. I think there may have been other things too
<wallyworld> also, with the addition of "-1" to the charm urls in MakeCharm(), was that an across the board change or just for those metrics ones?
<axw> too long ago, can't remember now
<menn0> axw: yeah, the images don't even have sshd installed, they are very bare bones
<menn0> wallyworld: it should never have been valid to add charms with no revision, but State.AddCharm was allowing it and those (buggy) tests were taking advantage of it
<menn0> wallyworld: with my changes that's no longer possible, which caused the tests to fail
<wallyworld> so in apiserver/uniter/uniter_test
<wallyworld> 	s.meteredCharm = s.Factory.MakeCharm(c, &jujuFactory.CharmParams{
<wallyworld> 		Name: "metered",
<wallyworld> 		URL:  "cs:quantal/metered",
<wallyworld> isn't that a case where the revision is missing?
<wallyworld> my other comment about testing the url rev number incrementing - unless we remove dead charm docs manually in the unit tests, they will still be around right? which means we are not 100% sure we are testing that the sequence behaviour is hooked in
<menn0> wallyworld: ah ha, the prep code which requires a revision is specified only applies to local charms and that's a store charm
<menn0> wallyworld: that's why the test still passed
<wallyworld> doh! missed that
<menn0> wallyworld: it's technically not correct though right?
<wallyworld> probably not, but outside the scope of this change
<menn0> wallyworld: I'll do the removal of charm docs tomorrow (and will make it part of that PR)
<wallyworld> ok, ta. do you agree with my point about thAT?
 * wallyworld hates caps lock being next to the A key
<menn0> wallyworld: I see your point (although you can obviously see that the code which queries the collection to find the next rev number has been replaced)
<thumper> oh FFS
<thumper> poop
<wallyworld> i can. but...... unless we remove the old artifacts that made it possible for the old behaviour to work, we can't be sure
<wallyworld> i realise it's pedantry
<wallyworld> TDD, write failing test, fix code, make tests pass
<wallyworld> there's no failing tests here
<wallyworld> since the dead docs still exist
<wallyworld> well, i guess if the code is removed they would have failed
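The sequence menn0 replaced the dead-charm-doc query with behaves like an atomic per-name counter; in juju's state layer it is (roughly) a mongo findAndModify with {$inc: {counter: 1}} and upsert. A toy in-memory equivalent showing the contract (sequences is illustrative, not juju's API):

```go
package main

import "sync"

// sequences mimics the charm-revision sequence: each named counter
// hands out strictly increasing values, so dead charm docs no longer
// need to be kept around just to compute the next revision number.
type sequences struct {
	mu sync.Mutex
	m  map[string]int
}

func newSequences() *sequences {
	return &sequences{m: make(map[string]int)}
}

// Next returns the next value for name, starting at 0. The lock plays
// the role that mongo's atomic findAndModify plays in the real thing.
func (s *sequences) Next(name string) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	v := s.m[name]
	s.m[name]++
	return v
}
```

This is also why wallyworld's pedantry matters: with the counter as the only source of truth, a test that deletes the dead docs and still gets increasing revisions proves the sequence is actually hooked in.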
<redir> menn0: that dutch skit was great thanks
<menn0> redir: my wife showed it to me last night. i had tears rolling down my face
<thumper> well, fuck me, that worked
<thumper> I'm mildly surprised...
<babbageclunk> axw: azure pagination - if I don't pass anything for the top-n parameter to a list, will it return all the results at once, or do I still need to handle next-result getting?
<axw> babbageclunk: I believe you have to handle next result still
<axw> babbageclunk: (we're not doing that anywhere yet, I know)
<babbageclunk> axw: yeah, thought that might be the case - thanks
<axw> babbageclunk: limits on VMs, vnets, etc. preclude there being too many of a single resource AFAIK, but when you're listing all resources it might be needed
<babbageclunk> axw: ok - it's not too fiddly, just thought I'd make sure.
<babbageclunk> axw: gah - I can't work out what resourceProviderNamespace and parentResourcePath should be in the CreateOrUpdate call. The docs only tell me helpful things like "The namespace of the resource provider."
<babbageclunk> axw: sorry to keep bugging
<axw> babbageclunk: no worries
<axw> babbageclunk: un moment
<axw> babbageclunk: the namespace is something like "Microsoft.Compute", and the parent resource path is the URL path following the namespace, preceding the resource type. e.g. for security rules, the parent resource path will be something like "networkSecurityGroups/{network-security-group-name}" (I think)
<axw> babbageclunk: compare https://docs.microsoft.com/en-us/rest/api/resources/resources to the path template in https://msdn.microsoft.com/en-us/library/azure/mt163645.aspx
<axw> babbageclunk: Microsoft.Network is the namespace, "securityRules" is the resource type, {security-rule-name} is the resource name
<axw> babbageclunk: everything between the namespace and the resource type is the parent resource path
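axw's breakdown of the URL template can be expressed as a small parser. This is a sketch, not an SDK helper, but it makes the "everything between the namespace and the resource type is the parent resource path" rule concrete:

```go
package main

import (
	"fmt"
	"strings"
)

// splitResourceID breaks an Azure resource ID into the pieces
// CreateOrUpdate wants: provider namespace, parent resource path,
// resource type, and resource name.
func splitResourceID(id string) (namespace, parentPath, resourceType, name string, err error) {
	parts := strings.Split(strings.Trim(id, "/"), "/")
	providers := -1
	for i, p := range parts {
		if p == "providers" {
			providers = i
			break
		}
	}
	// After "providers" we need at least: namespace, type, name.
	if providers < 0 || len(parts) < providers+4 {
		return "", "", "", "", fmt.Errorf("unrecognised resource ID %q", id)
	}
	namespace = parts[providers+1]
	rest := parts[providers+2:]
	name = rest[len(rest)-1]
	resourceType = rest[len(rest)-2]
	parentPath = strings.Join(rest[:len(rest)-2], "/") // empty for top-level resources
	return namespace, parentPath, resourceType, name, nil
}
```

For a top-level resource like a virtual machine, the parent resource path comes out empty, which matches passing "" to the SDK call.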
<babbageclunk> axw: thanks, that's awesome! There turns out to be a lot of azure to understand.
<menn0> thumper: are we using the tech-board HO?
<babbageclunk> axw: can you take a look at this? https://github.com/juju/juju/pull/6868
<axw> babbageclunk: okey dokey
<babbageclunk> axw: WIP at the moment - I'm doing tests now.
<axw> babbageclunk: just eating atm, will look soon
<babbageclunk> axw: thanks, no rush!
<axw> babbageclunk: you should be able to get the azure creds that CI uses
<axw> babbageclunk: and test with them
<axw> babbageclunk: just looking for them, will pm
<axw> wallyworld: can you please review https://github.com/juju/juju/pull/6869 when you're free?
<wallyworld> sure, give me 5
<wallyworld> axw: looks good, a nice refactoring
<axw> wallyworld: thanks
<axw> wallyworld: got time for a HO? I need to brainstorm on a problem I've encountered with LXD creds
<wallyworld> axw: otp with uros, give me 10?
<axw> wallyworld: no rush, let me know when
<wallyworld> axw: free now
<axw> wallyworld: I think I figured out what I need to do. I'll ping you later if I'm wrong :)
<wallyworld> axw: ok, no worries. i may be afk for dinner or whatever but i'll be around
<axw> wallyworld: np. it can wait if not
<axw> wallyworld: enjoy your long weekend
<wallyworld> will do :-)
<wallyworld> i still need to land a pr first
<perrito666> Morning
<jam> morning perrito666
<jam> anyone want to review my attempt to make "juju ssh" better when the remote machine has multiple addresses?
<jam> https://github.com/juju/juju/pull/6857
<perrito666> Hey, I'll be unresponsive for the next couple of hours, my wife is at the hospital with a stomach issue. Mail me if you need me
<jam> perrito666: I hope everything is ok. hope she feels better soon
<jam> axw: babbageclunk: so I noticed both of you guys are making changes to how we're doing clouds between 2.1 and 2.2. I'd like to make sure your changes are resolved relative to each other.
<jam> specifically babbageclunk worked on playing with the names, while it looks like axw got rid of the clouds.BuiltinClouds map
<jam> I'm trying to make sure other 2.1 landed features get forward ported to 2.2
<jam> can I get some hints as to how this should be resolved?
<perrito666> Another bless what a week
<rick_h> lol congrats jujuteam
<jam> balloons: ping
<balloons> hey jam
<jam> hi balloons. I just sent you an email. Is it possible for us to carve out some time to go over CI and 2.1-dynamic-briges?
<balloons> jam, sure
<jam> I need to go spend time with my family before they disown me, but I can come back in a few hours.
<balloons> jam, see to that and we'll talk later
<jam> balloons: I sent a tentative invite
<jam> have a good one
<frobware> jam: AFAICT, trusty seems ok for the testing I have done.
<redir> morning
<frobware> jam: https://github.com/juju/juju/pull/6870 <<- bond-reconfigure-delay
<mbruzek> cmars: Is there a way to remove a term from a charm? One of the IBM charms still has lorem-ipsum
<cmars> mbruzek, push a new version of the charm to the charmstore without that term in the metadata
<jam> frobware: I reviewed it, I feel like it would be better to be having the config of 'bond-reconfigure-delay' but have the API down to the script itself just think in terms of "reconfigure-delay"
<jam> so the API server notices this is a bond, and sends a different value
<jam> rather than having the "is this a bond" in the script
<jam> is that too invasive?
<frobware> jam: yep, we could do that. i think...
<frobware> jam: can we discuss this in the morning?
<jam> frobware: certainly
<frobware> jam: it would be ideal to shore up these changes _this_ _week_
 * frobware sighs.
<jam> yep
<jam> frobware: I think its ready to land with that change, as I'm happy with the rest
<frobware> jam: there was one additional test I thought of in config.go; we test for a custom value, but I don't think for the default value.
<frobware> jam: when you say "just think in terms of "reconfigure-delay"" - is that just naming or a deeper semantic change?
<jam> frobware: the script should not have the "is this a bond", but the api server should be making that decision and returning a value for the bridges it is creating
<frobware> jam: ok, so if you pass reconfigure-delay=<some-value-gt-0> it will do the sleep regardless of type. yes?
<frobware> jam: we're just talking about the origin of truth, correct?
<jam> frobware: if the script gets that value yes
<frobware> jam: makes sense.
<jam> the thing deciding how long to sleep is the API server
<frobware> jam: and it should know the device type/kind/category.
<jam> right
<frobware> jam: caveat emptor: if you run the script manually (as I do a lot) don't forget to pass a value. Just sayin...
<jam> frobware: have a default value of 1-3 ?
<jam> or a default value of 10-20 ?
<frobware> jam: 3 was a winner for so long. I vote for 4.2.
<jam> frobware: then if you forget it still fails safe
<jam> :)
<frobware> jam: so we're saying cli arguments win, but if not supplied we'll default to 4.2 (or 3). correct?
<jam> frobware: yeah, you had a default of 30 in the script already if I read it correctly
<frobware> jam: ah, crap. Hmm.
<jam> frobware: add_argument(..., default=30) I thought I read
<frobware> jam: this from my hacking.... probably....
<frobware> jam: I had meant to leave it to whatever it was previously knowing full well that it would be driven from model-config. but perhaps 30 is nice and safe if you run it manually.
<jam> frobware: seems a little too safe
<jam> frobware: especially if it is the one that SSH is talking on and you wonder if everything is dead
<frobware> jam: an interface is better up than down. :)
<frobware> jam: so the only time you'll get 30 is if you specify 0 from juju for bond-reconfigure-delay.
<jam> frobware: we should just always pass the aggregate value
<jam> even if it is 0 so it overrides the default
 * frobware needs to go and look at this to make sure. But tomorrow now...
<frobware> jam: I think this comes back to: who drives the value and why do we need a default. if juju passes 0, you get 0. if it passes 1234, you get ... and so on.
<frobware> jam: should the script carry a default?
<jam> frobware: the reason for a default is if you run the script by hand (IMO)
<frobware> jam: right. but I would counter, it has no documentation, is wrapped up by juju so in the true spirit of *nix you have rope to hang yourself with.
<jam> sounds like you're just trying to be spiteful now :)
<frobware> jam: you might be interested in: https://github.com/frobware/juju/tree/go-debinterfaces
<frobware> jam: I started a rewrite of the script in Go. The parsing side is done, there's 100% test coverage. I didn't do the bridging side as...
<frobware> jam: code is in juju/network/debinterfaces/...
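jam's proposed split above — the API server knows whether the device is a bond and always sends an explicit delay, while the script just sleeps whatever it is told via a plain reconfigure-delay argument — might look like this on the server side (a sketch of the discussion, not code that exists in juju; the 30-second fallback is the script's old manual-run default mentioned above):

```go
package main

// Only bonds need a settle delay after reconfiguration, and an
// operator-set bond-reconfigure-delay wins over the fallback.
const defaultBondReconfigureDelay = 30 // seconds, for manual runs

// reconfigureDelay computes the aggregate delay the API server would
// pass down to the bridging script, which then has no bond-awareness
// of its own.
func reconfigureDelay(isBond bool, configured int) int {
	switch {
	case !isBond:
		return 0 // plain NICs come back up immediately
	case configured > 0:
		return configured
	default:
		return defaultBondReconfigureDelay
	}
}
```

The script then always receives an explicit value, so even passing 0 overrides any default it might carry for manual invocation.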
<alexisb_> thumper, ping
#juju-dev 2017-01-26
<thumper> menn0: https://github.com/juju/juju/pull/6873
<thumper> menn0: that's the other half
<menn0> thumper: looking
<thumper> menn0: just doing extra-info for model config now
<menn0> thumper: done
<thumper> menn0: ta
<babbageclunk> man, I was complaining about azure's SDK documentation but their web UI is really slick.
<menn0> babbageclunk: well at least that's something :)
<babbageclunk> click click click
<thumper> menn0, babbageclunk: https://github.com/juju/juju/pull/6874
<babbageclunk> thumper: looking
<babbageclunk> thumper: why are we making it a string rather than an object though?
<babbageclunk> I guess they're happy feeding it through yaml unmarshalling themselves
<babbageclunk> thumper: LGTM'd
<thumper> babbageclunk: yeah
<thumper> babbageclunk: string allows more flexibility and simpler checks
<thumper> babbageclunk: also what I agreed with :)
<babbageclunk> thumper: it does make it hard for anything else to use it at the same time, but I guess it's per-model.
<thumper> yeah, is per model
<thumper> I think if we are going to have multiple things attempt to store data we probably want to think this through a bit more
<thumper> and have a different place to store it
<thumper> and different API etc
<babbageclunk> thumper: What are they using it for out of curiosity? He doesn't say in the bug.
<thumper> not sure to be honest
<thumper> conjure-up stuff
<thumper> hmm...
<thumper> babbageclunk: let me look at Tattrs
<thumper> which is map[string]string
<thumper> ew...
<thumper> $ juju model-config info
<thumper> ERROR key "info" not found in {<nil> ""} model.
<thumper> that's not a great error message
<babbageclunk> ah well
<thumper> babbageclunk: looked at storing as Tattr, I don't think that is a goer...
<thumper> there is too much special casing around it for it to be robust
<thumper> like, it doesn't deal with spaces very well
<thumper> babbageclunk: so I'm back to my previous thought, that if we want to store more structured user data, we do it properly with a different API and collection
 * redir eods
<babbageclunk> thumper: yeah, I think you're probably right.
<thumper> babbageclunk: jam mentioned annotations in the review, and I think that would be the way to go
<thumper> add a 'juju annotation' cli
<thumper> that would work for models or applications or units
<thumper> or perhaps machines
<thumper> and have the behaviour be the same in general as the model-config
<thumper> ish
<thumper> babbageclunk: do you have a few minutes to chat?
<thumper> I want to talk about the state pool
<babbageclunk> thumper: sure. standup hangout?
<thumper> sure
<menn0> thumper: I'm EOW (started early today). See email for the final PR for the charm URL changes that's merging right now.
<thumper> ack
<thumper> menn0: have a good weekend
<thumper> menn0: see you in just over a week
<thumper> holy crap
<thumper> babbageclunk: every AllWatcher created creates a state pool for all models
<thumper> babbageclunk: each GUI connection creates an all watcher
<thumper> babbageclunk: it could just be connecting GUIs to the controller killing things
<thumper> I'm going to refactor things so that the AllWatchers created use the single state pool
<thumper> and look at exposing the state pool through the facade base objects
<thumper> this may be a bit of surgery
<thumper> but shouldn't be too bad
 * thumper hopes
<thumper> well, that is tomorrow sorted
<jam> thumper: every state object stores a list of all transactions for every model even if it only uses some of them
<jam> thumper: state/watcher/watcher.go 'current[watchKey]int64' tracks the revno of every object seen in the transaction log
<jam> I'd like to see them sharing that underlying watcher if possible
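A minimal Go sketch of the per-key revno tracking jam describes, with simplified stand-in types; the real code in juju's state/watcher/watcher.go differs:

```go
package main

import "fmt"

// watchKey identifies a watched document: a collection name plus a document
// id. This is a simplified stand-in for the key type used in the real
// watcher.
type watchKey struct {
	collection string
	id         string
}

// watcher tracks the last-seen revision number (revno) of every document
// observed in the transaction log, so changes are detected by comparing
// revnos.
type watcher struct {
	current map[watchKey]int64
}

// observe records a revno seen in the transaction log and reports whether
// the document changed since it was last seen.
func (w *watcher) observe(k watchKey, revno int64) bool {
	old, seen := w.current[k]
	w.current[k] = revno
	return !seen || old != revno
}

func main() {
	w := &watcher{current: make(map[watchKey]int64)}
	k := watchKey{"machines", "0"}
	fmt.Println(w.observe(k, 1)) // first sighting: reported as a change
	fmt.Println(w.observe(k, 1)) // same revno: no change
	fmt.Println(w.observe(k, 2)) // revno bumped: a change
}
```

Because every watcher keeps its own `current` map covering every object in the transaction log, sharing one underlying watcher (as jam suggests) avoids duplicating that map per state object.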
<babbageclunk> thumper: ooh, that might be it.
<babbageclunk> gah, using the resource API to update tags on azure resources requires specifying the API version per resource type. :(
<mup> Bug #1658549 opened: Security issue: jujud is not owned by a user on the system <juju-core:New> <https://launchpad.net/bugs/1658549>
<deanman> Morning, is there a way (even a hack) after all to be able to deploy CentOS workloads locally with LXD ? Related bug here -> https://bugs.launchpad.net/juju/+bug/1495978
<perrito666> morning
<perrito666> deanman: i'll be gone for about an hour, but I don't think that would work
<perrito666> deanman: have you tried just passing the series and seeing what happens? (assuming you have centos templates)
<deanman> perrito666, I have simple charm that has only centos7 defined in its metadata.
<deanman> Checking logs it nags about `machine-0: 12:08:29 INFO juju.tools.lxdclient no image for ubuntu-centos7 found in https://streams.canonical.com/juju/images/releases/`
<deanman> so sometime ago it was hinted that you could trick it by creating a new alias for a local centos7 image with the name `ubuntu-centos7` which didn't work.
<deanman> so open to any other suggestions :-)
<redir> morning juju-dev
<perrito666> redir: morning
<perrito666> redir: hey, by any chance have you ever added a config value?
<perrito666> in config/config.go?
<redir> perrito666: HasDefaultSEries and PreferredSeries
<redir> funcs not values...
<perrito666> redir: care to evacuate some doubts for me?
<redir> I'll try
<perrito666> redir: ho?
<redir> sure
<perrito666> standup ho
<redir> omw
<redir> brb going to run a couple errands
<redir> or bbiab
 * thumper-busy rages against the code
<perrito666> ah the typical rage against the machine :p
 * redir back
<thumper-busy> if this works, fair dinkum, it'll be a miracle
<thumper> babbageclunk: morning
<babbageclunk> thumper: morning - how's the releaser stuff?
<thumper> quick hangout?
<babbageclunk> thumper: actually it's afternoon now
<babbageclunk> thumper: sure
<thumper> 1:1
 * redir wonders what a fair dinkum is
#juju-dev 2017-01-27
 * thumper off to the vet
<perrito666> So can anyone point me to the nice path to go from controller/config.go to, let's say, state? I feel like I am missing something
<babbageclunk> thumper's getting a cone of shame applied.
<perrito666> Oh I would so pay to see that
<babbageclunk> axw: yay, adding in an individual get works, thanks!
<axw> babbageclunk: sweet :)
<babbageclunk> axw: I've got one other thing that is a bit nasty in my standalone code now though - maybe you can tell me a better way?
<babbageclunk> axw: In order to pass the right APIVersion values I get all of the providers first and get the latest version from the ProviderClient.
<babbageclunk> axw: Basically building a big map of resource type -> version.
<babbageclunk> axw: uh, that was clumsily phrased - I mean, I get all the providers from the provider client and build a big map of the latest api version for each type.
<babbageclunk> Then I need to look up the version before each call for the different resources.
<axw> babbageclunk: that's probably the right thing to do, I'm not really sure tho - haven't done this before
<axw> babbageclunk: I don't think we'd want to hard-code, because we're using the generic resource API
<axw> so I'm presuming we get the most recent version
<axw> babbageclunk: are you sure it's not meant to be the API version for the generic resource API?
<babbageclunk> axw: Yeah, I'm sure - I get errors if I leave it the resource API version - for some resource types it says there's no provider registered.
<axw> babbageclunk: ok
<babbageclunk> axw: The error message nicely says which are available, so I could compare that to the output from the azure CLI and check.
<axw> babbageclunk: I think listing providers is the way to go
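A hypothetical Go sketch of the approach babbageclunk describes: list all resource providers once, then build a map from resource type to the latest API version so each subsequent call can look its version up. The `Provider`/`ResourceType` types here are illustrative stand-ins, not the real Azure SDK:

```go
package main

import "fmt"

// Provider is a stand-in for a resource provider as returned by a
// hypothetical provider-listing call.
type Provider struct {
	Namespace     string
	ResourceTypes []ResourceType
}

// ResourceType pairs a type name with its API versions, assumed here to be
// ordered most recent first.
type ResourceType struct {
	Name        string
	APIVersions []string
}

// apiVersions builds a map from "Namespace/Type" to the latest known API
// version, so callers can look up the right version per resource.
func apiVersions(providers []Provider) map[string]string {
	m := make(map[string]string)
	for _, p := range providers {
		for _, rt := range p.ResourceTypes {
			if len(rt.APIVersions) > 0 {
				m[p.Namespace+"/"+rt.Name] = rt.APIVersions[0]
			}
		}
	}
	return m
}

func main() {
	providers := []Provider{{
		Namespace: "Microsoft.Compute",
		ResourceTypes: []ResourceType{
			{Name: "virtualMachines", APIVersions: []string{"2016-04-30-preview", "2016-03-30"}},
		},
	}}
	m := apiVersions(providers)
	fmt.Println(m["Microsoft.Compute/virtualMachines"])
}
```

The one-off listing amortises the cost: the map is built once per operation rather than querying the provider before every per-resource call.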
<babbageclunk> axw: Cool. Anyway, thanks for all your help! I think it's a simple matter of porting what I've got into the environ and putting tests in.
<axw> babbageclunk: cool. I replied to your query from the other day about reentrancy btw
<babbageclunk> axw: oh great - I'll check it out.
<redir> is there a way to specify series for a controller: e.g. juju bootstrap cloud controller --config series=trusty
<redir> i know you can set default-series but can you only specify for the controller
<redir> ?
<babbageclunk> axw: replied - I think it'll handle that ok.
<axw> babbageclunk: yes, I was really just suggesting an optimisation. feel free to ignore, since this is going to happen infrequently
<babbageclunk> axw: Oh, I'll add the check. I guess the extra thing would be to filter by the controller tag (with the old uuid) in the call to ListResources? But that would require the old uuid being passed in.
<axw> babbageclunk: I think we can do without that for now
<axw> babbageclunk: and trust that the migration worker does the right thing :)
<axw> babbageclunk: ping? coming to perf meeting?
<blahdeblah> Hi all, could I have some quick advice, please?  I'm trying to work out why this env is just sitting in "waiting for machine" state across the board: https://pastebin.canonical.com/177303/
<blahdeblah> Where's the best place to look for debug info?
<babbageclunk> blahdeblah: I'd look at the controller logs - juju debug-log -m controller
<babbageclunk> blahdeblah: Can you see the machines in the underlying cloud?
<blahdeblah> babbageclunk: no, apparently it never gets that far
<babbageclunk> blahdeblah: it's weird though - normally the machines reporting "down" indicates that the machines have been provisioned but the agent isn't running.
<blahdeblah> babbageclunk: the CI job has just timed out and destroyed the model; all I got relevant from the controller logs was this: https://pastebin.canonical.com/177304/
<blahdeblah> I'll run the job again and try the debug log while it's starting up
<blahdeblah> babbageclunk: can we run the debug log before the model we're interested in exists?
<blahdeblah> Oh, looks like that's just the equivalent of juju switch controller; juju debug-log, so I'm going to assume yes.
<babbageclunk> blahdeblah: yeah, that's what I was about to say :)
<blahdeblah> :-)
<blahdeblah> OK - another run under way now
<babbageclunk> blahdeblah: if it happens again, are you able to ssh into the provisioned machines without using `juju ssh`?
<blahdeblah> babbageclunk: It doesn't seem to be getting that far
<blahdeblah> OK, here's progress: https://pastebin.canonical.com/177306/
<babbageclunk> blahdeblah: hmm. When I've seen machines stuck in down, it's because there's a problem running cloud-init and so it hasn't been able to start jujud.
<blahdeblah> Yeah - I think it's failing before that
<babbageclunk> blahdeblah: yeah, I see what you mean
<babbageclunk> blahdeblah: I don't know much about openstack - does that error message help?
<blahdeblah> babbageclunk: yeah - it does; just checking our build scripts to see what's responsible for specifying it
<blahdeblah> thanks for the help
<babbageclunk> blahdeblah: no worries!
 * babbageclunk goes for a run
<blahdeblah> excellent choice
<blahdeblah> Although, depending on what part of the world you're in, maybe not. :-)
 * redir goes to make dinner and eods
<thumper> babbageclunk: https://github.com/juju/juju/pull/6879
<blahdeblah> babbageclunk: Thanks for all the help; adding the correct openstack network sorted that out
<thumper> babbageclunk: hold off on that review for a few minutes, found a few more test failures to fix
<babbageclunk> blahdeblah: glad to hear it! (I'm in NZ, so it was a great choice!)
<blahdeblah> babbageclunk: I went running with some folks at LCA2017, and one is a Linux kernel engineer who works for Microsoft somewhere in Canada.  He reckons running anywhere down to about -30C is fine, except for the ice beard.  I reckon he's nuts. :-)
<babbageclunk> blahdeblah: ouch! I've been running at -2C (during a cold snap in London) but I can't imagine running in that kind of cold.
<blahdeblah> I'm too wimpy to run below about 10.  :-)
<babbageclunk> blahdeblah: also I had to read "linux kernel engineer working for Microsoft" a few times.
<blahdeblah> Yeah - pretty funny
<blahdeblah> But apparently true
<babbageclunk> nice
<babbageclunk> a former colleague of mine always says "no such thing as bad weather, just the wrong clothes"
<babbageclunk> thumper: let me know when it's good to look at that PR
<thumper> babbageclunk: it's there now
 * thumper heads off to jitz
<bradm> anyone about?  I'm getting a weird message constantly streaming in my juju state server logs - http://pastebin.ubuntu.com/23872991/
<thumper> babbageclunk: thanks for the review, I won't land it just yet as I'm about to head off for a week
<thumper> babbageclunk: stack.Debug is *way* faster than all the DB access we do
<thumper> so any db access will completely hide any cost to stack.Debug
<thumper> bradm: that looks badish, what version?
<thumper> bradm: I'm guessing pre 2.0
<bradm> thumper: nope, 2.0
<thumper> bradm: hmm...
<bradm> thumper: I had some weird transaction issues, so I cleaned those up with mgopurge, which seemed to work ok
<thumper> bradm: what is the background with the controller / models?
<bradm> thumper: I've had to end up re-bootstrapping :-/
<thumper> I'm guessing... it was mongo data loss...
<thumper> which would have caused transaction issues
<bradm> thumper: but the background is its a stack being deployed, so its being taken up and down to test out the deployment
<thumper> and missing setting doc
<thumper> well... poo
<bradm> thumper: so long running controller, with the model being deployed, torn down, remade with same name, then deployed again
<thumper> hmm
<thumper> there shouldn't be a problem
<bradm> thumper: the fun part was a juju list-models didn't show it, but a list-models --all whinged about missing data
<thumper> the only thing I have seen that causes transaction issues is mongo data loss
<thumper> hmm... sounds like a half torn down model
<bradm> yeah, I wonder if some corruption happened during tearing down the model
<bradm> huh, this is actually 2.0.1, I should probably update to 2.0.2
<thumper> good luck folks
 * thumper out
<deanman> jcastro, Are you available? Following up the action needed to be complete by charmers program owner so charmers can test their centos charms.
<mup> Bug #1587644 changed: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <eda> <performance> <juju:Fix Committed by axwalk> <juju-core:Fix Released by axwalk> <juju-core 1.25:Fix Released by axwalk> <https://launchpad.net/bugs/1587644>
<mup> Bug #1654528 changed: log sending broke between 1.25.6 and 1.25.9 on trusty <canonical-is> <juju-core:Fix Released by rogpeppe> <https://launchpad.net/bugs/1654528>
<jcastro> deanman: sure, I'm free for the next 1h20m, how can I help?
<deanman> Using private simplestreams from sinzui and i get the following error http://paste.ubuntu.com/23875938/ and i think it's just a simple action from you or Marco
<jcastro> marcoceppi_: have you seen that before? ^^
<marcoceppi_> jcastro deanman nope.
<marcoceppi_> that's weird
<jcastro> I wonder if it's a new account that's hit a limit?
<jcastro> but we've gotten that before and the error looks nothing like that
<deanman> I could search for my chat transcripts we had end of December, sinzui was basically letting me use his private simplestreams with access to centos images. Would that help ?
<jcastro> I wonder if you need to accept some eula or something to use those images?
<deanman> well from the error message description i can understand that i have, but since i don't have access to aws console, i can't see that through ?
<jcastro> does the account holder have access to the console?
<jcastro> balloons: yo anyone else on your team know anything about these centos images?
<jcastro> I've never even seen this error before
<deanman> marcoceppi_, Did you try that link https://aws.amazon.com/marketplace/pp?sku=aw0evgkw8e5c1q413zgy5pjce ?
<marcoceppi_> deanman: OH I SEE
<marcoceppi_> deanman: okay, I can help, probably
<marcoceppi_> deanman: what region?
<deanman> im on eu-west-1
<marcoceppi_> deanman: try now?
<deanman> thank you sir, it worked!
<deanman> some weird other behavior though....
<deanman> marcoceppi_, i could see the machine being created but gets stuck at pending with the following http://paste.ubuntu.com/23876093/. Does that look normal to you?
<marcoceppi_> deanman: that's beyond me
<deanman> marcoceppi_, no worries, I'll try look into this more and try get help at some other time
<mskalka> Not sure if here or #juju is the right place for this, but does the reactive charm framework support @when('config-changed') or similar? My google-fu is failing
<cmars> mskalka, it does.. looking up the state names for this
<cmars> mskalka, https://github.com/juju-solutions/layer-basic#reactive-states
<mskalka> cmars: Thanks! That's exactly what I was looking for.
#juju-dev 2017-01-29
<menn0> morning all
<babbageclunk> morning menn0
<babbageclunk> how's it going?
<menn0> babbageclunk: pretty good. had a great camping trip with the family at little river and we saw Moana last night (awesome!)
<menn0> babbageclunk: how about you?
<babbageclunk> Oh yeah, Moana's great! Ada still bursts into her version of the Shiny song every so often.
<babbageclunk> menn0: I bought a push mower and mowed our lawn yesterday, felt super-domestic!
<menn0> babbageclunk: haha, I know that feeling. A few weeks ago I got really excited about a retractable hose that I'd bought. sad yet funny :)
<babbageclunk> menn0: ahh, growing up
#juju-dev 2018-01-22
<mup> Bug #1728902 changed: leadership worker wakes up too often <leadership> <juju:Fix Released> <https://launchpad.net/bugs/1728902>
<mup> Bug #1732163 changed: juju status triggers some uninteresting DEBUG level messages <logging> <usability> <juju:Triaged> <https://launchpad.net/bugs/1732163>
<mup> Bug #1702236 changed: Juju 1.25.10 is running hooks prior to additional network interfaces being up <canonical-bootstack> <juju-core:Won't Fix> <https://launchpad.net/bugs/1702236>
<rogpeppe> axw: i've got a couple of PRs up for review that you might be interested in: https://github.com/juju/juju/pull/8304 https://github.com/juju/juju/pull/8305
<axw> rogpeppe: cool. won't be able to look today, but maybe tomorrow
<rogpeppe> axw: thanks
#juju-dev 2018-01-23
 * babbageclunk pops out to do a pre-exchange check on the new house!
#juju-dev 2018-01-24
<wallyworld> babbageclunk: sorry! cut you off
<veebers> babbageclunk: exciting!
<babbageclunk> wallyworld: No worries!
<babbageclunk> I don't think it was anything exciting.
<babbageclunk> veebers: i know right!
<veebers> babbageclunk: hopefully everything was in order
<babbageclunk> veebers: yup, it looked better than I remembered it!
<veebers> that's a good sign!
<wallyworld> axw: should i hold off reviewing the raft cluster pr pending changes to use the (single) address as the node id? I think that's what we agreed right?
<axw> wallyworld: yeah, and the rafttransport PR I guess
<wallyworld> yep
<axw> wallyworld: thanks for the review. can you please see my reply before I merge
<wallyworld> sure
<wallyworld> axw: ok, no worries, lgtm
<axw> thanks
<balloons> hml, do you know who is working on retooling backup / restore (if anyone?)?
<balloons> vern, I'm still not sure who's working on backup / restore fixes, if anyone atm fyi
<vern> hml was doing some work on backup/restore. not sure if it was just bug-fixes
<hml> vern: agprado and i are going to take a look at backup/restore shortly - some bug fixes but review overall
<hml> vern: is there somewhere we need to look?
<vern> hml: I'm going to write/extend the qa test. want to be sure to have a good plan and to anticipate anything that isn't working right
<hml> vern: okay - iâll have more info next week
<balloons> hml, specifically I think the plan was to simplify things right? Or will you focus on bugfixes?
<hml> balloons: yes - simplify if we can and try to improve the experience
<balloons> vern, try and prep your thoughts on the existing state of the test for standup if you could.
<vern> will do
<rogpeppe1> anastasiamac: hiya! thanks for working on https://bugs.launchpad.net/juju/+bug/1745231 so quickly! I just sent a reply to that issue.
<mup> Bug #1745231: removed model can cause allmodelwatcher to die permanently <juju:In Progress by anastasia-macmood> <juju 2.3:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1745231>
<anastasiamac> rogpeppe1: nws - thnx for looking into it :)
<rogpeppe1> anastasiamac: it's been bugging us for ages!
<rogpeppe1> anastasiamac: luckily the last time it happened i got them to set the logging level to INFO
<rogpeppe1> anastasiamac: so i finally saw the problem
<anastasiamac> rogpeppe1: yeah, it has been a hard road... m glad u were patient and insightful
<rogpeppe1> anastasiamac: BTW i think it would be nice if more places that logged errors logged the error details too
<rogpeppe1> anastasiamac: np, it was useful to have got there in the end :)
<rogpeppe1> anastasiamac: to get a timely notification when the problem started happening i built a nice prometheus-monitoring tool that shows "jimm: ok" at the top of my screen (in the indicator bar).
<rogpeppe1> anastasiamac: i'm sure i can do more interesting things with that :)
<anastasiamac> rogpeppe1: that sounds amazing \o/ anything that can be shared with us so that we can keep an eye on it too?
<rogpeppe1> anastasiamac: well, indicator-sysmonitor makes it easy to put custom stuff on the indicator panel
<rogpeppe1> anastasiamac: and the little localhost server to monitor prometheus is this code: http://paste.ubuntu.com/26454773/
<anastasiamac> rogpeppe1: niiice
<rogpeppe1> anastasiamac: then the script called by indicator-sysmonitor is just (more-or-less) bhttp http://localhost:12468/status
<anastasiamac> rogpeppe1: thnx !
#juju-dev 2018-01-25
<wallyworld> babbageclunk: hey, i know you'd love a caas review to finish your week :-D https://github.com/juju/juju/pull/8318
<babbageclunk> wallyworld: hey, looking now! Sorry, Tim popped around to say hello.
<wallyworld> no rush :-) you may need to ask questions about how it all hangs together. i'm interviewing again for the next hour though
<babbageclunk> wallyworld: when you're not interviewing can you look at this? https://github.com/juju/juju/pull/8315
<wallyworld> will do
<wallyworld> babbageclunk: almost done review; i assume you cribbed from the model config command?
<babbageclunk> wallyworld: yup - I looked at consolidating them but there were enough differences that it didn't seem worth it.
<wallyworld> no worries, just helps me skim a bit :-)
<babbageclunk> cool cool
<wallyworld> babbageclunk: lgtm, just a little iffy on the []interface{} setting values
<babbageclunk> wallyworld: thanks - I think we need to be able to handle those because that's what schema.List.Coerce returns.
<wallyworld> babbageclunk: yeah, i think previously stuff stayed as a string when stored and was coereced on the way back out
<wallyworld> can't recall the specifics
<wallyworld> but i know there were issues around this stuff (corner cases etc)
<babbageclunk> wallyworld: yeah, I could see some pieces of that in the ConfigFlag code
<wallyworld> yeah, so there be dragons is all
<wallyworld> it would be good to perhaps stick with our existing conventions - i'm sure some model config stuff is a map after being coerced
<wallyworld> but is stored as a string
<wallyworld> or maybe time to make a change, not sure
<babbageclunk> Really? I'll have a look but I didn't change the way the values were stored in controller config (ie they were already being coerced, I just added a list value).
<babbageclunk> wallyworld: It looks like we use SetPreserveStringValue for the config options on the deploy command (so I guess that's charm config) but not anywhere else.
<wallyworld> babbageclunk: ok, let's get this in - perhaps an email to initiate a wider discussion would be good just to make sure we're all ok with the change. the fact that == changed to reflect.DeepEquals() suggests things used to work one way and now not, or there was a bug
<babbageclunk> wallyworld: ok, will to
<babbageclunk> s/t/d
<babbageclunk> ha
<rogpeppe1> anastasiamac: thanks for your quick fix!
<anastasiamac> rogpeppe1: nws - need to land it now and see the difference! thank you for the analysis
<rogpeppe1> anastasiamac: BTW a good point was raised by axino in #is - do you know why the dead model entry is staying around in the model pool for so long?
<anastasiamac> rogpeppe1: it has also sparked an interesting and invigorating discussion about what to do with connections that pool manages ;) thank you for keeping us on our toes?.. ;)
<anastasiamac> rogpeppe1: there must b a model connection that is not closed...
<anastasiamac> rogpeppe1: we r considering to force close these on dead models after a timeout period (default 5mins but potentially configurable)
<anastasiamac> rogpeppe1: i have suspected that we r not closing something somewhere but tracking it down is really really hard in asynch and http context
<anastasiamac> rogpeppe1: it's easier to just close connections when needed
<rogpeppe1> anastasiamac: there are techniques that can be used to track down stuff like that
<anastasiamac> rogpeppe1: yes and i've tried... to what i have seen it grew for me as our connections are long lived and are reused for more than one biz concern/function... anyway, m planning to focus on it next week (tomorrow is public holiday in OZ and am about eod)
<rogpeppe1> anastasiamac: i'm trying to remember/find the Go package that i think might help with this
<anastasiamac> rogpeppe1: would love to have ur opinion once i come up with solution(s) :D
<anastasiamac> rogpeppe1: ohh u'd make my day!
<rogpeppe1> anastasiamac: ah, this is what i was thinking of: https://godoc.org/golang.org/x/net/trace
<rogpeppe1> anastasiamac: not sure if it would actually help though
<anastasiamac> rogpeppe1: niice :) i'll see what i can do :)
<rogpeppe1> anastasiamac: i'm not sure that records context of where existing objects are used though.
<anastasiamac> rogpeppe1: last time i checked i had a list 2 pages long of different calls that have been made on one connection... i needed to pair up with someone at the time but TZs are not always in my favor :) maybe next week :)
<rogpeppe1> anastasiamac: i might be inclined to store current user (perhaps the pcs of the stack trace) in the object when it's used
<anastasiamac> rogpeppe1: i can record that context myself ;) i was thinking of adding more logging at trace for non-released connections so that we can track what's going on say in jimm...
<rogpeppe1> anastasiamac: are connections shared concurrently between multiple things?
<anastasiamac> rogpeppe1: i do not believe so
<anastasiamac> rogpeppe1: i think it's at least one connection per http connection... but i'd need to check and m starting to checkout: hungry mouths to feed, etc ;)
<rogpeppe1> anastasiamac: so, looking at StatePool, i think it might be worth recording the stack pcs as well as the sourceKey in the referenceSources map
<rogpeppe1> anastasiamac: then it would be possible to find out where all the Get calls for currently outstanding unreleased State instances were allocated
<anastasiamac> rogpeppe1: yep, that what i have on a branch that i have abandoned last year.. was going to pick it up again next week :) and have stuff running off it
<rogpeppe1> anastasiamac: you'd need some way of interrogating that too though
<anastasiamac> rogpeppe1: my plan was to log it and run locally then just observing the log.. but i like the interogation idea better since it can help in the field too :)
<rogpeppe1> anastasiamac: ooo, there's already IntrospectionReport
<rogpeppe1> anastasiamac: don't know where it's used tho
<rogpeppe1> anastasiamac: cool, looks like you can just GET /statepool
<anastasiamac> rogpeppe1: yep
<rogpeppe1> anastasiamac: BTW do you know when jam/wallyworld/thumper are back online?
<jam> rogpeppe1: wallyworld and thumper are gone this week and I think next. I'm around.
<anastasiamac> rogpeppe1: wallyworld is sprinting next week and thumper should b back towards 2nd half next week
<rogpeppe1> jam: ah, cool. i've a couple of PRs I wonder if you could take a look at.
<rogpeppe1> anastasiamac: https://golang.org/pkg/runtime/#Callers - you could store a []uintptr in each referenceSources entry; relatively low cost, but provides the capability to get full stack trace out
<rogpeppe1> jam: https://github.com/juju/juju/pull/8304 and https://github.com/juju/juju/pull/8305
<anastasiamac> rogpeppe1: full stack trace in prev look only returned http context not actual calls but i'll have a look
<rogpeppe1> anastasiamac: i mean recording the stack trace when Get is called
<rogpeppe1> anastasiamac: or... maybe that's called too early to be of use :-(
<anastasiamac> rogpeppe1: yeah, i found it was a bit early, i had to add some fun callbacks to get the real trace :0 i'll need to revisit my shelf to re-fresh my memory but m about to switch off my machine
<anastasiamac> rogpeppe1: so next week :)
<rogpeppe1> anastasiamac: have a great weekend!
<rogpeppe1> anastasiamac: and thanks for the quick fixes
<anastasiamac> rogpeppe1: my pleasure :)
<balloons> vern, were you able to conquer jjb and get a new job created?
<vern> actually, I forgot and was looking at the joyent job yesterday
<vern> I remembered late last night. am tackling jjb now :)
<balloons> vern, no worries :-)
<balloons> vern, it also seems we should be fairly safe from any big overhauls that would affect the simple use case for backup restore
<balloons> I spoke with wallyworld about intentions to confirm
<vern> cool. thanks for checking, balloons
<hml> anyone tried to bootstrap localhost with juju snap (edge channel) today?
<hml> iâm using the lxd snap too
<hml> ERROR detecting credentials for "localhost" cloud provider: failed to connecti to local LXD: not found
<hml> stable channel is okay
<rick_h> hml: sec, I can test it on my machine
<hml> rick_h: thank you
<hml> my controller from yesterday - can't connect to the containers any longer - they lost their ip addresses
<rick_h> hml: hmm, I can't get lxd init to go getting "error: Error inserting default into database: attempt to write a readonly database"
<hml> i saw that too at one point
<hml> i think the lxd snap updated yesterday
<hml> i wonder how 2.3.2 can't bootstrap lxd?
<hml> s/canât/can
<rick_h> hml: so I've got installed:  lxd 2.21 from yesterday. Did you have a later one?
<hml> rick_h: iâm on the same - it refreshed yesterday
<hml> rick_h: lxc init does work for me but the new container is STOPPED
<rick_h> hml: sorry, can't help atm as mine is busted up. Looking into what's up.
<hml> rick_h: iâm in the same state :-) though knowing lxd is giving others issues helps
<hml> rick_h: any luck?  looks like something with the lxc networking, the juju failures lead there - and the container which bootstrapped today lost its ip address and juju can no longer reach it
<rick_h> hml: no, this error seems to be around a sqlite db not being able to be written and the only bug I see atm is around enough inodes?
<rick_h> hml: sorry, was just rebooting the thing and then got sidetracked while it rebooted
<hml> :-)
<hml> rick_h: i havenât been able to find any errors in logs around this
<hml> rick_h: i tried to revert the snap, but that failed
<rick_h> hml: so after reboot my inode issue is ok, lxd init works, and bootstrapping with --debug now
<hml> rick_h: hmmmm
<rick_h> hml: getting images...wheeee
<hml> rick_h: good that itâs back up and running - no cool if it requires a reboot
<rick_h> hml: so all done and bootstrapped here
<rick_h> 2.4-beta1+develop-dc661f7
<hml> rick_h: okay - iâm taking the opporturnity to improve on the âERROR not foundâ message
<rick_h> hml: lol why? I mean that's clearly ... umm ...
<hml> lol
 * balloons reads backlog
<balloons> I thought it was just me this morning
<balloons> sounds like if I try again things will work :)
<hml> balloons: reboot first
<hml> :-/
<balloons> my system was rebooting from the microcode issues
<balloons> that was annoying. I think it's stopped today finally
 * hml fingers crossed
<hml> quick pr review anyone?  https://github.com/juju/juju/pull/8320
<balloons> I feel like we might end up playing with the message
<hml> balloons: word play suggestions are welcome
<balloons> hml, I left a comment and approved
<hml> balloons: ty
<rick_h> "there once was an error from nantucket..."
<hml> ha!
<hml> rick_h: balloons: i think my ubuntu box has gremlins :-) - previously failing lxc commands have started working again. and restarted containers have ip addrs (so far)
<hml> vern: balloons : do you know where I can look to see why this failed: http://ci.jujucharms.com/job/github-merge-juju/800/console  ?  ty
<vern> hml: good question... I'd expect right there but not much more in that output than "ERROR: Build failure, reporting on proposal"
<vern> checking the jenkins job...
<veebers> hml: go install github.com/juju/juju/acceptancetests/repository/charms/fill-logs/actions: build output "/var/lib/jenkins/workspace/github-merge-juju@tmp/bug1745459-800/tmp.4wz9MQ0PI7/juju-core_2.4-beta1/bin/actions" already exists and is not an object file
<veebers> I'm not sure why that's the case those :-\ Although I am currently updating that job
<veebers> I lie, I'm not updating that job
<veebers> I'm doing the merge check job
<hml> veebers: :-) it was the 2nd attempt at merge???  but that "shouldn't" matter
<veebers> hml: shouldn't have, not sure why it choked. Seems happier this time around, hopefully those unit tests pass
<hml> veebers: 3rd time the charm?  first was a known transient failure
<vern> hml: for what it's worth: the newer job is already past the point the other failed
<hml> good news
<veebers> *sigh* any time the config for the check-merge job is saved it re-scans and re-runs for old PRs :-\
<veebers> If you see a comment on an older PR pop up this is why
#juju-dev 2018-01-26
<balloons> vern, how'd adding a job go?
<vern> balloons: no success yet. veebers updated the readme with some useful info. still getting a KeyError on the job push. will try stashing my changes and testing a pre-existing job to see if the problem exists with my new ci job or my environment
<balloons> vern, ack
<balloons> vern, you should be able to update any of the existing jobs as well as you mention
