#juju 2011-11-28
<rog> mornin'
<TheMue> hi rog
<niemeyer> Hello all!
<mainerror> Hello.
<rog> niemeyer: welcome home!
<niemeyer> rog: Thanks! :)
<TheMue> niemeyer: welcome back
<niemeyer> TheMue: Thanks!
<niemeyer> rog: lbox seems to be working well
<niemeyer> rog: You're right about pre-req.. the diff should come from the pre-req branch
<niemeyer> rog: Will address this later
<rog> niemeyer: it's difficult though, because the prereq branch is a moving target. as is the target itself, i guess
<niemeyer> rog: Yeah, that's probably not a big deal
<rog> niemeyer: it's a pity the target has to be on disk.
<rog> niemeyer: i've made a shell script that does quite a lot of the stuff i always need to do; i don't know how much would be generally applicable though.
<niemeyer> rog: The main concern I have is that it feels easy to merge a branch without addressing the pre-req first, and as a consequence merging the pre-req without reviewing it
<niemeyer> rog: I was going to address that in your email, but we can talk here as well
<rog> http://paste.ubuntu.com/752390/
<niemeyer> rog: It's actually not a pity
<niemeyer> rog: This is standard distributed revision controlling
<rog> i just fetch the target every time now
<rog> into /tmp or somewhere
<niemeyer> rog: You're used to the Go process, but their process is well sub-standard in that regard
<niemeyer> rog: This feels quite bad
<rog> yeah, but it stopped me making the same mistake every time
<niemeyer> rog: The usual way to develop software with any of the DVCS tools (bzr, git, hg, ..)
<rog> niemeyer: which was that i'd specify, say, ../go-trunk as a target, but i might have a different push target inside go-trunk
<niemeyer> rog: is to fetch the pristine code locally, and work with branches on top of it
<niemeyer> rog: Hmmm.. how do you mean?
<niemeyer> rog: There's only one go-trunk?
<rog> i think i'd probably done: cd go-trunk; make-some-changes; push --remember lp:~rogpeppe/blah/foo
<niemeyer> rog: Yeah, that's the issue
<rog> which was probably a silly thing to do, but it caught me out twice
<niemeyer> rog: yeah, don't do it.. you may end up even screwing up the real trunk by mistake
<niemeyer> rog: branch locally
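The branch-locally workflow niemeyer is describing can be sketched as a small shell helper (a sketch only: the lp:juju URL and branch names are illustrative, and start_branch is a made-up function, not a real tool):

```shell
# Sketch of the "keep a pristine trunk, branch locally" workflow.
start_branch() {
    feature="$1"
    # Keep one pristine local copy of trunk; never edit it directly.
    [ -d trunk ] || bzr branch lp:juju trunk
    # Each new line of development starts as a fresh local branch of it.
    bzr branch trunk "$feature"
    cd "$feature" || return 1
    # ...edit, bzr commit, then push to your own namespace, e.g.:
    #   bzr push lp:~yourname/juju/my-feature
}
```

The point is that pushes always go to your own namespace, so the pristine trunk copy can never be clobbered by mistake.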
<rog> the other thing is when i want to edit some code that i've goinstalled
<rog> i want to do: cd $GOROOT/src/pkg/launchpad.net/goamz/ec2; edit; commit; propose
<niemeyer> rog: We covered that at UDS
<niemeyer> rog: Don't edit the pristine version
<niemeyer> rog: Work in a real branch
<rog> yeah, it's just much easier to get things wrong in that situation, because i'll be using two packages pushing to the same location in GOROOT
<rog> and i have to remember where i put the darn branch
<niemeyer> rog: Not really, that's the idea
<rog> also (and a different issue i think), i can't specify $GOROOT/src/pkg/launchpad.net/goamz/ec2 as a target
<niemeyer> rog: Use different GOPATHs
<rog> i can't use GOPATH because gotest doesn't work
<rog> i'm still biting my lip on that one
<rog> i really want to use GOPATH!
<niemeyer> rog: The two things are unrelated
<niemeyer> rog: Sorry, let me rephrase that
<niemeyer> rog: You don't have to keep your source code at $GOROOT
<niemeyer> rog: That's the real point
<niemeyer> rog: You can have as many branches as you want installing onto GOROOT, or even a different GOPATH for that matter
<rog> niemeyer: i think i do if i want to be able to use gotest on the source code
<rog> niemeyer: unless it's been fixed recently
<niemeyer> rog: I use that scheme, and I use gotest
<niemeyer> rog: Nope, no recent changes to that
<niemeyer> rog: make install always installs to $GOROOT
<rog> niemeyer: hmm, so if i've got GOPATH set and i go into a source directory in GOPATH and gotest, it works?
<niemeyer> rog: gotest doesn't use GOPATH.. you can use gotest locally irrespective of where your source lives
<rog> but i also like to be able to use goinstall
<niemeyer> rog: Yeah, I like ponies too :)
<rog> because it means that when i upgrade Go, i don't have to manually go through running make on every package i might have forked
<rog> goinstall -a is really useful
<rog> (and has saved me lots of time)
<rog> anyway, my current scheme works ok - i keep directory $HOME/tmp/targets with a set of pristine targets, then before proposing, i update the target and use that
<rog> that means that i don't have to worry about whether i might have accidentally locally corrupted a target
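rog's pristine-targets scheme, sketched as shell under the assumption that targets live in $HOME/tmp/targets as he says (refresh_target is a made-up helper name):

```shell
# Keep read-only target copies under $HOME/tmp/targets and refresh one
# just before proposing, so a locally corrupted target can't leak in.
refresh_target() {
    name="$1"; url="$2"
    dir="$HOME/tmp/targets/$name"
    if [ -d "$dir" ]; then
        (cd "$dir" && bzr pull "$url")   # update the pristine copy
    else
        bzr branch "$url" "$dir"         # first-time fetch
    fi
    echo "$dir"
}
# Then point lbox propose at the freshly updated "$dir" as its target.
```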
<rog> niemeyer: here's an idea: in the file with the summary & description that's edited as part of the propose process, why not put a summary of all the details that propose has inferred?
<rog> then it's more obvious when you've got the wrong push target, missed the prereq, etc
<niemeyer> rog: We can do that, but it doesn't solve the main problem, which is the danger of merging a branch that has a pre-req without addressing the pre-req first
<niemeyer> rog: We can probably create some convention to avoid the problem.. I'll think about it some more
<niemeyer> rog: I'll also tweak lbox to not force the description
<rog> niemeyer: it would also be nice if you didn't have to give the target when re-running lbox propose. maybe "lbox update" would be a more appropriate name.
<niemeyer> rog: The name is fine (you're still proposing), and in the setup I suggested you don't actually have to provide a target at all
<niemeyer> rog: Because it's the parent branch
<rog> niemeyer: hmm. what if it's not the parent branch?
<niemeyer> rog: Sorry, I don't get the question
<rog> niemeyer: what if you're not pushing to the parent branch?
<rog> niemeyer: that is, if you've branched from another local branch, but want to use the original parent as target?
<rog> niemeyer: or perhaps i've misunderstood
<niemeyer> rog: I see
<niemeyer> rog: No, you got it
<niemeyer> rog: Hmm.. this might help solve both issues perhaps..
<niemeyer> rog: I'm wondering how the workflow would look if it _was_ the actual target
<rog> niemeyer: if what was the target?
<rog> the prereq?
<niemeyer> rog: The awkwardness is that it'd prevent the base from being merged
<niemeyer> rog: Right
<niemeyer> rog: Since we'd want to merge the follow up on it
<niemeyer> Probably a bad idea
<rog> yeah, it doesn't sound quite right
<rog> the other issue i came up against is that you can't have two prereqs
<niemeyer> rog: Indeed, I've missed that before too
<niemeyer> rog: That said, at some point it's easier to just hold off the branch a bit
<niemeyer> rog: Even for everyone's sanity
<rog> niemeyer: maybe i should have just done that, and pushed each branch once you'd LGTM'd the previous one
<rog> BTW when i did lbox propose to upload a change to goyaml-error-fixes, i got this: http://paste.ubuntu.com/752421/
<rog> i wonder why it didn't find the previous merge proposal
<niemeyer> rog: This would work, but it's a nice feature to be able to push dependent branches..
<niemeyer> rog: Maybe we can, by convention, not LGTM a branch that has pre-reqs before the pre-reqs themselves have been sorted
<niemeyer> rog: This would solve the worries
<rog> niemeyer: a related possibility is to use an lbox command to do the final push
<rog> niemeyer: and that could check that the prereq had been pushed before doing it
<niemeyer> rog: It found the branch itself as the landing target
<niemeyer> rog: Can you please paste "bzr info" there?
<rog> yeah, i just worked that out
<rog> i know why it happened
<niemeyer> rog: It sounds like your custom workflow is getting in the way there
<rog> it's because i'd deleted my local copy of the branch
<rog> thinking i could always re-get it from lp later
<rog> but by getting it again, i lost the original parent
<rog> http://paste.ubuntu.com/752425/
<niemeyer> rog: Yeah, that's it
<niemeyer> Cool
<rog> niemeyer: i don't think it was because of my custom workflow
<niemeyer> rog: It was..
<rog> niemeyer: because deleting a local copy of the branch isn't customary?
<niemeyer> rog: In traditional DVCS workflows you don't really kill the local branch while you're working on it
<rog> niemeyer: i actually didn't delete it - i carried on editing and committing towards a later merge request
<niemeyer> <rog> it's because i'd deleted my local copy of the branch
<niemeyer> <rog> thinking i could always re-get it from lp later
<niemeyer> ??
<rog> niemeyer: well, in my head it was deleted...
<niemeyer> rog: In Bazaar's head too, apparently, since you got the branch again from Launchpad (the parent says so)
<rog> niemeyer: it had turned into a new branch, which i didn't want to interfere with
<rog> niemeyer: maybe there was another way i could have got it from launchpad so that the original parent was preserved
<rog> niemeyer: i didn't know that parentage was so important
<niemeyer> rog: Just don't kill the branch while you're working on it
<niemeyer> rog: The branch existence is important
<rog> niemeyer: yeah, i should just re-branch every time i do a merge request
<rog> niemeyer: rather than carrying on working in the same directory
<niemeyer> rog: Yeah, every time you want to start a new line of development, re-branch
<rog> niemeyer: it's a bit of a pain because i have to lose all my editing state.
<niemeyer> rog: FWIW, that's a normal impedance mismatch when getting into DVCS
<niemeyer> rog: That bit of it is due to Bazaar's way of working with multiple directories
<niemeyer> rog: This is going to be addressed soon
<niemeyer> rog: With a git-like workflow, you can have multiple branches in the same directory
<rog> niemeyer: of course. with other VCSs you'd still be in the same dir
<niemeyer> rog: bzr is getting the same feature
<rog> niemeyer: i think you *can* do that now - someone pointed me at a way of doing it
<rog> niemeyer: but it's probably a hack
<niemeyer> rog: http://doc.bazaar.canonical.com/developers/colocated-branches.html
<rog> niemeyer: anyway, regardless of glitches, it's awesome that we've got proper code review working! nice one.
<niemeyer> rog: Totally, I'm very happy about that
<rog> niemeyer: one thing: i think lbox propose should complain if there are uncommitted changes. i often forget to commit!
<rog> niemeyer: (just like bzr push complains)
<niemeyer> rog: Agreed
<niemeyer> rog: I was already planning something like that.. will tweak the Delta interface on goetveld
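Until lbox grows that check, a wrapper along these lines approximates it (propose_guard is a made-up helper, not an lbox command; `bzr status --short` prints nothing when the tree is clean):

```shell
# Refuse to propose while there are uncommitted changes, mimicking the
# complaint bzr push gives.
propose_guard() {
    if [ -n "$(bzr status --short 2>/dev/null)" ]; then
        echo "uncommitted changes; bzr commit first" >&2
        return 1
    fi
    lbox propose "$@"
}
```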
<rog> niemeyer: BTW, i tried pushing to lp:goyaml and got "Transport operation not possible: readonly transport". is that because i'm not yet recognised as a member?
<rog> niemeyer: or do i have to re-branch, now that i am?
<niemeyer> rog: No, just because of the URL
<rog> niemeyer: is it the wrong url?
<niemeyer> rog: If you look at bzr info, you'll see the URL is a read-only one
<niemeyer> rog: You can push explicitly to lp:~gophers/goyaml/trunk
<rog> where in the info (http://paste.ubuntu.com/752435/) does it say read-only?
<rog> ah, you mean the url for lp:goyaml?
<rog> so is lp:~gophers/goyaml/trunk an alias for lp:goyaml ?
<rog> niemeyer: or is there something else subtle going on here?
<niemeyer> rog: It is now, but it wasn't at the time you branched
<niemeyer> rog: Oh, hold on
<niemeyer> rog: It's my fault, actually
<niemeyer> rog: I've changed the project maintainer, but not the branch
<niemeyer> rog: It's still pointing to my personal branch
<rog> niemeyer: ah, so bzr push lp:goyaml should work, in fact?
<niemeyer> rog: So, let's do this.. push it to the ~gophers URL.. I'll tweak the official branch location after that
<niemeyer> rog: It will, definitely
<rog> cool
<rog> niemeyer: pushed now
<niemeyer> rog: Done, we have a new lp:goyaml
<rog> niemeyer: cool.
<niemeyer> rog: You're right, btw, we need an "lbox merge"
<niemeyer> fwereade: Hey!
<fwereade> heya niemeyer!
<niemeyer> rog: Or "lbox submit" perhaps.. :)
<fwereade> good holiday?
<niemeyer> fwereade: Yeah, awesome
<fwereade> niemeyer, cool, where did you go?
<rog> fwereade: afternoon guvnor
<fwereade> heya rog
<niemeyer> fwereade: I went to João Pessoa, a very nice region in the northeast of Brazil
<hazmat> g'morning
<niemeyer> hazmat: morning!
<fwereade> heya hazmat :)
<hazmat> niemeyer, welcome back, sounds like a nice vacation
<niemeyer> hazmat: Thanks! Yeah, it was very relaxing
<niemeyer> Left the laptop at home for a change
<hazmat> niemeyer, ah a disconnected holiday, even better.. pics of those clear blue seas from pessoa look amazing
<niemeyer> hazmat: Not entirely disconnected.. still had a phone with me.. but at least severely restricted, let's say ;)
<niemeyer> hazmat: Yeah, in the last day we went snorkeling here: http://perlbal.hi-pi.com/blog-images/536061/gd/1264122522/Picaozinho-Joao-Pessoa.jpg
<rog> niemeyer: another occasion it didn't seem to find the original merge proposal: http://paste.ubuntu.com/752444/
<niemeyer> It's about 1km from the coast inwards
<rog> niemeyer: looks lovely
<niemeyer> rog: yeah, now the problem is a different one
<niemeyer> rog: The merge proposal is off since we renamed the branch
<rog> niemeyer: ah, because lp:goyaml is just an alias, right?
<niemeyer> rog: That's right.. the merge proposal is still against lp:~niemeyer/...
<niemeyer> rog: You can let it know by hand of the target, and it will work
<rog> niemeyer: that would be better than making a new proposal?
<niemeyer> rog: I guess.. it'd avoid having you jump back and forth removing the previous one in lp and cr
<rog> niemeyer: we might have implemented lbox submit... which would submit to the wrong place :-)
<niemeyer> rog: Indeed.. luckily we won't be renaming things like that very often
<rog> true
<rog> niemeyer: so the original target was lp:~niemeyer/goyaml/goyaml ?
<niemeyer>  /trunk at the end
<rog> ah, of course
<rog> niemeyer: ok, i did the propose, but the codereview diffs seem unchanged
<rog> niemeyer: https://codereview.appspot.com/5432068/
<rog> ahhh f*!#
 * niemeyer waits for the bomb
<rog> niemeyer: i have to change the target... of course!
<niemeyer> rog: Ah, indeed :-)
<rog> niemeyer: otherwise i don't see the diffs against the target i've just pushed to
<rog> doh!
<rog> niemeyer: that's better: https://codereview.appspot.com/5431087/
<niemeyer> rog: WOohay!
<niemeyer> rog: Done
<rog> niemeyer: "branches have diverged". dammit!
<niemeyer> rog: I don't know what you're doing, but the way this generally works is that we all have a local copy of trunk..
<niemeyer> rog: Before merging a branch, pull from the remote to get the latest changes,
<niemeyer> rog: Then merge and push
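The pull-then-merge-then-push sequence niemeyer outlines, as a sketch (merge_and_push is a made-up helper; "trunk" is the local pristine copy of the remote trunk, and the branch layout is illustrative):

```shell
# Merge a sibling feature branch into the local trunk copy and push.
merge_and_push() {
    branch="$1"
    cd trunk || return 1
    bzr pull                    # get the latest remote changes first
    bzr merge "../$branch"      # then merge the feature branch
    bzr commit -m "Merge $branch."
    bzr push
    cd ..
}
```

Skipping the initial pull is exactly what produces the "branches have diverged" error seen above.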
<rog> niemeyer: yes, that was silly - i forgot the merge step.
<rog> niemeyer: i thought the problem was because the trunk had changed name
<rog> niemeyer: submitted
<marcoceppi> Is there anyway to catch if a relation has been broken?
<koolhead11> hi all
<mainerror> marcoceppi: Can't you do that with juju status?
<koolhead11> Is there a specific documentation i should look at for running juju in my existing openstack infrastructre?
<koolhead11> the documentation i am looking at is https://juju.ubuntu.com/docs/getting-started.html
<niemeyer> marcoceppi: You mean within a charm?
<marcoceppi> niemeyer: Yes
<niemeyer> marcoceppi: Yeah, there's both relation-departed and relation-broken
<niemeyer> marcoceppi: https://juju.ubuntu.com/docs/charm.html#hooks
<niemeyer> marcoceppi: departed is likely what you want
<rog> how do we pass initialisation options to zookeeper?
<rog> looking in juju/providers/common/cloudinit.py, i can see how zookeeper gets run (zookeeperd package, thanks hazmat), but i can't see how we configure the zookeeper address, for example.
<rog> fwereade: ^
<hazmat> rog, the zk address is the default address port 2181 on all ifaces
<fwereade> rog, hazmat beat me to it :)
<rog> hazmat: so the answer is "we don't"?
<hazmat> rog, its pretty common for daemons to bind to all available interfaces on their standard port
<hazmat> rog, yup
<rog> hazmat: ok, that makes more sense now
<rog> hazmat, fwereade: i couldn't work out how ZOOKEEPER_ADDRESS was getting through, one way or the other. there's so much care taken to make it configurable...
<hazmat> rog, ZOOKEEPER_ADDRESS is different.. its used to pass the ip:port info to agents, its passed on as an env variable to the agent process
<hazmat> its not used to configure zookeeper but the things using zookeeper
 * rog nods
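From the agent side, the mechanism hazmat describes amounts to reading an environment variable with a local default (a sketch; the variable name comes from the discussion above, the rest is illustrative):

```shell
# juju exports ZOOKEEPER_ADDRESS into the agent's environment; when it
# is absent, fall back to the local default used on the bootstrap node.
ZK_ADDR="${ZOOKEEPER_ADDRESS:-127.0.0.1:2181}"
echo "connecting to zookeeper at $ZK_ADDR"
```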
<hazmat> marcoceppi, -departed is called when any unit of the related service is removed, -broken is called when the relation is removed
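A minimal -departed hook acting on that distinction might look like this (sketch only: the "db" relation name and config path are invented; JUJU_REMOTE_UNIT is the hook environment variable naming the unit that is going away):

```shell
#!/bin/sh
# hooks/db-relation-departed: drop the departing unit's config entry.
set -e
unit=$(echo "$JUJU_REMOTE_UNIT" | tr / -)   # e.g. mysql/0 -> mysql-0
rm -f "/etc/phpmyadmin/servers.d/$unit.conf"
```

By contrast, in -broken the remote settings are already gone, so a hook there can only consult the unit's own state.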
<hazmat> koolhead11, the ec2 provider is used for openstack, there's an example in the list and on askubuntu
<koolhead11> hazmat: i googled and found one such thread started by jcastro
<koolhead11> but no much info.
<rog> hazmat: juju/control/initialize.py was the confusing bit. ZOOKEEPER_ADDRESS will never be set there, right?
 * koolhead11 goes back to askubuntu
<rog> hazmat, fwereade: or is there another subtlety i'm missing?
<fwereade> rog, sorry, let me check what's going on there
<fwereade> if it happens to be set it will be used
<fwereade> but:
<fwereade>     zk_address = os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181")
<fwereade> does that answer your question?
<fwereade> rog: ^^
<rog> fwereade: yes, i saw the default. i just wondered if there was ever an occasion when the default would not be used
<rog> fwereade: and i *think* the answer is never.
<fwereade> rog, offhand, I think so too
<rog> fwereade: good. i'm not too far off base then.
<hazmat> rog, the default is not used lots
<hazmat> rog, the common way the value is passed is via an environment variable
<hazmat> well at least for non bootstrap nodes
<hazmat> oh.. sorry
<rog> hazmat: that was my point - this *is* the bootstrap code
<niemeyer> And that's lunch time..
<hazmat> rog, indeed its always the default there
<rog> hazmat: cool. a comment (or even removing the env var reference) might help naive folks like me to understand things, i guess.
<marcoceppi> hazmat: So I've got a service that requires mysql. I want to capture when that interface goes away. So -broken if I were to remove the relations? Also, what information is passed to the hook?
<hazmat> marcoceppi, not much, the standard hook information + rel info (JUJU_RELATION), but afaicr its not possible to interrogate any of the remote settings via the hook cli api, because they're already gone; the unit can interrogate its own settings though
<marcoceppi> I just need to find the hostname of the hook that broke off, is that still available in: relation-get private-address
<jcastro> marcoceppi, bruno had ftp ready for review didn't he?
<marcoceppi> jcastro: Not that I'm aware of, last I heard he was starting on it but I've been in and out all weekend
<jcastro> he was ready for review as he was asking me how to tag it, but I can't find his branch. :-/
<marcoceppi> Neither can I :\
<hazmat> marcoceppi, its not
<hazmat> marcoceppi, there could be a dozen remote hosts
<hazmat> depending on the type of relation and which endpoint it is
<jcastro> hazmat, do you have irc topic powers here?
<hazmat> jcastro, i do
<marcoceppi> hazmat: So I'm creating a phpMyAdmin charm and it's able to join to multiple mysql servers
<marcoceppi> I'd like to capture when one goes away and remove it from the cfg
<hazmat> jcastro, something in particular you'd like in it?
<hazmat> marcoceppi, are they separate relations or a single relation?
<jcastro> hazmat, office hours please, something like: Office Hours (1600-1900UTC)
<jcastro> just tacked on at the end or whatever
* hazmat changed the topic of #juju to: http://j.mp/juju-florence http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog    Office Hours (1600-1900UTC)
<marcoceppi> Well, that's what I'm a little baffled about. Can you spin up multiples of the same service? Like have two separate MySQL services running independently of each other within the same bootstrap? Or would you simply add-unit?
<hazmat> marcoceppi, you can have multiple mysql services in an environment, and if you do add-unit you'd be adding units to the existing service. something like mysql doesn't really support multiple units of the same service outside of a master/slave setup, which i believe the charm models as two separate mysql services with a master/slave relation; in that case you can add-unit for slaves
 * hazmat grabs lunch
<jimbaker> it's ironic that my UPS failed (sometime last night) before i ever actually observed a power failure. in fact there have been no power failures here in the 7 years i've lived here
<rog> jimbaker: at least when your UPS fails you've still got mains power...
<jimbaker> rog, indeed, i do have amazingly reliable and inexpensive power here. time to build a data center in the basement ;)
<jimbaker> speaking of which, there was an article i saw recently about using data centers to heat buildings
<rog> in cloudinit.py, add_ssh_key states that at least one ssh key is required. why is that? wouldn't it be ok in theory to have a juju machine that wouldn't accept any incoming connections?
<rog> jimbaker: i've heard that
<rog> jimbaker: mind you, i'm not sure i'd situate a data centre next to people's houses - they have a nasty habit of burning down.
<jimbaker> data furnaces - http://www.nytimes.com/2011/11/27/business/data-furnaces-could-bring-heat-to-homes.html
<rog> jimbaker: not gonna be very good in summer though...
<jimbaker> apparently as long as the ambient air temp is < 95 deg F, it should work for the servers, just venting heat outside at that time, instead of inside
<jimbaker> with maybe the exception of one or two days a summer, that would be the case for my house
<rog> jimbaker: it's a nice idea. i wonder how data protection laws would apply to the home owner...
<rog> jimbaker: do you know the answer to the above question, BTW?
<rog> jimbaker: i'm just wondering what would break if we allowed no ssh keys
<rog> hazmat?
<jimbaker> rog, i don't know the answer to cloudinit above. maybe just fix cloudinit for this case?
<jimbaker> worth taking a look at its codebase
<rog> i'm not sure that cloudinit itself requires any ssh keys
<rog> but i may be mistaken
 * rog goes to check
<jimbaker> interesting related point about txaws not verifying ssl. didn't know this. sounds like another reason to move to boto, which does have such support
<rog> boto?
<jimbaker> it's pretty much the standard library for working with aws
<rog> jimbaker: well, i'm using goamz :-)
<rog> jimbaker: when you say "not verifying" do you mean it doesn't check the cert chain?
<jimbaker> rog, unlike txaws, it has extensive support for nearly all of the aws api. its disadvantages are that it is blocking (but easy to workaround with deferToThread, just like what we do elsewhere in the python version of juju)
<jimbaker> rog, that's what i understand from niemeyer's email
<niemeyer> jimbaker: Have you checked if boto is testing for SSL certs?
<hazmat> rog, the client needs ssh to work to connect to the bootstrap node
<niemeyer> jimbaker: Also, it's not an easy transition
<jimbaker> niemeyer, this is based on http://www.heikkitoivonen.net/blog/2009/10/12/using-m2crypto-with-boto-secure-access-to-amazon-web-services/
<rog> hazmat: so the bootstrap node needs to allow ssh. what about the other nodes?
<hazmat> rog, they need it minimally to support ssh/scp/debug-hook commands
<rog> hazmat: ok, so nothing crucial relies on it. just wanted to check.
<niemeyer> jimbaker: If you actually read the code there, you'll notice that it's not boto that is verifying the certificate
<niemeyer> jimbaker: If you're going to fix it for boto, you can as well fix it for txaws
<hazmat> rog, nothing internally no, except for client access to the bootstrap node
<jimbaker> niemeyer, likely it's worth looking at both. again you raised a good point about txaws. at this point, i know that people use boto successfully in this way to verify ssl :)
<rog> hazmat: cool. so "You have to set at least one SSH key." should really be "The bootstrap node requires at least one SSH key. Without at least one SSH key, other nodes will not allow ssh access, e.g. ssh, scp, debug-hook" or something like that?
<jimbaker> txaws has some users, but boto is used extensively
<jimbaker> including with twisted as i understand it
<hazmat> rog, those other subcommands are part of the interface juju exports
<hazmat> jimbaker, boto is used by twisted?
<hazmat> er. with
<rog> hazmat: that's true, but they're not key to the infrastructure. i could imagine creating a high-security node that allowed no ingress. it wouldn't break anything to do that. thus "must" seems a bit strong.
<rog> hazmat: but YMMV of course
<jimbaker> hazmat, i have seen boto + twisted in the openstack tests, for example
<hazmat> rog, by that notion ssh keys wouldn't be required at all once there's a REST interface
<hazmat> jimbaker, openstack doesn't use twisted anymore.. and honestly it's not a great example of proper usage of anything imo
<rog> hazmat: well, i guess REST implies some encryption 'cos we'd be using https, so yes, i'd agree.
<hazmat> it's getting better, but the codebase was originally a ball of mud with different styles and a mix of sync/async and library usage
<jimbaker> hazmat, this was just in the tests. i know they are using gevent. probably want to find better examples before we decide anything :)
<hazmat> eventlet .. same difference though
<hazmat> jimbaker, there are no twisted imports in nova
<rog> hazmat: i certainly think we should default to allowing ssh access, but i'm not sure it should be strictly required. and in fact, i don't see anything that checks the requirement now - it'll probably just work as is
<hazmat> jimbaker, which tests?.. i don't know that we'll find many examples of twisted code bases using boto
<jimbaker> hazmat, i will try to dig this up
<therve> jimbaker, to throw some counter points 1) boto code base is relatively bad, and barely has any tests 2) most of the coverage of AWS API is useless to juju
<jimbaker> therve, this is in fact a very good counterpoint
<jimbaker> the only thing we can say about boto is that it's heavily used across various impls of the EC2 api. whether or not that makes up for its flaws, i don't know
<jimbaker> the best thing would be extensive usage in the wild + extensive unit testing
<jimbaker> anyway, just bringing up as a possibility
<therve> jimbaker, in the mean time, I'd be happy to help if there are problems with txaws :)
<hazmat> jimbaker, i think i missed the context here, we're just talking about ssl cert checking
<rog> hazmat: no, it is checked.
<jimbaker> i believe this is in ref to https://bugs.launchpad.net/txaws/+bug/781949
<_mup_> Bug #781949: Must check certificates for validity <txAWS:New> < https://launchpad.net/bugs/781949 >
<jimbaker> which was filed by niemeyer
<therve> I'll assign that to me
<jimbaker> therve, thanks
<hazmat> jimbaker, we have to be careful there, or at least only do it optionally; openstack setups don't necessarily have valid certs
<niemeyer> jimbaker: Time machine..
<jimbaker> hazmat, sounds good
<jimbaker> niemeyer, what do you mean?
<niemeyer> <jimbaker> which was filed by niemeyer
<niemeyer> ... back in May.
<jimbaker> niemeyer, yes, so that's good right? i'm just pointing this out to ensure it's the right bug in question
<niemeyer> jimbaker: Yeah, I'm not saying it's bad in any way.. it was just a bad joke
<jimbaker> niemeyer, cool
<jimbaker> hazmat, standup today?
<SpamapS> we don't verify the amazon cert right now?
<jimbaker> SpamapS, as i understand it, yes
<SpamapS> thats quite serious IMO
<SpamapS> theft of AWS credentials could be *VERY* costly
<jimbaker> SpamapS, indeed. think of the possible botnets
<SpamapS> just thinking of the possible bill
<marcoceppi> So, could I simply symlink upgrade-charm hook to the install hook?
<m_3> marcoceppi: don't see why not... as long as you change idempotency guards to handle any re-install logic needed
<marcoceppi> m_3: Right, I just realized my install hook is idempotent and since the upgrade never runs the install again it would pretty much be exactly what the install does with the exception of a few things
<m_3> marcoceppi: might have to add a couple of things like an initial "apt-get update" that you wouldn't normally need
<marcoceppi> Good point
<m_3> marcoceppi: lp:charm/ceph is an example of this
<m_3> w/o the additional apt-get update :)
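The shared-hook idea above, sketched with the extra apt-get update m_3 mentions (the package name "myapp" and config dir are examples; the hook file would be reused via `ln -s install hooks/upgrade-charm`):

```shell
# An idempotent install hook that upgrade-charm can safely re-run.
do_install() {
    apt-get update -qq                   # fresh index: needed on upgrade
    apt-get install -y -qq myapp         # no-op when already current
    mkdir -p "${CONF_DIR:-/etc/myapp}"   # -p keeps this re-runnable
}
# juju sets JUJU_UNIT_NAME inside hooks; the guard keeps this sketch
# inert when sourced outside a hook context.
if [ -n "$JUJU_UNIT_NAME" ]; then do_install; fi
```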
<marcoceppi> Cool I think this charm is about ready. Need to test it a bit more
<hazmat> jcastro, this askubuntu thing is great ;-)
<hazmat> hmm.. i'm not sure if the unzip will respect symlinks or not
<m_3> marcoceppi: awesome man!
<hazmat> koolhead17, you mentioned on askubuntu that you were able to bootstrap without a local ssh key.. afaik this isn't possible
<hazmat> ie. it should always error out if that's the case
<koolhead17> hazmat: am trying to run juju on my existing openstack infra
 * hazmat nods
<koolhead17> juju bootstrap gave no error
<koolhead17> it was juju status which failed
<koolhead17> with some ssh related error
<hazmat> hmm.. that's a regression
<koolhead17> and i can see an instance running too
<koolhead17> juju-default
<koolhead17> i was wondering how juju tries to acquire ssh access
<koolhead17> verbose says with user ubuntu
<koolhead17> what about the password?
<hazmat> koolhead17, it uses ssh keys everywhere, bootstrap should fail if there are no keys
<hazmat> before launching an instance
<koolhead17> by default images get password as "password" for user ubuntu and user root
<koolhead17> on the Oneiric cloud image
<koolhead17> hazmat: i created a keypair and then i was able to pass juju bootstrap part
<koolhead17> :D
<hazmat> koolhead17, juju doesn't reference api key pairs
<koolhead17> hazmat: so what is the best way to get juju running inside openstack?
<koolhead17> i had few other issues too.
<koolhead17> like the instance gets acquired with internal IP
<hazmat> koolhead17, one moment, just trying to verify the key thing against an openstack install
<jcastro> hi koolhead17
<koolhead17> jcastro: hello sir. :)
<koolhead17> i was looking for you
<jcastro> awesome, I was looking for you, you go first!
<koolhead17> about config related issue while running juju using openstack
<koolhead17> i got it figured
<koolhead17> :D
<marcoceppi> Apparently source isn't built into sh?
<koolhead17> jcastro: like last release i would like to work with ubuntu server guide, let me know how can i help. last time i did final revision part
<jcastro> koolhead17, we could always use help writing charms
<jcastro> and iirc at some point soonish hazmat will split the docs from juju itself
<koolhead17> jcastro: hmm. i already have a few assigned but i had to work on something else
<jcastro> the docs could really use a review
<hazmat> koolhead17, just tried it without a key.. it does fail.. http://pastebin.ubuntu.com/752729/
<hazmat> jcastro, i've gotten some push back from that.. it should probably get brought up on list
<jcastro> ok
<jcastro> I'll bring it up
<hazmat> jcastro, cool
<koolhead17> hazmat: i had a similar error and what i did was run ssh-keygen to generate a key for my local user
<jcastro> marcoceppi, m_3, SpamapS: incoming new charm, teamspeak.
<m_3> marcoceppi: dash...  Use '. file.sh' instead of 'source file.sh'
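A quick illustration of the difference (the file and variable are throwaway examples):

```shell
# dash (Ubuntu's /bin/sh) has no `source` builtin; the portable POSIX
# spelling is `.`, which works in sh, dash, and bash alike.
f=$(mktemp)
echo 'GREETING=hello' > "$f"
. "$f"             # bash-only equivalent: source "$f"
echo "$GREETING"   # prints: hello
rm -f "$f"
```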
<koolhead17> and when i executed ensemble-bootstrap it worked
<koolhead17> its ensemble-status where am failing
<jcastro> marcoceppi, oh hey was it you that was going to add new bling to the IRC bot? newly tagged "new-charm" announced here for review would be awesome.
<koolhead17> and i know the reason
<hazmat> koolhead17, right but that error is that bootstrap won't work without a key.. which is different than status not working.. the latter is more that the key didn't get installed onto the instance
<m_3> jcastro: just saw it pop up on the review queue
<jcastro> rock and roll
<marcoceppi> jcastro: Yeah, and feed Ask Ubuntu questions into here
<jcastro> m_3,  he's on a roll, he'll probably submit FTP tonight as well
<koolhead17> hazmat: yes the latter part (ensemble status) is where i'm stuck currently
<koolhead17> does my default juju use user "ubuntu" and passwd "ubuntu"
<koolhead17> to connect to instance
<koolhead17> ?
<hazmat> koolhead17, can you pastebin the euca-get-console-output for that instance
<hazmat> koolhead17, there are no passwords just ssh keys
<koolhead17> hazmat: am home now. i will do that once in office
<hazmat> koolhead17, which release is the image btw?
<koolhead17> hazmat: oneiric cloud 64 bit
<koolhead17> tar.gz file
<hazmat> koolhead17, k, i'm trying a full run on openstack now
<koolhead17> i use cloud-publish-tarball
<m_3> jcastro marcoceppi: autofeeding askubuntu questions here would rock!
<m_3> of course, answering them here would be pretty cool too, but that might be... tough
<jcastro> you can always take a good question and answer that you answer here and post it there
<jcastro> as a self-documenting thing
 * m_3 is more geeking out about mup-style integration
<mpl> hmm, isn't there a way to have your authentified irc nick be linked to your askubuntu account? some sort of openid mechanism. then there could be a bot here to which we could feed questions and answers.
<hazmat> koolhead17, reproduced
<koolhead17> hazmat: cool and?
<hazmat> koolhead17, still debuggin
<koolhead17> am running it on the same nova system
<koolhead17> and added all openstack related credentials in config.yaml file
<marcoceppi> mpl: No write access on the Ask Ubuntu API yet, so unless you emulated a web browser, logged in and maintained a session for each user, etc (it would become messy quick)
<mpl> too bad.
<koolhead17> hazmat: also is there a way to add cloud-init kind stuff in config
<koolhead17> am saying this because when am running juju behind proxy it will fail
<hazmat> koolhead17, short answer, no, you can put a bug in for http proxy support though
<hazmat> not sure how that's related to cloud-init re proxy
<koolhead17> hazmat: i was giving an example :P
<hazmat> so on openstack cloud-init finishes fine, but it doesn't seem to install the key
<koolhead17> hazmat: i have not tried cloud-init
<koolhead17> its the juju status
<koolhead17> where am stuck
<koolhead17> with ssh connection error
<koolhead17> i will paste exact error once am in office tommorow
<koolhead17> hazmat: juju starts an instance with juju-default but its not able to ssh to it
<koolhead17> after i execute juju status
<hazmat> koolhead17, yes, i understand the problem
<hazmat> and i'm able to reproduce it
<koolhead17> hazmat: cool. :)
<koolhead17> i have 2 more issues which i was concerned about
<koolhead17> by default when an instance starts it acquires a private IP
<koolhead17> say 192.168.1.1
<koolhead17> we attach the instance to public IP
<koolhead17> for communicating with outside world
 * hazmat nods
<hazmat> koolhead17, i do the same for interacting with our openstack cluster
<koolhead17> now if i will use juju it means i need to have my internal IP connected to internet in order 4 juju to fetch pkgs
<koolhead17> hazmat: but when i will say juju deploy mysql
<koolhead17> it will go in background and do apt-get install mysqlserver
<koolhead17> but since its not connected to internet it will fail
<marcoceppi> _mup_
<koolhead17> what is the way out
<hazmat> koolhead17, yes, but basic NAT traversal should allow for that, are you saying that the cluster has zero connectivity to the internet
<koolhead17> hazmat: yes
<koolhead17> the VM in internal nw
<koolhead17> :-(
<hazmat> or that the openstack internal network isn't bridged? .. the fact that you can communicate at all with the bootstrap node suggests otherwise
<hazmat> koolhead17, being in an internal network is fine if it has outbound access
 * m_3 needs charm review snippets :)
<koolhead17> hazmat: which means it has to have internet access ?
<koolhead17> :D
<hazmat> koolhead17, that or an internal apt proxy and image customization
<hazmat> s/proxy/cache
<hazmat> same difference
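The internal apt cache route hazmat suggests comes down to one apt config line per instance. A minimal sketch, assuming an apt-cacher-ng style cache; the hostname is invented and the default port 3142 is apt-cacher-ng's, and the file is written to a temp dir here rather than its real home under /etc/apt/apt.conf.d/:

```shell
#!/bin/sh
# Sketch: point apt at an internal package cache so instances with no
# outbound internet access can still install packages.
conf_dir=$(mktemp -d)
cat > "$conf_dir/01proxy" <<'EOF'
Acquire::http::Proxy "http://apt-cache.internal:3142";
EOF
# On a real instance this file would be /etc/apt/apt.conf.d/01proxy,
# after which apt-get routes all http fetches through the cache.
```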
<marcoceppi> Which bot is _mup_?
<koolhead17> hazmat: well am talking about 2 different things
<hazmat> marcoceppi, launchpad.net/mup
<koolhead17> 1. my cluster should know about proxy-server when am trying to orchestrate via Juju
<koolhead17> 2. about this private IP and its necessity to have an internet connection.
<hazmat> koolhead17, so it took some time but the key was eventually installed on the bootstrap instance
<koolhead17> hazmat: it failed in my case. :)
<hazmat> koolhead17, i wonder if it just takes some additional time
<koolhead17> hazmat: you will have to share your yaml file with me then. :P
<koolhead17> juju bootstrap does work without error
<hazmat> koolhead17, sure, and that means the instance is started, and that an ssh key was found
<koolhead17> it's juju status where i fail because of ssh credentials
<koolhead17> hazmat: +1
<hazmat> koolhead17, right, i had the same problem, but it did work after i waited a few minutes, i'm still not clear why since cloud-init had finished
<hazmat> koolhead17, which version of openstack are you using?
<koolhead17> diablo
<koolhead17> from the ubuntu repository
<hazmat> koolhead17, i think it has something to do with a delay in associating the public-address to the instance
<hazmat> koolhead17, because the ssh host fingerprint changed
<koolhead17> hazmat: the "-v" showed its associated with ip and instance ID
<hazmat> koolhead17, i know.. this is internal to openstack
<koolhead17> hazmat: what would you suggest and how should i proceed
<hazmat> it shows the address correctly on the metadata, but the action of connecting the instance to the ip address is derived off an async activity
<hazmat> koolhead17, offhand i'm not sure outside of waiting, the question is verifying that the ip address is connected to the right instance, which means verifying openstack internal state
<hazmat> asking on #openstack
<koolhead17> hazmat: can i trouble you tomorrow once am in office
<koolhead17> hazmat: so you're saying you are able to run/execute juju status without failing from that ssh related issue
<hazmat> koolhead17, yes
<hazmat> koolhead17, i had the issue, i waited, it went away
<jcastro> marcoceppi, I forgot a totally obvious service but added it to the spreadsheet, status.net aka. run your own twitter
<koolhead17> hazmat: one more thing
<jcastro> that would be useful for organizations that want the benefits of microblogging but for internal reasons
<koolhead17> does juju status result in connecting and getting some info from an external metadata server
<marcoceppi> jcastro: oh, duh! good call
<hazmat> koolhead17, it talks to the nova api endpoint not the instance metadata server
<hazmat> koolhead17, cloud-init does talk to the instance metadata server
<hazmat> but that's independent of status
<koolhead17> hazmat: my internal nw has no internet in it.  :)
<koolhead17> can that be the reason 4 fail
<hazmat> koolhead17, possibly.. i find that likelihood very strange though..  inbound access and outbound access via a NAT are different concerns
<koolhead17> hazmat: i have to read much further and find some more things :)
<negronjl> m_3: ping
<m_3> negronjl: yo
<m_3> ssup?
<negronjl> m_3:  I just updated and tested the new mongodb charm with replicaset in oneiric but, I need another set of eyes to ensure I am not crazier than usual.  When you get a chance ( no rush ) can you test it.
<negronjl> ?
<negronjl> m_3: In the meantime, can you tell me ( again ) where your hadoop charm for oneiric is ?  I am going to consolidate them all into one charm .
<m_3> negronjl: sure thing on reviewing the mongodb charm
<m_3> negronjl: lemme look to see that the latest is in trunk on the hadoop charms
<negronjl> m_3:  ok on the hadoop thing ...
<negronjl> m_3:  let me know if you need help testing the mongodb thing ( commands and such )
<SpamapS> speaking of oneiric ...
<SpamapS> seems like we missed a huge set of work items at UDS
<negronjl> SpamapS: do tel
<SpamapS> which is.. develop or coopt tools to do releases
<negronjl> SpamapS: do tell
<negronjl> SpamapS: tools like charm create you mean ??
<SpamapS> Like, we need to copy all of the branches from oneiric -> precise
<SpamapS> and automatically backport new stuff from precise -> oneiric
<m_3> negronjl: lp:charm/hadoop-{master,slave} have everything for oneiric
<negronjl> m_3: thx
<m_3> negronjl: the hard part is making sure we have a single ppa or other repo that is consistent across natty and oneiric (we don't currently)
<m_3> negronjl: ppa:canonical-sig works for natty... ppa:mark-mims/hadoop works for oneiric
<negronjl> m_3:  To test the sanity of the "one hadoop charm to rule them all" theory, I'll start by if,then on the release to figure out which ppa to use... I'll work on the one ppa later.
<negronjl> SpamapS: We can still do this but, I think it'll have to be done manually at first
<m_3> negronjl: I was planning on building all the dep projects into ppa:mark-mims/hadoop for natty,
<negronjl> m_3:  The natty one is already in LP ... let me get it for ya
<m_3> negronjl: but we should get the right "partner" repo working correctly
<m_3> for both!
<SpamapS> negronjl: ugh. I really don't want to get into a situation where we are manually doing anything other than merges of conflicting changes
<negronjl> m_3: that shouldn't be hard as long as both packages work the same
<hazmat> negronjl, did you publish the new charm to lp?
<m_3> negronjl: there's a blueprint for packaging the whole bigtop stack BTW...
<hazmat> negronjl, re mongodb
<negronjl> hazmat: i did
<negronjl> hazmat: lp:charm/mongodb
<hazmat> negronjl, where ? ah.. it needs to be under charm/oneiric/mongodb for the store or charm browser to find it
<hazmat> ie. its missing the series
<hazmat> er.. actually... charm/oneiric/mongodb
<negronjl> hazmat: I just updated the lp:charm/mongodb so, if you saw it an hour ago, you should see it just fine now.
<negronjl> hazmat: I'll double-check the placement in LP ... please hold
<hazmat> negronjl,  maybe i'm missing something.. i see charm/thinkup
<hazmat>  and oneiric/postgresql
<negronjl> hazmat: https://code.launchpad.net/~charmers/charm/oneiric/mongodb/trunk
<m_3> SpamapS: is the backport process anything more than commit hooks atm?
<jcastro> negronjl, can we catch up this week wrt. the charms you are working on? I need a sync up. (Not high priority)
<hazmat> negronjl, maybe this was just very recent (less than 15m) ?
<negronjl> hazmat: more like 3mins ago :P
<hazmat> negronjl, ah.. that would explain it
<negronjl> hazmat: you're so 3 minutes ago :D
<SpamapS> m_3: commit hooks would be too automatic.. we need a way to say "I just broke backward compatibility" and stop auto-backporting of a branch.
<negronjl> SpamapS: I understand that we want to do this automatically but, I for one, have no idea of all of the implications of this and, by doing it manually once, we should be able to learn from the mistakes that we make, issues, etc. and create a "sane" automated process for this.
<m_3> negronjl: so I still have a build VM set up... can dput the packages to a new repo pretty easily I think... just lemme know
<negronjl> m_3:  Let me add you to the existing repo ... let see if it works that way ... hold on
<m_3> no merge conflicts => "it's golden, ship it" ;)
<SpamapS> negronjl: maybe the answer is to open the precise series, without actually making it the dev focus.. so lp:charm would still be oneiric until precise releases....
<SpamapS> meh
<SpamapS> see we need a face to face for this
<negronjl> SpamapS: G+ ?
<m_3> negronjl: it might be best to _start_ with your plan of conditional repos based on series... get one charm to rule them all... then get one repo to rule them all as a next step.  gets us to working state soonest I think
<SpamapS> Maybe we should drop the notion of the series altogether. If the charm is in lp:charm, it needs to work on all supported Ubuntu releases...
<negronjl> m_3:  I'll take a look at it either way and see where the path of least resistance is
<SpamapS> and if we just can't do that for some release of Ubuntu.. then we can create a series-specific branch for that charm and that release.
<negronjl> SpamapS: I think that's the ultimate goal but I also think that the lp:charm charms will need to be heavily reviewed ( modified ? ) by us to ensure that we work all of the kinks out
<SpamapS> negronjl: s/reviewed/tested/
<SpamapS> Thats where the automated testing bits come in
<marcoceppi> Does the db-admin relation work?
<marcoceppi> on the MySQL charm
<negronjl> marcoceppi: It does ... I've used it
 * m_3 fantasizes about automated testing...
 * SpamapS tosses a bucket of cold water on m_3
<SpamapS> focus!
<m_3> marcoceppi: haven't reviewed/tested your changes yet if that's what you mean
 * m_3 didn't say "humps the leg of automated testing..."
<negronjl> marcoceppi: ahh ... didn't know if you were asking about changes that you may have made to the mysql charm ... I haven't tested any changes but, in the not so distant past, I have used the mysql-admin interface.
<negronjl> marcoceppi: ... and that worked.
<marcoceppi> I wasn't necessarily, all I did was change the metadata.yaml file to give db-admin a mysql-root interface
<marcoceppi> since db-admin and db are the same hook file
<negronjl> marcoceppi: I'll be working on that charm soon ( to use it for CloudFoundry as opposed to a cf only one ) so, if I find any weirdness, I'll let you know
<marcoceppi> I just got a state error when adding a relation to db-admin, was looking for an excuse to not be my fault
<negronjl> marcoceppi: lol ..... aren't you the new guy ??? then it's always your fault :)
<marcoceppi> hum, does db-admin need to create a database?
<negronjl> m_3:  https://launchpad.net/~canonical-sig/+archive/thirdparty  <---- natty hadoop packages
<negronjl> marcoceppi: i don't think so,  it should be for you to get root to a db
<marcoceppi> Cool, I'll update that as well
<negronjl> marcoceppi: ahh ... I now remember why I made another charm for mysql
<negronjl> marcoceppi: mysql-admin gives you root to a db only ... CloudFoundry needs root to it all
<marcoceppi> what. what is the point of that?
<marcoceppi> I guess there were more changes needed than I thought
<negronjl> marcoceppi: the point of what ?
<marcoceppi> having a db-admin if you don't get an admin account?
<m_3> negronjl: thanks... I'll try a dput and let you know
<negronjl> marcoceppi: you get admin of a particular DB
<m_3> negronjl: so there were lots of other repos I had to add b/c of build deps (listed http://goo.gl/n5T2i)
<m_3> negronjl: those'll have to be added as well for oneiric
<negronjl> m_3:  ok ... hit it
 * negronjl crosses fingers and prays that it works 
 * m_3 does too
<_mup_> Bug #897360 was filed: Separate docs from source <juju:Confirmed> < https://launchpad.net/bugs/897360 >
<hazmat> niemeyer, any thoughts re my comments on GC
<niemeyer> hazmat: haven't read them yet
<m_3> negronjl: dputs rejected: "Signer has no upload rights to this PPA."
<negronjl> m_3: what's your lp username ?
<m_3> mark-mims
<negronjl> m_3:  the group belongs to zaid_h and I don't have access either ... I'm trying to get you access ....
<negronjl> m_3:  if that doesn't work, maybe we can do it the other way around and I'll dput the packages from canonical-sig into your ppa
<niemeyer> hazmat: Didn't get anything in my inbox regarding the issue?
<hazmat> niemeyer, interesting i don't see it on the mp
<hazmat> oh.. i have to setup my postfix post ssd install
<hazmat> doh
<m_3> negronjl: I'll have to clone up a new natty instance and rebuild the packages there...
<hazmat> i was wondering why  i haven't gotten any responses to emails recently
<niemeyer> hazmat: Cool, np
<niemeyer> hazmat: I'm stepping out for some exercising, but will check when I'm back
<hazmat> niemeyer, cheers
<m_3> negronjl: lemme do other charm stuff while waiting for Zaid...if he doesn't get back later today I'll spin up the natty builds.  rather not have to go down that rabbit-hole if we don't have to.  Do you need this now for anything?
<negronjl> m_3: not at all .. no rush
<m_3> cool man... thanks
<m_3> hazmat: your mail from last week is being delivered
<hazmat> m_3, yeah.. just flushed the system
<hazmat> used a clean install on my new ssd and forgot to set up my postfix properly.. but it's pretty rockin, the ssd that is
<negronjl> m_3: no rush ... you got access:  https://launchpad.net/~canonical-sig/+members#active
<m_3> negronjl: thanks
 * negronjl is out to lunch
<_mup_> juju/expose-refactor r421 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<robbiew> SpamapS: call time?
<jcastro> m_3, hey so, no worries
<jcastro> but another incoming charm. :)
<marcoceppi> If someone has time to review https://code.launchpad.net/~marcoceppi/charm/oneiric/mysql/db-admin-relation-fix/+merge/83690 I'd be grateful
<m_3> jcastro: cool... on it
<marcoceppi> Was there ever a decision on what to do about cryptographic/source checks when being checkedout from a remote repository (git, svn, etc)?
<hazmat> marcoceppi, i think if its via a secure channel it was fine
<hazmat> niemeyer, can lbox setup the reitveld for an existing branch?
 * negronjl is back
<m_3> negronjl: I think that the ppa:canonical-sig/thirdparty is going to work
<m_3> negronjl: it's still building, but it looks like it'll succeed
<negronjl> m_3: cool ...  it should make things simpler :)
<m_3> much
<m_3> we'll need to test it tho
<m_3> negronjl: ok, it finished
<negronjl> m_3:  I'll start testing a bit later
<m_3> cool... me too
<negronjl> m_3:  I'm changing the hadoop charms so you can specify your own MapReduce job ... I am building on top of yours.  I'll share later when I am done testing but, it should make the charm usable with any MapReduce job that can be downloaded
<m_3> negronjl: yeah, I was going to talk to you about that... it was built for the demo but should totally be generalized
<m_3> negronjl: config has lots of specific stuff too
<negronjl> m_3: I think I am going to follow your logic of having a script .... Maybe in the config.yaml file, I'll just have a var that is a script that will do everything that is needed.  Each job is pretty unique and I don't think I'll be able to generalize enough to make it work for a big chunk of jobs out there.
<m_3> negronjl: perhaps a url for pulling the MR jar even?
<negronjl> m_3: I'll have an example and we can all work on it from there ....
<m_3> negronjl: we can also remove the job-related scripts too... whatever makes sense
<negronjl> m_3: removing the job-related stuff ... sure ... I am also toying with the idea of creating a new interface that will provide the job ... not sure how yet but, it would give us the hadoop charm and then we can have something separate ( a MapReduce charm ?? ) that will actually execute the job ( or provide the necessary stuff so the master can execute the job ).
<negronjl> m_3: many ideas to play with ... I'll do some stuff and we'll see where I end up.
<negronjl> m_3: if you have ideas, let me know too :)
<m_3> cool... happy to help, just lemme know
<m_3> maybe g+ sometime after the first pass
<negronjl> m_3:  will do
<SpamapS> What does one do if they don't have an automated install/config of HDFS/Hadoop ?
<SpamapS> ssh in and scp your .jar's manually?
<negronjl> SpamapS: yes
<SpamapS> interesting
<negronjl> SpamapS: but, the majority of people that use MapReduce often have scripts that do this stuff
<negronjl> SpamapS: My idea is to provide a mechanism by which people can put their scripts in a charm ( interface or otherwise ) so, they can use the charm to deploy hadoop but still have their scripts to run their jobs
<negronjl> SpamapS: IMO it's a more realistic use of the charm
<m_3> negronjl: very similar to a capistrano integration with a charm
<negronjl> m_3: not sure about that ... I have two options that I am about to investigate :
<negronjl> m_3: 1. In the charm, provide a variable where the charm will download and execute a script ( dangerous ?? )
<negronjl> m_3: 2.  Provide an interface where, upon relation, will do the same
<negronjl> m_3: two sides of the same coin ... they both will end up executing arbitrary code
<m_3> I had always imagined option 1, but that doesn't really mean anything... just haven't thought too much about it
<m_3> can totally see long-running hadoop services
<negronjl> m_3: Option 1 is probably the easiest to implement but, what I really want to do is completely decouple the charm from the job
<m_3> and short-running job services that attach, run, then detach
<negronjl> m_3:  Ideally, in option 2, I can clean up the job left-overs upon breaking the relationship
<m_3> that's a new angle too btw... good to blog about
<negronjl> m_3: this way the same hadoop cluster can be used for multiple jobs
<m_3> negronjl: yup
<negronjl> m_3:  It all sounds cool  .... IF WE CAN GET IT TO WORK :P
 * m_3 grins
<m_3> marcoceppi: yo
<marcoceppi> m_3: hey
<brunopereira81> hey
<m_3> marcoceppi: I assume the phpmyadmin package is broken?
<marcoceppi> I guess?
<m_3> brunopereira81: hi
<marcoceppi> m_3: I opted to use source, because source is cooler than apt. And it appeared there was an issue with deb-conf
<m_3> marcoceppi: ah, yeah that's what I was wondering
<marcoceppi> Is that okay? I thought charms kind of favored source over debs since the repo tends to lag behind a little
<m_3> it's okay for a charm
<m_3> charms should prefer packages over source
<m_3> for all sorts of reasons
<m_3> I'd say if the package works, use it... otherwise pull from source if the package is broken
<m_3> depends on what you want with the charm though
<m_3> for charms in general, anything goes
<m_3> for charms in lp:charm that we're committing to maintain, etc, etc...
<marcoceppi> gotchya
<m_3> SpamapS: you have an opinion on this?  i.e., pulling from source when a package is available
<marcoceppi> I've been setting up RSS feeds for each of the upstreams that I've made charms for. So I knew when to update the charm
<SpamapS> sorry I was making a red bull run to 7-11 .. reading
<m_3> marcoceppi: cool, yeah, that's important... especially if they're not publishing hashes and we're hashing them ourselves
<marcoceppi> yeah
 * marcoceppi shakes fist at mojang
<SpamapS> in general, repo > ppa > source because you have better integration and testing
<m_3> condition for acceptance into lp:charm?
<SpamapS> repeatability and providing updates is a tough one there
<SpamapS> if your charm does a source install, it needs to make it easy to update the software.
<m_3> marcoceppi: does the package work in this case or is it really broken?
<SpamapS> I was thinking actually that we could do that with a convention of having source-version as a config key
<marcoceppi> SpamapS: Should that be implemented in the charm-update hook?
<SpamapS> upgrade-charm, imo, should almost always just call install ;)
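The convention SpamapS describes can be sketched with a throwaway hook directory; the echo stands in for a real install hook, and none of this is actual charm code:

```shell
#!/bin/sh
# Sketch: an upgrade-charm hook that simply re-executes the (idempotent)
# install hook, so there is one prescribed way to install/update.
hooks=$(mktemp -d)
printf '#!/bin/sh\necho installed\n' > "$hooks/install"
cat > "$hooks/upgrade-charm" <<'EOF'
#!/bin/sh
set -e
# Delegate to the install hook that lives next to this script.
exec "$(dirname "$0")/install"
EOF
chmod +x "$hooks/install" "$hooks/upgrade-charm"
"$hooks/upgrade-charm"
```

Running the upgrade-charm hook just runs install, so a charm upgrade and a fresh install follow the identical code path.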
<marcoceppi> m_3: I haven't tested personally for the deb-conf errors. The package installs for local mysql
<SpamapS> so, default: would be the version you tested it with..
<m_3> wow, source-version config key implies a lot of boilerplate work in every charm
<brunopereira81> m_3: teamspeak3 installs and runs on local repo, have some time to debug that part? Rest of suggestions will be implemented and updated asap (thx for the review) but at the moment the "not spinning up" is my main prio I would say.
<SpamapS> or if there are security updates upstream, the version with the updates.
<SpamapS> m_3: charm-helper :)
<m_3> SpamapS: right
<m_3> brunopereira81: sure... sounds good
<SpamapS> . /usr/share/charm-helper/sh/updates.sh
<SpamapS> actually
<SpamapS> . /usr/share/charm-helper/sh/config-source-version.sh
<m_3> brunopereira81: gimme a sec to finish up this review
<SpamapS> this is why I'd say just use packages
<marcoceppi> SpamapS: makes sense
<SpamapS> if the phpmyadmin package isn't behaving properly, then its buggy and we should fix it
<marcoceppi> I'll take a look at it and update the charm if needed
<SpamapS> I recall there being issues with dbconfig-common and the way it works...
<SpamapS> because you have to have a valid db connection before it finished configuring..
<SpamapS> marcoceppi: the charm itself doesn't have to do anything to update the software.. but there needs to be a prescribed, single way to update/patch the software
<SpamapS> And since packages are *built* to do that, using a PPA (owned by charmers) seems the logical way to get that done.
<marcoceppi> SpamapS: Well if the install hook always has the method of getting the latest version (whether via update/upgrade or by source/compile) having it be idempotent and having upgrade-charm call it makes sense
<marcoceppi> Okay, so that statement really is more on the side of source/compile
<SpamapS> marcoceppi: I don't like the idea of it always getting the current version
<SpamapS> marcoceppi: that means you change behaviors whenever upstream releases.
<SpamapS> and its entirely possible the PHP or something else that you have stops working.
<SpamapS> marcoceppi: so, a default version, with a way to override it, makes sense.
<marcoceppi> As in a config hook? use_upstream
<marcoceppi> are config hooks made available in the install hook?
<m_3> perhaps separate charms?  phpmyadmin-latest -vs- phpmyadmin
<SpamapS> marcoceppi: config-get is always available, so you can just call config-changed
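Since config-get works from any hook, the pattern SpamapS mentions is an install hook that ends by delegating to config-changed. A runnable sketch with a stubbed config-get standing in for juju's real tool (the hook bodies and the source-version key are illustrative only):

```shell
#!/bin/sh
# Sketch: install delegates config handling to config-changed, which
# reads settings via config-get. The config-get here is a stub so the
# example runs outside a juju environment.
hooks=$(mktemp -d)
printf '#!/bin/sh\necho 3.4.7.1\n' > "$hooks/config-get"
cat > "$hooks/config-changed" <<'EOF'
#!/bin/sh
echo "configuring source-version=$(config-get source-version)"
EOF
cat > "$hooks/install" <<'EOF'
#!/bin/sh
set -e
# ... package/source install steps would go here ...
# then hand all configuration work to config-changed:
exec "$(dirname "$0")/config-changed"
EOF
chmod +x "$hooks"/*
PATH="$hooks:$PATH" "$hooks/install"
```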
<marcoceppi> Well, when I say current (latest) I mean the latest available in the install hook. so not latest.tar.gz, but x.y.z.tar.gz If a new release is available then the maintainer of the charm should update the install hook, test, then upload
<marcoceppi> then upgrade-charm hook should be written in a fashion that either creates an upgrade path, etc
<marcoceppi> I just talked myself out of an argument. Because this is starting to sound like a PITA
<marcoceppi> So, in this case, phpMyAdmin package is broken. I should repackage in a PPA owned by charmers?
<SpamapS> marcoceppi: ;)
<SpamapS> marcoceppi: you just reinvented dpkg!!
<SpamapS> well done
 * SpamapS is always reinventing things that have existed for years.. its like a game
<marcoceppi> heh, I'm really good at it :)
<SpamapS> marcoceppi: I don't think the packaging is broken.. we just need to put some thought into it.
<marcoceppi> I'll take a look at it again when I get home
 * marcoceppi heads home
<SpamapS> marcoceppi: the long standing bug with dbconfig-common is that you must have mysql running before you can enter config details about your mysql connection to phpmyadmin..
<marcoceppi> Couldn't you just setup a local MySQL instance, fake a complete setup, then manually inject settings to config.inc.php?
<SpamapS> marcoceppi: it fails configuration otherwise. So.. the answer to that is, don't let dpkg configure phpmyadmin until you've related to a mysql server.
<marcoceppi> As it stands now, the charm can accept multiple db-admin relations if you have multiple mysql charms running in an environment
<marcoceppi> I'm not sure how dbconfig-common will handle that
<SpamapS> phpmyadmin only allows one database right?
<marcoceppi> nope
<marcoceppi> you can setup multiple server relations
<SpamapS> well then dbconfig-common is probably the wrong solution to configuring phpmyadmin
<SpamapS> marcoceppi: head home, I'm going to look at the package now.. it sounds like we may be able to just ignore dbconfig-common entirely.
<marcoceppi> SpamapS: \o/
<SpamapS> as much as I despise phpmyadmin.. its a godsend for some people. :)
<marcoceppi> yeah
<marcoceppi> I think it'll be nice for juju, if you want to spin up really quickly to investigate mysql :)
<SpamapS> we should support the shared-db relation in the charm too
 * m_3 likes that it can be spun down just as quickly :)
<SpamapS> so you can give people a web instance to inspect a single database
<m_3> yeah, there's real use for that case
<marcoceppi> SpamapS: Good point, I'll take a look at that too, tonight
<m_3> with a read-only config option?
 * marcoceppi heads to the metro
<m_3> marcoceppi: later man... charm looks great in general
<SpamapS> m_3: that would be cool
<brunopereira81> m_3: after deploy state: started and I can connect to the server with the client and use the pre-set admin token, restart service is implemented and metadata is fixed on next push
<brunopereira81> m_3: any idea where it might have gotten stuck at?
<m_3> marcoceppi: tried to capture most of these suggestions into the review
<m_3> brunopereira81: ok, lemme back up and recycle my env
<brunopereira81> ;)
 * m_3 needs more than 2 ec2 accounts
<negronjl> m_3:  We're gonna have a hadoop-mapreduce charm
<negronjl> m_3:  This charm will be in charge of the job ( setup, execution, reporting, cleanup, whatever ).
<negronjl> m_3: After this, we can then deploy another mapreduce charm to the same hadoop cluster and have it work just fine as the previous mapreduce charm should have cleaned everything up.
<m_3> negronjl: I like it
<negronjl> m_3: After a while, I'll be working up multiple mapreduce jobs per cluster
<m_3> negronjl: multiple same time?
<m_3> ha
<negronjl> m_3:  Not that I've ever done that but, I figure I would try anyway :)
<m_3> really fits in nicely with the whole HaaS thing
<negronjl> HaaS ??
<m_3> at hadoopworld, there were plenty of big folks
<m_3> like jpmorgan
<negronjl> Hadoop as a Service ?
<m_3> who have a hadoop services group
<m_3> yeah
<negronjl> m_3: cool.... we'll be able to easily provide that :)
<m_3> they provide hadoop services to different business units
<negronjl> m_3: I just need to make a new mapreduce job ... terasort is getting kind of old
<m_3> drove hdfs security patches and similar
<negronjl> m_3:  any ideas ?
<m_3> I bet!
<m_3> the cisco talk had a bunch of cool benchmarks
<m_3> lemme dig it up
<m_3> brunopereira81: ping
<negronjl> m_3:  any idea where I can get the text for the bible ?
<m_3> hmmmm... nope
<brunopereira81> m_3 found it?
<negronjl> m_3:   maybe I can use that as in_dir and mapreduce something
<m_3> probably part of the gutenberg project :)
<m_3> brunopereira81: was PMing you
<_mup_> juju/expose-refactor r423 committed by jim.baker@canonical.com
<_mup_> Renamed juju.state.expose to juju.state.firewall
<m_3> brunopereira81: so my ts3 daemons aren't starting
<m_3> you have a 'service stop' at the end of the install hook
<m_3> but a 'service start' in start hook
<m_3> what should be the initial state of the service?
<negronjl> m_3: thx got it.
<m_3> negronjl: figured it'd be kinda funny if the gutenberg proj didn't have a bible
<SpamapS> crap reading about terasort reminds me that I 'm still supposed to record a screen cast of the hadoop thing
 * SpamapS *HAAAAAAATES* screen casting. :-P
<negronjl> SpamapS: What is your deadline on that ?
<negronjl> SpamapS: ahh .. does it have to be on the ODS thing ?
<SpamapS> yeah it needs to be the same format
<SpamapS> actually its a lot harder to do with EC2 or even canonistack
<SpamapS> having a local openstack was nice. ;)
<SpamapS> local provider works, but looks weird because there's only 1 machine
 * SpamapS ponders setting up an openstack on his laptop to make it look more interesting.
<negronjl> SpamapS: ahh ... I'm working on the hadoop-mapreduce charm ... thought you could use it but, It'll be done soon enough and I'm sure you'll be dying to make another screencast :P
<_mup_> juju/trunk r418 committed by jim.baker@canonical.com
<_mup_> merge expose-refactor [r=bcsaller,fwereade][f=873108]
<_mup_> Refactors the firewall mgmt in the provisioning agent into a separate class.
#juju 2011-11-29
 * negronjl is afk
<marcoceppi> m_3: Thanks for the review and consolidating everything into one
<marcoceppi> I was never able to wrap my head around Cheetah, which is why I still bow to the gods of sed :)
<marcoceppi> SpamapS: you too ^^ (thanks!) :)
<m_3> marcoceppi: yeah, I'm kinda trying to add discussion as part of the review too... those parts aren't crucial, just opinion, but it's good to talk them over
<m_3> marcoceppi: I took some notes on cheetah from a shell script...
 * m_3 digs around
<SpamapS> I think most of the time heredocs are simpler than cheetah
<marcoceppi> m_3: I just find cheetah to be, overkill? for the small configurations
<SpamapS> cheetah is great when you want to loop over complex objects
<SpamapS> but most of the time just >>'ing to a new version of the file is actually a better way to go IMO
<marcoceppi> SpamapS m_3 are there any other lightweight configuration tools, that just use templating? Like Smarty for bash?
<SpamapS> marcoceppi: the only time I see templating as an important option is when you can look at the template w/o the logic that fills it... like HTML templates that are worked on by designers.
<SpamapS> marcoceppi: for config files.. just build the thing by appending.
<SpamapS> where I've used cheetah.. I've felt it was overkill ;)
<m_3> disagree... there's lots of use for lightweight templating
<SpamapS> Its mostly a style choice
<m_3> it's not that painful either... no code... it can be called from the command line
<SpamapS> heredocs are just one form of templating
<m_3> agree... totally a style choice
<m_3> stitching various here-docs together can even do complex templating
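As a concrete illustration of the here-doc-as-template idea, a minimal sketch (the file name, variable names, and values are all made up; in a hook they'd come from `config-get` or `relation-get`):

```shell
#!/bin/sh
set -e
# Values that would normally come from config-get / relation-get
host="10.0.0.5"
port="3306"

# Unquoted delimiter: $host and $port expand as the file is written
cat > ./my.cnf <<EOF
[client]
host=$host
port=$port
EOF

# Quoted delimiter: the body is copied verbatim, no expansion
cat >> ./my.cnf <<'EOF'
# a literal $host stays literal here
EOF
```

Stitching the two forms together gives simple templating with no dependency beyond the shell itself.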
<marcoceppi> m_3: I'd be interested in your bash notes :)
<SpamapS> DOWNLOAD=`ch_get_file "http://iweb.dl.sourceforge.net/project/phpmyadmin/phpMyAdmin/3.4.7.1/phpMyAdmin-3.4.7.1-all-languages.tar.gz" "726df0e9ef918f9433364744141f67a8"`
<SpamapS> that is so awesome
<SpamapS> cat > $apache_config_file <<EOF
<marcoceppi> SpamapS: <3
<SpamapS> marcoceppi: so, for my money, its better to > to a tmpfile .. then mv -f
<SpamapS> marcoceppi: and probably worth adding a charm helper function that takes a heredoc as its stdin and a filename to rename to as $!
<SpamapS> err
<SpamapS> $1
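A sketch of the helper SpamapS is describing; the function name is invented here for illustration, not an existing charm-helpers call:

```shell
#!/bin/sh
set -e

# Hypothetical helper: read the rendered file from stdin, stage it
# in a temp file, then rename onto $1. Readers never see a
# half-written config; mv is atomic within one filesystem.
ch_write_file() {
    target="$1"
    tmp=$(mktemp "$target.XXXXXX")
    cat > "$tmp"            # capture the heredoc from stdin
    mv -f "$tmp" "$target"
}

# Usage in an install hook, with a heredoc as stdin
ch_write_file ./example.conf <<EOF
ServerName $(hostname)
EOF
```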
<marcoceppi> sh/file.sh sound good?
<marcoceppi> or should there just be one file, helper.sh ?
<SpamapS> not sure...
<SpamapS> I like the idea of breaking them up into groups
<marcoceppi> I may be trying to split too soon :\
<SpamapS> but it also means people have to lookup which one each function is in...
<SpamapS> maybe have an all.sh that just sources them all
<marcoceppi> good idea
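Such an all.sh could just source its siblings. A runnable sketch with made-up group files and function names; the real install path would be fixed by the package:

```shell
#!/bin/sh
set -e
# Demo layout: two fake helper groups in a scratch dir
dir=$(mktemp -d)
echo 'ch_greet() { echo hello; }' > "$dir/cli.sh"
echo 'ch_port() { echo 80; }' > "$dir/net.sh"

# all.sh records its own directory when written (since $0 is
# unreliable inside sourced files) and loads everything else
cat > "$dir/all.sh" <<EOF
helper_dir="$dir"
for f in "\$helper_dir"/*.sh; do
    [ "\$f" = "\$helper_dir/all.sh" ] && continue
    . "\$f"
done
EOF

. "$dir/all.sh"   # a hook sources just this one file
ch_greet          # prints "hello"
ch_port           # prints "80"
```

Hooks that want to avoid the lookup-which-file problem source all.sh; the rest can source a single group directly.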
<SpamapS> one thing that sucks is that we want to cleanup temp files on any fails..which is usually done w/ a trap.. but traps in sourced files can be ugly
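The trap pattern in question, as a sketch. The ugliness SpamapS alludes to: sh keeps a single trap per signal, so a `trap ... EXIT` installed by a sourced helper silently replaces one the hook set itself.

```shell
#!/bin/sh
set -e
tmp=$(mktemp)
# Remove the staging file on any exit, success or failure.
# Caveat: a sourced file that also runs `trap ... EXIT`
# would overwrite this handler and leak $tmp on error.
trap 'rm -f "$tmp"' EXIT

echo 'candidate config' > "$tmp"
mv -f "$tmp" ./final.conf   # success path: nothing left to clean
trap - EXIT                 # work is done; drop the handler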
<SpamapS> marcoceppi: also.. VirtualHost with ServerName .. not sure thats a great idea
<SpamapS> marcoceppi: for the most part I think we can think of these as standalone boxes
<SpamapS> tho I guess the first VirtualHost is also the default one
<marcoceppi> What about when co-location happens?
 * marcoceppi looks longingly into the future
<SpamapS> marcoceppi: that isn't something I think we should think about. co-location is supposed to be for things that don't conflict.
<SpamapS> and anyway, if you co-located two things that did it this way, they'd have conflicting ServerName fields and break.
<m_3> I kinda like adding apache vhosts w explicit aliases (to /phpmyadmin) instead of overloading root
<marcoceppi> SpamapS: I guess, I keep envisioning there being an http relation with an apache interface. So an apache charm and charms like thinkup and phpmyadmin would require the apache charm and apache charm would setup the proper virtualhosts based on X things
<m_3> results in the lame "It works" though :)
<marcoceppi> via colocation
<m_3> marcoceppi: more likely you'd use a reverse-proxy in any real situation though
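m_3's explicit-alias approach is a one-stanza drop-in. A sketch of what a hook might install, using the stock Debian phpmyadmin paths and Apache 2.2-style access rules, written to the current directory here so the sketch runs anywhere (a real hook would target /etc/apache2/conf.d/ and reload apache):

```shell
#!/bin/sh
set -e
# Expose the app under /phpmyadmin instead of claiming the root,
# so the default "It works" page (or another app) keeps /
cat > ./phpmyadmin.conf <<'EOF'
Alias /phpmyadmin /usr/share/phpmyadmin
<Directory /usr/share/phpmyadmin>
    Order allow,deny
    Allow from all
</Directory>
EOF
```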
<SpamapS> that works now without charms.. just apt-get install the two things, they're both on port :80 at /foo and /bar ..
<SpamapS> charms tho, are oriented around configuring the machine for the one thing they do
<marcoceppi> right, it's because I have a slightly different use case that I look at warping juju to do that :)
<SpamapS> I think the way subordinate/colocated charms is being implemented, it may work that way..
<SpamapS> where you could have an apache charm.. and deploying apps that support the apache interface as subordinates of it, would give them a chance to tell apache that they want to be at /foo
<SpamapS> open-port 80/tcp
<SpamapS> chown -R www-data:www-data /var/www
<SpamapS> marcoceppi: open the port only when the service is configured
<marcoceppi> kk, thanks
<SpamapS> Since phpmyadmin is kind of nothing w/o databases.. I'd say it should be opened in the relation hook, not install
<SpamapS> Or, can it work w/o any sources configured?
<m_3> marcoceppi: simple cheetah example from the command line: http://paste.ubuntu.com/753201/
<m_3> marcoceppi: and the other extreme... a more complex template using eruby https://gist.github.com/1402781
<marcoceppi> SpamapS: It _can_ be run without a db, but it's pointless without one
<SpamapS> marcoceppi: I don't understand this change...
<SpamapS> 8	-if not slave and not broken and not admin:
<SpamapS> 9	+if not slave and not broken:
<SpamapS> marcoceppi: seems like we'd still want that.. who cares if the database already exists?
<SpamapS> marcoceppi: anyway, I have to run.. but that does seem confusing
<SpamapS> marcoceppi: oh sorry, in relation to https://code.launchpad.net/~marcoceppi/charm/oneiric/mysql/db-admin-relation-fix/+merge/83690
<marcoceppi> SpamapS: In my trials, if you destroy phpMyAdmin then deploy the service again, MySQL would error on join because it tries to create the database again
<SpamapS> marcoceppi: we should disable the creation on admin
<SpamapS> marcoceppi: they have root.. let them create their own db :)
<marcoceppi> SpamapS: that was my first course of action, then I realized that phpMyAdmin requires a database
<SpamapS> oh haha
<marcoceppi> So I put it back, looking back on it now it should be disabled though
<marcoceppi> I'll update my branch for the merge proposal
<SpamapS> just use 'mysql'
<SpamapS> or create one conditionally.. if not exists..
<SpamapS> anyway.. GONE
<marcoceppi> SpamapS: I'll just have the db-admin hook create it, no bigs
<marcoceppi> SpamapS: Damn, well when you get back I've pushed up the changes
 * marcoceppi wanders away to play Zelda
<hazmat> zelda +1
 * SpamapS creates ppa:charmers/charm-helpers
<marcoceppi> sweet
<SpamapS> More I think about it, the more I think we need a charm-helpers addon that reads metadata.yaml for declarative stuff
<SpamapS> or a juju add on I guess
 * SpamapS loathes writing specs, so would prefer charm-helpers ;)
<marcoceppi> How would it read metadata.yaml? as in metadata defining what helpers needed to be loaded into the environments?
<SpamapS> Just have a single command at the outset that bootstraps in charm-helpers
<SpamapS> and after it does the declarative bits, runs hooks/install.charmhelper or something
<SpamapS> actually you can just have install be something like
<SpamapS> #!/bin/sh
<SpamapS> charm-helper
<SpamapS> ... install steps
<SpamapS> charm-helper would need to be in the OS
<SpamapS> or in juju's cloud init
<SpamapS> otherwise for now just    add-apt-repository -y ppa:charmers/charm-helper && apt-get update && apt-get -y install charm-helper
<flaccid> sweet
<zoomzz> Hey folks i am having a little trouble bootstrapping
<zoomzz> Fails to find and register any hosts
<zoomzz> Even though the hosts r alive and pingable
<zoomzz> My aim is to deploy openstack on servers deployed through juju and orchestra
<zoomzz> Cobbler does its job and allows the hosts to build, but they do not register and juju bootstrap fails to find an orchestra server
<koolhead11> hazmat: around
<koolhead11> is zookeeper a compulsory pkg to be on the VM clients/cloud-image once they are started via juju-bootstrap?
<koolhead11> if that is the case then i can never run juju in my openstack environment if the Internal IP has no internet access :(
<rog> koolhead11: i should think so. zookeeper is used for all juju coordination (e.g. hook execution)
<rog> koolhead11: is that because you can't install zookeeper with no internet access?
<koolhead11> rog: http://paste.ubuntu.com/753467/
<koolhead11> i confirmed it after the console log
<koolhead11> rog: true
<koolhead11> so if am running juju with openstack then i need to have internal network access to internet
<rog> koolhead11: what command is that paste output from?
<koolhead11> $ euca-get-console-output instance-id
<rog> koolhead11: alternatively you could use an image with zookeeper preinstalled, presumably
<koolhead11> rog: +1
<koolhead11> so it means i have to re-create the image :D
<koolhead11> with adding all these pkgs
<rog> koolhead11: given that you've got no internet access, i don't see an alternative, other than setting up your own local PPA
<rog> koolhead11: (and i've no idea how much work that would be)
<rog> koolhead11: i imagine that the only package that you need to preinstall is juju, which will have all those other packages as dependencies
<rog> koolhead11: i may be wrong though - i'm quite new to all this
<koolhead11> rog: am going to add all those pkg and remaster the cloud-image i have just downloaded. I will have to check/work on repository issue :)
<koolhead11> rog: thanks a lot :)
<rog> koolhead11: no probs. hope it works!
<koolhead11> rog: and i think the issue am facing is more to do with my internal network :(
<koolhead11> hi all
<koolhead11> i need one more bit of info: do i just need my ubuntu image along with cloud-init? am trying to build a custom ubuntu image because i have to provide network info in it
<niemeyer> Morning all
<niemeyer> koolhead11: Hi!
<niemeyer> koolhead11: Custom Ubuntu images are generally not encouraged with juju
<niemeyer> koolhead11: We use the charms to tweak the image to one's need
<fwereade> morning niemeyer!
<niemeyer> fwereade: Hey dude
<koolhead11> niemeyer: hellos
<koolhead11> actually i need to add proxy info in my apt so that zookeeper and other pkgs get downloaded once the instance starts
<koolhead11> niemeyer: euca-get-console-output instance-id  is this
<koolhead11> http://paste.ubuntu.com/753480/
<koolhead11> i hope you agree with me on this ;D
<hazmat> koolhead11, pong
<koolhead11> hazmat: hola
<koolhead11> its my running instance which has not connected to internet and failed to download zookeeper
<hazmat> koolhead11, indeed you do need your cloud to have access to the internet
<koolhead11> as a result of that am having all these issues
<hazmat> koolhead11, alternatively you need a proxy/cache setup
<koolhead11> http://askubuntu.com/questions/83134/using-juju-on-openstack-returns-ssh-invalid-key-error/83698
<koolhead11> hazmat: where will i do proxy/setup
<koolhead11> because my nova via which my instance gets network routed needs proxy
<koolhead11> so the only thing came to my mind is to add proxy info in the cloud-image i downloaded
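Hardcoding the proxy into the image amounts to dropping one file into apt's configuration. A sketch with a made-up proxy address, using a scratch root so it runs without touching the system (in the real image the path would be /etc/apt/apt.conf.d/95proxy):

```shell
#!/bin/sh
set -e
# Scratch root standing in for the mounted image filesystem
root=$(mktemp -d)
mkdir -p "$root/etc/apt/apt.conf.d"

# Every apt invocation in the image will then go via the proxy
cat > "$root/etc/apt/apt.conf.d/95proxy" <<'EOF'
Acquire::http::Proxy "http://10.0.0.1:3128/";
EOF
```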
<hazmat> koolhead11, your question/problem is different from the original thread, it's more appropriate to start a separate conversation regarding it
<hazmat> at least it's not clear to me that they're the same
<hazmat> koolhead11, what do you expect juju to be able to do if you have no network access?
<hazmat> just curious
<koolhead11> hazmat: yes i got the reason for problem i was having.
<koolhead11> by any chance juju during bootstrap has any option to define proxy? or i need to hardcode it inside my image and then upload it again to the bucket
 * hazmat checks if cloud-init supports it
<hazmat> koolhead11, do you have a machine accessible from the vm/cloud network that has internet access? i guess you can setup an instance that way if you associate a public address to it?
<hazmat> g'morning
<hazmat> all ;-)
<hazmat> fwereade, i was looking over the restart-transitions branch last night, in general it looks good, i had one major concern about it, in that it would automatically effect error transitions (from error states) to started and up without user intervention
<koolhead11> hazmat: if you will not kick me then does juju has option like  " juju bootstrap --proxy= " :D
<hazmat> which is a significant unintended change in behavior
<hazmat> koolhead11, that seems reasonable
<hazmat> i mean some option to that effect
<fwereade> hazmat, reading code...
<koolhead11> hazmat: so is it there, can i try that?
<hazmat> koolhead11, its not there yet, i'm investigating how it can be done
<koolhead11> hazmat: awesome!!
<fwereade> hazmat, yes, you're absolutely right
<koolhead11> hazmat: also you were correct, the internal network in openstack gets access to internet routed via Nova
<uksysadmin> cool - would be useful
<koolhead11> uksysadmin: +1
<fwereade> hazmat, am I right in thinking that all I actually need to fix is to verify sensible states before I attempt the transition?
<koolhead11> but in my case even nova is running on proxy :(
<hazmat> koolhead11, if that's the case the instances should have internet access for normal apt usage?
<hazmat> fwereade, yeah.. i think just checking that 'error' not in current_state suffices
<koolhead11> <koolhead11> but in my case even nova is running on proxy :(
<hazmat> fwereade, but its a little trickier than that
<hazmat> hmm
<hazmat> fwereade, just wondering if the unit is in error, do the unit rel states need to be initialized
<fwereade> hazmat, I had wondered whether I could guarantee error state by "error" in current state
<koolhead11> hazmat: so you got this trick issue :D
<hazmat> probably not, they can be lazily initialized
<hazmat> fwereade, yeah.. its a property of all the error states that they have 'error' in the name, i would add some strong wording to that effect in workflow.py
<fwereade> hazmat, cool
<hazmat> fwereade, actually just having a func or method in workflow.py is_error_state  should suffice to abstract
<fwereade> hazmat, good point
<fwereade> hazmat, I suspect that I'm still some way away from getting restart behaviour correct in all cases
<fwereade> hazmat, I think I finally have a decent handle on all the workflows and lifecycles, and the hook scheduler
<hazmat> fwereade, yeah.. there's several more corner cases
<hazmat> fwereade, cool
<fwereade> hazmat, but I have yet to translate all that into a coherent plan
<hazmat> the scheduler still has transient state
<hazmat> i'm still wary of trying to store that in zk, given the connection to zk may be dead
<fwereade> hazmat, the scheduler certainly does; there's also some minor complication with the unit workflow, I think, in that it'll need to put the lifecycle into a sensible ._running state when it's recovering into a surprising state (if it's *only* error states, it's not too bad, but I'm not certain of that yet)
<_mup_> Bug #897645 was filed: juju should support an apt proxy for private clouds <juju:Confirmed> < https://launchpad.net/bugs/897645 >
<fwereade> hazmat, and I agree, I think we want the HS state on disk
<fwereade> hazmat, the relation stuff does indeed seem to be ok if we do everything lazily
<koolhead11> hazmat: any ubuntu image with cloud-init should work for juju i suppose
<hazmat> fwereade, interesting re _running, we do need to ensure that the lifecycle is running, regardless of the transition to started in this case, it will need an additional public accessor/mutator, although i wonder if it needs to support external on/off or just on.
<hazmat> needs more thought
<hazmat> koolhead11, yes, but we do require fairly recent versions of cloud-init
<fwereade> hazmat, as I understand it, the current transitions will start and stop the lifecycle as appropriate
<koolhead11> hazmat: that comes with oneiric
<fwereade> hazmat, what I'm lacking is certainty about the mapping from states to should-lifecycle-be-running
<hazmat> fwereade, they do, but that's based on a notion that start is called on the lifecycle
<koolhead11> hazmat: cloud-init             0.5.10-0ubuntu1.5 will do?
<hazmat> fwereade, yeah.. it needs more thought, my thought in this case, is that regardless of state we'd want lifecycle running
<hazmat> fwereade, maybe not..
<fwereade> hazmat: hm, a config error, for example, explicitly stops lifecycle
<hazmat> fwereade, thinking about error states like install_error, or start_error
<fwereade> exactly
<fwereade> hazmat, if it's *just* errors that do that, and we can guarantee that forever, that's easy, but I'm fretting about it
<hazmat> fwereade, yeah.. its fine without manipulating _running, effectively the guarantee is that it's only running after started, and error states will stop that
<hazmat> if we come back in an error state, we can still recover... the unit rels are only active if started
<hazmat> and we don't need to manipulate the lifecycle internals
<hazmat> koolhead11, and that corresponds to what?
<koolhead11> hazmat: is it the latest cloud-init pkg which juju needs :D
<koolhead11> because am making my own image adding the proxy info in it and cloud-init has to be installed too
<fwereade> hazmat, then perhaps it's even simpler -- if we're in an error state do *nothing*, if we're already "started" just lifecycle.start(False), otherwise do the usual transitions into started?
<koolhead11> i cannot log in to the machine which i booted and uses image http://uec-images.ubuntu.com/
<hazmat> fwereade, yeah... that sounds about right, just wondering if that can be simplified
<koolhead11> hazmat: 0.5.10-0ubuntu1.5  version of cloud-init get installed, you think its sufficient to run juju ?
<hazmat> koolhead11, that corresponds to what distro release version?
<hazmat> koolhead11, specifically for cloud-init there were fixes in the oneiric release regarding openstack
<koolhead11> hazmat: so it will work :D
<hazmat> koolhead11, the version in oneiric is 0.6.1-0ubuntu22
<koolhead11> hmm. so i will get the latest oneiric one installed.
<hazmat> koolhead11, which implies to me that no.. 0.5.10 won't work for openstack, there was specifically a problem around installing the ssh key in  older versions
<koolhead11> thanks. let me test this
<koolhead11> hazmat: no my bad, i was on my natty system :)
<koolhead11> and got that version
<hazmat> koolhead11, no worries
<hazmat> koolhead11, uksysadmin Bug #897645 is to track the apt-mirror/proxy support
<_mup_> Bug #897645: juju should support an apt proxy for private clouds <juju:Confirmed> < https://launchpad.net/bugs/897645 >
<uksysadmin> awesome, ta hazmat
<niemeyer> mpl: ping
<niemeyer> rog: ping
<rog> niemeyer: hiya
<niemeyer> rog: Yo
<rog> niemeyer: how're them reviews coming on? :-) :-)
<niemeyer> rog: Just looking at mpl's review, and I noticed you've replied via email, which interestingly went to the merge proposal and not to codereview
<rog> niemeyer: interesting. i don't *think* i got an email from codereview about it.
<rog> niemeyer: nope, i didn't. is this a flaw in the cunning plan?
<niemeyer> rog: Not sure.. just thinking
<niemeyer> rog: and yes, you certainly have reviews coming your way
<rog> niemeyer: whee! i've been stacking up the merge requests, i'm afraid
<rog> niemeyer: i hope i haven't gone way off track
<niemeyer> rog: For now, let's start the review online in the codereview page.. this will ensure you get added to the reviewers list
<rog> niemeyer: you mean, reply on that page rather than reply to the email?
<niemeyer> rog: That's alright.. we'll sort it out.. please don't pile up things too much in the same branch, though
<niemeyer> rog: That first ec2 branch took way too long, and I'm actually wondering if we'll have to break it down
<niemeyer> rog: But let me go over it in a first pass before anything
<rog> niemeyer: yeah, it was bigger than i wanted, but i couldn't see a way to break it down into nice easy steps
<niemeyer> rog: That's right.. reply in the codereview page first
<niemeyer> rog: That will add you to the reviewers list
<rog> niemeyer: since that branch, i'm trying hard to keep the branches small and to the point
<niemeyer> rog: That's awesome, thank you
<mpl> niemeyer: pong
<niemeyer> mpl: Hey there!
<mpl> hi
<niemeyer> mpl: Just going over your change and we're mostly ready for merging
<niemeyer> mpl: One question: have you signed the contribution agreement from Canonical before, for any project?
<niemeyer> mpl: I don't recall if I actually asked that before
<hazmat> niemeyer, can lbox propose send an existing branch to codereview site ?
<mpl> niemeyer: nope, I don't thing I did.
<niemeyer> hazmat: Yep.. just propose it again with -cr
<mpl> *think
<hazmat> niemeyer, awesome
<niemeyer> hazmat: It will find the existing merge proposal, and will do the delta
<hazmat> niemeyer, does that work even if the merge proposal belongs to someone else?
<niemeyer> hazmat: Hmm
<mpl> niemeyer: same kind of procedure as with Google I suppose?
<niemeyer> hazmat: Maaaaybe
<hazmat> ie. does it actually resubmit it,or can it just do the diff and add the comment
<niemeyer> hazmat: As long as you have editing permissions, I *think* it may work
<hazmat> niemeyer, cool, i'll have to experiment
<niemeyer> hazmat: Oh, actually, probably not
<niemeyer> hazmat: Because it will attempt to push the branch
<niemeyer> hazmat: But try anyway
<hazmat> ah
<niemeyer> mpl: Yeah, mostly
<niemeyer> mpl: Except it's a single agreement for all Canonical projects
<niemeyer> mpl: So you won't have to be doing this again for juju, etc
<niemeyer> mpl: The details are here:
<niemeyer> mpl: http://www.canonical.com/contributors
<niemeyer> mpl: It's non-draconian, and quite straightforward
<mpl> niemeyer: kthx, I'm on it right away then.
<niemeyer> mpl: No problem
<niemeyer> mpl: and thank you
<mpl> sure, thank you for putting me on track.
<niemeyer> mpl: I've just reviewed your change, and there's one minor nit to address..
<niemeyer> mpl: Once you do it, just run "lbox propose -cr" again, and I'll merge it
<mpl> I'm on a different machine now, I'll just have to setup things again.
<mpl> CA submitted btw.
<niemeyer> mpl: Beautiful, thanks!
<niemeyer> Ok, I suggest going through the same procedure you did originally
<mpl> one question regarding the submit procedure (open to anyone here):
<niemeyer> mpl: and then run "bzr pull" once you're inside of the branch
<mpl> when I lbox proposed, it took me to the browser page where I have to agree to let launchpad access things on my behalf
<niemeyer> mpl: That's right
<niemeyer> mpl: Well, almost right anyway
<mpl> I selected the most permissive option
<niemeyer> mpl: It's actually lbox asking Launchpad so lbox can do things on your behalf
<niemeyer> mpl: Yeah, I think that's necessary in this case, since it's actually writing content on your behalf
<mpl> yes. what's the minimum working level I was supposed to agree on there?
<mpl> ok
<niemeyer> mpl: Btw, if that's a machine you're not comfortable leaving your credentials around, make sure you kill "~/.lpad*" when you stop working there
<mpl> ah, good to know, thx.
<niemeyer> mpl: It's not your real credentials, but rather just a token
<niemeyer> mpl: Even then, someone can impersonate you with it
<mpl> ok. no worries though. both are my laptops and somehow secure.
<mpl> *somewhat
<mpl> niemeyer: btw, I'm new to bzr (ok with hg and pretty comfy with git) so I might make mistakes with it at first. I've read the minimum docs so far.
<niemeyer> mpl: Not a problem by any means
<mpl> niemeyer: so you're saying I should pull from the original branch. not simply 'bzr branch lp:~mathieu-lonjaret/gozk/zookeeper' ?
<niemeyer> mpl: Yeah, I suggest you follow the original process, and once you have the local environment setup, get inside your feature branch and pull from your changes at that URL
<niemeyer> mpl: The reason being that this way you'll preserve the parent link with the local trunk
<niemeyer> mpl: You can also do it in other ways, but that one is simple to follow and explain :)
<mpl> ah, I think I see, thx.
<mpl> niemeyer: for further changes, I suppose it's cleaner I append them to the current commit (#22), rather than creating new commits, right? I mean, if there's such a thing with bzr...
<niemeyer> mpl: No, we generally don't encourage reorganization of history
<niemeyer> mpl: Even more considering it has been published
<niemeyer> mpl: This would be discouraged even with git
<mpl> well published, but not yet merged.
<niemeyer> mpl: Yeah, but it's published.. the review was made on a given revision, which we should be able to look at again if necessary
<niemeyer> rog: Is Jacek the guy you told me about at UDS?
<rog> niemeyer: no
<niemeyer> rog: Ok, so he mailed us independently.. cool
<mpl> ok, I just got used to doing that way with brad for camlistore. but we're using gerrit, so nothing's "published" until it's merged yeah.
<niemeyer> mpl: Ok, it feels like a bad idea even with gerrit
<niemeyer> mpl: FWIW
<niemeyer> mpl: Otherwise you have a review published, for which you don't have the codebase anymore
<mpl> true. the only trace of the patchsets are in gerrit.
<mpl> otoh it encourages committing often for small changes in the same review without fear of filling the log with ridiculous commits.
<mpl> but anyway, not suggesting anything, was just asking. :)
<niemeyer> mpl: Yeah, there are two religious camps.. we're on the side that doesn't really care about the history of feature branches
<niemeyer> mpl: The history of trunk we care about, though
<niemeyer> mpl: So we're generally more careful on the merge messages
<mpl> uh, that's odd. the tests all passed on that machine and yet I have no recollection of installing/trying more zookeeper things on here.
<mpl> niemeyer: so I've branched from lp:gozk/zk, made a feature branch, cded into it, pulled from lp:~mathieu-lonjaret/gozk/zookeeper, made my changes, made a new commit. now if I lbox propose -cr, it will know that we're still in the same CL, and not create a new one, right?
<mpl> or am I forgetting something?
<smoser> hazmat, i'm interested in what you think of my comments to bug 897645
<_mup_> Bug #897645: juju should support an apt proxy for private clouds <cloud-init:New> <juju:Confirmed> < https://launchpad.net/bugs/897645 >
<niemeyer> mpl: That's right
<niemeyer> mpl: Just propose -cr again
<niemeyer> mpl: Then, visit the codereview page, and mention it's ready
<niemeyer> mpl: In the future, I'll change lbox propose so that it adds a note by itself
<mpl> hmm I must have messed up somewhere:
<mpl> bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/~mathieu-lonjaret/gozk/zookeeper/.bzr/branch/lock): Transport operation not possible: http does not support mkdir()
<rog> mpl: did you kill a bzr process?
<rog> mpl: actually, no that gives a different error
<mpl> no, I don't think I did.
<niemeyer> mpl: Ah, I see what's going on
<niemeyer> mpl: try this:
<niemeyer> mpl: within the branch
<niemeyer> bzr push --remember lp:~mathieu-lonjaret/gozk/zookeeper
<niemeyer> mpl: Then try to propose again
<rog> niemeyer: maybe #juju is better for this conversation
<niemeyer> rog: Yeah, both are cool
<niemeyer> rog: Oh, no, I guess you're right
<niemeyer> rog: We're moving onto juju territory
<rog> niemeyer: that's what i thought
<niemeyer> rog: +1
<rog> niemeyer: i'm not sure of a good way to test the juju cloudinit code though
<rog> niemeyer: it's a public function in the juju package, so it should have a test, but i'm not sure what level would be best.
<rog> niemeyer: maybe just compare the output, as discussed
<hazmat> smoser, replied on the bug
<rog> niemeyer: at least that'll guard against regression
<hazmat> stepping out to run some errands, back in 40m
<niemeyer> rog: yeah, that sounds sane
<niemeyer> hazmat: Cheers
<mpl> niemeyer: whoa I hit a go runtime panic
<niemeyer> mpl: Oh, I wanna know about that one
<mpl> (while proposing). the command you gave me worked to unblock the situation though.
<mpl> k, sending you the stack by e-mail then.
<niemeyer> mpl: Can you please paste the output somewhere?
<mpl> ok, a paste then.
<smoser> hazmat, i think juju should expose cloud-init to end users.
<mpl> niemeyer: http://paste.ubuntu.com/753711/
<smoser> not by default, but at least allow the user to specify "start all my instances with these parts"
<smoser> as that would generically solve local customization problems that you will undoubtedly run into.
<niemeyer> mpl: Oh.. hmm
<niemeyer> mpl: Damn.. I think that's the bug I fixed in Go's http package itself
<mpl> niemeyer: I have some RL work to finish, so I might get a little laggy but I'll catch up
<niemeyer> mpl: I'll have to recompile lbox with the latest tip
<mpl> ok
<niemeyer> mpl: Let me do that
<hazmat> smoser, hmm
<hazmat> niemeyer, what do you think about cloud-init for end users exposed directly via juju?
<niemeyer> hazmat: Sounds bad is my first instinct
<hazmat> smoser, it does make sense, its already a generic high level language for customization, its just whether or not it promotes different practices (machine think vs container think) than what we're aiming for
<niemeyer> hazmat: It's an implementation detail.. we already have providers where cloud-init doesn't exist
<hazmat> niemeyer, just one.. and in that one cloud-init doesn't necessarily make sense, and its something we could add.. but thats a different context; cloud-init there would be in a container, not a machine
<smoser> what is the provider where you have no cloud-init ?
<hazmat> smoser, local provider
<smoser> lxc runs as a container, no ?
<hazmat> smoser, yup
<smoser> you really *should* have cloud-init there.
<smoser> imo
<hazmat> smoser, we debated it, its absence there is mostly an optimization, it cuts down the unit creation time via lxc clone considerably
<niemeyer> hazmat: That's not entirely true
<smoser> lack of cloud-init does not cut down on unit creation time.
<smoser> a pre-customized lxc container does.
<niemeyer> smoser: The real reason we don't use cloud-init there is because LXC containers run units, rather than mimicking EC2 machines
<hazmat> bcsaller, ping
<niemeyer> smoser: So we don't really want to perform the customizations cloud-init is about
<hazmat> smoser, true, and we do use a pre-customized container, bcsaller advocated effectively for not using cloud-init there, and has some more details
<niemeyer> smoser: cloud-init is a way to put the environment in a given state for juju to run
<smoser> and to do other things.
<niemeyer> smoser: It doesn't make sense to run cloud-init on your local laptop, for instance
<smoser> that you could expose to a user
<smoser> so you wouldn't have to modify juju to know things that the user might have to change.
<smoser> it doesn't *not* make sense to run cloud-init there
<smoser> if cloud-init has no data, it does nothing.
<niemeyer> smoser: In many cases we want to know it, because we want juju to be in the loop
<smoser> (ignoring the stupid ec2 timeout thing)
<mpl> niemeyer: it looks like lbox propose panicked when doing the push to rietveld. as a result, the new changes have been pushed to launchpad but not to rietveld.
<smoser> i'm suggesting that you do not want juju in the loop for all possible local customizations
<smoser> apt-proxy is an example.
<smoser> but you will undoubtedly run into other cases where the user needs to run some dynamic change to the image
<niemeyer> smoser: E.g. if there's a proxy, the user should be telling juju about that fact, and we'll be accommodating not only the first machine creation, but we should also be *changing* the proxy in all existing machines if the user decides to change it
<niemeyer> smoser: Same thing about ssh keys
<niemeyer> smoser: cloud-init is an implementation detail for bootstrapping a machine
<niemeyer> smoser: juju stays around forever
<smoser> thats fine. i dont disagree.
<smoser> but i think you need to expose *some* way that a user can modify the image on first boot possibly before juju takes over.
<smoser> and ideally without modifying a golden image prior to boot.
<niemeyer> mpl: That's fine, you can propose again without damages
<niemeyer> mpl: and lbox was just recompiled
<niemeyer> mpl: can you please apt-get update; apt-get upgrade;?
<niemeyer> mpl: to get the new lbox
<niemeyer> mpl: and try again?
<niemeyer> smoser: charms is the way to modify the image
<smoser> and i think that cloud-init is the thing that makes most sense there (yes, thats probably because its "my baby") but i think you need something.
<niemeyer> smoser: in other cases, juju itself should be in the loop
<smoser> charms cannot modify the guest before juju runs
<niemeyer> smoser: We already manage ssh keys, for instance
<niemeyer> smoser: Proxy seems to fit in the same category
<smoser> i think it is unreasonable to believe that juju will be all knowing about image customization.
<smoser> and that you should enable a generic method.
<smoser> even if it is only "i will run this for you in the image before juju runs"
<smoser> (which would not be cloud-init specific)
<smoser> but would allow the user to make modifications that juju doesn't have to know about. possibly they would only need that until juju grew appropriate legs.
<niemeyer> smoser: Maybe.. I'm certainly interested in learning about those cases.
<smoser> apt-proxy is the first.
<smoser> apt-mirror is the second
<niemeyer> smoser: Proxy is the first one we find out of the ordinary that should be supported.
<niemeyer> smoser: and in that case, it *should* be built-in
<niemeyer> smoser: E.g. we want to set up the proxy within LXC units in a local deployment
<niemeyer> smoser: cloud-init doesn't run there
<niemeyer> smoser: Same thing about apt-mirror
<smoser> (there is no reason that cloud-init does not run there)
<smoser> (but even if it wasn't cloud-init, juju could still make that promise)
<smoser> if you want a list of other things that you may need to configure try 'man apt.conf'
<smoser> or 'man resolv.conf'
<niemeyer> smoser: co-located charms can fix those problems
<niemeyer> smoser: Some of them, anyway
<niemeyer> smoser: But I digress..
<niemeyer> smoser: I don't disagree that some generic customization mechanism may be needed.
<niemeyer> smoser: I'd tend to make it more generic, though.. such as a script
<smoser> cloud-init "parts" can be a script.
<smoser> its a generic mechanism.
<niemeyer> smoser: but all of that is irrelevant for the current debate.  Proxy should be a first class option.
<smoser> i'm curious as to why cloud-init does not exist in your lxc containers
<smoser> i really dont think that proxy should be a first class citizen to a service orchestration solution.
<smoser> i would think that is way below the level of something that juju should care about, but i'm willing to accept your opinion there.
<niemeyer> smoser: juju bootstrap --http-proxy=<url>
<niemeyer> smoser: Should be as simple as that.
<niemeyer> smoser: No excuses for being harder.
<smoser> its a somewhat arbitrary complication of your command line interface to me.
<smoser> but, i'm not arguing that.
<niemeyer> smoser: Ask koolhead11's opinion.. he was the user faced with the problem
<niemeyer> Anyway, I have to head to lunch right now or I'll lose my wife.. biab
<mpl> niemeyer: something odd happened. I got a 403 when it tried pushing to rietveld. I retried; it did _not_ ask for my google credentials again, and this time it worked.
<rog> niemeyer: i've made some changes in response to your comments on ec2-error-fixes, and pushed them (with lbox propose) but i don't see the changes reflected on the codereview site
<rog> niemeyer: i'm probably doing something wrong!
<koolhead11> So juju wants a user Ubuntu to exist in my image/instance?
<koolhead11> is there a way i can change it from configuration file
<rog> niemeyer: aaargh, i forgot the -cr flag!
<marcoceppi> m_3 SpamapS I don't think phpMyAdmin has a "read-only" mode. I think it'd probably need to create a MySQL database user with just select access which makes maintaining configuration of it kind of difficult
<m_3> marcoceppi: cool... np.  was just thinking read-only mode would be cool
<m_3> might be worth filing as a feature request on the charm once it's in lp:charm/phpmyadmin so we don't forget... somebody might pick it up over time
<marcoceppi> Good idea. I'm going to try to wrestle with the package and see where that goes
<niemeyer> rog: I've been thinking about that stuff.. I'll probably introduce support for a ".lbox" file within the branch, and take default options from there
<niemeyer> rog: So that "lbox propose" will do the intended thing for the specific project
<rog> niemeyer: i think that's a good idea
<niemeyer> mpl: Superb, I'll check it again
<rog> niemeyer: i was thinking of suggesting something like that
<rog> niemeyer: ec2-error-fixes should be ready to rock BTW
<niemeyer> rog: Looking at it right now
<niemeyer> rog: Reviewed
<niemeyer> mpl: Good stuff
<niemeyer> mpl: Will get the zookeeper change submitted
<niemeyer> mpl: I'll be back in 5 mins
<niemeyer> mpl: Please let me know when you have a moment for us to talk about the next task
<rog> niemeyer: i don't think IsNil works for checking if the length of an array is zero
<niemeyer> rog: It works in this case, because the length is zero because it's actually a []string(nil)
<rog> niemeyer: hmm, i'm sure i was bitten by this - DeepEqual used to return true and now it doesn't
<rog> niemeyer: i'll check
<rog> niemeyer: ah, i see, i did the naive translation
<rog> niemeyer: why wasn't it IsNil before?
<niemeyer> rog: I was misguided to think it was an empty slice due to the poor printing we had just a couple of weeks ago
<niemeyer> rog: Thankfully, fmt was changed and it now shows the fact it's nil
<mpl> niemeyer: cool, thx. if it's not too long, pretty much anytime will be ok to discuss.
<niemeyer> mpl: Ok, let's go then
<rog> niemeyer: ah, cool. done and pushed.
<niemeyer> mpl: We don't have any kind of mechanism for logging in the juju packages right now
<niemeyer> rog: Awesome, please feel free to submit
<niemeyer> mpl: It'd be cool to have something _very_ simple going..
<mpl> niemeyer: like http basic auth, with login and pass?
<niemeyer> mpl: Oh, no, I mean just logging, not "login" :)
<mpl> oh sorry.
<mpl> ok.
<niemeyer> mpl: I'm thinking about something like this:
<niemeyer> mpl: Imagine an interface in the juju package like so:
<niemeyer> mpl: func SetLogger(l Logger); func SetDebug(enabled bool); func Logf(format string, args ...interface{}); func Debugf(...)
 * marcoceppi reinvents the wheel
<niemeyer> marcoceppi: As long as it's rounder, that's fine ;-)
<niemeyer> mpl: So one can call juju.Debugf("doing something: %v", whatever)
<niemeyer> mpl: etc
<niemeyer> mpl: Same thing about Logf
<mpl> what's the diff between LogF and Debugf? Debugf is only used when debug is enabled?
<niemeyer> mpl: Precisely
<niemeyer> mpl: Well, it's only _effective_ when ...
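A minimal sketch of the package-level logging API niemeyer describes. The function names (SetLogger, SetDebug, Logf, Debugf) are from the chat; everything else — the Logger interface body, the mutex, the in-memory logger — is an assumption for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// Logger is the pluggable sink. This signature matches *log.Logger's
// Output method, so a standard logger could be plugged in directly.
type Logger interface {
	Output(calldepth int, s string) error
}

var (
	mu     sync.Mutex
	logger Logger
	debug  bool
)

func SetLogger(l Logger)    { mu.Lock(); logger = l; mu.Unlock() }
func SetDebug(enabled bool) { mu.Lock(); debug = enabled; mu.Unlock() }

// Logf records a message whenever a logger is installed.
func Logf(format string, args ...interface{}) {
	mu.Lock()
	defer mu.Unlock()
	if logger != nil {
		logger.Output(2, fmt.Sprintf(format, args...))
	}
}

// Debugf is only effective once SetDebug(true) has been called.
func Debugf(format string, args ...interface{}) {
	mu.Lock()
	defer mu.Unlock()
	if debug && logger != nil {
		logger.Output(2, fmt.Sprintf(format, args...))
	}
}

// memLogger records lines in memory, handy for tests.
type memLogger struct{ lines []string }

func (m *memLogger) Output(calldepth int, s string) error {
	m.lines = append(m.lines, s)
	return nil
}

func main() {
	m := &memLogger{}
	SetLogger(m)
	Debugf("invisible: debug is off")
	Logf("doing something: %v", 42)
	SetDebug(true)
	Debugf("visible now")
	fmt.Println(m.lines)
}
```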
<rog> niemeyer: bzr: ERROR: Cannot lock LockDir(lp-85338896:///%2Bbranch/goamz/ec2/.bzr/branchlock): Transport operation not possible: readonly transport
<niemeyer> mpl: I suggest pulling from lp:goetveld
<niemeyer> rog: You've been there before, I think :)
<rog> niemeyer: i tried rebranching
<niemeyer> rog: Yeah, it's the same issue.. we have to rename
<rog> niemeyer: perhaps you haven't updated the alias for ec2?
<rog> ah yes
<niemeyer> rog: Please push to the same URL, but under ~gophers
<mpl> niemeyer: ok, got it, thx. anything else?
<niemeyer> rog: I'll switch the official URL
<niemeyer> mpl: Yeah, I suggest having a look at lp:goetveld
<niemeyer> mpl: Check the log.go file
<rog> niemeyer: pushed
<niemeyer> mpl: There's a lot you won't care about, but there are some things you can mimic
<niemeyer> rog: Thanks, switching now
<niemeyer> rog: Done
<niemeyer> rog: exp/ssh is coming along pretty well
<niemeyer> rog: I'm hoping we'll be able to use it by the time we get there
<rog> niemeyer: yeah, i'm really happy to see that
<rog> niemeyer: 'cos we need it!
<rog> niemeyer: and i wasn't looking forward to writing it...
<niemeyer> rog: Well, we'd definitely not write an *ssh* package in a juju context :-)
<niemeyer> rog: We'd do the same we do in the current impl.. just wrap ssh
<niemeyer> rog: and that may actually turn out to be a better idea in the long run, anyway
<rog> niemeyer: ah, i didn't realise that's what it did
<niemeyer> rog: But.. I'm hopeful at least
<niemeyer> rog: Yeah, it's very straightforward actually
<niemeyer> rog: Well, hmmm.. ok.. not _that_ straightforward if we take into account the error handling and retry logic
<rog> niemeyer: does it just pipe into ssh? or use a local proxy port?
<niemeyer> rog: It manages the process
<niemeyer> rog: Ok.. what's the next MP in the pipeline?
<niemeyer> rog: I've been looking for the tip, but it looks like they already include the ec2 stuff
<rog> niemeyer: you mean, what am i working on next, or what should you review next?
<niemeyer> rog: What's the tip of the iceberg? :-)
<niemeyer> rog: The latter
<rog> niemeyer: probably ec2-ec2test
<niemeyer> rog: Cool
<niemeyer> rog: Btw, can you please give me a hand merging https://codereview.appspot.com/5445048/?
<niemeyer> rog: It'd be good to run tests before merging.. I think mpl didn't have the env setup for that
<rog> niemeyer: ok. what about things that depend on it?
<niemeyer> rog: They'll have to be fixed in due time as well
<niemeyer> rog: But there's no way to break that circle besides doing this
<rog> niemeyer: indeed - i just wondered if we should push a load of stuff together
<niemeyer> rog: Pushing loads of stuff together never sounds great ;-)
<rog> niemeyer: ok, seems good to me
<niemeyer> rog: What depends on it right now, either way?
<rog> lol
<rog> maybe... nothing
<niemeyer> 8)
<niemeyer> rog: Btw, there's an update for lbox
<rog> niemeyer: cool. what new goodies in the box today?
<niemeyer> rog: Trivial changes. Besides recompiling with the Go http fix, it allows leaving the description empty, which is something I've noticed you missing
<rog> niemeyer: yeah - sometimes the single line summary is enough
<niemeyer> rog: Agreed
<rog> niemeyer: thanks
<rog> niemeyer: hmm, should i compile zookeeper against weekly or release?
<rog> niemeyer: i think i should probs stick to release and then tag a weekly version too
<niemeyer> rog: I vote for weekly/tip for the moment
<niemeyer> rog: Since that's what both you and me are using
<niemeyer> rog: That said,
<niemeyer> rog: let's not announce the new interface right now
<niemeyer> rog: So we can move that onto lp:gozk by the time the next stable is out
<niemeyer> rog: So everybody is happy
<rog> niemeyer: ah, of course this still isn't public
<rog> niemeyer: cool.
<niemeyer> rog: Or rather, make it well known that the pkg name is now gozk/zookeeper
<rog> niemeyer: the official path is currently launchpad.net/gozk, right?
<niemeyer> rog: It is
<rog> niemeyer: the only problem is that i won't be able to straight-merge mathieu's patch, 'cos i'll have to do error-fixes first
<rog> niemeyer: but i can do that
<niemeyer> rog: Hmm..
<SpamapS> https://launchpadlibrarian.net/86149655/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr418-1juju1~precise1_FAILEDTOBUILD.txt.gz
<niemeyer> rog: Wasn't that fixed?
<SpamapS> juju hasn't been able to run its test suite in a clean chroot on precise for a while.. :-P
<niemeyer> rog: Or is it still in the queue?
<rog> niemeyer: i don't think it did an error-fixes on zk
<rog> s/it/i/
<rog> or...
<niemeyer> rog: If you want to quickly push that change, I'm happy to do a fast review on it
<rog> niemeyer: error-fixes before rename?
<rog> rather than combining?
<rog> currently i'm combining, but i can do two merges
<SpamapS> hazmat: think you can take a look at that build failure? we haven't had a good build on precise since r413
<SpamapS> hazmat: something with argparse and stale .pyc's
<hazmat> SpamapS, sure just wrapping up an email and i'll take a look
<rog> niemeyer: i think i'll do that. mathieu's patch should merge fine.
<niemeyer> rog: Yeah
<niemeyer> rog: I'm happy with either
<rog> i'll keep 'em separate.
<rog> niemeyer: that way mathieu gets his name on the merge
<niemeyer> rog: Good one
<rog> niemeyer: one small request BTW, when it gets stable, could we make lbox a little less verbose please?
<niemeyer> rog: +1.. I also had that in mind
<rog> niemeyer: one or two lines of output would be nice... (and a -verbose flag for debugging)
<hazmat> SpamapS, not sure what thats about, i'll try running the tests  on a precise image
<niemeyer> rog: Agreed
<niemeyer> rog: We may need more lines, due to the URLs, but not much else
<rog> niemeyer: cool
<rog> niemeyer: it'd be good if lbox could use parent branch for target URL if push branch of target isn't there.
<rog> (maybe it does now)
<rog> niemeyer: https://codereview.appspot.com/5440056/
 * rog loves gofix
<niemeyer> rog: It does already
<rog> niemeyer: cool
<rog> niemeyer: i'll update my lbox now
<rog> "lbox is already the newest version."
<rog> ah, apt-cache!
<niemeyer> rog: Reviewed
<niemeyer> rog: I love codereview :)
<hazmat> rog +1
<niemeyer> rog: apt-get update as well, maybe
<rog> niemeyer: that's what i was intending to mean, if i'd remembered the magic commands properly
<rog> niemeyer: another minor thing: it'd be nice if lbox didn't add a new diff to a codereview file if nothing changes in that file.
<mpl> rog, niemeyer: catching up: I don't care about my name on the merge btw. what only matters to me is to learn and that the other guys doing the work (ie you), know what I do too. :)
<rog> niemeyer: pushed
<rog> mpl: yeah, but it's nice as a historical record :-)
<rog> niemeyer: how do i create lp:gozk/zookeeper
<niemeyer> rog: We just have to rename the series.. let me do that
<rog> niemeyer: ah, ok. i just pushed to ~gophers/gozk/zookeeper
<niemeyer> mpl: Thanks, but it's just honest that we have your name in the merge proposal
<niemeyer> rog: Beautiful, if I haven't screwed up anything, it should all be working
<mpl> niemeyer: sure. as long as it's not a burden on the workflow.
<rog> niemeyer: all seems good. goinstall, cd $GOROOT/...; gotest, all clean
<niemeyer> Woohay
<mpl> niemeyer: I grepped for zk and zookeeper in lp:juju/go and found nothing relevant. I suppose I shall leave that one alone then?
<m_3> jcastro: yo... about to talk to upstream voltdb devs about charming... you wanna include anything?
<rog> i'm off now, see y'all tomorrow.
<m_3> rog: 0/
<mpl> bai
<rog> niemeyer: i said ec2-ec2test was next, but actually go-juju-initial-ec2 is before that, and is more central (there are some comments there from fwereade that need thinking about too)
<niemeyer> rog: No worries.. will look at that one
<marcoceppi> So, apparently you can't pass assoc arrays to bash functions. My whole morning is shot
<niemeyer> Folks, I have a doctor appointment, so leaving a bit early today.
<niemeyer> Wish me luck. ;)
<m_3> niemeyer: luck!
<SpamapS> marcoceppi: wha?
<jcastro> m_3: not really, invite them along to charm school if they want
<SpamapS> q/win 45
<SpamapS> doh
<marcoceppi> SpamapS: can't pass associative arrays to a bash function
<SpamapS> marcoceppi: the moment you use arrays in bash.. you should start thinking about the rewrite in python/ruby/perl ;)
<marcoceppi> SpamapS: heh, yeah. I tried moving what I was doing wit templating into a bash function. If only I could pass assoc arrays it would be perfect.
<SpamapS> marcoceppi: you may be overthinking it
<marcoceppi> This is what I have now, if you want to take a look
<m_3> marcoceppi: unfortunately the only associative array available is the environment itself... :(  I guess that gets "passed" everywhere tho
<SpamapS> yeah don't be mucking with global
 * m_3 sigh...
<SpamapS> no sigh.. this is why we have better languages. :)
<m_3> true
<SpamapS> Oh and you can, btw, pass in a bunch of local variables to be set.. its evil.. but eval lets you do basically anything. ;)
<m_3> it's frustrating sometimes... because it's _almost_ a real lang :)
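The eval trick SpamapS alludes to might look like this (illustrative only, and evil as advertised — eval executes arbitrary input, so it is only safe with trusted values):

```shell
#!/bin/bash
# Pass key=value pairs to a function and turn them into local
# variables via eval.
render() {
    local kv
    for kv in "$@"; do
        # e.g. kv="host=db1" makes eval run: local host=${kv#*=}
        eval "local ${kv%%=*}=\${kv#*=}"
    done
    echo "host=$host port=$port"
}

render host=db1.example.com port=3306   # prints: host=db1.example.com port=3306
```

As SpamapS says, once a script needs this, a rewrite in python/ruby/perl is usually the honest answer.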
<marcoceppi> http://paste.ubuntu.com/753978/
<marcoceppi> It was an attempt to make a quick template parser for charm-helper
<marcoceppi> sadly, it stops just short of working
<SpamapS> so...there may be a way yet to do this
<SpamapS> remember that functions have their own stdin/stdout
<SpamapS> so don't bother separating the variables/template
<marcoceppi> I thought about using a json parser, but then backed away slowly
<SpamapS> what you really want to do is handle the file bit of the template.. not the template itself
<SpamapS> replace_file /etc/mysql/my.cnf <<EOF
<SpamapS> or rather
<SpamapS> ch_replace_file
<SpamapS> marcoceppi: really tho.. I'm hesitant to encourage this level of logic in shell :)
<marcoceppi> So have the file come via stdin then each param is a string replace? ch_parse_tpl key1 val1 key2 val2 < /file/path.tpl
<marcoceppi> I think this idea is a bust though, at least in charm-helper terms for now
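SpamapS's suggestion — let the shell expand the heredoc and keep the helper dumb about templating — might be sketched like so. The name ch_replace_file is from the chat; the body (write via a temp file, then rename) is an assumption:

```shell
#!/bin/sh
# Write stdin to a file through a temp file, so the target is
# replaced atomically. The caller's heredoc already did the
# "templating" via ordinary shell variable expansion.
ch_replace_file() {
    target="$1"
    tmp=$(mktemp "${target}.XXXXXX") || return 1
    cat > "$tmp" && mv "$tmp" "$target"
}

port=3306
ch_replace_file ./my.cnf <<EOF
[mysqld]
port = $port
EOF
```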
<jcastro> SpamapS: hey let's clear up the scheduling for SCALE, he has the room set aside all day but we don't need it that long right?
<jcastro> I was thinking like, 2-5pm or something like that?
<SpamapS> jcastro: indeed, also do we have any idea if we'll have a chance to talk about it earlier in the day?
<SpamapS> marcoceppi: no, just do the variable substitution in calling code
<jcastro> SpamapS: ok, I will fire off another email to him, but we'd want only like a few hours in the afternoon right?
<SpamapS> marcoceppi: for looping.. you're already pushing the limits of shell.
<SpamapS> jcastro: I think so yes
<SpamapS> jcastro: make it easy for people to attend for 1 hour and still get to the other talks going on at the same time.
<SpamapS> jcastro: I feel like we can make use of the room earlier in the day too though.. call it the Juju lounge or something. :)
<marcoceppi> SpamapS: Bash can handle it!! http://www.youtube.com/watch?v=BhsTmiK7Q2M
<SpamapS> please summarize
<SpamapS> I refuse to watch videos about anything except babies/puppies/cats
<jcastro> charmers: incoming status.net charm!
 * SpamapS braces
<jcastro> SpamapS: ... and time for newbie question.
<jcastro> SpamapS: so you know how mediawiki we can add haproxy and scale the web part with add-unit.
<SpamapS> jcastro: yes
<jcastro> do we have a special designation for charms that allow that vs. charms that are just "single-stack" I guess
<jcastro> I was thinking for best practice for charms, we should encourage relationships with things that allow charms to scale up
<jcastro> so like "be add-unit friendly"
<SpamapS> well mediawiki is an app that has this built in
<jcastro> ah ok
<SpamapS> it has nothing to do with the charm
<SpamapS> we have discussed giving charms some metadata suggesting min/max units
<SpamapS> so that people don't add-unit on a stateful service and screw themselves over
<hazmat> SpamapS, wrt to that.. take something like mysql which can assume either a master or slave, in master mode it really only accepts a single unit, but slave can have multiple
<hazmat> although in that context, it seems like it might be better to separate out the slave as a separate charm
<_mup_> Bug #897834 was filed: charms should be able to spec max unit and maybe step <juju:New> < https://launchpad.net/bugs/897834 >
<SpamapS> hazmat: the way replication works now, same charm, separate service
<hazmat> SpamapS, right, but if we wanted to support that max unit metadata for the charm it would be splitting it out
<SpamapS> hazmat: I think for mysql, such a thing is not as useful.
<SpamapS> hazmat: but for say, game server charms it makes some sense.
<hazmat> SpamapS, why wouldn't it make sense for the mysql master, isn't the reasoning the same?
<SpamapS> hazmat: I don't want two charms for that. I'd prefer to have a mysql charm which is capable of being morphed from a slave into a master.
<hazmat> SpamapS, hm.. morphing is a good point, but if it has more than one unit when it morphs it's in an unknown state afaics
<SpamapS> hazmat: thats ok, we can promote the leader, and let the other two pound sand.
<_mup_> juju/rest-agent-api r403 committed by kapil.thangavelu@canonical.com
<_mup_> rest spec
<_mup_> juju/support-num-units r417 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<jimbaker>  hazmat, with ref to https://code.launchpad.net/~jimbaker/juju/support-num-units/+merge/81660 and your point about error handling on removing service units: using the analogy of rm in the shell, juju remove-unit should remove all the units it can in the list, logging any it can't remove (invalid name, not found, etc)
<hazmat> jimbaker, i'm not sure rm is a good analogy, but that behavior seems reasonable, currently its an exception on first failure
<jimbaker> hazmat, exactly
<hazmat> jimbaker, i don't follow
<jimbaker> hazmat, i'm just agreeing with your statement that it stops on first failure for juju remove-unit
<hazmat> rm will also do the same
<jimbaker> hazmat, no, it will remove the files it can in a list, reporting the ones it cannot
<jimbaker> hazmat, re whatever way we will go, i will simply add to the api change thread so it can be properly agreed upon
<hazmat> jimbaker, rm will error out if it encounters a file it can't remove
<hazmat> jimbaker, i guess not
<hazmat> nm
<hazmat> jimbaker, but yes that sounds like reasonable behavior, report errors after acting upon those that can be acted upon
<jimbaker> hazmat, so we have two things: 1) remove files that it can; 2) report a non-zero status code
<jimbaker> hazmat, cool
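Since the thread goes back and forth on what rm actually does with a partially removable list, it's quick to check — it deletes what it can, reports what it can't, and exits non-zero:

```shell
#!/bin/sh
# Demonstrate rm's behavior with a list containing a missing file.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"

rm "$dir/a" "$dir/missing" "$dir/b" 2>/dev/null
echo "exit status: $?"   # non-zero: "missing" could not be removed
ls -A "$dir"             # prints nothing: a and b were removed anyway
```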
<SpamapS> hazmat: any luck on precise? juju seems totally broken on my precise box
<jimbaker> i mean of course, remove service units that it can :)
<SpamapS> hazmat: http://paste.ubuntu.com/754204/ .. thats just running 'juju status' on my precise box
<marcoceppi> m_3: (or anyone) how did you deal with unattended installs and debconf?
<SpamapS> marcoceppi: the default frontend is noninteractive, so it should just choose the default
<SpamapS> marcoceppi: or rather, it is explicitly set that way in juju
<marcoceppi> I need to override a default selection
<SpamapS> debconf-set-selections
<marcoceppi> how do I know what selection to set?
<SpamapS> debconf-get-selections ? ;-)
<marcoceppi> I guess that's the question I meant to ask
<marcoceppi> I checked :)
<SpamapS> /var/lib/dpkg/info/$packagename.template should have the questions that will be asked
<SpamapS> after unpack of the package
<hazmat> SpamapS, doh.. forgot about it, my instance is still running though, un momento
<marcoceppi> interesting
<m_3> marcoceppi: notice that it does this in your fork https://gist.github.com/1336585
<marcoceppi> yeah, but I can't figure out where this selection is. It doesn't show up in the template
<m_3> debconf-get-selections | grep <someapp>
<m_3> might catch it if it's in an aux package
<marcoceppi> Ah, there we go. I needed debconf-utils
<marcoceppi> Coooool beans lets try this out.
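The debconf recipe pieced together above, as one sequence. It needs root on a Debian/Ubuntu system; mysql-server and its root_password question are a well-known example standing in for whatever package marcoceppi was overriding:

```shell
#!/bin/sh
# Preseed a debconf answer so a noninteractive install picks it up.
apt-get install -y debconf-utils      # provides debconf-get-selections

# Find the question to override (after unpack, the same questions are
# also listed in /var/lib/dpkg/info/<package>.templates):
debconf-get-selections | grep mysql-server

# Selection format: <package> <question> <type> <value>
echo "mysql-server mysql-server/root_password password s3kr1t" |
    debconf-set-selections

DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
```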
<jimbaker> hazmat, i'm going to suggest that the behavior in the branch be maintained. the reason is that there's precedent in juju terminate-machines to fail upon the first error
<hazmat> jimbaker, fair enough
<SpamapS> oh man, I miss my 40 core openstack cloud
<SpamapS> provisioned hadoop so fast
<_mup_> juju/support-num-units r418 committed by jim.baker@canonical.com
<_mup_> Review points
<moozz> bootstrap is giving me some pain
<moozz> Could not find any Cobbler systems marked as available and configured for network boot.
<moozz> anyone help?
<moozz> sux because cobbler is pushing them out
<_mup_> juju/trunk r419 committed by jim.baker@canonical.com
<_mup_> merge support-num-units [r=fwereade,hazmat][f=809599]
<_mup_> Modified juju add-unit and juju deploy to take --num-units/-n option;
<_mup_> juju remove-unit takes a list of service unit names to be removed.
<moozz> I am using latest ubuntu server btw
<moozz> the error comes in when I run juju bootstrap
<moozz> anyone able to offer some pointers?
<moozz> So I would have thought to make a cobbler system available I would enable the network boot flag for a system and then run juju bootstrap once
<moozz> ?
<moozz> anyone know much about this stuff?
<_mup_> juju/ssh-passthrough r413 committed by jim.baker@canonical.com
<_mup_> Review points
<_mup_> juju/ssh-passthrough r414 committed by jim.baker@canonical.com
<_mup_> Resolved conflicts
<hazmat> moozz, you also need to setup it up with a management class that juju is using
<_mup_> juju/trunk r420 committed by jim.baker@canonical.com
<_mup_> merge ssh-passthrough [r=fwereade,hazmat][f=812441]
<_mup_> Modified juju ssh to enable standard ssh arguments, by passing them
<_mup_> through to the underlying ssh command.
<jimbaker> hazmat, originally i proposed (i think from a recommendation by m_3 ) that juju scp canonicalize relative paths under /var/lib/juju/units/SERVICE-UNIT. however, this has two issues. 1) this path is owned by root, not the ubuntu user. 2) this is different for machine IDs, which uses /home/ubuntu, since there's no real good default
<jimbaker> so i propose a simple fix: all relative paths should be with respect to /home/ubuntu, that is default scp behavior
<m_3> moozz: sorry, haven't worked with juju on bare metal yet
<SpamapS> moozz: acquired-mgmt-class and available-mgmt-class need to be setup .. juju will only take machines if they are in available-mgmt-class
<hazmat> SpamapS, think i've got most of  the test failures on precise fixed, mostly it was timeouts due to status test setting up a large tree, i did find two minor fixes for tests that weren't running predictably 100%
<hazmat> doing one more full run to verify
<hazmat> jimbaker, sounds reasonable
<hazmat> jimbaker, i believe ssh does the same
<hazmat> juju ssh that is
<hazmat> SpamapS, what kind of box was that 40 core? and where can i get one? ;-)
<jimbaker> hazmat, cool. i will send out the email about this command, just found it when i was verifying some examples on the proposed api change. i had full mocks. but apparently i had missed when i actually tested against ec2
#juju 2011-11-30
<SpamapS> hazmat: Intel emerald ridge
<SpamapS> hazmat: re the test failures.. what about the argparse bits?
<hazmat> SpamapS, haven't run into those yet
<hazmat> SpamapS, i manually installed the dep packages, checked out and ran tests
<hazmat> SpamapS, do you have another mechanism you'd recommend to reproduce
<hazmat> jimbaker, bcsaller can i get a +1 on this trivial.. http://paste.ubuntu.com/754300/
<hazmat> that cleans up some of the tests to be more reliable
<moozz> thanks folks just back and reading your points here
<moozz> Spamaps, I open the cobbler web interface and check
<SpamapS> hazmat: a checkout doesn't seem to show the behavior, only the installed version does
<SpamapS> hazmat: its possible dh_python2 is broken or juju is doing something to confuse it
<moozz> SpamapS, my .juju/environment.yaml is the same as the one in section 2. https://wiki.edubuntu.org/ServerTeam/OrchestraJuju
<moozz> Spamaps, I also added: "default-series: oneiric"
<moozz> SpamapS, I also made sure management classes has orchestra-juju-available selected.
<moozz> SpamapS, I must have missed something else hey?
<SpamapS> moozz: can you do a 'cobbler system show-variables -n name-of-system' and pastebin it?
<SpamapS> moozz: you have to run that as root on the orchestra-provisioning-server
<moozz> Spamaps, will do
<moozz> SpamapS, to be honest I cannot remember
<bcsaller> hazmat: you have to yield on watch_d twice in a row?
<bcsaller> oh, bad read
<moozz> Spamaps, I ran: 'sudo cobbler system show-variables -n oscs1' but get usage error message. I think "show-variables" is not an option on my version
<hazmat> bcsaller, yeah... the exists_d wait, is just to be nice.. its not really needed
<bcsaller> hazmat: yeah, +1 on the trivial
<moozz> SpamapS, Should I run this instead perhaps: "sudo cobbler system dumpvars"
<hazmat> SpamapS, fixes committed, not sure about the argparse problem, i'll email barry about them
<hazmat> s/it
<SpamapS> moozz: yes thats the one
<SpamapS> moozz: sorry mixing 3 different things in my head ;)
<moozz> SpamapS, No probs. Here is the output -> http://pastebin.com/XSfGYW1M
<hazmat> jimbaker, does 2to3 suggest xrange?
<hazmat> that seems odd
<hazmat> i thought that was gone in py3
<moozz> SpamapS, Getting this error for a few cobbler commands: TypeError: cannot marshal None unless allow_none is enabled
<_mup_> juju/trunk r421 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] some minor test fixes from precise pkg builds [r=bcsaller]
<moozz> SpamapS: It is midday here in AU not sure what time it is where u r, am I bothering u?
<marcoceppi> config-get is available outside of the config-changed hook, correct?
<hazmat> marcoceppi, yup
<SpamapS> moozz: its 17:17 for me.. not bothering me at all but have a lot of stuff going on ;)
<marcoceppi> Right, that's what I figured. What about relation-get? seems it can't be used outside of its hook, correct?
<SpamapS> moozz: the dumpvars, I think you need to give it a -n system_name
 * SpamapS re-installs cobbler to follow along
<moozz> SpamapS: Says no such option
<moozz> SpamapS: will try with just host name
<moozz> SpamapS: sudo cobbler system dumpvars oscs1 gives the same output as the previous pastebin
<moozz> SpamapS: same with ip address
<SpamapS> moozz: sudo cobbler system list
<moozz> SpamapS: http://pastebin.com/x2R5YNFE
<moozz> SpamapS: I have a host list. These are the hosts I eventually wish to build openstack on.
<moozz> SpamapS: "sudo cobbler system dumpvars nova-compute" gives me the same error as in http://pastebin.com/XSfGYW1M
<SpamapS> moozz: alright, I'm trying to get a test system up to follow along
<SpamapS> moozz: --name nova-compute
<moozz> SpamapS: Got output, pasting now
<SpamapS> moozz: btw, you may want to apt-get install pastebinit
<SpamapS> really helpful for this
<SpamapS> cmd | pastebinit will put it into a pastebin for you :)
<moozz> SpamapS: thanks. http://pastebin.com/Z0Ni59Hv
<SpamapS> mgmt_classes : ['orchestra-juju-acquired']
<SpamapS> moozz: this means juju believe it has taken control of that system
<SpamapS> moozz: are you certain juju bootstrap failed?
<moozz> SpamapS yes
<moozz> SpamapS: I wonder how we can check for sure?
<SpamapS> moozz: juju status
<moozz> SpamapS: cool trick with pastebinit
<moozz> SpamapS
<moozz> SpamapS: ERROR juju environment not found: is the environment bootstrapped?
<SpamapS> moozz: ok, so change that mgmt class to orchestra-juju-available and try bootstrap again
<SpamapS> moozz: maybe do  juju -v bootstrap so you get verbose output
<moozz> SpamapS: http://pastebin.com/TWwZLB0g
<moozz> SpamapS: ok, I will list the env to check and out put to pastebin
<SpamapS> moozz: did you try bootstrap again yet?
<moozz> SpamapS is there a command line way to do this?
<moozz> SpamapS: Tried this "sudo cobbler mgmtclass edit --name orchestra-juju-available"
<SpamapS> moozz: no you'd want "sudo cobbler system edit --name nova-compute --mgmt-class 'orchestra-juju-available'
<moozz> SpamapS: Yay! It worked. Should I make all other hosts available too?
<moozz> SpamapS: Bummer, error actually: All available Cobbler systems were also marked as acquired (instances: MTMyMjUzMzEwOC42MjYwNDAxNjEuMjAwODk)
<SpamapS> moozz: Its best to make them available one at a time, so that you can make sure the service you're deploying goes to the system you want it on.
<SpamapS> moozz: hah, weird
<moozz> SpamapS: very
<SpamapS> moozz: dumpvars again, check the mgmtclass
<moozz> ok
<moozz> SpamapS: http://paste.ubuntu.com/754358/
<moozz> SpamapS: Seems set
<SpamapS> moozz: I just tried    sudo cobbler system edit --name=foo --mgmt-classes="orchestra-juju-available"   and it overwrote the acquired with only available
<moozz> SpamapS: Any idea what this error means then?
<SpamapS> moozz: on your system, you have both.. mgmt_classes : ['orchestra-juju-acquired', 'orchestra-juju-available']
<SpamapS> moozz: you need it to just have 'orchestra-juju-available'
<SpamapS> moozz: you're going to run into one problem here.. with only 3 machines.. you won't have anywhere to put rabbitmq and mysql
<moozz> SpamapS: Sorry about the dumb question but what does the "...acquired" do?
<SpamapS> moozz: available means "available for juju" , acquired means "taken by juju"
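For reference, the cobbler commands from this troubleshooting session collected into one sequence (run as root on the orchestra provisioning server; nova-compute is the system name from the transcript):

```shell
#!/bin/sh
# Inspect and reset a system's state for juju/orchestra provisioning.
cobbler system list                          # enumerate known systems
cobbler system dumpvars --name nova-compute  # check mgmt_classes and ks_meta

# Mark the machine as available for juju; juju flips this to
# orchestra-juju-acquired when it takes the machine over:
cobbler system edit --name=nova-compute \
    --mgmt-classes="orchestra-juju-available"

# Clear stale kickstart metadata if a failed bootstrap left some behind:
cobbler system edit --name=nova-compute --ksmeta=""
```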
<moozz> SpamapS: ok doing now? :-)
<moozz> SpamapS: Done, now "'dict' object has no attribute 'read'" after typing juju bootstrap. :-)
<SpamapS> moozz: *wha* ?
<moozz> SpamapS: Seems now when I run "juju bootstrap" I get the error "'dict' object has no attribute 'read'"
<SpamapS> moozz: I understand that. But maybe you could run it with '-v' and pastebin that?
<moozz> SpamapS: ok
<SpamapS> moozz: thats not a normal condition. :-/
<moozz> SpamapS: http://pastebin.com/nybeHuxY
<SpamapS> moozz: interesting.. can you try 'sudo cobbler system edit --name=nova-compute --ks-meta=""' then bootstrap again?
<SpamapS> moozz: I think there's an existing bug for this problem
<moozz> SpamapS: cobbler: error: no such option: --ks-meta; You are trying to send an empty document, exiting
<SpamapS> moozz: darn inconsistent cli.. change it to --ksmeta
<moozz> SpamapS: Same
<SpamapS> moozz: can you do a cobbler system dumpvars on it again and grep for 'ks_meta' ?
<moozz> SpamapS: Note the class is set to acquired
<moozz> ok
<SpamapS> moozz: acquired, but you're getting the dict/read bug?
<moozz> Spamaps: http://paste.ubuntu.com/754379/
<SpamapS> ks_meta : tree=http://@@http_server@@/cblr/repo_mirror/oneiric-x86_64
<SpamapS> doesn't seem like it got fixed
<moozz> SpamapS: I will try running the command again
<SpamapS> moozz: the ks_meta should be *empty*, mgmt class should be ONLY orchestra-juju-available ..
<moozz> SpamapS: tried to clear the ks_meta again but no luck
<moozz> SpamapS: hmm :-P
<SpamapS> moozz: --ksmeta="" works for me
<moozz> :-)
<moozz> SpamapS: can this be checked in the Web UI?
<SpamapS> moozz: yourserver/cobbler_web should have a web UI on it
<moozz> SpamapS: Just checked the UI and empty. ??
<moozz> SpamapS: No kickstart metadata there
<moozz> Spamaps: Checking inherit in the UI
<SpamapS> moozz: and the mgmt classs?
<moozz> SpamapS: mgmt classes empty
<SpamapS> moozz: so.. put the available one in ;)
<moozz> SpamapS: system: no ks_meta, should i remove <<inherit>> from the oneiric-x86_64-juju profile?
<SpamapS> moozz: definitely not!
<moozz> SpamapS: ok. Should the management classes have nothing set in them? They currently have nothing set.
<SpamapS> moozz: ... no.. they should have orchestra-juju-available
<moozz> SpamapS: The class exists but has no packages or files specified
<moozz> SpamapS: Sorry about all this
<SpamapS> um, packages? what?
<SpamapS> moozz: I think you may be confusing terms
<SpamapS> moozz: under the system record in the web interface, sub-menu "Management", the only thing that matters is "Management Classes"
<moozz> SpamapS: under: configuration->Management Classes-> orchestra-juju-available->resources-> nothing is set
<SpamapS> That doesn't mean anything
<SpamapS> Configuration->Systems
<SpamapS> thats where the magic happens
<moozz> SpamapS: ok
<moozz> SpamapS: Yes it is set
<moozz> SpamapS: Wonder where the dict issue is coming from
<SpamapS> dunno
<SpamapS> but I have to go now
<SpamapS> family is home
<SpamapS> moozz: good luck.. maybe send to the mailing list if you get stuck
<moozz> SpamapS: Many thanks and very sorry
<hazmat> SpamapS, cheers
<marcoceppi> Is there any limitation on the length of time a hook can run?
<SpamapS> marcoceppi: no, but there's some thought that it would be a good idea
<marcoceppi> Okay, I've got an install hook that's going to take about...45 minutes
<marcoceppi> So, just FYI
<SpamapS> marcoceppi: might it be better to put that in the background?
<marcoceppi> maybe, but how could I capture an install fail if it fails?
<SpamapS> marcoceppi: what will it be doing for 45 min?
<marcoceppi> one word: cpan
<marcoceppi> Hopefully there's an easier way to get all these Perl modules
<SpamapS> cpan?
<SpamapS> its not packaged?
<SpamapS> or rather..
<SpamapS> the stuff you need isn't packaged?
<SpamapS> marcoceppi: debian has *really* good tools for turning cpan stuff into debs
<marcoceppi> Oh, maybe it is packaged I'm following instructions from the INTERNETS
<SpamapS> marcoceppi: dh-make-perl in particular understands CPAN perfectly
<SpamapS> marcoceppi: koha?
<marcoceppi> SpamapS Yeah actually
<SpamapS> marcoceppi: saw your G+ post
<SpamapS> marcoceppi: surprised that they use debian and suggest CPAN
<marcoceppi> SpamapS: yeah, the install uses a mix of cpan and packages, I'm trying to track down all the packages now to avoid cpan
<SpamapS> marcoceppi: perl can be like wonderland... ;)
<koolhead17> hi all
<koolhead17> the instance id i provide in my yaml file is required to have a user name "ubuntu"
<koolhead17> i mean the instance ?
<SpamapS> koolhead17: what instance?
<SpamapS> koolhead17: do you mean the image id?
<koolhead17> SpamapS: yes
<koolhead17> am running Juju on openstack
<koolhead17> *trying
<SpamapS> koolhead17: the standard ubuntu 11.10 cloud images should work fine on openstack
<koolhead17> SpamapS: am not using the cloud image, am using the amd64 11.10 server image with my own customization as per my network requirements
<SpamapS> ah so you're using the orchestra provider?
<koolhead17> SpamapS: no
<koolhead17> :)
<SpamapS> well then I'm confused
<SpamapS> why won't the cloud images work?
<koolhead17> SpamapS: this is what i have done
<koolhead17> 1. my own openstack infra is up and running
<koolhead17> 2. over that i am trying to install juju and try/experiment with the charms
<SpamapS> koolhead17: great, so, load the Ubuntu cloud images into glance, and you should be all set.
<koolhead17> 3. since my nova and internal network both are behind proxy i need to mention that info in my /etc/apt/config
<SpamapS> koolhead17: I don't think charms assume the ubuntu user.. but I do think juju might assume it for ssh purposes.
<koolhead17> SpamapS: yes, juju assumes it, so i was wondering if i should remaster the image, creating a user ubuntu
<koolhead17> :)
<SpamapS> koolhead17: well you can rebundle the Ubuntu cloud image with your changes.
<koolhead17> SpamapS: cloud image has no username/passwd i can log in with :(
<koolhead17> smoser suggested another way; i will try to do that when am in office
<koolhead17> SpamapS: cloud image log in is keybased
<koolhead17> that is why i was using the amd64 image instead cloud image :(
<SpamapS> koolhead17: juju expects keys too
<koolhead17> SpamapS: i think the metadata server/nova does that for juju
<koolhead17> SpamapS: let me ping you in say 1 hr once am there :)
<SpamapS> koolhead17: will be asleep by then
<SpamapS> about to sign off
<SpamapS> koolhead17: sorry... good luck tho.. :)
<koolhead17> SpamapS: np. will keep you posted. :D
<_mup_> Bug #898082 was filed: immediate agent restart fails <juju:New> < https://launchpad.net/bugs/898082 >
<niemeyer> Hello jujuers!
<koolhead11> hello niemeyer :)
<rog> niemeyer: mornin'!
<niemeyer> koolhead11, rog: What's up for the day?
<rog> niemeyer: my aim for the day is (at least) to get go juju to start an instance with zookeeper running on it, with appropriate tests.
<niemeyer> rog: Woohay!
<niemeyer> rog: Mine is to unblock your ec2 branch :-)
<rog> niemeyer: oh please, please!
<rog> niemeyer: mind you, your reviews will probably scupper my forward progress at tip :-)
<niemeyer> rog: Yeah, kind of expected.. sorry about that
<rog> niemeyer: that's fine, it will be great to remove the sword dangling over my head...
<rog> niemeyer: BTW when i come to implement a piece of functionality, i'm assuming if in doubt it's best to just port the existing python code (modulo Go structural changes) and all its heuristics, to try and avoid second system effect. is that the right thing to do?
<koolhead11> hola rog :)
<rog> koolhead11: hi
<niemeyer> rog: Absolutely
<rog> niemeyer: good
<niemeyer> rog: We can refactor down the road much more easily, and introducing more significant differences won't be a problem when there isn't a second version
<rog> niemeyer: that's what i thought.
<koolhead11> niemeyer: am not using the custom cloud image, so i have 3 changes in my server image: 1. get cloud-init installed 2. user ubuntu needs to be created and my juju system public key has to be there, and at the same time i need to have the public key info of the image
<rog> niemeyer: but it does mean that code is more complex initially than it would be for the functionality that currently exists
<rog> s/would be/needs to be/
<niemeyer> rog: Hmm, do you have some example?
<niemeyer> koolhead11: I'm out of context.. all of those things work well in a standard Ubuntu image. What's going on there?
<koolhead11> niemeyer: am using 11.10 server 64 bit image
<koolhead11> and just added info of my proxy server in it /etc/apt/config
<rog> niemeyer: for instance, the cloudinit stuff talks about provisioners, but we don't have a provisioner yet
<koolhead11> niemeyer: 2011-11-30 18:15:15,815 ERROR Cannot connect to machine i-0000002e (perhaps still initializing): Invalid SSH key
<niemeyer> rog: Can you paste a snippet just to make the idea more concrete?
<koolhead11> 2011-11-30 18:16:05,767 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="192.168.4.4" remote_port="2181" local_port="49900".
<rog> niemeyer: http://paste.ubuntu.com/754811/
<niemeyer> koolhead11: Yeah, for some reason it can't login into the ubuntu account
<niemeyer> koolhead11: It's really hard for me to tell you why because you have an image that does not match the standard cloud image, apparently
<rog> niemeyer: the structure is somewhat different, but the code is pretty much exactly taken from the python
<niemeyer> koolhead11: I recommend trying to login, by hand, onto the machine
<niemeyer> koolhead11: ssh ubuntu@<ip>
<niemeyer> koolhead11: This should work
<niemeyer> koolhead11: If it doesn't, it means cloud-init failed to put your key in the proper location in the proper way, for reasons that will have to be found
<koolhead11> niemeyer: it's asking for a password
<koolhead11> niemeyer: but am not sure if ubuntu server iso has a user with name ubuntu :(
<niemeyer> rog: I see your point. Haven't evaluated the code, but the structure looks fine.
<rog> niemeyer: good.
<niemeyer> koolhead11: Yeah, it shouldn't ask for a password, because bootstrap sent your ssh key there
<rog> niemeyer: that file is the moral equivalent of providers/common/cloudinit.py, BTW
<niemeyer> koolhead11: Which cloud-init should install in the system
<niemeyer> koolhead11: In the Ubuntu user
<koolhead11> niemeyer:  0.6.1-0ubuntu22
<niemeyer> koolhead11: ?
<koolhead11> my custom image has no user ubuntu
<koolhead11> niemeyer: my cloud init version
<koolhead11> :p
<niemeyer> koolhead11: You'll have to change your image so it behaves like the other cloud images.
<koolhead11> niemeyer: and what else will it have apart from cloud-init, a user ubuntu?
<niemeyer> koolhead11: I apologize, but I have no way to tell how your custom image differs from a standard image
<niemeyer> koolhead11: I'd use the standard images instead
<koolhead11> niemeyer: okey. let me dig into it. Will keep you posted :D
<niemeyer> koolhead11: Cool, sorry about that
<koolhead11> np
<rog> niemeyer: BTW, i'm looking at get_user_authorized_keys in common/utils.py. how much of that is necessary? for instance, we haven't got an implementation of os.path.expandvars AFAIK, but perhaps that functionality isn't much used.
 * niemeyer looks
<niemeyer> rog: I'm a bit surprised to see it there as well
<rog> niemeyer: there's also the singular/plural thing, which i think is probably a bug in the original, if authorized-keys has more than one key
<niemeyer> rog: How do you mean
<niemeyer> rog: Ah, the comment
<niemeyer> rog: The comment is bogus
<niemeyer> rog: The name of the file is "authorized_keys", and it can contain more than one key concatenated to each other
<rog> niemeyer: not just that. depends whether append can take an item and/or a list. i can check.
<niemeyer> rog: That's where the name comes from
<rog> niemeyer: it looks like authorized-keys is a key in the environment config file
<rog> niemeyer: which contains literal keys
<niemeyer> rog: ~/.ssh/authorized_keys is the name of the ssh file
<niemeyer> rog: Where that setting will end up in
<rog> niemeyer: but:
<niemeyer> rog: The file accepts a concatenated list of keys
<rog> if config.get("authorized-keys"):
<rog>         return config["authorized-keys"]
<niemeyer> rog: Yes, that's fine?
<rog> isn't that getting a set of literal keys from the config file?
<niemeyer> rog: This is getting a value from the config file
<niemeyer> rog: That should contain the text to be inserted into ~/.ssh/authorized_keys
<rog> niemeyer: so authorized-keys contains only one string?
<niemeyer> rog: Yes, the text to be inserted into ~/.ssh/authorized_keys
<rog> niemeyer: ok, that's cool then. i'm not familiar with all the ssh config stuff
<niemeyer> rog: No worries
<rog> niemeyer: presumably all the heuristics in get_user_authorized_keys are standard, but it's all somewhat opaque to me
<niemeyer> rog: I'll change the Python version to remove the bogus comment
<rog> niemeyer: hmm, in that case shouldn't it return the concatenation of all the keys in key_names?
<niemeyer> hazmat: ping
<rog> lunch
<niemeyer> rog: I don't know what you mean by that
<hazmat> niemeyer, pong
<niemeyer> rog: Well, potentially, but the current is fine as well
<niemeyer> hazmat: Quick cowboying:
 * hazmat grabs a hat
<niemeyer> hazmat: http://paste.ubuntu.com/754828/
<hazmat> niemeyer, +1
<niemeyer> hazmat: Cheers
<hazmat> niemeyer, was the variable stuff causing a problem?
<hazmat> just unexpected?
<niemeyer> hazmat: Not exactly, but there's no _trivial_ function like that yet, and it's currently unused/untested/unnecessary in the Python side, so I decided to try the easy way of keeping the compatibility :)
<hazmat> niemeyer, fair enough
<hazmat> i have to help out building a garage out back for a little bit, i'll be back in 40m
<niemeyer> hazmat: Cool, see you soon
<hazmat> looks like thats already well in hand by others
<_mup_> juju/trunk r422 committed by gustavo@niemeyer.net
<_mup_> Removing bogus comment from get_user_authorized_keys (with
<_mup_> justification comment), and removing untested/unused/unnecessary
<_mup_> use of expandvars so that it's simpler to keep compatibility.
<_mup_> [r=hazmat]
<niemeyer> rog: ^
<rog> niemeyer: thanks
<rog> niemeyer: BTW is there a reason why authorized-keys-path isn't mentioned in environment/config.py ?
<rog> niemeyer: doesn't that mean it won't be allowed if someone does actually try to specify it?
<niemeyer> rog: No, should be mentioned
<rog> niemeyer: and authorized-keys too, presumably
<niemeyer> rog: as optional
<niemeyer> rog: yeah
<rog> niemeyer: hmm, shows it's not much used :-)
<niemeyer> rog: It does not mean that, though
<niemeyer> rog: Not really.. this is used by _every single deployment_
<rog> ah
<rog> so any key not mentioned in environment/config.py is actually allowed?
<niemeyer> rog: KeyDict doesn't kill unknowns
<rog> niemeyer: ah, so unknowns just don't get any checking, but they still go in
<niemeyer> rog: Right
<rog> damn, i was assuming that environment/config.py mentioned everything
<rog> i wonder what other keys are lurking around
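The KeyDict behaviour that surprises rog here — unknown config keys pass through without any checking — can be sketched roughly as follows. The key table and `check_config` are hypothetical illustrations, not juju's actual schema code:

```python
# Sketch of a permissive schema check: known keys are validated,
# unknown keys pass through untouched (the KeyDict behaviour
# discussed above). Names here are hypothetical.

KNOWN_KEYS = {
    "authorized-keys": str,
    "authorized-keys-path": str,
}

def check_config(config):
    """Return config unchanged, raising only if a *known* key has a bad type."""
    for key, value in config.items():
        expected = KNOWN_KEYS.get(key)
        if expected is not None and not isinstance(value, expected):
            raise TypeError("%s must be a %s" % (key, expected.__name__))
    return config

# An unknown (possibly mistyped) option is silently accepted:
cfg = check_config({"authorized-keys": "ssh-rsa AAAA... user@host",
                    "mystery-option": 42})
```

The upshot, as the conversation notes, is that keys not mentioned in environment/config.py still go in; they just get no validation.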
<rog> niemeyer: just to check, both authorized-keys and authorized-keys-path should go in as String() ?
<hazmat> fwereade, SpamapS was helping someone with orchestra last night (8hrs ago), and they pointed to some traceback they where able to get in juju.. http://pastebin.com/nybeHuxY just wondering if this rang any bells
<mpl> niemeyer: hi. do you want the logger to be used in the charm and schema packages as well? i.e. should I replace fmt calls there with logF calls, for example?
<fwereade> hazmat, well, I can see what's *happening*, but colour me surprised
<hazmat> f
<hazmat> odd, my irc client spontaneously froze and reconnected
 * hazmat kicks xchat
<fwereade> hazmat, I can take a proper look at it soon if you like?
<hazmat> fwereade, the irc logs have some additional context, i dunno that it was setup correctly, but i doubt a user would know it from the traceback
<fwereade> hazmat, well, it's weird, it looks like the data is coming back in a format it would be *lovely* if it did come back in, but hasn't IME
<fwereade> hazmat, I'll kick it around and see what I can come up with
<hazmat> fwereade, for context.. most of the  conversation with 'moozz' is here http://irclogs.ubuntu.com/2011/11/30/%23juju.html
<fwereade> hazmat, I'm reading it, thanks :)
<rog> niemeyer: looking in the final cloudinit code, it does seem to assume that each entry in ssh_authorized_keys is a separate key.
<rog> niemeyer: it can put a prefix on every key, which won't work if there's more than one line in a key
<niemeyer> rog: Where's that?
<rog> in cloudinit/CloudConfig/cc_ssh.py
<rog> in the cloudinit source
<rog> niemeyer: line 98, in particular
<rog> niemeyer: it will probably work de-facto because we won't be using disable_root but i think it's probably worth getting right
<rog> niemeyer: add_ssh_key in common/cloudinit.py should probably be add_ssh_keys otherwise too
<niemeyer> rog: Looking
<niemeyer> rog: Yeah, there's a minor mismatch in the Python code indeed
<niemeyer> rog: It's not broken because it never reaches the point of actually dealing with the keys
<rog> niemeyer: i think get_user_authorized_keys should return a list
<niemeyer> rog: Yeah, that sounds fine
<niemeyer> rog: But it should not require a list from the user
<niemeyer> rog: In the configuration/file, that is
<rog> niemeyer: it's slightly broken, i think, because add_ssh_key can be called with multiple keys in the same string
<rog> niemeyer: which will lead to the above mentioned mismatch in the final cloudinit code
<niemeyer> rog: I mean it's not broken in the sense that it works
<rog> niemeyer: unless you're relying on disable_root, i guess
<niemeyer> rog: Which we're not, so it works
<rog> niemeyer: yup
<niemeyer> rog: You're right that it's logically incorrect
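The mismatch being agreed on here is easy to demonstrate: cloud-init prefixes each *entry* of ssh_authorized_keys separately (as the disable_root handling does), so a single string holding several concatenated keys gets only one prefix. A rough illustration with a made-up prefix, not cloud-init's actual code:

```python
# Rough illustration of the cc_ssh mismatch discussed above: a
# per-entry option prefix is applied to each element of the key list,
# so a blob holding two keys in one string only gets one prefix.
# The prefix text here is made up for illustration.

PREFIX = 'command="/bin/echo disabled" '

def apply_prefix(entries):
    return [PREFIX + e for e in entries]

blob = "ssh-rsa AAA1 a@host\nssh-rsa AAA2 b@host"

# Passed as one entry: the second key ends up unprefixed.
wrong = apply_prefix([blob])

# Split into separate keys first, and each one gets the prefix.
right = apply_prefix(blob.splitlines())
```

Which is why, as the thread concludes, it only "works" because disable_root isn't being relied on.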
<rog> i wonder if authorized-keys-path should allow a set of names
<niemeyer> rog: No
<niemeyer> rog: It should accept a single path, with a list of keys, in the well known format of an authorized_keys path.. no invention there.
<rog> niemeyer: ok. so should get_authorized_keys make sure to return at most one key?
<rog> er, no
<niemeyer> rog: Yeah, it sounds like a good interface as far as Go is concerned
<niemeyer> rog: (keys []string, err error)
<rog> yup. hmm, authorized-keys could accept a set of keys
<niemeyer> rog: Well.. that said, it's trivial to test for list(keys) == 0
<niemeyer> rog: Hmmm.. but we need the error anyway, so +1 on (keys, err)
<niemeyer> rog: and erroring if not keys are ofund
<niemeyer> found
<niemeyer> no
<niemeyer> Erm
<rog> lol
<niemeyer> rog: and erroring if no keys are ofund
<niemeyer> Ok, I can't type.. I hope you get it :)
<rog> i do
<rog> niemeyer: i do wonder about authorized-keys though
<niemeyer> rog: It's too late to wonder.. it already has a format, and we should follow it for the moment
<rog> niemeyer: presumably it will only allow one key currently (it'll probably produce a malformed cloudinit if you put in something else)
<rog> ok
<rog> it should really be authorized-key :-)
<niemeyer> rog: Nope
<niemeyer> rog: For the same reason that authorized_keys is keys.. it accepts multiple keys.
<niemeyer> rog: We can handle a list internally consistently, while accepting the authorized_keys format in those options
<rog> niemeyer: sounds good
<_mup_> juju/trunk r423 committed by gustavo@niemeyer.net
<_mup_> Add a comment to get_user_authorized_keys pointing out the
<_mup_> inconsistency spotted by Roger.
<rog> niemeyer: actually, on balance, i think it's easier to treat the authorized_keys contents as universal currency, and do the split when marshalling the cloudinit file.
<rog> thus http://paste.ubuntu.com/754923/
<niemeyer> rog: Sounds fine, except this function looks weird
<rog> and http://paste.ubuntu.com/754925/
<niemeyer> rog: This feels like over-engineering things a bit
<rog> niemeyer: it ends up simpler, i think
<niemeyer> rog: get_user_authorized_keys is a single function, small, that does what we need
<rog> niemeyer: currently, get_user_authorized_keys relies on poking inside config
<rog> niemeyer: i was trying to avoid that
<rog> niemeyer: and in the process make a more generally useful function
<rog> niemeyer: but i'm open to different ideas
<niemeyer> rog: The motivation is good, but there's missing functionality, and I'm afraid you'll end up implementing pretty much the full logic for the current function elsewhere, in addition to the generic version you came up with
<niemeyer> rog: Well, or maybe not..
<niemeyer> rog: I guess the problem is mainly on the documentation
<rog> niemeyer: i think the only logic missing is this: http://paste.ubuntu.com/754934/
<rog> which seems reasonable
<niemeyer> rog: Agreed, you just have to fix the documentation
<niemeyer> rog: This function doesn't _find_ an authorized_keys file, it _assembles_ one.
<rog> niemeyer: it returns the contents of an authorized_keys file, no?
<niemeyer> rog: Then, the fact it takes an argument is weird.. if you pass the argument in, this function becomes ioutil.ReadFile
<rog> niemeyer: yeah, i know.
<niemeyer> rog: No, read the Python function again
<rog> ah, no you're right it's not.
<rog> it expands home dirs
<niemeyer> rog: Ah, ok
<niemeyer> rog: Sounds fine then
<rog> niemeyer: that needs documenting tho
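A minimal sketch of the behaviour being described, assuming the Python side's semantics: an inline `authorized-keys` value wins, otherwise the possibly `~`-relative `authorized-keys-path` is expanded and read. The function name and default path are illustrative:

```python
# Hedged sketch of get_user_authorized_keys-style behaviour: it does
# not just read a file, it also honours an inline config value and
# expands "~" in the path. Names and default are illustrative.
import os.path

DEFAULT_KEYS_PATH = "~/.ssh/authorized_keys"  # assumed default

def get_authorized_keys(config):
    """Return the authorized_keys text for an environment config dict."""
    keys = config.get("authorized-keys")
    if keys:
        return keys  # inline keys take precedence
    path = config.get("authorized-keys-path") or DEFAULT_KEYS_PATH
    with open(os.path.expanduser(path)) as f:
        return f.read()
```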
<niemeyer> rog: Btw, your original point of assembling the file with all keys sounds fine
<jcastro> mainerror: ok I owe you some testing
<jcastro> I mean marcoceppi
<rog> niemeyer: good. i started to do it the other way, but this seemed better.
<niemeyer> rog: That said, I wonder if we should be returning a list indeed.. it feels like a good point for doing that, even more given that we'll have to concatenate the keys
<rog> niemeyer: i don't think we ever concatenate keys
<niemeyer> rog: If you assemble the file with all keys, how do you plan to return all of them?
<rog> niemeyer: there's only ever one file used
<niemeyer> <niemeyer> rog: Btw, your original point of assembling the file with all keys sounds fine
<rog> oh... do you mean my *original* point?
<rog> back ages and ages ago
<niemeyer> :-)
<rog> i think i'd just concatenate them.
<niemeyer> rog: I'd prefer to have a list..
<niemeyer> rog: We'll need the list no matter what
<rog> niemeyer: yeah, at one level or other.
<niemeyer> rog: But, I'm fine either way
<rog> niemeyer: i'll see how it goes with a list. i'll probably have a separate parse function
<niemeyer> rog: Cool
<rog> niemeyer: that's called by AuthorizedKeys
<rog> niemeyer: the parsing is trivial, luckily
<niemeyer> rog: Indeed
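The "trivial" parse amounts to splitting authorized_keys-format text into one key per line, dropping blanks and `#` comment lines; a hedged sketch (the real Go implementation being discussed may differ):

```python
# Sketch of the parse step discussed above: authorized_keys format is
# one key per line, with blank lines and "#" comments ignored.
def parse_authorized_keys(text):
    """Split authorized_keys-format text into a list of individual keys."""
    keys = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            keys.append(line)
    return keys
```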
<rog> niemeyer: is it possible for an authorized_keys file to be malformed?
<niemeyer> rog: :-)
<rog> niemeyer: just wondering whether ParseAuthorizedKeys needs to return an error...
<niemeyer> rog: I'd say not at this time, at least..
<niemeyer> rog: Btw, s/Parse/parse/, please
<niemeyer> rog: Oh, crap.. we may need it to parse the config option
<rog> niemeyer: hmm. i was thinking of having it in the public interface, so external agents can easily add their own authorized_keys files to cloudconfig if they want.
<niemeyer> rog: Yeah, sorry, sounds sane
<rog> oh yeah.
<rog> *that*'s why i wanted to have it return a string
<rog> because otherwise the external logic becomes more complex
<niemeyer> rog: Ok, I'm convinced.. :)
<rog> lol
<rog> i'll go with that until i decide otherwise :-)
<niemeyer> Ouch.. I missed a call
<hazmat> niemeyer, when you've got a minute, i replied to the clean stop merge proposal, with some comments and a link to a stop protocol i'd like to discuss
<jcastro> m_3: ok I'm starting with phpmyadmin and then on to status net
<niemeyer> hazmat: Cool.  I need to get some lunch, and will need to interview someone afterwards if I manage to reschedule the slot I just missed, but then we can talk.
<hazmat> niemeyer, sounds good, i'm around, just ping
<niemeyer> hazmat: Will do
<jcastro> marcoceppi: ping me when you're around
<jcastro> m_3: SpamapS: you guys too, we have an opportunity to bikeshed!
<mpl> rog: I'm having trouble setting a go env where I can satisfy all the deps at the same time without having to resort to gofix. (gotest, goetveld, juju itself etc). Have you decided on which go version you're following for most of the tools?
<rog> mpl: i'm moving to weekly for everything
<mpl> hmm, and so is gustavo?
<rog> mpl: some things i've run gofix on but not submitted a merge request. i've forgotten which ones :-)
<rog> mpl: yeah, i think so. os.Error is just old hat these days :-)
<mpl> ouch. gocheck for ex seems to still use os.Error
<mpl> exactly.
<rog> mpl: i might have pushed a merge req for gocheck. i'll just see.
<mpl> thx
<rog> mpl: yeah. https://code.launchpad.net/~rogpeppe/gocheck/fix-error-printing
<rog> it's still pending
<rog> niemeyer: ^
<mpl> ok, good to know, thx.
<mpl> I wish there was a goinstall mode where it would run gofix on the fly.
<rog> mpl: i think there's such a thing just been implemented
<mpl> cool
<m_3> jcastro: yo
<jcastro> m_3: hey so marco has a use_upstream config option
<m_3> nice
<jcastro> which is basically "get upstream instead of the package"
<jcastro> which is fine, but I think that we should sort out what we should call it
<jcastro> as I'm sure a bunch of charms will want to do this
<m_3> right, discussed yesterday
<jcastro> ah ok
 * jcastro goes reading IRC logs
 * m_3 looks to see if marco added it to charm-tools
<m_3> jcastro: may've been the day before... dunno
<jcastro> oh ok
<jcastro> so we have the same way for everybody, that's all I was asking
<jcastro> m_3: thank you for not making it "promulgate_upstream=true"
<m_3> ha!
<jcastro> ok the phpmyadmin stuff works for me
<m_3> jcastro: cool... finishing ts3 review, then I'll do phpmyadmin
<jcastro> ok I think we have status.net incoming today too
<m_3> rockin
<jcastro> m_3: oh hey, remember to manually poke me when you accept one just in case I miss it, so I can start the blog entry
<m_3> jcastro: cool... will do.  I'll ping you when it's actually promulgated to lp:charm... there's sometimes a time delay between fix-released and prom
 * jcastro nods
<jcastro> SpamapS: I'm going to start a wiki page on this: https://bugs.launchpad.net/charm/+bug/894825/comments/8
<_mup_> Bug #894825: Teamspeak charm needed <new-charm> <juju Charms Collection:Triaged> < https://launchpad.net/bugs/894825 >
<jcastro> and add it to the steps, because we need to start forcing ourselves to document this
<SpamapS> jcastro: right, charm guide. :)
<jcastro> I was going to go with "charm snippets"
<jcastro> but ok, charm guide it is
<SpamapS> I don't like the term snippets... it implies that this is trivial stuff.
<jcastro> SpamapS: can you add the "get the tarball" one here to this page? That outta be enough for now.
<jcastro> https://juju.ubuntu.com/CharmGuide
<SpamapS> jcastro: Talk accepted for SCALE!!!
<jcastro> \o/
<SpamapS> jcastro: I wonder if we can use something like doxygen and have inline comment blocks document the charm helper API
<koolhead11> SpamapS: hey
<SpamapS> jcastro: ooo.. ignite talks http://www.socallinuxexpo.org/scale10x/events/upscale
<jcastro> SpamapS: oh man
<jcastro> SpamapS: I started saying "we should do one" but then that would be like 4 events for us
<robbiew> plus we're sponsoring the Cloud & Virt track
<robbiew> I think we're good ;)
<robbiew> especially if I can get the shirts sorted
<SpamapS> I might do one
<SpamapS> but it won't be ubuntu or juju related.. just something funny I've been working on
<jcastro> so far we'd do devops day lightning talk (if we can get it), charm school, and then our normal talk
<marcoceppi> jcastro: hey
<jcastro> marcoceppi: lunching, catch you on the flip side, my ride is here
<m_3> damn, "PTY allocation request failed on channel 0" suddenly crept up again for LXC
<marcoceppi> m_3: I get that all the time on another server, but only when I connect from Ubuntu; when I do it from a Mac OS X terminal, it doesn't occur.
<m_3> marcoceppi: I have to flush the lxc cache when it happens... got about two weeks of lxc runs out of this last one
 * hazmat lunches
<fwereade_> back shortly, early supper
<SpamapS> m_3: have we reported that bug yet? seems like its an LXC bug
<jcastro> marcoceppi: ok back, phpmyadmin worked for me
<jcastro> I have a dumb question though
<marcoceppi> jcastro: Sweet! I'm still working on the package thing but that's coming along nicely
<jcastro> what happens if you set the use_upstream config after you've deployed?
<marcoceppi> Oh?
<jcastro> I tried setting it before I deployed, but got an error
<marcoceppi> Well, use_upstream doesn't do anything
<jcastro> ah, so that's why I can't find anything in there
<jcastro> ok. :)
<marcoceppi> when this is done, what will happen: if the package is installed, it'll purge the package, do an install from upstream, then re-generate the config; vice versa for upstream -> pkg
 * jcastro nods
<jcastro> that will be handy
<jcastro> this could be an awesome use case right
<jcastro> you use the packaged version
<jcastro> and then you run into a bug.
<jcastro> you report it and doing your due diligence you find it was fixed in the latest upstream release
<jcastro> you can "switch" the instances you care about
<jcastro> test, report back, then eventually -> SRU.
<SpamapS> marcoceppi: *saweeet*
<marcoceppi> Yeah, I hope to have it pushed up for review by the end of the day so I can call this one a wrap
<jcastro> <3
<SpamapS> Though what would be really great would be an auto-backports PPA that just has all the latest versions of software in the Ubuntu dev release in it for all the supported versions so you don't have to go "off the reservation" :)
<jcastro> marcoceppi: george's statusnet charm will need a review soon
<marcoceppi> SpamapS: What's edge if you don't bleed a little :)
<marcoceppi> jcastro: Yeah, I was talking to him about it last night. He's just wrapping up a few logistics IIRC
<jcastro> marcoceppi: awesome
<jcastro> marcoceppi: that's 2 for him, we should get him reviewing soon
<jcastro> I fear we will kill m_3 otherwise
 * marcoceppi should help review charms more too
<jcastro> I saw a guy file a bug on cherokee webserver today
<jcastro> let's hope he's working on it. :)
 * SpamapS will review some stuff soon too. :)
<SpamapS> jcastro: cherokee + ?
<jcastro> it's a webserver
<SpamapS> webservers are not really charmable ;)
<SpamapS> They're as charmable as "python"
<SpamapS> like Johnny 5, they neeeeeeed iiinnnpuuuuuutt
<jcastro> well then, we'll see what he comes up with
<jcastro> SpamapS: you need to explain the difference between that and things like ntp to me
<jcastro> but we can do that over voice later.
<jcastro> ie. how come ntp is charmable but not a web server
<SpamapS> NTP takes input from the system clock, other NTP sources, etc.
<SpamapS> same w/ MySQL.. it takes input from mysql clients
<SpamapS> so maybe cherokee + ftp for static hosting.. that would be a cool charm.
<SpamapS> but whats more compelling is like, cherokee + php-fcgid for a cherokee PHP framework charm
<jcastro> SpamapS: nod
<jcastro> SpamapS: is your talk scheduled? I'd like to announce our presence at scale on cloud.u.c
<m_3> SpamapS: man... that's a blast from the past
<SpamapS> jcastro: schedule seems to not be public yet.
<jcastro> ok
<SpamapS> m_3: cherokee?
<m_3> Johnny 5
<SpamapS> OH haha
<SpamapS> Steeeeeeeefaaaanniieeeee
<m_3> haven't reported lxc bug yet... haven't taken the time to root it out enough for a decent bug report
<m_3> wasn't it like "no 5 is alive!"
<m_3> title?
<mpl> niemeyer: ok, I have to ask now. what's wrong with using the standard log package for what you want me to do?
<niemeyer> mpl: What we discussed actually depends on the standard logging package
<niemeyer> mpl: It's just accommodating its usage
<hazmat> jcastro, i'd add kafka to the list
<hazmat> the hot and hairy
<jcastro> hazmat: apache kafka?
<hazmat> jcastro, yup
<jcastro> on it
<mpl> niemeyer: at the moment there's no output/printing afaics in the juju package. so I have the log.go file ready, but none of its funcs are used. should I add some test func (in config_test.go for example) which uses those log functions?
<m_3> negronjl: jeez, I'm still testing mongo... the test-stack I was using yesterday had bugs and I chased down that rabbit-hole all afternoon
<niemeyer> mpl: Sounds good
<niemeyer> mpl: You can actually create log_test.go, though
<niemeyer> mpl: Since that's unrelated to config
<mpl> indeed
<mpl> niemeyer: I'm thinking instead of only one prefix ("JUJU "), there could be another one specifically for debug funcs (like "JUJU:DEBUG "). what do you think?
<niemeyer> mpl: Sounds like a good idea
<rog> niemeyer: how're you feeling about go-juju-initial-ec2?
<niemeyer> rog: I'm feeling bad, because I haven't sent you a review yet
<niemeyer> rog: But bear with me, you'll have something today
<rog> niemeyer: i'm feeling a bit like i'm on the end of a bendy branch, as i keep making modifications on a stack of 3 merge requests :-)
<rog> niemeyer: am happy about that
<niemeyer> rog: Yeah, it's time to go back to the bottom of the stack
<rog> hopefully some of what i've done in the meantime will still be valid
<niemeyer> rog: I have the interview done, am organizing a project plan for something entirely unrelated that I was asked about, and will then dig onto it
<rog> cool. i'll be finishing for the day in about 40 mins.
<niemeyer> rog: I'm optimistic too
<niemeyer> rog: Cool
<niemeyer> rog: I'll should be in before that
<niemeyer> rog: I should be in before that
<rog> niemeyer: haven't quite got zookeeper running on an instance, but i have got AuthorizedKeys working properly, with tests.
<niemeyer> rog: Sweet!
<jcastro> m_3: I made a page for "gotchas" https://juju.ubuntu.com/CharmGuide
<jcastro> if you want to expand the architecture bullet there
<m_3> jcastro: cool
<rog> niemeyer: i'm off for the day. looking forward to review in the morning, thanks in advance!
<rog> see y'all tomorrow
<niemeyer> rog: Enjoy your evening!
<rog> niemeyer: will do, i can taste the beer already....
 * SpamapS just uploaded charm-tools to precise
<hazmat> bcsaller, jimbaker if one of you has a moment, this branch could use a review.. lp:~fwereade/juju/unacquire-unlaunched-machine
<jimbaker> hazmat, i will take a look at it
<SpamapS> Failure: twisted.internet.defer.TimeoutError: <juju.control.tests.test_status.StatusTest testMethod=test_collect_filtering> (test_collect_filtering) still running at 5.0 secs
<SpamapS> hazmat: didn't you say you had some fixes for that?
<SpamapS> would like to get a more recent version of juju into precise for testing purposes
<hazmat> SpamapS, i did, they're on trunk
<hazmat> SpamapS, as of last night
<SpamapS> hazmat: ahh, this was 420 .. we need 423 :)
<hazmat> SpamapS, latest is 423, fixes are in 421
 * marcoceppi obligatory 420 joke
<hazmat> word
 * SpamapS giggles
<SpamapS> darn, builders are busy.. 2 hours till they are tried
<jcastro> SpamapS: awesome, while you wait you can review this:
<jcastro> https://bugs.launchpad.net/charm/+bug/897746
<_mup_> Bug #897746: Charm needed: Status.net <new-charm> <juju Charms Collection:New for george-edison55> < https://launchpad.net/bugs/897746 >
 * SpamapS reviews
<jcastro> SpamapS: ok so in this case
<jcastro> I was messing with it, and I got it wrong
<jcastro> and the author was like "it _needs_ these three config options set or it doesn't work".
<jcastro> have we thought about a way of returning what config options are required for a service to work?
<SpamapS> I suggested that optiosn w/o a default in config.yaml should prevent deploy without being set, but there was resistance to that.
<SpamapS> options even
<SpamapS> Its ok though
<SpamapS> deploy it.. thats fine. Don't open-port until you have all the config options.
<jcastro> i wasn't thinking so black and white
<jcastro> something like "W: this charm needs foo,bar, and baz set, see README" or somesuch
<jcastro> and that totally gives me an out, I read the things I need, pass "juju set foo" and then keep going
<marcoceppi> What's a case for a blank config option that isn't required?
<SpamapS> Yeah maybe a message about there being no defaults for those things.
<jcastro> I think it's fine if juju just keeps going and lets me deploy
<SpamapS>     description: The email address of the administrator (cannot be changed)
<SpamapS> that kind of sucks. ;)
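The warning jcastro sketches ("W: this charm needs foo, bar, and baz set, see README") amounts to flagging options that have no default and were never set, then carrying on with the deploy. A minimal sketch, assuming a hypothetical Option type rather than juju's actual config schema:

```go
package main

import "fmt"

// Option mirrors one entry in a charm's config.yaml (simplified;
// real options also carry type and description fields).
type Option struct {
	Default *string // nil means the charm author provided no default
}

// missingRequired returns the names of options that have no default
// and were not supplied by the user via `juju set`.
func missingRequired(opts map[string]Option, set map[string]string) []string {
	var missing []string
	for name, o := range opts {
		if o.Default == nil {
			if _, ok := set[name]; !ok {
				missing = append(missing, name)
			}
		}
	}
	return missing
}

func main() {
	pw := "changeme"
	opts := map[string]Option{
		"admin-email": {},             // no default: must be set by hand
		"password":    {Default: &pw}, // has a default: fine to omit
	}
	for _, name := range missingRequired(opts, nil) {
		fmt.Printf("W: this charm needs %q set, see README\n", name)
	}
}
```

Deploy still proceeds; the warning just gives the operator the "out" jcastro asks for.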
<jcastro> heh
<jcastro> anyone know where "juju get" is documented?
<jcastro> I only just discovered it now. :(
<marcoceppi> SpamapS: not much of a configuration option, is it :)
 * SpamapS was supposed to write an argparse -> manpage generator at one point
<jcastro> SpamapS: what we need is debconf for charms
 * jcastro runs away quickly
<SpamapS> BTW, we need to change file_get to sha256
<SpamapS> md5sum *sucks*
<SpamapS> jcastro: or an upstart for the cloud? ;)
<jcastro> might as well fix that now
<hazmat> jcastro, juju get --help
<marcoceppi> SpamapS: which file get?
<jcastro> hazmat: any idea why that isn't exported out to juju.u.c/docs?
<jcastro> or is it supposed to be?
<hazmat> its not
<hazmat> they're inline docs to the impl
<hazmat> we don't assume the location of the docs from the code
 * jcastro nods
<hazmat> hmm those get/set both need some formatting help
<jcastro> and a man page to point to all the subcommand thingies
<jcastro> I did honestly try to look for it in a man page and the online docs. :)
<SpamapS> marcoceppi: btw, using a URL for the md5sum is not adequate in ch_get_file
<marcoceppi> SpamapS: Why not?
<SpamapS> marcoceppi: unless it is retrieved over HTTPS and wget is set to verify certs
<marcoceppi> ohhh, duh
<SpamapS> marcoceppi: same problem as the tarball.. MITM. :)
<marcoceppi> Just when I thought I had it all figured out
<SpamapS> its ok
<SpamapS> I heard one guy figured it all out, and then he turned into a real dick.
<marcoceppi> I'll force a check for https with remote hashes then :)
 * SpamapS has luckily drunk enough alcohol to forget some, so not as much of a dick anymore
<SpamapS> no you don't have to force that
<SpamapS> just make sure wget verifies certs, and that we have the ca certs installed
<marcoceppi> But if it's not over https, the whole functions point is nullified
<SpamapS> pick that up on review
<SpamapS> if people want to use it in a stupid way, let 'em. ;)
<SpamapS> "enough rope to hang yourself, and then a little more"
<marcoceppi> so, just plug along merrily if someone uses http:// ?
<marcoceppi> but push a warning out with juju-log :)
<SpamapS> a warning is a good idea yeah
 * marcoceppi doh=100; while [ $doh -gt 0 ]; do juju-log "You should have had a V8, and used https"; doh=$((doh-1)); done
<hazmat> bcsaller, jimbaker.. trivial re fix cli help formatting
<hazmat> http://paste.ubuntu.com/755300/
<jimbaker> hazmat, taking a look
<bcsaller> hazmat: just hooking the formatter used in other clis to commands where its missing
<bcsaller> hazmat: looks fine to me
<hazmat> cool, it's very trivial
<hazmat> makes the output significantly better though
<bcsaller> nice
<SpamapS> marcoceppi: another cool feature of ch_get_file would be to cache it so you don't have to re-download the next time install is run.
<SpamapS> marcoceppi: actually you can probably just do a conditional get based on file mod time
<SpamapS> wtf? wget doesn't support automatic If-Modified-Since? that seems silly
<jcastro> surely curl does?
<jimbaker> ok, well i'm getting a mod_python error on the plain diff because i wanted to try it out (http://paste.ubuntu.com/openid/login/?next=/755300/plain/), but looks fine to me
<SpamapS> jcastro: only an explicit date.. can't just give it a file
<marcoceppi> lame
<jimbaker> speaking of curl, i thought for bug 814974, it might be nice if the url being specified could also specify options for curl
<_mup_> Bug #814974: config options need a "file" type <juju:Triaged by jimbaker> < https://launchpad.net/bugs/814974 >
<jimbaker> so use curl to retrieve the desired doc, but also capture how it was retrieved
<_mup_> juju/trunk r424 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] fix cli long help formatting [r=bcsaller,jimbaker]
<hazmat> niemeyer, did you have time to talk today?
<SpamapS> jcastro: status.net reviewed.. minor problems that have to be fixed.. should be very easy. :)
<niemeyer> hazmat: We can do that now if you want
<SpamapS> jimbaker: hm. Interesting idea.
<hazmat> niemeyer, invites out
<jimbaker> raw tweak on help is definitely nice
<jimbaker> SpamapS, using juju get, you would be able to retrieve this spec. config-get would of course retrieve the file uploaded as a consequence of juju set
<SpamapS> jimbaker: metadata.. the bane of POSIX filesystems since the dawn of time. ;)
<jimbaker> SpamapS, :)
<jcastro> SpamapS: awesome, I'll poke him to check it out. When the time comes don't forget to ping me when you promulgate or whatever we call that now so I can blog about him
<SpamapS> jcastro: ROCK THE BELLS
<jimbaker> just to be clear, what config-get would retrieve would be a url pointing to a copy of that uploaded file now being stored in the env
<jimbaker> eg in the s3 control bucket
<SpamapS> jimbaker: right
<negronjl> m_3: no worries about mongo .... no rush
<SpamapS> are there any open source apps out there that make use of mongo?
<jcastro> SpamapS: George Edison is a pseudonym
<jcastro> SpamapS: nice to know you're checking copyright though!
<SpamapS> hah
<SpamapS> you people who turn your nose up at copyrights just don't appreciate how much being dicks about copyrights has helped Debian's longevity. :)
<jcastro> I wasn't making fun of you, just noting your attention to detail
<SpamapS> Oh I don't take it personally.. I just think its funny that you guys are like "meh copyright, meh, license"
<SpamapS> Something weird in me thinks it's cool. :-P
<SpamapS> marcoceppi: I love you man, but your email client sucks. kthxbai ;)
<SpamapS> X-Mailer: Palm webOS
<jcastro> marcoceppi: hey, not bad, only like 4 weeks from your first charm to someone making fun of your MUA. That's an impressive achievement.
<SpamapS> seriously.. take a bow
<marcoceppi> ಠ_ಠ
 * marcoceppi bows
<jcastro> SpamapS: charm updated.
<jcastro> SpamapS: sorry to be so painful about it, I just am aching to blog this one
<obuisson> hi all
<jcastro> SpamapS: line 14: /usr/share/charm-helpers/sh/net.sh: No such file or directory
<jcastro> SpamapS: ok so basically this means we'll need to install the charm helpers right?
<m_3> obuisson: welcome
<marcoceppi> jcastro: you'll have to install charm-tools
<marcoceppi> as that provides charm-helpers
<jcastro> @marcoceppi you mean the charm has to right?
<marcoceppi> jcastro: Correct
<jcastro> so the first best practice is to install the charm tools so you can use the other best practices, heh
 * marcoceppi verifies
<jcastro> but then I have to enable the juju PPA on every instance
<marcoceppi> jcastro: not sure, checking if it's enabled by default
 * marcoceppi assumes the repo is added, waits for EC2
<marcoceppi> jcastro: Yeah, it's in there. Just need to do an apt-get install charm-tools
<marcoceppi> oh, actually, just need to install charm-helper-sh
<marcoceppi> SpamapS: Could you make wget a required dependency for charm-helpers-sh ? since it uses wget and it appears to not be installed by default
<m_3> marcoceppi: looks like it's in there: http://bazaar.launchpad.net/~charmers/charm-tools/packaging/view/head:/debian/control
<m_3> but I don't know for sure... still learning about packaging stuff
<marcoceppi> m_3: Awesome, I was wondering how the magic of packaging worked. I thought SpamapS  was just doing it by hand
<marcoceppi> Oh, looks like it already has the right dependencies too
<m_3> marcoceppi: it's all juju (orig sense of the word)
<m_3> that might not be what's made it to the package archives yet though...
 * m_3 checking version
<m_3> apt-cache show charm-helper-sh shows 0.2+bzr85-4~oneiric1, but lp's at 86
<m_3> dunno how to update it tho
<niemeyer> And I hear it's Squash'o'clock
<niemeyer> Laters
<m_3> 0/
<marcoceppi> m_3: looking at debian/changelog revision 86 is package version 85
<marcoceppi> so it's up to date
<marcoceppi> http://bazaar.launchpad.net/~charmers/charm-tools/packaging/revision/86/debian/changelog
<m_3> gotcha... cool
<m_3> negronjl: we could probably remove hostname from the mongodb and mongodb-replica-set
<m_3> negronjl: it's finally up with a client that uses the replicaset (hopefully properly)... no problems yet
<m_3> just twiddling with variations atm
<m_3> (remove hostname from the _interfaces_ that is)
 * negronjl crosses fingers 
<negronjl> m_3: when you break it ( 'cause I know you will ), let me know how so i can see about fixing it :)
<negronjl> m_3: the implementation is not too robust
 * SpamapS returns from lunch
<SpamapS> Package: charm-helper-sh
<SpamapS> Architecture: all
<SpamapS> Depends: wget, bind9-host, ${misc:Depends}
<SpamapS> marcoceppi: charm-helper-sh does depend on wget
<marcoceppi> SpamapS: Yeah, I just saw that. I didn't realize it was already awesomfied
<SpamapS> marcoceppi: we should make it also depend on ca-certificates tho
<marcoceppi> truth
 * SpamapS updates the packaging
 * jcastro EODs in a few.
<jcastro> SpamapS: m_3 thanks for the reviews today chaps, good progress made today
<SpamapS> jcastro: more fixes needed
<jcastro> SpamapS: true
<jcastro> I've been mulling a "you know what, we need to slow down and have a day where we clean up our existing ones."
<jcastro> but I haven't thought it through enough yet.
<SpamapS> commented
<SpamapS> Is George not able to IRC?
<_mup_> juju/scp-command r418 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<m_3> jcastro: we have been gradually cleaning up existing ones... taking more than a day tho :)
<jcastro> m_3: yeah I just don't want to get in the habit of fast fooding charms when we can take the extra minute to sort out the charm-tool or whatever will make the next charms easier.
<m_3> jcastro: agree 10%
<m_3> er
<m_3> 100%
<m_3> :)
<m_3> just budget more than a day
<m_3> otherwise people'll make charm changes without really testing the changes
<SpamapS> sorry what?
<SpamapS> I don't follow.. budget more than a day for making a charm?
<SpamapS> Or for pulling common stuff into charm-helper?
<jcastro> SpamapS: so like, at one point we say "ok we have X new charms, freeze is Y weeks away, let's chill on the new charms for a bit and do some review and tooling"
<jcastro> so maybe removing custom bits with things that the charm tools offer as a convenience feature
<SpamapS> Hrm..
<SpamapS> I'd think stuff like that is really an early-cycle, likely-to-break type of change
<SpamapS> Typically you build up debt as you approach release.. then you pay it back when you can take time to deal with the fallout.
<SpamapS> having 5 ways to wget+md5sum is just tech debt
<SpamapS> and converting them to use CH is paying that back.
<SpamapS> jcastro: but I see what you're getting at.. and would agree that there needs to be *some* period of such change.
<jcastro> SpamapS: hey man, it's not even december yet, let it wild west for another month I say
<jcastro> :)
<SpamapS> I'm trying to decide when to switch from oneiric to precise even.
<SpamapS> Feels like we should just auto-backport all new charms to all old releases, and fix them as problems are found.
<SpamapS> But.. nah.. we'll just let people bzr push their charm into the old release if they think its appropriate.
<SpamapS> release day is just when dev focus changes to the new dev release of Ubuntu
<m_3> SpamapS: all I was saying is that it takes more than a day to refactor and/or clean up all the charms we already have
<m_3> SpamapS: BTW, is it possible to 'crossbuild' for different releases?  i.e., build a package for natty on an oneiric instance?
<m_3> I've been only trying to build natty on natty, oneiric on oneiric
 * m_3 mess of VMs lying around
<SpamapS> m_3: yes thats what pbuilder and sbuild do :)
<SpamapS> I don't use VMs.. I just have chroots for everything
<SpamapS> m_3: install ubuntu-dev-tools and 'man mk-sbuild'
<SpamapS> sbuild is "tha bomb"
<m_3> ahh... yes, that'd be more lightweight <grin>
<m_3> I thought 'bzr bd' was the bomb
<SpamapS> m_3: bzr bd was the bomb, but now its just a tool.. sbuild is *BLOWIN UP*
 * SpamapS gets freakizzle in the hizzle
<SpamapS> m_3: and agreed, it will take a long time to refactor all of them. What we should do instead is just identify an issue when we see it, and file a bug.. and any time we're fixing one bug in a charm, make sure to fix any of these other lingering issues.
#juju 2011-12-01
<niemeyer> Mornings
<fwereade_> heya niemeyer
<rog> niemeyer, fwereade_: yo!
<fwereade_> niemeyer, you're up early :)
<niemeyer> fwereade_, rog: Yos!
<niemeyer> fwereade_: Yeah, a bit earlier than usual today
<TheMue> morning gustavo
<niemeyer> TheMue: hey hey!
<niemeyer> TheMue: Welcome! :-)
<TheMue> niemeyer: thx
<TheMue> niemeyer: sadly i have no access to our wiki
<niemeyer> TheMue: That happens.. you should get it soon
<TheMue> niemeyer: so i read the public stuff
<niemeyer> TheMue: Are your details being sorted out already?
<niemeyer> TheMue: All of the data for the project is public
<TheMue> niemeyer: I've got my launchpad account working and the login to the wiki is granted
<niemeyer> TheMue: That's cool
<TheMue> niemeyer: but then I've got no right to read something
<niemeyer> TheMue: Hm?
<TheMue> niemeyer: here i am => https://launchpad.net/~themue
<niemeyer> TheMue: Ok?
<mpl> hmm, forgot to group the consts in the same declaration :/
<mpl> someday gofmt will be smart enough to do even that for me hehe :)
<TheMue> gofmt in chuck norris mode would analyze the semantics of your project, optimize the code and add speaking comments for godoc. :D
<mpl> uh weird, I've just received a mail saying that: "Launchpad encountered an internal error during the following operation: generating the diff for a merge proposal.  It was logged with id OOPS-3635c3dab3398e5564a4a31dc006dd36.  Sorry for the inconvenience."
<mpl> niemeyer: hmm, got an error when trying to repropose
<mpl> 2011/12/01 11:22:20 RIETVELD Failed to prepare request: computing base hashes: command [bzr cat -r revid:roger.peppe@canonical.com-20111116163854-05vsxizrv13b83r8 juju/Makefile] failed: exit status 3
<mpl> 2011/12/01 11:22:20 RIETVELD Response from server: Issue creation errors: {'base': [u'Base URL is required.'], 'subject': [u'This field is required.']}
<niemeyer> mpl: Try to execute the bzr cat command locally and see what's the output
<niemeyer> TheMue: Do you have the list of tasks to go over yet?
<TheMue> niemeyer: Sarah passed me a starting point for day 1.
<niemeyer> TheMue: COol
<mpl> niemeyer: ok, found the problem. apparently I'm supposed to issue the command from the root of the repo. I was inside "juju", so the bzr cat command couldn't find juju/Makefile.
<mpl> thx
<niemeyer> mpl: Oh, interesting
<niemeyer> mpl: I should fix that in lbox
<mpl> that would be nice, yes.
<TheMue> niemeyer: in parallel i'm reading the docs and look around the issues in florence (nice tool)
<mpl> niemeyer: want me to file a bug report or something about that?
<niemeyer> mpl: Yes, please!
<mpl> niemeyer: k, will do.
<niemeyer> mpl: Thanks!
<mpl> sure.
<_mup_> juju/go r20 committed by gustavo@niemeyer.net
<_mup_> Merged juju-go-log branch from Mathieu Lonjaret. [r=niemeyer]
<_mup_> This branch adds support for logging into the juju package.
<_mup_> Logic in this package and in its subpackages should now call
<_mup_> juju.Logf or juju.Debugf to generate useful information about
<_mup_> what's happening.
<niemeyer> mpl: Would you mind exploring the idea of making Debug and Logger global variables in another branch, as rog suggested?
<mpl> niemeyer: sure
<niemeyer> mpl: Cheers!
<TheMue> niemeyer: just for info, my launchpad profile is added to the canonical group and my access to the wiki does work now
<niemeyer> TheMue: Superb
<niemeyer> TheMue: But again, it doesn't matter much as far as juju goes
<niemeyer> TheMue: It's all in the open
<niemeyer> TheMue: You'll find some good details about how the company works, etc, there, though
<TheMue> niemeyer: Yep, that's what I want to know today. First the organization to see what's important here for me, and then juju. You'll then get a lot of questions. *smile*
<TheMue> niemeyer: I hope the source code is well commented to get the semantics.
<TheMue> niemeyer: Setting up internal irc and mail now, those tasks.
<rog> TheMue: hi and welcome!
<TheMue> rog: hi and thx, now i'm on board too
<rog> TheMue: cool
<TheMue> another colleague interested in go here in germany also starts today. but he's working in a different team
<mainerror> TheMue: Got hired by Canonical?
<TheMue> mainerror: yep
<mainerror> Nice!
<mainerror> Congratulations.
<niemeyer> TheMue: That sounds great
<niemeyer> TheMue: Just let me know when you're ready to rock
<TheMue> mainerror: thx
<mainerror> You mentioned go. Go as in Go the programming language?
<TheMue> niemeyer: you'll realize it with my questions. *smile*
<TheMue> mainerror: exactly
<mainerror> Cool.
<TheMue> btw, anybody visiting the oop 2012 in january in munich? i've got two talks there. one about go and the app engine, one (a pecha kucha) about concurrency as a 'natural' paradigm
<rog> niemeyer: when we're talking about stripping down go-juju-initial-ec2, how bare should it go. shall i keep it so it can still talk to ec2 (without the extraneous logic) or should i strip it to really bare bones, so there's almost nothing there apart from the structure?
<rog> s/should it go./should it go?/
<niemeyer> rog: The latter
<mainerror> TheMue: When in January? It wouldn't be that far for me but the date might collide with my visit to Budapest.
<niemeyer> rog: Then, try to go the TDD route, introducing new logic with tests
<TheMue> mainerror: end of jan, i'm in budapest too.
<rog> niemeyer: so copy dummyProvider from juju for an initial step?
<mainerror> Oh I see.
<niemeyer> rog: By migrating from the previous branch chunk by chunk, and submitting for review once you're happy with a first delta
<niemeyer> rog: That sounds reasonable
<mainerror> Well then chances are good that we meet each other twice. :)
<mainerror> +#
<mainerror> s/+#//
<mpl> niemeyer: should I make it a different package as well, as rog is suggesting? (I'm not sure your "and I'm happy with that." means you want me to go for it).
<niemeyer> mpl: Yes, thanks a lot for pursuing that
<TheMue> so, changed the configuration
<mpl> niemeyer: hmm, I'm still struggling with bzr. I thought that since you merged, I should see rev 20 on https://launchpad.net/juju/go, or when doing a bzr pull from the lp:juju/go branch I have. what am I missing?
<rog> mpl: from looking at this page, it looks like it hasn't been merged yet: https://code.launchpad.net/~mathieu-lonjaret/juju/juju-go-log/+merge/84074
<mpl> rog: hmm, and yet the _mup_ bot pasted a few lines above stating it had been merged.
<rog> mpl: good point. i don't know what's going on there. maybe there's a delay
<mpl> or I could simply keep on working from this not yet merged branch I suppose...
<rog> mpl: i'd do that
<rog> niemeyer: is this more the kind of thing you had in mind? https://codereview.appspot.com/5432056/
<niemeyer> rog: This patch set seems to include changes from the other branch you pushed
<rog> niemeyer: if you look at the files, there's nothing in them. i think that lbox hasn't deleted them from the codereview page
<niemeyer> rog: https://codereview.appspot.com/5432056/diff/1019/juju/dummyprovider_test.go
<rog> niemeyer: and funnily enough, there are files that *should* be there (the jujutest package)
<niemeyer> rog: Check your patch locally.. lbox simply sends what it finds
<rog> hmm, i thought i branched from the most recent trunk
<rog> so many branches!
<rog> niemeyer: bzr ls shows the jujutest directory.
<niemeyer> rog: The diff
<rog> niemeyer: i think i'll just create a new merge request
<niemeyer> rog: it'll do you no good
<niemeyer> rog: It will send the same diff
<rog> http://paste.ubuntu.com/755994/
<rog> jujutest is in there, but not on the codereview page
<niemeyer> rog: How did you generate that diff?
<rog> niemeyer: cd go-juju-initial-ec2; bzr diff -r ../go-trunk
<rog> is that wrong?
<niemeyer> rog: bzr diff -r ancestor:../go-trunk
<rog> niemeyer: hmm. what's the difference?
<niemeyer> rog: This will diff your branch against the merge base
<niemeyer> rog: Which means it's generally the true changeset that will be applied when you actually bzr merge
<rog> niemeyer: i don't understand. what's the difference between trunk tip  and the merge  base?
<niemeyer> rog: and it's what lbox uses to send to codereview
<niemeyer> rog: Maybe none, maybe there is
<niemeyer> rog: It's none if nothing was committed in tip
<rog> niemeyer: so in this case, i've added a new directory and files in it, but it's not showing up. what am i doing wrong?
<niemeyer> rog: Either way, is the diff the same or not?
<rog> no it's not. it's very different
<niemeyer> rog: Cool, ok.. let me explain then
<rog> http://paste.ubuntu.com/755999/
<niemeyer> rog: Let's call tip T, and you had a branch B1
<rog> (and i've just freshly branched go-trunk too)
<niemeyer> rog: We debated about B1, and in the end some changes were forked off onto B2
<niemeyer> rog: But to generate B2, you actually *took* B1, and *removed* stuff from it
<niemeyer> rog: Then, B2 got merged on T
<rog> yup, that's exactly what i've done
<rog> (i didn't remove anything from B1 that was in T though)
<niemeyer> rog: Now, you're checking what would happen if you merged B1 into T
<rog> B2, no?
<niemeyer> B1.. B2 was already merged, no?
<rog> i don't... think... so
<niemeyer>   merge go-juju-machine-to-instance
<niemeyer> revno: 19 [merge]
<niemeyer> Where was this taken from?
 * niemeyer looks at the branch history
<rog> ok, i think i  did a merge then removed stuff
<rog> not realising that it might make a difference
<niemeyer> rog: Yes, that was the case indeed: http://paste.ubuntu.com/756003/
<rog> i'm still confused. why are the changes that i've made since being ignored?
<niemeyer> rog: So now you're trying to merge the original changes, but you already have _new_ history demonstrating that your latest interest is to remove that stuff
<niemeyer> rog: So bzr looks at history and says "Oh, hey, I already merged those revisions.. all good!"
<rog> because the trunk counts as newer than my current branch?
<niemeyer> rog: They haven't been ignored.. on the contrary... they've already been merged
<rog> but i've put them back again!
<niemeyer> rog: That's the point, you haven't
<niemeyer> rog: The revision you have locally is exactly the same revision that bzr has in trunk
<rog> how can i put them back again then?
<rog> perhaps if i just do branch from trunk, cp -r and bzr add, that'll work?
<niemeyer> rog: You'll have to redo it, effectively creating a new revision
<niemeyer> rog: That'd be a *MAJOR* disaster :-)
<niemeyer> rog: Never, ever, do that
<rog> oh?
<niemeyer> rog: You're effectively killing everybody else's changes
<niemeyer> rog: We can't ever assume we know the latest state of trunk
<niemeyer> rog: The proper way now is this:
<niemeyer> rog: branch from trunk again onto a new branch
<niemeyer> rog: Apply the diff you really want to see happening onto this new branch
<niemeyer> rog: (with patch, etc)
<niemeyer> rog: and propose that one
<rog> ok, patch rather than cp -r, of course
<niemeyer> rog: This is a new changeset, so you're effectively reverting the revert :D
<rog> same basic idea though - make local file changes rather than merging
<niemeyer> rog: Yeah, basically teach bzr that you've changed your mind once more
<rog> niemeyer: yeah, i didn't know that there were any bad implications from doing that
<rog> niemeyer: i will never back-merge again
<niemeyer> rog: Not a problem.. common gotcha of revision control
<rog> patch is my friend
<rog> if i can remember how to use it
<rog> -p0 ?
<niemeyer> rog: You just have to remember that history matters
<niemeyer> rog: Did A commit rev1, undo A commit rev2, merge both.. Trying to merge rev1 again does nothing.
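niemeyer's rule of thumb here, merge applies only revisions that aren't already in the target's history, so re-merging rev1 after the revert landed re-adds nothing, can be modelled with a toy in-memory branch. This is a deliberately simplified sketch, nothing like bzr's real data model:

```go
package main

import "fmt"

// Rev is a toy revision: an id plus a delta on a set of files.
type Rev struct {
	id     string
	add    []string
	remove []string
}

// Branch tracks which revision ids are in history and which files exist.
type Branch struct {
	history map[string]bool
	files   map[string]bool
}

func NewBranch() *Branch {
	return &Branch{history: map[string]bool{}, files: map[string]bool{}}
}

// Merge applies each revision's delta only if that revision is not
// already in history, which is the behaviour that surprised rog.
func (b *Branch) Merge(revs ...Rev) {
	for _, r := range revs {
		if b.history[r.id] {
			continue // "Oh, hey, I already merged those revisions.. all good!"
		}
		b.history[r.id] = true
		for _, f := range r.add {
			b.files[f] = true
		}
		for _, f := range r.remove {
			delete(b.files, f)
		}
	}
}

func main() {
	addFoo := Rev{id: "rev1", add: []string{"foo"}}   // A commits rev1
	dropFoo := Rev{id: "rev2", remove: []string{"foo"}} // undo as rev2

	trunk := NewBranch()
	trunk.Merge(addFoo, dropFoo) // merge both
	trunk.Merge(addFoo)          // merging rev1 again does nothing
	fmt.Println(trunk.files["foo"]) // prints false: foo stays gone
}
```

Getting foo back requires a new revision with a new id (the "revert the revert" patch), exactly as described below.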
<rog> niemeyer: that surprises me
<rog> (well, not any more)
<niemeyer> rog: That's DVCS 101.. hg, git, bzr, bitkeeper..
<niemeyer> rog: and yes, it surprised people often
<niemeyer> surprises
<rog> i'd assumed that in the absence of other merges, A+B-B+B == A+B
<fwereade_> niemeyer, hazmat: what situation would cause a unit settings node to be deleted and recreated?
<niemeyer> rog: And that's what happens.. but that's not what was done..
<rog> but in fact A+B-B+B == A
<niemeyer> rog: What you did was A + B + A, which is A + B
<rog> where A+B means merge(A, B)
<niemeyer> rog: You never did -B, btw.. there's no such a thing as removing a revision
<niemeyer> rog: You added C with a reverting patch
<fwereade_> niemeyer, hazmat: because I can't see such a case, but there's a test which induces it, and if it's a realistic possibility (without the unit node itself being deleted) then I need to rethink things
<rog> well, ok. trunk + {add foo} + {remove foo} + {add foo} == trunk
<niemeyer> fwereade_: Can you please paste the test?
<rog> where { } is a delta
<fwereade_> niemeyer, just grabbing an unmodified version
<niemeyer> rog: Again, that's not what you did..
<niemeyer> rog: You never re-added foo
<niemeyer> rog: You expected that your foo from the first action would be there, despite it being removed in a latter revision
<niemeyer> rog: What you're doing _now_ is re-adding foo
<rog> niemeyer: how is "bzr add foo" not adding foo?
<fwereade_> niemeyer, as it says in the test, some "unforseen mechanism"
<fwereade_> http://paste.ubuntu.com/756017/
<niemeyer> rog: You never re-added it..
<niemeyer> rog: Well, let me pick the history to make sure
<rog> niemeyer: so "bzr add" doesn't count as an add?
<rog> niemeyer: that's what confused me, i think
<niemeyer> rog: You're being disingenuous now
<rog> niemeyer: no, that's the central difficulty i have.
<fwereade_> niemeyer: it's not something I'd really worry about normally, but the existence of this test has made me all nervous ;)
<rog> niemeyer: that "bzr add" doesn't necessarily add the file when the revision comes to be merged.
<niemeyer> rog: If you add the same file 10 times in the same revision it's of course a NOOP
<rog> niemeyer: but in this revision, i added it only once.
<niemeyer> rog: The detail is that you never acknowledged the fact that the file had been _removed_ and then _readded_ it
<niemeyer> rog: Exactly.. and it's still there, right?
<niemeyer> rog: Now you're doing "bzr add" on it again.. and bzr is saying.. yep.. still there!
<niemeyer> rog: Because you never merged the part of history that says "this file is being removed"
<niemeyer> rog: History matters
<niemeyer> rog: If you merge trunk onto your ec2 initial branch.. the file will be *gone*
<niemeyer> rog: _then_ you can readd
<niemeyer> rog: Have you ever read about vector clocks?
<rog> niemeyer: didn't go-juju-machine-to-instance (merged) have the removal history in?
<niemeyer> rog: It does
<rog> niemeyer: yes, but i never fully got my head around them. or VCS.
<niemeyer> rog: Ok
<niemeyer> rog: Not a good analogy then
<niemeyer> fwereade_: It makes sense to me
<niemeyer> fwereade_: The point is precisely that a settings node being removed shouldn't cause a related unit to think that the configuration for the other side of the relation has changed
<fwereade_> niemeyer, the test makes sense in its own narrow context
<niemeyer> fwereade_: It's likely dying instead
<fwereade_> niemeyer, however, if the mechanism it uses to verify that property is a realistic one, it means that settings node versions are not a reliable indicator of anything
<hazmat> fwereade_, nothing comes to mind
<rog> niemeyer: actually, the vector clock thing does make sense. i think you're saying that the changes to re-add the file come further back in vector "time" than my changes to remove the file.
<hazmat> fwereade_, the only thing that should be deleting a settings node is the teardown of a test
<rog> anyway, i'll just patch
<niemeyer> rog: Precisely!
<hazmat> fwereade_, btw. one other point on the restart, if its a container restart, we do want to fire the start hook..
<niemeyer> hazmat, fwereade_: The removal/cleanup of a unit can delete a settings node..
<niemeyer> hazmat, fwereade_: The logic to ignore the removal of the node continues to make sense to me.
<fwereade_> niemeyer: all I'm worrying about is settings node deletion *without* the unit being deleted
<fwereade_> niemeyer: there's no problem with that test
<fwereade_> niemeyer: I'm just checking that it's purely an artificial and unrealistic manipulation of ZK without known analogue in normal operation
<niemeyer> fwereade_: I don't think we ever desired to support something like that consciously, at least, and I'm fine to say we don't
<hazmat> fwereade_, when you say settings you don't mean relation settings, which path are you referencing?
<fwereade_> niemeyer: cool, thanks
<niemeyer> fwereade_: Again, there is a relevant analogue
<niemeyer> fwereade_: To precisely that test
<niemeyer> fwereade_: It's not unrealistic at all
<fwereade_> niemeyer: ok, sorry, I missed an important bit
<fwereade_> niemeyer: deletion, fine
<fwereade_> niemeyer: *recreation*, not fine
<niemeyer> fwereade_: Fine as well!
<rog> niemeyer: right, got it. i think.
<fwereade_> niemeyer: hmm :(
<niemeyer> fwereade_: The relation with the unit on the other side may be reestablished
<hazmat> niemeyer, how?
<niemeyer> fwereade_: In which case the settings node should be observed again
<hazmat> either the unit was removed or the relation broken
<hazmat> in either case the identity is different when reestablished
<niemeyer> hazmat: Why?
<niemeyer> hazmat: Well, the identity of what is a better first question
<hazmat> of either the unit or the relation would have changed
<hazmat> which leads to a different settings path
<niemeyer> hazmat: Huh?
<niemeyer> hazmat: remove-relation a b; add-relation a b
<niemeyer> ?
<hazmat> niemeyer, its a different relation identity the second time
<niemeyer> hazmat: I see, ok
<hazmat> we do preserve unit settings in a relation post its removal, relying on ephemeral presence nodes for observation of membership
<hazmat> the test is going against a pathologic case
<niemeyer> hazmat: It's not pathologic.. the fact we keep data around forever shouldn't be trusted
<hazmat> niemeyer, agreed
<niemeyer> fwereade_: So, the behavior there looks good in terms of being resilient.. is there a reason why you'd like to not support that specifically?
<hazmat> a garbage collector processing live relations, should not trigger an observation change
<hazmat> when taking out the garbage
<hazmat> fwereade_, so recreation of the path is not a scenario
<fwereade_> niemeyer, https://bugs.launchpad.net/juju/+bug/773600 points out that we'll need to pay attention to the settings node's versions to determine whether changes happened when the agent process was down
<_mup_> Bug #773600: Hook scheduler should have on disk persistence <juju:In Progress by fwereade> < https://launchpad.net/bugs/773600 >
<hazmat> of the settings that is
<fwereade_> niemeyer: *if* recreation of the settings node is plausible (which I think, as hazmat says, it isn't), node versioning is not up to the job
<fwereade_> niemeyer: because the version gets reset to 0 on recreate
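The version-reset problem fwereade_ describes can be sketched with an in-memory stand-in for a ZooKeeper node (this is not the real client API; paths and data are made up). A node's version counts modifications since *creation*, so a delete-and-recreate makes the "has it changed since I last looked?" comparison misfire:

```python
# Fake ZooKeeper node: version starts at 0 on creation and counts sets.
class FakeNode:
    def __init__(self, data):
        self.data = data
        self.version = 0

    def set(self, data):
        self.data = data
        self.version += 1

store = {}

def create(path, data):
    store[path] = FakeNode(data)

def delete(path):
    del store[path]

path = "/relations/r-1/settings/u-1"      # hypothetical settings path
create(path, "a")
store[path].set("b")
seen = store[path].version                # agent records version 1, then dies

# While the agent is down, the node is deleted and recreated with new data.
delete(path)
create(path, "c")

# On restart the version is 0 < 1, so a version check would conclude
# "nothing newer happened" even though the data changed underneath it.
assert store[path].version < seen
```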
<niemeyer> fwereade_: Aha, that makes the matter a lot more clear..
 * niemeyer thinks
<hazmat> fwereade_, right, because we need to reconcile any changes made while disconnected with the live state
<fwereade_> hazmat: yep
<hazmat> fwereade_, so just to be clear, there is no normal scenario that allows for a recreate of the unit rel settings node
<niemeyer> fwereade_: It feels fine for us to state that we never remove a unit's settings node in a given relation for as long as the unit itself is alive, at least
<hazmat> since we manage relations on a service level, the removal implies either the unit was removed, or the relation was removed
<hazmat> and attempts to resurrect would imply either a new unit identity or new relation identity at a different path
<fwereade_> niemeyer, hazmat: cool, that all makes sense to me :)
<fwereade_> thanks :)
<niemeyer> fwereade_: Sorry for the confusion.. I guided the conversation to the wrong path not understanding your context
<fwereade_> niemeyer: sorry, I could have been a lot clearer :(
<fwereade_> niemeyer, not to worry :)
<hazmat> so the test is a pathological case if it's recreating the node
<niemeyer> hazmat: Ok, agreed
<rog> niemeyer: finally! https://codereview.appspot.com/5432056
<rog> niemeyer: sorry for my obtuseness
<niemeyer> rog: Woohay!
<niemeyer> rog: No worries at all
<niemeyer> rog: I'm happy we've had that conversation.. you'll surely pass through similar circumstances
<zirpu> how does one specify the machine size in a charm?  i can't find that in the docs.
<zirpu> i.e. for ec2 the default seems to be m1.small. for a redis server i'd like to bump it up.
<hazmat> SpamapS, made some progress tracking down the precise builds
<hazmat> SpamapS, i realize we can drop python-argparse  as a dep for any release with py2.7 and it will remove this error
<mpl> niemeyer: when trying to pull lp:~mathieu-lonjaret/juju/juju-go-log, I'm getting that error: "bzr: ERROR: These branches have diverged. Use the missing command to see how." and bzr missing doesn't give me much of a clue, sorry.
<hazmat> its a pkg_resources import causing a warning on stderr, which juju thinks is a process error
<hazmat> its early enough (import time) that the log machinery hasn't been setup yet to go to disk
<hazmat> mpl, bzr merge the upstream should resolve it
<hazmat> mpl, it implies that both branches have commits that aren't on the other side afaicr
<niemeyer> mpl: Why are you trying to pull it?
<niemeyer> mpl: You need a new branch now
<mpl> niemeyer: because I don't have it anymore, I had removed it when I thought you had merged it.
<niemeyer> mpl: I did!
<niemeyer> mpl: Just update your trunk, and create a new branch from it
<niemeyer> mpl: "bzr pull" on trunk
<mpl> niemeyer: well, as I said above, it doesn't show on https://code.launchpad.net/~mathieu-lonjaret/juju/juju-go-log/+merge/84074
<niemeyer> mpl: Then follow the blog post on lbox again
<niemeyer> mpl: Naming it differently
<mpl> niemeyer: and bzr pull doesn't give me the rev 20
<niemeyer> mpl: Maybe I screwed up then
<niemeyer> mpl: Hold on
<mpl> niemeyer: the bot said you had merged. but nothing else seem to agree with that. :)
<mpl> from my pov of course
<niemeyer> mpl: The bot is optimistic indeed
<mpl> haha
<niemeyer> mpl: Yeah, it was my fault
<niemeyer> mpl: Just pushed the change
<niemeyer> mpl: Sorry about that
<mpl> cool, thx. new branch it then.
<mpl> no worries.
<mpl> *it is then
<niemeyer> rog: Reviewed ec2test
<niemeyer> rog: Good work there man
<rog> niemeyer: phew. i thought i might need a boatload of tests for the testing code. (i thought that was going a little too far, and there are *some* tests!)
<rog> niemeyer: thanks BTW
<niemeyer> rog: Yeah, testing the testing code would be a little too much.. :-D
<mpl> I heard you like test code so I ...
<niemeyer> rog: Firing goamz itself against it should be good enough
<niemeyer> rog: But who tests the tests tests! OMG!
<rog> indeed
<rog> niemeyer: i will end up running more ec2 tests against it as functionality gets more complete
<rog> niemeyer: oh such a pleasure to use codereview again!
<niemeyer> rog: +1!
<rog> niemeyer: BTW can you think of any way of avoiding the duplicate emails?
<niemeyer> rog: That's been bothering me a bit too.. I don't have a good answer yet
<rog> niemeyer: filters aren't that clever either :-)
<niemeyer> rog: It would actually be possible to filter out, but I'm concerned we might lose real changes that happen on the MP side
<rog> MP?
<niemeyer> rog: merge proposal
<rog> hmm, maybe the trick is to avoid the codereview emails
<rog> ah, dammit
<rog> no
<niemeyer> Ok, past lunch time here.. bbl
<SpamapS> hazmat: I wonder if thats a bug in the python-argparse package
<hazmat> SpamapS, well its actually a pkg_resources thing.. py2.7 has argparse builtin, and we also have it as a package dep
<hazmat> and pkg_resources basically flags a warning that we're shadowing a builtin package
<hazmat> and because that happens so early on, it goes to stderr instead of the log file
<SpamapS> hazmat: thats what I mean.. should the package even exist if it causes problems for pkg_resources ?
<jcastro> SpamapS: fixes in statusnet.
<jcastro> SpamapS: we're promulgating that today, I can FEEL it!
<hazmat> SpamapS, its not a fatal error, juju interprets it as such even though the process exits normally
<hazmat> we could be more discriminating about that, but right now it interprets any stderr output from the launched unit agent as a failure, regardless of the exit code, since output should go to a log file in normal operation
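The behaviour hazmat describes — any stderr output treated as a failure, even on a zero exit status — can be sketched like this (illustrative only, not juju's actual launch code):

```python
# Treat *any* stderr output from a spawned process as a failure, even
# when the exit status is zero, since normal output should go to a log.
import subprocess
import sys

def launch(cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.stderr:
        return False              # stderr noise => assumed failure
    return proc.returncode == 0

# Exits 0 but warns on stderr, like the pkg_resources import warning:
noisy = [sys.executable, "-c", "import sys; sys.stderr.write('warn')"]
quiet = [sys.executable, "-c", "pass"]

assert launch(noisy) is False     # flagged despite the clean exit code
assert launch(quiet) is True
```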
<SpamapS> jcastro: yeah, it was close yesterday. :)
<jcastro> \o/
<zirpu> is there a way to specify the machine size for EC2 instances?
<marcoceppi> zirpu: You can do it in the environments.yaml file
<jcastro> http://askubuntu.com/questions/52021/how-do-i-adjust-the-instance-size-that-juju-uses
<zirpu> cool. thanks!
<SpamapS> jcastro: indeed, looks good, I'll promulgate, you fire up the T-shirt cannon for the party
<jcastro> SpamapS: aka. "haha you have to test and blog this"
<marcoceppi> Keep that t-shirt cannon warm, I think phpMyAdmin is almost ready :P
<jcastro> SpamapS: when you're looking for a nice friday later afternoon hack make it so promulgate makes the bot go "Ding! New charm!" or something
<zirpu> marcoceppi: you can't specify machine size in a charm?  only the global environments.yaml?
<marcoceppi> jcastro: Dude, the bot's written in erlang, my mind was nearly blown when trying to add questions to the feed
<marcoceppi> zirpu: Not that I'm aware of, there's a spec to have deploy specify machine size, but it's not implemented
<zirpu> ah. ok. thanks.
<fwereade_> robbiew, ping
<robbiew> fwereade_: hey...there you are
<robbiew> wasn't looking for the "_" :/
<SpamapS> zirpu: indeed, its #2 on the priority list
<fwereade_> robbiew, sorry, was afk, my brain melted down and I needed a coffee
<robbiew> fwereade_:  lol, no worries...got time for a catchup g+?
<fwereade_> robbiew, sure
<robbiew> one sec...will shoot out invite
<rog> niemeyer: hmm, the target has changed because of ec2 rename. should i continue with the old target (~gniemeyer) or make a new CL?
<SpamapS> jcastro: done
 * SpamapS wishes we had an audit log of promulgates
<jcastro> nice! Ok congrats everyone, there's one more charm!
<SpamapS> pretty cool charm actually
<hazmat> oi.. teamspeak3 charm is new as well
<jcastro> SpamapS: ... and working awesome for me
<jcastro> man, the speed from 0 to "out there" never gets old
<mpl> rog: hmm, are you automatically notified of everything I lbox propose, or should I somehow add you as a reviewer in codereview when you're concerned?
<mpl> s/hmm, // :)
<hazmat> mpl, people generally get emails about it if they subscribe to the repo the merge is being proposed to
<mpl> hazmat: thx. I suppose that means yes in the case of rog :)
<hazmat> mpl, well assumptions are dangerous.. but i've been seeing your merge proposals ;-).. good stuff
<SpamapS> hazmat: btw, thanks for looking into the argparse thing. So did you push a fix up, or you just think you might have one?
<mpl> hazmat: well, it's all pretty simple and preworked for me by gustavo, so I can't take much credit there. but thx. :)
<fwereade_> niemeyer, hazmat: concurrent callbacks in RelationUnitWatcherBase: I don't quite understand how they can be safe, even if they are on changes to different nodes
<fwereade_> niemeyer, hazmat: HookScheduler.notify_change doesn't look to me like it will handle them correctly
<fwereade_> niemeyer, hazmat: I presume I'm missing something..?
<hazmat> fwereade_, why not, re the latter? that's how the changes are serialized into an event stream; the hook executor itself is the one that does the serialization off the ordered stream from the scheduler
<hazmat> fwereade_, i guess i'm not understanding the question.. how are they not safe?
<fwereade_> hazmat: ...ah, ok, notify_change doesn't yield
<hazmat> fwereade_, yup.. its definitely synchronous
<hazmat> that's a key point
<fwereade_> hazmat: hm, that probably covers that question
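The point hazmat is making — that notify_change is synchronous, so concurrent watcher callbacks can only append to an ordered stream which a single executor later drains — can be sketched as follows (hypothetical class names, not the real HookScheduler/executor):

```python
# Watcher callbacks append synchronously to an ordered queue; a single
# executor drains it, so concurrent notifications cannot interleave hooks.
from collections import deque

class Scheduler:
    def __init__(self):
        self.queue = deque()

    def notify_change(self, change):
        # Synchronous: no yield/await here, so two callbacks cannot
        # interleave mid-notification.
        self.queue.append(change)

class Executor:
    def __init__(self, scheduler):
        self.scheduler = scheduler
        self.ran = []

    def run_pending(self):
        # Hooks run one at a time, in arrival order.
        while self.scheduler.queue:
            self.ran.append(self.scheduler.queue.popleft())

sched = Scheduler()
# Two "concurrent" watcher callbacks still enqueue in a definite order.
sched.notify_change(("u-1", "changed"))
sched.notify_change(("u-2", "changed"))

ex = Executor(sched)
ex.run_pending()
assert ex.ran == [("u-1", "changed"), ("u-2", "changed")]
```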
<hazmat> SpamapS, i pushed a fix for some of the other test failures, i'm still looking at the most recent ones, but the argparse stuff needs a packaging change to fix
<hazmat> SpamapS, given the latter, we should get some package builds
<SpamapS> hazmat: that makes me think that the python-argparse package shouldn't even exist anymore.
<hazmat> SpamapS, not in precise
<SpamapS> hazmat: let me try a build w/o the dep
<hazmat> SpamapS, well to be more precise ;-) .. it should exist if py2.6 exists
<SpamapS> hazmat: a package can be made to not build for a particular version.. thats probably what needs to be done
<hazmat> SpamapS, as regards juju, afaics its a question of specifying a different dep  set based on distro series.
<hazmat> or py version
<mpl> rog: yeah I wasn't very happy with the name either, but couldn't find a better one. I like Destination.
<SpamapS> hazmat: python-argparse can be installed with no symlinks to the .py files in the python2.7 dirs.. that will allow for the deps to remain the same for backports to older versions
<SpamapS> hazmat: we've had python 2.7 for a long time tho... why would this show up now?
 * SpamapS tries sbuilding juju w/o python-argparse in the deps
 * niemeyer respawns
<SpamapS> hazmat: sbuild test passed.. will try it in my PPA as well
<m_3> jcastro: you have an agenda or something for charm-school tomorrow?
<jcastro> m_3: sort of
<jcastro> we can sort it now if you'd like
<jcastro> https://juju.ubuntu.com/CharmSchool/2December11
<m_3> gimme 10
<jcastro> m_3: bah something came up, how about an hour from now?
<jcastro> m_3: and I'm off tomorrow and most of next week so we can just bust out this week's charms easily.
<m_3> jcastro: good
<jimbaker> hazmat, standup?
<hazmat> jimbaker, full meeting today
<SpamapS> m_3: I'd be up to work on an agenda for tomorrow
<jimbaker> hazmat, sure. so are we doing it now?
<hazmat> jimbaker, yup. i'm sending invites out
<jimbaker> hazmat, cool
<m_3> SpamapS: promulgating atm... almost done
<hazmat> SpamapS, jimbaker, bcsaller, fwereade_, niemeyer invites out
<hazmat> TheMue, we do team meetings thursday on google plus
<niemeyer> rog: ping?
<rog> niemeyer: pong
<rog> niemeyer: meeting?
<niemeyer> TheMue: ping?
<m_3> marcoceppi: so what's the status of phpmyadmin?  is the last revision you pushed ready to review again?
<marcoceppi> m_3: Not yet, I'm having problems with sh/bash tests
<marcoceppi> As soon as I iron that out and give it a full test it'll be ready
<m_3> ok, I'm gonna pull off the tag then... please re-add when it's ready for review
<m_3> thanks!
<marcoceppi> np, can do!
<marcoceppi> just found out dash and bash handle boolean operators differently in test :\
<m_3> marcoceppi: yes, I've had problems with those differences myself
<SpamapS> marcoceppi: certain operators, yes. ;)
 * marcoceppi blames m_3 for suggesting dash over bash :P
<m_3> marcoceppi: no way did _I_ suggest that... that was Clint
<marcoceppi> curses!
<m_3> marcoceppi: I've been bitten by string expansion rules... ${MY_VARIABLE//\/*/} or ${MY_VARIABLE%%.ext}
<m_3> (use bash)
 * m_3 runs
<marcoceppi> Anyways, I think I've got all the replacements for == just waiting for ec2
<m_3> marcoceppi: cool
<m_3> TheMue: welcome!
<SpamapS> marcoceppi: if you want to use bash, use bash. :) I find shell's limitations to force more elegant code in many instances, but sometimes it is a dirty language.
<SpamapS> marcoceppi: perhaps we should have tests for charm-helpers
<m_3> SpamapS: +1
<m_3> I was planning tests for charm-helpers-rb
<marcoceppi> SpamapS: Good idea, how would that work exactly? just test each function and collect output -> compare to expected results?
<SpamapS> yeah
<m_3> jcastro: so thinkup's lp:charm branch is behind george's branch... we need to maybe talk about MPs
<SpamapS> m_3: I'm a little nervous about charm-helpers-rb ... can't you just use chef solo ?
<m_3> SpamapS: sure
<SpamapS> m_3: we don't have to wait for the MP.. if there's good stuff there.. bzr merge.. bzr commit.. bzr push... rejoice
<m_3> there're a couple of juju-specific tools though
<SpamapS> m_3: I mean, there may be stuff chef doesn't already have stuff for.. like verifying a downloaded file.. but I don't want us to be rewriting chef.. its already a rewrite of puppet ;)
<m_3> SpamapS: yeah, just didn't want the state of the "main" charms to be dependent on my happening to notice the new updates
<m_3> ha!
<SpamapS> m_3: thats actually a good point... in debian we have watch files and a service which periodically refreshes the watch file... notifying us .. "there's a new upstream version, you should package it"
<m_3> yeah, understand... and do agree that we don't need to rewrite any tools that we could just use as-is
<SpamapS> m_3: only if its simple
<m_3> right
<SpamapS> m_3: if it takes 50 extra lines of ruby to use chef solo.. then F that. ;)
<m_3> no, it wouldn't
<SpamapS> But I bet its a require or two and then some nice simple stuff.
<m_3> I'll get examples of it working over x-mas (off next week)
<m_3> I can even add a recipe or two for the juju-specific utilities I mentioned earlier
<m_3> beauty of such an approach in general is the charm-helper tools already having test suites
<SpamapS> Yeah definitely
<SpamapS> I wonder if we can use cucumber to have the same tests for different implemetations
<SpamapS> mentations even
<marcoceppi> I've got a question about relation hooks. I've got state: up for both services but the MySQL charm hasn't given me all the data I need and it's been a while - how long can I expect it to wait before running relation-changed again?
<SpamapS> marcoceppi: up just means the two sides are exchanging joined/changed/departed events
<SpamapS> marcoceppi: you need to poll the actual provided service to find out if its ready yet
<marcoceppi> How would I poll it? while loop?
<SpamapS> marcoceppi: if mysql hasn't given the data.. perhaps check the charm log
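Polling the actual provided service, as SpamapS suggests, might look like this in a hook helper (a sketch; the function name, host, port, and timeout values are all made up):

```python
# Retry a TCP connect until the service accepts connections or we give up.
import socket
import time

def wait_for_port(host, port, timeout=60, interval=2):
    """Return True once (host, port) accepts a TCP connection, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Not up yet (refused / unreachable); back off and retry.
            time.sleep(interval)
    return False
```

A hook could call e.g. `wait_for_port(db_host, 3306)` before writing its config, rather than assuming the relation being "up" means the database is ready.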
<marcoceppi> For the helper tests, would just putting it in a tests folder inside helpers/sh/ ?
<m_3> marcoceppi: great question
<m_3> perhaps start with that... helpers/sh/tests,
<marcoceppi> Ok, cool. Shouldn't take long to write tests
<m_3> then we might be able to extract those to helpers/tests that're language neutral... great idea... might be hard to pull off though
<marcoceppi> mm
<m_3> marcoceppi: BTW, hazmat mentioned a test where two sides of a relation play tictactoe through relation-changed hooks... I think they just keep firing as long as either side keeps 'relation-set'ing
<m_3> hazmat: ^^ does that exist?
<marcoceppi> Well I know something is wrong with my relation-changed hook, because at the bottom it opens port 80 but it never gets exposed when expose is run.
 * marcoceppi continues investigation
<m_3> marcoceppi: sometimes it can fail quietly.. that's why it's good to 'set -eu' in the outermost scripts
<marcoceppi> What's the e do?
<m_3> have also hit problems if relation-changed gets stuck or is waiting a long time
<m_3> status is "up" (that just refers to the relation), but the relation-changed hook never completes
<m_3> e's just exit on error (in current shell or any subshells)
<marcoceppi> ah, cool
<m_3> marcoceppi: have you worked with the debug-hooks command yet?
<marcoceppi> kind of, I just usually juju ssh <machine#>
<marcoceppi> Does the order of the services in add-relation matter?
<hazmat> m_3, no it doesn't, it was an idea, never put serious time into it
<m_3> bummer
<m_3> marcoceppi: no
<jcastro> m_3: yikes
<jcastro> got held up, I am back now though if you wanna catch up
<jcastro> marcoceppi: we're going to hash out an agenda for tomorrow's charm school if you feel like joining us
<m_3> jcastro: sounds good... SpamapS around too?
<jcastro> yeah but I think he's distro-swamped still
<m_3> jcastro: ts3 is in now btw
<jcastro> I saw
<jcastro> \o/
<jcastro> 2 in one day!
<m_3> whoohoo
<m_3> jcastro: g+?
<jcastro> m_3: https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0AoW1nhI7IMt3dFRvSFdkZmNqQ0t3RjZ2QTR2Z19teWc&hl=en_US#gid=0
<marcoceppi> jcastro: wher?
<jcastro> marcoceppi: G+
<jcastro> I sent you an invite
<jcastro> http://pad.ubuntu.com/charmschool
<m_3> for byobu-classroom, I usually...
<m_3> spin up one environment in ec2 using one email acct
<SpamapS> You guys still G+'ing?
<m_3> then copy keys over and use that byobu-classroom to drive juju with another acct
<jcastro> SpamapS: yeah, hop in
<m_3> yup
<SpamapS> hazmat: confirmed, juju builds fine on buildd's without python-argparse .. I think this is a bug in python-argparse.. will discuss w/ python peeps
<hazmat> SpamapS, cool, nice to have the precise builds working, thanks
<TheMue> hazmat: You're maintaining our hangout in the calendar? Could you please send an invitation to my canonical address?
<hazmat> TheMue, ack
<niemeyer> hazmat: How's wtf being going for you guys?
<niemeyer> s/being/been/
<robbiew> that reads funny if you don't have the context behind "wtf"
<robbiew> lol
<niemeyer> robbiew: LOL.. true
 * SpamapS thinks WTF has been great since we started using it back in 2002 or so. better than OMFG I'd say
<SpamapS> r424 uploaded to precise
<marcoceppi> Just deployed juju on ec2 to provide a fix for a failed CDN here at work - CTO is impressed :)
<hazmat> niemeyer, haven't really checked it as much since it hung on a rev a few weeks ago
<hazmat> niemeyer, its very nice to have
<hazmat> niemeyer, i sort of wish it would feed into the irc channel here
<hazmat> blinking red lights and all ;-)
 * hazmat hopes he can declare victory over the rodents
<SpamapS> marcoceppi: *NICE*
<mpl> niemeyer: hmm, I don't get it. didn't it get through 45 mins ago?
<mpl> I can rerun it again...
<niemeyer> mpl: There's a single patch set in that change set
<mpl> hmm what the hell
<mpl> it sent it here: https://codereview.appspot.com/5448072
<mpl> dunno how I messed up, sorry.
<mpl> lemme retry from scratch.
<mpl> niemeyer: k, I had probably got the various branches confused when pulling. should be good now.
<mpl> and off to bed, see you tomorrow.
<SpamapS> sweeeeet... daily builds are fixed!
<SpamapS> hazmat: ^5
<jimbaker> very nice to hear!
<SpamapS> Now to figure out why python-argparse hasn't been removed
<hazmat> SpamapS, nice
#juju 2011-12-02
<hazmat> marcoceppi, while awesome i'd still probably recommend either cloudfront or rackfiles/limelight for a true cdn
<hazmat> cdn need points of presence for global effectiveness
<SpamapS> hazmat: depends on your perspective. Sometimes all you really want is aggressive caching and fanout
<SpamapS> latency is by far the *best* reason for CDN... but by no means the only one
<niemeyer> mpl: Super, let me check
<hazmat> SpamapS, ec2 hosts don't really give fanout; all a juju node for a cdn amounts to is a single host.. that's a significant difference.. it can work.. but it's not in the same league
<niemeyer> mpl: Very nice
 * hazmat heads out for beer o'clock
<niemeyer> hazmat: Cheersfully!
<_mup_> juju/go r21 committed by gustavo@niemeyer.net
<_mup_> Merged juju-go-log branch by Mathieu. [r=rog,niemeyer]
<_mup_> This branch moves logging onto the log package so that it is
<_mup_> usable by any of the subpackages, and simplifies/polishes the
<_mup_> interface further.
<_mup_> juju/scp-command r419 committed by jim.baker@canonical.com
<_mup_> Addressed review points
<marcoceppi> hazmat: It's not for a CDN, it's a patch for a CDN issue that needs to scale. So it's using hadoop charm with a custom charm I wrote that creates a sort of...CDN proxy...for an issue several customers are having because Cisco has rackcdn.com blacklisted in their enterprise router software
<marcoceppi> For those customers we have our templating engines on their site switch out rackcdn.com with another DNS and the servers fetch the file from the CDN, cache it, then serve it up. Bandaid until the blacklist is lifted which could take a while
<_mup_> juju/scp-command r420 committed by jim.baker@canonical.com
<_mup_> PEP8/PyFlakes
<_mup_> juju/scp-command r421 committed by jim.baker@canonical.com
<_mup_> Updated tab completion
<_mup_> juju/scp-command r422 committed by jim.baker@canonical.com
<_mup_> Slim down help output
<_mup_> juju/scp-command r423 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> juju/trunk r425 committed by jim.baker@canonical.com
<_mup_> merge scp-command [r=fwereade,hazmat][f=720307]
<_mup_> Implements juju scp subcommand.
<rog> fwereade_: mornin'
<fwereade_> rog: heyhey :)
<rog> fwereade_: :-)
<fwereade_> rog: hmm, you were up disturbingly early ;)
<rog> fwereade_: it's not uncommon - i usually don't make a peep in here until i see some activity tho'
<fwereade_> rog, ah, yeah, I tend to follow the same sort of approach
<rog> fwereade_: actually 7.20 *is* earlier than usual! had to go out and scrape the ice off the car.
<fwereade_> rog, oof
<rog> not much ice where you are... :)
<fwereade_> rog: well, it's getting pretty cold, it's 15 now :p
<rog> lol
<rog> mind you, you probably don't have working central heating...
<fwereade_> indeed not :)
<fwereade_> actually I just realised I need to pop out for a bit
<fwereade_> bbs :)
<rog> c y
<TheMue> Mio
<TheMue> argh, moo, that's better
<mpl> hi all
<rog> mpl: hiya
<fwereade_> if anyone who's written trickyish charms is around and awake, please ping me
<fwereade_> scratch my request above
<niemeyer> Morning all!
<fwereade_> heya niemeyer
<TheMue> moo
<niemeyer> fwereade_, TheMue: Hey folks!
<rog> niemeyer, TheMue: morning!
<TheMue> hi rog
<mpl> yo
<niemeyer> rog, mpl: Hey folks
<mainerror> Sooo! Juju Charm School today. \o/
<niemeyer> mainerror: Ohhh, indeed!
<mainerror> Can't wait.
<jrgifford> niemeyer: almost forgot about that. :P
<jrgifford> what time (UTC) is it again?
<mainerror> My Charm drives me nuts. This CMS writes its config options into more than one file ...
<mainerror> 15:00 UTC jrgifford.
<rog> niemeyer: ec2-ec2test pushed. second stage merge request for ec2 now in (https://codereview.appspot.com/5449065/)
<niemeyer> rog, mpl, TheMue: I think we should give a push to make sure all of the packages we're handling run fine on the latest weekly
<niemeyer> rog: Already re-reviewing the former one
<rog> niemeyer: i was wondering if we should let the latest weekly settle a bit before digging in
<mainerror> What are you guys going to write in Go by the way?
<niemeyer> mainerror: We're running an experiment to run juju on Go
<mainerror> Oh, nice. :)
<mainerror> Right now it is using Python, right?
<niemeyer> mainerror: That's right
<TheMue> niemeyer: Based on weekly? Not on r60?
<niemeyer> TheMue: Yeah, weekly
<mpl> niemeyer: just curious? why not using the releases instead of the weeklies?
<niemeyer> TheMue: r60 is too old by now, and given our timeframe, there's not much benefit in sticking to it
<niemeyer> mpl: ^
<mpl> ok
<TheMue> niemeyer: hmm, imho weekly is currently changing too often. but i'll look.
<niemeyer> I suspect most of the incompatibilities for Go 1 are already in too
<rog> TheMue: i think it's easier to update in stages rather than all at once. we haven't got a large set of users, so it's not that much problem for us currently.
<mainerror> Oh wow! Go has GTK bindings.
<TheMue> rog: yep, the backlog of the current weekly and the r60 is already quite large.
<TheMue> *sigh*
<mpl> mainerror: are you speaking of mattn ones? if so, you should know that nsf is also redoing them.
<mainerror> Indeed. They are not 100% done yet but still a good start.
<rog> TheMue: gofix does a lot, but it's not a panacea
<mainerror> I see.
<mainerror> mpl: Did he host it on Github as well? I can't find it. :/
<mpl> mainerror: well, you'd better ask him directly on #go-nuts
<mainerror> Oh, awesome. :)
<TheMue> rog: i know, that's what i wrote on G+ too. i would like a r61 before go 1.
<niemeyer> rog: What's the branch name?
<rog> niemeyer: which branch?
<niemeyer> rog: The one you mentioned
<rog> niemeyer: i pointed to the codereview page.
<rog> niemeyer: it's lp:~rogpeppe/juju/go-juju-ec2-regions
<niemeyer> rog: Ok, I feel like I'm missing something
<rog> i think
<rog> the first one is go-juju-initial-ec2
<niemeyer> rog: It says " juju/ec2: add code to accept an ec2 region in the configuration file"
<niemeyer> rog: But there's a _lot_ in there, completely unrelated to this description
<niemeyer> rog: Is there a pre-req missing?
<rog> niemeyer: yes - i forgot to add it at first, and couldn't add it afterwards
<rog> niemeyer: i thought you knew about the first one
<niemeyer> rog: The first one was initial-foo, right?
<niemeyer> rog: which we debated a lot on
<niemeyer> rog: I haven't heard on it since?
<rog> niemeyer: https://codereview.appspot.com/5432056/
<rog> niemeyer: i pushed a new version which is skeleton only
<rog> niemeyer: didn't you get the email?
<niemeyer> rog: Ok, the last comment says "I'm doing that right now."
<niemeyer> rog: Check the review out
<rog> niemeyer: ah, i did another lbox propose and changed the description, and thought you'd see that
<rog> niemeyer: anyway, there it is.
<niemeyer> rog: Understood, thanks. I'll hack a bit on lbox today I think, to implement submit, and also to fix that problem
<niemeyer> rog: It should mail asking for another look
<rog> niemeyer: i think there should be a distinction between uploading, mailing and changing the description
<niemeyer> rog: But let me review your branches first..
<rog> niemeyer: that would be good, thanks
<niemeyer> mpl: How's your agenda? Do you have something in mind already to work on?
<mpl> niemeyer: nope. I haven't really had time to think about it. I'm very open to suggestions for now ;)
<niemeyer> mpl: May I suggest you go over all of the packages for juju, goamz, gozk, and goyaml, and see if there are any incompatibilities with weekly to be fixed?
<niemeyer> mpl: goamz has a bunch of sub-packages as well
<niemeyer> rog: Have you merged the error fixes on goamz that you had pending, or is there some to go still?
<rog> niemeyer: i *think* so. i'll just check
<mpl> niemeyer: alright, will do. can't start on it right away though.
<niemeyer> mpl: That's fine, thanks
<niemeyer> mpl: After that, let's see if we come up with a more interesting/long task involving juju logic
<niemeyer> mpl: These branches that rog is working on are the base, so it's a bit hard to split the workload, but with those in, it should be much easier
<mpl> niemeyer: I'd like that. if it can involve me learning about cloud stuff, even better.
<niemeyer> mpl: Absolutely
<niemeyer> mpl: EC2 provider is precisely what we're working on
<mpl> niemeyer: no worries. I surely don't want to disturb your workflow guys.
<niemeyer> mpl: You're helping, rather than disturbing
<mpl> psh, I know you guys have spent more time hand-holding me so far than if you had done it yourself. but I'm too ashamed as it's often a necessary process.
<mpl> *not too ashamed.
<rog> niemeyer: have you renamed goamz/aws yet?
<mpl> anyways, gotta run to a meeting, ttyl.
<rog> niemeyer: that's the last of error-fixes in goamz
<rog> mpl: see ysa
<rog> ya
<niemeyer> rog: Not sure.. checking
<rog> niemeyer: it still needs pushing
<niemeyer> mpl: Cheers!
<niemeyer> rog: Dude.. we're doing something wrong
<rog> niemeyer: we should probably tag r60 versions of the goamz packages so that people can still use r60 with them
<niemeyer> Crap
<rog> ?
<niemeyer> rog: Yeah, precisely
<rog> niemeyer: that's not too hard, right?
<niemeyer> rog: That's exactly what I was talking about
<niemeyer> rog: It's trivial
<niemeyer> rog: But we may have broken people trying to use it
 * niemeyer handles that right away
<niemeyer> rog: Have you pushed the branch?
<rog> niemeyer: which branch?
<niemeyer> rog: goamz/aws
<rog> niemeyer: i've pushed ec2 and s3
<rog> niemeyer: nope
<rog> niemeyer: i don't think i sent it for review, in fact
<niemeyer> rog: Ok, do you have any pending changes?
<rog> niemeyer: yes, just error fixes
<niemeyer> rog: Ok, leave that with me then
<niemeyer> rog: I'll fix the whole thing
<rog> niemeyer: there are two ErrorMatches that gofix doesn't catch, otherwise it's all gofix
<niemeyer> rog: Cool.. I'll fix that and fix the tagging
<TheMue> anyone have experience with using go weekly and release in parallel?
<TheMue> i would need both
<rog> TheMue: i have them in separate directories
<rog> TheMue: and a prefix command to run one in preference
<rog> TheMue: http://paste.ubuntu.com/757007/
<rog> (excuse the fact it's not bash, but i'm sure you get the idea)
<rog> TheMue: i call it "go-release". as in: go-release goinstall foo; go-release 6g x.go
<rog> etc
<TheMue> ic, looks good
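rog's paste isn't reproduced in the log, but the "prefix command" idea can be sketched as a small wrapper; he notes his actual script isn't bash, and the paths below are assumptions about a typical two-checkout layout:

```shell
#!/bin/sh
# go-release: sketch of a prefix command that runs its arguments with
# GOROOT/PATH pointed at a separate checkout of the Go release branch
# instead of the weekly one. $HOME/go-release is an assumed location.
go_release() {
    GOROOT="$HOME/go-release" \
    PATH="$HOME/go-release/bin:$PATH" \
    "$@"
}
```

Invoked as in rog's examples: `go_release goinstall foo` or `go_release 6g x.go`.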
<rog> niemeyer: gocheck looks like it needs an r60 tag too - it looks like the r59 version fails on r60
<niemeyer> rog: Uh oh
<niemeyer> rog: Can you pin-point which version to tag?
<rog> niemeyer: will do
<niemeyer> rog: Thank you
<rog> niemeyer: ironically, it's revision 60
<niemeyer> rog: LOL
<rog> just off for lunch and to bake a cake, back in an hour
<niemeyer> rog: Enjoy
<TheMue> ... and dcc us some cake
<niemeyer> :)
<niemeyer> rog: I _think_ it's all sorted out
<niemeyer> rog: In goamz, specifically
<rog> niemeyer: cool.
<rog> TheMue: and sorry, would've but didn't have enough bananas so the cake is only 62% the size it should've been. so none spare!
<TheMue> rog: *deeplySigh* so I've got to find a surrogate when I'm at the x-mas market later with my family
<rog> jimbaker: you know what would be really cool - if you could use scp to copy between units without involving the local machine, e.g.  juju scp wordpress/0:/etc/wordpress/config-* wordpress/1:/etc/wordpress
<marcoceppi> Charm school soon \o/
 * uksysadmin is ready for charm school. "How yoooou doin'?"
<rog> niemeyer: before i continue in the same vein, do you think go-juju-initial-ec2 and go-juju-ec2-region are pointing vaguely in the right direction? if not, i'd prefer to fix those rather than carry further on the wrong way.
<jcastro> good morning everyone!
<jcastro> uksysadmin: thanks for stopping by, we'll start in ~20 minutes!
<rog> niemeyer: oh cool, just got yr msg
<uksysadmin> looking forward to charming my servers
 * mainerror will probably be 10 minutes late
<niemeyer> rog: Yeah, I think initial-ec2 is ready to go in with those settled
<rog> niemeyer: cool. i've done those - i'll do the push
<jcastro> m_3_: around?
<mainerror> o/
<rog> niemeyer: done.
<niemeyer> rog: Looking
<niemeyer> rog: patch set is unchanged
<mpl> niemeyer: is it lp:gozk or lp:~juju/gozk/zk that you want me to check? also, fyi https://launchpad.net/gozk/zk returns a 404
<niemeyer> rog: But it was pushed 5 minutes ago
<niemeyer> mpl: gozk/zookeeper.. you've renamed it :-)
<rog> niemeyer: sorry, did the publish messages, did the push to ~rogpeppe..., forgot to do the lbox propose
<niemeyer> rog: lbox propose pushes as well, btw
<mpl> niemeyer: yeah but maybe you wanted me to check the whole project, wasn't sure.
<rog> niemeyer: yeah, but i've had it push to the wrong place, so i tend to do it myself to make sure
<m_3_> jcastro: yo
<jcastro> hi!
<niemeyer> rog: Cool
<niemeyer> mpl: Ah, no, just gozk/zookeeper is great already
<rog> niemeyer: dammit, i'm too late, the diff is empty
<marcoceppi> \m/
<jcastro> niemeyer: rog: mind if we kick you out for the charm school?
<rog> jcastro: no probs
<niemeyer> jcastro: Not at all, thanks for pushing this
<jcastro> but do idle in case there are questions!
<jcastro> Ok, welcome everyone to the first ever #juju charm school
<jcastro> we'll give it another minute for the stragglers to arrive
<niemeyer> rog, mpl: => #juju-tmp
<mpl> niemeyer: ah yes, silly me, your message pointing to that url (with zk) is from september. I had the rename the other way around in my head, sorry.
<niemeyer> TheMue: => #juju-tmp
<jcastro> Ok everyone.
<jcastro> Welcome to the first ever virtual charm school!!!
<jcastro> I am your emcee, Jorge Castro
<jcastro> I work on the community team at Canonical on Cloud
<jcastro> I am joined today by ....
<m_3_> Hi all, I'm Mark Mims
<m_3_> I'm here to help answer charm questions
<jcastro> ok
<jcastro> so first off, we'd like to see who's here to learn, so everyone who is here for charm school, raise your hand and give us a one line description of yourself
<medberry> o/
<ivoks> \o
<marcoceppi> o/ I love juju
<TheMue> o/
<nijaba> o/ (just a pm having a look)
<chute> o/
<ahs3> o/
<uksysadmin> o/ interested in learning how to deploy cloud environments in (as well as bare metal provisioning of) OpenStack
<jcastro> oh nice one. :)
<jcastro> ok so first, before we get into details on charms
<jcastro> let me arm you with some docs and links, that you can reference
<jcastro> first off, obviously, we have: https://juju.ubuntu.com/ and https://juju.ubuntu.com/docs
<jcastro> which is where we centralize all the information on juju
<jcastro> for news about juju and the latest charms, we put that on http://cloud.ubuntu.com
<jcastro> for example yesterday someone submitted a status.net charm
<jcastro> http://cloud.ubuntu.com/2011/12/deploying-status-net-quickly-with-juju/
<jcastro> We like to heavily talk about new charms
<jcastro> because while juju is an amazing tool, it's the charms that make it useful
<jcastro> in the same way that you might love apt, but you really love the huge repository that comes along with it
<jcastro> so we're running these sessions to help people write charms for things they care about.
<jcastro> So, before we get into the anatomy of a charm
<jcastro> does anyone have any questions so far?
<jcastro> hopefully you understand enough about juju to grasp the concepts.
<jcastro> but if you don't, we can get you started on the right road.
<medberry> how does one charm a package to use mysql when the current package has depends for a locally installed mysql?
<medberry> erm, to use the mysql on another box....
<fmo> Do you consider juju ready for production environments?
<jcastro> medberry: mark is answering yours now
<jcastro> fmo: we consider it a tech preview in 11.10
<m_3_> medberry: so the key here is to write a relation hook for your service
<fmo> Thank you, it's a bit out of scope but is it the same for Orchestra?
<jcastro> fmo: it's at the stage where you would start to look at it for 12.04ish timeframe deployment.
<m_3_> medberry: that relation hook would essentially reconfigure your app to point to the remote mysql instance
<medberry> nod. m_3, thanks. Is there an example charm (off the top of your head) I should refer to that does this?
<m_3_> medberry: it does this by exchanging credential info for a db with the mysql charm
<jcastro> fmo: Not quite sure on orchestra but I /believe/ it's at the same level, though orchestra bundles some things that are already mature, like squid, etc.
<ivoks> m_3_: but you would still end up with unused mysql-server running on 'app' node, wouldn't it? i think this is a packaging bug
<fmo> thank you :)
<m_3_> lots of charms connect to mysql this way... mediawiki would be a good one
<medberry> fmo, I believe you can consider Orchestra more mature than Juju and ready for deployments.  zul may know more.
<medberry> m_3_, thanks, I'll look at it (and others)
<m_3_> ivoks: your relation hook could pass debconf info and dpkg-reconfig the package though
<ivoks> m_3_: right, thanks
<m_3_> ivoks: so what you would do depends on the interface exposed by the package itself
 * medberry was thinking specifically of moodle if someone wants the context.
<jcastro> oh are you working on moodle?
<jcastro> that would be great!
 * m_3_ grins
<jcastro> ok, this brings me to the next quick topic, who has written a charm already? and who is planning on working on one?
 * nijaba plans on doing one for limesurvey
<jcastro> and does anyone have an itch to scratch to work on a certain charm?
<uksysadmin> o/ as far as getting a basic something going (like nginx behind haproxy) to try and understand the concepts
<ivoks> o/ postfix-dovecot
<ivoks> (as soon as i find time :)
<nijaba> (same here)
<marcoceppi> o/ finishing phpmyadmin and steam
<jcastro> (steam in this context is being able to quickly deploy a steam server for game clients to connect)
<medberry> jcastro, yep, and someone has pointed out in O and P moodle only has a recommends on the server bits (not a depends like in Lucid.)
<jcastro> o/ I'm working on an alice IRC charm, which is a self hosted web irc client.
<jcastro> ok, so let's start with looking at the anatomy of a charm
<medberry> marcoceppi, I saw that Aleph One was just released fully open source...
 * marcoceppi nods
<jcastro> but before we start a charm
<jcastro> you need to map out what you want your charm to do
<jcastro> https://juju.ubuntu.com/docs/write-charm.html#have-a-plan
<jcastro> in this document we outline some tips in the things you need to think about when planning out how to write your first charm.
<jcastro> usually I make a quick mindmap of what I need
<jcastro> so for example
<jcastro> status.net
<jcastro> I know I'll need mysql
<mainerror> o/ - I'm a juju fanboy.
 * mainerror is also late
<jcastro> and that the charm would have a relationship with mysql
<hazmat>  /join #juju-tmp
<jcastro> http://charms.kapilt.com/interfaces/mysql
<m_3_> http://pad.ubuntu.com/charmschool-2011-12-02
<jcastro> I can then see what other programs use mysql
<jcastro> this is useful because now I can look at other LAMP projects and use that to build my charm
<jcastro> which is what we want, reuse.
<jcastro> So you can use the charm browser to look for other services that have relationships with services you care about
<jcastro> The guy who wrote the status.net charm just derived it from his last LAMP charm
<jcastro> part time, it was only 2 days from first cut to being accepted into the charm store
<jcastro> and Marco/Clint are working on some convenience tools to make common tasks easier
<jcastro> marcoceppi: can you explain about your convenience functions?
<jcastro> (and then we'll go into this: http://pad.ubuntu.com/charmschool-2011-12-02)
<jcastro> wrong URL, I mean this: http://pad.ubuntu.com/charmschool-2011-12-02
<jcastro> ok we can get back to Marco
<marcoceppi> Sure, so I wrote a few functions that I found myself using a lot in bash, primarily ch_get_file which is passed a file url and either a url to its hash (md5, sha1, etc) or the actual hash itself. From there it'll download/compare file -> hash
<marcoceppi> There are a few others, check if a string is a file or a url, check if a string is a valid hash
<marcoceppi> You can find them at http://bazaar.launchpad.net/~charmers/charm-tools/trunk/view/head:/helpers/sh/net.sh. To use them in a charm, make sure you have the juju ppa added, install charm-helper-sh, and source /usr/share/charm-helper/sh/net.sh
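Based only on marcoceppi's description, an install-hook fragment using the helper might look like this sketch. The payload URL, hash, and unpack target are made-up placeholders, and the assumption that the helper prints the downloaded path is exactly that, an assumption:

```shell
#!/bin/bash
# install-hook fragment (sketch): fetch and verify a payload with
# charm-helper-sh. ch_get_file takes a file URL plus either a hash or
# a URL to the hash, as described in the channel.
set -e
source /usr/share/charm-helper/sh/net.sh    # from ppa:charmers/charm-helpers

PAYLOAD_URL="http://example.com/statusnet-1.0.tar.gz"       # hypothetical
PAYLOAD_SHA1="0123456789abcdef0123456789abcdef01234567"     # hypothetical

# assumption: the helper prints the local path of the verified download
FILE=$(ch_get_file "$PAYLOAD_URL" "$PAYLOAD_SHA1")
tar -xzf "$FILE" -C /var/www
```

This fragment needs the charm-helper-sh package installed to run, so it is illustrative only.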
<SpamapS> marcoceppi: FYI, there is a PPA just for those helpers,   ppa:charmers/charm-helpers
<marcoceppi> I believe the idea is to grow out charm helpers to include helper functions in all the languages
<marcoceppi> <3 SpamapS
<jcastro> and if I make something I think can be reused, do we just submit it to charm tools?
<SpamapS> marcoceppi: otherwise you'll pull in a new version of juju
<m_3_> SpamapS: yay!
<SpamapS> they'll be in the distro for 12.04
<jcastro> Since we're just getting a bunch of  charms started, if you're writing a charm and feel that you can improve the charm tools to make it better for the next person, then we certainly encourage you to do that.
<marcoceppi> definitely
<jcastro> ok, any questions on the tools before m_3_ gets into the meat of a charm?
<jcastro> ok, moving on ....
<SpamapS> are the tools webscale?
<SpamapS> ;)
<jcastro> everyone should have this URL open: http://pad.ubuntu.com/charmschool-2011-12-02
<jcastro> and we'll review this charm
<jcastro> Mark will bold the sections of the document he's talking about
<jcastro> and then when he moves on he'll unbold it and then bold the section he'll be reviewing next.
<m_3_> I'll give people a sec to join the pad
<marcoceppi> pad needs syntax highlighting :P
<m_3_> ok, so we'll make sure everyone has juju installed and working in a bit... first of all, let's cover some basic concepts of charms with an example
<m_3_> I pulled a community-contributed charm off the list... just landed yesterday or the day before
<m_3_> so the first section we have is a basic listing of the charm contents
<m_3_> notice there're a couple of yaml files, a copyright, and then a set of scripts in the hooks/ directory
<m_3_> the scripts in the hooks/ directory are all in shell for this example
<m_3_> but juju takes great pains to be language/tool agnostic
<m_3_> you can use your fav toolset.. hooks just have to run and exit with return codes similar to a shell script ( 0 is good )
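As a concrete illustration of that contract (not any specific charm): a hook is just an executable file whose exit status juju inspects. In this sketch an echo stands in for the real work:

```shell
#!/bin/sh
# hooks/start -- minimal hook sketch: juju is tool-agnostic and only
# looks at the exit status (0 = success, non-zero = hook failed).
set -e                         # any failing command fails the hook
msg="start hook ran"           # stand-in for e.g. `service myapp start`
echo "$msg"
```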
<m_3_> so before we dig into the contents of the hooks
<m_3_> let's see what this charm is / does
<m_3_> take a look at the metadata
 * m_3_ not a fast etherpad formatter :)
<m_3_> so it looks like the service described by this charm...
<m_3_> consumes a db (mysql in this case)
<m_3_> and presents a website
<m_3_> ok, that's pretty simple... and has a _lot_ in common with other services
<m_3_> so juju uses this metadata specification to stitch together different services
<m_3_> when juju spins a service unit up for this charm
<m_3_> it can 'relate' other services that match up
<m_3_> (does this by your command of course... but we'll get to the juju cli in a bit)
<chmac> Any insight on where adding Rackspace support lies in the priority list for the juju project?
<chmac> We're based in the UK and so UK IPs are a big must at the moment, which means AWS is out as they only have an Irish presence, nothing in the UK itself.
<chmac> Otherwise, I'd be very interested in experimenting with juju, it sounds like an awesome project.
<m_3_> chmac: juju can handle a number of different providers
<m_3_> including openstack
<m_3_> so when juju spins up a service unit for this charm
<m_3_> the first hook it calls is 'install' naturally enough
<m_3_> please take a sec to look at the install hook in the pad
<m_3_> starts off installing ppas and packages
<m_3_> it's worth noting here that juju charms can install services from packages
<m_3_> but it can also be used to install other ways as well
<m_3_> you can clone directly from the tip of your source code repo if you'd like
<m_3_> anything goes
<m_3_> for charms contributed back to the community please prefer packages when available
<m_3_> but that's not necessary
<m_3_> so this particular charm adds ppas/and packages
<m_3_> then 'ch_get's a file from status.net
<m_3_> (this is one of marco's tools to download and cryptographically verify payloads in charms)
<m_3_> anyway... rest of the script is pretty clear
<m_3_> note the open-port at the bottom
<m_3_> juju defaults to everything closed off... you have to explicitly open what you want via the 'juju expose' command
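The tail of an install hook following that shape could be sketched like this; the port is hypothetical, and since the real `open-port` command only exists inside a juju hook environment, a labelled stub keeps the sketch runnable:

```shell
#!/bin/sh
# End of a hooks/install sketch: packages and payload first, port last.
# open_port is a stub standing in for juju's real open-port command,
# which is only available inside a hook environment; drop the stub in
# a real charm.
open_port() { echo "open-port $1"; }      # stub, prints what it would do

set -e
# ... apt-get install, payload fetch/unpack omitted ...

# juju keeps everything firewalled by default: traffic reaches the unit
# only after the port is opened here AND the admin runs `juju expose`
open_port 80
```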
<m_3_> so that's install
<m_3_> there're start/stop hooks too... these're usually trivial 'service xxx start'
<medberry> so if you fail the open-port, there are iptables that prevent it? (firewall rules?)
<m_3_> correct
<m_3_> (not exactly iptables always)
<m_3_> depends on the provider
<medberry> ah, aws / openstack port rules, gotcha.
<m_3_> (and still in development iirc on bare-metal provider)
<m_3_> so some charms are for services that just install a service and spin it up
<m_3_> some services don't relate to other services
<m_3_> and we'd be done if that were the case for status.net
<m_3_> here, we need a db
<m_3_> and we could potentially add a reverse-proxy in front of it
<m_3_> etc etc
<m_3_> so before we move on to relations...
<m_3_>  any questions so far?
<jcastro> relations are the real meat of charms, so we'll pause here for questions
<m_3_> ok, on to relations!!
<m_3_> now this is the really cool part of juju... ok _a_ really cool part of juju
<m_3_> so when you look at the config for a service
<m_3_>  you can often bust it up into "the config specific to installing that node" and "the config specific to relating to another service"
<m_3_> often the relation config info is a nice concise set of info
<m_3_> well juju uses these boundaries to 'install' as a separate step from 'relate'
<m_3_> so the basic story here is to
<m_3_> 1.) spin up status.net
<m_3_> 2.) spin up mysql
<m_3_> 3.) relate the two
<m_3_> 4.) expose status.net to the outside world
<m_3_> when we relate two services the charm hooks for those services exchange info
<m_3_> take a look at what's bold in the pad
<m_3_> that's an example of the status.net charm getting relation-specific config info from the mysql service
<m_3_> the mysql (db-relation-{joined,changed} hooks create a new db and credentials and then pass it to the service joining)
<m_3_> so the status.net charm 'relation-get's the mysql db info that it needs
<m_3_> now we can write this info into the config for status.net... and bam!
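A hedged sketch of the consuming side of that exchange (not the actual status.net hook): `rget` is a stub for juju's real `relation-get`, which only exists inside a hook environment, and the setting names are assumptions:

```shell
#!/bin/sh
# hooks/db-relation-changed -- sketch. rget stands in for juju's
# relation-get; setting names (host, database, ...) are assumed.
rget() {
    case "$1" in
        host)     echo 10.0.0.1 ;;
        database) echo statusnet ;;
        user)     echo sn_user ;;
        password) echo sekrit ;;
    esac
}

set -e
db_host=$(rget host)        # real hook: $(relation-get host)
db_name=$(rget database)
db_user=$(rget user)
db_pass=$(rget password)

# settings arrive asynchronously: if they are not all set yet, exit 0
# and wait for the next -changed event rather than failing the hook
if [ -z "$db_host" ] || [ -z "$db_pass" ]; then
    exit 0
fi

echo "would write $db_name@$db_host credentials for $db_user into the app config"
```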
<m_3_> take a minute to browse the db-relation-changed hook
<m_3_> BTW all hooks are optional... only implement what you need... events will fire for {joined,changed,departed,broken}
<m_3_> does this 'relation-get' exchange of information make sense to everybody?
<m_3_> any questions about this
<m_3_> it's this very separation of config info into 'install' and 'relation'-specific config that allows juju to start handling horizontal scaling quite nicely
<m_3_> but more on that later :)
<m_3_> ok, take a sec to look at the website-relation-changed hook too
<medberry> m_3_, compare "relation-get" with "config-get" which occurs later.
<medberry> config-get comes from config.yaml?
<m_3_> it's perdy darned simple
<m_3_> was just gonna do config-get next
<medberry> okeydokey
<m_3_> so relation-get gives you info from the other side of a relation... another service
<m_3_> config-get gives you info from a couple of different places:
<m_3_> defaults are in config.yaml
<m_3_> I can set them at deploy-time on the command line
<m_3_> ( juju deploy -elocal --repository ~/charms --config ~/my_status_net.yaml local:statusnet status )
<m_3_> or I can also set them individually via the command line:
<m_3_> at any time during the life of the service, I can 'juju set statusnet var1=val1 var2=val2'
<m_3_> there's a config-changed hook that can handle these changes in addition to the install or relation config
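The defaults m_3_ mentions live in the charm's config.yaml; this fragment is a made-up example of the format (the option name is not from the real status.net charm), with the matching one-liner a hook would use:

```yaml
# config.yaml -- defaults for `config-get`; overridable at deploy time
# with --config, or later with `juju set` (option name hypothetical)
options:
  site_name:
    type: string
    default: "My Status Site"
    description: Title displayed by the site.

# in a hook, the current value is then read with:
#   site_name=$(config-get site_name)
```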
<m_3_> ok, so one more thing about relations and relation-hooks and then we can move on
<m_3_> when a relation is joined, relation-changed hooks can fire repeatedly
<m_3_> they do this as long as either side is calling relation-sets
<m_3_> what this means is that your code has to be able to be called over and over again without barfing your config
<m_3_> (idempotency for you math geeks out there.... i^2=i)
<m_3_> it's not hard, but it's good practice to make all of your hooks idempotent
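The idempotency point can be illustrated with a pure-shell pattern (scratch paths, not a real charm): guard each step so a re-fired -changed event is harmless:

```shell
#!/bin/sh
# Idempotent-hook pattern sketch: re-running the script is a no-op
# unless something actually changed. Uses a scratch file, not real config.
set -e
CFG="${TMPDIR:-/tmp}/demo-charm.conf"
new_content="db_host=10.0.0.1"       # imagine this came from relation-get

# only (re)write the config when the content would actually change, so
# repeated -changed events don't thrash the service
if [ ! -f "$CFG" ] || [ "$(cat "$CFG")" != "$new_content" ]; then
    printf '%s\n' "$new_content" > "$CFG"
    echo "config updated"            # restart the service only here
else
    echo "config unchanged"
fi
```

Running it a second time takes the "config unchanged" branch, which is exactly the property the hooks need.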
<m_3_> alright, I could ramble on all day about scaling services and show you example stacks... but let's get on with charm school!
<m_3_> any questions about the status.net charm before we move on?
<jcastro> no questions so far?
<jcastro> there's no way you guys can be that good. :)
 * m_3_ is just that exciting to watch type :)
<amithkk> jcastro: lol
<mainerror> What do I do if there is no install_cli.php script to configure the site?
<m_3_> mainerror: wow, that's a tough one
<chmac> m_3_: I thought I'd read somewhere that juju only supports EC2 compatible clouds at the moment.
<chmac> m_3_: Does juju support rackspace right now?
 * jrgiffordwebchat has same question as chmac 
<amithkk> m_3_: Can't juju be installed on any server?
<m_3_> mainerror: I'd write the install scripts myself and have the charm's install hook call my install scripts
<marcoceppi> mainerror I had a problem like that, in where I just had to re-write the web-based installer into the charm
<m_3_> chmac: yes, during the last ODS we even did the meta thing on openstack...
<mainerror> I see. Thanks. :)
<m_3_> i.e., we had a bare-metal rack
<m_3_> chmac: we used juju to deploy openstack itself on the bare metal rack
<chmac> m_3_: Wow, awesome, so juju can deploy new Rackspace servers just as it deploys new EC2 servers?
<medberry> openstack is not per se the same thing as rackspace.
<chmac> m_3_: Great, I'll get into the documentation in greater detail.
<m_3_> chmac: then we backed out and used another juju environment to spin up a hadoop cluster on that openstack cloud
<m_3_> chmac: note that juju doesn't support the rackspace-api... just the ec2-compatible-api on openstack
<m_3_> medberry: thanks
<chmac> I've read that juju's not production ready, is that really true, or is it more cautionary? :-)
<m_3_> chmac: sorry, I don't know if rackspace has a public-facing openstack cloud yet
<m_3_> chmac: definitely true
<m_3_> chmac: there are components that need to be made highly available before it's totally production ready
<jcastro> chmac: it's a tech preview in 11.10
<chmac> m_3_: Really? It's so tempting to use it in production :-)
<jrgiffordwebchat> so.. it was a tech preview in 11.04/11.10, for 12.04 it'll be production?
<m_3_> chmac: we're shooting for production-ready in 12.04 (long-term-support) version of ubuntu
<m_3_> chmac: I would use it in production
<jcastro> chmac: but certainly we encourage people to run it in dev so that we can get feedback to make it better in 12.04 for an LTS.
<chmac> Makes sense
<jcastro> chmac: this is why we want people banging on it now
<m_3_> but you've got to take some extra precautions atm to ensure service/data integrity
<chmac> m_3_: Now that's what I'm talking about, see, I'm usually happy to test earlier than most :-)
<m_3_> it does spin up EBS-rooted instances in ec2 atm for instance just to be on the safer side
<chmac> I'll investigate the openstack api issue with Rackspace, maybe there's an alternative provider in the UK.
<chmac> Thanks for the snappy response, gotta run now, being taken shopping! :-)
 * m_3_ totally ignorant
<jcastro> thanks for coming!
<m_3_> fun fun!
<jcastro> any other questions?
<jcastro> amithkk: I believe you asked if juju can be installed on any server?
<m_3_> amithkk: sorry missed your question
<m_3_> did the above discussion answer that?
<mainerror> What is the best practice to create an install script for a service if there is no direct download option?
<m_3_> mainerror: best is to use apt-get install
<m_3_> mainerror: next is to download a payload... binary or tarball
<medberry> .. then git clone?
<m_3_> then install accordingly... cli.sh or ./configure && make && make install
<m_3_> git clone's totally cool... we have several charms that do it already
<m_3_> (safer to 'git clone <url> -b stable_branch_name')
<amithkk> m_3_: yep
<mainerror> What I meant is, what if there is no wget usable direct URL but only a redirect URL?
<m_3_> mainerror: so in the latter case, you'd need the install hook to install 'build-essential'
<amithkk> sorry for the slow response
<m_3_> or whatever's needed for that build
<m_3_> mainerror: hmmm... not sure I'm following, but perhaps a ppa for that charm would work?
<medberry> I think I know the answer... but clarifying:
<medberry> jcastro,  m_3_, if I want to populate some content, does that happen after juju deploy? Assume I'm using public charms--should I mod them or just do the population on the host after the fact?
<mainerror> Ah, right. That would be an option.
<medberry> (ie, load web content, etc.)
<jcastro> I think mainerror means
<m_3_> medberry: sure, you can pull/install during a relation hook even
<jcastro> what if I can't wget a tarball directly?
<jcastro> and it's buried in some website that has redirects, etc.
<mainerror> Indeed but m_3 actually answered it, I think.
<m_3_> so you _could_ just stick a tarball in the charm itself... there're some size limitations to that, but I don't remember them off-hand
<mainerror> As I understood his answer was to create a package for it in a PPA. Maybe I misunderstood.
<medberry> m_3_, but that's no longer really a public charm (if it has my content in it). So I've locally forked the charm?
<m_3_> mainerror: heck, let's go over the specific thing you're trying to do
<m_3_> medberry: yes, exactly
<m_3_> medberry: which is ok... anything goes in charms
<medberry> thanks all.
<m_3_> medberry: your choice to make something harder to maintain, and/or private/proprietary
<mainerror> Uhm, it was more of a general question. Luckily I can wget it directly.
<medberry> mainerror, is there an example--some actual issue you can point us at?
<medberry> ah.
<m_3_> medberry: but contrib back to community set of charms in the 'charm store' is different
<medberry> nod.
<m_3_> they get tagged according to all sorts of different criteria
<jcastro> I wouldn't consider a tarball in a charm best practice
<jcastro> ideally if it was an open project you could get them to fix that
<m_3_> prefer packages if available
<jcastro> or ask them to mirror their tarball releases on launchpad
<jcastro> or package it in a PPA as a contribution to ubuntu and that project
<m_3_> marcoceppi has a 'use_upstream' config option on a charm
<marcoceppi> yeah, it switches between the latest stable tarball and what's in the repos.
<m_3_> true => pull from source, otherwise install the package
<marcoceppi> but it always defaults to the repo, you have to juju set in order to get the upstream version
<m_3_> which is really a beautiful way to handle that from a user-perspective (as long as default is pkgs :)
<jcastro> but if it's some horrible program that you can't improve and need it for work or something then feel free to get as ugly as you want, but for the public charms in the charm store we're looking for nice clean charms that people can use and adapt.
<jcastro> ok, we've gone through the charm
<jcastro> and this ends the official charm school
<jcastro> however
<jcastro> we certainly encourage you to hang out
<jcastro> smoke if you got em
<jcastro> and feel free to ask us questions
<jcastro> or let us know what you're working on
<jcastro> or ask for help or a review
<jcastro> we hold office hours (see topic) or you can always post to the list
<jcastro> but the juju team works a ton, they're usually in here and love to answer questions from users
<jcastro> so please don't be shy!
<medberry> tx guys.
<amithkk> awesome session!
<mainerror> Thanks for this session!
<jcastro> we'll make the logs handy for everyone
<dannf> thanks guys - is service maintenance within the scope of juju? e.g. keeping mysql up to date w/ security patches?
<marcoceppi> That's a good question actually
<m_3_> dannf: yes, there's an upgrade action called from the command line 'juju upgrade <service-name>'
<dannf> and that is implemented with a hook, i presume?
<m_3_> that you can customize with a dedicated upgrade hook
<dannf> sweet
<m_3_> usual practice is to make 'install' idempotent and then symlink to the upgrade hook
<dannf> makes sense
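The symlink practice m_3_ describes can be sketched like this; the upgrade hook's filename (`upgrade-charm`) is an assumption, and the charm directory here is a scratch example rather than a real charm:

```shell
#!/bin/sh
# With an idempotent install hook, the upgrade hook can be the same
# script via a symlink. A scratch dir stands in for a real charm tree.
set -e
HOOKS=/tmp/mycharm/hooks
mkdir -p "$HOOKS"

# a trivial stand-in install hook (a real one would be idempotent)
printf '#!/bin/sh\necho "install ran"\n' > "$HOOKS/install"
chmod +x "$HOOKS/install"

ln -sf install "$HOOKS/upgrade-charm"   # upgrading just re-runs install
"$HOOKS/upgrade-charm"
```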
<m_3_> dannf: rolling upgrades can be done, but you have to segment the service a bit... i.e., plan for it... this'll improve over time
<m_3_> i.e., if you've got a 20-unit service and only wanna upgrade a few of them at a time
<m_3_> scaling in juju is just cool!  but that'll be another charm school :)
<jcastro> yes, it's actually unfair to juju not to mention scaling this time
<jcastro> but when we get to it you'll really start to see the power
<m_3_> it's a kind of magic
<m_3_> sorry, had to
<m_3_> :)
 * m_3_ now has song stuck in head
 * mainerror too
<m_3_> hazmat niemeyer: y'all can have the channel back... thanks!
<mpl> wheee
<mpl> feels good to come out of the dark irc corner
<niemeyer> Oh, hey.. #juju is back
<niemeyer> rog: Review sent
<rog> niemeyer: w.r.t. Bootstrap, I said that I would change the test (to test for len(ms)==1) once Bootstrap actually starts a machine, but for the time being i think it's ok.
<niemeyer> rog: It's not..
<niemeyer> rog: Comment the test out if you can't satisfy it.. a test assertion that asserts something wrong is doing what, exactly?
<niemeyer> rog: Or, maybe better
<niemeyer> rog: Comment Bootstrap out
<rog> that's a better idea
<niemeyer> rog: Then the test is probably correct
<niemeyer> rog: It should not Destroy, though
<rog> niemeyer: why not?
<niemeyer> rog: Because that's the complement of Bootstrap
<niemeyer> rog: Destroy doesn't work in a real deployment without Bootstrap, I think
<niemeyer> fwereade_: Does it?
<rog> niemeyer: currently, Destroy takes down all machines that were started from the current environment
<rog> niemeyer: which seems ok
<fwereade_> niemeyer, reading context
<niemeyer> rog: Hmm.. fair enough.. that sounds like a good behavior, even if we don't do that in Python
<fwereade_> niemeyer, I don't think destroy-environment is actually an *error* if you don't have an environment
<niemeyer> fwereade_: The question is whether it kills all machines, even if we don't find the data about it in e.g. S3
<fwereade_> niemeyer, checking
<fwereade_> niemeyer, it gets ec2 machines by reservation, so it should be safe
<niemeyer> fwereade_: Cool, thanks
<rog> niemeyer: i replied with one question about open vs Open
<niemeyer> rog: and I already replied to it
<rog> niemeyer: i think i'd prefer to wait until the occasion arises, if that's ok
<niemeyer> rog: Sure, no prob
<rog> niemeyer: the previous weekly builds were fine with all tests passing btw, so it's *something* that's changed recently
<niemeyer> rog: You mean re. http?
<rog> niemeyer: yeah
<rog> niemeyer: https://codereview.appspot.com/5448085/
<niemeyer> rog: goamz should be entirely fixed, tagged, and shiny already
<rog> niemeyer: i just pulled and it didn't seem to be
<rog> unless...
<rog> ah, rename!
<rog> no
<rog> niemeyer: http://paste.ubuntu.com/757292/
<rog> it still needs fixing, i think
<rog> (that was done just after i'd removed the aws directory, BTW)
<niemeyer> rog: I think I screwed up pushing to ~niemeyer.. let me see
<rog> niemeyer: renames are awkward - it should lazily evaluate aliases i think
<niemeyer> rog: yeah, I'm sorry
<niemeyer> rog: The rename worked fine, but I forgot the --remember
<niemeyer> rog: Which means the fixup and tagging afterwards went to the wrong location
<niemeyer> rog: Ok, I _think_ everything should be fine now, including the more unknown packages (sdb, etc)
<rog> niemeyer: if i push to lp:goamz/aws, it should remember that, even if it's later changed to be an alias for something else, i reckon
<niemeyer> rog: It doesn't
<rog> i know
<niemeyer> rog: Unless you say so with --remember
<niemeyer> rog: Ah, you're expressing a wish, sorry
<rog> yeah
<niemeyer> rog: Yeah, it makes sense to me too
<rog> niemeyer: PTAL
<niemeyer> rog: e.Destroy has already been mentioned in two other reviews.. please fix it or respond to the point.
<rog> [17:06] <niemeyer> rog: Hmm.. fair enough.. that sounds like a good behavior, even if we don't do that in Python
<niemeyer> rog: That's not the same issue
<niemeyer> rog: We should be able to Open without Destroy
<niemeyer> rog: Or do you think we shouldn't?
<rog> niemeyer: i think that the destroys will happen at the end
<rog> niemeyer: so it shouldn't matter
<rog> niemeyer: what do you propose as a solution?
<niemeyer> rog: So if I open the same environment 10 times, you think we should destroy it 10 times
<rog> niemeyer: yeah.
<niemeyer> rog: That's silly isn't it?
<rog> niemeyer: probably :-)
<niemeyer> rog: So let's do something else..
<rog> niemeyer: your suggestion was to have suite.Bootstrap, right?
<niemeyer> rog: I suppose each environment only has to be destroyed once, and that we don't really have to do it through the same object that created it
<rog> niemeyer: so the environment that got bootstrapped gets destroyed
<niemeyer> rog: Does that make sense?
<fwereade_> happy weekends everybody :)
<niemeyer> fwereade_: thanks, have a great one too
<rog> fwereade_: have a good one
<niemeyer> rog: I don't mind to change the original suggestion.. just want to have some conversation going about the topic
<niemeyer> rog: You say Destroy shouldn't depend on Bootstrap.. that's fine by me too
<rog> niemeyer: i think that if we're wanting to test Bootstrap/Destroy with different opens, we can bypass the suite environment handling
<niemeyer> rog: I don't want to test.. it's the opposite.. I don't want the suite to be doing something that makes no sense all the time
<rog> most tests will be against a single environment, i think
<rog> niemeyer: isn't it worth doing something that makes sense almost all of the time? hmm, i dunno.
<niemeyer> rog: Sorry, I don't get that point..
<niemeyer> rog: The same environment shouldn't be destroyed twice.. it will fail, and the test will fail because Destroy fails
<rog> anyway, my thought was that we could have a Bootstrap method as well as an Open method. the  Bootstrap method marks the env to be destroyed at the end.
<niemeyer> rog: That was my suggestion, that you advocated against by saying Destroy should always be called :-)
<rog> which will happen exactly once for each env created via bootstrap
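The destroy-once bookkeeping rog and niemeyer converge on can be sketched in Go. All names here (`Env`, `Suite`, and their methods) are illustrative, not juju-core's actual test API:

```go
package main

import "fmt"

// Env is a stand-in for a juju environment.
type Env struct {
	name      string
	destroyed int
}

func (e *Env) Destroy() { e.destroyed++ }

// Suite tracks which environments were created via Bootstrap,
// so teardown destroys each of them exactly once.
type Suite struct {
	envs map[string]*Env
}

func NewSuite() *Suite { return &Suite{envs: make(map[string]*Env)} }

// Open returns an environment without scheduling a Destroy.
func (s *Suite) Open(name string) *Env {
	if e, ok := s.envs[name]; ok {
		return e
	}
	return &Env{name: name}
}

// Bootstrap opens the environment and marks it for destruction at
// teardown. Bootstrapping the same name again is a no-op.
func (s *Suite) Bootstrap(name string) *Env {
	if e, ok := s.envs[name]; ok {
		return e
	}
	e := &Env{name: name}
	s.envs[name] = e
	return e
}

// TearDown destroys each bootstrapped environment exactly once,
// no matter how many times it was opened.
func (s *Suite) TearDown() {
	for _, e := range s.envs {
		e.Destroy()
	}
}

func main() {
	s := NewSuite()
	e := s.Bootstrap("ec2")
	for i := 0; i < 10; i++ {
		s.Open("ec2") // repeated opens schedule no extra destroys
	}
	s.TearDown()
	fmt.Println("destroy calls:", e.destroyed) // prints: destroy calls: 1
}
```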
<niemeyer> rog: Which is fine by me
<rog> niemeyer: that was then :-)
<niemeyer> rog: The only point I'm trying to address is not destroying things twice
<niemeyer> rog: and it's easy to do that
<rog> niemeyer: yeah, ok i'll do that
<rog> see whether i can do it in 3 minutes!
<niemeyer> rog: Well.. let's do this
<niemeyer> rog: Please merge it as-is..
<niemeyer> rog: We'll sort it out later
<rog> ok
<rog> it's ok as is?
<niemeyer> rog: yeah
<niemeyer> rog: I'm starting to think we should just Destroy it within each test as necessary and be done with it. Let's see how things show up in practice once we have a few additional tests
<rog> niemeyer: the problem with that is that the env is left around if you do a failed Assert
<niemeyer> rog: defer
<rog> ok
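niemeyer's one-word answer is Go's usual cleanup idiom: schedule Destroy with `defer` before anything can fail, so even a panicking Assert can't leave the environment running. A minimal sketch (the types are illustrative, not juju-core's real interface):

```go
package main

import "fmt"

// Env is an illustrative stand-in for a juju environment.
type Env struct{ destroyed bool }

func (e *Env) Destroy() { e.destroyed = true }

// testBody simulates a test whose Assert fails partway through.
func testBody(e *Env) {
	// Schedule cleanup first: defer runs even when the function
	// panics, so a failed Assert can't leak the environment.
	defer e.Destroy()
	panic("Assert failed")
}

func main() {
	e := &Env{}
	func() {
		defer func() { recover() }() // swallow the simulated failure
		testBody(e)
	}()
	fmt.Println("destroyed:", e.destroyed) // prints: destroyed: true
}
```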
<niemeyer> But let's see.. it's bothering me a bit knowing this will blow up but I'm probably concerned too soon
<rog> bugger, conflicts
<niemeyer> Huh.. how come?
<rog> niemeyer: all sorted now. jeeze. i *think* i got all the steps right.
<rog> gotta go, i am required to pack for the weekend. see ya monday. thanks for all the reviews, i love 'em really :-)
<niemeyer> rog: Thanks a lot for everything this week, it was awesome
<niemeyer> rog: and you're very welcome
<rog> niemeyer: and if you did manage to have a look at ec2-region before then i'd be very happy indeed...
<niemeyer> rog: Oh, wait
<niemeyer> rog: I've checked it out, but the proposal includes unrelated changes from the previous branch
<niemeyer> rog: I mentioned that in the codereview itself
<rog> fuck
<rog> which changes?
<rog> niemeyer: i can fix it if you let me know right now, otherwise i'll have to leave it
<niemeyer> rog: See the codereview
<niemeyer> rog: I think it had all the ec2-initial changes in
<rog> niemeyer: which file?
<niemeyer> rog: All of them
<niemeyer> rog: Or most, anyway.. checking out
<rog> niemeyer: i thought ec2-initial was approved, bar these changes
<niemeyer> rog: It is!
<niemeyer> rog: and it's merged even, right?
<rog> yeah
<niemeyer> rog: My point is that this: https://codereview.appspot.com/5449065/
<niemeyer> rog: Has a ton of unrelated changes coming from other branches
<rog> ah, sorry, i thought you were referring to the review just done
<rog> one mo
<rog> niemeyer: there, is that better?
<rog> niemeyer: it looks like the changes are relevant now
<rog> gotta go
<niemeyer> rog: Superb, thanks!
<niemeyer> rog: Enjoy
<_mup_> Bug #899433 was filed: YAML errors in charms should be obvious to users <juju:New> < https://launchpad.net/bugs/899433 >
#juju 2011-12-03
<m_3> crazy how many times I write config-git instead of config-get... need a new vim alias
<SpamapS> btw, charm-tools is in precise now, woot. :)
<m_3> negronjl: ping
<negronjl> m_3: pong
<m_3> hey man
<negronjl> m_3: what's happening
<m_3> ok, so I finally got a bunch of node apps in front of mongo
<m_3> adding extra mongo nodes winds up with _everything_ bound to the last mongo node that comes up
<negronjl> m_3: hmm ...
<m_3> I guess it's re-firing relation hooks... which it should
<m_3> but how can you tell that the last node up isn't what you wanna bind to?
<negronjl> m_3:  your app should connect to the output of relation-list
<negronjl> m_3: not the output of relation-get <>
<m_3> replset isn't enough info alone
<m_3> ok
<m_3> so pull the whole set at once
<negronjl> m_3:  the mongo driver takes care of the rest ... just pass all of the mongodb nodes to your app
<negronjl> m_3:  which app did you use ( a charm ? ) if so, which one so I can re-create
<m_3> but it's gotta put it into facter or something to get all the ip addrs... otherwise I just get names
<negronjl> m_3: names is fine
<m_3> sorry, 'relation' names, not dns
<m_3> lp:~mark-mims/charm/oneiric/node/trunk
<negronjl> m_3:  can you email me the description of your test so I can re-create and try to fix what I can ?
<m_3> it's just hitting one db instance at a time... I can refactor that
<m_3> but it'll prob need something like facter
<m_3> no biggie
<negronjl> m_3:  let me try and recreate and I'll report back.
<m_3> what are you using to test the frontend?
<negronjl> javascript
<negronjl> m_3: ^^
<m_3> a charm? or just an external app?
<negronjl> m_3: manual testing using javascript ( mongo has a javascript interface )
<negronjl> m_3:  Let me create a simple web-app that is DB intensive and I'll get back to you to test and such
<m_3> gotcha... ok, well I'll just plan on refactoring the app
<negronjl> m_3:  cool .. the more tests ( and diff. kinds of test ) the better.
<m_3> I didn't get the replicated data when hitting the last node joining though... but there're too many things that went wrong with the test to say so reliably :)
<negronjl> m_3:  no worries ... I'll try and make a better test
<m_3> there's an outdated blog post about the app itself http://markmims.com/cloud/2011/09/07/node-mongo-ubuntu.html
<m_3> might explain the app quickly
<negronjl> m_3: I am thinking of combining a test of mapreduce and taking the output and putting it into mongo or, doing a mapreduce job within mongo itself to test.
<m_3> simple though
<m_3> ha
<m_3> cool, yeah two birds with one stone
<negronjl> m_3:  exactly :)
<m_3> ok, maybe half dozen birds with one stone
<negronjl> m_3: I hope :)
<negronjl> m_3:  gotta jet out for a bit.  email me if you have anything else
<m_3> later man... emailing stack script now
<negronjl> m_3:  cool .. thx
<negronjl> m_3: ping
<m_3> yo
<m_3> negronjl: ^
 * m_3 dinner bell
<george_e> I'm reading this page: https://juju.ubuntu.com/docs/getting-started.html
<george_e> ...and it mentions something about control-bucket and admin-secret in environments.yaml.
<george_e> What are those two fields and what do they mean?
 * SpamapS is on his way to give a juju presentation to the ventura county linux users group :)
<marcoceppi> SpamapS: good luck!
<george_e> In environments.yaml, if the key 'default' is missing, which deploy environment is chosen as the default? The first?
<marcoceppi> george_e: I believe so
<marcoceppi> Either that or it throws an error
<george_e> Ah, okay.
<marcoceppi> george_e: http://paste.ubuntu.com/758722/
<george_e> Ah, thanks.
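For reference, both of george_e's questions map onto a minimal EC2 environments.yaml along these lines (a sketch with placeholder values: for the EC2 provider, `control-bucket` names the S3 bucket juju uses to store environment state, and `admin-secret` is the password the juju client uses to authenticate to the environment; with several environments defined and no `default:`, juju needs the environment named explicitly):

```yaml
# Sketch of a minimal EC2 environments.yaml (all values are placeholders)
default: sample
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    # S3 bucket juju creates/uses for environment state; must be globally unique
    control-bucket: juju-sample-a1b2c3
    # password used to authenticate the admin client to the environment
    admin-secret: some-long-random-string
```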
#juju 2011-12-04
<SpamapS> talk went fine... good practice for SCALE.. not many cloud users in ventura county. :-P
<george_e> SpamapS: Somebody told me a GUI was in the works for Juju?
<george_e> I've been working on one myself the last couple of days.
<SpamapS> Oh I don't know about guis
<SpamapS> I <heart> cmdline
 * SpamapS realizes all his problems with his demo today were because he had a lucid AMI specified for some reason
<george_e> Well me too... but a more accessible interface for beginners is always a good thing.
<george_e> Plus it makes it a little easier to keep an eye on 'juju status'.
<SpamapS> oohh and m1.large.. awesome, I spent like $8 on FAIL today
<SpamapS> george_e: Ah, yeah I think its always been something planned
<george_e> Well, if there _isn't_ anyone working on one - I am :)
<SpamapS> george_e: I'd love an interface that just showed status in a pretty way :)
<SpamapS> I did the gource thing.. but thats just a show
<george_e> SpamapS: I will certainly make sure of that.
<george_e> I plan to have a tree view that displays the services, units, etc. in a hierarchical manner.
<george_e> Plus it can even make use of libnotify for errors.
<george_e> That way you find out when something goes wrong.
<SpamapS> Oh a native GUI?
<SpamapS> I'd do it as an HTML5 app
<SpamapS> george_e: notify is weak, use an indicator. :)
<SpamapS> if you miss the notify.. you never know the problem. Indicator will let you turn the envelope blue. :)
<george_e> SpamapS: It's going to be a Qt application - that's where my skills are.
<SpamapS> sweet
<george_e> I believe there is an AppIndicator package for Qt somewhere... but I don't think it made it into the Oneiric archives.
<george_e> It's here by the way: https://launchpad.net/juju-gui
<george_e> I have daily builds and a PPA set up for it.
<marcoceppi> HTML5 app would be sweet too, that runs on the bootstrap <3
<marcoceppi> For when you're out and about
<nijaba> jcastro: done with my limesurvey charm.  Review welcome :) bug #899849
<_mup_> Bug #899849: New charm (Limesurvey) <new-charm> <juju Charms Collection:New> < https://launchpad.net/bugs/899849 >
<SpamapS_> nijaba: reviewing your limesurvey charm now
<SpamapS_> nijaba: review done.. *SO* close
 * SpamapS_ heads to brunch
<nijaba> Thanks SpamapS.  Will try to fix and let you know :)
<kees> hi! the docs are wrong for this IRC channel. ;) https://juju.ubuntu.com/docs/faq.html
<SpamapS> <sigh>  ... we really need to figure out what is wrong with LXC
<SpamapS> PTY allocation request failed on channel 0
<SpamapS> :-/
<SpamapS> kees: thanks, I'll fix that..
<kees> so, I have had juju lose its mind.
<kees> it stopped launching systems, and complains that machine 9 is missing
<kees> any ideas? :P
<kees> MachineStateNotFound: Machine 9 was not found
<kees> and now I can do no more provisioning.
<SpamapS> Interesting
<SpamapS> No I don't think I've seen that.. but I have seen the provisioning agent basically stop working..
<SpamapS> kees: if you read the environment of the provisioning agent on machine 0, you can re-start it (or it might be upstart managed in more recent releases, I haven't checked)
<SpamapS> kees: but I suspect it's a problem in ZK and the problem will continue.
<kees> ZK?
<kees> zookeeper.
<kees> where does it store the details? seems like I could just _remove_ machine 9
<kees> can I restart zookeepers without wrecking all the running units?
<SpamapS> kees: docs should refresh in the next hour
<SpamapS> kees: Zookeeper, yes
<SpamapS> yeah
<SpamapS> kees: you can restart zookeeper yes, though I believe it may cause the agents to spew copious errors... they might even die... I forget if that bug was fixed yet
<SpamapS> kees: https://bugs.launchpad.net/juju/+bug/861928  .. is this maybe your bug ?
<_mup_> Bug #861928: provisioning agent gets confused when machines are terminated <juju:New> < https://launchpad.net/bugs/861928 >
<kees> SpamapS: yeah, that's totally my bug.
<kees> SpamapS: any work-around?
<SpamapS> kees: I think you probably have to dive into ZK and remove the machine node
<kees> SpamapS: where does it store it?
<kees> if agents die, can I restart them, or are they just totally hosed?
<SpamapS> kees: you can restart them.. its getting easier with the branch that puts them in upstart jobs
<SpamapS> actually that may have landed recently
<kees> hrm, I'm using whatever is in oneiric
<SpamapS> 398 ?
<SpamapS> kees: so you may have to dig through the cloud-init bits to find the execution line
<kees> to restart the agent, or fix ZK?
<SpamapS> to restart the agent
<SpamapS> to fix ZK, there's a zookeeper client on machine 0
<kees> right
<kees> and how do I tell that client to forget about machine 9? :)
<SpamapS> /usr/share/zookeeper/bin/zkCli.sh I think
<SpamapS> kees: rm /machines/machine-000000009
<SpamapS> kees: I think
<kees> "delete" instead of "rm"?
<SpamapS> maybe
<kees> rm, Node does not exist: /machines/machine-000000009
<SpamapS> ls /machines
<SpamapS> nothing there?
 * kees pokes harder
<kees> nopes
<kees> [machine-0000000025, machine-0000000024, machine-0000000000, machine-0000000001, machine-0000000010, machine-0000000006, machine-0000000007, machine-0000000008, machine-0000000002, machine-0000000023, machine-0000000004, machine-0000000005]
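For later readers, the cleanup being attempted looks roughly like this zkCli session (a sketch: the server address is an assumption; note the node names in the `ls` output above are zero-padded to ten digits, so the earlier `machine-000000009` path was one digit short, and in this log machine 9's node turned out to be genuinely absent anyway):

```
$ /usr/share/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /machines
[machine-0000000025, machine-0000000024, machine-0000000000, ...]
[zk: 127.0.0.1:2181(CONNECTED) 1] delete /machines/machine-0000000009
```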
<SpamapS> so..
<SpamapS> it may be that the provisioning agent is internally confused
<SpamapS> so restarting it may fix your problem
<kees> and it doesn't live in /etc/init nor /etc/init.d
<kees> SpamapS: is there a correct way to restart it? Or just kill it and run   python -m juju.agents.provision -n --logfile=/var/log/juju/provision-agent.log --pidfile=/var/run/juju/provision-agent.pid
<kees> ?
<SpamapS> kees: I think you might need to replicate some env vars
<SpamapS> kees: have to run... good luck. ;)
<kees> okay, thanks!
#juju 2013-11-25
<freeflying> why, when I specify auth_url in environments.yaml as http, does juju bootstrap still try to connect to the url over ssl
<freeflying> is it a feature of juju to force use ssl
<freeflying> provider is openstack
<davecheney> freeflying: force juju to use or not use ssl ?
<freeflying> davecheney, I wanna use normal http, but it was forced to connect to https, even though I have it configured as http://
<davecheney> freeflying: sorry, we only support ssl urls
<freeflying> davecheney, ok, that explains it, thanks for clarifying
<davecheney> freeflying: we do support self signed certificates
<davecheney> if that helps
<freeflying> davecheney, not sure
<freeflying> davecheney, I think default keystone charm doesn't provide such thing
<ashipika> hi all.. manual provisioning... when i bootstrap a host and look at /var/log/juju/machine-0.log i see the following repeating over and over:
<ashipika> worker: start "lxc-provisioner"
<ashipika> worker: exited "lxc-provisioner": no state server machines with addresses found
<ashipika> worker: restarting "lxc-provisioner" in 3s
<axw> ashipika: are you using the null provider?
<ashipika> yes (manual provisioning)
<axw> ashipika: sorry, it may sound like a dumb question - there are two parts to manual provisioning (one of which isn't supported). but you're not using that, so it's ok
<axw> anyway
<axw> which version of juju?
<ashipika> 1.17.0-saucy-amd64
<ashipika> axw: sorry.. total beginner with juju. love the idea so i try to follow the documentation for null provider.. i really do appreciate all the help
<axw> ashipika: no worries, just wanted to make sure I understand what you're doing
<axw> ashipika: would you mind pastebinning your log file? is it small enough?
<ashipika> axw: sure.. you want the machine-0.log or something else?
<axw> yes, machine-0.log please
<ashipika> axw: http://paste.ubuntu.com/6472787/
<ashipika> axw: just a stray thought.. during bootstrap i saw some problems with locale (python warnings)... which i believe is due to ssh-ing into a host..
<ashipika> axw: sorry sorry.. perl warning.. where is my head today..
<ashipika> perl: warning: Falling back to the standard locale ("C").
<axw> I don't think that's a problem
<axw> ashipika: dumb question- have you done a "juju status"?
<ashipika> environment: "null"
<ashipika> machines:
<ashipika>   "0":
<ashipika>     agent-state: started
<ashipika>     agent-version: 1.17.0.1
<ashipika>     dns-name: ubuntu.d.xlab.lan.
<ashipika>     instance-id: 'manual:'
<ashipika>     series: precise
<ashipika>     hardware: arch=amd64 cpu-cores=1 mem=987M
<ashipika> services: {}
<axw> had you done that before you pasted the log?
<axw> I ask because the act of doing "juju status" finalises the bootstrap process
<ashipika> oh.. no. i have not..
<axw> take a look at the log file now, it should have stopped logging that error
<ashipika> http://paste.ubuntu.com/6472815/
<axw> yikes, what is going on there
<ashipika> lxc-ls executable missing
<axw> indeed
<axw> not sure why it wants it
<ashipika> should i destroy the environment and try bootstrapping again with a new VM? just to see if i can reproduce the issue?
<axw> ashipika: there are some changes going on that will make this problem go away, but I suspect you could just "apt-get install lxc" on that machine for now to make it be quiet
<ashipika> axw: installing...
<axw> ashipika: I think it's just a matter of the manual provider not installing lxc (it shouldn't need to, but there's a bug there that will be fixed soon)
<ashipika> axw: yaay! Starting up provisioner task machine-0
<axw> cool :)
<ashipika> axw: now on to new frontiers.. adding new machines :)
<axw> good luck!
<ashipika> axw: oh.. stuck on an issue that the bootstrapped host needs a hostname that can be resolved in the DNS
<ashipika> all i have are IPs
<ashipika> dialing "wss://ubuntu.d.xlab.lan.:17070/
<axw> ashipika: we'll probably want a bug for that one
<axw> for now you'll probably have to hack /etc/hosts :(
<ashipika> is that a know bug?
<ashipika> i'm ok hacking /etc/hosts for now..
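The /etc/hosts hack amounts to one line on the machine running the juju CLI, mapping the bootstrap host's advertised name to its IP (the address here is a placeholder):

```
# /etc/hosts on the client machine; IP is a placeholder
192.168.1.10    ubuntu.d.xlab.lan
```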
<axw> ashipika: we don't have anything in for it at the moment. there's a vaguely similar one in that the CLI attempts to connect to the reverse lookup of bootstrap-host
<axw> which can fail for various reasons
 * ashipika does a little jig: Provisioned machine 1
<axw> woohoo :)
<ashipika> axw: trying to deploy juju-gui..  in status i get: agent-state-info: 'hook failed: "start"'
<axw> ashipika: pastebin machine-1.log please?
<ashipika> i see the error already.. :) again.. wss://ubuntu.d.xlab.lan... need a bit more /etc/hosts magic
<freeflying> ashipika, I'd rather you set up a local dns server
<ashipika> freeflying.. roger that... will restart everything from scratch.. maybe a good idea to put this into the documentation..
<freeflying> ashipika, and use ddns to update your dns record
<axw> I'll raise a bug and we will either document a requirement or change it to not require DNS
<freeflying> axw, dnsmasq worked with local provider to resolve dns name I remember
<mgz> morning!
<ashipika> axw: removed everything from the bootstrapped host..
<ashipika> tried bootstrapping again.. now i am again on : restarting "lxc-provisioner" in 3s
<axw> ashipika: did you do juju status again?
 * ashipika stupid
<ashipika> ok..
<ashipika> dns working.. but when i provision another machine i get:  http://paste.ubuntu.com/6472942/
<ashipika> the xmaas-1.d.xlab.lan is resolvable via dns
<axw> any errors in machine-0.log?
<ashipika> last log message on machine-0: juju.provisioner provisioner_task.go:243 machine 1 already started as instance "manual:xmaas-2.d.xlab.lan"
<ashipika> sda1: WRITE SAME failed. Manually zeroing..
<ashipika> oh.. and just before these two lines: WARNING juju.worker.addressupdater updater.go:219 cannot get addresses for instance "manual:xmaas-2.d.xlab.lan": no instance found
<ashipika> and juju status says: status missing for the machine-1
<axw> hold on... 37017... that's the mongo port
<axw> I think someone broke the code. how was it working for you before though? are you working off the source tree?
<axw> ashipika: ^^
<ashipika> go get -v launchpad.net/juju-core/...
<axw> I see
<ashipika> go install -v launchpad.net/juju-core/...
<ashipika> :)
<axw> ok, just a moment - you're going to have to patch a file manually I'm afraid
<ashipika> sure :)
<axw> ashipika: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/environs/manual/provisioner.go#L185
<axw> please locate that on disk, and modify StateAddrs to be APIAddrs
<axw> (only on that line)
<ashipika> on which host?
<axw> ashipika: on whichever host you built juju on
<ashipika> ok
<axw> afterwards, you'll have to rebuild juju and rebootstrap
<ashipika> L185: 		Addrs:    configParameters.APIAddrs,
<ashipika> correct?
<axw> let me just confirm before I waste your time
<axw> ashipika: yes that is correct
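The one-line patch being described, expressed as a diff against environs/manual/provisioner.go (the removed line is inferred from context: the agent was dialling the state/mongo address on port 37017 instead of the API address):

```
--- environs/manual/provisioner.go
+++ environs/manual/provisioner.go
@@ around line 185
-		Addrs:    configParameters.StateAddrs,
+		Addrs:    configParameters.APIAddrs,
```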
<ashipika> axw: you're not wasting my time.. you're helping.. thnx!
<axw> no problems
<axw> manual provisioning is my baby ;)
<axw> an ugly baby, but my baby nonetheless
<ashipika> damn.. still the same error.. maybe i did not rebuild the entire juju.. how do i clean the previous installation?
<ashipika> ok.. have to go to a meeting.. be back in 20m
<axw> ashipika: "go get -v launchpad.net/juju-core/..." is all you should need to do. make sure your target env is totally clean before reattempting. I may not be here in 20m, but I'll be back online at the same time tomorrow
<ashipika> axw: reinstalled, re-bootstrapped
<ashipika> still: machine-1 -> juju status: instance-state: missing
<ashipika> tried deploy of mongodb to machine-1: 'hook failed: "install"'
<davecheney> ashipika: juju ssh 1
<davecheney> less /var/log/juju/unit-*
<ashipika> ah, sorry.. destroyed my environment.. reinstalling VMs, retrying from 0
<aktau> Hey guys!
<aktau> Looking to parse some YAML in go with your goyaml package
<aktau> So to be flexible I unpack a yaml file into a map[string]interface{}
<aktau> But it appears goyaml decides to unpack hashes into map[interface{}]interface{}
<aktau> Which makes me unable to marshal it to JSON
<aktau> What would you guys recommend for me to get around this?
<ashipika> davecheney: on mongodb deploy -> HOOK ImportError: No module named yaml
<ashipika> davecheney: HOOK File: /var/lib/juju/agents/unit-mongodb-0/charm/hooks/install
<jcastro> sinzui, thanks for putting that OSX bash completion in the release, that's classy!
<sinzui> jcastro, np, I was desperate to get some code landed in anyone's project to raise my self-esteem
<bloodearnest> hey all. I'm hitting an issue with the lxc provider about git not being installed. I think I had this problem some time ago, and it turned out to be my ISP returning matches for invalid DNS
<bloodearnest> something in juju, I think, looks for a particular DNS name at some point in container start up? And does something if it's not found?
<marcoceppi> bloodearnest: this might also have to do with an apt proxy
<marcoceppi> do you have a proxy set up for your machine's apt?
<bloodearnest> marcoceppi: hmm so I do
 * bloodearnest has no memory of this place
<marcoceppi> bloodearnest: if your proxy on your machine is set up to read from 127.0.0.1 or another address
<marcoceppi> that address needs to be reachable in the containers
<marcoceppi> if it's not, apt will fail and git won't install
<marcoceppi> the local provider automatically inherits your proxy settings from the host machine
<marcoceppi> if you install squid-deb-proxy or another package, it may have automatically created the rules for you bloodearnest
<marcoceppi> either way, either update the rules so that lxc can use them or remove them and re-bootstrap
<bloodearnest> marcoceppi: does lxc provider still require an apt proxy?
<marcoceppi> it doesn't require a proxy at all
<marcoceppi> bloodearnest: it will simply re-use the one on your host machine, as most host machines with a proxy often have it because they can't access the archives directly
<marcoceppi> so if you have a caching service set up on your host machine, the rule is usually 127.0.0.1, that won't work inside LXC
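Concretely, the host-side setting marcoceppi describes is an apt snippet along these lines (the file name and port are the usual apt-cacher-ng defaults, assumed here):

```
# /etc/apt/apt.conf.d/01proxy on the host.
# Inside an LXC container, 127.0.0.1 is the container itself, so apt
# in the container can't reach the cache; the proxy must be given as
# an address the containers can route to instead.
Acquire::http::Proxy "http://127.0.0.1:3142";
```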
 * bloodearnest nukes apt-cacher-ng from orbit
<marcoceppi> no proxy is required, it's just a feature that exists in juju, where the local provider will inherit those settings
<bloodearnest> right
<marcoceppi> jcastro: we should probably document that caveat on the local provider page
<jcastro> huh
<jcastro> I am using a proxy and I don't have that issue
<marcoceppi> jcastro: depends on the address for the proxy
<stub> Which reminds me, I need to tune apt-cacher-ng to be more aggressive. apt is still the slowest part of spawning new lxc instances.
<bloodearnest> marcoceppi: ok, so I removed apt-cacher-ng altogether, but still get this problem
<bloodearnest> marcoceppi: I think it's probably related to my crappy ISP rerouting DNS
<bloodearnest> marcoceppi: hm, so I can resolve archive.ubuntu.com from inside the container
<bloodearnest> ah, it was a canonical vpn issue, it seems
<bloodearnest> unrelated question - juju-core doesn't like deploying from local symlinks (pyjuju was ok with that). Is there a workaround for this?
<jcastro> evilnickveitch, bundle doc MP incoming from me!
<jcastro> bloodearnest, ok so you have charms in a directory somewhere
<jcastro> and you have those symlinked?
<bloodearnest> jcastro: yeah, the specific case a mini test repository for a charm
<jcastro> huh I didn't even know we supported that in the first case
<jcastro> can you file a bug on it on juju-core?
<bloodearnest> jcastro: can do
<jcastro> I am not sure if we supported symlinks on purpose or by accident, heh
<bloodearnest> jcastro: it may be a security feature - I get a message like 'ERROR cannot bundle charm: symlink "." links out of charm: ".." '
<bloodearnest> yeah, we used to use it for testing/dev with pyjuju; with gojuju we've had to move to developing out of a 'precise' parent dir
<bloodearnest> which is cumbersome
<bloodearnest> jcastro: it's particularly useful when developing a subordinate charm, as you have to have a real charm as well in order to test it at all
<jcastro> well, you had me at "we use it", so if it's useful for you then I figure might as well file it
<bloodearnest> so you can have both your subordinate and a dummy test charm in a local repository
<jcastro> I don't like forcing it to have series in the path anyway. *shakes fist*
<jcastro> juju deploy <any directory and who cares about the the structure>
<bloodearnest> jcastro: +1000
<bloodearnest> jcastro: so I think it may be related to https://bugs.launchpad.net/juju-core/+bug/1129319
<_mup_> Bug #1129319: Local charm deployment not working if symlinks are used <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/1129319>
<jcastro> hey so I guess we can just reopen this
<jcastro> what version of juju core are you on?
<bloodearnest> jcastro: 1.16.3-saucy-amd64
<jcastro> ok, leave a comment there and I'll reopen it!
<bloodearnest> jcastro: will do thanks
<evilnickveitch> jcastro, ok, i fixed it, should go live in 30 minutes
<jcastro> evilnickveitch, heh, what was wrong with it?
<evilnickveitch> jcastro, it was quite a good effort for you! I just rewrote some bits in English
 * jcastro claps slowly
<evilnickveitch> jcastro, i liked the video, but i kept thinking you were going to go "ta dah!" at some point
<jcastro> I was going to make it an animated gif
<jcastro> but the results were crappy
<jcastro> I had intended it to not have audio at all
<evilnickveitch> there will be no GIFs in the docs!
<zradmin> does anyone know if the havana release of openstack moved neutron into the nova-ccc charm instead of it being in the quantum gateway charm?
<negronjl> zradmin, still the quantum-gateway charm
#juju 2013-11-26
<zradmin> negronji: odd I cant get it to work with that at all... it never seems to create the database in mysql
<zradmin> negronji: I get the following in the debug log when trying to set it up "This charm doesn't know how to handle 'shared-db-relation-joined'
<zradmin> negronjl: is there updated documentation anywhere? I'm still following the guide at https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<negronjl> zradmin, which charm is giving you the  error ?
<zradmin> negronl: quantum-gateway when I'm connecting it to the mysql charm... I finish connecting it to rabbitmq and nova-ccc but the service never seems to come online either, i just get 503 errors
<zradmin> negronjl: sorry i keep mispelling your username :)
<negronjl> zradmin, no worries ...
<negronjl> zradmin: that's interesting ... i don't get those issues at all ...
<negronjl> zradmin: I assume that you are trying to deploy this thing in ha correct ?
<zradmin> correct
<sarnold> zradmin: (most irc clients let you type one or two letters of a nickname then hit 'tab' to finish it up :)
<negronjl> zradmin, let me check and do some digging
<zradmin> sarnold: sweet, thx for the tip
<negronjl> zradmin: can you pastebin your relations and your config?  Maybe i can find something odd there.
<negronjl> zradmin, how are you relating mysql to quantum ?
<zradmin> negronjl: http://pastebin.ubuntu.com/6476662/
<negronjl> zradmin: juju add-relation quantum-gateway mysql-hacluster ?
<negronjl> zradmin, Are you using juju-deployer ?
<negronjl> zradmin: it looks like it
<zradmin> negronjl: following the guide, its this command :juju add-relation quantum-gateway mysql
<zradmin> negronjl: nope just did a quick export out of juju gui to get the relations and the rest is all my local.yaml file
<negronjl> zradmin, ok
 * negronjl reads
<negronjl> zradmin, what revision of quantum-gateway are you using ?
<negronjl> zradmin, the error you are getting tells me that your quantum-gateway charm is either broken or out of date
<zradmin> negronjl: i am pulling directly from the charm store... v11 i think?
<negronjl> zradmin, Also change your openstack-origin to "openstack-origin: cloud:precise-havana/updates"
<zradmin> negronjl: ok im making the edits now
<negronjl> zradmin, juju remove-relation quantum-gateway mysql and then juju add-relation quantum-gateway mysql
<negronjl> zradmin, just to retry the thing ( and hope )
<zradmin> negronjl: I'm actually redeploying the quantum charm from scratch just so its clean
<negronjl> zradmin: ok
<negronjl> zradmin, I can deploy it all on my testing environment
<negronjl> zradmin, I'll check back here later but, for now, I have to go.
<zradmin> negronjl: thanks for the help, I just finished getting it connected to mysql and this was the log file http://pastebin.ubuntu.com/6476811/ - seems to have the same error
<davecheney> zradmin: how did you relate to mysql:shared-db
<davecheney> there is no relation called that
<davecheney> or no implemention for that relation in the charm
<zradmin> just running the juju add-relation quantum-gateway mysql command
<davecheney> oh dear
<davecheney> which version of the mysql charm have you deployed
<zradmin> mysql-29
<davecheney> zradmin: ok, while I look into this
<davecheney> you should remove that relationship
<davecheney> and do
<davecheney> juju add-relation quantum-gateway mysql:db
<zradmin> ok I'll give that a shot
<davecheney> zradmin: mysql-29 implements shared-db-relation
<davecheney> i think you may have an old version of the charm somehow
<davecheney> even so, you probably don't want to relate to that relation endpoint
<davecheney> you want mysql:db
<zradmin> is there any updated documentation for setting up the test environment?
<zradmin> it seems like the publicly available wikis are very out of date
<davecheney> zradmin: do you mean the local provider ?
<zradmin> davecheney: so when i try and run juju add-relation quantum-gateway mysql:db it tells me "ERROR no relations found"
<davecheney> zradmin: my mistake, please try
<zradmin> mysql:shared-db?
<zradmin> looking at all the other charm relations that seems to be what they're connecting to
<davecheney> yeah, just checked out quantum-gateway
<davecheney> ok, i think we're back to mysql
<davecheney> it looks like mysql-29 implements shared-db properly
<davecheney> i cannot explain why your deployment of that charm is acting weirdly
<davecheney> how did you deploy the mysql charm ?
<zradmin> juju deploy --config local.yaml -n 2 mysql
<zradmin> then juju deploy hacluster mysql-hacluster
<zradmin> added the relationship to ceph and then to the mysql-hacluster
<davecheney> zradmin: i'm sorry, i'm out of ideas
<davecheney> if that is mysql-29 then it should implement shared-db
<zradmin> yeah i'm fairly confused as well
<zradmin> is there a way to manually retrigger that specific hook?
<davecheney> zradmin: is the unit in error ?
<zradmin> nope.... thats the frustrating part - everything is started
<davecheney> zradmin: i'm sorry, there is no way to do this if the unit is not in error
<davecheney> you could try
<davecheney> juju resolved --retry $UNIT
<davecheney> but I suspect it will tell you there is nothing to do
<zradmin> yeah it tells me the unit is not in an error state so it wont do anything
<zradmin> I've tried destroying the relationship a few times and rebuilding it since then... but each time it still says that mysql doesnt understand the shared-db relationship
<zradmin> which is pretty strange considering that glance/nova/keystone etc. all have databases and are working fine
<davecheney> zradmin: can you do juju status $MYSQL_SERVICE
<zradmin> here you go http://pastebin.ubuntu.com/6477065/
<davecheney> zradmin: could you get me an ls -al of the charm hooks dir
<davecheney> from memory it's something like
<davecheney> /var/lib/juju/agents/unit-0/hooks
<davecheney> it'll be on machine 4
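The inspection davecheney asks for can be done from the client with `juju ssh`; the paths below follow the from-memory ones quoted above and may differ by juju version (the agent dir normally includes the unit name, e.g. unit-mysql-0, so treat these as a sketch):

```shell
# list the hooks the deployed copy of the charm actually ships
juju ssh 4 'ls -al /var/lib/juju/agents/unit-mysql-0/charm/hooks'
# and pull the relevant bit of the unit log
juju ssh 4 'grep "shared-db" /var/lib/juju/unit-mysql-0.log | tail -n 20'
```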
<zradmin> http://pastebin.ubuntu.com/6477095/
<davecheney> zradmin: hmm, truncated but looks right
<davecheney> on that same machine can you have a look in /var/lib/juju/unit-0.log
<davecheney> for the bit about 'charm doesn't implement hook "shared-db-relation-joined" [sic]
<zradmin> here it is http://pastebin.ubuntu.com/6477110/
<davecheney> zradmin: aah
<davecheney> i think that might have been a red herring
<davecheney> the real issue appears to be
<davecheney> 2013-11-26 02:35:45 INFO juju juju-log.go:66 mysql/0 shared-db:71: MySQL service is peered, bailing shared-db relation as this service unit is not the leader
<davecheney> in fact, i don't think that is an error
<davecheney> if you look at the log on mysql/1
<davecheney> you'll see (this time) it was the leader
<zradmin> hmmm i thought so too at first but on the quantum-gw i can't find anything in neutron.conf relating to the mysql db backend
<zradmin> or api-paste
<zradmin> so this is what i get when i try and run a neutron command at the moment... http://pastebin.ubuntu.com/6477139/
<zradmin> the neutron.conf file on the gateway only has the rabbit host defined... but the neutron.conf on the nova-ccc has a sqllite database specified for some reason.... its the only reference to a database for neutron i can find. http://pastebin.ubuntu.com/6477146/
<zradmin> davecheney: I've tried standing up this environment 4 or 5 times since the havana release came out and i run into this wall each time.... its driving me crazy :)
<davecheney> zradmin: i'm really sorry to hear that
<davecheney> this is supposed to be easy
<davecheney> did you find anything in the mysql/1 log ?
<zradmin> i see it setting up nova.... and it is populating a quantum password field
<zradmin> but i also have the 2013-11-26 02:35:42 INFO juju juju-log.go:66 mysql/1 shared-db:71: This charm doesn't know how to handle 'shared-db-relation-joined'.
<zradmin> davecheney: yeah i love the power of juju and its much easier setting it up than tooling everything by hand... i just cant seem to figure out why those pices are giving me trouble. I do appreciate the help for sure!
<davecheney> zradmin: thanks for the log
<davecheney> that error isn't an error
<davecheney> see the previous discussion
<davecheney> *but* does mysql/1 also say that it is bailing out because it isn't the leader ?
<zradmin> nope no bailing that i can see
<davecheney> zradmin: ok, here is what I think is happening
<davecheney> 1. quantum-gateway is related to mysql:shared-db
<davecheney> mysql is supposed to set some relation settings back to quantum-gw
<davecheney> which in turn will make it switch from sqlite to using that mysql db
<davecheney> but mysql isn't sending any relation settings (dunno why yet)
<davecheney> and that is why quantum is using sqlite
<zradmin> and not bringing the api online....
<davecheney> it is possible that a chain of relation hooks has to run to get things properly configured
<zradmin> it should be logged in the /hooks directory right?
<davecheney> no, nothing is logged in the hooks directory
<davecheney> try removing and readding the relationship
<zradmin> by logged... i meant the config its trying to push
<davecheney> zradmin: hmm, not really
<davecheney> relation settings are only visible to hook commands via the relation-get hook command
<zradmin> is that outside of the normal juju cli?
<davecheney> yeah, there is no way to see relation settings unless youre using debug hooks in a hook context
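A sketch of what that looks like in practice: `juju debug-hooks` drops you into a tmux session on the unit, and the hook tools are only on $PATH inside a hook context (the relation id 71 here is the one from this session):

```shell
# from the client: attach and wait for the next hook on the unit
juju debug-hooks quantum-gateway/1
# inside the session, once e.g. shared-db-relation-changed fires:
relation-ids shared-db                    # lists relation ids, e.g. shared-db:71
relation-get -r shared-db:71 - mysql/1    # dump everything mysql/1 published
```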
<zradmin> same issue with removing and readding the relationship
<zradmin> ah so the unit would have to be stuck on a hook in order to see what its trying to do
<davecheney> zradmin: i'm not sure stuck is the right thing to say
<davecheney> but i'd guess that both mysql units think they are not the master
<davecheney> so *neither* are taking action and setting the relation settings to quantum-gw
<davecheney> can you past the log of mysql/1 ?
<zradmin> yeah I'll grab the end of it
<davecheney> ta
<zradmin> http://pastebin.ubuntu.com/6477255/
<zradmin> i know mysql is only running on /1 currently so it should be the master
<davecheney> ok, so i can see it doing relation set
<davecheney> what does the quantum-gw log say ?
<zradmin> this looks like the relevant section here.... says it is missing the quantum/nova passwords
<zradmin> http://pastebin.ubuntu.com/6477270/
<zradmin> and the database_host information apparently
<davecheney> zradmin: ahh
<davecheney> right
<davecheney> here is what is going on
<davecheney> 2013-11-26 02:35:49 INFO worker.uniter.jujuc server.go:108 running hook tool "relation-get" ["--format=json" "-r" "shared-db:71" "quantum_password" "mysql/0"]
<davecheney> 2013-11-26 02:35:49 DEBUG worker.uniter.jujuc server.go:109 hook context id "quantum-gateway/1:shared-db-relation-changed:4235752611937240132"; dir "/var/lib/juju/agents/unit-quantum-gateway-1/charm"
<davecheney> 2013-11-26 02:35:49 INFO worker.uniter.jujuc server.go:108 running hook tool "relation-get" ["--format=json" "-r" "shared-db:71" "nova_password" "mysql/0"]
<davecheney> ^ quantum has decided that ONLY mysql/0 has the relation data it wants
<davecheney> *but* mysql/1 is the unit which provided the relation data
<zradmin> that makes a lot of sense
 * davecheney disappears down a charm helpers rabbit hole
<davecheney> zradmin: short version
<zradmin> doesn't it try and get it from /1 as well though?
<davecheney> deploy one mysql unit
<zradmin> 2013-11-26 02:35:49 INFO worker.uniter.jujuc server.go:108 running hook tool "relation-get" ["--format=json" "-r" "shared-db:71" "db_host" "mysql/1"]
<davecheney> and it'll work
<davecheney> yeah
<davecheney> ok
<davecheney> that is weird
<davecheney> so what is after line 33
<zradmin> next batch http://pastebin.ubuntu.com/6477293/
<zradmin> looks like its just writing the config files
<zradmin> checked the nova.conf on the host.... and it has its db line sql_connection=mysql://nova:vSk0czUGLt20mXP1@10.10.32.4/nova
<davecheney> zradmin: ok, do you want to wait til the unit reaches started state
<davecheney> all these logs are truncated
<zradmin> is there a way to get more verbosity?
<davecheney> zradmin: most people complain they are too verbose :)
<davecheney> that's all you get i'm afraid
<davecheney> is the unit in started state ?
<zradmin> yeah all of them are
<davecheney> right
<davecheney> does quantum-gw work ?
<davecheney> it looks like it's missing a restart action somewhere
<zradmin> nope i just get that 503 service not found error every time i try and query neutron
<davecheney> zradmin: i think quantum-gw runs under upstart
<davecheney> can you go to the unit and do a service restart
<davecheney> on the service
<davecheney> maybe that will get it going
<zradmin> ok that service doesn't exist on the quantum-gateway nodes
<zradmin> looks like its called neutron-server and exists on the nova-ccc
<zradmin> davecheney: i restarted the quantum-gw nodes... but still no dice
<freeflying> zradmin, is yours ha deployment?
<zradmin> freeflying: yup been following the ha guide posted here: https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<zradmin> adjusted for havana of course
<freeflying> zradmin, does keystone work properly
<freeflying> zradmin, also 10.10.32.4 is the vip of your mysql cluster?
<zradmin> freeflying: yup, keystone, glance, nova, and cinder are fine
<zradmin> freeflying: correct
<zradmin> freeflying: only neutron and horizon are currently not working... horizon is because of neutron though :)
<freeflying> zradmin, any log from the node
<zradmin> which node
<freeflying> on your quantum
<zradmin> http://pastebin.ubuntu.com/6477270/, http://pastebin.ubuntu.com/6477293/
<freeflying> zradmin, you need login to the node, check it under /var/log/neutron
<freeflying> zradmin, also make sure your can connect to mysql from quantum node
<zradmin> freeflying: ok workin on it
<zradmin> freeflying: in var/log/neutron which agent do you want
<freeflying> zradmin, just some clues for your debugging, you need check from there
<zradmin> freeflying: ok trying to connect to mysql its telling me mysql is not installed on the quantum node
<freeflying> zradmin, by default no mysql client, you need install it
<zradmin> freeflying: ok just wanted to make sure I wouldnt mess something up by installing it
<freeflying> zradmin, no
<zradmin> freeflying: yes, i can connect to the mysql server from the quantum node
<zradmin> freeflying: ok... so there is a quantum database/user in mysql, but no neutron database or user as specified in the nova-ccc charm
<zradmin> davecheney: wasnt there a hook that rewrote all of the quantum references to neutron?
<davecheney> zradmin: i'm not sure
<davecheney> i'd have to check
<zradmin> davecheney: ok its hooks/charmhelpers/contrib/openstack/neutron.py - it looks like most everything from nova-compute/cc and quantum-gw is installing neutron properly... but the database/users created are still named for quantum
<davecheney> zradmin: cool
<davecheney> so we've eliminated mysql
<davecheney> it's down to quantum-gateway charm
<freeflying> zradmin, in grizzly, its call neutron already in mysql
<zradmin> freeflying: all my grizzly deploys stood up with quantum
<freeflying> zradmin, interesting, here it s neutron :)
<zradmin> freeflying: what were you using for the source/openstack-origin?
<zradmin> freeflying: mine was cloud:precise-grizzly
<freeflying> zradmin, then one from cloud archive
<zradmin> freeflying: same here
<zradmin> davecheney: in the next couple of days I'll stand up a second mysql on a single node and try to attach quantum-gw to that and see what database it creates. thanks for all the help tonight
<davecheney> zradmin: you're welcome
<davecheney> thanks for your persistence
<zradmin> davecheney: hopefully I'll have something to help focus the investigation further :)
<Spacemonkey> Greetings, fellow jujulians. Er, jujumians. Ah, oh whatever.
<Spacemonkey> Anybody here know how to share juju setups for AWS across multiple machines?
<Spacemonkey> Looks like just copying the .juju folder is not enough, my developers can create/deploy/destroy but not ssh.
<davecheney> Spacemonkey: yes, they all need to share the same ~/.ssh/id_rsa
<Spacemonkey> Oh, now that's going to get complicated. LOL
<Spacemonkey> Ok thanks dave, glad to hear what I need to do.
<davecheney> Spacemonkey: the back story is when you bootstrap, the ssh key of the person that bootstrapped is copied to the ubuntu user on all the workstations
<davecheney> err, not workstations
<Spacemonkey> How about using juju for different aws accounts? Do I just create a second environment, or do I have to swap out .juju folders every time I switch?
<davecheney> machines in the environment
<davecheney> Spacemonkey: there are a few options
<davecheney> one is to set $JUJU_HOME
<davecheney> which does the same as switching .juju
<davecheney> the second is you can put your AWS creds into your .juju/environments.yaml
<davecheney> rather than letting them flow in from your environment vars
<Spacemonkey> Ok thanks that makes sense. Perhaps $JUJU_HOME will be the easiest option for me, and just have .juju-personal and .juju-work and .juju-client1 and so on.
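Spacemonkey's layout can be mocked up in plain shell. Only $JUJU_HOME itself is real juju behaviour; the account names and the demo path are hypothetical:

```shell
#!/bin/sh
# Sketch of per-account juju config dirs (names are hypothetical).
# juju reads $JUJU_HOME instead of ~/.juju when the variable is set.
set -e
base="${TMPDIR:-/tmp}/juju-home-demo"   # stand-in for $HOME in this demo

for acct in personal work client1; do
    mkdir -p "$base/.juju-$acct"
    # each directory would hold its own environments.yaml with that
    # account's AWS credentials
    printf 'environments: {}\n' > "$base/.juju-$acct/environments.yaml"
done

# switch accounts by pointing JUJU_HOME at the right directory
JUJU_HOME="$base/.juju-work"
export JUJU_HOME
echo "using $JUJU_HOME"
```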
<Spacemonkey> My ssh key is being used for a bazillion things though - can I just swap out the ssh keys on all running instances for the ubuntu user?
<davecheney> Spacemonkey: you can also give the path to the ssh key in your environments.yaml
 * davecheney checks that
<Spacemonkey> I see access-key and secret-key, but nothing about ssh keys there.
<davecheney> actually, scratch that
<davecheney> i don't know what i was thinking
<Spacemonkey> Dang you got me all excited.
<Spacemonkey> Ok thanks for helping me figure this out dave.
<marcoceppi> Spacemonkey: ssh-authorized-keys is what you're looking for
<marcoceppi> you can only set it prior to deployment though
<marcoceppi> it's not really documented anywhere though
<gnuoy> jamespage, sorry to hassle you but do you have an idea when you might get a chance to look again at https://code.launchpad.net/~gnuoy/charms/precise/quantum-gateway/external-nets/+merge/194153 ? I'm switching projects at then end of the week and it'd be useful to know if it's likely to be re-reviewed before then
<jamespage> gnuoy, this week
<gnuoy> thanks
<Spacemonkey> marcoceppi: ssh-authorized-keys? Is that in environments.yaml somewhere or? Thanks for pointing me in this direction though.
<jcastro> <-- lunch
<dpb1> Hi -- how do I import a juju environment onto another machine now?  with pyjuju I could just connect, now I seem to need a CA certificate in the environment configuration?  Do I need to copy the .jenv file out of band?
<dpb1> hazmat: do you know ^?
<hazmat> dpb1, jenv has all you need
<hazmat> dpb1, its meant as distribution format for sharing, captures everything into a single file (certs, conn details, etc)
<dpb1> hazmat: ok.  That is the "official" way to do it?  or is there a command?
<dpb1> ok
<hazmat> dpb1, at the moment just share the file afaik, you also need a skeleton in env.yaml, but cli pulls preferentially from jenv over env.yaml config
<hazmat> actually not sure about the skeleton haven't tried it..
 * hazmat tries
<dpb1> hazmat: interesting...  the intersection and discrepancies between these files is confusing, as we already talked about.
<hazmat> dpb1, yeah it is.. so environments.yaml has to exist, but that's it.. it doesn't currently need any reference to the actual env that's in the jenv.
<dpb1> wow
<dpb1> so is the plan to migrate away from environments.yaml?
<hazmat> ie mkdir jhome && export JUJU_HOME=jhome && cp /old/home/environments/manual.jenv $JUJU_HOME/environments && juju init && juju status -e manual works.. even though there's no 'manual'  in environments.yaml
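hazmat's one-liner, spread out for readability (same paths and environment name as his example; the extra mkdir -p is an assumption in case the environments subdir doesn't exist yet):

```shell
mkdir jhome
export JUJU_HOME=jhome
mkdir -p "$JUJU_HOME/environments"
cp /old/home/environments/manual.jenv "$JUJU_HOME/environments/"
juju init                  # writes a skeleton environments.yaml
juju status -e manual      # works even with no 'manual' stanza in it
```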
<hazmat> kind of funky
<hazmat> dpb1, not that i'm aware of re migration
<dpb1> ok
<hazmat> dpb1, this is more cache and share afaics
<dpb1> til: don't rely on environments.yaml so much. :)
<hazmat> dpb1, well the issue is when the cache gets stale.. destroy-environment clears out the corresponding jenv.. but more importantly
<hazmat> environment.yaml is not live data anymore,
<hazmat> updates must go through get-env/set-env
<dpb1> ya, that is also new to me...  the file is misleading for a pyjuju user for sure.  Maybe it doesn't matter for new users.
<InformatiQ> what's up
<cmark> Does juju support moving a bootstrap node from one machine to another?  e.g. for fail-over/high-availability
<freeflying> cmark, not at this moment
#juju 2013-11-27
<hazmat> cmark, there's some plugins in the works re backup restore to different node
<cmark> hazmat: i'm working with an older release of juju (0.6 python).  I'm assuming these wouldn't apply?
<hazmat> hmmm
<hazmat> cmark, i think we're still targeting january re a data migration from pyju to goju
<hazmat> cmark, the tsuru guys did a small abstraction around the pieces
<cmark> aside from replicating the juju and zookeeper data, what else is there?
<hazmat> cmark, basically restoring is 3 things and one pyju caveat. updating provider object storage file for env in control bucket to point to new server instance id. updating machine zk/db records with new instance id, and updating extant agents with new state server address.. all roughly while the agents are shutdown (or restart post config).
<hazmat> the pyjuju caveat being there is some craziness in libzk that doesn't like spinning on reconnect, hence the down while reconfigure recommend.
<hazmat> er.. agents down
<cmark> hazmat: when you say "updating machine zk/db records with new instance id" are you referring to updating the IP addresses in the machine entries in the zk db?
<cmark> so that machine-0 points to the new bootstrap node
<hazmat> cmark, so machine-0 points to new instance id
<hazmat> from cloud provider
<hazmat> ip addresses will get repopulated in db on restart if changed
<axw__> ashipika: hey. I'm heading off in a moment, just wanted to let you know that someone fixed the DNS issue for the null provider
<axw__> at least for his use-case; I think it applies to yours too
<axw__> it's on trunk now
<ashipika> excellent! so i just update the code and recompile
<axw__> yup
<ashipika> thanks!
<axw__> nps - cya later
<ahasenack> hi guys,
<ahasenack> how do I get the private-address of this lds-quickstart/0 unit?
<ahasenack> http://pastebin.ubuntu.com/6483983/
<ahasenack> it used to be there, but after I gave this openstack instance a floating ip, the internal IP got replaced and I don't see it anymore
<ahasenack> in the status output
<ahasenack> the "public-address" was the internal IP before
<gnuoy> jamespage, I seem to be seeing the behaviour we briefly discussed at the cloud sprint where on a fresh deploy the quantum gateway seems to be wedged. I can see namespaces on the gateway that corresponds to routers that have been created but incoming and outgoing traffic seems to fail. Restarting the machine and recreating the router fixes this. I'll raise a bug but what debug info would you like to see ? iptables from the router namespace and ... anything
<gnuoy> else ?
<jamespage> gnuoy, can you ping the router IP address?
<gnuoy> jamespage, nope
<gnuoy> after a reboot and recreate I can
<jamespage> gnuoy, it points me a two things
<jamespage> one - something is not getting restarted when it should be
<jamespage> and the reboot fixes that
<jamespage> or two - something is broken in neutron - that flushed through with the rebiit
<jamespage> reboot
<ashipika> hi all.. looking at my "juju status" report.. the service always gets exposed on a certain dns name (public-address).. would it be possible to tie that to an ip? (for example: if one does not have a dns server running in a mini diy cluster)
<iri-> I'm doing an upgrade-charm and I see on the remote side INFO juju charm.go:56 worker/uniter/charm: downloading local:blah/blah from https://s3.amazonaws.. etc, however the contents of the thing it downloads is not the contents of what I'm specifying with --repository on my local disk. How can this be?
<iri-> wtf. I just tried again moving --debug from the left hand side of "upgrade-charm" to the right hand side. Then it worked. What the hell?
<kirkland> where can I "list all charms"?
<kirkland> I see a few highlighted charms at jujucharms.com
<rick_h_> kirkland: if you perform an empty search it'll load them all, after a while. https://jujucharms.com/fullscreen/search/?text=
<iri-> Now I just get "no new charm event" whenever I try to upgrade my charm. On my machine I see "writing charm to storage" and I see my revision number increment. But it isn't incrementing remotely. What gives?
<rick_h_> kirkland: if you're looking just for a quick list you can hit up http://manage.jujucharms.com/charms/precise for just precise/etc
<kirkland> rick_h_: thanks
<benji> kirkland: there's also http://manage.jujucharms.com/charms/precise
<benji> ah, you already were given that
<iri-> If I add a variable to config.yaml and "juju set --config <secret yamlfile> <service>", is there anything else I need to do beside editing the config.yaml in the charm?
<iri-> I'm currently getting "unknown option: MYVARIABLE" even though MYVARIABLE is in both file
<iri-> +s
<Tobias_____> how can i install juju on my own public server? (no cloud hoster)
<sarnold> Tobias_____: look at using the 'local' juju provider, which uses LXC to provide containers for each of the charms to execute in: https://juju.ubuntu.com/docs/config-local.html
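For reference, a minimal local-provider stanza of the kind the linked docs describe; the exact option set varies by juju version, so this is an illustrative sketch written to a scratch file:

```shell
# write a scratch environments.yaml with a local-provider stanza
cat > environments.yaml <<'EOF'
environments:
  local:
    type: local
    container: lxc          # each charm unit runs in its own LXC container
    default-series: precise
EOF
grep 'type: local' environments.yaml
```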
<Tobias_____> sarnold: tried to bridge it to use the static adress of the server instead of a local lxc container... it is currently recovering
<sarnold> Tobias_____: I believe it is difficult to provide services to other hosts on the network through the local provider -- it is difficult or impossible to assign each container a LAN-based IP -- but it is quite useful for testing
<sarnold> Tobias_____: ah, so you have already discovered this :(
<Tobias_____> jep, i try to find a solution to run juju on a dedicated server
<Tobias_____> it seems really useful to setup instances and stuff
<Tobias_____> just deploy something and connect mysql, expose, done
<sarnold> Tobias_____: one option is to try the ssh provider, and use .. the --to flag? .. to deploy everything to either hand-build lxc instances or similar. It isn't the same magic :( but it is something..
<Tobias_____> Any network bridging to eth0 distorts the server's static inet address and causes routing to fail
<Tobias_____> i tried starting lxc in an container with br0, bridged to eth0
<Tobias_____> that's why my server is currently being reinstalled
#juju 2013-11-28
<Luca__> Hi there. Does anybody know if charms are already available for saucy? I can't see any of them
<zradmin> davecheney: are you available at the moment? I created a stand alone mysql instance and tied the quantum-gateway charm to that... still the same behavior for me
<davecheney> zradmin: oh damn
<davecheney> zradmin: i'm here to help
<zradmin> davecheney: thanks! I'm using MAAS to deploy on and my test maas controller has been around since 12.04.2/juju 1.12 - through updates etc i have upgraded the controller and it is now on 12.04.3/juju 1.16.3 - i have destroyed the environment and the charms should be pulling directly from the charm store for each deployment. could the old maas controller possibly be an issue?
<davecheney> zradmin: not really sure
<davecheney> not a maas expert
<davecheney> but i'd recommend using the latest possible maas version
<davecheney> probably from the cloud archive
<zradmin> yeah its on there
<zradmin> im just wondering if somehow im getting an older version of the charm even though its pulling directly from the store
<davecheney> zradmin: no, that is not possible
<davecheney> that is
<davecheney> it is not possible if you destroyed the environment and rebootstrapped
<davecheney> as the cache is inside the environment
<zradmin> yeah its been rebootstrapped several times
<zradmin> is there a way to check the charm version on node 0?
<davecheney> zradmin: no
<davecheney> there is no charm deployed on machine 0
<davecheney> juju status will report the version of the charm downloaded and deployed from the store
<zradmin> ok so node0 doesnt store a cached copy of the charm then
<Luca__> davecheney: Do you know if there are charms available for saucy? I can't find any in the repository
<davecheney> Luca__: most of the charms are for the LTS releases
<davecheney> it's unlikely that we'll have a lot of saucy charms
<davecheney> Luca__: the series of the charm defines the series of the machine it is deployed on
<davecheney> so, while you might be running saucy on your desktop
<davecheney> you want to be deploying precise machines to run your services
<Luca__> davecheney: Actually I did not find any
<davecheney> that is why most of the charms are for precise
<sarnold> Luca__: I think only the precise charms are promulgated to the charmstore; I know I've seen non-precise charms but there's not many of them, and I don't know how to search them these days..
<Luca__> davecheney: Actually my intention was trying to deploy Openstack with 13.10 using charms. I am now ending up starting again with 12.04.03
<davecheney> Luca__: canonical only supports deploying openstack on our LTS releases
<zradmin> davecheney: I found this on node0 in /var/lib/juju/charmcache/cs_3a_precise_2f_mysql-29.charm  29 was the latest revision right?
<sarnold> davecheney: sadly you'd never know that from e.g. http://www.ubuntu.com/server/
<Luca__> davecheney: Thanks. Will try from there
<sarnold> "13.10! charms! openstack!" hehe
<Luca__> sarnold: Agree, from the web page impression is quite different
<zradmin> sarnold: the website definitely needs an update... takes at least a month just to figure out where to begin :)
<Luca__> sarnold: Right, that is why I started with 13.10
<sarnold> Luca__: plus you figure starting with 13.10 would give you a good jump on 14.04 LTS, right? :) you wouldn't be the first to hope so...
<Luca__> zradmin: same here, have been working on this for the last 15 days to figure out how to start
<zradmin> Luca__: this is the best publicly available document i have found so far https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<Luca__> davecheney: I have pinged jamespage trying to get some information about the status of openstack bundles, however he does not seem available
<zradmin> Luca__: but use the latest version of juju... not .7 that the guide states
<sarnold> Luca__: if he's from .us, he might already be enjoying thanksgiving holidays
<Luca__> zradmin: Yes thanks I am following that, though the requirements are unrealistic.... 28 servers is really excessive and I don't have a clear idea how to install different services on the same nodes
<davecheney> zradmin: looking at the charm store, mysql-29 is the latest
<Luca__> zradmin: I am now with 12.04.03 I believe I am using the latest juju, though need to check
<zradmin> Luca__: juju deploy $SERVICE --to $MACHINE#
<davecheney> Luca__: yeah, openstack isn't for the faint of heart
<davecheney> and juju won't give you a lot of help there
<Luca__> zradmin: so far I have been trying adding havana repository however I am getting a GPG error: http://ubuntu-cloud.archive.canonical.com precise-updates/havana Release: The following signatures couldn't be verified because the public key is not available:
<davecheney> as the default policy is one service unit per machine
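zradmin's `--to` example in context: the default policy is one service unit per machine, and `--to` (plain machine number, or an lxc: prefix) overrides it. Machine numbers and service names here are illustrative:

```shell
juju deploy mysql                  # default: gets its own machine
juju deploy juju-gui --to 0        # colocate on machine 0
juju deploy nagios --to lxc:0      # or isolate in an LXC container on machine 0
```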
<sarnold> oh man, this looks useful :)  https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<sarnold> wonder how I hadn't seen this one before
<davecheney> Luca__: i'm not quite sure how you got onto the 13.10 track
<davecheney> charms always dictate the series of the machine they are deployed on
<davecheney> we only have openstack charms for precise
<davecheney> so the machines deployed will be precise machines
<Luca__> davecheney: you are right, it is not the easiest stuff to deploy
<zradmin> davecheney: the later jujus support multi service deployment using lxc containers right?
<Luca__> sarnold: That link is unfortunately quite outdated... Not even quantum in the deployment
<sarnold> Luca__: oh. drat.
<Luca__> sarnold: I would not advise you following that unless you want to be really behind
<davecheney> zradmin: juju 1.16.x (and 1.14 i think) supports lxc containers
<davecheney> but I would be very surprised if our openstack charms accept being deployed inside an lxc container
<davecheney> as usual, networking is the issue with containers
<davecheney> i think maas + lxc + openstack is possible, but not tested
<zradmin> davecheney: nah i havent used it for those but i do use it for things like nagios/juju-gui
<Luca__> davecheney: I started with 13.10 then followed several docs, and looked for openstack charms for saucy, which were obviously not available. Ended up downloading those for precise, which did not work in 13.10, and now starting again with 12.04.03... This has been the path in the last 15 days
<zradmin> for the openstack api services im running them in their own vms in proxmox
<zradmin> eth0 is on one set of switches, while eth1 is on the dmz side for all of my machines (physical and virtual)
<Luca__> zradmin: were you able to deploy openstack with ubuntu?
<zradmin> Luca__: grizzly was the closest i came to having it all up and running, but now i have an issue with neutron not coming up properly
<davecheney> Luca__: i'm sorry you got so sidetracked
<davecheney> i'm not sure what you mean by download
<davecheney> juju deploy mysql
<davecheney> will deploy mysql on the current LTS
<davecheney> we try not to make it any more complicated than that
<zradmin> davecheney: have most of the deployments you've seen been setup as HA? or are people just setting it up as single nodes?
<davecheney> zradmin: always ha
<zradmin> thats what i thought
<davecheney> zradmin: nobody wants an unreliable hypervisor
<zradmin> is the guid similar to the one posted on the wiki or is there any updated documentation/configs we can try to follow
<zradmin> lol for sure :)
<davecheney> guid ?
<davecheney> guide
<Luca__> zradmin: did you deploy with charms?
<zradmin> yeah guide sorry for my typo
<zradmin> Luca__: yes! it makes it much easier than configuring by hand when paired with MAAS. I can deploy a test environment across 28 servers in 2 hours easily
<sarnold> two hours? zounds :)
<Luca__> davecheney: Basically I created an environment.yaml file with default-series: precise instead and not saucy, even though I was on 13.10. I as well downloaded locally charms but still for precise. This is what I meant by downloading them
<zradmin> sarnold: and then spend the next two weeks trying to figure out why neutron isnt working :(
<Luca__> zradmin; As far as I know jamespage should be working on HA deployments and bundles
<Luca__> zradmin: For production environment HA is mandatory... otherwise useless
<davecheney> Luca__: it's even easier
<davecheney> remove the default-series config option
<davecheney> you don't need it
<sarnold> zradmin: oh man, that's annoying. :/ I know nearly nothing of the whole environment, and it seems unlikely I'll ever own enough machines to really give it all a try...
<Luca__> davecheney: got it. In any case now I am with 12.04.03 will start from there. So far I could create a bootstrap node, now will start with the HA deployment. Unfortunately I am having some issue setting the Ubuntu Cloud Repository. It is complaining about public key not available
<davecheney> Luca__: i'm sorry, i don't quite understand
<davecheney> you don't need to do anything
<davecheney> juju does this
<Luca__> davecheney: You dont need to use Ubuntu Cloud Repository? http://www.ubuntu.com/download/cloud/cloud-archive-instructions
<Luca__> zradmin: Did you follow HA Guide?
<sarnold> Luca__: this page: https://wiki.ubuntu.com/ServerTeam/CloudArchive   has the commands e.g. sudo add-apt-repository cloud-archive:havana
<davecheney> Luca__: if you were going to install opensack by hand, maybe
<Luca__> zradmin: In any case as I said earlier 28 servers it is kind of unrealistic. Really need to scale down
<davecheney> but the charms will do this themselves on the machines that they spin up
<sarnold> Luca__: add-apt-repository automatically retrieves the key you need from launchpad
<Luca__> davecheney: got it
<Luca__> davecheney: basically juju would look into the correct archive and use havana packages if I understand correctly?
<Luca__> sarnold: Thanks, I was there now :)
<zradmin> Luca__: yeah i followed the ha guide and amended it when havana was released because i wanted to get past the neutron rename. i have 16 machines to play with in a blade enclosure so my plan has been to use 3 for ceph, and virtualize all the api services between 2 others... after that its all compute-nodes
<davecheney> Luca__: not juju, the charms
<davecheney> the charms contain all the logic
<davecheney> juju is just a workflow engine
<Luca__> zradmin: Have you been able to deploy several services into one blade? with the --to flag?
<zradmin> well in a blade each blade (we're using half heights) is its own server
<zradmin> ^blade enclosure
<Luca__> davecheney: Yes, sorry, I was thinking about charms and wrote juju ...
<Luca__> zradmin: An enclosure comes at most with 16 blades, therefore I need to deploy some services on a single server. However the HA guide requires 28 servers. How did you move from 28 to 16?
<zradmin> i use proxmox(open source hypervisor using kvm) on 2 of the blades and have vms for each of the services on both
<zradmin> once that's running properly we can scale the apis to new nodes as needed thanks to juju add-unit
<Luca__> zradmin: Dont know the details but I grasped the idea
<sarnold> zradmin: suddenly a two hour deploy makes sense, you're putting a ton of work on two little blades :)
<zradmin> sarnold: they're pretty big, but the deploy is slowed down mainly by me waiting for each service to finish coming online with all the relationships before provisioning the next one
<davecheney> zradmin: are you doing that because you think it will fail ?
<davecheney> or just to see it working ?
<Luca__> sarnold: As I dont have a whole chassis for myself I was thinking using couple of servers and virtualize there :)
<sarnold> zradmin: if you don't mind, what capabilities do each blade have? how much does the whole enclosure and blades cost? :D
<Luca__> sarnold: An enclosure is quite expensive
<Luca__> sarnold: A single half-height blade runs about $3,000
 * davecheney never understood why people use blades
<Luca__> The enclosure is more than $10,000, plus switches and OAM
<davecheney> they use more power
<davecheney> cost more than regular machines
<davecheney> and since when was data center space a bigger problem than watts/rack
<Luca__> davecheney: well, it is good if you want to integrate servers
<sarnold> Luca__: ouch :) okay scratch that then, hehe
<zradmin> sarnold: we picked up the chassis fully loaded (used on ebay!) for around 11k, dell m610s w 2/8 core procs & 48GB RAM
<Luca__> but definitely not worse for an openstack deployment. besides you would have problems with midplane bandwidth
<sarnold> davecheney: blades may fit better into a house :) hehe
 * davecheney stands by his assertion that blade chassis are a false economy
<davecheney> sarnold: in AU, most blade chassis need 3 phase power
<Luca__> zradmin: Yes, if used that could be a reasonable price
<davecheney> that isn't available in my condo
<zradmin> davecheney: agreed.... unless you are renting a half rack from a data center and they forgot to put power usage in the lease agreement! :D
<sarnold> davecheney: hehe, yeah, I wound up scratching "buy a used thumper" off my todo list once I saw three-phase requirement. d'oh )
<zradmin> lol
<davecheney> the thumper doesn't need 3 phase
<davecheney> just 200v which isn't commonly available in the US
<sarnold> no? just 220 without the phases?
<Luca__> :)
<sarnold> just laundry drying machines... hehe
<Luca__> zradmin: Did you use juju 1.16?
<Luca__> I just found out that the currently installed version on my 12.04.3 box is 0.5+bzr531-0ubuntu1.3
<Luca__> pretty outdated...
<zradmin> Luca__: 1.16.3
<sarnold> 0.5??
<Luca__> Version: 0.5+bzr531-0ubuntu1.3
<zradmin> sudo add-apt-repository ppa:juju/pkgs
<zradmin> sudo apt-get update
<zradmin> etc
<Luca__> yep
<Luca__> kind of surprised too about the version....
<davecheney> Luca__: sorry about that
<davecheney> LTS rules mean we cannot change the version of juju in precise
<Luca__> I already had a bootstrap node, need to restart it all
<Luca__> davecheney: no worries, I understand
<davecheney> https://juju.ubuntu.com/docs/
<davecheney> install instructions are here
<Luca__> At least I am getting few good infos from this chat!
<sarnold> well I'll be, I thought for sure precise had shipped with 0.6...
<sarnold> no wonder "install juju from the ppa" was always step #1 :)
<Luca__> :)
<Luca__> zradmin: Strange... I added the repository but installed 0.7...
<Luca__> Version: 0.7+bzr628+bzr633~precise1
<sarnold> Luca__: https://juju.ubuntu.com/docs/ says to use "ppa:juju/stable
<Luca__> yes...
<sarnold> not "ppa:juju/pkgs" -- the stable ppa has the 1.16
<Luca__> and juju-core
<Luca__> not juju
<Luca__> yep, found now, sorry :(
<sarnold> see e.g. https://launchpad.net/~juju/+archive/stable
<zradmin> yeah the ppa:juju/stable is the right one my bad
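Condensed, the install that lands juju-core 1.16.x on precise (per the docs URL above) is:

```shell
sudo add-apt-repository ppa:juju/stable   # not ppa:juju/pkgs
sudo apt-get update
sudo apt-get install juju-core            # the package is juju-core, not the old "juju"
juju version
```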
<Luca__> now it's right, no worries
<Luca__> Is there a way to select a bootstrap node?
<zradmin> Luca__: not to my knowledge, are you using maas?
<Luca__> yes
<Luca__> I thought so
<zradmin> Luca__: i remove my vms from maas when i rebuild the environment and add them one at a time so i know which service is where
<Luca__> You are right, this is a good idea
<Luca__> Have you tried using LXC containers with Juju?
<zradmin> just for small things like juju-gui
<zradmin> i usually put that on node 0
<Luca__> Do you have any documentation to point at?
<zradmin> for which piece?
<Luca__> I dont have any experience with LXC and Juju so I was wondering if you followed any document
<Luca__> zradmin: how many networks did you define for your openstack deployment ?
<zradmin> Luca__: nope, no document; just the internal one has been defined so far... they used to have the ext-net configured in the nova-cc charm but it looks like they took it out
<Luca__> zradmin: what did you use for the monitor-secret under ceph.yaml? To me it looks like the way of getting this secret is kind of recursive
<Luca__> monitor-secret: a ceph-generated key used by the daemons that manage the cluster to control security. You can use the ceph-authtool command to generate one: ceph-authtool /dev/stdout --name=mon. --gen-key
<zradmin> yup thats it
<Luca__> What did you configure then?
<zradmin> i generated a key like that
<zradmin> but i think if you leave it undefined it autogenerates one as well
<Luca__> zradmin: I am not sure I'm getting what you mean... You need ceph-authtool to generate the key, but you don't have ceph-authtool yet as you are supposed to install it with the charm. Besides, the documentation related to this charm says this is a mandatory parameter
<zradmin> install the ceph tools on your maas controller
<zradmin> then you can generate it :)
<Luca__> Thanks
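A sketch of zradmin's workflow (it assumes ceph-common is the precise package shipping ceph-authtool, and that an fsid is set alongside the secret; the awk filter is there because ceph-authtool prints a small keyring section rather than the bare key):

```shell
# Generate a monitor secret on the MAAS controller, then feed it into
# ceph.yaml before deploying the ceph charm.
sudo apt-get install -y ceph-common
# ceph-authtool emits "[mon.]" and a "key = ..." line; keep only the key.
MON_SECRET=$(ceph-authtool /dev/stdout --name=mon. --gen-key | awk '/key =/ {print $3}')
cat > ceph.yaml <<EOF
ceph:
  monitor-secret: ${MON_SECRET}
  fsid: $(uuidgen)
EOF
juju deploy --config ceph.yaml -n 3 ceph
```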
<stryderjzw> Hi, I am running the destroy-relation command, but nothing is happening to the services when I run juju status.  Anyone seen this or have a suggestion on how to debug?
<freeflying> will this channel be proper to ask question related charm? or do we have dedicated one for it
<sarnold> freeflying: this is good, or you can also use askubuntu.com or the mail list
<iri-> How do I make juju aware of changes to the IP/DNS in AWS?
<SuperMatt> using the power of juju, can I create linux containers on *another* machine?
<SuperMatt> I know this isn't real juju, but can someone help me with an openstack question?
<X-warrior`> Why does the "juju resolved" doesn't fix/remove this message: 'hook failed: "cluster-relation-joined"' ?
<melmoth> X-warrior, no idea, but what about adding a --retry ?
<melmoth> so it ll run the failed hook once again
<melmoth> if that fail, i guess the next step is juju debug-hooks involved/unit , and run the hook manually to see what the problem is
<X-warrior`> that sux
<X-warrior`> I can't mark it as resolved, so I can't destroy the service, neither the machine
<X-warrior`> sux
<X-warrior`> it should have an option
<X-warrior`> --force to force doing what you want, for example, if you're deleting a service and use force, it will remove ignoring the current state or something similar...
<X-warrior`> What could happen if I terminate a machine from the amazon console? Instead of deleting it using destroy-machine? I can't resolve an error state, but this environment has some machines that I cannot lose.
#juju 2013-11-29
<Luca__> Anybody there having some experiences in deploying openstack using charms?
<Luca__> Hi there. Anyone with some experience in deploying openstack with charms?
<jamespage> yolanda, whats the stack trace?
<yolanda> jamespage, finished with heat charm, now i'm testing a full deployment with heat again, then you should review it?
<jamespage> yolanda, happy to take a look - have you pushed your changes?
<yolanda> jamespage, yes, latest changes are pushed
<jamespage> yolanda, some test failures: http://paste.ubuntu.com/6493740/
<jamespage> you need to stub out os
<yolanda> it ran without tests for me
<yolanda> without test failures
<yolanda> mm, i see
<yolanda> maybe i had write permissions there
<jamespage> yolanda, something is also calling apt-get update
<jamespage> maybe in install hook?
<yolanda> looking now
<yolanda> ok, pushed new changes
<yolanda> i think i was running the tests with root user, so it worked because of that
<yolanda> jamespage ^
<jamespage> yolanda, you still need to stub out around the creation of /etc/heat better
<jamespage> it does not exist on my system, so the tests fail
<jamespage> yolanda, README needs a tidy to remove ceph stuff, and metadata.yaml contains some trailing whitespace and odd indentations for categories
<jamespage> yolanda, re templates - you can drop the etc_heat from the api-paste.ini
<jamespage> it's not required
<jamespage> and we should probably set auth_encryption_key to something sensible
<jamespage> maybe provide via configuration and set to something random if not supplied - but that has to persist (i.e. it needs to be stored on disk)
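That persist-if-unset pattern can be sketched in shell; everything here (the STATE_DIR override, the CONFIGURED_KEY stand-in for config-get, the file name) is illustrative rather than what the heat charm actually does:

```shell
#!/bin/sh
# Return a stable auth_encryption_key: prefer an operator-supplied value,
# otherwise generate one random key and cache it on disk so every later
# hook run sees the same value.
get_auth_key() {
    state_dir=${STATE_DIR:-/var/lib/charm/heat}   # illustrative location
    key_file="$state_dir/auth_encryption_key"
    configured=${CONFIGURED_KEY:-}                # real hook: $(config-get auth-encryption-key)
    if [ -n "$configured" ]; then
        printf '%s' "$configured"
    elif [ -f "$key_file" ]; then
        cat "$key_file"
    else
        mkdir -p "$state_dir"
        # 16 random bytes rendered as 32 hex characters.
        head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$key_file"
        cat "$key_file"
    fi
}
```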
<jamespage> yolanda, re the [keystone_authtoken] in heat.conf
<jamespage> that looks like it needs completing - but I'm not 100% sure how that interacts with api-paste.ini
<jamespage> yolanda, in the install hook you need to deal with openstack-origin: distro on 12.04
<jamespage> I think the cinder charm does this best - basically it changes distro -> cloud:precise-XXX if on precise
<jamespage> take a look
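The shape of that translation, as a sketch (the target release name is an assumption for illustration; the cinder charm holds the real mapping):

```shell
# openstack-origin=distro on a precise host would mean essex, so remap it
# to a cloud-archive pocket; on newer series "distro" is left alone.
map_openstack_origin() {
    origin=$1; series=$2
    if [ "$origin" = "distro" ] && [ "$series" = "precise" ]; then
        echo "cloud:precise-havana"   # assumed target release
    else
        echo "$origin"
    fi
}
```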
<jamespage> yolanda, actually re the use of os.path and os.mkdir - charmhelpers has an mkdir function somewhere that encapsulates all of that
<jamespage> so you could just stub that out instead of dealing with os directly
<yolanda> jamespage, ok, i'll work on those fixes
<jamespage> yolanda, other than that looks pretty good - nice work!
<jamespage> not tried it yet - will do later today once you have updated
<yolanda> most complicated part was to make it work, i needed a 16G compute machine for that to run :)
<jamespage> yolanda, oh - the icon is a ceph one btw
<jamespage> do ceilometer and heat have official icons just yet?
<jamespage> I don't think so - yolanda - just drop the icon for the time being
<jamespage> we can add that later
<yolanda> ok
<jamespage> ditto ceilometer
<jamespage> gonna work on that next
<yolanda> i bet ceilometer has changed a lot since i wrote it
<dweaver> Can't seem to find an equivalent to  jitsu  watch UNIT --state=started using juju-core is there any equivalent or should I be parsing juju status output myself instead?
<X-warrior`> How could I get out of a 'failed state' when juju resolved doesn't work?
<X-warrior`> :S
<X-warrior`> marcoceppi: are u around?
<dweaver> X-warrior`, try juju resolved multiple times.
<X-warrior`> I already tried it
<X-warrior`> A LOT
<dweaver> juju destroy-environment and re-deploy will work, of course.  juju destroy-unit should remove the unit, but if that doesn't work then you need juju destroy-unit --force, which is in the development version.
<X-warrior`> destroy-environment will work, but I don't want to destroy everything
<X-warrior`> destroy-unit depends on machine state
<X-warrior`> uhmm
<yolanda> jamespage, what do you think it can be a good place to store heat auth key
<yolanda> ?
<jamespage> yolanda, I've been trying to standardize on /var/lib/charm/{service_name} for that type of stuff
<yolanda> ok
<X-warrior`> dweaver: 1.16.3 does not have this --force option
<dweaver> I know, it is in development. see bug: https://bugs.launchpad.net/juju-core/+bug/1089291
<_mup_> Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <theme-oil> <juju-core:Fix Committed by fwereade> <juju-core 1.16:Fix Committed by fwereade> <https://launchpad.net/bugs/1089291>
<dweaver> So, it is in 1.16.4
<dweaver> There is also a workaround in the bug report, at the bottom.
<X-warrior`> hazmat: are u there?
<hazmat> X-warrior`, i am
 * hazmat reads backlog
<X-warrior`> hazmat: sorry to bother you, but I saw your script that does a direct db deletion and I think I need it...
<hazmat> X-warrior`, so this is juju-core.. could you pastebin the relevant portion of status
<hazmat> X-warrior`, failed state is rather ambiguous, you mean a unit or machine?.. a status pastebin would help understand what the issue is
<X-warrior`> just a sec
<X-warrior`> http://pastebin.com/YHM3bx2E
<X-warrior`> the problem is that juju resolved never resolves the problem, I already tried executing it a lot...
<hazmat> X-warrior`, juju resolved on elasticsearch/0 or logstash-indexer/0  ?
<X-warrior`> I tried on both of them
<hazmat> X-warrior`, in one terminal window can you start juju debug-log .. and then in another do the resolve on either elasticsearch/0 or logstash-indexer/0..  and then pastebin the log output.
<X-warrior`> hazmat: http://pastebin.com/5pJ3KjpW
<hazmat> X-warrior`, are you doing resolved --retry or just resolved ?
<X-warrior`> just resolved
<X-warrior`> hazmat: http://pastebin.com/494HpENN looks better
<X-warrior`> I see a " cannot allocate memory", I guess because I'm using a micro instance...
<hazmat> X-warrior`, eek, yeah java on micros is rather ambitious
<hazmat> X-warrior`, i'd try to ssh into that machine and manually shutdown elasticsearch to free up some memory
<hazmat> X-warrior`, micros aren't really useful virtual machines in my experience, they are severely constrained and penalized at the hypervisor level. for useful work m1.small is about as small as i go.. t1.micros are ok for almost-static or net-io-heavy workloads
<hazmat> er.. s/good/ok
<X-warrior`> I'm just starting on it, so testing stuff and checking how it works...
<X-warrior`> but thanks for the tip ;)
<X-warrior`> and for the help
<X-warrior`> let me try to ssh into it
<hazmat> re t1.micro and cpu penalization.. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html
<hazmat> X-warrior`, wrt to future stuff that makes this nicer.. it will be in the next dev release (1.17.0) we'll have destroy-machine --force machine_id which will forcibly kill units on the machine
<X-warrior`> hazmat: yeap I heard about this. dweaver showed me the 'bug' on launchpad and then I find your name and script
<hazmat> X-warrior`, that script isn't appropriate for your case, it was specifically around a stuck machine with no units, the  --force feature on trunk will handle both cases.
<X-warrior`> got it
 * X-warrior` running status to check if it worked
<hazmat> X-warrior`, any luck shutting down elasticsearch? .. you'll probably need to rerun the resolved hook on it
<X-warrior`> Oh man, why on earth did I use these micro instances :(
<X-warrior`> and trying to put more than one service on the same machine
 * X-warrior` feels dumb
<X-warrior`> elasticsearch is not marked as failed anymore... but logstash-indexer keeps with the failure status...
<hazmat> X-warrior`, updated status pastebin pls
<hazmat> X-warrior`, elasticsearch/0 should be gone i would think
<X-warrior`> http://pastebin.com/7YN7rrwE
<hazmat> X-warrior`, looks good, that's progress.. so back to logstash-indexer
<X-warrior`> yeap
<X-warrior`> same process as before? one terminal with debug-log another with resolved?
<hazmat> X-warrior`, yes pls
<X-warrior`> "ERROR cannot set resolved mode for unit "logstash-indexer/0": already resolved"
<X-warrior`> but it is not resolved
<X-warrior`> Logstash   agent-state is marked as down. Maybe it cannot do the resolve with it down
<hazmat> hmm
<hazmat> X-warrior`, can you log into that machine by ip, and manually restart the juju agent there
<hazmat> X-warrior`, ls /etc/init/juju* should tell you which upstart job it is.. then sudo service <name of file minus .conf> restart
<X-warrior`> it worked
<X-warrior`> I rebooted the machine
<X-warrior`> and then logstash and elasticsearch are gone
<hazmat> X-warrior`, cool
<X-warrior`> thanks
<X-warrior`> for all the help
<X-warrior`> :D
<X-warrior`> hazmat: thanks for all the help, I'm leaving now
<X-warrior`> dweaver: Thanks! :D
<hazmat> cheers
<aquarius> I'm trying to deploy marcoceppi's Discourse charm on Azure. juju status says "agent-state-info: 'hook failed: "install"'" for the "discourse" service. How do I work out what failed and start working out why?
<hazmat> aquarius, log into the machine and inspect the log at /var/log/juju ... alternative juju resolved --retry while doing juju debug-log...
<aquarius> hazmat, that sounds useful. How do I log into the machine? (I am completely new to juju, as you may be able to tell. :))
<hazmat> final option is interactive debug-hooks environment, https://juju.ubuntu.com/docs/authors-hook-debug.html
<hazmat> aquarius, juju ssh name-of-unit
<aquarius> ha! that's too easy :)
<hazmat> ie. juju ssh discourse/0
<aquarius> and I am logged in. Best command ever :)
<aquarius> aha, it got a timeout while connecting to github. That sounds transient
<aquarius> so "juju resolved --retry" will retry the failed deploy?
<aquarius> or will it redeploy everything?
<aquarius> ah, I need to specify the unit
<aquarius> ok, this is just too simple. :)
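The debug loop hazmat outlines, condensed (the unit name is aquarius's, and the log path follows juju-core's usual layout, so check your own environment):

```shell
# 1. Read the failed hook's output on the unit itself.
juju ssh discourse/0 'sudo tail -n 50 /var/log/juju/unit-discourse-0.log'

# 2. Watch the aggregated log in one terminal...
juju debug-log

# 3. ...and re-run the failed hook from another.
juju resolved --retry discourse/0

# 4. Or drop into an interactive hook shell instead.
juju debug-hooks discourse/0
```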
<aquarius> hm. Maybe I was wrong. "juju resolved --retry discourse/0" returned (and if I run it again, it says that unit is already resolved), but juju status still shows that unit as being in agent-state: error. Shouldn't it be pending or something?
<aquarius> or do I have to force a redeploy from scratch, rather than just --retry?
<hazmat> aquarius, its async
<aquarius> sure thing
<aquarius> I wasn't expecting it to hang until finished
<hazmat> aquarius, it will retry, but there
<aquarius> but I would expect juju status to *indicate* that it's retrying :)
<hazmat> is no guarantee as well that subsequent times will fix
<hazmat> yeah.. there's an outstanding issue to allow for some observation of hook execution
<hazmat> ie. when are things really done/steady state
<hazmat> as opposed to just workflow states
<aquarius> oh, cool, so it will work the way I expect eventually but doesn't right now. that's ok
<aquarius> I don't mind if what I expect to happen agrees with the plan but code hasn't quite caught up yet :)
<hazmat> well depends on what you expect, but yeah that should be partially addressed
<hazmat> aquarius, the async is pretty fast though (a few seconds).. so if its not resolved it likely ran into the same issue again
<hazmat> i'd suggest logging into the instance and verifying github connectivity
<aquarius> and indeed it did fail again
<aquarius> although in a different place.
<hazmat> progress then
<aquarius> github connectivity seems to be *intermittent*, which is way worse than "not working at all"
<hazmat> aquarius, what's the error this time
 * aquarius gives azure a fishy look :)
<hazmat> ah azure.. tis a special child, with an iaas veneer over paas concepts
<aquarius> new question: does the juju gui run on port 80? That is: can I juju deploy the gui to a machine which already has a web thing on it?
<hazmat> aquarius, it runs on port 443, but also listens on 80 to do auto redirect to 443
<aquarius> right, so if I have a web thing on that machine I should not deploy the gui to it
<aquarius> I could give the gui its own machine, of course, but I'm currently using the azure free trial and I don't want it to eat the free trial cash amount too quickly ;)
<hazmat> aquarius, yeah.. i'd recommend not, i haven't  tried it, but i  imagine the proxy backend that its running will barf on the port already bound for 80.. would be nice if that were configurable
 * aquarius nods
<hazmat> since its not functionally used.. ie web app port 80 gui on 443
 * aquarius looks irritably at azure. It keeps failing github connections, but not all of them.
 * hazmat files a bug against gui 
<hazmat> azure services dashboard is greenlit .. http://www.windowsazure.com/en-us/support/service-dashboard/
<hazmat> not clear what the issue is
<hazmat> github status also green https://status.github.com/
<aquarius> yeah, something weird going on. It's dying on doing github stuff, but not in the same places, and previously failed commands work sometimes
<aquarius> I love intermittent failures :)
<hazmat> thats a special kind ;-)
<aquarius> maybe it's a github problem
<aquarius> rather than an azure problem
<aquarius> could be blaming azure unfairly here :)
<hazmat> aquarius, also a possibility
<hazmat> anecdotally seems to be okay for me in casual testing
<hazmat> couple of pulls
<aquarius> yeah, it's succeeding most of the time
<aquarius> but discourse pulls about 500 things
<aquarius> :)
<aquarius> things that would be nice: something which gets kicked off when I do a deploy, sits in the background, and somehow tells me if the deploy goes wrong, rather than having to juju status to see what's happened
<hazmat> there's an api for that but yeah, it would be good to integrate that into the cli
<hazmat> with desktop notification
<aquarius> also, resuming a build from the last place we got to would be nice, but that's not juju's fault, that's the discourse charm's job
<aquarius> hm, it is possible that it actually *is* resuming from the last place it got to :)
<aquarius> if that's the case then I will make slow and irritating progress, over time :)
<hazmat> aquarius, depends on the charm implementation, juju is just re-executing the  install hook
 * hazmat pokes at the discourse charm
<hazmat> marcoceppi, you around?
<aquarius> I pung him a while back, so I don't think he is. Hopefully he will be over the weekend and I can try again :)
<hazmat> aquarius, so the apt installs are intelligent about no op on already present.
<aquarius> ruby bundler, probably less so
<aquarius> everyone thinks they can write a package manager ;)
<hazmat> aquarius, and it looks like the git pieces will just refetch/update instead of full pull
<hazmat> aquarius, yup.. there's at least one for every language
<aquarius> hm! we may have got past the git bits
<aquarius> progress!
<aquarius> 2013-11-29 21:19:03 INFO juju.worker.uniter context.go:255 HOOK Gem::InstallError: rake-compiler requires RubyGems version >= 1.8.25. Try 'gem update --system' to update RubyGems itself.
<aquarius> that has to be a dependency bug in the charm.
<marcoceppi> 0/
<marcoceppi> aquarius: there's a version of the charm not pushed yet. wip branch. that uses rvm + ruby2.0
<marcoceppi> are you using the charm store version of the one on github?
<dpb1> marcoceppi: where is a good starter example of amulet?
<marcoceppi> dpb1: let me dig up an example
<dpb1> k
<aquarius> marcoceppi, heya!
<aquarius> marcoceppi, I'm using the one in the charm store
<aquarius> aquarius@faith:~ $ juju deploy cs:~marcoceppi/discourse
<aquarius> should I have done something else?
<marcoceppi> aquarius: that one lags behind by a bit, as you can see discourse travels at a fast pace :)
<marcoceppi> aquarius: let me find the latest stable and push it up to the charm store
<aquarius> cool
<marcoceppi> aquarius: it fetches a ton of deps via gem though, that's an upstream thing and nothing I can really do about that :)
<marcoceppi> I've got some code to streamline the process from them, not yet implemented though
<aquarius> marcoceppi, yeah :)
<aquarius> marcoceppi, so, once you've pushed a new version to the charm store, can I deploy the new version over the top of the old one?
<aquarius> or do I have to kill all the existing vms and deploy from scratch?
<marcoceppi> aquarius: not really, I don't make any guarantees with upgrades, one of the reasons why it's still in a personal branch and not in the store. Once they get a 1.0 out I'll probably stabilize the charm and push it to cs:precise/discourse
<hazmat> aquarius, generically you can switch charm origins.. juju switch -h see --switch and point it to a local branch checkout
<aquarius> so from what marcoceppi is saying I'm best to kill this setup stone dead and start fresh with the new charm once it's in the charm store, yes?
<marcoceppi> aquarius hazmat: that's true, but there's a big difference in charm between what you have and current version (building ruby from rvm, etc) and there's no upgrade-charm hook
<hazmat> marcoceppi, in this case it never made it through the install hook, is that reasonable safe?
<aquarius> hazmat, I'm happy to destroy the existing deployment
<hazmat> aquarius, that's the safest way
<marcoceppi> hazmat aquarius if you're willing to try, sure!
<aquarius> how do I do it? :)
<hazmat> in terms of not creating new errors
<hazmat> aquarius, so..
<marcoceppi> hazmat: however, won't it not work because of install hook in error?
<hazmat> marcoceppi, not with --switch and --force
<marcoceppi> hazmat: ah, the good 'ol --force flag
<aquarius> am I best to juju destroy-environment which kills the whole thing, and then juju bootstrap again from the start?
<aquarius> I don't think I understand what an environment is :)
<hazmat> marcoceppi, it's an intended workflow for just this case.. ie. switching origins to fix/work on/customize a charm.
<hazmat> tis a good flag indeed
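Roughly, the switch-origins workflow looks like this (the branch URL is a guess at marcoceppi's personal charm branch, and flag availability varies by juju-core release, so treat this as a sketch rather than a recipe):

```shell
# Fetch the charm source locally, hack on it, then repoint the deployed
# service at the local copy; --force pushes past the failed install hook.
mkdir -p ~/charms/precise
bzr branch lp:~marcoceppi/charms/precise/discourse/trunk ~/charms/precise/discourse
cd ~/charms/precise/discourse    # make fixes here
juju upgrade-charm --repository ~/charms --switch local:precise/discourse --force discourse
```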
<hazmat> aquarius, so azure takes forever
<hazmat> aquarius, there's another option
<hazmat> aquarius, which is destroy-service and terminate-machine.. it's a bit annoying with juju-core in that you have to resolve errors first. There's a juju-deployer plugin on pypi that automates this into juju-deployer -TW ... and it does watch what's happening and tell you what's going on (re your earlier suggestion)
<hazmat> its also a nice declarative way to build a topology of services.
<aquarius> one step at a time. I'd rather do things the slow stupid way at first and then get clever later :)
<hazmat> ttyl :-)
<aquarius> so I understand this: juju bootstrap creates an environment? So if I juju destroy-environment, then the whole thing goes away, and then I can just juju bootstrap again?
<hazmat> give me a ping when its done, i'll be around.
<aquarius> I'm stopping shortly and going to the pub anyway. It is Friday night ;)
<hazmat> aquarius, yes, azure is slightly different in that destroy involves synchronous destruction of various internal azure resources.
<hazmat> it's a bit time consuming, and O(n) in the size of the environment. It is pretty reliable (imo)
<hazmat> aquarius, so yeah.. slow but safe, works
<aquarius> that's the plan here :)
 * aquarius destroys the environment
<aquarius> if it's not done in ten minutes or so, I'll resume tomorrow :)
<hazmat> cheers
<aquarius> marcoceppi, are you planning on pushing to the charm store soon? If not, can I use the github version of the charm directly? I'm happy to do whichever's convenient for you :)
<marcoceppi> aquarius: let me test to make sure what I'm giving you works
<marcoceppi> aquarius: spinning up on hp cloud atm
<marcoceppi> aquarius: I'm getting a segfault in the charm now during install, poking now
<aquarius> marcoceppi, this sounds like good progress, though! nice one :
<aquarius> :)
<aquarius> I have to go out, but I'll leave irc open if you want to drop comments in here, and thank you again!
<marcoceppi> aquarius: np, still trying to figure out why rvm is segfaulting
<marcoceppi> aquarius: okay, figured it out. Moving on to the rest of the charm for testing
#juju 2013-11-30
<aquarius> marcoceppi, nice that you figured it out! Looking forward to  it being available to play with :)
<aquarius> marcoceppi, is the version in the charm store now (updated today) an OK good version?
 * aquarius finds out by deploying it ;)
<aquarius> marcoceppi, yay! it works!
<marcoceppi> aquarius: yay! glad I could get that patched up for you.
<marcoceppi> aquarius: I've committed to obsoleting the github branch since it's a PITA to keep the two sync'd. Everything, for better or for worse, will go in the personal charm store branch. Be aware of potential breakage and feel free to ping me if you have any problems
<aquarius> it's working
<aquarius> if I want to customise the look of it, do I need to deploy a charm with that, or can I just poke the thing on the server?
<aquarius> marcoceppi, ^ :)
<marcoceppi> aquarius: you want to either use the admin section and "customize" or deploy as a plugin
<marcoceppi> we use a plugin
<marcoceppi> https://github.com/marcoceppi/discourse-ubuntu
<marcoceppi> we also have this, https://github.com/marcoceppi/discourse-ubuntu-sso
<marcoceppi> if you want ubuntu sso as a login option
<lazypower> ooo
<lazypower> that looks crispy
<aquarius> marcoceppi, don't need sso signin. I like the idea of the admin customize thing :)
<aquarius> thanks!
<marcoceppi> aquarius: for repeatability (unless you're backing up the database) i'd recommend moving to a plugin, eventually
<marcoceppi> but the admin section is really nice for on-the-fly work
<aquarius> marcoceppi, and to write a plugin I need to also write a charm for it and distribute it with juju?
<marcoceppi> aquarius: not necessarily, you could write a subordinate charm
 * aquarius does not know what that is :)
<marcoceppi> but i will have plugin support for discourse in the charm soon
<lazypower> aquarius: https://juju.ubuntu.com/docs/authors-subordinate-services.html
<marcoceppi> so you would just `juju set discourse plugins="https://githubrepo/path/;https://somethingelse/path;"`
<aquarius> ooh, clever
<lazypower> Documentation on Subordinate charms. I'm going over this myself right now so if you have any questions, i'm available to help you through the discovery process.
<aquarius> but I can't do that yet, right? I need to wait until your charm supports that?
<marcoceppi> aquarius: the latter is not implemented yet, the former you can do yourself if you want, the last alternative is to just `juju ssh discourse/0; cd /home/discourse/discourse; bundle exec rake plugin:install repo=https://url`
<marcoceppi> aquarius: right, I'll have that implemented this weekend most likely
<aquarius> smart!
<aquarius> I'll probably be looking at the discourse stuff again tomorrow
<marcoceppi> aquarius: cool, I'll ping you if I see you online and it's been updated
<aquarius> nice one!
<aquarius> thank you for help on this :)
<marcoceppi> I had to land some patches in to core before I could add the option
<marcoceppi> since core has support now, I can update the charm
<aquarius> marcoceppi, how do I add a new admin?
<aquarius> juju set discourse admins=username1,username2 ?
<marcoceppi> aquarius: yeah, or once you have one admin, just use the control panel
<marcoceppi> admins= is basically god mode
<aquarius> marcoceppi, oh, I don't need to use juju to add all admins once I've added myself?
<marcoceppi> aquarius: correct, it's just for the first admin really
<aquarius> smart! thank you
#juju 2013-12-01
<lazypower> Has the testing framework for juju been finalized?
<lazypower> *juju charms
<marcoceppi> lazypower: basically, yes
<marcoceppi> there's a new release of a testing tool, amulet, coming out next week. But the structures of tests are essentially the same. files in tests/ are run and need to exit with either 0 (OK), 1 (FAIL), or 100 (SKIP)
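In other words, a tests/ entry is just an executable honoring those exit codes. A bare-bones shell shape (the file name and charm name are placeholders):

```shell
#!/bin/sh
# tests/00-smoke: the runner executes this file and reads the exit code:
# 0 = OK, 1 = FAIL, 100 = SKIP.
run_smoke_test() {
    command -v juju >/dev/null 2>&1 || return 100   # no juju available: skip
    juju bootstrap || return 1
    juju deploy local:precise/mycharm mycharm || return 1
    # ...assert on `juju status` output here...
    return 0
}
```

A real test file would end with `run_smoke_test; exit $?`.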
<lazypower> Can you point me to the documentation for this? none of the example charms I chose have a tests/ directory.
<lazypower> Oh, nevermind its on github where it should be: https://github.com/marcoceppi/amulet
<marcoceppi> lazypower: the documentation is out of date, that's being updated too :)
<lazypower> well thats handy
<lazypower> Ok, i'll continue hacking in rinse/repeat mode until notification that amulet is up2date
<marcoceppi> lazypower: should be up around Tuesday
<marcoceppi> lazypower: here's an example of a test using amulet: https://gist.github.com/marcoceppi/7727543
<lazypower> marcoceppi: I've got a prototype subordinate charm written for papertrail that's working for rsyslog integration. There's another unit provided by papertrail for arbitrary file tailing and logshipping. Is there a preferred method for adding multiple resources like this in configuration?
<lazypower> Or would a smarter approach be to split the charm into 2 flavors, one for rsyslog, and another charm that would house the path for this arbitrary log resource, and deploy multiple subordinates to accomplish the logshipping?
<marcoceppi> lazypower: there's a few ways to tackle this one. The easiest is to make it a configuration option, something like "additional-logs" and then have a user pass in a comma separated list. The next would be to have the interface specify this, but that would require either a new interface or a change to the existing one (if there is one). Another option would be to have the interface change, but also have the charm do some basic checking. Like
<marcoceppi> instead of just rsyslog, have a config options for "all-logs" as a boolean, where it will automatically do everything in /var/log/*
<lazypower> What if i'm running a local resource that doesn't log to /var/log?
<marcoceppi> lazypower: I wouldn't split the charms. I would put both behaviours in one charm and switch based on context (either relationship, configuration, etc)
<marcoceppi> lazypower: configuration option. This might actually be available in the logging interface, let me check
<lazypower> http://manage.jujucharms.com/charms/precise/logstash-agent - seems to achieve this with an array type.
<lazypower> ok, i like how logstash did it. I'll add the boolean flag for all logs, and a sibling config value for specific log files.
<marcoceppi> lazypower: that's actually pretty janky. It's abusing type casting within YAML and Juju
<lazypower> oh
<lazypower> well nevermind then
<marcoceppi> lazypower: the idea is good, the implementation is a bit... not user friendly
<lazypower> It looked really nice though, having an array type
<marcoceppi> lazypower: yeah, except we don't have a true array type in juju configuration, it's basically a string representation of a json array
<marcoceppi> then it's coercing that in to an array during the hook, not exactly user friendly (dev friendly, sure)
<lazypower> if I clearly document that, with a #todo left in the readme, think it'll get enough traction to be useful or would I get dinged during review?
<marcoceppi> a space-separated or comma-delimited list that gets parsed in the hook is a much better approach format-wise
<lazypower> ok. i can do that.
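The parsing marcoceppi recommends might look like this inside a hook. This is a sketch: the `additional-logs` option name and the hard-coded value are illustrative, and a real hook would read the value with `config-get additional-logs` instead:

```shell
#!/bin/sh
# Sketch: parse a comma-separated "additional-logs" config value.
# In a real charm hook: additional_logs=$(config-get additional-logs)
additional_logs="/var/log/app/a.log,/var/log/app/b.log"
count=0
# Word-splitting after tr is fine here because log paths rarely contain
# spaces; paths with spaces would need more careful handling.
for path in $(printf '%s' "$additional_logs" | tr ',' ' '); do
    count=$((count + 1))
    echo "would ship: $path"
done
echo "count=$count"
```

Compared with the JSON-array-in-a-string approach, this keeps the config value something a user can type by hand.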
<marcoceppi> lazypower: something else to consider, we don't have a logs interface yet (or a logging one). You could create a spec for one so that charms could reveal where their logs were in addition to having users set it via config
<marcoceppi> one less thing for the user to do if charms adopt that new interface
<marcoceppi> that's not a requirement, just something to consider
<lazypower> hmm... good idea.
<lazypower> I'll open a Feature Request on it
<marcoceppi> cool
<marcoceppi> it might be worthwhile for us to consider adding "make sure service logs are present in /var/log" as a best practice for charms
<lazypower> https://github.com/chuckbutler/papertrail-charm/issues?labels=&state=open
<lazypower> i know the upstream version of this needs to be in bazaar, but if you think of anything to add to the FR would you add it?
<marcoceppi> lazypower: I'd have to look at the charm. I typically do that during reviews ;)
<marcoceppi> if I have some time tomorrow I'll poke at it
<marcoceppi> lazypower: I recommend running `juju charm proof` in the charm directory, as it'll cover most of the things formatting wise that need to be done to the charm
<lazypower> Ok, Thank you.
<Luca__> Does anybody know how networks are set up on nodes managed by charms? I am having a hard time finding documentation about the underlying network infrastructure
<marcoceppi> Luca__: there's not much that happens there now. A little bit is done by juju on maas, but otherwise the provider is responsible for networking
<Luca__> marcoceppi: would you mind providing a few more details or pointing to some doc? as far as I could see maas nodes get a br interface where eth0 is included, and an lxcbr0 bridge.
<Luca__> marcoceppi: I am trying to deploy openstack HA using jamespage's documents, but can't fully understand how network interfaces are managed by juju, and it looks like the documentation is not quite what I expect
<marcoceppi> Luca__: I'm not sure where it's documented, and only recently was some additional networking stuff added
<Luca__> marcoceppi: in a usual openstack deployment I would see at least 3 different networks, one for management, one for storage and one for public/floating. Not sure how to include all those in the charms
<Luca__> marcoceppi; https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<Luca__> marcoceppi: there jamespage uses different networks for maas and e.g. the mysql vip, however from the example it looks like this address is defined on eth0, which is already used for the maas nodes, therefore it is somehow not very clear
<Luca__> marcoceppi: I will be boarding in a few minutes but may be online in 3 hours if you are still available
<marcoceppi> Luca__: yeah, I've not used the openstack charms so I can't speak to them in a MAAS setup
<jamespage> marcoceppi, man - I keep missing him
<jamespage> the answer is that the openstack charms currently assume a flat network
<jamespage> (multi-network support is in dev - but not beta or GA yet)
<marcoceppi> jamespage: cool, he said he'll be online in a few hours, hopefully you can catch him again!
<aquarius> marcoceppi, how do I set "host names" in discourse's database.yml? That file says "this was created by juju, do not edit"
<marcoceppi> aquarius: oh, that's something I need to fix still. You can't at the moment. You'll need to manually edit it for the time being
<aquarius> I should edit that file even though it says in big letters not to? :)
<aquarius> if I deploy a new version of the charm, won't it get overwritten?
<marcoceppi> aquarius: yes ;)
<marcoceppi> aquarius: it will, but I'll have a hostname configuration option by then
 * aquarius laughs
<aquarius> OK :)
<aquarius> when Bad Voltage's discourse stops working, I'm going to blame you ;-)
<aquarius> on the other hand you *did* make it so I could deploy in 4 lines of code, which has earned you a sizeable block of credit ;)
<aquarius> do I have to restart discourse somehow to make it pick up changes to that file?
<aquarius> marcoceppi, ^
<marcoceppi> aquarius: just the webs, `sudo stop discourse-webs; sudo start discourse-webs` should do it
<aquarius> I don't have a discourse-webs?
<aquarius> just discourse and discourse-sidekiq?
<marcoceppi> aquarius: what's `initctl list | grep discourse` show?
<marcoceppi> aquarius: odd, you should have a ton of upstart scripts
<aquarius> jujud-unit-discourse-0 start/running, process 9278
<aquarius> discourse-sidekiq start/running, process 53655
<aquarius> discourse-clockwork stop/waiting
<aquarius> discourse start/running
<aquarius> discourse-webs stop/waiting
<aquarius> discourse-web (3000) start/running, process 53537
<aquarius> hrm
<aquarius> -webs isn't even running
<marcoceppi> aquarius: okay, you have to manually stop discourse-web, then run discourse-webs
<marcoceppi> aquarius: that's okay, discourse-webs is like a meta upstart thing
<marcoceppi> I don't think I have it coded correctly
<marcoceppi> sudo stop discourse-web PORT=3000; sudo start discourse-webs
<marcoceppi> and discourse-clockwork is legacy, it should be stopped if you have the latest discourse
<aquarius> it is stopped, so that's OK
<aquarius> right, we seem to be up and running again :)
<marcoceppi> \o/
<marcoceppi> aquarius: in versions soon to be coming, you'll be able to restart discourse with 0 downtime :)
<aquarius> nice!
<marcoceppi> aquarius: keep the feedback coming, happy to implement things for people using the charm in production
<aquarius> marcoceppi, if I add a different header, will that also get overwritten on a new deployment?
<marcoceppi> aquarius: if you're doing everything either in the customization area or a plugin, then no, it'll work between deployments
<marcoceppi> so long as you're using the same database (for the admin customizations)
<marcoceppi> forgot to mention the database is vital, but it's easy to dump and reimport
 * marcoceppi should put instructions in the readme
<aquarius> marcoceppi, how can I check whether discourse is sending emails correctly?
<marcoceppi> aquarius: there's an email testing screen in the admin panel under "Email"
<aquarius> It sent an email yesterday, but I've just tried setting up a new user and it didn't arrive
<aquarius> oh, really? brilliant!
<marcoceppi> aquarius: how long ago? Emails send about every 5 mins
<aquarius> longer ago than that
<aquarius> will try the email test thing
<marcoceppi> aquarius: you can also check /sidekiq to see what's going on with the periodic jobs
<marcoceppi> that will list failed tasks, etc
<aquarius> test mail not arriving either :(
<aquarius> ok, mail is a problem at my end
<marcoceppi> aquarius: if you have a legit smtp server (other than sendmail) you can add those details to /home/discourse/discourse/config/environments/production.rb
<aquarius> and you have to restart discourse, not just discourse-webs, to pick up database.yml changes :)
<aquarius> nah, mail arrives for jono, so it's my fault ;)
<marcoceppi> aquarius: interesting, good to know
<marcoceppi> ah
<marcoceppi> aquarius: check your spam?
<aquarius> gnaaaaah there they are
<aquarius> thank you :)
<marcoceppi> aquarius: I recommend using a real SMTP server and not just sendmail, or more people may have that problem
<aquarius> ya, but that means I have to *have* a real smtp server, which I do not ;)
#juju 2014-11-24
<jamespage> alai, hey - whats the current status on the vnx charm? specifically https://bugs.launchpad.net/charms/+source/cinder/+bug/1394276
<mup> Bug #1394276: Invalid config charm cinder-vnx <openstack> <partner> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1394276>
<jamespage> alai, is that work complete? can we promulgate the charm now?
<johnmce> Hi, can anyone give any quick advice on how to remove a charm/service that simply won't go away. I performed a "juju destroy-machine --force" on the last machine/unit. Now the charm can't be removed or re-installed. There seems to be no --force option for services.
<jamespage> johnmce, I'd try juju terminate-machine --force on the machines its on
<jamespage> and them destroy-service afterwards
<johnmce> jamespage: Hi James. I've already destroyed the machine (--force), and now have a charm not linked to any machine (that exists).
<johnmce> jamespage: This is the output when I stat the service: $ juju stat openstack-dashboard → environment: maas / machines: {} / services: {}
<johnmce> jamespage: the juju-agent gui still shows the charm, and juju won't allow the same charm to be deployed under the same name because it still exists. It can however be deployed under a different alias.
<jamespage> stub, erm do you think we should back out the py3 changes and consider exactly how we deal with this across 12.04 and 14.04 py3 versions?
<stub> jamespage: I think the branch that works with the ancient six is good for now.
<stub> jamespage: Unless you have found new problems
<stub> https://code.launchpad.net/~stub/charm-helpers/py3-2/+merge/242653
<stub> jamespage: That branch also fixes some other package revisions, so we are testing against precise versions. We could run the tests multiple times against precise, trusty and trunk versions, but that is probably overkill.
<stub> I think backing it out is a bad idea, as if we can't fix any issue now I doubt anybody is going to bother attempting to fix them later.
<stub> EENGLISH
 * stub wanders off for a bit
<alai> jamespage, we are moving the vnx charm to run on Juno.  For the charm to run on Juno, some changes need to go into prodstack which will happen sometime this week.
<alai> jamespage, we will not maintain the direct driver ppa for icehouse, Juno should have the features that we need.
<gnuoy> dosaboy, jamespage either of you have a moment for https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/pop-unused-resources/+merge/242662 ? If you agree with the approach I'd like to apply it to the other os charms that can use corosync.
<jamespage> gnuoy, use of haproxy is not tied to hacluster
<jamespage> that's the most typical use case, but it works just fine without it
<gnuoy> jamespage, ok, let me check again
<gnuoy> jamespage, it looks like it would make sense to just change the gate to check for cluster rather than ha
 * gnuoy goes to test that
<jamespage> gnuoy, maybe we should think about this the other way round and just always run haproxy
<gnuoy> interesting, that would trigger less change when scaling out for the first time I guess
<jamespage> gnuoy, yeah - that was my thinking
<jamespage> it would then be possible to just reload haproxy
<jamespage> much less disruption
<jamespage> gnuoy, its what I made the openstack-dashboard charm do
<gnuoy> jamespage, oh, I'm surprised you have that already since the charmhelpers code seems to have hardcoded references to enable haproxy only when peers are present
 * gnuoy goes to look at the dashboard
<jamespage> gnuoy, I think I override that behaviour in a subclass
<jamespage> and as it always runs in apache anyway :-)
<jamespage> gnuoy, actually that might be something to think on
<jamespage> gnuoy, switching to using wsgi in apache rather than the native stuff
<jamespage> just a thought
<avoine> is it easy to launch an automated test on a charm?
<avoine> I would like to know if my MP is finally passing the test -> https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/pure-python/+merge/226742
<tvansteenburgh> avoine: I kicked one off for you http://juju-ci.vapour.ws:8080/job/charm-bundle-test/10415/console
<avoine> thanks
<mbruzek> avoine:  You can also run bundletester on your local laptop https://github.com/juju-solutions/bundletester
<mbruzek> Install bundletester per the readme, and then bundletester -F -e local -l DEBUG -v
<skay> avoine: one test failed :(
<mhall119> I think I broke my LXC again, do we have a "nuke it all and start over" option yet?
<marcoceppi> mhall119: there's a juju-clean plugin. What version of juju?
<marcoceppi> I'm more concerned that lxc keeps breaking
<mhall119> marcoceppi: 1.20.10-utopic-i386
<mhall119> my machine-2 gets stuck
<mhall119> "2":
<mhall119>     agent-state-info: 'open /var/lib/lxc/mhall-local-machine-2/config: no such file or directory'
<mhall119>     instance-id: pending
<mhall119>     series: trusty
<mhall119> even if I juju destroy-environment and re-bootstrap, machine-2 does this
<mhall119> not machine 1, not machine-3 or higher, only machine-2
<mhall119> I think I was chroot'd into it's rootfs last week when I tried to destroy it
<avoine> skay: yeah, I think I'm hitting the timeout
<lazyPower> mhall119: do you have something leftover when you destroy the environment in /var/lib/lxc?
<lazyPower> mhall119: i've seen weird issues crop up due to stale lxc modifications left around - i just expected them to get cleared out when i destroyed the environment but the machine config was left over
<lazyPower> anecdotal - but worth looking into
<mhall119> lazyPower: destroying it to find out
<mhall119> lazyPower: it turns out that I do
<mhall119> http://paste.ubuntu.com/9220770/
<mhall119> and machine-2 is one of the things left behind
<lazyPower> mhall119: i think we know why it's doing that now - if you wipe that stuff out it should be able to create what it needs when the container is provisioned.
<mhall119> lazyPower: delete all of it?
<mhall119> or just machine-2?
<lazyPower> just machine-2
<lazyPower> i mean it *should* be fine
<lazyPower> but be surgical in what you're modifying so it's easier to unwind what's been done, and you can isolate behavior change.
<mhall119> alright, deleted that and bootstrapping again
<lazyPower> i've hosed my local provider by thinking blowing away things willy-nilly would be fine, and had to start from scratch by nuking everything.
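The stale-container check lazyPower suggests can be sketched as a small script. This sketch uses a temp directory so it is runnable as-is; for real use you would point `lxc_dir` at /var/lib/lxc, and the container name is illustrative (taken from the paste above):

```shell
#!/bin/sh
# Sketch: find stale local-provider container dirs left behind after
# `juju destroy-environment`. Uses a temp dir for demonstration; set
# lxc_dir=/var/lib/lxc on a real host.
lxc_dir=$(mktemp -d)
mkdir "$lxc_dir/mhall-local-machine-2"   # simulate leftover state
stale=0
for d in "$lxc_dir"/*-local-machine-*; do
    [ -d "$d" ] || continue
    stale=$((stale + 1))
    echo "stale container dir: $d"
    # Be surgical: remove only the misbehaving machine, e.g.
    #   sudo rm -rf /var/lib/lxc/mhall-local-machine-2
done
echo "stale=$stale"
```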
<lazyPower> mbruzek: can i get a quick review of this bundle rev for CTS? https://code.launchpad.net/~lazypower/charms/bundles/hdp-core-batch-processing/bundle/+merge/242715
<mbruzek> yes
<mwak> hi
#juju 2014-11-25
<hatch> Has anyone asked for a config field in which there are a select few options which users could choose from? Which could be represented in the GUI as a select/dropdown?
<rick_h_> hatch: yea, enums basically
<rick_h_> hatch: it's part of the discussion around using jsonschema to represent config options
<hatch> ok - atm I am just going to give them the possible options in the description but that just feels ripe for issues
<hatch> typos and whatnot
<rick_h_> hatch: yea
<hatch> it would be even better if those enum options could enable/disable subsequent config options :)
<hatch> when running debug-hooks on a unit the tmux session that opens is in the home directory of the unit not the charm as shown in the docs - where can I find the actual hooks?
<hatch> $CHARM_DIR is blank
<hatch> ahah I need to actually trigger the hook externally
<hatch> looks like we need to update these docs :)
<noodles775> hatch: Yeah - it could be clearer that none of the juju env is available until you're in a hook context (ie. config-get and other juju binaries)
<hatch> yeah just a small wording change will help with that - I'll try and remember to submit a PR for that this week
<gnuoy> Am I missing something or is charmhelpers now broken on trusty since the py3 support mp?
 * gnuoy goes to investigate further
<marcoceppi> gnuoy: there's a bug already
<marcoceppi> gnuoy: https://bugs.launchpad.net/charm-helpers/+bug/1395378
<gnuoy> marcoceppi, ah, ok. thanks
<mup> Bug #1395378: latest python3 additions seem to break with trusty python-six <Charm Helpers:In Progress by stub> <https://launchpad.net/bugs/1395378>
<stub> gnuoy: If you want, check out if lp:~stub/charmhelpers/py3-2 sorts things for you
<gnuoy> stub, thanks
<stub> I can't get the venvs built in a precise VM :-(
<mhall119> marcoceppi: does juju work against Canonistack? I was told it did, I can bootstrap on it but `juju status` never returns
<marcoceppi> mhall119: it does
<marcoceppi> lazyPower may have some insight on this
<marcoceppi> mhall119: you have to do like proxies and sshuttle and stuff
<lazyPower> mhall119: do you have your VPN setup?
<lazyPower> you won't get anything back from juju status until you've activated your VPN or canonistack-sshuttle - i highly recommend the VPN route
<mhall119> lazyPower: no, I don't
<stub> gnuoy: did that work btw? I'd like to know what series you deployed too.
<gnuoy> stub, I haven't tried as yet tbh
<gnuoy> jamespage, got a sec for https://code.launchpad.net/~gnuoy/charm-helpers/haproxy-singlenode-mode/+merge/242790 ?
<jamespage> gnuoy, looks reasonable
<jamespage> is this for the 'run haproxy always' stuff?
<gnuoy> jamespage, it is
<stub> tvansteenburgh: yes, apt-get as an import side effect is disgusting. There are some others hiding in charm-helpers too...
<jamespage> gnuoy, +1
<gnuoy> jamespage, thanks
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/quantum-gateway/stable-vlan-flat-support
<jamespage> I'd like to get that landed asap
<gnuoy> jamespage, err, I thought  I already had. sorry about that
<jamespage> gnuoy, you acked and I landed next
<jamespage> thats for stable
<gnuoy> jamespage, +1
<jamespage> gnuoy, thanks
<sayon> hey there, i am wondering if i can upgrade from juju 0.7 to the newer juju-core, without having to re-bootstrap the whole environment again?
<marcoceppi> sayon: that's a great question
<marcoceppi> sayon: as you may be aware a lot has changed in the latest releases of juju, let me find out if there is a migration path
<sayon> marcoceppi: i know, right :)
<lazyPower> oh wow 0.7 to 1.20.x? woo that's a huge update.
<sayon> lazyPower: yea, i tried to upgrade once juju-gui was introduced but i could not figure out how to do it without rebootstrapping
<marcoceppi> sayon: so we've since introduced a "juju upgrade-juju" command but I don't think that will work on an environment that's as old as 0.7
<hazmat> sayon, there isn't anything supported from pyjuju/zookeeper to juju-core/mongodb
<sayon> marcoceppi: it won't i already tried, there is no upgrade-juju command :(
<sayon> hazmat: i thought so
<hazmat> sayon, capturing the topology from juju status > and capturing the data in env and then rebootstrapping and restoring by hand .. it's a pretty manual process.. juju-core does support in-place upgrades.
<sayon> i am running into a lot of trouble since i upgraded my maas installation on ubuntu 12.04
<hazmat> sayon, doh..
<hazmat> sayon, was that recently?
<sayon> hazmat: it has been some months i guess, everything worked just fine till i wanted to add another machine
<hazmat> sayon, there was a security fix that got pushed for maas in the last few weeks that broke pyjuju .. pyjuju has been eol'd for a few years.
<sayon> hazmat i guess thats what i am facing right now
<hazmat> sayon, if you want to keep the env, i'd recommend apt-pinning maas to the older version
<hazmat> the security fix was around the object storage capabilities of maas, and is only relevant if maas was exposed directly to untrusted users which is not common.
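hazmat's apt-pinning suggestion is normally done with an apt preferences file. A minimal sketch, assuming the old version string sayon reports further down (0.1+bzr482); the exact pin should match the `apt-cache policy maas` output on the affected host:

```
# /etc/apt/preferences.d/maas-pin  (illustrative)
Package: maas*
Pin: version 0.1+bzr482*
Pin-Priority: 1001
```

A Pin-Priority above 1000 allows apt to downgrade to the pinned version; as the rest of this conversation shows, a downgrade can still fail if the database schema has already migrated.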
<sayon> hazmat, tried that already but i am running into problems since the database structure seems to have changed a lot
<hazmat> ugh
<sayon> when downgrading it fails to convert from the newer to the older database version
 * hazmat tries to dig up details 
<hazmat> sayon, do you have the error when adding a new machine to the environment?
<hazmat> ie from log
<hazmat> it looks like the change was specifically around how charm urls are handed to units
<hazmat> sayon, also what version of maas is the newer one?
<sayon> well when i add a new machine it gets commissioned in maas and is then ready
<sayon> but "juju status" won't list the new machine and may not acquire it for charm installations
<sayon> hazmat: its maas 1.2+bzr1373+dfsg-0ubuntu1~12.04.5
<hazmat> sayon, interesting.. that's a different issue.. the commit i was referencing hasn't landed in the 1.2 series yet... do you know what the other version of maas was.. apt-cache policy maas should have it
<sayon> hazmat: wow, seems like it was 0.1+bzr482+dfsg-0ubuntu1
<hazmat> sayon, hmmm.. so in a nutshell i'd recommend bootstrapping that env again using the juju stable ppa on precise https://launchpad.net/~juju/+archive/ubuntu/stable?field.series_filter=precise
<hazmat> juju-core can  self-update itself..
<hazmat> i'm asking on #maas re best version/ppa for precise
<sayon> hazmat: that means reinstallation of all machines used with juju charms, right?
<hazmat> sayon, sadly yes
<hazmat> that version of maas was ancient, and pyjuju has been eol'd for a while
<sayon> hazmat: ok, i think i will then do a complete reinstallation with the new LTS version of ubuntu
<sayon> and not stick with precise for any longer
<sayon> but thanks a lot for your help and support hazmat and marcoceppi!
<hazmat> np
<skay> avoine: I'd like to be able to run collectstatic, any objection to the idea of having it happen during install and upgrade?
<skay> avoine: also, there is the ansible branch you have, and I'm wondering if most of the python code in hooks.py is going away and it will turn into a lightweight shim around ansible calls?
<avoine> skay: The idea was to use a subordinate for that: http://bazaar.launchpad.net/~patrick-hetu/+junk/django-contrib-staticfiles/files
<avoine> skay: also Simon proposed to use dj-static: https://code.launchpad.net/~bloodearnest/charms/precise/python-django/trunk/+merge/235430
<avoine> skay: yes, migrating to Ansible is the long term plan
<skay> for that dj-static branch -- I would leave whether or not someone uses dj-static up to the project
<skay> but the collectstatic command still needs to be called, and if it should be handled as a subordinate, maybe document it in the readme?
<skay> I happen to be using dj-static right now, and I handle the dependencies in my project's requirements file
<skay> so python-django installs it for me accordingly
<skay> on another note, when find_django_admin_cmd fails to find the admin, would you want sys.exit(1) there?
<skay> I'm not clear on when things should fail silently or not, but that one seems like maybe it shouldn't fail silently
<avoine> skay: can you give me the line number?
<avoine> skay: Simon's patch would be better since it is not recommended to use django.contrib.staticfiles in production
<skay> avoine: http://bazaar.launchpad.net/~patrick-hetu/charms/precise/python-django/pure-python/view/head:/hooks/hooks.py#L206
<avoine> I think I did that to catch the cases where django is not installed
<avoine> skay: do you have a case where this should not happen?
<skay> avoine: I'll try to dig up what was happening to me. somehow the method was returning None and then the charm was attempting to call None syncdb etc.
<avoine> ouch
<skay> avoine: I'm not sure how much you want to dig into it. I was working on a branch of this to allow for pip extra args and was doing --no-index --file-files and had the wrong version of something, if memory serves
<skay> avoine: so it didn't install django properly? I thought perhaps that should put the unit in to an error state earlier than it did
<skay> it ended up looking like an error with the pgsql relation rather than earlier
<skay> so, the method allows for the django-admin command not to be found, and doesn't fail. perhaps it should
<avoine> skay: no, because you could have installed django from a debian package, so I'm not checking if it exists earlier
<skay> avoine: oh that makes sense.
<skay> I figured there was a reason
<avoine> skay: yeah the pgsql relation will also fail if django is not installed
#juju 2014-11-26
<rick_h_> pluses and reshares welcome please. https://plus.google.com/116120911388966791792/posts/4ZuLf9ZS2S9
<gnuoy> jamespage, dosaboy do either of you have any time to look at my haproxy-all-the-time mps ( http://paste.ubuntu.com/9248100/ ) ? The diffs are deceptively large because of the recent updates to charmhelpers
<bloodearnest> hmm, I'm having problems with deployer/juju (0.4.1/1.20.12) giving the following error when trying to add relations:
<bloodearnest> u'Error': u'no relations found', u'RequestId': 1, u'Response': {   }
<bloodearnest> juju status reports all is well
<bloodearnest> local provider
<bloodearnest> juju ssh is giving public key denied
<bloodearnest> so, this was after trying to add a relation between two subordinates, which was working fine until yesterday
<bloodearnest> now I can't add the relation
<bloodearnest> is this a change in 1.20.12?
<bloodearnest> https://bugs.launchpad.net/juju-core/+bug/1382751
<mup> Bug #1382751: non subordinate container scoped relations broken  <regression> <relations> <subordinate> <juju-core:Fix Committed by menno.smits> <juju-core 1.20:Fix Released by menno.smits> <juju-core 1.21:Fix Released by natefinch> <https://launchpad.net/bugs/1382751>
<bloodearnest> so, this is a breaking change
<bloodearnest> in a point release
<hazmat> bloodearnest, ugh
<hazmat> that looks like unintended fallout
<bloodearnest> seems so
<bloodearnest> relating 2 subordinates has been supported for ages, I've been using for a good while
<hazmat> bloodearnest,  is it a container scoped relation between the two subordinates?
<bloodearnest> hazmat: yes
<bloodearnest> hmm, I guess one side could be non-scoped, since the placement has already been done by the scoped relation
<hazmat> bloodearnest, it looks like this line +		if eps[0].Scope == charm.ScopeContainer && subordinateCount != 1 {
<hazmat> should be subordinateCount >= 1
<hazmat> bloodearnest, is this with public charms, i think it's worth a bug report
<bloodearnest> hazmat: public charm, but not yet promulgated. It's the charm for the new conn-check utility we announced recently, related to nrpe-external-master to surface the checks as nagios results
<bloodearnest> I will file a report
<bloodearnest> hazmat: fyi: https://bugs.launchpad.net/juju-core/+bug/1396625
<mup> Bug #1396625: container scoped relations between 2 subordinates broken in 1.20.12 <juju-core:New> <https://launchpad.net/bugs/1396625>
<bloodearnest> hm, how do I downgrade to 1.20.11, the stable ppa doesn't have it
<gnuoy> small charmhelpers mp if anyone has a sec https://code.launchpad.net/~gnuoy/charm-helpers/add-default-nagios-servicegroup/+merge/242928
<mthaddon> gnuoy: that looks reasonable to me - will merge
<gnuoy> mthaddon, thanks!
<mthaddon> sure
#juju 2014-11-27
<hatch> Hey I've updated my namespaced repo for my charm, how do I go about getting the promoted version updated now too?
#juju 2014-11-28
<bradm> anyone about who can help with a problem deploying something in juju?  I'm getting an odd traceback when I try to add a service using juju deployer
<LinStatSDR> What is the error, bradm?
<bradm> LinStatSDR: http://pastebin.ubuntu.com/9276609/, using a juju deployer config
<lamont> I say juju upgrade-charm and the bundle it delivers is the _old_ contents of the charm directory.. (juju 1.16.5)  - thoughts?
<captine> Hi all.  Can anyone point me to a write up (if available) of using Juju and MAAS in a corporate environment.  My company currently has an IBM pureflex running Hyper-V on all nodes and VM's on them.  All is managed as if the machines were physical e.g. no automated provisioning etc.  I am just trying to see if Juju is more for dev type people or if there is a use case for it in a corporate where vm's don't change much.
<mgz> gnuoy: how busy are you today?
<gnuoy> mgz, that seems like a loaded question...
<gnuoy> mgz, I'm working on adding a feature so I can take a pause
<mgz> so, I was looking on the train at the 'right' way of doing devel tools with juju and simplestreams
<mgz> I think I have all the bits down now
<mgz> so, if you want to retry the windows workloads tools part the right way, we could have a quick bash at that
<gnuoy> ok, I have a call in 1 min, then I was hoping to grab some lunch but it'd be great to catch up about that this afternoon if that's ok?
<mgz> that works for me, poke when available
<gnuoy> mgz my world is running behind, I'm heading to lunch now
<hazmat> bradm, do you have a pastebin for the traceback?
<hazmat> bradm, what version of juju?
<hazmat> bradm, do you have a service 'infra' in your deployer config?
<gnuoy> mgz, I'm champing at the bit
<mgz> gnuoy: okay, I'm back from lunch too
<gnuoy> mgz, ok, hit me
<mgz> so, rough plan
<mgz> make tools tarballs for a trunk juju version for trusty and windows
<mgz> generate tools streams
<mgz> bootstrap with --metadata-source pointing at that directory
<gnuoy> mgz: Are you suggesting I bootstrap with 1.20.12 tools?
<mgz> gnuoy: no
<mgz> we bootstrap with the devel, we just need to give bootstrap the path to our carefully constructed simplestreams version
<gnuoy> ok
<mgz> gnuoy: do you want to ssh into a box I have to poke things? or some other means to pair on this?
<gnuoy> mgz, I need to disappear for 20mins but I'll be back after that. I'd like to get you access to the lab I'm working in tbh
<mgz> ah, I should check if I have the vlan stuff set up on here
<gnuoy> mgz, do you have vpn access to the qalab ?
<gnuoy> mgz: are you happy with juju-1.21-beta3-trusty-amd64.tgz juju-1.21-beta4-win2012hvr2-amd64.tg being the tools?
<mgz> gnuoy: they need to be the same beta
<gnuoy> mgz so there's a trusty beta 3 but no windows and a windows beta 4 but no trusty, right ?
<mgz> gnuoy: there are both, it's just a question of putting in the same versions
<mgz> sec, I just need to restore all my sessions (X login had a bit of a fit)
<gnuoy> mgz, Are you saying that there is trusty beta 4 out there somewhere ?
<mgz> there is trunk 1.21 which reports as the upcoming beta4
<gnuoy> mgz, I'm looking at https://streams.canonical.com/juju/tools/devel/ fwiw
<mgz> gnuoy: actually, let's just use tools from this job: <http://reports.vapour.ws/releases/2121>
<gnuoy> mgz, Build Artifacts  empty  <- on the few I've tried
<mgz> gnuoy: `wget http://data.vapour.ws/juju-ci/products/version-2121/win-client-build-installer/build-1325/juju-1.21.0-win2012-amd64.tgz`
<mgz> gnuoy: `wget http://data.vapour.ws/juju-ci/products/version-2121/publish-revision/build-1237/juju-core_1.21.0-0ubuntu1~14.04.1~juju1_amd64.deb`
<mgz> then `dpkg-deb -x juju-core_1.22-alpha1-0ubuntu1~14.04.1~juju1_amd64.deb extracted-bin`
<mgz> and get the jujud binary from in there for the trusty/amd64 combo
<mgz> gnuoy: then my instructions from earlier had `tar -cf tools/devel/juju-$(~/go/bin/juju version).tgz -C ~/go/bin jujud` - but this is going to be different for these two tools
<mgz> then we put the two tarballs in tools/devel/ under our working dir
<mgz> and run `juju-metadata --debug generate-tools --clean --stream devel -d .`
<mgz> check that the streams data generated has both tools
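The assembly mgz outlines above (unpack the CI artifacts, package each jujud into a correctly named tarball, generate streams, bootstrap against the directory) can be sketched as one script. The working directory is a temp dir and the jujud is a fake stand-in so the packaging step is runnable as-is; in the real run the binaries come from the wget'd artifacts, and the juju commands at the end (quoted from the log) need a real client.

```shell
# Sketch of the tools-stream assembly above. The jujud here is a fake
# stand-in; in the real run it is extracted from the CI artifacts.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/tools/devel"

echo fake-jujud > "$WORK/jujud"

# Tarballs must be named juju-<version>-<series>-<arch>.tgz and contain
# a member literally named "jujud"; per the log, juju metadata refuses
# a .tar.gz suffix, and -z matters (an ungzipped .tgz explodes later).
for v in 1.21-beta4-trusty-amd64 1.21-beta4-win2012hvr2-amd64; do
    tar -czf "$WORK/tools/devel/juju-$v.tgz" -C "$WORK" jujud
done

ls "$WORK/tools/devel"
# With a real juju client (commands as quoted above):
#   cd "$WORK" && juju-metadata --debug generate-tools --clean --stream devel -d .
#   juju --debug bootstrap --metadata-source "$WORK" -e $ENV
```

Both tool versions must match (same beta), and the version/series strings in the filenames must be exact; the later "missed a v" failure shows how unforgiving the product-name matching is.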
<gnuoy> mgz ok so far: http://paste.ubuntu.com/9284701/
<gnuoy> let me redo with the devel dir
<mgz> gnuoy: okay, do releases rather than devel (pretty much the same, just different environments.yaml setting fun)
<mgz> releases is fine
<mgz> I did devel mostly because it seemed more fun
<gnuoy> mgz happy for me to: juju bootstrap --metadata-source /var/lib/jenkins/.juju/tools  ?
<mgz> gnuoy: next step is bootstrap with that `juju --debug bootstrap --metadata-source . -e $ENV`
<mgz> gnuoy: I think not with the tools, it's the dir containing the tools
<gnuoy> ack
<mgz> I'd use a big buffer or tee out/err somewhere
<mgz> because you need to read a lot of simplestreams junk if something's not right
<mgz> after that it's add-machine with da windows
<mgz> gnuoy: how's it going?
<mgz> I'll need to reboot for standup in a sec so will be off irc briefly
<gnuoy> mgz one explosion, down to the .tgz not actually being gzipped
<gnuoy> rebootstrapping now
<mgz> yeah, I did .tar.gz first time around... which tar did z for me, but juju metadata refused to acknowledge
<gnuoy> mgz, bootstrap done. Happy for me to go on to the windows deploy?
 * gnuoy goes for it
<mgz> go for it
<gnuoy> arggh, I missed a v out of the windows tools name
<mgz> gnuoy: whoops
<mgz> I'll be back in ~10 or so, tell me how you get on
<mgz> (and yes, this streams assembly stuff is all very error prone... does juju-metadata validate-tools help at all?
<mgz> )
<gnuoy> mgz http://paste.ubuntu.com/9285206/
<mgz> gnuoy: how goes?
<gnuoy> mgz, do you see my pastebin?
<gnuoy>  http://paste.ubuntu.com/9285206/
<mgz> gnuoy: looking now
<mgz> gnuoy: can you kill that machine, do set-env with the debug log config (we should have supplied on bootstrap really..) and look at machine-0.log
<gnuoy> mgz debug was set in the environments.yaml
<mgz> okay, ace, then just look at machine-0.log for the provisioner bit that says "no matching tools available"
<gnuoy> mgz I've popped the log on chinstrap
<mgz> you're liam on chinstrap? not gnuoy?
<mgz> 2014-11-28 16:45:46 DEBUG juju.environs.simplestreams simplestreams.go:428 read metadata index at "https://streams.canonical.com/juju/tools/streams/v1/index2.json"
<mgz> 2014-11-28 16:45:46 DEBUG juju.environs.simplestreams simplestreams.go:436 index file has no data for product name(s) ["com.ubuntu.juju:win2012hvr2:amd64" "com.ubuntu.juju:win2012hvr2:i386" "com.ubuntu.juju:win2012hvr2:armhf" "com.ubuntu.juju:win2012hvr2:arm64" "com.ubuntu.juju:win2012hvr2:ppc64el"]
<mgz> graaaaa
<themonk> lazyPower, hi
<themonk> i am getting a strange error after updating juju
<themonk> few hours back
<themonk> this is the error: WARNING discarding API open error: unable to connect to "wss://172......:17070/environment/4aaa9a83............/api" ERROR Unable to connect to environment "amazon". Please check your credentials or use 'juju bootstrap' to create a new environment.
<themonk> credentials are ok
<lkraider> hello, we are looking into extending juju to notify charm relations when their machines restart (right now only config-changed is called when the machine boots). Any pointers?
<lkraider> we want to call relation-changed when a machine reboots
<lkraider> is this possible?
<sarnold> lkraider: i'm just a spectator, but that feels like it'd introduce a huge amount of overhead when machines reboot... running scripts to take that unit out of service in all N relations that know about it might take longer than the machine rebooting will take..
<sarnold> lkraider: I suspect providing a unit-reboot hook would let charms that care about it handle it and let other charms ignore it without the overhead..
<lkraider> @sarnold I mainly want to update the old relations with new IP when the machine reboots
<sarnold> lkraider: aha :)
<lkraider> sarnold: am I right in that there's no provision in Juju right now for that?
<lkraider> sarnold: this is my understanding of the hooks as they are now: http://stackoverflow.com/a/25980368/324731
<sarnold> lkraider: sorry, I'm too much of a bystander for that -- I wouldn't be surprised if semi-stable IPs are assumed though, and teardown / re-provision in the case of new unit IP addresses..
<sarnold> lkraider: nice graph :)
<lkraider> sarnold: yep, I found the docs confusing so I made it from what I gathered
<lkraider> sarnold: not sure if it's 100% accurate tho
<lkraider> sarnold: thanks for your help, I'll try asking sometime later to the folks here again
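Since config-changed is the one hook that does fire on boot (as lkraider notes), a stopgap today would be a config-changed hook that re-publishes the unit's current address on each relation. A minimal hypothetical sketch follows; the relation name `website` and the `hostname` key are assumptions, and because `unit-get`/`relation-ids`/`relation-set` are juju hook tools that only exist inside a hook context, the hook body is built as text and printed rather than executed:

```shell
# The juju hook tools below only exist inside a hook context, so this
# sketch just assembles and prints the hook body.
HOOK=$(cat <<'EOF'
#!/bin/sh
# hooks/config-changed -- fires when the machine boots, so re-publish
# the unit's current address on each relation. The relation name
# "website" and the "hostname" key are illustrative.
addr=$(unit-get private-address)
for rel in $(relation-ids website); do
    relation-set -r "$rel" hostname="$addr"
done
EOF
)
printf '%s\n' "$HOOK"
```

This only updates peers that already watch for relation-changed; a true unit-reboot hook, as sarnold suggests, would let charms opt in without this indirection.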
<sarnold> lkraider: good luck :)
#juju 2014-11-30
<Tug_> Hi, I'm trying to bootstrap a manual env but it fails to configure Juju machine agent
<Tug_> this is the output I get: http://pastebin.com/tZ2sQErt
<Tug_> any idea ?
<Tug_> juju 1.20.13-trusty-amd64
<Tug_> destination is a trusty vm as well
<Tug_> now I have bootstrapped the env but running juju status fails
<Tug_> 2014-11-30 20:50:41 DEBUG juju.state.api apiclient.go:248 error dialing "wss://juju:17070/", will retry: websocket.Dial wss://juju:17070/: dial tcp 130.211.X.X:17070: connection refused
<thumper> that url looks wrong
<thumper> Tug_: bootstrapped what type of environment... with what config?
<Tug_> thumper, manual env with very simple config
<Tug_> bootstrap-host: juju
<Tug_> the rest is default
<thumper> hmm...
<thumper> is the "juju" host the same as the machine you are running on?
<thumper> or a different one?
<Tug_> a different one
<Tug_> actually I just tried something interesting
<thumper> is it actually running?
<Tug_> I installed juju-core on the machine agent
<Tug_> and set up the same config and it worked
<thumper> it shouldn't need that
<thumper> so something looks wrong
<Tug_> it looks like the port is closed maybe ?
<thumper> can you uninstall juju-core and look again?
<Tug_> on my machine ?
<thumper> you can ssh to the machine and look in /etc/init to see if the jobs are registered
<thumper> no - the fact that it failed before, then worked after you installed juju-core on the other machine
<thumper> could be a missing dependency
<thumper> what version ?
<Tug_> I mean it worked when I ran juju status from the machine agent
<Tug_> not from my machine
<Tug_> 1.20.13-trusty-amd64
<thumper> oh
<thumper> perhaps it is just a problem resolving 'juju' ?
<Tug_> I still get "connection refused" from my machine
<Tug_> yeah that's what I was thinking
<thumper> you can try this...
<Tug_> but it does resolve to 130.211.X.X
<thumper> it is a hack mind
<Tug_> which is correct
<Tug_> and then the port is open
<Tug_> I think
<thumper> well if it works on that machine, but not from your client
<thumper> I'd double check that the port is open
<thumper> can you telnet to it?
<Tug_> telnet: Unable to connect to remote host: Connection refused
<Tug_> nop
<thumper> check your settings :-)
<Tug_> yeah it must be that :)
<Tug_> but sadly it's all set
<Tug_> (it's just a network config on google compute engine)
<Tug_> http://ifjfij.appspot.com/i?b=fe584c4a904d35cc52a5d807a46f6d3413f27308
 * thumper shrugs
<Tug_> at least I excluded juju from the cause :)
<thumper> yeah, there is that
<Tug_> or it could be... here is what I have with netstat
<Tug_> tcp6       0      0 :::17070                :::*                    LISTEN      17596/jujud
<Tug_> maybe juju does not forward ipv4 connections
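The triage thumper walks Tug_ through (does the name resolve, is the API port reachable) condenses to two commands; `nc` is swapped in for telnet here so the check scripts cleanly, and `juju` is the bootstrap-host name from Tug_'s config. On the tcp6 worry: a `:::17070` listener normally still accepts IPv4 clients via v4-mapped addresses unless `net.ipv6.bindv6only` is set, so "connection refused" from outside usually points at firewall rules, as it did here.

```shell
# Condensed connectivity triage for the juju API port.
# "juju" is the bootstrap-host name; substitute your own.
host=juju
getent hosts "$host" 2>/dev/null || echo "no DNS entry for $host"

status="closed or unreachable"
nc -z -w 5 "$host" 17070 2>/dev/null && status=open
echo "API port 17070: $status"
```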
<Tug_> fyi, it was a bug in google cloud (for real) removing the rule and recreating it did the trick :)
#juju 2015-11-23
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/nova-compute/lp1509267-stable/+merge/278286
<gnuoy> +1
<jamespage> gnuoy, https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1518771
<jamespage> ?
<mup> Bug #1518771: nova-compute hugepages breaks the boot process <upstart :New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1518771>
<Icey> exporting a bundle through juju-gui doesn't export added storage?
<marcoceppi> Icey: storage isn't in the bundle format AFAIK
<marcoceppi> rick_h_: ^?
<tvansteenburgh> marcoceppi: suggestions on this? https://bugs.launchpad.net/charms/+bug/1513612
<mup> Bug #1513612: HyperV lis-test Charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1513612>
<marcoceppi> tvansteenburgh: we can't really test it with the automated testing because we don't have a windows machine in our CI and we don't have simplestreams for Windows deployments
<marcoceppi> that charm really only works right now on a MAAS enabled substrate with a Windows image baked in
<mwenning> marcoceppi, how should I proceed then?
<marcoceppi> tvansteenburgh: we need to probably blacklist LXC/LXD with charms in the windows series
<marcoceppi> mwenning: I'm not sure. We should review the code as normal, if possible. But testing is the best way for us to be sure it works. I'll see if we can get a windows enabled substrate for the test runner
<mwenning> marcoceppi, the OIL team has a windows image that will deploy (from blake_r).  That's what I've been using to develop
<marcoceppi> mwenning: sure, we need to get that into our charm-testing infrastructure so we can validate windows charms
<marcoceppi> we've not had many go through our pipeline :)
<marcoceppi> mwenning: for the time being I have a maas and might be able to load the windows image on there to test this one off
 * marcoceppi looks into how we can get resources to test this
<mwenning> marcoceppi, tvansteenburgh , also had a question (in the bug) - this charm requires a download of ~6M  zip file containing the lis-test code.
<mwenning> should I re-do this as a "fat charm" & include it?
<mwenning> seems like it would be easier to control
<marcoceppi> mwenning: is the zip accessible on the internet?
<mwenning> marcoceppi, yes, it is on GitHub, but I'm using one that I've downloaded & modified
<mwenning> marcoceppi, sorry the _zip_ is not accessible, source is.
<marcoceppi> mwenning: you could fat-pack it, if it's not easily accessible from the public internet
<mwenning> marcoceppi, cool I'd prefer that, easier to control new versions.   Also it's going in OIL and they don't like outside downloads
<mwenning> marcoceppi, tvansteenburgh , will amulet work in windows?   that's probably the next thing
<marcoceppi> mwenning: amulet is python, Python can be installed in Windows, it should work. However, I'm not sure how juju-run/juju ssh work in a Windows environment
<marcoceppi> there's a lot of unknowns here, you're trailblazing a bit ;)
<mwenning> marcoceppi, :-)
<rick_h_> marcoceppi: Icey I think there's work underway to add it. axw was doing something and frankban was reviewing it if I recall
<rick_h_> marcoceppi: Icey but once that lands a gui release will have to be cut to support exporting it.
<rick_h_> urulama: hatch fyi ^
<frankban> rick_h_: yes storage is being added to bundlechanges and to the bundle format
<rick_h_> frankban: and will export support that or do we need a card for that to get updated as well?
<rick_h_> frankban: or the bundlechanges support adds the export as well?
<frankban> rick_h_: export from the GUI? Jeff is working on adding storage to the GUI
<rick_h_> frankban: ok
<marcoceppi> cory_fu: I've got some justifications for the update-status hook
<cory_fu> FYI, in case I wasn't clear before, I'm for including the update-status hook, I just don't want anyone to be surprised when their handlers run every 5 minutes
<marcoceppi> cory_fu: they should have smarter handlers ;)
<marcoceppi> this is what I'm trying to do, as an fyi: http://paste.ubuntu.com/13477912/
<marcoceppi> since there's so many different layers at play, each method just calls update_status() at the end of its run atm
<cory_fu> Agreed.  And having them re-run every 5 minutes has actually been really helpful for me when developing and having my charms get into a blocked state that I needed to resolve just by re-running a hook w/o changing any config
<marcoceppi> cory_fu: should I mail the list about including this?
<cory_fu> Probably worth doing, yes
<marcoceppi> cory_fu: cool, I'll drop a line today
<cory_fu> marcoceppi: I'd also like you to compare how we're handling the status logic in https://github.com/johnsca/layer-apache-spark/blob/master/reactive/spark.py and https://github.com/ktsakalozos/layered-hive-charm/blob/master/reactive/hive.py to your approach
<marcoceppi> cory_fu: ohh, I'd love to, I've been trying to find a simple way to do this
<marcoceppi> cory_fu: ah, I see, leveraging states more
<marcoceppi> I like, I need to set/react to more states in general. I'm a little sparse in my layers to date
<marcoceppi> cory_fu: I'll refactor my django layer a bit to use this approach
<marcoceppi> cory_fu: I have a use case for actions in reactive too
<marcoceppi> well, s/use case/need/
<cory_fu> Agreed, we need to support actions
<marcoceppi> cory_fu: is there a way to do something like `psql.changed()` to see if relation data has changed?
<marcoceppi> cory_fu: nvm, I can just use states
<jose> arosales, marcoceppi: hey! I wanted to confirm if you guys wanted to be mentors for Google Code-In
<marcoceppi> jose: yes
<cory_fu> marcoceppi: Where are you talking about setting the "changed" state from?  The charm layer?
<jose> marcoceppi: ok, you're getting a PM to confirm something real quick and we'll get this rolling
<arosales> jose: yes +1 :-)
<marcoceppi> cory_fu: I was referring to postgresql receiving changed data, but I just realized I don't care that much
<jose> woot woot! same for you!
<marcoceppi> since the interface takes care of that
<cory_fu> marcoceppi: Right.  The interface should handle changes and turn them into appropriate states.  But you also shouldn't expect a "db.changed" state or similar from an interface layer (nor try to create one) because it almost certainly won't work the way you expect (I know from experience)
<marcoceppi> cory_fu: yeah, I found a better way around what I was trying to work though
<cory_fu> marcoceppi: If you do need to track changes in data, though, you can use https://pythonhosted.org/charms.reactive/charms.reactive.helpers.html#charms.reactive.helpers.data_changed
<marcoceppi> jose: can you send along good examples of tasks?
<jose> marcoceppi: Code In Sample Tasks (https://docs.google.com/document/d/1povliHCv-_5AuTdRtrs-AJTZBbyNhufZYs_bykuALHA/edit?usp=docslist_api)
<marcoceppi> jose arosales I really don't think we want to encourage a charm review as a task? Not sure the scope of this though
<jose> marcoceppi: those were just samples of things we do written in task format
<marcoceppi> jose: ack, cool
<marcoceppi> jose arosales we should meet up to discuss, to avoid building duplicate tasks?
<jrwren> can someone help me understand some failing test output?  http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1485/console  I see a make -s lint and make -s test being run, but the charm doesn't have a Makefile. what is happening?
<marcoceppi> jrwren: yes it does http://bazaar.launchpad.net/~evarlast/charms/trusty/apache2/add-logs-interface/view/head:/Makefile
<jrwren> ok, something is very broken on my end. thanks.
<jrwren> i'm going back to bed.
<arosales> marcoceppi +1 on meeting up, perhaps post us holiday though
<marcoceppi> arosales: well we have 14 days before they're due ;)
<arosales> marcoceppi, ah well then :-)
<arosales> I guess sooner rather than later
<arosales> marcoceppi: my first thought is failing charm test triage or picking a few charms we know need some tlc, or is that too deep of a task?
 * marcoceppi dunno
<beisner> hi jrwren, tvansteenburgh - looks like we are all hitting the same mongodb test race.  in my observation, when everything is just right, and under low-load, the tests will generally pass.  but the sleep time is really just a bug in itself, and the tests will need to be more clever about knowing when to start poking.  re: bug 1518468
<mup> Bug #1518468: mongodb functional tests have race condition due to fixed sleep time <amulet> <uosci> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1518468>
<jrwren> beisner: thanks for the replies and this msg. I'll update the test to poll.
<beisner> jrwren, awesome.  thanks!
<tvansteenburgh> jrwren, i'm curious what you will be checking for in the polling loop
<tvansteenburgh> i ask because amulet's sentry.wait() already does this in a non-charm-specific way, waiting for every unit to quiesce (no running hooks) for at least 30 seconds
<tvansteenburgh> i assume what you have in mind is specific to the mongodb charm?
<jrwren> tvansteenburgh: i'll poll every 10s until some max timeout.
<jrwren> tvansteenburgh: I'm unsure, but I think the issue is not juju and something sentry.wait() would trigger, but mongodb and its replset choosing a primary
<tvansteenburgh> fwiw, i think the most correct way forward is to do what beisner suggested and have the charm implement extended status to advertise when it is ready, then use amulet's sentry.wait_for_messages() to block until that status is set. but i realize that may not be a feasible change to make right now
<tvansteenburgh> jrwren: ah, i see
<adam_g> spinning my wheels trying to bootstrap a manually provisioned container: http://paste.openstack.org/show/479791/ any ideas?
<marcoceppi> thumper: lp:~ubucon-site-developers/ubucon-site/ubucon-layer and https://github.com/marcoceppi/layer-django
<marcoceppi> thumper: zero to no documentation yet, the first branch is what a consumer of the django layer looks like and the second is the actual guts of it
 * thumper nods
<marcoceppi> thumper: I've charmed the ubucon site and am working on taiga, which is also django. Trying to make the layer as useful as possible
<thumper> marcoceppi: I'd have to compare with what I currently deploy
<thumper> I have a python-django charm that starts celery workers when related to rabbitmq
<marcoceppi> thumper: yeah, I know you're doing a pretty expansive deployment
<thumper> I haven't proposed it yet...
<thumper> but my focus hasn't been charm development, but site development
<thumper> I'd love it to be more useful...
<marcoceppi> thumper: yeah, totally understand
<blr> marcoceppi: is there some documentation somewhere on what layers provide?
<marcoceppi> blr: not sure I understand what you mean
<blr> I'm almost entirely ignorant of the changes in the reactive framework... our charms just use charmhelpers
<marcoceppi> blr: so each layer provides its own set of events it creates/reacts to
<marcoceppi> they're typically documented in the readme
<blr> marcoceppi: does this complement the services framework, or is it orthogonal?
<marcoceppi> blr: it replaces, services framework is still valid but reactive is the evolution of it
<blr> right okay... so we should probably look at refactoring our charms.
<marcoceppi> blr: well, I don't know if refactor is the right word
<marcoceppi> blr: I think new charms going forward should be in reactive
<marcoceppi> but services framework is still valid
#juju 2015-11-24
<axw> marcoceppi Icey: storage will be supported in bundles when deployed from CLI in 1.26, not sure when juju-gui support will be added
<axw> rick_h_: ^^
<lathiat> storage in bundles?
<axw> lathiat: yes, the ability to specify how storage is allocated for charms that support the new storage feature
<axw> lathiat: e.g. how many disks to allocate for each unit by default
<rick_h_> axw: conversations took place today. we wanted to check about creating pools from bundles?
<axw> rick_h_: that sounds a bit odd, since a bundle should be deployable to multiple clouds?
<rick_h_> axw: true but it also can't be a full deployment without the pool. it's sonething to chat on there.
<rick_h_> axw: but the bundle export with storage will be on gui 2.0 in dec i think
<axw> rick_h_: yeah I think it needs sprint-meeting discussion. it doesn't seem straight forward to me. for some providers the pools are really environment specific. e.g. for MAAS, you can specify tags that disks must match... that's going to be specific to an installation of MAAS
<axw> rick_h_: so then you have (bundle, pool, MAAS-install) tuple that defines your deployment
<rick_h_> axw: right, so curious if it's like specifying machines and then putting services on those machines
<rick_h_> axw: it can be very substrate specific and we'd not want those in the store
<rick_h_> axw: but useful for a repeatable deploy for your own use
<axw> rick_h_: yeah, I guess so. like using instance-type in constraints.
<rick_h_> axw: right
<axw> rick_h_: FWIW, there's a way to specify an override for storage when deploying a bundle on the command line. --storage <service>:<storage-name>=<constraints>. so it's possible to deploy a bundle with a specific pool, it's just not self-contained
<axw> rick_h_: if it's just for personal use, I'm not really convinced the amount of work involved in supporting it is warranted, but I'm not deploying bundles all the time so I may have a warped view :)
 * Sharan slaps kwmonroe around a bit with a large fishbot
<tvansteenburgh> rick_h_: need clarification - with new juju store stuff, can one promulgate directly from a /development/ url, or must that development revision be published first?
<rick_h_> tvansteenburgh: yes, you can --publish a ~rharding/development/mysql to publish the latest development revision
<rick_h_> tvansteenburgh: at least that's the spec, I'm looking forward to getting the client to try it out
<tvansteenburgh> rick_h_: and you could do the same with a specific revision i assume?
<rick_h_> tvansteenburgh: yes
<tvansteenburgh> oh wait, you didn't answer my question :)
<rick_h_> tvansteenburgh: or even just 'charm upload --publish .' to both upload it to dev and publish it in one stroke
<tvansteenburgh> *promulgate*
<rick_h_> tvansteenburgh: oh,  so in promulgate you have to first promulgate the published url.
<tvansteenburgh> ok, thanks
<rick_h_> tvansteenburgh: then the end user can only upload to develop, and only those with the promulgate ACL can publish from develop to the promulgated published space
<rick_h_> if that makes sense
<tvansteenburgh> yep
<lazypower> woo
<lazypower> new tooling
<stokachu> anyone know if there is a way to react to a 'relation finished' using juju api or any other means?
<stokachu> not specific to reactive pattern just in general
<tvansteenburgh> rick_h_: how does one determine whether a user-namespace charm has advanced beyond what is currently promulgated?
<tvansteenburgh> stokachu: relation-departed|broken hooks?
<stokachu> looking for a way to run a process against a service after it has joined a relation
<stokachu> but want to make sure the relation stuff is done
<tvansteenburgh> ah, probably need to use status-set for that
<roadmr> helloo juju people. How can a unit know its own id? i.e. which command can I run, while inside unit foo/0 (via ssh), to get "foo/0" (or the tag: unit-foo-0)?
<marcoceppi> roadmr: if you're in a hook, $JUJU_UNIT_NAME
<jrwren> is there a place to see queue of tests to be run for stuff at http://reports.vapour.ws/ ?
<roadmr> marcoceppi: what if I'm not in a hook? :( i.e. a plain shell
<marcoceppi> jrwren: what do you mean
<marcoceppi> roadmr: I mean, not easily, there are "ways"
<jrwren> marcoceppi: I updated a MR and am wondering when tests for it will run again.
<marcoceppi> jrwren: never, we have to push a button for updates. Only initial new items are run. In the new review queue it'll work on update but it's hard for us to track that in the old one
<roadmr> marcoceppi: I could write the unit's id/tag to a file as part of a charm, then I'd have that info available for later... off the top of my crazy head that's one idea
<marcoceppi> jrwren: link me to the review item in the review queue and I'll kick off tests
<marcoceppi> roadmr: yeah, the other is to sniff init files, but that won't work if you have multiple units on the node
<jrwren> marcoceppi: oh!  I'm glad I asked :)  https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
<marcoceppi> jrwren: that's not a review queue link ;)
<jrwren> marcoceppi: oh.
<marcoceppi> jrwren: http://review.juju.solutions/review/2357 will show tests and the results
<roadmr> marcoceppi: I see them! is it at all possible for two units of the *same* service to be deployed on the same node?
<marcoceppi> jrwren: if there are no tests "PENDING" then none are running
<jrwren> marcoceppi: where do I get a review queue link? its in the list here, I've no idea what link you want. http://reports.vapour.ws/latest-bundle-and-charm-results
<marcoceppi> roadmr: if someone is crazy, and does a juju add-unit --to, then yes
<marcoceppi> jrwren: http://review.juju.solutions
<roadmr> marcoceppi: if not, maybe init file poking would help me, since I don't care about services bar, baz, quux, as long as I know this node is foo/0
<roadmr> marcoceppi: oh, yes the crazy factor :)
<marcoceppi> roadmr: it's probably easier to write it to a file, tbh
<marcoceppi> jrwren: I just queued up tests for it
<roadmr> marcoceppi: hey thanks for your help/feedback :) at least I know 1) it's not straightforward, 2) I have several options to work with.
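roadmr's write-it-to-a-file idea is a one-liner from any hook: juju sets `$JUJU_UNIT_NAME` in the hook environment, so persist it once and plain ssh sessions can read it back later. A runnable sketch; the variable is faked and the path is a temp file here (a real charm might pick somewhere like /etc, which is an arbitrary choice):

```shell
# Persist the unit name from a hook environment so out-of-hook shells
# can recover it. JUJU_UNIT_NAME is set by juju inside hooks; it's
# stubbed here so the sketch runs standalone.
: "${JUJU_UNIT_NAME:=foo/0}"   # stub when not in a hook context
STATE=$(mktemp -d)             # stand-in for a real path the charm owns
echo "$JUJU_UNIT_NAME" > "$STATE/unit-name"

# later, from a plain shell on the unit:
cat "$STATE/unit-name"
```

As marcoceppi notes, this beats sniffing init files, which breaks when multiple units share a node.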
<jrwren> marcoceppi: http://review.juju.solutions/review/2357 ?  that link?
<marcoceppi> yes
<marcoceppi> jrwren: it'll say either PASS, FAIL, PENDING, or RUNNING
<jrwren> marcoceppi: in that case, this too please: http://review.juju.solutions/review/2371
<marcoceppi> jrwren: so if you don't see any PENDING or RUNNING then ping a ~charmer to kick them off
<marcoceppi> jrwren: done :)
<jrwren> marcoceppi: thank you.
<marcoceppi> tvansteenburgh: it looks like LXC substrate for charm testing is broken again "ERROR there was an issue examining the environment: cannot use 37017 as state port, already in use"
<marcoceppi> tvansteenburgh: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1535/console
<tvansteenburgh> marcoceppi: thanks, looking into it
<roadmr> marcoceppi: hey feel free to defer me if busy, but what I want to ultimately achieve is being able to deploy a crontab configuration to all units for one particular service but have only one of them run the crontab, the others should ignore or early-exit from it. I can't imagine I'm the first to need something like this?
<marcoceppi> roadmr: so, are you doing this crontab as part of a charm? or outside the charm?
<roadmr> marcoceppi: good question :) my plan is for the charm to write the crontab file (say in /etc/cron.daily/blah)
<marcoceppi> roadmr: if you want it to only run on one unit, then just have this kind of codeblock
<marcoceppi> http://paste.ubuntu.com/13493735/
<marcoceppi> roadmr: juju will elect a leader from the service group for you at deploy time, and there will only ever be one leader. If that leader goes away, a `leader-elected` hook will fire where you can codify that check so the cron will only ever exist on one machine
<roadmr> marcoceppi: oh cool! yes, I was gravitating towards using is-leader but was thinking of using it at runtime (i.e. in the unit). Doing it on hooks sounds reasonable
<marcoceppi> roadmr: it'd be a safer and more repeatable way to do what you're looking for
<roadmr> marcoceppi: indeed... and it sounds like the correct way to use is-leader, rather than my horrid mental hacks :)
<marcoceppi> :D
<roadmr> marcoceppi: cool! I'll dive into doing it that way. Thanks so much!
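The shape of the check in marcoceppi's paste is small enough to sketch from the conversation alone. `is-leader` prints True or False and, being a hook tool, only exists inside a hook context, so this sketch falls back to a stubbed "True" when it isn't on PATH; the cron path and payload are illustrative stand-ins for roadmr's /etc/cron.daily/blah:

```shell
# Leader-guarded cron install: run from config-changed and
# leader-elected so leadership changes re-evaluate the file.
CRON=$(mktemp)   # stand-in for /etc/cron.daily/blah
leader=$( { command -v is-leader >/dev/null && is-leader; } || echo True )

if [ "$leader" = "True" ]; then
    printf '#!/bin/sh\n/usr/local/bin/blah\n' > "$CRON"
    chmod +x "$CRON"
    echo "leader: installed cron job"
else
    rm -f "$CRON"   # non-leaders make sure the job is absent
    echo "not leader: removed cron job"
fi
```

Guarding in the hook rather than inside the cron script itself is what makes this repeatable: the file simply doesn't exist on non-leaders, instead of every unit racing an is-leader check at runtime.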
<cory_fu> marcoceppi: Still no ANN for the new charm-tools?  mbruzek was hitting an issue that was fixed in the new version with --hide-metrics
<marcoceppi> cory_fu: the latest version is out and live
<marcoceppi> cory_fu: it's been out (1.9.2) since Friday, still fighting homebrew
<cory_fu> Anything I can help with?
<marcoceppi> cory_fu: it's grunt work, see https://github.com/Homebrew/homebrew/pull/46273 apparently the way I've been doing Formulas for the past 2 years is "wrong"
<marcoceppi> cory_fu: I'm working on it again now, if I can get poet to work I should have it in homebrew soon enough
<cory_fu> Thanks a bunch
<cory_fu> mbruzek's issue was just that he was not aware of the new release.  It is working now, I believe.
<marcoceppi> of course, poet doesn't install cleanly.
<tvansteenburgh> how would one fix this: ERROR failed to bootstrap environment: cannot make cloud-init init script for the machine-0 agent: relative path in ExecStart (cloud-city/charm-testing-lxc/tools/machine-0/jujud) not valid
<tvansteenburgh> "relative path not valid"... i have cloud-city/ dir, but nothing beyond in that path
 * tvansteenburgh tries creating dirs...
<tvansteenburgh> nope
 * tvansteenburgh facepalms. note to future self - don't set JUJU_HOME to a relative path
<rick_h_> tvansteenburgh: ouch, that seems like a party there
<blahdeblah> Hi all, quick Q: is it possible to tell juju to use MAAS to bootstrap and install the bootstrap node into a container in MAAS rather than the base node itself, leaving the base node available for other juju units?
<marcoceppi> blahdeblah: so bootstrap in a LXC container?
<blahdeblah> marcoceppi: yep
<marcoceppi> blahdeblah: not really, at least not atm, however you can just create a KVM container on the maas-master and tag it "bootstrap" so you can `juju bootstrap --constraints="tags=bootstrap"`
<blahdeblah> yeah - I've done that before, but it's cumbersome and manual
<blahdeblah> When the LXD driver comes, will it be possible?
<lazypower> marcoceppi: it may be complete coincidence, but if i have a machine tagged bootstrap maas always picks it for the bootstrap node.
<lazypower> i dont have to pass the --constraints bit
<marcoceppi> blahdeblah: I don't know, I'm not sure if you'll be able to register lxd as a chassis for maas
<blahdeblah> lazypower: hi - did you see my question about the DNS charm recently?
<marcoceppi> though that would be pretty awesome
<lazypower> blahdeblah : I did not
<blahdeblah> marcoceppi: :-(
<lazypower> blahdeblah on the repo?
<blahdeblah> lazypower: No, here; I've read through the doco on it a couple of times and I'm still struggling to understand what you're aiming at with it.
<lazypower> blahdeblah : a single charm to handle DNS
<lazypower> either setup the bind infrastructure to handle DNS for me, or proxy requests to my upstream DNS provider like Rt53
<blahdeblah> lazypower: Was hoping you'd have some time to have a hangout to discuss so I can fit into it with the stuff I'm working on.
<lazypower> Sure. I have a todo to lend a hand integrating it into the big data bundles possibly
<lazypower> I can wrap it into that todo, and sync next week over it so it'll be fresh in my mind?
<lazypower> i haven't looked under teh hood of the charm in a bit, its been an on-going WIP
<blahdeblah> marcoceppi: Any idea who are the people to talk to about explaining my use case?
<blahdeblah> lazypower: Cool - thanks
<blahdeblah> lazypower: The current thing that you seem to be aiming for is sending requests to manage DNS records over the relation, right?
<lazypower> Correct
<blahdeblah> lazypower: What I'm hoping to do is tie the DNS records directly to a relation, so that as soon as I add a unit and it's functional, it gets added to DNS without needing to ask anything.
<lazypower> Thats exactly the plans of the auto relation
<lazypower> it uses the unit's name and a wildcard domain to populate the entire model
<lazypower> that or SRV records
<lazypower> er
<blahdeblah> I thought that might be the case, but the doco for it is empty at the moment. :-)
<lazypower> welllll
<lazypower> lets sling some code g-funky
<lazypower> :D
<marcoceppi> blahdeblah: the maas team, or email the juju mailing list about it
<blahdeblah> marcoceppi: I would have thought it would be all in the juju driver side of things; we can deploy units to LXCs on MAAS now, just not the bootstrap node.
<marcoceppi> blahdeblah: wait
<marcoceppi> blahdeblah: what?
<marcoceppi> the lxc container has to run somewhere though
<blahdeblah> lazypower: I'm definitely happy to do some coding for it
<blahdeblah> marcoceppi: yep, and the MAAS driver allows saying "juju add-unit --to lxc:N", where N is a MAAS-provisioned machine.
<blahdeblah> marcoceppi: s/driver/provider/ maybe - not sure on the exact terminology there
<marcoceppi> blahdeblah: sure, that makes sense, that exists for other providers
<marcoceppi> so what you want is for maas to spin up a machine and put the bootstrap node on a lxc container on that node?
<blahdeblah> exactly
<blahdeblah> That way an environment could be fully auto-provisioned in MAAS without requiring a dedicated bootstrap machine or manually adding KVM nodes to MAAS.
<los_> Q: is there a current list of providers for JuJu?  "providers" is the "per IaaS enabler" bits right?
<cholcombe> i seem to have run into this problem with the manual provider: https://bugs.launchpad.net/juju-core/+bug/1412621
<mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <adoption> <bootstrap> <bug-squad> <charmers> <cpec> <cpp> <maas-provider> <mongodb> <oil> <juju-core:Fix Committed by frobware> <juju-core 1.24:Won't Fix> <juju-core 1.25:Fix Released by frobware> <https://launchpad.net/bugs/1412621>
<marcoceppi> los_: there is
<los_> marcoceppi: thankx...I'll look again.  Did you see this re: LXD? https://www.youtube.com/watch?v=QyXLRDN0ERo
<marcoceppi> los_: `juju init --show | grep "type:" | grep -v "#" |  awk '{print $2}'` which says the following:
<marcoceppi> los_: openstack maas joyent gce ec2 cloudsigma vsphere manual local azure
<marcoceppi> los_: Yes! I have seen the lxd provider, it'll be in the 1.26-alpha2 release which should be out in a week or so
<los_> marcoceppi: I swear, if I could wave a magic wand and banish all the old maas/juju vids I think that'd be my best contribution!
<marcoceppi> los_: haha, yeah... you and me both. You should checkout the Juju video channel which is only publishing fresh content
<marcoceppi> los_: https://www.youtube.com/channel/UCSsoSZBAZ3Ivlbt_fxyjIkw
<marcoceppi> los_: we're publishing new content about once or twice a week to that channel
<los_> marcoceppi: THANKS!  Awesome.
#juju 2015-11-25
<pmatulis_> using juju-quickstart can i specify the maas node i want to use as bootstrap server?
<marcoceppi> pmatulis_: good question, not sure. You can pre-bootstrap the environment then use quickstart
<rick_h_> pmatulis_: I think you can pass --constraints to the quickstart command
<rick_h_> pmatulis_: see juju-quickstart --help for the note on constraints
<pmatulis_> alright guys, looking
<pmatulis_> i would love to know why the tags i created (and can list) with the maas cli do not show up in the maas gui...
<marcoceppi> pmatulis_: what version of maas?
<pmatulis_> marcoceppi: 1.8
 * marcoceppi shrugs
<pmatulis_> rick_h_, marcoceppi: do you guys know if juju-deployer is being actively maintained? who supports that?
<rick_h_> pmatulis_: as needed between folks on eco and landscape
<rick_h_> pmatulis_: what's up?
<pmatulis_> rick_h_: i just wanted to know if it is being maintained in some way
<rick_h_> pmatulis_: there's been work to let `juju deploy` handle the bundle without extra tools
<rick_h_> pmatulis_: in the next version of juju
<pmatulis_> ok
<rick_h_> pmatulis_: it's mostly maintenance as we support bundles in core
<pmatulis_> rick_h_: ack
<maht> hi, I just installed Juju and the GUI in a MAAS cluster in a private network, and I am trying to connect to the Juju GUI from Safari and Chrome using an SSH tunnel. In Safari it connects without problems, but Chrome keeps trying to switch to SSL despite the secure option of the juju-gui service having been set to false
<Xat`> hi guys
<Xat`> is juju needed for maas ?
<bloodearnest> hey folks. I'm writing a very simple subordinate for managing some files/config. Its only relation is the subordinate relation to its primary charm
<bloodearnest> I'm also trying to learn the reactive framework in the process
<bloodearnest> the main thing I want to hook into is config-changed, but it's not obvious how to do that, at least from the examples
<bloodearnest> is it as simple as when('config-changed') ?
<tvansteenburgh> bloodearnest: @hook('config-changed')
<bloodearnest> tvansteenburgh, that is charmhelpers.hookenv.hook ? Or some reactive hook decorator
<bloodearnest> ?
<tvansteenburgh> bloodearnest: charms.reactive.decorators.hook
<tvansteenburgh> https://pythonhosted.org/charms.reactive/charms.reactive.decorators.html
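The registration/dispatch pattern behind a decorator like charms.reactive's @hook can be sketched as a stdlib-only toy (illustrative only, not the real framework):

```python
# Toy sketch of decorator-based hook dispatch, loosely modeled on the
# @hook decorator discussed above; illustrative, not charms.reactive itself.
_handlers = {}

def hook(name):
    """Register the decorated function as a handler for the named hook."""
    def register(fn):
        _handlers.setdefault(name, []).append(fn)
        return fn
    return register

@hook('config-changed')
def on_config_changed():
    # in a real charm this would re-render config files, restart services, etc.
    return 'config handled'

def dispatch(name):
    """Run every handler registered for the hook, in registration order."""
    return [fn() for fn in _handlers.get(name, [])]
```

Calling dispatch('config-changed') runs on_config_changed; the real framework additionally tracks states so @when handlers can fire across hook invocations.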
<bloodearnest> tvansteenburgh, ta
<bloodearnest> using reactive, do I *have* to write a relation class? Can I somehow use the default RelationBase class or similar?
<jacekn> hello. What is the best way to get new layer I wrote into http://interfaces.juju.solutions/ ?
<lazypower> jacekn if you have a launchpad account, you can self publish the layer in the index.
<lazypower> jacekn top left corner there's a login with launchpad link
<jacekn> aha, let me do that
<lazypower> bloodearnest: It's kind of a packaged deal, yes
<lazypower> bloodearnest: early adopters get to define the interface(s) - but think of all the people coming along after you that will see the interface and get to consume it without any investment :)
<bloodearnest> lazypower, so, I'm confused. I am writing a new charm, that provides 1 relation. I am trying to use @when('relation.available') to trigger logic to send certain information down the relation when it's added.
<bloodearnest> to do that, I need a class that sets the 'state' to 'available' somehow, right?
<lazypower> bloodearnest 1 sec
<lazypower> i have a doc for you
<lazypower> bloodearnest start here: https://github.com/mbruzek/docs/blob/mbruzek-developer-guide/src/en/developer-layers-interfaces.md
<lazypower> bloodearnest: implementation is here https://github.com/mbruzek/docs/blob/mbruzek-developer-guide/src/en/developer-layers-interfaces2.md
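The split those docs describe reduces to a small toy: the interface layer translates raw relation events into named states, and the charm layer's handlers gate on those states. Everything below is a stdlib stand-in, not the real RelationBase/@when API:

```python
# Toy model of the interface-layer / charm-layer split; the names are
# illustrative stand-ins for charms.reactive's RelationBase and @when.
states = set()

class RelationStub:
    """Plays the role of an interface layer's RelationBase subclass."""
    def __init__(self, relation_name):
        self.relation_name = relation_name

    def joined(self):
        # a real interface layer would do this in its -relation-joined hook
        states.add('{}.available'.format(self.relation_name))

def when(state):
    """Only run the decorated handler once the named state has been set."""
    def register(fn):
        def guarded(*args, **kwargs):
            if state in states:
                return fn(*args, **kwargs)
            return None
        return guarded
    return register

@when('myrel.available')
def send_config():
    # the charm layer reacts here without knowing how the state was set
    return 'sent'
```

Before RelationStub('myrel').joined() runs, send_config() is a no-op; afterwards it fires. That gating is what a handler like @when('relation.available') relies on.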
<bloodearnest> lazypower, so, my charm should have 2 layers (+base) then: the interface layer, and the 'charm' layer?
 * bloodearnest again wishes provides and requires were not the terms used in juju
<lazypower> bloodearnest correct, as it's completely reasonable to have a charm which has no relations. It may not be the most useful, but it's a use-case :)
<bloodearnest> especially since my charm is a subordinate that provides a service to the principal charm, but the relation has to be "requires", not provides
<bloodearnest> lazypower, right
<lazypower> bloodearnest: the idea behind having the interfaces as a separate layer is that it breaks apart the conversation happening between units and the implementation - that decoupling gives you a consistent contract to talk to whichever service implements the interface you include in your charm layer
<bloodearnest> yep
<bloodearnest> makes sense to bind the two sides of the relation
<bloodearnest> lazypower, can I define this layer in the same place as the charm layer, or does it need to be complete separate?
<lazypower> by convention it needs to be separate
<lazypower> when you charm build, it will scan your INTERFACE_PATH to find the interface and build the associated hooks for you
<lazypower> bloodearnest - that's covered in the developer-layers-interfaces2.md file
<bloodearnest> lazypower, ok
<bloodearnest> lazypower, a templating/skeleton tool to create a default interface with basic 'available' state management on both provides/requires might be useful?
<marcoceppi> bloodearnest: we're going to be adding those to charm create soon
<marcoceppi> `charm create -t {interface-layer, charm-layer}` etc
<bloodearnest> marcoceppi, nice
<nottrobin> is it possible to setup my environment such that unprivileged users can "juju bootstrap" in the local environment?
<nottrobin> as in a way to make "juju bootstrap" not need sudo, or a way to just enable to sudo commands they do need?
<bloodearnest> nottrobin, I don't think so, as it's a generic sudo bash command that is run, not a specific script that you could give limited access to via sudoers
<nottrobin> bloodearnest: yeah that's what I feared. I was just wondering if there's a directory or file somewhere that I could expand permissions on that would mean sudo wasn't necessary (and maybe the script would be clever enough to realise it)
<bloodearnest> nottrobin, I have a feeling it does | sudo bash :(
<nottrobin> well that's sad
<nottrobin> thanks
<bloodearnest> lxd provider doesn't require root, however
<bloodearnest> but it's brand new
<nottrobin> bloodearnest: brand new, but usable?
<nottrobin> do you know of any guides that could help me get started?
<bloodearnest> nottrobin, I think it will be in the next alpha release, next week I think
<bloodearnest> probably not usable yet
<nottrobin> okay. never mind. nice to know it's coming
<tpsilva> I'm trying to deploy Openstack with autopilot (Ubuntu 15.04), but it hangs at 82%... can anybody help me?
<los__> (How do I get rid of the persisting state so I don't get logged in as los______________ ???)
<erlon> Â¬Â¬
<tpsilva> erlon: ping
<erlon> tpsilva: pong
<erlon> tpsilva: have you tried RDO? they use to be very responsive
<los__> erlon: talkin' to me? :D
<erlon> los__: agree with me :) ?
<lazypower> bloodearnest - https://github.com/juju/docs/pull/746
<lazypower> you may be interested in that :)
<los__> erlon: I was wondering if "RDO" was something as a response to my Q :)
<krondor> hey all attempting to build a reactive charm but when I deploy I'm seeing ImportError: No module named charms.reactive
<krondor> It also occurs when I try using the vanilla forums example cloned from the git repo.
<erlon> los__: actually didn't see your question, I have just entered, it was more about tpsilva question
<los__> erlon: thankx
<los__> Anyone had problems with the GCE provider?  https://jujucharms.com/docs/stable/config-gce is out of date (Google constantly changing dashboard) and I'm getting an error:
<los__> ERROR there was an issue examining the environment: invalid config: key "auth_provider_x509_cert_url" not supported
<los__> had to delete keys: auth_uri, token_uri, auth_provider_x509_cert_url, client_x509_cert_url
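Trimming a downloaded service-account JSON down to what juju accepted can be sketched in stdlib Python; note the unsupported key set below is inferred from this conversation, not an authoritative list:

```python
import json

# Keys juju's GCE config rejected per the error above; inferred from this
# conversation, so verify against your juju version before relying on it.
UNSUPPORTED = {'auth_uri', 'token_uri',
               'auth_provider_x509_cert_url', 'client_x509_cert_url'}

def strip_unsupported(creds):
    """Return a copy of the credentials dict without the rejected keys."""
    return {k: v for k, v in creds.items() if k not in UNSUPPORTED}

# sample (fake) service-account JSON with one rejected key mixed in
raw = '{"client_email": "x@y", "private_key": "k", "auth_uri": "u"}'
cleaned = strip_unsupported(json.loads(raw))
```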
<cory_fu> jpt
<cory_fu> Oops
<los__> https://github.com/CanonicalLtd/jujucharms.com/issues/185
<cholcombe> with juju storage if i forgot to add a device do i juju set {service-name} "/dev/sda" with just 1 device or all of them i had before plus the extra one?
#juju 2015-11-26
<apuimedo> jamespage: gnuoy: I'm getting lxc deployment failures on 1.25.0
<apuimedo> any idea?
<apuimedo> it worked on one of the machines provided by maas, doesn't work at all in another
<apuimedo>         agent-state-info: lxc container cloning failed
<apuimedo> (I have to say that it is for precise, I don't know if that matters)
<suchvenu> Hi
<suchvenu> We have upgraded to the latest 1.25 version of juju, where unit names are created in sequential order
<suchvenu> Previously we had used the following code in amulet test , unit_manager_0 = d.sentry.unit['charmname/0']
<suchvenu> Now using unit_manager_0 = d.sentry['charmname'][0]
<suchvenu> which is working right
<suchvenu> we had also used juju action in amulet as follows
<suchvenu>  uuid = d.action_do('ibm-db2/0', 'download', {"username": config.get('
<suchvenu> How do we correct this for the 1.25 version of Juju ?
<suchvenu> I mean how to provide the unit name ?
<Icey> is it possible to use config values for some settings, unless a specific relation exists? I'd like to support using an external database provided by juju through a relation, OR a config specified one, and I'd like the relation to override the config
<apuimedo> cherylj_: https://bugs.launchpad.net/juju-core/+bug/1441319
<mup> Bug #1441319: intermittent: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop <canonical-bootstack> <cisco> <cpec> <deployer> <landscape> <lxc> <oil> <regression> <systemd> <upstart> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1441319>
<apuimedo> cherylj_: added logs
<roadmr> heya folks, I think I found a bug with juju not firing the leader-elected hooks: https://bugs.launchpad.net/juju-core/+bug/1519994
<mup> Bug #1519994: leader-elected hook never fires <juju-core:New> <https://launchpad.net/bugs/1519994>
<Prabakaran> Hello Matt,
<suchvenu> hi
<suchvenu> I want to run a juju action on a unit, as in: juju action do charmname/0 download
<suchvenu> With the latest version of Juju the unit names keep increasing sequentially. How can I give the unit name to juju action?
<suchvenu> Can anyone pls help
<marcoceppi> suchvenu: what do you mean get the unit name?
<marcoceppi> suchvenu: you can just use `juju status ` to find the unit name
<suchvenu> Hi MArco
<suchvenu> I want to add juju action in amulet test and want to give the unit name there. For each deployment the unit name changes as charmname/0, charmname/1 etc
<suchvenu> So i may not know which is the unitname i need to give in juju action in amulet test
<suchvenu> before for getting the unit manager in amulet we used to call unit_manager_0 = d.sentry.unit['charmname/0']
<Prabakaran> Can anyone help me how to deploy openjdk layer charm available in the link https://github.com/juju-solutions/layer-openjdk as i am not able to find the deployment steps in the README.md file.
<suchvenu> But now with latest version of Juju 1.25, this gives an error as the unit names keeps increasing as charmname/1, charmname/2 etc
<suchvenu> We could get the unit name with this comamnd now : unit_manager_0 = d.sentry['charmname'][0]
<suchvenu> Similarly for juju action, how do we do it ?
<suchvenu> marcoceppi, are you able to understand my query ?
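The same sentry indexing answers the action question: take the unit from d.sentry and read its name off the sentry instead of hard-coding charmname/0. The objects below are stdlib stubs of the access pattern; the real classes come from amulet, and info['unit_name'] is an assumption to verify against the installed amulet version:

```python
# Stdlib stubs showing the access pattern only; real deployments use
# amulet.Deployment, and info['unit_name'] is an assumed attribute to
# check against your amulet version.
class UnitStub:
    def __init__(self, unit_name):
        self.info = {'unit_name': unit_name}

class DeploymentStub:
    def __init__(self):
        # after several deployments the first unit may be ibm-db2/3, not /0
        self.sentry = {'ibm-db2': [UnitStub('ibm-db2/3')]}

    def action_do(self, unit_name, action, params=None):
        # real amulet queues the action and returns its UUID
        return 'uuid-for-{}-{}'.format(unit_name, action)

d = DeploymentStub()
unit_name = d.sentry['ibm-db2'][0].info['unit_name']
uuid = d.action_do(unit_name, 'download', {'username': 'demo'})
```

This way the test never needs to know which sequential number the unit ended up with.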
#juju 2015-11-27
<jamespage> dosaboy_, https://code.launchpad.net/~hopem/charms/trusty/swift-storage/support-rsync-acl/+merge/278424 +1
<dosaboy_> jamespage: ta, minor nit i have to fix in the template then i'll land it
<jamespage> dosaboy, awesome-o
<jamespage> dosaboy, thankyou for picking that one up - much appreciated
<dosaboy> jamespage: heh yeah only took a few months for me to finally get it done ;)
<dosaboy> jamespage: we can always add it the extras on top, iptables rules, pwds etc
<dosaboy> jamespage: you +1 on the -proxy mp too?
<jamespage> dosaboy, probably - link?
<dosaboy> jamespage: https://code.launchpad.net/~hopem/charms/trusty/swift-proxy/support-rsync-acl/+merge/278426
<dosaboy> jamespage: they need to land in tandem
<jamespage> dosaboy, one needs fixing on that one
<dosaboy> jamespage: ack
<dosaboy> jamespage: fixed
<roadmr> hey jujuers, with an impending Juju release in the horizon perhaps https://bugs.launchpad.net/juju-core/+bug/1519994 could use at least a glance-over?
<mup> Bug #1519994: leader-elected hook never fires <juju-core:New> <https://launchpad.net/bugs/1519994>
<jacekn> is it possible to write a charm that can be deployed as a principal service but in other environments as subordinate?
<roadmr> hey juju people. How would I go about getting logs for specific units from the master? juju debug-log has this tendency to block if the log is very short. Is juju ssh $unit_name sudo cat /var/log/juju/unit-${unit_tag}.log the best way?
#juju 2015-11-28
<blahdeblah> Anyone know when 1.25.1 is due to release?  Looking for the fix for https://bugs.launchpad.net/juju-core/+bug/1412621
<mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <adoption> <bootstrap> <bug-squad> <charmers> <cpec> <cpp> <maas-provider> <mongodb> <oil> <juju-core:Fix Released by frobware> <juju-core 1.24:Won't Fix> <juju-core 1.25:Fix Released by frobware> <https://launchpad.net/bugs/1412621>
<blahdeblah> (i.e. release into stable)
#juju 2016-11-28
<hloeung> is there a way to switch back to using a local charm?
<hloeung> I switched to use haproxy from the charmstore but can't seem to switch back
<hloeung> $ juju upgrade-charm --switch local:xenial/haproxy haproxy
<hloeung> ERROR unknown schema for charm URL "local:xenial/haproxy"
<hloeung> $ juju upgrade-charm --switch xenial/haproxy haproxy
<hloeung> ERROR already running latest charm "cs:haproxy-38"
<hloeung> hmm, code says local: is meant to be supported
<hloeung> https://github.com/juju/juju/blob/staging/cmd/juju/application/upgradecharm.go#L278
<hloeung> ok worked it out, had to specify full path, so /srv/mojo/...
<kjackal> good morning juju world!
<anrah> morning!
<khyr0n> nites over here :)
<Ankammarao> Mattyw,Hi
<mattyw> Ankammarao, morning
<Ankammarao> mattyw, still we are not able to create terms
<mattyw> Ankammarao, what's the error you get?
<Ankammarao> error: cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": third party refused discharge: cannot discharge: user is not a member of required groups
<mattyw> Ankammarao, what's the name of the term you're uploading?
<Ankammarao> mattyw,we have tried with different users which they are already members of that group
<Ankammarao> ibm-platform-ac
<mattyw> Ankammarao, and the group you're uploading to?
<Ankammarao> ibmcharmers
<Ankammarao> mattyw,/snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac is the command using to push terms
<mattyw> Ankammarao, and you're a member of the ibmcharmers group?
<urulama> Ankammarao: can you see the groups returned with "charm whoami"?
<Ankammarao> no i am just seeing the user-name logged in
<Ankammarao> sry, i am getting like this "root@islrpbeixv685:~# charm whoami User: rajith-pv Group membership: ibmcharmers"
<urulama> mattyw: ^ this means that the group is ok ... please check access on terms side
<Ankammarao> mattyw, i am not a member of the ibmcharmers group, but my team member tried with his userid (already a member of that group)
<ashipika> mattyw: ping?
<mattyw> Ankammarao, so rajith should be able to push the terms then from his machine?
<mattyw> Ankammarao, and to get access you will need to contact one of the admins for that team
<Ankammarao> mattyw, no, he is unable to push terms
<mattyw> Ankammarao, what error does he get?
<mattyw> Ankammarao, the same one?
<Ankammarao> mattyw,the same error
<mattyw> ashipika, can you help Ankammarao work out what's going on? they're trying to push terms but keep getting "user is not a member of required group"
<ashipika> mattyw, Ankammarao: just trying to verify that it works for me.. one minute, please
<mattyw> Ankammarao, how long as rajith been a member of that group?
<mattyw> ashipika, is it possible they'd need to login again?
<ashipika> mattyw: yes, perhaps
<Ankammarao> mattyw, he has been a member for 6 months or more
<Ankammarao> mattyw,root@islrpbeixv685:~# /snap/bin/charm version charm 2.2.0 charm-tools 2.1.8
<Ankammarao> mattyw, is that version fine?
<mattyw> Ankammarao, looks fine to me - there's some kind of issue with login, ashipika is the expert there so he'll be able to help out
<ashipika> Ankammarao: interesting.. so which term are you trying to push? what is the term id?
<mattyw> ashipika, ibmcharmers/ibm-platform-ac
<mattyw> ashipika, and charm whoami shows ibmcharmers membership
<ashipika> mattyw: that is unusual
<mattyw> Ankammarao, when you ran charm whoami was that running /snap/bin/charm whoami?
<mattyw> Ankammarao, would be interesting to see if they return the same information
<ashipika> mattyw: for me "charm whoami" returns "not logged into https://api.jujucharms.com/charmstore" :)
<Ankammarao> mattyw,showing diffrent user root@islrpbeixv685:~# /snap/bin/charm whoami User: achittet
<ashipika> Ankammarao:  could you please try removing your .go-cookies file?
<mattyw> ashipika, be careful...
<ashipika> mattyw: ?
<mattyw> ashipika, there's some confusion between the go-cookies being used by the charm command and the one being used by the snap charm command
<mattyw> ashipika, and only the snap charm command contains the commands for pushing terms
<mattyw> ashipika, so we need to remove go-cookies for that command
<mattyw> ashipika, which might not be ~/.go-cookies
<ashipika> mattyw: ah, interesting.. i wonder why charm whoami does not work for me
<ashipika> Ankammarao: launchpad says you are not member of any group..
<ashipika> mattyw: ^
<mattyw> ashipika, they're using login for rajith-pv
<Ankammarao> ashipika : yes i am not member to that group, but my team member rajith is a member
<ashipika> Ankammarao: could you please go to https://jujucharms.com and log in as rajith-pv? and then try again?
<Ankammarao> ashipika: ok
<ashipika> Ankammarao: thank you
<Ankammarao> mattyw,ashipika: its working now
<ashipika> Ankammarao: \o/
<ashipika> Ankammarao: it's because of the way our identity manager works.. it cannot get user information until you log in with jujucharms.com at least once..
<ashipika> Ankammarao: sorry for the inconvenience
<Ankammarao> ashika: there is a conflict b/w charm login and /snap/bin/charm login
<Ankammarao> ashipika : both are different users
<Ankammarao> mattyw,ashika: root@islrpbeixv685:~# sudo /snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac ibmcharmers/ibm-platform-ac/1
<Ankammarao> ashika : getting output like "ibmcharmers/ibm-platform-ac/1"
<ashipika> Ankammarao: did you do snap login first?
<Ankammarao> yes
<ashipika> Ankammarao: now you can use ibmcharmers/ibm-platform-ac/1 term in your charms..
<ashipika> mattyw: any idea why the two users should be different?
<Ankammarao> ashika : but we have mentioned the term name like "ibm-platform-ac/1" in metadata.yaml
<Ankammarao> ashipika : no, both users should be the same; earlier I got the error because the two users were different
<Spaulding> folks, one simple question... update-status... what should it return
<Spaulding> it's like a health check?
<Spaulding> so if I try to run curl to see that the website is working, and return an error code ... that should be fine?
<ashipika> Ankammarao: you need to use ibmcharmers/ibm-platform-ac/1 in metadata.yaml.. because the term id consists of the namespace (ibmcharmers) and the term name (ibm-platform-ac).
<ashipika> Ankammarao: if you want your charm to require agreement to a specific term, you need to include the term revision number.. otherwise your charm will always require agreement to the latest revision of the term document
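In metadata.yaml that pinning looks like the fragment below (hypothetical charm name; the terms entry carries namespace/name/revision):

```yaml
# metadata.yaml fragment for a hypothetical charm requiring the term above
name: my-charm
summary: example only
terms:
  - ibmcharmers/ibm-platform-ac/1   # drop the /1 to track the latest revision
```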
<Ankammarao> ashika: ok,thank you
<ashipika> Ankammarao: you're welcome.. if you have any further issues pushing or releasing terms, please ping me
<Ankammarao> ashipika : sure i'll ping you
<dbuliga> hey guys. What is the status of this PR? https://code.launchpad.net/~dbuliga/charms/trusty/nagios/nagios/+merge/288614 Nothing changed on it since 2016-10-07. Is it possible to get it reviewed? Thx. Denis
<geetha> Hi, I have installed VNC server and firefox on an ubuntu 16.04 s390x machine. When I tried to start firefox through the vnc client, it's giving me error: segmentation fault (core dumped).
<bdx> hey whats up guys, I'm having issues connecting to my controller, see -> https://bugs.launchpad.net/juju/+bug/1644634
<mup> Bug #1644634: cannot access controller <juju:New> <https://launchpad.net/bugs/1644634>
<bdx> this has basically rendered all of the models I have provisioned on that controller inaccessible
<bdx> any ideas on what to do here?
<marcoceppi> bdx: so you're not able to restart the mongodb process?
<bdx> marcoceppi: I'm not able to get any correspondence from the controller
<bdx> marcoceppi: my initial inclination was that the controller is just mad iowaiting
<bdx> but my crud monitoring through aws console suggests its not under much load
<marcoceppi> too many file descriptors open?
<marcoceppi> bdx: you're not able to SSh or anything?
<bdx> omg
<bdx> my ssh just connected
<bdx> after like 5+ minutes
<bdx> I was right
<bdx> controller is slammed
<bdx> I couldn't connect all weekend
<bdx> it would just time out
<bdx> I'm actually surprised it is still running
<bdx> marcoceppi: https://s22.postimg.org/gfo71ug4h/Screen_Shot_2016_11_28_at_9_26_25_AM.png
<marcoceppi> okya, that's a lot of mongod processes
<bdx> cloudwatch metrics are poop
<marcoceppi> bdx: can you clean shutdown those processes?
<bdx> I guess ...
<marcoceppi> I want to grab logs, it seems like systemd is just going crazy spawning monogd procs
<marcoceppi> rick_h: ^ ?
<bdx> like `sudo kill <proc#>`
<marcoceppi> `killall -15 mongod`
<marcoceppi> might be quicker
<bdx> right lol
<marcoceppi> bdx: service juju-db stop might be good too
<marcoceppi> bdx: then getting logs and restarting
<bdx> restart the controller?
<bdx> I must have hit a memory increase, my ssh session froze
<bdx> eeehh, waiting for ssh again omp
<rick_h> bdx: marcoceppi sorry was otp, looking
<bdx> marcoceppi: ok, back on the controller
<marcoceppi> bdx: restarting the services, though at this point, restarting the controller VM might not be a bad idea, though grabbing logs will be super helpful
<bdx> https://s14.postimg.org/65u1rqcc1/Screen_Shot_2016_11_28_at_9_33_37_AM.png
<rick_h> bdx: k, yea worst case reset the VM and it should come up if things are working.
<bdx> grabbing logs now
<rick_h> bdx: ouch, so load of 30 but cpu/memory is ok...disk thrashing to no end?
<marcoceppi> bdx: are you running LXD workloads on here?
<bdx> marcoceppi: no, but my controller was initially throwing a bunch of errors due to not having lxd it seemed, so I installed lxd simply to negate the noise
<bdx> rick_h, marcoceppi: my tar of the controller logs is ~100MB, how can I get this to you guys?
<marcoceppi> bdx: is that also gzipped?
<bdx> ya
<marcoceppi> dang. I don't know if LP will let you upload that much, but the bug you linked might be a good place
<rick_h> bdx: if not, try slicing into 10 chunks perhaps? or dropbox link or whatever.
<rick_h> bdx: get me a url and I'll get it and help get it to folks if needed
<bdx> rick_h, marcoceppi: https://bugs.launchpad.net/juju/+bug/1644634/comments/1
<mup> Bug #1644634: cannot access controller <juju:New> <https://launchpad.net/bugs/1644634>
<rick_h> bdx: <3 ty
<rick_h> bdx: are things somewhat sane post-restart?
<rick_h> bdx: or still nuts?
<bdx> rick_h, marcoceppi: no, the controller is iowaiting again, fully maxed
<marcoceppi> bdx: is iotop installed on the machine?
<bdx> rick_h, marcoceppi: I have a redis cluster deployed supporting a staging and QA env, other than that I could ditch this controller, unfortunately I need to keep that QA env up at all costs bc its getting beat on pretty heavily right now
<rick_h> bdx: can you restart the instance?
<rick_h> bdx: or did you do that? /me glanced through the backlog but might have missed it
<bdx> iotop -> https://s21.postimg.org/58l6v7787/Screen_Shot_2016_11_28_at_9_47_38_AM.png
<bdx> rick_h: no, i didn't restart .... do you think that will send my redis cluster into a state of disarray?
<rick_h> bdx: shouldn't affect it at all.
<rick_h> bdx: I mean the controller is not the same machine as the redis cluster right?
<bdx> correct
<rick_h> bdx: restarting the controller should just have agents on the redis machines time out for a bit while it comes back up
<rick_h> bdx: no damage to the running redis processes at all
<bdx> rick_h: ok, restarting controller now
<rick_h> bdx: poking at the logs but will take a sec
<rick_h> lol 1.7G of logs
<bdx> rick_h: I've previously had a whole openstack go tits up after losing the controller ... only reason I ask
<bdx> rick_h: right ...
<rick_h> bdx: really? I'd be curious about that story. We've had folks deploy OS with juju and then go in and shutdown all jujud on each machine/etc
<rick_h> bdx: openstack still ran and they made it a manually run openstack at that point
<bdx> rick_h: yeah, after I shutdown all of the juju agents everywhere I was able to move forward and save it all ... but if I remember correctly my db ended up borked
<rick_h> bdx: the controller db? or the OS mysql db?
<bdx> rick_h: openstack mysql db
<rick_h> bdx: can you add info to the bug about the cloud/instances running the controller/etc?
<bdx> yea, omp
<bdx> ahhhh shoot, wrong issue
<bdx> rick_h: comment #3
<rick_h> bdx: k, ty
<rick_h> bdx: can you note the instance/disk and such of the controller nodes. So there's a storm in the logs, but based on that top output it seems like it's not ram/cpu but disk, so wondering if we can draw some lessons on disk performance and the amount of logs/etc going on
<rick_h> bdx: there's an openstack in here as well as the redis?
<rick_h> how many models with what running?
<bdx> rick_h: no the openstack was yesteryear
<bdx> rick_h: http://paste.ubuntu.com/23549554/
<rick_h> bdx: hmm, seeing this in the log
<rick_h> 2016-11-28 17:32:16 DEBUG juju.apiserver request_notifier.go:140 -> [55] unit-openstack-dashboard-0 795.567548ms {"request-id":44,"error":"watcher has been stopped","error-code":"stopped","response":"'body redacted'"} NotifyWatcher["10"].Next
<bdx> rick_h: yeah, I've been using the dashboard to help troubleshoot keystone v3 ops I'm trying to solidify
<rick_h> bdx: oh ok, so not all of openstack but that is there ok
<bdx> rick_h: I've deployed keystone + barbican as a standalone secrets provisioning
<bdx> yea
<rick_h> bdx: gotcha, just trying to understand the logs as I go through sorry
<bdx> np
<rick_h> bdx: is the restart back up?
<bdx> rick_h: yes
<rick_h> bdx: sane or slammed?
<bdx> rick_h: looking much better now -> https://s22.postimg.org/gkfbj32ld/Screen_Shot_2016_11_28_at_10_09_21_AM.png
<rick_h> bdx: ok, I've got to run to a meeting. I've got hte logs and alexisb is getting someone to look at the bug/details when they come online in a bit.
<bdx> rick_h: thanks man
<rick_h> bdx: thanks for the added details, we're currently working on a lot of this so this is good extra data.
<bdx> rick_h: appreciate it
<bdx> do I get a guinea pig award?
<alexisb> bdx you get the juju ecosystem rockstar award :)
<alexisb> thanks for all the info in the bug
<alexisb> we will get updates in it tonight
<Guest34504> Is it possible to use Juju resource with a bundle? https://jujucharms.com/docs/stable/developer-resources
<Guest34504> If so, pls let me know the way to do it
<bdx> alexisb: awesome, thx
<bdx> after adding and removing a machine multiple times from the manual provider, I am now not able to add the machine anymore, I'm getting this -> http://paste.ubuntu.com/23549809/
<bdx> even though the machine doesn't exist on my controller in any models
<bdx> I've tried ssh'ing in and `rm -rf /var/{log,lib}/juju`, and `sudo deluser --remove-all-files ubuntu`
<bdx> but I still get the same result
<bdx> could the machine be hanging around in the controller somewhere, even though its not shown via `juju status`?
<bdx> rick_h, marcoceppi: ^
<rick_h> bdx: looking, want to check what juju looks for to make that determination
<marcoceppi> rick_h: can you use resources in a bundle?
<rick_h> marcoceppi: not a local resource atm, it's on the 17.04 roadmap list
<marcoceppi> rick_h: ack, ta
<bdx> rick_h: https://bugs.launchpad.net/juju/+bug/1645446
<mup> Bug #1645446: juju incorrectly reports machine already provisioned <juju:New> <https://launchpad.net/bugs/1645446>
<bdx> I'm going to hit every corner case today, fyi
<rick_h> bdx: wheeee, replied with request for as much detail on the before as we can get please
<rick_h> bdx: only work around I could think of atm is to manually remove the doc from the db, or change the ip address/hostname of the machine.
<rick_h> bdx: but looks like some bug in the removal that you hit with the add/remove several times.
<bdx> rick_h: ok, I don't have dynamic provisioning for spinning these up and down ... I could kill-controller and reprovision it all if that would be easier?
<rick_h> bdx: yea, unfortunately that's the non-hackiest path forward.
<rick_h> bdx: since it's in the mongodb that the doc is hanging around that is triggering that error
<bdx> rick_h: alright, updated, thanks
<rick_h> ty bdx
<bdx> rick_h: ehh, even after killing my controller, and re-bootstrapping, I'm getting the same thing
<rick_h> bdx: ?! /me goes back to the code he thought he understood
<rick_h> bdx: when you kill-controller in manual...I wonder if it clears the db? I mean that would be nuts...but.
<bdx> rick_h: yea, I'm wondering if I need to manually tear all that ishk outta there?
<rick_h> bdx: sec, can you grab the logs please?
<rick_h> bdx: looking for lines like: "Checking if %s is already provisioned"
<rick_h> bdx: looks like there's another place that can hit this if the juju service exists on the system
<rick_h> bdx: so that's in upstart or systemd depending on series
<rick_h> bdx: so it's probably not the juju directory, but the services directory that's the issue.
<bdx> rick_h: ok, yeah, its a trusty controller
<rick_h> bdx: but the machine is xenial?
<bdx> rick_h: no, the machine is trusty too
<bdx> ooh, the services dir on the machine ... my b
<rick_h> bdx: right, the systemd script for jujud on that machine that's "already provisioned"
<bdx> yea, I was looking around in there and didn't see anything that said juju .... oh, where is that?
 * rick_h is looking
<rick_h>  /etc/systemd/system ?
<rick_h> anything juju in there?
<bdx> rick_h: http://paste.ubuntu.com/23549930/
<rick_h> bdx: yea, there we go: https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/provider/manual/environ_test.go#L106
<rick_h> bah
<rick_h> bdx: any others from that code link hit?
<jrwren> when did trusty get systemd?
<rick_h> bdx: /etc/init/juju ?
<bdx> I checked there too .. omp
<rick_h> jrwren: no, but the controller is trusty, but the machine is xenial (the add-machine one that is)
<jrwren> oh, ok.
<rick_h> bdx: ok, what's omp? /me is ignorant
<bdx> http://paste.ubuntu.com/23549940/
<bdx> nothing
<bdx> rick_h: no its trusty
<rick_h> ok, so the controller was killed/restart, the machine in question has no reference to the juju service...
<bdx> rick_h: here we go -> http://paste.ubuntu.com/23549954/
<bdx> rick_h: exactly
<bdx> rick_h: should I kill the controller, then ssh in and rm everything juju and mongo from the controller ?
<rick_h> bdx: not yet, /me is chasing the code from that log output
<rick_h> ok, that is definitely from https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/environs/manual/init.go#L34 then
<rick_h> at least the log lines match up
<rick_h> so time to figure out wtf we run on systemd to get the list of services
<rick_h> bdx: can you run this on the machine? -- /bin/systemctl list-unit-files --no-legend --no-page -t service | grep -o -P '^\w[\S]*(?=\.service)'
<bdx> rick_h: bash: /bin/systemctl: No such file or directory
<rick_h> bdx: this is on the xenial host?
<bdx> rick_h: there is no xenial host
<rick_h> bdx: on that pastebin it said xenial?
<bdx> wha?
<rick_h> http://paste.ubuntu.com/23549833/
<rick_h> bdx: ok sorry, I thought you had a trusty controller with a xenial machine you were trying to add
<rick_h> bdx: I see now that your other paste said it was --trusty
<rick_h> bdx: sorry, my confusion. Ok, looking in the wrong place then
<rick_h> bdx: so new command for trusty: sudo initctl list | awk '{print $1}' | sort | uniq
<bdx> rick_h: http://paste.ubuntu.com/23549991/
<bdx> its there
<rick_h> bdx: there you go, so Juju thinks the machine's provisioned because of the juju entries in there
<bdx> awesome, I'm going to rm the jujud-machine-17
<rick_h> bdx: rgr
<rick_h> bdx: the unit one as well if that's done/gone
<rick_h> bdx: if any line has "juju" in it it's considered provisioned so need both gone: 	provisioned := strings.Contains(output, "juju")
<bdx> yeah, just the machine-17 didn't do the trick
<bdx> there we go, removing both did it
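The check rick_h traced can be condensed into a small dependency-free sketch. `looks_provisioned` and the sample listings here are invented for illustration, but the substring test matches the Go line quoted above (`provisioned := strings.Contains(output, "juju")`): Juju lists the machine's services (initctl on upstart, systemctl on systemd) and treats the machine as provisioned if any line mentions "juju".

```python
# Illustrative sketch of the manual provider's "already provisioned"
# check: the machine counts as provisioned if any service name in the
# listing contains "juju". This mirrors the logic discussed above; it
# is not Juju's actual code.

def looks_provisioned(service_listing: str) -> bool:
    # Same test as the Go line: strings.Contains(output, "juju")
    return "juju" in service_listing

# Sample output from `sudo initctl list | awk '{print $1}' | sort | uniq`
clean = "cron\nssh\nudev\n"
stale = "cron\njujud-machine-17\nssh\n"
```

This is why removing both jujud entries from the service directory was needed: a single leftover line is enough to trip the check.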
<rick_h> bdx: <3 ok...updating the bug with these notes
<bdx> thanks rick
<rick_h> bdx: np, see you in an hour or so for round 3? :P
<bdx> rick_h: I just got elasticsearch (patched with the addition of the python-yaml dep) to successfully deploy to that machine
<bdx> rick_h: that was my last standing issue in the bunch, for the time being :-)
<rick_h> bdx: ok, I'm going to take nap for now then :P
<lazyPower> mbruzek  https://github.com/juju-solutions/jujubox/pull/20#pullrequestreview-10408603  - i had one minor comment. if you don't want to take any action on that i'm +1 to this as it is.
<mbruzek> lazyPower: I still think the juju-1 packaging is not final, and I don't want to create a sym-link at this time
<mbruzek> If you want to create a bug we can track it that way.
<lazyPower> ack, not that serious
<lazyPower> i'm +1 as it is, merging now
<lazyPower> mbruzek - both jujubox pr's landed. You made mention of a charmbox pr but i dont see one on the repo. Did it land already or is it MIA?
<lazyPower> mbruzek - i assume it was this one? https://github.com/juju-solutions/charmbox/commit/3df12bf82ce6bd16d519bb812c710831e799e6fb
<mbruzek> yes that is it
<lazyPower> awesome. landed that one over the weekend
<lazyPower> so i think we're good on the box images now, just need to kick the builders and get the flavors setup
<lazyPower> plus get this back in CI
<lazyPower> want to batcave it and get that done real quick? it'll only take a few minutes
<lazyPower> mbruzek ^
<mbruzek> yes I am there?
<kwmonroe> lazyPower: i see a juju-1 and latest tag now in jujubox.  is the 'devel' branch now deprecated?
<lazyPower> kwmonroe - i believe mbruzek sent a mail to the list, but yes, devel is now :latest
<lazyPower> and the devel tag is in the process of being deprecated
<lazyPower> juju-1 will be the best-effort tag to keep a juju-1 compliant charmbox around
<kwmonroe> word lazyPower -- i see mbruzek's note now (spelling errors and all).  thanks!
<lazyPower> kwmonroe np happy to help
<hallyn> 'juju bootstrap' always seems to hang (on both xenial and yakkety hosts), the console claims it's doing "apt-get upgrade", but the process list shows a
<hallyn> 100000   15822 15821  0 22:58 ?        00:00:00 sudo /bin/bash -c /bin/bash -c  set -e tmpfile=$(mktemp) trap "rm -f $tmpfile" EXIT cat > $tmpfile /bin/bash $tmpfile
<hallyn> which has a bash child and bash grandchild with no arguments
<hallyn> tych0: using juju 2.0 and lxd, how can i specify that it should use an image i create?  (i need to drop open-iscsi so juju bootstrap won't hang)
<tych0> hallyn: sec,
<tych0> hallyn: make an image called ubuntu-$series
<tych0> or, one that has a tag called that
<tych0> it'll use that one
<hallyn> oh, ok.  that'll suffice for now,
<hallyn> but so there's no way to say "use image xenial-base" that you know of?
<hallyn> tych0: (trying that - thanks)
<tych0> hallyn: no, i don't think so
<tych0> the ubuntu-$series alias is always used for exactly this reason
<tych0> i didn't think about making the name custom at the time :)
<hallyn> kewl - easy enough, re-bootstrapping, let's see
<hallyn> tych0: success, thx :)
<tych0> hallyn: cool, glad it worked!
#juju 2016-11-29
<junaidali> is there a command that can get the current focused model?
<junaidali> ah, got it. I missed the show-model command
<kjackal> good morning juju world!
<rick_h> junaidali: yes, and if you list-models or just models the current one should have a * on it
<anrah> Quick question about update-status hook
<anrah> It seems that when using reactive charms it runs all the reactive handlers that are available then?
<anrah> can I somehow avoid this?
<kjackal> anrah: yes, suppose you have a method that should be run only once. inside that method you should do something like set_state('method.called') and you should decorate that method with "when_not('method.called')"
<anrah> so I can use decorators inside methods?
<kjackal> anrah: you can declare states inside your methods. Then you can guard against these states with the appropriate decorators
<kjackal> let me find an example
<kjackal> anrah: https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/mahout/layer-mahout/reactive/mahout.py#L23
<kjackal> here we make sure mahout is installed only once
<anrah> yep
<anrah> that is familiar
<anrah> Problem right now is that I use layers and the somerelation.connected part gets executed multiple times
<jcastro> rick_h: I have to hit the road late next week, but perhaps we should do an office hours before then?
<rick_h> jcastro: on the calendar tomorrow
<jcastro> rick_h: did I miss an announcement on a list?
<rick_h> jcastro: was going to today
<rick_h> jcastro: on my todo to chat with you about it but been in calls today
<rick_h> jcastro: started updating the doc with a script
<jcastro> yeah I have calls all day
<jcastro> ok so you just want me to rally my folks and we'll talk about whatever?
<rick_h> jcastro: yea, but have some topics I want to walk through
<jcastro> ack I see them
<rick_h> jcastro: so yea, I wanted to see what else you wanted to add.
<rick_h> jcastro: and see if anyone wants to join
<rick_h> jcastro: and that's on the calendar for tomorrow, are you able to send an email or should I?
<jcastro> go for it
<jcastro> I'll add content to your notes
<rick_h> jcastro: k, ty
<jcastro> kwmonroe: any new hotness from BD I should include?
<rick_h> jcastro: is there any sort of HO link/etc we can put into the email? How's that working these days?
<jcastro> hmm, let me think
 * rick_h recalls you chatting with someone that it's not hangout on air any more but more a public youtube thingy?
<jcastro> yeah they changed it
<jcastro> https://www.youtube.com/my_live_events as the juju users
<jcastro> I'll add it now
<rick_h> ah, gotcha
<jcastro> oh man, this is actually way better
<jcastro> https://www.youtube.com/watch?v=wo23ZXwa8ZU is the URL for tomorrow
<jcastro> that's handy
<jcastro> it doesn't let me select a hangout though unless I go live, so I guess we'll get that URL before we start
<jcastro> rick_h: oh nice bonus, it announced the upcoming office hours in my subscription feed as "Upcoming" with the time and date, that's really awesome.
<rick_h> jcastro: sweet
<jcastro> rick_h: the bottom of your mail should encourage people to just subscribe to the YT channel, then from now on they'll get a notification when we go live
<jcastro> that's an easy way to get caught up to date without too much overhead, for people who want to keep up but not necessarily participate.
<rick_h> jcastro: k, is there any hint on how folks can join the HO?
<jcastro> not afaict, what we usually do is fire it off early and then put the URL here
<rick_h> jcastro: so just tell them "a hangout link will be made available in #juju or something?
<jcastro> right
<rick_h> k
<jcastro> that's exactly what we do
<jcastro> rick_h: it has a setting for hangouts, and then it has another setting for custom encoders
<jcastro> that means we could each OBS into one stream, we should experiment with that in the future, that's how the pros do the multi-stream deals
<rick_h> jcastro: any feedback let me know: https://pastebin.canonical.com/172188/
<jcastro> lgtm, I did rename the session to The Juju Show though
<jcastro> sounds cooler
<rick_h> hah ok
<rick_h> jcastro: k, sent
<rick_h> jcastro: let me know of any other topics you want to toss up and see you tomorrow for it?
<jcastro> yeah I'll add more post-daily call, I need to round up some status from folks
<rick_h> jcastro: rgr ty
<arosales> marcoceppi: hello
<marcoceppi> o/
<arosales> marcoceppi: if you have some time could you merge (if appropriate) and push updates to nagios from the following MPs
<arosales> https://code.launchpad.net/~majduk/charms/trusty/nagios/branch/+merge/304492
<arosales> https://code.launchpad.net/~dbuliga/charms/trusty/nagios/nagios/+merge/288614
<arosales> I think petevg has reviewed the latter
 * wxl is sad there's no juju snap
<magicaltrout> eh?
<wxl> i was going to set up a quick vm to test a little wordpress thingy. idea was get ubuntu snappy core in a vm which happens almost immediately, and then install the juju snap so i could grab the wordpress charm. but no such snap exists.
<magicaltrout> balloons: jcastro
<petevg> wxl: I believe there is a juju snap, though it's still under development: `snap install juju --beta --devmode`
<balloons> wxl, there is, it's just in the beta channel
<wxl> amd64 only tho yeah?
<balloons> right, it's in beta because it requires devmode still
<balloons> nope, all supported arches
<wxl> yeah weird i can't find it
<balloons> snap find won't find it.. just install
<wxl> no i mean i tried to install it and it says it can't find it
<wxl> this is an i386 core fwiw
<balloons> i386 is not a supported arch
<wxl> not supported by juju?
<arosales> jcastro: to confirm juju office hours is tomorrow, correct?
<balloons> wxl, correct
<wxl> ah, k
<wxl> thanks :)
<balloons> wxl, :-) Can you not use the amd64 client?
<wxl> probably. unfortunately i've been hobbling along on this i386 install that i've sort of multiarched to amd64 but it's a real pain dealing with all the apps. just been too lazy to do a new install
<wxl> so i'll see :) thanks for the help guys
<wxl> and gals :)
<magicaltrout> i386? makes me feel old
<rick_h> arosales: yes, tomorrow
<marcoceppi> arosales: well there's larger problems there
<marcoceppi> those rely on ingestion
<jcastro> arosales: yep
<wxl> reading through https://jujucharms.com/docs/stable/getting-started
<wxl> is bootstrap a necessity regardless of whether or not you're using lxd?
<wxl> seems that the lxd snap is in devmode too
<wxl> s/devmode/beta/
<wxl> unfortunately it doesn't seem to add the lxd group
<jcastro> Using the snapped juju with the snapped lxd doesn't work yet
<jcastro> It's on their roadmap though last time I checked
<jcastro> ideally you'd get the entire thing from snaps and be good to go
<wxl> jcastro: but tl;dr juju does require lxd, no?
<magicaltrout> no
<magicaltrout> unless you want to use lxd ;)
<wxl> oh
<wxl> ok well i don't give a hoot about that. :)
<magicaltrout> bootstrap will just launch a controller node on your given cloud
<magicaltrout> or lxd provide
<magicaltrout> r
<jcastro> oh, if you don't care about that then you can just use it to deploy to AWS or whatever
<magicaltrout> so you do need to bootstrap but its not lxd specific
<jcastro> `juju bootstrap aws/us-east-2` for example
<wxl> oic well given that i'm running off of a cloud image in virtualbox, i guess i do need it XD
<wxl> s/cloud/snappy-core/
<jrwren> wxl: it should work VERY well on a cloud image in virtualbox, but I don't use snaps for anything.
<wxl> jrwren: yeah i'm aiming for "quickest way to a local wordpress install" and right now the issue is apparently that the snaps need work still.
<jrwren> wxl: which snaps?
<wxl> jrwren: specifically the issue is with the lxd snap, which admittedly in beta, but it doesn't add the lxd group and there's no easy way to do that in snappy's locked down environment
<jrwren> wxl: is there a reason to use the lxd snap and not deb package? apt install lxd works very well for me.
<wxl> jrwren: well snappy core relies entirely on snaps. the reason i want to use snappy core is i can download an image and have it running in minutes, versus going through a whole install.
<wxl> jrwren: realize this is an "ideal world" kind of thing. it's not a complaint about juju or snappy. development just hasn't got there yet
<jrwren> wxl: oh! I missed the snappy core part. I only saw the "reading through https://jujucharms.com/docs/stable/getting-started"
<wxl> jrwren: yeah sorry for the confusion. i'm an odd case for sure :)
<arosales> jcastro: rick_h thanks. If possible I would like to talk about review queue and charm testing
<arosales> marcoceppi: re nagios and dep on ingestion, sorry I didn't parse that
<marcoceppi> arosales: it's a ~charmers charm, it's not been taken over by anyone
<marcoceppi> and we can't just update the store, its a charm that relied on ingestion that no one has taken over
<arosales> marcoceppi: ah, but you're listed as a maintainer
<arosales> I thought we pushed fixes to this already....
<marcoceppi> i guess I could just move this to my name on charmstore
<marcoceppi> though I'm really no longer the maintainer
<arosales> Well I normally ping maintainers to do this
<marcoceppi> arosales: maintainers in the yaml file really means nothing
<marcoceppi> it's the person that maintained it at one point in time
<marcoceppi> it's really about the owner in the charm store now
<arosales> It's the best we have thus far to mark a maintainer
<marcoceppi> arosales: but I'm telling you, I've not been maintaining it
<arosales> marcoceppi: in any case if you would no longer like to maintain could you email the list?
<marcoceppi> it's essentially, unmaintained
<arosales> Gotcha
<arosales> We have some contributions on it so I'll ping ~charmers to merge and push for now
<marcoceppi> arosales: you're missing the point
<marcoceppi> we can merge it sure
<marcoceppi> but it'll never get into the store
<arosales> Sorry, I do understand
<arosales> We need to land the fix in LP
<marcoceppi> but that fixes nothing in the store
<arosales> And the push that code to the store
<marcoceppi> the second half needs a maintainer
<petevg> bcsaller, cory_fu: Do either of you mind if I push a new build of python-libjuju to matrix's wheelhouse?
<marcoceppi> someone other than ~charmers, and I'm not an active maintainer
<arosales> Sorry me typing on a phone atm
<arosales> marcoceppi: ok, could we
<arosales> start by you emailing the list looking for a maintainer?
<marcoceppi> sure
<arosales> Perhaps there is someone untested in maintaining
<arosales> marcoceppi: thanks
<magicaltrout> i'm interested in maintaining it
<arosales> Untested = interested
<magicaltrout> because i use it and i know of a couple of other issues that I've patched
<arosales> Looks
<arosales> Like we have a winner :-)
<arosales> marcoceppi: ^ petevg ^
<marcoceppi> magicaltrout: if you want to create a public repos *somewhere* I'll merge these two things, convert it to git and hand it over
<arosales> marcoceppi: magicaltrout thanks!
<magicaltrout> marcoceppi: i can just push the code to github then?
<marcoceppi> magicaltrout: as the *previous* maintainer, yes
<marcoceppi> magicaltrout: if you set up a repo, I'll push over the latest code
<magicaltrout> https://github.com/buggtb/nagios-charm
<magicaltrout> shove it in there marcoceppi
<marcoceppi> magicaltrout: what email address do you want to use as maintainer?
<magicaltrout> tom@spicule.co.uk please marcoceppi
<marcoceppi> magicaltrout: cool
<marcoceppi> magicaltrout: okay, pushed
<marcoceppi> magicaltrout: if you want to push that to the store, I'll move the promulgated name to you
<cory_fu> petevg: Nope
<petevg> cory_fu: you referencing my comment on the hadoop-processing issue? Or is it just a general "nope"?
<petevg> cory_fu: oh, right. You're answering my question. I had forgotten that I had asked it :-)
<cory_fu> petevg: :)
<petevg> cory_fu, bcsaller: Pushed an updated python-libjuju .whl to matrix master. Please pull and rebase if you want to deploy fancy bundles :-)
<bcsaller> petevg: thanks
<kwmonroe> lazyPower: mbruzek:  matt's note said charmbox:devel was going away.  can we keep it and build a jujubox:devel that pulls from ppa:juju/devel?  it'd be nice to have a juju/charmbox with 2.1-betaX
<mbruzek> kwmonroe: make a pull request if you want that and we will review
<lazyPower> kwmonroe: we're kind of booked with what we're supporting already. Matts suggestion is what i would recommend, that if you can contribute the Dockerfile and a minimal test (See matts latest pr) we could reasonably get that in and accounted for in the automation.
<magicaltrout> sorry marcoceppi https://jujucharms.com/u/spicule/nagios-charm
<magicaltrout> done
<marcoceppi> magicaltrout: can you push it as jujucharms.com/u/spicule/nagios ?
<magicaltrout> boo
<magicaltrout> yeah
<magicaltrout> dunno why i did that :)
<magicaltrout> done
<magicaltrout> whats the charm grant command to revoke access?
<marcoceppi> magicaltrout: revoke
<magicaltrout> ah up a level i was still looking in the grant menu
<marcoceppi> magicaltrout: done
<magicaltrout> joy
<kwmonroe> lazyPower: mbruzek:  local build looks good with 2.1-beta1.  here's a pr to bring devel inline with master (plus a couple commits to add ppa:juju/devel).  this will affect cwrbox, so you may want to hold off until https://github.com/seman/cwrbox/pull/2 is merged.
<kwmonroe> https://github.com/juju-solutions/jujubox/pull/28
<lazyPower> nice
<lazyPower> thanks kwmonroe
<kwmonroe> np.. nice job on travis integration!
<lazyPower> thats all matt :0
<cory_fu> bcsaller: So, I'm using async for the charm archive download and all, but I can't figure out a way to extract the zip in an async way.  Also, fetching the archive URL involves a non-async call to the charm store API, which we have seen hit a 10s timeout in tests.  I guess my only option on those is run_in_executor?
<tvansteenburgh> cory_fu: that's what i would do (for both)
<lazyPower> marcoceppi: Cynerva: ryebot: - https://k8s-gubernator.appspot.com/build/canonical-kubernetes-tests/logs/e2e-node/1122_17:10:48.000    despite the fact we're not officially merged, with some url hax, we're totally in the dashboard :P
<ryebot> lazyPower: Sweet!
<pcdummy> What are the minimum specs needed for Juju? (To try it out) ?
<marcoceppi> pcdummy: if you're using the lxd provider,  4 cores and 8gb ram
<marcoceppi> otherwise, a cloud provider
<pcdummy> marcoceppi: thanks
<pcdummy> Will install Juju the coming days and give it a try.
<marcoceppi> pcdummy: cool, if you need cloud credentials, sign up for the developer program: https://developer.juju.solutions
<pcdummy> marcoceppi: juju will give me something like that: https://drawstack.github.io/qxjoint/ ? :)
<pcdummy> marcoceppi: that was a fun project i was working on (should be an interface for LXD in conjunction with saltstack).
<lazyPower> pcdummy: correct, however juju gives an application-centric view in the GUI instead of a machine/container-centric one.
<marcoceppi> pcdummy: it will give you something...similar
<pcdummy> NICE
<pcdummy> I'll give it certainly a try.
<pcdummy> juju has no live migration of containers?
<rick_h> pcdummy: no, it does not currently
<pcdummy> So downtimes on maintenance of the host, if the app isn't scaleable/loadbalanced/whatever.
#juju 2016-11-30
<petevg> cory_fu: that bug you're running into is probably my fault. Was going to test some code more thoroughly and push in the morning, but take a look at https://github.com/petevg/python-libjuju/tree/bug/fixes-for-null-cases
<cory_fu> petevg: I think preventing the crashes on our end is good, but that controller panic definitely shouldn't happen
<petevg> True.
<cory_fu> petevg: Also, there's some overlap with your branch and a PR I got in this evening: https://github.com/juju/python-libjuju/pull/24
<petevg> cory_fu: I may also be wrong about it being my fault. Those fixes don't actually change anything (just ran a matrix test) :-/
<cory_fu> petevg: Yeah, the directive change doesn't fix the controller panic.  I think we must be passing something in to the API as a None that it expects to have a value
<petevg> Lovely.
<cory_fu> petevg: If you look at the panic log that I included, you can see which params are null (0x00) and compare that to the code signature to work backwards to what we should be passing
<cory_fu> But it's not going to be fun
<cory_fu> Anyway, I'm done for the evening.  We can pair on it tomorrow, if you want.  Have a good night!
<petevg> cory_fu: sounds good. You have a good night, too (undoing my placement fix fixes things, btw, so it is something to do with the Placement object.)
<petevg> cory_fu: this works, btw `placement=[parse_placement(to)] if to else None`, for wiki-simple, and for hadoop-processing. I want to do some more banging on it before I check that in, though ... (for now, to bed for me)
<wxl> is `juju set` not actually a thing? can't find it in the manpage and i get an error trying to run it in a fresh zesty server. and yet: https://jujucharms.com/wordpress/trusty/4
<zeestrat> wxl: It's 'juju config' in Juju 2.0
<anita> Hi
<Guest5875> how to get the values from config.yaml file in juju 2.0. I searched and found config=hookenv.config(). then config.get('parameter'). but when I am giving as "juju config <service> paramter=value" its not accepting.
<Guest5875> is it correct?
<marcoceppi> Guest5875: everything you've mentioned is correct
<marcoceppi> Guest5875: maybe a look at your charm code would be helpful
<Guest5875> marcoceppi_:ok, thanks a lot
<BlackDex> hello there
<BlackDex> I have an error "Incomplete relations: identity"
<BlackDex> on all units of the same service
<BlackDex> glance in this case
<BlackDex> i tried to remove it and add it again, but it doesn't seem to work
<marcoceppi> BlackDex: can you put the output of juju status into paste.ubuntu.com and share the link?
<BlackDex> one moment
<BlackDex> marcoceppi: http://paste.ubuntu.com/23557948/
<marcoceppi> BlackDex: so a few things, are you locked into 1.25? 2.0 has been out for a little over a month, second, there seems to be a few things still churning, finally you might get better assistance in #openstack-charms since this is really an openstack setup question
<BlackDex> i'm locked to 1.25 for now
<SimonKLB> is there a relation, similar to the special juju-info one for subordinate charms, which can be used with more or less every charm when the goal is simply to get the name of the related charm?
<SimonKLB> the application name, that is
<SimonKLB> s/relation/interface
<marcoceppi> SimonKLB: juju-info is that relation
<marcoceppi> well, that's the interface
<SimonKLB> oooh, i thought that was only intended for subordinate charms
<marcoceppi> SimonKLB: oh, you want this for primary charms?
<SimonKLB> marcoceppi: yes
<marcoceppi> I've never tried, but it theoretically *should* work
<SimonKLB> marcoceppi: doesnt look like it works out of the box - ERROR charm "testcharm" using a reserved relation name: "juju-info"
<marcoceppi> SimonKLB: relation name, what's the metadata look like?
<SimonKLB> marcoceppi: i guess this is the relevant part:
<SimonKLB> subordinate: false
<SimonKLB> requires:
<SimonKLB>   juju-info:
<SimonKLB>     interface: juju-info
<marcoceppi> SimonKLB: don't call the relation juju-info
<marcoceppi> call it anything else but that
<SimonKLB> marcoceppi: doesnt seem to make a difference
<petevg> cory_fu: PR for you: https://github.com/juju/python-libjuju/pull/28
<SimonKLB> marcoceppi: nvm!
<petevg> (I saw the code you merged, but I think that we still need to fix these two edge cases, too.)
<SimonKLB> marcoceppi: shouldve tried that first :) thanks man!
<marcoceppi> cheers
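The fix that resolved SimonKLB's error can be shown as a metadata.yaml fragment. The charm name `testcharm` and the relation name `host-info` are illustrative; the key point is that `juju-info` is reserved as a relation *name* but still allowed as the *interface*:

```yaml
# Hypothetical metadata.yaml for a primary (non-subordinate) charm
# that relates over the implicit juju-info interface. The relation
# name must be anything other than "juju-info".
name: testcharm
subordinate: false
requires:
  host-info:
    interface: juju-info
```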
<stub> Anyone know what happened with https://review.jujucharms.com/reviews/13 ?
<stub> I think it was supposed to be promulgated, but https://jujucharms.com/nagios/ is the trusty-only charm owned by Tom
<marcoceppi> stub: there was a bit of a miscommunication then, I've reset things
<marcoceppi> magicaltrout: there appears to already be a group of people taking up nagios, you may want to join them https://launchpad.net/~nagios-charmers
<stub> More the merrier
<marcoceppi> stub: can I request that more people be added to metadata.yaml maintainers?
<stub> Sure. Not sure what it has atm.
<marcoceppi> me and someone at nagios
<marcoceppi> does nagios-charmers have a mailing list?
<stub> That sounds like it needs fixing
<marcoceppi> that might be the best for a "contact the maintainers"
<stub> It could have, but I suspect half the people would filter it :-(
<stub> Can we use the main juju list, or is that naughty?
<marcoceppi> naughty :)
<stub> Otherwise I can click the 'create a mailing list' button here in Launchpad
<marcoceppi> eh
<marcoceppi> it's a pretty low traffic list
<marcoceppi> as the previous maintainer, I only ever received 1 email
<stub> Launchpad has mailman builtin, but I suspect if I turned it on for something this low traffic emails would just get lost in mail filters.
<stub> But at least it is a non main-juju-mailing-list contact point
<marcoceppi> stub: as an additional maintainer
<marcoceppi> I'm happy to stay on as a POC, but would hate to be the only POC
<stub> oh, sure. I was thinking you would want out of there completely :)
<marcoceppi> naw
<marcoceppi> I am mildly interested in maintaining
<deanman> Hi, any suggestion for an existing subordinate well maintained charm to study ?
<marcoceppi> deanman: filebeat?
<deanman> marcoceppi: on it
<stub> https://api.jujucharms.com/charmstore/v5/nagios-14/archive/metadata.yaml
<marcoceppi> +1
<skay_> Hi, I ran `charm proof` on my new charm and I have warnings about missing hooks and a missing hook directory. when I generated the charm structure, it created a reactive directory. I've been putting hooks and reactive functions in my one file there.
<petevg> cory_fu, bcsaller: I pushed another build of python-libjuju to matrix master's wheelhouse. Contains the fixes cory_fu and I pushed for edge cases.
<skay_> based on running charm proof, should I split out the hooks in to a separate location?
<petevg> skay_: if you're building a layered charm (which you probably are, if you're following the updated instructions), you need to do "charm build", and then run "charm proof" on the built charm.
<marcoceppi> skay_: proof needs to be run after you do a charm build
<skay_> ack, thanks
<petevg> no worries :-)
<skay_> much better :)
<skay_> I have some hooks, like config-changed, that should switch status to 'maintenance' and then eventually switch to 'active'.
<skay_> if an exception occurs between those, would the status be stuck in 'maintenance'?
<skay_> and where can I look at how exceptions get handled by the framework so that I can determine if I should do some handling before allowing an exception to be raised
<skay_> also, are there best practices for commenting on status and state transitions?
<skay_> I didn't add comments on everything that transitions, but where something might not be clear I made a list of possible transitions
<petevg> skay_: it should flip into an errored state if you have an unhandled Exception. Juju will then retry the hook, and stay in an errored state until the Exception goes away, or someone manually intervenes and fixes things.
<skay_> thanks
<petevg> skay_: Documenting transitions where they are not clear sounds like a best practice to me :-)
<petevg> np
<skay_> thanks I am just guessing here
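The hook/status behaviour petevg describes above can be sketched without any Juju dependencies. `status_set` here is a stand-in for charmhelpers.core.hookenv.status_set, and `dispatch` only simulates what the unit agent does; the point is that an unhandled exception leaves the unit errored while the workload status stays at whatever was last set:

```python
# Illustrative simulation: the charm sets workload status around its
# work; if the hook raises, the agent flags the unit as errored (and
# will retry), and the workload status is left at "maintenance".
statuses = []

def status_set(state, msg=""):
    # Stand-in for charmhelpers' status_set; just records the state.
    statuses.append(state)

def config_changed(work):
    status_set("maintenance", "reconfiguring")
    work()                       # an unhandled exception escapes here...
    status_set("active", "ready")

def dispatch(hook, work):
    """Agent-style dispatch: trap the failure, report a hook error."""
    try:
        hook(work)
        return "idle"
    except Exception:
        return "error"           # ...and the unit shows a hook error

# A normal run reaches "active"; a failing run stops at "maintenance".
ok = dispatch(config_changed, lambda: None)
bad = dispatch(config_changed, lambda: 1 / 0)
```

So the unit doesn't silently sit in 'maintenance': juju status shows the hook error, and the stale workload status is a hint about how far the hook got before it died.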
<skay_> okay, another question. I've been deploying my charm to try it out and it gets listed in an 'unknown' state, and I don't know why that happens
<skay_> do you have advice on how to dig in to that?
<skay_> listed by juju status
<petevg> skay_: if I were troubleshooting, I'd read code and think through why a state wasn't getting set. You can add logging statements, or take a look at the documentation on the 'debug-hooks' command here:  https://jujucharms.com/docs/1.25/authors-hook-debug
<petevg> And feel free to file issues or make contributions to juju docs (https://github.com/juju/docs) -- they could definitely be clearer about some of this stuff.
<skay_> petevg: thanks. (I'm using juju2 so I'd go to a different url)
<skay_> petevg: :)
<skay_> petevg: I'm never sure when something is supposed to be obvious or not :)
<petevg> skay_: yep. The correct url is https://jujucharms.com/docs/stable/authors-hook-debug  Sorry about that.
<petevg> skay_: if it's not obvious to you, it's probably not obvious to someone else. Asking questions is always good. Filing issues when you get a good answer that isn't in the docs is even better :-)
<petevg> skay_: in other words, thank you for the excellent questions :-)
<rick_h> jcastro: ping
<jcastro> yo
<rick_h> jcastro: got a sec for prep?
<jcastro> yeah, fire it up!
<rick_h> jcastro: wanted to make sure the agenda is good, you mentioned some adding yesterday
<mbruzek> rick_h: Will the link be here so we can join the show?
<rick_h> mbruzek: definitely
<mbruzek> looking forward to it, carry on with prep
<rick_h> jcastro: can you use the one for the show just not hit record for prep?
<jcastro> rick_h: I think marcoceppi wants to crash the party!
<rick_h> jcastro: or a diff one ?
<jcastro> right, working that now
<rick_h> woot! crashing is fun
<jcastro> https://hangouts.google.com/hangouts/_/ytl/mmAQjIgU5Fj-06oIFgojUAUgmlXJO-VnVCiW-omyT_g=?eid=103184405956510785630
<jcastro> give it 30 seconds
<jcastro> http://youtu.be/wo23ZXwa8ZU to listen in
<marcoceppi> jcastro: "that's an error" when accessing page
<marcoceppi> nvm
<rick_h> loaded here
<marcoceppi> https://lists.ubuntu.com/archives/juju-dev/2016-November/006169.html
<jrwren> sounds like keystone charms are so good that it's the easiest identity management on juju
<mbruzek> Are there any questions for the Juju show live on you tube right now? http://youtu.be/wo23ZXwa8ZU
<marcoceppi> http://summit.juju.solutions/
<CoderEurope> Which #channel should I be in for the Juju Show ?
<zeestrat> CoderEurope: This one.
<CoderEurope> zeestrat: Cheers - wheres jcastro ?
<zeestrat> Not sure. Ping rick_h or mbruzek for questions
<mbruzek> zeestrat: either
<CoderEurope> rick_h, mbruzek I would like to chat with jcastro about a project that I am kicking around. Can I talk to him after the show ?
<arosales> jcastro: rick_h is the hangout still live?
<mbruzek> CoderEurope: Absolutely you can
<rick_h> arosales: yes
<arosales> mind if I drop in?
<arosales> https://hangouts.google.com/hangouts/_/ytl/mmAQjIgU5Fj-06oIFgojUAUgmlXJO-VnVCiW-omyT_g=?eid=103184405956510785630 isn't working for me
<arosales> 403
<rick_h> arosales: :/ not sure
<mbruzek> probably because we are live now
<bdx> https://hangouts.google.com/hangouts/_/ytl/mmAQjIgU5Fj-06oIFgojUAUgmlXJO-VnVCiW-omyT_g=
<jcastro> https://hangouts.google.com/call/ymfbnowfynfabphu36fwwyk46ee
<jcastro> try this one
<jcastro> you can both drop in if you'd like!
<CoderEurope> me too ?
<mbruzek> sure
<jcastro> yeah!
<marcoceppi> jump in, we've got like 15-20 mins to hang out
<zeestrat> rick_h: Thanks for going through my questions. Much appreciated!
<rick_h> zeestrat: np! thanks for the feedback/questions!
<CoderEurope> @jcastro, Something went snap on my hangout.
<CoderEurope> jcastro, thank-you should be downloaded in 5mins. Thanks again.
<vmorris> watching the playback on the hangout -- this discussion about openstack endpoints being on private networks is super relevant to my interests
<vmorris> but isn't this the point of having internalurl and externalurl attributes in the endpoint?
<bdx> vmorris: yea, but natting wasn't taken into consideration
<bdx> vmorris: I'm quite sure that functionality is meant to be facilitated by having actual interfaces with ip addresses on the separate networks
<vmorris> bdx: ah i see now, ty
<beisner> hi bdx, marcoceppi - re: barbican, be aware that it is intended to be used with an HSM.  when used without one, you'll want to be aware of the warning @ http://docs.openstack.org/developer/barbican/setup/dev.html which we've also echoed in the charm release notes @ http://docs.openstack.org/developer/charm-guide/1610.html#barbican
<wxl> zeestrat: thank you kindly. is 2.0 documented outside of the man page?
<zeestrat> wxl: All of the updated docs for 2.0 are up on https://jujucharms.com/docs/stable/ (see https://jujucharms.com/docs/stable/commands for commands, for example)
<zeestrat> Some charms might be a bit older and therefore refer to juju 1.x commands and concepts.
<bdx> beisner: thanks for that
<bdx> beisner: Note that this plugin DOES NOT WORK at present due to
<bdx> bug#1611393.y
<bdx> beisner: so its not really usable to that end
<beisner> bdx, exactly.  the idea here is:  standalone barbican is a poc/test only scenario.  or, provide a hardware hsm and point barbican at it.  or eventually resume softhsm work when the stars align wrt versions.
<bdx> I see
<bdx> I don't want to believe it though
<bdx> this spoils all my fun
<bdx> beisner: jerk
<bdx> jp :)
<beisner> bdx, haha. just looking out for ya :)
<bdx> much appreciated
<beisner> bdx, i've not looked, but i would guess that for yakkety and onward, the necessary openssl library versions are in line with continuing the softhsm work.  but i don't think we've got that roadmapped atm.  an opportunity to collab/explore?
<bdx> yeah ... yakkety has 1.0.2g-1ubuntu9
<bdx> I would love to get this working ..... I have devs from other teams super interested in implementing what I've showed them in their organizations too .... not sure how apt they are to actually help move it forward
<bdx> beisner: is there a roadmap to get 1.0.2h in xenial at all?
<beisner> bdx, i'm not sure but short of a major vulnerability, xenial's default stance will be on version stability generally speaking.
<bdx> beisner: do you know who deals with, or would know more about the openssl packaging?
<beisner> bdx, ubuntu security team, i believe.  have a look through https://launchpad.net/ubuntu/xenial/+source/openssl/+changelog
<bdx> thx
<bdx> mdeslaur: how's it going?
<bdx> mdeslaur: would you mind chiming in here?
<bdx> xnox, jmbl: ^^
<bdx> beisner: possibly those guys aren't on #juju all the time; do you think I should email the group of what looks like 4 or 5 maintainers about this, or the openstack/juju lists?
<beisner> bdx, it's worth getting familiarized with https://wiki.ubuntu.com/StableReleaseUpdates - those are the folks who will likely err on the side of version stability, for many good reasons of course.  that is not to say that it cannot happen.
<beisner> o/ bdx - i've gotta check out.  cheers
<bdx> beisner: oooh 1.0.2h isn't stable yet
<bdx> beisner: ok, thanks
<wxl> zeestrat: thank you!
<xnox> bdx, beisner - i'm lost, what is the question?
<xnox> bdx, beisner - we do not take openssl point releases, we cherrypick CVE fixes only. If there is a specific CVE you are after, you can check the security CVE tracker.
<xnox> if you are after any other performance enhancement or fixes, it may need a standalone SRU.
<xnox> that's my understanding.
<beisner> xnox, ack, same understanding here.
<beisner> thanks xnox
<jacekn> hello. What's the process to get the https://jujucharms.com/apache2/trusty/20 charm pushed to the xenial series? It should work just fine from what I can see
<bdx> xnox: thanks
<cory_fu> bcsaller: If an additional matrix suite wants to override a rule in a test, should we match the rules up by task like we do tests by name?  Also, how do we handle removing args from a rule, or removing a rule entirely?  Or is it always just additive?
<cory_fu> (Current implementation is match by name and task and always add or override)
<cory_fu> I suppose you could remove an arg by setting it to None, but no way to remove an entire rule
<cory_fu> bcsaller, petevg: What about "task" becoming a top-level rule key and "do" changing to "args"?
<cory_fu> I think that might be clearer anyway
<cory_fu> Hrm.  Still doesn't really help with deleting rules (or tests)
<petevg> cory_fu: it feels like the right thing to do is just to write a new test.
<cory_fu> petevg: What if you want all but one of the default tests?
<petevg> cory_fu: you're kind of moving toward writing an inheritance system for tests. That's interesting, but might be out of scope.
<petevg> I can see wanting to avoid copypasta, though ...
<cory_fu> petevg: Well, we originally talked about it as more of a type of inheritance, but that's hard to do with just yaml
<petevg> Yeah ... gets messy quickly.
<cory_fu> petevg, bcsaller: Could support a special "delete: True" attribute, perhaps
<cory_fu> tests: [{name: foo, delete: True}]
<cory_fu> Or tests: [{name: foo, rules: {task: bar, delete: True}}]
<bcsaller> cory_fu: sorry, on another call. I'd suggest a new version of an existing name overwrites it, not extends it (though there might be real use cases for that...)
<cory_fu> bcsaller: Just at the rule level, or overwrite the entire test?
<bcsaller> the entire test
<cory_fu> Hrm.  That seems heavy handed.  I feel like something small like tweaking a timeout or period should be easier than that
<bcsaller> then you need a syntax for handling deletes and so on like you said
<cory_fu> Or even just adding an additional rule to an existing test
<cory_fu> petevg: Thoughts on only being able to add new tests or entirely overwrite existing ones, rather than being able to tweak it by adding a rule or changing an arg?
<cory_fu> bcsaller: Regardless of that, I feel like moving the task name up a level makes more sense to me.
<petevg> cory_fu: My brain is mush right now, but I like the simplicity of just having to override an entire test.
<petevg> It means duplicated yaml.
<petevg> But less duplicated yaml than having to write a whole new .yaml
<cory_fu> {do: deploy, args: {version: previous}}
<cory_fu> petevg: Ok, I'm out voted, then I guess
<petevg> cory_fu: sorry.
<cory_fu> no problem.  Less code to maintain in matrix.  :)
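(Editor's sketch: the semantics the group settled on — a suite can add new tests or entirely overwrite an existing test by name, but not tweak individual rules — could look like the merge below. The test/suite structure shown is hypothetical, not matrix's actual code.)

```python
# Hypothetical sketch of "overwrite the entire test by name" suite
# merging, as bcsaller suggested: a later suite's test with the same
# name completely replaces the earlier one instead of being merged
# rule-by-rule. No delete syntax is needed for individual rules.

def merge_suites(*suites):
    """Each suite is a list of {'name': ..., 'rules': [...]} test dicts."""
    merged = {}
    for suite in suites:
        for test in suite:
            merged[test['name']] = test  # same name: overwrite entirely
    return list(merged.values())

default = [{'name': 'traffic',
            'rules': [{'task': 'deploy', 'args': {}}]}]
custom = [{'name': 'traffic',
           'rules': [{'task': 'deploy', 'args': {'version': 'previous'}}]}]

# The custom suite's 'traffic' test fully replaces the default one.
print(merge_suites(default, custom))
```

The cost, as petevg notes, is some duplicated yaml when you only wanted to tweak a timeout; the benefit is that no delete/override syntax has to be designed.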
<cory_fu> bcsaller, petevg: https://github.com/juju-solutions/matrix/pull/26 updated for review
<beisner> anyone have advice/workarounds for juju controller mem leak/usage issues? http://i.imgur.com/C1WpcK2.png  in just a couple of hrs with the lxd provider i've succeeded in OOMing and Calltracing the pretty beefy host.  juju 2.0.1
<alexisb> beisner, move to 2.0.2
<beisner> hi alexisb - wherefrom?
<alexisb> it is in proposed atm
<alexisb> beisner, there may be other work arounds, but 2.0.2 release has some good fixes in it around controller exhaustion issues
<beisner> alexisb, ack.  will tear down, upgrade, redeploy and stuff.  many thanks.
<cory_fu> petevg: Usage examples added
<cory_fu> bcsaller: ^
<petevg> cory_fu: yay docs!
<skay_> for the 2nd time maybe I've had a crash report pop up from juju-deployer. has that happened to anyone else?
<cory_fu> petevg: Did I mention that that PR has the newest libjuju with your fixes for the controller panics?  I thought those fixes were in master, but I was still getting them until I updated libjuju again
<petevg> cory_fu: interesting. I thought that I had updated this morning. Maybe I forgot to pull python-libjuju master or something ...
<petevg> cory_fu: are you sure that it's not just that you didn't rebuild your tox environment?
<cory_fu> That's possible, but I thought I did
<cory_fu> I can test it real quick
<cory_fu> petevg: Yeah, you're right.  It seems to be working on master now
<petevg> cory_fu: the semantics of matrix in your PR are a little confusing. If I have a bundle that sits at, say ~/Code/bigtop.hadoop/bigtop-deploy/juju/hadoop-processing/, and it doesn't have a matrix.yaml, but I want to run the default matrix test suite, how do I do so?
<petevg> skay_: I haven't had deployer pull up a crash report, if by "crash report", you mean Ubuntu's crash reporter. I have had it just plain crash, but not recently.
<cory_fu> petevg: matrix  ~/Code/bigtop.hadoop/bigtop-deploy/juju/hadoop-processing/
<skay_> petevg: yeah, it triggered Ubuntu's crash reporter. surprised me
<skay_> I'm running a mojo manifest
<petevg> cory_fu: that's what I thought the docs were telling me, but that doesn't work.
<cory_fu> petevg: tests/matrix.yaml is included if present, but can be excluded with -B.  The default suite is always included unless excluded with -D
<petevg> cory_fu: My matrix log: http://paste.ubuntu.com/23560175/
<petevg> (It looks like it's trying to interpret the bundle as a matrix test.)
<cory_fu> petevg: Sorry, that should have been: matrix -p  ~/Code/bigtop.hadoop/bigtop-deploy/juju/hadoop-processing/
<cory_fu> CLI args are additional suites.  -p <path> is the path to the bundle.
<petevg> cory_fu: Aha. That's it. I tried running -Dp from the example, but that only works if there's a matrix.yaml file, I think.
<cory_fu> petevg: Added another usage example for that
<petevg> cory_fu: awesome. Was just about to ask you to do that :-)
<cory_fu> petevg: Yeah, if you use -D and there's no tests/matrix.yaml, you'll have no suites.  Should probably catch that and report more nicely
<petevg> cory_fu: merged. This is awesome stuff -- saves me a lot of find-replacing when I test hadoop-processing :-)
<cory_fu> bcsaller: Where would I look to colorize log messages in matrix?
<bcsaller> cory_fu: easier to explain in a hangout
<cory_fu> bcsaller: Ok, I'm in matrix daily
<alexisb> thumper, axw ping
<skay_> juju-deployer stacktrace https://www.irccloud.com/pastebin/O28WeBHn/
<skay_> not sure why the above happened
<cory_fu> petevg, bcsaller: Quick PR and I'm EOD.  Have a good one
<cory_fu> petevg, bcsaller: Helps if I include the link; https://github.com/juju-solutions/matrix/pull/30
<cory_fu> bcsaller: Removed the  unused block and merged
<bcsaller> cory_fu: thanks
<tvansteenburgh> skay_: i'd check the juju logs on that one. looks like something went wrong server-side
#juju 2016-12-01
<geekgonecrazy> Greetings everyone.  Does anyone know of a mongodb 3.2 charm?  Looks like the "mongodb" one is 2.4 only
<geekgonecrazy> or even a mongo 2.6 version :D
<lazyPower> geekgonecrazy: Are you looking for one thats ready to use or are you looking for something thats still wip but on its way to being better than the 2.4 charm we have available today?
<lazyPower> geekgonecrazy: if you're looking for the former, we dont have anything more recent than whats in the charm store. There's a layer that has been under dev and put on hold for other priorities that you could reasonably pick up and help complete
<geekgonecrazy> lazyPower: i'm not necessarily looking for better.  :)  Rocket.Chat, due to meteor underneath, needs at least Mongodb 2.6 to function.
<geekgonecrazy> After wasting several hours trying to get our charm working, I finally realized that the mongodb charm is only at 2.4
<geekgonecrazy> :D
<geekgonecrazy> I ended up taking the juju 1.x type approach instead of 2.0 so no layers.  Mainly because I was looking to get done quickly and had absorbed enough information prior to your suggestion that it was just easier to go this way for a quick and dirty first version
<geekgonecrazy> https://github.com/RocketChat/juju-charm
<geekgonecrazy> lazyPower: but I have a team mate very urgently wanting to try it.  So if you know where abouts I can find the wip version we'd gladly take a look
<geekgonecrazy> I assume its one that would have to be built and not already in the charm store in some beta form :)
<lazyPower> geekgonecrazy: correct, let me fish that repository up
<lazyPower> geekgonecrazy: https://github.com/marcoceppi/layer-mongodb
<lazyPower> no readme, but the gist is clone that repository, run `charm build` and give it a whirl.
<geekgonecrazy> lazyPower: sweet! much appreciated!
 * lazyPower hat tips
<kjackal> Good morning Juju world!
<jacekn> what's the process to get this charm into xenial? It should work just fine from what I can see
<kjackal> jacekn: what charm is this?
<jacekn> kjackal: I forgot to paste it didn't I? https://jujucharms.com/apache2/trusty/20
<jacekn> from what I can see it's just simple upload + publish
<kjackal> jacekn: yeap, sounds simple. You should ping the maintainer of the charm
<jacekn> kjackal: what's charmers' IRC nick?
<kjackal> let me check
<kjackal> jacekn:  the maintainer should be gnuoy if I am not mistaken
<jacekn> gnuoy: so hello! I want to use https://jujucharms.com/apache2/trusty/20 on xenial, I think it should work just fine but needs to be pushed to jujucharms.com
<jacekn> gnuoy: is this something you can help your IS colleagues with?
<gnuoy> jacekn, I haven't touched that charm in years
<gnuoy> jacekn, happy to help out however I can though
<gnuoy> Maybe transfer of ownership is the best longterm thing?
<jacekn> gnuoy: if you're not the active maintainer that's fine, I'm just trying to figure out who to talk to about it
<jacekn> the problem here is that I can probably push it but it's promulgated charm and it would end up in xenial without any review
<kjackal> Hey marcoceppi the jenkins charm on github.com/jenkinsci is using some interfaces that are outside the jenkinsci org. I can add them to the jenkinsci org if I get added there, or we could pull the jenkins charm into juju-solutions along with the review-queue etc.
<marcoceppi> kjackal: what?
<kjackal> marcoceppi: the jenkins charm (https://github.com/jenkinsci/jenkins-charm) uses layers (https://github.com/jenkinsci/jenkins-charm/blob/master/layer.yaml) that are  in free's github repos (eg https://github.com/freeekanayaka/interface-jenkins-extension.git)
<marcoceppi> kjackal: that's fine
<kjackal> marcoceppi: wouldn't it be better if we move the interfaces with the charm; under the same org?
<kjackal> marcoceppi: Free does not object to that
<marcoceppi> kjackal: sure, but we don't own that org, jenkins does, and it takes a bit of time
<marcoceppi> i'd rather just move the interface to github.com/charms for now, as a central location
<kjackal> marcoceppi: that would be an improvement, I could move to juju-solutions
<marcoceppi> no
<marcoceppi>  /charms would be better
<kjackal> marcoceppi: we are talking about https://github.com/orgs/charms/people ? With three people?
<kjackal> marcoceppi: (I thought we owned jenkinsci)
<marcoceppi> kjackal: no, jenkins.org owns it
<marcoceppi> it's their upstream org
<kjackal> kwmonroe: are yo around?
<marcoceppi> rick_h: btw, stop editing your yaml files for $JUJU_HOME, `juju help set-default-region` ;)
<rick_h> marcoceppi: oh! you're right!
 * rick_h hangs head in shame
<mgz> marcoceppi: some old habits are very hard to break
<mgz> I almost like hand editing yaml now...
 * rick_h thinks mgz is nuts...but ok
<rick_h> SimonKLB: ok, got one going with released 2.0.1 http://paste.ubuntu.com/23563553/
<rick_h> SimonKLB: same thing, no volume in the output
<SimonKLB> rick_h: odd...
<SimonKLB> rick_h: we seem to diverge at the user data, the size is different and after that i get 'destroying model "controller"'
<SimonKLB> while you get 'started instance "47d92c..'
<marcoceppi> rick_h: what's the api endpoint for bundle file? https://api.jujucharms.com/v5/bundle/canonical-kubernetes/file/bundle.yaml
<marcoceppi> rick_h: I keep trying variations of that, and can't get it to work
<rick_h> marcoceppi: https://api.jujucharms.com/v5/bundle/canonical-kubernetes/archive/bundle.yaml
<rick_h> marcoceppi: s/file/archive
<marcoceppi> bleh, thanks!
<SimonKLB> rick_h: are you running on the public or private cloud?
<rick_h> marcoceppi: there's a link on the charm details page as well
<rick_h> SimonKLB: public
<SimonKLB> also the same then
<rick_h> SimonKLB: are you on private? I don't think the provider works on the private cloud because it's so different
<SimonKLB> nope, public here as well
<SimonKLB> this is really strange :D
<magicaltrout> marcoceppi: is developer.juju.solutions still the place for CDP signups?
<marcoceppi> magicaltrout: yup
<magicaltrout> ta
<SimonKLB> rick_h: ah, i'm actually running 2.1-beta1 if that could make any difference
<SimonKLB> rick_h: http://paste.ubuntu.com/23563595/
<SimonKLB> see if you can make any sense of it
<SimonKLB> rick_h: 2.1-beta2 fixed it :)
<cholcombe> lazyPower, if i'm making a subordinate that needs to attach to both reactive and non-reactive charms, is it best practice to keep it non-reactive for now?
<lazyPower> cholcombe i'm not sure i understand the question. if its remote charms, it doesn't matter what its built with.
<cholcombe> my question is i'm making a subordinate to attach to other charms like ceph, gluster, vault, etc.  some of them are old style and some of them are reactive.  icey mentioned that mixing a reactive subordinate with a non reactive charm causes problems
<lazyPower> ah i haven't run into that. i cant see why it would cause a problem tho
<lazyPower> just make sure you have your subordinate in a virtualenv
<lazyPower> which can be controlled by the layer.yaml, see layer-basic's readme for that one.
<cholcombe> the subordinate needs to access the filesystem of the parent charm
<cholcombe> it's going to be backing up directories from the parent charm
<lazyPower> should be fine
<cholcombe> ok
<magicaltrout> marcoceppi: if I rewrote the Solr charm in reactive and pulled the latest solr stuff, is it worth me submitting a PR to the existing charmers solr charm? or just publishing a new one?
<magicaltrout> i'm working with the JPL/USC guys to bring a web crawler to juju but it requires a newer solr
<kwmonroe> bbcmicrocomputer: i didn't see you here (hence the PM).  anyway, you're the current maintainer of https://jujucharms.com/solr/.  would you like magicaltrout to submit PRs to you, or push a new solr charm?
<bbcmicrocomputer> kwmonroe, magicaltrout: I'd just push a new charm
<kwmonroe> magicaltrout: fwiw, bigtop's solr is 4.9.0 (https://ci.bigtop.apache.org/job/Bigtop-1.1.0/BUILD_ENVIRONMENTS=ubuntu-14.04,label=docker-slave/lastSuccessfulBuild/artifact/output/solr/).  if that's recent enough, i can work with you to create a bigtop-solr charm.
<magicaltrout> thanks chaps. No kwmonroe we need to target solr6, solrcloud etc
<magicaltrout> i've just migrated a bunch of stuff off of 4, 4 is like the stone age ;)
<kwmonroe> heh -- roger that magicaltrout
<justicefries> hey all. is the OSX binary for juju building with Go 1.7 yet?
<marcoceppi> geekgonecrazy: hey, re-mongodb, most everything is there wrt to newer versions. Just needs replicasets
<marcoceppi> justicefries: no idea, rick_h ^
<rick_h> justicefries: marcoceppi sorry, don't think so yet. /me goes to check what go version is available in the ubuntu OS
<rick_h> huh, did golang move package names in recent ubuntu versions?
<justicefries> dunno. the main reason I ask is we're still suffering a bit from the Go 1.6 MacOS Sierra issue with the juju client, and it makes it hard to distribute around our team.
<jrwren> rick_h: not afaik.  still 1.2 everywhere.
<justicefries> i'm working off the version I compiled.
<jrwren> err, 1.6
 * rick_h is on the hunt now, wtf
<jrwren> justicefries: any reason you can't give them your juju binary and tell them to put it in ~/bin or /usr/local/bin ?
<marcoceppi> justicefries: what's the sierra issue?
<marcoceppi> jrwren: well we should probably fix the issue
<rick_h> oooh, they moved to the golang-$version
<rick_h> https://launchpad.net/ubuntu/+source/golang-1.7 and https://launchpad.net/ubuntu/+source/golang-1.6
<rick_h> justicefries: so no, we're not using 1.7 at the moment, but looks like 1.7 is available in zesty/yakkety so something for us to consider at some point.
<justicefries> jrwren: mostly the natural nervousness of "here, take this thing I compiled and run it. ;)". I'll probably make my own internal build CI job.
<jrwren> yes, fix the issue. I only suggest work arounds.
<jrwren> justicefries: its funny how folks will trust things from strangers off the internet before they trust their own coworkers. :)
<justicefries> i know right?
<magicaltrout> yeah but i put keyloggers in all binaries i ship to my coworkers....
<justicefries> in reality, they trust it just fine, though I don't want them to trust it unless there's a clear source -> binary build that's automated and auditable.
<vmorris> you're more likely to get scammed from someone you know... that's what i hear repeated all the time anyways
<rick_h> justicefries: I'm sorry, is there a bug/note on the sierra issue?
<rick_h> justicefries: I assume we're not on top of it because it needs go 1.7?
<justicefries> hmm, I think there's a closed bug I saw somewhere but it hadn't been rolled out for builds.
<lazyPower> magicaltrout - i know, and i wiresharked your traffic :|
<justicefries> i'll have to dig it up, but basically, Go 1.7 fixes some (I think ABI?) changes for MacOS Sierra. Go 1.6 and prior has weird, unexpected failures - random panics, random networking issues, etc.
<rick_h> justicefries: random networking issues?
<justicefries> specifically on MacOS Sierra.
<rick_h> justicefries: ok, if you can find that I'd love to see what's up. We'd like it to work and I'm not aware of us pressing on 1.7 so if we should be it'd be good to know.
<justicefries> https://github.com/golang/go/issues/16579
<justicefries> basically any binary will just unexpectedly fail. let me find the launchpad issue for Juju..
<justicefries> juju-core encompasses everything yeah?
<lazyPower> justicefries - i know these symptoms well. i feel them on kubectl
<justicefries> i could've sworn there was an issue.
<justicefries> lazyPower: how are you installing kubectl today? its fixed! unless you grab the gcloud components install version.
<lazyPower> justicefries https://bugs.launchpad.net/juju/+bug/1633495
<mup> Bug #1633495: Panic MacOS Sierra <osx> <juju:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1633495>
<justicefries> brew install kubernetes-cli will give you a good version. :) I fixed that sucker.
<justicefries> ya, that's it.
<lazyPower> justicefries - i'm using what was shipping in brew
<lazyPower> i haven't updated brew recently though
<justicefries> ah yeah, do that.
<justicefries> so yeah, that's the launchpad issue. for juju, short lived commands you just repeat.
<justicefries> but long running ops you're guaranteed a panic just about.
<justicefries> kat-co's fix too fixes the issue with creating controllers from MacOS which is sweet.
<geekgonecrazy> marcoceppi: ok awesome.  Colleague is giving it a shot.  Said he ran into an error where it said "missing implementation of provides interface"
<rick_h> justicefries: ouch! "First, doing so would give the appearance of support for Go 1.6 when in fact Go 1.6 is now unsupported." well wtf
<marcoceppi> geekgonecrazy: yeah, you'll need my fork of the mongodb interface, https://github.com/marcoceppi/interface-mongodb
<justicefries> rick_h: yeaaaah. :|
<marcoceppi> geekgonecrazy: if you create an INTERFACE_PATH environment variable and clone that into it you'll get past the error
<marcoceppi> geekgonecrazy: basically export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces; mkdir -p $INTERFACE_PATH; git clone https://github.com/marcoceppi/interface-mongodb $INTERFACE_PATH/mongodb
<justicefries> rick_h: that's the Go team's weird cognitive dissonance around LTS versions, as if everybody should re-compile and re-deploy everything every 6 months, and replace all of their binaries.
<rick_h> thanks justicefries I didn't realize it wasn't a supported release. Like someone else mentions in https://github.com/golang/go/issues/17234 it's been listed as supported
<rick_h> but guess that fell off...
<jrwren> justicefries: we should probably reopen 1633495. I'm adding a comment.
<rick_h> justicefries: so this is what i mean I guess, I'll have to bring up to the release team what our plan is with 1.7 and see what we can do.
<jrwren> i'm trying to add a comment, but LP will not let me :{
<justicefries> yeah that makes sense. I've got a manual workaround over here for the moment, which I'll probably end up just creating my own matching CI pipeline to cut Sierra releases for us.
<justicefries> or maybe its not worth it, and I just distribute the juju binary I've added a bunch of spyware to. ;)
<marcoceppi> justicefries rick_h could we maybe just update the homebrew recipe to use go-1.7?
<rick_h> marcoceppi: yea, I'll bring it up with the release team in our sync later today
<marcoceppi> <3
#juju 2016-12-02
<kjackal> Good morning Juju world!
<ionutbalutoiu> Hello, guys! Do you know if I'm able to release a charm on charm store without uploading any resources ? My use case is that I have a charm that uses resources, but I don't want to upload them to charm store due to EULA. Still I want users to get the resources and deploy the charm specifying them at deploy.
<ionutbalutoiu> Users will get the resources separately as described in the README, thus we don't get into trouble because we redistribute software.
<kjackal> ionutbalutoiu: Hi, yes you can do that
<kjackal> ionutbalutoiu: this is practically how all charms written prior to juju 2.0 work
<kjackal> let me grab you an example
<ionutbalutoiu> ibalutoiu@samba:~/work/juju/charms/windows/azure-service-fabric$ charm release cs:~cloudbaseit/azure-service-fabric-8
<ionutbalutoiu> ERROR cannot release charm or bundle: bad request: charm published with incorrect resources: resources are missing from publish request: asf-zip-package, dotnet-installer
<ionutbalutoiu> This is what I get when trying to release a charm that has resources defined but not uploaded. Remember, I mean 'release/publish' not push to the charm store.
<kjackal> ionutbalutoiu: Ah I see now what you are saying
<kjackal> ionutbalutoiu: you want to use the resources mechanism but not upload a binary
<ionutbalutoiu> yep
<kjackal> ionutbalutoiu: yes we do have exmple for that too
<ionutbalutoiu> as I want users to specify them at deploy.
<kjackal> ionutbalutoiu: the idea is that you upload a dummy empty file as a resource
<kjackal> ionutbalutoiu: let me grab an example :)
<ionutbalutoiu> kjackal, yea. I thought about that too. But hoped that there might be a switch or something, which I didn't find in the help, to publish without resources.
<kjackal> ionutbalutoiu: I am not aware of any such switch
<kjackal> unfortunately the example I had in mind is not that helpfull
<ionutbalutoiu> kjackal, people will now go to the charm page and they can download the resources; a file will pop up for the download, but instead it will be a fake file.
<ionutbalutoiu> kjackal, that would do the job for now. But still imo it would've been better if resources remained grayed out as they are now: https://jujucharms.com/u/cloudbaseit/azure-service-fabric/8 even after the publish.
<ionutbalutoiu> The use case is that sometimes a EULA doesn't permit redistributing the software. So people using my resources have to go to the original website, accept the EULA there, and provide the resource for my charm manually at deploy time.
<kjackal> I see your point. True, exposing the resources like this might sometimes be wrong. There might be resources there esoteric to the internal operation of the charm
<ionutbalutoiu> Yep
<kjackal> Good point, would you want to send an email with your use-case to the list? Best case scenario there might be a way around this already; worst case the teams responsible for this would hear you.
<ionutbalutoiu> Yep. Please do send. That would be awesome.
<kjackal> What should I tell them? Who are you? What is your affiliation & email?
<ionutbalutoiu> I'm Ionut Balutoiu, Cloud Engineer at Cloudbase Solutions. The guys writing Juju charms for Windows. :)
<kjackal> Thanks
<ionutbalutoiu> np
<ionutbalutoiu> kjackal, For now, the question is: when I upload an empty file as a resource so I can publish the charm, and someone manually specifies the resource at deploy as I want them to, the resource specified by the user gets uploaded to the state machine instead of the default empty file from the charm store, right?
<kjackal> Yes. You can even get the size of the resource you got from the store, so that you can detect that this is not the right file and display the right message
<ionutbalutoiu> Cool
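(Editor's sketch: kjackal's size-check tip as runnable Python. The `check_resource` helper is hypothetical; a real charm would pass it the path returned by the resource-get hook tool.)

```python
# Hypothetical guard against the empty placeholder resource that was
# uploaded only to satisfy `charm release`: a zero-byte file means the
# user has not supplied the real resource at deploy time.
import os
import tempfile

PLACEHOLDER_MAX_BYTES = 0  # the dummy resource is an empty file

def check_resource(path):
    """Return True if the resource looks real, False if placeholder/missing."""
    if not os.path.exists(path):
        return False
    return os.path.getsize(path) > PLACEHOLDER_MAX_BYTES

# Example with a temporary empty file standing in for the dummy resource:
with tempfile.NamedTemporaryFile() as f:
    print(check_resource(f.name))  # empty file -> False
```

A charm could call something like this from its install handler and set a blocked status with "please attach the real resource" when it returns False.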
<ionutbalutoiu> kjackal, and btw, ibalutoiu@cloudbasesolutions.com is my e-mail. You asked for this one also, sorry.
<kjackal> thanks I will put you in cc in case you want to add-up anything
<ionutbalutoiu> Thank-you
<ionutbalutoiu> There's also another topic I have a question about, if you (or anyone else) still have some time. Context: you have 4 peer units and you remove one unit; only in relation-departed do you still have access to relation data. I need the relation data, so I want to do my business logic there.
<ionutbalutoiu> How do I find out which unit is leaving and which ones are staying? As far as I noticed, relation-departed triggers on all of them like this: once on each of the units remaining in the peer relation, and N times on the unit leaving (where N is the number of units remaining).
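(Editor's sketch: the asymmetric firing pattern described above, illustrated as a small simulation. This is not juju code; it only models the counts, and as stub points out below, within any single hook invocation a unit still cannot tell whether it or the remote unit is the one being removed.)

```python
# Illustrative model of relation-departed firings in a peer relation
# when one unit departs: each remaining unit sees one firing (for the
# leaving unit), while the leaving unit sees one firing per remaining
# peer as its conversations are torn down.
def departed_hook_firings(units, leaving):
    remaining = [u for u in units if u != leaving]
    firings = {u: [leaving] for u in remaining}   # once on each survivor
    firings[leaving] = list(remaining)            # N times on the leaver
    return firings

print(departed_hook_firings(['a', 'b', 'c', 'd'], 'd'))
```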
<kjackal> ionutbalutoiu: Email sent
<ionutbalutoiu> kjackal, received, thanks.
<kjackal> ionutbalutoiu: here is how we query for the nodes in spark: https://github.com/juju-solutions/interface-spark-quorum/blob/master/peers.py
<kjackal> we iterate over the conversations
<ionutbalutoiu> Yes. I understand, but is there a way to figure out which unit is leaving?
<ionutbalutoiu> That would give you a list of all the peer units, right? The one leaving and the others. I would like to know which one is leaving.
<stub> ionutbalutoiu: A unit cannot tell if it is the one being torn down, or if it is $REMOTE_UNIT that is being torn down.
<stub> I'm sure there is an open bug on this... looking
<ionutbalutoiu> stub, I just ran into this use case when writing my last charm. Sometimes it is useful to know whether you're the one leaving or not.
<stub> Yes. It causes some data loss issues for me.
<stub> If a Cassandra unit is leaving the cluster, it needs to decomission itself cleanly. If a different unit is leaving the cluster, it doesn't care.
<stub> It's tied up in https://bugs.launchpad.net/juju-core/+bug/1417874, but a separate bug report may help
<mup> Bug #1417874: [RFE] Impossible to cleanly remove a unit from a relation <canonical-is> <charms> <feature> <hooks> <sts> <sts-rfe> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1417874>
<stub> ionutbalutoiu, kjackal : I opened up https://github.com/juju/charmstore-client/issues/103 on optional resources. I realized this problem is going to hit all the charms we have in development using snaps.
<kjackal> thanks stub
<ionutbalutoiu> thank-you stub
<skay> hi, I was using 'waiting' for things like waiting for a database relation, but after reading https://jujucharms.com/docs/stable/reference-status I should use 'blocked' for things like that, yes or no?
<rick_h> skay: yes, if your application doesn't function without a database then blocked is best
<rick_h> skay: it indicates that the user needs to pay attention here
<rick_h> skay: waiting is only temp and does not alarm the user
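The blocked-vs-waiting distinction rick_h describes can be sketched as a small status chooser (a toy sketch, not the charmhelpers `status_set` API; the function name and message strings here are invented for illustration):

```python
# Toy sketch of the status guidance above (not the charmhelpers
# status_set API; messages are invented for illustration).

def pick_status(db_related, db_ready):
    """Return a (status, message) pair for a charm that needs a database."""
    if not db_related:
        # Nothing can proceed until the operator adds the relation:
        # "blocked" tells the user to pay attention here.
        return ("blocked", "please add a database relation")
    if not db_ready:
        # The relation exists and will settle on its own: "waiting"
        # is temporary and does not alarm the user.
        return ("waiting", "waiting for database to become ready")
    return ("active", "ready")

print(pick_status(False, False))  # ('blocked', 'please add a database relation')
print(pick_status(True, False))   # ('waiting', 'waiting for database to become ready')
print(pick_status(True, True))    # ('active', 'ready')
```

In a real charm the returned pair would be fed to `status_set`.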
<skay_> rick_h: thanks
<skay_> (I'm also building up to some questions about reactive states)
<skay> ok, reactive states question. the docs don't show @hooks mixed with reactive state decorators. am I allowed to mix them?
<skay> if I can't mix them, then I'd just have conditionals inside my hook
<stub> Whoops, ranting on bug reports. Must be dinner time.
<rick_h> stub: :)
 * stub wanders off to find a baby to eat
<Spaulding> Hi guys
<lazyPower> Heyo Spaulding  (belated)
<skay> question, I'm not sure I should keep my install hook. I have a function that does some installation stuff @when_not('snap.installed.yadda')
<skay> so, I could just get rid of my install hook and use a reactive function instead. do I understand correctly?
<marcoceppi> skay: sounds about right
<skay_> cool
<marcoceppi> skay: to clarify, you're removing like @hook('install'), yeah?
<skay_> marcoceppi: yes. because it doesn't seem necessary anymore
<marcoceppi> skay: totally, you almost never need to use @hook
<skay_> the relation-joined/changed/departed hooks still get used a lot?
<marcoceppi> skay: oh, sure, but in the charm layers (not the interfaces)
<skay> marcoceppi: is there a state that gets set when a resource is attached?
<skay> marcoceppi: I didn't find it but maybe I overlooked it
<marcoceppi> skay: not at the moment, that's triggered in the @hook('upgrade-charm')
<marcoceppi> or, if provided during deploy, then it'll just be available
<skay> currently I'm just checking resource_get before deciding to install the snap
<skay> and in the upgrade-charm hook I check to see if the resource has actually changed
<marcoceppi> skay: yeah, resource_get will either get latest, or return current path
<skay_> I'm doing that, and then checking any_file_changed([resource_file]) before calling snap.install
<skay_> you or someone may have explained that the any_file_changed was good for checking for that
<marcoceppi> skay: it wasn't I, but it's actually a pretty good idea. I think we'll work on adding a resource.changed state into the base layer, but it'll need a bit more time to mature to figure out what that would look like from a usability perspective
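The `any_file_changed` approach discussed above boils down to remembering a hash of the resource file; here is a stdlib-only sketch of that idea (charmhelpers persists the hash in unit storage, while this toy version uses an in-memory dict):

```python
# Stdlib-only sketch of the any_file_changed idea (charmhelpers keeps
# the hash in unit storage; this toy version uses an in-memory dict).
import hashlib
import os
import tempfile

_seen = {}  # path -> last known digest

def file_changed(path):
    """True if the file's contents differ from the last time we looked."""
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    changed = _seen.get(path) != digest
    _seen[path] = digest
    return changed

# demo: a fresh file counts as changed, rechecking does not,
# rewriting it does again
fd, path = tempfile.mkstemp()
os.write(fd, b'resource v1')
os.close(fd)
print(file_changed(path))  # True
print(file_changed(path))  # False
with open(path, 'wb') as f:
    f.write(b'resource v2')
print(file_changed(path))  # True
os.remove(path)
```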
<cholcombe> are we no longer allowed to use the juju-info relation?
<cholcombe> i'm seeing: ERROR charm "preserve" using a reserved relation name: "juju-info"
<cholcombe> for 2.0.1 juju
<cholcombe> oh nvm i had juju-info in the wrong place :)
<cholcombe> hah
<skay_> when I try to destroy-model it doesn't work because it cannot connect to API
<skay_> what do I need to kick?
<skay_> help, how did I kill juju this time
<rick_h> skay_: ? what's up?
<rick_h> skay_: use kill-controller if the controller is down and it should tear everything down bypassing the api server?
<skay_> aha, I'll try that
<skay_> the api server is not responding
<rick_h> yea, it should try, and then fail, and then go straight to the cloud provider to remove the machines using the provider's apis
<petevg> bcsaller, cory_fu: hadoop-processing working! (modulo the reset issue, at least) https://github.com/juju-solutions/matrix/pull/34
<petevg> bcsaller: any objections to me merging cory_fu's updated color handling? https://github.com/juju-solutions/matrix/pull/32
<bcsaller> petevg: sounds good
<petevg> bcsaller: merged cory_fu's branch. Rebased and pushed an update for https://github.com/juju-solutions/matrix/pull/34
<cory_fu> petevg: Just added a minor comment / question to your PR
<petevg> cory_fu: answered your question :-)
<petevg> cory_fu: basically, lots of empty parens made me sad, and made me worry about bugs.
<cory_fu> Yeah, fair enough.  I think the no-parens vs empty-parens thing with Python decorators is kind of dumb.
<cory_fu> petevg: I made @only_once work either way in charms.reactive: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/decorators.py#L217
<petevg> cory_fu: yeah. Admittedly, they are two separate beasts. Add the parens, and you either need another level of wrapper function, or another method in the class. The implementation would get really messy if you had to figure out which one the user meant ...
<cory_fu> But to do that with your decorator would be more of a hassel
<cory_fu> *hassle
<petevg> cory_fu: fancy :-)  But yeah, too late on a Friday to figure out how to do that with the monster that I've built ...
<cory_fu> petevg: Yeah, it's not worth worrying about
<petevg> Cool.
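The trick cory_fu links to, making a decorator accept both `@only_once` and `@only_once()`, can be sketched like this (a simplified stand-in, not the charms.reactive implementation):

```python
# Simplified stand-in for the linked charms.reactive trick: if the
# decorator is used bare (@only_once), it receives the function
# directly; if called (@only_once()), it gets no function and returns
# itself to be applied in a second step.
import functools

def only_once(fn=None):
    if fn is None:          # used as @only_once(): act as a factory
        return only_once
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if not getattr(wrapper, '_ran', False):
            wrapper._ran = True
            return fn(*args, **kwargs)
    return wrapper

@only_once
def setup():
    return 'ran'

@only_once()
def teardown():
    return 'ran'

print(setup(), setup())        # ran None
print(teardown(), teardown())  # ran None
```

This only works cleanly because the decorator takes no other arguments; as petevg notes, decorators that accept parameters need an extra wrapper level either way.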
<cory_fu> petevg: Actually, now that I think about it, I don't really like the potential for errors with tag name typos, either.  What about something like this: http://pastebin.ubuntu.com/23569587/
<petevg> cory_fu: if anyone wants to add actions and tags, they'll need to add a bunch of arguments to the tagged_action decorator. The idea with the tags was that outside actions could add to them, without messing with core.
<cory_fu> petevg: What would such tags be useful for if they're not understood by the plan generator?
<petevg> cory_fu: They might not be. But you're still looking at adding an arg in two places (plan, action) rather than three, plus checking to make sure that nobody has messed things up by making assumptions about positional arguments.
<petevg> cory_fu: I guess another approach could be to make a list of constants in tags.py. That saves typos (or at least, surfaces them quickly).
<petevg> cory_fu: updated my PR with the constants thing. If you want to argue strongly for just doing args, feel free. I think that the constants will be neater in the long run, though.
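The constants approach petevg describes can be sketched as follows (the tag names and the `tagged_action` signature here are hypothetical, not matrix's real code): a typo in a bare string tag fails silently, while a typo against a module constant raises a `NameError` at import time, and unknown string tags can still be rejected at registration:

```python
# Hypothetical sketch of the constants idea (these tag names and the
# tagged_action signature are invented, not matrix's real code).
# A typo in a module constant fails loudly at import time (NameError),
# and unknown string tags can be rejected at registration.
HA = 'ha'
CHAOS = 'chaos'
PERFORMANCE = 'performance'
KNOWN_TAGS = {HA, CHAOS, PERFORMANCE}

def tagged_action(*tags):
    unknown = set(tags) - KNOWN_TAGS
    if unknown:
        raise ValueError('unknown tags: %s' % sorted(unknown))
    def register(fn):
        fn.tags = set(tags)  # a plan generator could filter on these
        return fn
    return register

@tagged_action(HA, CHAOS)
def reset_model():
    pass

print(sorted(reset_model.tags))  # ['chaos', 'ha']
```

Outside actions can still extend `KNOWN_TAGS` without touching core, which is the property petevg wanted to keep.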
<petevg> bcsaller ^
<petevg> cory_fu, kwmonroe: do we have the sauce for the hadoop-spark bundle checked in anywhere?
<kwmonroe> petevg: https://github.com/juju-solutions/bigtop/tree/feature/hadoop-spark-bundle/bigtop-deploy/juju/hadoop-spark
<petevg> kwmonroe: got it. Is that close to being checked into master, by any chance?
<kwmonroe> no petevg, i haven't created a pr to link to https://issues.apache.org/jira/browse/BIGTOP-2561 yet
<kwmonroe> i mean, i could if you need it.  it's not hard.  i just haven't done it yet.
<petevg> kwmonroe: I was going to brag about how we were in Apache bigtop in D.C., but then the bundle isn't actually in trunk. I can point to the source on the store, though, and then look at the apache repo once I get to the layer ...
<petevg> cory_fu: heh. I had misspelled "subordinate" in my docstring. Thank you for pushing to less typo bugs :-)
<lucacome> hello
<kwmonroe> petevg: if you're still around, care to gander at this?  https://github.com/apache/bigtop/pull/166
<kwmonroe> no worries if you can't get to it tonight petevg.  i can def get it merged before Monday.
<petevg> kwmonroe: looking ...
<petevg> kwmonroe: +1 thank you :-)
<kwmonroe> thar she blows petevg:  https://github.com/apache/bigtop/tree/master/bigtop-deploy/juju  have a good time in DC!
<petevg> kwmonroe: yay! You rock. Thank you :-)
<lucacome> I'm creating a charm for NGINX, is there someone that can help me/review what I've done so far?
<kwmonroe> lucacome: unfortunately, it's already EOD for a lot of people that hang out here (i've got 1 foot out the door myself), but there is an nginx layer that may be helpful to consume/tweak:  https://github.com/battlemidget/juju-layer-nginx
<kwmonroe> lucacome: to keep your issues from being lost in irc ether, i'd suggest firing off an email to the list (juju@lists.ubuntu.com) if you have specific charming questions or want to have some eyeballs on your work so far.
<lucacome> kwmonroe, yes I started from that implementation
#juju 2016-12-03
<derekcat> Hey everyone, could I trouble someone for a little help with the cs:~/blake-rouse/maas-region-4 charm?
<derekcat> I'm getting the Workload: blocked and the Message: "Missing admin config", but the credentials appear to be there if I run $ juju config maas-region
<derekcat> Not sure where it's trying to tell me to go look...
<beisner> oh where oh where did `charm publish` go?   ERROR unrecognized command: charm publish  http://pastebin.ubuntu.com/23570197/
<beisner> marcoceppi, arosales - did we change the publish command / syntax? ^
<beisner> when users move from `charm` in xenial/updates to juju/stable, publish becomes release, and i just did not know such a thing, somehow.
<arosales> beisner: I'll file a bug against the docs re publish https://jujucharms.com/docs/stable/authors-charm-store#publishing-to-channels
<arosales> beisner: https://github.com/juju/docs/issues/1555
<constl> Good morning all, marcoceppi are you available? I was wondering why bash-completion is not working when installing charm-tools
<beisner> many thanks arosales
#juju 2016-12-04
<slrocketchat> @marcoceppi   we are using wip  layer-mongodb to deploy an instance of 2.6.10 ;  found that the default instance gives `MongoError: not master and slaveOk=false`  and needed a `rs.initiate()` from the mongo shell to make it primary before accessible (read/write)  ; is there any way to build this into the default instance?  tia
#juju 2017-11-27
<assaf> hi guys, i'm trying to deploy the canonical distro of kubernetes in an offline environment. currently i'm stuck deploying the kube-proxy snaps; i tried manually downloading and attaching the snaps to the kubernetes-worker resource but the services fail to start
<ryebot> assaf: Hey, I can try to help you with that. Can we start with which snaps you downloaded and attached?
<ryebot> assaf: I'd also like to refer you to this documentation: https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Running-CDK-in-a-restricted-environment
<ryebot> assaf: In particular, you may find the cdk-shrinkwrap tool helpful in your case.
<torontoyes> Is it possible to use juju to deploy windows 10 using Maas?
<jose-phillips> hi
<jose-phillips> is it possible to remove ceph from the openstack juju charm?
<jose-phillips> and where are the charms stored, so I can perform a modification?
<jamesbenson> stokachu:ping
<jamesbenson> ryebot: ping
<ryebot> jamesbenson: Something I can help with?
<jamesbenson> ryebot: I was wondering if you could provide some input on canonical k8s deploy
<ryebot> jamesbenson: Yes, definitely
<jamesbenson> ryebot: It's completed (I think) the conjure-up; all services are active, and in juju status everything is active and idle. However, the conjure-up window hasn't gone away yet... I thought there were post-conjure things that happened.
<jamesbenson> ryebot: I can send screen shots if you want
<jamesbenson> ryebot: In conjure-up it has a "Setting relationship kubeapi-load-balancer:apiserver <-> kubernetes-master:kube-api-endpoint"
<jamesbenson> ryebot: But it has been there for a while now.... 1+ hr.
<ryebot> alright
 * ryebot thinks
<ryebot> jamesbenson: can you open up another terminal and take a look at `juju status`
<jamesbenson> ryebot : yeah, I'm actively looking at that as well
<ryebot> jamesbenson: okay great, can you send me a pastebin of that?
<jamesbenson> http://paste.ubuntu.com/26060543/
<ryebot> perfect, one sec
<ryebot> jamesbenson: it appears that everything has deployed successfully.
<ryebot> jamesbenson: There may have been a hiccup in the conjureup output, but that looks good to me.
<jamesbenson> https://snag.gy/liuxHj.jpg
<jamesbenson> ryebot: so where do I go from here then?
<jamesbenson> ryebot : kubectl get nodes
<ryebot> jamesbenson: yeah, you can log into one of the boxes and go to town with kubectl
<ryebot> jamesbenson: or copy out the kubeconfig and use kubectl locally
<jamesbenson> ryebot : do you have a command for that?  which node is it in?
<ryebot> jamesbenson: all workers and masters will have a copy
<jamesbenson> okay
<ryebot> `juju ssh kubernetes-master/0`
<jamesbenson> yep, found the config file
<jamesbenson> copied it locally
<jamesbenson> ryebot: having issues with the command: kubectl --kubeconfig=~/.kube/config  (I made the dir & copied the config file here).  The command isn't valid it seems...
<ryebot> jamesbenson: do you have kubectl in your path?
<ryebot> ie can you kubectl --help ?
<jamesbenson> yes
<jamesbenson> http://paste.ubuntu.com/26060596/
<jamesbenson> ryebot: that's the output when I issue the kubeconfig command above.
<ryebot> jamesbenson: oh, it may not expand ~
<ryebot> jamesbenson: try it with the absolute path
<jamesbenson> ok
<jamesbenson> ryebot: same issue
<jamesbenson> ubuntu@k8s:~/.kube$ kubectl --kubeconfig=/home/ubuntu/.kube/config |pastebinit
<jamesbenson> http://paste.ubuntu.com/26060615/
<ryebot> jamesbenson: alright hmm
<jamesbenson> can I safely kill the conjure-up screen?
<jamesbenson> ryebot: I actually deployed twice on two different machines... both are stuck on the conjure up screen with different status' but same juju status, active/idle
<jamesbenson> ryebot: The other machine says it's stuck at "Setting relation easyrsa:client <-> kubernetes-master:certificates"
<ryebot> jamesbenson: I -think- so, but am not entirely sure - maybe stokachu can weigh in
<jamesbenson> https://snag.gy/rUToJM.jpg
<ryebot> jamesbenson: oh, I'm being silly - you'll need to tack a command onto that
<jamesbenson> ?
<ryebot> jamesbenson: so, `kubectl --kubeconfig=... get po`
<jamesbenson> ok
<stokachu> relations are there
<stokachu> you can kill conjure-up
<ryebot> stokachu: thanks :)
<jamesbenson> ah yeah, get nodes worked
<stokachu> cory_fu: ^ i see this on internal testing ci as well, where the deployment in conjure-up will stop at setting relations
<jamesbenson> is there a way to set the config so I don't always have to include that?
<jamesbenson> ah, I guess when I moved it into that dir, it did it...
<stokachu> i think there is  an environment var
<jamesbenson> kubectl get nodes works now without issue.
<stokachu> we also want to do some magic later on to merge the configs and have you switch by context
<stokachu> just not done yet
<jamesbenson> sweet, I'm still pretty new to k8s, so learning a lot.
<jamesbenson> thank you stokachu and ryebot
<stokachu> np
<stokachu> jamesbenson: hit us up in #conjure-up if you have any more specific questions around that
<jamesbenson> out of curiosity, where are you guys based out of?  I'm always curious how ^_^
<cory_fu> stokachu: conjure-up stops, but the relations exist?  That sounds like the AllWatcher not seeing the relations come through.  Not sure if we could handle that in libjuju in a reasonable way.  Maybe a timeout after which it re-inspects the complete model status?
<cory_fu> Seems heavy-handed
<cory_fu> We've already
<stokachu> cory_fu: yea, it's why majority of the auto ci runs fail
<cory_fu> *gotten complaints about pulling in the full model state on connect
<stokachu> jamesbenson: im EST
<cory_fu> "complaints"
<cory_fu> comments
<stokachu> "suggestions"
<jamesbenson> stokachu: nice, I used to be in NY, now in TX
#juju 2017-11-28
<assaf> @ryebot hi
<assaf> @ryebot here is my history: i installed a juju controller and added machines manually, then i downloaded the charms and installed the applications with juju with the machines set to 0, and manually created all the relations
<assaf> @ryebot then i got stuck with missing resources like flannel-amd64
<assaf> @ryebot so resources i downloaded from the juju charm page and attached to the application
<assaf> @ryebot but charms i downloaded using snap download kube-proxy for example
<assaf> @ryebot i got the etcd cluster and master with flannel installed, but the worker isn't loading kube-proxy and kubelet because of configuration issues
<gizmo__> hello. I'm trying to deploy a bundle with a charm with terms and this is the error I'm getting. https://gist.github.com/gizmo693/5a4fc5235da987a4f64e378e1850dd62
<cory_fu> bdx: Not sure if you're around this early, but I have an update on the Endpoints branch of reactive.  We're going to cut a dev release today, run it through CI for a week, and then release it for real.  However, we're going to make one small change that will break things.  We're going to rename Endpoint.flag to Endpoint.expand_name to make it more clear.
<cory_fu> bdx: If need be, I can deprecate the existing flag method for a bit so that it's not a hard break
<bdx> cory_fu: nah, its cool, I'll update the bits I have
<cory_fu> beisner: Hey, I just tagged 0.6.0rc1 a.k.a. proposed for charms.reactive.  I'll let that stew for this week, but is there anything else we need to do to get it run through, e.g. the OpenStack CI?
<jose-phi_> hi
<jose-phi_> question: does anyone have the problem that when the container is created in lxd, the container can only ping the host and not the rest of the network?
<beisner> hi cory_fu - thx for the heads up.  we'll discuss in our daily standup.
<bdx> kwmonroe: sup
<bdx> kwmonroe: do you hit this http://paste.ubuntu.com/26066506/
<bdx> with your graylog bundle?
<bdx> I feel like I'v filed a bug on that before for the elasticsearch charm
<bdx> I want that thing gone
<bdx> It's a huge burden that constantly causes me issues
<bdx> at every corner, eh ... I think I have the fix for this in my fork of the upstream charm
<bdx> don't know why I thought the upstream elasticsearch charm would work
<kwmonroe> yup yup bdx
<kwmonroe> that's https://bugs.launchpad.net/elasticsearch-charm/+bug/1714393
<mup> Bug #1714393: ERROR! lookup plugin (dns) not found <conjure> <Elasticsearch Charm:New> <https://launchpad.net/bugs/1714393>
<bdx> oh I think its different
<kwmonroe> bdx: the dns plugin packed into elasticsearch is too old and doesn't conform to the new plugin api, which means ES can't find it, which causes the firewall logic to fail.
<bdx> ahh right
<bdx> ok
<kwmonroe> you worked around it with a dig plugin (iirc)
<kwmonroe> i worked around it by disabling the ES firewall, which skips the firewall logic.
<kwmonroe> proving once again that firewalls are stupid and we should all just trust one another with our public ipv6 addresses.
<bdx> i see
<kwmonroe> coke, stop looking at pepsi traffic!
<kwmonroe> "ok".  problem solved ;)
<bdx> yea, I just wasn't seeing the dig error in my logs
<kwmonroe> bdx: you may have made other changes to the firewaller in the elasticsearch charm that doesn't fail if/when the dns/dig plugins fail
<bdx> ahh ok
<bdx> http://paste.ubuntu.com/26066536/
<bdx> running it manually exposes the underlying error
<kwmonroe> bdx: line 726 of your first paste shows the underlying error too :)  http://paste.ubuntu.com/26066506/
<bdx> ahh I see now, thx thx thx
<bdx> possibly I'll get some tests and polish in my new elasticsearch charm and we can look to get it swapped with upstream after the new endpoints stuff lands
<kwmonroe> +100 bdx
<bdx> hey, kwmonroe
<bdx> thanks for the +100
<bdx> but also
<bdx> https://imgur.com/a/lUAGr
<bdx> I think I see the disconnect
<bdx> that is leading to graylog seeming like it's not working
<bdx> https://imgur.com/a/XzOZW
<bdx> the elasticsearch node that graylog sees is itself
<bdx> lol
<bdx> "hey there are no logs!"
<bdx> go figure
<bdx> kwmonroe: not sure if you have gotten past that or if you are hitting that too
<bdx> just for kicks, I'm going to point filebeat at graylog and see what gives
<bdx> http://paste.ubuntu.com/26066668/ <- from graylog
<bdx> its listening
<kwmonroe> yeah bdx, you'll need to do "juju config filebeat logstash_hosts=GRAYLOG_IP:5044"
<bdx> ohh, not 9200?
<kwmonroe> negative bdx, you want to link filebeat to the graylog beats input
<kwmonroe> bdx: if you go to the graylog interface, System->Inputs, you'll see a beats input
<bdx> ahhhh
<bdx> I see it
<kwmonroe> and that'll be bound to 0.0.0.0:5044
<kwmonroe> bdx: i just learned this today.  i assumed graylog would pull logs out of ES, so the path would go Filebeat->ES->graylog, but that's not how it works.  graylog is more like a logstash replacement, so it goes Filebeat->graylog->ES
<kwmonroe> meaning filebeat needs to connect to the graylog beats input (which is done by filebeat logstash_hosts config, and not via relation... yet)
<bdx> got it got it
<bdx> then the elasticsearch charm/application is not needed
<bdx> ?
<bdx> ok, I see logs!
<bdx> yes
<bdx> I have been eyeing this thing for a few months now, trialing every few days when it catches my interest and just always failing due to like 1 of 50 reasons
<bdx> lol
<bdx> this is great to know the full path
<bdx> :)
<bdx> kwmonroe: priceless colab on that, thank you
<bdx> now we just habe to figure out how to make it better
<kwmonroe> bdx: graylog does require ES, so you can't just get rid of it.  if the internet taught me anything today, it's that graylog presents itself as an ES cluster node to take advantage of ES indexing.  as a cluster member, it can also read/write really fast to ES (non cluster members would have to hit the api and (de)serialize json all the time).
<bdx> right right
<bdx> but it runs es
<kwmonroe> whatchu talkin bout willis?
<bdx> oh, so what you are saying is just use juju to deploy an es cluster next to it to hook it up to
<bdx> so, like
<bdx> if you deploy graylog
<bdx> and look at the running processes
<bdx> the java/elasticsearch is running on graylog
<bdx> and it only seems to know about the elasticsearch node that is itself
<kwmonroe> that ain't because of graylog bdx.  did you deploy both gl and es to the same unit?
<bdx> no
<kwmonroe> don't you lie to me
<bdx> it gets that automatically
<bdx> that's what I was trying to show you ^^^^
<kwmonroe> i have a graylog deployed, and i don't see any elastic java procs on my gl unit
<bdx> with the http://paste.ubuntu.com/26066668/
<bdx> really
<bdx> ok
<bdx> so
<kwmonroe> right bdx -- i'm assuming you're running that netstat on 172.31.103.25, and your ES node is 172.31.103.161
<kwmonroe> also, lol @ -peanut.  i've never seen that
<kwmonroe> anyway bdx, that connection to 9200 is on a separate machine.  graylog is connecting to it, but it's not running an embedded ES or anything like that
<bdx> ok
<bdx> I think I follow
<bdx> so, how do you explain this
<bdx> ooooo
<bdx> I think I see
<bdx> this https://imgur.com/a/3lD5G
<bdx> is not indicative of an elasticsearch node, but a graylog node
<bdx> becasue graylog is a clustering type service
<bdx> ok
<bdx> I was so backwards
<bdx> thank you for enlightening me
<kwmonroe> you got it
<bdx> the elasticsearch config is in there somewhere
<kwmonroe> that's right bdx -- in the graylog interface, System->Overview will show you the ES config
<bdx> ahh I see it now
<kwmonroe> which graylog knows about because it's an ES cluster member
<bdx> I was looking in the wrong place initially
<bdx> sudo cat /var/snap/graylog/common/server.conf | grep elasticsearch
<bdx> I see
<bdx> that totally makes sense
<jose-phi_> is it possible, when i deploy containers with juju, to use a local image instead of copying the image for juju/xenial/amd64 from https://cloud-images.ubuntu.com/releases ?
<hml> navinsridharan: ping
<navinsridharan> hi
<navinsridharan> Is this Heather
<hml> navinsridharan: yes - it's Heather
<navinsridharan> Okay great
<hml> navinsridharan: can I get a pastebin of the debug juju bootstrap output?
<navinsridharan> Yeah will send you in a sec
<navinsridharan> https://pastebin.com/QMkrhahm
<hml> navinsridharan: looking
<navinsridharan> Thanks
<hml> navinsridharan: what do the nova logs say about instance 7833ed83-345a-4796-89cb-086bf01bc78b?  is there more information about why the "No valid host was found" error occurred?
<hml> navinsridharan: to confirm, the uuid of the "private" network is 383fd64b-4c4c-497d-809d-3bcf8ed72e1c?
<navinsridharan> Yes that's correct
<navinsridharan> I checked by logging into Openstack GUI
<navinsridharan> In case of instance 7833ed83-345a-4796-89cb-086bf01bc78b , I don't see any log written into nova-compute.log file
<navinsridharan> Is there any other log file that I should be checking for??
<hml> navinsridharan: check all the /var/log/nova/*.log files
<hml> navinsridharan: âNo valid hostâ should be in the logs also
<navinsridharan> I only see two log files under /var/log/nova -- > nova-compute.log and privsep-helper.log ( empty)
<navinsridharan> I don't see "No valid host" written into the log
<hml> navinsridharan: so the question is where is it... hmmm
<navinsridharan> but if I boot an instance manually in Openstack cloud using GUI, I see the log written into nova-compute.log
<hml> navinsridharan: are the credentials and openstack endpoint given to juju the same as what you're using in the OpenStack GUI?
<navinsridharan> Yes I did..
<navinsridharan> If JUJU was not able to contact the endpoint, then it wouldn't be able to resolve the private network's UUID
<hml> navinsridharan: true... i'm just wondering if it's the same openstack cloud...
<navinsridharan> but we do see UUID of "private" network in the --debug log
<hml> navinsridharan: the instance juju created should be in the log
<navinsridharan> Where do I check for this?
<hml> navinsridharan: same place as the other logs - where you found the other instance.
<navinsridharan> Yeah, but it's just weird that it doesn't write this into that log
<hml> navinsridharan: try the juju bootstrap with âkeep-broken, this will cause the instance juju created not to be deleted
<navinsridharan> I have been kind of completely stuck on this issue for about 2 weeks now, not able to move forward
<hml> navinsridharan: then we might be able to see from the cli or the gui more info
<hml> navinsridharan: sorry - this is frustrating I know.
<hml> navinsridharan: I've been asking a few others for some hints, so far it looks like this should work... we just need to find the little thing different
<navinsridharan> like instead of "--debug" use "--keep-broken"
<hml> navinsridharan: use both
<navinsridharan> That's so nice of you, thank you so much
<navinsridharan> Let me try and get back in a sec
<navinsridharan> https://www.irccloud.com/pastebin/sI8pg2oB/
<navinsridharan> I have copied from the point it says "using network id.....
<hml> navinsridharan: now look at the openstack juju instance in the GUI - can you see details of the failure?
<navinsridharan> Quick question though --- > should I enter the credentials for Openstack cloud in here (  /home/ubuntu/.local/share/juju/controllers.yaml)
<hml> navinsridharan: i don't recommend editing the files - run `juju autoload-credentials` after sourcing your novarc file
<navinsridharan> I don't see any instance failure in the  Openstack GUI
<navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~$ sudo juju autoload-credentials
<navinsridharan> Looking for cloud and credential information locally...
<navinsridharan> No cloud credentials found.
<hml> navinsridharan: do you have a novarc file you can source
<navinsridharan> I do have one sitting under /joid_config
<navinsridharan> in the name "admin-openrc"
<hml> navinsridharan: the autoload-credentials command will look for the environment variables used for OpenStack authentication to use and import them for juju
<hml> navinsridharan: though it should have been done already to get as far as you have
<navinsridharan> True, but it looks for a ".yaml" file, correct?
<navinsridharan> I manually fed the credentials saying "juju add-cloud"
<hml> navinsridharan: that's all under the covers of juju so to speak.  as a user, you can verify them with "juju credentials --format yaml --show-secrets"
<navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~/joid_config$ juju credentials --format yaml --show-secrets
<navinsridharan> credentials:
<navinsridharan>   openstack:
<navinsridharan>     openstack:
<navinsridharan>       auth-type: userpass
<navinsridharan>       password: openstack
<navinsridharan>       project-domain-name: admin_domain
<navinsridharan>       tenant-name: admin
<navinsridharan>       user-domain-name: admin_domain
<navinsridharan>       username: admin
<navinsridharan>   opnfv-virtualpod1-maas:
<navinsridharan>     opnfv-credentials:
<hml> navinsridharan:   those should be fine.
<hml> navinsridharan: i know why we couldn't see the instance in the gui - there's a juju bug :-/ for openstack.
<navinsridharan> Ohh I see, I thought this bug was fixed in JUJU2.0 ??
<hml> navinsridharan:  this one is specific to keep-broken
<hml> navinsridharan:   not the rest of it
<navinsridharan> Ohh I see, okay
<navinsridharan> Counting on you..... :)
<hml> navinsridharan:  hold on a sec
<navinsridharan> sure
<hml> navinsridharan: my personal openstack is busted, but there should be a bunch of other nova logs -  could they be on a different VM? from nova-compute.log - they are in my config
<navinsridharan> sorry, missed your message
<hml> navinsridharan: i filed a bug on the one keep-broken problem: https://bugs.launchpad.net/juju/+bug/1735013
<mup> Bug #1735013: openstack provider deletes instance when keep-broken used during bootstrap <openstack-provider> <juju:Triaged> <https://launchpad.net/bugs/1735013>
<navinsridharan> There are only two VM's where the control and compute logs are hosted
<navinsridharan> I checked both the locations
<hml> navinsridharan: hrmm...
<hml> navinsridharan: can you look at the security groups with the admin-openrc credentials from the CLI?  the juju created ones should still be there.
<navinsridharan> Yes I do see them there
<hml> navinsridharan: that's good news... do they show up in the neutron logs?
<navinsridharan> neutron-api/0*            active    idle   2/lxd/2  192.168.122.183  9696/tcp                                 Unit is ready
<navinsridharan> neutron-gateway/0*        active    idle   0        192.168.122.174                                           Unit is ready
<navinsridharan> I see  two units in the name of neutron
<navinsridharan> which one am I supposed to login?
<hml> navinsridharan: try both?  i'm blanking on the specific one
<hml> navinsridharan: did you deploy openstack with juju?
<navinsridharan> Yes I did..
<hml> navinsridharan:the nova logs with instance info would be on nova-cloud-controller/0
<navinsridharan> I see a bunch of *.log under /var/log/nova on nova-cloud-controller/0
<navinsridharan> Is there any specific file you expect me to check??
<navinsridharan> got it
<hml> navinsridharan: i'd just grep all of them
<navinsridharan> I see No valid host found
<navinsridharan> by grepping
<navinsridharan> Yeah did the same
<navinsridharan> I see them in nova-conductor.log file
<hml> navinsridharan: so then that file should have the info we're looking for around where the No valid host is located
<navinsridharan> https://www.irccloud.com/pastebin/W2vCpvmn/nova-conductor.log
<hml> navinsridharan: now i have to laugh a little - reason=""  :-)
<navinsridharan> is it empty??
<hml> navinsridharan: there's more info - i can track it down hopefully by the trace
<hml> navinsridharan: let me talk to some folks with this info and get back to you by email hopefully tomorrow ok?
<navinsridharan> Thanks, just can't wait for you to kill this :)
<hml> navinsridharan: me too!
<navinsridharan> Sure
<navinsridharan> Is this info more than enough or would you be needing anything else Heather?
<hml> navinsridharan: ttyl
<hml> navinsridharan: nothing specific is coming to mind right now
<navinsridharan> Sure, thanks once again for guiding me through, appreciate it
<navinsridharan> take care
<navinsridharan> Hoping to hear from you on something positive by tom :)
#juju 2017-11-29
<marosg> hi, could somebody explain to me what is happening here? https://pastebin.com/AuvB61Et  running the same command on the unit works, the action terminates when run via juju run
<akshay__> Hi All, is there a way to abort an already fired hook? Eg if accidentally do "juju remove-application app_name" and now it needs to aborted
<andreas_s> Hi. I'm deploying a juju bundle from file - but it seems to ignore the config settings I made. Does anyone have an idea what might be wrong? This is my service definition: http://paste.openstack.org/show/627722/
<andreas_s> the bridge-mappings and the dataport option seem to be ignored
<andreas_s> juju config shows some defaults
<andreas_s> but not the values I set
<ejat> hi, i dont see the rest of the openstack component charm at the store ... is there available anywhere?
<bdx> kwmonroe: continuing the graylog thing, what we talked about yesterday makes me think the filebeat:elasticsearch <-> elasticsearch:client relation in the graylog bundle is not needed
<bdx> seeing as the logs are making their way to elasticsearch through filebeat
<bdx> or blah
<bdx> through graylog
<bdx> having the relation to filebeat -> elasticsearch as part of that whole smorgasbord probably/might get you 2x indices
<bdx> lol
<bdx> do you think?
<bdx> im looking into it now
<bdx> I think I have 2x indices for the filebeats
<bdx> just one is namespaced under a graylog prefix
<kwmonroe> marosg: dunno for sure, but it looks like when you ran mysqldump on the unit, you ran it as a normal user.  juju run does things as root.  perhaps mysqldump doesn't like being run as root?
<kwmonroe> you could try "juju ssh mysql/0 'blah blah blah'" to do something on the mysql unit as non-root.
<kwmonroe> (instead of juju run)
<marosg> kwmonroe, I think I ran it as root when I logged there, but could be wrong. Thanks for tips, I will try that
<kwmonroe> bdx: correct, kill the fb <-> es relation.  you are correct in that the logs will make it to ES by way of graylog.
<kwmonroe> bdx: the piece we're missing is a relation to support fb <-> graylog.  the workaround is to set the logstash config on filebeat, but that ain't nice.
<bdx> totally
<bdx> I was just running into some of your graylog bundles with the relation
<kwmonroe> oh yeah bdx?  where?
<kwmonroe> i should fix those
<bdx> yeah, just rando ones all over the internet
<bdx> lol
<bdx> but
<bdx> I was using one as a comparison to what I had done
<bdx> and then it struck me
<bdx> that there is this extra relation in there
<kwmonroe> this is the snippet i'm using for GL on k8s: https://github.com/kwmonroe/spells/blob/feature/cdk-log-monitor/canonical-kubernetes/steps/05_logging/efg.yaml
<kwmonroe> with the juju config pieces happening here: https://github.com/kwmonroe/spells/blob/feature/cdk-log-monitor/canonical-kubernetes/steps/05_logging/after-deploy#L9
<bdx> ahh thats real nice
<bdx> to tell you the truth, I think I saw a bundle in an email for a conjure-up issue
<kwmonroe> thanks for not lying to me
<bdx> and I was in the middle of trying to get the graylog stack up
<bdx> so I was like
<bdx> aha
<bdx> oh this is how hes doing it
<kwmonroe> yeah, it's getting closer.. def ironed out a lot of this in the last few commits.
<bdx> yeah, that logging step is really sweet
<bdx> glad we are on the same page
<bdx> thanks
<bdx> lol
<kwmonroe> sure thing bdx!
#juju 2017-11-30
<jose-phillips> hi, does someone have an idea
<jose-phillips> how to set up an extra interface on a lxd container while it is deployed with juju
<jose-phillips> i need to add an extra network to a lxd container mapped to an extra interface
<jose-phillips> is it possible to do that?
<hml> jose-phillips: you can create lxc profiles named juju-<model-name>; then deploy from a model with that name.
<jose-phillips> how do i select the container profile with juju
<hml> jose-phillips: i do this with the localhost cloud - not sure where to setup with containers nested in other clouds
<jose-phillips> juju deploy --to lxd:1 openstack-dashboard
<hml> jose-phillips: juju will auto-magically look for it - it's found by creating it.
<hml> jose-phillips: what cloud did you bootstrap?
<jose-phillips> maas
<jose-phillips> and openstack
<jose-phillips> deployed by juju
<hml> jose-phillips: i'm not sure where the modified profile needs to exist in that case
<hml> jose-phillips: perhaps if you check machine 1 for an lxc profile of juju-default?   juju does create an lxc profile for the models itself with a few necessary items
<jose-phillips> yeah but the main goal is to do this setting from the juju controller
<jose-phillips> so when the container is created it is already set up
<jose-phillips> with the second interface
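Following hml's suggestion above (an lxc profile named juju-<model-name> that juju picks up when creating containers), a profile adding a second NIC might look like the sketch below. The device name, parent bridge, and model name are assumptions for illustration, not values from the conversation.

```yaml
# Hypothetical lxc profile "juju-mymodel" (edit with: lxc profile edit juju-mymodel)
config: {}
description: extra NIC for juju-created containers (sketch)
devices:
  eth1:
    name: eth1          # interface name inside the container
    nictype: bridged
    parent: br-data     # assumption: host bridge for the extra network
    type: nic
name: juju-mymodel
```

With the profile in place before deployment, containers created in the model named "mymodel" would come up with the second interface already attached.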
<vds> is there a preferred way to install oracle java 8 in charms? I've only found thism so far: https://github.com/jamesbeedy/juju-layer-java
<bdx> vds: that was an experiment based on layer-openjdk
<bdx> vds: that made sense for me because I was already using layer-openjdk, but needed oracle java
<vds> bdx, how did the experiment go? :)
<bdx> vds: I stole the super awesome layer-openjdk code and changed the bits to make it install oracle java such that I could use it as a drop in replacement to get my charms that were already using layer-openjdk to run oracle java
<bdx> it went well, I mean, it works
<bdx> the side note here is that
<bdx> I probably spent a total of 30 mins getting that to do what I needed
<bdx> once it worked I never looked back or touched the repo again
<bdx> its not using the practices in some places
<bdx> best*
<bdx> like, automatically accepting the terms and conditions for the user
<bdx> like, you have no option to decline or accept the terms, and the charm just automatically masks and makes that decision for you
<bdx> but, if you don't care about things like that
<bdx> its fair game
<bdx> vds: if you are interested in using it, I'll move it to a legit repo and give it a few touch ups
<bdx> let me know
<vds> bdx, the only concern I have about it is the use of the PPA
<bdx> vds: yeah, so that is configurable through the charm config I think
<bdx> vds: depending on what you are doing (packaging a java app) ... there is a way cooler/more better way of doing packaging/delivery for java apps now, "snaps"
<bdx> vds: have you looked into this much?
<vds> bdx, good suggestion, thanks
<bdx> vds: https://docs.snapcraft.io/reference/plugins/
<bdx> np
#juju 2017-12-01
<stub> vds: Last time I talked to my legal, I was told I can't bypass the clickthroughs in any way (including things like the Juju licence acceptance feature). However, if you look at the Oracle JRE distribution licence IIRC you can bundle it with software where it is required to make it work or make features work.
<stub> vds: My interpretation being you would be able to embed the tarball in your charm or snap, but not in a charm layer. But I didn't put that interpretation to our legal.
<stub> vds: If you want to use the Oracle JDK, the Cassandra charm requires the user to download (accepting the licence) and make the tarball available at a configurable URL. This was before Juju resources existed. It is a terrible user experience.
<ericj> Ubuntu 16.04 Installed LXD and conjure-up snap packages.
<ericj> ran conjure-up kubernetes
<ericj> made it all the way to the end and received the following error:
<ericj> "could not fetch IP addresses and link layer devices: cannot get all ip addreses
<ericj> Anyone know how to resolve this?
<akshay__> https://www.irccloud.com/pastebin/3nNyg5ZI
<akshay__> Need help
<akshay__> Charm deployment fails as it is not able to add the repository "cloud-archive:pike"
<akshay__> Please see the pastebin for exact error
<jryberg> Hi, I just started to play around with Juju and I'm impressed by the simplicity and how easy it is to bootstrap an entire Kubernetes cluster in the cloud. I do however have a question around the entire concept. If you need to customize the entire kubernetes setup, for example, is this possible or should it only be used "as is"? Will settings be overwritten
<jryberg> during upgrades? I know it's a hard question to answer but should charms be "fire and forget" or used as a base for further modifications?
<elmaciej> Hello Everyone! I'm writing a charm which encapsulates a jar with some spark program. Does anyone know how I can run the submit-spark action from the hadoop-spark charm during the deployment of my charm?
<elmaciej> I need your help charm masters!
<kjackal> hi elmaciej, not sure about the masters but I can try to help
<kjackal> how do you write your charm? python+reactive?
<elmaciej> yes, that was my intention, I'm doing it based on vanilla charm
<elmaciej> and to be honest it's first charm I have to write
<kjackal> ok, no worries
<kjackal> I would suggest you start by building the bigtop spark charm so you get familiar with the toolchain
<kjackal> let me grab some links
<kjackal> elmaciej: here is the bigtop project that has a number of charm layers. https://github.com/apache/bigtop
<kjackal> the spark layers is here: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/spark/layer-spark
<kjackal> so you should try building that one first. (git clone bigtop; charm build <pathtolayerspark>)
<elmaciej> great, will do it now. Thank you very much.
<kjackal> now you said you want to run a submit-spark during the deployment. I think you first have to wait for the deployment to finish and then submit a job.
<kjackal> elmaciej: there is this action you can take a look at to see how you can do a spark submit: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/spark/layer-spark/actions/spark-submit
<kjackal> your jar can be inside your version of the charm or have it served from a remote location or make use of juju resources https://jujucharms.com/docs/2.1/developer-resources
<elmaciej> kjackal : Thanks a lot!
<kjackal> gl hf
<bdx> kwmonroe: mind if I continue to hassle you about Greylog stuff?
<kwmonroe> hassle away bdx!
<bdx> I'll just start rattling off in that case
<bdx> :)
<bdx> 1) SSL
<bdx> graylog has instructions for generating a key/cert and setting up ssl at the application/graylog level
<bdx> http://docs.graylog.org/en/2.3/pages/configuration/https.html
<bdx> its a bit cumbersome, and not really what I would want to go with anyway
<bdx> I would rather just deploy an ssl terminating proxy in front of graylog
<bdx> with the legit trusted key/cert from my CA
<bdx> I'm thinking I should just be able to go ahead and setup the ssl terminating proxy in front of graylog, and forward to the web/api ports - bypassing the graylog ssl setup because terminating 1 hop in front of it
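The plan bdx describes (terminate TLS one hop in front and forward to the graylog web/api ports, 9000 for / and 9001 for /api as established later in the discussion) could be sketched as an nginx vhost like the one below. The server name, certificate paths, and upstream address are assumptions for illustration only.

```nginx
# Sketch of an SSL-terminating vhost in front of Graylog (hypothetical names/paths).
server {
    listen 443 ssl;
    server_name graylog.example.com;                     # assumption
    ssl_certificate     /etc/ssl/certs/graylog.crt;      # CA-issued cert (assumption)
    ssl_certificate_key /etc/ssl/private/graylog.key;

    # /api traffic goes to the Graylog REST API port
    location /api {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # everything else goes to the Graylog web UI port
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```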
<kwmonroe> bdx: agreed that an ssl proxy feels easiest.  GL needs the apache vhost proxy anyway, so just make the apache unit ssl-aware.
<bdx> right
<bdx> the other thing I was thinking about
<bdx> I've used this https://gist.github.com/jamesbeedy/d587cbf048038fb274ef4cd55c4ee3dd
<bdx> for quite some time
<bdx> in front of apps that I want to terminate ssl in front of
<bdx> I was thinking
<bdx> it might be easier to just make the reverseproxy relation for haproxy do the right things to setup the extra frontend/backend for both 9000 on / and 9001 on /api
<bdx> then we could kill the apache2 middle man
<bdx> I gave the haproxy bits a try
<bdx> kwmonroe: https://gist.github.com/jamesbeedy/b784bfd8779fca668b536332a10be5c1
<bdx> my thinking behind ^
<bdx> was such that it would just be easier to create the dict in python and just dump it to yaml
<bdx> rather than trying to concat a bunch of templated strings inline
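bdx's idea of building the haproxy relation value as plain Python data and serializing it, instead of concatenating templated strings, could look roughly like this. The key names and ACL lines loosely follow the approach in the linked gist but are assumptions, not the reverseproxy interface's exact schema.

```python
# Sketch: describe one frontend that routes / -> :9000 and /api -> :9001,
# as a Python structure to be serialized for the haproxy relation.
import json


def graylog_services(unit_ip):
    """Build the services description for a Graylog unit (names are illustrative)."""
    return [{
        "service_name": "graylog",
        "service_host": "0.0.0.0",
        "service_port": 80,
        # ACL + use_backend send /api traffic to the API backend.
        "service_options": [
            "mode http",
            "acl is_api path_beg /api",
            "use_backend graylog_api if is_api",
        ],
        "servers": [["graylog-web", unit_ip, 9000, "check"]],
        "backends": [{
            "backend_name": "graylog_api",
            "servers": [["graylog-api", unit_ip, 9001, "check"]],
        }],
    }]


# The relation expects a YAML string; with PyYAML this would be
# yaml.safe_dump(...). json.dumps keeps the sketch dependency-free
# (JSON is a subset of YAML).
services_yaml = json.dumps(graylog_services("10.0.0.7"), indent=2)
```

The point of the dict-first approach is that the routing intent stays readable, and only the final dump has to match whatever string format the relation accepts.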
<bdx> lol
<bdx> but what I realized
<kwmonroe> bdx: not to dissuade you from haproxy, but did you look at https://jujucharms.com/u/tengu-team/ssl-termination-proxy/?
<kwmonroe> it's built with layer-nginx, so might handle the revproxy stuff ootb
<kwmonroe> (plus letsencrypt support)
<bdx> oh sick
<bdx> sweet
<bdx> ok, so lets use that as an example instead of haproxy
<bdx> well
<bdx> kwmonroe: the reason I went with haproxy, is because we have two ports that need forwarding based on the route right?
<kwmonroe> i haven't actually used it myself, but i am encouraged that it calls the nginx.configure_site bits, which i think will auto handle the 9000 and 9001/api bits: https://github.com/tengu-team/layer-ssl-termination-proxy/blob/master/reactive/ssl_termination_proxy.py#L121
<bdx> kwmonroe: so this https://gist.github.com/jamesbeedy/b784bfd8779fca668b536332a10be5c1#file-haproxy_reverseproxy_test-py-L13
<bdx> was my little bit of secret sauce
<bdx> to accommodate the url route based routing
<bdx> default / -> :9000, if /api, -> :9001
<kwmonroe> yeah bdx, i *think* graylog is doing the right thing with the http interface: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py#n322
<kwmonroe> that is setting a "services" string for both 9000 / and 9001 /api
<kwmonroe> and i *think* the revproxy config_site bits of the nginx layer will do the right thing with those to make them vhost templates for nginx
<bdx> https://jujucharms.com/haproxy/
<bdx> it's so contrived what type the relation data is lol
<bdx> look at the last example for multiple backends
<bdx> which is what I was trying to follow
<bdx> but yeah
<kwmonroe> oof, yeah, that example is hard to follow
<bdx> the graylog charm is setting the correct type of data .... yaml strings
<bdx> yeah
<bdx> so you see how my example gist would work if we were just setting a yaml string, or even python types (just pass the dict to the relation)
<bdx> it expresses the correct key:val in the data structure
<bdx> it just has to get transformed into this crazy thing before it will make any sense to the haproxy / reverseproxy relation
<bdx> lo
<bdx> 9l
<kwmonroe> lo9l fo real
<bdx> I feel like building into this "crazy thing" any more than has already been done is just perpetuating the issue here
<bdx> issue being the contrived data that the reverseproxy relation is accepting
<bdx> it will never be something people will look at and be like "oh ok, I get it"
<bdx> even after many hours now
<bdx> :)
<bdx> but I do get it, it just totally seems like the worst thing in the world
<bdx> anyway
<bdx> I don't even know where I was going with this
<bdx> oh yeah, so I feel we are presented with a few not so great options here
<bdx> given this isn't the most common use case, but its a pretty simple reverseproxy configuration that we should be able to support via charms w/o taking a round trip on the event horizon
<kwmonroe> heh
<bdx> possibly just "making it work" with the reverseproxy is the cheapest way to short term victory with Graylog
<bdx> and looking at possibly doing something about the reverseproxy relation in the future
<bdx> idk
<bdx> possibly other people like it
<bdx> I'm really forcing my view here
<bdx> ha
<bdx> sorry
<bdx> but I feel my reasoning is justified
<kwmonroe> i think one problem is the complex example in the haproxy charm.  i don't know what those options actually do, but the http interface example for rev proxy seems much simpler: https://github.com/juju-solutions/interface-http#requires
<bdx> got it
<bdx> ok
<bdx> what they do
<bdx> is allow you to configure more complex proxying
<bdx> https://gist.github.com/jamesbeedy/b784bfd8779fca668b536332a10be5c1#file-haproxy_reverseproxy_test-py-L13
<bdx> ^ takes care of proxying both of the routes
<bdx>  / -> 9000 and /api -> 9001
<bdx> by using the extra relation data context to configure extended routing/backend
<bdx> its only complex because the data type/content isn't understandable
<bdx> if it was written out like this
<bdx> http://paste.ubuntu.com/26090348/
<bdx> and it said, "set the yaml string as relation data"
<bdx> lol
<bdx> but that (set the yaml string as relation data) isn't even whats going on
<bdx> its not even a yaml string
<bdx> I still don't know what it is
<jose-phillips> hey
<jose-phillips> is there a way to put a custom config
<jose-phillips> in the juju config file
<jose-phillips> for example i want to add something to the configuration of a charm
<jose-phillips> cinder, that is not supported in the yaml
<bdx> jose-phillips: https://jujucharms.com/cinder/#charm-config-config-flags
<jose-phillips> and if i want to add it on a different stanza
<bdx> jose-phillips: make a bug report for the cinder charm about that
<bdx> jose-phillips: https://bugs.launchpad.net/charm-cinder
<jose-phillips> is exactly for this setting
<jose-phillips> # A list of backend names to use. These backend names should be backed by a
<jose-phillips> # unique [CONFIG] group with its options (list value)
<jose-phillips> enable_backends
<jose-phillips> also support enable_backends
<bdx> jose-phillips: put that in a bug please
<bdx> and post the bug back here after you have filed it
<jose-phillips> ok bdx thanks
<jose-phillips> another question, do you know where juju stores the containers' configuration for openstack
<jose-phillips> i need to create glance and cinder interface with 2 interfaces
<bdx> jose-phillips: if you are deploying openstack to the lxd provider, you do that through lxd profile modification
<jose-phillips> no is kvm
<jose-phillips> novakvm
<bdx> jose-phillips: so you have maas setup?
<jose-phillips> yep
<bdx> and you checked a bunch of kvm nodes into your maas?
<jose-phillips> yep the deployment is completed on 3 nodes
<jose-phillips> for example i dont need ceph
<jose-phillips> because im using a netapp storage
<jose-phillips> so i take 1 node for controller and 2 for compute
<jose-phillips> on the controller node have 2 interfaces , data and storage
<jose-phillips> when juju create the containters inside of controller node
<jose-phillips> openstack controller node
<bdx> jose-phillips: you need to create a feature request to support netapp storage
<jose-phillips> ok
<jose-phillips> another question, if i manually modify cinder.conf
<jose-phillips> when will it be overwritten?
<jose-phillips> during upgrades only?
<jose-phillips> or may restarting the container overwrite the configuration?
<bdx> jose-phillips: modifying the config by hand is not supported in any way
<bdx> jose-phillips: the openstack charms will keep persistence of the config files
<bdx> modifying it by hand will only cause you grief
#juju 2017-12-02
<torontoyes> I'm currently using MAAS and was wondering if a juju charm could be used to deploy a custom windows image (Sys Prep General Windows 10 image)?
<torontoyes> is this one of the uses of Juju?
<torontoyes> Or is juju used in this way?
#juju 2019-11-25
<wallyworld> babbageclunk: here's that forward port of 2.7 https://github.com/juju/juju/pull/10945
<babbageclunk> wallyworld: oops - looking
<babbageclunk> wallyworld: approved
<wallyworld> ty
<babbageclunk> quick review for a test fix backport: https://github.com/juju/juju/pull/10946
<wallyworld> is github shite for anyone else?
<babbageclunk> yeah, pretty slow at the moment
<wallyworld> slow af, also 500 errors
<babbageclunk> haven't seen those
<wallyworld> i can't really access it
<babbageclunk> uh oh, failures in gating tests
<wallyworld> for 2.7 though i think?
<babbageclunk> oh right - are we not worrying about that?
<wallyworld> well, hmmm. i would like to think we've made fixes moving forward and we won't necessarily backport every fix. but right now we'd not expect any or much divergence
<wallyworld> so any failure now could or will probably affect develop also
<babbageclunk> I'll see whether there have been any changes for the persistent-storage test in develop
<babbageclunk> nope
<babbageclunk> hpidcock: another easy review? https://github.com/juju/juju/pull/10946
<hpidcock> babbageclunk: sure
<timClicks> i should have called dibs
<babbageclunk> you snost, you lost
<hpidcock> babbageclunk: doneski
<babbageclunk> hpidcock: merci beaucoup!
<wallyworld> babbageclunk: +1 on your pr
<babbageclunk> thanks
<manadart> Need a tick on 0-conflict forward merge. Brings forward babbageclunk's test fix: https://github.com/juju/juju/pull/10948
<manadart> stickupkid: Morning. https://github.com/juju/juju/pull/10948.
<stickupkid> manadart, done
<manadart> stickupkid: Ta.
<manadart> stickupkid: Another one. https://github.com/CanonicalLtd/juju-qa-jenkins/pull/326
<nammn_de1> morning manadart stickupkid i might need a look at this https://github.com/juju/description/pull/66 and this https://github.com/juju/juju/pull/10943
<nammn_de1> but timewise not important, as they are not ci
<manadart> nammn_de1: OK.
<nammn_de1> manadart stickupkid  ci fix https://github.com/juju/juju/pull/10949
<nammn_de1> small one
<stickupkid> nammn_de1, i think juju is wrong here, I think maas should have a default region
<nammn_de1> stickupkid: you mean juju should check the endpoint of maas during interactive adding?
<manadart> nammn_de1 stickupkid: MAAS doesn't have regions in the Juju sense. A MAAS region controller is the cloud (endpoint). The default region is "". A MAAS Rack controller is an availability zone.
<stickupkid> nammn_de1, approved, let's improve this test - OR we can move it to the integration tests
<stickupkid> nammn_de1, manadart it does have a region in terms of a "client-cloud-region"
<stickupkid> manadart, i.e. "default" is just the only one...
<manadart> stickupkid: Since you reviewed the first one :) https://github.com/CanonicalLtd/juju-qa-jenkins/pull/327
<manadart> stickupkid: Hold up. I will add something to that patch.
<manadart> stickupkid: https://github.com/juju/juju/pull/10951
<stickupkid> manadart, does it?
<stickupkid> manadart, when i use juju switch with no argument it works, is this without a bootstrap?
<stickupkid> ah, yes, that's right
 * manadart nods.
<nammn_de1> backport to make it pass 2.7 as well https://github.com/juju/juju/pull/10952
<nammn_de1> anyone pointers to make the `integration caas test ` run locally? Im trying it with `python3 ./assess caas_deploy_charms --caas-provider="microk8s"`
<dosaboy> hi all, i have an issue i wanted to ask about before raising a bug,
<dosaboy> ive deployed k8s using the ost provider
<dosaboy> kubernetes-master says it has port 6443/tcp open
<dosaboy> yet none of the security groups have that port open
<dosaboy> is this a know issue? am i missing something?
<dosaboy> ill raise a bug tomorrow with more data
<babbageclunk> dosaboy: have you exposed kubernetes-master?
#juju 2019-11-26
<hpidcock> wallyworld: noticed that cancel-action didn't become cancel-operation with the v3 rename of actions. I'll add it to the spec for operation interruption
<wallyworld> hpidcock: yeah, that got missed :-(
<kelvinliu_> wallyworld: free to HO for 1min?
<wallyworld> kelvinliu_: sure, give me 5
<kelvinliu_> ok
<wallyworld> kelvinliu_: or hpidcock: this PR adds a pod-spec-get command. can't land till after 2.7.0 ships so no rush https://github.com/juju/juju/pull/10953
<kelvinliu_> wallyworld: looking now
<hpidcock> sorry xwayland locked up had to reboot
<wallyworld> yay wayland
<hpidcock> wallyworld: when you talk about task vs operation, operation could be against multiple units? and task is against the individual unit?
<wallyworld> hpidcock: yeah, operation is a top level grouping of tasks. juju call creates an operation. the actual code that runs on any given unit is a task. there's still discussion to be had as to whether we allow a task to create a "sub" task
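The operation/task grouping wallyworld describes (one operation per `juju call`, fanning out to one task per target unit) can be sketched as a small data model. Class and field names here are illustrative, not Juju's actual schema.

```python
# Minimal sketch of operations grouping per-unit tasks (hypothetical names).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    unit: str              # e.g. "mysql/0" - the unit the code runs on
    action: str            # action name to execute on that unit
    status: str = "pending"


@dataclass
class Operation:
    op_id: int
    summary: str
    tasks: List[Task] = field(default_factory=list)


def start_operation(op_id, action, units):
    """Conceptually what `juju call` does: one operation, one task per unit."""
    return Operation(op_id, f"run {action}", [Task(u, action) for u in units])


op = start_operation(1, "backup", ["mysql/0", "mysql/1"])
```

Whether a task may spawn a "sub" task, as mentioned above, would hang off the Task type in a model like this.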
<hpidcock> task-inception
<hpidcock> wallyworld: it doesn't seem the concept of grouped "tasks" as an "operation" has been done yet. Is that planned for this cycle?
<wallyworld> hpidcock: that's what the new spec is for :-)
<wallyworld> in the 20.04 folder
<hpidcock> ahah yep
<hpidcock> wallyworld: I've updated the doc, should be simple enough to implement I think.
<dosaboy> wallyworld: babbageclunk that was the problem :)
<dosaboy> thank you
<achilleasa> manadart: is this what you had in mind about 10942? https://github.com/juju/juju/pull/10942/commits/43d47dcd6178841e3b060200e0be84f534b1d9ec
<manadart> achilleasa: Yes, that will do it.
<nammn_de1> manadart: here I only added the export steps https://github.com/juju/juju/pull/10943 so qa only has test for regression. Would you suggest adding the import steps to another PR  or just in this PR as well?
<wallyworld> achilleasa: hey, quick question. UniterAPIV12 comment says the LXDProfileAPI has been removed but it's still embedded in the API struct. It's been taken out in V13. I have a PR where I am doing a V14 and as a driveby can remove from V12 if that's correct to do. or else I will move the incorrect comment. do you know what's correct?
<wallyworld> PR is https://github.com/juju/juju/pull/10953
<achilleasa> wallyworld: sorry; no clue about it. however, it seems to be missing from the V12 struct: https://github.com/juju/juju/blob/b67e487e19f1835a66c7cd415c70730548c3aa57/apiserver/facades/agent/uniter/uniter.go#L88
<wallyworld> achilleasa: it is there in 2.7 so I think we can assume devel was updated but not 2.7 maybe
<wallyworld> it seems like it's safe to remove
<wallyworld> i didn't check develop as i've got 2.7 checked out currently
<achilleasa> wallyworld: I think so too. Looks like it has been replaced by a different set of methods anyway
<wallyworld> yeah, it has, the implementation was reworked and the old one removed
<wallyworld> thanks for the input. i just wanted to double check removing from V12 was correct, seems like it is
<stickupkid> manadart, https://github.com/juju/cmd/pull/68
<manadart> stickupkid: Done.
<stickupkid> manadart, let's see if this lands as you don't have a green tick :|
<manadart> stickupkid: Busted down.
<achilleasa> jam: I have pushed a bunch of commits to 10902 to address your comments. Can you take a look if I missed anything so I can land the PR?
<stickupkid> manadart, I've brought the changes in now - https://github.com/juju/juju/pull/10955
<nammn_de1> stickupkid: moment, when did we start having a master ? :D
<nammn_de1> oh damn i misread, nvm me
<stickupkid> manadart, when processing relations (updating external controllers) and it fails after this, we'll leave the CMR in a broken state
<stickupkid> manadart, trying to think how/if we can do something different, I know they're supposed to be idempotent, I'm unsure how this can be
<manadart> stickupkid: Yes, jam and I spoke about this. We will have to ensure the pre-checks are tight. Then for connectivity issues *to* the external controllers during the change, we can re-try.
<stickupkid> :(
<manadart> stickupkid: Or roll them back, or something. We just can't break it.
<stickupkid> manadart, let me see if rollback is possible
<manadart> stickupkid: I haven't looked into it yet, but is it possible to be passive with regard to the consumers, let them get a re-direct from the source controller and update themselves?
<stickupkid> manadart, we do have ABORT and REAPFAILED
<stickupkid> manadart,  thinking
<jam> achilleasa: reviewd
<achilleasa> jam: thanks for the review. I will apply your suggestions and push
<stickupkid> manadart, i'm unsure you know, we're going to have to keep the old firewall rules around for the redirect, this doesn't seem very clever
<stickupkid> manadart, if you migrate a lot of things around, you'll just have a sieve
<achilleasa> manadart: any suggestions for modifying params/network.go:Network config to align with my recent changes? Presumably, the machiner worker might be running an older jujud binary prior to updating the model
<achilleasa> I could add additional fields (Addresses, ShadowAddresses) and retain "Address" for backwards compatibility
<nammn_de1> manadart stickupkid: I think this is ready for a review https://github.com/juju/juju/pull/10943 QA steps are added.  It migrated the rules in the Database, are there additional steps which I have missed?
<nammn_de1> QA steps describe my mind flow
<stickupkid> nammn_de1, looks great
<nammn_de1> Because I thought that the responsible worker picks up the database changes (addition) somewhere and thus apply those firewall changes. Therefore no need for me to additionally check that. But maybe need to doublecheck that
<stickupkid> nammn_de1, you tested on aws?
<nammn_de1> stickupkid: only lxd
<nammn_de1> stickupkid: i can spin up the same qa steps from the description in aws
<nammn_de1> to be safe
<stickupkid> nammn_de1, yes please
<nammn_de1> any other qa steps needed beside the `juju list-firewall-rules`  being the same on the src and dst?
<stickupkid> nammn_de1, we should verify that the firewall rules are actually applied
<nammn_de1> stickupkid: makes sense
<stickupkid> nammn_de1, with aws, you can check the console
<nammn_de1> stickupkid:  ohhh great to know
<nammn_de1> just for understanding. Most of our juju magic works through a worker which checks the database and does something upon it, right?
<nammn_de1> like i add a firewall rule to the db and a worker picks that up and create that rule on the specific provider
<stickupkid> yeah, that's the theory
<nammn_de1> stickupkid: haha okay
<manadart> achilleasa: I think that will work. We just use omitempty and add a comment about retention for compatibility + removing Address in Juju 3.0.
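The compatibility scheme discussed here (add plural Addresses/ShadowAddresses fields while retaining the singular Address with omitempty for older jujud binaries) lives in Go params structs, but the wire-level idea sketches easily. Field names follow the discussion; the exact document shape is an assumption.

```python
# Sketch of the wire doc: new plural fields plus the legacy singular one,
# with empty fields dropped to emulate Go's json ",omitempty" tag.
def network_config_doc(addresses, shadow_addresses):
    doc = {
        "Addresses": addresses,
        "ShadowAddresses": shadow_addresses,
        # Legacy field kept so older agents still find an address;
        # per the discussion, slated for removal in Juju 3.0.
        "Address": addresses[0] if addresses else "",
    }
    return {k: v for k, v in doc.items() if v}


doc = network_config_doc(["10.0.0.5", "10.0.0.6"], [])
```

Old readers keep working off "Address"; new readers prefer the plural fields, and empty values never hit the wire.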
<danboid> I can't destroy a model, even when using --force
<danboid> I was trying to deploy the openstack bundle, so I'm good to nuke and pave
<danboid> How do you do that though? Is it enough to release all the MAAS machines and delete ~/.local/juju or is there anything else? Maybe its better to purge juju from my machine too?
<danboid> I couldn't deploy the charm because the model is 'destroying'
<danboid> Is it safe/correct to remove ~/.local/juju if you want to start over?
<danboid> I think I fixed it
<rick_h> danboid:  no, just juju unregister
<rick_h> danboid:  and then release the MAAS machines
<rick_h> danboid:  let me know if you hit it and want to pull info to see if there's a bug/etc
<rick_h> danboid:  what juju version?
<achilleasa> manadart: should we assign shadow addresses to the alpha space?
<achilleasa> Since they don't really belong to a space, can we use alpha as a fallback?
<manadart> achilleasa: Hmm. jam and I spoke about the notion of external spaces.
<manadart> I think yes.
<manadart> But they will not have a subnet to link them via that mechanism.
<manadart> I think this should be OK. Let me think about it.
<danboid> rick_h, The OS bundle is deploying now. We'll see how it went in the morning. I'm sure I'll be recommending a few tweaks to the docs at the end - I've already got a couple of things I think should be added
<rick_h> danboid:  cool
<nammn_de1> achilleasa manadart did you guys ever interact with the firewallrules of juju in aws? `juju set-firewall-rule`?
<achilleasa> nammn_de1: not me; sorry
<stickupkid> nammn_de1, you should see a security group inside the ec2 instance, check that has the right input/output ports
<nammn_de1> stickupkid: firewall rules do not set ports afaict
<nammn_de1> so i cannot seem to find any mapping between those
<nammn_de1> on the first glance adding a firewall rule and having none shows the same sg
<stickupkid> nammn_de1, https://github.com/juju/juju/blob/dca1ee163dd2f6abdc989d3580d9a56e3ec22864/provider/ec2/environ.go#L1555
<stickupkid> nammn_de1, https://github.com/juju/juju/blob/dca1ee163dd2f6abdc989d3580d9a56e3ec22864/provider/ec2/environ.go#L1626
<nammn_de1> stickupkid: hmm
<nammn_de1> cannot find any connection to the firewallrule from state
<stickupkid> nammn_de1, check the workers
<nammn_de1> this should be the responsible worker https://github.com/juju/juju/blob/52602ea0fb7b79f9fa55040dc4a817d80a4422e4/worker/firewaller/firewaller.go#L221
<stickupkid> nammn_de1, ho?
<nammn_de1> stickupkid: sure
#juju 2019-11-27
<wallyworld> kelvinliu: found it. https://github.com/juju/jujusvg/pull/58
<kelvinliu> wallyworld: not easy! hah approved!
<wallyworld> ty :-)
<kelvinliu> :)
<timClicks> babbageclunk: do you know if there are any plans for storage on vsphere? https://discourse.jujucharms.com/t/adding-additional-disks-to-machine/2367
<babbageclunk> timClicks: not sure - wallyworld?
<wallyworld> there are currently no plans - the vsphere provider in juju does not support storage directives, it seems. we'd need to size up any work as a bug
<wallyworld> it would be a medium chunk of work
<timClicks> so the recommendation today would be to use the vSphere console or an equivalent VMware CLI command?
<kelvinliu> wallyworld: if the charm already provided ingress for app svc itself in podspec, should we touch that spec at all if the user run `juju expose` or just reject and errors out?
<wallyworld> kelvinliu: that's a good question. i am not sure. we can land the PR without changing expose, then discuss and maybe do a followup
<kelvinliu> wallyworld: ok, i will create a card for this as a TODO
<wallyworld> ty
<kelvinliu> np
<thumper> timClicks: I don't know of a workaround that would work reasonably
<timClicks> thumper: no, not for deploying ceph w/ juju
<thumper> ceph on vsphere?
<thumper> seems weird
<babbageclunk> sorry, I've got a bit nerdsniped trying to work out whether we could use different image metadata to make a machine that was centos on vsphere.
<babbageclunk> it's led me to realise I don't actually understand how we get non-ubuntu images at all
<timClicks> thumper: they want to deploy cdk on vsphere, and use ceph as storage for that cdk instance
<thumper> timClicks: but where is the ceph?
<thumper> ew
<timClicks> on vsphere apparently..
<thumper> ah
<thumper> no we don't support that
<thumper> I don't think so
<thumper> not natively anyway
<timClicks> I've added a note to the request https://discourse.jujucharms.com/t/adding-additional-disks-to-machine/2367
<thumper> there may be charmed-kubernetes config for external storage
<timClicks> i expect we'll receive a grumpy response along the lines of "what's the point in deploying ceph manually?"
<timClicks> we've received an HA-related question
<timClicks> https://discourse.jujucharms.com/t/auto-remove-var-lib-juju-when-i-restarting-jujud-in-empty-mongodb/2362
<thumper> timClicks: I think if you add a machine, then add the disks via CLI, then deploy ceph to the devices it may work
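thumper's suggested workaround could look roughly like this (a sketch only; the machine number, device path, and ceph-osd config value are illustrative assumptions, and the extra disk itself has to be attached out-of-band since the vsphere provider has no storage directive support):

```shell
# 1. Add a bare machine to the vsphere model.
juju add-machine

# 2. Attach extra disks to that VM out-of-band, e.g. via the
#    vSphere console, since Juju's vsphere provider cannot do it.

# 3. Deploy ceph-osd to that machine, pointing it at the new
#    device (machine number and device path are illustrative).
juju deploy ceph-osd --to 0 --config osd-devices=/dev/sdb
```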
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10957 - cred validation relaxation as discussed
<wallyworld> ok, in a bit
<anastasiamac> no rush at all... it's for 2.7 (which i think is blocked until .0 anyway, right?)
<wallyworld> yeah
<manadart> stickupkid: Breaking up my patch. This one is for replacing migration-master test mocks with generated ones: https://github.com/juju/juju/pull/10958
<stickupkid> manadart, we should do the same with the minion
<nammn_de1> stickupkid: did the email from ian work for you?
<nammn_de1> he did give me a review/comment as well
<stickupkid> nammn_de1, we where missing expose :D
<nammn_de1> stickupkid: yeah, so doing that, it shows something in the logs and does something. I guess we would need a charm which would use the juju-application-offer to open some ports so that we can see that in the security group, right? At least that's how I understand it
<nammn_de1> :D
<stickupkid> yeap
<nammn_de1> stickupkid: do you have one in mind? At least trying it with mariadb and mysql shows the logging working but no change in the security group, as it seems they did not open a port. I do think it should be fine codewise.
<stickupkid> nammn_de1, not sure tbh
<achilleasa> manadart: got a few min for a quick HO?
<manadart> achilleasa: Sure.
<stickupkid> damn, we don't import application offers - wonder why my code wasn't working :sigh:
<stickupkid> manadart, quick HO before i grab some lunch?
<manadart> stickupkid: Eating mine now. Do it after?
<stickupkid> manadart, sure can... i'll ping back in about 45mins
 * manadart has to do a drop-off at short notice. 20-30 mins.
<stickupkid> manadart, ho
<stickupkid> ran away
<nammn_de1> stickupkid: soo i updated the qa steps on my pr https://github.com/juju/juju/pull/10943
<nammn_de1> so because it is related to the "offer", this pr (or another) needs to move the offer as well, which can then be consumed again
<stickupkid> i'm on it :D
<nammn_de1> not sure if we want to have that in that pr, because i thought that you and manadart are working on it
<nammn_de1> ha! :D
<stickupkid> nammn_de1, let's call it done
<stickupkid> biab
<nammn_de1> stickupkid: cool, happy for a review then
<stickupkid> nammn_de1, give me a bit
<nammn_de1> stickupkid: no worries, take your time. A suggestion for the next task from the cmr column: seems like some are related to the one you are doing currently. Just to make sure that we do not overlap
<stickupkid> nammn_de1, unsure tbh
<stickupkid> thinking...
<nammn_de1> stickupkid: if you have something with the workers that would be cool but open to anything
<manadart> stickupkid: I have to make a move. Patch is here: https://github.com/juju/juju/pull/10959.
<stickupkid> manadart, sure nps
<manadart> stickupkid: I still need to regenerate the facade schema. Once I do that, I'll squash the choppy commits.
<stickupkid> manadart, fine by me
<timClicks> what is the command I need to use to inspect/trace a unit agent hook's execution history?
<rick_h> timClicks:  juju show-status-log ?
<timClicks> rick_h: that's the one
<timClicks> thanks!
<rick_h> timClicks:  <3
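For reference, a minimal invocation of the command rick_h pointed at could look like this (the unit name is illustrative, borrowed from timClicks's explore-relations charm mentioned below):

```shell
# Show the recent status/hook history for a unit agent;
# -n limits the number of entries returned.
juju show-status-log explore-relations/0 -n 20
```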
<timClicks> is it possible to inspect calls to hook-tools, such as relation-get and relation-set?
<rick_h> timClicks:  you're going to need debug-hooks at that point I think
<timClicks> yeah, that's what I thought..
<rick_h> timClicks:  because you need the hook context which isn't normally available. You could try walking the relation ids and going that route but it takes a little work
<rick_h> well, not hook context, but the relation context in the hook context
<timClicks> yeah I experimented with that in a noop charm I've written, explore-relations
<rick_h> gotcha
<timClicks> juju run --unit explore-relations/0 "relation-ids peer-relation"
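Walking the relations from outside a hook context, as rick_h suggests, could look roughly like this (a sketch; the unit name, endpoint name, and relation id are illustrative):

```shell
# List relation ids for a named endpoint from outside a hook.
juju run --unit explore-relations/0 'relation-ids peer-relation'

# Then inspect the members and settings of one of the returned
# relation ids ("peer-relation:0" here is illustrative output).
juju run --unit explore-relations/0 'relation-list -r peer-relation:0'
juju run --unit explore-relations/0 'relation-get -r peer-relation:0 - explore-relations/0'
```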
<timClicks> new tutorial up https://discourse.jujucharms.com/t/what-is-a-juju-relation-and-what-purpose-do-they-serve-part-2/2378
#juju 2019-11-28
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/10961 +1 plz
<wallyworld> kelvinliu: there's 3 places to change
<wallyworld> setup.iss etc as well
<wallyworld> look at a closed pr from the bot for what to change
<kelvinliu> yep
<kelvinliu> wallyworld: updated
<wallyworld> kelvinliu: merged. i'll start a doc when i'm done with xtian
<kelvinliu> wallyworld: thanks!
<babbageclunk> thumper: have you got a moment for a quick chat? Want to make sure I understand something about watchers.
<thumper> babbageclunk: sure
<thumper> jump in 1:1
<babbageclunk> yup
<thumper> wallyworld, kelvinliu: there is more than one file to update for the version number
<kelvinliu> thumper: yeah, there are 3
<thumper> kelvinliu: ah, I was just looking at the initial PR files changed
<kelvinliu> refresh, haha
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/10963 to add k8s python client to operator pod; +1 plz
<achilleasa> manadart: when you 've got a bit of time can you please take a look at https://github.com/juju/juju/pull/10964?
<manadart> achilleasa: Yep.
<achilleasa> manadart: the aliasing was intentional to avoid confusion. Initially I tried to replace juju/network with core/network so we could reference the types as network.InterfaceInfo but it looked like a mess esp between files and their _test siblings so I decided to have consistent aliasing everywhere
<manadart> achilleasa: We can live with it for now.
<achilleasa> btw, I've noted this in the PR description. It should make renaming easier when network eventually gets phased out in favor of core/network
<manadart> stickupkid: As discussed: https://github.com/juju/juju/pull/10965
<nammn_de1> stickupkid  initial pr for cmr saas migration: https://github.com/juju/juju/pull/10967 after your pr is merged i can test whether this would work. At least it is finding the correct controller/model and redirects correctly
<stickupkid> nammn_de1, i'm not here tomorrow, so it might not be till next week, i'm almost done on applicationOffers, the release got in the way
<stickupkid> nammn_de1, still around?
#juju 2019-11-29
<anastasiamac> wallyworld: PTAL when/if u get a chance https://github.com/juju/juju/pull/10968
<wallyworld> looking
<nammn_de1> manadart mind taking a look at that? https://github.com/juju/juju/pull/10943 stickupkid said he approved on trello but maybe he forgot on gh
<manadart> nammn_de1: Sure.
<nammn_de1> manadart: thanks for the review
<manadart> nammn_de1: Sure.
<nammn_de1> the reason I had 2 getters was
<nammn_de1> that the migration/firewallrules interface
<nammn_de1> "needs" a wellknownservice of type string
<nammn_de1> wanted to follow how we right now seem to do the migrations
<nammn_de1> I could alternatively remove the migrations/firewallRules file and do everything in migration_import
<nammn_de1> manadart: https://github.com/juju/juju/blob/70869ebce8e8f914a5ff68c410d7a2f5434a61ce/state/migrations/firewallRules.go#L15
<achilleasa> nammn_de1: ^^ could you please snake case that file?
<achilleasa> (name)
<nammn_de1> achilleasa: oh yeah
<nammn_de1> phew
<nammn_de1> didn't even realize
<manadart> nammn_de1: If you need that to be a string in order not to depend on the type in state, move the type and consts to core/firewall.
<manadart> I think the firewall type itself could go there, but there's no need to do that at this point.
<nammn_de1> manadart: ahh that makes sense. Didn't even take a proper look at core
<manadart> nammn_de1: firewall doesn't exist in core; just create it.
<nammn_de1> manadart: oh, i meant the general concept of core. Just read through the doc.go :D
<nammn_de1> manadart: this should be reviewable again https://github.com/juju/juju/pull/10943
<nammn_de1> + added a comment on your comment https://github.com/juju/juju/pull/10967#discussion_r352162753 Maybe I couldn't fully follow what you meant
<hallback> At my previous company (Scania, Sweden) we've been using reactive charms on CentOS 7 for almost a year now in production. This has required some small modifications on charmhelpers and layer-basic (PR made by Erik Lönroth already), and I decided to share that today: https://github.com/juju/charm-helpers/pull/400
<rick_h> hallback:  awesome, I've been wanting to catch up on that to see what gaps there were
<rick_h> hallback:  good stuff, look forward to giving it a go
<pmatulis> er, on 2.7 is it normal for 'juju credentials' to hang?
#juju 2019-11-30
<hallback> rick_h: thanks, sounds good! there are more things that can be done most likely. These are just minimal modifications to get things up and running.
